\section{Introduction}
Low-temperature physics of correlated materials is often characterized by the competition between ordered phases and unconventional superconductivity.
Typically, a static mean-field description, which assumes negligible fluctuations beyond the boundaries of the ordered phase, is not valid in these systems. Nearly all dynamical probes show strong order-parameter fluctuations, not only in the neighboring superconducting phases, which suggests a natural pairing mechanism, but also in the strange metal present at higher temperatures. Lithium purple bronze (LiPB) adds the ingredient of quasi-one-dimensionality to the problem and suggests that charge and spin fluctuations alone, without actual long-range order, might be responsible for superconductivity and for the anomalies of the normal phase.
The metallic phase of LiPB, with chemical formula Li$_{0.9}$Mo$_6$O$_{17}$, has been characterized as a robust Luttinger liquid (LL) in a series of angle-resolved photoemission spectroscopy (ARPES) experiments spanning different temperature regimes, sample growth techniques, photon energies, and data analysis procedures \cite{denlinger,gweon2001,gweon2002,gweon2003,gweon2004,SolidStateAllen,wang2006,wang2009,dudy2013}. STM spectroscopy \cite{cazalilla2005,Matzdorf2013} shows a LL single-particle density of states, and thermal and electric transport measurements are in complete disagreement with the Wiedemann-Franz law \cite{hussey2011}. When the temperature is decreased, an upturn of the resistivity occurs at $T_m \sim 20$ K \cite{Greenblatt,filippini1989,Choi,hussey2011} and the material becomes superconducting at lower temperatures, around $T_c \sim 1$ K \cite{filippini1989,mercure2012}.
Unlike other low-dimensional bronzes, the resistivity upturn of LiPB \cite{dumas1985} is not associated with a lattice distortion (see Table 1 in Ref. \onlinecite{VanderSmaalen}). Neither thermal expansion \cite{dossantos2007} nor neutron scattering experiments \cite{daluz2011} have identified a phase transition at $T_m$, suggesting a soft crossover of electronic nature. No gap has been clearly observed in the spectroscopies, but optical conductivity measurements \cite{Choi} suggest the presence of a weak pseudogap. Recently, thermopower \cite{Cohn} and NMR \cite{Wu} experiments have confirmed different aspects of the quasi-one-dimensionality of this material, but the nature of the upturn remains a mystery.
The most recent study of superconducting properties \cite{mercure2012} confirms quantitatively that the large anisotropies observed in the upper critical field agree with those expected from the electrical resistivity in the metallic phase. The coherence lengths perpendicular to the chains are larger than the interchain distances, and $H_{c2}$ increases monotonically with decreasing temperature to values five times larger than the estimated paramagnetic pair-breaking field. Neither spin-orbit scattering nor strong-coupling superconductivity seems to explain this behavior, suggesting the possibility of spin-triplet superconductivity.
A quantitative comparison with experiments \cite{lebed} shows that superconductivity can be destroyed through
orbital effects at fields higher than the Clogston paramagnetic limit {\em provided} that the superconducting pairs are in the triplet state.\\
In recent years there has been an important theoretical effort \cite{merino2012,chudzinsky2012,JMandJV,nuss} to reduce the complexity of the unit cell
to microscopic Hamiltonians reproducing different aspects of this phenomenology. In this article, we present a microscopic theory for the unconventional superconducting properties observed in
Li$_{0.9}$Mo$_6$O$_{17}$. Based on a minimal extended Hubbard model introduced in Refs. \onlinecite{merino2012,chudzinsky2012},
we show that Li$_{0.9}$Mo$_6$O$_{17}$ superconducts in the triplet channel when charge and spin fluctuations are enhanced,
which may also be related to the upturn in resistivity at $T_m$ \cite{JMandJV}. Using the random phase approximation (RPA), we identify the CDW pattern characterized by two ordering wave vectors, ${\bf Q_1}$ and ${\bf Q_2}$. In the proximity of those phases we evaluate and analyze the superconducting vertex, finding dominant $p$-wave triplet superconductivity with nodes on the Fermi surface. Within our methodology we
find results compatible with those presented in a very recent preprint \cite{TripletLiPB}.
\begin{figure}
\epsfig{file=ModelFig1_2.jpg,width=8cm}
\caption{(Color online)
Schematic crystal structure of Li$_{0.9}$Mo$_6$O$_{17}$ projected onto the $b$-$c$ plane, showing only the partially filled Mo atoms forming the zig-zag ladders relevant to the low-energy electronic properties. Our choice of unit cell is highlighted and the orbitals are numbered (solid line) according to the text; the hoppings (dotted line) and the Coulomb interactions (dashed line) are also represented.
}
\label{fig1}
\end{figure}
\section{Microscopic model.}
\label{sec:model}
The electronic structure close to $E_F$ and the quasi-one-dimensionality of the system derive from two parallel zig-zag Mo-O chains per unit cell \cite{onoda1987} (Fig. \ref{fig1}). Tight-binding \cite{wangbo} and DFT \cite{popovic2006} band structure calculations agree that the Mo-O orbitals of the chain give rise to four bands, two of which cross the Fermi level.
ARPES confirms the quasi-one-dimensionality of the Fermi surface. A Slater-Koster
tight-binding parametrization of the system has been proposed in Ref. \onlinecite{merino2012}, and the role of long-range Coulomb couplings
in the anomalies of the metallic phase has also been studied \cite{JMandJV}.
Here, we consider a strongly correlated model, which can capture the essential physics of Li$_{0.9}$Mo$_6$O$_{17}$ \cite{merino2012}, consisting of an extended Hubbard lattice with four Mo atoms per unit cell, which reads:
\begin{equation}
H=H_0+H_U,
\label{eq:model}
\end{equation}
where $H_0$ is the non-interacting tight-binding Hamiltonian.
The one-electron Hamiltonian can be expressed in terms of Bloch waves
with the following non-zero matrix elements, the intra-ladder hoppings: $t_{12}({\bf k})=t_{43}({\bf k})=t_\perp=-0.024$ eV and $t_{14}({\bf k})=t_{23}({\bf k})=t=0.5$ eV, and the hopping among chains: $t_{13}({\bf k})=t'=0.036$ eV, as shown in Fig. \ref{fig1} (dotted lines).
The diagonalized Hamiltonian: $H_0=\sum_{{\bf k}\mu \sigma} \epsilon_\mu({\bf k} ) d^\dagger_{{\bf k} \mu \sigma} d_{{\bf k} \mu \sigma}$,
leads to four bands denoted by $\mu$, the two lowest of which cross $E_F$ \cite{popovic2006,merino2012,JMandJV}. The Fermi surface,
close to quarter filling, $n=0.225$, is shown in Fig. \ref{chi0}(a).
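As an illustration (not the authors' code), the band structure of $H_0$ can be obtained by diagonalizing a $4\times 4$ Bloch matrix built from the quoted hopping amplitudes; the $k$-dependent phase factors below are an assumption for a generic zig-zag ladder geometry, not the paper's exact parametrization:

```python
import numpy as np

# Sketch: diagonalize a 4x4 Bloch Hamiltonian with the quoted hoppings.
# The phase factors are assumed; lattice constants b, c set to 1.
t, t_perp, t_prime = 0.5, -0.024, 0.036  # eV, from the text

def h_bloch(kb, kc):
    """Assumed Bloch Hamiltonian for the 4 Mo orbitals of the unit cell."""
    H = np.zeros((4, 4), dtype=complex)
    H[0, 3] = H[1, 2] = t * (1 + np.exp(1j * kb))   # intra-chain, t_14 = t_23
    H[0, 1] = H[3, 2] = t_perp                      # intra-ladder rung, t_12 = t_43
    H[0, 2] = t_prime * (1 + np.exp(1j * kc))       # inter-ladder, t_13
    return H + H.conj().T

bands = np.linalg.eigvalsh(h_bloch(0.3, 0.1))
print(len(bands))  # four bands; the two lowest cross E_F near quarter filling
```

Scanning $k_b$ at fixed $k_c$ with such a sketch reproduces the quasi-one-dimensional dispersion: the bands move strongly with $k_b$ and only weakly with $k_c$.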
The Coulomb interaction terms in the Hamiltonian
include the on-site Hubbard interaction ($U$), intra-ladder interactions with the non-zero matrix elements $V_{14}=V_{23}=V_{\parallel}$ and $V_{12}=V_{34}=V_{\perp}$, and inter-ladder interactions $W_{13}=W$ and $W_{12}=W_{34}=W_\perp$, as shown in Fig. \ref{fig1} (dashed lines).
\begin{multline}
H_U = U\sum_{l,i,\alpha} n^{(l)}_{i \alpha \uparrow} n^{(l)}_{i \alpha\downarrow}
+\sum_{l,i,\alpha,j,\beta}V_{i \alpha,j \beta} n^{(l)}_{i \alpha} n^{(l)}_{j \beta}
\\
+\sum_{l,i, \alpha,j, \beta}W_{i \alpha,j \beta} n^{(l)}_{i \alpha} n^{(l+1)}_{j \beta}
\\
\label{eq:hu}
\end{multline}
The interacting Hamiltonian only includes density-density Coulomb contributions.
Within this work we have considered several combinations of parameters, all of them leading to essentially the same results; here we reduce the parameter space to two variables ($U$ and $V$). We take the Coulomb interaction between different sites to decay as $1/|{\bf r}|$, where $|{\bf r}|$ is the distance between orbitals. Therefore, we parametrize the interactions by weighting the $V$'s with the interatomic distances: $V=V_\parallel r_\parallel=V_\perp r_\perp=W r_W=W_\perp r_{W\perp}$.
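The single-parameter scheme above can be sketched as follows; the distances used here are placeholders, not the material's actual interatomic distances:

```python
# Sketch of the single-parameter Coulomb scheme: each off-site coupling
# is V divided by its interatomic distance, so that
# V = V_par * r_par = V_perp * r_perp = W * r_W = W_perp * r_Wperp.
def couplings(V, distances):
    """Return the individual couplings given V and the distances (assumed)."""
    return {name: V / r for name, r in distances.items()}

dist = {"V_par": 1.0, "V_perp": 1.4, "W": 2.0, "W_perp": 2.4}  # placeholder units
print(couplings(0.7, dist)["V_par"])  # 0.7
```

With this choice, a single axis $V$ in the phase diagram fixes all four off-site couplings at once.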
\begin{figure}
\begin{tabular}{cc}
\includegraphics[width=40mm,height=40mm]{FS.jpg}
\includegraphics[width=50mm,height=40mm]{chi0T1em4.jpg}
\end{tabular}
\caption{(Color online)
(a) Fermi surface with two bands, showing the nesting vector ${\bf Q_1}$ and the vector ${\bf Q_2}$ referred to in the text. (b) Real part of the bare susceptibility in momentum space for $\omega=0$ and $q_a=0$. Notice that the maximum reveals the warping of the Fermi surface at the nesting vector.
\label{chi0}
\end{figure}
\section{Multiorbital RPA approach}
In this section we explain the multi-orbital random phase approximation (RPA) approach for this model. We study spin and charge ordering based on the spin and charge susceptibilities, respectively, and the superconducting vertex based on projections onto different order parameters.
\subsection{Spin susceptibility}
The RPA spin susceptibility reads\cite{maier2009}:
\begin{multline}
(\chi_s)_{\alpha,\beta} ({\bf q})=(\chi_0)_{\alpha, \beta}({\bf q})\\
+\sum_{\alpha ' \beta '}(\chi_s)_{\alpha ' \beta '}({\bf q})(U_s)^{\alpha ' \beta '}(\chi_0)_{\alpha \beta}({\bf q})
\label{eq:chiS}
\end{multline}
where the indices $\alpha,\beta$ refer to the four Mo $d_{xy}$ orbitals present in the unit cell. This is the most general case for density-density interactions. In our case the spin interaction is a momentum-independent diagonal matrix, $(U_s)_{\alpha\beta}=U \delta_{\alpha,\beta}$.
The non-interacting susceptibility, $\chi_0$, reads:
\begin{widetext}
\begin{equation}
(\chi_0)_{\alpha,\beta}({\bf q}, i\omega)=-{1 \over N} \sum_{\bf k, \mu, \nu} {a^\alpha_\mu({\bf k}) a^{\beta *}_\mu({\bf k}) a^{\beta}_\nu({\bf k + q}) a^{\alpha *}_\nu({\bf k +q}) \over i\omega +
\epsilon_\nu({\bf k +q}) -\epsilon_\mu({\bf k}) } [ f(\epsilon_\nu({\bf k+q}))-f(\epsilon_\mu({\bf k}) ) ] ,
\label{eq:chi0}
\end{equation}
\end{widetext}
where $N$ is the number of lattice sites, and $\nu,\mu$ are band indices. The matrix elements $a^\alpha_\mu({\bf k})=\langle \alpha | \mu {\bf k} \rangle $ are
the coefficients of the eigenvectors diagonalizing $H_0$.
\subsection{Charge susceptibility}
The RPA charge susceptibility reads\cite{maier2009}:
\begin{multline}
(\chi_c)_{\alpha,\beta} ({\bf q})=(\chi_0)_{\alpha, \beta}({\bf q})\\
-\sum_{\alpha ' \beta '}(\chi_c)_{\alpha ' \beta '}({\bf q})(U_c)^{\alpha ' \beta '}({\bf q})(\chi_0)_{\alpha \beta}({\bf q})
\label{eq:chiC}
\end{multline}
where ${U}_c({\bf q})$ is the Coulomb matrix appearing in Eq. (\ref{eq:hu}) expressed in momentum space, $({U}_c)_{\alpha \beta}({\bf q})=U\delta_{\alpha,\beta}+2\hat V({\bf q})_{\alpha,\beta}$,
where $\hat V({\bf q})$ is the Fourier transform of the $V_{i \alpha,j \beta}$ and $W_{i \alpha,j \beta}$ interactions in real space.
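The Dyson-like equations (\ref{eq:chiS}) and (\ref{eq:chiC}) are solved in practice by matrix inversion at each $({\bf q},\omega)$. A toy resummation, with a made-up positive-definite matrix standing in for $\chi_0$ (it is not the model's susceptibility), illustrates the opposite sign of the two channels:

```python
import numpy as np

# Toy RPA resummation: chi_s = chi0 (1 - U_s chi0)^{-1},
#                      chi_c = chi0 (1 + U_c chi0)^{-1}.
# chi0 is a placeholder 4x4 bare susceptibility at one (q, omega=0) point.
chi0 = 0.5 * np.eye(4) + 0.1 * np.ones((4, 4))   # assumed, positive definite
U, Vq = 0.3, 0.05                                 # assumed couplings
Us = U * np.eye(4)                                # spin channel: diagonal in orbitals
Uc = U * np.eye(4) + 2 * Vq * np.ones((4, 4))     # charge channel: U + 2*V(q)

chi_s = chi0 @ np.linalg.inv(np.eye(4) - Us @ chi0)
chi_c = chi0 @ np.linalg.inv(np.eye(4) + Uc @ chi0)
# Spin fluctuations are enhanced and charge fluctuations screened relative
# to chi0, until one channel diverges at its critical coupling.
```

The divergence of either inversion (a unit eigenvalue of $U_s\chi_0$ or $-U_c\chi_0$) is the criterion used below to locate the SDW and CO boundaries.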
\subsection{Superconducting Vertex}
Assuming that the pairing interaction arises from the exchange of spin and charge fluctuations, we can calculate the
pairing vertex within the RPA (for a detailed description of the method see, for instance, Ref. \onlinecite{maier2009}). The strength of the interaction is weighted by $\omega^{-1}$, so making use of the Kramers-Kronig relation we only need the zero-frequency vertex \cite{maier2009}. For the multiorbital case \cite{Takimoto,japoneses}, the singlet and triplet pairing vertices at zero frequency are given by:
\begin{widetext}
\begin{equation}
\Gamma_{\alpha \beta}^{\mathrm{singlet}}({\bf k,k'})= \left ( U+\frac{3}{2}U_s \chi_s({\bf k-k'})U_s+\hat V({\bf k-k'})
-\frac{1}{2}U_c({\bf k-k'}) \chi_c({\bf k-k'})U_c({\bf k-k'}) \right )_{\alpha \beta}
\end{equation}
\begin{equation}
\Gamma_{\alpha \beta}^{\mathrm{triplet}}({\bf k,k'})= \left ( -\frac{1}{2}U_s \chi_s({\bf k-k'})U_s+\hat V({\bf k-k'})
-\frac{1}{2}U_c({\bf k-k'}) \chi_c({\bf k-k'})U_c({\bf k-k'}) \right ) _{\alpha \beta}
\label{GTriplet}
\end{equation}
\end{widetext}
We transform the vertex from orbital space ($\alpha \beta$) into band space ($\mu \nu$) using the eigenvector coefficients $a^\alpha_\mu({\bf k})$. The Cooper pairs have incoming momenta (${\bf k}$, ${\bf -k}$) and outgoing momenta (${\bf k'}$, ${\bf -k'}$). We take the symmetric and antisymmetric parts for the singlet and triplet channels, respectively.
\begin{widetext}
\begin{equation}
\Gamma_{\mu \nu}^{\mathrm{singlet}}({\bf k,k'})=\sum _{\alpha \beta}
a^{\alpha*}_\mu({\bf -k})a^{\alpha*}_\mu({\bf k})
\mathrm{Real}\left [ \Gamma_{\alpha \beta}^{\mathrm{singlet}}({\bf k,k'}) \right ]
a^{\beta}_\nu({\bf k'})a^{\beta}_\nu({\bf -k'})+
({\bf k'}\leftrightarrow {\bf -k'})
\label{pairingVertexs}
\end{equation}
\begin{equation}
\Gamma_{\mu \nu}^{\mathrm{triplet}}({\bf k,k'})=\sum _{\alpha \beta}
a^{\alpha*}_\mu({\bf -k})a^{\alpha*}_\mu({\bf k})
\mathrm{Real}\left [ \Gamma_{\alpha \beta}^{\mathrm{triplet}}({\bf k,k'}) \right ]
a^{\beta}_\nu({\bf k'})a^{\beta}_\nu({\bf -k'})-
({\bf k'}\leftrightarrow {\bf -k'})
\label{pairingVertext}
\end{equation}
\end{widetext}
We solve the gap equation by projecting out $s$-, $p$-, $d$- and $f$-waves \cite{Scalapino89}:
\begin{equation}
\lambda_\gamma=-\frac{\sum _{\mu \nu}\int_{FS} \frac{\mathit{d^2{\bf k'_\mu}}}{|v_F({\bf k'_\mu})|}\int_{FS} \frac{\mathit{d^2{\bf k_\nu}}}{|v_F({\bf k_\nu})|} g_\gamma({\bf k'_\mu}) \Gamma_{\mu \nu}^{\mathrm{P}}({\bf k,k'}) g_\gamma({\bf k_\nu})}{\sum_\mu \int_{FS} \frac{\mathit{d^2{\bf k_\mu}}}{|v_F({\bf k_\mu})|} g^2_\gamma({\bf k_\mu})}
\label{lambdaEq}
\end{equation}
where $\gamma$ labels the different projected waves ($s$, $p$, $d$ or $f$) and $\mathrm{P}$, which can be $\mathrm{singlet}$ or $\mathrm{triplet}$, depends on the symmetry of $\gamma$. The gap equation has a solution when $\lambda_\gamma$ reaches 1. We increase the interaction parameters until the dominant wave solves the equation; for stronger interactions the gap is already open in that channel.
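The projection in Eq. (\ref{lambdaEq}) can be sketched numerically for a single band with a model vertex; the separable form $\Gamma({\bf k},{\bf k'})=-\cos(k-k')$ is a stand-in for the computed RPA vertex, and the Fermi-velocity weights are taken constant:

```python
import numpy as np

# Sketch of the lambda projection on a discretized 1-band Fermi surface.
k = np.linspace(-np.pi, np.pi, 200, endpoint=False)
w = np.ones_like(k)                       # 1/|v_F| weights, constant here
Gamma = -np.cos(k[:, None] - k[None, :])  # model vertex, attractive in p-wave

def lam(g):
    """Project the vertex onto the trial form factor g(k)."""
    num = (w * g) @ Gamma @ (w * g)
    den = np.sum(w * g**2)
    return -num / den

lams = {"s": lam(np.ones_like(k)), "p": lam(np.sin(k))}
# The dominant channel is the one whose lambda first reaches 1 as the
# interaction grows; for this model vertex only the p-wave is attractive.
```

For the separable vertex above the $s$-wave projection vanishes by symmetry while the $p$-wave eigenvalue is positive, mirroring how the sign structure of $\Gamma^{\mathrm{P}}$ selects the pairing symmetry.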
\section{Phase Diagram}
\begin{figure}
\begin{tabular}{cc}
\includegraphics[width=80mm]{PD_escaladoMasRoss.jpg}
\end{tabular}
\caption{(Color online)
(a) $U$-$V$ phase diagram. An extended region of $p_y$ superconductivity with nodes on the Fermi surface (inset) lies close to the CO region.
In the inset we show the $d_{x^2-y^2}$ and $p_y$ wave functions.
\label{PD}
\end{figure}
\begin{figure}
\includegraphics[width=42mm]{VqSum.jpg}
\includegraphics[width=42mm]{EigReXCU=04V7FS.jpg}\\
\caption{(Color online) Left: momentum space distribution of the interaction, we show the sum of all components. Right: The same figure as Fig. \ref{ChargeOrderEig} (bottom right), in those momentum relevant to the pairing vertex.}
\label{ChargeOrder}
\end{figure}
\begin{figure}
\includegraphics[width=42mm]{XCQxvsW_U02_5.jpg}
\includegraphics[width=42mm]{XCQxvsW_U04_6.jpg}\\
\includegraphics[width=42mm]{EigReXCU=02.jpg}
\includegraphics[width=42mm]{EigReXCU=04V7.jpg}\\
\caption{(Color online) Top. Imaginary part of the larger eigenvalue of the charge susceptibility near the critical value (See Fig. \ref{PD}) for $U=0.2$ and ($U=0.3$) in left and (right) panel. Bottom. Real part of the larger eigenvalue in momentum space and zero frequency close to the critical value for $U=0.2$ and ($U=0.3$) in left and (right) panel.}
\label{ChargeOrderEig}
\end{figure}
\begin{figure}
\begin{tabular}{cc}
\includegraphics[width=80mm]{CMQ2.jpg}
\end{tabular}
\caption{(Color online)
Square of the gap of the ${\bf Q_2}$ critical mode, scaled with the critical interaction $V=V_c$. We observe a $\frac{1}{2}$ exponent near the critical value.
\label{CMQ2}
\end{figure}
Using the parametrization described in Section \ref{sec:model} we can study the complete parameter space, reduced to the two variables $U$ and $V$.
The RPA spin susceptibility (Eq. \ref{eq:chiS}) diverges at $U=0.47$, indicating a spin density wave (SDW) phase.
The RPA charge susceptibility (Eq. \ref{eq:chiC}) diverges for different momenta for different $U$ on-site Hubbard interaction, leading to different charge order regions in the phase diagram (See Fig. \ref{PD}).
The divergence of the charge susceptibility results from an interplay between the bare susceptibility, strongly peaked at $q_b\approx \pi$ (Fig. \ref{chi0}(b)), and the charge interaction $U_c(\bf{q})$. The analysis involves $4 \times 4$ terms but can in essence be understood from the sum of the $16$ contributions.
We observe that $U_c(\bf{q})$ (Fig. \ref{ChargeOrder}, left) is minimal at the $(2\pi,2\pi)$ edge of the Brillouin zone; note that periodicity is not required since we are dealing with the sum of the elements of a matrix. In the figure red means positive and blue negative; for that reason, at $U=0$, among all nesting vectors ($q_b\approx \pi$), $\bf Q_1$ diverges first. The divergence at this momentum stems from nesting.\\
As $U$ increases, $U_c(\bf{q})$ remains negative in a smaller region, leading to the displacement of the divergence to $\bf Q_2$. Why $\bf Q_2$ barely changes with $U$ can be understood from the structure of the bare susceptibility $\chi_0$. Across the Brillouin zone (only $q_b$ matters), $\chi_0$ can be divided into three zones: $0<q_b \lesssim 0.1\pi/b$, $0.1\pi/b \lesssim q_b \lesssim 0.6\pi/b$ and $0.6\pi/b \lesssim q_b<\pi/b$ (and symmetric regions). In the first zone the susceptibility increases sharply due to the warping: increasing $q_b$ connects more points of the Fermi surface. In the second region the system only has access to the Fermi sheets on one side, so the particle-hole susceptibility increases slowly. In the third region connections between the two pairs of sheets again give a rapid enhancement. In our case, for the range of $U$ below the SDW ordered phase, the negative part of $U_c(\bf{q})$ falls in the second region of the bare susceptibility; since this region depends slowly on ${\bf q}$, we observe a minimal change of $\bf Q_2$ with increasing $U$. The divergence of the charge susceptibility at this momentum is due to interactions, and the softening can be described as a critical mode similar to the one found in Ref. \onlinecite{JMandJV}. In Fig. \ref{CMQ2} we see a critical exponent of $\frac{1}{2}$. \\
The transition from the ${\bf Q_1}$ to the ${\bf Q_2}$ ordered phase is also shown in Fig. \ref{ChargeOrderEig}. The upper panels show the frequency-momentum structure of the charge susceptibility (maximum eigenvalue, which is significantly larger than the other three); the lower panels show the maximum eigenvalue at zero frequency. In the left-hand panels $U=0.2$, while in the right-hand panels $U=0.4$. We observe a change in the spectral weight of the collective mode from ${\bf Q_1}$ to ${\bf Q_2}$ when $U$ is increased. Moreover, while the weight at ${\bf Q_1}$ exists at any value of $V$, at ${\bf Q_2}$ the mode softens, signaling the proximity to the transition.\\
Near the SDW region we find superconductivity in the $d_{x^2-y^2}$ channel. This behavior is consistent with that expected for a quasi-one-dimensional square lattice at quarter filling \cite{japDwave}. A very recent preprint \cite{TripletLiPB} proposes an order parameter with a different sign in each band and a total of three nodal planes in the $b$-direction (and two more in the $c$-direction) at $V=0$. We skip the bracketed description and project this wave (here called $f_x$) with our methodology. The results (Fig. \ref{PD}) show that the $d$- and $f$-channels are very close, with $f_x$ dominating.
Since experiments do not show signatures of an SDW gap opening or of a magnetic response \cite{Matsuda}, we can work with lower $U$ values to avoid strong spin fluctuations. The Coulomb interactions are comparable with those of Ref. \onlinecite{nuss}.
\begin{figure}
\begin{tabular}{cc}
\includegraphics[width=40mm]{coexistenciaU2Arreglo.jpg}
\includegraphics[width=40mm]{coexistenciaU4Arreglo.jpg}
\end{tabular}
\caption{(Color online)
The background represents the charge fluctuations near ${\bf Q_1}$; on top of it we plot the CO transition line (green) and the superconducting transition line (blue) for $U=0.2$ and $U=0.4$ (right and left panels, respectively). The dashed black line is the CO transition due to $\bf Q_1$ if the other order were not present.
}
\label{coexistencia}
\end{figure}
Near the CDW or CO regions of the phase diagram we find triplet superconductivity in the $p_y$ channel, with nodes on the Fermi surface.
Near the $\bf Q_1$ CDW region, we find a narrow stripe \cite{GraserFootNote} of superconductivity driven by charge fluctuations at $\bf Q_1$. We observe that $\bf Q_1$ is a nesting vector connecting the whole Fermi surface with different phases of the order parameter (see inset of Fig. \ref{PD}).\\
From this study, it might appear that we can design the interactions (Fig. \ref{ChargeOrder}, left) to be minimal at a given momentum, in such a way as to favor superconductivity with a certain order parameter. Nevertheless, we need to take into account the bare susceptibility structure and the orbital distribution in real space. In the present model, the bare susceptibility is peaked at $q_b\approx\pi$, and the divergence at $\bf Q_1$ is favored by the perpendicular Coulomb interactions $V_\perp$ and $W_\perp$, whereas interactions along the chains do not distinguish momenta in the $b$-direction.
\\
The $\bf Q_2$ momentum is not involved in the vertex calculation (Eq. \ref{lambdaEq}), so we are still able to work with the superconducting vertex since it has no divergences. In that region, strong charge fluctuations still persist at $\bf Q_1$ due to nesting, and superconductivity would be found if the other charge-ordered phase were not present.
\\
\subsection{Coexistence in the model}
If the order parameter of the CO is small, and assuming that the $\bf Q_2$ modulation does not open a gap at $E_F$,
we now consider the possibility of coexistence with SC in this model, even though it may not be relevant for the material.
We study the coexistence region as a function of temperature (see Fig. \ref{coexistencia}). The charge fluctuations have a reentrant behavior in the RPA approach \cite{merino2006}, due to the fact that the bare susceptibility $\chi_0({\bf q},\omega)$ is maximal in energy ($\omega\ll t$) when ${\bf q}$ connects different points of the band structure approximately $\omega$ away from the Fermi level. In that case $\bf Q_1$ is a nesting vector and the maximum is close in energy, $\omega=0.01\approx T$. However, $\bf Q_2$ exhibits its reentrance peak at a larger energy, and we only see the decrease of the critical $V$ with temperature. In Fig. \ref{coexistencia}(a) the critical momentum changes from $\bf Q_1$ at low temperatures to $\bf Q_2$. We observed that change in the charge susceptibility, and it is represented by the change of behavior of the CO line. The temperature makes the bare susceptibility softer, lowering the value of $\chi_0({\bf Q_1})$ and shifting the critical momentum to $\bf Q_2$.
\begin{figure}
\begin{tabular}{cc}
\includegraphics[width=40mm]{GapVswn.jpg}
\includegraphics[width=40mm]{ConductanciaE.jpg}
\end{tabular}
\caption{(Color online)
Left: gap in Matsubara frequency ($i\omega_n$) at $T=0.01$, $U=0.2$ and $V=0.676$ (green line), together with a Lorentzian fit (blue line).
Right: conductance against energy for different superconducting order parameters; the $p_y$ wave can be distinguished. $\alpha$ is the angle of the order parameter with the junction and $Z$ is the height of the tunnel barrier, in this case the insulating phase \cite{tanaka}.
}
\label{MVI}
\end{figure}
\section{Eliashberg equation with a reduced vertex}
In the previous section we have shown that the superconducting vertex is dominated by the charge susceptibility near $\bf Q_1$ (see Eq. (\ref{lambdaEq})). It turns out that using just a few vertex momenta we reproduce the value of $\lambda_{p_y}$. However, we cannot easily reduce the four-orbital model to a simpler tight-binding model, since near $\bf Q_1$ the bands have similar weight on the four orbitals. For that reason, we select the largest values of the pairing vertex calculated with the four-orbital model (Eqs. \ref{pairingVertexs},\ref{pairingVertext}); as a result, we see that less than 10\% of the vertex is enough to obtain more than 90\% of the $\lambda_{p_y}$ value. In order to reproduce $\lambda_{p_y}$ from momenta near $\bf Q_1$ alone we need to multiply the value of the vertex by 4; otherwise we need to include momenta near ${\bf q}=0$, because there the vertex is significantly large and connects many pairs of Fermi surface momenta.
Moreover, we see that the value of the pairing vertex is almost independent of $q_y$. All these simplifications allow us to work with a simpler model and solve the linear Eliashberg equation in Matsubara frequencies, given by:
\begin{widetext}
\begin{equation}
\lambda_ {py} \Sigma({\bf k},i\omega_n)=\frac{-1}{N}
\sum_{{\bf k'}i\omega_n'} G^0({\bf k'},i\omega_n')
\Gamma^{\mathrm{triplet}}({\bf k,k'},i\omega_n-i\omega_n') G^0({\bf- k'},-i\omega_n') \Sigma({\bf k'},i\omega_n')
\end{equation}
\end{widetext}
where
\begin{equation}
G^0({\bf k'},i\omega_n')_{sp}=\sum_{\nu} \frac{a^s_\nu({\bf k'}) a^{p*}_\nu({\bf k'})}{i\omega_n'-\epsilon_\nu({\bf k'})}
\end{equation}
are $4 \times 4$ matrices, and $\Gamma^{\mathrm{triplet}}$ is also a matrix, defined in Eq. (\ref{pairingVertext}) but with the $i\omega_n$ dependence coming from the bare susceptibility (Eq. \ref{eq:chi0} in Eq. \ref{GTriplet}). We calculate the momentum dependence of the gap by projecting onto the $p_y$ order parameter: $\Sigma({\bf k},i\omega_n)=f(i\omega_n)\sin(k_cc)$.
The result (shown in Fig. \ref{MVI}) can be fitted by a Lorentzian plus a constant; analytic continuation with Pad\'e approximants gives the same result, a real constant at the relevant frequencies. Provided the gap value is small, only low frequencies are relevant for experiments such as normal-insulator-superconductor junctions \cite{tanaka} (see Fig. \ref{MVI}(b)).\\
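The Lorentzian-plus-constant fit of the frequency part of the gap can be sketched as follows; the data below are synthetic stand-ins for the Eliashberg output, and the width is found by a one-dimensional scan with the amplitude and offset solved linearly at each trial value:

```python
import numpy as np

# Fit f(i w_n) = a / (w_n^2 + Gamma^2) + c on a fermionic Matsubara grid.
T = 0.01
wn = (2 * np.arange(64) + 1) * np.pi * T           # Matsubara frequencies
f = 0.8 / (wn**2 + 0.05**2) + 0.2                  # synthetic "gap" data (assumed)

def fit(gamma):
    """Least-squares (a, c) and residual for a trial Lorentzian width gamma."""
    X = np.column_stack([1.0 / (wn**2 + gamma**2), np.ones_like(wn)])
    coef, *_ = np.linalg.lstsq(X, f, rcond=None)
    resid = f - X @ coef
    return coef, np.sum(resid**2)

gammas = np.linspace(0.01, 0.2, 96)
best = min(gammas, key=lambda g: fit(g)[1])
# best recovers the width used to generate the synthetic data
```

Because the model is linear in the amplitude and the constant, only the width requires a nonlinear search, which keeps the fit robust on a coarse Matsubara grid.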
\section{Discussion and Conclusions}
As was previously mentioned, Li$_{0.9}$Mo$_6$O$_{17}$ exhibits signatures of Luttinger liquid behavior for a wide range of temperatures.
Thus, it is important to discuss its relation with the physics described above.
In Fig. \ref{Diagrams}, we present a schematic phase diagram for the model and consider its relevance for the physics of LiPB. It merges the renormalization-group estimate of the crossover temperature, the RPA calculations for the CDW, and the fluctuation-exchange treatment of the superconducting vertex. At high temperatures, the metallic phase is a LL. As the temperature goes down, the perpendicular hopping drives the system through a crossover to a Fermi liquid, and the inter-chain Coulomb interactions drive it through a thermodynamic phase transition to a CDW. Our analysis of the SC vertex corresponds to the dashed horizontal line. Since we are working at temperatures well below $T_{LL}$, the use of RPA, as perturbation theory on an essentially free electron system, is well justified as a starting point. In other words, we are able to describe the superconductivity as an instability of a Fermi liquid, in spite of the normal phase of the material being a LL. On the other hand, the behavior of the material as the temperature goes down seems to be represented by the solid vertical line. This statement is based on the spectroscopies \cite{cazalilla2005, dudy2013} at temperatures right above $T_c$: the density of states shows power-law behavior very similar to that observed at much higher temperatures, with similar values of $\alpha$. Placing the material slightly to the left of that vertical line would imply an interesting crossover from one NFL (the LL) to another NFL (FL + strong charge fluctuations). At a purely qualitative level, no evidence of a Fermi edge developing at low temperatures has been observed, and the experimental values of $\alpha$ seem to increase (instead of decrease). However, both alternatives rely on details of the model and should be quantitatively contrasted with the spectroscopies.
The dashed line in Fig. \ref{Diagrams} shows the dimensional crossover from Luttinger liquid to Fermi liquid \cite{Bois,GiamarchiChem}. The small value of the perpendicular hopping suggests treating it as a perturbation. Based on the renormalization group approach, we can estimate the crossover temperature as $T_{LL} \sim t\left(\frac{t_\perp}{t}\right)^{\frac{1}{1-\alpha}}$, where $\alpha$ is the exponent of the single-particle density of states. In Fig. \ref{TLL} we show the estimated dimensional crossover
for Luttinger chains coupled with Hubbard and $V$ interactions. The value of $\alpha$ is computed using interaction parameters $U$ and $V$ following the CO border shown in Fig. \ref{PD}, with $W$ set to zero. Note that the same Coulomb interactions driving the charge ordering allow for large values of $\alpha$. Therefore, we expect $T_{LL}$ to be very small when the CDW is approached. This fact opens the possibility of a direct transition from the LL to the superconducting phase. It would be interesting to study this possibility with techniques similar to those used in Ref. \onlinecite{AlvarezCNT}.
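A numerical reading of the crossover estimate, using the model's hopping values, shows how larger Luttinger exponents $\alpha$ push $T_{LL}$ down, consistent with the discussion above:

```python
# T_LL ~ t * (t_perp / t)^(1 / (1 - alpha)); since t_perp/t < 1, the
# crossover temperature falls rapidly as alpha grows toward 1.
t, t_perp = 0.5, 0.024  # eV (magnitude of the perpendicular hopping)

def T_LL(alpha):
    """Crossover temperature estimate for a given density-of-states exponent."""
    return t * (t_perp / t) ** (1.0 / (1.0 - alpha))

for alpha in (0.1, 0.5, 0.9):
    print(alpha, T_LL(alpha))
```

The steep drop of $T_{LL}$ with $\alpha$ is what makes a direct LL-to-superconductor transition conceivable near the CDW boundary.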
The charge-ordering transition within RPA apparently occurs at arbitrarily large temperatures as $V$ increases, but we expect the slight modifications presented in Ref. \onlinecite{SelfConsistentRPA}, which considers how fluctuation effects modify the Green's function self-consistently, also evaluating vertex corrections. Other details, like the reentrant behavior of the charge-ordering transition, typical of RPA calculations, are unessential for the physics of the system.
\begin{figure}
\begin{tabular}{cc}
\includegraphics[width=80mm]{DiagramCartoon.jpg}
\end{tabular}
\caption{(Color online)
Schematic phase diagram for LiPB. The present study corresponds to the green horizontal arrow; we believe the temperature dependence of the real material is represented by the vertical orange arrow.
}
\label{Diagrams}
\end{figure}
\begin{figure}
\includegraphics[width=80mm]{TLLandAlpha.jpg}
\caption{(Color online)
Dashed: renormalization-group estimate of the crossover temperature for the critical $V$ values found for each $U$. Solid: the exponent in the Luttinger density of states, $\alpha$.
\label{TLL}
\end{figure}
To summarize, we have studied a microscopic extended Hubbard model for LiPB. We have characterized the couplings promoting SC close to different charge-ordering patterns. A detailed analysis of the vertex within the RPA shows triplet superconductivity with nodes on the Fermi surface
close to those ordered phases. The relevance of these results is discussed in terms of the general experimental perspective on the material.
\section*{Acknowledgments.}
We thank J.W. Allen, J. Merino, and L. Taillefer for fruitful discussions.
We acknowledge financial support from MINECO FIS2012-37549-C05-03.
\section{Introduction}
The Navier--Stokes equation for a homogeneous and incompressible
fluid in the whole plane or space, subject to an external force
field $\mathbf{F}$, is given by
\begin{eqnarray}\label{NSF}
\frac{{\partial}\mathbf{u}}{{\partial t}} + (\mathbf{u}\cdot
\nabla
)\mathbf{u}&=&\nu\Delta\mathbf{u}-\nabla\mathbf{p}+\mathbf{F};\nonumber\\[-8pt]\\[-8pt]
\operatorname{div} \mathbf{u}(t,x)&=&0; \qquad\mathbf{u}(t,x)\to0 \qquad\mbox{as }
|x|\to
\infty.\nonumber
\end{eqnarray}
Here, $\mathbf{u}$ denotes the velocity field, $\mathbf{p}$ is the (unknown)
pressure function and $\nu>0$ is the (constant) viscosity
coefficient. When $\mathbf{F}=0$ or, more generally, when $\mathbf
{F}=\nabla\Psi$
is a conservative field, a probabilistic interpretation
of (\ref{NSF}) in space dimension two
was first developed in 1982 by Marchioro and Pulvirenti \cite{MaPu}.
Their approach was based on
the vortex equation satisfied by the (scalar) field $\operatorname{curl} \mathbf{u}$,
which in 2d and for the case of a conservative external field,
was interpreted as a nonlinear Fokker--Planck (or McKean--Vlasov)
equation with signed initial condition. This was associated with
a nonlinear diffusion process in the sense of McKean, involving
singular interactions through the kernel of Biot--Savart. (For a
general background on the McKean--Vlasov model, we refer the reader
to Sznitman \cite{Szn} and M\'{e}l\'{e}ard \cite{Mel0}.) This approach
led them to the definition of a stochastic system of particle or
vortices with ``mollified'' mean field interaction, for which the
time-marginal empirical measures converge to a solution of the
vortex equation associated with (\ref{NSF}). The convergence on
the path space of that particles system (or, equivalently, the
propagation of chaos property) was proved later by M\'{e}l\'{e}ard
in~\cite{Mel1}. Those works provided a rigorous mathematical meaning
of Chorin's vortex algorithm, heuristically proposed in
\cite{Cho0} as a probabilistic method to simulate the solution of
the 2d-Navier--Stokes equation (see also \cite{Cho}).
In dimension 3, the vorticity field $\mathbf{w}=\operatorname{curl} \mathbf{u}$ is a
solution of
the vectorial nonlinear equation
\begin{eqnarray}\label{3DVF}
\frac{{\partial}\mathbf{w}}{{\partial t}} + (\mathbf{u}\cdot
\nabla
)\mathbf{w}&=&(\mathbf{w}\cdot\nabla)\mathbf{u}+\nu\Delta\mathbf
{w}+\mathbf{g},\nonumber\\[-8pt]\\[-8pt]
\operatorname{div} w_0&=&0,\nonumber
\end{eqnarray}
where $\mathbf{g}=\operatorname{curl}\mathbf{F}$ and where the relation
\begin{equation}\label{3BSlaw}
\mathbf{u}(t,x)=\mathbf{K}(\mathbf{w})(t,x):=-\frac{1} {4\pi}\int
_{\mathbb{R}^3}
\frac{(x-y)}{|x-y|^3}\wedge\mathbf{w}(t,y) \,dy
\end{equation}
holds, thanks to the
incompressibility condition $\operatorname{div}\mathbf{u}=0$ and the Biot--Savart law.
Here, $\wedge$ stands for the vectorial
product in $\mathbb{R}^3$, $K(x)\wedge:=-\frac{1}{4\pi}\frac
{x}{|x|^3}\wedge$ is the
three-dimensional Biot--Savart kernel and $\mathbf{K}$ is the Biot--Savart
operator in 3d. (We refer to Bertozzi and Majda \cite{BerMa} for
this and for background on vorticity.)
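For intuition, the operator $\mathbf{K}$ has a direct discrete counterpart, at the heart of Chorin-type vortex methods: replacing $\mathbf{w}(t,y)\,dy$ in (\ref{3BSlaw}) by a finite collection of point vortices turns the integral into a finite sum. A minimal numerical sketch (the point-vortex discretization and all names are our illustration, not part of the results):

```python
import numpy as np

def biot_savart_velocity(x, positions, weights):
    """Velocity induced at x by point vortices located at y_j with vector
    intensities w_j, i.e. the discrete analogue of the Biot--Savart law:
        u(x) = -(1/(4 pi)) sum_j (x - y_j)/|x - y_j|^3  ^  w_j.
    positions: (n, 3) array of the y_j; weights: (n, 3) array of the w_j."""
    d = x - positions                        # (n, 3): displacements x - y_j
    r3 = np.sum(d * d, axis=1) ** 1.5        # |x - y_j|^3
    return -np.sum(np.cross(d / r3[:, None], weights), axis=0) / (4.0 * np.pi)
```

Evaluating at a vortex location blows up like $|x-y_j|^{-2}$, which is one reason mollified kernels $K_\varepsilon$ are introduced later in the paper.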
In absence of external forces, the problem of proving the
approximation of solutions of the 3d-Navier--Stokes equations by a
stochastic system of mean field interacting particles was first
addressed by Esposito and Pulvirenti \cite{EsPu}. In that work, an
approximation result of local solutions by a stochastic system
of three-dimensional vortices with cutoff and mollified
interactions was obtained for each time instant, for initial
vorticities that belonged to $L^1$ together with their Fourier
transform. The convergence held for mollifying parameters that
depended on the realizations of the empirical measures of the
paths of the driving Brownian motions.
Recently, we considered in \cite{F1} the mild version of the
3d-vortex equation with $\mathbf{g}=0$ in the $L^p$ spaces for $p
> \frac{3}{2}$. We proved local (in time) well posedness and regularity
results for that equation, and, under an additional $L^1$
assumption on $w_0$, we showed the equivalence between such
solutions and a generalized nonlinear McKean--Vlasov process with values
in $\mathbb{R}^3\times\mathbb{R}^{3\otimes
3}$ and singular drift term at $t=0$.
We then introduced a system of stochastic 3d vortices with cutoff and
mollified interaction, and
proved the pathwise propagation of chaos property with as limit
the nonlinear process,
deducing moreover stochastic particle approximation results for
the velocity and vorticity fields. (We refer to \cite{FAP} for a
rectification of the discussion in \cite{F1} about the work
\cite{EsPu}.) During the preparation of this work, we have also
become aware of the more recent work of Philipowski \cite{Phi},
who obtained (also in the case $\mathbf{g}=0$) a convergence rate for a
mean field particle approximation of the vorticity field, for a
simpler variation of the system introduced in \cite{F1}. (The
pathwise propagation of chaos property was not addressed.)
In presence of an external force field, the
additional additive term $\mathbf{g}=\operatorname{curl}\mathbf{F}$ in the (2d or
3d) vortex
equation is
physically interpreted as creation of rotation in the fluid. In order to
describe this phenomenon probabilistically, a nonlinear
McKean--Vlasov diffusion process with random space--time
birth was recently
associated with the 2d-vortex equation in Fontbona and M\'{e}l\'{e}ard
\cite{FM}. More precisely, the
law $P_0(dt,dx)$ of the instant and position of birth was suitably defined
in terms of the initial vorticity and of the external field
$\operatorname{curl}\mathbf{F}$, and it was shown that a scalar-weighted version of the
time marginal law of this process after its birth time was equal
to the solution to the 2d-vortex equation (with $L^1$ data) in a
given interval. The propagation of chaos property was established
for an approximating system of interacting vortices, which were
given birth independently at random positions and times following
the law $P_0$, and a pathwise convergence rate was obtained under
slight additional integrability assumptions on the data.
The first purpose of the present paper is to extend the results
of \cite{F1} and \cite{FM} to the 3d-Navier--Stokes equation with
nonconservative external force field. More precisely, fix $T>0$
and assume that $w_0\dvtx\mathbb{R}^3\to\mathbb{R}^3$ and $\mathbf
{g}\dvtx\mathbb{R}^3\times[0,T]
\to\mathbb{R}^3$ are divergence-free $L^1$-fields. Denote by $I_3$ the
identity matrix in $\mathbb{R}^3$ and let $(B_t)$ be a standard
3d-Brownian motion. Our main goal will be to study the well posedness
on $[0,T]$ of the following nonlinear process, with singular
interaction kernel and values in $\mathbb{R}^3\times\mathbb
{R}^{3\otimes3}$:
\begin{eqnarray}\label{3elprocesointro}
X_t &=& X_0+\sqrt{2\nu} \int_0^t \mathbf{1}_{\{s\geq\tau\}}\, dB_s
+\int_0^t
\mathbf{K}(\tilde{\rho})
(s,X_s)\mathbf{1}_{\{s\geq\tau\}}\,ds ,\nonumber\\[-8pt]\\[-8pt]
\Phi_t &=& I_3+\int_0^t \nabla\mathbf{K}(\tilde{\rho})(s,X_s)\Phi
_s \mathbf{1}_{\{
s\geq\tau\}} \,ds,\nonumber
\end{eqnarray}
where: $(\tau,X_0)$ is a
random variable in $[0,T]\times\mathbb{R}^3$ (independent of $B$) with
law
\[
P_0(dt,dx)\propto\delta_0(dt) |w_0(x)| \,dx + |\mathbf{g}(t,x) |\,dx\,dt,
\]
$\tilde{\rho}=\tilde{\rho}(t,x)$ is defined for each $t$ from the
law of $(\tau,X,\Phi)$ as
\begin{equation}\label{lequiv}
\int_{\mathbb{R}^3}\mathbf{f}(y)\tilde{\rho}(t,y)\,dy :=E\bigl(\mathbf{f}(X_t)
\Phi_t
h(\tau,X_0)\mathbf{1}_{\{t\geq\tau\}}\bigr)\qquad \mbox{for } \mathbf
{f}\dvtx\mathbb{R}^3\to
\mathbb{R}^3,
\end{equation}
and $h$ in (\ref{lequiv}) is the density with respect to $P_0$ of
the vectorial measure
$\delta_0(dt) w_0(x)\,dx + \mathbf{g}(t,x)\,dx\,dt$. [We observe that it is
(\ref{3elprocesointro}) \textit{together} with relation
(\ref{lequiv}) that specify a ``nonlinear process'' in McKean's
sense.]
As we shall see, there will exist a correspondence between mild
$L^p(\mathbb{R}^3) \cap L^1(\mathbb{R}^3)$-solutions $\mathbf{w}$ of
(\ref{3DVF}) for
$p>\frac{3}{2}$, and suitable solutions of the nonlinear
stochastic differential equation
(\ref{3elprocesointro}) and (\ref{lequiv}), through the relation
$\mathbf{w}=\tilde{\rho}$. Thus,
(\ref{lequiv}) provides a representation formula for solutions
$\mathbf{w}$ of (\ref{3DVF}) which extends the one obtained in \cite{F1}
when $\mathbf{g}\equiv0$ (or $\tau\equiv0$). In the present case, this
representation can be intuitively understood as follows. A point
vortex is given birth at random instant and position $(\tau,X_0)$,
rotating in direction $h(\tau,X_0)\in\mathbb{R}^3$. It then evolves
under the effect of diffusion and of the velocity field $\mathbf
{K}(\mathbf{w})$
in (\ref{3elprocesointro}), while its rotation direction and
magnitude are changed under the action of the matrix process
$\Phi_t$ which accounts for the vortex stretching proper to
dimension $3$. Averaging the rotation vectors on the position of
infinitely ``already born vortices'' yields a macroscopic
vorticity field $\mathbf{w}(t)=\tilde{\rho}(t)$, weakly defined by
(\ref{lequiv}). The velocity field instantaneously experienced by
each individual vortex is finally recovered from
$\mathbf{w}$ as a mean field effect through the
interaction kernel of Biot--Savart.
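The dynamics just described can be sketched numerically. The following Euler--Maruyama discretization of (\ref{3elprocesointro}) freezes the vortex before its birth time and evolves $(X,\Phi)$ afterwards; here \texttt{velocity} and \texttt{grad\_velocity} stand in for $\mathbf{K}(\tilde{\rho})$ and $\nabla\mathbf{K}(\tilde{\rho})$ and are passed as frozen inputs, whereas in the true nonlinear problem they are determined by the law of the process itself through (\ref{lequiv}). All names are illustrative assumptions of this sketch:

```python
import numpy as np

def simulate_vortex(tau, x0, velocity, grad_velocity, T, n_steps, nu, rng):
    """One Euler-Maruyama path of (X_t, Phi_t) on [0, T], born at (tau, x0).
    velocity(t, x) plays the role of K(rho~)(t, x) and grad_velocity(t, x)
    that of its 3x3 gradient; both are assumptions of this sketch."""
    dt = T / n_steps
    x, phi = np.array(x0, dtype=float), np.eye(3)
    for k in range(n_steps):
        t = k * dt
        if t >= tau:  # the indicator 1_{s >= tau} in both equations
            phi = phi + grad_velocity(t, x) @ phi * dt   # vortex stretching
            x = x + velocity(t, x) * dt \
                  + np.sqrt(2.0 * nu * dt) * rng.standard_normal(3)
    return x, phi
```

In the particle system introduced later, the frozen field is replaced by a mollified empirical interaction, which restores the McKean nonlinearity.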
We will adapt the ideas and analytic techniques in \cite{F1} to
first establish local well-posedness and regularity results for
the mild formulation of the vortex equation. Based on this, we
shall then prove local [i.e., for small enough $T>0$ or data
$(w_0,\mathbf{g})$] pathwise well posedness for the nonlinear stochastic
differential equation
(\ref{3elprocesointro}) and (\ref{lequiv}), which will have singular
drift terms at $t=0$.
We shall then introduce a stochastic system of $n$ particles in
$\mathbb{R}^3\times\mathbb{R}^{3\otimes3}$ (or 3d-vortices) with
cutoff and
mollified interaction kernels, and with random space--time births.
The second goal of this paper will be to prove the
strong pathwise convergence of each of these particles as $n$
goes to $\infty$, towards the nonlinear process, at an explicit
rate. To that end, we will improve the techniques used in
\cite{F1} to study the nonlinear process, which relied on
tightness estimates for approximating processes and martingale
problem characterization. More precisely, by a fine use of
regularity properties of the equation, and inspired by ideas
introduced in \cite{FM}, we will show that the approximating
``mollified processes''
converge pathwise at the same rate at which mollified
versions of the vortex equation converge to the original one. We
will be able to exhibit that rate for a large class of
mollified kernels, thanks to classic regularization
techniques in Raviart \cite{Rav} (which are also similar to those
used in \cite{Phi}). These results will imply the propagation of
chaos in a strong norm and, classically, an explicit rate in some
pathwise Wasserstein distance $\mathcal{W}$. From this we will also
deduce convergence rates for approximation schemes of the
vorticity and velocity fields. Unfortunately, the mollifying
parameter will be required to go very slowly to $0$ as $n$ goes to
$\infty$, which will yield a very slow (but not necessarily
optimal) rate for the particles convergence.
Finally, we point out that our regularity results on the mild
equation in $L^p$ will ensure that the stochastic flow
\begin{equation}\label{3linearflow}
\xi_{s,t}(x)=x+ \sqrt{2\nu}(B_t-B_s)+
\int_s^t\mathbf{u}(r,\xi_{s,r}(x))\,dr
\end{equation}
is of class $C^1(\mathbb{R}^3)$, and so one can write
\begin{equation}\label{formstocflow}
(X_t,\Phi_t)\mathbf{1}_{\{t\geq\tau\}}=(\xi_{\tau,t}(X_0),\nabla_x
\xi_{\tau,t}(X_0))\mathbf{1}_{\{t\geq\tau\}}.
\end{equation}
Equation
(\ref{lequiv}) can thus be seen as a stochastic analog for the
3d-Navier--Stokes equation of the ``Lagrangian representation'' of the
vorticity of the 3d-Euler equation ($\nu=0$) (see, e.g., \cite{ChoMa},
Chapter 1), an analogy established in \cite{EsPu,F1} when
$\mathbf{g}
\equiv0$. Lagrangian representations of the 3d-Navier--Stokes
equations as stochastic analogues to representations formulae for
the Euler equation have been studied by several authors, some of
which have led to (local) well-posedness results for the equation.
See, for example, Esposito et al. \cite{EsMaPuSc} and, for more recent
developments,
Busnello et al. \cite{BuFraRo} and Iyer \cite{I}. The latter
works follow approaches that are in some sense ``dual'' to ours,
establishing representations of strong solutions of the vortex or
Navier--Stokes equations in terms of expectations of the initial
data, after being transported and modified by the stochastic
flow. A related stochastic approach is adopted in Gomes \cite{Go}
to establish a variational formulation of the Navier--Stokes
equation, analogous to Arnold's variational characterization of
the Euler equation. A seemingly very different further
probabilistic point of view, providing global well posedness for
small initial data, was introduced by Le Jan and Sznitman in
\cite{LJS}, who associated with the Fourier transform of the
velocity field a multitype branching process or stochastic
cascade. See, for example, Bhattacharya et al. \cite{Bha} for more recent
developments in that direction.
The remainder of this work is organized as follows. In Section \ref{sec2}
we first present a weak formulation of
(\ref{3elprocesointro}) and (\ref{lequiv}) in terms of a nonlinear
martingale problem, and discuss its connection with
(\ref{3DVF}). In Section \ref{sec3}, we shall obtain local well-posedness
and regularity results for the mild version of the vortex
equation in $L^p$, for $p \in(\frac{3}{2},3)$. In Section \ref{sec4} we
state some results about a nonlinear Fokker--Planck equation with
external field associated with the process with random space--time
birth $X$ in (\ref{3elprocesointro}). We use this and the
previous results to show strong local-in-time well posedness for
the nonlinear stochastic differential equation
(\ref{3elprocesointro}) and (\ref{lequiv}). We, moreover, obtain the
pathwise convergence result and estimates for approximating
mollified versions of that problem. In Section \ref{sec5}, we introduce the
system of 3d-stochastic vortices with random space--time birth, and
deduce the propagation of chaos property and its rate. We also
prove approximation results for the velocity and the vorticity of
the forced 3d-Navier--Stokes equation with their corresponding
convergence rates. In Section \ref{sec6} we shall discuss how these
rates of convergence are slightly improved when Sobolev
regularity of the initial condition and external field is assumed.
Let us establish some notation:
\begin{enumerate}[--]
\item[--] By $\mathcal{M}\mathit{eas}^T$ we denote
the space of measurable real-valued
functions on $[0,T]\times\mathbb{R}^3$.
\item[--] $C^{1,2}$
is the set of real-valued
functions on $[0,T]\times\mathbb{R}^3$ with
continuous derivatives up to the first order in $t\in[0,T]$ and up
to the second order in $x\in\mathbb{R}^3 $. $C_b^{1,2}$
is the subspace of bounded functions in $C^{1,2}$
with bounded derivatives.
\item[--] $\mathcal{D}$ is the space of compactly supported functions
on $\mathbb{R}^3$ with
infinitely many derivatives.
\item[--] For all $1\leq p \leq\infty$ we denote by
$L^p$ the space $L^p(\mathbb{R}^3)$ of real-valued functions on
$\mathbb{R}^3$.
By $\|\cdot\|_p$ we denote the corresponding norm, and $p^*$ stands
for the H\"{o}lder conjugate of $p$. We write
$W^{1,p}=W^{1,p}(\mathbb{R}^3)$ for the Sobolev space of functions in
$L^p$ with partial derivatives of first order in $L^p$.
\item[--] If $E$ is a space of real-valued functions (defined on
$\mathbb{R}^3$ or on $[0,T]\times\mathbb{R}^3$), then the notation
$(E)^3$ is
used for the space of $\mathbb{R}^3$-valued functions with scalar
components in $E$. If $E$ has a norm, the norm in $(E)^3$ is
denoted in the same way.
\item[--] For notational simplicity, if $\mathbf{f},\mathbf
{g}\dvtx\mathbb{R}^3\to\mathbb{R}
^3$ are vector
fields and $Z\dvtx\mathbb{R}^3\to\mathbb{R}^{3\otimes3}$ is a matrix
function, we
will write $\mathbf{f}\mathbf{g}:=\sum_{i=1}^3\mathbf{f}_i\mathbf{g}_i$ and $\mathbf
{f}Z$ for the row-vector
$(\mathbf{f}^tZ)_i:= \sum_{j=1}^3 \mathbf{f}_j Z_{j,i}$. By $\nabla
\mathbf{f}$ we denote\vspace*{1pt}
the gradient of $\mathbf{f}$, that is, the matrix $(\nabla
\mathbf{f})_{i,j}:=\frac{\partial\mathbf{f}_i}{\partial x_j}$. We
will simply
write $(\nabla\mathbf{f}) \mathbf{g}$ for the column-vector $(\sum_j
\frac{\partial\mathbf{f}_i}{\partial x_j}\mathbf{g}_j )_i$ [instead
of the usual
``$(\mathbf{g}\cdot\nabla) \mathbf{f}$''].
\item[--] $C$ and $C(T)$ are finite positive constants that may
change from line to line.
\end{enumerate}
\section{The weak 3d-vortex equation and a probabilistic
interpretation of the external field}\label{sec2}
Let us recall that a vector field $w\dvtx\mathbb{R}^3\to\mathbb{R}^3$ with
components in $\mathcal{D}'$, and such that $\int_{\mathbb
{R}^3}\nabla
f(x)w(x)\,dx=0 $ for all $f\in\mathcal{D}$, is said to have \textit{null
divergence in the distribution sense}. We write this as $\operatorname{div} w= 0$.
If the following two conditions hold, we shall say that $w_0\dvtx\mathbb
{R}^3\to
\mathbb{R}^3$ and
$\mathbf{g}\dvtx\mathbb{R}_+\times\mathbb{R}^3\to\mathbb{R}^3$
satisfy the hypothesis:
\begin{enumerate
\item[$(\mathrm{H}_p)$:]\hypertarget{Hp}
\mbox{}
\begin{itemize}
\item there exists $p\in[1,\infty[$ such that
$w_0 \in(L^p(\mathbb{R}^3))^3$ and $ \mathbf{g}(t,\cdot) \in
(L^p(\mathbb{R}^3))^3$
for all $t\in[0,T]$, and
$ \sup_{t \in[0,T]} \|\mathbf{g}(t,\cdot
)\|_p<\infty$;
\item
$\operatorname{div} w_0=0$ and $\operatorname{div} \mathbf{g}(t,\cdot)=0$ for all $t\in[0,T]$.
\end{itemize}
\end{enumerate}
A necessary assumption for our probabilistic approach will be that
\hyperlink{Hp}{$(\mathrm{H}_p)$} holds with $p=1$. We then denote
\[
\|\mathbf{g}\|_{1,T}:=\int_0^T \int_{\mathbb{R}^3}|\mathbf{g}(s,x)|\,dx \,ds.
\]
In that functional setting, the following notion of solution to
(\ref{3DVF}) will appear to be natural:
\begin{definicion}\label{weakeq} Let $w_0$ and $\mathbf{g}$ satisfy
\hyperlink{Hp}{$(\mathrm{H}_1)$}. A function $\mathbf{w}\in
L^{\infty}([0,T],\break
(L^1(\mathbb{R}^3))^3)$ is a weak solution on $[0,T]$ of the vortex
equation with initial condition $w_0$ and external field $\mathbf{g}$ (or
``weak solution'') if:
\begin{longlist}
\item For $i,j,k=1,2,3$,
\begin{eqnarray}\label{integrability}
\int_{[0,T]\times\mathbb{R}^3} |\mathbf{w}_i(t,x)||\mathbf
{K}(\mathbf{w})_j(t,x)| \,dx \,dt &<&
\infty, \nonumber\\[-8pt]\\[-8pt]
\int_{[0,T]\times\mathbb{R}^3} |\mathbf{w}_i(t,x)| \biggl|\frac
{\partial\mathbf{K}(\mathbf{w}
)_j}{\partial x_k}(t,x) \biggr| \,dx\,
dt&<&\infty.\nonumber
\end{eqnarray}
\item For any $\mathbf{f}\in(C^{1,2}_b)^3$,
\begin{eqnarray}\label{weak}\quad
&&\int_{\mathbb{R}^3}\mathbf{f}(t,y)\mathbf{w}(t,y)\,dy \nonumber\\
&&\qquad=\int
_{\mathbb{R}^3} \mathbf
{f}(0,y)w_0(y)\,dy +
\int_0^t \int_{\mathbb{R}^3}\mathbf{f}(s,y)\mathbf{g}(s,y)\,dy\, ds
\nonumber\\[-8pt]\\[-8pt]
&&\qquad\quad{} + \int_0^t
\int_{\mathbb{R}^3} \biggl[ \frac{\partial\mathbf{f}}{\partial s}(s,y)
+\nu\triangle\mathbf{f}(s,y)\nonumber\\
&&\qquad\quad\hspace*{47.3pt}{}
+ \nabla\mathbf{f}(s,y) \mathbf{K}(\mathbf{w})(s,y)+\mathbf
{f}(s,y)\nabla
\mathbf{K}(\mathbf{w})(s,y) \biggr]\mathbf{w}(s,y) \,dy \,ds.\nonumber
\end{eqnarray}
\end{longlist}
\end{definicion}
\begin{rem} We observe that for any
function $\mathbf{v}\dvtx\mathbb{R}^3\to\mathbb{R}^3$ in $L^1$, the
functions $\mathbf{K}(\mathbf{v})$ and
$\nabla\mathbf{K}(\mathbf{v})$ are defined a.e. on $ \mathbb{R}^3$.
Indeed, the first
one can be bounded by a (scalar) Riesz potential operator (see
Stein \cite{Stein}), and thus belongs to a suitable weak Lebesgue
space. The second one is defined through a singular integral
operator acting on $\mathbf{v}$ (see, e.g., \cite{BerMa} for this
fact), and
this implies (see also \cite{Stein}) that it is an almost
everywhere defined function of some other weak Lebesgue space.
\end{rem}
We next introduce the central probabilistic objects we shall be
dealing with, which extend the ideas introduced in two dimensions
in \cite{FM}.
\begin{definicion}\label{margsP}
We write $\mathcal{C}_T:=[0,T]\times C([0,T],\mathbb{R}^3\times
\mathbb{R}^{3\otimes3})$. The canonical process in $\mathcal{C}_T$
will be
denoted by $(\tau, X,\Phi)$, and the space of probability measures
on $\mathcal{C}_T$ is written $\mathcal{P}(\mathcal{C}_T)$.
For an element $P\in\mathcal{P}(\mathcal{C}_T)$, we write
$P^{\circ}=\operatorname{law}(X)$ for the second marginal and $P'=\operatorname{law}(\Phi)$ for
the third marginal.
\end{definicion}
We shall also denote
\begin{eqnarray}\label{initlaw}
\bar{w}_0(x) & = & \frac{|w_0(x)|}{\|w_0\|_1+\|\mathbf{g}\|_{1,T}}\quad
\mbox{and } \nonumber\\[-8pt]\\[-8pt]
\bar{\mathbf{g}}(t,x)& = &\frac{|\mathbf{g}(t,x)|}{\|w_0\|_1+\|
\mathbf{g}\|_{1,T}}.\nonumber
\end{eqnarray}
We then define a probability measure $P_0(dt,dx)$ on $[0,T]\times
\mathbb{R}^3$ by
\begin{equation}\label{P0}
P_0(dt,dx) =\delta_0(dt) \bar{w}_0(x) \,dx + \bar{\mathbf{g}}(t,x) \,dx \,dt,
\end{equation}
together with the vectorial weight function
\begin{eqnarray}\label{h0}
h(t,x) &=& \mathbf{1}_{\{t=0\}}\frac{w_0(x)}{|w_0(x)|} (\|w_0\|_1+\|\mathbf
{g}\|
_{1,T} )\nonumber\\[-8pt]\\[-8pt]
&&{}
+\frac{\mathbf{g}(t,x)}{|\mathbf{g}(t,x)|} (\|w_0\|_1+\|\mathbf
{g}\|_{1,T} )\mathbf{1}_{\{t>0\}},\nonumber
\end{eqnarray}
where $\mathbf{1}$ denotes the indicator function and the convention
``$\frac{0}{0}=0$'' is made. We notice that
$|h(t,x)|=\|w_0\|_1+\|\mathbf{g}\|_{1,T}$ or $0$. Moreover, the following holds:
\begin{rem}\label{P0rem}
For measurable bounded functions $\mathbf{f}\dvtx[0,T]\times\mathbb
{R}^3\to\mathbb{R}^3$,
we have
\begin{eqnarray*}
&&\int_{[0,T]\times\mathbb{R}^3} \mathbf{f}(s,x)h(s,x)P_0(ds,dx)
\\
&&\qquad=
\int
_{\mathbb{R}^3}
\mathbf{f}(0,x)w_0(x)\,dx + \int_{[0,T]\times\mathbb{R}^3}
\mathbf
{f}(s,x)\mathbf{g}(s,x)\,dx
\,ds.
\end{eqnarray*}
\end{rem}
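Sampling from $P_0$ is straightforward once $w_0$ and $\mathbf{g}$ are discretized: with probability $\|w_0\|_1/(\|w_0\|_1+\|\mathbf{g}\|_{1,T})$ the birth occurs at time $0$ with position drawn from $\bar{w}_0$, and otherwise $(\tau,X_0)$ is drawn from $\bar{\mathbf{g}}$. A sketch on a finite grid (the grid-based discretization is our illustration, not from the paper):

```python
import numpy as np

def sample_birth(times, points, w0_mass, g_mass, rng):
    """Draw (tau, X_0) ~ P_0 for discretized data: w0_mass[i] ~ |w_0(x_i)| dx
    and g_mass[k, i] ~ |g(t_k, x_i)| dx dt on the given time/space grids."""
    m0, mg = w0_mass.sum(), g_mass.sum()
    if rng.random() < m0 / (m0 + mg):      # birth at t = 0, position ~ |w_0|
        return 0.0, points[rng.choice(len(points), p=w0_mass / m0)]
    flat = rng.choice(g_mass.size, p=g_mass.ravel() / mg)
    k, i = np.unravel_index(flat, g_mass.shape)   # birth ~ |g| dx dt
    return times[k], points[i]
```

The common normalization $\|w_0\|_1+\|\mathbf{g}\|_{1,T}$ dropped here is exactly the one restored by the weight function $h$.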
Consider now $Q\in\mathcal{P}(\mathcal{C}_T)$ such that for all
$t\in[0,T]$, $\mathbb{E}^{Q}(|\Phi_t|)<\infty$.
Then, we can associate
with $Q$ a family of $\mathbb{R}^3$-valued vector measures
$(\tilde{Q}_t)_{t\in[0,T]}$ on $\mathbb{R}^3$, defined for all bounded
measurable function $\mathbf{f}\dvtx\mathbb{R}^3\to\mathbb{R}^3$ by
\begin{equation}\label{measvect}
\tilde{Q}_t(\mathbf{f})=\mathbb{E}^{Q}\bigl(\mathbf{f}(X_t)\Phi_t
h(\tau
,X_0)\mathbf{1}_{\{\tau\leq
t\}}\bigr).
\end{equation}
Moreover, $\tilde{Q}_t$ is absolutely continuous with respect to
$Q^{\circ}_t$, with
\begin{equation}\label{density}
\frac{d \tilde{Q}_t}{d Q^{\circ}_t}(x)=E^Q\bigl(\Phi_t
h(\tau,X_0)\mathbf{1}_{\{\tau\leq t\}}\vert X_t=x\bigr),
\end{equation}
and its total mass is bounded by $ (\|w_0\|_1
+\|\mathbf{g}\|_{1,T})\mathbb{E}^{Q}(|\Phi_t|) $.
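In a Monte Carlo setting, $\tilde{Q}_t(\mathbf{f})$ is estimated by the corresponding weighted empirical average over sampled triples $(\tau,X,\Phi)$, carrying the rotation weight $h(\tau,X_0)$ along each path. A sketch of this estimator (the array layout is our choice):

```python
import numpy as np

def empirical_Q_tilde(f, t, tau, X_t, Phi_t, h0):
    """Estimate Q~_t(f) = E( f(X_t) Phi_t h(tau, X_0) 1_{tau <= t} ) from n
    samples: tau (n,), X_t (n, 3), Phi_t (n, 3, 3), h0 = h(tau, X_0) (n, 3).
    Uses the paper's conventions f g = sum_i f_i g_i and
    (f Z)_i = sum_j f_j Z_{j,i}."""
    fx = np.array([f(x) for x in X_t])               # (n, 3)
    vals = np.einsum('nj,nji,ni->n', fx, Phi_t, h0)  # f(X_t) Phi_t h per path
    return np.where(tau <= t, vals, 0.0).mean()      # kill not-yet-born paths
```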
\begin{definicion} We denote by $\mathcal{P}_b(\mathcal{C}_T)$ the
subset of
probability measures $Q\in\mathcal{P}(\mathcal{C}_T)$ under which
the process $\Phi$ belongs to $L^{\infty}( [0,T]\times\Omega
,dt\otimes Q)$.
\end{definicion}
Then, we consider the following nonlinear martingale problem:
\begin{enumerate}[]
\item[(MP):]\hypertarget{MP} to find $P\in\mathcal{P}_b(\mathcal{C}_T)$
such that:
\begin{itemize}
\item
$X_t=X_0$ in $[0,\tau]$, $P$-almost surely.
\item The law of $(\tau,X_0)$ under $P$
is $P_0$ given by (\ref{P0}), and $\tilde{P}_t$
constructed
according to (\ref{measvect}) has a bi-measurable
density family
$(t,x)\mapsto\tilde{\rho}(t,x)$.
\item $f(t,X_t)-f (0,X_0)-\int_{0}^t \{\frac{\partial
f}{\partial s}(s,X_s)+ [\nu\triangle f (s,X_s)
+ \mathbf{K}(\tilde{\rho})(s,X_s)\nabla f(s,X_s) ] \mathbf
{1}_{ s\geq\tau
} \}\,ds$,
$0\leq t\leq T$, is a continuous $P$-martingale for all $f
\in C_b^{1,2}$
w.r.t. the filtration
$\mathcal{F}_t=\sigma(\tau,(X_s,\Phi_s),s\leq t)$.
\item $\Phi_t=I_3 +\int_{0}^t \nabla
\mathbf{K}(\tilde{\rho})(s,X_s)\Phi_s \mathbf{1}_{ s\geq\tau}
\,ds$, for all
$0\leq t\leq T$, $P$-almost surely.
\end{itemize}
\end{enumerate}
The following statement partially explains the relation between
\hyperlink{MP}{($\mathrm{MP}$)} and (\ref{3DVF}), and will be useful later on:
\begin{lema}\label{MPtoweakeq}
Assume that the problem \textup{\hyperlink{MP}{($\mathrm{MP}$)}} has a solution $P \in\mathcal
{P}_b(\mathcal{C}_T)$ satisfying
\begin{equation}\label{integKrho}
E \biggl(\int_0^T | \mathbf{K}(\tilde{\rho})(t,X_t)|\,dt \biggr)
<\infty
\end{equation}
and
\begin{equation}\label{integgradKrho}
E \biggl(\int_0^T |\nabla
\mathbf{K}(\tilde{\rho})(t,X_t)|\,dt \biggr)<\infty.
\end{equation}
Then, $\tilde{\rho}$ is
a weak solution of the vortex equation with external force field
(\ref{weak}).
\end{lema}
\begin{pf}
The assumptions on $P$ imply that point (i) in Definition
\ref{weakeq} is satisfied and, moreover, that $\int_0^t
\mathbf{K}(\tilde{\rho})(s,X_s)\,ds$ and $\int_0^t \nabla
\mathbf{K}(\tilde{\rho})(s,X_s)\,ds$ are both processes with integrable
variation (and thus absolutely continuous on $[0,T]$). Since under
$P$ the process $\Phi_t$ is almost surely bounded in $[0,T]$, it
follows that it has finite variation too.
On the other hand, the martingale associated with $f \in C_b^{1,2}$ in \hyperlink{MP}{($\mathrm{MP}$)} equals
\begin{eqnarray*}
&&f(t,X_t)-f (\tau\wedge t,X_0)\\
&&\qquad{} -\int_{0}^t \biggl[\frac{\partial
f}{\partial s}(s,X_s)+\nu\triangle f (s,X_s) +
\mathbf{K}(\tilde{\rho})(s,X_s)\nabla f(s,X_s) \biggr] \mathbf{1}_{
s\geq\tau}\,ds
\end{eqnarray*}
thanks to the first condition of \hyperlink{MP}{($\mathrm{MP}$)}.
Therefore, by It\^{o}'s product rule, we see that for each $\mathbf
{f}\in
(C^{1,2}_{b})^3$
\begin{eqnarray*}
&&\mathbf{f}(t,X_t)\Phi_t-\mathbf{f}(\tau\wedge t,X_0)\\
&&\qquad{}-\int_0^t
\biggl[\frac{\partial
\mathbf{f}}{\partial s}(s,X_s)+\nu\triangle\mathbf{f}(s,X_s) +
\nabla\mathbf{f}(s,X_s)\mathbf{K}(\tilde{\rho})(s,X_s)\\
&&\qquad\hspace*{138.4pt}{} + \mathbf{f}(s,X_s)
\nabla
\mathbf{K}(\tilde{\rho})(s,X_s) \biggr]\Phi_s \mathbf{1}_{\{s\geq
\tau\}}\,ds
\end{eqnarray*}
is a local martingale issued from $0$. Moreover, the assumptions
(\ref{integgradKrho}) and (\ref{integKrho}) on $\tilde{\rho}$ and
the fact that $\Phi$ is bounded imply that it is a true
martingale. Consequently, as $h(\tau,X_0)\mathbf{1}_{\{\tau\leq t\}
}$ is
$\mathcal{F}_0$-measurable and $\mathbf{1}_{\{\tau\leq s\}\cap\{
\tau\leq
t\}}=\mathbf{1}_{\{\tau\leq s\}}$ for $s\leq t$, we see that
\begin{eqnarray}
&&
E^P \bigl( \mathbf{f}(t,X_t) \Phi_t h(\tau,X_0)\mathbf{1}_{\{
\tau\leq
t\}} \bigr) -E^P \bigl( \mathbf{f}(\tau,X_0)h(\tau,X_0)\mathbf
{1}_{\{\tau
\leq t\}} \bigr) \nonumber\\
&&\qquad{} - E^P \biggl( \int_0^t \biggl[\frac{\partial\mathbf{f}}{\partial
s}(s,X_s)+\nu\triangle\mathbf{f}(s,X_s)\nonumber\\[-8pt]\\[-8pt]
&&\qquad\hspace*{52.4pt}{} +\nabla
\mathbf{f}(s,X_s)\mathbf{K}(\tilde{\rho})(s,X_s) \nonumber\\
&&\qquad\hspace*{52.4pt}{}
+ \mathbf{f}(s,X_s)
\nabla\mathbf{K}(\tilde{\rho})(s,X_s) \biggr]\Phi_s
h(\tau,X_0)\mathbf{1}_{\{\tau\leq
s\}}\,ds \biggr)=0.\nonumber
\end{eqnarray}
Recalling that $\tilde{\rho}(t)$ is the
density of the vector measure (\ref{measvect}) for $Q=P$, the first
term in the previous equation is seen
to be equal to $\int\mathbf{f}(t,x)\tilde{\rho}(t,x)\,dx$. The second
term is equal to the expression in Remark \ref{P0rem} with
$\mathbf{f}(s,x)$ replaced by $\mathbf{f}(s,x)\mathbf{1}_{s\leq t}$,
that is,
$\int\mathbf{f}(0,y)w_0(y)\,dy +
\int_0^t \int\mathbf{f}(s,y)\mathbf{g}(s,y)\,dy \,ds$. The third expectation
can be
interchanged with the time integral
thanks
to the assumptions and Fubini's theorem, and the result follows
using again the definition of $\tilde{\rho}(s)$ in the resulting
time integral.
\end{pf}
The proof of the well posedness of problem \hyperlink{MP}{($\mathrm{MP}$)} will be based
on analytical results about the ``mild form'' of the vortex
equation (\ref{3DVF}), which we state in next section. These will
in particular provide a framework where the conditions required in
Lemma \ref{MPtoweakeq} will hold.
\section{The mild vortex equation in $L^p$ with an external field}
\label{sec3}
We shall next introduce the mild formulation of the forced vortex
equation. We refer the reader to the book of Lemari\'{e}-Rieusset
\cite{LMR} for a comprehensive account on the mild-form approach
to the Navier--Stokes equation in its velocity form. Our techniques
are adapted from that framework.
We denote the heat kernel in $\mathbb{R}^3$ by
\begin{equation}\label{heatkernel}
G^{\nu}_t (x):=(4\pi\nu t)^{-{3/2}} \exp\biggl(-
\frac{|x|^2}{4\nu t} \biggr),
\end{equation}
where $\nu>0$. One has
\begin{lema}\label{estimsw}
For all $p\in[1,\infty]$, $r\geq p$ and $w\in(L^p)^3$, there
exist positive constants $\bar{C}_0(p;r)$ and
$\bar{C}_1(p;r)$ such that for all $t>0$:
\begin{longlist}
\item$\|G_t^{\nu}*w\|_r\leq\bar{C}_0(p;r)t^{-{3/2}
({1/p}-{1/r})} \|w\|_p$,
\item$ \|\nabla
G_t^{\nu}*w\|_r\leq\bar{C}_1(p;r)t^{-{1/2}-{3/2}
({1/p}-{1/r})} \|w\|_p$.
\end{longlist}
\end{lema}
\begin{pf} Use Young's inequality and the well-known estimates
\[
\sup_{t\geq0} \|G^{\nu}_ t\|_m t^{{3/2} -
{3/(2m)}}<\infty, \qquad\sup_{t\geq0} \|\nabla G^{\nu}_
t\|_m t^{2-{3/(2m)}} <\infty.
\]
\upqed\end{pf}
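For the reader's convenience, the first of these norms can in fact be computed exactly by a standard Gaussian integral, which makes the scaling explicit: since
\[
\|G^{\nu}_t\|_m^m=(4\pi\nu t)^{-3m/2}\int_{\mathbb{R}^3}
\exp\biggl(-\frac{m|x|^2}{4\nu t}\biggr)dx
=(4\pi\nu t)^{-3m/2}\biggl(\frac{4\pi\nu t}{m}\biggr)^{3/2},
\]
one gets $\|G^{\nu}_t\|_m=m^{-3/(2m)}(4\pi\nu t)^{-(3/2)(1-1/m)}$, whose power of $t$ is exactly the exponent $\frac{3}{2}-\frac{3}{2m}$ compensated in the first supremum; the bound for $\nabla G^{\nu}_t$ follows from the analogous computation.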
\begin{definicion} Let $w_0$ and $\mathbf{g}$ be functions satisfying
\hyperlink{Hp}{$(\mathrm{H}_p)$} for some $p \in[1,\infty]$. A function $\mathbf
{w}\in
L^{\infty}([0,T], (L^p(\mathbb{R}^3))^3)$ is a mild solution on $[0,T]$
of the vortex equation with initial condition $w_0$ and external
field (or ``mild solution'') if:
\begin{longlist}
\item The functions
$\mathbf{K}(\mathbf{w})_i(t,x):=\mathbf{K}(\mathbf{w}(t,\cdot
))_i(x)$, $i=1,2,3$ are defined a.e.
on $[0,T]\times\mathbb{R}^3$ and satisfy the integrability conditions
(\ref{integrability}).
\item For $dt$-almost every $t$, the following identity
holds in $(L^p)^3$:
\begin{eqnarray}\label{eqmild}\qquad
\mathbf{w}(t,x)&=&G^{\nu}_t* w_0(x)+ \int_0^t G^{\nu}_{t-s}*\mathbf
{g}(s,\cdot)
(x)\,ds \nonumber\\
&&{}+ \sum_{j=1}^3\int_0^t \int_{\mathbb{R}^3}\frac{\partial
G^{\nu}_{t-s}}
{\partial y_j}
(x-y) [\mathbf{K}(\mathbf{w})_j(s,y)\mathbf{w}(s,y)
\\
&&\hspace*{127.4pt}{}- \mathbf{w}_j(s,y)\mathbf{K}(\mathbf{w})(s,y) ]\,dy \,ds.\nonumber
\end{eqnarray}
\end{longlist}
\end{definicion}
We shall state in Theorems \ref{exist1} and \ref{regularity} below
the analytical results we need about (\ref{eqmild}). As we shall
see, that equation will admit an abstract formulation which is the
same as in the case $\mathbf{g}=0$, and so we will be able to adapt the
techniques in \cite{F1} with no difficulties. We therefore
provide an abbreviated account of these results.
We shall simultaneously deal with a family of ``mollified''
versions of (\ref{eqmild}). Consider a smooth function
$\varphi\dvtx\mathbb{R}^3\to\mathbb{R}$ satisfying:
\begin{longlist}
\item$\int_{\mathbb{R}^3}\varphi(x)\,dx=1$,
\item${\int_{\mathbb{R}^3}}|x| |\varphi(x)|\,dx<\infty$,
\end{longlist}
which is called a ``cutoff function of order $1$.'' For
$\varepsilon>0$, let $\varphi_{\varepsilon}\dvtx\mathbb{R}^3\to\mathbb
{R}$ denote
the regular approximation of the Dirac mass
$\varphi_{\varepsilon}(x)=\frac{1}{\varepsilon^3}\varphi(\frac{x}{\varepsilon})$.
We define the convolution operators
\begin{equation}\label{biotsavop}
\mathbf{K}^{\varepsilon}(w)(x):=\int_{\mathbb{R}^3}K_{\varepsilon}
(x-y)\wedge w(y) \,dy,
\end{equation}
where $K_{\varepsilon}:=\varphi_{\varepsilon}*K
=\mathbf{K}(\varphi_{\varepsilon})$. The fact that $K_{\varepsilon
}$ is a
regular function will follow from part (ii) in Lemma
\ref{contK} below. To unify notation, we also write $K_0=K$ and $
\mathbf{K}^{0}(w)(x):=\mathbf{K}(w)(x)$.
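As a concrete example of cutoff function (our choice; any $\varphi$ of order $1$ works), one may take the standard Gaussian, whose rescalings $\varphi_\varepsilon$ are readily evaluated:

```python
import numpy as np

def phi(x):
    """Standard Gaussian density on R^3: a smooth cutoff function of
    order 1 (unit integral and finite first absolute moment)."""
    return (2.0 * np.pi) ** (-1.5) * np.exp(-np.sum(np.square(x), axis=-1) / 2.0)

def phi_eps(x, eps):
    """Rescaled approximation of the Dirac mass: eps^{-3} phi(x / eps)."""
    return phi(x / eps) / eps ** 3
```

In the mollified operator this $\varphi_\varepsilon$ enters only through the smoothed kernel $K_\varepsilon=\varphi_\varepsilon * K$.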
We introduce the
family $\{\mathbf{B}^{\varepsilon}\}_{\varepsilon\geq0} $ of operators
(formally) defined on functions $\mathbf{w},\mathbf{v}\dvtx[0,T]\times
\mathbb{R}^3 \to
\mathbb{R}^3$ by
\begin{eqnarray}\label{bil}\quad
&&\mathbf{B}^{\varepsilon}(\mathbf{w},\mathbf{v})(t,x)
\nonumber\\
&&\qquad=\int_0^t
\sum_{j=1}^3\int_{\mathbb{R}
^3}\frac{\partial G^{\nu}_{t-s}}{\partial y_j}
(x-y) \\
&&\qquad\quad\hspace*{46.2pt}{}\times [\mathbf{K}^{\varepsilon}(\mathbf{w})_j(s,y)\mathbf{v}(s,y)-
\mathbf{v}_j(s,y)\mathbf{K}^{\varepsilon}(\mathbf{w})(s,y) ]\,dy
\,ds.\nonumber
\end{eqnarray}
We are interested in the following family of ``abstract''
equations, for $\varepsilon\geq0$:
\begin{equation}\label{abs}
\mathbf{v}=\mathbf{w}_0+ \mathbf{B}^{\varepsilon}(\mathbf
{v},\mathbf{v}),
\end{equation}
where
\[
\mathbf{w}_0(t,x):=G^{\nu}_t* w_0(x)+ \int_0^t G^{\nu
}_{t-s}*\mathbf{g}(s,\cdot)
(x) \,ds.
\]
For a given time interval $[0,T]$ we shall work in the Banach
spaces
\[
\mathbf{F}_{0,r,(T;p)},\qquad \mathbf{F}_{1,r,(T;p)},\qquad \mathbf
{F}_{0,p,T}\quad\mbox{and}\quad \mathbf{F}_{1,p,T}
\]
with norms, respectively, defined by:
\begin{itemize}
\item$ {\tn\mathbf{w}\tn_{0,r,(T;p)}:=\sup_{0\leq t
\leq T}
t^{{3/2}({1/p}-{1/r})}\|\mathbf{w}(t)\|_r }$,
\item
$ \tn\mathbf{w}\tn_{1,r,(T;p)}:=\sup_{0\leq t \leq T}
\{ t^{{3/2}({1/p}-{1/r})}\|\mathbf{w}(t)\|
_r +
t^{{1/2}+{3/2}({1/p}-{1/r})}\times\break
{\sum_{k=1}^3} \| \frac{\partial\mathbf{w}(t)}{\partial
x_k} \|_r
\} $,
\item$ {\tn\mathbf{w}\tn_{0,p,T}:=\tn\mathbf{w}
\tn_{0,p,(T;p)}}$ and
\item$ {\tn\mathbf{w}
\tn_{1,p,T}:=\tn\mathbf{w}\tn_{1,p,(T;p)}}$.
\end{itemize}
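The time weights in these norms are calibrated to the decay of the heat semigroup, $\|G^{\nu}_t*w_0\|_r\leq C\,t^{-{3/2}({1/p}-{1/r})}\|w_0\|_p$, so profiles with exactly that blow-up rate at $t=0$ have finite norm. A minimal transcription of the first norm on a finite time grid (illustration only; the spatial $L^r$ norm and the time slices are user-supplied placeholders):

```python
def triple_norm_0(w_of_t, ts, p, r, Lr_norm):
    # sup over the grid ts of t^{(3/2)(1/p - 1/r)} * ||w(t)||_r, i.e. the
    # norm ||| w |||_{0,r,(T;p)} restricted to finitely many times t > 0.
    # `w_of_t` returns the spatial profile at time t and `Lr_norm`
    # computes its L^r norm -- both are placeholders for the caller.
    a = 1.5 * (1.0 / p - 1.0 / r)
    return max((t ** a) * Lr_norm(w_of_t(t)) for t in ts)
```

A profile decaying at precisely the heat-semigroup rate saturates the weight, so every time slice contributes the same value to the supremum.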
The following continuity property of the Biot--Savart kernel is
crucial:
\begin{lema}\label{contK}
Let $1< p <3$ be given and $q\in(\frac{3}{2},\infty)$ be defined
by $\frac{1}{q}=\frac{1}{p}-\frac{1}{3}$.
\begin{longlist}
\item For every $w \in(L^p)^3$, the integral (\ref{biotsavop})
is absolutely convergent for almost every $x$ and one has
$\mathbf{K}^{\varepsilon}(w)\in(L^q)^3$. There exists further a
positive constant $\tilde{C}_{p,q}$ such that
\begin{equation}\label{Kpq}
\sup_{\varepsilon\geq0}\| \mathbf{K}^{\varepsilon}(w)\|_q\leq
\tilde{C}_{p,q} \|w\|_p
\end{equation}
for all $w \in(L^p)^3$.
\item If moreover $w \in
(W^{1,p})^3$, then we have
$\mathbf{K}^{\varepsilon}(w)\in(W^{1,q})^3$, with $\frac{\partial}
{\partial x_k}\mathbf{K}^{\varepsilon}(w)=\mathbf{K}^{\varepsilon}
(\frac{\partial w }{\partial x_k} )$, and
\begin{equation}\label{KWpq}
\sup_{\varepsilon\geq0} \biggl\| \frac{\partial
\mathbf{K}^{\varepsilon}(w)}{\partial x_k} \biggr\|_q\leq
\tilde{C}_{p,q} \biggl\|\frac{\partial w }{\partial
x_k} \biggr\|_p
\end{equation}
for all $k=1,2,3$.
\end{longlist}
\end{lema}
\begin{pf} See
Lemma 2.2 in \cite{F1} for the case $\varepsilon=0$ and Remark 4.3
therein for the general case.
\end{pf}
\begin{lema}\label{contB}
\textup{(i)}
Let $p\in[1,3)$ and assume \hyperlink{Hp}{$(\mathrm{H}_p)$}. Then, we
have for all $r\in[p,\frac{3p}{3-p})$ that
\[
\mathbf{w}_0\in\mathbf{F}_{1,r,(T;p)}\qquad \mbox{with } \tn\mathbf{w}_0
\tn_{1,r,(T;p)}\leq C(r,p)(\|w_0\|_p + T \tn\mathbf{g}\tn_{0,p,T})
\]
for some finite constant $C(r,p)>0$.
{\smallskipamount=0pt
\begin{longlist}[(ii)]
\item[(ii)] Let $\frac{3}{2}< p<3, p\leq l <\min\{\frac{6p}{6-p},3\}$
and $ \frac{3l}{6-l}\leq
l'< \frac{3l}{6-2l}$. Then, there exists a finite constant
$C_1(l,l';p)$ not depending on $T>0$ such that for all $\mathbf
{w},\mathbf{v}\in
\mathbf{F}_{1,l,(T;p)}$,
\[
\sup_{\varepsilon\geq0}\tn\mathbf{B}^{\varepsilon}(\mathbf
{w},\mathbf{v}) \tn
_{1,l',(T;p)} \leq C_1(l,l';p)
T^{1-{3}/({2p})} \tn\mathbf{w}
\tn_{1,l,(T;p)} \tn\mathbf{v}\tn_{1,l,(T;p)},
\]
where
$1-\frac{3}{2p}>0$.
\end{longlist}}
\end{lema}
\begin{pf}
Part (i) follows from Lemma \ref{estimsw}. To
bound the time integral we use, moreover, the fact that for all
$r\geq p$, one has
\[
\biggl\|\int_0^t
G^{\nu}_{t-s}*\mathbf{g}(s,\cdot) \,ds \biggr\|_r\leq C
t^{1+{3/2}({1/r}-{1/p})} \Bigl({\sup_{t\in
[0,T]}}\|\mathbf{g}(t,\cdot)\|_p \Bigr).
\]
On the other hand, since $t\mapsto
t^{-{1/2}+{3/2}({1/r}-{1/p})}$ is
integrable at $0$ if and only if $r<\frac{3p}{3-p}$, we have
\[
\biggl\|\nabla\biggl(\int_0^t
G^{\nu}_{t-s}*\mathbf{g}(s,\cdot) \,ds \biggr) \biggr\|_r\leq C'
t^{{1/2}+{3/2}({1/r}-{1/p})} \Bigl({\sup
_{t\in
[0,T]}}\|\mathbf{g}(t,\cdot) \|_p \Bigr)
\]
from where the statement follows. Part (ii) uses Lemma
\ref{contK} and is proved in parts (ii) and (iv) of
Proposition 3.1 in \cite{F1}. See also Remarks 4.3 and 6.3 therein
for the uniformity (in $\varepsilon\geq0 $) of the bounds.
\end{pf}
\begin{rem}
Observe that the previous lemma, in particular, implies
(taking $p=r=l=l'$) that for $p\in(\frac{3}{2},3)$, the abstract
equation (\ref{abs}) makes sense in $\mathbf{F}_{1,p,T}$ for each
$\varepsilon\geq0$.
\end{rem}
Now we can state the extension of Theorem 3.1 in \cite{F1} to the
3d-vortex equation with external field.
\begin{teorema}\label{exist1}
Assume that \hyperlink{Hp}{$(\mathrm{H}_p)$} for some $\frac{3}{2}<p<3$.
\begin{enumerate}[(a)]
\item[(a)] For each $T>0$ and $\varepsilon\geq0$, equation (\ref
{abs}) has, at most,
one solution in $\mathbf{F}_{0,p,T}$.
\item[(b)] There is a constant
$\Gamma_0(p)>0$ independent of $\varepsilon\geq0$ such that for
all $T>0$, $w_0 $ and $\mathbf{g}$
satisfying
\[
T^{1-{3/(2p)}} (\|w_0\|_p+T\tn\mathbf{g}
\tn_{0,p,T} )<\Gamma_0(p),
\]
each one of (\ref{abs}) with $\varepsilon\geq0$, has a
solution $\mathbf{w}^{\varepsilon}\in\mathbf{F}_{1,p,T}$. Moreover,
we have
\[
\sup_{\varepsilon\geq0}\tn\mathbf{w}^{\varepsilon}\tn
_{1,p,T}\leq2 \tn
\mathbf{w}_0 \tn_{0,p,T}.
\]
\end{enumerate}
\end{teorema}
\begin{pf} For later purposes, we give, in detail, the argument of
\cite{F1}. By Lem\-ma~\ref{estimsw}(ii) (with $p$ in the place of $r$ and
$\frac{3p}{6-p}$ in that of $p$) and Lemma \ref{contK}(i),
we have for all $\mathbf{v},\mathbf{w}\in\mathbf{F}_{0,p,T}$ that
\[
\| \mathbf{B}^{\varepsilon}(\mathbf{w},\mathbf{v})(t)\|_p\leq C\int_0^t
(t-s)^{-{3}/({2p})} \|\mathbf{w}(s)\|_p\|\mathbf{v}(s)\|_p \,ds.
\]
It follows that
if $\mathbf{w}$ and $\mathbf{v}$ are two solutions, one has
\[
\|\mathbf{w}(t) -\mathbf{v}(t)\|_p \leq C ( \tn\mathbf{w}\tn
_{0,p,T}+\tn\mathbf{v}
\tn_{0,p,T} )\int_0^t
(t-s)^{-{3}/({2p})} \|\mathbf{w}(s)-\mathbf{v}(s)\|_p
\,ds
\]
and iterating the latter sufficiently many times [using the
identity $ \int_0^t s^{a-1}(t-s)^{b-1} \,ds =C
t^{a+b-1}$ for $a,b>0$] we get
$\|\mathbf{w}(t) -\mathbf{v}(t)\|_p \leq C \int_0^t \|\mathbf
{w}(s)-\mathbf{v}(s)\|_p \,ds$.
Gronwall's lemma concludes the proof.
(b) We
notice that for $T>0$ small enough, one has
\[
4 C(p,p) C_1(p,p;p)
T^{1-{3}/({2p})} (\|w_0\|_p + T \tn\mathbf{g}\tn_{0,p,T})<1,
\]
where $C(p,p)$ and $C_1(p,p;p)$ are, respectively, the constants in
parts (i) and (ii) of Lemma \ref{contB} with all
parameters equal to $p$. From this and Lemma \ref{contB}(i),
the same contraction argument used in Theorem 3.1(b) of
\cite{F1} can be applied here in the space $\mathbf{F}_{1,p,T}$.
\end{pf}
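The contraction scheme behind part (b) can be mimicked on a scalar caricature of the abstract equation $\mathbf{v}=\mathbf{w}_0+\mathbf{B}(\mathbf{v},\mathbf{v})$. The sketch below is an illustration under stated assumptions: the toy bilinear term $B(v,v)(t)=c\int_0^t v(s)^2\,ds$ replaces the actual operator $\mathbf{B}^{\varepsilon}$ on $\mathbf{F}_{1,p,T}$, and the singular kernel is dropped.

```python
import numpy as np

def picard_toy(w0, c, T, n_steps=400, n_iter=30):
    """Picard iterates v_{k+1} = w0 + B(v_k, v_k) for the scalar toy model
    B(v, v)(t) = c * int_0^t v(s)^2 ds, mirroring the fixed-point scheme of
    the existence proof.  Returns the time grid, the final iterate, and the
    sup-norms of successive differences (which decay geometrically when the
    smallness condition on T and the data holds)."""
    ts = np.linspace(0.0, T, n_steps)
    dt = ts[1] - ts[0]
    v = np.full(n_steps, float(w0))
    diffs = []
    for _ in range(n_iter):
        # left-endpoint quadrature of int_0^t v(s)^2 ds
        integral = np.concatenate(([0.0], np.cumsum(v[:-1] ** 2) * dt))
        v_new = w0 + c * integral
        diffs.append(np.max(np.abs(v_new - v)))
        v = v_new
    return ts, v, diffs
```

For small data and small $cT$ the successive differences decay geometrically, which is the mechanism behind the smallness condition involving $\Gamma_0(p)$; the limit here solves the Riccati-type equation $v(t)=w_0+c\int_0^t v^2$.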
We observe that for $\mathbf{v}\in\mathbf{F}_{0,p,T}$, with $p\in
(\frac{3}{2},3)$ we have $\mathbf{K}(\mathbf{v}) \in\mathbf
{F}_{0,q,T}$ for $q\in
(3,\infty)$. The previous global uniqueness and local existence
result also holds in that space, and one can, moreover, show that
the solution $\mathbf{w}(t)\in(L^p)^3$ is a continuous function of $t$.
That type of result corresponds to a ``vorticity version'' of
Kato's theorem for the mild Navier--Stokes equation in $(L^q)^3$,
$q\in(3,\infty)$ (see~\cite{LMR}, Theorem 15.3(A)).
We shall, later on, need additional regularity properties of the
function $\mathbf{w}^{\varepsilon}$ and, more importantly, their uniformity
in $\varepsilon\geq0$. These results will rely on continuity
properties of the ``derivative'' of the Biot--Savart operator.
\begin{lema}\label{contgradK}
Let $1<r<\infty$.
\begin{longlist}
\item For all\vspace*{1pt} $w \in(L^r)^3$ and $\varepsilon\geq0$, we have
$\frac{\partial
}{\partial x_k}\mathbf{K}^{\varepsilon}(w)\in(L^r)^3$ for
$k=1,2,3$. There exists further a positive constant $\tilde{C}_r$
depending only on $r$ such that
\begin{equation}\label{HLp}
\sup_{\varepsilon\geq0} \biggl\| \frac{\partial
\mathbf{K}^{\varepsilon}(w)_j}{\partial x_k} \biggr\|_r\leq
\tilde{C}_r \|w\|_r
\end{equation}
for all $j=1,2,3$, where $\mathbf{K}^{\varepsilon}(w)_j$ is the
$j$th component of $\mathbf{K}^{\varepsilon}(w)$.
\item If,
moreover, $w \in(W^{1,r})^3$, we then have
$\frac{\partial}{\partial
x_k}\mathbf{K}^{\varepsilon}(w)\in(W^{1,r})^3$, with
$\frac{\partial}{\partial x_i} (\frac{\partial}{\partial
x_k}\mathbf{K}^{\varepsilon}(w) )= \frac{\partial}{\partial
x_k}\mathbf{K}^{\varepsilon}(\frac{\partial}{\partial x_i} w) $
and
\begin{equation}\label{HW1p}
\sup_{\varepsilon\geq0} \biggl\| \frac{\partial^2
\mathbf{K}^{\varepsilon}(w)_j}{\partial x_i\,\partial x_k} \biggr\|_r
\leq\tilde{C}_r \biggl\|\frac{\partial w }{\partial
x_i} \biggr\|_r
\end{equation}
for all $i,k=1,2,3$.
\end{longlist}
\end{lema}
\begin{pf} See Lemma 3.1 and Remark 4.3 in \cite{F1} for the proof,
which relies on the fact that $w\mapsto\frac{\partial
\mathbf{K}(w)}{\partial x_k}$ is a singular integral operator.
\end{pf}
\begin{teorema}\label{regularity}
For $p\in(\frac{3}{2},3)$, let $\mathbf{w}^{\varepsilon}\in\mathbf
{F}_{1,p,T}$,
$\varepsilon\geq0$ be the solution of (\ref{abs}) given by
Theorem \ref{exist1}, and write
$\mathbf{u}^{\varepsilon}(s,x):=\mathbf{K}^{\varepsilon}(\mathbf
{w}^{\varepsilon})(s,x)$.
Let $\mathcal{C}^{\alpha}$ denote the space of H\"{o}lder continuous
functions $\mathbb{R}^3\to\mathbb{R}^3$ of index $\alpha\in(0,1)$.
\begin{longlist}
\item For all $r\in[p,\frac{3p}{3-p})$, we have
\[
\sup_{\varepsilon\geq0} \tn\mathbf{w}^{\varepsilon} \tn_{1,r,(T;p)}
<\infty.
\]
\item
We have
\begin{equation}\label{estimu}
\sup_{\varepsilon\geq0}\sup_{t\in[0,T]} t^{{1/2}}
\{\|\mathbf{u}^{\varepsilon}(t)\|_{\infty}+\|\mathbf
{u}^{\varepsilon}(t)\|
_{\mathcal{C}^{({2p-3})/{p}}} \} <\infty.
\end{equation}
\item For all $r\in(3,\frac{3p}{3-p})$, $i=1,2,3$ we have
\begin{equation}\label{estimgradu}
\sup_{\varepsilon\geq0}\sup_{t\in[0,T]}
t^{{1/2}+{3/2}({1/p}-{1/r})} \biggl\{
\biggl\|\frac{\partial\mathbf{u}^{\varepsilon}(t)}{\partial
x_i} \biggr\|_{\infty}+ \biggl\|\frac{\partial\mathbf
{u}^{\varepsilon
}(t)}{\partial
x_i} \biggr\|_{\mathcal{C}^{1-{3/r}}} \biggr\} <\infty.
\end{equation}
In particular, the functions
\[
t\mapsto\|\mathbf{u}^{\varepsilon}(t)\|_{\infty} \quad\mbox{and}\quad
t\mapsto\biggl\|\frac{\partial\mathbf{u}^{\varepsilon}(t)}{\partial
x_i} \biggr\|_{\infty},\qquad i=1,2,3,
\]
belong to $L^1([0,T],\mathbb{R})$ for each $\varepsilon\geq0$.
\end{longlist}
\end{teorema}
\begin{pf} Observe that
parts (i) and (ii) of Lemma \ref{contB} provide an estimate
of the form
\[
\tn
\mathbf{w}^{\varepsilon}\tn_{1,l',(T;p)}\leq C(l',p)(\|w_0\|_p+ T\tn
\mathbf{g}
\tn_{0,p,T}) +\Lambda(T,l,l')A_l^2
\]
for suitable $l$ and $l'$
and with $\Lambda(T,l,l')$ a uniform upper bound for the norms of
the operators $\mathbf{B}^{\varepsilon}\dvtx(\mathbf
{F}_{1,l,(T;p)})^2\to
\mathbf{F}_{1,l',(T;p)}$ and $A_l$ a given upper bound
of $\tn\mathbf{w}^{\varepsilon}\tn_{1,l,(T;p)}$. Then, starting
from the fact
that the functions
$\mathbf{w}^{\varepsilon}\in\mathbf{F}_{1,p,(T;p)}=\mathbf
{F}_{1,p,T}$ are uniformly
bounded in $\varepsilon\geq0$, we can apply several times Lemma
\ref{contB} and the previous
inequality (using, also, the fact that $\mathbf{w}_0 \in\mathbf
{F}_{1,l',(T;p)}$
for all $l' \in[p,\frac{3p}{3-p})$), and obtain an increasing
sequence $l'=l_n$ such that $l_0=p$, $l_n \nearrow
\frac{3p}{3-p}$, and
$\mathbf{w}^{\varepsilon}\in\mathbf{F}_{1,l_n,(T;p)}$ with $\tn
\mathbf{w}^{\varepsilon
}\tn
_{1,l_n,(T;p)}$
controlled in terms of $\tn\mathbf{w}^{\varepsilon}\tn_{1,l_{n-1},(T;p)}$
and $\tn\mathbf{w}_0\tn_{1,l_n,(T;p)}$. One can thus choose $N$ large enough
such that
$l_N\geq r$ and conclude with an
interpolation inequality in the spaces $\mathbf{F}_{1,l,(T;p)}$.
We refer to the proof of Theorem 3.2(ii) in \cite{F1} for this and
for an explicit construction of the sequence~$l_n$.
Next, Lemma \ref{contK} and Theorem \ref{exist1} imply that for
$q=\frac{3p}{3-p}>3$,
\[
\sup_{\varepsilon\geq0}\tn\mathbf{u}^{\varepsilon}
\tn_{{1,q,T}}\leq C\sup_{\varepsilon\geq0}\tn\mathbf
{w}^{\varepsilon}
\tn_{{1,p,T}} \leq C '(\|w_0\|_p+ T\tn\mathbf{g}
\tn_{0,p,T}) .
\]
Using the continuous embedding of $(W^{1,m})^3$
into $(L^{\infty})^3\cap\mathcal{C}^{1-{3}/{m}}$ for all $m>3$,
we deduce part (ii), taking $m=q$. To prove part (iii)
we use part (i), Lemma \ref{contgradK} and the same embedding
result as before but with $m=r$. See Corollary 3.1 in \cite{F1}
for details.
\end{pf}
\section{The nonlinear process}\label{sec4}
We shall, in this section, use the notation $F_{0,p,T}$, $F_{1,p,T}$,
$F_{0,r,(T;p)}$ and $F_{1,r,(T;p)}$ for the scalar-function
analogues of the spaces $\mathbf{F}$ defined in Section
\ref{sec3}.\vadjust{\goodbreak}
We also need the following definition.
\begin{definicion} $\mathcal{P}_{b,{3}/{2}}^T$ is the space of
probability measures $Q\in
\mathcal{P}_b(\mathcal{C}_T)$ satisfying the following conditions:
\begin{itemize}
\item For each $t\in[0,T]$, $Q^{\circ}_t(dx)$ defined in
Definition \ref{margsP} is
absolutely continuous with respect to Lebesgue's measure.
\item The family of densities of $(Q^{\circ}_t(dx))_{t\in[0,T]}$,
which we denote by $(t,x)\mapsto
\rho^Q(t,x)$, has a version that
belongs to $F_{0,p,T}$ for some $p>\frac{3}{2}$.
\item The family of densities of the vectorial measures $(\tilde
{Q}_t(dx))_{t\in[0,T]}$ [cf. (\ref{measvect})], which
we denote by $(t,x)\mapsto\tilde{\rho}^Q(t,x)$, satisfies
$\operatorname{div}
\tilde{\rho}_t^Q=0$ for $dt$-almost every $t\in[0,T]$.
\end{itemize}
\end{definicion}
We are ready to study the nonlinear process described in
\hyperlink{MP}{($\mathrm{MP}$)}.
\begin{teorema}\label{teoMP} Assume that \hyperlink{Hp}{$(\mathrm{H}_1)$} and
\hyperlink{Hp}{$(\mathrm{H}_p)$} are satisfied for some
$p\in(\frac{3}{2},3)$. Then, the following hold:
\begin{enumerate}[(a)]
\item[(a)] For every $T>0$, the nonlinear martingale problem \hyperlink{MP}{($\mathrm{MP}$)}
has, at most, one solution $P$ in the class $\mathcal{P}_{b,
{3/2}}^T$. Moreover, if such a solution $P$ exists,
then the function defined by
\[
\mathbf{w}(t,x):=\tilde{\rho}{}^P(t,x)=\rho^P (t,x)E^P\bigl(\Phi_t
h(\tau,X_0)\mathbf{1}_{\{t\geq\tau\}} \vert X_t=x\bigr)
\]
is the unique solution in $\mathbf{F}_{0,1,T}\cap\mathbf{F}_{0,p,T}$
of the
mild equation (\ref{eqmild}).
\item[(b)] In a given filtered probability space
$(\Omega,\mathcal{F},\mathcal{F}_t,\mathbb{P})$, consider a standard
three-dimensional Brownian motion $B$, and an $\mathcal{F}_0$-measurable
random variable $(\tau,X_0)$ independent of $B$
with law $P_0$ [defined as in (\ref{P0})]. Then, on each interval
$[0,T]$, the McKean nonlinear stochastic differential equation
\begin{eqnarray}\label{nonlinSDE}
\mbox{\textup{(i)} }&& X_t=X_0+\sqrt{2\nu} \int_0^t \mathbf{1}_{\{s\geq\tau\}
} \,dB_s
+\int_0^t \mathbf{K}(\tilde{\rho})
(s,X_s)\mathbf{1}_{\{s\geq\tau\}}\,ds ,\nonumber\\
\mbox{\textup{(ii)} }&& \Phi_t=I_3+\int_0^t \nabla\mathbf{K}(\tilde{\rho
})(s,X_s)\Phi_s \mathbf{1}_{\{s\geq\tau\}} \,ds , \\
\mbox{\textup{(iii)} }&& \operatorname{law}(\tau,X,\Phi) \in\mathcal{P}_{b,{3}/{2}}^T \quad
\mbox{and}\quad \tilde{\rho}(t,x)=\tilde{\rho}^{\operatorname{law}(\tau,X,\Phi)}(t,x),\nonumber
\end{eqnarray}
has, at most, one pathwise solution. Moreover, if a solution exists,
its law is a solution of \hyperlink{MP}{($\mathrm{MP}$)}. Thus, by \textup{(a)},
uniqueness in law for (\ref{nonlinSDE}) holds.
\item[(c)] If the condition
\[
T^{1-{3}/({2p})} (\|w_0\|_p+T\tn\mathbf{g}
\tn_{0,p,T} )<\Gamma_0(p)
\]
is satisfied,\vspace*{1pt} where
$\Gamma_0(p)>0$ is the constant provided by Theorem \ref{exist1},
then a unique solution $P\in\mathcal{P}_{b,{3/2}}^T$ to
\hyperlink{MP}{($\mathrm{MP}$)} exists. Moreover, under the previous condition, strong
existence holds for the nonlinear stochastic differential equation
(\ref{nonlinSDE}) in $[0,T]$, and by \textup{(a)} and \textup{(b)}, one
has $P=\operatorname{law}(\tau,X,\Phi)$. Finally, $\rho^P$ is the unique
solution in $\mathbf{F}_{0,1,T}\cap\mathbf{F}_{0,p,T}$ to the vortex equation
(\ref{eqmild}).
\end{enumerate}
\end{teorema}
The proof of Theorem \ref{teoMP} requires some preliminary facts
about a scalar problem implicitly included in the vectorial
problem \hyperlink{MP}{($\mathrm{MP}$)}.
\subsection{A nonlinear Fokker--Planck equation with external
field associated with the 3d-vortex equation} Recall that the
notation $\tilde{Q}_t$ and $Q^{\circ}_t$ were, respectively,
defined in Definition \ref{margsP} and (\ref{measvect}).
For any $Q\in\mathcal{P}(\mathcal{C}_T)$, we now denote by $\hat{Q}_t$
the sub-probability measure on $\mathbb{R}^3$ defined for scalar
functions by
\begin{equation}\label{subproba}
\hat{Q}_t(f)=\mathbb{E}^{Q}\bigl(f(X_t) \mathbf{1}_{\{\tau\leq t\}}\bigr),
\end{equation}
where $(\tau,X)$ are the first two marginals of the canonical
process $(\tau,X,\Phi)$ in $\mathcal{C}_T$. Obviously, for $Q\in
\mathcal{P}_b(\mathcal{C}_T)$ we have
\[
\tilde{Q}_t\ll\hat{Q}_t \ll Q^{\circ}_t,
\]
and we shall denote
\begin{equation}\label{density'}
k^Q_t(x):=\frac{d \tilde{Q}_t}{d \hat{Q}_t}(x).
\end{equation}
Notice that, indeed,
\[
k^Q_t(x)=\frac{E^Q(\Phi_t h(\tau,X_0)\mathbf{1}_{\{\tau\leq t\}}
\vert
X_t=x)}{Q(\tau\leq t\vert X_t=x)} \mathbf{1}_{\{Q(\tau\leq t\vert
X_t=x)>0\}}.
\]
\begin{definicion}
If $Q^{\circ}_t(dx)$ has a density $\rho^Q(t,x)$ with respect to
Lebesgue measure, we shall denote by $\hat{\rho}^Q(t,x)$ the
family of densities of $\hat{Q}_t $.
\end{definicion}
Notice that one has
\[
\tilde{\rho}^Q(t,x)= k^Q_t(x)\hat{\rho}^Q(t,x).
\]
\begin{rem}\label{medibilidad} If $Q\in\mathcal{P}_b(\mathcal
{C}_T)$ is such that $Q_t$ is absolutely continuous for all $t\in
[0,T]$, the existence of a joint measurable version of
$(t,x)\mapsto\rho^Q(t,x)$ is standard by continuity of $X_t$
under $Q^{\circ}_t$. We always work with such a version. Moreover,
there exist measurable versions of $(t,x)\mapsto
\hat{\rho}^Q(t,x)$ and $(t,x)\mapsto\tilde{\rho}^Q(t,x)$. This
can be seen by Lebesgue derivation (see, e.g., Theorem 3.22 in
\cite{Foll}), taking $\delta\to0$ in the quotients
\[
\frac{Q(\tau\leq t , X_t\in B(x,\delta))}{Q(X_t\in B(x,\delta))}
\quad\mbox{and}\quad \frac{E^Q(\Phi_t
h(\tau,X_0)\mathbf{1}_{\{\tau\leq t\}}, X_t\in B(x,\delta
))}{Q(X_t\in
B(x,\delta))}
\]
and using the previous relation between
$\hat{\rho}^Q(t,x)$ and $k^Q$ [here, $B(x,\delta)$ is the open
ball of radius $\delta$ centered at $x$].
\end{rem}
\begin{lema}\label{eqFP}
Assume that \textup{\hyperlink{MP}{($\mathrm{MP}$)}} has a solution $P\in\mathcal{P}_b(\mathcal
{C}_T)$ such that $P^{\circ}_t$ has a density for each $t\in
[0,T]$. Let $\hat{\rho}:=\hat{\rho}^P$ and $\tilde{\rho}:=
\tilde{\rho}^P$, respectively, denote the densities of $\hat{P}_t$
and $\tilde{P}_t$ and, moreover, assume that (\ref{integKrho})
holds. We have:
\begin{longlist}
\item The couple
$(\hat{\rho},\tilde{\rho})$ satisfies the weak evolution equation
\begin{eqnarray}\label{weakFP}
&&
\int_{\mathbb{R}^3}f(t,y)\hat{\rho}(t,y)\,dy\nonumber\\
&&\qquad=\int_{\mathbb{R}^3}
f(0,y)\bar
{w}_0(y)\,dy +\int_0^t \int_{\mathbb{R}^3} f(s,y)\bar{\mathbf{g}}(s,y)\,dy
\,ds\nonumber\\[-8pt]\\[-8pt]
&&\qquad\quad{} + \int_0^t \int_{\mathbb{R}^3} \biggl[ \frac{\partial f}{\partial s}(s,y)
+\nu\triangle f(s,y) \nonumber\\
&&\qquad\hspace*{59pt}{} + \mathbf{K}(\tilde{\rho})(s,y)\nabla f(s,y)
\biggr]\hat{\rho}(s,y) \,dy \,ds,\nonumber
\end{eqnarray}
for all $f\in C^{1,2}_b$, where $\bar{w}_0 $ and $\bar{\mathbf{g}}$ were
defined in (\ref{initlaw}).
\item$\hat{\rho}$ is, moreover, a solution of the mild
equation in $[0,T]$,
\begin{eqnarray}\label{eqmildFP}
\hat{\rho}(t,x) & = & G^{\nu}_t*\bar{w}_0(x)+\int_0^t G^{\nu
}_{t-s}* \bar{\mathbf{g}}(s,\cdot)(x)\,ds\nonumber\\[-8pt]\\[-8pt]
&&{} + \int_0^t\sum_{j=1}^3\int_{\mathbb{R}^3}
\frac{\partial G^{\nu}_{t-s}}
{\partial y_j}
(x-y)\mathbf{K}(k \hat{\rho})_j(s,y)\hat{\rho}(s,y) \,dy \,ds
,\nonumber
\end{eqnarray}
with the multiple integral being absolutely convergent, and where
$k:=k^P$ is the function defined in (\ref{density'}).
\end{longlist}
\end{lema}
\begin{pf} (i)
By the definition of \hyperlink{MP}{($\mathrm{MP}$)} and the fact that
$ \mathbf{1}_{\{\tau\leq t\}}$ is $\mathcal{F}_0$-measurable,
we deduce that the
expectation of the expression
\begin{eqnarray*}
&&
f(t,X_t)\mathbf{1}_{\{t\geq\tau\}}-f (\tau,X_0)\mathbf{1}_{\{t\geq
\tau\}}\\
&&\qquad{}
-\int_0^t \biggl[\frac{\partial f}{\partial s}(s,X_s)+\nu\triangle
f (s,X_s) + \mathbf{K}(\tilde{\rho})(s,X_s)\nabla
f(s,X_s) \biggr]\mathbf{1}_{\{s\geq\tau\}} \,ds
\end{eqnarray*}
vanishes (see also the beginning of the proof of Lemma
\ref{MPtoweakeq}). Recalling the definition of
$\hat{\rho}$ and $P_0$ [cf. (\ref{subproba}) and (\ref{P0})], we
obtain the desired result applying Fubini's theorem in the time
integral, which is possible since
\[
{\int_{[0,T]\times\mathbb{R}^3}}|\mathbf{K}(\tilde{\rho})(t,x)|\hat
{\rho}(t,x)\,dx
\,dt<\infty,
\]
thanks to condition (\ref{integKrho}).
(ii) Fix $\psi\in\mathcal{D}$ and $t\in[0,T]$ and take in
(\ref{weakFP}) the $C^{1,2}_b$-function $f_t\dvtx[0,t]\times
\mathbb{R}^3\to\mathbb{R}$ given by $f_t(s,y)=G^{\nu}_{t-s}*\psi
(y)$ (which
solves the backward heat equation on $[0,t]\times\mathbb{R}^3$ with final
condition $f_t(t,y)=\psi(y)$). By Lemma \ref{estimsw} and
condition~(\ref{integKrho}), it is not hard to check that
\[
\int_0^t\int_{(\mathbb{R}^3)^2} \sum_{j=1}^3 \biggl|\frac{\partial
G^{\nu}_{t-s}}{\partial y_j}(x-y) \biggr||\mathbf{K}(\tilde{\rho})_j(s,y)|
|\psi(x)|\hat{\rho}(s,y)\,dx \,dy \,ds<\infty.
\]
By Fubini's theorem we easily conclude.
\end{pf}
Consider now a fixed but arbitrary function $k\dvtx[0,T]\times
\mathbb{R}^3\to\mathbb{R}^3$ of class $L^{\infty}([0,T],(L^{\infty
})^3)$, and
formally define an operator $\mathbf{b}^k$ on functions $\eta,\zeta\in
\mathcal{M}\mathit{eas}^T$ by
\[
\mathbf{b}^k(\eta,\zeta)(t,x)
=\int_0^t\sum_{j=1}^3\int_{\mathbb{R}^3}\frac{\partial
G^{\nu}_{t-s}}{\partial y_j} (x-y)\mathbf{K}(k
\zeta)_j(s,y)\eta(s,y) \,dy \,ds.
\]
\begin{rem}\label{FF}
For each $p\in[1,\infty]$ (resp., each $p\in[1,\infty]$ and
$r\geq p$), the mapping $\eta\mapsto k\eta$ is continuous from
$F_{0,p,T}$ to $\mathbf{F}_{0,p,T}$ (resp., from $F_{0,r,(T;p)}$ to
$\mathbf{F}_{0,r,(T;p)}$).
\end{rem}
Write now
\[
\gamma_0(t,x):= G^{\nu}_t*\bar{w}_0(x)+\int_0^t G^{\nu}_{t-s}*
\bar{\mathbf{g}}(s,\cdot)(x)\,ds,
\]
where $\bar{w}_0$ and $\bar{\mathbf{g}}$ were defined in
(\ref{initlaw}). We can state the following properties of the
scalar equation (\ref{eqmildFP}).
\begin{proposicion}\label{contb}
Assume \hyperlink{Hp}{$(\mathrm{H}_1)$} and \hyperlink{Hp}{$(\mathrm{H}_p)$} with $p\in
(\frac{3}{2},3)$, and let $k\in L^{\infty}([0,T],(L^{\infty})^3)$
be a fixed but arbitrary function.
\begin{longlist}
\item For each $r\in[p,\infty)$, we have
\[
\gamma_0\in F_{0,r,(T;p)}\qquad \mbox{with } \tn\gamma_0
\tn_{0,r,(T;p)}\leq C(r,p) (\|\bar{w}_0\|_p + T \tn
\bar{\mathbf{g}}\tn_{0,p,T} )
\]
for some finite constant $C(r,p)>0$.
\item Suppose that $\frac{3}{2}< p<3, p\leq l <\min\{\frac
{6p}{6-p},3\}$ and $ \frac{3l}{6-l}\leq
l'< \frac{3l}{6-2l}$. Then, there exists a finite constant
$C_0(l,l';p)$ not depending on $T>0$ such that for all
$\eta,\zeta\in F_{0,l,(T;p)}$,
\[
\tn\mathbf{b}^k(\eta,\zeta) \tn_{0,l',(T;p)} \leq C_0(l,l';p)
T^{1-{3}/({2p})} \tn\eta
\tn_{0,l,(T;p)} \tn\zeta\tn_{0,l,(T;p)}.
\]
\item The mild Fokker--Planck
equation with external field (\ref{eqmildFP}) has, at most, one
solution $\hat{\rho}\in F_{0,p,T}$ for each $T>0$.
\item If $\hat{\rho}\in F_{0,p,T}$ is a solution of
(\ref{eqmildFP}), then $\hat{\rho}\in F_{0,r,(T;p)}$ for all $ r
\in[p, \infty)$ with $\tn\hat{\rho} \tn_{0,r,(T;p)}\leq
C(T,p,r,\tn\hat{\rho} \tn_{0,p,T}) < \infty$.
\item We deduce that for all $l\in
[\frac{3p}{3-p},\infty)$, $\mathbf{K}(k \hat{\rho})\in
\mathbf{F}_{1,l,(T;{3p}/({3-p}))}$.
\end{longlist}
\end{proposicion}
\begin{pf} Part (i) follows from Lemma \ref{estimsw} in
a similar way as part (i) of Lem\-ma~\ref{contB}. We notice
that the restriction on $r$ in the latter was needed only to
ensure that the derivative of the time integral was convergent, and so
it is not needed here. Thanks to Remark \ref{FF}, part (ii)
is similar to part (ii) of Proposition 3.1 in \cite{F1}.
From the previous parts, equation (\ref{eqmildFP}) admits the
abstract formulation in $F_{0,p,T}$
\[
\hat{\rho}= \gamma_0+ \mathbf{b}^k(\hat{\rho},\hat{\rho}).
\]
Then, the arguments yielding parts (i) of Theorems \ref{exist1} and
\ref{regularity} also provide
the assertions of parts (iii) and (iv), respectively. For
part (v), we notice that from (iv), $k\hat{\rho}\in
\mathbf{F}_{0,r,(T;p)}$ holds for all $r\in[p,\infty)$. Thus, if we take
$l\geq q:=\frac{3p}{3-p}$ and set
$r:=(\frac{1}{l}+\frac{1}{3})^{-1}$, then one has $r\geq p$, and
so Lemma \ref{contK}(i) implies that
\[
\sup_{t\in
[0,T]}t^{{3/2}({1/p}-{1/r})}\|\mathbf{K}(k\hat
{\rho
})(t,\cdot)\|_l
=\sup_{t\in
[0,T]}t^{{3/2}({1/q}-{1/l})}\|\mathbf{K}(k\hat
{\rho
})(t,\cdot)\|_l
<\infty.
\]
This shows that $\mathbf{K}(k\hat{\rho}) \in\mathbf
{F}_{0,l,(T;q)}$. We conclude
that $\mathbf{K}(k\hat{\rho}) \in\mathbf{F}_{1,l,(T;q)}$, noting
that since
$k\hat{\rho}\in\mathbf{F}_{0,l,(T;p)}$ for all $l\geq q$,
Lemma \ref{contgradK}(i) implies that $\frac{\partial
\mathbf{K}(k\hat{\rho})}{\partial x_k}\in\mathbf{F}_{0,l,(T;p)}$
for all
$k=1,2,3$. In other words,
\begin{eqnarray*}
&&\sup_{t\in
[0,T]}t^{{3/2}({1/p}-{1/l})} \biggl\|\frac
{\partial
\mathbf{K}(k\hat{\rho})(t,\cdot)}{\partial x_k} \biggr\|_l
\\
&&\qquad=\sup
_{t\in
[0,T]}t^{{1/2}+{3/2}({1/q}-{1/l})} \biggl\|
\frac{\partial
\mathbf{K}(k\hat{\rho})(t,\cdot)}{\partial x_k} \biggr\|_l <\infty,
\end{eqnarray*}
which is the required estimate.
\end{pf}
\subsection{Uniqueness in law and pathwise uniqueness}\label{sec42}
We need the following version of Gronwall's lemma:
\begin{lema}\label{Gronesp}
Let $g$ and $k$ be positive functions on $[0,T]$, such that\break
$\int_0^T k(s) \,ds < \infty$, $g$ is bounded, and
\[
g(t)\leq C +\int_0^t g(s) k(s)\,ds \qquad\mbox{for all }t\in[0,T].
\]
Then, we have
\[
g(t)\leq C\exp\int_0^T k(s)\,ds \qquad\mbox{for all }t\in[0,T].
\]
\end{lema}
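As a quick sanity check (an illustration only, not part of the argument), one can build the extremal $g$ numerically, that is, the solution of $g(t)=C+\int_0^t g(s)k(s)\,ds$ with equality, and compare it with the conclusion of the lemma:

```python
import numpy as np

def gronwall_check(C, k, T, n=20000):
    """Forward recursion for the extremal case of the inequality,
    g(t) = C + int_0^t g(s) k(s) ds, on a uniform grid, together with
    the Gronwall bound C * exp(int_0^T k(s) ds) (left-endpoint rule)."""
    ts = np.linspace(0.0, T, n)
    dt = ts[1] - ts[0]
    g, acc = C, 0.0
    for i in range(n - 1):
        acc += g * k(ts[i]) * dt
        g = C + acc
    bound = C * np.exp(sum(k(t) for t in ts[:-1]) * dt)
    return g, bound
```

With $k\equiv1$, $C=1$ and $T=1$ the recursion reproduces $(1+dt)^{n-1}\approx e$, sitting just below the bound $e$, as the lemma predicts.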
We are ready to prove parts (a) and (b) in Theorem
\ref{teoMP}.
\begin{pf*}{Proof of Theorem \ref{teoMP}}
Let $P\in\mathcal{P}^T_{b,{3/2}}$ be a solution of \hyperlink{MP}{($\mathrm{MP}$)}.
Since $\rho\in
F_{0,1,T}\cap F_{0,p,T}$, by interpolation we have $\rho\in
F_{0,{3/2},T}$. By Lemma \ref{contK}(i) we deduce
that (\ref{integKrho}) holds.
Moreover, by Lemma \ref{eqFP}(ii), Proposition
\ref{contb}(iv) and Lemma \ref{contgradK}(i), we have
that $\nabla\mathbf{K}(\tilde{\rho})\in F_{0,3,(T;p)}$, and, consequently,
condition (\ref{integgradKrho}) also holds. By Lemma
\ref{MPtoweakeq} we deduce that $\tilde{\rho}$ is a weak solution
of the vortex equation, and, since $k^P_t$ is bounded, we have
$\tilde{\rho}\in\mathbf{F}_{0,p,T}$.
We now need to prove that the latter implies that
$\tilde{\rho}\in\mathbf{F}_{0,p,T}$ is uniquely determined. By Theorem
\ref{exist1}(a) this will follow by checking that
$\tilde{\rho}$ is also a mild solution. For fixed $\psi\in(\mathcal
{D})^3$ and $t\in[0,T]$, define $\mathbf{f}_t\dvtx[0,t]\times\mathbb
{R}^3\to
\mathbb{R}^3$
by $\mathbf{f}_t(s,y)=G^{\nu}_{t-s}*\psi(y)$, which is a function of class
$(C^{1,2}_b)^3$ that solves the backward heat equation on
$[0,t]\times\mathbb{R}^3$ with final condition $\mathbf{f}_t(t,y)=\psi(y)$.
One can
thus take $\mathbf{f}_t$ in the weak vortex equation and, thanks to
conditions (\ref{integKrho}) and (\ref{integgradKrho}),
apply
Fubini's theorem to deduce [since $\psi\in(\mathcal{D})^3$ is
arbitrary] that
\begin{eqnarray*}
&&\tilde{\rho}(t,x)=\mathbf{w}_0(t,x)+
\int_0^t\sum_{j=1}^3\int_{\mathbb{R}^3} \biggl[
\frac{\partial G^{\nu}_{t-s}}
{\partial y_j}
(x-y)[\mathbf{K}(\tilde{\rho})_j(s,y)\tilde{\rho}(s,y)]\\
&&\hspace*{144.1pt}{} +G^{\nu}_{t-s}
(x-y)\biggl[\tilde{\rho}_j(s,y)\,\frac{\partial\mathbf{K}(\tilde{\rho
})}{\partial
y_j}(s,y)\biggr] \biggr]
\,dy \,ds.
\end{eqnarray*}
Since $\tilde{\rho}$ is divergence-free, to see that
$\tilde{\rho}$ solves the mild equation it is enough to justify
an integration by parts of the last term in the previous equation.
We cannot do that at this point since we cannot ensure enough
(Sobolev) regularity of $\tilde{\rho}$. But noting that for
$q=\frac{3p}{3-p}$ one has $1<q^*<\frac{3}{2}$, we see that the
function $\tilde{\rho}=k^P \hat{\rho}$ belongs to $\mathbf
{F}_{0,q^*,T}$ by
interpolation. On the other hand, one has
$G^{\nu}_{t-s} (x-\cdot)\mathbf{K}(\tilde{\rho})(s,\cdot)\in(W^{1,q})^3$
thanks to Proposition \ref{contb}(v). Since by hypothesis,
$\operatorname{div} \tilde{\rho}(s)=0$ in the distribution sense, the fact that
$\tilde{\rho}(s)\in( L^q)^3$ and a density argument allow us to
check
that
\[
\sum_{j=1}^3\int_{\mathbb{R}^3} \tilde{\rho}_j(s,y)\, \frac
{\partial}
{\partial y_j}[G^{\nu}_{t-s}(x-y)\mathbf{K}(\tilde{\rho})(s,y)]\,dy=0
\]
for all
$s\in(0,T]$. Thus, $\mathbf{w}:=\tilde{\rho}$ is the unique
solution of
(\ref{eqmild}) in $\mathbf{F}_{0,p,T}$.
Now, by a standard argument using the semi-martingale
decomposition of the coordinate processes $X^i$ and their products
$X^iX^j$, we obtain that the martingale part of $f(t,X_t)$ in
\hyperlink{MP}{($\mathrm{MP}$)} is given by the stochastic integral $ \sqrt{2\nu}
\int_0^t\nabla f(s,X_s)
\mathbf{1}_{\{s\geq\tau\}} \,dB_s, $ with
respect to
a Brownian motion $B$ defined on some extension of the canonical
space. From this and the previously established uniqueness of
$\tilde{\rho}$, $P$ is the law of a weak solution of the
stochastic differential equation
\begin{eqnarray}\label{linSDE}
\mbox{\textup{(i)} }&& X_t=X_0+\sqrt{2\nu} \int_0^t \mathbf{1}_{\{s\geq\tau\}
} \,dB_s
+\int_0^t \mathbf{K}(\mathbf{w})
(s,X_s)\mathbf{1}_{\{s\geq\tau\}}\,ds ,\nonumber\\[-8pt]\\[-8pt]
\mbox{\textup{(ii)} }&& \Phi_t=I_3+\int_0^t \nabla\mathbf{K}(\mathbf
{w})(s,X_s)\Phi_s
\mathbf{1}_{\{s\geq\tau\}} \,ds.\nonumber
\end{eqnarray}
Since (\ref{linSDE}) is \textit{linear} in the sense of
McKean, to conclude uniqueness in law it is enough to prove
pathwise uniqueness for it. This is done first for $X$ and then
for $\Phi$, both with help of the estimate on $ \| \nabla
\mathbf{K}(\mathbf{w})(t) \|_{\infty}$ in Theorem \ref
{regularity} and
Gronwall's lemma.
\end{pf*}
\subsection{Pathwise convergence of the mollified processes and strong
existence for small time}
To prove part (c) of Theorem \ref{teoMP}, we shall construct
a strong solution to the nonlinear SDE of part (b) therein.
We shall do so via approximation by solutions to nonlinear SDEs
with regular drift terms $\mathbf{K}^{\varepsilon}(\mathbf
{w}^{\varepsilon})$ and
$\nabla\mathbf{K}^{\varepsilon}(\mathbf{w}^{\varepsilon})$, where
for each
$\varepsilon>0$, $\mathbf{w}^{\varepsilon}\in F_{1,p,T}\cap
F_{0,1,T}$ is
given by Theorem \ref{exist1}. Thus, our
arguments improve the ones developed in \cite{F1} by providing
a pathwise approximation result at an explicit rate. This will be the
key to
carry out the additional improvements on that work in the forthcoming sections.
If $q=\frac{3p}{3-p}$, H\"{o}lder's inequality and the properties
of $\mathbf{K}$ imply that for all $t\in[0,T]$,
\begin{eqnarray*}
\|\mathbf{K}^{\varepsilon}(\mathbf{w}^{\varepsilon})(t,\cdot)\|
_{\infty}&\leq& C
\|\varphi_{\varepsilon}\|_{q^*}\tn\mathbf{K}(\mathbf
{w}^{\varepsilon})\tn
_{0,q,T}\\
&\leq&
C \|\varphi_{\varepsilon}\|_{q^*}\tn\mathbf{w}^{\varepsilon}\tn_{0,p,T}.
\end{eqnarray*}
Similarly, one has $\|\nabla
\mathbf{K}^{\varepsilon}(\mathbf{w}^{\varepsilon})(t)\|_{\infty
}\leq C \|\nabla
\varphi_{\varepsilon}\|_{q^*}\tn\mathbf{w}^{\varepsilon}\tn
_{0,p,T}$ and
analogous estimates hold for all derivatives. Thus, for each
$\varepsilon>0$, the function $(s,y)\mapsto
\mathbf{K}^{\varepsilon}(\mathbf{w}^{\varepsilon})(s,y)$ is bounded
and continuous
in $y\in\mathbb{R}^3$, and has infinitely many derivatives in $y\in
\mathbb{R}^3$, which are uniformly bounded in $[0,T]\times\mathbb{R}^3$.
We fix now the time interval $[0,T]$ given by Theorem \ref{teoMP}.
It will be useful to consider in what follows the stochastic flow
\begin{eqnarray}\label{stochflown}
\xi_{s,t}^{\varepsilon}(x)&=&x+\sqrt{2\nu}
(B_t-B_s)\nonumber\\[-8pt]\\[-8pt]
&&{}+\int_s^t
\mathbf{K}^{\varepsilon}(\mathbf{w}^{\varepsilon})(\theta,\xi
^{\varepsilon
}_{s,\theta}(x))\,
d\theta\qquad\mbox{for all } t\in[s,T],\nonumber
\end{eqnarray}
which has a version that is continuously differentiable in $x$ for
all $(s,t)$ thanks to the previously mentioned regularity
properties of $\mathbf{K}^{\varepsilon}(\mathbf{w}^{\varepsilon})$
(cf. Kunita
\cite{Kun}).
We also consider the strong solution of the stochastic
differential equation in $[0,T]$,
\begin{eqnarray}\label{nonlinSDEreg}
X^{\varepsilon}_t&=&X_0+\sqrt{2\nu} \int_0^t
\mathbf{1}_{\{s\geq\tau\}} \,dB_s+\int_0^t
\mathbf{K}^{\varepsilon}(\mathbf{w}^{\varepsilon})
(s,X^{\varepsilon}_s)\mathbf{1}_{\{s\geq\tau\}}
\,ds,\nonumber\\[-8pt]\\[-8pt]
\Phi^{\varepsilon}_t&=&I_3+\int_0^t \nabla
\mathbf{K}^{\varepsilon}(\mathbf{w}^{\varepsilon})(s,X^{\varepsilon
}_s)\Phi
^{\varepsilon}_s \mathbf{1}_{\{s\geq\tau\}}\,ds,\nonumber
\end{eqnarray}
where $(\tau,X_0)$ is independent of $B$. We denote by
$P^{\varepsilon}$ the joint law of
$(\tau,X^{\varepsilon},\Phi^{\varepsilon})$ and observe that
$P^{\varepsilon}\in\mathcal{P}_b^T$. Since $X^{\varepsilon}_t=X_0$
for all $t\leq\tau$, we have that
\[
X_t^{\varepsilon}=\xi^{\varepsilon}_{\tau,t}(X_0)\mathbf{1}_{\{
t\geq\tau\}
}+X_0\mathbf{1}_{\{t<\tau\}}.\vadjust{\goodbreak}
\]
Denoting by $G^{\varepsilon}(s,x;t,y),(s,x,t,y)\in(\mathbb
{R}_+\times
\mathbb{R}^3)^2, s<t$, the density of $\xi^{\varepsilon}_{s,t}(x)$ (which
is a continuous function of $(s,x,t,y)$, see \cite{Fried}), and
conditioning with respect to $(\tau,X_0)$, we obtain for bounded
and measurable functions $f$ that
\begin{eqnarray*}
E(f(X^{\varepsilon}_t)) &=&
\int_0^t\int_{(\mathbb{R}^3)^2}
f(y) G^{\varepsilon}(s,x;t,y)\,dy P_0(ds,dx)\\
&&{} +\int_t^T\int_{\mathbb{R}
^3}f(x) P_0(ds,dx)\\
&=&\int_{\mathbb{R}^3}f(x)\bar{w}_0(x)\,dx\\
&&{} +\int_0^t\int_{\mathbb{R}^3}
\biggl[\int_{\mathbb{R}^3}f(y)G^{\varepsilon}(s,x;t,y)\,dy \biggr]
\bar
{\mathbf{g}}(s,x) \,dx \,ds \\
&&{}+ \int_t^T\int_{\mathbb{R}^3}f(x)\bar{\mathbf{g}}(s,x)\,dx \,ds.
\end{eqnarray*}
Consequently, $X^{\varepsilon}_t$ has a (bi-measurable) family of
densities that we denote by $\rho^{\varepsilon}$. Observe that one
has $\rho^{\varepsilon}(t)\in L^p$ for all $t\in[0,T]$ from the
assumption on $w_0$ and $\mathbf{g}$ and standard Gaussian bounds for
$G^{\varepsilon}(s,x;t,y)$.
The functions $\hat{\rho}^{\varepsilon}$ and
$\tilde{\rho}^{\varepsilon}$ correspond to the densities of,
respectively, the sub-probability measure and the vectorial
measure
\[
f\mapsto E \bigl[f(\xi^{\varepsilon}_{\tau,t}(X_{\tau}))\mathbf
{1}_{\{t\geq
\tau\}} \bigr]
\]
and
\[
\mathbf{f}\mapsto
E \bigl[\mathbf{f}(\xi^{\varepsilon}_{\tau,t}(X_{\tau})) \nabla_x
\xi^{\varepsilon}_{\tau,t}(X_{\tau}) h(\tau,X_0)\mathbf{1}_{\{
t\geq
\tau\}} \bigr].
\]
They are bi-measurable by similar arguments as in Remark
\ref{medibilidad}, and we have $\hat{\rho}^{\varepsilon}(t)\in
L^p$ and $\tilde{\rho}^{\varepsilon}(t)\in L^p_3$.
The assumptions on $\varphi$ ensure the following estimate
concerning the approximations $\varphi_{\varepsilon}$ of the Dirac
mass (see Lemma 4.4 in Raviart \cite{Rav}):
\begin{lema}\label{aproxco1}
Let $\varphi$ be a cutoff function of order $1$. Then, for all
$v\in W^{1,r}$ and $r\in[1,\infty]$, one has
\[
\|v-\varphi_{\varepsilon}*v\|_r\leq C \varepsilon
\sum_{i=1}^3 \biggl\|\frac{\partial v}{\partial x_i } \biggr\|_r.
\]
\end{lema}
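The first-order rate in this lemma can be illustrated numerically in one dimension. The sketch below is purely illustrative and not part of the argument: it takes the non-symmetric order-$1$ kernel $\varphi=\mathbf{1}_{[0,1]}$ and $v(x)=\sin x$, for which $(\varphi_{\varepsilon}*v)(x)=\varepsilon^{-1}\int_0^{\varepsilon}\sin(x-y)\,dy=(\cos(x-\varepsilon)-\cos x)/\varepsilon$ in closed form, and checks that the sup-norm error scales linearly in $\varepsilon$ (here the error is roughly $\varepsilon/2$, so halving $\varepsilon$ halves it).

```python
import numpy as np

def smoothed_sin(x, eps):
    # (phi_eps * v)(x) for phi = 1_[0,1], v = sin:
    # (1/eps) * int_0^eps sin(x - y) dy = (cos(x - eps) - cos(x)) / eps
    return (np.cos(x - eps) - np.cos(x)) / eps

def sup_error(eps, n=2001):
    # sup-norm of v - phi_eps * v over one period
    x = np.linspace(0.0, 2.0 * np.pi, n)
    return np.max(np.abs(np.sin(x) - smoothed_sin(x, eps)))

# Lemma: ||v - phi_eps * v||_inf <= C eps ||v'||_inf,
# so halving eps should roughly halve the error.
e1, e2 = sup_error(0.1), sup_error(0.05)
print(e1, e2, e1 / e2)
```

For this non-symmetric kernel the observed ratio is close to $2$, consistent with a genuinely first-order (and not higher-order) rate.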
We deduce the following result:
\begin{lema}\label{aproxims}
\textup{(i)} We have $
\tilde{\rho}^{\varepsilon}=\mathbf{w}^{\varepsilon}$ and, consequently,
\begin{equation}\label{estsrhon}
\sup_{\varepsilon>0} \tn\tilde{\rho}^{\varepsilon}
\tn_{0,p,T}<\infty\quad\mbox{and}\quad \sup_{\varepsilon>0} \tn
\hat{\rho}^{\varepsilon} \tn_{0,p,T} <\infty.
\end{equation}
\textup{(ii)} If $\varphi$ is a cutoff function of order $1$, then we
have that
\[
\sup_{t\in[0,T]} t^{{3}/({2p})-{1}/{2}}
\|\mathbf{w}^{\varepsilon}(t)- \mathbf{w}(t)\|_p\leq C(T)\varepsilon
\]
for some finite constant $C(T)$.
\end{lema}
\begin{pf} (i) Since $E ({\int_0^T }|
\mathbf{K}^{\varepsilon}(\mathbf{w}^{\varepsilon
})(t,X_t^{\varepsilon})|\,dt )
$ and
$E ({\int_0^T} |\nabla
\mathbf{K}^{\varepsilon}(\mathbf{w}^{\varepsilon
})(t,X_t^{\varepsilon})|\,dt )$
are finite, we can follow the lines of Lemma \ref{MPtoweakeq} and use Remark
\ref{P0rem} to see that for all $\mathbf{f}\in(C^{1,2}_b)^3$,
\begin{eqnarray}\label{weakvareps}
&&
\int_{\mathbb{R}^3}\mathbf{f}(t,y)\tilde{\rho}^{\varepsilon}(t,y)\,dy
\nonumber\\
&&\qquad=
\int_{\mathbb{R}^3} \mathbf{f}(0,y)w_0(y)\,dy + \int_0^t
\int_{\mathbb{R}^3}\mathbf{f}(s,y)\mathbf{g}(s,y)\,dy \,ds \nonumber
\\
&&\qquad\quad{} +\int_0^t
\int_{\mathbb{R}^3} \biggl[\frac{\partial\mathbf{f}}{\partial s}(s,y)
+\nu\triangle\mathbf{f}(s,y)\\
&&\qquad\hspace*{58.6pt}{} + \nabla\mathbf{f}(s,y)
\mathbf{K}^{\varepsilon}(\mathbf{w}^{\varepsilon})(s,y)\nonumber\\
&&\qquad\hspace*{58.6pt}{}+\mathbf
{f}(s,y)\nabla
\mathbf{K}^{\varepsilon}(\mathbf{w}^{\varepsilon})(s,y)
\biggr]\tilde{\rho
}^{\varepsilon}(s,y)
\,dy \,ds.\nonumber
\end{eqnarray}
On the other hand, the regularity properties of the stochastic
flow (\ref{stochflown}) imply that for all $\phi\in\mathcal{D}$
and $\theta\in\ ]0,T]$, the Cauchy problem
\begin{eqnarray}\label{cauchyprobv}\hspace*{28pt}
&&
\frac{\partial}{\partial s}f(s,y) +\nu\Delta
f (s,y)\nonumber\\
&&\qquad{} +\mathbf{K}^{\varepsilon}(\mathbf{w}^{\varepsilon
})(s,y)\nabla f(s,y)=0,\qquad
(s,y)\in{[0,\theta[}\times\mathbb{R}^3,\\
&&f(\theta,y)=\phi(y)\nonumber
\end{eqnarray}
has a unique solution $f$ that belongs to $C^{1,3}_b([0,\theta]
\times\mathbb{R}^3)$ (see Lemma 4.3 in \cite{F1}). One can thus use the
function $\mathbf{f}=\nabla f$ in (\ref{weakvareps}), and after
simple computations obtain, thanks to the null divergence of $w_0$
and $\mathbf{g}(s,\cdot)$, that
\begin{eqnarray*}
&&\int_{\mathbb{R}^3}\nabla\phi(y)\tilde{\rho}^{\varepsilon}(t,y)\,dy\\
&&\qquad=\int_0^t
\int_{\mathbb{R}^3}\nabla\biggl[ \frac{\partial f}{\partial s}(s,y)
+\nu
\triangle f(s,y) +\mathbf{K}^{\varepsilon}(\mathbf{w}^{\varepsilon
})(s,y)\nabla f
(s,y) \biggr]\\
&&\qquad\quad\hspace*{29pt}{}\times\tilde{\rho}^{\varepsilon}(s,y) \,dy \,ds =0
\end{eqnarray*}
for all $\phi\in\mathcal{D}$. Thus, $\operatorname{div}
\tilde{\rho}^{\varepsilon}(t) =0$, and we can adapt the arguments
of Section \ref{sec42} to conclude that $\tilde{\rho}^{\varepsilon}$
solves the linear mild equation
\begin{equation}\label{linabs}
\mathbf{v}=\mathbf{w}_0+ \mathbf{B}^{\varepsilon}(\mathbf
{v},\mathbf{w}^{\varepsilon}) ,\qquad \mathbf{v}
\in
\mathbf{F}_{0,p,T}.
\end{equation}
Since uniqueness for (\ref{linabs}) holds (by similar arguments as
for the nonlinear version), and $\mathbf{w}^{\varepsilon}$ also
solves the
equation, we conclude that
$\tilde{\rho}^{\varepsilon}=\mathbf{w}^{\varepsilon}$. The asserted
uniform bound for $\tilde{\rho}^{\varepsilon}$ is thus granted by
Theorem \ref{exist1}. To obtain the uniform bound for
$\hat{\rho}^{\varepsilon}$, we take $L^p$ norm to (\ref{linabs}),
and follow the arguments of the proof of Theorem
\ref{exist1}(i), to get that
\[
\|\tilde{\rho}^{\varepsilon}(t)\|_p \leq\tn\mathbf{w}_0\tn_{0,p,T}+C
\tn\mathbf{w}^{\varepsilon}\tn_{0,p,T}\int_0^t (t-s)^{-{3}/({2p})}
\|\tilde{\rho}^{{\varepsilon}}(s)\|_p \,ds.
\]
The conclusion follows by a similar application of Gronwall's
lemma as therein.
(ii) By an iterative argument as in the proof of Theorem
\ref{exist1}(i), we get that
\begin{eqnarray}\label{convrhowt}
\|\tilde{\rho}^{{\varepsilon}}(t)-\mathbf{w}(t)\|_p &\leq& C \int_0^t
\alpha(t-s)\|\mathbf{K}^{\varepsilon}(\mathbf{w})(s)-\mathbf
{K}(\mathbf{w})(s)\|_q \,ds\nonumber\\[-8pt]\\[-8pt]
&&{}+C(T) \int_0^t
\|\tilde{\rho}^{\varepsilon}(s)-\mathbf{w}(s)\|_q \,ds,\nonumber
\end{eqnarray}
where $\alpha(s)=\sum_{k=1}^{\tilde{N}(p)}s^{k\theta_0-1}$,
$\theta_0=1-\frac{3}{2p}$ and
$\tilde{N}(p)=\lfloor\theta_0^{-1} \rfloor+1$. Integrating in
time and using Gronwall's lemma, Theorem \ref{regularity}(i)
and Lemma \ref{aproxco1}, we obtain that for all $\theta\in
[0,T]$,
\begin{eqnarray*}
\int_0^{\theta}\|\tilde{\rho}^{\varepsilon}(t)-\mathbf{w}(t)\|_p
\,dt &\leq&
C \int_0^T\int_0^t
\alpha(t-s) \|\mathbf{K}^{\varepsilon}(\mathbf{w})(s)-\mathbf
{K}(\mathbf{w})(s)\|_q \,ds \,dt \\
&\leq& C\varepsilon
\int_0^T \sum_{k=1}^{\tilde{N}(p)}
t^{k(1 -{3}/({2p}))-{1/2}}\,dt = \varepsilon C(T).
\end{eqnarray*}
Substituting the latter in (\ref{convrhowt}), we obtain
\begin{eqnarray*}
\|\tilde{\rho}^{{\varepsilon}}(t)-\mathbf{w}(t)\|_p &\leq&
\varepsilon C(T) + C\int_0^t \alpha(t-s) \|\mathbf{K}^{\varepsilon
}(\mathbf{w}
)(s)-\mathbf{K}(\mathbf{w})(s)\|_q \,ds\\
&\leq& \varepsilon C(T)+ Ct^{{1/2}-{3/(2p)}} \varepsilon,
\end{eqnarray*}
and the conclusion follows.
\end{pf}
The proof of Theorem \ref{teoMP}(c) will be completed by
the following result, which, moreover, establishes the strong
pathwise convergence of the nonlinear processes
$(X^{\varepsilon},\Phi^{\varepsilon})$ as $\varepsilon\to0$. We
are inspired here by ideas introduced in \cite{FM}, but we need a
finer use of analytic properties, since we shall improve the rate
$\varepsilon^{\delta}$, $\delta\in(0,1)$, that was obtained
therein for a particular choice of kernel. Further difficulties
will also arise because of the additional (and more singular)
drift term of the ``vortex stretching processes'' $\Phi$, proper
to dimension $3$.
\begin{proposicion} Let $\varphi$ be a cutoff of order $1$ and
$K^{\varepsilon}$ be defined in terms of $\varphi$ as before. Then,
as $\varepsilon$ goes to $0$, the family of processes $(X^{\varepsilon
}-X_0,\Phi^{\varepsilon})$, \mbox{$\varepsilon
>0$}, is Cauchy in the Banach space of continuous processes
$(Y,\Psi)$ with values in $\mathbb{R}^3\times\mathbb{R}^{3\otimes3}$
with finite norm $E({\sup_{t\in[0,T]}} |Y_t| +|\Psi_t|)$.
Moreover, one has
\[
E \Bigl({\sup_{t\in[0,T]}} |X_t-X_t^{\varepsilon}|+
|\Phi_t-\Phi_t^{\varepsilon}| \Bigr)\leq C(T)\varepsilon,
\]
where $(X,\Phi)$ is a solution of the nonlinear s.d.e.
(\ref{nonlinSDE}).
\end{proposicion}
\begin{pf} We observe that the subtraction of $X_0$ is only needed to
avoid a moment-type assumption on $X_0$. Let $\varepsilon>\varepsilon
'>0$. We have
\begin{eqnarray}\label{estimepseps'}
&&
E \Bigl({\sup_{s\leq t}}|X^{\varepsilon}_s-X^{\varepsilon'}_s| \Bigr)
\nonumber\\
&&\qquad\leq
\int_0^t
E \bigl|\bigl(\mathbf{K}^{\varepsilon}(\mathbf{w}^{\varepsilon
})(s,X^{\varepsilon
}_s)-\mathbf{K}^{\varepsilon'}(\mathbf{w}^{\varepsilon
})(s,X^{\varepsilon}_s)\bigr)
\mathbf{1}_{\{s\geq\tau\}} \bigr|\,ds
\nonumber\\[-8pt]\\[-8pt]
&&\qquad\quad{} + \int_0^t
E \bigl|\bigl(\mathbf{K}^{\varepsilon'}(\mathbf{w}^{\varepsilon
})(s,X^{\varepsilon
}_s)-\mathbf{K}^{\varepsilon'}(\mathbf{w}^{\varepsilon
'})(s,X^{\varepsilon}_s)\bigr)
\mathbf{1}_{\{s\geq\tau\}} \bigr|\,ds
\nonumber\\
&&\qquad\quad{} + \int_0^t
E \bigl|\bigl(\mathbf{K}^{\varepsilon'}(\mathbf{w}^{\varepsilon
'})(s,X^{\varepsilon
}_s)-\mathbf{K}^{\varepsilon'}(\mathbf{w}^{\varepsilon
'})(s,X^{\varepsilon'}_s)\bigr)
\mathbf{1}_{\{s\geq\tau\}} \bigr|\,ds.\nonumber
\end{eqnarray}
The third term on the right-hand side of (\ref{estimepseps'}) is bounded
thanks to Theorem \ref{regularity}(iii) by
\[
C\int_0^t s^{-{1/2}-{3/2}({1/p}-{1/r})}
E \Bigl({\sup_{\theta\leq
s}}|X^{\varepsilon}_{\theta}-X^{\varepsilon'}_{\theta}| \Bigr)\,ds
\]
for any fixed $r\in(3,\frac{3p}{3-p})$. Writing
$q=\frac{3p}{3-p}$ and $q^*$ for its H\"{o}lder conjugate, and using
Lemmas \ref{contK} and \ref{aproxims}(ii), we bound the
second term by
\[
\int_0^T
\|\mathbf{K}^{\varepsilon'}(\mathbf{w}^{\varepsilon})(s)-\mathbf
{K}^{\varepsilon'}(\mathbf{w}
^{\varepsilon'})(s)\|_q
\|\hat{\rho}^{\varepsilon}(s)\|_{q^*}\,ds\leq C(T)\varepsilon.
\]
We have used the fact that ${\sup_{\varepsilon>0}} \tn
\hat{\rho}^{\varepsilon} \tn_{0,q^*,T} <\infty$ by interpolation
since $q^*<\frac{3}{2}<p$. By similar arguments, the first term on
the right-hand side of (\ref{estimepseps'}) can be bounded above by
\[
\int_0^T
\|\mathbf{K}^{\varepsilon'}(\mathbf{w}^{\varepsilon})(s)-\mathbf
{K}^{\varepsilon}(\mathbf{w}
^{\varepsilon})(s)\|_q
\|\hat{\rho}^{\varepsilon}(s)\|_{q^*}\,ds\leq C(T)\varepsilon.
\]
Bringing it all together and using Gronwall's lemma, we deduce that
\begin{equation}\label{chauchy1}
E \Bigl({\sup_{t\leq T}}|X^{\varepsilon}_t-X^{\varepsilon'}_t| \Bigr)
\leq C(T) \varepsilon.
\end{equation}
Now, notice that Gronwall's lemma and Theorem \ref{regularity}(iii)
imply that the processes $\Phi_t^{\varepsilon}$ are
bounded in $L^{\infty}( [0,T]\times\Omega,dt\otimes
\mathbb{P})$ uniformly in $\varepsilon$. Therefore, we have
\begin{eqnarray}\label{estimepseps'grad}
&&
E \Bigl({\sup_{s\leq t}}|\Phi^{\varepsilon}_s-\Phi^{\varepsilon
'}_s| \Bigr) \nonumber\\
&&\qquad\leq
C \int_0^t
E \bigl|\bigl(\nabla\mathbf{K}^{\varepsilon}(\mathbf{w}^{\varepsilon
})(s,X^{\varepsilon
}_s)-\nabla\mathbf{K}^{\varepsilon'}(\mathbf{w}^{\varepsilon})
(s,X^{\varepsilon}_s)\bigr) \mathbf{1}_{\{s\geq\tau\}} \bigr|\,ds
\nonumber\\
&&\qquad\quad{} + C \int_0^t
E \bigl|\bigl(\nabla\mathbf{K}^{\varepsilon'}(\mathbf{w}^{\varepsilon
})(s,X^{\varepsilon}_s)-\nabla\mathbf{K}^{\varepsilon'}(\mathbf
{w}^{\varepsilon'})
(s,X^{\varepsilon}_s) \bigr)\mathbf{1}_{\{s\geq\tau\}} \bigr|\,ds
\\
&&\qquad\quad{} + C \int_0^t
E \bigl|\bigl(\nabla\mathbf{K}^{\varepsilon'}(\mathbf{w}^{\varepsilon
'})(s,X^{\varepsilon}_s)-\nabla\mathbf{K}^{\varepsilon'}(\mathbf
{w}^{\varepsilon'})
(s,X^{\varepsilon'}_s) \bigr)\mathbf{1}_{\{s\geq\tau\}} \bigr|\,ds
\nonumber\\
&&\qquad\quad{} + C \int_0^t
E \Bigl( {|\nabla\mathbf{K}^{\varepsilon'}(\mathbf
{w}^{\varepsilon'})
(s,X^{\varepsilon'}_s) | \sup_{\theta\leq s}}|\Phi
^{\varepsilon}_{\theta}-\Phi^{\varepsilon'}_{\theta}| \Bigr)\,ds.\nonumber
\end{eqnarray}
By Theorem \ref{regularity}(iii), for fixed $r\in(3,q)$ the
last term in the right-hand side of (\ref{estimepseps'grad}) is bounded by
\[
C\int_0^t s^{-{1/2}-{3/2}({1/p}-{1/r})}
E \Bigl( {\sup_{\theta\leq s}}|\Phi^{\varepsilon}_{\theta}-\Phi
^{\varepsilon'}_{\theta}|
\Bigr)\,ds,
\]
and the third one is bounded by
\[
C\int_0^t s^{-{1/2}-{3/2}({1/p}-{1/r})}
E | X^{\varepsilon}_s-
X^{\varepsilon'}_s | \,ds \leq C(T)\varepsilon,
\]
using also the previous estimates on $E| X^{\varepsilon}_s-
X^{\varepsilon'}_s |$. The first term in
(\ref{estimepseps'grad}) is upper bounded by
\begin{equation}\label{boundp2}
C\int_0^T \|\hat{\rho}^{\varepsilon}(s)\|_{p^*}
\|\nabla\mathbf{K}^{\varepsilon}(\mathbf{w}^{\varepsilon
})(s)-\nabla\mathbf{K}
^{\varepsilon'}(\mathbf{w}^{\varepsilon})(s)\|_p
\,ds.
\end{equation}
If $p\geq2$, then we have $p^*\leq2$ and so by (\ref{estsrhon})
and interpolation, we deduce that (\ref{boundp2}) is bounded by
\begin{eqnarray*}
&&C\tn\hat{\rho}^{\varepsilon}\tn_{0,p^*,T}\int_0^T \|\nabla
\mathbf{K}
(\varphi_{\varepsilon}*\mathbf{w}^{\varepsilon})(s)-\nabla\mathbf
{K}(\mathbf{w}
^{\varepsilon})\|_p\\
&&\qquad{}+
\|\nabla\mathbf{K}(\mathbf{w}^{\varepsilon})-\nabla\mathbf
{K}(\varphi_{\varepsilon'}*\mathbf{w}
^{\varepsilon})(s)\|_p
\,ds \leq C T \varepsilon.
\end{eqnarray*}
This last inequality is obtained by
Lemmas
\ref{contgradK}(i), \ref{aproxco1}, \ref{aproxims}(i)
and the uniform boundedness
of $(\mathbf{w}^{\varepsilon})_{\varepsilon\geq0}$ in $\mathbf{F}_{1,p,T}$.
If now $\frac{3}{2}<p<2$, then we have $3>p^*>2>p$ and by similar
steps as in the previous case $p\geq2$,
we can upper bound (\ref{boundp2}) by
\begin{eqnarray*}
&&
C \tn\hat{\rho}^{\varepsilon}\tn_{0,p^*,(T;p)} \int_0^T
s^{-{3/2}({1/p}-{1/p^*})}
\|\nabla\mathbf{K}^{\varepsilon}(\mathbf{w}^{\varepsilon
})(s)-\nabla\mathbf{K}
^{\varepsilon'}(\mathbf{w}^{\varepsilon})(s)\|_p
\,ds \\
&&\qquad \leq\varepsilon\sup_{\delta\geq0} \tn
\hat{\rho}^{\delta}\tn_{0,p^*,(T;p)} \int_0^T
s^{-{3/2}({1/p}-{1/p^*})}s^{-{1/2}} \,ds \\
&&\qquad\leq\varepsilon C(T) \sup_{\delta\geq0} \tn
\hat{\rho}^{\delta}\tn_{0,p^*,(T;p)}.
\end{eqnarray*}
We have used here Lemma \ref{aproxco1}, the fact that
$(\mathbf{w}^{\varepsilon})_{\varepsilon\geq0}$ is uniformly
bounded in
$\mathbf{F}_{1,p,T}$ and that
$-\frac{3}{2}(\frac{1}{p}-\frac{1}{p^*})-\frac{1}{2} >-1$ since
$p> \frac{3}{2}$. The fact that the supremum in the previous
estimate is finite, is seen in the same way as part (vi) of
Proposition~\ref{contb}, namely by an iterative argument using the
mild equation (similar as therein) satisfied by
$\hat{\rho}^{\varepsilon}$, starting from the uniform bound in
Lemma \ref{aproxims}(i).
Thus, we have shown that the first term in the right-hand side of
(\ref{estimepseps'grad}) is bounded by a constant times
$\varepsilon$. Let us now tackle the second term in the right-hand side
of~(\ref{estimepseps'grad}). This is bounded by
\begin{eqnarray}\label{boundp2'}
&&
C\int_0^T \|\hat{\rho}^{\varepsilon}(s)\|_{p^*}
\|\nabla\mathbf{K}^{\varepsilon'}(\mathbf{w}^{\varepsilon
})(s)-\nabla\mathbf{K}
^{\varepsilon'}(\mathbf{w}^{\varepsilon'})(s)\|_p
\,ds \nonumber\\[-8pt]\\[-8pt]
&&\qquad\leq C\int_0^T \|\hat{\rho}^{\varepsilon}(s)\|_{p^*}
\|\mathbf{w}^{\varepsilon}(s)-\mathbf{w}^{\varepsilon'}(s)\|_p
\,ds\nonumber
\end{eqnarray}
thanks to Lemma \ref{contgradK}. By Lemma \ref{aproxims}(ii)
we
can upper bound (\ref{boundp2'}), respectively, by
\[
C \varepsilon\int_0^T s^{{1/2}-{3/(2p)}}\,ds = \varepsilon
C(T)
\]
in the case $p\geq2$, or by
\[
C \varepsilon\int_0^T s^{-{3/2}({1/p}-{1/p^*})}
s^{{1/2}-{3/(2p)}}\,ds=C'(T)\varepsilon
\]
in the case
$p<2$, where the constants are finite since $p>\frac{3}{2}$.
Consequently, we have an estimate of the form
\[
E \Bigl({\sup_{s\leq t}}|\Phi^{\varepsilon}_s-\Phi^{\varepsilon
'}_s| \Bigr)
\leq C \varepsilon+ C\int_0^t s^{-{1/2}-{3/2}(
{1/p}-{1/r})}
E \Bigl( {\sup_{\theta\leq s}}|\Phi^{\varepsilon}_{\theta}-\Phi
^{\varepsilon'}_{\theta}|
\Bigr)\,ds
\]
for each fixed $r \in(3,q)$, and Gronwall's lemma yields
\begin{equation}\label{chauchy2}
E \Bigl({\sup_{s\leq t}}|\Phi^{\varepsilon}_s-\Phi^{\varepsilon
'}_s| \Bigr) \leq
C(T) \varepsilon
\end{equation}
for all $\varepsilon\geq\varepsilon'>0$.
Estimates (\ref{chauchy1}) and (\ref{chauchy2}) thus show that
$(X^{\varepsilon}-X_0,\Phi^{\varepsilon})$ is a
Cauchy sequence in the Banach space of continuous processes
$(Y,\Psi)$ with values in $\mathbb{R}^3\times\mathbb{R}^{3\otimes3}$
and finite norm $E({\sup_{t\in[0,T]}} |Y_t| +|\Psi_t|)$.
Write the limit in the form $(X-X_0,\Phi)$, for a continuous
process $(X,\Phi)$ and define $\mathcal{E}_t^1$ and $\mathcal
{E}_t^2$ by
the relations
\begin{eqnarray}\label{linsdeE}
X_t&=&X_0+\sqrt{2\nu} \int_0^t \mathbf{1}_{\{s\geq
\tau\}}\,
dB_s+\int_0^t \mathbf{K}(\mathbf{w}) (s,X_s)\mathbf{1}_{\{s\geq
\tau\}} \,ds+\mathcal
{E}_t^1,\nonumber\\[-8pt]\\[-8pt]
\Phi_t&=&I_3+\int_0^t \nabla
\mathbf{K}(\mathbf{w})(s,X_s)\Phi_s\mathbf{1}_{\{s\geq\tau\}}
\,ds+\mathcal{E}_t^2.\nonumber
\end{eqnarray}
Comparing $(X,\Phi)$ and $(X^{\varepsilon},\Phi^{\varepsilon})$,
and using similar estimates as so far in this proof, but with $0$
instead of $\varepsilon'$ (and $\mathbf{w}$ instead of
$\mathbf{w}^{\varepsilon'}$), we get that $(X,\Phi)$ satisfies
(\ref{linsdeE}) with $\mathcal{E}_t^i=0$, $i=1,2$. Since that is a
linear s.d.e. (in McKean's sense), the proof that $(X,\Phi)$ is
the asserted nonlinear process will be achieved by checking that
for all bounded Lipschitz function $\mathbf{f}\dvtx\mathbb{R}^3\to
\mathbb{R}^3$, one has
\[
E\bigl(\mathbf{f}(X_t)\Phi_t h(\tau,X_0)\mathbf{1}_{\{t\geq\tau\}
}\bigr)=\int_{\mathbb{R}
^3}\mathbf{f}(x)\mathbf{w}(t,x)\,dx.
\]
The latter follows from the facts that
\[
E\bigl(\mathbf{f}(X_t^{\varepsilon})\Phi_t^{\varepsilon} h(\tau
,X_0)\mathbf{1}
_{\{t\geq\tau\}}\bigr)=\int_{\mathbb{R}^3}\mathbf{f}(x)\mathbf
{w}^{\varepsilon}(t,x)\,dx
\]
and
\begin{eqnarray}\label{estimflip}
&&
\bigl|E\bigl(\mathbf{f}(X_t)\Phi_t h(\tau
,X_0)\mathbf{1}_{\{t\geq
\tau\}}\bigr)-E\bigl(\mathbf{f}(X_t^{\varepsilon})\Phi_t^{\varepsilon}
h(\tau,X_0)\mathbf{1}_{\{t\geq\tau\}}\bigr)\bigr| \nonumber\\
&&\qquad\leq
\bigl(\|\Phi\|_{L^{\infty}([0,T]\times
\Omega)}+1\bigr)\|h\|_{\infty}\|\mathbf{f}\|_{\mathrm{Lip}}
E(|X_t-X_t^{\varepsilon}|+|\Phi_t-\Phi_t^{\varepsilon}|)\\
&&\qquad\leq C
\|\mathbf{f}\|_{\mathrm{Lip}} \varepsilon.\nonumber
\end{eqnarray}
\upqed\end{pf}
\begin{rem}
(a)
By Lemma \ref{aproxims}(i), the process
$(X^{\varepsilon},\Phi^{\varepsilon})$ defined in
(\ref{nonlinSDEreg}) is a solution in $[0,T]$ of the nonlinear
s.d.e.:
\begin{eqnarray}\label{nonlinSDEreg'}
\mbox{(i) }&& X_t^{\varepsilon}=X_0+\sqrt{2\nu} \int_0^t \mathbf
{1}_{\{s\geq
\tau\}} \,dB_s +\int_0^t
\mathbf{K}^{\varepsilon}(\tilde{\rho}^{\varepsilon})
(s,X_s^{\varepsilon})\mathbf{1}_{\{s\geq\tau\}}\,ds ,\nonumber\\
\mbox{(ii) }&& \Phi_t^{\varepsilon}=I_3+\int_0^t \nabla
\mathbf{K}^{\varepsilon}(\tilde{\rho}^{\varepsilon
})(s,X_s^{\varepsilon
})\Phi_s^{\varepsilon}
\mathbf{1}_{\{s\geq\tau\}} \,ds \quad\mbox{and}\nonumber\\[-8pt]\\[-8pt]
\mbox{(iii) }&& \mbox{the law $P^{\varepsilon}$ of
$(\tau,X^{\varepsilon},\Phi^{\varepsilon})$ belongs to }\mathcal
{P}_{b,{3/2}}^T \quad\mbox{and}\nonumber\\
&&\tilde{P}_t^{\varepsilon}(dx)=\tilde{\rho}^{\varepsilon}(t,x)\,dx.\nonumber
\end{eqnarray}
{\smallskipamount=0pt
\begin{longlist}[(b)]
\item[(b)]
It is also possible to associate a unique pathwise solution of
(\ref{nonlinSDE}) with any solution $\mathbf{w}\in\mathbf
{F}_{0,p,T}\cap
\mathbf{F}_{0,1,T}$ of the mild vortex equation (i.e., not necessarily the
one given by Theorem \ref{exist1}). This can be done by an
approximation argument similar to the previous one, but
considering linear processes in the sense of McKean [with drift
terms $\mathbf{K}^{\varepsilon}(\mathbf{w})$ and $\nabla\mathbf
{K}^{\varepsilon}(\mathbf{w})$]
instead of the processes (\ref{nonlinSDEreg}).
\item[(c)]
Denote now by $\mathcal{W}_T$ the Wasserstein
distance in $\mathcal{P}(\mathcal{C}_T)$ associated with the metric in
$\mathcal{C}_T:=[0,T]\times C([0,T],\mathbb{R}^3\times\mathbb
{R}^{3\otimes3})$
\begin{eqnarray*}
&&d((\theta,y,\psi),(\eta,x,\phi)) \\
&&\qquad = |\theta-\eta|+\sup_{t\in[0,T]}
\bigl(\min\{|x(t)-y(t)|,1\}
+ \min\{|\psi(t)-\phi(t)|,1\}\bigr).
\end{eqnarray*}
Then, the previous proof states that
\[
\mathcal{W}_T(P^{\varepsilon},P)\leq C(T) \varepsilon,
\]
where $P$ is the law of the nonlinear process (\ref{nonlinSDE}).
\item[(d)] By the regularity results of Section \ref{sec3}, one can
prove in
a similar way as in Corollary 4.3 of \cite{F1} that the
stochastic flow (\ref{3linearflow}) is of class $C^1$, in spite of
the fact that $\mathbf{u}$ and $\nabla\mathbf{u}$ are singular at
$t=0$. Thus,
identity (\ref{formstocflow}) holds.
\end{longlist}}
\end{rem}
\section{The stochastic vortex method}\label{sec5}
We first consider a McKean--Vlasov model with mollified
interaction and cutoff. This extends the model studied in
\cite{F1} to the present situation involving random space--time
births.
Denote by $M_{\varepsilon}$ the sup-norm of $K_{\varepsilon}$
on $\mathbb{R}^3$
and by $L_{\varepsilon}$ a Lipschitz constant for it, which,
respectively, behave like ${1\over
\varepsilon^3}$ and ${1\over\varepsilon^4}$ when $\varepsilon\ll 1$.
Notice that $\operatorname{div} K_{\varepsilon}=(\operatorname{div}
K) *\varphi_{\varepsilon}=0$.
For $R>0$, we denote by $\chi_R\dvtx\mathbb{R}^{3\otimes3}\to\mathbb
{R}^{3\otimes
3}$ a Lipschitz continuous truncation function such that
$|\chi_R(\phi)|\leq R$. We may and shall assume that $\chi_R$ has
Lipschitz constant less than or equal to $1$.
Consider now a filtered probability space endowed with an adapted
standard three-dimensional Brownian motion $B$ and with a
$[0,T]\times\mathbb{R}^3$-valued random variable $(\tau,X_0)$
independent of $B$ and with law $P_0$.
\begin{teorema}\label{3teoMKV}
There is existence and uniqueness (pathwise and in law) for the
nonlinear process with random space--time births, nonlinear in the
sense of McKean
\begin{eqnarray}\label{procnonlin}
X^ {\varepsilon, R}_t&=&X_0+\sqrt{2\nu} \int_0^t
\mathbf{1}_{\{s\geq\tau\}} \,dB_s +\int_0^t \mathbf{u}^ {\varepsilon
, R}
(s,X^ {\varepsilon,
R}_s)\mathbf{1}_{\{s\geq\tau\}}\,ds\nonumber\\[-8pt]\\[-8pt]
\Phi^ {\varepsilon, R}_t&=&I_3+\int_0^t \nabla
\mathbf{u}^ {\varepsilon, R}(s,X^ {\varepsilon, R}_s)\chi_R(\Phi^
{\varepsilon, R}_s) \mathbf{1}_{\{s\geq\tau\}} \,ds\nonumber
\end{eqnarray}
with
\begin{equation}\label{nonlinearite}
\mathbf{u}^ {\varepsilon, R}(s,x)=E \bigl[K_{\varepsilon}(x-X^
{\varepsilon, R}_s)\wedge\chi_R(\Phi^ {\varepsilon, R}_s)
h(\tau,X_0)\mathbf{1}_{\{s\geq\tau\}} \bigr].
\end{equation}
\end{teorema}
The proof is based on the classical contraction argument of
Sznitman \cite{Szn} and is not hard to obtain by combining
elements of Theorem 5.1 in \cite{F1} and Theorem~3.1 in
\cite{FM}.
Consider next a probability space endowed with a sequence
$(B^i)_{i\in\mathbb{N}}$ of independent three-dimensional Brownian motions,
and a sequence of independent random variables
$(\tau^i,X_0^i)_{i\in\mathbb{N}}$ with law $P_0$ and independent of the
Brownian motions. For each $n\in\mathbb{N}$ and $R,\varepsilon>0$, we
define the following system of interacting particles:
\begin{eqnarray}\label{lesystemepsR}
X^{i, \varepsilon, R,n}_t & = & X^i_0+\sqrt{2\nu} \int_0^t\mathbf
{1}_{\{s\geq
\tau^i\}} \,dB^i_s \nonumber\\
&&{} + \int_0^t \frac{1}{n}\sum_{j\not=i}
K_{\varepsilon} (X^ {i, \varepsilon,
R,n}_s-X^{j,\varepsilon,R,n}_s)\nonumber\\
&&\hspace*{51.5pt}{}\wedge
\chi_R(\Phi_s^{j,\varepsilon,R,n}) h(\tau^j,X_0^j)\mathbf{1}_{\{
s\geq\tau
^i,\tau^j\}} \,ds,\nonumber\\[-8pt]\\[-8pt]
\Phi^{i ,\varepsilon, R,n}_t & = & I_3 +\int_0^t
\frac{1}{n}\sum_{j\not=i} [\nabla K_{\varepsilon} (X^ {i,
\varepsilon, R,n}_s-X^{j,\varepsilon,R,n}_s)\nonumber\\
&&\hspace*{64.3pt}{} \wedge
\chi_R(\Phi_s^{j,\varepsilon,R,n}) h(\tau^j,X_0^j) ]\nonumber\\
&&\hspace*{58.6pt}{}\times\chi_R(\Phi^
{i, \varepsilon, R,n}_s)
\mathbf{1}_{\{s\geq\tau^i,\tau^j\}}\,ds,\nonumber
\end{eqnarray}
for $i=1,\ldots, n$, and with $\nabla K(y)\wedge z=\nabla_y
(K(y)\wedge z)$ for $y,z\in\mathbb{R}^3,y\not=0$. Pathwise existence
and uniqueness can be proved by adapting standard arguments,
thanks to the Lipschitz continuity of the coefficients.
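For concreteness, the interacting particle system (\ref{lesystemepsR}) can be simulated by a straightforward Euler scheme. The sketch below is only an illustration under simplifying assumptions that are not taken from the text: it replaces the convolution $K*\varphi_{\varepsilon}$ by the smoothed Biot--Savart-type kernel $K_{\varepsilon}(x)=-x/(4\pi(|x|^2+\varepsilon^2)^{3/2})$, takes $\chi_R$ to be the ($1$-Lipschitz) projection onto the Frobenius ball of radius $R$, approximates $\nabla K_{\varepsilon}(y)\wedge z$ by central finite differences, and draws arbitrary birth data $(\tau^i,X_0^i)$ and vector weights $h(\tau^i,X_0^i)$; all parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
nu, eps, R, dt, n_steps, n = 0.1, 0.5, 5.0, 0.01, 20, 8

def K_eps(x):
    # smoothed Biot-Savart-type kernel (illustrative choice, not K * phi_eps)
    return -x / (4.0 * np.pi * (x @ x + eps**2) ** 1.5)

def chi_R(phi):
    # 1-Lipschitz truncation: projection onto the Frobenius ball of radius R
    nrm = np.linalg.norm(phi)
    return phi if nrm <= R else phi * (R / nrm)

def grad_K_wedge(x, z, d=1e-5):
    # Jacobian of y -> K_eps(y) x z at y = x, by central differences
    J = np.zeros((3, 3))
    for k in range(3):
        e = np.zeros(3); e[k] = d
        J[:, k] = (np.cross(K_eps(x + e), z) - np.cross(K_eps(x - e), z)) / (2 * d)
    return J

# birth times tau^i, initial positions X_0^i and vector weights h(tau^i, X_0^i)
tau = rng.uniform(0.0, 0.05, n)
X = rng.normal(size=(n, 3))
h = rng.normal(size=(n, 3))
Phi = np.tile(np.eye(3), (n, 1, 1))   # Phi^i starts at the identity I_3

for step in range(n_steps):
    t = step * dt
    alive = t >= tau                   # indicator 1_{t >= tau^i}
    drift_X = np.zeros((n, 3))
    drift_Phi = np.zeros((n, 3, 3))
    for i in range(n):
        if not alive[i]:
            continue
        for j in range(n):
            if j == i or not alive[j]:
                continue
            z = chi_R(Phi[j]) @ h[j]   # truncated transported weight of particle j
            dx = X[i] - X[j]
            drift_X[i] += np.cross(K_eps(dx), z) / n
            drift_Phi[i] += grad_K_wedge(dx, z) @ chi_R(Phi[i]) / n
    for i in range(n):                 # Euler step, diffusion only after birth
        if alive[i]:
            X[i] = X[i] + drift_X[i] * dt + np.sqrt(2 * nu * dt) * rng.normal(size=3)
            Phi[i] = Phi[i] + drift_Phi[i] * dt

print("finite:", bool(np.all(np.isfinite(X)) and np.all(np.isfinite(Phi))))
```

The double loop makes each step cost $O(n^2)$ kernel evaluations, which is the usual price of vortex methods; the smoothing parameter plays the role of $\varepsilon$ in the convergence rates below.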
In
the same probability space, we also consider the sequence
\begin{eqnarray}\label{procnonlincopies}
X^{i,\varepsilon,R}_t&=&X_0^i+\sqrt{2\nu} \int_0^t
\mathbf{1}_{\{s\geq\tau^i\}}\,d B^i_s+\int_0^t \mathbf{u}^
{\varepsilon, R}
{\varepsilon, R}
(s,X^{i,\varepsilon,R}_s)\mathbf{1}_{\{s\geq\tau^i\}}\,ds,\nonumber\\[-8pt]\\[-8pt]
\Phi^{i,\varepsilon,R}_t&=&I_3+\int_0^t \nabla
\mathbf{u}^ {\varepsilon, R}(s,X^{i,\varepsilon,R}_s)\chi_R(\Phi
^{i,\varepsilon,R}_s)\mathbf{1}_{\{s\geq\tau^i\}} \,ds,\qquad
i\in\mathbb{N},\nonumber
\end{eqnarray}
of independent copies of (\ref{procnonlin}). Their common law in
$\mathcal{C}_T$ is denoted by $P^{\varepsilon, R}$, and we write
$\bar{h}:= \|w_0\|_1+\|\mathbf{g}\|_{1,T}$. Recall that $\chi_R$ is a
Lipschitz-continuous function, bounded by $R>0$ and with Lipschitz
constant less than or equal to $1$. It is not hard to adapt the
proof of Theorem 5.2 in \cite{F1} to get the following:
\begin{teorema}\label{propchaosepsR}
For $\varepsilon>0$ sufficiently small and all $R>0$, we have
\begin{equation}\label{convi}\qquad
\mathbb{E} \Bigl[\sup_{t\in[0,T]} \{
|X^{i,\varepsilon,R,n}_t-X^{i,\varepsilon,R}_t|+
|\Phi^{i,\varepsilon,R,n}_t-\Phi^{i,\varepsilon,R}_t|
\} \Bigr]\leq\frac{1}{\sqrt{n}} C(\varepsilon,R,\bar{h},T)
\end{equation}
for all $i\leq n$, where
\[
C(\varepsilon,R,\bar{h},T)= C_1\varepsilon(1+R \bar{h}
T)(R\bar{h}T)\exp\{C_2\varepsilon^{-9}\bar{h}T(R+1)(\bar{h}+RT)\}
\]
for some positive constants $C_1,C_2$ independent of $R$,
$\varepsilon$, $T$ and $\bar{h}$.
\end{teorema}
Let us now make the assumptions of Theorem \ref{exist1}, and
consider, in the corresponding time interval $[0,T]$, independent
copies $(X^{i,\varepsilon},\Phi^{i,\varepsilon})$ and
$(X^i,\Phi^i)$ of the processes (\ref{nonlinSDEreg'}) and
(\ref{nonlinSDE}), respectively, constructed on the given data
$(X_0^i,\tau^i,B^i)$, $i\in\mathbb{N}$.
Recall again that the uniform bound of Theorem \ref{regularity}(iii)
and Gronwall's lemma imply that the processes
$\Phi^{\varepsilon}$ are uniformly bounded, say
\[
{\sup_{t\in[0,T],\varepsilon\geq0, \omega\in
\Omega}}|\Phi_t^{\varepsilon}(\omega)| \leq R_{\circ}(T,\mathbf{w}_0)
\]
for some finite positive constant $R_{\circ}(T,\mathbf{w}_0)$. Thus, for
any $R\geq R_{\circ}$, one has for all $t\in[0,T]$ that
\[
(X^{i,\varepsilon}_t,\Phi^{i,\varepsilon}_t)=(X^{i,\varepsilon
}_t,\chi_R(\Phi^{i,\varepsilon}_t)).
\]
Consequently, $(X^{i,\varepsilon},\Phi^{i,\varepsilon})$ is a
pathwise solution in $[0,T]$ of (\ref{procnonlincopies}),
and so we conclude that
\[
(X^{i,\varepsilon},\Phi^{i,\varepsilon})=(X^{i,\varepsilon,R},\Phi
^{i,\varepsilon,R})
\]
almost surely. Bringing it all together, we obtain the following
pathwise approximation result:
\begin{teorema}\label{convortmet} Assume that \hyperlink{Hp}{$(\mathrm{H}_1)$} and
\hyperlink{Hp}{$(\mathrm{H}_p)$} hold with $p\in
(\frac{3}{2},3)$ and that the hypothesis of Theorem
\ref{exist1}\textup{(i)}
is satisfied. Let $K_{\varepsilon}$ be defined as in
(\ref{biotsavop}), with $\varphi$ a cutoff function of order $1$
and write $\bar{h}=\|w_0\|_1+\|\mathbf{g}\|_{1,T}$. Let, furthermore,
$R\geq
R_{\circ}(T,\mathbf{w}_0)$ and
\[
\varepsilon_n=(c_{\alpha} \ln n)^{-{1/9}}
\]
with
\[
0<c_{\alpha}<\alpha\bigl(C_2 \bar{h}T(R+1)(\bar{h}+RT)\bigr)^{-1}
\]
for some $\alpha\in(0,\frac{1}{2})$. Then, we have for all
$i\leq n$,
\begin{eqnarray}\label{convi'}
&&\mathbb{E} \Bigl[\sup_{t\in[0,T]} \{
|X^{i,\varepsilon_n,R,n}_t-X^i_t|+
|\Phi^{i,\varepsilon_n,R,n}_t-\Phi^i_t| \} \Bigr] \nonumber\\[-8pt]\\[-8pt]
&&\qquad\leq
C(T,w_0,\mathbf{g},\alpha) \biggl[\frac{1}{n^{{1/2}-\alpha
}(\ln
n)^{{1/9}}}+\frac{1}{(\ln n)^{{1/9}}} \biggr],\nonumber
\end{eqnarray}
where $(X,\Phi)$ is the unique pathwise solution of
(\ref{nonlinSDE}), and the constant $C(T,w_0,\break\mathbf{g},\alpha)$ depends
on the data $w_0$ and $\mathbf{g}$ only through the quantities $\|w_0\|_p,
\tn\mathbf{g}\tn_{0,p,T}$ and $\|w_0\|_1+\|\mathbf{g}\|_{1,T}$.
\end{teorema}
\begin{rem}
(i)
The rate at which the second term on the right-hand side of
(\ref{convi'}) goes to $0$ is exactly that of
$\varepsilon=\varepsilon_n$. The logarithmic order of the latter
was needed to make the upper bound in Theorem \ref{propchaosepsR}
go to $0$ with $n$, which then happens at an algebraic rate. The
global rate is, therefore, conditioned by the techniques used in
the proof of Theorem \ref{propchaosepsR} (see \cite{F1} for
details).
Under additional regularity assumptions, it
is possible by analytic arguments to slightly improve the
convergence rate (see the discussion at the end). An attempt for a
more substantial improvement should, however, exploit specific
features of the interaction at the level of the particle systems.
(ii)
The previous result implies as usual that $\mathcal{W}_T
(\operatorname{law}(X^{i,\varepsilon,R,n},\Phi^{i,\varepsilon,R,n}),P )$
goes to $0$ at least that fast, and that (with the obviously
extended meaning of $\mathcal{W}_T$)
\[
\mathcal{W}_T (\operatorname{law} ((X^{1,\varepsilon,R,n},\Phi
^{1,\varepsilon,R,n}),\ldots,(X^{k,\varepsilon,R,n},\Phi
^{k,\varepsilon,R,n}) )
,P^{\otimes k} )\leq k \delta_n,
\]
where $\delta_n$ stands
for the quantity in the right-hand side of (\ref{convi'}).
\end{rem}
We deduce the convergence at the level of empirical processes:
\begin{corolario}\label{convempproc} Under the assumptions of Theorem
\ref{convortmet}, the family\break
$(\tilde{\mu}^{n,\varepsilon_n,R}_t)_{0\leq t \leq T}$ of
$\mathbb{R}^3$-weighted empirical measures on $\mathbb{R}^3$
\[
\tilde{\mu}^{n,\varepsilon_n,R}_t:= \frac{1}{n}\sum_{i=1}^n
\delta_{X^{i,\varepsilon_n,R,n}_t}
\cdot(\chi_R(\Phi_t^{i,\varepsilon_n,R,n})h(\tau^i,X_0^i)
)\mathbf{1}_{\{t\geq \tau^i\}}
\]
converges in probability to $(\mathbf{w}(t,x)\,dx)_{0\leq t
\leq T}$ in the space
$C([0,T],\mathcal{M}_3(\mathbb{R}^3))$, where $\mathcal{M}_3(\mathbb
{R}^3)$ denotes
the space of finite $\mathbb{R}^3$-valued measures on $\mathbb{R}^3$ endowed
with the weak topology. Moreover, we have
\begin{eqnarray*}
&&\sup_{t\in[0,T],\|\mathbf{f}\|_{\mathrm{Lip}}\leq1}E |\langle\tilde
{\mu}^{n,\varepsilon_n,R}_t- \mathbf{w}(t),\mathbf{f}\rangle
| \\
&&\qquad\leq C
\biggl[\frac{1}{\sqrt{n}}+\frac{1}{n^{{1/2}-\alpha}(\ln
n)^{{1/9}}}+\frac{1}{(\ln n)^{{1/9}}} \biggr],
\end{eqnarray*}
where $\| \mathbf{f}\|_{\mathrm{Lip}}$ is the usual norm in the space of
bounded Lipschitz continuous functions $\mathbf{f}\dvtx\mathbb{R}^3 \to
\mathbb{R}^3$.
\end{corolario}
\begin{pf} It is enough to prove the bound for bounded Lipschitz
functions. For such a function
$\mathbf{f}\dvtx\mathbb{R}^3\to\mathbb{R}^3$, it holds
that
\begin{eqnarray}\label{estimatesf}
&&
|\langle\tilde{\mu}^{n,\varepsilon_n,R}_t,\mathbf{f}\rangle
-\langle\mathbf{w}(t),\mathbf{f}\rangle| \nonumber\\
&&\qquad
\leq\Biggl|\langle\tilde{\mu}^{n,\varepsilon_n,R}_t,\mathbf
{f}\rangle
-\frac{1}{n}\sum_{i=1}^n \mathbf{f}( X_t^{i,\varepsilon
_n,R})\wedge
(\chi_R(\Phi_t^{i,\varepsilon_n,R}))h(\tau^i,X_0^i)\mathbf{1}_{\{
t\geq\tau^i\}
} \Biggr|\nonumber\\
&&\qquad\quad{} + \Biggl|\frac{1}{n}\sum_{i=1}^n \mathbf{f}( X_t^{i,\varepsilon
_n,R})\wedge
(\chi_R(\Phi_t^{i,\varepsilon_n,R}))h(\tau,X_0^i)\mathbf{1}_{\{
\tau\geq t\}
}\\
&&\qquad\quad\hspace*{14.5pt}{}-\int_{\mathcal{C}_T} \mathbf{f}( y(t))\wedge
\chi_R(\phi(t))h(\theta,x(0))P^{\varepsilon_n,R}(d\theta,dy,d\phi
) \Biggr|\nonumber\\
&&\qquad\quad{} +|\langle\mathbf{w}^{\varepsilon_n}(t)- \mathbf{w}(t),\mathbf
{f}\rangle|\nonumber
\end{eqnarray}
with
$P^{\varepsilon_n,R}=P^{\varepsilon_n}=\operatorname{law}(\tau,X^{i,\varepsilon
_n,R},\Phi^{i,\varepsilon_n,R})$.
The independence of the processes
$(\tau^i,X^{i,\varepsilon_n,R},\Phi^{i,\varepsilon_n,R})$, $i\in
\mathbb{N}$, and the definition of $h$ imply that the expectation of the
second term in the right-hand side of\vspace*{-2pt} (\ref{estimatesf}) is
bounded by $\frac{1}{\sqrt{n}}2\|\mathbf{f}\|_{\mathrm{Lip}}R \bar{h}$, where
$\bar{h}=(\|w_0\|_1+\|\mathbf{g}\|_{1,T})$. We use the latter and estimate
in Theorem \ref{propchaosepsR} to bound the first term, and get
that
\begin{eqnarray*}
&&
E |\langle\tilde{\mu}^{n,\varepsilon_n,R}_t- \mathbf
{w}(t),\mathbf
{f}\rangle
| \\
&&\qquad\leq \|\mathbf{f}\|_{\mathrm{Lip}}( R+1) \bar{h}\frac{1}{\sqrt{n}}
C(\varepsilon_n,R,\bar{h},T)
\\
&&\qquad\quad{} +\frac{2 \|\mathbf{f}\|_{\mathrm{Lip}} R \bar{h} }{\sqrt{n}} + |\langle
\mathbf{w}^{\varepsilon_n}-\mathbf{w}(t),\mathbf{f}\rangle|.
\end{eqnarray*}
The last term being equal to the first term in
(\ref{estimflip}), the conclusion follows.
\end{pf}
\begin{rem} In the
case $\mathbf{g}=0$, Philipowski \cite{Phi} obtained a similar
approximation result of the vorticity field, for a simpler
particle system, under the additional assumption that the test
function $\mathbf{f}$ belongs to $L^{p^*}$.
\end{rem}
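To get a sense of the relative sizes of the three terms in the bound of Corollary \ref{convempproc}, one can evaluate them numerically. The following quick sketch (illustrative Python, not part of the paper; the choice $\alpha=1/4$ is arbitrary) shows that the logarithmic term dominates even for very large $n$:

```python
import math

def bound_terms(n, alpha=0.25):
    """The three terms 1/sqrt(n), 1/(n^(1/2-alpha) (ln n)^(1/9)) and
    1/(ln n)^(1/9) of the empirical-process estimate (alpha = 1/4 is arbitrary)."""
    log_factor = math.log(n) ** (1.0 / 9.0)
    return (1.0 / math.sqrt(n),
            1.0 / (n ** (0.5 - alpha) * log_factor),
            1.0 / log_factor)

t1, t2, t3 = bound_terms(10 ** 6)
assert t1 < t2 < t3           # even at n = 10^6 ...
assert t3 > 10 * max(t1, t2)  # ... the (ln n)^(-1/9) term dominates by far
```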
Finally, we establish an approximation result with convergence
rate for the velocity field. To that end, we need to strengthen
the already shown convergence of $\mathbf{w}^{\varepsilon}$ to
$\mathbf{w}$. We
will need the following:
\begin{lema}\label{convgradw} For each $\tilde{p}\in(\frac
{3}{2},p)$, there is a
constant $C(T,\tilde{p})$ such that
\[
\sup_{t\in[0,T]}t^{{3}/({2\tilde{p}})}\|\nabla
\mathbf{w}^{\varepsilon}(t)-\nabla\mathbf{w}(t)\|_{\tilde{p}}\leq
C(T,\tilde{p})
\varepsilon.
\]
\end{lema}
\begin{pf} We need $\tilde{p}\in(\frac{3}{2},3)$ in order to
have at our disposal an integrable (in time) bound for $\| D^2
\mathbf{K}^{\varepsilon}(\mathbf{w}^{\varepsilon})(t)\|_{
{3\tilde
{p}}/({3-\tilde{p}})}$,
which we do not have for $\tilde{p}=p$. Indeed, for any
$\tilde{p}$ in that interval we have
$\tilde{q}:=\frac{3\tilde{p}}{3-\tilde{p}}\in(3,\frac{3p}{3-p})$,
and so by Theorem \ref{regularity}(i) and Lemma
\ref{contgradK} we have for $k,j,i=1,2,3$ that
\begin{eqnarray}\label{intboundqtilde}
&&\sup_{t \in[0,T],\varepsilon\geq
0}t^{{3/2}({1/p}-{1}/{\tilde{q}})} \biggl\|
\frac{\partial
\mathbf{K}^{\varepsilon}(\mathbf{w}^{\varepsilon})_j}{\partial
x_i} \biggr\|_{\tilde{q}}\nonumber\\[-8pt]\\[-8pt]
&&\qquad{}+ \sup_{t \in[0,T],\varepsilon\geq
0}t^{{1/2}+{3/2}({1/p}-{1/\tilde{q}})}
\biggl\| \frac{\partial^2
\mathbf{K}^{\varepsilon}(\mathbf{w}^{\varepsilon})_j}{\partial
x_i\,\partial
x_k} \biggr\|_{\tilde{q}}<\infty\nonumber
\end{eqnarray}
with
$-\frac{1}{2}-\frac{3}{2}(\frac{1}{p}-\frac{1}{\tilde
{q}})=-1+\frac{3}{2}(\frac{1}{\tilde{p}}-\frac{1}{p})
>-1$. Let us now check that one has
\begin{equation}\label{unifbound11}
\sup_{\varepsilon\geq0}\tn
\mathbf{w}^{\varepsilon}\tn_{1,\tilde{p},T}<\infty.
\end{equation}
This is not immediate, since $T>0$ given by Theorem \ref{exist1}
was determined by the norm of $\mathbf{w}_0$ and of the operator
$\mathbf{B}^{\varepsilon}$ in the spaces corresponding to the parameter
$p>\tilde{p}$. We will prove (\ref{unifbound11}) using continuity
properties of the operators $\mathbf{B}^{\varepsilon}$. It follows from
Proposition 3.1(iii) in \cite{F1} that for $\frac{3}{2}\leq
r<3$ and $ \frac{3r}{6-r}\leq r' \leq r$, one has
\begin{equation}\label{contBext}
\sup_{\varepsilon\geq0}\tn\mathbf{B}^{\varepsilon}(\mathbf
{v},\mathbf{v}) \tn_{1,r',T}
\leq C_{r,r'} (T) (\tn\mathbf{v}\tn_{1,r,T})^2
\end{equation}
for some finite constant
$C_{r,r'} (T)$. From this, we deduce that $\mathbf{w}^{\varepsilon}
\in
\mathbf{F}_{1,\tilde{p},T}$, with a uniform (in $\varepsilon$)
bound, by
the following iterative procedure. Define a real sequence by
$r_0=\tilde{p}$, $r_{n+1}=\frac{6r_n}{3+r_n}$, and notice that it
is increasingly convergent to $3$. We can thus take $N\in\mathbb{N}$
such that $r_N< p\leq r_{N+1}$. The function $s\mapsto
\frac{3s}{6-s}$ being increasing on $[0,6]$, we then have
$\frac{3p}{6-p}\leq\frac{3r_{N+1}}{6-r_{N+1}}=r_N$. By
(\ref{contBext}) with $r=p$ and $r'=r_N$, we see that
$\mathbf{B}^{\varepsilon}(\mathbf{w}^{\varepsilon},\mathbf
{w}^{\varepsilon})\in\mathbf{F}_{1,
r_N,T}$, and since also $\mathbf{w}_0 \in\mathbf{F}_{1,r_N,T}$ holds
by Lemma
\ref{contB}(i) (taking $r_N$ in the place of $p$ and $r$
therein), we get that $\mathbf{w}^{\varepsilon}\in\mathbf{F}_{1,
r_N,T}$, with a
bound in that space that is uniform in $\varepsilon$.
We repeat the previous arguments with $r=r_N$ and
$r'=\frac{3r_N}{6-r_N}=r_{N-1}$ and get that $\mathbf{w}^{\varepsilon
}\in
\mathbf{F}_{1, r_{N-1},T}$, with a bound that is uniform in
$\varepsilon$. Continuing this scheme $N-1$ times, we get
(\ref{unifbound11}).
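The sequence driving this iterative procedure is elementary to examine numerically. The following sketch (illustrative Python; the values $\tilde{p}=8/5$ and $p=29/10$ are made up) confirms that $r_n$ increases strictly towards $3$ and locates the first index with $r_n\geq p$:

```python
from fractions import Fraction

def r_sequence(p_tilde, p, max_iter=200):
    """r_0 = p_tilde, r_{n+1} = 6 r_n / (3 + r_n): strictly increasing on (0, 3),
    with fixed point (and limit) 3; stop at the first index with r_n >= p."""
    r = [Fraction(p_tilde)]
    while r[-1] < p and len(r) < max_iter:
        r.append(6 * r[-1] / (3 + r[-1]))
    return r

seq = r_sequence(Fraction(8, 5), Fraction(29, 10))  # p_tilde = 1.6, p = 2.9
assert all(a < b for a, b in zip(seq, seq[1:]))     # strictly increasing
assert all(x < 3 for x in seq)                      # bounded by the limit 3
assert seq[-2] < Fraction(29, 10) <= seq[-1]        # r_N < p <= r_{N+1}
```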
We now take derivatives in the mild vortex equation with
$\varepsilon\geq0$ (as justified in the proof of Proposition 3.1
in \cite{F1}),
\begin{eqnarray*}
\frac{\partial(\mathbf{w}^{\varepsilon})_k}{\partial x_i}(t,x)
&=&\int_{\mathbb{R}^3}\frac{\partial G^{\nu}_t}{\partial x_i}(x-y)
(w_0)_k(y) \,dy+\int_0^t \int_{\mathbb{R}^3} \frac{\partial G^{\nu
}_{t-s}}{\partial x_i}(x-y) \mathbf{g}_k(s,y)\,dy \,ds\\
&&{} - \int_0^t\sum_{j=1}^3\int_{\mathbb{R}^3}\frac{\partial G^{\nu}_{t-s}}
{\partial x_i}
(x-y) \biggl[\mathbf{K}^{\varepsilon}(\mathbf{w}^{\varepsilon
})_j(s,y)\,\frac{
\partial\mathbf{w}^{\varepsilon}_k(s,y)}{\partial y_j}
\\
&&\hspace*{129.3pt}{} - \mathbf{w}^{\varepsilon}_j(s,y)\, \frac{\partial
\mathbf{K}^{\varepsilon}(\mathbf{w}^{\varepsilon})_k(s,y)}{\partial
y_j} \biggr]\,dy \,ds
\end{eqnarray*}
for $k=1,2,3$. Notice now that, thanks to the estimates
(\ref{unifbound11}), Lemma \ref{aproxims}(ii) also holds
with $p$ replaced by $\tilde{p}$. By estimates as those in the
proof of Theorem \ref{exist1}(i) and using Lemma
\ref{aproxims}(ii) and estimates (\ref{intboundqtilde}) and
(\ref{unifbound11}), we then have
\begin{eqnarray*}
&&\|\nabla\mathbf{w}^{\varepsilon}(t)-\nabla
\mathbf{w}(t)\|_{\tilde{p}}
\\
&&\qquad\leq
C\int_0^t(t-s)^{-{3}/({2\tilde{p}})}s^{-{1/2}}
[\|\mathbf{w}^{\varepsilon}(s)-\mathbf{w}(s)\|_{\tilde{p}}\\
&&\qquad\quad\hspace*{111.6pt}{}+\|\mathbf{K}^{\varepsilon}(\mathbf{w})(s)-\mathbf
{K}(\mathbf{w})(s)\|_{\tilde{q}}
]\,ds\\
&&\qquad\quad{}+C\int_0^t(t-s)^{-{3}/({2\tilde{p}})} [\|\nabla
\mathbf{w}^{\varepsilon}(s)-\nabla\mathbf{w}(s)\|_{\tilde{p}}\\
&&\hspace*{100.8pt}\qquad\quad{}+ \|\nabla\mathbf{K}^{\varepsilon}(\mathbf
{w})(s)-\nabla
\mathbf{K}(\mathbf{w})(s)\|_{\tilde{q}} ]\,ds \\
&&\qquad\leq C \varepsilon t^{1-{3}/{\tilde{p}}}+C \varepsilon
\int_0^t
(t-s)^{-{3}/({2\tilde{p}})}s^{-1+{3}/({2\tilde{p}})-{3}/({2p})}\,ds
\\
&&\qquad\quad{} + C \int_0^t (t-s)^{-{3}/({2\tilde{p}})} \|\nabla
\mathbf{w}^{\varepsilon}(s)-\nabla\mathbf{w}(s)\|_{\tilde{p}} \,ds\\
&&\qquad\leq C \varepsilon t^{-{3}/({2\tilde{p}})}+ C \int_0^t
(t-s)^{-{3}/({2\tilde{p}})} \|\nabla
\mathbf{w}^{\varepsilon}(s)-\nabla\mathbf{w}(s)\|_{\tilde{p}} \,ds.
\end{eqnarray*}
Iterating the latter sufficiently many times (using the identity
quoted in the proof of Theorem \ref{exist1}(i)), we obtain
that
\begin{eqnarray}\label{hola}
\|\nabla\mathbf{w}^{\varepsilon}(t)-\nabla\mathbf{w}(t)\|_{\tilde
{p}} &\leq& C
\varepsilon\bigl(t^{-{3}/({2\tilde{p}})}+1\bigr)\nonumber\\[-8pt]\\[-8pt]
&&{} + C(T) \int_0^t \|\nabla
\mathbf{w}^{\varepsilon}(s)-\nabla\mathbf{w}(s)\|_{\tilde{p}}
\,ds.\nonumber
\end{eqnarray}
Integrating (\ref{hola}) in time and using Gronwall's lemma,
and then inserting the obtained bound in the right-hand side of
(\ref{hola}), we obtain
\begin{equation}
\|\nabla\mathbf{w}^{\varepsilon}(t)-\nabla\mathbf{w}(t)\|_{\tilde
{p}}\leq C
\varepsilon\bigl(t^{-{3}/({2\tilde{p}})}+1\bigr),
\end{equation}
and the convergence statement for $\nabla\mathbf{w}^{\varepsilon}$
follows.
\end{pf}
\begin{corolario}\label{convvelfield}
Consider fixed real numbers $\tilde{p}\in(\frac{3}{2},3)$ and
$\alpha\in(0,\frac{1}{2})$. Under the assumptions of Theorem
\ref{convortmet}, there exists a constant $\mathbf{C}$ depending
on $\tilde{p},T, \|w_0\|_p, \tn
\mathbf{g}\tn_{0,p,T},\|w_0\|_1+\|\mathbf{g}\|_{1,T}$ and $\alpha$,
such that for
all $n\in\mathbb{N}$,
\begin{eqnarray*}
&&\sup_{t\in[0,T]}\gamma(t) E \bigl( |
\mathbf{K}^{\varepsilon_n}(\tilde{\mu}^{n,\varepsilon
_n,R})(t,x)-\mathbf{u}
(t,x) | \bigr)\\
&&\qquad\leq
\mathbf{C} \biggl(\frac{(\ln n)^{{1}/{3}}}
{n^{{1}/{2}-\alpha}} +\frac{(\ln n)^{{1}/{3}}
}{\sqrt{n}}+\frac{1}{(\ln n)^{{1}/{9}}} \biggr),
\end{eqnarray*}
where
$\gamma(t)=(t^{{3}/({2\tilde{p}})}+t^{1-{3}/{2}(
{1}/{\tilde{p}}-{1}/{p})})$.
\end{corolario}
\begin{pf} For all $(t,x)\in[0,T]\times\mathbb{R}^3$, it holds
that
\begin{eqnarray}\label{3estimatesu}\qquad
&&|\mathbf{K}^{\varepsilon_n}(\tilde{\mu}^{n,\varepsilon
_n,R})(t,x)-\mathbf{u}
(t,x) |\nonumber\\
&&\qquad
\leq
\Biggl|\mathbf{K}^{\varepsilon_n}(\tilde{\mu}^{n,\varepsilon_n,R})(t,x)
\nonumber\\
&&\qquad\quad\hspace*{2pt}{}-\frac{1}{n}\sum_{i=1}^n K_{\varepsilon_n}(x-
X_t^{i,\varepsilon_n,R})\wedge
(\chi_R(\Phi_t^{i,\varepsilon_n,R}))h(\tau,X_0^i)\mathbf{1}_{\{
\tau\geq t\}
} \Biggr|\nonumber\\[-8pt]\\[-8pt]
&&\qquad\quad{} + \Biggl|\frac{1}{n}\sum_{i=1}^n K_{\varepsilon_n}(x-
X_t^{i,\varepsilon_n,R})\wedge
(\chi_R(\Phi_t^{i,\varepsilon_n,R}))h(\tau,X_0^i)\mathbf{1}_{\{
\tau\geq t\}
}\nonumber\\
&&\qquad\quad\hspace*{14.4pt}{} -\int_{\mathcal{C}_T} K_{\varepsilon_n}\bigl(x- y(t)\bigr)\wedge
\chi_R(\phi(t))h(\theta,x(0))P^{\varepsilon_n,R}(d\theta,dy,d\phi
) \Biggr|\nonumber\\
&&\qquad\quad{} +|\mathbf{K}^{\varepsilon_n}(\mathbf{w}^{\varepsilon
_n})(t,x)-\mathbf{u}(t,x)|\nonumber
\end{eqnarray}
with $P^{\varepsilon_n,R}$ as in Corollary \ref{convempproc}. By
arguments similar to those used for (\ref{estimatesf}), the expectation of the
second term is now bounded by
$\frac{1}{\sqrt{n}}2M_{\varepsilon_n}R \bar{h}$. With the estimate
in Theorem \ref{propchaosepsR} we get that
\begin{eqnarray*}
&&
E |\mathbf{K}_{\varepsilon_n}(\tilde{\mu}^{n,\varepsilon
_n,R})(t,x)-\mathbf{u}(t,x) |
\\
&&\qquad\leq (L_{\varepsilon_n} R
+M_{\varepsilon_n})\bar{h}\frac{1}{\sqrt{n}}
C(\varepsilon_n,R,\bar{h},T)
\\
&&\qquad\quad{} +\frac{2M_{\varepsilon_n}R \bar{h} }{\sqrt{n}} +\|\mathbf
{K}^{\varepsilon
_n}(\mathbf{w}^{\varepsilon_n})(t)-\mathbf{K}(\mathbf{w})(t)\|
_{\infty}.
\end{eqnarray*}
Thus, from the estimates for $L_{\varepsilon}$ and
$M_{\varepsilon}$ we deduce that for fixed $\tilde{p}\in
(\frac{3}{2},3)$,
\begin{eqnarray*}
&&
E |\mathbf{K}_{\varepsilon_n}(\tilde{\mu}^{n,\varepsilon
_n,R})(t,x)-\mathbf{u}(t,x) |
\\
&&\qquad\leq C (1+R \bar{h}T)(R\bar{h}T)\frac{(c\ln n)^{{1/3}}}
{n^{{1}/{2}-\alpha}} + C R \bar{h}\frac{(c\ln n)^{{1/3}}
}{\sqrt{n}} \\
&&\qquad\quad{} + \|\mathbf{w}^{\varepsilon_n}(t)-\mathbf{w}(t)\|_{W^{1,\tilde
{p}}}+\|\mathbf{K}
^{\varepsilon_n}(\mathbf{w})(t)-\mathbf{K}(\mathbf{w})(t)\|
_{W^{1,\tilde{q}}},
\end{eqnarray*}
where $\tilde{q}=\frac{3\tilde{p}}{3-\tilde{p}}<\frac{3p}{3-p}$.
We have used here again the Sobolev inclusions quoted in the proof
of Theorem \ref{regularity}, and Lemma \ref{contK}. Now, by Lemmas
\ref{contgradK} and \ref{aproxco1}, one has
\begin{eqnarray*}
\|\nabla\mathbf{K}^{\varepsilon_n}(\mathbf
{w})(t)-\nabla
\mathbf{K}(\mathbf{w})(t)\|_{\tilde{q}} &\leq& C \|
\varphi_{\varepsilon_n}*\mathbf{w}(t)-\mathbf{w}(t)\|_{\tilde
{q}}\leq C
\varepsilon_n \|\nabla\mathbf{w}(t)\|_{\tilde{q}}\\
&\leq& C
t^{-1+{3/2}({1}/{\tilde{p}}-{1}/{p})}\varepsilon_n,
\end{eqnarray*}
where we have also used part (i) of Theorem \ref{regularity}
in the last inequality. On the other hand,
\[
\| \mathbf{K}^{\varepsilon_n}(\mathbf{w})(t)-
\mathbf{K}(\mathbf{w})(t)\|_{\tilde{q}}\leq C \|
{\varphi_{\varepsilon_n}*\mathbf{w}(t)}-\mathbf{w}(t)\|_{\tilde
{p}}\leq C
\varepsilon_n \|\nabla\mathbf{w}(t)\|_{\tilde{p}}\leq C t^{-{1/2}}
\varepsilon_n
\]
thanks to the estimate (\ref{unifbound11}). From
the previous estimates and Lemmas \ref{aproxims} and
\ref{convgradw}, we deduce that
\begin{eqnarray*}
&&E |\mathbf{K}_{\varepsilon_n}(\tilde{\mu}^{n,\varepsilon
_n,R})(t,x)-\mathbf{u}(t,x) |
\\
&&\qquad\leq C \frac{(\ln n)^{{1}/{3}}} {n^{{1}/{2}-\alpha}} + C
\frac{(\ln n)^{{1}/{3}}
}{\sqrt{n}}
\\
&&\qquad\quad{}+C\varepsilon_n\bigl(t^{-{3}/({2\tilde{p}})}+
t^{-{1}/{2}}+t^{-1+{3}/{2}({1}/{\tilde{p}}-{1}/{p})}\bigr),
\end{eqnarray*}
and the statement follows.
\end{pf}
\section{Convergence rate under additional
regularity assumptions}\label{sec6}
Let us finally explain how the convergence rate can be slightly
improved by assuming further regularity of the data $w_0$ and
$\mathbf{g}$. Since it is an adaptation of the developments in the
previous sections, we only sketch the main arguments.
First, it is possible to show that if the data $w_0$ and $\mathbf{g}$ are
such that
\begin{equation}\label{adreghyp}
\|w_0\|_{W^{m,p}},\qquad \sup_{t\in[0,T]}\|\mathbf{g}(t)\|
_{W^{m,p}}<\infty
\end{equation}
for some integer $m\geq1$, then the mild solutions
$\mathbf{w}^{\varepsilon}$, $\varepsilon\geq0$, given by Theorem
\ref{exist1} belong to the space $\mathbf{F}_{m+1,p,T}$ of functions
$\mathbf{v}(t)$ such that
\[
\sum_{i=1}^{m-1}\tn D^i \mathbf{v}\tn_{0,p,T} + \tn D^{m} \mathbf
{v}\tn
_{1,p,T}<\infty,
\]
where $D^i$ stands for the $i$th order space derivative. To prove
this, one first easily checks that $\mathbf{w}_0$ belongs to that space,
since the successive derivatives in the convolutions with the heat
kernel can be applied to the data $w_0$ and $\mathbf{g}$. On the other
hand, one can show by induction that the bilinear operators
$\mathbf{B}^{\varepsilon}$ are continuous in $\mathbf{F}_{m+1,p,T}$,
and more
generally, in the naturally generalized versions
$\mathbf{F}_{m+1,r,(T;p)}$ of the space $\mathbf{F}_{1,r,(T;p)}$.
That is, the
spaces of functions $\mathbf{v}$ such that
\[
\sum_{i=1}^{m-1}\tn D^i \mathbf{v}\tn_{0,r,(T;p)} + \tn D^{m}
\mathbf{v}
\tn_{1,r,(T;p)}
\]
is finite. From this, one gets a local existence
result in the space $\mathbf{F}_{m+1,p,T}$, from which a regularity result
can be obtained by arguments that can be adapted from those in the
proof of Theorem 3.2 in \cite{F1}. Moreover, one also checks that
the norms $\tn\mathbf{w}^{\varepsilon}\tn_{m+1,r,(T;p)}$ are bounded
uniformly in $\varepsilon\geq0$.
Now, we impose additional conditions on the regularizing kernel
$\varphi$, namely:
\begin{longlist}
\item$\int_{\mathbb{R}^3}\varphi(x)\,dx=1$.
\item$\int_{\mathbb{R}^3}|x|^{m+1}|\varphi(x)|\,dx<\infty$.
\item$\int_{\mathbb{R}^3}x_{i_1}\cdots x_{i_r} \varphi(x)
\,dx=0$ for
all $i_1,\ldots,i_r\in\{1,2,3\}$ and $r\leq m$.
\end{longlist}
Such a function is called a cutoff function of order $m+1$. Then,
one has the following approximation result (see Lemma 4.4 in
\cite{Rav}):
\[
\|\varphi_{\varepsilon}* w - w\|_r\leq C \varepsilon^{m+1} \|
D^{m+1}w\|_r
\]
for all $w\in W^{m+1,r}$. Therefore, without any modification, for
such function $\varphi$, the proofs of Lemmas \ref{aproxims} and
\ref{convgradw} yield the same convergence results but at rate
$\varepsilon^{m+1}$.
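The rate $\varepsilon^{m+1}$ is easy to observe numerically. In the following one-dimensional sketch (illustrative Python; the paper of course works in $\mathbb{R}^3$), the standard Gaussian kernel, a cutoff of order $2$ (that is, $m=1$), is applied to $w(x)=\sin x$; halving $\varepsilon$ divides the smoothing error by roughly $4$:

```python
import math

def mollified_error(eps, x=math.pi / 2, n=4001):
    """|phi_eps * w (x) - w(x)| for w = sin and a Gaussian kernel of width eps,
    computed with the trapezoidal rule on [-8 eps, 8 eps]."""
    a = -8 * eps
    h = 16 * eps / (n - 1)
    total = 0.0
    for i in range(n):
        y = a + i * h
        phi = math.exp(-y * y / (2 * eps * eps)) / (eps * math.sqrt(2 * math.pi))
        weight = 0.5 if i in (0, n - 1) else 1.0
        total += weight * phi * math.sin(x - y)
    return abs(total * h - math.sin(x))

ratio = mollified_error(0.1) / mollified_error(0.05)
assert 3.8 < ratio < 4.2  # error ~ eps^2 for a cutoff of order m + 1 = 2
```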
By following exactly the same steps as in the previous section, we
finally deduce:
\begin{teorema}
Assume the hypotheses of Theorem \ref{convortmet} and, moreover, that
(\ref{adreghyp}) holds for some integer $m\geq1$ and that
$\varphi$ is a cutoff of order $m+1$. Then, we have for all $i\leq
n$,
\begin{eqnarray*}
&&\mathbb{E} \Bigl[\sup_{t\in[0,T]} \{
|X^{i,\varepsilon_n,R,n}_t-X^i_t|+
|\Phi^{i,\varepsilon_n,R,n}_t-\Phi^i_t| \} \Bigr] \\
&&\qquad\leq
C(T,w_0,\mathbf{g},\alpha) \biggl[\frac{1}{n^{{1}/{2}-\alpha
}(\ln
n)^{{1}/{9}}}+\frac{1}{(\ln n)^{({m+1})/{9}}} \biggr]
\end{eqnarray*}
and
\begin{eqnarray*}
&&\sup_{t\in[0,T],x\in\mathbb{R}^3}\gamma(t) E \bigl( |
\mathbf{K}^{\varepsilon_n}(\tilde{\mu}^{n,\varepsilon
_n,R})(t,x)-\mathbf{u}
(t,x) | \bigr)
\\
&&\qquad\leq\mathbf{C} \biggl(\frac{(\ln n)^{{1}/{3}}}
{n^{{1}/{2}-\alpha}} +\frac{(\ln n)^{{1}/{3}}
}{\sqrt{n}}+\frac{1}{(\ln n)^{({m+1})/{9}}} \biggr),
\end{eqnarray*}
where $\gamma(t)$ was defined in Corollary \ref{convvelfield},
where the
constants now, moreover, depend on $m$.
\end{teorema}
\section*{Acknowledgments}
I would like to thank Mireille Bossy for suggesting to me the use of
the cutoff techniques in \cite{Rav}. I also thank an anonymous referee for
carefully reading this work, and for helpful suggestions that allowed
me to improve its presentation.
\section{Introduction}
\label{sec:introduction}
Recent advances in sensing, actuation, communication, and
computation technologies, as well as innovations in their
integration within increasingly smaller, interconnected devices,
have led to the emergence of a new and fascinating class of
systems, the so-called {\em cyber-physical systems
(CPS)}. Examples of CPS include smart grids, smart factories,
smart transportation, and smart health-care~\cite{broyCPS}.
Similarly to living organisms, CPS operate in an uncertain,
continuously evolving ecosystem, where they compete for a limited
supply of resources. For survival, CPS need to continuously adapt,
so that they react in real time and in an optimal fashion with regard to
an overall survival metric, their partial knowledge, and their bounded
sensing, actuation, communication and computation capabilities.
In order to equip CPS with such exceptional features, various
researchers have started to wonder weather our current CPS
analysis, design and implementation techniques are still
adequate. Going back to Parnas, Chaudhuri and Lezama identified
in a series of intriguing papers~\cite{Parnas85,Chaudhuri10,Chaudhuri11}, the
if-then-else construct as the main culprit for program frailness.
In a simple decision of the form \texttt{if\,(x\,{>}\,a)}, the
predicate $x\,{>}\,a$ acts like a step function (see
Figure~\ref{fig:neuron}), with infinite plateaus to the left and
to the right of the discontinuity point $x\,{=}\,a$. In a typical
mid-size program, the nesting of thousands of if-then-else
conditions leads to a highly nonlinear program, consisting of a
large number of plateaus separated by discontinuous jumps. This
has important implications.
From a CPS-analysis point of view, predicates of the form
$f(x)\,{>}\,a$, where $f(x)$ is a nonlinear analytic function, are a
di\-sas\-ter. They render \emph{CPS analysis undecidable}.
Intuitively, in order to separate all points on one side of the curve
$f(x)\,{=}\,a$, from all on the other side, one needs to forever
decrease the size of a grid, in all the rectangles that are crossed by
the curve. Such a process never terminates, except for linear
functions, where computation is still prohibitive. For this reason, a
series of papers, of Fraenzle, Ratschan, Wang, Gao and
Clarke~\cite{Fraenzle99,Ratschan06,Wang14,Gao14}, proposed the use of
an indifference region $\delta$ (see Figure~\ref{fig:neuron}), and
rewrite the predicates in the form $f(x){-}a\,{>}\,\delta$. This
approach not only makes program analysis (wrt.~reals) decidable, and
computable in polynomial time, but it also aligns it with
the finite computational precision available in today's computers.
From a CPS-design point of view, where one is interested in finding
the values of $a$ for which an optimization criterion is satisfied,
predicates of the form $f(x)\,{>}\,a$ are a nightmare. They render
\emph{CPS optimization intractable}. Intuitively, a gradient-descent
method searching for a local minimum gets stuck in plateaus, where a
small perturbation to the left or to the right still keeps the search
on the same plateau. In order to alleviate this problem, Chaudhuri
and Lezama~\cite{Chaudhuri10} proposed to smoothen the steps by
passing a Gaussian input distribution through the CPS. This can be
thought of as corresponding to the sensing and actuation noise. The
parameters of this distribution control the position of the resulting
sigmoidal curve (see Figure~\ref{fig:neuron}), and its steepness, that
is, the width of the above indifference region $\delta$. The authors,
however, stopped short of proposing a new programming paradigm, and
the step-like functions in the programs to be optimized, posed
considerable challenges in the analysis, as they cut the Gaussians in
very undesirable ways.
\begin{figure}[htbp]
\vspace*{1mm}
\begin{center}
\includegraphics[width=\linewidth]{neuron}
\end{center}
\vspace*{-2mm}
\caption{Sigmoid (blue) and step (black) functions.}
\label{fig:neuron}
\end{figure}
From a CPS-implementation point of view, conditional statements of the
form \texttt{if\,(f(x)\,{>}\,a)} are also a disaster. They render
\emph{CPS frail and nonadaptive}. In other words, a small change in
the environment or the program itself may lead to catastrophic consequences,
as the CPS is not able to adapt. In the AI community, where steps are
called \emph{hard neurons} and sigmoid curves are called \emph{soft
neurons}, adaptation and robustness are achieved by learning a
particular form of Ba\-ye\-si\-an networks with soft-neuron distributions,
called neural networks. Such networks, and in particular deep neural
networks, have recently achieved amazing performance, for example in
the recognition of sophisticated
patterns~\cite{Ciresan12,Erhan10}. This technology looks so promising
that major companies such as Google and Amazon are actively recruiting
experts in this area. However, the learned neural networks are still
met with considerable opposition, as it is very difficult, if not
impossible, to humanly understand them.
Having identified the if-then-else programming construct as the
major source of trouble in the analysis, design and
implementation of CPS, the following important question still
remains: \emph{Is there a simple, humanly understandable
way to develop robust and adaptive CPS?} It is our belief that
such a way not only exists, but is also amazingly simple!
First, as program skeletons express domain knowledge and
developer intuition, they are here to stay. However, one needs to
replace hard neurons with their soft counterparts. We call such
program statements neural if-then-else statements, or
nif-then-else for short. They represent probabilistic, probit
distributions, and the decision to choose the left or the right
branch is sampled from their associated Gaussian
distributions. As a consequence, a program with nif statements
represents not only one, but a very large (up to the computational
precision) set of correct executions.
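The intended sampling semantics can be sketched as follows (illustrative Python, not actual nif syntax): the threshold $a$ of a predicate $x\,{>}\,a$ is drawn from $\mathcal{N}(\mu,\sigma^2)$, so the probability of taking the first branch is the Gaussian CDF evaluated at $x$, a sigmoid instead of a step:

```python
import math
import random

def nif(x, mu, sigma):
    """Neural if: sample the threshold a ~ N(mu, sigma^2) and test x > a, so
    P(first branch) = cdf_{mu, sigma^2}(x)."""
    return random.gauss(mu, sigma) < x

random.seed(0)
mu, sigma, x = 0.0, 1.0, 0.5
freq = sum(nif(x, mu, sigma) for _ in range(20000)) / 20000
cdf = 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))
assert abs(freq - cdf) < 0.02  # empirical branch frequency matches the probit
```

Over many executions, such a program therefore represents a whole distribution of runs rather than a single one.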
Second, the partial knowledge of such a program is encoded as a
Bayesian network, expressing the conditional dependencies among
the Gaussian distributions occurring within the nif
statements. These dependencies may be given, learned through a
preliminary phase and continuously improved during deployment, or
inferred through optimization techniques. In this case, learning
and optimization are considerably simplified, as the program is
by definition smooth. The depth and the branching structure of
these Bayesian networks reflect the sequential and parallel
nesting within the program, which is an essential asset in
program understanding.
For example, a parallel algorithm for pattern recognition may
possess a quad-tree Bayesian structure, hierarchically reflecting
the neighbourhood relation among subimages. The depth of the
network is determined by the height of the tree. Similarly, a
purely sequential program, representing successive decisions,
will have a very linear Bayesian structure, whose depth is
determined by the number of decisions.
In order to validate our new paradigm, we use the parking example
from~\cite{LezamaSlides10}. The goal of this example was to
automatically learn the parameters of a program skeleton,
intuitively expressing the control as follows: Go backwards up to
a point $a_1$, turn up to an angle $b_1$, go backwards up to
$a_2$, turn again up to $b_2$ and finally go backwards up to
$a_3$. Since this program uses classical if statements, it is not
adaptive, and a small perturbation such as a slippery
environment, may lead to an accident. We therefore rewrite the
program with nif statements, and learn the conditional Gaussian
network associated with the predicates within these statements.
Using its sensors, the control program is now able to detect the
actual stopping or turning points, and to adequately sample its
next targets. Although this program is written once and for all,
it is able to adapt to a varying environment.
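One simple way to realize such adaptive sampling is a linear-Gaussian scheme, in which each sampled target is corrected by the deviation actually observed at the previous step. The sketch below is purely illustrative Python: all means, variances and the correction gain are invented placeholders, not the parameters learned for the rover.

```python
import random

def sample_parking_plan(seed=1):
    """Sample a command sequence (a1, b1, a2, b2, a3) from a linear-Gaussian
    network: the target of each primitive is corrected by the previously
    observed deviation (all numbers below are made-up placeholders)."""
    rng = random.Random(seed)
    means = [1.0, 30.0, 0.8, -30.0, 0.5]   # nominal distances (m) and angles (deg)
    sigmas = [0.05, 2.0, 0.05, 2.0, 0.05]  # execution noise per primitive
    beta = -0.5                            # correction gain on the last deviation
    plan, deviation = [], 0.0
    for mu, sigma in zip(means, sigmas):
        target = mu + beta * deviation        # conditional mean given the parent
        achieved = rng.gauss(target, sigma)   # noisy execution of the primitive
        deviation = achieved - mu
        plan.append(achieved)
    return plan

plan = sample_parking_plan()
assert len(plan) == 5
assert plan == sample_parking_plan()  # reproducible for a fixed seed
```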
The main contributions of the work presented in this paper can be
therefore briefly summarized as follows:
\vspace*{-3.5mm}
\begin{enumerate}
\item We propose a new programming paradigm for the development of
smooth and adaptive CPS in which:
\vspace{-1mm}
\begin{itemize}
\item Troublesome ifs are replaced by neural ifs, thus improving
analysis, design and implementation,
\item Partial knowledge is encoded within a learned Ba\-ye\-sian
network, with Gaussian distributions.
\end{itemize}
\vspace*{-2mm}
\item We demonstrate the versatility of this programming paradigm
on a parking example using Pioneer rovers. The associated
youtube videos are available at~\cite{neuralVideos}.
\end{enumerate}
\vspace*{-4mm} Given obvious space limitations, we do not address
CPS-analysis and CPS-design (optimization) in this paper. They will be
the subject of a series of follow-up papers.
The rest of the paper is organized as follows. In
Section~\ref{sec:background} we introduce Bayesian inference, Bayesian
networks, and Gaussian and Probit distributions. In
Section~\ref{sec:neural} we introduce our programming paradigm. In
Section~\ref{sec:learning} we discuss how to learn the Gaussian
Bayesian network. In Section~\ref{sec:experiments} we discuss our
implementation platform and the associated results. In
Section~\ref{sec:related} we discuss related work. Finally in
Section~\ref{sec:conclusion} we give our concluding remarks and
directions for future work.
\vspace*{2mm}
\section{Background}
\label{sec:background}
The main tool for logical inference is the \emph{Modus-Ponens} rule:
Assuming that proposition $A$ is true, and that from the truth of $A$
one can infer the truth of proposition $B$, one can conclude that
propositions $A$ and $B$ are both true. Formally:
\[
A \wedge (A \rightarrow B) ~=~ A \wedge B ~=~ B \wedge (B \rightarrow A)
\]
In probability theory, the uncertainty in the truth of a proposition
(also called an event) is expressed as a probability, and implication
between propositions is expressed as a conditional probability. This
leads to a probabilistic extension of Modus-Ponens, known as the
\emph{Bayes' rule}. Formally:
\[
P(A)~ P(B \mid A) ~=~ P(A \wedge B) ~=~ P(B)~ P(A \mid B)
\]
This rule, consistent with logic, is the main mechanism for
probabilistic inference~\cite{russellnorvig}. It allows one to reason
both in a forward, or causal, way and in a backward, or diagnostic,
way. For example, if $B$ is causally implied by $A$, then the left term
in the above equation denotes a causal relation, and the right term a
diagnostic relation. Equating the two allows one to use causal
information (or observed events) for diagnostic inference.
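As a tiny worked example of diagnostic inference (illustrative Python with made-up numbers): let $A$ be the event that a device is faulty, with prior $P(A)=0.01$, and let $B$ be an alarm with $P(B\mid A)=0.9$ and $P(B\mid\neg A)=0.05$; Bayes' rule then gives $P(A\mid B)\approx0.154$:

```python
def posterior(p_a, p_b_given_a, p_b_given_not_a):
    """Diagnostic inference via Bayes' rule: P(A | B) from causal quantities."""
    p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)
    return p_b_given_a * p_a / p_b

p = posterior(0.01, 0.9, 0.05)
assert abs(p - 0.009 / 0.0585) < 1e-12  # about 0.154: most alarms are false
```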
In real-world systems, causal relations are usually chained and can
form sophisticated structures.
\vspace*{-4mm}
\paragraph{Bayesian Networks}
A probabilistic system is completely characterized by the joint
probability distribution of all of its (possibly noisy)
components. However, the size of this distribution typically explodes,
and its use becomes intractable. In such cases, the Bayes' rule,
allows to successively decompose the joint distribution according to
the conditional dependences among its \emph{random variables
(RV)}. These are both discrete or continuous variables, which
associate to each value (or infinitesimal interval) in their range,
the rate of its occurrence. Networks of conditional dependencies
among random variables are known as \emph{Bayesian networks (BN)}, and
they have a very succinct representation.
Syntactically, a BN is a directed acyclic graph $G\,=\,(V,E)$, where
each vertex $v_i \in V$ represents a random variable $X_i$ and each
edge $e_{ij} \in E$ represents a conditional dependence of the
variable $X_j$ on the variable $X_i$. To avoid the complications
induced by the use of the joint probability distribution (or density),
each variable $X_i$ is associated with a \emph{conditional probability
distribution (CPD)} that takes into account dependencies only
between the variable and its direct
parents~\cite{russellnorvig,Koller09}. Such a compact representation
keeps information about the system in a distributed manner and makes
reasoning tractable even for a large number of variables.
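This factorization is easy to sketch (illustrative Python): for a three-node chain $X_1\to X_2\to X_3$ with Boolean variables and made-up CPDs, the product of the local CPDs is itself a valid joint distribution:

```python
from itertools import product

# Made-up CPDs for a chain X1 -> X2 -> X3; outer keys are parent values.
p_x1 = {True: 0.3, False: 0.7}
p_x2 = {True: {True: 0.8, False: 0.2}, False: {True: 0.1, False: 0.9}}
p_x3 = {True: {True: 0.6, False: 0.4}, False: {True: 0.5, False: 0.5}}

def joint(x1, x2, x3):
    """P(x1, x2, x3) = P(x1) P(x2 | x1) P(x3 | x2)."""
    return p_x1[x1] * p_x2[x1][x2] * p_x3[x2][x3]

total = sum(joint(*xs) for xs in product([True, False], repeat=3))
assert abs(total - 1.0) < 1e-12  # the factorized joint is a valid distribution

p_x2_true = sum(joint(x1, True, x3) for x1 in (True, False) for x3 in (True, False))
assert abs(p_x2_true - 0.31) < 1e-12  # marginal P(X2) obtained by summation
```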
Although in many interesting applications the variables of a BN have
discrete distributions (e.g.~in fault detection, a device might have
only a finite number of diagnosable errors, caused by a finite set of
faults), in many other applications, continuous random variables
naturally describe the entities of interest. For instance, in our
\emph{parallel parking} running example, a Pioneer rover starting from
an initial location, needs to execute a sequence of motion primitives
(e.g.~driving or turning forward or backward with fixed speed for a
particular distance $X_i$ or angle $X_j$), which will result in parking
the rover in a dedicated parking spot.
\vspace*{-4mm}
\paragraph{Gaussian Distributions}
Any real measurement of a physical quantity is affected by
noise. Hence, the distances and the angles occurring in our parking
example are naturally expressed as continuous RVs. In this paper we
assume that variables have \emph{Normal}, also called
\emph{Gaussian distributions (GD)}. These distributions naturally
occur from the mixing of several (possibly unobservable) RVs,
and they have mathematical properties making them very attractive.
A \emph{univariate Gaussian distribution (UGD)} is denoted by
$\mathcal{N}(\mu,\sigma^2)$ and it is characterized by two parameters:
The \emph{mean} $\mu$ and the \emph{variance} $\sigma^2$. In our
example, the desired distance in the first motion is associated with
$\mu$, which is perturbed by noise with variance $\sigma^2$. The
\emph{Gaussian probability density} of a RV $X$ with values $x$ is
defined as follows:
\begin{equation}
\texttt{pdf}_{\mu, \sigma^2}(x) =
\frac{1}{\sqrt{2\pi} \sigma}exp \left(
-\frac{(x - \mu)^2}{2 \sigma^2} \right).
\end{equation}
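The density above is straightforward to evaluate (illustrative Python); for instance, the standard Normal density peaks at $1/\sqrt{2\pi}\approx0.399$:

```python
import math

def gaussian_pdf(x, mu, sigma2):
    """Density of N(mu, sigma2) at x, matching the formula above."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma2)) / math.sqrt(2 * math.pi * sigma2)

assert abs(gaussian_pdf(0.0, 0.0, 1.0) - 1 / math.sqrt(2 * math.pi)) < 1e-12
assert gaussian_pdf(1.0, 0.0, 1.0) < gaussian_pdf(0.0, 0.0, 1.0)  # peak at the mean
```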
Parallel parking includes a sequence of motion primitives that are
mutually dependent. To express these dependencies we use a
\emph{multivariate Gaussian Distribution (MGD)} \cite{grimmett_book},
which generalizes the Gaussian distribution to multiple dimensions.
For an $n$-dimensional vector of random variables $\mathbf{X}$, the
probability density function is characterized by an $n$-dimensional
mean vector $\mathbf{\mu}$ and a symmetric positive definite
covariance matrix $\mathbf{\Sigma}$. To express the probability
density of a multivariate Gaussian distribution, we use the inverse of
the covariance matrix, called the precision matrix
$\mathbf{T}=\mathbf{\Sigma}^{-1}$, which will be helpful later
during the learning phase. The probability density can then be written
as follows~\cite{Neapolitan:2003}:
\begin{equation}
\texttt{pdf}_{\mathbf{\mu}, \mathbf{T}}(\mathbf{x}) =
\frac{1}{(2\pi)^{n/2}(det(\mathbf{T}^{-1}))^{1/2}}exp \left(
-\frac{1}{2} \Delta^2(\mathbf{x}) \right),
\end{equation}
where $\Delta^2(\mathbf{x}) = (\mathbf{x} -
\mathbf{\mu})^{T}\mathbf{T}(\mathbf{x} - \mathbf{\mu})$.
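A minimal two-dimensional instance of this density, written directly in terms of the precision matrix $\mathbf{T}$ (illustrative Python); with a diagonal precision, the density must factorize into two univariate Gaussian densities, which gives a simple sanity check:

```python
import math

def mvn_pdf_2d(x, mu, T):
    """Density of a 2-D Gaussian parametrized by the precision matrix T = Sigma^{-1}."""
    d0, d1 = x[0] - mu[0], x[1] - mu[1]
    delta2 = T[0][0] * d0 * d0 + (T[0][1] + T[1][0]) * d0 * d1 + T[1][1] * d1 * d1
    det_T = T[0][0] * T[1][1] - T[0][1] * T[1][0]  # det(T^{-1}) = 1 / det(T)
    return math.sqrt(det_T) / (2 * math.pi) * math.exp(-0.5 * delta2)

T = [[4.0, 0.0], [0.0, 1.0]]                       # Sigma = diag(1/4, 1)
lhs = mvn_pdf_2d([0.3, -0.2], [0.0, 0.0], T)
pdf1 = math.sqrt(4.0 / (2 * math.pi)) * math.exp(-0.5 * 4.0 * 0.3 ** 2)
pdf2 = math.exp(-0.5 * 0.2 ** 2) / math.sqrt(2 * math.pi)
assert abs(lhs - pdf1 * pdf2) < 1e-12              # factorization sanity check
```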
A \emph{Gaussian BN (GBN)} is a BN in which the random variable $X$
associated with each node of the network has a Gaussian
distribution, conditional on its parents $X_i$.
\vspace*{-4mm}
\paragraph{Probit Distributions}
In order to smoothen the decisions in a program, we need to choose a
function without plateaus and discontinuities. Since we operate with
Gaussian random variables, the natural candidate is their
\emph{cumulative distribution function (CDF)}. This is an
\emph{S}-shaped function or a \emph{sigmoid} (see
Figure~\ref{fig:nwhile_sampling}), whose steepness is defined by
$\sigma^2$, where $\operatorname{erf}$ denotes the error function:
\begin{equation}
\texttt{cdf}_{\mu, \sigma^2}(x) = \frac12\left(1 +
  \operatorname{erf}\left( \frac{x-\mu}{\sigma\sqrt{2}}\right)\right).
\end{equation}
For a particular value $x$ of $X$, the function $\texttt{cdf}_{\mu,
\sigma^2}(x)$ returns the probability that a random sample from the
distribution $\mathcal{N}(\mu, \sigma^2)$ will belong to the interval
$(-\infty,x]$.
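Since the standard library exposes the error function, this CDF is directly computable; the sketch below is ours:

```python
import math

def gaussian_cdf(x, mu, sigma2):
    """P(X <= x) for X ~ N(mu, sigma2), via the error function."""
    return 0.5 * (1 + math.erf((x - mu) / math.sqrt(2 * sigma2)))

# The sigmoid passes through 0.5 at x = mu and steepens as sigma2 shrinks.
```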
Since the sensors and actuators of the Pioneer rover are noisy, the
trajectories it follows are each time different from the optimal one
(assuming that such difference is tolerated by the parking
configuration), even if the optimal trajectory of the parking example
is known. To be adaptive we use a combined approach: we incorporate
probabilistic control structures in the program (introduced in the
Section~\ref{sec:neural}) and sample commands from a GBN, whose
parameters were learned experimentally. To detect changes in the
environment and get more accurate position estimates, data from
various sensors are combined with a sensor fusion algorithm.
\vspace*{2mm}
\section{Neural Code}
\label{sec:neural}
Traditional inequality relations (e.g.~$>$, $\geq$, $\leq$, $<$)
define a sharp (or firm) boundary on the satisfaction of a condition,
which represents a step function (see Figure~\ref{fig:neuron}). Using
firm decisions in a program operating on Normal RVs cuts
distributions in half, leaving unnormalized and invalid PDFs (see
Figure~\ref{fig:nif_cut}: The upper right plot shows the approximation
of the PDF after passing a Normal RV through a traditional conditional
statement). Hence, to keep a proper PDF after passing a Normal RV
through an \texttt{if} or a \texttt{loop} statement, one needs to
re-normalize the PDF.
In order to avoid re-shaping of probability density each time after a
variable is passed through a condition, we introduce a special type of
control structure called \emph{neural if}, or \texttt{nif} for short.
The name is coined to express the key novelty of our approach: We
propose to use smooth conditionals $\texttt{cdf}_{\mu, \sigma^2}(x)$
instead of firm ones (see Figure~\ref{fig:neuron}). A \texttt{nif}
statement operates on an inequality relation and a variance
$\sigma^2$, and decides which branch should be taken:
\texttt{nif(x~$\#$~y,$\sigma^2$)}, where $\#$ can be replaced with
($>$, $\geq$, $\leq$ or $<$) and $\sigma^2$ represents the uncertainty
of making a decision. For the case when $\sigma^2 \rightarrow 0$ (no
uncertainty) we require the \texttt{nif} statement to behave as a
traditional \texttt{if} statement.
The evaluation of the \texttt{nif()} statement is explained using
the following example, where \texttt{x}, \texttt{a} $\in \mathcal{R}$
and $\sigma^2 \in \mathcal{R^+}$.
\vspace{2mm}
\begin{small}
\begin{lstlisting}[style=neural,mathescape]
nif( x >= a, $\sigma^2$) S1 else S2
\end{lstlisting}
\end{small}
\vspace{-1mm}
The evaluation is done in two steps: (i)~Find an $\mathcal{R}$
interval $I$ representing the confidence of making the decision;
(ii)~Check if a sample from the GD $\mathcal{N}(0,\sigma^2)$ belongs to $I$.
Since the input RV has a GD, and a GD is used to evaluate the
condition, the result is a product of two GDs, which is also a GD
scaled by some constant factor $k$. To find $I$ in (i), we estimate
the difference \texttt{diff(x,a)} between \texttt{x} and \texttt{a}.
For the general case \texttt{nif(x~$\#$~a,$\sigma^2$)}, with arbitrary
$\#$, the difference \texttt{diff(x,a)} is defined as below, where
$\epsilon$ represents the smallest positive machine-representable number.
\begin{equation*}
\texttt{diff(x,a)} =
\begin{cases}
\texttt{x - a}-\epsilon & \text{if}\; \#\; \text{is}\; >,\\
\texttt{x - a} & \text{if}\; \#\; \text{is}\; \geq,\\
\texttt{a - x}-\epsilon & \text{if}\; \#\; \text{is}\; <,\\
\texttt{a - x} & \text{if}\; \#\; \text{is}\; \leq.
\end{cases}
\end{equation*}
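In code, $\epsilon$ can be approximated by the floating-point machine epsilon; the following Python sketch (ours) mirrors the case analysis above:

```python
import sys

def diff(x, a, op):
    """Signed margin of the comparison `x op a`.  A small eps separates
    the strict and non-strict variants, standing in for the paper's
    epsilon (here: the machine epsilon, an assumption of this sketch)."""
    eps = sys.float_info.epsilon
    return {'>': x - a - eps,
            '>=': x - a,
            '<': a - x - eps,
            '<=': a - x}[op]
```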
Informally, our confidence is characterized by the difference: The
larger \texttt{diff(x,a)} is, the larger is the probability of
executing \texttt{S1}. The probability to execute \texttt{S1} is
given by $\texttt{cdf}_{0,\sigma^2}(\texttt{diff(x,a)})$ and
is used to obtain the interval $[q_1; q_2]$ by calculating two
symmetric quantiles $q_1$ and $q_2$ such that:
\begin{equation}
  \int_{q_1}^{q_2} \texttt{pdf}_{0,\sigma^2}(x)\, dx =
\texttt{cdf}_{0, \sigma^2}(\texttt{diff(x,a)}).
\label{eq:quantiles}
\end{equation}
In the second step a random sample from the distribution
$\mathcal{N}(0,\sigma^2)$ is checked to belong to the interval
$[q_1;q_2]$. If it is within the interval, \texttt{S1} is executed,
otherwise \texttt{S2}. At this point the probability value to execute
\texttt{S1} is influenced by the variance $\sigma^2$ (see
Figure~\ref{fig:nwhile_sampling}).
Hence, the dependence is twofold: \texttt{diff(x,a)} shows how
confident we are in making a decision, and $\sigma^2$
characterizes the uncertainty.
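Putting the two steps together, a possible Python sketch of the \texttt{nif} decision (ours; the quantile is obtained by bisection to stay dependency-free, and testing a fresh sample against the symmetric interval of mass $p$ succeeds with probability exactly $p$):

```python
import math
import random

def gaussian_cdf(x, sigma2):
    """CDF of N(0, sigma2) via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2 * sigma2)))

def quantile(p, sigma2):
    """Quantile of N(0, sigma2) found by bisection."""
    lo, hi = -1e3, 1e3
    for _ in range(200):
        mid = (lo + hi) / 2
        if gaussian_cdf(mid, sigma2) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def nif_takes_then_branch(d, sigma2, rng=random):
    """Two-step nif decision for a precomputed margin d = diff(x, a):
    (i) find the symmetric interval [-q2, q2] whose mass equals cdf(d);
    (ii) check whether a fresh N(0, sigma2) sample lands inside it."""
    p = gaussian_cdf(d, sigma2)          # probability of the then-branch
    q2 = quantile((1 + p) / 2, sigma2)   # mass of [-q2, q2] equals p
    sample = rng.gauss(0, math.sqrt(sigma2))
    return -q2 <= sample <= q2
```

The quantile call reproduces the intervals discussed in the worked example, e.g.\ roughly $[-1.095;\,1.095]$ for a margin of $1$ and $\sigma^2 = 0.4^2$.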
For the case $\sigma^2\rightarrow 0$ the \texttt{nif} statement is
equivalent to the \texttt{if} statement. For infinitesimal $\sigma^2$
the PDF is expressed as the Dirac delta function $\delta(x)$, which has the
following properties: \\[2mm]
\hspace*{6mm}(i)\quad $\delta(x) = +\infty$ if $x = 0$ else $0$\\[1mm]
\hspace*{6mm}(ii)\quad $\int_{-\infty}^\infty \delta(x)\,dx = 1$.\\[2mm]
The Dirac function essentially concentrates all the PD in a single
point $x\,{=}\,0$. In this case the $\texttt{cdf}_{0,
\sigma^2\rightarrow 0}(x)$ becomes a step function (see
Figure~\ref{fig:neuron}). We consider two cases, as follows:
(i)~$\texttt{diff(x,a)}\,{\geq}\,0$ and
(ii)~$\texttt{diff(x,a)}\,{<}\,0$. In the first case the probability
of executing \texttt{S1} is equal to $1$, hence the interval is
$(-\infty; +\infty)$ and includes every possible sample; for the
second case the probability of taking \texttt{S1} is $0$, hence the
interval is empty and cannot contain any sample. An \texttt{if}
statement is an \texttt{nif} statement without uncertainty.
Let us illustrate the concept on a concrete example. Suppose that in
the current execution $\texttt{x} = 1$ and $\texttt{a}=0$.
Figure~\ref{fig:nwhile_sampling} illustrates how decisions are made if
$\sigma^2$ is: $0.4^2, \pi, 4^2$.
Since \texttt{diff(x,a) = 1}, the probability of executing \texttt{S1}
is defined by $\texttt{cdf}_{0,\sigma^2}(1)$ and for the above cases
is equal to 0.994, 0.714 and 0.599 respectively. The intervals for the
above cases are as follows: [-1.095;1.095], [-1.890;\,1.890] and
[-3.357;\,3.357]. In the second step we sample from the distributions
with the corresponding $\sigma^2$ ($\mathcal{N}(0,0.4^2)$,
$\mathcal{N}(0,\pi)$ and $\mathcal{N}(0,4^2)$) and check whether the
value lies within the intervals.\vspace*{1mm}
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.95\linewidth]{nwhile_sampling.pdf}
\end{center}
\vspace*{-5mm}
\caption{CDFs, PDFs and the quantiles for $x = 1$ }
\label{fig:nwhile_sampling}
\end{figure}
So far we have been concerned with single samples $x \sim \mathcal{N}({\mu,
\sigma^2})$. Figure~\ref{fig:nif_cut} illustrates what happens to
the distributions: It shows the difference between passing a GD RV \texttt{x $\sim$
$\mathcal{N}(0,0.1)$} through the statement \texttt{if(x >= 0.15)}
and through \texttt{nif(x >= 0.15, 0.1)}. With our approach the GD is not
cut in undesirable ways, and it maintains its GD form after passing
through the \texttt{nif} statement.\vspace*{1mm}
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.9\linewidth]{nif_cut.pdf}
\end{center}
\vspace*{-4mm}
\caption{Passing RVs through conditions}
\label{fig:nif_cut}
\end{figure}
We can introduce now the confidence-uncertainty trade-off to loops.
The \emph{neural while} statement, or \texttt{nwhile} for short, is an
extension of a traditional \texttt{while} statement which incorporates
uncertainty. The statement
$\texttt{nwhile}(\;\texttt{x}~\#~\texttt{a}, \sigma^2) \{ P_1 \}$
takes an inequality relation and variance $\sigma^2$ and executes the
program $P_1$ according to the following rule: (1)~Compute
$\texttt{diff}(\texttt{x}~\#~\texttt{a})$ and obtain quantiles $q_1$
and $q_2$ according to the Equation~\ref{eq:quantiles}; (2)~Check if a
random sample $x \sim \mathcal{N}(0,\sigma^2)$ is within the interval
$[q_1; q_2]$; (3)~If the sample belongs to the interval, execute $P_1$
and go to step (1), else exit.
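The three-step rule can be sketched compactly in Python (ours). Here we exploit that checking a $\mathcal{N}(0,\sigma^2)$ sample against the symmetric quantile interval of mass $p$ succeeds with probability exactly $p$, so a uniform draw suffices; this is a simplification of, not a departure from, the rule above:

```python
import math
import random

def nwhile(margin, sigma2, body, rng=random):
    """Probabilistic while loop.  `margin` returns the current value of
    diff(x, a); `body` is the loop body P1.  Each round, the loop
    continues with probability cdf_{0, sigma2}(margin())."""
    while True:
        p = 0.5 * (1 + math.erf(margin() / math.sqrt(2 * sigma2)))
        if rng.random() >= p:   # exit with probability 1 - p
            break
        body()
```

For $\sigma^2 \rightarrow 0$ the continuation probability degenerates to $0$ or $1$ and the loop behaves like a traditional \texttt{while}.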
Since the \texttt{nif} and \texttt{nwhile} statements subsume the
behavior of traditional \texttt{if} and \texttt{while} statements (the
case $\sigma^2\rightarrow0$), we use them to \emph{define an
imperative language with probabilistic control structures}. Binary
operators $bop$ (such as addition and multiplication), unary operators
$uop$ (negation), and constants $c$ are used to form expressions
$E$. A program $P$ is a statement $S$ or combination of statements.\vspace*{1mm}
\begin{equation*}
\begin{split}
E ::=&\quad \texttt{x}_i\; |\; c\; | \; bop( E_1, E_2) \; | \; uop(E_1)\\
S ::=&\quad \texttt{skip} \; | \; \texttt{x}_i := E \; | \; S_1; S_2
\;| \; \texttt{nif} (\,\texttt{x}_i\, \# \, c , \sigma^2\,)\; S_1\; \texttt{else}\; S_2 \; | \\
&\quad \texttt{nwhile}(\,\texttt{x}_i\, \# \, c,\,\sigma^2)\{\; S_1\; \}
\end{split}
\end{equation*}
\vspace*{1mm}In order to define the denotational semantics for the \texttt{nif} and
the \texttt{nwhile} statements, we use
$\texttt{check}(\texttt{x}_i,c,\sigma^2, \#)$, a function which:
(1)~Computes the difference $\texttt{diff}(\texttt{x}_i, \texttt{c})$,
(2)~Finds quantiles $q_1$ and $q_2$ (Equation~\ref{eq:quantiles}), and
(3)~Checks if a sample $x~\sim~\mathcal{N}(0, \sigma^2)$ belongs to
the interval $[q_1;q_2]$. If it does, it returns value $1$, otherwise
it returns value $0$. The denotational semantics of neural programs
is then defined as follows:\vspace*{1mm}
\begin{equation*}
\begin{split}
&\llbracket \texttt{skip} \rrbracket(\texttt{x}) = \;\texttt{x} \\[4pt]
&\llbracket \texttt{x}_i := E\;\rrbracket(\texttt{x}) = \;
\texttt{x}[\llbracket E
\rrbracket(\texttt{x})\mapsto\texttt{x}_i]\\[4pt]
&\llbracket\; S_1; S_2\;\rrbracket(\texttt{x}) = \; \llbracket S_2
\rrbracket (\llbracket S_1 \rrbracket (\texttt{x})) \\[4pt]
&\llbracket \texttt{nif}(\,\texttt{x}_i\, \# \, c, \sigma^2)\;
S_1\; \texttt{else}\; S_2\rrbracket(\texttt{x}) = \\
&\quad\llbracket
  \texttt{check}(\texttt{x}_i,c,\sigma^2,\#)\rrbracket(\texttt{x})
\llbracket S_1 \rrbracket(\texttt{x}) \quad{+} \\
&\quad\llbracket
  \neg \texttt{check}(\texttt{x}_i,c,\sigma^2,\#)\rrbracket(\texttt{x})
\llbracket S_2 \rrbracket(\texttt{x})\\[4pt]
&\llbracket \texttt{nwhile}(\,\texttt{x}_i\, \# \, c,\,\sigma^2)\{\; S_1\; \}\rrbracket(\texttt{x}) = \\
&\quad\texttt{x}\llbracket \neg \texttt{check}(\texttt{x}_i, c,
\sigma^2, \#\,)\rrbracket(\texttt{x}) \quad{+}\\
&\quad\llbracket\texttt{check}(\texttt{x}_i, c, \sigma^2,
\#\,)\rrbracket(\texttt{x}) \llbracket \texttt{nwhile}(\,\texttt{x}_i\, \#
\, c,\,\sigma^2)\{\; S_1\; \}\rrbracket(\llbracket S_1 \rrbracket
\texttt{x})
\end{split}
\end{equation*}
\vspace*{-3mm}We are now ready to write the control-program skeleton for the
parallel parking task of our Pioneer rover, as a sequence of
\texttt{nwhile} statements, as shown in Listing~\ref{lst:parking}.
Each \texttt{nwhile} corresponds to executing one motion primitive of
the informal description in Section~\ref{sec:introduction}. The
functions \texttt{moving()} and \texttt{getPose()} are output and
input statements,
which, for simplicity, were omitted from the denotational semantics.
\begin{small}
\begin{lstlisting}[caption={Parallel parking program
skeleton\label{lst:parking}},style=neural,mathescape]
nwhile(currentDistance < targetLocation1, sigma1){
moving();
currentDistance = getPose();
}
updateTargetLocations();
nwhile(currentAngle < targetLocation2, sigma2){
turning();
currentAngle = getAngle();
}
updateTargetLocations();
nwhile(currentDistance < targetLocation3, sigma3){
moving();
currentDistance = getPose();
}
updateTargetLocations();
nwhile(currentAngle < targetLocation4, sigma4){
turning();
currentAngle = getAngle();
}
updateTargetLocations();
nwhile(currentDistance < targetLocation5, sigma5){
moving();
currentDistance = getPose();
}
\end{lstlisting}
\end{small}
\vspace{-2mm}
The versatility of this approach is that the program skeleton is
written only once and comprises an infinite number of controllers. The
question we need to answer next is:\\[2mm]
\quad\emph{What are the distances and
turning angles for each action and how uncertain are we about each
of them?}\\[2mm]
To find the unknown parameters from Listing~\ref{lst:parking}, namely
the target locations \texttt{targetLocation}s and variances
\texttt{sigma}s, we use the learning procedure described in
Section~\ref{sec:learning}.
\vspace*{2mm}
\section{Bayesian-Network Learning}
\label{sec:learning}
Parking can be seen as a sequence of moves and turns, where each
action depends on the previous one. For example, the turning angle
typically depends on the previously driven distance. Due to sensor
noise and imprecision, inertia and friction forces, and also many
possible ways to perform a parking task starting from one initial
location, we assume that the dependence between actions is
probabilistic, and in particular, the RVs are distributed according to
Gaussian distributions (GD). We represent the dependencies between
actions as the GBN in Figure~\ref{fig:GBN}, where $l_i$ or $\alpha_j$
denotes a distance or a turning angle of the corresponding action and
$b_{ij}$ is a conditional dependence between consecutive actions.\vspace*{1mm}
\begin{figure}[htbp]
\begin{center}
\input{graph}
\end{center}
\vspace*{-3mm}
\caption{Gaussian Bayesian Network for parking}
\label{fig:GBN}
\end{figure}
In order to learn the conditional probability distributions of the GBN
in Figure~\ref{fig:GBN}, and to fill in the \texttt{targetLocation}s
and the \texttt{sigma}s in Listing~\ref{lst:parking}, we record
trajectories of the successful parkings done by a human expert.
Figure~\ref{fig:trajectories} shows example trajectories used during
the learning phase.\vspace*{1mm}
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.9\linewidth]{trajectories}
\end{center}
\vspace*{-4mm}
\caption{Example trajectories for the parking task}
\label{fig:trajectories}
\end{figure}
We then use the fact that any GBN can be converted to an
MGD~\cite{Neapolitan:2003} in our learning routine.
Learning the parameters of the GBN can be divided into three steps:
\begin{enumerate}
\vspace*{-2mm}\item Convert the GBN to the corresponding MGD,
\vspace*{-2mm}\item Update the precision matrix $\mathbf{T}$ of the MGD,
\vspace*{-2mm}\item Extract $\sigma^2$s and conditional dependences from $\mathbf{T}$.
\end{enumerate}
\vspace*{-2mm}{\em 1. Conversion step.} To construct the MGD we need
to obtain the mean vector $\mu$ and the precision matrix
$\mathbf{T}$. The mean vector $\mu$ comprises the means of all the
variables from the GBN. To find the symbolic form of the precision
matrix, we use the recursive notation of \cite{HeckermanG95}, where
the values of the coefficients $b_i$ will be learned in the update
step below.\vspace*{-2mm}
\begin{equation}
\mathbf{T}_{i+1} =\left( \begin{matrix}
\textbf{T}_{i} +
\frac{\mathbf{b}_{i+1}\mathbf{b}_{i+1}^T}{\sigma_{i+1}^2} &
-\frac{\mathbf{b}_{i+1}}{\sigma_{i+1}^2} \\
-\frac{\mathbf{b}_{i+1}^T}{\sigma_{i+1}^2} &
\frac{1}{\sigma_{i+1}^2} \end{matrix} \right)
\label{eq:T}
\end{equation}
\vspace*{-2mm}
In order to apply Equation~\ref{eq:T} we define an ordering starting
with the initial node $l_1$. Its precision matrix is equal to:
\vspace*{-3mm}
\begin{equation*}
T_{1} = \frac{1}{\sigma^2_1}.
\end{equation*}
\vspace*{-3mm}
The vector $\mathbf{b}_{i}$ comprises the dependence coefficients of
node $i$ on all nodes preceding it in the ordering (zero entries for
non-parents). For example, the dependence vector for node $\alpha_2$
in Figure~\ref{fig:GBN} equals:
\vspace*{-3mm}
\begin{equation*}
\mathbf{b}_{4} = \left(
\begin{matrix}
0 \\
0 \\
b_{43} \\
\end{matrix}
\right)
\end{equation*}
\vspace*{-3mm}
After applying Equation~\ref{eq:T} to each node in the GBN, we
obtain the precision matrix $\mathbf{T}_7$, shown in
Equation~\ref{eq:precision}. Since each action in the parking task
depends only on the previous one (for example, in Figure~\ref{fig:GBN}
the turning angle depends on the previously driven distance only), we
can generalize the precision matrix to an arbitrary number of
moves. For a GBN with $k$ moves, all non-zero elements of the
precision matrix $\mathbf{T} \in \mathcal{R}^{k \times k}$ can be found according to
Equation~\ref{eq:T_gen}, where $\mathbf{T}(r,c)$ is the $c$-th
element in the $r$-th row of the precision matrix, with indices
starting from one.
\vspace*{-3mm}
\begin{equation}
\begin{split}
\mathbf{T}(i, i-1) =-\frac{b_{i(i-1)}}{\sigma^2_{i}},\\
\mathbf{T}(i, i) =\frac{1}{\sigma^2_{i}} + \frac{b^2_{(i+1)i}}{\sigma^2_{i+1}},\\
\mathbf{T}(i, i+1) =-\frac{b_{(i+1)i}}{\sigma^2_{i+1}},
\end{split}
\label{eq:T_gen}
\end{equation}
\vspace*{-3mm}
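The generalized formula lends itself to a direct construction routine; a Python sketch (ours, with 0-based indices, so that \texttt{b[i]} stands for $b_{(i+1)i}$):

```python
def precision_matrix(b, sigma2):
    """Tridiagonal precision matrix of a chain-structured GBN.
    sigma2[i] is the variance of node i; b[i] is the coefficient
    linking node i+1 to its single parent, node i."""
    k = len(sigma2)
    T = [[0.0] * k for _ in range(k)]
    for i in range(k):
        T[i][i] = 1.0 / sigma2[i]
        if i + 1 < k:
            T[i][i] += b[i] ** 2 / sigma2[i + 1]           # b^2/sigma^2 term
            T[i][i + 1] = T[i + 1][i] = -b[i] / sigma2[i + 1]
    return T
```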
\emph{2. Update step.} Once we have derived the symbolic form of the
precision matrix ($\mathbf{T}_7$ in our example), we use the training
set to learn the actual values of its parameters, as described in the
algorithm from \cite{Neapolitan:2003}. Each training example
$\mathbf{x}^{(i)}$ corresponds to a vector of lengths and turning
angles for a successful parking task. The total number of examples in
the training set is $M$.
The procedure allows us to learn iteratively and adjust the prior
belief by updating the values of the mean $\mu$ and covariance matrix
$\beta$ of the prior, where $v$ is the size of the training set for
the prior belief, and $\alpha = v-1$.
\vspace*{-0.45cm}
\begin{equation}
\beta = \frac{v(\alpha - n + 1)}{v + 1} \mathbf{T}^{-1}.
\end{equation}
\vspace*{-0.45cm}
The updated mean value $\mu^*$ incorporates prior value of the mean
$\mu$ and the mean value of the new training examples $\mathbf{x}$.
\vspace*{-0.45cm}
\begin{equation}
\begin{split}
\overline{\mathbf{x}} =& \frac{\sum_{i =1}^M \mathbf{x}^{(i)}}{M} \\
\mu^* =& \frac{v\mu + M\overline{\mathbf{x}}}{v+ M}
\end{split}
\label{eq:x_and_mu}
\end{equation}
\vspace*{-0.45cm}
The size of the training set $v^*$ is updated to its new value:
\vspace*{-0.6cm}
\begin{equation}
v^* = v + M
\end{equation}
\vspace*{-0.6cm}
The updated covariance matrix $\beta^*$ combines the prior matrix
$\beta$ with the covariance matrix of the training set $\mathbf{s}$:
\vspace*{-0.5cm}
\begin{equation}
\begin{split}
\mathbf{s} =& \sum_{i =1}^M \left(\mathbf{x}^{(i)} - \overline{\mathbf{x}}\right)\left(\mathbf{x}^{(i)} - \overline{\mathbf{x}}\right)^T \\
\beta^* =& \beta + \mathbf{s} + \frac{vM}{v+M}\left(\overline{\mathbf{x}} - \mu\right)\left(\overline{\mathbf{x}} - \mu\right)^T
\end{split}
\end{equation}
\vspace*{-0.3cm}
Finally, the updated matrix $\beta^*$ is used to calculate
the covariance matrix $(\mathbf{T}^*)^{-1}$, where $\alpha^*=\alpha+M$.
\vspace*{-0.4cm}
\begin{equation}
{(\mathbf{T}^*)}^{-1} = \frac{v^* + 1}{v^*(\alpha^* - n + 1) } \beta^*
\label{eq:T_new}
\end{equation}
\vspace*{-0.2cm}
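A plain-Python sketch of the update step (ours; vectors as lists and outer products as nested lists, with the prior-shift scatter term written in the standard normal-Wishart form $\frac{vM}{v+M}(\overline{\mathbf{x}}-\mu)(\overline{\mathbf{x}}-\mu)^T$):

```python
def update_gbn_posterior(xs, mu, beta, v, alpha):
    """One batch update of the posterior over the GBN parameters.
    xs: list of M training vectors; mu, beta, v, alpha: prior values."""
    n, M = len(mu), len(xs)

    def outer(u, w):
        return [[ui * wj for wj in w] for ui in u]

    xbar = [sum(x[i] for x in xs) / M for i in range(n)]
    mu_new = [(v * mu[i] + M * xbar[i]) / (v + M) for i in range(n)]

    # Scatter matrix s of the training set around its own mean.
    s = [[0.0] * n for _ in range(n)]
    for x in xs:
        d = [x[i] - xbar[i] for i in range(n)]
        o = outer(d, d)
        s = [[s[i][j] + o[i][j] for j in range(n)] for i in range(n)]

    # beta* = beta + s + vM/(v+M) (xbar - mu)(xbar - mu)^T
    dm = [xbar[i] - mu[i] for i in range(n)]
    shift = outer(dm, dm)
    c = v * M / (v + M)
    beta_new = [[beta[i][j] + s[i][j] + c * shift[i][j] for j in range(n)]
                for i in range(n)]
    v_new, alpha_new = v + M, alpha + M

    # Posterior covariance (T*)^{-1}, with the scalar factor of the paper.
    f = (v_new + 1) / (v_new * (alpha_new - n + 1))
    cov = [[f * beta_new[i][j] for j in range(n)] for i in range(n)]
    return mu_new, beta_new, v_new, alpha_new, cov
```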
\begin{figure*}[!t]
\begin{equation}
\textbf{T}_7 =\left(
\begin{matrix}
\frac{1}{\sigma^2_1} + \frac{b_{21}^2}{\sigma^2_2} & -\frac{b_{21}}{\sigma_2^2} & 0 & 0 & 0 & 0 & 0\\
-\frac{b_{21}}{\sigma_2^2} & \frac{1}{\sigma^2_2} + \frac{b_{32}^2}{\sigma_3^2} & -\frac{b_{32}}{\sigma_3^2}
& 0 & 0 & 0 & 0\\
0 & -\frac{b_{32}}{\sigma_3^2} & \frac{1}{\sigma_3^2} + \frac{b_{43}^2}{\sigma_4^2} & -\frac{b_{43}}{\sigma_4^2}
& 0 & 0 & 0\\
0 & 0 & -\frac{b_{43}}{\sigma_4^2} & \frac{1}{\sigma_4^2} +\frac{b_{54}^2}{\sigma_5^2} &
-\frac{b_{54}}{\sigma_5^2} & 0 & 0\\
0 & 0 & 0 &-\frac{b_{54}}{\sigma_5^2} & \frac{1}{\sigma_5^2} + \frac{b_{65}^2}{\sigma_6^2} & -\frac{b_{65}}{\sigma_6^2} & 0\\
0 & 0 & 0 & 0 & -\frac{b_{65}}{\sigma_6^2} & \frac{1}{\sigma_6^2} + \frac{b_{76}^2}{\sigma_7^2} & -\frac{b_{76}}{\sigma_7^2}\\
0 & 0 & 0 & 0 & 0 & -\frac{b_{76}}{\sigma_7^2} & \frac{1}{\sigma_7^2} \\
\end{matrix}
\right)
\label{eq:precision}
\end{equation}
\vspace*{-2mm}
\end{figure*}
\emph{3. Extraction step.} The new parameters of the GBN can now be
retrieved from the updated mean vector $\mu^*$ and from
$(\mathbf{T}^*)^{-1}$. If new traces are available at hand, one can
update the distributions by recomputing $\mu^*$ and
$(\mathbf{T}^*)^{-1}$ using
Equations~\ref{eq:x_and_mu}-\ref{eq:T_new}. We depict the whole
process in Figure~\ref{fig:2phaseProgram}: Unknown parameters from
the program skeleton are learned from successful traces and these
dependencies are used during the execution phase to sample the
commands.\vspace*{2mm}
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.9\linewidth]{2phaseProgram}
\end{center}
\vspace*{-3mm}
\caption{Learning Parameters in a Neural Program}
\label{fig:2phaseProgram}
\vspace*{2mm}
\end{figure}
\vspace*{4mm}\section{Experimental results}
\label{sec:experiments}
We performed our experiments on a \texttt{Pioneer P3AT-SH} mobile
rover from Adept MobileRobots~\cite{Adept} (see
Figure~\ref{fig:robot}). The rover uses the \texttt{Carma Devkit}
from SECO~\cite{CarmaPoster} as its main computational unit. The
Devkit's Tegra 3 ARM CPU runs the Robot Operating System (ROS) on top
of Ubuntu 12.04.\vspace*{1mm}
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.8\linewidth]{robot_expl}
\end{center}
\vspace*{-2mm}
\caption{Experimental platform: Pioneer Rover}
\label{fig:robot}
\end{figure}
\vspace*{2mm}
\subsection{Structure of the Parking System}
\label{subsec:structure}
The parking system can be separated into several building blocks (see
Figure \ref{fig:parking_sys}). The block \emph{Rover Interface} senses
and controls the rover, that is, it establishes an interface to the
hardware. The block \emph{Sensor Fusion} takes the sensor values from
the \emph{Rover Interface} block, and provides the estimated pose of
the rover to the high-level controller \emph{Engine}. The
\emph{Engine} uses the \emph{GBN} block to update the motion commands
based on the estimated pose. Furthermore, the \emph{Engine} maps the
(higher level) motion commands to velocity commands needed by the
\emph{Rover Interface} to control the rover.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=\linewidth]{parking_system}
\end{center}
\vspace*{-6mm}
\caption{Parking system architecture}
\label{fig:parking_sys}
\end{figure}
\vspace*{2mm}
\subsubsection{The Gaussian Bayesian Network Block}
The goal of the GBN block in Figure~\ref{fig:parking_sys}, is to
generate motion commands for the Engine to execute. A motion command
corresponds to a driving distance or a turning angle.
During the learning phase, the distributions of the random variables
(RVs) in the Gaussian Bayesian network (GBN) in Figure \ref{fig:GBN}
are collected in a CSV file of the following format:\vspace*{1mm}
\lstset{basicstyle=\scriptsize}
\begin{lstlisting}[frame=single]
motionType,motionDirection,mean,variance,
dependenceCoefficient
\end{lstlisting}
\vspace*{-3mm}
Parsing the CSV file initializes the GBN that will be used for
sampling the motion commands. Before starting the run we obtain the
initial command vector from the distributions learned. The
distribution of the first move $l_1$ is independent from any other
move and has the form $\mathcal{N}(\mu_1, \sigma_1^2)$. Starting from
the second move $\alpha_1$, each motion depends on the previous one:
For motion number $n$, the distribution has the form
$\mathcal{N}(\mu_n\,{-}\,b_{n,n-1}\,{*}\,~x^{\text{\tiny{sampled}}}_{n-1},
\sigma_n^2)$. The initial command vector is obtained by sampling from
$l_1$, and each subsequent command vector is obtained by taking into
account the previous sample, when sampling from its own distribution.
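The sequential sampling scheme can be sketched in a few lines of Python (ours), following the conditional mean $\mu_n - b_{n,n-1}\,x^{\text{sampled}}_{n-1}$ stated above:

```python
import math
import random

def sample_commands(mus, sigma2s, bs, rng=random):
    """Sample one command vector from the chain-structured GBN:
    the first motion from N(mu_1, sigma_1^2), every later motion n
    from N(mu_n - b_{n,n-1} * previous_sample, sigma_n^2)."""
    cmds = [rng.gauss(mus[0], math.sqrt(sigma2s[0]))]
    for n in range(1, len(mus)):
        mean = mus[n] - bs[n - 1] * cmds[-1]   # condition on previous sample
        cmds.append(rng.gauss(mean, math.sqrt(sigma2s[n])))
    return cmds
```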
As the rover and its environment are uncertain (e.g., sensors are
disturbed by noise) we use the pose provided by the sensor fusion unit
to update the motion commands. Hence the motion commands are
constantly adapted to take into account the actual driven distance
(which could be different from the planned one due to the
aforementioned uncertainty of the CPS). This allows us to incorporate
the results of the sensor fusion algorithm in the updated commands.
\vspace*{2mm}
\subsubsection{The Engine Block}
\label{subsec:engine}
During the run we execute a motion command according to the
semantics of the \texttt{nwhile} loop.
In particular, the estimated pose is passed from the Sensor Fusion
block to the Engine and compared with the target location, specified
as a point on a 2-D plane. Since the rover is affected by noise its
path can deviate and never come to the target location. To be able to
detect and overcome this problem we estimate the scalar product of two
vectors: The target location relative to the initial pose, and the
target location relative to the current pose. This product is
monotonically decreasing and becomes negative after the rover passes
the goal, even on a deviating path. In an \texttt{nwhile} statement we monitor the
distance (or angle) and detect if we should process the next command.
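A minimal sketch (ours; vector names are illustrative) of this goal-passing test:

```python
def passed_target(initial_to_target, current_to_target):
    """The dot product of the initial and current pose-to-target vectors
    turns negative once the rover has driven past the target,
    even on a deviating path."""
    dot = sum(a * b for a, b in zip(initial_to_target, current_to_target))
    return dot < 0
```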
To obtain the current state of the rover (its pose), and send velocity
commands, we start two separate threads: (i)~Receive the pose
and (ii)~Send the velocity command. The motion command
(containing desired driving distance or turning angle) is converted to
a suitable velocity or steering command respectively, for the
\emph{Rover Interface}. After each executed command, we resample
the pose in order to take into account actual driving distance in
subsequent moves.
\vspace*{2mm}
\subsubsection{The Rover Interface Block}
The block \emph{Rover Interface} implements the drivers for sensors
and actuators. The \emph{wheel velocities} are measured by encoders,
already supplied within the Pioneer rover. A built-in microcontroller
reads the encoders and sends their value to the \texttt{Carma
Devkit}. Additionally the rover is equipped with an \emph{inertial
measurement unit (IMU)} including an accelerometer, measuring the
linear acceleration, and a gyroscope, measuring the angular velocity
of the rover. The \texttt{Raspberry Pi} mounted on top of the rover
samples the accelerometer and gyroscope, and forwards the raw IMU
measurements to the \texttt{Carma Devkit}. The rover is controlled
according to the incoming velocity commands containing desired linear
velocity into forward direction (x-translation) and the desired
angular velocity (z-rotation). The desired translation and rotation is
converted to the individual wheel velocities, which are sent to and
applied by the built-in microcontroller.
\vspace*{2mm}
\subsubsection{The Sensor Fusion Block}
Parking is often performed by applying predefined rules
\cite{LiYing10} or following a specific trajectory \cite{LiYing10}. So
typically an autonomously parking car stops at a specific position
beside a parking space and then turns and moves for a fixed time,
angle or distance. The car has to stop, move and turn \emph{exactly}
as designated to park successfully. The car has to be aware of its
current pose, that is, position and heading, otherwise parking will
most likely fail (whatsoever controller is used). However, the current
pose is observed by sensors, which suffer from
uncertainty. Measurements are distorted by noise, e.g., caused by
fluctuations of the elements of the electrical circuit of the
sensors. The environment may be unpredictable, e.g., the car may slip
over water or ice when parking. To overcome such problems sensor
fusion techniques are applied, i.e., several sensors are combined to
estimate a more accurate state. A common method is state estimation
(also called \emph{filtering}) \cite{Mit07,Thr06}.
In this application, an \emph{unscented Kalman filter (UKF)}
\cite{Wan00} is used. This filter combines the measurements listed in
Table \ref{tab:sensors} with a suitable model describing the relations
from the measured variables to the pose of the car.\vspace*{2mm}
\begin{table}[h!]
\renewcommand{\arraystretch}{1.05}
\centering
\begin{tabular}{|c|p{0.22\textwidth}|c|}
\hline
& \textsc{Sensor} & \textsc{Variance} \\
\hline \hline
$v_l$ & left wheel's velocity & 0.002 $m/s$ \\
$v_r$ & right wheel's velocity & 0.002 $m/s$ \\
$a$ & linear acceleration & 0.25 $m/s^2$ \\
$\omega$ & angular velocity & 0.00017 $rad/s$ \\
\hline
\end{tabular}
\vspace*{-1mm}
\caption{Used sensors and their variances.}
\label{tab:sensors}
\vspace*{-1mm}
\end{table}
The \emph{belief state} maintained by the UKF, e.g., the current
linear velocity, will be continuously updated: (i) By predicting
the state, and (ii) By updating the prediction with
measurements. For example, the linear velocity will be predicted by
the current belief of acceleration and the time elapsed since the
previous estimation. Next, the measurements from accelerometer and
wheel encoders are used to update the predicted velocity. Because the
wheel encoders are much more accurate than the acceleration sensor
(see variance in Table \ref{tab:sensors}), the measurements from the
wheel encoders will be trusted more (for simplicity one can think of
weighting and averaging the measurements, where the particular weight
corresponds to the reciprocal variance of the sensor). However, by
using more than one sensor, the unscented Kalman filter reduces the
error of estimated and actual velocity.
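For intuition only, the weighting-and-averaging view mentioned above corresponds to inverse-variance fusion of simultaneous measurements; this is a simplification (ours) and not the UKF implementation:

```python
def fuse(measurements, variances):
    """Inverse-variance weighted average: low-variance sensors (e.g. the
    wheel encoders) dominate, and the fused variance is never worse
    than that of the best single sensor."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    estimate = sum(w * m for w, m in zip(weights, measurements)) / total
    fused_variance = 1.0 / total
    return estimate, fused_variance
```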
\vspace*{2mm}
\subsection{Integration into ROS}
\label{subsec:ROSintegration}
ROS~\cite{ROS2009} is a meta-operating system that provides common
functionality for robotic tasks including process communication,
package management, and hardware abstraction. A basic building block
of a system in ROS is a so-called \emph{node}, which performs
computation and exchanges information with other entities. Nodes
communicate with each other by subscribing to or publishing messages
on specific \emph{topics}: All the nodes subscribed to a particular
topic A will receive the messages from nodes publishing to this topic A.
Since the application is implemented in ROS we use the utility
\texttt{roslaunch} to start the required ROS nodes (as shown in Figure
\ref{fig:parking_sys_ros}) corresponding to the blocks given in
Section \ref{subsec:structure}.
\begin{description}
\vspace*{-4mm}\item[Rover Interface:] The node \texttt{RosAria} is
used to control the velocity of the rover and provide the values of
the wheel encoders for the sensor fusion node. The ROS nodes
\texttt{imu3000} and \texttt{kxtf9} running on the \texttt{Raspberry
Pi} provide data from the accelerometer and gyroscope.
\vspace*{-2mm}\item[Sensor Fusion:] \texttt{sf\_filter} node reads
sensor values, implements the sensor fusion algorithm and provides
the estimated pose of the rover. \vspace*{-2mm}\item[GBN and
Engine:] \texttt{pioneer\_driver} is a node implementing resampling
of commands based on the actual driven distance and constantly
providing the required velocity commands to the \texttt{RosAria}
node (see Figure \ref{fig:parking_sys_ros}).
\end{description}
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{parking_system_ros}
\vspace*{-4mm}
\caption{Parking system in ROS.}
\label{fig:parking_sys_ros}
\end{figure}
\vspace*{2mm}
\subsection{Results}
After the learning phase, we obtain the parameters of the GBN that we
use in the program skeleton (Table~\ref{tab:GBN_val}). Since we
track the position using the data from the sensor fusion and each
movement has an experimentally learned uncertainty, we are robust
to perturbations of the actual driving distances and angles. If
the current distance of the robot deviates from the planned one, the
commands resampled from the GBN will try to compensate for the
deviation using the dependencies obtained from the learning phase.\vspace*{1mm}
\begin{table}[h!]
\renewcommand{\arraystretch}{1.05}
\centering
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
$-$ & & $b_{21}$ & 0.7968 & $b_{32}$ & -0.2086 & $b_{43}$ & 0.5475 \\ \hline
$\sigma_1^2$ & 0.0062 & $\sigma_2^2$ & 0.0032 & $\sigma_3^2$ & 0.0019 & $\sigma_4^2$ & 0.022 \\
\hline \hline
$b_{54}$ & -0.0045 & $b_{65}$ & 1.1920 & $b_{76}$ & -0.0968 & & \\ \hline
$\sigma_5^2$ & 0.0008 & $\sigma_6^2$ & 0.0178 & $\sigma_7^2$ & 0.0013 & & \\
\hline
\end{tabular}
\caption{GBN variances and dependence coefficients.}
\label{tab:GBN_val}
\vspace*{4mm}
\end{table}
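For illustration, the resampling step can be mimicked in a few lines of Python for a chain-structured GBN with the learned parameters of Table~\ref{tab:GBN_val}. This is our own sketch, not the authors' implementation: the chain ordering, the zero nominal means, and all function names below are assumptions.

```python
import numpy as np

# Hypothetical sketch of GBN resampling with the learned parameters of
# Table 1; the chain structure (node i depends on node i-1), the zero
# nominal means and all names here are assumptions, not the authors' code.
b = {2: 0.7968, 3: -0.2086, 4: 0.5475, 5: -0.0045, 6: 1.1920, 7: -0.0968}
var = {1: 0.0062, 2: 0.0032, 3: 0.0019, 4: 0.022,
       5: 0.0008, 6: 0.0178, 7: 0.0013}
mu = {i: 0.0 for i in range(1, 8)}  # assumed nominal command values

def resample(rng, observed=None):
    """Draw one joint sample; `observed` pins nodes to measured values,
    so downstream commands are resampled conditionally and can compensate
    for upstream deviations."""
    observed = observed or {}
    x = {1: observed.get(1, mu[1] + rng.normal(0.0, np.sqrt(var[1])))}
    for i in range(2, 8):
        mean = mu[i] + b[i] * (x[i - 1] - mu[i - 1])
        x[i] = observed.get(i, mean + rng.normal(0.0, np.sqrt(var[i])))
    return x

rng = np.random.default_rng(0)
sample = resample(rng, observed={1: 0.1})  # node 1 deviated from its plan
```

Pinning node 1 to its measured value makes all downstream nodes shift their means through the coefficients $b_{i,i-1}$, which is the compensation mechanism described above.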
\vspace*{2mm}
\section{Related Work}
\label{sec:related}
Although probabilistic programs (PP), Gaussian Bayesian networks (GBN)
and neural networks have been considered before, to the best of our
knowledge, the development of smooth control statements whose
parameters are related within a GBN ontology is new. The ontology
represents the knowledge of the PP about both its environment and its
own control logic.
Probabilistic programs are represented by different languages and
frameworks~\cite{Gilks94, RajamHenz14, Dekhtyar00}. The authors
in~\cite{RajamHenz14} differentiate probabilistic programs from
``traditional'' ones, by the ability to sample at random from the
distribution and condition the values of variables via
observation. Although they consider both discrete and continuous
probability distributions, and transformation of Bayesian Networks and
Discrete Time Markov Chains to probabilistic programs, they do not
mention probabilistic control structures linked in GBN.
In~\cite{Chaudhuri10}, the authors adapted the signal and image
processing technique called \emph{Gaussian smoothing (GS)} to
program optimization. Using GS, a program can be approximated by a
smooth mathematical function, which is in fact a convolution of the
denotational semantics of the program with a Gaussian function. This
approximation is used to facilitate optimization for solving the
parameter synthesis problem. In~\cite{Chaudhuri11} this idea was
developed further and soundness and robustness for smooth
interpretation of programs was defined. In both papers the authors
do not consider any means for eliminating the re-normalization step
of the probability density function when a variable is passed through
a conditional branch in the current execution trace. Moreover, they
stop short of proposing new, smooth control statements.
Learning Bayesian Networks comprises different tasks and problem
formulations: $i)$~Learning the structure of the network,
$ii)$~Learning the conditional probabilities for the given structure
and $iii)$ Performing querying-inference for a given Bayesian
Network~\cite{Neapolitan:2003}. In~\cite{HeckermanG95} the authors
introduce a unified method for both discrete and continuous domains
to learn the parameters of a Bayesian Network, using a combination of
prior knowledge and statistical data.
Various formulations of a mobile parking problem were extensively
studied for robots with different architectures~\cite{Manzan12,
Jiang99, Khoshe05, Sciclu12, LiYing10}. For instance,
in~\cite{LiYing10} the authors use a custom spatial configuration of
the ultrasonic sensors and binaural method to perceive the environment
and park the robot using predefined rules. In~\cite{Khoshe05} the
authors approximate the trajectory for the parking task with a
polynomial curve that the robot can follow with the constraints
satisfied, and use a fuzzy controller to minimize the difference
between the specified trajectory and the actual path.
In order to govern a physical process (e.g., parking a car), the
controller must be aware of the internal state of the process (e.g.,
the position of the car). Sensors measure the outputs of a process,
from which the state can be estimated. However, the measurements are
distorted by noise and the environment may be unpredictable. State
estimators \cite{Mit07,Thr06,Aru02} and in particular Kalman filters
\cite{Thr06,Wan00} are commonly used methods to increase the
confidence of the state estimate evaluated out of raw sensor
measurements.
\vspace*{2mm}
\section{Conclusion}
\label{sec:conclusion}
In this paper we introduced \emph{deep neural programs (DNP)}, a new
formalism for writing robust and adaptive cyber-physical-system (CPS)
controllers. Key to this formalism is: (i)~The use of smooth Probit
distributions in conditional and loop statements, instead of their
classic stepwise counterparts, and (ii)~The use of a Gaussian Bayesian
network for capturing the dependencies among the Probit distributions
within the conditional and loop statements of the DNP.
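The first ingredient can be sketched as follows, assuming a scalar guard of the form $x > \theta$; the names \texttt{probit} and \texttt{smooth\_if} are ours and do not come from the paper.

```python
import math
import random

# Sketch of a smooth conditional: instead of the hard branch `x > theta`,
# the branch is taken with probability Phi((x - theta) / sigma), where Phi
# is the standard normal CDF (the Probit link). All names are assumptions.
def probit(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def smooth_if(x, theta, sigma, rng):
    """Take the branch with Probit probability; as sigma -> 0 this
    approaches the classic stepwise conditional `x > theta`."""
    return rng.random() < probit((x - theta) / sigma)

rng = random.Random(0)
# With a small sigma, x = 1.0 > theta = 0.0 is taken essentially always.
taken = sum(smooth_if(1.0, 0.0, 0.01, rng) for _ in range(100))
```

The point of the smooth guard is that the branch probability is a differentiable function of $x$, $\theta$ and $\sigma$, which is what makes the controller amenable to learning and optimization.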
We validated the usefulness of DNPs by developing, once and for all, a
parallel parking CPS-controller, which is able to adapt to unforeseen
environmental situations, such as a slippery ground or noisy
actuators. No classic program has such ability: one would have to
encode all these unforeseen situations, which would lead to
unintelligible code.
In future work, we plan to explore the advantages of DNPs in the
analysis, as well as in the design (optimization), of CPS
controllers. The nice mathematical properties of DNPs make them an
ideal formalism for these tasks.
\vspace*{2mm}
\bibliographystyle{abbrv}
\section{Introduction}
Nowadays, the shift-splitting iteration scheme is successfully
used to solve the large sparse system of linear equations
\begin{equation}\label{eq:1}
Ax=b
\end{equation}
with $A$ non-Hermitian positive definite. It is deemed one of
the efficient stationary solvers, was first introduced in
\cite{bai}, and works as follows: Given an initial guess $x^{(0)}$,
for $k=0,1,2,\ldots$ until $\{x^{(k)}\}$ converges, compute
\begin{equation}\label{eq:2}
(\alpha I+ A)x^{(k+1)}=(\alpha I-A)x^{(k)}+2b,
\end{equation}
where $\alpha$ is a given positive constant. It is noteworthy that
the shift-splitting iteration scheme (\ref{eq:2}) is not only
unconditionally convergent, but also induces an economical and
effective preconditioner $P=\alpha I+ A$ for the non-Hermitian
positive definite linear system (\ref{eq:1}). This induced
preconditioner is called the shift-splitting preconditioner. When
the shift-splitting preconditioner $P=\alpha I+ A$ is employed together
with Krylov subspace methods to solve the non-Hermitian
positive definite linear system (\ref{eq:1}), its high efficiency
has been confirmed by the numerical experiments in \cite{bai}.
Since both the shift-splitting iteration scheme and the
shift-splitting preconditioner are economical and effective, they
have drawn much attention. Moreover, this approach has been
successfully extended to other practical problems, such as the
classical saddle point problems
\begin{equation}\label{eq:13}
\left[\begin{array}{cc}
A&B^{T}\\
-B&0
\end{array}\right]
\left[\begin{array}{c}
x\\
y
\end{array}\right]=\left[\begin{array}{c}
p\\
q
\end{array}\right],
\end{equation}
where $A\in \mathbb{R}^{n\times n}$ is symmetric positive definite (SPD),
$B\in \mathbb{R}^{m\times n}$ with $\mbox{rank}(B)=m\leq n$, see
\cite{Caoy}. Subsequently, Chen and Ma in \cite{Chen} proposed the
two-parameter shift-splitting preconditioner for saddle point
problems (\ref{eq:13}). Based on the work in \cite{Chen}, Salkuyeh
\emph{et al.} in \cite{Salkuyeh} used the two-parameter
shift-splitting preconditioner for the saddle point problems
(\ref{eq:13}) with symmetric positive semidefinite (2, 2)-block,
\cred{and for the same problem when the symmetry of the (1,1)-block is omitted in \cite{SalkuyehYazd}},
Cao
\emph{et al.} in \cite{Caoy2} considered the saddle point problems
(\ref{eq:13}) with nonsymmetric positive definite (1, 1)-block, Cao
and Miao in \cite{Caoy3} considered the singular nonsymmetric saddle
point problems (\ref{eq:13}), and so on.
On the other hand, combining the shift splitting technique with the
matrix splitting technique, some new efficient preconditioners have
been developed, such as the modified shift-splitting
preconditioner\cite{Zhou}, the generalized modified shift-splitting
preconditioner\cite{Huang}, the extended shift-splitting
preconditioner \cite{Zheng}, a general class of shift-splitting
preconditioner \cite{Caoy4}, the modified generalized
shift-splitting preconditioner \cite{Huang2,DKSMRCAMWA}, the generalized double
shift-splitting preconditioner \cite{Fan}, and so on.
In this paper, we consider the asymmetric saddle point problems of the form
\begin{equation}\label{eq:14}
\mathcal{A}{\bf x}=\left[\begin{array}{cc}
A&B^{T}\\
-C&0
\end{array}\right]
\left[\begin{array}{c}
x\\
y
\end{array}\right]=\left[\begin{array}{c}
b\\
q
\end{array}\right]={\bf f},
\end{equation}
where $A\in \mathbb{R}^{n\times n}$ is SPD,
$B, C\in \mathbb{R}^{m\times n}$, $m\leq n$. Moreover, the matrices $B$ and $C$ are of full rank. In \cite{CaoNLAA}, Cao proposed the
augmentation block triangular preconditioner
\begin{equation}\label{PrAug}
P_{Aug}=\left[\begin{array}{cc}
A+B^TW^{-1}C&B^{T}\\
0 & W
\end{array}\right],
\end{equation}
for the system obtained by multiplying the second block row of (\ref{eq:14}) by $-1$, where $W\in\Bbb{R}^{m\times m}$ is nonsingular and such that $A+B^TW^{-1}C$ is invertible. The performance of the preconditioner $P_{Aug}$
was compared with several preconditioners presented in \cite{CaoIJCM,CaoAPNUM,Murphy}. In \cite{LiHuangLi}, Li {\it et al.} presented the partial positive semidefinite and skew-Hermitian splitting (for short, PPSS) iteration method for the system (\ref{eq:14}). The PPSS iteration method induces the preconditioner
\begin{equation}\label{PrPPSS}
P_{PPSS}=\frac{1}{2\alpha} (\alpha I+H)(\alpha I+S),
\end{equation}
where $\alpha>0$,
\[
H=\left[\begin{array}{cc}
A & 0\\
0 & 0
\end{array}\right]\quad {\rm and} \quad
S=\left[\begin{array}{cc}
0 & B^T\\
-C & 0
\end{array}\right].
\]
Numerical results presented in \cite{LiHuangLi} show that the $P_{PPSS}$ preconditioner outperforms the classical HSS preconditioner \cite{BenziSIMAX}. Although the shift-splitting iteration scheme and the
shift-splitting preconditioner have been successfully used to solve
the classical saddle point problems, they have not yet been applied to
the asymmetric saddle point problems (\ref{eq:14}). The goal of this paper is therefore to use the
shift-splitting iteration scheme and the shift-splitting
preconditioner for the asymmetric saddle point problems. One can see
\cite{Elman,greif1, greif2,cafieri,r,b1} for more details.
Theoretical analysis shows that the shift-splitting iteration
method is convergent under suitable conditions and the spectral
distributions of the corresponding preconditioned matrices are
better clustered. Numerical experiments arising from a model Stokes
problem are provided to show the effectiveness of the two proposed
preconditioners.
We use the following notations throughout the paper. For a given matrix $S$, ${\cal N}({S})$ stands for the null space of $S$. The spectral radius of a square matrix $G$ is denoted by $\rho(G)$. For a vector $x\in\Bbb{C}^n$,
$x^*$ is used for the conjugate transpose of $x$. The real and imaginary parts of any $y\in\Bbb{C}$ are denoted by
$\Re(y)$ and $\Im(y)$, respectively. For two vectors $x$ and $y$, the \textsc{Matlab} notation $[x;y]$ is used for
$[x^T,y^T]^T$. Finally, for two vectors $x,y\in\Bbb{C}^n$, the standard inner product of $x$ and $y$ is denoted by $\langle x,y \rangle=y^*x$.
The layout of this paper is organized as follows. In Section \ref{Sec2}, the
shift splitting iteration scheme and the related shift-splitting
preconditioner are presented for the asymmetric saddle point
problems (\ref{eq:14}). In Section \ref{Sec3}, numerical experiments are provided to
examine the convergence behaviors of the shift-splitting
preconditioner and its relaxed version for solving the asymmetric
saddle point problems (\ref{eq:14}). Finally, some conclusions are
described in Section \ref{Sec4}.
\section{The shift-splitting method}\label{Sec2}
Here, three lemmas are given for later discussion.
\begin{lemma}\label{ExistSol} \emph{\cite{CaoNLAA}}
The saddle point matrix
\begin{equation}\label{CalA}
\mathcal{A}=\left[\begin{array}{cc}
A&B^{T}\\
-C&0
\end{array}\right]
\end{equation}
is nonsingular if and only if $\rank(B)=\rank(C)=m$,
${\cal N}(A) \cap {\cal N}(C)=\{0\}$ and
${\cal N}(A^T) \cap {\cal N}(B)=\{0\}$.
\end{lemma}
\begin{lemma}\label{Root1} \emph{\cite{Wu2}}
Let $\lambda$ be any root of the quadratic equation $x^{2}-ax+b= 0$, where $a,b\in \mathbb{R}$. Then, $|\lambda|<1$ if and only if $|b| < 1$ and $|a| < 1+b$.
\end{lemma}
\begin{lemma}\label{Root2}\emph{\cite{Bai2}}
Let $\lambda$ be any root of the quadratic equation $x^{2 }-\phi x +
\psi = 0$, where $\phi,\psi\in \mathbb{C}$. Then, $|\lambda|<1$ if
and only if $|\psi| < 1$ and $|\phi-\phi^{\ast}\psi| + |\psi|^{2} <
1$.
\end{lemma}
First, to guarantee the unique solution of the asymmetric saddle
point problems (\ref{eq:14}), Lemma \ref{NonSinA} is obtained.
\begin{lemma}\label{NonSinA}
Let $A$ be an SPD matrix and $\rank(B)=\rank(C)=m$. Then, the saddle point matrix $(\ref{CalA})$ is nonsingular.
\begin{proof}
It is an immediate result of Lemma \ref{ExistSol}.
\end{proof}
\end{lemma}
Similarly, for every $\alpha>0$ and under the conditions of Lemma \ref{NonSinA}, the matrix
\[
\alpha I+\mathcal{A}=\left[\begin{array}{cc}
\alpha I+A & B^{T} \\
-C& \alpha I \\
\end{array}\right]
\]
is nonsingular.
Next, under the conditions of Lemma \ref{NonSinA}, we can establish the shift-splitting (SS) iteration method
for solving the asymmetric saddle point problems (\ref{eq:14}). To this end, a shift-splitting of
the coefficient matrix $\mathcal{A}$ in (\ref{eq:14}) can be
constructed as follows
\begin{align*}
\mathcal{A}&=\frac{1}{2}(\alpha I+\mathcal{A})-\frac{1}{2}(\alpha
I-\mathcal{A})\\
&= \frac{1}{2}\left[\begin{array}{cc}
\alpha I+A & B^{T} \\
-C& \alpha I \\
\end{array}\right]-\frac{1}{2}\left[\begin{array}{cc}
\alpha I-A & -B^{T} \\
C& \alpha I \\
\end{array}\right],
\end{align*}
where $\alpha>0$ and $I$ is the identity matrix. This matrix
splitting naturally leads to the shift splitting (SS) iteration
method for solving the asymmetric saddle point problems
(\ref{eq:14}) and works as follows.
\medskip
\noindent{\it {\bf The SS iteration method}: Let the initial vector ${\bf x}^{(0)}\in
\mathbb{R}^{n+m}$ and $\alpha>0$. For
$k=0, 1, 2,\ldots$ until the iteration sequence
$\{{\bf x}^{(k)}\}_{k=0}^{+\infty}$ converges, compute ${\bf x}^{(k+1)}$ by solving the linear system
\begin{equation}\label{eq:21}
\left[\begin{array}{cc}
\alpha I+A & B^{T} \\
-C& \alpha I \\
\end{array}\right]
{\bf x}^{(k+1)}
=\left[\begin{array}{cc}
\alpha I-A & -B^{T} \\
C& \alpha I \\
\end{array}\right]{\bf x}^{(k)}+2\left[
\begin{array}{c}
b \\
q\\
\end{array}
\right].
\end{equation}}
Clearly, the iteration matrix $M_{\alpha}$ of the SS method is
\begin{equation}\label{eq:22}
M_{\alpha}=\left[\begin{array}{cc}
\alpha I+A & B^{T} \\
-C& \alpha I \\
\end{array}\right]^{-1}\left[\begin{array}{cc}
\alpha I-A & -B^{T} \\
C& \alpha I \\
\end{array}\right].
\end{equation}
To study the convergence of the SS method, the
spectral radius $\rho(M_{\alpha})$ of the corresponding iteration
matrix $M_{\alpha}$ needs to be estimated. As is known, when
$\rho(M_{\alpha})<1$, the SS iteration method is convergent.
Thereupon, we assume that $\lambda$ is an eigenvalue of the matrix
$M_{\alpha}$ and its corresponding eigenvector is ${\bf x}=[x;y]$.
Therefore, we have
\[
\left[\begin{array}{cc}
\alpha I-A & -B^{T} \\
C& \alpha I \\
\end{array}\right]\left[
\begin{array}{c}
x \\
y\\
\end{array}
\right]=\lambda\left[\begin{array}{cc}
\alpha I+A & B^{T} \\
-C& \alpha I \\
\end{array}\right]\left[
\begin{array}{c}
x \\
y\\
\end{array}
\right],
\]
which is equivalent to
\begin{align}
&(\lambda-1)\alpha x+(\lambda+1)Ax+(\lambda+1)B^{T}y=0, \label{eq:23}\\
&(1+\lambda)Cx-\alpha(\lambda-1) y=0. \label{eq:24}
\end{align}
To obtain the convergence conditions of the SS method, the following
lemmas are given.
\begin{lemma}\label{Lemma5} Let the matrix $A$ be SPD and $\rank(B)=\rank(C)=m$. If $\lambda$
is an eigenvalue of the matrix $M_{\alpha}$, then $\lambda\neq\pm1$.
\begin{proof}
If $\lambda=1$, then based on Eqs. (\ref{eq:23}) and
(\ref{eq:24}) we have
\begin{equation}\label{eq:25}
\left\{ \begin{aligned} &Ax+B^{T}y=0,\\
& -Cx=0.
\end{aligned} \right.
\end{equation}
Based on Lemma \ref{NonSinA}, we deduce that $x=0$ and $y=0$. This is a contradiction, because ${\bf x}=[x;y]=0$ can not be an
eigenvector of $M_{\alpha}$. Hence, $\lambda\neq1$.
When $\lambda=-1$, based on Eqs. (\ref{eq:23}) and (\ref{eq:24}) we
have $\alpha x=0$ and $\alpha y=0$. Since $\alpha>0$, we get $y=0$ and $x=0$, which is a contradiction, since $[x;y]$ is an eigenvector. Hence $\lambda\neq-1$.
\end{proof}
\end{lemma}
Based on the above discussion, we obtain the following lemma.
\begin{lemma}\label{Lemma6} Let the conditions of Lemma \ref{Lemma5} be satisfied.
Let also $\lambda$ be an eigenvalue of $M_{\alpha}$ and ${\bf x}=[x;y]$ be
the corresponding eigenvector. Then $x \neq0$. Moreover, if $y = 0$,
then $|\lambda| < 1$.
\begin{proof}
When $x=0$, from (\ref{eq:24}) we have
$\alpha(\lambda-1) y=0$. Based on Lemma \ref{Lemma5}, $\lambda\neq 1$. Therefore, $y=0$. This
contradicts with the nonzero eigenvector ${\bf x}=[x;y]$. Hence $x
\neq0$.
When $y=0$, based on Eq. (\ref{eq:23}) we get
\[
( \alpha I+A)^{-1}(\alpha I-A)x=\lambda x.
\]
Therefore, using Kellogg's lemma (see \cite[page 13]{Marchuk}) we deduce
\[
|\lambda|\leq \|( \alpha I+A)^{-1}(\alpha I-A)\|_{2}<1,
\]
which completes the proof.
\end{proof}
\end{lemma}
For later use we define the set ${\cal S}$ as
\[
{\cal S}=\{x\in\Bbb{C}^n: {\bf x}=[x;y] \text{ is an eigenvector of } M_{\alpha} \text{ with } \|x\|_2=1\}.
\]
It follows from Lemma \ref{Lemma6} that the members of ${\cal S}$ are nonzero.
\begin{theorem}\label{Thm1}
Let the conditions of Lemma \ref{Lemma5} be satisfied.
For every $x\in{\cal S}$, let $a(x)=x^{\ast}Ax$, $s(x)=\Re(x^{\ast}B^TCx)$ and $t(x)=\Im(x^{\ast}B^TCx)$.
For each $x\in{\cal S}$, if $s(x)>0$ and
\begin{equation}\label{EqCond}
|t(x)|<a(x)\sqrt{s(x)},
\end{equation}
then
\begin{equation*}
\rho(M_{\alpha})<1,\quad \forall \alpha>0,
\end{equation*}
which implies that the SS iteration method $(\ref{eq:21})$ converges
to the unique solution of the asymmetric saddle point problems
$(\ref{eq:14})$.
\begin{proof}
Based on Lemma \ref{Lemma5}, from (\ref{eq:24}) we have
\begin{equation}\label{eq:27}
y=\frac{\lambda+1}{\alpha(\lambda-1)}Cx.
\end{equation}
Substituting (\ref{eq:27}) into (\ref{eq:23}) leads to
\begin{equation}\label{eq:28}
(\lambda-1)\alpha
x+(\lambda+1)Ax+\frac{(\lambda+1)^{2}}{\alpha(\lambda-1)}B^{T}Cx=0.
\end{equation}
Let $\|x\|_{2}=1$. Pre-multiplying both sides of
Eq. (\ref{eq:28}) by $x^{\ast}$ leads to
\begin{equation}\label{EqQuadEq}
\alpha^{2}(\lambda-1)^{2}+\alpha(\lambda^{2}-1)x^{\ast}Ax+(\lambda+1)^{2}x^{\ast}B^{T}Cx=0,
\end{equation}
which, writing $a$, $s$ and $t$ for $a(x)$, $s(x)$ and $t(x)$ for brevity,
is equivalent to
\begin{equation}\label{eq:29}
\alpha^{2}(\lambda-1)^{2}+\alpha(\lambda^{2}-1)a+(\lambda+1)^{2}(s+ti)=0.
\end{equation}
It follows from Eq. (\ref{eq:29}) that
\begin{equation}\label{eq:210}
\lambda^{2}+\frac{2(s+ti-\alpha^{2})}{\alpha^{2}+\alpha
a+s+ti}\lambda+\frac{\alpha^{2}-\alpha a+s+ti}{\alpha^{2}+\alpha
a+s+ti}=0.
\end{equation}
Next, we distinguish two cases: $t = 0$ and $t\neq0$.
When $t = 0$, from (\ref{eq:210}), we get
\begin{equation}\label{eq:211}
\lambda^{2}+\frac{2(s-\alpha^{2})}{\alpha^{2}+\alpha
a+s}\lambda+\frac{\alpha^{2}-\alpha a+s}{\alpha^{2}+\alpha a+s}=0.
\end{equation}
By simple computations, we have
\begin{equation}\label{eq:212}
\Big|\frac{\alpha^{2}-\alpha a+s}{\alpha^{2}+\alpha a+s}\Big|<1
\end{equation}
and
\begin{equation}\label{eq:213}
\Big|\frac{2(s-\alpha^{2})}{\alpha^{2}+\alpha a+s}\Big|<1+
\frac{\alpha^{2}-\alpha a+s}{\alpha^{2}+\alpha a+s}.
\end{equation}
Based on Lemma \ref{Root1}, the inequalities (\ref{eq:212}) and (\ref{eq:213})
imply that the roots of the real quadratic equation (\ref{eq:211})
satisfy $|\lambda| < 1$.
If $t\neq0$, then Eq. (\ref{eq:210}) can be written as $\lambda^2+\phi \lambda +\psi=0$, where
\[
\phi=\frac{2(s+ti-\alpha^{2})}{\alpha^{2}+\alpha a+s+ti}\
\mbox{and}\ \psi=\frac{\alpha^{2}-\alpha a+s+ti}{\alpha^{2}+\alpha
a+s+ti}.
\]
By some calculations, we get
\begin{align*}
\phi-\phi^{\ast}\psi&=\frac{2(s+ti-\alpha^{2})}{\alpha^{2}+\alpha
a+s+ti}-\frac{2(s-ti-\alpha^{2})}{\alpha^{2}+\alpha
a+s-ti}\cdot\frac{\alpha^{2}-\alpha a+s+ti}{\alpha^{2}+\alpha
a+s+ti}\\
&=\frac{2(s-\alpha^{2}+ti)}{\alpha^{2}+\alpha
a+s+ti}\cdot\frac{\alpha^{2}+\alpha a+s-ti}{\alpha^{2}+\alpha
a+s-ti}-\frac{2(s-\alpha^{2}-ti)}{\alpha^{2}+\alpha
a+s-ti}\cdot\frac{\alpha^{2}-\alpha a+s+ti}{\alpha^{2}+\alpha a+s+ti}\\
&=2\Big[\frac{(s-\alpha^{2}+ti)(\alpha^{2}+\alpha
a+s-ti)}{(\alpha^{2}+\alpha
a+s)^{2}+t^{2}}+\frac{(\alpha^{2}-s+ti)(\alpha^{2}-\alpha
a+s+ti)}{(\alpha^{2}+\alpha a+s)^{2}+t^{2}}\Big]\\
&=2\frac{(s-\alpha^{2}+ti)(\alpha^{2}+\alpha
a+s-ti)+(\alpha^{2}-s+ti)(\alpha^{2}-\alpha
a+s+ti)}{(\alpha^{2}+\alpha
a+s)^{2}+t^{2}}\\
&=4\frac{\alpha a(s-\alpha^{2})+2\alpha^{2} ti}{(\alpha^{2}+\alpha
a+s)^{2}+t^{2}}.
\end{align*}
Further, we have
\begin{equation} \label{eq:217}
\begin{aligned}
&|\psi|=\sqrt{\frac{(\alpha^{2}-\alpha
a+s)^{2}+t^{2}}{(\alpha^{2}+\alpha a+s)^{2}+t^{2}}}<1,\\
&|\phi-\phi^{\ast}\psi|=\frac{4\sqrt{\alpha^2
a^2(s-\alpha^{2})^{2}+4t^{2}\alpha^{4}}}{(\alpha^{2}+\alpha
a+s)^{2}+t^{2}}.
\end{aligned}
\end{equation}
Based on Lemma \ref{Root2}, the necessary and sufficient condition for
$|\lambda|<1$ is
\begin{equation}\label{eq:218}
|\phi-\phi^{\ast}\psi|+|\psi|^{2}<1.
\end{equation}
Substituting (\ref{eq:217}) into (\ref{eq:218}) and solving the
inequality (\ref{eq:218}) for $t$, gives
$
|t|< a\sqrt{s},
$
which completes the proof.
\end{proof}
\end{theorem}
According to the definition of $t(x)$ and $\|x\|_2=1$, we have
\begin{eqnarray*}
|t(x)|&=&|\Im(x^{\ast}B^TCx)|\leq|\langle B^TCx,x \rangle| \\
& \leq& \|B^TCx\|_2 \|x\|_2 \qquad\qquad\qquad\qquad\qquad \text{(Cauchy-Schwarz inequality)} \\
& \leq& \|B^TC\|_2 \|x\|_2^2=\|B^TC\|_2.
\end{eqnarray*}
Also we have $a(x)=x^{\ast}Ax\geq \lambda_{\min}(A)$, where $\lambda_{\min}(A)$ is the smallest eigenvalue of $A$. Therefore, a sufficient condition for the inequality (\ref{EqCond}) is
\[
\|B^TC\|_2 < \lambda_{\min}(A) \sqrt{s(x)}.
\]
In the special case that $C=kB$ with $k>0$, we can state the following theorem.
\begin{theorem}\label{Thm2}
Let the conditions of Lemma \ref{Lemma5} be satisfied and $C=kB$ with $k>0$.
Then $\rho(M_{\alpha})<1$, $\forall \alpha>0$, which implies that the SS iteration method $(\ref{eq:21})$ converges
to the unique solution of the asymmetric saddle point problems
$(\ref{eq:14})$.
\begin{proof}
If $C=kB$ with $k>0$, then the matrix $B^TC=kB^TB$ is symmetric positive semidefinite. Therefore, we have $s(x)=kx^*B^TBx\geq 0$ and $t(x)=0$. According to Theorem \ref{Thm1}, all we need is to prove the convergence for the case that $s(x)=0$. If $s(x)=0$, then we get $Bx=0$. Now, from Eq. (\ref{eq:29}) we deduce that
\[
\alpha^{2}(\lambda-1)^{2}+\alpha(\lambda^{2}-1)a=0,
\]
which is equivalent to
\[
\alpha(\lambda-1) \big( \alpha(\lambda-1)+(\lambda+1)a\big)=0.
\]
Now, since $\alpha> 0$ and $\lambda \neq 1$ (from Lemma \ref{Lemma5}), we deduce that
\[
\alpha(\lambda-1)+(\lambda+1)a=0,
\]
which gives the following expression for $\lambda$
\[
\lambda=\frac{\alpha-a}{\alpha+a}.
\]
Therefore, since $a=x^*Ax>0$, we conclude that $|\lambda|<1$, which completes the proof.
\end{proof}
\end{theorem}
\begin{remark}
When $k=1$, Theorem \ref{Thm2} reduces to the main result in \cite{Caoy}. That is to say, Theorems \ref{Thm1} and \ref{Thm2}
are generalizations of Theorem 2.1 in \cite{Caoy}.
\end{remark}
Finally, we consider the preconditioner induced by the SS iteration method $(\ref{eq:21})$.
As is known, the advantage of a matrix splitting technique is often twofold: it yields a
splitting iteration method, and it induces a splitting
preconditioner for improving the convergence speed of Krylov
subspace methods \cite{bai}. Based on the SS iteration method
$(\ref{eq:21})$, the corresponding shift-splitting preconditioner
can be defined by
\[
P_{SS}=\frac{1}{2}\left[\begin{array}{cc}
\alpha I+A & B^{T} \\
-C& \alpha I \\
\end{array}\right].
\]
Since the multiplicative factor $\frac{1}{2}$ in the preconditioner
$P_{SS}$ has no effect on preconditioning, it can be removed; in the
implementations we therefore consider the shift-splitting
preconditioner $P_{SS}$ without the factor $\frac{1}{2}$. In this
case, when $P_{SS}$ is used with Krylov subspace methods (such as
GMRES, or its restarted version GMRES($k$)), a vector of the form
\[
z=P_{SS}^{-1}r
\]
needs to be computed.
Let $z=[z_{1};z_{2}]$ and $r=[r_{1};r_{2}]$. Then $z=P_{SS}^{-1}r$ is equal to
\begin{equation}\label{eq:31}
\left[
\begin{array}{c}
z_{1} \\
z_{2}\\
\end{array}
\right]=\left[\begin{array}{cc}
I & 0 \\
\frac{1}{\alpha}C& I \\
\end{array}\right]\left[\begin{array}{cc}
\alpha I+A+ \frac{1}{\alpha}B^{T}C& 0\\
0& \alpha I\\
\end{array}\right]^{-1}\left[\begin{array}{cc}
I & -\frac{1}{\alpha}B^{T} \\
0& I \\
\end{array}\right]\left[
\begin{array}{c}
r_{1} \\
r_{2}\\
\end{array}
\right].
\end{equation}
Based on Eq. (\ref{eq:31}), the following algorithm can be used to
obtain the vector $z$.
\medskip
\begin{Algor}\label{Algor1}\rm
Let $z=[z_{1};z_{2}]$ and
$r=[r_{1};r_{2}]$. Compute $z$ by the following procedure\\
1. Compute $t=r_{1}-\frac{1}{\alpha}B^{T}r_{2}$;\\
2. Solve $(\alpha I+A+ \frac{1}{\alpha}B^{T}C)z_{1}=t$ for $z_1$;\\
3. Compute $z_{2}=\frac{1}{\alpha}(Cz_{1}+r_{2})$.
\end{Algor}
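The algorithm above translates directly into a short routine; the sketch below is our illustration, with the inner system of step 2 solved directly for clarity (in practice an inner iterative solver may be preferable), followed by a consistency check on random data.

```python
import numpy as np

# Sketch of the algorithm: apply z = P_SS^{-1} r through the block
# factorization; the inner system of step 2 is solved directly here.
def apply_Pss_inv(A, B, C, alpha, r):
    n = A.shape[0]
    t = r[:n] - (B.T @ r[n:]) / alpha                                   # step 1
    z1 = np.linalg.solve(alpha * np.eye(n) + A + (B.T @ C) / alpha, t)  # step 2
    z2 = (C @ z1 + r[n:]) / alpha                                       # step 3
    return np.concatenate([z1, z2])

# Consistency check on random data: P_SS z = r must hold exactly.
rng = np.random.default_rng(0)
n, m, alpha = 6, 2, 1.0
M0 = rng.standard_normal((n, n))
A = M0 @ M0.T + 10.0 * np.eye(n)            # SPD (1,1)-block
B = rng.standard_normal((m, n))
C = 0.5 * rng.standard_normal((m, n))
r = rng.standard_normal(n + m)
z = apply_Pss_inv(A, B, C, alpha, r)
Pss = np.block([[alpha * np.eye(n) + A, B.T], [-C, alpha * np.eye(m)]])
```

The check $P_{SS}z=r$ follows from the block factorization: the second block row gives $-Cz_1+\alpha z_2=r_2$ by construction, and the first gives $(\alpha I+A)z_1+B^Tz_2=r_1$ after substituting $z_2$.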
In Step 2 of Alg. \ref{Algor1}, the matrix $\alpha I+A+
\frac{1}{\alpha}B^{T}C$ is in general nonsymmetric, hence the corresponding
system can be solved exactly using the LU factorization or inexactly
using a Krylov subspace method like GMRES or its restarted version.
However, when $C=kB$ with $k>0$, this matrix is of the form $\alpha
I+A+\frac{k}{\alpha}B^{T}B$ which is SPD. Therefore, the
corresponding system can be solved exactly using the Cholesky
factorization or inexactly using the conjugate gradient (CG) method.
\cred{In general the matrix $\alpha I+A+\frac{1}{\alpha}B^{T}C$ is dense (because of the term $B^TC$) and solving the corresponding system by a direct method may be impractical. Hence, it is recommended to solve the system by an iteration method, as we do in the section on the numerical experiments. From a theoretical point of view, when $\alpha=0$ the preconditioner $P_{SS}=\alpha I+\mathcal{A}$ coincides with the coefficient matrix of the original system. In this case, implementation of the preconditioner would be as difficult as solving the original system. Hence, the parameter $\alpha$ should be chosen so that the subsystem matrix is well conditioned. Since the condition of the matrix
$\alpha I+A+\frac{1}{\alpha}B^{T}C$ strongly depends on the term $\frac{1}{\alpha}B^{T}C$, similar to \cite{Caoy2,Golub} we choose
the parameter $\alpha$ to be
\[
\alpha_{est}=\frac{\|B^TC\|_2}{\|A\|_2},
\]
which balances the matrices $A$ and $B^TC$.}
When Krylov subspace methods together with the preconditioner
$P_{SS}=\alpha I+\mathcal{A}$ are applied to solve the asymmetric
saddle point problems $(\ref{eq:14})$, \cred{we need to establish} the
spectral distribution of the preconditioned matrix $P_{SS}^{-1}\mathcal{A}$ to
investigate the convergence performance of the preconditioner
$P_{SS}$ for Krylov subspace methods.
The following theorem on the spectral distribution of the
preconditioned matrix $P_{SS}^{-1}\mathcal{A}$ can be obtained.
\begin{theorem}\label{Thm3} Let the conditions of
Theorem \ref{Thm1} or \ref{Thm2} be satisfied. Then the
preconditioned matrix $P_{SS}^{-1}\mathcal{A}$ is positive stable
for $\alpha>0$ and \cred{its eigenvalues} satisfy $|\lambda|<1$,
where $\lambda$ denotes an eigenvalue of the preconditioned matrix
$P_{SS}^{-1}\mathcal{A}$.
\begin{proof}
\cred{It follows from
\begin{align*}
2P_{SS}^{-1}\mathcal{A}=I-M_{\alpha},
\end{align*}
that for each $\mu \in \sigma(M_{\alpha})$ there is a $\lambda\in \sigma(P_{SS}^{-1}\mathcal{A})$ such that $2\lambda=1-\mu$. Therefore, we have
\begin{eqnarray*}
\frac{\mu}{2} &=& \frac{1}{2} - \lambda = \frac{1}{2} - \Re(\lambda) - i \Im(\lambda).
\end{eqnarray*}
Hence, from the fact that $|\mu|<1$ we conclude
\[
(\frac{1}{2} - \Re(\lambda))^2 +(\Im(\lambda))^2< \frac{1}{4},
\]
which shows that the eigenvalues of the preconditioned matrix $P_{SS}^{-1}\mathcal{A}$ are contained in a circle with radius $\frac{1}{2}$ centered at $(\frac{1}{2},0)$. Hence, the real parts of the eigenvalues of the matrix
$P_{SS}^{-1}\mathcal{A}$ are all positive. This means that the matrix $P_{SS}^{-1}\mathcal{A}$ is
positive stable for $\alpha>0$. On the other hand, from $|\mu|< 1$ we deduce that
\[
2|\lambda|=|1-\mu|\leq 1+|\mu|< 2,
\]
which completes the proof.}
\end{proof}
\end{theorem}
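This clustering can be checked numerically on a small hypothetical example (with $C=kB$, so that the convergence theory above applies); the data below are ours, not from the paper.

```python
import numpy as np

# Numerical check on a hypothetical small example: the eigenvalues of
# P_SS^{-1} A_cal, with P_SS = alpha*I + A_cal, lie in the open disk of
# radius 1/2 centred at (1/2, 0), hence have positive real parts and
# moduli below 1.
rng = np.random.default_rng(2)
n, m, alpha = 8, 3, 1.5
M0 = rng.standard_normal((n, n))
A = M0 @ M0.T + n * np.eye(n)              # SPD (1,1)-block
B = rng.standard_normal((m, n))
C = 0.5 * B                                 # C = kB with k = 0.5
Acal = np.block([[A, B.T], [-C, np.zeros((m, m))]])
Pss = alpha * np.eye(n + m) + Acal
eigs = np.linalg.eigvals(np.linalg.solve(Pss, Acal))
```

Note that here $P_{SS}$ is taken without the factor $\frac{1}{2}$, so the eigenvalues are $\lambda=(1-\mu)/2$ with $\mu\in\sigma(M_{\alpha})$, and $|\mu|<1$ places them strictly inside the disk $|\lambda-\tfrac{1}{2}|<\tfrac{1}{2}$.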
Here, we present a relaxed version of the shift-splitting
preconditioner as well, which is defined by
\[
P_{RSS}=\left[\begin{array}{cc}
A & B^{T} \\
-C& \alpha I \\
\end{array}\right].
\]
Similarly, using $P_{RSS}$ with Krylov subspace methods (such as
GMRES, or its restarted version GMRES($k$)), a vector of the form
\[
z=P_{RSS}^{-1}r
\]
has to be computed as well. Let $z=[z_{1};z_{2}]$ and
$r=[r_{1};r_{2}]$. Then, we have
\begin{equation}
\left[
\begin{array}{c}
z_{1} \\
z_{2}\\
\end{array}
\right]=\left[\begin{array}{cc}
I & 0 \\
\frac{1}{\alpha}C& I \\
\end{array}\right]\left[\begin{array}{cc}
A+ \frac{1}{\alpha}B^{T}C& 0\\
0& \alpha I\\
\end{array}\right]^{-1}\left[\begin{array}{cc}
I & -\frac{1}{\alpha}B^{T} \\
0& I \\
\end{array}\right]\left[
\begin{array}{c}
r_{1} \\
r_{2}\\
\end{array}
\right].
\end{equation}
A simple modification of Alg. \ref{Algor1} yields Alg. \ref{Algor2} for computing the vector $z$, as follows.
\begin{Algor}\label{Algor2}\rm
Let $z=[z_{1};z_{2}]$ and $r=[r_{1};r_{2}]$. Compute $z$ by the following procedure\\
1. Compute $t=r_{1}-\frac{1}{\alpha}B^{T}r_{2}$;\\
2. Solve $(A+ \frac{1}{\alpha}B^{T}C)z_{1}=t$ for $z_1$;\\
3. Compute $z_{2}=\frac{1}{\alpha}(Cz_{1}+r_{2})$.
\end{Algor}
In the same way, we can obtain the spectral distribution of the
preconditioned matrix $P_{RSS}^{-1}\mathcal{A}$, as follows.
\begin{theorem}\label{Thm4}
Let the conditions of Theorem \ref{Thm1} be satisfied. Then the
preconditioned matrix $P_{RSS}^{-1}\mathcal{A}$ has an eigenvalue
$1$ with algebraic multiplicity $n$ and the remaining eigenvalues
are the eigenvalues of matrix $\frac{1}{\alpha}C(A+
\frac{1}{\alpha}B^{T}C)^{-1}B^{T}$.
\begin{proof}
By calculation, we get
\begin{align*}
P_{RSS}^{-1}\mathcal{A}=&\left[\begin{array}{cc}
I & 0 \\
\frac{1}{\alpha}C& I \\
\end{array}\right]\left[\begin{array}{cc}
A+ \frac{1}{\alpha}B^{T}C& 0\\
0& \alpha I\\
\end{array}\right]^{-1}\left[\begin{array}{cc}
I & -\frac{1}{\alpha}B^{T} \\
0& I \\
\end{array}\right]\left[\begin{array}{cc}
A & B^{T} \\
-C& 0 \\
\end{array}\right]\\
=&\left[\begin{array}{cc}
(A+ \frac{1}{\alpha}B^{T}C)^{-1} & 0 \\
\frac{1}{\alpha}C(A+ \frac{1}{\alpha}B^{T}C)^{-1}& \frac{1}{\alpha} I\\
\end{array}\right]\left[\begin{array}{cc}
I & -\frac{1}{\alpha}B^{T} \\
0& I \\
\end{array}\right]\left[\begin{array}{cc}
A & B^{T} \\
-C& 0 \\
\end{array}\right]\\
=&\left[\begin{array}{cc}
(A+ \frac{1}{\alpha}B^{T}C)^{-1} & -\frac{1}{\alpha}(A+ \frac{1}{\alpha}B^{T}C)^{-1}B^{T} \\
\frac{1}{\alpha}C(A+ \frac{1}{\alpha}B^{T}C)^{-1}&-\frac{1}{\alpha^{2}}C(A+ \frac{1}{\alpha}B^{T}C)^{-1}B^{T}+ \frac{1}{\alpha} I\\
\end{array}\right]\left[\begin{array}{cc}
A & B^{T} \\
-C& 0 \\
\end{array}\right]\\
=&\left[\begin{array}{cc}
I & (A+ \frac{1}{\alpha}B^{T}C)^{-1}B^{T} \\
0&\frac{1}{\alpha}C(A+ \frac{1}{\alpha}B^{T}C)^{-1}B^{T}\\
\end{array}\right].
\end{align*}
Therefore, the proof is completed.
\end{proof}
\end{theorem}
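The spectral result can again be verified numerically on a small hypothetical example of our own; $C$ is scaled so that $A+\frac{1}{\alpha}B^TC$ is safely nonsingular.

```python
import numpy as np

# Numerical check on a hypothetical small example: P_RSS^{-1} A_cal has the
# eigenvalue 1 with multiplicity n, and the remaining m eigenvalues are
# those of (1/alpha) * C (A + (1/alpha) B^T C)^{-1} B^T.
rng = np.random.default_rng(3)
n, m, alpha = 8, 3, 1.0
M0 = rng.standard_normal((n, n))
A = M0 @ M0.T + n * np.eye(n)              # SPD (1,1)-block
B = rng.standard_normal((m, n))
C = 0.3 * rng.standard_normal((m, n))      # scaled so A + B^T C / alpha is nonsingular
Acal = np.block([[A, B.T], [-C, np.zeros((m, m))]])
Prss = np.block([[A, B.T], [-C, alpha * np.eye(m)]])
eigs = np.linalg.eigvals(np.linalg.solve(Prss, Acal))
S = C @ np.linalg.solve(A + (B.T @ C) / alpha, B.T) / alpha
```

One finds at least $n$ computed eigenvalues equal to $1$ up to rounding, and each eigenvalue of $S$ matched among the remaining ones.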
\cred{Obviously, for each $\alpha>0$ the preconditioner $P_{RSS}$ is closer than the preconditioner $P_{SS}$ to the original matrix ${\cal A}$. However, the subsystem appearing in the implementation of the $P_{SS}$ preconditioner in a Krylov subspace method is better conditioned than that of $P_{RSS}$. Hence, it is recommended to apply the preconditioner $P_{SS}$ when the subsystems are solved inexactly using an iteration method, and $P_{RSS}$ when the subsystems are solved exactly using a direct method.}
\section{Numerical experiments}\label{Sec3}
In this section, we present some numerical experiments to demonstrate the performance of the
shift-splitting preconditioner. Meanwhile, numerical comparisons are provided to show the
advantage of the shift-splitting preconditioner ($P_{SS}$) and its relaxed version ($P_{RSS}$) \cred{over the PPSS preconditioner given by Eq. \eqref{PrPPSS} (denoted by $P_{PPSS}$) and the augmentation block triangular preconditioner given by Eq. \eqref{PrAug} (denoted by $P_{Aug}$)}.
In our computations, we apply the flexible GMRES (FGMRES) method \cite{FGMRES,SaadBook} together with these four
preconditioners to solve the asymmetric saddle point systems
(\ref{eq:14}), and adjust the right-hand side ${\bf f}$ such that the exact
solution is a vector of all ones. The iterations start from a zero initial guess and are stopped when the number of
iterations exceeds 1000 or
\[
R_k=\frac{\|{\bf f}-\mathcal{A} {\bf x}^{(k)}\|_{2}}{\| {\bf f}\|_{2}}\leq 10^{-7},
\]
where ${\bf x}^{(k)}$ is the computed solution at iteration $k$. In the implementation of the preconditioners, the subsystems are solved inexactly by iterative methods. When the coefficient matrix is symmetric positive definite (SPD), the corresponding system is solved by the conjugate gradient (CG) method; otherwise, the restarted GMRES(10) method is used. For the subsystems, the iteration is stopped as soon as the residual 2-norm is reduced by a factor of $10^{2}$, and the maximum number of iterations is set to 100. As for the outer iterations, a zero vector is used as the initial guess. Finally, for the augmentation block triangular preconditioner, the matrix $W$ is set to $W=\alpha I$ with $\alpha>0$. In this case, the preconditioner $P_{Aug}$ takes the following form
\[
P_{Aug}=\left[\begin{array}{cc}
A+\frac{1}{\alpha}B^TC& B^{T}\\
0 & \alpha I
\end{array}\right].
\]
For all the methods, the optimal value of the parameter (denoted by $\alpha_*$) is determined experimentally as the one resulting in the smallest number of iterations. \cred{We also report the numerical results for the parameter
$\alpha_{est}={\|B^TC\|_2}/{\|A\|_2}$.
}
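The overall procedure described above can be sketched in a few lines. The following Python fragment (our own illustration: SciPy's restarted GMRES stands in for FGMRES, and small random dense test data replace the paper's Stokes matrices) applies $P_{RSS}^{-1}$ matrix-free by factoring the shifted block $A+\frac{1}{\alpha}B^{T}C$ once, with $\alpha$ taken as the estimate $\alpha_{est}=\|B^{T}C\|_2/\|A\|_2$.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(1)
n, m, k = 40, 10, 2.0             # illustrative sizes only
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)       # SPD (1,1) block
B = rng.standard_normal((m, n))
C = k * B
Acal = np.block([[A, B.T], [-C, np.zeros((m, m))]])

alpha = np.linalg.norm(B.T @ C, 2) / np.linalg.norm(A, 2)   # alpha_est
S_lu = lu_factor(A + B.T @ C / alpha)   # factor the shifted block once

def apply_P_rss_inv(r):
    # x = P_RSS^{-1} r, using the block factorization of P_RSS
    y1 = r[:n] - B.T @ r[n:] / alpha
    x1 = lu_solve(S_lu, y1)
    x2 = (C @ x1 + r[n:]) / alpha
    return np.concatenate([x1, x2])

P_inv = LinearOperator((n + m, n + m), matvec=apply_P_rss_inv)
b = Acal @ np.ones(n + m)               # exact solution: all ones
x, info = gmres(Acal, b, M=P_inv)       # info == 0 signals convergence
```

Since the preconditioned operator has at most $m+1$ distinct eigenvalue clusters, GMRES converges here in roughly $m+1$ iterations, in line with the iteration counts reported below.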
We present the numerical results in the tables. In the tables, ``CPU" and ``Iters" stand for the elapsed CPU time (in seconds) and the number of iterations required for convergence, respectively. A dagger ($\dag$) means that the iteration has not converged within 1000 iterations.
All runs are implemented in \textsc{Matlab} R2017 on a laptop with a 1.80\,GHz central processing unit (Intel(R) Core(TM) i7-4500), 6\,GB of memory and the Windows 7 operating system.
\begin{Example}\label{Ex1} \rm Let the asymmetric saddle point
problem (\ref{eq:14}) be given by
\begin{equation*}
A=\left[\begin{array}{cc}
I\otimes T+T\otimes I & 0 \\
0& I\otimes T+T\otimes I\\
\end{array}\right]\in\mathbb{R}^{2s^{2}\times2s^{2}}
\end{equation*}
and
\begin{equation*}
B^{T}=\left[\begin{array}{cc}
I\otimes F \\
F\otimes I\\
\end{array}\right]\in\mathbb{R}^{2s^{2}\times s^{2}},\ C=kB,
\end{equation*}
with
\[
T=\frac{\mu}{h^{2}}\mbox{tridiag}(-1,2,-1)\in\mathbb{R}^{s\times
s},\quad F=\frac{1}{h}\mbox{tridiag}(-1,1,0)\in\mathbb{R}^{s\times
s},\quad k>0.
\]
where $\otimes$ denotes the Kronecker product and $h={1}/{(s+1)}$
is the discretization mesh-size. Therefore, the total number of variables is $n+m=3s^{2}$.
This asymmetric saddle point problem (\ref{eq:14}) can be obtained by applying the upwind scheme to
discretize the Stokes problem on the region
$\Omega=(0,1)\times(0,1)\subset \mathbb{R}^{2}$ with its boundary
being $\partial\Omega$: find $u$ and $p$ such that
\begin{equation*}
\left\{ \begin{aligned}
-\mu\Delta u+\nabla p &=f, \ \mbox{in}\ \Omega,\\
\nabla\cdot u&=g,\ \mbox{in}\ \Omega,\\
u&=0,\ \mbox{on} \ \partial\Omega,\\
\int_{\Omega} p(x)dx&=0,
\end{aligned} \right.
\end{equation*}
where $\mu$, $\Delta$, $u$ and $p$ are the viscosity scalar, the
componentwise Laplace operator, a vector-valued function
representing the velocity, and a scalar function representing the
pressure, respectively.
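For reference, the test matrices of this example can be assembled directly from the definitions above. The SciPy sketch below (function name and structure are ours) reproduces the dimensions and nonzero counts reported in Table \ref{Tbl1}.

```python
import scipy.sparse as sp

def stokes_blocks(s, mu=1.0, k=2.0):
    """Assemble A, B^T and C = k*B for the test problem above."""
    h = 1.0 / (s + 1)
    T = (mu / h**2) * sp.diags([-1, 2, -1], [-1, 0, 1], shape=(s, s))
    F = (1.0 / h) * sp.diags([-1, 1], [-1, 0], shape=(s, s))  # tridiag(-1,1,0)
    I = sp.identity(s)
    lap2d = sp.kron(I, T) + sp.kron(T, I)        # I (x) T + T (x) I
    A = sp.block_diag([lap2d, lap2d]).tocsr()    # 2s^2 x 2s^2
    BT = sp.vstack([sp.kron(I, F), sp.kron(F, I)]).tocsr()  # 2s^2 x s^2
    C = (k * BT.T).tocsr()                       # s^2 x 2s^2
    return A, BT, C

A, BT, C = stokes_blocks(16)   # should match the s = 16 row of Table 1
```

For $s=16$ this gives $A\in\mathbb{R}^{512\times512}$ with $2432$ nonzeros and $B^{T}\in\mathbb{R}^{512\times256}$ with $992$ nonzeros, in agreement with the first row of Table \ref{Tbl1}.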
We set $s=16,32,64,128,256$ and $k=2$. Generic properties of the test matrices are presented in Table \ref{Tbl1}. In this table, $nnz(\cdot)$ stands for the number of nonzero entries of a matrix. Numerical results for $\mu=1$ and $\mu=0.1$ are presented in Tables \ref{Tbl2} and \ref{Tbl3}, respectively. From the numerical results in Tables \ref{Tbl2}-\ref{Tbl3}, it is easy to see that the computational efficiency of GMRES
is unsatisfactory when it is applied directly to the asymmetric saddle point problems
(\ref{eq:14}). In contrast, FGMRES together with any of these four preconditioners converges rapidly for the asymmetric saddle point problems
(\ref{eq:14}). This confirms that all four preconditioners can indeed improve the convergence speed of
GMRES. Among the preconditioners, $P_{SS}$ and $P_{RSS}$ outperform the others from both the iteration count and the CPU time points of view. \cred{On the other hand, we observe that the parameter $\alpha_{est}$ often gives suitable results, especially for large problems.}
\begin{table}
\centering\caption{Matrix properties for Example \ref{Ex1}.\label{Tbl1}}\vspace{-0.5cm}
\begin{center}
\scalebox{1.}
{
\begin{tabular}{|c|c|c|c|c|c|} \hline
$s$ & $n$ & $m$ & $nnz(A)$ & $nnz(B)$ & $nnz(C)$ \\ \hline
16 & 512 & 256 & 2432 & 992 & 992 \\ [0.2cm]
32 & 2048 & 1024 & 9984 & 4032 & 4032 \\ [0.2cm]
64 & 8192 & 4096 & 40448 & 16256 & 16256 \\ [0.2cm]
128 & 32768 & 16384 & 162816 & 65280 & 65280 \\ [0.2cm]
256 & 131072 & 65536 & 653312 & 261632 & 261632 \\ \hline
\end{tabular}
}
\end{center}
\end{table}
\begin{table} \centering
\centering\caption{Numerical results of FGMRES for Example \ref{Ex1} with $\mu=1$.\label{Tbl2}}
\scalebox{0.8}{
\begin{tabular} {|l||l|l|l|l|l|l||l|l|l|l|} \hline
$s$ & ~~~~~~~~~ &No Prec. &$P_{SS}$~~~~~~ & $P_{RSS}$ ~~ & $P_{PPSS}$ ~~ &$P_{Aug}$~~ &&\cred{$P_{SS}$}~~~~~~& \cred{$P_{RSS}$} ~~\\ \hline
16 & $\alpha_*$ & -- & 0.10 & 0.20 & 98.50 & 0.11 & $\alpha_{est}$ & 2.03 & 2.03\\
& Iters & 133 & 8 & 8 & 38 & 21 & Iters & 12 & 11 \\
& CPU & 0.13 & 0.03 & 0.02 & 0.05 & 0.06 & CPU & 0.03 & 0.03 \\
& $R_k$ & 8.1e-8 & 8.4e-8 & 5.9e-8 & 1.0e-7 & 7.5e-8 & $R_k$ & 6.6e-8 & 9.3e-8\\ \hline
32 & $\alpha_*$ & -- & 0.20 & 0.34 & 100.60 & 0.10 & $\alpha_{est}$ & 2.01 & 2.01 \\
& Iters & 285 & 9 & 9 & 45 & 21 & Iters & 13 & 12 \\
& CPU & 2.93 & 0.06 & 0.05 & 0.14 & 0.14 & CPU & 0.06 & 0.06 \\
& $R_k$ & 9.6e-8 & 2.4e-8 & 9.6e-8 & 1.0e-7 & 8.9e-8 & $R_k$ & 5.2e-8 & 7.4e-8\\ \hline
64 & $\alpha_*$ & -- & 0.60 & 1.50 & 102.20 & 0.37 & $\alpha_{est}$ & 2.01 & 2.01 \\
& Iters & 617 & 12 & 12 & 63 & 29 & Iters & 14 & 13 \\
& CPU & 36.20 & 0.39 & 0.3 & 0.89 & 0.74 & CPU & 0.38 & 0.36\\
& $R_k$ & 9.6e-8 & 7.5e-8 & 8.2e-8 & 9.3e-8 & 8.2e-8 & $R_k$ & 5.6e-8 & 6.4e-8\\ \hline
128 & $\alpha_*$ & -- & 0.60 & 0.64 & 103.90 & 4.20 & $\alpha_{est}$ & 2.02 & 2.02\\
& Iters & $\dag$ & 22 & 23 & 111 & 31 & Iters & 24 & 23 \\
& CPU & -- & 2.37 & 2.48 & 8.69 & 3.19 & CPU & 2.55 & 2.33\\
& $R_k$ & -- & 8.4e-8 & 8.5e-8 & 8.8e-8 & 6.5e-8 & $R_k$ & 5.2e-8 & 5.4e-8\\ \hline
256 & $\alpha_*$ & -- & 1.39 & 1.39 & 102.00 & 22.00 & $\alpha_{est}$ & 2.02 & 2.02\\
& Iters & $\dag$ & 57 & 52 & 217 & 78 & Iters & 64 & 54 \\
& CPU & -- & 34.89 & 32.38 & 175.49 & 47.18 & CPU & 40.66 & 33.84 \\
& $R_k$ & -- & 9.5e-8 & 8.1e-8 & 9.9e-8 & 7.1e-8 & $R_k$ & 9.5e-8 & 4.2e-8 \\ \hline
\end{tabular}}
\end{table}
\begin{table} \centering
\centering\caption{Numerical results of FGMRES for Example \ref{Ex1} with $\mu=0.1$.\label{Tbl3}}
\scalebox{0.8}{
\begin{tabular} {|l||l|l|l|l|l|l||l|l|l|l|} \hline
$s$ & ~~~~~~~~~ &No Prec. &$P_{SS}$~~~~~~ & $P_{RSS}$ ~~ & $P_{PPSS}$ ~~ &$P_{Aug}$~~ && \cred{$P_{SS}$}~~~~~~& \cred{$P_{RSS}$} ~~\\ \hline
16 & $\alpha_*$ & -- & 0.25 & 0.25 & 15.40 & 0.53 & $\alpha_{est}$ & 18.34 & 18.34\\
& Iters & 117 & 8 & 8 & 36 & 17 & Iters & 28 & 12 \\
& CPU & 0.16 & 0.02 & 0.02 & 0.04 & 0.04 & CPU & 0.02 & 0.02 \\
& $R_k$ & 8.9e-8 & 1.5e-8 & 1.4e-8 & 9.4e-8 & 8.6e-8 & $R_k$ & 8.2e-8 & 6.6e-8\\ \hline
32 & $\alpha_*$ & -- & 0.23 & 0.23 & 29.80 & 2.42 & $\alpha_{est}$ & 19.45 & 19.45 \\
& Iters & 238 & 11 & 11 & 56 & 20 & Iters & 31 & 13 \\
& CPU & 1.67 & 0.07 & 0.07 & 0.16 & 0.11 & CPU & 0.08 & 0.06 \\
& $R_k$ & 9.0e-8 & 9.0e-8 & 5.6e-8 & 9.2e-8 & 1.0e-7 & $R_k$ & 6.9e-8 & 5.3e-8\\ \hline
64 & $\alpha_*$ & -- & 1.50 & 2.1 & 53.20 & 4.60 & $\alpha_{est}$ & 19.87 & 19.87 \\
& Iters & 483 & 11 & 11 & 86 & 26 & Iters & 32 & 14 \\
& CPU & 22.48 & 0.26 & 0.25 & 0.95 & 0.65 & CPU & 0.46 & 0.38\\
& $R_k$ & 9.9e-8 & 9.7e-8 & 5.8e-8 & 9.6e-8 & 9.4e-8 & $R_k$ & 8.8e-8 & 4.2e-8\\ \hline
128 & $\alpha_*$ & -- & 4.90 & 6.4 & 92.80 & 19.10 & $\alpha_{est}$ & 19.98 & 19.98\\
& Iters & 908 & 18 & 19 & 129 & 39 & Iters & 33 & 20 \\
& CPU & 302.64 & 1.91 & 1.96 & 7.07 & 4.06 & CPU & 3.07 & 2.17\\
& $R_k$ & 9.9e-8 & 9.2e-8 & 7.2e-8 & 9.9e-8 & 9.9e-8 & $R_k$ & 7.4e-8 & 9.2e-8\\ \hline
256 & $\alpha_*$ & -- & 10.90 & 12.96 & 131.00 & 25.90 & $\alpha_{est}$ & 20.05 & 20.05\\
& Iters & $\dag$ & 30 & 37 & 192 & 90 & Iters & 37 & 46 \\
& CPU & -- & 26.03 & 22.73 & 151.26 & 55.38 & CPU & 23.26 & 29.10 \\
& $R_k$ & -- & 9.0e-8 & 9.6e-8 & 9.7e-8 & 7.8e-8 & $R_k$ & 6.2e-8 & 9.1e-8 \\ \hline
\end{tabular}}
\end{table}
In the sequel, we investigate the spectral
distribution of four preconditioned matrices
$P_{SS}^{-1}\mathcal{A}$, $P_{RSS}^{-1}\mathcal{A}$, $P_{PPSS}^{-1}\mathcal{A}$ and
$P_{Aug}^{-1}\mathcal{A}$. To do so, we set $s=16$ and use the optimal value of the parameters given in Tables \ref{Tbl2} and \ref{Tbl3}.
Figs. \ref{Fig1}-\ref{Fig2} plot the spectral distributions of the
matrices. Fig. \ref{Fig1} displays the spectral distributions of the five
matrices $\mathcal{A}$, $P_{SS}^{-1}\mathcal{A}$, $P_{RSS}^{-1}\mathcal{A}$,
$P_{PPSS}^{-1}\mathcal{A}$ and $P_{Aug}^{-1}\mathcal{A}$ for $\mu=1$, and Fig. \ref{Fig2} does so for $\mu=0.1$.
From the spectral distributions in Figs. \ref{Fig1}-\ref{Fig2}, we see that the four preconditioners $P_{SS}$, $P_{RSS}$, $P_{PPSS}$
and $P_{Aug}$ all improve the spectral distribution of the original
coefficient matrix $\mathcal{A}$.
We also observe that the eigenvalues of $P_{SS}^{-1}\mathcal{A}$ and $P_{RSS}^{-1}\mathcal{A}$ are better clustered than those of the other two preconditioned matrices. Moreover, the spectral
distributions of $P_{SS}^{-1}\mathcal{A}$ and $P_{RSS}^{-1}\mathcal{A}$ are almost in line with the theoretical
results; see Theorem \ref{Thm3} and Theorem \ref{Thm4}.
\begin{figure}
\centering
\includegraphics[width=6in,height=3.5in]{Fig1.eps}
\caption{Spectral distribution of Example \ref{Ex1} for $s=16$ with $\mu=1$ and $k=2$.\label{Fig1}}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=6in,height=3.5in]{Fig2.eps}
\caption{Spectral distribution of Example \ref{Ex1} for $s=16$ with $\mu=0.1$ and $k=2$.\label{Fig2}}
\end{figure}
\end{Example}
\begin{Example}\label{Ex2}\rm
We use the matrix {\bf Szczerba$\slash$Ill\_Stokes} from the UF Sparse Matrix Collection\footnote{https://www.cise.ufl.edu$\slash$research$\slash$sparse$\slash$matrices$\slash$Szczerba$\slash$Ill\_Stokes.html}, which is an ill-conditioned matrix arising from a computational fluid dynamics problem. Generic properties of the test matrix are given in Table \ref{Tbl4}.
The FGMRES (GMRES) method without preconditioning fails to converge within 1000 iterations. Therefore, we present the numerical results of the FGMRES method with the preconditioners $P_{SS}$, $P_{RSS}$, $P_{PPSS}$ and $P_{Aug}$ for different values of the parameter $\alpha$ in Table \ref{Tbl5}. As we observe, all the preconditioners reduce the number of iterations of the GMRES method. The minimum value of the CPU time for each preconditioner has been underlined; the smallest overall is attained by the $P_{SS}$ preconditioner. \cred{Numerical results of the preconditioners $P_{SS}$ and $P_{RSS}$ with $\alpha_{est}$ are presented in Table \ref{Tbl6}. As we see, there is good agreement between the results of the $P_{SS}$ and $P_{RSS}$ preconditioners with $\alpha_*$ and those with $\alpha_{est}$.}
\end{Example}
\begin{table}
\centering\caption{Matrix properties for Example \ref{Ex2}.\label{Tbl4}}\vspace{-0.5cm}
\begin{center}
\label{exact}
\scalebox{0.8}
{
\begin{tabular}{|c|c|c|c|c|c|} \hline
Matrix & $n$ & $m$ & $nnz(A)$ & $nnz(B)$ & $nnz(C)$ \\ \hline
Szczerba$\slash$Ill\_Stokes & 15672 & 5224 & 73650 & 58242 & 59476 \\ \hline
\end{tabular}
}
\end{center}
\end{table}
\begin{table}
\centering\caption{Numerical results for Example \ref{Ex2} for different values of $\alpha$.\label{Tbl5}}\vspace{-0.5cm}
\begin{center}
\scalebox{0.80}
{
\begin{tabular}{|c|ccc|ccc|ccc|ccc|} \hline
& \multicolumn{3}{c|}{$P_{SS}$} & \multicolumn{3}{c|}{$P_{RSS}$} & \multicolumn{3}{c|}{$P_{PPSS}$} & \multicolumn{3}{c|}{$P_{Aug}$}\\ \hline
$\alpha$ & Iters & CPU & $R_k$ & Iters & CPU & $R_k$ & Iters & CPU & $R_k$ & Iters & CPU & $R_k$ \\ \hline
0.1 & 471 & 31.32& 9.8e-8 & 179 & 15.11 & 9.7e-8 & 467 & 22.97 & 1.0e-7 & 188 & 8.31 & 9.9e-8 \\
0.05 & 358 & 22.14& 9.9e-8 & 169 & 13.57 & 9.8e-8 & 348 & 13.65 & 9.7e-8 & 180 & 7.94 & 9.6e-8 \\
0.01 & 210 & 13.56& 9.7e-8 & 145 & 11.44 & 9.7e-8 & 164 & 5.19 & 9.1e-8 & 171 & 7.23 & 9.9e-8 \\
0.005 & 173 & 11.40& 9.6e-8 & 134 & 10.12 & 9.9e-8 & 121 & \underline{{\bf 4.44}} & 9.2e-8 & 175 & \underline{{\bf 7.19}} & 9.9e-8 \\
0.001 & 115 & 7.49 & 9.9e-8 & 110 & 7.21 & 9.5e-8 & 66 & 6.97 & 9.1e-8 & 193 & 7.61 & 9.8e-8\\
0.0005 & 99 & 5.84 & 9.8e-8 & 97 & 5.93 & 9.8e-8 & 66 & 15.23 & 9.0e-8 & 208 & 8.20 & 9.7e-8\\
0.0001 & 64 & \underline{{\bf 3.91}} & 9.7e-8 & 63 & \underline{{\bf 3.95}} & 1.0e-7 & 84 & 123.13 & 9.4e-8 & 311 & 16.30 & 1.0e-7\\
0.00005 & 62 & 4.26 & 9.6e-8 & 61 & 4.23 & 1.0e-7 & 92 & 168.02 & 9.3e-8 & 313 & 18.61 & 9.9e-8\\
\hline
\end{tabular}
}
\end{center}\vspace{0.5cm}
\cred{
\centering\caption{Numerical results for Example \ref{Ex2} for $\alpha_{est}$.\label{Tbl6}}\vspace{-0.5cm}
\begin{center}
\scalebox{0.90}
{
\begin{tabular}{|cccc|cccc|} \hline
\multicolumn{4}{|c|}{$P_{SS}$} & \multicolumn{4}{c|}{$P_{RSS}$} \\ \hline
$\alpha_{est}$ & Iters & CPU & $R_k$ & $\alpha_{est}$ & Iters & CPU & $R_k$ \\ \hline
0.000169 & 74 & 4.57 & 9.5e-8 & 0.000169 & 73 & 4.24 & 9.9e-8 \\
\hline
\end{tabular}
}
\end{center}}
\end{table}
\section{Conclusion}\label{Sec4}
For the asymmetric saddle point problems, we have
presented the shift-splitting preconditioner and its relaxed version
to improve the convergence speed of Krylov subspace methods (such as
GMRES and FGMRES). The eigenvalue distributions of the corresponding preconditioned
matrices have been analyzed. Moreover, we have proved that the shift-splitting iteration method for
the asymmetric saddle point problems
is convergent under suitable conditions. Numerical experiments on
the Stokes problem have been given to verify the efficiency of the
shift-splitting preconditioner and its relaxed version.
\section*{ Acknowledgment}
\cred{The authors would like to thank the anonymous referee for helpful comments and suggestions.}
\titlespacing\section{0pt}{2pt plus 4pt minus 2pt}{0pt plus 2pt minus 2pt}
\sectionfont{\fontsize{12}{15}\selectfont}
\paragraphfont{\fontsize{9}{9}\selectfont}
\usepackage{braket}
\usepackage[font=small,labelfont=bf]{caption}
\usepackage{lineno}
\usepackage{nameref}
\usepackage{newfloat}
\usepackage[expansion=true]{microtype}
\renewcommand{\figurename}{Fig.}
\DeclareFloatingEnvironment{metfig}
\renewcommand{\metfigname}{Fig.}%
\renewcommand{\themetfig}{S\arabic{metfig}}%
\usepackage{hyperref}
\hypersetup{
colorlinks=true,
linkcolor=black,
citecolor=black,
filecolor=black,
urlcolor=black,
}
\setlength\linenumbersep{5pt}
\setlength\columnsep{25pt}
\title{Single-electron spin resonance in a nanoelectronic device using a global field}
\date{}
\author[1,*]{E. Vahapoglu}
\author[1,*]{J. P. Slack-Smith}
\author[1]{R. C. C. Leon}
\author[1]{W. H. Lim}
\author[1]{F. E. Hudson}
\author[1]{T. Day}
\author[1]{T. Tanttu}
\author[1]{C. H. Yang}
\author[1]{A. Laucht}
\author[1,$\dagger$]{A. S. Dzurak}
\author[1,$\dagger$]{J. J. Pla}
\affil[1]{School of Electrical Engineering and Telecommunications, UNSW Sydney, Sydney, NSW 2052, Australia.}
\affil[*,$\dagger$]{These authors contributed equally to this work}
\begin{document}
\maketitle
\begin{abstract}
\textbf{Spin-based silicon quantum electronic circuits offer a scalable platform for quantum computation, combining the manufacturability of semiconductor devices with the long coherence times afforded by spins in silicon. Advancing from current few-qubit devices to silicon quantum processors with upwards of a million qubits, as required for fault-tolerant operation, presents several unique challenges, one of the most demanding being the ability to deliver microwave signals for large-scale qubit control. Here we demonstrate a potential solution to this problem by using a three-dimensional dielectric resonator to broadcast a global microwave signal across a quantum nanoelectronic circuit. Critically, this technique utilizes only a single microwave source and is capable of delivering control signals to millions of qubits simultaneously. We show that the global field can be used to perform spin resonance of single electrons confined in a silicon double quantum dot device, establishing the feasibility of this approach for scalable spin qubit control.}
\end{abstract}
\twocolumn
\section*{Introduction}
The ability to engineer quantum systems is expected to enable a range of transformational technologies including quantum-secured communication networks, enhanced sensors and quantum computers, with applications spanning a diverse range of industries. Quantum computers are poised to significantly outperform their classical counterparts in many important problems like quantum simulation (aiding materials and drug development) and optimization. Whilst some applications are expected to be executable on medium-scale quantum computers (with 100-1000 qubits) that do not employ error correction protocols \cite{Preskill2018}, arguably the most disruptive algorithms \cite{Shor1994} will require a large-scale and fully fault-tolerant quantum computer with upwards of a million qubits \cite{Fowler2012, Lekitsch2017}.
Of the possible physical implementations of qubits, electron spins in gate-defined silicon quantum dots stand out for their long coherence times \cite{Veldhorst2014}, their ability to operate at temperatures above 1\,K \cite{Yang2020, Petit2020} and for the potential to leverage the experience of the semiconductor industry in fabricating devices at scale. The building blocks for a spin-based quantum computer in silicon, namely high-fidelity single \cite{Veldhorst2014} and two \cite{Veldhorst2015, Huang2019, Xue2019} qubit gates, have been demonstrated and attention is now being directed towards the engineering challenges of constructing a large-scale processor \cite{Gonzalez2020}. To achieve this ambitious goal, significant hurdles associated with qubit measurement and control must be overcome, such as how to deliver the microwave control signals to many qubits simultaneously, without disturbing the fragile cryogenic environment in which the processor operates.
One approach for spin qubit control successfully deployed in current few-qubit devices is based on a direct magnetic drive using an on-chip transmission line (TL) \cite{Dehollain2013}. A strong microwave current is passed through a wire placed close to the quantum dot to generate an alternating magnetic field. Localization of the control field to the wire demands a number of transmission lines that scales with the total number of qubits \cite{Li2018}, while multiple high-frequency coaxial lines would be needed to deliver the control signals into the cryogenic system. The large microwave currents running through the processor in such a scheme raises concerns of heating, whilst the control lines themselves take up valuable chip real-estate.
Electric dipole spin resonance (EDSR) \cite{Pioro2008, Kawakami2014, Takeda2016, Watson2018, Yang2020, Leon2020} is an alternative method for producing local spin control. For electron spin qubits, this technique simulates a magnetic drive by exploiting magnetic field gradients produced by nanomagnets in conjunction with microwave electric fields applied directly to the qubit gate electrodes. Integration of EDSR in large 2D qubit arrays poses several challenges. In addition to the required transversal gradients, stray longitudinal gradients from the nanomagnets can cause spin decoherence. Furthermore, as with the TL-based direct magnetic drive, this technique requires microwave signals to be applied across many control lines, with multiple coaxial cables entering the system and both cross-talk and layout issues to overcome.
An elegant and potentially scalable spin qubit control solution was anticipated in the 1998 Kane proposal for a silicon quantum computer \cite{Kane1998}. In this approach a single uniform global magnetic field is radiated across the entire processor \cite{Veldhorst2017, Hill2015} (see Fig.~\ref{fig:1}A). Qubit operations are realized by applying potentials locally to spins to bring them in and out of resonance with the global field \cite{Laucht2015}. This method utilizes a single microwave source and does not require the direct passage of strong high-frequency currents through the processor, reducing heating and simplifying the chip layout design.
Conventional electron spin resonance (ESR) spectroscopy \cite{schweiger2001} already provides a way to deliver global microwave fields to large ensembles of spins. The first-order approach to implementing global control for spin qubits in a quantum processor would be to simply place the chip inside a conventional 3D microwave cavity. However, internal device structures (such as metal gates and bond wires) adversely affect the properties of these microwave resonators \cite{Kong15}. Furthermore, the chip is exposed to large alternating electric fields in these cavities, which interfere with (or potentially damage) the sensitive qubit readout nanoelectronics.
An important metric for a microwave cavity is the power-to-field conversion factor $C$, which quantifies how well a microwave input signal is converted to the alternating magnetic field needed to drive spin rotations. The expression $B_1 = C\sqrt{P}$ relates the magnetic drive $B_1$ to the input signal power $P$ through the conversion factor $C \propto \sqrt{Q/\omega V}$, which itself depends on the quality factor $Q$ of the cavity, its frequency $\omega$ and the microwave mode volume $V$. For low conversion factors (as obtained with conventional metallic cavity resonators), substantial powers are required to drive sufficiently fast spin rotations -- powers potentially incompatible with the cryogenic environment of the quantum processor. To raise the conversion efficiency and reduce the driving power, cavities with high $Q$ and small mode volumes are desirable.
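To make the conversion-factor metric concrete, the following back-of-the-envelope estimate (our own illustrative numbers; only the conversion-factor scale of $\sim3\,$\textmu T$/\sqrt{\text{\textmu W}}$ is taken from the dielectric resonator discussed later) computes the input power needed for a 1\,MHz Rabi frequency from $B_1 = C\sqrt{P}$, using $f_\text{Rabi} = \gamma_e B_1/2$ for a linearly polarized drive.

```python
# Back-of-the-envelope power estimate from B1 = C*sqrt(P).
# gamma_e and the target Rabi frequency are assumed round numbers.
gamma_e = 28.0e9                    # electron gyromagnetic ratio (Hz/T)
C_conv = 3e-6 / (1e-6) ** 0.5       # ~3 uT/sqrt(uW), expressed in T/sqrt(W)
f_rabi = 1.0e6                      # target Rabi frequency (Hz)
B1 = 2.0 * f_rabi / gamma_e         # required drive field (T), f_Rabi = gamma_e*B1/2
P = (B1 / C_conv) ** 2              # required input power (W), ~0.6 mW
```

At this conversion efficiency, MHz-scale Rabi frequencies require sub-milliwatt input powers, which illustrates why a high $C$ matters for operation in a cryogenic environment.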
While attempts have been made to control spins in nanoelectronic devices using 3D metallic ESR resonators \cite{simovicDesignQbandLoopgap2006}, performing spin resonance with a global field has, until now, remained elusive. Here we demonstrate ESR of single spins in a silicon metal-oxide-semiconductor (SiMOS) quantum dot (QD) device by using a compact dielectric resonator (DR) placed above the chip (Fig.~\ref{fig:1}B). The DR is constructed from potassium tantalate (KTaO$_3$ or KTO), a quantum paraelectric material that exhibits an exceptionally high dielectric constant at cryogenic temperatures and hence compact microwave mode volumes. ESR control is confirmed to be resonator-driven by observing an enhancement in the mixing of the quantum dot spin states within the dielectric resonator bandwidth. This represents the first step towards the vision (see Fig.\ref{fig:1}A) of large-scale qubit control using global magnetic fields generated off-chip.
\vspace{30pt}
\begin{figure*}[]
\centering
\includegraphics[width=1\linewidth]{fig1_sci.pdf}
\caption{\textbf{Device stack for off-chip ESR and dielectric resonator simulations.}
\textbf{(A)} A 3D render of the vision for large-scale qubit control using a global microwave field.
\textbf{(B)} 3D graphic of the global control device stack as used in our experiments, including silicon quantum nanoelectronic device (bottom), sapphire dielectric spacer (middle) and potassium tantalate (KTaO$_3$) dielectric microwave resonator (top). The DR is a $0.7$\, $\times$ $0.55$\, $\times$ $0.3$\,mm$^3$ rectangular prism.
\textbf{(C)} A photograph of the device and coaxial loop coupler.
\textbf{(D)} Schematic depicting directions of the magnetic and electric field components of the TE$_{11\delta}$ mode in the dielectric resonator.
\textbf{(E-F)} Finite-element simulation of the magnetic field $z$-component magnitude (E) and electric field magnitude (F) of the TE$_{11\delta}$ mode supported by the DR.
\textbf{(G)} $S_{11}$ reflection parameter measurement near the fundamental mode of the dielectric resonator, measured via the coaxial loop coupler.
}
\label{fig:1}
\end{figure*}
\section*{Results}
\paragraph*{Dielectric Resonator and Chip Assembly.}\label{section::DR}
In order to overcome the challenges presented by global control, we desire a microwave cavity that minimizes the mode volume (whilst maximizing surface area), maintains a large quality factor and produces very little electric field at the chip. In this paper we exploit a species of ESR cavity called a dielectric resonator -- a volume of dielectric material where the sudden transition between the high dielectric constant inside the material and the low dielectric constant outside of it (vacuum) leads to the formation of standing waves. DRs are often used in conventional ESR spectroscopy \cite{Blank2003} due to their high magnetic field to power conversion factors ($C$) and low intrinsic losses. Our DR is designed to operate in the TE$_{11\delta}$ mode \cite{Geifman2005}, shown in Fig.~\ref{fig:1}D. In this mode the DR acts like a magnetic dipole (i.e. a current loop). The magnetic field is strongest along the central axis (see Fig.~\ref{fig:1}E) and extends outside of the DR, while the electric field is concentrated towards the outer edges and strongly confined inside the material (Fig.~\ref{fig:1}F).
The frequency of a dielectric resonator is inversely proportional to the dielectric constant $\varepsilon_\text{r}$ and its mode volume $V$, specifically $\omega \propto 1/(\sqrt{\varepsilon_\text{r}}V^{1/3})$. Thus, for a given frequency, the mode volume can be reduced by increasing $\varepsilon_\text{r}$. In this work we exploit the perovskite material potassium tantalate. KTO is a quantum paraelectric, a material that displays ferroelectric-like properties (such as a high dielectric constant) but whose transition to the ferroelectric phase is suppressed by quantum fluctuations \cite{Gevorgian2009}. KTO has an extraordinarily large dielectric constant at cryogenic temperatures ($\varepsilon_\text{r} \approx 4,300$ for $T < 10$\,K) and exhibits very low microwave losses ($\tan\delta \sim 10^{-4}~\text{to}~10^{-5}$) permitting high internal quality factors \cite{Geyer2005}.
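As a rough consistency check of this scaling (a crude half-wavelength estimate of our own, not the finite-element result), the longest DR dimension $L \approx 0.7$\,mm and $\varepsilon_\text{r} \approx 4{,}300$ give a fundamental frequency of the right order:

```python
# Crude dielectric-resonator frequency estimate, f ~ c / (2*L*sqrt(eps_r)).
# Geometry factors of the TE_11delta mode are ignored, so only the order
# of magnitude is meaningful.
c = 2.998e8           # speed of light (m/s)
L = 0.7e-3            # longest resonator dimension (m)
eps_r = 4300.0        # KTaO3 dielectric constant at cryogenic temperature
f_est = c / (2 * L * eps_r ** 0.5)   # a few GHz
```

This lands within a factor of two of the measured 6.163\,GHz fundamental, consistent with a millimetre-scale resonator only because of the extreme dielectric constant of KTO.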
The DR is cut in the shape of a rectangular prism and integrated with the silicon quantum dot device as depicted in Fig.~\ref{fig:1} (B and C). The smallest dimension of the prism is its height, as this allows us to increase the surface area over which the $B_1$ field is generated for a given DR volume (and consequently frequency). Our finite-element simulations indicate that $B_1$ has a good uniformity across the qubit plane, varying by less than 15\,\% over a $0.2 \times 0.2\,$mm$^2$ area (see Methods). We note, however, that this uniformity is far from optimized and can readily be improved through adjustments to the DR dimensions. In order to maximize the resonator $Q$ and minimize any stray electric fields reaching the device, a $200$\,\textmu m thick sapphire spacer is placed between the DR and silicon chip. The $B_1$ field used for driving spin rotations is oriented perpendicular to the qubit plane when operating in the TE$_{11\delta}$ mode. The chip containing the silicon double-quantum-dot device is wire-bonded to a printed circuit board and housed inside a copper enclosure to shield the sample and to suppress radiation losses from the DR. Microwave power is inductively coupled into the resonator through a coaxial cable terminated in a shorted loop (see Fig.~\ref{fig:1}C). The coupling strength of the DR to the coaxial port is determined primarily by the overlap between the DR and loop magnetic fields and can be controlled ex-situ by altering the position of the loop.
Experiments are performed in a dilution refrigerator at a base temperature of 50\,mK. We probe the frequency response of the resonator via an $S_{11}$ measurement through the coaxial coupler (Fig.~\ref{fig:1}G) and extract a resonant frequency of the fundamental TE$_{11\delta}$ mode of 6.163\,GHz at 50\,mK. We measured an internal quality factor of $Q_\text{i} \approx 60,000$ for the resonator in a separate cool-down without the chip, in line with the best reported values of the microwave loss-tangent for KTO \cite{Geyer2005}. In practice, we find the quality factor of the resonator to be limited by losses in the device and extract an internal quality factor of $Q_\text{i} \approx 500$. Finite-element simulations of the complete device assembly are performed to reproduce the measured DR $S_{11}$ (see Methods) and indicate an exceptionally-large conversion factor $C \approx3\,\text{\textmu T}/\sqrt{\text{\textmu W}}$.
\begin{figure*}[!t]
\centering
\includegraphics[width=1\linewidth]{fig2_sci.pdf}
\caption{\textbf{Silicon double quantum dot with latched Pauli spin blockade readout.}
\textbf{(A)} Scanning electron micrograph (SEM) of the quantum dot device with false coloring for the gate electrodes.
\textbf{(B)} A schematic cross-section through the middle of the device (indicated with a white dashed line in panel a), showing the 3D structure of the gates and conduction band profile.
\textbf{(C)} Charge stability diagram of the double quantum dot obtained by monitoring the current $ I_\text{SET} $ through a nearby capacitively-coupled charge sensor (a single electron transistor). The numbers in parentheses represent the charge occupancies of the double dot system Dot 1, Dot 2: ($N1$, $N2$).
\textbf{(D)} Schematic energy diagram of the (4,0)-(3,1) anti-crossing.
\textbf{(E)} Readout pulse sequence overlaid on a 2D color plot of the readout search signal $\Delta I_{\rm SET}$ (see Methods for details) as a function of the voltages on gates D1 and D2. The absolute voltage range is indicated by the blue rectangle in panel c. Readout is performed with the sequence B to D. The solid lines indicate transitions with high tunnel rates, the dashed lines indicate transitions with low tunnel rates, and the thin lines identify the PSB (Pauli spin blockade) and latched regions.
\textbf{(F)} Histogram of the latched SET current signal for $\ket{\downarrow\downarrow}$ state initialization, indicating a $\ket{\downarrow\downarrow}$ initialization infidelity of 19\,\%.
}
\label{fig:2}
\end{figure*}
\paragraph*{Nanoelectronic Device and Spin Readout.}
The single spins in this work are provided by a \textsuperscript{nat}Si metal-oxide-semiconductor
double quantum dot device, electrostatically defined by a palladium (Pd) gate stack architecture \cite{Zhao2019}. Fig.~\ref{fig:2}A shows a scanning electron microscope (SEM) image of a device nominally identical to the one measured. A cross-section through the dot channel depicting the gate stack and conduction band profile is illustrated in Fig.~\ref{fig:2}B. Critically, instead of having a microwave transmission line \cite{Dehollain2013} or a micromagnet \cite{Leon2020} fabricated on-chip with the quantum dots, here we incorporate the coaxial loop coupler and dielectric resonator on top of the silicon device to drive spin resonance in a global field.
To populate an arbitrary integer number of electrons ($N1$, $N2$) in Dot 1 and Dot 2, we apply positive voltages to gates D1 and D2. The electrons are loaded from an electron reservoir induced at the Si/SiO$_2$ interface by applying a positive bias to D3, D4 and RG. By lowering the voltages on confinement gates CB1, CB2 and SETB, we ensure that electrons in the two QDs are confined to small spatial regions as indicated in Fig.~\ref{fig:2}A. The barrier gates RB and J allow us to control the tunnel coupling between the dots and the reservoir (RB) as well as the interdot coupling (J). A single electron transistor (SET) nearby serves as a charge sensor to monitor the occupancy of the two QDs \cite{Veldhorst2014}.
Fig.~\ref{fig:2}C shows the charge stability diagram of the double QD system, measured via a double lock-in technique \cite{Yang2011}. The nearly horizontal lines (blue) correspond to charge transitions in Dot 1, while the nearly vertical lines (red) are caused by charge transitions in Dot 2. As we decrease $V_\text{D1}$ and $V_\text{D2}$ (the voltages applied to gates D1 and D2), electrons are depleted one-by-one from Dot 1 and Dot 2 until they are both completely empty, denoted as (0,0) in the lower left corner of the stability plot. The transitions of Dot 1 are not visible at lower $V_\text{D2}$ because the tunnel rates between Dot 1 and the reservoir for these bias conditions are smaller than the lock-in probe frequency. These transitions are indicated with gray dashed lines as a visual aid. We note that the faint vertical transition visible at $V_\text{D2}\approx1.45$\,V is due to an unintentional dot formed under gate RB. The sensing signal is weaker owing to its distance from the SET sensor.
For the remainder of the paper, we focus on the (4,0)-(3,1) charge transition for singlet-triplet (ST) readout. Here, two of the electrons in Dot 1 form a spin-zero closed shell and do not interact with the third and fourth electrons during experiments. Spin readout is performed via the pulse sequence A to D indicated in Fig.~\ref{fig:2}E (see Methods for the experiment performed to obtain this 2D color-map). Step A prepares the double QD system in the (4,0) singlet state. Then, by ramping adiabatically from A to B, the system is initialized in the triplet state $\ket{\downarrow\downarrow}$ (see the energy diagram in Fig.~\ref{fig:2}D). Pulsing from B to C attempts to move the electron from Dot 2 to Dot 1. If the electron in Dot 2 forms a singlet with the electron in Dot 1, tunneling takes place, producing a change in the charge configuration from (3,1) to (4,0). However, if a triplet state is formed, Pauli spin blockade (PSB) prevents the electron from tunneling and the system remains in the (3,1) charge configuration \cite{Johnson2005}. The charge and therefore spin states can be differentiated by monitoring the SET current $I_\text{SET}$, a process referred to as spin-to-charge conversion. Additionally, the sensitivity of the spin-to-charge conversion is enhanced using a latched PSB mechanism \cite{Harvey-Collard2018, Zhao2019} by pulsing quickly from C to D.
The histogram shown in Fig.~\ref{fig:2}F is formed by running 30,000 single shot measurements and recording the difference of the SET current at pulse level D ($I_\text{D}$) and a reference level in the (4,0) region ($I_\text{R}$). The histogram reveals two peaks corresponding to measurements of singlet or triplet states. The singlet and triplet peaks are separated by 8.2\,$\sigma$, indicating a charge state readout infidelity on the order of $10^{-16}$. Since the pulse sequence A-D ideally initializes and measures the system in the $\ket{\downarrow\downarrow}$ state, the appearance of the singlet histogram peak allows us to infer a $\ket{\downarrow\downarrow}$ initialization fidelity of 81\%.
\paragraph*{Single-Electron Spin Resonance in a Global Field.}\label{section::ESR}
\begin{figure}[h!]
\centering
\includegraphics[]{fig3_sci.pdf}
\caption{\textbf{Electron Spin Resonance Results.}
\textbf{(A)} Pulsing scheme for the electron spin resonance measurements. The double quantum dot is initialized in a $\ket{\downarrow\downarrow}$ state. Microwave power is then applied to the dielectric resonator, generating an alternating magnetic field, $B_1$, which can rotate the spins if they are in resonance with the field. Finally, readout is performed to find the probability for the system to be in the triplet state. The cartoon below illustrates the spin states of the double dot system for each pulse stage, showing both the case when the ESR drive is on-resonance and when it is off-resonance.
\textbf{(B)} Triplet probability as a function of the applied microwave frequency $f_{\rm ESR}$ at $B_0 = 227.48$\,mT, showing three ESR (electron spin resonance) peaks, two of which are consistent with the double dot system, while the third is an unexpected ESR peak. Note that panels B-D all share the same horizontal axis.
\textbf{(C)} Triplet probability as a function of $f_{\rm ESR}$ and $B_0$, demonstrating that the ESR peaks shift with magnetic field, as expected.
\textbf{(D)} Depth of the middle ESR peak (P\textsubscript{b}) plotted as a function of the qubit frequency, demonstrating enhancement of the magnetic field inside the bandwidth of the DR. The resonator $ S_{11} $ is overlaid in gray (see Fig.~\ref{fig:1}G).
}
\label{fig:3}
\end{figure}
Having demonstrated spin initialization and readout, we now investigate electron spin resonance in a global microwave magnetic field. Using the pulse sequence illustrated in Fig.~\ref{fig:3}A, we first initialize a $\ket{\downarrow\downarrow}$ state in the double QD system. We then apply microwave control pulses to the coaxial loop coupler which excites the TE$_{11\delta}$ mode of the dielectric resonator, generating a global alternating magnetic field, $ B_{1} $, to manipulate the individual spins. When the DR frequency is resonant with either the $\ket{\downarrow\downarrow}$$\,\Longleftrightarrow\,$$\ket{\uparrow\downarrow}$ or $\ket{\downarrow\downarrow}$$\,\Longleftrightarrow\,$$\ket{\downarrow\uparrow}$ transitions (see Fig.~\ref{fig:2}D), which occurs at specific values of $B_{0}$, the spin states become mixed, reducing the probability of the system being in the $\ket{\downarrow\downarrow}$ state. The resonance frequencies can be calculated with $ f_{\rm res} = g\mu_\text{B}B_{0}/h$, where $g$, $\mu_\text{B}$, and $h$ are the electron $g$-factor, Bohr magneton, and Planck constant, respectively. After spin manipulation, readout is performed by pulsing to the latched region (as described above) and classifying the detected spin state as either triplet or singlet depending on the size of the recorded SET current. Repeating this sequence several times allows us to calculate the probability for measuring a $\ket{\downarrow\downarrow}$ state $P_\text{triplet}$. When the spins in either Dot 1 or Dot 2 are resonant with the DR, we measure a reduction in $P_\text{triplet}$. It is known that the electron $g$-factors can be different in each dot due to electric-field-induced Stark shifts and device strains produced by thermal contraction of the metal gates \cite{Huang2017,Tanttu2019}.
Figure~\ref{fig:3}B shows the ESR peaks driven by the DR as a function of applied microwave frequency at a static magnetic field $ B_0 = 227.48$\,mT. By fitting the experimental data (blue circles) to Gaussian distributions we obtain two broad peaks P\textsubscript{a} and P\textsubscript{b}, with resonance frequencies of 6.163\,GHz and 6.174\,GHz, respectively, attributed to the double QD system, as well as a third peak P\textsubscript{c} at 6.180\,GHz, which could potentially result from an unintended spin state coupled to the DQD. To demonstrate that these peaks are spin-related, we measure the triplet probability as a function of $B_0$ and the $B_1$ drive frequency $f_{\rm ESR}$, as shown in Fig.~\ref{fig:3}C. All three peaks exhibit a linear dependence on $B_0$, each shifting by $\approx54\,\text{MHz}$ over a 2\,mT range of $ B_0 $. The ESR frequencies are all consistent with electron spin $g$-factors in SiMOS quantum dots \cite{Tanttu2019}, which are generally slightly below $ g=2 $, to an accuracy limited by the calibration of our DC superconducting magnet.
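The resonance condition $f_{\rm res} = g\mu_\text{B}B_{0}/h$ quoted above lets one cross-check these numbers directly. The sketch below (frequencies and field taken from the text; constants hardcoded to CODATA values) recovers a $g$-factor slightly below 2 and the $\approx54$\,MHz shift over a 2\,mT sweep:

```python
# Cross-check of the ESR numbers using f_res = g * mu_B * B0 / h.
h = 6.62607015e-34       # Planck constant (J s)
mu_B = 9.2740100783e-24  # Bohr magneton (J/T)

B0 = 227.48e-3           # static field from the text (T)
f_res = 6.163e9          # resonance frequency of peak Pa (Hz)

# Electron g-factor implied by the resonance condition
g = h * f_res / (mu_B * B0)
print(f"g = {g:.4f}")    # approx. 1.936, slightly below g = 2

# Frequency shift of an ESR peak over a 2 mT sweep of B0
shift = g * mu_B * 2e-3 / h
print(f"shift over 2 mT = {shift / 1e6:.1f} MHz")  # approx. 54 MHz
```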
We observe that the visibility of the P\textsubscript{a} and P\textsubscript{b} signals is enhanced considerably when the microwave frequency $f_{\rm ESR}$ is within the bandwidth of the DR fundamental mode, as should be the case if ESR is being driven by a global field produced by the resonator. In Fig.~\ref{fig:3}D, we plot the ESR peak depth along the P\textsubscript{b} transition as a function of ESR frequency, $ f_{\rm ESR} $, indicated by the dashed line in Fig.~\ref{fig:3}C, and fit the result with a Lorentzian curve (black line). The maximum peak depth coincides well with the $S_{11}$ response of the DR (Fig.~\ref{fig:1}G), re-plotted as a grey line in Fig.~\ref{fig:3}D for comparison. Outside of the DR resonance bandwidth, the visibility of P\textsubscript{b} saturates to approximately 1\,\%, indicating that there is some residual drive which may, for example, originate from the coaxial loop coupler. However, the visibility is improved to 8\,\% at the resonance frequency (6.163\,GHz), demonstrating that the observed ESR is largely driven by the dielectric resonator itself. For powers exceeding $-32$\,dBm at the loop coupler, we observe transients in the measured SET current after application of the ESR pulse (see Methods), which coincide with the spin states being left in a fully mixed state. Consequently, we are not able to increase the microwave drive such that the Rabi frequency exceeds the ESR linewidth ($\Delta f = 2\,\text{MHz}~\text{to}~4\,\text{MHz}$) of the spins in this \textsuperscript{nat}Si device, which is a prerequisite for observing Rabi oscillations.
\section*{Discussion}
A natural next step is the demonstration of coherent spin control. The measured device is fabricated on a \textsuperscript{nat}Si substrate which consists of approximately 4.7\,\% \textsuperscript{29}Si, having non-zero nuclear spins that cause significant inhomogeneous broadening of the resonance frequencies. Coherent spin driving therefore requires relatively fast Rabi frequencies, at least as large as the P\textsubscript{a} and P\textsubscript{b} transition linewidths of 4\,MHz and 2\,MHz, respectively (Fig. \ref{fig:3}C). Moving to an isotopically-enriched silicon substrate will substantially reduce the power requirements for observing coherent control \cite{Veldhorst2014}.
The measured internal quality factor of the dielectric resonator and device assembly is approximately two orders of magnitude lower than the material limit for KTO ($Q_\text{i}\approx 60,000$). Improvements in the quality factor could be made through device modifications to remove any microwave current loops and to minimize the overlap of the DR with lossy materials such as bond wires and the highly-doped source and drain n$^+$ ohmic contacts. Reaching the material limit for $Q$ would boost the conversion factor by an order of magnitude, decreasing the power requirements a hundredfold and allowing Rabi frequencies as large as 6\,MHz for an input microwave power of just 200\,\textmu W.
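The hundredfold power saving quoted above follows from a simple scaling argument. Assuming the conversion factor scales as $C \propto \sqrt{Q}$ and the drive field as $B_1 = C\sqrt{P}$ (an assumption consistent with the text's numbers, not stated explicitly there), the power needed for a fixed $B_1$ scales as $1/C^2$:

```python
# Illustrative scaling check, assuming C ∝ sqrt(Q) and B1 = C * sqrt(P).
Q_now = 600        # assumed current Q (two orders of magnitude below limit)
Q_limit = 60_000   # KTO material limit quoted in the text

C_gain = (Q_limit / Q_now) ** 0.5  # conversion-factor improvement: 10x
P_gain = C_gain ** 2               # power reduction for the same B1: 100x
print(C_gain, P_gain)
```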
The successful demonstration of electron spin resonance in a nanoelectronic device using a global magnetic field, as reported here, is a crucial step on the path to scale-up of spin-based quantum processors. This work shows that 3D dielectric resonators can be integrated with nanoelectronic circuits and are a viable source of global magnetic fields. The surface area of the current dielectric resonator ($0.7 \times 0.55$\,mm$^2$) would overlap with approximately 40 million qubits, assuming a conservative 100\,nm qubit pitch \cite{Veldhorst2017}. This area can be readily increased by reducing the height and/or frequency of the dielectric resonator. We believe that with the proposed alterations to the chip design and resonator assembly, large-scale control of millions of spin qubits in a continuous microwave field is now a realistic prospect.
\section*{Materials and Methods}
\paragraph*{Electromagnetic simulations.}
To estimate the dielectric resonator conversion factor $C$ and plot the magnetic and electric field profiles in Fig.~\ref{fig:1} (E and F), we use the software package
Circuit Simulation Technology Microwave Studio (CST-MWS) \cite{CST}. We define a 3D model of the global control device stack (using a simplified nanoelectronic device model), a coaxial loop coupler and copper enclosure. A waveguide port is defined to couple the microwave excitation to the coaxial loop and the frequency domain solver is used to solve Maxwell's equations over
a finite-element mesh of the model.
The internal quality factor observed in the experiment is reproduced in the simulation by adjusting the loss tangent of materials in the device and the external coupling strength is matched by adjusting the position of the loop coupler. The magnetic and electric field profiles are extracted by placing a field monitor at the resonant frequency, which is found by observing the simulated $S_{11}$ parameter response. To estimate the magnetic field homogeneity of the DR we simulate the KTO and silicon (without the device layer), using the eigenmode solver to calculate the magnetic field distribution of the TE$_{11\delta}$ mode. This is presented in Fig.~\ref{fig:Methodshomogeneity} as a percentage of the maximum field value.
\begin{metfig}[!h]
\centering
\includegraphics[]{figMethodshomogeneity.pdf}
\caption{\textbf{Magnetic field homogeneity.}
Magnetic field distribution of the TE$_{11\delta}$ mode, plotted as a percentage of the maximum field value at the surface of the silicon sample.
}
\label{fig:Methodshomogeneity}
\end{metfig}
\paragraph*{Experimental set-up.}
Our experiments were performed in a set-up similar to previous work \cite{Veldhorst2014,Yang2020} except for the delivery of the microwave signal to the device. The diagram in Fig.~\ref{fig:MethodsSetup} depicts how the signal generated by the microwave source is routed to the device. The signal reaches the coaxial loop coupler after going through a cryogenic circulator with a pass-band of 4\,GHz to 8\,GHz. The microwave signal produces an alternating current around the loop, generating a magnetic field that inductively couples to the DR field, as explained in the section `\nameref{section::DR}'. To measure the DR $S_{11}$ response we replace the microwave source with a vector network analyzer (VNA), where the reflected signal from the loop coupler is returned to the VNA via the circulator.
\begin{metfig}[!h]
\centering
\includegraphics[]{figMethodsSetup.pdf}
\caption{\textbf{Microwave control setup diagram.}
Schematic showing the connections between the microwave (MW) source, VNA used for the $S_{11}$ measurements and coaxial loop coupler.
}
\label{fig:MethodsSetup}
\end{metfig}
\paragraph*{SET response to a continuous microwave drive.}
Whilst the ESR experiments of section `\nameref{section::ESR}' are performed in pulsed mode, where microwaves are applied in 150\,\textmu s bursts (a duty cycle of approximately 0.7\,\%), here we present the SET current response under a continuous microwave drive at the DR fundamental mode resonance frequency. We scan the SET ST gate voltage ($V_{\rm ST}$) and monitor the current $I_{\rm SET}$ as we increase the power of the microwave drive (Fig. \ref{fig:MethodsCP}). The broadening of the observed Coulomb peak widths can be a useful measure to investigate the effect of electric fields present in the device whilst driving. For powers starting from $-38$\,dBm, the broadening increases linearly with power. We note that even with powers as high as $-25$\,dBm at the coupler, the peaks remain clearly visible.
\begin{metfig}[!h]
\centering
\includegraphics[]{figMethodsCP.pdf}
\caption{\textbf{Coulomb peaks versus microwave power.}
Coulomb peaks measured as a function of the microwave power applied to the loop coupler. Inset: Width of the Coulomb peak identified by the black arrow.
}
\label{fig:MethodsCP}
\end{metfig}
\paragraph*{Pauli spin blockade search.}
The 2D color-map in Fig.~\ref{fig:2}E of the main text is a measurement primarily performed to search for the Pauli spin blockade (PSB) region at the (4,0)-(3,1) transition. Each pixel in the map encodes the current difference $\Delta I_{\rm SET}$ between two measurements with different spin initializations: a mixed state (producing current $I_{\rm mix}$) and a singlet state (producing $I_{\rm singlet}$). Both types of initializations are depicted in Fig. \ref{fig:MethodsPSB}. The current $I_\text{mix}$ is measured by following the sequence M1-M2-R, which starts with (3,0) and then ramps to (3,1) before readout. The spin state of the electron loaded to D2 after the ramp could be parallel or anti-parallel to the spin of the third electron in D1, i.e. the two-spin state becomes a mixture of singlet and triplet states. For measuring $I_\text{singlet}$ the sequence becomes S1-S2-R. The resulting charge configuration after the ramp to S2 becomes (4,0), which forces the two-spin state to be a singlet due to PSB. The voltages of points M1, M2, S1 and S2 are fixed in the experiment whilst R is scanned, with the ``readout search signal'' $\Delta I_{\rm SET} = I_{\rm mix} - I_{\rm singlet}$ measured at each pixel to produce the color map.
\begin{metfig}[!h]
\centering
\includegraphics[]{figMethodsPSB.pdf}
\caption{\textbf{Pauli Spin Blockade Search Measurement.}
Green (red) level labels and arrows indicate how mixed (singlet) states are initialized for PSB search. The readout search signal $\Delta I_{\rm SET} = I_{\rm mix} - I_{\rm singlet}$ is measured at the level R, which is swept all over the 2D space in the plot.
}
\label{fig:MethodsPSB}
\end{metfig}
\paragraph*{Power limit to ESR pulses.}
As mentioned in the section `\nameref{section::ESR}', ESR pulses with powers exceeding $-32$\,dBm at the loop coupler produced random charge jumps in the SET current (Fig.~\ref{fig:MethodsSpinScrambling}). These jumps were observed to completely mix the spin state of the system, i.e. remove the spin information before read-out. For lower powers, the traces do not show any charge jumps and the read-out was not affected.
\begin{metfig}[!h]
\centering
\includegraphics[]{figMethodsSpinScrambling.pdf}
\caption{\textbf{Time-resolved SET current traces recorded for ESR pulses with different powers.}
The trace with high power ($-17$\,dBm at the coupler) has unexpected steps in the SET current, indicating sudden electron jumps in the vicinity of the dots and SET. The jumps are not observed in the low power trace ($-32$\,dBm at the coupler). Traces offset by a 0.15\,nA vertical shift.
}
\label{fig:MethodsSpinScrambling}
\end{metfig}
\paragraph*{Background subtraction.} The data in Fig.~\ref{fig:3}B and 2D color-map in Fig.~\ref{fig:3}C are post-processed to remove background artefacts unrelated to the spins. A horizontal trace recorded at a $B_0$ far off resonance with spins is subtracted from the data. Additionally, each horizontal trace is offset by its own median current to correct for any variation in the SET current level over time.
\section{Future Work on Evidence Generation}\label{sec:FutureEvidence}
\section{AFS High-Level Software Requirements}\label{sec:HLR}
Without a doubt, the most fundamental step of any system development is capturing a good set of requirements in a systematic way such that they are accurate, complete, and verifiable. However, this is not always an easy, straightforward task for most complex systems, including the AFS system. Though the ConOps described the contingencies and the recovery actions at a high level, specifying precise requirements for AFS such that they can be subjected to a variety of analyses was not without challenges. We begin this section with a brief note about some of the challenges we encountered while specifying requirements, followed by details of the formalization and analysis of the AFS requirements.
Initially, we captured the requirements of each contingency independently, as specified in the ConOps in Section~\ref{chap:challenge}. However, a deeper look at the failure conditions and the recovery actions revealed that (a) there was a lack of detail about the desired behaviour when multiple failures occur at the same time; and (b) some of the recovery actions had a critical dependency on the proper functioning of components whose contingencies and recovery actions were defined within the same ConOps. For example, the copter is expected to return to launch when the battery level of the AFS system is below a certain level; however, without a properly functioning GPS, the copter would not be able to return to launch. Despite having identified GPS failures as one of the contingencies in the ConOps, the respective recovery was assigned a lower priority than the battery recovery. So, it was imperative that we not only capture the priority among the contingencies, but also take into account dependencies among the recovery actions. Also, on collectively analysing the recovery actions, we found that all of them describe response actions to the same actuator of the copter and broadcast messages to the same messaging channel. Hence, it was crucial for us to carefully specify a consistent and correct response when multiple failures occur. Further, the ConOps indicated that all three geofence conditions have the same level of priority. Though this may be acceptable, it would lead to non-deterministic behaviours and, eventually, a non-deterministic implementation. Hence, we had to explore means to provide some sort of precedence among those equally prioritized contingencies.
Given all the above challenges, after several rounds of brainstorming and exploring multiple approaches, we finally decided to use the notion of \emph{states} to abstractly conceptualize the condition of the AFS during various contingencies, and \emph{transition conditions} defining when and how the system shifts between those states. However, to handle the priority and dependencies among contingencies, a flat state-transition structure was not adequate. We explored the notion of defining each contingency independently and specifying an \emph{arbiter}~\cite{murugesan2013modes} to finally decide a recovery action. However, the issue was that, to define the final recovery action, it became necessary to repeat the specification of each failure condition in the arbiter, since there were dependencies. Hence, we took a hierarchical state machine approach~\cite{harel1987statecharts}, where we encoded the notion of hierarchy, sequence and dependency of recovery actions into the requirements.
It is worth mentioning that, although state machines and hierarchies are often associated with design models, we have found that they also provide a comprehensible means to conceive and describe the requirements of complex systems. Hence, we have used them purely to capture the AFS contingency requirements in a concise and precise manner. This does not impose any restrictions on the way the system will be designed or implemented.
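To make the priority-and-dependency idea concrete, the following sketch (a hypothetical illustration, not part of the CLEAR specification or the AFS implementation) evaluates contingencies in a fixed precedence order, with the battery recovery action itself depending on GPS health as discussed above:

```python
def select_recovery(battery_low, battery_critical, gps_ok, comm_ok, breaches):
    """Pick a single recovery action from prioritized contingencies.

    Illustrative only: names and actions are hypothetical stand-ins for
    the AFS ConOps contingencies described in the text.
    """
    # Battery contingencies have the highest priority, but the
    # return-to-launch response depends on a functioning GPS.
    if battery_critical:
        return "LAND_IMMEDIATELY"
    if battery_low:
        return "RETURN_TO_LAUNCH" if gps_ok else "LAND_IMMEDIATELY"
    # GPS loss comes next: hover while waiting for recovery.
    if not gps_ok:
        return "HOVER"
    # Geofence breaches, with a fixed precedence among the otherwise
    # equal-priority conditions to avoid non-determinism.
    for kind in ("altitude", "polygon", "range"):
        if kind in breaches:
            return f"CORRECT_{kind.upper()}_BREACH"
    # Communication loss has the lowest priority.
    if not comm_ok:
        return "REESTABLISH_COMM"
    return "CONTINUE_MISSION"

# Low battery with no GPS: recovery degrades from RTL to an immediate landing.
print(select_recovery(True, False, gps_ok=False, comm_ok=True, breaches=set()))
```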
\subsection{CLEAR Features for AFS HLR}
We formally captured the AFS requirements using the CLEAR notation explained in Section~\ref{chap:challenge}. We defined the requirements of each contingency as a separate \emph{requirement set}. A requirement set stands alone such that it sufficiently describes the necessary capabilities and constraints of the contingency considered. All the variables and enumerations that are used across the requirement sets are collectively defined in common dictionary or definition files.
\paragraph{State Machines}
States are widely used as abstraction to capture mutually exclusive sets of discrete, dynamic system behaviors. The description of the states and the rules defining when and how the system transitions between those states, collectively known as \emph{state machines}, provided a coherent, concise means to express the AFS requirements.
\begin{figure}[!h]
\centering
\includegraphics[width=\columnwidth]{figures/AFSstates.PNG}
\caption{AFS State Definition}
\label{fig:AFSState}
\end{figure}
To that end, using the state definition construct in the CLEAR notation, we defined \texttt{AFS\_State} to represent the various contingency states of the AFS system, as shown in Figure~\ref{fig:AFSState}. This state definition has several benefits: it succinctly specifies the behaviours and helps define precedence among them in an easily understandable manner.
\paragraph{Precedence and Priority Among Requirements}
\input{requirements_priority}
\newpage
\paragraph{Ontic Type Support for Navigation Entities}
One of the new ontic types we defined in CLEAR for AFS is the \emph{XYZVector} type, a coordinate system definition that allows uniquely defining the location of geographical and navigational entities. The coordinate system convention used for most vehicles, including aircraft, is based on an XYZ system, where $X$ and $Y$ represent the horizontal position on the ground and $Z$ represents the altitude above the ground. When specifying the requirements of aerospace systems, the notion of a three-dimensional coordinate system is inevitable to express the position of air vehicles in space, their distance to other entities in space, etc. In the CLEAR notation, users can define terms (or variables) of type XYZ coordinates as shown in Figure~\ref{fig:XYZVecDefn}.
\begin{figure}[!h]
\centering
\includegraphics[width=0.7\columnwidth]{figures/XYZDefine.PNG}
\caption{XYZVector Definition in CLEAR}
\label{fig:XYZVecDefn}
\end{figure}
To allow defining the notion of distance between two XYZ coordinate terms, the CLEAR notation provides the \emph{distance between} construct. The distance between two XYZ points ($x_1, y_1, z_1$) and ($x_2, y_2, z_2$) is computed using the following formula:
$$ \mathit{distance} = \sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2 + (z_1 - z_2)^2} $$
Figure~\ref{fig:XYZVecReq} illustrates the usage of this construct in one of the requirements of the AFS.
\begin{figure}[!h]
\centering
\includegraphics[width=\columnwidth]{figures/XYZExample.PNG}
\caption{XYZVector terms and distance function usage in CLEAR}
\label{fig:XYZVecReq}
\end{figure}
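The computation behind the \emph{distance between} construct is the standard Euclidean distance between two points; a minimal sketch (the function name is ours, purely illustrative):

```python
import math

def xyz_distance(p1, p2):
    """Euclidean distance between two XYZ coordinate terms."""
    (x1, y1, z1), (x2, y2, z2) = p1, p2
    return math.sqrt((x1 - x2) ** 2 + (y1 - y2) ** 2 + (z1 - z2) ** 2)

print(xyz_distance((0, 0, 0), (3, 4, 12)))  # 13.0
```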
\subsection{AFS Requirements in CLEAR}
We now describe the CLEAR specification of the AFS contingencies in detail.
\paragraph{Insufficient Battery}
As mentioned earlier, the Insufficient Battery contingency has the highest priority. Per the ConOps, if the battery level is critically low (less than \texttt{T\_land}), the system should land immediately; whereas, if the battery is between \texttt{T\_rtl} (Return To Launch) and \texttt{T\_land}, the system shall return to launch. While we originally defined two states for this contingency, as we were analyzing and formally capturing the requirements, we found that the system needs the GPS to function normally in order to return to launch. Hence, to capture these scenarios, we had to define three different states for this contingency -- namely \emph{`Return to launch due to low battery', `Land immediately due to low battery and no GPS'} and \emph{`Land immediately due to Battery Critically Low'}. The conditions for entering each of these states as well as their behaviour (expected output) are different. A snippet of the formalization in the CLEAR notation is shown in Figure~\ref{fig:Insuff_Req}.
\begin{figure}[!htb]
\centering
\includegraphics[width=\columnwidth]{figures/Insuff_Bat_Req.PNG}
\caption{Low Battery Requirements}
\label{fig:Insuff_Req}
\end{figure}
\paragraph{GPS Lock Loss Requirements}
Next in priority is the GPS Lock loss (states and transitions colored pink in Figure~\ref{fig:Priority}); in other words, the GPS lock loss contingency requirements come into play only when there is no battery contingency. To capture that condition within the requirements, we defined a boolean variable called \texttt{No\_Abonormal\_Batt\_Event} that is true when the battery level is more than \texttt{T\_rtl}. The response for GPS loss requires that the system wait by hovering for a certain amount of time, to allow going back to the mission when the GPS recovers, or terminate and land otherwise. Further, the goal was also to abandon the mission if there are more than a certain number of GPS failures.
\begin{figure}[!htb]
\centering
\includegraphics[width=\columnwidth]{figures/GPS_Lock_Loss_Req.PNG}
\caption{GPS Lock Loss Requirements}
\label{fig:GPSLLReq}
\end{figure}
To capture these behaviours, we defined three states for the GPS lock loss -- namely \emph{`GPS Loss Hover', `Flight Terminated due to GPS Loss'} and \emph{`Mission Abandoned'} -- and the respective transition conditions. While we explicitly defined a counter (\texttt{GPS\_Loss\_Count}) to keep track of the number of GPS losses, the CLEAR notation construct \emph{`has been ... for ... seconds'} helps specify the notion of timers. Figure~\ref{fig:GPSLLReq} shows a small snippet of the requirements in CLEAR.
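The \emph{`has been ... for ... seconds'} construct amounts to a debounced condition on a timer. A small illustrative helper (hypothetical Python, not generated from CLEAR) shows the intended semantics:

```python
class HeldFor:
    """Tracks whether a boolean condition has been true continuously for
    at least `duration` seconds (cf. CLEAR's 'has been ... for ...
    seconds' construct). Illustrative only."""

    def __init__(self, duration):
        self.duration = duration
        self.true_since = None  # time at which the condition became true

    def update(self, condition, now):
        if not condition:
            self.true_since = None  # condition dropped: reset the timer
            return False
        if self.true_since is None:
            self.true_since = now
        return now - self.true_since >= self.duration

gps_lost_for_10s = HeldFor(10.0)
print(gps_lost_for_10s.update(True, now=0.0))   # False: just became true
print(gps_lost_for_10s.update(True, now=10.0))  # True: held for 10 s
```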
\paragraph{Geofence Breach Requirements}
Third in priority are the various breach contingency responses; in other words, when an altitude, range or polygon breach occurs and there are no battery or GPS related failures, the system shall attempt to correct the breach within a certain amount of time; otherwise, the system shall land. While the ConOps indicated that the breach conditions have equal priority, in order to avoid a non-deterministic implementation, it was decided that the altitude, polygon and range breach conditions would be given precedence in that order.
\begin{figure}[!htb]
\centering
\includegraphics[width=\columnwidth]{figures/Breach_Req.PNG}
\caption{Breach Requirements}
\label{fig:BreachReq}
\end{figure}
To capture the notion of third-in-line priority while specifying the requirements, we had to define a boolean condition whose value is true if there are no battery or GPS related failures at each instant in time. Further, to be able to specify that the system is not already in a state with higher priority, we used the CLEAR notation's construct for defining custom \emph{sets}, i.e., for defining a select sub-set of states among \texttt{AFS\_State}. For instance, we defined sets such as \texttt{GPS\_Loss\_States} and \texttt{Breach\_States}, indicating the groups of all states related to the GPS loss and breach contingencies respectively. This helped specify the requirements in a reasonably modular and terse manner, as shown in Figure~\ref{fig:CommReq}. Moreover, to capture the precedence among the breach conditions, we leveraged the \emph{`...as defined in the following precedence order...'} construct in the CLEAR notation. This construct allowed us to specify the response in a precedence order, as shown in the snippet in Figure~\ref{fig:BreachReq}. As mentioned earlier, instead of explicitly defining a timer to capture the temporal aspect of this contingency, we used the in-built CLEAR construct \emph{`has been ... for ... seconds'} to concisely capture the requirements.
\paragraph{Communication Loss Requirements}
The communication loss contingency has the lowest priority; this means that the related contingency behaviours are exhibited only when communication fails while the system is in the normal flight state. Similar to the GPS loss requirements, the system shall attempt to reestablish communication and get back to its normal course within a certain amount of time; otherwise, it shall return to launch. Moreover, when the total number of communication loss incidents (denoted by \texttt{GS\_Comm\_Distruption\_Count} in the requirements) exceeds a certain threshold, the system shall return to launch. To capture this in CLEAR, we defined 5 different states. Since this contingency has the lowest priority, we had to take into account the absence of all other failures in every requirement relating to transitions among the communication loss states. Hence, we defined a boolean variable for that purpose, called \texttt{No\_Abonormal\_Batt\_GPS\_Breach\_Events}, which, as the name suggests, indicates the absence of the other failure conditions. A snippet of the formalization in the CLEAR notation is shown in Figure~\ref{fig:CommReq}.
\begin{figure}[!htb]
\centering
\includegraphics[width=\columnwidth]{figures/GS_Commnication_Req.PNG}
\caption{Communication Requirements}
\label{fig:CommReq}
\end{figure}
These requirements specified in CLEAR help automate a number of rigorous analysis and verification tasks. In the rest of this section, we describe in detail how we generate test oracles and test scenarios from these requirements. We also translate these requirements to the Sally transition system language for checking against the architecture model and the low-level requirements. We have been developing the formal semantics for the mapping between CLEAR and the input language of the Sally model checker using the Text2Test tool infrastructure. We have also defined a precise ontology and representation format for the evidence artifacts at the requirements level within the TA2 provenance ontology. We have also developed a simple climate control example in order to explain the construction of the assurance argument. This example is used to communicate how the different pieces of the assurance argument fit together.
\section{AFS Evidence Generation from Sally Model Checking}\label{sec:SallyEvidence}
\begin{figure}[!htb]
\centering
\includegraphics[width=\linewidth]{figures/SallyOverview.png}
\caption{An overview of the Sally model creation and checking process.}
\label{fig:sally_overview}
\end{figure}
Two main artifacts are required for checking properties against the software requirements: a Sally model, which is generated by Text2Test from a set of CLEAR high-level requirements, and the properties themselves, which are first expressed in natural language and then formalized into a Sally query (Figure~\ref{fig:sally_overview}). The Sally model checker takes the model and the query as inputs, using them either to verify that the property holds or to produce a counterexample demonstrating that it does not (assuming a conclusion can be reached in a reasonable amount of time; the state space to be explored can grow exponentially with the number of steps required to reach a conclusion).
We developed a set of formal properties to check against the CLEAR requirements. This feature of Text2Test is under active development, and future versions will have more integration with the CLEAR document. For now, properties are formalized and specified as Sally queries.
As described in Chapter \ref{chap:ontology}, these properties fit into the DesCert ontology. For each check, there is a set of software requirements and a specific property that should hold for those requirements. Two tools are used: Text2Test, which generates a Sally model from the requirement set, and Sally, which is run on the resulting model and the property to verify the property's correctness.
For the Insufficient Battery requirement set, the model file is \texttt{AFS\_Insuff\_Battery\_Req\_sally.mcmt}. The property is found in the file \texttt{AFS\_Insuff\_Battery\_Property\_1.mcmt}, and states that if the battery level falls below a certain threshold, the AFS state must not be normal. The following command was used to verify this property with Sally:
\begin{lstlisting}
sally --engine kind -v 1 --show-trace \
AFS_Insuff_Battery_Req_sally.mcmt \
AFS_Insuff_Battery_Property_1.mcmt
\end{lstlisting}
The formal property for the Insufficient Battery set was as follows:
\begin{lstlisting}
(assume-input AFS_Insuff_Battery_Req_sys
(= bat_level 19)
)
(query AFS_Insuff_Battery_Req_sys
(=> (and not_initial_step true) (not(= AFS_State 0)))
)
\end{lstlisting}
To constrain an input variable, the \texttt{assume-input} function is required. A more flexible approach to constraining inputs is to use an internal variable that always mirrors the input, but that requires either tool support for generating such a variable with the model or modifying the model by hand. We must also exclude the initial step from checking, because the initial values of all internal variables are undefined in the first step; this allows the model checker to choose any values and would always cause a spurious counterexample to be generated. After the first step, the model's internal variables will have been initialized based on the first step's input variables, and proper checking can begin. The \texttt{not\_initial\_step} variable is generated by Text2Test for this purpose; it simply starts out false for one step and then remains true forever.
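The effect of the \texttt{not\_initial\_step} guard can be mimicked in plain Python. This is a toy rendering, not Sally semantics; the battery threshold and state encoding follow the example above, and everything else is illustrative.

```python
# Toy illustration of the initial-step issue: the internal state variable
# is arbitrary at step 0, so the property must be guarded. Illustrative only.
def trace(bat_inputs, initial_afs_state):
    """Yield (not_initial_step, bat_level, afs_state) at each step."""
    afs_state = initial_afs_state     # the model checker may pick any value
    not_initial_step = False
    for bat in bat_inputs:
        yield not_initial_step, bat, afs_state
        afs_state = 1 if bat < 20 else 0   # low battery => leave normal (0)
        not_initial_step = True

def guarded_property(steps):
    """not_initial_step => (bat_level == 19 implies AFS_State != 0)."""
    return all(s != 0 for guard, bat, s in steps if guard and bat == 19)

steps = list(trace([19, 19, 19], initial_afs_state=0))
```

With the guard the property holds; without it, the arbitrarily chosen initial value 0 yields a spurious counterexample at step 0.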
For the GPS Lock Loss requirement set, the generated Sally model is found in the file \texttt{AFS\_GPS\_LockLoss.mcmt}. The property being tested is in \texttt{AFS\_GPS\_LockLoss\_Property\_1.mcmt}, and checks that if the GPS loss count exceeds the maximum, the AFS state must not be normal. At this stage in tool development, it was necessary to use an internal variable of the model in the property itself to access the previous value of the GPS loss count. In the future, properties will be specified at the CLEAR level, and Text2Test will generate both the Sally model and the Sally query. This way, properties will be specified with expressions, independently of whatever internal variables are found in the requirement set. To verify this property, the following command was used:
\begin{lstlisting}
sally --engine kind -v 1 --show-trace \
AFS_GPS_Lock_Loss_Req_sally.mcmt \
AFS_GPS_LockLoss_Property_1.mcmt
\end{lstlisting}
For the GS Communication Loss requirement set, the generated Sally model is \texttt{AFS\_GS\_Communication\_Loss\_sally.mcmt} and the property being tested is in \texttt{AFS\_GS\_Communication\_Loss\_Property\_1.mcmt}. The property states that if the communication disruption count has exceeded 3, the AFS state must not be normal. This property was verified with the following command:
\begin{lstlisting}
sally --engine kind -v 1 --show-trace \
AFS_GS_Communication_Loss_sally.mcmt \
AFS_GS_Communication_Loss_Property_1.mcmt
\end{lstlisting}
The final requirement set defines nominal operation, with the Ground Control Station setting the state of the AFS. The Sally model is found in the file \texttt{AFS\_Nominal\_Requirements\_sally.mcmt}, and the property being checked is in the file \texttt{AFS\_Nominal\_Requirements\_Property\_1.mcmt}. This property states that under these requirements, the incoming GCS message should be set directly as the AFS state. The command used to verify this property was:
\begin{lstlisting}
sally --engine pdkind --solver yices2 -v 1 --show-trace \
AFS_Nominal_Requirements_sally.mcmt \
AFS_Nominal_Requirements_Property_1.mcmt
\end{lstlisting}
\section{AFS Test Generation from Text2Test}\label{sec:TestEvidence}
One of the main drivers behind the CLEAR notation is the ability to automate formal-methods-based analysis and test case generation. Text2Test is the main tool that performs requirements analysis and test generation. Given a set of requirements, Text2Test synthesizes a data-flow model for these purposes. It then uses Honeywell's internal HiLiTE tool, which provides comprehensive static analysis and data-flow test generation and has been used extensively for certification in many product lines.
\subsection{Analysis of Generic Properties of Requirements}
\label{sec:clear-generic-prop-evidence}
Text2Test utilizes public-domain SMT solvers (e.g., Z3); analysis objectives are formulated as an SMT problem, and the solver is used to provide consistency analysis and
(limited) completeness analysis. A variety of arithmetic (including non-linear), logical, and time-based constructs are supported as part of these capabilities to
allow the tools to be used for large-scale industrial problems. Requirement analysis is enabled for each output variable: in
the data-flow model synthesized by Text2Test, all the requirements specifying the same output variable are combined and chained.
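As a rough sketch of the kind of check involved, the following replaces SMT solving with brute-force enumeration over a toy requirement set; the guards, domains, and output values are invented for illustration and are far smaller than what the SMT-based analysis handles.

```python
# Toy completeness/consistency analysis by enumeration (the real tool
# formulates this as an SMT problem); requirements and domains are invented.
from itertools import product

# each "requirement" maps a guard over the inputs to an output value
requirements = [
    (lambda bat, gps: bat < 20,               "LAND"),
    (lambda bat, gps: bat >= 20 and not gps,  "RETURN_TO_LAUNCH"),
    # note: no requirement covers bat >= 20 with GPS available
]

def analyze(domain_bat, domain_gps):
    incomplete, inconsistent = [], []
    for bat, gps in product(domain_bat, domain_gps):
        outputs = [out for guard, out in requirements if guard(bat, gps)]
        if len(outputs) == 0:
            incomplete.append((bat, gps))     # unspecified input combination
        elif len(set(outputs)) > 1:
            inconsistent.append((bat, gps))   # conflicting requirements
    return incomplete, inconsistent

incomplete, inconsistent = analyze(range(0, 60, 10), [False, True])
```

Each uncovered input combination corresponds to an "unspecified combination of input values" entry in the tool's report.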
For each requirement set of the AFS system, we analyzed its generic properties, described in Section~\ref{sec:clear-generic-properties}, and shown below in Figure~\ref{fig:genPropertiesAFS}.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.6\columnwidth]{figures/AFS_Gen_Properties.PNG}
\caption{AFS Generic Properties}
\label{fig:genPropertiesAFS}
\end{figure}
The errors found by the SMT solver (Z3) are reported in an XML file. The report is a consolidation of all the generic-property issues found in a given set of requirements. While there were no consistency or mode-thrashing errors, the tool reported a lack of input and output completeness, along with the input and output domain values that make the set of requirements incomplete.
\begin{figure}[!htb]
\centering
\includegraphics[width=\columnwidth]{figures/AFSgenPropertiesAnalysisOutput.PNG}
\caption{AFS Generic Properties Analysis Output}
\label{fig:genPropertiesAnalysisOutput}
\end{figure}
The analysis is performed on groups of individual requirement statements that concern the same output attribute. The requirement-defects analysis loops over each of the output variables and lists the generic issues found for each of them; hence, the errors are displayed in a tabular format for every output attribute. Figure~\ref{fig:genPropertiesAnalysisOutput} shows the report of the insufficient-battery requirements analysis. In the figure, \emph{Unspecified combination of input values} lists one combination of input values (i.e., a counterexample) for which the requirement set under consideration does not specify a value for a response. In other words, for the combination of inputs (\texttt{GPS\_Fix = Available; bat\_level = 51.0; copter\_position = 0.1}), there is no value specified for the output \texttt{copter\_command}. This does not indicate a problem with the system: this combination of input values occurs when the system is in the normal state, so the reported completeness issue shows that the requirement set under analysis does not include the requirements that specify normal system behaviour. Along the same lines, \emph{Unspecified output values} lists all missing value assignments to the outputs \texttt{copter\_command} and \texttt{AFS\_State} (as per their definitions) specified by the set of requirements. Again, this is not an indication of a problem with the AFS, but rather the result of analysing an individual set of contingency requirements in isolation, while the definitions include values specified by other sets of requirements. In the next phases of the project, we plan to define a concrete approach that compartmentalizes the notion of a requirement set, such that requirement sets can be verified in isolation (without these issues) as well as compositionally with other requirement sets.
\subsection{Requirements-based Test Case Generation}
Using the HiLiTE tool at the back end, Text2Test automatically generates requirements-based test cases from the synthesized model of the requirement set. HiLiTE generates specific tests at the model level for each block embedded in the model, using either heuristic test case templates via backward propagation or the formal specification of equivalence classes via SMT solving. The theory behind the test case generation is explained in detail in Chapter~\ref{chap:tools}.
Once executed, Text2Test creates a number of report files and test vectors for each of the AFS's requirement sets.
\paragraph{Test Vector Report and Test Vectors}
\begin{figure}[!htb]
\centering
\includegraphics[width=\columnwidth]{figures/AFS_TestReport.PNG}
\caption{AFS Test Report - Template}
\label{fig:AFS_TestReport}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[width=\columnwidth]{figures/AFS_TestReport_Vector.PNG}
\caption{AFS Test Report - Vector}
\label{fig:AFS_TestReport_Vector}
\centering
\includegraphics[width=\columnwidth]{figures/AFS_TestVector.PNG}
\caption{AFS Test Vector}
\label{fig:AFS_TestVector}
\end{figure}
The test report contains tables of the test case templates and generated test vectors for each block. Figure~\ref{fig:AFS_TestReport} shows the template part of the test vector report that was generated for the AFS. Test case templates define the input and output requirements of each test for each block. The portion shown in Figure~\ref{fig:AFS_TestReport} contains the template specifying the input values that should produce the expected output values. The column header in yellow shows the names of the variables for the block under test.
Figure~\ref{fig:AFS_TestReport_Vector} shows the vector part of the test vector report for the AFS. The generated test vectors show the actual values input to the block under test and the value at the block's output port. The template requires that Input1 and Input2 have values of 0 and 1, but since these ports are not directly accessible, Text2Test must determine what values to give the model inputs to effect the correct values. In the generated test vector shown, notice that the block under test receives the system's inputs (the blue cells at the top of the Test Vectors table), constants (the teal cells at the top of the table), and intermediate values (the white cells). The system's input values were determined from the requirements, whereas the intermediate values are the result of intermediate computations performed before the values can be propagated to the output (the bright-green cell).
Furthermore, Text2Test also generates test vectors in a machine-readable (CSV) file containing all the actual vectors, which can be used with a suitable harness to test the system under consideration. Figure~\ref{fig:AFS_TestVector} shows the comma-separated vector file displayed in Excel.
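The backward-propagation step described above can be pictured as a search: given the template's required values at the ports of the block under test, find model-level inputs whose upstream computation realizes them. The upstream function and value domain here are hypothetical.

```python
# Naive sketch of backward value propagation for test generation;
# the upstream computation and value domain are invented.
from itertools import product

def upstream(x, y):
    """Hypothetical intermediate computation feeding the block under test."""
    return x + y, x > y

def find_model_inputs(required_in1, required_in2, domain):
    """Search the model input space for values realizing the template."""
    for x, y in product(domain, domain):
        if upstream(x, y) == (required_in1, required_in2):
            return x, y
    return None
```

HiLiTE's heuristics and SMT-based equivalence classes avoid this kind of exhaustive search, but the goal is the same: model-level inputs that drive the required values onto the internal ports.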
\paragraph{Test Generation Status Reports}
Figure~\ref{fig:AFS_StatusReport} shows the status report for the AFS execution, which summarizes the execution and any issues encountered in analyzing the requirements, and lists which test cases were generated or could not be generated for each requirement. Detailed coverage metrics are displayed in this report.
\begin{figure}[!h]
\centering
\includegraphics[width=\columnwidth]{figures/AFS_StatusReport.PNG}
\caption{AFS Status Report}
\label{fig:AFS_StatusReport}
\end{figure}
\section{Assurance-Driven Development (ADD)}\label{sec:ADD}
The products of engineering have been engines, bridges, buildings, factories, planes, and automobiles.
These products are designed based on abstractions of chemical, structural, mechanical, and
electrical laws and processes that are observable with reasonable precision and
reproducible with uncanny accuracy. Software is a different kind of beast. Each piece of software
is \emph{sui generis}\@. Software-intensive systems have to meet a stringent range of
demands spanning functionality, performance, reliability, persistence, security, and maintainability.
Software designs can be exceedingly complex. Such systems can be composed of many, deeply nested layers of abstraction. There are few
general laws that help in modeling and understanding software behavior. The operation of
software is typically only indirectly observable. Software-related failures can range from annoying issues like memory
leaks and poor search results to design flaws, security holes, and bugs. Even minor errors
like numerical overflows can have catastrophic consequences, as was illustrated in the
failure of the Ariane-5 launch. Software-intensive systems exhibit high internal
structural complexity, as well as high external complexity in meeting a broad range of requirements
and use cases. Since it is not possible for testing to span the internal and external complexity, it is
important to independently certify the behavior of the software for its intended use.
Certifying software-intensive systems is hugely expensive due to the combination of
internal and external complexity.
Performing a \emph{post facto} certification of a completed software project suffers
from confirmation bias. Any gaps uncovered late in the software lifecycle
during the certification process are likely to be too costly to fix.
The DesCert project approaches certification as an integral part of the software
lifecycle. We achieve software certification through an assurance-driven development (ADD) methodology
where
\begin{itemize}
\item The primary objective of the development is an assurance argument that can be maintained along with the software, and
\item The claims, arguments, and supporting evidence are developed and refined through the software lifecycle
\end{itemize}
\begin{figure}[!htb]
\centering
\includegraphics[scale=.55]{figures/DesCert_assurance_approach.png}
\caption{DesCert Assurance Driven Development Methodology}
\label{fig:assurance-driven-development}
\end{figure}
The ADD methodology tracks the software development artifacts (high-level requirements, architecture,
low-level requirements, source code, object code) through each stage of the lifecycle.
Tools are employed at each stage to generate evidence to show that
\begin{enumerate}
\item The refinement of the development artifacts from one stage to the next is correct.
\item The development artifacts produced in each stage are accurate and consistent,
and exhibit runtime safety and security properties.
\end{enumerate}
In Phase 1 of the ARCOS program, we have focused on the ADD infrastructure in the form of requirements
definition and analysis, architecture and build systems, trusted tools for static and dynamic code analysis,
and continuous assurance and evidence integration workflows.
\section{Designing for Efficient Assurance Arguments}\label{sec:efficient}
On 2 September 2006, an RAF Nimrod XV230 ``suffered a catastrophic mid-air fire'' while flying
in Helmand province, Afghanistan. All fourteen people aboard the plane died.
The fire happened 90 seconds following air-to-air refuelling (AAR).
The Nimrod, developed from the de Havilland Comet,
had been flying since 1969,
but the AAR capability had first been added by BAE in 1982 and upgraded in 1989, and
was certified on the basis of a \emph{safety case} developed
by BAE in consultation with QinetiQ during 2001--2004.
The cause of the fire was a fuel leak around the AAR
that was ignited by contact with an exposed (due to frayed/inadequate insulation)
element of the cross-feed (CF) duct (1969-75) and Supplementary Conditioning Pack (SCP) duct
(1979-84) that transported hot (470\degree C) air. The cross-feed duct was
placed dangerously close to a fuel tank. The accident was the subject of an investigation by
Charles Haddon-Cave~\cite{Nimrod09}\@. The report from this investigation pointed out a number of things
that were wrong with the design of the AAR system as well as the ``safety case''.
A key point noted by the report is
\begin{quote}
\emph{As a matter of good engineering
practice, it would be extremely unusual (to put it no higher) to
co-locate an exposed source of ignition with a potential source of
fuel, unless it was designated a fire zone and provided with
commensurate protection. Nevertheless, this is what occurred within
the Nimrod. }
\end{quote}
The report also observes that:
\begin{quote}
\emph{
A Safety Case itself is defined as ``a structured argument,
supported by a body of evidence, that provides a
compelling, comprehensible and valid case that a system is safe
for a given application in a given environment''.}
\emph{ The basic aims, purpose and underlying philosophy of Safety Cases
were clearly defined, but there was limited practical guidance as
to how, in fact, to go about constructing a Safety Case.
\ldots
If the Nimrod Safety Case had been properly carried out, the loss of XV230 would have been avoided.
}
\end{quote}
The Nimrod XV230 accident and a number of other accidents demonstrate that
failures can arise from a combination of many sources: poor regulation,
inept management, bad design, defective engineering, inadequate maintenance,
and improper operation. If a safety case or assurance argument covered
each of these sources of failure in their full complexity, it would fail
to be convincing. No evaluator would have the resources to draw out all of
the flaws in such a complex assurance case. With the Nimrod, the safety
case as presented drew attention away from the simple requirement that
\emph{fuel and ignition should not interact outside the combustion chamber.}
This would mean that any heat sinks would need to be physically and
thermally isolated from fuel both as part of the design and maintenance.
Any deviation from this restriction would be easily detected. More importantly,
the need to develop such an efficient argument for safety serves as a design
heuristic: \emph{avoid design features that lack efficient
arguments. }
An efficient argument requires designs with proven background theories,
large safety margins, trusted tools and
processes, secure platforms, and architectures that offer strong guarantees. The trusted processes
must include the use of powerful modeling and analysis tools for analyzing requirements.
For example, in the DesCert project, we use the CLEAR modeling language to capture requirements
in a precise and cogent notation. The requirements are analyzed through model checking and
testing. Ontic types are used to connect a data representation with the quantity it represents.
Ontic type analysis can rule out a number of potential design flaws such as sending unencrypted
data over a public channel, allowing sensitive operations to operate on tainted inputs,
giving unauthenticated users access to sensitive data, or performing calculations with
raw, unfiltered sensor input. The software architecture must also support strong claims for
isolating functionality so that they can
only interact through the ``official channels'' supported by the architecture. The architecture
can thus guarantee correct timing and functional behavior while protecting against Denial of Service (DoS)
and side-channel attacks. Similarly, a certified build system and a secure platform can guarantee
the provenance and fidelity of the software that is executed. In DesCert, we rely on Radler to offer
guarantees on the architecture and the build process. We also employ static and dynamic analysis tools to
deliver strong guarantees for the safe execution of C, C++, and Java code used in the implementation.
We also gain efficiency by employing a safety monitor to observe the execution of the system and ensure that
the safety policies are not being violated.
We can illustrate the contrast between an efficient and inefficient argument with a few simple examples.
The use of a separation kernel to guarantee isolation between processes yields an efficient argument
in contrast to a fine-grained argument for separation based on analyzing the memory accesses of the
individual processes. The former argument rests on the correctness of the separation kernel, which might
be a substantive assurance exercise, but one with a much larger falsification space. Claims about the separation kernel can be evaluated once and reused multiple times even within the same assurance argument, whereas any analysis of the memory accesses
would have to be repeated for each instance. Similarly, one could use a programming language with a strong type
system such that a type-safe program cannot crash. This argument depends on assurance regarding
the soundness of the typechecker, where the cost of this assurance case can be amortized over multiple uses
of the programming language. The claim of soundness for the typechecker is also easily falsified if there is any bug,
including bugs that are irrelevant to the programs in the current project. The key design strategies for
efficient arguments are
\begin{itemize}
\item Precise claims
\item Validatable models and assumptions
\item Amortized cost through trusted and reusable design tools/artifacts
\item Architectural separation of concerns
\item Rigorous chain of reasoning and evidence
\end{itemize}
Eventually, when we have enough data on certification costs, we can quantify
the efficiency of the argument by comparing the amortized cost of an efficiently argued claim
against the cost of an inefficient, one-time argument.
In summary, DesCert evidence generation supports strong, reusable claims through the use of the CLEAR requirements
notation, Sally model checking, Radler architecture definition, Ontic type analysis, and powerful static and
dynamic code analysis.
\section{Motivation: The Eight Variables Model}\label{sec:8var}
The ADD methodology shown in Figure~\ref{fig:assurance-driven-development} illustrates the above aspects of assurance with corresponding development and verification activities. The development artifacts in successive lifecycle stages are shown in the center column of this figure, with verification activities denoted by green arcs on the left and right sides. The arcs going from a development artifact to a higher level one show compliance --- i.e., the correctness of refinement (Goal 1). The self arcs analyze a particular development artifact to show accuracy, consistency, runtime safety, and security (Goal 2).
Tools are used to automate the verification activities at each stage of the lifecycle, thus enabling incremental, continuous assurance throughout the lifecycle. The use of automated tools also enables a systematic process of iterative refinement and defect removal in early lifecycle stages. For example, requirements consistency and completeness defects, reported by the Text2Test tool, can be iteratively removed before proceeding to the next stage.
In this respect, the DesCert approach is similar to recent agile and test-driven-development methods as opposed to the traditional waterfall method. Chapter~\ref{chap:baseline} describes the continuous assurance flow automation in DesCert.
The ADD methodology employs both review and testing based methods (shown on the left side of Figure~\ref{fig:assurance-driven-development}) and formal methods (shown on the right), both supported by tools. The two types of methods can support complementary assurance objectives or can be used for the same objectives to lower the testing burden/cost and to increase confidence. The concept of \emph{properties}, essential to this approach, is described in Section~\ref{sec:property-desc}. The tools and their usage to generate evidence are described in Chapter~\ref{chap:tools}.
Our specific instantiation of the assurance-driven development (ADD)
follows our Eight Variables Model (8VM) of
cyber-physical systems. The model as shown in Figure~\ref{fig:8var} categorizes the
classes of components, agents, or actors in the design and the variables that capture the observable behavior
of these components and their interactions.\footnote{The variables here are just labels for the
observable behaviors of the actors and their interactions, and should not be confused with
program variables. }
In a typical cyber-physical system, there is a physical
plant, such as a vehicle or a building. The Pose of the plant is a class of variables that
includes the position, orientation, temperature, etc. of the plant. The plant is also interacting
with an external physical World covering the terrain, wind, friction, and other factors.
We can measure some of the physical variables, namely, the Monitored ones,
in the Plant and Environment through Sensors. The Sensor observes these physical values of the
Monitored variables and writes these observations as digital values of the class of Input variables.
The Input variables
are processed by the Controller, i.e., the software component, along with any Operator Commands,
to produce the Output to the actuator and updates to the operator Display. The Output drives
the Actuator to produce the Controlled input to the Plant. Note that the Environment,
Pose, Command, and Display are the externally observable variables, and the Controlled, Monitored, Input,
and Output variables are internal. The Environment, Pose, Controlled, and Monitored variables
are physical variables. The Input, Command, Display, and Output variables are digital variables.
Also, the Controlled, Monitored, Input, and Output variables are Parnas's original Four Variables.
\begin{figure}
\centering
\includegraphics[width=.6\linewidth]{figures/8-variable-ardu.png}
\caption{The Eight Variables Model}
\label{fig:8var}
\end{figure}
We typically work with models of the
World, Plant, Sensor, and Actuator, and even the Operator. The top-level
claim for the whole System has the form below
\begin{alltt}
WorldModel(Environment) AND
PlantModel(Environment, Control, Pose, Monitored) AND
SensorAccuracy(Monitored, Input) AND
ActuatorResponse(Output, Control) AND
ControllerOutput(Input, Command, Output, Display) AND
OperatorModel(Display, Command)
IMPLIES
Requirement(Command, Environment, Pose, Display)
\end{alltt}
In DesCert Phase 1, we are primarily focused on the Software Requirements which are
captured by the ControllerOutput predicate. The physical variables
can be continuous or switched continuous. In the latter case, the
variable is characterized by a series of epochs defined by the
switching times $\mathit{switch}(i)$. For each time $t$, there is a
function $\mathit{epoch}(t)$ that indicates the epoch to which the
time $t$ belongs. For example, the torque delivered by the engine might be a
switched variable that is switched by means of gear shifts.
The digital variables are switched but, unlike physical variables, the values
are latched within each epoch. For example, the variable \texttt{input}
is a sampled sequence of values of the physical variable \texttt{monitor}.
The \texttt{SensorAccuracy} predicate constrains the discrepancy between the
value of the physical variable and its sampled counterpart.
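A minimal sketch of the epoch function, assuming a finite sorted list of switching times (the times themselves are illustrative):

```python
# Sketch of epoch(t) for a switched variable: switch(i) is the start time
# of epoch i; epoch(t) is the index of the epoch containing time t.
from bisect import bisect_right

switch_times = [0.0, 1.5, 4.0, 9.0]   # illustrative switch(0..3)

def epoch(t):
    """Index i such that switch(i) <= t < switch(i+1) (last epoch is open-ended)."""
    assert t >= switch_times[0], "t precedes the first epoch"
    return bisect_right(switch_times, t) - 1
```

A latched digital variable such as \texttt{input} then holds a single sampled value throughout each epoch, rather than varying continuously within it.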
The software requirements in the \texttt{ControllerOutput} specification are
implemented by the Architecture and the Component Contracts (the Low Level Requirements or LLRs). The Architecture consists
of the Logical Architecture and the Physical Architecture. The Logical Architecture
specifies the nodes (with their periods, step functions, their list of published and subscribed topics)
and the topics. The Physical Architecture maps nodes to virtual machines on actual physical
hardware platforms, and topic channels to specific communication mechanisms.
The assurance argument is factored so that the ControllerOutput specification is entailed by the
Component Contracts and the Logical Architecture. The Physical Architecture can be shown to
imply the assumptions made in the Logical Architecture regarding scheduling jitter, worst-case execution time (WCET) and communication latencies and throughput. The semantic structure of the assurance argument is
described in Figure~\ref{fig:argument}\@. In supporting this argument, we need to ensure
that we also have empirical or analytical evidence to support the argument nodes (the circles) as well.
\begin{figure}
\centering
\includegraphics[width=.6\linewidth]{figures/Argument.png}
\caption{Assurance Argument Structure}
\label{fig:argument}
\end{figure}
The validity of the argument rests on the strengths of the inference nodes. These inference nodes
can be \emph{inductive}, as in supporting a claim with test evidence,
\emph{deductive}, as in deductively decomposing a claim into subclaims,
or \emph{probabilistic}, as when the probability of failure is computed.
The inference nodes represent argumentation patterns that are used repeatedly and
are hence compatible with the efficient-argument goal.
\section{The Radler Architecture Framework}\label{sec:Radler}
The Radler framework employs a distributed quasi-periodic model of
computation where individual computation nodes interact within a
publish/subscribe architecture~\cite{conf/memocode/LarrieuS14,conf/memocode/LiGS15}.
The logical architecture consists of
nodes and topics. Each node publishes on a collection of topics,
where each topic has at most one publisher node, and subscribes to
another collection of topics. The nodes execute periodically with
minimum and maximum bounds on the period between two successive
executions. In each execution, a node reads its subscription mailbox
to extract the inputs to which it applies a step function. The
outputs of the step function are then published to the corresponding
subscriber mailboxes. A number of useful properties can be derived
directly from the Radler logical architecture. The physical architecture
maps the nodes to virtual machines which are themselves mapped to
physical machines, and the mailbox semantics is implemented using
physical communication channels and buffers. Within the Radler
architecture, behavioral properties of the state machines can be
established using the step function precondition/postcondition
contracts and the logical architecture. The step function code can be
independently analyzed for compliance with the contract, the
worst-case execution time bounds, and for generic properties such as
the absence of runtime errors or ontic type violations. Traceability
information can be maintained tracking requirements to state machines
which are decomposed into Radler nodes with step functions implemented
by code. Radler also provides a certified build system so that the
executables are created with the glue code needed for execution and
interaction.
A Radler software architecture is specified in the Radler Architecture Definition Language (RADL)
as a publish/subscribe system with periodically executing nodes publishing on bounded latency channels.
An example \texttt{.radl} file with the architecture definition is shown in Section~\ref{sec:RADL}\@.
A RADL architecture definition consists of a \emph{logical architecture} and a \emph{physical architecture}\@.
The logical architecture specifies \emph{nodes} and \emph{topics}. Each topic has a unique publisher node
and a message type. Each node specifies a set of topics to which it subscribes along with the buffer sizes,
and a set of topics on which
it publishes. In addition, the node description captures the minimum/maximum period, the expected latency
on each subscribed channel, and the step function that maps the subscription message buffers to published
messages. The channel latency is measured from when the publisher starts executing its step function
so it includes the worst-case execution time.
A node might also be attached to, and exchange data with, zero or more devices. The physical architecture
maps nodes to processors and virtual machines and maps each topic channel between a publisher/subscriber pair
for a topic to a physical channel on a communication bus connecting the two endpoints.
From the logical architecture, we can derive a number of useful theorems that have been formally verified using PVS.
These theorems can be used to derive \texttt{ArchitectureProperties} that capture the end-to-end latencies and other timing properties
in the architecture. By factoring out these properties, we can modify the logical architecture while preserving
the architecture properties so as to maintain the structure and validity of the assurance argument.
\begin{alltt}
LogicalArchitecture(Nodes, Topics)
IMPLIES
ArchitectureProperties(Input, Command, Output, Display, Nodes, Topics)
\end{alltt}
Let $\mathit{min}(n)$ and $\mathit{max}(n)$ be the minimum and maximum periods of node $n$, and
let $D_{mn}$ be the message latency between publisher $m$ and subscriber $n$. We then have the following claims:
\begin{enumerate}
\item If $\mathit{min}(m) > D_{mn}$, then $n$ receives messages from $m$ in the same order in which they were sent. (No Overtaking)
\item Subscriber $n$ can conservatively detect the failure of $m$ by observing when $k$ consecutive periods have transpired
with no new messages from $m$, for $k \cdot \mathit{min}(n) > D_{mn} + \mathit{max}(m)$. (Failure Detection)
\item Messages can be lost because the buffer is over-written by newer messages from the publisher.
Under the assumption of No Overtaking and a buffer size of $L$ messages, no more than
$M-L$ consecutive messages can be lost for the smallest $M$ such that $M \cdot \mathit{min}(m) > D_{mn} + \mathit{max}(n)$.
This is a crucial property that ensures that in each step, the subscriber $n$ sees at least one of every $M - L + 1$
consecutive messages sent by $m$, and that $n$'s buffer contains the last $L$ messages received. (Bounded Message Loss)
\item The age of a message from $m$ to $n$
is the time elapsed between when it is published by $m$ and the time it
is processed by the subscriber $n$. The maximum age of a message is bounded by $D_{mn} + \mathit{max}(m)$\@. (Bounded Age)
\end{enumerate}
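These bounds can be computed directly from the period and latency parameters. The following Python sketch (with hypothetical figures, not taken from the case study) derives the failure-detection count $k$, the bounded message loss $M - L$, and the bounded age:

```python
import math

def no_overtaking(min_m, D_mn):
    """Messages arrive in order if the publisher's minimum period
    exceeds the channel latency."""
    return min_m > D_mn

def failure_detection_k(min_n, max_m, D_mn):
    """Smallest k such that k * min(n) > D_mn + max(m): after k silent
    subscriber periods, the publisher m can be declared failed."""
    return math.floor((D_mn + max_m) / min_n) + 1

def bounded_message_loss(min_m, max_n, D_mn, L):
    """Smallest M with M * min(m) > D_mn + max(n); at most M - L
    consecutive messages can be lost with a buffer of size L."""
    M = math.floor((D_mn + max_n) / min_m) + 1
    return max(M - L, 0)

def bounded_age(max_m, D_mn):
    """Maximum age of a message when processed by the subscriber."""
    return D_mn + max_m

# Hypothetical figures: publisher period in [0.1, 0.12] s,
# subscriber period in [0.5, 0.55] s, channel latency 0.1 s, buffer of 3.
print(failure_detection_k(min_n=0.5, max_m=0.12, D_mn=0.1))        # -> 1
print(bounded_message_loss(min_m=0.1, max_n=0.55, D_mn=0.1, L=3))  # -> 4
print(round(bounded_age(max_m=0.12, D_mn=0.1), 3))                 # -> 0.22
```

Note that a fast publisher feeding a slow subscriber (as in the example) loses messages unless the condition in each message is repeated often enough, which is exactly the point made below.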
The upshot of these architecture properties is that if a publisher is signalling a condition, then it has to be aware
that due to Bounded Message Loss, the condition has to be signalled in at least $M - L + 1$ consecutive messages
in case $M - L$ of these messages are over-written in the buffer. Conversely, a subscriber's published values are
based on inputs with published age constrained by the Bounded Age assumption. In the case of the room temperature controller,
the latter assumption bounds the delay between sensing the temperature and the actuation of the heater. When combined
with bounds on the leakage rate and the heating rate, and the component contracts, the architecture properties
can be used to demonstrate that the room temperature eventually stabilizes to a range between \emph{Min} and \emph{Max},
and remains stably in this interval. In general, temporal properties like this can be derived from the combination
of the step function contracts and the architecture properties.
The \texttt{ArchitectureProperties} follow from the logical
architecture description, which in turn is satisfied by the mapping
to the physical architecture and the properties we assume of the
physical platform.
\begin{alltt}
PlatformProperties(VirtualMachines, TransportMedium) AND
PhysicalArchitectureMapping(Nodes, VirtualMachines,
Topics, TransportMedium)
IMPLIES LogicalArchitecture(Nodes, Topics)
\end{alltt}
The Radler architecture framework also includes a software build system that takes as input the logical and
physical architecture definition and the associated source files for the step functions and creates a collection
of executable binary images that can be launched on the physical platform. The software running on the
platform implements the architecture in terms of the nodes executing periodically and communicating on the
topic channels. The Radler build system also adds monitors to check that the specified latency bounds are
not breached and adds a flag to the message to indicate if the contents are based on stale inputs (which can
occur even with normal behavior) or missing inputs (which is abnormal). We extended Radler to integrate
nodes defined using Java code running on a Java Virtual Machine (JVM). We specifically added a Java-defined
node executing the BeepBeep3 safety monitoring framework. Unlike the AFS node, which executes recovery
actions, the safety monitor is a passive component that checks that any failure event or combination of
events triggers the appropriate recovery action.
\section{A Motivating Example: Room Temperature Regulation}\label{sec:thermostat}
We can illustrate the argument template using a simple example of a thermostat-based room temperature
controller which captures the structure of the argument and the forms of evidence. The temperature
controller can be mapped on to the eight-variables model as shown in Figure~\ref{fig:8var-thermo}.
The thermostat turns the heater on or off, and the thermometer senses the room temperature at a specific
location. The operator can switch the thermostat on or off and set the desired temperature.
\begin{figure}[htb]
\centering
\includegraphics[width=.8\linewidth]{figures/8-variable-roomheater.png}
\caption{Eight-Variable Model for Thermostat Room Temperature Controller}
\label{fig:8var-thermo}
\end{figure}
If we consider a simple system like a thermostat, the main requirement is that it
maintains the room temperature around the set temperature, between
\texttt{Low} and \texttt{High}, by switching the heater on when the sensed temperature falls below
$\mathtt{Low} + \Delta$, and off when the sensed temperature exceeds $\mathtt{High} - \Delta$\@.
There might be additional requirements, for example, that there is some hysteresis built into the
switch for the heater so that it is not damaged by being switched on and off too frequently.
We assume (\texttt{SensorAccuracy}) that there is an error of $\epsilon$ in sensing the temperature, and
the room temperature
can rise (\texttt{ActuatorResponse}) or fall (\texttt{PlantModel}) at no more than a rate of $\rho$
degrees per second. We also assume that when the heater is on, the room temperature rises
at a rate of at least $\rho^-$ degrees per second, for $0 < \rho^- < \rho$\@.
The thermostat example is not as trivial as it might seem. Assumptions about the sensor,
actuator, plant, and environment are needed to achieve the desired behavior. Additionally,
the architectural model contributes timing latencies that need to be factored into the argument.
The desired property is that the
room temperature is maintained between $\mathtt{Low}$ and $\mathtt{High}$ when the thermostat is on.
However, this might not hold at the initial point when the thermostat is switched on.
If the initial temperature is already above $\mathtt{High}$, then there is no way to force the
temperature to within the acceptable range since the system only heats and does not cool.
When the initial temperature $\theta_0$ is below $\mathtt{Low}$,
it will take some time, at least $\frac{\mathtt{Low} - \theta_0}{\rho^-}$, from when the heater is switched on
before the temperature converges to within this bound. Since there is a latency of at most $\tau$ in sensing the
temperature and switching on the heater, the temperature could drop to $\theta_0 - \rho\tau$ within this time, so
we need to allow at least $\frac{\mathtt{Low} - \theta_0 + \rho\tau }{ \rho^-} $ seconds following the switching on of
the thermostat for convergence to have occurred. Once the thermostat has been switched on and enough
time has elapsed for the room temperature to converge to the acceptable range, it can be shown that
it remains within this range. This is because, when the temperature is below $\mathtt{Low} + \Delta$, then
either the heater is already switched on, and the temperature is rising, or it is off. The latter condition
can only arise when the temperature was above $\mathtt{Low} + \Delta - \epsilon$ at least $\tau$ seconds ago.
As long as $\tau < \frac{\Delta - \epsilon}{\rho}$, we can ensure that the temperature does not
drop below $\mathtt{Low}$ before the heater is switched on. Symmetrically, we see that as long as
$\tau < \frac{\Delta - \epsilon}{\rho}$, the temperature does not exceed $\mathtt{High}$\@.
If we ensure that $\mathtt{High} - \Delta$ exceeds $\mathtt{Low} + \Delta$ by at least a positive quantity
$\gamma > 2\epsilon$, then we can fulfil the hysteresis requirement that the heater not be switched on or off
too frequently. This is because there will be a gap of at least $\frac{\gamma - 2\epsilon}{\rho}$ between
the heater being switched on and off, or vice-versa.
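The derivation above can be checked with a small discrete-time simulation. The sketch below uses hypothetical figures (not from the report) in which the sensing/actuation latency is small compared with the hysteresis margin, and asserts that, after convergence, the temperature stays within $[\mathtt{Low}, \mathtt{High}]$:

```python
import random

# Hypothetical figures: Low = 18, High = 24, Delta = 2, sensor error
# eps = 0.5, end-to-end latency tau = 0.7 s, max rate of change
# rho = 0.5 deg/s, min heating rate rho_minus = 0.2 deg/s.
LOW, HIGH, DELTA, EPS, TAU = 18.0, 24.0, 2.0, 0.5, 0.7
RHO, RHO_MINUS = 0.5, 0.2
DT = 0.1                      # simulation step, seconds

random.seed(0)                # deterministic run
theta = 15.0                  # initial room temperature, below Low
heater_on = False
pending = []                  # (due_time, command): sensing/actuation latency
converged_at = None
history = []
t = 0.0
while t < 120.0:
    sensed = theta + random.uniform(-EPS, EPS)   # noisy reading
    if sensed < LOW + DELTA:
        pending.append((t + TAU, True))          # command heater on, delayed
    elif sensed > HIGH - DELTA:
        pending.append((t + TAU, False))         # command heater off, delayed
    while pending and pending[0][0] <= t:        # apply commands now due
        heater_on = pending.pop(0)[1]
    rate = random.uniform(RHO_MINUS, RHO) if heater_on else -random.uniform(0.0, RHO)
    theta += rate * DT
    if converged_at is None and theta >= LOW:
        converged_at = t
    elif converged_at is not None:
        history.append(theta)
    t += DT

assert history and all(LOW <= x <= HIGH for x in history)
print(f"converged at t = {converged_at:.1f} s")
```

The simulation exhibits exactly the two phases in the argument: an initial convergence interval during which the bound does not hold, followed by stable regulation within the interval.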
\begin{figure}[htb]
\centering
\includegraphics[width=.8\linewidth]{figures/Thermostat.png}
\caption{Thermostat Behavior in Radler}
\label{fig:thermostat}
\end{figure}
The correct behavior of the temperature controller depends on some crucial architecture properties of the Radler model.
The interaction between the physical room temperature, the digital sampled sensed temperature, and the
activation of the heater is shown in Figure~\ref{fig:thermostat}\@.
The thermometer and the thermostat controller can be viewed as independent nodes in the architecture.
The thermometer samples the room temperature at a rate of, say, 10 Hz. The thermostat switches the
heater on or off depending on whether the detected temperature falls below $\mathtt{Low} + \Delta$ or
exceeds $\mathtt{High} - \Delta$\@.
A temperature reading of $\hat{\theta}$ might
represent an actual temperature $\theta$ in the range $\hat{\theta} \pm \epsilon$ and drift by at most $\rho/10$ between readings.
When the room
temperature falls below $\mathtt{Low} + \Delta$, it will be sampled within a tenth of a second.
If we assume that the message latency between the temperature sensor and the heater controller is
at most a tenth of a second, and that the thermostat is operating at 2 Hz, then
the end-to-end delay between the temperature falling below $\mathtt{Low} + \Delta$ and the
heater being switched on could be as large as $.5 + .1 + .1 = .7$ seconds.
In order for
the thermostat to regulate the room temperature within the $[\mathtt{Low}, \mathtt{High}]$ interval,
we have to assume that the rate of change of temperature, both during heating and cooling, is bounded.
We need to ensure that the delays introduced by the sampling rates and message communication are
sufficiently small that the temperature remains within the safe interval between the time the
temperature thresholds $\mathtt{Low} + \Delta$ and $\mathtt{High} - \Delta$ are detected and the heater
is turned on or off.
In the above argument, we are relying on the Radler architecture properties as well as component contracts regarding
the step functions associated with the individual nodes. These component contracts are just precondition/post-condition
pairs associated with these step functions relative to the mailbox inputs on their subscribed channels
and the outputs on their published channels. For example, the thermostat contract is that it must direct
the heater to be switched on (respectively, off) when the sensed temperature falls below $\mathtt{Low} + \Delta$
(respectively, exceeds $\mathtt{High} - \Delta$).
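A contract of this shape can be sketched as an executable check. The constants and function names below are hypothetical illustrations, not the actual step function:

```python
# Hypothetical figures: Low = 18, High = 24, Delta = 2.
LOW, HIGH, DELTA = 18.0, 24.0, 2.0

def thermostat_step(sensed_temperature, heater_on):
    """Step function for the thermostat node: maps the latest sensed
    temperature and the current heater state to a heater command."""
    if sensed_temperature < LOW + DELTA:
        return True               # must command the heater on
    if sensed_temperature > HIGH - DELTA:
        return False              # must command the heater off
    return heater_on              # inside the band: hold the current state

def satisfies_contract(sensed, before, after):
    """Postcondition from the thermostat contract stated in the text."""
    if sensed < LOW + DELTA:
        return after is True
    if sensed > HIGH - DELTA:
        return after is False
    return after == before

# Exercise the step function over sampled inputs.
for sensed in [17.0, 19.9, 21.0, 22.1, 25.0]:
    for before in (True, False):
        assert satisfies_contract(sensed, before, thermostat_step(sensed, before))
print("contract holds on sampled inputs")
```

In the actual workflow such contracts are stated as precondition/postcondition pairs over mailbox inputs and published outputs, and discharged by static or dynamic analysis rather than by sampling.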
The argument supporting the \texttt{ControllerOutput} claim can be stated as
\begin{alltt}
ArchitectureProperties(Input, Command, Output,
Display, Nodes, Topics) AND
ComponentContracts(Input, Command, Output, Display, Nodes)
IMPLIES ControllerOutput(Input, Command, Output, Display)
\end{alltt}
As we saw in the case of the thermostat, assumptions SensorAccuracy, ActuatorResponse, WorldModel, PlantModel,
and OperatorModel might all factor into the design of the Controller since certain couplings between
variables might hold only because of these \emph{external} constraints. As an example, an interlock in the
operator console might mean that certain commands are impossible in specific modes.
The key takeaways are that
\begin{enumerate}
\item The Eight Variables model decomposes the argument for a system into precisely stated modeling assumptions
about the environment, physical plant, sensors, actuators, and operator under which the software requirements
must be met.
\item Writing software requirements even for simple systems such as a temperature controller can be quite subtle.
\item The model of computation used in the architecture allows software components to be
independently developed with their individual component contracts.
\item The argument that the software requirements have been implemented follows from the architecture
properties, the logical architecture, and the component contracts.
\item The assurance case must also demonstrate that the physical architecture satisfies the assumptions
regarding scheduling, worst-case execution time, and channel latencies specified in the logical architecture.
\end{enumerate}
The ArduCopter system is more complicated than
a thermostat, but the structure of the decomposition into claims
and subclaims remains the same, and much of the underlying reasoning follows the same pattern.
\section{Aligning the Assurance Argument Structure with DO-178C Guidance}
The assurance driven workflow and argumentation structure shown in Figure~\ref{fig:assurance-driven-development} aligns with the DO-178C scaffold as shown in Figure~\ref{fig:do178c}\@.
The RTCA DO-178C guidelines specify certification objectives based on five design assurance
levels (DAL) that correlate with the impact of anomalous behavior: Catastrophic (Level A), Hazardous (Level B),
Major (Level C), Minor (Level D), No Effect (Level E). Figure~\ref{fig:do178c} (taken from \url{https://en.wikipedia.org/wiki/DO-178C}) shows the objectives and activity for each DAL, with required traceability between artifacts.
\begin{figure}[htb]
\centering
\includegraphics[width=.6\linewidth]{figures/DO-178C_Traceability}
\caption{The DO-178C objectives and argument structure}
\label{fig:do178c}
\end{figure}
The assurance argument in our case is for the AFS component, and it consists of claims for
\begin{enumerate}
\item Tool validity: the validity of the claims and counterexamples generated by the individual tools, as supported by tool qualification evidence.
\item Validity of high-level requirements: Consistency, completeness, verifiability, and compliance with system-level requirements, including traceability.
\item Validity of mapping from high-level requirements to low-level requirements (LLR), including traceability: Radler architecture + Sally models.
\item Validity of source code: absence of runtime errors, and compliance with LLR, including traceability.
\item Validity of object code: generation of tests from high- and low-level requirements and test execution on object code, including traceability.
\end{enumerate}
The claims and artifacts are captured in Figure~\ref{fig:claimsArtifacts} (claims are in green colored boxes on the right, other boxes denote evidence artifacts).
The evidence generated to support the above claims include
\begin{enumerate}
\item Tool Qualification evidence for CLEAR, Text2Test, Clear2Sally, Sally, Radler, Seahorn, Randoop, Daikon, Toradocu, and the Checker Framework.
\item High-level requirements in CLEAR partitioned module-wise into Requirement sets.
\item Test oracles
\item Test suites
\item Radler architecture properties supported by test traces
\item Sally model-checking claims and counterexamples.
\item Code analysis: static and dynamic analysis evidence.
\end{enumerate}
\begin{figure}[htb]
\centering
\includegraphics[width = .8\linewidth]{figures/ClaimsArtifacts.png}
\caption{DesCert Assurance Claims and Artifacts}
\label{fig:claimsArtifacts}
\end{figure}
\section{Assurance Ontology}\label{sec:Ontology}
The assurance artifacts created and maintained in the DesCert project are ingested into RACK.
These artifacts are either data (papers, requirements, test cases, analysis results, architecture definitions, proofs,
and code) or metadata (requirement labels, traceability, tool configuration, file handles).
The data is provided in the form of files in a separate directory. The metadata is ingested into RACK.
The DesCert ontology is defined in the SADL language.
In representing the evidence data in the TA2 Rack-in-a-Box framework, we employ the Provenance
ontology schema (see Figure~\ref{fig:prov_core}) consisting of entities \textbf{Agents}, \textbf{Activities}, and \textbf{Artifacts},
and relations:
\begin{enumerate}
\item ActedOnBehalfOf(Agent, Agent)
\item WasAssociatedWith(Activity, Agent)
\item WasAttributedTo(Entity, Agent)
\item WasDerivedFrom(Entity, Entity)
\item Used(Activity, Entity)
\item WasGeneratedBy(Entity, Activity)
\item WasInformedBy(Activity, Activity)
\end{enumerate}
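As an illustration (not the RACK implementation), these relations can be represented as typed triples over which queries such as transitive derivation are straightforward; the entity names below are hypothetical:

```python
from collections import defaultdict

# Provenance records as (relation, subject, object) triples.
triples = [
    ("WasGeneratedBy", "TestSuite", "TestDevelopment"),
    ("Used", "TestDevelopment", "RequirementSet"),
    ("WasDerivedFrom", "RequirementSet", "HazardList"),
    ("WasDerivedFrom", "TestSuite", "RequirementSet"),
    ("WasAssociatedWith", "TestDevelopment", "Text2Test"),
]

# Index the derivation relation for traversal.
derived = defaultdict(set)
for rel, subj, obj in triples:
    if rel == "WasDerivedFrom":
        derived[subj].add(obj)

def ancestry(entity):
    """All entities an entity was (transitively) derived from."""
    seen, stack = set(), [entity]
    while stack:
        for parent in derived[stack.pop()]:
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

print(sorted(ancestry("TestSuite")))   # -> ['HazardList', 'RequirementSet']
```

Queries of this kind are what make the ingested metadata useful for assurance: a change to an upstream artifact identifies, by derivation, every downstream artifact whose evidence must be regenerated.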
In the TA2 ontology, the activities are System development, Requirements development, Hazard Identification,
Code development, Test development, Test execution. For DesCert, we add the Software Architecture and
Low-Level Requirements activities. The evidence schemas we employ consist of
\begin{enumerate}
\item High-Level Requirements and Test Development using CLEAR and Text2Test to develop Requirements sets mitigating hazards.
\item Property Checks using Sally Tool covering both specific and generic properties associated with requirement sets
\item High-Level Requirement Analysis by Text2Test Tool generating Requirements Analysis results
\item Software Architecture and Code Contract (Low-Level Requirements) Development using the Radler Architecture Definition Language (RADL) and build system as well as specific software libraries
\item Property Analysis of Source Code by SeaHorn connecting code to Low-Level Requirements on code components
\item Property Analysis of Source Code by Randoop and Daikon connecting code to Low-Level Requirements on code components
\item Property (Type) Analysis of Source Code by Checker Framework connecting code to Low-Level Requirements on code components
\end{enumerate}
We have also extended the ontology to connect properties with the corresponding DO-178C objectives.
The DesCert ontology is described in detail in Chapter~\ref{chap:ontology}.
\section{ArduCopter Challenge Problem}\label{sec:ardu}
The DesCert approach to assurance-driven development of new software is prototyped using the ArduPilot platform.
The ArduPilot is an open source platform for controllers for a range of vehicles including rovers, fixed-wing aircraft,
and rotorcraft. In DesCert, we employ the ArduCopter instantiation of the ArduPilot. The platform architecture
for the ArduPilot is shown in Figure~\ref{fig:ArduPilot}\@. The architecture has a Hardware Abstraction Layer (HAL)
that supports a number of hardware/OS platforms, a shared library for control-related computations, and vehicle-specific
code which in our case is the ArduCopter rotorcraft. The platform supports a number of simulation engines, and in our
project we use the Software-In-The-Loop (SITL) simulator.
\begin{figure}[htb]
\centering
\includegraphics[width = .8\linewidth]{figures/ArduPilot_HighLevelArchitecture.png}
\caption{ArduPilot Platform Architecture (From \url{https://ardupilot.org/dev/docs/learning-ardupilot-introduction.html})}
\label{fig:ArduPilot}
\end{figure}
Though the ArduCopter has a basic Advanced Fail Safe (AFS) functionality for recovering from glitches, it is embedded
into the main control loop. We decided to define an independent AFS functionality that uses data from the primary
control software. Our AFS system is located on the companion computer which is connected to the primary computer
through MAVLink. We deployed our Radler architecture running on the Robot Operating System (ROS)
by introducing a gateway Radler node representing the interface with the primary ArduPilot platform.
The communication between the gateway node and the primary computer employs the MAVROS transport channel.
The assurance case study for the Advanced FailSafe (AFS) component of the
ArduCopter focused on a concept of operations (ConOps) where the
ArduCopter autonomously executes a mission plan by flying through a sequence
of way points at specified altitudes.
The AFS monitor detects events such as range violations, geofence breaches, GPS loss, communication loss, and battery depletion to trigger appropriate recovery actions.
When a potential failure event is detected, the AFS monitor executes
a recovery action to keep the vehicle safe either by returning to the launch site, hovering in place, or
landing. The Radler architecture for the challenge problem also integrated BeepBeep3,
a Java application for safety monitoring.
The details of the AFS Challenge Problem are spelled out in Chapter~\ref{chap:challenge}.
\section{Assurance Tools}\label{sec:tools}
As already noted, a design workflow supporting efficient arguments requires trusted tools
with semantically coherent interfaces that can be composed for evidence generation.
The DesCert workflow employs CLEAR as a notation for capturing high-level behavioral
software requirements (HLRs). CLEAR requirements capture temporal properties specifying the
reactive behavior of state machines such as the AFS module. They also specify certain
safety and timing properties that must be satisfied by the state machines.
From the CLEAR requirements, we use the Text2Test tool to generate test inputs and
test oracles for the state machines in the form of controllable inputs and observable outputs.
Test generation is based on a testing theory for exploring and monitoring
the implementation of each operator used in the requirements definition.
The Text2Test tool also generates transition system models from the CLEAR requirements
in the Sally language. These transition system models can be individually or jointly
analyzed for temporal properties, specifically invariants, using the Sally model checker.
The software HLRs are refined to a design given by the Radler architecture which
implements each state machine component as a Radler node. We have used
SRI's Prototype Verification System (PVS)~\cite{Owre95:prolegomena}, an interactive proof assistant,
to verify certain key architectural properties of the Radler architecture.
As we showed in Section~\ref{sec:thermostat}, these architecture properties can be used to
refine the HLRs in terms of precondition/post-condition contracts on the step functions
employed by the nodes. These contracts as well as generic properties of the source code
such as type correctness and the absence of certain classes of runtime errors are established
using dynamic and static analysis. The static analysis tools include the Checker Framework for
annotated Java code, and SeaHorn for LLVM bit code. The dynamic analysis tools include the
Randoop unit test generator and the Daikon analyzer for likely program assertions, as well as the
Text2Test tool. We also integrate BeepBeep3 as a runtime safety monitor.
While we generated some modest tool
qualification evidence, we did not take a serious stab at tool qualification. A rigorous tool qualification
following the guidelines in RTCA DO-330 would be an extremely costly exercise that would distract us from
the research goal of developing a proof-of-concept evidence generating design workflow.
The DesCert assurance tools are summarized in the table in Figure~\ref{fig:toolsuite}\@.
\begin{figure}[htb]
\centering
{\small\begin{tabular}{{|l|l|l|}}\hline
Phase & Tools & Artifacts\\\hline
Tool Qualification & Self Analysis & Test+Analysis \\\hline
System Requirements & Operational Scenarios &
\begin{tabular}{{l}}
Consistency\\ Completeness
\end{tabular}
\\\hline
Hazard Analysis & Sally & Model Checking \\\hline
Software High Level Requirements &
\begin{tabular}{{l}}
CLEAR\\ Text2Test\\ PVS\\ Sally
\end{tabular}
& \begin{tabular}{{l}}
Consistency\\ Completeness\\Validation
\end{tabular}
\\\hline
Software Low Level Requirements & \begin{tabular}{{l}}
SeaHorn\\ Randoop\\ Daikon\\ Checker Framework
\end{tabular} & Static/Dynamic Analysis\\\hline
Executable Object Code & \begin{tabular}{{l}}
BeepBeep3\\Text2Test
\end{tabular} & Safety Monitoring \\\hline
\end{tabular}}
\caption{DesCert Tool Suite}\label{fig:toolsuite}
\end{figure}
\section{Current Limitations of DesCert Assurance Methodology}\label{sec:limitations}
The DesCert assurance-driven development workflow is aimed at creating a paradigm for the automated certification
of safety-critical systems. The Phase 1 effort was largely exploratory. We centered our workflow on the
creation of designs that supported efficient arguments. The Radler model of computation plays a key role
in facilitating an efficient argument structure. Our assurance-driven development follows
the structure of an assurance case complying with the guidance in DO-178C. We track several of the
Level C and D objectives and traceability relations suggested by the DO-178C standard. However, our
approach generates evidence during the design lifecycle where the goal of the design is the
creation of a software system supported with the design and assurance artifacts. This is in contrast
to a \emph{post facto} approach to certification where the evidence chain is constructed to comply with the
DO-178C objectives as a postscript to the design lifecycle. We also focus on constructing evidence that
targets the behavior of the software and not the process by which it is constructed and analyzed.
Since we generate evidence from different phases of the design, we target evidence that is semantically
coherent so that the behavioral models and claims at the different levels are consistent with each other.
In particular, any software failure can be connected to a flaw in the assurance argument constructed
from the evidence.
Broadly, the DesCert approach to assurance-driven development starts
with the formalization of the intent of the system in the form of
precise requirements defined in the CLEAR language. The analysis of
the requirements captured in CLEAR cover both generic properties that
must hold of any requirements as well as specific properties that
constrain the software application under certification. We use ontic
type annotations to capture the representational intent of the data
objects consistently throughout the design. The software design is
centered around a choice of an architectural model of computation,
which in our case is the Radler framework. The architecture, defined
in the Radler Architecture Definition Language (RADL), captures the
logical architecture in the form of nodes operating
quasi-periodically, and communicating over publish-subscribe channels
with specified latency bounds. The RADL physical architecture maps
the nodes to processes within a virtual machine and the topic channels
are implemented through mailboxes connected to their publishers
through a transport protocol. Radler architectures are flexible about
how the physical architecture is actually realized as long as the
period, communication, and communication latency assumptions are
satisfied. The software services provided by each node are
implemented as step functions with their own
precondition/post-condition contracts.
While the above outline of a high-assurance design process can be made fully rigorous,
there are some limitations with the Phase 1 work that need to be addressed in future work.
\begin{enumerate}
\item CLEAR has only been applied to a limited set of case studies. Both the behavioral
language and the background libraries of useful operations need to be expanded.
\item We have not yet defined an Ontic type framework that spans the design stages from requirements to source code.
\item The experience with translating CLEAR state machine requirements to Sally is limited to a few examples. This translation will need to be expanded to handle complex requirements.
\item Sally itself only implements model checking for invariants and cannot handle more complex temporal properties.
\item The soundness of the translation from CLEAR to Sally needs to be certified.
\item Though we have a broad and mature suite of tools, only the HiLite tool has gone through a tool qualification process.
\item While we have the source code analysis tools and did apply them to isolated examples, we did not make a systematic
effort to generate and integrate evidence from the analysis of source code components, since the High-Level
Requirements and Design levels took up a fair amount of effort.
\item The Baseline DesCert continuous integration workflow only has a small number of plug-ins, mainly Randoop and Daikon,
and we will be working to expand the number of plug-ins in future work.
\end{enumerate}
\section{Continuous Assurance with Baseline DesCert}
\label{sec:baselinebackground}
In this section, we describe the \textit{Baseline DesCert workflow}, a tool for
monitoring and maintaining the status of the assurance artifacts during
continuous assurance. This tool also interacts with the Rack system in order to
curate and ingest assurance artifacts in accordance with the DesCert ontology (see
Chapter~\ref{chap:ontology}).
In Baseline DesCert, continuous assurance starts with evidence generation.
Evidence generation occurs on a changing codebase in a fashion that mirrors the
continuous model of software development. Baseline DesCert views a codebase as a
changing artifact that evolves through modifications submitted by programmers.
\textit{Baseline DesCert}'s operating mechanism is centered around the use of
Gradle and Gradle plugins to record evidence as part of the build process. It
integrates different tools into Gradle through plugins so that these tools can
be applied as software artifacts are created or updated. The recorded evidence
will be maintained within the GE TA2 Rack system. Figure~\ref{fig:baseline} shows
Baseline DesCert's main components.
\begin{figure}[h!]\centering
\includegraphics[width=13cm]{figures/BaselineDesCertPipeline.pdf}
\caption{Baseline DesCert}
\label{fig:baseline}
\end{figure}
We have chosen to develop the Baseline DesCert workflow on top of the Gradle
build system for a few reasons. Gradle is widely adopted in open-source
software development, and its native capabilities are constantly evolving. It can build
not only Java projects but also Android and C/C++ ones, giving us the
flexibility for project selection. Also, Gradle's plugin
portal\footnote{\url{https://gradle.plugins.org}} offers numerous plugins for a
wide variety of capabilities relevant to Baseline DesCert; e.g., C/C++ plugin.
\sloppy{
As illustrated in Figure~\ref{fig:baseline}, \textit{Baseline DesCert} aims at chaining
many evidence generation steps. Each of these steps is wrapped into a Gradle plugin.
Each plugin
\begin{inlinelist}
\item configures a set of evidence generation tools and
\item then executes them on codebase modifications
\end{inlinelist}. In particular, Baseline DesCert aims at integrating
\begin{inlinelist}
\item Text2Test and Toradocu\cite{BlasiGKGEPC2018} for
consistency analysis and test oracle generation,
\item Sally\cite{dutertre2018verification} and Randoop\cite{pacheco2007randoop}
for model checking and unit test generation,
\item SeaHorn\cite{seahorn} and Daikon\cite{ErnstPGMPTX2007} for static and
dynamic correctness analysis,
\item SeaHorn\cite{seahorn} and the Checker
Framework\cite{conf/icse/DietlDEMS11} for property proving,
\item a test harness for test execution and emulation, and
\item a monitoring tool for assurance evidence management
\end{inlinelist}.
On each run, Baseline DesCert monitors not only the execution of its components
but also their output, aggregating and correlating recorded evidence in the process.
At this point, the developer who submitted software modifications can initiate the ingestion
of assurance artifacts into Rack in accordance with the DesCert ontology. }
The flow of evidence recorded by Baseline DesCert's executions captures
properties of the architecture and requirements of the artifacts representing
the system ConOps. At each stage of this successive evidence generation process,
the objective is to work towards the construction of an assurance argument that
can be maintained as the software evolves.
\section{Baseline DesCert's Gradle Plugins}
\label{sec:baselinedescert}
Baseline DesCert is a work-in-progress tool. We have currently implemented
Gradle plugins for the University of Washington tools: Daikon and Randoop, and
are integrating the Checker Framework. Moreover, we have also implemented a Rack
data import plugin. This plugin is responsible for pushing the data generated by
the plugins into the Rack system. In addition to the Randoop, Daikon, and Rack
data import plugins, we are working on building more plugins for the other tools shown
in Figure~\ref{fig:baseline}. For example, we have started the definition of
plugins for CLEAR requirements checking, Text2Test, Sally, and SeaHorn. We have
documented each plugin's main API, how it can be used, and what type of
evidence it can generate. The plugins can be found at
\url{https://github.com/SRI-CSL/daikon-gradle-plugin.git} and
\url{https://github.com/SRI-CSL/randoop-gradle-plugin.git}.
The Randoop plugin, for example, integrates the
Randoop~\cite{pacheco2007randoop} tool to automatically create unit tests for a
set of classes, in JUnit format. The Randoop tool uses
feedback-directed random test generation to generate sequences of
method/constructor invocations for the classes under test. To use the plugin,
one must add {\small \texttt{apply plugin: ``com.sri.gradle.randoop''}} to the root
project's {\small \texttt{build.gradle}}. Individual Randoop settings should be
also specified in the {\small \texttt{build.gradle}}. At a minimum, one should
specify
\begin{inlinelist}
\item the path to the Randoop tool,
\item the JUnit package name (the location of the classes under tests), and
\item the Randoop output directory
\end{inlinelist}. Figure~\ref{fig:randoopconfig} shows a complete configuration
of the Randoop plugin.
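The feedback-directed strategy mentioned above can be illustrated with a small sketch. This is our own illustration, not Randoop's actual code: candidate call sequences are extended one step at a time, and execution feedback decides whether each extension is kept or discarded. The \texttt{BoundedCounter} class under test is hypothetical.

```java
// Illustrative sketch (not Randoop's actual implementation) of
// feedback-directed generation: extend a call sequence step by step
// and keep only extensions that execute normally.
public class FeedbackDirectedSketch {

    // A toy "class under test": a bounded counter that throws on overflow.
    static final class BoundedCounter {
        private int value = 0;
        void increment() {
            if (value >= 3) throw new IllegalStateException("overflow");
            value++;
        }
    }

    // Return the longest sequence of increment() calls that still
    // executes normally, up to maxSteps attempts.
    static int longestNormalSequence(int maxSteps) {
        BoundedCounter c = new BoundedCounter();
        int kept = 0;
        for (int step = 0; step < maxSteps; step++) {
            try {
                c.increment();   // execute the extended sequence
                kept++;          // feedback: keep it, it ran normally
            } catch (RuntimeException e) {
                break;           // feedback: discard the failing extension
            }
        }
        return kept;
    }

    public static void main(String[] args) {
        // Only 3 increments execute normally; the 4th is discarded.
        System.out.println(longestNormalSequence(10)); // prints 3
    }
}
```

Randoop applies this idea at the level of arbitrary method/constructor sequences; the sketch compresses it to a single repeated call for clarity.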
\begin{figure}[h!]\centering\scriptsize
\begin{lstlisting}
plugins {
id 'java'
id 'maven-publish'
id 'com.sri.gradle.randoop' version '0.0.1-SNAPSHOT'
}
runRandoop {
randoopJar = file("libs/randoop.jar")
junitOutputDir = file("${projectDir}/src/test/java")
// Maximum number of seconds to spend generating tests.
// Zero means no limit. If nonzero, Randoop is nondeterministic:
// it may generate different test suites on different runs.
timeoutSeconds = 30
// Stop generation as soon as one error-revealing test
// has been generated.
stopOnErrorTest = false
// What to do if Randoop generates a flaky test:
// (1) halt, (2) discard, (3) output
flakyTestBehavior = 'output'
// A flag that determines whether to output
// error-revealing tests.
noErrorRevealingTests = true
// A flag that determines whether to use
// JUnit's standard reflective mechanisms
// for invoking tests.
junitReflectionAllowed = false
usethreads = true
outputLimit = 2000
junitPackageName = 'com.foo'
}
\end{lstlisting}
\caption{Randoop plugin configuration}
\label{fig:randoopconfig}
\end{figure}
Once configured, the plugin can be run by invoking either the {\small
\texttt{runRandoop}} or the {\small \texttt{randoopEvidence}} tasks. The former
only generates the unit tests while the latter generates both the unit
tests and a set of evidence files summarizing the execution of the Randoop tool.
Figure~\ref{fig:randoopevidence} shows the output of the
{\small \texttt{randoopEvidence}} task.
\begin{figure}[ht]\centering\scriptsize
\begin{lstlisting}
{
"Evidence": {
"RandoopJUnitTestGeneration": {
"INVOKEDBY": "RandoopGradlePlugin",
"AUTOMATEDBY": "RandoopGradlePlugin",
"PARAMETERS": "[--time-limit:30, --flaky-test-behavior:output,
--output-limit:2000, --usethread:true,
--no-error-revealing-tests:true, --stop-on-error-test:false,
--junit-reflection-allowed:false, --junit-package-name:com.foo,
--junit-output-dir:src/test/java]"
},
...
"RandoopTestsAndMetrics": {
"BRANCH": "master",
"EXPLORED_CLASSES": "2",
"COMMIT": "6fb16d1",
"PUBLIC_MEMBERS": "6",
"NORMAL_EXECUTIONS": "314804",
"REGRESSION_TEST_COUNT": "885",
"ERROR_REVEALING_TEST_COUNT": "0",
"AVG_EXCEPTIONAL_TERMINATION_TIME": "0.224",
"MEMORY_USAGE": "4647MB",
"EXCEPTIONAL_EXECUTIONS": "0",
"GENERATED_TEST_FILES_COUNT": "3",
"AVG_NORMAL_TERMINATION_TIME": "0.0572",
"GENERATED_TEST_FILES": [
"src/test/java/com/foo/RegressionTest0.java",
"src/test/java/com/foo/RegressionTest1.java",
"src/test/java/com/foo/RegressionTestDriver.java"
],
"CHANGES": "local",
"INVALID_TESTS_GENERATED": "0",
"NUMBEROFTESTCASES": "885"
}
}
}
\end{lstlisting}
\caption{Randoop plugin evidence}
\label{fig:randoopevidence}
\end{figure}
The Daikon plugin integrates the Daikon\cite{ErnstPGMPTX2007} invariant detector
to report likely program invariants. Daikon runs a program (e.g., unit tests
generated by Randoop), observes the values that the program computes, and then
reports properties that were true over the observed executions. The plugin can
detect whether the unit tests to execute were generated by Randoop (executed
through the Randoop plugin) or not. If they were not generated by the Randoop tool, the
plugin searches for a test driver (i.e., a test class that contains a static
main method) it can run. If the plugin cannot find a test driver, then the Daikon
plugin assumes the project under test has its own unit tests and thus it generates
a test driver that can execute these tests.
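The three-way driver-selection policy just described can be sketched as follows; the class, method, and enum names are our own illustration, not the plugin's actual API.

```java
import java.util.List;

// Hypothetical sketch of the Daikon plugin's driver-selection policy.
public class DriverSelectionSketch {

    enum Choice { RANDOOP_DRIVER, EXISTING_DRIVER, GENERATED_DRIVER }

    // testClasses: fully qualified test class names;
    // classesWithMain: test classes that contain a static main method.
    static Choice pickDriver(List<String> testClasses, List<String> classesWithMain) {
        // 1. Tests generated via the Randoop plugin come with their own driver.
        if (testClasses.stream().anyMatch(c -> c.endsWith("RegressionTestDriver")))
            return Choice.RANDOOP_DRIVER;
        // 2. Otherwise, search for any test class with a static main method.
        if (!classesWithMain.isEmpty())
            return Choice.EXISTING_DRIVER;
        // 3. Otherwise, generate a driver that runs the project's own tests.
        return Choice.GENERATED_DRIVER;
    }

    public static void main(String[] args) {
        System.out.println(pickDriver(
                List.of("com.foo.RegressionTest0", "com.foo.RegressionTestDriver"),
                List.of())); // prints RANDOOP_DRIVER
    }
}
```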
Similar to the Randoop plugin, one must add {\small \texttt{apply plugin:
``com.sri.gradle.daikon''}} to the root project's {\small \texttt{build.gradle}}
and a few other settings to run the plugin. The Daikon plugin can be run by
invoking either the {\small \texttt{runDaikon}} or the {\small
\texttt{daikonEvidence}} tasks. Figs.~\ref{fig:daikonconfig} and
\ref{fig:daikonevidence} show a complete configuration of the Daikon plugin and
the generated evidence, respectively.
\begin{figure}[t!]\centering\scriptsize
\begin{lstlisting}
plugins {
id 'java'
id 'maven-publish'
id 'com.sri.gradle.daikon' version '0.0.2-SNAPSHOT'
}
runDaikon {
outputDir = file("${projectDir}/build/daikon-output")
// the project directory where daikon.jar, ChicoryPremain.jar,
// and dcomp_*.jar files exist
requires = file("libs")
// *TestDriver package name. Daikon tool requires
// a test driver. If you use Randoop,
// then Randoop will generate one for you.
// Otherwise, tell the plugin to generate a test driver,
// simply by using the -Pdriver property when executing
// the plugin.
testDriverPackage = "com.foo"
}
\end{lstlisting}
\caption{Daikon plugin configuration}
\label{fig:daikonconfig}
\end{figure}
\begin{figure}[h!]\centering\scriptsize
\begin{lstlisting}
{
"Evidence": {
"DaikonPluginConfig": {
"OUTPUT_DIR": "build/daikon-output"
},
...
"DaikonInvsAndMetrics": {
"CORES": "16",
"JVM_MEMORY_LIMIT_IN_BYTES": "477626368",
"SUPPORT_FILES": [
"build/daikon-output/RegressionTestDriver.dtrace.gz",
"build/daikon-output/RegressionTestDriver.decls-DynComp",
"build/daikon-output/RegressionTestDriver.inv.gz"
],
"PP_COUNT": "5",
"INVARIANTS_FILE": "build/daikon-output/RegressionTestDriver.inv.txt",
"MEMORY_AVAILABLE_TO_JVM_IN_BYTES": "432013312",
"CLASSES_COUNT": "1",
"TEST_DRIVER": "src/test/java/com/foo/RegressionTestDriver.java",
"TESTS_COUNT": "4",
"INVARIANT_COUNT": "0"
}
}
}
\end{lstlisting}
\caption{Daikon plugin evidence}
\label{fig:daikonevidence}
\end{figure}
The third plugin is the Rack data import. This plugin is responsible for loading
the Randoop and Daikon evidence JSON files, transforming these files into a format that
the Rack system can ingest, and then (if the Rack system is running) pushing the
transformed files into the Rack system. The configuration is simple. This
plugin has only one dependency: the Gradle plugin {\small
\texttt{com.jetbrains.python.envs}}, version {\small \texttt{0.0.30}}. The Rack
data import plugin uses it to create an Anaconda environment that contains all
the dependencies needed for running the Rack CLI. The Rack
CLI\footnote{\url{https://github.com/ge-high-assurance/RACK/tree/master/cli}} is
a tool that exposes a set of APIs for setting up ontologies and also pushing
data into the Rack system. Once the Anaconda environment is created, the Rack data
import plugin uploads the DesCert ontology to the Rack system. If the Randoop
and Daikon evidence files are available, the plugin pushes the evidence data to
the Rack system. Otherwise, it immediately returns.
Figure~\ref{fig:rackingestconfig} shows the Rack data import plugin's
configuration.
\begin{figure}[h!]\centering\scriptsize
\begin{lstlisting}
plugins {
...
id "com.jetbrains.python.envs" version "0.0.30"
}
envs {
bootstrapDirectory = new File(buildDir, 'bootstrap')
envsDirectory = new File(buildDir, 'envs')
conda "Miniconda3", "Miniconda3-latest", "64"
condaenv "descert", "3.8.5", "Miniconda3", ["numpy"]
}
task setupRackCli(type: Exec) {
dependsOn 'build_envs'
executable "./rack-descert.sh"
args "cli", "--conda=Miniconda3", "--condaenv=descert"
}
task setupRackArcos(type: Exec) {
dependsOn 'setupRackCli'
executable "./rack-descert.sh"
args "init", "--conda=Miniconda3", "--condaenv=descert"
}
task importData(type: Exec) {
executable "./rack-descert.sh"
args "import", "--conda=Miniconda3", "--condaenv=descert"
}
\end{lstlisting}
\caption{Rack data import configuration}
\label{fig:rackingestconfig}
\end{figure}
The {\small \texttt{rack-descert.sh}} script shown in
Figure~\ref{fig:rackingestconfig} contains functionality for
\begin{inlinelist}
\item installing the Rack CLI,
\item checking whether a Rack instance is running,
\item setting up the DesCert ontology in Rack,
\item curating evidence data, and
\item ingesting the curated evidence data into the Rack system.
\end{inlinelist}
The curation step is a key operation of this plugin. It transforms recorded
evidence data to a new format that matches the DesCert ontology in the Rack
system. For example, it turns Randoop's evidence data (See
Figure~\ref{fig:randoopevidence}) into a set of concepts that describe the
ontology of the Randoop evidence: the \textit{RandoopJUnitTestGeneration}
activity is invoked by a user entity (e.g., a developer). This activity is
automated by the \textit{Randoop tool} entity, given some user-specified tool
configuration and a target \textit{source code} entity. The results of a
\textit{RandoopJUnitTestGeneration} execution are captured in the
\textit{RandoopTestsAndMetrics} entity.
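The curation step can be pictured as flattening recorded evidence into subject/predicate/object triples. The sketch below is an assumption about the general shape of the transformation, not the Rack CLI's actual ingestion format; the key names are taken from the Randoop evidence shown earlier.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch of curation: flatten an evidence record into
// (subject, predicate, object) triples keyed by the activity name.
// The triple layout is assumed, not the Rack CLI's actual format.
public class CurationSketch {

    static List<String[]> toTriples(String activity, Map<String, String> fields) {
        List<String[]> triples = new ArrayList<>();
        for (Map.Entry<String, String> e : fields.entrySet()) {
            triples.add(new String[]{activity, e.getKey(), e.getValue()});
        }
        return triples;
    }

    public static void main(String[] args) {
        Map<String, String> evidence = new LinkedHashMap<>();
        evidence.put("INVOKEDBY", "RandoopGradlePlugin");
        evidence.put("AUTOMATEDBY", "RandoopGradlePlugin");
        for (String[] t : toTriples("RandoopJUnitTestGeneration", evidence)) {
            System.out.println(String.join(" -> ", t));
        }
    }
}
```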
We have created an illustrative descert-example project (See
\url{https://github.com/SRI-CSL/descert-example.git}), its Docker image, its
configuration, and its documentation. This project generates evidence for a
basic Java project, as part of its build process, using the Randoop and
Daikon Gradle plugins. This project also transforms the generated evidence
into a format that the Rack system can handle during its data import task.
If a local version of Rack is running, a user can trigger the uploading of
the evidence data to the Rack system. Figure~\ref{fig:execbaseline} shows
Baseline DesCert's execution on the descert-example repository.
\begin{figure}[h!]\centering
\includegraphics[width=12cm]{figures/BaselineDesCertOperation.pdf}
\caption{Baseline DesCert execution on descert-example repository}
\label{fig:execbaseline}
\end{figure}
\section{Track, Compare, and Visualize Your Evidence}
\label{sec:trackcompare}
At first glance, evidence generation with Baseline DesCert looks a lot like
continuous software development. But there are some key differences. Unlike
continuous software development, the essential unit of progress in Baseline
DesCert is an experiment, meaning programmers are able to track what they
are doing in the pipeline. With Baseline DesCert, we aim at automatically
linking programmers' experiments to the latest git commits in the software
repository of a project under assessment. Our goal is to provide programmers
with a mechanism for easily comparing any subset of experiments with visualizations,
trying to achieve tool agreement whenever possible. The agreement of tools on
specific generated evidence is an indicator of the quality of the generated
evidence. Figure~\ref{fig:experiment} illustrates our vision for experiment
tracking in Baseline DesCert.
\begin{figure}[h]\centering
\includegraphics[width=13cm]{figures/BaselineDesCertExperimentTracking.pdf}
\caption{Experiment tracking in Baseline DesCert}
\label{fig:experiment}
\end{figure}
With experiment tracking, programmers now have new capabilities. They can vary
evidence generation tools, change versions of a particular codebase, and even
adjust more general settings of each evidence generation tool. For example, on a
new experiment, one can replace Sally and Daikon with SeaHorn or with Facebook's
Infer\cite{calcagno2015moving, calcagno2015open} and then track the quality of
newly generated evidence using the new tools.
As Figure~\ref{fig:experiment} suggests, comparing the results of each experiment
is going to facilitate collaboration between the users of Baseline DesCert.
Drawing inspiration from continuous integration systems like Travis-CI and
others, our goal is to allow users, or different teams, to access the ``continuous
assurance'' history of a codebase at any time. Users can also watch other users'
experiments unfold in real-time and even provide feedback if requested. We
hypothesize that the ability to see evidence evolution over many executions and
many tool combinations (through experiment tracking and visualizations) is going
to provide a sense of progress with respect to the quality of the generated
evidence for a changing codebase.
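The notions of an experiment and of tool agreement can be sketched minimally as follows; the \texttt{Experiment} layout and the \texttt{agreeOn} helper are hypothetical, chosen only to illustrate linking a commit, a tool set, and recorded metrics.

```java
import java.util.List;
import java.util.Map;

// Hypothetical sketch of experiment tracking: each experiment links a
// git commit, the tool set used, and the metrics that run recorded.
public class ExperimentTrackingSketch {

    static final class Experiment {
        final String commit;
        final List<String> tools;
        final Map<String, String> metrics;
        Experiment(String commit, List<String> tools, Map<String, String> metrics) {
            this.commit = commit;
            this.tools = tools;
            this.metrics = metrics;
        }
    }

    // "Agreement" here simply means two experiments report the same
    // value for a shared metric.
    static boolean agreeOn(Experiment a, Experiment b, String metric) {
        String va = a.metrics.get(metric);
        return va != null && va.equals(b.metrics.get(metric));
    }

    public static void main(String[] args) {
        Experiment e1 = new Experiment("6fb16d1", List.of("Randoop", "Daikon"),
                Map.of("REGRESSION_TEST_COUNT", "885"));
        Experiment e2 = new Experiment("6fb16d1", List.of("Randoop", "Infer"),
                Map.of("REGRESSION_TEST_COUNT", "885"));
        System.out.println(agreeOn(e1, e2, "REGRESSION_TEST_COUNT")); // prints true
    }
}
```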
\section{Concept of Operations}
\label{sec:conops}
\begin{figure}[!hbtp]
\centering
\includegraphics[scale=.40]{figures/conops_mission.png}
\caption{Phase 1 Mission Operation}
\label{fig:conops_mission}
\end{figure}
As shown in Figure \ref{fig:conops_mission}, we envision an ArduCopter Rotorcraft (VTOL) performing an autonomous surveillance mission during a \emph{nominal operation}, where \emph{no contingency situations} are encountered and the operation is performed in a location within communication proximity of a Ground Control Station (GCS) that is expected to oversee the ArduCopter operation. All coordinates shown are triplets \emph{(latitude, longitude, altitude)} with \emph{meters} as units. Also, altitude is specified as relative height above the origin, which is also assumed to be the \emph{home} position, i.e., the site of \emph{launch} $L$ where the mission begins as well as the site of return once the mission is completed. The mission involves following a sequence of waypoints $L, W1, W2, W3, ..., W6, L$. After reaching the launch position, the craft will land.
As part of the mission configuration, \emph{Rally Points} are also specified, e.g., $R1, R2$. These are pre-specified points to which the ArduCopter can proceed, as an alternative to the Home point or a waypoint, during emergencies or contingencies. For example, the ArduCopter can proceed to the closest Rally Point, rather than proceeding all the way back to the Home position, loiter at that location, and perform an automated landing there.
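Choosing the closest Rally Point can be sketched as a nearest-neighbor selection. The planar-distance simplification below is our own; a real implementation would use geodetic distance over latitude/longitude.

```java
// Illustrative sketch: select the closest Rally Point during a
// contingency. Coordinates are treated as planar (x, y) in meters for
// simplicity; a real implementation would use geodetic distance.
public class RallyPointSketch {

    // Return the index of the rally point nearest to (x, y).
    static int closest(double[][] rallyPoints, double x, double y) {
        int best = 0;
        double bestD = Double.MAX_VALUE;
        for (int i = 0; i < rallyPoints.length; i++) {
            double dx = rallyPoints[i][0] - x, dy = rallyPoints[i][1] - y;
            double d = dx * dx + dy * dy;   // squared distance suffices
            if (d < bestD) { bestD = d; best = i; }
        }
        return best;
    }

    public static void main(String[] args) {
        double[][] rally = { {0, 0}, {100, 100} };  // R1, R2 (assumed positions)
        System.out.println("R" + (closest(rally, 90, 95) + 1)); // prints R2
    }
}
```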
\begin{figure}[!hbtp]
\centering
\includegraphics[scale=.40]{figures/conops_mission_with_fences.png}
\caption{Geofences during Mission Operation}
\label{fig:conops_mission_with_fences}
\end{figure}
ArduPilot supports alarms generated by several types of Fences (boundaries described by latitude/longitude and/or altitude) to prevent the vehicle from traveling higher or further than desired, or into unwanted areas. The types of Fences supported vary by vehicle. Upon a Fence breach, selectable actions are taken. As part of the mission configuration, and as illustrated in Figure \ref{fig:conops_mission_with_fences}, the ArduCopter has Geo-Fences pre-specified. The Cylindrical Geo-Fence is a simple ``tin-can'' centered around home. The Cylindrical Geo-Fence restriction has two checks associated with it: (i) a max altitude check (height of the cylinder) and (ii) a range check (radius of the cylinder). Additionally, there is an arbitrarily shaped Polygonal Geo-Fence restriction to ensure the ArduCopter flies only within the polygonal boundary. The Cylindrical and Polygonal Geo-Fence breach checks are both \emph{Inclusion} fences, which keep the vehicle from flying ``out of the fence''. Note that an \emph{Exclusion} fence, which keeps the vehicle from flying ``into the fence'', is available but not utilized in the current mission configuration.
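The two inclusion checks can be sketched as follows. The polygon test uses the standard even-odd ray-casting method, and the coordinate and threshold values in the usage example are hypothetical.

```java
// Illustrative sketch of the two Inclusion-fence checks: a cylindrical
// check (altitude plus horizontal range from home) and a polygon
// boundary check (standard even-odd ray casting).
public class GeofenceSketch {

    // Cylindrical fence: inside iff altitude <= maxAltitude and the
    // horizontal distance from home (the origin) is <= range.
    static boolean insideCylinder(double x, double y, double alt,
                                  double range, double maxAltitude) {
        return alt <= maxAltitude && Math.hypot(x, y) <= range;
    }

    // Polygon fence: even-odd ray-casting point-in-polygon test over
    // vertices (px[i], py[i]).
    static boolean insidePolygon(double[] px, double[] py, double x, double y) {
        boolean inside = false;
        for (int i = 0, j = px.length - 1; i < px.length; j = i++) {
            if ((py[i] > y) != (py[j] > y)
                    && x < (px[j] - px[i]) * (y - py[i]) / (py[j] - py[i]) + px[i]) {
                inside = !inside;
            }
        }
        return inside;
    }

    public static void main(String[] args) {
        // Hypothetical fence: 100 m range, 120 m max altitude, square polygon.
        double[] px = {0, 200, 200, 0}, py = {0, 0, 200, 200};
        boolean ok = insideCylinder(30, 40, 80, 100, 120)
                && insidePolygon(px, py, 30, 40);
        System.out.println(ok ? "inside operational safety zone" : "breach");
    }
}
```

The operational safety zone of the mission is then the conjunction of both checks, matching the Cylinder $\cap$ Polytope intersection described in the next section.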
\section{Mission Scenario and Assumptions}
\label{sec:mission_scenario}
The primary objectives of the mission are for the ArduCopter to:
\begin{itemize}
\item Successfully complete a surveillance mission that is pre-configured before start and includes taking off from home/launch, visiting all waypoints in sequence, and finally returning to launch and landing \textit{(Objective 1)}
\item Complete the mission with full autonomous control of the ArduCopter:
\begin{itemize}
\item Without any remote pilot assistance from Ground Control System (GCS) \textit{(Objective 2)}
\item Without any loss of control of the ArduCopter in both nominal and off-nominal/contingency/emergency situations enumerated apriori \textit{(Objective 3)}
\item Without physically losing track of the ArduCopter whereby GCS is notified of the location of the ArduCopter during the whole mission duration \textit{(Objective 4)}
\end{itemize}
\item Complete the mission operation safely:
\begin{itemize}
\item By limiting potential hazards (e.g. collision) risks exposed to humans, properties and other airborne assets during the operation of the mission by restricting the Arducopter to flying within a pre-configured Geofence i.e. Operational Safety Zone \textit{(Objective 5)}
\item Without crashing (destroying) the ArduCopter during takeoff, cruising through the waypoints or during landing \textit{(Objective 6)}
\end{itemize}
\end{itemize}
\textit{We would ideally like to satisfy all mission objectives, if possible; if there are conflicts/trade-offs, then we require that mission objectives be prioritized (from high to low) in the order: (5), (6), (3), (4), (1), (2)}. As an example of prioritization, landing immediately when an insufficient-battery emergency occurs, rather than proceeding toward Home/Launch, shows prioritization of objectives (5) \& (6) over (1).
There are a variety of \emph{assumptions} related to the validity of the mission related configuration:
\begin{itemize}
\item Mission Waypoints trajectory are correctly specified in a loop: $L \rightarrow W1 \rightarrow W2 \rightarrow \cdots \rightarrow Wn \rightarrow L$
\item Geofence configuration $(\mathit{max\_altitude, range, polygon})$ is correctly specified:
\begin{itemize}
\item Cylinder (radius $\mathit{range}$, height $\mathit{max\_altitude}$) and Polytope ($2D\ \mathit{Polygon} \times \mathit{max\_altitude}$) are defined
\item $\mathit{Operational\ Safety\ Zone} = \mathit{Cylinder} \cap \mathit{Polytope}$, i.e., the common 3D space intersecting the Cylinder and the Polytope must be non-empty
\item \textit{Minimizes potential hazards (e.g. collision) risks} exposed to humans, properties and other airborne assets as long as the operation of the mission is restricted within this operational safety zone
\end{itemize}
\item The Mission Waypoints trajectory is feasible if it lies completely within the Geofence:
\begin{itemize}
\item All waypoints and points en-route between waypoints are completely within $\mathit{Operational\ Safety\ Zone} = \mathit{Cylinder} \cap \mathit{Polytope}$
\item All waypoints and points between waypoints along $L \rightarrow W1 \rightarrow W2 \rightarrow \cdots \rightarrow Wn \rightarrow L$ satisfy (1) the range check, (2) the max\_altitude check, and (3) the polygon check
\end{itemize}
\item Emergency Rallypoints $R1, R2, \cdots Rn$ are suitably chosen:
\begin{itemize}
\item All rallypoints are within the Geofence and satisfy (1) the range check, (2) the max\_altitude check, and (3) the polygon check
\item When reacting to a specific emergency/off-nominal/contingency situation, rather than always having to proceed to the Launch/Home position (which might be far off depending on where in the trajectory the ArduCopter is currently flying), Rallypoints are suitable ``alternate'' location choices for the ArduCopter to proceed to, e.g., within Line of Sight (LOS) of the Ground Control Station (GCS)
\item A priori, configure within the mission the ``closest'' Rallypoints associated with different waypoints and the paths between them, e.g., Rallypoint $R1$ for $L \rightarrow W1 \rightarrow W2 \rightarrow W3 \rightarrow W4$ and $R2$ for $W4 \rightarrow W5 \rightarrow W6 \rightarrow W7$
\end{itemize}
\item Mission Waypoints and trajectory en-route between waypoints as well as Rallypoints are within communication range of the Ground Control Station (GCS) in normal/nominal situations. Note: Off-nominal complete loss of communication as well as discontinuity in service (intermittent service) is expected
\item Mission Waypoints, the trajectory en-route between waypoints, and Rallypoints typically have access to GPS satellites and GPS Fix signals for navigation in normal/nominal situations. Note: Off-nominal complete loss of GPS signal as well as discontinuity in service (intermittent service) is expected. In these situations the on-board navigation (e.g., the EKF filter) is expected to supply location information continuously, allowing the ArduCopter to coast for a while using only primary inertial sensor (IMU: accelerometer, gyros) measurements until aiding GPS fix signals may be obtained later (there is no guarantee that a GPS Fix signal will be obtained).
\item Remaining Battery Energy Level is sufficiently provisioned/budgeted and configurations appropriately setup for $[100\% > T_{NOM} > T_{RTL} > T_{LAND} > 0\%]$ as shown in Figure \ref{fig:battery_levels}
\begin{figure}[!hbtp]
\centering
\includegraphics[scale=.90]{figures/battery_levels.png}
\caption{Battery Energy Levels}
\label{fig:battery_levels}
\end{figure}
\begin{itemize}
\item Nominally, the full mission can be completed with $(100\%- T_{NOM})$ battery energy, with some spare battery level remaining at $T_{NOM}$. There are no guarantees in off-nominal situations requiring tough maneuvers, battery drainage, etc.
\item The $T_{RTL}$ remaining battery threshold level is chosen such that $T_{RTL}$ battery energy is sufficient, with spare, to return to launch/home and perform the necessary vehicle maneuvers from anywhere within the mission trajectory in a normal/nominal condition. There is no guarantee in off-nominal situations
\item The $T_{LAND}$ battery threshold level is chosen such that $T_{LAND}$ battery energy is sufficient, with spare, to vertically land on the ground immediately and safely from any altitude up to max\_altitude in normal/nominal conditions
\end{itemize}
\end{itemize}
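The battery threshold ordering $[100\% > T_{NOM} > T_{RTL} > T_{LAND} > 0\%]$ suggests a simple escalation policy, sketched below; the concrete threshold values in the usage example are assumed, not part of the mission configuration.

```java
// Illustrative sketch of an escalation policy over the battery
// thresholds T_RTL and T_LAND described above. Threshold values
// in main() are assumed for illustration.
public class BatteryPolicySketch {

    // remaining, tRtl, tLand are percentages with tRtl > tLand.
    static String action(double remaining, double tRtl, double tLand) {
        if (remaining > tRtl)  return "CONTINUE_MISSION";  // nominal budget
        if (remaining > tLand) return "RETURN_TO_LAUNCH";  // enough to RTL with spare
        return "LAND_IMMEDIATELY";                         // only enough to land
    }

    public static void main(String[] args) {
        // Assumed threshold values: T_RTL = 25%, T_LAND = 10%.
        System.out.println(action(30, 25, 10)); // prints CONTINUE_MISSION
        System.out.println(action(20, 25, 10)); // prints RETURN_TO_LAUNCH
        System.out.println(action(5, 25, 10));  // prints LAND_IMMEDIATELY
    }
}
```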
\section{System and Software Architectures}
\label{sec:SySoArch}
\begin{sidewaysfigure}[!hbtp]
\centering
\includegraphics[scale=.55]{figures/system_software_architecture.png}
\caption{System Architecture with Advanced Fail-Safe (AFS) Software Component}
\label{fig:sys_sw_arch}
\end{sidewaysfigure}
For the phase 1 challenge problem, we envision a system architecture built with associated software components depicted in Figure \ref{fig:sys_sw_arch}. In particular, the phase 1 focus will be limited to the generation of evidence for compliance and certification of a single software component associated with the \emph{Advanced Fail-Safe (AFS)} function, shown at the top right of Figure \ref{fig:sys_sw_arch}. The vision is that the Ground Control Station (GCS) performs remote pilot operations (monitors and potentially intervenes in control) of the ArduCopter using some wireless channel of communication (5G/4G, Radio over WiFi, SATCOM, etc.). We use MAVLink as the message transport protocol for the communication between the GCS and the ArduCopter, including bi-directional periodic heartbeats, status updates, and sending/receiving commands, data, and acknowledgments (ACKs). Note that MAVLink offers no guarantees for message delivery, and repeated state checks may have to be done by the underlying software to ascertain reliable delivery.
The GCS software uses MAVProxy, and the Flight Software (on-board software on the ArduCopter) has the vehicle-specific flight code; in phase 1, we \emph{do not} make any changes to this code and use it as is. As shown in the bottom left of Figure \ref{fig:sys_sw_arch}, the flight software has multiple layers/parts: (i) the implementation of the ArduCopter flight control modes, including Takeoff, Loiter, Land, Auto (autopilot), Altitude-Hold, Stabilize, Return-to-Launch (RTL), etc.; (ii) shared libraries to support multiple drivers for various navigation sensors, e.g., GPS, IMU and other inertial sensors, barometer, camera, etc.; (iii) the navigation function implemented using an Extended Kalman Filter (EKF), position/attitude and motor/servo control, etc., for the different mode specifications; (iv) a hardware abstraction layer (HAL) to support portability to many different platforms; and (v) OS support for Linux, etc., and various hardware support code (processor, IO, sensor interfaces).
Our primary focus in phase 1 is to certify a single software component developed as new software within ArduPilot, which architecturally integrates with the rest of the ArduCopter system and software in a seamless manner. We design and develop the Advanced Fail-Safe (AFS) function in a principled manner, i.e., with a view to easing certification by minimizing defect escapes and lowering verification costs through automation. The objective of the AFS function is to take safe corrective action in abnormal situations -- identifying the following \emph{six contingency situations} and triggering the appropriate recovery response actions:
\begin{enumerate}
\item Cylindrical Geofence breach: Max Altitude check violation
\item Cylindrical Geofence breach: Range check violation
\item Polygon Geofence breach: polygon boundary check violation
\item GPS Lock Loss
\item Ground Station Communication Loss
\item Insufficient Battery
\end{enumerate}
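A minimal sketch of how these six events might be arbitrated when several are active at once, using the prioritization stated in Section~\ref{sec:HLDR} ((6), then (4), then (1)--(3), then (5)); the tie-breaking order among the three geofence events (1)--(3) is our own assumption.

```java
import java.util.List;

// Sketch of selecting which contingency to handle first when several
// AFS monitor events are active. The ordering among events 1-3 is an
// assumption; they share the same priority tier in the text.
public class AfsPrioritySketch {

    // Events numbered as in the list above; earlier = higher priority.
    static final List<Integer> PRIORITY = List.of(6, 4, 1, 2, 3, 5);

    static int highestPriority(List<Integer> activeEvents) {
        for (int e : PRIORITY) {
            if (activeEvents.contains(e)) return e;
        }
        return 0; // no active event
    }

    public static void main(String[] args) {
        // GPS lock loss (4) and comm loss (5) both active: handle (4) first.
        System.out.println(highestPriority(List.of(5, 4))); // prints 4
    }
}
```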
As shown in the bottom right of Figure \ref{fig:sys_sw_arch}, the AFS function is built as a separate software component on a companion computer, i.e., separate computer hardware that communicates with the ArduCopter flight software using MAVLink over another independent channel. The channel itself can be a wired communication medium like a serial or ethernet interface, or potentially another wireless interface. Alternatively, the AFS function can be loaded on the ArduCopter flight hardware and co-hosted as a separate software partition alongside the flight software partition, with the inter-partition communication still leveraging the MAVLink transport interface. This flexibility in the architectural separation of the AFS functionality allows us to develop the AFS software component independently, leveraging a time-space partitioning strategy (e.g., ARINC 653 partitions \cite{Krodel2007RealTimeOS}) for demonstrating verification of the AFS software component in isolation. Subsequently, we would then focus on the dependencies of the AFS function on the rest of the system, i.e., verification of the integrated system and its multiple components in a systematic manner. To reiterate, in phase 1 and in this report, we limit our focus to generating evidence for a single AFS software component.
The AFS software component is built on top of the Robot Operating System, version 1 (ROS1~\cite{book:ROS}), for a Linux operating system, on top of which all ArduPilot software code is built. We leverage MAVROS, a ROS-based extendable communication node, which enables MAVLink communication between computers running ROS (1) for any MAVLink-enabled autopilot, ground station, or peripheral. MAVROS is the ``official'' supported bridge between ROS (1) and the MAVLink protocol. The AFS function is built using a Robot Architecture Definition Language (RADL) specification and the associated Radler~\footnote[4]{Radler 1.0 documentation: \url{https://sri-csl.github.io/radler/}}~\footnote[5]{Radler examples: \url{https://github.com/SRI-CSL/radler}} code generation tool and build environment to generate executables, ensuring that the software code executing in a ROS environment adheres to the specification. We discuss in Section~\ref{sec:RadlerMain} the details of the RADL architectural specifications, the properties inherited from the architectural paradigm when the system and associated software components strictly adhere to the specification, and the Radler build tool that assembles the requisite software for execution on the ROS platform. The AFS-specific RADL specification, i.e., \emph{afs.radl}, is discussed in Section~\ref{sec:radlEvidence}. We design the AFS system architecture using Radler as a collection of nodes (with periods and publish/subscribe topics) and topics. The system can be tested using the SITL simulator of ArduPilot.
The Radler architecture specification consists of logical and physical parts. The logical part is specified in terms of nodes and topics, similar to ROS. The nodes execute independently and periodically, and publish and/or subscribe to topics. The AFS Function node (afs\_function) executes its step function with a period of 100 milliseconds and publishes 4 recovery actions to subscriber ROS nodes: (i) AFS Geofence Breach Recovery for 3 different kinds of breach (altitude, range, polygon), (ii) AFS GCS Comm Loss Recovery, (iii) AFS GPS Lock Loss Recovery, and (iv) AFS Battery Insufficiency Recovery. The AFS Gateway node (afs\_gateway) acts as a bridge between the AFS Function and the on-board flight controls on the ArduCopter, connecting through MAVROS/MAVLink.
More specifically, the AFS Gateway node forwards messages back and forth between the AFS Function and the ROS/MAVROS/MAVLink interface on the companion computer. It remotely collects events from the flight controls, e.g., GPS Lock Loss, remaining battery energy, Comm Loss, and Geofence Breach events, and forwards them to the AFS Function. The AFS Function implements the recovery action logic and sends control commands back (e.g., a change of flight mode) to the AFS Gateway, which then sends the command to the flight controls on the ArduCopter via MAVLink.
The physical part of the AFS RADL specification maps nodes to processes on specific machines, in this case the companion computer. The Radler build process, using the source code (afs\_gateway.h, afs\_gateway.cpp, afs\_function.h, afs\_function.cpp), performs explicit checks and also generates the glue code for scheduling, communication, and failure detection, such as timeout or staleness of data. Note that in Phase 1, we focus exclusively on the AFS Function and not on the AFS Gateway.
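The periodic behavior of the afs\_function node can be pictured as a step function mapping the latest forwarded event to a published recovery action. The sketch below is not RADL or ROS code; the event and topic names are illustrative, not the actual RADL topics.

```java
// Illustrative sketch (not RADL/ROS code) of the periodic step pattern
// described above: a step function runs once per period and publishes
// at most one recovery action per step. Names are hypothetical.
public class PeriodicNodeSketch {

    static final int PERIOD_MS = 100;  // afs_function period from the text

    // One step: map the latest forwarded event to a recovery topic,
    // or to nothing in the nominal case.
    static String step(String event) {
        switch (event) {
            case "GPS_LOCK_LOSS":   return "afs_gps_lock_loss_recovery";
            case "GCS_COMM_LOSS":   return "afs_gcs_comm_loss_recovery";
            case "GEOFENCE_BREACH": return "afs_geofence_breach_recovery";
            case "BATTERY_LOW":     return "afs_battery_insufficiency_recovery";
            default:                return null; // nominal: publish nothing
        }
    }

    public static void main(String[] args) {
        System.out.println(step("GPS_LOCK_LOSS")); // prints afs_gps_lock_loss_recovery
        System.out.println(step("NOMINAL"));       // prints null
    }
}
```

In the actual system the glue code generated by Radler, rather than user code, drives this function every PERIOD\_MS and handles the publish/subscribe plumbing.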
\section{High Level Behavioral Description of AFS Function}
\label{sec:HLDR}
The primary objective of intent specification and formally capturing requirements is to be able to demonstrate, with confidence, evidence that the implementation deployed on the actual system meets the intent. We specify the intent of the AFS Function and the associated requirements for triggering recovery actions during contingency events in the Constrained Language Enhanced Approach to Requirements (CLEAR) notation. The formalized requirements are analyzed for consistency, completeness, and other generic properties to ensure that latent defects do not enter during the requirements writing phase, where they can subsequently manifest in the implementation and be trickier to diagnose. From the requirements, we use Text2Test to automate the generation of test cases and test oracles that are tightly traceable to these requirements. The high-level requirements are also model-checked using the Sally translation of the CLEAR requirements together with the node-level code contracts (specific properties), and the architecture properties are satisfied by construction due to the Radler specification and associated build. The node-level contracts are analyzed using dynamic and static code-analysis tools. The tools associated with such evidence generation are the subject matter of Section~\ref{chap:tools}, and the AFS-specific evidence that is generated is discussed in Section~\ref{chap:evidence}. In this section we discuss the high-level behavioral description of the AFS Function that informs all the requirements discussed in subsequent sections.
Broadly, there are \emph{six AFS Monitor Events}, due to off-nominal situations, that trigger the appropriate recovery/response actions below. Note that the AFS prioritization when multiple AFS events occur, from high to low, is: $(6), (4), [ (1), (2), (3) ], (5)$
\begin{enumerate}
\item Cylindrical Geofence breach due to Max Altitude check violation
\begin{itemize}
\item \emph{Potential contingency scenario}: Weather/Wind causing trajectory to either overshoot the fence or too close to it with respect to tolerance margins to handle navigation error.
\item \emph{AFS recovery/response action}: Try to drop to “Target Altitude” i.e. (Max\_Altitude - margin) within 5 sec. If Target Altitude is achieved, continue with mission. If not achieved, then land and communicate landing location to GCS and terminate mission.
\end{itemize}
\item Cylindrical Geofence breach due to Range check violation
\begin{itemize}
\item \emph{Potential contingency scenario}: Weather/Wind causing trajectory to either overshoot the fence or too close to it with respect to tolerance margins to handle navigation error.
\item \emph{AFS recovery/response action}: Try to drop to “Target Position” i.e. (Max\_Range - margin) within 5 sec. If Target Position is achieved, continue with mission. If not achieved, then land and communicate landing location to GCS and terminate mission.
\end{itemize}
\item Polygon Geofence breach due to Polygon Boundary check violation
\begin{itemize}
\item \emph{Potential contingency scenario}: Weather/Wind causing trajectory to either overshoot the fence or too close to it with respect to tolerance margins to handle navigation error.
\item \emph{AFS recovery/response action}: Try to drop to “Target Position” i.e. (Polygon\_Boundary - margin) within 5 sec. If Target Position is achieved, continue with mission. If not achieved, then land and communicate landing location to GCS and terminate mission.
\end{itemize}
\item GPS Lock Loss: Loss of GPS signal for 3 seconds
\begin{itemize}
\item \emph{Potential contingency scenario}: GPS Denied/Degraded Environment, navigation drifts when coasting on pure IMU/Inertial without GPS fixes.
\item \emph{AFS recovery/response action}: First loiter or hover at the current location, i.e., at a waypoint or in transit between waypoints, for 5 seconds. If GPS is recovered within 5 seconds, then resume the mission and proceed to the next waypoint. If GPS is not recovered within 5 seconds, then go to the last completed ``Waypoint'' with the last known GPS fix, using ONLY IMUs without GPS fixes (coasting drifts) in transit, and then land at that position, communicate the landing location to GCS, and terminate the mission. Note that the last waypoint offers a better chance of a GPS fix than the rally point in this scenario, since GPS signal availability at the rally point is unknown.
\end{itemize}
\item Ground Station Communication Loss: the vehicle does not receive a heartbeat message for a period of 3 seconds
\begin{itemize}
\item \emph{Potential contingency scenario}: MAVLink loss of Heartbeat messages from/to GCS, outside radio communication range, no cell towers etc.
\item \emph{AFS recovery/response action}: Go to a pre-configured Rally Point (emergency hovering point), ``loiter,'' and try to re-establish the communication connection for a 5-second duration without loss. If communication is still not reestablished, then return to launch and terminate the mission. If communication is established, then complete the remaining mission and increment a strike counter that tracks the history of communication disruptions. If the strike counter is $\geq 3$, then return to launch, as comms are deemed unreliable, and terminate the mission.
\end{itemize}
\item Insufficient Battery
\begin{itemize}
\item \emph{Potential contingency scenario}: Over consumption of energy due to difficult vehicle maneuvers during the mission, battery drainage/leaks etc.
\item \emph{AFS recovery/response action}: If battery level $\leq T_{RTL}$ and $\geq T_{LAND}$ then return to launch and terminate mission.
If battery level $< T_{LAND}$ then land immediately to the ground and communicate landing location to GCS after you have landed and terminate mission.
\end{itemize}
\end{enumerate}
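As an illustration, the prioritization rule above can be sketched as a small selection function; the boolean-array encoding and the treatment of the three geofence breaches as equal-rank events are our own reading of the rule, not the actual AFS implementation.

```java
// Illustrative sketch of the AFS event prioritization (6), (4), [(1), (2), (3)], (5).
// Event numbers follow the enumeration above; this is not the actual AFS code.
public class AfsPriority {
    // Priority order from highest to lowest: insufficient battery (6),
    // GPS lock loss (4), the three geofence breaches (1..3), then comm loss (5).
    private static final int[] ORDER = {6, 4, 1, 2, 3, 5};

    // Returns the highest-priority active event, or 0 if none is active.
    // active[i] is true when event (i+1) from the enumeration has fired.
    public static int highestPriority(boolean[] active) {
        for (int event : ORDER) {
            if (active[event - 1]) {
                return event;
            }
        }
        return 0;
    }
}
```

For example, if a geofence altitude breach (1) and a comm loss (5) fire together, the breach is handled first; if insufficient battery (6) also fires, it takes precedence over both.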
\subsection{CLEAR: Constrained Language Enhanced Approach to Requirements}
\begin{figure}[!htb]
\centering
\includegraphics[width=\linewidth]{figures/CLEAR_principles.png}
\caption{An overview of the design principles of CLEAR.}
\label{fig:clear-principles}
\end{figure}
Typically, software requirements are written purely as natural language, which is expressive and, when written well, understandable without learning a specific notation. However, natural language is also subject to interpretation and ambiguity. Sometimes a formal notation is used to capture software requirements, which eliminates ambiguity and opens up possibilities for software analysis, but the syntax required is generally opaque to the untrained eye. In addition, formal software requirements can be hard to relate to the real world which is often much messier than the formal realm can easily capture.
CLEAR's approach attempts to combine the best of both worlds by defining a formal language that can generally be read as English, but in fact conforms to a traditional language grammar~\cite{arcosCLRTechReport}. A set of CLEAR requirements can generally be read and understood without any experience with the notation. The phrases and constructs available in CLEAR have been carefully considered to be as unambiguous as possible, which acts as a strictly-enforced style guide on requirements. Because it is a formal language, it can be parsed and analyzed without guessing at the intent or meaning of the requirements.
\pagebreak[1]
\paragraph{Single expressions of intent.}
For example, consider two requirements for a microwave:
\includegraphics[width=\linewidth]{figures/CLEAR_microwave_example.png}
Each requirement captures a single specific intent for the behavior of the microwave: when cooking, the microwave should be heating up the food. When the door is open, the magnetron must not be active. Both are reasonable expectations, but when considered together they are contradictory. If the door is opened while cooking, one requirement says to continue cooking while the other says to stop. In a large set of requirements, such conflicting requirements may be far apart in the document, making it difficult to spot the inconsistency. Text2Test performs this analysis and produces an error.
The ability to capture individual intents and consider them both individually and in combination allows for thinking about the software in a different way than programming. Much like a developer writing a test case may think of things that she did not consider when writing the implementation, defining the behavior of a program in individual requirement statements is a different thought process than writing code. By definition, a program operates on the complete input space; that is, for any possible input the code will respond in one way or another, even if that particular input was not considered by the author of the code. Requirements, on the other hand, just specify intended behavior. If there is a gap in the input space, it will become apparent that this behavior needs to be defined. Section~\ref{sec:clear-generic-properties} describes the analyses that are performed on CLEAR requirements to find inconsistencies (conflicts) among requirements as well as gaps.
Text2Test is the name of the tool which takes CLEAR files as input and parses, processes, and analyzes the requirements, producing reports and test cases. Each requirement is transformed into a data flow diagram. If multiple requirements define values for a particular output, these results are combined and checked to ensure that no contradictions exist (i.e., two requirements that prescribe different outputs under the same conditions). Text2Test can also analyze the conditions specified by the requirements for a particular output and produce a warning if there are inputs that have no applicable requirement. Text2Test is described in more detail in Section~\ref{sec:t2t-overview}.
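As a sketch of this kind of conflict detection, consider representing each requirement as a guard predicate over the input state plus a prescribed output value, and exhaustively checking a small boolean input space for a state where two requirements apply with contradictory outputs. This representation is our simplification for illustration, not Text2Test's actual algorithm.

```java
import java.util.List;
import java.util.function.Predicate;

// Simplified sketch of a requirements consistency check: two requirements
// conflict if some input state satisfies both guards while they prescribe
// different output values. Illustrative only, not Text2Test's algorithm.
public class ConsistencyCheck {
    public record Req(String name, Predicate<boolean[]> guard, boolean output) {}

    // Enumerate the (small) boolean input space and look for a state where
    // two requirements both apply but prescribe contradictory outputs.
    public static String findConflict(List<Req> reqs, int numInputs) {
        for (int bits = 0; bits < (1 << numInputs); bits++) {
            boolean[] state = new boolean[numInputs];
            for (int i = 0; i < numInputs; i++) state[i] = ((bits >> i) & 1) == 1;
            for (Req r1 : reqs) {
                for (Req r2 : reqs) {
                    if (r1.guard().test(state) && r2.guard().test(state)
                            && r1.output() != r2.output()) {
                        return r1.name() + " conflicts with " + r2.name();
                    }
                }
            }
        }
        return null; // no conflict found
    }
}
```

Applied to the microwave example, with inputs (cooking, door open) and the magnetron state as output, the check flags the state where the door is open while cooking.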
\paragraph{CLEAR Notation Features.}
CLEAR provides requirements constructs to specify a variety of behaviors in a precise manner, while also providing a way to build abstractions and factor the requirements space for complex behaviors. The following are the salient features of the notation:
\begin{itemize}
\item Each requirement is a declarative statement that can be independently reviewed and tested; requirements are additive --- adding requirements incrementally completes the required behaviors
\item CLEAR allows specification of the following aspects of behaviors:
\begin{itemize}
\item State-based, event-triggered, time-triggered behaviors and combinations thereof.
\item Algorithmic aspects using combination of mathematical, relational, and Boolean expressions, sets/selections, interval arithmetic, interpolation tables.
\item CLEAR includes a large library of math functions including numeric manipulation, trigonometric, exponential, filters, etc.
\end{itemize}
\item Constructs for creating abstractions and for factoring of complex behaviors:
\begin{itemize}
\item Tabular format for generalized truth tables, precedence tables (as alternative to while requirements and clauses) and interpolation tables
\item Creating common definitions of specific behaviors or input conditions referenced from several requirements
\item Creating complex types of objects and conditions/behaviors associated with them
\item Creating intermediate abstractions (states, variables) to create a framework for the specification of complex behaviors
\end{itemize}
\item Support for ontic concepts and types:
\begin{itemize}
\item Basic ontic concepts of real-world time intervals, events, sensor input validity specification, and units.
\item Higher-level concepts such as XYZ Vectors of position, velocity, vector difference, concept of moving average of sensor values, etc.
\item Future extensions: Ontic type system around basic physical (and cyber) concepts such as position, velocity, time, temperature, pressure, angles and attributes of units and relationships applied to them. Users will create application-specific subtypes.
\end{itemize}
\end{itemize}
\paragraph{CLEAR Semantics.}
The CLEAR semantics provide a number of concepts and structures useful for writing software requirements:
\begin{itemize}
\item Objects have (discrete or continuous) time-varying values
\item Conditions are predicates over the values of objects at a given time
\item Events are changes in conditions
\item Actors are systems or components
\item Functions are computed on object values at a given time or over a time interval
\item Responses are functional updates to values of internal or output objects of an actor
\item Responses can be condition-based, event-triggered, time-triggered (or combination): CLEAR is agnostic about the Model of Computation (MoC)
\item Functions and structural aspects
\begin{itemize}
\item Algorithmic aspects using combination of mathematical, relational, and Boolean expressions, sets/selections, interval arithmetic, generalized truth tables, interpolation tables.
\begin{itemize}
\item CLEAR includes a large library of math functions (including transcendentals, filters, integrators)
\end{itemize}
\item Type system with support for ontic types (more future work)
\item Factoring of complex behaviors into multiple definitions/requirements
\end{itemize}
\end{itemize}
\paragraph{CLEAR: Structure of Requirements.}
\begin{figure}[!htb]
\centering
\includegraphics[width=\linewidth]{figures/CLEAR_req_types.png}
\caption{Requirement Types/Structure in CLEAR}
\label{fig:clear-req-types}
\end{figure}
CLEAR requirements can take one of a few general forms which are fully described in Figure \ref{fig:clear-req-types}. If the system must do something unconditionally, a ``shall" statement can simply specify the output and its value. Responses to events, such as an input crossing a threshold or changing to a particular value, use the word ``when" followed by the condition and then the response to that event. Variables can be marked as state variables, and requirements that apply in a particular state can be specified using a ``while" statement. If the software can be configured with features enabled or disabled, the word ``where" is used to specify that a requirement only applies when a particular feature is enabled. Exceptional or abnormal conditions can be specified with the word ``if" instead of ``when", overriding other requirements even if they would normally apply. By giving ``if" statements priority, requirement authors can write the majority of requirements to deal with nominal cases without having to specify a lack of error in every single requirement, and only need to address the exceptional cases in ``if" statements. Finally, many of these features can be composed together in various configurations.
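The precedence of ``if'' requirements over nominal ones can be sketched as a two-tier resolution: exceptional requirements are consulted first, and nominal requirements apply only when no exceptional requirement fires. This encoding is our reading of the priority rule, not CLEAR's formal semantics.

```java
import java.util.List;
import java.util.Optional;
import java.util.function.Predicate;

// Sketch of the "if" precedence rule: exceptional ("if") requirements override
// nominal ("when"/"while") requirements whenever their guards fire.
// Illustrative only; this is not CLEAR's formal semantics.
public class IfPrecedence {
    public record Rule(Predicate<double[]> guard, String response) {}

    public static Optional<String> resolve(List<Rule> exceptional,
                                           List<Rule> nominal,
                                           double[] state) {
        for (Rule r : exceptional) {
            if (r.guard().test(state)) return Optional.of(r.response());
        }
        for (Rule r : nominal) {
            if (r.guard().test(state)) return Optional.of(r.response());
        }
        return Optional.empty(); // a gap: no requirement covers this input
    }
}
```

With this scheme, the bulk of the requirements can describe nominal behavior (e.g., a thermostat rule over state[0] as temperature) without mentioning error conditions, while a single ``if'' rule (e.g., over state[1] as a sensor-invalid flag) overrides them all.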
\section{Lightweight Code Analysis Tools}
The University of Washington research team led by Professor Michael Ernst has developed a suite of
lightweight analysis tools for Java. These tools are used extensively in industry for checking
large codebases for bugs and software quality assurance. The three main tools that we integrate in
the DesCert project are
\begin{enumerate}
\item The Randoop tool for synthesizing unit tests
\item The Daikon tool for the dynamic detection of program assertions
\item The Checker Framework for pluggable and extensible typechecking
\end{enumerate}
These tools can be used for finding bugs as well as establishing generic and specific properties of
code components.
\subsection{Randoop: Feedback-Directed Random Test Generation}
Randoop synthesizes unit tests for Java classes in the JUnit format~\cite{pacheco2007randoop}\@.
It uses feedback-directed random test generation to build and compose sequences of
constructor operations and method calls to explore portions of the state space that
na\"{\i}ve test generation might not cover. The unit tests synthesized by Randoop
can be used to uncover and fix bugs or as regression tests that can be rerun whenever
the software changes. Randoop has been applied to large codebases and has successfully
uncovered bugs in widely-used libraries including Sun's and IBM's JDKs and a core .NET component.
Figure~\ref{fig:randoop} shows an example of an error-revealing unit test generated by Randoop
in the OpenJDK library. This test demonstrates the construction of a set \texttt{s1} on which
the generated assertion for the reflexivity of the equality test fails. The unit test requires
the construction of a \texttt{list} object
and a \texttt{TreeSet} object (containing the list object) to which we apply the reflexive equality test.
\begin{figure}
\centering
\begin{smallsession}
// This test shows that the JDK collection classes
// can create an object that is not equal to itself.
@Test
public static void test1() {
  LinkedList list = new LinkedList();
  Object o1 = new Object();
  list.addFirst(o1);
  // A TreeSet is an ordered collection. According to the API
  // documentation, this constructor call should throw a
  // ClassCastException because the list element is not Comparable. But
  // the constructor silently (and problematically) accepts the list.
  TreeSet t1 = new TreeSet(list);
  Set s1 = Collections.synchronizedSet(t1);
  // At this point, we have successfully created a set (s1)
  // that violates reflexivity of equality: it is not equal
  // to itself! This assertion fails at run time on OpenJDK.
  org.junit.Assert.assertEquals(s1, s1);
}
\end{smallsession}
\caption{A Randoop Unit Test Example}
\label{fig:randoop}
\end{figure}
Tests can reveal errors by failing generated assertions, violating explicit contracts, or throwing unexpected
exceptions. The classification of test results as errors or expected behavior is under user control since
in many cases, the thrown exceptions might be appropriate given the inputs. The generated tests can also be minimized
to avoid redundant or irrelevant steps.
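A toy version of the feedback-directed loop is sketched below, using \texttt{java.util.ArrayDeque} as a stand-in class under test: each random extension of the call sequence is kept only if it executes without throwing, so later sequences build on known-good prefixes. This is a sketch of the idea, not Randoop's actual algorithm.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Toy illustration of feedback-directed random test generation in the style
// of Randoop: randomly extend a call sequence and keep only the extensions
// that execute without throwing. Not Randoop's actual algorithm.
public class FeedbackDirected {
    // Operations over an ArrayDeque used as the class under test.
    static final String[] OPS = {"push", "pop", "peek"};

    public static List<String> generate(int steps, long seed) {
        Random rng = new Random(seed);
        List<String> sequence = new ArrayList<>();
        ArrayDeque<Integer> receiver = new ArrayDeque<>();
        for (int i = 0; i < steps; i++) {
            String op = OPS[rng.nextInt(OPS.length)];
            try {
                switch (op) {
                    case "push" -> receiver.push(rng.nextInt(100));
                    case "pop" -> receiver.pop();   // throws on an empty deque
                    case "peek" -> receiver.peek(); // returns null on an empty deque
                }
                sequence.add(op); // feedback: keep only ops that did not throw
            } catch (RuntimeException e) {
                // discard this extension; the sequence prefix remains valid
            }
        }
        return sequence;
    }
}
```

A consequence of the feedback is that generated sequences respect the receiver's state: a \texttt{pop} is never kept unless more \texttt{push}es than \texttt{pop}s precede it.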
As described in Chapter~\ref{chap:baseline}, we have developed a Baseline DesCert plug-in for Randoop to
collect the error-revealing and regression tests for Java class libraries. These test suites are
maintained as evidence artifacts along with the test analysis results.
\subsection{Daikon: Dynamic Detection of Program Invariants}
Unit tests such as those generated by Randoop can be used to generate traces for Java programs. These traces
can be mined for assertions that hold at specific program points, including loop invariants and
preconditions and postconditions for method calls. The Daikon tool dynamically analyzes program behaviors
to automatically learn useful assertions~\cite{ErnstPGMPTX2007}\@. It is important to remember that these assertions might not be
valid, since they only hold on the observed test runs, but in many cases the information generated by Daikon is
quite helpful. Consider, for example, a Java class that implements a stack with \texttt{push},
\texttt{pop}, \texttt{top}, \texttt{topAndPop}, \texttt{isEmpty}, \texttt{isFull}, and \texttt{makeEmpty} methods,
using an array representation for the stack content and a \texttt{topOfStack} slot that is \texttt{-1} when the
stack is empty. Daikon can detect that the array component is never equal to null and that the
\texttt{topOfStack} slot is always at least \texttt{-1}\@. It can also detect that the size of the array
is always one more than the value of the \texttt{topOfStack} field. These are indeed valid invariants.
Daikon operates by generating and testing a number of assertions. The generation of assertions is
smart and based on instrumentation to track the interaction between values in a program.
For instance, it will check if an array is sorted when the elements in the array look comparable,
and the sortedness property appears to be relevant to the behavior of the program. Once a set of useful putative
invariants have been identified, other tools, including the Checker Framework can be used to verify if
the invariant is in fact valid.
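The generate-and-test scheme can be sketched as follows for the stack example above; the three candidate invariants and the sample encoding are our illustrative simplification of Daikon's much richer grammar of candidates.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of Daikon-style invariant detection: record variable values at a
// program point across test runs, then keep the candidate assertions that
// hold on every sample. Illustrative only; Daikon's instrumentation and
// candidate grammar are far richer.
public class InvariantMiner {
    // Each sample is {topOfStack, arrayLength} captured after a stack operation.
    public static List<String> survivingInvariants(List<int[]> samples) {
        boolean topAtLeastMinusOne = true; // topOfStack >= -1
        boolean lenConstant = true;        // array.length never changes
        boolean topBelowLen = true;        // topOfStack < array.length
        int firstLen = samples.get(0)[1];
        for (int[] s : samples) {
            if (s[0] < -1) topAtLeastMinusOne = false;
            if (s[1] != firstLen) lenConstant = false;
            if (s[0] >= s[1]) topBelowLen = false;
        }
        List<String> kept = new ArrayList<>();
        if (topAtLeastMinusOne) kept.add("topOfStack >= -1");
        if (lenConstant) kept.add("array.length is constant");
        if (topBelowLen) kept.add("topOfStack < array.length");
        return kept;
    }
}
```

A putative invariant that survives all runs is then handed off to a static tool, such as the Checker Framework, for actual verification.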
\subsection{The Checker Framework: Pluggable Extensible TypeChecking}
Java is a strongly typed object-oriented programming language, but a number of common errors
are not caught by the Java compiler. Java 8 adopted the Type Annotation Specification,
which allows annotations to be written on any use of a type. The Checker Framework builds on the
Type Annotation Specification to support pluggable typecheckers for additional program properties~\cite{papi2008practical,dietl2011building}\@.
One such example is the \emph{nullness} checker. In the program of Figure~\ref{fig:nullness}, the \texttt{System.out.println} method is
invoked on \texttt{myObject.toString()}, where \texttt{myObject} may be null; the nullness checker flags this inconsistency.
Fields and variables can be annotated with \texttt{@NonNull} types, but these can also be derived by the
type inference tool, which saves the labor of hand-annotating the code with types.
\begin{figure}[htb]
\begin{smallsession}
public class NullnessExample {
  public static void main(String[] args) {
    Object myObject = null;
    if (args.length > 2) {
      myObject = new Object();
    }
    System.out.println(myObject.toString());
  }
}
\end{smallsession}
\caption{Checker Framework Example: NullnessExample.java}
\label{fig:nullness}
\end{figure}
The Checker Framework has been extended with a number of other checkers. The Regex checker ensures
that regular expressions strings are actually well-formed regular expressions. The Taint checker
ensures that tainted user input does not pollute the input to sensitive operations such as a
database query on a malicious query string. The Encryption checker uses the \texttt{@Encrypted}
annotation tag to ensure that sensitive data is encrypted when placed on a publicly visible
channel. The Checker Framework ships with over twenty-five custom type checkers for properties such as
initialization, resource leaks, locks, bounds of index variables, purity (absence of side-effects),
and units of measurement. There are also around twenty checkers that have been developed by third parties
spanning information flow, determinism, immutability, and typestate. Many of these are examples of
Ontic type systems that are relevant to the DesCert project.
The Checker Framework was applied to the BeepBeep3 runtime monitoring library~\cite{halle2017event}, which is
used to monitor runtime safety within our ArduCopter AFS subsystem. The analysis revealed a number of issues.
For example, it revealed an inconsistency in the annotation of the \texttt{getProvenanceTree} method, which is
declared to return a non-null result in one file but is defined to return a \texttt{null} result in another.
These flaws have been reported to the developers of the BeepBeep3 tool.
\chapter{Introduction}\label{chap:introduction}
\setcounter{page}{0}
\pagenumbering{arabic}
\pagestyle{plain}
\input{introduction}
\chapter{The DesCert Approach}\label{chap:approach}
\input{approach}
\chapter{Phase 1 Challenge Problem: Advanced Fail Safe (AFS) Case Study}\label{chap:challenge}
\input{challenge}
\chapter{DesCert Evidence Ontology}\label{chap:ontology}
\input{ontology}
\chapter{Tools for Evidence Generation}\label{chap:tools}
\input{tools}
\chapter{Continuous Assurance Workflow: Baseline DesCert} \label{chap:baseline}
\input{baseline}
\chapter{DesCert Evidence Generation}\label{chap:evidence}
\input{evidence}
\chapter{Conclusions}\label{chap:conclusions}
\input{conclusions}
\bibliographystyle{alpha}
\section{DesCert Enables Property-Based Assurance}
\label{sec:property-desc}
Figure~\ref{fig:assurance-driven-development} illustrates our approach to assurance, where tools are used to perform verification on artifacts produced by the development process. Both testing-based methods (shown on the left) and formal methods (shown on the right) are used. To enable property-based assurance, the formal methods are based on establishing properties of a particular development artifact (or a set of artifacts) and then proving them using a tool.
Figure~\ref{fig:arch_req_property_flow} shows the flow of evidence for capturing properties of architecture and requirements and using tools to perform analysis, resulting in proofs that the properties hold. In this figure, development or verification \emph{activities} are indicated by grey boxes, \emph{development} artifacts are denoted by blue boxes, and \emph{verification} artifacts are denoted by green shapes. A beige rhombus denotes the \emph{tool} used to perform an activity. (Note: other aspects of system development and verification, e.g., code development and testing, are not shown in this figure.)
\begin{figure}[!htb]
\centering
\includegraphics[scale=.7]{figures/Arch_req_property_flow.png}
\caption{Property Specification and Analysis for Architecture and Requirements}
\label{fig:arch_req_property_flow}
\end{figure}
The system development starts with the system ConOps which are successively developed into a sequence of artifacts including system requirements, system architecture, software high-level requirements, software low-level requirements, and source code. At each stage of this successive development, it is essential to assure that a development artifact satisfies certain \emph{properties} to meet the following types of assurance objectives:
\begin{itemize}
\item The artifact complies with a parent artifact from which it is derived/refined. E.g.: software high-level requirements comply with system requirements and system architecture.
\item The artifact mitigates certain specific hazards. E.g., speed never exceeds 120 miles/hr; the copter doesn't run out of battery power in the air (it would otherwise fall to the ground and cause a hazard).
\item The artifact's specification doesn't exhibit any abnormal behavior that can cause a hazard. E.g.: source code doesn't contain any null-reference exception.
\item The artifact's specification satisfies certain properties that support generic assurance claims/arguments. E.g.: communication messages are delivered in order, requirements are consistent with each other.
\end{itemize}
To this end, we have defined the ontology for specification and analysis of properties, giving a first-class status to properties and their relationships, as described below.
\paragraph{Generic and Specific Properties.}
We classify properties into two broad categories: \textit{Generic Properties} and \textit{Specific Properties}:
\begin{itemize}
\item Generic Property: A generic property is a declaration that an artifact (\textit{scope} of the property) must satisfy some general desired characteristics related to verification objectives, or must exhibit absence of certain generic defects that can cause hazards. Generic properties are established for the type of development artifact (e.g., requirements, architecture, design, code) and are automatically applied to that particular type of artifact regardless of the application. Figure~\ref{fig:arch_req_property_flow} provides examples of generic properties for architecture and requirements. Other examples: code doesn't exhibit numeric overflow, code doesn't exhibit null-reference-pointer exceptions, etc.
\item Specific Property: A specific property is, by definition, application specific. It declares that an artifact (\textit{scope} of the property) must satisfy certain application-specific behaviors and must not violate certain application-specific constraints to prevent hazards. Examples: speed never exceeds 120 miles/hr; the thermostat always turns heat on when the temperature is below 50 degrees F. Figure~\ref{fig:specific_prop_req} shows the evidence flow for the specification and checking of specific properties of requirements.
\end{itemize}
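A specific property such as the speed bound above can be read as an executable predicate over a trace of system states; the trace representation here is our simplification for illustration.

```java
// Sketch of a specific property as an executable predicate: the 120 miles/hr
// speed bound from the example above, checked over a trace of speed samples.
// The trace representation is a simplification for illustration.
public class SpecificProperty {
    static final double MAX_SPEED_MPH = 120.0;

    // Returns the index of the first violating sample, or -1 when the
    // property "speed never exceeds 120 miles/hr" holds on the trace.
    public static int firstViolation(double[] speedTrace) {
        for (int i = 0; i < speedTrace.length; i++) {
            if (speedTrace[i] > MAX_SPEED_MPH) return i;
        }
        return -1;
    }
}
```

A model checker proves such a property over all reachable states rather than over one recorded trace, but the predicate being checked is the same.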
\begin{sidewaysfigure}[!htb]
\centering
\includegraphics[scale=.65]{figures/Sally_modelchecking_property_flow.png}
\caption{Specification and Checking of Specific Properties of Requirements}
\label{fig:specific_prop_req}
\end{sidewaysfigure}
As we have stated earlier, both generic and specific properties must be verified at each level of the development artifacts in order to meet different parts of the set of all assurance objectives. The type and technique of analysis used for each of these properties as well as the assurance objective they relate to are significantly different.
\begin{figure}[!htb]
\centering
\includegraphics[scale=.7]{figures/main_property_schema.png}
\caption{Ontology Schema for Properties}
\label{fig:main_property_schema}
\end{figure}
Figure~\ref{fig:main_property_schema} shows the ontology schema for properties in UML notation, which derives from the base W3C PROV schema described in Section~\ref{sec:ontology-desc}. The Generic Property class has a \emph{propertyScope} relationship to an ENTITY (e.g., a requirement set, architecture, code) over which the property must hold (note: Specific Property inherits all relationships and attributes of Generic Property). An ANALYSIS activity uses a property to try to prove the property against the entity in its scope. A property can mitigate a potential hazard (via the \emph{mitigates} relationship), satisfy a particular verification objective, and/or have a basis in a higher-level development artifact from which the property is inferred. For example, a property to be verified on code could be inferred from software high-level requirements.
\section{DesCert Ontology Description}
\label{sec:ontology-desc}
This section provides a description of the salient aspects of DesCert Ontology. The DesCert ontology is based on the ontology classes in RACK (Rapid Assurance Curation Kit) database\footnote[3]{GE RACK: \url{https://github.com/ge-high-assurance/RACK}} being used in the ARCOS program. RACK in turn derives from the core \emph{provenance} model PROV defined by W3C.
Figure~\ref{fig:prov_core} depicts the core structures in PROV. An Entity captures a thing in the world (in a particular state) --- e.g., a particular version of a development or verification artifact. The entity was derived from some other entity, and was generated by an Activity that used other entities. An Agent (e.g. a person or tool) was associated with the activity, and the entity that was generated by the activity was attributed to that agent.
\begin{figure}[!htb]
\centering
\includegraphics[scale=.8]{figures/prov_core.png}
\caption{The Core Structures in W3C PROV}
\label{fig:prov_core}
\end{figure}
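A minimal executable reading of these core PROV relationships, assuming single-valued maps for simplicity (the real PROV relations are multi-valued) and using names of our own choosing rather than the RACK schema, might look like:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of the W3C PROV core relationships described above:
// entities are generated by activities, activities use entities, and
// agents are associated with activities. Single-valued maps are a
// simplification; names here are illustrative, not the RACK schema.
public class ProvCore {
    public final Map<String, String> wasGeneratedBy = new HashMap<>();    // entity -> activity
    public final Map<String, String> used = new HashMap<>();              // activity -> entity
    public final Map<String, String> wasAssociatedWith = new HashMap<>(); // activity -> agent

    // wasAttributedTo is derivable: the agent of the generating activity.
    public String wasAttributedTo(String entity) {
        String activity = wasGeneratedBy.get(entity);
        return activity == null ? null : wasAssociatedWith.get(activity);
    }
}
```

For instance, a Sally model generated by a model-generation activity that used a requirement set, with Text2Test as the associated agent, is attributed to Text2Test.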
\subsection{Ontology for Checking Specific Properties against a Model of Requirements}
As mentioned in Section~\ref{sec:property-desc}, \textit{properties} are central to the DesCert evidence flow; Figure~\ref{fig:main_property_schema} depicts the basic concept of Generic and Specific Properties. Using these basic concepts, Figure~\ref{fig:specific_prop_req} shows the evidence flow for the specification and checking of specific properties of requirements using the Sally tool. The ontology schema for this usage is shown in Figure~\ref{fig:sally_model_checking_ontology}. In the upper left quadrant of this figure is the \emph{SallyTransitionSystemModel}, which is generated by another activity (shown fully in Figure~\ref{fig:sally_model_gen_ontology}). The Sally model is completely based upon the \emph{RequirementSet}, as denoted by the \emph{wasDerivedFrom} relationship, and is expressed in the \emph{SallyNotation}. The \emph{SallyModelChecking} activity uses the Sally model and a SpecificProperty to attempt to prove that the property holds on the Sally model and, by extension, on the RequirementSet. The results of this analysis are captured in ANALYSIS\_OUTPUT.
\begin{sidewaysfigure}[!htbp]
\centering
\includegraphics[scale=.65]{figures/sally_model_checking_ontology.png}
\caption{Ontology for Checking Properties against the Sally Model of Requirements}
\label{fig:sally_model_checking_ontology}
\end{sidewaysfigure}
\begin{figure}[!htb]
\centering
\includegraphics[scale=.6]{figures/sally_model_gen_ontology.png}
\caption{Ontology for Generation of Sally Model from Requirements}
\label{fig:sally_model_gen_ontology}
\end{figure}
Figure~\ref{fig:sally_model_gen_ontology} shows the ontology for the automatic generation of the \emph{SallyTransitionSystemModel} from a \emph{RequirementSet}. At the center of this figure is the activity that invokes the \emph{Text2Test} tool and passes it parameters that contain the name of the RequirementSet, the execution period, and any requirements and dictionary files referenced by the RequirementSet. This produces the SallyTransitionSystemModel, which traces to the RequirementSet via the \emph{wasDerivedFrom} relationship. The SallyTransitionSystemModel is then used for checking SpecificProperties against it, as shown in Figures~\ref{fig:specific_prop_req} and \ref{fig:sally_model_checking_ontology}.
\subsection{Ontology for Checking Generic Properties of Requirements}
Figure~\ref{fig:arch_req_property_flow} depicts the checking of generic properties of requirements and the evidence flow. Figure~\ref{fig:req_analysis_ontology} shows the ontology for this evidence. The \emph{Text2Test} tool is invoked by the \emph{RequirementAnalysis} activity to analyze a \emph{Requirementset} against several predefined \emph{ClearGenericProperties}. The requirements contained in the Requirementset are expressed in the \emph{ClearNotation}, which is supported by user-guide and semantics documents. The ClearGenericProperties trace to, and satisfy parts of, the DO-178C objectives. A high-level view of these properties is shown in Figure~\ref{fig:arch_req_property_flow}; details of the properties are described in Section~\ref{sec:clear-generic-properties}.
\begin{figure}[!htb]
\centering
\includegraphics[scale=.65]{figures/CLEAR_req_analysis_ontology.png}
\caption{Ontology for Checking Generic Properties of Requirements}
\label{fig:req_analysis_ontology}
\end{figure}
\subsection{Ontology for Test Generation from Requirements}
Figure~\ref{fig:testgen-evidence-flow} depicts the tool usage and evidence flow for generation of tests from software high-level requirements (HLR) and the execution of those tests. Software HLR for each software component are first developed using system requirements and architecture as inputs; the architecture provides an embedding context of the component within the system. The Text2Test tool is used for the generation of \emph{Test Oracles} and \emph{Tests} from the HLR.
\begin{sidewaysfigure}[!htbp]
\centering
\includegraphics[scale=.7]{figures/testgen_evidence_flow.png}
\caption{Evidence Flow for Test Generation from Requirements}
\label{fig:testgen-evidence-flow}
\end{sidewaysfigure}
Figure~\ref{fig:testgen-ontology} depicts the ontology for generation of tests from software high-level requirements. Test Oracles and Tests are generated by this activity. A Test Oracle traces to a requirement (and to a specific operator within that requirement) and is based upon the CLEAR Testing Theory described in more detail in Section~\ref{sec:clear-testing-theory}.
\begin{figure}[!htb]
\centering
\includegraphics[scale=.45]{figures/testgen_ontology.png}
\caption{Ontology for Test Generation from Requirements}
\label{fig:testgen-ontology}
\end{figure}
\section{AFS RADL Architecture Evidence Generation}\label{sec:radlEvidence}
As mentioned at the beginning of Section~\ref{chap:evidence}, in this Section~\ref{sec:radlEvidence} we focus on how the DesCert AFS-related evidence associated with generic properties of architectural RADL specifications is created; this is illustrated in the top layers, left to right, of Figure~\ref{fig:descert_evidence_generated}.
We have described the RADL architectural language specifications, which cover both the logical and physical parts/layers of the system, along with the methodology to perform software builds using Radler and the associated architectural analysis, in Section~\ref{sec:RadlerMain}. We have also introduced the ArduCopter system architecture, and where the AFS subsystem (software component) resides in the larger system, in Section~\ref{sec:SySoArch}. The specific RADL specification for the AFS subsystem is captured in \emph{afs.radl}
in the DesCert Github repository\@.\footnote{Contact the authors for permission to access the repository.}
Figure~\ref{fig:afs_node_topic} shows the nodes of the AFS subsystem and the topics over which they communicate; the subsystem consists of two nodes. The \code{AFS Function \emph{(afs\_function)}} node both publishes to and subscribes from the \code{AFS Gateway \emph{(afs\_gateway)}} node. \code{AFS Gateway} acts as a bridge between \code{AFS Function} and MAVROS, which in turn interfaces with the ArduCopter flight code through MAVLink. \code{AFS Gateway} captures all messages from ArduCopter, extracts battery information, raw GPS position information, geo-fence breach events, other ArduCopter health information, GCS heartbeat information, etc., and publishes these as topics to the \code{AFS Function} subscriber. The \code{AFS Function} node executes its step function with a period of 100 milliseconds and publishes to 4 recovery-action subscriber ROS nodes: (i) AFS Geofence Breach Recovery, for 3 different kinds of breach (altitude, range, polygon); (ii) AFS GCS Comm Loss Recovery; (iii) AFS GPS Lock Loss Recovery; and (iv) AFS Battery Insufficiency Recovery. The \code{AFS Function} node also publishes flight-control commands (e.g., a change in flight-control mode or a switch of sensor information) as topics to the \code{AFS Gateway} subscriber, which translates the commands appropriately and sends them via MAVROS to ArduCopter over the MAVLink transport (messages may need to be sent to ArduCopter repeatedly until their receipt is acknowledged).
\begin{figure*} [hbtp!]
\centering
\includegraphics[width=0.7\linewidth]{figures/afs_rqt_graph.png}
\caption{Nodes (in ellipse) and topics (in rectangle) of the AFS subsystem}
\label{fig:afs_node_topic}
\end{figure*}
We next describe how the different \emph{generic properties associated with architecture} for the AFS RADL specification, illustrated in the top layer of Figure~\ref{fig:descert_evidence_generated}, are proven using architectural analysis.
\vspace{0.1in}
\shadowbox{
\begin{minipage}{0.9\textwidth}
\begin{center}
\emph{Architecture Generic Property 1:}
\end{center}
The software architecture of the ArduPilot Advanced Fail Safe (AFS) subsystem conforms to, and is restricted to, the quasi-periodic multi-rate computational model.
\end{minipage}
}
\vspace{0.1in}
\emph{Architecture Generic Property 1 met rationale}: The ArduPilot Advanced Fail Safe (AFS) subsystem includes afs\_gateway and afs\_function, which are specified in afs.radl as discussed above. The Radler build process uses the afs.radl specification, which includes multiple rates for the different computational nodes (tasks running at a fixed period) and communicating topics/messages between publishers and subscribers over channels with guaranteed delivery and latency/timing/delay bounds. Radler compiles the software source code of afs\_gateway and afs\_function (the RADL nodes of AFS) and then executes the code within a ROS environment with a period that may vary but is bounded (i.e., the period varies between a minimum and a maximum clock value).
\vspace{0.1in}
\shadowbox{
\begin{minipage}{0.9\textwidth}
\begin{center}
\emph{Architecture Generic Property 2:}
\end{center}
The quasi-periodic computational model of any RADL architecture, and in particular of the ArduPilot Advanced Fail Safe (AFS) subsystem, has the following 5 properties:
\begin{enumerate}
\item Bounded processing latency for any message/topic between every pair of publisher and subscriber
\item Communication messages are delivered in order, i.e., no overtaking, under timing assumptions
\item Given a fixed buffer (including size 1), consecutive message losses can be bounded
\item Message loss can be eliminated with a bounded queue length, which can be used to appropriately provision the communication channel transporting a message/topic between publisher \& subscribers
\item Age and/or freshness of any message at any subscriber, in terms of time steps, is bounded.
\end{enumerate}
\end{minipage}
}
\vspace{0.1in}
\emph{Architecture Generic Property 2 met rationale}:
Analysis results, based on PVS proofs, are presented in paper~\cite{conf/memocode/LarrieuS14} and discussed in Section~\ref{sec:RadlerMain}.
\vspace{0.1in}
\shadowbox{
\begin{minipage}{0.9\textwidth}
\begin{center}
\emph{Architecture Generic Property 3:}
\end{center}
Radler validates platform/physical-layer performance when mapped and bound to logical-layer functions, as well as any other assumptions on the architectural model used in verifying the properties listed in \emph{Architecture Generic Property 2}. For any RADL specification, and in particular the ArduPilot Advanced Fail Safe (AFS) subsystem RADL specification, the following 4 properties always hold at run-time due to Radler run-time checks/validation:
\begin{enumerate}
\item No stale message is received after its latency and period
\item No message is ever received later than its timeout duration (latency plus period)
\item The system is healthy and has no failures
\item Max latencies, node periods, and execution times adhere to the RADL specification.
\end{enumerate}
\end{minipage}
}
\vspace{0.1in}
\emph{Architecture Generic Property 3 met rationale}:
Radler enforces, for each node, a code structure consisting of a state structure, an initialization function, a step function, and a finish function; it also generates the communication layer (glue code), the scheduler, the overall compilation script and configuration files, and builds an executable for each machine. Radler additionally instruments the code for runtime monitoring and validation of platform performance and assumptions. Please see the detailed rationale in paper~\cite{conf/memocode/LiGS15} and the discussion in Section~\ref{sec:RadlerMain}.
\vspace{0.1in}
\shadowbox{
\begin{minipage}{0.9\textwidth}
\begin{center}
\emph{Architecture Generic Property 4:}
\end{center}
The physical layer of a RADL architectural specification, and in particular the ArduPilot Advanced Fail Safe (AFS) subsystem physical layer, has robust time and space partitioning.
\end{minipage}
}
\vspace{0.1in}
\emph{Architecture Generic Property 4 met rationale}:
Robust space partitioning is based on the memory-partitioning strategy in the specification, depending on whether the two RADL nodes are in (1) the same virtual machine within a single RTOS, (2) two different virtual machines, each with its own RTOS, under the same hypervisor within a single machine, or (3) two machines.
Radler then ensures provisioning, respectively, of (1) a shared-memory ring buffer (intra-partition), (2) a ring buffer set up in a memory region shared between the two virtual machines with message-passing APIs (inter-partition), or (3) IP-based communication with queuing and sampling ports (inter-partition).
Robust time partitioning is achieved by specifying, in the RADL specification, the WCET of every step function (computation duration) within each RADL node; subsequently, the Radler tool is also able to generate an instrumented version of the software code and monitor the system with runtime checks for violations of timing properties such as maximum channel latencies, node periods, execution times, etc. The Radler tool also generates a local system log that can be analyzed and validated against the architecture specification. Such runtime checks are often the only way of validating the timing parameters assumed by the model. Please see the detailed rationale in paper~\cite{conf/memocode/LiGS15} and the discussion in Section~\ref{sec:RadlerMain}.
\section{Architecture Specification and Analysis using Radler}\label{sec:RadlerMain}
The Radler Architecture Definition Language decomposes the design into a logical architecture
based on a pub/sub quasi-periodic model of computation. Radler nodes execute periodically
and communicate with other nodes over bounded latency channels. We are using the MAVROS
capability to support a Radler architecture for the ArduPilot. In particular, the
ArduCopter Failsafe component will be implemented as an independent Radler node running
on a companion computer so that its failure modes are independent from the base platform.
The use of Radler also serves as a step toward the Phase 2 challenge problem where we have
to coordinate between multiple components while ensuring security properties.
We have defined a Radler architecture for the ArduCopter with the AFS module running on an
independent \emph{companion} computer. The AFS module uses MAVROS as the transport layer to subscribe to
messages from the ArduCopter module, the Ground Control Station (GCS), and the Remote Control (RC).
The AFS module receives status updates from the ArduCopter Base module and is able to change modes
and set certain flight parameters. It is also able to annunciate warnings to the GCS and RC Pilot.
We have enhanced Radler to integrate nodes that are implemented in Java using the Java Native Interface. We plan to use this capability
to define capabilities like logging and database access that leverage Java interfaces but are not
critical to the real-time responsiveness of the system.
We have developed a tutorial (both in the form of a video and of a use case in the Radler git repository) demonstrating Radler code generation and its execution on the SITL (software in the loop) simulator for the Arducopter AFS (advanced fail safe). In the tutorial, the Radler architecture consists of AFS gateway, battery, altitude, and log nodes communicating with Arducopter via MAVROS (a ROS-based extendable communication node) on the companion computer.
We constructed a SITL/MAVROS/Radler deployment in a virtual machine environment using Vagrant and provided a sequence to create a pre-built image for the Boeing/TA4 Evaluation. We are currently evaluating Java-based
runtime verification tools that we can use to monitor the behavior of the AFS module.
\subsection{The Radler Model of Computation}\label{sec:RadlerMoC}
A Model of Computation (MoC) specifies the execution of individual nodes in a distributed system
and their interaction through shared memory or message-passing channels.
Radler implements distributed systems within a publish/subscribe architecture with a
quasi-periodic model of computation. The nodes repeatedly execute a step function
with a minimum and maximum bound on the period between two successive executions.
The nodes communicate through topic channels with an associated message type.
Each topic has exactly one publisher node, but can have zero or more subscriber nodes.
Each node specifies a buffer size and a latency bound for the mailbox associated with
each of its subscription channels. On each execution step, a node reads its input
mailboxes, applies its step function to these inputs, and then sends the outputs to the
respective mailboxes for the topics on which it publishes.
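As an informal illustration (not the Radler implementation or its API), the read/step/publish cycle described above can be sketched in Python; the `Mailbox` and `Node` names are hypothetical:

```python
from collections import deque

class Mailbox:
    """Bounded mailbox for one subscription topic; the oldest message is
    dropped when the buffer is full (buffer size 1 keeps only the latest)."""
    def __init__(self, size=1):
        self.buf = deque(maxlen=size)
    def put(self, msg):
        self.buf.append(msg)
    def read(self):
        return self.buf.popleft() if self.buf else None

class Node:
    """One quasi-periodic node: read inputs, apply step, publish outputs."""
    def __init__(self, step_fn, inboxes, outboxes):
        self.step_fn, self.inboxes, self.outboxes = step_fn, inboxes, outboxes
    def execute_step(self):
        inputs = {topic: mb.read() for topic, mb in self.inboxes.items()}
        outputs = self.step_fn(inputs)
        for topic, msg in outputs.items():
            for mb in self.outboxes.get(topic, []):
                mb.put(msg)

# A publisher node that doubles its input; the subscriber mailbox has size 1.
sub_box = Mailbox(size=1)
pub = Node(lambda ins: {"t": 2 * ins["src"]},
           {"src": Mailbox()}, {"t": [sub_box]})
pub.inboxes["src"].put(21)
pub.execute_step()
print(sub_box.read())  # → 42
```

Note that with a buffer of size 1, a message that is not read before the next one arrives is superseded, which is exactly the message-loss scenario bounded by the theorems in the next subsection.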
\newcommand{\etime}[1]{\mathit{time}(#1)}
\newcommand{\evalue}[1]{\mathit{value}(#1)}
\newcommand{\estream}[1]{\mathit{stream}(#1)}
\newcommand{\rstream}[1]{\mathit{rstream}(#1)}
\newcommand{\estep}[1]{\mathit{step}(#1)}
\newcommand{\bstep}[1]{\mathit{bstep}(#1)}
There are several important theorems about RADL's multi-rate, quasi-periodic model of computation that have been proved
in PVS\cite{conf/memocode/LarrieuS14}. These include:
\begin{enumerate}
\item Bounded processing latency for message: A message sent by publisher node $P$ at time $t$ to subscriber
node $S$ on topic $A$ with maximal latency $L(A,S)$
is processed by node $S$ within time $t + L(A, S) + \mathit{max}(S)$ unless it is superseded by a
subsequent message from $P$\@. The maximal delay occurs when a message sent by $P$ at time $t$ is received
by $S$ at a time just after $\tau_S(i)$ and processed by $S$ at $\tau_S(i+1)$ where $\tau_S(i+1) - \tau_S(i) = \mathit{max}(S)$.
\item No overtaking, with timing assumptions: If $L(A, S) < \mathit{min}(P)$, then
messages are received by $S$ in the order sent by $P$ since the $i$'th message from $P$
will be received by $S$ before the $i+1$'st message is sent.
\item Bounded consecutive message loss: Assuming a buffer size of one, if $M$ is the smallest integer such that
$M \cdot \mathit{min}(P) > L(A, S) + \mathit{max}(S)$, then
at least one of $M$ consecutive messages is read by the subscriber (assuming no overtaking). The fastest rate at which messages
can be sent is $1/\mathit{min}(P)$, and the
maximum number of messages that can arrive in the
interval from $\tau_S(i)$ to $\tau_S(i+1)$ are those sent in the interval from $\tau_S(i) - L(A, S)$ to $\tau_S(i+1)$\@.
\item Bounded queue length to eliminate message loss: Under the same assumptions as the previous bullet,
with a queue length of $Q$, at most $M - Q$ consecutive messages are lost.
\item \label{prop:bounded_message} Bounded age $\mathit{MA}(m)$ of a message input $m$ used by subscriber $S$ in a step function:
$\mathit{MA}(m) < L(A, S) + \mathit{max}(P)$ (without overtaking)\@. The quantity $L(A, S) + \mathit{max}(P)$
is the biggest gap between $\rstream{P}(i)$ and $\rstream{P}(i+1)$\@.
\end{enumerate}
Bounds can also be computed for the scenario where overtaking is possible~\cite{conf/memocode/LarrieuS14}\@.
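As a quick illustration of how these bounds combine, the following Python sketch computes the quantities from theorems 1, 3, 4, and 5 for hypothetical timing parameters (all numbers are illustrative; `min_P`, `max_P`, `max_S`, and `L` stand for $\mathit{min}(P)$, $\mathit{max}(P)$, $\mathit{max}(S)$, and $L(A,S)$):

```python
import math

def processing_latency_bound(L, max_S):
    """Theorem 1: a message sent at time t is processed by t + L + max(S)."""
    return L + max_S

def min_consecutive_window(L, max_S, min_P):
    """Theorem 3: smallest M with M * min(P) > L + max(S); with a buffer of
    size 1, at least one of every M consecutive messages is read."""
    return math.floor((L + max_S) / min_P) + 1

def max_consecutive_losses(L, max_S, min_P, Q):
    """Theorem 4: with a queue of length Q, at most M - Q consecutive
    messages are lost."""
    return max(min_consecutive_window(L, max_S, min_P) - Q, 0)

def message_age_bound(L, max_P):
    """Theorem 5: the age of a message input at the subscriber is
    strictly below L + max(P) (assuming no overtaking)."""
    return L + max_P

# Hypothetical timing in milliseconds: publisher period in [20, 25] ms,
# subscriber period up to 100 ms, channel latency 10 ms.
M = min_consecutive_window(L=10, max_S=100, min_P=20)
print(M)                                         # smallest M with 20*M > 110
print(max_consecutive_losses(10, 100, 20, Q=3))  # losses left with queue 3
print(processing_latency_bound(10, 100))         # worst-case processing delay
print(message_age_bound(10, max_P=25))           # worst-case input age
```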
\subsection{The Radler Architecture Definition Language}\label{sec:RADL}
\input{radler_lang}
\input{radler_build}
\input{house_thermo}
\input{radler_java}
\subsection{Checking Generic Properties of Requirements}
\label{sec:clear-generic-properties}
Section~\ref{sec:property-desc} introduces the notion of \emph{properties} that we intend to analyze and divides them into two broad categories: \textit{Generic Properties} and \textit{Specific Properties}.
In the context of requirement specification, generic properties are those that are fundamental to any good requirements specification, irrespective of the system under consideration, such as consistency, completeness, verifiability, non-ambiguity, etc. We analyze the requirements for the following types of properties:
\begin{itemize}
\item \textbf{Consistency}: that ensures that the requirements are free of
\begin{itemize}
\item Conflicts across multiple requirements, such as two or more requirements specifying different values for the same output variable under overlapping input conditions. For example, the following requirements of a thermostat system are inconsistent since they specify conflicting values for Display\_Indicator when both their antecedents are true at the same time.
\textit{\\REQ 1: While HVAC\_Mode is `Heat Off', the Thermostat shall set Display\_Indicator to `White'.
\\REQ 2: While HVAC\_SetUp is true, the Thermostat shall set Display\_Indicator to `Blue'.}
\item Cycles in requirement data flow, without state transition
\end{itemize}
\item \textbf{Accuracy}: that assures that the requirements are accurately specified with specific logical and mathematical outcomes.
\item \textbf{Non-Ambiguity}: that assures that the requirements do not use combinations of natural-language phrases or mathematical expressions that can be interpreted ambiguously.
\item \textbf{Completeness (``internal'')} that identifies gaps in the requirements with respect to
\begin{itemize}
\item Input Gaps: Certain combinations of input conditions missing in requirement set. For example, the following requirement is considered input space incomplete if there are no other requirements that specify behaviours when HVAC\_Mode is `Heat ON'. \\\textit{REQ: While HVAC\_Mode is `Heat Off', the Thermostat shall set Display\_Indicator to `White'.}
\item Output Gaps: Certain output values not produced by any requirement in the requirement set. For example, the above requirement is considered output space incomplete if the value of Display\_Indicator is defined to be an enumeration of several colors, but there are no requirements to set values of Display\_Indicator other than `White'.
\end{itemize}
\item \textbf{Requirements Verifiability/Testability} that checks whether there is any
\begin{itemize}
\item state in a requirement that is not reachable
\item condition in a requirement that is not achievable
\end{itemize}
\item \textbf{Advanced properties}, based upon Ontic type information, that help detect:
\begin{itemize}
\item Simple ontic-type violations (e.g., adding altitude to runway length)
\item Inadequate margins (values, time) used in decisions, given the real-world nature of inputs; e.g., improper time-debouncing or hysteresis of sensor input values (margins are derived from constraints associated with the Ontic type of the sensor input)
\item \textbf{Mode Thrashing}, an advanced analysis based on margins, such as the system rapidly switching back and forth (metastable) between two states (general case: cycling through multiple states)
\end{itemize}
\end{itemize}
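To make the consistency and input-gap checks concrete, the following Python sketch brute-forces the thermostat example above over a small finite input space (the actual tool reasons symbolically with SMT solvers; the requirement encoding here is purely illustrative):

```python
from itertools import product

# Each requirement: (antecedent predicate over the inputs, output value).
# These mirror the thermostat REQ 1 / REQ 2 example in the text.
reqs = [
    (lambda hvac_mode, setup: hvac_mode == "Heat Off", "White"),
    (lambda hvac_mode, setup: setup is True,           "Blue"),
]
modes  = ["Heat Off", "Heat On"]
setups = [True, False]

conflicts, covered = [], set()
for mode, setup in product(modes, setups):
    fired = {out for cond, out in reqs if cond(mode, setup)}
    if len(fired) > 1:            # consistency: conflicting output values
        conflicts.append((mode, setup, fired))
    if fired:
        covered.add((mode, setup))

# completeness: input combinations handled by no requirement
input_gaps = [c for c in product(modes, setups) if c not in covered]
print(conflicts)   # ('Heat Off', True) fires both REQ 1 and REQ 2
print(input_gaps)  # ('Heat On', False) is handled by no requirement
```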
Section~\ref{sec:clear-generic-prop-evidence} provides instances of generic properties that were checked on the AFS requirements.
\section{Notation and Tools for Requirements Specification, Analysis, and Test Generation}
Figure~\ref{fig:ReqEvidenceGenOverview} shows a high-level overview of requirement specification, the types of requirements analysis performed, and the respective evidence generated.
\begin{figure}[!htb]
\centering
\includegraphics[width=\columnwidth]{figures/AFS_Evidence_Overview.PNG}
\caption{Requirements Analysis Evidence Generation Overview}
\label{fig:ReqEvidenceGenOverview}
\end{figure}
In essence, specifying requirements in the CLEAR notation allows us to use the Text2Test tool to automatically perform the following analyses and generate related evidence for assurance:
\begin{description}
\item [Formal Analysis]: The requirements are semantically analysed for consistency, completeness, and non-ambiguity using formal solvers, such as SMT solvers, at the backend. The results of the formal analysis are automatically captured in a tabular, easy-to-understand web-page format.
\item [Test Vectors]: The tool automatically generates requirements-based test vectors in a comma-separated textual file format. With the help of a test harness, these test cases can be used to verify and assure that the implementation indeed meets the requirements.
\item [Test Reports]: Along with the test cases, Text2Test also automatically generates various reports that quantify the verifiability of the requirements and provide a detailed explanation of which part of the requirement each test case checks. This serves as the traceability between the requirements and the test cases.
\item [Sally Model]: From the functional requirement specifications, the Text2Test tool automatically generates a Sally model that can be used by the Sally model checker to verify specific properties (such as safety properties).
\end{description}
\subsection{Sally Model Generation and Specific Property Verification}
\label{sec:sally}
Sally is a model checker for infinite-state systems described as transition systems. Sally has both a bounded model checking (BMC) engine and a k-induction (kind) engine for the verification of transition systems. Integrating Sally with Text2Test therefore provides the capability to verify specific properties, beyond the generic ones, for a requirement set, given that the requirement set can be modeled as a transition system. In our tool chain, the integration is achieved by translating the Text2Test internal model to a Sally model. This subsection introduces the model translation process, as well as the enhanced specific-property verification capability enabled by the tools integration and its extension.
\subsubsection{From Text2Test internal Semantic Synthesis Model to Sally model}
\paragraph{Model translation}
A Sally model is a script model with inputs, states, and state transitions. The Text2Test internal model, on the other hand, is a functional block-diagram model that may or may not possess intended state-transition behavior. But because there is an underlying system frequency for the internal model, it can essentially be remodeled as a transition system that responds to instantaneous changes. In this translation process, blocks/structures of the internal model are converted to script segments (declaration, initialization, and state transition) in the Sally model, following the rules summarized in Table~\ref{sally translation rules}. Because the Sally script is based on the SMT-LIB 2.0 format, the translation order of the blocks/structures is irrelevant.
\begin{table}[h!]
\centering
\resizebox{\columnwidth}{!}{
\begin{tabular}{ |c|l|}
\hlineB{3}
\multicolumn{2}{|l|}{\textbf{Case I:} A system \textit{input} block}\\
\hline
& \textbf{1) System input declaration:} A system input variable.\\
\cline{2-2}
\textbf{Sally} & \textbf{2) State declaration:} An auxiliary input state variable.\\
\cline{2-2}
\textbf{counterparts:} & \textbf{3) State initialization:} None.\\
\cline{2-2}
&\textbf{4) State transition:} Next-step value of the input state equals the input. \\
\hline
Note:& All states update simultaneously, lagging the system input(s) by one step.\\
\hlineB{3}
\multicolumn{2}{|l|}{\textbf{Case II:} An \textit{STB} block and the associated \textit{unit delay} and \textit{output} block}\\
\hline
\textbf{Sally} & \textbf{1) State declaration:} A Sally state variable.\\
\cline{2-2}
\textbf{counterparts:} & \textbf{2) State initialization:} Same value as the \textit{unit delay}'s initial value.\\
\cline{2-2}
& \textbf{3) State transitions:} Transition matrix in Sally language. \\
\hline
Note:&This structure as a whole corresponds to one single Sally state.\\
\hlineB{3}
\multicolumn{2}{|l|}{\textbf{Case III:} A time-dependent block}\\
\hline
\textbf{Sally} & \textbf{1) State declaration:} A Sally state variable.\\
\cline{2-2}
\textbf{counterparts:} & \textbf{2) State initialization:} Same value as the block's initial value.\\
\cline{2-2}
& \textbf{3) State transitions:} Transfer function in Sally language. \\
\hline
Note:&\textbf{Case III} excludes the \textit{unit delay} block instances from \textbf{Case II} category.\\
\hlineB{3}
\multicolumn{2}{|l|}{\textbf{Case IV:} A non-time-dependent block}\\
\hline
\textbf{Sally} & 1) \textbf{State declaration:} A Sally state variable.\\
\cline{2-2}
\textbf{counterparts:} & 2) \textbf{State initialization:} None.\\
\cline{2-2}
& 3) \textbf{State transitions:} Block's math/logical function in Sally language. \\
\hline
Note:&A memoryless state has no initialization.\\
\hlineB{3}
\end{tabular}
}
\caption{Text2Test internal model to Sally model translation.}
\label{sally translation rules}
\end{table}
The Text2Test internal model responds to system input(s) instantaneously, at the same time step, while in a Sally model system input(s) are used to update system state(s) at the next time step. Therefore, to eliminate the one-step response gap at the system-input interface, a system input variable and an auxiliary input state variable are created as a pair in the Sally model, where the state lags the input by one step. Thus, the auxiliary input state(s) update simultaneously with all the other system states, and all states as a whole exhibit behavior equivalent to that of the corresponding Text2Test internal model.
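For illustration, a minimal sketch of what such a translated model might look like in Sally's MCMT input format is shown below; the identifiers (`st`, `T`, `x`, `x_st`, `s`) are hypothetical and do not correspond to actual Text2Test output:

```lisp
;; Hypothetical translation sketch (Case I of the table above): system
;; input x gets an auxiliary input state x_st that lags it by one step.
(define-state-type st
  ((x_st Real) (s Real))    ;; state variables (x_st is the auxiliary one)
  ((x Real)))               ;; system input variables
(define-transition-system T st
  ;; initialization: none for x_st (Case I); ordinary state s starts at 0
  (= s 0)
  (and
    ;; Case I state transition: next x_st equals the current input
    (= next.x_st input.x)
    ;; every other state updates from the lagged input state
    (= next.s (+ state.s state.x_st))))
;; a query can now relate a state and the (lagged) input via x_st;
;; this example query is illustrative, not necessarily valid
(query T (>= s x_st))
```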
\subsubsection{Checking specific properties against the Sally model}
\begin{figure}[!htb]
\centering
\includegraphics[scale=.27]{figures/Sally_modelchecking_property_flow_snapshot.png}
\caption{Checking Specific Properties of Requirements using Sally}
\label{fig:specific_prop_req_sally}
\end{figure}
Figure~\ref{fig:specific_prop_req_sally} shows the evidence flow for checking a Specific Property against a Sally model (adapted from Figure~\ref{fig:specific_prop_req}). In a nutshell, the Sally model checker helps rigorously ensure that the specific properties (typically specified from system requirements to mitigate hazards) are satisfied by the model (which is derived from the high-level functional requirements).
\paragraph{Specifying properties with system inputs}
The generic Sally query has one inconvenience regarding property specification with system inputs. In the query .mcmt file, an input condition cannot be directly encoded as part of the query formula, the way most other model checkers allow. Sally does provide an assumption annex for input conditions as a supplement to the query, so that the input conditions are treated as extra system assumptions. But this is verbose and, more importantly, still does not address the issue that some properties may have clauses involving the relationship between an input and a state (something like ``State $s$ is always greater than input $x$.''). Nevertheless, in our Text2Test Sally integration, each system input has a corresponding auxiliary input state in the Sally model created during model translation (as shown in Table~\ref{sally translation rules}, Case I). Therefore, it is strongly recommended to use the input state variable, rather than the input variable, in the property query formula whenever possible, for greater expressiveness.
\subsubsection{Examples of Properties}
\paragraph{Mode-thrashing} Mode-thrashing is a hazardous phenomenon that often occurs when a mode switch is triggered by a continuous value. For example, in the thermostat example of Section~\ref{sec:thermostat}, Figure~\ref{fig:8var-thermo}, if the maximum sensing fluctuation (denoted by $\mathit{fluc}_{max}$) of the thermometer-sensed temperature (denoted by $t_{sensed}$) is larger than the thermostat threshold margin (the difference of the ON/OFF temperature threshold values), then the thermostat may send rapidly oscillating ON/OFF control signals (denoted by $ctrl$) to the heater while the actual room temperature (denoted by $t_{room}$) holds an unchanging value in between or near the thermostat threshold values. In the extreme case where $\mathit{fluc}_{max}$ is exactly half the threshold margin, mode-thrashing can still occur when $t_{room}$ happens to stay at the midpoint of the two threshold values. Absence of potential mode-thrashing has an LTL formulation:
\begin{equation*}
(t_{room}-\mathit{fluc}_{max}\le t_{sensed}\le t_{room}+\mathit{fluc}_{max})\Rightarrow \neg (ctrl=\text{ON}~\textbf{X}~ctrl=\text{OFF}),
\end{equation*}
for all unchanging $t_{room}$, where \textbf{X} is the temporal operator ``next'' in the LTL language. Encoding this formula in Sally requires a temporal extension to the generic Sally query, which is elaborated in Subsection~\ref{subsec:temporal extension} below. The reason that $\mathit{fluc}_{max}$ is \underline{not} naively compared with the thermostat threshold margin (or its half) here is that, in the more general case, the continuous condition value and its fluctuation may go through some non-trivial transform before becoming the triggering signal of the mode switch, so directly checking whether an undesired mode switch can occur is the only universal detection method. Detecting potential mode-thrashing is non-trivial but also not too complex to be encoded as a generic property of the requirement set in Text2Test. Therefore, it is performed on both the Text2Test internal model and the corresponding Sally model, and the results are compared to showcase the semantic equivalence between the two models.
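The phenomenon can be demonstrated with a small Python simulation of a hysteresis thermostat (all threshold values are hypothetical): with the room temperature fixed at the midpoint and a worst-case alternating sensor fluctuation equal to half the margin, an ON step is immediately followed by an OFF step, while a smaller fluctuation never leaves the margin:

```python
T_ON, T_OFF = 19.0, 21.0           # heater ON below 19, OFF above 21
def thermostat(ctrl, t_sensed):
    """Hysteresis mode switch: margin = T_OFF - T_ON = 2.0."""
    if t_sensed <= T_ON:
        return "ON"
    if t_sensed >= T_OFF:
        return "OFF"
    return ctrl                     # inside the margin: keep current mode

def thrashes(t_room, fluc_max, steps=50):
    """True if ON is immediately followed by OFF (the LTL violation above)
    for a worst-case alternating sensor fluctuation around t_room."""
    ctrl, prev = "OFF", None
    for k in range(steps):
        t_sensed = t_room + (fluc_max if k % 2 else -fluc_max)
        prev, ctrl = ctrl, thermostat(ctrl, t_sensed)
        if prev == "ON" and ctrl == "OFF":
            return True
    return False

# room at the midpoint (20.0): fluctuation of half the margin thrashes,
# a smaller fluctuation stays inside the margin and never switches
print(thrashes(t_room=20.0, fluc_max=1.0))
print(thrashes(t_room=20.0, fluc_max=0.4))
```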
\subsubsection{Temporal extension to Sally query}\label{subsec:temporal extension}
\paragraph{Motivation} The generic Sally query does not allow basic temporal operators such as ``next'' and ``prev'', although ``next.'' is used in the Sally model script to denote the next time step. Without an explicit temporal operator, queries are checked at every time step. That is, one cannot write a generic query formula about a state $s$ at specific future or past time step(s). This is an implementation limitation rather than a limitation of the reasoning engines. A workaround is to create time-shifted auxiliary state variables (for instance, $prev\_s$, $next\_s$) and add their proper declarations and state transitions to the Sally model so that they can be used in the query. This is non-automatic, and it is tedious and error-prone when specifying a property across multiple time steps, which requires one auxiliary state variable for each related state at each time step. To enrich the temporal-logic semantics and take full advantage of the reasoning power, a temporal extension to the Sally query was developed as part of the tools integration.
\paragraph{Approach} Two basic temporal operators, \textbf{X} and \textbf{F}, denoting ``next'' and ``eventually'' respectively, and time-step syntactic sugar are introduced to augment the query language. Let $t$ and $t'_{>t}$ denote the beginning and end times of the temporal domain, and let $\mathit{generic}\_pred$ be an SMT-LIB 2.0 format Boolean predicate over the state variables of the generic Sally model; then the general forms of the temporally extended predicates are:
\begin{itemize}
\item \textbf{X}($t,t'$][\textit{generic\_pred}]--- meaning that ``$\mathit{generic}\_pred$ holds for \underline{all} time steps from $t$ (not included) to $t'$ (included).''\\
\item \textbf{F}($t,t'$][\textit{generic\_pred}]--- meaning that ``$\mathit{generic}\_pred$ holds for \underline{some} time step from $t$ (not included) to $t'$ (included).''
\end{itemize}
Note that both $t$ and $t'$ are integer multiples of the system period. They can be negative, zero, or positive, corresponding to past, current, or future time respectively. A temporally extended predicate can be embedded in a larger query formula the same way a generic predicate can. As a simple example, the property ``In system $sys$, when state $p$ is $true$, state $q$ shall be $true$ for the next 2 seconds.'' can be formulated as:
\begin{equation}\label{eq:extendedExample}
\mbox{(query }sys\mbox{ }(\Rightarrow\mbox{ }p\mbox{ }\textbf{X}[0,2][q])).
\end{equation}
\paragraph{Property translation and Sally model augmentation} A temporally extended property is first translated into an equivalent generic Sally query before the Sally tool takes it as input. The translation is a straightforward process of temporal unfolding and (sometimes) shifting. Supposing the system period is 1 second, Formula~\ref{eq:extendedExample} is unfolded to
\begin{equation}\label{eq:unfolded}
\mbox{(query }sys\mbox{ }(\Rightarrow\mbox{ }p\mbox{ (and }q\mbox{ }next\_q\mbox{ }next2\_q))),
\end{equation}
where $next\_q$ and $next2\_q$ are auxiliary state variables denoting forward shifts of $q$ by 1 and 2 time steps respectively. Note the difference between the prefix ``\textit{next\_}'' in the auxiliary state variable name and the temporal operator ``next.''. If the temporal operator is \textbf{F} instead of \textbf{X}, Formula~\ref{eq:extendedExample} is unfolded to
\begin{equation*}
\mbox{(query }sys\mbox{ }(\Rightarrow\mbox{ }p\mbox{ (or }q\mbox{ }next\_q\mbox{ }next2\_q))).
\end{equation*}
Each of the newly created auxiliary state variables needs to be declared and given a state transition in the Sally model. The state transition is given in the form of an assignment to the next time step's state value. While an auxiliary state variable of a \underline{past} time step can easily be assigned as ``(= next.$prev\_s$ state.$s$)'', it is not easy to assign a \underline{future} state variable without introducing more auxiliary variables than are needed in the property. Naturally, the entire Formula~\ref{eq:unfolded} can be shifted 2 time steps towards the past, resulting in the plain Sally query in the generic form:
\begin{equation}\label{eq:shifted}
\mbox{(query }sys\mbox{ }(\Rightarrow\mbox{ }prev2\_p\mbox{ (and }prev2\_q\mbox{ }prev\_q\mbox{ }q))),
\end{equation}
where $prev\_q$ and $prev2\_q$ are auxiliary state variables denoting the 1- and 2-time-step backward shifts of $q$ respectively.
Now, all state variables in Formula~\ref{eq:shifted} refer to either the current or a past time step. Their declarations and state transitions can be added to the original Sally model without introducing any further auxiliary state variables. The augmented Sally model is thereby a property-specific Sally model, because the choice of auxiliary state variables is property-specific. The entire process is automated. Lastly, the Sally tool verifies the property-specific Sally model against the plain Sally query. The complete data flow is summarized in Figure~\ref{fig:extended property check}.
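The unfold-and-shift translation can be sketched as follows. This is an illustrative Python sketch, not the actual Text2Test code; the string-based variable naming follows the text, but the helper functions are assumptions.

```python
# Illustrative sketch (not the actual Text2Test code) of the two
# translation steps: unfolding X[0,n][q] into per-step copies of q, then
# shifting every variable n steps towards the past so that only current
# and past (prev_*) auxiliary variables remain.

def unfold(var, n, op="X"):
    """X -> conjunction, F -> disjunction over q, next_q, next2_q, ..."""
    copies = [var] + [("next_" if i == 1 else f"next{i}_") + var
                      for i in range(1, n + 1)]
    joiner = "and" if op == "X" else "or"
    return "(" + joiner + " " + " ".join(copies) + ")"

def shift(var, k):
    """Shift one variable k steps towards the past, e.g. next2_q -> q
    and q -> prev2_q for k = 2."""
    if var.startswith("next"):
        head, _, base = var.partition("_")
        off = 1 if head == "next" else int(head[4:])
    else:
        base, off = var, 0
    new = off - k
    if new == 0:
        return base
    return ("prev_" if new == -1 else f"prev{-new}_") + base

body = unfold("q", 2)  # the conjunction of the unfolded formula
shifted = " ".join(shift(v, 2) for v in ["q", "next_q", "next2_q"])
```

With $k=2$, shifting reproduces exactly the variables of the shifted formula: $prev2\_q$, $prev\_q$, $q$.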
\begin{figure}[!htb]
\centering
\includegraphics[width=\linewidth]{figures/sally_model_augmentation.PNG}
\caption{Temporal property translation and Sally model augmentation.}
\label{fig:extended property check}
\end{figure}
\section{SeaHorn Tool for Static Code Analysis}
SeaHorn~\cite{seahorn} is a verification framework for LLVM-based
languages. SeaHorn performs a whole-program points-to
analysis~\cite{seadsa} to translate LLVM bitcode into a set of
verification conditions (VCs) whose satisfiability implies that the
program is safe. Currently, SeaHorn focuses on reachability
properties. Specifically, SeaHorn checks for predefined errors such as
memory errors and user-defined assertions. The details of these VCs
and how SeaHorn checks for predefined errors and assertions vary
depending on the back-end solver. SeaHorn provides three main back-end
solvers:
\begin{itemize}
\item a model-checker, called Spacer~\cite{KomuravelliGC14}, based on
Property-Directed Reachability (PDR)~\cite{bradley2011sat} that
produces (a) proofs, consisting of loop invariants and procedure
summaries, (b) counterexamples if the program cannot be proven safe,
or otherwise, (c) ``unknown''. For practical reasons,
Spacer does not precisely model certain aspects of the semantics of
the program. For instance, \texttt{malloc} is modeled as a function
that returns non-deterministically a pointer, ignoring important
details such as alignment and memory layout. Alternatively, SeaHorn
uses Sally~\cite{jovanovic2016property} if all functions and
procedures can be inlined since Sally cannot model them. The benefit
of using Sally is in the use of PDR combined with k-induction,
a powerful technique for inferring loop invariants that Spacer does
not support.
\item a bounded-model checker (BMC) that models more precisely the
semantics of programs (including memory allocation) at the expense
of producing bounded proofs (i.e., proofs that only hold for a
finite number of loop iterations) but it is very effective at
finding bugs in programs.
\item a static analysis based on Abstract
Interpretation~\cite{Cousot_POPL77}, called Crab, that produces
proofs, also consisting of loop invariants and procedure summaries,
but it cannot produce counterexamples. Due to the nature of
abstractions, Crab is often more efficient than Spacer/Sally and BMC
but it can produce many more false positives. Crab provides a rich
set of fixpoint solvers, abstract domains and analyses. For
instance, Crab provides a Memory analysis that can prove absence of
memory errors such as null dereferences and buffer overflows.
\end{itemize}
In this phase, we have focused on Crab, the SeaHorn abstract
interpreter. More specifically, we have focused on the following tasks:
\begin{itemize}
\item Improve Crab capabilities to perform analysis of relevant C
projects.
\item Implement a new Tag analysis that models many interesting ontic
properties.
\item Start developing evidence formats (i.e., certificates) for
independent checkers.
\end{itemize}
\subsection{Progress on Analysis of C code}
We have developed a new memory model, called \emph{region-based memory
model (RBMM)}, that enables more efficient analysis of C code with
few restrictions with respect to the standard C memory model. This
work has been published in~\cite{crabir}.
The standard C memory model partitions memory into memory objects. A
pointer points to an offset within a memory object. RBMM further
partitions memory into regions. A region can span multiple
objects. The key difference is that pointers pointing to different
regions cannot alias even if they point to the same memory
object. RBMM makes two key assumptions in order to allow efficient
analysis of C programs: (a) matched pairs of memory writes-reads must
access the same number of bytes, and (b) it assumes that programs do
not have undefined behavior (UB). Assumption (a) excludes non-portable
code. Although it may seem counter-intuitive, assumption (b) does not
limit our analysis from proving absence of UB. As shown by Conway et
al.~\cite{ConwayDNB08}, conditionally sound analyses can prove absence
of errors (e.g., memory violations) or, otherwise, produce one
counterexample, although they cannot produce all possible
counterexamples. In summary, the approach described in~\cite{crabir}
works as follows:
\begin{enumerate}
\item apply SeaHorn whole-program pointer analysis on the LLVM program so
that memory is statically partitioned into regions. The analysis
also identifies which parts of the program might not satisfy the
assumptions of RBMM;
\item translate the LLVM program into a novel
intermediate-representation (IR), called CrabIR, where all LLVM
memory instructions are translated to instructions over regions and
references;
\item perform abstract interpretation on CrabIR using Crab fixpoint
solvers and abstract domains.
\end{enumerate}
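The region idea behind RBMM can be illustrated with a toy sketch. This is not SeaHorn's whole-program points-to analysis; it only mimics the non-aliasing guarantee with a union-find over made-up ``may flow together'' facts.

```python
# Toy illustration of the key RBMM guarantee: pointers in distinct
# regions can never alias. Regions are computed with a union-find over
# invented flow facts (e.g. assignments); this is not SeaHorn's actual
# points-to analysis.

class Regions:
    def __init__(self):
        self.parent = {}

    def find(self, p):
        self.parent.setdefault(p, p)
        while self.parent[p] != p:
            self.parent[p] = self.parent[self.parent[p]]  # path halving
            p = self.parent[p]
        return p

    def unify(self, p, q):
        """Record that p and q may point into the same region."""
        self.parent[self.find(p)] = self.find(q)

    def may_alias(self, p, q):
        return self.find(p) == self.find(q)

r = Regions()
r.unify("p", "q")   # e.g. from the assignment p = q
r.find("s")         # s never flows into p/q: it gets its own region
```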
In~\cite{crabir}, we apply Crab using our new region-based aware IR
(CrabIR) on five popular C projects: \texttt{bftpd}, \texttt{brotli},
\texttt{curl}, \texttt{thttpd}, and \texttt{vsftpd}. Sizes vary from
5K to 50K lines of code. The results show that Crab can prove that
around $60\%$ of all non-trivial\footnote{A memory dereference is
considered trivial if LLVM analyses can already prove that it is not
null (e.g., global variables).} pointer accesses cannot be
null. This number is promising considering that the environment for
these programs is conservatively ignored and no specialized abstract
domains are used.
\subsection{New Tag Analysis}
In this phase, we have also developed a new Tag analysis in Crab,
where memory locations can be tagged with a numerical identifier
(i.e., tag) and then the Crab forward analysis propagates those tags
through memory. Upon completion of the analysis, Crab clients can ask
whether a memory location is tagged with a particular set of
tags. Taint analysis is an example of tagging.
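A minimal sketch of the tag-propagation idea follows. It is illustrative only: the Crab Tag analysis propagates tags through CrabIR memory regions, not through a Python dictionary, and the variable names are invented.

```python
# Minimal sketch of forward tag propagation: sources attach tags,
# assignments propagate tag sets, and a client query asks which tags
# reach a given variable. Everything here is illustrative.

tags = {}  # variable -> set of tags

def source(var, tag):
    """A source (e.g. a read system call) attaches its tag."""
    tags.setdefault(var, set()).add(tag)

def assign(dst, *srcs):
    """dst := f(srcs): dst inherits every incoming tag."""
    tags[dst] = set().union(*(tags.get(s, set()) for s in srcs))

source("buf", "READ")        # buf = read(...)
assign("msg", "buf", "hdr")  # msg built from buf and untainted hdr
assign("out", "msg")         # out then reaches a sink such as write(out)
```

A client query then inspects `tags["out"]` to learn which source tags can reach the sink.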
As a pilot study, we have applied the Tag analysis on \texttt{thttpd}
(8K lines of code) in order to perform taint analysis where sources
are systems calls such as \texttt{read} and \texttt{mmap} and sinks
are systems calls such as \texttt{write} and \texttt{writev}. Each
source and each sink is assigned a different tag. The analysis runs in
a few seconds and infers 24 of the 72 possible relationships between the 7 sources
and 8 sinks in \texttt{thttpd}. In the next phase, we plan to focus on
more specific ontic properties (e.g., security related properties) and
apply the Tag analysis on a broader set of relevant programs.
\subsection{Progress on Evidence Formats for Independent Checkers}
We have developed an approach that uses program invariants produced by
an abstract interpreter as evidence that can be checked by independent
checkers. We plan to start implementing a prototype in the context of
eBPF programs.
Any abstract interpreter produces program invariants at each basic
block or program location. More precisely, an abstract interpreter
produces an \emph{invariant map}, $InvMap$, that maps a location $l$
to $Inv_{l}$, a formula defined on a subset of program variables,
expressed in the restricted form allowed by the abstract domain
($Inv_l: AbsState$). The formula $Inv_{l}$ expresses a set of facts
that hold at location $l$ and are expressible in the particular abstract
domain.
In our approach, the tuple $\langle P, \sqcup, \Rightarrow,
\mathsf{TrFn}, InvMap \rangle$ constitutes the \emph{certificate} that
needs to be checked, where $P$ is the program, $\Rightarrow: AbsState
\times AbsState \mapsto Bool$ is the abstract implication, $\sqcup:
AbsState \times AbsState \mapsto AbsState$ is the abstract join, and
$\mathsf{TrFn}: AbsState \mapsto AbsState$ represents the abstract
transfer functions for each transition from location $l'$ to $l$.
Then, the checker receives a certificate and must verify, for each
location $l$ in $P$, that:
\[ \bigsqcup_{l' \in \mathsf{pred}(l)} (\mathsf{TrFn}^{l'~\rightarrow~l}(InvMap(l'))) \Rightarrow InvMap(l) \]
\vspace{2mm}
\noindent where $\mathsf{pred}$ returns the predecessors of a given
location.
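The check can be sketched for a one-variable interval domain as follows. The tiny CFG, transfer functions and invariant map below are made-up examples, not output of Crab.

```python
# Illustrative certificate check for a one-variable interval domain: the
# checker recomputes the join of the transferred predecessor invariants
# and tests abstract implication (interval inclusion).

BOT = None  # bottom: unreachable

def join(a, b):
    """Abstract join: interval hull."""
    if a is BOT:
        return b
    if b is BOT:
        return a
    return (min(a[0], b[0]), max(a[1], b[1]))

def implies(a, b):
    """Abstract implication: interval inclusion."""
    return a is BOT or (b is not BOT and b[0] <= a[0] and a[1] <= b[1])

# Transfer function per edge; the back edge models a guarded increment
# "x := min(x + 1, 10)" (an invented example).
edges = {
    ("entry", "loop"): lambda iv: iv,
    ("loop", "loop"): lambda iv: BOT if iv is BOT
                      else (iv[0] + 1, min(iv[1] + 1, 10)),
}
preds = {"loop": ["entry", "loop"]}
inv_map = {"entry": (0, 0), "loop": (0, 10)}

def check(loc):
    acc = BOT
    for p in preds[loc]:
        acc = join(acc, edges[(p, loc)](inv_map[p]))
    return implies(acc, inv_map[loc])
```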
Note that any other abstract interpreter different from the one that
generated the certificate can be used as checker. Another possibility
is to translate program invariants ($AbsState$) into some first-order
logic fragment expressed by some combination of SMT theories so that
the checker can be replaced by an SMT solver. This would also require
replacing $\bigsqcup$ with logical disjunction, abstract implication
with logical implication, and the abstract transfer function
$\mathsf{TrFn}^{l'~\rightarrow~l}$ with the logical encoding of each
transition $l' \rightarrow l$.
We plan to implement the ideas from this section in the context of
eBPF programs. An eBPF program is a bytecode program that can be
executed in the kernel without recompiling the kernel. The price to
pay for such flexibility is that the kernel needs to verify that
the program is memory safe before executing it. In a previous
work~\cite{prevail}, we developed an abstract interpreter based on
Crab, called Prevail, that outperforms the existing verifier in the
Linux Kernel. The main limitation is that Prevail must be run in user
space. Recently, Windows OS has adopted the eBPF technology and chosen
Prevail as the verifier. Currently, Windows OS runs Prevail in a
secure environment which is not an ideal solution either. To solve
this problem, we plan to use Prevail to generate certificates in user
space and implement a lightweight checker more suitable to be run in a
secure environment.
\subsection{Test Generation from Requirements}\label{sub:testGen}
Figure~\ref{fig:t2t-tool-arch} provides an overview of the Text2Test tool. An important capability provided by Text2Test is the auto-generation of tests from a set of requirements \cite{ren-bhatt-2016-smt-hilite}.
Figure~\ref{fig:testgen-evidence-flow} shows the flow of evidence artifacts for test development from software high-level-requirements (HLR). The \emph{test development}, automated by Text2Test, generates two artifacts:
\begin{enumerate}
\item Test Oracles: The test obligations for specific behavior operator instances used in requirement clauses/subclauses across the requirement set.
\item Tests: Each test consists of a sequence of test vectors containing input values and expected output values of the component under test. A test traces to the Test Oracles it satisfies.
\end{enumerate}
As described in Section~\ref{sec:t2t-internal-model}, the Text2Test tool creates an internal representation of the requirement set, the \emph{semantic synthesis model}. The nodes in this model are the \emph{behavioral operator} instances that are derived from the clauses/subclauses of all requirements in the set. Behavioral operators include Boolean logic operators, relational operators, mathematical operators and functions, time-based operators, selection operators, event-based operators, and state-changing operators.
This forms the basis for the generation of test oracles and tests. As part of the Text2Test tool configuration, there is a formal definition of the test oracle criteria and an equivalence class definition of the required test obligation for each type of behavioral operator. The underlying CLEAR testing theory is described in the next subsection.
\subsubsection{CLEAR Testing Theory}
\label{sec:clear-testing-theory}
\begin{quote}
``Testing can only reveal the presence of errors, not their absence.'' – Dijkstra (and DO-178C)
\end{quote}
Testing is inherently incomplete and cannot be formalized in logic. The challenge is to make testing more rigorous and bring some notion of ``completeness''. One way to accomplish this is to base testing on some reasonable \emph{applicable criteria}. The CLEAR Testing Theory uses the well-established guidance in DO-178C~\cite{DO178C} to this end: DO-178C clearly establishes that tests must be based upon requirements and posits the criteria shown in Figure~\ref{fig:do178c-testing-criteria}, and the theory brings notions of rigor and ``completeness'' within that framework.
\begin{figure}[!htb]
\centering
\includegraphics[scale=.9]{figures/do178c_testing_criteria.png}
\caption{Testing Criteria in DO-178C}
\label{fig:do178c-testing-criteria}
\end{figure}
The \emph{CLEAR Testing Theory} consists of a set of arguments and derivations to establish the claim that the \emph{Test Oracles} and \emph{Tests}, created for a given set of requirements in CLEAR notation, fully satisfy the ``applicable criteria'' for requirement-based testing for those requirements. A second claim is that a Test Harness correctly executes the Tests (and the implied test procedure) on the software component and produces pass/fail results of each test. The following is the summary of claims of CLEAR Testing Theory:
\begin{itemize}
\item Claim 1:
The Test Oracles and Tests satisfy all applicable criteria (e.g.: equivalence class / boundary value, time-related functions, state transitions) per DO-178C sections 6.4.2.1 and 6.4.2.2 for the set of requirements in CLEAR notation.
This claim is supported by the following subclaims:
Subclaim 1: A set of Test Oracles is created for each instance of an operator (behavioral sub-clause/subexpression) within a requirement in CLEAR notation.
Subclaim 2: The set of Test Oracles for an operator satisfies all applicable criteria (e.g.: equivalence class / boundary value, time-related functions, state transitions) per DO-178C sections 6.4.2.1 and 6.4.2.2.
Subclaim 3: A Test is created for each Test Oracle by 1) backward propagating the operator's inputs to component under test inputs, using principles of \emph{controllability}, and 2) by forward propagating the operator's output to an observable output of the component using principles of \emph{observability}.
\item Claim 2:
A Test Harness concretizes and correctly executes the Tests (and its implied procedure) on the target software and produces pass/fail results of each test case.
\end{itemize}
\begin{figure}[!htb]
\centering
\includegraphics[width=\textwidth]{figures/test_oracle_examples.png}
\caption{Example of Derivation of Test Oracles from a Requirement Set}
\label{fig:test-oracle-examples}
\end{figure}
Figure~\ref{fig:test-oracle-examples} shows examples of derivation of test oracles from a requirement set using the Text2Test tool. As discussed previously in Section~\ref{sec:t2t-overview} and Figure~\ref{fig:t2t-tool-arch}, semantic transformations are applied to requirements to create a semantic synthesis model. The nodes in this model are the operators in the requirements (implied by natural language subclauses and mathematical/logical subexpressions). For each operator, there is a set of test oracles; each oracle defines the equivalence class of input and output values for a particular behavior of that operator.
\begin{figure}[!htb]
\centering
\includegraphics[width=\textwidth]{figures/test_oracle_switch.png}
\caption{Definition of Test Oracles for Switch Operator}
\label{fig:test-oracle-switch}
\end{figure}
Figure~\ref{fig:test-oracle-switch} shows the test oracle definition for a switch operator, which represents the logic of a ``while ... otherwise ...'' requirement. The essential point to note here is that the equivalence class definition specifies that the values at the two inputs of the switch (x and y in the requirement) need to be different, so that one can ascertain that the proper branch in the code was chosen. Such a test will detect the type of variable substitution error in the code shown in Figure~\ref{fig:var-subst-error}.
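The equivalence-class idea can be made concrete with a small sketch; the oracle values and the buggy implementation below are invented for illustration, not taken from the tool.

```python
# Illustrative sketch of the switch-operator oracle: every equivalence
# class forces the two data inputs to differ, so selecting the wrong
# branch becomes observable at the output.

def switch(cond, x, y):
    """Correct "while ... otherwise ..." logic."""
    return x if cond else y

def oracle_tests():
    # (cond, x, y, expected), with x != y in every test
    return [(True, 3, 7, 3), (False, 3, 7, 7)]

def passes(impl):
    return all(impl(c, x, y) == exp for c, x, y, exp in oracle_tests())

# Variable substitution error: the same input wired to both branches.
buggy = lambda cond, x, y: x if cond else x
```

Because the inputs differ, `passes(buggy)` fails on the second test, exactly the defect class the oracle is designed to catch.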
\begin{figure}[!htb]
\centering
\includegraphics[width=0.5\textwidth]{figures/var-subst-code-error.png}
\caption{Example of Code Error Caught by the Test Oracle of Switch Operator}
\label{fig:var-subst-error}
\end{figure}
\subsection{Text2Test Tool Overview}
\label{sec:t2t-overview}
\begin{sidewaysfigure}[!htbp]
\centering
\includegraphics[width=\linewidth]{figures/t2t_tool_architecture.png}
\caption{Architecture of the Text2Test Tool}
\label{fig:t2t-tool-arch}
\end{sidewaysfigure}
Text2Test is a powerful tool capable of automatically generating evidence artifacts for CLEAR requirements in textual form. Figure~\ref{fig:t2t-tool-arch} shows the architecture of the Text2Test tool. It takes as inputs the CLEAR requirement set from the front-end requirement editor, as well as metadata such as variable data types, units, and other ontic type specifications in separate dictionary files. It then creates a minimum-scale but semantically equivalent internal block diagram model of the requirement set, called the \emph{Semantic Synthesis Model}. This internal model not only provides a visualization, but is also suitable for model-based analysis, including test generation, static analysis (e.g., range propagation), and formal verification (against the generic properties listed in subsection~\ref{sec:clear-generic-properties}). The internal model also serves as a medium that can be translated to models for other verification tools.
\subsubsection{From RequirementSet to Text2Test internal Semantic Synthesis Model}
\label{sec:t2t-internal-model}
\paragraph {Data-Flow Semantic Graph creation.}
A data-flow semantic graph (called the ``raw model'') is first created from the requirements, capturing the data flow and the semantics of the behavior operators used in requirement clauses and subexpressions. The raw model preserves the semantics of each and every individual requirement in the requirement set, but it is not fully functional as a whole, in two main aspects. Firstly, a \textit{switch} block may have, at one of its data input ports, a connection to the \textit{Invalid} block, which cannot be executed or propagated through. This is due to the fact that the switch block is created from a selection requirement (for example, a ``when..., then...'' requirement) that is incomplete by itself. Secondly, if a feedback path is formed, it often misses a unit delay: the delay is implied by the keyword ``transition to'' without an explicit time-shifting keyword ``previous'', which would be needed for unit delay block creation in the raw model. Beyond these system-level semantic imperfections, the raw model often contains redundant logical blocks and lacks non-primitive blocks (state transition, for example). All of this makes the raw model less user-friendly for reading and examining.
It is not hard to see that the common root cause of these ``imperfections'' is that the incompleteness of individual requirements gets carried into the raw model, whose creation process lacks a system perspective. To address this, a system-level aggregation is performed on the raw model through a block merge process. Block merge is based on graph search and pattern recognition, aiming to fuse low-level primitive blocks into blocks of richer semantics, filling semantic gaps as well as eliminating redundancy. The next subsection elaborates on the merge of a set of blocks exhibiting state transition behavior, resulting in a fully functional state transition substructure of much more compact form.
\paragraph {State transition block merge to create Semantic Synthesis Model.}
A typical state transition behavior consists of state initialization, and a state transition function that determines the current state value based on the previous state value and/or external transition triggering signals (also called non-state triggers). In the CLEAR requirement set, the state transition behavior of one state variable is often distributed across multiple individual requirements, each of which contributes a partial statement of either the state initialization or one transition action, as in the example shown in Figure~\ref{fig:st req set}. In a typical state transition action requirement, the ``while'' clause specifies the previous (source) state value and the ``when'' clause specifies the non-state triggering condition, followed by the clause that sets the current (destination) state value.
\begin{figure}[!htb]
\centering
\includegraphics[width=\linewidth]{figures/st_req_set.PNG}
\caption{Requirement set example for a state variable SOME\_STATE.}
\label{fig:st req set}
\end{figure}
The left-hand side of Figure~\ref{fig:stb} shows the raw model subgraph representing the state transition behavior in Figure~\ref{fig:st req set}. In the raw model creation process, for each requirement, primitive blocks are created, explicitly mapping to CLEAR functional and logical keywords (e.g., the \textit{switch} block maps to the keyword ``While/When'', the \textit{not} block maps to the logical operator ``not''); then proper connections are added, forming a feedback loop path. A \textit{Combiner} block is created as a routing hub node aggregating all state-value writes and reads for the common state variable. The initialization requirement is simply converted to a \textit{constant} block feeding into the \textit{Combiner}.
\begin{figure}[!htb]
\centering
\includegraphics[width=\linewidth]{figures/stb_merge.PNG}
\caption{State transition subgraph before and after merge.}
\label{fig:stb}
\end{figure}
The state transition block merge starts by identifying the \textit{Combiner} block in the raw model and its associated feedback paths. Then the \textit{Combiner} block is replaced by a \textit{StateTransitionBlock} (\textit{STB}) whose functionality is defined by an inherent transition matrix (initialized as an empty matrix) as shown in Table~\ref{st matrix}. Next, each feedback path is analyzed to identify the associated non-state trigger and the source state value(s). Both of them are then recorded into the state transition matrix as row elements (for \textbf{semantic richness}), so that the entire feedback path is no longer explicitly needed and is thus removed from the subgraph (for \textbf{simplicity}), while the non-state trigger is reconnected directly to the \textit{STB} block input. The requirement ID is also recorded as row info (for \textbf{traceability}). The \textit{Invalid} block is removed (for \textbf{execution}), since the transition matrix is presumed to be input-complete if the requirement set is input-complete. For assurance, the input completeness will be formally verified as a generic property in a later stage after merge. Lastly, a simple feedback path with a \textit{unit delay} outside the \textit{STB} is added (for \textbf{temporal correctness}), providing the state-value memory for the merged state transition subgraph. Note that the \textit{unit delay} is initialized with the initial state value; therefore, the initialization path in the raw model is also removed (for \textbf{simplicity}), rendering the \textit{STB} itself a memoryless, time-independent block.
\begin{table}[h!]
\centering
\resizebox{\columnwidth}{!}{
\begin{tabular}{ |c|c|c|cV{3}c|c| }
\hline
\multicolumn{4}{|cV{3}}{\textbf{State transition matrix}} & \textbf{requirement ID}\\
\hlineB{3}
\textbf{Initialization}&\multicolumn{3}{cV{3}}{$s_0$}& ``Initial Condition'' \\
\hline
&\textbf{non-state triggers} &\textbf{ source} &\textbf{destination}& \cellcolor{gray}\\
\cline{2-5}
\textbf{Transition} &trigger $1$ & \{$s_1$\} &$s_2$& ``action 1''\\
\cline{2-5}
\textbf{actions}&... & ...& ... &...\\
\cline{2-5}
&trigger $n$& \{$s_2, s_3, s_4$\} & $s_0$ &``action n''\\
\hline
\end{tabular}
}
\caption{A complete state transition matrix for SOME\_STATE}
\label{st matrix}
\end{table}
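The semantics of the merged structure, i.e., a memoryless STB plus an external unit delay, can be sketched as follows. The layout of the matrix follows Table~\ref{st matrix}, but the concrete states and triggers are invented for illustration.

```python
# Executable sketch of the merged structure: the STB is a memoryless,
# time-independent function defined by a transition matrix, and a unit
# delay outside it (initialised with the initial state value) provides
# the state memory. States and triggers here are invented examples.

init = "s0"
# rows: (non-state trigger, set of source states, destination state)
matrix = [
    ("trigger0", {"s0"}, "s1"),
    ("trigger1", {"s1"}, "s2"),
    ("triggerN", {"s2", "s3", "s4"}, "s0"),
]

def stb(prev_state, active_triggers):
    """Memoryless transition function defined by the matrix."""
    for trig, sources, dest in matrix:
        if trig in active_triggers and prev_state in sources:
            return dest
    return prev_state  # no matching row: hold the state

def simulate(trigger_trace):
    state = init  # the unit delay starts at the initial state value
    trace = []
    for active in trigger_trace:
        state = stb(state, active)
        trace.append(state)
    return trace
```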
The current CLEAR language supports a wide range of variations of state transition behavior specification beyond the typical form in Figure~\ref{fig:st req set}, correspondingly leading to variations of subgraph structures in the raw model. For instance, 1) a state transition system can have a reset trigger that instantly overrides all other transition triggers once activated, 2) different triggers may be associated with the same source state value(s), causing branching on a feedback path and/or intertwining among feedback paths, or 3) the current state value may be determined by a non-trivial history of the state values instead of a one-step memory of the previous state value, resulting in a time-dependent block such as \textit{timer} in the feedback path in the raw model. By analyzing the structural features of the raw model subgraph (e.g., feedback paths, branching nodes, locations of key functional blocks, etc.), Text2Test block merge is able to recognize those advanced state transition behaviors and achieve a semantics-preserving and semantics-completing translation to the merged model. The merged model is the final-stage, semantically correct and complete model interpretation of the requirement set. It is the model on which all the downstream formal analysis and test generation within Text2Test is based. It also serves as the generic model from which semantically equivalent models for other tools (Sally, for example) are obtained through proper translation. The sections below use the term ``internal model'' to denote the merged semantic synthesis model unless specified otherwise.
The following capabilities are implemented in Text2Test tool using the internal model; these are described in subsequent subsections:
\paragraph{Generic property checking and defect report generation} Text2Test utilizes the SMT-based property checking capabilities implemented on the internal model obtained above. Text2Test checks against a set of system generic properties, as listed in subsection~\ref{sec:clear-generic-properties}, to detect possible fundamental defects of the requirement set. Model/requirement defects (with defect information such as defect type, root requirement IDs, variables, counterexample, etc.) are recorded in a .xml file for manual review purposes. These model checks also serve as a ``sanity check'' of the internal model; only defect-free models proceed to be converted to other tool models (e.g., the Sally model in the following subsection) for further model checking against specific properties.
\paragraph{Translation to Sally model} The Text2Test internal model is a correct and complete model of the requirement set. Its block diagram structure, with well-defined block functionalities and clear connection relations, makes it easy to further translate into a Sally model, as elaborated in subsection~\ref{sec:sally}.
\paragraph{Test generation} As a critical and legacy tool capability, test generation is elaborated in subsection~\ref{sub:testGen}.
\section{introduction}
Let $d\in\mathbb N$. For a nonempty subset $C$ of $\Zz^d$, we denote by $\Theta_d(C)$ the {\it edge boundary} of $C$, i.e.,
\[
\Theta_d(C):=\{ (x,y)\in \Zz^d\times\Zz^d \ : \ \textrm{$|x-y|=1$, $x\in C$ and $y\in \Zz^d\setminus C$} \}.
\]
Its cardinality $\#\Theta_d(C)$ is the {\it edge perimeter} of $C$. Given $n\in\mathbb N$, the $n$-points edge-isoperimetric problem in $\mathbb Z^d$ is the minimization problem
\[
EIP^d(n):=\min \{\#\Theta_d(C): C\subset\mathbb Z^d,\; \#C=n\}.
\]
In the following, a nonempty set $C$ of $\mathbb Z^d$ is said to be an $EIP^d$ minimizer if the edge perimeter of $C$ is equal to $EIP^d({\# C})$. As a convention, the empty set is assumed to be an $EIP^d$ minimizer as well.
A solution to the $n$-points edge-isoperimetric problem was given by Bollobas and Leader in \cite{Bollobas}. If two points $x,y$ in a configuration $C \subset \mathbb Z^d$ occupy neighboring lattice sites, i.e.\ $|x - y|=1$, we say there is a {\it bond} connecting these points. The number of bonds $b(C):=\frac12\,\#\{(x,y)\in C\times C: |x-y|=1\}$ satisfies the elementary relation $\#\Theta_d(C)+2b(C)=2d\#C$. This shows that edge-perimeter minimization coincides with number of bonds maximization, as $\# C$ is fixed.
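The elementary relation above can be checked numerically on a small random configuration; the following sketch is purely illustrative and plays no role in the paper's arguments.

```python
# Quick numerical sanity check of the identity
#   Theta_d(C) + 2 b(C) = 2 d #C
# on a random finite subset of Z^2 (illustrative only).

from itertools import product
import random

d = 2
random.seed(0)
C = {x for x in product(range(4), repeat=d) if random.random() < 0.6}

def nbrs(x):
    """The 2d nearest lattice neighbours of x."""
    for i in range(d):
        for s in (1, -1):
            yield tuple(x[j] + (s if j == i else 0) for j in range(d))

# ordered boundary pairs (x, y) with x in C, y outside C
edge_perimeter = sum(1 for x in C for y in nbrs(x) if y not in C)
# unordered bonds inside C
bonds = sum(1 for x in C for y in nbrs(x) if y in C) // 2
```

The identity holds for any finite $C$, since the $2d$ neighbours of each point split into boundary edges and bonds.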
The edge isoperimetric problem naturally arises within the theory of equilibrium shapes of crystals under a minimal surface energy criterion \cite{BL, DKS, M}. It appears in connection with low temperature lattice statistics systems such as the Ising model \cite{ACC,Arous-Cerf,B,C,CK,CM,CP,N}. Regarded as a maximization problem for the number of bonds, it arises in the analysis of classical interacting point particle systems with short-range interatomic potentials, where it describes ground states among configurations on a given lattice. In situations where ground states are known to crystallize, $EIP^d$ minimizers are indeed general ground states. Whereas interactions with significant long-range contributions lead to non-trivial boundary layers (see, e.g., \cite{Theil:11,JKST:19}), for specific {\it sticky-disc} potentials in the plane crystallization in the triangular lattice has already been shown in \cite{Harborth,Heitmann-Radin80,Radin81}. Yet, convergence of $EIP^2$ minimizers to the hexagonal Wulff shape as the particle number $n$ diverges, the $n^{3/4}$ law for fluctuations at finite $n$, and sharpened estimates with optimal constants have only been obtained rather recently, cf.\ \cite{AuYeung-et-al12,Schmidt,DPS}, respectively. Analogous results for the square lattice and the hexagonal lattice are found in \cite{DPS2,MPS,Mainini-Stefanelli12}, where the different lattice periodicity is induced by the presence of a three-body potential. The emergence of a macroscopic Wulff shape as an effect of the surface tension is a common feature of these models.
Unlike the classical anisotropic isoperimetric problem in $\mathbb R^d$, which admits the Wulff shape as the unique solution \cite{DP,F,FonsecaMueller:91,H}, the $n$-points edge isoperimetric problem has many solutions in general. In two dimensions, optimal polyominoes and lattice animals are discussed in \cite{BCC,Harary}. Indeed, characterizing isoperimetrically optimal polyominoes and polycubes is a classical problem in discrete mathematics, also considered in \cite{CA, G,EG, NB,VB}. We refer to \cite{Ahlswede, Bezrukov, Bollobas, Harper} for further results in combinatorics and for optimization problems on graphs.
A peculiar feature of the $EIP^d$ problem is that for infinitely many specific values of $n$ the solution to $EIP^d(n)$ is -- up to translations -- unique (e.g., if $n = \ell^d$ for some $\ell \in \mathbb N$)\footnote{To see this, for each set $C \subset \Zz^d$ with $\# C = n$ let $V_C = \bigcup_{x \in C} (x+[-1/2,1/2]^d)$ with volume $|V_C| = n$ and surface area $\Theta_d(C) = \int_{\partial V_C} \| \nu \|_{L^1}$ ($\nu$ the unit outward normal to $V_C$). As the minimizer of $V \mapsto \int_{\partial^* V} \| \nu \|_{L^1}$ on sets of finite perimeter with volume $n$ is up to translations uniquely given by $[1/2, \ell + 1/2]^d$ (see, e.g., \cite{FonsecaMueller:91,Taylor:75}), every $EIP^d$ minimizer $C$ must satisfy $C = \{1, \ldots, \ell\}^d$ up to translation.}, while for general (infinitely many) $n$ we will see that there are many substantially different minimizers. Our main result Theorem~\ref{thm:main} will show that -- after a suitable translation -- each solution $C$ to $EIP^d(n)$ is close to the cubic Wulff shape $W_n = \{1, \ldots, \lfloor n^{1/d} \rfloor\}^d$ and provide a sharp scaling law for the symmetric distance $C \triangle W_n$ which measures the fluctuations around $W_n$. More precisely, our main result reads as follows.
\begin{theorem}\label{thm:main} There is a constant $K_d > 0$ which only depends on the dimension $d$ such that
\begin{itemize}
\item[(i)] for every $n \in {\mathbb N}$ and each solution $C$ to $EIP^d(n)$ there is a translation vector $a \in \mathbb Z^d$ such that
$$ \# (C - a) \triangle W_n \le K_d n^{(d-1+2^{1-d})/d}. $$
\item[(ii)] This estimate is sharp as for each $\eps > 0$ there are infinitely many $n \in {\mathbb N}$ for which a solution $C$ to $EIP^d(n)$ exists which satisfies the estimate
$$ \inf_{a \in \mathbb Z^d}\# (C - a) \triangle W_n \ge (K_d - \eps) n^{(d-1+2^{1-d})/d}. $$
\end{itemize}
\end{theorem}
We remark that, by way of contrast, the special solutions found in \cite{Ahlswede, Bollobas} (cf.\ Theorems \ref{theorem:SpecialSolutions} and \ref{unique} below) differ from $W_n$ only on a single surface layer, so that their symmetric difference satisfies the (best possible) estimate of order $O(n^{(d-1)/d})$.
Still, the maximal fluctuations are of lower order than the number $n$ of particles, so that the macroscopic shape of an $EIP^d$ minimizer is close to the Wulff shape as the number of atoms grows. In the setting of Theorem \ref{thm:main} sharp estimates for this convergence can be given by considering the rescaled and translated empirical measure of a sequence $C_n$ of $EIP^d(n)$ minimizers. Rescaling with the edge length $n^{1/d}$, Theorem \ref{thm:main} shows that, for a suitable sequence of translation vectors $a_n$, $\mu_n = \frac{1}{n} \sum_{x \in C_n - a_n} \delta_{x/n^{1/d}}$ converges weakly to the uniform measure on the unit $d$-dimensional cube. Measuring the weak convergence of probability measures in terms of the bounded Lipschitz distance $d_{\rm BL}(\mu,\nu): = \sup_{\varphi \in {\rm Lip}_1} \int_{\mathbb R^d} \varphi \, d(\mu-\nu)$, where ${\rm Lip}_1$ is the space of Lipschitz functions that are bounded by $1$ and have Lipschitz constant bounded by $1$ as well, Theorem \ref{thm:main} implies
\[ d_{\rm BL}(\mu_n, \lambda^d|_{[0,1]^d}) \le C n^{(-1+2^{1-d})/d}, \]
and this estimate is sharp. This convergence is crucial in the context of low temperature crystallization as it provides a theoretical justification for the formation of a deterministic droplet at the macroscopic scale.
Yet, we also observe that the shape fluctuations at finite $n$ are substantial. Indeed, the non-uniqueness does not solely result from rearrangements of points on the surface. Such differences in `surface particles' would only be of order $O(n^{(d-1)/d})$. Instead, we observe differences of the order $O(n^{(d-1)/d} \cdot n^{2^{1-d}/d})$ which shows that -- in an averaged sense -- microscopic deviations, asymmetries and boundary defects may occur in a whole surface layer of depth $O(n^{2^{1-d}/d})$. (See also the construction in Lemma~\ref{lemma:lb}.)
Scaling laws for fluctuations around the asymptotic Wulff shape have first been obtained for the planar triangular lattice in \cite{Schmidt}, cp.\ also the announcement in \cite{AuYeung-et-al12}, and with optimal constants in \cite{DPS}. The square lattice and the hexagonal lattice, including optimal constants, are considered in \cite{MPS, MPS2} and \cite{DPS2}, respectively. More recently, also dimers have been analyzed, cf.\ \cite{FriedrichKreutz:19}. In all these two-dimensional systems an $n^{3/4}$ law was found to sharply describe fluctuations at finite $n$. Very recently, also within the technically much more demanding three-dimensional case a sharp scaling law could be established for the cubic lattice in \cite{MPSS}. Curiously, the same scaling $n^{3/4}$ was found to be optimal. The only result in general dimensions appears to be the recent contribution \cite{CL}, which provides another relevant connection between the continuum and the discrete isoperimetric inequality. Indeed, it is shown in \cite{CL} that an estimate from above on the maximal deviation from the Wulff shape in a crystalline system can be obtained through an application of the classical isoperimetric inequality. However, such estimates turn out to be sharp only in dimension $2$, as they provide a higher exponent as compared to the one we find in Theorem \ref{thm:main}.
To the best of our knowledge, the result of Theorem \ref{thm:main} is the first characterization of the overall shape of edge isoperimetric sets in a higher-dimensional system, providing a sharp scaling law for fluctuations around the perfect cube. Moreover, it closes the analysis for the cubic lattice, clearly recovering the $n^{3/4}$ law in dimensions $2$ and $3$. Starting from $d=2$, the sequence of optimal scaling exponents, according to Theorem \ref{thm:main}, turns out to be
\[
\frac34,\;\;\frac34,\;\;\frac{25}{32},\;\;\frac{13}{16},\;\;\frac{161}{192},\;\;\frac{55}{64},\;\frac{897}{1024},\;\;\frac{683}{768},\;\;\frac{4609}{5120},\;\;\frac{931}{1024},\;\; \frac{22529}{24576},\;\;\ldots, \frac{d-1+2^{1-d}}{d},\;\;\ldots
\]
It is a nondecreasing sequence that converges to $1$ as $d\to+\infty$, consistent with the fact that the number of surface points, which scales as $n^{(d-1)/d}$, and the total number of points $n$ have the same scaling exponent in the limit. The scaling exponent of the typical averaged width $n^{2^{1-d}/d}$ of surface layers in which boundary defects may occur converges to $0$ geometrically fast as $d \to \infty$.
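As a quick sanity check (an illustration, not part of the argument), the exponents above can be reproduced with exact rational arithmetic:

```python
from fractions import Fraction

def scaling_exponent(d):
    # the optimal exponent (d - 1 + 2^(1-d)) / d from the main theorem
    return (Fraction(d - 1) + Fraction(1, 2 ** (d - 1))) / d

# Starting from d = 2, these match the displayed sequence term by term.
exponents = [scaling_exponent(d) for d in range(2, 14)]
print(exponents[:5])
# [Fraction(3, 4), Fraction(3, 4), Fraction(25, 32), Fraction(13, 16), Fraction(161, 192)]
```

In particular, the first two terms coincide, so the sequence is nondecreasing (strictly increasing only from $d=3$ on) and stays below $1$.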
\subsection*{Plan of the paper} In Section \ref{D} we review the special solutions found in \cite{Ahlswede,Bollobas} and provide some alternative descriptions of such `daisies'. The construction of the lower bound, which is needed to prove Theorem \ref{thm:main}(ii), is given in Section \ref{lower}. The considerably more involved upper bound in Theorem \ref{thm:main}(i) is found in Section \ref{upper}. We close by summarizing our results in the proof of Theorem \ref{thm:main}.
\section{Daisies}\label{D}
We begin by reviewing the special solutions to the edge-perimeter minimization problem that were constructed in \cite{Ahlswede,Bollobas}, see also \cite[Chapter 7]{Harper}. These solutions are obtained by consecutively adding points on hyperplanes neighboring the faces of a cuboid.
Algebraically, these special solutions are conveniently described in terms of a special order on $\mathbb N^d$. In the following definition we use this notation: for $x=(x_1,\ldots, x_d)\in\mathbb N^d$, we let $\max x:=\max_{i=1,\ldots, d}x_i$, we let $\tilde x=(\tilde x_1,\ldots, \tilde x_d)$, where $\tilde x_i=1$ if $x_i<\max x$ and $\tilde x_i=x_i$ if $x_i=\max x$. Moreover, we let $x_*\in\mathbb N^{d-k}$ be obtained from $x$ by dropping the $k\in\{1,\ldots, d\}$ components of $x$ that are equal to $\max x$.
Finally, we denote by $\prec_{R}$ the right-to-left strict lexicographic order in $\mathbb N^d$, i.e., $x\prec_{ R} y$ if for some $i\in\{1,\ldots, d\}$, there holds
$$x_j=y_j\;\;\forall j\in\{i+1,\ldots, d\}\quad\mbox{and}\quad x_i<y_i.$$
\begin{definition}[Order on $\mathbb N^d$, see \cite{Ahlswede}]\label{ab}
We define a strict and total order relation $\prec$ in $\mathbb N^d$ as follows. For $x=(x_1,\ldots, x_d)\in\mathbb N^d$, $y=(y_1,\ldots, y_d)\in\mathbb N^d$, $x\neq y$, we say that
$x\prec y$ if one of the following three instances occurs:
\begin{itemize}
\item[1)] $\max x<\max y$
\item[2)] $\max x=\max y$ and $\tilde x\prec_{ R}\tilde y$
\item[3)] $\max x=\max y>2$, $\tilde x=\tilde y$, $x_*\prec y_*$
\end{itemize}
Of course, $x \preceq y$ means $x \prec y$ or $x = y$.
\end{definition}
We note that in the third instance, since $\tilde x=\tilde y$, the value $\max x=\max y$ is found in $x$ and $y$ exactly at the same entries. If $k\in\{1,\ldots, d-1\}$ is the number of entries that realize such maximum, the relation $x_*\prec y_*$ is defined in the same way but in dimension $d-k$. Therefore the order $\prec$ is defined by induction, and in dimension one $x\prec y \iff x<y$. Given $x\in\mathbb N^d$, $y\in\mathbb N^d$, $x\neq y$, it is easy to check from the above definition that either $x\prec y$ or $y\prec x$, so that $\prec$ is a strict total order in $\mathbb N^d$.
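For concreteness, the recursive definition of $\prec$ can be transcribed directly; the following sketch (with Python tuples playing the role of points of $\mathbb N^d$, and function names of our own choosing) follows the three instances above:

```python
def precedes(x, y):
    """Strict total order x < y (the order from the definition); the
    right-to-left lexicographic comparison is realized on reversed tuples."""
    if x == y:
        return False
    mx, my = max(x), max(y)
    if mx != my:                                   # instance 1)
        return mx < my
    tx = tuple(v if v == mx else 1 for v in x)     # the tuple x tilde
    ty = tuple(v if v == my else 1 for v in y)     # the tuple y tilde
    if tx != ty:                                   # instance 2)
        return tx[::-1] < ty[::-1]
    # instance 3): drop the maximal entries and compare in lower dimension
    return precedes(tuple(v for v in x if v < mx),
                    tuple(v for v in y if v < my))

print(precedes((3, 2), (2, 3)))  # True
```

For example $(3,2)\prec(2,3)$: the maxima agree and $\widetilde{(3,2)}=(3,1)\prec_R(1,3)=\widetilde{(2,3)}$.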
\begin{theorem}[Special solutions, see \cite{Ahlswede}]\label{theorem:SpecialSolutions}
For each $n \in \mathbb N$ the string of the first $n$ elements in $\mathbb N^d$ with respect to the order $\prec$ is an $EIP^d$ minimizer.
\end{theorem}
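For very small $n$ the optimality of these initial segments can be verified by exhaustive search in the plane; a minimal sketch (the comparator compactly restates the order $\prec$ of Definition \ref{ab}, and we use the equivalent formulation of maximizing the number of bonds):

```python
import itertools
from functools import cmp_to_key

def precedes(x, y):
    # compact restatement of the order from the definition above
    if x == y:
        return False
    mx, my = max(x), max(y)
    if mx != my:
        return mx < my
    tx = tuple(v if v == mx else 1 for v in x)[::-1]
    ty = tuple(v if v == my else 1 for v in y)[::-1]
    if tx != ty:
        return tx < ty
    return precedes(tuple(v for v in x if v < mx), tuple(v for v in y if v < my))

def bonds(C):
    # number of nearest-neighbour pairs of a finite set C in Z^d
    S = set(C)
    return sum(tuple(v + (j == i) for j, v in enumerate(x)) in S
               for x in S for i in range(len(x)))

grid = sorted(itertools.product(range(1, 5), repeat=2),
              key=cmp_to_key(lambda a, b: -1 if precedes(a, b)
                             else (1 if precedes(b, a) else 0)))
for n in range(1, 7):  # initial segments of the order maximize the bond count
    assert bonds(grid[:n]) == max(bonds(C) for C in itertools.combinations(grid, n))
print(grid[:6])  # [(1, 1), (2, 1), (1, 2), (2, 2), (3, 1), (3, 2)]
```

The search is restricted to the box $\{1,\ldots,4\}^2$, which contains an optimal configuration for these small cardinalities.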
So in particular one obtains a nested sequence of solutions for any given cardinality. Our first aim is to provide a more geometric characterization of these point sets which in the sequel we refer to as `daisies'.
\begin{definition}[Perfect daisy]\label{rectdaisy}
Let $k\in\mathbb N$.
A nonempty set $Q\subset \mathbb Z^k$ is a $k$-dimensional {\it perfect daisy} if it is of the form
$$
Q=\{1,\ldots, p_1^{(k)}\}\times\ldots\times\{1,\ldots, p_k^{(k)}\}
$$
for some natural numbers $p_i^{(k)}$ (called the coefficients of the daisy) such that the sequence $\{1,\ldots, k\}\ni i\mapsto p_i^{(k)}$ is nonincreasing and $p_1^{(k)}-p_k^{(k)}\in \{0,1\}$.
Tuples $n = (n_1, \ldots, n_k) \in \mathbb N^k$ which are decreasing, i.e., $n_1 \ge \ldots \ge n_k$, and whose oscillation $n_1 - n_k$ is at most $1$ will sometimes be called {\it $DO1$-tuples}.
We also introduce the {\it value-change position} $s\in\{1,\ldots,k\}$, corresponding to a value change in a $DO1$-tuple $n$, and precisely
\begin{equation}\label{valuechange}
s=s(n_1,\ldots,n_k):=
\left\{\begin{array}{cl}
\min\{j\in\{2,\ldots, k\}: n_j<n_{j-1}\}&\quad \mbox{ if $n_1-n_k=1$},\\
1&\quad\mbox{ if $n_1-n_k=0$}.
\end{array}\right.
\end{equation}
\end{definition}
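The value-change position \eqref{valuechange} is elementary to compute; a small sketch for later reference (recall that a $DO1$-tuple is nonincreasing with oscillation $n_1-n_k$ at most $1$):

```python
def value_change(n):
    # value-change position s of a DO1-tuple n (nonincreasing, n[0]-n[-1] in {0, 1})
    if n[0] == n[-1]:
        return 1
    return next(j for j in range(2, len(n) + 1) if n[j - 1] < n[j - 2])

print(value_change((5, 5, 4, 4, 4)), value_change((3, 3, 3)))  # 3 1
```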
\begin{definition}[Daisy]
\label{dai} Let $d\in\mathbb N$.
A nonempty set $Q\subset \mathbb Z^d$ is a $d$-dimensional {\it daisy} if for some $h\in\{0,\ldots, d-1\}$ it is of the form
$$
Q=Q^{(d)}\cup Q^{(d-1)}\cup\ldots\cup Q^{(d-h)}, \quad\mbox{where}
$$
1) $Q^{(d)}$ is a $d$-dimensional perfect daisy (Definition {\rm \ref{rectdaisy}}), with coefficients ${q}_i^{(d)}$, $i\in\{1,\ldots, d\}$.
\noindent 2) A sequence $(s_k)\subset\{1,\ldots,d\}$ and the nonempty sets $Q^{(d-k)}$ are defined recursively for $k=1,\ldots, h$ as follows:
$Q^{(d-k)}=A_1^{(d-k)}\times\ldots\times A_d^{(d-k)}$, $S_{d,0}:=\{1,\ldots, d\}$,
\[
A_j^{(d-k)}:=\left\{\begin{array}{ccc}\{1,\ldots, q_j^{(d-k)}\}&\quad\mbox{if $j\in S_{d,k}:=\{1,\ldots, d\}\setminus\{s_1,\ldots,s_{k}\}$}\\
\{q_j^{(d-r_{j,k})}+1\}&\quad \mbox{if $j\in\{s_1,\ldots, s_k\}$},
\end{array}\right.
\]
where
$r_{j,k}:=\max\{n\in\{0,\ldots, k-1\}: j\in S_{d,n}\},$ and
\[
s_k:=\left\{
\begin{array}{lll}
\min S_{d,k-1}&\;\; \mbox{ if $q_i^{(d-k+1)}=q_j^{(d-k+1)}$ for any $i,j\in S_{d,k-1}$}
\\
\min\{j\in S_{d,k-1}: q_{\pi(j)}^{(d-k+1)}>q_j^{(d-k+1)}\}&\;\;\mbox{ otherwise},
\end{array}
\right.
\]
where, for $j\in S_{d,k}$ such that $j>\min S_{d,k}$, the notation is $\pi(j):=\max\{i\in S_{d,k}:i<j\}$.
If $Q^{(1)}\neq \emptyset$ we also conventionally denote by $s_d$ the unique element of $S_{d,d-1}$.
\noindent 3) For any $k\in\{1,\ldots,h\}$, the natural numbers $q_j^{(d-k)}$ are defined for $j\in S_{d,k}$ and satisfy
\begin{itemize}
\item[3.1)] for $i,j\in S_{d,k}$, there holds $i<j\Rightarrow q_i^{(d-k)}\ge q_j^{(d-k)}$,
\item[3.2)] for $J_1:=\min S_{d,k}$ and $J_2=\max S_{d,k}$,
there holds $q_{J_1}^{(d-k)}-q_{J_2}^{(d-k)}\in\{0,1\}$,
\item[3.3)] for all $i\in S_{d,k}$, there holds $q_i^{(d-k)}\le q_i^{(d-k+1)}$,
\item[3.4)] there exists $i\in S_{d,k}$ such that there holds $q_i^{(d-k)}< q_i^{(d-k+1)}$.
\end{itemize}
\end{definition}
\begin{remark}\rm
The sets $Q^{(d-k)}$, $k\in\{1,\ldots, h\}$, from Definition \ref{dai} are all nonempty. However, we shall often denote a $d$-dimensional daisy $Q$ as $Q^{(d)}\cup\ldots\cup Q^{(1)}$ even if $h<d-1$ in Definition \ref{dai}.
In that case, it is understood that $Q=Q^{(d)}\cup\ldots\cup Q^{(d-h)}$, where $Q^{(d-k)}\neq\emptyset$ if $k\in\{1,\ldots, h\}$ and $Q^{(d-k)}=\emptyset$ if $k\in\{h+1,\ldots, d-1\}$.
\end{remark}
The description in Definition \ref{dai} is rather involved mainly due to the fact that the precise description of the position of an individual constituent $Q^{(m)}$ which is merely an (isometric) copy of a perfect $m$-dimensional daisy is quite complicated. We therefore provide an alternative description in terms of a collection of perfect daisies with a compatibility condition.
\begin{definition}[Larger sequences]\label{larger} Let $(a_1,\ldots, a_n) \in \mathbb N^n$ and $(b_1,\ldots, b_{n+1}) \in \mathbb N^{n+1}$ be $DO1$-tuples. We say that $(b_1,\ldots, b_{n+1})$ is {\it larger} than $(a_1,\ldots, a_n)$ and write $(a_1,\ldots, a_n)\sqsubset(b_1,\ldots,b_{n+1})$ if $a_i\le b_{f(i)}$ for any $i\in\{1,\ldots, n\}$ and strict inequality holds for at least one of the indices $i=1,\ldots, n$. Here, $f$ is the increasing bijection from $\{1,\ldots, n\}$ onto $\{1,\ldots, n+1\}\setminus\{s\}$, where $s\in\{1,\ldots, n+1\}$ is the position corresponding to a value change in the sequence $(b_1,\ldots,b_{n+1})$, which is defined as in \eqref{valuechange}.
\end{definition}
\begin{proposition}\label{characterization}
A $d$-dimensional daisy $Q=Q^{(d)}\cup\ldots\cup Q^{(d-h)}$ identifies with a collection of $(d-k)$-dimensional perfect daisies (according to {\rm Definition \ref{rectdaisy}}), still denoted by $Q^{(d-k)}$, $k= 0,\ldots,h$, with coefficients $p_i^{(d-k)}$, $i=1,\ldots,d-k$, such that
$(p_1^{(d-k)},\ldots, p_{d-k}^{(d-k)})\sqsubset(p_1^{(d-k+1)},\ldots, p_{d-k+1}^{(d-k+1)})$ for any $k\in\{1,\ldots, h\}$ in the sense of {\rm Definition \ref{larger}}.
\end{proposition}
\begin{proof}
Given a daisy from Definition \ref{dai}, we introduce the increasing bijection $b:\{1,\ldots, d-k\}\to S_{d,k}$ and coefficients $p_i^{(d-k)}:=q_{b(i)}^{(d-k)}$, $i=1,\ldots, d-k$, so that
\[
\prod_{i\in S_{d,k}}\{1,\ldots, q_i^{(d-k)}\}=\prod_{i=1}^{d-k} \{1,\ldots,p_i^{(d-k)}\}.
\]
For any $k\in\{0,\ldots, h\}$, the sequence $\{1,\ldots, d-k\}\ni i\mapsto p_i^{(d-k)}$ is $DO1$, thanks to properties 3.1) and 3.2) of Definition \ref{dai}. In other words, any layer $Q^{(d-k)}$ can be identified with a $(d-k)$-dimensional perfect daisy with coefficients $p_i^{(d-k)}$, $i=1,\ldots,d-k$, according to Definition \ref{rectdaisy}, by dropping from any point $z=(z_1,\ldots, z_d)\in Q^{(d-k)}$ all the components $z_i$ such that $i\notin S_{d,k}$.
Moreover, by properties 3.3) and 3.4) of Definition \ref{dai} we infer that $(p_1^{(d-k)},\ldots, p_{d-k}^{(d-k)})\sqsubset(p_1^{(d-k+1)},\ldots, p_{d-k+1}^{(d-k+1)})$, for any $k\in\{1,\ldots, h\}$, in the sense of {\rm Definition \ref{larger}}.
On the other hand, given $DO1$-sequences $\{1,\ldots, d-k\}\ni i\mapsto p_i^{(d-k)}$ for $k\in\{0,\ldots,h\}$, suppose that $(p_1^{(d-k)},\ldots, p_{d-k}^{(d-k)})\sqsubset(p_1^{(d-k+1)},\ldots, p_{d-k+1}^{(d-k+1)})$ for any $k\in\{1,\ldots, h\}$. Then, the numbers $s_j$ from Definition \ref{dai} are uniquely identified in terms of the value-change positions of these sequences. Indeed, we define $Q^{(d)}$ as the perfect $d$-dimensional daisy with coefficients $p_1^{(d)},\ldots, p_d^{(d)}$, then we define $s_1$ as the value-change position for the sequence $(p_1^{(d)},\ldots, p_d^{(d)})$ according to formula \eqref{valuechange}, $S_{d,1}:=\{1,\ldots,d\}\setminus\{s_1\}$ and we define for $i\in S_{d,1}$ the numbers $q^{(d-1)}_i:=p^{(d-1)}_{g_1(i)}$, where $g_1$ is the increasing bijection of $S_{d,1}$ onto $\{1,\ldots,d-1\}$.
Then we define $s_2$ from $S_{d,1}$ and from the sequence $(q_i^{(d-1)})_{i\in S_{d,1}}$ as done in Definition \ref{dai}. Therefore, we recursively define, for $k=2,\ldots, h$, the numbers $q_i^{(d-k)}:=p^{(d-k)}_{g_k(i)}$, where $g_k$ is the increasing bijection of $S_{d,k}$ onto $\{1,\ldots, d-k\}$, and then $s_{k+1}$ from $S_{d,k} = \{1, \ldots, d\}\setminus\{s_1,\ldots, s_k\}$ and the coefficients $q_i^{(d-k)}$ as done in Definition \ref{dai}. The relation $\sqsubset$ between the sequences $p_i^{(k)}$ ensures that properties 3.3) and 3.4) of Definition \ref{dai} are satisfied.
\end{proof}
\begin{remark}\label{rmk:daisy-constituents}\rm
A $d$-dimensional daisy $Q=Q^{(d)}\cup\ldots\cup Q^{(1)}$ can be characterized either by the coefficients $q_i^{(k)}$ from Definition \ref{dai} or by the coefficients $p_i^{(k)}$ from Proposition \ref{characterization}. In the sequel we will also refer to a subset of $\mathbb Z^d$ which is an isometric copy of an $m$-dimensional daisy ($m \le d$) simply as a daisy (as, e.g., in Proposition \ref{daisysection} and Corollary \ref{coro1} below). In particular, the constituents $Q^{(m)}$ of $Q$ are $m$-dimensional daisies.
\end{remark}
In order to see that daisies are in fact the solutions found in Theorem \ref{theorem:SpecialSolutions} we note that, in view of Definition \ref{dai} and Proposition \ref{characterization}, daisies can also be characterized by matrices. To this end, we let $\mathcal{A}$ be the set of $(h+1) \times d$ matrices $ A = (a_{i,j})_{1 \le i \le h+1 \atop 1 \le j \le d}$ with $h \le d-1$ whose entries consist of dots and numbers in the following way. The first line $(a_{1,1}, \ldots, a_{1,d})$ is a $DO1$-tuple. The second line has a dot at the value change position $s_1 = s(a_{1,1}, \ldots, a_{1,d})$ of the first line, i.e., $a_{2,s_1} = \cdot$, and $(a_{2,1}, \ldots, a_{2,s_1-1},a_{2,s_1+1},\ldots,a_{2,d})$ is $DO1$ with $(a_{2,1}, \ldots, a_{2,s_1-1},a_{2,s_1+1},\ldots,a_{2,d}) \sqsubset (a_{1,1}, \ldots, a_{1,d})$. In general, the $i$-th line consists of $i-1$ dots at the positions $s_1, \ldots, s_{i-1}$, where $s_{k}$ is the value change position of the sequence of numbers in the $k$-th line, $k = 1, \ldots, i-1$, and the tuple of numbers that is obtained by omitting these dots is a $(d-i+1)$-dimensional $DO1$-tuple which is smaller (wrt $\sqsubset$) than the sequence of numbers in the previous line.
Note that the set of daisies is in one-to-one correspondence with the set $\mathcal{A}$: If we denote the sequence of numbers in the $i$-th line of $A \in \mathcal{A}$ by $(p^{(d-i+1)}_1, \ldots, p^{(d-i+1)}_{d-i+1})$, $A$ corresponds to the daisy $Q = Q^{(d)} \cup \ldots \cup Q^{(d-h)}$ with $Q^{(d-i+1)} = \{1, \ldots, p_1^{(d-i+1)}\} \times \ldots \times \{1, \ldots, p_{d-i+1}^{(d-i+1)}\}$, $i = 1, \ldots, h+1$, and, conversely, each daisy arises in such a way, see Proposition \ref{characterization}. With respect to the geometric position of the individual perfect daisy $Q^{(d-i+1)}$, as detailed in Definition \ref{dai}, we note that the numbers within the $i$-th line are also the coefficients $q_j^{(d-i+1)}$ from Definition \ref{dai} and the dots occupy the positions $s_j$ for $j\in \{1,\ldots, {i-1}\}$. A number $a$ in the matrix corresponds to the factor $\{1,\ldots, a\}$, and any dot in a column corresponds to the factor $\{a+1\}$, where $a$ is the first number that is found going up in that column. Finally, we observe that the cardinality of the daisy is the sum over the lines of the products of all the numbers in each line.
\noindent {\em Example.} Two $5$-dimensional examples of $Q=\cup_{k=1}^5 Q^{(k)}$ are
\begin{equation*}
\begin{pmatrix} 5 & 5 & 4 & 4 & 4\\
4 & 3 & \cdot & 3 & 3\\
3 & \cdot & \cdot & 3 & 2\\
2 & \cdot & \cdot & 2 & \cdot \\
\cdot & \cdot & \cdot & 1 & \cdot
\end{pmatrix}\qquad\qquad
\begin{pmatrix} 7 & 7 & 7 & 7 & 7\\
\cdot & 4 & 3 & 3 & 3\\
\cdot & 3 & \cdot & 3 & 2
\end{pmatrix}
\end{equation*}
In the second example, $Q^{(1)}=Q^{(2)}=\emptyset$.
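The cardinality rule stated above (line-by-line sum of the products of the numbers) is immediate to apply to the two matrices of this example; a quick check, with `None` standing for a dot:

```python
from math import prod

# The two 5-dimensional daisy matrices displayed above.
A1 = [[5, 5, 4, 4, 4],
      [4, 3, None, 3, 3],
      [3, None, None, 3, 2],
      [2, None, None, 2, None],
      [None, None, None, 1, None]]
A2 = [[7, 7, 7, 7, 7],
      [None, 4, 3, 3, 3],
      [None, 3, None, 3, 2]]

def cardinality(A):
    # sum over the lines of the products of the (non-dot) numbers
    return sum(prod(a for a in line if a is not None) for line in A)

print(cardinality(A1), cardinality(A2))  # 1731 16933
```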
\noindent {\em Example.} Two-dimensional daisies are subsets of $\mathbb Z^2$ of the form
\begin{align}\label{eq:two-d-daisy}
D^{(2)}_{a,b,c}
:=\left\{\begin{array}{ccc}(\{1,\ldots, a\}\times\{1,\ldots, b\})\cup(\{a+1\}\times\{1,\ldots, c\})&\quad\mbox{ if $b=a$,}\\
(\{1,\ldots, a\}\times\{1,\ldots, b\})\cup(\{1,\ldots, c\}\times\{b+1\})&\quad\mbox{ if $b+1=a$,}
\end{array}\right.
\end{align}
for given $b\in\mathbb N$, $a\in\{b,b+1\}$ and $c\in\{0,\ldots,a-1\}$, where it is understood that $\{1,\ldots, c\}=\emptyset$ in case $c=0$.
Indeed, we have $D^{(2)}_{a,b,c}=Q^{(2)}\cup Q^{(1)}$, with $q_1^{(2)}=a$ and $q^{(2)}_2=b$ representing the coefficients of the perfect daisy $Q^{(2)}$. Moreover, we have $S_{2,0}=\{1,2\}$, $S_{2,1}=S_{2,0}\setminus \{s_1\}$, where
\[s_1=\left\{
\begin{array}{ll}
1&\quad\mbox{if $b=a$}\\
2&\quad\mbox{if $b+1=a$},
\end{array}\right.
\]
and $Q^{(1)}=A^{(1)}_1\times A^{(1)}_2$, where
\[
A_1^{(1)}=\left\{\begin{array}{ll}\{1,\ldots, c\}&\quad \mbox{if $s_1=2$},\\
\{1+a\}&\quad\mbox{if $s_1=1$},
\end{array}\right.\qquad \quad
A_2^{(1)}=\left\{\begin{array}{ll}\{1,\ldots, c\}&\quad\mbox{if $s_1=1$},\\
\{1+b\}&\quad\mbox{if $s_1=2$}.
\end{array}\right.
\]
Or simply in matrix form
\[\begin{pmatrix} a & b\\
\cdot & c \\
\end{pmatrix} \quad\mbox{ if $a=b$},\qquad\quad
\begin{pmatrix} a & b\\
c & \cdot \\
\end{pmatrix} \quad\mbox{if $a=b+1$},
\]
reduced to $(a\;\;b)$ if $c=0$ (i.e. $Q^{(2)}=\emptyset$).
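In this two-dimensional description, the uniqueness statement of Theorem \ref{unique} below amounts to saying that every cardinality $n=ab+c$ is realized by exactly one admissible triple $(a,b,c)$; this is quickly confirmed by enumeration (a sketch, with a function name of our own choosing):

```python
def two_d_daisies(n):
    # all admissible triples (a, b, c): b >= 1, a in {b, b+1},
    # 0 <= c <= a - 1 and a*b + c = n
    return [(a, b, n - a * b)
            for b in range(1, n + 1) for a in (b, b + 1)
            if 0 <= n - a * b <= a - 1]

# Exactly one two-dimensional daisy per cardinality.
for n in range(1, 1000):
    assert len(two_d_daisies(n)) == 1
print(two_d_daisies(7))  # [(3, 2, 1)]
```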
\begin{theorem}[Daisies are unique and $EIP^d$ minimizers]\label{unique}
For $n,d\in\mathbb N$, there exists a unique $d$-dimensional daisy $Q$ such that $\#Q=n$. Moreover, it coincides with the string of the first $n$ elements in $\mathbb N^d$ with respect to the order $\prec$. In particular, $Q$ is an $EIP^d$ minimizer.
\end{theorem}
\begin{proof}
In view of Theorem \ref{theorem:SpecialSolutions} and our identification of daisies with matrices in $\mathcal{A}$, it suffices to show that there is a bijective mapping $\Phi : \mathcal{A} \to \mathbb N^d$ such that the daisy corresponding to $A \in \mathcal{A}$ is given by $\{ m \in \mathbb N^d : m \preceq \Phi(A) \}$.
To define such $\Phi$ consider the last row $(a_{h+1,1}, \ldots, a_{h+1,d})$ of the daisy matrix $A = (a_{ij})_{1 \le i \le h+1 \atop 1 \le j \le d} \in \mathcal{A}$ and replace each dot $a_{h+1,j}$ with $a_{i,j}+1$ if $a_{i,j}$ is the first number that is found going up in column $j$. We define $n = \Phi(A) \in \mathbb N^d$ to be the $d$-tuple thus obtained.
Conversely, suppose a tuple $n = (n_1, \ldots, n_d) \in \mathbb N^d$ is given. We define an $A = \Psi(n) \in \mathcal{A}$ by induction on the lines of $A$. If $n$ is a $DO1$-sequence, we stop and set $A = n$ (a perfect daisy). If $n$ is not a $DO1$-sequence, we consider the rightmost entry $n_j$ for which the maximum is attained, i.e., $n_j = \max\{n_1, \ldots, n_d\} > n_{j+1}, \ldots, n_{d}$ and let $a_{1,1} = \ldots = a_{1,j-1} = n_j$, $a_{1,j} = \ldots = a_{1,d} = n_j-1$. (This is the largest $DO1$-sequence which is dominated by $n$.) We also fill the rest of the $j$-th column with dots. If $n' = (n_1, \ldots, n_{j-1}, n_{j+1}, \ldots, n_d)$ is a $DO1$-sequence, we set $(a_{21}, \ldots, a_{2,j-1}, a_{2,j+1}, \ldots a_{2,d}) = n'$ and stop (obtaining a daisy with $h = 1$). If not, we continue this procedure until a $DO1$-sequence is reached. Note that our choice of the rightmost maximal entry as the value-change position for the constructed $DO1$-sequence guarantees that indeed the sequence of numbers in a line of $A$ is always larger than the sequence of numbers in the next line of $A$.
The assertion of Theorem \ref{unique} now follows from the following two observations: $\Phi$ and $\Psi$ are inverse to each other and the daisy described by an $A \in \mathcal{A}$ is given by $\{ m \in \mathbb N^d : m \preceq \Phi(A) \}$.
In order to see that $\Psi \circ \Phi = \mathrm{id}$ consider $A \in \mathcal{A}$ and set $n = \Phi(A)$. We observe that since the $DO1$-sequences of numbers within the lines of $A$ are ordered wrt $\sqsubset$, the index $s_1$ of the rightmost maximum of $n$ is the value change position of the first line and its value $n_{s_1}$ is given by $a_{1,s_1}+1$. This shows that the first line of $\Psi \circ \Phi(A)$ is indeed $(a_{1,1}, \ldots, a_{1,d})$. Now deleting the first line and the $s_1$-th column, the same argument for the remaining part shows that the second line is reproduced correctly as well. Continuing in this way, we indeed get that $\Psi \circ \Phi = \mathrm{id}$.
To prove that also $\Phi \circ \Psi = \mathrm{id}$ we start with $n \in \mathbb N^d$ and set $A = \Psi(n)$. If $n$ is a $DO1$-sequence, clearly $\Phi(A) = n$. If not, then denoting by $j$ the largest index for which $n_j = \max\{n_1, \ldots, n_d\}$, we have $a_{1,j} = n_j-1$ and $a_{i,j} = \cdot$ for $i \ge 2$. By definition of $\Phi$ this gives $(\Phi(A))_j = n_j$. If $n' = (n_1, \ldots, n_{j-1}, n_{j+1}, \ldots, n_d)$ is a $DO1$-sequence, we have also set $(a_{2,1}, \ldots, a_{2,j-1}, a_{2,j+1}, \ldots a_{2,d}) = n'$ and so $\Phi(A) = n$. If not, we continue repeating the above step to finally obtain that indeed $\Phi(A) = n$.
Now suppose $A \in \mathcal{A}$ representing a daisy $Q = Q^{(d)} \cup \ldots \cup Q^{(d-h)}$ is given. We define $\tilde{A} = (\tilde{a}_{i,j})_{1 \le i \le h+1 \atop 1 \le j \le d}$ $(h \le d-1)$ by replacing each dot in $A$ with the coordinate it represents: For each column $j$, if $a_{1,j}, \ldots, a_{i,j} \neq \cdot$ and $a_{i+1,j} = \ldots = a_{h+1,j} = \cdot$, then $\tilde{a}_{i+1,j} = \ldots = \tilde{a}_{h+1,j} = a_{i,j} + 1$ while $\tilde{a}_{k,j} = a_{k,j}$ for $1 \le k \le i$. Recall that here $j$ is a value-change position of the $i$-th line. So in fact the lines of $\tilde{A}$ are increasing with respect to $\prec$: $(\tilde{a}_{1,1}, \ldots, \tilde{a}_{1,d}) \prec \ldots \prec (\tilde{a}_{h+1,1}, \ldots, \tilde{a}_{h+1,d})$. Also, by construction each constituent $Q^{(d-i+1)}$, $i \ge 2$, consists of precisely those points $m \in \mathbb N^d$ which satisfy $(\tilde{a}_{i-1,1}, \ldots, \tilde{a}_{i-1,d}) \prec m \preceq (\tilde{a}_{i,1}, \ldots, \tilde{a}_{i,d})$, while $Q^{(d)} = \{m \in \mathbb N^d : m \preceq (\tilde{a}_{1,1}, \ldots, \tilde{a}_{1,d})\}$. Thus, $Q = \{ m \in \mathbb N^d : m \preceq \Phi(A) \}$.
\end{proof}
\begin{remark}[Explicit construction of daisies]\label{rmk:daisy-construction}\rm
Explicitly, one finds the coefficients $p_i^{(d-k)}$, $i=1,\ldots,d-k$, $k=0,\ldots, h$ of a daisy $Q=Q^{(d)}\cup\ldots\cup Q^{(d-h)}$ of given cardinality $n$ inductively: $(p_1^{(d)},\ldots, p_{d}^{(d)})$ is the largest $DO1$-tuple wrt $\prec$ of length $d$ such that $p_1^{(d)} \cdot \ldots \cdot p_{d}^{(d)} \le n$ and, for $k \ge 1$, $(p_1^{(d-k)},\ldots, p_{d-k}^{(d-k)})$ is the largest $DO1$-tuple wrt $\prec$ of length $d-k$ such that $p_1^{(d-k)} \cdot \ldots \cdot p_{d-k}^{(d-k)} \le n - \# Q^{(d)} - \ldots - \# Q^{(d-k+1)}$ as long as this number is positive. If it vanishes for the first time at step $k$, let $h = k-1$. Note that indeed
\[ (p_1^{(d-k)},\ldots, p_{d-k}^{(d-k)})\sqsubset(p_1^{(d-k+1)},\ldots, p_{d-k+1}^{(d-k+1)}) \]
for any $k\in\{1,\ldots, h\}$ since by construction, if $s(p_1^{(d-k+1)},\ldots, p_{d-k+1}^{(d-k+1)}) = s$ and $p_{d-k+1}^{(d-k+1)} =: p$, then
\[ p_1^{(d-k)} \cdot \ldots \cdot p_{d-k}^{(d-k)}
< (p+1)^s p^{d-k+1-s} - (p+1)^{s-1} p^{d-k+2-s}
= (p+1)^{s-1} p^{d-k+1-s} \]
and so $(p_1^{(d-k)}, \ldots, p_{d-k}^{(d-k)}) \prec (p_1^{(d-k+1)},\ldots, p_{s-1}^{(d-k+1)}, p_{s+1}^{(d-k+1)}, \ldots, p_{d-k+1}^{(d-k+1)})$.
\end{remark}
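The inductive construction of this remark can be transcribed directly; the routine below computes, for given $n$ and $d$, the coefficient tuples of the (unique) daisy with $n$ points. It is a sketch, with function names of our own choosing; the largest $DO1$-tuple with bounded product is found by maximizing first the minimal entry $p$ and then the number of entries equal to $p+1$.

```python
from math import prod

def largest_do1(m, r):
    # largest DO1-tuple of length m with product at most r (assumes r >= 1)
    p = 1
    while (p + 1) ** m <= r:
        p += 1
    j = 0  # number of leading entries equal to p + 1
    while j < m - 1 and (p + 1) ** (j + 1) * p ** (m - j - 1) <= r:
        j += 1
    return (p + 1,) * j + (p,) * (m - j)

def daisy_lines(n, d):
    # coefficient tuples (p_1^(d-k), ..., p_{d-k}^(d-k)), k = 0, 1, ...
    lines, m = [], d
    while n > 0:
        t = largest_do1(m, n)
        lines.append(t)
        n -= prod(t)
        m -= 1
    return lines

print(daisy_lines(1731, 5))
# [(5, 5, 4, 4, 4), (4, 3, 3, 3), (3, 3, 2), (2, 2), (1,)]
```

For $n=1731$ and $n=16933$ in dimension $5$ this reproduces exactly the sequences of numbers in the two matrices displayed in the previous section.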
We conclude this section with a property of faces and sections of daisies. There is a similar result for general $EIP^d$ minimizers, see Corollary \ref{coro1}.
\begin{definition}[Sections]\label{sect}
Let $C\subset\mathbb Z^d$ be a nonempty set. For $s\in\{1,\ldots, d\}$ and $k\in\mathbb Z$ we define the $(d-1)$-dimensional {\it section} $S_{s,k}(C) := \{x\in C:\mathbf e_s\cdot x=k\}$ of $C$.
\end{definition}
\begin{definition}[Faces]\label{faces}
If $\emptyset \neq C\subset\mathbb Z^d$, any nonempty $(d-1)$-dimensional section $S_{s,k}(C)$ for which $S_{s,k+1}(C) = \emptyset$ or $S_{s,k-1}(C) = \emptyset$ is called a {\it (lateral) face} of $C$ (with normal $\mathbf e_s$). If $P$ is a perfect $d$-dimensional daisy and $m \in \{0, \ldots, d-2\}$, we also define an {\it $m$-dimensional face} of $P$ to be any (nonempty) subset of the form $L_1 \cap \ldots \cap L_{d-m}$, where $L_i$ is a lateral face of $P$ with normal $\mathbf e_{s_i}$ and $1 \le s_1 < \ldots < s_{d-m} \le d$.
\end{definition}
\begin{proposition}\label{daisysection}
Each $(d-1)$-dimensional section of a $d$-dimensional daisy is a $(d-1)$-dimensional daisy.
\end{proposition}
\begin{proof}
Let $Q$ be a $d$-dimensional daisy and wlog assume that $S_{s,k}(Q)\neq\emptyset$. Let $P : S_{s,k}(\mathbb N^d) \to \mathbb N^{d-1}$ be the bijective mapping $P(z_1, \ldots, z_{s-1}, k, z_{s+1}, \ldots z_d) = (z_1, \ldots, z_{s-1}, z_{s+1}, \ldots z_d)$. We identify $S_{s,k}(Q)$ with $\mathcal S := P(S_{s,k}(Q))$. Now observe that each point of $\mathcal S$ can be written as $P(v)$ for some $v\in S_{s,k}(Q)\subseteq Q$ and each point in $\mathbb N^{d-1}\setminus \mathcal S$ can be written as $P(w)$ for some $w\in S_{s,k}(\mathbb N^{d}\setminus Q)\subseteq \mathbb N^d\setminus Q$. Therefore, we have $v\prec w$ by Theorem \ref{unique}. Since $w_s = k = v_s$ this also gives $ \mathcal S \ni P(v)\prec P(w)\notin \mathcal S $. We have thus proven that for any $x\in\mathcal S$ and any $y\notin\mathcal S$, there holds $x\prec y$. This shows that $\mathcal S$ is the string of the first $\#\mathcal S$ points of $\mathbb N^{d-1}$ with respect to the order relation $\prec$. By Theorem \ref{unique}, $\mathcal S$ is a daisy.
\end{proof}
\section{Lower bound}\label{lower}
\begin{definition}[Scaling parameter]\label{parameter}
For $\ell\in\mathbb N$, $d\in\mathbb N$ we define $h_{\ell,d}:=\ell^{\,2^{1-d}}$.
\end{definition}
The next statement makes use of the notation of Definition \ref{sect}. It extends some rearrangement procedures that have already been introduced in \cite{MPS, MPSS}, whose main property is the monotonicity of the edge perimeter.
\begin{proposition}[Decreasing rearrangement]\label{rear}
Let $C\subset\mathbb Z^d$ be a bounded nonempty set. Let $s\in\{1,\ldots, d\}$ and $k\in\mathbb Z$. Let $K_s:=\{k_1,\ldots, k_n\}$ denote the finite strictly increasing sequence of integers such that $S_{s,k}(C)\neq\emptyset\iff k\in K_s$. Let $\sigma:\{1,\ldots,n\}\to K_s$ be a bijection such that $\#S_{s,\sigma(i)}(C)\ge \#S_{s,\sigma(j)}(C)$ for any $1\le i\le j\le n$.
Let $D^{(d-1)}_{s,k}$ be the $(d-1)$-dimensional daisy with the same cardinality as $S_{s,k}(C)$. Finally, let $C_s\subset \mathbb Z^d$ denote the decreasing rearrangement of $C$ in the $\mathbf e_s$ direction, i.e., the unique configuration whose nonempty sections orthogonal to $\mathbf e_s$ are given by $P S_{s,k}(C_s)=D^{(d-1)}_{s,\sigma(k)}$, $k=1,\ldots, n$, where $P(z_1, \ldots, z_d) = (z_1, \ldots, z_{s-1}, z_{s+1}, \ldots z_d)$. Then $\#\Theta_d(C_s)\le \#\Theta_d(C)$.
\end{proposition}
\begin{proof}
For any $k\in K_s$, we look at $(d-1)$-dimensional configurations and we have $b(D^{(d-1)}_{s,k})\ge b(S_{s,k}(C))$, since daisies minimize the edge perimeter and maximize the number of bonds. This shows that the total number of bonds in directions that are orthogonal to $\mathbf e_s$ does not increase after the rearrangement. If $n=1$, the proof is concluded. Suppose instead that $n>1$, and we are left to check the number $b_s(\cdot)$ of bonds in the direction of $\mathbf e_s$. For $k\in K_s$ we use the shorthand $f(k):=\#S_{s,k}(C)=\# D^{(d-1)}_{s,k}$. Moreover, we define $I \in\{ 1, \ldots, n \}$ such that $k_I = \sigma(1)$ so that $f(k_I)\ge f(k_i)$ for any $i\in\{1,\ldots, n\}$.
By counting the bonds in the $\mathbf e_s$ direction
as sum of bonds between couples of consecutive sections,
we have
\[\begin{aligned}
b_s(C)&\le \sum_{i=2}^n \min\{f(k_{i-1}), f(k_i)\}\le
\sum_{i\in\{1,\ldots, n\}\setminus\{I\}} f(k_i)=\sum_{i=2}^n f(\sigma(i))=b_s(C_s),
\end{aligned}
\]
where the second inequality is obtained by using $\min\{f(k_{i-1}), f(k_i)\}\le f(k_{i-1})$ for $i\in\{2,\ldots, I\}$ (only in case $I>1$) and $\min\{f(k_{i-1}), f(k_i)\}\le f(k_i)$ if $i\in\{I+1,\ldots, n\}$.
The proof is concluded.
\end{proof}
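In the plane the rearrangement takes a particularly simple form: one-dimensional daisies are just segments $\{1,\ldots,m\}$, so rearranging in the $\mathbf e_2$ direction replaces every nonempty row by a left-aligned segment and stacks the rows by decreasing length. The monotonicity of the number of bonds (equivalently, of the edge perimeter) can then be sampled at random; a sketch:

```python
import itertools, random

def bonds(C):
    # nearest-neighbour pairs in C (a subset of Z^2), each counted once
    S = set(C)
    return sum(((x + 1, y) in S) + ((x, y + 1) in S) for (x, y) in S)

def rearrange(C):
    # decreasing rearrangement of C in the e_2 direction: replace each
    # nonempty row by a segment {1, ..., m} and stack rows by size
    sizes = sorted((sum(1 for p in C if p[1] == k) for k in {p[1] for p in C}),
                   reverse=True)
    return {(x, y) for y, m in enumerate(sizes, start=1) for x in range(1, m + 1)}

random.seed(0)
box = list(itertools.product(range(5), repeat=2))
for _ in range(300):
    C = random.sample(box, 8)
    assert bonds(rearrange(C)) >= bonds(C)  # the number of bonds never decreases
```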
Arguing by contradiction we deduce the following result (whose converse is false as seen already in dimension $2$ by taking
a configuration such as $\{(1,1),(1,2),\ldots,(1,n)\}$, $n\in\mathbb N, n\ge 4$).
\begin{corollary}\label{coro1}
Let $C$ be an $EIP^d$ minimizer. Then each $(d-1)$-dimensional section is an $EIP^{d-1}$ minimizer.
\end{corollary}
\begin{proof}
If $S_{s,k}(C)$ were not an $EIP^{d-1}$ minimizer, then $\#\Theta_{d-1}(S_{s,k}(C)) > \#\Theta_{d-1}(D_{s,k}^{(d-1)})$ and the above proof shows $\#\Theta_{d}(C_s) < \#\Theta_{d}(C)$.
\end{proof}
\begin{lemma}\label{easy2}
Let $\ell\in\mathbb N$.
Let $p\in\mathbb N$ be such that $p<\ell$. Suppose that
\[
M:=\{1,\ldots, \ell-p\}\times\{1,\ldots,\ell\}^{d-2}\times\{1,\ldots, \ell+p\}
\]
is an $EIP^d$ minimizer. Then
\[
Q:=\{1,\ldots, \ell-p\}\times\{1,\ldots,\ell\}^{d-1}
\]
is an $EIP^d$ minimizer as well.
\end{lemma}
\begin{proof}
We observe that $M=Q\cup T$, where $T:=\{1,\ldots,\ell-p\}\times\{1,\ldots,\ell\}^{d-2}\times\{\ell+1,\ldots,\ell+p\}.$
The number of bonds connecting these two blocks is $(\ell-p)\ell^{d-2}$.
We take the decreasing rearrangement (see Proposition \ref{rear}) of $M$ in the direction of $\mathbf e_d$. We get a configuration $\overline{M}$ whose sections $S_{d,k}(\overline M)$ are nonempty for $k=1,\ldots, \ell+p$ so that $\overline M=\bigcup_{k=1}^{\ell+p} S_{d,k}(\overline M)$. By considering $S_{d,k}(\mathbb Z^d)$ as a copy of $\mathbb Z^{d-1}$, each such section identifies with the $(d-1)$-dimensional daisy of cardinality $(\ell-p)\ell^{d-2}$. Since $M$ is an $EIP^d$ minimizer, then $\overline{M}$ is an $EIP^d$ minimizer as well by Proposition \ref{rear}, and it is itself a union of two blocks $\overline Q$ and $\overline T$, where
\[
\overline Q:= \bigcup_{k=1}^{\ell} S_{d,k}(\overline M),\qquad \overline T:= \bigcup_{k=\ell+1}^{\ell+p} S_{d,k}(\overline M),
\]
with $\# Q=\#\overline Q$, $b(Q)=b(\overline Q)$, $\# T=\#\overline T$, $b(T)=b(\overline T)$, and
\begin{equation}\label{qt}
b(\overline M)=b(\overline T)+b(\overline Q) + (\ell-p)\ell^{d-2}.
\end{equation}
Now, assuming that $Q$ is not an $EIP^d$ minimizer, we shall prove that $\overline M$ is not an $EIP^d$ minimizer either, thus reaching a contradiction and concluding the proof. Indeed, if $Q$ is not an $EIP^d$ minimizer, we consider the daisy $D$ with the same cardinality so that
\begin{equation}\label{DQ}
(\ell-p)\ell^{d-1}=\#D=\#Q=\#{\overline Q}
\end{equation}
and
\begin{equation}\label{bDQ}
b(D) > b(Q)=b(\overline Q).
\end{equation}
$D$ is of course contained in the daisy $\{1,\ldots, \ell\}^d$ whose cardinality is larger, since daisies are ordered by cardinality, see Theorem \ref{unique}. In particular, by looking at its sections in the direction of $\mathbf e_d$, we see that for some $1 \leq h \leq \ell$ we have $S_{d,k}(D)\neq\emptyset$ if and only if $k\in\{1,\ldots, h\}$. Moreover, each nonempty section $S_{d,k}(D)$ identifies with $EIP^{d-1}$ minimizers (see Corollary \ref{coro1}). We claim that $S_{d,1}(D)$ identifies with a $(d-1)$-dimensional daisy and $\#S_{d,1}(D)\ge(\ell-p)\ell^{d-2}$.
Indeed, the fact that $S_{d,1}(D)$ is a $(d-1)$-dimensional daisy comes from Proposition \ref{daisysection}.
Moreover, from Definition \ref{dai} it is possible to see that $\#S_{d,i}(D)\ge \#S_{d,j}(D)$ if $1\le i\le j$: this fact can be alternatively deduced from Theorem \ref{unique}, since Definition \ref{ab} readily implies that if $x=(x_1,\ldots, x_{d})\in D$, then $(x_1,\ldots, x_{d-1}, y_d)\prec x$ for any $y_d\in\{1,\ldots, x_d-1\}$.
Therefore $\#D\le h\, \#S_{d,1}(D)\le \ell\, \#S_{d,1}(D)$, so that if $\#S_{d,1}(D)<(\ell-p)\ell^{d-2}$ were true it would lead to $\#D<(\ell-p)\ell^{d-1}$, which contradicts \eqref{DQ}. The claim is proved.
We take
a rigid motion of $\overline T$ in the direction of $\mathbf e_d$, i.e., we introduce $T^*:= \overline T-(\ell+p)\mathbf e_d$, so that
\[
T^*=\bigcup_{k=1-p}^{0} S_{d,k}( T^*)
\]
Then we let $ M^*:=D\cup T^*$. The cardinality of $ M^*$ is that of $\overline M$, since \eqref{DQ} holds and since obviously $\# T^*=\#\overline T$.
Similarly, $b(T^*)=b(\overline T)$. Most importantly, $$\mathrm{dist}(D, T^*) =\mathrm{dist}(S_{d,1}(D),S_{d,0}( T^*))=1$$ and the number of bonds connecting $D$ and $T^*$ is equal to $\# S_{d,0}( T^*)$: indeed, each point of $S_{d,0}( T^*)+\mathbf e_d$ belongs to $S_{d,1}(D)$, because we have already proven that $S_{d,1}(D)$ identifies with a $(d-1)$-dimensional daisy whose cardinality is at least $(\ell-p)\ell^{d-2}$, while $S_{d,0}(T^*)$ identifies with a $(d-1)$-dimensional daisy of cardinality $(\ell-p)\ell^{d-2}$ (and we use the fact that daisies are ordered by cardinality). This allows us to conclude, together with \eqref{qt} and \eqref{bDQ}, that
\[
b(M^*)=(\ell-p)\ell^{d-2}+b( T^*)+ b(D) > (\ell-p)\ell^{d-2}+b(\overline T)+ b(\overline Q)=b(\overline M),
\]
contradicting the fact that $\overline M$ is an $EIP^d$ minimizer and thus concluding the proof.
\end{proof}
The next lemma provides the lower bound.
\begin{lemma}\label{lemma:lb}
Let $d\in\{2,3,\ldots\}$. Let $\ell\in\mathbb N$.
The configuration
\[
P_{\ell,d,p}:=\{1,\ldots, \ell-p\}\times\{1,\ldots,\ell\}^{d-1}
\]
is an $EIP^d$ minimizer for any $p\in\mathbb N$ such that $p\le \lfloor h_{\ell,d}\rfloor$.
\end{lemma}
\begin{proof}
The statement holds if $d=2$. Indeed, the configuration $\{1,\ldots,\ell-p\}\times\{1,\ldots,\ell \}$ is an $EIP^2$ minimizer for any $p\in\{1,\ldots,\lfloor\sqrt{\ell}\rfloor \}$ as shown in \cite[Lemma 4.1]{MPSS}. We include a short alternative argument here: without loss of generality assume that $p \ge 2$ (and hence $\ell \ge 4$), since otherwise the claim follows from $P_{\ell,2,p}$ being a daisy. Then $D = D^{(2)}_{a,b,c}$ with $a = \ell - \lceil \frac{p}{2} \rceil$, $b = \ell - \lfloor \frac{p}{2} \rfloor -1$, and $c = \ell - \lceil \frac{p}{2} \rceil (\lfloor \frac{p}{2} \rfloor + 1)$ is a two-dimensional daisy (see \eqref{eq:two-d-daisy}), with $p \ge 2$ guaranteeing $c \le a-1$ and $c \ge \ell - ((\frac{p}{2})^2 + \frac{p}{2}+1) \ge \ell - \frac{p^2}{2} -1 \ge 1$ as $p \le \sqrt{\ell}$. The assertion then follows from $\#D = \ell^2 - \ell p = \# P_{\ell,2,p}$ and $\#\Theta_2 (D) = 4 \ell - 2p = \#\Theta_2(P_{\ell,2,p})$.
Let $d\ge 3$. We prove the statement by induction on the dimension: we assume that $P_{\ell,d-1,p}$ is an $EIP^{d-1}$ minimizer for any $p\le \lfloor h_{\ell,d-1} \rfloor$ and we aim at showing that $P_{\ell,d,p}$ is an $EIP^d$ minimizer for any $p\le\lfloor h_{\ell,d}\rfloor$.
Thanks to Lemma \ref{easy2}, it is enough to show that
\[
M_{\ell,d,p}:=\{1,\ldots, \ell-p\}\times\{1,\ldots, \ell+p\}\times\{1,\ldots,\ell\}^{d-2}
\]
is an $EIP^d$ minimizer for any $p\le\lfloor h_{\ell,d}\rfloor$. In order to check this, we rearrange $M_{\ell,d,p}$, without losing bonds, to
\[
\widetilde M_{\ell,d,p}
:=(\{1,\ldots,\ell\}\times\{1,\ldots,\ell-p\}\times\{1,\ldots,\ell\}^{d-2})\;\cup\;
(\{1,\ldots,\ell-p\}\times\{\ell-p+1,\ldots,\ell\}\times\{1,\ldots,\ell\}^{d-2} ).
\]
From the latter configuration, for any $i=1,\ldots, p$ and any $j\in\{1,\ldots, p-1\}$ we fill the $(d-2)$-dimensional section
\begin{equation*}
U^{i,j}:=\{\ell-p+i\}\times\{\ell-p+j\}\times\{1,\ldots,\ell\}^{d-2}
\end{equation*}
by recursively rigidly moving the $(d-2)$-dimensional section
$$\{\ell-p-k + 1\}\times\{\ell\}\times\{1,\ldots, \ell\}^{d-2},\qquad k=1,\ldots, p(p-1)$$
and filling the sets $U^{i,j}$ following the order $(i,j) \prec_R (i',j')\iff$ [($j<j'$) or ($j=j'$ and $i<i'$)],
thus recursively emptying a $(d-1)$-dimensional face of $\widetilde M_{\ell,d,p}$, so that we get
\[
Q_{\ell,d,p}:=(\{1,\ldots,\ell\}\times\{1,\ldots,\ell-1\}\times\{1,\ldots,\ell\}^{d-2}) \;\cup\;(\{1,\ldots,\ell-p^2\}\times\{\ell\}\times\{1,\ldots,\ell\}^{d-2}).
\]
(This is possible since $p^2 \le \lfloor h_{\ell,d}\rfloor^2 < \ell$ for $d \ge 3$.) We notice that $Q_{\ell,d,p}$ is a rearrangement of $\widetilde M_{\ell,d,p}$, with the same number of bonds.
By Definition \ref{rectdaisy},
$$ Q_{\ell,d,p}\setminus (\{1,\ldots,\ell-p^2\}\times\{\ell\}\times\{1,\ldots, \ell\}^{d-2})$$
is (up to a coordinate relabeling) a perfect daisy. Therefore, $Q_{\ell,d,p}$ is an $EIP^d$ minimizer as soon as
$$
\{1,\ldots,\ell-p^2\}\times\{1,\ldots, \ell\}^{d-2}
$$
is an $EIP^{d-1}$ minimizer: indeed, in that case this set can be replaced by a $(d-1)$-dimensional daisy in $S_{2,\ell}(\mathbb Z^d)$ of cardinality $(\ell-p^2) \ell^{d-2}$ without decreasing the total number of bonds. The resulting configuration is (up to coordinate relabeling) a $d$-dimensional daisy and, thus, an $EIP^d$ minimizer. Assuming $p\le\lfloor h_{\ell,d}\rfloor$, by the elementary inequality $\lfloor x\rfloor^2\le\lfloor x^2\rfloor$ and by Definition \ref{parameter} we obtain
\[
p^2\le\lfloor h_{\ell,d}\rfloor^2\le \lfloor h_{\ell,d}^2\rfloor=\lfloor h_{\ell,{d-1}}\rfloor,
\]
which allows us to conclude, by the induction assumption,
that
$$
\{1,\ldots,\ell-p^2\}\times\{1,\ldots, \ell\}^{d-2}
$$
is indeed an $EIP^{d-1}$ minimizer.
Therefore $Q_{\ell,d,p}$, $\widetilde M_{\ell,d,p}$ and $M_{\ell,d,p}$ are $EIP^d$ minimizers, as desired, for any $p\le\lfloor h_{\ell,d}\rfloor$.
\end{proof}
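The cardinality and perimeter identities in the two-dimensional base case above can be verified by brute force. In the sketch below (a numerical illustration only, not part of the proof), we read the two-dimensional daisy $D^{(2)}_{a,b,c}$ as the staircase set $\{1,\ldots,a\}\times\{1,\ldots,b\}\cup\{1,\ldots,c\}\times\{b+1\}$; this reading of \eqref{eq:two-d-daisy}, which is not reproduced here, is an assumption of the check.

```python
from math import ceil, floor, isqrt

def edge_perimeter(points):
    """Number of lattice edges joining a point of the set to a point
    outside it (the cardinality of the edge boundary Theta_2)."""
    pts = set(points)
    return sum((x + dx, y + dy) not in pts
               for (x, y) in pts
               for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)))

def rectangle(w, h):
    return {(x, y) for x in range(1, w + 1) for y in range(1, h + 1)}

for ell in range(4, 30):
    for p in range(2, isqrt(ell) + 1):
        a = ell - ceil(p / 2)
        b = ell - floor(p / 2) - 1
        c = ell - ceil(p / 2) * (floor(p / 2) + 1)
        # staircase daisy: full a x b rectangle plus c extra points on top
        D = rectangle(a, b) | {(x, b + 1) for x in range(1, c + 1)}
        P = rectangle(ell - p, ell)
        assert 1 <= c <= a - 1
        assert len(D) == ell * ell - ell * p == len(P)
        assert edge_perimeter(D) == 4 * ell - 2 * p == edge_perimeter(P)
print("ok")
```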
We shall later need the following converse statement.
\begin{lemma}\label{converse}
Let $d\in\{2,3,\ldots\}$. Let $\ell\in\mathbb N$ and $j \in \{0, \ldots, d-1\}$.
The configuration
\[
P_{\ell,j,d,2p}:=\{1,\ldots, \ell-2p\}\times\{1,\ldots,\ell+1\}^{j}\times\{1,\ldots,\ell\}^{d-1-j}
\]
is not an $EIP^d$ minimizer if $p\in\mathbb N$ is such that $2p\ge 4^{c_d}\, h_{\ell,d}$, where $c_d:=1-2^{1-d}$.
\end{lemma}
\begin{proof}
Let $\tilde{\ell} = \ell+1$ if $j \ge 1$ and $\tilde{\ell} = \ell$ in case $j=0$. The result is true if $d=2$, as a consequence of \cite[Lemma 4.1]{MPSS}. It also directly follows by comparing with $D = \{1, \ldots, \ell - p \} \times \{1, \ldots, \tilde{\ell} -p - 1\}$, which for $p \ge \sqrt{\ell}$ satisfies $\#D = (\ell - p)(\tilde{\ell} - p - 1) \ge (\ell - 2p) \tilde{\ell} = \#P_{\ell,j,2,2p}$, while $\#\Theta_2(D) = 2 \ell + 2 \tilde{\ell} - 4p - 2 < 2 \ell + 2 \tilde{\ell} - 4p = \#\Theta_2(P_{\ell,j,2,2p})$. We prove the statement by induction. We consider the following two subsequent, edge-perimeter preserving rearrangements of $P_{\ell,j,d,2p}$:
\begin{align*}
P'
&=\big( \{1,\ldots, \ell-2p\}\times\{1,\ldots, \tilde{\ell}-p\}
\cup\{\ell-2p+1,\ldots, \ell-p\}\times\{1,\ldots,\ell-2p\} \big) \times H, \\
P''
&=\Big(\big( \{1,\ldots, \ell-2p\}\times\{1,\ldots,\tilde{\ell}-p-1\}
\cup \{\ell-2p+1,\ldots, \ell-p\}\times\{1,\ldots,\ell-p-1\} \big) \times H \Big)\\
&\qquad \cup\;\Big( \{1,\ldots, \ell-2p-p(p-1)\}\times \{\tilde{\ell}-p\} \times H\Big),
\end{align*}
where we have set $H = \{1,\ldots,\ell+1\}^{j-1}\times\{1,\ldots,\ell\}^{d-1-j}$ if $j \ge 1$ and $H = \{1,\ldots,\ell\}^{d-2}$ if $j = 0$.
Here $P''$ is obtained from $P'$ by successively moving $(d-2)$-dimensional slices similarly as in the proof of Lemma \ref{lemma:lb}. We may assume without loss of generality that $\ell-2p-p(p-1) \ge 1$, for otherwise this process would terminate with an empty layer at the level $\mathbb Z \times \{\tilde{\ell} - p\} \times \mathbb Z^{d-2}$, i.e., at some point we would be moving the $(d-2)$-dimensional section $\{1\}\times\{\tilde{\ell}-p\}\times H$, which would be the only remaining set of points with second component equal to $\tilde{\ell}-p$, to a position $\{\ell-2p+i\}\times\{\ell-2p+j\}\times H$ for some $i\in\{1,\ldots,p\}$, $j\in\{1,\ldots, p-1\}$. This would strictly increase the number of bonds, which directly shows that $P'$ and thus $P_{\ell,j,d,2p}$ cannot be $EIP^{d}$ minimizers.
In particular, by Corollary \ref{coro1} $P''$ (and thus $P_{\ell,j,d,2p}$) is not an $EIP^d$ minimizer if its face
\[ \{1,\ldots, \ell-2p-p(p-1)\}\times H\]
is not an
$EIP^{d-1}$ minimizer. We make use of the induction assumption: the configuration $\{1,\ldots, \ell-2q\}\times H$ is not an $EIP^{d-1}$ minimizer if $2q\ge 4^{c_{d-1}}h_{\ell,d-1}$.
Therefore, the face $ \{1,\ldots, \ell-2p-p(p-1)\}\times H$ is not an $EIP^{d-1}$ minimizer (and thus $P_{\ell,j,d,2p}$ is not an $EIP^d$ minimizer), if
\begin{equation}\label{kk}
p(p+1) \ge 4^{c_{d-1}}h_{\ell,d-1}.
\end{equation}
The latter is implied by $2p\ge 4^{c_d}h_{\ell,d}$: indeed, since $c_{d-1} + 1 = 2 c_d$ and $h_{\ell,d-1}=h_{\ell,d}^2$, we have
\[ (2p)^2/4
\ge (2^{2c_d} h_{\ell,d})^2/4
= 2^{2c_{d-1}+2} h_{\ell,d}^2/4
= 4^{c_{d-1}}h_{\ell,d-1}, \]
which readily implies \eqref{kk}.
Therefore, if $2p\ge 4^{c_d}h_{\ell,d}$, we obtain that $P_{\ell,j,d,2p}$ is not an $EIP^d$ minimizer.
\end{proof}
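The exponent bookkeeping in the last step can be checked numerically. The sketch below is illustrative only: it assumes the closed form $h_{\ell,d}=\ell^{2^{1-d}}$, which is consistent with $h_{\ell,d-1}=h_{\ell,d}^2$ and with $h_{\ell,2}=\sqrt\ell$ but is not taken from Definition \ref{parameter} (not reproduced here). It confirms $c_{d-1}+1=2c_d$ and that $2p\ge 4^{c_d}h_{\ell,d}$ implies \eqref{kk}.

```python
def c(d):
    # c_d = 1 - 2^{1-d}
    return 1 - 2 ** (1 - d)

def h(ell, d):
    # assumed closed form, consistent with h_{ell,d-1} = h_{ell,d}^2
    return ell ** (2 ** (1 - d))

for d in range(3, 10):
    # exponent identity used in the proof
    assert abs((c(d - 1) + 1) - 2 * c(d)) < 1e-12
    for ell in (10, 100, 1000):
        assert abs(h(ell, d - 1) - h(ell, d) ** 2) < 1e-9
        two_p = 4 ** c(d) * h(ell, d)  # borderline admissible value of 2p
        # (2p)^2/4 >= 4^{c_{d-1}} h_{ell,d-1}, so p(p+1) >= p^2 satisfies (kk)
        assert two_p ** 2 / 4 >= 4 ** c(d - 1) * h(ell, d - 1) - 1e-9
print("ok")
```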
\section{Upper bound}\label{upper}
We introduce the notion of defects of a daisy, which is crucial for the rearrangement procedures that will lead to the proof of the upper bound. In the following definition, we will consider a $d$-dimensional daisy $P=P^{(d)}\cup\ldots\cup P^{(1)}$. In order to define defects of lower-dimensional layers, given $m\in\{2,\ldots, d\}$, we recall that the set $P^{(m-1)}\cup\ldots\cup P^{(1)}$ is a copy of an $(m-1)$-dimensional daisy, through the identification provided by {\rm Proposition \ref{characterization}}.
\begin{definition}[Defects]\label{defect}
Let $P=P^{(d)}\cup P^{(d-1)}\cup\ldots\cup P^{(1)}$ be a $d$-dimensional daisy.
\begin{itemize}
\item[i)]
Let $R$ be a $d$-dimensional perfect daisy. We say that $P$ has a ($(d-1)$-dimensional) {\it defect} with respect to $R$ if
a $(d-1)$-dimensional nonempty section $S=S_{s,j}(R)$ of $R$ (see Definition \ref{sect}) exists such that $\mathrm{dist}(S,P)=1$. In such case, the set $D:=\{y\in S:\mathrm{dist}(y, P)=1\}$ is the defect.
\item[ii)]
Given $m\in\{2,\ldots, d\}$,
we say that the $(m-1)$-dimensional daisy $P^{(m-1)}\cup\ldots\cup P^{(1)}$ has an ($(m-2)$-dimensional) {\it defect} with respect to $P^{(m)}$ if it has a defect, according to point i), with respect to the $(m-1)$-dimensional perfect daisy $$\qquad\quad Q^{(m-1)}:=\{1,\ldots, p_1^{(m)}\}\times\ldots\times\{1,\ldots, p_{z_{m}-1}^{(m)}\}\times \{1,\ldots, p_{z_{m}+1}^{(m)} \}\times\ldots\times\{1,\ldots, p_{m}^{(m)}\},$$ where $\{p_1^{(m)},\ldots, p_m^{(m)}\}$ are the coefficients of the perfect $m$-dimensional daisy $P^{(m)}$ and
$z_m$ is the corresponding value-change position, see Definition \ref{rectdaisy}.
\end{itemize}
\end{definition}
\begin{remark}\label{defectremark}\rm We note that a $d$-dimensional daisy $P=P^{(d)}\cup\ldots\cup P^{(1)}$ has a defect with respect to the perfect $d$-dimensional daisy $R$ if and only if $R\supsetneqq Q$, where $Q$ is the smallest perfect $d$-dimensional daisy such that $P\subseteq Q$.
In particular, $Q$ also has a defect with respect to $R$. Moreover, the definition of daisy implies that if $P^{(1)}\neq\emptyset$, then $P^{(1)}$ necessarily has a defect with respect to $P^{(2)}$ (we stress that by a ($0$-dimensional) defect for $P^{(1)}$ wrt $P^{(2)}$ we just mean a point). More generally, if $P^{(m-1)}\neq\emptyset$ and $P^{(m-2)}=\emptyset$, then $P^{(m-1)}$ has a defect wrt $P^{(m)}$. In particular, if $P=P^{(d)}\cup\ldots\cup P^{(1)}$ is a $d$-dimensional daisy and it is not perfect, then there exists $m\in \{2,\ldots, d\}$ such that $P^{(m-1)}\cup\ldots\cup P^{(1)}$ is not empty and has a defect with respect to $P^{(m)}$.
\end{remark}
Following Definition \ref{defect}, the first properties of defects are collected in the following statement.
\begin{proposition}\label{defectpro}
If $P^{(m-1)}\cup\ldots\cup P^{(1)}$ has a defect with respect to $P^{(m)}$, then the defect contains a set $F$ which is a copy of the smallest $(m-2)$-dimensional face of $P^{(m-1)}$, and any point of $F$ has distance $1$ from $P^{(m-1)}$.
\end{proposition}
\begin{proof}
By assumption, $P^{(m-1)}\cup\ldots\cup P^{(1)}$ has a defect wrt the perfect $(m-1)$-dimensional daisy
$Q:=\{1,\ldots, p_1^{(m)}\}\times\ldots\times\{1,\ldots, p_{z_{m}-1}^{(m)}\}\times \{1,\ldots, p_{z_{m}+1}^{(m)} \}\times\ldots\times\{1,\ldots, p_{m}^{(m)}\}$. By Remark \ref{defectremark}, also the smallest perfect $(m-1)$-dimensional daisy $\hat Q$ containing $P^{(m-1)}\cup\ldots\cup P^{(1)}$ is strictly contained in $Q$ and has a defect wrt $Q$. If $\tilde Q$ is the perfect $(m-1)$-dimensional daisy that follows $\hat Q$ in the order $\prec$, then the set $\tilde Q\setminus \hat Q$ is contained in (a section of) $Q$. Moreover, $\tilde Q\setminus \hat Q$ contains a set $F$ with the desired properties.
More explicitly we define $F$ as follows. Suppose $P^{(m-1)}$ is the perfect daisy $\{1,\ldots, t+1\}^j\times\{1,\ldots, t\}^{m-1-j}$ for suitable $t\in\mathbb N$ and $j\in\{0,\ldots, m-2\}$. If $P^{(m-2)} = \emptyset$, then $\hat{Q} = P^{(m-1)}$, $\tilde{Q} = \{1,\ldots, t+1\}^{j+1}\times\{1,\ldots, t\}^{m-2-j}$, and we set
\[ F :=
\begin{cases}
\{1,\ldots, t+1\}^{j-1}\times\{1,\ldots, t\}\times\{t+1\}\times\{1,\ldots, t\}^{m-2-j} & \mbox{if } j \ge 1, \\
\{t+1\}\times\{1,\ldots, t\}^{m-2} & \mbox{if } j =0.
\end{cases} \]
If $P^{(m-2)} \neq \emptyset$ (in particular, $m \ge 3$), then $\hat{Q} = \{1,\ldots, t+1\}^{j+1}\times\{1,\ldots, t\}^{m-2-j}$ and
\[ \tilde{Q} =
\begin{cases}
\{1,\ldots, t+1\}^{j+2}\times\{1,\ldots, t\}^{m-3-j} & \mbox{if } j \le m-3, \\
\{1,\ldots, t+2\}\times\{1,\ldots, t+1\}^{m-2} & \mbox{if } j = m-2,
\end{cases} \]
we set
\[ F :=
\begin{cases}
\{1,\ldots, t\}\times \{t+1\}\times\{1,\ldots, t\}^{m-3} & \mbox{if } j = 0, \\
\{1,\ldots, t+1\}^{j-1}\times\{1,\ldots, t\}^2\times\{t+1\}\times\{1,\ldots,t\}^{m-3-j} & \mbox{if } 1 \le j \le m-3, \\
\{t+2\}\times \{1,\ldots,t+1\}^{m-3}\times\{1,\ldots, t\} & \mbox{if } j = m-2.
\end{cases} \]
We see that in each case $F$ is a copy of a smallest $(m-2)$-dimensional face of $P^{(m-1)}$ (namely, of $\{1,\ldots,t+1\}^{j-1}\times \{1,\ldots, t\}^{m-1-j}$ if $j\ge 1$, and of $\{1,\ldots, t\}^{m-2}$ if $j=0$) and that any point of $F$ has distance $1$ from $P^{(m-1)}$.
\end{proof}
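The case analysis defining $F$ can be checked exhaustively for small parameters. The Python sketch below (an illustration, not part of the proof; all names are ours) enumerates both cases, verifying that $F$ has the cardinality of a smallest $(m-2)$-dimensional face of $P^{(m-1)}=\{1,\ldots,t+1\}^j\times\{1,\ldots,t\}^{m-1-j}$ and that every point of $F$ is at distance $1$ from $P^{(m-1)}$.

```python
from itertools import product

def box(*factors):
    """Cartesian product of integer ranges given as inclusive (lo, hi) pairs."""
    return set(product(*[range(lo, hi + 1) for lo, hi in factors]))

def has_neighbor(point, S):
    """True if some lattice neighbor of `point` belongs to S."""
    return any(tuple(c + d if i == k else c for i, c in enumerate(point)) in S
               for k in range(len(point)) for d in (-1, 1))

for t in (1, 2, 3):
    for m in (3, 4, 5):
        for j in range(0, m - 1):
            P = box(*([(1, t + 1)] * j + [(1, t)] * (m - 1 - j)))
            # cardinality of a smallest (m-2)-dimensional face of P
            face = ((t + 1) ** (j - 1) * t ** (m - 1 - j) if j >= 1
                    else t ** (m - 2))
            # case P^{(m-2)} empty
            if j >= 1:
                F = box(*([(1, t + 1)] * (j - 1) + [(1, t), (t + 1, t + 1)]
                          + [(1, t)] * (m - 2 - j)))
            else:
                F = box((t + 1, t + 1), *([(1, t)] * (m - 2)))
            assert len(F) == face and all(has_neighbor(f, P) for f in F)
            # case P^{(m-2)} nonempty
            if j == 0:
                F = box((1, t), (t + 1, t + 1), *([(1, t)] * (m - 3)))
            elif j <= m - 3:
                F = box(*([(1, t + 1)] * (j - 1) + [(1, t), (1, t),
                          (t + 1, t + 1)] + [(1, t)] * (m - 3 - j)))
            else:  # j == m - 2
                F = box((t + 2, t + 2), *([(1, t + 1)] * (m - 3) + [(1, t)]))
            assert len(F) == face and all(has_neighbor(f, P) for f in F)
print("ok")
```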
A stronger statement holds:
\begin{proposition}\label{defectpro2}
If $P^{(m-1)}\cup\ldots\cup P^{(1)}$ has a defect with respect to $P^{(m)}$ (or in general with respect to a perfect $(m-1)$-dimensional daisy), then the defect contains a copy of $P^{(m-2)}\cup\ldots\cup P^{(1)}$.
\end{proposition}
\begin{proof}
By its definition, a defect is contained in an $(m-2)$-dimensional hyperplane that has distance $1$ from one of the lateral faces $L$ of $P^{(m-1)}\cup\ldots\cup P^{(1)}$ (cf.\ Definition \ref{faces}) and it consists of all the points in such hyperplane whose distance from $L$ is $1$. Since $L$ identifies with an $(m-2)$-dimensional daisy by Proposition \ref{daisysection}, and since daisies are ordered by cardinality (see Theorem \ref{unique}), it is enough to show that $\#L\ge \#(P^{(m-2)}\cup\ldots\cup P^{(1)})$.
Throughout the rest of the proof we make use of the notation
\[
P_1:= P^{(m-1)}\cup\ldots\cup P^{(1)},\qquad
P_2:= P_1 \setminus P^{(m-1)},
\]
so that $P_2$ identifies with the $(m-2)$-dimensional daisy $P^{(m-2)}\cup\ldots\cup P^{(1)}$. Let $(p_1^{(m-1)},\ldots, p_{m-1}^{(m-1)})$ be the coefficients of the perfect $(m-1)$-dimensional daisy $P^{(m-1)}$ and let $z_{m-1}$ be the corresponding value-change position. By the definition of a daisy, we have
$P_2\subsetneqq Z$, where
$$ Z:=
\{1,\ldots,p_1^{(m-1)}\}\times\ldots\times\{1,\ldots, p_{z_{m-1}-1}^{(m-1)}\}\times\{p_{z_{m-1}}^{(m-1)}+1\}\times\{1,\ldots, p_{z_{m-1}+1}^{(m-1)}\}\times\ldots\times\{1,\ldots, p_{m-1}^{(m-1)}\},$$
and $P_2$ coincides with the ($(m-2)$-dimensional) lateral face of $P_1$ that is made by all those points $z$ of $P_1$ whose $(z_{m-1})$-th component is $p_{z_{m-1}}^{(m-1)}+1$.
If $L=P_2$ we are done, therefore from now we assume $L\neq P_2$.
We notice that being $L$ another lateral face of $P_1$, we have
\[
L\setminus P_2=\{1,\ldots, p_1^{(m-1)}\}\times\ldots\times\{p_{j}^{(m-1)}\}\times\ldots\times\{1,\ldots, p_{m-1}^{(m-1)}\}
\]
for some $j\in\{1,\ldots, m-1\}\setminus\{z_{m-1}\}$, hence
\begin{equation}\label{eq1}
\#(L\setminus P_2)=\prod_{i\in\{1,\ldots, m-1\}\setminus\{j\}} p_i^{(m-1)}.
\end{equation}
Let $W:=\{y=(y_1,\ldots, y_{m-1})\in \mathbb N^{m-1} : y_{z_{m-1}}= p_{z_{m-1}}^{(m-1)}+1,\ y_j=p_j^{(m-1)}\}$. Since $L$ coincides with the set of all the points of $P_1$ whose $j$-th coordinate is $p_{j}^{(m-1)}$ and since $P_2\subsetneqq Z$, we have
\begin{equation}\label{eq2}
P_2\setminus L= P_2\setminus W\subseteq Z\setminus W.
\end{equation}
But we notice that
\begin{equation}\label{eq3}
\#(Z\setminus W)=(p_j^{(m-1)}-1)\prod_{i\in\{1,\ldots, m-1\}\setminus\{z_{m-1},j\}} p_i^{(m-1)}.
\end{equation}
Thanks to \eqref{eq1}, \eqref{eq2} and \eqref{eq3}, we obtain
\[\begin{aligned}
\#P_2-\# L&=\#(P_2\setminus L)-\#(L\setminus P_2)\le \#(Z\setminus W)-\#(L\setminus P_2)\\
&= (p_j^{(m-1)}-1)\prod_{i\in\{1,\ldots, m-1\}\setminus\{z_{m-1},j\}} p_i^{(m-1)}\;\;-\prod_{i\in\{1,\ldots, m-1\}\setminus\{j\}} p_i^{(m-1)}\\&=(p_j^{(m-1)}-1-p_{z_{m-1}}^{(m-1)})\prod_{i\in\{1,\ldots, m-1\}\setminus\{z_{m-1},j\}}p_i^{(m-1)}\le 0,
\end{aligned}
\]
where the last inequality is due to the fact that $p_j^{(m-1)}-p_{z_{m-1}}^{(m-1)}\in\{-1,0,1\}$, by the definition of a daisy.
\end{proof}
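The final estimate rests on the identity $(p_j^{(m-1)}-1)\prod_{i\notin\{z_{m-1},j\}}p_i^{(m-1)}-\prod_{i\neq j}p_i^{(m-1)}=(p_j^{(m-1)}-1-p_{z_{m-1}}^{(m-1)})\prod_{i\notin\{z_{m-1},j\}}p_i^{(m-1)}\le 0$, valid whenever the coefficients differ by at most $1$. A small exhaustive check (illustrative only; daisy coefficients are modeled as values in $\{q,q+1\}$):

```python
from itertools import product
from math import prod

for q in (1, 2, 5):
    for m in (3, 4, 5):
        # daisy coefficients take at most two consecutive values q, q+1
        for coeffs in product((q, q + 1), repeat=m - 1):
            for z in range(m - 1):          # value-change position z_{m-1}
                for j in range(m - 1):      # index of the lateral face L
                    if j == z:
                        continue
                    rest = prod(c for i, c in enumerate(coeffs)
                                if i not in (z, j))
                    # prod over i != j equals coeffs[z] * rest
                    lhs = (coeffs[j] - 1) * rest - coeffs[z] * rest
                    assert lhs == (coeffs[j] - 1 - coeffs[z]) * rest
                    assert lhs <= 0
print("ok")
```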
\begin{definition}[Defect filling] \label{filling}
Let $P=P^{(d)}\cup P^{(d-1)}\cup\ldots\cup P^{(1)}$ be a $d$-dimensional daisy. Let $m\in\{2,\ldots, d\}$.
Suppose that $D$ is a defect of the $(m-1)$-dimensional daisy $P^{(m-1)}\cup\ldots\cup P^{(1)}$ wrt $P^{(m)}$ (resp. wrt a perfect $(m-1)$-dimensional daisy) according to point ii) of Definition \ref{defect} (resp. according to point i) of Definition \ref{defect}).
The {\it defect is filled} if a new configuration $P'_{m-1}$ is obtained from $P^{(m-1)}\cup\ldots\cup P^{(1)}$ by adding a nonempty subset $D'$ of $D$. The construction of $P'_{m-1}$ from $P^{(m-1)}\cup\ldots\cup P^{(1)}$ is therefore called a {\it defect filling}. Notice that each point of $D'$ has one and only one bond with $P^{(m-1)}\cup\ldots\cup P^{(1)}$.
\end{definition}
\begin{definition}[Minimal rectangle]\label{mr}
Let $d\in\mathbb N$. Let $C\subset \mathbb N^d$ be a finite set. We define the {\it minimal rectangle} of $C$ as the smallest subset $R(C)$ of $\mathbb N^d$ such that $C \subseteq R(C)$ and such that $R(C)= x_0 + \{1,\ldots, a_1\}\times\ldots\times \{1,\ldots, a_d\}$ for some $x_0 \in \mathbb Z^d$ and $a_1,\ldots, a_d\in\mathbb N$.
\end{definition}
We are ready for the proof of the key statement.
\begin{lemma}\label{key}
Let $d\in\{2,3,\ldots\}$.
Let $C$ be an $EIP^d$ minimizer with minimal rectangle $R(C)$ according to {\rm Definition \ref{mr}}, and assume without loss of generality that $x_0 = 0$ and $a_d\ge a_j$ for any $j=1,\ldots, d$.
Then there exists another $EIP^d$ minimizer $\bar C$ such that $\#C=\#\bar C$ and
\begin{equation}\label{quasi}
\bar C =\{1,\ldots,\ell_1\}\times\ldots\times\{1,\ldots,\ell_{d-1}\}\times\{1,\ldots,a_d-1\}\cup F_1\cup F_2,
\end{equation}
where $(\ell_1, \ldots, \ell_{d-1})$ is a $DO1$-tuple, $F_1$ is (a translate of) a $(d-1)$-dimensional daisy that is contained in the hyperplane $\{x \cdot \mathbf e_d=a_d\}$ and $F_2$ is a configuration contained in the hyperplane $\{ x\cdot \mathbf e_j=\ell_{j}+1\}$ for some $j\in\{1,\ldots, d-1\}$.
\end{lemma}
\begin{proof}
Since $C$ is an $EIP^d$ minimizer, we may assume that it contains a point of the form $(i_1,\ldots, i_d)$ for any $i_d=1,\ldots, a_d$. Let $C'$ be the decreasing rearrangement of $C$ in the direction $\mathbf e_d$, see Proposition \ref{rear}. In particular, for any $j=1,\ldots, a_d$, we denote by $P_j$ the section $S_{d,j}(C')$ of $C'$ (see Definition \ref{sect}) and we say that $P_j$ is the $j$-level of $C'$. We notice that the $j$-level $P_j$ identifies with a $(d-1)$-dimensional daisy for any $j=1,\ldots,a_d$ and we have $P_j\subseteq P_{j-1}$ for any $j\in\{2,\ldots, a_d\}$, as a byproduct of the rearrangement definition.
We assume that $P_1$ is a $(d-1)$-dimensional perfect daisy (we shall get rid of this assumption at the end of the proof). We will show that, whenever the inclusion $P_j\subset P_1$ is strict (for some $j=2,\ldots, a_d-1$), then it is possible to move a point from the $a_d$-level to the $j$-th level, obtaining another $EIP^d$ minimizer.
Therefore, the major issue is to show that this is possible without losing bonds.
Suppose that $j$ is the minimal natural number such that the inclusion $P_j\subset P_1$ is strict (in particular, $P_{j-1}=P_1$).
We denote by $Q$ the $(d-1)$-dimensional daisy at the $j$-level and by $\hat Q$ the $(d-1)$-dimensional daisy at the top level $a_d$.
We introduce the usual daisy notation
\[
P_j=Q=Q^{(d-1)}\cup Q^{(d-2)}\cup\ldots \cup Q^{(1)},\qquad
P_{a_d}=\hat Q=\hat Q^{(d-1)}\cup\hat Q^{(d-2)}\cup\ldots\cup\hat Q^{(1)}.
\]
We also denote by $p_i^{(k)}$ and $\hat p_i^{(k)}$ the coefficients of such daisies from Proposition \ref{characterization}.
Recalling that daisies are identified by their cardinality (see Theorem \ref{unique}), we have $\hat Q\subseteq Q$,
and then we split the proof in the following two possible cases:
\medskip
\noindent \underline{Case 1: There exists $m\in\{1,\ldots, d-1\}$ such that for some $i\in\{1,\ldots, m\}$ there holds $p_i^{(m)}<\hat p_i^{(m)}$}.
In this case, let $\bar m$ be the largest such $m$, so
that
\begin{equation}
\label{null}
p_i^{(\bar m)}<\hat p_i^{(\bar m)}\;\mbox{ for some $i\in\{1,\ldots,\bar m\}$ }
\end{equation}
and
\begin{equation}\label{eins}
p_i^{(\bar m+1)}\ge\hat p_i^{(\bar m+1)}\;\mbox{ for all $i\in\{1,\ldots, \bar m+1\}$}.
\end{equation}
Note that $\bar{m} \leq d-2$. By the monotonicity of the sequences $\{1,\ldots, \bar{m}\}\ni i\mapsto p_i^{(\bar{m})}$ and $\{1,\ldots, \bar{m}\}\ni i\mapsto \hat p_i^{(\bar{m})}$ and the fact that their oscillation is at most $1$ (see Definition \ref{rectdaisy} and Proposition \ref{characterization}), we get
\begin{equation}\label{zwei}
p_i^{(\bar{m})}\le \hat p_i^{(\bar{m})} \;\mbox{ for all $i\in\{1,\ldots, \bar{m}\}$}.
\end{equation}
We consider the following two sets, obtained from $P_j$ and $P_{a_d}$ by exchanging the layers from $\bar m$ to $1$:
\[
\widetilde{P_j}=Q^{(d-1)}\cup\ldots\cup Q^{(\bar m+1)}\cup \hat Q^{(\bar m)}\cup\ldots\cup\hat Q^{(1)},
\]
\[
\widetilde{P_{a_d}}= \hat Q^{(d-1)}\cup\ldots\cup \hat Q^{(\bar m+1)}\cup Q^{(\bar m)}\cup\ldots\cup Q^{(1)}.
\]
We claim that in view of Proposition \ref{characterization} these two new configurations are both daisies. Indeed, the claim is obvious for $\widetilde{P_{a_d}}$, since \eqref{zwei} implies $Q^{(\bar m)}\subseteq \hat Q^{(\bar m)}$ (and by \eqref{null} the inclusion is strict).
On the other hand in order to see that $\widetilde{P_j}$ is a daisy, we need to check that the sequence $i\mapsto p_i^{(\bar m+1)}$ is larger than the sequence $i\mapsto \hat p_i^{(\bar m)}$ in the sense of Definition \ref{larger}. But this is a direct consequence of \eqref{eins} and $(\hat p_1^{(\bar m+1)}, \ldots, \hat p_{\bar m+1}^{(\bar m+1)}) \sqsupset (\hat p_1^{(\bar m)}, \ldots, \hat p_{\bar m}^{(\bar m)})$. Therefore, $\widetilde{P_j}$ and $\widetilde{P_{a_d}}$ satisfy all the assumptions in Definition \ref{dai} and the claim follows. We now consider the new configuration that arises from $C'$ by substituting $P_{a_d}$ with $\widetilde{P_{a_d}}$ and $P_j$ with $\widetilde{P_j}$. It has the same cardinality as $C'$ but a smaller upper face since \eqref{null} and \eqref{zwei} imply $\#\widetilde{P_j}>\#P_j$. In fact, it is also an $EIP^{d}$ minimizer, as desired, because the total number of bonds does not change: For the bonds perpendicular to $\mathbf e_d$ we have
\begin{align*}
&b(\widetilde{P_j}) + b(\widetilde{P_{a_d}}) \\
&\quad = b(Q^{(d-1)}\cup\ldots\cup Q^{(\bar m+1)}) + b(\hat Q^{(\bar m)}\cup\ldots\cup\hat Q^{(1)}) + (d-1-\bar m)\, \#\big(\hat Q^{(\bar m)}\cup\ldots\cup\hat Q^{(1)}\big) \\
&\quad \quad + b(\hat Q^{(d-1)}\cup\ldots\cup \hat Q^{(\bar m+1)}) + b(Q^{(\bar m)}\cup\ldots\cup Q^{(1)}) + (d-1-\bar m)\, \#\big(Q^{(\bar m)}\cup\ldots\cup Q^{(1)}\big) \\
&\quad =b(P_j) + b(P_{a_d}).
\end{align*}
Also the number of bonds in the $\mathbf e_d$ direction is conserved, as lost bonds between the layers $a_d$ and $a_{d}-1$ are restored as new bonds between the $j$-th layer and the perfect daisy $P_{j-1}$.
\medskip
\noindent \underline{Case 2: For all $m\in\{1,\ldots, d-1\}$, the inequality $p_i^{(m)}\ge \hat p_i^{(m)}$ holds for all $i\in\{1,\ldots, m\}$}.
This means that for any $m\in\{1,\ldots, d-1\}$, $\hat Q^{(m)}$ is a subset (possibly not strict) of $Q^{(m)}$. In order to show that it is possible to move points from the $a_d$-level to the $j$-th level
we provide an iteration algorithm.
Before introducing the full algorithm, let us start by discussing the basic instance. If $Q$ has a defect with respect to the perfect $(d-1)$-dimensional daisy $P_{j-1}$, and if $\hat Q$ is not perfect, i.e.\ if $\hat Q^{(d-2)}\neq\emptyset$, we remove $\hat Q^{(d-2)}\cup\ldots\cup \hat Q^{(1)}$ from the top layer and use it to fill the defect (see Definition \ref{filling}). Indeed, $\hat Q^{(m)} \subseteq Q^{(m)}$ for all $m$ implies that $\hat Q^{(d-2)}\cup\ldots\cup \hat Q^{(1)}$ is, after a rigid motion, a subset of the defect thanks to Proposition \ref{defectpro2}. Thereby the total number of bonds is unchanged, as all the bonds of $\hat Q^{(d-2)}\cup\ldots\cup \hat Q^{(1)}$ with $\hat Q^{(d-1)}$ (whose number is $n:=\#(\hat Q^{(d-2)}\cup\ldots\cup \hat Q^{(1)}) $)
are restored as bonds with $Q$. Also, the $n$ bonds of $\hat Q^{(d-2)}\cup\ldots\cup \hat Q^{(1)}$ with $P_{a_d-1}$
are all replaced with bonds connecting to the larger daisy $P_{j-1}$.
Let us now introduce the algorithm. Starting from $k=d-1$ and decreasing $k\ge2$, we perform the following iteration procedure:
\begin{center}{\it
if $ Q^{(k)}\cup\ldots\cup Q^{(1)}$ does not have a defect with respect to $Q^{(k+1)}$ and $\hat Q^{(k-1)}\neq \emptyset $, \\ proceed to check $Q^{(k-1)}\cup\ldots\cup Q^{(1)}$ and $\hat Q^{(k-2)}$.}
\end{center}
Here $Q^{(d)}$, which occurs if $k = d-1$, is understood as $P_{j-1}$. We have three possible situations:
\begin{itemize}
\item[A)] The procedure does not stop and reaches $k=2$, with no defects in $Q^{(2)}\cup Q^{(1)}$ (wrt $Q^{(3)}$) and $\hat Q^{(1)}\neq\emptyset$.
In such case $Q^{(1)}$ is nonempty and has a ($0$-dimensional) defect, see Remark \ref{defectremark}. Therefore we take a corner point from $\hat Q$ which has $d$ bonds to other points, to fill this defect without reducing the total number of bonds.
\item[B)] The procedure stops at some $k\ge 2$ with a defect in $ Q^{(k)}\cup\ldots\cup Q^{(1)}$ (wrt $Q^{(k+1)}$) and nonempty $\hat Q^{(k-1)}$.
As $ \hat Q^{(k-1)}\cup\ldots\cup \hat Q^{(1)}$ is nonempty and $p_i^{(m)} \ge \hat p_i^{(m)}$ for all $i \in \{1, \ldots, m\}$, $m \in \{1, \ldots, k-1\}$, we can proceed as above to fill a defect of $Q^{(k)}$ with a copy of $\hat Q^{(k-1)}\cup\ldots\cup \hat Q^{(1)}$. Here, removing such a portion from the top layer destroys $\bar n(d-k+1)$ bonds, where $\bar n = \# \hat Q^{(k-1)}\cup\ldots\cup \hat Q^{(1)}$, while filling the defect restores the same number of bonds.
\item[C)] The procedure stops at some $k\ge 2$ with $\hat Q^{(k-1)}=\emptyset$.
We define $\hat S_{k-1}$ as one of the smallest $(k-1)$-dimensional faces of $\hat Q^{(k)}$. The set $\hat S_{k-1}$ identifies with a $(k-1)$-dimensional daisy thanks to Proposition \ref{daisysection}, and in fact with a perfect daisy since $\hat Q^{(k)}$ is a perfect daisy. More precisely and more generally, for the perfect $k$-dimensional daisy $\hat Q^{(k)}$ and for $j\in\{0,\ldots, k\}$ we define $\hat S_{k-j}$ as the set that is obtained by taking all the points $z=(z_1,\ldots, z_k)\in\hat Q^{(k)}$ and by freezing the first $j$ entries of $z$ to their maximal value. Then $\hat S_{k-j}$ is a perfect $(k-j)$-dimensional daisy and a copy of the smallest $(k-j)$-dimensional face of $\hat Q^{(k)}$ (in particular, $\hat S_k=\hat Q^{(k)}$ and $\hat S_0$ is a single corner point of $\hat Q^{(k)}$). We stress that each point of $\hat S_{k-1}$ has one bond with a point of $\hat Q^{(k)}\setminus \hat S_{k-1}$, unless $\hat Q^{(k)}$ consists of a single point (which is the only situation yielding $\hat S_{k-1}=\hat Q^{(k)}$).
\begin{itemize}
\item[C1)]
If there are defects in $Q^{(k)}\cup\ldots\cup Q^{(1)}$, by Proposition \ref{defectpro} the defect contains a copy of $S_{k-1}$, the smallest $(k-1)$-dimensional face of $Q^{(k)}$. But $\hat Q^{(k)}\subseteq Q^{(k)}$ implies $\hat S_{k-1}\subseteq S_{k-1}$. Therefore we can move $\hat S_{k-1}$ to fill the defect, as soon as $\hat Q^{(k)}$ is not made by a single point, since each of the bonds of $\hat S_{k-1}$ with $\hat Q^{(k)}\setminus \hat S_{k-1}$ is restored as a bond with $Q^{(k)}\cup\ldots\cup Q^{(1)}$ through this defect filling (Definition \ref{filling}). Also the lost bonds with $\hat Q^{(d-1)}\cup\ldots\cup \hat Q^{(k+1)}$ and with the $a_d-1$ layer are restored. Now note that $\#\hat S_{k-1}=\#\hat Q^{(k)}=1$ is not possible, since otherwise the defect filling would increase the number of bonds and contradict the minimality of $C'$. In particular, this defect filling does not exhaust $\hat Q^{(k)}$.
\item[C2)]
Assume now there are no defects in $Q^{(k)}\cup\ldots\cup Q^{(1)} = Q^{(k)}\cup\ldots\cup Q^{(h)}$, where $h\in \{1,\ldots, k-1\}$ is such that $Q^{(h)} \neq \emptyset$ and $Q^{(h-1)} = \emptyset$.
Suppose first that $\hat S_h \subseteq Q^{(h)}$. Since $Q^{(h)}$ has a defect due to Remark \ref{defectremark}, by Proposition \ref{defectpro} this defect contains a copy of the smallest $(h-1)$-dimensional face of $Q^{(h)}$. Since $\hat S_h \subseteq Q^{(h)}$, then the defect also contains a copy of $\hat S_{h-1}$. We can thus remove $\hat S_{h-1}$ from the top level and use it to fill the defect. Similarly as above, each point of $\hat S_{h-1}$ has one bond with $\hat S_h\setminus \hat S_{h-1}$ unless the latter is empty, and these bonds are restored as bonds with $Q^{(h)}$ through the defect filling. Again, $\hat S_{h}\setminus\hat S_{h-1}=\emptyset$ is not possible (because the defect filling would create new bonds, contradicting the minimality of $C'$), so that $\hat Q^{(k)}$ is not exhausted.
Now suppose that, on the contrary, $\hat S_h \supsetneqq Q^{(h)}$. Since $Q^{(k)} \supseteq \hat Q^{(k)} = \hat S_k$, there is an index $i \in \{ h, \ldots, k-1 \}$ such that $Q^{(k)} \supseteq \hat S_k, \ldots, Q^{(i+1)} \supseteq \hat S_{i+1}$ but $Q^{(i)} \subsetneqq \hat S_i$. (Recall that daisies are totally ordered by inclusion.) As $\hat S_i$ is a perfect daisy, we also have $Q^{(i)} \cup \ldots \cup Q^{(h)} \subsetneqq \hat S_i$. Since $Q^{(i+1)} \supseteq \hat S_{i+1}$ it is then possible to exchange the two sets $Q^{(i)} \cup \ldots \cup Q^{(h)}$ and $\hat S_i$ without changing the total number of bonds: indeed, we remove these two sets from their position by rigidly moving $\hat S_i$ into the $i$-dimensional affine hyperplane that was occupied by $Q^{(i)} \cup \ldots \cup Q^{(h)}$ in such a way that all the bonds that have been deleted while detaching $\hat S_i$ are restored as bonds with $P_{j-1}$ and with $Q^{(d-1)} \cup \ldots \cup Q^{(i+1)}$, and similarly by moving $Q^{(i)} \cup \ldots \cup Q^{(h)}$ rigidly to a subset originally occupied by $\hat S_i$, restoring all bonds that have been deleted while detaching $Q^{(i)} \cup \ldots \cup Q^{(h)}$.
\end{itemize}
\end{itemize}
We have shown that it is always possible to take points from the $a_d$-level to the $j$-level. Both in Case 1 and Case 2 above, the $a_d$-level is not exhausted by this procedure. Indeed, in Case 1 we see that $\hat Q^{(d-1)}$ is left at the top level. Moreover, we have seen through the different instances of the algorithm in Case 2 that the top level is not exhausted. Therefore, we can repeat the procedure, and with a finite number of steps we reach a configuration of the form
\[
\{1,\ldots,\ell_1\}\times\ldots\times\{1,\ldots,\ell_{d-1}\}\times\{1,\ldots,a_d-1\}\cup F_1,
\]
as desired.
Let us conclude by generalizing the argument in case $P_1$ is not a perfect daisy. As $P_1=P_1^{(d-1)}\cup\ldots \cup P_1^{(1)}$, let us consider the set of points $H$ in $C'$ whose projection on $\{x\cdot\mathbf e_d=1\}$ belongs to $P_1 \setminus P_1^{(d-1)}$. Let $k\in\{1,\ldots, a_d\}$ denote the top level where points of $H$ are found. If $k\le a_d-2$, then $P_{k}^{(d-1)}=P_1^{(d-1)}$ and $P_{k+1}\subseteq P_{k}^{(d-1)}$. Therefore we can proceed as before with $P_{k}$ in place of $P_1$. We obtain a configuration of the form \eqref{quasi} with $F_2=H$. If $k\in\{a_d-1,a_d\}$, then $C'$ is already of the form \eqref{quasi}, with the $\ell_i$'s being the coefficients of the perfect daisy $P_1^{(d-1)}$.
\end{proof}
\begin{corollary}\label{cor:ad-est}
Let $C\subset \mathbb Z^d$ be an $EIP^d$ minimizer. Let $R(C)$ be the minimal rectangle and assume $x_0 = 0$. Let $a_d$ be the maximal edge of $R(C)$. Let $\ell = \ell_1$ from \eqref{quasi}. Then
\[ a_d-\ell\le4^{c_d}h_{\ell,d} + 6. \]
\end{corollary}
\begin{proof}
From Lemma \ref{key} we obtain $\bar C$ as in \eqref{quasi} with top layer $F_1$ and (possibly a) lateral face $F_2$ contained in $\{x\cdot \mathbf e_j=\ell_{j}+1\}$ for some $j\in\{1,\ldots, d-1\}$. Wlog we assume that $p = \lfloor \frac{a_d - \ell}{2} \rfloor \ge 3$.
Throughout the proof, we perform transformations that delete and restore only the bonds in the directions $\mathbf e_j$ and $\mathbf e_d$. We first obtain another $EIP^d$ minimizer by cutting the entire block of points at the levels from $a_d-p+1$ to $a_d$ and pasting it, after a rigid motion, onto the lateral face of $\bar C$ that is contained in the hyperplane $\{\mathbf e_j\cdot x=1\}$. In particular, we perform this rigid motion by letting the moved points from $F_2$ find their new positions at the first level, i.e., on the hyperplane $\{\mathbf e_d\cdot x=1\}$, and the points from $F_1$ on the hyperplane $\{\mathbf e_j\cdot x=1-p\}$. More precisely, any $x \in \bar C$ with $x_d \in \{a_d-p+1, \ldots, a_d\}$ is mapped to
\[ (x_1,\ldots, x_{j-1}, a_d-p+1-x_d, x_{j+1}, \ldots, x_{d-1}, \ell_j+2-x_j). \]
This is possible without reducing the number of bonds since $\ell_j + 1 \le \ell + 1 \le a_d - p$.
In this way, the obtained configuration $C'$ contains the set
\[ Y := \prod_{i=1}^{j-1} \{1, \ldots, \ell_i\} \times \{-p+2, \ldots, 0\} \times \prod_{i=j+1}^{d-1} \{1, \ldots, \ell_i\} \times \{\ell_j+1\} \]
but not the points above $Y$ in the $\mathbf e_d$ direction. Moreover, the top level of $C'$ is the level $a_d - p$, and it is precisely the set $\big( \prod_{i=1}^{d-1} \{1,\ldots, \ell_i\}\times\{a_d - p\} \big) \cup F_2^{a_d-p}$, where $F_2^{a_d-p}:=S_{d,a_d-p}(F_2)$.
Let $k=p^2-3p$ if $\ell_j = \ell_{d-1}$ and $k=p^2-3p+1$ if $\ell_j = \ell_{d-1}+1$. We move points from the level $a_d - p$ to obtain another $EIP^d$ minimizer, whose upper face is
\[ U := \prod_{i=1}^{j-1} \{1, \ldots, \ell_i\} \times \{k+1, \ldots, \ell_j\} \times \prod_{i=j+1}^{d-1} \{1, \ldots, \ell_i\} \times \{a_d-p\}. \]
This is done, similarly to the constructions of Section \ref{lower}, by moving $(d-2)$-dimensional faces of the top level (one by one): we remove $\{ x \in C' : x_j = i, \, x_d = a_d-p \}$ for $i=1,\ldots, k$, and place such $(d-2)$-dimensional layers at the positions
\[ \prod_{i=1}^{j-1} \{1, \ldots, \ell_i\} \times \{-j_1\} \times \prod_{i=j+1}^{d-1} \{1, \ldots, \ell_i\} \times \{j_2\}, \]
where $j_1 \in \{0,1,\ldots, p-2\}$ and $j_2\in\{\ell_j+2,\ldots, a_d-p-1\}$, which are the $(p-1)(a_d-p-\ell_j-2) \ge(p-1)(p-2)\ge k+1$ free positions above $Y$. (This is done, say, following the right-to-left lexicographic order of those $(j_1,j_2)$.) In doing so we fill $k$ of such free positions, and if $F_2^{a_d-p}\neq\emptyset$, we finally move it to fill the $(k+1)$-st position.
Since the upper face $U$ is necessarily an $EIP^{d-1}$ minimizer by Corollary \ref{coro1}, from Lemma \ref{converse} we infer
\[ p^2-3p
\le 4^{c_{d-1}}\,h_{\ell_{d-1},d-1}
\le 4^{c_{d-1}}\,h_{\ell,d-1}, \]
which implies, by using the relations $c_{d-1}+1=2c_d$ and $h_{\ell,d-1}=h^2_{\ell,d}$,
\[ 2p
\le 3+\sqrt{9+4^{c_{d-1}+1} h_{\ell,d-1}}\le 5+ 2^{c_{d-1}+1}\sqrt{h_{\ell,d-1}}=4^{c_d}h_{\ell,d}+5,
\]
where we have also used the elementary inequality $3+\sqrt{9+x}\le 5+\sqrt{x}$, which holds for $x\ge 2$ (noticing that $4^{c_{d-1}+1}h_{\ell, d-1}\ge 4$ as $d\ge 2$). The result is proven.
\end{proof}
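As an aside, the elementary inequality $3+\sqrt{9+x}\le 5+\sqrt{x}$ for $x\ge 2$ invoked in the proof above is easy to check numerically; the snippet below is a trivial spot-check, not part of the argument.

```python
import math

# Elementary inequality used in the proof above:
# 3 + sqrt(9 + x) <= 5 + sqrt(x), valid for x >= 2
# (squaring twice shows it holds exactly for x >= 25/16).
def holds(x):
    return 3 + math.sqrt(9 + x) <= 5 + math.sqrt(x)

checks = [holds(x) for x in (2, 3, 10, 100, 1e4, 1e8)]
```

The threshold $25/16$ confirms that the hypothesis $x\ge 2$ in the proof is comfortably sufficient.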
\begin{proof}[Proof of Theorem \ref{thm:main}]
\noindent (i) Let $C$ be an $EIP^d$ minimizer with $\#C = n$. Wlog suppose $R(C) = \{1, \ldots, a_1\} \times \ldots \times \{1, \ldots, a_d\}$ and $a_1, \ldots, a_{d-1} \le a_d$. By Lemma \ref{key} and Corollary \ref{cor:ad-est}, with $\ell = \ell_1$ from \eqref{quasi} we have $n = \ell^{d-1} a_d + O(\ell^{d-2} a_d)$ and $a_d-\ell\le 4^{c_d} h_{\ell,d} + 6$. In particular, $n = \ell^d + O(h_{\ell,d} \ell^{d-1})$. We also observe that \eqref{quasi} gives $n \ge (\ell-1)^d$.
Now suppose there is an $i$ with $a_i \le \ell - 2 d 4^{c_d} h_{\ell,d}$. Then
\begin{align*}
n
\le \# R(C)
&\le (\ell - 2 d 4^{c_d} h_{\ell,d}) (\ell + 4^{c_d} h_{\ell,d} + 6)^{d-1} \\
&= \ell^d (1 - 2 d 4^{c_d} h_{\ell,d} \ell^{-1}) (1 + (4^{c_d} h_{\ell,d} + 6) \ell^{-1})^{d-1}.
\end{align*}
Using that $h_{\ell,d} \ell^{-1} \to 0$ as $n \to \infty$ and $(1 + (4^{c_d} h_{\ell,d} + 6) \ell^{-1})^{d-1} = 1 + (d-1) (4^{c_d} h_{\ell,d} + 6) \ell^{-1} + O((h_{\ell,d} \ell^{-1})^2)$, we find that for $n$ sufficiently large,
\[ \ell^d (1 - d \ell^{-1})
\le \ell^d (1 - \ell^{-1})^d
\le n
\le \ell^d (1 - 2 d 4^{c_d} h_{\ell,d} \ell^{-1}) (1 + d 4^{c_d} h_{\ell,d} \ell^{-1})
\]
and so
$$ 1 - d \ell^{-1} \le 1 - d 4^{c_d} h_{\ell,d} \ell^{-1}, $$
contradicting $h_{\ell,d} \to \infty$ as $n \to \infty$. This shows that in fact $a_i \ge \ell - 2 d 4^{c_d} h_{\ell,d}$ for all $i$ if $n$ is large enough.
As a consequence we have
$$ \# R(C) \triangle \{1, \ldots, \ell\}^d
\le (\ell + 4^{c_d} h_{\ell,d})^d - (\ell - 2 d 4^{c_d} h_{\ell,d})^d
= O(h_{\ell,d}\ell^{d-1}). $$
From $n = \ell^d + O(h_{\ell,d} \ell^{d-1}) = \ell^d(1 + O(h_{\ell,d} \ell^{-1}))$ and thus $\lfloor n^{1/d} \rfloor = \ell (1 + O(h_{\ell,d} \ell^{-1})) = \ell + O(h_{\ell,d})$ we also obtain
$$ \# W_n \triangle \{1, \ldots, \ell\}^d
= O(h_{\ell,d}\ell^{d-1}). $$
So by the triangle inequality we get
$$ \# W_n \triangle C
= O(h_{\ell,d}\ell^{d-1}) $$
as claimed.
\smallskip
\noindent (ii) This follows directly from Lemma \ref{lemma:lb}.
\end{proof}
\subsection*{Acknowledgements} E.M.\ acknowledges support from the MIUR-PRIN project No 2017TEXA3H and from the INdAM-GNAMPA 2019 project {\it ``Trasporto ottimo per dinamiche con interazione''}. Both authors wish to thank Paolo Piovano and Ulisse Stefanelli for interesting discussions on the subject of the paper.
\label{s0}
Solving $QCD$ in the 't Hooft large-$N$ limit \cite{H1} is a long-standing, difficult problem. An easier problem is to find a solution, not exact, but only asymptotic in the ultraviolet ($UV$).
In a sense this asymptotic solution in the $UV$ already exists: It is ordinary perturbation theory.
But in fact a much more interesting object is an asymptotic solution in the $UV$ written in terms of glueballs and mesons, as opposed to gluons and quarks.
An asymptotic solution of this kind would replace $QCD$ viewed as a theory of gluons and quarks, which are strongly coupled in the infrared in perturbation theory, with a theory of an infinite number of glueballs and mesons, which are weakly coupled at all scales in the large-$N$ limit \cite{H1}. Indeed, at the leading $\frac{1}{N}$ order the two-point connected correlators of gauge invariant operators must be a sum of propagators of free fields \cite{Mig}, involving, by the Kallen-Lehmann representation, single-particle pure poles, because the interaction associated with three- and multi-point correlators vanishes. At the next order the interaction arises, but it is parametrically weak in the $\frac{1}{N}$ expansion.
Recently, the asymptotic structure of two-point correlators of any spin has been explicitly characterized by the asymptotic theorem \cite{MBN} reported below, in the 't Hooft limit of any large-$N$ confining asymptotically-free gauge theory massless in perturbation theory, such as
massless $QCD$ (i.e. $QCD$ with massless quarks).
The asymptotic theorem for the two-point correlators is the basis of a new technique described in this paper, which we call the asymptotically-free bootstrap \footnote{The name derives from the celebrated conformal bootstrap.}, by which we extend the asymptotic theorem to three- and multi-point correlators
and $S$-matrix amplitudes, getting in this way an asymptotic solution of large-$N$ $QCD$ in a sense specified below.
The asymptotic theorem is based on the Callan-Symanzik equation, plus the Kallen-Lehmann representation, plus the assumption that the theory confines, i.e. technically that the one-particle spectrum for each integer spin $s$ at the leading $\frac{1}{N}$ order is a discrete diverging sequence with asymptotic distribution $ \rho_s(m^2)$. In this introduction we recall the precise statement of the asymptotic theorem, because it is necessary to explain the logic of this paper.
The connected two-point Euclidean correlator of a local gauge-invariant single-trace operator (or of a fermion bilinear) $\mathcal{O}^{(s)}$ of integer spin $s$ and naive mass dimension $D$ and with anomalous dimension $\gamma_{\mathcal{O}^{(s)}}(g)$,
must factorize asymptotically for large momentum, and at the leading order in the large-$N$ limit, over the following poles and residues (after analytic continuation to Minkowski space-time):
\begin{eqnarray} \label{at0}
\langle \mathcal{O}^{(s)}(x) \mathcal{O}^{(s)}(0) \rangle_{conn}
\sim \sum_{n=1}^{\infty} \frac{1}{(2 \pi)^4} \int P^{(s)} \big(\frac{p_{A}}{m^{(s)}_n}\big) \frac{m^{(s)2D-4}_n Z_n^{(s)2} \rho_s^{-1}(m^{(s)2}_n)}{p^2+m^{(s)2}_n } \,e^{ip\cdot x}d^4p \nonumber \\
\end{eqnarray}
where $ P^{(s)} \big( \frac{p_{A}}{m^{(s)}_n} \big)$ is a dimensionless polynomial in the four momentum $p_{A}$ \footnote{ We employ latin letters $A,\cdots$ to denote vector indices, and greek letters $\alpha, \dot \alpha, \cdots$ to denote spinor indices.} that projects on the free propagator of spin $s$ and mass $m^{(s)}_n$ and:
\begin{eqnarray} \label{g}
\gamma_{\mathcal{O}^{(s)}}(g)= - \frac{\partial \log Z^{(s)}}{\partial \log \mu}=-\gamma_{0} g^2 + O(g^4)
\end{eqnarray}
with $Z_n^{(s)}$ the associated renormalization factor computed on shell, i.e. for $p^2=m^{(s)2}_n$:
\begin{eqnarray} \label{z}
Z_n^{(s)}\equiv Z^{(s)}(m^{ (s)}_n)= \exp{\int_{g (\mu)}^{g (m^{(s)}_n )} \frac{\gamma_{\mathcal{O}^{(s)}} (g)} {\beta(g)}dg}
\end{eqnarray}
Throughout this paper, the symbol $\sim$ means asymptotic equality in the sense specified momentarily, possibly up to an overall constant factor.
The proof of the asymptotic theorem reduces to showing that Eq.(\ref{at0})
matches asymptotically for large momentum, within the universal leading and next-to-leading logarithmic accuracy,
the renormalization-group ($RG$) improved perturbative result implied by the Callan-Symanzik equation.
An important corollary \cite{MBN} of the asymptotic theorem is that to compute the asymptotic behavior we need not know explicitly either the actual spectrum or the asymptotic spectral distribution, since the latter cancels upon evaluating the sum in Eq.(\ref{at0}) by the integral that occurs as the leading term in the Euler-MacLaurin formula. Hence $RG$-improved perturbation theory does not in fact contain spectral information \cite{MBN}, as perhaps expected, and neither does our asymptotic solution.
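Schematically, the cancellation of the spectral distribution works as follows (a heuristic sketch, not part of the original statement, with $F$ a smooth test function and $\rho_s(m^2)\approx dn/dm^2$ for the asymptotic spectral distribution):

```latex
\[
\sum_{n=1}^{\infty} \frac{F(m^{(s)2}_n)\,\rho_s^{-1}(m^{(s)2}_n)}{p^2+m^{(s)2}_n}
\;\sim\; \int \frac{F(m^2)\,\rho_s^{-1}(m^2)}{p^2+m^2}\,\rho_s(m^2)\,dm^2
\;=\; \int \frac{F(m^2)}{p^2+m^2}\,dm^2
\]
```

so that $\rho_s$ drops out of the leading Euler-MacLaurin term.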
In order to get spectral information it is necessary to lift the asymptotic solution to the actual solution (see conclusions in section \ref{s1}).
Nevertheless, from a practical point of view, the asymptotic formulae, suitably interpreted, can be employed also in the infrared, simply substituting the known experimental masses (and in some cases residues, as for $f_{\pi}$) of mesons and glueballs, in order to get correlators and $S$-matrix amplitudes that are both factorized over poles of physical particles and asymptotic to the correct result in the ultraviolet. In this paper we write asymptotic correlators of vector and axial currents, relevant for the light-by-light scattering amplitude and for the structure of the pion form factor, but we do not discuss the physical applications at all.
This paper is a short communication, detailed proofs will appear elsewhere.
\section{Asymptotically-free bootstrap for massless large-$N$ $QCD$} \label{s2}
The first step to work out the asymptotically-free bootstrap consists in exploiting the conformal invariance of two- and three-point correlators at the lowest non-trivial order of perturbation theory, together
with the $RG$ corrections implied by the Callan-Symanzik equation, in any asymptotically-free
theory massless in perturbation theory.
As a consequence, for the connected correlators of a scalar operator $\mathcal{O}$ of naive mass-dimension $D$, $G^{(2)}$ and $G^{(3)}$:
\begin{equation}
\braket{\mathcal{O}(x_1)\mathcal{O}(x_2)}_{\mathit{conn}}=G^{(2)}(x_1-x_2)
\end{equation}
\begin{equation}
\braket{\mathcal{O}(x_1)\mathcal{O}(x_2) \mathcal{O}(x_3)}_{\mathit{conn}}= G^{(3)}(x_1-x_2, x_2-x_3, x_3-x_1)
\end{equation}
we get the estimates:
\begin{equation}\label{21}
G^{(2)}(x_1-x_2) \sim C_2 (1+O(g^2))
\frac{(\frac{g(x_1-x_2)}{g(\mu)})^{\frac{2\gamma_0}{\beta_0}}}{ (x_1- x_2)^{2D}}
\end{equation}
\begin{equation} \label{31}
G^{(3)}(x_1-x_2, x_2-x_3, x_3-x_1)
\sim
C_3 (1+O(g^2))\frac{(\frac{g(x_1-x_2)}{g(\mu)})^{\frac{\gamma_0}{\beta_0}}}{(x_1-x_2)^{D}}\frac{(\frac{g(x_2-x_3)}{g(\mu)})^{\frac{\gamma_0}{\beta_0}}}{(x_2-x_3)^{D}}\frac{(\frac{g(x_3-x_1)}{g(\mu)})^{\frac{\gamma_0}{\beta_0}}}{(x_3-x_1)^{D}}
\end{equation}
Eq.(\ref{31}) can be proved by means of the operator product expansion ($OPE$) as well, that allows us to convey more local information than the Callan-Symanzik equation alone, and that essentially reduces the
asymptotic estimates for three-point correlators to two-point correlators.
In fact, under the assumption that the three-point correlator $\braket{\mathcal{O}(x)\mathcal{O}(0) \mathcal{O}(y)}_{\mathit{conn}}$ does not vanish at lowest order in perturbation theory, i.e. $C_3\neq 0$, we can substitute in the correlator, with asymptotic accuracy as $x$ vanishes, the contribution in the $OPE$ that contains the operator $\mathcal{O}$ itself: $\mathcal{O}(x)\mathcal{O}(0) \sim C(x) \mathcal{O}(0) + \cdots$,
with $C(x) \sim \frac{(\frac{g(x)}{g(\mu)})^{\frac{\gamma_0}{\beta_0}}}{ x^{D}} $ by the Callan-Symanzik equation for the coefficient functions in the $OPE$.
Hence we get asymptotically for $x_1 \rightarrow x_2$:
\begin{eqnarray} \label{OPE}
\braket{\mathcal{O}(x_1)\mathcal{O}(x_2) \mathcal{O}(x_3)}_{\mathit{conn}} && \sim C(x_1-x_2)\braket{\mathcal{O}(x_2)\mathcal{O}(x_3)}_{\mathit{conn}} \sim C(x_1-x_2) G^{(2)}(x_2-x_3) \nonumber \\
&&\sim
C(x_1-x_2)C^2(x_2-x_3)
\end{eqnarray}
that coincides with Eq.(\ref{31}). Therefore, because of the symmetric nature of Eq.(\ref{31}) and by Eq.(\ref{OPE}), we get the fundamental result valid for $C_3 \neq 0$:
\begin{eqnarray} \label{AS}
G^{(3)}(x_1-x_2, x_2-x_3, x_3-x_1) \sim C(x_1-x_2) C(x_2-x_3) C(x_3-x_1)
\end{eqnarray}
So far so good. Everything that we have discussed is well known, except perhaps Eq.(\ref{AS}), valid in any asymptotically-free theory massless in perturbation theory.
Now comes the interesting part.
The asymptotic theorem extends to the coefficients of the $OPE$ in the scalar case, because they arise from the non-perturbative part involving condensates of the scalar two-point correlator, which the
Kallen-Lehmann representation applies to as well. In particular for $C(x)$ we get:
\begin{eqnarray} \label{001}
&&C(x_1-x_2)
\sim \sum_{n=1}^{\infty} \frac{1}{(2 \pi)^4} \int \frac{m^{D-4}_n Z_n \rho^{-1}(m^{2}_n)}{p^2+m^{2}_n } \,e^{ip\cdot (x_1-x_2)}d^4p \nonumber \\
&&\sim \sum_{n=1}^{\infty} \frac{1}{(2 \pi)^4} \int \frac{m^{D-4}_n (\frac{g(m_n)}{g(\mu)})^{\frac{\gamma_0}{\beta_0}} \rho^{-1}(m^{2}_n)}{p^2+m^{2}_n } \,e^{ip\cdot (x_1-x_2)}d^4p
\sim
\frac{(\frac{g(x_1-x_2)}{g(\mu)})^{\frac{\gamma_0}{\beta_0}}}{ (x_1- x_2)^{D}}
\end{eqnarray}
The basic idea of the asymptotically-free bootstrap is to substitute the Kallen-Lehmann representation for $C(x)$, Eq.(\ref{001}), in Eq.(\ref{AS}).
Thus explicitly and constructively we get the asymptotic spectral representation of three-point scalar correlators in momentum space:
\begin{eqnarray} \label{2}
&&\braket{\mathcal{O}_{D, \gamma_0}(q_1)\mathcal{O}_{D, \gamma_0}(q_2) \mathcal{O}_{D, \gamma_0}(q_3)}_{\mathit{conn}} \nonumber \\
&&\sim \delta(q_1+q_2+q_3) \sum_{n_1=1}^{\infty} \sum_{n_2=1}^{\infty} \sum_{n_3=1}^{\infty} \int \frac{m^{D-4}_{n_1} Z_{n_1}\rho^{-1}(m^{2}_{n_1})}{p^2+m^{2}_{n_1} }
\frac{m^{D-4}_{n_2} Z_{n_2}\rho^{-1}(m^{2}_{n_2})}{(p+q_2)^2+m^{2}_{n_2} }
\frac{m^{D-4}_{n_3} Z_{n_3}\rho^{-1}(m^{2}_{n_3})}{(p+q_2+q_3)^2+m^{2}_{n_3} }d^4p \nonumber \\
\end{eqnarray}
But this cannot be the whole story.
Indeed, while Eq.(\ref{2}) is asymptotic to the correct result in $RG$-improved perturbation theory, it does not have the correct pole structure, which is a consequence of the $OPE$:
\begin{eqnarray} \label{ope}
&& \lim_{q^2_2\rightarrow \infty} \braket{ \mathcal{O}_{D, \gamma_0}(q_1) \mathcal{O}_{D, \gamma_0}(q_2)\mathcal{O}_{D, \gamma_0}(q_3) }_{\mathit{conn}}
\sim \lim_{q^2_2 \rightarrow \infty} \delta(q_1+q_2+q_3)
C(q_2) G^{(2)}(q_3) \nonumber \\
&& \sim \lim_{q_2^2 \rightarrow \infty} \delta(q_1+q_2+q_3) \sum_{n_1=1}^{\infty}
\frac{m^{D-4}_{n_1} Z_{n_1}\rho^{-1}(m^{2}_{n_1})}{q_2^2+m^{2}_{n_1} } \sum_{n_2=1}^{\infty} \frac{m^{2D-4}_{n_2} Z^2_{n_2}\rho^{-1}(m^{2}_{n_2})}{q_3^2+m^{2}_{n_2} } \nonumber\\
\end{eqnarray}
as follows by Fourier transforming Eq.(\ref{OPE}) and substituting the Kallen-Lehmann representation.
Thus while Eq.(\ref{ope}) and Eq.(\ref{2}) have the same large-momentum asymptotics, the symmetric form and the $OPE$ form factorize on different cuts and poles.
Indeed, $RG$-improved perturbation theory has a non-perturbative ambiguity, up to multiplicative functions of the external Euclidean momenta that are asymptotic to $1$ in the ultraviolet.
We fix asymptotically this ambiguity by requiring that the new improved three-point correlator carries a simple pole for each external momentum $(q_1,q_2,q_3)$ on shell in Minkowski space-time, but without changing its Euclidean asymptotic behavior.
Hence, the real structure of the Euclidean correlator must be asymptotically:
\begin{eqnarray} \label{3}
&& \braket{\mathcal{O}_{D, \gamma_0}(q_1)\mathcal{O}_{D, \gamma_0}(q_2) \mathcal{O}_{D, \gamma_0}(q_3)}_{\mathit{conn}}
\sim \delta(q_1+q_2+q_3)
\int \sum_{n_1=1}^{\infty} \frac{m^{D-4}_{n_1} Z_{n_1}\rho^{-1}(m^{2}_{n_1})}{p^2+m^{2}_{n_1} } \frac{m_{n_1}^2}{q_2^2+m_{n_1}^2}\nonumber\\
&&\sum_{n_2=1}^{\infty} \frac{m^{D-4}_{n_2} Z_{n_2}\rho^{-1}(m^{2}_{n_2})}{(p+q_2)^2+m^{2}_{n_2} } \frac{m_{n_2}^2}{q_3^2+m_{n_2}^2}
\sum_{n_3=1}^{\infty} \frac{m^{D-4}_{n_3} Z_{n_3}\rho^{-1}(m^{2}_{n_3})}{(p+q_2+q_3)^2+m^{2}_{n_3} } \frac{m_{n_3}^2}{q_1^2+m_{n_3}^2} d^4p
\end{eqnarray}
where we employed: $\lim_{n \rightarrow \infty} \frac{m_n^2}{q^2+m_n^2}=1$. Proceeding by induction in the $OPE$ and employing $x_1 \sim x_2$, we get the asymptotic contribution to the $r$-point scalar correlator:
\begin{eqnarray} \label{r}
\braket{ \mathcal{O}(x_1) \mathcal{O}(x_2) \cdots \mathcal{O}(x_r) }_{\mathit{conn}} \sim C(x_1-x_2) \cdots C(x_r-x_1)
\end{eqnarray}
\section{Asymptotic effective action and $S$-matrix amplitudes} \label{s4}
From Eq.(\ref{3}) follows the asymptotic effective action in the scalar glueball sector at lower orders, which reproduces the correlators in the $\frac{1}{N}$ expansion by means of the identification
$\mathcal{O}(x)=\sum_n \Phi_n(x)$:
\begin{eqnarray} \label{eff1}
&&\Gamma= \frac{1}{2!} \sum_n \int dq_1 dq_2 \delta(q_1+q_2) m_n^{4-2D} Z_n^{-2}\rho(m^{2}_{n}) \Phi_n(q_1) (q_1^2+m_n^2) \Phi_n(q_2) + \frac{C}{3! N} \int dq_1 dq_2 dq_3 \nonumber \\
&&\delta(q_1+q_2+q_3)
\int \sum_{n_1=1}^{\infty} m_{n_1}^2 \frac{m^{-D}_{n_1} Z^{-1}_{n_1} \Phi_{n_1}(q_2)}{p^2+m^{2}_{n_1} } \sum_{n_2=1}^{\infty} m_{n_2}^2 \frac{m^{-D}_{n_2} Z^{-1}_{n_2}\Phi_{n_2}(q_3)}{(p+q_2)^2+m^{2}_{n_2} }
\sum_{n_3=1}^{\infty} m_{n_3}^2\frac{m^{-D}_{n_3} Z^{-1}_{n_3}\Phi_{n_3}(q_1)}{(p+q_2+q_3)^2+m^{2}_{n_3} }dp \nonumber \\
\end{eqnarray}
with $C \sim O(1)$, computable in lowest-order perturbation theory.
Fourier transforming Eq.(\ref{r}) and employing the Kallen-Lehmann representation for $C(x)$, we get the spectral representation of the asymptotic primitive $r$-point scalar vertices in the effective action up to overall normalization:
\begin{eqnarray} \label{hl}
&&\int dq_1 dq_2 \cdots dq_r \delta(q_1+q_2+ \cdots + q_r)
\int \sum_{n_1=1}^{\infty} m_{n_1}^2 \frac{m^{-D}_{n_1} Z^{-1}_{n_1} \Phi_{n_1}(q_2)}{p^2+m^{2}_{n_1} }
\sum_{n_2=1}^{\infty} m_{n_2}^2 \frac{m^{-D}_{n_2} Z^{-1}_{n_2}\Phi_{n_2}(q_3)}{(p+q_2)^2+m^{2}_{n_2} } \nonumber \\
&&\cdots \sum_{n_r=1}^{\infty} m_{n_r}^2\frac{m^{-D}_{n_r} Z^{-1}_{n_r}\Phi_{n_r}(q_1)}{(p+q_2+ \cdots +q_r)^2+m^{2}_{n_r} }dp
\end{eqnarray}
The $S$-matrix generating functional follows by setting the kinetic term in canonical form by rescaling the fields $\Phi_n$, which is equivalent to dividing by the square root of the residues of the propagators in the $LSZ$ formulae:
\begin{eqnarray} \label{can}
&& S= \frac{1}{2!} \sum_n \int dq_1 dq_2 \delta(q_1+q_2) \Phi_n(q_1) (q_1^2+m_n^2) \Phi_n(q_2)
+ \frac{C}{3!N} \int dq_1 dq_2 dq_3 \delta(q_1+q_2+q_3) \nonumber \\
&& \int \sum_{n_1=1}^{\infty} \frac{ \rho^{-\frac{1}{2}}(m^2_{n_1})\Phi_{n_1}(q_2)}{p^2+m^{2}_{n_1} }
\sum_{n_2=1}^{\infty} \frac{ \rho^{-\frac{1}{2}}(m^{2}_{n_2})\Phi_{n_2}(q_3)}{(p+q_2)^2+m^{2}_{n_2} }
\sum_{n_3=1}^{\infty} \frac{ \rho^{-\frac{1}{2}}(m^2_{n_3})\Phi_{n_3}(q_1)}{(p+q_2+q_3)^2+m^{2}_{n_3} }dp
\end{eqnarray}
and analogously for the primitive $r$-point vertices. Remarkably, the dependence on the naive dimension and anomalous dimension has disappeared, because the $S$-matrix cannot depend on the choice of the interpolating field for the same asymptotic state. The asymptotic interaction in the scalar sector is generated by an infinite number of vertices that look like one-loop diagrams in a $\Phi^3$ field theory, perhaps up to normalization, one for each order of the $\frac{1}{N}$ expansion. If we consider only power-counting, because of the explicit factor of $\rho_0^{-\frac{1}{2}}$ in Eq.(\ref{can}), which has the dimension of $\Lambda_{QCD}$, the $S$-matrix in the scalar sector behaves in the $UV$
as in a super-renormalizable field theory. But the actual dependence of the $S$-matrix amplitudes on the masses of the asymptotic states is very sensitive, because of the factors of $\rho_0^{-\frac{1}{2}}$ for each external line, to the rate of growth with the mass of the spectral distribution, which includes the multiplicities, while the factor of $\rho_0^{-1}$ cancels after summing over intermediate states in internal propagators.
In string theories defined on a space-time curved in some extra dimensions it might not be impossible to reproduce such an asymptotic $S$-matrix,
but no presently known string theory, in particular based on the $AdS$ String/Gauge Fields correspondence, reproduces the asymptotics of correlators implied by Eq.(\ref{eff1}) (see section \ref{s1}).
The simplest three-point correlators next to the scalar ones involve the flavor chiral $R=+,-$, vector $R=V$, and axial $R=A$ currents, $j^{a}_{R \alpha \dot \beta}$, built by means of fermion bilinears in $QCD$, with $a$ a flavor index in the adjoint representation of
$SU(N_f)$, or $a=0$ for the identity representation. The generating functional of spin-$1$ correlators in spinor notation, by the identification $j^{a\alpha_1 \dot \beta_1}_{R }(x)=\sum_n \Phi^{a \alpha_1 \dot \beta_1}_{Rn}$, reads:
\begin{eqnarray} \label{effspin1}
&&\Gamma_{1R} = \frac{1}{2} \sum_n \int dq_1 dq_2 \delta(q_1+q_2) m_{Rn}^{-2} Z_{Rn}^{-2} \rho_{1R}(m^2_{Rn})
\Phi^{a \alpha \dot \beta}_{Rn }(q_1) \big( q_1^2+ m_{Rn}^2 \big) \Phi^a_{ \alpha \dot \beta Rn}(q_2) \nonumber \\
&&+ \frac{C}{3 \sqrt N} \int dq_1 dq_2 dq_3 \delta(q_1+q_2+q_3) Tr^{(R)}(a,b,c)
\int \sum_{n_1=1}^{\infty} \frac{p_{\alpha_1 \dot \beta_2} m^{-2}_{Rn_1} z^{-1}_{Rn_1} \Phi^{a \alpha_1 \dot \beta_1}_{Rn_1}(q_2)}{p^2+m^{2}_{Rn _1} } \nonumber \\
&& \sum_{n_2=1}^{\infty} \frac{(p+q_2)_{\alpha_2 \dot \beta_3} m^{-2}_{Rn_2} z^{-1}_{Rn_2} \Phi^{b \alpha_2 \dot \beta_2}_{Rn_2}(q_3)}{(p+q_2)^2+m^{2}_{Rn _2} }
\sum_{n_3=1}^{\infty} \frac{(p+q_2+q_3)_{\alpha_3 \dot \beta_1} m^{-2}_{Rn_3} z^{-1}_{Rn_3} \Phi^{c \alpha_3 \dot \beta_3}_{Rn_3}(q_1)}{(p+q_2+q_3)^2+m^{2}_{Rn _3} } dp \nonumber \\
\end{eqnarray}
with $\partial_{\alpha \dot \beta} \Phi^{a \alpha \dot \beta}_ {Rn}(x)=0$ on shell, and to match $RG$-improved perturbation theory: $ \lim_{n \rightarrow \infty} Z_{Rn} =1 $, $\sum_n z_{Rn} m_{Rn}^{-2} \rho^{-1}_{1}(m^2_{Rn}) \sim 1 $, $Tr^{(+)}(a,b,c)= Tr(T^aT^bT^c)$, $Tr^{(-)}(a,b,c)= - Tr(T^cT^bT^a)$, $ Tr^{(V)}(a,b,c) \sim f^{abc} $, $ Tr^{(A)}(a,b,c) \sim d^{abc} $. The factors of $ p_{\alpha_1 \dot \beta_2} m^{-1}_{Rn_1}$ spoil the super-renormalizability of the $S$-matrix, obtained setting $\Gamma_{1R}$ in canonical form, because now the effective coupling is dimensionless $\rho^{-\frac{1}{2}}_{1R} m^{-1}_{Rn} $. By power counting, in the spin-$1$ sector for mesons large-$N$ $QCD$ is renormalizable but not super-renormalizable, yet all the divergences must be reabsorbed in a redefinition of $\Lambda_{QCD}$.
\section{ Conclusions and outlook} \label{s1}
The main limitation of the asymptotic solution is that it does not provide spectral information.
But first and foremost, the intrinsic interest of the asymptotic solution is that it furnishes a concrete guide to find out an actual solution, possibly only for the spectrum and the $S$-matrix amplitudes, by other methods,
that may be of field-theoretical or of string-theoretical nature. Moreover, for approximate solutions, the asymptotic solution provides a quantitative measure of how good or bad the approximation is.
For example, employing the asymptotic theorem for two-point correlators \cite{MBN} or directly resumming the leading logarithms of perturbation theory \cite{MBM}, it has become apparent that all the present proposals for the scalar or pseudoscalar glueball propagators in confining asymptotically-free $QCD$-like theories based on the $AdS$ String/Gauge Fields correspondence disagree by powers of logarithms \cite{MBM,MBN} with the asymptotic solution.
Indeed, by the asymptotic theorem the asymptotic behavior of the scalar glueball propagator, the correlator that controls the mass gap in large-$N$ $YM$, reads in any asymptotically-free gauge theory massless in perturbation theory \cite{MBN,MBM}:
\begin{align}\label{eqn:corr_scalare_inizio}
&\int\langle Tr{F_{}^2}(x) Tr{F}_{}^2(0)\rangle_{conn}e^{ip\cdot x}d^4x
\sim p^4\Biggl[\frac{1}{\beta_0\log\frac{p^2}{\Lambda_{\overline{MS}}^2}}\Biggl(1-\frac{\beta_1}{\beta_0^2}\frac{\log\log\frac{p^2}{\Lambda_{\overline{MS}}^2}}{\log\frac{p^2}{\Lambda_{\overline{MS}}^2}}\Biggr)+O\biggl(\frac{1}{\log^2\frac{p^2}{\Lambda_{\overline{MS}}^2}}\biggr)\Biggr]
\end{align}
up to contact terms, while all the glueball correlators presently computed in the literature on the basis of the $AdS$ String/Gauge Fields correspondence behave as $p^4 \log^n (\frac{p^2}{\mu^2})$, with $n=1$ in the Hard Wall and Soft Wall models, and $n=3$ in the Klebanov-Strassler cascading $\mathcal{N}=1$ $SUSY$ gauge theory, even though in the latter the asymptotically-free $NSVZ$ beta function is correctly reproduced in the supergravity approximation.
The aforementioned asymptotic disagreement \cite{MBN,MBM} implies that for an infinite number of poles and/or residues the large-$N$ glueball propagator on the string side of the would-be correspondence disagrees with the actual propagator of the asymptotically-free $QCD$-like theory on the gauge side.
This is unsurprising, since the stringy gravity side of the correspondence is in fact strongly coupled in the $UV$, and therefore it cannot describe the $UV$ of any confining asymptotically-free gauge theory. Thus there is no reason that the needed spectral information be correctly encoded in such a class of strongly-coupled $AdS$-based theories.
However, we expect that the $QCD$ string is singled out as the unique string theory that is asymptotic to the asymptotic solution for the $S$-matrix, with the asymptotic states labelled by the spectrum generating algebra of $QCD$.
Large pre-trained language models have brought unprecedented progress in NLP, but also concerns regarding the excessive computing power needed to train them \cite{strubell_energy_2019}.
Limited access to large amounts of computational resources, as well as environmental considerations, curb possibilities for less-resourced and less-researched languages.
Additionally, models like GPT-2 \citep{radford_language_2019} are trained on amounts of data that are not available for most languages.
As a result of these limitations, language models are commonly trained for English, whereas reproductions in other languages may underperform or not exist.
That language models can benefit from information in other languages has been demonstrated by the effectiveness of multilingual BERT (mBERT) and XLM-RoBERTa \citep{conneau_unsupervised_2020}.
However, for downstream tasks mBERT has been shown to be outperformed by monolingual models for higher resource languages whereas lower resource languages can still achieve better results without pre-trained language models \citep{nozza_what_2020,wu_are_2020}.
Rather than pursuing a multilingual direction, we aim at exploiting existing language models and language similarities to create models for new languages. Specifically,
we develop a multi-step procedure for adapting English GPT-2 \citep{radford_language_2019} to Italian and Dutch.
Dutch is genetically closely related to English, both being West-Germanic languages, while Italian is a more distant Romance language from the same Indo-European language family \citep{eberhard_ethnologue_2020}.
It is, however, worth noting that at the sentence level English and Italian tend to have the same word order (SVO), while Dutch is SVO in main clauses but SOV in subordinate ones; at the noun-phrase level, English and Dutch share constituent order (for example adjective-noun) while Italian differs (mostly noun-adjective).
A GPT-2 based model has previously been trained from scratch for Italian \citep{de_mattei_geppetto_2020}. We can thus compare sentences generated by this model with sentences generated by our adapted model.
For Dutch, no other GPT-2 based models exist, but similar BERT-based models have been trained from scratch \citep{de_vries_bertje_2019, delobelle_robbert_2020}.
\paragraph{Procedure Overview and Contributions} When training a new language model, weights of an existing pre-trained model for another language can be used for initialisation.
The first step in our training procedure is to only retrain the lexical embeddings of the GPT-2 \textit{small} model, without touching the Transformer layers.
We show that retrained lexical embeddings are well aligned with the English vocabulary and that GPT-2 is capable of generating realistic text in Italian and Dutch after this step.
Next, we demonstrate that the lexical embeddings of larger GPT-2 models can be approximated by transforming the \textit{small} lexical embeddings to the GPT-2 \textit{medium} lexical embedding space.
The least-squares regression method is the most effective transformation method for this scaling procedure.
Human judgements show that generated sentences are often realistic, but become even more consistently so after additional finetuning of the Transformer layers.
This improvement is stronger for Dutch than for Italian.
The steps in our pipeline yield GPT-2 based language models for Italian and Dutch which are made available on the Hugging Face model hub\footnote{\url{https://huggingface.co/GroNLP}};
the source code is available on Github\footnote{\url{https://github.com/wietsedv/gpt2-recycle}}.
On the last page, we also include a `recipe' for creating GPT-2 models for new languages.
\section{Background}
Previous and current research relevant for the present work is found in the more general field of transfer learning, with a specific focus on language transfer.
We also discuss how our approach of translating lexical layers in different model sizes relates to work on aligning word embeddings.
\subsection{Language transfer}
Transfer learning can be an effective strategy to adapt models to lower-resource languages by initially training a model for a source language and then further training (parts of) the model for a target language.
It has been successfully used to create machine translation models with little parallel data \citep{zoph_transfer_2016} as well as other classic NLP tasks \citep{lin_choosing_2019}.
In machine translation a model can be adapted by initially training it for a high-resource language pair after which the model should be partially retrained for a low-resource language \citep{zoph_transfer_2016, nguyen_transfer_2017, kocmi_trivial_2018}.
Retraining a randomly initialised lexical layer while freezing the rest of the model is an effective method to adapt a model to a new language, and dictionary based initialisation is not required to get the best performance \citep{zoph_transfer_2016}.
\citet{artetxe_cross-lingual_2020} show that a monolingual BERT model can be adapted from a source language to a different target language by retraining the lexical layer for the target language while freezing the Transformer layers in the model.
Zero shot adaptation for downstream tasks is possible by finetuning the original source model with source language data and swapping lexical layers afterwards.
Lexical layer retraining approaches may be effective despite the presence of source and target language dissimilarities if a downstream task does not require perfect data.
However, these methods have not been applied yet to generative language models where dissimilarities can cause clear syntactic and lexical errors.
Language similarity plays a role in the effectiveness of transfer learning for language models.
For instance, in machine translation French is a better parent model for Spanish than German \citep{zoph_transfer_2016}.
Word order differences between languages can negatively influence transfer performance, and \citet{kim_effective_2019} show that randomly swapping words in the source language, which forces the model to rely less on consistent word order, can improve performance in the target language.
Overall, genetic similarity between source and target languages can play a role, but \citet{lin_choosing_2019} have shown that in practice the geographic distances between countries of origin, syntactic similarity and subword overlap are better predictors of transfer performance for machine translation, part-of-speech tagging, dependency parsing and entity linking.
\subsection{Aligning word embeddings}
Alignment of lexical embeddings, for example for multiple languages, is most prominently done with mapping-based approaches \citep{ruder_survey_2019}.
Typically, a function is determined that transforms one vector space to another based on a seed lexicon.
This lexicon is a dictionary of anchor points that should end up close together after transformation.
An influential method for learning a lexical embedding mapping is the least-squares linear transformation method by \citet{mikolov_exploiting_2013}.
They observe that words and their translations in other languages show similar constellations of related words after such a transformation.
An alternative method that is generally considered an improvement \citep{ruder_survey_2019} is the orthogonal Procrustes solution.
This method adds the constraint that the transformation matrix must be orthogonal.
In practice this means that the transformation only contains rotations and reflections and no scaling and translation.
This constraint enables length normalisation \citep{xing_normalized_2015} and ensures monolingual invariance \citep{artetxe_learning_2016}.
Mapping-based approaches rely on isomorphism, which means that a one-to-one token mapping between source and target lexical embedding spaces should be possible.
This assumption is used for bilingual lexicon induction after alignment \citep{conneau_word_2018}.
However, the isomorphism assumption depends strongly on language similarity and on the amount of training data \citep{sogaard_limitations_2018}.
Some more complex alignment methods like RCSLS \citep{joulin_loss_2018} optimise for dictionary translation performance, which assumes isomorphism, but simpler methods like the orthogonal Procrustes solution are more effective for downstream tasks like natural language inference \citep{glavas_how_2019}.
\citet{mohiuddin_lnmap_2020} propose a solution to the isomorphism problem by learning a new shared embedding space with an auto-encoding neural model instead of trying to fit the embeddings of one language in the space of another language.
\section{Resources}
\paragraph{Models}
The models that we train are based on the pre-trained GPT-2 language models \citep{radford_language_2019}.
GPT-2 is an auto-regressive Transformer-decoder based language model for English and comes in four sizes: small (12 layers), medium (24 layers), large (36 layers) and extra large (48 layers).
Our experiments use the small (\texttt{sml}) and medium (\texttt{med}) model sizes.
\paragraph{Pre-training data}
The GPT-2 models are (further) pre-trained with Italian (\texttt{ita}) and Dutch (\texttt{nld}) data.
The Italian pre-training data is the same dataset that was used to train the Italian GPT-2 small language model GepPpeTto \citep{de_mattei_geppetto_2020}.
This dataset is a combination of Wikipedia data (2.8GB) and web texts from the ItWaC corpus (11GB; \citealt{baroni_wacky_2009}).
Dutch data consists of a combination of Wikipedia (2.0GB), newspaper articles (2.9GB; \citealt{ordelman_roeland_jf_twnc_2007}), books (6.5GB) and articles from various Dutch news websites (2.1GB).
Documents are filtered to only contain Dutch texts using the Wikipedia-trained fastText language identifier \citep{joulin_bag_2017}, and are deduplicated based on exact sentence matches.
The final Dutch pre-training data contains 13GB of plain text, of which 5\% is reserved as development data.
\paragraph{Evaluation data}
The Italian models are tested using the same corpora that were used to evaluate GePpeTto \citep{de_mattei_geppetto_2020}: Wikipedia, ItWaC, EUR-Lex (laws), newspapers and blog posts.
A 5\% subset of this data is used for development.
For perplexity evaluation, the Dutch 500 million word, 22-genre SoNaR corpus is used \citep{oostdijk_construction_2013}.
The smaller 1 million word SoNaR-1 subcorpus is used as development data.
\paragraph{Tokenisation}
The datasets are tokenised using byte-pair-encoding (BPE).
For better comparison, the Italian vocabulary is taken from the GePpeTto model \citep{de_mattei_geppetto_2020}.
The Dutch BPE vocabulary is based on the full pre-training data and it has been ensured that every character that is used in the Dutch language is present as a single character token in the vocabulary.
A large vocabulary size is beneficial because words are split into separate tokens less often, but vocabularies that are too large will have poor training coverage for uncommon tokens.
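The core of BPE vocabulary construction can be sketched in plain Python (a minimal Sennrich-style merge learner on word frequencies; the actual vocabularies are built with Hugging Face's byte-level BPE, which differs in implementation details):

```python
from collections import Counter

def learn_bpe(words, num_merges):
    """Learn BPE merges from a {word: frequency} dict (minimal sketch;
    repeatedly merges the most frequent adjacent symbol pair)."""
    # represent each word as a tuple of symbols with an end-of-word marker
    vocab = {tuple(w) + ("</w>",): f for w, f in words.items()}
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for symbols, freq in vocab.items():
            for a, b in zip(symbols, symbols[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)   # most frequent adjacent pair
        merges.append(best)
        merged = best[0] + best[1]
        new_vocab = {}
        for symbols, freq in vocab.items():
            out, i = [], 0
            while i < len(symbols):
                if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == best:
                    out.append(merged)
                    i += 2
                else:
                    out.append(symbols[i])
                    i += 1
            new_vocab[tuple(out)] = freq
        vocab = new_vocab
    return merges, vocab
```

A larger \texttt{num\_merges} yields a larger vocabulary, which is exactly the trade-off discussed above: frequent words become single tokens, while rare merges are poorly supported by the data.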
\paragraph{Computation}
Training a model like GPT-2 is a computationally expensive task that requires access to costly hardware for long training times.
All models discussed in this paper are trained with eight parallel NVIDIA V100 32GB GPUs on the Peregrine high performance computing cluster at the University of Groningen.
For efficient implementation of the models, we use PyTorch (1.6.0; \citealt{paszke_pytorch_2019}), PyTorch Lightning (0.9.0; \citealt{falcon_pytorch_2019}) and Transformers (3.0.2; \citealt{wolf_huggingfaces_2020}).
We implement four strategies to decrease general training time.
First, the models are trained with 16-bit automatic mixed-precision training \citep{micikevicius_mixed_2018}. This decreases training time by a factor of two to three.
Second, we split each document in windows of 128 instead of 1024 tokens when we only train the lexical embeddings.
Third, we minimise padding by using bucketed random sampling which means that sequences within minibatches have roughly the same length.
Finally, we use the maximum batch sizes that fit into GPU memory and use gradient accumulation so that an optimisation step is performed only once every 2000 examples.
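The bucketed sampling strategy can be sketched in plain Python (bucket and batch sizes below are illustrative, not the training values):

```python
import random

def bucketed_batches(lengths, batch_size, bucket_size=1024, seed=0):
    """Length-bucketed random sampling (sketch).
    `lengths[i]` is the token count of example `i`."""
    rng = random.Random(seed)
    order = list(range(len(lengths)))
    rng.shuffle(order)                         # global shuffle first
    batches = []
    for start in range(0, len(order), bucket_size):
        bucket = order[start:start + bucket_size]
        bucket.sort(key=lambda i: lengths[i])  # sort by length within a bucket
        for b in range(0, len(bucket), batch_size):
            batches.append(bucket[b:b + batch_size])
    rng.shuffle(batches)                       # randomise batch order
    return batches
```

Because sequences inside a minibatch now have roughly the same length, almost no padding tokens need to be processed, while the global and batch-level shuffles keep the ordering sufficiently random.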
The models are trained with the Adam optimiser \citep{kingma_adam_2017} and initial learning rates are chosen based on the steepest loss slope with gradually increasing learning rates \citep{smith_cyclical_2017}.
The learning rate is reduced by 10\% when training loss reaches a plateau.
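In PyTorch such a plateau-based schedule is available out of the box; a minimal sketch, with an illustrative one-parameter optimiser standing in for the real model:

```python
import torch

# A 10% learning-rate reduction once training loss plateaus
# (the parameter and optimiser here are illustrative placeholders).
param = torch.nn.Parameter(torch.zeros(1))
optimizer = torch.optim.Adam([param], lr=1e-4)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.9, patience=1)

for epoch_loss in [1.0, 1.0, 1.0, 1.0]:   # a stalled training loss
    scheduler.step(epoch_loss)            # lr drops to 9e-5 after the plateau
```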
More implementation details are given in the git repository.
\section{Cross-language Transfer}
\label{sec:lang}
We adapt GPT-2 for Italian and Dutch with minimal random initialisation.
The lexical embeddings in GPT-2 are trained with an English BPE vocabulary. Therefore, they are not usable for the new languages and the lexical embedding layer has to be randomly initialised for the target vocabulary.
This lexical embedding layer is used both as the first and the last layer of GPT-2 (tied weights).
Relearning lexical embeddings with frozen Transformer layers prevents catastrophic forgetting in the Transformer layers when the embeddings are still random.
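This embedding-only retraining step can be sketched in PyTorch with a toy tied-embedding model (the class, sizes and module names below are hypothetical stand-ins, not the actual GPT-2 modules):

```python
import torch
import torch.nn as nn

class TinyTiedLM(nn.Module):
    """Toy language model with tied input/output embeddings, standing in
    for GPT-2's lexical layer (hypothetical sizes; GPT-2 small ties a
    50,257 x 768 embedding matrix)."""
    def __init__(self, vocab_size=100, hidden=16):
        super().__init__()
        self.wte = nn.Embedding(vocab_size, hidden)    # lexical embeddings
        self.body = nn.Linear(hidden, hidden)          # stand-in for the Transformer blocks
        self.head = nn.Linear(hidden, vocab_size, bias=False)
        self.head.weight = self.wte.weight             # weight tying, as in GPT-2

    def forward(self, ids):
        return self.head(self.body(self.wte(ids)))

def freeze_all_but_embeddings(model):
    """Freeze every parameter except the (tied) lexical embeddings, so
    gradient updates only touch the first/last layer of the model."""
    for p in model.parameters():
        p.requires_grad = False
    model.wte.weight.requires_grad = True  # also unfreezes the tied output head
    return model
```

Note that backpropagation still has to traverse the frozen layers to reach the embeddings, which is why this step remains expensive per batch.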
\paragraph{Relearning lexical embeddings}
Relearning lexical embeddings is nearly as computationally expensive as fully training the model, because back-propagation has to be done through the full model in order to update the lexical embeddings in the first layer of the model.
However, loss values stabilise after only one to two epochs of lexical embedding relearning, whereas full model training takes considerably longer.
We retrain the lexical embeddings for the \texttt{sml} and \texttt{med} model for Italian and Dutch by training until loss on the validation data stops decreasing.
When we retrain the \texttt{sml} model, the perplexities on our Italian and Dutch test data become 44.2 and 48.9 respectively.
These perplexity scores show that the \texttt{sml} model can predict Dutch and Italian tokens reasonably well without having retrained the Transformer layers.
Therefore, the English Transformer layers are at least partially language-independent and our relearning method automatically aligns lexical embeddings to the embedding space of the English model.
However, if we retrain the \texttt{med} lexical layer for Italian and Dutch with the same method, test data perplexities are 81.2 and 185.0.
These unsatisfactory \texttt{med} perplexities could be due to stopping training too early or to arriving at a suboptimal local optimum.
Training for a longer time or trying different random initialisations defeats the purpose of minimising computational requirements.
A more efficient method that uses the already learned \texttt{sml} embeddings is described in Section~\ref{sec:complexity}.
\begin{table}
\centering
\begin{tabular}{l | l l}
\toprule
\textbf{English} & \textbf{Italian} & \textbf{Dutch} \\
\midrule
while & mentre & terwijl \\
genes & geni & genen \\
clothes & vestiti & kleren \\
musicians & composi[...] & artiesten \\
permitted & ammessa & toegelaten \\
Finally & infine & Eindelijk \\
satisfied & soddisfatto & tevreden \\
\midrule
\textit{Accuracy:} & \multicolumn{1}{c}{85\%} & \multicolumn{1}{c}{89\%} \\
\bottomrule
\end{tabular}
\caption{\label{tab:wte:alignment} Alignment of closest tokens in the lexical embeddings of \texttt{sml\textsubscript{rle}} for Italian and Dutch. Accuracy scores are based on a manual evaluation by the authors of 200 random aligned tokens. Semantically correct subword matches are included.}
\end{table}
\begin{table*}
\centering
\begin{small}
\begin{tabular}{p{0.98\columnwidth} | p{0.98\columnwidth}}
\toprule
\textbf{Italian} & \textbf{Literal English translation} \\
\midrule
La prima parte del film venne \textit{distribuito} in Giappone con l'aggiunta della colonna sonora. & The first part of the film was \textit{distributed} in Japan with the addition of the soundtrack.\\\midrule
L'unico motivo \textit{di la} mia insoddisfazione fu il fatto che l'inizio della sua attività [\ldots]& The only reason \textit{of the} my unsatisfaction was the fact that the beginning of-the his/her activity [\ldots]\\\midrule
Il suo nome deriva da un vocabolo arabo. & The his/her name derives from a word Arabic.\\
\toprule
\textbf{Dutch} & \textbf{Literal English translation} \\
\midrule
In een artikel in de Journal of Economicologie (1998), \textit{The New York Times schrijft}: & In an article in the Journal of Economicology (1998), \textit{The New York Times writes}:\\\midrule
Ik kan me niet voorstellen dat mensen van mijn generatie \textit{zijn zo boos op mij te wachten}. & I can me not imagine that people of my generation \textit{are so mad at me to wait}.\\\midrule
Ik heb niets gedaan om mijn moeder te helpen. & I have nothing done to my mother to help.\\%\midrule
\bottomrule
\end{tabular}
\end{small}
\caption{\label{tab:wte:examples} A selection of generated sentences by the \texttt{sml} model with Italian and Dutch lexical embeddings. Phrases in italics are ungrammatical in the target language.}
\end{table*}
\paragraph{Vocabulary alignment}
The lexical embeddings of both the original English tokens as well as the relearned Italian and Dutch lexical embeddings can be considered to inhabit the same embedding space because the lexical embeddings of all three languages are tuned to minimise loss with the exact same Transformer layers.
Therefore, tokens with similar meaning in different languages should be close to each other if the lexical embeddings are properly trained.
Table~\ref{tab:wte:alignment} shows the closest Italian and Dutch tokens of a random sample of English tokens.
These alignments show that the optimal lexical embeddings for both Italian and Dutch are often literal translations of English tokens.
Thanks to the similarity of context-dependent structures, such as syntax, across these three languages, the English model can be adapted to Italian and Dutch.
Based on this small sample, Dutch to English alignment seems to be slightly more accurate than Italian to English, but a more thorough study would be required to evaluate the actual relation between genetic similarity and alignment potential through this method.
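The alignment check underlying Table~\ref{tab:wte:alignment} amounts to a cosine nearest-neighbour search in the shared embedding space; a small sketch with toy matrices in place of the real 768-dimensional embeddings:

```python
import numpy as np

def nearest_tokens(target_emb, english_emb, english_tokens, k=1):
    """For each target-language embedding, return the k closest English
    tokens by cosine similarity in the shared space (the matrices and
    token list here are toy stand-ins for the real embeddings)."""
    t = target_emb / np.linalg.norm(target_emb, axis=1, keepdims=True)
    e = english_emb / np.linalg.norm(english_emb, axis=1, keepdims=True)
    sims = t @ e.T                             # (n_target, n_english)
    idx = np.argsort(-sims, axis=1)[:, :k]
    return [[english_tokens[j] for j in row] for row in idx]
```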
\paragraph{Text generation}
Table~\ref{tab:wte:examples} shows some examples of unconditioned text generation of the English \texttt{sml} with relearned lexical embeddings for Italian and Dutch.
These examples show that the model can generate proper Italian and Dutch sentences, although it sometimes uses English word order where the correct word order differs in Dutch, ignores grammatical gender agreement in Italian by defaulting to the singular masculine, or does not always produce Italian prepositional articles correctly (``di la'', en: \textit{of the}, instead of ``della'', en: \textit{of-the}). Phrases in italics in Table~\ref{tab:wte:examples} highlight such mistakes.
The literal English translations, however,
show that the models can generate proper Italian and Dutch grammar that differs from English.
Italian and Dutch lexical embeddings are not only aligned with equivalent English tokens, but unexpected correct syntax shows that the grammatical functions of words have also been adapted. For example, in Italian the noun-adjective order is opposite to English and realised correctly; also, the use of the definite article in front of a possessive pronoun is correctly introduced, while ungrammatical in English.
This shows that the relatively low-dimensional context-independent lexical embeddings in GPT-2 contain syntactic features of the tokens in addition to semantics, and confirms previous findings of high information density in the lexical layer of language models \cite{de_vries_whats_2020}.
Therefore, language adaptation can be to some extent effective by adapting the lexical embedding layer without retraining Transformer layers at all.
\section{Scaling up Complexity}
\label{sec:complexity}
Replacing the original lexical embeddings with lexical embeddings from a different target language seems an effective way to initialise full model transfer to that target language.
However, relearning the lexical embeddings of a new vocabulary requires full forward and backward propagation through the whole model.
Therefore, this becomes increasingly expensive for larger model sizes.
When multiple model sizes need to be transferred to a new language, the lexical embeddings do not need to be retrained from scratch.
Instead, vocabulary alignment between the source and target languages for the smaller model could be used to initialise the embeddings for a larger model.
After relearning the lexical embeddings of the \texttt{sml} model for Italian and Dutch, we observed that tokens with similar meaning in different languages are close to each other in the embedding space.
This alignment effect should also be present in properly trained lexical embeddings of larger models.
Given that we have at our disposal known embeddings for all 50K English tokens for every model size, we can use these data points to transform model size \texttt{sml} to larger model size \texttt{med}.
Regardless of architecture, embeddings are only considered to be alignable if they are trained under identical conditions with the same type and amount of data \citep{levy_improving_2015, ruder_survey_2019}.
Our goal differs from previous alignment efforts since instead of aligning languages, we align separately trained embeddings for different model sizes, trained on the same data with identical and fully parallel vocabularies in English. The embeddings differ in dimensionality (768d for \texttt{sml}, 1024d for \texttt{med}) and the different model sizes may influence the amount and density of information in the lexical embeddings.
\begin{table*}[ht]
\centering
\begin{tabular}{l | c c | c c c }
\toprule
& \multicolumn{2}{c|}{Italian} & \multicolumn{3}{c}{Dutch} \\
\textbf{Model} & \textbf{Int@1k} & \textbf{PPL} & \textbf{Int@1k} & \textbf{PPL} & \textbf{PPL (1 epoch)} \\
\midrule
\texttt{med\textsubscript{rle}} (1 epoch) & 0.38 & - & - & 185.02 & - \\
\midrule
\texttt{sml\textsubscript{rle}} $\xrightarrow{proc}$ \texttt{med} & \textbf{0.61} & 8.12 $\times 10^{12}$ & \textbf{0.61} & 5.02 $\times 10^{12}$ & 52.69 \\
\texttt{sml\textsubscript{rle}} $\xrightarrow{lstsq}$ \texttt{med} & 0.56 & \textbf{364.06} & 0.56 & \textbf{293.61} & \textbf{47.57} \\
\texttt{sml\textsubscript{rle}} $\xrightarrow{1-nn}$ \texttt{med} & 0.37 & 2,764.19 & 0.36 & 1,101.59 & 50.25 \\
\texttt{sml\textsubscript{rle}} $\xrightarrow{10-nn}$ \texttt{med} & 0.37 & 20,715.80 & 0.35 & 11,871.66 & 56.88 \\
\bottomrule
\end{tabular}%
\caption{\label{tab:trans:med} Scores for different transformation methods. Int@1K are the average 1k nearest English neighbours intersection (int) fractions between \texttt{sml} and transformed \texttt{med} embeddings. \textit{PPL} is the perplexity on the test sets for Italian and Dutch. \textit{PPL (1 epoch)} indicates the perplexity after one epoch of training, which is low if the transformed embeddings were close to a good local optimum.}
\end{table*}
\subsection{Transformation methods}
The 50K parallel English tokens can be used to find an optimal transformation between lexical embeddings of different model sizes.
The completeness of this mapping due to shared vocabularies between models eliminates the need to use complex solutions like refinement or bootstrapping the lexicon \citep{artetxe_unsupervised_2018}.
We compare three simple supervised alignment methods for transformation from source space \texttt{sml} to target space \texttt{med}.
\paragraph{Regression (lstsq)}
A classic approach for mapping lexical embeddings is mean-squared-error minimising linear regression with the least-squares method \citep{ruder_survey_2019, mikolov_exploiting_2013}.
This method learns a transformation matrix $\boldsymbol{W}$ that minimises the Euclidean distance between source and target embeddings.
The optimal matrix is approximated with stochastic gradient descent, and therefore this is not an exact solution.
\paragraph{Orthogonal Procrustes (proc)}
More recent alignment approaches constrain the transformation $\boldsymbol{W}$ to be an orthogonal matrix \citep{ruder_survey_2019, artetxe_learning_2016}.
This constraint enables using the exact solution for the orthogonal Procrustes problem \citep{xing_normalized_2015}.
The exact solution only rotates and reflects data points to be as close as possible to the target space without any scaling or translation, preserving monolingual invariance in the source embeddings \citep{artetxe_learning_2016}.
\paragraph{Weighted K-Nearest Neighbours (knn)}
Unlike typical alignment approaches, we have a complete set of parallel data points in the source and target spaces (English).
The unknown target language tokens can be approximated by taking the $\boldsymbol{K}$ nearest English tokens in the source \texttt{sml} embedding space and using the distance-weighted sum of these tokens in the target \texttt{med} embedding space.
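All three methods can be sketched on toy data as follows. Two differences from the setup above are flagged in the comments: we solve the least-squares problem exactly rather than with stochastic gradient descent, and with unequal source and target dimensionalities the Procrustes map is only semi-orthogonal:

```python
import numpy as np

def fit_lstsq(X_src, X_tgt):
    """Least-squares linear map W minimising ||X_src @ W - X_tgt||
    (solved exactly here; the paper approximates it with SGD)."""
    W, *_ = np.linalg.lstsq(X_src, X_tgt, rcond=None)
    return W

def fit_procrustes(X_src, X_tgt):
    """Procrustes map via SVD of the cross-covariance; with unequal
    dimensionalities (768 -> 1024 in the paper) W has orthonormal rows
    rather than being square orthogonal."""
    U, _, Vt = np.linalg.svd(X_src.T @ X_tgt, full_matrices=False)
    return U @ Vt

def knn_transform(q_src, X_src, X_tgt, k=10):
    """Approximate an unseen source-space vector in the target space as
    the distance-weighted mean of its k nearest anchor points."""
    d = np.linalg.norm(X_src - q_src, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-8)
    return (w[:, None] * X_tgt[idx]).sum(axis=0) / w.sum()
```

The anchors here play the role of the 50K parallel English tokens; toy dimensions stand in for the 768- and 1024-dimensional embedding spaces.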
\subsection{Results after transformation}
Table~\ref{tab:trans:med} shows \texttt{med} embedding similarity with source \texttt{sml} embeddings and perplexities on test data with transformed embeddings, as well as the transformed embeddings with one additional epoch of training.
Results are consistent for Italian and Dutch.
Nearest English neighbours are best preserved with the Orthogonal Procrustes method.
However, the perplexity scores are extremely high for this method.
The perplexity scores of the different methods vary by orders of magnitude.
Based on this, the least-squares regression method outperforms the other methods.
After one epoch of additional training with transformation initialised embeddings, the lstsq method still outperforms the other methods.
It even outperforms the \texttt{sml} model with fully tuned lexical embeddings.
\begin{table}
\centering
\resizebox{7.4cm}{!}{%
\begin{tabular}{l | c r }
\toprule
\textbf{Model} & \multicolumn{2}{c}{\textbf{PPL}} \\
& \texttt{ita} & \texttt{nld} \\
\midrule
\texttt{sml\textsubscript{rle}} & 44.19 & 48.85 \\
\texttt{sml\textsubscript{rle} + finetuning} & \textbf{42.45} & \textbf{39.59} \\
\texttt{sml\textsubscript{full}}* & 193.15 & 219.34 \\
\midrule
\texttt{med\textsubscript{rle} + finetuning} & 42.51 & 44.68 \\
\midrule
\texttt{GePpeTto (sml)} & 106.84 & - \\
\bottomrule
\end{tabular}%
}
\caption{\label{tab:full} Perplexities of the concatenated test data for the final models. The \texttt{med\textsubscript{rle}} model is in practice the \texttt{sml\textsubscript{rle}} $\xrightarrow{lstsq}$ \texttt{med} model. * The \texttt{sml\textsubscript{full}} model is trained for the equivalent amount of time as the \texttt{sml\textsubscript{rle} + finetuning} models, but with all layers unfrozen.}
\end{table}
\section{Full model finetuning}
\label{sec:full}
After obtaining lexical embeddings for Italian and Dutch to be plugged into the English GPT-2 models, the full models can be finetuned for the target language.
The best performing lexical embeddings will be used to train the \texttt{sml} and \texttt{med} Italian and Dutch models.
These are the lexical embeddings that are relearned from random initialisation for the \texttt{sml} model.
For the \texttt{med} model, the lstsq transformed \texttt{sml} embeddings with additional training are used (\texttt{sml\textsubscript{rle}} $\xrightarrow{lstsq}$ \texttt{med\textsubscript{+rle}}).
The relearned lexical embeddings reduce the risk of information loss while the model is adjusting to a new language.
Nevertheless, information can still be lost during training.
For instance for the \texttt{sml} Dutch model, validation loss increases with a learning rate of $10^{-4}$, but this does not happen with a lower learning rate of $10^{-5}$.
\section{Obtained models and evaluation}
\label{sec:eval}
For both Italian and Dutch, we evaluate three models:
(i) the English \texttt{sml} model with relearned lexical embeddings;
(ii) the \texttt{sml} model with additional finetuning to the target language; and
(iii) the English \texttt{med} model with relearned lexical embeddings that were initialised by transforming \texttt{sml} embeddings with the least-squares method.
For Italian, we also include the GPT-2 small based GePpeTto model \citep{de_mattei_geppetto_2020}, which was trained from scratch.
This inclusion offers the opportunity of a direct comparison between a GPT-2 model trained from scratch and those obtained with our transfer approach.
We run both an automatic and a human-based evaluation.
For the former, we compare perplexity scores on unseen test data in different genres.
For the latter, we collect and compare judgements over generated and gold texts by native speakers of Italian and Dutch.
\subsection{Perplexity}
\label{sec:res:ppl}
Table~\ref{tab:full} shows perplexity scores on concatenated multi-genre test data based on a strided moving window perplexity calculation.\footnote{Window sizes are 128 tokens and strides are 64 tokens except for GePpeTto. GePpeTto was trained with at most 100 tokens, so its window size is 100 with a 50 token stride.}
Perplexities are calculated with Italian and Dutch vocabularies of 30K tokens.
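The strided moving-window computation can be sketched as follows; the scorer argument stands in for the model's per-token negative log-likelihood, so any scorer works for testing the bookkeeping:

```python
import math

def strided_perplexity(token_nll, tokens, window=128, stride=64):
    """Perplexity with a strided moving window (a generic sketch of the
    evaluation described above). token_nll(context, token) returns the
    negative log-likelihood of `token` given `context`."""
    nlls, scored = [], 0
    for start in range(0, max(len(tokens) - 1, 1), stride):
        end = min(start + window, len(tokens))
        # score only the positions not covered by a previous window
        for i in range(max(start + 1, scored), end):
            nlls.append(token_nll(tokens[start:i], tokens[i]))
        scored = end
        if end == len(tokens):
            break
    return math.exp(sum(nlls) / len(nlls)) if nlls else float("inf")
```

With a uniform scorer over $V$ tokens this returns exactly $V$, which makes the bookkeeping easy to sanity-check.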
These results show that perplexities are low when only relearning the lexical embeddings for both Italian and Dutch.
Further finetuning of the \texttt{sml} model seems to have the greatest effect for the Dutch language.
The \texttt{med} models with relearned lexical embeddings have lower perplexity than the equivalent \texttt{sml} models.
This shows that language transferability based on the lexical layer is not restricted to small model sizes.
Moreover, we see that our proposed method results in lower perplexity scores than regular full model finetuning of the English model.
The overall perplexity scores of Italian are closer to each other than the Dutch perplexities.
We also tested perplexities for the different genres that make up the Italian and the Dutch datasets (see Table~\ref{tab:res:ita:genres} and Table~\ref{tab:res:nld:genres} for details), and observed that while perplexities vary greatly per genre, the model ranking per genre is consistent with the global scores.
\begin{table}[ht!]
\centering
\resizebox{7.7cm}{!}{%
\begin{tabular}{l | c c c }
\toprule
\textbf{Model} & \textbf{Social} & \textbf{News} & \textbf{Legal} \\
\midrule
\texttt{sml\textsubscript{rle}} & 134.64 & 67.14 & 16.95 \\
\texttt{sml\textsubscript{rle} + finetuning} & \textbf{118.19} & \textbf{55.63} & 15.36 \\
\midrule
\texttt{med\textsubscript{rle}} & 123.64 & 59.18 & \textbf{14.95} \\
\midrule
\texttt{GePpeTto\textsubscript{sml}} & 179.47 & 80.83 & 34.71 \\
\bottomrule
\end{tabular}%
}
\caption{\label{tab:res:ita:genres} Perplexities for different genres within the Italian test data. Rankings are consistent with Table~\ref{tab:full} except for the legal domain.}
\end{table}
\begin{table}[ht!]
\centering
\resizebox{7.7cm}{!}{%
\begin{tabular}{l | c c c }
\toprule
\textbf{Model} & \textbf{Proceedings} & \textbf{News} & \textbf{Legal} \\
\midrule
\texttt{sml\textsubscript{rle}} & 44.47 & 239.14 & 52.01 \\
\texttt{sml\textsubscript{rle} + finetuning} & \textbf{36.35} & \textbf{171.83} & \textbf{42.92} \\
\midrule
\texttt{med\textsubscript{rle}} & 40.62 & 234.52 & 45.01 \\
\bottomrule
\end{tabular}%
}
\caption{\label{tab:res:nld:genres} Perplexities for some SoNaR genres in Dutch. Models rankings are consistent across genres.}
\end{table}
\subsection{Human Judgements}
\label{sec:human}
The perplexity scores give an indication of how well a language is represented by language models, but this does not reliably tell how good the model is in a generative setting. For this, we resort to human judgements.
Human assessments of generated texts are collected for the models that incorporate the crucial steps in our approach and achieve reasonable perplexity scores: the \texttt{sml} models with only relearned lexical embeddings, the finetuned \texttt{sml} models and the higher complexity \texttt{med} models with only relearned lexical embeddings based on transformed \texttt{sml} lexical embeddings.
Texts are assessed in isolation by means of a \textit{direct} evaluation \cite{novikova_rankme_2018}.\footnote{A direct evaluation is opposed to a comparative one, usually involving a ranking task \cite{novikova_rankme_2018,de_mattei_geppetto_2020}; this is left to future work.}
Subjects are presented with texts on the screen, and are asked whether the texts they see could have been written by a human. All subjects are pre-informed that some of the texts they will see are machine generated.
Rather than discrete answers, we obtain continuous evaluations by offering the possibility of clicking anywhere on a bar whose extremes are ``no'' to the left and ``yes'' to the right.
The evaluation interface is made with PsychoPy3 \citep{peirce_psychopy2_2019} and hosted with Pavlovia\footnote{\url{https://pavlovia.org}}.
\begin{figure*}[t!]
\centering
\begin{subfigure}[b]{\columnwidth}
\centering
\includegraphics[width=\textwidth]{plots/boxplot-ita.pdf}
\caption{Human judgement scores for Italian texts.}
\label{fig:human:ita}
\end{subfigure}
\begin{subfigure}[b]{\columnwidth}
\centering
\includegraphics[width=\textwidth]{plots/boxplot-nld.pdf}
\caption{Human judgement scores for Dutch texts.}
\label{fig:human:nld}
\end{subfigure}
\caption{Human judgement scores based on a continuous scale. Most judgements were close to 0 or 1.}
\label{fig:human}
\end{figure*}
Italian models were evaluated by 24 participants (9~M, 15~F) with ages ranging from 26 to 63 with a median age of 46.
The Dutch models were evaluated by 15 participants (11~M, 4~F) with ages ranging from 23 to 36 with a median age of 27.
The three final models are evaluated for both languages; for Italian, we also add GePpeTto \citep{de_mattei_geppetto_2020}.
Human written gold sentences were sampled from the test data as an additional condition.
For each of these 5 Italian and 4 Dutch conditions, 100 sentences are evaluated.
Each participant has evaluated 50 to 150 sentences and each sentence is evaluated by 3 to 5 participants. As a result, we obtain 1950 evaluations for 500 Italian texts and 1550 evaluations for 400 Dutch texts.
All artificial sentences are generated randomly without conditioning, using beam search (5 beams), with sampling restricted to the top 50 tokens or a cumulative probability mass of at least 90\%, and a temperature of 3.0.
Setting the temperature to a value $>1$ decreases the sampling probability of likely tokens and therefore increases variation between generated samples.
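To make this effect concrete, here is a minimal sketch of temperature-scaled top-$k$ sampling in plain Python. It is illustrative only: the names and the exact truncation logic are our own, and the paper's actual generation additionally uses the beam-search settings described above.

```python
import math
import random

def temperature_sample(logits, temperature=3.0, top_k=50, rng=random):
    """Sample a token index from raw logits after temperature scaling.

    Temperature > 1 flattens the distribution (more variation between
    samples); temperature < 1 sharpens it towards the likeliest tokens.
    """
    # Keep only the top_k highest-scoring token indices (top-k sampling).
    order = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)
    kept = order[:top_k]
    # Softmax over temperature-scaled logits (max-shifted for stability).
    scaled = [logits[i] / temperature for i in kept]
    m = max(scaled)
    weights = [math.exp(s - m) for s in scaled]
    total = sum(weights)
    probs = [w / total for w in weights]
    # Draw one index according to the resulting distribution.
    r = rng.random()
    acc = 0.0
    for idx, p in zip(kept, probs):
        acc += p
        if r <= acc:
            return idx
    return kept[-1]
```

With a very low temperature this reduces to near-greedy decoding, while a high temperature approaches uniform sampling over the kept tokens.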
Longer sentences have a higher chance to contain mistakes, so a model that generates longer sentences may have a disadvantage.
However, explicitly controlling sentence length is neither possible nor desirable, since sentence length may itself be an indication of model quality.
For both languages, the randomly sampled gold sentences contain more long sentences than the model outputs, but the \texttt{sml} model with finetuning also sometimes generates longer sentences.
We filter out sentences longer than 30 tokens to decrease sentence length effects on judgements.
The remaining Italian sentences have median lengths of 18 or 22 words and the Dutch ones 16 or 17 words for the different conditions.
Figure~\ref{fig:human} shows the distributions of human judgements per condition.
Variance is high because the scores are not normally distributed: relatively many scores lie close to zero, one half, or one.
The model differences appear stronger for Dutch than Italian, but for both languages the subjects have given high scores to gold sentences.
This is expected and indicates that the participants are able to correctly judge real human texts.
Of the three trained models, the small model with additional finetuning achieves the highest scores.
For the Italian model comparison we use a linear mixed-effects model with author as the only fixed effect and random intercepts for participants and sentences.
There is no significant effect for sentence length.
The judgements on gold texts are significantly higher than all model judgements ($p < 0.005$) except for \texttt{sml\textsubscript{fine}}.
However, \texttt{sml\textsubscript{fine}} is not significantly better than GePpeTto nor the \texttt{sml\textsubscript{rle}} and \texttt{med\textsubscript{rle}} models ($p > 0.05$).
For Dutch we use a linear mixed-effects model with fixed effects for author and sentence length (in number of words) and random intercepts for participants and sentences.
Sentence length has a significant negative effect ($p < 0.001$).
All artificial authors score significantly lower than gold ($p < 0.001$).
As for Italian, the \texttt{sml\textsubscript{fine}} model appears the best model, but in this case the judgement scores are significantly higher than for the other two models ($p < 0.001$).
The \texttt{sml\textsubscript{rle}} and \texttt{med\textsubscript{rle}} models do not differ significantly from each other.
The human judgements show consistent results across the languages, but the differences between models are stronger for Dutch than for Italian.
This seems to mirror the smaller perplexity differences for Italian than for Dutch.
Whether demographic or cultural differences also play a role in this difference will need to be further investigated.
In sum, the English GPT-2 models with only relearned lexical embeddings are recognisable as artificial, but this problem is attenuated after additional finetuning.
The \texttt{sml} model with additional finetuning performs at least as well as the GePpeTto model that was trained from scratch.
\section{Conclusion}
We have described methods to adapt GPT-2 to genetically related languages and to increase model complexity.
Retraining lexical embeddings forces the model to learn representations that are aligned between English and the target language.
GPT-2 is able to generate realistic text in another language, but human judgements reveal that additional finetuning of the full model is needed to generate realistic sentences more consistently.
Relearned lexical embeddings show signs of syntactic adaptation to the new language, though not fully consistently.
Dutch is genetically closer to English than Italian is, but our results do not prove that this method works better for Dutch.
Future research on the relation between degrees and types of language similarity and transferability of models will enable more effective monolingual transfer, and possibly training better multilingual models by selecting optimal clusters of languages.
This kind of work offers a privileged perspective into the information learned by generative language models and provides empirical ground for linguistic typology research (e.g., uncovering which linguistic aspects are more universal, and which more language-specific).
Relearning lexical embeddings using our method can still be considered an expensive solution, but training costs decrease when a smaller embedding space is scaled up to the embedding space of a larger model.
In other words, approximating a good initialisation of the embedding weights decreases training time.
This method also enables adaptation of (extra) large GPT-2 models to other languages.
If you can borrow pre-trained weights, why retrain models from scratch?
In the right column we summarise the steps for the shortest path to train your own GPT-2 for another language.
\section*{Acknowledgments}
We gratefully acknowledge the support of the Dutch Research Council (NWO Aspasia grant for M.~Nissim) and the financial support of the Center for Groningen Language and Culture (CGTC).
Additionally, we would like to thank Lorenzo De Mattei for sharing the Italian data with us.
We would also like to thank the Center for Information Technology of the University of Groningen for providing access to the Peregrine high performance computing cluster.
Finally, we thank the anonymous reviewers for their insightful feedback.
Any mistakes remain our own.
\section*{Impact Statement}
This work aims to minimize the environmental impact of training large neural language models by adapting existing models and by using smart initialisation of model weights.
However, experiments in this paper still require the use of GPUs for extended periods of time which has environmental impact.
Our final models are published, and any model that automatically generates natural text could unfortunately be used maliciously.
While we cannot fully prevent such uses once our models are made public, we do hope that writing about risks explicitly and also raising awareness of this possibility in the general public are ways to contain the effects of potential harmful uses.
We are open to any discussion and suggestions to minimise such risks.
\vspace*{1.2cm}
\recipe{
This paper describes several steps that are taken to transfer GPT-2 to a different language.
The recommended shortest path to replicate this for another language is to follow these steps:
\paragraph{Vocabulary} Create a new BPE vocabulary for your target language. The optimal vocabulary size depends on your language, so select it by stepwise increments until the number of tokens per sentence stops decreasing substantially.
\paragraph{Start small} Re-initialise the lexical embeddings of the small GPT-2 model for your vocabulary size and only retrain the lexical embeddings.
\paragraph{Increase model size} If you want to train a larger model size, fit a least-squares regression model to the English lexical embeddings in the small and larger model size and use the fitted model to transform your newly trained lexical embeddings to a larger model size.
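The least-squares transformation step can be sketched with NumPy. All names here are illustrative, and details such as the inclusion of a bias term are our assumptions, not necessarily the exact fitting procedure used in the paper.

```python
import numpy as np

def fit_embedding_transform(small_en, large_en):
    """Fit a least-squares affine map from the small model's English
    lexical embeddings to the large model's English embeddings.

    small_en: (vocab, d_small) array, large_en: (vocab, d_large) array.
    """
    # Augment with a bias column so the map is affine: large ~ [small, 1] @ W.
    ones = np.ones((small_en.shape[0], 1))
    A = np.hstack([small_en, ones])
    W, *_ = np.linalg.lstsq(A, large_en, rcond=None)
    return W

def transform_embeddings(small_target, W):
    """Map newly trained target-language embeddings into the large model's
    embedding space, to serve as initialisation before further training."""
    ones = np.ones((small_target.shape[0], 1))
    return np.hstack([small_target, ones]) @ W
```

The same fitted map $W$ is then applied row-wise to the target-language embeddings trained in the small model.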
\paragraph{Optimise your embeddings} Do additional lexical embedding training in the target model size. Transformed embeddings are a good initialisation, but
they are not perfect.
\paragraph{Finetune} Unfreeze the full target model and do some finetuning to make sure that syntax differences are learned by the new model. Use a low learning rate like $10^{-5}$.}{\textbf{\large Create your own GPT-2 model}}
\section{Introduction}
Social media are sometimes used to disseminate hateful messages.
In Europe, the current surge in hate speech has been linked to the ongoing refugee crisis.
Lawmakers and social media sites are increasingly aware of the problem and are developing approaches to deal with it, for example promising to remove illegal messages within 24 hours after they are reported \cite{titcomb2016}.
This raises the question of how hate speech can be detected automatically.
Such an automatic detection method could be used to scan the large amount of text generated on the internet for hateful content and report it to the relevant authorities.
It would also make it easier for researchers to examine the diffusion of hateful content through social media on a large scale.
From a natural language processing perspective, hate speech detection can be considered a classification task: given an utterance, determine whether or not it contains hate speech.
Training a classifier requires a large amount of data that is unambiguously hate speech.
This data is typically obtained by manually annotating a set of texts based on whether a certain element contains hate speech.
The reliability of the human annotations is essential, both to ensure that the algorithm can accurately learn the characteristics of hate speech, and as an upper bound on the expected performance \cite{warner2012,waseem2016hateful}.
As a preliminary step, six annotators rated 469 tweets. We found that agreement was very low (see Section 3).
We then carried out group discussions to find possible reasons. They revealed that there is considerable ambiguity in existing definitions.
A given statement may be considered hate speech or not depending on someone's cultural background and personal sensibilities.
The wording of the question may also play a role.
We decided to investigate the issue of reliability further by conducting a more comprehensive study across a large number of annotators, which we present in this paper.
Our contribution in this paper is threefold:
\begin{itemize}
\setlength\itemsep{0em}
\item To the best of our knowledge, this paper presents the first attempt at compiling a German hate speech corpus for the refugee crisis.\footnote{Available at \url{https://github.com/UCSM-DUE/IWG_hatespeech_public}}
\item We provide an estimate of the reliability of hate speech annotations.
\item We investigate how the reliability of the annotations is affected by the exact question asked.
\end{itemize}
\section{Hate Speech}
For the purpose of building a classifier, \newcite{warner2012} define hate speech as ``abusive speech targeting specific group characteristics, such as ethnic origin, religion, gender, or sexual orientation''.
More recent approaches rely on lists of guidelines such as a tweet being hate speech if it ``uses a sexist or racial slur'' \cite{waseem2016hateful}.
These approaches are similar in that they leave plenty of room for personal interpretation, since there may be differences in what is considered offensive.
For instance, while the utterance \textit{``the refugees will live off our money''} is clearly generalising and maybe unfair, it is unclear if this is already hate speech.
More precise definitions from law are specific to certain jurisdictions and therefore do not capture all forms of offensive, hateful speech, see e.g. \newcite{matsuda1993}.
\label{twitterdef}
In practice, social media services are using their own definitions which have been subject to adjustments over the years \cite{jeong2016}.
As of June 2016, Twitter bans \emph{hateful conduct}\footnote{``You may not promote violence against or directly attack or threaten other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or disease.
We also do not allow accounts whose primary purpose is inciting harm towards others on the basis of these categories.'', The Twitter Rules}.
With the rise in popularity of social media, the presence of hate speech has grown on the internet.
Posting a tweet takes little more than a working internet connection but may be seen by users all over the world.
Along with the presence of hate speech, its real-life consequences are also growing.
It can be a precursor and incentive for hate crimes, and it can be severe enough to constitute a health issue \cite{burnap2014hate}.
It is also known that hate speech does not only mirror existing opinions in the reader but can also induce new negative feelings towards its targets \cite{martin2013}.
Hate speech has recently gained interest both as a research topic -- e.g. \cite{nemanja2014,burnap2014hate,silva2016} -- and as a problem to be dealt with in politics, for example through the \emph{No Hate Speech Movement} of the Council of Europe.
The current refugee crisis has made it evident that governments, organisations and the public share an interest in controlling hate speech in social media.
However, there seems to be little consensus on what hate speech actually is.
\section{Compiling A Hate Speech Corpus}
As previously mentioned, there is no German hate speech corpus available for our needs, especially not for the very recent topic of the refugee crisis in Europe.
We therefore had to compile our own corpus.
We used Twitter as a source as it offers recent comments on current events.
In our study we only considered the textual content of tweets that contain certain keywords, ignoring those that contain pictures or links.
This section provides a detailed description of the approach we used to select the tweets and subsequently annotate them.
To find a large amount of hate speech on the refugee crisis, we used 10 hashtags\footnote{\emph{\#Pack}, \emph{\#Aslyanten}, \emph{\#WehrDich}, \emph{\#Krimmigranten}, \emph{\#Rapefugees}, \emph{\#Islamfaschisten}, \emph{\#RefugeesNotWelcome}, \emph{\#Islamisierung}, \emph{\#AsylantenInvasion}, \emph{\#Scharia}} that can be used in an insulting or offensive way.
Using these hashtags we gathered 13\,766 tweets in total, roughly dating from February to March 2016.
However, these tweets contained a lot of non-textual content which we filtered out automatically by removing tweets consisting solely of links or images.
We also only considered original tweets, as retweets or replies to other tweets might only be clearly understandable when reading both tweets together.
In addition, we removed duplicates and near-duplicates by discarding tweets that had a normalised \textit{Levenshtein} edit distance smaller than .85 to an aforementioned tweet.
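This near-duplicate filter can be sketched as follows. The implementation is a plain-Python illustration of the stated criterion; the code actually used in the study is not specified in the paper.

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def normalised_distance(a, b):
    """Edit distance scaled to [0, 1] by the longer string's length."""
    if not a and not b:
        return 0.0
    return levenshtein(a, b) / max(len(a), len(b))

def filter_near_duplicates(tweets, threshold=0.85):
    """Keep a tweet only if its normalised distance to every previously
    kept tweet is at least `threshold`, i.e. it is sufficiently different."""
    kept = []
    for t in tweets:
        if all(normalised_distance(t, k) >= threshold for k in kept):
            kept.append(t)
    return kept
```

Note that the result depends on the order in which tweets are processed, since each candidate is compared against the tweets kept so far.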
A first inspection of the remaining tweets indicated that not all search terms were equally suited for our needs.
The search term \emph{\#Pack} (vermin or lowlife) found a potentially large amount of hate speech not directly linked to the refugee crisis. It was therefore discarded.
As a last step, the remaining tweets were manually read to eliminate those which were difficult to understand or incomprehensible.
After these filtering steps, our corpus consists of 541 tweets, none of which are duplicates, contain links or pictures, or are retweets or replies.
As a first measurement of the frequency of hate speech in our corpus, we personally annotated them based on our previous expertise.
The 541 tweets were split into six parts and each part was annotated by two out of six annotators in order to determine if hate speech was present or not.
The annotators were rotated so that each pair of annotators only evaluated one part.
Additionally the offensiveness of a tweet was rated on a 6-point Likert scale, the same scale used later in the study.
Even among researchers familiar with the definitions outlined above, there was still a low level of agreement (Krippendorff's $\alpha = .38$).
This supports our claim that a clearer definition is necessary in order to be able to train a reliable classifier.
The low reliability could of course be explained by varying personal attitudes or backgrounds, but clearly needs more consideration.
\section{Methods}
In order to assess the reliability of the hate speech definitions on social media more comprehensively, we developed two online surveys in a between-subjects design. They were completed by 56 participants in total (see Table \ref{tab:summary}).
The main goal was to examine the extent to which non-experts agree upon their understanding of hate speech given a diversity of social media content.
We used the Twitter definition of \textit{hateful conduct} in the first survey.
This definition was presented at the beginning, and again above every tweet.
The second survey did not contain any definition.
Participants were randomly assigned one of the two surveys.
The surveys consisted of 20 tweets presented in a random order. For each tweet, each participant was asked three questions.
Depending on the survey, participants were asked \textbf{(1)} to answer (yes/no) if they considered the tweet hate speech, either based on the definition or based on their personal opinion.
Afterwards they were asked \textbf{(2)} to answer (yes/no) if the tweet should be banned from Twitter.
Participants were finally asked \textbf{(3)} to answer how offensive they thought the tweet was on a 6-point Likert scale from 1 (Not offensive at all) to 6 (Very offensive). If they answered 4 or higher, the participants had the option to state which particular words they found offensive.
After the annotation of the 20 tweets, participants were asked to voluntarily answer an open question regarding the definition of hate speech.
In the survey with the definition, they were asked if the definition of Twitter was sufficient.
In the survey without the definition, the participants were asked to suggest a definition themselves.
Finally, sociodemographic data were collected, including age, gender and more specific information regarding the participant's political orientation, migration background, and personal position regarding the refugee situation in Europe.
The surveys were approved by the ethical committee of the Department of Computer Science and Applied Cognitive Science of the Faculty of Engineering at the University of Duisburg-Essen.
\section{Preliminary Results and Discussion}
Since the surveys were completed by 56 participants, they resulted in 1120 annotations.
Table \ref{tab:summary} shows some summary statistics.
\begin{table}[h]
\centering
\setlength{\tabcolsep}{0.4em}
\begin{tabular}{lrrrr}
\hline
& Def. & No def. & p & r\\
\hline
Participants & 25 & 31 & \\
Age (mean) & 33.3 & 30.5 & \\
Gender (\% female) & 43.5 & 58.6 & \\
\hline
Hate Speech (\% yes) & 32.6 & 40.3 & .26 & .15 \\
Ban (\% yes) & 32.6 & 17.6 & .01 & -.32 \\
Offensive (mean) & 3.49 & 3.42 & .55 & -.08 \\
\hline
\end{tabular}
\caption{Summary statistics with p values and effect size estimates from WMW tests. Not all participants chose to report their age or gender.}
\label{tab:summary}
\end{table}
To assess whether the definition had any effect, we calculated, for each participant, the percentage of tweets they considered hate speech or suggested to ban and their mean offensiveness rating. This allowed us to compare the two samples for each of the three questions. Preliminary Shapiro-Wilk tests indicated that some of the data were not normally distributed. We therefore used the Wilcoxon-Mann-Whitney (WMW) test to compare the three pairs of series. The results are reported in Table \ref{tab:summary}.
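For reference, the WMW $U$ statistic and the rank-biserial correlation, one common effect-size estimate $r$ for this test (we assume this is the estimate reported in Table~\ref{tab:summary}), can be computed in a few lines of plain Python. Computing the exact $p$-value additionally requires the null distribution of $U$, which is omitted here.

```python
def mann_whitney_u(xs, ys):
    """U statistic for sample xs against ys: the number of pairs
    where x > y, counting ties as one half."""
    u = 0.0
    for x in xs:
        for y in ys:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

def rank_biserial(xs, ys):
    """Rank-biserial correlation, an effect size for the WMW test:
    +1 if every x exceeds every y, -1 in the opposite case."""
    u = mann_whitney_u(xs, ys)
    return 2.0 * u / (len(xs) * len(ys)) - 1.0
```

In practice a statistics package (e.g. SciPy's `mannwhitneyu`) would be used, but the definition above makes the reported $r$ values interpretable.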
Participants who were shown the definition were more likely to suggest to ban the tweet.
In fact, participants in group one very rarely gave different answers to questions one and two (18 of 500 instances or 3.6\%).
This suggests that participants in that group aligned their own opinion with the definition.
We chose Krippendorff's $\alpha$ to assess reliability, a measure from content analysis, where human coders are required to be interchangeable. It therefore measures agreement instead of association, which leaves no room for the individual predilections of coders. It can be applied to any number of coders and to interval as well as nominal data \cite{krippendorff2004}.
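For nominal data, the variant relevant to the yes/no questions (the offensiveness ratings would use the interval metric instead), $\alpha$ can be computed from the coincidence matrix as sketched below; units with fewer than two values are ignored, as in the standard treatment of missing data.

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(units):
    """Krippendorff's alpha for nominal data.

    `units` is a list of lists: the values assigned to each unit
    (e.g. tweet) by the coders who rated it.
    """
    units = [u for u in units if len(u) >= 2]
    # Coincidence counts: each ordered pair of values within a unit
    # contributes 1 / (m_u - 1), with m_u the unit's number of values.
    o = Counter()
    n_c = Counter()
    for u in units:
        m = len(u)
        for a, b in permutations(u, 2):
            o[(a, b)] += 1.0 / (m - 1)
        for v in u:
            n_c[v] += 1
    n = sum(n_c.values())
    # Observed and expected disagreement under the nominal (0/1) metric.
    d_o = sum(count for (a, b), count in o.items() if a != b)
    d_e = sum(n_c[a] * n_c[b] for a in n_c for b in n_c if a != b) / (n - 1)
    if d_e == 0:
        return 1.0
    return 1.0 - d_o / d_e
```

Perfect agreement gives $\alpha = 1$, while agreement at chance level gives $\alpha = 0$.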
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{alphas.pdf}
\caption{Reliability (Krippendorff's $\alpha$) for the different groups and questions}
\label{fig:alphas}
\end{figure}
This allowed us to compare agreement between both groups for all three questions.
Figure \ref{fig:alphas} visualises the results.
Overall, agreement was very low, ranging from $\alpha = .18$ to $.29$.
In contrast, for the purpose of content analysis, Krippendorff recommends a minimum of $\alpha = .80$, or a minimum of $.66$ for applications where some uncertainty is unproblematic \cite{krippendorff2004}.
Reliability did not consistently increase when participants were shown a definition.
To measure the extent to which the annotations using the Twitter definition (question one in group one) were in accordance with participants' opinions (question one in group two), we calculated, for each tweet, the percentage of participants in each group who considered it hate speech, and then calculated Pearson's correlation coefficient.
The two series correlate strongly ($r = .895, p < .0001$), indicating that they measure the same underlying construct.
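This per-tweet agreement analysis is straightforward to reproduce; a minimal Pearson $r$ implementation in plain Python (any statistics package provides the same) is:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

Here the two input sequences would be the per-tweet percentages of hate speech judgements in the two survey groups.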
\section{Conclusion and Future Work}
This paper describes the creation of our hate speech corpus and offers first insights into the low agreement among users when it comes to identifying hateful messages.
Our results imply that hate speech is a vague concept that requires significantly better definitions and guidelines in order to be annotated reliably.
Based on the present findings, we are planning to develop a new coding scheme which includes clear-cut criteria that let people distinguish hate speech from other content.
Researchers who are building a hate speech detection system might want to collect multiple labels for each tweet and average the results.
Of course this approach does not make the original data any more reliable \cite{krippendorff2004}. Yet, collecting the opinions of more users gives a more detailed picture of objective (or intersubjective) hatefulness.
For the same reason, researchers might want to consider hate speech detection a regression problem, predicting, for example, the degree of hatefulness of a message, instead of a binary yes-or-no classification task.
In the future, finding the characteristics that make users consider content hateful will be useful for building a model that automatically detects hate speech and users who spread hateful content, and for determining what makes users disseminate hateful content.
\section*{Acknowledgments}
This work was supported by the Deutsche Forschungsgemeinschaft (DFG) under grant No. GRK 2167, Research Training Group ''User-Centred Social Media''.
\bibliographystyle{konvens2016}
\section{Introduction}
First order PDE's of the form
\begin{equation*}
(u_i)_t = \sum_{j} A^j_i(u) (u_j)_x\qquad i,j=1,...,n
\end{equation*}
are called hydrodynamic or dispersionless systems in
(1+1) dimensions. An important subclass consists of those systems which have a multi-Hamiltonian structure and an infinite hierarchy of symmetries and conservation laws. Differential Poisson structures
for hydrodynamic systems were introduced for the first time by
Dubrovin and Novikov \cite{Du1} in the form \eqref{novdub} with
$c=0$, where $g^{ij}$ is a contravariant nondegenerate flat metric
and $\Gamma _{k}^{ij}$ are related coefficients of the
contravariant Levi-Civita connection. Then, they were generalized
by Mokhov and Ferapontov \cite{Fe} to the nonlocal form
\begin{equation}\label{novdub}
\pi _{ij}=g^{ij}(u)\partial _{x}-\sum_{k}\Gamma _{k}^{ij}(u)
(u_k)_x + c (u_i)_x \Dx{-1} (u_j)_x
\end{equation}
in the case when $g^{ij}$ is of constant curvature $c$. The
natural geometric setting of related bi-Hamiltonian structures
(Poisson pencils) is the theory of Frobenious manifolds based on
the geometry of pencils of contravariant metrics \cite{Du2}.
Nevertheless, the condition of nondegeneracy of $g^{ij}$ for the
above Poisson tensors is not necessary. The degenerate
hydrodynamic Poisson tensors were considered by Grinberg \cite{G}
and Dorfmann \cite{D}.
In paper \cite{K} Krichever introduced integrable dispersionless
systems with rational Lax functions on $\mathbb{C} P^1$ of the form
\begin{equation}\label{kri}
L = p^N + \sum_{k=0}^{N-1}a_k p^k +
\sum_{l=1}^{\alpha}\sum_{i=1}^{i_l}\frac{a_{l,i}}{(p -
p_l)^i}\qquad N\geqslant 0,\quad i_l\geqslant 0,
\end{equation}
where $a$'s and the poles $p_l$ are smooth dynamical fields. Then,
around all poles of \eqref{kri}, i.e. $\infty$ and $p_l$, the
powers of Laurent expansions of $L$ generate infinite Lax
hierarchies of commuting vector fields with Lie bracket being the
canonical Poisson bracket \eqref{pb} with $r=0$. Moreover, near
these poles one can construct infinite hierarchies of constants of
motion. Rational Lax functions \eqref{kri} with related Lax
hierarchies have been introduced in \cite{K} in the context of
Whitham hierarchies and topological field theories. From this
point of view they have been considered also in \cite{AK} and
\cite{AK2}. The bi-Hamiltonian structures of Benney- and Toda-like Lax hierarchies, but with the Poisson bracket \eqref{pb} with $r=1$ and rational Lax functions, were developed in \cite{FS}. Their
various reductions were also studied. They also have been
investigated in the context of degenerate Frobenius manifolds
\cite{S}. In \cite{Z}, it was shown how to construct recursion
operators for some classes of such rational Lax representations.
In the theory of nonlinear evolutionary PDE's (dynamical systems)
one of the most important problems is a systematic construction of
integrable systems. By integrable systems we understand those
which have infinite hierarchy of commuting symmetries. It is well
known that a very powerful tool, called the classical $R$-matrix
formalism, can be used for systematic construction of
(1+1)-dimensional field and lattice integrable dispersive systems
(soliton systems) \cite{STS}-\cite{BS} as well as dispersionless
integrable field systems \cite{Li}-\cite{BS2}. Moreover, the
$R$-matrix approach allows a construction of Hamiltonian
structures and conserved quantities.
In this paper a systematic classical $R$-matrix approach to (1+1)-dimensional integrable dispersionless multi-Hamiltonian systems with meromorphic Lax hierarchies is presented. Within this formalism we generalize the results of Krichever to a wider set of integrable hierarchies with rational Lax representations and systematically develop their multi-Hamiltonian
structures. Section 2 briefly presents a number of basic facts and
definitions concerning the formalism of $R$-matrices on Poisson
algebras. In section 3 we define Poisson algebras of meromorphic
functions and construct $R$-matrices. We study multi-Hamiltonian
structures and show the main theorem, that Poisson tensors
constructed for fixed Poisson algebra at different points of
Laurent expansions of $L$ are equal and that related hierarchies
mutually commute. In section 4 we investigate appropriate forms of
meromorphic Lax functions, with finite number of dynamical fields,
which permit construction of integrable dispersionless systems and
illustrate results by a large number of examples.
\section{Classical $R$-matrix theory on Poisson algebras}
The crucial point of the formalism is the observation that
integrable dynamics from some functions space can be represented
by integrable dynamics from an appropriate Lie algebra in the form
of Lax equation
\begin{equation}\label{laxdyn}
L_t = {\rm ad}_A^* L = \brac{A,L},
\end{equation}
i.e. a coadjoint action of some Lie algebra $\mathfrak{g}$ on its dual
$\mathfrak{g}^*$, with the Lax operators $L$ taking values from this Lie
algebra $\mathfrak{g}^* \cong \mathfrak{g}$, where $[\cdot, \cdot]$ is an
appropriate Lie bracket. From \eqref{laxdyn} it is clear that we
confine to such algebras $\mathfrak{g}$ for which its dual $\mathfrak{g}^*$ can be
identified with $\mathfrak{g}$ through the duality map $\langle
\cdot,\cdot \rangle: \mathfrak{g}^* \times \mathfrak{g} \rightarrow \bb{R}$. So, we
assume the existence of a scalar product $(\cdot, \cdot)$ on
$\mathfrak{g}$ which is symmetric, non-degenerate and ${\rm ad}$-invariant:
$({\rm ad}_a b,c)_\mathfrak{g} + (b,{\rm ad}_a c)_\mathfrak{g} = 0$. This abstract
representation \eqref{laxdyn} of integrable systems is referred to
as the Lax dynamics. Obviously, there is a one-to-one correspondence between a given Lax dynamics and the original dynamics.
On the space of smooth functions on the dual algebra $\mathfrak{g}^*$
there exists a natural Lie-Poisson bracket
\begin{equation}\label{liepo}
\pobr{H,F}(L):=\langle L, \brac{dF,dH} \rangle \qquad L\in \mathfrak{g}^*
\quad H, F\in \mathcal{C}^\infty \bra{\mathfrak{g}^*},
\end{equation}
where $dF$, $dH$ are differentials belonging to $\mathfrak{g}$ which can
be calculated from
\begin{equation}\label{grad}
\Diff{t}{F(L+tL')}{t=0} = \langle L',dF(L) \rangle,\qquad L,L'\in
\mathfrak{g}^*.
\end{equation}
A linear map $R:\mathfrak{g} \rightarrow \mathfrak{g}$, such that the bracket
\begin{equation}\label{rbra}
\brac{a,b}_R := \brac{R a, b} + \brac{a,R b}
\end{equation}
is a second Lie product on $\mathfrak{g}$ is called the classical
$R$-matrix. We will additionally assume that $R$-matrices commute
with derivatives with respect to evolution parameters, i.e.
\begin{equation}\label{assum}
\bra{R L}_t = R L_t .
\end{equation}
This property is equivalent to the assumption that $R$ commutes
with differentials of smooth maps from $\mathfrak{g}$ to $\mathfrak{g}$. This
property is used in the proof of Theorem~4.2 in \cite{Li},
although not explicitly stressed there. The equality \eqref{assum}
will be used in subsection 3.5 to show a commutation between
particular Lax hierarchies.
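For completeness, we recall a standard sufficient condition, not stated above but classical in the $R$-matrix literature, for the bracket \eqref{rbra} to satisfy the Jacobi identity: the modified classical Yang--Baxter equation

```latex
\begin{equation*}
\brac{Ra,Rb} - R\bra{\brac{a,b}_R} = -\alpha\brac{a,b},
\qquad \alpha\in\bb{R},\quad a,b\in\mathfrak{g}.
\end{equation*}
```

In particular, when $\mathfrak{g}$ decomposes into a direct sum of Lie subalgebras, $\mathfrak{g}=\mathfrak{g}_+\oplus\mathfrak{g}_-$, the operator $R=\frac{1}{2}(P_+-P_-)$, where $P_\pm$ are the projections onto $\mathfrak{g}_\pm$, solves this equation with $\alpha=\frac{1}{4}$.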
\begin{definition}
Let $\mathcal{A}$ be a commutative, associative algebra with unit. If
there is a Lie bracket on $\mathcal{A}$ such that for each element $a\in
\mathcal{A}$, the operator ${\rm ad}_a:b\mapsto \{a,b\}$ is a derivation of the
multiplication, i.e. $\{a,bc\} = \{a,b\}c+b\{a,c\}$, then
$(\mathcal{A},\{\cdot,\cdot\})$ is called a Poisson algebra and bracket
$\pobr{\cdot,\cdot}$ is a Poisson bracket.
\end{definition}
Thus, Poisson algebras are Lie algebras with additional
structure.
\begin{theorem} \cite{Li}
Let $\mathcal{A}$ be a Poisson algebra with Poisson bracket
$\{\cdot,\cdot\}$ and non-degenerate ${\rm ad}$-invariant scalar
product $(\cdot,\cdot)$ with respect to which the operation of
multiplication is symmetric, i.e. $(ab,c)=(a,bc)$, $\forall
a,b,c\in \mathcal{A}$. Assume $R$ is a classical $R$-matrix such that
\eqref{assum} holds. Then, for each integer $n\geqslant 0$, the formula
\begin{equation}\label{pobr}
\pobr{H,F}_n = \bra{L, \pobr{R(L^ndF),dH}+\pobr{dF,R(L^ndH)}}
\end{equation}
where $H,F$ are smooth functions on $\mathcal{A}$, defines a Poisson
structure on $\mathcal{A}$. Moreover, all $\{\cdot,\cdot\}_n$ are
compatible.
\end{theorem}
The related Poisson bivectors $\pi_n$, such that $\{H,F\}_n =
(dF,\pi_ndH)$ are given by the following Poisson maps
\begin{equation}\label{pot}
\pi_n :dH\mapsto \pobr{R(L^ndH),L} + L^n R^*\bra{\pobr{dH,L}},\qquad
n\geqslant 0
\end{equation}
where the adjoint of $R$ is defined by the relation
$(R^*a,b)=(a,Rb)$. Notice that the bracket \eqref{pobr} with $n=0$
is just a Lie-Poisson bracket with respect to the Lie bracket
\eqref{rbra}. Referring to the dependence on $L$, Poisson maps
\eqref{pot} are called linear for $n=0$, quadratic for $n=1$ and
cubic for $n=2$, respectively.
We will look for a natural set of functions in involution w.r.t.
the Poisson brackets \eqref{pobr}. Such functions are Casimir
functions of the natural Lie-Poisson bracket \eqref{liepo}. A
sufficient condition for a smooth function $F(L)$ to be a Casimir
function is that its differential $dF\in \ker {\rm ad}_L$, i.e.
$[dF,L]=0$. Hence, the following lemma holds.
\begin{lemma}\cite{Li}\label{lemma}
Smooth functions on $\mathcal{A}$ which are Casimir functions of the
natural Lie-Poisson bracket \eqref{liepo} commute with respect to
$\{\cdot,\cdot\}_n$. The Hamiltonian system generated by a Casimir
function $C(L)$ and the Poisson structure $\{\cdot,\cdot\}_n$ is
given by the Lax equation
\begin{equation}\label{laxhier}
L_t = \brac{R(L^ndC),L},\qquad L\in \mathcal{A}.
\end{equation}
\end{lemma}
Let us assume that an appropriate scalar product on Poisson
algebra $\mathcal{A}$ is given by the trace form ${\rm Tr}: \mathcal{A} \rightarrow
\bb{R}$, such that
\begin{equation*}
\bra{a,b} = {\rm Tr} \bra{ab}.
\end{equation*}
As we have assumed a nondegenerate trace form ${\rm Tr}$ on $\mathcal{A}$, we
will consider the most natural Casimir functionals given by the
trace of powers of $L$, i.e.
\begin{equation}\label{casq}
dC_q(L) = L^q \Longleftrightarrow
\begin{cases}
C_q(L) = \frac{1}{q+1} {\rm Tr} \bra{L^{q+1}} & \text{for } q\neq -1\\
C_{-1}(L) = {\rm Tr} \bra{\ln L} & \text{for } q=-1
\end{cases}
\end{equation}
for which the related gradients follow from \eqref{grad}. Then,
taking these $C_q(L)$ as Hamiltonian functions, one finds a
hierarchy of evolution equations which are multi-Hamiltonian
dynamical systems
\begin{equation}\label{eveq}
L_{t_q} = \pobr{R(dC_q),L} = \pi_{0}(dC_q) = \pi_{1}(dC_{q-1}) =
... = \pi_{l}(dC_{q-l}) = ...\ .
\end{equation}
For any $R$-matrix, any two evolution equations in the hierarchy
\eqref{eveq} commute due to the involutivity of the Casimir
functions $C_q$. Each equation admits all the Casimir functions as
a set of conserved quantities in involution. In this sense we will
regard \eqref{eveq} as a hierarchy of integrable evolution
equations.
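The key relation $dC_q(L)=L^q$ can be verified numerically in the finite-dimensional matrix analogue of a Poisson algebra with trace form $(a,b)={\rm Tr}(ab)$; the following sketch (an illustration, not part of the original exposition) checks the directional-derivative definition \eqref{grad} against ${\rm Tr}(L'L^q)$:

```python
import numpy as np

# Matrix analogue sketch: C_q(L) = Tr(L^{q+1})/(q+1) should have
# differential dC_q(L) = L^q with respect to the pairing <A,B> = Tr(AB).
rng = np.random.default_rng(0)
n, q, eps = 4, 3, 1e-6
L = rng.standard_normal((n, n))
Lp = rng.standard_normal((n, n))          # the direction L'

def C(M, q):
    # Casimir C_q(M) = Tr(M^{q+1}) / (q+1)
    return np.trace(np.linalg.matrix_power(M, q + 1)) / (q + 1)

# directional derivative d/dt C_q(L + t L') at t = 0, by central differences
numeric = (C(L + eps * Lp, q) - C(L - eps * Lp, q)) / (2 * eps)
# the pairing <L', dC_q(L)> = Tr(L' L^q)
pairing = np.trace(Lp @ np.linalg.matrix_power(L, q))
assert abs(numeric - pairing) < 1e-5
```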
To construct the simplest $R$-structure let us assume that the
Poisson algebra $\mathcal{A}$ can be split into a direct sum of Lie
subalgebras $\mathcal{A}_+$ and $\mathcal{A}_-$, i.e. $\mathcal{A} =\mathcal{A}_+ \oplus
\mathcal{A}_-$, $[\mathcal{A}_\pm,\mathcal{A}_\pm]\subset \mathcal{A}_\pm$. Denoting the
projections onto these subalgebras by $P_\pm$, the classical
$R$-matrix is well defined as
\begin{equation}\label{rp}
R = \tfrac{1}{2} (P_+ - P_-) = P_+ - \tfrac{1}{2} = \tfrac{1}{2} -
P_-.
\end{equation}
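A minimal finite-dimensional sketch of this splitting, assuming the matrix algebra $gl(3)$ with its upper-triangular and strictly lower-triangular subalgebras standing in for $\mathcal{A}_\pm$, checks numerically that the modified bracket \eqref{rbra} built from \eqref{rp} satisfies the Jacobi identity:

```python
import numpy as np

# For a splitting into two subalgebras, R = P_+ - 1/2 gives the bracket
# [a,b]_R = [a_+, b_+] - [a_-, b_-], which is again a Lie bracket.
rng = np.random.default_rng(1)

def R(M):
    # classical R-matrix: P_+ (upper-triangular part) minus 1/2 * identity map
    return np.triu(M) - M / 2

def br(A, B):
    # commutator [A, B]
    return A @ B - B @ A

def brR(A, B):
    # modified bracket [A, B]_R = [RA, B] + [A, RB]
    return br(R(A), B) + br(A, R(B))

A, B, C = (rng.standard_normal((3, 3)) for _ in range(3))
jacobi = brR(A, brR(B, C)) + brR(B, brR(C, A)) + brR(C, brR(A, B))
assert np.allclose(jacobi, 0)
```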
Following the above scheme, we are able to construct in a
systematic way integrable multi-Hamiltonian dispersionless
systems, with an infinite hierarchy of involutive constants of motion
and an infinite hierarchy of related commuting symmetries, on
appropriate Poisson algebras. Finally, in the last step, we
reconstruct our multi-Hamiltonian hierarchies in the original
function space of related dispersionless systems.
\section{Lax hierarchies for dispersionless systems}
\subsection{Poisson algebras of meromorphic functions}
Let $\mathcal{F}$ be the algebra of meromorphic functions with a
finite number of poles, i.e. those analytic functions which have
no essential singularities, on the Riemann sphere $\mathbb{C} P^1$ (the
complex plane with the point at $\infty$ added). Let $p$ be a point in $\mathbb{C}
P^1$. Assume now that this algebra depends effectively on an
additional spatial variable $x\in \Omega$. Denote by $\mathcal{A}$ the
algebra of all smooth functions $f: \Omega \rightarrow
\mathcal{F}$, i.e. $\mathcal{A} = \mathcal{C}^\infty(\Omega,\mathcal{F})$. Let $\Omega
= \mathbb{S}^1$ if we assume these functions to be periodic in $x$, or
$\Omega = \mathbb{R}$ if these functions are supposed to belong to the
Schwartz space for a fixed parameter $p$. The Poisson bracket on
$\mathcal{A}$ can be introduced in infinitely many ways as
\begin{equation}\label{pb}
\pobr{f, g}_r := p^r \bra{\partial_pf\partial_xg-\partial_xf\partial_pg}\qquad r\in
\bb{Z}\qquad f,g\in \mathcal{A}.
\end{equation}
Then, fixing $r$, $\mathcal{A}$ is a Poisson algebra with the
appropriate bracket \eqref{pb}. The Poisson brackets \eqref{pb} are
generalizations of the canonical Poisson bracket ($r=0$) through the
addition of a $p^r$ factor.
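That each bracket \eqref{pb} is indeed a Poisson bracket (in two variables the Jacobi identity holds for any bivector) can be confirmed symbolically; a small sympy sketch:

```python
import sympy as sp

x, p = sp.symbols('x p')
f, g, h = (F(x, p) for F in sp.symbols('f g h', cls=sp.Function))

def pb(a, b, r):
    # the bracket {a, b}_r = p^r (a_p b_x - a_x b_p)
    return p**r * (sp.diff(a, p) * sp.diff(b, x) - sp.diff(a, x) * sp.diff(b, p))

for r in (-1, 0, 1, 2):
    # Jacobi identity for {.,.}_r
    jac = pb(f, pb(g, h, r), r) + pb(g, pb(h, f, r), r) + pb(h, pb(f, g, r), r)
    assert sp.simplify(jac) == 0
    # derivation (Leibniz) property making the bracket a Poisson bracket
    assert sp.simplify(pb(f, g * h, r) - pb(f, g, r) * h - g * pb(f, h, r)) == 0
```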
To construct classical $R$-matrices we have to decompose $\mathcal{A}$
into a direct sum of Lie subalgebras. This can be done by expanding
functions belonging to $\mathcal{A}$ in an appropriate annulus near a
given point $\lambda$. Three kinds of points on $\mathbb{C} P^1$ will be
important: the two fixed points $\infty$ and $0$, as well as points
given by smooth fields $v(x)$ from $\Omega$ to $\mathbb{C} P^1$,
parametrized by $x$.
\subsection{Classical $R$-matrices}
Once we have fixed the Poisson algebra, we are able to construct
$R$-matrices and the related Lax vector fields for which the algebra
$\mathcal{A}$ constitutes the phase space.
\paragraph{The expansion around $\infty$.}
First, let us consider the case of point at $\infty$. Then,
meromorphic functions from $\mathcal{A}$ expanded around $\infty$ are
given by Laurent series:
\begin{equation}\label{ai}
\mathcal{A}^\infty = \pobr{\sum^N_{i= -\infty} a_i(x)p^i },
\end{equation}
where $a_i(x)$ are dynamical fields. To construct $R$-matrices we
have to decompose $\mathcal{A}^\infty$ into Lie subalgebras. For a fixed
$r$ let $\mathcal{A}^\infty_{\geqslant k-r} = \{\sum_{i\geqslant k-r} a_i(x)p^i\}$
and $\mathcal{A}^\infty_{<k-r} = \{\sum_{i< k-r} a_i(x)p^i\}$. Let $a
p^m$ and $b p^n$ be elements from \eqref{ai} of order $m$ and $n$,
respectively. Poisson bracket \eqref{pb} between these elements
has the order $m+n+r-1$ as
\begin{equation*}
\pobr{ap^m, bp^n}_r = \bra{m ab_x-n a_xb}p^{m+n+r-1}.
\end{equation*}
Now, simple inspection shows that $\mathcal{A}^\infty_{\geqslant k-r}$ and
$\mathcal{A}^\infty_{< k-r}$ are Lie subalgebras in the following cases:
\begin{enumerate}
\item $r=0$, $k=0$;
\item $r\in \bb{Z}$, $k=1,2$;
\item $r=2$, $k=3$.
\end{enumerate}
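This case analysis can be spot-checked mechanically from the order formula above; the following illustrative script (not part of the original argument, and testing only a finite range of orders) confirms the three listed cases and rejects, for example, $k=0$, $r=1$:

```python
import itertools

# {a p^m, b p^n}_r = (m a b_x - n a_x b) p^{m+n+r-1}; the coefficient
# vanishes identically only when m = n = 0.
def bracket_order(m, n, r):
    return None if m == n == 0 else m + n + r - 1

def is_subalgebra_pair(k, r, span=6):
    # closure of A_{>= k-r}: bracket order must stay >= k-r,
    # closure of A_{< k-r}: bracket order must stay < k-r
    lo = k - r
    above = all((o := bracket_order(m, n, r)) is None or o >= lo
                for m, n in itertools.product(range(lo, lo + span), repeat=2))
    below = all((o := bracket_order(m, n, r)) is None or o < lo
                for m, n in itertools.product(range(lo - span, lo), repeat=2))
    return above and below

assert is_subalgebra_pair(0, 0)
assert all(is_subalgebra_pair(k, r) for k in (1, 2) for r in range(-3, 4))
assert is_subalgebra_pair(3, 2)
assert not is_subalgebra_pair(0, 1) and not is_subalgebra_pair(3, 1)
```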
So, fixing $r$ we fix the Lie algebra structure with $k$ numbering
the $R$-matrices \eqref{rp} given in the following form
\begin{equation}\label{ri}
R = P^\infty_{\geqslant k-r} - \tfrac{1}{2}
\end{equation}
where $P^\infty_{\geqslant k-r}$ is an appropriate projection onto Lie
subalgebra of functions expanded in the Laurent series \eqref{ai}. So,
the Lax hierarchy \eqref{laxhier} assigned by \eqref{ri} for a given
Lax function $L\in \mathcal{A}^{\infty}$ is
\begin{equation}\label{laxi}
L_{t_q} = \pobr{\bra{L^{\tfrac{q}{N}}}^\infty_{\geqslant k-r},L}_r\qquad q\in \bb{Z}_+,
\end{equation}
where $(\cdot)^\infty_{\geqslant k-r}\equiv P^\infty_{\geqslant k-r}(\cdot)$
and $N\neq 0$ is the highest order of $L$ expanded in Laurent
series at $\infty$. So, if $L$ has a pole at $\infty$ then $N>0$ and
the powers are positive, while if $L^{-1}$ has a pole at $\infty$ then
$N<0$ and the powers are negative.
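The fractional powers $L^{q/N}$ entering \eqref{laxi} are computed as Laurent series at $\infty$. A hedged sympy sketch for the sample Lax function $L = p^2 + u$ (chosen here only for illustration; it anticipates the polynomial Lax functions of the final section):

```python
import sympy as sp

p, u, z = sp.symbols('p u z')
L = p**2 + u          # example Lax function with N = 2

# L^{1/2} = p (1 + u p^{-2})^{1/2}, expanded in the small quantity z = u p^{-2}
Lhalf = sp.expand(p * sp.series(sp.sqrt(1 + z), z, 0, 4).removeO().subs(z, u / p**2))
assert sp.expand(Lhalf - (p + u/(2*p) - u**2/(8*p**3) + u**3/(16*p**5))) == 0

# consistency: (L^{1/2})^2 - L vanishes to the truncation order (leading p^{-6})
err = sp.together(sp.expand(Lhalf**2 - L))
assert sp.degree(sp.numer(err), p) - sp.degree(sp.denom(err), p) == -6
```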
\paragraph{The expansion around $0$.}
Meromorphic functions expanded near $0$ constitute the following
algebra
\begin{equation*}\label{a0}
\mathcal{A}^0 = \pobr{\sum^{\infty}_{i= -m} a_i(x)p^i }.
\end{equation*}
The situation here is similar to the previous case. So,
$R$-matrices are defined for the same $r$ and $k$ as at $\infty$
and are of the form
\begin{equation*}\label{r0}
R = \frac{1}{2} - P^0_{< k-r}.
\end{equation*}
Hence, Lax hierarchies are
\begin{equation}\label{lax0}
L_{t_q} = - \pobr{\bra{L^{\tfrac{q}{m}}}^0_{< k-r},L}_r\qquad q\in \bb{Z}_+,
\end{equation}
where $-m\neq 0$ is the lowest order of Laurent series of $L$
expanded around $0$. So, if $L$ has a pole at $0$ then $m>0$ and the
powers are positive, while if $L^{-1}$ has a pole at $0$ then $m<0$
and the powers are negative.
Now, we will show that schemes for points at $\infty$ and at $0$
are interrelated.
\begin{proposition}\label{0i}
Under the transformation
\begin{equation*}\label{tran}
x'=x\quad p'=p^{-1}\quad t'=t
\end{equation*}
the Lax hierarchy \eqref{laxi} defined by $L\in \mathcal{A}^\infty$ for
$r,k$ transforms into the Lax hierarchy \eqref{lax0} defined by
$L'=L\in \mathcal{A}^0$ for $k'=3-k,r'=2-r$, i.e.
\begin{equation*}\label{trans}
L \text{ for } k,\ r \text{ at } \infty \quad \Longleftrightarrow
\quad L'=L \text{ for } k'=3-k,\ r'=2-r \text{ at } 0.
\end{equation*}
\end{proposition}
\begin{proof}
It follows from the observation that $\{\cdot,\cdot\}_r =
p^r\partial_p\wedge\partial_x = -p'^{2-r}\partial_{p'}\wedge\partial_{x'} = -
\{\cdot,\cdot\}'_{r'}$ and $(L^n)^\infty_{\geqslant k-r} =
({L'}^n)^0_{\leqslant k-r} = ({L'}^n)^0_{< k'-r'}$.
\end{proof}
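The identity used in the proof can also be spot-checked symbolically on monomials $a(x)p^m$; in the sketch below the symbol $q$ stands for $p'=p^{-1}$:

```python
import sympy as sp

x, p, q = sp.symbols('x p q')          # q plays the role of p' = 1/p
a, b = sp.Function('a')(x), sp.Function('b')(x)

def pb(f, g, var, r):
    # the bracket {f, g}_r in the variable var
    return var**r * (sp.diff(f, var) * sp.diff(g, x) - sp.diff(f, x) * sp.diff(g, var))

for r in (-1, 0, 1, 2, 3):
    for m, n in [(2, -1), (0, 3), (-2, -3)]:
        f, g = a * p**m, b * p**n
        lhs = pb(f, g, p, r).subs(p, 1/q)              # bracket in p, then p -> 1/p'
        rhs = -pb(f.subs(p, 1/q), g.subs(p, 1/q), q, 2 - r)
        assert sp.simplify(lhs - rhs) == 0
```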
\paragraph{The expansion around $v(x)$.}
Now, we will consider meromorphic functions in the form of Laurent
series expanded around some field $v(x)$:
\begin{equation*}\label{av}
\mathcal{A}^v = \pobr{\sum^{\infty}_{i= -m} a_i(x)(p-v(x))^i }.
\end{equation*}
Notice that $v(x)$ is a dynamical field of the same kind as the
coefficients $a_i(x)$. Let $\mathcal{A}^v_{\geqslant k-r} = \{\sum_{i\geqslant k-r}
a_i(p-v)^i\}$ and $\mathcal{A}^v_{< k-r} = \{\sum_{i< k-r} a_i(p-v)^i\}$.
Here the situation is a bit more complicated as one has to expand
$p^r$ in \eqref{pb} at $v(x)$, i.e.
\begin{equation*}
p^r = \sum_{s=0}^\infty \tbinom{r}{s}v(x)^{r-s}(p-v(x))^s
\end{equation*}
where $\binom{r}{s} = (-1)^s \binom{-r+s-1}{s}$ for $r<0$. Hence
$p^r$, as an element of $\mathcal{A}^v$, has lowest order equal to zero and
highest order equal to $r$ for $r\geqslant 0$, while the series is infinite for $r<0$.
Therefore
\begin{equation*}\label{pbv}
\pobr{a(p-v)^m, b(p-v)^n}_r = \bra{(p-v)^\alpha+ ... + v^r}
\times \bra{mab_x-na_xb}(p-v)^{m+n-1},
\end{equation*}
where $\alpha=r$ for $r\geqslant 0$ and $\alpha$ goes to $\infty$ for
$r<0$. One finds that $\mathcal{A}^v_{\geqslant k-r}, \mathcal{A}^v_{< k-r}$ are Lie
subalgebras in the following cases:
\begin{enumerate}
\item $r=0$, $k=0,1,2$; \item $r=1$, $k=1,2$; \item $r=2$, $k=2,3$
\end{enumerate}
and $R$-matrices have the form
\begin{equation}\label{rv}
R = \frac{1}{2} - P^v_{<k-r} .
\end{equation}
However, we have to choose those $R$-matrices which commute with
derivatives with respect to the evolution parameters. Let $L= \sum_i
a_i (p-v)^i$. Then,
\begin{align*}
(RL)_t - R L_t &= P^v_{<k-r}L_t - \bra{P^v_{<k-r}L}_t\\
&= P^v_{< k-r}\bra{\sum_i (a_i)_t (p-v)^i - \sum_i i a_i v_t
(p-v)^{i-1}} - \frac{d}{dt}\bra{\sum_{i<k-r} a_i (p-v)^i}\\
&= (k-r)a_{k-r-1} v_t (p-v)^{k-r-1}
\end{align*}
and equality \eqref{assum} holds when $k-r=0$. Hence, further on
we will consider only $R$-matrices \eqref{rv} for
\begin{equation*}
k=r=0,1,2.
\end{equation*}
In consequence, one finds the following Lax hierarchies related to
$R$-matrices \eqref{rv}
\begin{equation*}\label{laxv}
L_{t_q} = - \pobr{\bra{L^{\tfrac{q}{m}}}^v_{< 0},L}_r\qquad q\in
\bb{Z}_+\quad r=0,1,2,
\end{equation*}
where $-m\neq 0$ is the lowest order of Laurent series of $L$ at
$v$.
\subsection{Scalar products}
To construct Poisson structures one has to define an appropriate
scalar product on $\mathcal{A} $. We will define it near a given point
$\lambda$ by means of the trace form in the algebra $\mathcal{A}$ with
the Poisson structure \eqref{pb} for fixed $r$:
\begin{align*}
{\rm Tr}_\infty f &= -\int_\Omega {\rm res}_\infty \bra{p^{-r}f} dx,\\
{\rm Tr}_\lambda f &= \int_\Omega {\rm res}_\lambda \bra{p^{-r}f} dx,\qquad
\qquad \lambda = 0,v(x)\quad f\in \mathcal{A},
\end{align*}
where ${\rm res}$ is the standard residue. In further considerations
the residue theorem will be very useful. Let $f\in \mathcal{A}$ and
$\Gamma$ be a set of all finite poles of $f$. Then, according to
the residue theorem
\begin{equation}\label{rest}
\sum_{i\in \Gamma} {\rm res}_{\lambda_i}f =
\frac{1}{2\pi i} \oint_{\gamma_\Gamma} f\ dp \equiv -{\rm res}_\infty
f \qquad \lambda_i \neq \infty
\end{equation}
where $\gamma_\Gamma$ is a closed curve encircling all finite poles
of $f$. So, the residue at $\infty$ may be different from zero even if
$f$ does not have a singularity at this point.
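A quick sympy illustration of this point, with a hypothetical rational function that is regular at $\infty$ yet has nonzero residue there:

```python
import sympy as sp

p, w = sp.symbols('p w')
f = (p**2 + 3) / ((p - 1) * (p + 2))   # regular at infinity: f -> 1

# sum of the finite residues: 4/3 + (-7/3) = -1
finite = sp.residue(f, p, 1) + sp.residue(f, p, -2)
# residue at infinity via the substitution p = 1/w:
# res_inf f = -res_{w=0} [ w^{-2} f(1/w) ]
res_inf = -sp.residue(f.subs(p, 1/w) / w**2, w, 0)

assert finite == -res_inf and res_inf == 1
```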
\begin{lemma}
For two arbitrary functions $f,g\in \mathcal{A}$ the scalar product:
\begin{equation}\label{prod}
\bra{f,g}_\lambda:={\rm Tr}_\lambda (fg)\qquad \lambda = \infty,0,v(x)
\end{equation}
is symmetric, nondegenerate and ${\rm ad}$-invariant.
\end{lemma}
\begin{proof}
The nondegeneracy and symmetry of \eqref{prod} are obvious. Let
$\gamma_\lambda$ be a closed curve encircling a finite pole
$\lambda$ once; then
\begin{align*}
{\rm Tr}_\lambda \pobr{f,g}_r &= \int_\Omega {\rm res}_\lambda \bra{\partial_p f \partial_x g} dx - \int_\Omega {\rm res}_\lambda \bra{\partial_x f \partial_p g} dx\\
& = \frac{1}{2\pi i}\int_\Omega \oint_{\gamma_\lambda} \bra{\partial_p f \partial_x g} dpdx - \frac{1}{2\pi i}\int_\Omega \oint_{\gamma_\lambda} \bra{\partial_x f \partial_p g} dpdx\\
& = \frac{1}{2\pi i}\int_\Omega \oint_{\gamma_\lambda} \bra{\partial_x
f \partial_p g} dpdx - \frac{1}{2\pi i}\int_\Omega
\oint_{\gamma_\lambda} \bra{\partial_x f \partial_p g} dpdx = 0,
\end{align*}
where we have integrated by parts with respect to $p$ and $x$.
The proof for $\lambda=\infty$ is similar. Therefore
\begin{align*}
&\bra{\pobr{f,g}_r,h}_\lambda - \bra{\pobr{g,h}_r,f}_\lambda =
{\rm Tr}_\lambda \bra{\pobr{f,g}_r h} -
{\rm Tr}_\lambda \bra{\pobr{g,h}_r f} \\
&\qquad \qquad = {\rm Tr}_\lambda \bra{\pobr{fh,g}_r-f\pobr{h,g}_r}+
{\rm Tr}_\lambda \bra{f\pobr{h,g}_r}= {\rm Tr}_\lambda \pobr{fh,g}_r = 0,
\end{align*}
i.e. ${\rm ad}$-invariance is proved.
\end{proof}
For a given functional $H(L)\in \mathcal{C}^\infty(\mathcal{A})$ of $L\in \mathcal{A}$ the
differential can be calculated by \eqref{grad}. But for a functional
$H = \int_\Omega h(u_i)\, dx$, where $u_i$ are the dynamical
coefficients of $L\in \mathcal{A}$, we have to show how to construct $dH$.
The differential of $H$ constructed near a given point $\lambda$ will
be denoted by $d_\lambda H\in \mathcal{A}$. The coefficients of $d_\lambda H$
depend on dynamical fields and usual variational derivatives
$\var{H}{u_i}$ in such a way that the trace duality assumes the
usual Euclidean form, i.e.
\begin{equation}\label{eucl}
\bra{d_\lambda H,L_t}_\lambda = {\rm Tr}_\lambda\bra{d_\lambda H L_t} =
\sum_i \int_\Omega \var{H}{u_i}(u_i)_t\ dx .
\end{equation}
Notice that from \eqref{eucl} it follows that
\begin{equation}\label{rel}
\forall_{i,j}\quad \bra{d_{\lambda_i}H,K}_{\lambda_i} =
\bra{d_{\lambda_j}H,K}_{\lambda_j}
\end{equation}
where $K$ is a vector field on $\mathcal{A}$ spanning exactly the
same subspace of $\mathcal{A}$ as $L_t$.
To find $R^*$, i.e. the adjoint operation to $R$, one has to
determine the adjoint projections near $\lambda$ from the
following relation
\begin{equation*}
\bra{\bra{P^\lambda}^*f,g}_\lambda = \bra{f,P^\lambda
g}_\lambda\qquad f,g\in \mathcal{A}^\lambda .
\end{equation*}
So, for $0$ and $\infty$ we have
\begin{equation*}
\bra{P^0_{< k-r}}^* = 1 - P^0_{< 2r-k}\qquad \bra{P^\infty_{\geqslant
k-r}}^* = 1 - P^\infty_{\geqslant 2r-k}.
\end{equation*}
The case of $\lambda = v(x)$ is more delicate. Let $A= \sum_m
a_m(p-v)^m$ and $B= \sum_n b_n(p-v)^n$, then for $r\geqslant 0$:
\begin{align*}
\bra{A ,P^v_{<0}B}_v &= \int_\Omega {\rm res}_v \bra{\sum_{s\geqslant
0}\sum_m\sum_{n<0} \tbinom{-r}{s}v^{-r-s}a_mb_n(p-v)^{m+n+s}}dx\\
&= \int_\Omega \sum_{s\geqslant 0}\sum_{n<0}
\tbinom{-r}{s}v^{-r-s}a_{-n-s-1}b_n\ dx = \int_\Omega \sum_{s\geqslant
0}\sum_{m\geqslant -s}
\tbinom{-r}{s}v^{-r-s}a_mb_{-m-s-1}\ dx\\
&= \int_\Omega {\rm res}_v \bra{\sum_{s\geqslant 0}\sum_{m\geqslant -s}\sum_n
\tbinom{-r}{s}v^{-r-s}a_mb_n(p-v)^{m+n+s}}dx\\
&= \bra{p^r \sum_{s\geqslant 0}\tbinom{-r}{s}v^{-r-s}(p-v)^s P^v_{\geqslant -s}A ,
B}_v,
\end{align*}
where we used an appropriate expansion of $p^{-r}$ at $v$. Hence
\begin{equation*}
\bra{P^v_{< 0}}^* = 1 - p^r
\sum_{s=0}^{\infty}\tbinom{-r}{s}v^{-r-s}(p-v)^s P^v_{<-s}\qquad
r\geqslant 0
\end{equation*}
For $r=0$ it reduces to $\bra{P^v_{< 0}}^* = 1 - P^v_{< 0}$. We
will use the simplified notation:
\begin{equation*}
P_v'= \sum_{s=0}^{\infty}\tbinom{-r}{s}v^{-r-s}(p-v)^s P^{v}_{<-s}
\end{equation*}
as then $\bra{P^v_{< 0}}^* = 1 - p^r P_v'$.
\subsection{Poisson structures}
The Poisson structures \eqref{pobr} at respective points, related
to respective $R$-matrices, are
\begin{equation*}\label{ps}
\pobr{H,F}^n_\lambda = \bra{d_\lambda F,\pi^n_\lambda d_\lambda
H}_\lambda\qquad \lambda = \infty,0,v(x)\quad n\geqslant 0
\end{equation*}
for which Poisson operators are given by the following forms
\begin{align}
\label{poi} &\pi^n_\infty d_\infty H = \pobr{\bra{L^nd_\infty H}^\infty_{\geqslant k-r},L}_r - L^n\bra{\pobr{d_\infty H,L}_r}^\infty_{\geqslant 2r-k},\\
\notag &\pi^n_0 d_0 H = \pobr{L,\bra{L^nd_0 H}^0_{< k-r}}_r - L^n\bra{\pobr{L,d_0 H}_r}^0_{<2r-k},\\
\notag &\pi^n_v d_v H = \pobr{L,\bra{L^nd_v H}^v_{<0}}_r -
L^np^rP_v'\bra{\pobr{L,d_v H}_r}.
\end{align}
It is important here to mention that for a given Lax operator $L$
it may happen that $L_t$ does not span a proper subspace of the
full Poisson algebra $\mathcal{A}$, i.e. the image of the Poisson
operator $\pi^ndH$ does not coincide with this subspace. Then, in
general, the Dirac reduction can be invoked to restrict a
given Poisson tensor to a suitable subspace.
\begin{lemma}
The following relations will be needed to prove the forthcoming
theorem:
\begin{align*}
& \bra{d_\infty F,\pobr{L,\bra{L^nd_0H}^0_{<k-r}}_r}_\infty =
\bra{d_0H,L^n\bra{\pobr{d_\infty F,L}_r}^\infty_{\geqslant 2r-k}}_0,\\
&\bra{d_\infty F,L^n \bra{\pobr{L,d_0H}_r}^0_{< 2r-k}}_\infty =
\bra{d_0H,\pobr{\bra{L^nd_\infty F}^\infty_{\geqslant k-r},L}_r}_0,
\end{align*}
for arbitrary $k$ and $r$, and
\begin{align}
\label{rel1} &\bra{d_\infty F,\pobr{L,\bra{L^nd_v
H}^v_{<0}}_r}_\infty =
\bra{d_v H,L^n\bra{\pobr{d_\infty F,L}_r}^\infty_{\geqslant r}}_v,\\
\label{rel2} &\bra{d_\infty F,L^np^rP_v'\bra{\pobr{L,d_v
H}_r}}_\infty = \bra{d_v H,\pobr{\bra{L^nd_\infty F}^\infty_{\geqslant
0},L}_r}_v,
\end{align}
where $r\geqslant 0$.
\end{lemma}
\begin{proof}
We will prove only the first and last relations as for the two
remaining ones the proof is similar. We use the ${\rm ad}$-invariance
property and omit (or add) those elements which do not
contribute to the calculation of residues:
\begin{align*}
&\bra{d_\infty F,\pobr{L,\bra{L^nd_0 H}^0_{<k-r}}_r}_\infty = \bra{\bra{L^nd_0 H}^0_{<k-r},\pobr{d_\infty F,L}_r}_\infty \\
&= \int_\Omega {\rm res}_\infty \bra{p^{-r}\bra{L^nd_0H}^0_{<k-r}\pobr{L,d_\infty F}_r}dx = \int_\Omega {\rm res}_\infty \bra{p^{-r}\bra{L^nd_0 H}^0_{<k-r}\bra{\pobr{L,d_\infty F}_r}^\infty_{\geqslant 2r-k}}dx\\
&\overset{\text{by \eqref{rest}}}{=} \int_\Omega {\rm res}_0
\bra{p^{-r}\bra{L^nd_0H}^0_{<k-r}\bra{\pobr{d_\infty
F,L}_r}^\infty_{\geqslant 2r-k}}dx\\ &=\int_\Omega {\rm res}_0
\bra{p^{-r}L^nd_0H \bra{\pobr{d_\infty F,L}_r}^\infty_{\geqslant
2r-k}}dx = \bra{d_0H,L^n\bra{\pobr{d_\infty F,L}_r}^\infty_{\geqslant
2r-k}}_0 .
\end{align*}
Let $r\geqslant 0$. Using the proper expansion of $p^{-r}$ at $v$, we have:
\begin{align*}
&\bra{d_\infty F,L^np^rP_v'\bra{\pobr{L,d_v H}_r}}_\infty = -\int_\Omega {\rm res}_\infty \bra{d_\infty F L^nP_v'\bra{\pobr{L,d_v H}_r}}dx\\
&= \int_\Omega {\rm res}_\infty \bra{\bra{L^nd_\infty F}^\infty_{\geqslant 0} P_v'\bra{\pobr{d_v H,L}_r}}dx \overset{\text{by \eqref{rest}}}{=} \int_\Omega {\rm res}_v \bra{\bra{L^nd_\infty F}^\infty_{\geqslant 0} P_v'\bra{\pobr{L,d_v H}_r}}dx\\
&= \int_\Omega {\rm res}_v \bra{\bra{L^nd_\infty F}^\infty_{\geqslant 0} \pobr{L,d_v H}_r}dx = \bra{\bra{L^nd_\infty F}^\infty_{\geqslant 0},\pobr{L,d_v H}_r}_v\\
& = \bra{d_v H,\pobr{\bra{L^nd_\infty F}^\infty_{\geqslant 0},L}_r}_v.
\end{align*}
Thus all relations are valid.
\end{proof}
\begin{theorem}\label{main}
Let $L\in \mathcal{A}$ be a meromorphic Lax function. Then for all
appropriate $k$ and $r$
\begin{equation*}\label{mainri}
\pobr{H,F}^n_0 = \pobr{H,F}^n_\infty\quad \text{and}\quad \pi^n_0
d_0 H = \pi^n_\infty d_\infty H
\end{equation*}
while for $k=r=0,1,2$
\begin{equation*}
\forall_i\ \pobr{H,F}^n_{v_i} = \pobr{H,F}^n_\infty \quad
\text{and}\quad \pi^n_{v_i} d_{v_i} H = \pi^n_\infty d_\infty H,
\end{equation*}
where $v_i$ are dynamical fields. Therefore, the Poisson structures
in the original function space of related dispersionless
systems, calculated for fixed $r$ and $k$ at different points, are
equal.
\end{theorem}
\begin{proof}
We will prove only the second set of relations as for the first
part the proof is similar. Thus,
\begin{align*}
&\pobr{H,F}^n_{v_i} = \bra{d_{v_i} F,\pi^n_{v_i}d_{v_i} H}_{v_i} \overset{\text{by \eqref{rel}}}{=} \bra{d_\infty F,\pi^n_{v_i}d_{v_i} H}_\infty\\
&= \bra{d_\infty F,\pobr{L,\bra{L^nd_{v_i} H}^{v_i}_{<0}}_r - L^np^rP_{v_i}' \bra{\pobr{L,d_{v_i} H}_r}}_\infty\\
&\overset{\text{by \eqreff{rel1}{rel2}}}{=}
\bra{d_{v_i} H,L^n\bra{\pobr{d_\infty F,L}_r}^\infty_{\geqslant r}-\pobr{\bra{L^nd_\infty F}^\infty_{\geqslant 0},L}_r}_{v_i}\\
&= -\bra{d_{v_i} H,\pi^n_\infty d_\infty F}_{v_i}
\overset{\text{by \eqref{rel}}}{=} -\bra{d_\infty H,\pi^n_\infty
d_\infty F}_\infty = \pobr{H,F}^n_\infty.
\end{align*}
Now, from the equality of above Poisson brackets it follows that
\begin{align*}
\bra{d_\infty F,\pi^n_\infty d_\infty H}_\infty =
\bra{d_{\lambda_i} F,\pi^n_{\lambda_i}d_{\lambda_i} H}_{\lambda_i}
\overset{\text{by
\eqref{rel}}}{=} \bra{d_\infty F,\pi^n_{\lambda_i}d_{\lambda_i}
H}_\infty \Longleftrightarrow \pi^n_{\lambda_i} d_{\lambda_i} H =
\pi^n_\infty d_\infty H,
\end{align*}
where $\lambda_i=0,v_i$. Hence the theorem is proved.
\end{proof}
\subsection{Commuting multi-Hamiltonian Lax hierarchies}
Let $L\in \mathcal{A}$ be a Lax function such that $L$ and $L^{-1}$ can
have poles at $\infty, 0$ and $v_i(x)$. Then, for appropriate $r$
and $k$ near these poles one can construct the following
multi-Hamiltonian Lax hierarchies \eqref{eveq}
\begin{align}
\label{lhi} L_{t_q} &= \pobr{\bra{L^{\tfrac{q}{N}}}^\infty_{\geqslant
k-r},L}_r = \pi^0_\infty d_\infty H_q^\infty = \pi^1_\infty
d_\infty H_{q-1}^\infty = ...,\\
\label{lh0} L_{\tau_q} &= - \pobr{\bra{L^{\tfrac{q}{m_0}}}^0_{<
k-r},L}_r =
\pi^0_0 d_0 H_q^0 = \pi^1_0 d_0 H_{q-1}^0 = ...,\\
\label{lhv} L_{\xi_q} &= -
\pobr{\bra{L^{\tfrac{q}{m_i}}}^{v_i}_{< 0},L}_r = \pi^0_{v_i}
d_{v_i} H_q^{v_i} = \pi^1_{v_i} d_{v_i} H_{q-1}^{v_i} = ...,
\end{align}
where the integer $q>0$ and $t, \tau, \xi$ are evolution parameters.
The Hamiltonians are then defined through trace forms near these
poles and are given by \eqref{casq} for $q\geqslant 0$
\begin{equation}\label{ham}
\begin{split}
&H_q^\lambda (L) = \frac{\epsilon}{\frac{q}{n}+1}
\int_\Omega {\rm res}_\lambda \bra{p^{-r}L^{\frac{q}{n}+1}}\qquad \text{for } q\neq -n\\
&H_{-n}^\lambda (L) = \epsilon \int_\Omega {\rm res}_\lambda
\bra{p^{-r}\ln L}\qquad \text{for } q= -n,
\end{split}
\end{equation}
where $\epsilon = -1, n=N$ for $\lambda=\infty$ and $\epsilon =
1,n=m_0,m_i$ for $\lambda=0,v_i$, respectively. Calculations of
$H_{-n}^\lambda$ from \eqref{ham} for $\lambda$ being a root of
$L$ may cause difficulties, as then $\ln L$ has an essential
singularity at $\lambda$. There is an alternative approach. First, we
look for coefficients of $dH_{-n}^\lambda$ which can be simply
obtained from ${\rm Tr}_\lambda (L^{-1} L_t) = \sum_i \int_\Omega
\var{H_{-n}^\lambda}{u_i}(u_i)_t\ dx$, since $d_\lambda
H_{-n}^\lambda = L^{-1}$. Then, we calculate the functional
$H_{-n}^\lambda$ by integrating the respective system of equations.
Let us show that Lax hierarchies \eqreff{lhi}{lhv} for fixed $r$
and $k$ mutually commute. Due to Lemma \ref{lemma} Hamiltonians
\eqref{ham}, as Casimirs of the natural Lie-Poisson bracket, are
in involution with respect to Poisson brackets \eqref{ps}, i.e.
$\{H_q^{\lambda_i}, H_{q'}^{\lambda_j}\}^n_{\lambda_k}=0$, where
$\lambda_i=\infty,0,v_i$. From Theorem~\ref{main} it follows that
$\pi^n_{\lambda_i} d_{\lambda_i} = \pi^n_{\lambda_j}
d_{\lambda_j}$. Now, since $\pi d$ is a Lie algebra
homomorphism from the algebra of smooth functions to the Lie
algebra of vector fields, the commutation between the Lax hierarchies
\eqreff{lhi}{lhv} is immediate. For two vector fields $L_{t_q} =
\pi^n_{\lambda_i} d_{\lambda_i} H_q^{\lambda_i}$ and $L_{t'_{q'}}
= \pi^n_{\lambda_j} d_{\lambda_j} H_{q'}^{\lambda_j}$ we have that
\begin{align*}
\brac{L_{t_q} , L_{t'_{q'}}} = \brac{\pi^n_{\lambda_i}
d_{\lambda_i} H_q^{\lambda_i}, \pi^n_{\lambda_j} d_{\lambda_j}
H_{q'}^{\lambda_j}} = \brac{\pi^n_{\lambda_i} d_{\lambda_i}
H_q^{\lambda_i}, \pi^n_{\lambda_i} d_{\lambda_i}
H_{q'}^{\lambda_j}} = \pi^n_{\lambda_i} d_{\lambda_i}
\pobr{H_q^{\lambda_i}, H_{q'}^{\lambda_j}} = 0,
\end{align*}
where $[\cdot,\cdot ]$ is the Lie bracket between vector fields.
However, for these commutations the Hamiltonian property is not
necessary. We will show it for the Lax hierarchies \eqref{lhi} and
\eqref{lhv} with $k=r=0,1,2$ as for the other combinations the
calculations are similar. We will use the simplified notation
$X=L^{\tfrac{q}{N}}, Y=L^{\tfrac{q'}{m_i}}$ and
\begin{equation*}
(X)^\infty_{\geqslant 0}=X^\infty_{\geqslant 0}=X-X^\infty_{< 0}\qquad
(Y)^{v_i}_{<0}=Y^v_{< 0}=Y-Y^v_{\geqslant 0}.
\end{equation*}
Then
\begin{align*}
&\bra{L_{t_q}}_{\xi_{q'}} - \bra{L_{\xi_{q'}}}_{t_q}=\\
&= \pobr{\bra{X^\infty_{\geqslant 0}}_{\xi_{q'}},L}_r +
\pobr{X^\infty_{\geqslant 0},L_{\xi_{q'}}}_r +
\pobr{\bra{Y^v_{< 0}}_{t_q},L}_r + \pobr{Y^v_{< 0},L_{t_q}}_r\\
&\overset{\text{by \eqref{assum}}}{=} \pobr{\bra{X_{\xi_{q'}}}^\infty_{\geqslant 0},L}_r +
\pobr{X^\infty_{\geqslant 0},L_{\xi_{q'}}}_r +
\pobr{\bra{Y_{t_q}}^v_{< 0},L}_r + \pobr{Y^v_{<
0},L_{t_q}}_r\\
&= -\pobr{\bra{\pobr{Y^v_{< 0},X}_r}^\infty_{\geqslant 0},L}_r -
\pobr{X^\infty_{\geqslant 0},\pobr{Y^v_{< 0},L}_r}_r\\
&\quad +\pobr{\bra{\pobr{X^\infty_{\geqslant 0},Y}_r}^v_{< 0},L}_r +
\pobr{Y^v_{< 0},\pobr{X^\infty_{\geqslant 0},L}_r}_r\\
&=\pobr{\bra{\pobr{X,Y^v_{< 0}}_r}^\infty_{\geqslant 0} +
\bra{\pobr{X^\infty_{\geqslant 0},Y}_r}^v_{< 0} -
\pobr{X^\infty_{\geqslant 0},Y^v_{< 0}}_r,L}_r = 0
\end{align*}
where we used the Jacobi identity; the last equality holds since for $r=0,1,2$
\begin{align*}
\pobr{X^\infty_{\geqslant 0},Y^v_{< 0}}_r &=
\bra{\pobr{X^\infty_{\geqslant 0},Y^v_{< 0}}_r}^\infty_{\geqslant 0} + \bra{\pobr{X^\infty_{\geqslant 0},Y^v_{< 0}}_r}^v_{< 0}\\
&=
\bra{\pobr{X,Y^v_{< 0}}_r}^\infty_{\geqslant 0} + \bra{\pobr{X^\infty_{\geqslant 0},Y}_r}^v_{< 0}.
\end{align*}
We see now that the restriction \eqref{assum} is indeed crucial.
Actually, in the same way one can prove the commutation of
symmetries within these Lax hierarchies.
Combining the results of the current and previous subsections, we
obtain the following corollary.
\begin{corollary}\label{cor}
Let $L$ be a Lax function in $\mathcal{A}$ with fixed Poisson bracket
given by $r$ and let us fix an appropriate $k$. Then, around each
pole of $L$ and $L^{-1}$ one finds an infinite hierarchy of commuting
multi-Hamiltonian symmetries and an infinite hierarchy of constants
of motion. Moreover, vector fields from these different
hierarchies mutually commute.
\end{corollary}
In further considerations we are interested in extracting closed
systems with a finite number of dynamical functions. Therefore, we
will look for meromorphic Lax functions, with a finite number of
dynamical coefficients, which allow a construction of consistent
evolution Lax hierarchies. So, in the following section we will
select appropriate meromorphic Lax functions.
\section{Meromorphic Lax functions}
The meromorphic Lax function $L$ is an appropriate one if the
right-hand sides of Lax hierarchies \eqreff{lhi}{lhv} can be
written in the form of evolutions $L_t$, i.e. left-hand sides.
These Lax hierarchies are generated by positive and negative
powers (in general fractional) of respective expansions near poles
of $L$ and $L^{-1}$. Actually, the appropriate expansions near
$\infty$ and $0$ are for $k=r=0$; $k=1,2$ and $r\in \bb{Z}$;
$k=3,r=2$, while the expansions near $v_i(x)$ take place for
$k=r=0,1,2$. One finds these poles by looking for roots of
$L^{-1}$ and $L$, respectively. The following observation is important. Let
$L$ be an appropriate Lax function with respect to the Lax
hierarchy related to one of the poles. Then it is also an
appropriate function for the hierarchies at all other allowed poles,
for the same $r$ and $k$. This is so because, by Theorem \ref{main},
the Lax hierarchy related to one pole can be rewritten for another
one. Moreover, for a given $L$ and fixed $r$ and $k$, the Lax
hierarchies, generated near all poles, will mutually commute.
We would like to investigate the general form of meromorphic Lax
functions being appropriate Lax functions, i.e. those which allow a
construction of integrable dispersionless equations. We will
distinguish between three cases: the first one when $L$ is a
finite formal Laurent series at $0$, the second one when $L$ is a
finite formal Laurent series at a pole $v(x)$, and finally the more
general case of rational functions.
\subsection{Polynomial Lax functions in $p$ and $p^{-1}$.}
Let us consider Lax functions of the form
\begin{equation}\label{pol0}
L = u_N p^{N} + u_{N-1} p^{N-1} + ... + u_{1-m} p^{1-m} +
u_{-m}p^{-m},
\end{equation}
i.e. a finite formal Laurent series at $0$. The coefficients $u_i$ are
dynamical fields. For Lax functions \eqref{pol0}, in general, we
can construct powers near $\infty$ and $0$ which will generate
related Lax hierarchies \eqref{lhi} and \eqref{lh0}, respectively.
If $k=r$ powers calculated around roots of $L$ generate additional
Lax hierarchies given by \eqref{lhv}.
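For the simplest instance, $k=r=0$ with $L=p^2+u$, the $q=3$ member of the hierarchy \eqref{laxi} gives the dispersionless KdV (Riemann) equation $u_t = \tfrac{3}{2}uu_x$. A sympy sketch of this standard computation (the projection step uses an auxiliary shift by $p^M$, an implementation device only):

```python
import sympy as sp

x, p, z = sp.symbols('x p z')
u = sp.Function('u')(x)
L = p**2 + u                   # case k = r = 0 with N = 2, u_2 = 1, u_1 = 0

# L^{3/2} as a Laurent series at infinity, via sqrt(1 + z) with z = u p^{-2}
L32 = sp.expand(L * p * sp.series(sp.sqrt(1 + z), z, 0, 3).removeO().subs(z, u / p**2))

# A = (L^{3/2})_{>=0}: keep only non-negative powers of p
M = 8
A = sp.expand(sum(c * p**(e - M)
                  for (e,), c in sp.Poly(sp.expand(L32 * p**M), p).terms()
                  if e >= M))
assert sp.expand(A - (p**3 + sp.Rational(3, 2) * u * p)) == 0

# the flow L_t = {A, L}_0 = A_p L_x - A_x L_p is p-independent, giving u_t
flow = sp.expand(sp.diff(A, p) * sp.diff(L, x) - sp.diff(A, x) * sp.diff(L, p))
assert sp.simplify(flow - sp.Rational(3, 2) * u * sp.diff(u, x)) == 0
```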
From now on, without loss of generality, we will choose all
appearing constants in a form that simplifies the formulae.
\begin{proposition}\label{apol0}
A Lax function of the form \eqref{pol0} is an appropriate one in the
following cases:
\begin{enumerate}
\item $k=0$, $r=0$: $N\geqslant 2$, $u_N=1$, $u_{N-1} = 0$, $m=0$;
\item $k=1$, $r\in \bb{Z}$: $N\neq 0$, $u_N=1$, $m\neq 0$ for $r=1$;
\item $k=2$, $r\in \bb{Z}$: $N\neq 0$ for $r=1$, $m\neq 0$, $u_{-m}=1$;
\item $k=3$, $r=2$: $N=0$, $m\geqslant 2$, $u_{1-m}=0$, $u_{-m}=1$.
\end{enumerate}
\end{proposition}
We will not prove this proposition as it is the standard case
considered in \cite{BS2}.
\begin{proposition}\label{pp2}
Under the transformation $p'=p^{-1}$, the Lax hierarchies from
Proposition \ref{apol0}, generated by powers calculated at
$\infty$ and $0$ for appropriate $r$ and $k$, transform into Lax
hierarchies at $0$ and $\infty$ with $r'=2-r$ and $k'=3-k$,
respectively.
\end{proposition}
The proof follows immediately from Proposition \ref{0i}. Notice
that under the transformation $p'=p^{-1}$ the Lax hierarchies \eqref{lhv} for
$r=k=0,1,2$, defined at roots of $L$ being dynamical fields, fall
outside the scheme presented in this article. On the other hand,
for: $k=1, r=0$; $k=2, r=1$; $k=3, r=2$; according to Proposition
\ref{apol0} one can construct Lax hierarchies only at $\infty$ and
$0$. However, by $p'=p^{-1}$ they transform into cases: $k=2,
r=2$; $k=1, r=1$; $k=0, r=0$; respectively, for which one is able
to construct Lax hierarchies \eqref{lhv} related to all poles
(including poles being dynamical fields) of $L'$ and $L'^{-1}$.
Hence, the relevant cases from Proposition \ref{apolv} are:
\begin{itemize}
\item $k=0$, $r=0$;
\item $k=1$, $r\in \bb{Z}\backslash \{0\}$;
\item $k=2$, $r=2$.
\end{itemize}
The remaining cases can be obtained by transformation $p'=p^{-1}$
according to Proposition \ref{pp2}.
To construct Poisson operators we have to choose a point near
which the calculations are performed. Nevertheless, as follows
from Theorem \ref{main}, the explicit form of the Poisson operators in
the original function space is the same for all points. Thus, we
choose $\infty$, as it is the standard case. Then, since we
assumed the usual Euclidean form \eqref{eucl}, the differentials of
a functional $H$ are given by
\begin{equation*}
dH\equiv d_\infty H = \sum_{i=-m}^{N+k-2}\var{H}{u_i}p^{r-1-i},
\end{equation*}
where $m=0$ for $k=0$. We still have to check whether the above
Lax functions span proper subspaces, w.r.t. the Poisson operators
\eqref{poi}, of the full Poisson algebras. We will limit ourselves
to the linear ($n=0$) and quadratic ($n=1$) Poisson tensors, as these
obviously suffice to define bi-Hamiltonian structures.
Besides, in all nontrivial cases the Lax functions do not span
proper subspaces w.r.t. the Poisson tensors for $n\geqslant 2$.
Poisson tensors restricted to a finite number of fields are properly
defined if the highest and lowest orders of $\pi^n_\infty dH$ and
$L_t$ coincide. Simple inspection shows that the highest
order of $\pi^n_\infty dH$ is equal to $\max \{N+k-2,n N+2r-k-1\}$
and the lowest is $0$ for $k=0$ and $\min \{k-1-m,-n m+2r-k\}$ for
$k=1,2$. Hence, in the case $k=0$ the Lax function always spans the
proper subspace w.r.t. the linear Poisson tensor, but for $k=1,2$
only when $N\geqslant 2r-2k+1\geqslant -m$; otherwise the Dirac
reduction is required. The linear Poisson tensor is of the form
\begin{equation}\label{lin}
\pi^0_\infty dH = \pobr{\bra{dH}^\infty_{\geqslant k-r},L}_r
-\bra{\pobr{dH,L}_r}^\infty_{\geqslant 2r-k}.
\end{equation}
The reduced linear tensor for $N=-1$ and $k=r=1,2$ is given by
\eqref{lin_II}. For the quadratic Poisson tensors the Dirac
reduction is always necessary. The calculation procedure of the Dirac
reduction is explained in \cite{BS} (in slightly different notation).
The reduced quadratic Poisson tensor for $k=r=0,1,2$ is given by
\begin{equation}\label{quad_I}
\bra{\pi^1_{\infty}}^{red}dH = \pobr{\bra{L dH}^\infty_{\geqslant
0},L}_r - L \bra{\pobr{dH,L}_r}^\infty_{\geqslant r} +
\frac{1}{N}\pobr{L,\partial_x^{-1}{\rm res}_\infty \pobr{dH,L}_0}_r,
\end{equation}
and for $k=1,r=0$ and $k=2,r=1$ takes the form
\begin{equation}\label{quad_II}
\bra{\pi^1_{\infty}}^{red}dH = \pobr{\bra{L dH}^\infty_{\geqslant
1},L}_r - L \bra{\pobr{dH,L}_r}^\infty_{\geqslant r-1} +
\frac{1}{m}\pobr{L,\partial_x^{-1}{\rm res}_\infty \pobr{dH,L}_0}_r.
\end{equation}
Both reduced Poisson tensors are always local, as ${\rm res}_\infty
\{\cdot,\cdot\}_0 = (...)_x$.
Throughout the article we present, in general, examples for the
simplest Lax functions, for which the calculations are not overly
complicated. From the Lax hierarchies considered we exhibit only
the first nontrivial systems.
\begin{example} Two-field system: $k=1$, $r\in \bb{Z}$.
Let us consider the Lax function of the form
\begin{equation}\label{l1}
L = p + u + v p^{-1}.
\end{equation}
It has poles at $\infty$ and $0$. Then, for $\infty$ we have
\begin{align}\notag
&L_{t_{2-r}} = \pobr{\bra{L^{2-r}}^\infty_{\geqslant 1-r},L}_r
\Longleftrightarrow \\ \label{l2}
&\pmatrx{u\\ v}_{t_{2-r}} = (2-r) \pmatrx{(1-r)uu_x-v_x\\
-u_xv-(1-r)uv_x} = \pi_0 dH^\infty_{2-r} = \pi^{red}_1
dH^\infty_{1-r},
\end{align}
where $\bra{L^{2-r}}^\infty_{\geqslant 1-r} = p^{2-r} + (2-r)u p^{1-r}$.
When $r=2$, the next equation from the hierarchy is the first
nontrivial one. For $r=1$ this is the well-known dispersionless
Toda system. The hierarchy at $0$ is the same, since $L$ has only two
poles of the same order and $(L^q)^\infty_{\geqslant 1-r} = L^q -
(L^q)^0_{< 1-r}$. The roots of $L$ are $\lambda_\pm =
\tfrac{1}{2}(-u\pm \sqrt{u^2-4v})$. Thus, for $r=1$
\begin{equation*}
(L^{-1})^{\lambda_\pm}_{<0} = \frac{1}{1-\frac{v}{\lambda_\pm^2}}
(p-\lambda_\pm)^{-1}
\end{equation*}
and one finds the following equations
\begin{align*}
&L_{\xi^\pm_{-1}} = -\pobr{\bra{L^{-1}}^{\lambda_\pm}_{< 0},L}_1
\Longleftrightarrow \\
&\pmatrx{u\\ v}_{\xi^\pm_{-1}} = \frac{\pm 1}{(u^2-4v)^\frac{3}{2}} \pmatrx{2u_xv-uv_x\\
v(2v_x-uu_x)} = \pi_0 dH^{\lambda_\pm}_{-1} = \pi^{red}_1
dH^{\lambda_\pm}_{-2}.
\end{align*}
Of course, for $k=r=1$ all equations mutually commute.
The Lax function \eqref{l1} defines the proper subspace w.r.t. the
linear Poisson tensor \eqref{lin} only for $r=0,1$. In these cases,
the reduced quadratic Poisson tensors are given by \eqref{quad_II}
and \eqref{quad_I}, respectively. Hence, for $r=0$
\begin{align}\label{l3}
\pi_0 = \pmatrx{0 & \partial\\ \partial & 0}\qquad \pi^{red}_1 =
\pmatrx{2\partial & \partial u\\ u\partial & \partial v+v\partial},
\end{align}
and related Hamiltonians are
\begin{equation*}
H_1^\infty = \int_\Omega uv\ dx\qquad H_2^\infty = \int_\Omega
(u^2v+v^2)\ dx.
\end{equation*}
For $r=1$
\begin{align*}
\pi_0 = \pmatrx{0 & \partial v\\ v\partial & 0}\qquad \pi^{red}_1 =
\pmatrx{\partial v+v\partial & u\partial v\\ v\partial u & 2v\partial v}
\end{align*}
and
\begin{align*}
&H_0^\infty =
\int_\Omega uv\ dx\qquad H_1^\infty = \int_\Omega (u^2v+v^2)\ dx\\
&H_{-2}^{\lambda_\pm} = \int_\Omega \frac{\mp 1}{\sqrt{u^2-4v}}\
dx\qquad H_{-1}^{\lambda_\pm} = \pm \int_\Omega \ln
\frac{u+\sqrt{u^2-4v}}{v}\ dx.
\end{align*}
\end{example}
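As an added cross-check (an illustration assuming SymPy, not part of the original derivation), the script below verifies, at sample field values, that $\lambda_\pm$ are the roots of \eqref{l1} and that the coefficient of the principal part of $L^{-1}$ at $\lambda_+$ equals $\lambda_+/(\lambda_+-\lambda_-)$, i.e. $1/(1-v/\lambda_+^2)$:

```python
import sympy as sp

p = sp.symbols('p')
u, v = sp.Rational(3), sp.Rational(2)  # sample (hypothetical) field values

# Lax function of the example: L = p + u + v p^{-1}
L = p + u + v/p

# its roots lambda_± = (-u ± sqrt(u^2 - 4v))/2
lam_p = (-u + sp.sqrt(u**2 - 4*v))/2   # = -1 here
lam_m = (-u - sp.sqrt(u**2 - 4*v))/2   # = -2 here
assert L.subs(p, lam_p) == 0 and L.subs(p, lam_m) == 0

# coefficient of the principal part of L^{-1} at lambda_+:
# since L = (p - lam_p)(p - lam_m)/p, the residue is lam_p/(lam_p - lam_m)
res = sp.residue(1/L, p, lam_p)
assert sp.simplify(res - lam_p/(lam_p - lam_m)) == 0
assert sp.simplify(res - 1/(1 - v/lam_p**2)) == 0
```

The same check passes for $\lambda_-$ by symmetry.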
\begin{example} Two-field system: $k=2$, $r=2$.
We will consider the Lax function of the form
\begin{equation*}
L = v p + u + p^{-1}
\end{equation*}
i.e. function \eqref{l1} transformed by $p\mapsto p^{-1}$. By
Proposition \ref{pp2} the hierarchy for $\infty$ is given by
hierarchy \eqref{l2} for $k=1$, $r=0$ from above example. The
roots of $L$ are $\alpha_\pm = \tfrac{-u\pm \sqrt{u^2-4v}}{2v}$.
Thus, for
\begin{equation*}
(L^{-1})^{\alpha_\pm}_{<0} = \frac{1}{v-\frac{1}{\alpha_\pm^2}}
(p-\alpha_\pm)^{-1}
\end{equation*}
one finds the following equations
\begin{align*}
&L_{\xi^\pm_{-1}} = -\pobr{\bra{L^{-1}}^{\alpha_\pm}_{< 0},L}_2
\Longleftrightarrow \\
&\pmatrx{u\\ v}_{\xi^\pm_{-1}} = \frac{\pm 1}{(u^2-4v)^\frac{3}{2}} \pmatrx{-uu_x+2v_x\\
2u_xv-uv_x} = \pi_0 dH^{\alpha_\pm}_{-1} = \pi^{red}_1
dH^{\alpha_\pm}_{-2}.
\end{align*}
This system by Proposition \ref{pp2} commutes with \eqref{l2} for
$r=0$. Thus, the Poisson tensors are given by \eqref{l3} with
Hamiltonians
\begin{align*}
H_{-2}^{\alpha_\pm} = \int_\Omega \frac{\pm u}{2\sqrt{u^2-4v}}\
dx\qquad H_{-1}^{\alpha_\pm} = \mp \int_\Omega
\frac{1}{2}\bra{u+\sqrt{u^2-4v}}\ dx.
\end{align*}
\end{example}
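The analogous cross-check for this example (a SymPy sketch with sample field values, added for illustration) confirms that $\alpha_\pm$ are the roots of $L$ and that the principal-part coefficient of $L^{-1}$ at $\alpha_+$ equals $\alpha_+/(v(\alpha_+-\alpha_-))$, i.e. $1/(v-1/\alpha_+^2)$:

```python
import sympy as sp

p = sp.symbols('p')
u, v = sp.Rational(3), sp.Rational(2)  # sample (hypothetical) field values

# Lax function of the example: L = v p + u + p^{-1}
L = v*p + u + 1/p

# its roots alpha_± = (-u ± sqrt(u^2 - 4v))/(2v)
al_p = (-u + sp.sqrt(u**2 - 4*v))/(2*v)   # = -1/2 here
al_m = (-u - sp.sqrt(u**2 - 4*v))/(2*v)   # = -1 here
assert L.subs(p, al_p) == 0 and L.subs(p, al_m) == 0

# residue of L^{-1} at alpha_+; L = v(p - al_p)(p - al_m)/p
res = sp.residue(1/L, p, al_p)
assert sp.simplify(res - al_p/(v*(al_p - al_m))) == 0
assert sp.simplify(res - 1/(v - 1/al_p**2)) == 0
```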
\subsection{Polynomial Lax functions in $(p-v)$ and $(p-v)^{-1}$}
Let us consider Lax functions which are formal Laurent series
around $v$, with a finite number of dynamical coefficients, of the
form
\begin{equation}\label{polv}
L = u_N (p-v)^N + u_{N-1}(p-v)^{N-1} + ... + u_{1-m}(p-v)^{1-m} +
u_{-m}(p-v)^{-m}\qquad m\neq 0.
\end{equation}
Lax functions \eqref{polv} have poles at $\infty$ and $v$; the powers
calculated near these points generate, if allowed by $k$ and $r$, the
respective Lax hierarchies. Additional powers, with related
hierarchies, can be constructed around the roots of $L$.
\begin{proposition}\label{apolv}
A Lax function of the form \eqref{polv} is an appropriate one in the
following cases:
\begin{enumerate}
\item $k=r=0$: $u_N=1$, $u_{N-1} = N v$;
\item $k=1$, $r\in \bb{Z}$: $N\neq 0$, $u_N=1$, $\kr{L}{p=0}=0$ for $r=1$;
\item $k=2$, $r\in \bb{Z}$: $N\neq 0$ when $r=1$, $\kr{L}{p=0}=0$, $\kr{\tfrac{d}{dp}L}{p=0}=1$;
\item $k=3$, $r=2$: $N=0$, $\kr{L}{p=0}=0$,
$\kr{\tfrac{d}{dp}L}{p=0}=1$, $\kr{\tfrac{d^2}{dp^2}L}{p=0}=0$.
\end{enumerate}
Moreover, for the same $k$ and $r$, the respective Lax hierarchies
commute.
\end{proposition}
\begin{proof}
It is enough to consider the Lax hierarchy related to $\infty$.
Function \eqref{polv} will be an appropriate Lax function if the
left- and right-hand sides of the Lax hierarchy \eqref{lhi}
coincide and the number of independent equations is the same
as the number of dynamical coefficients in $L$. The Lax hierarchy
\eqref{lhi} can be written in two equivalent representations
\begin{equation*}
L_t = \pobr{A^\infty_{\geqslant k-r},L}_r = -\pobr{A^\infty_{<
k-r},L}_r.
\end{equation*}
So, we have to examine the expansions of this hierarchy near $\infty$
and $v$, as well as at $0$, since the factor $p^{r}$ occurs in the
Poisson bracket. It turns out that the first representation yields
direct access to the terms of lowest orders, whereas the second
representation yields information about the terms of highest orders.
Near $\infty$ we have
\begin{align*}
&L_t = (u_N)_t p^N + (u_{N-1}-N v)_t p^{N-1} + lower\ terms,\\
&L_t =-\pobr{A^\infty_{< k-r},L}_r = -\pobr{\alpha
p^{k-r-1}+l.t.,u_N p^N + l.t.}_r = (...) p^\alpha + l.t.\ ,
\end{align*}
where $\alpha = N+k-2$, with $N\neq 0$ required when $r=k-1$; and $\alpha =
0$ for $N=0$ and $r=k-1$. This imposes the constraints on the fields
$u_N$ and $u_{N-1}$ given in the Proposition. The expansion of
$A^\infty_{\geqslant k-r}$ near $v$ is of the form $A^\infty_{\geqslant k-r}
= higher\ terms + \gamma_1 (p-v) + \gamma_0$, as $A^\infty_{\geqslant
k-r}$ does not have a singularity at $v$. So, near $v$ we have
\begin{align*}
&L_t = higher\ terms + \bra{(u_{-m})_t+(m-1)u_{1-m}v_t} (p-v)^{-m} + m u_{-m} v_t
(p-v)^{-m-1},\\
&L_t = \pobr{A^\infty_{\geqslant k-r},L}_r = \pobr{h.t.+ \gamma_0 , h.t.
+ u_{-m}(p-v)^{-m}}_r = h.t. + (...)(p-v)^{-m-1}
\end{align*}
and the lowest orders of the left- and right-hand sides of \eqref{lhi}
are always the same. The expansion of $A^\infty_{\geqslant k-r}$ near
$0$ is of the form $A^\infty_{\geqslant k-r} = higher\ terms + \gamma
p^{k-r}$. So we have
\begin{align*}
&L_t = higher\ terms + \tfrac{1}{2}
\bra{\kr{\tfrac{d^2}{dp^2}L}{p=0}}_t p^2
+ \bra{\kr{\tfrac{d}{dp}L}{p=0}}_t p + \bra{\kr{L}{p=0}}_t,\\
&L_t = \pobr{A^\infty_{\geqslant k-r},L}_r = \pobr{h.t.+ \gamma p^{k-r}
, h.t. + \kr{\tfrac{d}{dp}L}{p=0} p + \kr{L}{p=0}}_r = h.t. +
(...)p^\alpha.
\end{align*}
For $k=r=0$ we have $\alpha=0$ and there is no need for additional
constraints. For $k=1$, if $r\neq 1$: $\alpha = 0$ and both sides
have the same order in the expansion at $0$. But for $k=r=1$ we have
$\alpha =1$. Hence, $(L|_{p=0})_t=0$ and we have to impose the
constraint $L|_{p=0}=0$. Then, both sides have the same form. For
$k=2$ and arbitrary $r$: $\alpha >0$ and the first constraint, of
the form $L|_{p=0}=0$, is needed. Taking this constraint into
consideration: $\alpha=2$ and it follows that
$(\tfrac{d}{dp}L|_{p=0})_t=0$. Hence, both sides will agree if we
impose the additional constraint $\tfrac{d}{dp}L|_{p=0} =1$. For
$k=3$ the reasoning is similar to the case $k=2$, but one more
constraint, of the form $\tfrac{d^2}{dp^2}L|_{p=0}=0$, is
needed. The commutation of the Lax hierarchies follows from
Corollary~\ref{cor}.
\end{proof}
\begin{proposition}
The case $k=r=0$ of Proposition \ref{apolv}, under the transformation
$p\mapsto p-v$, turns into the case $r=0$,~$k=1$ of Proposition
\ref{apol0}. Thus both Lax hierarchies are equivalent.
\end{proposition}
\begin{proof}
Consider the transformation $p'=p-v$, $x'=x$, $t'=t$, where
$t=t_q$~or~$\xi_q$. Then, $\partial_{p} = \partial_{p'}$, $\partial_x = \partial_{x'}-
v_x \partial_{p'}$ and $\partial_t = \partial_{t'} - v_t \partial_{p'}$. The points at
$\infty$ and $v$ transform into points at $\infty$ and $0$,
respectively and the Poisson bracket \eqref{pb} for $r=0$ is
preserved:
\begin{equation*}
\pobr{\cdot, \cdot}_0 = \partial_p \wedge \partial_x = \partial_{p'} \wedge
(\partial_{x'} - v_{x'} \partial_{p'}) = \partial_{p'} \wedge \partial_{x'} =
\pobr{\cdot, \cdot}_0' .
\end{equation*}
Let $L$ be the Lax function of the form \eqref{polv} from
Proposition \ref{apolv} for $r=k=0$. Then, under the above
transformation, $L'=L$ is a Lax function of the form \eqref{pol0}
from Proposition \ref{apol0} for $r=0$,~$k=1$. For a meromorphic
function $A\in \mathcal{A}$, let $\bra{A}^\lambda_0$ denote the zero-order
term of Laurent series at $\lambda$. From \eqref{lhi} and
\eqref{lhv} it follows that
\begin{equation*}
v_{t_q} = \bra{\bra{L^{\tfrac{q}{N}}}^\infty_0}_x\qquad v_{\xi_q}
= \bra{\bra{L^{\tfrac{q}{m}}}^v_0}_x,
\end{equation*}
respectively. Thus the left- and right-hand side of \eqref{lhv}
are equal
\begin{align*}
&L_{\xi_q} = {L'}_{\xi_q'} - v_{\xi_q} L'_{p'} = {L'}_{\xi_q'} -
\bra{\bra{L^{\tfrac{q}{m}}}^v_0}_x L'_{p'} = {L'}_{\xi_q'} +
\pobr{\bra{L'^{\tfrac{q}{m}}}^0_0,L'}_0',\\
&L_{\xi_q} = -\pobr{\bra{L^{\tfrac{q}{m}}}^v_{<0},L}_0 = -
\pobr{\bra{L'^{\tfrac{q}{m}}}^0_{< 0},L'}_0'.
\end{align*}
Hence,
\begin{equation*}
L'_{\xi_q} = - \pobr{\bra{L'^{\tfrac{q}{m}}}^0_{< 1},L'}_0'.
\end{equation*}
Similar calculations are valid at $\infty$.
\end{proof}
Notice that for the case $k=r=0$ of Proposition \ref{apolv} one
is able to construct Lax hierarchies related to the roots of $L$,
which is not possible for the case $k=1,r=0$ of Proposition
\ref{apol0}. In this sense, the first case is more general.
\begin{proposition}\label{pp1}
Under the transformation $p'=p^{-1}$, the following equalities
between some cases from Proposition \ref{apolv} hold:
\begin{itemize}
\item the Lax hierarchy related to $0$ for $k=3,r=2$ is equivalent
to the Lax hierarchy related to $\infty$ for $k=r=0$ with $N=-1$;
\item the Lax hierarchy related to $0$ for $k=2,r\neq 1$ with
$N=0$ is equivalent to the Lax hierarchy related to $\infty$ for
$k=1,r\neq 1$ with $N=-1$; \item Lax hierarchies related to
$\infty$ and $0$ for $k=2,r=1$ with $N=-1$ are equivalent to Lax
hierarchies related to $0$ and $\infty$ for $k=1,r=1$ with $N=-1$,
$L|_{p=0}=0$, respectively.
\end{itemize}
\end{proposition}
\begin{proof}
The appropriate Lax function from Proposition \ref{apolv} for
$k=3$, $r=2$ has the form
\begin{equation*}
L = u_0 + u_{-1} (p-v)^{-1} + ... + u_{-m} (p-v)^{-m}
\end{equation*}
where $L|_{p=0}=0$, $\tfrac{d}{dp}L|_{p=0}=1$ and
$\tfrac{d^2}{dp^2}L|_{p=0}=0$. Taking into consideration the above
constraints, the expansion of $L$ around $0$ is $L = ... + (...)p^3 +
p$. Under the transformation $p'=p^{-1}$ we have that
\begin{equation*}
(p-v)^{-1} = (p'^{-1}-v)^{-1} = -v'-v'^2(p'-v')^{-1}
\end{equation*}
where $v'=v^{-1}$. Thus $L$ transforms into
\begin{equation*}
L' = u'_0 + u'_{-1} (p'-v')^{-1} + ... + u'_{-m} (p'-v')^{-m}.
\end{equation*}
From the expansion of $L$ around $0$ it follows that the expansion of
$L'$ near $\infty$ is $L' = p'^{-1} + (...)p'^{-3} + ...\ $.
Hence, $u'_0=0$, $u'_{-1}=1$, $u'_{-2}=-v'$, and the Lax function $L'$ is an
appropriate one for $k=r=0$. Analogously for the next two relations in
the proposition. The rest holds by Proposition \ref{0i}.
\end{proof}
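The substitution identity $(p-v)^{-1} = -v'-v'^2(p'-v')^{-1}$, with $p=p'^{-1}$ and $v'=v^{-1}$, used in the proof above can be checked symbolically; the following SymPy sketch (added for illustration) confirms it:

```python
import sympy as sp

pp1, v = sp.symbols('pprime v', nonzero=True)  # pp1 stands for p'
vp = 1/v                                        # v' = v^{-1}

lhs = 1/(1/pp1 - v)            # (p - v)^{-1} with p = p'^{-1}
rhs = -vp - vp**2/(pp1 - vp)   # -v' - v'^2 (p' - v')^{-1}
assert sp.simplify(lhs - rhs) == 0
```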
Now, let us pass to the Hamiltonian formulation of the Lax hierarchies
related to the appropriate Lax functions from Proposition
\ref{apolv}. In general, the relevant cases are $k=0,1,2$, and
further we will consider only these. The differential at $\infty$
of a functional $H$ for the Lax function of the general form
\eqref{polv} is given by
\begin{equation}\label{dHv}
dH\equiv d_\infty H = p^r \bra{\frac{1}{mu_{-m}}\bra{\var{H}{v}+
\sum_{i=1-m}^Niu_i\var{H}{u_{i-1}}}(p-v)^m +
\sum_{i=1-m}^{N+1}\var{H}{u_{i-1}}(p-v)^{-i}}
\end{equation}
as
\begin{align*}
{\rm Tr}_\infty \bra{L_t dH} &= -\int_\Omega {\rm res}_\infty \bra{p^{-r}L_t dH} dx \overset{\text{by \eqref{rest}}}{=} \int_\Omega {\rm res}_v \bra{p^{-r}L_t dH} dx\\
&= \int_\Omega \bra{\sum_{i=-m}^N(u_i)_t\var{H}{u_i} + v_t\var{H}{v}} dx.
\end{align*}
For the Lax functions with constraints from Proposition
\ref{apolv} one has to modify the differentials \eqref{dHv} in an
appropriate way or construct them by \eqref{diff}, i.e. in the same
way as in the next subsection. One has to examine when a given Lax
function from the Proposition spans the proper subspace with respect
to the Poisson tensors. The procedure is rather technical and similar
to the proof of this Proposition; thus, we omit it and present
only the final results. The Lax functions from Proposition
\ref{apolv} for $k=0,1,2$ span the proper subspace w.r.t. the linear
Poisson tensor ($n=0$) if $N\geqslant 2r-2k+1$, $m\geqslant -1$ and $r\geqslant k$.
Then, it is given by \eqref{lin}. If this is not the case, the Dirac
reduction is required. The reduced linear Poisson tensor for
$N=-1$, $m\geqslant 1$ and $k=r=0,1,2$ is given by \eqref{lin_II}. These
Lax functions do not form a proper subspace w.r.t. the quadratic
Poisson tensor ($n=1$), and the Dirac reduction procedure is always
needed. For $k=r=0,1,2$ the reduced quadratic Poisson tensors have
the form \eqref{quad_I}.
\begin{example} Two-field system: $k=r=1$.
The Lax function, taking into consideration the appropriate
constraints, is given by
\begin{equation*}
L = (p-v) + u + v(u-v)(p-v)^{-1} = \frac{p(p+u-2v)}{p-v}.
\end{equation*}
For $\infty$ one finds $(L)^\infty_{\geqslant 0} = p+u-v$ and the
following equation
\begin{align*}
&L_{t_1} = \pobr{\bra{L}^\infty_{\geqslant 0},L}_1
\Longleftrightarrow\\
&\pmatrx{u\\ v}_{t_1} = \pmatrx{2u_xv+uv_x-2vv_x\\ u_xv} = \pi_0
dH^\infty_1 = \pi^{red}_1 dH^\infty_0 .
\end{align*}
The Lax hierarchy related to $v$ is the same, since $L =
(L)^\infty_{\geqslant 0} + (L)^v_{< 0}$. The Lax function has two roots,
$0$ and $2v-u$. Then, for $(L^{-1})^0_{< 0} =
\frac{v}{2v-u}p^{-1}$ we have
\begin{align*}
L_{\tau_{-1}} = -\pobr{\bra{L^{-1}}^0_{< 0},L}_1
\Longleftrightarrow
\pmatrx{u\\ v}_{\tau_{-1}} = \pmatrx{\frac{v_x}{2v-u}\\
\frac{2vv_x-u_xv}{(u-2v)^2}} = \pi_0 dH^0_{-1} = \pi^{red}_1
dH^0_{-2}.
\end{align*}
The Lax hierarchy related to the root $2v-u$ is, up to the sign, the
same as above, since $L^{-1} = (L^{-1})^0_{< 0} +
(L^{-1})^{2v-u}_{< 0}$.
The general form of the differential of a given functional $H$,
according to \eqref{dHv}, is
\begin{align*}
dH =
\frac{(2-u)\var{H}{u}+v\var{H}{v}}{(u-v)v^2}p(p-v)+\frac{1}{v}\var{H}{v}p.
\end{align*}
The Lax function defines the proper subspace w.r.t. the linear
Poisson tensor \eqref{lin}. The reduced quadratic Poisson tensor
is given by \eqref{quad_II}. Then,
\begin{align*}
\pi_0 = \pmatrx{\partial v+v\partial & \partial v\\
v\partial & 0 }\qquad \pi^{red}_1 = \pmatrx{2\partial uv + 2uv\partial & u\partial v+2v\partial v\\
v\partial u +2v\partial v & 2v\partial v} .
\end{align*}
The respective Hamiltonians are
\begin{align*}
&H^\infty_0 = \int_\Omega (u-v)\ dx\qquad H^\infty_1 =
\frac{1}{2}\int_\Omega
(u^2-v^2)\ dx\\
&H^0_{-2} = \int_\Omega \frac{v-u}{(u-2v)^2}\ dx\qquad H^0_{-1} =
\int_\Omega \ln \bra{\frac{u}{v}-2}\ dx .
\end{align*}
\end{example}
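The algebra of this example can be cross-checked symbolically. The SymPy sketch below (added for illustration) verifies the closed rational form of $L$, the root $2v-u$, and the principal-part coefficient $\frac{v}{2v-u}$ of $L^{-1}$ at the root $0$:

```python
import sympy as sp

p, u, v = sp.symbols('p u v')

L = (p - v) + u + v*(u - v)/(p - v)
L_rat = p*(p + u - 2*v)/(p - v)
assert sp.simplify(L - L_rat) == 0               # L = p(p+u-2v)/(p-v)
assert sp.simplify(L_rat.subs(p, 2*v - u)) == 0  # the second root is 2v - u

# principal part of L^{-1} at the simple root p = 0:
# the residue there is lim_{p->0} p/L
res0 = sp.cancel(p/L_rat).subs(p, 0)
assert sp.simplify(res0 - v/(2*v - u)) == 0
```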
\subsection{Rational Lax functions}
Let us consider the general form of meromorphic Lax function given
by
\begin{equation}\label{rat}
L = \sum_{k=-m_0}^{N}u_k p^k + \sum_{i=1}^\alpha
\sum_{k_i=1}^{m_i} a_{i,k_i}(p-v_i)^{-k_i}
\end{equation}
where $u_k$, $a_{i,k_i}$ and $v_i$ are dynamical fields. From this
class of functions we exclude those which have been examined
earlier, i.e. \eqref{pol0} and \eqref{polv}. A function
\eqref{rat} in general has a pole at $\infty$ of order $N$, a pole
at $0$ of order $m_0$ and $\alpha$ dynamical poles at $v_i$ of
order $m_i$. Then, one can construct positive powers of the Laurent
series at the poles of $L$. Negative powers can be constructed as
expansions at the roots of $L$. For appropriate $r$ and $k$, these
powers generate the Lax hierarchies \eqreff{lhi}{lhv}.
\begin{proposition}\label{arat}
Function of the form \eqref{rat} is an appropriate one in the
following cases:
\begin{enumerate}
\item $k=r=0$:
\begin{itemize}
\item $N\geqslant 1$, $u_N=1$, $u_{N-1}=0$, $m_0=0$,
\item $\forall_k\ u_k=0$, $\sum_{i=1}^\alpha a_{i,1}=1$,
$\sum_{i=1}^\alpha \bra{a_{i,1}v_i + a_{i,2}}=0$;
\end{itemize}
\item $k=1$, $r\in \bb{Z}$:
\begin{itemize}
\item $N\geqslant 1$, $u_N=1$, $m_0\geqslant 1$,
\item $N=-1$, $u_{-1} + \sum_{i=1}^\alpha a_{i,1}=1$, $m_0\geqslant 1$,
\item $N\geqslant 1$, $u_N=1$, $m_0=0$, $L|_{p=0}=0$ for $r=1$,
\item $\forall_k\ u_k=0$, $\sum_{i=1}^\alpha a_{i,1}=1$, $L|_{p=0}=0$ for $r=1$;
\end{itemize}
\item $k=2$, $r\in \bb{Z}$:
\begin{itemize}
\item $N\geqslant 1$, $m_0\geqslant 1$, $u_{-m_0}=1$,
\item $N\geqslant 1$, $m_0=0$, $L|_{p=0}=0$,
$\tfrac{d}{dp}L|_{p=0}=1$,
\item $N=0$, $u_0=0$ for $r=1$, $m_0\geqslant 1$, $u_{-m_0}=1$,
\item $\forall_{k\neq 0}\ u_k=0$, $u_0=0$ for $r=1$,
$L|_{p=0}=0$, $\tfrac{d}{dp}L|_{p=0}=1$;
\end{itemize}
\item $k=3$ and $r=2$
\begin{itemize}
\item $N=0$, $m_0\geqslant 1$, $u_{1-m_0}=0$, $u_{-m_0}=1$,
\item $\forall_{k\neq 0}\ u_k=0$, $\kr{L}{p=0}=0$,
$\kr{\tfrac{d}{dp}L}{p=0}=1$, $\kr{\tfrac{d^2}{dp^2}L}{p=0}=0$.
\end{itemize}
\end{enumerate}
Moreover, the Lax hierarchies calculated at different points for the
same $r$ and $k$ mutually commute. The Lax functions of the form
\eqref{pol0} and \eqref{polv} are excluded here.
\end{proposition}
The proof is similar to that of Proposition \ref{apolv}. The
evolution of the general form of the rational function \eqref{rat} is
given by
\begin{equation*}
L_t = \sum_{k=-m_0}^{N}(u_k)_t p^k + \sum_{i=1}^\alpha
\sum_{k_i=1}^{m_i}\bra{ (a_{i,k_i})_t + (k_i - 1) a_{i,k_i-1}
(v_i)_t} (p-v_i)^{-k_i} + \sum_{i=1}^\alpha m_i a_{i,m_i} (v_i)_t
(p-v_i)^{-m_i-1} .
\end{equation*}
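As a sanity check of this evolution formula, the following SymPy sketch (added for illustration, for a single dynamical pole of order $m_1=2$ and no polynomial part) compares the direct $t$-derivative of $L$ with the coefficients stated above:

```python
import sympy as sp

t, p = sp.symbols('t p')
a1, a2, v1 = [sp.Function(n)(t) for n in ('a1', 'a2', 'v1')]

# one dynamical pole at v1 of order m_1 = 2
L = a1/(p - v1) + a2/(p - v1)**2
Lt = sp.diff(L, t)

# coefficients predicted by the formula:
# (a1)_t at (p-v1)^{-1}, (a2)_t + a1 (v1)_t at (p-v1)^{-2},
# and 2 a2 (v1)_t at (p-v1)^{-3}
claim = (sp.diff(a1, t)/(p - v1)
         + (sp.diff(a2, t) + a1*sp.diff(v1, t))/(p - v1)**2
         + 2*a2*sp.diff(v1, t)/(p - v1)**3)
assert sp.simplify(Lt - claim) == 0
```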
The Lax hierarchies have to be examined near $\infty$, $0$ and the
dynamical poles. So, the meromorphic function of the form
\eqref{rat} will be an appropriate Lax function if:
\begin{itemize}
\item the right-hand sides of the Lax hierarchy considered
and the time derivative $L_t$ have the same
order at all the above poles,
\item the number of independent equations, resulting from the Lax
hierarchies, is the same as the number of dynamical coefficients
included in $L$.
\end{itemize}
The first condition implies all the constraints considered in the
Proposition. To see this, an analysis like that in the proof of
Proposition \ref{apolv} is needed. So, by the first condition the
right-hand sides of the considered Lax hierarchies for appropriate
$r$ and $k$ can be uniquely presented in the form of $L_t$, i.e. the
left-hand sides. Hence, the second condition immediately follows from
the first one.
The simplest way of deriving the dispersionless systems related to
given meromorphic Lax functions is to transform the Lax hierarchies
into purely polynomial form in $p$ through the removal of finite
singularities. This can be done by multiplying both sides of the
Lax hierarchies by a suitable factor.
\begin{proposition}\label{pp}
Under the transformation $p'=p^{-1}$, the Lax hierarchies from Proposition
\ref{arat} defined at $\infty$ and $0$, for appropriate $r$ and
$k$, transform into Lax hierarchies defined at $0$ and $\infty$
for $r'=2-r$ and $k'=3-k$, respectively.
\end{proposition}
See the proof of Proposition \ref{pp1} and the comment after
Proposition \ref{pp2}.
Hence, the relevant cases from Proposition \ref{arat} are:
\begin{itemize}
\item $k=0$, $r=0$;
\item $k=1$, $r\in \bb{Z}\backslash \{0\}$;
\item $k=2$, $r=2$.
\end{itemize}
The remaining cases can be obtained from the above cases by the
transformation $p'=p^{-1}$ according to Proposition \ref{pp}.
Once again we will consider Poisson tensors defined at $\infty$.
This time we are not going to present the explicit form of the
differentials $d_\infty H$ for the general meromorphic Lax
function, but we will explain how to construct them. We postulate
that
\begin{equation}\label{diff}
dH\equiv d_\infty H = \sum_{i=N_\infty-\beta+1}^{N_\infty} \gamma_i\ p^{r-i-1},
\end{equation}
where $\beta$ is the number of dynamical coefficients in $L$ and
$N_\infty$ is the highest order of the Laurent series of $L_t$ at
$\infty$. The form \eqref{diff} allows us to solve \eqref{eucl}
($\lambda=\infty$) for the functions $\gamma_i$ in terms of the
dynamical coefficients of $L$ and the variational derivatives of $H$,
such that the required Euclidean form is obtained. We will consider
only the relevant cases of meromorphic Lax functions from Proposition
\ref{arat}. The verification that they span the proper subspaces with
respect to the Poisson tensors is similar to the proof of this
Proposition. These Lax functions span the proper subspace w.r.t.
the linear Poisson tensor \eqref{lin} for $k=0$ if $N\geqslant 1$, and
for $k=1,2$ if $N\geqslant 2r-2k+1\geqslant -m_0$. If this is not the case, the
Dirac reduction is required. The reduced linear tensors for $k=r=0,1,2$
and $N=-1$ ($N$ being the highest order of the Laurent series of $L$ at
$\infty$) are given by
\begin{align}\label{lin_II}
\pi^0_{\infty}dH &= \pobr{\bra{dH}^\infty_{\geqslant 0},L}_r -
\bra{\pobr{dH,L}_r}^\infty_{\geqslant r} +
\pobr{\gamma_1p+\gamma_0,L}_r\\
\notag &\gamma_1 = \partial_x^{-1}{\rm res}_\infty \pobr{dH,L}_0\\
\notag &\gamma_0 = \partial_x^{-1}{\rm res}_\infty \pobr{dH,L}_1 -
\partial_x^{-1}\bra{\gamma_1\bra{(L)_{-2}^\infty}_x+2(\gamma_1)_x(L)_{-2}^\infty},
\end{align}
where $(L)_{-2}^\infty$ is the coefficient of the term of
order $-2$ in the Laurent series at $\infty$. For $k=r=0$ we have
$(L)_{-2}^\infty=0$ and \eqref{lin_II} simplifies. Notice that
for $k=r=0$ the reduced Poisson tensor \eqref{lin_II} is always
local, but for the remaining cases it is in general nonlocal. In the
case of the quadratic Poisson tensors, the considered Lax functions
do not span proper subspaces and the Dirac reduction is needed. The
reduced quadratic Poisson tensors for $k=r=0,1,2$ are given by
\eqref{quad_I}, and for $k=1, r=0$ and $k=2, r=1$ by
\eqref{quad_II} where $m=m_0$.
\begin{example} The two-field system: $k=r=0$.
Let the Lax function have the form
\begin{align*}
L = u(p-v)^{-1} + (1-u)\bra{p-\frac{uv}{u-1}}^{-1}.
\end{align*}
The roots of $L$ are $\infty$ and $\alpha = \frac{2uv-v}{u-1}$.
Then, for $\infty$ we have
\begin{align*}
L_{t_2} = \pobr{\bra{L^{-2}}^\infty_{\geqslant 0},L}_0
\Longleftrightarrow \pmatrx{u\\ v}_{t_2} = \pmatrx{2uv\\
\frac{(3u-1)v^2}{u-1}}_x = \pi^{red}_0 dH^\infty_2 = \pi^{red}_1
dH^\infty_1,
\end{align*}
where $(L^{-2})^\infty_{\geqslant 0} = p^2+\frac{2uv^2}{u-1}$. The
hierarchy for $\alpha$ is the same, since $(L^q)^\infty_{\geqslant 0} = L^q -
(L^q)^\alpha_{< 0}$. The function $L$ has poles at $v$ and
$\lambda = \frac{uv}{u-1}$. At the point $v$ one finds the
following system
\begin{align*}
L_{\xi_1} = -\pobr{\bra{L}^v_{< 0},L}_0 \Longleftrightarrow
\pmatrx{u\\ v}_{\xi_1} = \pmatrx{\frac{u(u-1)^3}{v^2}\\
\frac{(u-1)^2}{v}}_x = \pi^{red}_0 dH^v_1 = \pi^{red}_1 dH^v_0,
\end{align*}
where $(L)^v_{< 0} = u(p-v)^{-1}$. The Lax function is invariant
with respect to the transformation $u\mapsto 1-u$, $v\mapsto
\frac{uv}{u-1}$. Therefore, the Lax hierarchy related to $\lambda$
can be obtained through this transformation.
In this case, the differential of a given functional, calculated by
\eqref{diff}, is
\begin{align*}
dH =
\bra{\frac{2(u-1)^3}{v^3}\var{H}{u}+\frac{(u-1)^2}{uv^2}\var{H}{v}}p^3
+
\bra{\frac{3(u-1)^2(2u-1)}{v^2}\var{H}{u}+\frac{(u-1)(3u-1)}{uv}\var{H}{v}}p^2
.
\end{align*}
Then, from \eqref{lin_II} and \eqref{quad_I} we find the following
Poisson tensors
\begin{align*}
\pi^{red}_0 = \pmatrx{0 & \partial (1-u)\\ (1-u)\partial & -\partial
v-v\partial}\qquad \pi^{red}_1 = \pmatrx{\partial
\frac{u}{v^2}(u-1)^3+\frac{u}{v^2}(u-1)^3\partial &
\frac{(u-1)^2}{v}\partial\\ \partial \frac{(u-1)^2}{v} & 0},
\end{align*}
respectively. The Hamiltonians are
\begin{align*}
H^\infty_1 = \int_\Omega \frac{uv}{1-u}\ dx\qquad H^\infty_2 =
\int_\Omega \frac{uv^2}{1-u}\ dx\qquad H^v_0 = \int_\Omega v\
dx\qquad H^v_1 = \int_\Omega \frac{u(u-1)^2}{v}\ dx .
\end{align*}
\end{example}
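The projections quoted in this example can be cross-checked symbolically. The SymPy sketch below (added for illustration) verifies the root $\alpha$ and the polynomial part $(L^{-2})^\infty_{\geqslant 0}$:

```python
import sympy as sp

p, q, u, v = sp.symbols('p q u v')

w = u*v/(u - 1)                     # second pole of L
L = u/(p - v) + (1 - u)/(p - w)

alpha = (2*u*v - v)/(u - 1)         # the finite root of L
assert sp.simplify(L.subs(p, alpha)) == 0

# polynomial part of L^{-2} at infinity:
# expand in q = 1/p and keep the orders >= 0 in p
ser = sp.series((1/L**2).subs(p, 1/q), q, 0, 1).removeO()
poly_part = sp.simplify(ser.subs(q, 1/p))
assert sp.simplify(poly_part - (p**2 + 2*u*v**2/(u - 1))) == 0
```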
\begin{example} The four-field dispersionless system: $k=r=0$.
For the Lax function of the form
\begin{align*}
L = p + a(p-v)^{-1} + b(p-w)^{-1}
\end{align*}
near $\infty$ one finds
\begin{align*}
L_{t_2} = \pobr{\bra{L^{2}}^\infty_{\geqslant 0},L}_0
\Longleftrightarrow
\pmatrx{a\\ b\\ v\\ w}_{t_2} = \pmatrx{2av\\ 2bw\\ 2a+2b+v^2\\
2a+2b+w^2}_x = \pi_0 dH^\infty_2 = \pi^{red}_1 dH^\infty_1,
\end{align*}
where $(L^2)^\infty_{\geqslant 0} = p^2 +2a+2b$. Near $v$ we have
\begin{align*}
L_{\xi_1} = -\pobr{\bra{L}^v_{< 0},L}_0 \Longleftrightarrow
\pmatrx{a\\ b\\ v\\ w}_{\xi_1} =
\pmatrx{a - \frac{ab}{(v-w)^2}\\ \frac{ab}{(v-w)^2}\\
v + \frac{b}{v-w}\\ \frac{a}{v-w}}_x = \pi_0 dH^v_1 = \pi^{red}_1 dH^v_0,
\end{align*}
where $(L)^v_{< 0} = a(p-v)^{-1}$. The three roots of $L$ are very
complicated, so we are not going to calculate the
respective equations.
The differential of a functional $H$ is
\begin{align*}
dH = &
\bra{\frac{2}{v-w}\var{H}{b}+\frac{1}{b}\var{H}{w}}\frac{(p-v)^2(p-w)}{(v-w)^2}
+\bra{\frac{2}{w-v}\var{H}{a}+\frac{1}{a}\var{H}{v}}\frac{(p-v)(p-w)^2}{(w-v)^2}\\
&+\frac{1}{(v-w)^2}\var{H}{b}(p-v)^2+\frac{1}{(w-v)^2}\var{H}{a}(p-w)^2.
\end{align*}
Then, from \eqref{lin} and \eqref{quad_I} one finds the linear
\begin{align*}
\pi_0 = \pmatrx{0 & 0 & \partial & 0\\
0 & 0 & 0 & \partial\\ \partial & 0 & 0 & 0\\
0 & \partial & 0 & 0}
\end{align*}
and quadratic Poisson tensors
\begin{align*}
\pi^{red}_1 = \pmatrx{\partial a+a\partial -\partial
\frac{ab}{(v-w)^2}-\frac{ab}{(v-w)^2}\partial & \partial
\frac{ab}{(v-w)^2}+\frac{ab}{(v-w)^2}\partial & (v+\frac{b}{v-w})\partial &
\frac{a}{v-w}\partial\\
\partial \frac{ab}{(v-w)^2}+\frac{ab}{(v-w)^2}\partial & \partial b+b\partial -\partial
\frac{ab}{(v-w)^2}-\frac{ab}{(v-w)^2}\partial & -\frac{b}{v-w}\partial &
(w-\frac{a}{v-w})\partial\\
\partial(v+\frac{b}{v-w}) & -\partial \frac{b}{v-w} & 2\partial & \partial\\
\partial \frac{a}{v-w}& \partial (w-\frac{a}{v-w}) & \partial & 2\partial},
\end{align*}
respectively. The Hamiltonians are
\begin{align*}
&H^\infty_1 = \int_\Omega (av+bw)\ dx\qquad H^\infty_2 =
\int_\Omega \bra{(a+b)^2+av^2+bw^2}\ dx\\
&H^v_0 = \int_\Omega a\ dx\qquad H^v_1 = \int_\Omega
\bra{av+\frac{ab}{v-w}}\ dx .
\end{align*}
\end{example}
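The projection $(L^2)^\infty_{\geqslant 0}$ and the principal part of $L$ at $v$ used in this example can be cross-checked with SymPy (a sketch added for illustration):

```python
import sympy as sp

p, q, a, b, v, w = sp.symbols('p q a b v w')

L = p + a/(p - v) + b/(p - w)

# polynomial part of L^2 at infinity: expand in q = 1/p, keep orders >= 0
ser = sp.series((L**2).subs(p, 1/q), q, 0, 1).removeO()
poly_part = sp.simplify(ser.subs(q, 1/p))
assert sp.simplify(poly_part - (p**2 + 2*a + 2*b)) == 0

# principal part of L at v: the residue there is just a
assert sp.simplify(sp.cancel((p - v)*L).subs(p, v) - a) == 0
```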
\begin{example} Four-field system: $k=1$, $r\in \bb{Z}$.
\label{ex}
Let us consider a Lax function of the form
\begin{equation}\label{a1}
L= p+u+vp^{-1}+w(p-s)^{-1}.
\end{equation}
It has poles at $\infty$, $0$ and $s$. The equations related to
$\infty$, for $(L^{2-r})^\infty_{\geqslant 1-r} = p^{2-r}+(2-r)up^{1-r}$,
are
\begin{align*}
&L_{t_{2-r}} = \pobr{\bra{L^{2-r}}^\infty_{\geqslant 1-r},L}_r \Longleftrightarrow\\
&\pmatrx{u\\ v\\ w\\ s}_{t_{2-r}} = (2-r)\pmatrx{(1-r)uu_x+v_x+w_x\\ u_xv+(1-r)uv_x\\
u_xw+(1-r)uw_x+(ws)_x\\ u_xs+(1-r)us_x+ss_x} =
\pi_0dH^\infty_{2-r} = \pi^{red}_1 dH^\infty_{1-r} .
\end{align*}
The first equations from Lax hierarchies related to $0$ for $r\neq
0$ are
\begin{align*}
L_{\tau_{r}} = -\pobr{\bra{L^{r}}^0_{< 1-r},L}_r
\Longleftrightarrow
\pmatrx{u\\ v\\ w\\ s}_{\tau_{r}} = rv^r \pmatrx{\ln v\\ u-\frac{w}{s}\\
\frac{w}{s}\\ \ln \frac{s}{v}}_x = \pi_0dH^0_{r} =
\pi^{red}_1dH^0_{r-1},
\end{align*}
where $(L^r)^0_{< 1-r} = v^r p^{-r}$. But for $r=0$ we have
\begin{align*}
L_{\tau_1} = -\pobr{\bra{L}^0_{< 1},L}_0 \Longleftrightarrow
\pmatrx{u\\ v\\ w\\ s}_{\tau_1} = \pmatrx{u-\frac{w}{s}\\ v-\frac{vw}{s^2}\\
\frac{vw}{s^2}\\ \frac{w-v}{s}-u}_x = \pi_0dH^0_1 =
\pi^{red}_1dH^0_0,
\end{align*}
where $(L)^0_{< 1} = u-\frac{w}{s}+ v p^{-1}$. For $r=1$ and
$(L)^s_{< 0} = w(p-s)^{-1}$ one finds
\begin{align*}
L_{\xi_1} = -\pobr{\bra{L}^s_{< 0},L}_1 \Longleftrightarrow
\pmatrx{u\\ v\\ w\\ s}_{\xi_1} = \pmatrx{w_x\\ v\bra{\frac{w}{s}}_x\\
u_xw+(ws)_x-v\bra{\frac{w}{s}}_x\\ u_xs+ss_x+s\bra{\frac{v}{s}}_x}
= \pi_0dH^s_1 = \pi^{red}_1dH^s_0 .
\end{align*}
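The Laurent projections at $0$ and at the dynamical pole $s$ used in this example can be cross-checked with SymPy (a sketch added for illustration):

```python
import sympy as sp

p, u, v, w, s = sp.symbols('p u v w s')

L = p + u + v/p + w/(p - s)

# projection at 0 onto orders < 1: u - w/s + v p^{-1}
proj0 = sp.series(L, p, 0, 1).removeO()
assert sp.simplify(proj0 - (u - w/s + v/p)) == 0

# principal part of L at s: the residue there is w
assert sp.simplify(sp.cancel((p - s)*L).subs(p, s) - w) == 0
```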
Once again we are not going to consider Lax hierarchies related to
roots of $L$.
The differential of a functional $H$ is
\begin{align*}
dH = \bra{\var{H}{s} +
\frac{1}{s^2}\bra{\var{H}{v}-\var{H}{w}}}p^{r+2} -
\bra{\frac{2}{s}\bra{\var{H}{v}-\var{H}{w}}+\frac{1}{w}\var{H}{s}}p^{r+1}
+\var{H}{v}p^r + \var{H}{u}p^{r-1}.
\end{align*}
The Lax function \eqref{a1} spans the proper subspace w.r.t. the linear
Poisson tensor \eqref{lin} only for $r=0,1$. The reduced quadratic
tensors for $r=0,1$ are given by \eqref{quad_II} and
\eqref{quad_I}, respectively. Thus, for $r=0$:
\begin{equation}\label{p1}
\pi_0 = \pmatrx{0 & \partial & 0 & 0\\ \partial & 0 & 0 & -\partial\\ 0 & 0 & 0 &
\partial\\ 0 & -\partial & \partial & 0}
\end{equation}
and
\begin{equation}\label{p2}
\pi^{red}_1 = \pmatrx{2\partial & \partial(u-\frac{w}{s}) & \partial \frac{w}{s} & -\partial\\
(u-\frac{w}{s})\partial & \partial v(1-\frac{w}{s^2})-v(1-\frac{w}{s^2})\partial
& \partial \frac{vw}{s^2}+\frac{vw}{s^2}\partial & (\frac{w-v}{s}-u)\partial\\
\frac{w}{s}\partial & \partial \frac{vw}{s^2}+\frac{vw}{s^2}\partial
& \partial w(1-\frac{v}{s^2})-w(1-\frac{v}{s^2})\partial & (u+s+\frac{v-w}{s})\partial\\
-\partial & \partial (\frac{w-v}{s}-u) & \partial (u+s+\frac{v-w}{s}) & 2\partial} .
\end{equation}
The related Hamiltonians are
\begin{align*}
&H^\infty_1 = \int_\Omega \bra{uv+uw+ws}\ dx\qquad H^\infty_2 =
\int_\Omega \bra{u^2v+v^2+ws^2+2uws+u^2w+2vw+w^2}\
dx\\
&H^0_0 = \int_\Omega v\ dx\qquad H^0_1 = \int_\Omega
v\bra{u-\frac{w}{s}}\ dx .
\end{align*}
For $r=1$ we have:
\begin{equation*}
\pi_0 = \pmatrx{0 & \partial v & \partial w & \partial s\\ v\partial & 0 & 0 & 0\\
w\partial & 0 & s\partial w+w\partial s & s\partial s\\ s\partial & 0 & s\partial s & 0}
\end{equation*}
and
\begin{equation*}
\pi^{red}_1 = \pmatrx{\partial(v+w)+(v+w)\partial & u\partial v & 2\partial ws+ws\partial +u\partial w & (u+s)\partial s\\
v\partial u & 2v\partial v & 2v\partial w & v\partial s\\
\partial ws+2ws\partial +w\partial u & 2w\partial v & \pi_{ww} & (s^2+us+v+2w)\partial s\\
s\partial (u+s) & s\partial v & s\partial (s^2+us+v+2w) & 2s\partial s},
\end{equation*}
where $\pi_{ww} = \partial uws+uws\partial+w\partial (2s^2+w)+(2s^2+w)\partial w$. The
related Hamiltonians are
\begin{align*}
&H^\infty_0 = \int_\Omega u\ dx\qquad H^\infty_1 = \int_\Omega
\bra{\frac{1}{2}u^2+v+w}\ dx\\
&H^0_0 = \int_\Omega (u-\frac{w}{s})\ dx\qquad H^0_1 = \int_\Omega
\bra{\frac{1}{2}u^2+v-\frac{uw}{s}-\frac{vw}{s^2}+\frac{w^2}{2s^2}}\
dx\\
&H^s_0 = \int_\Omega \frac{w}{s}\ dx\qquad H^s_1 = \int_\Omega
\bra{w+\frac{uw}{s}+\frac{vw}{s^2}-\frac{w^2}{2s^2}}\ dx.
\end{align*}
\end{example}
\pagebreak
\begin{example} Four-field system: $k=r=2$.
Lax function \eqref{a1} transformed by $p\mapsto p^{-1}$ has the form
\begin{equation*}
L= vp+u-\frac{w}{s}+p^{-1}-\frac{w}{s^2}(p-s^{-1})^{-1}.
\end{equation*}
For $(L)^{s^{-1}}_{\leqslant 0} = \frac{w}{s^2}(p-s^{-1})^{-1}$ one
finds the system
\begin{align*}
L_{\xi_1} = -\pobr{\bra{L}^{s^{-1}}_{< 0},L}_1 \Longleftrightarrow
\pmatrx{u\\ v\\ w\\ s}_{\xi_1} = \pmatrx{ -\frac{w}{s}\\ -\frac{vw}{s^2}\\
w\bra{\frac{v}{s^2}-1}\\ -u-s+\frac{v-w}{s}}_x =
\pi_0dH^{s^{-1}}_1 = \pi^{red}_1dH^{s^{-1}}_0
\end{align*}
commuting, by Proposition \ref{pp}, with the equations from Example
\ref{ex} for $r=0$. The linear and quadratic Poisson tensors are
given by \eqref{p1} and \eqref{p2}, respectively. Hamiltonian
functionals are given by
\begin{align*}
H^{s^{-1}}_0 = -\int_\Omega w\ dx\qquad H^{s^{-1}}_1 = \int_\Omega
w\bra{u+\frac{v-1}{s}}\ dx.
\end{align*}
\end{example}
\section{Comments}
In this article we have presented a systematic construction of
multi-Hamiltonian dispersionless systems with meromorphic Lax
representations. It is shown that for a given meromorphic Lax
function $L$, if allowed by $k$ and $r$, one can construct Lax
hierarchies related to all poles of $L$ and $L^{-1}$. For fixed
$k$ and $r$, these Lax hierarchies mutually commute. It is shown
how to construct Poisson tensors and infinite hierarchies of
constants of motion. It is proved that the Poisson tensors on the
original function space, reconstructed for different poles, are
equal. We have also systematically examined the forms of
appropriate meromorphic Lax functions, with a finite number of
dynamical fields, allowing the construction of consistent
dispersionless systems. The Poisson tensors constructed for the
appropriate meromorphic Lax functions considered in the present
article are nondegenerate.
Articles \cite{FS}-\cite{S} deal with rational Lax functions from
the algebra with the fixed Poisson bracket $r=1$. However, only the
Lax hierarchies generated by powers constructed near $\infty$ have
been considered there. For the class of rational Lax functions
used in these papers the bi-Hamiltonian structures are degenerate,
i.e.\ the determinants of the related metrics vanish. The reason is
that the constraint of the form $L|_{p=0} = 0$ is not taken into
consideration, so one dynamical field can always be represented
as a function of the others. This entails the degeneracy of the
Poisson tensors.
There is a different approach to meromorphic Lax functions. It is
well known from complex analysis that a meromorphic function can
be uniquely presented in factorized form. Because of such
a factorization there is no problem in finding the poles of $L$ and
$L^{-1}$ near which one constructs powers and the related Lax
hierarchies. Another advantage is that the dispersionless systems
obtained have a very symmetric form. The disadvantage, however, is
that the Poisson tensors are significantly more complicated. Such a
factorized form of Lax functions also allows for finding new
reductions which are not obvious when the Lax function is in the
standard form, see \cite{FS}-\cite{S}.
In this paper we have considered dispersionless systems with a
finite number of dynamical fields. However, a Lax function that is an
infinite formal Laurent series leads to the construction of
dispersionless infinite-field Benney-moment-like equations. Such
systems for Laurent series at $\infty$ have been considered
earlier in \cite{Bl2}. The original Benney moment equations can be
obtained for $k=r=0$. If we consider formal Laurent series at a
pole being a dynamical field $v(x)$, we obtain new classes
of infinite-field dispersionless systems. They, together with their
bi-Hamiltonian structures, will be studied in a forthcoming
article. Furthermore, all finite-field dispersionless systems
with meromorphic Lax functions considered in this paper may be
regarded as reductions of these infinite-field systems.
All Lax functions used in this article belong to algebras of
meromorphic functions. However, it is straightforward to extend the
theory presented here to algebras of holomorphic functions. Hence, it
may be worth looking systematically for new classes of appropriate
holomorphic Lax functions allowing the construction of related
dispersionless systems.
Another issue is the extension of the theory of meromorphic Lax
representations presented here for dispersionless systems to the
construction of integrable dispersive soliton systems with rational
Lax operators. The first approach in this direction was made in
\cite{EOR}; however, the authors constructed soliton systems
related only to the case $k=r=0$ of our article. A more general
theory of dispersive deformations of formal Lax functions that are
polynomials in $p$ and $p^{-1}$ is presented in our paper
\cite{BS}. This approach is based on a Weyl-Moyal-like
quantization procedure: the idea relies on the deformation of the
usual multiplication in the algebra $\mathcal{A}$ into a new associative
but non-commutative $\star$-product. However, this theory works
only for $r=0,1,2$. Deformations of the Poisson algebras for $r=0,2$
are equivalent and lead to the construction of field soliton
systems, while for $r=1$ they lead to the construction of lattice
soliton systems. In a forthcoming article we are thus going to
present a general theory of field and lattice soliton systems
for rational Lax operators.
\subsection*{Acknowledgment}
This work was partially supported by KBN research grant No. 1 P03B
111 27. One of the authors B. M. Sz. would like to thank M. Pavlov
and A. Yu. Orlov for some discussions and useful references.
High-throughput sequencing devices, called next-generation sequencers, have provided large amounts of DNA sequence data for various organisms.
However, a very large number of draft genomes are still incomplete; for example, in GenBank, 90\% of bacterial genomes are incomplete~\cite{bacterialGenomeSequencing}.
In order to improve the consistency and completeness of draft reference genomes, which are produced from short reads obtained by second-generation sequencers,
third-generation long-read sequencing can be used.
For this reason, third-generation sequencing technologies are becoming more popular;
for example, in 2018 a human genome assembled \textit{de novo} from only long DNA reads was published~\cite{human-genome-ONT-sequencing}.
Third-generation sequencing provides much longer DNA reads than second-generation sequencing technologies.
However, the error rate of long reads from third-generation devices is significantly higher than that of short DNA reads from second-generation sequencers~\cite{pacbio-sequencing, nanopore-sequencing}.
Moreover, the cost per sample of third-generation sequencing is higher than that of second-generation sequencing~\cite{cost-per-sample-sequencing}.
The natural concept of using both types of reads in \textit{de novo} assembly, called hybrid assembly, is currently being explored~\cite{apple-hybrid-assembly, herbal-plant-hybrid-assembly}.
There are many possibilities to combine data from second-generation and third-generation sequencing.
The four most popular are listed below.
\begin{enumerate}
\item Long DNA reads can be mapped directly onto the de Bruijn graph, which is built from short DNA reads.
Dedicated algorithms then allow some ambiguities in the de Bruijn graph to be resolved, which can improve the consistency of the resulting DNA sequences.
Such an approach is implemented in some \textit{de novo} DNA assemblers for second-generation reads, e.g. Velvet~\cite{velvet}, ABySS~\cite{abyss}, SPAdes~\cite{spades}.
\item Long DNA reads can be \textit{de novo} assembled with dedicated assemblers, e.g. Canu~\cite{canu}, Falcon~\cite{falcon}, miniasm~\cite{miniasm}.
The quality of the created DNA sequences can then be improved by mapping short DNA reads and correcting assembly errors with the Pilon~\cite{pilon} or quiver~\cite{quiver} applications.
\item Short DNA reads can be used to correct long DNA reads, for example, with the CoLoRMap~\cite{colormap} or Nanocorr~\cite{nanocorr} tools.
The corrected long DNA reads can then be assembled with assemblers for third-generation sequencing data (as described in the previous point).
\item Short DNA reads can be \textit{de novo} assembled using assemblers dedicated to second-generation sequencing data (as described in point 1).
Long DNA reads can then be used to link the resultant DNA sequences (contigs), for example, with the LINKS~\cite{links} or SSPACE-LongRead~\cite{sspace-longread} applications.
\end{enumerate}
In this paper, we present a new application called \textit{dnaasm-link} for combining the output of a \textit{de novo} assembler with long DNA reads (point 4 of the list above).
Our software contains a module for filling the gaps between contigs with a sequence taken from an appropriate long DNA read.
This feature is very important, in particular for complex DNA regions.
Moreover, our method requires several times less computation time and several times less memory than other tools.
The significant memory optimization and reduction of computation time may enable the use of the presented application for organisms with large genomes,
which is not feasible with existing applications.
The presented algorithm was implemented as a new extension of the dnaasm assembler~\cite{bmc2018dnaasm};
the demo application, docker image and source code are available at the project homepage http://dnaasm.sourceforge.net.
\section*{Methods}
The presented algorithm efficiently finds and joins adjacent contigs using long reads.
The contigs are produced by a \textit{de novo} DNA assembler from short, high-quality reads from second-generation sequencers.
In our approach the contigs are created by the de Bruijn graph algorithm implemented in the dnaasm assembler~\cite{bmc2018dnaasm}.
The new algorithm, called dnaasm-link, checks which contigs have a~sub-sequence similar to a~sub-sequence of a long read,
then finds adjacent contigs,
calculates the distance between them and fills the gap with a~sequence from the appropriate long DNA read.
The presented approach and the implementation details are described below.
\subsection*{Finding adjacent contigs}
The algorithm for finding adjacent contigs is based on k-mer similarity
and consists of several stages.
Firstly, a set of k-mers is generated from the input set of contigs, and each of them is inserted into a Bloom filter~\cite{bloom-filter}.
The length of the analyzed k-mers (the value of parameter $k$) can be set by the user based on the error rate of the long DNA reads: the higher the error rate, the lower the $k$ value.
The default value is 15.
This step is depicted in~Fig.~\ref{fig:algorithm-before-graph-building}A.
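This first stage can be sketched as follows. This is a minimal Python illustration, not the actual C++ implementation: the Bloom filter sizing, the SHA-256-based hashing and the helper names are assumptions made for the example.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: a fixed bit array probed by several hashes."""
    def __init__(self, size_bits=1 << 20, num_hashes=4):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item):
        # Derive num_hashes bit positions from salted SHA-256 digests.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

def kmers(sequence, k):
    """The k-spectrum of one sequence: all substrings of length k."""
    return [sequence[i:i + k] for i in range(len(sequence) - k + 1)]

def build_contig_filter(contigs, k=15):
    """Insert every contig k-mer into the Bloom filter and count occurrences."""
    bloom, counts = BloomFilter(), {}
    for contig in contigs:
        for km in kmers(contig, k):
            bloom.add(km)
            counts[km] = counts.get(km, 0) + 1
    return bloom, counts
```

The occurrence counts built alongside the filter are reused later to decide which k-mers are unique.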
\begin{figure}[h!]
\includegraphics[scale=0.4,angle=270]{./algorithm_before_graph_building.pdf}
\caption{An exemplary process of generating and filtering k-mers pairs from long DNA reads.
(A) Firstly, the Bloom filter and an array containing the number of occurrences of each k-mer are built from the k-spectrum generated from the input set of contigs.
(B) From each long DNA read, a set of k-mers pairs (k-mer length equal to $k$) is generated, with a distance between the beginning of the first k-mer and the end of the second one equal to $d$ and a sliding step equal to $t$.
(C) The input set of k-mers pairs is filtered with the Bloom filter - some pairs are discarded (blue arrows).
(D) Resultant set of k-mers pairs after the second filtering process.
It is worth noting that the resulting set of k-mers pairs (D) is very limited in relation to the generated set of k-mers pairs (B) due to errors in long DNA reads and repetitive regions of the investigated genome.}
\label{fig:algorithm-before-graph-building}
\end{figure}
Secondly, the set of long DNA reads is processed: from each read, a set of k-mer pairs with distance $d$ is generated.
The default value of $d$ is $4000$.
It should be mentioned that we do not generate a~full k-spectrum here; instead, we use a sliding step $t$, set by default to $2$.
This step is depicted in~Fig.~\ref{fig:algorithm-before-graph-building}B.
The pairs in which both k-mers are present in the previously generated Bloom filter are processed further, as depicted in~Fig.~\ref{fig:algorithm-before-graph-building}C.
Thirdly, the set of unique k-mers is determined
by counting the number of occurrences of each k-mer in the input set of contigs.
K-mers which occur more than once are treated as non-unique.
All pairs of k-mers containing at least one non-unique k-mer are removed from further consideration,
as depicted in~Fig.~\ref{fig:algorithm-before-graph-building}D.
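The generation and two-step filtering of k-mer pairs (stages B-D above) can be sketched as follows; the function names are illustrative, and a plain membership test stands in for the real Bloom filter.

```python
def kmer_pairs(read, k=15, d=4000, t=2):
    """Pairs of k-mers from one long read: the distance from the start of
    the first k-mer to the end of the second one is d; the sliding step is t.
    Each entry also records the pair's offset on the read."""
    pairs = []
    for i in range(0, len(read) - d + 1, t):
        pairs.append((i, read[i:i + k], read[i + d - k:i + d]))
    return pairs

def filter_pairs(pairs, contig_kmers, counts):
    """Keep only the pairs whose k-mers both occur in the contig set
    (Bloom-filter test) and are both unique (occur exactly once)."""
    return [(off, a, b) for off, a, b in pairs
            if a in contig_kmers and b in contig_kmers
            and counts.get(a) == 1 and counts.get(b) == 1]
```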
Next, the connection graph is built.
This graph is composed of vertices that represent contigs, and edges that represent connections between contigs derived from pairs of k-mers from long DNA reads.
Each edge contains three parameters that define the strength of the specified connection.
These parameters are:
\begin{itemize}
\item the number of connections between a given pair of contigs defined as the number of k-mers pairs;
\item the number of connections between a given pair of contigs defined as the number of DNA reads;
\item the number of connections between a given pair of contigs defined as the number of DNA reads, where specified DNA read is taken into consideration if number of k-mers pairs in this read is greater than the threshold value specified by the user.
\end{itemize}
After building the connection graph, a set of filters is applied to remove some of the edges.
The filters remove the edges for which at least one of the three numbers mentioned above is lower than the corresponding threshold set by the user.
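A sketch of the connection graph and its edge filters, assuming the filtered k-mer pairs have already been resolved to the pair of contigs they link; the data layout and the default thresholds are assumptions for this illustration.

```python
from collections import defaultdict

def build_connection_graph(reads_hits, pair_threshold=2):
    """reads_hits: one list per long read of (contig_a, contig_b) tuples,
    one tuple per filtered k-mer pair.  Each edge stores the three
    connection-strength counters described in the text."""
    edges = defaultdict(lambda: {"pairs": 0, "reads": 0, "strong_reads": 0})
    for hits in reads_hits:
        per_read = defaultdict(int)
        for a, b in hits:
            per_read[tuple(sorted((a, b)))] += 1
        for key, n in per_read.items():
            edges[key]["pairs"] += n
            edges[key]["reads"] += 1
            if n > pair_threshold:      # read counts as a strong connection
                edges[key]["strong_reads"] += 1
    return dict(edges)

def filter_edges(edges, min_pairs=1, min_reads=1, min_strong=0):
    """Drop every edge where any of the three counters is below its threshold."""
    return {k: v for k, v in edges.items()
            if v["pairs"] >= min_pairs and v["reads"] >= min_reads
            and v["strong_reads"] >= min_strong}
```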
Finally, the resulting set of scaffolds is generated from the connection graph.
The process considers the not-yet-processed vertices of the graph in turn, starting from the vertex that represents the longest contig.
The considered contig is then expanded to the left and to the right.
During the expansion, two types of situation can occur:
(i) the specified contig is connected with only a single vertex in the contig graph; then, the considered contigs are joined;
(ii) the specified contig is connected with more than one vertex; in this situation, the vertex with the largest number of connecting k-mer pairs is preferred.
All vertices used in an expansion are marked as used and are not taken into consideration in the next iterations of the algorithm.
The process is repeated until all vertices are processed.
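The greedy scaffold generation can be sketched as below; this simplified version ignores contig orientation and keeps only the number of k-mer pairs per edge, so it illustrates the selection rule rather than the full algorithm.

```python
def make_scaffolds(contig_lengths, edges):
    """Start from the longest unused contig and repeatedly extend both ends
    of the chain with the unused neighbour connected by the most k-mer pairs."""
    neighbours = {}
    for (a, b), pairs in edges.items():
        neighbours.setdefault(a, []).append((pairs, b))
        neighbours.setdefault(b, []).append((pairs, a))
    used, scaffolds = set(), []
    for contig in sorted(contig_lengths, key=contig_lengths.get, reverse=True):
        if contig in used:
            continue
        chain, extended = [contig], True
        used.add(contig)
        while extended:
            extended = False
            # Try to grow the right end, then the left end of the chain.
            for end, insert in ((chain[-1], chain.append),
                                (chain[0], lambda c: chain.insert(0, c))):
                candidates = [(p, n) for p, n in neighbours.get(end, [])
                              if n not in used]
                if candidates:
                    _, best = max(candidates)   # most k-mer pairs wins
                    insert(best)
                    used.add(best)
                    extended = True
        scaffolds.append(chain)
    return scaffolds
```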
\subsection*{Gap filling algorithm}
During scaffold generation two contigs may overlap; in that case a single 'N' sign is inserted between them.
However, the contigs may also be separated by a gap, and the final step of the presented algorithm aims
to estimate the gap size and to fill the gap with an appropriate fragment of a long DNA read.
Since both the distance between paired k-mers and the coordinates of those k-mers on the
contigs are known, the estimated length of each gap can be calculated.
In the same manner, knowing the offset of each k-mer pair within the long read,
it is possible to determine the offset of the sub-sequence of the read corresponding to each gap in the scaffolds.
Contigs are covered by multiple error-containing reads, and consequently, multiple different gap sequences may be generated.
In the presented application,
the gap sequence is taken directly from the read which covers the considered contigs with the greatest number of k-mer pairs.
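The arithmetic of the gap estimate can be sketched for a single k-mer pair; the argument names are assumptions for this example, and the real tool aggregates over the read with the most supporting pairs.

```python
def gap_from_pair(read, read_offset, d, k, pos_a, len_a, pos_b):
    """Estimate the gap between contig A and contig B from one k-mer pair.
    read_offset: start of the first k-mer on the long read; the pair spans
    d bases on the read; pos_a: start of the first k-mer on contig A (of
    length len_a); pos_b: start of the second k-mer on contig B."""
    tail_a = len_a - pos_a    # bases of contig A from the k-mer to its end
    head_b = pos_b + k        # bases of contig B up to the end of its k-mer
    gap_len = d - tail_a - head_b
    if gap_len <= 0:
        return 0, ""          # contigs overlap: a single 'N' is inserted
    # The read fragment between the end of contig A and the start of contig B.
    gap_seq = read[read_offset + tail_a:read_offset + d - head_b]
    return gap_len, gap_seq
```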
\subsection*{Implementation}
The dnaasm application is implemented in a client-server architecture, based on the bioweb framework~\cite{bioweb}.
The dnaasm-link module is deployed as a shared library.
In our implementation we used three programming languages: JavaScript, Python and C++.
JavaScript, along with HTML5 and the AngularJS framework, was used to implement the graphical user interface (GUI).
Python and the Django library were used to implement the server side.
Finally, C++ was used to implement the most complex data-processing step: the algorithm presented in this work.
Moreover, we used several libraries, such as Boost and Google Sparse Hash, to make the implementation of our algorithm fast and memory-scalable.
The main modules of our software are presented in Fig~\ref{fig:architecture}.
\begin{figure}[h!]
\includegraphics[scale=0.4,angle=270]{./architecture.pdf}
\caption{The architecture of dnaasm application. The user can use the application in two ways, through graphical user interface or a~command line.
Both ways lead to launching the calculation module, in which the presented algorithm is implemented as the shared library. What is more, the calculation module contains an additional shared library in which the \textit{de novo} assembler was implemented beforehand.
Both the mentioned assembler and the presented dnaasm-link scaffolder can be launched in a very similar and convenient way.}
\label{fig:architecture}
\end{figure}
\section*{Results}
Numerical experiments were performed to compare the presented application with the available tools and to indicate the advantages of gap filling in scaffolds using long DNA reads.
Briefly, the first experiment compares the quality of results obtained in the presented method with other tools for hybrid assembly.
The second experiment was carried out on artificially generated data and it indicates the benefits of using both short and long DNA reads over using only the output from second-generation sequencers.
Finally, the calculation time and memory usage were measured.
To evaluate the quality of resultant DNA sequences in experiments we used QUAST~\cite{quast} ver.~4.1.
We compared DNA sequences in terms of:
\begin{itemize}
\item{the number of resultant DNA sequences longer than 1000 bp;}
\item{the number of misassemblies - sum of relocations, translocations, and inversions;}
\item{N50 statistic - the length of the DNA sequence for which the sum of lengths of all sequences of that length or longer is greater than half of the total assembly length;}
\item{NA50 statistic - the same as N50, but not for all resultant DNA sequences - only for a set of aligned blocks which are results of breaking input DNA sequences at misassembly events;}
\item{the largest DNA sequence;}
\item{the largest alignment - the length of the largest continuous alignment in the resultant DNA sequences;}
\item{the average number of mismatches per 100 kbp;}
\item{the average number of indels per 100 kbp;}
\item{the average number of uncalled bases (N's) per 100 kbp.}
\end{itemize}
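As a reference for the statistics above, N50 can be computed as follows (a standard definition, not code from the evaluated tools); NA50 is the same statistic applied to the aligned blocks instead of the whole sequences.

```python
def n50(lengths):
    """Smallest length L such that sequences of length >= L together
    cover at least half of the total assembly length."""
    total, running = sum(lengths), 0
    for length in sorted(lengths, reverse=True):
        running += length
        if 2 * running >= total:
            return length
    return 0
```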
Moreover, we used the BUSCO~\cite{busco} ver.~2.0 tool to compare the DNA sequences in terms of the number of reconstructed core genes.
In this evaluation of the DNA sequences, we have distinguished four groups: (i) complete and single-copy, (ii) complete and duplicated, (iii) fragmented and (iv) missing core genes.
A detailed description of the experiments and the results obtained can be found in the following parts of this section.
\subsection*{Comparison with other tools}
We compared the results of our application with other tools for hybrid assembly that connect contigs using long reads.
The main objective was a comparison in terms of linking the contigs with long DNA reads and filling the resulting gaps.
For this experiment we used publicly available data for \textit{Escherichia coli} and \textit{Saccharomyces cerevisiae}.
Both datasets come from Nanocorr's~\cite{nanocorr} research\footnote{http://schatzlab.cshl.edu/data/nanocorr}; the names of the files are provided in the Supplementary materials.
These files are the result of the \textit{de novo} assembly of short DNA reads and of the correction of ONT reads by short DNA reads.
The basic parameters of the input sets of long DNA reads and contigs are presented in Tab.~\ref{tab:coli_yeast_long_reads}.
\begin{table}[h!]
\caption{The input set of long DNA reads and contigs characteristic for \textit{E. coli} and \textit{S. cerevisiae} organisms from Nanocorr's research.}
\begin{tiny}
\begin{tabular}{l|r|r|r|r|r|r|r|r|}
\multicolumn{2}{c|}{~} & \textbf{No. of} & \multirow{2}{*}{\textbf{Sum [Mbp]}} & \multirow{2}{*}{\textbf{N50 [bp]}} & \multirow{2}{*}{\textbf{Max [bp]}} & \textbf{Avg.} & \textbf{Avg.} & \textbf{Avg.} \\
\multicolumn{2}{c|}{~} & \textbf{sequences} & ~ & ~ & ~ & \textbf{mis.} & \textbf{indels} & \textbf{N's} \\ \hline
\multirow{2}{*}{contigs} & \textit{E. coli} & 65 & 4.681 & 176396 & 398301 & 2.32 & 0.17 & 0.00 \\
~ & \textit{S. cerevisiae} & 430 & 14.911 & 53444 & 257346 & 85.77 & 8.80 & 0.00 \\ \hline
\multirow{2}{*}{long reads} & \textit{E. coli} & 59009 & 240.098 & 7471 & 43798 & 180.75 & 181.20 & 0.00 \\
~ & \textit{S. cerevisiae} & 88218 & 526.589 & 9189 & 72879 & 360.98 & 171.80 & 5.06 \\ \hline
\end{tabular}
\end{tiny}
\label{tab:coli_yeast_long_reads}
\end{table}
We compared our approach with two state-of-the-art tools used to join contigs into scaffolds with long reads: LINKS~\cite{links} ver. 1.8.5 and SSPACE-LongRead~\cite{sspace-longread} ver. 1.1.0.
Moreover, in our research we also used some scaffolders for short, paired DNA reads: paired-end tags (PETs) or mate-pairs (MPs).
In order to run the scaffolders for short DNA reads on the long DNA read dataset, we used the Fast-SG~\cite{fast-sg} tool.
This application allows us to generate a~set of paired DNA reads from the long DNA reads and to map them to the preassembled contigs.
Then, this set of mapped short DNA reads, along with the contigs, was used as input for scaffolders dedicated to short reads: OPERA-LG~\cite{opera-lg} ver. 2.0.6, BOSS~\cite{boss} and ScaffMatch~\cite{scaffmatch} ver. 0.9.0.
The parameter values for the applications and the appropriate commands are provided in the Supplementary materials,
while the results of the evaluation are presented in Tab.~\ref{tab:schatzlab-coli-yeast-linkage-statistics} and Tab.~\ref{tab:schatzlab-coli-yeast-core-genes}.
\begin{table}[h!]
\caption{Evaluation of dnaasm-link application in comparison to other tools for datasets depicted in Tab.~\ref{tab:coli_yeast_long_reads}.
The following reference sequences were used to evaluate the results: NC\_000913 for \textit{E. coli} and NC\_001133 ... NC\_001148, NC\_001224 for \textit{S. cerevisiae}.}
\begin{tiny}
\begin{tabular}{l|r|r|r|r|r|r|r|r|r|r|}
\multicolumn{2}{c|}{~} & \textbf{No. of} & \textbf{No. of} & \textbf{N50} & \textbf{NA50} & \textbf{Max} & \textbf{Largest} & \textbf{Avg.} & \textbf{Avg.} & \textbf{Avg.} \\
\multicolumn{2}{c|}{~} & \textbf{contigs} & \textbf{mis.} & \textbf{[bp]} & \textbf{[bp]} & \textbf{[bp]} & \textbf{algn. [bp]} & \textbf{mis.} & \textbf{indels} & \textbf{N's} \\ \hline
\multirow{7}{*}{\textit{E. coli}} & NGS contigs & 65 & 9 & 176396 & 164044 & 398301 & 360084 & 2.32 & 0.17 & 0.00 \\
~ & SSPACE-LongRead & 32 & 29 & 398301 & 211043 & 1274776 & 564486 & 2.47 & 0.37 & 570.90 \\
~ & LINKS & 23 & 19 & 637611 & 235726 & 1146701 & 636452 & 2.36 & 0.39 & 233.43 \\
~ & \textbf{dnaasm-link} & \textbf{22} & \textbf{20} & \textbf{746714} & \textbf{219242} & \textbf{1128693} & \textbf{636452} & \textbf{2.36} & \textbf{0.37} & \textbf{212.75} \\
~ & Fast-SG + OPERA-LG & 26 & 16 & 349966 & 342146 & 659623 & 658295 & 2.36 & 0.30 & 326.53 \\
~ & Fast-SG + BOSS & 60 & 14 & 177523 & 164044 & 611106 & 360084 & 2.32 & 0.17 & 64.79 \\
~ & Fast-SG + ScaffMatch & 55 & 18 & 185955 & 177523 & 603113 & 359089 & 2.41 & 0.17 & 139.44 \\ \hline
\multirow{7}{*}{\textit{S. cerevisiae}} & NGS contigs & 430 & 53 & 53444 & 49075 & 257346 & 249232 & 85.77 & 8.80 & 0.00 \\
~ & SSPACE-LongRead & 557 & 105 & 167867 & 126607 & 736874 & 452023 & 95.42 & 11.27 & 3690.74 \\
~ & LINKS & 202 & 89 & 202618 & 126598 & 623140 & 416048 & 87.04 & 10.00 & 850.77 \\
~ & \textbf{dnaasm-link} & \textbf{190} & \textbf{92} & \textbf{224004} & \textbf{126353} & \textbf{764024} & \textbf{431875} & \textbf{87.28} & \textbf{10.08} & \textbf{861.19} \\
~ & Fast-SG + OPERA-LG & 202 & 59 & 180866 & 155226 & 736942 & 451889 & 85.51 & 9.72 & 462.50 \\
~ & Fast-SG + BOSS & 369 & 113 & 57097 & 47994 & 257346 & 249232 & 85.77 & 8.80 & 374.16 \\
~ & Fast-SG + ScaffMatch & 328 & 144 & 80833 & 51157 & 434320 & 249232 & 85.41 & 8.82 & 489.70 \\ \hline
\end{tabular}
\end{tiny}
\label{tab:schatzlab-coli-yeast-linkage-statistics}
\end{table}
\begin{table}[h!]
\caption{Comparison of the number of core genes reproduced from datasets depicted in Tab.~\ref{tab:coli_yeast_long_reads}.
The sets of reference core genes used for evaluation were enterobacteriales\_odb9 and saccharomycetales\_odb9 for \textit{E. coli} and \textit{S. cerevisiae}, respectively.}
\begin{tiny}
\begin{tabular}{l|r|r|r|r|r|r|r|r|r|}
\multicolumn{2}{c|}{~} & \textbf{Complete and} & \textbf{Complete and} & \multirow{2}{*}{\textbf{Fragmented}} & \multirow{2}{*}{\textbf{Missing}} \\
\multicolumn{2}{c|}{~} & \textbf{single-copy} & \textbf{duplicated} & ~ & ~ \\ \hline
\multirow{7}{*}{\textit{E. coli}} & NGS contigs & 780 & 0 & 1 & 0 \\
~ & SSPACE-LongRead & 619 & 162 & 0 & 0 \\
~ & LINKS & 780 & 0 & 1 & 0 \\
~ & \textbf{dnaasm-link} & \textbf{780} & \textbf{0} & \textbf{1} & \textbf{0} \\
~ & Fast-SG + OPERA-LG & 780 & 0 & 1 & 0 \\
~ & Fast-SG + BOSS & 780 & 0 & 1 & 0 \\
~ & Fast-SG + ScaffMatch & 780 & 0 & 1 & 0 \\ \hline
\multirow{7}{*}{\textit{S. cerevisiae}} & NGS contigs & 1657 & 9 & 18 & 27 \\
~ & SSPACE-LongRead & 1647 & 27 & 15 & 22 \\
~ & LINKS & 1661 & 9 & 14 & 27 \\
~ & \textbf{dnaasm-link} & \textbf{1659} & \textbf{10} & \textbf{14} & \textbf{28} \\
~ & Fast-SG + OPERA-LG & 1661 & 9 & 16 & 25 \\
~ & Fast-SG + BOSS & 1660 & 9 & 18 & 24 \\
~ & Fast-SG + ScaffMatch & 1658 & 9 & 12 & 32 \\ \hline
\end{tabular}
\end{tiny}
\label{tab:schatzlab-coli-yeast-core-genes}
\end{table}
Our experiment indicates that the dnaasm-link application gives slightly better results than the existing tools in terms of the quantity and quality of the resulting DNA sequences.
Moreover, \textit{de novo} assembly with tools that treat short and long reads differently (LINKS, SSPACE-LongRead, dnaasm-link) gives better results
than converting long reads into short reads to increase the sequencing coverage and then performing \textit{de novo} assembly.
\subsection*{The impact of adding long DNA reads}
We examined how the combination of short and long DNA reads affects the length and number of the resulting DNA sequences.
In this study we used the \textit{Saccharomyces cerevisiae} (GenBank NC\_001133 ... NC\_001148, NC\_001224) reference genome.
From this genome we generated nine sets of short DNA reads with the pIRS~\cite{pirs} ver. 1.1.1 application and five sets of long reads with the NanoSim~\cite{nanosim} ver. 1.0.0 tool,
where each set had a different depth of coverage.
The details of the applications used and the dataset parameters are provided in the Supplementary materials.
\begin{figure}[h!]
\includegraphics[scale=0.33,angle=0]{./mixing_illumina_ont.pdf}
\caption{The impact of adding long DNA reads on the number of resultant scaffolds longer than 1000 bp and the NA50 statistic. The experiment was carried out on the \textit{Saccharomyces cerevisiae} (GenBank NC\_001133 ... NC\_001148, NC\_001224) genome. Firstly, nine sets of short DNA reads and five sets of long DNA reads with different depths of coverage were generated. Then, the short reads were \textit{de novo} assembled, and finally, the resultant unitigs were linked by long DNA reads. The peak in the number of contigs at Illumina coverage of 15x is due to the fact that 10x is too small to cover the whole genome.
As the coverage increases, the number of contigs initially grows,
because the whole genome becomes covered, but with small gaps.
It is worth mentioning that a~greater depth of coverage does not increase the~number of covered gaps in the results,
as all the gaps are caused by complex DNA regions and not by a lack of coverage.}
\label{fig:mixing_illumina_ont}
\end{figure}
The generated short reads were \textit{de novo} assembled by ABySS ver. 2.0.1, and then the contigs were linked using long reads.
The results, presented in Fig~\ref{fig:mixing_illumina_ont}, show that
combining long DNA reads with short ones can significantly increase the consistency of the resultant assemblies by reducing the final number of scaffolds.
Moreover, increasing the coverage of either sequencing technology above a certain level does not improve the results any further.
Next, we investigated how the use of long DNA reads affects the reconstruction of complex DNA structures such as long tandem repeats.
We compared our method with a~technique where gaps are filled with short DNA reads.
In this experiment we generated an~input set of reads for two organisms: \textit{Escherichia coli} (GenBank NC\_000913) and \textit{Saccharomyces cerevisiae} (GenBank NC\_001133 ... NC\_001148, NC\_001224).
We used the same applications as before, i.e. pIRS and NanoSim, whose parameters are provided in the Supplementary materials.
The short reads were \textit{de novo} assembled by ABySS~\cite{abyss}.
Next, we linked the contigs with long DNA reads using the dnaasm-link tool in two modes: with and without gap filling.
Then, the scaffolds produced by dnaasm-link without gap filling were processed by three tools for filling gaps with short DNA reads:
GapFiller~\cite{gapfiller} ver. 1.10.0, Sealer~\cite{sealer} ver. 1.9.0 and SOAPdenovo2~GapCloser~\cite{soapdenovo2-gapcloser} ver. 1.12.0.
Finally, we compared the number of tandem repeats detected by the Tandem repeats finder application~\cite{trf}.
This application was also run on the reference genomes to determine the ground-truth data for this study.
The results presented in Tab.~\ref{tab:gap-filling-efficiency-tandem-repeats} show the advantage of gap filling by dnaasm-link over the other existing methods.
\begin{table}[h!]
\caption{Tandem repeat reconstruction efficiency. The table presents all tandem repeats in the \textit{E. coli} and \textit{S. cerevisiae} reference genomes.
In the presented table, '+' signs denote the correct reconstruction of the specified repetitive fragment,
and '-' signs denote the lack of a correct reconstruction.
The presented results indicate that the usage of long DNA reads by the dnaasm-link tool allows some of the tandem repeats to be reconstructed.}
\begin{tiny}
\begin{tabular}{l|r|r|r|r|r|r|r|r|}
\multirow{2}{*}{\textbf{}} & \textbf{Motif} & \textbf{Num of} & \textbf{NGS} & \textbf{dnaasm-link} & \multicolumn{3}{|c|}{\textbf{dnaasm-link without gap filling}} & \textbf{dnaasm-link} \\
~ & \textbf{len. [bp]} & \textbf{repet.} & \textbf{unitigs} & \textbf{without gap fill.} & \textbf{+ GapFiller} & \textbf{+ Sealer} & \textbf{+ GapCloser} & \textbf{with gap fill.} \\ \hline
\multirow{7}{*}{\textit{E. coli}} & 181 & 3.0 & - & - & - & - & - & - \\
~ & 181 & 2.3 & - & - & - & - & - & - \\
~ & 178 & 1.9 & - & - & + & - & - & + \\
~ & 226 & 2.0 & - & - & - & - & - & + \\
~ & 113 & 2.7 & - & - & - & - & - & + \\
~ & 226 & 1.9 & - & - & - & - & - & - \\
~ & 200 & 2.0 & - & - & - & - & - & + \\ \hline
\multirow{14}{*}{\textit{S. cerevisiae}} & 135 & 1.9 & - & - & - & - & - & - \\
~ & 135 & 1.9 & - & - & - & - & - & - \\
~ & 135 & 3.1 & - & - & - & - & - & - \\
~ & 135 & 3.1 & - & - & - & - & - & - \\
~ & 135 & 1.9 & - & - & - & - & - & - \\
~ & 192 & 2.2 & - & - & - & - & - & - \\
~ & 192 & 2.1 & - & - & - & - & - & - \\
~ & 84 & 3.0 & - & - & - & - & - & - \\
~ & 1998 & 2.0 & - & - & - & - & - & - \\
~ & 207 & 2.1 & - & - & - & - & - & + \\
~ & 81 & 3.3 & - & - & - & - & - & + \\
~ & 189 & 1.9 & - & - & - & - & - & + \\
~ & 72 & 5.3 & - & - & - & - & - & + \\
~ & 189 & 2.3 & - & - & - & - & - & + \\ \hline
\end{tabular}
\end{tiny}
\label{tab:gap-filling-efficiency-tandem-repeats}
\end{table}
\subsection*{Time and memory usage}
We examined dnaasm-link application in terms of performance, which can be crucial in the analysis of large volume sequencing data.
Our application was compared with LINKS~\cite{links} and SSPACE-LongRead~\cite{sspace-longread} in terms of time and memory usage.
The results of the experiment are presented in Fig~\ref{fig:linkage_time_memory}.
\begin{figure}[!h]
\includegraphics[scale=0.33,angle=0]{./linkage_time_memory.pdf}
\caption{Comparison of calculation time and peak RAM usage of the SSPACE-LongRead, LINKS and dnaasm-link applications. The experiment was carried out on the \textit{Caenorhabditis elegans} genome (GenBank NC\_003279 ... NC\_003284, NC\_001328). First, a set of eleven sub-genomes of sizes 1Mbp, 10Mbp, 20Mbp ... 100Mbp was generated from the aforementioned genome. Then, for each sequence, a set of long and short DNA reads was generated, and the short DNA reads were \textit{de novo} assembled by the ABySS tool. Finally, the set of resultant contigs and the long DNA reads were used as input data sets in the presented experiment.}
\label{fig:linkage_time_memory}
\end{figure}
As expected, linking contigs in applications based on accurate mapping takes much more time than in k-mer-based tools, mainly because of the time needed to map long DNA reads to the preassembled contigs.
For example, the calculation time of the SSPACE-LongRead application, which uses the BLASR~\cite{blasr} software in the mapping process, is over 15 times longer than for tools using a k-mer approach, such as dnaasm-link.
Our tool is also significantly faster than the LINKS application, which uses a~similar algorithm but is implemented in Perl.
In addition, the LINKS application requires much more RAM;
for example, for a~genome of size 100 Mbp and long-read coverage of 30x, LINKS uses over 200 GB of RAM, whereas our application uses only 18.3 GB.
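The speed advantage of the k-mer approach can be illustrated with a minimal sketch. The code below is our own mock-up, not the actual dnaasm-link algorithm: every k-mer of every contig is stored in a hash map, a long read is scanned once, and the order in which contig k-mers appear along the read gives the linking order, with no base-level alignment (as in BLASR-based pipelines) involved. The contig sequences, the read, and $k$ are arbitrary toy data.

```python
# Minimal mock-up of k-mer-based contig linking (our illustration only,
# not the actual dnaasm-link implementation).

K = 5

def kmers(seq, k=K):
    """All substrings of length k, left to right."""
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

def build_index(contigs, k=K):
    """Hash map from each k-mer to the set of contigs containing it."""
    index = {}
    for cid, seq in enumerate(contigs):
        for km in kmers(seq, k):
            index.setdefault(km, set()).add(cid)
    return index

def link_order(long_read, contigs, k=K):
    """Contig ids in the order their k-mers occur along the long read."""
    index = build_index(contigs, k)
    order = []
    for km in kmers(long_read, k):
        for cid in index.get(km, ()):
            if not order or order[-1] != cid:
                order.append(cid)
    return order

# Two contigs separated by an unknown gap (N's) in the underlying genome;
# the long read spans both contigs and therefore determines their order.
contigs = ["ACGTACGTAC", "GGATCCGGAT"]
read = "ACGTACGTACNNNNGGATCCGGAT"
```

Scanning `read` yields the order `[0, 1]`, i.e., the long read supports placing contig 0 before contig 1 in a scaffold, and the whole procedure only performs hash-map lookups.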
\section*{Discussion}
dnaasm-link is a new tool for both linking contigs and filling the gaps between them with long DNA reads.
The presented results indicate that the application performs comparably to existing tools in terms of the quality of the resultant DNA sequences.
However, it runs significantly faster and uses much less RAM, which can be crucial for large-volume sequencing data.
Moreover, the presented software contains a module for filling the gaps between contigs with a sequence taken from an appropriate long DNA read, which is not implemented in similar tools.
Filling the gaps with the appropriate fragment of a long DNA read can significantly improve the quality of the resulting DNA sequences.
In the presented study we indicated that a very large number of complex DNA structures, especially tandem repeats, could not be properly reproduced without using long DNA reads.
Moreover, the addition of long DNA reads, even with very low coverage, can significantly reduce the number of resultant DNA sequences and improve their consistency in relation to the results obtained only from short DNA reads.
In the presented application, a~gap within scaffolds could be optionally filled with a fragment of a single long DNA read.
However, this solution is not ideal, because such a read may contain many errors, especially if the long reads are raw, i.e., their errors have not been corrected beforehand.
In order to control this issue, in the future we plan to add a module to create consensus from several DNA reads.
The result of the consensus of several long reads would be inserted into the gap instead of the raw fragment of a single long read, which would significantly reduce the number of errors in the considered DNA fragments.
However, a preliminary study shows a large increase in computation time when the consensus is calculated with a multiple-alignment dynamic programming algorithm.
In the future, we also plan to add a module for the analysis of the similarity of k-mers, which would take into account the fact that the k-mers may contain errors.
The presented tool is based on k-mers, which should contain as few errors as possible, because every single error in a DNA sequence creates up to $k$ erroneous k-mers in the k-spectrum.
To deal with this problem, in the next version of the software we will add a module which will compute a profile of a specified k-mer and compare it to the profiles of other k-mers.
Such a profile will contain several pieces of information, e.g., the number of occurrences of specific 2-mers and their locations within the investigated k-mer.
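Both observations above — a single substitution corrupting up to $k$ k-mers, and a 2-mer-based profile that stays stable under such errors — can be illustrated with a short sketch. The profile format below (pairs of a 2-mer and its position) is our own guess at what such a module might store, not the planned implementation.

```python
# Illustration of the k-mer error effect and a hypothetical 2-mer profile
# (our own sketch, not the planned dnaasm module).

def kmer_set(seq, k):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

# 1. A single substitution corrupts every k-mer covering that position --
#    up to k of them.
k = 4
correct = "ACGTACGTACGT"
erroneous = "ACGTACTTACGT"      # one substitution: position 6, G -> T
bad_kmers = kmer_set(erroneous, k) - kmer_set(correct, k)

# 2. A hypothetical k-mer "profile": the 2-mers it contains together with
#    their positions.  A single error changes only a few entries, so the
#    profiles of a correct k-mer and its erroneous copy remain similar.
def profile(kmer):
    return {(kmer[i:i + 2], i) for i in range(len(kmer) - 1)}

def similarity(a, b):
    """Jaccard similarity of the two profiles."""
    pa, pb = profile(a), profile(b)
    return len(pa & pb) / len(pa | pb)
```

Here `bad_kmers` has exactly $k = 4$ elements, while `similarity("TACG", "TACT")` is still $0.5$ even though the two 4-mers differ, which is the kind of signal the planned module could exploit.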
The presented application is available under GNU Library or Lesser General Public License version 3.0 (LGPLv3).
In order to easily use the software, a demo application with a web interface as well as a Docker~\cite{docker} container with the dnaasm-link tool are available.
Moreover, users can download the binary files as well as the source code and compile the application with any changes to the algorithm.
The corresponding links, additional data and more information can be found at the project homepage http://dnaasm.sourceforge.net.
\section*{Conclusions}
As more and more genomes are sequenced, it becomes desirable to reproduce their DNA sequences correctly, especially from a combination of short and long DNA reads.
Here we have presented dnaasm-link, a tool for linking contigs, a result of \textit{de novo} assembly of second-generation sequencing data, with long DNA reads.
\section*{Availability of data and materials}
dnaasm-link is implemented in C++, and is freely available under GNU Library or Lesser General Public License version 3.0 (LGPLv3).
The tool and related materials can be downloaded from the project homepage http://dnaasm.sourceforge.net.
\section*{Funding}
This work was supported by the statutory research funds of the Institute of Computer Science, Warsaw University of Technology.
\section*{Authors’ contributions}
RN identified the problem, RN, WF and WK designed the approach.
WF implemented the software.
WF and WK worked on testing and validation, WK and RN wrote the manuscript.
All authors read and approved the final manuscript.
\bibliographystyle{plain}
\section{Introduction}
\label{intro}
In their landmark paper \cite{kazhdan1979representations}, Kazhdan and Lusztig introduced, for each Coxeter system $(W,S)$, a family of polynomials with integer coefficients indexed by pairs of elements of $W$. These polynomials are now known as the Kazhdan--Lusztig (KL)-polynomials;
they are closely related to the representation theory of semisimple algebraic groups, the topology of Schubert varieties, the Verma modules, etc. (see e.g. \cite{brenti2003kazhdan} and references therein). To demonstrate the existence of these polynomials, the authors in \cite{kazhdan1979representations} introduced another family of polynomials, the KL $R$-polynomials, which we denote by $R_{u,v}(t)\in \mathbb{Z}[t]$. The relevance of the $R$-polynomials lies in the fact that knowledge of them implies
knowledge of the KL-polynomials.
Recently, it was proven that the KL-polynomials have non-negative coefficients \cite{elias2014hodge}. Meanwhile, the simplest examples reveal that $R$-polynomials can have negative coefficients. Fortunately, there exists another family of polynomials with positive coefficients that, after an easy change in variable, ``coincides'' with the $R$-polynomials (see Proposition \ref{propose decomposition R tilde polynomials}). These polynomials are known as the KL $\tilde{R}$-polynomials. We denote them by $\tilde{R}_{u,v}(t)$, for $u,v\in W$.
The primary object of the study herein is a new family of polynomials, $\{\tilde{\mathcal{R}}_{u,\underline{v}}(t) \}$, indexed by pairs $(u,\underline{v})$ formed by an element $u\in W$ and a (non-necessarily reduced) word $\underline{v}$ in the alphabet $S$. We refer to these polynomials as the ``diagrammatic $\tilde{R}$-polynomials.'' Such a name is justified by the following: Given a pair $(u,\underline{v})$, we define a set, $\mathbb{L}_{\underline{v}}(u)$, which is a subset of Libedinsky's light leaves associated to the pair $(u,\underline{v})$ \cite{libedinsky2008categorie}. In \cite{elias2016soergel}, Elias and Williamson introduced a diagrammatic version of the set $\mathbb{L}_{\underline{v}}(u)$. We define the diagrammatic $\tilde{R}$-polynomial, $\tilde{\mathcal{R}}_{u,\underline{v}}(t)$, by
\begin{equation}
\tilde{\mathcal{R}}_{u,\underline{v}}(t)= \sum_{l\in\mathbb{L}_{\underline{v}}(u)} t^{\deg{(l)}},
\end{equation}
where $\deg(\cdot )$ is a certain statistic defined over the set $\mathbb{L}_{\underline{v}}(u)$.
On the other hand, we demonstrate that if $\underline{v}$ is a reduced expression of some $v\in W$, then
\begin{equation}
\tilde{\mathcal{R}}_{u,\underline{v}}(t)=\tilde{R}_{u,{v}}(t).
\end{equation}
This last equality is the primary result herein (see Theorem \ref{teo diagrammatic R poly}), and it provides a combinatorial interpretation of the $\tilde{R}$-polynomials. Such an interpretation is not new; two others already exist in the literature: the one given by Deodhar \cite[Theorem 1.3]{deodhar1985some} in terms of distinguished subexpressions, and the one provided by Dyer \cite[Corollary 3.4]{dyer1993hecke} in terms of directed paths in the Bruhat graph of $W$.
Our approach presents two advantages over Dyer's and Deodhar's approaches. On the one hand, it allows us to obtain some closed formulas for $\tilde{R}$-polynomials in a simple and combinatorial manner. In particular, we recover and generalize the closed formulas obtained previously by Marietti \cite{marietti2002closed} and Pagliacci \cite{pagliacci2001explicit}.
Here, we use the term ``combinatorial'' in its more strict meaning, i.e., we obtain the closed formulas for $\tilde{R}$-polynomials using bijective proofs. On the other hand, our approach establishes an explicit connection between $\tilde{R}$-polynomials and the theory of Soergel bimodules.
Although not necessary, we briefly explain the aforementioned connection. As we indicated, KL-polynomials have non-negative coefficients. Elias and Williamson's proof of this fact relies on the previous work of Soergel \cite{soergel1992combinatorics,soergel2007kazhdan}. In fact, what they proved in \cite{elias2014hodge} is a deeper result known as ``Soergel's conjecture.'' One of the consequences of Soergel's conjecture is that the KL-polynomials are the graded ranks of certain Hom-spaces; in particular, they have positive coefficients. The diagrams that we use throughout this paper represent morphisms in the category of Soergel bimodules. In particular, the set $\mathbb{L}_{\underline{v}}(u)$ spans a submodule of a quotient of a certain Hom-space in the category of Soergel bimodules, such that our combinatorial interpretation of $\tilde{R}$-polynomials becomes a categorical interpretation. Namely, the diagrammatic $\tilde{R}$-polynomials are the graded rank of a submodule of a quotient of a certain Hom-space. We expect that the interpretation above together with the Hodge theoretic machinery developed in \cite{elias2014hodge} allow for a better understanding of the $\tilde{R}$-polynomials. Hereinafter, we treat the diagrams as only combinatorial objects. The reader interested in the algebraic part of the story is referred to \cite{elias2010diagrammatics,elias2016soergel,libedinsky2008categorie,libedinsky2015light,libedinsky2017gentle}.
This paper is organised as follows. In the next section, we recall the definition and some properties of the $\tilde{R}$-polynomials. In section \ref{section diagrams}, we introduce diagrammatic $\tilde{R}$-polynomials and prove that they coincide with the classical ones. Finally, in section \ref{section closed}, we provide some closed formulas for diagrammatic $\tilde{R}$-polynomials.
\section*{Acknowledgements}
This work is supported by the Fondecyt project 11160154 and the Inserci\'on en la Academia project PAI-Conicyt 79150016. The author would like to thank Nicol\'as Libedinsky and Paolo Sentinelli for their comments and suggestions.
\section{Kazhdan--Lusztig $\tilde{R}$-polynomials}
\label{section two}
Herein, $(W,S)$ denotes an arbitrary Coxeter system. We refer the reader to \cite{bjorner2006combinatorics} and \cite{humphreys1992reflection} for the notations and definitions concerning Coxeter systems. In particular, for $u,v\in W$, we write $u\leq v$ to indicate that $u$ is smaller than or equal to $v$ in the Bruhat order of $(W,S)$. The Bruhat order can be characterised in terms of subwords as follows.
\begin{lem} \label{lema subword property}
Let $\underline{v}=s_1\ldots s_n$ be a reduced expression of some $v\in W$. Then, $u\leq v$ if and only if there exists a reduced expression $\underline{u}= s_{i_1}s_{i_2}\ldots s_{i_k}$ of $u$, where $1\leq i_1 <i_2<\ldots < i_k\leq n$.
\end{lem}
Let $\mathcal{A} = \mathbb{Z}[t,t^{-1}]$. The Hecke algebra $\mathcal{H}=\mathcal{H}(W,S)$ of $(W,S)$ is the associative and unital $\mathcal{A}$-algebra generated by $\{H_s \, |\, s\in S\}$ with relations
$$\begin{array}{rl}
H_s^2= & 1+(t^{-1}-t)H_s, \\
\underbrace{H_sH_tH_s \cdots}_{m_{st}\mbox{-times}} = & \underbrace{H_tH_sH_t \cdots}_{m_{st}\mbox{-times}}
\end{array}
$$
where $m_{st}$ denotes the order of $st$ in $W$.
Given a reduced expression $\underline{v}=s_{1} \cdots s_{k}$ of an element $v\in W$, we define $H_v=H_{s_1}\cdots H_{s_k}$. It is well known that $H_v$ is well defined, that is, it does not depend on the choice of a reduced expression. We also set $H_e:=1$, where $e$ denotes the identity of $W$. The set $\{H_v \, | \, v\in W\}$ is a basis of $\mathcal{H}$, which we call the standard basis of $\mathcal{H}$. It follows directly from the definition of $\mathcal{H}$ that each generator $H_s$ is invertible and we have $H_s^{-1}= H_s+(t-t^{-1})$. Hence, all elements in the standard basis of $\mathcal{H}$ are invertible as well.
For $u,v\in W$, we denote the corresponding KL $R$-polynomials by $R_{u,v}(t)\in \mathcal{A}$. These (Laurent) polynomials are, by definition, the coefficients of the expansion of $(H_{v^{-1}})^{-1}$ in terms of the standard basis. Concretely, for $v\in W$ we have
\begin{equation}
(H_{v^{-1}})^{-1} = \sum_{u\in W} R_{u,v}(t) H_u
\end{equation}
\begin{rem}\rm
Herein, we follow the normalisation of Soergel \cite{soergel1997kazhdan} rather than the typical normalisation given in \cite{kazhdan1979representations}, \cite{humphreys1992reflection} or \cite{bjorner2006combinatorics}. In particular, if $R'_{u,v}(t)$ denotes the $R$-polynomial in those sources, then it is related to our $R$-polynomial, $R_{u,v}(t)$, by the formula
\begin{equation}
R_{u,v}(t)=t^{l(u)-l(v)}R'_{u,v}(t^2).
\end{equation}
\end{rem}
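The defining expansion above can be carried out mechanically. The following sketch is our own encoding for the symmetric group: a Laurent polynomial is a dictionary from exponents to coefficients, an element of $\mathcal{H}$ is a dictionary from permutations (in one-line notation) to Laurent polynomials, and we use $H_wH_s=H_{ws}$ when $l(ws)>l(w)$, which gives $H_wH_s^{-1}=H_{ws}$ when $l(ws)<l(w)$.

```python
from itertools import combinations

def length(w):                       # Coxeter length = number of inversions
    return sum(1 for i, j in combinations(range(len(w)), 2) if w[i] > w[j])

def times_s(w, i):                   # w * s_{i+1}: swap entries in positions i, i+1
    w = list(w); w[i], w[i + 1] = w[i + 1], w[i]
    return tuple(w)

def padd(p, q):                      # sum of Laurent polynomials {exponent: coeff}
    r = dict(p)
    for e, c in q.items():
        r[e] = r.get(e, 0) + c
    return {e: c for e, c in r.items() if c}

def times_Hs_inv(elem, i):
    """Right-multiply sum_w p_w H_w by H_{s_i}^{-1} = H_{s_i} + (t - t^{-1})."""
    out = {}
    def acc(w, p):
        out[w] = padd(out.get(w, {}), p)
    for w, p in elem.items():
        ws = times_s(w, i)
        if length(ws) > length(w):
            acc(ws, p)                                        # H_w H_s = H_{ws}
            acc(w, padd({e + 1: c for e, c in p.items()},     # (t - t^{-1}) H_w
                        {e - 1: -c for e, c in p.items()}))
        else:
            acc(ws, p)                                        # H_w H_s^{-1} = H_{ws}
    return {w: q for w, q in out.items() if q}

def R_polynomials(word, n):
    """Expand (H_{v^{-1}})^{-1} = H_{s_1}^{-1} ... H_{s_k}^{-1},
    for v = s_1 ... s_k a reduced word (1-based generator subscripts)."""
    elem = {tuple(range(1, n + 1)): {0: 1}}
    for i in word:
        elem = times_Hs_inv(elem, i - 1)
    return elem
```

For $v = s_1s_2s_1$ in $\mathfrak{S}_3$, this yields $R_{v,v}(t)=1$ and $R_{e,v}(t) = t^3 - 2t + 2t^{-1} - t^{-3}$, and every permutation receives a nonzero coefficient, as all of $\mathfrak{S}_3$ lies below $v$ in the Bruhat order.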
It is clear that $H_u$ appears in the expansion of $(H_{v^{-1}})^{-1} $ in terms of the standard basis if and only if $u\leq v$. In other words, we have $R_{u,v}(t)\neq 0$ if and only if $u \leq v$. Meanwhile, one can verify that even in the simplest examples, the $R$-polynomials can have negative coefficients. This drawback is overcome with the following:
\begin{propos}\cite[Propositions 5.3.1 and 5.3.2]{bjorner2006combinatorics}
\label{propose decomposition R tilde polynomials}
Let $u,v\in W$. Then, there exists a unique polynomial $\tilde{R}_{u,v}(t)\in \mathbb{N}[t]$ such that
\begin{equation}
R_{u,v}(t)= \tilde{R}_{u,v}(t-t^{-1}).
\end{equation}
Now, assume that $u\leq v$. Then, $\tilde{R}_{u,v}(t)$ is a monic polynomial of degree $l(v)-l(u)$. Furthermore, if $ s\in S$ satisfies $l(vs)<l(v)$, then
\begin{equation} \label{decomposition R tilde polynomials}
\tilde{R}_{u,v}(t)= \left\{ \begin{array}{ll}
\tilde{R}_{us,vs}(t), & \mbox{ if } l(us)<l(u); \\
\tilde{R}_{us,vs}(t) + t\tilde{R}_{u,vs}(t), & \mbox{ if } l(us)>l(u).
\end{array} \right.
\end{equation}
\end{propos}
Equation (\ref{decomposition R tilde polynomials}) allows us to calculate the $\tilde{R}$-polynomials with the initial conditions $\tilde{R}_{v,v}(t)=1$ and $\tilde{R}_{u,v}(t)=0$ if $u\not \leq v$.
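Recursion (\ref{decomposition R tilde polynomials}), together with these initial conditions, is easy to implement. The sketch below is our own: it works in the symmetric group, encodes a permutation by its one-line notation, a polynomial by its coefficient tuple (index $=$ degree), and uses the fact that $l(vs_i)<l(v)$ exactly when $v(i)>v(i+1)$.

```python
from itertools import combinations

def length(w):                        # Coxeter length = number of inversions
    return sum(1 for i, j in combinations(range(len(w)), 2) if w[i] > w[j])

def times_s(w, i):                    # w * s_{i+1}: swap positions i, i+1 (0-based)
    w = list(w); w[i], w[i + 1] = w[i + 1], w[i]
    return tuple(w)

def padd(p, q):                       # add coefficient tuples
    n = max(len(p), len(q))
    p, q = p + (0,) * (n - len(p)), q + (0,) * (n - len(q))
    return tuple(a + b for a, b in zip(p, q))

def R_tilde(u, v):
    """R~_{u,v} as a coefficient tuple; (0, 1, 0, 1) means t^3 + t."""
    if length(v) == 0:                # base case v = e
        return (1,) if u == v else (0,)
    i = next(i for i in range(len(v) - 1) if v[i] > v[i + 1])  # l(v s) < l(v)
    vs, us = times_s(v, i), times_s(u, i)
    if length(us) < length(u):        # first case of the recursion
        return R_tilde(us, vs)
    # second case: R~_{us,vs} + t * R~_{u,vs}
    return padd(R_tilde(us, vs), (0,) + R_tilde(u, vs))
```

In $\mathfrak{S}_3$ this gives $\tilde{R}_{e,w_0}(t)=t^3+t$ for the longest element $w_0=s_1s_2s_1$, a monic polynomial of degree $l(w_0)-l(e)=3$, as predicted by Proposition \ref{propose decomposition R tilde polynomials}.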
\section{Diagrammatic $\tilde{ \mathcal{R} }$-polynomials} \label{section diagrams}
In this section, we associate to each (non-necessarily reduced) word $\underline{v}$ a binary tree $\mathbb{T}_{\underline{v}}$ with nodes decorated by certain diagrams. The diagrams represent morphisms in the category of Soergel bimodules. However, we only treat them as combinatorial objects herein. For the algebraic meaning of the diagrams, the reader is referred to \cite{elias2010diagrammatics,elias2016soergel,libedinsky2008categorie,libedinsky2015light,libedinsky2017gentle}. In the following, we treat $S$ as a set of colours.
\begin{defi} \rm
An $S$-graph is a finite planar graph with its boundary embedded in a planar strip $\mathbb{R}\times [0,1]$, whose edges are coloured by elements of $S$, and all of whose vertices are of the following types\footnote{The reader familiar with diagrammatics for Soergel categories should note that the definition of $S$-graph herein does not allow trivalent vertices or polynomials to float in the graph. }:
\begin{enumerate}
\item Univalent vertices (Dots): $ \quad \univalent \quad $
\item $2m_{st}$-valent vertices: $\quad \multivalent \quad$
\end{enumerate}
Here, exactly $2m_{st}$ edges originate from the vertex, and they are coloured alternately by $s$ and $t$. For instance, the picture above corresponds to $m_{\red{s} \blue{t}}=4$.
\end{defi}
\begin{exa}\rm
Let $S=\{{\red{s}},{\blue{t}},{\green{u}} \}$ with $m_{{\red{s}}{\blue{t}}}=3$, $m_{{\red{s}}{\green{u}}}=2$ and $ m_{{\blue{t}}{\green{u}}}=3$. An example of an $S$-graph is given by
\begin{equation} \label{example Soergel graph}
\exampleSgraph
\end{equation}
\end{exa}
The boundary points of an $S$-graph on $\mathbb{R}\times \{0\}$ and on $\mathbb{R}\times \{1\}$ determine two sequences of coloured points, and hence two words in the alphabet $S$. We call these two sequences the bottom sequence and top sequence, respectively. For example, the bottom (resp. top) sequence of the $S$-graph in (\ref{example Soergel graph}) is $ststuutss$ (resp. $ust$).
\begin{defi}\rm
Given an $S$-graph $D$ we define its degree, $\deg (D)$, as the number of dots.
\end{defi}
For instance, the degree of the $S$-graph in (\ref{example Soergel graph}) is $4$. For the remainder of this section, we fix a word $\underline{v}=s_1s_2\ldots s_{n}$, $s_i\in S$. The construction of $\mathbb{T}_{\underline{v}}$ is by induction on the depth of the nodes and is as follows. In depth one, we have the following tree:
\begin{equation}
\levelone
\end{equation}
Here, we assume that the first letter of $\underline{v}$, $s_1$, is associated with the colour red. Let $1<k\leq n$ and assume that a node $N$ of depth $k-1$ has been decorated by the diagram $D$. Further, suppose that the top boundary of $D$, say $\underline{u}$, is a reduced expression for some $u\in W$. If the letter $s_k$ is blue, then we have two possibilities.
\begin{enumerate}
\item If $l(us_k)>l(u)$, then $N$ contains two child nodes that are decorated as follows.
\begin{equation}
\treestepone
\end{equation}
\item If $l(us_k)<l(u)$, then $N$ contains one child node. It is well known that in this case, there exists a reduced expression $\underline{u}'$ of $u$ with a letter $s_k$ in its rightmost position. Furthermore, we can obtain $\underline{u}'$ from $\underline{u}$ by applying a sequence of braid movements. Diagrammatically, we move a blue edge to the rightmost position by placing a sequence of $2m_{st}$-valent vertices on the top of $D$. Subsequently, we connect the two blue edges as illustrated below.
\begin{equation}
\treesteptwo
\end{equation}
The grey region in the diagram above indicates the place where the sequence of $2m_{st}$-valent vertices is located. This completes the construction of $\mathbb{T}_{\underline{v}}$.
\end{enumerate}
The diagrams decorating the leaves of $\mathbb{T}_{\underline{v}}$ are called light leaves\footnote{Herein, we use the term ``light leaves'' to indicate a subset of the typical set of light leaves, namely the ones that do not contain any trivalent vertex. See \cite{libedinsky2008categorie} for the original definition of light leaves and \cite{elias2016soergel} for its diagrammatic version. We hope that this does not cause any confusion for the reader.} of $\underline{v}$. The set of all light leaves of $\underline{v}$ is denoted by $\mathbb{L}_{\underline{v}}$. By construction, the bottom sequence of any element of $\mathbb{L}_{\underline{v}}$ is $\underline{v}$. Meanwhile, the top sequence of any element of $\mathbb{L}_{\underline{v}}$ is a reduced expression of some element of $W$, even if the word $\underline{v}$ is not reduced. Given $u\in W$, we denote by $\mathbb{L}_{\underline{v}}(u)$ the set of all light leaves of $\underline{v}$ whose top sequence is a reduced expression of $u$.
\begin{rem} \rm \label{remark ambiguity}
Step $2$ in the construction of $\mathbb{T}_{\underline{v}}$ introduces some ambiguity in the sets $\mathbb{L}_{\underline{v}}$ and $\mathbb{L}_{\underline{v}}(u)$. The problem here is that we have multiple choices to pass from one reduced expression to another. Hence, Step 2 can be performed in many ways. In the following, we treat the sets $\mathbb{L}_{\underline{v}}$ and $\mathbb{L}_{\underline{v}}(u)$ as any admissible choice of the corresponding $S$-graphs.
\end{rem}
\begin{exa}\rm \label{exa simplest}
Let $\underline{v}=\red{s} \red{s} \red{s}$. The tree $\mathbb{T}_{\underline{v}}$ associated to $\underline{v}$ is given by
\begin{equation}
% pstricks picture of the tree \mathbb{T}_{\underline{v}} omitted
\end{equation}
Exactly five light leaves exist for $\underline{v}$. They are split into two classes according to their top sequence. We have
\begin{equation}
\mathbb{L}_{\underline{v}} = \mathbb{L}_{\underline{v}}(s) \cup \mathbb{L}_{\underline{v}}(e).
\end{equation}
\end{exa}
\begin{exa}\rm \label{example two different expressions}
Let $W$ be the symmetric group on four letters with $S=\{ {\red{s_1}}, {\blue{s_2}} , {\green{s_3}} \}$. Consider the words $\underline{v}= {\red{s_1}}{\blue{s_2}}{\red{s_1}}{\green{s_3}} {\blue{s_2}}{\red{s_1}}$ and $\underline{w}= {\blue{s_2}} {\green{s_3}}{\red{s_1}} {\blue{s_2}}{\green{s_3}}{\red{s_1}}$. Notice that both $\underline{v}$ and $\underline{w}$ are reduced words of the same element. We have
\begin{equation}
\mathbb{L}_{\underline{v}}(e)= \left\{ \exampletreeA \right\}
\end{equation}
\begin{equation}
\mathbb{L}_{\underline{w}}(e)= \left\{ \exampletreeB \right\}
\end{equation}
This example stresses that, although the sets $\mathbb{L}_{\underline{v}}(e)$ and $\mathbb{L}_{\underline{w}}(e)$ are different, they contain the same number of elements in each degree.
\end{exa}
The number and degree of the elements of the set $\mathbb{L}_{\underline{v}}(u)$ are collected in a polynomial, $\tilde{\mathcal{R}}_{u,\underline{v}}(t) \in \mathbb{N}[t]$, which we call the ``diagrammatic $\tilde{R}$-polynomial'' associated to $u$ and $\underline{v}$. Concretely, we define
\begin{equation}
\tilde{\mathcal{R}}_{u,\underline{v}}(t) = \sum_{l\in \mathbb{L}_{\underline{v}}(u) } t^{\deg (l)}.
\end{equation}
Furthermore, if $\underline{v}$ is an empty word, we define
\begin{equation}
\tilde{\mathcal{R}}_{u,\underline{v}}(t) = \left\{ \begin{array}{ll}
1, & \mbox{ if } u=e;\\
0, & \mbox{ otherwise.}
\end{array} \right.
\end{equation}
\begin{rem}\rm
As mentioned in Remark \ref{remark ambiguity}, the sets $\mathbb{L}_{\underline{v}}(u)$ are not uniquely defined. However, regardless of how Step $2$ is performed in the construction of the tree $\mathbb{T}_{\underline{v}}$, we always obtain the same polynomial $\tilde{\mathcal{R}}_{u,\underline{v}}(t)$. In other words, $\tilde{\mathcal{R}}_{u,\underline{v}}(t)$ is well defined, that is, it only depends on the word $\underline{v}$ and the element $u$.
\end{rem}
\begin{exa}\rm
With the same notation as in Example \ref{example two different expressions}, we have
\begin{equation}
\tilde{\mathcal{R}}_{e,\underline{v}} (t)= \tilde{\mathcal{R}}_{e,\underline{w}} (t)= t^6+3t^4+t^2.
\end{equation}
\end{exa}
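The construction of $\mathbb{T}_{\underline{v}}$ can be simulated without drawing any diagram: a node is determined by the pair (top element, degree), an ``up'' letter branches into extending the edge (same degree) or placing a dot (degree $+1$), and a ``down'' letter forces a single child of the same degree. The following sketch (our own encoding, for the symmetric group with $1$-based generator subscripts) computes $\tilde{\mathcal{R}}_{u,\underline{v}}$ this way:

```python
from collections import Counter
from itertools import combinations

def length(w):                        # Coxeter length = number of inversions
    return sum(1 for i, j in combinations(range(len(w)), 2) if w[i] > w[j])

def times_s(w, i):                    # w * s_{i+1}: swap positions i, i+1 (0-based)
    w = list(w); w[i], w[i + 1] = w[i + 1], w[i]
    return tuple(w)

def diagrammatic_R_tilde(u, word, n):
    """Degrees of the light leaves in L_word(u), with multiplicities."""
    # states: (top element of the node, number of dots) -> number of nodes
    states = Counter({(tuple(range(1, n + 1)), 0): 1})
    for i in word:                    # process the letters of `word` left to right
        new = Counter()
        for (w, d), m in states.items():
            ws = times_s(w, i - 1)
            if length(ws) > length(w):
                new[(ws, d)] += m     # extend the new edge to the top
                new[(w, d + 1)] += m  # put a dot on the new edge
            else:
                new[(ws, d)] += m     # forced single child (braid moves, no dots)
        states = new
    return {d: m for (w, d), m in states.items() if w == u}
```

For the two reduced words of Example \ref{example two different expressions} this returns $\{2\colon 1,\,4\colon 3,\,6\colon 1\}$, i.e., $\tilde{\mathcal{R}}_{e,\underline{v}}(t)=\tilde{\mathcal{R}}_{e,\underline{w}}(t)=t^6+3t^4+t^2$, matching the example.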
The following is the diagrammatic version of (\ref{decomposition R tilde polynomials}).
\begin{lem} \label{lema decomposition R tilde polynomials}
Let $\underline{v}=s_1\ldots s_ns$ be a word and $u\in W $. Let $\underline{v}'$ be the word obtained from $\underline{v}$ by erasing the rightmost letter, that is, $\underline{v}'=s_1\ldots s_{n}$. Then,
\begin{equation} \label{decomposition diagrammatic R tilde polynomials}
\tilde{\mathcal{R}}_{u,\underline{v}}(t)= \left\{ \begin{array}{ll}
\tilde{\mathcal{R}}_{us,\underline{v}'}(t), & \mbox{ if } l(us)<l(u); \\
\tilde{\mathcal{R}}_{us,\underline{v}'}(t) + t\tilde{\mathcal{R}}_{u,\underline{v}'}(t), & \mbox{ if } l(us)>l(u).
\end{array} \right.
\end{equation}
\end{lem}
\begin{demo}
Suppose we have fixed $\mathbb{T}_{\underline{v}'}$. We will construct $\mathbb{T}_{\underline{v}}$ from our particular choice of $\mathbb{T}_{\underline{v}'}$.\footnote{That is, among all possibilities for $\mathbb{T}_{\underline{v}'}$, we have fixed one and from this tree, we complete the construction of $\mathbb{T}_{\underline{v}}$ by applying Step 1 or Step 2 to each leaf node of $\mathbb{T}_{\underline{v}'}$.} In particular, we fixed the sets $\mathbb{L}_{\underline{v}'}(x)$, for all $x\in W$. Let $l\in \mathbb{L}_{\underline{v}'}(x)$. The $S$-graphs decorating the child nodes of the node decorated by $l$ in $\mathbb{T}_{\underline{v}}$ belong to $ \mathbb{L}_{\underline{v}}(x)$ or $ \mathbb{L}_{\underline{v}}(xs)$. Hence, all the elements in $ \mathbb{L}_{\underline{v}}(u)$ are obtained from the elements of $ \mathbb{L}_{\underline{v}'}(u)$ or $ \mathbb{L}_{\underline{v}'}(us)$. We split the proof into two cases.
\begin{itemize}
\item[Case A. ] Suppose that $l(us)<l(u)$. Let $l\in \mathbb{L}_{\underline{v}'}(u)$.
Under this hypothesis, the node decorated by $l$ in $\mathbb{T}_{\underline{v}}$ contains one child node. Furthermore, the $S$-graph decorating this child node belongs to $\mathbb{L}_{\underline{v}}(us)$. Now, suppose that $l\in \mathbb{L}_{\underline{v}'}(us)$. In this case, the node decorated by $l$ in $\mathbb{T}_{\underline{v}}$ appears as
\begin{equation}
\treeproof
\end{equation}
For the above, we conclude that all the elements of $ \mathbb{L}_{\underline{v}}(u)$ are obtained by adding a line in the rightmost region of each element of $ \mathbb{L}_{\underline{v}'}(us)$. Therefore,
\begin{equation}
\tilde{\mathcal{R}}_{u,\underline{v}}(t)= \tilde{\mathcal{R}}_{us,\underline{v}'}(t),
\end{equation}
proving the result in this case.
\item[Case B. ] Suppose that $l(us)>l(u)$. Let $l\in \mathbb{L}_{\underline{v}'}(us)$. Under this hypothesis, the node decorated by $l$ in $\mathbb{T}_{\underline{v}}$ contains one child node. Furthermore, the $S$-graph decorating this child node contains the same degree as $l$ and belongs to $\mathbb{L}_{\underline{v}}(u)$. Therefore, each element in $\mathbb{L}_{\underline{v}'}(us)$ produces a light leaf in $\mathbb{L}_{\underline{v}}(u)$ of the same degree.
Now, suppose that $l\in \mathbb{L}_{\underline{v}'}(u)$. In this case, the node decorated by $l$ in $\mathbb{T}_{\underline{v}}$ appears as
\begin{equation}
\treeproofA
\end{equation}
Therefore, each element $l\in \mathbb{L}_{\underline{v}'}(u)$ produces a light leaf $l'\in \mathbb{L}_{\underline{v}}(u)$ with $\deg(l')=\deg(l)+1$. Summing up, we obtain
\begin{equation}
\tilde{\mathcal{R}}_{u,\underline{v}}(t)= \tilde{\mathcal{R}}_{us,\underline{v}'}(t) + t\tilde{\mathcal{R}}_{u,\underline{v}'}(t).
\end{equation}
\end{itemize}
\end{demo}
\begin{teo} \label{teo diagrammatic R poly}
Let $\underline{v}$ be a reduced expression of an element $v\in W$. Then,
\begin{equation}
\tilde{\mathcal{R}}_{u,\underline{v}}(t)= \tilde{R}_{u,v}(t),
\end{equation}
for all $u\in W$. In particular, $\tilde{\mathcal{R}}_{u,\underline{v}}(t)$ does not depend on the particular choice of a reduced expression for $v$.
\end{teo}
\begin{demo}
We proceed by induction on $l(v)$. If $l(v)\leq 1$, the result is clear. Suppose that $l(v)>1$ and let $\underline{v}=s_1\ldots s_n s$ be a reduced expression of $v$. We remark that $\underline{v}'=s_1\ldots s_n$ is a reduced expression of $vs$ and that $l(vs)<l(v)$. Let $u\in W$ and suppose that $l(us)>l(u)$. Then, Proposition \ref{propose decomposition R tilde polynomials}, Lemma \ref{lema decomposition R tilde polynomials}, and our induction hypothesis yield
\begin{equation}
\begin{array}{rl}
\tilde{\mathcal{R}}_{u,\underline{v}}(t)= & \tilde{\mathcal{R}}_{us,\underline{v}'}(t) + t\tilde{\mathcal{R}}_{u,\underline{v}'}(t) \\
= & \tilde{R}_{us,vs}(t) + t\tilde{R}_{u,vs}(t) \\
= & \tilde{R}_{u,v}(t)
\end{array}
\end{equation}
The case when $l(us)<l(u)$ is treated similarly.
\end{demo}
\section{ Closed formulas for $\tilde{R}$-polynomials.} \label{section closed}
In this section, we provide some examples of the computations of $\tilde{R}$-polynomials using the diagrammatic approach. In particular, we recover and generalise uniformly some closed formulas obtained by Pagliacci \cite{pagliacci2001explicit} and Marietti \cite{marietti2002closed}.
\subsection{$\tilde{R}$-polynomials for permutations smaller than a transposition.} \label{section transposition}
In this section, we restrict our attention to the symmetric group $\mathfrak{S}_n$. We express the permutations using either the cycle notation or one-line notation. For instance, $ (1,2,3) $ and $231$ represent the same permutation. We denote by $s_i$ the simple transposition $(i,i+1)$, for $1\leq i <n$. We establish the convention that $s_i$ acts on the left on $\{ 1,2,\ldots ,n \}$.
The following was conjectured in \cite[Conjecture 7.7]{brenti1998kazhdan} and proven in \cite{marietti2002closed}.
\begin{teo} \label{Brenti conjecture}
Let $u,v \in \mathfrak{S}_n$ be such that $u\leq v \leq (a,b)$ for some $1 \leq a< b \leq n $. Then,
\begin{equation}
\tilde{R}_{u,v}(t)=t^{c}(t^2+1)^{\frac{1}{2} (l(v)-l(u)-c )}
\end{equation}
for some $c\in \mathbb{N}$.
\end{teo}
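Theorem \ref{Brenti conjecture} can be verified by machine on small cases. The sketch below is our own: it re-implements the recursion of Proposition \ref{propose decomposition R tilde polynomials} for the symmetric group, uses the fact that $u\leq v$ exactly when $\tilde{R}_{u,v}\neq 0$, and checks the product form for every pair $u\leq v\leq (1,4)$ in $\mathfrak{S}_4$.

```python
from itertools import combinations, permutations
from math import comb

def length(w):
    return sum(1 for i, j in combinations(range(len(w)), 2) if w[i] > w[j])

def times_s(w, i):
    w = list(w); w[i], w[i + 1] = w[i + 1], w[i]
    return tuple(w)

def padd(p, q):
    n = max(len(p), len(q))
    p, q = p + (0,) * (n - len(p)), q + (0,) * (n - len(q))
    return tuple(a + b for a, b in zip(p, q))

def R_tilde(u, v):
    """R~_{u,v} as a coefficient tuple (index = degree)."""
    if length(v) == 0:
        return (1,) if u == v else (0,)
    i = next(i for i in range(len(v) - 1) if v[i] > v[i + 1])
    vs, us = times_s(v, i), times_s(u, i)
    if length(us) < length(u):
        return R_tilde(us, vs)
    return padd(R_tilde(us, vs), (0,) + R_tilde(u, vs))

def is_product_form(p):
    """Does p equal t^c (t^2 + 1)^m for some c, m in N?"""
    nz = [d for d, a in enumerate(p) if a]
    if not nz or (nz[-1] - nz[0]) % 2:
        return False
    c, m = nz[0], (nz[-1] - nz[0]) // 2
    target = [0] * (nz[-1] + 1)
    for j in range(m + 1):
        target[c + 2 * j] = comb(m, j)
    return list(p[:nz[-1] + 1]) == target and not any(p[nz[-1] + 1:])

# check the theorem for the transposition (1,4) in S_4 (one-line 4231)
transposition = (4, 2, 3, 1)
S4 = list(permutations(range(1, 5)))
checked = 0
for v in S4:
    if any(R_tilde(v, transposition)):        # v <= (1,4)
        for u in S4:
            p = R_tilde(u, v)
            if any(p):                        # u <= v
                assert is_product_form(p)
                checked += 1
```

Every pair in the interval passes the check; `checked` counts the pairs tested.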
We prove a slight generalisation of Theorem \ref{Brenti conjecture} using the diagrammatic approach. We first require the following definition.
\begin{defi} \rm \label{definition UD-word}
We say a word $\underline{v}$ is ``up and down'' (or UD-word) if
\begin{equation} \label{almost favourite reduced expression}
\underline{v}= (s_{i_1}s_{i_2}\ldots s_{i_{r}}) s_t (s_{j_1}s_{j_2}\ldots s_{j_q}),
\end{equation}
where
\begin{enumerate}
\item $1\leq i_1 <i_2 <\ldots <i_{r} <t <n$;
\item $1\leq j_q <\ldots <j_2 <j_1<t< n$.
\end{enumerate}
\end{defi}
The definition above does not exclude the possibility that one or both of the parentheses can be empty.
It is noteworthy that in general, UD-words are not reduced expressions.
\begin{lem} \label{lemma simple tree A}
Let $\underline{v}$ be a UD-word. Then, the tree $\mathbb{T}_{\underline{v}}$ can be constructed
without using $6$-valent vertices. Furthermore, for this construction, if $u\in \mathfrak{S}_n$ and $\mathbb{L}_{\underline{v}}(u)\neq \emptyset $, then the top sequence of any element of $\mathbb{L}_{\underline{v}}(u)$ is a UD-word as well.
\end{lem}
\begin{demo}
We must use a $6$-valent vertex in the construction of $\mathbb{T}_{\underline{v}}$ only if $\underline{v}$ admits a subword of the type $s_is_{i+1}s_is_{i+1}$ or $s_{i}s_{i-1}s_is_{i-1}$. Both options are impossible for a UD-word.
Conversely, if $\mathbb{T}_{\underline{v}}$ is constructed without using $6$-valent vertices, then the top sequence of any element $l\in \mathbb{L}_{\underline{v}}(u)$ is obtained from $\underline{v}$ by erasing some letters. Therefore, such a sequence must be a UD-word.
\end{demo}
\begin{exa}\label{example we are forced} \rm
Lemma \ref{lemma simple tree A} tells us that among all the possibilities to construct
$\mathbb{T}_{\underline{v}}$ for a UD-word, there exists at least one option in which we do not need to use $6$-valent vertices. Consider the UD-word $ \underline{v} = {\red{s_1} }{\blue{s_3} }{\green{s_4} }{\blue{s_3} }{\red{s_1}} $. The reader can easily verify that the following $S$-graph decorates a node of depth $4$ in $\mathbb{T}_{\underline{v}}$.
\begin{equation}
% \usepackage[usenames,dvipsnames]{pstricks
\end{equation}
This node contains only one child node that can be decorated by
\begin{equation} \label{two decorations}
% \usepackage[usenames,dvipsnames]{pstricks \qquad \mbox{ or } \qquad % \usepackage[usenames,dvipsnames]{pstricks\quad .
\end{equation}
Lemma \ref{lemma simple tree A} only claims the existence of the $S$-graph on the left of (\ref{two decorations}). The $S$-graph on the right of (\ref{two decorations}) is, despite being slightly bizarre, an admissible decoration.
\end{exa}
\begin{teo} \label{teo R poly UD-words}
Let $\underline{v}$ be a $UD$-word. If $u\in \mathfrak{S}_n$ satisfies $\mathbb{L}_{\underline{v}}(u)\neq \emptyset$, then
\begin{equation} \label{equation teo diagram R dominated by a transposition}
\tilde{\mathcal{R}}_{u,\underline{v}}(t) =t^{c}(t^2+1)^{\frac{1}{2} (l(\underline{v})-l(u)-c )}
\end{equation}
for some $c\in \mathbb{N}$.
\end{teo}
\begin{demo}
By Lemma \ref{lemma simple tree A}, there is an admissible construction of $\mathbb{T}_{\underline{v}}$ that does not use $6$-valent vertices. This allows us to obtain the set $\mathbb{L}_{\underline{v}}(u)$ without constructing the whole tree $\mathbb{T}_{\underline{v}}$. This is performed by analysing each letter occurring in $\underline{v}$ separately. Eight cases are to be considered. In the diagrams below, we only draw the letters involved in each case.
\begin{itemize}
\item[Case A1.] Exactly one $s_{i}$ appears in $\underline{v}$ and $s_i$ does not appear in $u$. In this case, the elements in $\mathbb{L}_{\underline{v}}(u)$ are forced to be of the form
\begin{equation}
% \usepackage[usenames,dvipsnames]{pstricks
\end{equation}
\item[Case A2.] Exactly one $s_{i}$ appears in $\underline{v}$ and $s_i$ appears (necessarily once) in $u$. In this case, elements in $\mathbb{L}_{\underline{v}}(u)$ are forced to be of the form
\begin{equation}
% \usepackage[usenames,dvipsnames]{pstricks
\end{equation}
\item[Case B1.] The letter $s_i$ appears twice in $\underline{v}$ and twice in $u$. It is noteworthy that this forces a letter $s_{i+1}$ to appear in between the two $s_i$. In this case, the elements in $\mathbb{L}_{\underline{v}}(u)$ are forced to be of the form
\begin{equation}
% \usepackage[usenames,dvipsnames]{pstricks
\end{equation}
\item[Case C1.] The letter $s_i$ appears twice in $\underline{v}$, $s_{i+1}$ appears in $u $, and $s_i $ appears once in $u$ on the left of $s_{i+1}$. In this case, the elements in $\mathbb{L}_{\underline{v}}(u)$ are forced to be of the form
\begin{equation}
% \usepackage[usenames,dvipsnames]{pstricks
\end{equation}
\item[Case C2.] The letter $s_i$ appears twice in $\underline{v}$, $s_{i+1}$ appears in $u $, and $s_i $ appears once in $u$ on the right of $s_{i+1}$. In this case, the elements in $\mathbb{L}_{\underline{v}}(u)$ are forced to be of the form
\begin{equation}
% \usepackage[usenames,dvipsnames]{pstricks
\end{equation}
\item[Case C3.] The letter $s_i$ appears twice in $\underline{v}$, and $s_{i+1}$ does not appear in $u $. In this case, the elements in $\mathbb{L}_{\underline{v}}(u)$ are forced to be of the form
\begin{equation}
% \usepackage[usenames,dvipsnames]{pstricks
\end{equation}
\item[Case D1.] The letter $s_i$ appears twice in $\underline{v}$, $s_{i+1}$ appears in $u $, and $s_i$ does not appear in $u$. In this case, the elements in $\mathbb{L}_{\underline{v}}(u)$ are forced to be of the form
\begin{equation}
% \usepackage[usenames,dvipsnames]{pstricks
\end{equation}
\item[Case D2.] The letter $s_i$ appears twice in $\underline{v}$, $s_{i+1}$ does not appear in $u $, and $s_i$ does not appear in $u$. In this case, the elements in $\mathbb{L}_{\underline{v}}(u)$ have two options
\begin{equation} \label{last case}
% \usepackage[usenames,dvipsnames]{pstricks
\end{equation}
\end{itemize}
We construct the elements of $\mathbb{L}_{\underline{v}}(u)$ by considering the letters appearing in $\underline{v}$ in decreasing order. The case analysis above shows that the choice made at each stage is independent of the others, which explains why the formula in (\ref{equation teo diagram R dominated by a transposition}) is a product.
The only case in which more than one option exists is D2. In case D2, the factor $(t^{2}+1)$ (see the degrees of the diagrams in (\ref{last case})) appears in $\tilde{\mathcal{R}}_{u,\underline{v}}(t) $. Each of the other cases contributes a power of $t$, whose exponent is $0$, $1$, or $2$ according to the number of dots involved. The sum of these exponents yields the value of the integer $c$.
\end{demo}
\begin{coro}
Theorem \ref{Brenti conjecture} holds.
\end{coro}
\begin{demo}
A reduced expression for $(a,b)$ is given by
\begin{equation}
s_as_{a+1}\ldots s_{b-2}s_{b-1}s_{b-2}\ldots s_{a+1}s_{a}
\end{equation}
It follows directly from Lemma \ref{lema subword property} that any $e< v\leq (a,b)$ has a reduced expression which is a UD-word. The result is now a direct consequence of Theorem \ref{teo R poly UD-words}.
\end{demo}
\begin{exa}\rm
Consider the UD-word $\underline{v}=s_1s_2s_4s_5s_7s_9s_8s_7s_4s_3s_2s_1$ and the element $u=s_7s_9s_8s_3\in \mathfrak{S}_{10}$. We list each letter together with its corresponding case as in the proof of Theorem \ref{teo R poly UD-words}.
\begin{equation}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
Letter & $s_9$ & $s_8$& $s_7$& $s_5$& $s_4$& $s_3$& $s_2$& $s_1$\\
\hline
Case & A2 & A2& C1& A1 & D2& A2 & D1 &D2 \\
\hline
Exponent & $0$ &$0$ &$1$ & $1$ & & $0$& $2$& \\
\hline
\end{tabular}
\end{equation}
As Case D2 occurs twice, the factor $(t^2+1)^2$ appears. Adding the exponents in the table above, we obtain $c=4$. Therefore,
\begin{equation}
\tilde{\mathcal{R}}_{u,\underline{v}}(t) =t^{4}(t^2+1)^2.
\end{equation}
The four light leaves in $\mathbb{L}_{\underline{v}}(u)$ are depicted in (\ref{total leaves}), where we have drawn all edges with the same colour and have replaced the letters $s_i$ by $i$.
\begin{equation} \label{total leaves}
% \usepackage[usenames,dvipsnames]{pstricks
\end{equation}
\end{exa}
\vspace{.3cm}
\subsection{$\tilde{\mathcal{R}}$-polynomials for the simplest words.}
In this section, we again consider an arbitrary Coxeter system $(W,S)$. Among all the words in the alphabet $S$, the simplest ones are those formed by one letter. We compute the diagrammatic $\tilde{\mathcal{R}}$-polynomials for such words. For the remainder of this section, we set $s\in S$ and for $n\in \mathbb{N}$, we define
\begin{equation}
\underline{n_s}= sss\cdots \qquad (n\mbox{-times}).
\end{equation}
A moment's thought reveals that $\tilde{\mathcal{R}}_{u,\underline{n_s}}(t)=0$, unless $u=e$ or $u=s$. To treat the cases $u=e$ and $u=s$, we introduce the Fibonacci polynomials.
\begin{defi} \rm
For each $n\in \mathbb{N}$, we define the $n$-th Fibonacci polynomial $F_n(v)\in \mathbb{N}[v]$ by the recurrence
\begin{equation}
F_n(v)=vF_{n-1}(v)+F_{n-2}(v),
\end{equation}
with initial conditions $F_0(v) = 1$ and $F_1(v)= v$.
\end{defi}
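From the recurrence, the first few Fibonacci polynomials are
\begin{equation}
F_2(v)=v^2+1, \qquad F_3(v)=v^3+2v, \qquad F_4(v)=v^4+3v^2+1, \qquad F_5(v)=v^5+4v^3+3v.
\end{equation}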
The Fibonacci polynomials have a combinatorial description in terms of paths in the Fibonacci tree.
\begin{defi}\rm
The $n$-th Fibonacci tree $FT_n$ is the binary tree defined inductively as follows. First, we have
\begin{equation}
FT_1 \longrightarrow % \usepackage[usenames,dvipsnames]{pstricks
\end{equation}
Suppose now that $FT_n$ has been defined. Then, $FT_{n+1}$ is obtained from $FT_n$ by adding one child in each leaf node of $FT_n$ that is a left brother in $FT_n$, and two child nodes (one to the left and one to the right) for the other leaf nodes of $FT_n$.
\end{defi}
\begin{exa} \rm
$FT_6$ is depicted below.
\begin{equation}
% \usepackage[usenames,dvipsnames]{pstricks
\end{equation}
\end{exa}
Let $P_n$ be the set of all paths\footnote{In this paper, a path is always going down.} from the root to the leaves of $FT_n$. To each $p\in P_n$, we define $\rho (p)$ as the number of right steps in $p$. Finally, we define $P_n'$ to be the subset of $P_n$ formed by the paths $p$ such that the last step is not a left step. It is a straightforward exercise to verify that
\begin{equation} \label{interpretation Fibo poly}
F_n(t)= \sum_{p\in P'_n} t^{\rho (p)}.
\end{equation}
\begin{teo}
Let $n\in \mathbb{N}$. Then,
\begin{equation}
\tilde{\mathcal{R}}_{u,\underline{n_s}}(t)= \left\{ \begin{array}{rl}
F_n(t), & \mbox{ if } u=e;\\
F_{n-1}(t), & \mbox{ if } u=s.
\end{array} \right.
\end{equation}
\end{teo}
\begin{demo}
Temporarily disregarding the decorations of the nodes of $\mathbb{T}_{\underline{n_{s}}}$, we observe that $\mathbb{T}_{\underline{n_{s}}}$ coincides with $FT_n$. Under this identification, we may view the light leaves of $\underline{n_s}$ as paths in $\mathbb{T}_{\underline{n_{s}}}$. With this interpretation, the right steps correspond to adding a dot in the rightmost region of a diagram; the left steps correspond to adding a straight line in the rightmost region of a diagram, and central steps\footnote{That is, the ones that are not left or right.} correspond to transforming the rightmost straight line of a diagram into a loop.
As the degree of a diagram is the number of dots, we obtain that the degree of a light leaf of $\underline{n_s}$ is the number of right steps. Meanwhile, we have
\begin{equation}
\mathbb{L}_{\underline{n_s}}(e) = \{ l\in \mathbb{L}_{\underline{n_s}} \,|\, \mbox{last step in } l \mbox{ is not a left step } \}
\end{equation}
Therefore, (\ref{interpretation Fibo poly}) yields
\begin{equation}
\tilde{\mathcal{R}}_{e,\underline{n_s}}(t)= \sum_{l\in \mathbb{L}_{\underline{n_s}}(e)} t^{\deg(l)} = \sum_{p\in P_n'} t^{\rho (p)} = F_n(t).
\end{equation}
Finally, there is a degree-preserving bijection $ \mathbb{L}_{\underline{(n-1)_s}} (e) \longrightarrow \mathbb{L}_{\underline{n_s}} (s) $, given by adding a straight line in the rightmost region of each element of $\mathbb{L}_{\underline{(n-1)_s}} (e)$. Hence,
\begin{equation}
\tilde{\mathcal{R}}_{s,\underline{n_s}}(t)= \sum_{l\in \mathbb{L}_{\underline{n_s}}(s)} t^{\deg(l)}= \sum_{l\in \mathbb{L}_{\underline{(n-1)_s}}(e)}t^{\deg(l)}= \tilde{\mathcal{R}}_{e,\underline{(n-1)_s}}(t)=F_{n-1}(t).
\end{equation}
\end{demo}
\subsection{The polynomial $\tilde{R}_{e,v}(t)$ for $ v=34\cdots n12$.}
In this section, we focus on the symmetric group $\mathfrak{S}_n $. We obtain a closed formula for the polynomial $\tilde{R}_{e,v}(t)$ for $ v=34 \cdots n12$. As in the previous section, this formula is given in terms of the Fibonacci polynomials. Such a formula was previously obtained by Pagliacci \cite[Theorem 4.1]{pagliacci2001explicit} using an inductive argument. We present a combinatorial proof in the strict sense. That is, we construct a (degree-preserving) bijection between $\mathbb{L}_{\underline{v_n}}(e)$ and paths in the Fibonacci tree $FT_{n-3}$.
Hereinafter, we set $n\in \mathbb{N}$ and $v_n=34\cdots n12$. It can be easily shown that
\begin{equation}
\underline{v_n} = s_2s_1s_3s_2 s_4s_3 \cdots s_{n-1}s_{n-2}
\end{equation}
is a reduced expression for $v_n$.
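For instance, for $n=4$ this gives
\begin{equation}
\underline{v_4}=s_2s_1s_3s_2, \qquad v_4=3412.
\end{equation}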
We recall from Section \ref{section transposition} that the key to obtaining a closed formula for $\tilde{R}$-polynomials was the fact that the relevant tree could be constructed without using $6$-valent vertices. However, this does not apply to $ \underline{v_n}$ due to the occurrence of subwords of the form $ s_{i}s_{i+1}s_is_{i+1}$ in $\underline{v_n}$; thus, we must use $6$-valent vertices in the construction of $\mathbb{T}_{\underline{v_n}}$. Nevertheless, there exist some $u\leq v_n$ such that the sets $\mathbb{L}_{\underline{v_n}}(u) $ can be constructed without using $6$-valent vertices. In particular, we have the following:
\begin{lem} \label{lemma pagliacci without sixvalent}
The set $\mathbb{L}_{\underline{v_n}}(e)$ can be constructed without using $6$-valent vertices.
\end{lem}
\begin{demo}
Suppose we are forced to use a $6$-valent vertex at some stage of the construction of $\mathbb{T}_{\underline{v_n}}$. At this level, we must have the following diagram:
\begin{equation} \label{six valent}
% \usepackage[usenames,dvipsnames]{pstricks
\end{equation}
\medskip
We recall that elements in $\mathbb{L}_{\underline{v_n}}(e)$ are precisely the light leaves with empty top sequence. Because the letters $ s_{i}$ and $s_{i+1} $ do not occur in $\underline{v_n}$ to the right of the ones shown in (\ref{six valent}), the top sequence of any light leaf emanating from this diagram contains at least two letters. In particular, such light leaves do not belong to $\mathbb{L}_{\underline{v_n}}(e)$.
\end{demo}
To provide a closed formula for polynomials $\tilde{R}_{e,v_n}(t)$, we introduce slightly modified Fibonacci polynomials. For $n\geq 1$, we define
\begin{equation}
\mathcal{F}_{n}(t)=t^{n-1}F_{n+1}(t).
\end{equation}
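For instance,
\begin{equation}
\mathcal{F}_{1}(t)=t^2+1, \qquad \mathcal{F}_{2}(t)=t^4+2t^2, \qquad \mathcal{F}_{3}(t)=t^6+3t^4+t^2.
\end{equation}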
These polynomials admit the following combinatorial interpretation. Given a path $p \in P_n$, we define $\lambda (p)$ to be the number of left steps in $p$. Then, we have
\begin{equation}
\mathcal{F}_{n}(t)= \sum_{p\in P_n} t^{2(n-\lambda (p))} .
\end{equation}
\begin{teo} \label{teo pagliacci}
For $n\geq 3$, we have
\begin{equation} \label{equation Pagliacci}
\tilde{R}_{e,v_n}(t)= \tilde{\mathcal{R}}_{e,\underline{v_n}} (t) = t^2\mathcal{F}_{n-3}(t)=t^{n-2}F_{n-2}(t).
\end{equation}
\end{teo}
\begin{demo}
We only need to show the equation in the middle of (\ref{equation Pagliacci}). The idea is to construct a bijection between the elements in $\mathbb{L}_{\underline{v_n}}(e)$ and the paths in the Fibonacci tree $FT_{n-3}$.
By Lemma \ref{lemma pagliacci without sixvalent}, we can construct the set $\mathbb{L}_{\underline{v_n}}(e)$ without using $6$-valent vertices. This allows us to treat each letter occurring in $\underline{v_n}$ separately. We recall that elements of $\mathbb{L}_{\underline{v_n}}(e)$ are the light leaves of $\underline{v_n}$ with an empty top sequence.
Let us begin by considering the letters $s_1$ and $s_{n-1}$. These letters appear only once in $ \underline{v_{n}}$; to obtain an element in $\mathbb{L}_{\underline{v_n}}(e)$, we must place dots over these letters. Now, consider the letter $s_{2}$. It appears twice in $\underline{v_{n}}$. Since these occurrences must not appear in the top sequence, we have to eliminate them. This can be done in one of two ways:
\begin{itemize}
\item[(R)] Placing a dot over each one of these letters; or
\item[(L)] Connecting these two letters to form a loop.
\end{itemize}
Option (R) increases the degree by two, and option (L) maintains the same degree. Now, consider the letter $s_3$. We remark that one of the two occurrences of the letter $s_3$ in $\underline{v_n}$ is in between the two letters $s_{2}$. If we use option (R) for the letter $s_2$, then we again have options (R) and (L) for the letter $s_3$. By contrast, if we use option (L) for the letter $s_{2}$, the letter $s_3$ in between the two $s_2$ is ``trapped'' by the loop formed by the letters $s_2$. Therefore, we must place a dot over each occurrence of the letter $s_{3}$. In this case, we increase the degree by two. We denote this unique option by (C). We continue with the letter $s_4$. If we use option (R) or (C) for the letter $s_3$, then we can use options (R) and (L) for $s_4$. If we use option (L) for the letter $s_3$, we are forced to choose option (C) for $s_4$. This pattern repeats until we reach the letter $s_{n-2}$.
In summary, each element of $\mathbb{L}_{\underline{v_n}}(e)$ is determined by a word in the alphabet $\{C,L,R\}$ of length $n-3$ satisfying:
\begin{enumerate}
\item The first letter is $R $ or $L$.
\item Each letter $L$ is followed by a letter $C$.
\item Each letter $C$ is preceded by a letter $L$.
\end{enumerate}
Now, the promised bijection $\mathbb{L}_{\underline{v_n}}(e) \longrightarrow P_{n-3}$ is given by replacing the letters $C$, $L$, and $R$ by central steps, left steps, and right steps in $FT_{n-3}$, respectively. Given $l\in \mathbb{L}_{\underline{v_n}}(e)$, we denote by $p_l$ the image of $l$ under the map above. By construction, we have
\begin{equation}
\deg(l) =2 + 2((n-3)- \lambda (p_l)).
\end{equation}
Therefore, we obtain
\begin{equation}
\tilde{\mathcal{R}}_{e,\underline{v_n}} (t) = \sum_{l\in \mathbb{L}_{\underline{v_n}}(e)} t^{\deg (l)} = \sum_{p\in P_{n-3}} t^2\, t^{2((n-3)- \lambda (p))} = t^2 \mathcal{F}_{n-3}(t).
\end{equation}
\end{demo}
\begin{exa}\rm In (\ref{tree Pagliacci}), we illustrate the bijection in the proof of Theorem \ref{teo pagliacci} for $n=7$. Because of space limitations, we replace letters $s_i$ by $i$. In this case, we have
$\tilde{\mathcal{R}}_{e,\underline{v_7}} (t) =t^{10}+4t^{8}+3t^{6} $.
\begin{equation} \label{tree Pagliacci}
% \usepackage[usenames,dvipsnames]{pstricks
\end{equation}
\end{exa}
\vspace{.3cm}
\subsection{$\tilde{R}_{e,v}(t)$ for $v$ a $321$-avoiding and $2$-repeating permutation.}
In this section, we generalise the formula obtained in the previous section.
\begin{defi}\rm
A permutation $v\in \mathfrak{S}_n$ is called $321$-avoiding if no reduced expression $\underline{v}$ of $v$ contains a subword of consecutive letters of the form $s_is_{i \pm 1}s_i$.
\end{defi}
Let $v\in \mathfrak{S}_n$ be a $321$-avoiding permutation. In this case, the number of occurrences of any letter $s_i$ in $v$ is well defined. That is, it does not depend on the particular choice of a reduced expression for $v$. We define $n_v(i)$ to be the number of occurrences of the letter $s_i$ in $v$, for $1\leq i <n$. We then say that $v$ is $2$-repeating if $n_v(i)\leq 2$, for all $1\leq i <n$. It is clear that the permutations of the form $34\ldots n12$ considered in the previous section are $321$-avoiding and $2$-repeating. We now introduce a diagrammatic method of representing such permutations. Consider the set
\begin{equation}
\mathcal{T}= \{ (i,j)\in \mathbb{Z}^2 \, | \, j\leq 0, \, |i|\leq |j| \, ,\, i\equiv j \mbox{ mod } 2 \}.
\end{equation}
We call a finite subset of $\mathcal{T}$ a \emph{configuration of points}.
\begin{defi} \label{defin configuration}
We say a configuration of points $\mathcal{C}$ is admissible if it satisfies the following:
\begin{enumerate}
\item No three points in $\mathcal{C}$ with the same $y$-coordinate exist.
\item Let $P_1=(i_1,j_1)$ and $P_2=(i_2,j_2)$ be distinct points in $\mathcal{C}$ with $j_1=j_2$. Then, we have $|i_1-i_2|=2$.
\item Let $P_1=(i_1,j_1)$ and $P_2=(i_2,j_2)$ be points in $\mathcal{C}$ with $j_1=j_2$ and $i_1<i_2$. Then, $(i_1+1,j_1+1)$ and $(i_1+1,j_1-1)$ belong to $\mathcal{C}$.
\end{enumerate}
\end{defi}
An example of an admissible configuration of points is illustrated in (\ref{configuration}).
\begin{equation} \label{configuration}
% \usepackage[usenames,dvipsnames]{pstricks
\end{equation}
We now associate to each admissible configuration of points, $\mathcal{C}$, a word in the alphabet $S=\{s_i\}_{i=1}^{\infty} $. First, each point $P=(i,j)\in \mathcal{C}$ is associated with the letter $s_{1-j}$. Then, the word $\underline{v_{\mathcal{C}}}$ associated to $\mathcal{C}$ is obtained by reading the points in $\mathcal{C}$ from left to right and from top to bottom. For instance, the word associated to the configuration of points in (\ref{configuration}) is given by
\begin{equation}
s_{8}s_{10}s_{12}s_{7}s_{9}s_{11}s_{13}s_{6}s_{8}s_{10}s_{9}s_{2}s_{4}s_{1}s_{3}s_{5}s_{2}s_{4}s_{3}s_{18}s_{17}s_{19}s_{16}s_{18}.
\end{equation}
\begin{lem}
Let $\mathcal{C}$ be an admissible configuration of points and $\underline{v_{\mathcal{C}}}$ its corresponding word. Then,
\begin{enumerate}
\item The word $\underline{v_{\mathcal{C}}}$ is a reduced expression of some permutation $v_{\mathcal{C}}$.
\item The permutation $v_\mathcal{C}$ is $321$-avoiding.
\item The permutation $v_\mathcal{C}$ is $2$-repeating.
\end{enumerate}
\end{lem}
\begin{demo}
The first two claims are a consequence of \cite[Lemma 1]{billey2001kazhdan}. The last claim follows immediately from the first condition in Definition \ref{defin configuration}.
\end{demo}
\begin{lem} \label{lema permutation to configuration}
Let $v$ be a $321$-avoiding and $2$-repeating permutation. Then, there exists an admissible configuration of points, $\mathcal{C}$, such that $\underline{v_{\mathcal{C}}}$ is a reduced expression of $v$.
\end{lem}
\begin{demo}
After a rotation of $90$ degrees, the ``heap'' of $v$, constructed in \cite[Section 3]{billey2001kazhdan}, is a configuration satisfying the requirements of the lemma.
\end{demo}
\begin{exa} \label{exa configuration}\rm
As mentioned earlier, permutations $v_n=34\ldots n12$ considered in the previous section are $321$-avoiding and $2$-repeating permutations. The word associated to the configuration in (\ref{confi vn}) is a reduced word for $v_9$.
\begin{equation} \label{confi vn}
% \usepackage[usenames,dvipsnames]{pstricks
\end{equation}
\end{exa}
Contrary to what Example \ref{exa configuration} might suggest, the configuration in Lemma \ref{lema permutation to configuration} is not unique. Considering the five points located in the bottom right of (\ref{configuration}), we can move all of these points two positions to the left or to the right without altering the element represented by the configuration.
Let $v\in \mathfrak{S}_n$ be a $321$-avoiding and $2$-repeating permutation. We now define some statistics on $v$. First, we define $n_v^1$ to be the number of letters in $v$ that appear exactly once. Meanwhile, we say that a set of consecutive integers $\{a,a+1,\ldots , b\}$ is a $2$-chain of $v$ if $n_v(i)=2$, for all $i\in \{a,a+1,\ldots , b\}$, and $n_v(a-1)\neq 2$ and $n_v(b+1)\neq 2$. For instance, if $v$ is the permutation obtained from the configuration in (\ref{configuration}), then the $2$-chains of $v$ are: $\{2,3,4 \}$, $\{ 8,9,10 \}$, and $\{ 18 \}$. We define $\kappa (v) $ to be the number of $2$-chains in $v$.
Let $c_1, c_2, \ldots, c_{\kappa(v)}$ be all the $2$-chains in $v$, indexed so that the numbers in $c_i$ are less than the numbers in $c_j$, for $i<j$. Finally, we define $\lambda_{i}$ as the cardinality of $c_i$.
With these definitions, we can state a broad generalisation of Pagliacci's formula.
\begin{teo} \label{teo generalisation Pagliacci}
Let $v\in \mathfrak{S}_n$ be a $321$-avoiding and $2$-repeating permutation. Then,
\begin{equation} \label{equation final}
\tilde{R}_{e,v}(t) =t^{n_v^1}\prod_{i=1}^{\kappa (v)} \mathcal{F}_{\lambda_i} (t).
\end{equation}
\end{teo}
\begin{demo}
Let $\mathcal{C}$ be any admissible configuration of points such that its associated word, $\underline{v_{\mathcal{C}}}$, is a reduced word of $v$. We must calculate the diagrammatic $\tilde{R}$-polynomial $\tilde{\mathcal{R}}_{e,\underline{v_{\mathcal{C}}}}(t)$. Hence, we have to determine the set $\mathbb{L}_{\underline{v_{\mathcal{C}}}}(e)$. The same argument used in the proof of Lemma \ref{lemma pagliacci without sixvalent} reveals that $\mathbb{L}_{\underline{v_{\mathcal{C}}}}(e)$ can be constructed without using $6$-valent vertices; therefore, we can consider each letter occurring in $ \underline{v_{\mathcal{C}}}$ separately.
Let $N_v^1$ be the set of all letters occurring exactly once in $v$, so that $n_v^1=|N_v^1|$. Over each element of $N_v^1$, we must place a dot. This explains the factor $t^{n_v^1}$ in (\ref{equation final}). We can then argue as if the letters in $N_v^1$ did not occur in $\underline{v_{\mathcal{C}}}$, or equivalently in $\mathcal{C}$. Let $\mathcal{C}'$ be the configuration of points obtained from $\mathcal{C}$ by eliminating the points associated to the letters in $N_v^1$. The configuration $\mathcal{C}'$ splits naturally into $ \kappa(v)$ subconfigurations of the form
\begin{equation}
% \usepackage[usenames,dvipsnames]{pstricks
\end{equation}
Let us index the subconfigurations occurring in $\mathcal{C}'$ from top to bottom as follows: $\mathcal{C}_1', \mathcal{C}_2', \ldots ,\mathcal{C}_{\kappa(v)}'$. For each $1\leq i \leq \kappa (v)$, we define $\underline{v_{\mathcal{C}_i'}}$ to be the word obtained by reading the points in $ \mathcal{C}_i'$ from left to right and from top to bottom. We remark that the configurations $\{ \mathcal{C}_i' \}$ are not admissible. However, the word $\underline{v_{\mathcal{C}_i'}}$ is still valid. By applying a similar argument to the one used in the proof of Theorem \ref{teo pagliacci}, we obtain
\begin{equation}
\tilde{\mathcal{R}}_{e,\underline{v_{\mathcal{C}_i'}}}(t)=\mathcal{F}_{\lambda_i}(t),
\end{equation}
as $\lambda_i$ is the number of letters involved in $\underline{v_{\mathcal{C}_i'}} $.
The crucial point is that $\mathcal{C}_i'$ and $\mathcal{C}_j'$ are independent, for $i\neq j$. That is, any generator involved in $\mathcal{C}_i'$ commutes with any generator involved in $\mathcal{C}_j'$, so that we can treat each of these subconfigurations separately. We finally obtain
\begin{equation}
\tilde{R}_{e,v}(t) = \tilde{\mathcal{R}}_{e,\underline{v_{\mathcal{C}}}}(t)
= t^{n_v^1} \prod_{i=1}^{\kappa (v)} \tilde{\mathcal{R}}_{e,\underline{v_{\mathcal{C}_i'}}}(t)
=t^{n_v^1}\prod_{i=1}^{\kappa (v)} \mathcal{F}_{\lambda_i} (t) .
\end{equation}
\end{demo}
\begin{exa} \rm
Let us illustrate Theorem \ref{teo generalisation Pagliacci} for the following configuration
\begin{equation}
% \usepackage[usenames,dvipsnames]{pstricks
\end{equation}
If we denote by $\mathcal{C}$ the configuration above, then $\underline{v_{\mathcal{C}}}$ is given by
\begin{equation}
s_{3}s_{2}s_{4}s_{6}s_{1}s_{3}s_{5}s_{7}s_{2}s_{4}s_{6}s_{8}s_{7}s_{9}s_{8}.
\end{equation}
In this case, $N_v^1=\{ s_1,s_5,s_9 \}$. Thus, $n_v^1=3$. Furthermore, two sub-configurations exist: $\mathcal{C}_1'$ and $\mathcal{C}_2'$, which are depicted in (\ref{equation confifi}). In this setting, we have $\kappa(v)=2$ and $\lambda_1=\lambda_2=3$.
\begin{equation} \label{equation confifi}
% \usepackage[usenames,dvipsnames]{pstricks
\end{equation}
The words associated to the configurations $\mathcal{C}_1'$ and $\mathcal{C}_2'$ are given by
\begin{equation}
\underline{v_{\mathcal{C}_1'}} = s_{3}s_2s_4s_3s_2s_4 \qquad \mbox{ and } \qquad \underline{v_{\mathcal{C}_2'}}=s_6s_7s_6s_8s_7s_8.
\end{equation}
The sets $ \mathbb{L}_{\underline{v_{\mathcal{C}_1'}}}(e)$ and $ \mathbb{L}_{\underline{v_{\mathcal{C}_2'}}}(e)$ are obtained by considering the trees in (\ref{equation configuration C}), where because of space limitations, we have replaced the letter $s_i$ by $i$.
\begin{equation} \label{equation configuration C}
% \usepackage[usenames,dvipsnames]{pstricks
\end{equation}
We obtain
\begin{equation}
\tilde{\mathcal{R}}_{e,\underline{v_{\mathcal{C}_1'}}}(t)=\tilde{\mathcal{R}}_{e,\underline{v_{\mathcal{C}_2'}}}(t) = \mathcal{F}_3(t)= t^6+3t^4+t^2.
\end{equation}
To obtain any element of $\mathbb{L}_{\underline{v_\mathcal{C}}}(e)$, we choose an element of $ \mathbb{L}_{\underline{v_{\mathcal{C}_1'}}}(e)$ and an element of $ \mathbb{L}_{\underline{v_{\mathcal{C}_2'}}}(e)$, and combine them. At this point, it is crucial that the letters involved in $\mathcal{C}_1'$ and $\mathcal{C}_2'$ commute, because some lines in the diagrams might intersect. Exactly $25$ elements appear in $\mathbb{L}_{\underline{v_\mathcal{C}}}(e)$. Because of space limitations, we draw only one of these $25$ elements in (\ref{equation last leaf}). This diagram is the element of lowest degree in $\mathbb{L}_{\underline{v_\mathcal{C}}}(e)$.
\begin{equation}\label{equation last leaf}
% \usepackage[usenames,dvipsnames]{pstricks
\end{equation}
Finally, we have
\begin{equation}
\tilde{R}_{e,v}(t)=\tilde{\mathcal{R}}_{e,\underline{v_{\mathcal{C}}}}(t)=t^3(t^6+3t^4+t^2)^2=t^{15} + 6t^{13} + 11t^{11} + 6t^9 + t^7.
\end{equation}
\end{exa}
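The closed formulas in this section are straightforward to verify symbolically. The following Python sketch (not part of the proofs; the function names are ours) encodes polynomials in $t$ as coefficient lists and recomputes the values obtained in the examples above.

```python
def poly_add(p, q):
    # Add two polynomials encoded as coefficient lists (index = exponent of t).
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0)
            for i in range(n)]

def poly_mul(p, q):
    # Multiply two polynomials encoded as coefficient lists.
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def shift(p, k):
    # Multiply a polynomial by t^k.
    return [0] * k + list(p)

def fibonacci_poly(n):
    # F_0 = 1, F_1 = t, F_n = t * F_{n-1} + F_{n-2}.
    a, b = [1], [0, 1]
    for _ in range(n):
        a, b = b, poly_add(shift(b, 1), a)
    return a

def modified_fibonacci_poly(n):
    # The modified Fibonacci polynomial: t^(n-1) * F_{n+1}(t), for n >= 1.
    return shift(fibonacci_poly(n + 1), n - 1)

# Pagliacci's formula: the R~-polynomial of e and v_n equals t^(n-2) * F_{n-2}(t);
# for n = 7 this gives t^10 + 4 t^8 + 3 t^6, as computed for v_7 above.
pagliacci_n7 = shift(fibonacci_poly(5), 5)

# The last example: t^3 times the square of the modified Fibonacci polynomial
# t^6 + 3 t^4 + t^2, which expands to t^15 + 6 t^13 + 11 t^11 + 6 t^9 + t^7.
f3 = modified_fibonacci_poly(3)
last_example = shift(poly_mul(f3, f3), 3)
```

Both computations agree with the coefficients obtained diagrammatically above.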
As a final remark, the common feature of the four formulas obtained in this section is that the relevant set $\mathbb{L}_{\underline{v}}(u)$ could be constructed without using $6$-valent vertices. However, this is not true in general. The occurrence of $6$-valent vertices (or, more generally, $2m_{st}$-valent vertices) allows the letters appearing in a word to interact with each other, and gaining control over this interaction appears extremely challenging. This explains the difficulty of obtaining closed formulas for $\tilde{R}$-polynomials. Nevertheless, we believe that if only a few $6$-valent vertices occur in the diagrams of $\mathbb{L}_{\underline{v}}(u)$, then it may still be possible to obtain a closed formula for $\tilde{\mathcal{R}}_{u,\underline{v}}(t)$. In particular, we conjecture that the following holds.
\begin{conj}
Let $w\in \mathfrak{S}_n$ be a $321$-avoiding and $2$-repeating permutation. If $u\leq v\leq w$, then
\begin{equation}
\tilde{R}_{u,v}(t)= t^{a}\prod_{i=1}^{b} \mathcal{F}_{c_i} (t),
\end{equation}
for some integers $a,b$ and $ c_i$.
\end{conj}
\section*{References}
\bibliographystyle{myalpha}
In this contribution, models of wireless channels are derived from the maximum entropy principle, for several cases where only limited information about the propagation environment is available.
First, analytical models are derived for the cases where certain parameters (channel energy, average energy, spatial correlation matrix) are known deterministically. Frequently, these parameters are unknown (typically because the received energy or the spatial correlation varies with the user position), but still known to represent meaningful system characteristics. In these cases, analytical channel models are derived by assigning entropy-maximizing distributions to these parameters, and marginalizing them out.
For the MIMO case with spatial correlation, we show that the distribution of the covariance matrices is conveniently handled through its eigenvalues. The entropy-maximizing distribution of the covariance matrix is shown to be a Wishart distribution. Furthermore, the corresponding probability density function of the channel matrix is shown to be described analytically by a function of the channel Frobenius norm.
This technique can provide channel models incorporating the effect of shadow fading and spatial correlation between antennas without the need to assume explicit values for these parameters.
The results are compared in terms of mutual information to the classical i.i.d. Gaussian model.\\
{\bf Keywords:} Maximum Entropy, Multiple Antennas, Wireless Channel Model, Spatial Correlation.
\section{Introduction}
The problem of modelling the characteristics of a wireless transmission channel is crucial to the appropriate design of suitable channel codes. The recent shift to the multiple antennas, or Multiple-Input Multiple-Output (MIMO), paradigm \cite{jerry} and the corresponding need for MIMO channel models, together with the introduction of codes (such as turbo codes \cite{Berrou_etal_turbocodes}) that can operate very close to the channel capacity, have placed the channel models under scrutiny:
initial capacity analyses of MIMO channels assuming i.i.d. Rayleigh fading \cite{telatar_mimo_capacity} were touting promising spectral efficiencies, whereas the importance of correlation between channel coefficients \cite{kyritsi_etal_mimo_correlation} and of the channel matrix rank
are now understood to be critical parameters.
In order to facilitate channel code development, analytical channel models are a desirable asset. However, most of the available channel models that capture the complex spatial characteristics of the propagation channel (geometry, reflection coefficients, \ldots) are based on ray tracing methods or variations thereof, which model the channel as a superposition of multipath components \cite{Burr_finite_scatterers} and therefore do not lend themselves easily to analysis. Conversely, some analytical models were proposed to address the problem of accurate space correlation modeling by assuming a Rayleigh fading with appropriately designed correlation properties \cite{Weichselberger_model}. See \cite{survey_MIMO_models_JWCN} for a broad round-up of the literature about wireless channel models.\\
In \cite{Debbah_maxent}, Debbah and M\"uller address the question of channel modeling on the basis of statistical inference.
Instead of relying on ad-hoc construction -- based on intuition -- and verification of the models, they propose a constructive method based on the constraints that the model needs to meet.
The joint probability density function (PDF) of the channel is derived from these constraints, using the maximum entropy (MaxEnt) principle, initially introduced by Jaynes \cite{Jaynes_1957}.
This principle builds on the fact that the only consistent way of accounting for ignorance when modelling a random process is to maximize the entropy of the considered process, subject to all known constraints. In this context, consistent modelling is defined as the requirement that independent modellers given the same set of constraints must obtain identical models. This approach is justified on the basis of avoiding the arbitrary introduction of information (in the form of model characteristics that represent a reduction of its entropy) that cannot be justified by any known constraint.
In the case of channel modelling, the constraints represent available knowledge about the environment or the channel representation itself (e.g. through bounds on amplitude, power...).
See \cite{Jaynes_probability_theory} for a recent overview of the application of maximum entropy methods to inference.\\
In \cite{Debbah_maxent}, the MaxEnt principle is used to derive a joint distribution of the entries of the MIMO channel matrix. The popular Gaussian i.i.d. model is shown to be the entropy-maximizing solution under the sole assumption that the average Frobenius norm of the channel matrix is known (known channel power constraint). However, this model is admittedly simplistic, in particular because of the following two reasons:
\begin{itemize}
\item Measurements have shown that the independence between components, as obtained in \cite{Debbah_maxent} and proposed in numerous models, rarely holds in reality, and that some degree of correlation between the components must be taken into account,
\item Gaussian models constitute good short-term models but their long-term properties are not realistic. More precisely, Gaussian models are known to adequately model the effects of rich scattering, but to neglect the long-term fading effect captured by the fact that the signal strength (represented by the short-term average of the channel Frobenius norm - in the following this quantity is denoted by ``channel energy'') fluctuates.
\end{itemize}
The aim of the present article is to extend the general scope of maximum entropy channel modeling, by amending existing models to address the aforementioned issues. Both points are addressed using the same method: first, a maximum entropy model is derived for the channel, conditioned on the parameter of interest (signal strength or spatial correlation). Then, a maximum entropy distribution is derived for the parameter of interest itself, which is later marginalized out to obtain the full channel model.\\
This article is structured as follows: first, some notations are introduced in Section \ref{section_notations}. In Section \ref{section_energyconstraints}, a maximum entropy model for the channel energy is proposed, based on the knowledge of the average, and optionally on an upper bound, of the channel energy. The corresponding channel model is obtained by first deriving the distribution of the instantaneous channel realization for a known channel energy, and in a second step by marginalizing out the variable representing the energy using the distribution established previously.
Section \ref{section_spatialcorrelation} focuses on the spatial correlation properties of frequency-flat fading channels. Specifically, we address the case where the channel is known to have spatial correlation, but the exact characteristics of this correlation are not known. In general, in the absence of knowledge about correlation, application of the MaxEnt principle yields a process with independent components (see \cite{Debbah_maxent}). Therefore, we first focus on the spatial covariance matrix, and derive the MaxEnt distribution of a general covariance matrix, in both the full-rank and rank-deficient cases. In a second step, we construct the analytical model for the MIMO channel itself, by first deriving the MaxEnt distribution of the channel for a known covariance, and later marginalizing over the covariance matrix, using the distribution of the covariance established previously.
The obtained distribution is shown to be isotropic, and is described analytically as a function of the Frobenius norm of the channel matrix. Finally, Section \ref{section_conclusion} draws some conclusions.\\
\section{Notations and channel model}
\label{section_notations}
Let us consider the multiple-antenna wireless channel with ${n_t}$ transmit and ${n_r}$ receive antennas. Since we are only concerned with non-frequency selective channels, let the complex scalar coefficient $h_{i,j}$ denote the channel attenuation between transmit antenna $j$ and receive antenna $i$, $j=1\ldots {n_t}$, $i=1\ldots {n_r}$. Let $\mathbf{H}(t)$ denote the ${n_r}\times {n_t}$ channel matrix at time $t$.
We recall the general model for a time-varying flat-fading channel with additive noise
\begin{equation}
\v{y}(t) = \mathbf{H}(t)\v{x}(t) + \v{n}(t), \label{awgn_channelmodel}
\end{equation}
where $\v{n}(t)$ is usually modeled as a complex circularly-symmetric Gaussian random variable (r.v.) with independent identically distributed (i.i.d.) coefficients.
In this article, we focus on the derivation of the fading characteristics of $\mathbf{H}(t)$. When we are not concerned with the time-related properties of $\mathbf{H}(t)$, we will drop the time index $t$, and refer to the channel realization $\mathbf{H}$ or equivalently to its vectorized notation $\v{h} = \mathrm{vec}(\mathbf{H}) = [h_{1,1}\ldots h_{{n_r},1}, h_{1,2}\ldots h_{{n_r},{n_t}}]^T$.
Let us also denote $N= {n_r}{n_t}$ and map the antenna indices into $[1\ldots N]$, \emph{i.e.} denoting equivalently $\v{h}=[h_1\ldots h_N]^T$.\\
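For illustration, the model of eq.~(\ref{awgn_channelmodel}) and the vectorization convention above can be sketched numerically as follows. This is a minimal Python sketch: the antenna counts, the noise variance, and the placeholder i.i.d.\ draw of $\mathbf{H}$ are arbitrary choices for the example, not part of the derivations below.

```python
import numpy as np

rng = np.random.default_rng(0)
n_t, n_r = 2, 4                      # illustrative antenna counts

# Placeholder draw of the channel matrix H (i.i.d. CN(0,1) entries); the
# following sections derive principled distributions for H.
H = (rng.standard_normal((n_r, n_t)) + 1j * rng.standard_normal((n_r, n_t))) / np.sqrt(2)

x = rng.standard_normal(n_t) + 1j * rng.standard_normal(n_t)   # transmit vector
sigma2 = 0.1                                                    # assumed noise variance
n = np.sqrt(sigma2 / 2) * (rng.standard_normal(n_r) + 1j * rng.standard_normal(n_r))

y = H @ x + n                        # flat-fading model y = Hx + n

# vec(H): stack the columns of H, i.e. h = [h_{1,1} ... h_{n_r,1}, h_{1,2} ...]^T
h = H.flatten(order="F")
```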
\section{Channel energy constraints}
\label{section_energyconstraints}
\subsection{Average channel energy constraint}
\label{previous_results}
In this section, we briefly recall the results of \cite{Debbah_maxent}, where an entropy-maximizing probability distribution is derived for the case where the average energy of a MIMO channel is known deterministically.
It is obtained by maximizing the entropy $\int_{\mathbb{C}^N} -\log(P(\mathbf{H})) P(\mathbf{H}) \d\mathbf{H}$, where $\d\mathbf{H}= \prod_{i=1}^N \mathrm{dRe}(h_i) \mathrm{dIm}(h_i)$ is the Lebesgue measure on $\mathbb{C}^N$ ($\mathrm{Re}(\cdot)$ and $\mathrm{Im}(\cdot)$ denoting respectively the real and imaginary parts of a complex number), under the only assumption that the channel has a finite average energy $N E_0$, and the normalization constraint associated to the definition of a probability density, \emph{i.e.}
\begin{equation}
\int_{\mathbb{C}^N} ||\mathbf{H}||_{F}^2 P(\mathbf{H}) \d\mathbf{H} = N E_0, \quad \textrm{and} \quad \int_{\mathbb{C}^N} P(\mathbf{H}) \d\mathbf{H} = 1. \label{normintegrals}
\end{equation}
This is achieved through the method of Lagrange multipliers, by writing
\begin{equation}
L(P) = \int_{\mathbb{C}^N} -\log(P(\mathbf{H})) P(\mathbf{H}) \d\mathbf{H} + \beta \left[1- \int_{\mathbb{C}^N} P(\mathbf{H}) \d\mathbf{H}\right] + \gamma \left[N E_0 - \int_{\mathbb{C}^N} ||\mathbf{H}||_{F}^2 P(\mathbf{H}) \d\mathbf{H}\right]
\end{equation}
where we introduce the scalar Lagrange coefficients $\beta$ and $\gamma$,
and setting the functional derivative \cite{Fomin_calculus_variations} w.r.t. $P$ to zero:
\begin{equation}
\frac{\delta L(P)}{\delta P} = -\log(P(\mathbf{H}))-1 -\beta -\gamma||\mathbf{H}||_{F}^2 =0.
\label{derivativeP}
\end{equation}
Eq.~(\ref{derivativeP}) yields $P(\mathbf{H}) = \exp\left( -(\beta+1) -\gamma ||\mathbf{H}||_{F}^2\right)$; the normalization of this distribution according to (\ref{normintegrals}) determines the coefficients $\beta$ and $\gamma$, and the distribution is finally obtained as
\begin{equation}
P_{\mathbf{H}|E_0}(\mathbf{H}) = \frac{1}{(\pi E_0)^{N}} \exp\left( -\sum_{i=1}^{N} \frac{|h_{i}|^2}{E_0} \right) \label{maxent_Giid}
\end{equation}
Interestingly, the distribution defined by eq.~(\ref{maxent_Giid}) corresponds to a complex Gaussian random variable with independently fading coefficients, although neither Gaussianity nor independence were among the initial constraints. These properties are the consequence, \emph{via} the maximum entropy principle, of the ignorance by the modeler of any constraint other than the total average energy $N E_0$.\\
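For simulation purposes, sampling from the distribution (\ref{maxent_Giid}) and checking the energy constraint can be sketched as follows (an illustrative Python sketch; the values of $n_r$, $n_t$, $E_0$ and the sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n_r, n_t, E0 = 2, 2, 1.0
N = n_r * n_t
num = 200_000

# num realizations of vec(H); each coefficient is CN(0, E0),
# i.e. real and imaginary parts are N(0, E0/2)
h_samples = np.sqrt(E0 / 2) * (
    rng.standard_normal((num, N)) + 1j * rng.standard_normal((num, N))
)

# Monte Carlo estimate of E[ ||H||_F^2 ]; should approach N * E0
avg_energy = float(np.mean(np.sum(np.abs(h_samples) ** 2, axis=1)))
```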
\subsection{Probabilistic average channel energy constraint}
Let us now introduce a new model for situations where the channel model defined in the previous section applies locally (in time), but where $E_0$ can not be expected to be constant, e.g. due to shadow fading. Therefore, let us replace $E_0$ in eq.~(\ref{maxent_Giid}) by the random quantity $E$, known only through its probability density function (PDF) $P_E(E)$. In this case, the PDF of the channel $\mathbf{H}$ can be obtained by marginalizing over $E$:
\begin{equation}
P(\mathbf{H}) = \int_{\mathbb{R}^+} P_{\mathbf{H},E}(\mathbf{H},E) \d E = \int_{\mathbb{R}^+} P_{\mathbf{H}|E}(\mathbf{H}) P_E(E) \d E. \label{unknown_E}
\end{equation}
In order to establish the probability distribution $P_E$, let us find the maximum entropy distribution under the constraints:
\begin{itemize}
\item $0 \leq E \leq E_{max}$, where $E_{max}$ represents an absolute constraint on the transmit power, or on the amplitude range of the receiver,
\item its average $E_0 = \int_{0}^{E_{max}} E P_E(E) \d E$ is known.
\end{itemize}
Applying the Lagrange multipliers method again, we introduce the scalar unknowns $\beta$ and $\gamma$, and maximize the functional
\begin{equation}
L(P_E) = - \int_0^{E_{max}} \log(P_E(E)) P_E(E)\d E + \beta\left[ \int_0^{E_{max}} E P_E(E) \d E - E_0 \right] + \gamma\left[ \int_0^{E_{max}} P_E(E) \d E -1 \right].
\end{equation}
Setting the derivative to zero ($\frac{\delta L(P_E)}{\delta P_E} = 0$) yields $P_E(E) = \exp\left( \beta E - 1 + \gamma \right)$, and the Lagrange multipliers are finally eliminated by solving the normalization equations
\begin{equation}
\int_0^{E_{max}} E \exp\left( \beta E - 1 + \gamma \right)\d E = E_0, \quad \mathrm{and}\quad \int_0^{E_{max}} \exp\left( \beta E - 1 + \gamma \right) \d E =1.
\end{equation}
$\beta<0$ is the solution to the transcendental equation
\begin{equation}
E_{max} \exp(\beta E_{max}) - \left(\frac{1}{\beta} +E_0\right)\left(\exp(\beta E_{max}) -1\right) =0, \label{solution_beta}
\end{equation}
and finally $P_E$ is obtained as the truncated exponential law
\begin{equation}
P_E(E)=\frac{\beta}{\exp(\beta E_{max})-1} \exp(\beta E), \quad 0 \leq E \leq E_{max}, \quad 0\quad \mathrm{elsewhere}.
\end{equation}
Note that taking $E_{max}=+\infty$ in eq.~(\ref{solution_beta}) yields $\beta=-\frac{1}{E_0}$ and the exponential law $ P_E(E)=\frac{1}{E_0} \exp\left(-\frac{E}{E_0} \right)$.
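The root $\beta$ of eq.~(\ref{solution_beta}) has no closed form for finite $E_{max}$, but it is easily found numerically. The following sketch (assuming the illustrative values $E_0=1$, $E_{max}=4$, for which $\beta<0$) solves for $\beta$ by bisection and verifies the normalization and mean constraints by quadrature:

```python
import numpy as np

E0, E_max = 1.0, 4.0                  # illustrative values (E0 < E_max/2, so beta < 0)

# Left-hand side of the transcendental equation for beta
def f(beta):
    return E_max * np.exp(beta * E_max) - (1.0 / beta + E0) * (np.exp(beta * E_max) - 1.0)

# Bisection on (-50, -1e-9): f > 0 at the left end, f < 0 near 0^-
lo, hi = -50.0, -1e-9
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if f(mid) > 0:
        lo = mid
    else:
        hi = mid
beta = 0.5 * (lo + hi)

# Truncated exponential law for the channel energy on [0, E_max]
def P_E(E):
    return beta / (np.exp(beta * E_max) - 1.0) * np.exp(beta * E)

# Trapezoidal quadrature check: P_E integrates to one and has mean E0
E_grid = np.linspace(0.0, E_max, 200_001)
p = P_E(E_grid)
h_step = E_grid[1] - E_grid[0]
norm = float(np.sum(0.5 * (p[1:] + p[:-1])) * h_step)
mean = float(np.sum(0.5 * ((E_grid * p)[1:] + (E_grid * p)[:-1])) * h_step)
```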
\subsubsection{Application to the SISO channel}
In order to illustrate the difference between the two situations presented so far, let us investigate the Single-Input Single-Output (SISO) case ${n_t}={n_r}=1$, where the channel is represented by a single complex scalar $h$.
Furthermore, since the distribution is circularly symmetric, it is more convenient to consider the distribution of $r= |h|$. After the change of variables $h= r (\cos \theta+i \sin \theta )$, and marginalization over $\theta$, eq.~(\ref{maxent_Giid}) becomes
\begin{equation}
P_r(r) = \frac{2r}{E_0} \exp\left(-\frac{r^2}{E_0}\right), \label{scalar_eknown}
\end{equation}
whereas eq.~(\ref{unknown_E}) yields
\begin{equation}
P_r(r) = \int_0^{E_{max}} \frac{\beta}{\exp(\beta E_{max})-1} \frac{2r}{E} \exp\left(\beta E-\frac{r^2}{E}\right) \d E. \label{scalar_eunknown}
\end{equation}
Note that the integral always exists since $\beta<0$.
Figure~\ref{siso_Pr} depicts the probability density function (PDF) of $r$ under the known energy constraint (eq.~(\ref{scalar_eknown}), with $E_0=1$), and under the known energy distribution constraint (eq.~(\ref{scalar_eunknown}), computed numerically for $E_{max}=1.5, 4$ and $+\infty$, with $E_0=1$).
Figure~\ref{siso_cdf} depicts the cumulative density function (CDF) of the corresponding instantaneous mutual information $I(r)= \log(1+\rho r^2)$, for signal-to-noise ratio $\rho = 15 \ \mathrm{dB}$.
The lowest range of the CDF is of particular interest for wireless communications since it represents the probability of a channel outage for a given transmission rate. The curves clearly show that the models corresponding to the unknown energy have a lower outage capacity than the Gaussian channel model.
\begin{figure}[htb]
\centering
\subfigure[PDF of amplitude $r$]{\epsfig{figure=siso_Pr.eps,width=7.9cm}\label{siso_Pr}}
\subfigure[CDF of instantaneous mutual information $I(r)$]{\epsfig{figure=siso_cdf.eps,width=7.9cm}\label{siso_cdf}}
\caption{Amplitude and mutual information distributions of the proposed SISO channel models.}
\label{scalar_example_fig}
\end{figure}
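The outage comparison above can be reproduced by Monte Carlo simulation, e.g.\ as follows. This is an illustrative sketch for the case $E_{max}=+\infty$; the mutual information is computed in bits, and the target rate $R$ and sample size are arbitrary choices made for the example.

```python
import numpy as np

rng = np.random.default_rng(2)
E0, num = 1.0, 100_000
rho = 10 ** (15 / 10)                 # 15 dB signal-to-noise ratio

# Known-energy (Rayleigh) model: r^2 is exponential with mean E0
r2_known = rng.exponential(E0, num)

# Unknown-energy model with E_max = +infinity: E ~ Exp(E0), then r^2 | E ~ Exp(E)
E = rng.exponential(E0, num)
r2_unknown = rng.exponential(E)

# Instantaneous mutual information (here in bits)
I_known = np.log2(1 + rho * r2_known)
I_unknown = np.log2(1 + rho * r2_unknown)

R = 1.0                               # arbitrary target rate, bits per channel use
outage_known = float(np.mean(I_known < R))
outage_unknown = float(np.mean(I_unknown < R))
```

Consistently with the CDFs of Figure~\ref{siso_cdf}, the unknown-energy model exhibits the larger outage probability at a fixed rate.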
\section{Spatial correlation models}
\label{section_spatialcorrelation}
In this section, we shall incorporate several states of knowledge about the spatial correlation characteristics of the channel in the framework of maximum entropy modeling. We first study the case where the correlation matrix is deterministic, and subsequently extend the result to an unknown covariance matrix.
\subsection{Deterministic knowledge of the correlation matrix}
In this section, we establish the maximum entropy distribution of $\mathbf{H}$ under the assumption that the covariance matrix $\mathbf{Q} = \int_{\mathbb{C}^N} \v{h}\v{h}^H P_{\mathbf{H}|\mathbf{Q}}(\mathbf{H}) \d\mathbf{H}$ is known, where $\mathbf{Q}$ is a $N\times N$ complex Hermitian matrix.
Each component of the covariance constraint represents an independent linear constraint of the form
\begin{equation}
\int_{\mathbb{C}^N} h_{a} h_{b}^* P_{\mathbf{H}|\mathbf{Q}}(\mathbf{H}) \d\mathbf{H} = q_{a,b}
\end{equation}
for $(a,b)\in [1,\ldots,N]^2$. Note that this constraint makes any previous energy constraint redundant since $\int_{\mathbb{C}^N} ||\mathbf{H}||_{F}^2 P_{\mathbf{H}|\mathbf{Q}}(\mathbf{H}) \d\mathbf{H} = \mathrm{tr}(\mathbf{Q})$.
Proceeding along the lines of the method exposed previously, we introduce $N^2$ Lagrange coefficients $\alpha_{a,b}$, and maximize
\begin{eqnarray}
L(P_{\mathbf{H}|\mathbf{Q}}) = \int_{\mathbb{C}^N} -\log(P_{\mathbf{H}|\mathbf{Q}}(\mathbf{H})) P_{\mathbf{H}|\mathbf{Q}}(\mathbf{H}) \d\mathbf{H} &+& \beta \left[1- \int_{\mathbb{C}^N} P_{\mathbf{H}|\mathbf{Q}}(\mathbf{H}) \d\mathbf{H}\right] \nonumber \\
&+&\sum_{\substack{a\in [1,\ldots,N]\\b\in [1,\ldots,N]}} \alpha_{a,b} \left[ \int_{\mathbb{C}^N} h_{a} h_{b}^* P_{\mathbf{H}|\mathbf{Q}}(\mathbf{H}) \d\mathbf{H} - q_{a,b}\right].
\end{eqnarray}
Denoting $\mathbf{A}=[\alpha_{a,b}]_{(a,b)\in [1,\ldots,N]^2}$ the $N\times N$ matrix of the Lagrange multipliers, the derivative is
\begin{equation}
\frac{\delta L(P_{\mathbf{H}|\mathbf{Q}})}{\delta P_{\mathbf{H}|\mathbf{Q}}} = -\log(P_{\mathbf{H}|\mathbf{Q}}(\mathbf{H}))-1 -\beta -\v{h}^T \mathbf{A}\v{h}^*=0.
\label{derivativePQ}
\end{equation}
Therefore, $P_{\mathbf{H}|\mathbf{Q}}(\mathbf{H}) = \exp\left( -(\beta+1) -\v{h}^T \mathbf{A} \v{h}^* \right)$, or, after elimination of the Lagrange coefficients through proper normalization,
\begin{equation}
P_{\mathbf{H}|\mathbf{Q}}(\mathbf{H},\mathbf{Q}) = \frac{1}{ \det(\pi\mathbf{Q})}\exp\left(-( \v{h}^H \mathbf{Q}^{-1} \v{h} ) \right) \label{maxent_G_corr}.
\end{equation}
Again, the maximum entropy principle yields a Gaussian distribution, although of course its components are not independent anymore.
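Given a covariance matrix $\mathbf{Q}$, sampling from the distribution (\ref{maxent_G_corr}) reduces to coloring a white Gaussian vector with a Cholesky factor of $\mathbf{Q}$. A minimal sketch (using an arbitrary positive definite example for $\mathbf{Q}$):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 4

# Arbitrary Hermitian positive definite example for Q (illustration only)
B = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
Q = B @ B.conj().T / N + 0.1 * np.eye(N)

# Q = L L^H; then h = L w with w white CN(0, I) gives h ~ CN(0, Q)
L = np.linalg.cholesky(Q)
num = 100_000
w = (rng.standard_normal((N, num)) + 1j * rng.standard_normal((N, num))) / np.sqrt(2)
h = L @ w

Q_hat = h @ h.conj().T / num          # sample covariance, approaches Q
```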
\subsection{Knowledge of the existence of a correlation matrix}
\label{section_maxent_Q}
It was shown in Section~\ref{previous_results} that in the absence of information on space correlation, maximum entropy modeling yields i.i.d. coefficients for the channel matrix, and therefore an identity covariance matrix. We now consider the case where covariance is known to be a parameter of interest, but is not known deterministically.
Again, we will proceed in two steps, first seeking a probability distribution function for the covariance matrix $\mathbf{Q}$, and then marginalizing the channel distribution over $\mathbf{Q}$.\\
\subsubsection{Correlation matrix PDF}
\label{sectionmaxentQ}
Let us first establish the distribution of $\mathbf{Q}$, under the energy constraint $\int \mathrm{tr}(\mathbf{Q}) P_{\mathbf{Q}}(\mathbf{Q}) \d\mathbf{Q}=N E_0$, by maximizing the functional
\begin{equation}
L(P_{\mathbf{Q}}) = \int_{\mathcal{S}} -\log(P_{\mathbf{Q}}(\mathbf{Q})) P_{\mathbf{Q}}(\mathbf{Q}) \d\mathbf{Q} + \beta \left[ \int_{\mathcal{S}} P_{\mathbf{Q}}(\mathbf{Q}) \d\mathbf{Q} -1 \right] + \gamma \left[ \int_{\mathcal{S}} \mathrm{tr}(\mathbf{Q}) P_{\mathbf{Q}}(\mathbf{Q}) \d\mathbf{Q} - N E_0 \right]. \label{maxentQ}
\end{equation}
Due to their structure, covariance matrices are restricted to the space $\mathcal{S}$ of $N\times N$ positive semidefinite complex matrices. Therefore, let us perform the variable change to the eigenvalues/eigenvectors space. Specifically, let us denote $\Lambda = \mathrm{diag}(\lambda_1 \ldots \lambda_N)$ the diagonal matrix containing the eigenvalues of $\mathbf{Q}$, and let $\mathbf{U}$ be the unitary matrix containing the eigenvectors, such that $\mathbf{Q} = \mathbf{U}\Lambda\mathbf{U}^H$.\\
We use the mapping between the space of complex $N\times N$ self-adjoint matrices (of which $\mathcal{S}$ is a subspace), and $\mathcal{U}(N)/T \times \mathbb{R}_{\leq}^N$, where $\mathcal{U}(N)/T$ denotes the space of unitary $N\times N$ matrices with real, non-negative first row, and $\mathbb{R}_{\leq}^N$ is the space of real N-tuples with non-decreasing components (see \cite[Lemma 4.4.6]{Hiai_Petz_monograph}). The positive semidefinite property of the covariance matrices further restricts the components of $\Lambda$ to non-negative values, and therefore $\mathcal{S}$ maps into $\mathcal{U}(N)/T \times {\mathbb{R}_{\leq}^+}^N$.\\
Let us now define function $F$ over $\mathcal{U}(N)/T \times {\mathbb{R}_{\leq}^+}^N$ as
\begin{equation}
F(\mathbf{U,\Lambda}) = P_{\mathbf{Q}}(\mathbf{U}\Lambda\mathbf{U}^H), \quad \mathbf{U} \in \mathcal{U}(N)/T, \quad \Lambda \in {\mathbb{R}_{\leq}^+}^N.
\end{equation}
According to this mapping, eq.~(\ref{maxentQ}) becomes
\begin{eqnarray}
L(F) &=& \int_{\mathcal{U}(N)/T \times \mathbb{R}_{\leq}^{+N}} -\log(F(\mathbf{U,\Lambda})) F(\mathbf{U,\Lambda}) K(\Lambda) \d\mathbf{U}\d{\Lambda} \nonumber \\
&+& \beta \left[ \int_{\mathcal{U}(N)/T \times \mathbb{R}_{\leq}^{+N}} F(\mathbf{U,\Lambda}) K(\Lambda) \d\mathbf{U}\d{\Lambda} -1 \right] \nonumber \\
& +& \gamma \left[ \int_{\mathcal{U}(N)/T \times \mathbb{R}_{\leq}^{+N}} \left( \sum_{i=1}^N \lambda_i \right) F(\mathbf{U,\Lambda}) K(\Lambda) \d\mathbf{U}\d{\Lambda}- N E_0 \right], \label{maxentULambda}
\end{eqnarray}
where we introduced the corresponding Jacobian $K(\Lambda) = \frac{(2\pi)^{N(N-1)/2}}{\prod_{j=1}^{N-1} j!} \prod_{i<j} (\lambda_i-\lambda_j)^2$, and used \linebreak $\mathrm{tr}(\mathbf{Q})= \mathrm{tr}(\Lambda) = \sum_{i=1}^N \lambda_i$. Maximizing the entropy of the distribution $P_{\mathbf{Q}}$ by taking $\frac{\delta L(F)}{\delta F} = 0$ yields
\begin{equation}
-K(\Lambda) -K(\Lambda)\log(F(\mathbf{U,\Lambda})) + \beta K(\Lambda) +\gamma \left(\sum_{i=1}^N \lambda_i \right)K(\Lambda) =0.
\end{equation}
Since $K(\Lambda)\neq 0$ except on a set of measure zero, this is equivalent to
\begin{equation}
F(\mathbf{U,\Lambda}) = \exp\left( \beta-1+\gamma \sum_{i=1}^N \lambda_i\right). \label{P_U_Lambda}
\end{equation}
Note that the distribution $F(\mathbf{U,\Lambda})K(\Lambda)$ does not explicitly depend on $\mathbf{U}$. This indicates that $\mathbf{U}$ is uniformly distributed, with constant density $P_{\mathbf{U}}=(2\pi)^N$ over $\mathcal{U}(N)/T$.
Therefore, the joint density can be factored as $F(\mathbf{U,\Lambda})K(\Lambda) = P_{\mathbf{U}} P_{\mathbf{\Lambda}}(\mathbf{\Lambda})$, where the distribution of the eigenvalues over ${\mathbb{R}_{\leq}^+}^N$ is
\begin{equation}
P_{\Lambda}(\Lambda) = \frac{\mathrm{e}^{\beta-1}}{P_{\mathbf{U}}} \exp\left(\gamma \sum_{i=1\ldots N} \lambda_i\right) \frac{(2\pi)^{N(N-1)/2}}{\prod_{j=1}^{N-1} j!} \prod_{i<j} (\lambda_i-\lambda_j)^2. \label{orderpdflambda}
\end{equation}
At this point, it is worth noting that the form of eq.~(\ref{orderpdflambda}) indicates that the order of the eigenvalues is immaterial.
In order to see this, consider a pair of eigenvalues $(\lambda_i,\lambda_j)$, with $i<j$ and $\lambda_i\leq \lambda_j$, and the change of variables $(x,y)=\left(\frac{\lambda_i+\lambda_j}{\sqrt{2}},\frac{-\lambda_i+\lambda_j}{\sqrt{2}} \right)$.
For any function $f(\lambda_i,\lambda_j)$,
\begin{equation}
\int_{0\leq \lambda_i \leq \lambda_j \leq +\infty} f(\lambda_i,\lambda_j) \d \lambda_i \d \lambda_j = \int_{x=0}^{+\infty} \int_{y=0}^{x} f(x,y) \d y \d x,
\end{equation}
whereas for the non-restricted integral
\begin{equation}
\int_{(\lambda_i,\lambda_j)\in \mathbb{R}^{+2}} f(\lambda_i,\lambda_j) \d \lambda_i \d \lambda_j = \int_{x=0}^{+\infty} \int_{y=-x}^{x} f(x,y) \d y \d x.
\end{equation}
Note that for every function $f$ s.t. $f(x,y)=f(x,-y)$,
\begin{equation}
\int_{y=-x}^{x} f(x,y) \d y = 2 \int_{y=0}^{x} f(x,y) \d y,
\end{equation}
and therefore
\begin{equation}
\int_{(\lambda_i,\lambda_j)\in \mathbb{R}^{+2}} f(\lambda_i,\lambda_j) \d \lambda_i \d \lambda_j = 2 \int_{0\leq \lambda_i \leq \lambda_j \leq +\infty} f(\lambda_i,\lambda_j) \d \lambda_i \d \lambda_j.
\end{equation}
Since the probability distribution $P_{\Lambda}(\Lambda)$ in (\ref{orderpdflambda}) obviously verifies the property $f(x,y)=f(x,-y)$ in the rotated space for any $1\leq i<j \leq N$, this reasoning (generalized to any permutation of the ordered eigenvalues) applies to $P_{\Lambda}(\Lambda)$. Therefore, for the sake of simplicity, we will now work with the PDF $P'_{\Lambda}(\Lambda)$ of the joint distribution of the \emph{unordered} eigenvalues, defined over ${\mathbb{R}^+}^N$.
Note that its restriction to the set of the ordered eigenvalues is proportional to $P_{\Lambda}(\Lambda)$. More precisely,
\begin{equation}
\forall \Lambda \in {\mathbb{R}^+}^N, \quad P'_{\Lambda}(\Lambda) = \frac{1}{N!} P_{\Lambda}(\lambda_{s(1)},\ldots,\lambda_{s(N)}),
\end{equation}
where $s$ is any permutation of $\{1\ldots N\}$ such that $\lambda_{s(1)} \leq \lambda_{s(2)} \leq \ldots\leq \lambda_{s(N)}$, and the coefficient $1/N!$ comes from the number of permutations of the $N$ eigenvalues.
Since $P_{\Lambda}(\lambda_{s(1)},\ldots,\lambda_{s(N)})=P_{\Lambda}(\lambda_1,\ldots,\lambda_N)$,
this yields
\begin{equation}
P'_{\Lambda}(\Lambda) = C \exp\left(\gamma \sum_{i=1\ldots N} \lambda_i\right) \prod_{i<j} (\lambda_i-\lambda_j)^2 \label{unorderedpdflambda},
\end{equation}
where the value of $C=\frac{\mathrm{e}^{\beta-1}}{P_{\mathbf{U}}} \frac{(2\pi)^{N(N-1)/2}}{ N!\prod_{j=1}^{N-1} j!}$ can be determined by
solving the normalization equation for the probability distribution $P'_{\Lambda}$:
\begin{eqnarray}
1 = \int_{{\mathbb{R}^+}^N} P'_{\Lambda}(\Lambda)\d\Lambda &=& C \int_{{\mathbb{R}^+}^N} \prod_{i=1}^N \mathrm{e}^{\gamma \lambda_i} \prod_{i<j} (\lambda_i-\lambda_j)^2 \d\Lambda \\
&=& C \left( -\frac{1}{\gamma}\right)^{N^2} \int_{{\mathbb{R}^+}^N} \prod_{i=1}^N \mathrm{e}^{-x_i} \prod_{i<j} (x_i-x_j)^2 \d x_1 \ldots \d x_N\\
&=& C \left( -\frac{1}{\gamma}\right)^{N^2} \prod_{i=0}^{N-1} \frac{\Gamma(i+2) \Gamma(i+1)}{\Gamma(2)} = C \left( -\frac{1}{\gamma}\right)^{N^2} \prod_{n=1}^N n! (n-1)!,
\end{eqnarray}
where we used the change of variables $x_i=-\gamma \lambda_i$ and the Selberg integral (see \cite[eq.~(17.6.5)]{Mehta_random_matrices}). This yields $C=(-\gamma)^{N^2} \prod_{n=1}^N [n! (n-1)!]^{-1}$. Furthermore, $\frac{\d \log(C)}{\d (-\gamma)}=\frac{N^2}{-\gamma}=N E_0$, and we finally obtain the eigenvalue distribution
\begin{equation}
P'_{\Lambda}(\Lambda) = \left(\frac{N}{E_0} \right)^{N^2} \prod_{n=1}^N \frac{1}{n! (n-1)!} \exp\left(-\frac{N}{E_0} \sum_{i=1\ldots N} \lambda_i\right) \prod_{i<j} (\lambda_i-\lambda_j)^2 \label{finalpdflambda}.\\
\end{equation}
In order to obtain the final distribution of $\mathbf{Q}$, let us first note that since the order of the eigenvalues has been shown to be immaterial, the restriction of $\mathbf{U}$ to $\mathcal{U}(N)/T$ is not necessary, and $\mathbf{Q}$ is distributed as $\mathbf{U}\Lambda\mathbf{U}^H$, where the distribution of $\Lambda$ is given by eq.~(\ref{finalpdflambda}) and $\mathbf{U}$ is Haar distributed (uniform on $\mathcal{U}(N)$). Furthermore, note that eq.~(\ref{finalpdflambda}) is a particular case of the density of the eigenvalues of a complex Wishart matrix \cite{tulino_random_matrices,Edelman_PhD}. We recall that the complex $N \times N$ Wishart matrix with $K$ degrees of freedom and covariance $\mathbf{\Sigma}$ (denoted by $\tilde{\mathcal{W}}_N(K,\mathbf{\Sigma})$) is the matrix $\mathbf{A}=\mathbf{B}\mathbf{B}^H$ where $\mathbf{B}$ is a $N\times K$ matrix whose columns are complex circularly-symmetric independent Gaussian vectors with covariance $\mathbf{\Sigma}$.
Indeed, eq.~(\ref{finalpdflambda}) describes the unordered eigenvalue density of a $\tilde{\mathcal{W}}_N(N,\frac{E_0}{N}{\bf I}_N)$ matrix. Taking into account the isotropic property of the distribution of $\mathbf{U}$, we can conclude that $\mathbf{Q}$ itself is also a $\tilde{\mathcal{W}}_N(N,\frac{E_0}{N}{\bf I}_N)$ Wishart matrix. A similar result, with a slightly different constraint, was obtained by Adhikari in \cite{Adhikari_maxent_elastodynamics2006}, where it is shown that the entropy-maximizing distribution of a positive definite matrix with known mean $\mathbf{G}$ follows a Wishart distribution with $N+1$ degrees of freedom, more precisely the $\tilde{\mathcal{W}}_N(N+1,\frac{\mathbf{G}}{N+1})$ distribution.\\
The isotropic property of the obtained Wishart distribution (due to the fact that $\mathbf{U}$ is Haar distributed, \emph{i.e.} there is no privileged direction for the eigenvectors of the covariance matrix $\mathbf{Q}$) is a consequence of the fact that no spatial constraints were imposed on the correlation. The energy constraint (imposed through the trace) only affects the distribution of the eigenvalues of $\mathbf{Q}$.
Note also that the generation for simulation purposes of $\mathbf{Q}$ according to the Wishart distribution obtained above is easy, since it can be obtained as $\mathbf{Q}=\frac{E_0}{N}\mathbf{B}\mathbf{B}^H$, where $\mathbf{B}$ is a $N \times N$ matrix with i.i.d. complex circularly-symmetric Gaussian coefficients of unit variance.
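This generation procedure can be sketched as follows (with illustrative values for $N$ and $E_0$; the sample average of $\mathrm{tr}(\mathbf{Q})$ is checked against the energy constraint $N E_0$):

```python
import numpy as np

rng = np.random.default_rng(4)
N, E0, num = 4, 1.0, 20_000

traces = np.empty(num)
for k in range(num):
    # B has i.i.d. CN(0,1) entries, so Q = (E0/N) B B^H is Wishart W_N(N, (E0/N) I)
    B = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
    Q = (E0 / N) * (B @ B.conj().T)
    traces[k] = np.trace(Q).real

avg_trace = float(traces.mean())      # should approach the constraint N * E0
```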
\subsubsection{Application to the Kronecker channel model}
We highlight the fact that the result of Section~\ref{section_maxent_Q} is directly applicable to the case where the channel correlation is known to be separable between transmitter and receiver. In this case \cite{Chuah_Kronecker_model}, the full correlation matrix $\mathbf{Q}$ is known to be the Kronecker product of the transmit and receive correlation matrices, \emph{i.e.} $\mathbf{Q}=\mathbf{Q}_T \otimes \mathbf{Q}_R$, where $\mathbf{Q}_T$ and $\mathbf{Q}_R$ are respectively the transmit and receive correlation matrices. This channel model is therefore denoted by ``Kronecker model'', see \cite{Oestges_Kronecker_validity_vtcsp06} for an overview of its applicability.
The stochastic nature of $\mathbf{Q}_T$ and $\mathbf{Q}_R$ is barely mentioned in the literature, since the correlation matrices are usually assumed to be measurable quantities associated to a particular antenna array shape and propagation environment.
However, in situations where these are not known (for instance, if the array shape is not known at the time of the channel code design, or if the properties of the scattering environment cannot be determined), but the Kronecker model is assumed to hold, our analysis suggests that the maximum entropy choice is to draw $\mathbf{Q}_T$ and $\mathbf{Q}_R$ from independent complex Wishart distributions with respectively ${n_t}$ and ${n_r}$ degrees of freedom.
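Under this maximum entropy choice, a draw of the full covariance matrix could be sketched as follows (an illustrative sketch; the per-side normalizations of $\mathbf{Q}_T$ and $\mathbf{Q}_R$ are assumptions made for the example):

```python
import numpy as np

rng = np.random.default_rng(5)
n_t, n_r = 2, 3

def complex_wishart(n, scale, rng):
    """Draw an n x n complex Wishart matrix with n degrees of freedom and
    covariance scale * I (scale is an assumed normalization)."""
    B = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    return scale * (B @ B.conj().T)

# Independent transmit- and receive-side correlation matrices
Q_T = complex_wishart(n_t, 1.0 / n_t, rng)
Q_R = complex_wishart(n_r, 1.0 / n_r, rng)

# Full covariance of vec(H) under the Kronecker assumption
Q = np.kron(Q_T, Q_R)
```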
\subsubsection{Marginalization over $\mathbf{Q}$}
\label{section_marginalizeQ}
The complete distribution of the correlated channel can be obtained by marginalizing out $\mathbf{Q}$, using its distribution as established in Section \ref{sectionmaxentQ}.
The distribution of $\mathbf{H}$ is obtained through
\begin{equation}
P_{\mathbf{H}}(\mathbf{H}) = \int_{\mathcal{S}} P_{\mathbf{H}|\mathbf{Q}}(\mathbf{H},\mathbf{Q}) P_{\mathbf{Q}}(\mathbf{Q}) \d\mathbf{Q} = \int_{\mathcal{U}(N) \times{\mathbb{R}^+}^N} P_{\mathbf{H}|\mathbf{Q}}(\mathbf{H},\mathbf{U},\Lambda) P'_{\mathbf{\Lambda}}(\mathbf{\Lambda}) \d\mathbf{U}\d\Lambda \label{integralH_existingQ}
\end{equation}
Let us rewrite the conditional probability density of eq.~(\ref{maxent_G_corr}) as
\begin{equation}
P_{\mathbf{H}|\mathbf{Q}}(\v{h},\mathbf{U}, \Lambda) = \frac{1}{\pi^N \det(\Lambda)} \mathrm{e}^{-\v{h}^H \mathbf{U}\Lambda^{-1}\mathbf{U}^H \v{h} } = \frac{1}{\pi^N \det(\Lambda)} \mathrm{e}^{ -\mathrm{tr}( \v{h}\v{h}^H \mathbf{U}\Lambda^{-1}\mathbf{U}^H )}. \label{conditional_H_Q}
\end{equation}
Using this expression in (\ref{integralH_existingQ}), we obtain
\begin{equation}
P_{\mathbf{H}}(\mathbf{H}) = \frac{1}{\pi^N} \int_{{\mathbb{R}^+}^N} \int_{\mathcal{U}(N)} \mathrm{e}^{ -\mathrm{tr}( \v{h}\v{h}^H\mathbf{U}\Lambda^{-1}\mathbf{U}^H )} \d\mathbf{U} \, \det(\Lambda)^{-1} P'_{\mathbf{\Lambda}}(\mathbf{\Lambda}) \d\Lambda. \label{integralH_intermed1}
\end{equation}
Following the notations of \cite{Capacity_and_character_expansion_IT}, let $\det(f(i,j))$ denote the determinant of a matrix with the $(i,j)$-th element given by an arbitrary function $f(i,j)$.
Also, let $\Delta(\mathbf{X})$ denote the Vandermonde determinant of the eigenvalues $x_i$ of the matrix $\mathbf{X}$
\begin{equation}
\Delta(\mathbf{X}) = \det(x_i^{j-1}) = \prod_{i>j} (x_i - x_j). \label{def_Vandermonde_determinant}
\end{equation}
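As a quick numerical sanity check of this sign convention (a sketch; numpy is assumed, and the test values are arbitrary):

```python
import numpy as np

# Check det(x_i^{j-1}) = prod_{i>j} (x_i - x_j) on a small example.
x = np.array([1.0, 2.0, 4.0, 7.0])
N = len(x)

# Matrix with (i, j)-th element x_i^{j-1}: rows indexed by i, columns by j.
V = np.vander(x, increasing=True)

det_V = np.linalg.det(V)
prod_diffs = np.prod([x[i] - x[j] for i in range(N) for j in range(i)])

print(det_V, prod_diffs)  # both equal 540.0 for these values
```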
Using these notations, let us recall the Harish-Chandra-Itzykson-Zuber (HCIZ) integral \cite{Balantekin_Un_character_expansion}
\begin{equation}
\int_{\mathcal{U}(N)} \exp( \kappa \mathrm{tr}(\mathbf{AUBU}^H)) \d \mathbf{U} = \left( \prod_{n=1}^{N-1} n! \right) \kappa^{-N(N-1)/2} \frac{\det\left( e^{\kappa A_i B_j} \right)}{\Delta(\mathbf{A}) \Delta(\mathbf{B})}, \label{HCIZ}
\end{equation}
where $\mathbf{A}$ and $\mathbf{B}$ are any Hermitian matrices with respective eigenvalues $A_1,\ldots,A_N$ and $B_1,\ldots,B_N$. Let us now make the Haar integral in (\ref{integralH_intermed1}) explicit through the Harish-Chandra-Itzykson-Zuber result, identifying $\mathbf{A}=\v{h}\v{h}^H$ and $\mathbf{B}=\Lambda^{-1}$. Note, however, that we cannot directly apply the result in (\ref{HCIZ}), since $\mathbf{A}$ has rank one, so that $N-1$ of its eigenvalues vanish.
This can be resolved by letting all the other eigenvalues tend to zero one by one and applying l'H\^opital's rule.
Therefore, let $\mathbf{A}$ be a Hermitian matrix whose $N$th eigenvalue $A_N$ equals $\v{h}^H\v{h}$, while the remaining eigenvalues $A_1,\ldots, A_{N-1}$ are arbitrary positive values that will eventually be taken to 0. Letting
\begin{equation}
I(\mathbf{H},A_1,\ldots, A_{N-1})=\frac{1}{\pi^N} \int_{{\mathbb{R}^+}^N} \int_{\mathcal{U}(N)} \mathrm{e}^{ -\mathrm{tr}( \mathbf{A}\mathbf{U}\Lambda^{-1}\mathbf{U}^H )} \d\mathbf{U} \, \det(\Lambda)^{-1} P'_{\mathbf{\Lambda}}(\mathbf{\Lambda}) \d\Lambda,
\end{equation}
$P_{\mathbf{H}}(\mathbf{H})$ can be determined as the limit distribution when the first $N-1$ eigenvalues of $\mathbf{A}$ go to zero:
\begin{equation}
P_{\mathbf{H}}(\mathbf{H}) = \lim_{A_1,\ldots,A_{N-1} \rightarrow 0} I(\mathbf{H},A_1,\ldots, A_{N-1}).
\end{equation}
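Before applying it, the HCIZ identity can be sanity-checked numerically for $N=2$ by averaging over Haar-random unitaries (a sketch with arbitrary diagonal test matrices; the Haar sample is generated by a phase-corrected QR decomposition):

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(n):
    # QR of a complex Gaussian matrix, with phases fixed so that Q is Haar-distributed.
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diagonal(r)
    return q * (d / np.abs(d))

N, kappa = 2, 1.0
A = np.diag([0.3, 0.9])   # arbitrary Hermitian test matrices
B = np.diag([0.2, 1.0])

# Monte Carlo estimate of the left-hand side of the HCIZ identity.
mc = np.mean([np.exp(kappa * np.trace(A @ U @ B @ U.conj().T)).real
              for U in (haar_unitary(N) for _ in range(50000))])

# Closed form for N = 2: 1! * kappa^{-1} * det(e^{kappa A_i B_j}) / (Delta(A) Delta(B)).
a, b = np.diag(A), np.diag(B)
exact = np.linalg.det(np.exp(kappa * np.outer(a, b))) / (kappa * (a[1] - a[0]) * (b[1] - b[0]))

print(mc, exact)  # agree to Monte Carlo accuracy (~1e-2)
```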
Applying the HCIZ to integrate over $\mathbf{U}$ yields
\begin{eqnarray}
I(\mathbf{H},A_1,\ldots, A_{N-1}) &=&\frac{(-1)^{\frac{N(N-1)}{2}}}{\pi^N}\left(\prod_{n=1}^{N-1} n!\right) \int_{{\mathbb{R}^+}^N} \frac{\det\left( e^{-A_i/\lambda_j} \right)}{\Delta(\mathbf{A})\Delta(\Lambda^{-1})} \det(\Lambda)^{-1} P'_{\mathbf{\Lambda}}(\mathbf{\Lambda}) \d\Lambda \\
&=& \frac{1}{\pi^N} \left(\prod_{n=1}^{N-1} n!\right) \int_{{\mathbb{R}^+}^N} \frac{\det\left( e^{-A_i/\lambda_j} \right) \det(\Lambda)^{N-2}}{\Delta(\mathbf{A}) \Delta(\Lambda)} P'_{\mathbf{\Lambda}}(\mathbf{\Lambda}) \d\Lambda \\
&=& \frac{C}{\pi^N} \left(\prod_{n=1}^{N-1} n!\right) \int_{{\mathbb{R}^+}^N} \frac{\det\left( e^{-A_i/\lambda_j} \right) \det(\Lambda)^{N-2}\Delta(\Lambda)}{\Delta(\mathbf{A})} \mathrm{e}^{-\frac{N}{E_0}\mathrm{tr}(\Lambda)} \d\Lambda,
\end{eqnarray}
where we used the identity $\Delta(\Lambda^{-1}) = \det(\frac{1}{\lambda_i}^{j-1}) = (-1)^{N(N+3)/2} \frac{\Delta(\Lambda)}{\det(\Lambda)^{N-1}}$.
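This identity is easy to confirm numerically (a sketch with arbitrary positive eigenvalues):

```python
import numpy as np
from itertools import combinations

lam = np.array([0.5, 1.2, 2.0, 3.7])
N = len(lam)

# Vandermonde determinant convention: prod over i > j of (v_i - v_j).
vandermonde = lambda v: np.prod([v[i] - v[j] for j, i in combinations(range(len(v)), 2)])

lhs = vandermonde(1.0 / lam)
rhs = (-1) ** (N * (N + 3) // 2) * vandermonde(lam) / np.prod(lam) ** (N - 1)

print(lhs, rhs)  # identical up to rounding
```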
Then, let us decompose the determinant product using the expansion formula: for an arbitrary $N\times N$ matrix $\mathbf{X}=(X_{i,j})$,
\begin{equation}
\det(\mathbf{X}) = \sum_{\mathbf{a}\in \mathcal{P}_N} (-1)^{\mathbf{a}} \prod_{n=1}^N X_{n,a_n} = \frac{1}{N!} \sum_{\mathbf{a},\mathbf{b}\in \mathcal{P}_N} (-1)^{\mathbf{a}+\mathbf{b}} \prod_{n=1}^N X_{a_n,b_n}, \label{determinant_expansions}
\end{equation}
where $\mathbf{a}=[a_1,\ldots,a_N]$, $\mathcal{P}_N$ denotes the set of all permutations of $[1,\ldots,N]$, and $(-1)^{\mathbf{a}}$ is the sign of the permutation.
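Both forms of the expansion can be verified against a library determinant (a sketch; the random test matrix is arbitrary):

```python
import numpy as np
from itertools import permutations
from math import factorial

rng = np.random.default_rng(1)
N = 4
X = rng.standard_normal((N, N))

def sign(p):
    # Sign of a permutation via counting inversions.
    return (-1) ** sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))

# First form: single sum over permutations.
det1 = sum(sign(a) * np.prod([X[n, a[n]] for n in range(N)])
           for a in permutations(range(N)))

# Second form: double sum over pairs of permutations, divided by N!.
det2 = sum(sign(a) * sign(b) * np.prod([X[a[n], b[n]] for n in range(N)])
           for a in permutations(range(N))
           for b in permutations(range(N))) / factorial(N)

print(det1, det2, np.linalg.det(X))  # all three agree
```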
Using the first form of the expansion, we obtain
\begin{eqnarray}
\Delta(\Lambda)\det\left( e^{-A_i/\lambda_j} \right) &= & \det(\lambda_i^{j-1})\det( e^{-A_j/\lambda_i} ) \label{playwithdets}\\
& = & \left( \sum_{\mathbf{a}\in \mathcal{P}_N} (-1)^{\mathbf{a}} \prod_{n=1}^N \lambda_n^{a_n-1} \right) \left( \sum_{\mathbf{b}\in \mathcal{P}_N} (-1)^{\mathbf{b}} \prod_{m=1}^N \mathrm{e}^{-A_{b_m}/{\lambda_m}} \right) \\
& = & \sum_{\mathbf{a,b}\in \mathcal{P}_N^2} (-1)^{\mathbf{a}+\mathbf{b}} \prod_{n=1}^N \lambda_n^{a_n-1} \mathrm{e}^{-A_{b_n}/{\lambda_n}}.
\end{eqnarray}
Note that in (\ref{playwithdets}) we used the invariance of the second determinant by transposition in order to simplify subsequent derivations. Therefore,
\begin{eqnarray}
I(\mathbf{H},A_1,\ldots, A_{N-1}) &=& \frac{C}{\pi^N \Delta(\mathbf{A})} \left(\prod_{n=1}^{N-1} n!\right) \int_{{\mathbb{R}^+}^N} \sum_{\mathbf{a,b}\in \mathcal{P}_N} (-1)^{\mathbf{a}+\mathbf{b}} \prod_{n=1}^N \lambda_n^{N+a_n-3} \mathrm{e}^{-A_{b_n}/{\lambda_n}} \mathrm{e}^{-\frac{N}{E_0} \lambda_n} \d\Lambda \\
&=& \frac{C}{\pi^N \Delta(\mathbf{A})} \left(\prod_{n=1}^{N-1} n!\right) \sum_{\mathbf{a,b}\in \mathcal{P}_N} (-1)^{\mathbf{a}+\mathbf{b}} \prod_{n=1}^N \int_{\mathbb{R}^+} \lambda_n^{N+a_n-3} \mathrm{e}^{-A_{b_n}/{\lambda_n}} \mathrm{e}^{-\frac{N}{E_0} \lambda_n} \d \lambda_n \\
&=& \frac{C \, N!}{\pi^N } \left(\prod_{n=1}^{N-1} n!\right) \frac{\det[f_i(A_j)]}{\Delta(\mathbf{A})},
\end{eqnarray}
where we let $f_i(x)= \int_{\mathbb{R}^+} t^{N+i-3} \mathrm{e}^{-x/t} \mathrm{e}^{-\frac{N}{E_0} t} \d t$, and recognize the second form of the determinant expansion (eq.~(\ref{determinant_expansions})).
In order to obtain the limit as $A_1,\ldots, A_{N-1}$ go to zero, we use a result from \cite[Appendix III]{Capacity_and_character_expansion_IT}, which gives the limit of the otherwise singular ratio $\frac{\det(f_i(x_j))}{\Delta(\mathbf{X})}$ as the first $p$ eigenvalues tend to a common value $x_0$:
\begin{equation}
\lim_{x_1,x_2,\ldots,x_p \rightarrow x_0} \frac{\det(f_i(x_j))}{\Delta(\mathbf{X})} = \frac{\det\left[f_i(x_0);f_i'(x_0);\ldots;f_i^{(p-1)}(x_0);f_i(x_{p+1});\ldots; f_i(x_N) \right]}{\Delta(x_{p+1},\ldots,x_N) \prod_{i=p+1}^N (x_i-x_0)^p \prod_{j=1}^{p-1} j!},
\end{equation}
where the first $p$ columns in the right-hand side determinant represent the successive derivatives of the function $f$, and the rows correspond to different values of $i=1,\ldots, N$.
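A small numerical experiment illustrates this limit (a sketch for $N=3$, $p=2$, with monomial test functions $f_i(x)=x^i$; the node values are arbitrary):

```python
import numpy as np

# f_i(x) = x^i for i = 1, 2, 3; the first p = 2 nodes approach x0, the third stays fixed.
x0, x3, eps = 0.5, 2.0, 1e-5
x = np.array([x0 + eps, x0 + 2 * eps, x3])
i_vals = np.array([1, 2, 3])

F = x[None, :] ** i_vals[:, None]            # F[i, j] = f_i(x_j)
vandermonde = (x[1] - x[0]) * (x[2] - x[0]) * (x[2] - x[1])
lhs = np.linalg.det(F) / vandermonde

# Right-hand side: columns f_i(x0), f_i'(x0), f_i(x3); denominator (x3 - x0)^2 * 1!.
G = np.column_stack([x0 ** i_vals,
                     i_vals * x0 ** (i_vals - 1),
                     x3 ** i_vals])
rhs = np.linalg.det(G) / ((x3 - x0) ** 2 * 1.0)

print(lhs, rhs)  # agree to O(eps); for these monomials the limit is x0^2 * x3 = 0.5
```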
Applying this -- with $p=N-1$ and $x_0=0$ since $\mathbf{A}$ has only one non-zero eigenvalue -- yields
\begin{eqnarray}
P_{\mathbf{H}}(\mathbf{H}) &=& \lim_{A_1,A_2,\ldots,A_{N-1} \rightarrow 0} I(\mathbf{H},A_1,\ldots, A_{N-1}) \\
&=& \frac{(-\gamma)^{N^2}}{\pi^N x_N^{N-1}} \prod_{n=1}^{N-1} \left[ n! (n-1)! \right]^{-1} \det\left[f_i(0);f_i'(0);\ldots;f_i^{(N-2)}(0); f_i(x_N) \right]. \label{P_h_determinant}
\end{eqnarray}
At this point, it becomes obvious from (\ref{P_h_determinant}) that the probability of $\mathbf{H}$ depends only on its squared norm (recall that $x_N=\v{h}^H\v{h}$ by definition of $\mathbf{A}$). The distribution of $\v{h}$ is isotropic, and is completely determined by the probability density function $P_{x}(x)$ of having $\v{h}$ s.t. $\v{h}^H\v{h}=x$.\\
Therefore, for a given $x$, $\v{h}$ is uniformly distributed over $\mathbb{S}^{N-1}(x)= \left\{\v{h} \, \mathrm{s.t.} \, \v{h}^H\v{h} = x \right\}$, the zero-centered complex hypersphere of squared radius $x$. The volume of the ball it encloses is $V_N(x)=\frac{\pi^N x^N}{N!}$, and its surface is $S_N(x)=\frac{\d V_N(x)}{\d x}=\frac{\pi^N x^{N-1}}{(N-1)!}$.
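The volume formula can be spot-checked by Monte Carlo for $N=2$ (a sketch; the complex pair is sampled as four real coordinates in the enclosing cube $[-1,1]^4$):

```python
import numpy as np
from math import factorial

rng = np.random.default_rng(2)
N, x = 2, 1.0  # complex dimension 2, squared radius 1

# Sample points of C^2 = R^4 uniformly in [-1, 1]^4 and count those inside the ball.
pts = rng.uniform(-1.0, 1.0, size=(400000, 2 * N))
inside = (pts ** 2).sum(axis=1) <= x

vol_mc = inside.mean() * 2.0 ** (2 * N)        # cube volume is 16
vol_formula = np.pi ** N * x ** N / factorial(N)

print(vol_mc, vol_formula)  # ~4.93 for both
```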
Therefore, we can write the probability density function of $x_N$ as
\begin{equation}
P_{x}(x)= \int_{\mathbb{S}^{N-1}(x)} P_{\mathbf{H}}(\v{h}) \d \v{h} = \frac{(-\gamma)^{N^2}}{(N-1)!} \prod_{n=1}^{N-1} \left[ n! (n-1)! \right]^{-1} \det\left[f_i(0);f_i'(0);\ldots;f_i^{(N-2)}(0); f_i(x) \right].
\end{equation}
In order to simplify the expression of the successive derivatives of $f_i$, it is useful to identify the Bessel $K$-function \cite[Section 8.432]{Gradshteyn_Ryzhik}, and to replace it by its infinite sum expansion \cite[Section 8.446]{Gradshteyn_Ryzhik}
\begin{eqnarray}
f_i(x) &=& 2\left(\sqrt{ \frac{x}{-\gamma}} \right)^{i+N-2} K_{i+N-2}(2\sqrt{-\gamma x})\\
&=& (-\gamma)^{-i-N+2} \left[ \sum_{k=0}^{i+N-3} (-1)^k \frac{(i+N-3-k)!}{k!} (-\gamma x)^k + \right. \nonumber \\ && \left. (-1)^{i+N-1} \sum_{k=0}^{+\infty} \frac{(-\gamma x)^{i+N-2+k}}{k! (i+N-2+k)!} \left( \ln(-\gamma x)-\psi(k+1)-\psi(i+N-1+k) \right) \right].
\end{eqnarray}
Note that only one term in the sum has a non-zero $p$th derivative at 0. Therefore, the $p$th derivative of $f_i$ at 0 is simply (for $0\leq p \leq N-2$)
\begin{equation}
f_i^{(p)}(0) = (-1)^{-i-N} \gamma^{p-i-N+2} (i+N-3-p)!
\end{equation}
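The Bessel-function identification of $f_i$ above can be checked against its defining integral (a sketch using scipy; $N$, $i$, and the parameter values are illustrative, with $\beta=-\gamma=N/E_0$):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kv

N, i = 4, 2
beta = 4.0      # beta = -gamma = N / E_0, here with E_0 = 1
x = 1.3
nu = i + N - 2  # order of the Bessel K-function

# f_i(x) as the original integral over t.
f_quad, _ = quad(lambda t: t ** (N + i - 3) * np.exp(-x / t - beta * t), 0.0, np.inf)

# Closed form: 2 (x / beta)^{nu/2} K_nu(2 sqrt(beta x)).
f_bessel = 2.0 * (x / beta) ** (nu / 2) * kv(nu, 2.0 * np.sqrt(beta * x))

print(f_quad, f_bessel)  # agree to quadrature accuracy
```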
Let us bring the last column to become the first, and expand the resulting determinant along its first column:
\begin{eqnarray}
\det\left[f_i^{(0)}(0);\ldots;f_i^{(N-2)}(0); f_i(x) \right] &=& (-1)^{N-1} \det\left[f_i(x); f_i^{(0)}(0);\ldots;f_i^{(N-2)}(0) \right] \\
& =& (-1)^{N-1} \sum_{n=1}^{N} (-1)^{1+n} f_n(x) \det\left[\tilde{f}_{i,n}^{(0)}(0);\ldots;\tilde{f}_{i,n}^{(N-2)}(0) \right]
\end{eqnarray}
where $\tilde{f}_{i,n}^{(p)}(0)$ is the $N-1$ dimensional column obtained by removing the $n$th element from $f_i^{(p)}(0)$.
Factoring the $(-1)^p \gamma^{p-i-N+2}$ in the expression of $f_i^{(p)}(0)$ out of the determinant yields
\begin{equation}
\det\left[\tilde{f}_{i,n}^{(0)}(0);\ldots;\tilde{f}_{i,n}^{(N-2)}(0) \right] = (-1)^{n-N(N+1)/2} \gamma^{n-N^2+N-1} \det(\mathbf{g}^{(n)})
\end{equation}
where the $N-1$ dimensional matrix $\mathbf{g}^{(n)}$ has the elements
\begin{equation}
\mathbf{g}_{l,k}^{(n)} = \Gamma(q_l^{(n)}+N-k-1)
\end{equation}
where
\begin{equation}
q_l^{(n)}= \left\{ \begin{array}{ll}
l, & l \leq n-1, \\
l+1, & l \geq n.
\end{array} \right.
\end{equation}
Using the fact that $\Gamma(q_l^{(n)}+i)= q_l^{(n)} \Gamma(q_l^{(n)}+i-1)+ (i-1) \Gamma(q_l^{(n)}+i-1)$, note that the $k$th column of $\mathbf{g}^{(n)}$ is
\begin{equation}
\mathbf{g}_{l,k}^{(n)}=q_l^{(n)} \Gamma(q_l^{(n)}+N-k-2)+ (N-k-2) \mathbf{g}_{l,k+1}^{(n)}.
\end{equation}
Since the second term is proportional to the $(k+1)$th column, it can be omitted without changing the value of the determinant. Applying this property to the first $N-2$ pairs of consecutive columns, and repeating this process again to the first $N-2,\ldots,1$ pairs of columns, we obtain
\begin{eqnarray}
\det(\mathbf{g}^{(n)})&=& \det\left( \Gamma(q_l^{(n)}+N-2) ; \ldots ; \Gamma(q_l^{(n)}+2) ;\Gamma(q_l^{(n)}+1) ; \Gamma(q_l^{(n)}) \right)\\
&=& \det\left( q_l^{(n)} \Gamma(q_l^{(n)}+N-3) ; \ldots ; q_l^{(n)} \Gamma(q_l^{(n)}+1) ;q_l^{(n)} \Gamma(q_l^{(n)}) ; \Gamma(q_l^{(n)}) \right) \\
&=& \det\left( {q_l^{(n)}}^2 \Gamma(q_l^{(n)}+N-4) ; \ldots ; {q_l^{(n)}}^2 \Gamma(q_l^{(n)}) ;q_l^{(n)} \Gamma(q_l^{(n)}) ; \Gamma(q_l^{(n)}) \right) \\
&=& \det\left( {q_l^{(n)}}^{N-1-k} \Gamma(q_l^{(n)}) \right) \\
&=& \frac{\prod_{i=1}^N \Gamma(i)}{\Gamma(n)} \det\left({q_l^{(n)}}^{N-1-k} \right) \\
&=& \frac{\prod_{i=1}^N \Gamma(i)}{\Gamma(n)} (-1)^{(N-1)(N-2)/2} \det\left({q_l^{(n)}}^{k-1} \right),
\end{eqnarray}
where the last two equalities are obtained respectively by factoring out the $\Gamma(q_l^{(n)})$ factors (common to all terms on the $l$th row) and inverting the order of the columns in order to get a proper Vandermonde structure.
Finally, the determinant can be computed using (\ref{def_Vandermonde_determinant}), as
\begin{eqnarray}
\det\left({q_l^{(n)}}^{k-1} \right) &=& \prod_{1\leq j < i \leq N-1} \left( q_i^{(n)} - q_j^{(n)}\right) \\
&=& \left(\prod_{1\leq j < i \leq n-1} (i-j)\right) \left(\prod_{1\leq j < n \leq i \leq N-1} (i+1-j)\right) \left(\prod_{n \leq j < i \leq N-1} (i-j)\right) \\
&=& \prod_{i=1}^{n-2} i! \prod_{i=n}^{N-1} \frac{i!}{(i-n+1)!} \prod_{i=n+1}^{N-1} (i-n)! = \frac{\prod_{i=1}^{N-1} i!}{(n-1)!(N-n)!}.
\end{eqnarray}
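The closed form can be confirmed numerically for small sizes (a sketch; $N$ and $n$ are arbitrary test values):

```python
import numpy as np
from math import factorial

N, n = 5, 3
# q_l^{(n)}: the integers 1..N with the value n removed.
q = np.array([l for l in range(1, N + 1) if l != n], dtype=float)

# Vandermonde determinant det(q_l^{k-1}) over the N-1 values q_l.
det_num = np.linalg.det(np.vander(q, increasing=True))

det_formula = np.prod([factorial(i) for i in range(1, N)]) / (factorial(n - 1) * factorial(N - n))

print(det_num, det_formula)  # both 72.0 here
```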
Wrapping up the above derivations, one obtains successively
\begin{eqnarray}
\det(\mathbf{g}^{(n)}) &=& \left( \prod_{i=1}^{N-1} i! \right)^2 (-1)^{(N-1)(N-2)/2} \frac{1}{\left[(n-1)!\right]^2 (N-n)!}, \\
\det\left[\tilde{f}_{i,n}^{(0)}(0);\ldots;\tilde{f}_{i,n}^{(N-2)}(0) \right] & = & \left( \prod_{i=1}^{N-1} i! \right)^2 \frac{(-1)^{n+1} \gamma^{n-N^2+N-1}}{\left[(n-1)!\right]^2 (N-n)!},\\
\det\left[f_i^{(0)}(0);\ldots;f_i^{(N-2)}(0); f_i(x) \right] &=& \sum_{n=1}^{N} (-1)^{1-N} f_n(x) \left( \prod_{i=1}^{N-1} i! \right)^2 \frac{\gamma^{n-N^2+N-1}}{\left[(n-1)!\right]^2 (N-n)!}.
\end{eqnarray}
Finally, we obtain
\begin{equation}
P_{x}(x) = -\sum_{n=1}^{N} f_n(x) \frac{\gamma^{N+n-1}}{\left[(n-1)!\right]^2 (N-n)!},
\end{equation}
where $\gamma = -\frac{N}{E_0}$.\\
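As a consistency check, the density above can be integrated numerically: it is normalized to one, and its first moment evaluates to $N E_0$ for the values tested here (a sketch using scipy; $N$ and $E_0$ are illustrative):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kv
from math import factorial

N, E0 = 4, 1.0
beta = N / E0  # beta = -gamma

def f(n, x):
    # f_n(x) = 2 (x / beta)^{nu/2} K_nu(2 sqrt(beta x)), nu = n + N - 2.
    nu = n + N - 2
    return 2.0 * (x / beta) ** (nu / 2) * kv(nu, 2.0 * np.sqrt(beta * x))

def P_x(x):
    # P_x(x) = -sum_n f_n(x) gamma^{N+n-1} / ((n-1)!^2 (N-n)!), with gamma = -beta.
    return -sum(f(n, x) * (-beta) ** (N + n - 1) / (factorial(n - 1) ** 2 * factorial(N - n))
                for n in range(1, N + 1))

norm, _ = quad(P_x, 0.0, np.inf)
mean, _ = quad(lambda x: x * P_x(x), 0.0, np.inf)

print(norm, mean)  # ~1.0 and ~N * E0
```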
The corresponding PDF is shown in Figure~\ref{pdfs_4x4}, as well as the PDF of the instantaneous power of a Gaussian i.i.d. channel of the same size and mean power. As expected, the energy distribution of the proposed model is more spread out than the energy of a Gaussian i.i.d. channel.
Figure~\ref{Icdf_4x4} shows the CDF curves of the instantaneous mutual information achieved over the channel described in eq.~(\ref{awgn_channelmodel}) by these two channel models. The proposed model differs in particular in the tails of the distribution: for instance, the 1\% outage capacity is reduced from 4.5 to 3.9 nats w.r.t. the Gaussian i.i.d. model.
\begin{figure}[htb]
\centering
\subfigure[PDF of $x=||\mathbf{H}||^2_F$ ]{\epsfig{figure=pdfs_4x4.eps,height=6.2cm}\label{pdfs_4x4}}
\subfigure[CDF of instantaneous mutual information $I$]{\epsfig{figure=Icdf_4x4.eps,height=6.2cm}\label{Icdf_4x4}}
\caption{Amplitude and mutual information distributions of the proposed channel models for a $4\times 4$ antennas setting.}
\end{figure}
\subsection{Limited-rank covariance matrix}
In this section, we address the situation where the modeler takes into account the existence of a covariance matrix of rank $L < N$ (we assume that $L$ is known).
As in the full-rank case, we will use the eigendecomposition $\mathbf{Q} = \mathbf{U}\Lambda\mathbf{U}^H$ of the covariance matrix, with $\Lambda = \mathrm{diag}(\lambda_1, \ldots, \lambda_L, 0, \ldots, 0)$. Let us denote $\Lambda_L = \mathrm{diag}(\lambda_1, \ldots, \lambda_L)$.
The maximum entropy probability density of $\mathbf{Q}$ with the extra rank constraint is unsurprisingly similar to the one derived in Section~\ref{sectionmaxentQ}, with the difference that all the energy is carried by the first $L$ eigenvalues, \emph{i.e.} $\mathbf{U}$ is uniformly distributed over $\mathcal{U}(N)$, while
\begin{equation}
P_{\Lambda_L}(\Lambda_L) = \left(\frac{L^2}{N E_0} \right)^{L^2} \prod_{n=1}^L \frac{1}{n! (n-1)!} \exp\left(-\frac{L^2}{N E_0} \sum_{i=1\ldots L} \lambda_i\right) \prod_{i<j\leq L} (\lambda_i-\lambda_j)^2 \label{finalpdflambdaL}.\\
\end{equation}
However, the definition of the conditional probability density $P_{\mathbf{H}|\mathbf{Q}}(\v{h},\mathbf{U},\Lambda)$ in eq.~(\ref{maxent_G_corr}) does not hold when $\mathbf{Q}$ is not full rank: $\v{h}$ becomes a degenerate Gaussian random variable. Its projection
onto the $L$-dimensional subspace associated with the non-zero eigenvalues of $\mathbf{Q}$ follows a Gaussian law, whereas the probability of $\v{h}$ lying outside of this subspace is zero.
The conditional probability in eq.~(\ref{conditional_H_Q}) must therefore be rewritten as
\begin{equation}
P_{\mathbf{H}|\mathbf{Q}}(\v{h},\mathbf{U}, \Lambda_L) = \mathbbm{1}_{\{ \v{h} \in \mathrm{Span}(\mathbf{U}_{[L]}) \} } \frac{1}{\pi^L \prod_{i=1}^L \lambda_i} \mathrm{e}^{-\v{h}^H \mathbf{U}_{[L]}\Lambda_L^{-1}{\mathbf{U}_{[L]}}^H \v{h} }, \label{conditional_H_Q_limitedrank}
\end{equation}
where $\mathbf{U}_{[L]}$ denotes the $N \times L$ matrix obtained by truncating the last $N-L$ columns of $\mathbf{U}$. The indicator function ($\mathbbm{1}_{\mathcal{A}}=1$ if statement $\mathcal{A}$ is true, 0 else) ensures that $P_{\mathbf{H}|\mathbf{Q}}(\v{h},\mathbf{U}, \Lambda)$ is zero for $\v{h}$ outside of the column span of $\mathbf{U}_{[L]}$.\\
We need now to marginalize $\mathbf{U}$ and $\Lambda$ in order to obtain the PDF of $\v{h}$:
\begin{equation}
P_{\mathbf{H}}(\v{h}) = \int_{\mathcal{U}(N) \times{\mathbb{R}^+}^L} P_{\mathbf{H}|\mathbf{Q}}(\v{h},\mathbf{U},\Lambda_L) P_{\Lambda_L}(\Lambda_L) \d\mathbf{U}\d\Lambda_L.
\end{equation}
However, the expression of $P_{\mathbf{H}|\mathbf{Q}}(\v{h},\mathbf{U}, \Lambda_L)$ does not lend itself directly to the marginalization described in Section~\ref{section_marginalizeQ}, since the zero eigenvalues of $\mathbf{Q}$ complicate the analysis. This difficulty can be avoided by performing the marginalization of the covariance in an $L$-dimensional subspace. In order to see this, consider an $L \times L$ unitary matrix $\mathbf{B}_L$, and note that the $N \times N$ block-matrix $\mathbf{B} = \left(\begin{array}{cc} \mathbf{B}_L & 0 \\ 0 & {\bf I}_{N-L} \end{array} \right)$ is unitary as well.
Since the uniform distribution over $\mathcal{U}(N)$ is unitarily invariant, $\mathbf{U} \mathbf{B}$ is uniformly distributed over $\mathcal{U}(N)$, and for any $\mathbf{B}_L \in \mathcal{U}(L)$ we have
\begin{equation}
P_{\mathbf{H}}(\v{h}) = \int_{\mathcal{U}(N) \times{\mathbb{R}^+}^L} P_{\mathbf{H}|\mathbf{Q}}(\v{h},\mathbf{U} \mathbf{B},\Lambda_L) P_{\Lambda_L}(\Lambda_L) \d\mathbf{U}\d\Lambda_L.
\end{equation}
Furthermore, since $\int_{\mathcal{U}(L)} \d\mathbf{B}_L = 1$,
\begin{eqnarray}
P_{\mathbf{H}}(\v{h}) &=& \int_{\mathcal{U}(L)} \int_{\mathcal{U}(N) \times{\mathbb{R}^+}^L} P_{\mathbf{H}|\mathbf{Q}}(\v{h},\mathbf{U} \mathbf{B},\Lambda_L) P_{\Lambda_L}(\Lambda_L) \d\mathbf{U}\d\Lambda_L \d\mathbf{B}_L \\
&=& \int_{\mathbf{U} \in \mathcal{U}(N)} \mathbbm{1}_{\{ \v{h} \in \mathrm{Span}(\mathbf{U}_{[L]}) \} } \int_{\mathcal{U}(L)\times{\mathbb{R}^+}^L} \frac{1}{\pi^L \prod_{i=1}^L \lambda_i} \mathrm{e}^{-\v{h}^H \mathbf{U}_{[L]}\mathbf{B}_L\Lambda_L^{-1}{\mathbf{B}_L^H \mathbf{U}_{[L]}}^H \v{h} } P_{\Lambda_L}(\Lambda_L) \d\mathbf{B}_L \d\Lambda_L \d\mathbf{U} \\
& = & \int_{\mathbf{U} \in \mathcal{U}(N)} \mathbbm{1}_{\{ \v{h} \in \mathrm{Span}(\mathbf{U}_{[L]}) \} } P_{\v{k}}({\mathbf{U}_{[L]}}^H \v{h}) \d\mathbf{U}, \label{integralk_intermed0}
\end{eqnarray}
where (\ref{integralk_intermed0}) is obtained by letting $\v{k} = {\mathbf{U}_{[L]}}^H \v{h}$ and
\begin{equation}
P_{\v{k}}(\v{k}) = \int_{\mathcal{U}(L)\times{\mathbb{R}^+}^L} \frac{1}{\pi^L \prod_{i=1}^L \lambda_i} \mathrm{e}^{-\v{k}^H \mathbf{B}_L\Lambda_L^{-1}\mathbf{B}_L^H \v{k} } P_{\Lambda_L}(\Lambda_L) \d\mathbf{B}_L \d\Lambda_L. \label{integralk_intermed}
\end{equation}
We can then exploit the similarity of eqs.~(\ref{integralk_intermed}) and (\ref{integralH_intermed1}) and, by the same reasoning as in Section~\ref{section_marginalizeQ}, conclude directly that $\v{k}$ is isotropically distributed in $\mathbb{C}^L$, with a PDF that depends only on its squared norm, following
\begin{equation}
P_{\v{k}}(\v{k}) = \frac{1}{S_L(\v{k}^H \v{k})} P^{(L)}_{x}(\v{k}^H \v{k}),
\end{equation}
where
\begin{equation}
P^{(L)}_{x}(x) = \frac{2}{x}\sum_{i=1}^{L} \left(-L \sqrt{\frac{x}{N E_0}}\right)^{L+i} K_{i+L-2}\left(2 L\sqrt{\frac{x}{N E_0}}\right) \frac{1}{\left[(i-1)!\right]^2 (L-i)!}.
\end{equation}
Finally, note that $\v{h}^H\v{h}=\v{k}^H\v{k}$, and that the marginalization over the random rotation that transforms $\v{k}$ into $\v{h}$ in eq.~(\ref{integralk_intermed0}) preserves the isotropic property of the distribution. Therefore,
\begin{equation}
P_{\v{h}}(\v{h}) = \frac{1}{S_N(\v{h}^H \v{h})} P^{(L)}_{x}(\v{h}^H \v{h}).
\end{equation} \\
Examples of the corresponding PDFs for $L=1, 2, 4, 8, 12$ and 16 are represented on Fig.~\ref{rank_L_PDF_fig} for a $4 \times 4$ channel ($N=16$), together with the PDF of the instantaneous power of a Gaussian i.i.d. channel of the same size and mean power.
As expected, the energy distribution of the proposed MaxEnt model is more spread out than the energy of a Gaussian i.i.d. channel. \\
\begin{figure}[htb]
\centering
\epsfig{figure=Px_rank_L.eps,height=11cm}
\caption{Examples of the limited-rank covariance distribution $P^{(L)}_{x}(x)$ for $L=1, 2, 4, 8, 12$ and 16, and $\chi^2$ with 16 degrees of freedom, for $N E_0 = 16$.}
\label{rank_L_PDF_fig}
\end{figure}
The CDF of the mutual information achieved over the limited-rank ($L<16$) and full rank ($L=16$) covariance MaxEnt channel at a SNR of 15 dB is pictured on Figure~\ref{MI_rankL_CDF_fig} for various ranks $L$, together with the Gaussian i.i.d. channel.
The proposed models differ in particular in the tails of the distribution: the outage capacity at low outage probability is greatly reduced w.r.t. the Gaussian i.i.d. channel model.
\begin{figure}[htb]
\centering
\epsfig{figure=MI_rankL_CDF.eps,height=11cm}
\caption{CDF of the instantaneous mutual information of a $4\times 4$ flat-fading channel for the MaxEnt model with various covariance ranks, at 15dB SNR.}
\label{MI_rankL_CDF_fig}
\end{figure}
\section{Conclusion}
\label{section_conclusion}
In this paper, the maximum entropy principle is used to derive several models of wireless flat-fading channels for various cases of a priori knowledge about the channel properties. First, the cases of average channel energy and known upper-bound on the channel energy were studied.
Subsequently, the issue of taking into account an unknown amount of spatial correlation in MIMO channel models was addressed. The entropy maximizing distribution of the covariance matrix under an average trace constraint was shown to be a Wishart distribution, and the corresponding probability density function of the channel matrix was shown to be described analytically by a function of the channel Frobenius norm. This model was generalized to the case where the covariance matrix is rank-deficient with known rank.
The proposed channel models were compared to the commonly used Gaussian i.i.d. models in terms of the statistics of the achieved mutual information for a given noise level. The proposed models exhibit slightly lower average mutual information, in line with the rule that channel correlation decreases its capacity, and a higher variance than the Gaussian i.i.d. model, which reflects the presence of shadow fading.
\section*{Acknowledgements}
This research was supported by France Telecom and by the NEWCOM network of excellence. The authors would like to thank an anonymous reviewer for pointing out reference \cite{Adhikari_maxent_elastodynamics2006}.
\bibliographystyle{IEEEbib}
\section{Introduction}
\label{sec:Introduction}
The discovery of the Higgs boson~\cite{Aad:2012tfa,Chatrchyan:2012xdj}, possibly the first spin zero elementary particle observed in Nature,
raised the crucial question of whether one or several scalar particles exist with masses much below any supposed cut-off scale of a given theory, such as the Planck scale. The detection of a light scalar sector would, in fact, allow us to discriminate between the theories beyond the Standard Model (SM) which protect the electroweak scale from the influence of the high-energy cut-off, such as supersymmetry or compositeness, and the scenarios supported by selection mechanisms or landscape arguments which disfavour the existence of these particles.
Recently, both the ATLAS~\cite{ATLAS} and CMS~\cite{CMS:2015dxe} experiments at the LHC have reported an excess of events in the di-photon channel
associated to an invariant mass of about 750~GeV. Given the energy resolutions of the experiments, the signal events seem consistent with each other, implying evidence for new physics with a global statistical significance that certainly exceeds the $3~\sigma$ level. From a theoretical point of view, because decays of a spin-one particle to di-photon final states are forbidden by the Landau-Yang theorem, the possible candidates for the new resonance must have either spin zero or two.
However, in both cases, the fact that no excesses have been reported at comparable energies in complementary channels such as di-jet~\cite{Khachatryan:2015dcf} and $t\bar t$~\cite{Khachatryan:2015sma,Aad:2015fna}, and neither in di-boson~\cite{Aad:2015owa} nor di-lepton~\cite{Aad:2015ufa} final states, poses a clear challenge to the possible interpretations of the di-photon excess within models of new physics.
In this work, after discussing the consistency of the LHC di-photon resonances detected by the two experiments, we interpret the signal in terms of a new hypothetical scalar particle and investigate the mentioned experimental hints within an effective field theory that models a possible singlet extension of the SM, as well as within the four flavour conserving Two Higgs Doublet Models (2HDM). We pay particular attention also to the minimal supersymmetric standard model (MSSM), study in detail a simple 2HDM extension featuring two heavy vector-like quarks, and comment, for completeness, on the possibilities offered by composite resonances.
Our results show that the LHC di-photon excess is indeed compatible with all the mentioned models but the 2HDM, including its supersymmetric UV completion, the MSSM, which are strongly disfavoured by the LHC upper constraints on the $pp\rightarrow H\rightarrow t\bar{t}$ cross section.
\section{Consistency of the signal }
\label{sec:H}
Recently the ATLAS and the CMS collaborations presented their results for searches of resonances in the di-photon channel analyzing respectively 3.2~fb$^{-1}$ and 2.6~fb$^{-1}$ of data collected at a 13~TeV collision energy. Both the experiments observe an excess in the di-photon signal peaked at 747~GeV~\cite{ATLAS} and
750~GeV~\cite{CMS:2015dxe} with local significances of 3.6 and $2.6~\sigma$ in ATLAS and CMS, respectively.
In addition to that, the CMS collaboration presented the combined results that include 19.7~fb$^{-1}$ of published data taken at 8~TeV~\cite{Khachatryan:2015qba}, which exhibits an excess
at the same energy and consequently enlarges the local significance of the signal to the $3.1~\sigma$ level. The ATLAS collaboration did not present the corresponding combination, since its
Run 1 analysis extends only up to 600~GeV of invariant mass. Nevertheless, during the presentation of the new results, the speaker~\cite{ATLAS} remarked that the two ATLAS datasets are consistent with each other.
The uncertainty in the photon energy determination at 750~GeV is of order ${\cal O}(1)\%$ in both the experiments,
depending on whether one of the photons is detected in the barrel or in the endcap.
Therefore, within the quoted energy uncertainty, the signals detected by the two experiments are compatible with each other and can originate from the decays of a new particle.
In light of this, regarding the two datasets as statistically independent and barring systematic errors allows us to make a first, naive, combination of the two signals that rejects the SM background hypothesis at the local $4.5~\sigma$ level.
Clearly, the corresponding global significance is diminished by the look-elsewhere effect. However, when combining two independent measurements, the signal of one experiment can be used to determine the signal region, compensating for the look-elsewhere effect, so that the signal detected by the other experiment acquires a global significance. This implies that the total global significance of the LHC di-photon excess at 750~GeV invariant mass exceeds the $3~\sigma$ level and
should be considered as a strong evidence for new physics.
Breaking down the signal, we see that the most significant excess in the di-photon invariant mass spectrum observed by CMS for barrel-barrel events in the combined 8+13 TeV analysis is at around 750~GeV, suggesting a production cross section times branching ratio of
\begin{equation}
\sigma^{\rm CMS}_{pp\to H}BR^{\rm CMS}_{H\to\gamma\gamma}=4.47\pm 1.86~{\rm fb},
\end{equation}
obtained by the combination, properly scaled, of 8 and 13 TeV data.
As for the ATLAS signal, the most significant excess in the di-photon invariant mass spectrum is observed around 747~GeV. The difference between the number of events predicted by the SM and the data is equal to
\begin{equation}
\Delta N=13.6\pm3.69,
\end{equation}
which, given the efficiency and acceptance values for the mentioned invariant mass and the integrated luminosity, corresponds to a cross section of:
\begin{equation}
\label{eq:exp_Sigmabr}
\sigma^{\rm ATLAS}_{pp\to H}BR^{\rm ATLAS}_{H\to\gamma\gamma}=\frac{\Delta N}{\epsilon\times {\cal L}}=\frac{13.6\pm3.69}{0.4\times 3.2}~{\rm fb}=10.6\pm2.9~{\rm fb}.
\end{equation}
We remark that the quoted values of the cross section are compatible with each other at the $1.8~\sigma$ level.\footnote{We point out that only the ATLAS experiment reported a first estimate of the resonance width of about 45~GeV. The CMS collaboration was not able to resolve the width even though 20~GeV bins were employed in the analysis. Given that the uncertainty on the ATLAS width estimate is unknown, and likely large, we chose to disregard the constraints brought by this estimate in our analysis.} Were these excesses generated by a Higgs boson with mass equal to 750~GeV, its signal strength compared to a SM Higgs with the same mass in the narrow-width approximation, defined for a given scalar $\varphi$ by
\begin{equation}\label{mugammaH}
\mu_{X,\varphi}=\frac{\sigma_{pp\rightarrow \varphi}{\textrm{Br}}_{\varphi\rightarrow X \bar{X}}}{\sigma_{pp\rightarrow \varphi}^{\textrm{SM}} \textrm{Br} ^{\textrm{SM}}_{\varphi\rightarrow X \bar{X}}},
\end{equation}
would be equal to
\begin{equation}\label{mugammaHAC}
\mu_{\gamma,H}^{\rm ATLAS}=(6.4\pm1.7)\times 10^4 ,\quad \mu_{\gamma,H}^{\rm CMS}=(2.7\pm1.1)\times 10^4\ .
\end{equation}
In the following we use this result as a basis for our computation and refer to a combined cross section
\begin{equation}\label{scomb}
\hat{\sigma}_{pp\to H}\widehat{BR}_{H\to\gamma\gamma} = 6.26 \pm 3.32~{\rm fb}~.
\end{equation}
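The numbers above can be reproduced with elementary error propagation (a sketch; Gaussian, uncorrelated uncertainties are assumed throughout, which is why the naive weighted error comes out smaller than the conservative uncertainty quoted in Eq.~(\ref{scomb})):

```python
import math

# ATLAS: cross section from the event excess, efficiency*acceptance and luminosity.
dN, sig_dN, eff, lum = 13.6, 3.69, 0.4, 3.2
atlas = dN / (eff * lum)            # 10.625 fb
atlas_err = sig_dN / (eff * lum)    # ~2.9 fb

cms, cms_err = 4.47, 1.86           # fb, combined 8 + 13 TeV

# Mutual compatibility of the two measurements.
pull = abs(atlas - cms) / math.hypot(atlas_err, cms_err)   # ~1.8 sigma

# Naive quadrature combination of the two local significances.
naive_signif = math.hypot(3.6, 2.6)   # ~4.4 sigma in this Gaussian approximation

# Inverse-variance weighted combination of the cross sections.
w_a, w_c = atlas_err ** -2, cms_err ** -2
combined = (w_a * atlas + w_c * cms) / (w_a + w_c)         # ~6.3 fb

print(atlas, pull, naive_signif, combined)
```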
\section{Effective singlet extensions}
\label{sec:singlet}
We start our analysis by extending the SM with a singlet spin-0 particle, $\phi$, which we assume for definiteness to have odd parity. Analogous results will, however, hold for the scalar case. Barring a portal coupling $\lambda_p H^2 \phi^2$, strongly constrained by the SM Higgs couplings measured at the LHC \cite{Cheung:2015dta},
and assuming that the contact between $\phi$ and the SM gauge bosons is provided only by heavy particles which transform non-trivially under the SM symmetry group, we can write the effective interaction Lagrangian \cite{Coleppa:2012eh}
\begin{align}
\label{Lsinglet}
\mathcal{L_I}
&=
\kappa_s\frac{\alpha_s}{4\pi v}\phi \sum_a G^{a}_{\mu\nu}\tilde{G}^{a\mu\nu}
+ \\ \nonumber
&+\kappa_w\frac{\alpha}{4\pi v}\phi\left[B_{\mu\nu}\tilde{B}^{\mu\nu}
+b \sum_c W^{c}_{\mu\nu}\tilde{W}^{c\mu\nu} \right],
\end{align}
where $\kappa_s$, $\kappa_w$, and $b$ are free parameters and the tilded tensors represent the dual field strength tensors. Notice that whereas reproducing the di-photon signal bounds a combination of the former quantities, the cross section times branching ratio into a di-gluon final state depends solely on $\kappa_s$\footnote{ Because of the hierarchy in the coupling constants we expect that $\Gamma_{\phi\to \gamma\gamma} \ll \Gamma_{\phi\to gg}\simeq \Gamma_{tot}$ holds on most of the available parameter space.}. The value of this parameter is consequently bounded by the measurements of $\sigma(pp\to \phi \to gg)$; however, our Lagrangian in Eq.~\eqref{Lsinglet} allows us to match the observed $\sigma(pp\to\phi\to\gamma\gamma)$ irrespective of the value assigned to $\kappa_s$ by simply adjusting $\kappa_w$ as required. The ratios between the branching ratios into the electroweak gauge bosons are instead regulated solely by $b$. In this case, by using the reference di-photon cross section value quoted in Eq.~\eqref{scomb}, we can infer the production cross section times branching ratio into the remaining electroweak bosons by simply multiplying the cross section by the relevant ratio of branching ratios. In Fig.~\ref{fig:spin0} we show the dependence of the electroweak gauge boson production on the parameter $b$ in the approximation of massless outgoing particles.
\begin{figure}[h]
\centering
\includegraphics[width=.9\linewidth]{spin0.pdf}
\caption{The production cross section times branching ratios of electroweak gauge bosons for an on-shell 750 GeV pseudoscalar or scalar mediator as a function of the parameter $b$ of Eq.~\eqref{Lsinglet}.}
\label{fig:spin0}
\end{figure}
As we can see, within this model it is clearly possible to suppress the signal in the diboson and $\gamma Z$ channels, as required by the ATLAS and CMS data, simply by requiring that $|b|<1$.
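The branching-ratio pattern behind Fig.~\ref{fig:spin0} can be reproduced at the level of effective couplings. The following is a minimal numerical sketch, not part of our computation, obtained by rotating $(B,W^3)$ to the $(\gamma,Z)$ basis in the massless final-state limit; the value of $\sin^2\theta_W$ and the overall multiplicity factors are assumptions of the sketch.

```python
import math

# Weak mixing angle (assumed value, sin^2(theta_W) ~ 0.231).
sw2 = 0.231
cw2 = 1.0 - sw2
sw, cw = math.sqrt(sw2), math.sqrt(cw2)

def ew_widths(b):
    """Relative partial widths of phi into EW boson pairs from the operator
    B*Bdual + b * W^c * Wdual^c, for massless final states.  Rotating
    (B, W3) to (photon, Z) gives the effective couplings below; the factors
    of 2 for gamma-Z (distinct particles) and WW (charged modes) are
    schematic normalizations."""
    g_aa = (cw2 + b * sw2) ** 2              # gamma gamma
    g_zz = (sw2 + b * cw2) ** 2              # Z Z
    g_az = 2.0 * (sw * cw * (1.0 - b)) ** 2  # gamma Z
    g_ww = 2.0 * b ** 2                      # W+ W-
    return g_aa, g_zz, g_az, g_ww

def ratios_to_diphoton(b):
    """Branching ratios into ZZ, gamma-Z, WW normalised to gamma-gamma."""
    g_aa, g_zz, g_az, g_ww = ew_widths(b)
    return g_zz / g_aa, g_az / g_aa, g_ww / g_aa

# gamma-Z vanishes at b = 1, WW vanishes at b = 0; small |b| suppresses
# both ZZ and WW relative to the di-photon channel.
```
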
Direct couplings of the pseudoscalar $\phi$ to SM fermions $f$ can be written in the following fashion
\begin{equation}
{\cal L}_{\phi f\bar{f}}=-i \kappa_f \frac{y_f}{\sqrt{2}}\phi\bar{f}\gamma^5 f
\end{equation}
where we take the Yukawa coupling $y_f$ equal to its SM value rescaled by a factor $\kappa_f$, in agreement with the Minimal Flavor Violation (MFV) framework \cite{D'Ambrosio:2002ex}. However, given the lack of signals for $\phi$ in the $\tau\bar{\tau}$ and dilepton channels, we argue that $\kappa_f\ll 1$ must hold for every SM fermion and consequently disregard these interactions in our analysis.
We conclude this section by remarking that a singlet scalar, coupling to SM vector bosons via an effective Lagrangian as in Eq.~\eqref{Lsinglet} in which the dual field strength tensors are replaced by the ordinary field strength tensors, would present the same cross sections times branching ratios as those shown in Fig.~\ref{fig:spin0}.
\section{Two Higgs Doublet models}
\label{sec:2HDM}
In the 2HDM \cite{Branco:2011iw} the physical heavy scalar $H$ can have couplings to fermions that are greatly enhanced, compared to their SM values, by the coupling coefficients:
\begin{center}\label{Hcc}
\begin{tabular}{lcccc}
& Type I & Type II & Type III & Type IV
\\
$a^H_d$
& $\sin \alpha/\sin \beta$
& $-\cos \alpha/\cos \beta$
& $\sin \alpha/\sin \beta$
& $-\cos \alpha/\cos \beta$
\\
$a^H_l$
& $\sin \alpha/\sin \beta$
& $-\cos \alpha/\cos \beta$
& $-\cos \alpha/\cos \beta$
& $\sin \alpha/\sin \beta$
\end{tabular}
\end{center}
with type-independent coupling coefficients for the upper EW component quarks and for the $W$ and $Z$ gauge bosons
\begin{equation}\label{uVcoups}
a^H_u= \sin \alpha/\sin \beta\ ,\quad a^H_V=\cos(\beta-\alpha)\ .
\end{equation}
The physical spectrum of the 2HDM also features a charged Higgs $H^\pm$ and a pseudoscalar $A$. For a heavy Higgs mass $m_H>600$~GeV the model is already in the decoupling regime \cite{Altmannshofer:2012ar}, in which $H$, $H^\pm$, and $A$ have similar masses
\begin{equation}\label{decrM}
m_A^2 = m_H^2 + O(\lambda_i v^2) = m_{H^\pm}^2 + O(\lambda_i v^2) ~,
\end{equation}
and the mixing angles are related by
\begin{equation}\label{decrab}
\alpha=\beta-\pi/2+ O(\lambda_i v^2/m_A^2) ~,
\end{equation}
with the quartic couplings $\lambda_i$ constrained by perturbativity to values of $O(1)$. There is therefore the concrete possibility that $A$ and $H$ are too close in mass to be resolved as separate resonances, at least at the present level of accuracy, in which case the observed excess should be ascribed to both physical states. Indeed this could also explain the large width of the signal observed at ATLAS \cite{ATLAS}. For these reasons, in the following we consider $A$ and $H$ to be degenerate in mass and add their separate contributions to the di-photon decay rate. The pseudoscalar couplings, compared to a SM Higgs, are rescaled by the following coupling coefficients
\begin{center}\label{Acc}
\begin{tabular}{lcccc}
& Type I & Type II & Type III & Type IV
\\
$a^A_u$
& $1/\tan \beta$
& $1/\tan \beta$
& $1/\tan \beta$
& $1/\tan \beta$
\\
$a^A_d$
& $-1/\tan \beta$
& $\tan \beta$
& $-1/\tan \beta$
& $\tan \beta$
\\
$a^A_l$
& $-1/\tan \beta$
& $\tan \beta$
& $\tan \beta$
& $-1/\tan \beta$
\end{tabular}
\end{center}
Given the size of the $\mu_{\gamma,H}$ signal strength, we expect the signal to be generated at one loop by a charged particle with a large coupling coefficient. The $H^\pm$ coupling to $H$, unlike the corresponding fermion couplings, lacks an enhancement or suppression factor; furthermore its contribution to the di-photon decay amplitude is roughly 1/4 of the fermion ones. The contribution of $H^\pm$ to the di-photon effective coupling is therefore marginal and will be neglected in the present analysis. Moreover, because of Eqs.~(\ref{uVcoups},\ref{decrab}), the $H$ couplings to $WW$ and $ZZ$ are very small in the decoupling regime, though for completeness we still include them in our computation. In the same regime the contributions of the $H\rightarrow h h$ and $A\rightarrow h Z$ channels become negligibly small \cite{Djouadi:2005gj}, and for this reason we do not include them in the present analysis.
We determine the values of the mixing angles $\alpha$ and $\beta$ by performing a fit to the signal strengths, defined in Eq.~\eqref{mugammaH}, by minimizing
\begin{equation}
\chi^2=\sum_i \left(\frac{\mu_i^{exp}-\mu_i^{th}}{\sigma_i^{exp}} \right)^2,
\end{equation}
where $\mu_i^{exp}$ and $\sigma_i^{exp}$ are the experimental values of $\mu_{\gamma,H}$, Eq.~\eqref{mugammaHAC}, and of $\mu_{\gamma,h}$, $\mu_{Z,h}$, $\mu_{W,h}$, $\mu_{b,h}$, and $\mu_{\tau,h}$, with their respective uncertainties, as measured at ATLAS and CMS \cite{ATLAS2015044,CMS:2015kwa}, while $\mu_i^{th}$ are the 2HDM predictions obtained by rescaling the production cross sections and branching ratios of a 750~GeV SM Higgs, reported in \cite{Dittmaier:2011ti,Dittmaier:2012vm,Heinemeyer:2013tqa}, each with its corresponding coupling coefficient.\footnote{All the formulas necessary to perform the fit can be found for example in \cite{Alanne:2013dra}.}
The value of the minimum $\chi^2$ per degree of freedom (d.o.f.), as well as the corresponding $p$-value and $H$ coupling coefficients for each 2HDM are
\begin{center}
\begin{tabular}{lcccc}
& Type I & Type II & Type III & Type IV
\\
$\chi^2/d.o.f.$
& 0.95
& 0.78
& 0.95
& 0.83
\\
$p$-value
& 49\%
& 64\%
& 48\%
& 60\%
\\
$a_u^H$
& -16
& -16
& -16
& -16
\\
$a_d^H$
& -16
& 0.07
& -16
& 0.07
\\
$a_l^H$
& -16
& 0.07
& 0.07
& -16
\end{tabular}\label{chi2p}
\end{center}
with the same mixing angles
\begin{equation}\label{optmix}
\alpha=-1.51\ ,\quad\beta=0.063
\end{equation}
for every 2HDM type. For such mixing angles the $A$ coupling coefficients to fermions are numerically identical to the $H$ ones. The optimal mixing angles in Eq.~\eqref{optmix} imply a negligible coupling to vector bosons and a large enhancement of the $H$ coupling to upper EW component quarks, Eqs.~\eqref{uVcoups}, as compared to that of the 125~GeV Higgs $h$. The Type II 2HDM achieves the best fit. For comparison the SM results are
\begin{equation}
\chi^2/d.o.f.=2.33\ ,\quad p=1\%~.
\end{equation}
On the basis of these goodness-of-fit results alone, the 2HDMs would represent a valid explanation of the 750~GeV resonance observed at the LHC, while the SM would be ruled out at the 95\% CL. However, we still have to impose the stringent constraints on the partial decay widths of the scalar resonance to SM fermions discussed below.
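As a cross-check of the table above, the coupling coefficients can be evaluated directly at the optimal mixing angles of Eq.~\eqref{optmix}; the following minimal Python sketch (not part of our fit) reproduces the quoted magnitudes up to rounding and sign conventions.

```python
import math

# Optimal mixing angles from the fit, Eq. (optmix).
alpha, beta = -1.51, 0.063

sa, ca = math.sin(alpha), math.cos(alpha)
sb, cb = math.sin(beta), math.cos(beta)

# H coupling coefficients, type-independent for up-type quarks and V:
a_u = sa / sb                  # large enhancement, magnitude ~ 16
a_V = math.cos(beta - alpha)   # ~ 0: negligible coupling to WW, ZZ

# Type-dependent down-quark / lepton entries of the table:
a_enh = sa / sb                # the "sin a / sin b" entries (enhanced)
a_sup = -ca / cb               # the "-cos a / cos b" entries (suppressed)
```
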
The couplings to lower component quarks and leptons are Type dependent, and for the optimal mixing angles would be suppressed in the Type II 2HDM: this model is therefore consistent with the current absence of a signal in the $WW$, $ZZ$, $\tau\bar{\tau}$, $b\bar{b}$ decay channels of $H$ at 8~TeV. On the other hand the constraint on the $t\bar{t}$ channel \cite{Aad:2015fna} needs to be imposed explicitly, given the large coupling of the $750$~GeV scalar and pseudoscalar to $t$. In the region selected by Eq.~\eqref{optmix} we can neglect, in first approximation, all the decay channels to SM particles but $t$ and gluons, and express the 8~TeV constraint \cite{Aad:2015fna} on the $pp\rightarrow H \rightarrow t\bar{t}$ cross section in terms of the SM quantities as
\begin{eqnarray}\label{ttXS}
680~{\rm fb}&>&\sigma_{pp\rightarrow \varphi\rightarrow t \bar{t}}\sim \sigma^{\rm SM}_{gg{\rm F}}{a_t^2 \textrm{Br}}^{\rm SM}_{\varphi\rightarrow t \bar{t}}\left[ \frac{1}{{\textrm{Br}}^{\rm SM}_{\varphi\rightarrow g g}+{\textrm{Br}}^{\rm SM}_{\varphi\rightarrow t \bar{t}}}\right.\nonumber\\
&+&\left.\frac{\kappa_A}{\kappa_A{\textrm{Br}}^{\rm SM}_{\varphi\rightarrow g g}+{\textrm{Br}}^{\rm SM}_{\varphi\rightarrow t \bar{t}}} \right]\,,
\end{eqnarray}
where $gg{\rm F}$ stands for ``gluon-gluon fusion'', and $\kappa_A\sim1.41$ is the pseudoscalar decay rate to two gluons normalised to that of $H$, both calculated for $a_t=1$, with
\begin{equation}
a_t \equiv a_u^H\sim a_u^A=1/\tan\beta\, .
\end{equation}
By using the values provided in \cite{Heinemeyer:2013tqa} for the SM quantities appearing in Eq.~\eqref{ttXS} we obtain the constraint
\begin{equation}\label{atc}
| a_t|<1.34\,.
\end{equation}
In Fig.~\ref{fig:tbcbaTI&II&III&IV} we show the 68\%, 95\%, and 99\% CL contour plots of $1/\tan\beta\sim a_t$ vs $\cos(\beta-\alpha)=a^H_V$ for all the 2HDMs, with the shaded region excluded by Eq.~\eqref{atc}.
\begin{figure*}[htb]
\begin{center}
\includegraphics[width=0.40\textwidth]{TIcbatb}\hspace{0.5cm}
\includegraphics[width=0.40\textwidth]{TIIcbatb}
\includegraphics[width=0.40\textwidth]{TIIIcbatb}\hspace{0.5cm}
\includegraphics[width=0.40\textwidth]{TIVcbatb}
\caption{68\%, 95\%, and 99\% CL contour plots in $1/\tan\beta$ and $\cos(\beta-\alpha)$ plane for all 2HDMs, with the shaded region excluded by the $t\bar{t}$ experimental constraint \cite{Aad:2015fna}. In each plot the red star represents the point of minimum for $\chi^2$.}
\label{fig:tbcbaTI&II&III&IV}
\end{center}
\end{figure*}
Evidently, all the 2HDMs are in strong tension with the $t\bar{t}$ experimental constraint \cite{Aad:2015fna}. Nevertheless, we point out that such a bound can easily be circumvented by adding new charged and colored particles which mediate the loop interactions of $H$ and $A$ necessary to reproduce the diphoton excess. In the next section we examine a specific case where these new particles are scalars, the stops of the MSSM, while in Section~\ref{sec:2HDMvf} we study the 2HDM extended by new, vector-like quarks. To conclude, we remark that the perturbativity of the model also results in a bound that disfavours the 2HDMs because of the implied $t\bar t$ coupling. We find, however, that this bound is less severe than the one implied by the observation of the $t \bar t$ channel at the LHC and consequently opt to neglect it.
\section{MSSM}
\label{sec:MSSM}
The low energy limit of the MSSM corresponds to the Type II 2HDM. The most relevant correction to the Higgs di-photon decay comes from the stop contribution, which can be expressed as a rescaling of the top coupling coefficients to both the light and heavy Higgses, respectively $h$ and $H$:
\begin{equation}\label{Rt}
a_t^{\prime h/H}=R_t a_t^{h/H},\ R_{t} = 1+\frac{m_t^2}{4}\left[\frac{1}{m_{\tilde{t}_1}^2}+
\frac{1}{m_{\tilde{t}_2}^2}-\frac{X_t^2}{m^2_{\tilde{t}_1} m^2_{\tilde{t}_2}}\right],
\end{equation}
with the stop mixing parameter
\begin{equation}
X_t=A_t-\mu/\tan\beta .
\end{equation}
Because of the tree level constraint on the light Higgs mass
\begin{equation}
m_h<m_Z |\cos(2\beta)|,
\end{equation}
the value of $\tan\beta$ is constrained in the MSSM to be roughly larger than 5, and for such values the stop mixing should be close to maximal
\begin{equation}
\frac{X_t^2}{m^2_{\tilde{t}_1} m^2_{\tilde{t}_2}}\sim 6~.
\end{equation}
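The size of the stop correction in Eq.~\eqref{Rt} is easily evaluated numerically; the following minimal sketch uses illustrative stop masses (an assumption, not a fit result) and shows that near-maximal mixing drives $R_t$ below unity, as in the left panel of Fig.~\ref{3}.

```python
def R_t(m_t, m_st1, m_st2, Xt2):
    """Stop rescaling of the top coupling coefficient, Eq. (Rt):
    R_t = 1 + (m_t^2/4) [1/m1^2 + 1/m2^2 - X_t^2/(m1^2 m2^2)]."""
    return 1.0 + (m_t ** 2 / 4.0) * (1.0 / m_st1 ** 2 + 1.0 / m_st2 ** 2
                                     - Xt2 / (m_st1 ** 2 * m_st2 ** 2))

m_top = 173.0        # GeV
m1 = m2 = 500.0      # illustrative degenerate stop masses, GeV (assumed)
Xt2_max = 6.0 * m1 * m2  # near-maximal mixing, X_t^2/(m1 m2) ~ 6

r_no_mix = R_t(m_top, m1, m2, 0.0)      # ~ 1.06: enhancement
r_max_mix = R_t(m_top, m1, m2, Xt2_max) # ~ 0.88: suppression
```
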
In Fig.~\ref{3} we show the 68\%, 95\%, and 99\% CL contours in the $1/\tan\beta$ and $\cos(\beta-\alpha)$ plane for the MSSM with $R_t=0.9~(1.1)$, left (right) panel. The shaded area is excluded by the constraint in Eq.~\eqref{atc}. In each plot the red star represents the point of minimum for $\chi^2$, which is characterised by a value of $\tan\beta$ too small to generate a Higgs mass of 125~GeV. While the local minima close to $\tan\beta=4$ satisfy the $t\bar{t}$ experimental constraint \cite{Aad:2015fna}, they produce a $p$-value equal to 1\% and, therefore, are strongly disfavoured. This is because a large $\tan\beta$ suppresses the top coupling to $H$ and $A$, while enhancing the coupling to bottom and $\tau$, which have too small Yukawa couplings to produce the signal strength enhancement required by $\mu_{\gamma,H}$, Eq.~\eqref{mugammaHAC}. Possible corrections from the $H$ coupling to the charginos, generated by the wino, should be small given the small coupling of $H$ to $W$, proportional to $\cos(\beta- \alpha)$, as shown in Fig.~\ref{3}.
\begin{figure*}[htb]
\begin{center}
\includegraphics[width=0.40\textwidth]{R09cbatb}\hspace{0.5cm}
\includegraphics[width=0.40\textwidth]{R11cbatb}
\caption{68\%, 95\%, and 99\% CL contour plots in $1/\tan\beta$ and $\cos(\beta-\alpha)$ plane for MSSM with $R_t=0.9 (1.1)$, left (right) panel, with the shaded region excluded by the $t\bar{t}$ experimental constraint \cite{Aad:2015fna}. In each plot the red star represents the point of minimum for $\chi^2$.}
\label{3}
\end{center}
\end{figure*}
\section{2HDM extended by vector-like quarks}
\label{sec:2HDMvf}
In this section we consider the type I 2HDM extended by two vector-like quarks $Q$ and $U'$. The charges and $\mathbb{Z}_2$ parity of the new particles, as well as those of the scalars, are given in Table~\ref{4thgen}, while the $\mathbb{Z}_2$ parity is taken positive (negative) for left- (right-)handed SM fermions. The choice of the $\mathbb{Z}_2$ parity assignments ensures MFV \cite{D'Ambrosio:2002ex}. We remark that, on general grounds, models presenting extra scalars which couple to heavy vector-like fermions find a justification as low energy limits of string-inspired supersymmetric models \cite{Cvetic:2015vit,Dev:2015vjd,King:2016wep,Karozas:2016hcp}.
\begin{table}[htp]
\caption{Scalar and vector-like fermion content of the model.}
\begin{center}
\begin{tabular}{|ccccc|}
\hline
Field & $SU(3)_c $ & $SU(2)_L$ & $U(1)_Y $ & $\mathbb{Z}_2$\\
\hline
$H_1$ & 1 & $\begin{pmatrix} \left(v_1+h_1+i \phi_1^0\right)/\sqrt{2} \\ \phi^- \end{pmatrix}$ & -1/2 & $+$ \\
$H_2$ & 1 & $\begin{pmatrix} \phi^+ \\ \left(v_2+h_2+i \phi_2^0\right)/\sqrt{2} \end{pmatrix}$ & 1/2 & $-$ \\
$Q$ &3 & $\begin{pmatrix} U \\ B \end{pmatrix}$ & 1/6 & $+$\\
$U'$ & 3& $U'$ & $2/3$ & $+$\\
\hline
\end{tabular}
\end{center}
\label{4thgen}
\end{table}%
The type I 2HDM Lagrangian, in which the SM fermions couple only to $H_2$, is then augmented by the terms
\begin{align}\label{L}
\mathcal{L} & \supset
\left[
y^L_Q \bar{Q}_L \tilde{H}_1 U'_R +y^R_Q \bar{Q}_R \tilde{H}_1 U'_L+
\text{H.c.}\right]\nonumber\\
& +
m_Q \bar{Q} Q
+
m_{U'} \bar{U}' U' \,,
\end{align}
plus additional mixing couplings with the SM quarks. We do not write these terms explicitly; they simply allow the vector-like quarks to decay to SM particles and thereby avoid detection. We give in Appendix~\ref{msmx} the masses and relevant couplings of the mass eigenstates $T,\,T'$, and $B$. The experimental constraints from the processes $T,T'\rightarrow b W^+$ and $B\rightarrow b h$ require these masses to be larger than 705 and 846~GeV \cite{CMS:1900uua,CMS:2014afa}, respectively. To satisfy these constraints we take the masses to be
\begin{equation}
m_T=800~{\rm GeV}\,,\ m_{T'}=900~{\rm GeV}\,,\ m_{B}=850~{\rm GeV}\,,
\end{equation}
and scan the full parameter space for data points producing a diphoton excess $\sigma_{pp\rightarrow H,A\rightarrow \gamma\gamma}=6$~fb. To simplify the search we set $\tan\beta=6$, so that the SM fermion decay channels are highly suppressed. With this methodology we find that the data point featuring the minimum average squared Yukawa coupling is
\begin{equation}\label{viap}
m_{U'}=755~{\rm GeV}\,,\, m_{Q}=850~{\rm GeV}\,,\, y_Q^L=10.3\,,\, y_Q^R=9.22\,.
\end{equation}
Such a point is phenomenologically viable, although the large Yukawa couplings in Eq.~\eqref{viap} are expected to drive the model into the non-perturbative regime at a relatively low energy of $O({\rm TeV})$, close to the resonance mass \cite{Bertuzzo:2016fmv}.
\section{Generic technicolour}
\label{sec:TC}
Finally, we would like to comment on the possibility that the Higgs boson and the hinted new 750~GeV resonance are composite objects.
This scenario may be realised, for example, in a generic technicolour model. The 125~GeV Higgs in this case would be associated with a technidilaton, the composite pseudo-Nambu--Goldstone boson of the approximate scale symmetry of the strongly coupled theory (see for example \cite{Yamawaki:1985zg,Appelquist:1998xf,Sannino:2004qp}, or \cite{Matsuzaki:2012xx,Belyaev:2013ida} for a study of the viability of these models at LHC, and \cite{Fukano:2015hga,Franzosi:2015zra} for an interpretation within the same frameworks of the diboson excess at LHC Run I). Other spin-zero resonances in this case would not be protected by such approximate symmetry, and their masses could be estimated by scaling up the corresponding QCD composite states via a straightforward dimensional analysis \cite{Manohar:1983md}. Assuming the 750~GeV resonance to be a CP-even state, a naive estimate of its mass is given by twice the mass of its techniquark constituents \cite{Hill:2002ap}
\begin{equation}
m_H\sim 2\frac{m_P}{3} \frac{\sqrt{3}~v}{f_\pi \sqrt{N_D N_{\rm TC}}}=750~{\rm GeV}\ \Rightarrow \ N_D\sim \frac{12}{N_{\rm TC}},
\end{equation}
with $P$ being the proton, $f_\pi=100$~MeV the QCD pion decay constant, $N_D$ the number of electroweak (EW) doublets, and $N_{\rm TC}$ the number of techni-colors. The equation above is satisfied for example by $N_{\rm TC}=4$ and three EW doublets. Another possibility is that $H$ is actually a composite pseudoscalar, the analogue of the QCD $\eta^\prime$ meson: in this case a naive estimate based on the known QCD properties yields \cite{Hill:2002ap}
\begin{equation}
m_H\sim m_{\eta^\prime} \frac{3 \sqrt{18}~v}{2 f_\pi N_{\rm TC} \sqrt{N_D N_{\rm TC}}}=750~{\rm GeV}\ \Rightarrow \ N_D\sim \frac{400}{N^3_{\rm TC}}~.
\end{equation}
In this case, again for $N_{\rm TC}=4$, the number of EW doublets needed to explain the observed mass would be six. It is also worth noticing that these resonances, given their strong interactions with other composite states, are generally expected to have a large width, which seems to be the case for the 750~GeV resonance observed at LHC.
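Both naive scalings can be checked numerically; the following minimal sketch uses the standard proton and $\eta^\prime$ masses together with the inputs quoted in the text ($f_\pi=100$~MeV, $v=246$~GeV). The specific input values are assumptions of the sketch, not additional results.

```python
import math

v = 246.0      # electroweak scale, GeV
f_pi = 0.100   # QCD pion decay constant, GeV (value used in the text)
m_P = 0.938    # proton mass, GeV
m_etap = 0.958 # eta' mass, GeV

def m_scalar(N_D, N_TC):
    """CP-even estimate: m_H ~ (2 m_P/3) sqrt(3) v / (f_pi sqrt(N_D N_TC))."""
    return (2.0 * m_P / 3.0) * math.sqrt(3.0) * v / (f_pi * math.sqrt(N_D * N_TC))

def m_pseudo(N_D, N_TC):
    """Pseudoscalar estimate:
    m_H ~ m_eta' * 3 sqrt(18) v / (2 f_pi N_TC sqrt(N_D N_TC))."""
    return m_etap * 3.0 * math.sqrt(18.0) * v / (2.0 * f_pi * N_TC * math.sqrt(N_D * N_TC))

# Solving m_H = 750 GeV for N_D reproduces N_D ~ 12/N_TC and N_D ~ 400/N_TC^3:
def N_D_scalar(N_TC):
    return ((2.0 * m_P / 3.0) * math.sqrt(3.0) * v / (f_pi * 750.0)) ** 2 / N_TC

def N_D_pseudo(N_TC):
    return (m_etap * 3.0 * math.sqrt(18.0) * v / (2.0 * f_pi * 750.0)) ** 2 / N_TC ** 3
```

For $N_{\rm TC}=4$ the two estimates give $N_D\approx 3$ and $N_D\approx 6$, as quoted above.
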
In this scenario additional composite resonances, for example spin-one bosons $\rho_{\rm TC}$ with masses \cite{Hill:2002ap}
\begin{equation}
m_{\rho_{\rm TC}}\sim m_\rho \frac{v \sqrt{3}}{\sqrt{N_D N_{\rm TC}}}~,
\end{equation}
of the order of several TeV, could be within the reach of Run II LHC searches.
\section{Conclusions}
\label{sec:conclusion}
We have argued in this work that the 750~GeV di-photon excesses seen by the ATLAS and CMS collaborations may follow from the decays of a new resonance
with a global statistical significance exceeding $3~\sigma$.
We determined the signal cross sections times branching ratio in both experiments and found them to be consistent with each other at the $1.8~\sigma$ level.
Using this result, we have shown that the di-photon excess can be explained consistently with the negative results for all other final states in the
singlet scalar extensions of the SM and in 2HDM extended by two vector-like quarks. At the same time, the simplest 2HDMs and the MSSM, a UV completion of type II 2HDM, seem incompatible with the result. Consequently,
in order to embed the observed phenomenology into a supersymmetric framework, non-standard extensions of the MSSM must be considered.
We finally commented on the possibility that the new hypothetical particle might be a spin zero resonance of some generic composite model and argued that in this scenario additional spin-one composite resonances would be within reach of Run II at LHC. While the LHC 750~GeV di-photon excess may still turn out to be a statistical fluctuation, we conclude that
it is also a good and consistent candidate for the first signal of new physics beyond the SM.
\begin{acknowledgments}
The authors thank Mario Kadastik, Kristjan Kannike, Andrew Fowlie, Christian Spethmann, Christian Veelken and Hardi Veerm\"ae for useful discussions.
This work was supported by the grants IUT23-6, PUTJD110, CERN+ and by EU through the ERDF CoE program.
\end{acknowledgments}
\onecolumngrid
\section{Introduction}
The $h$-cobordism theorem plays a crucial role in modern geometric
topology, providing the essential link between homotopy and geometry.
Indeed, comparing manifolds of the same homotopy type, one can often use
surgery methods to produce $h$-cobordisms between them, and then hope to
be able to show that the Whitehead torsion $\tau ({W}^{n +1}, {M}^n)$ in
$\tmop{Wh} (\pi_1 ({M}^n))$ is trivial. By the $s$-cobordism theorem, the two
manifolds will then be isomorphic (homeomorphic or diffeomorphic,
according to which category we work in).
The last step, however, is in general very difficult, and what makes the
problem even more complicated, but at the same time more interesting, is
that there exist $h$-cobordisms with non-zero torsion where the ends
are still isomorphic (cf. \cite{Hs1}, \cite{Hs2}, \cite{L},
\cite{JK}). Such $h$-cobordisms we call {\em inertial}. The central
problem is then to determine the subset of elements of the Whitehead
group $\tmop{Wh} (\pi_1 ({M}^n))$ which can be realized as Whitehead torsions
of inertial $h$-cobordisms. This is in general very difficult, and only
partial results in this direction are known (\cite{Hs1}, \cite{Hs2},
\cite{L}).
The purpose of this note is to shed some light on this important
problem.
\section{Inertial $h$-cobordisms}
In this section we recall basic notions and constructions concerning
various types of $h$-cobordisms. We will follow the notation and
terminology of \cite{JK}. For convenience we choose to formulate
everything in the category of topological manifolds, but for most of
what we are going to say, this does not make much difference. See
Section \ref{TopInv} for more on the relations between the different
categories.\smallskip
An $h$-cobordism $({W}; {M}, {M}')$ is a manifold ${W}$ with two
boundary components ${M}$ and ${M}'$, each of which is a deformation
retract of ${W}$.
We will think of this as an $h$-cobordism from ${M}$ to ${M}'$, thus
distinguishing it from the dual $h$-cobordism $({W}; {M}', {M})$. Since
the pair $({W}; {M})$ determines ${M}'$, we will often use the notation
$({W}; {M})$ for $({W}; {M}, {M}')$. We denote by $\ensuremath{\mathcal H}(M)$ the set of
homeomorphism classes relative ${M}$ of $h$-cobordisms from ${M}$.
If $X$ is a path connected space, we denote by $\tmop{Wh} (X)$ the Whitehead
group $\tmop{Wh} (\pi_1 (X))$. Note that this is independent of choice of base
point of $X$, up to unique isomorphism.
The $s$-cobordism theorem (cf. \cite{Ma}, \cite{Mi}) says that if ${M}$
is compact connected (closed) and of dimension at least 5 there is a
one-to-one correspondence between $\ensuremath{\mathcal H}({M})$ and $\tmop{Wh} ({M})$
associating to the $h$-cobordism $({W}; {M}, {M}')$ its Whitehead
torsion $\tau ({W}; {M}) \in \tmop{Wh} ({M})$. Given an element $({W}; {M},
{M}') \in \ensuremath{\mathcal H} ({M})$ the restriction of a retraction $r:W\to M$ to
$M'$ is a homotopy equivalence $h:M'\to M$, uniquely determined up to
homotopy. By a slight abuse of language, any such $h$ will be
referred to as ``the natural homotopy equivalence''. It induces a
unique isomorphism
\[ h_{\ast} : \tmop{Wh} ({M}') \rightarrow \tmop{Wh} ({M}).
\]
Recall also that there is an involution $\tau \rightarrow \bar{\tau}$ on
$\tmop{Wh} ({M})$ induced by transposition of matrices and inversion of group
elements (cf. \cite{Mi}, \cite{O}). If $M$ is non-orientable, the
involution is also twisted by the orientation character $\omega:
\pi_1(M)\to \{\pm 1\}$, i.\,e. inversion of group elements is replaced
by $g\mapsto \omega(g)g^{-1}$.
Let $({W}; {M}, {M}')$ and $({W};{M}', {M})$ be dual $h$-cobordisms
with ${M},\!{M}'$ of dimension $n$. Then $\tau(W;M)$ and $\tau(W;M')$
are related by the basic duality formula (cf. \cite{Mi}, \cite{JK})
\[ h_{\ast} (\tau ({W}; {M}')) = (- 1)^n \overline{\tau({W}; {M})}. \]
We refer to Section \ref{comments} for further discussion of Whitehead
torsion.\smallskip
\begin{defn} The {\em{inertial set}} of a manifold ${M}$ is defined as
\[ I ({M}) = \{ ({W}; {M}, {M}') \in\ensuremath{\mathcal H}({M}) |{M}\cong{M}' \}, \] or
the corresponding subset of $\tmop{Wh} ({M})$.
\end{defn}
There are many ways to construct inertial $h$-cobordisms. Here we recall
three of these.
{\tmstrong{A}}. Let $G$ be an arbitrary (finitely presented) group. Then
there is a finite 2-dimensional simplicial complex $K$ with $\pi_1 (K)
\cong G$. Let $\tau_0 \in \tmop{Wh} (G)$ be an element with the property that
$\tau_0 = \tau (f)$ for some homotopy self-equivalence $f : K
\rightarrow K$. Denote by $N (K)$ a regular neighborhood of $K$ in a
high-dimensional Euclidean space $\mathbbm{R}^n$($n \geqslant 5$ will
do). Approximate $f : K \rightarrow K \subseteq N (K)$ by an embedding
whose image has neighborhood $N'(K)\subset \mathrm{int}\,N(K)$. By
uniqueness of neighborhoods, $N'(K)\approx N(K)$. Then ${W}= N (K)
-\mathrm{int}\, N'(K)$ is an inertial $h$-cobordism whose torsion $
\tau ({W}; \partial N' (K))$ can be identified with $\tau_0$ via the
$\pi_1$-isomorphisms $\partial N'(K)\subset N(K)\supset K$.
(cf. \cite{Hs1}, \cite{Hs2}).
\
{\tmstrong{B}}. Let $f : {M} \rightarrow {M}$ be a homotopy
self-equivalence of a closed manifold and let $\tau_0 = \tau (f) \in \tmop{Wh}
({M})$. Approximate $f : {M} \rightarrow {M} \subset M\times D^n$ by an
embedding (cf. \cite{Wa1}), where $D^n$ is the $n$-dimensional disk, $n$
big. In the same way as in A, this will lead to an inertial
$h$-cobordism between two copies of ${M} \times S^{n - 1}$, with
torsion $\tau_0$ (cf. \cite{JK}).
\
{\tmstrong{C}}. Let $({W}; {M}, {M}')$ be an $h$-cobordism with torsion
$\tau_0 = \tau ({W}; {M})$. Form the {\em double} (cf. \cite{Mi},
\cite{JK}):
\[ (\widetilde{{W}} ; {M}, {M}) := \left( {W} \underset{{M}'}{\cup}
{W}; {M}, {M} \right)
\] Then $\tau (\widetilde{{W}} ; {M}) = \tau_0 + (- 1)^n \bar{\tau}_0$
and this again often leads to a nontrivial inertial $h$-cobordism; for
example if $n$ is odd and the involution $- : \tmop{Wh} ({M}) \rightarrow \tmop{Wh}
({M})$ is nontrivial.\par
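The torsion formula for the double quoted above can be derived in two lines; a short sketch, using the standard sum formula for the torsion of composite $h$-cobordisms (cf. \cite{Mi}) together with the duality formula recalled earlier:

```latex
% Sum formula: if (W;M,M') and (W';M',M'') are h-cobordisms, then
%   \tau(W \cup_{M'} W'; M) = \tau(W;M) + g_*(\tau(W';M')),
% with g : M' -> M the natural homotopy equivalence.  For the double,
% W' is the dual h-cobordism (W;M',M), so
\begin{align*}
\tau(\widetilde{W};M)
&= \tau(W;M) + h_{\ast}\big(\tau(W;M')\big)\\
&= \tau_0 + (-1)^n\,\bar{\tau}_0,
\end{align*}
% where the last step uses the duality formula
%   h_{\ast}(\tau(W;M')) = (-1)^n\,\overline{\tau(W;M)}.
```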
It will be convenient to introduce the notation $\db(M)$ for the
subgroup $\{\tau+ (- 1)^n \bar\tau\mid \tau\in \tmop{Wh}(M)\}$ of $\tmop{Wh}(M)$.
Note that $\db(M)$ depends only on $\pi_1(M)$, orientation and the
dimension of $M$.
The construction in {\tmstrong{C}} leads to $h$-cobordisms that are
particularly simple and have special properties: not only do they come
with canonical identifications of the two ends, but they are also {\em
strongly inertial}.
\begin{defn} {\em (Cf. \cite{JK})}. The $h$-cobordism $({W}; {M}, {M'})$
is called {\em strongly inertial}, if the natural homotopy equivalence
$h : {M}' \rightarrow {M}$ is homotopic to a homeomorphism.
\end{defn}
The set of (Whitehead torsions of) strongly inertial $h$-cobordism will
be denoted by ${SI}({M})$. It was observed in \cite{JK} that
$SI(M)\subseteq \tmop{Wh}(M)$ is a subgroup.
Obviously ${SI} ({M}) \subseteq I ({M})$ and there are many examples of
inertial but not strongly inertial $h$-cobordism, for example
constructed using the methods in {\tmstrong{A}} or {\tmstrong{B}}. In
fact, for any manifold $M$ of dimension $n \geqslant 5$, we have
$$ I(M\#_k(S^p\times S^{n-p})) = \tmop{Wh} (M\#_k(S^p\times S^{n-p})), $$
for $2\leq p\leq n-2$ and $k$ big enough \cite{HL}. (If $\pi_1(M)$ is
finite, $k=2$ suffices.)
However, for ${SI}(M)$ there are restrictions. For example, since the
natural homotopy equivalence $h$ is homotopic to a homeomorphism, its
Whitehead torsion $\tau(h)$ must vanish. But we have (equation
(\ref{eq:tau-h}) in Section \ref{comments})
$$
\tau(h)= -\tau(W;M)+(-1)^n\overline{\tau(W;M)},
$$
so $\tau(W;M)$ must satisfy the formula
$\tau(W;M)=(-1)^n\overline{\tau(W;M)}$, i.\,e.
\begin{equation} {SI}(M) \subseteq A(M):=\{\tau\in \tmop{Wh}(M) \mid
\tau=(-1)^n\bar\tau\}.\label{eq:A(M)}
\end{equation}
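In particular, torsions of doubles automatically satisfy this constraint; a one-line check that $D(M)\subseteq A(M)$:

```latex
% For \tau' = \tau + (-1)^n \bar{\tau} \in D(M) we have
\[
(-1)^n\,\overline{\tau'}
= (-1)^n\big(\bar{\tau} + (-1)^n \tau\big)
= \tau + (-1)^n\,\bar{\tau}
= \tau',
\]
% so \tau' = (-1)^n\overline{\tau'}, i.e. D(M) \subseteq A(M).
```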
In special cases we have even stronger restrictions, as in the following
result (Theorem 1.3 in \cite{JK}):
\begin{theorem}\label{abelian} Suppose ${M}$ is a closed oriented
manifold of odd dimension with finite abelian fundamental group. Then
every strongly inertial $h$-cobordism from ${M}$ is trivial.
\end{theorem}
This result motivated us to look more closely at strongly inertial
$h$-cobordisms with finite fundamental groups. Our main interest is the
following:\smallskip
\noindent{\tmstrong{Problem:}}{\tmem{ Let ${M}^n$ be a closed
$n$-dimensional (oriented) manifold with $n \geqslant 5$ and with finite
fundamental group $\pi_1 ({M}^n)$. Determine the subset ${SI} ({M}^n)$ of
$\tmop{Wh} ({M}^n)$. In particular, is ${SI} ({M}^n) =\db(M)$?}\medskip
Note that if $G$ is a finite abelian group, then the involution $- :
\tmop{Wh} (G)\rightarrow \tmop{Wh} (G)$ is trivial (cf. \cite{O}), and consequently
$\db({M}^n) = \{ 0 \}$ for $n$ odd. Hence, in this case ${SI}(M)=\db(M)$
by Theorem \ref{abelian}.
Our first new observation is that ${SI} ({M}^n) = \{ 0 \}$ also for odd
dimensional manifolds ${M}^n$ with $\pi_1 ({M}^n)$ finite periodic,
namely:
\begin{theorem}\label{periodic} Let $({W}^{n + 1} ; {M}^n, {N}^n)$ be a
strongly inertial $h$-cobordism with $M$ orientable, $n$ odd and $\pi =
\pi_1 ({M}^n)$ finite periodic. Then ${W}^{n + 1} = M^n \times I$ for
$n \geqslant 5$. Hence ${SI} ({M}^n) = \{ 0 \}$.
\end{theorem}
The class of finite periodic fundamental groups has attracted a lot of
attention in topology of manifolds and transformation groups
(cf. \cite{MTW}, \cite{KS2}). The most extensive classification results
for manifolds with finite fundamental groups involve this class of
groups.
Let ${M}^n$ be a closed, oriented manifold with $\pi_1 ({M}^n)$ finite
abelian. If $n$ is odd, then, as we observed, ${SI} ({M}^n) = \{ 0
\}$. In the even dimensional case the situation is quite different.
\begin{theorem}\label{DnotSI} For every $n\geqslant 3$ there are
oriented manifolds $M^{2n}$ with $\pi_1 ({M}^{2 n})$ finite cyclic and
with $\{ 0 \} \neq \db(M) \neq {SI} ({M}^{2 n})$.
\end{theorem}
The following result shows that orientability is essential in Theorem
\ref{abelian}:
\begin{theorem}\label{nonor} In every odd dimension $n\geqslant 5$ there
are closed nonorientable manifolds with finite, cyclic fundamental
groups and strongly inertial $h$-cobordisms from $M$ with nontrivial
Whitehead torsion.
\end{theorem}
Note that in this case $\db(M)$ is trivial. \smallskip
\begin{rems} {\rm ($i$). There are obvious inclusions $\{0\}\subset
D(M)\subset SI(M)\subset I(M) \subset \tmop{Wh}(M)$. In addition it is proved
in \cite{HJ} that $A(M)\subset I(M)$, such that combined with
(\ref{eq:A(M)}) we have a sequence of subsets
\begin{equation}\label{filtration} \{0\}\subset D(M)\subset SI(M)\subset
A(M)\subset I(M)\subset \tmop{Wh}(M).
\end{equation}
Clearly each of these inclusions can be an equality for some $M$, but
for each pair of subsets we now have examples of manifolds where the
inclusion is proper. (For $SI(M)\ne A(M)$, see e.\,g.
\cite[Example 6.4]{JK}.)\par
\par
($ii$) $D(M)$ and $A(M)$ depend only on the fundamental group (together
with the dimension and the orientation character), and
Khan \cite{QK} has shown that ${SI}(M)$ is homotopy invariant. It would
be interesting to know if ${SI}(M)$ also depends only on the fundamental
group. If so, it is a functorial, algebraically defined subgroup of
$\tmop{Wh}(M)$ between $D(M)$ and $A(M)$. What could it be? \par
Observe also
that the quotient $A(M)/D(M)$ is equal to the Tate cohomology group
$\hat H^n(\mathbb Z_2;\tmop{Wh}(M))$, where $n=\dim M$, and therefore $SI(M)/D(M)$
is a subgroup. Another description of this subgroup is given in the
beginning of Section \ref{proofs}.\par
Note that Hausmann has shown
that $ I(M)$ is {\em not} homotopy invariant, and in general is not a
subgroup of $\tmop{Wh}} \newcommand{\db}{{D}(M)$ \cite{Hs2}. However, it is preserved by the
involution $\tau\mapsto (-1)^{n+1}\bar \tau$ \cite[Lemma 5.6]{Hs2}.}
\end{rems}
There is one more piece of structure that we should mention: the group
$\pi_0(\tmop{Top}(M))$ of isotopy classes of homeomorphisms of $M$ acts
on $\tmop{Wh}(M)$ via the isomorphisms induced on the fundamental group.
(Recall that $\tmop{Wh}(M)$ is independent of the choice of base point.)
Geometrically, this corresponds to changing an $h$-cobordism $(W;M)$ by
the way $M$ is identified with part of the boundary of $W$. Hence the
orbits represent equivalence classes under homeomorphisms preserving
boundary components, but not necessarily the identity on any of them. A
simple example to illustrate this is the case $M=P_1\#P_2$, where
$P_1$ and $P_2$ are copies of the same manifold. Since $\tmop{Wh}(M)\simeq
\tmop{Wh}(P_1)\oplus\tmop{Wh}(P_2)$ (\cite{Sta}), this means that every
$h$-cobordism from $M$ is a band-connected sum $W_1\#_{S^{n-1}\times
I}W_2$ of $h$-cobordisms from $P_1$ and $P_2$, and the homeomorphisms
interchanging $P_1$ and $P_2$ just interchange $W_1$ and $W_2$.
\par
The observation now is that the action of $\pi_0(\tmop{Top}(M))$
clearly preserves the filtration (\ref{filtration}).\par
Note that on $\tmop{Wh}(M)$ this action factors through an action of the group
$\pi_0(\tmop{Aut}(M))$ of homotopy classes of homotopy equivalences of
$M$. Since the action of $\pi_0(\tmop{Aut}(M))$ is defined
algebraically, it must also preserve the functorial subgroups $D(M)$
and $A(M)$. \par
This action does not have an easy geometric
interpretation, but $SI(M)$ is still preserved, by the more subtle
functoriality of \cite[Theorem 3.1]{QK}, as explained in Corollary
\ref{SIinv} below.
However, it is an easy consequence of \cite[Theorem
6.1]{Hs2} that it does {\em not} in general preserve $I(M)$.
\section{Proofs}\label{proofs}
In this section all manifolds have dimension at least five.
The proofs are based on the following commutative diagram, which is part
of the braid (\ref{eq:rotharray}) in Section \ref{comments}. The rows
are the Wall-Sullivan exact sequences for topological surgery (cf.
\cite{Wa1}, \cite{Ra}), and the columns are part of the Rothenberg
sequences for $L$-groups and structure sets.
$$
\dia{ L_{n + 2}^s (M) \ar[r]^{\gamma^s}\ar[d]^{l_1} & S^s({M} \times
I)\ar[r]^{\eta^s}\ar[d]^t & N({M} \times I)\ar[r]^{\theta^s}\ar[d]^= &
L_{n + 1}^s (M)\ar[d]^{l_0} \\ L_{n + 2}^h (M)
\ar[r]^{\gamma^h}\ar[d]^{\delta_L} & S^h({M} \times
I)\ar[r]^{\eta^h}\ar[d]^{\delta_S}& N({M} \times I)\ar[r]^{\theta^h}&
L_{n + 1}^h (M) \\ \hat H^n(\mathbb Z_2;\tmop{Wh}(M)) \ar@{=}[r]& \hat
H^n(\mathbb Z_2;\tmop{Wh}(M)) & &}
$$
We want to understand the quotient group ${SI}(M)/\db(M)$, and the clue
is the following observation:\smallskip
\begin{lemma}\label{SIquot} ${SI}(M)/\db(M)= \tmop{im} \delta_S \subset \hat
H^n(\mathbb Z_2;\tmop{Wh}(M))\subset \tmop{Wh}(M)/\db(M)$.
\end{lemma}
\begin{proof} (See also \cite{QK}.) Recall that an element of $S^h(M\times I)$ is represented by a homotopy equivalence
$f:W\to M\times I$ which is a homeomorphism on the boundary. Hence we can think of $W$ as an
$h$-cobordism from $M$, and as such it is clearly strongly inertial. Since the map $\delta_S$ is
induced by $(f:W\to M\times I)\mapsto \tau(W,M)$, the inclusion $\supseteq$ follows.\par
To prove the opposite inclusion, let $(W;M,N)$ be a strongly inertial $h$-cobordism representing an element $z$ in
${SI}(M)/\db(M)$, and let $H:N\times I\to M$ be a homotopy from the natural homotopy equivalence $h_W=r_M|N$
to a homeomorphism. Define a map $W\to M$ as the
composite $W\xrightarrow{\approx} W\cup_N N\times I\to M$, where the last map is $H$ on the collar $N\times I$ and
the retraction $r_M$ on $W$. Combined with any map $(W;M,N)\to (I;0,1)$ this defines an element
of $S^h(M\times I)$ whose image is $z\in {SI}(M)/\db(M)$.
\end{proof}
We include the following corollary, which is our way of understanding
Theorem 3.1 in \cite{QK} and its proof.
\begin{cor}\label{SIinv}
Let $f:M\to M'$ be a homotopy equivalence of closed manifolds. Then
the induced isomorphism $f_*: \tmop{Wh}(M)\to \tmop{Wh}(M')$ restricts to
an isomorphism $f_*:SI(M)\to SI(M')$.
\end{cor}
\begin{proof} We need to verify that $f_*(SI(M))\subseteq SI(M')$.\par
Lemma \ref{SIquot} and functoriality of the surgery exact sequence
imply that the induced homomorphism $f_*: \tmop{Wh}(M)/D(M)\to \tmop{Wh}(M')/D(M')$
restricts to a homomorphism $f_*: SI(M)/D(M)\to SI(M')/D(M')$. In
other words, if $x\in SI(M)$, then $f_*(x)=y+d$, where $y\in SI(M')$
and $d\in D(M')$. But then obviously also $f_*(x)\in SI(M')$.
\end{proof}
The most obvious way to try to prove Theorems \ref{DnotSI}
and \ref{nonor} will now be to show that in these cases the homomorphism
$l_1$ in the diagram above is not onto. \par
In the case of Theorem \ref{DnotSI}, we need to study the map of {\em even} $L$-groups: $l_1:L^s_{2m}(\pi)\to L^h_{2m}(\pi)$,
where $\pi=\pi_1(M)$ and $2m=\tmop{dim}M+2$. Now assume that $\pi=\mathbb Z_k$ is a cyclic group of {\em odd} order $k$.
Then $l_1$ is injective. In fact, its image splits off as the free part plus a $\mathbb Z_2$ (Arf invariant) if $m$ is odd. Hence,
any other torsion in $L_{2m}^h(\mathbb Z_k)$ maps nontrivially by $\delta_L$.\par
The extra torsion is computed from the Rothenberg sequence relating $L^h_*$ and $L^p_*$-groups:
\[ \longrightarrow L_{2m+1}^p (\mathbb Z_{k}) \longrightarrow H^2(\mathbb Z_2;\widetilde{K_0} \mathbb Z [\mathbb Z_{k}]) \longrightarrow L_{2m}^h
(\mathbb Z_{k}) \longrightarrow L_{2m}^p (\mathbb Z_{k}), \]
where the groups $L_{2m+1}^p (\mathbb Z_{k})$ vanish by \cite[Corollary 4.3, p.58]{BK}.\par
An example where $H^2(\mathbb Z_2;\widetilde{K_0} \mathbb Z [\mathbb Z_{k}])$ is nontrivial is provided by \cite[Theorem 7.1, p.449]{KM},
where it is shown that $\widetilde {K_0}(\mathbb Z[\mathbb Z_{15}])\approx \mathbb Z_2$. Hence, if we choose $M$ to be any orientable,
closed manifold of even dimension and fundamental group $\mathbb Z_{15}$, then $\db(M)\ne {SI}(M)$.\par
To see that $\db(M)\ne 0$, recall that $\tmop{Wh}(\mathbb Z_{15})\approx \mathbb Z^4$ (see e.\,g. \cite[11.5]{Co}), and that the involution is
trivial for abelian groups. Then
$\db(M)=2\tmop{Wh}(\mathbb Z_{15})\approx \mathbb Z^4$.\medskip
For Theorem \ref{nonor}, consider the cyclic 2-group $\mathbb Z_{2^k},\ k\geqslant 4$, with the nontrivial orientation character
$\omega:\mathbb Z_{2^k}\to \{\pm1\}$. Computations in \cite[Theorem 3.4.5]{Wa3} and
\cite[Theorem B and formula p.44]{CS2} give
$$L^h_{2m+1}(\mathbb Z_{2^k},\omega)\xrightarrow[\approx]{\delta_L} H^1(\mathbb Z_2;\tmop{Wh}(\mathbb Z_{2^k})^-)\approx (\mathbb Z_{2})^{k-3},$$
where the cohomology is with respect to the involution twisted by $\omega$.
\medskip
The proof of Theorem \ref{periodic} goes by an argument similar to the proof
of Theorem 1.3 in \cite{JK} (Theorem \ref{abelian} above).
We need the following facts:\smallskip
\noindent{\em FACT 1}: The involution $- : \tmop{Wh} (\pi)
\rightarrow \tmop{Wh} (\pi)$ is trivial.
This is Claim 3 in [KS3] p.1527.
\smallskip
\noindent{\em FACT 2}: The homomorphism $l_1$ is surjective.
This is Claim 1 in [KS3] p.1527.
\smallskip
\noindent{\em FACT 3}: The homomorphism $l_0$ is injective on the
image of $\theta^s$.
{\em{Proof of FACT 3}}. Since $\tmop{im} \theta^s \subseteq L_{n + 1}^s
(\pi_2)$, where $\pi_2$ is the Sylow 2-subgroup of $\pi$ (cf.
[Wa2]), it is enough to show that the restriction $l_0 : L_{n + 1}^s
(\pi_2) \rightarrow L_{n + 1}^h (\pi_2)$ is injective. To this end note that
$\tmop{SK}_1 (\pi_2) = 0$ (cf. \cite{O}), where
\[ \tmop{SK}_1 (\pi) := \tmop{Ker} (K_1 (\mathbb Z [\pi])
\longrightarrow K_1 (\mathbb Q [\pi])). \]
Indeed, $\pi_2$ is either generalized quaternionic or cyclic!
As a consequence $L_{n + 1}^s (\pi_2) \cong L_{n + 1}^{'} (\pi_2)$ where
$L_{\ast}^{'} (-)$ are the weakly simple $L$-groups of C.T.C. Wall from
\cite{Wa3}.
Now, there is an exact sequence (cf. \cite[p. 78]{Wa3})
\[ 0 \to L_{2 n}^s (\pi_2) \longrightarrow L_{2 n}^h (\pi_2)
\longrightarrow \tmop{Wh}' (\pi_2) \otimes \mathbb Z_2 \longrightarrow
L_{2 n - 1}^s (\pi_2) \longrightarrow L_{2 n - 1}^h (\pi_2) \to
0 \]
and hence $l_{0|\tmop{im} \theta^s}$ is injective as claimed.
Given the Facts (1-3) the proof of Theorem \ref{periodic} is just a
repetition of the
argument in \cite{JK}.
\section{Further remarks}
(1) Let $\pi$ be a finite group with $\tmop{SK}_1 (\pi) = 0$, for example any
dihedral group, or many nonabelian metacyclic groups, etc. (see \cite{O} for more
such groups). Then $\tmop{Wh} (\pi) \cong \tmop{Wh}' (\pi)$ is torsion free
and the involution $- : \tmop{Wh} (\pi) \rightarrow \tmop{Wh} (\pi)$ is
trivial (cf. \cite{O}). This is enough to extend Theorem 2 to this class of
fundamental groups.
Indeed, let $({W}^{n + 1} ; {M}^n, {N}^n)$ be a
strongly inertial $h$-cobordism, $n$ odd. We can assume $n \geqslant 5$. Let
$h : {N}^n \longrightarrow {M}^n$ be the natural homotopy
equivalence. Since $h$ is homotopic to a homeomorphism, $\tau (h) = 0$. On
the other hand, $\tau (h) = 2 \tau ({W}^{n + 1}, {M}^n)$. This
implies $\tau ({W}^{n + 1}, {N}^n) = 0$ and ${W}^{n +
1} ={M}^n \times I$, i.e. ${SI} ({M}^n) = \{ 0 \}$.
\medskip
(2) There are periodic groups $\pi$ with $\tmop{SK}_1 (\pi) \neq 0$. For
example groups containing $\mathbb Z_p \times Q (8)$, where $p \geqslant 3$
is prime and $Q (8)$ is the quaternionic group of order 8 (cf. \cite{O}).
\medskip
(3) There exist strongly inertial $h$-cobordisms with nontrivial
Whitehead torsion
$\tau ({W}^{n + 1}, {M}^n, {N}^n)$, with $n$ odd, $n
\geqslant 5$.
To be more specific, let $p$ be an odd prime and let $G$ be a $p$-group
such that $SK_1(G)_{(p)}$ is non-trivial, for example the group given in
Example 8.11 of \cite{O}, p.\,201. Then the argument on page 323 of \cite{O}
shows that
the involution $- :\tmop{Wh} (G) \rightarrow \tmop{Wh} (G)$ is nontrivial. Now let
${M}^n$, $n$ odd, $n \geqslant 5$ be a manifold with $\pi_1
({M}^n) \cong G$. Then the doubling construction gives a strongly inertial
$h$-cobordism $({W}^{n + 1} ; {M}^n, {M}^n)$ with
$\tau ({W}^{n + 1}; {M}^n)$ of the form $\tau_0 -\overline{\tau_0}$ for
$\tau_0 \in \tmop{Wh} (G)$. Choosing $\tau_0 \in
\tmop{Wh} (G)$ with $\tau_0 \neq \overline{\tau_0}$ gives the desired strongly inertial
$h$-cobordism.
\medskip
(4) Let $G$ be a finite group and ${M}^n, n \geqslant 5, n$ odd, a
closed manifold with $\pi_1 ({M}^n) \cong G$. The following is a
curious restatement of a special case of our problem.\smallskip
\noindent \textbf{Question}: {\tmem{Is ${SI} ({M}^n)
= \{ 0 \}$ if the involution $- : \tmop{Wh} (G) \rightarrow
\tmop{Wh} (G)$ is the identity?}} (`Only if' is trivial in this case, since
$\{\tau-\bar \tau\mid\tau\in\tmop{Wh} (G)\}\subset {SI}({M}^n)$.)
\smallskip
\noindent \textbf{Comments}: (a) The answer is yes for $G$ finite abelian or
periodic.
(b) Suppose ${SI} ({M}^n) = \{ 0 \}$, and let $\tau_0 \in\tmop{Wh} (G)$
be given. Again the doubling construction
gives a strongly inertial $h$-cobordism $({W}^{n + 1}, {M}^n,{N}^n)$
with torsion $\tau = \tau_0 - \overline{\tau_0}$. Since
${SI} ({M}^n) = \{ 0 \}$, we get $\tau_0 = \overline{\tau_0}$, i.\,e.
the involution is trivial. On the other hand, suppose the involution
$- :\tmop{Wh} (G) \rightarrow \tmop{Wh} (G)$ is trivial and let
$({W}^{n +1}, {M}^n, {N}^n)$ be a strongly inertial $h$-cobordism. Since
the natural homotopy equivalence $h : {M}^n \rightarrow {N}^n$
is homotopic to a homeomorphism, we get
$0 = \tau (h) = - 2 \tau ({W}^{n +1}, {M}^n)$.
In particular, if $\tmop{Wh} (G)$ is torsion free, then the
involution $- : \tmop{Wh} (G) \rightarrow \tmop{Wh} (G)$ is always trivial.
Hence, for all such groups ${SI} ({M}^n) = \{ 0 \}$.
\medskip
(5) There exist 4-dimensional inertial $s$-cobordisms which are not products!
(cf. \cite{CS}, \cite{KS2}).
\medskip
\section{Addendum 1: On topological invariance}\label{TopInv}
It is a consequence of the $s$-cobordism theorem and smoothing theory
that if $M$ is a compact manifold and $\dim M\geq 5$,
then the classification of $h$-cobordisms from $M$ up to isomorphism
relative to $M$ is the same in the three categories {\em TOP}, {\em PL} and
{\em DIFF}. For example, if $M$ is smooth and $(W,M)$ is a topological
$h$-cobordism, then $W$ has a smooth structure, unique up to
concordance, extending that
of $M$, and if two such $h$-cobordisms are homeomorphic rel $M$, then
they are also diffeomorphic rel $M$.
However, the following question is more subtle:
\begin{question}\label{ques}
Suppose $(W;M,N)$ is a smooth $h$-cobordism which is inertial in
{\em TOP}, does it follow that it is also inertial in {\em DIFF}?
\end{question}
In other words: if $M$ and $N$ are homeomorphic, are they then also
diffeomorphic? (Similar questions can of course be asked for the
pairs of categories ({\em DIFF,PL}) and ({\em PL,TOP }).)
Note that this indeed holds for the examples provided by the general
results and constructions above, for example $D(M),\ A(M)$ and those obtained by
connected sum with products of spheres. In Lemma 8.1 of \cite{JK} we
claimed that the answer is always yes. However, this
was based on an overly optimistic application of the product structure
theorem for smoothings, and it does not hold as it stands\footnote{We
would like to thank Jean-Claude Hausmann for pointing out the error in
\cite{JK}.}.
We have, unfortunately, not been able to correct this in general,
but here is a proof
in the case of {\em strongly inertial} $h$-cobordisms.
\begin{prop}\label{SITOP} Let $M$ be a smooth, compact manifold. If $W$
is a $\tmop{PL}$ $h$-cobordism from $M$, then $W$ has a smooth structure
compatible with the given structure on $M$, unique up to concordance. If
$W$ is strongly inertial in $\tmop{PL}$, then it is also strongly
inertial in $\tmop{DIFF}$. \par
Replacing the pair of categories $(\tmop{DIFF,PL})$ by $(\tmop{DIFF,TOP})$ or
$(\tmop{PL,TOP})$, a similar result is true, provided $M$ has dimension at
least 5.
\end{prop}
\begin{proof} Denote by $\Gamma(M)$ the set of concordance classes of
smoothings of the underlying $PL$ manifold $M$. By smoothing theory,
this is a
homotopy functor. In particular, if $(W;M,N)$ is an $h$-cobordism, the
inclusions $M\subset_{j_M} W$ and $N\subset_{j_N} W$ induce
restriction {\em isomorphisms}
$$ \Gamma(M)\xleftarrow[\approx]{j_M^*} \Gamma(W) \xrightarrow[\approx]{j_N^*} \Gamma(N).$$
This proves the first part of the Proposition and also defines a unique
concordance class of structures on $N$. \par
Now let $M_\alpha$ be the given structure on $M$, $W_\alpha$ a structure
on $W$ restricting to $M_\alpha$ and $N_\alpha$ the restriction of this
again to $N$, such that $(W_\alpha;M_\alpha,N_\alpha)$ is a smooth
$h$-cobordism. Observe that since $j_M$ has a homotopy inverse $r_M$,
the composite isomorphism $\Gamma(M)\to \Gamma(N)$ is induced by $
r_M{\scriptstyle\circ} j_N$, i.\,e. the natural homotopy equivalence
$h_W$. But if the $h$-cobordism is ($PL$) strongly inertial,
the isomorphism is also induced by a $PL$ homeomorphism $f$. This means that
$N_\alpha$ is concordant to the smoothing $N_{f^*\!\alpha}$ on $N$
transported from $M_\alpha$ by $f$ in such a way that $f$ becomes
a {\em diffeomorphism} between $N_{f^*\!\alpha}$ and $M_\alpha$.
\par
Let $(N\times I)_\beta$ be a concordance between $N_\alpha$ and
$N_{f^*\!\alpha}$, i.\,e. a smooth structure restricting to $N_\alpha$ on
$N\times\{0\}$ and $N_{f^*\!\alpha}$ on $N\times\{1\}$. By the product
structure theorem (\cite[part I]{HM}) there
is a diffeomorphism $H:(N\times I)_\beta\to N_\alpha\times I$
restricting to the identity on $N\times \{0\}$. Then $F(x,t)=H(f(x),t)$
defines a homotopy (in fact a $PL$ isotopy) between $f$ and a
diffeomorphism between $M_\alpha$ and $N_\alpha$. But $f$ was homotopic
to $h_W$.\par
The proofs in the other cases are analogous, but one now needs the
triangulation theory of \cite{KSb}, which is only valid in dimensions
$\geqslant 5$.
\end{proof}
\begin{rem} If $\dim M=4$, Question \ref{ques} has a negative answer,
even in the strongly inertial case. In fact, the first counterexamples
to the $h$-cobordism theorem given by Donaldson in \cite{D} are even
{\em strongly} inertial, so even Proposition \ref{SITOP}
(in case {\em (DIFF,TOP)}) fails in this
dimension.
\end{rem}
\medskip
\section{Addendum 2: Comments on torsion}\label{comments}
{We collect here some useful observations concerning the Whitehead
torsions of homotopy equivalences of manifolds and relations with
$h$--cobordisms.} \medskip
Recall that to a homotopy equivalence $f: K\to L$ of finite complexes is
associated a Whitehead torsion $\tau(f)=f_* \tau(M_f,K)\in Wh(L)$
\cite{Co}. Then the torsion of an $h$-cobordism $(W,M)$ can be
expressed as
$$\tau(W,M)=r_*\tau(\iota) = - \tau(r),$$
where $\iota$ is the inclusion $M\subset W$ and $r$ is a retraction
$W\to M$. If $j:N\hookrightarrow W$ is the inclusion of the other end of
$W$, we can express the torsion of the natural homotopy equivalence
$h=r\circ j$ as
\begin{eqnarray} \tau(h) = \tau(r)+r_*(\tau(j))&=&
-\tau(W;M)+r_*j_*(\tau(W;N))\notag \\
&=&-\tau(W;M)+(-1)^n\overline{\tau(W;M)}.\label{eq:tau-h}
\end{eqnarray}
The following observation shows that torsions of homotopy equivalences
of manifolds can not be arbitrary, unlike for $h$-cobordisms.
\begin{lemma} Let $f:(N,\partial N)\to (M,\partial M)$ be a homotopy
equivalence between compact, oriented and connected manifolds of
dimension $n$, such that $f$ is a homeomorphism on the boundary, and
let $\tau\in Wh(M)$ be its torsion. Then
$$\tau+(-1)^n\tau^*=0.$$
\end{lemma}
\begin{proof} There is a commutative diagram
$$\xymatrix{
C_*(N)\ar[r]^{f_\#}\ar[d] &C_*(M)\ar[d]\\ C_*(N,\partial
N)\ar[r]^{f_\#^{rel}} &C_*(M,\partial M)\\ C^*(N)\ar[u]^{D_N}&
C^*(M)\ar[l]^{f^\#}\ar[u]^{D_M}}
$$
of finitely generated $\mathbb Z\pi_1(M)$--modules, where the lower vertical
maps are given by Poincar\'e duality. (Everything with coefficients in
$\mathbb Z\pi_1(M)$). Then
$$ \tau(D_M)=\tau(f_\#^{rel})+f_*\tau(D_N)+f_*\tau(f^\#).$$
(Here $f_*$ is the map induced on Whitehead groups.) The result now
follows, since the Poincar\'e duality maps have vanishing torsion,
$\tau(f_\#^{rel})=\tau(f_\#)=\tau$ and
f_*(\tau(f^\#))=(-1)^n(\tau(f_\#))^*.$
\end{proof}
\noindent{\em Remark.} More generally, without the assumption that
$f|\partial N$ is a homeomorphism (or at least a simple homotopy
equivalence), we get the formula
$$\tau(f)-\tau(f|\partial M) + (-1)^n(\tau(f))^*=0.$$
\noindent{\em Example.} Many finite groups have free Whitehead groups,
and then it is known that the involution is trivial (Wall). For an
even--dimensional closed manifold with one of these groups as
fundamental group, it follows that all homotopy equivalences are
simple.\medskip
The lemma is used in the following geometric proof of the Rothenberg
sequence for structure sets. We use the convention that $\mathcal S^h(M)$
($\ss(M)$) denotes the structure set of maps which are homeomorphisms on
the boundary.
\begin{theorem} Let $M$ be a compact, oriented and connected manifold of
dimension $n$. Then there is an exact sequence of based sets ({\em
groups}, in the topological category)
$$\to \stop^h(M\times I)\xrightarrow{\theta}\widehat H^n(\mathbb Z/2;Wh(M))\overset{\psi}{\to} \stop^s(M) \overset{\iota}{\to}
\stop^h(M) \overset{\theta}{\to} \widehat H^{n-1}(\mathbb Z/2;Wh(M)).$$
\end{theorem}
\begin{proof} The map $\iota$ is the obvious forgetful map; $\psi$ and
$\theta$ will be defined below.\par We start with $\theta$. Recall that
\begin{align*} \widehat H^{n-1}(\mathbb Z/2;Wh(M))=& \,\{\tau\in
Wh(M)|\tau=(-1)^{n-1}\tau^*\}/\{\tau+(-1)^{n-1}\tau^*\}\\ =&\,\{\tau\in
Wh(M)|\tau+(-1)^{n}\tau^*=0\}/\{\tau-(-1)^{n}\tau^*\}.
\end{align*}
If $f:N\to M$ represents an element of $\stop^h(M)$, it then follows
from the lemma above that $\tau(f)$ represents an element of $\widehat
H^{n-1}(\mathbb Z/2;Wh(M))$. We have to show that this element is
well--defined.
\par Let $f':N'\to M$ represent the same element of $\stop^h(M)$ as
$f$. Then there is an $h$--cobordism $W$ from $N$ to $N'$ and a map
$F:W\to M$ restricting to $f$ and $f'$ at the ends.
\begin{equation}\label{eq:hrel}\xymatrix{ N\ar[d]_{\cap}\ar[dr]^f &\\
W\ar[r]^F & M\\ N'\ar[u]^{\cup}\ar[ur]_{f'} }
\end{equation}
Let $\sigma=\tau(W,N)$ be the torsion of the $h$--cobordism. By equation
(\ref{eq:tau-h}) above we have $\tau(h)=-\sigma+(-1)^n\sigma^*$, where
$h:N'\to N$ satisfies $f\circ h\simeq f'$. But then
$$\tau(f')=f_*\tau(h)+\tau(f),$$
and $f_*\tau(h)=-\bigl(f_*\sigma-(-1)^n(f_*\sigma)^*\bigr)$ is trivial in $\hat H^{n-1}(\mathbb Z/2;Wh(M))$.\smallskip
Trivially $\theta\circ\iota=0$. Suppose now that $f\in\stop^h(M)$
satisfies $\theta(f)=0$, i.\,e. $\tau(f)=\sigma-(-1)^n\sigma^*$, for
some $\sigma\in Wh(M)$. Let $W$ be an $h$--cobordism with one end
equal to $N$ and with Whitehead torsion
$\tau(W,N)=f_*^{-1}(\sigma)$. Extension of the map $f$ yields a diagram
like (\ref{eq:hrel}). Then $f'$ and $f$ represent the same element of
$\mathcal S^h(M)$, and $\tau(f')=0$.\medskip
The last construction is also used to define $\psi$\,: let $\tau\in
Wh(M)$ represent an element of $\hat H^{n}(\mathbb Z/2;Wh(M))$, i.\,e.
$\tau=(-1)^n\tau^*$, and let $W$ be an $h$--cobordism with one end
equal to $M$ and torsion $\tau$. If the other end is $N$, the natural
homotopy equivalence $h:N\to M$ has torsion
$\tau(h)=-\tau+(-1)^n\tau^*=0$ and we set $\psi(\tau)=h$.\par If
$\tau=\sigma+(-1)^n\sigma^*$ we can choose $W$ to be the ``double'' of
an $h$--cobordism with torsion $\sigma$ (Milnor, Lemma 11.4), and the
construction gives $h=\text{id}_M$. Hence $\psi$ is well--defined.\par
The construction of $\psi$ is illustrated by the following special case
of diagram (\ref{eq:hrel}):
\begin{equation}\label{eq:psidef}\xymatrix{
M\ar[d]_{\cap}\ar[dr]^{\text{id}_M} &\\ W\ar[r]^F & M\\
N\ar[u]^{\cup}\ar[ur]_{h} }
\end{equation}
Exactness at $\ss(M)$ follows when we observe that this diagram also
expresses precisely that $h$ is equivalent to $\text{id}_M$ in
$\mathcal S^h(M)$.
\end{proof}
\noindent{\em Remarks.} (1) The sequence can be continued to a long
exact sequence of groups to the left by hooking it up in the obvious
way with the sequences for $M\times I$, $M\times I^2$ and so
on.\smallskip
(2) The maps $L^*_{n+1}(M)\to \mathcal S^*(M)$ in the surgery sequences
($*=$s,\,h) give an obvious map from the $L$-theory Rothenberg sequence,
and an interlocking braid (continuing to the left)
\begin{eqnarray} \xymatrix@C-3pc@R-1pc{ N(M\times I)\ar[rd]
\ar@/^+1pc/[rr] && L^h_{n+1}(M) \ar[rd] \ar@/^1pc/[rr] && \widehat
H^{n-1}(\mathbb Z/2;Wh(M)) \ar[rd] \label{eq:rotharray} & \\ &
{L^s_{n+1}(M)} \ar[ur] \ar[dr] && {\mathcal S^h(M)} \ar[ur] \ar[dr] &&
L^s_{n}(M)\ar[dr] & \\ \widehat H^n(\mathbb Z/2;Wh(M)) \ar[ur]
\ar@/_1pc/[rr]^J && \ss(M) \ar[ur] \ar@/_1pc/[rr]^{\sigma_*} && N(M)
\ar[ur]\ar@/_1pc/[rr] && L^h_{n}(M) \\ }
\end{eqnarray}
Measurement data sets of molecular biology and other experimental
sciences are being collected comprehensively to openly accessible
databases. We demonstrate that it is now possible to relate new
results to earlier science by searching with the actual data, instead
of only in the textual annotations, which are restricted to known
findings and are the current state of the art. In a gene expression
database, the data-driven relationships between datasets matched well
with citations between the corresponding research papers, and even
found mistakes in the database.
\vspace{2mm}
Molecular biology, historically driven by the pursuit of
experimentally characterizing each component of the living cell, has
been transformed into a data-driven science \cite{Greene11,Tanay05,
Caldas12, Adler09, Schmid12, Gerber07} with just as much importance
given to the computational and statistical analysis as to experimental
design and assay technology. This has brought to the fore new
computational challenges, such as the processing of massive new
sequencing data, and new statistical challenges arising from the
problem of having relatively few ($n$) samples characterized for
relatively many ($p$) variables---the ``large $p$, small $n$''
problem. High-throughput technologies are often developed to assay
many parallel variables for a single sample in a run, rather than many
parallel samples for a single variable, whereas the statistical power
to infer properties of biological conditions increases with larger
sample sizes. For cost reasons, most labs are restricted to generating
datasets with the statistical power to detect only the strongest
effects. In combination with the penalties of multiple hypothesis
testing, the limitations of ``large $p$, small $n$'' datasets are
obvious. It is therefore not surprising that much work has been
devoted to addressing this problem.
Some of the most successful methods rely on increasing the effective
number of samples by combining with data from other, similarly
designed, experiments, in a large meta-analysis \cite{Tseng12}.
Unfortunately, this is not straightforward either. Although public
data repositories, such as the ones at NCBI in the United States and
the EBI in Europe, serve the research community with ever-growing
amounts of experimental data, they largely rely on annotation and
meta-data provided by the submitter. Database curators and semantic
tools such as ontologies provide some help in harmonizing and
standardizing the annotation, but the user who wants to find datasets
that are combinable with her own most often must resort to searches
in free text or in controlled vocabularies, which would need much
downstream curation and data analysis before any meta-analysis can be
done \cite{Rung12}.
Ideally, we would like to let the data speak for themselves. Instead
of searching for datasets that have been described similarly, which
may not correspond to a statistical similarity in the datasets
themselves, we would like to conduct that search in a data-driven way,
using the dataset itself as the query, or a statistical (rather than a
semantic) description of it. This is implicitly done for example in
multi-task learning, a method from the machine learning field
\cite{Baxter97,Caruana97}, where several related estimation tasks are
pursued together, assuming shared properties across tasks. Multi-task
learning is a form of global analysis, which builds a single unified
model of the datasets. But as the number of datasets keeps increasing
and the amount of quantitative biological knowledge keeps
accumulating, the complexity of building an accurate unified model
becomes increasingly prohibitive.
Addressing the ``large $p$, small $n$'' problem requires taking into
account both the uncertainty in the data and the existing biological
knowledge. We now consider the hypothesized scenario where future
researchers increasingly develop hypotheses in terms of
(probabilistic) models of their data. Although far from realistic
today, a similar trend exists for sequence motif data, which are often
published as Hidden Markov models, for instance in the Pfam database
\cite{Punta12}.
In this paper we report on a feasibility study towards the scenario
where a large number of experiments have been modeled beforehand,
potentially by the researcher generating the data or the database
storing the model together with the data. We ask \emph{what could be
done with these models towards cumulatively building knowledge from
data in molecular biology}. Speaking about models generally and
assuming the many practical issues can be solved technically, we
arrive at our answer: \emph{a modeling-driven dataset retrieval
engine}, which a researcher can use for positioning her own
measurement data into the context of the earlier biology. The engine
will point out relationships between experiments in the form of the
retrieval results, which is a naturally understandable interface. The
retrieval will be based on data instead of the state of the art of
using keywords and ontologies, which will make unexpected and
previously unknown findings possible. The retrieval will use the
models of the datasets which, by our assumption above, incorporate
what the researchers producing the data thought was important, but the
retrieval will be designed to be more scalable than building one
unified grand model of all data. This also implies that the way the
models are utilized needs to be approximate. Compared to existing
data-driven retrieval methods \cite{Schmid12, Caldas12}, whole
datasets, incorporating the experimental designs, will be matched
instead of individual observations. The remaining question is how to
design the retrieval so that it both reveals the interesting and
important relationships and is fast to compute.
The model we present is a first step towards this goal. We assume a
new dataset can be explained by a combination of the models for the
earlier datasets and a novelty term. This is a mixture modeling or
regression task, in which the weights can be computed rapidly; the
resulting method scales well to large numbers of datasets, and the
speed of the mixture modeling does not depend on the sizes of the
earlier datasets. The largest weights in the mixture model point at
the most relevant earlier datasets. The method is applicable to
several types of measurement datasets, assuming suitable models
exist. Unlike traditional mixture modeling we do not limit the form of
the mixture components, thus we bring in the knowledge built into the
stored models of each dataset. We apply this approach to a large set
of experiments from EBI's ArrayExpress gene expression database
\cite{Lukk12}, treating each experiment in turn as a new dataset,
queried against all earlier datasets. Under our assumptions the
retrieval results can be interpreted as studies that the authors of
the study generating the query set could have cited, and we show that
the actual citations overlap with the retrieval results. The
discovered links between datasets additionally enable forming a ``hall
of fame'' of gene expression studies, containing the studies that
would have been influential assuming the retrieval system would have
existed. The links in the ``hall of fame'' verify and complement the
citation links: in our study they revealed corrections to the citation
data, as two frequently retrieved studies were not highly cited and
turned out to have erroneous publication entries in the database. We
provide an online resource for exploring and searching this ``hall of
fame'': {\tt
http://research.ics.aalto.fi/mi/setretrieval}.
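As an illustration of the ``hall of fame'' aggregation described above, the following is a minimal sketch and not the pipeline used in the study; the \texttt{retrievals} input structure and the top-$k$ cutoff are assumptions made only for this example. Each query study contributes its $k$ most relevant earlier studies (largest mixture weights first), and earlier studies are ranked by how often they are retrieved:

```python
from collections import Counter

def hall_of_fame(retrievals, k=3):
    """Rank earlier studies by how often they are retrieved.

    retrievals: dict mapping each query study id to a list of earlier
    study ids, ordered by decreasing mixture weight.
    Returns (study_id, count) pairs, most frequently retrieved first.
    """
    counts = Counter()
    for ranked in retrievals.values():
        counts.update(ranked[:k])  # each query votes for its top-k hits
    return counts.most_common()
```

The most frequently retrieved studies in such a tally would correspond to the influential datasets discussed above, which is also how mislabeled database entries can surface.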
Earlier work on relating datasets has provided partial solutions along
this line, with the major limitation of being restricted to pairwise
dataset comparisons, in contrast to the proposed approach of
decomposing a dataset into contributions from a set of earlier
datasets. Russ and Futschik \cite{Russ10} represented each dataset by
pairwise correlations of genes, and used them to compute dataset
similarities. This dataset representation is ill-suited for typical
functional genomics experiments as a large number of samples is
required to sensibly estimate gene correlation matrices. In addition,
it makes the dataset comparison computationally expensive, as the
representation is bulkier than the original dataset. In other works
specific case-control designs \cite{plosDisease} or known biological
processes \cite{Huttenhower08} are assumed; we generalize to
decompositions over arbitrary models.
\section{Combination of stored models for dataset retrieval}
Our goal is to infer data-driven relationships between a new ``query''
dataset $q$ and earlier datasets. The query is a dataset of $N_q$
samples $\{x_i^q\}_{i=1}^{N_q}$; in the ArrayExpress study the samples
are gene expression profiles, with the element $x_{ij}^q$ being
expression of the gene set $j$ in the sample $i$ of the query $q$, but
the setup is general and applicable to other experimental data as
well. Assume further a dataset repository of $N_S$ earlier datasets,
and assume that each dataset $s_j$, $j=1,\ldots,N_S$, has already been
modeled with a model denoted by $M^{s_j}$, later called a base model.
The base models are assumed to be probabilistic generative models,
{\it i.e.}, principled data descriptions capturing prior knowledge and
data-driven discoveries under specific distributional assumptions.
Base models for different datasets may come from different model
families, as chosen by the researchers who authored each dataset. In
this paper we use two types of base models, which are discrete
variants of principal component analysis ({\it Results}), but any
probabilistic generative models can be applied.
Assume tentatively that the dataset repository contains a library of
``base experiments'', carefully selected to induce all important known
biological effects with suitable design factors. In the special
example case of metagenomics with known constituent organisms, an
obvious set of base experiments would be the set of genomes of those
organisms \cite{Meinicke11}. A new experiment could then be expressed
as a combination of the base experiments, and potential novel
effects. More generally, for instance in a broad gene expression
atlas, it would be hard if not impossible to settle on a clean,
well-defined and up-to-date base set of experiments to correspond to
each known effect, so we choose to \emph{use the comprehensive
collection of experiments in the current databases as the base
experiments}. The problem setting then changes, from searching for a
unique explanation of the new experiment, to the down-to-earth and
realistic task of data-driven retrieval of a set of relevant earlier
experiments, relevant in the sense of having induced one or more of
the known or as-of-yet unknown biological effects.
We combine the earlier datasets by a method that is probabilistic but
simple and fast. We build a \emph{combination model} for the query
dataset as a mixture model of base distributions $p(x|M^{s_j})$, which
have been estimated beforehand. In our scenario, generative models
$M^{s_j}$ are available in the repository along with datasets $s_j$;
note that the $M^{s_j}$ need not all have the same form. In the
mixture model parameterized by $\boldsymbol{\Theta}^q =
\{\theta_j^q\}_{j=1}^{N_S + 1}$, the likelihood of observing the query
is
\begin{equation}
p(\{x_i^q\}_{i=1}^{N_q}; \boldsymbol{\Theta}^q)
= \prod_{i=1}^{N_q} \Big[\Big(\sum_{j=1}^{N_S} \theta^q_j p(x_i^q | M^{s_j}) \Big)+ \theta^q_{N_S + 1} p(x_i^q|\psi) \Big]
\end{equation}
where $\theta^q_j$ is the mixture proportion or \emph{weight} of the
$j$th base distribution (model of dataset $s_j$) and $\theta^q_{N_S +
1}$ is the weight for the novelty term. The novelty is modeled by a
``background model'' $\psi$, a broad nonspecific distribution covering
overall gene-set activity across the whole dataset repository. All
weights are non-negative and $\sum_{j=1}^{N_S + 1} \theta^q_j =
1$. In essence, this representation assumes that biological activity
in the query dataset can be approximately explained as a combination
of earlier datasets and a novelty term.
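As a concrete illustration, once the base likelihoods $p(x_i^q|M^{s_j})$ and the background likelihoods $p(x_i^q|\psi)$ have been precomputed, the mixture likelihood above reduces to a weighted sum per sample. The sketch below assumes these likelihoods are stored as a matrix and a vector; it is a simplified illustration, not the authors' implementation:

```python
import numpy as np

def combination_log_likelihood(base_lik, bg_lik, theta):
    """Log-likelihood of a query under the combination model.

    base_lik : (N_q, N_S) array, base_lik[i, j] = p(x_i^q | M^{s_j})
    bg_lik   : (N_q,) array, bg_lik[i] = p(x_i^q | psi)
    theta    : (N_S + 1,) non-negative weights summing to one;
               theta[-1] is the novelty weight.
    """
    # Per-sample mixture likelihood: weighted base models plus novelty term.
    per_sample = base_lik @ theta[:-1] + theta[-1] * bg_lik
    return float(np.sum(np.log(per_sample)))
```

Because the base likelihoods are fixed, evaluating (and later optimizing) the model only requires this $N_q \times (N_S+1)$ matrix, which is what makes the approach independent of the sizes of the earlier datasets.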
The remaining task is to infer the combination model
$\boldsymbol{\Theta}^q$ for each query $q$ given the known models
$M^{s_j}$ of datasets in the repository. We infer a maximum a
posteriori (MAP) estimate of the weights
$\boldsymbol{\Theta}^q=\{\theta^q_j\}_{j=1}^{N_S + 1}$. Alternatively
we could sample over the posterior, but MAP inference already yielded
good results. We optimize the combination weights to maximize their
(log) posterior probability
\begin{align} \label{costfun}
&\log p(\{\theta^q_j\}|\{x^q_i\}, \{M^{s_j}\})
\propto \log p(\{x^q_i\}|\{M^{s_j}\},\{\theta^q_j\})
+ \log p(\{\theta^q_j\}) \nonumber\\
&\propto \sum_i \log \Big[\Big(\sum_{j=1}^{N_S} \theta^q_j
p(x_i^q | M^{s_j}) \Big) + \theta^q_{N_S + 1} p(x_i^q|\psi)
\Big]
- \lambda\sum_{j=1}^{N_S + 1} {\theta^q_j}^2
\end{align}
where $p(\{\theta^q_j\}) = \mathcal{N}(0,\lambda^{-1}\boldsymbol{I})$ is
a naturally non-sparse $L_2$ prior for the weights with regularization
parameter $\lambda$.
The cost function \eqref{costfun} is strictly concave (\emph{SI
Text}), and standard constrained convex optimization techniques can
be used to find the optimized weights. Algorithmic details for the
Frank-Wolfe algorithm and a proof of convergence are provided in
\emph{SI Text}. After computing the MAP estimate, we rank the
datasets for retrieval according to decreasing combination weights.
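A generic Frank-Wolfe iteration for this objective can be sketched as follows; the step-size schedule and iteration count here are illustrative defaults, not the implementation detailed in \emph{SI Text}. The likelihood matrix is assumed to carry the background model as its last column:

```python
import numpy as np

def fit_weights(lik, lam=1.0, n_iter=500):
    """Frank-Wolfe MAP estimate of the combination weights.

    lik : (N_q, K) likelihood matrix; lik[i, j] = p(x_i^q | M_j),
          with the background model as the last column.
    lam : L2 regularization strength of the Gaussian prior.
    Returns a weight vector on the K-simplex.
    """
    n, k = lik.shape
    theta = np.full(k, 1.0 / k)  # start at the simplex centre
    for t in range(n_iter):
        # Gradient of sum_i log(L_i . theta) - lam * ||theta||^2.
        grad = lik.T @ (1.0 / (lik @ theta)) - 2.0 * lam * theta
        # Linear maximization over the simplex picks a single vertex.
        vertex = np.zeros(k)
        vertex[np.argmax(grad)] = 1.0
        step = 2.0 / (t + 2.0)  # standard Frank-Wolfe step size
        theta = (1.0 - step) * theta + step * vertex
    return theta
```

Each iterate is a convex combination of simplex vertices, so non-negativity and the sum-to-one constraint hold by construction, which is the appeal of Frank-Wolfe for this problem.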
This modeling-driven approach has several advantages: 1) the
approximations become more accurate as more datasets are submitted to
the repository, increasing naturally the number of base distributions;
2) it is fast, since only the models of the datasets are needed, not
the large datasets themselves; 3) any model types can be included, as
long as likelihoods of an observed sample can be computed, hence all
expert knowledge built into the models in the repository can be used;
4) relevant datasets are not assumed to be ``similar'' to the query in
any na\"ive sense, they only need to explain a part of the query set;
5) the relevance scores of datasets have a natural quantitative
meaning as weights in the probabilistic combination model.
\subsection{Scalability}
As the size of repositories such as ArrayExpress doubles every two
years or even faster \cite{Parkinson09}, fast computation with respect
to the number $N_S$ of background datasets is crucial for future-proof
search methods. Already the first method above has a fast linear
computation time in $N_S$ (\emph{SI Text}), and an approximate variant
can be run in sublinear time. For that, the model combination will be
optimized only over the $k$ background datasets most similar to the
query, which can be found in time $O(N_S^{1/(1+\epsilon)})$ where
$\epsilon\ge 0$ is an approximation parameter \cite{Gionis99}, by
suitable hashing functions.
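The candidate pre-filtering step can be sketched with random-hyperplane hashing, one common locality-sensitive family for cosine similarity; this is a concrete illustration of the idea, not necessarily the hashing family of the cited work, and the signature dimensionality and bit count are arbitrary choices:

```python
import numpy as np

def lsh_buckets(vectors, n_bits=8, seed=0):
    """Hash dataset signature vectors into buckets using random
    hyperplanes (cosine LSH). Returns the planes and the buckets."""
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((vectors.shape[1], n_bits))
    # Each vector's key is the sign pattern of its projections.
    bits = (vectors @ planes > 0).astype(int)
    buckets = {}
    for idx, key in enumerate(tuple(row) for row in bits):
        buckets.setdefault(key, []).append(idx)
    return planes, buckets

def candidates(query, planes, buckets):
    """Datasets colliding with the query; the combination model is
    then optimized only over these k candidates."""
    key = tuple((query @ planes > 0).astype(int))
    return buckets.get(key, [])
```

Only the datasets in the query's bucket enter the mixture optimization, giving the sublinear dependence on $N_S$ at the cost of the approximation parameter.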
\section{Results}
\subsection{Data-driven retrieval of experiments is more accurate than
standard keyword search}
We benchmarked the combination model against state-of-the-art dataset
retrieval by keyword search, in the scenario where a user queries with
a new dataset against a database of earlier released datasets
represented by models. The data were from a large human gene
expression atlas \cite{Lukk12}, containing 206 public datasets with
$5372$ samples in total that have been systematically annotated and
consistently normalized. To make use of prior biological knowledge, we
preprocessed the data by gene set enrichment analysis
\cite{Subramanian05}, representing each sample by an integer vector
telling for each gene set the number of leading edge active genes
\cite{Caldas09} (\emph{Methods}). As base models we used two model types previously
applied in gene expression analysis \cite{Gerber07, Caldas09,
Engreitz10, Caldas12}: a discrete principal component analysis
method called Latent Dirichlet Allocation \cite{Pritchard00, Blei03},
and a simpler variant called mixture of unigrams \cite{Nigam00}
(\emph{SI Text}). Of the two types, for each dataset we
chose the model yielding the larger predictive likelihood
(\emph{SI Text}). For each query ($q$), the earlier
datasets ($s_j$) were ranked in descending order of the combination
proportion ($\theta_j^q$; estimated from Eq. \eqref{costfun}). That
is, base models which explained a larger proportion of the gene set
activity in the query were ranked higher. The approach yields good
retrieval: the retrieval result was consistently better than with
keyword searches applied to the titles and textual descriptions of the
datasets (Fig.~\ref{fig:PR_nonsmallVsall}), which is a standard
approach for dataset retrieval from repositories
\cite{Zhu08geometadb}.
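The evaluation protocol can be made concrete with a short sketch: given one query's ranked retrieval list and its set of relevant datasets, the precision-recall curve traces both quantities as progressively more datasets are retrieved (the averaging over queries is omitted here for brevity):

```python
def precision_recall_curve(ranked, relevant):
    """Precision and recall after retrieving the top k items,
    for k = 1..len(ranked).

    ranked   : list of retrieved dataset ids, best first
    relevant : set of dataset ids considered relevant to the query
    """
    relevant = set(relevant)
    hits = 0
    prec, rec = [], []
    for k, item in enumerate(ranked, start=1):
        hits += item in relevant
        prec.append(hits / k)          # fraction of retrieved that are relevant
        rec.append(hits / len(relevant))  # fraction of relevant retrieved
    return prec, rec
```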
\begin{figure}
\centering
\includegraphics{Fig1_l2_2201_plotLukk.eps}
\caption{Data-driven retrieval outperforms
the state of the art of keyword search on the human gene expression
atlas \cite{Lukk12}. Blue: Traditional precision-recall curve where
progressively more datasets are retrieved from left to right. All
experiments sharing one or more of the 96 biological categories of
the atlas were considered relevant. In keyword retrieval, either the
category names (``Keyword: 96 classes'') or the disease annotations
(``Keyword: disease'') were used as keywords. All datasets having at
least ten samples were used as query datasets, and the curves are
averages over all queries.
\label{fig:PR_nonsmallVsall}}
\end{figure}
We checked that the result is not only due to laboratory effects by
discarding, in a follow-up study, all retrieved results from the same
laboratory. The mean average precision decreased slightly (from $0.44$
to $0.42$; precision-recall curve in Fig.~S2) but still
supports the same conclusion.
\subsection{Network of computationally recommended dataset connections
reveals biological relationships}
When each dataset in turn is used as a query, the estimated
combination weights form a ``relevance network'' between datasets
(Fig.~\ref{fig:full_network} left), where each dataset is linked to
the relevant earlier datasets (for details, see {\it Methods}; a
larger figure in Fig.~S5; and an interactive searchable version in the
online resource). The network structure is dominated but not fully
explained by the tissue type. Normal and neoplastic solid tissues
(cluster 1) are clearly separate from cell lines (cluster 2) and from
hematopoietic tissue (cluster 4); the same main clusters were observed in
\cite{Lukk12}. Note that the model has not seen the tissue types but
has found them from the data. In closer inspection of the clusters,
some finer structure is evident. The muscle and heart datasets (gray)
form an interconnected subnetwork in the left edge of the image: nodes
near the bottom of the image (downstream) are explained by earlier
(upstream) nodes, which in turn are explained by nodes even further
upstream. As another example, in cluster 4 myeloma and leukemia
datasets are concentrated on the left side of the cluster, whereas the
right side mostly contains normal or infected mononuclear cells.
\begin{figure*}
\centering
\includegraphics[angle=270]{Fig2_l2_combined_figure.eps}
\caption{Relevance network of datasets in the human gene expression
atlas; data-driven links from the model (left) and citation links
(right). Left: each dataset was used as a query to retrieve earlier
datasets; a link from an earlier dataset to a later one means
the earlier dataset is relevant as a partial model of activity in
the later dataset. Link width is proportional to the normalized relevance
weight (combination weight $\theta_{j}^q$; only links with
$\theta_{j}^q\geq 0.025$ are shown, and datasets without links have
been discarded). Right: links are direct (gray) and indirect
(purple) citations. Node size is proportional to the estimated
influence, \emph{i.e.}, the total outgoing weight. Colors: tissue
types (six meta tissue types \cite{Lukk12}). The node layout was
computed from the data-driven network (details in
\emph{Methods}).\label{fig:full_network}}
\end{figure*}
There is a substantial number of links both across clusters and across
tissue categories. Among the top thirty cross-category links, 25
involve heterogeneous datasets containing samples from diverse tissue
origins. The strongest link connects GSE6365, a study on multiple
myeloma, with GSE2113, a larger study from the same lab which largely
includes the GSE6365 samples. The dataset E-MEXP-66 is a hub connected
to all the clusters and to nodes in its own cluster having different
tissue labels. It contains samples studying Kaposi sarcoma, and
includes control samples from skin endothelial cells from blood
vessels and the lymph system. Blood vessels and cells belonging to the
lymph system are expected to be present in almost any solid tissue
biopsy as well as in samples based on blood samples. The strongest
link between two homogeneous datasets of different tissue types
connects GSE3307 (which compares skeletal muscle samples from healthy
individuals with 12 groups of patients affected by various muscle
diseases) to GSE5392, which measures transcriptome profiles of normal
brain and brain bipolar disorder. Interestingly, shortening of
telomeres has been associated both with bipolar disorder
\cite{Martinsson13} and muscular disorder
\cite{Mourkioti13}. Treatment of bipolar disorder has been found to
also slow down the onset of skeletal muscle disorder
\cite{Kitazawa08}.
Next we investigated ``outlier'' datasets where the tissue type does
not match the main tissue types of a cluster, implying that they might
reveal commonalities between cellular conditions across tissues.
Cluster~1 contained three outlier datasets: two hematopoietic datasets
and one cell line dataset. The two hematopoietic outlier datasets are
studies related to macrophages and are both strongly connected to
GSE2004, which contains samples from kidney, liver, and spleen, sites
of long-lived macrophages. The first hematopoietic outlier, GSE2018
studies bronchoalveolar lavage cells from lung transplant recipients;
the majority of these cells are macrophages. The dataset has strong
links to solid tissue datasets including
GSE2004, and the diverse dataset E-MEXP-66. The second
hematopoietic outlier, GSE2665, is also strongly connected to GSE2004
and measures expression of lymphatic organs (sentinel lymph node) that
contain sinusoidal macrophages and sinusoidal endothelial cells. The
third outlier, E-MEXP-101, studies a colon carcinoma cell line and has
connections to other cancer datasets in cluster~1.
\subsection{Top dataset links overlap well with citation graph}
We compared the model-driven network to the actual citation links
(Fig.~\ref{fig:full_network}, right) to find out to what extent the
citation practice in the research community matches the data-driven
relationships. Of the top two hundred data-driven edges, 50\% overlapped
with direct or indirect citation links (see \emph{Methods} and
\emph{SI Text}). Most of the direct citations appear
within the four tissue clusters (Fig.~\ref{fig:full_network},
right). The two cross-cluster citations are not due to biological
similarity of the datasets. The publication for GSE1869 cites the
publication for GSE1159 regarding the method of differential
expression detection. GSE7007, a study on Ewing sarcoma samples, cites
the study on human mesenchymal stem cells (E-MEXP-168) for stating
that the overall gene expression profiles differ between those samples.
We additionally compared the densely connected sets of experiments
between the two networks.
In the citation graph the breast cancer datasets GSE2603, GSE3494, GSE2990,
GSE4922, and GSE1456 form an interconnected clique in cluster 1, while
the three leukocyte datasets GSE2328, GSE3284, and GSE5580 form an
interconnected module in cluster 4. In the relevance network the corresponding
edges for both cliques are among the strongest links for those datasets,
and some of them are among the top 20 strongest edges in the network
(see \emph{SI Text} for the list of top 20 edges).
There are also densely connected modules in the relevance network that are
not strongly connected in the citation graph; when we systematically sought
cliques associated with each of the top 20 edges, the strongest edges
constituted a clique among E-MEXP-750, GSE6740 and GSE473, all three studying
CD4+ T helper cells, which are an essential part of the human immune system.
Another
interesting set is among three T-cell related datasets in cluster 3. Two of the
datasets contain T lymphoblastic leukemia samples (E-MEXP-313 and E-MEXP-549), whereas
E-MEXP-337 reports thymocyte profiles. Thymocytes are developing T lymphocytes that
are matured in thymus, so this connection is biologically meaningful but not
straightforward to find from dataset annotations. Other strongly connected cliques are
discussed in the \emph{SI Text}.
\subsection{Analysis of network hubs discovers datasets deserving more
citations}
Datasets that have high weights in explaining other datasets have a
large weighted outdegree in the data-driven relevance network, and are
expected to be useful for many other studies. We checked whether the
publications corresponding to these \emph{central hubs} are highly
cited in the research community. There is a low but statistically
significant correlation between the weighted outdegree of datasets and
their citation counts (Fig.~\ref{fig:normalized_scatter_plot};
Spearman $\rho(169) = 0.2656$, $p < 0.001$). Both quantities were
normalized to avoid bias due to different release times of the
datasets (\emph{Methods}). We further examined whether the prestige
of the publication venue (measured by impact factor) and the senior
author (h-index of the last author) biased the citation counts, which
could explain the low correlation between the outdegree and the
citation count, and the answer was affirmative (\emph{Methods}).
We inspected more closely the datasets where the recommended or the
actual citation counts were high
(Fig.~\ref{fig:normalized_scatter_plot}): (A) datasets having low
citation counts but high outdegrees, (B) both high citation counts and
high outdegrees and (C) high citation counts but low outdegrees. We
manually checked the publication records of region A in Gene Expression
Omnibus (GEO) \cite{Barrett11} and ArrayExpress \cite{Parkinson09}, to
find out why the datasets had low citation counts despite their high
outdegree (data-driven citation recommendations). Two of the eight
datasets had an inconsistent publication record. The blue arrows in
Fig.~\ref{fig:normalized_scatter_plot} point from their original
position to the corrected position confirmed by GEO and ArrayExpress.
Thus the data-driven network revealed the inconsistency, and the new
positions, corresponding to higher citation counts, validate the
model-based finding that these datasets are good explainers for other
datasets. In region B, most of the papers have been published in high
impact journals and have a relatively high number of samples (average
sample size of $154$) compared to region A (average sample size of
$75$). One of the eight datasets in the collection is the well known
Connectivity Map experiment (GSE5258). Lastly, set C mostly contains unique
targeted studies; there are five studies in the set, which are about
leukocytes of injured patients, Polycomb group (PcG) proteins,
senescence, Alzheimer's disease, and the effect of the cAMP agonist forskolin,
a traditional Indian medicine. The studies have been published in high
impact forums, and a possible reason for their low outdegree is their
specific cellular responses, which are not very common in the atlas.
\begin{figure}
\centering
\includegraphics[angle=270]{Fig3_l2_scatterplot.eps}
\caption{Data-driven prediction of usefulness of datasets vs. their
citation counts. Manual checks comparing sets for which the two
scores differed revealed inconsistent database records for two
datasets; the blue arrows point to their corrected locations, which
are more in line with the data-driven model. Regions A, B, and C: see
text.\label{fig:normalized_scatter_plot}}
\end{figure}
\section{Discussion}
Our main goal was to test the feasibility of the scenario where
researchers let the data speak for themselves when relating new
research to earlier studies. The conclusion is positive: even a
relatively straightforward and scalable mixture modeling approach
found both expected relationships such as tissue types, and
relationships not easily found with keyword searches, including cells
in different developmental stages or treatments resembling conditions
in other cell types. While biologists could find such connections by
bringing expert knowledge into keyword searches, the ultimate
advantage of the data-driven approach is that it also yields
connections beyond current knowledge, giving rise to new hypotheses
and follow-up studies. For example, it seems surprising that the
skeletal muscle dataset GSE6011 is linked also to
kidney and brain datasets.
Closer inspection yielded possible partial
explanations. Some kidney areas are rich in blood vessels, lined by
smooth muscle. Studies have shown common gene signatures between
skeletal muscle and brain. Abnormal expression of the protein
dystrophin leads to Duchenne muscular dystrophy, exhibited by a
majority of samples in GSE6011; the brain is another major expression
site for dystrophin \cite{Culligan2001}.
Interestingly, the top three potentially novel datasets, where less than 50\% of
the expression pattern is modeled by earlier datasets (i.e.,
$\theta_{N_S + 1}^q > 0.5$), are GSE2603 (a central breast cancer set),
the Connectivity Map data (GSE5258) and the Burkitt's Lymphoma set
(GSE4475; a cancer fundamentally distinct from other types of lymphoma).
The first two are also recovered by the citation data (have
relatively high citation counts and appear in region B in
Fig.~\ref{fig:normalized_scatter_plot}), unlike the third (which is
part of region A in Fig.~\ref{fig:normalized_scatter_plot}).
Our case study focused on global analysis of the relevance network
obtained for a representative dataset collection, allowing for
comparisons with the citation graph. The data-driven relationships
corresponded to actual citations when available, but were richer and
were able to spot errors in citation links. Another intended use
of the retrieval method is to support researchers in finding relevant
data on a particular topic of interest. We performed a study to obtain
insights into relationships among skeletal muscle datasets as well as
between skeletal muscle and other datasets, and showed that the
retrieval method lessens the need for laborious manual searches
(\emph{SI Text} and Fig.~S4).
In this work we made simplifying assumptions: we only employed two model
families, included biological knowledge only as pre-chosen gene sets,
and assumed all new experiments to be mixtures of earlier ones,
instead of sums of effects in them. We expect results to improve
considerably with more advanced future alternatives, with the research
challenge being to maintain scalability. Generalizability of the
search across measurement batches, laboratories, and measurement
platforms is a challenge. Our feasibility study showed that for
carefully preprocessed datasets (of the microarray
atlas~\cite{Lukk12}), data-driven retrieval is useful even across
laboratories. Our method is generally applicable to any single
platform, and takes into account the expert knowledge built into
models of datasets for that platform; abstraction-based data
representations, such as the gene set enrichment representation we
used, have potential to facilitate cross-platform analysis. As data
integration approaches develop
further~\cite{Tripathi11dami,Virtanen12aistats}, it may be possible to
do searches even across different omics types; here, integration of
meta data (pioneered in a specific semi-supervised framework
\cite{Wise2012}), several ontologies (MGED ontology, experimental
factor ontology and ontology of biomedical investigations
\cite{Zheng11}) and text mining results \cite{Jensen06, Rzhetsky08}
are obviously useful first steps.
\begin{materials}
\section{Gene expression data}
We used the human gene expression atlas \cite{Lukk12} available at
ArrayExpress under accession number E-MTAB-62. The
data were preprocessed by gene set enrichment analysis (GSEA) using the
canonical pathway collection (C2-CP) from the Molecular Signatures
Database \cite{Subramanian05}. Each sample was represented by its
top enriched gene sets \cite{Caldas09} (\emph{SI Text}).
\section{Node layout and normalized relevance weight}
The weight matrix contains a weight vector for each query dataset,
encoding the amount of variation in that query explained by each
earlier dataset. As query datasets from early years have only a few even
earlier sets available, there is a bias towards the edges being
stronger for the datasets from early years. To remove the bias we
normalized, for the visualizations, the edge strengths of each query
data set by the number of earlier datasets. To visualize the
relationship network over time in Fig.~\ref{fig:full_network}, we
needed a layout algorithm that positions the datasets on the
horizontal axis highlighting structure and avoiding tangling. We used
a \emph{cluster-emphasizing} Sammon's mapping \cite{Sammon69};
Sammon's mapping is a nonlinear projection method or Multidimensional
Scaling algorithm which aims at preserving the interpoint distances
(here $1-\theta_j^q$). By clustering the network (with unsupervised
Markov clustering \cite{Dongen00}) and increasing between-cluster
distances by adding a constant ($c=1$) to them, the mapping was made
to emphasize clusters and hence untangle the layout.
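The distance matrix fed to the cluster-emphasizing mapping can be sketched directly from the description above: interpoint distances $1-\theta_j^q$, with a constant $c$ added to every between-cluster pair. The symmetrized weight matrix and cluster labels are assumed given (from the relevance network and the Markov clustering, respectively):

```python
import numpy as np

def cluster_emphasized_distances(weights, labels, c=1.0):
    """Distance matrix for the layout: d = 1 - theta, plus a
    constant c added to every between-cluster pair.

    weights : (N, N) symmetrized relevance weights in [0, 1]
    labels  : (N,) cluster assignment of each dataset
    """
    d = 1.0 - np.asarray(weights, dtype=float)
    labels = np.asarray(labels)
    # Boolean mask of pairs belonging to different clusters.
    between = labels[:, None] != labels[None, :]
    d = d + c * between
    np.fill_diagonal(d, 0.0)
    return d
```

Sammon's mapping applied to this matrix then pulls clusters apart by the added constant while preserving the within-cluster relevance structure.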
\section{Citation graph}
Direct citations between dataset-linked publications were extracted
from the Web of Science (26 Jul 2012) and PubMed (17 Oct 2012). We
additionally considered two types of indirect edges. Firstly, we
introduced links between datasets whose publications share common
references. This covers for instance related datasets whose
publications appeared close in time, making direct citation unlikely.
A natural measure of edge strength is given by the number of shared
references. Secondly, we connect datasets whose articles are cited
together, because co-citation is a sign that the community perceives
the articles as related. Here, the edge strength was taken to be the
number of articles co-citing the two dataset publications; these edges
dominate the indirect links in the citation graph. For this analysis
we used citation data, available for $171$ datasets and provided by
Thomson Reuters as of 13 September 2012.
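Co-citation edge strengths follow directly from the citing articles' reference lists; a minimal sketch, assuming each citing article is represented by the set of dataset publications it cites:

```python
from itertools import combinations
from collections import Counter

def cocitation_strengths(reference_lists):
    """Indirect edge strengths: for each pair of dataset publications,
    the number of articles citing both.

    reference_lists : iterable of reference lists, one per citing
    article; each element lists the dataset publications it cites.
    """
    counts = Counter()
    for refs in reference_lists:
        # Sorting gives a canonical key so (a, b) and (b, a) coincide.
        for a, b in combinations(sorted(set(refs)), 2):
            counts[(a, b)] += 1
    return counts
```

Shared-reference edge strengths can be computed symmetrically by swapping the roles of citing articles and references.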
\section{Normalization of citation counts and weighted outdegrees}
As early datasets have many more papers which can cite them, and many
more later datasets which they can help model, both the citation
counts and estimated weighted outdegrees are expected to be upwards
biased for them. For Fig.~\ref{fig:normalized_scatter_plot} we
normalized the quantities; for each dataset we normalized the
outdegree by the number of newer datasets, and the citation count by
the time difference between publishing the data and the newest dataset
in the atlas. To make sure the normalization did not introduce
side effects we additionally checked that the same conclusions were
reached
without the citation count normalization (Fig.~S1; plotted
as stratified subfigures for each 1-year time window).
The citation
counts were extracted from PubMed on 16 May 2012.
\section{Citation counts are strongly influenced by external esteem of
the publication forum and the senior author} We stratified the data
sets according to the numbers of data-driven citation recommendations,
and studied whether the impact factor of the forum or the h-index of
the last author were predictive of the actual citation count in each
stratum. The strata were the top and bottom quartiles, and for each we
compared the top and bottom quartiles of the actual citation counts
(resulting in comparing the four corners of
Fig.~\ref{fig:normalized_scatter_plot}).
For low outdegree (low recommended citation count),
the h-index was lower for less cited datasets
($t_{11} = 2.78, p = 0.0086$; mean value $24.20$ vs $54.62$), and also
the impact factor was lower ($t_{7} = 2.6, p = 0.016$; mean value
$4.38$ vs $21.13$). Similarly, for high recommended citation count
the impact factor for the little-cited datasets was
lower ($t_{19} = 3.99, p = 4.0\times 10^{-4}$; mean
value $6.45$ vs $21.91$), while the difference in h-index was not
significant. All t statistics and p-values were computed by one-sided
independent sample Welch's t-tests. The h-indices and impact factors
were collected from Thomson Reuters Web of Knowledge and Journal
Citation Reports 2011 respectively on 23rd July 2012.
\end{materials}
\begin{acknowledgments}
We thank Matti Nelimarkka and Tuukka Ruotsalo for helping with
citation data. Certain data included herein are derived from the
following indices: Science Citation Index Expanded, Social Science
Citation Index and Arts \& Humanities Citation Index, prepared by
Thomson Reuters\textsuperscript{\textregistered}, Philadelphia,
Pennsylvania, USA, \textsuperscript{\copyright} Copyright Thomson
Reuters \textsuperscript{\textregistered}, 2011. This work was
financially supported by the Academy of Finland (Finnish Centre of
Excellence in Computational Inference Research COIN, grant no
251170).
\end{acknowledgments}
\section{Methods}
\paragraph{Gene Set Enrichment Analysis.} We used GSEA
\cite{Subramanian05} to bring in biological knowledge in the form of
pre-defined gene sets. GSEA starts by sorting genes with
respect to their normalized expression levels. GSEA essentially
consists of computing a running sum on the sorted list for each gene
set; this running sum (enrichment score) increases when a gene belongs
to the gene set and decreases otherwise; the final statistic is the
maximum of this running sum. The procedure amounts to
computing a weighted Kolmogorov-Smirnov (KS) statistic. For each
sample the KS statistic is normalized by dividing it by the mean of
random KS statistics computed on randomly generated gene sets whose
size is matched with the actual gene set; the $50$ top-scoring gene
sets are selected according to this normalized score. This simple
thresholding ignores significance values but has been successfully
used in earlier meta-analysis studies \cite{Segal04, Caldas09,
Caldas12}; we earlier investigated the alternative of selecting gene
sets based on a standard q-value cut-off (with $q < 0.05$), but it
produced an excessively sparse encoding where more than $80\%$
samples had no active gene sets \cite{Caldas12}. The activity of
each gene set is finally expressed as its core set or \emph{leading
edge subset}, consisting of genes found before the running KS score
reached its maximum. We quantify the activity of a set simply by the
size of the leading edge subset. It can be used analogously to the so
called \emph{bag-of-words} representations in text analysis, and we
use it for the subsequent modeling of each dataset with the base models.
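The running-sum statistic and the leading edge subset can be sketched in a simplified, unweighted form (the actual GSEA statistic weights genes by their expression; this sketch also assumes the gene set overlaps the ranked list):

```python
def enrichment_and_leading_edge(ranked_genes, gene_set):
    """Unweighted running-sum enrichment score and leading edge.

    ranked_genes : genes sorted by normalized expression, best first
    gene_set     : the gene set being tested
    Returns the maximum of the running sum and the leading edge
    subset (set genes seen before the maximum is reached).
    """
    gene_set = set(gene_set)
    n_hit = len(gene_set & set(ranked_genes))
    n_miss = len(ranked_genes) - n_hit
    up, down = 1.0 / n_hit, 1.0 / n_miss
    running = best = 0.0
    best_pos = -1
    for pos, gene in enumerate(ranked_genes):
        # Increase on a hit, decrease on a miss.
        running += up if gene in gene_set else -down
        if running > best:
            best, best_pos = running, pos
    leading = [g for g in ranked_genes[:best_pos + 1] if g in gene_set]
    return best, leading
```

The size of the returned leading edge subset is the per-gene-set activity count that forms the bag-of-words style sample representation.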
\paragraph{Base models.} \emph{Latent Dirichlet allocation} (LDA)
\cite{Pritchard00, Blei03, Griffiths04} and \emph{mixture of unigrams}
\cite{Nigam00} are probabilistic unsupervised models that give insight
to datasets by describing them in terms of latent components. Each
data sample, in our case quantified as gene set activities, is
represented by a probability distribution over components (sometimes
also called topics). The components are shared by all samples, but
with different degrees of activation for each, and each component
produces a characteristic distribution over the gene sets. In LDA
each sample may be produced by multiple hidden components while the
mixture of unigrams is a simplified version where each sample is
assumed to come from a single component. The computational problem is
to estimate the latent component structure (for each sample, the
distribution over components and for each component, its distribution
over gene sets) that has most likely generated the observed set of
samples.
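For intuition, the generative process assumed by LDA can be sketched as follows (a toy illustration with made-up dimensions; the inference, described next, goes in the opposite direction):

```python
import random

def lda_generate(alpha, topic_dists, n_tokens, seed=0):
    """Draw one sample (a bag of gene-set activations) from the LDA
    generative process.
    alpha       : Dirichlet hyperparameters, one per latent component
    topic_dists : per-component probability distribution over gene sets
    """
    rng = random.Random(seed)
    # sample the sample-specific distribution over components
    weights = [rng.gammavariate(a, 1.0) for a in alpha]
    total = sum(weights)
    theta = [w / total for w in weights]
    tokens = []
    for _ in range(n_tokens):
        z = rng.choices(range(len(alpha)), weights=theta)[0]  # component
        g = rng.choices(range(len(topic_dists[z])),
                        weights=topic_dists[z])[0]            # gene set
        tokens.append(g)
    return tokens

# two components over two gene sets; a mixture of unigrams would instead
# draw a single component z once for the whole sample
sample = lda_generate([0.5, 0.5], [[0.9, 0.1], [0.1, 0.9]], n_tokens=20)
```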
We use standard inference solutions for the models: collapsed Gibbs
sampler for the LDA \cite{Griffiths04} and Expectation Maximization
for the mixture of unigrams \cite{Nigam00}. In LDA, the
hyperparameters, the prior probability of each component, were
optimized with Minka's stable fixed point iteration scheme
\cite{Minka00}, interleaved with the collapsed Gibbs sampling of the
other parameters. The number of Gibbs iterations was $2500$, which has
been found to yield good performance also in earlier work \cite{Blei03,
Griffiths04, Teh06}. For the mixture of unigrams model, the maximum a
posteriori solution is estimated with the Expectation Maximization (EM)
algorithm, using Laplace smoothing for the prior probabilities of the
mixture components \cite{Nigam00}.
\paragraph{Model selection for base models.} For each dataset we
estimated the two base models and selected the one that best models the
dataset. We split the dataset into two parts where 90\% of the
dataset samples were used for training the two models and the remaining
10\% samples to compute the test-set predictive likelihood, a
measure of how well the model fits the data. We repeated this procedure
in a 10-fold cross-validation setup for each dataset and each of the
two models. The model that performed better on average across the 10
folds was chosen to represent the dataset. Most datasets (74 out of 112)
preferred the more expressive LDA model.
The number of components was selected with cross-validation for both
models. We again used $10$-fold cross-validation, separately for each
dataset, to choose the number of components that led to the best
predictive likelihoods on test samples from the same dataset. We
observed that the optimal number of total topics could roughly be
summarized by $\lceil N_D/3 \rceil$ for LDA and $\lfloor \sqrt{N_D}
\rfloor$ for mixture of unigrams, where $N_D$ is the total number of
samples in a dataset. All predictive likelihoods (both for model
selection and for retrieval) were computed using an empirical
likelihood method (see \cite{Li06}). For very small datasets
($N_D<10$), within-set cross-validation is very noisy, so we chose LDA,
which had been selected for the majority of datasets with $10 \leq
N_D \leq 15$.
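The two rules of thumb for the number of components can be written as a small helper (a sketch of the heuristics stated above; the cross-validation procedure itself is not shown):

```python
import math

def default_num_components(n_samples, model):
    """Rule-of-thumb component counts found near-optimal in the
    within-dataset cross-validation."""
    if model == "lda":
        return math.ceil(n_samples / 3)          # ceil(N_D / 3)
    if model == "mixture_of_unigrams":
        return math.floor(math.sqrt(n_samples))  # floor(sqrt(N_D))
    raise ValueError(f"unknown model: {model}")

assert default_num_components(30, "lda") == 10
assert default_num_components(30, "mixture_of_unigrams") == 5
```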
\paragraph{Strict Concavity of the objective function.}
For each query dataset $q$, there are multiple samples
$i=1,\ldots,N_q$. For each sample, each background dataset gives a
probability. Let $\vec{x}_i =
[p(x_i^q|M^{s_1}),p(x_i^q|M^{s_2}),\ldots,p(x_i^q|M^{s_{N_S}}),p(x_i^q|\Psi)]^\top$
be the column vector of probabilities for sample $i$ from all
background datasets and from the novelty model, where $^\top$ denotes
vector transpose. Then the probability given by a mixture of
background datasets is $\vec{\theta}^\top \vec{x}_i$ where
$\vec{\theta}$ is the column vector of mixture weights (mixing
proportions) over the background datasets and the novelty model.
Since the mixing weights are mixture probabilities, they must lie in
the canonical simplex denoted as
\begin{equation}
\Delta = \left\{\vec{\theta} \mid
\theta_j \ge 0, \sum_{j=1}^{N_S+1} \theta_j = 1\right\} \;.
\end{equation}
Our optimization takes the form
\begin{multline}
\max_{\vec{\theta}\in \Delta} \log \left( \exp(-\lambda||\vec{\theta}||^2) \prod_i \vec{\theta}^\top \vec{x}_i \right) \\
=\max_{\vec{\theta}\in \Delta} \sum_i \log(\vec{\theta}^\top \vec{x}_i) -\lambda||\vec{\theta}||^2
=\max_{\vec{\theta}\in \Delta} f(\vec{\theta})
\end{multline}
where the objective function is
$$
f(\vec{\theta}) = \sum_i \log(\vec{\theta}^\top \vec{x}_i)-\lambda||\vec{\theta}||^2 \;.
$$
The objective $f(\vec{\theta})$ is a strictly concave function with respect to the
multivariate parameter $\vec{\theta}$. This can be shown by verifying
that the Hessian matrix of the function is negative definite.
In detail, the function's gradient is
\begin{equation}
\nabla f(\vec{\theta}) = \sum_i \frac{\vec{x}_i}{\vec{\theta}^\top \vec{x}_i} - 2\lambda \vec{\theta}
\end{equation}
and the matrix of second-order partial derivatives (Hessian matrix) is
\begin{equation}
\nabla^2 f(\vec{\theta}) = -\sum_i \frac{\vec{x}_i \vec{x}^\top_i}{(\vec{\theta}^\top \vec{x}_i)^2}
-2\lambda I
\end{equation}
where $I$ denotes the identity matrix. Since all elements of $\vec{x}_i$
are nonnegative, the first term is negative semidefinite, so the Hessian
matrix is negative definite for all feasible $\vec{\theta}$ as long as
$\lambda>0$. Therefore the objective function is
strictly concave.
A local maximum of a strictly concave function on a convex feasible region
is the unique global maximum \cite{Bradley77}; therefore maximizing the objective function
to a local maximum by any algorithm yields the unique global maximum.
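The strict concavity can also be checked numerically on toy data; the sketch below evaluates the objective and verifies that the Hessian's quadratic form is negative along a random direction (toy numbers, not data from the paper):

```python
import math
import random

def objective(theta, X, lam):
    """f(theta) = sum_i log(theta . x_i) - lam * ||theta||^2"""
    dots = (sum(t * x for t, x in zip(theta, xi)) for xi in X)
    return sum(math.log(d) for d in dots) - lam * sum(t * t for t in theta)

def hessian_quadratic_form(theta, X, lam, d):
    """d^T [Hessian f(theta)] d = -sum_i (d.x_i)^2/(theta.x_i)^2 - 2 lam ||d||^2"""
    q = 0.0
    for xi in X:
        dx = sum(di * x for di, x in zip(d, xi))
        tx = sum(t * x for t, x in zip(theta, xi))
        q -= (dx / tx) ** 2
    return q - 2 * lam * sum(di * di for di in d)

rng = random.Random(1)
X = [[rng.uniform(0.1, 1.0) for _ in range(3)] for _ in range(5)]  # toy data
theta = [1 / 3] * 3
d = [rng.uniform(-1, 1) for _ in range(3)]  # a random direction
q = hessian_quadratic_form(theta, X, 0.1, d)
assert q < 0  # negative for every nonzero direction: strict concavity
```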
\paragraph{Maximizing the Objective Function.}
Maximization of concave functions on the unit simplex $\Delta$ can be
done by the \emph{Frank-Wolfe algorithm}. The algorithm performs the
following steps.
\emph{Step 1: initialization.} The algorithm initializes a solution
as the vertex of the simplex having the largest objective value. A
vertex of the simplex has $\theta_j=1$ for some $j$ and all other
elements of $\vec{\theta}$ are zero. At vertex
$j$, the objective function simplifies to
$$
\sum_i \log(x_{ij}) -\lambda
$$ where $x_{ij}=p(x_i^q|M^{s_j})$, which takes $O(N_q)$ time to
evaluate per vertex. Thus creating the initialization takes linear
$O(N_q N_S)$ time even with a simple brute-force evaluation of all
vertices.
\emph{Step 2: Iteration.} In each iteration, the algorithm improves
the solution by two steps: (1) Find the maximal element $j$ of the
gradient. (2) Find the point along the line $\vec{\theta} +
\alpha(\vec{e}(j)-\vec{\theta})$, $\alpha\in [0,1]$, which maximizes
the objective function. Here $\vec{e}(j)$ means the vector where the
element $j$ is one and the others are zero. Computation of the
gradient and finding the maximal element takes $O(N_q N_S)$ time.
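The initialization and iteration steps above can be sketched as follows (a minimal stdlib-only implementation on toy data, using the fixed step size $\alpha_k = 2/(k+3)$ in place of an exact line search):

```python
import math

def frank_wolfe(X, lam=0.1, n_iter=200):
    """Maximize f(theta) = sum_i log(theta.x_i) - lam*||theta||^2
    over the unit simplex."""
    n = len(X[0])
    dot = lambda a, b: sum(u * v for u, v in zip(a, b))
    f = lambda th: sum(math.log(dot(th, xi)) for xi in X) - lam * dot(th, th)
    # Step 1: initialize at the simplex vertex with the largest objective
    theta = max(([float(i == j) for i in range(n)] for j in range(n)), key=f)
    for k in range(n_iter):
        # gradient: sum_i x_i / (theta.x_i) - 2*lam*theta
        grad = [sum(xi[j] / dot(theta, xi) for xi in X) - 2 * lam * theta[j]
                for j in range(n)]
        j = max(range(n), key=lambda idx: grad[idx])  # best vertex e(j)
        alpha = 2.0 / (k + 3)                         # fixed step size
        theta = [(1 - alpha) * t + alpha * float(i == j)
                 for i, t in enumerate(theta)]        # move towards e(j)
    return theta

theta = frank_wolfe([[0.9, 0.1], [0.8, 0.2], [0.7, 0.3]])
```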
\section{Proof of Convergence and Scalability}
\paragraph{Convergence Analysis.}
Since each iteration takes linear time with respect to the number of
query samples and background datasets, the only remaining issue is
the number of iterations required for good enough convergence. We now
analyze the convergence properties of this iteration in two cases.
\emph{Case 1: optimal $\alpha$.} At first, we consider the case where
the best value of $\alpha$ can be found along the line to a sufficient
accuracy in a fixed amount of time, for example by restricting
evaluations along the line to a fixed number.
Define a proportional regret function $h(\vec{\theta})=(f(\vec{\theta}^{*})-f(\vec{\theta}))/(4 C_f)$ where $\vec{\theta}^{*}$ is the optimal parameter value maximizing the objective function,
and $C_f \ge 0$
is a \emph{measure of curvature} of $f$.
In detail, $C_f$ is defined as the largest quantity such
that for all $\vec{\theta}_A\in \Delta$, $\vec{\theta}_B\in \Delta$, $\vec{\theta}_C$ where $\vec{\theta}_C=\vec{\theta}_A + \alpha (\vec{\theta}_B-\vec{\theta}_A)$ for some $\alpha$, we have
$$
f(\vec{\theta}_C) \ge f(\vec{\theta}_A) + (\vec{\theta}_C-\vec{\theta}_A)^{\top} \nabla f(\vec{\theta}_A) - \alpha^2 C_f \;.
$$
With this notation, it can be shown (\cite{Clarkson12}, Theorem 2.2) that
at iteration $k$ of the Frank-Wolfe algorithm the current solution $\vec{\theta}_k$
has regret
$$
h(\vec{\theta}_k) \le 1/(k+3)
$$
and thus
$$
f(\vec{\theta}^{*})-f(\vec{\theta}_k) \le 4C_f/(k+3) \;.
$$
Thus, to achieve a desired regret $\epsilon$, at most $4 C_f/\epsilon$
iterations are needed independently of the number of background datasets;
the number of iterations needed depends only on the curvature.
\emph{Case 2: fixed $\alpha$.}
It can even be shown that using a fixed value
$\alpha_k=2/(k+3)$ at each iteration $k$ suffices to yield bounds for performance:
with this choice of $\alpha_k$ the regret bound at iteration $k+1$ becomes (\cite{Clarkson12}, Section 7)
$$
h(\vec{\theta}_{k+1}) \le 1/(k+4)\;
$$
and thus
$$
f(\vec{\theta}^{*})-f(\vec{\theta}_{k+1}) \le 4C_f/(k+4)
$$
which again shows the number of iterations to achieve a desired
regret does not depend on the number of background datasets,
only on the curvature. It is enough to show the curvature is finite and
does not depend on the number of background datasets; we now show this.
\emph{Analysis of the Curvature: }
As seen above, the smaller the curvature $C_f$,
the better the bounds for the regret $f(\vec{\theta}^{*})-f(\vec{\theta}_{k})$ are.
In our case the function $f$ is twice differentiable and it can be shown
(\cite{Clarkson12}, Section 4.1) that
$$
C_f \le \sup_{\vec{\theta}_A\in \Delta,\vec{\theta}_B\in \Delta,\alpha\in[0,1]} -\frac{1}{2}(\vec{\theta}_B-\vec{\theta}_A)^\top \nabla^2 f(\vec{\theta}_\alpha) (\vec{\theta}_B-\vec{\theta}_A) \;.
$$
where $\vec{\theta}_\alpha = \vec{\theta}_A+\alpha(\vec{\theta}_B-\vec{\theta}_A)$.
For our cost function this becomes
\begin{multline}
C_f \le \sup_{\vec{\theta}_A\in \Delta,\vec{\theta}_B\in \Delta,\alpha\in[0,1]} \frac{1}{2}(\vec{\theta}_B-\vec{\theta}_A)^\top \\
\left(\sum_i \frac{\vec{x}_i \vec{x}^\top_i}{(\vec{\theta}_\alpha^\top \vec{x}_i)^2}
+2\lambda I
\right) (\vec{\theta}_B-\vec{\theta}_A) \\
=
\sup_{\vec{\theta}_A\in \Delta,\vec{\theta}_B\in \Delta,\alpha\in[0,1]} \frac{1}{2} \\
\left( \sum_i \frac{((\vec{\theta}_B-\vec{\theta}_A)^\top\vec{x}_i)^2}{(\vec{\theta}_\alpha^\top \vec{x}_i)^2}
+2\lambda ||\vec{\theta}_B-\vec{\theta}_A||^2
\right) \\
\le
\sup_{\vec{\theta}_A\in \Delta,\vec{\theta}_B\in \Delta,\alpha\in[0,1],1\le i \le N_q} \frac{1}{2} \bigg(
N_q \frac{((\vec{\theta}_B-\vec{\theta}_A)^\top\vec{x}_i)^2}{(\vec{\theta}_\alpha^\top \vec{x}_i)^2}
+4\lambda
\bigg) \\
\le
\sup_{\vec{\theta}_A\in \Delta,\vec{\theta}_B\in \Delta,\alpha\in[0,1],1\le i \le N_q} \frac{1}{2} \bigg(
N_q \frac{(\vec{\theta}_B^\top\vec{x}_i)^2+(\vec{\theta}_A^\top\vec{x}_i)^2}{(\vec{\theta}_\alpha^\top \vec{x}_i)^2}
+4\lambda
\bigg) \\
\le
\sup_{\vec{\theta}_A\in \Delta,\vec{\theta}_B\in \Delta,\alpha\in[0,1],1\le i \le N_q} \frac{1}{2} \bigg(
N_q \frac{2 (\max_j x_{ij})^2}{(\min_j x_{ij})^2}
+4\lambda
\bigg)
\;
\end{multline}
where for brevity we denoted $x_{ij} = p(x_i^q|M^{s_j})$ for $j=1,\ldots,N_S$
and $x_{i,{N_S+1}}=p(x_i^q|\Psi)$.
Notice that the right-hand side only depends on the maximal and minimal values
that background datasets give to query samples, not on the number of such background
datasets. Thus, as long as we ensure the maximal value is below some finite constant
and the minimal value is above some small nonzero constant, the curvature $C_f$ is finite
and the convergence bounds of the Frank-Wolfe algorithm therefore do not depend on the number
of background datasets. This condition is simple to ensure by suitable regularization
of the models of background datasets.
Therefore, under the simple condition that the probabilities
given to query samples are bounded above and bounded below away from zero,
the algorithm converges (towards the unique global maximum) to a
desired tolerance of the regret
in a finite number of iterations $I$, which can depend on the number of query samples but is
independent of the number of background datasets.
\paragraph{Computational complexity.}
Our model needs to perform two main computation tasks for each new
dataset: optimization of the objective function and computation of the
predictive likelihoods. The optimization step needs to evaluate the
function value, the gradient, and the maximal element of the gradient.
Computing the function value has complexity $O(N_q \cdot N_S \cdot C_p)$,
where $C_p$ denotes the cost of computing $p({x_i}^q|M^{s_j})$, while, as
discussed in the previous section, computing the gradient and finding its
maximal element takes linear $O(N_q \cdot N_S)$ time in each iteration if a
fixed step size is used (or if the number of line search evaluations
is restricted below some fixed maximum). Thus the complete algorithm
takes $O(I \cdot N_q \cdot N_S \cdot C_p) + O(I \cdot N_q \cdot N_S)$ time;
clearly the dominating factor is the computation of the function value,
$O(I \cdot N_q \cdot N_S \cdot C_p)$.
The predictive likelihood for the query sample is computed as its
average probability from $V=1000$ multinomial distributions, estimated
from randomly generated samples coming from the generative process of
the earlier-trained model $M^{s_j}$; this is the standard
\emph{empirical likelihood scheme} discussed in \cite{Li06}. The
complexity of computing the predictive likelihood for the query
dataset with $G$ features (in our case gene-sets), given a model with
$T$ latent components~\footnote{$T$ is upper bounded by the maximum
number of samples in a background dataset.} is $O(G \cdot V \cdot T)$. The
total computational complexity is then simply the complexity of the
predictive likelihoods times that of the optimization scheme:
$O(I \cdot N_q \cdot N_S \cdot G \cdot V \cdot T)$. Since the number of iterations is independent
of the number of background datasets, $N_S$, the complexity is linear
with respect to it, $O(N_S)$, and therefore the model is reasonably
tolerant to the fast growth of public repositories.
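The empirical likelihood computation can be sketched roughly as follows (hypothetical helper names; \texttt{generate\_sample} stands for drawing one synthetic sample from the trained model's generative process, and the multinomial coefficient, constant across models, is omitted):

```python
import math
import random

def empirical_likelihood(query_counts, generate_sample, V=1000, beta=0.01):
    """Log-probability of a query sample, averaged over V multinomials,
    each estimated (with smoothing beta) from one synthetic sample."""
    G = len(query_counts)
    log_probs = []
    for _ in range(V):
        synth = generate_sample()  # counts over the G gene sets
        norm = sum(synth) + beta * G
        lp = sum(c * math.log((s + beta) / norm)
                 for c, s in zip(query_counts, synth))
        log_probs.append(lp)
    # average in probability space via log-sum-exp for numerical stability
    m = max(log_probs)
    return m + math.log(sum(math.exp(l - m) for l in log_probs) / V)

rng = random.Random(0)
gen = lambda: [rng.randint(0, 5) for _ in range(4)]  # stand-in generator
ll = empirical_likelihood([2, 0, 1, 3], gen, V=50)
```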
In our implementation a single query dataset took about $31$
iterations and $0.15$ seconds on average on an
Intel\textsuperscript{\textregistered} Core (TM) i7 CPU @ 2.93GHz, to
find the optimal weight vector.
\section{Results}
\paragraph{Normalization of citation counts.} Older datasets tend to
have higher citation counts and outdegrees; in Fig.~3 we removed this
bias by a normalization technique and identified interesting datasets
having very low citation counts and very high outdegrees, or vice
versa. To verify that the analysis results (identified datasets) are not
artifacts of the normalization, we reanalyzed the original data
without applying citation count normalization.
Fig.~\ref{fig:scatter_plot_stratified} shows the result as stratified
subfigures plotted for each year separately. The datasets identified
using the original citation counts (top-left and bottom-right corners
in each subfigure of Fig.~\ref{fig:scatter_plot_stratified}) are an
exact match with the datasets identified after normalization.
\paragraph{Retrieval performance after discounting for the laboratory
effect.} In microarray experiments laboratory effects are known to be
strong \cite{Zilliox07}. The 206 datasets studied were generated by
163 laboratories. The top laboratory was responsible for 7 datasets,
whereas 142 laboratories only contributed a single set. To test how much
the laboratory effects have affected our results, we discarded all
retrieved results from the same laboratory as the query set. The
original precision-recall curve and the corrected curve are in
Figure~\ref{fig:PR_labEffect}; the mean average precision dropped from
$0.44$ to $0.42$. The small change in performance shows that our
result is mainly due to effects captured by the model other than
the laboratory effect.
\paragraph{Quantitative comparison of data-driven results against the
citation patterns.} Of the 23 direct citation links, eleven were
also found by our model as having a non-zero edge weight. Six links
could not have been found; five of them are citations by papers having
very small datasets (fewer than $10$ samples), which we had considered
too small to act as queries, while one citation link is between datasets
released on the same date. Of the remaining six links not observed in our
model, two are the cross-cluster citations that are not due to
biological similarity of the datasets as discussed in the paper;
two cell line datasets about multiple myeloma and large cell lymphoma
(GSE6205 and GSE6184 respectively) cite a leukemia study (GSE2113)
where plasma cells are profiled; one dataset measuring HIV
infection in T cells (GSE6740) cites a very small dataset about
thymocytes, the limited size of the set reducing its corresponding
model's relative capability to explain other sets compared to larger
datasets in the collection. Finally, E-TABM-26, a prostate cancer study,
cites E-MEXP-156 (a study about tumorigenic and nontumorigenic Human
Embryonic Kidney Cells) in the context of cell apoptosis of cancer
cells in general.
Conversely, we evaluated the top-weight data-driven edges against
direct and indirect citation links and found a favorable, non-random
overlap. For this we use a standard metric in information
retrieval called precision $@ k$, which measures precision at each
position $k$ in the ranked list of top retrieval results from a search
engine. In our case the results are the ranked list of inferred edges
in descending order of their edge strengths. The citation patterns
were used as the gold standard for existence of an edge between two
datasets using their corresponding publication information.
Fig.~\ref{fig:citgraph_comparison} shows that the precision $@ k$ is
reasonably high; for the top two hundred data-driven edges (\emph{i.e.},
$k=200$) the value is 0.5.
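Precision $@ k$ over the ranked edge list is computed as follows (a minimal sketch with toy edges; \texttt{citation\_edges} is the gold-standard set derived from the publications):

```python
def precision_at_k(ranked_edges, citation_edges, k):
    """Fraction of the top-k inferred edges confirmed by citations."""
    top = ranked_edges[:k]
    return sum(1 for e in top if e in citation_edges) / len(top)

# edges sorted by descending inferred weight, and a toy gold standard
ranked = [("A", "B"), ("B", "C"), ("A", "C"), ("C", "D")]
gold = {("A", "B"), ("A", "C")}
p_at_2 = precision_at_k(ranked, gold, 2)  # one of the top two is confirmed
```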
\paragraph{Densely connected set of experiments in the relevance network.}
For each of the top 20 strongest edges in the relevance network
(listed in Table~\ref{tab:top20links}) we searched for associated
cliques, \emph{i.e.}, connections within neighbors of the edge, where the
clique size is at least three. We found seven cliques; four of them
(breast cancer, leukocytes, human immune T helper cells and
developmental stages of thymocytes) are described in the main text.
The remaining three are an adenocarcinoma, a brain tissue and a
skeletal muscle clique. The first clique is among GSE4824, GSE5258
and GSE6914; all three datasets are heterogeneous collections of
different cancerous tissues where the majority of the samples are cell
line profiles from either lung or breast adenocarcinoma. The second
clique is among GSE1297, GSE5392 and E-MEXP-114 that profile normal
and diseased brain tissue. The last clique is among all skeletal
muscle experiments of the collection where the strongest edge is
between GSE3307 and GSE6011 that both measure Duchenne muscular
dystrophy sampled from quadriceps muscle tissues, the former also
containing samples from other skeletal muscle diseases.
\paragraph{Skeletal muscle {retrieval} case study.}
We lastly present a
case study to illustrate how the model retrieval can support a
researcher in finding relevant data on a specific topic, lessening the
need for laborious manual searches. The topic of interest was gene
expression in skeletal muscle and our database consisted of the human
gene expression atlas~\cite{Lukk12}, which includes eight skeletal
muscle datasets among a total of 206 datasets, plus an additional 16
skeletal muscle datasets extracted manually from ArrayExpress
(Table~\ref{tab:smcollection}). First, we tested how well the
data-driven method retrieved other skeletal muscle datasets when
querying with any single skeletal muscle dataset. The retrieval
performance across all 16 query datasets was close to optimal, whereas
keyword searches were not as good in the same task
(Fig.~\ref{fig:PR_SM}). The reason for that was the lack of a
consistent annotation vocabulary in the dataset descriptions; in
particular, different levels of specificity had been used.
Next we looked in more detail into the retrieval results of the
individual queries. For all of them, the retrieval result was
sparse, {\em i.e.}, less than 10\% of the datasets were found to be
relevant to the query (by a non-zero weight).
We further investigated the ranking of results provided by
the data-driven retrieval model.
For 12 queries all retrieved results were other skeletal muscle datasets;
results for the remaining queries contained
at least one false positive, and they are summarized in Table~\ref{tab:sm}.
The top retrieved non-skeletal muscle dataset is the brain tissue
dataset GSE5392, followed by the kidney dataset GSE781.
Kidney does not have a direct biological connection to skeletal
muscle; however, some areas of the kidney are known to be rich in blood
vessels, which are lined by smooth muscle. Skeletal muscle, smooth
muscle and cardiac muscle are the three main muscle types in the human
body. Interestingly, one skeletal muscle dataset, E-MEXP-216, was never retrieved
by any skeletal muscle query, which suggests
that it is an outlier in some respect. The dataset contains
a combination of human and macaque liver and skeletal muscle
samples, of which only four are human skeletal muscle samples.
Finally, the internal ranking of skeletal muscle datasets in response
to a particular query dataset seems to be guided to a large extent by
health conditions:
\begin{itemize}
\item {\bf E-GEOD-9397} contains samples annotated with disease status FSHD
(Facioscapulohumeral muscular dystrophy). The top retrieved dataset
for that query is E-GEOD-10760, the only other dataset in the
collection annotated with FSHD.
\item The top retrieved result for {\bf E-GEOD-12648} is {\bf
E-GEOD-11686} and vice versa. Both of them study neuromuscular
disorders, hereditary inclusion body myopathy (HIBM) and cerebral
palsy respectively, and they are the only datasets with these specific diseases
in our data collection.
\item {\bf E-GEOD-1786} partially contains samples from COPD (Chronic
Obstructive Pulmonary Disease) subjects; the disease has a muscle
wasting effect. The top retrieved result is
E-GEOD-10760; it contains samples from the same muscle type, but
another disease (FSHD) which also leads to muscle wasting.
\item {\bf E-GEOD-1295} contains samples of the trained and untrained
muscle of non-young overweight people with prediabetic metabolic
syndrome. Half of the samples in E-GEOD-1786 are also from trained
muscles (of COPD patients and controls, old overweight
men). E-GEOD-1786 appears at rank 6, and it is the only background
dataset known to contain trained muscle samples. Among the top 5
datasets, 3 are annotated to contain disease samples related to
weakening of muscles, and another is known to contain samples from old
overweight subjects ({\bf E-GEOD-8441}).
\end{itemize}
In summary, disease conditions and health states of tissue seem to
determine the ranking within datasets of the same tissue (sub)type.
\section{Literature Review}
\label{s: literature review}
Since NVIDIA first coined the term ``Graphics Processing Unit''
(GPU) in 1999, the GPU has grown into
a heterogeneous parallel computing architecture
that is often used to solve complicated
scientific and engineering problems.
The two most commonly used platforms are OpenCL and CUDA.
Scholars from different disciplines have reported
successful applications of the two, e.g.,
\cite{cao.et.al, veronese.krohling} in mathematics,
\cite{bach.et.al} in physics,
\cite{pu.et.al, iwai.et.al, harish.narayanan,
zhu.et.al} in computer science,
\cite{garrett.et.al, khokhlov.et.al, molero.et.al,
komatitsch.et.al} in seismic engineering,
\cite{chang.et.al, ligowski.rudnicki}
in bioinformatics,
\cite{wang.et.al} in communication,
\cite{callico.et.al,
keck.et.al, pan.et.al, scherl.et.al} in image processing,
and so on.
A crowd is fundamentally a many-body system whose
analysis requires considerable computational effort,
so the GPU naturally suggests itself as a tool for
numerical computation on this problem.
Unfortunately, research on GPUs in the field of
crowd simulation lags far behind other disciplines.
Before the authors, only a few scholars had engaged
in relevant research%
\cite{was.et.al, mroz.was, dutta.et.al, rahman.et.al}.
Furthermore, as far as the reported results are concerned,
there remains considerable room for further improvement.
As noted by Molero \et\cite{molero.et.al},
developing a scalable and portable GPU parallel model
is a challenge.
In particular, a suitable architecture is the key
to fully utilizing the power of the GPU;
in other words, a mathematical model can be well mapped
onto the GPU only when its logical architecture is appropriate.
Realizing this point, the authors first proposed
a field-based pedestrian model%
\cite{yu.et.al-1}.
In addition to modeling crowd dynamics better,
the proposed continuous model has the advantage of easy
discretization, so a discrete version and, later, an
OpenCL-based implementation were developed.
Yu \et\cite{yu.et.al-2} reported that this brings
a speedup of up to $30.8$ times compared with the CPU model.
This paper is follow-up research and tries to improve
the work of \cite{yu.et.al-2} in two respects.
Firstly, it develops a method based on the idea of
divide-and-conquer to solve the problem of
global memory depletion when fields have
a large geometric size.
This is key because it makes it possible to analyze
the finer walking behavior of super-large-scale crowds.
Secondly, potential factors affecting OpenCL performance are
thoroughly considered in order to further improve
numerical efficiency.
The remaining content is organized as follows.
A brief overview of the continuous and discrete social-field
pedestrian models is presented first.
The discussion of the introduced improvements comes next.
The numerical experiments conducted are then exhibited.
The conclusion is given at the end.
\section{Continuous and Discrete Models}
\label{s: continuous and discrete models}
Firstly, in order to avoid unnecessary confusion with
cellular automata, the term space unit,
abbreviated as \su, is used to denote the minimal discrete
space. To save space, only a brief summary is given;
interested readers can refer to
\cite{yu.et.al-1, yu.et.al-2} for a detailed
description of the models.
In the continuous model,
a pedestrian's physical movement is simulated as
a response to the pedestrian's subjective perception
of the objective environment.
The objective environment is represented by forces incurred
by presumed fields. To model various practical phenomena,
five kinds of fields in total are introduced, among which
the omnidirectional attractive and repulsive fields
model the influence of static openings and obstacles.
The directional attractive and repulsive fields and
the recurrent repulsive fields all model the influence
of neighboring pedestrians' movement,
but their evolution laws differ as
they target walking behavior observed under
different density regimes.
Another point worth mentioning is the introduction of
the concept of a regulation function.
Through regulation functions, the objective environment
around a pedestrian $p$, represented by forces, can be
adjusted correspondingly to form the dynamic subjective, or
perceived, environment that is used to determine $p$'s
next movement. In this way, pedestrians' intelligence
can be well accounted for. Additionally, it is stressed
that the \textit{local} density, instead of
the well-known macroscopic one, should be used to reflect
pedestrians' biased perception of the environment.
Besides allowing the continuous model a larger degree
of freedom, using the concept of a field also leads to
a straightforward discretization.
As shown in the work of \cite{yu.et.al-2}, a discrete model
was derived. In particular, the concept of a walk period was
introduced. Using this concept, the variance of the walking velocities
of pedestrians can be well studied under the assumption that
pedestrians' maximal speed is 1 \su per tick.
It should be noted that, for the discrete model,
the assumption $v_{max} = 1$ is not incidental but
one of the key points.
More importantly, the method adds no additional
complexity to the underlying logic,
although it should be emphasized that the concept
is related to some form of space fineness.
Therefore, in the discrete model, pedestrians are allowed to
occupy more than one \su and the behavior of jostling can
be studied.
Meanwhile, to ensure that, for a pedestrian $p$,
one and only one \su is $p$'s center, it is ruled
that $p$'s width and height can differ but must both be
odd numbers of \su{s}, i.e., $1$, $3$, $5$, and so on.
\section{Algorithm Related Improvement}
\label{s: algorithm related improvement}
\subsection{Architecture of OpenCL-based Computation Model}
\label{s: architecture of opencl-based computation model}
\begin{algorithm}
\caption{OpenCL-based GPU Model}\label{a: opencl based gpu model}
\begin{algorithmic}[1]
\Procedure{Main}{}
\State $p \gets \text{simulation period}$
\State $t \gets 0$
\While{$t < p$}
\State \emph{k-1.} initialize the temporary storage;
\State \emph{k-2.} determine pedestrians' next movement;
\State \emph{k-3.} vote which pedestrian should occupy unoccupied \su{(s)};
\State \emph{k-4.} perform pedestrians' next movement;
\State \emph{k-5.} write cached changes back;
\State $t \gets t + 1$
\EndWhile
\EndProcedure
\end{algorithmic}
\end{algorithm}
Algorithm~\ref{a: opencl based gpu model} lists
the architecture of the OpenCL-based model previously developed.
As indicated, five sub-jobs will be repeated orderly
at every simulation tick.
In \emph{k-1}, for each \su, a work-item will be assigned
to initialize the temporary storage allocated in the global memory space.
In \emph{k-2}, for each pedestrian, a work-item will be assigned
to determine the pedestrian's next movement.
In \emph{k-3}, for each \su, a work-item will be assigned
to vote which pedestrian among the candidates should occupy
if the \su is unoccupied.
In \emph{k-4}, for each pedestrian, a work-item will be
assigned. If the pedestrian's next movement is not to stand still,
the assigned work-item checks whether the pedestrian is
the one elected to occupy the \su{(s)} to be occupied.
If yes, the pedestrian will be physically moved and
the relevant changes will be buffered.
Finally, in \emph{k-5}, for each \su, a work-item will be assigned
to write the changes buffered in \emph{k-4} back to
the global storage, which is camouflaged as a 3-D image.
When the discrete model was mapped onto the OpenCL heterogeneous
computing framework, several mechanisms were introduced, among which
two used to avoid atomic functions such as \texttt{atomic\_add}
are worth a few words.
In real applications, using atomic functions is the direct way of
resolving competition among work-items.
On the other hand, it should be noted that atomic functions can
significantly harm computational performance, especially when
global memory is being manipulated.
Thus, to achieve better performance, one strives to
avoid atomic functions where possible, even though this may
require an overhaul.
For our problem, competition could occur in the
following two situations.
\begin{itemize}
\item[\textbf{s-1}.] Fields stored in the shared 3-D image
are updated due to pedestrians' movement.
Competition is solved through the concept of strength fan-out.
It is observed that, for one memory storage place in the shared
3-D image, once a field's discrete geometry space is determined,
the set of \su{s} can be computed beforehand so that
the memory storage place's content will be affected only if
the field's central \su belongs to the computed set.
Furthermore, although the computed set of \su{s} changes
as the field moves,
the number of \su{s} remains unchanged, which motivates the
concept of strength fan-out.
This can be illustrated by examining a recurrent repulsive field
located at the origin whose discrete geometry space
is $7\times 7$ \su{s}.
Figures~\ref{discrete-sect.eps} and \ref{discrete-strength.eps}
give the discrete field strength incurred.
According to figure~\ref{discrete-sect.eps}, it is found that
the presumed recurrent repulsive field's strength fan-out is 6
and the set of computed \su{s} contains $(-1, -1)$, $(-1, 0)$,
$(0, 0)$, $(1, 0)$, $(0, 1)$, and $(1, 1)$.
Figure~\ref{update-race.eps} demonstrates that,
if the field's central \su belongs to the computed set,
the sect index of incurred discrete strength at
the \su $(2, 2)$ is always 1, meaning that
the same global memory address will be accessed.
With the concept of strength fan-out,
a big enough global memory space can be allocated
beforehand to buffer field strength related
changes made in \emph{k-4},
which will be written back to
the shared 3-D image all at once in \emph{k-5}.
In this way, no atomic function is required.
\fig{width=0.45\textwidth}{discrete-sect.eps}
{This exhibits the sect indexes of field strength incurred by
the presumed recurrent repulsive field with
$k = 1$ and $\alpha = -0.5$.
The same sect index means that the same memory storage place
is to be changed.}
\fig{width=0.45\textwidth}{discrete-strength.eps}
{This exhibits scalar values of field strength incurred by
the presumed recurrent repulsive field.}
\fig{width=0.5\textwidth}{update-race.eps}
{This exhibits that, for the presumed recurrent repulsive field,
there exist 6 \su{s} such that, if the field's central \su is one of them,
the field incurs discrete strength at the \su $(2, 2)$
with the sect index value being $1$.}
\item[\textbf{s-2}.] Pedestrians compete with each other
for empty \su{(s)}.
Competition is resolved by the observation that at most
$8$ pedestrians will participate in the occupancy competition of
one \su under the assumption $v_{max} = 1$.
A register-vote mechanism is adopted.
An enrollment container that can hold at most $8$ pedestrians
is allocated for each \su.
For an empty \su, pedestrians trying to occupy the \su must
register first at the \su's enrollment container (\emph{k-2}).
Later, an election among the registered pedestrians is held
to determine who should occupy the \su (\emph{k-3}).
\end{itemize}
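The register-vote mechanism of \textbf{s-2} can be sketched in plain C as follows. The container layout and the election rule (here, the lowest pedestrian id wins) are illustrative assumptions; in the implementation, registration and election are performed by the OpenCL kernels \emph{k-2} and \emph{k-3} on per-\su enrollment containers in global memory.

```c
#include <assert.h>

/* Minimal host-side sketch of the register-vote mechanism (s-2).
   At most 8 pedestrians compete for one SU when v_max = 1. */
#define MAX_CANDIDATES 8

typedef struct {
    int count;
    int ids[MAX_CANDIDATES];
} Enrollment;

/* k-2: a pedestrian registers at the target SU's enrollment container. */
void enroll(Enrollment *e, int pedestrian_id) {
    if (e->count < MAX_CANDIDATES)
        e->ids[e->count++] = pedestrian_id;
}

/* k-3: the election among registered pedestrians. The lowest id
   winning is an illustrative rule, not the implementation's actual
   criterion. Returns -1 when nobody registered. */
int elect(const Enrollment *e) {
    int winner = -1;
    for (int i = 0; i < e->count; ++i)
        winner = (winner < 0 || e->ids[i] < winner) ? e->ids[i] : winner;
    return winner;
}
```

Because the container is bounded by $8$, the memory cost of this mechanism has a fixed upper bound per \su.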
\subsection{Technical Problems Suffered}
\label{s: technical problems sufferred}
The foregoing methods both follow the same idea of
preventing competition at the expense of memory space,
but a big difference does exist.
For the method used to resolve the competition occurring
in \textbf{s-2}, an upper bound on
the required memory space exists since at most $8$
pedestrians will compete with each other
for occupancy of a \su.
Unfortunately, this is no longer valid for the method
used to resolve the competition occurring in \textbf{s-1}.
When a field's discrete geometry space becomes larger,
the corresponding strength fan-out increases,
and so does the required memory space.
Table~\ref{t: strength fan-out} lists strength fan-outs
when different geometries are assumed for
the presumed recurrent repulsive field.
\tab{t: strength fan-out}{List of strength fan-outs}{
\begin{tabular}{c|ccccccc}
\backslashbox{$\scriptstyle wd$}{$\scriptstyle ht$} & 1 & 3 & 5 & 7 & 9 & 11 & $\cdots$\\\hline
1 & 0 & 1 & 2 & 3 & 4 & 5 & \multirow{6}{*}{$\vdots$}\\
3 & 1 & 1 & 2 & 5 & 8 & 11\\
5 & 2 & 3 & 4 & 6 & 10 & 14\\
7 & 3 & 6 & 6 & 6 & 10 & 14\\
9 & 4 & 8 & 9 & 10 & 11 & 15\\
11 & 5 & 11 & 14 & 15 & 15 & 15\\
$\vdots$ & \multicolumn{6}{c}{$\cdots$} & $\ddots$\\
\end{tabular}}
In general, a field's discrete geometry will take a value
shown in table~\ref{t: strength fan-out}, and is thus not large,
but situations where a large geometry is used do exist.
Firstly, a study of pedestrians' finer walking behaviors
generally requires larger fields so that distant effects
can be well considered.
Secondly, pedestrians are allowed to occupy more than one \su,
so fields should be scaled up correspondingly.
For the presumed recurrent repulsive field,
table~\ref{t: strength fan-out 2} lists
the strength fan-outs for different geometries.
As shown in figure~\ref{strength-fanout-ratio.eps},
the strength fan-out increases at a dramatic rate.
\tab{t: strength fan-out 2}
{This exhibits strength fan-outs when different geometries
are assumed for the presumed recurrent repulsive field.
For each ratio, the corresponding geometry's width and height
both equal $7 * ratio$.}%
{\begin{tabular}{c|cccccc}\hline
ratio & 1 & 3 & 5 & 7 & 9 & 11\\
\textit{strength fan-out} & 6 & 61 & 164 & 328 & 535 & 808\\
$m_{recur}$; byte & 192 & 1952 & 5248 & 10496 & 17120 & 25856\\
$M$; GB & 0.7 & 7.3 & 19.6 & 39.1 & 63.8 & 96.3\\\hline
\end{tabular}}
\fig{width=0.6\textwidth}{strength-fanout-ratio.eps}
{Geometry Ratio vs. Strength Fan-out}
The OpenCL-based implementation was developed for the simulation of
super-large-scale crowds. Let us examine the memory space
required to resolve the competition occurring in \textbf{s-1}
when a population of half a million is simulated.
To keep the discussion simple,
the following assumptions are made.
The whole discrete space is $1000\times 1000$
and pedestrians are all of a geometry $1\times 1$ \su.
Thus a population of half a million means that
the macroscopic density is $0.5$.
All fields have the same geometry, and hence the same strength fan-out,
denoted \textit{SF}.
For each \su, the memory space $m_{recur}$ required to store
recurrent repulsive fields is then $\text{\textit{SF}} * 8 * 4$
and the total memory space needed is
$m = m_{attra} + m_{repul} + 2 * m_{recur}$.
Lastly, multiplying $m$ by the number of \su{s} gives
the total memory space $M$ needed.
Table~\ref{t: strength fan-out 2} also lists values of
$m_{recur}$ and $M$.
As shown, when fields' geometry is $21\times 21$,
$M$ already reaches about $7.3$ GB,
let alone even larger sizes.
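As a sanity check, the $M$ column of table~\ref{t: strength fan-out 2} can be reproduced with a few lines of C. The split of $m$ into its components is an assumption: the listed values of $M$ are consistent with $m_{attra} = m_{repul} = m_{recur}$, which is adopted below.

```c
#include <assert.h>

/* Reproduce the M column of the table above for a 1000x1000 space.
   Assumption (consistent with the listed values): the attractive and
   repulsive buffers each equal one recurrent buffer, so m = 4 * m_recur. */
double total_memory_gib(int strength_fanout) {
    const double num_sus = 1000.0 * 1000.0;             /* 1000 x 1000 SUs */
    double m_recur = strength_fanout * 8.0 * 4.0;       /* SF * 8 * 4 bytes per SU */
    double m = m_recur + m_recur + 2.0 * m_recur;       /* m_attra + m_repul + 2*m_recur */
    return m * num_sus / (1024.0 * 1024.0 * 1024.0);    /* bytes -> GiB */
}
```

For example, a fan-out of $61$ (geometry $21\times 21$) yields about $7.3$ GiB, matching the table.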
\subsection{Solution Illustration}
\label{s: solution illustration}
The former methodology caches all of the changes at once,
and algorithm~\ref{a: one-step sum} lists the pseudo-code
for recurrent repulsive fields.
For directional attractive and repulsive
fields, the algorithms are almost the same.
However, this may cause the foregoing memory-depletion problem.
\begin{algorithm}
\caption{One-Step Sum For Recurrent Repulsive Fields}\label{a: one-step sum}
\begin{algorithmic}[1]
\Procedure{One-Step Sum in \emph{k-4}}{float cache[]}
\ForAll{$c \in \text{\su{s}}$}
\State $\text{\texttt{float *$s$}} \gets \text{\Call{FindCache}{cache, $c$}}$
\State \Call{ComputeAndCacheChanges}{$s$}
\EndFor
\EndProcedure
\item[]
\Procedure{One-Time Sum in \emph{k-5}}{float cache[]}
\ForAll{$c \in \text{\su{s}}$}
\State $\text{\texttt{float *$s$}} \gets \text{\Call{FindCache}{cache, $c$}}$
\State $sum \gets 0$
\For{$i = 0$ to $8 * \text{\textit{strength fan-out} - 1}$}
\State $sum \gets sum + s[i]$
\EndFor
\State \Call{WriteBack}{$c$, $sum$}
\EndFor
\EndProcedure
\end{algorithmic}
\end{algorithm}
\begin{equation}\label{eq: one vs. multi summation}
\sum_{i=0}^{n-1}a[i] \overset{\scriptscriptstyle n=K*m}{=} \sum_{j=0}^{K-1}\sum_{i=0}^{m-1}a[i*K+j]
\end{equation}
The proposed solution is based on the observation that
what really matters is the final summed-up change;
the way the summation is carried out is of less importance.
As the aforementioned one-step summation may cause
the problem of memory depletion, a multi-step summation
can be used instead.
This is mathematically expressed by Eq.~\ref{eq: one vs. multi summation}.
When $n$ is too large, it can be factored into the product
of two numbers, i.e. $n = K * m$. In particular, by keeping one factor
constant, an upper bound on memory consumption can be set up.
\begin{algorithm}
\caption{Multi-Step Sum For Recurrent Repulsive Fields}\label{a: multi-step sum}
\begin{algorithmic}[1]
\Procedure{Multi-Step Sum in \emph{k-4}}{float cache[]}
\ForAll{$c \in \text{\su{s}}$}
\State $\text{\texttt{float $s$[$K$]}} \gets \text{\Call{FindCache}{cache, $c$}}$
\State $m \gets 8 * \text{\textit{strength fan-out}} / K$
\For{$i = 0$ to $m - 1$}
\State \Call{ComputeAndCacheStepChanges}{$s$, $i$}
\EndFor
\EndFor
\EndProcedure
\item[]
\Procedure{Multi-Step Sum in \emph{k-5}}{float cache[]}
\ForAll{$c \in \text{\su{s}}$}
\State $\text{\texttt{float $s$[$K$]}} \gets \text{\Call{FindCache}{cache, $c$}}$
\State $sum \gets 0$
\For{$i = 0$ to $K-1$}
\State $sum \gets sum + s[i]$
\EndFor
\State \Call{WriteBack}{$c$, $sum$}
\EndFor
\EndProcedure
\end{algorithmic}
\end{algorithm}
The proposed methodology's pseudo-code is given in
algorithm~\ref{a: multi-step sum}.
Compared with the one shown in algorithm~\ref{a: one-step sum},
one first notices an additional loop between lines 5 and 6,
which performs the inner summation of
Eq.~\ref{eq: one vs. multi summation}.
More importantly, through the additional loop,
the loop count of the one between lines 11 and 12 is
bounded by $K$.
Correspondingly, this limits the memory space required,
i.e. \texttt{float[$K$]} in lines 3 and 9.
In the current implementation, $K$ is allowed to be
$2$, $4$, $8$, and $16$ in order to use
the built-in geometric function, the $dot$ product.
Sometimes $K$ may not divide $n$, i.e. $K \centernot| n$.
In that case, $n$ can be extended to a multiple of $K$ by
appending corresponding zero strengths.
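The bounded-cache idea can be demonstrated with a small, self-contained C sketch. The strengths here are synthetic placeholders for the values produced by \textsc{ComputeAndCacheStepChanges}, and $K=8$ is one of the allowed cache sizes.

```c
#include <assert.h>

/* Multi-step summation sketch: n = K * m strength contributions are
   accumulated into a fixed cache of K floats (as in k-4), then
   reduced in one pass (as in k-5). n is assumed padded with zero
   strengths when K does not divide it. */
#define K 8

float multi_step_sum(const float *a, int n) {
    float cache[K] = {0};            /* bounded buffer replacing float[n] */
    int m = n / K;
    for (int i = 0; i < m; ++i)      /* k-4: step i accumulates K terms */
        for (int j = 0; j < K; ++j)
            cache[j] += a[i * K + j];
    float sum = 0.0f;                /* k-5: one-time reduction of K values */
    for (int j = 0; j < K; ++j)
        sum += cache[j];
    return sum;
}
```

Regardless of $n$, the intermediate storage is fixed at $K$ floats, which is exactly the upper bound that the one-step summation lacks.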
\section{Computation Related Improvement}
\label{s: computation related improvement}
\subsection{Structure of OpenCL}
\label{s: structure of opencl}
\fig{width=0.75\textwidth}{architecture.eps}{OpenCL Architecture}
Computation techniques are developed with the OpenCL architecture
kept in mind. Therefore it is worthwhile to briefly illustrate
the architecture first, as shown in figure~\ref{architecture.eps}.
At the hardware level, an OpenCL computing environment is composed of
a host and one or more compute devices.
A compute device is composed of multiple compute units,
each of which is further composed of multiple process elements.
At the execution level, a compute unit and a process element
are also named a work-group and a work-item, respectively.
To work with OpenCL, the general work flow is as follows.
Firstly, computation tasks to be fulfilled are
programmed as kernels, using the OpenCL C dialect.
Secondly, tasks are submitted to the command queue.
The OpenCL computing environment then takes
full control of the submitted tasks.
For example, at a suitable time, it will pick an apt
task and execute it with appropriate global and local work sizes.
In order to improve numerical efficiency,
OpenCL provides both task-level and data-level parallelism.
Task-level parallelism is embodied by the fact
that more than one task can be executed concurrently
since multiple compute units exist.
For data-level parallelism, OpenCL provides the SIMD
processing fashion, which can be viewed at
two sub-levels. Logically, when a task is executed,
the kernel representing the task is run literally
at the same time by work-items distributed among
more than one work-group.
A work-group is therefore a bundle of work-items,
but not the most basic one; the most basic bundle is
the \textit{warp}
(Note: AMD calls this a \textit{wavefront}).
A warp is the smallest execution unit of code,
in the sense that the same machine instruction is emitted to
all of the work-items in it.
The mechanism of the warp thus provides the physical
level of data parallelism.
In addition, the OpenCL memory model is noteworthy too,
as also exhibited in figure~\ref{architecture.eps}.
Firstly, data stored in the host memory cannot be
directly accessed by compute devices and must be transferred
to the global memory beforehand through relevant OpenCL functions.
The global memory and constant memory can be accessed by
all process elements in the same compute device.
In addition, each compute unit has its own local memory
that is accessible to all process elements in it.
Lastly, each process element has its private memory,
which can be accessed only by the process element itself.
Besides access privileges, the memory spaces are also distinguished
in aspects of bandwidth, capacity, and so on.
In terms of capacity, the global memory is the largest and
reaches the level of gibibytes.
Next is the local memory, which is generally at the level of kilobytes.
The private memory is the smallest, generally 1 kilobyte or less.
In terms of bandwidth, on the other hand, the private memory is the fastest,
followed by the local memory, with the global memory being the slowest.
\subsection{Computational Techniques}
\label{s: computational techniques}
With OpenCL's architecture thoroughly investigated,
the following aspects are considered to improve
numerical efficiency.
\begin{description}[leftmargin=0pt,font=\normalfont]
\item[\underline{\textit{Data Bandwidth}}]
In the former implementation, all computation work
is fulfilled by directly manipulating data
in the global memory after it is transferred
from the host memory.
However, this approach does not fully exploit
the structure of the OpenCL memory model.
In OpenCL devices, the local memory is on-chip
and close to the process elements,
and is thus much faster than the global memory.
That said, a more appropriate way to work with
the memory model is as follows.
Firstly, data is transferred from the host memory
to the global memory.
Secondly, for each work-group, the portion of data
to be worked on by the work-group is copied to
the local memory through asynchronous copying functions
like \texttt{async\_work\_group\_copy} etc.
Once the computation work is done,
data may be transferred back to synchronize
the copy stored in the host memory, if necessary.
\item[\underline{\textit{Concurrency}}]
Formerly, the program was organized so that tasks were almost
always executed in the order of submission.
In addition, there was a strict execution sequence
between the CPU and the GPU: the CPU had to wait for the GPU to
finish all of the submitted tasks before starting to
process its routine work.
Now the whole framework is re-organized so that
1. in many cases, more than one task can be executed simultaneously;
2. the strict sequence between the CPU and the GPU is weakened
to a large extent and even disappears in some cases.
Figure~\ref{workflow.eps} exhibits the flow chart of the jobs
submitted. Further, to see in which time frame each submitted job
is executed, time profiling is performed on one of the numerical
experiments conducted, as shown in figure~\ref{percent.eps}.
\fig{width=0.8\textwidth}{workflow.eps}
{This exhibits the flow chart of jobs that will be fulfilled by
GPU and CPU at each simulation tick.
Arabic numbers represent various buffer reading and writing
tasks to be submitted to the OpenCL command queue.
\emph{k-1}, \emph{k-2}, ..., \emph{k-5} are the kernels
appearing in algorithm~\ref{a: opencl based gpu model}.
After task no.10, the host, i.e. CPU, will start to
process routine work.
Especially, if occupancy state of openings
is changed due to pedestrians' leaving and entering,
additional tasks, i.e. task no.11, will be submitted
correspondingly.}
\fig{width=1\textwidth}{percent.eps}
{This exhibits each job's time frame of execution in percentage.
The ones for jobs no.3 and no.11 shown in figure~\ref{workflow.eps}
are missing due to the usage of the periodic boundary condition.
This also causes the time frame used by the host to be very short;
otherwise it would largely overlap the one for job \textit{k-5}
in reality. In addition, the shown ending time is the moment
when the installed event callback procedure is called,
not exactly the moment when the job is accomplished.
This is why multiple jobs appear to end at the same time.}
\item[\underline{\textit{Bank Conflict}}]
The practice of emitting one machine instruction to
all work-items bundled in the same warp at a time
improves the degree of parallelism,
but also brings problems and difficulties deserving attention.
One is the so-called bank conflict.
In general, addresses of the local memory are grouped into banks.
For work-items in the same warp, accesses to the local memory
have to be serialized if the same bank is used.
Thus, to avoid bank conflicts, a feasible way is padding.
For example, in the current implementation,
internal \texttt{C struct\textrm{s}} used by work-items
are intentionally padded to a size of a prime number of bytes
by appending a corresponding number of bytes.
In this way, most bank conflicts can be prevented.
\item[\underline{\textit{Divergence}}]
Another serious problem caused by the SIMD processing fashion
is divergence. As all work-items in a warp execute the same
machine instruction at a time, the existence of branch statements
such as \texttt{if-else}, \texttt{switch} etc. results in
a portion of the work-items doing idle work.
A general solution does not exist.
However, some practices, like using the ternary operator
\texttt{?:} wherever possible, are worth following.
Thus the implementation is wholly re-structured
to reduce the appearance of branch statements.
For instance, according to \cite{yu.et.al-2},
an array of eight strengths is sorted to determine
each pedestrian's next movement at each simulation tick.
The sorting was previously accomplished by using
the general quick-sort algorithm.
Now a tailored algorithm completely based on \texttt{?:}
is used instead, at the expense of generality and flexibility.
\end{description}
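One possible shape of such a branch-free sort of the eight strengths is a sorting network built entirely from the ternary operator, sketched below in plain C. The fixed compare-exchange schedule (a bubble-style network) is data-independent, so all work-items in a warp follow the same instruction stream; the actual tailored kernel may use a different network.

```c
#include <assert.h>

/* Branch-free compare-exchange: both assignments are selections,
   so no data-dependent branch is taken. */
static void cmp_exchange(float *x, int i, int j) {
    float lo = (x[i] < x[j]) ? x[i] : x[j];
    float hi = (x[i] < x[j]) ? x[j] : x[i];
    x[i] = lo;
    x[j] = hi;
}

/* Sorts eight strengths ascending with a fixed, data-independent
   schedule of adjacent compare-exchanges (bubble-style network). */
void sort8_branchless(float s[8]) {
    for (int pass = 0; pass < 8; ++pass)
        for (int i = 0; i + 1 < 8; ++i)
            cmp_exchange(s, i, i + 1);
}
```

Every input follows the same sequence of operations, so warps executing this code never diverge, unlike a quick-sort whose recursion depends on the data.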
\tab{t: summary of computational techniques}
{This summarizes the severity and implementation difficulty
of the considered computational techniques.
A smaller value means more severe or harder.}{
\begin{tabular}{ccc}
& Severity & Hardness\\\hline\noalign{\smallskip}
\textit{Data Bandwidth} & \circled{1} & \circled{6}\\
\textit{Competition} & \circled{2} & \circled{1}\\
\textit{Divergence} & \circled{3} & \circled{2}\\
\textit{Bank Conflict} & \circled{4} & \circled{4}\\
\textit{Concurrency} & \circled{5} & \circled{5}\\
\textit{Locality of Access} & \circled{6} & \circled{3}\\
\end{tabular}}
To map the social field model into
the OpenCL heterogeneous framework, factors
including those already discussed in the work of
\cite{yu.et.al-2} have been considered so far.
Thus it is meaningful to have a summary
(table~\ref{t: summary of computational techniques}),
among which the issue of competition deserves further remarks.
To resolve competition without atomic functions,
two general methodologies exist.
The first one is to interweave operations
in such a way that no competition can occur,
for example \cite{komatitsch.et.al}.
Unfortunately, such an interweaving may not
exist for all problems, including
the one being discussed.
The second one follows the idea of
sacrificing memory space for time.
However, it may cause memory to be exhausted quickly.
To address this, the idea of divide-and-conquer
can be resorted to in order to set up an upper bound,
as exhibited in this paper.
\section{Numerical Experiments}
\label{s: numerical experiments}
In order to examine the current GPU model's numerical efficiency,
the scenarios used in the work of \cite{yu.et.al-2} are re-run.
Firstly, the discrete space runs from $100\times 100$,
$200\times 200$, $\cdots$, to $1000\times 1000$.
Secondly, for each discrete space, the macroscopic density
runs from $0.1$, $0.2$, $\cdots$, to $0.9$.
Thirdly, all fields' geometry is set to $7\times 7$.
Lastly, for each macroscopic density, the pedestrian flow
is assumed to be uni-directional, bi-directional,
4-directional, and 8-directional.
This gives in total $10*9*4=360$ combinations.
Each combination lasts $1000$ simulation ticks and is repeated
$10$ times to derive an average running time.
In particular, to keep the number of pedestrians constant,
the periodic boundary condition is used,
and pedestrians' geometry and walk period are
set to $1\times 1$ and $1$, respectively.
Figures~\ref{1-directional}, \ref{2-directional},
\ref{4-directional} and \ref{8-directional} give
the comparison results.
\begin{figure}[ht]
\begin{minipage}[t]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth,scale=0.3]{former-1.eps}
\end{minipage}
\hfill
\begin{minipage}[t]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth,scale=0.3]{revised-1.eps}
\end{minipage}
\caption{\label{1-directional}%
\small{This exhibits the performance ratios of running times of
the CPU model to those of the GPU models for
the uni-directional case.
The left sub-plot shows the previous GPU model's and
the right sub-plot shows the current GPU model's.}}
\end{figure}
\begin{figure}[ht]
\begin{minipage}[t]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth,scale=0.3]{former-2.eps}
\end{minipage}
\hfill
\begin{minipage}[t]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth,scale=0.3]{revised-2.eps}
\end{minipage}
\caption{\label{2-directional}%
\small{Performance ratios (bi-directional)}}
\end{figure}
\begin{figure}[ht]
\begin{minipage}[t]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth,scale=0.3]{former-4.eps}
\end{minipage}
\hfill
\begin{minipage}[t]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth,scale=0.3]{revised-4.eps}
\end{minipage}
\caption{\label{4-directional}%
\small{Performance ratios (4-directional)}}
\end{figure}
\begin{figure}[ht]
\begin{minipage}[t]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth,scale=0.3]{former-8.eps}
\end{minipage}
\hfill
\begin{minipage}[t]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth,scale=0.3]{revised-8.eps}
\end{minipage}
\caption{\label{8-directional}%
\small{Performance ratios (8-directional)}}
\end{figure}
As exhibited, the current GPU model's numerical efficiency
is even better.
Compared with the CPU model, the previous GPU model's
performance improvement was between 15x and 25x most of the time,
with a highest value of 30.8x.
The current GPU model's performance improvement over the CPU model
is now between 45x and 60x most of the time,
and the highest value, 71.56x, occurs in the experimented scenario
(geometry $700\times 700$; density 0.5; 8-directional).
This can also be clearly seen in figure~\ref{former-revised.eps},
which plots all of the $360$ performance ratios of the running times
of the previous GPU model to those of the current GPU model.
Furthermore, it shows that the average improvement ratio is 4.44x.
\fig{width=0.6\textwidth}{former-revised.eps}
{This exhibits all $360$ performance ratios of
running times of the previous GPU model to
those of the current GPU model.
The maximal performance ratio is $13.3$
(geometry $100\times 100$; density 0.7; uni-directional).
The minimal performance ratio is $1.25$
(geometry $1000\times 1000$; density 0.9; 4-directional).
And the average value is $4.44$.}
\tab{t: multi-step sum}{Running Times}{
\begin{tabular}{c|c}
geometry & running time (seconds)\\\hline
$7\times 7$ & $62$\\
$21\times 21$ & $317$\\
$35 \times 35$ & $738$\\
$49 \times 49$ & $1424$\\
$63 \times 63$ & $2187$\\
$77 \times 77$ & $3238$\\
\end{tabular}}
\fig{width=0.6\textwidth}{linearity.eps}
{This exhibits the strict linearity between
the running time and strength fan-out.}
To see the efficiency of the algorithm introduced in
section~\ref{s: solution illustration},
the scenario
(geometry $1000\times 1000$; density 0.5; 8-directional)
is chosen as the baseline.
Simulations are then run with fields'
geometry ranging from $7\times 7$, $21\times 21$, ...,
to $77\times 77$.
The examined geometries match the ratios shown in
table~\ref{t: strength fan-out 2}.
In particular, since pedestrians' geometry is $1\times 1$
and a typical pedestrian's physical size is $0.3m \times 0.3m$,
this is equivalent to considering the impact of
neighboring pedestrians, openings, and obstacles
within $2.1m$, $6.3m$, ..., up to $23.1m$
in the pedestrian model.
The results are given in table~\ref{t: multi-step sum}.
In addition, according to algorithm~\ref{a: opencl based gpu model},
a strict linearity should exist between the running time
and the strength fan-out, which is exhibited in
figure~\ref{linearity.eps}.
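The claimed linearity can also be quantified by pairing the fan-outs of table~\ref{t: strength fan-out 2} with the running times of table~\ref{t: multi-step sum} and computing the squared Pearson correlation, which exceeds $0.99$ for these data:

```c
#include <assert.h>

/* Squared Pearson correlation between two series; used here to check
   the linearity between running time and strength fan-out. */
double pearson_r2(const double *x, const double *y, int n) {
    double mx = 0.0, my = 0.0;
    for (int i = 0; i < n; ++i) { mx += x[i]; my += y[i]; }
    mx /= n; my /= n;
    double sxy = 0.0, sxx = 0.0, syy = 0.0;
    for (int i = 0; i < n; ++i) {
        sxy += (x[i] - mx) * (y[i] - my);
        sxx += (x[i] - mx) * (x[i] - mx);
        syy += (y[i] - my) * (y[i] - my);
    }
    return (sxy * sxy) / (sxx * syy);
}
```

Feeding in the fan-outs $(6, 61, 164, 328, 535, 808)$ and the measured times $(62, 317, 738, 1424, 2187, 3238)$ seconds confirms the near-perfect linear relationship.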
\tab{t: walk period}
{This exhibits running times in seconds for different walk periods.
Again the scenario (geometry $1000\times 1000$; density $0.5$;
8-directional) is chosen as the baseline.
Simulations are accomplished by keeping the minimal period
at $1$ and letting the maximal period be $1$, $3$, ...,
and so on.
For convenience, it also indicates the performance ratios
of the current GPU model with respect to
the previous GPU model and the CPU model.}{
\begin{tabular}{c|cccccc}
Maximal Period & 1 & 3 & 5 & 7 & 9 & 11\\\hline
GPU model (current) & $62$ & $64$ & $64$ & $63$ & $62$ & $62$\\\hline
GPU model (previous) & $134$ & $203$ & $207$ & $207$ & $208$ & $208$\\
\textit{Ratio} & \textit{2.2} & \textit{3.3} & \textit{3.3} & \textit{3.3} & \textit{3.4} & \textit{3.4}\\\hline
CPU model & $4114$ & $2478$ & $2581$ & $2436$ & $2452$ & $2456$\\
\textit{Ratio} & \textit{66.4} & \textit{40.0} & \textit{41.6} & \textit{39.3} & \textit{39.5} & \textit{39.6}\\
\end{tabular}}
The concept of walk period was introduced for the study of
varied walking velocities.
The method's most valuable advantage is that
no additional complexity is added to
the underlying model logic.
In the work of \cite{yu.et.al-2}, the impact of the mechanism upon
numerical efficiency was examined experimentally.
With the improvements introduced, the experiments are
rerun, with the results given in
table~\ref{t: walk period}.
Besides the fact that the current GPU model wins again,
it is noticed that using a large maximal walk period
seems to have a different impact on the three models.
For example, for the current GPU model,
the impact is negligible, but it causes the previous GPU model
to take a long time to finish the simulation.
As aforementioned, in the discrete model,
the concept of walk period has an implicit relationship
with the space fineness, so that pedestrians are allowed
to occupy more than one \su. A by-product is that
the study of jostling is now feasible through
dynamically adjusting pedestrians' geometry.
Therefore an experiment considering both walk period
and dynamic geometry is conducted.
As before, the scenario (geometry $1000\times 1000$; density 0.5;
8-directional) is used as the baseline.
The walk periods run from 1, 3, ..., to 11
and pedestrian geometries run from
$1\times 1$, $3\times 3$, ..., to $11\times 11$.
Noteworthily, as the whole space is kept at $1000\times 1000$,
changing pedestrians' geometry
will impact the number of pedestrians and fields' geometry.
The results are given in table~\ref{t: combo}.
Firstly, it is seen that, for each pedestrian geometry considered,
the difference in running times for different walk periods
is insignificant. This coincides with the conclusion drawn in
the previous experiment.
Secondly, it seems that fields' geometry has a more serious impact.
Increasing pedestrians' geometry
both decreases the population and increases fields' geometry
by almost the same ratio,
but the running time becomes larger compared with
the basic case where pedestrians' geometry is $1\times 1$.
To some extent, this coincides with the fact that
\textit{k-5} is the most time-consuming one among
the jobs submitted, according to figure~\ref{percent.eps}.
Additionally, among the six pedestrian geometries experimented,
the one for $3\times 3$ requires the longest running time.
\tab{t: combo}{This exhibits running times for different
combinations of walk period and pedestrian geometry.
Meanwhile, for each experimented pedestrian geometry,
the corresponding number of pedestrians and fields' geometry
are also given. For example, for pedestrian geometry $1\times 1$,
the number of pedestrians and fields' geometry are $500000$ and
$7\times 7$.}{
\begin{tabular}{c|cccccc}
& $500000$ & $55555$ & $20000$ & $10204$ & $6172$ & $4132$\\
& $7\times 7$ & $21\times 21$ & $35\times 35$ & $49\times 49$ & $63\times 63$ & $77\times 77$\\\cline{2-7}
& $1\times 1$ & $3\times 3$ & $5\times 5$ & $7\times 7$ & $9\times 9$ & $11\times 11$\\\hline
1 & $63.5$ & $264.6$ & $213.5$ & $207.7$ & $197.6$ & $201.5$\\
3 & $63.9$ & $261.7$ & $202.0$ & $197.1$ & $186.9$ & $192.5$\\
5 & $64.0$ & $264.1$ & $204.6$ & $195.5$ & $184.4$ & $191.2$\\
7 & $63.8$ & $262.9$ & $201.3$ & $193.9$ & $182.0$ & $191.6$\\
9 & $63.7$ & $262.2$ & $200.1$ & $194.5$ & $184.2$ & $190.5$\\
11 & $63.9$ & $261.8$ & $200.8$ & $193.8$ & $182.1$ & $189.6$\\
\end{tabular}}
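The population and field-geometry columns of table~\ref{t: combo} follow directly from the fixed space and density; a small C check confirms them (integer division is assumed for the head count):

```c
#include <assert.h>

/* Derive the per-geometry figures of the combo experiment:
   500000 occupied SUs (density 0.5 on a 1000x1000 space) shared
   among pedestrians of geometry g x g, with fields scaled by g. */
int population(int g) {
    const int occupied_sus = 500000;    /* 0.5 * 1000 * 1000 */
    return occupied_sus / (g * g);      /* integer division assumed */
}

int field_width(int g) {
    return 7 * g;                       /* base field geometry is 7 x 7 */
}
```

For instance, $3\times 3$ pedestrians give $500000/9 = 55555$ pedestrians and $21\times 21$ fields, exactly as listed.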
\section{Conclusion}
\label{s: conclusion}
In this paper, the previous OpenCL-based implementation of
the social field model is improved in two aspects.
In one aspect, the problem of memory depletion is solved
by the idea of divide-and-conquer.
The computational model is now ready to power the analysis of
super-large-scale crowds' complicated and finer walking behaviors.
In the other aspect, the OpenCL heterogeneous framework
is thoroughly studied and relevant computational techniques
are implemented, which brings the numerical efficiency
to an even higher level.
With regard to future work, the authors first
plan to develop useful transportation-related functional modules,
such as XML-based complicated scenario abstraction,
macroscopic/mesoscopic route selection algorithms, and so on.
Secondly, the authors will set out to study how to use
contemporary information technologies such as deep learning
to automatically collect valuable data from raw video images.
At the moment, the lack of relevant accurate data is a big problem
for the quantitative validation and calibration of pedestrian models.
The authors believe that these studies together will help scholars
to better understand the complex dynamics of evacuation processes.
\section{Introduction}
Humans can distinguish 30,000 basic object classes \cite{object_cat_1987}
and many more subordinate ones (e.g.~breeds of dogs). They can also
create new categories dynamically from few examples or solely based
on high-level description. In contrast, most existing computer vision
techniques require hundreds of labelled samples for each object class
in order to learn a recognition model. Inspired by humans' ability
to recognise without seeing samples, and motivated by the prohibitive
cost of training sample collection and annotation, the research area
of \emph{learning to learn} or \emph{lifelong learning} \cite{PACbound2014ICML,chen_iccv13}
has received increasing interest. These studies aim to intelligently
apply previously learned knowledge to help future recognition tasks.
In particular, a major and topical challenge in this area is to build
recognition models capable of recognising novel visual categories
without labelled training samples, i.e.~zero-shot learning (ZSL).
The key idea underpinning ZSL approaches is to exploit knowledge
transfer via an intermediate-level semantic representation. Common
semantic representations include binary vectors of visual attributes
\cite{lampert2009zeroshot_dat,liu2011action_attrib,yanweiPAMIlatentattrib}
(e.g. 'hasTail' in Fig.~\ref{fig:domain-shift:Low-level-feature-distribution})
and continuous word vectors \cite{wordvectorICLR,DeviseNIPS13,RichardNIPS13}
encoding linguistic context. In ZSL, two datasets with disjoint classes
are considered: a labelled auxiliary set where a semantic representation
is given for each data point, and a target dataset to be classified
without any labelled samples. The semantic representation is assumed
to be shared between the auxiliary/source and target/test
dataset. It can thus be re-used for knowledge transfer between the source and
target sets: a projection function mapping
low-level features to the semantic representation is learned from
the auxiliary data by a classifier or regressor.
This projection is then applied to map each unlabelled
target class instance into the same semantic space.
In this space, a `prototype' of each target class is specified, and each projected target instance is classified
by measuring similarity to the class prototypes.
Depending on the semantic space, the class prototype could be a binary attribute
vector listing class properties (e.g., 'hasTail') \cite{lampert2009zeroshot_dat}
or a word vector describing the linguistic context of the textual
class name \cite{DeviseNIPS13}.
Two inherent problems exist in this conventional zero-shot learning
approach. The first problem is the \textbf{projection domain shift
problem}. Since the two datasets have different and potentially unrelated
classes, the underlying data distributions of the classes differ,
so do the `ideal' projection functions between the low-level feature
space and the semantic spaces. Therefore, using the projection functions
learned from the auxiliary dataset/domain without any adaptation to
the target dataset/domain causes an unknown shift/bias. We call it
the \textit{projection domain shift} problem. This is illustrated
in Fig.~\ref{fig:domain-shift:Low-level-feature-distribution}, which
shows two object classes from the Animals with Attributes (AwA) dataset
\cite{lampert13AwAPAMI}: Zebra is one of the 40 auxiliary classes
while Pig is one of 10 target classes. Both of them share the same
`hasTail' semantic attribute, but the visual appearance of their tails
differs greatly (Fig.~\ref{fig:domain-shift:Low-level-feature-distribution}(a)).
Similarly, many other attributes of Pig are visually different from
the corresponding attributes in the auxiliary classes. Figure \ref{fig:domain-shift:Low-level-feature-distribution}(b)
illustrates the projection domain shift problem by plotting (in 2D
using t-SNE \cite{tsne}) an 85D attribute space representation of
the image feature projections and class prototypes (85D binary attribute
vectors). A large discrepancy can be seen between the Pig prototype
in the semantic attribute space and the projections of its class member
instances, but not for the auxiliary Zebra class. This discrepancy
is caused when the projections learned from the 40 auxiliary classes
are applied directly to project the Pig instances -- what `hasTail'
(as well as the other 84 attributes) visually means is different now.
Such a discrepancy will inherently degrade the effectiveness of zero-shot
recognition of the Pig class because the target class instances are
classified according to their similarities/distances to those prototypes.
To our knowledge, this problem has neither been identified nor addressed
in the zero-shot learning literature.
\begin{figure}[t]
\centering{}\includegraphics[scale=0.26]{idea_illustration}\caption{\label{fig:domain-shift:Low-level-feature-distribution}An illustration
of the projection domain shift problem. Zero-shot prototypes are shown
as red stars and the predicted semantic attribute projections
(defined in Sec.~3.2) are shown in blue.}
\end{figure}
The second problem is the \textbf{prototype sparsity problem}: for
each target class, we only have a single prototype which is insufficient
to fully represent what that class looks like. As shown in Figs.~\ref{fig:t-SNE-visualisation-of}(b)
and (c), there often exist large intra-class variations and inter-class
similarities. Consequently, even if the single prototype is centred
among its class instances in the semantic representation space, existing
zero-shot classifiers will still struggle to assign correct class
labels -- one prototype per class is not enough to represent the intra-class
variability or help disambiguate class overlap \cite{Eleanor1977}.
In addition to these two problems, conventional approaches to
zero-shot learning are also limited in \textbf{exploiting multiple
intermediate semantic representations}. Each representation (or semantic
`view') may contain complementary information -- useful for
distinguishing different classes in different ways. While both visual attributes \cite{lampert2009zeroshot_dat,farhadi2009attrib_describe,liu2011action_attrib,yanweiPAMIlatentattrib}
and linguistic semantic representations such as word vectors \cite{wordvectorICLR,DeviseNIPS13,RichardNIPS13}
have been independently exploited successfully, it remains unattempted
and non-trivial to synergistically exploit multiple semantic
views. This is because they are often of very different dimensions
and types and each suffers from different domain shift effects discussed
above.
In this paper, we propose to solve the projection domain shift problem
using transductive multi-view embedding. The
transductive setting means using the unlabelled test data to improve
generalisation accuracy. In our framework, each unlabelled
target class instance is represented by multiple views: its low-level
feature view and its (biased) projections in multiple semantic spaces
(visual attribute space and word space in this work). To rectify the
projection domain shift between auxiliary and target datasets, we
introduce a multi-view semantic space alignment process to correlate
different semantic views and the low-level feature view by projecting
them onto a common latent embedding space learned using multi-view Canonical
Correlation Analysis (CCA) \cite{multiviewCCAIJCV}. The intuition is that when the biased target data projections (semantic representations) are correlated/aligned with their (unbiased) low-level feature representations, the bias/projection domain shift is alleviated. The effects of this process
on projection domain shift are illustrated by Fig.~\ref{fig:domain-shift:Low-level-feature-distribution}(c),
where after alignment, the target Pig class prototype is much closer
to its member points in this embedding space. Furthermore, after exploiting
the complementarity of different low-level feature and semantic views
synergistically in the common embedding space, different target classes
become more compact and more separable (see Fig.~\ref{fig:t-SNE-visualisation-of}(d)
for an example), making the subsequent zero-shot recognition a much
easier task.
Even with the proposed transductive multi-view embedding framework,
the prototype sparsity problem remains: instead of one prototype
per class, only a handful are now available (one per embedded view),
and these are still far too sparse to represent each class. Our solution
is to pose this as a semi-supervised learning \cite{zhu2007sslsurvey}
problem: prototypes in each view are treated as labelled `instances',
and we exploit the manifold structure of the unlabelled data distribution
in each view in the embedding space via label propagation on a graph.
To this end, we introduce a novel transductive multi-view hypergraph
label propagation (TMV-HLP) algorithm for recognition. The
core of our TMV-HLP algorithm is a new \emph{distributed
representation} of graph structure, termed a heterogeneous
hypergraph, which allows us to exploit the complementarity of different
semantic and low-level feature views, as well as the manifold structure
of the target data to compensate for the impoverished supervision
available from the sparse prototypes. Zero-shot learning is then
performed by semi-supervised label propagation from the prototypes
to the target data points within and across the graphs. The whole
framework is illustrated in Fig.~\ref{fig:The-pipeline-of}.
By combining our transductive embedding framework and the TMV-HLP
zero-shot recognition algorithm, our approach generalises seamlessly
when none (zero-shot), or few (N-shot) samples of the target classes
are available. Uniquely it can also synergistically exploit zero +
N-shot (i.e., both prototypes and labelled samples) learning. Furthermore,
the proposed method enables a number of novel cross-view annotation
tasks including \textit{zero-shot class description} and \textit{zero
prototype learning}.
\noindent \textbf{Our contributions}\quad{}Our contributions are
as follows: (1) To our knowledge, this is the first attempt to investigate
and provide a solution to the projection domain shift problem in zero-shot
learning. (2) We propose a transductive multi-view embedding space
that not only rectifies the projection shift, but also exploits the
complementarity of multiple semantic representations of visual data.
(3) A novel transductive multi-view heterogeneous hypergraph label
propagation algorithm is developed to improve both zero-shot and N-shot
learning tasks in the embedding space and overcome the prototype sparsity
problem. (4) The learned embedding space enables a number of novel
cross-view annotation tasks. Extensive experiments are carried
out and the results show that our approach significantly outperforms
existing methods for both zero-shot and N-shot recognition on three
image and video benchmark datasets.
\section{Related Work}
\noindent \textbf{Semantic spaces for zero-shot learning}\quad{}To
address zero-shot learning, attribute-based semantic representations have
been explored for images \cite{lampert2009zeroshot_dat,farhadi2009attrib_describe}
and to a lesser extent videos \cite{liu2011action_attrib,yanweiPAMIlatentattrib}.
Most existing studies \cite{lampert2009zeroshot_dat,hwang2011obj_attrib,palatucci2009zero_shot,parikh2011relativeattrib,multiattrbspace,Yucatergorylevel,labelembeddingcvpr13}
assume that an exhaustive ontology of attributes has been manually
specified at either the class or instance level. However, annotating
attributes scales poorly as ontologies tend to be domain specific.
This is despite efforts exploring augmented data-driven/latent attributes
at the expense of name-ability \cite{farhadi2009attrib_describe,liu2011action_attrib,yanweiPAMIlatentattrib}.
To address this, semantic representations using existing ontologies
and incidental data have been proposed \cite{marcuswhathelps,RohrbachCVPR12}.
Recently, \emph{word vector} approaches based on distributed language
representations have gained popularity. In this case a word space
is extracted from linguistic knowledge bases e.g.,~Wikipedia by natural
language processing models such as \cite{NgramNLP,wordvectorICLR}.
The language model is then used to project each class'
textual name into this space. These projections can be used as prototypes for zero-shot learning
\cite{DeviseNIPS13,RichardNIPS13}. Importantly, regardless of the
semantic spaces used, existing methods focus on either designing better
semantic spaces or how to best learn the projections. The former is
orthogonal to our work -- any semantic spaces can be used in our framework
and better ones would benefit our model. For the latter, no existing
work has identified or addressed the projection domain shift problem.
\noindent \textbf{Transductive zero-shot learning}
was considered by Fu et al.~\cite{fu2012attribsocial,yanweiPAMIlatentattrib}
who introduced a generative model for user-defined and latent
attributes. A simple transductive zero-shot learning algorithm is
proposed: averaging the prototype's k-nearest neighbours to exploit
the test data attribute distribution. Rohrbach
et al.~\cite{transferlearningNIPS}
proposed a more elaborate transductive strategy, using graph-based
label propagation to exploit the manifold structure of the test data.
These studies effectively transform the ZSL task into a transductive
semi-supervised learning task \cite{zhu2007sslsurvey} with prototypes
providing the few labelled instances. Nevertheless,
these studies and this paper (as with most previous work \cite{lampert13AwAPAMI,lampert2009zeroshot_dat,RohrbachCVPR12})
only consider recognition among the novel classes: unifying zero-shot
with supervised learning remains an open challenge \cite{RichardNIPS13}.
\noindent \textbf{Domain adaptation}\quad{}Domain adaptation methods
attempt to address the domain shift problems that occur when the assumption
that the source and target instances are drawn from the same distribution
is violated. Methods have been derived for both classification \cite{fernando2013unsupDAsubspace,duan2009transfer}
and regression \cite{storkey2007covariateShift}, and both with \cite{duan2009transfer}
and without \cite{fernando2013unsupDAsubspace} requiring label information
in the target task. Our zero-shot learning setting means that most
supervised domain adaptation methods are inapplicable. Our projection
domain shift problem differs from the conventional domain shift problems
in that (i) it is indirectly observed in terms of the projection
shift rather than the feature distribution shift, and (ii) the source
domain classes and target domain classes are completely different
and could even be unrelated. Consequently our domain adaptation method
differs significantly from the existing unsupervised ones such as
\cite{fernando2013unsupDAsubspace} in that our method relies on correlating different representations of the unlabelled
target data in a multi-view embedding space.
\noindent \textbf{Learning multi-view embedding spaces}\quad{}Relating
low-level feature and semantic views of data has been exploited in
visual recognition and cross-modal retrieval. Most existing work \cite{SocherFeiFeiCVPR2010,multiviewCCAIJCV,HwangIJCV,topicimgannot}
focuses on modelling images/videos with associated text (e.g. tags
on Flickr/YouTube). Multi-view CCA is often exploited to provide unsupervised
fusion of different modalities. However, there are two fundamental
differences between previous multi-view embedding work and ours: (1)
Our embedding space is transductive, that is, learned from unlabelled
target data from which all semantic views are estimated by projection
rather than being the original views. These projected views thus have
the projection domain shift problem that the previous work does not
have. (2) The objectives are different: we aim to rectify the projection
domain shift problem via the embedding in order to perform better
recognition and annotation while previous studies target primarily
cross-modal retrieval. Note that although in this work, the popular CCA model is adopted for multi-view embedding, other models \cite{Rosipal2006,DBLP:conf/iccv/WangHWWT13}
could also be considered.
\noindent \textbf{Graph-based label propagation}\quad{}In most previous
zero-shot learning studies (e.g., direct attribute prediction (DAP)
\cite{lampert13AwAPAMI}), the available knowledge (a single
prototype per target class) is very limited. There has therefore been
recent interest in additionally exploiting the unlabelled target data
distribution by transductive learning \cite{transferlearningNIPS,yanweiPAMIlatentattrib}.
However, both \cite{transferlearningNIPS} and \cite{yanweiPAMIlatentattrib}
suffer from the projection domain shift problem, and are unable to
effectively exploit multiple semantic representations/views. In contrast, after
embedding, our framework synergistically
integrates the low-level feature and semantic representations by transductive
multi-view hypergraph label propagation (TMV-HLP). Moreover, TMV-HLP
generalises beyond zero-shot to N-shot learning if labelled instances
are available for the target classes.
In a broader context, graph-based label propagation \cite{zhou2004graphLabelProp}
in general, and classification on multi-view graphs (C-MG) in particular
are well-studied in semi-supervised learning. Most
C-MG solutions are based on the seminal work of Zhou \emph{et al}
\cite{Zhou2007ICML} which generalises spectral clustering from a
single graph to multiple graphs by defining a mixture of random walks
on multiple graphs. In the embedding space, instead
of constructing local neighbourhood graphs for each view independently
(e.g.~TMV-BLP \cite{embedding2014ECCV}), this paper proposes a \emph{distributed
representation} of pairwise similarity using heterogeneous
hypergraphs. Such a distributed heterogeneous hypergraph representation
can better explore the higher-order relations between any two nodes
of different complementary views, and thus give rise to a more robust pairwise similarity
graph and lead to better classification performance than previous
multi-view graph methods \cite{Zhou2007ICML,embedding2014ECCV}.
Hypergraphs have been used as an effective tool to align multiple
data/feature modalities in data mining \cite{Li2013a}, multimedia
\cite{fu2010summarize} and computer vision \cite{DBLP:journals/corr/LiLSDH13,Hong:2013:MHL:2503901.2503960}
applications. A hypergraph is the generalisation of a 2-graph with
edges connecting many nodes/vertices, versus connecting two nodes
in conventional 2-graphs. This makes it cope better with noisy nodes
and thus achieve better performance than conventional graphs \cite{videoObjHypergraph,ImgRetrHypergraph,fu2010summarize}.
The only existing work considering hypergraphs for multi-view data
modelling is \cite{Hong:2013:MHL:2503901.2503960}. Different from
the multi-view hypergraphs proposed in \cite{Hong:2013:MHL:2503901.2503960}
which are homogeneous, that is, constructed in each view independently,
we construct a multi-view heterogeneous hypergraph: using the nodes
from one view as query nodes to compute hyperedges in another view.
This novel graph structure better exploits the complementarity of
different views in the common embedding space.
\section{Learning a Transductive Multi-View Embedding Space}
A schematic overview of our framework is given in
Fig.~\ref{fig:The-pipeline-of}. We next introduce some notation
and assumptions, followed by the details of how to map image features
into each semantic space, and how to map multiple spaces into a common
embedding space.
\subsection{Problem setup \label{sub:Problem-setup}}
We have $c_{S}$ source/auxiliary classes with $n_{S}$ instances
$S=\{X_{S},Y_{S}^{i},\mathbf{z}_{S}\}$ and $c_{T}$ target classes
$T=\left\{ X_{T},Y_{T}^{i},\mathbf{z}_{T}\right\} $ with $n_{T}$
instances. $X_{S}\in \Bbb{R}^{n_{S}\times t}$ and $X_{T}\in \Bbb{R}^{n_{T}\times t}$ denote the $t$-dimensional low-level feature vectors of the auxiliary and target instances respectively.
$\mathbf{z}_{S}$ and $\mathbf{z}_{T}$ are the auxiliary and target
class label vectors. We assume the auxiliary and target classes are
disjoint: $\mathbf{z}_{S}\cap\mathbf{z}_{T}=\varnothing$. We have
$I$ different types of semantic representations; $Y_{S}^{i}$
and $Y_{T}^{i}$ represent the $i$-th type of $m_{i}$-dimensional
semantic representation for the auxiliary and target datasets respectively;
so $Y_{S}^{i}\in \Bbb{R}^{n_{S}\times m_{i}}$ and $Y_{T}^{i}\in \Bbb{R}^{n_{T}\times m_{i}}$.
Note that for the auxiliary dataset, $Y_{S}^{i}$ is given as each
data point is labelled. But for the target dataset, $Y_{T}^{i}$ is
missing, and its prediction $\hat{Y}_{T}^{i}$ from $X_{T}$ is used
instead. As we shall see, this is obtained using a projection
function learned from the auxiliary dataset. The problem of zero-shot
learning is to estimate $\mathbf{z}_{T}$ given $X_{T}$ and $\hat{Y}_{T}^{i}$.
Without any labelled data for the target classes, external knowledge
is needed to represent what each target class looks like, in the form
of class prototypes. Specifically, each target class $c$ has a pre-defined
class-level semantic prototype $\mathbf{y}_{c}^{i}$ in each semantic
view $i$. In this paper, we consider two types of intermediate semantic
representation (i.e.~$I=2$) -- attributes and word vectors, which
represent two distinct and complementary sources of information. We
use $\mathcal{X}$, $\mathcal{A}$ and $\mathcal{V}$ to denote the
low-level feature, attribute and word vector spaces respectively.
The attribute space $\mathcal{A}$ is typically manually defined using
a standard ontology. For the word vector space $\mathcal{V}$, we
employ the state-of-the-art skip-gram neural network model \cite{wordvectorICLR}
trained on all English Wikipedia articles%
\footnote{As of 13 Feb. 2014, this corpus included 2.9 billion words with a 4.33
million-word vocabulary (single words and bi/tri-grams).%
}. Using this learned model, we can project the textual name of any
class into the $\mathcal{V}$ space to get its word vector representation.
Unlike semantic attributes, it is a `free' semantic representation
in that this process does not need any human annotation. We next address
how to project low-level features into these two spaces.
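To make the word-vector prototype construction concrete, the sketch below uses a tiny hypothetical lookup table in place of the trained skip-gram model, and averages per-word vectors for a multi-word class name (a simple convention assumed here for illustration; the actual model also covers bi/tri-grams directly):

```python
import numpy as np

# Hypothetical stand-in for the skip-gram model: a tiny lookup table of word
# vectors. The real model maps any English word to a high-dimensional vector
# learned from Wikipedia text.
word_vectors = {
    "giant": np.array([0.2, 0.7]),
    "panda": np.array([0.9, 0.1]),
}

def class_prototype(class_name):
    """Word-space prototype of a class: average the vectors of its words
    (multi-word names such as 'giant panda' are averaged word by word)."""
    vecs = [word_vectors[w] for w in class_name.lower().split()]
    return np.mean(vecs, axis=0)

proto = class_prototype("giant panda")
```

No human annotation is needed here: the prototype follows directly from the class's textual name, which is what makes this semantic view `free'.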
\begin{figure*}
\begin{centering}
\includegraphics[scale=0.45]{framework_illustration3}
\par\end{centering}
\caption{\label{fig:The-pipeline-of}The pipeline of our framework illustrated on the task of classifying unlabelled target data into two classes.}
\end{figure*}
\subsection{Learning the projections of semantic spaces }
Mapping images and videos into semantic space $i$ requires a projection
function $f^{i}:\mathcal{X}\to\mathcal{Y}^{i}$. This is typically
realised by classifier \cite{lampert2009zeroshot_dat} or regressor
\cite{RichardNIPS13}. In this paper, using the auxiliary set $S$,
we train support vector classifiers $f^{\mathcal{A}}(\cdot)$ and
support vector regressors $f^{\mathcal{V}}(\cdot)$ for each dimension\footnote{Note that methods for learning projection functions for all dimensions
jointly exist (e.g.~\cite{DeviseNIPS13}) and can be adopted in our
framework.} of the auxiliary class attribute and word vectors respectively. Then the target class instances $X_{T}$ have the semantic projections:
$\hat{Y}_{T}^{\mathcal{A}}=f^{\mathcal{A}}(X_{T})$ and $\hat{Y}_{T}^{\mathcal{V}}=f^{\mathcal{V}}(X_{T})$.
However, these predicted intermediate semantics have the projection
domain shift problem illustrated in Fig.~\ref{fig:domain-shift:Low-level-feature-distribution}.
To address this, we learn a transductive multi-view semantic embedding
space to align the semantic projections with the low-level features
of target data.
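The per-dimension projection learning can be sketched as follows. For a self-contained example, closed-form ridge regressors stand in for the support vector classifiers/regressors used above, and the auxiliary/target data are synthetic:

```python
import numpy as np

# Sketch of learning per-dimension projection functions f^i on the auxiliary
# set. A closed-form ridge regressor (one per semantic dimension) substitutes
# for the support vector machines used in the paper; the learned mapping is
# then applied to the unlabelled target features X_T.
rng = np.random.default_rng(0)
X_S = rng.normal(size=(50, 8))            # auxiliary low-level features
W_true = rng.normal(size=(8, 3))
Y_S = X_S @ W_true                        # auxiliary 3-D semantic labels

def fit_ridge(X, Y, lam=1e-3):
    """Fit one ridge regressor per column of Y (per semantic dimension)."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

W_hat = fit_ridge(X_S, Y_S)
X_T = rng.normal(size=(20, 8))            # target low-level features
Y_T_hat = X_T @ W_hat                     # biased semantic projections of X_T
```

On real data the projections $\hat{Y}_{T}^{i}$ obtained this way carry the projection domain shift, since the regressors only ever saw auxiliary classes.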
\subsection{Transductive multi-view embedding }
We introduce a multi-view semantic alignment (i.e.
transductive multi-view embedding) process to correlate target instances
in different (biased) semantic view projections with their low-level
feature view. This process alleviates the projection domain shift
problem, as well as providing a common space in which heterogeneous
views can be directly compared, and their complementarity exploited
(Sec.~\ref{sec:Recognition-by-Multi-view}). To this end, we employ multi-view
Canonical Correlation Analysis (CCA) for $n_{V}$ views, with the
target data representation in view $i$ denoted ${\Phi}^{i}$, a $n_{T}\times m_{i}$
matrix. Specifically, we project three views of each
target class instance $f^{\mathcal{A}}(X_{T})$, $f^{\mathcal{V}}(X_{T})$
and $X_{T}$ (i.e.~$n_{V}=I+1=3$) into a shared embedding space.
The three projection functions $W^{i}$ are learned by
\begin{eqnarray}
\mathrm{\underset{\left\{ W^{i}\right\} _{i=1}^{n_{V}}}{min}} & \sum_{i,j=1}^{n_{V}} & Trace(\left[W^{i}\right]^{T}\Sigma_{ij}W^{j})\nonumber \\
= & \sum_{i,j=1}^{n_{V}} & \parallel{\Phi}^{i}W^{i}-{\Phi}^{j}W^{j}\parallel_{F}^{2}\nonumber \\
\mathrm{s.t.} & \left[W^{i}\right]^{T}\Sigma_{ii}W^{i}=I & \left[\mathbf{w}_{k}^{i}\right]^{T}\Sigma_{ij}\mathbf{w}_{l}^{j}=0\nonumber \\
i\neq j,k\neq l & i,j=1,\cdots,n_{V} & k,l=1,\cdots,n_{T}\label{eq:multi-viewCCA}
\end{eqnarray}
where $W^{i}$ is the projection matrix which maps the view ${\Phi}^{i}$
($\in \Bbb{R}^{n_{T}\times m_{i}}$) into the embedding space and $\mathbf{w}_{k}^{i}$
is the $k$th column of $W^{i}$\textcolor{black}{.} $\Sigma_{ij}$
is the covariance matrix between ${\Phi}^{i}$ and ${\Phi}^{j}$.
The optimisation problem above is multi-convex as long as the $\Sigma_{ii}$
are non-singular. A local optimum can be easily found by iteratively
optimising over each $W^{i}$ given the current values of the other
coefficients, as detailed in \cite{CCAoverview}.
The dimensionality $m_{e}$ of the embedding space
is the sum of the input view dimensions, i.e.~$m_{e}=\sum_{i=1}^{n_{V}}m_{i}$,
so $W^{i}\in \Bbb{R}^{m_{i}\times m_{e}}$. Compared to the classic approach
to CCA \cite{CCAoverview} which projects to a lower dimension space,
this retains all the input information including uncorrelated dimensions
which may be valuable and complementary. Side-stepping the task of
explicitly selecting a subspace dimension, we use a more stable and
effective soft-weighting strategy to implicitly emphasise significant
dimensions in the embedding space. This can be seen as a generalisation
of standard dimension reducing approaches to CCA, which implicitly
define a binary weight vector that activates a subset of dimensions
and deactivates others. Since the importance of each dimension is
reflected by its corresponding eigenvalue \cite{CCAoverview,multiviewCCAIJCV},
we use the eigenvalues to weight the dimensions and define a \emph{weighted
embedding space} $\Gamma$:
\begin{equation}
{\Psi}^{i}={\Phi}^{i}W^{i}\left[D^{i}\right]^{\lambda}={\Phi}^{i}W^{i}\tilde{D}^{i},\label{eq:ccamapping}
\end{equation}
where $D^{i}$ is a diagonal matrix with its diagonal elements set
to the eigenvalues of each dimension in the embedding space, $\lambda$
is a power weight of $D^{i}$ and empirically set to $4$ \cite{multiviewCCAIJCV},
and ${\Psi}^{i}$ is the final representation of the target data from
view $i$ in $\Gamma$. We index the $n_{V}=3$ views as $i\in\{\mathcal{X},\mathcal{V},\mathcal{A}\}$
for notational convenience. The same formulation can be used if more
views are available.
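For intuition, the two-view special case of this embedding can be sketched in a few lines (the model above is the $n_{V}$-view generalisation; here the two-view problem is solved by whitening followed by SVD, with the canonical correlations playing the role of the eigenvalues in the soft dimension weighting; all data are synthetic):

```python
import numpy as np

def cca_two_view(Phi_i, Phi_j, reg=1e-6, lam=4):
    """Minimal two-view CCA with soft dimension weighting. Returns the
    weighted embeddings Psi^i, Psi^j, weighting each dimension by its
    canonical correlation raised to the power lam."""
    Phi_i = Phi_i - Phi_i.mean(0); Phi_j = Phi_j - Phi_j.mean(0)
    n = Phi_i.shape[0]
    Sii = Phi_i.T @ Phi_i / n + reg * np.eye(Phi_i.shape[1])
    Sjj = Phi_j.T @ Phi_j / n + reg * np.eye(Phi_j.shape[1])
    Sij = Phi_i.T @ Phi_j / n
    def inv_sqrt(S):
        e, V = np.linalg.eigh(S)
        return V @ np.diag(e ** -0.5) @ V.T
    Ki, Kj = inv_sqrt(Sii), inv_sqrt(Sjj)
    U, corr, Vt = np.linalg.svd(Ki @ Sij @ Kj)     # canonical correlations
    Wi, Wj = Ki @ U, Kj @ Vt.T
    D = np.diag(corr ** lam)                       # soft dimension weighting
    return Phi_i @ Wi[:, :len(corr)] @ D, Phi_j @ Wj[:, :len(corr)] @ D

rng = np.random.default_rng(1)
Z = rng.normal(size=(100, 2))                      # shared latent signal
A = Z @ rng.normal(size=(2, 5)) + 0.01 * rng.normal(size=(100, 5))
B = Z @ rng.normal(size=(2, 4)) + 0.01 * rng.normal(size=(100, 4))
Psi_a, Psi_b = cca_two_view(A, B)
```

The two views of the same instance end up highly correlated in the embedding, which is exactly the alignment effect exploited to reduce the projection domain shift.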
\noindent \textbf{Similarity in the embedding space}\quad{}The choice
of similarity metric is important for high-dimensional embedding spaces. For the subsequent recognition and annotation
tasks, we compute cosine distance in $\Gamma$ by $l_{2}$ normalisation:
normalising any vector $\bm{\psi}_{k}^{i}$ (the $k$-th row of ${\Psi}^{i}$) to unit length (i.e.~$\parallel\bm{\psi}_{k}^{i}\parallel_{2}=1$).
Cosine similarity is given by the inner product of any two vectors
in $\Gamma$.
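This normalisation step can be written directly:

```python
import numpy as np

# l2-normalise rows so that the inner product of any two embedded vectors
# equals their cosine similarity (the metric used throughout the space).
Psi = np.array([[3.0, 4.0], [1.0, 0.0]])
Psi_unit = Psi / np.linalg.norm(Psi, axis=1, keepdims=True)
cos_sim = Psi_unit @ Psi_unit.T   # entry (k, l) = cosine similarity of rows k, l
```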
\section{Recognition by Multi-view Hypergraph Label Propagation \label{sec:Recognition-by-Multi-view}}
For zero-shot recognition, each target class $c$ to be recognised
has a semantic prototype $\mathbf{y}_{c}^{i}$ in each view $i$.
Similarly, we have three views of each unlabelled instance $f^{\mathcal{A}}(X_{T})$,
$f^{\mathcal{V}}(X_{T})$ and $X_{T}$. The class prototypes are expected
to be the mean of the distribution of their class in semantic space,
since the projection function $f^{i}$ is trained to map instances
to their class prototype in each semantic view. To exploit the learned
space $\Gamma$ to improve recognition, we project both the unlabelled
instances and the prototypes into the embedding space%
\footnote{Before being projected into $\Gamma$, the prototypes
are updated by the semi-latent zero-shot learning algorithm in~\cite{yanweiPAMIlatentattrib}.}%
. The prototypes $\mathbf{y}_{c}^{i}$ for views $i\in\{\mathcal{A},\mathcal{V}\}$
are projected as $\bm{\psi}_{c}^{i}=\mathbf{y}_{c}^{i}W^{i}\tilde{D}^{i}$.
So we have $\bm{\psi}_{c}^{\mathcal{A}}$ and $\bm{\psi}_{c}^{\mathcal{\mathcal{V}}}$
for the attribute and word vector prototypes of each target class
$c$ in $\Gamma$. In the absence of a prototype for the (non-semantic)
low-level feature view $\mathcal{X}$, we synthesise it as $\bm{\psi}_{c}^{\mathcal{X}}=(\bm{\psi}_{c}^{\mathcal{A}}+\bm{\psi}_{c}^{\mathcal{\mathcal{V}}})/2$.
If labelled data is available (i.e., N-shot case), these are also projected
into the space. Recognition could now be achieved using NN classification
with the embedded prototypes/N-shots as labelled data. However, this
does not effectively exploit the multi-view complementarity, and suffers
from labelled data (prototype) sparsity. To solve this problem, we next introduce a unified
framework to fuse the views and transductively exploit the manifold
structure of the unlabelled target data to perform zero-shot and N-shot
learning.
Most or all of the target instances are unlabelled, so classification
based on the sparse prototypes is effectively a semi-supervised learning
problem \cite{zhu2007sslsurvey}. We leverage graph-based semi-supervised
learning to exploit the manifold structure of the unlabelled data
transductively for classification. This differs from the conventional
approaches such as direct attribute prediction (DAP) \cite{lampert13AwAPAMI}
or NN, which too simplistically assume that the data distribution
for each target class is Gaussian or multinomial. However, since
our embedding space contains multiple projections of the target data
and prototypes, it is hard to define a single graph that synergistically
exploits the manifold structure of all views. We therefore construct
multiple graphs within and across views in a transductive multi-view
hypergraph label propagation (TMV-HLP) model. Specifically, we construct the
heterogeneous hypergraphs across views to combine/align the different
manifold structures so as to enhance the robustness and exploit the
complementarity of different views. Semi-supervised learning is then
performed by propagating the labels from the sparse prototypes
(zero-shot) and/or the few labelled target instances (N-shot) to the
unlabelled data using random walk on the graphs.
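For reference, single-graph label propagation in the closed form of \cite{zhou2004graphLabelProp} can be sketched as follows; TMV-HLP extends this idea to multiple heterogeneous hypergraphs, and the tiny chain graph here is purely hypothetical:

```python
import numpy as np

# Single-graph semi-supervised label propagation (Zhou et al. style).
# Prototypes act as the labelled nodes; labels diffuse to unlabelled nodes.
def propagate(W, Y, alpha=0.5):
    """W: symmetric affinity matrix; Y: one-hot labels (zero rows = unlabelled).
    Returns soft scores F = (I - alpha*S)^{-1} Y with S = D^{-1/2} W D^{-1/2}."""
    d = W.sum(1)
    S = W / np.sqrt(np.outer(d, d))
    n = W.shape[0]
    return np.linalg.solve(np.eye(n) - alpha * S, Y)

# Two 'prototype' nodes (0 and 3) labelled with classes 0 and 1; nodes 1 and 2
# are unlabelled and sit on a chain 0-1-2-3.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], float)
Y = np.zeros((4, 2)); Y[0, 0] = 1; Y[3, 1] = 1
F = propagate(W, Y)
pred = F.argmax(1)   # each unlabelled node follows its nearer prototype
```

Each unlabelled node inherits the label of the prototype closest to it along the graph, which is how the manifold structure of the target data compensates for prototype sparsity.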
\begin{figure*}[t]
\centering{}\includegraphics[scale=0.4]{illustrate_hypergraph}
\caption{\label{fig:Outliers-illustrations}An example of constructing heterogeneous
hypergraphs. Suppose in the embedding space, we have 14 nodes belonging
to 7 data points $A$, $B$, $C$, $D$, $E$, $F$ and $G$ of two
views -- view $i$ (rectangle) and view $j$ (circle). Data points
$A$,$B$,$C$ and $D$,$E$,$F$,$G$ belong to two different classes
-- red and green respectively. The multi-view semantic embedding maximises
the correlations (connected by black dash lines) between the two views
of the same node. Two hypergraphs are shown ($\mathcal{G}^{ij}$ at
the left and $\mathcal{G}^{ji}$ at the right) with the heterogeneous
hyperedges drawn with red/green dash ovals for the nodes of red/green
classes. Each hyperedge consists of two most similar nodes to the
query node. }
\end{figure*}
\subsection{Constructing heterogeneous hypergraphs}
\label{sub:Heterogenous-sub-hypergraph} \textbf{Pairwise node similarity}\quad{}The
key idea behind a hypergraph based method is to group similar data
points, represented as vertices/nodes on a graph, into hyperedges, so that the subsequent computation is less sensitive to individual noisy nodes.
With the hyperedges, the pairwise similarity between two data points
is measured as the similarity between the two hyperedges that they
belong to, instead of that between the two nodes only. For both forming
hyperedges and computing the similarity between two hyperedges, pairwise
similarity between two graph nodes needs to be defined. In our embedding
space $\Gamma$, each data point in each view defines a node, and
the similarity between any pair of nodes is:
\begin{equation}
\omega(\bm{\psi}_{k}^{i},\bm{\psi}_{l}^{j})=\exp(\frac{<\bm{\psi}_{k}^{i},\bm{\psi}_{l}^{j}>^{2}}{\varpi})\label{eq:sim_graph}
\end{equation}
where $<\bm{\psi}_{k}^{i},\bm{\psi}_{l}^{j}>^{2}$ is the square of
inner product between the $i$th and $j$th projections of nodes $k$
and $l$ with a bandwidth parameter $\varpi$%
\footnote{Most previous work \cite{transferlearningNIPS,Zhou2007ICML} sets
$\varpi$ by cross-validation. Inspired by \cite{lampertTutorial},
a simpler strategy for setting $\varpi$ is adopted: $\varpi\thickapprox\underset{k,l=1,\cdots,n}{\mathrm{median}}<\bm{\psi}_{k}^{i},\bm{\psi}_{l}^{j}>^{2}$
in order to have roughly the same number of similar and dissimilar
sample pairs. This makes the edge weights from different pairs of
nodes more comparable.%
}. Note that Eq (\ref{eq:sim_graph}) defines the pairwise similarity
between any two nodes within the same view ($i=j$) or across different
views ($i\neq j$).
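The pairwise similarity $\omega$ defined above, together with the median-bandwidth heuristic from the footnote, can be sketched as follows (random unit vectors stand in for the embedded views):

```python
import numpy as np

# Pairwise node similarity: exponentiated squared inner products of the
# (l2-normalised) embedded vectors, with the bandwidth set to the median
# squared inner product over all pairs.
rng = np.random.default_rng(2)
Psi_i = rng.normal(size=(30, 6))          # nodes in view i
Psi_j = rng.normal(size=(30, 6))          # nodes in view j
Psi_i /= np.linalg.norm(Psi_i, axis=1, keepdims=True)
Psi_j /= np.linalg.norm(Psi_j, axis=1, keepdims=True)

ip2 = (Psi_i @ Psi_j.T) ** 2              # <psi_k^i, psi_l^j>^2 for all pairs
varpi = np.median(ip2)                    # median heuristic for the bandwidth
Omega = np.exp(ip2 / varpi)               # similarity matrix omega(k, l)
```

With the median bandwidth, roughly half the pairs fall on each side of the `similar'/`dissimilar' divide, making edge weights comparable across view pairs.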
\noindent \textbf{Heterogeneous hyperedges}\quad{}Given the multi-view
projections of the target data, we aim to construct a set of across-view
heterogeneous hypergraphs
\begin{equation}
\mathcal{G}^{c}=\left\{ \mathcal{G}^{ij}\mid i,j\in\left\{ \mathcal{X},\mathcal{V},\mathcal{A}\right\} ,i\neq j\right\} \label{eq:hypergraph_def}
\end{equation}
where $\mathcal{G}^{ij}=\left\{ \Psi^{i},E^{ij},\Omega^{ij}\right\} $
denotes the cross-view heterogeneous hypergraph from view $i$ to
$j$ (in that order) and $\Psi^{i}$ is the node
set in view $i$; $E^{ij}$ is the hyperedge set and $\Omega^{ij}$
is the pairwise node similarity set for the hyperedges. Specifically,
we have the hyperedge set $E^{ij}=\left\{ e_{\bm{\psi}_{k}^{i}}^{ij}\mid i\neq j,\: k=1,\cdots,n_{T}+c_{T}\right\} $
where each hyperedge $e_{\bm{\psi}_{k}^{i}}^{ij}$ includes the nodes%
\footnote{Both the unlabelled samples and the prototypes are
nodes.} in view $j$ that are the most similar to node $\bm{\psi}_{k}^{i}$
in view $i$ and the similarity set $\Omega^{ij}=\left\{ \Delta_{\bm{\psi}_{k}^{i}}^{ij}=\left\{ \omega\left(\bm{\psi}_{k}^{i},\bm{\psi}_{l}^{j}\right)\right\} \mid i\neq j,\:\bm{\psi}_{l}^{j}\in e_{\bm{\psi}_{k}^{i}}^{ij},\: k=1,\cdots,n_{T}+c_{T}\right\} $
where $\omega\left(\bm{\psi}_{k}^{i},\bm{\psi}_{l}^{j}\right)$ is
computed using Eq (\ref{eq:sim_graph}).
\noindent We call $\bm{\psi}_{k}^{i}$ the query
node for hyperedge $e_{\bm{\psi}_{k}^{i}}^{ij}$, since the hyperedge
$e_{\bm{\psi}_{k}^{i}}^{ij}$ intrinsically groups all nodes in view
$j$ that are most similar to node $\bm{\psi}_{k}^{i}$ in view $i$.
Similarly, $\mathcal{G}^{ji}$ can be constructed by using nodes
from view $j$ to query nodes in view $i$. Therefore given three
views, we have six across-view/heterogeneous hypergraphs. Figure \ref{fig:Outliers-illustrations}
illustrates two heterogeneous hypergraphs constructed from two views.
Interestingly, our way of defining hyperedges naturally corresponds
to the star expansion \cite{hypergraphspectral} where the query node
(i.e.~$\bm{\psi}_{k}^{i}$) is introduced to connect each node in
the hyperedge $e_{\bm{\psi}_{k}^{i}}^{ij}$.
\noindent \textbf{Similarity strength of hyperedge}\quad{}For each hyperedge
$e_{\bm{\psi}_{k}^{i}}^{ij}$, we measure its similarity strength
by using its query node $\bm{\psi}_{k}^{i}$. Specifically,
we use the weight $\delta_{\bm{\psi}_{k}^{i}}^{ij}$ to indicate the
similarity strength of nodes connected within the hyperedge $e_{\bm{\psi}_{k}^{i}}^{ij}$.
Thus, we define $\delta_{\bm{\psi}_{k}^{i}}^{ij}$ based on the mean
similarity of the set $\Delta_{\bm{\psi}_{k}^{i}}^{ij}$ for the hyperedge
\begin{align}
\delta_{\bm{\psi}_{k}^{i}}^{ij} & =\frac{1}{\mid e_{\bm{\psi}_{k}^{i}}^{ij}\mid}\sum_{\omega\left(\bm{\psi}_{k}^{i},\bm{\psi}_{l}^{j}\right)\in\Delta_{\bm{\psi}_{k}^{i}}^{ij},\bm{\psi}_{l}^{j}\in e_{\bm{\psi}_{k}^{i}}^{ij}}\omega\left(\bm{\psi}_{k}^{i},\bm{\psi}_{l}^{j}\right),\label{eq:heterogenous_similarity_weight}
\end{align}
where $\mid e_{\bm{\psi}_{k}^{i}}^{ij}\mid$ is the cardinality of
hyperedge $e_{\bm{\psi}_{k}^{i}}^{ij}$.
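As a concrete illustration, the hyperedge construction and the weight of Eq (\ref{eq:heterogenous_similarity_weight}) can be sketched as follows. This is a minimal sketch: the pairwise similarity $\omega$ of Eq (\ref{eq:sim_graph}) is stood in for by a Gaussian kernel, and all variable names are ours.

```python
import numpy as np

def similarity(a, b, sigma=1.0):
    """Stand-in for the pairwise similarity of Eq (sim_graph);
    a Gaussian kernel is assumed here purely for illustration."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def build_hyperedges(psi_i, psi_j, K=3):
    """Each query node psi_i[k] spawns one hyperedge grouping the K nodes
    of view j most similar to it; delta[k] is the mean member similarity
    (Eq heterogenous_similarity_weight)."""
    omega = similarity(psi_i, psi_j)                      # (n_i, n_j) similarities
    edges = np.argsort(-omega, axis=1)[:, :K]             # hyperedge members
    member_sims = np.take_along_axis(omega, edges, axis=1)
    delta = member_sims.mean(axis=1)                      # hyperedge weight
    return edges, member_sims, delta
```

Each row of `edges` is one hyperedge $e_{\bm{\psi}_{k}^{i}}^{ij}$, queried from view $i$ into view $j$; swapping the two arguments yields the hyperedges of $\mathcal{G}^{ji}$.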
\noindent In the embedding space $\Gamma$, similarity
sets $\Delta_{\bm{\psi}_{k}^{i}}^{ij}$ and $\Delta_{\bm{\psi}_{l}^{i}}^{ij}$
can be compared. Nevertheless, these sets come from heterogeneous views
and have varying scales. Thus some normalisation steps are necessary
to make the two similarity sets more comparable and the subsequent
computation more robust. Specifically, we apply z-score normalisation
to the similarity sets: (a) We assume that each set $\Delta_{\bm{\psi}_{k}^{i}}^{ij}\in\Omega^{ij}$
follows a Gaussian distribution, and therefore apply z-score normalisation
to each $\Delta_{\bm{\psi}_{k}^{i}}^{ij}$.
(b) We further assume that, for each node $\bm{\psi}_{l}^{j}$, the pairwise
similarities between $\bm{\psi}_{l}^{j}$ and all the query nodes $\bm{\psi}_{k}^{i}$
($k=1,\cdots,n_{T}+c_{T}$) from view $i$ also follow a Gaussian
distribution, and apply z-score normalisation to them as well.
(c) We select the $K$
highest values from $\Delta_{\bm{\psi_{k}^{i}}}^{ij}$ as the new similarity
set $\bar{\Delta}_{\bm{\psi}_{k}^{i}}^{ij}$ for hyperedge $e_{\bm{\psi}_{k}^{i}}^{ij}$.
$\bar{\Delta}_{\bm{\psi}_{k}^{i}}^{ij}$ is then used in Eq (\ref{eq:heterogenous_similarity_weight})
in place of ${\Delta}_{\bm{\psi}_{k}^{i}}^{ij}$. These normalisation
steps aim to compute a more robust similarity between each pair of
hyperedges.
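The three normalisation steps above can be sketched on the matrix of query-to-node similarities as follows (a minimal sketch; the Gaussian assumption enters only through the use of z-scores):

```python
import numpy as np

def normalise_similarity_sets(omega, K=3, eps=1e-12):
    """omega[k, l] = similarity of query node k (view i) to node l (view j).
    (a) z-score each row (one similarity set Delta per query node);
    (b) z-score each column (similarities of one node to all query nodes);
    (c) keep the K highest values per row as the new similarity set."""
    omega = (omega - omega.mean(1, keepdims=True)) / (omega.std(1, keepdims=True) + eps)
    omega = (omega - omega.mean(0, keepdims=True)) / (omega.std(0, keepdims=True) + eps)
    top = np.argsort(-omega, axis=1)[:, :K]
    return top, np.take_along_axis(omega, top, axis=1)
```

The returned values play the role of $\bar{\Delta}_{\bm{\psi}_{k}^{i}}^{ij}$ and are the ones fed into the mean of Eq (\ref{eq:heterogenous_similarity_weight}).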
\noindent \textbf{Computing node similarity via
hyperedges} \quad{} With the hypergraph, the similarity between two nodes is computed
using the hyperedges $e_{\bm{\psi}_{k}^{i}}^{ij}$.
Specifically, each hypergraph has an associated incidence
matrix $H^{ij}=\left(h\left(\bm{\psi}_{l}^{j},e_{\bm{\psi}_{k}^{i}}^{ij}\right)\right)_{(n_{T}+c_{T})\times\mid E^{ij}\mid}$
where
\begin{equation}
h\left(\bm{\psi}_{l}^{j},e_{\bm{\psi}_{k}^{i}}^{ij}\right)=\begin{cases}
\begin{array}[t]{cc}
1 & if\:\bm{\psi}_{l}^{j}\in e_{\bm{\psi}_{k}^{i}}^{ij}\\
0 & otherwise
\end{array}\end{cases}\label{eq:heterogenous_hard_incidence_matrix}
\end{equation}
To take into account the similarity strength between each hyperedge
and its query node, we extend the binary-valued hyperedge incidence matrix
$H^{ij}$ to soft-assigned incidence matrix $SH^{ij}=\left(sh\left(\bm{\psi}_{l}^{j},e_{\bm{\psi}_{k}^{i}}^{ij}\right)\right)_{(n_{T}+c_{T})\times\mid E^{ij}\mid}$
as follows
\begin{equation}
sh\left(\bm{\psi}_{l}^{j},e_{\bm{\psi}_{k}^{i}}^{ij}\right)=\delta_{\bm{\psi}_{k}^{i}}^{ij}\cdot\omega\left(\bm{\psi}_{k}^{i},\bm{\psi}_{l}^{j}\right)\cdot h\left(\bm{\psi}_{l}^{j},e_{\bm{\psi}_{k}^{i}}^{ij}\right)\label{eq:soft_incident_matrix}
\end{equation}
This soft-assigned incidence matrix is the product of three components:
(1) the weight $\delta_{\bm{\psi}_{k}^{i}}^{ij}$ for hyperedge $e_{\bm{\psi}_{k}^{i}}^{ij}$;
(2) the pairwise similarity computed using the query node $\bm{\psi}_{k}^{i}$;
(3) the binary valued hyperedge incidence matrix element $h\left(\bm{\psi}_{l}^{j},e_{\bm{\psi}_{k}^{i}}^{ij}\right)$.
To make the values of $SH^{ij}$ comparable among the different heterogeneous
views, we apply $l_{2}$ normalisation to the soft-assigned incidence
matrix values over all nodes incident to each hyperedge.
Now, for each heterogeneous hypergraph, we can finally define the pairwise
similarity between any two nodes via its hyperedges. Specifically, for $\mathcal{G}^{ij}$,
the similarity between the $o$-th and $l$-th nodes is
\begin{equation}
\omega_{c}^{ij}\left(\bm{\psi}_{o}^{j},\bm{\psi}_{l}^{j}\right)=\sum_{e_{\bm{\psi}_{k}^{i}}^{ij}\in E^{ij}}sh\left(\bm{\psi}_{o}^{j},e_{\bm{\psi}_{k}^{i}}^{ij}\right)\cdot sh\left(\bm{\psi}_{l}^{j},e_{\bm{\psi}_{k}^{i}}^{ij}\right).\label{eq:hyperedge_weights}
\end{equation}
With this hypergraph-induced pairwise similarity, the hypergraph definition
is now complete. Empirically, given a node, other nodes on the graph with very low similarity to it will have very limited effect on its label.
Thus, to reduce computational cost, we only use the K-nearest-neighbour
(KNN)\footnote{$K=30$. It can be varied from $10\sim50$ with
little effect in our experiments.} nodes of each node~\cite{zhu2007sslsurvey} for the subsequent label propagation step.
\noindent \textbf{The advantages of heterogeneous hypergraphs}\quad{}We
argue that the pairwise similarity of heterogeneous hypergraphs is
a distributed representation \cite{Bengio:2009:LDA:1658423.1658424}. To explain this, we can use star
expansion \cite{hypergraphspectral} to convert a hypergraph into a
2-graph. For each hyperedge $e_{\bm{\psi}_{k}^{i}}^{ij}$, the query
node $\bm{\psi}_{k}^{i}$ is used to compute the pairwise similarity
$\Delta_{\bm{\psi}_{k}^{i}}^{ij}$ of all the nodes in view $j$.
Each hyperedge can thus define a hyperplane that categorises the nodes
in view $j$ into two groups: those with strong and those with weak similarity
to the query node $\bm{\psi}_{k}^{i}$. In other words, the hyperedge
set $E^{ij}$ performs a multi-clustering in which each hyperplane linearly
separates a region per class. Since the final pairwise similarity
in Eq (\ref{eq:hyperedge_weights}) can be represented by a set of
similarity weights computed by hyperedge, and such weights are not
mutually exclusive and are statistically independent, we consider
the heterogeneous hypergraph a distributed representation. The advantage
of having a distributed representation has been studied by Watts and
Strogatz~\cite{Watts-Colective-1998,Watts.2004} which shows that
such a representation gives rise to better convergence rates and better
clustering abilities. In contrast, the homogeneous hypergraphs adopted
by previous work \cite{ImgRetrHypergraph,fu2010summarize,Hong:2013:MHL:2503901.2503960}
do not have this property, which makes them less robust against noise.
In addition, fusing different views in the early stage of graph construction
can potentially lead to better exploitation of the complementarity
of different views. However,
it is worth pointing out that: (1) The reason we can query nodes across
views to construct heterogeneous hypergraphs is that we have projected
all views into the same embedding space in the first place. (2) Hypergraphs
typically gain robustness at the cost of losing discriminative power
-- it essentially blurs the boundary of different clusters/classes
by taking average over hyperedges. A typical solution is to
fuse hypergraphs with 2-graphs~\cite{fu2010summarize,Hong:2013:MHL:2503901.2503960,Li2013a},
which we adopt here as well.
\subsection{Label propagation by random walk}
Now we have two types of graphs: heterogeneous hypergraphs $\mathcal{G}^{c}=\left\{ \mathcal{G}^{ij}\right\} $
and 2-graphs\footnote{That is, the K-nearest-neighbour graph of each view
in $\Gamma$ \cite{embedding2014ECCV}.}
$\mathcal{G}^{p}=\left\{ \mathcal{G}^{i}\right\} $.
Given three views ($n_{V}=3$), we thus have nine graphs in total
(six hypergraphs and three 2-graphs). To classify
the unlabelled nodes, we need to propagate label information from the prototype
nodes across the graph. Such semi-supervised label propagation \cite{Zhou2007ICML,zhu2007sslsurvey}
has a closed-form solution and can be interpreted as a random walk.
A random walk requires the pairwise transition probability
between nodes $k$ and $l$. We obtain this by aggregating the information
from all graphs $\mathcal{G}=\left\{ \mathcal{G}^{p};\mathcal{G}^{c}\right\} $,
\begin{align}
p\left(k\rightarrow l\right) & =\sum_{i\in\left\{ \mathcal{X},\mathcal{V},\mathcal{A}\right\} }p\left(k\rightarrow l\mid\mathcal{G}^{i}\right)\cdot p\left(\mathcal{G}^{i}\mid k\right)+\label{eq:transition_probability}\\
& \sum_{i,j\in\left\{ \mathcal{X},\mathcal{V},\mathcal{A}\right\} ,i\ne j}p\left(k\rightarrow l\mid\mathcal{G}^{ij}\right)\cdot p\left(\mathcal{G}^{ij}\mid k\right)\nonumber
\end{align}
where
\begin{equation}
p\left(k\rightarrow l\mid\mathcal{G}^{i}\right)=\frac{\omega_{p}^{i}(\bm{\psi}_{k}^{i},\bm{\psi}_{l}^{i})}{\sum_{o}\omega_{p}^{i}(\bm{\psi}_{k}^{i},\bm{\psi}_{o}^{i})},\label{eq:prob_graphs-1}
\end{equation}
and
\[
p\left(k\rightarrow l\mid\mathcal{G}^{ij}\right)=\frac{\omega_{c}^{ij}(\bm{\psi}_{k}^{j},\bm{\psi}_{l}^{j})}{\sum_{o}\omega_{c}^{ij}(\bm{\psi}_{k}^{j},\bm{\psi}_{o}^{j})}
\]
and the posterior probability of choosing graph $\mathcal{G}^{i}$
at projection/node $\bm{\psi}_{k}^{i}$ is:
\begin{align}
p(\mathcal{G}^{i}|k) & =\frac{\pi(k|\mathcal{G}^{i})p(\mathcal{G}^{i})}{\sum_{i}\pi(k|\mathcal{G}^{i})p(\mathcal{G}^{i})+\sum_{ij}\pi(k|\mathcal{G}^{ij})p(\mathcal{G}^{ij})}\label{eq:post_prob_i}\\
p(\mathcal{G}^{ij}|k) & =\frac{\pi(k|\mathcal{G}^{ij})p(\mathcal{G}^{ij})}{\sum_{i}\pi(k|\mathcal{G}^{i})p(\mathcal{G}^{i})+\sum_{ij}\pi(k|\mathcal{G}^{ij})p(\mathcal{G}^{ij})}
\end{align}
\noindent where $p(\mathcal{G}^{i})$ and $p(\mathcal{G}^{ij})$ are
the prior probability of graphs $\mathcal{G}^{i}$ and $\mathcal{G}^{ij}$
in the random walk. This probability expresses prior expectation about
the informativeness of each graph. The same\emph{ }Bayesian model
averaging \cite{embedding2014ECCV} can be used here to estimate these
prior probabilities. However, the computational cost increases
combinatorially with the number of views, and it turns out that the prior is
not critical to the results of our framework. Therefore, a uniform prior
is used in our experiments.
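Under the uniform prior used in our experiments, the aggregation of Eqs (\ref{eq:transition_probability})-(\ref{eq:post_prob_i}) reduces to the following sketch, in which every 2-graph and heterogeneous hypergraph simply contributes its similarity matrix:

```python
import numpy as np

def transition_matrix(graph_weights):
    """Aggregate per-graph transition probabilities (Eq transition_probability)
    with uniform graph priors. graph_weights: list of (n, n) non-negative
    similarity matrices, one per 2-graph or heterogeneous hypergraph."""
    n = graph_weights[0].shape[0]
    prior = 1.0 / len(graph_weights)
    # stationary probability pi(k|G) proportional to the row degree
    pis = [w.sum(1) / w.sum() for w in graph_weights]
    post_norm = sum(prior * p for p in pis)             # denominator of p(G|k)
    P = np.zeros((n, n))
    for w, p in zip(graph_weights, pis):
        row = w / (w.sum(1, keepdims=True) + 1e-12)     # p(k -> l | G)
        post = prior * p / (post_norm + 1e-12)          # p(G | k)
        P += post[:, None] * row
    return P
```

Because the graph posteriors sum to one at every node, each row of the aggregated matrix remains a proper probability distribution.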
The stationary probabilities for node $k$ in $\mathcal{G}^{i}$ and
$\mathcal{G}^{ij}$ are
\begin{align}
\pi(k|\mathcal{G}^{i}) & =\frac{\sum_{l}\omega_{p}^{i}(\bm{\psi}_{k}^{i},\bm{\psi}_{l}^{i})}{\sum_{o}\sum_{l}\omega_{p}^{i}(\bm{\psi}_{o}^{i},\bm{\psi}_{l}^{i})}\label{eq:stati_prob_i}\\
\pi(k|\mathcal{G}^{ij}) & =\frac{\sum_{l}\omega_{c}^{ij}(\bm{\psi}_{k}^{j},\bm{\psi}_{l}^{j})}{\sum_{o}\sum_{l}\omega_{c}^{ij}(\bm{\psi}_{o}^{j},\bm{\psi}_{l}^{j})}\label{eq:heterogeneous_stationary_prob}
\end{align}
Finally, the stationary probability across the multi-view hypergraph
is computed as:
\begin{align}
\pi(k) & =\sum_{i\in\left\{ \mathcal{X},\mathcal{V},\mathcal{A}\right\} }\pi(k|\mathcal{G}^{i})\cdot p(\mathcal{G}^{i})+\label{eq:stat_prob}\\
& \sum_{i,j\in\left\{ \mathcal{X},\mathcal{V},\mathcal{A}\right\} ,i\neq j}\pi(k|\mathcal{G}^{ij})\cdot p(\mathcal{G}^{ij})\nonumber
\end{align}
\noindent Given the defined graphs and random walk process, we can
derive our label propagation algorithm (TMV-HLP). Let $P$ denote
the transition probability matrix defined by Eq (\ref{eq:transition_probability})
and $\Pi$ the diagonal matrix with the elements $\pi(k)$ computed
by Eq (\ref{eq:stat_prob}). The Laplacian matrix $\mathcal{L}$ combines
information of different views and is defined as: $\mathcal{L}=\Pi-\frac{\Pi P+P^{T}\Pi}{2}.$
The label matrix $Z$ for labelled N-shot data or zero-shot prototypes
is defined as:
\begin{equation}
Z(q_{k},c)=\begin{cases}
\begin{array}{c}
1\\
-1\\
0
\end{array} & \begin{array}{c}
q_{k}\in class\, c\\
q_{k}\notin class\, c\\
unknown
\end{array}\end{cases}\label{eq:initial_label}
\end{equation}
Given the label matrix $Z$ and Laplacian $\mathcal{L}$, label propagation
on multiple graphs has the closed-form solution \cite{Zhou2007ICML}: $\hat{Z}=\eta(\eta\Pi+\mathcal{L})^{-1}\Pi Z$ where $\eta$ is
a regularisation parameter%
\footnote{It can be varied from $1\sim10$ with little effect in our experiments.}. Note that in our framework, both labelled target class instances
and prototypes are modelled as graph nodes. Thus the difference between
zero-shot and N-shot learning lies only in the initial labelled instances:
Zero-shot learning has the prototypes as labelled nodes; N-shot has
instances as labelled nodes; and a new condition exploiting both prototypes
and N-shot together is possible. This unified recognition framework
thus applies when either or both of prototypes and labelled instances
are available. The computational cost of our TMV-HLP
is $\mathcal{O}\left((c_{T}+n_{T})^{2}\cdot n_{V}^{2}+(c_{T}+n_{T})^{3}\right)$,
where $n_{V}$ is the number of views. It costs $\mathcal{O}((c_{T}+n_{T})^{2}\cdot n_{V}^{2})$
to construct the heterogeneous graph, while the inverse matrix of Laplacian
matrix $\mathcal{L}$ in label propagation step will take $\mathcal{O}((c_{T}+n_{T})^{3})$
computational time, which however can be further reduced to $\mathcal{O}(c_{T}n_{T}t)$ using the recent
work of Fujiwara et al.~\cite{FujiwaraICML2014efficientLP}, where $t$ is an iteration parameter
in their paper and $t\ll n_{T}$.
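The propagation step itself can be sketched directly from the closed-form solution above (a minimal sketch; numerical safeguards are ours):

```python
import numpy as np

def propagate_labels(P, pi, Z, eta=2.0):
    """Closed-form label propagation: with Laplacian
    L = Pi - (Pi P + P^T Pi)/2, return Zhat = eta (eta Pi + L)^{-1} Pi Z."""
    Pi = np.diag(pi)
    Lap = Pi - 0.5 * (Pi @ P + P.T @ Pi)
    return eta * np.linalg.solve(eta * Pi + Lap, Pi @ Z)
```

Each unlabelled node is then assigned to the class whose column attains the largest value in its row of $\hat{Z}$, regardless of whether the labelled nodes were prototypes (zero-shot), instances (N-shot), or both.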
\section{Annotation and Beyond\label{sec:Annotation-and-Beyond}}
Our multi-view embedding space $\Gamma$ bridges the semantic gap
between low-level features $\mathcal{X}$ and semantic representations
$\mathcal{A}$ and $\mathcal{V}$. Leveraging this cross-view mapping,
annotation \cite{hospedales2011video_tags,topicimgannot,multiviewCCAIJCV}
can be improved and applied in novel ways. We consider three annotation
tasks here:
\noindent \textbf{Instance level annotation}\quad{}Given a new instance
$u$, we can describe/annotate it by predicting its attributes. The
conventional solution is directly applying $\hat{\mathbf{y}}_{u}^{\mathcal{A}}=f^{\mathcal{A}}(\mathbf{x}_{u})$
for test data $\mathbf{x}_{u}$ \cite{farhadi2009attrib_describe,multiviewCCAIJCV}.
However, as analysed before, this suffers from the projection domain
shift problem. To alleviate this, our multi-view embedding space aligns
the semantic attribute projections with the low-level features of
each unlabelled instance in the target domain. This alignment can
be used for image annotation in the target domain. Thus, with our
framework, we can now infer attributes for any test instance via the
learned embedding space $\Gamma$ as $\hat{\mathbf{y}}_{u}^{A}=\mathbf{x}_{u}W^{\mathcal{X}}\tilde{D}^{\mathcal{X}}\left[W^{\mathcal{A}}\tilde{D}^{\mathcal{A}}\right]^{-1}$.
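A minimal sketch of this annotation step follows; since $W^{\mathcal{A}}\tilde{D}^{\mathcal{A}}$ is in general not square, we read the inverse as the Moore-Penrose pseudo-inverse (an assumption on our part):

```python
import numpy as np

def annotate_instance(x, W_x, D_x, W_a, D_a):
    """Project a low-level feature x into the embedding space Gamma and map
    it back out through the attribute view:
    y_hat = x W^X D^X (W^A D^A)^{-1}   (pseudo-inverse used here)."""
    return x @ W_x @ D_x @ np.linalg.pinv(W_a @ D_a)
```

When the instance is already well aligned in $\Gamma$, the round trip recovers its attribute vector exactly.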
\noindent \textbf{Zero-shot class description}\quad{}From a broader
machine intelligence perspective, one might be interested in asking what
the attributes of an unseen class are, based solely on the class name.
Given our multi-view embedding space, we
can infer the semantic attribute description of a novel class. This
\textit{zero-shot class description} task could be useful, for example,
to hypothesise the zero-shot attribute prototype of a class instead
of defining it by experts \cite{lampert2009zeroshot_dat} or ontology
\cite{yanweiPAMIlatentattrib}. Our transductive embedding enables
this task by connecting semantic word space (i.e.~naming) and discriminative
attribute space (i.e.~describing). Given the prototype $\mathbf{y}_{c}^{\mathcal{V}}$
from the name of a novel class $c$, we compute $\hat{\mathbf{y}}_{c}^{\mathcal{A}}=\mathbf{y}_{c}^{\mathcal{V}}W^{\mathcal{V}}\tilde{D}^{\mathcal{V}}\left[W^{\mathcal{A}}\tilde{D}^{\mathcal{A}}\right]^{-1}$
to generate the class-level attribute description.
\noindent \textbf{Zero prototype learning}\quad{}This task is the
inverse of the previous task -- to infer the name of a class given a
set of attributes. It could be useful, for example, to validate or
assess a proposed zero-shot attribute prototype, or to provide an
automated semantic-property based index into a dictionary or database.
To our knowledge, this is the first attempt to evaluate the quality
of a class attribute prototype because no previous work has directly
and systematically linked linguistic knowledge space with visual attribute
space. Specifically given an attribute prototype $\mathbf{y}_{c}^{\mathcal{A}}$,
we can use $\hat{\mathbf{y}}_{c}^{\mathcal{V}}=\hat{\mathbf{y}}_{c}^{\mathcal{A}}W^{\mathcal{A}}\tilde{D}^{\mathcal{A}}\left[W^{\mathcal{V}}\tilde{D}^{\mathcal{V}}\right]^{-1}$
to name the corresponding class and perform retrieval on dictionary
words in $\mathcal{V}$ using $\hat{\mathbf{y}}_{c}^{\mathcal{V}}$.
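A sketch of this naming step, with retrieval by cosine similarity over the dictionary vectors (the retrieval metric is our assumption; the text above specifies only the projection):

```python
import numpy as np

def name_class(y_a, W_a, D_a, W_v, D_v, dictionary):
    """Project an attribute prototype into the word-vector view and rank
    dictionary word vectors by cosine similarity (best first)."""
    y_v = y_a @ W_a @ D_a @ np.linalg.pinv(W_v @ D_v)
    dic = dictionary / (np.linalg.norm(dictionary, axis=1, keepdims=True) + 1e-12)
    q = y_v / (np.linalg.norm(y_v) + 1e-12)
    return np.argsort(-(dic @ q))          # word indices, best match first
```

The top-ranked word can be reported as the inferred class name, and the full ranking can serve as a semantic-property based index into a dictionary.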
\section{Experiments}
\subsection{Datasets and settings }
We evaluate our framework on three widely used image/video
datasets: Animals with Attributes (AwA), Unstructured Social Activity
Attribute (USAA), and Caltech-UCSD-Birds (CUB). \textbf{AwA} \cite{lampert2009zeroshot_dat}
consists of $50$ classes of animals ($30,475$ images) and $85$ associated
class-level attributes. It has a standard source/target split for
zero-shot learning with $10$ classes and $6,180$ images held out
as the target dataset. We use the same `hand-crafted' low-level features
(RGB colour histograms, SIFT, rgSIFT, PHOG, SURF and local self-similarity
histograms) released with the dataset (denoted as $\mathcal{H}$);
and the same multi-kernel learning (MKL) attribute classifier from
\cite{lampert2009zeroshot_dat}.\textbf{ USAA} is a video dataset
\cite{yanweiPAMIlatentattrib} with $69$ instance-level attributes
for $8$ classes of complex (unstructured) social group activity videos
from YouTube. Each class has around $100$ training and $100$ test videos.
USAA provides instance-level attributes since there
are significant intra-class variations. We use the thresholded mean
of instances from each class to define a binary attribute prototype
as in \cite{yanweiPAMIlatentattrib}. The same setting in \cite{yanweiPAMIlatentattrib}
is adopted: $4$ classes as source and $4$ classes as target data.
We use exactly the same SIFT, MFCC and STIP low-level features for
USAA as in \cite{yanweiPAMIlatentattrib}. \textbf{CUB-200-2011} \cite{WahCUB_200_2011}
contains $11,788$ images of $200$ bird classes. This is more challenging
than AwA -- it is designed for fine-grained recognition and
has more classes but fewer images. Each class is annotated with $312$
binary attributes derived from a bird species ontology. We use $150$
classes as auxiliary data, holding out $50$ as test data. We extract
$128$-dimensional SIFT and colour histogram descriptors from a regular
grid at multiple scales and aggregate them into image-level Fisher
Vector features ($\mathcal{F}$) using $256$ Gaussians, as in \cite{labelembeddingcvpr13}.
Colour histogram and PHOG features are also used to extract global
colour and texture cues from each image. Due to the recent progress
on deep learning based representations, we also extract OverFeat
($\mathcal{O}$) \cite{sermanet-iclr-14}%
\footnote{We use the trained model of OverFeat in \cite{sermanet-iclr-14}.%
} from AwA and CUB as an alternative to $\mathcal{H}$ and $\mathcal{F}$
respectively. In addition, DeCAF ($\mathcal{D}$) \cite{decaf}
is also considered for AwA.
We report absolute classification accuracy on USAA and mean accuracy
for AwA and CUB for direct comparison to published results. The word
vector space is trained by the model in \cite{wordvectorICLR} with
$1,000$ dimensions.
\begin{table*}[ht]
\begin{centering}
\begin{tabular}{c|c|c|c|c|c|c}
\hline
Approach & \multicolumn{1}{c|}{AwA ($\mathcal{H}$ \cite{lampert2009zeroshot_dat})} & AwA ($\mathcal{O}$) & AwA $\left(\mathcal{O},\mathcal{D}\right)$ & USAA & CUB ($\mathcal{O}$) & CUB ($\mathcal{F}$) \tabularnewline
\hline
DAP & 40.5(\cite{lampert2009zeroshot_dat}) / 41.4(\textcolor{black}{\cite{lampert13AwAPAMI})
/ 38.4{*}} & 51.0{*} & 57.1{*} & 33.2(\cite{yanweiPAMIlatentattrib}) / 35.2{*} & 26.2{*} & 9.1{*}\tabularnewline
IAP & 27.8(\cite{lampert2009zeroshot_dat}) / 42.2(\textcolor{black}{\cite{lampert13AwAPAMI})} & -- & -- & -- & -- & --\tabularnewline
M2LATM \cite{yanweiPAMIlatentattrib} {*}{*}{*} & 41.3 & -- & -- & 41.9 & -- & --\tabularnewline
ALE/HLE/AHLE \cite{labelembeddingcvpr13} & 37.4/39.0/43.5 & -- & -- & -- & -- & 18.0{*}\tabularnewline
Mo/Ma/O/D \cite{marcuswhathelps} & 27.0 / 23.6 / 33.0 / 35.7 & -- & -- & -- & -- & --\tabularnewline
PST \cite{transferlearningNIPS} {*}{*}{*} & 42.7 & 54.1{*} & 62.9{*} & 36.2{*} & 38.3{*} & 13.2{*}\tabularnewline
\cite{Yucatergorylevel} & 48.3{*}{*} & -- & -- & -- & -- & --\tabularnewline
TMV-BLP \cite{embedding2014ECCV}{*}{*}{*} & 47.7 & 69.9 & 77.8 & 48.2 & 45.2 & 16.3\tabularnewline
\hline
TMV-HLP {*}{*}{*} & \textbf{49.0} & \textbf{73.5} & \textbf{80.5} & \textbf{50.4} & \textbf{47.9} & \textbf{19.5}\tabularnewline
\hline
\end{tabular}
\par\end{centering}
\noindent \caption{\label{tab:Comparison-with-stateofart}Comparison with the state-of-the-art
on zero-shot learning on AwA, USAA and CUB. Features $\mathcal{H}$,
$\mathcal{O}$, $\mathcal{D}$ and $\mathcal{F}$ represent hand-crafted, OverFeat, DeCAF,
and Fisher Vector respectively. Mo, Ma, O and D represent the highest
results by the mined object class-attribute associations, mined attributes,
objectness as attributes and direct similarity methods used in \cite{marcuswhathelps}
respectively. `--': no result reported. {*}: our implementation. {*}{*}:
requires additional human annotations. {*}{*}{*}:
requires unlabelled data, i.e.~a transductive setting. }
\end{table*}
\subsection{Recognition by zero-shot learning }
\subsubsection{Comparisons with state-of-the-art}
We compare our method (TMV-HLP) with the recent state-of-the-art
models that report results or can be re-implemented by us on the three
datasets in Table~\ref{tab:Comparison-with-stateofart}. They cover
a wide range of approaches on utilising semantic intermediate representation
for zero-shot learning. They can be roughly categorised according
to the semantic representation(s) used: DAP and IAP (\cite{lampert2009zeroshot_dat},
\cite{lampert13AwAPAMI}), M2LATM \cite{yanweiPAMIlatentattrib},
ALE \cite{labelembeddingcvpr13}, \cite{transferlearningNIPS} and
\cite{unifiedProbabICCV13} use attributes only; HLE/AHLE \cite{labelembeddingcvpr13}
and Mo/Ma/O/D \cite{marcuswhathelps} use both attributes and linguistic
knowledge bases (same as us); \cite{Yucatergorylevel} uses attribute
and some additional human manual annotation. Note that our linguistic
knowledge base representation is in the form of word vectors, which
does not incur additional manual annotation. Our method also does
not exploit data-driven attributes such as M2LATM \cite{yanweiPAMIlatentattrib}
and Mo/Ma/O/D \cite{marcuswhathelps}.
Consider first the results on the most widely used AwA. Apart
from the standard hand-crafted feature ($\mathcal{H}$),
we consider the more powerful OverFeat deep feature ($\mathcal{O}$),
and a combination of OverFeat and DeCAF $\left(\mathcal{O},\mathcal{D}\right)$%
\footnote{With these two low-level feature views, there are six views in total
in the embedding space.%
}. Table~\ref{tab:Comparison-with-stateofart} shows that (1) with
the same experimental settings and the same feature ($\mathcal{H}$),
our TMV-HLP ($49.0\%$) outperforms the best result reported so far
(48.3\%) in \cite{Yucatergorylevel} which, unlike ours, requires additional human
annotation to relabel the similarities between auxiliary and target
classes.
(2) With the more powerful OverFeat feature, our method achieves $73.5\%$
zero-shot recognition accuracy. Even more remarkably, when both the
OverFeat and DeCAF features are used in our framework, the result
(see the AwA $\left(\mathcal{O},\mathcal{D}\right)$ column) is $80.5\%$.
Even with only 10 target classes, this is an extremely good result given
that we do not have any labelled samples from the target classes. Note
that this good result is not solely due to the feature strength, as
the margin between the conventional DAP and our TMV-HLP is much bigger,
indicating that our TMV-HLP plays a critical role in achieving this
result.
(3) Our method is also superior to the AHLE method in \cite{labelembeddingcvpr13}
which also uses two semantic spaces: attribute and WordNet hierarchy.
Different from our embedding framework, AHLE simply concatenates the
two spaces. (4) Our method also outperforms the other alternatives
of either mining other semantic knowledge bases (Mo/Ma/O/D \cite{marcuswhathelps})
or exploring data-driven attributes (M2LATM \cite{yanweiPAMIlatentattrib}).
(5) Among all compared methods, PST \cite{transferlearningNIPS} is
the only one except ours that performs label propagation based transductive
learning. It yields better results than DAP in all the experiments
which essentially does nearest neighbour in the semantic space. TMV-HLP
consistently beats PST in all the results shown in Table~\ref{tab:Comparison-with-stateofart}
thanks to our multi-view embedding. (6) Compared to our TMV-BLP model \cite{embedding2014ECCV}, the superior results of TMV-HLP show that the proposed heterogeneous hypergraph is more effective than the homogeneous 2-graphs used in TMV-BLP for zero-shot learning.
Table \ref{tab:Comparison-with-stateofart} also shows that on two
very different datasets: USAA video activity, and CUB fine-grained,
our TMV-HLP significantly outperforms the state-of-the-art alternatives.
In particular, on the more challenging CUB, 47.9\% accuracy is achieved
on 50 classes (chance level 2\%) using the OverFeat feature. Considering
the fine-grained nature and the number of classes, this is even more
impressive than the 80.5\% result on AwA.
\subsubsection{Further evaluations\label{sec:further eva}}
\begin{figure}
\begin{centering}
\includegraphics[scale=0.33]{CCA_combined}
\par\end{centering}
\protect\caption{\label{fig:CCA-validation} (a) Comparing soft and hard dimension weighting of CCA for AwA. (b) Contributions of CCA and label propagation on AwA. $\Psi^\mathcal{A}$ and $\Psi^\mathcal{V}$ indicate the subspaces of target data from view $\mathcal{A}$ and $\mathcal{V}$ in $\Gamma$ respectively. Hand-crafted features are used in both experiments. }
\end{figure}
\noindent \textbf{Effectiveness of soft weighting for CCA embedding}\quad{}
In this experiment,
we compare the soft-weighting (Eq (\ref{eq:ccamapping})) of CCA embedding space $\Gamma$ (a strategy adopted in this work) with the conventional hard-weighting strategy of selecting the number of dimensions for CCA projection.
Fig.~\ref{fig:CCA-validation}(a) shows that the performance of the hard-weighting CCA depends on the number of projection dimensions selected (blue curve). In contrast, our soft-weighting strategy uses all dimensions weighted by the CCA eigenvalues, so that the important dimensions are automatically weighted more highly. The result shows that this strategy is clearly better and it is not very sensitive to the weighting parameter $\lambda$, with choices of $\lambda>2$ all working well.
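To make the comparison concrete, the soft-weighting projection can be sketched as below. We assume here, following our reading of Eq (\ref{eq:ccamapping}) defined earlier in the paper, that the weighting matrix is diagonal with entries given by the CCA correlations raised to the power $\lambda$:

```python
import numpy as np

def soft_weighted_projection(X, W, rho, lam=4.0):
    """Project X with CCA basis W, keeping every dimension but scaling
    dimension d by rho[d]**lam (soft weighting). As lam grows this
    approaches hard selection of the most-correlated dimensions."""
    return X @ W @ np.diag(rho ** lam)
```

In contrast, hard weighting corresponds to replacing the diagonal factor with a 0/1 mask over the top-$d$ dimensions, which is what the blue curve in Fig.~\ref{fig:CCA-validation}(a) sweeps over.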
\noindent \textbf{Contributions of individual components}\quad{}
There are two major components in our ZSL framework: CCA embedding and label propagation. In this experiment we investigate whether both of them contribute to the strong performance. To this end, we compare the ZSL results on AwA with label propagation and without (nearest neighbour), before and after CCA embedding. In Fig.~\ref{fig:CCA-validation}(b), we can see that: (i) Label propagation always helps regardless of whether the views have been embedded using CCA, although its effects are more pronounced after embedding. (ii) Even without label propagation, i.e.~using nearest neighbour for classification, the performance is improved by the CCA embedding. However, the improvement is bigger with label propagation. These results thus suggest that both CCA embedding and label propagation are useful, and our ZSL framework works best when both are used.
\begin{figure*}[t]
\begin{centering}
\includegraphics[width=0.8\textwidth]{effectivenessOfAwAUSAA}
\par\end{centering}
\caption{\label{fig:zero-shot-learning-on}Effectiveness of transductive multi-view
embedding. (a) zero-shot learning on AwA using only hand-crafted features;
(b) zero-shot learning on AwA using hand-crafted and deep features
together; (c) zero-shot learning on USAA. $[\mathcal{V},\mathcal{A}]$
indicates the concatenation of semantic word and attribute space vectors.
$\Gamma(\mathcal{X}+\mathcal{V})$ and $\Gamma(\mathcal{X}+\mathcal{A})$
mean using low-level+semantic word spaces and low-level+attribute
spaces respectively to learn the embedding. $\Gamma(\mathcal{X}+\mathcal{V}+\mathcal{A})$
indicates using all $3$ views to learn the embedding. }
\end{figure*}
\noindent \textbf{Transductive multi-view embedding}\quad{}To further
validate the contribution of our transductive multi-view embedding
space, we split up different views with and without embedding and the
results are shown in Fig.~\ref{fig:zero-shot-learning-on}. In Figs.~\ref{fig:zero-shot-learning-on}(a)
and (c), the hand-crafted feature $\mathcal{H}$ and SIFT, MFCC and
STIP low-level features are used for AwA and USAA respectively, and
we compare $\mathcal{V}$ vs.~$\Gamma(\mathcal{X}+\mathcal{V}$), $\mathcal{A}$ vs.~$\Gamma(\mathcal{X}+\mathcal{A})$ and $[\mathcal{V},\mathcal{A}]$
vs.~$\Gamma(\mathcal{X}+\mathcal{V}+\mathcal{A})$ (see the caption
of Fig.~\ref{fig:zero-shot-learning-on} for definitions). We use
DAP for $\mathcal{A}$ and nearest neighbour for $\mathcal{V}$ and
$[\mathcal{V},\mathcal{A}]$, because the prototypes of $\mathcal{V}$
are not binary vectors so DAP cannot be applied. We use TMV-HLP for
$\Gamma(\mathcal{X}+\mathcal{V})$ and $\Gamma(\mathcal{X}+\mathcal{A})$
respectively. We highlight the following observations: (1) After transductive
embedding, $\Gamma(\mathcal{X}+\mathcal{V}+\mathcal{A})$, $\Gamma(\mathcal{X}+\mathcal{V})$
and $\Gamma(\mathcal{X}+\mathcal{A})$ outperform $[\mathcal{V},\mathcal{A}]$,
$\mathcal{V}$ and $\mathcal{A}$ respectively. This means that the
transductive embedding is helpful whichever semantic space is used
in rectifying the projection domain shift problem by aligning the
semantic views with low-level features. (2) The results of $[\mathcal{V},\mathcal{A}]$
are higher than those of $\mathcal{A}$ and $\mathcal{V}$ individually,
showing that the two semantic views are indeed complementary even
with simple feature level fusion. Similarly, our TMV-HLP on all views
$\Gamma(\mathcal{X}+\mathcal{V}+\mathcal{A})$ improves individual
embeddings $\Gamma(\mathcal{X}+\mathcal{V})$ and $\Gamma(\mathcal{X}+\mathcal{A})$.
\begin{figure*}[ht]
\begin{centering}
\includegraphics[scale=0.4]{visualization_fusing_mutiple_view}
\par\end{centering}
\caption{\label{fig:t-SNE-visualisation-of}t-SNE Visualisation of (a) OverFeat
view ($\mathcal{X}_{\mathcal{O}}$), (b) attribute view ($\mathcal{A}_{\mathcal{O}}$),
(c) word vector view ($\mathcal{V}_{\mathcal{O}}$), and (d) transition
probability of pairwise nodes computed by Eq (\ref{eq:transition_probability})
of TMV-HLP in ($\Gamma(\mathcal{X}+\mathcal{A}+\mathcal{V})_{\mathcal{O},\mathcal{D}}$).
The unlabelled target classes are much more separable in (d).}
\end{figure*}
\noindent \textbf{Embedding deep learning feature views also helps}\quad{}In Fig.~\ref{fig:zero-shot-learning-on}(b)
three different low-level features are considered for AwA: hand-crafted
($\mathcal{H}$), OverFeat ($\mathcal{O}$) and DeCAF features ($\mathcal{D}$).
The zero-shot learning results of each individual space are indicated
as $\mathcal{V}_{\mathcal{H}}$, $\mathcal{A}_{\mathcal{H}}$, $\mathcal{V}_{\mathcal{O}}$,
$\mathcal{A}_{\mathcal{O}}$, $\mathcal{V}_{\mathcal{D}}$, $\mathcal{A}_{\mathcal{D}}$
in Fig.~\ref{fig:zero-shot-learning-on}(b) and we observe that $\mathcal{V}_{\mathcal{O}}>\mathcal{V}_{\mathcal{D}}>\mathcal{V}_{\mathcal{H}}$
and $\mathcal{A}_{\mathcal{O}}>\mathcal{A}_{\mathcal{D}}>\mathcal{A}_{\mathcal{H}}$.
That is, OverFeat $>$ DeCAF $>$ hand-crafted features. It is widely
reported that deep features have better performance than `hand-crafted'
features on many computer vision benchmark datasets \cite{2014arXiv1405.3531C,sermanet-iclr-14}.
What is interesting to see here is that OverFeat $>$ DeCAF, since
both are based on the same Convolutional Neural Network (CNN) model
of \cite{KrizhevskySH12}. Apart from implementation details, one
significant difference is that DeCAF is pre-trained on ILSVRC2012
while OverFeat is pre-trained on ILSVRC2013, which contains more animal
classes, meaning that better (more relevant) features can be learned.
It is also worth pointing out that: (1) With both OverFeat and DeCAF
features, the number of views available for learning an embedding space
increases from $3$ to $9$; our results suggest that the more views
there are, the better the chance of solving the domain shift problem
and the more separable the data become, as different views contain
complementary information.
(2) Figure~\ref{fig:zero-shot-learning-on}(b) shows that when all
9 available views ($\mathcal{X}_{\mathcal{H}}$, $\mathcal{V}_{\mathcal{H}}$,
$\mathcal{A}_{\mathcal{H}}$, $\mathcal{X}_{\mathcal{D}}$, $\mathcal{V}_{\mathcal{D}}$,
$\mathcal{A}_{\mathcal{D}}$, $\mathcal{X}_{\mathcal{O}}$, $\mathcal{V}_{\mathcal{O}}$
and $\mathcal{A}_{\mathcal{O}}$) are used for embedding, the result
is significantly better than those from each individual view. Nevertheless,
it is lower than that obtained by embedding views ($\mathcal{X}_{\mathcal{D}}$,
$\mathcal{V}_{\mathcal{D}}$, $\mathcal{A}_{\mathcal{D}}$, $\mathcal{X}_{\mathcal{O}}$,
$\mathcal{V}_{\mathcal{O}}$ and $\mathcal{A}_{\mathcal{O}}$). This
suggests that view selection may be required when a large number of
views are available for learning the embedding space.
\noindent \textbf{Embedding makes target classes more separable}\quad{}We
employ t-SNE \cite{tsne} to visualise the space $\mathcal{X}_{\mathcal{O}}$,
$\mathcal{V}_{\mathcal{O}}$, $\mathcal{A}_{\mathcal{O}}$ and $\Gamma(\mathcal{X}+\mathcal{A}+\mathcal{V})_{\mathcal{O},\mathcal{D}}$
in Fig.~\ref{fig:t-SNE-visualisation-of}. It shows that even in
the powerful OverFeat view, the 10 target classes are heavily overlapped
(Fig.~\ref{fig:t-SNE-visualisation-of}(a)). The separation improves in the
semantic views (Figs.~\ref{fig:t-SNE-visualisation-of}(b) and (c)).
However, when all 6 views are embedded, all classes are clearly separable
(Fig.~\ref{fig:t-SNE-visualisation-of}(d)).
\noindent \textbf{Running time}\quad{}In practice, for the AwA dataset
with hand-crafted features, our pipeline takes less than $30$ minutes
to complete the zero-shot classification task (over $6,180$ images)
using a six-core $2.66$~GHz CPU platform. This includes the time for
multi-view CCA embedding and label propagation using our heterogeneous
hypergraphs.
\subsection{Annotation and beyond}
In this section we evaluate our multi-view embedding space for the
conventional and novel annotation tasks introduced in Sec.~\ref{sec:Annotation-and-Beyond}.
\noindent \textbf{Instance annotation by attributes}\quad{}To quantify
the annotation performance, we predict attributes/annotations for
each target class instance for USAA, which has the largest
instance-level attribute variations among the three datasets. We
employ two standard measures: mean average precision (mAP) and
F-measure (FM) between the estimated and true annotation lists. Using
our multi-view embedding space, our method (FM: $0.341$, mAP: $0.355$)
significantly outperforms the baseline of directly estimating
$\mathbf{y}_{u}^{\mathcal{A}}=f^{\mathcal{A}}(\mathbf{x}_{u})$
(FM: $0.299$, mAP: $0.267$).
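Both measures reduce to simple set and ranking computations. The following is a minimal pure-Python sketch, assuming binary attribute annotations; the function names are illustrative and not taken from any released code:

```python
def f_measure(pred, true):
    """F-measure (harmonic mean of precision and recall) between the
    predicted and ground-truth attribute sets of one instance."""
    pred, true = set(pred), set(true)
    tp = len(pred & true)
    if tp == 0:
        return 0.0
    precision = tp / len(pred)
    recall = tp / len(true)
    return 2 * precision * recall / (precision + recall)


def average_precision(ranked, true):
    """Average precision of a ranked attribute list against the
    ground-truth set; mAP is the mean of this over all instances."""
    true = set(true)
    if not true:
        return 0.0
    hits, total = 0, 0.0
    for rank, attr in enumerate(ranked, start=1):
        if attr in true:
            hits += 1
            total += hits / rank
    return total / len(true)
```

The reported FM and mAP values are then the means of these per-instance scores over all test instances.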
\begin{table}[ht]
\begin{centering}
\begin{tabular}{|c|c|c|}
\hline
AwA & & Attributes\tabularnewline
\hline
\hline
\multirow{2}{*}{pc} & T-5 & active, \textbf{furry, tail, paws, ground.}\tabularnewline
\cline{2-3}
& B-5 & swims, hooves, long neck, horns, arctic\tabularnewline
\hline
\multirow{2}{*}{hp} & T-5 & \textbf{old world, strong, quadrupedal}, fast, \textbf{walks}\tabularnewline
\cline{2-3}
& B-5 & red, plankton, skimmers, stripes, tunnels\tabularnewline
\hline
\multirow{2}{*}{lp} & T-5 & \textbf{old world, active, fast, quadrupedal, muscle}\tabularnewline
\cline{2-3}
& B-5 & plankton, arctic, insects, hops, tunnels\tabularnewline
\hline
\multirow{2}{*}{hw} & T-5 & \textbf{fish}, \textbf{smart, fast, group, flippers}\tabularnewline
\cline{2-3}
& B-5 & hops, grazer, tunnels, fields, plains\tabularnewline
\hline
\multirow{2}{*}{seal} & T-5 & \textbf{old world, smart}, \textbf{fast}, chew teeth, \textbf{strong}\tabularnewline
\cline{2-3}
& B-5 & fly, insects, tree, hops, tunnels\tabularnewline
\hline
\multirow{2}{*}{cp} & T-5 & \textbf{fast, smart, chew teeth, active}, brown\tabularnewline
\cline{2-3}
& B-5 & tunnels, hops, skimmers, fields, long neck\tabularnewline
\hline
\multirow{2}{*}{rat} & T-5 & \textbf{active, fast, furry, new world, paws}\tabularnewline
\cline{2-3}
& B-5 & arctic, plankton, hooves, horns, long neck\tabularnewline
\hline
\multirow{2}{*}{gp} & T-5 & \textbf{quadrupedal}, active, \textbf{old world}, \textbf{walks},
\textbf{furry}\tabularnewline
\cline{2-3}
& B-5 & tunnels, skimmers, long neck, blue, hops\tabularnewline
\hline
\multirow{2}{*}{pig} & T-5 & \textbf{quadrupedal}, \textbf{old world}, \textbf{ground}, furry,
\textbf{chew teeth}\tabularnewline
\cline{2-3}
& B-5 & desert, long neck, orange, blue, skimmers\tabularnewline
\hline
\multirow{2}{*}{rc} & T-5 & \textbf{fast}, \textbf{active}, \textbf{furry}, \textbf{quadrupedal},
\textbf{forest}\tabularnewline
\cline{2-3}
& B-5 & long neck, desert, tusks, skimmers, blue\tabularnewline
\hline
\end{tabular}
\par\end{centering}
\caption{Zero-shot description of 10 AwA target classes. $\Gamma$ is learned
using 6 views ($\mathcal{X}_{\mathcal{D}}$, $\mathcal{V}_{\mathcal{D}}$,
$\mathcal{A}_{\mathcal{D}}$, $\mathcal{X}_{\mathcal{O}}$, $\mathcal{V}_{\mathcal{O}}$
and $\mathcal{A}_{\mathcal{O}}$). The true positives are highlighted
in bold. pc, hp, lp, hw, cp, gp, and rc are short for Persian cat,
hippopotamus, leopard, humpback whale, chimpanzee, giant panda, and
raccoon respectively. T-5/B-5 are the top/bottom 5 attributes predicted
for each target class.}
\label{fig:ZeroShotDescription}
\end{table}
\begin{table*}[ht]
\begin{centering}
\begin{tabular}{|c|c|c|}
\hline
(a) Query by GT attributes of & Query via embedding space & Query attribute words in word space\tabularnewline
\hline
\hline
graduation party & \textbf{party}, \textbf{graduation}, audience, caucus & cheering, proudly, dressed, wearing\tabularnewline
\hline
music\_performance & \textbf{music}, \textbf{performance}, musical, heavy metal & sing, singer, sang, dancing\tabularnewline
\hline
wedding\_ceremony & \textbf{wedding\_ceremony}, wedding, glosses, stag & nun, christening, bridegroom, \textbf{wedding\_ceremony}\tabularnewline
\hline
\end{tabular}
\par\end{centering}
\begin{centering}
\begin{tabular}{|c|c|}
\hline
(b) Attribute query & Top ranked words\tabularnewline
\hline
\hline
wrapped presents & music; performance; solo\_performances; performing\tabularnewline
\hline
+small balloon & wedding; wedding\_reception; birthday\_celebration; birthday\tabularnewline
\hline
+birthday song +birthday caps & \textbf{birthday\_party}; prom; wedding reception\tabularnewline
\hline
\end{tabular}
\par\end{centering}
\caption{\label{fig:ZAL_Task}Zero prototype learning on USAA. (a) Querying
classes by groundtruth (GT) attribute definitions of the specified
classes. (b) An incrementally constructed attribute query for the
birthday\_party class. Bold indicates true positive.}
\end{table*}
\noindent \textbf{Zero-shot description}\quad{}In this
task, we explicitly infer the attributes corresponding to a
specified novel class, given only the textual name of that class without
seeing any visual samples. Table~\ref{fig:ZeroShotDescription} illustrates
this for AwA. Clearly most of the top/bottom $5$ attributes predicted
for each of the 10 target classes are meaningful (in the ideal case,
all top $5$ should be true positives and all bottom $5$ true negatives).
Predicting the top-$5$ attributes for each class gives an F-measure of $0.236$.
In comparison, if we directly
select the $5$ attribute name projections nearest to the class name
projection (prototype) in the word space, the F-measure is
$0.063$, demonstrating the importance of learning the multi-view
embedding space. In addition to providing a method to automatically
-- rather than manually -- generate an attribute ontology, this task
is interesting because even a human could find it very challenging
(effectively a human has to list the attributes of a class that they
have never seen or been explicitly taught about, but have only seen
mentioned in text).
\noindent \textbf{Zero prototype learning}\quad{}In this task we
attempt the reverse of the previous experiment: inferring a class
name given a list of attributes. Table \ref{fig:ZAL_Task} illustrates
this for USAA. Table \ref{fig:ZAL_Task}(a) shows queries by the groundtruth
attribute definitions of some USAA classes and the top-4 ranked
list of classes returned. The estimated class names of each attribute
vector are reasonable -- the top-4 words are either the class name
or related to the class name. A baseline is to use the textual names
of the attributes projected in the word space (summing their
word vectors) to search for the nearest classes in word space, instead
of the embedding space. Table \ref{fig:ZAL_Task}(a) shows that the
predicted classes in this case are reasonable, but significantly
worse than querying via the embedding space. To quantify this we evaluate
the average rank of the true name for each USAA class when queried
by its attributes. For querying by embedding space, the average rank
is an impressive $2.13$ (out of $4.33$M words
with a chance-level rank of $2.17$M), compared with the average rank
of $110.24$ by directly querying word space \cite{wordvectorICLR}
with textual descriptions of the attributes. Table \ref{fig:ZAL_Task}(b)
shows an example of ``incremental'' query using the ontology definition
of birthday party \cite{yanweiPAMIlatentattrib}. We first query the
`wrapped presents' attribute only, followed by adding `small
balloon' and all other attributes (`birthday song' and `birthday
caps'). The changing list of top-ranked retrieved words intuitively
reflects the expectation of the combinatorial meaning of the attributes.
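The word-space side of such a query is just vector addition followed by cosine-similarity ranking. The sketch below uses made-up 3-dimensional vectors purely for illustration; real word vectors \cite{wordvectorICLR} have hundreds of dimensions, and querying via the embedding space additionally requires the learned multi-view projections:

```python
import math


def cosine(u, v):
    """Cosine similarity of two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0


def query_by_attributes(attribute_vectors, vocabulary):
    """Sum the attribute word vectors into one query vector and rank
    all vocabulary words by cosine similarity to it."""
    query = [sum(components) for components in zip(*attribute_vectors)]
    return sorted(vocabulary, key=lambda w: cosine(vocabulary[w], query),
                  reverse=True)


# Toy vocabulary (illustrative numbers only, not real word vectors).
vocabulary = {
    'birthday_party': [0.9, 0.8, 0.1],
    'wedding':        [0.2, 0.9, 0.1],
    'music':          [0.1, 0.1, 0.9],
}
attributes = [[0.8, 0.3, 0.0],   # 'wrapped presents'
              [0.7, 0.6, 0.1]]   # 'small balloon'
ranking = query_by_attributes(attributes, vocabulary)
```

Adding further attribute vectors to the query shifts the ranking, mirroring the incremental behaviour in Table \ref{fig:ZAL_Task}(b).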
\section{Conclusions}
We identified the challenge of projection domain shift in zero-shot
learning and presented a new framework to solve it by rectifying the
biased projections in a multi-view embedding space. We also proposed
a novel label-propagation algorithm TMV-HLP based on heterogeneous
across-view hypergraphs. TMV-HLP synergistically exploits multiple
intermediate semantic representations, as well as the manifold structure
of unlabelled target data to improve recognition in a unified way
for zero-shot, N-shot and zero+N-shot learning tasks. As a result
we achieved state-of-the-art performance on the challenging AwA, CUB
and USAA datasets. Finally, we demonstrated that our framework enables
novel tasks of relating textual class names and their semantic attributes.
A number of directions have been identified for future work. First,
we employ CCA for learning the embedding space. Although
it works well, other embedding frameworks can be considered (e.g.~\cite{DBLP:conf/iccv/WangHWWT13}).
In the current pipeline, low-level features are first
projected onto different semantic views before embedding.
It should be possible to develop a unified embedding framework to combine these
two steps. Second, under a realistic lifelong learning
setting \cite{chen_iccv13}, an unlabelled data point could either
belong to a seen/auxiliary category or
an unseen class. An ideal framework should be able to classify both seen and unseen classes \cite{RichardNIPS13}.
Finally, our results suggest that more views, whether manually defined
(attributes), extracted from a linguistic corpus (word space),
or learned from visual data (deep features), can potentially give
rise to a better embedding space. More investigation is needed into how
to systematically design and select semantic views for embedding.
\bibliographystyle{abbrv}
\section{Introduction}
The $1/f$ noise is a random process described by the power spectral density
(PSD), $S(f)$, roughly proportional to the reciprocal frequency, $1/f$, i.e.,
$S(f)\propto1/f^\beta$, with $\beta$ close to $1$. It was first observed as an
excess low-frequency noise in vacuum tubes \cite{Johnson1925,Schottky1926},
later in condensed matter
\cite{Bernamont1934,Bernamont1937a,Bernamont1937b,McWhorter1957,Hooge1981} and
other systems \cite{Weissman1988,Mandelbrot-1999,Scholarpedia2007}. The general
nature of $1/f$ noise (named also ``flicker noise'' and ``$1/f$ fluctuations'')
is up to now the subject of several discussions and investigations, see
\cite{Wong2003,Scholarpedia2007,Kogan2008,Balandin2013} for review.
Many models have been proposed to explain the origin of $1/f$ noise. A short
discussion of the models and theories of $1/f$ noise is available in the
introduction of \cite{Kaulakys2009}. A widely used model of $1/f$ noise
interprets the spectrum as a superposition of Lorentzians with a wide
distribution of relaxation times
\cite{Bernamont1937b,McWhorter1957,Watanabe2005,Kaulakys2005}. Another
possibility to model signals and processes featuring $1/f^\beta$ noise is a
representation of the signals as consisting of the renewal pulses or events with
the power-law distribution of the inter-event time \cite{Lowen2005}.
A class of models of $1/f$ noise relevant for driven nonequilibrium systems
involves the self-organized criticality (SOC)
\cite{Bak1987,Bak1996,Banerjee2006,Huang2015}. SOC refers to the tendency of
nonequilibrium systems driven by slow constant energy input to organize
themselves into a correlated state where all scales are relevant \cite{Bak1996}.
In \cite{Bak1987} a simple driven automaton model of sandpiles that reaches a
state characterized by power-law time and space correlations has been
introduced. However, the mechanism of self-organized criticality does not
necessarily result in $1/f^{\beta}$ fluctuations with $\beta$ close to $1$
\cite{Jensen1989,Kertesz1990}. The $1/f$ noise in the fluctuations of a mass was
first seen in a sandpile model with threshold dissipation, proposed in
\cite{Ali1995}. In addition, the exponent $\beta$ is exactly $1$ in the spectrum
of fluctuations of mass in a one-dimensional directed model of sandpiles
\cite{Maslov1999}.
In most cases the $1/f$ noise is a Gaussian process \cite{Kogan2008,Li2012},
however sometimes $1/f$ fluctuations are non-Gaussian
\cite{Orlyanchik2008,Melkonyan2010}. Processes with the power-law distributions
of the signal characteristics can be modeled by presuming that the times between
adjacent pulses experience a slow (the change from one inter-pulse duration to
the next being much smaller than the duration itself) Brownian-like motion
\cite{Kaulakys1998,Kaulakys1999,Kaulakys2000-2}. Moreover, the nonlinear
stochastic differential equations (SDEs) generating $1/f^\beta$ noise have been
obtained and analyzed \cite{Kaulakys2004,Kaulakys2006,Kaulakys2009} starting
from this point process model. An SDE generating $1/f$ noise must necessarily be
nonlinear, because systems of linear SDEs do not generate signals with a $1/f$
spectrum. Such nonlinear SDEs have been applied to describe signals in
socio-economical systems \cite{Gontis2010,Mathiesen2013}.
In a signal consisting of a sequence of pulses, the pulse number is a
progressively increasing quantity and can be understood as an internal time
of the process. The purpose of this paper is to investigate the distinction
between the internal time of the system and the physical time in connection with
$1/f$ noise. We intend to generalize the mechanism leading to $1/f$ noise in the
point process model, proposed in
\cite{Kaulakys1998,Kaulakys1999,Kaulakys2000-2}. Instead of a sequence of pulses
we start from an SDE describing a Brownian-like motion. We compose a new
equation by interpreting the time in the SDE as an internal parameter and adding
an additional equation relating the internal time to the physical time. We
demonstrate that the relation between the internal time and the external time,
depending on the intensity of the signal, can lead to $1/f$ noise in a wide
interval of frequencies.
A process $x(\tau(t))$ obtained by randomizing the time clock of a random
process $x(t)$ using a new clock $\tau(t)$, where $\tau(t)$ is a random process
with non-negative increments, is called the subordinated process
\cite{Feller1971}. The process $\tau(t)$ is referred to as directing process,
randomized time or operational time. In physics the time-subordinated equations
have been applied to describe anomalous diffusion. Fogedby \cite{Fogedby1994}
introduced a class of coupled Langevin equations consisting of a Langevin
process $x(s)$ in a coordinate $s$ and a L{\'e}vy process representing a
stochastic relation $t(s)$. This class of coupled Langevin equations has been
further investigated in \cite{Baule2005}, where $N$-time joint probability
distributions have been analyzed. Properties of the inverse $\alpha$-stable
subordinator have been considered in \cite{Piryatinska2005,Magdziarz2006}. It
has been shown \cite{Stanislavsky2003, Magdziarz2007} that the description of
anomalous diffusion by a Markovian dynamics governed by an ordinary Langevin
equation but proceeding in an auxiliary, operational time instead of the
physical time is equivalent to a fractional Fokker-Planck equation. Numerical
simulation of subordinated equations has been explored in
\cite{Kleinhans2007,Magdziarz2007}.
In contrast to the description of the anomalous diffusion, in this paper we
consider the situation when small increments of the physical time are
proportional to the increments of the operational time, with the coefficient
of proportionality depending on the stochastic variable $x$ representing
the signal intensity. Thus, in our case the randomness of the operational time
comes from the randomness of $x$.
The paper is organized as follows: In \sref{sec:pulses} we briefly present the
point process model of $1/f$ noise and obtain the PSD of the signal by a new
method. In \sref{sec:subord} we generalize the mechanism leading to $1/f$ noise
presented in \sref{sec:pulses}. We introduce the difference between the
physical and the internal time and consider time-subordinated Langevin
equations. In \sref{sec:example} we examine several stochastic processes and,
introducing the internal and external times, we check whether $1/f$ noise can be
obtained. In \sref{sec:numer} we discuss a way of solving highly nonlinear SDEs
by introducing a suitably chosen internal time and a variable integration
step. \Sref{sec:concl} summarizes our findings.
\section{1/f noise in a signal consisting of pulses}
\label{sec:pulses}One of the models of $1/f$ noise has been presented in
\cite{Kaulakys1998,Kaulakys1999,Kaulakys2000-2}. In this model a signal consists
of pulses with the time between adjacent pulses undergoing a Brownian-like
motion. It has been shown that this Brownian-like motion of the inter-pulse
durations can yield $1/f$ noise. In this section we briefly present this model
and obtain the PSD of the signal using a different method
than the method used in \cite{Kaulakys1998,Kaulakys1999,Kaulakys2000-2}. The new
method allows us to better estimate the frequency range where the PSD exhibits
$1/f$ behavior.
Let us consider a signal consisting of a pulse sequence having correlated
inter-pulse durations. We assume that: (i) the pulse sequences are stationary
and ergodic; (ii) all pulses are described by the same shape function $A(t)$.
The general form of the signal can be written as
\begin{equation}
I(t)=\sum_{k}A(t-t_{k})\,,\label{eq:signal}
\end{equation}
where the function $A(t)$ determines the shape of an individual pulse and the
time moments $t_{k}$ determine when the pulses occur. The inter-pulse duration is
$\vartheta_{k}=t_{k+1}-t_{k}$. Such a pulse sequence is schematically shown in
\fref{fig:point}.
\begin{figure}
\includegraphics[width=0.6\textwidth]{fig1}
\caption{Sequence of pulses with random inter-pulse durations $\vartheta_{k}$.}
\label{fig:point}
\end{figure}
The PSD of such a signal is given by the equation
\begin{equation}
S(f)=\lim_{T\rightarrow\infty}\left\langle \frac{2}{T}\left|
\int_{t_{i}}^{t_{f}}I(t)\rme^{-\rmi 2\pi ft}\rmd t\right|^{2}\right\rangle \,,
\label{eq:spectr}
\end{equation}
where $T=t_{f}-t_{i}$ is the observation time and the brackets
$\langle\cdot\rangle$ denote the averaging over realizations of the pulse
sequence. Note that in equation \eref{eq:spectr} we consider the one-sided PSD,
hence the multiplier $2$. Introducing the Fourier transform $F(\omega)$ of
the pulse shape function $A(t)$, we can write equation \eref{eq:spectr} as
\begin{equation}
S(f)=|F(\omega)|^{2}\lim_{T\rightarrow\infty}\left\langle \frac{2}{T}
\left|\sum_{k}\rme^{-\rmi\omega t_{k}}\right|^{2}\right\rangle \,.
\end{equation}
Here $\omega=2\pi f$. If the pulses are narrow and we are considering low
frequencies then the Fourier transform $F(\omega)$ of the pulse shape is almost
constant. In this case we can replace the actual pulses with $\delta$-functions
and drop $F(\omega)$ in the equations.
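In this narrow-pulse limit the PSD of a single realization is simply $S(f)=(2/T)\,|\sum_{k}\rme^{-\rmi\omega t_{k}}|^{2}$ with $\omega=2\pi f$. A minimal pure-Python estimator is sketched below; the averaging over realizations, implicit in equation \eref{eq:spectr}, is left to the caller:

```python
import cmath


def psd_delta_pulses(pulse_times, frequencies):
    """One-sided PSD of a train of delta-function pulses,
    S(f) = (2/T) |sum_k exp(-i 2 pi f t_k)|^2, for one realization."""
    T = pulse_times[-1] - pulse_times[0]
    psd = []
    for f in frequencies:
        phase_sum = sum(cmath.exp(-2j * cmath.pi * f * t)
                        for t in pulse_times)
        psd.append(2.0 * abs(phase_sum) ** 2 / T)
    return psd
```

For a strictly periodic pulse train this estimator peaks at the harmonics of the pulse rate and vanishes in between, as expected.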
The PSD can be decomposed into two parts,
\begin{eqnarray}
S(f) & = & \lim_{T\rightarrow\infty}\left\langle \frac{2}{T}
\sum_{k,k'}\rme^{\rmi\omega(t_{k'}-t_{k})}\right\rangle \\
& = & \lim_{T\rightarrow\infty}\left\langle \frac{2}{T}\sum_{k}1\right\rangle
+\lim_{T\rightarrow\infty}\left\langle \frac{2}{T}\left(\sum_{k'>k}
\rme^{\rmi\omega(t_{k'}-t_{k})}+\sum_{k>k'}
\rme^{\rmi\omega(t_{k'}-t_{k})}\right)\right\rangle
\label{eq:s-intermed}\\
& \equiv & S_{1}(f)+S_{2}(f)\,.
\end{eqnarray}
The first term can be written as
\begin{equation}
S_{1}(f)=2\nu\,,
\label{eq:s1}
\end{equation}
where $\nu$ is the mean number of pulses per unit time. By interchanging $k$
and $k'$ in the second part of the PSD one sees that it can be expressed as
\begin{equation}
S_{2}(f)=4\mathop{\mathrm{Re}}\lim_{T\rightarrow\infty}\left\langle
\frac{1}{T}\sum_{k'>k}\rme^{\rmi\omega(t_{k'}-t_{k})}\right\rangle\,,
\label{eq:s2}
\end{equation}
where the time difference $t_{k'}-t_{k}$ is
\begin{equation}
t_{k'}-t_{k}=\sum_{q=k}^{k'-1}\vartheta_{q}\,.
\end{equation}
Thus, equation \eref{eq:s-intermed} becomes
\begin{equation}
S(f)=2\nu+4\nu\mathop{\mathrm{Re}}\sum_{q=1}^{\infty}\left\langle
\rme^{\rmi\omega\sum_{j=0}^{q-1}\vartheta_{j}}\right\rangle \,.
\label{eq:spectr-inter}
\end{equation}
Assuming that the joint probability
$P(\vartheta_{0},\vartheta_{1},\ldots,\vartheta_{q-1})$ exists, we can write the
average in the above equation as
\begin{eqnarray}
\fl\left\langle \rme^{\rmi\omega\sum_{j=0}^{q-1}\vartheta_{j}}\right\rangle & = &
\int\rmd\vartheta_{0}\int\rmd\vartheta_{1}\cdots\int\rmd\vartheta_{q-1}
P(\vartheta_{0},\vartheta_{1},\ldots,\vartheta_{q-1})
\rme^{\rmi\omega\sum_{j=0}^{q-1}\vartheta_{j}}\\
& = & \int\rmd\vartheta_{0}
P_{\vartheta}(\vartheta_{0})\rme^{\rmi\omega\vartheta_{0}}
\int\rmd\vartheta_{1}
P(\vartheta_{1}|\vartheta_{0})
\rme^{\rmi\omega\vartheta_{1}}\cdots\nonumber\\
& & \times\int\rmd\vartheta_{q-1}
P(\vartheta_{q-1}|\vartheta_{0},\vartheta_{1},\ldots,\vartheta_{q-2})
\rme^{\rmi\omega\vartheta_{q-1}}\,.
\end{eqnarray}
If the inter-pulse durations follow the Markov process then the conditional
probabilities depend only on the previous value of the inter-pulse duration,
$P(\vartheta_{j}|\vartheta_{0},\vartheta_{1},\ldots,\vartheta_{j-1})=
P(\vartheta_{j}|\vartheta_{j-1})$. In this case
\begin{eqnarray}
\fl\left\langle\rme^{\rmi\omega\sum_{j=0}^{q-1}\vartheta_{j}}\right\rangle =
\int\rmd\vartheta_{0}P_{\vartheta}(\vartheta_{0})\rme^{\rmi\omega\vartheta_{0}}
\int \rmd\vartheta_{1}P(\vartheta_{1}|\vartheta_{0})
\rme^{\rmi\omega\vartheta_{1}}\cdots\nonumber\\
\times\int\rmd\vartheta_{q-1}P(\vartheta_{q-1}|\vartheta_{q-2})
\rme^{\rmi\omega\vartheta_{q-1}}\,.\label{eq:tmp-1}
\end{eqnarray}
Let us consider a situation when the probability density function (PDF) of
inter-pulse durations $P_{\vartheta}(\vartheta)$ is significant only for
$\vartheta$ in some range $\vartheta_{\mathrm{min}}\leq\vartheta\leq
\vartheta_{\mathrm{max}}$ and is very small for $\vartheta$ outside this range.
In addition, we will assume that the conditional probability
$P(\vartheta_{j}|\vartheta_{j-1})$ has the following properties: the average is
equal to the previous value of the inter-pulse duration,
\begin{equation}
\int P(\vartheta_{j}|\vartheta_{j-1})\vartheta_{j}\rmd\vartheta_{j}=\vartheta_{j-1}
\label{eq:cond-avg}
\end{equation}
and the dispersion
\begin{equation}
\sigma^{2}=\int P(\vartheta_{j}|\vartheta_{j-1})(\vartheta_{j}-\vartheta_{j-1})^{2}
\rmd\vartheta_{j}
\end{equation}
is much smaller than the dispersion of inter-pulse durations
\begin{equation}
\sigma_{\vartheta}^{2}=\int P_{\vartheta}(\vartheta)(\vartheta-\bar{\vartheta})^{2}
\rmd\vartheta\,.
\end{equation}
These assumptions mean that the average difference between neighboring
inter-pulse durations is small, i.e., the increments and decrements of the
inter-pulse duration are small in comparison with the inter-pulse duration itself.
When $\vartheta_{\mathrm{max}}\gg\vartheta_{\mathrm{min}}$ then the dispersion
of inter-pulse durations is
$\sigma_{\vartheta}^{2}\sim\vartheta_{\mathrm{max}}^{2}$. Thus, we assume that
$\sigma\ll\vartheta_{\mathrm{max}}$. When the assumptions \eref{eq:cond-avg} and
$\sigma\ll\sigma_{\vartheta}$ hold, we can approximate the conditional
probability $P(\vartheta_{j}|\vartheta_{j-1})$ by a $\delta$-function:
$P(\vartheta_{j}|\vartheta_{j-1})\approx\delta(\vartheta_{j}-\vartheta_{j-1})$.
The approximation in equation \eref{eq:tmp-1} is valid only for sufficiently
small $q$, smaller than some maximum value $q_{\mathrm{max}}$, because the error
grows with the number of terms. Using in equation \eref{eq:tmp-1} the
approximation of the conditional probability by a $\delta$-function we obtain
\begin{equation}
\left\langle\rme^{\rmi\omega\sum_{j=0}^{q-1}\vartheta_{j}}\right\rangle \approx
\int_{0}^{\infty}P_{\vartheta}(\vartheta_{0})
\rme^{\rmi\omega q\vartheta_{0}}\rmd\vartheta_{0}=\chi_{\vartheta}(\omega q)\,,
\end{equation}
where
\begin{equation}
\chi_{\vartheta}(\omega)=\int_{0}^{\infty}P_{\vartheta}(\vartheta)
\rme^{\rmi\omega\vartheta}\rmd\vartheta
\end{equation}
is the characteristic function of inter-pulse durations.
We can estimate the value of $q_{\mathrm{max}}$ as follows: the approximation of
the conditional probability $P(\vartheta_{j}|\vartheta_{j-1})$ by
$\delta$-function is not applicable when the dispersion of $\vartheta_{q-1}$ for
a given $\vartheta_{0}$ becomes comparable with the dispersion
$\sigma_{\vartheta}^{2}$. Assuming that the dispersion of $\vartheta_{j}$, for a
given $\vartheta_{0}$, grows linearly with $j$ (as would be the case for a
random walk) we require that
$\sigma^{2}q_{\mathrm{max}}\lesssim\sigma_{\vartheta}^{2}$ and, therefore,
\begin{equation}
q_{\mathrm{max}}\sim\frac{\vartheta_{\mathrm{max}}^{2}}{\sigma^{2}}\,.
\end{equation}
For high enough frequency, when
\begin{equation}
\omega q_{\mathrm{max}}\vartheta_{\mathrm{max}}\gg1\,,
\end{equation}
the characteristic functions $\chi_{\vartheta}(\omega q)$ corresponding to large
$q\sim q_{\mathrm{max}}$ are small and we can neglect in equation
\eref{eq:spectr-inter} the terms with $q>q_{\mathrm{max}}$. Including only the
terms with $q\leq q_{\mathrm{max}}$ we get the expression for the PSD
\begin{equation}
S(f)\approx2\nu\sum_{q=-q_{\mathrm{max}}}^{q_{\mathrm{max}}}\chi_{\vartheta}(\omega q)\,.
\label{eq:summ}
\end{equation}
After the summation in equation \eref{eq:summ} we obtain
\begin{equation}
\fl S(f)\approx2\nu\int_{0}^{\infty}\frac{\sin\left(\left(\frac{1}{2}+q_{\mathrm{max}}\right)
\omega\vartheta\right)}{\sin\left(\frac{\omega\vartheta}{2}\right)}
P_{\vartheta}(\vartheta)\rmd\vartheta
\approx\frac{4\nu}{\omega}\int_{\omega\vartheta_{\mathrm{min}}}^{\omega\vartheta_{\mathrm{max}}}
\frac{\sin(q_{\mathrm{max}}u)}{u}P_{\vartheta}\left(\frac{u}{\omega}\right)\rmd u\,.
\end{equation}
We have dropped $1/2$ in $\sin(\cdot)$ because $q_{\mathrm{max}}$ is large,
$q_{\mathrm{max}}\gg1$. In addition, for small frequencies
$\omega\vartheta_{\mathrm{max}}\ll1$ we approximated $\sin(u/2)$ in the
denominator as $u/2$. The function $\sin(q_{\mathrm{max}}u)/u$ has a sharp peak
of the width $\pi/q_{\mathrm{max}}$ at $u=0$ and decreases at larger $u$. If
$\omega\vartheta_{\mathrm{max}}\gg\pi/q_{\mathrm{max}}$ then this peak is much
narrower than the width of the PDF $P_{\vartheta}$. In addition, the peak of the
function $\sin(q_{\mathrm{max}}u)/u$ has a significant overlap with
$P_{\vartheta}$ when $\omega\vartheta_{\mathrm{min}}\ll\pi/q_{\mathrm{max}}$. In
this case we obtain the following approximate expression for the PSD:
\begin{equation}
S(f)\approx\frac{4\nu}{\omega}P_{\vartheta}(\vartheta_{\mathrm{min}})
\int_{0}^{\infty}\frac{\sin(q_{\mathrm{max}}u)}{u}\rmd u
=\frac{\nu}{f}P_{\vartheta}(\vartheta_{\mathrm{min}})\,.
\end{equation}
The integral here is the Dirichlet integral, equal to $\pi/2$; since
$\omega=2\pi f$, this equation shows that we get a pure $1/f$ spectrum.
Summing up the assumptions made above, the range of the frequencies where this
expression for PSD holds is
\begin{equation}
\frac{\sigma^{2}}{\vartheta_{\mathrm{max}}^{3}}\ll f\ll
\min\left(\frac{\sigma^{2}}{\vartheta_{\mathrm{min}}\vartheta_{\mathrm{max}}^{2}},
\frac{1}{\vartheta_{\mathrm{max}}}\right)\,.
\label{eq:range}
\end{equation}
When $\vartheta_{\mathrm{min}}<\sigma^{2}/\vartheta_{\mathrm{max}}$ the upper
limit of the frequency range is determined by $\vartheta_{\mathrm{max}}$. In
this case the ratio of upper and lower limiting frequencies is
$\vartheta_{\mathrm{max}}^{2}/\sigma^{2}$. For larger $\vartheta_{\mathrm{min}}$
the ratio of upper and lower limiting frequencies is
$\vartheta_{\mathrm{max}}/\vartheta_{\mathrm{min}}$.
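Equation \eref{eq:range} is easy to evaluate for concrete parameters. A small sketch follows; for $\vartheta_{\mathrm{min}}=0$ it falls back to the $1/\vartheta_{\mathrm{max}}$ upper limit, as discussed above:

```python
def one_over_f_range(sigma, theta_min, theta_max):
    """Frequency interval of 1/f behaviour according to equation (range):
    sigma^2/theta_max^3 << f << min(sigma^2/(theta_min*theta_max^2),
    1/theta_max)."""
    f_low = sigma ** 2 / theta_max ** 3
    f_high = 1.0 / theta_max
    if theta_min > 0:
        f_high = min(sigma ** 2 / (theta_min * theta_max ** 2), f_high)
    return f_low, f_high
```

With the parameters used in figure \ref{fig:spectrum-interpulse} ($\sigma=0.1$, $\vartheta_{\mathrm{min}}=0$, $\vartheta_{\mathrm{max}}=10$) this gives the interval from $10^{-5}$ to $10^{-1}$, consistent with the simulated spectrum.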
\begin{figure}
\includegraphics[width=0.6\textwidth]{fig2}
\caption{The PSD of a signal when the inter-pulse duration performs a random
walk \eref{eq:random-walk}. The dashed (green) line shows $1/f$ spectrum. The
parameters used are $\vartheta_{\mathrm{min}}=0$, $\vartheta_{\mathrm{max}}=10$,
$\sigma=0.1$.}
\label{fig:spectrum-interpulse}
\end{figure}
As an example, let us consider the point process where the inter-pulse durations
perform a random walk and are related via the equation
\begin{equation}
\vartheta_{j+1}=\vartheta_{j}\pm\sigma\,.\label{eq:random-walk}
\end{equation}
Here each sign occurs with probability $1/2$. In addition, we have reflections
from the minimum inter-pulse duration $\vartheta_{\mathrm{min}}=0$ and from the
maximum inter-pulse duration $\vartheta_{\mathrm{max}}$. Numerically obtained
PSD of such a signal is shown in \fref{fig:spectrum-interpulse}. We see a
power-law part in the PSD with the slope $-1$ in a broad range of frequencies
from $4\times10^{-5}$ to $10^{-1}$. This range of frequencies agrees with the
estimation \eref{eq:range}.
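The simulation behind figure \ref{fig:spectrum-interpulse} can be sketched as follows; the initial duration and the random seed are arbitrary choices of ours:

```python
import random


def pulse_times_random_walk(n_pulses, sigma=0.1, theta_min=0.0,
                            theta_max=10.0, seed=1):
    """Generate pulse times whose inter-pulse durations perform a random
    walk with step +/- sigma (each sign with probability 1/2), reflected
    at theta_min and theta_max, cf. equation (random-walk)."""
    rng = random.Random(seed)
    theta = 0.5 * (theta_min + theta_max)   # arbitrary initial duration
    t, times = 0.0, [0.0]
    for _ in range(n_pulses):
        theta += sigma if rng.random() < 0.5 else -sigma
        if theta < theta_min:                # reflect from the lower bound
            theta = 2.0 * theta_min - theta
        elif theta > theta_max:              # reflect from the upper bound
            theta = 2.0 * theta_max - theta
        t += theta
        times.append(t)
    return times
```

The PSD of the resulting delta-pulse train can then be estimated directly from equation \eref{eq:spectr}.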
PSD of the power-law form with the exponents different from $-1$ can
be obtained by including in equation \eref{eq:cond-avg} an additional drift term.
In \cite{Kaulakys2005} it has been shown that the drift
term of the power-law form $\vartheta^{\delta}$ and power-law PDF of the
inter-pulse duration $P_{\vartheta}(\vartheta)\sim\vartheta^{\alpha}$
lead to the power-law PSD $S(f)\sim1/f^{\beta}$ with $\beta=1+\alpha/(2-\delta)$.
As a process generating a power-law probability distribution function
for $\vartheta_{j}$, the multiplicative stochastic process
\begin{equation}
\vartheta_{j+1}=\vartheta_{j}+\gamma\vartheta_{j}^{2\mu-1}+
\sigma\vartheta_{j}^{\mu}\varepsilon_{j}\label{eq:multiplicative}
\end{equation}
has been suggested. Here $\varepsilon_{j}$ are normally distributed uncorrelated
random variables with a zero expectation and unit variance. For this process
$\delta=2\mu-1$ and $\alpha=2\gamma/\sigma^{2}-2\mu$. \Eref{eq:multiplicative}
has been used for modeling the internote interval sequences of musical
rhythms \cite{Levitin2012}.
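The multiplicative process \eref{eq:multiplicative} can be sampled with a few lines of code. This is only a sketch: some restriction of the diffusion is needed to keep $\vartheta_{j}$ positive and stationary, and the simple clipping to an interval used below is our own assumption, not a prescription from \eref{eq:multiplicative}:

```python
import numpy as np

def multiplicative_durations(n, gamma=0.01, sigma=0.1, mu=0.5,
                             theta_min=0.1, theta_max=10.0, seed=1):
    """Sample n inter-pulse durations from
    theta_{j+1} = theta_j + gamma*theta_j**(2*mu-1) + sigma*theta_j**mu * eps_j.
    The clipping to [theta_min, theta_max] is an illustrative restriction."""
    rng = np.random.default_rng(seed)
    theta = np.empty(n)
    th = 1.0
    for j in range(n):
        th = (th + gamma * th ** (2 * mu - 1)
              + sigma * th ** mu * rng.standard_normal())
        th = min(max(th, theta_min), theta_max)  # crude restriction of diffusion
        theta[j] = th
    return theta
```

For these parameters $\delta = 2\mu - 1 = 0$ and $\alpha = 2\gamma/\sigma^{2} - 2\mu = 1$, so the resulting pulse train should have $\beta = 1 + \alpha/(2-\delta) = 3/2$.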
\section{Time-subordinated Langevin equations}
\label{sec:subord}In this section we generalize the model presented in the
previous section. We do this by noticing that in the pulse sequence there are
two strictly increasing sequences of numbers: the physical time $t$ and the
pulse number $k$. The pulse number can be interpreted as an internal time of the
pulse sequence. The relation between the physical time and the internal time is
not deterministic because the inter-pulse durations are random. Thus we propose
the introduction of the difference between the physical and the internal,
operational, time as a way to obtain $1/f$ noise also for other stochastic
processes. To do this we start with a stochastic process and interpret the time
as an internal parameter. In addition to this stochastic process we need to
include an additional relation between the physical time and the internal time.
In order to maintain a similarity to the point process described in the previous
section, the increments of the physical time should be a power-law function of
the magnitude of the signal. In this section as an initial stochastic process we
take a process described by a stochastic differential equation.
A Langevin equation coupled to an additional equation for the physical time has
been introduced to describe anomalous diffusion \cite{Fogedby1994,Baule2005}.
In particular, a position-dependent time subordinator has been investigated in
\cite{Srokowski2014}.
Let us start with the Langevin equation describing the diffusion of the particle
subjected to an external force
\begin{equation}
\rmd x_{t}=a(x_{t})\rmd t+b(x_{t})\rmd W_{t}\,.\label{eq:sde-initial}
\end{equation}
Here $a(x)$ and $b(x)$ are drift and diffusion coefficients and $W_{t}$ is a
standard Wiener process. For generality we assume that both coefficients $a$ and
$b$ can depend on the stochastic variable $x$. In case when the diffusion
coefficient $b$ in equation \eref{eq:sde-initial} depends on $x$ we assume It\^o
interpretation. In equation \eref{eq:sde-initial} we replace the physical time
$t$ by the operational time $\tau$,
\begin{equation}
\rmd x_{\tau}=a(x_{\tau})\rmd\tau+b(x_{\tau})\rmd W_{\tau}\,.\label{eq:x-internal}
\end{equation}
The PDF $P_{x}(x;\tau)$ of the stochastic variable $x$ as a function
of the operational time $\tau$ obeys the Fokker-Planck equation corresponding
to It\^o SDE~\eref{eq:x-internal} \cite{Gardiner2004}
\begin{equation}
\frac{\partial}{\partial\tau}P_{x}(x;\tau)=-\frac{\partial}{\partial x}a(x)P_{x}(x;\tau)
+\frac{1}{2}\frac{\partial^{2}}{\partial x^{2}}b^{2}(x)P_{x}(x;\tau)\,.
\label{eq:FP-x}
\end{equation}
First, we consider the situation where the small increments of the physical
time are deterministic and are proportional to the increments of the operational
time. Thus, the physical time $t$ is related to the operational time $\tau$ via
the equation
\begin{equation}
\rmd t_{\tau}=g(x_{\tau})\rmd\tau\,.\label{eq:internal-external}
\end{equation}
Here the positive function $g(x)$ is the intensity of random time that depends
on the intensity of the signal $x$. If we interpret equation
\eref{eq:sde-initial} as describing the diffusion of a particle in
non-homogeneous medium, the function $g(x)$ models the position of structures
responsible for either trapping or accelerating the particle
\cite{Srokowski2014}. Large values of $g(x)$ correspond to trapping of the
particle, whereas small values of $g(x)$ accelerate the diffusion. For fixed
particle position $x$ the coefficient $g(x)$ in equation
\eref{eq:internal-external} is constant and from equation
\eref{eq:internal-external} follows the relationship
\begin{equation}
\frac{\partial}{\partial\tau}P(t;\tau|x)=-\frac{\partial}{\partial t}g(x)P(t;\tau|x)\,,
\label{eq:prob-t-tau}
\end{equation}
for the PDF $P(t;\tau|x)$ of the physical time $t$ as a function of the
operational time $\tau$. Equations \eref{eq:x-internal} and
\eref{eq:internal-external} together define the subordinated process. However,
now the processes $x(\tau)$ and $t(\tau)$ are not independent.
Let us derive the Langevin equation for the stochastic variable $x$ in the
physical time $t$. To do this, we consider the joint PDF $P_{x,t}(x,t;\tau)$ of
the stochastic variables $x$ and $t$. Equations \eref{eq:x-internal} and
\eref{eq:internal-external} yield the two-dimensional Fokker-Planck equation
\begin{equation}
\frac{\partial}{\partial\tau}P_{x,t}(x,t;\tau)=-\frac{\partial}{\partial x}a(x)P_{x,t}
+\frac{1}{2}\frac{\partial^{2}}{\partial x^{2}}b^{2}(x)P_{x,t}
-\frac{\partial}{\partial t}g(x)P_{x,t}\,.
\label{eq:FP-x-t-tau}
\end{equation}
This equation is a combination of equations \eref{eq:FP-x} and
\eref{eq:prob-t-tau}. The zero of the physical time $t$ coincides with the zero
of the operational time $\tau$, therefore, the initial condition for equation
\eref{eq:FP-x-t-tau} is $P_{x,t}(x,t;0)=P_{x}(x,0)\delta(t)$. Coinciding zeros
of $t$ and $\tau$ lead also to the boundary condition $P_{x,t}(x,0;\tau)=0$ for
$\tau>0$, because $t$ and $\tau$ are strictly increasing.
Instead of $x$ and $t$ we can consider $x$ and $\tau$ as stochastic variables.
The stochastic variable $t$ is related to the operational time $\tau$ via
equation \eref{eq:internal-external}, therefore, the joint PDF
$P_{x,\tau}(x,\tau;t)$ of the stochastic variables $x$ and $\tau$ is related to
the PDF $P_{x,t}(x,t;\tau)$ according to the equation
\begin{equation}
P_{x,\tau}(x,\tau;t)=g(x)P_{x,t}(x,t;\tau)\,.\label{eq:transform}
\end{equation}
This equation can be obtained by noticing that the last term in equation
\eref{eq:FP-x-t-tau} contains the derivative $\frac{\partial}{\partial t}$ and thus
should be equal to $-\frac{\partial}{\partial t}P_{x,\tau}$. Using equations
\eref{eq:FP-x-t-tau} and \eref{eq:transform} we get
\begin{equation}
\fl\frac{\partial}{\partial t}P_{x,\tau}(x,\tau;t)=-\frac{\partial}{\partial x}a(x)
\frac{1}{g(x)}P_{x,\tau}+\frac{1}{2}\frac{\partial^{2}}{\partial x^{2}}b^{2}(x)
\frac{1}{g(x)}P_{x,\tau}-\frac{\partial}{\partial\tau}\frac{1}{g(x)}P_{x,\tau}\,.
\label{eq:FP-x-tau-t}
\end{equation}
The PDF $P_{x,\tau}$ has the initial condition
$P_{x,\tau}(x,\tau;0)=P_{x}(x,0)\delta(\tau)$ and the boundary condition
$P_{x,\tau}(x,0;t)=0$ for $t>0$. The PDF of the subordinated random process
$x_{t}$ is $P(x,t)=\int P_{x,\tau}(x,\tau;t)\rmd\tau$. Integrating both sides of
equation \eref{eq:FP-x-tau-t} we obtain
\begin{equation}
\frac{\partial}{\partial t}P(x,t)=-\frac{\partial}{\partial x}\frac{a(x)}{g(x)}P(x,t)
+\frac{1}{2}\frac{\partial^{2}}{\partial x^{2}}\frac{b^{2}(x)}{g(x)}P(x,t)\,.
\label{eq:FP-x-external}
\end{equation}
Thus, position-dependent trapping leads to the position-dependent
coefficients in the Fokker-Planck equation, even if the initial
SDE~\eref{eq:x-internal} has constant coefficients. \Eref{eq:FP-x-external}
corresponds to the single equation in the physical time with the multiplicative
noise,
\begin{equation}
\rmd x_{t}=\frac{a(x_{t})}{g(x_{t})}\rmd t+\frac{b(x_{t})}{\sqrt{g(x_{t})}}\rmd W_{t}\,.
\label{eq:sde-physical}
\end{equation}
In fact, the Fokker-Planck equation \eref{eq:FP-x-tau-t} can be obtained from
the coupled equations \eref{eq:sde-physical} and
\begin{equation}
\rmd\tau_{t}=\frac{1}{g(x_{t})}\rmd t\,.
\label{eq:external-internal}
\end{equation}
The relationship between the physical time $t$ and the operational time $\tau$
need not be deterministic: equation \eref{eq:internal-external} can
contain a stochastic term. If the fluctuations of this stochastic term are much
faster than the fluctuations of the stochastic variable $x$, we can approximate
them by the average value. In this case $g(x)$ describes the average increment
of the physical time. If this average is positive, the derivation presented
above is still valid and equation \eref{eq:sde-physical} holds.
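The pair of equations \eref{eq:x-internal} and \eref{eq:internal-external} can be integrated directly in the operational time, accumulating the physical time alongside. The sketch below (our illustration; the Euler-Maruyama step and the example coefficients in the test are arbitrary choices) implements this generic subordinated pair:

```python
import numpy as np

def subordinated_path(a, b, g, x0, n_steps, dtau, seed=2):
    """Euler-Maruyama integration of dx = a(x) dtau + b(x) dW_tau together
    with the deterministic physical-time increment dt = g(x) dtau.
    Returns the physical times t_k and the signal values x_k."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps + 1)
    t = np.empty(n_steps + 1)
    x[0], t[0] = x0, 0.0
    for k in range(n_steps):
        dW = np.sqrt(dtau) * rng.standard_normal()
        x[k + 1] = x[k] + a(x[k]) * dtau + b(x[k]) * dW
        t[k + 1] = t[k] + g(x[k]) * dtau  # physical time is strictly increasing
    return t, x
```

Since $g(x)>0$, the physical time is strictly increasing, and the sample $(t_k, x_k)$ is an unevenly sampled realization of the physical-time process \eref{eq:sde-physical}.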
\section{Example equations generating signals with $1/f$ noise}
\label{sec:example}In this section we consider several stochastic processes and,
introducing the internal and external times, we check whether $1/f$ noise can be
obtained. In a signal consisting of pulses the internal time is just the pulse
number and the increment of the physical time is equal to the inter-pulse
duration. The intensity of such a signal is inversely proportional to the
inter-pulse duration. In order to obtain $1/f$ noise similarly as for the signal
consisting of pulses we choose the function $g(x)$ in equation
\eref{eq:internal-external} as a power-law function of $x$, $g(x)\sim
x^{-2\eta}$, where $\eta$ is the power-law exponent.
Let us start from a simple Brownian motion
\begin{equation}
\rmd x_{\tau}=\rmd W_{\tau}\,.\label{eq:brownian}
\end{equation}
In order to keep the stochastic variable $x$ always positive we include
reflective boundary at $x=x_{\mathrm{min}}>0$. We consider equation
\eref{eq:brownian} together with the relation
\begin{equation}
\rmd t_{\tau}=x_{\tau}^{-2\eta}\rmd\tau
\label{eq:phys-intern-2}
\end{equation}
between the physical time $t$ and internal time $\tau$. According to
\eref{eq:sde-physical} the resulting equation in the physical time is
\begin{equation}
\rmd x_{t}=x_{t}^{\eta}\rmd W_{t}\,.
\end{equation}
More generally, the initial equation can include a position-dependent force. If
we take the equation describing the Bessel process
\begin{equation}
\rmd x_{\tau}=\left(\eta-\frac{\lambda}{2}\right)\frac{1}{x_{\tau}}\rmd\tau
+\rmd W_{\tau}
\label{eq:bessel}
\end{equation}
together with equation \eref{eq:phys-intern-2}, then the resulting equation in
the physical time becomes
\begin{equation}
\rmd x_{t}=\left(\eta-\frac{\lambda}{2}\right)x_{t}^{2\eta-1}\rmd t
+x_{t}^{\eta}\rmd W_{t}\,.
\label{eq:sde-1}
\end{equation}
Here the parameter $\lambda$ gives the power-law exponent of the steady-state PDF.
The same equation \eref{eq:sde-1} in physical time arises starting from the
geometric Brownian motion,
\begin{equation}
\rmd x_{\tau}=\left(\eta-\frac{\lambda}{2}\right)x_{\tau}\rmd\tau
+x_{\tau}\rmd W_{\tau}\,,
\end{equation}
and the relation between the internal time and the physical time
\begin{equation}
\rmd t_{\tau}=x_{\tau}^{-2(\eta-1)}\rmd\tau\,.
\end{equation}
Nonlinear SDE~\eref{eq:sde-1} for generating signals with $1/f^{\beta}$ spectrum
has been proposed in \cite{Kaulakys2004,Kaulakys2006}. As has been shown in
\cite{Ruseckas2014}, the reason for the appearance of $1/f$ spectrum is the
scaling properties of the signal: the change of the magnitude of the variable
$x\rightarrow ax$ is equivalent to the change of the time scale $t\rightarrow
a^{2(\eta-1)}t$. Connection of the power-law exponent $\beta$ in the PSD with
the parameters of equation \eref{eq:sde-1} is given by the equation
\cite{Kaulakys2006,Ruseckas2014}
\begin{equation}
\beta=1+\frac{\lambda-3}{2(\eta-1)}\,.
\label{eq:beta}
\end{equation}
Analysis \cite{Ruseckas2010} of SDE~\eref{eq:sde-1} shows that equation
\eref{eq:beta} is valid only for the values of the parameters $\eta$ and
$\lambda$ yielding $0 < \beta < 2$.
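As a trivial but convenient check, equation \eref{eq:beta} and its validity range can be encoded directly (a sketch; the function name and the error handling are our own choices):

```python
def psd_exponent(eta, lam):
    """Power-law exponent beta in S(f) ~ 1/f**beta for SDE (eq:sde-1):
    beta = 1 + (lambda - 3) / (2*(eta - 1)), valid only while 0 < beta < 2."""
    if eta == 1:
        raise ValueError("eta = 1 is the scale-free case; the formula degenerates")
    beta = 1.0 + (lam - 3.0) / (2.0 * (eta - 1.0))
    if not 0.0 < beta < 2.0:
        raise ValueError("formula holds only for parameters giving 0 < beta < 2")
    return beta
```

For $\eta = 5/2$ and $\lambda = 3$ one recovers pure $1/f$ noise, $\beta = 1$.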
Nonlinear SDE~\eref{eq:sde-1} leads to a stationary process and a non-diverging
steady-state PDF only when the diffusion of the stochastic variable $x$ is
restricted. The simplest choice of restriction is reflective boundary
conditions at $x=x_{\mathrm{min}}$ and $x=x_{\mathrm{max}}$. The presence of the
restrictions of diffusion makes the scaling properties of equation
\eref{eq:sde-1} only approximate and limits the power-law part of the PSD to a
finite range of frequencies. This range of frequencies has been qualitatively
estimated as \cite{Ruseckas2014}
\begin{eqnarray}
x_{\mathrm{min}}^{2(\eta-1)} \ll 2\pi f\ll x_{\mathrm{max}}^{2(\eta-1)}\,,\qquad\eta>1\,,\\
x_{\mathrm{max}}^{-2(1-\eta)} \ll 2\pi f\ll x_{\mathrm{min}}^{-2(1-\eta)}\,,\qquad\eta<1\,.\nonumber
\end{eqnarray}
By increasing the ratio $x_{\mathrm{max}}/x_{\mathrm{min}}$ one can get an
arbitrarily wide range of the frequencies where the PSD has $1/f^{\beta}$
behavior.
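The qualitative frequency band above can likewise be evaluated numerically (our sketch; only the $\eta>1$ and $\eta<1$ branches of the estimate are encoded, with the $2\pi$ factor moved to the frequency side):

```python
import numpy as np

def one_over_f_band(eta, x_min, x_max):
    """Return (f_low, f_high) where SDE (eq:sde-1) with reflecting boundaries
    is expected to show 1/f**beta behavior, from the qualitative estimate
    x_min**(2(eta-1)) << 2 pi f << x_max**(2(eta-1)) for eta > 1
    (the two bounds swap roles for eta < 1)."""
    lo = x_min ** (2 * (eta - 1)) / (2.0 * np.pi)
    hi = x_max ** (2 * (eta - 1)) / (2.0 * np.pi)
    return (lo, hi) if eta > 1 else (hi, lo)
```

For $\eta=5/2$, $x_{\mathrm{min}}=1$ and $x_{\mathrm{max}}=1000$ the band spans nine decades of frequency, consistent with the wide $1/f$ region in \fref{fig:nonlin-sde}(b).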
\begin{figure}
\includegraphics[width=0.45\textwidth]{fig3a}\hspace{1cm}\includegraphics[width=0.45\textwidth]{fig3b}
\caption{(a) Signal generated by equation \eref{eq:sde-1} with the parameters
$\eta=5/2$ and $\lambda=3$ (solid red line) together with the corresponding
internal time (dashed green line). Reflective boundaries at $x_{\mathrm{min}}=1$
and $x_{\mathrm{max}}=1000$ have been used. (b) PSD
of the generated signal. The dashed green line shows the $1/f$ slope.}
\label{fig:nonlin-sde}
\end{figure}
An example of a signal generated by equation \eref{eq:sde-1} together with the
internal time $\tau$ is shown in \fref{fig:nonlin-sde}(a). We used the
parameters $\eta=5/2$, $\lambda=3$ and reflective boundaries at
$x_{\mathrm{min}}=1$ and $x_{\mathrm{max}}=1000$. The method of numerical
solution is discussed in the next section. We see that the internal time $\tau$
increases rapidly when the signal $x$ acquires large values and $\tau$ changes
slowly when $x$ is small. According to equation \eref{eq:beta} the choice of
$\lambda=3$ should result in $1/f$ behavior of the PSD. The corresponding power
spectral density $S(f)$ is shown in \fref{fig:nonlin-sde}(b). The numerical
solution of the equation confirms the presence of a wide region of frequencies
where the spectrum has $1/f$ behavior.
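Spectra such as \fref{fig:nonlin-sde}(b) can be checked with a simple periodogram and a log-log slope fit. The sketch below is our own helper (normalization and fitting range are illustrative choices) for a uniformly sampled signal:

```python
import numpy as np

def periodogram(x, dt):
    """One-sided periodogram estimate S(f) of a uniformly sampled signal."""
    n = len(x)
    spec = np.fft.rfft(x - np.mean(x))
    f = np.fft.rfftfreq(n, d=dt)
    S = 2.0 * dt * np.abs(spec) ** 2 / n
    return f[1:], S[1:]          # drop the zero-frequency bin

def fit_slope(f, S, f_lo, f_hi):
    """Least-squares slope of log S versus log f inside [f_lo, f_hi]."""
    m = (f >= f_lo) & (f <= f_hi)
    p = np.polyfit(np.log(f[m]), np.log(S[m]), 1)
    return p[0]
```

A slope close to $-1$ over the band estimated above indicates $1/f$ behavior; for white noise the fitted slope is close to zero.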
When the stochastic variable $x$ can acquire both positive and negative values,
the function $g(x)$ cannot be just a simple power-law, because $g(x)$ becomes
unbounded or equal to zero when $x\rightarrow0$. In order to avoid this problem
we require that the function $g(x)$ have power-law behavior only
asymptotically, for large values of $|x|$. One possible choice is
\begin{equation}
g(x)=\frac{1}{(x^{2}+x_{0}^{2})^{\eta}}\,.
\end{equation}
Here we added a constant $x_{0}$ that corrects the behavior of the function
$g(x)$ at $x=0$. The power-law behavior is preserved when $|x|\gg x_{0}$.
The stochastic variable $x$ can acquire both positive and negative values if we
start from the Ornstein-Uhlenbeck process
\begin{equation}
\rmd x_{\tau}=-\gamma x_{\tau}\rmd\tau+\rmd W_{\tau}\,.
\label{eq:orn-uhl}
\end{equation}
Here the parameter $\gamma$ is the relaxation rate. We consider equation
\eref{eq:orn-uhl} together with the relation
\begin{equation}
\rmd t_{\tau}=\frac{1}{(x_{\tau}^{2}+x_{0}^{2})^{\eta}}\rmd\tau
\label{eq:phys-intern-3}
\end{equation}
between the physical time $t$ and internal time $\tau$. According to
\eref{eq:sde-physical}, equations \eref{eq:orn-uhl} and \eref{eq:phys-intern-3}
lead to the SDE
\begin{equation}
\rmd x_{t}=-\gamma(x_{t}^{2}+x_{0}^{2})^{\eta}x_{t}\rmd t
+(x_{t}^{2}+x_{0}^{2})^{\frac{\eta}{2}}\rmd W_{t}
\label{eq:sde-4}
\end{equation}
in the physical time $t$. \Eref{eq:sde-4} can be written as
\begin{equation}
\rmd x_{t}=\left(-\frac{x_{t}^{2}+x_{0}^{2}}{x_{\mathrm{max}}^{2}}\right)
(x_{t}^{2}+x_{0}^{2})^{\eta-1}x_{t}\rmd t+(x_{t}^{2}+x_{0}^{2})^{\frac{\eta}{2}}\rmd W_{t}
\end{equation}
where
\begin{equation}
x_{\mathrm{max}}=\frac{1}{\sqrt{\gamma}}
\end{equation}
defines a cut-off position at large values of $x$.
Another interesting equation describing the evolution in internal time is
\begin{equation}
\rmd x_{\tau}=\left(\eta-\frac{\lambda}{2}\right)\frac{x_{\tau}}{x_{\tau}^{2}+x_{0}^{2}}
\rmd\tau +\rmd W_{\tau}\,.
\label{eq:sde-3}
\end{equation}
In this equation the relaxation rate depends on the magnitude of the signal. If
$|x|\ll x_{0}$ we get the equation of Ornstein-Uhlenbeck type, whereas for
large values of $|x|$ the relaxation decreases with increasing $|x|$.
\Eref{eq:sde-3} together with \eref{eq:phys-intern-3} result in the following
equation in the physical time:
\begin{equation}
\rmd x_{t}=\left(\eta-\frac{\lambda}{2}\right)(x_{t}^{2}+x_{0}^{2})^{\eta-1}x_{t}
\rmd t+(x_{t}^{2}+x_{0}^{2})^{\frac{\eta}{2}}\rmd W_{t}\,.
\label{eq:sde-q-gauss}
\end{equation}
Finally, the combination of equations \eref{eq:orn-uhl} and \eref{eq:sde-3},
\begin{equation}
\rmd x_{\tau}=-\left(\gamma-\left(\eta-\frac{\nu}{2}\right)\frac{1}{x_{\tau}^{2}
+x_{0}^{2}}\right)x_{\tau}\rmd\tau+\rmd W_{\tau}\,,
\end{equation}
together with \eref{eq:phys-intern-3} leads to a more general equation
in the physical time
\begin{equation}
\rmd x_{t}=\left(\eta-\frac{\nu}{2}-\frac{x_{t}^{2}+x_{0}^{2}}{x_{\mathrm{max}}^{2}}\right)
(x_{t}^{2}+x_{0}^{2})^{\eta-1}x_{t}\rmd t
+(x_{t}^{2}+x_{0}^{2})^{\frac{\eta}{2}}\rmd W_{t}\,.
\end{equation}
Nonlinear SDE~\eref{eq:sde-q-gauss} has been investigated in
\cite{Ruseckas2011}. It has been shown that SDE~\eref{eq:sde-q-gauss} generates
a signal with the steady-state PDF described by the $q$-Gaussian distribution
featuring in the non-extensive statistical mechanics. In addition, the spectrum
of the generated signal has $1/f^{\beta}$ behavior in a wide range of
frequencies, with the power-law exponent $\beta$ given by equation
\eref{eq:beta}.
\begin{figure}
\includegraphics[width=0.45\textwidth]{fig4a}\hspace{1cm}\includegraphics[width=0.45\textwidth]{fig4b}
\caption{(a) Signal generated by equation \eref{eq:sde-q-gauss} with the
parameters $\eta=5/2$, $\lambda=3$ and $x_{0}=1$ (solid red line) together with
the corresponding internal time (dashed green line). (b) PSD
of the generated signal. The dashed green line shows the $1/f$ slope.}
\label{fig:q-gaussian}
\end{figure}
An example of a signal generated by equation \eref{eq:sde-q-gauss} together with
the internal time is shown in \fref{fig:q-gaussian}(a). We used the parameters
$\eta=5/2$, $\lambda=3$ and $x_{0}=1$. We see that the internal time $\tau$
increases rapidly when the absolute value of the signal $x$ is large and $\tau$
changes slowly when the absolute value of $x$ is small. The internal time $\tau$
increases both for positive and negative values of $x$.
The PSD of a signal generated by equation \eref{eq:sde-q-gauss} is shown in
\fref{fig:q-gaussian}(b). The numerical solution confirms the presence of a region
where the spectrum behaves as $1/f$. Thus the introduction of negative values of
$x$ does not destroy $1/f$ spectrum.
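A convenient way to generate such a signal is to integrate \eref{eq:sde-3} in the operational time, which is well-behaved, and accumulate the physical time via \eref{eq:phys-intern-3}. The sketch below is equivalent to the stiff physical-time SDE~\eref{eq:sde-q-gauss}; for brevity it omits the restriction of diffusion at large $|x|$ used for the figures, so it is an illustration rather than the production scheme:

```python
import numpy as np

def q_gaussian_signal(n_steps, eta=2.5, lam=3.0, x0=1.0, dtau=1e-3, seed=3):
    """Euler-Maruyama integration of eq. (sde-3) in operational time tau,
    with the physical-time increment dt = dtau / (x^2 + x0^2)**eta.
    Returns (t, x): an unevenly sampled realization of eq. (sde-q-gauss)."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps + 1)
    t = np.empty(n_steps + 1)
    x[0], t[0] = x0, 0.0
    for k in range(n_steps):
        drift = (eta - lam / 2.0) * x[k] / (x[k] ** 2 + x0 ** 2)
        x[k + 1] = x[k] + drift * dtau + np.sqrt(dtau) * rng.standard_normal()
        t[k + 1] = t[k] + dtau / (x[k] ** 2 + x0 ** 2) ** eta
    return t, x
```

Note that the physical time advances slowly when $|x|$ is large, reproducing the behavior of the internal time seen in \fref{fig:q-gaussian}(a).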
\section{Numerical approach}
\label{sec:numer}Introduction of the internal time can be an effective technique
for the solution of highly non-linear SDEs. For such equations, solution schemes
involving a fixed time step $\Delta t$ can be inefficient. For example, in
equation \eref{eq:sde-1} with $\eta>1$ large values of the stochastic variable
$x$ lead to large coefficients and thus require a very small time step. The
numerical solution scheme can be improved by introducing
the internal time $\tau$ that is different from the real, physical, time $t$.
Let us consider equation \eref{eq:sde-1} with the noise multiplicativity
exponent $\eta>1$. We can introduce internal time $\tau$ using the equation
\begin{equation}
\rmd\tau_{t}=x_{t}^{2\eta}\rmd t\,.
\end{equation}
Then, according to equations \eref{eq:sde-physical} and
\eref{eq:external-internal}, SDE~\eref{eq:sde-1} is equivalent to coupled
equations
\begin{eqnarray}
\rmd x_{\tau} = \left(\eta-\frac{\lambda}{2}\right)\frac{1}{x_{\tau}}\rmd\tau+\rmd W_{\tau}\,,
\label{eq:sde-5}\\
\rmd t_{\tau} = \frac{1}{x_{\tau}^{2\eta}}\rmd\tau\,.
\end{eqnarray}
Now, equation \eref{eq:sde-5} is much simpler than the initial equation
\eref{eq:sde-1}. Discretizing the internal time $\tau$ with the step
$\Delta\tau$ and using the Euler-Maruyama approximation for the
SDE~\eref{eq:sde-5} we get
\begin{eqnarray}
x_{k+1}= x_{k}+\left(\eta-\frac{\lambda}{2}\right)\frac{1}{x_{k}}\Delta\tau
+\sqrt{\Delta\tau}\varepsilon_{k}\,,\label{eq:discr-1}\\
t_{k+1}= t_{k}+\frac{\Delta\tau}{x_{k}^{2\eta}}\,.\label{eq:discr-2}
\end{eqnarray}
Here $\varepsilon_{k}$ are normally distributed uncorrelated random variables.
Equations \eref{eq:discr-1} and \eref{eq:discr-2} provide the numerical method
for solving equation \eref{eq:sde-1}. One can interpret equations
\eref{eq:discr-1}, \eref{eq:discr-2} as an Euler-Maruyama scheme with a variable
time step $\Delta t_{k}=\Delta\tau/x_{k}^{2\eta}$ that adapts to the
coefficients in the equation. The cost of the introduction of the internal time
is the randomness of the increments of the real, physical time $t$. To get the
discretization of time with fixed steps the signal generated in such a way
should be interpolated.
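The scheme \eref{eq:discr-1}, \eref{eq:discr-2} translates directly into code. The sketch below (our illustration; reflecting boundaries and the final linear interpolation onto a fixed grid are implemented as described in the text, but the helper names and the resampling step are arbitrary choices) solves equation \eref{eq:sde-1}:

```python
import numpy as np

def solve_sde1_adaptive(eta, lam, x_min, x_max, n_steps, dtau, seed=4):
    """Variable-step solution of eq. (sde-1) via the internal-time scheme
    (discr-1)-(discr-2): fixed step dtau in internal time, random physical-time
    increment dt_k = dtau / x_k**(2*eta), reflecting boundaries on x."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps + 1)
    t = np.empty(n_steps + 1)
    x[0], t[0] = x_min, 0.0
    for k in range(n_steps):
        xk = x[k]
        xn = xk + (eta - lam / 2.0) / xk * dtau \
             + np.sqrt(dtau) * rng.standard_normal()
        if xn < x_min:                 # reflect from the lower boundary
            xn = 2.0 * x_min - xn
        elif xn > x_max:               # reflect from the upper boundary
            xn = 2.0 * x_max - xn
        x[k + 1] = xn
        t[k + 1] = t[k] + dtau / xk ** (2.0 * eta)
    return t, x

def resample_uniform(t, x, dt):
    """Linear interpolation onto a uniform grid, as needed for FFT-based PSDs."""
    grid = np.arange(t[0], t[-1], dt)
    return grid, np.interp(grid, t, x)
```

The cost of the adaptivity, as noted above, is that the physical-time samples are unevenly spaced, hence the final interpolation step.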
Another possible choice is to introduce the internal time $\tau$ by the equation
\begin{equation}
\rmd\tau_{t}=x_{t}^{2(\eta-1)}\rmd t\,.
\end{equation}
In this case we obtain a different pair of equations
\begin{eqnarray}
\rmd x_{\tau} = \left(\eta-\frac{\lambda}{2}\right)x_{\tau}\rmd\tau+x_{\tau}\rmd W_{\tau}\,,
\label{eq:sde-6}\\
\rmd t_{\tau} = \frac{1}{x_{\tau}^{2(\eta-1)}}\rmd\tau\,.
\end{eqnarray}
Note, that now the internal time $\tau$ is dimensionless even if $x$ and $t$ are
not. Discretizing the internal time $\tau$ with the step $\Delta\tau$ and using
the Euler-Maruyama approximation for the SDE~\eref{eq:sde-6} we obtain
\begin{eqnarray}
x_{k+1} = x_{k}+\left(\eta-\frac{\lambda}{2}\right)x_{k}\Delta\tau
+x_{k}\sqrt{\Delta\tau}\varepsilon_{k}\,,\\
t_{k+1} = t_{k}+\frac{\Delta\tau}{x_{k}^{2(\eta-1)}}\,.
\end{eqnarray}
This method of solution has been proposed in \cite{Kaulakys2004}. On the other
hand, using Milstein approximation for the SDE~\eref{eq:sde-6} we have
\begin{eqnarray}
x_{k+1} = x_{k}+\left(\eta-\frac{\lambda}{2}\right)x_{k}\Delta\tau
+x_{k}\sqrt{\Delta\tau}\varepsilon_{k}+\frac{1}{2}x_{k}\Delta\tau(\varepsilon_{k}^{2}-1)\,,
\label{eq:discr-3}\\
t_{k+1} = t_{k}+\frac{\Delta\tau}{x_{k}^{2(\eta-1)}}\,.
\end{eqnarray}
Note, that the last term in equation \eref{eq:discr-3} differs from the
corresponding term in the equation obtained just by using a variable time step
$\Delta t=\Delta\tau/x_{k}^{2(\eta-1)}$ in the Milstein approximation for
equation \eref{eq:sde-1}.
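A single Milstein step \eref{eq:discr-3} is simple enough to write out explicitly (our sketch; for the diffusion coefficient $b(x)=x$ the Milstein correction is $\frac{1}{2}b\,b'(\Delta W^{2}-\Delta\tau)=\frac{1}{2}x\,\Delta\tau(\varepsilon^{2}-1)$):

```python
import numpy as np

def milstein_step_sde6(x, eta, lam, dtau, eps):
    """One Milstein step (discr-3) for dx = (eta - lam/2) x dtau + x dW;
    eps is a standard normal random variable."""
    return (x + (eta - lam / 2.0) * x * dtau
            + x * np.sqrt(dtau) * eps
            + 0.5 * x * dtau * (eps ** 2 - 1.0))
```

The last term is precisely the one that distinguishes this scheme from a naive variable-time-step Milstein discretization of equation \eref{eq:sde-1}.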
Numerical simulation of subordinated equations using fixed step of operational
time and random increment of physical time has been discussed in
\cite{Kleinhans2007,Magdziarz2007}. Variable time step makes numerical
simulation in \cite{Kleinhans2007,Magdziarz2007} similar to the method proposed
in this section. The main difference between our method and previous discussions
of subordinated equations lies in the dependence of the increment of the physical
time on the magnitude of the signal $x$.
\section{Discussion and conclusions}
\label{sec:concl}
In summary, we have demonstrated that, starting from a random process described
by an SDE and introducing a difference between the internal time and the
physical time, $1/f$ behavior of the PSD can be obtained.
One of the physical situations where the difference between the internal and
physical time can arise is transport in an inhomogeneous medium. Impurities and
regular structures in the medium can cause transport with variable speed: the
particle may be trapped for some time or accelerated. Nonhomogeneous systems
exhibit not only subdiffusion related to traps, but also enhanced diffusion as a
result of the disorder. For example, movement of particles between two
neighboring lattice sites in an interacting particle system is superdiffusive
due to the disorder and subdiffusive without the disorder \cite{Ben-Naim2009}.
The dynamics in a medium with traps is described by the continuous time random
walk theory (CTRW) \cite{Metzler2000,Metzler2004}. In a description equivalent to
CTRW the dynamics of the particle is Markovian and governed by the Langevin
equation in an auxiliary, operational, time instead of the physical time. This
Markovian process is subordinated to the process yielding the physical time.
In the case of subdiffusion the PSD of the signals generated by subordinated
Langevin equations has power-law behavior $S(f)\sim f^{\alpha - 1}$ as
$f\rightarrow 0$ \cite{Yim2006}, where $\alpha$ is the power-law exponent in the
time dependence of the mean square displacement. Since for subdiffusion $0 <
\alpha < 1$, the power-law exponent $\beta$ in the PSD is smaller than $1$. The results
obtained in this paper suggest that $1/f$ noise in subdiffusion should occur in
heterogeneous medium, where trapping time depends on the position
\cite{Kazakevicius2015}.
The traditional CTRW provides a homogeneous description of the medium. A more
complex situation is diffusion in nonhomogeneous media, for example,
diffusion on fractals and multifractals \cite{Schertzer2001}. Heterogeneous
medium with steep gradients of the diffusivity can be created via a local
variation of the temperature in thermophoresis experiments
\cite{Maeda2012,Mast2013}. Spatial heterogeneities are also present in the case
of anomalous diffusion in subsurface hydrology \cite{Dentz2010}. In the random
walk description spatially varying diffusivity can be translated into a local
dependence of the waiting time for a jump event. In the heterogeneous medium the
properties of a trap can reflect the medium structure, thus in the description
of transport the waiting time should explicitly depend on the position of the
particle \cite{Srokowski2014}. A method to include a position-dependent waiting
time is the consideration of a position-dependent time subordinator
\cite{Srokowski2014}.
In general, the trapping time can depend not on the position of the particle but
on some other quantity. Then in the dynamics of this quantity the difference
between the physical and operational time also arises, with the relationship
between the times dependent on the intensity of the signal.
In socio-economical systems the internal time can reflect fluctuating human
activity \cite{Mathiesen2013}. For example, in finance the long-range
correlations in volatility arise due to fluctuations of the trading activity
\cite{Plerou2001,Gabaix2003}.
We have shown that $1/f$ noise occurs when the internal time and the physical
time are related via the power-law function of the signal intensity, for
example, via equation \eref{eq:phys-intern-2} or \eref{eq:phys-intern-3}.
Although we have considered only random processes described by a SDE, we expect
that the mechanism of the appearance of $1/f$ noise presented here is quite
general and should work also for other random processes. We anticipate that the
present model can be useful for explaining $1/f$ noise in different complex
systems.
In addition, we suggested a way of solving highly non-linear SDEs by introducing
a suitably chosen internal time and a variable integration step.
\section*{References}
\providecommand{\newblock}{}
\section{Introduction}\label{Intro}
Ultra luminous X-ray sources (ULXs) are off-nuclear extragalactic
X-ray sources with isotropic bolometric luminosities in excess of the
Eddington limit for a $\sim20M_{\odot}$ black hole (henceforth referred to
as BH). An empirical approach defines ULXs as sources with X-ray
luminosities in the range $L_{X}\sim10^{39}-10^{41}{\rm\,erg\,s^{-1}}$. These
sources are quite common in starburst and late type galaxies and are
thought to be mostly binaries with a donor star transferring matter
onto a black hole. There is not yet a general consensus on the
mechanism that drives such large luminosities (see
e.g. \citealt{zam09} and references therein). Some proposed
explanations involve $\simless 20M_{\odot}$ BHs with thick discs producing
beamed emission, slim discs with photon bubble trapping and
super-Eddington emission (see e.g., \citealt{pou07} and references
therein), or two-phase super-Eddington radiatively efficient discs
\citep{soc06}. The most intriguing possibility would be the presence
of the so-called intermediate mass black holes (IMBHs): even if ULXs
are genuine isotropic emitters, the Eddington limit would not be
violated if the accretor has a mass of
$\sim10^{2}-10^{4}\,M_{\odot}$. An alternative formation scenario has
been proposed and recently explored in detail in which a
portion of ULXs contains $\sim 30-90 M_\odot$ BHs formed in a low
metallicity environment and accreting in a slightly critical regime
(\citealt{map09,zam09}). The binary nature of ULXs was supported by
the possible orbital periodicity at 62 days detected by \citet{kaa06,
kaa06b} and \citet{kaa07} for the ULX M82 X-1. Constraints from the
orbital period, X-ray luminosity and optical photometry suggested that
the most likely explanation for the compact accretor in this ULX is an
IMBH of mass larger than 200 $M_{\odot}$ (\citealt{pat06}; see however
\citealt{beg06} for an alternative interpretation without an IMBH).
\citet{pat08} and \citet{mad08} demonstrated how several further
constraints can be put on the nature of ULXs from the identification
of optical counterparts. In the majority of cases optical
identifications indicate the existence of high mass donor stars
($M\lower.5ex\hbox{\gtsima} 8 M_{\odot}$). The optical emission coming from the donor
is expected to be strongly contaminated by the contribution of the
external regions of the accretion disc, and by reprocessing of the
X-ray radiation generated in the innermost portion of the disc.
Contribution from the X-ray irradiation of the donor star surface also
plays a role (\citealt{cop05}, \citealt{pat08}).
The identification of a unique optical counterpart however is not an
easy task, as ULXs are often observed in crowded regions of star
formation. The ULX NGC1313 X-2 (henceforth referred to as X-2) is one
of the most promising sources in this respect. \citet{zam04} and
\citet{muc05} first identified two candidate counterparts for this
source (C1 and C2). \citet{muc07} and \citet{liu07} pinpointed C1 as
the most likely counterpart by means of a model of the optical
emission and a refined analysis of the HST and {\it Chandra}
astrometry. Through an independent theoretical investigation, we
showed that the object C1 is the only one consistent with the
properties predicted by a binary evolution model
(\citealt{pat08}). \citet{muc05} estimated a mass of $\sim
20\,M_{\odot}$ for C1, later refined to be in the interval 10-18
$M_\odot$ by \citet{muc07}. \citet{gri08} identified the donor as a
star of 8-16$M_{\odot}$ whereas \citet{liu07} proposed a somewhat smaller
mass ($\sim 8M_{\odot}$). \citet{gri08} derived an age of $20\pm5$ Myr for
the star cluster where X-2 resides and hence an upper limit for the
donor mass of $\sim 12M_{\odot}$. However, none of these observational
studies considered the effects of binary evolution on the donor
star colours and age. Taking into account the effects of binary
evolution and irradiation, we estimated the mass of C1 to be $\simless
15\,M_{\odot}$ (\citealt{pat08}). Finally, from an estimate of the
amount of energy injected in the bubble nebula surrounding this ULX,
\citet{pak06} derived a timescale for the active phase of X-2 of the
order of $\sim 10^{6}$ yr. \citet{pak06} further confirmed C1 as the
likely counterpart, reporting evidence of a broad He~II $\lambda$4686 line in
the optical spectrum of X-2.
Recently, \citet{liu09} tentatively identified a modulation in the $B$
band lightcurve of X-2 with a period of $6.12\pm0.16$ d. If this is
confirmed, X-2 will be the most constrained ULX known to date.
In this Letter we use all the available optical data of X-2 (including the
$\sim 6$ d orbital period) and
compare them with the evolution of an ensemble of irradiated X-ray
binary models in order to constrain the nature of the compact
accretor.
\begin{figure*}
\begin{center}
\rotatebox{-90}{\includegraphics[width=5.5cm]{caseAB.eps}}
\end{center}
\caption{Colour-Magnitude Diagram (CMD) for binaries with a 100$M_{\odot}$
({\it left panel}) and a 20$M_{\odot}$ BH ({\it right panel}), undergoing
\textbf{case AB} mass transfer. The black triangle corresponds to the
optical counterpart C1 with its 1$\sigma$
errorbar. The assumed distance for X-2 is 3.7 Mpc and the errorbar in
$M_{V}$ reflects the maximum uncertainty in the distance determination of
NGC 1313 (see text). All the tracks are plotted only during the
contact phases. The effects of irradiation and the optical
contamination of the accretion disc are included. The
donors are: $25 M_{\odot}$ ({\it yellow}), $20 M_{\odot}$ ({\it cyan}), $15
M_{\odot}$ ({\it pink}), $12 M_{\odot}$ ({\it blue}), $10 M_{\odot}$ ({\it
green}), and $8 M_{\odot}$ ({\it red}).}
\label{caseAB}
\end{figure*}
\section{Binary evolution and X-ray reprocessing}\label{evol}
The binary evolution model adopted here is the same as outlined in
\citealt{pat08} (to which we refer for details; see also
\citealt{mad08}, \citealt{pat06}, \citealt{mad06}, \citealt{rap05}, \citealt{pat05}, \citealt{pod03}, for a discussion on BH
binary evolution models). We consider two ensembles of binaries: the
first with BHs of $20\,M_{\odot}$ and the second with BHs of
$100\,M_{\odot}$. The donor stars have initial masses between 8 and
25$M_{\odot}$. Two different values of the (zero age) metallicity,
both sub-solar, are considered: $Z=0.004$ (\citealt{ryd93}) and
$Z=0.01$ (close to the value $Z=0.008$ estimated by \citealt{had07}
and \citealt{wal97}). If the donor starts a contact phase via
Roche-lobe overflow (RLOF), we assume that a geometrically thin
optically thick accretion disc forms around the BH. The efficiency for
the conversion of rest mass into radiation is fixed at $10\%$. The
accretion rate is instantaneously taken to be equal to the
mass-transfer rate from the companion. When the accretion rate ${\dot
M}$ exceeds the Eddington rate ${\dot M}_{Edd}$, we impose ${\dot M} =
{\dot M}_{Edd}$ and assume that the excess mass is expelled from the
system. In our simulations, all the binaries that start RLOF on the
main sequence (MS) also have a second episode of mass transfer after
the terminal age main sequence (TAMS). Therefore we term such binary
models as \textbf{case AB} while those starting the first contact
phase after the TAMS are termed \textbf{case B}.
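As an illustration (not part of the authors' code), the Eddington rate that caps the accretion in the prescription above can be evaluated directly from $\dot M_{Edd} = L_{Edd}/(\eta c^2)$, with the $10\%$ efficiency quoted in the text; the physical constants are standard cgs values.

```python
# Hedged sketch: Eddington-limited accretion rate for a BH accretor,
# assuming the 10% radiative efficiency adopted in the text.
import math

G      = 6.674e-8    # gravitational constant, cgs
c      = 2.998e10    # speed of light, cm/s
m_p    = 1.673e-24   # proton mass, g
sigmaT = 6.652e-25   # Thomson cross section, cm^2
Msun   = 1.989e33    # solar mass, g
yr     = 3.156e7     # year, s

def mdot_edd(M_bh_msun, eta=0.1):
    """Eddington accretion rate in Msun/yr: L_Edd / (eta c^2)."""
    L_edd = 4.0 * math.pi * G * (M_bh_msun * Msun) * m_p * c / sigmaT
    return L_edd / (eta * c**2) * yr / Msun

print(mdot_edd(20.0))    # ~4e-7 Msun/yr for a 20 Msun BH
print(mdot_edd(100.0))   # ~2e-6 Msun/yr for a 100 Msun BH
```

These values bracket the mass-transfer rates discussed later for the donors considered here, which is why the super-Eddington cap matters for the models.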
The UV/optical luminosity is computed by adding to the donor emission
the contribution of the accretion disc, and by including the effects of
the reprocessed X-ray radiation (see \citealt{pat08} for details). In
the present version of the code we improved the accuracy in the
calculations of the optical colours of the irradiated accretion disc,
which are now $\sim 0.1$ mag redder than those previously reported.
None of the results and conclusions published in \cite{pat08}
are significantly affected by this revision.
For all the donors considered here the disc contribution to the
UV/optical emission can be dominant for BHs of $\sim 100 M_\odot$,
whereas it is less important but still significant for 20$M_{\odot}$ BHs
\citep{pat08}. The albedo of the irradiated layers is fixed at
$f_a=0.9$. To investigate the effects of a different albedo we evolved
some binaries with $f_a=0.7$ and $f_a=0.95$. As reference, we chose
five values for the inclination angle of the binary, $i=0^0, 30^0,
45^0, 60^0, 80^0$. For each angle we computed the optical emission
summing the average luminosity of the irradiated and non-irradiated
surfaces of the donor to the flux emitted from the visible part of the
accretion disc.
\section{Observational constraints}\label{obs}
X-2 is located in the barred spiral galaxy NGC 1313 at a distance of
3.7-4.27 Mpc (\citealt{tul88}, \citealt{men02}, \citealt{riz07}). Its
observed X-ray luminosity varies between a few $\times 10^{39}{\rm\,erg\,s^{-1}}$
and $3\times10^{40}{\rm\,erg\,s^{-1}}$ in the 0.3-10 keV band
(\citealt{fen06,muc07}).
Recently, \citet{liu09} found a possible periodicity of $6.12\pm0.16$
d in the $B$ band lightcurve of C1, that was interpreted as the
orbital period of the binary. Three cycles were detected in the $B$
band, while no modulation was found in $V$. According to
\citet{liu09}, the period is $12.24\pm0.16$ d if the X-ray irradiation
of the donor is unimportant, while it is $6.12\pm0.16$ d in case
of significant irradiation. Previous studies carried
out on the available \textit{HST} and \textit{VLT} observations led to
negative results \citep{gri08}. More recently, lack of significant
photometric variability on a new sequence of \textit{VLT} observations
has been reported by \cite{gri09}. Therefore, we consider the
detection of \citet{liu09} with caution and are aware that it will
need to be confirmed before any definite conclusion can be drawn.
In the following we will use the $V$ and $B$ band photometry of C1 as
determined by \citet{muc07}. As the source is variable, we have
further corrected the magnitudes and colours by using the average value of $V$
and by propagating the errors on $V$ and $B$. The error in
the absolute magnitudes is taken to be equal to the maximum
uncertainty in the different distance determinations of NGC 1313
(\citealt{tul88,men02,riz07}). Concerning
the reddening, two different estimates of the colour excess were
derived in the literature: $E(B-V)=0.1$ \citep{muc07,gri08} and
$E(B-V)=0.3$ \citep{liu07}. As the analyses of \cite{muc07} and
\cite{gri08}, based in part on independent methods,
converge toward the same value, in the following we adopt
$E(B-V)=0.1$\footnote{If we consider E(B-V)=0.3, all the models with a
20$M_{\odot}$ BH become incompatible with the position of C1, and the
minimum donor mass required to match the observations for a 100$M_{\odot}$
BH becomes $M\lower.5ex\hbox{\gtsima} 25M_{\odot}$, with characteristic age $t\simless
10$ Myr (H-shell burning phase), in strong disagreement with the
observed cluster age of $20\pm 5$ Myr reported by \citet{gri09}.}. The
adopted $M_{V}$ magnitude is therefore in the range -4.38 to -4.79 and
the $B-V$ colour is $-0.13\pm0.06$.\\
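For reference, most of the spread in the adopted $M_V$ follows from the distance-modulus range; a minimal numerical check (assuming a standard $R_V=3.1$ extinction law, which the text does not state explicitly) is:

```python
# Hedged sketch: distance moduli for the NGC 1313 distance range used above.
import math

def dist_mod(d_mpc):
    """Distance modulus mu = 5 log10(d / 10 pc), with d in Mpc."""
    return 5.0 * math.log10(d_mpc * 1e6 / 10.0)

mu_lo, mu_hi = dist_mod(3.7), dist_mod(4.27)
print(mu_lo, mu_hi)      # ~27.84 and ~28.15 mag
print(mu_hi - mu_lo)     # ~0.31 mag spread from the distance alone

A_V = 3.1 * 0.1          # assumed R_V = 3.1 with E(B-V) = 0.1
print(A_V)               # ~0.31 mag of V-band extinction
```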
\section{Results}\label{results}
\subsection{Case AB mass transfer}
In Fig.~\ref{caseAB} we show the results of the binary evolution calculations
(including the effects of irradiation) for Z=0.01 and \textbf{case AB} mass
transfer, and for an inclination $i=0^0$.
The track for a 15$M_{\odot}$ donor with a $100 M_{\odot}$ BH passes through
the C1 errorbox during the MS, whereas the 20$M_{\odot}$ BH matches the
observations during the giant phase. When this happens, the orbital
period for the 15$M_{\odot}$ donor and $100 M_{\odot}$ BH model is $\sim 5.9$
d, very close to the determination of \citet{liu09}. The donor age is
$\sim 16$ Myr, which is consistent with the estimate of \citet{gri08}
for the host OB association.
We decreased the BH mass to 50--70$M_{\odot}$, keeping the donor mass
at 15$M_{\odot}$, to verify whether a slightly lighter BH would produce a
significantly different result. In all cases the donor colours, age
and orbital period match the observations when the star is close to
the TAMS, with values similar to those of
the 100 $M_{\odot}$ BH case.
The mass transfer of the 15$M_{\odot}$ donor at the position of C1 is
typically between $\dot{M}\sim2\times 10^{-7}M_{\odot}\rm\,yr^{-1}$ and
$\dot{M}\sim2\times 10^{-6}M_{\odot}\rm\,yr^{-1}$. This is consistent with
the measured average luminosity of X-2, $\sim 4 \times
10^{39}\rm\,erg\,s^{-1}$ \citep{muc07}, which requires $\dot{M}\sim
10^{-6}M_{\odot}\rm\,yr^{-1}$.
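This consistency can be checked with the same $10\%$ efficiency assumed in Section 2 (a back-of-the-envelope estimate, not the authors' calculation):

```python
# Hedged sketch: accretion luminosity L = eta * mdot * c^2 in cgs units.
c, Msun, yr = 2.998e10, 1.989e33, 3.156e7

def lum_acc(mdot_msun_yr, eta=0.1):
    """Accretion luminosity in erg/s for mdot in Msun/yr."""
    return eta * (mdot_msun_yr * Msun / yr) * c**2

print(lum_acc(1e-6))   # ~6e39 erg/s, the same order as the observed ~4e39
```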
When considering the tracks with a smaller metallicity (Z=0.004), the
situation is similar and the effect on the stellar colours is minimal.
Therefore we will consider for reference only tracks calculated for Z=0.01.
The $M_{V}$ band magnitude of ULXs is lower (i.e., the luminosity larger)
for 50-100$M_{\odot}$ BHs than for 20$M_{\odot}$ BHs (see Fig.~\ref{caseAB} and
\citealt{pat08}). This is mainly due to the larger optical
contamination of the irradiated disc and to an intrinsic binary
evolution effect that makes a companion star around a 50--100$M_{\odot}$ BH
brighter than that around a 20$M_{\odot}$ BH when RLOF starts at a
fixed donor age. For a 50--100$M_{\odot}$ BH the (irradiated) disc flux exceeds
the (irradiated) donor flux by up to 100--300\%, depending on the
photometric band and the mass ratio considered, while for a 20$M_{\odot}$
BH it is between 20 and 130\% of the donor flux at maximum
accretion rate. A small increase of the order of $\sim 0.1$ mag in
$B-V$ occurs when increasing the BH mass.
All the results remain essentially unchanged if we increase the
inclination angle up to $80^0$. Only for $i\lower.5ex\hbox{\gtsima} 80^0$ is there a
significant decrease of the accretion disc contribution and hence of
the luminosity ($\propto \cos i$), which is particularly significant
for binaries around a 50--100$M_{\odot}$ BH. A light curve folded over
the orbital period was simulated by calculating the area of the
irradiated/non-irradiated surface seen by a distant observer as a
function of orbital phase and inclination, integrating the specific
intensity separately over the two surfaces and summing up the
resulting fluxes. At large inclination angles ($i\lower.5ex\hbox{\gtsima} 60^0$) the
light curve is approximately sinusoidal and the maximum amplitude of
the orbital modulation in the $B$ band is $\sim 0.04$ mag (including
disc emission), about a factor of $\sim 2$ smaller than the amplitude
of the observed modulation \citep{liu09}. For a 20$M_{\odot}$ BH, the
contribution from the disc is less important and the amplitude of the
modulation is close to the observed value ($\sim 0.1$ mag). However,
if moderate beaming is assumed, the X-ray radiation is no longer able
to hit the donor surface and the orbital modulation caused by X-ray
heating is expected to be largely suppressed. To verify this scenario,
we evolved a set of tracks with a 20$M_{\odot}$ BH and no effect of
irradiation for \textbf{case AB}.
The tracks of a 15$M_{\odot}$ donor are compatible with C1 when
the star is on the giant branch with an age of $\sim19$ Myr, although the
period of the binary is $\sim 15-23$ d.
When considering binaries with irradiation, the evolutionary tracks do
not show dramatic changes in position and shape on the CMD when
varying the albedo $f_a$. The main effect is an increase (up to a
factor 2 for $f_a=0.7$) of the amplitude of the orbital modulation
caused by X-ray irradiation. The difference in the $B-V$ colour is
$\sim 0.1$ mag or less, while the change in the $M_{V}$ magnitude is
smaller than 1 mag. When comparing the tracks with the position of C1
on the CMD, this translates into an uncertainty of only a few solar
masses in the determination of the donor mass. This is true also for
the {\bf case B} discussed below. Both 20$M_{\odot}$ and 50--100$M_{\odot}$
BH models are still crossing C1 when the donor is on the giant branch
and on the MS, respectively.
Taking into account all these small sources of uncertainty for the
stellar tracks, the position of C1 is also consistent
with donors on the MS with masses of 12--20$M_{\odot}$ and a
50--100$M_{\odot}$ BH. However, when the track of the 20$M_{\odot}$ donor
crosses C1, the donor age is 4--11 Myr with a period 1.5--5.9 d. The
donor is therefore too young to be compatible with the stellar cluster
age (15--25 Myr). For the 12$M_{\odot}$ donor, the age is 20--22 Myr and
the orbital period is 3.1--5.5 d.
As mentioned above, the photometry of C1 is not consistent with the
tracks of a $20 M_{\odot}$ BH with donors on the MS. For these
models, the mass transfer rate never exceeds the Eddington limit by
more than a factor 2 during the main sequence for donor masses
$M\simless 15M_{\odot}$. This means that even in case of genuinely
super-Eddington accretion, the amount of X-ray irradiation would never
greatly exceed the value used in our calculations.
The same is true for the orbital period. At the TAMS, it increases
with the BH mass, and it is around 4.5--5 d for a BH of 20$M_{\odot}$ and
5--6 d for a 100$M_{\odot}$ BH. Therefore all the $20M_{\odot}$ BH binaries
need to be in the H-shell burning phase to be consistent with the
observations. The donors compatible with the position of C1 have ages
in the range 12--36 Myr and orbital periods of 5--20 d, with a mass
transfer rate always above the Eddington limit. Donors of 10$M_{\odot}$
and 20$M_{\odot}$ can immediately be excluded since they are too old
($\sim 36$ Myr) or too young ($\simless12$ Myr), respectively, to be
compatible with the star cluster age (15--25 Myr).
\begin{figure*}
\begin{center}
\rotatebox{-90}{\includegraphics[width=5.5cm]{caseB.eps}}
\end{center}
\caption{CMD for binaries with a BH of $100 M_{\odot}$ ({\it left panel}) and $20 M_{\odot}$
({\it right panel}) for \textbf{case B} mass transfer. The notation is the same as in
Fig.~\ref{caseAB}.
For a $100 M_{\odot}$ BH system the tracks are too bright in $M_{V}$ for any model,
while a 20$M_{\odot}$ BH with a donor of $\sim 10-12M_{\odot}$ is consistent with the
position of the counterpart C1. However, these short-lived RLOF systems are not
consistent with the dynamical age inferred from the bubble nebula surrounding X-2.}
\label{caseB}
\end{figure*}
\subsection{Case B mass transfer}
In Fig.~\ref{caseB} we show the results of the binary evolution
calculations for \textbf{case B} mass transfer. We assume again
Z=0.01 and an inclination $i=0^0$. The RLOF starts when the donor
leaves the MS and its envelope expands as it crosses the
Hertzsprung gap. The figure shows that 50--100$M_{\odot}$ BH binaries are
too bright ($\sim 1$ mag) to account for the position of C1 on the
CMD. The disc flux may exceed the donor flux by up to a factor of 10.
These models can therefore be excluded as possible candidates for
X-2. The tracks of 20$M_{\odot}$ BH binaries with donors of
$8M_{\odot}\simless M\simless 12M_{\odot}$ are compatible with the position of
C1 (with a relative flux contribution from the disc reaching $\sim
180$\% at maximum accretion rate). As for \textbf{case AB} mass
transfer, consistency with the photometry of C1, the observed value of
the orbital period and the age of field stars is never achieved for
donors of 8, 15, 20 and 25$M_{\odot}$. For the 10 and 12$M_{\odot}$ model, the
donor age is $\sim 25$ and $\sim 18$ Myr, respectively, when the orbital period
is $P_{orb}\sim 6.1$ d and the magnitude and colours are compatible
with C1. However, the binaries have spent only a tiny amount of time
($\sim 3000$ years for the 10 and 12$M_{\odot}$ donors) in RLOF contact
when all the parameters are within the observed range.
\section{Discussion}\label{discussion}
We demonstrated that, when using the CMD and the constraints from the
characteristic ages of the parent stellar cluster and the bubble
nebula, the binary evolution of X-2 in NGC 1313 is described
in terms of two possibilities for a \textbf{case AB} scenario:
\begin{itemize}
\item mass transfer of a MS donor of
$\sim12-15M_{\odot}$ onto a $\sim50-100M_{\odot}$ BH
\item mass transfer of a H-shell burning donor of
$\sim12-15M_{\odot}$ onto a $\sim20M_{\odot}$ BH
\end{itemize}
\textbf{Case B} mass transfer for a 50--100$M_{\odot}$ BH is finally ruled out as
the shift between the CMD tracks and the position of the optical
counterpart is too large. This result relies solely on the position
of the counterpart C1 on the CMD and assumes that all the binaries are
X-ray irradiated and that their emission is contaminated by the
accretion disc. \textbf{Case B} mass transfer for a 20$M_{\odot}$ BH
can also be ruled out, given the very short-lived RLOF phase, which
contradicts the $\sim10^{6}$ yr age of the bubble
nebula surrounding X-2 \citep{pak06}.
For \textbf{Case AB} mass transfer, the first contact phase during the MS is
sufficiently long to inject the required energy in the bubble nebula
and therefore makes a 20$M_{\odot}$ BH with a 12--15$M_{\odot}$ H-shell burning donor a
possible candidate for X-2. However, a strongly beamed system would not
be consistent with the observations if the contribution to the nebular emission
from the photo-ionization by the ULX is important, as the nebula is essentially
isotropically illuminated \citep{pak06}.
Note that \textbf{case B} and \textbf{case AB} binary colours tend to
become redder as the evolution proceeds, as a consequence of surplus
mass ejection. This is in contrast to what was reported by
\citet{mad08} who assumed that all the matter leaving the donor is
retained by the BH and contributes to the optical contamination. When
the donor leaves the main sequence, the mass transfer rate increases
by up to 1-2 orders of magnitude. This means that in our model there
is a substantial mass loss from the binary that does not contribute to
the X-ray luminosity and to the reprocessing. This leads to redder
colours for the binary and to a stable mass transfer with a lack of a
common envelope phase (see \citealt{van05} and \citealt{lom05} for a
discussion).
If we use the tentative identification of the orbital period of
\citet{liu09} as \textit{further constraint}, binaries with a massive
50--100$M_{\odot}$ BH and a $\sim12$--$15M_{\odot}$ donor close to the TAMS
are compatible with the observations ($\sim6$ d), while all the
\textbf{case AB} binaries with a 20$M_{\odot}$ BH are excluded ($>8$ d).
In general the orbital modulation in the optical lightcurve is caused
by X-ray irradiation and ellipsoidal variations. Simulating a
complete light curve is beyond the purpose of the present
investigation. As discussed in Section 4.1, we computed the amplitude
of the modulation induced by X-ray irradiation alone
($\sim0.05$--$0.15$ mag for $i \ga 45^0$). Ellipsoidal variations can
be estimated from \citet{boc79} and, for the typical mass ratios
considered here, have amplitudes $\sim0.1$--$0.2$ mag ($i \ga 45^0$).
So, the two effects are comparable. In both cases the reported values
refer to the amplitude of the modulation without considering the
accretion disc contribution. If the latter is included, the amplitude
is reduced by a factor $\sim2$. Ellipsoidal variations induce two
maxima and two minima per orbital cycle and therefore the observed
modulation of $\sim6$ days would correspond to an orbital period of 12
days. However, depending on the parameters, the secondary minimum may
be largely suppressed and hence, within the photometric errors, the
observed modulation would correspond to the $\sim6$ days orbital period
adopted by \citet{liu09}. Clearly, more measurements are needed in
order to reach a definitive conclusion. If the orbital period were
$\sim12$ days, it would be compatible with a \textbf{case AB} binary
with a 20$M_{\odot}$ (or slightly larger) BH and isotropic irradiation
(8--20 d), while it would remain too small to be consistent with a
similar system in case beaming of the X-ray flux prevents irradiation
(15--23 d).
For 50--100$M_{\odot}$ BHs the expected amplitude of the orbital modulation is
larger in the $B$ band than in the $V$ or $R$ bands because, at longer
wavelengths, where the donor spectrum decays more rapidly than the
irradiated disc spectrum, the contamination from the disc is
comparatively stronger. However, the difference in the amplitude
between the $B$ and $V$ band may not be sufficient to explain a
detection in the $B$ band and a simultaneous non-detection in the $V$
band. Further investigation is needed to assess this point. Under the
present assumptions, the most favourable optical bands in which to search
for the orbital modulation appear to be $U$ and $B$, where the ratio
between donor and disc emission is maximum. It is interesting to note
also that, because of the disc contamination, the optical spectrum is
characterized by a rather flat continuum, $F_{\nu}\propto \nu^{1.1}$,
clearly distinguishable from a Rayleigh-Jeans tail. All these
predictions may be easily tested with further photometric and
spectroscopic follow-ups of object C1. Finally, should spectroscopic data
of the donor of NGC 1313 X-2 support an identification with a MS
star, the possibility that NGC 1313 X-2 hosts a 50--100$M_{\odot}$ BH is strongly
favoured, independently of the period determination, whereas a giant
donor would immediately rule out a BH this massive.
We note however that we did not perform a complete survey of the
parameter space evolving case AB systems with BH masses between 20 and
50$M_{\odot}$. In fact, depending on the actual value of the orbital period,
for values of the BH in this mass range, there may also be agreement
with observations. A systematic investigation of this type is postponed
until a more robust assessment of the orbital period becomes available.
\section*{Acknowledgments}
We would like to thank Lev Yungelson, Rudy Wijnands and Lex Kaper for
useful discussions. AP is supported by an NWO Veni fellowship. LZ
acknowledges financial support from INAF through grant PRIN-2007-26.
\section{Introduction}
In the effort to extract a relation between string theory
and observed physics, we find two main problems, namely how
the large vacuum degeneracy is lifted and how
supersymmetry is broken at low energies.
These problems, when present at string tree level,
cannot be solved at any order in string perturbation
theory. The reason is the following:
It is known that at tree-level, setting all the matter
fields to zero forces the superpotential to vanish,
for {\it any} value of the moduli and dilaton fields.
The corresponding scalar
potential vanishes implying flat directions for the moduli and
dilaton.
Also, the $F$ and $D$ auxiliary fields, which
are the order parameters for supersymmetry breaking,
vanish in this situation, implying unbroken
supersymmetry.
Since the
superpotential does not get renormalized in perturbation theory,
if it vanishes at tree level it will also vanish
at all orders of string perturbation theory.
Then the F-term part of the potential also vanishes
perturbatively.
The only perturbative correction that could alter
this situation is the generation of a Fayet-Iliopoulos
D-term by an `anomalous' $U(1)$, usually present in 4D strings.
However, in all the cases considered so far there are
charged fields that acquire nonvanishing vev's and cancel the
$D$-term, breaking gauge symmetries instead of
supersymmetry.
Therefore these problems persist to all orders in perturbation theory,
and the only
hope of solving them lies in nonperturbative physics.
This has a good and a bad side. The good side is that
nonperturbative effects represent the most natural way to
generate large hierarchies due to their exponential
suppression, this is precisely what is needed to obtain the
Weinberg-Salam scale from the fundamental string or Planck scale.
The bad side is that despite many efforts, we do not yet have
a nonperturbative formulation of string theory.
At the moment, the only concrete nonperturbative information
we can extract is from the purely {\it field theoretical}
nonperturbative
effects inside string theory. Probably the simplest and
certainly the most studied of those effects is gaugino
condensation in a hidden sector of the gauge group, since it
has the potential of breaking supersymmetry as well as lifting
some of the flat directions, as we will presently discuss.
\section{Gaugino Condensation}
The idea of breaking supersymmetry in a dynamical way was
first presented in refs.~\cite{witten}. In those articles a
general topological argument was developed in terms of the
Witten index $Tr(-)^F$, showing that dynamical supersymmetry
breaking {\it cannot} be achieved unless there is chiral matter or
we include supergravity effects for which the index
argument does not apply.
This was subsequently verified by explicitly studying gaugino
condensation in pure supersymmetric Yang-Mills, a vector-like
theory, for which gauginos condense but do not break
global supersymmetry \cite{vy} (for a review see \cite{amati}).
Breaking global supersymmetry with chiral matter
was an open possibility in principle, but this approach ran into
many problems when one tried to realize it in practice.
The situation improved very much with the coupling to supergravity.
The reason was that simple
gaugino condensation was argued to be sufficient to break
supersymmetry once the coupling to gravity was included. This
works in a hidden sector mechanism where gravity is the messenger
of supersymmetry breaking to the observable sector
\cite{peter}.
Furthermore, string theory provided a natural realization of this
mechanism \cite{din,drsw}\ by naturally having
a hidden sector, especially in the $E_8\times E_8$ versions.
Also, it gave another direction to the mechanism by the fact that
gauge couplings are field dependent
(as anticipated for supergravity models in ref.~\cite{fgn}). This
same
fact raised the hope that gaugino condensation could lift the
moduli and dilaton flat directions, but soon it was recognized that
it only changed flat to runaway potentials, thus destabilizing those
fields in the `wrong' direction (zero gauge coupling and infinite
radius)\footnote{The possibility of a nonvanishing
$\Avg{H_{ijk}}$ stabilizing the potential with vanishing
cosmological constant \cite{drsw}, was discarded after it was
realized that this
field was always quantized, breaking supersymmetry at the Planck
scale;
moreover, its incorporation does not seem consistent with $T$-duality.}.
A simple way to see this is by setting the gaugino condensate
$\Avg{\lambda^\alpha \lambda_\alpha}\sim \Lambda^3$ with $\Lambda\sim M\exp(-1/(bg^2))$,
the renormalization
group invariant scale. Here $M\sim 10^{19}$ GeV is the
compactification
scale, $b$ the coefficient of the one-loop beta function of the
hidden sector group and
$g$ the corresponding gauge coupling. In string theory we have that
$4\pi g^{-2}\sim \Avg{S+S^*}$ where $S$ is the chiral dilaton field
(including also the axion and fermionic partner). Also,
$M^{-1}\sim \Avg{T+T^*}$ with $T$ being one of the moduli
fields. Substituting naively $\Avg{\lambda^\alpha \lambda_\alpha}$ into the lagrangian
induces a scalar
potential for the real parts of $S$ and $T$ ($S_R$ and $T_R$
respectively), namely $V(S_R,T_R)\sim
\frac{1}{S_RT_R^3}\exp(-3S_R/4\pi b)$.
This potential has a runaway behaviour for both $S_R$ and $T_R$,
as advertised.
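The runaway can be made explicit with a short numerical check of the schematic potential above (overall normalization dropped; $b=0.1$ is an arbitrary illustrative value, not a specific hidden-sector beta function):

```python
# Hedged sketch: V(S_R, T_R) ~ exp(-3 S_R/(4 pi b)) / (S_R T_R^3)
# decreases monotonically in both directions -- the runaway behaviour.
import math

def V(S_R, T_R, beta=0.1):
    """Schematic tree-level condensate potential (arbitrary units)."""
    return math.exp(-3.0 * S_R / (4.0 * math.pi * beta)) / (S_R * T_R**3)

vals_S = [V(S, 1.0) for S in (1.0, 2.0, 4.0, 8.0)]
vals_T = [V(1.0, T) for T in (1.0, 2.0, 4.0, 8.0)]
assert vals_S == sorted(vals_S, reverse=True)  # runaway in S_R
assert vals_T == sorted(vals_T, reverse=True)  # runaway in T_R
```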
The $T$ dependence of the potential was completely changed after the
consideration of target space or $T$ duality. In its simplest form,
this symmetry acts on the field $T$ as an $SL(2,{\bf Z})$
symmetry:
\begin{equation}
T\rightarrow \frac{a\, T-i\, b}{i\, c\,T+d}, \qquad a\, d-b\, c=1.
\end{equation}
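As a small consistency check (illustrative only), the transformations above close under composition: representing an element by its integer entries $(a,b,c,d)$ in the parametrization of the equation above, matrix products preserve the unit determinant.

```python
# Hedged sketch: closure of SL(2,Z) under composition.
def compose(m1, m2):
    """Product of two SL(2,Z) elements, each stored as (a, b, c, d)."""
    (a1, b1, c1, d1), (a2, b2, c2, d2) = m1, m2
    return (a1*a2 + b1*c2, a1*b2 + b1*d2,
            c1*a2 + d1*c2, c1*b2 + d1*d2)

S_el = (0, -1, 1, 0)   # T -> 1/T, the duality element in this parametrization
T_el = (1, 1, 0, 1)    # T -> T - i, a unit shift of the axionic partner
a, b, c, d = compose(S_el, T_el)
assert a*d - b*c == 1  # the condition a d - b c = 1 is preserved
```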
It was shown \cite{filq} that imposing this symmetry
changes the structure of the scalar potential for the moduli fields
in such a way that it develops a minimum at $T\sim 1.2$ (in
string units), whereas the potential blows up at the
decompactification
limit ($T_R\rightarrow\infty$), as desired.
The modifications due to imposing $T$ duality can be traced to the
fact
that the gauge couplings get moduli dependent threshold corrections
from
loops of heavy string states \cite{dkl}. This in turn generates a
moduli dependence on the superpotential induced by gaugino
condensation
of the form $W(S,T)\sim\eta(T)^{-6}\exp(-3S/8\pi b)$ with
$\eta(T)$ the Dedekind function.
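The modular covariance that makes this superpotential consistent with the $T$-duality above can be checked numerically along the real direction $\tau = iT_R$, using the $q$-product definition of $\eta$ (a standalone check, not taken from the paper):

```python
# Hedged sketch: verify eta(-1/tau) = sqrt(-i tau) eta(tau) at tau = i*T,
# where it reduces to eta(i/T) = sqrt(T) eta(iT).
import math

def eta_iT(T, nmax=200):
    """Dedekind eta at tau = i*T: q^{1/24} prod_n (1 - q^n), q = exp(-2 pi T)."""
    q = math.exp(-2.0 * math.pi * T)
    prod = 1.0
    for n in range(1, nmax + 1):
        prod *= 1.0 - q**n
    return math.exp(-math.pi * T / 12.0) * prod

T = 1.7
assert abs(eta_iT(1.0 / T) - math.sqrt(T) * eta_iT(T)) < 1e-12
```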
This mechanism, however, did not change the runaway behaviour
of the potential in the direction of $S$. For stabilizing $S$, the
only
proposal was to consider gaugino condensation of a nonsemisimple
gauge group, inducing a sum of exponentials in the superpotential
$W(S)\sim \sum_i{\alpha_i \exp(-3S/8\pi b_i)}$
which conspire to generate a local minimum for $S$ \cite{krasnikov}.
These have been named `racetrack' models in the recent literature.
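A toy version of the racetrack mechanism (illustrative numbers only, not the phenomenological values): with two exponentials of opposite sign, the superpotential acquires a stationary point where the two terms balance.

```python
# Hedged sketch: toy racetrack W(S) = A1 exp(-a1 S) - A2 exp(-a2 S).
# W'(S) = 0 gives a1 A1 exp(-a1 S) = a2 A2 exp(-a2 S), i.e.
#   S* = ln(a1 A1 / (a2 A2)) / (a1 - a2).
import math

A1, a1 = 1.0, 0.7   # hypothetical condensate parameters
A2, a2 = 1.0, 0.6

S_star = math.log((a1 * A1) / (a2 * A2)) / (a1 - a2)
print(S_star)   # ~1.54 for these toy numbers
```

Nearly degenerate exponents $a_1 \simeq a_2$ push $S_*$ to large values, which is how realistic constructions can reach the phenomenologically required $S_R\sim 25$.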
It was later found that combining the previous ideas with
the addition of matter fields in the hidden sector (natural in many
string models) \cite{lt,ccm} was sufficient to find a minimum with
almost all
the right properties, namely, $S$ and $T$ fixed at the desired value,
$S_R\sim 25, T_R\sim 1$, supersymmetry broken at a small
scale ($\sim 10^{2-4}$ GeV) in the observable sector, etc.
This led to studies of the induced soft breaking terms at low
energies.
Despite that relative success, there are at least five problems that
assure us that we are far from a satisfactory treatment of
these issues.
\begin{description}
\item[(i)] Unlike the case for $T$, fixing the
$vev$ of the dilaton field $S$, at the phenomenologically
interesting value, is not achieved in a satisfactory way.
The conspiracy of several condensates with hidden
matter to generate a local minimum
at a good value, requires a certain amount of fine-tuning and
cannot be called natural.
\item[(ii)] The cosmological constant always turns out to be
negative,
which looks like an insurmountable problem at present. This
also makes the analysis of soft breaking terms less reliable,
because in order to talk about them, a constant
piece has to be added to the lagrangian that
cancels the cosmological constant. It is then hard to
believe that the unknown mechanism generating this term would
leave the results on soft breaking terms (such as
small gaugino masses) untouched.
\item[(iii)] The derivation of the effective theory below
condensation is not completely understood. There are several
approaches to this, and the exact relation among them is not
entirely clear.
\item[(iv)] There is an inherently stringy problem which is due to
the fact that the $S$ field is not stringy. $S$ is only the dual
of another field, $L$, which is the one created by string vertex
operators, having the dilaton and the antisymmetric tensor field
$B_{\mu\nu}$ (instead of the axion) as the bosonic components.
The problem resides in the fact that, if there is no Peccei-Quinn
(PQ)
symmetry $S\rightarrow S+i\, constant$, as in the many condensates
scenario,
it is not clear if the theory in terms of $S$ is any longer dual to
the $L$ theory. This casts serious doubts on whether the $S$ approach
mentioned above is valid at all. Another way to express this problem
is to ask if it is possible to formulate directly gaugino
condensation in
terms of the stringy field $L$.
\item[(v)] Finally, even if the previous problems were solved, there
are
at least two serious cosmological problems for the gaugino
condensation
scenario. First, it was found on very general grounds that
it was not possible to get inflation with the type of dilaton
potentials
obtained from gaugino condensation \cite{bs}. Second is the so-called
`cosmological moduli problem', which applies to any (nonrenormalizable)
hidden sector scenario including gaugino condensation
\cite{modprob}. In this case,
it can be shown that the moduli and dilaton fields acquire masses
of the electroweak scale ($\sim 10^2$ GeV) after supersymmetry
breaking.
Therefore, if stable, they overclose the universe; if
unstable, they destroy nucleosynthesis by their
late decay, since they have only gravitational-strength interactions.
\end{description}
In the next section, I will present a general description
of the effective theory below the condensation scale, addressing the
issue
of problem (iii) above. Section 4 will show the solution of
problem (iv) whereas in section 5, I will discuss ideas towards
solving
problems (i) and (v). The resolution of problem (ii) is
left to the reader.
\section{Wilson vs 2PI Actions}
To study the effects of gaugino condensation we should be
able to answer the following questions:
Do gauginos condense? If so, is supersymmetry broken by this
effect? What is the effective theory below the scale
of condensation?
In order to answer these questions, several ideas have been put
forward
\cite{vy,fgn,drsw,mr}. Let me revise briefly the different
approaches.
In ref.~\cite{vy}, a chiral superfield $U$ was introduced
representing the condensate $W^\alpha W_\alpha$. The effective supersymmetric
theory
in terms of $U$ was found by matching the anomaly of an
original $R$-symmetry of the underlying
supersymmetric Yang-Mills action.
In refs.~\cite{drsw}, \cite{kp},
the same anomalous symmetry was used
to reproduce the effective action below condensation scale,
without the need of introducing $U$. That gave rise
to the superpotential $W(S)\sim\exp(-3S/8\pi b)$ mentioned
before. The earlier approach of ref.~\cite{fgn} was
based on the direct substitution of $\lambda^\alpha \lambda_\alpha$ in
the original supergravity lagrangian. A more recent
analysis, that of ref.~\cite{mr}, uses a Nambu--Jona-Lasinio
approach to describe the condensation mechanism.
Even though some of these approaches
gave similar results, there are important differences among them.
In particular, in ref.~\cite{fgn}, since
$\lambda^\alpha \lambda_\alpha$ is substituted directly into the supersymmetric action
in components, the effective lagrangian is not explicitly
supersymmetric, unlike, for instance, the results of
ref.~\cite{drsw}. \footnote{These two
approaches were shown to be equivalent in
ref.~\cite{kl}, once the
superconformal structure of the original supergravity action
is considered in detail, giving rise to an explicit supersymmetric
action as in \cite{drsw}.}
Also, the approach of
\cite{mr}, even though it reproduces the
results of \cite{vy} at tree level, finds very different results
once quantum corrections are included; for instance,
the dilaton could be stabilized with a single condensing group.
Finally the formalisms of \cite{vy} and \cite{drsw}
have been compared in \cite{lt,kl}. They eliminate
the field $U$ by {\it assuming} it does not break
global supersymmetry, {\it ie} by using $\partial W/
\partial U=0$ and find agreement between the two methods.
However this condition should not be imposed beforehand and it
is not well justified in the supergravity case.
We can see that there is no satisfactory understanding of the effective
theory below the condensation scale. Furthermore, the
anomalous-symmetry argument, which is the most solid
description of the single-condensate case, cannot be used for the
interesting case of several condensing groups.
We will now present a self-contained discussion which
will at the end identify the main approaches with
known field theory quantities, {\it ie} the 2PI and
Wilsonian effective actions \cite{bdqq2}, and mention
how these two approaches
are actually related in a consistent manner.
\subsection{Supergravity Basics}
Since the
fields $S$ and $T$ are expected to have very large vev's,
it is more convenient to work with local supersymmetry without
taking the Planck scale to $\infty$.
The most general action for chiral matter
supermultiplets $\Sigma$ coupled to supergravity
can be written as \cite{cfgvp}:
\begin{eqnarray}
{\cal I}&=&\int d^4x \; \left\{ -\frac{3}{4}
[S_0 S_0^* e^{-K(\Sigma,\Sigma^*)
/3}]_D+\right. \\
& &\left.[S_0^3 W(\Sigma)]_F + [\frac{1}{4}f_{ab}(\Sigma)
W^{\alpha a} W^{b}_{\alpha} ]_F + {\rm cc}\right\}\nonumber
\end{eqnarray}
where the K\"ahler potential $K(\Sigma,\Sigma^*)$, the
superpotential
$W(\Sigma)$
and the gauge kinetic function $f_{ab}(\Sigma)$ define a particular
theory. The field $S_0$ is an extra chiral superfield
called `the compensator'. Its existence is due to the fact that
action (2) is not only invariant under super Poincar\'e symmetries
but
under the full superconformal symmetry.
This simplifies the treatment of the theory in particular
the calculation of the action in components. Super Poincar\'e
supergravity
is easily obtained by explicitly fixing the
field $S_0$ to a particular value, usually chosen in such a way
that the coefficient of the Einstein term in the action is just
Newton's constant.
Two symmetries of the superconformal algebra have a particular
importance
for us: Weyl and chiral $U(1)$ transformations. These two symmetries
do not commute
with supersymmetry. The chiral $U(1)$ group
is at the origin of the R-symmetry of Poincar\'e theories. Weyl
and chiral transformations with parameters $\lambda$
and $\theta$ respectively,
act on component fields with a factor
$e^{w_j\lambda + in_j\theta/2}$,
$w_j$ and $n_j$ being the Weyl and chiral weights of the component
field.
For a left-handed chiral multiplet $(z,\psi,f)$, one finds the
following
weights:
\begin{eqnarray}
z : & w, & n=w , \nonumber \\
\psi :& w+{1\over2}, & n-{3\over2}, \nonumber \\
f :& w+1, & n-3.
\end{eqnarray}
Chiral matter multiplets $\Sigma$ have $w=n=0$, except for $S_0$
which has
$w=n=1$. The chiral multiplet of gauge field strength $W^a$ has
$w=n=3/2$.
The $U(1)$ transformations of (left-handed)
gauginos and chiral fermions are therefore:
\begin{equation}
\lambda^a \longrightarrow e^{3i\theta/4}\lambda^a ,\qquad\qquad
\psi \longrightarrow e^{-3i\theta/4}\psi.
\end{equation}
These transformations generate a gauge-chiral
$U(1)$ mixed anomaly. This anomaly can be cancelled
by the `Green-Schwarz' counterterm \cite{dfkz,kl}\ :
\begin{equation}
\Delta {\cal I}= - c \left\{\int d^4x \; [
{1\over4}\,Tr W^\alpha W_\alpha\, \log S_0]_F+{\rm cc}\, \right\} .
\end{equation}
where
$
c = {3 \over 2 \pi } \left[ C(G) - \sum_I C\left(R_I \right)
\right],
$
($C$ here represents the Casimir of the corresponding representation;
for the case without matter we have $c=8\pi b$).
This counterterm is claimed to cancel the anomaly to all orders in
perturbation
theory \cite{kl}\ and plays an important role
in what follows.
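The normalization can be checked explicitly: for a pure super
Yang-Mills hidden sector (no matter), with the one-loop beta function
normalized as $\beta(g)=-b\,g^3$, so that $b=3C(G)/16\pi^2$, one finds
$$
c=\frac{3}{2\pi}\,C(G)=8\pi\,\frac{3\,C(G)}{16\pi^2}=8\pi b ,
$$
consistent with the superpotential $W(S)\sim\exp(-3S/8\pi b)$
quoted before.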
The action (12) also has a
symmetry under K\"ahler transformations:
$K\rightarrow K+\varphi(\Sigma)+\varphi^*(\Sigma^*),
\, W\rightarrow e^{-\varphi(\Sigma)}\, W $
since any such transformation can be absorbed by redefining $S_0$:
$S_0\longrightarrow e^{\varphi/3}S_0$.
\subsection{The Wilson Effective Action}
Let us now restrict to a simple case that has all the properties we
need
to discuss gaugino condensation, {\it ie} a single
chiral multiplet $S$ coupled to supergravity and a nonabelian
gauge group with $K=K_p(S+S^*)$ arbitrary, $W(S)=0$
and $f(S)=S$. This is the case for the dilaton in
string theory at the perturbative level. This defines the
effective (Wilson) action at scales $M\geq E\geq \Lambda$.
We are interested in the Wilson action at
scales $\Lambda\geq E\geq 10^2$ GeV in which we
expect that gauginos have condensed and $S$ is the only
degree of freedom; that is, we want to integrate out
the full gauge supermultiplet to obtain the
effective action for $S$ at low energies.
This is precisely the approach of ref. \cite{drsw}\ mentioned above.
We need to compute:
\begin{eqnarray}
e^{i\Gamma(S,S_0)}&\equiv
&\int DV \exp{i\int d^4x}\left\{ [\left(S- c\,
\log{S_0}\right)\right. \nonumber \\
& &\left. \mathop{\rm Tr}\, W^\alpha W_\alpha ]_F +{\rm cc}\right\}
\end{eqnarray}
First of all we can observe that $\Gamma(S,S_0)$ depends on its
arguments
only through the combination $S_0\exp(-S/c)$.
Second, since the result of the integration has to be superconformal
invariant (because the anomaly is cancelled), we
know that $\Gamma [S_0\exp(-S/c)]$ has to be
written in the form of equation (2)
(plus higher derivative terms)
with $f=0$ since there
are no gauge fields. Since the powers of $S_0$ are
exactly given by (2) and $S_0$ only appears multiplying
$\exp(-S/c)$ we can just read the super and K\"ahler potentials
to be:
\begin{eqnarray}
W(S)&=&w e^{-3S/c} \nonumber \\
e^{-K/3}&=&e^{-K_p/3}-k\,e^{-(S+S^*)/c}
\end{eqnarray}
where $w$ and $k$ are arbitrary constants ($k>0$
to assure positive kinetic energy).
The superpotential is just the one found in \cite{drsw}. The
correction to the K\"ahler potential is new
\cite{bdqq2}. Notice that both
are corrections of order $\exp{-1/g^2}$ as expected.
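The matching leading to these expressions can be spelled out. Writing
$X\equiv S_0\,e^{-S/c}$, the $F$-term of (2) carries three powers of
$S_0$, so the superpotential must come from the $X^3$ term, while the
$D$-term carries one power each of $S_0$ and $S_0^*$, so the K\"ahler
correction comes from $XX^*$:
$$
S_0^3\,W(S)=w\,X^3 , \qquad
S_0 S_0^*\,e^{-K/3}=S_0 S_0^*\,e^{-K_p/3}-k\,XX^* ,
$$
which immediately give $W=w\,e^{-3S/c}$ and
$e^{-K/3}=e^{-K_p/3}-k\,e^{-(S+S^*)/c}$.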
A word of caution is in order.
Unlike the superpotential which has
no corrections in perturbation theory, the K\"ahler
potential can be corrected order by order in perturbation
theory, therefore in practice the perturbative part
of the K\"ahler potential $K_p$ is simply unknown and
for weak coupling those corrections are bigger than
the nonperturbative correction found here. Our result
could be useful only after the exact perturbative
K\"ahler potential is known. It is still interesting to
realize that such a simple symmetry argument can give us
the exact expressions for the {\it nonperturbative} super and
K\"ahler
potentials, without the need of holomorphy!
\subsection{The 2PI Effective Action}
To answer the questions posed at the beginning
of this chapter, {\it ie} whether
gauginos condense and break supersymmetry, it
is convenient to think
about the case of spontaneous breaking of gauge symmetries.
In that case we minimize the effective potential for
a Higgs field, obtained
from the 1PI effective action and see if the
minimum breaks or not the corresponding gauge symmetry.
In our case, we are interested in the expectation value
of a composite field, namely $\lambda^\alpha \lambda_\alpha$ or its
supersymmetric expression $W^\alpha W_\alpha$. Therefore we need
the so-called two-particle-irreducible (2PI) effective
action.
We then start with the generating functional in
the presence of an external current $J$ coupled to
the operator whose expectation value we want, namely
$W^\alpha W_\alpha$:
\begin{eqnarray}
e^{i{\cal W}[S,S_0,J]}&\equiv &\int DV \exp i\int d^4x\left\{
[(S-c\log S_0 \right. \nonumber \\
& & \left. + J ) \mathop{\rm Tr}W^\alpha W_\alpha]_F +{\rm cc}\right\}
\end{eqnarray}
{}From this we have
\begin{equation}
\frac{\delta{\cal W}}{\delta J}=\Avg{W^\alpha W_\alpha}\equiv \hat U
\end{equation}
and define the 2PI action as
\begin{equation}
\Gamma[S,S_0,\hat U]\equiv {\cal W}-\int d^4x\left(\hat U J\right)
\end{equation}
To find the explicit form of $\Gamma$ we
use the fact that ${\cal W} $ depends on its three arguments
only through the combination $S+J-c\log S_0$,
therefore we can see that $\delta \Gamma/\delta (S-c\log S_0)
=\delta {\cal W}/\delta J=\hat U$. Integrating this
equation
determines the dependence of $\Gamma$ in $S$ and
$S_0$:
\begin{equation}
\Gamma[S,S_0,\hat U]=\hat U\left(S-c\log S_0\right)+\Xi (\hat U)
\end{equation}
where $\Xi (\hat U)$ can be determined using symmetry arguments
as follows.
First we define a chiral superfield $U$ by
$\hat U\equiv S_0^3 U$. Therefore $U$ is a standard chiral superfield
with vanishing chiral and conformal weight ($w=n=0$).
Then $\Gamma[S, S_0,U S_0^3]$ can be written in the form (2)
with chiral fields $S$ and $U$.
Again the fact that the $S_0$ dependence of (2) is
very restricted, allows us to just read again the
corresponding K\"ahler and superpotential.
We find:
\begin{eqnarray}
W[S,U]&=&U[S+\frac{c}{3}\log U+\xi]\nonumber\\
e^{-K/3}&=&e^{-K_p/3}-a\, \left(UU^*\right)^{1/3}
\end{eqnarray}
Here $\xi$ is an arbitrary constant.
We can see that the superpotential corresponds to
the one found in \cite{vy}. The K\"ahler potential
is new; in \cite{vy}\ only the global case was treated,
to which this result reduces in the global limit.
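The cancellation of the explicit $\log S_0$ dependence can be checked
directly: choosing $\Xi(\hat U)\supset\frac{c}{3}\hat U\log\hat U
+\xi\hat U$ in the expression for $\Gamma$ above and using
$\hat U=S_0^3 U$, so that $\log\hat U=\log U+3\log S_0$,
$$
\hat U\left(S-c\log S_0\right)+\frac{c}{3}\hat U\log\hat U+\xi\hat U
=S_0^3\left[US+\frac{c}{3}U\log U+\xi U\right] ,
$$
and the bracket is precisely the superpotential $W[S,U]$ above.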
Notice that we have identified the two main approaches to gaugino
condensation with the two relevant actions in field theory,
namely the Wilson and 2PI effective actions.
Our approach to the 2PI action is a reinterpretation of the
one in \cite{vy}. We have to stress that in our treatment
$U$ is only a {\it classical} field, not to be integrated out in any
path integral. It also does not make sense to consider loop
corrections to its potential; this settles the question
raised in \cite{mr}, where loop corrections to the
$U$ potential could change the tree-level results.
Furthermore, since $U$ is classical we can eliminate it by
just solving its field equations: $\partial\Gamma/\partial U=0$.
(Since this implies $J=0$, it makes equations (11) and (9)
reduce to (7).) These equations cannot
be solved explicitly, but we find the solution in
a $1/\Lambda$ expansion. The solution of these
equations reproduces the Wilson action derived in the
previous subsection (obtaining both $W(S)$ and $K(S+S^*)$
as in equation (9)) plus extra terms suppressed by inverse powers
of the condensation scale. This shows explicitly the relation
between the two approaches.
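At leading order this elimination can be done explicitly: neglecting
the K\"ahler correction, $\partial\Gamma/\partial U=0$ reduces to
$S+\frac{c}{3}\log U+\frac{c}{3}+\xi=0$, so that
$$
U=e^{-1-3\xi/c}\,e^{-3S/c} , \qquad
W(S)=U\left[S+\frac{c}{3}\log U+\xi\right]=-\frac{c}{3}\,U
\;\sim\; e^{-3S/c} ,
$$
which is exactly the Wilson superpotential of the previous
subsection.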
We can also consider the case of several condensates.
This case shows the power of the techniques used previously.
In the original discussions of \cite{drsw}\
the PQ symmetry of $S$ was needed
to cancel the $U(1)_R$ anomaly; however,
when there are several condensing groups we would need several
$S$ fields
to cancel the anomaly (see \cite{bdqq2}), but there is only
one $S$ field in string theory. In our approach
however, we use the counterterm (5) which in the case
of several groups is a sum of terms \cite{kl}. Therefore
we have one counterterm for each group and so the path integrals
just factorize into products for each of the {\it many}
condensates, implying that the total superpotential
($W$) and $e^{-K/3}$ functions are the sums
of the single-condensate ones. This is
the first real {\it derivation} of this widely used
result!
By studying the effective potential for $U$ we recover the previously
known results. For one condensate and field independent
gauge couplings (no field $S$) the gauginos
condense ($U\neq 0$) but supersymmetry is unbroken.
For field-dependent gauge coupling, the minimum is at $U=0$
($S\rightarrow\infty$) so gauginos do not condense
(this is reflected in the runaway behaviour of the Wilsonian
action for $S$). For several condensing groups we find $U\neq 0$
and supersymmetry broken or not, depending on the case
\cite{ccm}.
\section{Linear vs Chiral Formalisms}
Here we report on the resolution of question (iv)
of section 2 \cite{bdqq1}\ :
perturbative $4D$ string theory has in its spectrum
a two-index
antisymmetric
tensor field $B_{\mu\nu}$.
Because it only has derivative
couplings, $B_{\mu\nu}$ is dual to a pseudoscalar field, the axion
$a$.
We can transform back and forth between the $B_{\mu\nu}$ and $a$
formulations as long as the corresponding shift
symmetries are preserved. It is known
that nonperturbative effects break the PQ symmetry of $a$
giving it a mass, then
the puzzle is: what happens to the stringy $B_{\mu\nu}$ field
in the presence of non-perturbative effects?
Is the duality symmetry also broken by those effects?
Is it then correct to forget about the $B_{\mu\nu}$ field, as
it is usually done, and work only with $a$? (Since,
unlike the axion, $B_{\mu\nu}$ is
the field created by string vertex operators).
The answer to these questions is very interesting:
duality symmetry is {\it not} broken by the
nonperturbative
effects but the $B_{\mu\nu}$ field disappears from the propagating
spectrum! Its place is taken by a massive $3$-index
antisymmetric tensor field $H_{\mu\nu\rho}$ dual to the
massive axion.
Here I will just sketch the main steps of the derivation
and refer the reader to \cite{bdqq1}\ for further details.
In $4D$ strings, the antisymmetric tensor belongs to a
linear superfield $L$ ($\overline{\cal DD}L=0$),
together with the dilaton
and the dilatino. For simplicity we only consider
the couplings of this field to gauge superfields in
global supersymmetry (the supergravity extension is straightforward);
the most general action is then
the $D$-term of an arbitrary function $\Phi$,
${\cal L}_L = [ \Phi(\hat L)]_D$, with $\hat L\equiv
L-\Omega$ and $\Omega$ the Chern Simons superfield,
satisfying $\overline{\cal DD}\Omega=W^\alpha W_\alpha$.
Since the gauginos appear in the lagrangian through the arbitrary
function $\Phi$, the analysis of gaugino condensation is far more
complicated in the linear case than in the chiral case. Furthermore,
the
Wilson action is not well defined in this case: because the field $L$ is
not gauge invariant, we cannot just integrate the gauge fields out,
leaving an effective action for $L$ alone, as we did for $S$.
Therefore we have to consider the 2PI action, and to
find it, we have to work in the first order formalism
where the gauge fields appear only through $\mathop{\rm Tr}W^\alpha W_\alpha$ as
in the $S$ case. This will also allow us to perform a duality
transformation and show that the $L$ and $S$ approaches are
equivalent.
The duality transformation is obtained by
starting with the first order system coupled to the external
current $J$:
\begin{eqnarray}
e^{ i{\cal W}(J)}
&=& \int DV\, DS\, DY\, \exp
i\,
\int d^4x\; \left( \right.
\nonumber \\
& & \left.
{\cal L}(Y,S)
+2\, \Re[J\mathop{\rm Tr}W^\alpha W_\alpha]_F\right)
\end{eqnarray}
where $V$ is the gauge superfield, $Y$ an arbitrary vector
superfield with the lagrangian ${\cal L}(Y,S)=
\{\Phi(Y)\}_D
+\{S\overline{\cal DD}(Y +\Omega)\}_F$,
and $S$ (the same $S$ of the previous section!) starting life
as
a Lagrange multiplier chiral superfield.
Integrating out $S$ implies
$\overline{\cal DD}(Y+\Omega)=0$ or $Y=L-\Omega\equiv\hat L$,
giving back the original theory. On the other hand
integrating first $Y$
gives the dual theory in terms of $S$ and $V$. This is the
situation above the condensation scale. Below condensation, however,
we have to integrate first the gauge fields, after that
we have the same two options for getting the two dual theories;
the difference now is that the integration over $V$ breaks
the PQ symmetry (if there are at least
two condensing gauge groups)
and we are left with a duality without global symmetries.
To see this, we will concentrate on the
$2PI$ effective action $\Gamma(U,Y,S)$ obtained in the standard
way for $U\equiv\Avg{\mathop{\rm Tr}W^\alpha W_\alpha}$ \cite{bdqq2}.
The important result is that since $\cal W$ depends on
$S$ and $J$ only through the combination $S+J$, we can see
as in eq.~(12) that
$\Gamma(U,S,Y)=US+\Xi(U,Y)$, where $\Xi(U,Y)$ is arbitrary,
therefore $S$ appears only
linearly in the path integral and its integration gives
again a $\delta$-function, but
imposing now $\overline{\cal{DD}}Y=-U$ instead of
the constraint $\overline{\cal DD}(Y+\Omega)=0$ above condensation
scale.
We can then see that there is no linear
multiplet implied by this new constraint. This is an indication
that
the $B_{\mu\nu}$ field is no longer in the spectrum.
The new propagating bosonic degrees of freedom in $Y$ are
a scalar component, the dilaton, which becomes
massive after gaugino condensation, and
a vector field $v^\mu$ dual to $a$, the pseudoscalar component of
$S$.
Instead of showing the details
of this duality in components, I will describe the following
slightly
simplified toy model which has all the relevant properties:
$${\cal L}_{v^\mu,a} = -{1\over 2}v^\mu v_\mu-a \partial_\mu
v^\mu
-m^2 a^2 $$
If we solve for $v^\mu$ we obtain $v_\mu=-\partial_\mu a$,
substituting back we find
$${\cal L}_{a}={1\over 2}\partial^\mu a
\partial_\mu a -m^2 a^2$$
describing the massive scalar
$a$. On the other hand, solving for $a$ we get
$a=-{1\over 2m^2}(\partial_\mu v^\mu)$ which gives
$${\cal L'}_{v^\mu}=
-{1\over 2}v^\mu v_\mu+{1\over 4m^2}(\partial_\mu v^\mu)^2.$$
The lagrangian ${\cal L'}_{v^\mu}$
also describes a massive scalar given by the longitudinal, spin zero,
component of $v^\mu$.
We can see that the only component that
has time derivatives is $v^0$, so the other three are
auxiliary fields.
Notice that for $m=0$ we recover the standard duality
between a massless axion and the $B_{\mu\nu}$ field.
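To see how the standard duality is recovered, note that varying $a$
gives $2m^2 a=-\partial_\mu v^\mu$; in the limit $m=0$ the same
variation instead imposes the constraint
$$
\partial_\mu v^\mu=0 \;\Longrightarrow\;
v^\mu=\frac{1}{2}\,\epsilon^{\mu\nu\rho\sigma}\partial_\nu B_{\rho\sigma} ,
$$
so the massless theory propagates an antisymmetric tensor, while for
$m\neq 0$ no constraint survives and the longitudinal component of
$v^\mu$ becomes the propagating massive mode.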
Therefore,
after the gaugino condensation process,
the original $B_{\mu \nu}$ field of the
linear multiplet is projected out of the spectrum in favour
of a massive scalar field corresponding to the
longitudinal component of $v^\mu$ or
to the transverse component of the antisymmetric tensor
$H_{\mu\nu\rho}\equiv \epsilon_{\mu\nu\rho\sigma}v^\sigma$.
This solves the
puzzle of the axion mass in the two dual formulations.
Other interesting discussions of gaugino condensation in
the linear formalism can be found in \cite{others}.
\section{Scenarios for SUSY Breaking}
The results of the previous sections have shown us that
the general results extracted in the past years about
gaugino condensation in string models, in terms of the
field $S$, are robust.
We have seen how gaugino condensation can in principle
lift the string vacuum degeneracy and break supersymmetry
at low energies (modulo the problems mentioned before).
But this is a very particular field theoretical mechanism
and it would be surprising if other nonperturbative effects
at the Planck scale were completely irrelevant for these
issues. In general we should always consider the two types
of nonperturbative effects: stringy (at the Planck scale) and
field theoretical (like gaugino condensation). Four different
scenarios
can be considered depending on which class of mechanism solves
each of the two problems: lifting the vacuum degeneracy and breaking
supersymmetry.
For breaking supersymmetry at low energies, we expect that
a field theoretical effect should be dominant in order
to generate the hierarchy of scales.
We are then left with two preferred scenarios:
either the dominant nonperturbative effects are field theoretical,
solving both problems simultaneously, or there is a `two steps'
scenario
in which stringy effects dominate to lift vacuum degeneracy
and field theory effects dominate to break supersymmetry.
The first scenario has been the only one considered so far;
the main reason is that we can control field theoretical
nonperturbative effects but not the stringy ones. In this scenario,
independent of the particular mechanism, we have to face the
cosmological moduli problem.
In the two steps scenario
the dilaton and moduli fields are fixed at high energies
with a mass $\sim M_{Planck}$ thus avoiding the cosmological moduli
problem.
It is also reasonable to expect that Planck scale
effects can generate a potential for $S$ and $T$.
The problem resides in the implementation of this scenario
\cite{biq},
mainly due
to our ignorance of nonperturbative string effects.
\subsection{S Duality}
To approach nonperturbative string effects
we may use the conjectured $SL(2,Z)$
$S$-duality in $N=1$ effective lagrangians \cite{filq2}\ :
\begin{equation}
S\rightarrow \frac{a\, S-i\, b}{i\, c\,S+d}, \qquad a\, d-b\, c=1.
\end{equation}
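For concreteness, the group is generated by the two elements
$$
(a,b,c,d)=(1,-1,0,1):\;\; S\rightarrow S+i , \qquad\quad
(a,b,c,d)=(0,-1,1,0):\;\; S\rightarrow \frac{1}{S} ,
$$
the analogues of the axionic shift and of the strong--weak coupling
inversion; in particular $S=1$ is the fixed point of the inversion.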
Even though there is mounting evidence for this symmetry in
$N=4,2$ string backgrounds, it is not yet clear how
it will be extended to $N=1$; if it is, most probably the
lagrangian is not invariant under this symmetry,
since it
usually exchanges `electric' and `magnetic' degrees of freedom.
However, similar to the case of $T$ duality, if we restrict to
the part of the action that depends only on $S$,
(which is the relevant part when looking for vacuum configurations)
this is expected to be
invariant under $S$ duality. Recall that if we do the same for
the classical action, the continuous $SL(2,R)$ transformation
is a symmetry of the truncated action, so the argument that quantum
effects break the continuous to the
discrete $S$ duality could actually make sense
in this case. As found in ref.~\cite{filq2}, the superpotential
should
be a modular form of weight $-1$ and can be written as:
\begin{equation}
W(S)= \eta(S)^{-2}\, Q[j(S)]
\end{equation}
where $Q$ is an arbitrary rational function of the absolute
modular invariant
function $j(S)$. Its arbitrariness prevents us from
extracting concrete conclusions, but there are several general issues
worth mentioning. Since the weight of $W(S)$ is negative, it
necessarily has poles \cite{cfilq}. If we further impose that
the scalar potential has to vanish at
$S_R\rightarrow\infty$ (zero
string coupling) \cite{hm}, there
should be poles at finite values of $S$, which may need
interpretation.
The functions $\eta(S)$ and $j(S)$ can be expressed as
infinite sums of $q\equiv e^{-2\pi S}$, thus encompassing
the expected nonperturbative instanton-like expansion.
The selfdual points $S=1,\exp{i\pi/6}$ are always extrema
of the potential and very often are minima. For those
points supersymmetry is unbroken, thus making the two
steps scenario very plausible at least for the $S$ field.
This way of fixing the vev of $S$ is much more elegant than
the racetrack scenario with several condensing gauge groups.
It is similar to the way we understood the fixing of $T$.
A general question to be addressed in this scenario is that
usually the vev of $S$ is very close to $S_R\sim 1$, because
the nontrivial structure of the potentials is always close to the
selfdual points. This is far from the phenomenologically required
value where we want $4\pi/g^2\sim 25$. However, as emphasized
in \cite{lnn}, the gauge coupling equals $S$ only
at tree level; it is expected to receive nonperturbative corrections,
and we may have a situation with $S_R=1$ but with a
larger value of $f(S)$ at the minimum leading to the
desired gauge coupling at the string scale.
Let us mention as an aside that the gaugino condensation
process can be made consistent with $S$-duality
\cite{hm,lnn,bg}. A way to do it is to write the
gaugino condensation superpotential $W\sim \exp{-\frac{3S}{c}}$
as the first term in an infinite expansion of the form
(15). Another approach is to try to {\it derive} the effective
superpotential from nonperturbative corrections to the
gauge kinetic function $f(S)$. The problem with this approach
is that we do not know how $f(S)$ should transform under $S$ duality
(we cannot forget the gauge fields as we did for finding $W(S)$).
In ref.~\cite{lnn}, it was assumed that $f$ is invariant,
but then the gaugino condensation-induced superpotential
$W\sim \exp{-\frac{3f}{c}}$ would also be invariant
instead of a weight $-1$ form as required by $S$-duality.
An extra factor $\eta(S)^{-2}Q[j(S)]$ has to be put in by hand
without justification, losing the connection with the
condensation process.
A probably better way to derive an $S$-duality
invariant effective theory after gaugino condensation
may be to assume a
noninvariant $f(S)$ \cite{ilq}; after all, that is
precisely what happens in $T$ duality,
for which $f(T)\sim \log\eta(T)$. If for instance we take,
\begin{equation}
f(S)=\frac{C}{\pi}\log\left\{\eta(S)\,
(j(S)-744)^{(C-12)/24C}\right\}
\end{equation}
nonperturbatively (here $C$ is the Casimir of the
corresponding gauge group, see discussion below equation (5)),
we can see that it has the right limit
for large $S$ ({\it ie} $f\rightarrow S$) and induces
a gaugino condensation superpotential $W(S)\sim \eta(S)^{-2}
(j(S)-744)^{(12-C)/12C}$ which has the right transformation
properties under $S$ duality and reduces to the gaugino
condensation superpotential in the large $S$ limit.
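The transformation property is easy to verify: without hidden matter
$c=3C/2\pi$, so that $3f(S)/c=2\log\{\eta(S)(j(S)-744)^{(C-12)/24C}\}$
and therefore
$$
W(S)\sim e^{-3f(S)/c}=\eta(S)^{-2}\,(j(S)-744)^{(12-C)/12C} ,
$$
which carries the modular weight $-1$ of $\eta(S)^{-2}$, the factor
$(j(S)-744)$ being $S$-duality invariant.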
The noninvariance of $f(S)$ may well
be related to $S$-duality anomalies \cite{ilq}, as happened
in the $T$ duality case.
A problem with this approach is that if we are considering
nonperturbative corrections to the $f$ function, we should also
include those corrections for $W$ and $K$. This may
diminish the importance of the gaugino condensation-induced
superpotential
above, because it would be just an extra contribution to
the original nonperturbative superpotential which we do not know.
There may still be situations, as argued in \cite{bd}, for which
gaugino condensation superpotentials could nevertheless
be dominant.
\subsection{Two Steps Scenario}
In the two steps scenario, after we have fixed the vev
of the moduli by stringy effects, the question remains of
how supersymmetry is broken at low energies. Notice that we
would be left with the situation present before the advent
of string theory in which the gauge coupling is
field {\it independent}. In that
case we know from Witten's index that gaugino condensation cannot
break global supersymmetry. Since there are no `moduli'
fields with large vev's, the supergravity correction should
be negligible because we are working at energies much smaller
than $M_{Planck}$.
In fact we can perform a calculation by
setting $S$ to a constant in eq.~(12); it is straightforward
to show that supersymmetry
is still unbroken in that case \cite{biq}, as expected. A more
general
way to see this is
computing explicitly the $1/M_{Planck}$ correction to a
global supersymmetric solution $W_\phi=0$, and seeing that it
coincides with the solution of
$W_\phi+WK_\phi/M_p^2=0$ which is always a
supersymmetric extremum of
the supergravity scalar potential.
As mentioned in section 2, there seems, however, to be a counterexample
in the literature.
In ref.~\cite{peter} a modification of the
K\"ahler potential (12) was considered:
\begin{equation}
e^{-K/3}=1-a\, \left(UU^*\right)^{1/3}-b\, \left(UU^*\right)
\end{equation}
with the same superpotential. For $a=-9b$
supersymmetry was found to be broken with vanishing cosmological
constant.
But for this choice of parameters
the global limit is such that $K_{UU^*}$ vanishes, and so does
the kinetic energy
for $U$. This makes the corresponding
minimum in the global case ill defined, since there may be other
nonconstant field configurations with vanishing energy.
This is then not a counterexample, because the
global theory is not well defined in the minimum.
In any case, in our general analysis, there are no such extra
corrections to the K\"ahler potential for $U$.
We are then left with a situation in which, if global supersymmetry
is unbroken, we cannot break local supersymmetry unless
there are moduli-like fields.
This can bring us further back to the past and reconsider models
with dynamical
breaking of global supersymmetry
(for a recent discussion with new insights see \cite{dnn}\ ).
\section{Conclusions}
\begin{description}
\item[(i)] Gaugino condensation provides a simple example
of how supersymmetry can be broken dynamically with
partial success. Some of the problems may be solved
after having better control of the supergravity
lagrangian. In particular, in the single hidden sector
group case we have seen that the gauginos do not
condense, but this situation may be changed after perturbative and
nonperturbative corrections to the K\"ahler potential are considered
\cite{bd}. The cosmological problems may be
more generic, however.
\item[(ii)] The gaugino condensation process
is also an interesting laboratory to test
nonperturbative properties of string and field theories. In
particular
duality symmetries survive this simple,
but nontrivial, nonperturbative test.
\item[(iii)]The different approaches to describe the effective
theory underlying the condensation process correspond
simply to the use of the Wilson or 2PI effective actions; therefore
there is
a well defined relation between them.
Even though the Wilson action is usually simpler to work with,
the 2PI action is more suitable for following the condensation process;
it is also the only one that could be used to describe
the condensation of gauginos in the `linear formalism'.
The Wilson action cannot be used without previously identifying the
low energy degrees of freedom.
We needed the 2PI action to find out that the
axion degree of freedom is represented by a massive
$H_{\mu\nu\rho}$ tensor.
\item[(iv)] The linear and chiral descriptions are equivalent,
even in the absence of PQ symmetries.
Which formulation is more convenient depends on the situation.
In the linear description, the stringy $B_{\mu\nu}$
field is replaced by the massive $H_{\mu\nu\rho}$ field.
We believe, this will also be the case in
more general nonperturbative effects.
We may conjecture that this result could be related to the claims
that `stringy' nonperturbative effects are
not well described by strings but better by membranes, which
couple naturally to $H_{\mu\nu\rho}$, or by five-branes, which
provide the 10D origin of the field $S$. A (massless) field
$H_{\mu\nu\rho}$ also
appears naturally in 11D supergravity.
\item[(v)] There is not a compelling
scenario for supersymmetry breaking
and the field remains open, but we have a much
better perspective on the relevant issues now. The
nonrenormalizable hidden sector models of which the
gaugino condensation is
a particular case, may need a convincing solution of the
cosmological moduli problem to still be considered
viable. Hopefully, this will lead to interesting feedback
between cosmology and string theory \cite{bms}.
Furthermore, the recent progress in understanding
supersymmetric gauge theories can be of much use for
reconsidering gaugino condensation with hidden matter;
the discussion in the string literature is far from complete.
The understanding of models with chiral
matter could also provide new insights to
global supersymmetry
breaking, relevant to the
two steps scenario mentioned above. In any case, the techniques
found to be useful in the simplest gaugino condensation
approach discussed here will certainly help in understanding those
more complicated models.
\end{description}
I thank the organizers for the invitation to
participate in such an exciting conference.
\section{Introduction}
\IEEEPARstart{A}{s robotic}
systems are applied to increasingly unstructured and unpredictable environments, the ability to identify and adapt to their environment is becoming of critical importance.
The collaboration with humans represents a particular challenge, as the interaction varies between individuals.
The manipulation of an articulated object by a human in collaboration with a robot is one example, where the robot performance can be improved by learning a model to describe and predict the human motor control behavior \cite{Lee2012}.
The literature on human control behavior widely agrees that human motor performance is achieved through a reactive and a predictive component (see the review in \cite{Wolpert2011}).
The reactive component is triggered by sensory inputs and updates an ongoing motor command; it can, therefore, be interpreted as the feedback control action.
The predictive component capitalizes on the ability to anticipate motor events based on memory in order to accomplish a given task under foreseeable conditions, which can be interpreted as feedforward action \cite{Kawato1999}.
The existence of these two components has been highlighted in studies of various motor control tasks, including grasping and manipulation \cite{Johansson1988,Johansson1992,Fu2010}.
In this work, we present a shortest path inverse optimal control method, which is applied to train a predictive model of human motor control.
The inverse optimal control method is thereby used to learn the parameters of an optimal control problem from demonstrated state and input trajectories.
In particular, it learns both the objective function and constraints of an underlying infinite-horizon optimal control problem from observed trajectory segments of finite length using optimality conditions of a corresponding shortest path problem and a candidate constraint set.
The optimality conditions are derived based on Bellman's principle of optimality \cite{Bellman1957} and the \underline{K}arush-\underline{K}uhn-\underline{T}ucker (KKT) optimality conditions \cite{Kuhn1951}.
The proposed method is convex for objective functions that are linear in their parameters and for general nonlinear systems, where relevant constraints are identified from the candidate constraint set using Lagrange multipliers.
The method is utilized to train a predictive model of movements of three human subjects from a human manipulation task.
We set up a human manipulation experiment, where three human subjects manipulated one end of a passive kinematic object whose position was changed consecutively by a robot.
In this context, the goal of the inverse learning method is to train a predictive model of human movements.
The underlying hypothesis is that the demonstrations of the human manipulation task are optimal with respect to an infinite-horizon constrained optimal control problem.
The experimental study highlights the potential of the proposed learning approach by providing good predictive performance for individual human movements.
In particular, the proposed shortest path formulation is shown to be beneficial in the presence of suboptimal execution, i.e., it allows disregarding the reactive human motor control component in the application considered in this paper.
Related inverse optimal control approaches are presented in \cite{Kalman1964, Priess2015,Menner2018,Mombaur2010,Puydupin2012,Englert2017,Majumdar2017}.
The approaches in \cite{Kalman1964,Priess2015,Menner2018} can be interpreted as an inverse method of an infinite-horizon optimal control problem, but they are restricted to unconstrained, linear systems and quadratic objective functions.
In \cite{Mombaur2010}, a bilevel approach to solve an inverse unconstrained optimal control problem is presented.
The techniques closest to our method are \cite{Puydupin2012,Englert2017,Majumdar2017}, where the KKT conditions are similarly used for learning the stage cost but the constraints are assumed to be known.
The two main distinctions of our approach with respect to \cite{Puydupin2012,Englert2017,Majumdar2017} are the consideration of an optimal control problem with an infinite horizon and the simultaneous identification of constraints from a candidate constraint set that is constructed from data with a convex optimization problem.
By using a shortest path formulation, the required trajectory segment for learning the parameters of the underlying optimal control problem can be shorter, e.g., compared to \cite{Englert2017}, and the learned parameters are invariant with respect to the chosen trajectory segment.
As for the application, the incorporation of constraints results in better predictions of human movement, whereas the consideration of a shortest path formulation allows for isolating trajectory segments where the predictive component is dominant, i.e., where the hypothesis of optimal demonstrations with respect to an optimal controller is valid.
\section{Shortest Path Inverse Optimal Control}
\label{sec:opt}
This section presents an \underline{i}nverse \underline{o}ptimal \underline{c}ontrol (IOC) approach based on a shortest path formulation to learn an objective function and constraints from observations.
The observations are represented as trajectories of state measurements $x(k)\in \mathbb{R}^n$ and inputs $u(k)\in \mathbb{R}^m$ at time-step $k$, where
\begin{equation}
\label{eq:sys}
x(k+1)=f(x(k),u(k))
\end{equation}
with the potentially nonlinear function $f(\cdot)$ modeling the evolution of the state.
For the derivation of the inverse method in this section, we assume that $f(\cdot)$ is given.
Section~\ref{sec:robot} discusses how to identify $f(\cdot)$ for the considered application.
Observed trajectories are assumed to be optimal with respect to an infinite-horizon constrained optimal control problem, i.e., $x(k+i)=x_i^\star$ and $u(k+i)=u_i^\star$ $\forall$ $i\geq 0$ with
\begin{subequations}
\label{eq:dp}
\begin{align}
\left\{ x_i^\star,u_i^\star \right\}_{i=0}^\infty
=\ \arg \min_{x_i,u_i}\ &\sum_{i=0}^{\infty} l(x_i,u_i;L)
\\
{\rm s.t.}\ &
\label{eq:opt_sysdyn}
x_{i+1} = f(x_i,u_i) & \forall\ i\geq 0
\\
\label{eq:opt_const}
& C(x_i,u_i) \leq 0 & \forall\ i\geq 0
\\
\label{eq:opt_init}
& x_0=x(k)
\end{align}
\end{subequations}
with stage cost $l(x_i,u_i;L)$ defined as a parametric function with parameters $L$, constraint set $C(x_i,u_i) \leq 0$, and initial state $x(k)$.
The notation $\left\{ \cdot \right\}_{i=0}^\infty$ is used to indicate indices from $i=0$ to $\infty$.
The goal in this work is to train a predictive model by learning both $l(x_i,u_i;L)$ and $C(x_i,u_i)$ from state and input measurements, which is referred to as the inverse problem to \eqref{eq:dp} in the following.
\subsubsection*{Problem Definition}
The first difficulty in the inverse problem of \eqref{eq:dp} is that measurements $x(k),u(k)$ are not available for $k\rightarrow \infty$ but only in some finite segment.
We address this using a shortest path formulation (see Section~\ref{sec:DPtrajseg}).
For cases, where the constraint set $C(\cdot,\cdot)$ is unknown, we propose the construction of a candidate constraint set.
The main step of the proposed approach is the derivation of optimality conditions of the shortest path formulation using the candidate constraint set (see Section~\ref{sec:KKTtrajseg}).
The optimality conditions are then used to simultaneously identify constraints from the candidate set and learn the stage cost parameters.
\subsection{Formulation of infinite-horizon as shortest path problem}
\label{sec:DPtrajseg}
We formulate the infinite-horizon problem as a shortest path problem of finite length $e$ and show that the minimizers of both the infinite-horizon problem and the shortest path problem are identical along the path, i.e., from time $k$ to $k+e$.
Let ${X}^m :=
[\ x(k)^\mathrm{T}\
x(k+1)^\mathrm{T}\
\hdots\
x({k+e})^\mathrm{T}\ ]^\mathrm{T} \in \mathbb{R}^{n(e+1)}$ and
${U}^m :=
[\ u(k)^\mathrm{T}\
u(k+1)^\mathrm{T}\
\hdots\
u({k+e-1})^\mathrm{T}\ ]^\mathrm{T} \in \mathbb{R}^{me}$ be the collection of state and input measurements, respectively, over the time interval $k$ through $k+e$.
If $X^m$, $U^m$ describe the shortest path, then they (at least locally) minimize
\begin{equation}
\label{eq:segment}
\begin{aligned}
\left\{X^m,U^m\right\} = \arg &
\min_{
x_i,
u_i}\
\sum_{i=0}^{e-1} l(x_i,u_i;L)
\\
&
{\rm s.t.}
\begin{array}[t]{lll}
x_{i+1} = f(x_i,u_i)\quad i=0,...,e-1
\\
C(x_i,u_i) \leq 0\quad i=0,...,e-1
\\
x_0 = x(k)
\\
x_e = x(k+e).
\end{array}
\end{aligned}
\end{equation}
Using Bellman's principle of optimality \cite{Bellman1957}, we can show that $X^m$, $U^m$ then also correspond to minimizers of \eqref{eq:dp} for $i=k,...\ k+e$, which is formally stated in the following theorem.
\begin{theorem}
Consider a trajectory segment of measurements ${X^m}$, ${U^m}$ from a dynamical system \eqref{eq:sys}.
If the observed inputs $U^m$ are the result of the optimal control problem in \eqref{eq:dp} for times $k,...,k+e-1$, then $X^m$, $U^m$ also (at least locally) minimize the optimization problem in \eqref{eq:segment}.
\end{theorem}
\begin{proof}
The optimization problem in \eqref{eq:dp} can be written as
\begin{equation}
\begin{aligned}
\label{eq:dp2}
J^\star(x(k))=\ \min_{x_i,u_i}\
&
\sum_{i=0}^{e-1} l(x_i,u_i;L) + \sum_{i=e}^{\infty} l(x_i,u_i;L)
\\
{\rm s.t.}\ &
\eqref{eq:opt_sysdyn},\eqref{eq:opt_const},
\eqref{eq:opt_init}.
\end{aligned}
\end{equation}
If $x_e^\star$ is known, then, using Bellman's principle of optimality \cite{Bellman1957} with $x_e=x_e^\star$, \eqref{eq:dp2} can be formulated as
\begin{equation}
\begin{aligned}
\label{eq:dp4}
J^\star(x(k)) =\
\min_{x_i,u_i}\
& \sum_{i=0}^{e-1} l(x_i,u_i;L) + J^\star(x_e^\star)
\\
\quad {\rm s.t.}\
&
\begin{array}[t]{lll}
x_{i+1} = f(x_i,u_i) & i=0,...,e-1
\\
C(x_i,u_i) \leq 0 & i=0,...,e-1
\\
x_0=x(k)
\\
x_e=x_e^\star.
\end{array}
\end{aligned}
\end{equation}
Hence, the minimizers of \eqref{eq:dp} and \eqref{eq:dp4} are equal for all $i=0,...,e$.
The result follows with $x_e^\star=x(k+e)$.
\end{proof}
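To make the theorem concrete, the following sketch verifies it numerically for a scalar linear system with stage cost $q x_i^2 + u_i^2$; the dynamics, cost parameters, and segment length are illustrative assumptions, not values from the paper. It simulates the infinite-horizon optimal (LQR) trajectory and checks that its first $e$ inputs also solve the endpoint-constrained problem \eqref{eq:segment}, here an equality-constrained quadratic program solved via its KKT system.

```python
import numpy as np

# assumed scalar example: x_{i+1} = a x_i + b u_i, stage cost q x_i^2 + u_i^2
a, b, q = 0.9, 0.5, 2.0

# infinite-horizon optimal gain via the scalar Riccati iteration
P = 1.0
for _ in range(500):
    P = q + a * a * P - (a * b * P) ** 2 / (1.0 + b * b * P)
k_gain = a * b * P / (1.0 + b * b * P)

# simulate the infinite-horizon optimum; keep a finite segment of length e
e, x = 8, 1.0
xs, us = [x], []
for _ in range(e):
    u = -k_gain * x
    us.append(u)
    x = a * x + b * u
    xs.append(x)
xs, us = np.array(xs), np.array(us)

# shortest path problem: x_i = a^i x_0 + sum_{j<i} a^{i-1-j} b u_j = c_i + (G U)_i
G = np.zeros((e + 1, e))
for i in range(e + 1):
    for j in range(i):
        G[i, j] = a ** (i - 1 - j) * b
c = np.array([a ** i * xs[0] for i in range(e + 1)])

# objective sum_{i<e} q x_i^2 + u_i^2  ->  U^T H U + 2 f^T U + const
H = np.eye(e) + q * G[:e].T @ G[:e]
f = q * G[:e].T @ c[:e]

# endpoint constraint x_e = x(k+e): A U = rhs, solved via the KKT system
A = G[e]
rhs = xs[e] - c[e]
K = np.block([[2 * H, A[:, None]], [A[None, :], np.zeros((1, 1))]])
U_sp = np.linalg.solve(K, np.concatenate([-2 * f, [rhs]]))[:e]
```

The recovered shortest-path inputs `U_sp` coincide with the measured segment `us`, as the theorem predicts.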
Note that problem \eqref{eq:segment} differs from a standard finite-horizon formulation as used in \cite{Englert2017} by the end-point constraint $x_e=x(k+e)$, which makes a key difference for learning the problem parameters, as will be illustrated in Section~\ref{sec:simulation}.
\begin{remark}
The shortest path formulation originates from the hypothesis that demonstrations are optimal with respect to the infinite-horizon problem in \eqref{eq:dp}.
For a different model or hypothesis, the formulation of the inverse problem can differ.
A particular advantage of the shortest path formulation is that any path along the measured trajectory can be used for learning.
This allows for selecting particular paths where the assumption of optimal execution is fulfilled ``more closely,'' e.g., segments with a high signal-to-noise ratio or, in the application considered here, a negligible reactive human motor control component.
\end{remark}
\subsection{Optimality conditions}
\label{sec:KKTtrajseg}
In the following, we derive optimality conditions of the shortest path problem in \eqref{eq:segment} and show how they can be used for {learning} both parameters of the stage cost and constraints.
First, we express the optimization problem in \eqref{eq:segment} in terms of the inputs $u_i$ by recursively defining $x_i=F_i(U,x_0)$:
\begin{align}
\label{eq:FU}
F_i(U,x_0) :=
\begin{cases}
x_0 \quad &{\rm if}\quad i = 0
\\
f(F_{i-1}(U,x_0),u_{i-1})
\ &{\rm else}
\end{cases}
\end{align}
with
${U} :=
\begin{bmatrix}
u_0^\mathrm{T}
&
u_1^\mathrm{T}
&
\hdots
&
u_{e-1}^\mathrm{T}
\end{bmatrix}^\mathrm{T}$.
Hence, the resulting optimization problem is given as
\begin{equation}
\label{eq:seg_final}
\begin{aligned}
\min_{U}\ &
\sum_{i=0}^{e-1} l(F_i(U,x(k)),u_i;L)
\\
{\rm s.t.}\
&
\begin{array}[t]{lll}
C(F_i(U,x(k)),u_i) \leq 0\quad i=0,..., e-1
\\
F_e(U,x(k)) = x(k+e),
\end{array}
\end{aligned}
\end{equation}
where we use $x_0 = x(k)$.
The Lagrangian $\mathcal{L}(U,\lambda,\nu,L)$ of the optimization problem in \eqref{eq:seg_final} is given by
\begin{align}
\begin{aligned}
\label{eq:lagrangian}
&\mathcal{L}( U,\lambda,\nu,L) = \nu^\mathrm{T} (F_e(U,x(k)) - x(k+e))
\\
&
+ \sum_{i=0}^{e-1} l(F_i(U,x(k)),u_i;L)
+ \lambda_i^\mathrm{T} {C}(F_i(U,x(k)),u_i)
\end{aligned}
\end{align}
with the Lagrange multipliers $\lambda_i \geq 0$ and $\nu \in \mathbb{R}^n$ (see \cite{Boyd2004}), and $L$ denoting the parameters of the stage cost $l(x_i,u_i;L)$.
Using $\mathcal{L}(\cdot)$ in \eqref{eq:lagrangian}, the KKT optimality conditions for the trajectory segment are given by
\begin{subequations}
\label{eq:kkt}
\begin{align}
&\nabla_U
\mathcal{L}(U,\lambda,\nu,L)
= 0
\\
&\lambda_i^{ \mathrm{T}} {C}(F_i(U,x(k)),u_i) = 0 & \hspace{-.6cm} i=0,...,e-1
\\
&\lambda_i\geq 0 & \hspace{-.6cm} i=0,...,e-1
\\
&
\label{eq:primal}
C(F_i(U,x(k)),u_i) \leq 0 & \hspace{-.6cm} i=0,...,e-1
\\
&
\label{eq:Fe}
F_e(U,x(k)) - x(k+e)=0.
\end{align}
\end{subequations}
\subsubsection{Construction of candidate constraint set}
\label{sec:cons}
Eq. \eqref{eq:primal} will hold for any observed trajectory with optimal execution (primal feasibility); however, the function $C$ might be unknown.
If $C$ is unknown, we propose to use \eqref{eq:primal} to construct candidate constraints $\bar C(x_i,u_i)\leq 0$ of the form $P[x_i^\mathrm{T}\ u_i^\mathrm{T}]^\mathrm{T}\leq p$ from the convex hull of all observed data points.
A subset of the candidate constraints is then identified as constraints via the KKT conditions.
A method for computing the convex hull, i.e., $P$ and $p$, is, e.g., presented in \cite{Barber1996}.
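As a sketch of this construction (assuming SciPy is available; the data below are synthetic, one state and one input dimension), the facets returned by a convex hull routine directly provide the matrices $P$ and $p$ of the candidate set:

```python
import numpy as np
from scipy.spatial import ConvexHull

def candidate_constraints(X, U):
    """Candidate set P [x; u] <= p from the convex hull of observed points.

    X: (T, n) state measurements, U: (T, m) input measurements.
    """
    Z = np.hstack([X, U])
    hull = ConvexHull(Z)
    # SciPy stores each facet as normal . z + offset <= 0 for interior points
    P = hull.equations[:, :-1]
    p = -hull.equations[:, -1]
    return P, p

# synthetic 1-state / 1-input data (illustrative only)
rng = np.random.default_rng(0)
Z = rng.uniform(-1.0, 1.0, size=(50, 2))
P, p = candidate_constraints(Z[:, :1], Z[:, 1:])
```

By construction, every observed point satisfies the candidate constraints, which is exactly the primal feasibility requirement \eqref{eq:primal}.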
\subsubsection{Optimality conditions for learning}
The idea of the proposed approach is to solve \eqref{eq:kkt} for the parameters $L$ of the stage cost $l(x_i,u_i;L)$ as well as for $\lambda_i$ and $\nu$, given measurements ${X^m},\ {U^m}$ and the candidate constraints $\bar C(x_i,u_i)$, i.e.
\begin{subequations}
\label{eq:kktinfer}
\begin{align}
\label{eq:stationarity}
& \nabla_U
\left.
\bar{\mathcal{L}}(U,\lambda,\nu,L)
\right|_{U= {U^m}}=0
\\
\label{eq:complem_ex}
& \lambda_i^{ \mathrm{T}} {\bar C}(x(k+i),u(k+i)) = 0 & i=0,...,e-1
\\
\label{eq:complem_dual}
& \lambda_i\geq 0 & i=0,...,e-1
\end{align}
\end{subequations}
with the approximate Lagrangian $\bar{\mathcal{L}}(\cdot)$ defined as in \eqref{eq:lagrangian} where $\bar C(F_i(U,x(k)),u_i)$ replaces $C(F_i(U,x(k)),u_i)$.
Eq. \eqref{eq:primal} is only needed for the construction of candidate constraints and \eqref{eq:Fe} holds by construction.
Hence, both $\bar C(x(k+i),u(k+i))\leq 0$ and \eqref{eq:Fe} are not needed for learning the stage cost parameters [see \eqref{eq:kkt} with \eqref{eq:kktinfer}].
The feasibility problem in \eqref{eq:kktinfer} is convex if $l(x_i,u_i;L)$ is linear in $L$.
One can show that \eqref{eq:kktinfer} is always feasible using the convex hull as the candidate constraint set, provided optimal and noise-free data.
The values of the Lagrange multipliers $\lambda_i$ are essential in the proposed IOC approach in order to identify constraints from the candidate set.
Each scalar $\lambda_{i,j}$ can be interpreted as a force keeping the optimization problem \eqref{eq:seg_final} from violating the corresponding primal constraint $\bar C_j(x_i,u_i)\leq 0$ at time $i$.
In other words, the value of a dual variable $\lambda_{i,j}$ indicates the sensitivity of the optimization problem to the corresponding constraint \cite{Boyd2004}.
We define a measure for the identification of constraint $j$ as $\Lambda_j\geq \bar \Lambda$ with
\begin{align}
\label{eq:lambdasum}
\textstyle
\Lambda_j=\sum_{i=0}^{e-1}\lambda_{i,j},
\end{align}
where $\bar \Lambda \geq 0$ is a problem-specific threshold value.
If, e.g., $\Lambda_j=0$, the $j^{\rm th}$ constraint does not affect the minimizer of the optimization problem and does not represent a constraint.
If, however, the value of $\Lambda_j$ is very high, the minimizer is strongly affected by the constraint $j$ and the constraint is therefore crucial in explaining the observed trajectory.
Hence, $\Lambda_j$ relates directly to the importance of constraint $j$: the larger $\Lambda_j$, the more important the constraint.
We utilize this relation to identify constraints from the candidate set.
The identified constraints are used in the predictive model, along with the learned parameters of the stage cost.
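A minimal sketch of this selection rule follows; the multiplier values are made up for illustration, and the threshold is a free design choice.

```python
import numpy as np

def identify_constraints(lam, threshold):
    """Select candidate constraints via accumulated Lagrange multipliers.

    lam: (e, n_cand) array with entries lambda_{i,j} from the learning problem.
    Returns the indices j with Lambda_j = sum_i lambda_{i,j} >= threshold,
    together with the vector of Lambda_j values.
    """
    Lambda = lam.sum(axis=0)            # Lambda_j = sum over time steps i
    return np.flatnonzero(Lambda >= threshold), Lambda

# made-up multipliers: candidate 0 repeatedly shapes the minimizer,
# candidate 1 never does
lam = np.array([[0.5, 0.0],
                [0.3, 0.0],
                [0.0, 0.0]])
idx, Lambda = identify_constraints(lam, threshold=1e-6)
```

Here only candidate 0 is identified as a constraint; candidate 1 has $\Lambda_1=0$ and is discarded even though it may have been active.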
\subsection{Sub-optimal and noisy data}
\label{sec:prac}
Eq. \eqref{eq:kktinfer} is feasible if and only if the trajectory is the solution of an optimal control problem of the form \eqref{eq:dp}.
In practice, however, even if this modeling assumption is correct, the feasibility problem in \eqref{eq:kktinfer} will not be satisfied exactly due to measurement or process noise.
In order to learn from sub-optimal or noisy data, we propose to solve the relaxed problem
\begin{equation}
\label{eq:opt_sub}
\begin{aligned}
\min_{
L, \nu, \lambda_i}\
&
\left\|
\left. \nabla_U \bar{\mathcal L}(U,\lambda,\nu,L) \right|_{U=U^m}
\right\|_2^2
\\
\quad {\rm s.t.}\
&
\begin{array}[t]{lll}
\lambda_i^\mathrm{T} \bar C( x(k+i),u(k+i)) =0
\\
\lambda_i \geq 0 \quad\quad\quad\quad\quad i=0,...,e-1.
\end{array}
\end{aligned}
\end{equation}
It is easy to verify that
$\left\| \left. \nabla_U \bar{\mathcal{L}}(\cdot) \right|_{U=U^m}\right\|_2^2=0$ indicates optimality with respect to \eqref{eq:kktinfer} and that \eqref{eq:opt_sub} is always feasible.
\begin{remark}
The use of a shortest path formulation in this work is reflected through the term $\nu^\mathrm{T} (F_e(U,x(k)) - x(k+e))$ in \eqref{eq:lagrangian}.
Thus, an inverse approach with finite horizon as in \cite{Englert2017} is obtained with $\nu=0$.
\end{remark}
\begin{remark}[On active and identified constraints]
A constraint $j$ is active if $\bar C_j(x_i,u_i) = 0$ at time $i$.
Using the proposed method for constructing candidate constraints, there are always active candidate constraints.
However, it is important to note that not all active candidates yield $\Lambda_j>0$; it is also possible that candidate $j$ is active, i.e., $\bar C_j(x_i,u_i) = 0$, and $\Lambda_j=0$.
Inversely, $\Lambda_j=0$ does not mean that the candidate $j$ is never active but that the observed trajectory would have been the same with and without candidate $j$.
Hence, candidate constraint $j$ is not identified as constraint if $\Lambda_j=0$.
Section~\ref{sec:simulation} illustrates this concept in a simulation example.
\end{remark}
\section{Illustrative Example}
\label{sec:simulation}
In this section, we illustrate the IOC procedure and highlight its key benefits in simulation for a pendulum with the discrete-time state-space representation:
\begin{align*}
\begin{bmatrix}
x_1(k+1)
\\
x_2(k+1)
\end{bmatrix}
=
\begin{bmatrix}
x_1(k) + T_s x_2(k)
\\
x_2(k) - T_s\frac{g}{l}\sin x_1(k)
\end{bmatrix}
+T_s
\begin{bmatrix}
0 \\
\frac{1}{ml^2}
\end{bmatrix}
u(k)
\end{align*}
with $x_1(k)=\theta(kT_s)$, $x_2(k)=\dot\theta(kT_s)$, $T_s=0.01\,\rm s$, $g=9.81\,\rm m/s^2$, $l=1\,\rm m$, and $m=1\,\rm kg$.
Here, $\theta(t)$ is the pendulum angle and $u(t)$ the applied torque in $\rm Nm$, where the available torque is assumed to be bounded, $|u(t)|\leq \bar u$ with $\bar u= 5\,\rm Nm$.
In the following, we consider an optimal controller of the form \eqref{eq:dp} with constraints $u_{i}\leq 5$ and $-u_{i}\leq 5$ and stage cost $l(x_i,u_i;Q^{\rm gt},r^{\rm gt})=x_i^\mathrm{T} Q^{\rm gt} x_i+r^{\rm gt} |u_i| +u_i^2$.
The goal in this example is to learn the constraints and the parameters $Q^{\rm gt}$ and $r^{\rm gt}$.
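For reference, the discrete-time pendulum model above can be simulated in a few lines; the initial condition and simulation horizon below are arbitrary choices for illustration.

```python
import numpy as np

Ts, g, l, m = 0.01, 9.81, 1.0, 1.0  # parameters from the text

def step(x, u):
    """One step of the discrete-time pendulum, x = [theta, theta_dot]."""
    x1, x2 = x
    return np.array([x1 + Ts * x2,
                     x2 - Ts * (g / l) * np.sin(x1) + Ts * u / (m * l ** 2)])

# free swing (u = 0) from 45 degrees at rest, simulated for 1 s
x = np.array([np.pi / 4.0, 0.0])
for _ in range(100):
    x = step(x, 0.0)
```

After roughly half a period (about one second for these parameters) the angle has swung past zero, as expected for an undamped pendulum released from rest.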
\subsection{Learning with shortest path and finite horizon methods}
\label{sec:infvsf}
First, we highlight the main differences between the proposed shortest path formulation and two finite-horizon methods: a method using the KKT conditions as in \cite{Englert2017} and a probabilistic IOC method using likelihood maximization as in \cite{Levine2012}.
The finite-horizon KKT method differs from the presented approach by virtue of the term $\nu^\mathrm{T} (F_e(U,x(k)) - x(k+e))$ in \eqref{eq:lagrangian} and thus, follows readily with $\nu=0$ (removing the term).
The proposed IOC approach, like the approach in \cite{Englert2017}, yields a convex semi-definite program, which can, e.g., be solved with MOSEK \cite{MOSEK}, whereas the likelihood maximization method yields a non-convex optimization problem, which in this example is solved with a projected gradient descent method.
Figure~\ref{fig:e} shows results with trajectory segments from $t=0$s through $t_e$ generated with $Q^{\rm gt}=I$ and $r^{\rm gt}=0$, where we enforce $Q\succeq 0$.
The middle plot shows that the proposed method only needs a segment from $t=0$s through $t_e\approx 0.5$s to find the ground truth.
Neither finite-horizon method is able to learn the ground truth, even when the segments are long and $\theta(t)$ is close to stationary (see $Q_{12} \approx 1$ at $t_e=1000$s).
\subsection{Learning with and without candidate constraints}
\label{sec:convsunc}
Next, consider the trajectories with $Q^{\rm gt}=10 I$ and $r^{\rm gt}=1$ for comparing methods with and without candidate constraints using segments from $t_i$ to $t_i+2$s [see Figure~\ref{fig:e200} (top)].
\subsubsection*{IOC, constrained (2nd plot from the top)}
The first step is to construct candidate constraints for the input $u(k)$:
\begin{subequations}
\label{eq:g}
\begin{align}
\label{eq:gu}
u(k)\leq g_u
\\
\label{eq:gl}
-u(k)\leq g_l
\end{align}
\end{subequations}
where $g_u$ and $g_l$ depend on the chosen segment and are displayed in red (diamond markers) and green (triangle markers), respectively.
The algorithm returns $Q$ and $r$ as well as $\Lambda_1$ and $\Lambda_2$, which are defined in \eqref{eq:lambdasum} and correspond to the candidate constraints \eqref{eq:gu} and \eqref{eq:gl}, respectively.
The parameters $Q$ and $r$ are very close to the ground truth for all $t_i$.
If $t_i<0.96$s, $g_u=5$ and $\Lambda_1>0$ suggesting that $u(k)\leq 5$ is indeed a constraint.
If $t_i>0.96$s, $g_u<5$ and $\Lambda_1=0$ suggesting that $u(k)\leq g_u<5$ is not a constraint, which is correct, as the constraint is not active.
For all $t_i$, $g_l<5$ and $\Lambda_2=0$ (not displayed) suggesting that $-u(k)\leq g_l<5$ is not a constraint.
Overall, $Q$ is learned reliably and, for $t_i<0.96$s, $u(k)\leq \bar u$ is learned as a constraint.
The trajectory does not provide conclusive evidence about the existence of a lower bound, i.e., $-u(k)\leq \bar u$, which is expected as $g_l<5\ \forall t_i$.
\subsubsection*{IOC, unconstrained (3rd plot from the top)}
If $t_i>0.96$s, $Q$ and $r$ are very close to the ground truth, which is expected since the control problem is virtually unconstrained in these segments.
However, if no candidate constraints are constructed a priori, $Q$ and $r$ differ for $t_i<0.96$s as the observed trajectory cannot be explained by means of an unconstrained optimal control problem.
\subsubsection*{Finite-horizon IOC, constrained (bottom plot)}
The method learns the constraint $u(k)\leq 5$ using arguments similar to those of the proposed shortest path IOC method; however,
it fails to capture the ground truth stage cost parameters, with $r\approx 0$ and $Q$ not close to $Q^{\rm gt}$ for all trajectory segments.
\begin{figure}[t]
\centering
\includegraphics[width=0.96\columnwidth]{pendulum_e.pdf}
\caption{{Top: State and input trajectories.
Middle: $Q$ learned with shortest path IOC.
Bottom: $Q$ learned with two finite-horizon methods: KKT and maximum likelihood.}}
\label{fig:e}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.96\columnwidth]{pendulum_constraint.pdf}
\caption{
Top: State and input trajectories.
2nd from the top: Parameters learned with the proposed method with candidate constraints.
3rd from the top: $Q$ learned without candidate constraints.
Bottom: Parameters learned with finite-horizon method with candidate constraints.
}
\label{fig:e200}
\end{figure}
\subsection{Summary of analysis}
In this section, we have illustrated the benefits of the proposed approach.
In particular, we showed the candidate constraint construction and how to simultaneously learn parameters of the stage cost and identify constraints from the candidate set.
Further, we have shown that the proposed shortest path formulation only requires a short segment of measurements to learn the stage cost parameters and identify constraints, whereas finite-horizon approaches require a comparably long segment.
Moreover, we have shown the importance of the candidate constraint set as a substantial component for correctly identifying the stage cost.
\section{Manipulation of a Passive Kinematic Object}
\label{sec:robot}
In this section, we show how to train a predictive model for human movements in a manipulation task using the proposed method.
We conducted experiments with three human subjects where the underlying hypothesis is that humans plan their movements by solving a constrained optimal control problem.
\subsection{Experiment description and system modeling}
In the experiment, the human subjects manipulated one end of an object whose position was changed consecutively by a robot.
The manipulation task was set up to provide a foreseeable environment triggering the human's predictive motor control component such that the reactive control component can be disregarded (at least at the beginning of the movement).
The object was articulated and unactuated and was composed of three lightweight wooden links and one cardboard handle, which acted as both a revolute joint and the manipulation point (see Figure~\ref{fig:hri}).
Hence, it had four revolute joints, one connecting its end link to the robot (joint 1), two connecting the three wooden links (joint 2 \& 3), and
the cardboard handle (joint 4), which was gripped by the subject such that the forearm and the handle acted as a single rigid body.
After familiarizing themselves with the robot, the subjects were instructed to achieve specific angles for two of the object's joints,
the joint connecting the object to the robot (joint 1 in Figure~\ref{fig:hri}) and the first joint after that (joint 2),
both of which have vertical rotational axes (perpendicular to the ground).
The target angles were communicated to the subjects visually by reference-markers attached to the links.
The subjects were asked to only move when the robot was stationary.
First, the robot moved to disturb the system state.
When the robot's motion ended, the subject corrected the reference error.
Motion capture sensors were placed on all links of each kinematic chain and recorded through the Phasespace Python API.
The derivation of the individual movement model, i.e., the system dynamics, of each subject is based on modeling the passive kinematic object and the human arm as a kinematic chain \cite{isb1} whose parameters were identified from measurements.
In this model, the base frame is attached to the torso and the manipulation frame is attached to the grip location of the hand.
Ball joints such as the shoulder joint are modeled as three revolute joints in series with orthogonal axes intersecting at the center of the joint.
This leads to the ball joint configuration being described with intrinsic Euler angles rotating around a point in space \cite{shoulder,ball}.
The elbow joint is modeled as a single revolute joint.
The wrist is modeled as three revolute joints in series; however, a wrist brace was used in the experiment to restrict the motions in the frontal and sagittal plane, that is, waving and flapping motions.
Pronation and supination (twisting about the forearm) could not be restricted by the brace; however, the experiment was designed such that the kinematic chain of the object itself constrained this movement.
Both the placement of the motion capture markers and the kinematic modeling are shown in Figure~\ref{fig:mocap}.
\begin{figure}[t]
\centering
\includegraphics[width=1\columnwidth]{sensor.pdf}
\\ \includegraphics[width=0.825\columnwidth]{hri2.png}
\caption{
Top: Modeling of the human arm and the object.
Bottom: Experiment setup with the Kuka LBR iiwa robot.
Joints included in the model are shown in green, while the blue joint represents a freedom of motion that was constrained by experiment design.
The motion capture markers are illustrated in red.
}
\label{fig:hri}
\label{fig:mocap}
\end{figure}
The system state $
x(t) =
[\
x_h (t)^\mathrm{T}\
x_o (t)^\mathrm{T}\
]^\mathrm{T}$ is composed of the joint angles of the human, $x_h(t) \in \mathbb{R}^4$, and of the object, $x_o(t) \in \mathbb{R}^4$.
The input to the system,
$u(t)=\dot{x}_h(t)$,
is given by the joint velocities of the human arm.
The velocities of the object joint angles are given by:
\begin{equation}
\dot{x}_o(t) = J^{\ddagger}_{o}(x_o(t)) V_{g}(t),
\label{eq:xrdot}
\end{equation}
where $J_{o}(x_o(t)) \in \mathbb{R}^{6 \times 4}$ is the Jacobian mapping joint velocities of the object to $V_g(t)$, the absolute twist velocity of the manipulation frame, and $J^{\ddagger}_{o}(x_o(t)) \in \mathbb{R}^{4 \times 6} $ denotes its Moore-Penrose pseudo-inverse \cite{inverse}.
Given that the human maintained a stationary base in the experiment, we can express $V_g(t)$ in terms of the human arm joint velocities and the Jacobian of the human arm, $J_h(x_h(t)) \in \mathbb{R}^{6 \times 4}$:
\begin{equation}
V_g(t) = J_{h}(x_h(t))\dot x_h(t).
\label{eq:griptwist}
\end{equation}
Using \eqref{eq:xrdot} and \eqref{eq:griptwist}, $\dot{x}_o(t) = J^{\ddagger}_{o}(x_o(t)) J_{h}(x_h(t))\dot x_h(t)$, and thus, the overall dynamics of the system is given by
\begin{equation}
\begin{bmatrix}
\dot x_h (t)
\\
\dot x_o (t)
\end{bmatrix}
=
\begin{bmatrix}
I
\\
J^\ddagger_o(x_o(t)) J_h(x_h(t))
\end{bmatrix}
u(t). \label{eq:exp_sys_desc}
\end{equation}
In order to obtain the Jacobians, the twists representing the joints in each kinematic chain are identified by recording traces of the subject's range of motion and applying the techniques in \cite{iros15}.
The Jacobians $J_h(x_h(t))$ and $J_o(x_o(t))$ in \eqref{eq:exp_sys_desc} are derived using the formula for the body Jacobian as in \cite{math}.
A discrete-time representation of \eqref{eq:exp_sys_desc} is derived using an Euler-forward scheme with the sampling time $T_s$:
\begin{align*}
\begin{bmatrix}
x_h (k+1)
\\
x_o (k+1)
\end{bmatrix}
=
\begin{bmatrix}
x_h (k)
\\
x_o (k)
\end{bmatrix}
+
T_s
\begin{bmatrix}
I
\\
J^\ddagger_o(x_o(k)) J_h(x_h(k))
\end{bmatrix}
u(k).
\end{align*}
An unscented Kalman filter as described in \cite{ukf} is implemented to estimate the system state, where a static process model is chosen to smooth the estimated angles, since measurement noise is amplified by the kinematic transformation.
The inputs are computed as $u(k)=(x_h(k+1)-x_h(k))/T_s$.
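The dynamics \eqref{eq:exp_sys_desc} can be sketched numerically as follows; the Jacobian entries below are random placeholders, since the true Jacobians are identified from motion capture data, but the chain of operations (twist of the manipulation frame, Moore-Penrose pseudo-inverse, stacked state derivative) follows the model above.

```python
import numpy as np

rng = np.random.default_rng(1)
J_h = rng.standard_normal((6, 4))  # human-arm body Jacobian (placeholder values)
J_o = rng.standard_normal((6, 4))  # object Jacobian (placeholder values)
u = rng.standard_normal(4)         # human joint velocities (system input)

V_g = J_h @ u                          # twist of the manipulation frame
xo_dot = np.linalg.pinv(J_o) @ V_g     # object joint velocities (Moore-Penrose)
xdot = np.concatenate([u, xo_dot])     # stacked derivative [x_h_dot; x_o_dot]
```

Note that the human joint velocities pass through unchanged (the identity block of the dynamics), while the object joint velocities are induced through the two Jacobians.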
\subsection{Learning predictive model for human movements}
\label{sec:ex}
Each of the three subjects maneuvered the object 15 times to correct the reference error induced by the robot.
For each experiment, we recorded the entire trajectory from the start of the human movement until the subject was instructed to remain stationary.
For reasons discussed in Section~\ref{sec:compareFI}, we use the initial $1.2$s, i.e., $e=65$ in \eqref{eq:kktinfer} with sampling time $T_s=0.0185$s for learning, which corresponds to roughly 60\% of each trajectory.
In order to generalize from the available sparse data, we utilize leave-one-out cross-validation \cite{James2013}, where we learn the parameters of the predictive model 15 times, each time removing one of the recorded trajectories.
This is done to assess the robustness of the model.
\subsubsection{Design choices}
\label{sec:12norm}
In this work, we train a predictive model with quadratic stage cost.
Our goal is to exemplify the proposed method to build a simple predictive model of human movement.
Quadratic stage costs are commonly used as objective functions in optimal control, offering a good compromise between complexity and expressivity; the cost trades off tracking a given target against control effort.
Note that higher-order or more complex stage cost terms are possible with the proposed framework and there are various possibilities to express human movements \cite{Oguz2018b}.
Given that the task requires tracking a reference for only two of the states, we take a stage cost of the form
\begin{align*}
l(x_i,u_i) =\ & (Sx_i - y_s)^\mathrm{T} Q (Sx_i - y_s) + u_i^\mathrm{T} Ru_i,
\end{align*}
where $y_s \in \mathbb{R}^2$ is the reference, $S=[\ 0_{2\times 4}\ I_2\ 0_{2\times 4}\ ]$ selects the states (two joint angles of the object) tracking $y_s$, and $Q,R$ are the penalty parameters.
We enforce $Q,R\succeq 0$ in order to obtain physically meaningful penalties for both deviation to the target angles and control effort.
Also, we restrict the input penalties to $\sum_{i=1}^m R_{ii}=1$, which fixes the scaling of the stage cost and avoids the trivial solution of all parameters being zero.
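A minimal sketch of this stage cost with the selection matrix $S$ as defined above; the stacked state is taken in $\mathbb{R}^{10}$ to match the dimensions of $S$, and the parameter values below are illustrative only (note the normalization $\mathrm{tr}(R)=1$).

```python
import numpy as np

# S = [ 0_{2x4}  I_2  0_{2x4} ] selects the two tracked object joint angles
S = np.hstack([np.zeros((2, 4)), np.eye(2), np.zeros((2, 4))])

def stage_cost(x, u, y_s, Q, R):
    """Quadratic stage cost l(x,u) = (Sx - y_s)^T Q (Sx - y_s) + u^T R u."""
    e = S @ x - y_s
    return float(e @ Q @ e + u @ R @ u)

# Illustrative parameters; trace(R) = 1 as in the normalization above.
Q = np.eye(2)
R = 0.25 * np.eye(4)
```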
We train one predictive model without constraints and one with a polytopic candidate constraint set for each subject.
\subsubsection*{Candidate constraints}
The object's states $x_o(k)$ are modeled as unconstrained.
The human's states $x_h(k)$ consist of the three shoulder joint angles and the elbow angle; the inputs $u(k)$ are the three angular velocities of the shoulder joint and the angular velocity of the elbow.
Constraints on joint angles directly relate to constraints on $x_h(k)$, velocity constraints relate to constraints on $u(k)$, and acceleration constraints are computed as a rate constraint: $a(k) = (u(k+1)-u(k))/T_s$.
\subsubsection{Learning results}
\label{sec:infres}
Figure~\ref{fig:results} shows the mean and standard deviation of $Q$ and $R$ obtained with the proposed IOC method.
The most distinct feature is the scale of the parameters $Q_{ij}$, varying from order $10^{-2}$ for Subject~1, $10^{-3}$ for Subject~2, to $10^{-6}$ for Subject~3.
The second most distinct feature is the difference in the diagonal elements of $R$ that reflect movement of the shoulder, i.e., $R_{11}$, $R_{22}$, and $R_{33}$, whereas the penalty on elbow velocity is comparable, i.e., $R_{44}\approx 0.2$ for all subjects.
Off-diagonal elements in $R$ are similar across subjects.
Table~\ref{tb:lagrange} shows the sum of Lagrange multipliers as in \eqref{eq:lambdasum}, which are used to identify constraints from the candidate constraint set.
The Lagrange multipliers are stated as the mean over all experiments to identify constraints on angle, velocity, and acceleration of shoulder and elbow joints.
We consider constraint $j$ as identified if the corresponding Lagrange multiplier $\Lambda_j\geq \bar \Lambda = 10^{-3}$.
It can be seen that constraints are predominantly on shoulder movement.
Constraints on elbow movement seem less important for all subjects.
Note that even though the stage cost parameters in Figure~\ref{fig:results} obtained with constrained and unconstrained IOC are relatively close for the individual subject, the resulting prediction models differ by virtue of the constraints identified as in Table~\ref{tb:lagrange}.
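The identification rule above amounts to thresholding the summed Lagrange multipliers from Table~\ref{tb:lagrange}; a minimal sketch:

```python
LAMBDA_BAR = 1e-3  # threshold bar-Lambda from the text

def identified_constraints(names, multipliers, threshold=LAMBDA_BAR):
    """Return the candidate constraints whose summed Lagrange multiplier
    meets the threshold."""
    return [n for n, lam in zip(names, multipliers) if lam >= threshold]
```

Applied to the first row of Table~\ref{tb:lagrange}, this identifies the shoulder angle, velocity, and acceleration constraints, while the elbow acceleration multiplier ($8.66\cdot 10^{-4}$) falls just below the threshold.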
\begin{figure}[htb]
\centering
\includegraphics[width=1\columnwidth]{cost.pdf}
\caption{
Mean and standard deviation of cost parameters $Q$ and $R$ for unconstrained learning (black stars) and constrained learning (red diamonds).
}
\label{fig:results}
\end{figure}
\begin{table}[tbh]
\setlength\tabcolsep{3.9pt}
\caption{
Lagrange multipliers to identify constraints
}
\begin{center}
\def1.05{1.05}
\label{tb:lagrange}
\begin{tabular}{lcccccc}
\hline
& \multicolumn{2}{c}{Angle} & \multicolumn{2}{c}{Velocity} & \multicolumn{2}{c}{Acceleration} \\
& Shoulder & Elbow & Shoulder & Elbow & Shoulder & Elbow\\
\hline
Subject 1
& $22.8$ & $0$ & $3.31e$-$2$ & $0$ & $1.38e$-$2$ & $8.66e$-$4$
\\
Subject 2
& $11.5$ & $0$ & $2.78e$-$1$ & $0$ & $2.15e$-$2$ & $6.98e$-$4$
\\
Subject 3
& $3.50$ & $0$ & $4.36e$-$1$ & $2.86e$-$4$ & $1.05e$-$1$ & $0$
\\
\hline
\end{tabular}
\end{center}
\end{table}
\subsection{Evaluation of trained human manipulation model}
\label{sec:evalinf}
The difficulty in evaluating the quality of the trained model for human-centered experiments is the lack of a ground truth as reference.
We therefore assess the quality of modeling human movement as an optimal control problem \eqref{eq:dp} by comparing the true trajectory with the prediction provided by the model.
The predictions are obtained by solving problem \eqref{eq:segment} with the learned stage cost and identified constraints from the initial position at time $t=0$s through $t=t_e=1.2$s using IPOPT \cite{Biegler2009} (see Figure~\ref{fig:graph_con} for a sample prediction).
We compute 15 sets of stage cost matrices by leaving out one trajectory for each learning.
In order to evaluate the quality of the trained model, we use the left-out measured trajectory for validation against the predicted trajectory, which would result from \eqref{eq:segment} with the learned stage cost and constraints.
This technique ensures that the predicted trajectory is not biased by the corresponding measured trajectory.
The mismatch between prediction $\hat x_i^j\in \mathbb{R}^8$ and measurement $x^j(i)\in \mathbb{R}^8$ of trajectory $j$ is measured as the root mean square (RMS) error:
\begin{align}
\label{eq:error}
\textstyle
E^j
=
\sqrt{
\frac{1
}{
8e}
\sum_{i=1}^{e}
\|
\hat x_i^j - x^j(i)
\|_2^2
}.
\end{align}
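The RMS error \eqref{eq:error} can be computed as in the following sketch:

```python
import numpy as np

def rms_error(prediction, measurement):
    """RMS error E^j between predicted and measured trajectories,
    both of shape (e, 8): e time steps of the 8 states."""
    diff = np.asarray(prediction) - np.asarray(measurement)
    e, n_states = diff.shape
    return float(np.sqrt(np.sum(diff ** 2) / (n_states * e)))
```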
\subsubsection{Intra-subject evaluation}
First, we compute the errors $E^j$ in \eqref{eq:error} for each trajectory $j$ per subject.
Figure~\ref{fig:graph_con} shows one measured trajectory of Subject~2 and the predictions obtained with the unconstrained and the constrained model.
The prediction obtained with the unconstrained model shows a larger RMS error, best seen in the plot of human joint angles.
The prediction obtained with the constrained model shows a lower error.
Table~\ref{tb:graph_con} presents the mean and standard deviation over all 15 prediction errors for all subjects.
It shows that, generally, the predictions have low errors ($<3.3^\circ$), where Subject~1 has the lowest ($<1^\circ$).
On average, the presence of constraints improves the predictions by 20\%--25\%.
\begin{table}[h]
\setlength\tabcolsep{8.2pt}
\caption{
Prediction errors: Unconstrained vs. constrained
}
\begin{center}
\def1.05{1.05}
\label{tb:graph_con}
\begin{tabular}{lcc}
\hline
Constraint set & unconstrained & constrained
\\
\hline
Subject 1
& $0.96^\circ \pm 0.49^\circ$
& $0.78^\circ \pm 0.42^\circ$
\\
Subject 2
& $3.26^\circ \pm 1.75^\circ$
& $2.45^\circ \pm 0.87^\circ$
\\
Subject 3
& $1.87^\circ \pm 1.00^\circ$
& $1.56^\circ \pm 0.79^\circ$
\\
\hline
\end{tabular}
\end{center}
\end{table}
\subsubsection{Inter-subject cross-evaluation}
Next, we analyze the individuality of the trained models by computing the error $E^j$ in \eqref{eq:error} three times for each trajectory $j$:
first with the prediction model of the subject who generated trajectory $j$,
and then with the prediction models of the other two subjects; in all cases we use the proposed IOC method with polytopic constraints.
Figure~\ref{fig:inter} shows an example of a measured trajectory from Subject~1, compared against predictions generated with the models of all subjects.
The measured trajectory and the predicted trajectory of Subject~1 are close (error: $0.55^\circ$).
The predicted trajectories of Subject~2 \& 3 show higher errors.
Table~\ref{tb:inter} states, over all trajectories, the mean and standard deviation of the errors between the measurements of Subject~$j$ (column $j$) and the predictions obtained with the objective of Subject~$i$ (row $i$).
Hence, a good separation between the subjects means large errors in the off-diagonal entries $i\neq j$.
The results show a high confidence in separating Subject~1 from the other two with high confusion errors ($3.23^\circ$, $2.39^\circ$ vs. $0.78^\circ$).
The confidence to identify Subject~2 from a given trajectory is also high with confusion errors ($3.99^\circ$, $3.59^\circ$ vs. $2.45^\circ$).
A less clear separation is observed for Subject~3, where the confusion errors are lower ($2.22^\circ$, $1.91^\circ$ vs. $1.56^\circ$).
Overall, this cross-validation suggests that the trained models capture the distinct, individual motor behavior of each subject.
\begin{figure}[!t]
\centering
\includegraphics[width=0.87\columnwidth]{graph1.pdf}
\caption{
Measured trajectory in black, predicted trajectory with the unconstrained model in gray (error $4.17^\circ$) and the constrained model in red (error $1.40^\circ$).
The upper plot shows the shoulder flexion, shoulder abduction, and shoulder rotation, as well as elbow flexion.
The object states to be tracked are shown in the lower plot as dashed black lines and are related to the corresponding joints with a diamond and a star marker.
}
\label{fig:graph_con}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[width=0.9\columnwidth]{graph2.pdf}
\caption{
Measured trajectory of Subject~1 in black, predicted trajectory of Subject~1 in red (error: $0.55^\circ$).
Left plots: Predicted trajectory of Subject~2 in green (error: $3.62^\circ$).
Right plots: Predicted trajectory of Subject~3 in blue (error: $1.97^\circ$).
}
\label{fig:inter}
\end{figure}
\begin{table}[!h]
\caption{
Prediction errors: Cross-validation between subjects
}
\begin{center}
\def1.05{1.05}
\label{tb:inter}
\begin{tabular}{llccc}
\hline
\multicolumn{2}{l}{Trajectories of } & Subject 1 & Subject 2 & Subject 3 \\
\hline
\multirow{3}{*}{\rotatebox{90}{Model}}
&
Subject 1 &
$0.78^\circ \pm 0.42^\circ$ & $3.99^\circ\pm 1.53^\circ$ & $2.22^\circ\pm 1.14^\circ$
\\
&
Subject 2 &
$3.23^\circ\pm 1.03^\circ$ & $2.45^\circ\pm 0.87^\circ$ & $1.91^\circ\pm 0.93^\circ$
\\
&
Subject 3 &
$2.39^\circ\pm 0.68^\circ$ & $3.59^\circ\pm 1.68^\circ$ & $1.56^\circ\pm 0.79^\circ$
\\
\hline
\end{tabular}
\end{center}
\end{table}
\subsubsection{Benefit of shortest path formulation}
\label{sec:compareFI}
In the following, we discuss the advantages of using a shortest path formulation over a finite horizon in the context of the considered application.
If the entire trajectory is used for training and stationarity is reached, i.e., $e$ is large, the proposed shortest path method and a finite-horizon method yield similar results.
In the context of the considered application, however, we encountered two main challenges when considering the entire trajectory.
Firstly, in the final part of the trajectory, the target angles are essentially reached and the measured signals are close to stationarity.
As a result, the signal-to-noise ratio is low and can corrupt learning.
Secondly, we observed small corrections around the target angles in the experiment suggesting the presence of reactive movements, which renders the final part of the trajectory not indicative of the predictive human motor control.
For shorter segments, the predictive component dominates both noise and the reactive component, but the solution of a finite-horizon formulation deviates from that of a shortest path formulation (see Section~\ref{sec:simulation}).
The proposed IOC approach allows for using only the initial part of the trajectory for learning where stationarity is not reached.
Overall, the presence of both a reactive human motor control component and noise violates the assumption of optimal execution with respect to \eqref{eq:dp}.
We used the initial 60\% of each trajectory, which was observed to be a good trade-off between segment length and avoidance of the reactive component.
Figure~\ref{fig:e_study} revisits the trajectory in Figure~\ref{fig:graph_con} to illustrate the above discussion on the horizon length $e$.
The upper plot shows the complete recorded trajectory, where some correction around the target angles can be observed for $t\geq 1.4$s (see joint angle marked by the diamond symbol).
The lower plot displays the RMS error \eqref{eq:error} of the predictions that result from different horizon lengths $e$.
The RMS error increases as a result of both the correction around the target angles and the low signal-to-noise ratio.
It highlights that the modeling assumption as an open-loop optimal control problem is suitable for the predictive part, but not in the presence of the reactive component.
\begin{figure}[h]
\centering
\includegraphics[width=0.82\columnwidth]{e_study.pdf}
\caption{
Top: Target angles to be tracked are shown as dashed black lines and are related to the corresponding joints with a diamond and a star marker.
Bottom: RMS error of prediction with different horizon lengths $e$.
}
\label{fig:e_study}
\end{figure}
\section{Conclusion}
\label{sec:conclusion}
This paper presented an inverse optimal control approach to learn both cost function parameters and constraints from demonstrations, i.e., state and input measurements of dynamical systems.
The shortest path formulation is shown to be the inverse problem to an infinite-horizon optimal control problem. By relying on the Karush-Kuhn-Tucker conditions, the problem is convex for cost functions that are linear in their parameters.
We set up a human manipulation experiment to exemplify the proposed approach for modeling and predicting human arm movements.
In the experiment, three human subjects manipulated one end of a passive kinematic object whose position was changed consecutively by a robot.
The benefits of using a shortest path formulation and the consideration of constraints on human movements were highlighted.
The results showed that a model with good predictive capabilities can be learned using a quadratic cost function for both states and inputs together with constraints on shoulder movements using the proposed formulation.
Finally, it was shown that the predictive models of the human subjects are individual.
\bibliographystyle{IEEEtran}
\section*{Introduction}
Geometric invariants are natural objects in the problem of
classifying varieties. For instance, the Riemann
curvature is such an invariant in Riemannian geometry and isometries are the
corresponding transformations. In symplectic geometry, the
transformations are called symplectomorphisms but the Darboux Theorem
states the nonexistence of local symplectic invariants.
In complex geometry,
Cauchy-Riemann invariants are local invariants arising in the
classification of complex manifolds with boundaries. The generic
corresponding situation is the equivalence problem between
strictly pseudoconvex hypersurfaces in complex manifolds. Such a
classification, initiated and completely treated by {\'E}. Cartan
\cite{car} in complex dimension two, was
intensively studied and two important approaches in higher dimension are
the theory of normal forms due to
S.S. Chern and J. Moser \cite{che-mos} and the Fefferman theorem \cite{fe}
connecting complex and Cauchy-Riemann geometries.
The integral representation theory, which can be considered
as the extension of the well-known Cauchy-Green formula, was
elaborated mainly by
Grauert-Lieb \cite{gra-lie} and Henkin \cite{hen} for strictly pseudoconvex domains
in the Euclidean complex space and made it possible
to solve the $\bar \partial$-problem.
In contrast with complex manifolds, there is generically no
pseudoholomorphic map between almost complex
manifolds, since the holomorphicity condition is given by a
nonsolvable overdetermined system. In particular, the lack of
holomorphic coordinates prevents one from developing a straightforward
integral representation theory.
The local existence of pseudoholomorphic discs in any almost complex
manifold goes back to the work of A. Nijenhuis and W. Woolf
\cite{ni-wo}. With the idea of generalizing the Newlander-Nirenberg
Theorem to integrable almost complex structures of low regularity, they
presented both an analytic and a geometric approach to the
$\bar{\partial}_J$-equation, satisfied by pseudoholomorphic discs in
an almost complex manifold $(M,J)$. This equation is locally a
perturbation of the standard elliptic $\bar \partial$-equation; the
local existence of pseudoholomorphic discs relies on the stability of
the solutions to elliptic systems of partial differential equations
under small quasi-linear elliptic deformation. Pseudoholomorphic discs
played a fundamental r{\^o}le in symplectic geometry where some global
symplectic invariants are associated to the moduli space of
holomorphic curves for a compatible almost complex structure. Almost
complex manifolds are viewed as natural manifolds for a deformation
theory and pseudoholomorphic discs carry some information on the
symplectic geometry by means of compactness phenomena.
The main purpose of this survey is to present some bases of
local analysis in almost complex manifolds which consists in studying, essentially
by analytic
methods, local equivalence problems between strictly pseudoconvex
domains. A large part of the survey is devoted to a precise study of
pseudoholomorphic discs and of different objects whose existence derives directly from
them. These are the Kobayashi-Royden metric and plurisubharmonic functions.
Most of our approach follows the concept of deformation of
structures. Nonisotropic dilations fitted to the geometry of
strictly pseudoconvex hypersurfaces were introduced in complex
analysis by S.Pinchuk \cite{pi} (related ideas were already used by Kuiper and
Benzecri in
the context of affine and projective geometry). The essential difference in almost
complex
manifolds relies on the non holomorphicity of the dilation maps. Hence
the scaling procedure involves both a deformation of domains and of
the ambiant almost complex structure. This general scheme drives to
the presentation of some exotic non integrable almost complex
structures, appearing as cluster points of dilated structures. These
model almost complex structures imply new phenomena which distinguish
almost complex manifolds from complex ones. A consequent part of our
works relies on the study of these model structures.
\vskip 0,1cm
This survey is for a large part an overview of different results obtained in a
series of
papers \cite{ga-su, CoGaSu, co-ga-su, ga-su2}. This is organized as
follows.
In Section 1, we mainly present the basic properties of almost complex
manifolds. The first two subsections are devoted to generalities. In
subsection 1.3, we explain how to attach pseudoholomorphic discs to
totally real submanifolds and we introduce the model structures. These
were introduced in \cite{ga-su}. We define complex hypersurfaces for
these structures. In Subsection 1.4 we focus on plurisubharmonic
functions. We first establish the Hopf Lemma and, as an application,
we obtain the boundary distance preserving property for
biholomorphisms between strictly pseudoconvex domains (proposition
1.4.8). Then we study the problem of removable singularities for
plurisubharmonic functions. In Subsection 1.5 we present two
constructions of almost complex structures on the tangent and on the
cotangent bundles of an almost complex manifold. These canonical lifts
play a major r{\^o}le in our studies and they will be used throughout
the survey.
Section 2 concerns the study of stationary discs. The notion of
stationary discs was introduced by L.Lempert \cite{le81} in the complex
setting. Our results can be considered as a local analogue of
Lempert's theory in almost complex manifolds. We prove the existence
of stationary discs in the unit ball for small almost complex
deformation of the standard complex structure (Propositions 2.2.1 and
2.2.2).
We show that the stationary discs form a foliation of the unit ball,
singular at the origin (Proposition 2.3.4). Then we define an analogue
of the Riemann map and we give its main properties (Theorem
2.3.10). We end Section 2 with three applications : the boundary study
of biholomorphisms (Corollary 2.3.11), a partial generalization of
Cartan's theorem (Corollary 2.3.12) and the local biholomorphic
equivalence problem between almost complex manifolds (Theorem 2.3.13).
Section 3 is devoted to the study of the Kobayashi metric in almost
complex manifolds. Proving precise estimates of the Kobayashi-Royden
metric in strictly pseudoconvex domains, similar to the estimates
obtained by I.Graham \cite{gr75} in the complex setting, we answer a
question by S.Kobayashi about the existence of basis of complete
hyperbolic neighborhoods at every point in an almost complex
manifold. The general idea is to use a boundary localization principle
(Proposition 3.1.5) and rescaling.
In Section 4 we study the boundary behaviour of a biholomorphism
between two strictly pseudoconvex domains. We present the scaling
procedure in details and we study the properties of the dilated
objects. For a better understanding, we start with the case of four
real dimensional almost complex manifolds in Subsections 4.1 and
4.2. One of the main results concerns the behaviour of the lift of a
biholomorphism to the cotangent bundle presented in Proposition 4.2.6.
The corresponding results in higher dimension are treated in
Subsection 4.3. The analogue of Proposition 4.2.6 is Proposition
4.3.6. We end this Subsection with a compactness principle (Theorem 4.3.8). This can
be considered as an almost complex analogue of the classical
Wong-Rosay Theorem.
Section 5 deals with the question of the regularity of
pseudoholomorphic maps attached to totally real submanifolds in an
almost complex manifold. These results are consequences of a geometric
elliptic theory. In Subsection 5.1 we show that a pseudoholomorphic
disc attached, in the sense of the cluster set, to a smooth totally
real submanifold extends smoothly up to the boundary (Theorem 5.1.1). We point out
that similar results have been established in the almost complex case
under stronger assumptions on the initial boundary regularity of the
disc. In Subsection 5.2 we establish the regularity of
a pseudoholomorphic map, defined in a wedge attached to a totally real
submanifold and whose image is contained, in the sense of the cluster
set, into a totally real submanifold of an almost complex manifold
(Proposition 5.2.1). As an application we obtain a partial version of
the Fefferman Theorem (Corollary 5.2.4).
Subsection 5.3 is devoted to the proof of the Fefferman Theorem, one
of the main results of our survey (Theorem 5.3.1). This is a
consequence of the results established in the previous sections.
\section{Basic properties of almost complex structures}
\subsection{Almost complex structures}
Everywhere in this paper $\Delta$ denotes the unit disc in $\mathbb C$ and $\mathbb B$
the unit ball in $\mathbb C^n$. Let $M$ be a smooth $\mathcal C^\infty$ real
manifold of real dimension $2n$.
\begin{definition}
$(i)$ An almost complex structure on $M$ is a smooth $\mathcal
C^\infty$-field $J$ on the tangent bundle $TM$ of $M$, satisfying
$J^2=-I$.
$(ii)$ If $J$ is an almost complex structure on $M$ then the 2-tuple
$(M,J)$ is called an almost complex manifold.
\end{definition}
By abuse of notation, $J_{st}$ denotes the standard structure on
$\mathbb R^{2k}$ for every positive integer $k$. The standard structure
$J_{st}^{(2)}$ of $\mathbb R^2$ has the form
$$
J_{st}^{(2)} =\left(
\begin{array}{cc}
0 & -1 \\
1 & 0
\end{array}
\right).
$$
In $\mathbb R^{2n}$ with
standard coordinates
$(x^1,y^1,\dots,x^n,y^n)$
this is given by the block diagonal matrix~:
$$
J_{st}^{(2n)} = \left(
\begin{array}{cccc}
J_{st}^{(2)} & & & \\
& J_{st}^{(2)} & & \\
& & . & \\
& & & J_{st}^{(2)}
\end{array}
\right).
$$
In what follows we will just write $J_{st}$ since the dimension of the space
will be clear from the context.
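As a quick numerical sanity check (a Python sketch, not part of the formalism), the block-diagonal matrix $J_{st}^{(2n)}$ indeed squares to $-I$:

```python
import numpy as np

def J_standard(n):
    """The standard structure J_st^(2n) on R^(2n) as a block-diagonal matrix."""
    J2 = np.array([[0.0, -1.0],
                   [1.0,  0.0]])
    return np.kron(np.eye(n), J2)
```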
The first examples of almost complex manifolds are provided by complex
manifolds. Namely, a complex manifold is a smooth real manifold $M$ of
positive dimension $2n$ with local complex analytic (holomorphic)
charts $f=(f^1,\dots,f^n)$ from $M$ to $\mathbb C^n$. The almost complex
structure is locally defined by $J:= df \circ J_{st} \circ
df^{-1}$. We point out that $J$ does not depend on the choice of $f$.
Thus an almost complex manifold $(M,J)$ is a complex manifold if for every
point $p$ in $M$ there exists a neighborhood $U$ of $p$ and a coordinate
diffeomorphism $z : U \rightarrow \mathbb R^{2n}$ such that $dz \circ J \circ
dz^{-1} = J_{st}$ on $z(U)$.
\vskip 0,1cm
An important special case of an almost complex manifold is a bounded
domain $D$ in $\mathbb C^n$ equipped with an almost complex structure $J$,
defined in a neighborhood of $\bar{D}$, and sufficiently close to the
standard structure $J_{st}$ in the standard $\mathcal C^k$ norm on
$\bar{D}$. Every almost complex manifold may be represented locally
in such a form. More precisely, we have the following statement.
\begin{lemma}
\label{suplem1}
Let $(M,J)$ be an almost complex manifold. Then for every point $p \in
M$, every $\lambda_0 > 0$ and every real $k \geq 0$ there exist a neighborhood $U$
of $p$ and a
coordinate diffeomorphism $z: U \rightarrow \mathbb B$ such that
$z(p) = 0$, $dz(p) \circ J(p) \circ dz^{-1}(0) = J_{st}$ and the
direct image $z^*(J) = dz \circ J \circ dz^{-1}$ satisfies $\vert\vert
z^*(J) - J_{st}
\vert\vert_{\mathcal C^k(\overline {\mathbb B})} \leq \lambda_0$.
\end{lemma}
Here $\Vert\cdot\Vert_{\mathcal C^k(\overline {\mathbb B})}$ denotes the standard $\mathcal C^k$ norm on $\overline{\mathbb B}$.
\proof There exists a diffeomorphism $z$ from a neighborhood $U'$ of
$p \in M$ onto $\mathbb B$ satisfying $z(p) = 0$ and $dz(p) \circ J(p)
\circ dz^{-1}(0) = J_{st}$. For $\lambda > 0$ consider the dilation
$d_{\lambda}: t \mapsto \lambda^{-1}t$ in $\mathbb C^n$ and the composition
$z_{\lambda} = d_{\lambda} \circ z$. Then $\lim_{\lambda \rightarrow
0} \vert\vert z_{\lambda}^{*}(J) - J_{st} \vert\vert_{\mathcal C^k(\overline
{\mathbb B})} = 0$. Setting $U = z^{-1}_{\lambda}(\mathbb B)$ for
$\lambda > 0$ small enough, we obtain the desired statement. \qed
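In coordinates the mechanism of the proof is transparent: the differential of $d_\lambda$ is scalar, so the dilated structure is simply $z_\lambda^{*}(J)(w) = z^{*}(J)(\lambda w)$, which tends to the constant matrix $J_{st}$ as $\lambda \rightarrow 0$. The following Python sketch illustrates this with a hypothetical structure $J = J_{st} + \mathcal O(\|x\|)$ on $\mathbb R^2$; the perturbation is chosen to anticommute with $J_{st}$ and is renormalized so that $J^2 = -I$ holds exactly.

```python
import numpy as np

Jst = np.array([[0.0, -1.0],
                [1.0,  0.0]])

def J(x):
    """Hypothetical structure J(x) = J_st + O(|x|) with J(x)^2 = -I."""
    eps = 0.5 * x[0]  # any small coefficient function vanishing at 0
    A = eps * np.array([[1.0, 0.0], [0.0, -1.0]])  # anticommutes with J_st
    # (J_st + A)^2 = -(1 - eps^2) I, so rescale to square to -I exactly
    return (Jst + A) / np.sqrt(1.0 - eps ** 2)

def dilated(J, lam):
    """Direct image of J under the dilation d_lambda : t -> t / lambda."""
    return lambda w: J(lam * np.asarray(w))
```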
\vskip 0,1cm
In particular, every almost complex structure $J$ sufficiently close
to the standard
structure $J_{st}$ will be written locally
$J=J_{st} + \mathcal O(\|z\|)$.
Finally by a small perturbation
(or deformation) of the
standard structure $J_{st}$ defined in a neighborhood of $\bar{D}$, where
$D$ is a domain in $\mathbb C^n$, we will mean a smooth one parameter family
$(J_\lambda)_\lambda$ of almost complex structures defined in a
neighborhood of $\bar{D}$, the real parameter $\lambda$ belonging to a
neighborhood of the origin, and satisfying~: $\lim_{\lambda
\rightarrow 0}\|J_\lambda - J_{st}\|_{\mathcal C^k(\bar{D})} = 0$.
\subsubsection{$\partial_J$ and $\bar \partial_J$ operators}
Let $(M,J)$ be an almost complex manifold. We denote by $TM$ the real
tangent bundle of $M$ and by $T_\mathbb C M:=\mathbb C \otimes TM$ its complexification.
Recall that $T_\mathbb C M = T^{(1,0)}M \oplus T^{(0,1)}M$ where
$T^{(1,0)}M:=\{ X \in T_\mathbb C M : JX=iX\} = \{\zeta -iJ \zeta, \zeta \in
TM\},$
and $T^{(0,1)}M:=\{ X \in T_\mathbb C M : JX=-iX\} = \{\zeta +
iJ \zeta, \zeta \in TM\}$.
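These eigenspace descriptions follow from $J^2=-I$: for $\zeta \in TM$ one has $J(\zeta - iJ\zeta) = J\zeta + i\zeta = i(\zeta - iJ\zeta)$, and similarly $J(\zeta + iJ\zeta) = -i(\zeta + iJ\zeta)$. A numerical illustration with $J = J_{st}$ on $\mathbb R^4$ (a sketch):

```python
import numpy as np

J = np.kron(np.eye(2), np.array([[0.0, -1.0], [1.0, 0.0]]))  # J_st on R^4

zeta = np.array([1.0, 2.0, 3.0, 4.0])
v_10 = zeta - 1j * (J @ zeta)  # lies in T^(1,0): eigenvector for +i
v_01 = zeta + 1j * (J @ zeta)  # lies in T^(0,1): eigenvector for -i
```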
Let $T^*M$ denote the cotangent bundle of $M$.
Identifying $\mathbb C \otimes T^*M$ with
$T_\mathbb C^*M:=Hom(T_\mathbb C M,\mathbb C)$ we define the set of complex
forms of type $(1,0)$ on $M$ by~:
$
T^*_{(1,0)}M=\{w \in T_\mathbb C^* M : w(X) = 0, \forall X \in T^{(0,1)}M\}
$
and the set of complex forms of type $(0,1)$ on $M$ by~:
$
T^*_{(0,1)}M=\{w \in T_\mathbb C^* M : w(X) = 0, \forall X \in T^{(1,0)}M\}
$.
Then $T_\mathbb C^*M=T^*_{(1,0)}M \oplus T^*_{(0,1)}M$.
This allows to define the operators $\partial_J$ and
$\bar{\partial}_J$ on the space of smooth functions defined on
$M$~: given a complex smooth function $u$ on $M$, we set $\partial_J u =
du_{(1,0)} \in T^*_{(1,0)}M$ and $\bar{\partial}_Ju = du_{(0,1)}
\in T^*_{(0,1)}M$. As usual,
differential forms of any bidegree $(p,q)$ on $(M,J)$ are defined
by means of the exterior product.
We point out that there is a one-to-one correspondence between almost
complex structures and independent complex one forms. More precisely,
to any almost complex structure one can associate a basis
$(\omega^1,\dots,\omega^n)$ of $(1,0)$-forms. The conjugated forms
$\bar{\omega}^1,\dots,\bar{\omega}^n$ define a basis of $T^*_{(0,1)}M$
and $\omega^1,\dots,\omega^n,\bar{\omega}^1,\dots,\bar{\omega}^n$ are
$\mathbb C$-linearly independent. Conversely, if $n$ complex one
forms $\omega^1,\dots,\omega^n$ on $T^*_\mathbb C M$ are such that
$\omega^1,\dots,\omega^n,\bar{\omega}^1,\dots,\bar{\omega}^n$ are $\mathbb
C$-linearly independent, one can define an almost complex structure on $M$ by
asserting that $(\omega^1,\dots,\omega^n)$ is a basis of $T^*_{(1,0)}M$.
The corresponding almost complex structure $J$ is defined as follows.
Set $\omega^j=\zeta^j + i \eta^j$. The one forms
$\zeta^1,\dots,\zeta^n,\eta^1,\dots,\eta^n$ define a basis of $T^*M$.
Then $J$ is determined on $TM$ by the conditions $\zeta^j \circ J = -\eta^j$ for $j=1,\dots,n$.
\subsubsection{Integrability}
Let $(X_{\bar 1},\dots,X_{\bar n})$ be a basis of $T^{(0,1)}M$.
If $M$ is a complex manifold, then one can find local
charts $f=(f^1,\dots,f^n)$ such that
\begin{equation}\label{vf-eq}
X_{\bar j}f^k = 0 \ {\rm for \ every}\ j,k=1,\dots,n.
\end{equation}
Equation (\ref{vf-eq}) is equivalent to the equation $df^k(X_{\bar j})
= 0$ for $j,k=1,\dots,n$, meaning that $(df^1,\dots,df^n)$ form a
basis of $T^*_{(1,0)}M$. As a direct consequence of (\ref{vf-eq}), the brackets
$[X_{\bar j},X_{\bar k}]$ annihilate every $f^l$ and hence lie in
$T^{(0,1)}M$
for every $j,k=1,\dots,n$. This last condition is equivalent to the
integrability of the two fiber bundles $T^{(1,0)}M$ and $T^{(0,1)}M$.
One can rewrite the integrability of $T^{(1,0)}M$ as
\begin{equation}\label{nij-eq}
N_J(\zeta,\eta)=0 \ {\rm for \ every} \ \zeta,\eta \in TM
\end{equation}
where $N_J$ is the Nijenhuis tensor defined on $TM \times TM$ by~:
$$
N_J(\zeta,\eta) = [J\zeta,J\eta] - J [J\zeta,\eta] - J [\zeta,J\eta] -
[\zeta,\eta].
$$
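The tensor can be evaluated numerically for a structure given as a matrix-valued function; since $N_J$ is tensorial, it suffices to extend the vectors $\zeta,\eta$ as constant vector fields. The following sketch uses finite-difference Jacobians for the Lie brackets; for the constant structure $J_{st}$ it returns zero, as it must.

```python
import numpy as np

def lie_bracket(X, Y, p, h=1e-5):
    """[X, Y](p) = DY(p) X(p) - DX(p) Y(p) via central differences."""
    n = len(p)
    def jacobian(F):
        cols = [(F(p + h * e) - F(p - h * e)) / (2 * h)
                for e in np.eye(n)]
        return np.array(cols).T
    return jacobian(Y) @ X(p) - jacobian(X) @ Y(p)

def nijenhuis(J, zeta, eta, p):
    """N_J(zeta, eta)(p), extending zeta and eta as constant vector fields
    (legitimate since N_J is a tensor)."""
    Z, H = (lambda x: zeta), (lambda x: eta)
    JZ, JH = (lambda x: J(x) @ zeta), (lambda x: J(x) @ eta)
    Jp = J(p)
    return (lie_bracket(JZ, JH, p) - Jp @ lie_bracket(JZ, H, p)
            - Jp @ lie_bracket(Z, JH, p) - lie_bracket(Z, H, p))
```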
The content of the Newlander-Nirenberg theorem is the following (see
\cite{new-nir,ha77,we89})~:
\begin{theorem}\label{ne-ni-theo}
An almost complex manifold $(M,J)$ is a complex manifold if and only
if $T^{(1,0)}M$ is integrable.
\end{theorem}
We point out that the integrability condition can also be interpreted in
terms of forms as follows : $(M,J)$ is a complex manifold if and only
if $d\omega$ has no $(0,2)$ component for every $(1,0)$ form $\omega$.
\vskip 0,2cm Let $J$ be an almost complex structure defined in a
neighborhood of the origin in $\mathbb R^{2n}$ endowed with the usual
standard complex coordinates $z^1,\dots,z^n$. We assume that $J(0) =
J_{st}$. Since $(dz^1,\dots,dz^n)$ form a basis of $(1,0)$ forms at
the origin, one can find a basis of $(1,0)$ forms
$(\omega^1,\dots,\omega^n)$ such that $\omega^j(0) = dz^j$ for
$j=1,\dots,n$. Hence one can assume that $\omega^j = dz^j +
\sum_{k=1}^n A^j_k(z,\bar{z})d\bar{z}^k$, where $A^j_k$ are smooth
functions satisfying $A^j_k(0,0) = 0$. One cannot impose the condition
$(\partial A^j_k / \partial \bar{z})(0) = 0$ unless the structure $J$
is integrable at the origin. However, if ... then by the change of
variables ... one can add the normalization condition $(\partial A^j_k
/ \partial z)(0) = 0$. This normalization will be used in the local
description of strictly pseudoconvex hypersurfaces, see
Subsection~1.3.
\vskip 0,2cm
We point out that any almost complex structure on a real surface is integrable.
\vskip 0,1cm
Below in this first section we will describe some examples of almost complex
structures which will play an essential role throughout the paper. These are model
almost complex structures (defined and studied in Subsection~1.3) and
canonical almost complex structures on the tangent and cotangent bundles
(Subsection~1.5). These different structures will be central to our
study in the following sections.
\subsection{Pseudoholomorphic discs}
A smooth map $f$ from an almost complex manifold $(M',J')$ to an
almost complex manifold $(M,J)$ is $(J',J)$-holomorphic if its
differential satisfies the following holomorphicity condition~:
\begin{equation}\label{ho-eq1}
df \circ J' = J \circ df \ {\rm on} \ TM'.
\end{equation}
\begin{lemma}\label{ho-lem1}
The map $f$ is $(J',J)$ holomorphic if and only if
\begin{equation}\label{ho-eq2}
\forall \omega \in T^*_{(1,0)}M, f^*\omega \in T^*_{(1,0)}M'.
\end{equation}
\end{lemma}
Here $f^*\omega$ is the complex one form defined on $T_\mathbb C M'$ by
$f^*\omega = \omega \circ df$.
\proof
System~(\ref{ho-eq1}) implies that $df(X) \in T^{(0,1)}M$ for every
$X \in T^{(0,1)}M'$. In particular, if $\omega \in T^*_{(1,0)}M$ and
$X \in T^{(0,1)}M'$ then $(f^*\omega)(X) = \omega (df(X)) = 0$.
Conversely, assume that condition~(\ref{ho-eq2}) is satisfied. If $X \in
T^{(0,1)}M'$ and $\omega \in T^*_{(1,0)}M$, then $\omega(df(X)) = 0$.
Hence, $df(X) \in T^{(0,1)}M$ and $df(J'X) = -i\, df(X) = J(df(X))$.
One can prove similarly the equality $df \circ J' = J \circ df$ on
$T^{(1,0)}M'$, implying system~(\ref{ho-eq1}). \qed
\vskip 0,1cm
Generically, if $dim_{\mathbb R} M'=2k >2$ then system~(\ref{ho-eq1})
is overdetermined. If $k=1$ it follows from the
previous subsection that $J'$ is integrable. In particular, one can
view locally $(M',J')$ as $(\Delta, J_{st})$. In case
$(M',J') = (\Delta, J_{st})$ the map $f$ is called a $J$-holomorphic
disc. We denote by $\zeta$ the complex variable in $\mathbb C$: $\zeta = x +
iy$. Since $J_{st}(\partial / \partial x) = \partial / \partial y$
system~(\ref{ho-eq1}) can be written~:
$$
\frac{\partial f}{\partial y} = J(f) \frac{\partial f}{\partial x},
$$
or equivalently
$$
(J+J_{st})\frac{\partial f}{\partial \bar{\zeta}} =
(J-J_{st})\frac{\partial f}{\partial \zeta}.
$$
In view of
Lemma~\ref{suplem1}, $J+J_{st}$ is locally invertible. Then the
holomorphicity condition is usually written as
$$
\frac{\partial f}{\partial \bar{\zeta}} + Q_J(f) \frac{\partial
f}{\partial \zeta} = 0,
$$
where $Q_J$ is an endomorphism of $\mathbb R^{2n}$
given by $Q_J=(J_{st}+J)^{-1}(J_{st}-J)$.
Moreover, it is easy to see that $Q_J$ is an anti $\mathbb C$-linear
endomorphism of $\mathbb C^n$.
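This anti-linearity can be verified directly from the definition: setting
$A = J_{st}+J$ and $B = J_{st}-J$, the identities $J^2 = J_{st}^2 = -Id$
give $AJ_{st} = JJ_{st} - Id = JA$ and $BJ_{st} = -Id - JJ_{st} = -JB$, hence
$$
Q_J J_{st} = A^{-1}BJ_{st} = -A^{-1}JB = -J_{st}A^{-1}B = -J_{st}Q_J,
$$
so $Q_J$ anticommutes with $J_{st}$.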
One can locally write a basis $w:=(w^1,\dots,w^n)$
of $(1,0)$ forms on $M$ as $w^j=dz^j +
\sum_{k=1}^nA^j_k(z,\bar{z})d\bar{z}^k$ where the $A^j_k$ are smooth
functions satisfying the normalization conditions $A^j_k(0,0)=0$.
According to condition~(\ref{ho-eq2}), the disc $f$ is
$J$-holomorphic if and only if $f^*(w^j)$ is a
$(1,0)$ form for $j=1,\dots,n$ (see~\cite{ch1}). Since this means that
$f^*(w^j)(\partial / \partial \bar{\zeta}) = 0$, the disc $f$ satisfies the
following equation on $\Delta$~:
\begin{equation}\label{ho-eq3}
\frac{\partial f}{\partial \bar{\zeta}} + A(f)
\overline{\frac{\partial f}{\partial \zeta}} = 0,
\end{equation}
where $A=(A^j_k)_{1 \leq j,k \leq n}$.
We will use equation~(\ref{ho-eq3}) to characterize the
$J$-holomorphicity in the survey.
The Nijenhuis-Woolf theorem gives the existence of pseudoholomorphic
discs and their smooth dependence on initial data.
The following general form is due to S.~Ivashkovich and J.-P.~Rosay.
\begin{proposition}\label{I-R}
Let $k \in \mathbb N$, $k \geq 1$, and $0 < \alpha < 1$. Let $(M,J)$ be an
almost complex manifold with $J$ of class
$\mathcal C^{k,\alpha}$ and let $p \in M$. Then for every sufficiently
small $V=(v_1,\dots,v_k) \in \mathcal J^k_pM$,
there exists a $\mathcal C^{k+1,\alpha}$ $J$-holomorphic map $u_{p,V}$
from $\Delta$ into $M$ such that the $k^{th}$ jet of $u_{p,V}$ at the origin
is equal to $(p,V)$.
Moreover, $u_{p,V}$ can be chosen with $\mathcal C^1$ dependence (in
$\mathcal C^{k,\alpha}$) on the parameters $(p,V)$ in $\mathcal J^kM$.
\end{proposition}
Here $\mathcal J^kM$ denotes the space of jets of order $k$ of maps from
the unit disc $\Delta$ to $M$.
\proof We follow the proof given in~\cite{iv-ro04}.
If we fix a chart $B$ containing $p$, we can assume that
$p= 0 \in \mathbb R^{2n}$ and $J(0) = J_{st}$. Hence there exists a
neighborhood $U$ of 0 such that equation~(\ref{ho-eq3}) is defined for every
disc $u$, defined on $\Delta$, with values in $U$.
Consider the Cauchy-Green operator $T_{CG}$ for maps $g$
continuous on $\bar{\Delta}$, with values
in a complex vector space~:
$$
\forall \zeta \in \bar{\Delta}, \ T_{CG}(g)(\zeta) =
\frac{1}{2\pi i}\int\int_{\Delta}\frac{g(\tau)}{\zeta-\tau}d\tau \wedge d\bar{\tau}.
$$
The operator $T_{CG}$ satisfies the following properties~:
(a) $g \in \mathcal C^{k,\alpha}(\bar{\Delta}) \Rightarrow T_{CG}g \in
\mathcal C^{k+1,\alpha}(\bar{\Delta})$, $ \forall k \in \mathbb N,
\ 0<\alpha<1$,
(b) $\frac{\partial}{\partial \bar{\zeta}}(T_{CG}(g)) = g$ on $\bar{\Delta}$.
By considering $A_t(u) := A(tu)$ and $u_{t,\tau}(\zeta) := t^{-1}u(\tau \zeta)$
we can assume that $\|A\|_{\mathcal C^1}$ is small enough.
Moreover, in view of equation~(\ref{ho-eq3}), the $J$-holomorphicity of a disc
$u$ is equivalent to the (usual) holomorphicity of the disc
$h=(Id-T_{CG}(A(u)\overline{\frac{\partial}{\partial \zeta}}))u$.
Consider now the $\mathcal C^k$ mapping :
$$
\begin{array}{ccccc}
\Phi & : & (-1,1) \times \mathcal C^k(\Delta,B) & \rightarrow &
\mathcal C^k(\Delta,\mathbb C^n)\\
& & (t,u) & \mapsto &
(Id-T_{CG}(A(tu)\overline{\frac{\partial}{\partial \zeta}}))u.
\end{array}
$$
Since $\Phi(0,u) =u$, it follows from the Implicit Function Theorem that
there exists $0 < t_0 < 1$ such that for $|t| < t_0$ the map $\Phi(t,.)$
is a $\mathcal C^k$ diffeomorphism from a neighborhood of the origin
in $\mathcal C^k(\Delta,B)$ onto a neighborhood of the origin $V$
in $\mathcal C^k(\Delta,\mathbb C^n)$.
For $w=(w_1,\dots,w_k) \in (\mathbb C^n)^k$ small enough, the holomorphic
map $h_{w}$ defined on $\bar{\Delta}$ by
$$
h_{w}(\zeta) = \sum_{l=1}^k \frac{1}{l!}\zeta^l w_l
$$
belongs to $V$. If $u_{t,w}:=\Phi(t,.)^{-1}(h_{w})$ then $tu_{t,w}$ is
$J$-holomorphic. Moreover, since $u_{0,w} = h_w$, the $k$-th jet of
$u_{0,w}$ at the origin is equal to $w$. Hence, for sufficiently small
positive $t$ the map $w \mapsto (\frac{\partial u_{t,w}}{\partial Re(\zeta)}(0),
\dots,\frac{\partial^k u_{t,w}}{\partial (Re(\zeta)^k)}(0))$
is a diffeomorphism between neighborhoods of the origin in $(\mathbb C^n)^k$.
\qed
\vskip 0,2cm
\begin{remark}\label{nij-woo}
The statement of Proposition \ref{I-R} means that there exists a one-to-one
correspondence between
sufficiently small $J$-holomorphic discs and standard holomorphic discs.
\end{remark}
The following Proposition establishes the stability of arbitrary
$J$-holomorphic discs under perturbation of the center and of the
derivative at the center. This result due to Ivashkovich-Rosay
in~\cite{iv-ro04} is fundamental for the upper
semi-continuity of the Kobayashi-Royden pseudo-norm.
\begin{proposition}\label{kob-usc-prop}
Let $(M,J)$ be an almost complex manifold with $J$ of class $\mathcal
C^{1,\alpha}$ $(\alpha > 0)$. Let $u$ be a $J$-holomorphic map from a
neighborhood of $\bar{\Delta}$ into $M$. There exists a neighborhood
$V$ of $(u(0),\frac{\partial u}{\partial Re(\zeta)}(0))$ in $TM$ such that for
every $(q,X) \in V$, there exists a $J$-holomorphic map $v : \Delta
\rightarrow M$ with $v(0) =q$, $\frac{\partial v}{\partial Re(\zeta)}(0) = X$.
\end{proposition}
\proof
We follow the exposition of \cite{iv-ro04}. Assume that $u$ is defined
on $\Delta_r$ for some $r > 1$. Since the map $\tilde{u} : \zeta \mapsto
(\zeta,u(\zeta))$ is $J_{st}
\times J$-holomorphic from $\Delta_r$ into $\mathbb R^2 \times M$, it
is sufficient to prove Proposition~\ref{kob-usc-prop} for imbedded
discs. Moreover, there exist $(n-1)$ smooth vector fields
$Y_1,\dots,Y_{n-1}$, defined in a neighborhood of $u(\Delta_r)$ such
that for every $\zeta \in \Delta_r$, the vectors $\frac{\partial
u}{\partial x}(\zeta),Y_1(u(\zeta)),\dots,Y_{n-1}(u(\zeta))$ are
$J(u(\zeta))$-linearly independent. Consider now the $\mathcal$
C^{2,\alpha}$ change of variables $\Phi$
$$
(z_1,\dots,z_n) \mapsto u(z_1) + \sum_{j=1}^{n-1}z_{j+1}Y_j(u(z_1))
$$
defined for $|z_1| < r$, $|z_j|$ small if $j \geq 2$. The
structure $\Phi^*(J)$ is a $\mathcal C^{1,\alpha}$ almost complex structure
that coincides with the standard structure on $\mathbb C \times \{0\}
\subset \mathbb C^n$. So Proposition~\ref{kob-usc-prop} reduces to the
following Lemma~:
\begin{lemma}\label{kob-usc-lem}
Let $J$ be a $\mathcal C^{1,\alpha}$ almost complex structure on
$\mathbb R^{2n}$ that coincides with the standard complex structure on
$\mathbb C \times \{0\}$. Let $U$ be a neighborhood of $\bar{\Delta} \times
\{0\}$. For any $(q,t) \in \mathbb C^n \times \mathbb C^n$ close enough to
$(0,0)$, there exists a $J$-holomorphic map $v : \Delta \rightarrow U$
such that $v(0) = q$ and $\frac{\partial v}{\partial Re(\zeta)}(0) =
(1,0,\dots,0) + t$.
\end{lemma}
\noindent{\bf Proof of Lemma~\ref{kob-usc-lem}}. Consider a neighborhood
of $\bar{\Delta} \times \{0\}$ on which the $J$-holomorphicity condition can
be written :
$$
\frac{\partial u}{\partial \bar{\zeta}} + A_J(u)
\overline{\frac{\partial u}{\partial \zeta}} =0
$$
with $A_J(\zeta,0,\dots,0) = 0$.
Set $\mathcal E_0=\{f : \bar{\Delta} \rightarrow \mathbb C^n /
f \in \mathcal C^{1,\alpha}, f(0) = 0, \nabla f(0) = 0\}$,
$\mathcal F_0 = \{g : \bar{\Delta} \rightarrow \mathbb C^n /
g \in \mathcal C^{\alpha}, g(0) = 0\}$ and $F(z) = (z,0,\dots,0)$.
Define the map
$$
\begin{array}{llccl}
\Psi & : & \mathcal E_0 & \rightarrow & \mathcal F_0\\
& & f & \mapsto &
\displaystyle{\frac{\partial(F + f)}{\partial \bar{\zeta}} + A_J(F + f)
\overline{\frac{\partial(F + f)}{\partial \zeta}}}.
\end{array}
$$
Since $\frac{\partial F}{\partial \bar{\zeta}} = 0$ and
$A_J(F) = 0$ we have~:
$$\Psi(f) = \frac{\partial f}{\partial \bar{\zeta}} + B_J(f)
\overline{\frac{\partial F}{\partial \zeta}} + o(|f|),
$$
where $B_J$ is a $(2n \times 2n)$ matrix with $\mathcal C^\alpha$
entries in $\zeta$, depending $\mathbb R$-linearly on $f$. We want to show that
the derivative $D\Psi_0$ of the map $\Psi$ at $f=0$ is onto.
In complex notations we can write~:
$$
D \Psi_0(f)(\zeta) = \frac{\partial f}{\partial \bar{\zeta}}(\zeta) + B_1(\zeta)
f(\zeta)
+ B_2(\zeta) \overline{f(\zeta)}
$$
where $B_1$ and $B_2$ are $(n \times n)$ complex matrices with
$\mathcal C^\alpha$ coefficients.
The surjectivity of $D\Psi_0$ follows from the following classical
result (see \cite{iv-ro04} for a direct proof)
\begin{lemma}
If $B_1$, $B_2$ are $(n \times n)$ complex matrices with $\mathcal C^\alpha$
coefficients on $\bar{\Delta}$, for every $g \in \mathcal F_0$, there
exists $f \in \mathcal E_0$ such that
$$
\frac{\partial f}{\partial \bar{\zeta}}(\zeta) + B_1(\zeta) f(\zeta)
+ B_2(\zeta) \overline{f(\zeta)} = g(\zeta),
$$
for every $\zeta \in \Delta$.
\end{lemma}
\qed
\subsection{Real submanifolds}
This Subsection deals with the study of totally real and of strictly
pseudoconvex submanifolds in an almost complex manifold.
In the first part, we attach Bishop's discs to a totally real
submanifold. In the second part, we describe locally strictly
pseudoconvex hypersurfaces in real dimension four as deformations
of strictly pseudoconvex hypersurfaces for the standard structure.
The associated scaling procedure is our main tool for the study of strictly
pseudoconvex domains in almost complex manifolds.
\vskip 0,2cm
Let $\Gamma$ be a real smooth submanifold in $M$ and let $p \in
\Gamma$. We denote by $H^J(\Gamma)$ the $J$-holomorphic tangent bundle
$T\Gamma \cap JT\Gamma$.
\begin{definition}
The real submanifold $\Gamma$ is called totally real if $H^J(\Gamma)=\{0\}$
and is called $J$-complex if $H^J(\Gamma)=T\Gamma$.
\end{definition}
We note that if $\Gamma$ is a real
hypersurface in $M$ defined by $\Gamma=\{r=0\}$ and $p \in \Gamma$
then by definition $H_p^J(\Gamma) = \{v \in T_pM : dr(p)(v) =
dr(p)(J(p)v) = 0\}$.
\vskip 0,2cm
As usual, if $\theta$ is a one form on $M$ then $J^*\theta$ is the form acting
on a vector field $X$ by $(J^*\theta)X = \theta(JX)$.
We recall the notion of the Levi form of a hypersurface~:
\begin{definition}\label{DEF}
Let $\Gamma=\{r=0\}$ be a smooth real hypersurface in $M$
($r$ is any smooth defining function of $\Gamma$) and let $p \in \Gamma$.
$(i)$ The {\sl Levi form} of $\Gamma$ at $p$ is the map defined on
$H^J_p(\Gamma)$ by ${\mathcal L}_{\Gamma}^J(X_p) = J^\star dr[X,JX]_p$,
where the vector field $X$ is any section of the $J$-holomorphic tangent
bundle $H^J \Gamma$ such that $X(p) = X_p$.
$(ii)$ A real smooth hypersurface $\Gamma=\{r=0\}$ in $M$ is
{\sl strictly $J$-pseudoconvex} at $p$ if ${\mathcal L}_{\Gamma}^J(X_p)> 0$
for any nonzero $X_p \in H^J_p(\Gamma)$. The hypersurface $\Gamma$ is called
{\sl strictly $J$-pseudoconvex} if it is strictly $J$-pseudoconvex at every point.
\end{definition}
\begin{remark}
$(i)$ the ``strict $J$-pseudoconvexity'' condition does not depend on
the choice of a smooth defining function of $\Gamma$. Indeed if $\rho$
is an other smooth defining function for $\Gamma$ in a neighborhood of
$p \in \Gamma$ then there exists a positive smooth function $\lambda$
defined in a neighborhood of $p$ such that $\rho=\lambda r$. In
particular $(J^\star dr)(p) = \lambda(p)(J^\star d\rho)(p)$.
$(ii)$ since the map $(r,J) \mapsto J^\star dr$ is smooth the ``strict
$J$-pseudoconvexity'' is stable under small perturbations of both the
hypersurface and the almost complex structure.
\end{remark}
Let $X \in TM$. It follows from the identity
$d(J^\star dr)(X,JX)=X(<J^\star dr,JX>) - JX(<J^\star dr,X>) -
(J^\star dr)[X,JX]$
that
$
(J^\star dr)[X,JX] = -d(J^\star dr)(X,JX)
$
for every $X \in H^J\Gamma$, since $<dr,X> = <dr,JX> = 0$ in that case.
Hence we set
\begin{definition}\label{psh-def} Let $p \in M$. If $r$ is a $\mathcal C^2$ function on $M$
then the
Levi form of $r$ at $p$ is defined on $T_pM$ by ${\mathcal L_r}^J(p,v):=
-d(J^\star dr)_p(X,JX)$ where $X$ is any section of $TM$
such that $X(p) = v$.
\end{definition}
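For the standard structure this notion reduces to the classical one: a
direct computation (the positive constant depends on the chosen
normalization conventions) gives, for $v \in T_p\mathbb C^n \simeq \mathbb C^n$,
$$
{\mathcal L}_r^{J_{st}}(p,v) = c \sum_{j,k=1}^{n} \frac{\partial^2 r}{\partial z^j \partial \bar{z}^k}(p)\, v^j \bar{v}^k, \qquad c > 0,
$$
so that strict $J_{st}$-plurisubharmonicity coincides with the usual
strict plurisubharmonicity.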
\subsubsection{Almost complex perturbation of discs}
In this subsection we attach Bishop's discs to a totally real
submanifold in an almost complex manifold.
The following statement is an almost complex
analogue of the well-known Pinchuk's construction \cite{Pi74} of a
family of holomorphic discs attached to a totally real manifold.
In the following lemma, $W(\Omega,E)$ denotes a wedge with totally real
edge $E$ and $W_{\delta}(\Omega,E)$ a slightly shrunk wedge; the precise
definitions are recalled in Step 3 of the proof below.
\begin{lemma}\label{lem-discs}
For any $\delta > 0$ there exists a family of
$J$-holomorphic discs $h(\tau,t) = h_t(\tau)$ smoothly depending on the
parameter $t \in \mathbb R^{2n}$ such that $h_t(\partial \Delta^+) \subset E$,
$h_t(\Delta) \subset W(\Omega,E)$, $W_{\delta}(\Omega,E) \subset \cup_t
h_t(\Delta)$ and $C_1(1 - \vert \tau \vert) \leq dist (h_t(\tau),E)
\leq C_2 ( 1- \vert \tau \vert)$ for any $t$ and any $\tau \in \Delta^+$,
with constants $C_j > 0$ {\it independent} of $t$.
\end{lemma}
For $\alpha > 1$, noninteger, we denote by $\mathcal C^\alpha(\bar \Delta)$
the Banach space of functions of class $\mathcal C^\alpha$ on $\bar{\Delta}$
and by $\mathcal A^\alpha$ the Banach subspace of
$\mathcal C^\alpha(\bar \Delta)$ of functions holomorphic on $\Delta$.
First we consider the situation where $E=\{r:=(r_1,\dots,r_n)=0\}$
is a smooth totally real submanifold in $\mathbb C^n$.
Let $J_{\lambda}$ be an almost complex deformation of the standard
structure $J_{st}$ that is a one-parameter family of almost complex
structures so that $J_0 = J_{st}$.
We recall that for $\lambda$ small enough the
$(J_{st},J_{\lambda})$-holomorphicity condition for a map $f:\Delta
\rightarrow \mathbb C^n$ may be written in the form
\begin{eqnarray}\label{equa0}
\bar\partial_{J_{\lambda}} f = \bar\partial f +
q(\lambda,f)\overline{\partial f} = 0
\end{eqnarray}
where $q$ is a smooth matrix satisfying $q(0,\cdot) \equiv 0$, uniquely
determined by $J_\lambda$ (\cite{si}).
A disc $f \in (\mathcal C^\alpha(\bar{\Delta}))^n$ is attached
to $E$ and is $J_\lambda$-holomorphic if and only if it
satisfies the following nonlinear boundary Riemann-Hilbert type problem~:
$$
\left\{
\begin{array}{lll}
r(f(\zeta)) = 0,& & \zeta \in \partial \Delta\\
\bar{\partial}_{J_\lambda}f(\zeta) = 0,& & \zeta \in \Delta.
\end{array}
\right.
$$
Let $f^0 \in (\mathcal A^\alpha)^n$ be a disc attached to
$E$ and let $\mathcal U$ be a neighborhood of $(f^0,0)$ in the
space $(\mathcal C^\alpha(\bar{\Delta}))^n \times \mathbb R$.
Given $(f,\lambda)$ in $\mathcal U$ define the maps
$v_{f}: \zeta \in \partial
\Delta \mapsto r(f(\zeta))$
and
\begin{eqnarray*}
& &u : \mathcal{U} \rightarrow (\mathcal C^\alpha(\partial \Delta))^n
\times \mathcal C^{\alpha-1}(\Delta)\\
& & (f,\lambda) \mapsto (v_{f},
\bar{\partial}_{J_\lambda}f).
\end{eqnarray*}
Denote by $X$ the Banach space $(\mathcal C^\alpha(\bar \Delta))^n$.
Since $r$ is of class $\mathcal C^{\infty}$,
the map
$u$ is smooth and the tangent map $D_Xu(f^0,0)$ (we consider
the derivative
with respect to the space $X$) is a linear map from $X$ to
$(\mathcal C^\alpha(\partial \Delta))^n \times \mathcal C^{\alpha-1}(\Delta)$,
defined for every $h \in X$ by
$$
D_Xu(f^0,0)(h) =
\left(
\begin{array}{l}
2 Re [G h] \\
\bar\partial_{J_0} h
\end{array}
\right),$$
where for $\zeta \in \partial \Delta$
$$
G(\zeta) = \left(
\begin{array}{lll}
\frac{\partial r_1}{\partial z^1}(f^0(\zeta)) &\cdots&\frac{\partial
r_1}{\partial z^n}(f^0(\zeta))\\
\cdots&\cdots&\cdots\\
\frac{\partial r_n}{\partial z^1}(f^0(\zeta))& \cdots&\frac{\partial r_n}
{\partial z^n}(f^0(\zeta))
\end{array}
\right)$$
(see \cite{gl94}).
\begin{lemma}\label{tthh}
Assume that for some $\alpha > 1$
the linear map from $(\mathcal A^{\alpha})^n$ to
$(\mathcal C^{\alpha-1}(\Delta))^n$
given by $h \mapsto 2 Re [G h]$
is surjective and has a $d$-dimensional kernel.
Then there exist $\delta_0, \lambda_0 >0$ such that for every
$0 \leq \lambda \leq \lambda_0$,
the set of $J_\lambda$-holomorphic discs $f$ attached to $E$
and such that $\| f -f^0 \|_{\alpha} \leq \delta_0$ forms
a smooth $d$-dimensional
submanifold
$\mathcal A_{\lambda}$ in the Banach space
$(C^\alpha(\bar{\Delta}))^n$.
\end{lemma}
\noindent{\it Proof of Lemma~\ref{tthh}.}
According to the implicit function Theorem,
the proof of Lemma~\ref{tthh} reduces to the proof of the
surjectivity of $D_Xu$.
It follows by classical
one-variable results on the resolution of the
$\bar\partial$-problem in the unit disc that the linear map from
$X$ to $\mathcal C^{\alpha-1}(\Delta)$ given by
$h \mapsto \bar \partial h$
is surjective. More precisely, given $g \in \mathcal C^{\alpha -1}(\Delta)$
consider the Cauchy transform
$$T_{CG}(g) : \tau \in \bar{\Delta} \mapsto
\frac{1}{2\pi i}
\int\int_{\Delta} \frac{g(\zeta)}{\zeta - \tau}d\zeta \wedge d\bar{\zeta}.$$
For every function $g \in
C^{\alpha-1}(\Delta)$ the solutions $h \in X$ of the equation
$\bar\partial h = g$ have the form $h = h_0 + T_{CG}(g)$
where $h_0$ is an arbitrary function in $({\mathcal A}^{\alpha})^n$.
Consider the equation
\begin{equation}\label{EQU}
D_Xu(f^0,0)(h) = \left(
\begin{array}{l}
g_1 \\
g_2
\end{array}
\right)
\end{equation}
where $(g_1,g_2)$ is a vector-valued function with components
$g_1 \in \mathcal C^{\alpha-1}(\partial \Delta)$ and
$g_2 \in \mathcal C^{\alpha-1}(\Delta)$.
Solving the $\bar\partial$-equation for the second component, we reduce
equation~(\ref{EQU}) to
$$
2 Re [G(\zeta) h_0(\zeta)] = g_1 - 2 Re [G(\zeta) T_{CG}(g_2)(\zeta)]
$$
with respect to $h_0 \in (\mathcal A^{\alpha})^n$.
The surjectivity of the map $ h_0 \mapsto 2 Re [G h_0]$ gives the result.
\qed
\vskip 0,1cm
\noindent{\it Proof of Lemma~\ref{lem-discs}.} We proceed in three steps.
{\it Step 1. Filling the polydisc.} Consider the $n$-dimensional real
torus $\mathbb T^n = \partial \Delta \times
...\times \partial \Delta$ in $\mathbb C^n$ and the
linear disc $f^0(\zeta) = (\zeta,...,\zeta)$, $\zeta \in \Delta$
attached to $\mathbb T^n$.
In that case, a disc $h^0$ is in the kernel of
$h \mapsto 2 Re [G h]$ if and only if every component $h^0_k$ of $h^0$
satisfies on $\partial \Delta$ the condition $h^0_k +
\zeta^2\overline{h^0_k} = 0$. Considering the Fourier expansion
of $h_k$ on $\partial \Delta$ (recall that $h_k$ is holomorphic on
$\Delta$) and identifying the coefficients, we obtain that the map
$h \mapsto 2 Re [G h]$ from $({\mathcal A}^{\alpha})^n$ to
$(C^{\alpha - 1}(\Delta))^n$ is surjective and has a $3n$-dimensional
kernel.
By Lemma~\ref{tthh} if $J_\lambda$ is an almost complex
structure close enough to $J_{st}$ in a neighborhood of the closure
of the polydisc $\Delta^n$, there is a $3n$-parameters family of
$J_\lambda$-holomorphic discs attached to $\mathbb T^n$. These
$J_{\lambda}$-holomorphic discs fill the intersection of
a sufficiently small neighborhood of the point $(1,...,1)$
with $\Delta^n$.
{\it Step 2. Isotropic dilations.} Consider a smooth totally real
submanifold $E$ in an almost complex manifold $(M,J)$. Fixing local
coordinates, we may assume that $E$ is a submanifold in a neighborhood
of the origin in $\mathbb C^n$, $J = J_{st} + O(\vert z \vert)$ and $E$ is
defined by the equations $y = \phi(x)$, where $\nabla \phi(0) =
0$. For every $\varepsilon > 0$, consider the isotropic dilations
$\Lambda_{\varepsilon}: z \mapsto z' = \varepsilon^{-1}z$. Then
$J_{\varepsilon}:= (\Lambda_{\varepsilon})_*(J) \rightarrow J_{st}$ as
$\varepsilon \rightarrow 0$. In the $z'$-coordinates $E$ is defined by
the equations $y' = \psi(x',\varepsilon):=
\varepsilon^{-1}\phi(\varepsilon x')$ and $\psi \rightarrow 0$ as
$\varepsilon \rightarrow 0$. Consider the local diffeomorphism
$\Phi_{\varepsilon}: z' = x' +iy' \mapsto z''= x'
+i(y'-\psi(x',\varepsilon))$. Then in new coordinates (we omit the
primes) $E$ coincides with a neighborhood of the origin in $\mathbb R^n = \{
y = 0\}$ and $\hat J_{\varepsilon}: =
(\Phi_{\varepsilon})_*(J_{\varepsilon}) \rightarrow J_{st}$ as
$\varepsilon \rightarrow 0$. Assume for instance that $E=(]-1,1[
\times \{0\})^n$. For $j=1,\dots,n$, let $\Gamma_j$ be a smooth simple
curve in the real plane $\{(x_j,y_j) : x_j \leq 0\}$, containing
$]-1/2,1/2[$ and bounding a domain $G_j$. If $\psi_j$ is a
$J_{st}$-biholomorphism from $G_j$ to $\Delta$ then the map
$\psi:=(\psi_1,\dots,\psi_n)$ is a $J_{st}$-biholomorphism from $G_1
\times \cdots \times G_n$ to $\Delta^n$. Hence we may assume that $E$
is a neighborhood of the point $(1,...,1)$ on the torus $\mathbb T^n$
and the almost complex structure $J_{\varepsilon}$ is a small
deformation of the standard structure in a neighborhood of
$\bar{\Delta}^n$. By Step 1, we may fill a neighborhood of the point
$(1,...,1)$ in the polydisc $\Delta^n$ by
$J_{\varepsilon}$-holomorphic discs (for $\varepsilon$ small enough)
which are small perturbations of the disc $\zeta \mapsto
(\zeta,...,\zeta)$. Returning to the initial coordinates, we obtain a
family of $J$-holomorphic discs attached to $E$ along a fixed arc
(say, the upper semicircle $\partial \Delta^+$) and filling the
intersection of a neighborhood of the origin with the wedge $\{y -
\phi(x) < 0\}$.
{\it Step 3.} Let now $W(\Omega,E) = \{ r_j < 0 , j=1,...,n\}$ be a
wedge with edge $E$; we assume that $0 \in E$ and $J(0) =
J_{st}$. We may assume that $E=\{y = \phi(x)\}$, $\nabla \phi(0) = 0$,
since the linear part of every $r_j$ at the origin is equal to $y_j$. So
shrinking $\Omega$ if necessary, we obtain that for any $\delta > 0$
the wedge $W_{\delta}(\Omega, E) = \{z \in \Omega: r_j(z) -
\delta\sum_{k \neq j} r_k(z) < 0 , j=1,...,n \}$ is contained in the
wedge $\{z \in \Omega: y - \phi(x) < 0 \}$. By Step 2 there is a family of
$J$-holomorphic discs attached to $E$ along the upper semicircle and
filling the wedge $W_{\delta}(\Omega,E)$. These discs are smooth up to the
boundary and smoothly depend on the parameters. \qed
\subsubsection{Local description of strictly pseudoconvex domains} If
$\Gamma$ is a germ of a real hypersurface in $\mathbb C^n$ strictly
pseudoconvex with respect to $J_{st}$, then $\Gamma$ remains strictly
pseudoconvex for any almost complex structure $J$ sufficiently close
to $J_{st}$ in the $\mathcal C^2$-norm.
Conversely a strictly
pseudoconvex hypersurface in an almost complex manifold of real dimension
four can be represented, in suitable local coordinates, as a
strictly $J_{st}$-pseudoconvex hypersurface equipped with a small deformation
of the standard structure. Indeed, according to \cite{si} Corollary~3.1.2,
there exist a neighborhood $U$ of $q$ in $M$ and complex coordinates
$z=(z^1,z^2) : U \rightarrow B \subset \mathbb C^2$, $z(q) =
0$ such that $z_*(J)(0) = J_{st}$ and moreover, a map $f: \Delta
\rightarrow B$ is $J':= z_*(J)$-holomorphic if it satisfies the
equations
\begin{eqnarray}
\label{Jhol}
\frac{\partial f^j}{\partial \bar \zeta} =
A_j(f^1,f^2)\overline{\left ( \frac{\partial f^j}
{\partial \zeta}\right ) }, j=1,2
\end{eqnarray}
where $A_j(z) = O(\vert
z \vert)$, $j=1,2$.
To obtain such coordinates, one can consider two transversal
foliations of the ball $B$ by $J'$-holomorphic curves
(see~\cite{ni-wo}) and then map these curves onto the lines $z^j = const$
by a local diffeomorphism (see Figure 1).
The direct image of the almost complex structure
$J$ under such a diffeomorphism has a diagonal matrix $ J'(z^1,z^2) =
(a_{jk}(z))_{jk}$ with $a_{12}=a_{21}=0$ and $a_{jj}=i+\alpha_{jj}$
where $\alpha_{jj}(z)=\mathcal O(|z|)$ for $j=1,2$.
We point out that the lines $z^j = const$ are
$J$-holomorphic after a suitable parametrization (which, in general,
is not linear).
\bigskip
\begin{center}
\input{figure4.pstex_t}
\end{center}
\bigskip
\centerline{Figure 1}
\bigskip
In what follows we omit the prime and denote this structure again by
$J$. We may assume that the complex tangent space $T_0(\partial D)
\cap J(0) T_0(\partial D) = T_0(\partial D) \cap i T_0(\partial D)$ is
given by $\{ z^2 = 0 \}$.
In particular, we have the following expansion for the defining
function $\rho$ of $D$ on $U$~:
$\rho(z,\bar{z}) = 2 Re(z^2) + 2Re K(z) + H(z) + \mathcal O(\vert z
\vert^3)$, where
$K(z) = \sum k_{\nu\mu} z^{\nu}{z}^{\mu}$, $k_{\nu\mu} =
k_{\mu\nu}$ and
$H(z) = \sum h_{\nu\mu} z^{\nu}\bar z^{\mu}$, $h_{\nu\mu} =
\bar h_{\mu\nu}$.
\begin{lemma}
\label{PP}
The domain $D$ is strictly $J_{st}$-pseudoconvex near the origin.
\end{lemma}
\noindent{\it Proof of Lemma~\ref{PP}.} Consider a complex vector
$v=(v_1,0)$ tangent to $\partial D$ at the origin. Let $f:\Delta
\rightarrow \mathbb C^2$ be a $J$-holomorphic disc centered at the
origin and tangent to $v$: $f(\zeta) = v\zeta + \mathcal O(\vert
\zeta \vert^2)$. Since $A_2 = \mathcal O(\vert z \vert)$, it follows
from the $J$-holomorphicity equation (\ref{Jhol}) that
$(f^2)_{\zeta\bar\zeta}(0) = 0$. This implies that $(\rho \circ
f)_{\zeta\bar\zeta}(0) = H(v).$ Thus, the Levi form with respect to
$J$ coincides with the Levi form with respect to $J_{st}$ on the
complex tangent space of $\partial D$ at the origin. This proves
Lemma~\ref{PP}. \qed
\vskip 0,1cm
Consider the non-isotropic dilations $\Lambda_{\delta}: (z^1,z^2) \mapsto
(\delta^{-1/2}z^1,\delta^{-1}z^2) = (w^1,w^2)$ with $\delta > 0$.
If $J$ has the above
diagonal form in the coordinates $(z^1,z^2)$ in $\mathbb C^2$, then
its direct image $J_{\delta}= (\Lambda_{\delta})_*(J)$ has the form
$J_{\delta}(w^1,w^2) =(a_{jk}(\delta^{1/2}w^1,\delta w^2))_{jk}$
and so $J_{\delta}$ tends to $J_{st}$ in the $\mathcal C^2$ norm as $\delta
\rightarrow 0$. On the other hand, $\partial D$ is, in the $w$ coordinates,
the zero set of the function
$\rho_{\delta}= \delta^{-1}(\rho \circ \Lambda_{\delta}^{-1})$.
As $\delta \rightarrow 0$, the function $\rho_{\delta}$ tends to
the function $2 Re w^2 + 2 Re K(w^1,0) + H(w^1,0)$ which defines a
strictly $J_{st}$-pseudoconvex domain by Lemma~\ref{PP}; this proves the claim.
This also proves that if $\rho$
is a local defining function of a strictly $J$-pseudoconvex domain, then
$\tilde{\rho}:=\rho + C \rho^2$
is a strictly $J$-plurisubharmonic function, quite similarly to the standard
case.
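The standard computation behind this claim reads, with the conventions of
Definition~\ref{psh-def},
$$
{\mathcal L}^J_{\rho + C\rho^2}(p,v) = (1+2C\rho(p))\,{\mathcal L}^J_\rho(p,v)
+ 2C\left( (d\rho(v))^2 + (d\rho(Jv))^2 \right),
$$
so that, for $C>0$ large enough, the gradient term dominates in the
directions transverse to $H^J(\partial D)$, while strict pseudoconvexity
handles the complex tangential directions.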
In conclusion we point out that extending $\tilde \rho$ by a suitable
negative constant, we obtain that if $D$ is a strictly
$J$-pseudoconvex domain in an almost complex
manifold, then there exists a neighborhood $U$ of $\bar{D}$ and a
function $\rho$, $J$-plurisubharmonic on $U$ and strictly
$J$-plurisubharmonic in a neighborhood of $\partial D$, such that
$D=\{ \rho <0\}$.
\subsubsection{Model almost complex structures}
The scaling process in complex manifolds deals with deformations of
domains under holomorphic transformations called dilations. The usual
nonisotropic dilations in complex manifolds, associated with strictly
pseudoconvex domains, provide the unit ball (after biholomorphism) as the
limit domain. In almost complex manifolds dilations are generically no more
holomorphic with respect to the ambient structure. The scaling process
consists in deforming both the structure and the domain.
This provides, as limits, a quadratic domain and a linear deformation
of the standard structure in $\mathbb R^{2n}$, called {\it model structure}.
We study some invariants of such structures.
Let $(x^1,y^1,\dots,x^n,y^n)=(z^1,\dots,z^n)=('z,z^n)$ denote the canonical
coordinates of $\mathbb R^{2n}$.
\begin{definition}\label{def-model}
{\it Let $J$ be an almost complex structure on $\mathbb C^n$. We call
$J$ a {\rm model structure} if $J(z) = J_{st} + L(z)$ where $L$ is given by
a linear matrix $L=(L_{j,k})_{1 \leq j,k \leq 2n}$ such that $L_{j,k} = 0$ for
$1 \leq j \leq 2n-2, \ 1 \leq k \leq 2n$, $L_{j,k}=0$ for $j,k=2n-1,2n$ and
$L_{j,k} = \sum_{l=1}^{n-1} (a_l^{j,k} z^l + \bar{a}_l^{j,k}\bar{z}^l)$,
$a_l^{j,k} \in \mathbb C$, for $j=2n-1,2n$ and $k=1,\cdots,2n-2$.}
\end{definition}
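To fix ideas, consider the simplest nontrivial case $n=2$. Writing $J =
J_{st} + L$ as in Definition~\ref{def-model}, one has $L^2 = 0$ since $L$
vanishes on the last two real coordinates, and the requirement $J^2 = -Id$
then reduces to $LJ_{st} + J_{st}L = 0$, i.e.\ $L$ acts conjugate-linearly.
Identifying tangent vectors with $(v^1,v^2) \in \mathbb C^2$, every model
structure on $\mathbb C^2$ can thus be written (the coefficients $\alpha,
\beta$ are introduced here only for illustration) as
$$
J(z)(v^1,v^2) = \left( i v^1,\ i v^2 + \mu(z^1)\overline{v^1} \right),
\qquad \mu(z^1) = \alpha z^1 + \beta \bar{z}^1, \quad \alpha,\beta \in \mathbb C,
$$
and one checks immediately that $J(z)^2 = -Id$ for every $z$.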
The complexification $J_\mathbb C$ of a model structure $J$ can be written
as a $(2n \times 2n)$ complex matrix
\begin{equation}\label{complex}
J_\mathbb C=\left(
\begin{array}{ccccccc}
i & 0 & 0 & 0 & \cdots & 0 & 0 \\
0 & -i & 0 & 0 & \cdots & 0 & 0 \\
0 & 0 & i & 0 & \cdots & 0 & 0 \\
0 & 0 & 0 & -i & \cdots & 0 & 0 \\
\cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots \\
0 & \tilde{L}_{2n-1,2} & 0 & \tilde{L}_{2n-1,4} & \cdots & i & 0\\
\tilde{L}_{2n,1} & 0 & \tilde{L}_{2n,3} & 0 & \cdots & 0 & -i
\end{array}
\right),
\end{equation}
where $\tilde{L}_{2n-1,2k}(z,\bar{z}) = \sum_{l=1,\ l \neq k}^{n-1}
(\alpha_l^{k} z^l + \beta_l^{k} \bar{z}^l)$
with $\alpha_l^k,\ \beta_l^k \in \mathbb C$.
Moreover, $\tilde{L}_{2n,2k-1} = \overline{\tilde{L}_{2n-1,2k}}$.
\vskip 0,1cm
With a model structure we associate model domains.
\begin{definition}
Let $J$ be a model structure on $\mathbb C^n$ and $D=\{z \in \mathbb C^n : Re z^n +
P_2('z,'\bar{z})<0\}$, where $P_2$ is homogeneous second degree real
polynomial on $\mathbb C^{n-1}$. The pair $(D,J)$ is called a {\it model
domain} if $D$ is strictly $J$-pseudoconvex in a neighborhood of the
origin.
\end{definition}
The aim of this subsection is to describe the $J$-complex hypersurfaces
for model structures in $\mathbb R^{2n}$.
Let $J$ be a model structure on $\mathbb R^{2n}$ and let $N$ be a germ of a
$J$-complex hypersurface in $\mathbb R^{2n}$.
\begin{proposition}\label{prop-hyp}
\hfill
$(i)$ The model structure $J$ is integrable if and only if
$ \tilde{L}_{2n-1,j}$ satisfies the compatibility conditions
$$
\frac{\partial \tilde{L}_{2n-1,2k}}{\partial \bar{z}^j} =
\frac{\partial \tilde{L}_{2n-1,2j}}{\partial \bar{z}^k}
$$
for every $1 \leq j,k \leq n-1$.
In that case there exists a global diffeomorphism of $\mathbb R^{2n}$
which is $(J,J_{st})$ holomorphic, and any germ of a $J$-complex
hypersurface has one of the two following forms~:
\hskip 0,5cm $(a)$ $N= A \times \mathbb C$ where $A$ is a germ of a $J_{st}$-complex
hypersurface in $\mathbb C^{n-1}$,
\hskip 0,5cm $(b)$ $N =\{('z,z^n) \in \mathbb C^n : z^n = \frac{i}{4}
\sum_{j=1}^{n-1}\bar{z}^j\tilde{L}_{2n-1,2j}('z,'\bar{z})
+\frac{i}{4}
\sum_{j=1}^{n-1}\bar{z}^j \tilde{L}_{2n-1,2j}('z,0)
+ \tilde{\varphi}('z)\}$
where $\tilde{\varphi}$ is a holomorphic function locally defined in
$\mathbb C^{n-1}$.
\vskip 0,1cm
$(ii)$ If $J$ is not integrable then $N= A \times \mathbb C$ where $A$ is a germ
of a $J_{st}$-complex hypersurface in $\mathbb C^{n-1}$.
\end{proposition}
\vskip 0,2cm
\noindent{\it Proof of Proposition~\ref{prop-hyp}}. Let $N$ be a germ
of a $J$-complex hypersurface in $\mathbb R^{2n}$.
If $\pi:\mathbb R^{2n} \rightarrow \mathbb R^{2n-2}$ is the projection on the first
$2n-2$ variables, it follows from Definition~\ref{def-model}, or equivalently
from condition~(\ref{complex}), that
$\pi(T_zN)$ is a $J_{st}$-complex subspace of $\mathbb C^{n-1}$.
It follows that either $dim_\mathbb C\pi(N) = n-1$ or $dim_\mathbb C\pi(N) = n-2$.
\vskip 0,1cm
\noindent{\it Case one : $dim_\mathbb C\pi(N) = n-1$}.
We prove the following Lemma~:
\begin{lemma}\label{lem-hyp}
There is a local holomorphic function $\tilde{\varphi}$ in $\mathbb C^{n-1}$
such that
$N =\{('z,z^n) : z^n = \frac{i}{4}
\sum_{j=1}^{n-1}\bar{z}^j\tilde{L}_{2n-1,2j}('z,'\bar{z})
+\frac{i}{4}
\sum_{j=1}^{n-1}\bar{z}^j \tilde{L}_{2n-1,2j}('z,0)
+ \tilde{\varphi}('z)\}.$
\end{lemma}
{\it Proof of Lemma~\ref{lem-hyp}}. A germ $N$ can be represented as a
graph $N=\{z^n = \varphi('z,'\bar{z})\}$ where $\varphi$ is a smooth
complex-valued function defined locally. Hence
$T_zN=\{v_n = \sum_{j=1}^{n-1}(\frac{\partial \varphi}{\partial z^j}('z)v_j
+ \frac{\partial \varphi}{\partial {\bar z}^j}('z) \bar{v}_j)\}$.
A vector $v=(x^1,y^1,\dots,x^n,y^n)$ belongs to $T_zN$ if and only if
the complex components $v_1:=x^1+iy^1,\dots,v_n:=x^n + i y^n$ satisfy
\begin{equation}\label{tan}
iv_n = i\sum_{j=1}^{n-1}(\frac{\partial \varphi}{\partial z^j}('z)v_j +
\frac{\partial \varphi}{\partial \bar{z}^j}('z)\bar{v}_j).
\end{equation}
Similarly, the vector $J_zv$ belongs to $T_zN$ if and only if
\begin{equation}\label{Jtan}
\sum_{j=1}^{n-1}\tilde{L}_{2n,2j-1}('z) \bar{v}_j + i v_n =
i(\sum_{j=1}^{n-1} \frac{\partial \varphi}{\partial z^j}('z) v_j
-\sum_{j=1}^{n-1}\frac{\partial \varphi}{\partial
\bar{z}^j}('z)\bar{v}_j).
\end{equation}
It follows from (\ref{tan}) and (\ref{Jtan}) that $N$ is $J$-complex if and
only if
$$
\sum_{j=1}^{n-1}(\tilde{L}_{2n,2j-1}('z)
\bar{v}_j + 2i \frac{\partial \varphi}{\partial \bar{z}^j}('z)\bar{v}_j) = 0
$$
for every $'v \in \mathbb C^{n-1}$, or equivalently if and only if
$$
\tilde{L}_{2n,2j-1} = -2i \frac{\partial \varphi}{\partial \bar{z}^j}
$$
for every $j=1,\cdots,n-1$.
This last condition is equivalent to the compatibility
conditions
\begin{equation}\label{compat}
\frac{\partial \tilde{L}_{2n,2j-1}}{\partial \bar{z}^k} =
\frac{\partial \tilde{L}_{2n,2k-1}}
{\partial \bar{z}^j}\ {\rm for}\ j,k = 1,\cdots,n-1.
\end{equation}
In that case
there exists a local holomorphic function $\tilde{\varphi}$ in $\mathbb C^{n-1}$
such that
$$
\varphi('z,'\bar{z})=\frac{i}{2}
\sum_{j=1}^{n-1}\bar{z}^j(\sum_{k \neq j}\alpha_k^j z^k)
-\frac{i}{2}
\sum_{j=1}^{n-2}\bar{z}^j(\sum_{k > j}\beta_k^j \bar{z}^k)
+ \tilde{\varphi}('z),
$$
meaning that such $J$-complex hypersurfaces are parametrized by holomorphic
functions in the variables $'z$.
Moreover we can rewrite $\varphi$ as
$$
\varphi('z,'\bar{z})=\frac{i}{4}
\sum_{j=1}^{n-1}\bar{z}^j\tilde{L}_{2n-1,2j}('z,'\bar{z})
+\frac{i}{4}
\sum_{j=1}^{n-1}\bar{z}^j \tilde{L}_{2n-1,2j}('z,0)
+ \tilde{\varphi}('z).
$$ \qed
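As a hedged illustration of the compatibility conditions (\ref{compat}), with the constant $\alpha$ chosen only for the example (it is not taken from the text), consider $n=3$ and linear entries:

```latex
% Example with n = 3; the constant \alpha \in \mathbb C is an assumption of the example.
Take $\tilde{L}_{6,1} = \alpha \bar{z}^2$ and $\tilde{L}_{6,3} = \alpha \bar{z}^1$. Then
$$
\frac{\partial \tilde{L}_{6,1}}{\partial \bar{z}^2} = \alpha =
\frac{\partial \tilde{L}_{6,3}}{\partial \bar{z}^1},
$$
so (\ref{compat}) holds, and the equations
$\tilde{L}_{6,2j-1} = -2i \frac{\partial \varphi}{\partial \bar{z}^j}$, $j=1,2$,
integrate to
$$
\varphi('z,'\bar{z}) = \frac{i}{2}\alpha \bar{z}^1 \bar{z}^2 + \tilde{\varphi}('z)
$$
with $\tilde{\varphi}$ holomorphic in $'z$.
```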
We also have the following
\begin{lemma}\label{lem-form}
The $(1,0)$ forms with respect to $J$ have the form
$\omega = \sum_{k=1}^n c_kdz^k -\frac{i}{2}c_n\sum_{k=1}^{n-1}
\tilde{L}_{2n-1,2k} d\bar{z}^k$ where $c_1,\dots,c_n$ are complex numbers.
\end{lemma}
{\it Proof of Lemma~\ref{lem-form}}.
Let $X=\sum_{k=1}^n(x^k \frac{\partial}{\partial z^k} +
y^k\frac{\partial}{\partial \bar{z}^k})$ be a $(0,1)$ vector field.
In view of (\ref{complex}), we have~:
$$
J_\mathbb C(X) = -iX \Leftrightarrow \left\{
\begin{array}{lll}
x^k & = & 0, \ \ {\rm for}\ k =1,\dots,n-1\\
& & \\
x^n & = & \frac{i}{2} \sum_{k=1}^{n-1}y^k \tilde{L}_{2n-1,2k}.
\end{array}
\right.
$$
Hence the $(0,1)$ vector fields are given by
$$
X=\sum_{k=1}^ny^k\frac{\partial}{\partial \bar{z}^k} +
\frac{i}{2}\Big(\sum_{k=1}^{n-1}y^k\tilde{L}_{2n-1,2k}\Big)\frac{\partial}{\partial z^n}.
$$
A $(1,0)$ form $\omega=\sum_{k=1}^n(c_kdz^k + d_kd\bar{z}^k)$
satisfies $\omega(X)=0$ for every $(0,1)$ vector field $X$ if and only if
$d_n = 0$ and $d_k + (i/2)c_n \tilde{L}_{2n-1,2k}=0$ for every
$k=1,\dots,n-1$.
This gives the desired form for the $(1,0)$ forms on $\mathbb C^n$. \qed
\vskip 0,1cm
Consider now the global diffeomorphism of $\mathbb C^n$ defined by
$$
F('z,z^n) = ('z,z^n - \frac{i}{4}
\sum_{j=1}^{n-1}\bar{z}^j\tilde{L}_{2n-1,2j}('z,'\bar{z})
-\frac{i}{4}
\sum_{j=1}^{n-1}\bar{z}^j \tilde{L}_{2n-1,2j}('z,0)).
$$
The map $F$ is $(J,J_{st})$ holomorphic if and only if $F^*(dz^k)$
is a $(1,0)$ form with respect to $J$, for every $k=1,\dots,n$.
Then $F^*(dz^k) = dz^k$ for $k=1,\dots,n-1$ and
$$
\begin{array}{lll}
F^*(dz^n) & = & \displaystyle
dz^n + \sum_{k=1}^{n-1} \frac{\partial F_n}{\partial z^k} dz^k
+ \sum_{k=1}^{n-1} \frac{\partial F_n}{\partial \bar{z}^k} d\bar{z}^k\\
& & \\
& = & \displaystyle
dz^n + \sum_{k=1}^{n-1} \frac{\partial F_n}{\partial z^k} dz^k\\
& & \ \ \ \ \ - \displaystyle \frac{i}{4} \sum_{k=1}^{n-1}
(\tilde{L}_{2n-1,2k}('z,'\bar{z})+
\sum_{j \neq k}\bar{z}^j
\frac{\partial \tilde{L}_{2n-1,2j}}{\partial \bar{z}^k}('z,'\bar{z})
+ \tilde{L}_{2n-1,2k}('z,'0)) d\bar{z}^k.
\end{array}$$
By the compatibility condition~(\ref{compat}) we have
$$
\begin{array}{lll}
F^*(dz^n) & = & \displaystyle
dz^n + \sum_{k=1}^{n-1} \frac{\partial F_n}{\partial z^k} dz^k\\
& & \\
& & -\frac{i}{4} \sum_{k=1}^{n-1}
(\tilde{L}_{2n-1,2k}('z,'\bar{z}) +
\tilde{L}_{2n-1,2k}('0,'\bar{z}) +
\tilde{L}_{2n-1,2k}('z,'0)) d\bar{z}^k\\
& & \\
& = & \displaystyle dz^n
- \frac{i}{2} \sum_{k=1}^{n-1}
\tilde{L}_{2n-1,2k}('z,'\bar{z}) d\bar{z}^k
+ \sum_{k=1}^{n-1} \frac{\partial F_n}{\partial z^k} dz^k.
\end{array}
$$
These equalities show that $F$ is a $(J,J_{st})$-biholomorphism of
$\mathbb C^n$, and hence that $J$ is integrable.
\vskip 0,1cm
\noindent{\it Case two : $dim_\mathbb C\pi(N) = n-2$}. In that case we may write
$N=\pi(N) \times \mathbb C$, meaning that $J$-complex hypersurfaces
are parametrized by $J_{st}$-complex hypersurfaces of $\mathbb C^{n-1}$.
\vskip 0,1cm
We can conclude now the proof of Proposition~\ref{prop-hyp}. We proved in
Case one that if there exists a $J$-complex hypersurface $N$ in $\mathbb C^n$ such
that $dim_\mathbb C \pi(N) = n-1$ (this is equivalent to the compatibility conditions
(\ref{compat})) then $J$ is integrable.
Conversely, it is immediate that if $J$
is integrable then there exists a $J$-complex hypersurface whose form
is given by Lemma~\ref{lem-hyp} and hence that the compatibility conditions
(\ref{compat}) are satisfied. This gives part $(i)$ of
Proposition~\ref{prop-hyp}.
To prove part $(ii)$, we note that if $J$ is not integrable then in view
of part $(i)$ the form of any $J$-complex hypersurface is given by Case two.
\qed
\subsection{Plurisubharmonic functions}
In this Subsection, we present essentially two results. The first establishes
the Hopf Lemma. As a consequence, we obtain the boundary equivalence property
for biholomorphisms between relatively compact, strictly pseudoconvex domains.
The second result deals with the removability of singularities
for plurisubharmonic functions.
\vskip 0,2cm
We first recall the following
definition~:
\begin{definition}\label{d6}
An upper semicontinuous function $u$ on $(M,J)$ is called
{\sl $J$-plurisubharmonic} on $M$ if the composition $u \circ f$
is subharmonic on $\Delta$ for every $J$-holomorphic disc $f:\Delta
\rightarrow M$.
\end{definition}
If $M$ is a domain in $\mathbb C^n$ and $J=J_{st}$ then a
$J_{st}$-plurisubharmonic function is a plurisubharmonic function
in the usual sense.
\vskip 0,1cm
\begin{proposition}\label{di-su}
Let $r$ be a real function of class $C^2$ in a neighborhood of a point $p \in M$.
\begin{itemize}
\item[(i)] If $F: (M,J) \longrightarrow (M', J')$ is a $(J,
J')$-holomorphic map, and $\varphi$ is a real function of class
$C^2$ in a neighborhood of $F(p)$, then for any $v \in T_p(M)$ we have
$L^J_{\varphi \circ F}(p;v) = L^{J'}_\varphi(F(p),dF(p)(v))$.
\item[(ii)] If $z:\Delta \longrightarrow M$ is a $J$-holomorphic disc satisfying
$z(0) = p$, and $dz(0)(e_1) = v$ (here $e_1$ denotes the vector
$\frac{\partial}{\partial Re(\zeta)}$ in
$\mathbb R^2$), then $L^J_r(p;v) = \Delta (r \circ z) (0)$.
\end{itemize}
\end{proposition}
The property (i) expresses the invariance of the Levi form with
respect to biholomorphic maps. The property (ii) is often useful in order to compute
the Levi form on a given vector
$v$.
\proof (i) Since the map $F$ is $(J, J')$-holomorphic, we have ${J'}^* dr(dF(X))
= dr(J' dF(X)) = dr ( dF(J X)) = d(r \circ F)(JX)$ that is
$F^*({J'}^* dr) = J^* d(r \circ F)$. By the invariance of the exterior
derivative we obtain that $F^*(d{ J'}^* dr) = d J^* d (r \circ F)$. Again
using the holomorphy of $F$, we get $d{ J'}^* dr(dF(X),J'dF(X)) =
F^*(d{ J'}^* dr)(X,JX) = d J^* d(r \circ F)(X,JX)$ which
implies (i).
(ii) Since $z$ is a
$(J_{st},J)$-holomorphic map, (i) implies that $L_r^J(p,v) = L_{r \circ
z}^{J_{st}}(0,e_1) = \Delta(r \circ z)(0)$. This proves the proposition. \qed
It follows from Proposition~\ref{di-su} that a $\mathcal C^2$ real valued
function $u$ on $M$ is
$J$-plurisubharmonic on $M$ if and only if $\mathcal L^J(u)(p)(v) \geq 0$
for every $p \in M$, $v \in T_pM$.
\vskip 0,1cm
This leads to the definition~:
\begin{definition}
A $\mathcal C^2$ real valued function $u$ on $M$ is {\sl strictly
$J$-plurisubharmonic} on $M$ if $\mathcal L^J(u)(p)(v)$
is positive for every $p \in M$, $v \in T_pM \backslash \{0\}$.
\end{definition}
We have the following example of a
$J$-plurisubharmonic function on an almost complex manifold $(M,J)$~:
\begin{example}\label{example}
For every point $p\in (M,J)$ there exists a neighborhood $U$ of $p$
and a diffeomorphism $z:U \rightarrow \mathbb
B$ centered at $p$ (i.e. $z(p) =0$) such that the function $|z|^2$ is
$J$-plurisubharmonic on $U$.
\end{example}
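In the standard case the example can be verified directly; the following computation is a sketch for $J=J_{st}$ with $z$ the identity chart (an assumption of the sketch, not the general proof).

```latex
% Sketch for J = J_{st}, z = id, u = |z|^2.
For any holomorphic disc $f = (f_1,\dots,f_n):\Delta \rightarrow \mathbb C^n$ we have
$$
\Delta(|z|^2 \circ f) = 4\sum_{j=1}^n
\frac{\partial^2 (f_j \bar{f}_j)}{\partial \zeta \partial \bar{\zeta}}
= 4\sum_{j=1}^n \vert f_j' \vert^2 \geq 0,
$$
so $|z|^2 \circ f$ is subharmonic on $\Delta$ and $|z|^2$ is
$J_{st}$-plurisubharmonic; for a general $J$ one corrects the chart $z$ so that
$J = J_{st} + O(|z|)$ and the positivity persists in a neighborhood of $p$.
```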
\vskip 0,1cm
We also have the following
\begin{lemma}
A function $u$ of class $\mathcal C^2$ in a neighborhood
of a point $p$ of $(M,J)$ is strictly $J$-plurisubharmonic
if and only if there exists a neighborhood $U$ of $p$ with local
complex coordinates $z:U \rightarrow \mathbb B$ centered at $p$, such
that the function $u - c|z|^2$ is $J$-plurisubharmonic on $U$ for some
constant $c > 0$.
\end{lemma}
The function $\log \vert z \vert$ is
$J_{st}$-plurisubharmonic on $\mathbb C^n$ and plays an important role in the
pluripotential theory as the Green function for the complex
Monge-Amp{\`e}re operator on the unit ball. In particular, this function
is crucially used in Sibony's method in order to localize and
estimate the Kobayashi-Royden metric on a complex manifold. Unfortunately,
after an arbitrarily small general almost complex deformation of the
standard structure this function is {\it not} plurisubharmonic with
respect to the new structure (in any neighborhood of the origin), see
for instance \cite{de99}. So we will need the following statement
communicated to the authors by E.Chirka~:
\begin{lemma}
Let $p$ be a point in an almost complex manifold $(M,J)$. There exist
a neighborhood $U$ of $p$ in $M$, a diffeomorphism $z : U \rightarrow
\mathbb B$ centered at $p$ and positive constants $\lambda_0,\ A$, such that
the function $\log|z| + A|z|$ is
$J'$-plurisubharmonic on $U$ for every almost complex structure $J'$
satisfying $\|J'-J\|_{\mathcal C^2(\bar{U})} \leq \lambda_0$.
\end{lemma}
\noindent{\it Proof.} Consider the function $u=|z|$ on $\mathbb B$.
Since $\mathcal L^{J_{st}}(u \circ z^{-1})(w)(v) \geq \|v\|^2/(4|w|)$
for every $w \in \mathbb B\backslash \{0\}$ and every $v \in \mathbb C^n$, it
follows by a direct expansion of ${\mathcal L}^{J'}(u)$ that there
exist a neighborhood $U$ of $p$, $U \subset\subset U_0$, and a
positive constant $\lambda_0$ such that ${\mathcal L}^{J'}(u)(q)(v)
\geq \|v\|^2/(5|z(q)|)$ for every $q \in U \backslash\{p\}$, every $v\in
T_qM$ and every almost complex structure $J'$ satisfying
$\|J'-J\|_{\mathcal C^2(\bar{U})} \leq \lambda_0$. Moreover, computing the
Laplacian of $\log|f|$ where $f$ is any $J$-holomorphic disc we
obtain, decreasing $\lambda_0$ if necessary, that there exists a
positive constant $B$ such that $\mathcal L^{J'}(\log|z|)(q)(v) \geq
-B\|v\|^2/|z(q)|$ for every $q \in U\backslash\{p\}$, every $v \in
T_qM$ and every almost complex structure $J'$ satisfying
$\|J'-J\|_{\mathcal C^2(\bar{U})} \leq \lambda_0$. We may choose $A = 5B$ to
get the result. \qed
\vskip 0,2cm
We point out that such constructions of plurisubharmonic functions were
generalized recently by J.P.Rosay. He constructed a plurisubharmonic function
in $\mathbb C^2$ whose polar set is a two-dimensional real submanifold of
$\mathbb C^2$.
\subsubsection{Hopf lemma and the boundary distance preserving property}
In what follows we need an analogue of the Hopf lemma
for almost complex manifolds. It can be proved quite similarly
to the standard one.
\begin{lemma}\label{hopf}
(Hopf lemma) Let $G$ be a relatively compact domain with a $\mathcal C^2$
boundary on an almost complex manifold $(M,J)$. Then for any negative
$J$-plurisubharmonic function $u$ on $G$ there exists a constant $C > 0$ such that
$\vert u(p) \vert \geq C dist(p,\partial G)$ for any $p \in G$ ($dist$
is taken with respect to a Riemannian metric on $M$).
\end{lemma}
\noindent{\it Proof of Lemma~\ref{hopf}}.
{\it Step 1.} We have the following precise version on the unit
disc: let $u$ be a subharmonic function on $\Delta$ and let $K$ be a fixed
compact subset of $\Delta$. Suppose that $u < 0$ on $\Delta$ and $u \vert K
\leq -L$ where $ L > 0$ is a constant. Then there exists $C(K,L) > 0$
(independent of $u$) such that $\vert u(p) \vert \geq C
dist(p,\partial \Delta)$ (see \cite{Ra}).
{\it Step 2.} Let $G$ be a domain in $\mathbb C$ with $\mathcal C^2$-boundary.
Then there exists an $r > 0$ (depending on the curvature of the boundary)
such that for any boundary point $q \in \partial G$ the ball $B_{q,r}$
of radius $r$ centered on
the interior normal to $\partial G$ at $q$, such that $q \in
\partial B_{q,r}$, is
contained in $G$. Applying Step 1 to the restriction of $u$ to every
such ball (when $q$ runs over $\partial G$) we obtain the Hopf lemma
for a domain with $\mathcal C^2$ boundary:
let $u$ be a subharmonic function on $G$ and let $K$ be a fixed
compact subset of $G$. Suppose that $u < 0$ on $G$ and $u \vert K
\leq -L$ where $ L > 0$ is a constant. Denote by $k$ the curvature of
$\partial G$. Then there exists $C(K,L,k) > 0$
(independent of $u$) such that $\vert u(p) \vert \geq C
dist(p,\partial G)$.
{\it Step 3.} Now we can prove the Hopf lemma for almost complex
manifolds. Fix a normal field $v$ on $\partial G$ and consider the family
of $J$-holomorphic discs $d_v$ satisfying $d_v'(0)(\partial_x) =
v(d_v(0))$. The image of such a disc is a real surface intersecting
$\partial G$ transversally, so its pullback gives a $\mathcal C^2$-curve in
$\Delta$. Denote by $G_v$ the component of $\Delta$ defined by the
condition $d_v(G_v) \subset G$. Then every $G_v$ is a domain with
$\mathcal C^2$-boundary in $\mathbb C$ and the curvatures of the boundaries depend
continuously on $v$. We conclude by applying Step 2 to the composition
$u \circ d_v$ on $G_v$. \qed
\vskip 0,2cm
As an application, we obtain the boundary distance preserving property for
biholomorphisms between strictly pseudoconvex domains.
\begin{proposition}\label{equiv}
Let $D $ and $D'$ be two smoothly bounded strictly pseudoconvex
domains in almost complex manifolds $(M,J)$ and
$(M',J')$ respectively and let $f:D \rightarrow D'$ be a
$(J,J')$-biholomorphism. Then
there exists a constant $C > 0$ such that
$$
(1/C) dist(f(z),\partial D') \leq dist(z,\partial D) \leq C dist
(f(z),\partial D').
$$
\end{proposition}
\noindent{\it Proof of Proposition~\ref{equiv}}.
According to the previous section, we may assume that $D = \{ p:
\rho(p) < 0 \}$ where $\rho$ is a $J$-plurisubharmonic
function on $D$, strictly $J$-plurisubharmonic in a neighborhood of the
boundary; similarly $D'$ can be defined by means of a function
$\rho'$. Now it suffices
to apply the Hopf lemma to the functions $\rho' \circ f$ and $\rho
\circ f^{-1}$. \qed
\subsubsection{Removable singularities for plurisubharmonic functions}
In this subsection we prove the following
\begin{theorem}
\label{rem-theo}
Let $(M,J)$ be an almost complex manifold of real dimension four and let
$E \subset M$ be a generic submanifold of $M$ of real
codimension two. Then for any continuous plurisubharmonic function $u$ on
$M \backslash E$ the function $u^*$ defined by $u^*(x) = u(x)$ for $x \in
M \backslash E$ and $u^*(x) = \lim \sup_{y \in M \backslash E, y
\longrightarrow x} u(y)$ for $x \in E$, is $J$-plurisubharmonic on $M$.
\end{theorem}
As usual, by a generic manifold we mean a real
submanifold $E$ of $(M,J)$ such that the tangent space of $E$ at
every point $p \in E$ spans the tangent space $T_p(M)$ (considered as a
complex space with the structure $J(p)$). In real dimension four, $E$ is
generic if and only if it is totally real (for any vector $v \in T_p(E)
\backslash \{ 0 \}$ the vector $J(p)v$ is not in $T_p(E)$).
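A quick check, included for the reader's convenience, that the model situation used below fits this definition:

```latex
% E = \mathbb R^2 = \{y = 0\} in (\mathbb C^2, J_{st}).
For $v = (a,b) \in T_p\mathbb R^2$ with $a,b \in \mathbb R$ we have
$J_{st}v = (ia,ib)$, which belongs to $T_p\mathbb R^2 = \mathbb R^2$ only if
$a = b = 0$. Hence $\mathbb R^2$ is totally real, and so generic, in $\mathbb C^2$;
since total reality is an open condition, the same holds for any structure
$J = J_{st} + O(\vert z \vert)$ in a small neighborhood of the origin.
```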
Theorem~\ref{rem-theo} deserves some comments.
Firstly, the definition of $u^*$ in Theorem~\ref{rem-theo} is correct since $E$
has empty interior. The function $u^*$ is the unique possible plurisubharmonic
extension of $u$ to $M$.
Secondly, Theorem~\ref{rem-theo} was obtained by N.Karpova \cite{Ka} in the case
where $M$ is a complex manifold. Related results in the
integrable case are due to B.Shiffman \cite{Sh}, P.Pflug \cite{Pf},
U.Cegrell \cite{Ce}.
Thirdly, our proof admits easy generalizations to higher
dimensions. These are given at the end of this Subsection.
\vskip 0,1cm
Our method is similar to the approach of \cite{Ka} and
includes two main steps. First, we show that $u$ is upper
bounded. The proof uses a filling of $M \backslash E$ by $J$-holomorphic
discs. This shows that $u^*$ is defined correctly. In order to
prove its plurisubharmonicity, we cannot use the characterization of
upper semicontinuous $J$-plurisubharmonic functions in terms of the positivity of their
Levi current since this
result is not yet established for almost complex structures. So we use another
approach based on the construction of the plurisubharmonic envelope of an
upper semicontinuous function by means of $J$-holomorphic curves; this
construction is quite elementary and in the case of the standard structure
this is due to Edgar \cite{Ed} and Bu-Schachermayer
\cite{BuSch}; from our point of view, it is of independent interest.
\vskip 0,2cm
\noindent{\bf Step 1. Filling by J-holomorphic discs and upper bound
for $u^*$}
\vskip 0,1cm
The statement of Theorem~\ref{rem-theo} is local. So everywhere below we
may assume that $M$ is the unit ball $\mathbb B_2 \subset \mathbb C^2$, $J = J_{st} +
O(\vert z \vert)$ where $z = x + iy$ are standard coordinates in $\mathbb
C^2$ and $J_{st}$ is the standard complex structure on $\mathbb C^2$. We
also may assume that $E$ coincides with $\mathbb R^2 = \{z \in \mathbb C^2 : y =
0 \}$.
\vskip 0,1cm
Consider the
``wedge-type'' domains $M_{11} = \{ y_1 > 0, y_2 > 0 \}$, $M_{12} = \{ y_1
> 0, y_2 < 0 \}$, $M_{21} = \{ y_1 < 0, y_2 > 0 \}$, $M_{22} = \{ y_1 < 0,
y_2 < 0 \}$. Then $\mathbb B_2 \backslash \mathbb R^ 2 = \cup M_{ij}$ so it is enough to
show that $u$ is upper bounded on every
$M_{ij}$. We prove it for instance on $M_{11}$.
Consider the ``support'' hypersurface $\Gamma = \{ \rho = 0 \}$ where
$\rho = y_1 + y_2 + y_1^2 +y_2^2$ so that $M_{11} \subset
\Gamma^+ = \{ \rho > 0 \}$ and $\overline M_{11} \cap \Gamma = \mathbb R^2$. We
may assume that the norm $\| J - J_{st} \|_{C^2}$ is small enough
so that the function $\rho$ is strictly $J$-plurisubharmonic on $\mathbb B_2$.
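For the standard structure the strict plurisubharmonicity of $\rho$ can be seen by a direct computation (a sketch, using the normalization $L^{J_{st}}_\rho(z;v) = \Delta(\rho\circ f)(0)$ from Proposition~\ref{di-su}):

```latex
% Levi form of \rho = y_1 + y_2 + y_1^2 + y_2^2 for J = J_{st}.
Since $y_j = (z^j - \bar{z}^j)/2i$, the linear part of $\rho$ is pluriharmonic and
$$
\frac{\partial^2 \rho}{\partial z^j \partial \bar{z}^k} = \frac{1}{2}\delta_{jk},
\qquad \hbox{so} \qquad
L^{J_{st}}_\rho(z;v) = 4\sum_{j,k=1}^2
\frac{\partial^2 \rho}{\partial z^j \partial \bar{z}^k} v_j\bar{v}_k
= 2\Vert v \Vert^2 > 0
$$
for $v \neq 0$; the strictness survives a perturbation of $J_{st}$ with small
$\mathcal C^2$ norm.
```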
For $t \geq 0$ consider the translated hypersurface $\Gamma_t = \{ z:
\rho(z) = t \}$ which is strictly $J$-pseudoconvex for small $t$.
According to the result of J.F.Barraud-E.Mazzilli \cite{ba-ma} and
S.Ivashkovich-J.P.Rosay \cite{iv-ro04} there exists a family $(f_{p,t} :
\Delta \rightarrow \mathbb B_2)$ of
$J$-holomorphic discs with the following properties~(see Figure 2) :
\begin{itemize}
\item[(a)] the discs depend smoothly on parameters $p \in \Gamma_t$, $t \geq
0$
\item[(b)] there exists an $r > 0$ such that for every $p$, $t$ the
surface $f_{p,t}(r\Delta)$ is contained in $\{ p \} \cup \{ z:\rho(z) - t > 0
\}$ and $f_{p,t}(0) = p$.
\end{itemize}
\bigskip
\begin{center}
\input{figure1.pstex_t}
\end{center}
\bigskip
\centerline{Figure 2}
\bigskip
This family of discs fills the ``wedge''
$M_{11}$ and every disc is contained in $\mathbb B_2 \backslash \mathbb R^2$. It follows
by construction that their boundaries $f_{p,t}(r\partial \Delta)$ form a compact
set
$K = \cup_{p,t}f_{p,t}(r\partial \Delta)$ in $\mathbb B_2 \backslash
\mathbb R^2$. Since $u$ is bounded above on $K$, say, $u \vert K \leq C$,
applying the maximum principle to every subharmonic function $u \circ
f_{p,t}$ we obtain that $u$ is bounded from above by $C$ on $M_{11}$. Repeating
this construction for other wedges we obtain the following
\begin{proposition}
\label{pro1}
The function $u$ is upper bounded on $\mathbb B_2 \backslash \mathbb R^2$.
\end{proposition}
The standard argument now allows us to reduce the proof of
Theorem~\ref{rem-theo} to the case where $u$ is bounded. Indeed,
suppose that Theorem~\ref{rem-theo} is proved for bounded
functions. For any positive integer $n$, consider the continuous
$J$-plurisubharmonic function $u_n = \sup (u, - n)$. So $u_n^*$ is
$J$-plurisubharmonic on $\mathbb B_2$. Since the sequence $(u_n^*)$ is
decreasing, it converges to a $J$-plurisubharmonic function $\hat u$
on $\mathbb B_2$ and $u = \hat u$ on $\mathbb B_2 \backslash \mathbb R^2$. We prove that $u^* =
\hat u$. Clearly, by the upper semicontinuity of $\hat{u}$ we have $u^*
\leq \hat u$. Fix now a point $x_0 \in \mathbb R^2$ and a vector $v \in
T_{x_0}(\mathbb B_2)$ generating a complex line $L$ such that $L \cap
T_{x_0}\mathbb R^2 = \{ 0 \}$. According to Nijenhuis-Woolf~\cite{ni-wo}, there
exists a $J$-holomorphic disc $f$ such that $f(0) = x_0$ and $L$
is tangent to the surface $f(\Delta)$ at $x_0$. So the bounded function $u
\circ f$ is subharmonic on the punctured disc $\Delta^*$ and so extends
uniquely as a subharmonic function on $\Delta$ setting $(u\circ f)(0) = \lim
\sup_{\zeta \longrightarrow 0 } (u \circ f)(\zeta)$. Since this extension is unique,
we obtain that $\hat u(x_0) = (\hat u \circ f)(0) = \lim \sup_{\zeta
\longrightarrow 0 } (u \circ f)(\zeta) \leq \lim \sup_{z \longrightarrow
x_0} u(z) = u^*(x_0)$. Applying this argument to every point in $\mathbb R^2$
we obtain that $\hat u = u^*$ on
$\mathbb B_2$.
Thus, {\it everywhere below we assume that $u$ is bounded on $\mathbb B_2
\backslash \mathbb R^2$.}
\vskip 0,2cm
\noindent{\bf Step 2. $J$-plurisubharmonic envelopes of upper semicontinuous
functions}
\vskip 0,1cm
Denote by $\mu_r$ the normalized Lebesgue measure of the disc $r\Delta$ (we
simply write $\mu$ if $r = 1$).
\begin{proposition}
\label{pro2.1}
Let $v$ be an upper semicontinuous function on
$\mathbb B_2$. Consider the sequence $(v_n)$ defined as follows: $v_0 = v$ and
for $n \geq 1$ and $z \in \mathbb B_2$,
$$
v_n(z) = \inf \int_{\Delta} (v_{n-1} \circ f)
(\zeta)d\mu,
$$
where the $\inf$ is taken over all $J$-holomorphic discs $f : \Delta \rightarrow
\mathbb B_2$ such that $f(0) = z$ (without loss of generality we always
assume that every disc $f$ is continuous on $\overline \Delta$ and $f(\overline
\Delta) \subset \mathbb B_2$). Then the sequence $(v_n)$
decreases pointwise to the largest $J$-plurisubharmonic function $\hat v$ on $\mathbb B_2$
bounded from above by $v$.
\end{proposition}
\noindent{\it Proof of Proposition~\ref{pro2.1}.} {\it Step 1.} The sequence $(v_n)$
decreases.
Indeed, for every $z$, the constant disc $f_z(\zeta) \equiv z$ is
$J$-holomorphic so
$$v_n(z) = \inf_f \int_{\Delta} (v_{n-1} \circ f)d\mu \leq
\int_{\Delta} v_{n-1} \circ f_z d\mu = v_{n-1}(z).
$$
\begin{lemma}\label{usc}
The function $\hat v$ is upper semicontinuous.
\end{lemma}
\noindent{\it Proof of Lemma~\ref{usc}.}
We proceed by induction on $n$. For $n = 0$ the statement is
correct. Suppose that $v_{n-1}$ is upper semicontinuous. Let $(z_k) \subset \mathbb B_2$ be a
sequence of points converging to $z_0 \in \mathbb B_2$.
The following claim is a direct consequence of Proposition \ref{I-R}.
{\bf Claim.} {\it Let $f:\Delta \rightarrow \mathbb B_2$ be a $J$-holomorphic disc
continuous on $\overline \Delta$ such that $f(0) = z_0$ and $f(\overline \Delta)
\subset \mathbb B_2$. Then there
exists a sequence of $J$-holomorphic discs $f_k : \Delta \rightarrow \mathbb B_2$,
continuous on $\overline \Delta$ such that $f_k(\overline \Delta) \subset \mathbb B_2$,
$f_k(0) = z_k$
for every $k$ and $f_k \longrightarrow f$ uniformly on $\Delta$.}
\vskip 0,1cm
Consider the compact set $K$ defined as the closure of the union $\cup_k
f_k(\overline \Delta)$. Since $v_{n-1}$ is upper semicontinuous it is bounded from
above by a
constant $C$ on $K$ and
$$
(v_{n-1}\circ f)(\zeta) \geq \lim \sup_{k \longrightarrow \infty}
(v_{n-1} \circ f_k)(\zeta), \zeta \in \Delta.
$$
So by the Fatou lemma
$$
\int_{\Delta} (v_{n-1} \circ f) d\mu \geq \lim \sup_{k
\longrightarrow \infty} \int_{\Delta} (v_{n-1} \circ f_k) d\mu \geq \lim
\sup_{k \longrightarrow \infty} v_n(z_k).
$$
This implies that
$$
v_n(z_0) = \inf_f \int_{\Delta} v_{n-1} \circ f d\mu \geq \lim \sup_{k
\longrightarrow \infty} v_n(z_k),
$$
so each $v_n$ is upper semicontinuous. Therefore, the function $\hat v$ is also
upper semicontinuous as a decreasing limit of upper semicontinuous functions,
which proves Lemma~\ref{usc}. \qed
{\it Step 2.} We prove by induction that for any $J$-plurisubharmonic function
$\phi$ satisfying $\phi
\leq v$ we have $ \phi \leq v_n$ for any $n$. This is true for $n = 0$.
Suppose that $\phi \leq v_{n-1}$. Fix an arbitrary point $z_0 \in \mathbb B_2$.
For every $J$-holomorphic disc $f$ continuous on $\overline \Delta$ and satisfying
$f(0) =z_0$,
$f(\overline \Delta) \subset \mathbb B_2$ we have~:
$$
\phi(z_0) \leq
\int_{\Delta} \phi \circ f d\mu \leq \int_{\Delta} v_{n-1} \circ f d\mu.
$$
So
$\phi(z_0) \leq v_n(z_0)$.
{\it Step 3.} We show that the restriction of $\hat v$ on a
$J$-holomorphic disc is subharmonic. Given $z_0$ and $f$ as above, we have
$$
\hat v(z_0) = \lim_{n \longrightarrow \infty} v_n(z_0) \leq \lim_{n
\longrightarrow \infty} \int_{\Delta} v_{n-1} \circ f d\mu = \int_{\Delta} \hat v \circ f
d\mu
$$
by the Beppo Levi theorem. This proves Proposition~\ref{pro2.1}. \qed
\vskip 0,1cm
Let now $u$ be as in the previous subsection (that is a bounded continuous
$J$-plurisubharmonic function on $\mathbb B_2 \backslash \mathbb R^2$) and let $u^*$ be its upper
semicontinuous extension defined in the statement of
Theorem~\ref{rem-theo}. Setting $v = u^*$, we apply Proposition
\ref{pro2.1} and obtain the largest $J$-plurisubharmonic function $\hat v$ on $\mathbb B_2$
bounded from above by $v$. In order to conclude the proof of
Theorem~\ref{rem-theo}, it is sufficient to show that $\hat v =
u^*$ on $\mathbb B_2$.
The substantial part of the proof is contained in the following
\begin{proposition}
\label{prop2.2}
For any $z \in \mathbb B_2 \backslash \mathbb R^2$ we have $\hat v(z) = u(z)$.
\end{proposition}
\noindent{\bf Proof.}
Since $\hat v \leq u$ ($u=u^*$ on $\mathbb B_2 \backslash \mathbb R^2$),
we just need to prove the reverse
inequality. By induction, we show that for every $n$, $v_n \geq
u$. For $n=0$ this is clear. Suppose that $v_{n-1} \geq u$ on $\mathbb B_2
\backslash \mathbb R^2$ (and so $v_{n-1} \geq u^*$ on $\mathbb B_2$). Fix a point $z_0
\in \mathbb B_2 \backslash \mathbb R^2$ and a $J$-holomorphic disc $f$ satisfying $f(0)
= z_0$.
\vskip 0,1cm
{\bf Claim.} {\it The interior of the set $f^{-1}(\mathbb R^2) \subset \Delta$ is
empty and $\mu(f^{-1}(\mathbb R^2)) = 0$.}
\vskip 0,1cm
Indeed, the set $S \subset \Delta$ of critical points of $f$ is
discrete (see~\cite{mc-sa}). If $p \in f^{-1}(\mathbb R^2)$ is not a critical point, the
tangent
space to $f(\Delta)$ at $f(p)$ is not contained in $T_{f(p)}\mathbb R^2 = \mathbb R^2$ since
$\mathbb R^2$ is totally real. So in a neighborhood of $p$ the pullback
$f^{-1}(\mathbb R^2)$ is contained in a smooth real curve. This implies the
desired assertion. \qed
\begin{lemma}\label{ext}
The function $u \circ f$, defined and subharmonic on $\Delta \backslash
f^{-1}(\mathbb R^2)$, extends as a subharmonic function on $\Delta$.
\end{lemma}
\noindent{\it Proof.} Let $p \in f^{-1}(\mathbb R^2)$ be a non-critical point of $f$.
Consider a small open disc $U = p + r_0\Delta$ centered at $p$. It follows
from~\cite{ni-wo} that
\vskip0,1cm
{\bf Claim.} {\it There exists a sequence of $(J_{st},J)$-holomorphic maps
$f_k : U \rightarrow \mathbb B_2$ uniformly converging to $f$ and such that $U \cap
f^{-1}_k(\mathbb R^2) = \{ p \}$ for every $k$.}
Since $u \circ f_k$ is a bounded subharmonic function on $U \backslash \{
p \}$, it extends as a subharmonic function on $U$. By continuity of $u$
(this is the only place of the proof where we use this assumption!), the
sequence $(u \circ f_k)$ converges to $u \circ f$ on $U \backslash \{ p
\}$. Therefore, by the Lebesgue dominated convergence theorem, $(u \circ f_k)$ tends to
$u \circ f$ on $U$ in the distribution sense and the
generalized Laplacian $\Delta(u \circ f)$ is a positive
distribution. Therefore, $u \circ f$ admits a subharmonic extension
$\widetilde{u \circ f}$ on $U$ (given by $\widetilde{u \circ f}(q) = \lim
\sup_{\zeta \longrightarrow q} (u \circ f)(\zeta)$ for $q \in
f^{-1}(\mathbb R^2)$). Therefore, since the set of critical points of $f$ is
discrete, $u \circ f$ extends subharmonically on $\Delta$, proving
Lemma~\ref{ext}. \qed
\vskip 0,1cm
Using Lemma~\ref{ext} and the induction step we have~:
$$
\int_{\Delta} v_{n-1} \circ f d\mu \geq \int_{\Delta} u^* \circ f d\mu =
\int_{\Delta \backslash f^{-1}(\mathbb R^2)} u \circ f d\mu \geq u(z_0).
$$
Hence, $v_n(z_0) \geq u(z_0)$ for every $z_0 \in \mathbb B_2 \backslash
\mathbb R^2$. This proves Proposition~\ref{prop2.2}. \qed
\vskip 0,1cm
\noindent{\it Proof of Theorem~\ref{rem-theo}}.
Since $\hat v$ is upper semicontinuous on $\mathbb B_2$ and $\hat{v} = u$ on $\mathbb B_2 \backslash
\mathbb R^2$ by Proposition~\ref{prop2.2}, we have $\hat v \geq u^*$ on $\mathbb B_2$.
However, $\hat{v} \leq u^*$ by construction (see
Proposition~\ref{pro2.1}). Hence $\hat v = u^*$ on $\mathbb B_2$. This proves
Theorem~\ref{rem-theo}. \qed
\vskip 0,2cm
Our method carries over easily to almost complex manifolds of any
dimension.
\begin{theorem}
Let $(M,J)$ be an almost complex manifold and
let $E \subset M$ be a generic submanifold of $M$ of real
codimension $2$. Then for any continuous plurisubharmonic function $u$ on
$M \backslash E$ the function $u^*$ defined by $u^*(x) = u(x)$ for $x \in
M \backslash E$ and $u^*(x) = \lim \sup_{y \in M \backslash E, y
\longrightarrow x} u(y)$ for $x \in E$, is $J$-plurisubharmonic on $M$.
\end{theorem}
A slight modification of these techniques can be used in order to obtain the
following almost complex analogues of well-known results. Denote by
${\mathcal H}_d$ the Hausdorff measure of dimension $d$.
\begin{theorem}
Let $E$ be a subset of an almost complex manifold $(M,J)$ of real
dimension $2n$.
\begin{itemize}
\item[(a)] If ${\mathcal H}_{2n-2}(E) < \infty$, then $E$ is removable
for every bounded continuous $J$-plurisubharmonic function.
\item[(b)] If ${\mathcal H}_{2n-3}(E) = 0$, then $E$ is removable for every
continuous $J$-plurisubharmonic function.
\end{itemize}
\end{theorem}
\bigskip
\subsection{Canonical almost complex bundles}
We present two constructions of almost complex structures on the tangent
and on the cotangent bundles of an almost complex manifold.
\subsubsection{Lift to the tangent bundle}
We endow the tangent bundle $TM$ with the complete lift $J^c$ of $J$
(see~\cite{ya-is} for its definition). We recall that $J^c$ is an
almost complex structure on $TM$. Moreover, if $\nabla$ is any $J$-complex
connection on $M$ (i.e. $\nabla J=0$) and $\bar{\nabla}$ is the
connection defined on $M$ by $\bar\nabla_XY = \nabla_YX +[X,Y]$ then
$J^c$ is the horizontal lift of $J$ with respect to
$\bar\nabla$. Another definition of $J^c$ is given in \cite{le-sz}
where it is characterized by a deformation property.
The equality between the two definitions given in \cite{ya-is} and in
\cite{le-sz} is obtained by their (equal) expression in the local
canonical coordinates on $TM$~:
$$
J^c=\left(
\begin{array}{ccc}
J_i^h & & (0)\\
& & \\
t^a\partial_a J_i^h & & J_i^h
\end{array}
\right).
$$
(Here $t^a$ are the fiber coordinates.)
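As a simple consistency check (in the flat integrable case), if $J = J_{st}$ in the given coordinates, then the components $J_i^h$ are constant, the lower left block $t^a\partial_a J_i^h$ vanishes, and
$$
J^c=\left(
\begin{array}{cc}
J_{st} & 0\\
0 & J_{st}
\end{array}
\right),
$$
which is the standard complex structure on $T\mathbb C^n \simeq \mathbb C^n \times \mathbb C^n$.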
\subsubsection{Canonical lift to the cotangent bundle}
We recall the definition of the canonical lift of an almost
complex structure $J$ on $M$ to the cotangent bundle $T^*M$, following~
\cite{ya-is}. Set $m=
2n$. We use the following notations. Suffixes $A,B,C,D$ take the values
$1$ to $2m$, suffixes $a,b,c,\dots,h,i,j,\dots$ take the values $1$ to
$m$ and $\bar j = j+ m$, etc. The summation convention for
repeated indices is used. If the notation $(\varepsilon_{AB})$,
$(\varepsilon^{AB})$, $(F_{B}^{{}A})$ is used for matrices, the suffix
on the left indicates the column and the suffix on the right indicates
the row. We denote local coordinates on $M$ by $(x^1,\dots,x^m)$ and by
$(p_1,\dots,p_m)$ the fiber coordinates.
Recall that the cotangent bundle $T^*(M)$ of $M$ possesses the {\it
canonical contact form} $ \theta$ given in local coordinates by
$\theta = p_idx^i.$
The cotangent lift $\varphi^*$ of any diffeomorphism $\varphi$ of $M$
preserves $\theta$; in particular, $\theta$ does not depend on
the choice of local coordinates on $T^*(M)$.
The exterior derivative $d\theta$ of $\theta$ defines {\it the
canonical
symplectic structure} of $T^*(M)$:
$d\theta = dp_i \wedge dx^i$
which is also independent of local coordinates in view of the
invariance of the exterior derivative. Setting $d\theta =
(1/2)\varepsilon_{CB}dx^C \wedge dx^B$ (where $dx^{\bar j} =
dp_j$), we have
$$
(\varepsilon_{CB}) = \left(
\begin{array}{cc}
0 & I_n \\
-I_n & 0
\end{array}
\right).
$$
Denote by $(\varepsilon^{BA})$ the inverse matrix and write
$\varepsilon^{-1}$ for the tensor field of type (2,0) whose components
are $(\varepsilon^{BA})$. By construction, this definition does not
depend on the choice of local coordinates.
Let now $E$ be a tensor field of type (1,1) on $M$. If $E$ has
components $E_i^{\, h}$ and $E_i^{*h}$ relative to local coordinates $x$
and $x^*$ respectively, then
$p_a^*E_{i}^{*\,a} = p_bE_j^{\, b}\frac{\partial x^j}{\partial x^{*i}}.$
If we interpret a change of coordinates as a diffeomorphism $x^* =
x^*(x) = \varphi(x)$ we denote by $E^*$ the direct image of the tensor $E$
under the action of $\varphi$. In the case where $E$ is an almost
complex structure (that is $E^2 = -Id$), then $\varphi$ is a biholomorphism
between $(M,E)$ and $(M,E^*)$. Any (1,1) tensor field $E$ on $M$
canonically defines a contact form on $T^*M$ via $
\sigma = p_aE_b^{\, a}dx^b.$
Since $(\varphi^*)^*( p_a^*E_b^{{*}\,a}dx^{*b}) = \sigma,$
$\sigma$ does not depend on a choice of local coordinates (here
$\varphi^*$ is the cotangent lift of $\varphi$). Then this canonically
defines the symplectic form
$$
d\sigma = p_a\frac{\partial E_b^{\, a}}{\partial x^c}dx^c \wedge dx^b
+ E_b^{\, a}dp_a \wedge dx^b.
$$
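As a consistency check, for $E = Id$ (components $E_b^{\,a} = \delta_b^{\,a}$) one has $\sigma = p_a dx^a = \theta$, the first term of the above formula vanishes, and
$$
d\sigma = dp_a \wedge dx^a = d\theta,
$$
so the construction recovers the canonical symplectic structure.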
The cotangent lift $\varphi^*$ of a diffeomorphism $\varphi$ is a
symplectomorphism for $d\sigma$. We may write
$d\sigma = (1/2)\tau_{CB}dx^C \wedge dx^B$
where $x^{\bar i} = p_i$; so we have
$$
\tau_{ji} = p_a \left ( \frac{\partial E_i^{\, a}}{\partial x^j} -
\frac{\partial E_j^{\, a}}{\partial x^i} \right ), \tau_{\bar{j} i} =
E_i^{\, j}, \tau_{j\bar{i}} = -E_j^{\, i}, \tau_{\bar{j}\bar{i}} = 0.
$$
We write $\widehat{E}$ for the tensor field of type (1,1) on $T^*(M)$ whose
components $\widehat{E}_B^{{}A}$ are given by
$\widehat{E}_B^{{}A} = \tau_{BC}\varepsilon^{CA}.$
Thus $
\widehat{E}_i^{\, h} = E_i^{\, h}, \ \widehat{E}_{\bar{i}}^{\, h} = 0$
and
$\widehat{E}_i^{\, \bar{h}} =
p_a \left( \frac{\partial E_i^{\, a}}{\partial x^h} -
\frac{\partial E_h^{\, a}}{\partial x^i} \right ),\
\widehat{E}_{\bar{i}}^{\,\bar{h}} = E_h^{\, i}.$
In the matrix form we have
$$
\widehat{E} = \left(
\begin{array}{cll}
E_i^{\, h} & & 0 \\
p_a \left ( \frac{\partial E_i^{\, a}}{\partial x^h} -
\frac{\partial E_h^{\, a}}{\partial x^i} \right ) & & E_h^{\, i}
\end{array}
\right).
$$
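In particular, if the components $E_i^{\,h}$ are constant in the given coordinates, the lower left block vanishes and
$$
\widehat{E} = \left(
\begin{array}{cc}
E & 0 \\
0 & {}^tE
\end{array}
\right),
$$
so that $\widehat{E}^2 = -Id$ whenever $E^2 = -Id$. For a general almost complex structure $J$ the obstruction to $\widehat{J}^2 = -Id$ is therefore concentrated in the lower left block.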
By construction, the complete lift $\widehat{E}$ has the following
{\it invariance property}~: if $\varphi$ is a local diffeomorphism of $M$
transforming $E$ to $E'$, then the direct image of $\widehat{E}$ under the
cotangent lift $\psi: = \varphi^*$ is $\widehat{E'}$.
In general, $\widehat{E}$ is not an almost complex
structure, even if $E$ is. Moreover, one can show \cite{ya-is} that
$\widehat{J}$ is an almost complex structure if and only if $J$ is integrable.
One may however construct an almost complex structure on $T^*(M)$ as follows.
Let $S$ be a tensor field of type (1,s) on $M$. We may consider the
tensor field $\gamma S$ of type $(1,s-1)$ on $T^*M$, defined in local
canonical coordinates on $T^*M$ by the expression
$$
\gamma S = p_aS_{i_s...i_2i_1}^{\,\,\,a}dx^{i_s} \otimes
\cdots \otimes dx^{i_2}\otimes \frac{\partial}{\partial p_{i_1}}.
$$
In particular, if $T$ is a tensor field of type (1,2) on $M$, then $\gamma T$
has components
$$
\gamma T = \left(
\begin{array}{cll}
0 & & 0\\
p_a T_{ji}^{\,\,a} & & 0
\end{array}
\right)
$$
in the local canonical coordinates on $T^*M$.
Let $F$ be a (1,1) tensor field on $M$. Its Nijenhuis tensor $N$
is the tensor field of type (1,2) on $M$ acting on two vector fields $X$ and
$Y$ by
$$
N(X,Y) = [FX,FY] - F[FX,Y] - F[X,FY] + F^2[X,Y].
$$
By $NF$ we denote the tensor field acting by $(NF)(X,Y) =
N(X,FY)$. The following proposition is proved in \cite{ya-is} (p.256).
\begin{proposition}\label{prop-lift}
Let $J$ be an almost complex structure on $M$. Then
\begin{equation}\label{EEE}
\tilde J := \widehat J + (1/2)\gamma(NJ)
\end{equation}
is an almost complex structure on the cotangent bundle $T^*(M)$.
\end{proposition}
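The proposition may be verified directly in block matrix form. Both $\widehat{J}$ and $\gamma(NJ)$ are block lower triangular, so
$$
\tilde J = \left(
\begin{array}{cc}
J & 0\\
C & {}^tJ
\end{array}
\right), \ \ \
\tilde J^2 = \left(
\begin{array}{cc}
J^2 & 0\\
CJ + {}^tJC & ({}^tJ)^2
\end{array}
\right),
$$
where $C$ denotes the sum of the lower left blocks of $\widehat{J}$ and $(1/2)\gamma(NJ)$. Since $J^2 = ({}^tJ)^2 = -Id$, the statement reduces to the identity $CJ + {}^tJC = 0$, which is the computation carried out in \cite{ya-is}.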
We stress that the definition of the tensor $\tilde{J}$ is independent of
the choice of coordinates on $T^*M$. Therefore if $\phi$ is a biholomorphism
between two almost complex manifolds $(M,J)$ and $(M',J')$, then its
cotangent lift $\tilde \phi:= (\phi,{}^td\phi^{-1})$ is a biholomorphism
between $(T^*(M),\tilde J)$ and
$(T^*(M'), \tilde J')$. Indeed one can view $\phi$ as a change of coordinates
on $M$, $J'$ representing $J$ in the new coordinates. The cotangent lift
$\phi^*$ defines a change of coordinates on $T^*M$ and $\tilde{J}'$
represents $\tilde{J}$ in the new coordinates.
The invariance of $\tilde{J}$ under cotangent lifts is therefore
immediate in view of the definition of $\tilde{J}$ given by~(\ref{EEE}).
\subsubsection{Conormal bundle of a submanifold}
The conormal bundle of a strictly pseudoconvex hypersurface in $(M,J)$
provides an important example of a totally real submanifold in the
cotangent bundle $T^*M$, endowed with the canonical almost complex
structure $\tilde{J}$ defined in the last Subsection.
If $\Gamma$ is a real submanifold in $M$, the conormal bundle
$\Sigma_J(\Gamma)$ of $\Gamma$ is the real subbundle of $T^*_{(1,0)}M$
defined by
$$
\Sigma_J(\Gamma) = \{ \phi \in T^*_{(1,0)}M: Re \,\phi
\vert_{T\Gamma} = 0\}.
$$
One can identify the
conormal bundle $\Sigma_J(\Gamma)$ of $\Gamma$ with any of the
following subbundles of $T^*M$~: $N_1(\Gamma)=\{\varphi \in T^*M :
\varphi_{|T\Gamma}=0\}$ and $N_2(\Gamma)=\{\varphi \in T^*M :
\varphi_{|JT\Gamma}=0\}$.
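In the model case $M = \mathbb C^n$, $J = J_{st}$ and $\Gamma = \{\rho = 0\}$ with $\rho(z) = \vert z \vert^2 - 1$, the fiber of $N_1(\Gamma)$ at a point $z \in \Gamma$ is spanned over $\mathbb R$ by $d\rho(z)$, while the fiber of $\Sigma_{J_{st}}(\Gamma)$ is spanned by
$$
\partial \rho(z) = \sum_{j=1}^n \bar{z}^j dz^j,
$$
since $Re\, \partial\rho = (1/2)d\rho$ vanishes on $T\Gamma$.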
\begin{proposition}\label{prop-tot-real}
Let $\Gamma$ be a $\mathcal C^2$ real hypersurface in $(M,J)$. If $\Gamma$
is strictly $J$-pseudoconvex, then the bundles $N_1(\Gamma)$ and
$N_2(\Gamma)$ (except the zero section) are totally real submanifolds
of dimension $2n$ in $T^*M$ equipped with $\tilde{J}$.
\end{proposition}
Proposition~\ref{prop-tot-real} is due to A.~Tumanov~\cite{tu01} in the
integrable case. The question whether a similar result holds in the
almost complex case was asked by the second author to A.~Spiro who
gave a positive answer~\cite{sp}. For completeness we give an
alternative proof of this fact.
\vskip 0,1cm
\noindent{\it Proof of Proposition~\ref{prop-tot-real}}. Let $x_0 \in \Gamma$.
We consider local coordinates $(x,p)$ for the real cotangent bundle
$T^*M$ of $M$ in a neighborhood of $x_0$.
It is equivalent to prove that $N_1(\Gamma)$ or $N_2(\Gamma)$ is
totally real in $(T^*M,\tilde J)$; we work with $N_2(\Gamma)$.
The fiber of $N_2(\Gamma)$ is given by $c(x)J^*d\rho(x)$, where $c$ is a real
nonvanishing function. In what follows we denote $-J^*d\rho$
by $d^c_J\rho$. By definition, every $\varphi \in N_2(\Gamma)$
satisfies $\varphi|_{J(T\Gamma)} \equiv 0$. We recall
that if $\Theta=p_idx^i$ in local coordinates then $d\Theta$ defines the
canonical symplectic form on $T^*M$.
If $V,W \in T(N_2(\Gamma))$ then
$d\Theta(V,W)=0$. Indeed the projection $pr_1(V)$ of $V$ (resp. $W$) on $M$ is
in $J(T\Gamma)$ and the projection of $V$ (resp. $W$) on the fiber
annihilates $J(T\Gamma)$ by definition. It follows that $N_2(\Gamma)$ is a
Lagrangian submanifold of $T^*M$ for this symplectic form.
Let $V$ be a vector field in $T(N_2(\Gamma)) \cap \tilde JT(N_2(\Gamma))$.
We wish to prove that $V=0$.
According to what precedes we have $d\Theta(V,W) = d\Theta(\tilde JV,W)=0$ for every
$W \in T(N_2(\Gamma))$. We restrict to $W$ such that $pr_1(W) \in T\Gamma
\cap J(T\Gamma)$. Since $\Theta$ is defined over $x_0 \in \Gamma$ by
$\Theta = cd^c_J \rho$, then
$d\Theta = dc \wedge d^c_J \rho + cdd^c_J\rho$.
Since $d^c_J \rho(pr_1(V)) = d^c_J \rho(Jpr_1(V)) = d^c_J \rho(pr_1(W)) =
d^c_J \rho(Jpr_1(W)) =0$ it follows that
$dd^c_J \rho (pr_1(V),pr_1(\tilde{J}W)) =0$. However, by the definition of
$\tilde{J}$, we know that $pr_1(\tilde{J}W) = J pr_1(W)$.
Hence, choosing $W = V$, we obtain that $dd^c_J \rho (pr_1(V),J pr_1(V)) = 0$.
Since $\Gamma$ is strictly $J$-pseudoconvex, it follows that $pr_1(V) = 0$.
In particular, $V$ is given in local coordinates by $V=(0,pr_2(V))$.
It follows now from the form of $\tilde J$ that $\tilde JV=(0,J pr_2(V))$
(we consider $pr_2(V)$ as a vector in $\mathbb R^{2n}$ and $J$ defined
on $\mathbb R^{2n}$). Since $N_2(\Gamma)$ is a real bundle of rank one,
$pr_2(V)$ is equal to zero. \qed
\section{Riemann mapping theorem for almost complex manifolds with boundary}
The aim of this Section is to prove the existence of stationary discs
in the ball for small
almost complex deformations of the standard structure.
Stationary discs are natural global biholomorphic invariants of complex
manifolds with boundary. L.~Lempert proved in \cite{le81} that for a
strictly convex domain stationary discs coincide with extremal discs
for the Kobayashi metric and studied their basic properties.
Using these discs he introduced a multidimensional analogue of the
Riemann map. We define here a local
analogue of the
Riemann map and establish its main properties. These results were
obtained in \cite{CoGaSu}.
\subsection{Existence of discs attached to a real submanifold
of an almost complex manifold}
\subsubsection{ Partial indices and the Riemann-Hilbert problem}
In this section we introduce basic tools of the linear Riemann-Hilbert
problem.
Let $V \subset \mathbb C^N$ be an open set. We denote
by $\mathcal C^k(V)$ the Banach space of (real or complex valued) functions
of class $\mathcal C^k$ on $V$ with the standard norm
$$
\parallel r \parallel_k =
\sum_{\vert \nu \vert \leq k}
\sup \{ \vert D^{\nu} r(w) \vert : w \in V \}.
$$
For a positive real
number $\alpha <1$ and a Banach space $X$, we denote by
$\mathcal C^{\alpha}(\partial \Delta,X)$ the Banach space of all functions
$f: \partial \Delta \rightarrow X$ such that
$$
\parallel f \parallel_{\alpha} :=
\sup_{\zeta \in \partial \Delta} \parallel f(\zeta) \parallel +
\sup_{\theta,\eta \in \partial \Delta, \theta \neq \eta}
\frac{\parallel f(\theta) - f(\eta)\parallel}{\vert \theta - \eta
\vert^{\alpha}} < \infty.
$$
\vskip 0,1cm \noindent If $\alpha = m +
\beta$ with an integer $m \geq 0$ and $\beta \in ]0,1[$, then we consider
the Banach space
$$
\mathcal C^{\alpha}(V):=\{r \in \mathcal C^m(V,\mathbb R): D^{\nu} r \in \mathcal C^{\beta}(V)
\ {\rm for\ every}\ \nu,\ \vert \nu \vert \leq m\}
$$
and we set $\parallel r
\parallel_{\alpha} =
\sum_{\vert \nu \vert \leq m} \parallel D^{\nu}r \parallel_{\beta}$.
Then a map $f$ is in $\mathcal C^{\alpha}(V,\mathbb C^k)$ if and only if its components
belong to $\mathcal C^{\alpha}(V)$ and we say that $f$ is of class $\mathcal C^{\alpha}$.
\vskip 0,1cm
Consider the following situation:
\vskip 0,1cm
\noindent $\bullet$ $B$ is an open ball centered at the origin in $\mathbb C^N$ and
$r^1,\dots,r^N$ are $\mathcal C^\infty$ functions defined in a neighborhood
of $B \times \partial \Delta$ in $\mathbb C^N \times \mathbb C$;
\noindent $\bullet$ $f$ is a map of class $\mathcal C^{\alpha}$
from $\partial \Delta$ to $B$, where $\alpha>1$ is a noninteger real
number;
\noindent $\bullet$ for every $\zeta \in \partial \Delta$:
\begin{itemize}
\item[(i)] $E(\zeta) = \{ z \in B: r^j(z,\zeta) = 0, 1 \leq j \leq N \}$
is a maximal totally real submanifold in $\mathbb C^N$,
\item[(ii)] $f(\zeta) \in E(\zeta)$,
\item[(iii)] $\partial_z r^1(z,\zeta) \wedge \cdots \wedge \partial_z
r^N(z,\zeta) \neq 0$ on $B \times \partial \Delta$.
\end{itemize}
Such a family ${E} = \{ E(\zeta) \}$
of manifolds with a fixed disc $f$ is called a
{\it totally real fibration} over the unit circle.
A disc attached to a fixed totally real manifold ($E$ is independent of
$\zeta$) is a special case of a totally real fibration.
\vskip 0,1cm
Assume that the defining function $r:=(r^1,\dots,r^N)$ of $E$
depends smoothly on a small real parameter $\varepsilon$, namely $r =
r(z,\zeta,\varepsilon)$,
and that the fibration $E_0:=E(\zeta,0)$ corresponding
to $\varepsilon = 0$ coincides with the above fibration
$E$. Then for every sufficiently small $\varepsilon$ and for every
$\zeta \in \partial \Delta$ the manifold $E_\varepsilon:=E(\zeta,\varepsilon)
:= \{ z \in B:
r(z,\zeta,\varepsilon) = 0\}$ is totally real.
We call $E_{\varepsilon}$ a {\it smooth totally real deformation }
of the totally
real fibration $E$. By a holomorphic disc $\tilde f$ attached to
$E_{\varepsilon}$ we mean a holomorphic map $\tilde f:\Delta
\rightarrow B$, continuous on $\bar\Delta$, satisfying
$r(\tilde f(\zeta),\zeta,\varepsilon) = 0$ on $\partial \Delta$.
For every positive real noninteger $\alpha$ we denote by
$(\mathcal A^\alpha)^N$ the space of maps defined on
$\bar{\Delta}$, $J_{st}$-holomorphic on $\Delta$, and belonging to
$(\mathcal C^\alpha(\bar{\Delta}))^N$.
\subsubsection{Almost complex perturbation of discs}
Consider a smooth deformation $(J_\lambda)$ of $J_{st}$. We recall that for
$\lambda$ small enough the
$J_{\lambda}$-holomorphicity condition for a map $f:\Delta
\rightarrow \mathbb C^N$ may be written in the form
\begin{equation}
\bar\partial_{J_{\lambda}} f = \bar\partial f +
q(\lambda,f)\overline{\partial f} = 0
\end{equation}
where $q$ is a smooth $(N \times N)$ complex matrix satisfying $q(0,\cdot) \equiv 0$.
Let $E_\varepsilon=\{r_j(z,\zeta,\varepsilon) = 0, 1 \leq j \leq
N\}$ be a smooth totally real deformation of a totally real fibration $E$.
A disc $f \in (\mathcal C^\alpha(\bar{\Delta}))^N$ is attached
to $E_\varepsilon$ and is $J_\lambda$-holomorphic if and only if it
satisfies the following nonlinear boundary Riemann-Hilbert type problem~:
$$
\left\{
\begin{array}{lll}
r(f(\zeta),\zeta,\varepsilon) &=& 0, \ \ \ \zeta \in \partial \Delta\\
& & \\
\bar{\partial}_{J_\lambda}f(\zeta) &=& 0, \ \ \ \zeta \in \Delta.
\end{array}
\right.
$$
Let $f^0 \in (\mathcal A^\alpha)^N$ be a disc attached to
$E$ and let $\mathcal U$ be a neighborhood of $(f^0,0,0)$ in the
space $(\mathcal C^\alpha(\bar{\Delta}))^N \times \mathbb R \times \mathbb R$.
Given $(f,\varepsilon,\lambda)$ in $\mathcal U$ define the maps
$v_{f,\varepsilon,\lambda}: \zeta \in \partial
\Delta \mapsto r(f(\zeta), \zeta, \varepsilon)$
and
$$
\begin{array}{llcll}
u &:& \mathcal U & \rightarrow & (\mathcal C^\alpha(\partial \Delta))^N \times
(\mathcal C^{\alpha-1}(\Delta))^N\\
& & (f,\varepsilon,\lambda) & \mapsto & (v_{f,\varepsilon,\lambda},
\bar{\partial}_{J_\lambda}f).
\end{array}
$$
Denote by $X$ the Banach space $(\mathcal C^\alpha(\bar \Delta))^N$.
Since $r$ is of class $\mathcal C^{\infty}$,
the map
$u$ is smooth and the tangent map $D_Xu(f^0,0,0)$ (we consider
the derivative
with respect to the space $X$) is a linear map from $X$ to
$(\mathcal C^\alpha(\partial \Delta))^N \times (\mathcal C^{\alpha-1}(\Delta))^N$,
defined for every $h \in X$ by
$$\begin{array}{llllll}
D_Xu(f^0,0,0)(h) = \left(
\begin{matrix}
2 Re [G h] \\
\bar\partial_{J_0} h
\end{matrix}
\right),
\end{array}$$
where for $\zeta \in \partial \Delta$
$$\begin{array}{lllllll}
G(\zeta) = \left(
\begin{matrix}
\frac{\partial r_1}{\partial z^1}(f^0(\zeta),\zeta,0) &\cdots&\frac{\partial
r_1}{\partial z^N}(f^0(\zeta),\zeta,0)\\
\cdots&\cdots&\cdots\\
\frac{\partial r_N}{\partial z^1}(f^0(\zeta),\zeta,0)& \cdots&\frac{\partial r_N}
{\partial z^N}(f^0(\zeta),\zeta,0)
\end{matrix}
\right)
\end{array}$$
(see \cite{gl94}).
\begin{proposition}\label{Tthh}
Let $f^0:\bar \Delta \rightarrow \mathbb C^N$ be a $J_{st}$-holomorphic
disc attached to a totally real fibration $E$ in $\mathbb C^N$. Let
$E_\varepsilon$ be a smooth totally real deformation
of $E$ and
$J_\lambda$ be a smooth almost complex deformation of $J_{st}$ in a
neighborhood of $f^0(\bar{\Delta})$.
Assume that for some $\alpha > 1$
the linear map from $(\mathcal A^{\alpha})^N$ to $(\mathcal
C^{\alpha-1}(\Delta))^N$ given by
$h \mapsto 2 Re [G h]$
is surjective and has a $k$-dimensional kernel.
Then there exist $\delta_0,
\varepsilon_0, \lambda_0 >0$ such that for every $0 \leq \varepsilon
\leq \varepsilon_0$ and for every $0 \leq \lambda \leq \lambda_0$,
the set of $J_\lambda$-holomorphic discs $f$ attached to $E_\varepsilon$
and such that $\parallel f -f^0 \parallel_{\alpha} \leq \delta_0$ forms
a smooth $k$-dimensional
submanifold
$\mathcal A_{\varepsilon,\lambda}$ in the Banach space
$(\mathcal C^\alpha(\bar{\Delta}))^N$.
\end{proposition}
\noindent{\bf Proof.} According to the implicit function theorem,
the proof of Proposition~\ref{Tthh} reduces to the proof of the
surjectivity of $D_Xu$.
It follows by classical
one-variable results on the resolution of the
$\bar\partial$-problem in the unit disc that the linear map from
$X$ to $\mathcal C^{\alpha-1}(\Delta)$ given by
$h \mapsto \bar \partial h$
is surjective. More precisely, given $g \in \mathcal C^{\alpha-1}(\Delta)$
consider
the Cauchy transform
$$T_{CG}(g) : \tau \in \Delta \mapsto
\frac{1}{2\pi i}\iint_{\Delta} \frac{g(\zeta)}{\zeta - \tau}\,d\zeta \wedge d\bar{\zeta}.$$
For every function $g \in \mathcal
C^{\alpha-1}(\Delta)$ the solutions $h \in X$ of the equation
$\bar\partial h = g$ have the form $h = h_0 + T_{CG}(g)$
where $h_0$ is an arbitrary function in $({\mathcal A}^{\alpha})^N$.
Consider the equation
\begin{equation}\label{equa1}
D_Xu(f^0,0,0)(h) = \left(
\begin{matrix}
g_1 \\
g_2
\end{matrix}
\right),
\end{equation}
where $(g_1,g_2)$ is a vector-valued function with components
$g_1 \in \mathcal C^{\alpha-1}(\partial \Delta)$ and $g_2 \in \mathcal
C^{\alpha-1}(\Delta)$. Solving the
$\bar\partial$-equation for the second component, we reduce
(\ref{equa1}) to
$$
2 Re [G(\zeta) h_0(\zeta)] = g_1 - 2 Re [G(\zeta) T_{CG}(g_2)(\zeta)]
$$
with respect to $h_0 \in (\mathcal A^{\alpha})^N$.
The surjectivity of the map $ h_0 \mapsto 2 Re [G h_0]$ gives the result. \qed
\subsubsection{Riemann-Hilbert problem on the cotangent bundle of an almost
complex manifold}
Let $(J_{\lambda})_\lambda$ be an almost complex deformation
of the standard structure $J_{st}$ on $\mathbb B_n$, satisfying $J_\lambda(0)=
J_{st}$. Consider the canonical lift $\tilde{J}_\lambda$ on the cotangent
bundle, defined by~(\ref{EEE}). In the $(x,y)$ coordinates we identify this with
the $(4n \times
4n)$-matrix
$$
\tilde{J}_{\lambda} = \left(
\begin{matrix}
J_{\lambda}(x) & 0\\
\sum y_k A^k_{\lambda}(x)& ^tJ_{\lambda}(x)
\end{matrix}
\right),
$$
where the $A_{\lambda}^k(x) = A^k(\lambda,x)$ are smooth $(2n \times
2n)$-matrix functions.
In what follows we always assume that:
\begin{equation}
\label{norm1}
A^k_0(x) \equiv 0, \ \ \ {\rm for \ every}\ k.
\end{equation}
The trivial bundle $\mathbb B \times \mathbb R^{2n}$
over the unit ball is a local coordinate
representation of the cotangent bundle of an almost complex
manifold. We denote by $x = (x^1,\dots,x^{2n})\in \mathbb B_n$
and $y = (y_1,\dots,y_{2n}) \in \mathbb R^{2n}$ the coordinates on the base and
fibers respectively. We identify the base space $(\mathbb R^{2n},x)$
with $(\mathbb C^n,z)$.
Since ${}^tJ_{st}$ is orthogonally equivalent to $J_{st}$ we may identify
$(\mathbb R^{2n},{}^tJ_{st})$ with $(\mathbb C^n,J_{st})$. After this identification
the ${{}^tJ_{st}}$-holomorphicity is expressed by the
$\bar{\partial}$-equation in the usual coordinates in $\mathbb C^n$.
Consider a smooth map
$\hat{f}=(f,g): \Delta \rightarrow \mathbb B \times \mathbb R^{2n}$ which is
{\it $(J_{st},\tilde J_{\lambda})$-holomorphic} :
$$
\tilde J_{\lambda}(f,g) \circ d\hat{f} = d\hat{f} \circ J_{st}
$$
on $\Delta$.
For $\lambda$ small enough this can be rewritten as the
following Beltrami type quasilinear elliptic equation:
$$
(\mathcal E)\left\{
\begin{array}{cll}
\bar \partial f + q_1(\lambda,f)\overline{\partial f} & = & 0\\
& & \\
\bar \partial g + q_2(\lambda,f)\overline{\partial g} + q_3(\lambda,f)g +
q_4(\lambda,f) \overline{g}
& = & 0,
\end{array}
\right.
$$
where the first equation coincides with the $(J_{st},J_{\lambda})$-holomorphicity
condition for $f$, that is $\bar\partial_{J_\lambda} f
= \bar \partial f + q_1(\lambda,f)\overline{\partial f}$.
The coefficient $q_1$ is uniquely determined by $J_{\lambda}$ and, in view
of (\ref{norm1}),
the coefficients $q_k$ satisfy, for $k=2,3,4$:
\begin{equation}
\label{nprm3}
q_k(0,\cdot) \equiv 0, \ q_k(\cdot,0) \equiv 0.
\end{equation}
We point out that in $(\mathcal E)$
the equations for the fiber component $g$ are
obtained as a small perturbation of the standard $\bar\partial$-operator.
An important feature of this system is that the second equation is
{\it linear} with respect to the fiber component $g$.
We consider the operator
$$\bar\partial_{\tilde J_{\lambda}} : \left(
\begin{matrix}
f\\
g
\end{matrix}
\right) \mapsto \left(
\begin{matrix}
\bar \partial f + q_1(\lambda,f)\overline{\partial f}\\
\bar \partial g + q_2(\lambda,f)\overline{\partial g} + q_3(\lambda,f) g+
q_4(\lambda,f)\overline{g}
\end{matrix}
\right).$$
\vskip 0,1cm
Let $r_j(z,t,\lambda)$, $j = 1,\dots,4n$, be $\mathcal C^{\infty}$-smooth real
functions
on $\mathbb B \times \mathbb B \times [0,\lambda_0]$ and let
$r:=(r_1,\dots,r_{4n})$.
Consider the following nonlinear boundary
Riemann-Hilbert type problem for the operator
$\bar\partial_{\tilde J_{\lambda}}$:
$$
({\mathcal
BP}_{\lambda})
\left\{
\begin{array}{cll}
r(f(\zeta),\zeta^{-1}g(\zeta),\lambda) &=& 0 \ \ {\rm on} \ \partial \Delta\\
\\
\bar \partial_{\tilde J_\lambda} (f,g) &=& 0 \ \ {\rm on} \ \Delta
\end{array}
\right.
$$
on the space $\mathcal C^{\alpha}(\bar\Delta,\mathbb B_n\times \mathbb B_n)$.
The boundary problem $({\mathcal
BP}_{\lambda})$ has the following geometric meaning.
Consider the disc $(\hat f,\hat g) := (f, \zeta^{-1}g)$ on
$\Delta\backslash \{0\}$ and the set
$E_{\lambda} := \{ (z,t): r(z,t,\lambda) = 0 \}$. The
boundary condition in $({\mathcal BP}_{\lambda})$ means that
$$
(\hat f, \hat g)(\partial \Delta) \subset E_{\lambda}.
$$
This boundary problem has the following {\it invariance
property}. Let $(f,g)$ be a solution of
$({\mathcal BP}_{\lambda})$ and let $\phi$ be an automorphism of
$\Delta$.
Then $(f \circ \phi,c g \circ \phi)$ also satisfies the
$\bar\partial_{\tilde J_{\lambda}}$ equation for every complex constant $c$.
In particular, if $\theta \in[0,2\pi]$ is fixed, then the disc
$(f(e^{i\theta}\zeta),e^{-i\theta}g(e^{i\theta}\zeta))$ satisfies the
$\bar\partial_{\tilde J_{\lambda}}$-equation on $\Delta \backslash \{0\}$ and
the boundary of the disc
$(f(e^{i\theta}\zeta),e^{-i\theta}\zeta^{-1}g(e^{i\theta}\zeta))$ is
attached to $E_{\lambda}$. This implies the following
\begin{lemma}
\label{circle}
If $(f,g)$ is a solution of $({\mathcal BP}_{\lambda})$,
then
$\zeta \mapsto (f(e^{i\theta}\zeta),e^{-i\theta}g(e^{i\theta}\zeta))$ is also a
solution of $({\mathcal BP}_{\lambda})$.
\end{lemma}
Suppose that this problem has a solution $(f^0,g^0)$ for $\lambda = 0$
(in view of the above assumptions this solution is holomorphic on
$\Delta$ with respect to the standard structure on $\mathbb C^n \times
\mathbb C^n$). Using the implicit function theorem we study,
for sufficiently small $\lambda$, the solutions of
$({\mathcal BP}_{\lambda})$ close to $(f^0,g^0)$.
As above consider the map $u$
defined in a neighborhood of $(f^0,g^0,0)$ in
$(\mathcal C^\alpha(\bar{\Delta}))^{4n} \times \mathbb R$ by:
$$u: (f,g,\lambda) \mapsto \left(
\begin{matrix}
\zeta \in \partial \Delta \mapsto r(f(\zeta),\zeta^{-1}g(\zeta),\lambda)\\
\bar \partial f + q_1(\lambda,f)\overline{\partial f}\\
\bar \partial g + q_2(\lambda,f) \overline{\partial g} + q_3(\lambda,f) g +
q_4(\lambda,f) \overline{g}
\end{matrix}
\right).
$$
\vskip 0,1cm
If $X:=(\mathcal C^ \alpha(\bar{\Delta}))^{4n}$
then its tangent map at $(f^0,g^0,0)$ has the form
$$\begin{array}{llllll}
D_Xu(f^0,g^0,0): h=(h_1,h_2) \mapsto \left(
\begin{matrix}
\zeta \in \partial \Delta \mapsto
2 Re [G(f^0(\zeta),\zeta^{-1}g^0(\zeta),0)h] \\
\bar \partial h_1 \\
\bar \partial h_2
\end{matrix}
\right)
\end{array}$$
where for $\zeta \in \partial \Delta$ one has
$$\begin{array}{lllllll}
G(\zeta) = \left(
\begin{matrix}
\frac{\partial r_1}{\partial w_1}(f^0(\zeta),\zeta^{-1}g^0(\zeta),0)
&\cdots&\frac{\partial r_1}{\partial w_N}(f^0(\zeta),\zeta^{-1}g^0(\zeta),0)\\
\cdots&\cdots&\cdots\\
\frac{\partial r_N}{\partial w_1}(f^0(\zeta),\zeta^{-1}g^0(\zeta),0)&
\cdots&\frac{\partial r_N}
{\partial w_N}(f^0(\zeta),\zeta^{-1}g^0(\zeta),0)
\end{matrix}
\right)
\end{array}$$
with $N = 4n$ and $w = (z,t)$.
If the tangent map $D_Xu(f^0,g^0,0): (\mathcal A^{\alpha})^N
\longrightarrow (\mathcal
C^{\alpha-1}(\Delta))^N$ is surjective and has a
finite-dimensional kernel, we may apply the implicit function theorem
as in Section~2.2 (see Proposition~\ref{Tthh})
and conclude the existence of a
finite-dimensional variety of nearby discs.
In particular, consider the fibration $E$ over the disc $(f^0,g^0)$
with fibers
$E(\zeta) = \{ (z,t) : r^j(z,\zeta^{-1}t,0) = 0 \}$. Suppose that this
fibration is totally real. Then we have:
\begin{proposition}\label{tthhh}
Suppose that the fibration $E$ is totally real.
If the tangent map $D_Xu(f^0,g^0,0): (\mathcal A^{\alpha})^{4n}
\longrightarrow (\mathcal
C^{\alpha-1}(\Delta))^{4n}$ is surjective
and has a finite-dimensional kernel,
then for every sufficiently small $\lambda$
the solutions of the boundary problem $({\mathcal BP}_{\lambda})$ form
a smooth finite-dimensional submanifold in the space $(\mathcal C^{\alpha}(\bar\Delta))^{4n}$.
\end{proposition}
In the next Section we present a sufficient condition for the
surjectivity of the map $D_Xu(f^0,g^0,0)$. It is due to
J.~Globevnik~\cite{gl94,gl96} in the integrable case and
relies on the partial indices of the totally real fibration along $(f^0,g^0)$.
\subsection{Generation of stationary discs}
Let $D$ be a smoothly bounded domain in $\mathbb C^n$ with the
boundary $\Gamma$. According to \cite{le81}
a continuous map $f:\bar\Delta
\rightarrow \bar D$, holomorphic on $\Delta$,
is called a {\it stationary} disc for $D$ (or
for $\Gamma$) if there
exists a holomorphic map
$\hat{f}: \Delta \backslash \{ 0 \} \rightarrow
T^*_{(1,0)}(\mathbb C^n)$, $\hat{f} \neq 0$, continuous on $\bar
\Delta \backslash \{ 0 \}$ and such that
\begin{itemize}
\item[(i)] $\pi \circ \hat{f} = f$
\item[(ii)] $\zeta \mapsto \zeta \hat{f}(\zeta)$ is in ${\mathcal O}(\Delta)$
\item[(iii)] $\hat{f}(\zeta) \in \Sigma_{f(\zeta)}(\Gamma)$ for every $\zeta$
in $\partial \Delta$.
\end{itemize}
We call $\hat{f}$ a {\it lift} of $f$ to the conormal bundle of $\Gamma$
(this is a meromorphic map from $\Delta$ into
$T^{*}_{(1,0)}(\mathbb C^n)$ whose values on the unit circle lie on
$\Sigma(\Gamma)$).
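The model example is the unit sphere $\Gamma = \partial\mathbb B_n$ in $\mathbb C^n$, $\rho(z) = \vert z \vert^2 - 1$: every linear disc $f(\zeta) = \zeta v$ with $\vert v \vert = 1$ is stationary, with lift
$$
\hat{f}(\zeta) = \zeta^{-1}\sum_{j=1}^n \bar{v}^j dz^j.
$$
Indeed $\zeta\hat{f}(\zeta)$ is constant, hence in ${\mathcal O}(\Delta)$, and for $\zeta \in \partial \Delta$ one has $\zeta^{-1}\bar{v}^j = \overline{f^j(\zeta)}$, so that $\hat{f}(\zeta) = \partial\rho(f(\zeta))$ lies in $\Sigma_{f(\zeta)}(\Gamma)$.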
We point out that originally Lempert gave this definition in a different form,
using the natural coordinates on the cotangent bundle of $\mathbb C^n$. The
present more geometric version in terms of the conormal
bundle is due to Tumanov~\cite{tu01}. This form is particularly useful
for our goals
since it can be transferred to the almost complex case.
\vskip 0,1cm
Let $f$ be a stationary disc for $\Gamma$.
It follows from Proposition~\ref{prop-tot-real} that if $\Gamma$ is a Levi
nondegenerate
hypersurface, the conormal bundle $\Sigma(\Gamma)$ is a totally real
fibration along the lift $\hat{f}$. Conditions $(i)$, $(ii)$, $(iii)$ may be viewed as
a nonlinear boundary problem as considered in Section~2.
If the associated tangent map is surjective, Proposition~\ref{tthhh}
gives a description of all stationary
discs $\tilde{f}$ close to $f$, for a small deformation of $\Gamma$.
When dealing with the standard complex structure on $\mathbb C^n$, the
bundle $T^*_{(1,0)}(\mathbb C^n)$ is a holomorphic vector bundle which can be
identified, after projectivization of the fibers, with the
holomorphic bundle of complex hyperplanes that is with $\mathbb C^n \times
\mathbb P^{n-1}$. The conormal bundle $\Sigma(\Gamma)$ of a real
hypersurface $\Gamma$ may be naturally
identified, after this projectivization, with the bundle
of holomorphic tangent spaces $H(\Gamma)$
over $\Gamma$. According to S.~Webster
\cite{we78} this is a totally real submanifold in $\mathbb C^n \times \mathbb
P^{n-1}$. When dealing with the standard structure, we may therefore
work with projectivizations of lifts of stationary discs attached to
the holomorphic tangent bundle $H(\Gamma)$. The technical avantage
is that after such a projectivization lifts of stationary discs become
holomorphic, since the lifts have at most one pole of order 1 at the origin.
This idea was first used
by L.~Lempert and then applied by several authors \cite{ba-le98,
ce95, sp-tr02}.
When we consider almost complex
deformations of the standard structure (and not just deformations of
$\Gamma$)
the situation is more complicated. If the cotangent bundle $T^*(\mathbb R^{2n})$ is
equipped with $\tilde J$, there is no natural
possibility to transfer this structure to the space obtained by the
projectivization of the fibers. Consequently we do not work with
projectivization of the cotangent bundle but we will deal with meromorphic
lifts of stationary discs. Representing such lifts $(\hat{f},\hat{g})$
in the form
$(\hat{f},\hat{g})=(f,\zeta^{-1}g)$,
we will consider $\tilde J_\lambda$-holomorphic discs close to the
$J_{st}$-holomorphic disc $(f^0,g^0)$. The disc $(f,g)$
satisfies a nonlinear boundary problem of
Riemann-Hilbert type $(\mathcal BP_\lambda)$.
When an almost complex structure on the
cotangent bundle is fixed, we may view it as an elliptic prolongation
of an initial almost complex structure on the base and apply the
implicit function theorem as in previous section. This avoids
difficulties coming from the projectivization of almost complex fibre spaces.
\subsubsection{Maslov index and Globevnik's condition}
We denote by $GL(N,\mathbb C)$
the group of invertible $(N \times N)$ complex matrices
and by $GL(N,\mathbb R)$ the group of all such matrices with real entries.
Let $0 < \alpha < 1$ and let
$B:\partial \Delta \rightarrow GL(N,\mathbb C)$ be of class $\mathcal C^\alpha$.
According to \cite{ve} (see also \cite{cl-go81}) $B$ admits the factorization
$B(\tau) = F^{+}(\tau)\Lambda(\tau)F^{-}(\tau), \tau \in \partial \Delta$,
where:
$\bullet$ $\Lambda$ is a diagonal matrix of the form
$\Lambda(\tau) = {\rm diag}(\tau^{k_1},\dots,\tau^{k_N})$,
$\bullet$ $F^{+}: \bar{\Delta} \rightarrow GL(N,\mathbb C)$ is
of class $\mathcal C^\alpha$ on $\bar{\Delta}$ and holomorphic in $\Delta$,
$\bullet$ $F^{-}: [\mathbb C \cup \{ \infty \}] \backslash \Delta
\rightarrow GL(N,\mathbb C)$
is of class $\mathcal C^\alpha$ on $[\mathbb C \cup \{ \infty \}] \backslash \Delta$
and holomorphic on
$[\mathbb C \cup \{ \infty \}] \backslash \bar \Delta$.
\vskip 0,1cm The integers $k_1 \geq \cdots \geq k_N$ are called the partial
indices of $B$.
\vskip 0,1cm
Let $E$ be a totally real fibration over the unit circle. For every
$\zeta \in \partial \Delta$ consider the ``normal''
vectors $\nu_j(\zeta) =
(r^j_{\bar{z}^1}(f(\zeta),\zeta),\dots,
r^j_{\bar{z}^N}(f(\zeta),\zeta))$, $j=1,\dots,N$.
We denote by $K(\zeta) \in GL(N,\mathbb C)$ the matrix with rows
$\nu_1(\zeta),\dots,\nu_N(\zeta)$ and we set
$B(\zeta) := -\overline{K(\zeta)}^{-1}K(\zeta)$,
$\zeta \in \partial \Delta$. The partial indices of the map
$B: \partial \Delta \rightarrow GL(N,\mathbb C)$
are called {\it the partial indices} of the fibration ${E}$
along the disc $f$ and their sum is
called the {\it total index} or the {\it Maslov index of ${ E}$ along $f$}.
The following result is due to
J. Globevnik~\cite{gl96}:
\vskip 0,1cm
\noindent{\bf Theorem} : {\it Suppose that all the partial indices of
the totally real fibration ${E}$ along $f$ are $\geq -1$
and denote by $k$ the Maslov index of $E$ along
$f$. Then the linear map from $(\mathcal A^{\alpha})^N$ to $(\mathcal
C^{\alpha-1}(\Delta))^N$ given by
$h \mapsto 2 Re [G h]$
is surjective and has a $(N+k)$ dimensional kernel.}
\vskip 0,1cm
Proposition~\ref{Tthh} may be restated in terms of partial indices
as follows~:
\begin{proposition}\label{tthh1}
Let $f^0:\bar \Delta \rightarrow \mathbb C^N$ be a $J_{st}$-holomorphic
disc attached to a totally real fibration ${E}$ in $\mathbb C^N$. Suppose
that all the partial indices of ${E}$ along $f^0$ are $\geq
-1$. Denote by $k$ the Maslov index of ${E}$ along $f^0$. Let also
$E_\varepsilon$ be a smooth totally real deformation of ${E}$ and
$J_\lambda$ be a smooth almost complex deformation of $J_{st}$ in a
neighborhood of $f^0(\bar{\Delta})$. Then there exist $\delta_0,
\varepsilon_0, \lambda_0 >0$ such that for every $0 \leq \varepsilon
\leq \varepsilon_0$ and for every $0 \leq \lambda \leq \lambda_0$
the set of $J_\lambda$-holomorphic discs attached to $E_\varepsilon$
and such that $\parallel f -f^0 \parallel_{\alpha} \leq \delta_0$ forms
a smooth $(N+k)$-dimensional submanifold
$\mathcal A_{\varepsilon,\lambda}$ in the Banach space
$(\mathcal C^\alpha(\bar{\Delta}))^N$.
\end{proposition}
Globevnik's result was applied to the study of stationary discs in
some classes of domains in $\mathbb C^n$ by M. Cerne~\cite{ce95} and
A. Spiro and S. Trapani~\cite{sp-tr02}. Since they worked with the
projectivization of the conormal bundle, we explicitly compute, for
the reader's convenience and for completeness of exposition, the partial indices
of {\it meromorphic} lifts of stationary discs for the unit sphere in
$\mathbb C^n$.
\vskip 0,1cm
Consider the unit sphere $\Gamma:=\{z \in \mathbb C^n :
z^1\bar{z}^1 + \cdots +z^{n}\bar{z}^{n} -1 = 0\}$ in $\mathbb C^n$.
The conormal bundle $\Sigma(\Gamma)$ is given in the $(z,t)$ coordinates
by the equations
$$
(S)\left\{
\begin{array}{cll}
z^1\bar{z}^1 + \cdots +z^{n}\bar{z}^{n} - 1 = 0, & &\\
t_1 = c \bar{z}^1,\dots,t_{n} = c \bar{z}^{n}, &c \in \mathbb R.&
\end{array}
\right.
$$
According to \cite{le81}, every stationary disc for $\Gamma$
is extremal for the Kobayashi metric. Therefore,
such a stationary disc $f^0$ centered at the origin is linear by the
Schwarz lemma. So, up to a unitary transformation, we have
$f^0(\zeta) = (\zeta,0,\dots,0)$ with lift
$(\widehat{f^0},\widehat{g^0})(\zeta) =
(\zeta,0,\dots,0,\zeta^{-1},0,\dots,0)=(f^0,\zeta^{-1}g^0)$ to the
conormal bundle.
Representing nearby meromorphic discs in the form
$(z,\zeta^{-1}w)$ and eliminating the parameter $c$ in system
$(S)$ we obtain that holomorphic discs
$(z,w)$ close to $(f^0,g^0)$ satisfy for $\zeta \in \partial \Delta$:
$$
\begin{array}{lll}
r^1(z,w) & = & z^1\bar{z}^1 + \cdots +z^{n}\bar{z}^{n} - 1 = 0,\\
r^2(z,w) & = & i z^1 w_1\zeta^{-1} - i\bar{z}^1\bar{w}_1\zeta = 0,\\
r^3(z,w) & = & \bar{z}^1w_2\zeta^{-1} - \bar{z}^2w_1\zeta^{-1} +
z^1\bar{w}_2 \zeta-
z^2\bar{w}_1\zeta = 0,\\
r^4(z,w) & = & i\bar{z}^1w_2\zeta^{-1} - i\bar{z}^2w_1\zeta^{-1} -
iz^1\bar{w}_2\zeta +
iz^2\bar{w}_1\zeta = 0,\\
r^5(z,w) & = & \bar{z}^1w_3\zeta^{-1} - \bar{z}^3w_1\zeta^{-1} +
z^1\bar{w}_3 \zeta-
z^3\bar{w}_1\zeta = 0,\\
r^6(z,w) & = & i\bar{z}^1w_3\zeta^{-1} - i\bar{z}^3w_1\zeta^{-1} -
iz^1\bar{w}_3\zeta +
iz^3\bar{w}_1\zeta = 0,\\
& &\cdots\\
r^{2n-1}(z,w) & = & \bar{z}^1w_n\zeta^{-1} - \bar{z}^nw_1\zeta^{-1} +
z^1\bar{w}_n \zeta-
z^n\bar{w}_1\zeta = 0,\\
r^{2n}(z,w) & = & i\bar{z}^1w_n\zeta^{-1} - i\bar{z}^nw_1\zeta^{-1} -
iz^1\bar{w}_n\zeta +
iz^n\bar{w}_1\zeta = 0.
\end{array}
$$
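The elimination of the real parameter $c$ in $(S)$ can be sketched as follows. Writing $t = \zeta^{-1}w$ on $\partial \Delta$ and noting that $z^1 \neq 0$ on $\partial \Delta$ for discs close to $f^0$, the conditions $t_j = c\bar{z}^j$, $j=1,\dots,n$, with $c \in \mathbb R$, are equivalent to
$$
{\rm Im}\,(z^1 t_1) = 0, \qquad \bar{z}^1 t_j - \bar{z}^j t_1 = 0, \quad j = 2,\dots,n.
$$
Using $\bar{\zeta} = \zeta^{-1}$ on $\partial \Delta$, one checks that $r^2 = -2\,{\rm Im}\,(z^1 t_1)$, while $r^{2j-1}$ and $r^{2j}$ are, respectively, $2\,{\rm Re}$ and $-2\,{\rm Im}$ of $\bar{z}^1 t_j - \bar{z}^j t_1$.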
Hence the $(2n \times 2n)$-matrix $K(\zeta)$ has the following expression:
$$
\left(
\begin{matrix}
\zeta&0&0& \cdots &0&0&0&0& \cdots & 0\\
-i\zeta&0&0& \cdots &0&-i&0&0&\cdots &0\\
0&-\zeta^{-1}&0& \cdots &0&0&\zeta^2&0&\cdots &0\\
0&-i\zeta^{-1}&0& \cdots &0&0&-i\zeta^2&0&\cdots &0\\
0&0&-\zeta^{-1}& \cdots &0&0&0&\zeta^2&\cdots &0\\
0&0&-i\zeta^{-1}& \cdots &0&0&0&-i\zeta^2&\cdots &0\\
\cdots & \cdots & \cdots & \cdots &
\cdots & \cdots & \cdots & \cdots & \cdots &\cdots\\
0&0&0& \cdots &-\zeta^{-1}&0&0&0& \cdots &\zeta^2\\
0&0&0& \cdots &-i\zeta^{-1}&0&0&0& \cdots &-i\zeta^2
\end{matrix}
\right)
$$
and a direct computation shows that $-B = \bar{K}^{-1} K$ has
the form
$$
\left( \begin{matrix}C_1&C_2\\
C_3&C_4
\end{matrix}
\right),
$$
where the $(n \times n)$ matrices $C_1,\dots,C_4$ are given by
$$
C_1 = \left( \begin{matrix}\zeta^2&0&.&0\\
0&0&.&0\\
0&0&.&0\\
.&.&.&.\\
0&0&.&0
\end{matrix}
\right), \ \ C_2 = \left( \begin{matrix}0&0&.&0\\
0&-\zeta&.&0\\
0&0&-\zeta&0\\
.&.&.&.\\
0&0&0&-\zeta
\end{matrix}
\right),
$$
$$
C_3 = \left( \begin{matrix}-2\zeta&0&.&0\\
0&-\zeta&.&0\\
0&0&.&0\\
.&.&.&.\\
0&0&.&-\zeta
\end{matrix}
\right), \ \ C_4 = \left( \begin{matrix}-1&0&.&0\\
0&0&.&0\\
0&0&.&0\\
.&.&.&.\\
0&0&.&0
\end{matrix}
\right).
$$
\vskip 0,1cm
We point out that the matrix
$$\left( \begin{matrix}\zeta^2&0\\
\zeta&1
\end{matrix}
\right)$$
admits the following factorization:
$$\left( \begin{matrix}1&\zeta\\
0&1
\end{matrix} \right ) \times \left( \begin{matrix}-\zeta&0\\
0&\zeta
\end{matrix}
\right) \times
\left( \begin{matrix}0&1\\
1&\zeta^{-1}
\end{matrix} \right).$$
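Indeed, a direct multiplication confirms this factorization:
$$\left( \begin{matrix}1&\zeta\\
0&1
\end{matrix} \right) \left( \begin{matrix}-\zeta&0\\
0&\zeta
\end{matrix}
\right)
\left( \begin{matrix}0&1\\
1&\zeta^{-1}
\end{matrix} \right)
=
\left( \begin{matrix}-\zeta&\zeta^2\\
0&\zeta
\end{matrix} \right)
\left( \begin{matrix}0&1\\
1&\zeta^{-1}
\end{matrix} \right)
=
\left( \begin{matrix}\zeta^2&0\\
\zeta&1
\end{matrix} \right).$$
The first factor is holomorphic and invertible on $\bar{\Delta}$, the third factor is holomorphic and invertible on $[\mathbb C \cup \{\infty\}] \backslash \Delta$, and the middle factor is ${\rm diag}(-\zeta,\zeta)$; absorbing the constant $-1$ into the first factor gives $\Lambda(\zeta) = {\rm diag}(\zeta,\zeta)$, so both partial indices of this $(2 \times 2)$ block are equal to $1$.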
Permuting the rows (that is, multiplying $B$ by nondegenerate matrices
with constant coefficients) and using the above factorization of $(2 \times
2)$ matrices, we obtain the following
\begin{proposition}\label{PPRRR}
All the partial indices of the conormal bundle of the unit sphere along a
meromorphic lift of a stationary disc are equal to one
and the Maslov index is equal to $2n$.
\end{proposition}
Proposition~\ref{PPRRR} enables us to apply Proposition~\ref{Tthh}
to construct the family of stationary discs attached to the unit
sphere after a small deformation of the complex structure. Indeed,
denote by $r_j(z,w,\zeta,\lambda)$ the $\mathcal C^{\infty}$-smooth functions
coinciding for
$\lambda = 0$ with the above functions $r_1,\dots,r_{2n}$.
\vskip 0,1cm
Until the end of this Subsection we
make the following two assumptions:
\begin{itemize}
\item[(i)] $r_1(z,w,\zeta,\lambda) = z^1\bar{z}^1 +
\cdots+z^{n}\bar{z}^{n} - 1$, meaning that the sphere is not deformed;
\item[(ii)] $r_j(z,tw,\zeta,\lambda)
= tr^j(z,w,\zeta,\lambda)$ for every $j \geq 2, \ t \in \mathbb R$.
\end{itemize}
Geometrically this means that given $\lambda$, the set $\{ (z,w):
r_j(z,w,\lambda) = 0 \}$ is a real vector bundle with one-dimensional
fibers over the unit sphere.
Consider an almost
complex deformation $J_{\lambda}$ of the standard structure on $\mathbb B_n$
and its canonical lift $\tilde J_\lambda$ to the cotangent bundle $\mathbb B_n \times
\mathbb R^{2n}$.
Consider now the corresponding boundary problem:
$$
({\mathcal BP}_{\lambda})
\left\{
\begin{array}{lll}
& &r(f,g,\zeta,\lambda) = 0, \zeta \in \partial \Delta,\\
& &\bar \partial f + q_1(\lambda,f)\overline{\partial f} = 0,\\
& &\bar \partial g + q_2(\lambda,f)\overline{\partial g} +
q_3(\lambda,f) g + q_4(\lambda,f)\overline{g}= 0.
\end{array}
\right.
$$
Combining Proposition~\ref{PPRRR} with the previous results
we obtain the following
\begin{proposition}\label{THEO}
For every sufficiently small positive $\lambda$, the set of solutions of
$({\mathcal BP}_{\lambda})$, close enough to the disc
$(\widehat{f^0},\widehat{g^0})$, forms a smooth $4n$-dimensional
submanifold $V_{\lambda}$ in the space
$\mathcal C^{\alpha}(\bar \Delta)$ (for every noninteger $\alpha > 1$).
\end{proposition}
Moreover, in view of the assumption (ii) and of
the linearity of $({\mathcal E})$ with respect to the fiber component $g$,
we also have the following
\begin{corollary}\label{CORO}
The projections of discs from
$V_{\lambda}$
to the base $(\mathbb R^{2n},J_{\lambda})$ form a $(4n-1)$-dimensional subvariety in
$\mathcal C^{\alpha}(\bar\Delta)$.
\end{corollary}
Geometrically the solutions $(f,g)$ of the boundary problem
$({\mathcal BP}_{\lambda})$ are
such that the discs $(f,\zeta^{-1}g)$ are attached to the conormal
bundle of the unit sphere with respect to the standard structure. In
particular, if $\lambda = 0$ then every such disc satisfying $f(0) =
0$ is linear.
\subsection{Canonical Foliation and the ``Riemann map'' associated with an almost
complex structure}
In this Section we study the geometry of stationary discs in the unit ball
after
a small almost complex perturbation of the standard structure.
The idea is simple since these discs are small deformations of the
complex lines passing through the origin in the unit ball.
\subsubsection{Foliation associated with an elliptic prolongation}
Fix a vector $v^0$ with $\vert\vert v^0 \vert\vert = 1$ and consider the
corresponding stationary disc $f^0:\zeta \mapsto \zeta v^0$. Denote by
$(\widehat{f^0},\widehat{g^0})$ its lift to the conormal bundle of
the unit sphere.
Consider a smooth deformation $J_{\lambda}$ of the standard structure
on the unit ball ${\mathbb B_n}$ in $\mathbb C^n$.
For sufficiently small $\lambda_0$ consider the lift $\tilde J_{\lambda}$ on
$\mathbb B_n \times \mathbb R^{2n}$,
where $\lambda \leq
\lambda_0$. Then the solutions of the associated boundary
problem $({\mathcal BP}_{\lambda})$ form a $4n$-parameter family
of $\tilde J_\lambda$-holomorphic maps from $\Delta$ to
$\mathbb C^{n} \times \mathbb C^{n}$. Given such a solution $(f^\lambda,g^\lambda)$,
consider the disc
$(\widehat{f^\lambda},\widehat{g^\lambda}) :=
(f^\lambda, \zeta^{-1}g^\lambda)$.
In the case where $\lambda = 0$
this is just the lift of a stationary disc for the unit
sphere to its conormal bundle. The set of solutions of the problem
$(\mathcal BP_\lambda)$,
close to $(\widehat{f^0},\widehat{g^0})$, forms a
smooth submanifold of real dimension $4n$
in $(\mathcal C^\alpha(\bar{\Delta}))^{4n}$ according
to Proposition~\ref{THEO}. Hence there is a neighborhood $V_0$ of $v^0$ in
$\mathbb R^{2n}$ and a smooth real hypersurface $I_{v^0}^\lambda$ in $V_0$
such that for every $\lambda \leq \lambda_0$ and for every
$v \in I_{v^0}^\lambda$ there is one and only one solution
$(f^\lambda_v,g^\lambda_v)$ of $(\mathcal BP_\lambda)$, up to multiplication
of the fiber component $g^\lambda_v$ by a real constant,
such that
$f^\lambda_v(0) = 0$ and $df^\lambda_v(0)(\partial / \partial Re(\zeta)) = v$.
We may therefore consider the map
$$
F_0^{\lambda}: (v,\zeta) \in I_{v^0}^\lambda \times \bar{\Delta} \mapsto
(f^\lambda_v,g^\lambda_v)(\zeta).
$$
This map is smooth with respect to $\lambda$,
for $\lambda$ close to the origin in $\mathbb R$.
Denote by $\pi$ the canonical
projection $\pi: \mathbb B_n \times \mathbb R^{2n} \rightarrow \mathbb B_n$
and consider the composition $\widehat{F_0^{\lambda}} = \pi \circ
F_0^{\lambda}$.
This is a smooth map defined for $0 \leq \lambda <
\lambda_0$
and such that
\begin{itemize}
\item[(i)] $\widehat{F_0^{0}}(v,\zeta)
= v\zeta$, for every $\zeta \in \bar\Delta$ and for every $v \in I_{v^0}^0$.
\item[(ii)] For every $\lambda \leq \lambda_0$,
$\widehat{F_0^\lambda}(v,0) = 0$.
\item[(iii)] For every fixed $\lambda \leq \lambda_0$ and every
$v \in I_{v^0}^\lambda$ the map
$\widehat{F_0^{\lambda}}(v,\cdot)$ is
a $J_{\lambda}$-holomorphic disc attached to the unit sphere.
\item[(iv)] For every fixed $\lambda$, different values of
$v \in I_{v^0}^\lambda$ define different discs.
\end{itemize}
\begin{definition} We call the family
$(\widehat{F_0^\lambda}(v,\cdot))_{v \in I_{v^0}^\lambda}$ {\it canonical }
discs associated with the boundary problem $({\mathcal
BP}_{\lambda})$.
\end{definition}
We stress that by
a canonical disc {\it we always mean a disc centered at the
origin}. The preceding condition $(iv)$ may be restated as follows:
\begin{lemma}\label{LEMMA1}
For $\lambda < \lambda_0$ every canonical disc is uniquely
determined by its tangent vector at the origin.
\end{lemma}
In the next Subsection we glue the sets $I_{v}^\lambda$, depending on vectors
$v \in \mathbb S^{2n-1}$, to define the global indicatrix of $F^\lambda$.
\subsubsection{Indicatrix} For $\lambda < \lambda_0$ consider canonical discs
in $(\mathbb B_n,J_{\lambda})$ centered at the origin and admitting lifts
close to $(\widehat{f^0},\widehat{g^0})$.
As above we denote by $I_{v^0}^{\lambda}$
the set of tangent vectors at the origin of
canonical discs whose lift is close to $(\widehat{f^0},\widehat{g^0})$.
Since these vectors depend smoothly on
parameters $v$ close to $v^0$ and $\lambda \leq \lambda_0$,
$I_{v^0}^{\lambda}$ is a smooth deformation of a piece of
the unit sphere $\mathbb S^{2n-1}$. So this is a smooth real
hypersurface in $\mathbb C^n$ in a neighborhood of $v^0$.
Repeating this construction for every vector $v \in \mathbb S^{2n-1}$
we may find a finite covering of $\mathbb S^{2n-1}$ by open
connected sets $U_j$ such that for every $j$ the nearby stationary
discs with tangent vectors at the origin close to $v$ are given by
$\widehat{F^{\lambda}_j}$. Since every nearby stationary disc is uniquely
determined by its tangent vector at the origin, we may glue the maps
$\widehat{F^{\lambda}_j}$ to the map $\widehat{F^{\lambda}}$
defined for every $v
\in \mathbb S^{2n-1}$ and every $\zeta \in \bar{\Delta}$. The tangent vectors
of the constructed family of stationary discs form a smooth real
hypersurface $I^{\lambda}$ which is a small deformation of the unit
sphere. This hypersurface is an analog of the indicatrix for the
Kobayashi metric (more precisely, its boundary).
We point out that the local indicatrix $I_{v^0}^\lambda$
for some fixed $v^0 \in \mathbb S^{2n-1}$ is also useful.
\subsubsection{Circled property and Riemann map}
If $\lambda$ is small enough, the hypersurface
$I^{\lambda}$ is strictly pseudoconvex with respect to the standard
structure. Another important property of the ``indicatrix'' is its
invariance with respect to the linear action of the unit circle.
Let $\lambda \leq \lambda_0$, $v \in I^\lambda$ and
$f_v^\lambda:=\widehat{F^\lambda}(v,\cdot)$. For $\theta \in \mathbb R$
we denote by
$f_{v,\theta}^\lambda$ the $J_\lambda$-holomorphic disc in $\mathbb B_n$
defined by $f^\lambda_{v,\theta} : \zeta \in \Delta \mapsto
f_v^\lambda(e^{i\theta}\zeta)$. We have~:
\begin{lemma}\label{LEMMA2}
For every $0 \leq \lambda < \lambda_0$, every $v \in I^\lambda$
and every $\theta \in \mathbb R$ we have~: $f_{v,\theta}^\lambda \equiv
f_{e^{i\theta}v}^\lambda$.
\end{lemma}
\proof Since $f_v^\lambda$ is a canonical disc, the disc
$f_{v,\theta}^\lambda$ has a lift close to the lift
of the disc $\zeta \mapsto e^{i\theta}v\zeta$. Then according to Lemma
\ref{circle}
$f_{v,\theta}^\lambda$ is a canonical disc close to the
linear disc $\zeta \mapsto e^{i\theta}v\zeta$. Since the first jet
of $f_{v,\theta}^\lambda$ coincides with the first jet of
$f_{e^{i\theta}v}^\lambda$, these two nearby stationary discs coincide
according to Lemma~\ref{LEMMA1}. \qed
This statement implies that for any $w \in I^{\lambda}$ the vector
$e^{i\theta}w$ is in $I^{\lambda}$ as well.
It follows from the
above arguments that there exists a natural parametrization of the set
of canonical discs by their tangent vectors at the origin,
that is by the points of $I^{\lambda}$. The map
$$
\begin{array}{llcll}
\widehat{F^\lambda} &:& I^\lambda \times \Delta
& \rightarrow & \mathbb B_n\\
& & (v,\zeta) & \mapsto & f_v^\lambda(\zeta)
\end{array}
$$
is smooth on $I^\lambda \times \Delta$. Moreover, if we fix a
small positive constant $\varepsilon_0$, then by shrinking $\lambda_0$
if necessary there is, for every $\lambda < \lambda_0$, a smooth
function $\widehat{G^\lambda}$ defined on $I^\lambda \times \Delta$,
satisfying $\|\widehat{G^\lambda}(v,\zeta)\| \leq \varepsilon_0 |\zeta|^2$ on
$I^\lambda \times \Delta$, such that for every $\lambda <
\lambda_0$ we have on $I^\lambda \times \Delta$~:
\begin{equation}\label{equation}
\widehat{F^ \lambda}(v,\zeta) = \zeta v + \widehat{G^\lambda}(v,\zeta).
\end{equation}
\vskip 0,1cm
Consider now the restriction of $\widehat{F^\lambda}$ to $I^\lambda
\times [0,1]$. This is a smooth map, still denoted by
$\widehat{F^\lambda}$. We
have the following~:
\begin{proposition}\label{PROPRO}
There exists $\lambda_1 \leq \lambda_0$ such that
for every $\lambda < \lambda_1$ the family
$(\widehat{F^\lambda}(v,r))_{(v,r) \in I^\lambda \times ]0,1[}$
is a real foliation of $\mathbb B_n \backslash \{0\}$.
\end{proposition}
\proof
\noindent{\it Step 1}.
For $r \neq 0$ we write $w:=rv$. Then $r=\|w\|$, $v=w/\|w\|$
and we denote by $\widetilde{F^\lambda}$ the function
$\widetilde{F^\lambda}(w):=\widehat{F^\lambda}(v,r)$.
For $\lambda < \lambda_0$, $\widetilde{F^\lambda}$ is a smooth map
of the variable $w$ on
$\mathbb B_n \backslash\{0\}$, satisfying~:
$$
\widetilde{F^\lambda}(w) = w + \widetilde{G^\lambda}(w)
$$
where $\widetilde{G^\lambda}$ is a smooth map on $\mathbb B_n \backslash \{0\}$
with $\|\widetilde{G^\lambda}(w)\| \leq \varepsilon_0\|w\|^2$ on $\mathbb B_n
\backslash\{0\}$. This implies that $\widetilde{F^\lambda}$ is a local
diffeomorphism at each point in $\mathbb B_n \backslash\{0\}$, and so that
$\widehat{F^\lambda}$ is a local diffeomorphism at each point in
$I^\lambda \times ]0,1[$. Moreover, the
condition $\|\widetilde{G^\lambda}(w)\| \leq \varepsilon_0\|w\|^2$ on
$\mathbb B_n \backslash\{0\}$ implies that $\widetilde{G^\lambda}$ is
differentiable at the origin with $d\widetilde{G^\lambda}(0) = 0$. Hence
by the implicit function theorem there exists $\lambda_1 < \lambda_0$
such that the map $\widetilde{F^\lambda}$ is a local diffeomorphism at the
origin for $\lambda < \lambda_1$. So there exists $0<r_1<1$ and a
neighborhood $U$ of the origin in $\mathbb C^n$ such that
$\widehat{F^\lambda}$ is a diffeomorphism from $I^\lambda \times
]0,r_1[$ to $U \backslash \{0\}$, for $\lambda < \lambda_1$.
\vskip 0,1cm
\noindent{\it Step 2}. We show that $\widehat{F^\lambda}$
is injective on $I^\lambda \times ]0,1]$ for sufficiently small $\lambda$.
Assume by contradiction that for every $n$ there exist
$\lambda_n \in \mathbb R$, $r_n, r'_n \in ]0,1]$, $v^n, w^n \in I^{\lambda_n}$
such that:
$\bullet$ $\lim_{n \rightarrow \infty}\lambda_n =0$,
$\lim_{n \rightarrow \infty}r_n=r$,
$\lim_{n \rightarrow \infty}r'_n=r'$,
$\bullet$ $\lim_{n \rightarrow \infty}v^n=v \in \mathbb S^{2n-1}$,
$\lim_{n \rightarrow \infty}w^n=w \in \mathbb S^{2n-1}$,
and satisfying
$$
\widehat{F^{\lambda_n}}(v^n,r_n) = \widehat{F^{\lambda_n}}(w^n,r'_n)
$$
for every $n$. Since $\widehat{F}$ is smooth with respect to $\lambda, v, r$,
it follows that $\widehat{F^0}(v,r) = \widehat{F^0}(w,r')$ and so
$v=w$ and $r=r'$. If $r < r_1$ then the contradiction follows from the fact
that $\widehat{F^\lambda}$ is a diffeomorphism from
$I^\lambda \times ]0,r_1[$ to $U \backslash \{0\}$. If $r \geq r_1$
then for every neighborhood $U_\infty$ of $rv$ in
$\mathbb B_n \backslash \{0\}$, $r_nv^n \in U_\infty$ and
$r_n'w^n \in U_\infty$ for sufficiently large $n$. Since we may choose
$U_\infty$ such that $\widehat{F^\lambda}$ is a diffeomorphism from a
neighborhood of $(v,r)$ in $I^\lambda \times ]r_1,1]$ uniformly with respect
to $\lambda \ll 1$, we still obtain a contradiction.
\vskip 0,1cm
\noindent{\it Step 3}. We show that $\widehat{F^\lambda}$ is surjective
from $I^\lambda \times ]0,1[$ to $\mathbb B_n \backslash \{0\}$.
It is sufficient to show that $\widehat{F^\lambda}$ is surjective from
$I^\lambda \times [r_1,1[$ to $\mathbb B_n \backslash U$.
Consider the nonempty set
$E_\lambda=\{w \in \mathbb B_n \backslash U : w=\widehat{F^\lambda}(v,r) \
{\rm for \ some \ }(v,r) \in I^\lambda \times ]r_1,1[\}$. Since the jacobian
of $\widehat{F^\lambda}$ does not vanish for $\lambda=0$ and
$\widehat{F^\lambda}$ is smooth with respect to $\lambda$,
the set $E_\lambda$ is open for
sufficiently small $\lambda$.
Moreover it follows immediately from its definition that $E_\lambda$
is also closed in $\mathbb B_n \backslash U$. Thus
$E_\lambda=\mathbb B_n \backslash U$.
These three steps prove the result. \qed
\vskip 0,1cm
We can construct now the map $\Psi_{J_\lambda}$ for $\lambda <
\lambda_1$. For every $z \in \mathbb B_n \backslash \{0\}$ consider the unique
couple $(v(z),r(z)) \in I^\lambda \times ]0,1[$ such that
$f_{v(z)}^\lambda$ is the unique canonical disc passing through $z$ (its
existence and uniqueness are given by Proposition~\ref{PROPRO}) with
$f_{v(z)}^\lambda(0) = 0$, $df_{v(z)}^\lambda(0)(\partial / \partial
Re(\zeta)) = v(z)$ and $f_{v(z)}^\lambda(r(z)) = z$. The map $\Psi_{J_\lambda}$ is
defined by~:
$$
\begin{array}{llcll}
\Psi_{J_\lambda} &: & \bar{\mathbb B}_n \backslash \{0\} & \rightarrow & \mathbb C^n\\
& & z & \mapsto & r(z) v(z).
\end{array}
$$
\begin{definition}
The map $\Psi_{J_\lambda}$ is called the Riemann map associated with the almost
complex structure
$J_{\lambda}$.
\end{definition}
This map is an analogue of the circular representation of a strictly
convex domain introduced by L. Lempert~\cite{le81}. The term ``Riemann map''
was used by S. Semmes~\cite{se92} for a slightly different map where the
vector $v(z)$ is normalized (and so such a map takes values in the unit ball).
In this paper we work with the indicatrix since this is more convenient for
our applications.
The Riemann map $\Psi_{J_\lambda}$ has the following properties~:
\begin{proposition}\label{PROPRO2}
\noindent $(i)$ For every $(v,\zeta) \in I^\lambda \times
\Delta$ we have $(\Psi_{J_\lambda} \circ f_v^\lambda)(\zeta) =
\zeta v$ and so $\log\|(\Psi_{J_\lambda} \circ f_v^\lambda)(\zeta)\| =
\log|\zeta|$.
\noindent $(ii)$ There exist constants $0 < C' < C$ such that $C'
\|z\| \leq \|\Psi_{J_\lambda}(z)\| \leq C \|z\|$ on $\mathbb B_n$.
\end{proposition}
\vskip 0,1cm
\proof $(i)$ Let $\zeta = e^{i\theta}r \in \Delta(0,r_0)$ with $\theta
\in [0,2\pi[$. Then $f_v^\lambda(\zeta) = f_v^\lambda(e^{i\theta}r) =
f_{e^{i\theta}v}^\lambda(r)$. Hence we have $(\Psi_{J_\lambda} \circ
f_v^\lambda)(\zeta) = \Psi_{J_\lambda}( f_{e^{i\theta}v}^\lambda(r)) =
e^{i\theta}vr = \zeta v$.
$(ii)$ Let $z \in \mathbb B_n \backslash \{0\}$. Then according to
equation~(\ref{equation}) we have the inequality
$\|\Psi_{J_\lambda}(z)\|\ (1-\varepsilon_0\|\Psi_{J_\lambda}(z)\|) \leq
\|z\| \leq \|\Psi_{J_\lambda}(z)\| \ (1+ \varepsilon_0
\|\Psi_{J_\lambda}(z)\|)$. Since $\|\Psi_{J_\lambda}(z)\| \leq 1$ we obtain
the desired inequality with $C'=1/(1+\varepsilon_0)$ and
$C=1/(1-\varepsilon_0)$. \qed
\vskip 0,1cm
From the above analysis we deduce the following basic properties of the Riemann map.
\begin{proposition}\label{PR1}
\begin{itemize}
\item[]
\item[(i)] The indicatrix $I^{\lambda}$ is a compact circled smooth
$J_{\lambda}$-strictly pseudoconvex hypersurface bounding a domain
denoted by $\Omega^{\lambda}$.
\item[(ii)] The Riemann map $\Psi_{J_\lambda}: \bar {\mathbb B}_n \backslash
\{ 0 \} \rightarrow \bar\Omega^{\lambda} \backslash \{ 0 \}$ is a smooth
diffeomorphism.
\item[(iii)] For every canonical disc $f_v^{\lambda}$ we have
$\Psi_{J_{\lambda}}\circ f^{\lambda}_v(\zeta) = v\zeta$.
\end{itemize}
\end{proposition}
\subsubsection{Local Riemann map}
Recall the notion of the local indicatrix
$I_{v^0}^\lambda$ for $v^0 \in \mathbb S^{2n-1}$. We may localize the
notion of the Riemann map, introducing a similar map associated with the
local indicatrix. Denote by $\Omega_{v^0}^\lambda$ the set
$I_{v^0}^\lambda \times
[0,1[$. The arguments used in the proof of Proposition~\ref{PROPRO}
show that $\widehat{F^\lambda}(\Omega_{v^0}^\lambda)$
is foliated by stationary discs centered at the origin.
We may therefore define the Riemann map $\Psi_{J_\lambda,v^0}$ on
$\widehat{F^\lambda}(\Omega_{v^0}^\lambda)$ by:
$$
\Psi_{J_\lambda,v^0}(z)=r(z)v(z)
$$
where $v(z)$ is the tangent vector at the origin of the unique stationary disc
$f_{v(z)}^\lambda$ passing through $z$ and $f_{v(z)}^\lambda(r(z)) = z$.
\begin{remark}\label{REM}
We point out that the Riemann map can be defined for any sufficiently small
deformation of the unit ball and satisfies the same properties. Moreover
one can easily generalize this construction to strictly convex domains in
$\mathbb C^n$ equipped with small almost complex deformations of the standard
structure.
\end{remark}
\subsubsection{Structure properties of the Riemann map}
Assume now that $M \subset \mathbb C^n$ and let $J_\lambda$ be an almost
complex deformation of the standard structure on $M$.
Let $i: T^*_{(1,0)}(M,J)\rightarrow T^*(M)$ be the canonical
identification. Let $D$ be a smoothly bounded
domain in $M$ with the boundary $\Gamma$. The conormal bundle
$\Sigma_J(\Gamma)$ of $\Gamma$ is a real subbundle of
$T^*_{(1,0)}(M,J)|_\Gamma$ whose fiber at $z\in \Gamma$ is defined by
$\Sigma_{z}(\Gamma) = \{ \phi \in T^*_{(1,0)}(M,J): Re \,\phi \vert
H_{(1,0)}^J(\Gamma) = 0 \}$. Since the form $\partial_J \rho$ forms a
basis in $\Sigma_{J}(\Gamma)$, every $\phi \in \Sigma_J(\Gamma)$ has
the form $\phi = c \partial_J \rho$, $c \in \mathbb R$.
\begin{definition}
A continuous map $f:\bar\Delta \rightarrow (\bar
D,J)$, $J$-holomorphic on $\Delta$, is called
a {\it stationary} disc if there
exists a smooth map $\hat{f}= (f,g): \Delta \backslash \{ 0 \}
\rightarrow T^*_{(1,0)}(M,J)$, $\hat{f} \neq 0$ which is continuous on
$\bar \Delta \backslash \{ 0 \}$ and such that
\begin{itemize}
\item[(i)] $\zeta \mapsto \hat{f}(\zeta)$ satisfies the
$\bar\partial_{\tilde J}$-equation on $\Delta \backslash \{0\}$,
\item[(ii)] $(i \circ (f,\zeta^{-1}g))(\partial \Delta) \subset
\Sigma_J(\Gamma)$.
\end{itemize}
\end{definition}
We call $\hat{f}$ a {\it lift} of $f$ to the conormal bundle of
$\Gamma$. Clearly the notion
of a stationary disc is {\it invariant} in the following sense: if
$\phi$ is a $\mathcal C^1$ diffeomorphism between $\bar{D}$ and
$\bar{D}'$ and a $(J,J')$-biholomorphism from
$D$ to $D'$, then for every stationary disc $f$ in $(D,J)$ the
composition $\phi \circ f$ is a stationary disc in $(D',J')$.
Let now $D$ coincide with the unit ball $ {\mathbb B_n}$ equipped with
an almost complex deformation $J_{\lambda}$ of the standard
structure. Then it follows by definition that stationary discs in
$({\mathbb B_n},J_{\lambda})$ may be described as solutions of a
nonlinear boundary problem $({\mathcal BP}_{\lambda})$ associated with
$J_\lambda$. The above techniques give the existence and efficient
parametrization of the variety of
stationary discs in $({\mathbb B_n},J_{\lambda})$ for $\lambda$ small
enough. This allows us to apply the definition of the Riemann map and
gives its existence. We sum up our considerations in the following
Theorem, giving the main structural properties of the Riemann
map.
\begin{theorem}\label{TH1}
Let $J_\lambda$, $J'_{\lambda}$ be almost complex perturbations
of the standard
structure on $\bar{\mathbb B}_n$.
The Riemann map $\Psi_{J_\lambda}$ exists
for sufficiently small $\lambda$ and
satisfies the following properties:
\begin{itemize}
\item[(a)] $\Psi_{J_\lambda} : \bar{\mathbb B}_n \backslash \{ 0 \}
\rightarrow \bar
\Omega^\lambda \backslash \{ 0 \}$ is a smooth diffeomorphism
\item[(b)] The restriction of $\Psi_{J_\lambda}$ on every stationary disc
through the origin is $(J_{st},J_{st})$ holomorphic (and even linear)
\item[(c)] $\Psi_{J_\lambda}$ commutes with biholomorphisms. More
precisely for sufficiently small $\lambda'$ and for every
$\mathcal C^1$ diffeomorphism $\varphi$ of $\bar{\mathbb B}_n$,
$(J_\lambda,J_{\lambda'})$-holomorphic in $\mathbb B_n$
and satisfying $\varphi(0) = 0$, we have
$$
\varphi = ( \Psi_{J_{\lambda'}})^{-1} \circ d\varphi_0 \circ \Psi_{J_\lambda}.
$$
\end{itemize}
\end{theorem}
\vskip 0,2cm
\noindent{\it Proof of Theorem~\ref{TH1}.} Conditions (a) and (b) are
conditions (i) and (ii) of Proposition~\ref{PR1}.
\noindent For condition~(c), let $\varphi: ({\mathbb B_n},J) \rightarrow
({\mathbb B_n},J')$ be a $(J,J')$-biholomorphism of class $\mathcal C^1$ on
$\bar{\mathbb B}_n$ satisfying $\varphi(0) = 0$. We know that a disc
$f_v^J$ is a canonical disc for the almost complex structure $J$ if
and only if $\varphi(f_v^J)$ is a canonical disc for the almost
complex structure $J'$. Since $\varphi(f_v^J) =
f_{d\varphi_0(v)}^{J'}$ by definition, $\Psi_J(f_v^J)(\zeta) = \zeta
\,v$ and $\Psi_{J'}(f_{d\varphi_0(v)}^{J'})(\zeta)=\zeta \,d\varphi_0(v)$ by
Proposition~\ref{PR1} (iii), condition (c) follows from the following
diagram (see Figure 3), which ends the proof of Theorem~\ref{TH1}~:
\bigskip
\begin{center}
\input{diagram3.pstex_t}
\end{center}
\vskip 0,5cm
\centerline{Figure 3}
\qed
\vskip 0,5cm
Riemann maps are useful for the boundary study of biholomorphisms in
almost complex manifolds. We have
\begin{corollary}
If $\lambda, \lambda' \ll 1$ and $\varphi$ is a $\mathcal C^1$ diffeomorphism of
$\bar{\mathbb B}_n$,
$(J_\lambda,J_{\lambda'})$-holomorphic in $\mathbb B_n$
satisfying $\varphi(0) = 0$, then $\varphi$ is of class $\mathcal C^{\infty}$
on $\bar{\mathbb B}_n$.
\end{corollary}
\proof This follows immediately by Theorem~\ref{TH1} condition~(c)
since the Riemann map is smooth up to the boundary. \qed
\subsubsection{Rigidity and local equivalence problem}
Condition $(c)$ of Theorem~\ref{TH1} implies the following partial
generalization of Cartan's theorem for almost complex manifolds:
\begin{corollary}
If $\lambda \ll 1$ and if
$\varphi$ is a $\mathcal C^1$ diffeomorphism of $\bar{\mathbb B}_n$,
$(J_\lambda,J_{\lambda})$-holomorphic in $\mathbb B_n$,
satisfying $\varphi(0) = 0$ and $d\varphi(0) = I$ then $\varphi$ is the
identity.
\end{corollary}
This provides an efficient parametrization of the
isotropy group of the group of biholomorphisms of $({\mathbb B_n},J_\lambda)$.
\vskip 0,1cm
We can solve the local biholomorphic
equivalence problem between almost complex manifolds in terms of the
Riemann map similarly to~\cite{bl-du-ka87,le88} (see the paper
\cite{li50} by P. Libermann for a traditional approach to this problem
based on Cartan's equivalence method for $G$-structures).
Let $I^\lambda$ (resp. $(I')^\lambda$) be the indicatrix of
($\mathbb B_n,J_\lambda$) (resp. ($\mathbb B_n,J'_\lambda$)) bounding
the domain $\Omega^\lambda$ (resp. $(\Omega')^\lambda$) and let
$\Psi_{J_\lambda}$ (resp. $\Psi_{J'_\lambda}$) be the associated Riemann map.
This induces the almost complex structure
$J_\lambda^*:=d\Psi_{J_\lambda} \circ J_\lambda \circ d(\Psi_{J_\lambda})^{-1}$
(resp.
$(J'_\lambda)^*:=
d\Psi_{J'_\lambda} \circ J'_\lambda \circ d(\Psi_{J'_\lambda})^{-1}$)
on $\Omega^{\lambda}$ (resp. $(\Omega')^{\lambda}$). Then we have:
\begin{theorem}\label{TH3}
The following conditions are equivalent:
$(i)$ There exists a $\mathcal C^\infty$ diffeomorphism $\varphi$ of
$\bar{\mathbb B}_n$, $(J_\lambda,J_{\lambda}')$-holomorphic on $\mathbb B_n$
and satisfying $\varphi(0)=0$,
$(ii)$ There exists a $J_{st}$-linear isomorphism
$L$ of $\mathbb C^n$, $(J_\lambda^*,(J'_\lambda)^*)$-holomorphic
on $\Omega^\lambda$ and such that $L(\Omega^\lambda)=(\Omega')^\lambda$.
\end{theorem}
\noindent{\it Proof}. If $\varphi$ satisfies condition $(i)$, then
$L:=d\varphi_0$ satisfies condition $(ii)$, in view of
the commutativity of the following diagram (see Figure 4) given by
Theorem~\ref{TH1}~:
\bigskip
\begin{center}
\input{riemann2.pstex_t}
\end{center}
\vskip 0,5cm
\centerline{Figure 4}
\vskip 0,5cm
\noindent
Conversely if $L$ satisfies condition $(ii)$
then the map $\varphi:=(\Psi_{J'_\lambda})^{-1} \circ
L \circ \Psi_{J_\lambda}$ satisfies condition $(i)$. \qed
\section{Kobayashi metric on almost complex manifolds}
In this section we give a lower estimate
on the Kobayashi-Royden infinitesimal metric on a strictly pseudoconvex
domain in an almost complex manifold. In particular, we prove that
every point in an almost complex manifold has a basis of complete hyperbolic
neighborhoods. These results were obtained in the paper \cite{ga-su}.
\subsection{Localization of the Kobayashi-Royden metric}
\subsubsection{Kobayashi-Royden infinitesimal pseudometric}
Let $(M,J)$ be an almost complex
manifold.
According to \cite{ni-wo}, for every $p \in M$ there is a
neighborhood $\mathcal V$ of $0$ in $T_pM$ such that for every $v \in
\mathcal V$ there exists $f \in \mathcal O_J(\Delta,M)$ satisfying
$f(0) = p,$ $df(0) (\frac{\partial}{\partial Re(\zeta)}) = v$.
This allows us to define
the Kobayashi-Royden infinitesimal pseudometric $K_{(M,J)}$.
\begin{definition}\label{dd}
For $p \in M$ and $v \in T_pM$, $K_{(M,J)}(p,v)$ is the infimum of the
set of positive $\alpha$ such that there exists a $J$-holomorphic disc
$f:\Delta \rightarrow M$ satisfying $f(0) = p$ and $df(0)(\frac{\partial}
{\partial Re(\zeta)}) = v/\alpha$.
\end{definition}
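In the model case of the unit disc equipped with the standard structure,
the Schwarz lemma gives the classical value (recalled here for
orientation)~:
$$
K_{(\Delta,J_{st})}(\zeta,v) = \frac{\vert v \vert}{1 - \vert \zeta
\vert^2}, \qquad \zeta \in \Delta, \ v \in \mathbb C \simeq T_\zeta\Delta.
$$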
The following statement is an obvious consequence of the above definition~:
\begin{proposition}\label{ppp}
Let $f:(M',J') \rightarrow (M,J)$ be a $(J',J)$-holomorphic map.
Then $K_{(M,J)}(f(p'),df(p')(v')) \leq K_{(M',J')}(p',v')$ for every
$p'\in M', \ v' \in T_{p'}M'$.
\end{proposition}
We denote by $d_{(M,J)}^K$ the integrated pseudodistance of the
Kobayashi-Royden infinitesimal pseudometric. According to the almost
complex version of Royden's theorem \cite{kr99,iv-ro04}, the
infinitesimal pseudometric is an upper semicontinuous function on the
tangent bundle $TM$ of $M$ and $d^K_{(M,J)}$ coincides with the
usual Kobayashi pseudodistance on $(M,J)$ defined by means of
$J$-holomorphic discs. Similarly to the case of the integrable
structure we have~:
\begin{definition}\label{dddd}
$(i)$ Let $p \in M$. Then $M$ is locally hyperbolic at $p$ if
there exists a neighborhood $U$ of $p$ and a positive constant $C$ such
that for every $q \in U$, $v \in T_qM$~: $K_{(M,J)}(q,v) \geq C \|v\|$.
$(ii)$ $(M,J)$ is hyperbolic if it is locally hyperbolic
at every point.
$(iii)$ $(M,J)$ is complete hyperbolic if the Kobayashi
ball $B_{(M,J)}^K(p,r):=\{q \in M : d_{(M,J)}^K(p,q) < r\}$ is relatively
compact in $M$ for every $p \in M$, $r \geq 0$.
\end{definition}
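For example, every bounded domain $D$ in $(\mathbb C^n,J_{st})$ is
hyperbolic: if $R$ denotes the diameter of $D$, then for every $q \in D$
the inclusion $D \subset B(q,R)$ and Proposition~\ref{ppp} give~:
$$
K_{(D,J_{st})}(q,v) \geq K_{(B(q,R),J_{st})}(q,v) = \frac{\|v\|}{R}.
$$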
\begin{lemma}\label{Lemlem}
Let $r < 1$ and let $\theta_r$ be a
smooth nondecreasing function on
$\mathbb R^+$ such that $\theta_r(s)= s$ for $s \leq r/3$ and $\theta_r(s) =
1$ for $s \geq 2r/3$. Let $(M,J)$ be an almost complex manifold, and
let $p$ be a point of $M$. Then there exists a neighborhood $U$ of
$p$, positive constants $A = A(r)$, $B=B(r)$ and a diffeomorphism $z:U
\rightarrow \mathbb B$ such that $z(p) = 0$, $dz(p) \circ J(p) \circ
dz^{-1}(0) = J_{st}$ and the function ${\rm log}(\theta_r(\vert z
\vert^2)) + \theta_r(A\vert z \vert) + B\vert z \vert^2$ is
$J$-plurisubharmonic on $U$.
\end{lemma}
\noindent{\sl Proof of Lemma~\ref{Lemlem}}.
Denote by $w$ the standard coordinates in $\mathbb C^n$. It follows
from Lemma~3 that there exist positive constants $A$ and $\lambda_0$
such that the function ${\rm log}(\vert w \vert^2) + A \vert w \vert$
is $J'$-plurisubharmonic on $\mathbb B$ for every almost complex
structure $J'$, defined in a neighborhood of $\bar{\mathbb B}$ in $\mathbb C^n$ and
such that $\|J'-J_{st}\|_{\mathcal C^2(\bar{\mathbb B})} \leq \lambda_0$. This
means that the function $v(w) = \log(\theta_r(|w|^2)) + \theta_r(A|w|)$
is $J'$-plurisubharmonic on $B(0,r')=\{w \in \mathbb C^n : |w| < r'\}$ for
every such almost complex structure $J'$, where $r'=\inf(\sqrt{r/3},
r/(3A))$. Decreasing $\lambda_0$ if necessary, we may assume that the
function $|w|^2$ is strictly $J'$-plurisubharmonic on $\mathbb B$. Then,
since $v$ is smooth on $\mathbb B \backslash B(0,r')$, there exists a
positive constant $B$ such that the function $v + B\vert w \vert^2$ is
$J'$-plurisubharmonic on $\mathbb B$ for
$\|J'-J_{st}\|_{\mathcal C^2(\bar{\mathbb B})} \leq \lambda_0$. According to
Lemma \ref{suplem1} there exists a neighborhood $U$ of $p$ and a
diffeomorphism $z:U \rightarrow \mathbb B$ such that $\vert\vert
z_*(J) - J_{st} \vert\vert_{\mathcal C^2(\bar{\mathbb B})} \leq
\lambda_0$. Then the function $v \circ z = {\rm
log}(\theta_r(\vert z \vert^2)) + \theta_r(A \vert z \vert) +
B\vert z \vert^2$
is $J$-plurisubharmonic on $U$. \qed
\begin{proposition}\label{thm3}
(Localization principle) Let $D$ be a domain in an almost complex
manifold $(M,J)$, let $p \in \bar{D}$, let $U$ be a neighborhood of
$p$ in $M$ (not necessarily contained in $D$) and let $z:U \rightarrow
\mathbb B$ be the diffeomorphism given by Lemma~\ref{Lemlem}.
Let $u$ be a $\mathcal C^2$ function on $\bar{D}$, negative and
$J$-plurisubharmonic on $D$. We assume that $-L \leq u < 0$ on $D \cap
U$ and that $u-c|z|^2$ is $J$-plurisubharmonic on $D \cap U$, where
$c$ and $L$ are positive constants. Then there exist a positive
constant $s$ and a neighborhood $V \subset \subset U$ of $p$,
depending on $c$ and $L$ only, such that for $q \in D \cap V$ and $v
\in T_qM$ we have the following inequality~:
\begin{equation}\label{E2}
K_{(D,J)}(q,v) \geq s K_{(D \cap U,J)}(q,v).
\end{equation}
\end{proposition}
We note that a similar statement was
obtained by F. Berteloot \cite{be95} in the integrable case. The proof
is based on N. Sibony's method \cite{si81}.
\vskip 0,2cm
\noindent{\sl Proof of Proposition~\ref{thm3}}.
Fix a neighborhood $V$ of $p$, relatively compact in $U$.
For every $q \in V$ we consider a diffeomorphism $z_q$ from $U$ to
$\mathbb B$ such that $z_q(q) = 0$ and $(z_q)_*(J)(0) = J_{st}$. We
also may assume that the function $u -c|z_q|^2$ is
$J$-plurisubharmonic on $D \cap U$.
According to Lemma
~\ref{Lemlem}, there exist uniform positive constants $A$ and $B$ such
that the function
$$
{\rm log}(\theta_r(|z_q|^2))+ \theta_r(A|z_q|)+ B|z_q|^2
$$
is $J$-plurisubharmonic on $U$ for every $q \in V$.
Set $\tau=2B/c$ and
define, for every point $q \in V$, the function $\Psi_{q}$ by~:
$$
\left\{
\begin{array}{lll}
\Psi_{q}(z) &=& \theta_r(|z_q(z)|^2)\exp(\theta_r(A|z_q(z)|))
\exp(\tau u(z))\ {\rm if} \ z \in D \cap U,\\
& & \\
\Psi_{q}(z) &=& \exp(1+\tau u(z)) \ {\rm if} \ z \in D \backslash U.
\end{array}
\right.
$$
Then for every $0 < \varepsilon \leq B$, the function ${\rm
log}(\Psi_{q})-\varepsilon|z_q|^2$ is $J$-plurisubharmonic on $D \cap U$
and hence $\Psi_{q}$ is $J$-plurisubharmonic on $D \cap U$. Since
$\Psi_{q}$ coincides with $\exp(\tau u)$ outside $U$, it is globally
$J$-plurisubharmonic on $D$.
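Indeed, the $J$-plurisubharmonicity of ${\rm log}(\Psi_{q}) -
\varepsilon|z_q|^2$ follows from the decomposition (recall that $\tau c =
2B$)~:
$$
{\rm log}(\Psi_{q}) - \varepsilon|z_q|^2 = \left[{\rm
log}(\theta_r(|z_q|^2)) + \theta_r(A|z_q|) + B|z_q|^2\right] +
\tau\left(u - c|z_q|^2\right) + (B-\varepsilon)|z_q|^2
$$
on $D \cap U$, where each of the three terms on the right-hand side is
$J$-plurisubharmonic there.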
Let $f:\Delta \rightarrow D$ be a $J$-holomorphic disc such that
$f(0)=q \in V$ and
$(\partial f/\partial Re(\zeta))(0) = v/\alpha$ where $v \in T_qM$ and $\alpha
>0$. For $\zeta$ sufficiently close to 0 we have
$$
f(\zeta) = q + df(0)(\zeta) +
\mathcal O(|\zeta|^2).
$$
Consider the function
$$
\varphi(\zeta) = \Psi_q(f(\zeta))/|\zeta|^2
$$
which is subharmonic on
$\Delta \backslash \{0\}$. Since
$$
\varphi(\zeta) = |z_q(f(\zeta))|^2/|\zeta|^2 \exp(A|z_q(f(\zeta))|)
\exp(\tau u(f(\zeta)))
$$
for $\zeta$ close to 0 and
$$
(z_q\circ f)(\zeta) = dz_q(q)(df(0)(\partial / \partial
Re(\zeta)))\zeta + \mathcal O(|\zeta|^2)
$$
we obtain that $\limsup_{\zeta \rightarrow 0}\varphi(\zeta)
$ is finite. Moreover
$$
\limsup_{\zeta \rightarrow 0}\varphi(\zeta) \geq \|dz_q(q)(df(0)(\partial
/\partial Re(\zeta)))\|^2\exp(-2B|u(q)|/c).
$$
Applying the maximum principle to a subharmonic extension of $\varphi$
on $\Delta$ we obtain the inequality
$$
\|dz_q(q)(df(0)(\partial / \partial Re(\zeta)))\|^2 \leq \exp(1+2B|u(q)|/c).
$$
Furthermore there exists a positive constant $C$ such that
$$
\|df(0)(\partial / \partial Re(\zeta))\|^2 \leq C
\|dz_q(q)(df(0)(\partial / \partial Re(\zeta)))\|^2.
$$
Hence, by definition of the Kobayashi-Royden infinitesimal pseudometric,
we obtain for every $q \in D \cap V$, $v \in T_qM$~:
\begin{eqnarray}
\label{localhyp}
K_{(D,J)}(q,v) \geq C^{-1/2} \left(\exp\left(-1-2B\frac{|u(q)|}{c}\right)
\right)^{1/2}\|v\|.
\end{eqnarray}
Consider now the Kobayashi ball $B_{(D,J)}(q,\alpha)=\{w \in D :
d_{(D,J)}^K(w,q)<\alpha\}$. It follows from Lemma~2.2 of \cite{ccs99} (whose
proof is identical in the almost complex setting) that restricting $V$
if necessary we can find a positive constant $s<1$,
independent of $q$, such that for every $J$-holomorphic disc
$f:\Delta \rightarrow D$ satisfying $f(0) \in D \cap V$ we have $f(s\Delta)
\subset D \cap U$ (see Figure 5). This gives the inequality (\ref{E2}). \qed
\bigskip
\begin{center}
\input{figure2.pstex_t}
\end{center}
\bigskip
\centerline{Figure 5}
\bigskip
\subsection{Uniform estimates of the Kobayashi-Royden metric}
In the present section we refine the lower estimate of the
Kobayashi-Royden metric.
\begin{proposition}\label{addest}
Let $D$ be a domain in an almost complex
manifold $(M,J)$, let $p \in \bar{D}$, let $U$ be a neighborhood of
$p$ in $M$ (not necessarily contained in $D$) and let $z:U \rightarrow
\mathbb B$ be the diffeomorphism given by Lemma~\ref{Lemlem}.
Let $u$ be a $\mathcal C^2$ function on $\bar{D}$, negative and
$J$-plurisubharmonic on $D$. We assume that $-L \leq u < 0$ on $D \cap
U$ and that $u-c|z|^2$ is $J$-plurisubharmonic on $D \cap U$, where
$c$ and $L$ are positive constants. Then there exists a neighborhood
$U'$ of $p$ and a constant $c'
> 0$, depending on $c$ and $L$ only, such that :
\begin{equation}\label{e1}
K_{(D,J)}(q,v) \geq c'\frac{\|v\|}{|u(q)|^{1/2}},
\end{equation}
for every $q \in D \cap U'$ and every $v \in T_qM$.
\end{proposition}
\noindent{\it Proof of Proposition~\ref{addest}.}
We use the notations of the proof of Proposition~\ref{thm3}.
Consider a positive constant $r$ that will be specified later and
let $\theta$ be a smooth nondecreasing function
on $\mathbb R^+$ such that $\theta(x) =x$ for $x \leq 1/3$ and $\theta(x) =
1$ for $x \geq 2/3$. Restricting $U$ if necessary we may assume that
the function
$
\log(\theta(|z_q/r|^2)) + A|z_q| + B|z_q/r|^2
$
is $J$-plurisubharmonic on $D \cap U$, independently of $q$ and $r$.
Consider now the function
$
\Psi_{q}(z) =
\theta (|z_q|^2/r^2)\exp(A|z_q|)\exp(\tau u)
$
where $\tau=1/|u(q)|$ and $r=(2B|u(q)|/c)^{1/2}$.
Since the function $\tau u - 2B |z_q/r|^2$ is $J$-plurisubharmonic,
we may assume, shrinking $U$ if necessary, that the function
$\tau u - B |z_q/r|^2$ is $J$-plurisubharmonic on $D \cap U$ for every
$q \in V$. Hence the
function $\log(\Psi_q)$ is $J$-plurisubharmonic on $D \cap U$.
It follows from the estimate
(\ref{E2}) that there is a positive constant $s$ such that
$K_{(D,J)}(q,v) \geq s K_{(D\cap U,J)}(q,v)$
for every $q \in V,\ v \in T_qM$. Let $q \in V$, let $v \in T_qM$ and let
$f:\Delta \rightarrow D \cap U$ be a $J$-holomorphic disc such that
$f(0) = q$ and $df(0)(\partial / \partial Re(\zeta)) = v/\alpha$ where $\alpha
>0$.
Consider the function
$
\varphi(\zeta) = \Psi_q(f(\zeta))/|\zeta|^2
$
which is subharmonic on
$\Delta\backslash \{0\}$. As above we obtain that $\limsup_{\zeta
\rightarrow 0}\varphi(\zeta)$
is finite and $
\limsup_{\zeta \rightarrow
0}\varphi(\zeta) \geq \|v\|^2\exp(-1)/(r^2\alpha^2)$.
There exists a positive constant $C'$, independent of $q$, such
that $|z_q| \leq C'$ on $U$. Applying the maximum
principle to a subharmonic extension of $\varphi$ on $\Delta$, we obtain the
inequality
$$
\alpha \geq \sqrt{\frac{c}{2B\exp(1+AC')}}\|v\|/|u(q)|^{1/2}.
$$
This completes the proof. \qed
\subsubsection{Scaling and estimates of the Kobayashi-Royden metric}
In this section we present a precise lower estimate on the Kobayashi-Royden
infinitesimal metric on a strictly pseudoconvex domain in
$(M,J)$.
\begin{theorem}\label{THM}
Let $M$ be a real $2n$-dimensional manifold with an almost complex
structure $J$ and
let $D=\{\rho<0\}$ be a relatively compact domain in $(M,J)$.
We assume that $\rho$ is a $\mathcal C^2$ defining function of $D$,
strictly $J$-plurisubharmonic in a neighborhood of $\bar{D}$. Then
there exists a positive constant $c$ such that~:
\begin{equation}\label{e3}
K_{(D,J)}(p,v) \geq c\left[\frac{|\partial_J\rho(p)(v - iJ(p)v)|^2}
{|\rho(p)|^2} +
\frac{\|v\|^2}{|\rho(p)|}\right]^{1/2},
\end{equation}
for every $p \in D$ and every $v \in T_pM$.
\end{theorem}
We start with the small almost complex deformations of the standard
structure. In the second subsection, we consider the case of an
arbitrary almost complex structure, not necessarily close to the
standard one. We use non-isotropic dilations in special coordinates
``reducing'' an almost complex structure in order to represent a
strictly pseudoconvex hypersurface on an almost complex manifold
as the Siegel sphere equipped with an arbitrary
small deformation of the standard structure. We stress
that such a representation cannot be obtained by the isotropic
dilations of Lemma 1 since the limit hypersurface is just a
hyperplane.
\subsubsection{Small deformations of the standard structure}
We start the proof of Theorem~\ref{THM} with the following~:
\begin{proposition}\label{thm2}
Let $D=\{\rho < 0\}$ be a bounded domain in $\mathbb C^n$, where $\rho$ is a
$\mathcal C^2$ defining function of $D$, strictly $J_{st}$-plurisubharmonic in
a neighborhood of $\bar{D}$. Then there exist positive constants $c$
and $\lambda_0$ such that for every almost complex structure $J$
defined in a neighborhood of $\bar{D}$ and such that
$\|J-J_{st}\|_{\mathcal C^2(\bar{D})} \leq \lambda_0$ estimate~(\ref{e3}) is
satisfied for every $p \in D,$ $v \in \mathbb C^n$.
\end{proposition}
\vskip 0,1cm
\noindent{\it Proof}. We note that according to Proposition~\ref{thm3}
(see estimate (\ref{localhyp}))
it is sufficient to prove the inequality
near $\partial D$. Suppose by contradiction that there exists a
sequence $(p^{\nu})$ of points in $D$ converging to a boundary point
$q$, a sequence $(v^{\nu})$ of unitary vectors and a sequence $(J_\nu)$
of almost complex structures defined in a neighborhood of $\bar{D}$,
satisfying
$\|J_\nu-J_{st}\|_{\mathcal C^2(\bar{D})}\rightarrow_{\nu \rightarrow \infty} 0$,
such that the quotient
\begin{equation}
\label{quot1}
K_{(D\cap U,J_\nu)}(p^{\nu},v^{\nu})\left[\frac{|\partial_{J_{\nu}}
\rho(p^{\nu})(v^{\nu} - iJ_{\nu}(p^{\nu})v^{\nu})|^2}{|\rho(p^{\nu})|^2}
+ \frac{\|v^{\nu}\|^2}{|\rho(p^{\nu})|}\right]^{-1/2}
\end{equation}
tends to $0$ as $\nu$ tends to $\infty$, where
$U$ is a neighborhood of $q$.
For sufficiently large $\nu$ denote by
$\delta_{\nu}$ the euclidean distance from $p^{\nu}$ to the
boundary of $D$ and by $q^{\nu} \in \partial D$ the unique point such
that $\vert p^{\nu} - q^{\nu} \vert = \delta_{\nu}$. Without loss of
generality we assume that $q = 0$, that $T_0(\partial D) = \{z:=('z,z_n) \in
\mathbb C^n : Re(z_n) = 0\}$ and that $J_\nu(q^\nu) = J_{st}$ for every $\nu$.
Consider a sequence of biholomorphic (for the standard structure)
transformations $T^{\nu}$ in a neighborhood of the origin, such that
$T^\nu(q^{\nu}) = 0$ and such that the image $D^{\nu} : =T^\nu(D)$
satisfies
$$
T_0(\partial D^{\nu})=\{ z \in \mathbb C^n : Re(z_n) = 0\}.
$$
We point out that the sequence $(T^{\nu})_\nu$ converges uniformly to the
identity map since $q^{\nu} \rightarrow q=0$ as $\nu \rightarrow \infty$
and hence that the sequence $((T^\nu)^{-1})_\nu$ is bounded. We
still denote by $J_\nu$ the direct image
$(T^\nu)_*(J_\nu)$. Let $U_1$ be a neighborhood of the origin such that
$\bar{U} \subset U_1$. For sufficiently large $\nu$ we have
$T^\nu(U) \subset U_1$. We may assume that every domain $D^{\nu}$ is
defined on $U_1$ by
$$
D^\nu \cap U_1 = \{z \in U_1 : \rho^{\nu}(z) :=
Re(z_n) + |'z|^2 +\mathcal O(|z|^3) <0\},
$$
and that the
sequence $(\hat p^\nu = T^{\nu}(p^{\nu}) =(0',-\delta_\nu))_\nu$ is on
the real inward normal to $\partial D^{\nu}$ at 0. Of course, the
functions $\rho^{\nu}$ converge uniformly with all derivatives to the
defining function $\rho$ of $D$. In what follows we omit the hat and
write $p^{\nu}$ instead of $\hat{p}^{\nu}$.
Denote by $R$ the function
$$
{R}(z)=Re(z_n) + |'z|^2 + (Re(z_n) + \vert 'z \vert^2)^2.
$$
There is a neighborhood $V_0$ of the origin in $\mathbb C^n$
such that the function
${R}$ is strictly $J_{st}$-plurisubharmonic on $V_0$. Fix $\alpha > 0$
small enough
such that the point $z^\alpha=('0,-\alpha)$ belongs to $V_0$.
Consider the dilation $\Lambda_\nu$ defined on
$\mathbb C^n$ by $\Lambda_\nu(z) =
({(\alpha / \delta_\nu)^{1/2}}'z,(\alpha/\delta_\nu)z_n)$.
If we set $J^\nu :=\Lambda_\nu \circ J_\nu \circ (\Lambda_\nu)^{-1}$
then we have~:
\begin{lemma}\label{Lemma1}
$\lim_{\nu \rightarrow \infty}J^\nu = J_{st}$, uniformly on compact
subsets of $\mathbb C^n$.
\end{lemma}
\noindent{\it Proof}. Considering $J$ as a matrix valued function,
we may assume that the Taylor expansion of $J_\nu$ at the origin is given by
$J_\nu = J_{st} + L_\nu(z) + \mathcal O(|z|^2)$ on $U$, uniformly with
respect to $\nu$.
Hence ${J}^\nu(z)(v) = J_{st}(v) +
{L}_\nu('z,(\delta_\nu/\alpha)^{1/2}z_n)(v)
+ \mathcal O(\delta_\nu)
\ \|v\|$.
Since $\lim_{\nu \rightarrow \infty}L_\nu = 0$ by assumption,
we obtain the desired result. \qed
\vskip 0,1cm
Let $\tilde{\rho}^\nu:=(\alpha / \delta_\nu) \rho^\nu \circ
\Lambda_\nu^{-1}$ and
$G^\nu:=\{z \in \Lambda_\nu(U_1) : \tilde{\rho}^\nu(z) < 0\}$.
Then the function $R^\nu:= \tilde{\rho}^\nu + (\tilde{\rho}^\nu)^2$
converges with all its derivatives to ${R}$,
uniformly on compact subsets of $\mathbb C^n$.
Hence $R^\nu$ is strictly plurisubharmonic on $V_0$
and according to Lemma~\ref{Lemma1} there is a positive constant $C$
such that for sufficiently large $\nu$ the function $R^\nu - C|z|^2$
is strictly $J^\nu$-plurisubharmonic on $V_0$.
Since $\sup_{z \in G^\nu \cap \partial V_0} (R^\nu(z) -C|z|^2) =-C_0<0$,
the function
$$
\tilde{R}^\nu:=\left\{
\begin{array}{lll}
R^\nu - C|z|^2 & {\rm on} & G^\nu \cap V_0\\
& & \\
-\frac{C_0}{2} & {\rm on} & G^\nu \backslash V_0
\end{array}
\right.
$$
is $J^\nu$-plurisubharmonic on $G^\nu$, strictly $J^\nu$-plurisubharmonic
on $G^\nu \cap V_0$.
Since $z^\alpha$ belongs to $V_0$, it follows from Proposition~\ref{thm3}
(see estimate (\ref{localhyp})) that there exists a positive constant
$C'$ such that for sufficiently large $\nu$ we have~:
$$
K_{(G^\nu,J^\nu)}(z^\alpha,v) \geq C'\|v\|
$$
for every $v \in \mathbb C^n$.
Moreover for $v \in \mathbb C^n$ and for sufficiently large $\nu$ we have~:
\begin{eqnarray*}
& &
K_{(D^\nu\cap U_1, J_\nu)}(p^\nu,v) =
K_{(G^\nu,J^\nu)}(z^\alpha,\Lambda_\nu(v)) \geq C'\parallel \Lambda_\nu(v)
\parallel.
\end{eqnarray*}
This gives the inequality~:
$$
K_{(D^{\nu} \cap U,J_\nu)}(p^\nu,v) \geq C' \left(
\frac{\alpha |v_1|^2}{\delta_\nu} + \cdots +
\frac{\alpha |v_{n-1}|^2}{\delta_\nu} +
\frac{\alpha^2 |v_n|^2}{\delta_\nu^2}\right)^{1/2}.
$$
Since $\delta_\nu$ is comparable to $|\rho(p^\nu)|$ as $\nu
\rightarrow \infty$, we obtain that there
is a positive constant $C''$ such that
$$
K_{(D^{\nu} \cap U,J_\nu)}( p^\nu,v) \geq C'' \left(
\frac{\|v\|^2}{|\rho(p^\nu)|} +
\frac{|v_n|^2}{|\rho(p^\nu)|^2}\right)^{1/2}.
$$
Since $J_{\nu}(0) = J_{st}$, we have $|\partial_{J_\nu}\rho(p^\nu)(v -
iJ_\nu(p^\nu)(v))|^2 = |\partial_{J_{st}}\rho(p^\nu)(v)|^2 +
\mathcal O(\delta_{\nu})\parallel v \parallel^2 = \vert v_n \vert^2
+ \mathcal O(\delta_\nu)\parallel v \parallel^2$.
Hence there exists a positive constant $\tilde{C}$ such that
$$
K_{(D^{\nu} \cap U,J_\nu)}( p^\nu,v) \geq \tilde{C} \left(
\frac{\|v\|^2}{|\rho(p^\nu)|} +
\frac{|\partial_{J_\nu}\rho(p^\nu)(v-iJ_\nu(p^\nu)(v))|^2}
{|\rho(p^\nu)|^2}\right)^{1/2},
$$
contradicting the assumption on the quotient
(\ref{quot1}). This proves the desired estimate. \qed
We have the following corollary~:
\begin{corollary}\label{cor3}
Let $(M,J)$ be an almost complex manifold. Then every $p \in M$ has a
basis of complete hyperbolic neighborhoods.
\end{corollary}
\proof Let $p \in M$. According to Example~1 there exist a
neighborhood $U$ of $p$ and a diffeomorphism $z:U \rightarrow \mathbb B$,
centered at $p$, such that the function $|z|^2$ is strictly
$J$-plurisubharmonic on $U$ and
$\|z_\star(J)-J_{st}\|_{\mathcal C^2(U)} \leq \lambda_0$.
Hence the open ball $\{x \in \mathbb C^n :
\|x\|<1/2\}$ equipped with the structure $z_\star(J)$ satisfies the
hypothesis of Proposition~\ref{thm2}. Now the estimate on the
Kobayashi-Royden metric given by this proposition implies that this ball
is complete hyperbolic by the standard integration argument.
\qed
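\vskip 0,1cm
For the reader's convenience we sketch the integration argument (a
schematic version; constants are not optimal). Let $D = \{\rho < 0\}$ be
as in Proposition~\ref{thm2} and let $\gamma : [0,1[ \rightarrow D$ be a
piecewise $\mathcal C^1$ path leaving every compact subset of $D$, so
that $\rho(\gamma(t)) \rightarrow 0$. Using $d\rho(v) = Re\,
\partial_J\rho(v - iJv)$ and the first term of estimate~(\ref{e3}) we
obtain~:
$$
\int_0^t K_{(D,J)}(\gamma(s),\dot\gamma(s))\,ds \geq
c\int_0^t \frac{\vert \frac{d}{ds}(\rho\circ \gamma)(s)\vert}
{\vert\rho(\gamma(s))\vert}\,ds \geq
c\,{\rm log}\frac{\vert\rho(\gamma(0))\vert}{\vert\rho(\gamma(t))\vert}
\longrightarrow +\infty
$$
as $t \rightarrow 1$. Hence every Kobayashi ball is relatively compact
in $D$.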
\subsubsection{Arbitrary almost complex structures}
We turn now to the proof of Theorem~\ref{THM} on an
arbitrary strictly pseudoconvex domain in an almost
complex manifold $(M,J)$ ($J$ is not supposed to
be a small deformation of the standard structure).
In view of Proposition~\ref{thm3} it suffices to prove
the statement in a neighborhood $U$ of a boundary point $q \in \partial D$.
Considering local coordinates $z$ centered at $q$, we may assume that
$D \cap U$ is a domain in $\mathbb C^n$ and
$0 \in \partial D$, $J(0) = J_{st}$.
The idea of the proof is to reduce the situation to the case of a small
deformation of the standard structure considered in Proposition~\ref{thm2}.
In the case of real dimension four Theorem~\ref{THM} is a direct corollary of
Proposition~\ref{thm2}. In the case of arbitrary dimension the proof of
Theorem~\ref{THM} requires a slight modification of Proposition~\ref{thm2}.
So we treat this case separately.
\subsubsection{Case where $\dim M = 4$}
According to \cite{si} Corollary~3.1.2,
there exist a neighborhood $U$ of $q$ in $M$ and complex coordinates
$z=(z_1,z_2) : U \rightarrow \mathbb B_2 \subset \mathbb C^2$, $z(0) =
0$ such that $z_*(J)(0) = J_{st}$ and moreover, a map $f: \Delta
\rightarrow \mathbb B$ is $J':= z_*(J)$-holomorphic if and only if it
satisfies the equations
\begin{eqnarray}
\label{Jhol1}
\frac{\partial f_j}{\partial \bar \zeta} =
A_j(f_1,f_2)\overline{\left ( \frac{\partial f_j}
{\partial \zeta}\right ) }, j=1,2
\end{eqnarray}
where $A_j(z) = O(\vert
z \vert)$, $j=1,2$.
As pointed out before, one can obtain such coordinates by considering
two transversal
foliations of $\mathbb B$ by $J'$-holomorphic curves
and then taking these curves into the lines $z_j = const$ by a local
diffeomorphism (see Figure 1). The direct image of the almost complex structure $J$
under such a diffeomorphism is represented by a diagonal matrix $J'(z_1,z_2) =
(a_{jk}(z))_{jk}$ with $a_{12}=a_{21}=0$ and $a_{jj}=i+\alpha_{jj}$
where $\alpha_{jj}(z)=\mathcal O(|z|)$ for $j=1,2$.
We point out that the lines $z_j = const$ are
$J$-holomorphic after a suitable parametrization (which, in general,
is not linear).
\vskip 0,1cm
In what follows we omit the prime and denote this structure again by
$J$. We may assume that the complex tangent space $T_0(\partial D)
\cap J(0) T_0(\partial D) = T_0(\partial D) \cap i T_0(\partial D)$ is
given by $\{ z_2 = 0 \}$.
In particular, we have the following expansion for the defining
function $\rho$ of $D$ on $U$~:
$\rho(z,\bar{z}) = 2 Re(z_2) + 2Re K(z) + H(z) + \mathcal O(\vert z
\vert^3)$, where
$K(z) = \sum k_{\nu\mu} z_{\nu}{z}_{\mu}$, $k_{\nu\mu} =
k_{\mu\nu}$ and
$H(z) = \sum h_{\nu\mu} z_{\nu}\bar z_{\mu}$, $h_{\nu\mu} =
\bar h_{\mu\nu}$.
\vskip 0,1cm
Consider the non-isotropic dilations $\Lambda_{\delta}: (z_1,z_2) \mapsto
(\delta^{-1/2}z_1,\delta^{-1}z_2) = (w_1,w_2)$ with $\delta > 0$.
If $J$ has the above
diagonal form in the coordinates $(z_1,z_2)$ in $\mathbb C^2$, then
its direct image $J_{\delta}= (\Lambda_{\delta})_*(J)$ has the form
$J_{\delta}(w_1,w_2) =(a_{jk}(\delta^{1/2}w_1,\delta w_2))_{jk}$
and so $J_{\delta}$ tends to $J_{st}$ in the $\mathcal C^2$ norm as $\delta
\rightarrow 0$. On the other hand, $\partial D$ is, in the coordinates
$w$, the zero set of the function
$\rho_{\delta}= \delta^{-1}(\rho \circ \Lambda_{\delta}^{-1})$.
As $\delta \rightarrow 0$, the function $\rho_{\delta}$ tends to
the function $2 Re w_2 + 2 Re K(w_1,0) + H(w_1,0)$ which defines a
strictly $J_{st}$-pseudoconvex domain by Lemma~\ref{PP}.
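Explicitly, since every monomial of $K$ and $H$ containing $z_2$ acquires
at least one extra factor $\delta^{1/2}$ after the substitution $z =
\Lambda_\delta^{-1}(w) = (\delta^{1/2}w_1,\delta w_2)$, we have~:
$$
\rho_\delta(w) = \delta^{-1}\rho(\delta^{1/2}w_1,\delta w_2) =
2Re(w_2) + 2Re\,K(w_1,0) + H(w_1,0) + \mathcal O(\delta^{1/2}),
$$
uniformly on compact subsets of $\mathbb C^2$.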
So we may apply Proposition~\ref{thm2}.
This proves Theorem~\ref{THM} in dimension 4.
\subsubsection{Case where $\dim M = 2n$}
In this case we cannot apply Proposition \ref{thm2} directly since
$J$ cannot be deformed by the non-isotropic dilations to the standard
structure. Instead we use the invariance of the Levi form with respect
to the non-isotropic dilations.
We suppose that in a neighborhood of the origin we have $J = J_{st} +
{\mathcal O}(\vert z \vert)$.
We also may assume that in these coordinates the defining function
$\rho$ of $D$ has the form $\rho = 2Re z_n + 2ReK(z) + H(z) +
\mathcal O(\vert z \vert^3)$, where $K$ and $H$ are defined similarly
to the 4-dimensional case and $\rho$ is strictly $J$-plurisubharmonic
at the origin. We use the notation $z = ('z,z_n)$.
Consider the non-isotropic dilations $\Lambda_{\delta} : ('z,z_n)
\mapsto ('w,w_n) = (\delta^{-1/2}{}'z,\delta^{-1}z_n)$ and set
$J_{\delta} = (\Lambda_{\delta})_*(J)$. Then $J_{\delta}$ tends to the
almost complex structure $J_0(z)= J_{st} + L('z,0)$ where
$L('z,0) = (L_{kj}('z,0))_{kj}$
denotes a matrix with $L_{kj} = 0$ for $k = 1,...,n-1$, $j = 1,...,n$,
$L_{nn} = 0$ and $L_{nj}('z,0)$, $j=1,...,n-1$ being (real) linear
forms in $'z$.
Let $\rho_{\delta} = \delta^{-1}(\rho \circ \Lambda_{\delta}^{-1})$.
As $\delta \rightarrow 0$, the function $\rho_{\delta}$ tends to
the function $\tilde{\rho} = 2 Re z_n + 2 Re K('z,0) + H('z,0)$ in the
$\mathcal C^2$ norm. By the invariance of the Levi form we have
${\mathcal L}^J(\rho)(0)(\Lambda_{\delta}^{-1}(v)) = {\mathcal
L}^{J_\delta}(\rho \circ \Lambda_{\delta}^{-1})(0)(v)$. Since $\rho$ is
strictly $J$-plurisubharmonic, multiplying by $\delta^{-1}$ and
passing to the limit on the right-hand side as $\delta \rightarrow 0$,
we obtain that
${\mathcal L}^{J_0}(\tilde \rho)(0)(v) \geq 0$ for any $v$. Now let $v =
('v,0)$. Then $\Lambda_{\delta}^{-1}(v) = \delta^{1/2}v$ and so
${\mathcal L}^J(\rho)(0)(v) = {\mathcal
L}^{J_\delta}(\rho_{\delta})(0)(v)$. Passing to the limit as $\delta$
tends to zero, we obtain that ${\mathcal L}^{J_0}(\tilde \rho)(0)(v) > 0$
for any $v = ('v,0)$ with $'v \neq 0$.
Consider now the function $R=\tilde{\rho} + \tilde{\rho}^2$.
Then ${\mathcal L}^{J_0}(R)(0)(v) = {\mathcal L}^{J_0}(\tilde \rho)(0)(v)
+ 2 v_n \overline v_n$, so $R$ is strictly $J_0$-plurisubharmonic in a
neighborhood of the origin.
Thus the functions $R^{\nu}$ used in the proof of
Proposition~\ref{thm2} are strictly $J^\nu$-plurisubharmonic and their Levi
forms are bounded from below by a positive constant independent of $\nu$.
This allows us to use Proposition~\ref{thm3}, and the proof
proceeds exactly as the proof of Proposition~\ref{thm2}. \qed
\subsubsection{Upper estimate of the Kobayashi-Royden metric}
In this subsection we prove the following~:
\begin{theorem}\label{THEOREM}
Let $M$ be a real $2n$-dimensional manifold with an almost complex
structure $J$ and
let $D=\{\rho<0\}$ be a relatively compact domain in $(M,J)$.
We assume that $\rho$ is a $\mathcal C^2$ defining function of $D$,
strictly $J$-plurisubharmonic in a neighborhood of $\bar{D}$. Then
there exists a positive constant $c$ such that~:
\begin{equation}\label{E3}
K_{(D,J)}(p,v) \leq c\left[\frac{|\partial_J\rho(p)(v - iJ(p)v)|^2}
{|\rho(p)|^2} +
\frac{\|v\|^2}{|\rho(p)|}\right]^{1/2},
\end{equation}
for every $p \in D$ and every $v \in T_pM$.
\end{theorem}
\noindent{\it Proof}. Since the estimates are purely local we may
assume that $D \subset \mathbb C^n$.
Let $p \in D$ be sufficiently close to $\partial D$ and let $\|v\| =
1$.
We choose coordinates $z=(z_1,\dots,z_n)$ on $\mathbb C^n$ such that
$0 \in \partial D$, $p=(0',-\delta)$ where $0 < \delta < 1$,
$dist(p,\partial D) = dist(0,p)$, and
$D' :=\{z \in B(0,2) : Re(z_n) + |z|^2 <0\} \subset D \cap B(0,2)$.
Moreover we may assume that the map $f_v:\Delta \rightarrow \mathbb
C^n$ defined by $f_v(\zeta) = \zeta v$ is $J$-holomorphic. We denote by
$d$ the distance from $p$ to $\partial D'$ along the
line $\mathbb C v$. Since $f_v(d\Delta)$ is
contained in $D'$ we have the following inequalities~:
$$
K_{(D,J)}(p,v) \leq K_{(D',J)}(p,v) \leq \frac{1}{d}.
$$
Moreover by the strict convexity of $D'$ we have~:
$$
d \geq \frac{1}{2}\left(\delta^2 |v_n|^2 + \delta
\|v'\|^2\right)^{1/2}.
$$
On the other hand
$$
\frac{1}{(\delta^2 |v_n|^2 + \delta \|v'\|^2)^{1/2}} \leq
\left(\frac{|v_n|^2}{\delta ^2} +
\frac{\|v'\|^2}{\delta}\right)^{1/2}
$$
which implies the desired statement. \qed
\subsection{ Boundary continuity and localization of biholomorphisms}
In this section we give some technical results necessary for the
proof of the Fefferman mapping theorem. They are also of independent
interest.
\subsubsection{Boundary continuity of diffeomorphisms}
Using estimates of the
Kobayashi-Royden metric together with the boundary distance preserving
property, we obtain, by means of classical arguments
(see, for instance, K. Diederich and J. E. Fornaess \cite{df77}), the following
\begin{theorem}\label{Reg}
Let $D$ and $D'$ be two smoothly bounded relatively compact strictly pseudoconvex
domains in almost complex manifolds $(M,J)$ and $(M',J')$ respectively. Let
$f: D \rightarrow D'$ be a smooth
diffeomorphism biholomorphic with respect to $J$ and $J'$.
Then $f$ extends as a $1/2$-H{\"o}lder homeomorphism
between the closures of $D$ and $D'$.
\end{theorem}
We recall the following estimates
of the Kobayashi-Royden infinitesimal metric obtained previously~:
\begin{lemma}
\label{lowest1}
Let $D$ be a relatively compact strictly pseudoconvex domain in an
almost complex manifold $(M,J)$. Then there
exists a constant $C > 0$ such that
\begin{eqnarray*}
(1/C)\| v \| / dist(p,\partial D)^{1/2} \leq K_{(D,J)}(p,v)
\leq C\| v \|/dist(p,\partial D)
\end{eqnarray*}for every $p \in D$ and
$v \in T_pM$.
\end{lemma}
\noindent{\it Proof of Theorem~\ref{Reg}}. For any $p \in D$ and any
tangent vector $v$ at $p$ we have by Lemma~\ref{lowest1}~:
\begin{eqnarray*}
C_1\frac{\| df_p(v) \|}{dist(f(p),\partial D')^{1/2}} \leq
K_{(D',J')}(f(p),df_p(v)) = K_{(D,J)}(p,v) \leq C_2\frac{\| v
\|}{dist(p,\partial D)}
\end{eqnarray*}
which implies, by Proposition~\ref{equiv}, the estimate
$$\vert\vert\vert df_p \vert\vert\vert \leq
\frac{C}{dist(p,\partial D)^{1/2}}.$$
This gives the desired statement. \qed
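\vskip 0,1cm
The passage from this derivative estimate to the H{\"o}lder continuity is
the classical Hardy-Littlewood argument: joining two points $p,q \in D$
by a path which first reaches distance $\vert p - q \vert$ from
$\partial D$ along the inward normals and then moves tangentially, and
integrating the bound $\vert\vert\vert df \vert\vert\vert \leq C\,
dist(\cdot,\partial D)^{-1/2}$ along this path, one obtains
$$
dist(f(p),f(q)) \leq C'\vert p - q \vert^{1/2}
$$
with $C'$ independent of $p$ and $q$; the $1/2$-H{\"o}lder extension to
$\bar{D}$ follows.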
\vskip 0,2cm
\noindent Theorem~\ref{Reg} allows us to reduce the proof of Fefferman's
theorem to a {\it local situation}.
Indeed, let $p$ be a boundary point of $D$ and
$f(p) = p' \in \partial D'$. It suffices to prove that $f$ extends
smoothly to a neighborhood of $p$ on $\partial D$. Consider
coordinates $z$ and $z'$ defined in small neighborhoods $U$ of $p$ and $U'$ of
$p'$ respectively, with $U' \cap D' = f(D \cap U)$ (this is possible since
$f$ extends as a homeomorphism at $p$). We obtain the following situation.
If $\Gamma = z(\partial D \cap U)$ and $\Gamma' = z'(\partial D' \cap U')$
then the map $z' \circ f \circ z^{-1}$ is defined on $z(D \cap U)$ in $\mathbb C^2$,
continuous up to the hypersurface $\Gamma$ with $f(\Gamma) \subset \Gamma'$.
Furthermore the map $z' \circ f \circ z^{-1}$ is a diffeomorphism between
$z(D \cap U)$ and $z'(D' \cap U')$ and the hypersurfaces $\Gamma$ and
$\Gamma'$ are strictly pseudoconvex for the structures $z_*(J)$ and
$(z')_*(J')$ respectively. Finally, we may choose $z$ and $z'$ such that
$z_*(J)$ and $z'_*(J')$ are represented by diagonal matrix functions in the
coordinates $z$ and $z'$.
As we proved in Lemma~\ref{PP}, $\Gamma$ (resp. $\Gamma'$) is also strictly
$J_{st}$-pseudoconvex at the origin. We call such
coordinates $z$ (resp. $z'$) {\it canonical coordinates} at $p$
(resp. at $p'$). Using the non-isotropic
dilation as in Section 2.5, we may assume that the norms
$\| z_*(J) - J_{st}\|_{\mathcal C^2}$ and
$\| z'_*(J') - J_{st}\|_{\mathcal C^2}$ are as small as needed.
This localization is crucially used in the sequel and we write $J$ (resp.
$J'$) instead of $z_*(J)$ (resp. $z'_*(J')$); we
identify $f$ with $z' \circ f \circ z^{-1}$.
\subsubsection{H{\"o}lder extension of holomorphic discs}
We study the boundary continuity of pseudoholomorphic discs
attached to smooth totally real submanifolds in almost complex manifolds.
Recall that in the case of the integrable structure every smooth totally
real submanifold $E$ (of maximal dimension) is the zero set of a positive
strictly plurisubharmonic function of class $\mathcal C^2$. This remains true
in the almost complex case. Indeed, we can choose coordinates
$z$ in a neighborhood $U$ of $p \in E$ such that $z(p) = 0$, $z_*(J) =
J_{st} + O(\vert z \vert)$ on $U$ and
$z(E \cap U) = \{w=(x,y) \in z(U) : r_j(w) = x^j +o(\vert
w \vert) = 0 \}$. The function $\rho = \sum_{j=1}^n r_j^2$ is strictly
$J_{st}$-plurisubharmonic on $z(U)$ and so remains strictly
$z_*(J)$-plurisubharmonic, restricting $U$ if necessary.
Covering $E$ by such neighborhoods, we conclude by means of a partition of
unity.
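In the model case $J = J_{st}$ with $r_j = x^j$ exactly (a sketch; the general case only adds the $O(\vert z \vert)$ perturbation terms), the strict plurisubharmonicity of $\rho$ follows from the elementary identity
$$
\partial\bar\partial (r_j^2) = 2\,\partial r_j \wedge \bar\partial r_j + 2 r_j\, \partial\bar\partial r_j :
$$
on $E$ the second term vanishes and, $E$ being totally real of maximal dimension, the forms $\partial r_1,\dots,\partial r_n$ are $\mathbb C$-linearly independent, so the Levi form of $\rho$ is positive definite on $E$ and, by continuity, in a neighborhood of $E$.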
Let $f : \Delta \rightarrow M$ be a $J$-holomorphic disc and let
$\gamma$ be an open arc on the unit circle $\partial \Delta$.
As usual we denote by $C(f,\gamma)$ the cluster set of $f$ on $\gamma$;
this consists of points $p \in M$ such that $p=\lim_{k \rightarrow
\infty}f(\zeta_k)$ for a sequence $(\zeta_k)_k$ in $\Delta$ converging
to a point in $\gamma$.
\begin{theorem}
\label{Regth1}
Let $G$ be a relatively compact domain in an almost complex manifold $(M,J)$
and let $\rho$ be a strictly $J$-plurisubharmonic
function of class $\mathcal C^2$ on $\bar{G}$.
Let $f:\Delta \rightarrow G$ be a
$J$-holomorphic disc such that $\rho \circ f \geq 0$ on $\Delta$.
Suppose that $\gamma$ is an open non-empty arc on
$\partial \Delta$ such that the cluster set
$C(f,\gamma)$ is contained in the zero set of $\rho$.
Then $f$ extends as a H{\"o}lder 1/2-continuous map on $\Delta \cup
\gamma$.
\end{theorem}
We begin the proof by the following well-known assertion
(see, for instance, \cite{BeL}).
\begin{lemma}
\label{dlem3.1}
Let $\phi$ be a positive subharmonic function in $\Delta$ such that
the measures $\mu_r(e^{i\theta}) := \phi(re^{i\theta})d\theta$ converge in
the weak-star topology to
a measure $\mu$ on $\partial \Delta$ as $r \rightarrow 1$. Suppose that
$\mu$ vanishes on an open arc $\gamma \subset \partial \Delta$. Then for
every compact subset $K \subset \Delta \cup \gamma$ there exists a constant
$C>0$ such that
$\phi(\zeta) \leq C(1 - \vert \zeta \vert)$ for any
$\zeta \in K \cap \Delta$.
\end{lemma}
Now fix a point $a \in \gamma$, a constant $\delta > 0$ small
enough so that the intersection $\gamma \cap (a + \delta
\bar\Delta )$ is compact in $\gamma$; we denote by
$\Omega_{\delta}$ the intersection $\Delta \cap (a + \delta\Delta
)$. By Lemma~\ref{dlem3.1}, there exists a constant $C > 0$ such that,
for any $\zeta$ in $\Omega_{\delta}$, we have
\begin{eqnarray}
\label{dd4}
\rho \circ f(\zeta) \leq C (1 - \vert \zeta \vert ).
\end{eqnarray}
Let $(\zeta_k)_k$ be a sequence of points in $\Delta$ converging to $a$
with $\lim_{k \rightarrow \infty}f(\zeta_k) = p$.
By assumption, the function $\rho$ is strictly $J$-plurisubharmonic in a
neighborhood $U$ of $p$; hence there is a constant
$\varepsilon > 0$ such that the function $\rho - \varepsilon \vert z
\vert^2$ is $J$-plurisubharmonic on $U$.
\begin{lemma}
\label{dlem3.2}
There exists a constant $A > 0$ with the following property~: If
$\zeta$ is an arbitrary point of $\Omega_{\delta/2}$ such that
$f(\zeta)$ is in $G \cap z^{-1}(\mathbb B)$, then
$\vert \vert \vert df_\zeta \vert \vert \vert
\leq A(1- \vert \zeta \vert)^{-1/2}$.
\end{lemma}
\noindent{\it Proof of Lemma~\ref{dlem3.2}.}
Set $d = 1 - \vert \zeta \vert$; then the disc $\zeta +
d\Delta$ is contained in $\Omega_{\delta}$. Define the domain $G_d =
\{ w \in G: \rho(w) < 2Cd \}$. Then it follows by (\ref{dd4}) that the
image $f(\zeta + d\Delta)$ is contained in $G_d$, where the
$J$-plurisubharmonic function $u_d = \rho - 2Cd$ is negative.
Moreover we have lower estimates on the Kobayashi-Royden
infinitesimal pseudometric given by Proposition~\ref{addest}.
Hence there exists a positive constant $M$
(independent of $d$) such that $K_{(G_d,J)}(w,\eta) \geq M \vert \eta
\vert \vert u_d(w) \vert^{-1/2}$, for any $w$ in $G \cap z^{-1}(\mathbb B)$ and
any $\eta \in \mathbb C^2$. On the other hand, we have $K_{\zeta +
d\Delta}(\zeta,\tau ) = \vert \tau \vert /d$ for any $\tau$ in
$T_{\zeta}\Delta$ identified with $\mathbb C$. By the decreasing property
of the Kobayashi-Royden metric, for any $\tau$ we have
\begin{eqnarray*}
M \| df_\zeta(\tau)\| \ \vert u_d(f(\zeta))\vert^{-1/2} \leq
K_{(G_d,J)}(f(\zeta),df_\zeta(\tau)) \leq K_{\zeta + d\Delta}(\zeta,\tau) =
\vert \tau \vert/d.
\end{eqnarray*}
Therefore, $\vert \vert \vert df_\zeta\vert\vert\vert \leq M^{-1}\vert
u_d(f(\zeta))\vert^{1/2}/d$. As $-2Cd \leq u_d(f(\zeta)) < 0$, this
implies the desired statement in Lemma~\ref{dlem3.2}
with $A = M^{-1}(2C)^{1/2}$. \qed
\vskip 0,1cm
\noindent{\it Proof of Theorem~\ref{Regth1}}.
Lemma~\ref{dlem3.2} implies that $f$ extends as a 1/2-H{\"o}lder map
to a neighborhood of the point $a$
in view of an integration argument inspired by the classical
Hardy-Littlewood theorem.
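Let us indicate this integration step (a sketch, ignoring the localization condition $f(\zeta) \in G \cap z^{-1}(\mathbb B)$). For $\zeta_1, \zeta_2$ close to $a$ with $d := \vert \zeta_1 - \zeta_2 \vert$, join them by the two radial segments from $\zeta_j$ to $(1-d)\zeta_j/\vert \zeta_j \vert$ followed by an arc of the circle of radius $1-d$; Lemma~\ref{dlem3.2} then yields
$$
\| f(\zeta_1) - f(\zeta_2) \| \leq \int \vert\vert\vert df_\zeta \vert\vert\vert \, \vert d\zeta \vert
\leq A \Big( 2\int_0^d \frac{dt}{t^{1/2}} + \frac{\pi d}{d^{1/2}} \Big)
\leq C \vert \zeta_1 - \zeta_2 \vert^{1/2}.
$$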
This proves Theorem~\ref{Regth1}. \qed
\section{Nonisotropic scaling of almost complex manifolds with boundary}
We consider a biholomorphism $f$ between two relatively compact,
strictly pseudoconvex domains $D$ and $D'$ in almost complex manifolds
$(M,J)$ and $(M',J')$.
The aim of this Section is to provide a precise information about the boundary
behavior of the tangent map of $f$.
For convenience of the reader, we begin this Section with the
case of four real dimensional almost complex manifolds (subsections
4.1 and 4.2). The case of higher dimension will be treated in
Subsection 4.3.
\subsection{Localization and boundary behavior of the tangent map}
Our considerations being purely local, we can assume that $D$ and $D'$ are
domains in $\mathbb C^2$, $\Gamma$ and $\Gamma'$ are open
smooth pieces of their boundaries containing the origin, the almost complex
structure $J$ (resp. $J'$) is defined in a neighborhood of $D$
(resp. $D'$), $f$ is a $(J,J')$ biholomorphism from $D$ to $D'$,
continuous up to $\Gamma$, $f(\Gamma) = \Gamma'$, $f(0) = 0$.
The matrix $J$ (resp. $J'$) is diagonal on $D$ (resp. $D'$).
\vskip 0,1cm
Consider a basis $(\omega_1,\omega_2)$ of $(1,0)$ differential forms
(for the structure $J$) in a neighborhood of the origin. Since $J$ is
diagonal, we may choose $\omega_j = dz^j - B_{j}(z)d\bar z^j$, $j=1,2$.
Denote by $Y=(Y_1,Y_2)$ the corresponding dual basis
of $(1,0)$ vector fields. Then $Y_j = \partial /\partial z^j -
\beta_j(z)\partial/\partial \bar z^j$, $j=1,2$, with $\beta_1(0) =
\beta_2(0) = 0$. The basis $Y(0)$ simply coincides with the canonical (1,0)
basis of $\mathbb C^2$.
In particular $Y_1(0)$ is a basis vector of the holomorphic tangent space
$H^J_0(\partial D)$ and $Y_2(0)$ is ``normal'' to $\partial D$.
Consider now for $t \geq 0$ the translation $\partial D -
t$ of the boundary of $D$ near the origin. Consider, in a neighborhood of the
origin, a $(1,0)$ vector field $X_1$ (for $J$) such that $X_1(0) = Y_1(0)$
and $X_1(z)$ generates the complex tangent space $H^J_z(\partial D - t)$ at
every point $z \in \partial D - t$, $0 \leq t \ll 1$.
Setting $X_2 = Y_2$, we obtain a basis of vector fields
$X = (X_1,X_2)$ on $D$ (restricting $D$ if necessary).
Any complex tangent vector $v \in T_z^{(1,0)}(D,J)$ at
a point $z \in D$ admits the unique
decomposition $v = v_t + v_n$ where $v_t = \alpha_1
X_1(z)$ (the tangent component) and $v_n = \alpha_2 X_2(z)$ (the normal
component). Identifying $T_z^{(1,0)}(D,J)$ with $T_zD$ we may
consider the decomposition $v=v_t + v_n$ for $v \in T_z(D)$.
Finally we consider this decomposition for points $z$ in a neighborhood of
the boundary.
We fix a basis of (1,0) vector fields $X$
(resp. $X'$) on $D$ (resp. $D'$) as above.
\begin{proposition}
\label{matrix}
The matrix $A = (A_{kj})_{k,j= 1,2}$ of the differential $df_z$ with respect
to the bases
$X(z)$ and $X'(f(z))$ satisfies the following estimates~:
$A_{11} = O(1)$, $A_{12}= O(dist(z,\partial D)^{-1/2})$,
$A_{21}= O(dist(z,\partial D)^{1/2})$ and $A_{22}= O(1)$.
\end{proposition}
\vskip 0,1cm
\noindent{\it Proof of Proposition~\ref{matrix}}.
Consider the case where $v = v_t$. It follows from
the estimates of the Kobayashi-Royden metric obtained previously that~:
$$
\begin{array}{llcll}
\displaystyle \frac{1}{C}\left (
\frac{\| (df_z(v_t))_t \|}{dist(f(z),\partial
D')^{1/2}} + \frac{\|(df_z(v_t))_n
\|}{dist(f(z),\partial D')} \right ) & \leq &
K_{(D',J')}(f(z),df_z(v_t)) && \\
& = & K_{(D,J)}(z,v_t) & \leq & \displaystyle C
\frac{\| v_t \|}{dist(z,\partial D)^{1/2}}.
\end{array}
$$
This implies that
$\|(df_z(v_t))_t \| \leq C^{5/2} \| v_t \|$
and
$\vert\vert (df_z(v_t))_n \vert\vert \leq C^{3}dist(z,\partial D)^{1/2}
\| v_t \|$, by the boundary distance
preserving property given in Proposition~\ref{equiv}.
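Indeed, comparing the first components and using the inequality $dist(f(z),\partial D') \leq C\, dist(z,\partial D)$ of Proposition~\ref{equiv}, we get
$$
\|(df_z(v_t))_t \| \leq C^2 \left( \frac{dist(f(z),\partial D')}{dist(z,\partial D)} \right)^{1/2} \| v_t \| \leq C^{5/2} \| v_t \|,
$$
while the second components give $\|(df_z(v_t))_n \| \leq C^2\, dist(f(z),\partial D')\, dist(z,\partial D)^{-1/2} \| v_t \| \leq C^{3} dist(z,\partial D)^{1/2} \| v_t \|$.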
We obtain the estimates for the normal component in a similar way.
\qed
\subsection{Non-isotropic dilations}
Our goal now is to prove Fefferman's mapping theorem without the
assumption of $\mathcal C^1$-smoothness of $f$ up to the boundary. This requires
an application of the estimates of the Kobayashi-Royden metric given
above and the scaling method due to S.Pinchuk; we adapt this to the
almost complex case.
We reduced the problem to the following local
situation. Let $D$ and $D'$ be domains in $\mathbb C^2$, $\Gamma$ and
$\Gamma'$ be open $\mathcal C^{\infty}$-smooth pieces of their boundaries,
containing the origin. We assume that an almost complex structure $J$
is defined and $\mathcal C^{\infty}$-smooth in a neighborhood of the closure
$\bar D$, $J(0) = J_{st}$ and $J$ has a diagonal form in a
neighborhood of the origin: $J(z) = diag(a_{11}(z),a_{22}(z))$.
Similarly, we assume that $J'$
is diagonal in a neighborhood of the origin, $J'(z) =
diag(a_{11}'(z),a_{22}'(z))$ and $J'(0) = J_{st}$. The hypersurface
$\Gamma$ (resp. $\Gamma'$) is supposed to be strictly $J$-pseudoconvex
(resp. strictly $J'$-pseudoconvex). Finally, we assume that $f: D
\rightarrow D'$ is a $(J,J')$-biholomorphic map, $1/2$-H{\"o}lder
homeomorphism between $D \cup \Gamma$ and $D' \cup \Gamma'$, such that
$f(\Gamma) = \Gamma'$ and $f(0) = 0$. Finally,
$\Gamma$ may be defined in a neighborhood of the origin
by the equation $\rho(z) = 0$ where $\rho(z) = 2Re
z^2 + 2Re K(z) + H(z) + o(\vert z \vert^2)$ and $K(z) = \sum
k_{\mu\nu}z^{\mu}z^{\nu}$, $H(z) = \sum h_{\mu\nu}z^{\mu}\bar
z^{\nu}$, $k_{\mu\nu} = k_{\nu\mu}$, $h_{\mu\nu} = \bar
h_{\nu\mu}$. The crucial point is that $H(z^1,0)$ is a positive
Hermitian form on $\mathbb C$, meaning that in these coordinates $\Gamma$ is
strictly pseudoconvex at the origin with respect to the standard
structure of $\mathbb C^2$ (see Lemma~\ref{PP} for the proof). Of course,
$\Gamma'$ admits a similar local representation. In what follows we
assume that we are in this setting.
Let $(p^k)$ be a sequence of points in $D$ converging to $0$ and let
$\Sigma := \{ z \in \mathbb C^2: 2Re z^2 + 2Re K(z^1,0) + H(z^1,0) < 0\}$,
$\Sigma' := \{ z \in \mathbb C^2: 2Re z^2 + 2Re K'(z^1,0) + H'(z^1,0) < 0\}$.
The scaling procedure associates with the pair $(f,(p^k)_k)$
a biholomorphism $\phi$ (with respect to the standard structure $J_{st}$)
between $\Sigma$ and $\Sigma'$. Since $\phi$ is obtained as a limit of a
sequence of biholomorphic maps conjugated with $f$, some of their properties
are related and this can be used to study boundary properties of
$f$ and to prove that its cotangent lift is continuous up to the conormal
bundle $\Sigma(\partial D)$.
\subsubsection{Fixing suitable local coordinates and dilations.}
For any boundary point $t \in
\partial D$ we consider the change of variables $\alpha^t$ defined by
$$
(z^1)^* = \frac{\partial \rho}{\partial \bar z^2}(t)(z^1 - t^1)
- \frac{\partial \rho}{\partial \bar z^1}(t)(z^2 - t^2),\
(z^2)^* = \sum_{j=1}^2 \frac{\partial \rho}{\partial z^j}(t)(z^j -
t^j).
$$
Then $\alpha^t$ maps $t$ to $0$. The real
normal at $0$ to $\Gamma$ is mapped by $\alpha^t$ to the line $\{
z^1 = 0, y_2 = 0 \}$.
For every $k$, we denote by $t^k$ the projection of
$p^k$ onto $\partial D$ and by $\alpha^k$ the change of variables
$\alpha^t$ with $ t = t^k$. Set $\delta_k = dist(p^k,\Gamma)$.
Then $\alpha^k(p^k) = (0,-\delta_k)$ and $\alpha^k(D)=\{2Re z^2 + O(\vert z
\vert^2) < 0\}$ near the origin. Since the sequence $(\alpha^k)_k$ converges
to the identity map, the sequence $(\alpha^k)_*(J)$ of almost
complex structures tends to $J$ as $k \rightarrow \infty$. Moreover
there is a sequence $(L^k)$ of linear automorphisms of $\mathbb R^4$
such that $(L^k \circ \alpha^k)_*(J)(0) = J_{st}$.
Then $(L^k \circ \alpha^k)(p^k) = (o(\delta_k),-\delta_k')$ with
$\delta_k' \sim \delta_k$ and
$(L^k \circ \alpha^k)(D) = \{Re (z^2 + \tau_k z^1) + O(\vert z \vert^2) < 0\}$
near the origin, with $\tau_k = o(1)$.
Hence there is a sequence $(M^k)$ of
$\mathbb C$-linear transformations of $\mathbb C^2$, converging to the identity,
such that the compositions $T^k := M^k \circ L^k
\circ \alpha^k$ converge to the
identity and $D^k:= T^k(D)$ is defined near the origin by
$D^k=\{\rho_k(z) = Re z^2 + O(\vert z \vert^2) < 0\}$.
Finally $\tilde p_k = T^k(p^k)= (o(\delta_k),-\delta_k'' + io(\delta_k))$ with
$\delta_k''\sim \delta_k$.
We also denote by $\Gamma^k = \{\rho_k = 0 \}$ the
image of $\Gamma$ under $T^k$.
Furthermore, the sequence of almost complex structures
$(J_k:= (T^k)_*(J))$ converges to $J$ as $k \rightarrow \infty$
and $J_k(0) = J_{st}$.
We proceed quite similarly for the target domain $D'$.
For $s \in \Gamma'$ we define the transformation $\beta^s$ by
$$
(z^1)^* = \frac{\partial \rho'}{\partial \bar z^2}(s)(z^1 - s^1)
- \frac{\partial \rho'}{\partial \bar z^1}(s)(z^2 - s^2),
(z^2)^* = \sum_{j=1}^2 \frac{\partial \rho'}{\partial z^j}(s)(z^j -
s^j).
$$
Let $s^k$ be the projection of $q^k:=f(p^k)$ onto $\Gamma'$ and
let $\beta^k$ be the corresponding map $\beta^s$ with $s = s^k$.
The sequence $(q^k)$ converges
to $0 = f(0)$ so $\beta^k$ tends to the identity. Considering linear
transformations $(L')^k$ and $(M')^k$, we obtain a sequence $(T'^k)$ of
linear transformations converging to the identity and satisfying the
following properties. The domain
$(D^k)':= T'^k(D')$ is defined near the origin by
$(D^k)'=\{\rho_k'(z) := Re z^2 + O(\vert z \vert^2) < 0\}$,
$\Gamma_k' = \{ \rho_k' = 0 \}$ and $\tilde q_k = T'^k(q^k) =
(o(\varepsilon_k),-\varepsilon_k''+ io(\varepsilon_k))$
with $\varepsilon_k'' \sim \varepsilon_k$, where
$\varepsilon_k = dist(q^k,\Gamma')$.
The sequence of almost complex structures $(J_k':= (T'^k)_*(J'))$
converges to $J'$ as $k \rightarrow \infty$ and $J_k'(0) = J_{st}$.
Finally, the map $ f^k:= T'^k \circ f \circ (T^k)^{-1}$ satisfies
$f^k(\tilde p_k) = \tilde q_k$ and is
a biholomorphism between the domains $D^k$ and $(D')^k$
with respect to the almost
complex structures $J_k$ and $J_k'$.
Consider now the non-isotropic dilations
$\phi_k : (z^1,z^2) \mapsto (\delta_k^{1/2}z^1,\delta_kz^2)$ and
$\psi_k(z^1,z^2)=
(\varepsilon_k^{1/2}z^1,\varepsilon_kz^2)$ and set $\hat f^k =
(\psi_k)^{-1} \circ f^k \circ \phi_k$.
Then the map $\hat f^k$ is biholomorphic with respect to the almost complex
structures $\hat J_k:=((\phi_k)^{-1})_*(J_k)$ and
$\hat J'_k:= (\psi_k^{-1})_*(J'_k)$.
Moreover if $\hat D^k:=\phi_k^{-1}(D^k)$ and
$(\hat{D'})^k:=\psi_k^{-1}((D')^k)$ then
$\hat D^k = \{ z \in \phi_k^{-1}(U): \hat \rho_k(z) < 0\}$
where
$$
\begin{array}{lll}
\hat \rho_k(z) &:=& \delta_k^{-1}\rho(\phi_k(z))\\
& = & 2Re z^2 + \delta_k^{-1}[2
Re K(\delta_k^{1/2}z^1,\delta_kz^2) + H(\delta_k^{1/2}z^1,\delta_kz^2)
+ o(\vert (\delta_k^{1/2}z^1,\delta_kz^2) \vert^2)].
\end{array}
$$
and $(\hat D')^k=\{ z \in \phi_k^{-1}(U): \hat \rho'_k(z) < 0\}$
where
$$
\begin{array}{lll}
\hat \rho'_k(z) &:=& \varepsilon_k^{-1}\rho'(\psi_k(z))\\
& = & 2Re z^2 +
\varepsilon_k^{-1}[2 Re K'(\varepsilon_k^{1/2}z^1,\varepsilon_kz^2) +
H'(\varepsilon_k^{1/2}z^1,\varepsilon_kz^2)
+ o(\vert (\varepsilon_k^{1/2}z^1,\varepsilon_kz^2) \vert^2)].
\end{array}
$$
Since $U$
is a fixed neighborhood of the origin, the pullbacks $\phi_k^{-1}(U)$
tend to $\mathbb C^2$ and the functions $\hat\rho_k$ tend
to $\hat \rho(z) = 2Re z^2 + 2Re K(z^1,0) + H(z^1,0)$ in the $\mathcal C^2$ norm
on any compact subset of $\mathbb C^2$. Similarly, since $U'$
is a fixed neighborhood of the origin, the pullbacks $\psi_k^{-1}(U')$
tend to $\mathbb C^2$ and the functions $\hat\rho_k'$ tend
to $\hat \rho'(z) = 2Re z^2 + 2Re K'(z^1,0) + H'(z^1,0)$ in the $\mathcal C^2$ norm
on any compact subset of $\mathbb C^2$. If $\Sigma :=
\{ z \in \mathbb C^2: \hat \rho(z) < 0 \}$ and $\Sigma' := \{ z \in \mathbb C^2:
\hat \rho'(z) < 0 \}$ then the sequence of points $\hat p^k =
\phi_k^{-1}(\tilde p_k) \in \hat D^k$ converges to the point $(0,-1) \in
\Sigma$ and the sequence of points $\hat q^k =
\psi^{-1}_k(\tilde q^k) \in \hat{D'}^k$ converges to $(0,-1) \in
\Sigma'$. Finally $\hat{f}^k(\hat p^k) = \hat q^k$.
\subsubsection{Convergence of the dilated families.} We begin with the
following
\begin{lemma}\label{convseq2}
The sequences $(\hat J'_k)$ and $(\hat J_k)$ of almost complex structures
converge to the standard structure uniformly (with all partial
derivatives of any order) on compact subsets of $\mathbb C^2$.
\end{lemma}
\noindent{\it Proof of Lemma~\ref{convseq2}.}
Denote by $a_{\nu\mu}^k(z)$ the elements of the matrix
$J_k$. Since $J_k \rightarrow J$ and $J$ is diagonal, we have $a_{\nu\mu}^k
\rightarrow a_{\nu\mu}$ for $\nu = \mu$ and $a_{\nu\mu}^k
\rightarrow 0$ for $\nu \neq \mu$. Moreover, since $J_k(0) =
J_{st}$, $a_{\nu\mu}^k(0) = i$ for $\nu = \mu$ and $a_{\nu\mu}^k(0) = 0$
for $\nu \neq \mu$.
The elements $\hat
a_{\nu\mu}^k$ of the matrix
$\hat J_k$ are given by: $\hat a_{\nu\mu}^k(z^1,z^2) = a_{\nu
\mu}^k(\delta_k^{1/2}z^1,\delta_k z^2)$ for $\nu = \mu$, $\hat
a_{12}^k(z^1,z^2) = \delta_k^{1/2}a_{12}^k(\delta_k^{1/2}z^1,\delta_k z^2)$ and
$\hat a_{21}^k(z^1,z^2) = \delta_k^{-1/2}a_{21}^k(\delta_k^{1/2}z^1,\delta_k
z^2)$. Since $a_{21}^k(0) = 0$ and $a_{21}^k \rightarrow 0$ together with its
first derivatives, uniformly on compact subsets, we have
$a_{21}^k(\delta_k^{1/2}z^1,\delta_k z^2) = o(\delta_k^{1/2})$ there.
This implies the desired result. \qed
\vskip 0,1cm
The next statement is crucial.
\begin{proposition}
\label{scaling}
The sequence $(\hat f^k )$ (together with all derivatives) is a
relatively compact family (with
respect to the compact open topology) on
$\Sigma$; every cluster point $\hat f$ is
a biholomorphism (with respect to $J_{st}$) between $\Sigma$
and $\Sigma'$, satisfying $\hat f(0,-1) = (0,-1)$ and
$(\partial \hat f^2/\partial z^2)(0,-1) = 1$.
\end{proposition}
\noindent{\it Proof of Proposition~\ref{scaling}.}
{\it Step 1: convergence.} Our proof is based on the method
developed by F.Berteloot-G.Coeur{\'e}~\cite{BerCo}.
Consider a domain
$G \subset \mathbb C^2$ of the form $G = \{ z \in W: \lambda(z) = 2Re z^2 +
2Re K(z) + H(z) + o(\vert z \vert^2) < 0 \}$ where $W$ is a
neighborhood of the origin. We assume that an almost complex
structure $J$ is diagonal on $W$ and that the hypersurface
$\{ \lambda = 0 \}$ is strictly $J$-pseudoconvex at any point.
Given $a \in \mathbb C^2$ and $\delta > 0$
denote by $Q(a,\delta)$ the non-isotropic ball
$Q(a,\delta ) = \{ z: \vert z^1 - a_1 \vert < \delta^{1/2}, \vert z^2
- a_2 \vert < \delta \}$. Denote also by $d_{\delta}$ the non-isotropic
dilation $d_{\delta}(z^1,z^2) = (\delta^{-1/2}z^1,\delta^{-1}z^2)$.
\begin{lemma}\label{Control}
There exist positive constants $\delta_0, C, r$ satisfying the
following property : for
every $\delta \leq \delta_0$ and for every $J$-holomorphic disc $g:\Delta
\rightarrow G$ such that $g(0) \in Q(0,\delta)$ we have the
inclusion $g(r\Delta) \subset Q(0,C\delta)$.
\end{lemma}
\noindent{\it Proof of Lemma~\ref{Control}.}
Assume by contradiction that there exist positive sequences $\delta_k
\rightarrow 0$, $C_k \rightarrow +\infty$, a sequence $\zeta_k \in
\Delta$, $\zeta_k \rightarrow 0$ and a sequence $g_k: \Delta
\rightarrow G$ of
$J$-holomorphic discs such that $g_k(0) \in Q(0,\delta_k)$ and
$g_k(\zeta_k) \not\in Q(0,C_k\delta_k)$. Denote by $d_k$ the
dilations $d_{\delta}$ with $\delta = \delta_k$ and consider the
composition $h_k = d_k \circ g_k$ defined on $\Delta$. The
dilated domains $G_k:= d_k(G)$ are defined by $\{ z \in d_k(W):
\lambda_k(z):= \delta_k^{-1}\lambda \circ d_k^{-1}(z) < 0 \}$ and the
sequence $(\lambda_k)$ converges uniformly on compact subsets of
$\mathbb C^2$ to $\hat \lambda : z \mapsto 2Re z^2 + 2Re K(z^1,0) + H(z^1,0)$. Since $J$
is diagonal, the sequence of structures $J_k:=(d_k)_*(J)$ converges to
$J_{st}$ in the $\mathcal C^2$ norm on compact subsets of $\mathbb C^2$.
The discs $h_k$ are $J_k$-holomorphic and the sequence $(h_k(0))$ is
contained in $Q(0,1)$; passing to a subsequence we may assume that this
converges to a point $p \in \overline{Q(0,1)}$. On the other hand, the
function $\hat \lambda + A\hat \lambda^2$ is
strictly $J_{st}$-plurisubharmonic on $Q(0,5)$ for a suitable constant
$A > 0$. Since the structures $J_k$ tend
to $J_{st}$, the functions $\lambda_k + A\lambda_k^2$
are strictly $J_k$-plurisubharmonic on $Q(0,4)$ for every $k$ large
enough and their Levi forms admit a uniform
lower bound with respect to $k$.
By Proposition~\ref{addest} the Kobayashi-Royden infinitesimal
pseudometric on $G_k$ admits the following lower bound~:
$K_{G_k}(z,v) \geq C \vert v \vert$ for any $z \in G_k \cap Q(0,3)$,
$v \in \mathbb C^2$,
with a positive constant $C$ independent of $k$. Therefore, there exists a
constant $C' > 0$ such that $\vert \vert \vert (dh_k)_\zeta \vert \vert \vert
\leq C'$ for any $\zeta \in (1/2)\Delta$ satisfying
$h_k(\zeta) \in G_k \cap Q(0,3)$.
On the other hand, the sequence $(\vert h_k(\zeta_k) \vert)$ tends to $+
\infty$. Denote by $[0,\zeta_k]$ the segment
(in $\mathbb C$) joining the origin and $\zeta_k$ and let
$\zeta_k' \in [0,\zeta_k]$ be the point closest to the origin such
that $h_k([0,\zeta_k']) \subset G_k \cap
\overline{Q(0,2)}$ and $h_k(\zeta_k') \in \partial Q(0,2)$. Since $h_k(0)
\in Q(0,1)$, we have $\vert h_k(0) - h_k(\zeta_k') \vert \geq C''$
for some constant $C'' > 0$. Let $\zeta_k' = r_k e^{i\theta_k}$, $r_k \in
]0,1[$. Then
$$
\vert h_k(0) - h_k(\zeta_k') \vert \leq \int_{0}^{r_k}
\vert \vert \vert (dh_k)(te^{i\theta_k}) \vert \vert \vert dt \leq C'r_k
\rightarrow 0.
$$
This contradiction proves Lemma~\ref{Control}. \qed
\vskip 0,1cm
The statement of Lemma~\ref{Control} remains true if we replace the unit
disc $\Delta$ by the unit ball $\mathbb B_2$ in $\mathbb C^2$ equipped with an almost
complex structure $\tilde J$ close enough (in the $\mathcal C^2$ norm) to
$J_{st}$. For the proof it is sufficient to foliate $\mathbb B_2$ by $\tilde
J$-holomorphic curves through the origin (in view of the smooth
dependence on small perturbations of $J_{st}$,
such a foliation is a small
perturbation of the foliation by complex lines through the origin) and to
apply Lemma~\ref{Control} to each leaf of the foliation.
\vskip 0,1cm
As a corollary we have the following
\begin{lemma}
\label{conv}
Let $(M,\tilde J)$ be an almost complex manifold and let $F^k: M
\rightarrow G$ be a sequence of $(\tilde J,J)$-holomorphic maps.
Assume that for some point $p^0 \in M$ we have $F^k(p^0) = (0,-\delta_k)$,
$\delta_k \rightarrow 0$, and that the sequence $(F^k)$ converges
to $0$ uniformly on compact subsets of $M$.
Consider the rescaled maps $d_k \circ
F^k$. Then for any compact subset $K \subset M$ the sequence of norms
$(\| d_k \circ F^k \|_{\mathcal C^0(K)})$ is bounded.
\end{lemma}
\noindent{\sl Proof of Lemma~\ref{conv}}. It is sufficient to consider
a covering of a compact subset of $M$ by sufficiently small balls,
similarly to \cite{BerCo}, p.84.
Indeed, consider a covering of $K$ by the balls $p^j + r\mathbb B$, $j=0,...,N$
where $r$ is given by Lemma~\ref{Control} and $p^{j+ 1} \in p^j + r\mathbb B$ for
any $j$. For $k$ large enough, we obtain
that $F^k(p^0 + r\mathbb B) \subset Q(0,2C\delta_k)$, and $F^k(p^1 + r\mathbb B)
\subset Q(0,4C^2\delta_k)$. Continuing this process we obtain that
$F^k(p^N + r\mathbb B) \subset Q(0,2^NC^N\delta_k)$.
This proves Lemma~\ref{conv}. \qed
\vskip 0,1cm
Now we return to the proof of Proposition~\ref{scaling}. Lemma
\ref{conv} implies that the sequence $(\hat f^k)$ is bounded (in the $\mathcal C^0$
norm) on any
compact subset $K$ of $\Sigma$. Covering $K$ by small bidiscs,
consider two transversal foliations by $\hat J_k$-holomorphic curves on every
bidisc. Since the restriction of $\hat f^k$ to every such curve is
uniformly bounded in the $\mathcal C^0$-norm,
it follows by the elliptic
estimates that this is bounded in $\mathcal C^l$ norm for every $l$ (see
\cite{si}). Since the bounds are uniform with respect to curves,
the sequence $(\hat f^k)$ is bounded in every
$\mathcal C^l$-norm. So the family $(\hat f^k)$ is relatively compact.
{\it Step 2: Holomorphicity of the limit maps.} Let $(\hat f^{k_s})$ be a
subsequence converging to a smooth map $\hat f$.
Since $\hat f^{k_s}$ satisfies the holomorphicity condition
$\hat J'_{k_s} \circ d\hat f^{k_s} = d\hat f^{k_s} \circ
\hat J_{k_s}$, and since $\hat J_{k_s}$ and $\hat J'_{k_s}$ converge to $J_{st}$,
we obtain, passing to the limit in the holomorphicity condition, that $\hat f$
is holomorphic with respect to $J_{st}$.
{\it Step 3: Biholomorphicity of $\hat f$.}
Since $\hat f(0,-1) = (0,-1) \in \Sigma'$ and
$\Sigma'$ is defined by a plurisubharmonic function, it follows by the
maximum principle that $\hat f(\Sigma) \subset \Sigma'$ (and not just a
subset of $\bar\Sigma'$). Applying a similar argument to the
sequence $((\hat f^k)^{-1})$ of inverse maps, we obtain that it converges
(after extraction of a subsequence) to the inverse of $\hat f$.
Finally the domain $\Sigma$ (resp. $\Sigma'$) is
biholomorphic to ${\mathbb H}$ by means of the transformation $(z^1,z^2)
\mapsto (z^1,z^2 + K(z^1,0))$ (resp. $(z^1,z^2) \mapsto (z^1,z^2 +
K'(z^1,0))$). Since a biholomorphism of ${\mathbb H}$ fixing the point
$(0,-1)$ has the form
$(e^{i\theta}z^1,z^2)$ (see, for instance, \cite{co}), $\hat f$ is conjugated
to this transformation by the above quadratic biholomorphisms of
$\mathbb C^2$. Hence~:
\begin{eqnarray}
\label{derivative}
\frac{\partial \hat f^2}{\partial z^2}(0,-1) = 1.
\end{eqnarray}
\vskip 0,1cm
\noindent This property will be used in the next Section. \qed
\subsubsection{Boundary behavior of the tangent map}
We suppose that we are in the local situation described at the
beginning of the previous section. Here we prove two statements
concerning the boundary behavior of the tangent map of $f$ near
$\Gamma$. They are obvious if $f$ is of class $\mathcal C^{1}$ up to
$\Gamma$. In the general situation, their proofs
require the scaling method of the previous section.
Let $p \in \Gamma$. After a local change of coordinates $z$ we
may assume that $p = 0$, $J(0) = J_{st}$ and $J$ is assumed to be diagonal.
In the $z$ coordinates, we
consider a base $X$ of (1,0) (with respect to $J$) vector fields
defined previously. Recall that $X_2 = \partial
/\partial z^2 + a(z) \partial/\partial \bar z^2$, $a(0) = 0$,
$X_1(0) = \partial/\partial z^1$ and at every point $z^0$, $X_1(z^0)$
generates the holomorphic tangent space $H_{z^0}^J(\partial D - t)$, $t \geq 0$.
If we return to the initial coordinates and move the point $p \in \Gamma$,
we obtain for every $p$ a basis $X_p$ of $(1,0)$ vector fields, defined in a
neighborhood of $p$. Similarly, we define the basis $X'_q$ for
$q \in \partial D'$.
The elements of the matrix of the tangent map
$df_z$ in the bases $X_p(z)$ and $X'_{f(p)}(z)$ are denoted by
$A_{js}(p,z)$. According to Proposition~\ref{matrix} the function
$A_{22}(p,\cdot)$ is bounded on $D$.
\begin{proposition}
\label{REALITY}
We have:
\begin{itemize}
\item[(a)] Every cluster point of the function $z \mapsto A_{22}(p,z)$
(in the notation of Proposition~\ref{matrix}) is real when $z$ tends to a
point $p \in \partial D$.
\item[(b)] For $z \in D$, let $p \in \Gamma$ such that
$|z-p| = dist(z,\Gamma)$. There exists a constant $A$, independent of
$z \in D$, such that $\vert A_{22}(p,z) \vert \geq A$.
\end{itemize}
\end{proposition}
The proofs of these statements use the above scaling
construction, so we keep the notation of the previous section.
\vskip 0,1cm
\noindent{\it Proof of Proposition~\ref{REALITY}}.
(a) Suppose that there exists a sequence of points $(p^k)$ converging
to a boundary point $p$ such that $A_{22}(p,p^k)$ tends to a complex number
$a$. Applying the above scaling construction,
we obtain a sequence of maps $(\hat f^k)_k$.
Consider the two bases $\hat X^k:=
(\delta_k^{1/2}((\phi_k^{-1}) \circ T^k)(X_1),
\delta_k((\phi_k^{-1})\circ T^k)(X_2))$ and $(\hat
X')^k:= (\varepsilon_k^{1/2}((\psi_k^{-1}) \circ T'^k)(X'_1),
\varepsilon_k((\psi_k^{-1})\circ T'^k)(X'_2))$. These vector
fields tend to the standard (1,0) vector field basis of $\mathbb C^2$ as $k$
tends to $\infty$. Denote by $\hat A^k_{js}$ the elements of the
matrix of $d\hat f^k(0,-1)$. Then $\hat A^k_{22} \rightarrow (\partial
\hat f^2/\partial z^2)(0,-1) = 1$, according to (\ref{derivative}). On the
other hand, $\hat A^k_{22} = \varepsilon_k^{-1}\delta_k A_{22}$; by the
boundary distance preserving property (Proposition~\ref{equiv}), after
extraction the real positive factors $\varepsilon_k^{-1}\delta_k$ converge
to some $c > 0$, so $\hat A^k_{22}$ tends to $ca$. Hence $ca = 1$ and $a$
is real.
This gives the statement.
(b) Suppose that there is a sequence of points $(p^k)$ converging
to the boundary such that $A_{22}$ tends to $0$. Repeating precisely
the argument of (a), we obtain that $(\partial \hat f^2/\partial
z^2)(0,-1) = 0$; this contradicts~(\ref{derivative}). \qed
\vskip 0,1cm
In order to establish the next proposition, it is convenient to associate
with the totally real part of the conormal bundle
$\Sigma_J(\partial D)$ of $\partial D$ a wedge having it as edge.
Consider in $\mathbb R^{4} \times \mathbb R^{4}$ the set $S = \{ (z,L):
dist((z,L),\Sigma_J(\partial D)) \leq dist(z,\partial D), z \in D \}$.
Then, in a neighborhood $U$ of any totally real point of
$\Sigma_J(\partial D)$, the set $S$ contains a wedge $W_U$ with
$\Sigma_J(\partial D) \cap U$ as totally real edge.
\begin{proposition}
\label{cluster2}
Let $K$ be a compact subset of the totally real part of the conormal
bundle $\Sigma_J(\partial D)$. Then the cluster set of the cotangent lift
$\tilde f $ of $f$ on the conormal bundle
$\Sigma(\partial D)$, when $(z,L)$ tends to $\Sigma_J(\partial D)$
along the wedge $W_U$, is relatively compactly contained
in the totally real part of $\Sigma(\partial D')$.
\end{proposition}
\noindent{\sl Proof of Proposition~\ref{cluster2}}.
Let $(z^k,L^k)$ be a sequence in $W_U$
converging to $(0,\partial_J\rho(0)) = (0,dz^2)$. Set $g = f^{-1}$.
We shall prove that the
sequence of linear forms $Q^k :=
{}^tdg(w^k)L^k$, where $w^k = f(z^k)$, converges to a linear form which up to
a {\it real} factor (in view of Part (a) of
Proposition \ref{REALITY})
coincides with $\partial_{J'} \rho'(0)= dz^2$
(we recall that ${}^t$ denotes the transposed map).
It is sufficient to prove that the first component of $Q^k$ with
respect to the dual basis $(\omega_1,\omega_2)$ of $X$ tends to $0$
and that the second one is
bounded away from zero as $k$ tends to infinity. The map $X$ being
of class $\mathcal C^1$, we can replace $X(0)$ by $X(w^k)$.
Since $(z^k,L^k) \in W_U$, we have $L^k
= \omega_2(z^k) + O(\delta_k)$, where $\delta_k$ is the distance from
$z^k$ to
the boundary. Since $\vert\vert\vert dg_{w^k} \vert\vert\vert =
O(\delta_k^{-1/2})$, we
have $Q^k = {}^tdg_{w^k}(\omega_2(z^k)) + O(\delta_k^{1/2})$.
By Proposition~\ref{matrix}, the
components of ${}^tdg_{w^k}(\omega_2(z^k))$ with respect to the basis
$(\omega_1(z^k),\omega_2(z^k))$ are the elements of the
second line of the matrix
$dg_{w^k}$ with respect to the bases $X'(w^k)$ and $X(z^k)$. So its first
component is $O(\delta_k^{1/2})$ and tends to $0$ as $k$ tends to
infinity. Finally, the component $A_{22}^k$ is bounded away from the
origin by Part (b) of Proposition~\ref{REALITY}. \qed
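\vskip 0,1cm
The dual-basis identity used in this proof is elementary but worth
recording. Assuming the convention $dg_{w^k}(X'^l(w^k)) = \sum_j A^k_{jl}
X^j(z^k)$ and writing $(\omega'_1,\omega'_2)$ for the dual basis of $X'$,
we have
$$
\left({}^tdg_{w^k}(\omega_2(z^k))\right)(X'^l(w^k)) =
\omega_2(z^k)\left(dg_{w^k}(X'^l(w^k))\right) = A^k_{2l},
$$
so that ${}^tdg_{w^k}(\omega_2(z^k)) = \sum_l A^k_{2l}\,\omega'_l(w^k)$~:
its components are precisely the elements of the second line of the matrix
of $dg_{w^k}$.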
\subsection{Lifts of biholomorphisms to the cotangent bundle~: the
case of arbitrary dimension}
Our further considerations rely heavily on the estimates
of the Kobayashi-Royden infinitesimal pseudometric given in Section 3.
\vskip 0,1cm
Let $D$ (resp. $D'$) be a strictly pseudoconvex domain in an almost complex
manifold $(M,J)$ (resp. $(M',J')$) and let $f$ be a $(J,J')$-biholomorphism
from $D$ to $D'$. Fix a point $p \in \partial D$ and a sequence $(p^k)_k$
in $D$ converging to $p$. After extraction we may assume that the sequence
$(f(p^k))_k$ converges to a point $p'$ in $\partial D'$. According to the
Hopf lemma, $f$ satisfies the boundary distance property: there is a
positive constant $A$, independent of $k$, such that
\begin{equation}\label{bdp}
(1/A) \ dist(f(p^k), \partial D') \leq dist(p^k, \partial D) \leq
A \ dist(f(p^k), \partial D')
\end{equation}
(see~\cite{co-ga-su}).
Since all our considerations are local we set $p=p'=0 \in \mathbb C^n$.
We may assume that $J(0) = J_{st}$ and $J'(0) = J_{st}$.
Let $U$ (resp. $V$) be a neighborhood of the origin in $\mathbb C^n$ such that
$D \cap U = \{z \in U : \rho(z,\bar{z}) :=
z_n + \bar{z}_n + Re(K(z)) + H(z,\bar{z}) + \cdots < 0\}$
(resp. $D' \cap V = \{w \in V : \rho'(w,\bar{w}) :=
w_n + \bar{w}_n + Re(K'(w)) + H'(w,\bar{w}) + \cdots < 0\}$)
where
$K(z) = \sum k_{\nu\mu} z^{\nu}{z}^{\mu}$, $k_{\nu\mu} =
k_{\mu\nu}$,
$H(z) = \sum h_{\nu\mu} z^{\nu}\bar z^{\mu}$, $h_{\nu\mu} =
\bar h_{\mu\nu}$, and $\rho$ is a strictly $J$-plurisubharmonic function on
$U$ (resp. $K'(w) = \sum k'_{\nu\mu} w^{\nu}{w}^{\mu}$, $k'_{\nu\mu} =
k'_{\mu\nu}$,
$H'(w) = \sum h'_{\nu\mu} w^{\nu}\bar w^{\mu}$, $h'_{\nu\mu} =
\bar h'_{\mu\nu}$, and $\rho'$ is a strictly $J'$-plurisubharmonic function on
$V$).
\subsubsection{Asymptotic behaviour of the tangent map of $f$}
We wish to understand the limit behaviour (when $k \rightarrow \infty$) of
$df(p^k)$. Consider the vector fields
$$
v^j:=(\partial \rho/\partial x^n)\partial / \partial x^j
- (\partial \rho/\partial x^j)\partial / \partial x^n
$$
for $j=1,\dots,n-1$, and
$$
v^n:=(\partial \rho/\partial x^n)\partial /
\partial y^n - (\partial \rho/\partial y^n)\partial / \partial x^n.
$$
Restricting $U$ if necessary, the vector fields $X^1,\dots,X^{n-1}$
defined by $X^j:=v^j-iJ(v^j)$ form a basis
of the $J$-holomorphic tangent space to $\{\rho = \rho(z)\}$
at any $z \in U$. Moreover, if $X^n:=v^n-iJ(v^n)$ then the family
$X:=(X^1,\dots,X^n)$ forms a basis of $(1,0)$ vector fields on $U$.
Similarly we define a basis
$X':=(X'^1,\dots,X'^n)$ of $(1,0)$ vector fields on $V$ such that
$(X'^1(w),\dots,X'^{n-1}(w))$ defines a basis
of the $J'$-holomorphic tangent space to $\{\rho' = \rho'(w)\}$
at any $w \in V$.
We denote by $A(p^k):=(A(p^k)_{j,l})_{1 \leq j,l \leq n}$ the matrix of the
map $df(p^k)$ in the basis $X(p^k)$ and $X(f(p^k))$.
\begin{remark}\label{precise}
For the sake of completeness we should write $X_0$ and $X'_0$ to emphasize that
the structure was normalized by the condition $J(0) = J_{st}$, and write
$A(0,p^k)$ for $A(p^k)$. The same construction is valid for any
boundary point of $D$.
The corresponding notations will be used in Proposition~\ref{reality}.
\end{remark}
\begin{proposition}\label{tangent}
The matrix $A(p^k)$ satisfies the following estimates~:
$$
A(p^k)=\left(
\begin{array}{ccc}
O_{n-1,n-1}(1) & & O_{n-1,1}(dist(p^k,\partial D)^{-1/2})\\
& & \\
O_{1,n-1}(dist(p^k,\partial D)^{1/2}) & & O_{1,1}(1)
\end{array}
\right).
$$
\end{proposition}
The matrix notation means that the following estimates are satisfied~:
$A(p^k)_{j,l} = O(1)$ for $1 \leq j,l \leq n-1$,
$A(p^k)_{j,n} = O(dist(p^k,\partial D)^{-1/2})$ for $1 \leq j \leq n-1$,
$A(p^k)_{n,l} = O(dist(p^k,\partial D)^{1/2})$ for $1 \leq l \leq n-1$ and
$A(p^k)_{n,n} = O(1)$.
\vskip 0,1cm
The proof of Proposition~\ref{tangent} was given previously
in dimension two (Proposition~\ref{matrix}) but is valid without any
modification in any dimension. We note
that the asymptotic behaviour of $A(p^k)$ depends only on the distance
from the point to $\partial D$, not on the choice of the sequence
$(p^k)_k$.
\subsubsection{Scaling process and model domains}
The following construction is similar to the two dimensional case.
For every $k$ denote by $q^k$ the projection of $p^k$ to $\partial D$ and
consider the change of variables $\alpha^k$ defined by
$$
\left\{
\begin{array}{ccccc}
(z^j)^* & = & \displaystyle \frac{\partial \rho}{\partial \bar z^n}(q^k)
(z^j - (q^k)^j)
- \displaystyle \frac{\partial \rho}{\partial \bar z^j}(q^k)(z^n - (q^k)^n),
& &
{\rm for} \ 1 \leq j \leq n-1,\\
(z^n)^* & = & \sum_{j=1}^n \displaystyle \frac{\partial \rho}{\partial z^j}
(q^k)(z^j - (q^k)^j).
\end{array}
\right.
$$
If $\delta_k := dist(p^k,\partial D)$ then $\alpha^k(p^k) = (0,-\delta_k)$
and $\alpha^k(D)=\{2Re z^n + O(\vert z
\vert^2) < 0\}$ near the origin. Moreover, the sequence $(\alpha^k)_*(J)$
converges to $J$ as $k \rightarrow \infty$, since the sequence
$(\alpha^k)_k$ converges
to the identity map. Let $(L^k)_k$ be a sequence of linear automorphisms of
$\mathbb R^{2n}$
such that $(T^k: = L^k
\circ \alpha^k)_k$ converges to the
identity, and $D^k:= T^k(D)$ is defined near the origin by
$D^k=\{\rho_k(z) := 2Re z^n + O(\vert z \vert^2) < 0\}$.
The sequence of almost complex structures
$(J_k:= (T^k)_*(J))_k$ converges to $J$ as $k \rightarrow \infty$
and $J_k(0) = J_{st}$.
Furthermore $\tilde p_k := T^k(p^k)$ satisfies
$\tilde p_k = (o(\delta_k),-\delta_k'' + io(\delta_k))$
with
$\delta_k''\sim \delta_k$.
We proceed similarly on $D'$.
Denote by $s^k$ the projection of $f(p^k)$ onto $\partial D'$ and
define the transformation $\beta^k$ by
$$
\left\{
\begin{array}{ccccc}
(w^j)^* & = & \displaystyle \frac{\partial \rho'}{\partial \bar w^n}(s^k)
(w^j - (s^k)^j)
- \displaystyle \frac{\partial \rho'}{\partial \bar w^j}(s^k)(w^n - (s^k)^n),
& & {\rm for} \ 1 \leq j \leq n-1,\\
(w^n)^* & = & \sum_{j=1}^n \displaystyle \frac{\partial \rho'}{\partial w^j}
(s^k)(w^j - (s^k)^j).
\end{array}
\right.
$$
We define a sequence $(T'^k)_k$ of
linear transformations converging to the identity and satisfying the
following properties. The domain
$(D^k)':= T'^k(D')$ is defined near the origin by
$(D^k)'=\{\rho_k'(w) := 2Re w^n + O(\vert w \vert^2) < 0\}$,
and $\tilde f(p_k) = T'^k(f(p_k)) =
(o(\varepsilon_k),-\varepsilon_k''+ io(\varepsilon_k))$
with $\varepsilon_k'' \sim \varepsilon_k$, where
$\varepsilon_k = dist(f(p_k),\partial D')$.
The sequence of almost complex structures $(J_k':= (T'^k)_*(J'))_k$
converges to $J'$ as $k \rightarrow \infty$ and $J_k'(0) = J_{st}$.
Finally, the map $ f^k:= T'^k \circ f \circ (T^k)^{-1}$ satisfies
$f^k(\tilde p_k) = \tilde f(p_k)$ and is
a $(J_k,J'_k)$-biholomorphism between the domains $D^k$ and $(D')^k$.
Let $\phi_k : ('z,z^n) \mapsto (\delta_k^{1/2} {'z},\delta_kz^n)$ and
$\psi_k('w,w^n)=
(\varepsilon_k^{1/2} \ 'w,\varepsilon_kw^n)$ and set $\hat f^k =
(\psi_k)^{-1} \circ f^k \circ \phi_k$.
The map $\hat f^k$ is $(\hat J_k,\hat J'_k)$-biholomorphic, where
$\hat J_k:=((\phi_k)^{-1})_*(J_k)$ and
$\hat J'_k:= (\psi_k^{-1})_*(J'_k)$.
If $\hat D^k:=\phi_k^{-1}(D^k)$ and
$(\hat{D'})^k:=\psi_k^{-1}((D')^k)$ then
$\hat D^k = \{ z \in \phi_k^{-1}(U): \hat \rho_k(z) < 0\}$
where
$$
\begin{array}{lll}
\hat \rho_k(z) & : = & \delta_k^{-1}\rho(\phi_k(z))\\
& = & 2Re z^n + \delta_k^{-1}[2
Re K(\delta_k^{1/2}{'z},\delta_kz^n) + H(\delta_k^{1/2}{'z},\delta_kz^n)
+ o(\vert (\delta_k^{1/2}{'z},\delta_kz^n) \vert^2)].
\end{array}
$$
and $(\hat D')^k=\{w \in \psi_k^{-1}(V): \hat \rho'_k(w) < 0\}$
where
$$
\begin{array}{lll}
\hat \rho'_k(w) & : = &\varepsilon_k^{-1}\rho'(\psi_k(w))\\
& = & 2Re w^n +
\varepsilon_k^{-1}[2 Re K'(\varepsilon_k^{1/2}\ 'w,\varepsilon_kw^n) +
H'(\varepsilon_k^{1/2}\ 'w,\varepsilon_kw^n)
+ o(\vert (\varepsilon_k^{1/2}\ 'w,\varepsilon_kw^n) \vert^2)].
\end{array}
$$
Since $U$
is a neighborhood of the origin, the pullbacks $\phi_k^{-1}(U)$
converge to $\mathbb C^n$ and the functions $\hat\rho_k$ converge
to $\hat \rho(z) = 2Re z^n + 2Re K({'z},0) + H({'z},0)$ in the $\mathcal C^2$ norm
on compact subsets of $\mathbb C^n$. Similarly, since $V$
is a neighborhood of the origin, the pullbacks $\psi_k^{-1}(V)$
converge to $\mathbb C^n$ and the functions $\hat\rho_k'$ converge
to $\hat \rho'(w) = 2Re w^n + 2Re K'({'w},0) + H'({'w},0)$ in the $\mathcal C^2$ norm
on compact subsets of $\mathbb C^n$. If $\Sigma :=
\{z \in \mathbb C^n: \hat \rho(z) < 0 \}$ and $\Sigma' := \{w \in \mathbb C^n:
\hat \rho'(w) < 0 \}$ the sequence of points $\hat p_k =
\phi_k^{-1}(\tilde p_k) \in \hat D^k$ converges to the point $(0,-1) \in
\Sigma$ and the sequence of points $\hat f(p_k) =
\psi^{-1}_k(\tilde f(p_k)) \in \hat{D'}^k$ converges to $(0,-1) \in
\Sigma'$. Finally $\hat{f}^k(\hat p_k) = \hat f(p_k)$.
\noindent The limit behaviour of the dilated objects is given by the following
proposition (see Figure~6).
\begin{proposition}\label{convseq}
$(i)$ The sequences $(\hat J_k)$ and $(\hat J'_k)$ of almost complex
structures converge to model structures $J_0$ and $J'_0$
uniformly (with all partial derivatives of any order) on compact subsets of
$\mathbb C^n$.
\vskip 0,1cm
$(ii)$ $(\Sigma,J_0)$ and $(\Sigma',J'_0)$ are model domains.
\vskip 0,1cm
$(iii)$ The sequence $(\hat f^k)$ (together with all derivatives) is a
relatively compact family (with respect to the compact open topology) on
$\Sigma$; every cluster point $\hat f$ is
a $(J_0,J'_0)$-biholomorphism between $\Sigma$
and $\Sigma'$, satisfying $\hat f(0,-1) = (0,-1)$ and
$\hat f^n('0,z^n) = z^n$ on $\Sigma$.
\end{proposition}
\medskip
\begin{center}
\input{figure3.pstex_t}
\end{center}
\medskip
\centerline{Figure 6}
\bigskip
\noindent{\it Proof of Proposition~\ref{convseq}.}
We start with the proof of $(i)$. We focus on structures $\hat{J}_k$.
Consider $J=J_{st} + L(z) +
O(|z|^2)$ as a matrix-valued function, where the entries of $L(z)$ are
real linear in $z$.
The Taylor expansion of $J_k$ at
the origin is given by $J_k = J_{st} + L^k(z) + O(|z|^2)$
on $U$, uniformly with respect to $k$. Here $L^k(z)$ has real linear
entries and converges to $L$ as $k$ tends to infinity.
Write $\hat{J}_k = J_{st} + \hat{L}^k + O(\delta_k)$.
If $L^k=(L^k_{j,l})_{j,l}$ then
$\hat{L}^k_{j,l}= L^k_{j,l}(\phi_k(z))$ for $1 \leq j \leq n-1,\ 1 \leq
l \leq n$, $\hat{L}^k_{n,l}=\delta_k^{-1/2}L^k_{n,l}(\phi_k(z))$
for $1 \leq l \leq n-1$ and $\hat{L}^k_{n,n}=L^k_{n,n}(\phi_k(z))$.
This gives the conclusion.
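\vskip 0,1cm
For the reader's convenience we sketch the blockwise effect of this
conjugation, assuming the real Jacobian of $\phi_k$ is diagonal, equal to
$\delta_k^{1/2}$ on the first $2(n-1)$ real coordinates and to $\delta_k$
on the last two~:
$$
\left(d\phi_k^{-1} \circ L^k(\phi_k(z)) \circ d\phi_k\right)_{j,l} =
\left\{
\begin{array}{ll}
L^k_{j,l}(\phi_k(z)) = O(\delta_k^{1/2}), & j,l \leq n-1,\\
\delta_k^{1/2}L^k_{j,n}(\phi_k(z)) = O(\delta_k), & j \leq n-1,\ l = n,\\
\delta_k^{-1/2}L^k_{n,l}(\phi_k(z)) = O(1), & j = n,\ l \leq n-1,\\
L^k_{n,n}(\phi_k(z)) = O(\delta_k^{1/2}), & j = l = n,
\end{array}
\right.
$$
since $L^k$ is linear and $\phi_k(z) = O(\delta_k^{1/2})$ on compact
subsets; only the entries of the last line may survive in the limit, which
is consistent with the limit structure being a model structure.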
\vskip 0,1cm
Proof of $(ii)$. We focus on $(\Sigma,J_0)$.
By the invariance of the Levi form we
have ${\mathcal L}^{J_k}(\rho_k)(0)(\phi_k(v)) = {\mathcal
L}^{\hat J_k}(\rho_k \circ \phi_k)(0)(v)$.
Write $J_0 = J_{st} + L^\infty$.
Since $\rho_k$ is strictly $J_k$-plurisubharmonic uniformly with respect to
$k$ ($\rho_k$ converges to $\rho$ and $J_k$ converges to $J$),
multiplying by $\delta_k^{-1}$ and
passing to the limit at the right side as $k \rightarrow \infty$,
we obtain that
${\mathcal L}^{J_0}(\hat \rho)(0)(v) \geq 0$ for any $v$. Now let $v =
(v',0) \in T_0(\partial \Sigma)$. Then
$\phi_k(v) = \delta_k^{1/2}v$ and so
${\mathcal L}^{J_k}(\rho_k)(0)(v) = {\mathcal
L}^{\hat J_k}(\hat \rho_k)(0)(v)$. Passing to the limit as $k$
tends to infinity, we obtain that
${\mathcal L}^{J_0}(\hat \rho)(0)(v) > 0$
for any $v = (v',0)$ with $v' \neq 0$.
\vskip 0,1cm
Proof of $(iii)$. The proof of the existence
and of the biholomorphicity of $\hat{f}$ is the same as in dimension two.
We prove the identity on $\hat f^n$.
Let $t$ be a real positive number. Then we have~:
\begin{lemma}\label{infty}
$\lim_{t \rightarrow \infty} \vert \hat{\rho}'(\hat{f}('0,-t)) \vert = \infty$.
\end{lemma}
\noindent{\it Proof of Lemma~\ref{infty}}.
According to the boundary distance property~(\ref{bdp}) we have
$$
|\rho'(f \circ (T^k)^{-1} \circ \phi_k)('0,-t)| \geq C \
dist((T^k)^{-1}('0,-\delta_k t),\partial D).
$$
Then
$$
|\hat{\rho}'_k(\hat{f}^k('0,-t))| \geq C \varepsilon_k^{-1}\delta_k \ t.
$$
Since $\hat{\rho}'_k$ converges to $\hat{\rho}'$ uniformly on
compact subsets of $\Sigma'$ and $\varepsilon_k \simeq \delta_k$ (by
the boundary distance property~(\ref{bdp})) we obtain~:
$$
|\hat{\rho}'(\hat{f}('0,-t))| \geq Ct.
$$
This proves Lemma~\ref{infty}. \qed
\vskip 0,1cm
We turn back to the proof of part $(iii)$ of Proposition~\ref{convseq}.
Assume first that $J$ (and similarly $J'$) is not integrable
(see Proposition~\ref{prop-hyp}). Consider a $J$-complex hypersurface
$A \times \mathbb C$ in $\mathbb C^n$ where $A$ is a $J_{st}$ complex hypersurface in
$\mathbb C^{n-1}$.
Since $f((A \times \mathbb C) \cap \mathbb H_{P_1}) = (A' \times \mathbb C) \cap
\mathbb H_{P_2}$ where $A'$ is a $J_{st}$ complex hypersurface in
$\mathbb C^{n-1}$, it follows that the restriction of $\hat f^n$ to $\{'z=\ '0,
Re(z^n) < 0\}$ is a $J_{st}$ automorphism of $\{'z={'0}, Re(z^n) < 0\}$.
Let $\phi : \zeta \mapsto (\zeta -1)/(\zeta + 1)$. The function
$\hat{g}:=\phi^{-1} \circ \hat{f}^n \circ \phi$ is a $J_{st}$ automorphism
of the unit disc in $\mathbb C$. In view of Lemma~\ref{infty}, the map
$\hat{g}$ satisfies $\hat{g}(0) = 0$ and $\hat{g}(1) = 1$. Hence $\hat{g} \equiv id$ and
$\hat{f}^n('0,z^n) = z^n$ on $\Sigma$.
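\vskip 0,1cm
The last implication is the standard Schwarz lemma argument, which we
record for completeness~: a holomorphic automorphism of $\Delta$ fixing
the origin is a rotation,
$$
\hat g(0) = 0 \ \Longrightarrow \ \hat g(\zeta) = e^{i\theta}\zeta \
\mbox{ for some } \theta \in \mathbb R,
$$
and a rotation fixing the boundary point $1$ satisfies $e^{i\theta} = 1$,
that is, $\hat g = id$.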
Assume now that $J$ and $J'$ are integrable.
Let $F$ (resp.
$F'$) be the diffeomorphism from $\Sigma$ to $\mathbb H_{P}$
(resp. from $\Sigma$ to $\mathbb H_{P'}$) given in the proof of
Proposition~\ref{prop-hyp}. The diffeomorphism $g:=F' \circ f \circ
F^{-1}$ is a $J_{st}$-biholomorphism from $\mathbb H_{P}$ to $\mathbb
H_{P'}$ satisfying $g('0,-1) = ('0,-1)$. Since $(\Sigma,J)$ and
$(\Sigma', J')$ are model domains, the domains $\mathbb H_{P}$ and
$\mathbb H_{P'}$ are strictly $J_{st}$-pseudoconvex. In particular, since
$P$ and $P'$ are homogeneous of degree two, there are linear complex maps
$L,\ L'$ in $\mathbb C^{n-1}$ such that the map $G$ (resp. $G'$) defined by
$G('z,z_n)=(L('z),z_n)$ (resp. $G'('z,z_n)=(L'('z),z_n)$) is a
biholomorphism from $\mathbb H_{P}$ (resp. $\mathbb H_{P'}$) to
$\mathbb H$. The map $G' \circ g \circ G^{-1}$ is an automorphism of
$\mathbb H$ satisfying $G' \circ g \circ G^{-1}('0,-1) = ('0,-1)$.
Let $\Phi$ be the $J_{st}$-biholomorphism from $\mathbb H$ to
the unit ball $\mathbb B_n$ of $\mathbb C^n$ defined by
$\Phi('z,z^n) = (\sqrt{2}\,'z/(1-z^n),(1+z^n)/(1-z^n))$.
Let $\hat{g} :=\Phi^{-1} \circ g
\circ \Phi$. In view of the boundary distance property~(\ref{bdp}), the map
$\hat g$ satisfies
$\hat{g}(0) = 0$ and $\hat{g}('0,1)=('0,1)$. Hence $\hat{g}^n \equiv id$
and $\hat{f}^n('z,z^n) = z^n$ for every $z$ in $\Sigma$. \qed
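\vskip 0,1cm
We note that $\Phi$ indeed maps $\mathbb H$ onto $\mathbb B_n$. Assuming
the normalization $\mathbb H = \{('z,z^n) \in \mathbb C^n : 2Re z^n +
\vert 'z \vert^2 < 0\}$, we have
$$
\Vert \Phi('z,z^n) \Vert^2 =
\frac{2\vert 'z \vert^2 + \vert 1+z^n \vert^2}{\vert 1-z^n \vert^2} < 1
\ \Longleftrightarrow \ 2\vert 'z \vert^2 + 4 Re z^n < 0,
$$
since $\vert 1+z^n \vert^2 - \vert 1-z^n \vert^2 = 4 Re z^n$. Moreover
$\Phi('0,-1) = ('0,0)$, which explains the normalization $\hat{g}(0) = 0$.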
\vskip 0,1cm
According to part $(ii)$ of Proposition~\ref{convseq}
and restricting $U$ if necessary, one may view
$D \cap U$ as a strictly $J_0$-pseudoconvex domain
in $\mathbb C^n$ and $J$ as a small deformation of ${J}_0$ in a
neighborhood of $\bar{D} \cap U$. The same holds for $D' \cap V$.
\vskip 0,1cm
For $p \in \partial D$ and $z \in D$ let $X_p(z)$ and $X'_{f(p)}(f(z))$
be the basis of $(1,0)$ vector fields defined above.
The elements of the matrix of $df_z$ in
the bases $X_p(z)$ and $X'_{f(p)}(f(z))$ are denoted by
$A_{js}(p,z)$. According to Proposition~\ref{tangent} the function
$A_{n,n}(p,\cdot)$ is bounded from above on $D$.
\begin{proposition}
\label{reality}
We have:
\begin{itemize}
\item[(a)] Every cluster point of the function $z \mapsto A_{n,n}(p,z)$
is real when $z$ tends to $p \in \partial D$.
\item[(b)] For $z \in D$, let $p \in \partial D$ such that
$|z-p| = dist(z,\partial D)$. There exists a constant $A$, independent of
$z \in D$, such that $\vert A_{n,n}(p,z) \vert \geq A$.
\end{itemize}
\end{proposition}
\vskip 0,1cm
\noindent{\it Proof of Proposition~\ref{reality}}.
(a) Suppose that there exists a sequence of points $(p^k)$ converging
to a boundary point $p$ such that $A_{n,n}(p,p^k)$ tends to a complex number
$a$. Applying the above scaling construction,
we obtain a sequence of maps $(\hat f^k)_k$.
For $k \geq 0$ consider the dilated vector fields
$$
Y^j_k:=\delta_k^{1/2}((\phi_k^{-1}) \circ T^k)(X^j(p^k))
$$
for $j=1,\dots,n-1$, and
$$
Y^n_k:=\delta_k((\phi_k^{-1})\circ T^k)(X^n(p^k)).
$$
Similarly we define
$$
Y'^j_k:=\varepsilon_k^{1/2}((\psi_k^{-1}) \circ T'^k)
(X'^j(f(p^k)))
$$
for $j=1,\dots,n-1$, and
$$
Y'^n_k:=\varepsilon_k((\psi_k^{-1})\circ T'^k)(X'^n(f(p^k))).
$$
For every $k$, the $n$-tuple
$Y^k:= (Y^1_k,\dots,Y^n_k)$ is a basis of $(1,0)$ vector fields for
the dilated structure $\hat{J}^k$. In view of Proposition~\ref{convseq}
the sequence $(Y^k)_k$
converges to a basis
of $(1,0)$ vector fields of $\mathbb C^n$ (with respect to $J_0$) as $k$
tends to $\infty$. Similarly, the $n$-tuple
$Y'^k := (Y'^1_k,\dots,Y'^n_k)$ is a basis of $(1,0)$ vector fields for
the dilated structure $\hat{J}'^k$ and $(Y'^k)_k$
converges to a basis of $(1,0)$ vector fields
of $\mathbb C^n$ (with respect to $J'_0$) as $k$ tends to $\infty$.
In particular the last components $Y^n_k$ and $Y'^n_k$
converge to the $(1,0)$ vector field $\partial / \partial z^n$.
Denote by $\hat A^k_{js}$ the elements of the
matrix of $d\hat f^k(\hat p_k)$. Then $\hat A^k_{n,n}$ converges to $(\partial
\hat f^n/\partial z^n)(0,-1) = 1$, according to Proposition~\ref{convseq}.
On the other hand, $\hat A^k_{n,n} = \varepsilon_k^{-1}\delta_k A_{n,n}(p,p^k)$,
where, by the boundary distance property~(\ref{bdp}), the positive quotients
$\varepsilon_k^{-1}\delta_k$ are bounded between two positive constants;
after extraction they converge to some $c > 0$, so that $a = 1/c$ is real.
This gives the statement.
(b) Suppose that there is a sequence of points $(p^k)$ converging
to the boundary such that $A_{n,n}$ tends to $0$. Repeating precisely
the argument of (a), we obtain that $(\partial \hat f^n/\partial
z^n)(0,-1) = 0$; this contradicts part $(iii)$ of Proposition~\ref{convseq}.
\qed
\vskip 0,1cm
\begin{proposition}\label{PPP}
The cluster set of the cotangent lift $f^*$ on
$\Sigma(\partial D)$ is contained in $\Sigma(\partial D')$.
\end{proposition}
\vskip 0,1cm
\vskip 0,1cm
\noindent{\it Proof of Proposition~\ref{PPP}}.
{\it Step one.}
We first reduce the problem to the following
local situation. Let $D$ and $D'$ be domains in $\mathbb C^n$, $\Gamma$ and
$\Gamma'$ be open $\mathcal C^{\infty}$-smooth pieces of their boundaries,
containing the origin. We assume that an almost complex structure $J$
is defined and $\mathcal C^{\infty}$-smooth in a neighborhood of the closure
$\bar D$, $J(0) = J_{st}$.
Similarly, we assume that $J'(0) = J_{st}$. The hypersurface
$\Gamma$ (resp. $\Gamma'$) is supposed to be strictly $J$-pseudoconvex
(resp. strictly $J'$-pseudoconvex). Finally, we assume that $f: D
\rightarrow D'$ is a $(J,J')$-biholomorphic map. It follows from the estimates
of the Kobayashi-Royden infinitesimal pseudometric given in \cite{ga-su}
that $f$ extends as a $1/2$-H{\"o}lder
homeomorphism between $D \cup \Gamma$ and $D' \cup \Gamma'$, such that
$f(\Gamma) = \Gamma'$ and $f(0) = 0$. Finally
$\Gamma$ is defined in a neighborhood of the origin
by the equation $\rho(z) = 0$ where $\rho(z) = 2Re
z^n + 2Re K(z) + H(z) + o(\vert z \vert^2)$, $K(z) = \sum
k_{\mu\nu}z^{\mu}z^{\nu}$, $H(z) = \sum h_{\mu\nu}z^{\mu}\bar
z^{\nu}$, $k_{\mu\nu} = k_{\nu\mu}$, $h_{\mu\nu} = \bar
h_{\nu\mu}$. As we noticed at the end of Section~3, the hypersurface
$\Gamma$ is strictly $\hat{J}$-pseudoconvex at the origin. The hypersurface
$\Gamma'$ admits a similar local representation. In what follows we
assume that we are in this setting.
Let $\Sigma := \{ z \in \mathbb C^n: 2Re z^n + 2Re K('z,0) + H('z,0) < 0\}$,
$\Sigma' := \{ z \in \mathbb C^n: 2Re z^n + 2Re K'('z,0) + H'('z,0) < 0\}$.
If $(p^k)$ is a sequence of points in $D$ converging to $0$, then according
to Proposition~\ref{convseq}, the
scaling procedure associates with the pair $(f,(p^k)_k)$ two linear almost
complex structures ${J}_0$ and ${J}'_0$, both defined on $\mathbb C^n$,
and a $(J_0,{J}'_0)$-biholomorphism $\hat{f}$ between $\Sigma$ and
$\Sigma'$. Moreover $(\Sigma,J_0)$ and $(\Sigma',J'_0)$ are model
domains. To prove that the cluster set of the cotangent lift of $f$ at a point
in $N(\Gamma)$ is contained in $N(\Gamma')$, it is sufficient to prove that
$(\partial \hat{f}^n / \partial z^n)('0,-1) \in \mathbb R \backslash \{0\}$.
\vskip 0,1cm
\noindent{\it Step two.} The proof of Proposition~\ref{PPP} is given
by the following Lemma.
\begin{lemma}
\label{cluster}
Let $K$ be a compact subset of the totally real part of the conormal
bundle $\Sigma_J(\partial D)$. Then the cluster set of the cotangent lift
$f^*$ of $f$ on the conormal bundle
$\Sigma(\partial D)$, when $(z,L)$ tends to $\Sigma_J(\partial D)$
along the wedge $W_U$, is relatively compactly contained
in the totally real part of $\Sigma(\partial D')$.
\end{lemma}
We recall that the totally real part of $\Sigma(\partial D')$ is
the complement of the zero section in $\Sigma(\partial D')$.
\vskip 0,1cm
\noindent{\sl Proof of Lemma~\ref{cluster}}. Let $(z^k,L^k)$ be
a sequence in $W_U$ converging to $(0,\partial_J\rho(0)) =
(0,dz^n)$. We shall prove that the sequence of
linear forms $Q^k := {}^tdf^{-1}(w^k)L^k$, where $w^k = f(z^k)$, converges
to a linear form which up to a {\it real} factor (in view of Part (a)
of Proposition \ref{reality}) coincides with $\partial_{J} \rho(0)=
dz^n$ (we recall that ${}^t$ denotes the transposed map). It is
sufficient to prove that the first $(n-1)$ components of $Q^k$ with respect to
the dual basis $(\omega_1,\dots,\omega_n)$ of $X$ converge to $0$ and that the
last one is bounded away from the origin as $k$ goes to infinity.
The map $X$ being of class $\mathcal C^1$ we can replace $X(0)$ by $X(w^k)$.
Since $(z^k,L^k) \in W_U$, we have $L^k = \omega_n(z^k) +
O(\delta_k)$, where $\delta_k$ is the distance from $z^k$ to the
boundary. Since $\vert\vert\vert df^{-1}_{w^k} \vert\vert\vert =
O(\delta_k^{-1/2})$, we have $Q^k = {}^tdf^{-1}_{w^k}(\omega_n(z^k)) +
O(\delta_k^{1/2})$. By Proposition~\ref{tangent}, the components of
${}^tdf^{-1}_{w^k}(\omega_n(z^k))$ with respect to the basis
$(\omega_1(z^k),\dots,\omega_n(z^k))$ are the elements of the last line of
the matrix $df^{-1}_{w^k}$ with respect to the bases $X'(w^k)$ and
$X(z^k)$. So its first $(n-1)$ components are $O(\delta_k^{1/2})$ and
converge to $0$ as $k$ tends to infinity. Finally, the component $A_{n,n}^k$
is bounded away from the origin by Part (b) of
Proposition~\ref{reality}. \qed
\subsubsection{Compactness principle}
In this section we prove the following
\begin{theorem}\label{wr}
Let $(M,J)$ be an almost complex manifold, not equivalent to a model
domain. Let $D=\{r<0\}$ be a relatively compact domain in a smooth
manifold $N$ and let $(f^\nu)_\nu$ be a sequence of diffeomorphisms
from $M$ to $D$. Assume that
$(i)$ the sequence $(J_\nu:=f^\nu_*(J))_\nu$ extends smoothly up to
$\bar{D}$ and is compact in the $C^2$ convergence on $\bar{D}$,
$(ii)$ the Levi forms $\mathcal L^{J_\nu}(\partial D)$ of $\partial D$
are uniformly bounded from below (with respect to $\nu$) by a
positive constant.
Then the sequence $(f^\nu)_\nu$ is compact in the compact-open
topology on $M$.
\end{theorem}
We proceed by contradiction.
Assume that there are a compact subset $K_0$ of $M$, points $p^\nu \in K_0$
and a point
$q \in \partial D$ such that $\lim_{\nu \rightarrow \infty}f^\nu(p^\nu) = q$.
\begin{lemma}\label{met-kob}
For every relatively compact neighborhood $V$ of $q$ there is $\nu_0$
such that for $\nu \geq \nu_0$ we have~:
$\lim_{x \rightarrow q}\inf_{q' \in D \cap \partial V}d^K_{(D,J_\nu)}(x,q')=\infty$.
\end{lemma}
\noindent{\it Proof of Lemma~\ref{met-kob}}. Restricting $U$ if
necessary, we may assume that the function $\rho + C \rho ^2$ is a
strictly $J_\nu$-plurisubharmonic function in a neighborhood of
$\bar{D} \cap U$, for sufficiently large $\nu$.
Moreover, using Proposition~B, we can focus on $K_{D \cap U}$. Smoothing
$D \cap U$, we may assume that the hypotheses of Proposition~A are satisfied
on $D \cap U$, uniformly for sufficiently large $\nu$.
In particular, the inequality~(\ref{e3}) is satisfied on
$D \cap U$, with a positive constant $c$ independent of $\nu$.
The result follows by a direct integration of this inequality.
\qed
\vskip 0,1cm
The following Lemma is a corollary of Lemma~\ref{met-kob}.
\begin{lemma}\label{lem3.3.1}
For every $K \subset \subset M$ we have :
$\lim_{\nu \rightarrow \infty}f^\nu(K) =q$.
\end{lemma}
\noindent{\it Proof of Lemma \ref{lem3.3.1}}. Let $K \subset \subset M$
be such that $x^0 \in
K$. Since the function $x \mapsto d^K_{(M,J)}(x^0,x)$ is bounded from above by a
constant $C$ on $K$, it follows from the decreasing property of the Kobayashi
pseudodistance that
\begin{equation}\label{eq2}
d_{(D,J_\nu)}^K(f^\nu(x^0),f^\nu(x)) \leq C
\end{equation}
for every $\nu$ and every
$x \in K$. It follows from Lemma~\ref{met-kob} that for
every $V \subset \subset U$ containing $q$, we have :
\begin{equation}\label{eq3}
\lim_{\nu \rightarrow \infty}d_{(D,J_\nu)}^K
(f^\nu(x^0),D \cap \partial V) = +\infty.
\end{equation}
Then from conditions (\ref{eq2}) and (\ref{eq3}) we deduce that
$f^\nu(K) \subset V$ for every sufficiently large $\nu$.
This gives the statement. \qed
\vskip 0,1cm
Fix now a point $p \in M$ and denote by $p^\nu$ the point $f^\nu(p)$.
We may assume that the sequence $(J_\nu:=f^\nu_*(J))_\nu$ converges
to an almost complex structure $J'$ on $\bar{D}$ and according to
Lemma~\ref{lem3.3.1} we may assume that
$\lim_{\nu \rightarrow \infty}p^\nu = q$.
We apply Subsection~4.3 to the domain $D$ and the sequence $(p^\nu)_\nu$.
We denote by $T^\nu$ the linear transformation
$T^\nu:=M^\nu \circ L^\nu \circ \alpha^\nu$, as in Subsection~4.3, and
we consider $D^\nu:=T^\nu(D)$, and $J^\nu:=T^\nu_*(J_\nu)$.
If $\phi_\nu$ is the nonisotropic dilation $\phi_\nu:('z,z^n) \mapsto
(\delta_\nu^{1/2}\ 'z,\delta_\nu z^n)$ then we set
$\hat{f}^\nu:=\phi_\nu^{-1} \circ T^\nu \circ f^\nu$ and
$\hat{J}^\nu:=(\phi_\nu^{-1})_*(J^\nu)$. We also consider
$\hat{\rho}_\nu:=\delta_\nu^{-1}\ \rho \circ \phi_\nu$
and $\hat{D}^\nu:=\{\hat{\rho}_\nu < 0\}$.
As proved in Subsection~4.3, the sequence $(\hat{D}^\nu)_\nu$ converges,
in the local Hausdorff convergence, to a domain
$\Sigma:=\{z \in \mathbb C^n:\hat \rho(z) := 2Re z^n + 2Re K({'z},0) + H({'z},0)<0\}$,
where $K$ and $H$ are homogeneous of degree two.
According to Proposition~\ref{convseq} we have~:
$(i)$ The sequence $(\hat{J}^\nu)$ converges to a model almost complex
structure $J_0$, uniformly (with all partial derivatives of any order)
on compact subsets of $\mathbb C^n$,
$(ii)$ $(\Sigma,J_0)$ is a model domain,
$(iii)$ the sequence $(\hat{f}^\nu)_\nu$ converges to a $(J,J_0)$
holomorphic map $F$ from $M$ to $\Sigma$.
\vskip 0,1cm
To prove Theorem~\ref{wr}, it remains to prove that $F$ is a diffeomorphism
from $M$ to $\Sigma$.
We first notice that according to condition $(ii)$ of Theorem~\ref{wr}
and Lemma~\ref{met-kob}, the domain $D$ is
complete $J_\nu$-hyperbolic. In particular, since $f^\nu$ is a $(J,J_\nu)$
biholomorphism from $M$ to $D$, the manifold $M$ is complete $J$-hyperbolic.
Consequently, for every compact subset $L$ of $M$, there is a positive
constant $C$ such that for every $z \in L$ and every $v \in T_zM$ we have
$K_{(M,J)}(z,v) \geq C\|v\|$.
Consider the map $\hat{g}^\nu:=(\hat{f}^\nu)^{-1}$.
This is a $(\hat{J}^\nu,J)$ biholomorphism from $\hat{D}^\nu$ to $M$.
Let $K$ be a compact set in $\Sigma$. We may consider $\hat{g}^\nu(K)$
for sufficiently large $\nu$. By the decreasing property of the Kobayashi
distance, there is a compact subset $L$ in $M$ such that
$\hat{g}^\nu(K) \subset L$ for sufficiently large $\nu$. Then for every
$w \in K$ and for every $v \in T_w\Sigma$ we obtain, by the decreasing
property of the
Kobayashi-Royden infinitesimal pseudometric~:
$$
\|d\hat{g}^\nu(w)(v)\| \leq (1/C) \|v\|,
$$
uniformly for sufficiently large $\nu$.
According to the Ascoli theorem, we may extract from
$(\hat{g}^\nu)_\nu$ a subsequence converging to a map $G$ from
$\Sigma$ to $M$. Finally, passing to the limit in the equalities
$\hat{g}^\nu \circ \hat{f}^\nu = id$ on compact subsets of $M$ and
$\hat{f}^\nu \circ \hat{g}^\nu = id$ on compact subsets of $\Sigma$,
we obtain $G \circ F = id$ and $F \circ G = id$.
This gives the result. \qed
\vskip 0,1cm
As a corollary of Theorem~\ref{wr} we obtain the following almost complex
version of the Wong-Rosay Theorem in real dimension four~:
\begin{corollary}\label{wr-2}
Let $(M,J)$ (resp. $(M',J')$) be an almost complex manifold of real dimension
four. Let $D$ (resp. $D'$) be a relatively compact domain in $M$ (resp. $M'$).
Consider a sequence $(f^\nu)_\nu$ of diffeomorphisms from $D$ to
$D'$ such that the sequence $(J_\nu:=f^\nu_*(J))_\nu$ extends to $\bar{D}'$
and converges to $J'$ in the $C^2$ convergence on $\bar{D}'$.
Assume that there is a point $p\in D$ and a point $q \in \partial D'$ such
that $\lim_{\nu \rightarrow \infty}f^\nu(p) = q$ and such that
$D'$ is strictly $J'$-pseudoconvex at $q$.
Then there is a $(J,J_{st})$-biholomorphism from $D$ to the unit ball $\mathbb B^2$
in $\mathbb C^2$.
\end{corollary}
\noindent{\it Proof of Corollary~\ref{wr-2}}.
The proof of Corollary~\ref{wr-2} follows exactly the same lines as
the proof of Theorem~\ref{wr}. \qed
\section{Elliptic regularity on almost complex manifolds with boundary}
This section is devoted to one of the main technical steps of our
construction. We prove that a pseudoholomorphic disc attached (in the
sense of the cluster set) to a smooth totally real submanifold in an almost
complex manifold, extends smoothly up to the boundary. In the case of
the integrable structure, various versions of this statement have been
obtained by several authors. In the almost complex case, similar
assertions have been established by H. Hofer \cite{ho}, J.-C. Sikorav
\cite{si}, S. Ivashkovich and V. Shevchishin \cite{iv-sh}, E. Chirka
\cite{ch1}, and D. McDuff and D. Salamon~\cite{mc-sa}
under stronger assumptions on the initial boundary regularity of the disc
(at least the continuity is required). Our proof consists of two
steps. First, we show that a disc extends as a $1/2$-H{\"o}lder
continuous map up to the boundary. The proof is based on special
estimates of the Kobayashi-Royden metric in ``Grauert tube'' type
domains. The second step is the reflection principle adapted to the
almost complex category; here we follow the construction of E.Chirka
\cite{ch1}.
\subsection{Reflection principle and regularity of analytic discs}
We prove the following~:
\begin{theorem}\label{reflection}
Let $N$ be a $\mathcal C^\infty$-smooth totally real submanifold in $(M,J)$
and let $\varphi : \Delta^+ \rightarrow M$ be $J$-holomorphic, where
$\Delta^+:=\{\zeta \in \Delta : Im(\zeta) >0\}$.
Assume that the cluster set of $\varphi$ on the real interval $]-1,1[$ is
contained in $N$. Then $\varphi$ is of class $\mathcal C^\infty$ on
$\Delta^+ \cup ]-1,1[$.
\end{theorem}
In case $N$ has weaker regularity, the exact regularity of $\varphi$,
related to that of $N$, can be derived directly from the following proof of
Theorem~\ref{reflection}.
\vskip 0,1cm
\noindent{\it Proof of Theorem~\ref{reflection}.}
\noindent{\it Step one}. It follows by
Theorem~\ref{Regth1} that $\varphi$ extends as a $1/2$-H{\"o}lder continuous
map on $\Delta^+ \cup
]-1,1[$.
\vskip 0,1cm
\noindent{\it Step two : The disc $\varphi$ is of class $\mathcal C^{1+1/2}$.}
The following construction of the reflection principle
for pseudoholomorphic discs is due to Chirka \cite{ch1}. For the reader's
convenience we give the details.
Let $a\in ]-1,1[$. Our consideration being local at $a$, we may assume that
$N=\mathbb R^n \subset \mathbb C^n$, $a=0$ and $J$ is a smooth almost complex structure
defined in the unit ball $\mathbb B_n$ in $\mathbb C^n$.
After a complex linear change of coordinates we may assume that
$J = J_{st} + O(\vert z \vert)$ and $N$ is given by $x + ih(x)$ where
$x \in \mathbb R^n$ and $dh(0) = 0$. If $\Phi$ is the local diffeomorphism
$x \mapsto x$, $y \mapsto y - h(x)$ then $\Phi(N) = \mathbb R^n$ and the direct
image of $J$ by $\Phi$, still denoted by $J$, keeps the form $J_{st} +
O(\vert z \vert)$. Then $J$ has a basis of $(1,0)$-forms given in the
coordinates $z$ by $dz^j + \sum_k a_{jk}d\bar z^k$; using the
matrix notation we write it in the form $\omega = dz + A(z)d\bar z$ where
the matrix function $A(z)$ vanishes at the origin. Writing
$\omega = (I + A)dx + i(I - A)dy$ where $I$ denotes the identity
matrix, we can take as a basis of $(1,0)$ forms~: $\omega' = dx +
i(I +A)^{-1}(I - A)dy = dx + iBdy$. Here the matrix function $B$ satisfies
$B(0) = I$. Since $B$ is smooth, its restriction $B_{\vert \mathbb R^n}$ on $\mathbb R^n$
admits a smooth extension $\hat B$ on the unit ball such that
$\hat B - B_{\vert \mathbb R^n} = O(\vert y \vert^k)$ for any positive integer $k$.
Consider the diffeomorphism $z^* = x + i\hat B(z) y$.
In the $z^*$-coordinates the submanifold $N$ still coincides with $\mathbb R^n$
and $\omega' = dx + iBdy = dz^* + i(B - \hat B)dy - i(d\hat B)y = dz^* +
\alpha$, where the coefficients of the form $\alpha$ vanish up to
the first order on $\mathbb R^n$. Therefore there is a basis of $(1,0)$-forms
(with respect to the image of $J$ under the coordinate diffeomorphism
$z \mapsto z^*$) of the form $dz^* + A(z^*)d\bar z^*$,
where $A$ vanishes to first order on $\mathbb R^n$ and
$\| A \|_{\mathcal C^1(\bar{\mathbb B}_n)} \ll 1$.
Consider the continuous map $\psi$ defined on $\Delta$ by
$$
\left\{
\begin{array}{cccc}
\psi &=& \varphi &{\rm on}\ \Delta^+\\
& & & \\
\psi(\zeta) &=&\overline{\varphi(\bar{\zeta})} &{\rm for}\ \zeta \in
\Delta^- :=\{\zeta \in \Delta : Im(\zeta) < 0\}.
\end{array}
\right.
$$
Since the map $\varphi$ satisfies
\begin{equation}\label{holo}
\bar \partial \varphi + A(\varphi)\overline{\partial \varphi} = 0
\end{equation}
on $\Delta^+$, the map $\psi$ satisfies the equation
$$
\bar\partial\psi(\zeta) +
\overline{A(\varphi(\bar\zeta))}\
\overline{\partial\psi(\zeta)} = 0
$$
for $\zeta \in \Delta^-$.
Hence $\psi$ is a solution on $\Delta$ of the elliptic equation
\begin{equation}\label{elliptic}
\bar \partial \psi + \lambda(\cdot)\overline{\partial \psi} = 0
\end{equation}
where $\lambda$ is defined by $\lambda(\zeta) =
A(\varphi(\zeta))$ for
$\zeta \in \Delta^+ \cup ]-1,1[$ and $\lambda(\zeta) =
\overline{A(\varphi(\bar\zeta))}$ for
$\zeta \in \Delta^-$.
According to Step
one, the map $\lambda$ is H{\"o}lder $1/2$ continuous on $\Delta$
and vanishes on $]-1,1[$.
This implies that $\psi$ is of class $\mathcal C^{1+1/2}$ on $\Delta$
by equation~(\ref{elliptic}) (see~\cite{si,ve}).
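As a point of comparison (a remark we add for orientation, not part of the original argument), the integrable case recovers the classical Schwarz reflection:

```latex
In the integrable case $J = J_{st}$ one may choose coordinates with
$A \equiv 0$, so that equation~(\ref{elliptic}) reduces to
$\bar\partial \psi = 0$: the extended map
$$
\psi(\zeta) =
\begin{cases}
\varphi(\zeta), & \zeta \in \Delta^+ \cup\, ]-1,1[,\\
\overline{\varphi(\bar\zeta)}, & \zeta \in \Delta^-,
\end{cases}
$$
is then holomorphic on all of $\Delta$, which is precisely the classical
Schwarz reflection principle. Step two thus measures how the perturbation
term $\lambda\,\overline{\partial \psi}$, vanishing on $]-1,1[$, limits
the regularity to $\mathcal C^{1+1/2}$ at this stage.
```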
\vskip 0,1cm
\noindent{\it Step three : Geometric bootstrap.} See Figure 7. Let $v=(1,0)$ in
$\mathbb R^2$ and
consider the disc $\varphi^c$ defined on $\Delta^+$ by
$$
\varphi^c(\zeta) = (\varphi(\zeta),d\varphi(\zeta)(v)).
$$
\vskip 0,1cm
The cluster set $C(\varphi^c,]-1,1[)$ is contained in the smooth
submanifold $TN$ of $TM$.
\begin{lemma}\label{tot-real}
If $N$ is a totally real submanifold in an almost complex manifold
$(M,J)$ then $TN$ is a totally real submanifold in $(TM,J^c)$.
\end{lemma}
\noindent{\it Proof of Lemma~\ref{tot-real}.}
Let $X \in T(TN) \cap J^c(T(TN))$. If $X=(u,v)$ in the trivialisation
$T(TM) = TM \oplus TM$ then $u \in TN \cap J(TN)$,
implying that $u=0$. Hence $v \in TN \cap J(TN)$,
implying that $v=0$. Finally, $X=0$. \qed
\vskip 0,1cm
Applying Step two to $\varphi^c$ and $TN$ we prove that the first derivative
of $\varphi$ with respect to $x$ ($x+iy$ are the standard coordinates on $\mathbb C$)
is of class $\mathcal C^{1+1/2}$ on $\Delta^+ \cup ]-1,1[$.
The $J$-holomorphicity equation~(\ref{holo}) may be written as
$$
\frac{\partial \varphi}{\partial y} = J(\varphi)
\frac{\partial \varphi}{\partial x}
$$
on $\Delta^+ \cup ]-1,1[$.
Hence $\partial \varphi/\partial y$
is of class $\mathcal C^{1+1/2}$ on $\Delta^+ \cup ]-1,1[$, meaning that
$\varphi$ is of class $\mathcal C^{2+1/2}$ on $\Delta^+ \cup ]-1,1[$.
We prove now that $\varphi$ is of class $\mathcal C^{3+1/2}$ on
$\Delta^+ \cup ]-1,1[$. The reader will conclude, repeating the same
argument that $\varphi$ is of class $\mathcal C^\infty$ on
$\Delta^+ \cup ]-1,1[$.
\bigskip
\begin{center}
\input{bootstrap.pstex_t}
\end{center}
\bigskip
\centerline{Figure 7}
\bigskip
Replace now the data $(M,J)$ and $\varphi$ by $(TM,J^c)$ and
$\varphi^c$ in Step three. The map $^2\varphi^c$ defined on $\Delta^+$ by
$^2\varphi^c(\zeta) = (\varphi^c(\zeta), d\varphi^c(\zeta)(v))$
is $^2J^c$-holomorphic on $\Delta^+$ ($^2J^c$ is the complete lift of $J^c$
to the second tangent bundle $T(TM)$). According to Step two, its
first derivative $\partial (^2\varphi^c)/\partial x$ is of class $\mathcal C^{1+1/2}$
on $\Delta^+ \cup ]-1,1[$. This means that the second derivatives
$\displaystyle \frac{\partial^2 \varphi}{\partial x^2}$ and
$\displaystyle \frac{\partial^2 \varphi}{\partial x \partial y}$
are $\mathcal C^{1+1/2}$ on $\Delta^+ \cup ]-1,1[$. Differentiating
equation~(\ref{holo}) with respect to $y$, we prove that
$\displaystyle \frac{\partial^2 \varphi}{\partial y^2}$ is $\mathcal C^{1+1/2}$ on
$\Delta^+ \cup ]-1,1[$ and so that $\varphi$ is $\mathcal C^{3+1/2}$ on
$\Delta^+ \cup ]-1,1[$. \qed
\subsection{Behavior of pseudoholomorphic maps near totally real submanifolds}
Let $\Omega$ be a domain in an almost complex manifold $(M,J)$ and $E
\subset \Omega$ be a smooth $n$-dimensional
totally real submanifold defined as the set of common zeros of the
functions $r_j$, $j=1,...,n$ smooth on $\Omega$. We suppose that
$\bar\partial_J r_1 \wedge ...\wedge \bar\partial_J r_n \neq
0$ on $\Omega$. Consider the ``wedge''
$W(\Omega,E)=\{ z \in \Omega: r_j(z) < 0, j= 1,...,n \}$ with ``edge''
$E$. For $\delta > 0$ we denote by $W_{\delta}(\Omega,E)$ the
``shrunken'' wedge
$\{ z \in \Omega : r_j(z) - \delta \sum_{k \neq j} r_k <
0, j = 1,..., n \}$.
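For orientation, we add the standard model example of this construction:

```latex
In the model situation $E = \mathbb R^n \subset \mathbb C^n$ one may take
$r_j(z) = y_j$, so that
$$
W(\Omega,E) = \{ z \in \Omega : y_1 < 0, \dots, y_n < 0 \}, \qquad
W_{\delta}(\Omega,E) = \{ z \in \Omega : y_j - \delta \sum_{k \neq j} y_k < 0,\
j = 1, \dots, n \},
$$
and, for $0 < \delta < 1$, the shrunken wedge $W_{\delta}(\Omega,E)$ is a
slightly smaller wedge contained in $W(\Omega,E)$ which still admits $E$ as
its edge.
```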
The main goal of this Section is to prove the following
\begin{proposition}
\label{Wedges}
Let $W(\Omega,E)$ be a wedge in $\Omega \subset (M,J)$ with a totally real
n-dimensional edge $E$ of class $\mathcal C^{\infty}$ and let $f:W(\Omega,E)
\rightarrow (M',J')$ be a $(J,J')$-holomorphic map. Suppose that the
cluster set $C(f,E)$ is (compactly) contained in a
$\mathcal C^\infty$ totally real submanifold $E'$ of $M'$.
Then for any $\delta > 0$ the map $f$ extends to
$W_{\delta}(\Omega,E) \cup E$ as a $\mathcal C^{\infty}$-map.
\end{proposition}
We previously established this statement for a single
$J$-holomorphic disc. The general case also
relies on the ellipticity of the
$\bar\partial$-operator. It requires an additional
technique of attaching pseudoholomorphic discs to a totally real
manifold which could be of independent interest.
Now we prove Proposition~\ref{Wedges}.
Let $(h_t)_t$ be the family of $J$-holomorphic discs, smoothly depending on
the parameter $t \in \mathbb R^{2n}$, defined in Lemma~\ref{lem-discs}.
It follows from Lemma~\ref{dlem3.2}, applied to the holomorphic disc $f \circ
h_t$, uniformly with respect to $t$, that there is a constant $C$
such that $\vert \vert \vert df(z) \vert \vert \vert \leq C dist(z,E)^{-1/2}$
for any $z \in W_{\delta}(\Omega,E)$.
This implies that $f$ extends as a
H{\"o}lder $1/2$-continuous map on $W_{\delta}(\Omega,E) \cup E$.
It follows now from Theorem~\ref{reflection} that every
composition $f \circ h_t$ is smooth up to $\partial \Delta^+$. Moreover,
the direct examination of our argument shows that the $\mathcal C^k$ norms of
the discs $f \circ h_t$ are uniformly
bounded, for any $k$. Recall the separate smoothness principle
(Proposition 3.1, \cite{tu}):
\begin{proposition}\label{separate}
Let $F_j$, $1 \leq j \leq n$, be $\mathcal C^{\alpha}$ ($\alpha > 1$
noninteger)
smooth foliations in a domain $\Omega \subset \mathbb R^n$ such
that for every point $p \in \Omega$ the tangent vectors to the curves
$\gamma_j \in F_j$ passing through $p$ are linearly independent. Let
$f$ be a function on $\Omega$ such that the restrictions $f
_{\vert{\gamma_j}}$, $1 \leq j \leq n$, are of class
$\mathcal C^{\alpha-1}$ and are uniformly bounded in the $\mathcal C^{\alpha-1}$
norm. Then $f$ is of class $\mathcal C^{\alpha-1}$.
\end{proposition}
Using Lemma~\ref{lem-discs} we construct $n$ transversal
foliations of $E$ by boundaries of Bishop's discs. Since the restriction
of $f$ on every such curve satisfies the hypothesis of
Proposition~\ref{separate}, $f$ is smooth up to $E$. This proves
Proposition~\ref{Wedges}. \qed
\vskip 0,1cm
Let $\Gamma$ and $\Gamma'$ be two maximal totally real submanifolds in almost
complex manifolds $(M,J)$ and $(M',J')$. Let $W(\Gamma,M)$ be a wedge in
$M$ with edge $\Gamma$.
\begin{proposition}\label{wed-reg}
If $F :W(\Gamma,M) \rightarrow M'$ is $(J,J')$-holomorphic and if the
cluster set $C(F,\Gamma)$ is contained in $\Gamma'$, then $F$ extends as a
$\mathcal C^\infty$ map up to $\Gamma$.
\end{proposition}
\noindent{\it Proof of Proposition~\ref{wed-reg}.}
In view of Theorem~\ref{reflection} the proof is classical
(see \cite{co-ga-su}). \qed
\vskip 0,1cm
As a direct application of Proposition~\ref{wed-reg} we obtain the following
partial version of Fefferman's Theorem :
\begin{corollary}\label{feff1}
Let $D$ and $D'$ be two smooth relatively compact domains in real manifolds.
Assume that $D$ admits an almost complex structure $J$ smooth on $\bar D$ and
such that $(D,J)$ is strictly pseudoconvex. Let $f$ be a smooth
diffeomorphism $f: D \rightarrow D'$, extending as a $\mathcal C^1$
diffeomorphism (still called $f$) between $\bar{D}$ and $\bar{D}'$.
Then $f$ is a smooth $\mathcal C^\infty$
diffeomorphism between $\bar D$ and $\bar D'$ if and only
if the direct image $f_*(J)$ of $J$ under
$f$ extends smoothly on $ \bar D'$ and $(D', f_*(J))$
is strictly pseudoconvex.
\end{corollary}
\noindent{\it Proof of Corollary~\ref{feff1}.}
The cotangent lift $f^*$
of $f$ to the cotangent bundle over $D$, locally defined by
$f^*:=(f,^t(df)^{-1})$, is a $(\tilde{J},\tilde{J}')$-biholomorphism
from $T^*D$ to $T^*D'$, where $J':=f_*(J)$.
According to Proposition~\ref{prop-tot-real}, the conormal
bundle $\Sigma(\partial D)$ (resp. $\Sigma(\partial D')$) is a totally real
submanifold in $T^*M$ (resp. $T^*M'$).
We consider $\Sigma(\partial D)$ as the edge of a wedge
$W(\Sigma(\partial D),M)$ contained in $T^*D$. Then we may apply
Proposition~\ref{wed-reg} to $F=f^*$ to conclude. \qed
\subsection{Fefferman's mapping Theorem}
Here we present one of the main results of our paper. This was
obtained in the paper \cite{ga-su2}.
\begin{theorem}\label{theo-fefferman}
Let $D$ and $D'$ be two smooth relatively compact domains in real
manifolds. Assume that $D$ admits an almost complex structure $J$
smooth on $\bar D$ and such that $(D,J)$ is strictly
pseudoconvex. Then a smooth diffeomorphism $f: D \rightarrow D'$
extends to a smooth diffeomorphism between $\bar D$ and $\bar D'$ if
and only if the direct image $f_*(J)$ of $J$ under $f$ extends
smoothly on $ \bar D'$ and $(D', f_*(J))$ is strictly pseudoconvex.
\end{theorem}
Theorem~\ref{theo-fefferman} is a consequence of
Proposition~\ref{PPP}.
We recall that according to Proposition~\ref{prop-tot-real}
the conormal bundle $\Sigma_J(\partial D)$ of $\partial D$ is a
totally real submanifold in the cotangent bundle $T^*M$.
Consider the set
$$
S = \{(z,L) \in \mathbb R^{2n} \times \mathbb R^{2n} :
dist((z,L),\Sigma_J(\partial D)) \leq dist(z,\partial D), z \in D \}.
$$
In a neighborhood $U$ of any totally real point of
$\Sigma_J(\partial D)$, the set $S$ contains a wedge $W_U$ with
$\Sigma_J(\partial D) \cap U$ as totally real edge.
Then in view of Proposition~\ref{wed-reg} we obtain the following
proposition:
\begin{proposition}
\label{wedges}
There is a wedge $W_{U'}$ contained in the
wedge $W_U$ such that the map $f^*$ extends to $W_{U'} \cup \Sigma(\partial D)$
as a $\mathcal C^{\infty}$-map.
\end{proposition}
Proposition~\ref{wedges} implies immediately that $f$ extends
as a smooth $\mathcal C^\infty$ diffeomorphism from $\bar{D}$ to
$\bar{D'}$ (see Figure 8).
\bigskip
\begin{center}
\input{fefferman.pstex_t}
\end{center}
\bigskip
\centerline{Figure 8}
\bigskip
In this survey, we presented an overview of different results
dealing with local analysis in almost complex manifolds, establishing
some bases of the geometry of nonintegrable structures. We
point out that there are many open questions concerning, for instance, contact
geometry (the contact properties of the Riemann map,...), the
study of Monge-Amp{\`e}re equations, or the links
between almost complex analysis and symplectic topology. Our approach
here may be considered as a necessary first step to study such
questions.
\section{Introduction}\label{sec:Intro}
The electrification of the automotive industry along with the development of autonomous driving technology will lead to a paradigm shift in urban mobility.
Autonomous driving electric vehicle fleets are starting to be deployed worldwide to provide Electric Autonomous Mobility-on-Demand services (E-AMoD).
The routing and charging activities are both strongly influenced by the individual vehicle design and the available charging infrastructure. Thus, all of these factors play a key role in optimizing the performance of an operational E-AMoD fleet.
Against this background, this paper proposes a modeling and optimization framework to optimize the control of an electric AMoD system jointly with the siting of the charging infrastructure, while comparing different vehicles to minimize the overall vehicle flow.
\begin{figure}[t]
\centering
\includegraphics[width=7cm]{LayerZ.eps}
\caption{Multi-layer digraph schematically representing an E-AMoD system. Each layer corresponds to a battery state-of-charge (SoC). Each node on the same vertical line represents the same geographic location. The yellow arc indicates the presence of a charging station.}
\label{fig:Overview}
\end{figure}
\textit{Related Literature:}
This paper pertains to the research streams of routing and charge scheduling, charging infrastructure siting, and system-level design of vehicles for an AMoD system.
Multiple approaches to characterize and control AMoD systems are available: from queuing-theoretical models~\citep{ZhangPavone2016,BanerjeeJohariEtAl2015,IglesiasRossiEtAl2017} to simulation-based ones~\citep{LevinKockelmanEtAl2017,MaciejewskiBischoffEtAl2017,HorlRuchEtAl2019}. Multi-commodity network flow models~\citep{RossiZhangEtAl2017,SpieserTreleavenEtAl2014,IglesiasRossiEtAl2018,SalazarLanzettiEtAl2019} are suited for efficient optimization and allow for the implementation of a variety of complex constraints. They have been successfully employed to minimize fleet travel and electricity costs subject to a limited driving range and charging constraints imposed by the congestion on the power transmission grid~\citep{RossiIglesiasEtAl2018b,EstandiaSchifferEtAl2019,BoewingSchifferEtAl2020}.
The capacity and location of the charging infrastructure also play an important role in the operations of E-AMoD systems, and influence the rebalancing schemes
of the vehicles. \cite{LukeSalazarEtAl2021} proposed a model to jointly optimize the operations and charging infrastructure for an E-AMoD system which, however, requires a long computation time on high-performance hardware to find the solution.
Yet having a tractable problem can be extremely important, especially when the goal is to explore different parametrizations or perform comparison studies, e.g., in terms of vehicular composition of the fleet as is the case in this paper.
Few papers have focused on the joint design and optimization of a fleet for AMoD.
\cite{Wallar_2019} investigated multi-class fleet composition for shared mobility-as-a-service, while \cite{PaparellaHofmanEtAl2022} leveraged directed acyclic graphs to study the trade-off between number of vehicles, battery capacity and costs of operations in a fleet for AMoD. However, the majority of these works suffer from scalability issues.
In conclusion, to the best of the authors' knowledge, there are no scalable models available to optimize the operations of an E-AMoD fleet jointly with the charging infrastructure siting in a computationally-tractable manner and with global optimality guarantees.
\textit{Statement of Contributions:} This paper presents a modeling and optimization framework to optimize the operations of an E-AMoD fleet jointly with the charging infrastructure placement in a computationally-effective manner and with global optimality guarantees.
We first propose a network flow model describing the fleet routing and charging activities combined with the infrastructure design.
We perform a sampling of the road network to account for the battery state of charge (SoC) of the vehicles without significantly increasing computational complexity.
Next, we frame the optimization problem in a (mixed-integer) linear fashion that can be efficiently solved with off-the-shelf algorithms in a few minutes.
Finally, we showcase our framework with two case studies. The first case study shows the impact of the siting and density of the charging infrastructure on the energy consumed by the user-free flow of vehicles, and the second explores the impact of a given vehicle on the fleet sizing and energy consumption. Both case studies are carried out for the area of Manhattan, New York City.
\textit{Organization:} The remainder of this paper is structured as follows: Section \ref{sec: Methodology} introduces the E-AMoD optimization framework.
Section \ref{sec: Results} details our case studies of Manhattan.
Finally, Section \ref{sec: Conclusions} draws the conclusions from our key findings and provides an outlook on future research.
\section{Methodology}\label{sec: Methodology}
In this section, we present a time-invariant network flow model to optimize the operation of an E-AMoD system jointly with the placement of the charging infrastructure.
To this end, we construct the multi-layer digraph shown in Fig.~\ref{fig:Overview}, to represent the position of the vehicles on the road network together with their SoC.
\subsection{Multi-Layer Sampled Graph}
We model the transportation network as a directed graph $\mathcal{G_\mathrm{R}} = (\mathcal{V_\mathrm{R}}, \mathcal{A_\mathrm{R}})$ with a set of vertices ${v} \in \mathcal{V_\mathrm{R}}$ representing the location of intersections on the road network, and a set of arcs $(i,j) \in \mathcal{A_\mathrm{R}}$ representing the road link between vertices $i$ and $j$. Each road arc $(i,j) \in \mathcal{A_\mathrm{R}}$ is characterized by a distance $d_{ij}$, travel time $t_{ij}$, and energy $e_{ij}$ required to traverse it.
We then filter the original network of the city, obtaining a reduced network in which each pair of connected nodes is separated from each other by the same energy consumption. This reduced set of nodes is then used to search for node pairs whose separation is also an integer multiple of the unit energy consumption. This allows for increased accuracy in the path planning and the subsequent energy consumption with the reduced set of nodes.
We highlight that the energy consumption (i.e., the unit energy of the arcs) depends on the vehicle design.
To this end, we devise an algorithmic procedure to reduce the original road network into a reduced network.
The final transportation network graph consists of a smaller set of vertices and the corresponding set of arcs between them, as for instance shown in Fig.~\ref{fig:FinalGraphs} for Manhattan, NYC.
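The reduction step can be illustrated in isolation. The following sketch is our illustration only, with hypothetical names (the actual procedure operates on the full road graph, not a single path); it resamples one weighted path so that consecutive kept nodes are approximately one unit of energy apart:

```python
def resample_path(energies, unit):
    """Keep the nodes of a path whose cumulative energy from the
    previously kept node reaches one unit; returns the indices of the
    kept nodes. Node 0 is always kept."""
    kept = [0]
    acc = 0.0
    for i, e in enumerate(energies, start=1):
        acc += e
        if acc >= unit:   # close the current iso-energy arc
            kept.append(i)
            acc -= unit   # carry over the surplus
    return kept

# A path with per-arc energies; with unit = 1.0 every kept pair of
# consecutive nodes is one energy unit apart.
print(resample_path([0.4, 0.3, 0.3, 0.6, 0.4, 1.0], unit=1.0))  # -> [0, 3, 5, 6]
```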
\begin{figure}[t]
\begin{minipage}{0.95\linewidth}
\end{minipage}
\begin{minipage}{\columnwidth}
\includegraphics[width=8.7cm]{Centrality-NYC6.eps}
\end{minipage}
\caption{Road graph of Manhattan (grey). Reduced iso-energy graph (colored). Each orange arc has a weight equal to the unit energy or an integer multiple.}
\label{fig:FinalGraphs}
\label{fig:Centrality}
\end{figure}
To model the SoC of the fleet, we build a multi-layer graph, where each layer is a copy of the previously mentioned reduced graph. The total number of layers is equal to the battery capacity of the vehicles divided by the energy discretization of the road network.
The top layer represents the battery at $100$\% SoC, and the bottom layer at $0$\% SoC. Therefore, as a vehicle traverses an arc, it also moves down one layer, representing the depleting SoC.
Thanks to this equi-distant energy representation of the network, we eliminate the problem of dealing with the discretization of the SoC. Without this important stage, the discretization would lead to either 1) a very high number of layers and intractability, in the case of a fine SoC discretization, or 2) uncertainty and a mismatch between the real energy used and the SoC represented by the layer, in the case of a coarse SoC discretization.
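The layered expansion itself is mechanical. The sketch below is our illustration with hypothetical names (unit-energy road arcs assumed, as in the reduced graph above): road arcs descend one SoC layer, and charging arcs ascend one layer at charging locations:

```python
def build_layered_arcs(road_arcs, charging_nodes, n_layers):
    """road_arcs: unit-energy arcs (i, j) of the reduced road graph.
    Nodes of the layered graph are pairs (i, l) with l the SoC layer,
    l = 0 (empty battery) ... n_layers - 1 (full battery)."""
    # Traversing a road arc consumes one energy unit: drop one layer.
    layered_road = [((i, l), (j, l - 1))
                    for (i, j) in road_arcs
                    for l in range(1, n_layers)]          # cannot leave layer 0
    # Charging arcs climb one layer at the same geographic location.
    charging = [((i, l), (i, l + 1))
                for i in charging_nodes
                for l in range(n_layers - 1)]             # cannot exceed full SoC
    return layered_road, charging

road, charge = build_layered_arcs([(0, 1), (1, 2)],
                                  charging_nodes=[1], n_layers=4)
print(len(road), len(charge))  # -> 6 3
```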
\subsection{Travel Requests and Charging Stations}
We introduce a set of geo-nodes $\mathcal{V_\mathrm{G}}$, where the travel requests are initialized.
A geo-node is connected to all the vertices at the same geographical location, across all the battery SoC layers via geo-arcs.
Then, we define $\mathcal{M} = \{1, ..., M\}$ as the set of travel requests.
Each request $m \in \mathcal{M}$ is defined by a tuple ${r}_m = (o_m, d_m, \alpha_m) \in \mathcal{V_\mathrm{G}} \times \mathcal{V_\mathrm{G}} \times \mathbb{R}^+$ in which $\alpha_m$ is the number of users traveling from the origin $o_m$ to the destination $d_m$ per unit time.
The user flow induced by each demand $m$ is defined as $x_{ij}^{m}$, where $ m \in \mathcal{M}$ on the arcs $(i, j) \in \mathcal{A}$.
All user demand flows $x_{ij}^{m}$ are fulfilled by vehicles, and the user-free vehicle flows are defined as $x_{ij}^{\mathrm{r}}$, which is the rebalancing flow on arc $(i,j) \in \mathcal{A}$. Both the user demand flow and the rebalancing flow originate and terminate at the set of geo-nodes $\mathcal{V_\mathrm{G}}$.
We also include the presence of charging stations, defined by the binary variables $c_i \in \{0,1\}$ for $i \in \mathcal{V_\mathrm{G}}$, where the set of geo-nodes $\mathcal{V_\mathrm{G}}$ indicates the possible locations (we recall that a geo-node connects all nodes in the graph that represent the same geographical location). If $c_i=1$, node $i$ hosts a charging station; if $c_i=0$, there is no charging station at node $i$.
Charging arcs $(i,j) \in \mathcal{A}_\mathrm{C}$ are directed arcs at charging station locations, which allow the vehicles to move from a layer to the subsequent layer above, while remaining at the same geographic location. Vehicles can only be charged while rebalancing via charging arcs $(i,j) \in \mathcal{A_\mathrm{C}}$.
Hence, the set of arcs $\mathcal{A} = \mathcal{A_\mathrm{R}} \cup \mathcal{A_\mathrm{C}} \cup \mathcal{A_\mathrm{G}}$ and the set of vertices $\mathcal{V} = \mathcal{V_\mathrm{R}} \cup \mathcal{V_\mathrm{G}}$ comprise the complete graph representation.
\subsection{Problem Formulation}
The objective of this paper is to minimize the user and rebalancing flows in the system:
\begin{equation}\label{eq: objective1}
\min_{{x}^{m},{x}^\mathrm{r}}\sum_{{m} \in \mathcal{M}} \sum_{{(i,j)}\in\mathcal{A}} t_{ij}\cdot (x_{ij}^{m} + x_{ij}^\mathrm{r}),
\end{equation}
where $x_{ij}^{m}$ is the user demand flow, $x_{ij}^\mathrm{r}$ is the rebalancing flow, $t_{ij}$ is the time to traverse arc ${(i,j)}\in\mathcal{A}$.
The vehicle and user flow conservation are expressed as in a multi-commodity transportation problem by
\begin{multline}\label{eq: Flow conservation}
\sum_{(i,j)\in\mathcal{A}}x_{ij}^{{m}} + \mathds{1}_{j=o_{m}}\cdot \alpha_{{m}} = \sum_{(j,k)\in\mathcal{A}}x_{jk}^{{m}}+ \mathds{1}_{j=d_{m}}\cdot \alpha_{{m}}\\ \qquad \forall {m}\in\mathcal{M},
\end{multline}
where the user flow $x_{ij}^{m}$ is induced by demand $m$, $\mathds{1}_{x=y}$ is the indicator function, equal to $1$ if $x=y$ and zero otherwise, and $\alpha_m$ is the user request rate per unit time.
Rebalancing the vehicles in the E-AMoD system is critical to create a balanced system and to re-align vehicle distribution with transportation requests. This is ensured by
\begin{equation} \label{eq: Flow Rebalance}
\sum_{(i,j)\in\mathcal{A}} \left( x_{ij}^{\mathrm{r}} + \sum_{{m} \in \mathcal{M}} x_{ij}^{m} \right) =
\sum_{(j,k)\in\mathcal{A}} \left( x_{jk}^{\mathrm{r}} + \sum_{{m} \in \mathcal{M}} x_{jk}^{m} \right).
\end{equation}
The geo-nodes $\mathcal{V}_\mathrm{G}$ act as origins and destinations for the vehicles. Consequently, each demand will require the geo-arcs $\mathcal{A}_\mathrm{G}$ to be used twice: first, from the origin geo-node to the road network, and second, to go from the road network to the destination geo-node $\mathcal{V}_\mathrm{G}$. The same is also applicable to rebalancing flows. This is ensured by constraining the users flow by
\begin{equation}\label{eq: FromGeo_vehicle}
\begin{aligned}
\sum_{{m \in \mathcal{M}}} \left(\sum_{(i,j) \in \mathcal{A}_\mathrm{G}} x_{ij}^{m}+\sum_{(k,l) \in \mathcal{A}_\mathrm{G}} x_{kl}^{m} \right) = 2 \cdot \sum_{{m}\in\mathcal{M}} \alpha_{{m}} \\ \qquad \forall {i,l} \in \mathcal{V}_\mathrm{G},
\end{aligned}
\end{equation}
and the rebalancing flows to
\begin{equation}\label{eq: FromGeo_rebalance}
\begin{aligned}
\sum_{(i,j) \in \mathcal{A}_\mathrm{G}} x_{ij}^\mathrm{r}+\sum_{(k,l) \in \mathcal{A}_\mathrm{G}} x_{kl}^\mathrm{r} = 2 \cdot \sum_{{m}\in\mathcal{M}} \alpha_{{m}} \qquad \forall {i,l} \in \mathcal{V}_\mathrm{G}.
\end{aligned}
\end{equation}
We enforce SoC conservation when passing through a geo-node $\mathcal{V}_{\mathrm{G}}$ as
\begin{equation}\label{eq: vehicleRebalance_Geo}
\begin{aligned}
\sum_{(i,j) \in \mathcal{A}_\mathrm{G}} x_{ij}^{m} - x_{ji}^\mathrm{r} = 0 \qquad \forall {m}\in\mathcal{M}, \forall {i,j} \in \mathcal{V}_\mathrm{G}.
\end{aligned}
\end{equation}
We limit the number of charging stations to $N \in \mathbb{N}$ with
\begin{equation}\label{eq: sumCP}
\begin{aligned}
\sum_{i \in \mathcal{V}_\mathrm{G}} c_{i} \leq N.
\end{aligned}
\end{equation}
Each charging station has a limited capacity in terms of the number of vehicles it can charge simultaneously. The capacity constraint of each charging station is given by
\begin{equation}\label{eq: chargingArcs}
\begin{aligned}
x_{ij}^{\mathrm{r}} \leq Z \cdot E \qquad \forall {(i,j)} \in \mathcal{A_\mathrm{C}},
\end{aligned}
\end{equation}
where $Z$ refers to the maximum vehicle capacity of each charging station, and $E$ refers to the number of battery SoC layers that can be traversed via the charging arcs per unit time, i.e., the charging power at the charging stations.
Therefore, the charging station vehicle capacity and the charging power limit the rebalancing flow through the charging arcs ${(i,j)} \in \mathcal{A_\mathrm{C}}$.
Since the vehicles must not charge with users onboard, the charging of the vehicles can be carried out only while rebalancing. This is ensured by
\begin{equation}\label{eq: Xv(charging) = 0}
\begin{aligned}
x_{ij}^{{m}} = 0 \qquad \forall {m}\in\mathcal{M}, \forall {(i,j)} \in \mathcal{A}_\mathrm{C},
\end{aligned}
\end{equation}
which guarantees that the vehicles do not charge when catering to user demands. Therefore, the vehicles can only charge while rebalancing through charging arcs $(i,j) \in \mathcal{A_\mathrm{C}}$.
Finally, we impose non-negative flow constraints,
\begin{align}
&x_{ij}^\mathrm{{r}} \geq 0 \qquad \forall {(i,j)} \in \mathcal{A}, \label{eq: Xv > 0} \\
&x_{ij}^{{m}} \geq 0 \qquad \forall {m}\in\mathcal{M}, \forall {(i,j)} \in \mathcal{A}. \label{eq: Xu > 0}
\end{align}
First, given a pre-defined charging infrastructure placement, we define the E-AMoD optimization problem as follows:
\begin{prob}(E-AMoD Optimization Problem:)\label{prob:one}
Given a set of transportation requests $\mathcal{M}$, the optimal user flows $x^m$ and rebalancing flows $x^\mathrm{r}$ result from:
\begin{equation*}
\begin{aligned}
&\!\min_{{x}^{m},{x}^\mathrm{r}}\sum_{{m} \in \mathcal{M}} \sum_{{(i,j)}\in\mathcal{A}} & & t_{ij}\cdot (x_{ij}^{m} + x_{ij}^\mathrm{r}), \\
& \textnormal{s.t. } & &\eqref{eq: Flow conservation}-\eqref{eq: vehicleRebalance_Geo}, \eqref{eq: Xv(charging) = 0}-\eqref{eq: Xu > 0} .
\end{aligned}
\end{equation*}
\end{prob}
Problem~\ref{prob:one} is a linear program (LP) that can be efficiently solved with global optimality guarantees with off-the-shelf LP solvers.
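Problem~\ref{prob:one} couples all requests through the shared flow constraints. As a stand-alone illustration of the routing it encodes (our sketch, not the LP itself), a shortest-path search over the layered graph returns the time-optimal, SoC-feasible route of a single request, where every road arc costs its travel time and consumes one SoC unit:

```python
import heapq

def fastest_feasible_time(times, origin, dest, soc0):
    """Dijkstra over states (node, soc): traversing arc (i, j) costs
    times[(i, j)] time units and one SoC unit; the SoC must stay
    nonnegative. Returns the minimum travel time, or None if infeasible."""
    pq = [(0.0, origin, soc0)]
    settled = set()
    while pq:
        t, i, soc = heapq.heappop(pq)
        if i == dest:
            return t
        if (i, soc) in settled:
            continue
        settled.add((i, soc))
        if soc == 0:                 # battery empty: the vehicle is stuck
            continue
        for (a, b), t_ab in times.items():
            if a == i:
                heapq.heappush(pq, (t + t_ab, b, soc - 1))
    return None

# Toy instance: with enough charge, the direct arc 0 -> 2 (3.0 time units)
# beats the detour 0 -> 1 -> 2 (4.0); with an empty battery no route exists.
times = {(0, 1): 2.0, (1, 2): 2.0, (0, 2): 3.0}
print(fastest_feasible_time(times, 0, 2, soc0=2))  # -> 3.0
print(fastest_feasible_time(times, 0, 2, soc0=0))  # -> None
```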
\begin{prob}(E-AMoD Joint Optimization Problem:)\label{prob:two}
Given a set of transportation requests $\mathcal{M}$, the optimal user flows $x^m$, rebalancing flows $x^\mathrm{r}$, and the optimal siting of the charging infrastructure $c$ result from:
\begin{equation*}
\begin{aligned}
&\!\min_{{x}^{m},{x}^\mathrm{r},c}\sum_{{m} \in \mathcal{M}} \sum_{{(i,j)}\in\mathcal{A}} & & t_{ij}\cdot (x_{ij}^{m} + x_{ij}^\mathrm{r}), \\
& \textnormal{s.t. } & &\eqref{eq: Flow conservation}-\eqref{eq: Xu > 0} .
\end{aligned}
\end{equation*}
\end{prob}
Problem~\ref{prob:two} is a mixed-integer linear program (MILP) that can be solved with global optimality guarantees with off-the-shelf MILP solvers.
\subsection{Discussion}
A few comments are in order.
First, we model the system at steady-state. This assumption holds if the rate of change of requests is significantly lower than the average travel time of individual trips, as observed in densely populated urban environments by \cite{Neuburger1971, Rossi2018}. In addition, vehicle conservation implicitly enforces SoC conservation over the time-span under consideration.
Second, the iso-energy graph sampling approach is only valid for environments where traveling to and from a node requires the same amount of energy, e.g., flat urban environments, and should be extended to capture more general (e.g., hilly) scenarios as well.
Third, the results obtained by solving Problem \ref{prob:one} and \ref{prob:two} allow for fractional flows to occur. This is acceptable, given the mesoscopic nature of the problem, where arc flows are in the order of hundreds of vehicles, see~\cite{LukeSalazarEtAl2021}.
Fourth, we take into account exogenous traffic and its stochasticity only for a specific traffic scenario. In the case of different conditions, the travel time and energy consumption should be updated, and the network resampled accordingly. Finally, we do not take into account the impact of endogenous traffic, e.g., as done in~\cite{SalazarTsaoEtAl2019,Wollenstein-BetechSalazarEtAl2021}, but rather leave this interesting aspect to future research.
\section{Results} \label{sec: Results}
This section showcases our modeling and optimization framework in two real-world case studies for Manhattan, NYC. The original data set is extracted from OpenStreetMap \citep{HaklayWeber2008}, and reduced to 19 nodes and 138 arcs, as shown in Fig.~\ref{fig:FinalGraphs}.
The transportation demand requests are available publicly (courtesy of the New York Taxi and Limousine Commission).
The data set used as a reference is taken from March 1 to 10, 2022. Thereby, we consider approximately \unit[140]{thousand} demands per day and \unit[1.4]{million} demands for the entire 10-day period. Due to the high volume and the absence of strong peak hours of daily travel requests in NYC, see~\cite{Meyers2018}, the assumptions of the linear time-invariant model remain valid. Thus, we can use it to model a time period as long as multiple days.
Moreover, the results that will be shown in Section~\ref{sec: caseStudy1}, specifically the ratio between the rebalancing and the overall distance driven, are in line with~\cite{HogeveenSteinbuchEtAl2021}, supporting the hypothesis that the linear time-invariant model holds over a day.
We investigate two case studies. The first one assesses the advantages of jointly optimizing the infrastructure siting with respect to a heuristic placement based on geo-nodes centrality, and the impact of the charging infrastructure density on the resulting rebalancing energy consumed in a day by the whole fleet.
The second case study evaluates the performance of the E-AMoD system when different types of electric vehicles are employed, see Table~\ref{table:Vehicle}, which have differently sized batteries and accordingly designed powertrains.
\begin{table}[t]
\begin{center}
\caption{Normalized Vehicle Parameters and Number of Layers in the Graph.}
\label{table:Vehicle}
\begin{tabular}{ l|l|l|l }
& Car A & Car B & Car C \\
\hline
Energy consumption (WLTP) & 85\% & 93\% & 100\% \\
Battery capacity & 25\% & 60\% & 100\% \\
Mass & 69\% & 85\% & 100\% \\
Number of SoC Layers & 196 & 428 & 670
\end{tabular}
\end{center}
\end{table}
Car A has the smallest battery capacity, and is therefore the lightest and hence most energy-efficient vehicle.
Conversely, Car C has the largest battery capacity, making it the heaviest and least efficient one.
For all case studies, both Problem~\ref{prob:one} and Problem~\ref{prob:two} were solved using the solver Gurobi 9.5, see \cite{GurobiOptimization2021}, on an Intel Core i7-10850H with 32 GB of RAM, in less than 5 and 15 minutes, respectively.
\subsection{The Charging Infrastructure}\label{sec: caseStudy2}
In this case study we investigate the impact of the charging infrastructure density and siting on the user-free flows. We simulate a demand period of 10 days with Car B.
Each data point in Fig.~\ref{fig:Rebal} is the average daily energy usage of the user-free flow between March 1 and 10, 2022.
We assess the advantages of jointly optimizing siting and routing by comparing the results of Problem~\ref{prob:two} with the ones of Problem~\ref{prob:one} after siting the charging infrastructure in a heuristic manner. Thereby, we adopt a heuristic policy that places the charging stations in the geo-nodes with the highest \textit{betweenness} centrality~\cite{Bullo2018}, i.e., the probability that a node appears on the shortest path between any two random nodes, as shown in Fig.~\ref{fig:Centrality}.
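As a minimal sketch of this heuristic (the toy graph below is illustrative, not the actual Manhattan network used in the paper), the top-$N$ geo-nodes by betweenness centrality can be selected with the \texttt{networkx} library:

```python
import networkx as nx

# Toy road graph; nodes and edges are illustrative, not the Manhattan data set.
G = nx.Graph()
G.add_edges_from([(0, 1), (1, 2), (2, 3), (1, 3), (3, 4), (4, 5), (2, 4)])

# Betweenness centrality: fraction of all shortest paths that pass through a node.
centrality = nx.betweenness_centrality(G, normalized=True)

# Heuristic siting: place the N charging stations at the most central geo-nodes.
N = 2
stations = sorted(centrality, key=centrality.get, reverse=True)[:N]
print(stations)  # the two "pass-through" hubs of the toy graph
```

In this toy example the heuristic picks the two cut-like hub nodes, while the degree-one leaves (with zero betweenness) are never selected.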
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{DensityCentrality3.eps}
\caption{The average energy consumption of the user-free flow for an optimally sited charging infrastructure and a heuristic approach based on betweenness centrality. Data set from 1-10 March, 2022.}
\label{fig:Rebal}
\end{figure}
Fig.~\ref{fig:Rebal} shows the difference in the energy usage of the user-free flows in the two scenarios for an increasing number of charging stations.
The optimal siting significantly outperforms the heuristic approach, with up to 30\% decrease in energy consumption.
Moreover, we observe that with approximately $\unit[0.1]{stations/km^2}$, if the siting is optimal, the performance reaches a plateau beyond which building additional stations is not worthwhile. On the contrary, with the betweenness centrality heuristic, the number of charging stations has to be at least 3 times higher to obtain a similar performance.
\subsection{Impact of the Vehicle Design on the Optimal Operations}\label{sec: caseStudy1}
This section details the case study on the impact of the selected vehicles on the sizing and energy consumption of the fleet. Specifically, we select three different vehicles and study their impact on the comparative metrics. The design parameters for these vehicles are normalized with respect to the vehicle from the Dutch solar car manufacturer Lightyear, see \cite{Lightyear}. Table~\ref{table:Vehicle} shows the normalized parameters of the selected vehicles and the number of layers used in the multi-layer graph. The simulation is carried out for the whole day of March 1, 2022. In each scenario we solve Problem~\ref{prob:two} with $N=10$ charging stations, following the results of Section~\ref{sec: caseStudy2}.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{CaseStudy1_3.eps}
\caption{Energy used by user flow (blue) and user-free flow (red) for different vehicles (A,B,C) in Manhattan. The resulting fleet sizes are equal to 7910, 7890 and 7880 vehicles, respectively.}
\label{fig:CS1}
\end{figure}
Fig.~\ref{fig:CS1} illustrates the overall energy consumed and the one exclusively caused by the rebalancing.
The energy consumption of Car A is about $15$\% lower than that of Car C in the WLTP cycle.
However, a vehicle with a larger battery requires fewer trips to the charging station; accordingly, Car A, which has the smallest battery, exhibits the highest energy consumption of the rebalancing flow, as reflected in Fig.~\ref{fig:CS1}.
Nevertheless, the overall energy consumption of Car A is still the lowest due to its significantly higher energy efficiency resulting from a lower weight.
In particular, we see that Car A overall uses $12$\% less energy compared to Car C, even though it uses $44$\% more energy to rebalance.
Finally, the resulting fleet sizes are equal to $7910$, $7890$ and $7880$ vehicles for Cars A, B and C, respectively. Given these results, the influence of the individual vehicle's battery size on the fleet size can be considered negligible.
In conclusion, our results indicate that using vehicles with a shorter driving range but higher energy efficiency can lead to better performance, because the lower total energy need more than compensates for the reduced flexibility in scheduling charging trips.
\section{Conclusion}\label{sec: Conclusions}
This paper proposed a modeling and optimization framework to jointly optimize operations and charging infrastructure placement for Electric Autonomous Mobility-on-Demand (E-AMoD) systems in a computationally tractable, scalable and globally optimal fashion.
The proposed iso-energy graph resampling allowed us to construct a time-invariant network flow model that can account for the State of Charge (SoC) of the individual vehicles, without the actual fleet size affecting its computational effectiveness.
Leveraging this model, we formulated the E-AMoD operational-only and joint operation and infrastructure placement problems as a linear program (LP) and mixed-integer linear program (MILP), respectively, that could be solved with global optimality guarantees using off-the-shelf optimization algorithms in a few minutes only.
Our real-world case studies based on taxi data in Manhattan, NYC, showed that jointly optimizing the charging infrastructure placement can significantly improve the performance achievable by the E-AMoD system. It also revealed that the benefit of increasing the number of charging stations vanishes rapidly in case of optimal siting of the infrastructure, but this might not occur if the placement is heuristic.
Moreover, in line with our previous findings obtained in~\cite{PaparellaHofmanEtAl2022} with a completely different, still globally optimal method, we showed how deploying fleets with a downsized battery will slightly increase the empty-mileage driven, whilst resulting in almost the same fleet size and significantly reducing the total energy consumption.
This study opens the field for the following extensions:
First, we would like to include the total cost of ownership of the fleet and charging infrastructure, thereby also accounting for different levels of charging power.
Second, we deem it interesting to also capture the interactions of the fleet with public transit and (shared) active modes.
Finally, it would be insightful to account for the interactions with the power grid and study heterogeneous (solar) electric fleet compositions.
\section{Acknowledgments}\label{Sec:akn}
We thank Dr. I. New and Ir. O. Borsboom for proofreading this paper.
\end{document}
\chapter*{Acknowledgments}
Science, even in its naked form, is a collective, social process \cite{Fleck1979}.
For science any theory, observation or discovery is related to people and their relations.
For a scientist it is all above plus much more.
I am indebted to Maciek\footnote{Q: Is it misspelled? A: No. Keys [j] and [k] are neighbors, but this time it is misleading: \emph{Maciek} is a casual form of \emph{Maciej}.} Lewenstein, especially for providing me a lot of freedom for pursuing my diverse scientific and educational interests, and his belief in me.
I am convinced that there is no better gift for independence and creativity than never saying ``no''.
I am grateful to Javi Rodr\'{\i}guez-Laguna for his countless insights into anything, from scientific remarks on current projects (and unrelated ones), through pieces of advice on academic writing and workflow, to comments on education, society and, well, anything.
It was encouraging, inspiring and fruitful.
I would like to thank Jake Biamonte for inviting me to Turin for an intensive and fascinating research collaboration.
Even though it was a short stay, it was wonderful on so many axes.
I am grateful for the hospitality and help of the administration: primarily at my home institute, ICFO in Castelldefels,
and also at guest institutes, such as the ISI Foundation in Turin and IFT in Madrid.
It is thanks to your work and attitude that I felt welcome and free from the paperwork burden.
I am grateful to my family and close friends for the support and encouragement.
Out of many lessons learnt, the most important one is that, in the long run, happiness is as important as intellectual prowess.
The list only starts here.
All coauthors, discussion partners, lecturers, fellow PhD students and friends --- thank you!
\newpage
This PhD was supported by Spanish MINCIN/MINECO project TOQATA (FIS2008-00784), EU Integrated Projects AQUTE and SIQS, CHISTERA project DIQUIP, ERC grants QUAGATUA and OSYRIS.
\chapter*{Abstract}
The study of the structure of quantum states can provide insight into the possibilities of quantum mechanics applied to quantum communication, cryptography and computations, as well as the study of condensed matter systems.
For example, it shows the physical restrictions on the ways in which a quantum state can be used and allows us to tell which quantum states are equivalent up to local operations.
Therefore, it is crucial for any analysis of the properties and applications of quantum states.
This PhD thesis is dedicated to the study of the interplay between symmetries of quantum states and their self-similar properties.
It consists of three connected threads of research: polynomial invariants for multiphoton states, visualization schemes for quantum many-body systems and a complex networks approach to quantum walks on a graph.
First, we study the problem of which many-photon states are equivalent up to the action of passive linear optics.
We prove that it can be converted into the problem of equivalence of two permutation-symmetric states, not necessarily restricted to the same operation on all parties.
We show that the problem can be formulated in terms of symmetries of complex polynomials of many variables, and provide two families of invariants, which are straightforward to compute and provide analytical results.
Furthermore, we prove that some highly symmetric states (singlet states implemented with photons) offer two degrees of robustness --- both against collective decoherence and against photon loss.
Additionally, we provide two proposals for experiments, feasible with an optical setup and current technology: one related to the direct measurement of a family of invariants using photon counting, and the other concerning the protection of transmitted quantum information employing the symmetries of the state.
Second, we study a family of recursive visualization schemes for many-particle systems, for which we have coined the name ``qubism''.
While all many-qudit states can be plotted with qubism, it is especially useful for spin chains and one-dimensional translationally invariant states.
This symmetry results in self-similarity of the plot, making it more comprehensible and allowing one to discover certain structures.
This visualization scheme allows one to compare states of different particle numbers (which may be useful in numerical simulations when the particle number is an open parameter) and puts emphasis on correlations between neighboring particles.
The visualization scheme can be used to plot probability distributions of sequences, e.g. related to series of nucleotides in RNA and DNA, or of aminoacids in proteins.
However, unlike classical probabilistic ensembles of sequences, visualizing quantum states offers more: it shows entanglement and allows one to observe quantum phase transitions.
Third, we study quantum walks of a single particle on graphs, which are classical analogues of random walks.
Our focus is on the long-time limit of the probability distribution.
We define ``quantumness'' to be the difference between the probability distributions of the quantum and related classical random walks.
Moreover, we study how the off-diagonal elements of the density matrix behave, especially in the long-time limit.
That is, we measure the coherence between different nodes,
and we use it to perform quantum community detection --- splitting a graph into subgraphs in such a way that the coherence between them is small.
We perform a bottom-up hierarchical aggregation, with a scheme similar to modularity maximization, which is a standard tool for so-called community detection in (classical) complex networks.
However, our method captures properties that classical methods cannot --- the impact of constructive and destructive interference, as well as the dependence of the results on the tunneling phase.
\chapter{Conclusion}
\label{ch:conclusion}
This PhD thesis is devoted to three threads:
\begin{itemize}
\item \textbf{\nameref{ch:invariants}},\\
about relations between permutation symmetry of a state and its other properties related to quantum information. The main focus was on local unitary equivalence of states, transformations achievable with linear optics and polynomial invariants.
\item \textbf{\nameref{ch:qubsim}},\\
about a plotting scheme for many-body states, \emph{qubism}.
This tool makes it possible to show entanglement and phase transitions, as well as to discover other symmetries of a pure state.
\item \textbf{\nameref{ch:networks}},\\
about using complex network approach to study quantum systems, with the special emphasis on community detection.
We use it to assess the range of quantum effects in a biochemical system.
\end{itemize}
Each of these topics gives rise to further questions and lines of investigation.
Moreover, this combination of topics is a source of creativity and opens paths to further developments, particularly:
\begin{itemize}
\item Geometric representations for mixed states.\\
Majorana representation for symmetric qubit states serves both as a visualization and a mathematical isomorphism giving rigorous insight into properties of the state.
A variant for mixed states would be beneficial.
\item General visualization schemes putting emphasis on symmetries of a given state.\\
We plotted amplitudes, which represent all knowledge about the state, but also are susceptible to ``unimportant'' changes (e.g. local basis).
Directly showing symmetries of a state, whether rigorous or approximate, may be fruitful.
\item Special visualizations for symmetric and antisymmetric states.\\
While the qubism representation can be used for any state, it focuses on translationally invariant states.
It is likely that there are plotting schemes that are more suitable for states with different symmetries.
\item Relation of quantum community detection to other hierarchical schemes.\\
Splitting a system into subsystems that are weakly correlated is the key principle behind matrix product states (MPS) and projected entangled pair states (PEPS).
There are analogies between these techniques and quantum community detection, which are worth pursuing.
\item Community detection methods for many-body systems.\\
We performed splitting of a one-particle system into subsystems not sharing coherent quantum superposition.
A natural extension would be to work on multiparticle systems and, instead of coherence, work on entanglement.
\end{itemize}
\chapter{Introduction}
\label{ch:introduction}
\section{Background}
This PhD thesis is divided into three chapters, each one describing a distinct thread of research:
\begin{itemize}
\item \textbf{\nameref{ch:invariants}},
\item \textbf{\nameref{ch:qubsim}},
\item \textbf{\nameref{ch:networks}}.
\end{itemize}
Yet, these threads are connected through common concepts and methods related to study of entanglement, symmetry with respect to interchange of particles and self-similarity of quantum systems.
An illustrative graph of these concepts and their relations is depicted in
Fig.~\ref{fig:thesis-diagram}.
\begin{figure}[!ht]
\centering
\includegraphics[width=\textwidth]{figs/introduction/thesis_diagram}
\caption{A graph of the main concepts of this thesis and their relations.
\label{fig:thesis-diagram}
}
\end{figure}
\subsection{Historical background}
While quantum mechanics dates back to the beginning of the 20th century \cite{DiracBook},
only in the last decades has it begun to be considered as a tool for information processing, that is, communication, cryptography and computation \cite{NielsenChuang2000}.
One of the first works along that line was a study of the capacity for transmitting information with quantum \cite{Holevo1973} rather than classical states \cite{Shannon1948original}.
The result, now known as Holevo's bound, is that regardless of whether we operate with classical or quantum $d$-level states, we can transmit up to $\log_2(d)$ bits of information per state.
That is, for this particular task there is no advantage of using quantum states over classical states.
However, many other works show significant differences between classical and quantum states.
Perhaps the most striking is Bell's theorem \cite{Bell1964}, putting bounds on certain correlations, which cannot be broken by any probability distribution stemming from classical mechanics.
Much to the surprise of its author, it turned out that some quantum states violate the bound.
Thus, in particular, quantum mechanics cannot be thought of as a classical theory with yet unknown parameters, settling a dispute which had looked purely philosophical \cite{Einstein1935,WheelerZurek1983book}.
Stronger-than-classical correlations inspired the FLASH paper \cite{Herbert1982}, a protocol for faster-than-light communication.
While, as expected, this result could not hold, the flaw in it was so subtle that it inspired further progress \cite{Peres2003,Kaiser2011book}, starting from the no-cloning theorem \cite{Wootters1982,Dieks1982}, a proof that there can be no machine making copies of an arbitrary quantum state.
The impossibility of cloning an unknown state offers certain advantages --- we may use quantum states to transmit information that we do not want to be copied.
If one party (let us call her Alice) sends a quantum message to a friend (let us call him Bob), then any attempt by an eavesdropper to make a copy of the message will disturb the message Bob receives.
This property is utilized in protocols for generating shared secret keys, allowing for perfectly secure cryptography \cite{Bennett1984,Ekert1991}.
Another application of quantum information is quantum computation.
Feynman is attributed with the first idea to use the principles of quantum mechanics for computation \cite{Feynman1982}.
Moreover, his intuition that quantum systems can simulate other quantum system turned out to be correct \cite{Lloyd1996}.
The first quantum algorithms offered significant speedup for
testing whether a function is constant \cite{Deutsch1992},
the computation of the discrete logarithm \cite{Shor1994},
and database access \cite{Grover1996}.
Unfortunately, the power of quantum information and computation comes at the price of another intrinsic quantum-mechanical problem: extreme sensitivity to noise.
As the number of parameters grows exponentially with the number of particles, even a small noise, attenuation or uncertainty of the setting may result in drastic changes of subtle parameters of the state.
The typical classical approach to overcome this problem is to amplify the signal, so that it becomes much stronger than the noise and provides redundancy against losses.
However, in quantum information this strategy is forbidden by the very no-cloning property that enables secure communication.
Moreover, on the one hand, the pervasiveness of interactions between particles makes the creation of entangled states feasible; on the other hand, it makes it easy for the evolution to become uncontrollable and for our system to entangle with the environment.
These uncontrolled interactions are operationally the same as the loss of quantum properties in the form of decoherence \cite{Zurek2003}.
\subsection{Properties of quantum mechanics}
Mathematically, quantum states are described as vectors in a complex Hilbert space.
Pure quantum states of $d$-levels, or \emph{qudits}, are represented by vectors of $d$ dimensions and complex entries,
\begin{equation}
\ket{\psi} =
\begin{bmatrix}
\psi_1\\
\psi_2\\
\vdots\\
\psi_d
\end{bmatrix}.
\end{equation}
When we perform a measurement in the computational basis, the probability of obtaining a given outcome is the absolute value squared of the respective vector entry
\begin{equation}
P(i) = \psi_i^* \psi_i = |\psi_i|^2,
\end{equation}
also known as the Born rule.
The only linear operations that preserve probability are unitary operations.
Consequently, when describing a purely quantum evolution, we restrict ourselves to using only these operations.
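As a short numerical illustration of the Born rule and of probability conservation under unitary operations (the qutrit amplitudes below are an arbitrary example):

```python
import numpy as np

# An example qutrit (d = 3) state with complex amplitudes.
psi = np.array([1/np.sqrt(2), 1j/2, -1/2])

# Born rule: outcome probabilities are the squared moduli of the amplitudes.
probs = np.abs(psi)**2          # approximately [0.5, 0.25, 0.25]
assert np.isclose(probs.sum(), 1.0)

# A random unitary (QR of a complex Gaussian matrix) preserves total probability.
U, _ = np.linalg.qr(np.random.randn(3, 3) + 1j * np.random.randn(3, 3))
assert np.isclose(np.sum(np.abs(U @ psi)**2), 1.0)
```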
When considering a composite system of many distinguishable subsystems, the recipe to construct its wavefunction is given by the tensor product of wavefunctions representing each subsystem
\begin{equation}
\ket{\psi} = \ket{\phi_1} \otimes \ket{\phi_2} \otimes \cdots \otimes \ket{\phi_N}.
\label{eq:product-states}
\end{equation}
Thus, the probability of getting a particular outcome is independent from other measurements
\begin{equation}
P(i_1, i_2, \ldots, i_N) = P_1(i_1) P_2(i_2) \cdots P_N(i_N).
\end{equation}
The tensor product acts as a Cartesian product on the Hilbert space bases.
So for subsystems of dimensions $d_1$, $d_2$, $\ldots$ and $d_N$ the dimension of the global Hilbert space is $d_1 d_2 \cdots d_N$.
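A brief numerical check of these two facts, with arbitrary single-qubit amplitudes: the composite state is the Kronecker product, the dimensions multiply, and the joint outcome probabilities factorize.

```python
import numpy as np

# Two arbitrary single-qubit states.
phi1 = np.array([np.sqrt(0.8), np.sqrt(0.2)])
phi2 = np.array([np.sqrt(0.5), np.sqrt(0.5)])

# The composite state of distinguishable subsystems is their tensor product.
psi = np.kron(phi1, phi2)
assert psi.shape == (4,)     # dimensions multiply: 2 * 2 = 4

# For a product state, joint probabilities factorize: P(i1, i2) = P1(i1) * P2(i2).
P = np.abs(psi)**2
assert np.allclose(P.reshape(2, 2), np.outer(np.abs(phi1)**2, np.abs(phi2)**2))
```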
Not all quantum states of many particles can be written as a \emph{product state} \eqref{eq:product-states}.
These states are called \emph{entangled states} \cite{Horodecki2009}.
If each subsystem has dimension $d$, then product states constitute a manifold of $N (d-1)$ complex dimensions.
However, if we consider all possible states, we get a manifold of $d^N - 1$ complex parameters.
Consequently, from a measure-theoretic perspective, almost all pure states are entangled.
In many scenarios we need to deal with statistical mixtures of pure states.
This can be done by using density matrices.
For a pure state $\ket{\psi}$ it is defined as
\begin{equation}
\rho_{\ket{\psi}} = \ket{\psi}\bra{\psi}.
\end{equation}
That is, its entries are $\rho_{ij}=\psi_i \psi_j^*$.
The diagonal of the density matrix consists of the probabilities of the different outcomes, given that the measurement is performed in the computational basis.
The statistical mixture of two states, with probabilities $\mu$ and $(1-\mu)$ can be written as
\begin{equation}
\rho = \mu \rho_1 + (1-\mu) \rho_2.
\end{equation}
This description encapsulates the fact that different mixtures of pure states can yield the same quantum correlations.
Moreover, it allows straightforward calculation of expectation values of operators $A$, that is
\begin{equation}
\langle A \rangle = \hbox{Tr}[ A \rho ].
\end{equation}
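These definitions translate directly into code; a minimal sketch with arbitrary example states:

```python
import numpy as np

# Density matrices of two pure states: |0> and (|0> + |1>)/sqrt(2).
psi0 = np.array([1.0, 0.0])
psi_plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho1 = np.outer(psi0, psi0.conj())
rho2 = np.outer(psi_plus, psi_plus.conj())

# Statistical mixture with weights mu and (1 - mu).
mu = 0.25
rho = mu * rho1 + (1 - mu) * rho2

# Diagonal entries: computational-basis probabilities (~[0.625, 0.375]).
print(np.diag(rho).real)

# Expectation value of an observable, here Pauli Z, via Tr[A rho] (~0.25).
Z = np.array([[1, 0], [0, -1]])
print(np.trace(Z @ rho).real)
```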
For mixed states the notion of entanglement is more complicated.
The most standard approach is to define separable states as states being in the convex hull of $\rho_{\ket{\psi}}$, where $\ket{\psi}$ are product states, i.e.
\begin{equation}
\rho = \sum_i p(i) \rho_{\ket{\psi_i}}.
\end{equation}
Yet, unlike for pure states, the problem of telling whether a given state is separable or not is NP-hard \cite{Gurvits2003}.
While mixed states are harder to analyze than pure states, they are essential to study quantum mechanics itself.
That is, even if we study a pure state of two particles, its subsystems are generally in a mixed state.
When we study any state described by $\rho$, the state of its subsystem $A$ after ignoring subsystem $B$ reads
\begin{equation}
\rho_A = \hbox{Tr}_B\left[ \rho \right],
\end{equation}
where $\hbox{Tr}_B$ denotes the partial trace over subsystem $B$.
This operation traces out everything but the system $A$ (in this case, it traces out the subsystem $B$), that is
\begin{equation}
[\rho_A]_{i_A; j_A} = \sum_{i_B} [\rho]_{i_A, i_B; j_A, i_B}.
\end{equation}
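As a numerical sketch, the partial trace can be computed by viewing the density matrix through its four subsystem indices and summing over the diagonal in the traced-out indices; for a Bell state, the reduced state is maximally mixed:

```python
import numpy as np

# Bell state (|00> + |11>)/sqrt(2) of two qubits.
psi = np.zeros(4)
psi[0] = psi[3] = 1 / np.sqrt(2)
rho = np.outer(psi, psi.conj())

# Partial trace over subsystem B: reshape rho into indices (i_A, i_B, j_A, j_B)
# and sum over the diagonal in the B indices.
d = 2
rho_A = rho.reshape(d, d, d, d).trace(axis1=1, axis2=3)
print(rho_A)   # maximally mixed state, identity / 2
assert np.allclose(rho_A, np.eye(d) / d)
```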
The use of a mixed state to analyze a subsystem is not done only because of our ignorance, i.e. lack of knowledge, of subsystem $B$.
The other party can be light years away, and whatever we measure cannot be affected by operations performed on the remote subsystem.
Or even, the other party may have crossed the event horizon of a black hole, so even in principle its information may not be accessible to us.
The notion of entanglement does depend on the choice of subsystems with respect to which we want to assess entanglement.
Consider a single photon that passes through a $50\%:50\%$ beam splitter.
If we choose to represent our quantum states with particles, the state is given by
\begin{equation}
\frac{\ket{A} + \ket{B}}{\sqrt{2}},
\end{equation}
with the following meaning: a photon is in a superposition of mode $A$ and mode $B$, with equal amplitudes.
As any state of a single particle, it always has the form of a product state \eqref{eq:product-states}, and thus is not entangled.
However, if we move to the second quantization picture and describe our state by the occupation of modes $A$ and $B$, then the state is
\begin{equation}
\frac{\ket{0,1} + \ket{1,0}}{\sqrt{2}},
\end{equation}
that is, a superposition of
\begin{itemize}
\item having no photons in mode $A$ and a photon in mode $B$,
\item having a photon in mode $A$ and no photons in mode $B$,
\end{itemize}
which is entangled.
Depending on the problem we study, we may want to use one representation or the other.
\subsection{Entanglement}
The non-local character of quantum effects provides a motivation for defining entanglement as quantum correlations that cannot be generated by local operations, even if assisted by classical communication.
As a simple example, a product state $\ket{00}$ can be converted by local operations to $\ket{10}$, another product state.
However, there are no local operations that would allow transforming $\ket{00}$ into an entangled state $(\ket{00}+\ket{11})/\sqrt{2}$.
Since product states can be simulated by classical devices, practically all intrinsically quantum protocols need to rely on entanglement.
However, even arbitrarily small entanglement is sufficient for universal quantum computation \cite{Nest2012}.
In quantum information, we are typically interested in properties up to the choice of local basis.
Consequently, typical entanglement measures are defined up to local unitary operations\footnote{
A generic quantum information scientist will not tell the difference between $(\ket{01}-\ket{10})/\sqrt{2}$ and $(\ket{00}+\ket{11})/\sqrt{2}$.}.
This notion is formalized by \emph{entanglement monotones} \cite{Vidal2000} --- non-increasing quantities under local operations.
The easiest case to study entanglement is a bipartite system in a pure state.
Let us call the parts $A$ and $B$, each of dimension $N$.
In this case all entanglement properties can be studied by choosing a convenient pair of local bases.
This procedure, called the \emph{Schmidt decomposition}, reads
\begin{equation}
\ket{\psi} = \sum_k \lambda_k \ket{\phi_k} \otimes \ket{\varphi_k},
\end{equation}
where $\lambda_k$ are non-negative real numbers and $\{\ket{\phi_k}\}_k$ is a set of orthonormal vectors for subsystem $A$ (and analogously for $\{\ket{\varphi_k}\}_k$ and $B$).
Technically, the Schmidt decomposition is the \emph{singular value decomposition} of the matrix of amplitudes $\psi_{i_A, i_B}$, that is
\begin{gather}
\begin{bmatrix}
\psi_{11} & \hdots & \psi_{1 N} \\
\vdots & \ddots & \vdots \\
\psi_{N 1} & \hdots & \psi_{N N}
\end{bmatrix}
=\\
\begin{bmatrix}
& & \\
\ket{\phi_1} & \cdots \vphantom{\ddots} & \ket{\phi_N}\\
& &
\end{bmatrix}
\begin{bmatrix}
\lambda_1 & & 0 \\
& \ddots & \\
0 & & \lambda_N
\end{bmatrix}
\begin{bmatrix}
& & \\
\ket{\varphi_1} & \cdots \vphantom{\ddots} & \ket{\varphi_N}\\
& &
\end{bmatrix}^T.
\end{gather}
Since it is equivalent to changing local bases, the only quantities related to entanglement are the \emph{Schmidt values}, i.e. $\{\lambda_k\}_k$.
For a product state there is only one non-zero Schmidt value.
Two states present the same entanglement if and only if their sets of Schmidt values are the same.
The Schmidt values contain the same information as
\begin{equation}
\hbox{Tr}[\rho_A^q] = \hbox{Tr}[\rho_B^q] = \sum_k \lambda_k^{2q},
\end{equation}
Consequently, the study of reduced density matrices is related to invariants for a quantum state.
That is, the study of a subsystem is an important tool for studying properties of the global system.
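Numerically, the Schmidt values of a bipartite pure state are the singular values of the reshaped amplitude vector; a sketch with an illustrative two-qubit state, also checking the invariant $\hbox{Tr}[\rho_A^q]=\sum_k \lambda_k^{2q}$ for $q=2$ (the purity):

```python
import numpy as np

# Example state 0.6|00> + 0.8|11> of two qubits.
psi = np.array([0.6, 0.0, 0.0, 0.8])
M = psi.reshape(2, 2)                 # matrix of amplitudes psi_{i_A, i_B}

# Schmidt values = singular values of the amplitude matrix (here [0.8, 0.6]).
lam = np.linalg.svd(M, compute_uv=False)
print(lam)

# Eigenvalues of the reduced state rho_A are lam**2, so
# Tr[rho_A^q] = sum_k lam_k^(2q); check it for q = 2.
rho_A = M @ M.conj().T
q = 2
assert np.isclose(np.trace(np.linalg.matrix_power(rho_A, q)), np.sum(lam**(2*q)))
```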
Multipartite entanglement is significantly more difficult to analyze.
In analogy with the bipartite case, one approach is to consider polynomials in the wavefunction coordinates that are invariant with respect to local unitary operations.
While this method is general, there is no easy procedure to find a complete set of independent invariants.
Mathematically, these invariants can be expressed as expectation values of many copies of the initial state \cite{Grassl1998}.
Thus, considering a supersystem also plays a role in the study of the properties of quantum states.
In this thesis we restrict ourselves to the study of entanglement for pure states.
While a lot of research in quantum information is focused on qubits, we work mainly on systems of finite, but arbitrary, dimension.
In a number of cases the qudit problem is significantly harder than the qubit one, and either requires qualitatively different proofs to show the same properties or exhibits properties that cannot be reduced to the qubit case.
\subsection{Quantum information with photons}
One practical and promising tool for quantum information are photons, the excitations of the electromagnetic field.
They are massless particles, with spin $1$ and Bose statistics.
Their main advantage is the ease of creation, transmission and measurement.
Photons travel in transparent media without interacting among themselves or entangling with the environment. Consequently, a quantum state created with photons in one place can be processed and measured in another place.
In fact, many hallmark properties of quantum information were first demonstrated using photons, for example BB84 protocol for cryptography \cite{Bennett1992}, quantum teleportation \cite{Boschi1998} and Bell test \cite{Aspect1982}.
Photon pure states, as any bosons, can be described as a polynomial of creation operators acting on the electromagnetic vacuum, for example
\begin{equation}
\left(\tfrac{1}{3\sqrt{2}}a_1^{\dagger 3} + \tfrac{1}{\sqrt{3}} a_1^\dagger a_2^\dagger + \tfrac{1}{\sqrt{3}} a_3^\dagger \right)\ket{\Omega},
\end{equation}
where by $\ket{\Omega}$ we denote the vacuum state; the state being described is a superposition of
\begin{itemize}
\item three photons in mode $1$,
\item one photon in mode $1$ and one photon in mode $2$,
\item one photon in mode $3$.
\end{itemize}
Since creation operators commute, the permutation symmetry of bosonic states is ingrained in the polynomial representation.
Not every multiphoton state can be easily created.
The easiest ones are coherent states, which are naturally created by lasers, and squeezed states, i.e. states of light which can be created by the propagation of a strong light beam through a nonlinear crystal.
A more difficult method of state creation is via cavity quantum electrodynamics \cite{Brattke2001,Haroche2013}.
Further processing can be done using linear optics, i.e. beam splitters and phase retarders.
Other operations are significantly harder to perform, e.g. quantum non-demolition measurements \cite{Braginsky1980,Brune1992}, or give only probabilistic results, e.g. conditional measurement \cite{Lvovsky2001}.
\subsection{Entanglement invariants for symmetric states}
Passive linear optics can be understood as the set of operations on multi-photon states restricted to many-particle interference \cite{Tichy2013}, but without interaction among the particles.
Even this small subset of all conceivable operations is useful for quantum communication and
cannot be efficiently simulated by classical computers \cite{Aaronson2010}.
Along with conditional measurement, linear optics is as powerful as a universal quantum computer \cite{Knill2001}.
Since it is easy to apply linear optics operations in a laboratory, the difficulty of quantum state generation, processing and measurement is related to transformations that cannot be performed in this framework.
In this thesis, we have undertaken the task of finding out which photon states can be reached from a given state, using only linear optics.
Our key contribution is the introduction of two families of invariants \cite{Migdal2014ffdag}, which are straightforward to compute and provide both numerical and analytical insight into the geometry of permutation-symmetric states \cite{BengtssonZyczkowski2006book}.
They are expressed as the expectation values of polynomials in annihilation and creation operators, and are related to particular symmetries of the state.
Moreover, we show an experimental scheme, using an optical setting, to directly measure the values for one of the families of invariants.
We show that our problem is equivalent to the problem of assessing the equivalence between permutation-symmetric states of distinguishable particles \cite{Migdal2013aaa}.
This in particular builds a bridge between invariants for linear operations acting on bosonic states and entanglement properties of distinguishable states.
Even if photons are relatively uncoupled from transparent media, at distances suitable for practical applications particle loss and interaction with the environment are inevitable.
We show a way to overcome these problems by transmitting quantum information encoded in singlet states built with photons \cite{Migdal2011dfs}.
These states are invariant under collective decoherence.
At the same time, the quantum information they carry is immune against all one-particle losses.
We propose an experimental protocol as a proof-of-principle demonstration of these properties.
\subsection{Quantum sequences and qubism}
Quantum entanglement of many particles is difficult to describe and quantify even for pure states.
If we want to get insight into the structure of a given state, one approach is to calculate its various entanglement measures.
In order to analyze the full state, we introduce a visualization scheme, called \emph{qubism}, for pure states of many $d$-level particles, which generates two-dimensional images \cite{Rodriguez-Laguna2011}.
We plot all amplitudes of a given state in the computational basis, and arrange them in a specific way, which makes certain quantum properties visible, see Fig.~\ref{fig:intro-qubism}.
In particular, due to the recursive nature of the plot, translational symmetry shows up as self-similarity of the plot, while entanglement manifests itself in the character of this self-similarity.
One-dimensional spin chains constitute an interesting class of Hamiltonians, which play a role as toy models and have proved to be a fruitful ground for developing techniques for analyzing many-body states.
Properties such as ferromagnetism, block entanglement, transport properties \cite{Lewenstein2012book} and correlation length were studied in such systems, especially in the context of quantum phase transitions.
One of the most relevant tasks of quantum many-body physics is to investigate how the ground state changes with the parameters of the Hamiltonian \cite{Sachdev1999book}.
We employ qubism to exhibit particular properties of spin chains and show how the plot can be used to make conjectures about the structure of the state.
We show that phase transitions are usually apparent, and can be seen without previous knowledge of the order parameter.
For numerous physical systems the ground state is a singlet state \cite{Auerbach1994}, that is, it belongs to the subspace of zero total angular momentum --- this property is also visible in the plot.
\begin{figure}[!ht]
\centering
\begin{tabular}{cc}
\includegraphics[width=0.4\textwidth]{figs/introduction/scheme1} &
\includegraphics[width=0.4\textwidth]{figs/introduction/scheme2} \\
\includegraphics[width=0.4\textwidth]{figs/introduction/heis12_pbc} &
\includegraphics[width=0.4\textwidth]{figs/introduction/dicke12_half}
\end{tabular}
\caption{Qubism, a 2D plotting scheme of many-body wavefunctions.
Top:
A recipe for the recursive visualization scheme.
For the first iteration level the basis for the first two particles,
$|00\rangle$, $|01\rangle$, $|10\rangle$ and $|11\rangle$,
is mapped into one of the four smaller squares (left).
Next, we repeat the procedure for the next two particles, taking the smaller squares as the starting point (right).
Bottom: Examples of qubistic plots for qubit states of $N=12$ particles.
We plot the ground state for the Heisenberg Hamiltonian with the periodic boundary conditions (left) and the half-filled Dicke state (right).
Saturation represents the absolute value of the amplitude and color represents the sign.
\label{fig:intro-qubism}
}
\end{figure}
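The recursive recipe of Fig.~\ref{fig:intro-qubism} can be written down compactly. A minimal Python sketch of the mapping from amplitudes to a 2D array; the quadrant convention chosen here (odd-position qubits select the row, even-position qubits the column) is one possible choice:

```python
import numpy as np

def qubism_plot(amplitudes, n_qubits):
    """Arrange the 2**n amplitudes of an n-qubit state (n even) into a 2D
    array, pairing qubits (1,2), (3,4), ... from coarse to fine quadrants."""
    half = n_qubits // 2
    side = 2 ** half
    img = np.zeros((side, side), dtype=complex)
    for idx, amp in enumerate(amplitudes):
        bits = [(idx >> (n_qubits - 1 - k)) & 1 for k in range(n_qubits)]
        row = sum(b << (half - 1 - k) for k, b in enumerate(bits[0::2]))
        col = sum(b << (half - 1 - k) for k, b in enumerate(bits[1::2]))
        img[row, col] = amp
    return img

# The two-qubit GHZ state lands on the diagonal corners of a 2x2 plot.
ghz = np.array([1, 0, 0, 1]) / np.sqrt(2)
print(np.round(qubism_plot(ghz, 2).real, 3))
```

For longer chains the same function produces the recursive quadrant structure shown in the figure.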
\subsection{Complexity and the R\'enyi entropy}
Self-similarity of qubistic plots can not only be seen, but also quantified, using their fractal dimensions.
As we deal with a probability distribution derived from quantum mechanics, rather than a set,
its fractal dimension is characterized by a function rather than by a single number.
We use the R\'enyi entropy \cite{Renyi1961} of the order $q$, which is defined as:
\begin{equation}
H_q = \frac{1}{1-q} \log_2 \left( \sum_{i=1}^N p_i^q \right),
\label{eq:renyi}
\end{equation}
where $(p_1,\ldots,p_N)$ is a probability distribution.
Its scaling properties with coarse-graining of the plot can be quantified as the multifractal dimensions \cite{Halsey1986,Theiler1990}.
That is, for fractals the entropy \eqref{eq:renyi} is expected to grow linearly with each doubling of the resolution.
The fractal dimension is defined as the linear coefficient.
The parameter $q$ is related to the sensitivity to low and high probability densities.
In particular, for $q \to 0$ we obtain the fractal dimension of the support, i.e. all non-zero probabilities, whereas for $q \to \infty$ we obtain the fractal dimension of the set of the most probable outcomes.
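A direct implementation of Eq.~\eqref{eq:renyi}, with the Shannon limit $q \to 1$ handled separately, can serve as a quick check of these limits (a minimal sketch using \texttt{numpy}):

```python
import numpy as np

def renyi_entropy(p, q):
    """Renyi entropy H_q, in bits, of a probability distribution p."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                              # convention: 0 log 0 = 0
    if np.isclose(q, 1.0):                    # Shannon entropy as the q -> 1 limit
        return float(-np.sum(p * np.log2(p)))
    return float(np.log2(np.sum(p ** q)) / (1.0 - q))

# For the uniform distribution H_q = log2(N) for every q.
print(renyi_entropy([0.25] * 4, 0), renyi_entropy([0.25] * 4, 2))  # -> 2.0 2.0
```

For $q=0$ the function returns the logarithm of the support size, in line with the limits discussed above.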
The R\'enyi entropy has applications to other quantum problems, in particular for the entropic uncertainty principle \cite{Bialynicki-Birula1975,Bialynicki-Birula2006}, a generalization of Heisenberg's uncertainty principle.
Moreover, a quantum variant of the R\'enyi entropy, where the sum $\sum_i p_i^q$ is replaced by $\hbox{Tr}[\rho^q]$, has applications in assessing the purity of a mixed state.
Consequently, when applied to reduced density matrices, these entropies are entanglement invariants.
Additionally, the R\'enyi entropy has applications in the study of other complex systems, e.g. the probability and degree distributions of complex networks.
A low value implies high heterogeneity of a network.
\subsection{Quantum complex networks}
Many complex systems can be represented in an abstract way as a graph, that is, a set of nodes connected by edges.
When we analyze real systems these graphs are called complex networks \cite{Albert2002,Newman03thestructure}.
The structure of a complex network can be characterized with a number of parameters.
The simplest one beyond the node and edge count is the degree distribution, that is, the distribution of nodes with respect to the number of outgoing edges.
This parameter tells us how homogeneous or heterogeneous nodes are with respect to their connections to other nodes.
Sometimes the degree distribution function can be identified as a power law.
In quantum mechanics, a Hamiltonian can be viewed as a graph, with edges between nodes being weighted by the respective transition amplitudes.
The unitary evolution of a quantum state can be interpreted as a quantum walk, in which the walker tunnels to its neighboring nodes.
Unlike a classical random walk, in which the probability distribution converges to a steady state, in a quantum walk the long-time behavior depends on the initial state and oscillates rather than converging to a steady state.
Nonetheless, a natural question would be to compare classical and quantum behavior.
We have found that after averaging out oscillations, the probability distributions are close to each other, and their difference depends on the degree distribution of the network \cite{Faccin2013}.
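The comparison can be illustrated on a toy graph. For a continuous-time quantum walk generated by an adjacency matrix with a nondegenerate spectrum, the time-averaged probability at node $j$ is $\sum_k |\langle j|k\rangle|^2\,|\langle k|\psi_0\rangle|^2$ over eigenvectors $\ket{k}$, while the classical random walk converges to a distribution proportional to the node degree. A minimal \texttt{numpy} sketch (the 4-node path graph is an arbitrary example, not one from the cited paper):

```python
import numpy as np

# Toy graph: a path of 4 nodes; the adjacency matrix doubles as the
# quantum walk Hamiltonian.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Classical random walk: the stationary distribution is proportional to degree.
deg = A.sum(axis=1)
p_classical = deg / deg.sum()

# Quantum walk from node 0: time-averaged distribution via the eigenbasis of A
# (the spectrum of this graph is nondegenerate, so cross terms average out).
energies, V = np.linalg.eigh(A)
psi0 = np.zeros(4)
psi0[0] = 1.0
overlaps = np.abs(V.T @ psi0) ** 2           # |<k|psi0>|^2
p_quantum = (np.abs(V) ** 2) @ overlaps      # sum_k |<j|k>|^2 |<k|psi0>|^2

print(np.round(p_classical, 3), np.round(p_quantum, 3))
```

Comparing the two printed distributions gives a node-by-node picture of the difference discussed above.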
In order to get insight into the structure of a complex network, we can split a graph into communities \cite{Fortunato2010}, that is, subgraphs, each of them with nodes more densely connected inside than with the rest of the graph.
It allows both to study in depth each subgraph and to analyze a coarse-grained graph, with communities being the new nodes.
An archetypal community would be a clique --- a subgraph with all nodes connected within itself and with no outgoing edges.
However, in real-world data communities are typically less pronounced, see Fig.~\ref{fig:tagoverflow}.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{figs/introduction/tagoverflow_physics_quantum}
\caption{\label{fig:tagoverflow}
An example of community detection applied to graph visualization: the network of tags associated with \texttt{quantum-mechanics} on Physics Stack Exchange, a question-and-answer site.
The area of each node is proportional to the number of questions labeled by a given tag.
Edge widths are related to the number of questions with both tags.
Edge shades are related to correlation between two tags: $P(\text{tag1} \cap \text{tag2})/[P(\text{tag1})P(\text{tag2})]$.
Each color represents a distinct community, based on a greedy modularity maximization.
This open-source project by the author is accessible via the following link: \url{http://stared.github.io/tagoverflow/}.}
\end{figure}
There is no universal recipe for \emph{community detection}, i.e. splitting a graph into communities.
A typical approach is to define some target function, which is maximal for the best splitting.
Perhaps the most common one is modularity \cite{Newman2004,Clauset2004} --- a quantity measuring how much more densely nodes are connected inside a community than with the outside.
But even for modularity, an exact maximization is an NP-complete problem \cite{Brandes2008}, so heuristic methods are important, for example greedy or hierarchical models, inspired by renormalization \cite{Blondel2008}.
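As an illustration of the greedy approach, a minimal sketch assuming the \texttt{networkx} library is available (the karate club graph is a standard benchmark, used here only as an example):

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

G = nx.karate_club_graph()                       # standard benchmark graph
communities = greedy_modularity_communities(G)   # bottom-up greedy merging
Q = modularity(G, communities)
print(len(communities), round(Q, 3))
```

The greedy algorithm merges communities pairwise, always choosing the merge that increases modularity the most, which is exactly the heuristic character mentioned above.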
Another important question is related to the strength of quantum effects.
At small scales quantum effects are apparent --- energies of molecules are directly determined by quantum mechanics.
However, biological systems do not seem to exhibit long-range quantum effects.
So, even at the level of a single excitation, how can we assess a typical scale for quantum effects?
We focus on a physical system, \emph{light harvesting complex II}, utilized by plants to absorb light.
To investigate regions with strong quantum effects, we introduce a community detection method suited for quantum walks \cite{Faccin2013community}.
Our approach is to look at the behavior of a single excitation, performing a quantum walk from a given site.
We distinguish between nodes for which interference effects are relevant and those for which they are not.
The distinction is established via the density matrix.
We define the splitting of a graph into communities as removing all coherence between communities. Our target functions, instead of the modularity, are typical quantum informational measures of the purity of a state or the fidelity of a quantum channel.
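A simplified sketch of the ingredients: for a nondegenerate Hamiltonian, time-averaging dephases the density matrix in the energy eigenbasis, after which the coherence remaining between candidate communities can be inspected. The summed magnitude of off-diagonal elements used below is a simplified stand-in for the purity- and fidelity-based target functions mentioned above (a \texttt{numpy} sketch on an arbitrary toy graph):

```python
import numpy as np

def time_averaged_rho(H, rho0):
    """Long-time average of exp(-iHt) rho0 exp(iHt): for a nondegenerate H
    this dephases rho0 in the energy eigenbasis."""
    energies, V = np.linalg.eigh(H)
    rho_eig = V.conj().T @ rho0 @ V
    return V @ np.diag(np.diag(rho_eig)) @ V.conj().T

def inter_community_coherence(rho, groups):
    """Summed magnitude of density-matrix elements between different groups
    (a simplified coherence measure, not the actual target functions)."""
    total = 0.0
    for i_group, group_a in enumerate(groups):
        for group_b in groups[i_group + 1:]:
            for i in group_a:
                for j in group_b:
                    total += abs(rho[i, j])
    return total

# Single excitation starting on node 0 of a 4-node path graph.
H = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
rho0 = np.zeros((4, 4))
rho0[0, 0] = 1.0
rho_bar = time_averaged_rho(H, rho0)
print(round(inter_community_coherence(rho_bar, [[0, 1], [2, 3]]), 4))
```

A good community splitting would be one for which this inter-community coherence is small.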
\section{Structure}
This PhD thesis is divided into three chapters, each one describing a distinct thread of research,
connected through common concepts and methods related to study of entanglement, symmetry and self-similarity of quantum systems.
All chapters are aimed to be self-contained, and they can be read in any order.
In Chapter~\ref{ch:invariants}:~\textbf{\nameref{ch:invariants}}
we study the problem of which many-photon states are equivalent up to the action of passive linear optics.
We prove that it can be converted into the problem of equivalence of two permutation-symmetric states, not necessarily restricted to the same operation on all parties.
We show that the problem can be formulated in terms of symmetries of complex polynomials of many variables, and provide two families of invariants, which are straightforward to compute and provide analytical results.
Furthermore, we prove that some highly symmetric states (singlet states implemented with photons) offer two degrees of robustness --- both against collective decoherence and a photon loss.
Additionally, we provide two proposals for experiments, feasible with an optical setup and current technology: one related to the direct measurement of a family of invariants using photon-counting, and the other on protecting transmitted quantum information employing the symmetries of the state.
In Chapter~\ref{ch:qubsim}:~\textbf{\nameref{ch:qubsim}}
we study a family of recursive visualization schemes for many-particle systems, for which we have coined the name \emph{qubism}.
While all many-qudit states can be plotted with qubism, it is especially useful for spin chains and one-dimensional translationally invariant states.
This symmetry results in self-similarity of the plot, making it more comprehensible and allowing us to discover certain structures from it.
This visualization scheme allows us to compare states of different particle numbers (which may be useful in numerical simulations when the particle number is an open parameter) and puts emphasis on correlations between neighboring particles.
The visualization scheme can also be used to plot probability distributions of sequences, e.g. sequences of nucleotides in RNA and DNA, or of amino acids in proteins.
However, unlike classical probabilistic ensembles of sequences, visualizing quantum states offers more --- showing entanglement and allowing us to observe quantum phase transitions.
In Chapter~\ref{ch:networks}:~\textbf{\nameref{ch:networks}}
we study quantum walks of a single particle on graphs, which are quantum analogues of classical random walks.
Our focus is on the long-time limit of the probability distribution.
We define ``quantumness'' to be the difference between the probability distributions of the quantum and related random walks.
Moreover, we study how (especially in the long-time limit) off-diagonal elements of the density matrix behave.
That is, we measure coherence between different nodes,
and we use this coherence to perform quantum community detection --- splitting of a graph into subgraphs in such a way that the coherence between them is small.
We perform a bottom-up hierarchical aggregation, with a scheme similar to modularity maximization, which is a standard tool for so-called community detection in (classical) complex networks.
However, our method captures properties that classical methods cannot --- the impact of constructive and destructive interference, as well as the dependence of the results on the tunneling phase.
\section{Contribution}
This PhD thesis is based on the following peer-reviewed papers and preprints, in the chronological order:
\begin{itemize}
\item \cite{Migdal2011dfs},
Chapter~\ref{ch:invariants}, especially Sec.~\ref{s:singlet-space}:\\
P. Migdał, K. Banaszek,\\
Immunity of information encoded in decoherence-free subspaces to particle loss,\\
Phys. Rev. A 84, 052318 (2011), arXiv:1107.3786,
\item \cite{Rodriguez-Laguna2011},
Chapter~\ref{ch:qubsim}:\\
J. Rodriguez-Laguna, P. Migdał, M. Ibanez Berganza, M. Lewenstein, G. Sierra,\\
Qubism: self-similar visualization of many-body wavefunctions,\\
New J. Phys. 14 053028 (2012), arXiv:1112.3560,
appeared in the New Journal of Physics Highlights of 2012,
\item \cite{Migdal2013aaa},
Chapter~\ref{ch:invariants}, especially Sec.~\ref{s:slocc-equivalence}:\\
P. Migdał, J. Rodriguez-Laguna, M. Lewenstein,\\
Entanglement classes of permutation-symmetric qudit states: symmetric operations suffice,\\
Phys. Rev. A 88, 012335 (2013), arXiv:1305.1506,
\item \cite{Migdal2014ffdag},
Chapter~\ref{ch:invariants}, especially Sec.~\ref{s:polynomial-invariants}:\\
P. Migdał, J. Rodríguez-Laguna, M. Oszmaniec, M. Lewenstein,\\
Multiphoton states related via linear optics,\\
Phys. Rev. A 89, 062329, arXiv:1403.3069,
selected by PRA as Editors' Suggestion,
\item \cite{Faccin2013},
Chapter~\ref{ch:networks}, especially Sec.~\ref{sec:walks}:\\
M. Faccin, T. Johnson, J. Biamonte, S. Kais, P. Migdał,\\
Degree Distribution in Quantum Walks on Complex Networks,\\
Phys. Rev. X 3, 041007 (2013), arXiv:1305.6078,
\item \cite{Faccin2013community},
Chapter~\ref{ch:networks}, especially Sec.~\ref{sec:community}:\\
M. Faccin, P. Migdał, T. Johnson, J. Biamonte, V. Bergholm,\\
Community Detection in Quantum Complex Networks,\\
Phys. Rev. X 4, 041012 (2014), arXiv:1310.6638.
\end{itemize}
Moreover, the author wrote a paper in mathematical psychology \cite{Migdal2011twochoice}.
Additionally, the author contributed to open source projects related to the thesis, in particular:
\begin{itemize}
\item QuTiP (\href{http://qutip.org/}{\texttt{qutip.org}}, a Python package for quantum physics): implemented qubism and other visualizations of many body quantum states, improved the Bloch sphere visualization \cite{Johansson2012a},
\item Wikipedia article on matrix product states \cite{wiki:matrix_product_state}.
\end{itemize}
\chapter{Invariants for bosonic and symmetric states}
\label{ch:invariants}
\section{Introduction}
In quantum physics, one of the fundamental symmetries is the symmetry with respect to exchange of particles.
At least in three-dimensional space, particle states must be either symmetric with respect to particle interchange (bosons) or antisymmetric (fermions).
Even for distinguishable particles (i.e. fermions or bosons where each particle occupies an exclusive set of modes, e.g. different spatial positions), symmetry with respect to particle exchange still plays a role.
A permutation-symmetric state is a state of the maximal total spin \cite{Dicke1954};
on the opposite end there is the singlet subspace (i.e. the subspace with total spin zero), which has some antisymmetric properties.
In quantum information, we are almost always interested in properties up to the choice of local basis.
Consequently, typical entanglement measures are defined up to local unitary operations.
In particular, \emph{entanglement monotones} \cite{Vidal2000} are defined as quantities that are non-increasing under local operations.
In this chapter we study the relation between symmetries of the state (especially: permutation symmetry and singlet state) and its capabilities to be used in information theory.
We attempt to answer the question of which pairs of states can be transformed into each other within a fixed set of operations.
The content of this chapter is as follows.
In Sec.~\ref{sec:symmetry-overview} we present an overview and background knowledge related to entanglement and its relation to symmetries, in particular --- local unitary equivalence.
Sec.~\ref{sec:preliminaries} introduces some notation and basic mathematical facts that we use through this chapter. In particular we introduce basics of the geometry of Hilbert space, equivalences with respect to local operations and provide simple examples.
In Sec.~\ref{s:slocc-equivalence} we show rigorously the relation between geometry of bosonic states subjected to linear operations and the local equivalence of permutation-symmetric states.
Moreover, as a byproduct of the methods we apply, we introduce a discrete family of states, which contains W and GHZ states as special cases.
In Sec.~\ref{s:polynomial-invariants} we introduce two families of polynomial invariants to test whether two many-photon states can be related via linear optics.
We show their relation to the geometry of bosonic states and propose an experimental setup for their direct measurement.
In Sec.~\ref{s:singlet-space} it is shown that creating singlet states from bosons does not only protect the information against collective decoherence, but also makes it immune to one-particle loss.
Unless explicitly stated, we work on pure states with fixed number of particles $n$, each with the same number of levels $d$.
\section{Overview}
\label{sec:symmetry-overview}
Entanglement is perhaps the most important resource for quantum information (for a review see \cite{Horodecki2009}), and its characterization is one of the key tasks of quantum theory.
Particularly difficult is the problem of characterizing entangled mixed states (for a recent review of various necessary criteria see \cite{Guehne2009}).
The problem for pure states is much simpler.
But even in this case, only a few settings are completely understood --- in particular, bipartite entanglement, where the Schmidt decomposition provides a method of classification of pure entangled states of two parties \cite{Horodecki2009}.
In a multipartite scenario very little is known about the different classes of entanglement. Typical questions that one would like to answer concern entanglement classes of pure states which are invariant with respect to local operations, typically assumed to constitute a group (unitary, general linear, etc.).
The corresponding classes of states are then called LU- or SLOCC-equivalent, where LU denotes local unitary, and SLOCC --- stochastic local operations and classical communication. Only a few rigorous results are known concerning these questions, which we list below:
\begin{itemize}
\item For three qubits a generalization of the Schmidt decomposition has been formulated (see \cite{acin2000generalized} and references therein) --- this result provides a classification of invariant states with respect to local unitaries. There is a considerable amount of work regarding this and the related problem of geometrical invariants by the Sudbery group \cite{Carteret2000, Williamson2011}.
\item Classification of entanglement of three-qubit states according to LU and SLOCC has been presented in Refs.~\cite{Sudbery2000,DeVicente2012} and \cite{Dur2000}, respectively.
\item Classification of entanglement of four-qubit states according to SLOCC has been presented in Ref.~\cite{Verstraete2002} (see also a series of papers by Miyake \cite{Miyake2002, Miyake2003}).
\item For many qudits a multiparticle generalization of the Schmidt decomposition \cite{Carteret2000, Verstraete2003} provides a general way to answer whether two states are LU-equivalent.
\end{itemize}
There is also a considerable amount of work on many-qubit states, cf. \cite{Miyake2004, Miyake2004a,Kraus2010a, Kraus2010b}, but very little is known about general many-qudit states.
The difficulty of classifying entanglement for multipartite pure states is one of the motivations for considering restricted families of states.
Such restrictions are typically introduced by considering symmetries \cite{Sawicki2012}, which might be physically motivated. In this spirit many authors considered totally permutation-symmetric pure states of $n$ qubits (cf. \cite{Aulbach2010,Devi2010,Mathonet2010,Aulbach2011,Cenci2010a, Ganczarek2011}), since such states naturally describe systems of many bosons, and appear frequently in the context of quantum optics. Similarly, quantum correlations in totally antisymmetric states (as representative states of fermions) have been intensively investigated (for a review see \cite{Eckert2002} and references therein). In the next introductory subsection we focus on symmetric
states and their particular role in physical applications.
A many-qudit wavefunction can be permutation-symmetric for two reasons. One is when it describes a system of bosons, so
that the particles are indistinguishable on a fundamental level. The second is when the particles are distinguishable but, because of a particular setting (e.g. a Hamiltonian for which the particles form an eigenstate), they happen to be in a permutation-symmetric state. The latter situation occurs for instance in the Lipkin-Meshkov-Glick model \cite{Lipkin1965} of nuclear shell structure, and in related models of quantum chaos \cite{Gnutzmann2000}.
It is worth stressing that the two situations are not the same.
In the latter case we are able to manipulate each particle separately, while in the former we are restricted to operations modifying each boson in the same way. The question is whether these two settings give rise to the same entanglement classes, i.e. whether for symmetric states the classification can be reduced to studying operations that act in the same way on all particles.
Moreover, the entanglement geometry of permutation-symmetric states is interesting and relevant, e.g. for quantum computation using linear optics \cite{Aaronson2010}.
As mentioned above, this question was widely studied in the qubit case \cite{Aulbach2010a, Devi2010, Ganczarek2011}, but most of the obtained results are not applicable to qudit systems of dimension $d>2$, the general case which we address here.
In the field of {\em quantum optics}, this question can be recast in this way: can a given multi-photon state, $\ket{\psi_1}$, be transformed into another one, $\ket{\psi_2}$, using only {\em linear optics}? By (passive) linear optics we mean the use of beam-splitters and wave plates, which is known to be equivalent to the action of arbitrary unitary operations on each mode \cite{Lee2002}.
This question bears special relevance both in theory and practical applications. On the theoretical side, linear optics with postselection has been proved to be able to efficiently realize a universal quantum computer \cite{Knill2001, Kok2007}. But even without postselection, linear optics transformations of multi-photon states constitute an intermediate stage between classical and full-fledged quantum computation \cite{Aaronson2010}. On the practical side, our ability to generate decoherence-free states \cite{Bourennane2004, Wu2011} relies on our ability to transform multi-photon states.
Operation by linear optics can be viewed as multi-particle interference, as opposed to multi-particle interaction. But beyond generic interference phenomena, it exhibits effects which are specific to bosons \cite{Tichy2012,Tichy2013}.
In this chapter we consider equivalence under linear optics transformations of pure states of $n$ photons in $d$ modes, disregarding the possibility of postselection. We show that the problem is equivalent to the LU-equivalence of bosonic states.
As an illustration, let us consider a state of two photons occupying two different modes or channels, $\ket{\psi_1}=\ket{1,1}$ -- i.e. one photon in each mode.
It is possible to transform this state into $\ket{\psi_2}=(\ket{2,0}-\ket{0,2})/\sqrt{2}$ using Hong-Ou-Mandel interference \cite{Hong1987} (i.e.: two-photon interference in a $50\%:50\%$ beam splitter), but it is not possible to place both photons in the same channel with $100\%$ efficiency, i.e.: $\ket{\psi_3}=\ket{2,0}$ is not achievable.
Of course, it is always possible to perform postselection, measuring the number of photons in the second channel and retaining only the states that contain none, but the efficiency will drop to $50\%$.
Translating the problem to LU-equivalence, we can say that the state $(\ket{01}_P + \ket{10}_P)/\sqrt{2}$ (one particle in one mode and one in the other, symmetrized) is not equivalent to the state $\ket{00}_P$ (both particles in the same state).
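The Hong-Ou-Mandel transformation above can be checked symbolically: a $50\%:50\%$ beam splitter maps $a^\dagger \to (a^\dagger + b^\dagger)/\sqrt{2}$ and $b^\dagger \to (a^\dagger - b^\dagger)/\sqrt{2}$. Treating the commuting creation operators as polynomial variables, a minimal sketch assuming \texttt{sympy}:

```python
import sympy as sp

a, b = sp.symbols('a b')    # commuting creation operators as polynomial variables
# 50:50 beam splitter acting on the creation operators:
a_out = (a + b) / sp.sqrt(2)
b_out = (a - b) / sp.sqrt(2)
# The input |1,1> corresponds to the monomial a*b.
out = sp.expand(a_out * b_out)
print(out)                  # a**2/2 - b**2/2
# With (a^dagger)^2 |vacuum> = sqrt(2) |2>, this is (|2,0> - |0,2>)/sqrt(2).
```

The absence of the cross term $ab$ is exactly the Hong-Ou-Mandel effect: both photons bunch into the same output mode.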
Moreover, quantum systems are powerful yet fragile carriers of information. The ability to create and manipulate superposition states offers verifiably secure cryptography \cite{Gisin2002,Scarani2009}, reduces the complexity of certain computational problems \cite{Ekert1996}, and enables novel communication protocols \cite{Buhrman2010}. However, in practical settings one needs to protect the quantum states carrying information against decoherence, i.e. uncontrolled interactions with the environment. This is accomplished by building redundancy into the physical implementation.
Compared to the classical case, this task is much more challenging \cite{Gottesman2010} due to limitations in handling quantum information, boldly exemplified by the no-cloning theorem \cite{Wootters1982}.
When an ensemble of elementary quantum systems decoheres through symmetric coupling with the environment, one can identify collective states that remain invariant in the course of evolution. These states span a so-called \emph{decoherence-free subspace} (DFS) that is effectively decoupled from the interaction with the environment \cite{Duan1997,Zanardi1997,Lidar1998}. More generally, it is possible to identify subspaces that can be formally decomposed into a tensor product of two subsystems, one of which absorbs decoherence, while the second one, named a \emph{noiseless subsystem} or \emph{a decoherence-free subsystem}, remains intact \cite{Kempe2001}.
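For the simplest case, the two-qubit singlet $(\ket{01}-\ket{10})/\sqrt{2}$, collective decoherence acts as $U\otimes U$ and the state only acquires the global phase $\det U$, which is the basic mechanism behind such decoherence-free encodings. A minimal \texttt{numpy} check:

```python
import numpy as np

rng = np.random.default_rng(0)
# Random 2x2 unitary via QR decomposition of a complex Gaussian matrix.
Z = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
Q, R = np.linalg.qr(Z)
U = Q @ np.diag(np.diag(R) / np.abs(np.diag(R)))

singlet = np.array([0, 1, -1, 0]) / np.sqrt(2)   # (|01> - |10>)/sqrt(2)
out = np.kron(U, U) @ singlet
phase = np.linalg.det(U)                         # collective action = global phase
assert np.allclose(out, phase * singlet)
```

Since a global phase is physically irrelevant, the singlet is untouched by any collective unitary noise.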
\section{Preliminaries}
\label{sec:preliminaries}
\subsection{Representations and notation for symmetric states}
Let us consider the system of $n$ photons in $d$ modes. There are, at least, two possible descriptions of the Hilbert space ${\cal S}_n^d$ describing the system. In a {\em mode description}, i.e.: the second quantization picture (see for example \cite{FetterBook1971}), ${\cal S}_n^d$ is treated as a subspace of the full Fock space $\mathrm{Fock}\left({\mathbbm C} ^d \right)$. Let $\vec n\equiv(n_1,\cdots,n_d)$ be a multi-index denoting the photon count for each mode and let $|\vec n| = \sum_{k=1}^d n_k$. The basis states spanning ${\cal S}_n^d$ are specified by the photon count on each mode,
\begin{equation}
\ket{\vec n} =
\frac{(a^\dagger_1)^{n_1}\cdots(a^\dagger_d)^{n_d}}{\sqrt{(n_1!)\cdots(n_d!)}}
\ket{\Omega}
\equiv
{\tilde a}^\dagger_{\vec n} \ket{\Omega},\ |\vec n| =n \ .
\label{eq:multia}
\end{equation}
In the above expression $\ket{\Omega}$ is the Fock vacuum, $a_1,\ldots,a_d$ are annihilation operators, and ${\tilde a}^\dagger_{\vec n}$ is a normalized monomial defined as above, creating $\ket{\vec n}$ from vacuum.
In a {\em particle description}, Hilbert space ${\cal S}_n^d$ is treated as the permutation-symmetric subspace of $({\mathbbm C}^d)^{\otimes n}$, $\mathrm{Sym}^n \left({\mathbbm C} ^d\right)$. Let us fix the basis vectors of ${\mathbbm C} ^d$: $\ket{1},\ket{2},\ldots,\ket{d}$. Basis states of $\left({\mathbbm C} ^d\right)^{\otimes n}$ with a simple tensorial form,
\begin{equation}
\ket{\phi}=\ket{i_1}\otimes \ket{i_2}\otimes \cdots \otimes \ket{i_n} \ ,\ i_k \in \left\{1,\ldots,d\right\}\ ,
\end{equation}
are not {\em permutation symmetric}. A basis for $\mathrm{Sym}^n\left({\mathbbm C} ^d\right)$ is obtained from product vectors in $({\mathbbm C}^d)^{\otimes n}$ by symmetrization over all factors in the tensor product. Let us define an asymmetric state from given mode counts $\vec{n}=(n_1,\cdots,n_d)$:
\begin{equation}
\ket{\vec{n}}_A \equiv \ket{1}_P^{\otimes n_1} \otimes \ket{2}_P^{\otimes n_2}
\otimes \cdots \otimes \ket{d}_P^{\otimes n_d}\ .
\label{eq:ordered-state}
\end{equation}
In the above expression we explicitly put the subscript $P$ to emphasize that we deal with tensor product of states in particle representation.
The state $\ket{\vec{n}}_A$ can be thought of as a naive state in particle representation with the corresponding photon counts for each mode.
The corresponding normalized symmetric state is given by:
\begin{equation}
\ket{\vec{n}} =
\frac{\sqrt{n_1! \ldots n_d!}}{\sqrt{n!}}
\sum_{\mathrm{perm}} \ket{1}_P^{\otimes n_1} \otimes \ket{2}_P^{\otimes n_2}
\otimes \cdots \otimes \ket{d}_P^{\otimes n_d}\ ,
\label{eq:fock_in_particle_representation}
\end{equation}
where the sum is over the distinct permutations of the factors appearing in the tensor product. Notice the required normalization factor. There exists another way of expressing the state $\ket{\vec{n}}$ in the particle basis
\begin{equation}
\ket{\vec{n}} = N(\vec{n})\; {\mathbbm P}^{(n)}_{sym} \ket{\vec{n}}_A \ ,
\end{equation}
where ${\mathbbm P}^{(n)}_{sym}$ is the projector onto the completely symmetric subspace of $({\mathbbm C}^d)^{\otimes n}$ and the normalization factor $N(\vec{n})$ is given by
\begin{equation}
N(\vec{n}) = \sqrt{\frac{n!}{n_1!\cdots n_d!}}\ .
\end{equation}
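As a quick numerical illustration of the normalization above, one can build $\ket{\vec n}$ explicitly as a vector in $({\mathbbm C}^d)^{\otimes n}$ and check that it has unit norm (a sketch; the function name is ours):

```python
import itertools
import math
import numpy as np

def fock_in_particle_rep(counts):
    """Vector of |n_1,...,n_d> in the particle basis of (C^d)^{otimes n}."""
    d, n = len(counts), sum(counts)
    # the "asymmetric" ordered list of single-particle labels, e.g. [0, 0, 1] for counts (2, 1)
    labels = [i for i, ni in enumerate(counts) for _ in range(ni)]
    vec = np.zeros(d ** n)
    for perm in set(itertools.permutations(labels)):   # distinct arrangements only
        idx = 0
        for k in perm:                                 # multi-index -> flat index
            idx = idx * d + k
        vec[idx] += 1.0
    # the prefactor sqrt(n_1! ... n_d! / n!) from the text
    return vec * math.sqrt(math.prod(map(math.factorial, counts)) / math.factorial(n))

v = fock_in_particle_rep((2, 1))      # |2,1>: two photons in mode 1, one in mode 2
print(round(np.linalg.norm(v), 12))   # → 1.0
```

The sum runs over the $n!/(n_1!\cdots n_d!)$ distinct arrangements, which is exactly what makes the prefactor produce a unit-norm vector.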
Throughout most of this chapter we will work within the mode description, as it is more natural for dealing with bosonic states. However, in some parts we will also use the particle representation and switch between the two descriptions whenever convenient.
States written in the particle representation will have a subscript $P$.
States written in mode representation will have commas between modes.
An arbitrary pure state of the system can be written as:
\begin{equation}
\ket{\psi} = \sum_{|\vec n|=n} \alpha_{\vec n}\; \ket{\vec n} \ ,
\label{def.f}
\end{equation}
where $\alpha_{\vec n}$ are complex amplitudes and $\ket{\vec n}$ are normalized states with fixed number of photons in each mode. To each state $\ket{\psi}$ we associate a unique homogeneous polynomial in the creation operators according to the recipe:
\begin{equation}
\ket{\psi}=\sum_{|\vec n|=n} \alpha_{\vec n}\; \ket{\vec n}= f^\dagger\ket{\Omega}
\quad\to\quad
f^\dagger \equiv
\sum_{|\vec n|=n} \alpha_{\vec n} \; {\tilde a}^\dagger_{\vec n} \ .
\label{eq:state-polynomial}
\end{equation}
\subsection{Local unitary equivalence}
In quantum information processing, a practical question is which states can be reached from a given state $\ket{\psi}$ using quantum operations applied only to individual particles.
There are two important operation protocols (see \cite{Aulbach2011} for a reference):
\begin{itemize}
\item Local Operations and Classical Communication (LOCC),
\item Stochastic Local Operations and Classical Communication (SLOCC).
\end{itemize}
Both of them allow local operations (such as unitary operations or measurements) and exchange of classical information --- perhaps with measurements or operations being conditional on already obtained outcomes.
The difference between LOCC and SLOCC is that in the first case we require deterministic success, while in the latter --- success with some non-zero (however small) probability.
A typical example of a LOCC protocol is deterministically distinguishing two orthogonal states with only local projection measurements (with the basis of each subsequent measurement depending on the previous measurement outcomes) \cite{Walgate2000}.
One key characteristic derived from LOCC and SLOCC are the classes of states that are equivalent with respect to them --- i.e. for each pair of states from a class there is an (S)LOCC protocol that performs $\ket{\psi_1}\mapsto\ket{\psi_2}$ and another one performing the reverse operation $\ket{\psi_2}\mapsto\ket{\psi_1}$.
Not every (S)LOCC transformation can be reversed.
For example, we can map an entangled state into a product state (by measurements and applying appropriate unitary operations), but the inverse is not possible.
For LOCC, states are equivalent if and only if there exist local unitary operations $U_1,\ldots,U_n$ such that:
\begin{equation}
U_1 \otimes U_2 \otimes \cdots \otimes U_n \ket{\psi_1} = \ket{\psi_2}.
\label{eq:lu_equivalence_u1u2u3}
\end{equation}
This should not be surprising: we cannot make non-trivial measurements (so as not to disturb the state) and, consequently, we cannot get any information to communicate.
The only operations that we are free to perform are local unitary operations.
That is, LOCC-equivalence is the same as local unitary equivalence, or LU-equivalence, and we will use the latter term.
For qubits, the problem was solved in \cite{Kraus2010a, Kraus2010b} using normal forms.
For SLOCC, pure states are equivalent if and only if there exist local invertible operations $A_1,\ldots,A_n$ (not necessarily unitary, normal or diagonalizable) such that:
\begin{equation}
A_1 \otimes A_2 \otimes \cdots \otimes A_n \ket{\psi_1} = \ket{\psi_2}.
\label{eq:slocc_equivalence_a1a2a3}
\end{equation}
The original proof is in \cite{Dur2000}.
It can be expressed as follows:
on each particle we perform a positive operator-valued measurement (POVM).
That is, for the $i$-th particle we use operators $\{\tilde{A}_i^{k}\}_k$, such that
\begin{equation}
\sum_k \tilde{A}_i^{k\dagger} \tilde{A}_i^{k} = \mathbb{I}
\end{equation}
and for each $k$ we get an outcome
\begin{equation}
\frac{\tilde{A}_i^{k} \ket{\psi}}{\sqrt{\bra{\psi} \tilde{A}_i^{k\dagger} \tilde{A}_i^{k} \ket{\psi}}}
\end{equation}
with probability
\begin{equation}
p_k = \bra{\psi} \tilde{A}_i^{k\dagger} \tilde{A}_i^{k} \ket{\psi}.
\end{equation}
We set $\tilde{A}_i^{1}$ equal to $A_i$, up to normalization.
Then, by conditioning our result on getting outcome ``1'' for every particle, we obtain $\ket{\psi_2}$ as in \eqref{eq:slocc_equivalence_a1a2a3}.
Moreover, we need to impose that $A_i$ is invertible, so as to guarantee that we are able to go back from $\ket{\psi_2}$ to $\ket{\psi_1}$.
As we see, also in this case we were able to avoid communicating:
we are able to set local operations in advance and condition the result on a particular measurement outcome.
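The POVM construction above can be sketched numerically: from an arbitrary matrix $A$ we build a two-outcome POVM whose first element is proportional to $A$ (a sketch under the assumption that rescaling by the operator norm makes $A$ a contraction; all names are ours):

```python
import numpy as np

def two_outcome_povm(A):
    """Build {A1, A2} with A1 proportional to A and A1^dag A1 + A2^dag A2 = I."""
    t = np.linalg.norm(A, 2)                    # operator norm, so A1 is a contraction
    A1 = A / t
    M = np.eye(A.shape[0]) - A1.conj().T @ A1   # positive semidefinite remainder
    w, V = np.linalg.eigh(M)                    # Hermitian square root via eigendecomposition
    A2 = V @ np.diag(np.sqrt(np.clip(w, 0, None))) @ V.conj().T
    return A1, A2

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
A1, A2 = two_outcome_povm(A)
completeness = A1.conj().T @ A1 + A2.conj().T @ A2
print(np.allclose(completeness, np.eye(3)))     # → True
```

Conditioning on outcome 1 then applies $A$ up to normalization, exactly as in the protocol described above.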
To give some taste of equivalence classes both with respect to LOCC and SLOCC,
we perform the Schmidt decomposition on a pure state of two particles
\begin{equation}
\ket{\psi_1} = \sum_k \lambda_k \ket{\phi_k} \otimes \ket{\varphi_k},
\end{equation}
where $\{\ket{\phi_k}\}$ and $\{\ket{\varphi_k}\}$ are sets of orthonormal one-particle states and the Schmidt values $\lambda_k$ are non-negative real numbers, in decreasing order.
Then state $\ket{\psi_1}$:
\begin{itemize}
\item is LU-equivalent to $\ket{\psi_2}$ iff they have the same Schmidt values,
\item is SLOCC-equivalent to $\ket{\psi_2}$ iff they have the same number of non-zero Schmidt values.
\end{itemize}
In particular, for two qubits all equivalence classes are represented by
\begin{itemize}
\item LU-equivalence: one-parameter family $\sqrt{1-\lambda^2} \ket{00} + \lambda \ket{11}$
for $0 \leq \lambda \leq 1/\sqrt{2}$,
\item SLOCC-equivalence: a discrete family of two states --- a product state $\ket{00}$ and an entangled state $(\ket{00} + \ket{11})/\sqrt{2}$.
\end{itemize}
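Both criteria can be checked numerically via the singular value decomposition of the coefficient matrix (a minimal sketch; the locally rotated state is constructed by us for illustration):

```python
import numpy as np

def schmidt_values(psi, d1, d2):
    """Schmidt values of a bipartite pure state = singular values of its coefficient matrix."""
    return np.linalg.svd(psi.reshape(d1, d2), compute_uv=False)

# |psi1> = sqrt(1 - l^2)|00> + l|11> and a locally rotated copy of it
l = 0.3
psi1 = np.zeros(4)
psi1[0], psi1[3] = np.sqrt(1 - l ** 2), l

theta = 0.7
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
psi2 = np.kron(U, U) @ psi1            # local unitaries on both qubits

s1 = schmidt_values(psi1, 2, 2)
s2 = schmidt_values(psi2, 2, 2)
print(np.allclose(s1, s2))             # → True: the states are LU-equivalent
```

Since `np.linalg.svd` returns singular values sorted in decreasing order, the equality of the two spectra can be tested entrywise.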
In general, entanglement classes for more than two particles are much more involved, even for the symmetric qubit ($d=2$) states with three \cite{Dur2000} or four \cite{Verstraete2002} particles.
Let us see how this problem can be stated for bosonic systems.
First, let us describe the action of (passive) linear optics on pure states described in different representations. Within the mode representation, the action of linear optics is expressed mathematically as the application of {\em unitary operations} on the creation operators, i.e.:
\begin{align}
a_i'^\dagger = \sum_{j=1}^{d} U_{ij} a_j^\dagger \ ,
\label{eq:u-transf}
\end{align}
where $U \in SU(d)$. Conversely, any $SU(d)$ operation among the modes can be realized by a sequence of two-mode operations, such as beam-splitters and wave plates, in a way which resembles the decomposition into Euler angles \cite{Reck1994}.
We use the word \emph{passive}, since we want to exclude operations that are linear in both creation and annihilation operators but mix them (i.e. squeezing).
Alternatively, in particle representation, transformation \eqref{eq:u-transf} is equivalent to the action of the same $U$ on each particle:
\begin{equation}
\ket{\psi'}_P = U^{\otimes n} \ket{\psi}_P \ .
\label{eq:u-transf2}
\end{equation}
The equivalence between both representations corresponds to the equivalence between the first and second quantization pictures for bosonic states \cite{FetterBook1971}.
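This dictionary between the two pictures can be verified numerically for $n=2$, $d=2$ (a sketch; we adopt the convention $a_j^\dagger \mapsto \sum_i U_{ij}\, a_i^\dagger$, i.e. the transpose of the index placement in \eqref{eq:u-transf}, which is the choice for which the particle picture uses the same $U$):

```python
import numpy as np

rng = np.random.default_rng(3)
Q, R = np.linalg.qr(rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))
U = Q @ np.diag(np.diag(R) / np.abs(np.diag(R)))   # a random 2x2 unitary

# Two-photon state in mode amplitudes: alpha = (alpha_{20}, alpha_{11}, alpha_{02})
alpha = rng.normal(size=3) + 1j * rng.normal(size=3)
alpha /= np.linalg.norm(alpha)

# Symmetric coefficient matrix M with f = sum_{jk} M_jk a_j^dag a_k^dag
M = np.array([[alpha[0] / np.sqrt(2), alpha[1] / 2],
              [alpha[1] / 2, alpha[2] / np.sqrt(2)]])
# Particle-representation vector: <jk|psi> = sqrt(2) M_jk
v = np.sqrt(2) * M.flatten()

# Mode picture: a_j^dag -> sum_i U_ij a_i^dag, hence M -> U M U^T
M_new = U @ M @ U.T
# Particle picture: v -> (U x U) v
v_new = np.kron(U, U) @ v
print(np.allclose(v_new, np.sqrt(2) * M_new.flatten()))   # → True
```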
Instead of a unitary operation $U$, we can implement any $d\times d$ complex matrix $A$.
To do so, we again use passive linear optics, but add ancillary modes (empty on input, conditioned to be empty on output), so that
\begin{equation}
\mathcal{U} =
\begin{bmatrix}
t A & B\\
C & D
\end{bmatrix}
\end{equation}
is unitary for some number $t$ and matrices $B$, $C$ and $D$.
For the equivalence relation we need to assume that $A$ is invertible.
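One standard way to build such a $\mathcal{U}$ is the Halmos unitary dilation of the contraction $tA$; the following sketch (our naming, not from the text) constructs it and checks unitarity:

```python
import numpy as np

def psd_sqrt(M):
    """Square root of a positive semidefinite Hermitian matrix."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.sqrt(np.clip(w, 0, None))) @ V.conj().T

def halmos_dilation(A):
    """Embed A, rescaled to a contraction K = tA, into a unitary on twice as many modes."""
    t = 1.0 / np.linalg.norm(A, 2)
    K = t * A
    I = np.eye(A.shape[0])
    B = psd_sqrt(I - K @ K.conj().T)
    C = psd_sqrt(I - K.conj().T @ K)
    # U = [[K, B], [C, -K^dag]] is unitary (Halmos dilation)
    return np.block([[K, B], [C, -K.conj().T]]), t

rng = np.random.default_rng(1)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
U, t = halmos_dilation(A)
print(np.allclose(U.conj().T @ U, np.eye(4)))   # → True
```

The upper-left block of $U$ is exactly $tA$, matching the block form of $\mathcal{U}$ above.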
We are ready to state the problem of equivalence between two bosonic pure states under the action of linear optics. Given two pure states, $\ket{\psi_1}=f_1^\dag \ket{\Omega}$ and $\ket{\psi_2}=f_2^\dag \ket{\Omega}\in {\cal S}_n^d$,
we ask whether there exists a unitary transformation on the modes $U\in SU(d)$ such that $f_1$ and $f_2$ are related by a rotation among the variables
\begin{equation}
f_2(\vec{a})^\dagger = f_1(U^* \vec{a})^\dagger \ . \label{lu.equivalence}
\end{equation}
Alternatively, in the particle description, \eqref{lu.equivalence} is equivalent to
\begin{equation}
\ket{\psi_2}_P = U\otimes U \otimes \cdots \otimes U \ket{\psi_1}_P \ .
\label{eq:lu_equivalence_uuu}
\end{equation}
Both problems can be directly translated to their stochastic variants, with $A\in GL(d)$.
Formula \eqref{eq:lu_equivalence_uuu} is a special case of \eqref{eq:lu_equivalence_u1u2u3}, with all unitary operations being the same, i.e.
\begin{equation}
U \equiv U_1 = U_2 = \ldots = U_n,
\end{equation}
and analogously with \eqref{eq:slocc_equivalence_a1a2a3}.
Certainly, if states are related by \eqref{eq:lu_equivalence_uuu}, they are also LU-equivalent. But, if two pure permutation-symmetric states are LU-equivalent, does it mean that they are related by linear optics?
We show that this is indeed the case in Sec.~\ref{s:slocc-equivalence} (see also our paper \cite{Migdal2013aaa}), both for LU-equivalence and SLOCC-equivalence.
\subsection{Invariants}
As it was stated in the introduction, our approach to the equivalence problem \eqref{lu.equivalence} is based on the construction of particular classes of invariants of the local unitary group representing linear optics. Let us consider the action of a group $G$ on some set $X$. For $x\in X$ and $g\in G$, let us denote the action of $g$ on $x$ by $g\cdot x$, which again belongs to $X$. A function $h\colon X\to {\mathbbm C}$ is invariant under the action of $G$ if and only if
\begin{equation}
h(g\cdot x)=h(x)\ \text{for all $x\in X$ and all $g\in G$\ .}
\end{equation}
In our case we have $X=\mathcal{S}^{d}_n$, $G=SU(d)$ and the action of $G$ is given by \eqref{eq:u-transf} or, equivalently, by \eqref{eq:u-transf2}. A theorem by Hilbert states that, for a compact group $G$ acting in a unitary fashion on a finite dimensional vector space, there exists a finite number of independent invariants (which are polynomial in the coordinates of $\ket{\psi}$) that are able to distinguish whether two vectors belong to the same orbit of $G$ \cite{Weyl1997, Kraft1996, Grassl1998}.
A convenient way to write down the invariants involves using tensor diagrams \cite{Biamonte2013invariants} --- they make it explicit why certain polynomials are invariant and allow us to avoid multiple index contractions.
Thus, the LU-equivalence problem can be solved completely once the minimal set of independent polynomial invariants is known. This problem is in general unsolved.
For recent developments in the theory of invariants in the context of entanglement theory see \cite{Vrana2011}.
In our work we do not attempt to study all invariants of the action of $SU\left(d\right)$ on $\mathcal{S}_{n}^d$. Instead, we focus on two families of invariants, analyzing their usefulness and physical relevance.
There are relevant differences between the structures of LU and SLOCC invariants.
Since LU-equivalence is characterized by a compact group acting on vector space,
\begin{itemize}
\item there is a finite number of polynomial invariants that are necessary to distinguish orbits,
\item all invariant polynomials can be written as a sum of polynomials of a particular form \cite{Sudbery2000}.
\end{itemize}
In SLOCC, or equivalently --- equivalence up to local invertible operations, the previous statements do not hold.
For example, one SLOCC invariant, the Schmidt number (i.e. the number of non-zero Schmidt values), is not a continuous function of the state (let alone a polynomial), as $\ket{00}+\epsilon \ket{11}$ has Schmidt rank $2$ for arbitrarily small $\epsilon>0$, while $\ket{00}$ has Schmidt rank $1$.
\subsection{Simple examples}
Before considering the general problem, let us focus on simple cases for LU-equivalence of symmetric states:
\begin{itemize}
\item only two particles ($n=2$) in an arbitrary number of modes,
or, alternatively,
\item an arbitrary number of particles in just two modes ($d=2$),
i.e.: permutation-symmetric states for qubits,
\item multi-particle squeezed coherent states (i.e. Gaussian states).
\end{itemize}
In particular, for the first two cases, we want to show explicitly how the bosonic mode description is related to the particle representation.
\subsubsection{Two particles}
\label{s:two_particles}
For two particles it suffices to perform a variant of the Schmidt decomposition, for symmetric states \cite{Li2001}, i.e.:
\begin{align}
\ket{\psi} = \sum_{i=1}^{d} \lambda_i \ket{\phi_i}_P \otimes \ket{\phi_i}_P \ ,
\end{align}
where $\lambda_i\geq 0$ and $\ket{\phi_i}$ are pairwise orthogonal states, the same for both particles. Thus, two pure states of two photons are related by linear optics if and only if they have the same sets of Schmidt values $\{\lambda_i\}$.
In this case, the polynomial $f(\vec{a})$ (as in \eqref{eq:state-polynomial}) is a homogeneous polynomial of degree two in the $d$ mode variables. The Schmidt decomposition allows us to rewrite it as:
\begin{equation}
f = \sum_{i=1}^d \frac{\lambda_i}{\sqrt{2}} b_i^2
\label{factorize.n2}
\end{equation}
for a certain set of new variables $\vec{b}$, such that $b_i^\dagger \ket{\Omega} = \ket{\phi_i}$, which are related to $\vec{a}$ by a unitary rotation, i.e. $\vec{b} = U \vec{a}$.
Bear in mind that the operation we have performed is not a diagonalization, since:
\begin{itemize}
\item the coefficient matrix $M_{\mu\nu}$ of $\ket{\psi}$ in the product basis is symmetric, not necessarily Hermitian,
\item $U \otimes U \ket{\psi}$ corresponds to $U M U^T$, not $U M U^\dagger$.
\end{itemize}
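The factorization \eqref{factorize.n2} is the Autonne-Takagi decomposition of the complex symmetric coefficient matrix, $M = U\,\mathrm{diag}(\lambda)\,U^T$ with unitary $U$. A minimal SVD-based sketch (ours; it assumes nondegenerate singular values, which holds generically):

```python
import numpy as np

def takagi(M):
    """Autonne-Takagi factorization M = U diag(s) U^T of a complex symmetric M.
    SVD-based sketch; assumes nondegenerate singular values."""
    V, s, Wh = np.linalg.svd(M)
    W = Wh.conj().T
    D = V.conj().T @ W.conj()          # diagonal phase matrix when s is nondegenerate
    U = V @ np.diag(np.sqrt(np.diag(D)))
    return U, s

rng = np.random.default_rng(2)
X = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
M = X + X.T                            # a generic complex symmetric matrix
U, s = takagi(M)
print(np.allclose(U @ np.diag(s) @ U.T, M))    # → True
print(np.allclose(U.conj().T @ U, np.eye(3)))  # → True
```

The singular values $s$ are the Schmidt values, and $U$ is the mode rotation bringing $f$ to the form \eqref{factorize.n2}.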
\subsubsection{Two modes and Majorana representation}
\label{sec:majorana}
When there are just two modes ($d=2$), it is possible to use the {\em Majorana stellar representation} (see e.g. \cite{BengtssonZyczkowski2006book, Aulbach2011a, Markham2010} for a short introduction) and write the state as:
\begin{equation}
\ket{\psi} = A \prod_{i=1}^{n} \left(\cos\left(\frac{\theta_i}{2}\right) a_1^\dagger
+ e^{i \varphi_i} \sin\left(\frac{\theta_i}{2}\right) a_2^\dagger \right) \ket{\Omega},
\label{majorana}
\end{equation}
where pairs $(\theta_i, \varphi_i)$ can be interpreted as coordinates of points on the Bloch sphere, and $A$ is a normalizing factor. Equation (\ref{majorana}) is equivalent to a factorization of the homogeneous polynomial defined in eq. (\ref{eq:state-polynomial}) in the following form:
\begin{equation}
f(a_1,a_2)= \tilde{A} a_2^n
\prod_{i=1}^{n}
\left( \tfrac{a_1}{a_2} - x_i \right)
\label{factorize.d2},
\end{equation}
where $\tilde{A}$ is the coefficient of $a_1^n$, and we have introduced variables $x_i = - e^{i \varphi_i} \tan(\theta_i/2)$.
Linear optics acts on this representation as a rotation of the Bloch sphere as a whole. Consequently, two states are related by linear optics if and only if their Majorana representations are related by rotation \cite{Mathonet2010}.
Let us show how to decide whether two symmetric $n$-qubit states are equivalent under linear operations.
First, we apply the Majorana stellar representation to both states, resulting in two sets of vectors, $\{\vec{v}_i\}_{i\in \{1,\ldots,n\}}$ and $\{\vec{u}_i\}_{i\in \{1,\ldots,n\}}$. They may differ by a rotation (i.e. an element of SO(3)) and a permutation.
Let us choose an ordered pair of two non-parallel vectors $(\vec{u}_1, \vec{u}_2)$.
Then, for every ordered pair from the first set $(\vec{v}_i, \vec{v}_j)$ for $i \neq j$, if their scalar products match ($\vec{v}_i\cdot\vec{v}_j=\vec{u}_1\cdot\vec{u}_2$) we can construct a unique rotation that rotates the first pair into the second.
Then we check whether this rotation maps every $\vec{v}_i$ onto a distinct vector $\vec{u}_{\sigma(i)}$. If it does, the states are equivalent.
If it does not for any of the pairs, the two states are not equivalent.
As the number of ordered pairs of two different vectors is $n^2-n$, the algorithm complexity is the maximum of $O(n^2)$ and the complexity of an algorithm for factorizing a polynomial of degree $n$ in one variable (needed to obtain the Majorana stellar representation).
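The first step of the algorithm, extracting the Majorana constellation from the mode amplitudes, can be sketched as follows (assumptions: amplitudes are indexed by the photon number in mode 1, the leading amplitude is non-zero so that no root lies at infinity, and we use the sign convention $x_i = -e^{i\varphi_i}\tan(\theta_i/2)$ from the text):

```python
import numpy as np
from math import factorial

def majorana_points(alpha):
    """Bloch vectors of the Majorana constellation of a two-mode n-photon state.
    alpha[k] = amplitude of |k photons in mode 1, n-k photons in mode 2>."""
    n = len(alpha) - 1
    # coefficients of p(x) = sum_k alpha_k x^k / sqrt(k!(n-k)!), highest power first
    coeffs = [alpha[k] / np.sqrt(factorial(k) * factorial(n - k))
              for k in range(n, -1, -1)]
    roots = np.roots(coeffs)           # assumes alpha[n] != 0 (no roots at infinity)
    pts = []
    for x in roots:
        w = -x                          # x = -e^{i phi} tan(theta/2)
        denom = 1 + abs(w) ** 2         # inverse stereographic projection
        pts.append([2 * w.real / denom, 2 * w.imag / denom,
                    (1 - abs(w) ** 2) / denom])
    return np.array(pts)

# GHZ-like state (|3,0> + |0,3>)/sqrt(2): three equally spaced points on the equator
alpha = np.array([1, 0, 0, 1]) / np.sqrt(2)
pts = majorana_points(alpha)
print(np.allclose(pts[:, 2], 0))        # → True (equatorial constellation)
```

Two constellations obtained this way can then be compared pair by pair as described above.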
\subsubsection{Gaussian states}
\label{sec:gaussian-states}
Gaussian states of light are states for which the Wigner function is Gaussian (as well as other quasi-probability distributions \cite{Cahill1969}).
Pure Gaussian states can be written as \cite{Braunstein2005}
\begin{equation}
\ket{\psi} =
\exp\left(\sum_i \alpha_i a_i^\dagger - \alpha_i^* a_i \right)
\exp\left( \sum_{ij} Q_{ij} a_i^\dagger a_j^\dagger \right)
\ket{\Omega},
\label{eq:pure-gaussian-def}
\end{equation}
where a complex symmetric matrix $Q$ is related to squeezing and a complex vector $\vec{\alpha}$ is related to displacing the center.
They are of special importance, since they are both simple to generate and analyze \cite{Braunstein2005rmp,Weedbrook2012,Adesso2014,Giedke2001thesis}.
Unlike other states we analyze in this chapter, Gaussian states have an unbounded number of photons, with the sole exception of the vacuum state.
Their entanglement has been studied with respect to modes \cite{Kraus2003,Adesso2006,Giedke2014}, with applications to standard quantum information operations \cite{NielsenChuang2000} such as quantum teleportation \cite{Enk1999} or quantum key distribution \cite{Gottesman2001}.
Gaussian states are the only pure states described by a non-negative Wigner function.
Since the behavior of Gaussian states is similar to some classical states, the negativity of Wigner function \cite{Kenfack2004} can be used to measure certain aspects of non-classicality of quantum states.
Continuous-variable systems are typically studied using canonical position and momentum or equivalently --- creation and annihilation operators
\begin{equation}
\begin{bmatrix}
a_i\\
a_i^\dagger
\end{bmatrix}
=
\frac{1}{\sqrt{2}}
\begin{bmatrix}
1 & i\\
1 & -i
\end{bmatrix}
\begin{bmatrix}
x_i\\
p_i
\end{bmatrix}
\label{eq:aadag-xp}
\end{equation}
for each mode $i$.
Multiplying each annihilation operator by a phase corresponds to a rotation in the corresponding $(x_i, p_i)$ plane.
General affine operations on creation and annihilation operators, called linear unitary Bogoliubov transformations, are of the form
\begin{equation}
\vec{a} \mapsto A \vec{a} + B \vec{a}^\dagger + \vec{\alpha},
\label{eq:bogoliubov}
\end{equation}
where $A$ and $B$ are responsible for rotation and squeezing and $\vec{\alpha}$ for displacing.
$A$ and $B$ are related, so that the output modes fulfill the canonical commutation relations.
For passive linear optics $A\equiv U^*$ is unitary and both $B$ and $\vec{\alpha}$ are zero.
That is, it does not mix creation with annihilation operators, or displace them.
In particular, passive linear optics preserves the number of particles.
Let us stick to creation and annihilation operators, since they can easily represent the action of linear optics, that is
\begin{equation}
\begin{bmatrix}
\vec{a}\\
\vec{a}^\dagger
\end{bmatrix}
\mapsto
\begin{bmatrix}
U^* & 0\\
0 & U
\end{bmatrix}
\begin{bmatrix}
\vec{a}\\
\vec{a}^\dagger
\end{bmatrix}
\end{equation}
whereas for the canonical position and momentum it is slightly more complicated
\begin{equation}
\begin{bmatrix}
\vec{x}\\
\vec{p}
\end{bmatrix}
\mapsto
\begin{bmatrix}
\Re [U] & \Im [U]\\
-\Im [U] & \Re [U]
\end{bmatrix}
\begin{bmatrix}
\vec{x}\\
\vec{p}
\end{bmatrix},
\end{equation}
which can be derived using \eqref{eq:aadag-xp}.
All pure Gaussian states are related by \eqref{eq:bogoliubov}.
But which states are related only by passive linear optics?
Let us focus on $\vec{\alpha}=\vec{0}$.
The problem has the same geometry as the two-particle case studied in Sec.~\ref{s:two_particles}, and thus can be solved with the same method (mathematically speaking, Gaussian states \eqref{eq:pure-gaussian-def} are exponentials of two-particle states).
However, we take another route, which sheds light on physical properties of Gaussian states.
The key observation is that every squeezing operation \eqref{eq:bogoliubov} can be decomposed as a sequence of operations: a passive optics step, one-mode squeezings, and another passive optics step, using the so-called Bloch-Messiah decomposition \cite{Braunstein2005} (see also \cite[A.2]{Giedke2001thesis}); this decomposition can be used to analyze squeezed modes of light, e.g. as in \cite{Migdal2010}.
One-mode squeezings are operations of the form
\begin{equation}
x_i \mapsto \exp(-r_i) x_i, \quad p_i \mapsto \exp(r_i) p_i
\end{equation}
for real $r_i\geq0$ called squeezing parameters.
If we start from the vacuum state, then such squeezing results in
\begin{equation}
\bra{\psi} x_i^2 \ket{\psi} = \exp(-2r_i)/2, \quad \bra{\psi} p_i^2 \ket{\psi} = \exp(2r_i)/2,
\label{eq:squeezed-quadratures}
\end{equation}
which saturates Heisenberg's uncertainty relation.
We may conclude that with passive optics operations we can bring a state to a normal form,
i.e. one with no correlations between modes and with each mode squeezed as in \eqref{eq:squeezed-quadratures}.
Let us calculate a correlator for a Gaussian state:
\begin{equation}
\rho_{ij} = \bra{\psi} a_i^\dagger a_j \ket{\psi},
\label{eq:gaussian-1particle-dm}
\end{equation}
which can be thought of as a generalization of the one-particle reduced density matrix.
This matrix carries exactly the information needed to decide whether two states are related via linear optics.
By diagonalizing \eqref{eq:gaussian-1particle-dm}
\begin{equation}
\rho \mapsto U \rho U^\dagger
\end{equation}
we get new, pairwise uncoupled modes. On the diagonal we get eigenvalues, i.e.:
\begin{equation}
\bra{\psi} a_i^\dagger a_i \ket{\psi}
=
\tfrac{1}{2} \bra{\psi} \left( x_i^2 +p_i^2 - 1 \right) \ket{\psi}
= \tfrac{1}{2}\cosh(2r_i) - \tfrac{1}{2},
\end{equation}
where we used \eqref{eq:squeezed-quadratures}; this expectation value is in one-to-one correspondence with the squeezing parameter.
Consequently, if two non-displaced Gaussian states have the same spectrum of \eqref{eq:gaussian-1particle-dm}, they are related by passive linear optics, and the exact transformation is $DU$,
where $U$ is a unitary matrix diagonalizing \eqref{eq:gaussian-1particle-dm} and $D$ is a diagonal matrix of phases, rotating each position-momentum pair.
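As a numerical sanity check of the eigenvalue formula above (a sketch; the function name and the Fock-space cutoff are ours), we can expand a single-mode squeezed vacuum in the Fock basis and compare its mean photon number with $\tfrac{1}{2}\cosh(2r)-\tfrac{1}{2}$:

```python
import numpy as np

def squeezed_vacuum_mean_photons(r, cutoff=100):
    """Mean photon number of a single-mode squeezed vacuum from its Fock expansion.
    Amplitudes a_k of |2k> obey a_{k+1}/a_k = tanh(r) sqrt((2k+1)(2k+2)) / (2(k+1))."""
    amps = [1.0]
    for k in range(cutoff - 1):
        amps.append(amps[-1] * np.tanh(r)
                    * np.sqrt((2 * k + 1) * (2 * k + 2)) / (2 * (k + 1)))
    amps = np.array(amps) / np.sqrt(np.cosh(r))   # overall normalization 1/sqrt(cosh r)
    return float(np.sum(2 * np.arange(cutoff) * amps ** 2))

r = 0.8
mean_n = squeezed_vacuum_mean_photons(r)
expected = 0.5 * np.cosh(2 * r) - 0.5    # the eigenvalue formula from the text
print(np.isclose(mean_n, expected))      # → True
```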
\section{SLOCC and LOCC equivalence of permutationally symmetric pure states}
\label{s:slocc-equivalence}
In this section we present the two main results of our paper \cite{Migdal2013aaa}.
The first one is that, when testing whether two permutation-symmetric $n$-qudit states are equivalent under local transformations, the search can be restricted to operators which act in the same way on every particle.
This property was proven for qubits \cite{Bastin2009, Mathonet2010, Bastin2010} in the SLOCC variant (though the unitary version can be deduced from their proof).
For general qudit systems it had so far remained an open question
\cite[Sec.5.1.1.]{Aulbach2011}.
In the course of this work, we prove the following:
\begin{theorem}\label{thm:main}
Let us consider two permutation-symmetric states of $n$
qudits (i.e. $d$-level particles), $\ket{\psi}$ and $\ket{\varphi}$, for which
there exist invertible $d\times d$ matrices $A_1, \ldots, A_n$ such that
\begin{equation}
A_1 \otimes A_2 \otimes \ldots \otimes A_n \ket{\psi} =
\ket{\varphi}\label{eq:a1a2an}.
\end{equation}
Then there exists an invertible $d\times d$ matrix $A$ such that
\begin{equation}
A \otimes A \otimes \ldots \otimes A \ket{\psi} = \ket{\varphi}.\label{eq:aaa}
\end{equation}
If we restrict ourselves to unitary matrices $A_1, \ldots, A_n$, then $A$ is
unitary.
\end{theorem}
For $A_i$ unitary, \eqref{eq:a1a2an} is a condition of equivalence of states
under reversible local operations (or LU-equivalence), which is proven to be the same as equivalence
with respect to local operations and classical communication (LOCC) \cite{Bennett2000, Vidal2000}.
Moreover, in both cases we provide a direct construction for $A$ as
a function of $A_1,\ldots, A_n$.
Our second result stems from the consideration of stabilizers of states \cite{Cenci2010a} in the form of a matrix $B$ acting on one particle and its inverse $B^{-1}$ acting on another one. Only very specific states admit such a $B$ that is non-trivial. We show that the Jordan block structure of $B$, disregarding the values of the eigenvalues, is an invariant for SLOCC-equivalence, and we analyze it in detail, providing a coarse-grained classification of the relevant entanglement classes. If each block of the Jordan form of $B$ has a distinct eigenvalue, then there is a unique stabilized state, up to local operations. In particular, we find as entanglement class representatives a $d$-level generalization of the $n$-particle GHZ state
\begin{align}
\frac{\ket{0}^{\otimes n} + \ldots + \ket{d-1}^{\otimes n}}{\sqrt{d}},
\end{align}
and one possible generalization of the W state for $d>2$, i.e. a state with all single particle state indices adding up to $d-1$, that is
\begin{align}
\binom{n + d - 2}{d - 1}^{-1/2} \sum_{i_1+\ldots+i_n = d-1} \ket{i_1} \ket{i_2}\ldots \ket{i_n},\label{eq:excitationnormalized}
\end{align}
which we call \textit{excitation state}.
For two particles both classes coincide, as e.g.
\begin{align}
\ket{00} + \ket{11} + \ket{22} \cong \ket{02} + \ket{11} + \ket{20}\ .
\end{align}
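For two particles, SLOCC classes are labeled by the Schmidt rank alone, so the claimed equivalence reduces to a rank check of the two coefficient matrices (a minimal sketch):

```python
import numpy as np

# Coefficient matrices of |00>+|11>+|22> and |02>+|11>+|20> (unnormalized)
M_ghz = np.eye(3)             # diagonal: GHZ-type state
M_exc = np.fliplr(np.eye(3))  # antidiagonal: excitation state

# Equal (full) Schmidt rank means the two states lie in the same SLOCC class
r1 = np.linalg.matrix_rank(M_ghz)
r2 = np.linalg.matrix_rank(M_exc)
print(r1 == r2 == 3)          # → True: the two states are SLOCC-equivalent
```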
Table \ref{tab:n3} summarizes the entanglement classes related to Jordan blocks for the simplest non-trivial case, i.e. $n=3$ particles (a general construction is in \eqref{eq:unique_general}).
We adopt a special notation for the Jordan block structure.
Outer brackets separate eigenspaces with different eigenvalues, while the inner brackets separate different Jordan blocks of the same eigenvalue.
Each number is the dimension of a single Jordan block.
The ordering of the terms does not matter, neither in the inner nor in the outer brackets.
For example: $\{ \{ 2 \}\}$ is a matrix with only one Jordan block, $\{ \{ 1, 1 \}\}$ is proportional to the identity matrix and $\{ \{ 1 \}, \{ 1 \} \}$ is a matrix with two different eigenvalues, that is ($\lambda_1 \neq \lambda_2$):
\begin{align}\label{eq:block-notation}
\{ \{ 2 \}\} & \equiv
\left[
\begin{array}{cc}
\lambda_1 & 1\\
0 & \lambda_1
\end{array}
\right]\\
\{ \{ 1, 1 \}\} & \equiv
\left[
\begin{array}{cc}
\lambda_1 & 0\\
0 & \lambda_1
\end{array}
\right]\\
\{ \{ 1 \}, \{ 1 \} \} & \equiv
\left[
\begin{array}{cc}
\lambda_1 & 0\\
0 & \lambda_2
\end{array}
\right]
\end{align}
The number of different Jordan block structures for a given $d$ is given by double partitions~\cite{oeisA001970}.
\begin{table}
\centering
\begin{tabular}{|l|l|l|}
\hline
$d$ &
Block structure &
A class representative\\
\hline
2 &
$\{ \{ 2 \}\}$ &
$\ket{001} + \ket{010} + \ket{100}$\\
\hline
&
$\{ \{ 1, 1 \}\}$ &
(not unique) any state\\
\hline
&
$\{ \{ 1 \}, \{ 1 \} \}$ &
$\ket{000} + \ket{111}$\\
\hline
3 &
$\{ \{ 3 \} \}$ &
$\ket{002} + \ket{020} + \ket{200}$ \\
&
&
$+ \ket{011} + \ket{101} + \ket{110}$\\
\hline
&
$\{ \{ 2, 1 \} \}$ &
(not unique)\\
\hline
&
$\{ \{ 1, 1, 1 \} \}$ &
(not unique) any state\\
\hline
&
$\{ \{ 2 \}, \{ 1 \} \}$ &
$\ket{001} + \ket{010} + \ket{100} + \ket{222}$\\
\hline
&
$\{ \{ 1, 1 \}, \{ 1 \} \}$ &
(not unique)\\
\hline
&
$\{ \{ 1 \}, \{ 1 \}, \{ 1 \} \}$ &
$\ket{000} + \ket{111} + \ket{222}$\\
\hline
\end{tabular}
\caption{
A summary of entanglement classes related to Jordan blocks, for the case of three particles ($n=3$): qubits ($d=2$) and qutrits ($d=3$). The notation is explained in the main text \eqref{eq:block-notation}. The general construction for the unique states is given in \eqref{eq:unique_general}.
}
\label{tab:n3}
\end{table}
The remainder of this section is organized as follows:
in Section \ref{sec:aaa-symmetry} we prove that it is sufficient to study invariance under symmetric transformations.
We supplement it with a construction of the symmetric transformation in Section \ref{sec:aaa-explicit}.
In Section \ref{sec:aaa-classes} we discuss the entanglement classes which can be obtained by
studying stabilizer operators related to one-particle transformations.
\subsection{Symmetric operations suffice}
\label{sec:aaa-symmetry}
We start with an approach similar to the one of \cite{Mathonet2010}.
Let us consider two permutation-symmetric states, $\ket{\psi}$ and
$\ket{\varphi}\in {\cal S}$, with ${\cal S}$ denoting the symmetric subspace of the full Hilbert space.
If \eqref{eq:a1a2an} holds, then any permutation
of $A_1, \ldots, A_n$ will also work. In order to show this
property explicitly, we may use $\ket{\psi} = P_\sigma
\ket{\psi}$ and $\ket{\varphi} = P_{\sigma^{-1}} \ket{\varphi}$, where
$P_\sigma$ is a permutation matrix for the permutation of particles $\sigma$,
i.e. $P_\sigma \ket{i_1i_2 \ldots i_n}=\ket{i_{\sigma(1)} i_{\sigma(2)}\ldots
i_{\sigma(n)}}$.
Since all $A_i$ are invertible, this implies in particular that
\begin{equation}
\left(A_2^{-1} \otimes A_1^{-1} \otimes \ldots \otimes A_n^{-1} \right)
\left(A_1 \otimes A_2 \otimes \ldots \otimes A_n \right) \ket{\psi} = \ket{\psi}
\end{equation}
or, equivalently,
\begin{equation}
\left(B \otimes B^{-1} \otimes \mathbb{I} \otimes \ldots \otimes \mathbb{I}
\right) \ket{\psi} = \ket{\psi},\label{eq:bbinv}
\end{equation}
where $B=A_2^{-1} A_1$.
From now on, we will use a subscript in parenthesis to indicate the position of
an operator in the tensor product, e.g.
\begin{equation}
B_{(2)} \equiv \mathbb{I} \otimes B \otimes \mathbb{I} \otimes \ldots \otimes
\mathbb{I},
\end{equation}
where the total number of factors is $n$.
Without loss of generality, we use operations on the first and the second particle.
First, let us show that if an operation on one particle can be reversed by
applying the inverse operation on a {\em different} particle, then that
single-particle operation must preserve permutation symmetry of the state.
\begin{lemma}\label{thm:bbinv2b}
For a symmetric $\ket{\psi} \in \mathcal{S}$ the equality \eqref{eq:bbinv}
\begin{equation}
B_{(1)} B_{(2)}^{-1} \ket{\psi} = \ket{\psi}
\end{equation}
holds if and only if
\begin{equation}
B_{(1)} \ket{\psi} \in \mathcal{S}. \label{eq:b}
\end{equation}
\end{lemma}
\begin{proof}
\begin{align}
B_{(1)} \ket{\psi} \in \mathcal{S} \Leftrightarrow B_{(1)} \ket{\psi} =
B_{(2)} \ket{\psi}\\
\Leftrightarrow B_{(1)} B_{(2)}^{-1} \ket{\psi} = \ket{\psi}
\end{align}
\end{proof}
Now we will show that the action of the aforementioned single-particle operation
$B_{(1)}$ can be expressed as an operation $S^{\otimes n}$ acting in the same way on every
particle. Intuitively, we must search for an $n$-th root of
$B$. But not all such $n$-th roots will work, as the following example shows:
Consider $S=\sigma_x$, which is a square root
of $B=I$, acting on $\ket{00}\in \cal{S}$. While $B_{(1)}\ket{00}\in \cal{S}$,
$S_{(1)}\ket{00}=\ket{10}\not\in \cal{S}$, and $S \otimes
S \ket{00}=\ket{11}$, which, despite being symmetric, is not the desired
state. Thus, the relevant question is: {\em which one is the appropriate $n$-th
root?}
Before we can proceed, we need a few lemmas.
\begin{lemma}
\label{thm:symcom}
If $\ket{\psi} \in {\cal S} $, $X_{(1)}\ket{\psi}\in {\cal S}$ and $Y_{(2)}\ket{\psi}\in{\cal S}$, then
$X_{(1)}Y_{(2)}\ket{\psi}\in {\cal S}$ $\Leftrightarrow$ the commutator
acting on the state vanishes $[X_{(1)}, Y_{(1)}]\ket{\psi}=0$.
\end{lemma}
\begin{proof}
If the final state is symmetric, we may permute the first two particles
without altering the result:
\begin{align}
0 &= X_{(1)}\left(Y_{(2)}\ket{\psi}\right) - Y_{(1)}\left(X_{(2)}\ket{\psi}\right)\\
&= X_{(1)}\left(Y_{(1)}\ket{\psi}\right) -
Y_{(1)}\left(X_{(1)}\ket{\psi}\right)\\
&= [X_{(1)}, Y_{(1)}]\ket{\psi}.
\end{align}
\end{proof}
To see how commutativity is important, take as an example
\begin{equation}
X =
\left[
\begin{matrix}
1 & 1\\
0 & 1
\end{matrix}
\right], \quad
Y =
\left[
\begin{matrix}
0 & 1\\
1 & 0
\end{matrix}
\right]
\end{equation}
acting on $\ket{\psi}=(\ket{01}+\ket{10})/\sqrt{2}$ (i.e. $\ket{\psi}$,
$X_{(1)}\ket{\psi}$ and $Y_{(2)}\ket{\psi}$ are symmetric, but $X \otimes Y
\ket{\psi} = (\ket{00}+\ket{01}+\ket{11})/\sqrt{2}$ is not).
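This counterexample is easy to check numerically. The following sketch (plain NumPy; the two-qubit swap is written as a permutation matrix) verifies that $X_{(1)}\ket{\psi}$ and $Y_{(2)}\ket{\psi}$ are symmetric while $X \otimes Y \ket{\psi}$ is not, consistently with the non-vanishing commutator of Lemma \ref{thm:symcom}:

```python
import numpy as np

X = np.array([[1, 1], [0, 1]], dtype=float)
Y = np.array([[0, 1], [1, 0]], dtype=float)
I = np.eye(2)
SWAP = np.eye(4)[[0, 2, 1, 3]]                  # exchanges the two qubits

psi = np.array([0, 1, 1, 0]) / np.sqrt(2)       # (|01> + |10>)/sqrt(2)

def is_symmetric(v):
    return np.allclose(SWAP @ v, v)

assert is_symmetric(np.kron(X, I) @ psi)        # X_(1)|psi> is symmetric
assert is_symmetric(np.kron(I, Y) @ psi)        # Y_(2)|psi> is symmetric
assert not is_symmetric(np.kron(X, Y) @ psi)    # but X (x) Y |psi> is not
# ...consistently, the commutator does not annihilate the state:
assert np.linalg.norm(np.kron(X @ Y - Y @ X, I) @ psi) > 0
```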
\begin{lemma}\label{thm:comm3}
Under the assumptions of Lemma \ref{thm:symcom}, for $n\geq3$ the commutator
acting on the state always vanishes, i.e.
$[X_{(1)}, Y_{(1)}]\ket{\psi}=0$.
\end{lemma}
\begin{proof}
\begin{align}
&X_{(1)} Y_{(1)} \ket{\psi}
= X_{(1)} Y_{(3)} \ket{\psi}
= X_{(2)} Y_{(3)} \ket{\psi}\\
&= Y_{(3)} X_{(2)} \ket{\psi}
= Y_{(1)} X_{(1)} \ket{\psi},
\end{align}
where we repeatedly use that operators acting on different particles commute,
and that, e.g., $Y_{(1)}\ket{\psi}=Y_{(3)}\ket{\psi}$ and
$X_{(1)}\ket{\psi}=X_{(2)}\ket{\psi}$, since both states are symmetric.
\end{proof}
\begin{lemma}
If $X_{(1)}\ket{\psi}$ is symmetric, then $X^p_{(1)}\ket{\psi}$ is symmetric for
all natural $p$ (integer if $X$ is invertible).
\end{lemma}
\begin{proof}
We use mathematical induction with respect to $p$, starting at $p=0$. Since
$X$ commutes with $X^p$ (even without the restriction to a specific state), then
using Lemma \ref{thm:symcom}, $X^p_{(1)}\ket{\psi}\in{\cal S}$ implies that
$X^{p+1}_{(1)}\ket{\psi}\in {\cal S}$. If $X$ is invertible, we may use the same
argument for $X^{-1}$ and $X^{-p}$.
\end{proof}
\begin{corollary}
Moreover, we get
\begin{align}
X^p_{(1)}\ket{\psi} = X^{p_1}\otimes X^{p_2}\otimes \ldots \otimes X^{p_n}
\ket{\psi},
\end{align}
for any integers $p_i$ (can be negative if $X$ is invertible) that add up to
$p$.
\end{corollary}
\begin{corollary}
\label{corol:analytic}
In particular, $f(X)_{(1)}\ket{\psi}\in {\cal S}$ for any analytic
function $f(z)$.
\end{corollary}
\begin{theorem}
\label{thm:sssb}
For any $X$ and $\ket{\psi}\in {\cal S}$ it holds that if
\begin{equation}
X_{(1)} \ket{\psi} = \ket{\phi} \in {\cal S}
\end{equation}
then there exists a single-particle operator $S$ such that $S^n=X$,
$S_{(1)}\ket{\psi}\in{\cal S}$ and
\begin{equation}
\left(S \otimes S \otimes \ldots \otimes S \right) \ket{\psi} =
\ket{\phi}.
\end{equation}
\end{theorem}
\begin{proof}
The proof outline is the following: we prove that, among the $n$-th roots
of operator $X$, there is (at least) one, $S$, which {\em can be expressed as a
polynomial} of $X$;
following Corollary \ref{corol:analytic}, we get $S_{(1)}\ket{\psi}\in\cal{S}$ and the
rest of the theorem follows.
The $n$-th root function is multivalued, so we cannot use it directly to
prove the theorem. Let us, then, prove that there exists a
polynomial function $f$ such that $[f(X)]^n=X$.
Let $\left\{\lambda_i\right\}$ be the eigenvalues of $X$, with indices
$\left\{m_i\right\}$ (i.e. the size of the largest Jordan block related to each eigenvalue). Matrix function theory \cite[Chapter
1]{Higham2008} states that the action of any analytic function $f$ on a matrix
$X$ is completely determined by the set of values $\left\{f(\lambda_i)\right\}$,
along with the derivatives $\left\{ f^{(k)}(\lambda_i)\right\}$, up to order
$m_i - 1$. Let us choose, for each $i$ separately, $f(\lambda_i)$ and
$f^{(k)}(\lambda_i)$ from the same branch of the complex $n$-th root function. It is
always possible to find a polynomial $f$ that takes exactly those values and
derivatives at the eigenvalues of $X$, e.g., via Hermite interpolation.
Thus, we can define $S\equiv f(X)$, and
we have $S^n=X$, as required.
Combining this result with the corollaries, we get that
\begin{align}
S^{\otimes n} \ket{\psi} = S^{n}_{(1)} \ket{\psi} = X_{(1)} \ket{\psi}.
\end{align}
\end{proof}
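The construction can be illustrated numerically. The sketch below (NumPy; a hypothetical helper, not part of the paper) handles the generic case of an invertible matrix with distinct eigenvalues, where Lagrange interpolation through one fixed branch of the root suffices; repeated eigenvalues would require Hermite interpolation, as in the proof:

```python
import numpy as np

def polynomial_matrix_root(X, n):
    """An n-th root S of X with S = p(X) for a polynomial p.
    Sketch for the generic case: X invertible with distinct eigenvalues
    (repeated eigenvalues would need Hermite interpolation instead)."""
    lam = np.linalg.eigvals(X).astype(complex)
    target = np.exp(np.log(lam) / n)     # one fixed branch of the n-th root
    # interpolating polynomial p with p(lam_i) = target_i (Vandermonde system)
    coeffs = np.linalg.solve(np.vander(lam, increasing=True), target)
    return sum(c * np.linalg.matrix_power(X, k) for k, c in enumerate(coeffs))

B = np.array([[2.0, 1.0], [0.0, 3.0]])
S = polynomial_matrix_root(B, 3)
assert np.allclose(np.linalg.matrix_power(S, 3), B)   # S^3 = B
assert np.allclose(S @ B, B @ S)                      # S is a polynomial in B
```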
The converse of Theorem \ref{thm:sssb} is false. Take,
e.g., $S=\sigma_x$ and $\ket{\psi}=\ket{00}$. It is true that $S \otimes
S \ket{00}=\ket{11}\in \cal{S}$, yet there is no $B$ such that
$B_{(1)}\ket{00}=\ket{11}$.
\subsection{Explicit formula for symmetrization}
\label{sec:aaa-explicit}
In this section we provide the explicit form of $A$, given all $A_i$.
Let $B_{ij} \equiv A_i^{-1} A_j$. Thus, operator $B$ in the previous section
would correspond to $B_{12}$ with the new notation. Transforming
\eqref{eq:a1a2an} we get
\begin{align}
&A_1 \otimes A_2 \otimes \ldots \otimes A_n\ket{\psi}\\
&= A_1 \otimes A_1 B_{12} \otimes \ldots \otimes A_1 B_{1n} \ket{\psi}
\\
&= A_1^{\otimes n} B_{12\ (2)} B_{13\ (3)} \ldots B_{1n\ (n)}\ket{\psi}.
\end{align}
All $B_{1j\ (j)}\ket{\psi}$ are symmetric states, similarly to
$B_{(1)}\ket{\psi}$. Consequently, the last part can be reshuffled as
\begin{align}
A_1^{\otimes n} \left(B_{11}B_{12}\ldots B_{1n} \right)_{(1)} \ket{\psi}.
\end{align}
Note that, by Lemma \ref{thm:comm3}, no additional requirements about
their commutativity need to be imposed.
Using Theorem \ref{thm:sssb} we get $A = A_1 S$, where $S$ is an appropriate
$n$-th root of $B_{11}B_{12}\ldots B_{1n}$.
Moreover, when all $A_i$ are unitary, the product $B_{11}B_{12}\ldots B_{1n}$
is unitary as well; since a unitary matrix is diagonalizable with unimodular
eigenvalues, its $n$-th root $S$ can be chosen unitary by picking every branch
value on the unit circle.
This finalizes the proof of Theorem~\ref{thm:main}.
\subsection{Symmetry classes from single-particle stabilizers}
\label{sec:aaa-classes}
A well-known strategy in the theory of entanglement classes is to study
the dimension of the stabilizers of a state \cite{Cenci2010a}, i.e.: operators
$X$ such that $X \ket{\psi} = \ket{\psi}$. In our case it is natural to consider
stabilizers in the form of $X = B_{(1)} B_{(2)}^{-1}$, and state that {\em $B$
stabilizes $\ket{\psi}\in\cal{S}$} as a convenient shorthand notation. Following
Lemma \ref{thm:bbinv2b}, $B$ stabilizes $\ket{\psi}\in\cal{S}$ if
and only if $B_{(1)}\ket{\psi}\in\cal{S}$.
Bear in mind that a set of $B$ stabilizing a particular state is
guaranteed to form a group only for $n\geq3$, as follows from Lemma \ref{thm:comm3}.
Let us consider the Jordan normal form $J$ of $B$. We have shown
that all local operations for symmetric states are equivalent to the
action of the same single-particle operation on all qudits: $A^{\otimes n}$.
Consequently, if a state is stabilized by $B$, a SLOCC transformed state is
stabilized by some $A B A^{-1}$, i.e.: {\em the Jordan form of the stabilizer is
preserved}.
Below, we prove the following facts relating the Jordan form of $B$ to the
stabilized states. First, that the precise eigenvalues are not
important --- only their degeneracies matter (see notation from Table \ref{tab:n3}).
Second, that stabilized states
never mix eigenspaces of different eigenvalues. In particular, it means that
the problem can be split into a direct sum over distinct eigenvalues. Third, we
will show the explicit form of a state stabilized by a single Jordan block.
Fourth, we show that when eigenvalues are non-degenerate, there is a unique state
related to it (up to SLOCC).
Fifth, we proceed to writing down states for multiple Jordan blocks with the
same eigenvalue. This will complete the characterization of states stabilized by
any $B$.
\begin{theorem}
The set of states stabilized by $B$ does not depend on the particular values of
its eigenvalues, as long as (non-)degeneracy is preserved.
\end{theorem}
\begin{proof}
We will show that mapping eigenvalues to different ones does not break the stabilizer's condition.
Let us choose a complex function $f(z)$ such that (i) for all eigenvalues $f(\lambda_i) =
\tilde{\lambda}_i$, and (ii) $f^{(k)}(\lambda_i)=\delta_{1k}$ for all $k\geq1$ up to the
index of each $\lambda_i$ (so that every Jordan block keeps its size). Now, $f(B)_{(1)}$ is also a
stabilizer of $\ket{\psi}$, with the same Jordan blocks, but arbitrarily set
eigenvalues.
\end{proof}
In particular, for $d=2$ the only two non-trivial Jordan forms of $B$ are
related to the GHZ state (two different eigenvalues) and W state (single
eigenvalue). We proceed to show that stabilized states never mix subspaces
with different eigenvalues.
Given a subspace $V$, let us denote by $\mathrm{Sym}^n(V)$ the permutation-symmetric subspace of $V^{\otimes n}$.
Then:
\begin{theorem}
For a given Jordan form $J$ with generalized eigenspaces $V_1, \ldots, V_p$ for
distinct eigenvalues, stabilized states are of the form
\begin{align}
\ket{\psi} \in \bigoplus_{i=1}^p \mathrm{Sym}^{n}(V_i).
\end{align}
That is, they contain no vectors mixing components from Jordan blocks of
different eigenvalues.
\end{theorem}
\begin{proof}
Let $\ket{\mu}$ and $\ket{\nu}$ be one-particle states
($\mu,\nu\in\{0,\ldots,d-1\}$) that belong to blocks of $J$ with different
eigenvalues.
Let us take $f(B)$ mapping all subspaces to zero, except the one to which
$\ket{\mu}$ belongs, which we map to 1.
Suppose that $\ket{\psi}$ has a component containing a product of $\ket{\mu}$
and $\ket{\nu}$ (at different sites). Then, in particular, it has
$\ket{\mu}\ket{\nu}\ket{\xi}$ and $\ket{\nu}\ket{\mu}\ket{\xi}$, for some
symmetric $\ket{\xi}$ (perhaps containing $\ket{\mu}$ or $\ket{\nu}$ as well).
But
\begin{align}
&f(B)_{(1)} \left(\ket{\mu}\ket{\nu}\ket{\xi} +
\ket{\nu}\ket{\mu}\ket{\xi}\right)\\
&= \ket{\mu}\ket{\nu}\ket{\xi}.
\end{align}
The right hand side cannot be paired with any other terms in order to make a
symmetric state. Thus $f(B)_{(1)} \ket{\psi}$ is not symmetric, which
contradicts the assumption. Thus, a stabilized state cannot contain a term with
a product of elements from two Jordan subspaces with different eigenvalues.
\end{proof}
Thus, when $J$ has $d$ distinct eigenvalues, then the stabilized state is
a generalized GHZ:
\begin{equation}
\ket{\psi} = \alpha_0 \ket{0}^{\otimes n} + \ldots + \alpha_{d-1} \ket{d-1}^{\otimes n}.
\end{equation}
When we consider local unitary equivalence, then the set of $\{|\alpha_i|^2
\}_{i\in \{0,\ldots,d-1\}}$ distinguishes classes, whereas for SLOCC, the state
is equivalent to any other with the same number of non-zero $\alpha_i$.
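As a quick numerical illustration (NumPy sketch; the amplitudes and eigenvalues below are arbitrary choices), a diagonal $B$ with distinct eigenvalues indeed satisfies $B_{(1)} B_{(2)}^{-1}\ket{\psi} = \ket{\psi}$ on a generalized GHZ state of three qutrits:

```python
import numpy as np

d = 3
alpha = np.array([1.0, 2.0, 3.0])       # arbitrary non-zero amplitudes
basis = np.eye(d)
# generalized GHZ: alpha_0|000> + alpha_1|111> + alpha_2|222>
psi = sum(alpha[i] * np.kron(np.kron(basis[i], basis[i]), basis[i])
          for i in range(d))
psi /= np.linalg.norm(psi)

B = np.diag([1.0, 2.0, 5.0])            # distinct eigenvalues
I = np.eye(d)
stabilizer = np.kron(np.kron(B, np.linalg.inv(B)), I)   # B_(1) B_(2)^{-1}
assert np.allclose(stabilizer @ psi, psi)
```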
Now, it suffices to focus on a {\em Jordan subspace} related to a single eigenvalue.
Still, for a single eigenvalue there may be more than one Jordan block, i.e.
several invariant subspaces. We start with the analysis of a {\em single Jordan block}, then
generalize our result to more blocks with the same eigenvalue.
\begin{theorem}
Let $K$ be a $k \times k$ Jordan block with eigenvalue zero, i.e.
$\sum_{i=1}^{k-1}\ket{i-1}\bra{i}$. Its stabilized states are
\begin{equation}
\ket{\psi} = \sum_{j=0}^{k-1} \alpha_j \ket{E_j},\label{eq:ejsum}
\end{equation}
where $\ket{E_j}$ is a symmetric state with $j$ {\em excitations}, i.e.
a symmetrized sum of all basis states for which the sum of the particle
indices is $j$:
\begin{align}
\ket{E_j} = \sum_{i_1+\ldots+i_n = j} \ket{i_1} \ket{i_2}\ldots \ket{i_n}.
\end{align}
\end{theorem}
\begin{proof}
First, let us show that all states $K_{(1)} \ket{E_j}$ are symmetric, as long as
$j < k$.
\begin{align}
K_{(1)} \ket{E_j} &= \sum_{i_1+\ldots+i_n = j} \ket{i_1 - 1} \ket{i_2}\ldots
\ket{i_n}\\
&= \sum_{i'_1+\ldots+i_n = j - 1} \ket{i'_1} \ket{i_2}\ldots \ket{i_n} =
\ket{E_{j-1}},
\end{align}
where we use $\ket{-1} \equiv 0$ as a convenient shorthand notation and
apply the substitution $i'_1 = i_1 - 1$; the condition $j < k$ guarantees
that no index in $\ket{E_{j-1}}$ exceeds $k-1$.
Let us now show that all stabilized states $\ket{\psi}$ have the form of \eqref{eq:ejsum}.
We proceed by induction with respect to $n$, the number of particles. For
$n=1$ (inductive basis), all basis states are stabilized. Now, let us assume
that the condition works up to a given $n$. As $K_{(1)}$ reduces the total
number of excitations by one, it suffices to look at subspaces of fixed $j$.
Together with the inductive assumption (in particular, the fact that the first
$n$ particles must remain in a permutation symmetric state after application of
$K_{(1)}$) we get a general form
\begin{align}
\ket{\xi} = \sum_{l=0}^{j} \beta_l \ket{E_{j-l}} \ket{l}.
\end{align}
To find the actual constraints on $\beta_l$, we just note that the assumed
symmetry of $K_{(1)}\ket{\xi}$ implies that
\begin{align}
K_{(1)}\ket{\xi} &= \sum_{l=0}^{j-1} \beta_l \ket{E_{j-l-1}} \ket{l}\\
= K_{(n+1)}\ket{\xi} &= \sum_{l=1}^{j} \beta_l \ket{E_{j-l}} \ket{l-1}.
\end{align}
Again, with a simple shift of index, and using the orthogonality of
the components, we get $\beta_l = \beta_{l+1}$. Thus, there is only one
state (up to a factor) for a given $j$ that remains symmetric after $K_{(1)}$.
\end{proof}
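The action of $K_{(1)}$ on the excitation states can be verified directly. The following NumPy sketch builds the unnormalized $\ket{E_j}$ for three qutrits (a single Jordan block of size $k=3$) and checks $K_{(1)}\ket{E_j} = \ket{E_{j-1}}$:

```python
import numpy as np
from itertools import product

k, n = 3, 3                              # block size k, n qutrits
K = np.diag(np.ones(k - 1), 1)           # Jordan block: K|i> = |i-1>, K|0> = 0
I = np.eye(k)

def E(j):
    """Unnormalized excitation state |E_j>: sum over i_1 + ... + i_n = j."""
    v = np.zeros(k ** n)
    for idx in product(range(k), repeat=n):
        if sum(idx) == j:
            term = np.ones(1)
            for i in idx:
                term = np.kron(term, I[i])   # basis vector |i_1 i_2 ... i_n>
            v += term
    return v

K1 = np.kron(K, np.kron(I, I))           # K acting on the first particle only
for j in range(1, k):
    assert np.allclose(K1 @ E(j), E(j - 1))
```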
When considering SLOCC-equivalence, we may take $\ket{E_{k-1}}$ as a representative
of the states stabilized by $K_{(1)}$. The reason is that all other
states (with $\alpha_{k-1}\neq0$) can be built from it via an operator
$\sum_{j=0}^{k-1} \alpha_{k-1-j} K_{(1)}^j$. This operator is invertible, since its determinant is $\alpha_{k-1}^k$.
Throughout the derivation, we work with unnormalized states for convenience.
The properly normalized excitation state is given in equation \eqref{eq:excitationnormalized}.
\begin{theorem}
An $n>2$ particle state $\ket{\psi}\in\cal{S}$, stabilized by $B$, is
unique (up to SLOCC) if and only if each block of the
Jordan form of $B$ has a distinct eigenvalue and no other $B'$ exists with a greater number of
eigenvalues or a lesser number of Jordan blocks.
\end{theorem}
The formulation may seem complicated, but we want to exclude {\em degenerate} states, which are also stabilized by other operators.
For the excitation state we want to ensure that the amplitude of the highest, $(d-1)$-th, excitation is non-zero (otherwise it is stabilized also by a matrix with two eigenvalues), or, for the GHZ states, that all amplitudes are non-zero (otherwise, two eigenvalues can be merged into one, forming a single Jordan block).
For example, the three-qutrit pure state $\ket{000}+\ket{111}$ is stabilized by a matrix with Jordan block structure
$\{ \{ 1 \}, \{ 1 \}, \{ 1 \} \}$ (as for the GHZ state). However, unlike $\ket{000}+\ket{111}+\ket{222}$ (the GHZ state), it is also stabilized by a matrix with one less Jordan block $\{ \{ 1 \}, \{ 2 \} \}$.
\begin{proof}
$'\Leftarrow'$
We have already shown that the GHZ-like state with all amplitudes
different from zero is unique, as well as the excitation state
with non-zero amplitude for the highest excitation. It follows as well for any
state without blocks of the same eigenvalue, as the problem can be split into a problem for each eigenvalue.
\begin{itemize}
\item If any amplitude is zero in the GHZ-like case, the state is also stabilized by a $B$ with a Jordan block of dimension two.
\item If, in the excitation state, the amplitude of the highest excitation is zero, the state is also stabilized by a $B$ with one more eigenvalue.
\end{itemize}
$'\Rightarrow'$
If there are two blocks with the same eigenvalue, then we can take two
one-particle eigenvectors $\ket{\mu}$ and $\ket{\nu}$ having the same
eigenvalue. Let us look at the projection of $\ket{\psi}$ on the subspace
spanned by $\text{Sym}^n(\text{lin}\{\ket{\mu}, \ket{\nu}\})$. Then, in
particular, a linear combination with non-zero coefficients of elements with
zero, one and two $\ket{\nu}$ states among all other $\ket{\mu}$ does not give
rise to more blocks or eigenvalues, but gives rise to some states which cannot
be interchanged with local operations.
\end{proof}
\begin{corollary}
The number of Jordan block structures with non-degenerate eigenvalues
is the same as the number of integer partitions of $d$~\cite{oeisA000041}.
A general construction of such a state is
\begin{align}
\bigoplus_{i=1}^{\#\mathrm{blocks}} \ket{E_{k_i-1}},\label{eq:unique_general}
\end{align}
where $k_i$ is the dimension of the $i$-th Jordan block, in descending order.
In particular, for GHZ there are only blocks of size $k_i=1$,
whereas for the excitation state there is only one block, $k_1=d$.
\end{corollary}
It is also relevant to ask about stabilized states for $B$ whose Jordan
decomposition contains two different blocks with the same eigenvalue.
Let us use a one-particle basis given by $\ket{i^{(b)}}$, where $i$ denotes
excitation-level (i.e. the largest $i$ such that $J^i$ acting on this vector is
non-zero) and $b$ the Jordan block to which it belongs. First, we notice
that the sum of the excitations $j$ in a given state is decreased by $1$ after
action of $K_{(1)}$. Second, we notice that the excitations can be distributed
among all Jordan subspaces which are {\em big enough} (i.e. an excitation level
$i$ fits only in blocks of size strictly greater than $i$). Moreover, the distribution among such Jordan subspaces
needs to be permutation-invariant.
\begin{theorem}
An unnormalized state of excitation $j$ distributed among $s$
blocks (with multiplicities $n_1,n_2,\ldots$ adding up to $n$, counting how many particles are assigned to each Jordan block) reads
\begin{equation}
\ket{E_j^{n_1,n_2,\ldots}} = \sum_{\vec{b}: \#i =
n_i}\sum_{i_1+\ldots+i_n=j}\ket{i_1^{(b_1)}}\ldots
\ket{i_n^{(b_n)}}.\label{eq:ej_multimode}
\end{equation}
Moreover, only states of the form
$\ket{E_j^{n_1,n_2,\ldots}}$ are stabilized by such $J$, as we show below by induction.
\end{theorem}
For example, one excitation $j=1$ among two particles, distributed among two
modes ($n_1=1$, $n_2=1$) reads
\begin{align}
\ket{E_1^{1,1}} &= \ket{0^{(1)}1^{(2)}} + \ket{1^{(1)}0^{(2)}}\\
&+ \ket{0^{(2)}1^{(1)}} + \ket{1^{(2)}0^{(1)}}.
\end{align}
\begin{proof}
The induction basis is for $n=1$
and holds trivially (as it works for all states). So let us assume that
\eqref{eq:ej_multimode} holds for $n$.
For $n+1$ particles, a generic state with fixed $j$ and $n_1,n_2,\ldots$ is
\begin{align}
\sum_{l=0}^j \sum_{b=1}^{s} \beta_{l,b}
\ket{E_{j-l}^{n_1-\delta_{b1},n_2-\delta_{b2},\ldots}} \ket{l^{(b)}}.
\end{align}
Applying $J_{(1)}$ and $J_{(n+1)}$ on the state above, we get a relation
$\beta_{l,b}=\beta_{l+1,b}=\beta_b$. Moreover, from the condition of permutation
symmetry for blocks (i.e. components with the same $(b)$) we get that all
$\beta$ need to be the same, so it is of the form \eqref{eq:ej_multimode}.
\end{proof}
This finalizes the classification of symmetric states for which \eqref{eq:bbinv} holds.
\section{Invariants as functions of creation and annihilation operators}
\label{s:polynomial-invariants}
Having shown that the problem of relating bosonic states by linear optics is the same as asking whether they are equivalent with respect to local unitary operations,
we focus on specific methods for bosonic states that provide analytic invariants, building on our work \cite{Migdal2014ffdag}.
These invariants are built upon $f^\dagger$, the homogeneous polynomial on the creation operators which transforms the vacuum into our state. We present two families of LU-invariants, i.e.: two sets of complex-valued functions on the Hilbert space which are invariant under linear optics:
\begin{itemize}
\item The spectrum of the operator $f f^\dagger$.
\item The moments: vacuum expectation values of the operators $f^k f^{\dagger k}$, for any natural $k$.
\end{itemize}
The considered invariants are both simple to calculate and, as we will show, sufficient to distinguish states in many practical situations, even some states which are generally difficult to handle.
This part of the work is organized as follows.
In Section~\ref{s:ff} we present the construction and relevance of spectral invariants related to the operator $f f^\dagger$.
We show that, despite being infinite dimensional, this operator can be easily diagonalized, as it separates into blocks of fixed particle number $k$ (not necessarily equal to the photon number $n$ of the state) which are related to many-body correlators.
In Section~\ref{s:replicas} we discuss the second set of invariants: vacuum expectation values of $f^k f^{\dagger k}$.
It corresponds to the projection of the tensor power of $k$ copies of our state (in the particle basis) onto the completely symmetric Hilbert space.
In Section~\ref{s:ffdag-examples} we apply our methods in concrete examples. We show that, using our invariants, we can solve the LU-equivalence problem for two particles in two modes and for three particles in two modes. We also study which states from the four-particle singlet subspace can be reached using linear optics from another state in the same singlet subspace.
Moreover, we show that, at least in some cases, $k$-particle blocks of $f f^\dagger$ provide more invariants than $k$-particle reduced density matrices.
In Section~\ref{s:mode-product} we propose an interferometric scheme that, in principle, allows for a direct measurement of this set of invariants.
Moreover, such scheme allows direct experimental creation of states given by the polynomial $f^{k}$ for an arbitrary $k$.
Some technical discussions are left for Sec.~\ref{app:schwinger}, where we introduce Schwinger-like representation for expressing arbitrary $k$-body correlations in terms of normally ordered creation and annihilation operators.
\subsection{Reduced density matrix}
\label{sec:reduced-density-matrix}
One straightforward method for checking whether two states are LU-equivalent is comparing the spectra of their reduced density matrices.
As we are working with permutation-symmetric states, it does not matter which particles we keep, and thus we have a family of reduced density matrices parameterized by the number of retained particles, from $1$ to $n$.
The simplest one is the one-particle density matrix
\begin{equation}
\rho_{ij} = \bra{\psi} a_j^\dagger a_i \ket{\psi}.
\end{equation}
For two particles, two states are LU-equivalent if, and only if, they have the same spectrum of the one-particle reduced density matrix, see Schmidt decomposition in Sec.~\ref{s:two_particles}.
The same condition holds for the Gaussian states, as shown in Sec.~\ref{sec:gaussian-states}.
In general it is a necessary, but not sufficient, condition for LU-equivalence.
Even for a pure state of three symmetric qubits it is no longer the case --- reduced density matrices offer only one invariant, whereas there are three, see Sec.~\ref{s:three-qubits}.
For the general relation between expectation values of creation and annihilation operators and reduced density matrices, see Sec.~\ref{app:schwinger}.
\subsection{Spectral method}\label{s:ff}
Let us consider the $d$-mode, $n$-particle bosonic state given in equation (\ref{def.f}), $\ket{\psi}=f^\dagger\ket{\Omega}$, where $f(a_1,\cdots,a_d)$ is a homogeneous polynomial of degree $n$ in the annihilation operators for the modes. Now, let us consider the operator $ff^\dagger$.
We will show that:
\begin{itemize}
\item its spectrum is invariant with respect to $SU(d)$ transformations (\ref{eq:u-transf}),
\item it may be decomposed into an infinite number of blocks of finite size, but
\item the first $n$ blocks suffice to reconstruct the state.
\end{itemize}
\subsubsection{Invariance of the spectrum}
\begin{theorem}
The spectrum of $ff^\dagger$ is invariant with respect to arbitrary rotations between the modes, that is,
\begin{equation}
\text{Sp}\left[f(\vec{a}) f^\dagger(\vec{a})\right] =
\text{Sp}\left[f(U\vec{a}) f^\dagger(U\vec{a})\right]
\label{ff.invariance}
\end{equation}
for every $U\in SU(d)$.
\end{theorem}
\begin{proof}
Each unitary operator acting on the modes $U = \exp(i H)$ (with Hermitian $H$) can be promoted to act on the full Fock-space via a second quantization extension:
\begin{align}
\tilde{U} = \exp\left(i \sum_{i,j=1}^{d} H_{ij} a_i^\dagger a_j \right),
\end{align}
where $\tilde{U}\cong U^{\otimes n}$ on our Hilbert space ${\cal S}_n^d$. This operator $\tilde{U}$ is unitary and acts on monomials in a natural way, i.e.: $\tilde{U}^\dagger a_j \tilde{U} = \sum_i U_{ji} a_i$, which can be checked with the Hadamard lemma. Consequently,
\begin{align}
f(U\vec{a}) f^\dagger(U\vec{a}) = \tilde{U}^\dagger f(\vec{a}) f^\dagger(\vec{a}) \tilde{U},
\end{align}
i.e.: the two operators are unitarily related and, thus, they have the same spectrum.
\end{proof}
\subsubsection{Block Decomposition}
\label{s:block-decomposition}
Since operator $f$ is a {\em homogeneous} polynomial of degree $n$ on the annihilation operators, each summand in operator $ff^\dagger$ contains $n$ creation and $n$ annihilation operators. Thus, $ff^\dagger$ preserves the number of photons $k$, and decomposes into blocks $ff^\dagger|_k$. Let $\vec k$ and $\vec k'$ be multi-indices with $|\vec k|=|\vec k'|=k$. Then, matrix elements of $ff^\dagger|_k$ can be shown to correspond to {\em correlators} of our state:
\begin{align}
&\bra{\vec k'}\; f f^\dagger\; \ket{\vec k} =
\bra{\Omega}\; \tilde a_{\vec k'}\;
f f^\dagger\;
\tilde a_{\vec k}^\dagger \; \ket{\Omega} \nonumber\\ =&
\bra{\Omega}\; f\; \tilde a_{\vec k'} \tilde a^\dagger_{\vec k} \;
f^\dagger\; \ket{\Omega} = \bra{\psi} \; \tilde a_{\vec k'} \tilde
a^\dagger_{\vec k}\; \ket{\psi}.
\label{eq:spec2correl}
\end{align}
For example, for two modes and particle numbers $k\in\{0, 1,2\}$, the blocks are given by:
\begin{align}
f f^\dagger|_{k=0} &=
\left[%
\begin{matrix}
\bra{\psi}1\ket{\psi}
\end{matrix}
\right]
\\
f f^\dagger|_{k=1} &=
\left[%
\begin{matrix}
\bra{\psi}a_1 a_1^\dagger\ket{\psi} & \bra{\psi}a_1 a_2^\dagger\ket{\psi} \\
\bra{\psi}a_2 a_1^\dagger\ket{\psi} & \bra{\psi}a_2 a_2^\dagger\ket{\psi}
\end{matrix}
\right]
\\
f f^\dagger|_{k=2} &=
\end{align}%
\begin{equation}
\left[%
\begin{matrix}
\bra{\psi} \frac{a_1^2 a_1^{\dagger 2}}{2} \ket{\psi}
& \bra{\psi} \frac{a_1^2 a_1^\dagger a_2^\dagger}{\sqrt{2}} \ket{\psi}
& \bra{\psi} \frac{a_1^2 a_2^{\dagger 2}}{2} \ket{\psi}
\\
\bra{\psi} \frac{a_1 a_2 a_1^{\dagger 2}}{\sqrt{2}} \ket{\psi}
& \bra{\psi} a_1 a_2 a_1^\dagger a_2^\dagger \ket{\psi}
& \bra{\psi} \frac{a_1 a_2 a_2^{\dagger 2}}{\sqrt{2}} \ket{\psi}
\\
\bra{\psi} \frac{a_2^2 a_1^{\dagger 2}}{2} \ket{\psi}
& \bra{\psi} \frac{a_2^2 a_1^\dagger a_2^\dagger}{\sqrt{2}} \ket{\psi}
& \bra{\psi} \frac{a_2^2 a_2^{\dagger 2}}{2} \ket{\psi}
\end{matrix}
\right]\nonumber.
\end{equation}
The matrix elements of $ff^\dagger|_k$ are $k$-particle correlators. For $k=0$, the only matrix element is the norm of the state.
Note that the spectrum of $f f^\dagger$ is real, as each block $f f^\dagger|_k$ is a Hermitian matrix.
Unitary rotations do not change the particle count. Consequently, the block structure is preserved under rotations and, thus, the $\text{Sp}[f f^\dagger|_k]$ are invariants. If the eigenvalues for two states differ, $\text{Sp}[f_1 f_1^\dagger|_k] \neq \text{Sp}[f_2 f_2^\dagger|_k]$, then the two states {\em can not} be related by a unitary rotation of the modes.
The converse is, in general, not true --- states related by complex conjugation (of $f$), which preserves the spectrum, are not necessarily related by linear optics (see Sec.~\ref{s:three-qubits} for an example).
It, however, remains an open question whether the converse (up to complex conjugation) is true.
Instead of the eigenvalues, we may compute the characteristic polynomial:
\begin{align}
w_k(\lambda) = \det\left[ f f^\dagger|_{k} - \lambda \mathbbm{I} \right].
\label{characteristic.polynomial}
\end{align}
Since its coefficients are in one-to-one correspondence with the spectrum, the method is equally powerful. Moreover, the coefficients of $w_k(\lambda)$ are polynomials in the coefficients of $f$, which is closer in spirit to the formulation of Hilbert's theorem. An alternative, but equivalent, route is to investigate the moments of $f f^\dagger|_k$, that is, $\hbox{Tr}[(f f^\dagger|_k)^l]$. They are in one-to-one correspondence with the characteristic polynomial $w_k(\lambda)$ by virtue of the Newton identities \cite{Mead1992}.
For $k=1$, the block is related to the single-particle reduced density matrix (see Sec.~\ref{sec:reduced-density-matrix}), i.e.:
\begin{align}
\rho_1 = f f^\dagger|_{k=1} - \mathbbm{I}.
\end{align}
For $k>1$ we do not recover the reduced $k$-particle density matrix and, as we will show,
$f f^\dagger|_k$
can provide more entanglement invariants than the spectrum of the reduced density matrices with those respective particle numbers.
Even the first block can give interesting results: it yields a no-go observation for deterministically changing one Fock state into another using linear optics.
Let us look at $f f^\dagger|_1$ for a Fock state. Its matrix is diagonal (i.e.\ the terms $\langle \psi | a_i a_j^\dagger |\psi \rangle$ vanish for $i\neq j$). The diagonal values, and therefore the eigenvalues, are $\bra{\psi} a_i a_i^\dagger \ket{\psi} = n_i + 1$. As they are invariants, two Fock states can be deterministically related by linear optics if and only if they have the same photon counts (up to a permutation of modes).
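These invariants are easy to test numerically. The sketch below (NumPy/SciPy on a truncated Fock space; the cutoff and random seed are arbitrary choices) builds the $k=1$ block for the Fock state $\ket{2,0}$, applies a random passive mode rotation promoted to the Fock space, and checks that the spectrum $\{n_i+1\}=\{3,1\}$ is unchanged:

```python
import numpy as np
from scipy.linalg import expm

N = 6                                        # Fock cutoff per mode
a = np.diag(np.sqrt(np.arange(1, N)), 1)     # single-mode annihilation operator
I = np.eye(N)
a1, a2 = np.kron(a, I), np.kron(I, a)        # two-mode ladder operators
ops = [a1, a2]
vac = np.zeros(N * N); vac[0] = 1.0

# n = 2 photons in mode 1: |psi> = f^dagger|Omega> with f = a_1^2 / sqrt(2)
psi = a1.conj().T @ a1.conj().T @ vac / np.sqrt(2)

def block_k1(psi):
    """k = 1 block of f f^dagger: correlators <psi| a_i a_j^dagger |psi>."""
    return np.array([[psi.conj() @ ops[i] @ ops[j].conj().T @ psi
                      for j in range(2)] for i in range(2)])

# a random passive rotation of the modes, promoted to the Fock space
rng = np.random.default_rng(1)
H = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
H = (H + H.conj().T) / 2
G = sum(H[i, j] * ops[i].conj().T @ ops[j] for i in range(2) for j in range(2))
psi_rot = expm(1j * G) @ psi

s = np.sort(np.linalg.eigvalsh(block_k1(psi)))
s_rot = np.sort(np.linalg.eigvalsh(block_k1(psi_rot)))
assert np.allclose(s, s_rot)                 # spectrum is linear-optics invariant
assert np.allclose(s, [1.0, 3.0])            # n_i + 1 for the Fock state |2,0>
```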
\subsubsection{Correlators and reconstruction}
Knowledge of $f f^\dagger |_k$ for all block particle numbers $k\leq n$ suffices to reconstruct the state $f^\dagger\ket{\Omega}$. The reconstruction strategy is to build the matrix elements of the corresponding density matrix
\begin{align}
\rho_{\vec{n} \vec{n}'} =
\bra{\psi} a_{\vec n}^\dagger a_{\vec n'} \ket{\psi},
\end{align}
which can be done by using the commutation relations in order to rewrite the anti-normally ordered terms as normally ordered ones.
However, we do not claim that higher blocks with $k>n$ are not important. While they are not required to reconstruct the state, there might be pairs of states whose polynomials $w_0$ up to $w_n$ coincide, yet their $w_k$ differ for some $k>n$.
That is, eigenvalues do not capture relative orientation of eigenvectors for different blocks. Eigenvalues for $k>n$ might incorporate relations between eigenvectors for $k\leq n$.
Let us provide a more straightforward way to reconstruct the state, which does not involve inverting the normal ordering of the operators. Let us recall the notion of {\em frame representation} of a many-qudit state \cite{Ferrie2011}. Let $\{\sigma^i\}$ be an orthogonal (in trace norm) set of generators of $SU(d)$ plus the identity (i.e. a basis for $d\times d$ Hermitian matrices). For $SU(2)$ we may just choose the Pauli matrices: $\{\mathbbm{I},\sigma^x,\sigma^y,\sigma^z \}$. Any density matrix of an $n$-qudit state can be written as:
\begin{align}
\rho = \sum_{i_1,\cdots,i_n} t_{i_1 i_2 \ldots i_n}
\sigma^{i_1} \otimes \sigma^{i_2} \otimes \ldots \otimes \sigma^{i_n}
\equiv \sum_{\vec\imath} t_{\vec\imath} \sigma^{\vec\imath}.
\label{frame.rep}
\end{align}
Note that for permutation-symmetric states, $t_{i_1 i_2 \ldots i_n}$ must be permutation-symmetric. Since the $\{\sigma^i\}$ are orthogonal, the state can be reconstructed from the expectation values of {\em strings} of $\sigma^{i}$ operators:
\begin{align}
t_{i_1 i_2 \ldots i_n} = \frac{1}{2^n}\hbox{Tr}\left[ \sigma^{i_1} \otimes \sigma^{i_2} \otimes \ldots \otimes \sigma^{i_n} \;\rho \right].
\label{reconstruction}
\end{align}
Expectation values of permutation-symmetric strings of $\sigma^i$ can be obtained from the correlators $ff^\dagger|_k$, as shown in Sec.~\ref{app:schwinger}. The idea behind the proof is to use a Schwinger-like representation,
related to the one for spin systems --- see \cite[Chapter 7.2]{Auerbach1994} --- and to develop identities of the form
\begin{align}
\bra{\psi} \left( \sum_{perm} \sigma^{\vec{\imath}} \right) \ket{\psi} = \bra{\Omega} f A(\vec\imath) f^\dagger \ket{\Omega},
\end{align}
where $A(\vec\imath)$ is a polynomial in creation and annihilation operators. From a practical perspective it allows calculating the expectation value without immersing everything in the full Hilbert space of distinguishable particles, which has a very high dimension.
For example, for $d=2$, we get the following relation
\begin{gather}
\bra{\psi}
\sum_{perm} (\mathbbm{I})^{\otimes n_{I}}
\otimes (\sigma^x)^{\otimes n_x}
\otimes (\sigma^y)^{\otimes n_y}
\otimes (\sigma^z)^{\otimes n_z}
\ket{\psi}\label{eq:pauli_sym}\\
=\bra{\Omega} f
: \left( a^\dagger a + b^\dagger b \right)^{n_I}
\left( a^\dagger b + b^\dagger a \right)^{n_x}\label{eq:schwinger_form}\\
\times \left( - i a^\dagger b + i b^\dagger a \right)^{n_y}
\left( a^\dagger a - b^\dagger b\right)^{n_z} :
f^\dagger \ket{\Omega},\nonumber
\end{gather}
where $n_{I} + n_x + n_y + n_z = n$ (covering all symmetric correlators), the sum runs over all $n!$ permutations, and $:\cdots:$ stands for normal ordering, i.e. putting the creation operators on the left and the annihilation operators on the right. Note that, for most of this chapter, we use anti-normal ordering, as we work with operators of the form $f^k f^{\dagger k}$.
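The identity can be checked on a small example. In the sketch below (NumPy, truncated Fock space) we take $n=2$, $\ket{\psi}=\ket{1,1}$ and the correlator with $n_x=2$; the normal-ordered operator $:(a^\dagger b + b^\dagger a)^2: \,= a^{\dagger 2}b^2 + 2 a^\dagger b^\dagger a b + b^{\dagger 2}a^2$ is expanded by hand:

```python
import numpy as np

# ---- mode (Fock) picture: two modes, cutoff N ----
N = 4
ann = np.diag(np.sqrt(np.arange(1, N)), 1)
I = np.eye(N)
A, B = np.kron(ann, I), np.kron(I, ann)      # annihilation operators a, b
Ad, Bd = A.conj().T, B.conj().T
vac = np.zeros(N * N); vac[0] = 1.0

psi_fock = Ad @ Bd @ vac                     # f^dagger|Omega> = |1,1>

# :(a^dag b + b^dag a)^2:, normal-ordered by hand
op = Ad @ Ad @ B @ B + 2 * Ad @ Bd @ A @ B + Bd @ Bd @ A @ A
rhs = psi_fock.conj() @ op @ psi_fock

# ---- particle picture: |psi>_P = (|01> + |10>)/sqrt(2), labels 0,1 = modes a,b ----
psi_p = np.array([0, 1, 1, 0]) / np.sqrt(2)
sx = np.array([[0, 1], [1, 0]])
lhs = 2 * psi_p.conj() @ np.kron(sx, sx) @ psi_p   # sum over the 2! permutations

assert np.isclose(lhs, rhs)                  # both sides equal 2
```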
\subsection{Symmetric component of tensor powers}
\label{s:replicas}
An alternative set of invariants can be found by studying the symmetric component of tensor copies of a given multi-photon state, taken in the particle representation.
Typically, $\ket{\psi}_P^{\otimes k}$ is not permutation-symmetric and therefore does not describe a boson state. However, we will show that its projection onto the symmetric subspace is proportional to $f^{\dagger k}\ket{\Omega}$, a $kn$-photon state in $d$ modes.
Let us give an example, with $n=2$ and $d=2$: $\ket{\psi} = \ket{1,1} = \frac{1}{\sqrt{2}} (\ket{12}_P+\ket{21}_P)$. If we multiply it tensorially with itself, we get $\ket{\psi}_P^{\otimes 2} = \frac{1}{2}(\ket{12}_P+\ket{21}_P)\otimes (\ket{12}_P+\ket{21}_P)$. This is {\em not} a valid photon state, because it is {\em not} permutation-symmetric:
\begin{gather}
\tfrac{1}{2} \left( \ket{1212}_P + \ket{1221}_P \right.\label{eq:nonsym-prod}\\
\left. + \ket{2112}_P + \ket{2121}_P \right)\nonumber
\end{gather}
Nonetheless, it can be projected on the permutation-symmetric subspace, $\hbox{Sym}^{kn}({\mathbbm C}^d)$.
Let ${\mathbbm P}^{(kn)}_{sym}$ stand for that projector, where the upper index represents the number of particles to be symmetrized, in this case --- $kn$.
Then,
\begin{equation}
\bra{\psi}_P^{\otimes 2} {\mathbbm P}^{(4)}_{sym} \ket{\psi}_P^{\otimes 2} = \frac{2}{3}
\end{equation}
because \eqref{eq:nonsym-prod} contains 4 out of 6 possible permutations,
\begin{equation}
{\mathbbm P}^{(4)}_{sym} \ket{1212}_P =
\tfrac{1}{6} \left( \ket{1122}_P + \text{permutations} \right).
\end{equation}
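This projection is easy to reproduce numerically; the following sketch (our own helper code) builds the symmetrizer on $({\mathbbm C}^2)^{\otimes 4}$ as the average of all $4!$ permutation operators and recovers the overlap $2/3$:

```python
import itertools
import math
import numpy as np

d, n = 2, 4   # local dimension and total number of particles (kn = 2*2)

# Projector onto Sym^4(C^2): the average of all 4! permutation operators.
P_sym = np.zeros((d**n, d**n))
for pi in itertools.permutations(range(n)):
    for idx in itertools.product(range(d), repeat=n):
        col = sum(v * d**(n - 1 - j) for j, v in enumerate(idx))
        out = tuple(idx[pi[j]] for j in range(n))
        row = sum(v * d**(n - 1 - j) for j, v in enumerate(out))
        P_sym[row, col] += 1
P_sym /= math.factorial(n)

# |psi>_P = (|12>_P + |21>_P)/sqrt(2), with the labels {1,2} encoded as {0,1}
psi = np.zeros(d**2)
psi[1] = psi[2] = 1 / np.sqrt(2)
psi2 = np.kron(psi, psi)

print(psi2 @ P_sym @ psi2)    # 2/3
```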
In order to make the LU-invariance of those values $\bra{\psi}_P^{\otimes k} {\mathbbm P}^{(kn)}_{sym} \ket{\psi}_P^{\otimes k}$ manifest, we will show their relation to
\begin{equation}
\bra{\Omega} f^k f^{\dagger k} \ket{\Omega}
\label{fkfk}
\end{equation}
i.e.: the {\em vacuum expectation values of} $f^k f^{\dagger k}$ for all $k\in{\mathbbm N}$. These are easy to compute and their invariance is straightforward, since the vacuum is rotation-invariant. Thus, we will prove the following:
\begin{theorem}%
For every homogeneous polynomial $f$, such that $\ket{\psi} = f^\dagger \ket{\Omega}$, the state generated by its $k$-th power is proportional to the state $\ket{\psi}_P^{\otimes k}$ projected on the fully symmetric space of all particles, that is,
\begin{equation}
f^{\dagger k} \ket{\Omega} = \tfrac{\sqrt{(kn)!}}{\sqrt{(n!)^k}} {\mathbbm P}^{(kn)}_{sym} \ket{\psi}^{\otimes k}_P,
\label{eq:fk_and_symmetrization}
\end{equation}
so, in particular:
\begin{equation}
\bra{\Omega}f^kf^{\dagger k} \ket{\Omega} = \tfrac{(kn)!}{(n!)^k} \bra{\psi}_P^{\otimes k} {\mathbbm P}^{(kn)}_{sym} \ket{\psi}_P^{\otimes k}.
\label{fkfk.projector}
\end{equation}
\end{theorem}
\begin{proof}
Let $\{\vec{n}^{(1)},\cdots,\vec{n}^{(k)}\}$ be $k$ multi-indices,
denoting photon count at each mode, i.e., for the vector with index $m$, we have
\begin{equation}
\vec{n}^{(m)} = \{n^{(m)}_1,\cdots,n^{(m)}_d\}.
\end{equation}
Let us denote by $|\vec{n}^{(m)}|=\sum_l n^{(m)}_l$ the total photon count. The monomial operator defined in \eqref{eq:multia}, $\tilde a^\dagger_{\vec{n}^{(1)}+\cdots+\vec{n}^{(k)}}$, can be written in terms of the individual normalized monomials as
\begin{equation}
\tilde a^\dagger_{\vec{n}^{(1)}} \tilde a^\dagger_{\vec{n}^{(2)}} \cdots \tilde a^\dagger_{\vec{n}^{(k)}} = M(\vec{n}^{(1)},\cdots,\vec{n}^{(k)})\ \tilde a^\dagger_{\vec{n}^{(1)}+\cdots+\vec{n}^{(k)}}
\label{global.creator}
\end{equation}
where
\begin{equation}
M(\vec{n}^{(1)},\cdots,\vec{n}^{(k)})\equiv \prod_{l=1}^d
\sqrt{\frac{ (n_{l}^{(1)}+\cdots+n_{l}^{(k)})!}{(n_{l}^{(1)})! \cdots (n_{l}^{(k)})! }}
\label{combinatorial.factor}
\end{equation}
is the normalization factor. Let us express $f^{\dagger k}\ket{\Omega}$ as a sum of terms of this kind:
\begin{align}
(f^\dagger)^k \ket{\Omega} &= \sum_{\vec{n}^{(1)},\cdots,\vec{n}^{(k)}} \alpha_{\vec{n}^{(1)}} \cdots \alpha_{\vec{n}^{(k)}}\; \nonumber \\
&\times M(\vec{n}^{(1)},\cdots,\vec{n}^{(k)}) \;\tilde a^\dagger_{\vec{n}^{(1)}+\cdots+\vec{n}^{(k)}} \ket{\Omega}
\label{eq:fk}
\end{align}
so, the coefficient for $\ket{\vec{I}}\equiv \tilde{a}^\dagger_{\vec{I}} \ket{\Omega}$ is
\begin{equation}
\sum_{\vec{n}^{(1)}+\cdots+\vec{n}^{(k)}=\vec I} \alpha_{\vec{n}^{(1)}}\cdots \alpha_{\vec{n}^{(k)}} \cdot M(\vec{n}^{(1)},\cdots,\vec{n}^{(k)})
\end{equation}
where $\vec I$ is a multi-index for $nk$ photons in $d$ modes.
Now, let us consider the right hand side of \eqref{eq:fk_and_symmetrization}. The tensor product $\ket{\psi}^{\otimes k}$ can be written as:
\begin{equation}
\ket{\psi}_P^{\otimes k} =
\sum_{\vec{n}^{(1)},\cdots,\vec{n}^{(k)}}
\alpha_{\vec{n}^{(1)}}\cdots \alpha_{\vec{n}^{(k)}}
\ket{\vec{n}^{(1)}}_P \otimes \cdots \otimes \ket{\vec{n}^{(k)}}_P,
\label{eq:particle-power-k}
\end{equation}
Notice that the action of several partial projections on symmetric subspaces followed by a global projection on the symmetric subspace is equivalent to just the final global projection. Consequently,
\begin{align}
&{\mathbbm P}^{(kn)}_{sym} \left( \ket{\vec{n}^{(1)}}_P\otimes\cdots\otimes\ket{\vec{n}^{(k)}}_P
\right) \nonumber \\
&= N(\vec{n}^{(1)})\cdots N(\vec{n}^{(k)}) {\mathbbm P}^{(kn)}_{sym} \nonumber\\
&\left(
{\mathbbm P}^{(n)}_{sym}(\ket{\vec{n}^{(1)}}_A) \otimes \cdots \otimes
{\mathbbm P}^{(n)}_{sym}(\ket{\vec{n}^{(k)}}_A) \right) \\
&= N(\vec{n}^{(1)})\cdots N(\vec{n}^{(k)}) {\mathbbm P}^{(kn)}_{sym} \left(
\ket{\vec{n}^{(1)}+\cdots+\vec{n}^{(k)}}_A \right) \nonumber \\
&= \frac{N(\vec{n}^{(1)})\cdots N(\vec{n}^{(k)})}{N(\vec{n}^{(1)}+\cdots+\vec{n}^{(k)})}
\ket{\vec{n}^{(1)}+\cdots+\vec{n}^{(k)}} \nonumber \\
&= \tfrac{\sqrt{(kn)!}}{\sqrt{(n!)^k}} M(\vec{n}^{(1)},\cdots,\vec{n}^{(k)}) \ket{\vec{n}^{(1)}+\cdots+\vec{n}^{(k)}}
\end{align}
Applying the above relations to \eqref{eq:particle-power-k} we get
\begin{align}
{\mathbbm P}^{(kn)}_{sym} \ket{\psi}^{\otimes k}_P &=
\sum_{\vec{n}^{(1)},\cdots,\vec{n}^{(k)}}
\alpha_{\vec{n}^{(1)}}\cdots \alpha_{\vec{n}^{(k)}}
\tfrac{\sqrt{(kn)!}}{\sqrt{(n!)^k}}\\
&\times M(\vec{n}^{(1)},\cdots,\vec{n}^{(k)})
\ket{\vec{n}^{(1)}+\cdots+\vec{n}^{(k)}}\nonumber
\end{align}
which is a state proportional to \eqref{eq:fk}, with the proportionality factor $\sqrt{(kn)!/(n!)^k}$, thus we have shown \eqref{eq:fk_and_symmetrization}.
\end{proof}
This tensor product symmetrization trick bears resemblance to the use of Clebsch-Gordan coefficients. Indeed, already for $k=2$ the result is useful: $\ket{\psi}^{\otimes 2}$ is not permutation-symmetric unless $\ket{\psi}=\ket{\phi}^{\otimes n}$ for some single-particle state $\ket{\phi}$.
It is possible to prepare an experimental setup measuring $\langle f^k f^{\dagger k} \rangle$. We have to prepare $k$ copies of the state and project each $k$-tuple of respective modes onto their symmetric combination. For example, for $k=2$, two modes are symmetrized using a beam-splitter. Then, $\langle f^2 f^{\dagger 2}\rangle$ is, up to normalization, the probability of losing no photons in the procedure (see Sec.~\ref{s:mode-product}). In general, taking copies of bosonic states and calculating projections offers a way to measure multi-particle entanglement, since taking $k$ copies provides a way to measure the R\'enyi entropy of order $k$ of the given subsystems \cite{Daley2012}.
There is another interpretation of $\langle f^k f^{\dagger k} \rangle$ in polynomial language. The quantity we are investigating is known as the {\em Bombieri norm} of homogeneous polynomials \cite{Beauzamy1990} (in this case, $f^k$), which is known to be invariant under unitary rotations of the variables.
This quantity can be expressed as an integral of $|f(\vec{a})|^{2k}$ over the (complex) unit sphere $|\vec{a}|=1$, \cite{Pinasco2012, Pinasco2005} (equivalently, see \cite[Lemma 15]{Aaronson2010}, where it is called Fock Inner Product).
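For the running example $f = a_1 a_2$ and $k=2$, the identity \eqref{fkfk.projector} is easy to check with a few lines of Python implementing this Fock inner product, $\langle x^\alpha, x^\beta\rangle = \alpha!\,\delta_{\alpha\beta}$ (the polynomial representation below is our own minimal sketch):

```python
import math
from collections import defaultdict

# Polynomials in d commuting variables (the creation operators),
# stored as {exponent-tuple: coefficient}.
def pmul(p, q):
    r = defaultdict(complex)
    for m1, c1 in p.items():
        for m2, c2 in q.items():
            r[tuple(e1 + e2 for e1, e2 in zip(m1, m2))] += c1 * c2
    return dict(r)

def fock_norm_sq(p):
    """<Omega| f f^dag |Omega> for f^dag = p(a^dag), using the Fock
    (Bombieri-type) inner product <x^alpha, x^beta> = alpha! delta."""
    return sum(abs(c)**2 * math.prod(math.factorial(e) for e in m)
               for m, c in p.items())

f = {(1, 1): 1.0}         # f = a_1 a_2, i.e. the state |1,1>
fk = pmul(f, f)           # k = 2: f^2 = a_1^2 a_2^2

lhs = fock_norm_sq(fk)                                     # <Omega|f^2 f^dag2|Omega>
rhs = math.factorial(4) / math.factorial(2)**2 * (2 / 3)   # (kn)!/(n!)^k * 2/3
print(lhs, rhs)           # both equal 4
```

The factor $2/3$ is the projection overlap computed in the two-photon example at the beginning of Sec.~\ref{s:replicas}.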
\subsection{Examples}
\label{s:ffdag-examples}
The previous two sections have introduced two sets of LU-invariants for $n$-photon states in $d$ modes. The question to be addressed in this section is the following: can those invariants help us determine the LU-equivalence classes of relevant states? We will start our discussion with a benchmark problem, which can be solved in many different ways: $n=2$ photons in $d=2$ modes. Then, we will proceed to the case of $n=3$ particles, still in $d=2$ modes, which is the first non-trivial case, although it is well understood. We will show that, in that case, the right number of polynomial invariants is recovered. Our last example is a much more complicated system: $n=4$ photons in $d=8$ modes with some additional symmetries.
\subsubsection{2 particles in 2 modes}
The simplest example is $n=2$ particles in $d=2$ modes:
\begin{align}
f = \alpha_{20} \tfrac{a_1^{2}}{\sqrt{2}} + \alpha_{11} a_1 a_2 + \alpha_{02} \tfrac{a_2^{2}}{\sqrt{2}}.
\end{align}
There is just a single invariant. Let us study how we can obtain it using the methods described in this paper. In our case it suffices to look at a block of $k=1$ particles:
\begin{align}
\left[%
\begin{matrix}
3|\alpha_{20}|^2 + 2 |\alpha_{11}|^2 + |\alpha_{02}|^2 &
\sqrt{2}(\alpha_{20}^\star \alpha_{11} + \alpha_{11}^\star \alpha_{02}) \\
\sqrt{2}(\alpha_{20} \alpha_{11}^\star + \alpha_{11} \alpha_{02}^\star) &
|\alpha_{20}|^2 + 2 |\alpha_{11}|^2 + 3|\alpha_{02}|^2
\end{matrix}
\right]
\end{align}
Its characteristic polynomial is
\begin{equation}
w_2(\lambda) = \lambda^2
- \hbox{Tr} \left( f f^\dagger|_{k=1} \right) \lambda
+ \det \left( f f^\dagger|_{k=1} \right),
\end{equation}
where coefficients are
\begin{align}
\hbox{Tr} \left( f f^\dagger|_{k=1} \right)
&= 4\left(|\alpha_{20}|^2 + |\alpha_{11}|^2 + |\alpha_{02}|^2\right),
\nonumber \\
\det \left( f f^\dagger|_{k=1} \right)
&= 4 \left(|\alpha_{20}|^2 + |\alpha_{11}|^2 +
|\alpha_{02}|^2\right)^2 \nonumber \\
- ( |\alpha_{20}|^2 &- |\alpha_{02}|^2 )^2 - 2 |\alpha_{20}^\star \alpha_{11} +
\alpha_{11}^\star \alpha_{02}|^2.
\label{eq:twotwodet}
\end{align}
The trace gives only the normalization, which is the same information contained in $f f^\dagger|_{k=0}$, and which we can set to $1$. The determinant, on the other hand, gives a new invariant.
Alternatively, we can factorize the (degree 2) polynomial: $f=f_1f_2$. In other terms, we can make use of the Majorana stellar representation \cite[Ch. 7]{BengtssonZyczkowski2006book}:
\begin{align}
\ket{\psi} &= \frac{1}{\sqrt{N}}f_1^\dagger f_2^\dagger
\ket{\Omega}\\ & = \frac{1}{\sqrt{2N}}\left(
\ket{\phi_1}_P \otimes \ket{\phi_2}_P
+ \ket{\phi_2}_P \otimes \ket{\phi_1}_P \right),
\end{align}
where $\sqrt{N}$ is a normalization factor and $f_i^\dagger \ket{\Omega} \equiv \ket{\phi_i}$. Since $U\in SU(2)$ acts on the representation as a simultaneous rotation of the points, for two particles the only invariant is the angle between the states, or equivalently $|\braket{\phi_1}{\phi_2}|^2$. A straightforward (albeit tedious) calculation gives
\begin{align}
|\braket{\phi_1}{\phi_2}|^2 =
\frac{|\alpha_{20}|^2 + |\alpha_{11}|^2 + |\alpha_{02}|^2 - |\alpha_{11}^2 - 2 \alpha_{20} \alpha_{02}|}
{|\alpha_{20}|^2 + |\alpha_{11}|^2 + |\alpha_{02}|^2 + |\alpha_{11}^2 - 2 \alpha_{20} \alpha_{02}|}.
\end{align}
Along with the normalization condition it yields the invariant
\begin{align}
|\alpha_{11}^2 - 2 \alpha_{20} \alpha_{02}|^2
&= \det \left( f f^\dagger|_{k=1} \right) - 3.
\end{align}
The above vanishes for parallel vectors $\ket{\phi_i}$, and equals $1$ for orthogonal ones.
It is also possible to find the $\langle f^k f^{\dagger k}\rangle$ invariants associated to $k$ copies. For $k=2$ we obtain:
\begin{align}
\tfrac{2^2}{4!}\bra{\Omega} f^2 f^{\dagger 2} \ket{\Omega} = 1 -
\tfrac{1}{3}|\alpha_{11}^2 - 2 \alpha_{20} \alpha_{02}|^2.
\end{align}
In particular, for each orbit under linear optics, we can give a representative, for example
\begin{align}
\frac{\cos(\theta)}{\sqrt{2}} a_1^2 + \frac{\sin(\theta)}{\sqrt{2}} a_2^2
\end{align}
for $\theta\in[0,\frac{\pi}{4}]$.
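A randomized numerical check (our own sketch; the seed is arbitrary) confirms that the determinant of the block $ff^\dagger|_{k=1}$ of a normalized state determines the invariant $|\alpha_{11}^2 - 2\alpha_{20}\alpha_{02}|^2$:

```python
import numpy as np

rng = np.random.default_rng(7)
a20, a11, a02 = rng.normal(size=3) + 1j * rng.normal(size=3)
norm = np.sqrt(abs(a20)**2 + abs(a11)**2 + abs(a02)**2)
a20, a11, a02 = a20 / norm, a11 / norm, a02 / norm   # normalized amplitudes

# The block f f^dag|_{k=1} written out above
off = np.sqrt(2) * (np.conj(a20) * a11 + np.conj(a11) * a02)
M = np.array([[3*abs(a20)**2 + 2*abs(a11)**2 + abs(a02)**2, off],
              [np.conj(off), abs(a20)**2 + 2*abs(a11)**2 + 3*abs(a02)**2]])

det = np.linalg.det(M).real
inv = abs(a11**2 - 2 * a20 * a02)**2   # the Majorana-overlap invariant

print(det - 3, inv)    # equal for any normalized state
```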
\subsubsection{Three qubits}
\label{s:three-qubits}
The case of $n=3$ photons in $d=2$ modes can be viewed as three qubits in a permutation-symmetric state, and is more involved. A full list of invariants is given in \cite{Sudbery2000}. Disregarding mirror-reflection (i.e. anti-unitary operators), there are 6 invariants, which reduce to 4 when we take into account normalization and permutation-symmetry. A normal form can be employed \cite{acin2000generalized, Acin2001a, Carteret2000} which, when particularized to a permutation-symmetric state, gives
%
\begin{align}
\ket{\psi} = &p \left(\ket{001}_P+\ket{010}_P+\ket{100}_P\right)/\sqrt{3} \nonumber\\ +
&q \ket{111}_P + r \exp(i \varphi) \ket{000}_P,
\label{eq:acin3form}
\end{align}
where all parameters ($p$, $q$, $r$, $\varphi$) are real.
In this section we label the modes $\{0,1\}$, as is customary for qubits, instead of $\{1,2\}$ (in most of this paper we start the enumeration from $1$).
Or, in polynomial notation:
\begin{align}
f = \alpha_{30} \tfrac{a_0^3}{\sqrt{6}}
+ \alpha_{21} \tfrac{a_0^2 a_1}{\sqrt{2}}
+ \alpha_{03} \tfrac{a_1^3}{\sqrt{6}},
\end{align}
where $\alpha_{30}$ is complex and both $\alpha_{21}$ and $\alpha_{03}$ are real parameters.
Our main result is that both the set of moments $\langle f^k f^{\dagger k}\rangle$ with $k\leq 5$ and the characteristic polynomials of the blocks $ff^\dagger|_{k\leq 2}$ {\em provide all invariants}. This can be checked by computing the matrix of partial derivatives of these invariants with respect to the parameters determining state \eqref{eq:acin3form} at, e.g., the point $(p=q=r=1, \varphi=\pi/4)$, and observing that it has maximal rank.
This result implies that blocks of $ff^\dagger$ convey more information than reduced density matrices, which are known to provide only 2 invariants, including the normalization
(note that, for a pure state of three qubits, the spectra of the one-particle and the two-particle reduced density matrices coincide).
Beyond this dimensionality test, it is relevant to test whether those invariants can distinguish between states related by complex conjugation (or reflection, in terms of the Majorana representation), i.e.: $\ket{\psi}$ and $\ket{\psi}^*$. In general, for $n \geq 3$, such states do not need to be related by a unitary transformation
(as, in the Majorana representation, 3 indistinguishable unit vectors need not have a mirror symmetry).
Unfortunately, neither moments nor block spectra can distinguish a state from its complex conjugate (as we already noted in Sec. \ref{s:block-decomposition}).
\subsubsection{Four-particle singlet state}
\label{s:four-particle-singlet}
As a more interesting example we consider $n=4$ photons in $d=8$ modes, composing four qubits whose singlet-subspace determines a logical qubit, see Fig.~\ref{fig:4photons8modes}. There are three Hilbert spaces that are relevant for this scenario: the total Hilbert space $\mathcal{S}^8_4$, the 4-qubit subspace $\mathcal{H}_4$, and the two-dimensional singlet subspace $\mathcal{H}_s$, which determines the logical qubit, structured by the following inclusions:
\begin{equation}
\mathcal{S}^8_4 \supset \mathcal{H}_4 \supset \mathcal{H}_s \ .
\label{eq:singlet-subsubspace}
\end{equation}
We address here the following natural question: starting with a particular singlet state $\ket{\psi}\in\mathcal{H}_s$, which singlet states (also in $\mathcal{H}_s$) can be obtained from it using only linear optics? Before proceeding further, let us first describe the details of the construction of the 4-qubit and the singlet subspaces of $\mathcal{S}^8_4$.
Let us denote by $\{a_i,b_i\}_{i=1}^4$ the eight annihilation operators required to span $\mathcal{S}^8_4$, where $a_i$ refers to the horizontal and $b_i$ to the vertical polarization of the $i$-th beam. We define the 4-qubit subspace $\mathcal{H}_{4}\subset \mathcal{S}^{8}_4$ as the subspace spanned by states that have exactly one particle in each of the four pairs of modes $(a_i,b_i)$. This subspace has dimension 16 and is isomorphic to the Hilbert space of four distinguishable qubits, $\left({\mathbbm C}^2\right)^{\otimes 4}$. The action of the local unitary group $SU(2)^{\otimes 4}$ on $\mathcal{H}_4$ is modeled by the action of global linear optics operations that do not mix the pairs $(a_i,b_i)$. The two-dimensional singlet subspace, $\mathcal{H}_s$, is defined as the subspace of $\mathcal{H}_4$ which is invariant under the action of collective unitary rotations of all four qubits, $V^{\otimes 4}$.
The above construction was first introduced in \cite{Zanardi1997} as the simplest example of a decoherence-free subspace for collective rotations, and it has been created experimentally \cite{Weinfurter2001}. In \cite{Migdal2011dfs} it was shown that the logical qubit is immune to one-particle loss and a protocol for quantum key distribution using such states and linear optics was provided.
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.35\textwidth]{figs/article/ffdag/4photons8modes}
\caption{Linear transformations for a state with 4 photons distributed among 8 modes, $\mathcal{S}^8_4$. We consider states having exactly one photon in each pair of modes (denoted by green boxes).
This subspace is equivalent to the Hilbert space of 4 distinguishable particles, $\mathcal{H}_4$.
Furthermore, we study singlet states, i.e. states that are invariant with respect to $U=V^{\otimes 4}$, for all unitary $V$, where each $V$ acts on the respective pair of modes.}
\label{fig:4photons8modes}
\end{figure}
Let us describe the structure of the singlet space in the mode description. For each pair of beams we can define the two-photon {\em singlet} state:
\begin{align}
s_{12} = \left( a_1 b_2 - b_1 a_2 \right) / \sqrt{2},
\end{align}
i.e. $s^\dagger_{12}\ket{\Omega} = \left( \ket{HV} - \ket{VH} \right)/\sqrt{2}$, where $\ket{H}$ and $\ket{V}$ stand for horizontal and vertical polarization, respectively. Those two-photon singlet states can be paired in three inequivalent ways in order to build a global $n=4$ state:
\begin{align}
s_{12}s_{34}, \quad s_{13}s_{42}, \quad s_{14}s_{23}.
\label{eq:pairs_of_pairs}
\end{align}
These three states are not orthogonal; being three vectors in a two-dimensional subspace, they are linearly dependent. In fact, the ordering of particles in $s_{13}s_{42}$ was selected so that the scalar product between each pair is $-1/2$. To form an orthogonal basis, we prepare two linear combinations of them, resembling circular polarization states:
\begin{align}
l &= \tfrac{\sqrt{2}}{3}(s_{12}s_{34} + \epsilon s_{13}s_{42} + \epsilon^2 s_{14}s_{23})\label{eq:singlet_lr_def}\\
r &= \tfrac{\sqrt{2}}{3}(s_{12}s_{34} + \epsilon^2 s_{13}s_{42} + \epsilon s_{14}s_{23}),
\end{align}
where $\epsilon = \exp(i 2\pi/3)$.
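The overlaps quoted above can be verified by representing the creation operators as commuting variables and computing vacuum inner products via $\langle x^\alpha, x^\beta\rangle = \alpha!\,\delta_{\alpha\beta}$; the helper functions below are our own minimal sketch:

```python
import cmath
import math
from collections import defaultdict

NV = 8   # variables: a1..a4 -> indices 0..3, b1..b4 -> indices 4..7

def var(i):
    e = [0] * NV
    e[i] = 1
    return {tuple(e): 1.0}

def pmul(p, q):
    r = defaultdict(complex)
    for m1, c1 in p.items():
        for m2, c2 in q.items():
            r[tuple(x + y for x, y in zip(m1, m2))] += c1 * c2
    return dict(r)

def lincomb(*terms):   # terms are (coefficient, polynomial) pairs
    r = defaultdict(complex)
    for c, p in terms:
        for m, v in p.items():
            r[m] += c * v
    return dict(r)

def fock_dot(p, q):
    """Vacuum inner product: <x^alpha, x^beta> = alpha! delta_{alpha,beta}."""
    return sum(complex(c).conjugate() * q.get(m, 0) *
               math.prod(math.factorial(e) for e in m)
               for m, c in p.items())

def s(i, j):   # two-photon singlet on beams i and j (1-based)
    return lincomb((1 / math.sqrt(2), pmul(var(i - 1), var(4 + j - 1))),
                   (-1 / math.sqrt(2), pmul(var(4 + i - 1), var(j - 1))))

p1 = pmul(s(1, 2), s(3, 4))
p2 = pmul(s(1, 3), s(4, 2))
p3 = pmul(s(1, 4), s(2, 3))
eps = cmath.exp(2j * cmath.pi / 3)
c = math.sqrt(2) / 3
l = lincomb((c, p1), (c * eps, p2), (c * eps**2, p3))
r = lincomb((c, p1), (c * eps**2, p2), (c * eps, p3))

print(fock_dot(p1, p2), fock_dot(p1, p3), fock_dot(p2, p3))  # pairwise -1/2
print(fock_dot(l, l), fock_dot(r, r), fock_dot(l, r))        # 1, 1, 0
```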
Let us introduce the following parametrization for our state
\begin{align}
f = \cos(\tfrac{\theta}{2}) l + \sin(\tfrac{\theta}{2}) e^{i\varphi} r,
\label{eq:singlet_lr_basis}
\end{align}
where $\theta\in[0,\pi]$ and $\varphi\in[0,2\pi)$, so that we can absorb the sign in $\theta$. As it is a logical qubit (i.e. a two-dimensional Hilbert space), it can be represented on the Bloch sphere, see Fig.~\ref{fig:2_dims_singlet_geometry}.
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.40\textwidth]{figs/article/ffdag/singlet.png}
\caption{Arrows stand for $s_{12}s_{34}$, $s_{13}s_{42}$ and $s_{14}s_{23}$. On the poles there are the $l$ and $r$ states, as defined in \eqref{eq:singlet_lr_def}. Points represent a single state subjected to the actions corresponding to all permutations of the pairs of modes.}
\label{fig:2_dims_singlet_geometry}
\end{figure}
Now let us compute the moments up to a few copies:
\begin{align}
\langle f^{2} f^{\dagger 2} \rangle &= \tfrac{17}{2} - \tfrac{1}{2}
\cos(2\theta), \nonumber \\
\langle f^{3} f^{\dagger 3} \rangle &= 290 - 42 \cos(2\theta) - 8
\sin^3(\theta) \cos(3 \varphi),
\end{align}
as a side note, the normalization factors (as in \eqref{fkfk.projector}) are $1/70$ and $1/34650$, respectively; that is, the states are very far from being coherent.
Consequently, we obtain two invariants:
\begin{align}
\cos(2\theta) \quad \text{and} \quad \cos(3 \varphi).
\label{eq:singlet_invariants}
\end{align}
This result restricts the allowed operations within linear optics.
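The moments above can be reproduced numerically by representing the creation operators as commuting variables with the vacuum inner product $\langle x^\alpha, x^\beta\rangle = \alpha!\,\delta_{\alpha\beta}$; the sketch below (our own helper code) checks both formulas at a generic point $(\theta,\varphi)$:

```python
import cmath
import math
from collections import defaultdict

NV = 8   # variables: a1..a4 -> indices 0..3, b1..b4 -> indices 4..7

def var(i):
    e = [0] * NV
    e[i] = 1
    return {tuple(e): 1.0}

def pmul(p, q):
    r = defaultdict(complex)
    for m1, c1 in p.items():
        for m2, c2 in q.items():
            r[tuple(x + y for x, y in zip(m1, m2))] += c1 * c2
    return dict(r)

def lincomb(*terms):
    r = defaultdict(complex)
    for c, p in terms:
        for m, v in p.items():
            r[m] += c * v
    return dict(r)

def norm_sq(p):   # <Omega| f f^dag |Omega>, with <x^al, x^be> = al! delta
    return sum(abs(c)**2 * math.prod(math.factorial(e) for e in m)
               for m, c in p.items())

def s(i, j):
    return lincomb((1 / math.sqrt(2), pmul(var(i - 1), var(4 + j - 1))),
                   (-1 / math.sqrt(2), pmul(var(4 + i - 1), var(j - 1))))

eps = cmath.exp(2j * cmath.pi / 3)
p1 = pmul(s(1, 2), s(3, 4))
p2 = pmul(s(1, 3), s(4, 2))
p3 = pmul(s(1, 4), s(2, 3))
c = math.sqrt(2) / 3
l = lincomb((c, p1), (c * eps, p2), (c * eps**2, p3))
r = lincomb((c, p1), (c * eps**2, p2), (c * eps, p3))

def moment(theta, phi, k):
    f = lincomb((math.cos(theta / 2), l),
                (math.sin(theta / 2) * cmath.exp(1j * phi), r))
    fk = f
    for _ in range(k - 1):
        fk = pmul(fk, f)
    return norm_sq(fk)

theta, phi = 0.9, 0.4
m2, m3 = moment(theta, phi, 2), moment(theta, phi, 3)
print(m2, 17 / 2 - math.cos(2 * theta) / 2)
print(m3, 290 - 42 * math.cos(2 * theta)
          - 8 * math.sin(theta)**3 * math.cos(3 * phi))
```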
If we restrict ourselves further, only to operations preserving the singlet subspace, then the only possible operations, in the Bloch representation (see Fig.~\ref{fig:2_dims_singlet_geometry}), are: rotations along the equator by $2\pi/3$ and $4\pi/3$, rotations by $\pi$ around the states \eqref{eq:pairs_of_pairs}, and the mirror reflection with respect to the equatorial plane. In particular, there are no continuous allowed transformations \cite{Wasilewski2007} for such singlet states.
Let us show how to implement all those operations, with the exception of the mirror reflection.
Which operations keep the state within the singlet subspace? Of course, different pairings can be interchanged by permuting beams. For example, $(2 \leftrightarrow 3)$ changes $s_{12}s_{34}$ into $s_{13}s_{24} = -s_{13}s_{42}$ (and so does $(1 \leftrightarrow 4)$). The exchange of any two beams maps each of the three two-singlet pairings into another pairing with a minus sign. Thus, permuting beams preserves the singlet subspace.
The group of permutations of the $4$ beams has $24$ elements, which can be generated by the pairwise swaps:
\begin{align}
(1 \leftrightarrow 2) \text{ or } (3 \leftrightarrow 4):\quad &
&l &\mapsto - r,\quad
&r &\mapsto - l\\
(1 \leftrightarrow 3) \text{ or } (2 \leftrightarrow 4):\quad &
&l &\mapsto - \epsilon^2 r,\quad
&r &\mapsto - \epsilon l\\
(1 \leftrightarrow 4) \text{ or } (2 \leftrightarrow 3): \quad &
&l &\mapsto - \epsilon r,\quad
&r &\mapsto - \epsilon^2 l,
\end{align}
which can be checked directly by permuting particles in \eqref{eq:singlet_lr_def}. On the Bloch sphere, they are just rotations by $\pi$ around one of the states \eqref{eq:pairs_of_pairs}.
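These swap actions can be checked mechanically: permuting two beams is a relabeling of variables in the polynomial representation of the state. The sketch below (our own helper code) verifies $l \mapsto -r$, $l \mapsto -\epsilon^2 r$ and $l \mapsto -\epsilon r$ under the three kinds of swaps:

```python
import cmath
import math
from collections import defaultdict

NV = 8   # a1..a4 -> indices 0..3, b1..b4 -> indices 4..7

def var(i):
    e = [0] * NV
    e[i] = 1
    return {tuple(e): 1.0}

def pmul(p, q):
    r = defaultdict(complex)
    for m1, c1 in p.items():
        for m2, c2 in q.items():
            r[tuple(x + y for x, y in zip(m1, m2))] += c1 * c2
    return dict(r)

def lincomb(*terms):
    r = defaultdict(complex)
    for c, p in terms:
        for m, v in p.items():
            r[m] += c * v
    return dict(r)

def s(i, j):
    return lincomb((1 / math.sqrt(2), pmul(var(i - 1), var(4 + j - 1))),
                   (-1 / math.sqrt(2), pmul(var(4 + i - 1), var(j - 1))))

eps = cmath.exp(2j * cmath.pi / 3)
p1 = pmul(s(1, 2), s(3, 4))
p2 = pmul(s(1, 3), s(4, 2))
p3 = pmul(s(1, 4), s(2, 3))
c = math.sqrt(2) / 3
l = lincomb((c, p1), (c * eps, p2), (c * eps**2, p3))
r = lincomb((c, p1), (c * eps**2, p2), (c * eps, p3))

def swap_beams(p, i, j):
    """Relabel beams i <-> j (1-based): swap a_i <-> a_j and b_i <-> b_j."""
    out = {}
    for m, v in p.items():
        e = list(m)
        e[i - 1], e[j - 1] = e[j - 1], e[i - 1]
        e[4 + i - 1], e[4 + j - 1] = e[4 + j - 1], e[4 + i - 1]
        out[tuple(e)] = v
    return out

def close(p, q):
    keys = set(p) | set(q)
    return all(abs(p.get(m, 0) - q.get(m, 0)) < 1e-9 for m in keys)

print(close(swap_beams(l, 1, 2), lincomb((-1, r))))        # True
print(close(swap_beams(l, 1, 3), lincomb((-eps**2, r))))   # True
print(close(swap_beams(l, 1, 4), lincomb((-eps, r))))      # True
```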
The composition of two such swaps yields cyclic permutations of three beams, e.g. $(1 \rightarrow 2 \rightarrow 3 \rightarrow 1)$. It turns out that such permutations result in $\varphi \mapsto \varphi + 2\pi/3$ and $\varphi \mapsto \varphi + 4\pi/3$.
Thus we have realized all the unitary operations allowed by \eqref{eq:singlet_invariants}; what remains are the antiunitary operations (reflections on the Bloch sphere, $\theta \mapsto \pi -\theta$). It is thus still possible that there are linear operations, not preserving the singlet subspace, that map some states into their complex conjugates. Nonetheless, to the best of the authors' knowledge, this computation provides the most systematic study of the geometry of the simplest singlet qubit implemented with photons.
Alternatively, we can use the spectrum of $f f^\dagger|_k$ for different values of $k$. It suffices to check the two-particle block, $f f^\dagger |_2$, which is a $36\times 36$ matrix. The highest degree terms of its characteristic polynomial read:
\begin{align}
w_2(\lambda) &= \lambda^{36}\\
&- \lambda^{35} \tfrac{1}{4}\left( 17139 \cos(2 \theta) \right)\nonumber\\
&+ \lambda^{34} \tfrac{1}{72}\left( 9084959 + 1605 \cos(2 \theta) \right.\nonumber\\
&+ \left. 4 \cos(3 \varphi) \sin^3(\theta) \right) - \ldots,\nonumber
\end{align}
which yield the same invariants as the moments.
\subsection{Experimental recipe for tensor product in mode basis}
\label{s:mode-product}
In this section we study the tensor product in the mode representation, $\ket{\psi}_M^{\otimes k}$, which is different from, and more physically relevant than, the tensor product in the particle representation discussed in Sec.~\ref{s:replicas}.
Furthermore, we provide an experimentally feasible way to directly measure the invariants $\bra{\Omega} f^k f^{\dagger k} \ket{\Omega}$, defined as in \eqref{fkfk}, as they are related to the success rate of creating the state $f^{\dagger k}\ket{\Omega}$ from $k$ copies of the state $f^\dagger \ket{\Omega}$.
To start with, let us look at an example of $n=3$ particles in $d=2$ modes, raised to the tensor power $k=2$:
\begin{align}
&\left( \tfrac{1}{\sqrt{2}} (\ket{0,3}_M+\ket{2,1}_M) \right)^{\otimes 2}\\
&= \tfrac{1}{2}\left( \ket{0,3,0,3}_M + \ket{0,3,2,1}_M\right.\\
&\left.+ \ket{2,1,0,3}_M+\ket{2,1,2,1}_M \right).
\end{align}
This is a valid photon state (as the permutation-symmetry of particles is built into the mode representation) of $6$ particles in $4$ modes.
In general, raising a bosonic state to the $k$-th tensor power in the mode representation yields $kn$ photons in $kd$ modes (not $kn$ particles in $d$ modes, as for the tensor power in the particle representation).
The tensor product in the mode representation has a direct physical interpretation: if we set up $k$ optical tables with identical setups, each producing the state $\ket{\psi}$, then $\ket{\psi}_M^{\otimes k}$ is the quantum state produced by the whole laboratory. Taking copies multiplies the number of modes, as each mode acquires one more label, namely the number of the optical table.
Can this product be related to $f^{\dagger k}$ in some way? The answer is positive. This time, instead of symmetrizing particles (as we did for $\ket{\psi}_P^{\otimes k}$), we need to reduce the number of modes from $kd$ back to $d$ by performing a symmetrization of modes.
We can write
\begin{align}
\ket{\psi}_M^{\otimes k} &= f^\dagger(a_{(1,1)},\ldots,a_{(d,1)})\label{eq:fkmanymodes}\\
&\times f^\dagger(a_{(1,2)},\ldots,a_{(d,2)}) \times \ldots\\
&\times f^\dagger(a_{(1,k)},\ldots,a_{(d,k)})\ket{\Omega}.
\end{align}
That is, taking a number of copies of a bosonic state multiplies the number of modes, and the second index labels the copy. The state is symmetric with respect to particles within each mode by construction.
To symmetrize among the copies, we project each group of respective modes onto its symmetric combination
\begin{align}
b_{(i,1)} = \frac{a_{(i, 1)} + \ldots + a_{(i, k)} }{\sqrt{k}},\label{eq:sym_modes}
\end{align}
where the remaining combinations $b_{(i,j)}$, $j\neq 1$, are chosen to be pairwise orthogonal.
This can be realized with linear optics, as a unitary rotation of modes.
In particular, we may employ the Fourier transform (i.e. $\vec{b}_i = \mathcal{F} \vec{a}_i$ for each group of modes) and keep only the constant (zero-frequency) term.
Inverting the Fourier transform, each original mode can be expressed as a linear combination of the $b_{(i,j)}$, in which the weight of $b_{(i,1)}$ is always $1/\sqrt{k}$.
Consequently,
\begin{align}
&f^\dagger(a_{(1,j)}, \ldots, a_{(d,j)})\label{eq:symmetrization_of_variables}\\
= &f^\dagger \left(\tfrac{1}{\sqrt{k}} b_{(1,1)} + \mathcal{O}, \ldots, \tfrac{1}{\sqrt{k}} b_{(d,1)} + \mathcal{O} \right)\\
= & k^{-n/2} f^\dagger(b_{(1,1)}, \ldots, b_{(d,1)}) + \mathcal{O},
\end{align}
where by $\mathcal{O}$ we denote terms containing at least one $b_{(i,j\neq 1)}$.
Thus, by using \eqref{eq:symmetrization_of_variables} for every component of \eqref{eq:fkmanymodes} we get
\begin{align}
k^{-kn/2} f^{\dagger k}(b_{(1,1)}, \ldots, b_{(d,1)}) + \mathcal{O}.
\end{align}
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.3\textwidth]{figs/article/ffdag/fpower}
\caption{Example of an experimental setup for $d=3$ modes (and $\mathcal{F}$ operators) and $k=3$ copies (and outcome channels per operator).}
\label{fig:fpower}
\end{figure}
Consequently, we have one more interpretation of $f^{\dagger k}\ket{\Omega}$:
it is the state obtained with the following recipe, pictured in Fig.~\ref{fig:fpower}:
\begin{itemize}
\item Create $k$ copies of an $n$-photon state.
\item Perform interference on each group of respective modes.
\item Postselect on the events in which, for each group of modes, no photon was detected in any output mode other than the first.
\end{itemize}
The probability of success is
\begin{equation}
\frac{\bra{\Omega} f^k f^{\dagger k} \ket{\Omega}}{k^{kn}}
\leq \frac{(kn)!}{(n!)^k k^{kn}}
\approx \sqrt{k}\, (2 \pi n)^{(1-k)/2},
\end{equation}
where the approximation is due to Stirling's formula for $(kn)!$ and $n!$.
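The quality of this approximation (note the $\sqrt{k}$ prefactor) can be gauged by comparing the exact bound with Stirling's formula for a few small cases, e.g. in the following quick sketch:

```python
import math

def bound(n, k):
    """The exact upper bound (kn)! / ((n!)^k k^(kn))."""
    return math.factorial(k * n) / (math.factorial(n)**k * k**(k * n))

def stirling(n, k):
    """Stirling-formula approximation of the same quantity."""
    return math.sqrt(k) * (2 * math.pi * n)**((1 - k) / 2)

print(bound(1, 2))   # 0.5: the Hong-Ou-Mandel case
for n in (2, 5, 10, 20):
    print(n, bound(n, 2), stirling(n, 2))
```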
That is, the invariant $\bra{\Omega} f^k f^{\dagger k} \ket{\Omega}$ can be measured experimentally, as the statistics of no-click events in the detectors, in the described setting.
For the simplest case of $n=1$, $d=1$ and $k=2$, the Fourier transform becomes%
\begin{align}
\mathcal{F} =
\left[%
\begin{matrix}
\tfrac{1}{\sqrt{2}} & \tfrac{1}{\sqrt{2}}\\
- \tfrac{1}{\sqrt{2}} & \tfrac{1}{\sqrt{2}}
\end{matrix}
\right].
\end{align}
and we get Hong-Ou-Mandel interference with postselection, allowing us to produce the state $\ket{2,0}_M$ (two photons in one mode) from $\ket{1,1}_M$ (two photons in two modes), with $50\%$ postselection efficiency.
Moreover, a similar experimental scheme can be used to produce states of the form%
\begin{align}
f_1^\dagger \cdots f_k^\dagger \ket{\Omega},
\end{align}
where all $f_i^\dagger \ket{\Omega}$ are states of a fixed number of photons (perhaps different for each $i$).
The success rate is%
\begin{align}
\frac{\bra{\Omega} f_k \cdots f_1 f_1^\dagger \cdots f_k^\dagger \ket{\Omega}}{k^{n_1 + \cdots + n_k}}.
\end{align}
This follows directly from \eqref{eq:symmetrization_of_variables} applied to a product of functions.
\subsection{Schwinger representation of symmetric operators}\label{app:schwinger}
Below we provide technical calculations to show that \eqref{eq:pauli_sym} and \eqref{eq:schwinger_form} are the same on permutation-symmetric states, i.e.
\begin{gather*}
\bra{\psi}
\sum_{perm} (\mathbbm{I})^{\otimes n_{I}}
\otimes (\sigma^x)^{\otimes n_x}
\otimes (\sigma^y)^{\otimes n_y}
\otimes (\sigma^z)^{\otimes n_z}
\ket{\psi}\\
=\bra{\Omega} f
: \left( a^\dagger a + b^\dagger b \right)^{n_I}
\left( a^\dagger b + b^\dagger a \right)^{n_x}\\
\times \left( - i a^\dagger b + i b^\dagger a \right)^{n_y}
\left( a^\dagger a - b^\dagger b\right)^{n_z} :
f^\dagger \ket{\Omega}.
\end{gather*}
We prove a general variant of it, valid for qudits.
\subsubsection{Auxiliary notation}
Let us introduce the following notation:
\begin{align}
a_\mu^\dagger &=
\frac{1}{\sqrt{n+1}}
\sum_{i=0}^{n} \ket{\mu}_i\label{eq:adag_alt_def}\\
a_\mu &=
\frac{1}{\sqrt{n}}
\sum_{i=0}^{n-1} \bra{\mu}_i,
\end{align}
where $\ket{\mu}_i$ means {\it insert $\ket{\mu}$ between the $i$-th and $(i+1)$-th particle}, whereas $\bra{\mu}_i$ projects the $i$-th particle onto $\bra{\mu}$ and removes it.
Here $n$ is the total number of particles in the state the operator acts on.
We will show that this notation is consistent, i.e. that the left hand sides of \eqref{eq:adag_alt_def} act like creation and annihilation operators, respectively.
Note, however, that the right hand sides can be applied to any state, not only a permutation-symmetric one.
For example:
\begin{align}
&\left( \sum_{i=0}^2 \ket{2}_i \right) \ket{01}_P\\
&= \left( \ket{2}_0 + \ket{2}_1 + \ket{2}_2 \right) \ket{01}_P\\
&= \ket{201}_P + \ket{021}_P + \ket{012}_P
\end{align}
and
\begin{align}
&\left( \sum_{i=0}^2 \bra{2}_i \right) \ket{201}_P\\
&= \left( \bra{2}_0 + \bra{2}_1 + \bra{2}_2 \right) \ket{201}_P\\
&= \braket{2}{2} \ket{01}_P + \braket{2}{0} \ket{21}_P + \braket{2}{1} \ket{20}_P\\
&= \ket{01}_P.
\end{align}
A straightforward check on $n$-particle permutation-symmetric (Dicke) states shows that this (abuse of) notation makes sense.
That is, let us check that:
\begin{align}
a_\mu^\dagger {\tilde a}^\dagger_{\vec{n}} \ket{\Omega}
&= \left( \frac{1}{\sqrt{n+1}} \sum_{i=0}^n \ket{\mu}_i \right) \ket{\vec{n}},\\
a_\mu {\tilde a}^\dagger_{\vec{n}} \ket{\Omega}
&= \left( \frac{1}{\sqrt{n}} \sum_{i=0}^{n-1} \bra{\mu}_i \right) \ket{\vec{n}}.
\end{align}
We proceed by writing the state in the particle representation, as in \eqref{eq:fock_in_particle_representation}.
For convenience, and without loss of generality, let us pick $\mu=1$,%
\begin{align}
&\sqrt{n+1} a_1^\dagger \sqrt{\frac{n!}{n_1! \cdots n_d!}}
\ket{n_1, \cdots, n_d}\\
&= \left( \sum_{i=0}^{n} \ket{1}_i \right)
\left( \ket{1}^{n_1}_P \cdots \ket{d}^{n_d}_P
+ \text{perm.} \right)\\
&= (n_1 + 1)
\left( \ket{1}^{n_1+1}_P \cdots \ket{d}^{n_d}_P
+ \text{perm.} \right)\\
&= (n_1 + 1) \sqrt{\frac{(n+1)!}{(n_1+1)! \cdots n_d!}}
\ket{n_1+1, \cdots, n_d},
\end{align}
where \emph{perm.} means inequivalent permutations.
Factor $(n_1+1)$ in the third line comes from%
\begin{equation}
(n + 1) \frac{n!}{n_1! \cdots n_d!}
\big/ \frac{(n+1)!}{(n_1+1)! \cdots n_d!},
\end{equation}
that is, by inserting the particle at $n+1$ positions and comparing the numbers of inequivalent terms in the permutation sums for the initial and final states.
And analogously for annihilation:%
\begin{align}
&\sqrt{n} a_1 \sqrt{\frac{n!}{n_1! \cdots n_d!}}
\ket{n_1, \cdots, n_d}\\
&= \left( \sum_{i=0}^{n-1} \bra{1}_i \right)
\left( \ket{1}^{n_1}_P \cdots \ket{d}^{n_d}_P
+ \text{perm.} \right)\\
&= n
\left( \ket{1}^{n_1-1}_P \cdots \ket{d}^{n_d}_P
+ \text{perm.} \right)\\
&= n \sqrt{\frac{(n-1)!}{(n_1-1)! \cdots n_d!}}
\ket{n_1-1, \cdots, n_d}.
\end{align}
This time $n$ in the third line comes from%
\begin{equation}
n_1 \frac{n!}{n_1! \cdots n_d!}
\big/ \frac{(n-1)!}{(n_1-1)! \cdots n_d!}.
\end{equation}
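The combinatorial prefactors above can be checked numerically. The following sketch (the helper name \texttt{norm\_factor} is ours) verifies that the counting of inequivalent permutations reproduces the standard bosonic ladder coefficient $\sqrt{n_1+1}$:

```python
import math
import itertools

def norm_factor(ns):
    """sqrt(n!/(n_1! ... n_d!)): normalization of the number state |n_1,...,n_d>."""
    n = sum(ns)
    r = math.factorial(n)
    for k in ns:
        r //= math.factorial(k)
    return math.sqrt(r)

# From the derivation: sqrt(n+1) a_1^dag N(vec n)|vec n> = (n_1+1) N(vec n')|vec n'>,
# so the effective coefficient must equal the usual sqrt(n+1)*sqrt(n_1+1).
for ns in itertools.product(range(4), repeat=3):
    n, n1 = sum(ns), ns[0]
    ns_up = (ns[0] + 1,) + ns[1:]
    lhs = (n1 + 1) * norm_factor(ns_up) / norm_factor(ns)
    assert abs(lhs - math.sqrt((n + 1) * (n1 + 1))) < 1e-9
```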
\subsubsection{Proof}
We start the proof with the following observation.
When we remove a particle from a symmetric state,
the result does not depend on which particle we remove
(the state of all the other particles is always permutation symmetric).
That is%
\begin{align}
\bra{\mu}_i \ket{\psi}
= \bra{\mu}_j \ket{\psi}
= \frac{1}{n} \left( \sum_{i=0}^{n-1} \bra{\mu}_i \right) \ket{\psi},\label{eq:annihilation_in_one_place}
\end{align}
where the last equality is a consequence of the former (for an $n$-particle state).
Consequently, when acting on an $n$-particle symmetric state,
we can write a product of subsequent annihilation and creation operators as a single sum:
\begin{align}
&a_{\mu_1}^\dagger \cdots a_{\mu_k}^\dagger
a_{\nu_k} \cdots a_{\nu_1} \ket{\psi}\\
= &\frac{(n-k)!}{n!}
\sum_{i_1,\ldots,i_k} \sum_{j_1,\ldots,j_k}\\
&\Big(
\ket{\mu_1}_{i_1} \cdots \ket{\mu_k}_{i_k}
\bra{\nu_k}_{j_k} \cdots \bra{\nu_1}_{j_1}
\Big) \ket{\psi}\label{eq:aiaj2aiai}\\
= &\left( \sum_{i_1,\ldots,i_k}
\ket{\mu_1}_{i_1} \cdots \ket{\mu_k}_{i_k}
\bra{\nu_k}_{i_k} \cdots \bra{\nu_1}_{i_1}
\right) \ket{\psi},
\end{align}
where instead of the sum over $j_1,\ldots,j_k$ we put $j_p=i_p$ using \eqref{eq:annihilation_in_one_place}.
Note that as creation and annihilation operations add and subtract particles (respectively), the indices in a product refer to different sets of particles, and the summations need to be carried out iteratively.
That is, summation over $j_p$ goes from $j_p=0$ to $n-p$.
We need to show one more thing:%
\begin{align}
&\left( \sum_{i_1,\ldots,i_k}
\ket{\mu_1}_{i_1} \cdots \ket{\mu_k}_{i_k}
\bra{\nu_k}_{i_k} \cdots \bra{\nu_1}_{i_1}
\right) \ket{\psi}
\label{eq:changing2fixed_order}\\
=
&\left( \sum_{\text{p.d. }l_1,\ldots,l_k}
\ket{\mu_1}_{l_1}\bra{\nu_1}_{l_1}
\cdots \ket{\mu_k}_{l_k}\bra{\nu_k}_{l_k}
\right) \ket{\psi},
\end{align}
where by \emph{p.d.} we mean pairwise different.
In fact the only thing we need to do is to relabel each component of the sum.
In the first line $i_p\in\{0,\ldots, n-p\}$, while in the second $l_p\in\{0,\ldots, n-1\}$ with repetitions disallowed.
If in the first line we relabel in such a way that we keep track of the particles removed by $\bra{\nu_p}_{i_p}$, then we obtain the labels $l_p$.
When we combine \eqref{eq:aiaj2aiai} with \eqref{eq:changing2fixed_order} we get an important relation%
\begin{align}
&a_{\mu_1}^\dagger \cdots a_{\mu_k}^\dagger
a_{\nu_k} \cdots a_{\nu_1} \ket{\psi}
\label{eq:normal_order_and_symmetric_operators}\\
=&\left( \sum_{\text{p.d. }l_1,\ldots,l_k}
\ket{\mu_1}_{l_1}\bra{\nu_1}_{l_1}
\cdots \ket{\mu_k}_{l_k}\bra{\nu_k}_{l_k}
\right) \ket{\psi}.
\end{align}
After showing relation \eqref{eq:normal_order_and_symmetric_operators}, we proceed to the main part of the proof.
Any symmetrized product of matrices is multilinear in their matrix entries; each term is labeled by a sequence $((\mu_1, \nu_1),\ldots,(\mu_n,\nu_n))$, where each $\mu_i$ (and $\nu_i$) is in $\{0,\ldots, d-1\}$, that is%
\begin{equation}
\sum_{\vec\imath \in \sigma(\{1,\ldots,n\})}
\ket{\mu_1}_{i_1}\bra{\nu_1}_{i_1}\ldots \ket{\mu_n}_{i_n}\bra{\nu_n}_{i_n}.
\end{equation}
So we need to show that the sum over distinct matrix elements gives the corresponding normally ordered operators.
When we apply \eqref{eq:normal_order_and_symmetric_operators}, we get
\begin{equation}
: a_{\mu_1}^\dagger a_{\nu_1} \ldots a_{\mu_n}^\dagger a_{\nu_n} :,
\label{eq:all-normally}
\end{equation}
which completes the proof.
Bear in mind that in \eqref{eq:all-normally} we get $n$ creation and annihilation operators, regardless of the multi-particle operator we want to use.
When we use only a $k$-particle operator, the formula can be simplified, as we show in the examples.
\subsubsection{Examples}
Below, for clarity, we will work with qubits and use $a$ and $b$ for the annihilation operators of $\ket{0}$ and $\ket{1}$, respectively.
First, we see that%
\begin{align}
\sum_{i=1}^n \sigma^x_i &= a^\dagger b + b^\dagger a\\
\sum_{i=1}^n \sigma^y_i &= -ia^\dagger b + i b^\dagger a\\
\sum_{i=1}^n \sigma^z_i &= a^\dagger a - b^\dagger b,
\end{align}
which is the standard Schwinger representation of operators for symmetric states, where we directly applied \eqref{eq:aiaj2aiai}, e.g. for symmetrized $\sigma^y$%
\begin{align}
\sum_{j=1}^n \sigma^y_j &=
\sum_{j=1}^n \left( -i \ket{0}_j\bra{1}_j + i \ket{1}_j\bra{0}_j \right)\\
&= -ia^\dagger b + i b^\dagger a.
\end{align}
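As a numerical sanity check (not part of the original derivation), one can restrict the collective operator $\sum_j \sigma^y_j$ to the Dicke basis and compare it with its Schwinger form $-i a^\dagger b + i b^\dagger a$. A possible NumPy sketch, with the Dicke state $\ket{k}$ carrying $k$ excitations, i.e.\ $(n_a, n_b) = (n-k, k)$:

```python
import numpy as np
from itertools import combinations
from math import sqrt

n = 3  # number of qubits

def dicke(n, k):
    """Symmetric state with k qubits in |1>, as a 2^n-dimensional vector."""
    v = np.zeros(2**n)
    for ones in combinations(range(n), k):
        v[sum(1 << i for i in ones)] = 1.0
    return v / np.linalg.norm(v)

D = np.column_stack([dicke(n, k) for k in range(n + 1)])

# full-space collective operator sum_j sigma^y_j
sy = np.array([[0, -1j], [1j, 0]])
Sy = sum(np.kron(np.kron(np.eye(2**j), sy), np.eye(2**(n - 1 - j)))
         for j in range(n))
Sy_sym = D.conj().T @ Sy @ D          # restriction to the symmetric subspace

# Schwinger form -i a^dag b + i b^dag a on |n_a = n-k, n_b = k>
schwinger = np.zeros((n + 1, n + 1), dtype=complex)
for k in range(n + 1):
    if k < n:   # b^dag a : |k> -> |k+1>, amplitude sqrt((n-k)(k+1))
        schwinger[k + 1, k] = 1j * sqrt((n - k) * (k + 1))
    if k > 0:   # a^dag b : |k> -> |k-1>, amplitude sqrt(k(n-k+1))
        schwinger[k - 1, k] = -1j * sqrt(k * (n - k + 1))

assert np.allclose(Sy_sym, schwinger)
```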
Now, let us look at the symmetrized product of two operators, e.g. $\sigma^x_i$ and $\sigma^z_j$:%
\begin{align}
&\sum_{i \neq j} \sigma^x_i \otimes \sigma^z_j\label{eq:schwinger_square}\\
&= \sum_{i\neq j}
\left( \ket{0}_i \bra{1}_i + \ket{1}_i \bra{0}_i \right)
\left( \ket{0}_j \bra{0}_j - \ket{1}_j \bra{1}_j \right)\\
&= \sum_{i\neq j}
\left(
\ket{0}_i \bra{1}_i \ket{0}_j \bra{0}_j
- \ket{0}_i \bra{1}_i \ket{1}_j \bra{1}_j \right.\\
&\phantom{=\sum_{i\neq j}(}\left.
+ \ket{1}_i \bra{0}_i \ket{0}_j \bra{0}_j
- \ket{1}_i \bra{0}_i \ket{1}_j \bra{1}_j
\right)\\
&= \left(
a^{\dagger 2} a b
- a^\dagger b^\dagger b^2
+ a^\dagger b^\dagger a^2
- b^{\dagger 2} a b
\right)\\
&= :
\left( a^\dagger b + b^\dagger a \right)
\left( a^\dagger a - b^\dagger b \right) :
\end{align}
where we applied \eqref{eq:normal_order_and_symmetric_operators} to turn the summation into creation and annihilation operators.
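The two-operator identity \eqref{eq:schwinger_square} can be checked in the same way: build $\sum_{i\neq j}\sigma^x_i\otimes\sigma^z_j$ in the full $2^n$-dimensional space, restrict it to the Dicke basis, and compare with the bosonic expression evaluated on the total-photon-number-$n$ sector. A possible NumPy sketch (with a per-mode Fock cutoff chosen large enough to avoid truncation artifacts):

```python
import numpy as np
from itertools import combinations

n = 3                       # qubits
dim = n + 2                 # per-mode Fock cutoff, safe for these monomials
ann = np.diag(np.sqrt(np.arange(1, dim)), 1)   # single-mode annihilation
a = np.kron(ann, np.eye(dim))                  # mode of |0>
b = np.kron(np.eye(dim), ann)                  # mode of |1>
ad, bd = a.T, b.T

rhs = ad @ ad @ a @ b - ad @ bd @ b @ b + ad @ bd @ a @ a - bd @ bd @ a @ b

# restrict to the total-photon-number-n sector, basis |n-k, k>, k = 0..n
idx = [(n - k) * dim + k for k in range(n + 1)]
rhs_sym = rhs[np.ix_(idx, idx)]

# left-hand side: sum_{i != j} sigma^x_i sigma^z_j, restricted to Dicke states
sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])
def embed(op, site):
    return np.kron(np.kron(np.eye(2**site), op), np.eye(2**(n - 1 - site)))
lhs = sum(embed(sx, i) @ embed(sz, j)
          for i in range(n) for j in range(n) if i != j)

def dicke(n, k):
    v = np.zeros(2**n)
    for ones in combinations(range(n), k):
        v[sum(1 << i for i in ones)] = 1.0
    return v / np.linalg.norm(v)
D = np.column_stack([dicke(n, k) for k in range(n + 1)])

assert np.allclose(D.T @ lhs @ D, rhs_sym)
```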
\section{Singlet space for photons and information protection}
\label{s:singlet-space}
In this section we study the singlet subspace implemented with bosons (as in Sec.~\ref{s:four-particle-singlet}), building on our work \cite{Migdal2011dfs}.
We prove that for a system of qudits subjected to collective decoherence in the form of perfectly correlated random SU($d$) unitaries, quantum superpositions stored in the decoherence-free subspace are fully immune to the removal of one particle.
This provides a feasible scheme to protect quantum information encoded in the polarization state of a sequence of photons against both collective depolarization and one photon loss.
We provide a scheme for experimental demonstration with photon quadruplets using currently available technology.
We consider the DFS for an ensemble of $n$ qudits, i.e.\ elementary $d$-level systems, composed of states $\ket{\psi}$ that are invariant with respect to an arbitrary perfectly correlated SU$(d)$ transformation:
\begin{equation}
V^{\otimes n} \ket{\psi} = \ket{\psi}, \qquad V \in \text{SU}(d).
\label{Eq:invariance}
\end{equation}
Note that in this context we consider \emph{distinguishable} particles.
That is, one can implement $n$ particles with $n$ photons in $nd$ modes.
In the context of multi-photon states, singlet states as defined above are states invariant with respect to
\begin{equation}
U = V^{\otimes n}.
\end{equation}
We show that this DFS features an additional degree of robustness, namely that the stored quantum information is immune to the loss of one of the qudits, regardless of the encoding. This result, specialized to the polarization state of single photons for which $d=2$, offers {\em combined} protection against two common optical decoherence mechanisms: photon loss \cite{Wasilewski2007,Lu2008} due to reflections, scattering, residual absorption, etc.\ as well as collective depolarization that occurs inevitably in optical fibers used for long-haul transmission \cite{Bartlett2007,banaszek2004experimental,Bourennane2004}. Consequently, we provide here rigorous foundations to a speculation presented in Ref.~\cite{Boileau2004} that DFS-based quantum cryptography
can be made tolerant also to photon loss. It is worth noting that another physical realization of the qubit case can be also an ensemble of spin-$\frac{1}{2}$ particles \cite{Viola2001} coupled identically to a varying magnetic field.
The section is organized as follows. First, in Sec.~\ref{Sec:SingletQubits} we briefly review the geometry of the singlet subspace for an ensemble of qubits and we explicitly show the robustness of the four qubit DFS, which spans the logical qubit space. This particular case leads us to a proposal for a proof-of-principle experiment based on currently available photonic technologies that demonstrates the robustness of DFS encoding, presented in Sec.~\ref{Sec:ExperimentalScheme}. The general proof for an arbitrary $d$ that a quantum superposition encoded in an $\text{SU}(d)$ DFS remains immune against the loss of one particle is described in Sec.~\ref{Sec:General}.
\subsection{Example with logical qubits}
\label{Sec:SingletQubits}
Because of two relevant physical realizations using photons and spin-$1/2$ particles, we will first discuss the qubit case with $d=2$. The complete Hilbert space of an ensemble of $n$ qubits, each described by a two-dimensional spin-$1/2$ space $\mathcal{H}_{1/2}$, can be subjected to Clebsch-Gordan decomposition \cite{Dicke1954}
\begin{equation}
( \mathcal{H}_{1/2} )^{\otimes n} = \bigoplus_{j = (n \bmod 2)/2 }^{n/2} \mathbbm{C}^{K^j_n} \otimes \mathcal{H}_{j}\label{eq:clebsch},
\end{equation}
where the direct sum is taken in steps of one and
$K^j_n$ are multiplicities of spin-$j$ Hilbert spaces $\mathcal{H}_{j}$, given explicitly by
\begin{equation}
K^j_n = \frac{2j+1}{n/2+j+1} \binom{n}{n/2+j} \label{eq:catalan}.
\end{equation}
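The multiplicities \eqref{eq:catalan} can be checked directly; in particular, the block dimensions must add up to $2^n$, and $K^0_4 = K^{1/2}_3 = 2$. A short sketch (the function name \texttt{K} is ours; we use $j_2 = 2j$ to stay in integer arithmetic):

```python
from math import comb

def K(j2, n):
    """Multiplicity K^j_n of the spin-j block in Eq. (eq:catalan); j2 = 2j."""
    # (2j+1)/(n/2+j+1) * binom(n, n/2+j), written over integers
    return (j2 + 1) * comb(n, (n + j2) // 2) * 2 // (n + j2 + 2)

for n in range(1, 11):
    # j runs from (n mod 2)/2 up to n/2 in unit steps, i.e. j2 = n%2, n%2+2, ...
    total = sum((j2 + 1) * K(j2, n) for j2 in range(n % 2, n + 1, 2))
    assert total == 2**n          # dimensions add up to the full Hilbert space

assert K(0, 4) == 2   # two-dimensional singlet (DFS) space for 4 qubits
assert K(1, 3) == 2   # doublet multiplicity for 3 qubits equals K^0_4
```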
The action of $V^{\otimes n}$, where $V$ is any $\text{SU}(2)$ transformation, affects only $\mathcal{H}_{j}$ in Eq.~(\ref{eq:clebsch}), leaving $\mathbbm{C}^{K^j_n}$ unchanged. In particular, for an even number $n$ of qubits forming the ensemble, the {\em singlet subspace} corresponding to $j=0$ is free from decoherence. Furthermore, removing one particle from that ensemble maps any initial state from the singlet subspace onto a certain state from the {\em doublet subspace} $\mathbbm{C}^{K^{1/2}_{n-1}} \otimes \mathcal{H}_{1/2}$. Because $K^{1/2}_{n-1}=K^0_n$, it is plausible that the quantum superposition will end up entirely in the decoherence-free subsystem $\mathbbm{C}^{K^{1/2}_{n-1}}$ where it will remain protected from collective depolarization.
\begin{figure}[h]
\centering
\includegraphics[width=0.7\textwidth]{figs/article/dfs_photon_loss/4qubits}
\caption{Diagrams depicting three non-equivalent products of two-qubit singlet states defined in Eq.~(\protect\ref{eq:dfs4}). The qubits are represented as dots with connections identifying pairs that form singlet states.}
\label{fig:spps}
\end{figure}
The simplest non-trivial case is $n=4$ physical qubits encoding one logical qubit.
Let us consider three states from the four-qubit DFS, defined as products of singlets in the same way as in \eqref{eq:pairs_of_pairs},
\begin{align}
\ket{\Xi_1} &= \ket{\Psi^-}_{12} \ket{\Psi^-}_{34}, \nonumber\\
\ket{\Xi_2} &= \ket{\Psi^-}_{13} \ket{\Psi^-}_{42}, \label{eq:dfs4}\\
\ket{\Xi_3} &= \ket{\Psi^-}_{14} \ket{\Psi^-}_{23}, \nonumber
\end{align}
where $\ket{\Psi^-}_{ij}= (\ket{01}_{ij}-\ket{10}_{ij})/\sqrt{2}$ is the singlet state of qubits $i$ and $j$. These states, shown schematically in Fig.~\ref{fig:spps}, form an overcomplete set in the DFS. For concreteness, let us select $\ket{\Xi_1}$ and $\ket{\Xi_3}$ as a non-orthogonal basis.
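These properties are easy to verify numerically. The sketch below (helper names are ours) constructs $\ket{\Xi_1}$, $\ket{\Xi_2}$, $\ket{\Xi_3}$, checks that their pairwise overlaps equal $-\frac{1}{2}$, and confirms invariance under a random collective $V^{\otimes 4}$ with $V \in \text{SU}(2)$:

```python
import numpy as np

def ket(bits):
    v = np.array([1.0])
    for b in bits:
        v = np.kron(v, np.eye(2)[b])
    return v

def singlet_product(pairs):
    """Product of two-qubit singlets (|01>-|10>)/sqrt(2) on the given pairs."""
    v = np.zeros(16)
    for b1 in (0, 1):
        for b2 in (0, 1):
            bits = [0] * 4
            (i, j), (k, l) = pairs
            bits[i], bits[j] = b1, 1 - b1
            bits[k], bits[l] = b2, 1 - b2
            v += (-1)**(b1 + b2) * ket(bits)
    return v / 2.0

Xi1 = singlet_product([(0, 1), (2, 3)])
Xi2 = singlet_product([(0, 2), (3, 1)])
Xi3 = singlet_product([(0, 3), (1, 2)])

# pairwise overlaps are -1/2: an overcomplete "trine" in the 2d DFS
for u, v in [(Xi1, Xi2), (Xi2, Xi3), (Xi1, Xi3)]:
    assert abs(u @ v + 0.5) < 1e-12

# invariance under V^{x 4} for a random SU(2) element
rng = np.random.default_rng(0)
g = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
V, _ = np.linalg.qr(g)
V = V / np.sqrt(np.linalg.det(V))      # force det V = 1
V4 = np.kron(np.kron(V, V), np.kron(V, V))
for Xi in (Xi1, Xi2, Xi3):
    assert np.allclose(V4 @ Xi, Xi)
```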
Any state of the logical DFS qubit can be written as a superposition
\begin{align}
\ket{\psi} = \alpha \ket{\Xi_1}+ \beta \ket{\Xi_3},
\label{Eq:Psifourqubit}
\end{align}
where $\alpha$ and $\beta$ are complex amplitudes. Without loss of generality we can assume that the first physical qubit has been lost. The remaining three qubits are described by an equally weighted statistical mixture of two states:
\begin{align}
\ket{\psi^{(0)}}_{\bar{1}} &= \alpha \ket{1}_{2} \ket{\Psi^-}_{34} + \beta \ket{\Psi^-}_{23} \ket{1}_{4} \nonumber \\
\ket{\psi^{(1)}}_{\bar{1}} &= \alpha \ket{0}_{2} \ket{\Psi^-}_{34} + \beta \ket{\Psi^-}_{23} \ket{0}_{4},\label{eq:singletowyrozpisany}
\end{align}
where $\ket{\cdot}_{\bar{1}}$ denotes the state of all qubits but the first one. It is easy to see that a collective transformation
$V^{\otimes 3}$ leaves the statistical mixture $\frac{1}{2} \bigl( \ket{\psi^{(0)}}_{\bar{1}} \bra{\psi^{(0)}} +
\ket{\psi^{(1)}}_{\bar{1}} \bra{\psi^{(1)}}\bigr)$ intact.
After the loss of the first particle, the initial four-qubit state from Eq.~(\ref{Eq:Psifourqubit}) can be recovered through the following procedure.
First, one needs to measure in a non-destructive way the $z$ component of the total pseudospin operator $\sigma^{z}_{2} + \sigma^{z}_{3} + \sigma^{z}_{4}$, where $\sigma^{z} = \ket{0}\bra{0} - \ket{1}\bra{1}$, in order to discriminate $\ket{\psi^{(0)}}$ from $\ket{\psi^{(1)}}$. If the result corresponding to $\ket{\psi^{(1)}}$ is obtained, we apply a collective rotation $(\sigma^{x})^{\otimes 3}$, where $\sigma^x = \ket{0}\bra{1} + \ket{1}\bra{0}$. This yields the state $\ket{\psi^{(0)}}_{\bar{1}}$. In the second step, one replaces the lost qubit with a new one prepared in a state $\ket{+}_{1} = \frac{1}{\sqrt{2}}( \ket{0}_1 + \ket{1}_1 )$ and applies a controlled rotation which restores the original state $\ket{\psi}$:
\begin{equation}
\bigl( \ket{0}_{1} \bra{0} \otimes {\openone}^{\otimes 3} + \ket{1}_{1} \bra{1} \otimes {(\sigma^x)}^{\otimes 3}\bigr)
\bigl( \ket{+}_{1} \ket{\psi^{(0)}}_{\bar{1}}\bigr) = \ket{\psi}
\end{equation}
Note that this rotation can be realized as a sequence of three C-NOT gates.
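The whole recovery procedure can be simulated end to end. A minimal sketch (amplitudes $\alpha, \beta$ chosen arbitrarily) that projects out the first qubit, flips the $\ket{\psi^{(1)}}$ branch with $(\sigma^x)^{\otimes 3}$, reattaches a fresh qubit in $\ket{+}$, and verifies that the original state is restored:

```python
import numpy as np

def basis(s):                    # '0101' -> computational basis vector
    v = np.zeros(16); v[int(s, 2)] = 1.0; return v

# four-qubit DFS states of Eq. (eq:dfs4), qubit 1 as the most significant bit
Xi1 = (basis('0101') - basis('0110') - basis('1001') + basis('1010')) / 2
Xi3 = (basis('0011') - basis('0101') - basis('1010') + basis('1100')) / 2

alpha, beta = 0.6, 0.8j
psi = alpha * Xi1 + beta * Xi3
psi = psi / np.linalg.norm(psi)

# losing qubit 1 = an equal mixture of the two projected branches
t = psi.reshape(2, 8)
psi0 = t[0] / np.linalg.norm(t[0])       # branch psi^(0)
psi1 = t[1] / np.linalg.norm(t[1])       # branch psi^(1)

sx = np.array([[0., 1.], [1., 0.]])
X3 = np.kron(sx, np.kron(sx, sx))        # (sigma^x)^{x 3}
assert np.allclose(X3 @ psi1, psi0)      # flip maps psi^(1) onto psi^(0)

# attach |+> and apply the controlled flip of the main text
plus = np.array([1., 1.]) / np.sqrt(2)
P0, P1 = np.diag([1., 0.]), np.diag([0., 1.])
C = np.kron(P0, np.eye(8)) + np.kron(P1, X3)
recovered = C @ np.kron(plus, psi0)

overlap = abs(np.vdot(recovered, psi))
assert overlap > 1 - 1e-9                # original logical state restored
```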
The robustness of DFS to particle loss can be intuitively understood in the following way. DFS states owe their invariance with respect to collective unitary transformations to a very rigid structure. In fact, if we write a DFS state as a superposition in the computational basis for individual qubits, the state of one qubit can be determined unambiguously from the states of the remaining ones. This suggests that the loss of one particle does not destroy any information. Further, it is always possible to repair the state, as there is only one way to fit the lost particle such that the singlet symmetry is recovered.
\subsection{Experimental scheme}
\label{Sec:ExperimentalScheme}
We will now present a proposal for a feasible experiment that demonstrates the robustness of DFS encoding using
photon quadruplets that can be generated in the process of parametric down-conversion \cite{Bourennane2004,Weinfurter2001,Gong2008}.
The basis states $\ket{0}$ and $\ket{1}$ correspond in this case to horizontal and vertical polarizations of individual photons.
Let us consider four-photon states $\ket{\Xi_k}$, $k=1,2,3$, defined in Eq.~(\ref{eq:dfs4}) as well as their orthogonal complements in the two-dimensional DFS, which we will denote as $\ket{\Xi_k^\perp}$.
The index $k$ corresponds to three non-equivalent orderings of the photons and it can be changed by suitable rerouting of the photons. As demonstrated in \cite{Bourennane2004}, the states $\ket{\Xi_1}$ and $\ket{\Xi_1^\perp}$ can be discriminated unambiguously by detecting polarizations in the horizontal-vertical basis $\ket{0}, \ket{1}$ for photons $12$ and in the diagonal basis $(\ket{0} \pm \ket{1})/\sqrt{2}$ for photons $34$. Restricted to the DFS subspace, this strategy yields the standard projective measurement.
It is easy to check that the above individual measurement no longer works if one of the photons is missing. It turns out that this problem can be solved by resorting to collective measurements. Suppose that we interfere photon pairs $12$ and $34$ on two separate balanced beam splitters, playing the role of linear-optics Bell state analyzers \cite{Braunstein1995}. The state $\ket{\Xi_1}$ will yield exactly one photon in each output port of each beam splitter. In contrast, because the orthogonal state $\ket{\Xi_1^\perp}$ can be written as \cite{Kempe2001}:
\begin{equation}
\ket{\Xi_1^\perp} = \frac{1}{\sqrt{3}}\left(
\ket{00}_{12}\ket{11}_{34}
+ \ket{11}_{12}\ket{00}_{34} - \ket{\Psi^+}_{12}\ket{\Psi^+}_{34} \right),
\end{equation}
where $\ket{\Psi^+}_{ij}= (\ket{01}_{ij}+\ket{10}_{ij})/\sqrt{2}$, it will always produce two photons at the same output port for each of the two beam splitters. If one photon is lost, the states $\ket{\Xi_1}$ and $\ket{\Xi_1^\perp}$ will still give distinguishable outcomes: registering two photons at a single output unambiguously heralds $\ket{\Xi_1^\perp}$, while registering a photon pair at two different outputs of the same beam splitter detects $\ket{\Xi_1}$. The third photon will emerge separately from the second beam splitter.
An interesting question is whether the scheme described above could be exploited for quantum key distribution.
The scalar products between any two of the states $\ket{\Xi_k}$ and $\ket{\Xi_l}$ with $k \neq l$ are equal to $\braket{\Xi_k}{\Xi_l} = -\frac{1}{2}$. In the Bloch representation of the two-dimensional DFS, they form a regular triangle inscribed into a great circle on the Bloch sphere,
constituting a so-called {\em trine} that warrants cryptographic security \cite{Boileau2004,Renes2004,Tabia2011}. To generate a key, the sender Alice could prepare photon quadruplets in one of randomly selected states $\ket{\Xi_1}$, $\ket{\Xi_2}$, or $\ket{\Xi_3}$. The ability to perform a projection onto any pair of orthogonal states $\ket{\Xi_k}, \ket{\Xi_k^\perp}$ would enable the receiving party Bob to tell, in the case when an outcome $\ket{\Xi_k^\perp}$ is obtained, which state has definitely not been prepared by Alice. Such correlations between Alice's preparations and Bob's outcomes can be distilled into a secure key.
We have shown that the projective measurement onto $\ket{\Xi_k}, \ket{\Xi_k^\perp}$ can be implemented in a way that tolerates the loss of one photon. In a cryptographic setting, the crucial issue is to ensure that an eavesdropper Eve does not map the state of intercepted photons outside the DFS, which may enable eavesdropping attacks beyond those already studied \cite{Boileau2004,Renes2004,Tabia2011}. To verify that this is not the case, Bob could in principle perform a full quantum state reconstruction on some of the transmissions, which however would be resource consuming. We conjecture that a sufficient strategy to detect such an attack would be: (i) to detect polarizations of photons emerging after the beam splitters; (ii) for a subset of transmissions to count directly received photons to ensure that no multiphoton states in individual input paths occur; (iii) for another subset of transmissions to apply before the beam splitters random and uncorrelated transformations $V \otimes V$ and $V' \otimes V'$ and check that states $\ket{\Xi_k}$ always yield the correct outcome when Bob used the matching basis for his measurement.
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{figs/article/dfs_photon_loss/switches}
\caption{An experimental scheme for loss-tolerant detection of a logical qubit encoded in four photons. The projection basis $\ket{\Xi_k}$, $\ket{\Xi_k^\perp}$, where $k=1,2,3$, is selected by a suitable rerouting of input photons. Pairs of photons are interfered on two balanced beam splitters and photon numbers are counted at their outputs. Combinations of outcomes for individual detectors that correspond to unambiguous identification of $\ket{\Xi_k}$ and $\ket{\Xi_k^\perp}$ are indicated with photon numbers in curly brackets. The ordering within both inner and outer brackets does not matter.}
\label{fig:measurement}
\end{figure}
\subsection{General proof}
\label{Sec:General}
The reasoning presented in Sec.~\ref{Sec:SingletQubits} can be generalized to any even number of $n > 4$ qubits by considering DFS states given by products of two-qubit singlet states. Such states form an overcomplete set in the DFS \cite{Lyons2008}, which enables one to follow directly the steps described for four qubits. The robustness of DFS encoding can be shown more generally for an ensemble of $n$ qudits, i.e.\ $d$-dimensional systems.
In this case, a DFS satisfying Eq.~\eqref{Eq:invariance} exists only when $n$ is a multiple of $d$, which follows from the structure of the Young tableaux for irreducible representations of tensor products of the $SU(d)$ group \cite{Jones1998groupsrepresentations}.
As before, for concreteness we will consider removal of the first qudit. Let us consider arbitrary two states $\ket{\psi}$ and $\ket{\Phi}$ from the DFS and expand them in the form analogous to Eq.~(\ref{eq:singletowyrozpisany}):
\begin{equation}
\ket{\psi} = \frac{1}{\sqrt{d}} \sum_{i=0}^{d-1} \ket{i}_1 \ket{\psi^{(i)}}_{\bar{1}},
\quad
\ket{\Phi} = \frac{1}{\sqrt{d}} \sum_{i=0}^{d-1} \ket{i}_1 \ket{\Phi^{(i)}}_{\bar{1}}
\label{eq:sud_singletowyrozpisany}
\end{equation}
where $\ket{i}_1 , i=0,\ldots,d-1$ is an orthonormal basis in the space of the first qudit, and
$\ket{\psi^{(i)}}_{\bar{1}} = \sqrt{d}\, {}_{1}\! \braket{i}{\psi}$ and $\ket{\Phi^{(i)}}_{\bar{1}} = \sqrt{d} \, {}_{1}\! \braket{i}{\Phi}$ are states of the remaining $n-1$ qudits. We will first show that the following general property holds:
\begin{equation}
{}_{\bar{1}} \! \braket{\Phi^{(i)}}{\psi^{(j)}}_{\bar{1}} = \delta_{ij} \braket{\Phi}{\psi}.
\label{Eq:SUd:dmuidmuj}
\end{equation}
As we will see, this property guarantees that the loss of one particle does not destroy the quantum information encoded in the DFS.
In order to show that for $i \neq j$ the states $\ket{\Phi^{(i)}}$ and $\ket{\psi^{(j)}}$ are orthogonal as implied by Eq.~\eqref{Eq:SUd:dmuidmuj}, let us consider the action of
a diagonal unitary operator $D^{\otimes n}$, where $D = \text{diag} (e^{i \phi_0}, \ldots, e^{i \phi_{d-1}} )$ with arbitrary phases $\phi_0, \ldots, \phi_{d-1}$ that sum up to zero. Invariance of $\ket{\Phi^{(i)}}_{\bar{1}}$ and $\ket{\psi^{(j)}}_{\bar{1}}$ under $D^{\otimes n}$ implies that in the basis formed by tensor products of states $\ket{0}, \cdots, \ket{d-1}$ they are composed only of terms that have exactly $n/d$ particles in each of these $d$ states. Consequently, projecting the first qudit on orthogonal states $\ket{i}_{1}$ and $\ket{j}_{1}$ leaves the remaining qudits in distinguishable states.
In order to verify the case when $i=j$ in Eq.~(\ref{Eq:SUd:dmuidmuj}) it is convenient to use the transformation of states $\ket{\psi^{(i)}}_{\bar{1}}$ under the action of $V^{\otimes (n-1)}$. In order to derive this transformation, let us rewrite the invariance condition from Eq.~\eqref{Eq:invariance} to the form
$V^\dagger \otimes \openone^{\otimes (n-1)} \ket{\psi} = \openone \otimes V^{\otimes (n-1)} \ket{\psi}$ and project the first qudit onto $ \sqrt{d} \, {}_{1} \! \bra{i}$. This yields the identity:
\begin{equation}
V^{\otimes (n-1)} \ket{\psi^{(i)}}_{\bar{1}} =
\sqrt{d} \, \bigl( {}_{1} \! \bra{i} V^\dagger \bigr) \ket{\psi}
=
\sum_{j=0}^{d-1} \bigl( \bra{j} V \ket{i} \bigr)^\ast \ket{\psi^{(j)}}_{\bar{1}}
\label{Eq:iUdagger}
\end{equation}
Let us now specialize this result to a special unitary transformation that cyclically shifts the labelling of the basis states:
\begin{equation}
W = (-1)^{d-1} \sum_{i=0}^{d-1} \ket{i+ 1 }\bra{i},\label{eq:unitaryW}
\end{equation}
where the addition $i+1$ is understood to be modulo $d$. Using this $W$ in Eq.~\eqref{Eq:iUdagger} implies that $\ket{\psi^{(i+1)}} =(-1)^{d-1} W^{\otimes (n-1)} \ket{\psi^{(i)}}$, i.e.\ $\ket{\psi^{(i)}}$ and $\ket{\psi^{(i+1)}}$ are related by a unitary that is independent of $\ket{\psi}$. This means that
$\braket{\Phi^{(i+1)}}{\psi^{(i+1)}} = \braket{\Phi^{(i)}}{\psi^{(i)}}$. This fact combined with expanding the scalar product $\braket{\Phi}{\psi}$ using Eq.~(\ref{eq:sud_singletowyrozpisany}) completes the proof of Eq.~(\ref{Eq:SUd:dmuidmuj}).
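Equation (\ref{Eq:SUd:dmuidmuj}) can be verified numerically for $d=2$, $n=4$ using the DFS states $\ket{\Xi_1}$ and $\ket{\Xi_3}$ of Sec.~\ref{Sec:SingletQubits}. A possible sketch:

```python
import numpy as np

def basis(s):
    v = np.zeros(16); v[int(s, 2)] = 1.0; return v

Xi1 = (basis('0101') - basis('0110') - basis('1001') + basis('1010')) / 2
Xi3 = (basis('0011') - basis('0101') - basis('1010') + basis('1100')) / 2

d = 2
for Phi in (Xi1, Xi3):
    for Psi in (Xi1, Xi3):
        full = np.vdot(Phi, Psi)
        # components after projecting qudit 1: |psi^(i)> = sqrt(d) <i|_1 |psi>
        P = np.sqrt(d) * Phi.reshape(d, -1)
        Q = np.sqrt(d) * Psi.reshape(d, -1)
        for i in range(d):
            for j in range(d):
                expected = full if i == j else 0.0
                assert abs(np.vdot(P[i], Q[j]) - expected) < 1e-12
```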
With Eq.~(\ref{Eq:SUd:dmuidmuj}) in hand, further steps are straightforward.
A removal of the first qudit maps a state $\ket{\psi}$ onto a statistical mixture
\begin{equation}
\varrho_{\bar{1}} = \hbox{Tr}_1 \bigl( \ket{\psi}\bra{\psi} \bigr) =\frac{1}{d} \sum_{i=0}^{d-1} \ket{\psi^{(i)}}_{\bar{1}}\bra{\psi^{(i)}}.
\end{equation}
Eq.~(\ref{Eq:SUd:dmuidmuj}) implies that analogously to the SU(2) case the components with different $i$ occupy orthogonal subspaces. Within each subspace the state is fully preserved, which follows from applying Eq.~(\ref{Eq:SUd:dmuidmuj}) to pairs of states from an arbitrary basis in the DFS. The final step is to show that the state $\varrho_{\bar{1}}$ is invariant with respect to $V^{\otimes (n-1)}$. This is a consequence of the fact that both the initial state $\ket{\psi}$ and the procedure of tracing out a particle are invariant with respect to SU($d$) transformations. Explicitly, the invariance of $\hat\varrho_{\bar{1}}$ can be verified with a calculation based on Eq.~(\ref{Eq:iUdagger}):
\begin{align}
V^{\otimes (n-1)} \varrho_{\bar{1}} (V^\dagger)^{\otimes (n-1)} &=
\sum_{i=0}^{d-1} \bigl( {}_{1} \! \bra{i} V^\dagger \bigr) \ket{\psi}\bra{\psi} \bigl( V \ket{i}_{1} \bigr) \\
&= \hbox{Tr}_1 \bigl( \ket{\psi}\bra{\psi} \bigr) = \varrho_{\bar{1}}.
\end{align}
Thus the encoded state is fully preserved.
Concluding, we have shown that DFS encoding is immune to removing one particle. Unfortunately, this property does not seem to generalize in a straightforward manner to the loss of more particles. For example, when two qubits are removed from a four-qubit DFS state, the result will be either a singlet state of the remaining two qubits, or a statistical mixture of the singlet and triplet states which does not preserve the original superposition. This observation holds also for any higher even number of qubits. Nevertheless, our result shows how to protect information in the few-photon regime from both collective depolarization and the first-order effects of linear attenuation. We have proposed an experimental demonstration of this combined protection which can provide a robust quantum cryptography protocol.
Finally, let us note that although the proof of robustness against the qudit loss was based on the assumption that Eq.~(\ref{Eq:invariance}) is satisfied for every $\text{SU}(d)$ matrix, the DFS fulfilling this condition protects quantum superpositions from any decoherence mechanism that involves a subset of $\text{SU}(d)$ transformations. Therefore our considerations apply to a range of physical systems, for example higher-spin particles in a magnetic field or multilevel atoms interacting with optical fields.
\section{Further questions}
We analyzed the problem of which states with a fixed number of photons $n$ in $d$ modes can be related using only linear optics. Mathematically, this asks which homogeneous polynomials of degree $n$ in $d$ complex variables may be related by a unitary transformation of the variables (or a linear one, if we allow postselection of ancillary modes).
We relate this problem to the problem of equivalence of pure states of distinguishable particles, with respect to local operations (i.e. LU- and SLOCC-equivalence).
We show that the study of homogeneous operations, i.e.\ those where the same single-particle operator acts on each particle, suffices.
Furthermore, we introduce and analyze entanglement classification by checking
which one-particle operations preserve permutation symmetry.
In that classification we obtain a sequence of states, unique up to SLOCC.
In one extreme we find the multiparticle GHZ state, whereas on the other there is a $(d-1)$ excitation state, which is a natural generalization of the W state resulting from the classification scheme.
Some questions are left open:
\begin{itemize}
\item Whether invariance under all local operations (that is, not only invertible operations) on symmetric states can be represented as the same transformation for each particle.
\item Does it work for mixed states?
\item Whether the application of $k$-particle transformations on permutation-symmetric
states, which are reversible by acting on another part, will give rise to a different entanglement
classification.
\end{itemize}
For passive linear optics with no postselection, we introduce two families of invariants.
Both are based on the global creation operator that creates the state, $\ket{\psi}=f^\dagger(\vec a)\ket{\Omega}$, and can be written as a homogeneous polynomial in the creation operators of the individual modes. The first set of invariants is the spectrum of the operator $f f^\dagger$. The second one is the set of moments of the form $\bra{\Omega}f^k f^{\dagger k} \ket{\Omega}$. The moments have a physical interpretation: they are related to the probability of not losing particles when $k$ copies of the original state are prepared and the symmetric channel is postselected.
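The moments can be computed symbolically for small states. The sketch below (helper names are ours) represents $f^\dagger$ as a polynomial in the mode exponents and uses the vacuum expectation $\bra{\Omega} a^{\vec m} a^{\dagger \vec m'} \ket{\Omega} = \delta_{\vec m \vec m'} \prod_i m_i!$. It shows that the two-photon states $(a^\dagger)^2\ket{\Omega}/\sqrt{2}$ and $a^\dagger b^\dagger \ket{\Omega}$ have different second moments, so no passive linear-optics unitary connects them:

```python
import numpy as np
from collections import defaultdict
from math import factorial

def poly_pow(coeffs, k):
    """k-th power of a polynomial in creation operators; `coeffs` maps a
    tuple of mode exponents to a complex coefficient."""
    out = {(0,) * len(next(iter(coeffs))): 1.0}
    for _ in range(k):
        nxt = defaultdict(complex)
        for e1, c1 in out.items():
            for e2, c2 in coeffs.items():
                nxt[tuple(x + y for x, y in zip(e1, e2))] += c1 * c2
        out = dict(nxt)
    return out

def moment(coeffs, k):
    """<Omega| f^k f^dag^k |Omega> for f^dag given by `coeffs`."""
    fk = poly_pow(coeffs, k)
    return sum(abs(c)**2 * np.prod([factorial(e) for e in exps])
               for exps, c in fk.items())

# two normalized 2-photon states in 2 modes
state1 = {(2, 0): 1 / np.sqrt(2)}   # (a^dag)^2 / sqrt(2) |Omega>
state2 = {(1, 1): 1.0}              # a^dag b^dag |Omega>

assert abs(moment(state1, 1) - 1) < 1e-12   # both are normalized
assert abs(moment(state2, 1) - 1) < 1e-12
# second moments differ: the states are linear-optics inequivalent
assert abs(moment(state1, 2) - 6) < 1e-12
assert abs(moment(state2, 2) - 4) < 1e-12
```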
The main open question is whether our invariants are fine-grained enough to ensure that if two multiphoton states have the same invariants, they can be connected with linear optics and complex conjugation.
We have computed the invariants for a variety of situations, and found that they provide a complete characterization of the equivalence classes in all of them.
However, this question is not yet answered in the general case.
Regarding future work, we would like to make the following remarks. First of all, a proof that these invariants provide a full characterization would be very desirable. Or, alternatively, a counterexample, which would lead us to find better invariants.
Second, both methods can be applied for fermions with no modifications beyond changing bosonic by fermionic operators. It deserves investigation whether this method provides new invariants in that case, or whether it simplifies the derivation of already known ones.
A third line of future research will be to extend our results to mixed states, or states without a fixed number of particles. In this last case, moments can still be used,
but the spectral method becomes impractical (as $ff^\dagger$ can no longer be decomposed into blocks).
But perhaps the most practical open question is: if two multiphoton states $\ket{\phi_1}$ and $\ket{\phi_2}$ can not be related using only linear optics, what is the maximal efficiency for obtaining $\ket{\phi_2}$ out of $\ket{\phi_1}$ using linear optics {\em and} postselection?
\chapter{Quantum walks on complex networks}
\label{ch:networks}
\section{Introduction}
In this chapter we develop a complex networks \cite{Newman03thestructure,Newman2010book,Albert2002} approach to the unitary evolution of a single particle, which we interpret as a quantum walk.
We study the analogies between this process and related classical walks.
The focus is on studying the long time probability distribution and the coherence between nodes,
which provides tools for analyzing properties of quantum walks on complex networks.
In particular, we consider a splitting of a complex network into independent pieces that are not related by quantum superposition.
\subsection{Networks and quantum walks}
The study of quantum walks goes back to the Feynman checkerboard \cite{feynman1965quantum}, a toy model in which a particle travels at the speed of light on a one-dimensional lattice, while being subjected to reversal with some amplitude.
The effective behavior of this particle turns out to be the same as for a massive particle evolving according to the Schr{\"o}dinger equation.
Another seminal model is the quantum walk with a coin \cite{Aharonov1993}, where the path taken by a particle is specified by tossing a quantum coin.
For an overview of quantum walks see \cite{Kempe2003,Venegas-Andraca2012}.
In general, the quantum dynamics of any discrete system can be re-expressed and interpreted as a single particle quantum walk~\cite{PhysRevLett.103.240503,PhysRevA.81.032327}, which is capable of performing universal quantum computation~\cite{childs2009universal}.
One dimensional walks can be simulated with photons, both in the single particle variant \cite{Schreiber2011} and the walk with a coin variant \cite{Schreiber2010}.
Quantum walks are used to study transport properties in physical systems~\cite{Mulken2011,FG98,caruso09,MRLA08,CF09}, such as transport of energy through biological complexes or artificial solar cells.
Additionally, there have been theoretical proposals for speed-up of algorithms for large social and links networks~\cite{Faloutsos:1999:PRI:316194.316229,albert1999internet},
for example for
PageRank \cite{Page1999}, the famous ranking algorithm based on the simulation of a random walk through Internet.
It has been studied using quantum annealing~\cite{Burillo2012,paparo2012google,garnerone2012pagerank,Garnerone2012google},
that is, a simulation of the procedure by which a quantum system is driven to its ground state, chosen so that this ground state represents the Google ranking vector.
While analytical results have been obtained for some specific topologies, such as star-like \cite{muelken2007inefficient,mulken2006coherent,cai1997rouse}, regular or semi-regular \cite{salimi2010continuous} networks, progress in analyzing quantum walks on complex networks has largely been based on numerical analysis.
More general analytic results, applicable to real-world complex systems, can be brought by
studying the probability distribution of finding the walker at each node in the long time limit of a certain continuous-time unitary quantum walk.
For unitary quantum walks, even for arbitrarily long times there are oscillations rather than a steady state; this is not necessarily the case for open quantum walks~\cite{spohn1977algebraic,Whitfield2010}.
Consequently, we work with the long time averages, which are equivalent to removing oscillations \cite{mulken2005asymmetries,muelken2007inefficient}.
We show that the result can be approximated by the steady state of a classical random walk.
Moreover, we measure the quality of this approximation by studying a parameter called \emph{quantumness}~\cite{Faccin2013} --- a number in the unit interval quantifying the strength of quantum effects.
In classical random walks on undirected, connected graphs, there is a unique steady state, so the long time limit does not depend on the choice of the initial state.
However, in the quantum walk the final state depends on the initial conditions.
In particular, we show how these quantum effects are related to the energy of a given state and the degree distribution of the underlying network.
As a case study, we investigate quantum walks on a range of model complex network structures, including the \ac{ba}, \ac{er}, \ac{ws} and \ac{rg} networks.
We repeat this analysis for several real-world networks, specifically a \ac{kc} social network~\cite{zachary1977information}, the \ac{em} network of the URV university~\cite{guimera2003self}, the \emph{Caenorhabditis elegans}
network~\cite{duch2005community}, and a \ac{ca} network of scientists~\cite{newman2006finding}.
Let us make a brief introduction to the models we use as benchmarks.
They are parametrized by the number of nodes $N$ and some other parameter that can be mapped to the number of edges $M$.
The \acf{er} model is one of the first random graph models \cite{Erdos1959,ER60}.
We create a random graph with $M$ edges, that is, out of all possible graphs with $N$ nodes and $M$ edges we select one uniformly at random.
The \acf{ws} model \cite{Watts1998} shows how the addition of a few random links changes graph behavior from a regular lattice to a small-world network.
We start with $N$ nodes connected as a circle, that is, with each node connected to its two neighbors.
Then we add a further $M-N$ edges, chosen as in the \acf{er} model, so that we have $M$ edges in total.
In the \acf{rg} model on a square \cite{penrose2003} we start by creating $N$ points, each drawn from the uniform probability distribution on a unit square.
Then we connect all pairs of nodes that are closer than a certain distance cutoff $r$,
which can be adjusted so that we obtain $M$ edges.
The \acf{ba} model \cite{BA99} is based on preferential attachment and serves as a key example for scale-free behavior of real-world networks.
We start with a few nodes, connected with each other.
New nodes are added one by one and linked to existing nodes with a probability proportional to their degree.
This way, nodes with many edges acquire new edges more easily than others.
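The growth rule above can be sketched in a few lines of Python. This is a minimal illustration of preferential attachment using repeat-based stub sampling (each node appears in a list once per incident edge, so choosing uniformly from the list is degree-biased); the node count, the number of links per new node, and the seed are arbitrary choices for the sketch, not values used elsewhere in this chapter:

```python
import random
from collections import Counter

def ba_graph(n, m, seed=0):
    """Grow a Barabasi-Albert-style graph: each new node attaches m edges
    preferentially to high-degree nodes via repeat-based stub sampling."""
    rng = random.Random(seed)
    # start from a small complete core of m + 1 nodes
    edges = {(i, j) for i in range(m + 1) for j in range(i)}
    # each node repeated once per incident edge => degree-biased sampling
    stubs = [v for e in edges for v in e]
    for new in range(m + 1, n):
        targets = set()
        while len(targets) < m:
            targets.add(rng.choice(stubs))
        for t in targets:
            edges.add((new, t))
            stubs += [new, t]
    return edges

E = ba_graph(200, 2)
deg = Counter(v for e in E for v in e)
```

Early nodes stay in the stub list longest, so they accumulate edges fastest; the resulting degree sequence is strongly heterogeneous, with a few hubs of degree far above the mean.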
\subsection{Community detection}
Real-world complex networks are typically not homogeneous --- some of their regions are much more connected internally than with the rest of the network.
These regions are called \emph{communities}.
The identification of the community structure within a network addresses the
problem of characterizing the mesoscopic boundary between the microscopic scale of basic network components
(herein called nodes) and the macroscopic scale of the whole
network~\cite{girvan2002,porter2009communities,Fortunato2010}.
The detection of community structures dates back to 1927~\cite{rice1927identification}, when an index of cohesion within a community was introduced to study the behavior of political parties in the United States.
The analysis of the community structure has revealed countless important hierarchies of community groupings within real-world complex networks.
Salient examples can be found in social networks such as
human~\cite{zachary1977information} or animal relationships~\cite{lusseau2004},
biological~\cite{jonsson2006cluster,pimm1979structure,krause2003compartments,guimera2005functional}, biochemical~\cite{holme2003subnetwork} and
technological~\cite{flake2002self,gauvain2013communities} networks, as well as numerous others
\cite{girvan2002}.
In quantum networks, as researchers explore networks of an increasingly
complex geometry and large
size~\cite{allegra2012,plenio2008dephasing,renger2006},
the tractability of their analysis and understanding may rely on identifying
relevant community structures.
An interesting application is the quantum simulation of excitation transport in biological dissipative networks
\cite{ringsmuth2012,fleming10,IF12,CF09,caruso09,MRLA08,Scholak2011}.
The major light harvesting complex of plants, photosystem II (LHCII) \cite{Croce2014}, is of particular interest.
In past works, researchers have divided this complex \emph{by hand} in order to gain more insight into the
system dynamics~\cite{pan2013architercture,novoderezhkin2005lhcii,fleming2009lhcii}.
We have devised methods that optimize the task of identifying communities within
a quantum network \emph{ab initio} and, as we will show, the resulting
communities consistently point towards a structure that is different from those
previously identified for the LHCII \cite{Faccin2013community}.
We also consider larger networks,
for which an automatic method would appear to be the only
feasible option.
We introduce a set of novel methods based on community detection for quantum walks \cite{Faccin2013community}.
As in typical classical methods, the backbone of our approach is a hierarchical aggregation of communities \cite{carlsson2010characterization}.
That is, we start with $N$ communities, each of them consisting of a single node.
We define a closeness function between each pair of nodes.
In each iteration, we merge the two closest communities into a new one
and proceed until all communities are merged into a single one.
The output of the algorithm can be either the splitting of a network into a given number of communities, or the splitting that maximizes some target function.
The procedure is depicted in Fig.~\ref{fig:dendro}.
Unlike the classical case, where classical \emph{modularity} \cite{Newman2004} is used both for measuring closeness and establishing the target function, we introduce a few modularity-like functions based on coherence and transport properties of a quantum walk \cite{zimboras2013quantum}.
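The agglomeration loop described above is generic: it only needs a pairwise closeness function and a merge rule. The following sketch illustrates the loop with a placeholder numerical closeness matrix (two well-connected triads weakly linked to each other); the quantum closeness functions themselves are defined later in the chapter, so the matrix here is purely illustrative:

```python
import numpy as np

def agglomerate(closeness):
    """Greedy hierarchical aggregation: start from singleton communities and
    repeatedly merge the pair with the largest total inter-community closeness.
    Returns the list of partitions, from N singletons down to one community."""
    n = closeness.shape[0]
    parts = [frozenset([i]) for i in range(n)]
    history = [list(parts)]
    while len(parts) > 1:
        best = None
        for a in range(len(parts)):
            for b in range(a + 1, len(parts)):
                score = sum(closeness[i, j] for i in parts[a] for j in parts[b])
                if best is None or score > best[0]:
                    best = (score, a, b)
        _, a, b = best
        merged = parts[a] | parts[b]
        parts = [p for k, p in enumerate(parts) if k not in (a, b)] + [merged]
        history.append(list(parts))
    return history

# placeholder closeness: two 3-node cliques joined by one weak link
C = np.zeros((6, 6))
for block in [(0, 1, 2), (3, 4, 5)]:
    for i in block:
        for j in block:
            if i != j:
                C[i, j] = 1.0
C[2, 3] = C[3, 2] = 0.1
hist = agglomerate(C)
```

Reading `hist` from the end, the two-community level recovers the two triads, which is the behavior one expects before any target function is used to pick the optimal level of the hierarchy.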
\begin{figure}[b!]
\centering
\begin{center}
\includegraphics[width=\textwidth]{figs/article/communities/dendrogram}
\end{center}
\caption{Hierarchical community structure arising from a quantum evolution.
Left: the closeness matrix $c(i,j)$ between $n=60$ nodes.
Right: the dendrogram showing the resulting hierarchical community
structure.
The dashed line shows the optimum level within this hierarchy,
according to the maximal modularity criterion.
The particular example shown here is the one corresponding
to~\fir{fig:art-fidelity}.
}
\label{fig:dendro}
\end{figure}
All our methods are based on the full unitary dynamics of the system,
as described by the Hamiltonian, and account for quantum effects such as coherent evolution and interference.
In fact, phases are often fundamental to characterize the system evolution.
For example, in \cite{harel2012quantum} it was shown that in light harvesting
complexes interference between pathways is important even at room temperature.
We use our community detection methods to automatically find communities, which turn out to be in good agreement with communities picked by hand by experts studying this system.
As with the case of classical community structure, there are many possible
definitions of a quantum community.
We restrict ourselves to two broad
classes based on transport properties and fidelity under unitary evolution.
The use of community detection in quantum systems addresses an open
challenge in the drive to unite quantum physics and complex network
science.
We expect such partitioning, based on our definitions or
extensions thereof, to be used extensively in making the large
quantum systems currently targeted by quantum physicists tractable to
numerical analysis.
\subsection{Structure}
This chapter is structured as follows.
First, in Sec.~\ref{sec:walks}, we look at similarities between a classical random walk and quantum walk on a graph \cite{Faccin2013}.
Sec.~\ref{sec:walks-framework} gives an introduction to the dynamics of a continuous-time random walk and a continuous-time unitary quantum walk.
We describe the long time averaged probability distribution of the quantum walk.
In Sec.~\ref{sec:degree-quantum} we introduce a quantity called \emph{quantumness} to assess the difference between the classical and quantum behavior on a given graph.
Sec.~\ref{sec:walks-numericalresults} is dedicated to numerical studies of this quantity
on a range of artificial and real-world complex network topologies.
Second, in Sec.~\ref{sec:community}, we design community detection algorithms for quantum systems \cite{Faccin2013community}.
They are based on quantum walks and depend on properties such as interference between different paths, and thus cannot be replicated by any classical random walk.
Moreover, some of our definitions of communities are directly related to quantum informational properties of the equilibrium state.
In Sec.~\ref{sec:comdet} we begin by recalling several common notions from classical community
detection that we rely on in this work.
This sets the stage for the development of a quantum treatment of community detection in Sec.~\ref{sec:quantumcom}.
We then turn to several examples in Sec.~\ref{sec:performance}, including the LHCII complex mentioned previously.
Some technical details of community detection are left for Sec.~\ref{sec:comdet-appendix}.
\section{Classical and quantum walk}\label{sec:walks}
\subsection{Walks framework}\label{sec:walks-framework}
We consider a walker moving on a connected network of $N$ nodes, with each
weighted undirected edge between nodes $i$ and $j$ described by the element
$A_{ij}$ of the off-diagonal adjacency matrix $A$.
The matrix is symmetric ($A_{ij}=A_{ji}$) and has real, non-negative entries, with zero entries being equivalent to absence of an edge.
We use Dirac notation and represent $A = \sum_{ij} A_{ij} \ket{i} \bra{j}$ in terms of $N$ orthonormal vectors $\ket{i}$.
The network gives rise to both a quantum walk and a corresponding classical walk.
There is no unique mapping from a network to an evolution.
However, there are a number of conditions that must be satisfied.
For a classical random walk, the infinitesimal generator needs to ensure that for any state:
\begin{itemize}
\item probabilities sum up to one,
\item probabilities are non-negative.
\end{itemize}
Operators fulfilling these criteria are called infinitesimal stochastic operators, and are defined by
\begin{itemize}
\item all columns sum up to zero,
\item all off-diagonal entries are non-negative.
\end{itemize}
Additionally, we impose one more property --- the requirement that the rate of leaving a node is the same for all nodes, so that the process can be interpreted as a random walk rather than an arbitrary probability flow.
This translates to the property that all diagonal entries of the generator are equal.
In contrast, for quantum evolution the only property we need from the generator is Hermiticity, so that the generated evolution is unitary.
It is not possible to ensure that the rate of leaving each node is the same, due to interference.
However, we can at least set the diagonal terms to a common value and normalize the amplitudes on the edges.
The classical stochastic walk $S(t) = \mathrm{e}^{ -H_C t}$ we consider is generated by the infinitesimal stochastic (see e.g.~Refs.~\cite{BB12, johnson2010,BF13}) operator
\begin{equation}
H_C = L D^{-1},
\end{equation}
where $D = \sum_i d_i \ket{i} \bra{i}$ is a diagonal matrix of the node degrees, $d_i = \sum_j A_{ij}$ and $L$ is the graph Laplacian, defined as $L = D - A$.
For this classical walk, the total rate of leaving each node is identical, which is ensured by the normalization $D^{-1}$.
The corresponding unitary quantum walk $U(t) = \mathrm{e}^{-\mathrm{i} H_Q t}$ is generated by the Hermitian operator
\begin{equation}
H_Q = D^{-\nicefrac 12} L D^{-\nicefrac 12}.
\end{equation}
For this quantum walk, the energies $\brackets{i}{H_Q}{i}$ at each node are identical.
The generators $H_C$ and $H_Q$ are similar matrices, related by
\begin{equation}
H_Q = D^{-\nicefrac 12} H_C D^{\nicefrac 12}.
\end{equation}
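The two generators above are easy to construct and compare numerically. The following sketch builds $H_C = LD^{-1}$ and $H_Q = D^{-\nicefrac12} L D^{-\nicefrac12}$ for a toy 4-node path graph (an arbitrary example, not a network studied in this chapter) and illustrates the similarity relation: $H_Q$ is symmetric, the columns of $H_C$ sum to zero, and the two matrices share the same spectrum.

```python
import numpy as np

def walk_generators(A):
    """Return (H_C, H_Q) for a symmetric non-negative adjacency matrix A:
    H_C = L D^{-1} generates the stochastic walk, H_Q = D^{-1/2} L D^{-1/2}
    the unitary quantum walk; the two are similar matrices."""
    d = A.sum(axis=1)
    L = np.diag(d) - A                   # graph Laplacian
    H_C = L @ np.diag(1.0 / d)
    Dh = np.diag(d ** -0.5)
    H_Q = Dh @ L @ Dh
    return H_C, H_Q

# toy example: a 4-node path graph
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H_C, H_Q = walk_generators(A)
```

Because the matrices are similar, the eigenvalues of the non-symmetric $H_C$ are real and coincide with those of the Hermitian $H_Q$.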
This mathematical framework, represented in \fir{fig:scheme}, underpins our analysis.
As we will describe in \secr{sec:classical}, the long time behavior of
the classical walk generated by $H_C$ has been well
explained in terms of its underlying network properties, specifically
the degrees $d_i$.
Our goal in \secr{sec:quantum} is to determine the role this concept plays in the quantum walk generated by $H_Q$.
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=0.5\textwidth]{figs/article/qwalk/quantum-stochastic-scheme}
\end{center}
\caption{
Relating stochastic and quantum walks.
An undirected weighted network (graph) $G$ is represented by a symmetric, off-diagonal and non-negative adjacency matrix $A$.
There is a mapping from $A$ (by summing columns) to the diagonal matrix $D$ with entries given by the weighted degree of the corresponding node.
The node degrees are proportional to the steady state probability distribution of the continuous-time stochastic walk (with uniform escape rate from each node) generated by $H_C = L D^{-1}$, where $L = D - A$ is the Laplacian.
The steady state probabilities, represented by the vector $\ket{\pi_0}$, are proportional to the node degrees.
We generate a corresponding continuous-time unitary quantum walk by the Hermitian operator $H_Q = D^{-\nicefrac12} L D^{-\nicefrac 12}$, which is similar to $H_C$.
The probability of being in a node in the stochastic stationary state $\ket{\pi_0}$ and the probability arising from the ground state of the quantum walk are the same.
}
\label{fig:scheme}
\end{figure}
\subsubsection{Classical walks} \label{sec:classical}
In the classical walk the probability $P_i (t)$ of being at node $i$ at time $t$ evolves as $\ket{P(t)} = S(t) \ket{P(0)}$, where $\ket{P(t)} = \sum_i P_i (t) \ket{i}$.
The stationary states of the walk are described by eigenvectors $\ket{\pi_i^k}$ of $H_C$ with eigenvalues $\lambda_i$ equal to zero.
We assume throughout this work that the walk is connected, i.e., it is possible
to transition from any node to any other node through some series of allowed
transitions. In this case there is a unique eigenvector $ \ket{\pi_0} =
\ket{P_C}$ with $\lambda_0 = 0$, and $\lambda_i > 0$ for all $i \neq
0$~\cite{keizer1972steady,lancaster1985theory,norris1998markov,BB12}.
This (normalized accordingly) eigenvector $\ket{P_C} = \sum_i (P_C)_i \ket{i}$ describes the steady state probability distribution
\begin{equation}\label{eq:classical}
(P_C)_i = \frac{d_i }{\sum_j d_j}.
\end{equation}
In other words, the process is ergodic, and after long times the probability of
finding the walker at any node $i$ is given purely by the degree $d_i$ of that
node in the network underlying the process.
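The degree formula for the steady state can be checked directly: the vector with entries $d_i/\sum_j d_j$ is annihilated by $H_C$, and for a connected graph this is the only zero mode. A small sketch on an arbitrary 4-node connected graph (a triangle with a pendant node, chosen only for illustration):

```python
import numpy as np

# a small connected graph: triangle plus a pendant node
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
d = A.sum(axis=1)
H_C = (np.diag(d) - A) @ np.diag(1.0 / d)

P_C = d / d.sum()          # predicted steady state, Eq. (classical)
# P_C is annihilated by the generator, hence stationary under exp(-H_C t)
residual = H_C @ P_C
eigs = np.linalg.eigvals(H_C).real
```

The residual vanishes, and exactly one eigenvalue of $H_C$ is zero while all others are strictly positive, confirming both stationarity and uniqueness for this connected example.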
\subsubsection{Quantum walks} \label{sec:quantum}
When considering quantum walks on networks, it is natural to ask what is
the long time behavior of a quantum walker \cite{Burillo2012,Mulken2011,aharonov2001quantum,mulken2005asymmetries}.
This problem is similar to some thermalization problems \cite{Polkovnikov2011}, in which
the unitary evolution does not drive the system towards a steady state.
Therefore, to obtain a static picture we consider the long time average
probability $(P_Q)_i$ of being on node $i$, which reads
\begin{equation}\label{eqn:pq}
( P_Q )_i =
\lim_{T\to\infty} \frac 1T \int_0^T {\textrm d}t\ \brackets{i}{U(t) \rho(0) U^\dagger (t)}{i} .
\end{equation}
For ease of comparison with $\ket{P_C}$ we will also write the distribution in \eqr{eq:pn} as a ket $\ket{P_Q} = \sum_i ( P_Q )_i \ket{i}$. Unlike the classical case, \eqr{eqn:pq} depends on the initial state $\rho(0)$.
The long time average can be carried out
\begin{align}
( P_Q )_i &=
\lim_{T\to\infty} \frac 1T \sum_{kl} \int_0^T {\textrm d}t\
\braket{i}{\phi_k} e^{- i E_k t} \bra{\phi_k} \rho(0) \ket{\phi_l} e^{i E_l t} \braket{\phi_l}{i}\\
&=
\lim_{T\to\infty} \frac 1T \sum_{kl} \int_0^T {\textrm d}t\
\braket{i}{\phi_k}\bra{\phi_k} \rho(0) \ket{\phi_l} \braket{\phi_l}{i} e^{i (E_l - E_k) t}\\
&= \sum_{kl:\ E_k=E_l} \braket{i}{\phi_k}\bra{\phi_k} \rho(0) \ket{\phi_l} \braket{\phi_l}{i}.
\end{align}
That is, interference between subspaces of different energies vanishes in the long time average, so we obtain an expression for the probability $ ( P_Q )_i$ in terms of the energy eigenspace projectors $\Pi_j$ of the Hamiltonian $H_Q$,
\begin{align}
( P_Q )_i =
\sum_j \brackets{i}{ \Pi_j \rho (0) \Pi_j}{i} .
\label{eq:pn}
\end{align}
Here $\Pi_j = \sum_k\ket{\phi_j^k}\bra{\phi_j^k}$ projects onto the subspace spanned by the eigenvectors $\ket{\phi_j^k}$ of $H_Q$ corresponding to the same eigenvalue $\lambda_j$.
In other words, the long time average distribution is a mixture of the
distributions obtained by projecting the initial
state onto each eigenspace.
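The projector expression can be checked against a brute-force time average. The sketch below uses an arbitrary 5-node connected graph and a pure initial state, computes $(P_Q)_i$ from the eigenspace projectors, and compares it with the numerically averaged $|\bra{i}U(t)\ket{\psi_0}|^2$ over a long time window (the graph, window length, and sampling grid are illustrative choices):

```python
import numpy as np

# arbitrary 5-node connected graph
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0],
              [1, 1, 0, 0, 1],
              [0, 1, 0, 0, 1],
              [0, 0, 1, 1, 0]], dtype=float)
d = A.sum(axis=1)
Dh = np.diag(d ** -0.5)
H_Q = Dh @ (np.diag(d) - A) @ Dh

lam, phi = np.linalg.eigh(H_Q)        # columns of phi are eigenvectors
psi0 = np.ones(5) / np.sqrt(5)        # even superposition initial state

# Eq. (pn): sum over eigenspace projectors (group nearly equal eigenvalues)
groups = []
for k, l in enumerate(lam):
    if groups and abs(l - lam[groups[-1][0]]) < 1e-8:
        groups[-1].append(k)
    else:
        groups.append([k])
P_Q = np.zeros(5)
for g in groups:
    Pi_psi = phi[:, g] @ (phi[:, g].T @ psi0)   # project psi0 onto eigenspace
    P_Q += np.abs(Pi_psi) ** 2

# brute-force long time average of |<i|U(t)|psi0>|^2
ts = np.linspace(0, 20000, 200001)
c = phi.T @ psi0
amps = (phi * c) @ np.exp(-1j * np.outer(lam, ts))   # node amplitudes over time
P_avg = (np.abs(amps) ** 2).mean(axis=1)
```

The finite-window average converges to the projector result as the window grows, since the surviving cross terms oscillate with the nonzero energy differences and average out.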
Due to the similarity transformation $H_Q = D^{-\nicefrac 12} H_C D^{\nicefrac 12}$ the classical $H_C$ and quantum $H_Q$ generators share the same eigenvalues $\lambda_i \geq 0$, and have eigenvectors related by $\ket{\phi_i^k} = D^{-\nicefrac 12} \ket{\pi_i^k}$ up to their normalizations.
In particular, the unique eigenvectors corresponding to $\lambda_0 = 0$ are $\ket{\pi_0} = D \ket{\mathbf{1}}$ and $\ket{\phi_0}
= D^{\nicefrac 12} \ket{\mathbf{1}}$ up to their normalizations, with $\ket{\mathbf{1}} = \sum_i \ket{i}$. Therefore the probability vector describing the outcomes of a measurement of
the quantum ground state eigenvector $\ket{\phi_0} $ in the node basis is the classical steady state distribution $\ket{\pi_0} = \ket{P_C}$.
The state vector $\ket{P_C}$ appears in \eqr{eq:pn} for the quantum long time average distribution $\ket{P_Q}$ with weight $ \brackets{ \phi_0 }{ \rho (0) }{\phi_0}$.
Accordingly we split the sum in \eqr{eq:pn} into two parts, the first we call the ``classical term'' $\ket{P_C}$ and the rest we call the ``quantum correction'' $\ket{\tilde{P}_Q}$, as
\begin{align}
\ket{P_Q} = (1-\varepsilon) \ket{P_C} +\varepsilon \ket{ \tilde{P}_Q} .
\label{eq:twoterms}
\end{align}
The normalized quantum correction $\ket{\tilde{P}_Q} = \sum_i ( \tilde{P}_Q )_i \ket{i}$ is given by
\begin{align} \label{eq:quantumcorrection}
( \tilde{P}_Q )_i &= \frac{1}{ \varepsilon} \sum_{j \neq 0} \brackets{i}{ \Pi_j \rho (0) \Pi_j}{i} ,
\end{align}
and the weight
\begin{equation}
\varepsilon = 1 - \brackets{ \phi_0 }{ \rho (0) }{\phi_0},\label{quantumnessformula}
\end{equation}
we call \emph{quantumness}, is a function both of the degrees, through $\ket{\phi_0}$, and of the initial state.
We can think of the parameter $\varepsilon$, which controls the classical-quantum mixture,
as the quantumness of $\ket{P_Q}$ for the following three reasons.
First, the proportion of the elements in $(P_Q)_i$ \eqref{eq:twoterms} that corresponds to the genuinely quantum
correction is $\varepsilon$.
Second, the trace distance between the normalized distribution $(P_C)_i$ and the
unnormalized distribution $(1-\varepsilon) (P_C)_i$ forming the classical part
of the quantum result is also $\varepsilon$.
Last, using a triangle inequality, the trace distance between the normalized
distributions $(P_C)_i$ and $(P_Q)_i$ is upper bounded by $2 \varepsilon$.
This expression for the quantumness in \eqr{quantumnessformula} enables us to make some physical statements about a general initial state.
By realizing that $\ket{\phi_0}$ is the ground state of zero energy $\lambda_0 = 0$ and the gap $\Delta = \min_{i \neq 0} \lambda_i$ in the energy spectrum is non-zero for a connected network~\cite{keizer1972steady,lancaster1985theory,norris1998markov,BB12},
the above implies a bound $E / \Delta \ge \varepsilon $ for the quantumness $\varepsilon$ of the walk in terms of the energy $E = \textrm{tr} \{ H_Q \rho \}$ of the initial state. The bound is obtained through the following steps
\begin{align}
E &= \textrm{tr} \{ H_Q \rho \} = \sum_{j \neq 0} \lambda_j \textrm{tr} \{ \Pi_j \rho (0) \} \nonumber \\
&\geq \Delta \sum_{j \neq 0} \textrm{tr} \{ \Pi_j \rho (0) \} = \Delta \left( 1 - \textrm{tr} \{ \Pi_0 \rho (0) \} \right) = \Delta \varepsilon \label{eq:EnergyBound}.
\end{align}
The above demonstrates that the classical stationary probability distribution will be recovered for low energies.
A utility of this result is that it connects the long time average distribution
to a simple physical property of the walk, the energy, which provides a total ordering of all possible initial states.
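The quantumness and the energy bound are straightforward to evaluate numerically. The sketch below builds $H_Q$ for an arbitrary small connected graph, takes the even-superposition initial state, and checks both $\varepsilon = 1 - \brackets{\phi_0}{\rho(0)}{\phi_0}$ and the bound $E/\Delta \ge \varepsilon$ from Eq.~(\ref{eq:EnergyBound}); the graph itself is only an illustration:

```python
import numpy as np

# arbitrary small connected graph
A = np.array([[0, 1, 0, 1, 1],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [1, 0, 1, 0, 1],
              [1, 0, 0, 1, 0]], dtype=float)
d = A.sum(axis=1)
Dh = np.diag(d ** -0.5)
H_Q = Dh @ (np.diag(d) - A) @ Dh
lam, phi = np.linalg.eigh(H_Q)

psi0 = np.ones(5) / np.sqrt(5)            # even superposition initial state
phi0 = np.sqrt(d) / np.sqrt(d.sum())      # ground state D^{1/2}|1>, normalized

eps = 1 - np.abs(phi0 @ psi0) ** 2        # quantumness, Eq. (quantumnessformula)
E = psi0 @ H_Q @ psi0                     # energy of the initial state
gap = lam[1]                              # spectral gap (lam[0] = 0)
```

For this initial state the overlap $|\braket{\phi_0}{\psi_0}|^2$ reduces to $\langle\sqrt{d}\rangle^2/\langle d\rangle$, which anticipates the degree-only formula derived in the next subsection.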
\subsection{Degree distribution and quantumness} \label{sec:degree-quantum}
Quantumness is a function of both the degrees of the network nodes and the initial state.
To compare the quantumness of different complex networks, we fix the initial state $\rho(0)$.
For our example we choose the even superposition state $\rho(0) = \ket{\Psi(0)} \bra{\Psi(0)}$ with $\ket{\Psi(0)} = \ket{\mathbf{1}} / \sqrt{N}$.
This state has several appealing properties, for example, it is invariant under node permutations and thus independent of the arrangement of the network.
In this case the quantumness is given by the expression
\begin{align}\label{eq:quantumnessdegree}
\varepsilon= 1 - \frac{ \langle \sqrt{d} \rangle^2 }{ \langle d \rangle} ,
\end{align}
where $\langle d \rangle = \sum_i d_i / N$ is the average degree and $\langle
\sqrt{d} \rangle = \sum_i \sqrt{d_i} / N$ is the average root degree of the
nodes. As such, the quantumness depends only on the degree distribution of the network and increases with network heterogeneity.
This statement is quantified by writing the quantumness
\begin{align}
\varepsilon = 1 - \frac{1 }{ N}\exp \left[ H_{\nicefrac 12} \left( \left \{ \frac{ d_i }{ \sum_j d_j } \right \} \right) \right] ,
\end{align}
in terms of the R\'{e}nyi entropy
\begin{align} \label{eq:renyientropy}
H_q ( \{ p_i \}) = \frac{1}{1-q} \ln\left(\sum_i p_i^q \right),
\end{align}
where $ d_i / \sum_j d_j = (P_C)_i$ are the normalized degrees.
To obtain an expression in terms of the more familiar Shannon entropy $H_1$ (obtained by taking the $q\rightarrow1$ limit of \eqr{eq:renyientropy}), we recall that the R\'{e}nyi entropy is non-increasing with $q$ \cite{Beck1993thermodynamics}.
This leads to the upper bound
\begin{align}\label{eq:entropybound}
\varepsilon \leq 1 - \frac{ 1}{ N} \exp \left[ H_{1} \left( \left \{ \frac{ d_i }{ \sum_j d_j } \right \} \right) \right] .
\end{align}
The quantumness approaches this upper bound in the limit that $M$ nodes have uniform degree $d_i = M \langle d \rangle/ N$ and all others have $d_i = 0$. This limit is never achieved unless $M = N$ and $\varepsilon = 0$, i.e., for a regular network.
Physically, $\varepsilon = 0$ for a regular network because the symmetry of the Hamiltonian $H_Q$ implies its eigenvectors are evenly distributed.
The only eigenvector of this type that is positive is the initial state $\ket{\Psi (0)}$, which due to the Perron-Frobenius theorem must also be the ground state $\ket{\Psi (0)} = \ket{\phi_0}$. Therefore $E=0$ and so, from \eqr{eq:EnergyBound}, $\varepsilon = 0$.
In another limit, the quantumness takes
its maximum value $\varepsilon = (N-2)/N \approx 1$ when the degrees of two
nodes are equal and much larger than those of the others (note that the
symmetry of $A$ prevents the degree of a single node from dominating). In the case
that $A_{ij} \in \{ 0 , 1 \}$, i.e., the network
underlying the walks is not weighted, the quantumness of a connected network is more restricted. It is maximized by a walk based on a star network---where a
single node is connected to all others. For a walk of this type $\varepsilon =
1/2 - \sqrt{N-1}/N \approx 1/2$.
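Both limiting cases can be verified directly from the degree-only formula. The sketch below evaluates Eq.~(\ref{eq:quantumnessdegree}) for a star network and compares it with the closed form $\varepsilon = 1/2 - \sqrt{N-1}/N$, and also checks that a regular degree sequence gives $\varepsilon = 0$ (the network size and degree values are arbitrary):

```python
import numpy as np

def quantumness(d):
    """Quantumness of the even-superposition initial state,
    eps = 1 - <sqrt(d)>^2 / <d>  (Eq. (quantumnessdegree))."""
    d = np.asarray(d, dtype=float)
    return 1 - np.mean(np.sqrt(d)) ** 2 / np.mean(d)

N = 100
# star network: one hub of degree N-1, N-1 leaves of degree 1
star_degrees = np.array([N - 1] + [1] * (N - 1), dtype=float)
eps_star = quantumness(star_degrees)
predicted = 0.5 - np.sqrt(N - 1) / N
```

Expanding $(\sqrt{N-1} + (N-1))^2 / (2N(N-1))$ indeed reproduces $1/2 + \sqrt{N-1}/N$, so the star value approaches $1/2$ for large $N$, the maximum attainable for unweighted connected networks.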
\begin{figure*}[!htb]
\begin{center}
\includegraphics[width=0.75\textwidth]{figs/article/qwalk/plot-occupation-all}
\end{center}
\caption{
Long time average probability and degree for nodes in a complex network.
Eight networks are considered: \acf{ba},
\acf{er}, \acf{ws}, \acf{rg}, \acf{kc}, \acf{em}, \acf{ce} and \acf{ca}.
We plot the classical $( P_C )_i$ (red dashed line) and quantum $( P_Q )_i$
(black $+$) probabilities against the degree $d_i$ for every node $i$.
We overlay this with a plot of the average degree distribution $P(d)$ against
$d$ for each network type (grey full line), when known, along with the distribution for the specific realization used (grey $+$).
Alongside the \ac{ba} network we also plot $( P_Q )_i$ for the optimized BA (BA-opt) network, in which the internode weights of the BA network are randomly varied in a Monte Carlo algorithm to reach
$\varepsilon = 0.6$ (orange $\times$). We do not include a plot of the degree distribution for this network.
}
\label{fig:average}
\end{figure*}
Next, in \secr{sec:walks-numericalresults} we study the form of the quantum
correction $\ket{\tilde{P}_Q}$ given by \eqr{eq:quantumcorrection} for a range
of complex network topologies.
\subsection{Numerical results} \label{sec:walks-numericalresults}
To obtain numerical results, we consider unweighted binary networks $A_{ij} \in \{ 0, 1\}$
with various complex network topologies.
Specifically we consider the
\ac{ba} scale free network~\cite{BA99}, the \ac{er}~\cite{ER60}
and the \ac{ws}~\cite{Watts1998} small world networks, and the
\ac{rg} (on a square)~\cite{penrose2003}, a network without the scale
free or small world characteristics.
We set the number of nodes to $N = 500$ and the average degree to $\langle d \rangle \approx 6$.
If a disconnected network is obtained, only the giant component is considered.
The long time average probability of being on each node $i$ is plotted against
its degree $d_i$ for a quantum ($P_Q$) and stochastic ($P_C$) walk in
\fir{fig:average}.
The two cases are nearly identical for these binary networks and the evenly distributed initial state, illustrating that the quantumness $\varepsilon$ is small, below $0.13$.
See Tab.~\ref{tab:en-bound} for the comparison of values.
Within these, the \ac{ba} network shows the highest quantum correction.
This is expected, since the \ac{ba} network has the highest degree heterogeneity.
The \ac{ws} network, which is well known to have quite uniform
degrees~\cite{barrat2000}, is accordingly the network with the lowest quantum correction.
For many of the network types the typical quantumness can be obtained from
the expected (thermodynamic limit) degree distribution. In the \ac{ba} network,
the degree distribution approximately obeys the continuous probability density $P(d)= \langle
d\rangle^2 / 2 d^3 $~\cite{BA99}. Integrating
this to find the moments results in $\varepsilon = 1/9$, which is independent
of the average degree $\langle d \rangle$ and is compatible with our numerics.
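The $\varepsilon = 1/9$ value can be reproduced by integrating the continuum density directly. The sketch below takes $P(d)= \langle d\rangle^2 / 2 d^3$ with the lower cutoff $d_0 = \langle d\rangle/2$ fixed by normalization (an assumption consistent with $\int P = 1$ and $\int d\,P = \langle d\rangle$), and evaluates the moments by a simple trapezoidal rule:

```python
import numpy as np

def trap(y, x):
    """Plain trapezoidal rule (kept explicit to avoid numpy version issues)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

d_avg = 6.0
d0 = d_avg / 2                       # lower cutoff fixed by normalization

d = np.linspace(d0, 5e4, 2_000_000)  # truncated tail; error is negligible
P = d_avg ** 2 / (2 * d ** 3)        # continuum BA degree density

norm = trap(P, d)
mean_d = trap(d * P, d)
mean_sqrt = trap(np.sqrt(d) * P, d)
eps = 1 - mean_sqrt ** 2 / mean_d
```

Analytically, $\langle\sqrt{d}\rangle = (2^{3/2}/3)\sqrt{\langle d\rangle}$ for this density, so $\langle\sqrt{d}\rangle^2/\langle d\rangle = 8/9$ and $\varepsilon = 1/9$ for any $\langle d\rangle$, matching the numerical integral.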
The degree distributions of the \ac{er} and \ac{rg} networks both approximately
follow the Poissonian distribution
$P(d) \approx \langle d\rangle^d \mathrm{e}^{-\langle d\rangle} / d! $ for large
networks, which explains the similarity of their quantumness $\varepsilon$
values.
For $\langle d\rangle = 6$ we
recover $\varepsilon \approx 0.046$, which is
compatible with the values for the particular networks we generated.
From the general form, calculating the quantumness numerically and performing a best fit we find
that $\varepsilon \approx \kappa_1 \langle d\rangle^{-\kappa_2}$, with
fitting parameters $\kappa_1 = 0.429$ and $\kappa_2 = 1.210$.
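The Poissonian estimate quoted above can be reproduced with a short sum over the degree distribution, using the recurrence $p_k = p_{k-1}\,\langle d\rangle/k$ for the Poisson probabilities (truncating the sum once the tail is negligible):

```python
import math

d_avg = 6.0
p = math.exp(-d_avg)          # Poisson pmf at k = 0
mean_sqrt = 0.0
for k in range(1, 150):
    p *= d_avg / k            # pmf at k via the Poisson recurrence
    mean_sqrt += math.sqrt(k) * p

# quantumness for a Poissonian degree distribution with mean <d> = 6
eps = 1 - mean_sqrt ** 2 / d_avg
```

The sum gives $\varepsilon \approx 0.046$, in agreement with the value quoted for the \ac{er} and \ac{rg} networks at $\langle d\rangle = 6$.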
The size of the quantum effects can be enhanced by introducing heterogeneous
weights $A_{ij}$ within a network. We have done this for a \ac{ba} network using several iterations of the following procedure.
A pair of connected nodes is randomly selected, then the associated weight is doubled or halved at random.
As anticipated, the effect is to increase the discrepancy between
the classical and quantum dependence of the long time average probability on
degree, illustrated in \fir{fig:average}.
As the number of iterations is increased, the quantumness follows
the bound given in \eqr{eq:entropybound}.
In fact, most networks are found close to saturating this bound, especially for
low quantumness.
\begin{table}
\centering
\begin{tabular}{p{4cm}cr}\hline
type & $\varepsilon$ & $E /\Delta$\\\hline
\ac{ba} & 0.1299 & 0.5583\\
\ac{er} & 0.0431 & 0.1734\\
\ac{rg} & 0.0396 & 11.2875\\
\ac{ws} & 0.0164 & 0.0846\\
\ac{ba}-opt & 0.6092 & 844.9181\\
\ac{kc} & 0.1204 & 1.3471\\
\ac{ce} & 0.2247 & 4.7622\\
\ac{em} & 0.1987 & 1.5449\\
\ac{ca} & 0.1138 & 39.8535\\\hline
\end{tabular}
\caption{Quantumness, energy and gap. The quantumness $\varepsilon$ and its upper bound $E/\Delta$, the ratio of energy and gap, for each of the nine networks considered in \fir{fig:average}.
Note that the energy gap can be arbitrarily small, and zero for disconnected networks;
this gives rise to very high $E/\Delta$ values.}
\label{tab:en-bound}
\end{table}
Further, the energy $E = \brackets{\Psi_0}{H_Q}{\Psi_0}$ of the given initial state
has a simple expression $E = 1 - (1/N) \sum_{ij} A_{ij} / \sqrt{d_i d_j}$,
which allows us to determine the extent to which the bound $E / \Delta \geq
\varepsilon$ is saturated by comparing the values of $E/ \Delta$ and
$\varepsilon$. We find that for some networks, e.g., the BA, ER and WS
networks, the bound is quite restrictive and reasonably saturated. However for
the other networks we find that the quantumness takes a low value without this being ensured by the bound alone; see Table~\ref{tab:en-bound}.
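The closed-form energy expression above follows from $\brackets{\Psi_0}{H_Q}{\Psi_0}$ with the even superposition state, since $(H_Q)_{ij} = \delta_{ij} - A_{ij}/\sqrt{d_i d_j}$. A quick consistency check on an arbitrary random weighted network (the size and weight range are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)

# random weighted symmetric adjacency (dense, hence connected)
N = 8
W = rng.uniform(0.5, 2.0, size=(N, N))
A = np.triu(W, 1)
A = A + A.T

d = A.sum(axis=1)
Dh = np.diag(d ** -0.5)
H_Q = Dh @ (np.diag(d) - A) @ Dh
psi0 = np.ones(N) / np.sqrt(N)

E_direct = psi0 @ H_Q @ psi0                           # <psi0| H_Q |psi0>
E_formula = 1 - (A / np.sqrt(np.outer(d, d))).sum() / N
```

The two values coincide to machine precision, so the formula can be used to evaluate $E$ (and hence the bound $E/\Delta$) without diagonalizing $H_Q$.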
Finally, our numerical calculations reveal the behavior of the quantum part
$\tilde{P}_Q$ of the long time average node occupation.
We find that the quantum part enhances the long time average probability of being at nodes with small degree relative to the classical part. More precisely $ (
\tilde{P}_Q )_i / ( P_C )_i$ exhibits roughly $( d_i )^{-\kappa_3}$ scaling,
with $\kappa_3 \approx 1$, as shown in
\fir{fig:diff}. Interestingly, there is a correlation between the amount of
enhancement, given by $\kappa_3$, and the type of complex network. The network
types with smaller diameters (order of increasing diameter: \ac{ba}, then
\ac{er} and \ac{ws}, then \ac{rg}) have the smallest $\kappa_3$, and the quantum
parts enhance the low degree nodes least. Moreover, the enhancement $\kappa_3$ seems to be quite independent of the internode weights. Thus our numerics show a qualitatively common quantum effect for a range of complex network types. Quantitative details vary between the network types, but appear robust within each type.
\begin{figure*}[!htb]
\begin{center}
\includegraphics[width=0.8\textwidth]{figs/article/qwalk/plot-diff-all}
\end{center}
\caption{Quantum effects.
The ratio of the quantum $(\tilde{P}_Q)_i$ and classical
$(P_C )_i$ probabilities plotted against degree $d_i$ (black
$+$) for every $i$, for the networks considered in \fir{fig:average}.
We also plot the best fitting curve (red dashed line) to this data of the form $(\tilde{P}_Q)_i / (P_C )_i \propto ( d_i )^{\kappa_3}$ whose exponent $\kappa_3$ is given in the plot.}
\label{fig:diff}
\end{figure*}
The models of networks examined in the previous subsection have very
specific topologies and therefore degree distributions,
and do not capture the topological properties of all real-world networks
(for details see chapter 9 of Ref.~\cite{estrada2011structure}).
We therefore now study the behavior of the quantumness and
gap for topologies present in a variety of real-world networks:
a \acf{kc} social network~\cite{zachary1977information}, the \acf{em} network
of the URV university~\cite{guimera2003self}, the
\acf{ce} network~\cite{duch2005community}, and a \acf{ca} network of
scientists~\cite{newman2006finding}.
Despite the variety of topologies, we again find that the quantumness is consistently small, see Tab.~\ref{tab:en-bound}. Therefore the classical and quantum distributions are very close, as shown in \fir{fig:average}. Additionally, the quantum correction exhibits the same generic behavior as observed for the artificial networks.
Interestingly, the quantumness of real-world networks is appreciably smaller than enforced by the bound of \eqr{eq:EnergyBound}, with $E/(\varepsilon \Delta)$ taking large values.
\section{Community detection for quantum walks}\label{sec:community}
\subsection{Community detection}\label{sec:comdet}
Community detection is the partitioning of a set of nodes~$\mathcal{N}$ into non-overlapping
and non-empty subsets
$\mathcal{A},~\mathcal{B},~\mathcal{C},\ldots~\subseteq~\mathcal{N}$,
called communities, whose union is $\mathcal{N}$.
There is usually no agreed upon optimal partitioning of nodes into communities.
Instead there is an array of approaches that differ in both the definition of
optimality and the method used to achieve, exactly or approximately, this
optimality (see~\cite{Fortunato2010} for a recent review). In classical
networks optimality is, for example, defined
statistically~\cite{lancichinetti2011oslom}, e.g.\ in terms of
connectivity~\cite{girvan2002} or
communicability~\cite{estrada2011community,estrada2009community}, or increasingly, and sometimes
relatedly~\cite{meila2001random}, in terms of stochastic random
walks~\cite{Delvenne2010,rosvall2011infomap,pons2005computing}. Our particular focus is on the
latter, since the concept of transport (e.g.~a quantum walk) is central to
nearly all studies conducted in quantum physics. As for achieving optimality,
methods include direct maximization
via simulated annealing~\cite{guimera2004modularity,guimera2005functional} or,
usually faster, iterative division or agglomeration of
communities~\cite{hastie2001elements}.
We focus on the latter since it provides a simple and effective
way of revealing a full hierarchical structure of the network, requiring only
the definition of the closeness of a pair of communities.
Formally, hierarchical community structure detection methods are
based on a (symmetric) closeness function
$c(\mathcal{A},\mathcal{B}) = c(\mathcal{B},\mathcal{A})$
of two communities $\mathcal{A} \neq \mathcal{B}$.
In the agglomerative approach,
at the lowest level of the hierarchy, the nodes are
each assigned their own communities. An iterative procedure then
follows, in each step of which the closest pair of communities
(maximum closeness $c$) are merged. This procedure ends at the highest level, where
all nodes are in the same community.
To avoid instabilities in this agglomerative procedure, the closeness
function is required to be non-increasing under the merging of two communities,
$c(\mathcal{A} \cup \mathcal{B}, \mathcal{C}) \le \max(c(\mathcal{A},\mathcal{C}), c(\mathcal{B},\mathcal{C}))$,
which allows the representation of the community
structure as a linear hierarchy indexed by the merging closeness.
The resulting structure is often represented as a dendrogram (as shown in
\fir{fig:dendro}).%
\footnote{%
In general it may happen that more than one pair of communities are at the
maximum closeness. In this case the decision on which pair merges first can
influence the structure of the dendrogram,
see~\cite{jain1988algorithms,carlsson2010characterization}.
In~\cite{carlsson2010characterization} a permutation invariant formulation of
the agglomerative algorithm is given, where more than two clusters can be merged
at once. In our work we use this formulation unless stated otherwise.
}
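The agglomerative procedure above can be sketched in a few lines. This is the simple pairwise-merge variant (the permutation-invariant formulation mentioned in the footnote would merge all pairs at maximal closeness simultaneously); the \texttt{closeness} argument is any symmetric function of two node sets:

```python
def agglomerate(nodes, closeness):
    """Greedy agglomerative clustering: start from singleton communities,
    repeatedly merge the closest pair (maximum closeness), and record
    every hierarchy level together with its merging closeness."""
    communities = [frozenset([v]) for v in nodes]
    levels = [list(communities)]
    merge_closeness = []
    while len(communities) > 1:
        c, a, b = max(((closeness(x, y), x, y)
                       for i, x in enumerate(communities)
                       for y in communities[i + 1:]),
                      key=lambda t: t[0])
        communities = [x for x in communities if x not in (a, b)] + [a | b]
        levels.append(list(communities))
        merge_closeness.append(c)
    return levels, merge_closeness

# Example: four nodes with two clear blocks {0,1} and {2,3}.
def sim(i, j):
    return 1.0 if {i, j} in ({0, 1}, {2, 3}) else 0.1

def avg_closeness(A, B):
    return sum(sim(i, j) for i in A for j in B) / (len(A) * len(B))

levels, merge_closeness = agglomerate([0, 1, 2, 3], avg_closeness)
```

The averaged closeness used in the example is non-increasing under merging, so the recorded merge closenesses decrease monotonically along the hierarchy.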
This leaves open the question of which level of the hierarchy yields
the optimal community partitioning. If a
partitioning is desired for simulation, for example, then there may be
a desired maximum size or minimum number of communities. However, without such
constraints, one can still ask what is the best choice of communities
within those given by the hierarchical structure.
A measure often used to quantify the quality of a choice of community
partitioning for this purpose is
modularity~\cite{newman2004pre,Newman2004,Clauset2004}, denoted $Q$.
It was originally
introduced in the classical network setting, in which a network is
specified by a (symmetric) adjacency matrix of (non-negative) elements
$A_{ij} = A_{ji} \ge 0$ ($A_{ii} = 0$), each off-diagonal element giving the weight of
connections between nodes $i$ and $j\neq i$ \footnote{As will become apparent, we need
only consider undirected networks without self-loops.}.
The modularity attempts to
measure the fraction of weights connecting elements in the same
community, relative to what might be expected. Specifically, one takes the fraction of intra-community
weights and subtracts the average fraction obtained when the
start and end points of the connections are reassigned randomly,
subject to the constraint that the total connectivity $k_i = \sum_j
A_{ij}$ of each node is fixed. The modularity is then given
by
\begin{align}
\label{eq:modularity}
Q = \frac{1}{2m} \textrm{tr} \left \{ C^{\mathrm{T}} B C \right \} ,
\end{align}
where $m = \mbox{$\textstyle \frac{1}{2}$} \sum_i k_i$ is the total weight of connections, $B$
is the modularity matrix with elements $B_{ij} = A_{ij}-k_i k_j / 2 m$, and
$C$ is the community matrix, with elements $C_{i \mathcal{A}}$ equal to unity if $i \in
\mathcal{A}$, otherwise zero.
The modularity then takes values strictly less than one, possibly negative, and exactly zero in the case that the nodes form a single community.
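Eq.~\eqref{eq:modularity} can be evaluated directly; the following is a minimal sketch (assuming a symmetric, non-negative $A$ with zero diagonal). For two disconnected triangles the natural partitioning gives $Q = 1/2$, while a single community gives exactly zero, as stated above:

```python
import numpy as np

def modularity(A, labels):
    """Q = tr(C^T B C) / (2m), with B_ij = A_ij - k_i k_j / (2m) the
    modularity matrix and C the community indicator matrix."""
    k = A.sum(axis=1)            # node connectivities k_i
    two_m = k.sum()              # 2m, twice the total weight
    B = A - np.outer(k, k) / two_m
    comms = sorted(set(labels))
    C = np.array([[1.0 if l == c else 0.0 for c in comms] for l in labels])
    return np.trace(C.T @ B @ C) / two_m

# Example: two disconnected triangles.
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
    A[i, j] = A[j, i] = 1.0
```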
As we will see, there is no natural adjacency matrix associated with
the quantum network and so for the purposes of modularity we use
$A_{ij} = c (i , j)$ for $i \neq j$. The modularity $Q$ thus measures
the fraction of the closeness that is intra-community, relative to
what would occur if the inter-node closeness $c (i , j)$ were randomly
mixed while fixing the total closeness $k_i = \sum_{j\neq i} c (i ,
j)$ of each node to all others. Thus both the community structure and
optimum partitioning depend solely on the choice of the closeness
function.
Finally, once a community partitioning is obtained it is often desired
to compare it against another. Here we use the common
normalized mutual information
(NMI)~\cite{ana2003normalized,strehl2003cluster,danon2005}
as a measure of the mutual dependence of two community partitionings.
Each partitioning
$X = \{\mathcal{A}, \mathcal{B},\dots \}$
is represented by a probability distribution
${P_X = \{|\mathcal{A}|/|\mathcal{N}|\}_{\mathcal{A} \in X}}$, where
$|\mathcal{A}| = \sum_i C_{i \mathcal{A}}$ is the
number of nodes in community~$\mathcal{A}$. The similarity of two community
partitionings $X$ and $X'$ depends on the joint
distribution
$P_{X X'} = \{ |\mathcal{A} \cap \mathcal{A}'| / |\mathcal{N}| \}_{\mathcal{A} \in X, \mathcal{A}' \in X'}$,
where
$|\mathcal{A} \cap \mathcal{A}'| = \sum_i C_{i \mathcal{A}} C_{i \mathcal{A}'}$
is the number of nodes that belong to both communities $\mathcal{A}$ and $\mathcal{A}'$. Specifically, NMI is defined as
\begin{equation}\label{eq:nmi}
\operatorname{NMI}(X,X') =
\frac{2\,I(X,X')}{H(X)+H(X')}.
\end{equation}
Here $H(X)$ is the Shannon entropy of $P_X$, and the
mutual information $I(X,X')=
H(X)+H(X')-H(X,X')$ depends on the
entropy $H(X,X')$ of the joint distribution
$P_{X X'}$.
The mutual information is the average of the amount of information about
the community of a node in $X$ obtained by learning its
community in $X'$.
The normalization
ensures that the NMI has a minimum value of zero and takes its maximum value of unity for two identical
community partitionings.
The symmetry of the definition of NMI follows from that of mutual
information and Eq.~\eqref{eq:nmi}.
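A minimal sketch of the NMI computation, representing each partitioning as a list of per-node community labels (note the normalization is undefined when both partitionings are trivial, i.e.\ $H(X) = H(X') = 0$):

```python
from collections import Counter
from math import log

def nmi(x, y):
    """NMI(X, X') = 2 I(X, X') / (H(X) + H(X')) for two partitionings
    given as per-node community labels."""
    n = len(x)
    def H(counter):
        # Shannon entropy of the empirical distribution of labels
        return -sum((c / n) * log(c / n) for c in counter.values())
    hx, hy = H(Counter(x)), H(Counter(y))
    hxy = H(Counter(zip(x, y)))          # joint distribution P_{XX'}
    return 2 * (hx + hy - hxy) / (hx + hy)
```

Relabeling the communities leaves the NMI unchanged, while statistically independent partitionings give zero.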
\subsection{Quantum community detection}\label{sec:quantumcom}
The task of community detection has a particular interpretation in a quantum
setting. The state of a quantum system is described in terms of a
Hilbert space $\mathcal{H}$, spanned by a complete orthonormal set of basis
states $\{ \ket{i} \}_{i \in \mathcal{N}}$. Each basis state~$\ket{i}$ can be associated
with a node~$i$ in a network and often, as in the case of single
exciton transport, there is a clear choice of basis states that makes
this abstraction to a spatially distributed network natural.
The partitioning of nodes into communities then corresponds to the
partitioning of the Hilbert space $\mathcal{H} = \bigoplus_{\mathcal{A} \in X} \mathcal{V}_\mathcal{A} $
into mutually orthogonal subspaces $\mathcal{V}_\mathcal{A} = \Span_{i \in \mathcal{A}} \{\ket{i} \}$.
As with classical networks, one can then imagine an assortment of
optimality objectives for community detection, for example, to identify a
partitioning into subspaces in which inter-subspace transport is
small, or in which the state of the system remains relatively unchanged within each subspace.
In the next two subsections we introduce two classes of community closeness
measures that correspond to these objectives.
A more detailed derivation can be found in Sec.~\ref{sec:comdet-appendix}.
Our closeness measures
take into account the full unitary evolution of an isolated
system
governed by its Hamiltonian~$H$. However, rather than being applicable
only to isolated systems, this type of community partitioning could be
used, among other things, to guide the simulation or analysis of a more complete
model in the presence of an environment, where this more complete
model may be much more difficult to describe.
\subsubsection{Inter-community transport}
\label{sec:mixing}
Several approaches to detecting communities in classical networks are
based on the flow of probability through the network during a
classical random walk~\cite{meila2001random,eriksen2003modularity,pons2005computing,weinan2008optimal,Delvenne2010,rosvall2011infomap}.
In particular, many of these methods seek communities for which the
inter-community probability flow or transport is small.
A natural approach to quantum community detection is thus to
consider the flow of probability during a continuous-time quantum
walk, and to investigate the \emph{change} in the probability of observing
the walker within each community:
\begin{align}
T_{X}(t) &= \sum_{\mathcal{A} \in X} T_\mathcal{A} (t)
= \sum_{\mathcal{A} \in X} \frac{1}{2}\left| p_\mathcal{A} \left \{ \rho (t) \right \} - p_\mathcal{A} \left \{ \rho (0) \right \} \right|,
\end{align}
where
$
\rho (t) = \mathrm{e}^{-\mathrm{i} H t} \rho (0) \mathrm{e}^{\mathrm{i} H t}
$
is the state of the walker, at time~$t$, during the walk generated by $H$, and
\begin{align}
p_\mathcal{A} \left \{ \rho \right \} = \textrm{tr} \left \{ \Pi_\mathcal{A} \rho \right \}
\end{align}
where $\Pi_\mathcal{A}=\sum_{i\in\mathcal{A}} \ket{i}\bra{i}$ is the projector on the
$\mathcal{A}$ subspace,
is the probability of a walker in state~$\rho$ being found in
community~$\mathcal{A}$ upon a von Neumann-type measurement.\footnote{%
Equivalently, $p_\mathcal{A} \left \{ \rho \right \}$ is
the norm of the projection (performed by projector $\Pi_\mathcal{A}$) of the
state $\rho$ onto the community subspace $\mathcal{V}_\mathcal{A}$.}
The initial state~$\rho (0)$ can be chosen freely.
The change in inter-community transport is clearest when the process begins either entirely inside or entirely outside each community. Because of this, we choose the walker to be initially localized
at a single node $\rho (0) = \proj{i}$ and then, for symmetry, sum
$T_X (t)$ over all $i \in \mathcal{N}$. This results in the particularly
simple expression
\begin{align}
T_\mathcal{A} (t) = \sum_{i \in \mathcal{A}, j \notin \mathcal{A}} \frac{R_{ij}(t)+R_{ji}(t)}{2}
= \sum_{i \in \mathcal{A}, j \notin \mathcal{A}} \sym{R}_{ij}(t),
\end{align}
where $R(t)$ is the doubly stochastic transfer matrix whose elements
$R_{ij}(t) = |\brackets{i}{\mathrm{e}^{-\mathrm{i} H t}}{j}|^2$ give
the probability of transport from node~$j$ to node~$i$,
and $\sym{R}(t)$ its symmetrization.
This is reminiscent of classical community detection methods,
e.g.~\cite{pons2005computing}, using closeness measures based on the
transfer matrix of a classical random walk.
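For a given Hermitian $H$ these quantities are easy to evaluate numerically; a sketch via the eigendecomposition (avoiding an explicit matrix exponential):

```python
import numpy as np

def transfer_matrix(H, t):
    """R_ij(t) = |<i| exp(-iHt) |j>|^2, the doubly stochastic transfer
    matrix, computed from the eigendecomposition of Hermitian H."""
    w, V = np.linalg.eigh(H)
    U = (V * np.exp(-1j * w * t)) @ V.conj().T   # U = V diag(e^{-iwt}) V^dag
    return np.abs(U) ** 2

def inter_community_transport(H, t, community):
    """T_A(t) = sum_{i in A, j not in A} sym(R)_ij(t)."""
    R = transfer_matrix(H, t)
    S = (R + R.T) / 2                            # symmetrization
    A = set(community)
    out = [j for j in range(H.shape[0]) if j not in A]
    return sum(S[i, j] for i in A for j in out)
```

For a two-node hopping Hamiltonian, $R(t)$ oscillates between the identity and the swap matrix, and the inter-community transport out of a single node peaks at $t = \pi/2$.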
We can thus build a community structure that seeks to reduce $T_X (t)$
at each hierarchical level by using the closeness function
\begin{align}
\label{eq:closeness_transport}
\notag
c^T_t(\mathcal{A} , \mathcal{B}) &= \frac{T_\mathcal{A} (t) + T_\mathcal{B} (t) -
T_{\mathcal{A}\cup\mathcal{B}}(t)}{|\mathcal{A}||\mathcal{B}|}\\
&=
\frac{2}{|\mathcal{A}||\mathcal{B}|} \sum_{i \in \mathcal{A}, j \in \mathcal{B}} \sym{R}_{ij}(t)
\end{align}
where the numerator is the decrement in $T_X (t)$ caused by merging
communities $\mathcal{A}$ and $\mathcal{B}$.
The normalizing factor in Eq.~\eqref{eq:closeness_transport} avoids the
effects due to the uninteresting scaling of the numerator with the community
size.
Since a quantum walk does not converge to a stationary state, a
time-average of the closeness defined in Eq.~\eqref{eq:closeness_transport}
is needed to obtain a quantity that eventually converges with
increasing time.
Given the linearity of the formulation, this corresponds to replacing
the transport probability~$R_{ij}(t)$
in \eqr{eq:closeness_transport}
with its time-average
\begin{align}
\label{eq:avtransfer}
\tave{R}_{ij}(t) = \frac{1}{t} \int_0^t R_{ij}(t') \:\mathrm{d} t'.
\end{align}
It follows that, as with similar classical community detection methods~\cite{Delvenne2010},
our method is in fact a class of approaches,
each corresponding to a different time $t$.
The appropriate value of $t$ will depend on the specific
application, for example, a natural
time-scale might be the decoherence time.
Not wishing to lose generality by focusing on a particular system,
we consider here the short and long time limits.
In the short time limit $t \to 0$, relevant if $t H_{ij} \ll 1$ for
$i \neq j$, the averaged transfer matrix $\tave{R}_{ij} (t)$ is simply
proportional to $|H_{ij}|^2$.
Note that in the short
time limit there is no interference between
different paths from $\ket{i}$ to $\ket{j}$, and therefore for
short times $c^T_t (i , j)$ does not depend on the on-site energies $H_{ii}$ or the phases of the
hopping elements $H_{i j}$.
This is because, to leading order in time, interference
does not play a role in the transport out of a single node.
For this reason we can refer to this approach as ``semi-classical''.
In the long time limit $t \to \infty$, relevant if $t$ is much larger
than the inverse of the smallest gap between distinct
eigenvalues of $H$, the probabilities are elements of the
mixing matrix~\cite{godsil2013},
\begin{align}
\lim_{t\to \infty} \tave{R}_{ij}(t) = \sum_k | \bracket{i}{\Lambda_k}{j}|^2 ,
\label{eq:mixing-matrix}
\end{align}
where $\Lambda_k$ is the projector onto the $k$-th eigenspace of $H$. This
thus provides a simple spectral method for building the community
structure.
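A sketch of this spectral construction (assuming a Hermitian $H$ in the node basis; eigenvalues closer than a numerical tolerance are grouped into one eigenspace, which is essential when $H$ is degenerate):

```python
import numpy as np

def mixing_matrix(H, tol=1e-9):
    """lim_{t->inf} Rbar_ij(t) = sum_k |<i|Lambda_k|j>|^2, where Lambda_k
    projects onto the k-th distinct eigenspace of the Hermitian H."""
    w, V = np.linalg.eigh(H)     # eigenvalues in ascending order
    n = len(w)
    M = np.zeros((n, n))
    start = 0
    for end in range(1, n + 1):
        # close an eigenspace when the next eigenvalue differs
        if end == n or w[end] - w[end - 1] > tol:
            P = V[:, start:end] @ V[:, start:end].conj().T   # Lambda_k
            M += np.abs(P) ** 2
            start = end
    return M
```

For the two-node hopping Hamiltonian the mixing matrix is uniform, while for $H = 0$ (fully degenerate) it is the identity, since the walker never leaves its initial node.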
Note that, unlike in a classical infinitesimal stochastic walk where
each $\tave{R}_{ij} (t)$
eventually becomes proportional to the connectivity $k_j$ of the final
node $j$, the long time limit in the quantum setting is non-trivial and,
as we will see, $\tave{R}_{ij}(t)$ retains a strong impression of the
community structure for large~$t$.%
\footnote{Note that, apart from small or large
times $t$, there is no guarantee of symmetry $R_{ij}(t) = R_{ji}(t)$ in the
transfer matrix for a given
Hamiltonian. See~\cite{zimboras2013quantum}. Hamiltonians featuring
this symmetry, e.g., those with real $H_{ij}$, are called
time-symmetric.}
\subsubsection{Intra-community fidelity}
\label{sec:coher}
Classical walks, and the community detection methods based on them, are fully
described by the evolution of the probabilities of the walker occupying each
node. The previous quantum community detection approach is based on the
evolution of the same probabilities but for a quantum walker.
However, quantum walks are richer than this: they are not fully
described by the evolution of the node-occupation probabilities. We
therefore introduce another community detection method that captures
the full quantum dynamics within each community subspace.
Instead of reducing merely the change in probability within the
community subspaces, we reduce the change in the projection of the
quantum state in the community subspaces. This change is measured
using (squared) fidelity, a common measure of distance between two
quantum states.
For a walk beginning in state $\rho (0)$ we therefore focus on the quantity
\begin{align}
F_X (t) &= \sum_{\mathcal{A} \in X} F_\mathcal{A} (t)
= \sum_{\mathcal{A} \in X} F^2 \left \{ \Pi_\mathcal{A} \rho(t) \Pi_\mathcal{A}, \Pi_\mathcal{A} \rho(0) \Pi_\mathcal{A} \right \},
\end{align}
where $\Pi_\mathcal{A} \rho \Pi_\mathcal{A}$ is the projection of the state $\rho$ onto the subspace $\mathcal{V}_\mathcal{A}$ and
\begin{align}
F \left\{\rho ,\sigma \right\} = \textrm{tr} \left\{ \sqrt{\sqrt{\rho} \sigma \sqrt{\rho}} \right\} \in [0, \sqrt{\textrm{tr} \{\rho\} \textrm{tr} \{\sigma\}}]
\end{align}
is the fidelity, which is symmetric between $\rho$ and $\sigma$.
We build a community structure that seeks to maximize
the increase in $F_X (t)$ at
each hierarchical level by using the closeness measure
\begin{align}
\label{eq:fidelitydist}
c^F_t (\mathcal{A} , \mathcal{B}) = \frac{F_{\mathcal{A} \cup \mathcal{B}}(t) -F_\mathcal{A}(t) -F_\mathcal{B}(t)
}{|\mathcal{A}| |\mathcal{B}|} \in [-1 ,1],
\end{align}
i.e., the change in $F_X (t)$ caused by merging communities $\mathcal{A}$ and~$\mathcal{B}$.
Our choice for the denominator prevents uninteresting size scaling,
as in Eq.~\eqref{eq:closeness_transport}.
The initial state~$\rho(0)$ can be chosen freely. Here we choose the
pure uniform superposition state $\rho(0)=\ket{\psi_0}\bra{\psi_0}$ satisfying
$\brakets{i}{\psi_0} = 1/\sqrt{n}$ for all~$i$.
This state was used to
investigate the effects of the connectivity on the dynamics of a quantum walker
in~Ref.~\cite{Faccin2013}.
As for our other community detection approach, we consider the time-average of
Eq.~\eqref{eq:fidelitydist} which yields
\begin{align}
c_t^F (\mathcal{A},\mathcal{B}) =
\frac{2}{|\mathcal{A}||\mathcal{B}|} \sum_{i\in\mathcal{A}, j\in\mathcal{B}}
\real(\tave{\rho}_{ij}(t)\rho_{ji}(0)),
\end{align}
where $\tave{\rho}_{ij}(t) = \frac 1t \int_0^t \mathrm{d} t' \rho_{ij}(t')$.
In the long time limit, the time-average of the density matrix takes a particularly simple
expression:
\begin{align}
\lim_{t \to \infty} \tave{\rho}_{ij}(t) = \sum_k \Lambda_k \rho_{ij}(0) \Lambda_k,
\end{align}
where $\Lambda_k$ is as in the previous Sec.~\ref{sec:mixing}.
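The long-time fidelity closeness can then be evaluated directly from the eigenspaces of $H$; a minimal sketch (assuming a Hermitian $H$ and the uniform initial state defined above):

```python
import numpy as np

def longtime_fidelity_closeness(H, A_nodes, B_nodes, tol=1e-9):
    """c^F(A,B) = (2/|A||B|) sum_{i in A, j in B} Re(rhobar_ij rho_ji(0)),
    with rho(0) the uniform superposition and
    rhobar = sum_k Lambda_k rho(0) Lambda_k the time-averaged state."""
    n = H.shape[0]
    rho0 = np.full((n, n), 1.0 / n, dtype=complex)   # uniform pure state
    w, V = np.linalg.eigh(H)
    rho_bar = np.zeros((n, n), dtype=complex)
    start = 0
    for end in range(1, n + 1):
        if end == n or w[end] - w[end - 1] > tol:
            P = V[:, start:end] @ V[:, start:end].conj().T   # Lambda_k
            rho_bar += P @ rho0 @ P
            start = end
    c = sum((rho_bar[i, j] * rho0[j, i]).real
            for i in A_nodes for j in B_nodes)
    return 2 * c / (len(A_nodes) * len(B_nodes))
```

Note that two nodes with no connecting Hamiltonian element can still have nonzero fidelity closeness: for $H = 0$ the uniform state is stationary and every pair of nodes retains full coherence, consistent with the single-community behavior discussed in Sec.~\ref{sec:quantumnet} below.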
The definition of community closeness given in
Eq.~\eqref{eq:fidelitydist} can exhibit negative values.
In this case the usual definition of modularity fails~\cite{traag2009}
and one must extend it.
In this work we use the definition of modularity proposed
in~\cite{traag2009}, which coincides with Eq.~(\ref{eq:modularity}) in
the case of non-negative closeness.
The extended definition treats negative and positive
links separately, and tries to minimize intra-community negative
links while maximizing intra-community positive links.
\subsection{Performance analysis}\label{sec:performance}
To analyze the performance of our quantum community detection methods
we apply them to three different networks.
The first one (Sec.~\ref{sec:quantumnet}) is a simple quantum network,
which we use to highlight how some intuitive notions in
classical community detection do not necessarily transfer over
to quantum systems.
The second example (Sec.~\ref{sec:artificial}) is an artificial quantum
network designed to exhibit a clear classical community structure,
which we show is different from the quantum community structure obtained and
fails to capture significant changes in this structure induced by quantum
mechanical phases on the hopping elements of the Hamiltonian.
The final network (Sec.~\ref{sec:lhcii}) is a real world quantum
biological network, the LHCII light harvesting complex, for which we find a
consistent quantum community structure differing from the
community structure cited in the literature.
These findings confirm that a quantum mechanical treatment of community
detection is necessary, as classical and semi-classical methods
cannot reproduce the structures that appropriately capture quantum effects.
Below we will compare quantum community structures
against more classical community structures, such as the one given
by the semi-classical method based on the short time transport and,
in the case of the example of Sec.~\ref{sec:artificial},
the classical network from which the quantum network is constructed.
Additionally we use a traditional classical
community detection algorithm, OSLOM~\cite{lancichinetti2011oslom}, an algorithm based on the maximization
of the statistical significance of the proposed partitioning,
whose input adjacency matrix~$A$ must be real.
For this purpose we use the absolute
values of the Hamiltonian elements in the site basis: $A_{ij} = |H_{ij}|$.
\subsubsection{Simple quantum network}
\label{sec:quantumnet}
\begin{figure}
\centering
\includegraphics[width=0.3\columnwidth]{figs/article/communities/toy-graph-6nodes}\\ \medskip
Disconnected components:\\
\vspace{-10pt}
\subfloat[Transport]{\label{fig:phase-null-transport}
\includegraphics[height=0.15\columnwidth]{figs/article/communities/toyphases-null-transport-unpert}}
\hfill
\subfloat[Fidelity]{\label{fig:phase-null-fidelity}
\includegraphics[height=0.15\columnwidth]{figs/article/communities/toyphases-null-fidelity-unpert}}
\hfill
\subfloat[Fidelity (Perturbed)]{\label{fig:phase-null-fidelity-pert}
\includegraphics[height=0.15\columnwidth]{figs/article/communities/toyphases-null-fidelity-pert}
\includegraphics[height=0.15\columnwidth]{figs/article/communities/colorbox}}\\ \medskip
Phases' effect on transport:\\
\vspace{-10pt}
\subfloat[Coherent phases]{\label{fig:phase-coher-transport}
\includegraphics[height=0.15\columnwidth]{figs/article/communities/toyphases-coher-transport-unpert}}
\hfill
\subfloat[Random phases]{\label{fig:phase-rand-transport}
\includegraphics[height=0.15\columnwidth]{figs/article/communities/toyphases-rand-transport-unpert}}
\hfill
\subfloat[Cancelling phases]{\label{fig:phase-canc-transport}
\includegraphics[height=0.15\columnwidth]{figs/article/communities/toyphases-canc-transport-unpert}
\includegraphics[height=0.15\columnwidth]{figs/article/communities/colorbox}}\\ \medskip
Phases' effect on fidelity:\\
\vspace{-10pt}
\subfloat[Coherent phases]{\label{fig:phase-coher-fidelity}
\includegraphics[height=0.15\columnwidth]{figs/article/communities/toyphases-coher-fidelity-pert}}
\hfill
\subfloat[Random phases]{\label{fig:phase-rand-fidelity}
\includegraphics[height=0.15\columnwidth]{figs/article/communities/toyphases-rand-fidelity-pert}}
\hfill
\subfloat[Cancelling phases]{\label{fig:phase-canc-fidelity}
\includegraphics[height=0.15\columnwidth]{figs/article/communities/toyphases-canc-fidelity-pert}
\includegraphics[height=0.15\columnwidth]{figs/article/communities/colorbox}}\\
\caption{
Simple quantum network --- a graph with six nodes.
Each solid line represents a transition amplitude $H_{ij}=1$.
For the dashed and dotted lines the transition amplitude is either zero (a, b
and c) or of the same absolute value $|H_{ij}|=1$ but with phases that are
(d and g) coherent (all ones),
(e and h) random, $\exp(i \varphi_k)$ for each link, or
(f and i) canceling (one for dashed red and minus one for dotted green).
The plots show the inter-node closeness for both the transport- and fidelity-based methods
(only the long-time averages are considered; in plots (g), (h) and (i) we
used a perturbed Hamiltonian to lift the eigenvalue degeneracy, which
explains the asymmetric closeness in (i)).
}
\label{fig:phase-sensitive-graph}
\end{figure}
Here we use a simple six-site network model to study ways in which quantum
effects lead to non-intuitive results, and how methods based on different quantum
properties can, accordingly, lead to very different choices of communities.
We begin with two disconnected cliques of three nodes each,
where all Hamiltonian matrix elements within the groups are identical and real.
\fir{fig:phase-sensitive-graph} illustrates this highly
symmetric topology.
The community detection method based on quantum transport identifies the two
fully-connected groups as two separate communities
(\fir{fig:phase-null-transport}), as is expected.
Contrastingly, the methods based on fidelity predict
counter-intuitively only a single community; two disconnected nodes can retain
coherence and, by this measure, be considered part of the same community
(\fir{fig:phase-null-fidelity}).
This symmetry captured by the fidelity-based community structure
breaks down if we introduce random perturbations into the Hamiltonian.
Specifically, the fidelity-based closeness~$c_t^F$
is sensitive to perturbations of the order~$t^{-1}$, above
which the community structure is divided into the two groups of three
(\fir{fig:phase-null-fidelity-pert})
expected from transport considerations. Thus we may tune the resolution of
this community structure method to asymmetric perturbations by varying~$t$.
Due to quantum interference we expect that the
Hamiltonian phases should significantly affect the quantum community partitioning.
The same toy model can be used to demonstrate this effect.
For example, consider adding four elements to the Hamiltonian corresponding to
hopping from nodes 2 and 3 to 4 and 5 (see diagram in
\fir{fig:phase-sensitive-graph}).
If these hopping elements are all identical to the others, the
inter-node transport is largest between the two nodes, 1 and 6, that are
not directly connected (and thus their inter-node closeness is the
largest). However, when the phases of the four additional elements are
randomized, this transport is decreased.
Moreover, when the phases are canceling, the
transport between nodes 1 and 6 is reduced to zero, and the closeness between
them is minimized
(see Figs.~\ref{fig:phase-coher-transport}--\ref{fig:phase-canc-transport}).
The fidelity method has an equally strong dependence on the phases (see
Figs.~\ref{fig:phase-coher-fidelity}--\ref{fig:phase-canc-fidelity}), with
variations in the phases breaking up the network from a large central community
(with nodes 1 and 6 alone)
into the two previously identified communities.
\subsubsection{Artificial quantum network}
\label{sec:artificial}
\begin{figure*}[t!]
\centering
\begin{minipage}{0.9\textwidth}
\subfloat[Original data]{\label{fig:art-theo}
\includegraphics[angle=0,width=0.2\textwidth]{figs/article/communities/graph_theo_none}}
\hfill
\subfloat[Transport; $t\to 0$]{\label{fig:art-transport_short}
\includegraphics[angle=0,width=0.2\textwidth]{figs/article/communities/graph_transport-zero}}
\hfill
\subfloat[Transport; $t\to\infty$]{\label{fig:art-transport_long}
\includegraphics[angle=0,width=0.2\textwidth]{{figs/article/communities/graph_transport-infty}}}
\hfill
\subfloat[Fidelity; $t\to\infty$]{\label{fig:art-fidelity}
\includegraphics[angle=0,width=0.2\textwidth]{{figs/article/communities/graph_fidelity-infty}}}
\vspace{10pt}
\\
\subfloat[OSLOM]{\label{fig:art-oslom}
\includegraphics[angle=0,width=0.2\textwidth]{{figs/article/communities/graph_oslom-none}}}
\hfill
\subfloat[Phases dependence (original partitioning)]{
\label{fig:art-phases-nophases}
\includegraphics[width=0.3\textwidth]{figs/article/communities/phases-nophases}
}
\hfill
\subfloat[Phases dependence (classical model)]{
\label{fig:art-phases-classical}
\includegraphics[width=0.3\textwidth]{figs/article/communities/phases-classical}
}
\end{minipage}
\caption{Artificial community structure.
(a) Classical community structure used in creating the network.
(b--e) Community partitionings found using the
three quantum methods and OSLOM.
(f,g)
Behavior of the approaches as the phases of the Hamiltonian elements are randomly
sampled from a Gaussian distribution of width~$\sigma$. The mean
NMI, compared with zero phase partitioning (f) and the classical
model data (g),
over 200 samplings of the phase distribution is plotted. The
standard deviation is indicated by the shading.
Both OSLOM and $c^{T}_0$ are insensitive to phases and thus do not
respond to the changes in the Hamiltonian.
}
\label{fig:artificial}
\end{figure*}
The Hamiltonian of our second quantum network is constructed from the
adjacency matrix $A$ of a classical unweighted, undirected network
exhibiting a clear classical partitioning,
using the relation $H_{ij} = A_{ij}$.
We construct~$A$ using the algorithm proposed by Lancichinetti {\em et
al.}~in~\cite{lancichinetti2008}, which provides a method
to construct a network with heterogeneous distributions of both the node
degrees and the community sizes, and a controllable inter-community
connectivity. We start with a rather small network of 60 nodes with average
intra-community connectivity $\langle k \rangle=6$, and only 5\% of the
edges rewired to join communities.
The network is depicted in \fir{fig:art-theo}.
As expected, the known classical community structure is indeed
recovered by the semi-classical short-time-transport algorithm%
\footnote{In the case of short-time transport, a small
perturbation was also added to the closeness
function in order to break the symmetries of the system.}
and the OSLOM
algorithm (see Figs.~\ref{fig:art-transport_short}--\ref{fig:art-oslom}),
achieving $\text{NMI}=0.953$ and $\text{NMI}=0.975$ with the known
structure, respectively.
The quantum methods based on the long-time average of both transport and fidelity
reproduce the main features of the original community structure
while unveiling new characteristics. The transport-based long-time average
method ($\text{NMI}=0.82$ relative to the classical partitioning)
exhibits disconnected communities, i.e.\ the
corresponding subgraph is disconnected. This behavior can be explained
by interference-enhanced quantum walker dynamics, as exhibited by the toy
model in the previous subsection.
The
long-time average fidelity method ($\text{NMI}=0.85$) returns the four main
classical communities plus a number of single-node communities.
Both methods demonstrate that the quantum and classical community
structures are unsurprisingly different, with the quantum community
structure clearly dependent on the quantum property being
optimized, more so than the different classical partitionings.
\subsubsection*{Adjusted phases}
As shown in Sec.~\ref{sec:quantumnet},
due to interference the dynamics of the quantum system can change
drastically if the phases of the Hamiltonian elements are non-zero. This is
known as a chiral quantum walk~\cite{zimboras2013quantum}. Such walks exhibit, for example, time-reversal
symmetry breaking of transport between sites~\cite{zimboras2013quantum} and
it has been proposed that nature might actually make use of phase-controlled
interference in transport processes~\cite{harel2012quantum}.
OSLOM, our semi-classical short-time transport algorithm and other
classical community partitioning methods are insensitive to changes in the
hopping phases. Thus, by establishing that the quantum community structure
is sensitive to such changes in phase, as expected from above, we show that
classical methods are inadequate for finding quantum community structure.
To analyze this effect we take the previous network
and adjust the phases of the Hamiltonian terms while preserving their
absolute values. Specifically, the phases are sampled randomly from a
normal distribution with mean zero and standard deviation $\sigma$.
We find that, typically, as the standard deviation $\sigma$ increases,
the NMI between the quantum communities and the corresponding
zero-phase communities decreases, as shown in
\fir{fig:art-phases-nophases}.
A similar trend appears in the comparison with the classical
communities used to construct the system, shown in
\fir{fig:art-phases-classical}.
This sensitivity of the quantum community structures to phases, as revealed
by the NMI, confirms the expected inadequacy of classical methods.
The partitioning based on long-time average fidelity seems to be the most sensitive
to phases.
\subsubsection{Light-harvesting complex}
\label{sec:lhcii}
\begin{figure*}[t!]
\centering
\includegraphics[height=0.24\textwidth]{figs/article/communities/2bhw}\hfill
\includegraphics[height=0.24\textwidth]{figs/article/communities/lhcii-head}\hfill
\includegraphics[height=0.24\textwidth]{figs/article/communities/lhcii-theo}\\
\bigskip
\subfloat[Transport; $t\to 0$]{\label{fig:lhcii-ampli}
\includegraphics[height=0.24\textwidth]{figs/article/communities/lhcii-transport-prod-zero}}
\hfill
\subfloat[Transport; $t\to\infty$]{\label{fig:lhcii-mixing}
\includegraphics[height=0.24\textwidth]{figs/article/communities/lhcii-transport-prod-infty}}
\hfill
\subfloat[Fidelity; $t\to\infty$]{\label{fig:lhcii-purity}
\includegraphics[height=0.24\textwidth]{figs/article/communities/lhcii-fidelity-prod-infty}}
\caption{Light harvesting complex II (LHCII).
(top left) Monomeric subunit of the LHCII complex with pigments Chl-a (red) and
Chl-b (green) packed in the protein matrix (gray).
(top center) Schematic representation of Chl-a and Chl-b in the monomeric
subunit; here the labeling follows the usual nomenclature (b601, a602\dots).
(top right) Network representation of the pigments in circular layout, colors represent
the typical partitioning of the pigments into communities. The widths of the links
represent the strength of the couplings~$|H_{ij}|$ between nodes.
Here the labels maintain only the ordering (b601$\to$1, a602$\to$2,\dots).
(a,b,c) Quantum communities as found by the different quantum community detection methods.
Link width denotes the pairwise closeness of the nodes.
}
\label{fig:lhcii}
\end{figure*}
An increasing number of biological networks of non-trivial topology are
being described using quantum mechanics. For example, light harvesting
complexes have drawn significant attention in the quantum information
community.
One of these is the LHCII, a two-layer 14-chromophore complex embedded into
a protein matrix (see \fir{fig:lhcii} for a sketch) that
collects light energy and directs it toward the reaction center where it is
transformed into chemical energy. The system can be described as a network
of 14 sites connected with a non-trivial topology.
The single-exciton subspace is spanned by 14
basis states, each corresponding to a node in the network, and the
Hamiltonian in this basis was found in Ref.~\cite{fleming2009lhcii}.
In a widely adopted chromophore community
structure~\cite{novoderezhkin2005lhcii}, the sites are partitioned
\emph{by hand} into communities according to their physical closeness
(e.g. there are no communities spanning the two layers of the complex), and
the strength of Hamiltonian couplings
(see the top right of \fir{fig:lhcii}). Here, we
apply our \emph{ab initio} automated quantum community detection algorithms
to the same Hamiltonian.
All of our approaches predict a partitioning that differs from the one commonly
used in the literature. The method based on short-time transport returns
communities that do not connect the two layers.
This semi-classical approach relies only on
the coupling strengths of the system, without considering interference
effects, and yields the partitioning closest to the one from the literature
(which also relies only on the coupling strengths).
Meanwhile, the methods
based on the long-time transport and fidelity return very similar community
partitionings, in which node 6 on one layer and node 9 on the other are in
the same community.
These two long-time community partitionings are
identical, except that one of the communities predicted by the fidelity-based
method is split when using the transport-based method. It is therefore a
difference in modularity only.
The classical OSLOM algorithm fails spectacularly: it gives only one significant community
involving nodes 11 and 12, which exhibit the highest coupling strength. If
the assignment of a community to every node is forced, a single community
containing all the nodes is returned.
\section{Appendix}\label{sec:comdet-appendix}
\subsection{Definitions}
\subsubsection{Modularity}
Assume we have a directed, weighted graph (possibly with negative
weights and self-links), described by a real adjacency matrix~$A$.
The element~$A_{ij}$ is the weight of the link from node~$i$ to
node~$j$.
The in- and outdegrees of node~$i$ are defined as
\begin{equation}
k^{\text{in}}_i = \sum_j A_{ji}, \qquad
k^{\text{out}}_i = \sum_j A_{ij}.
\end{equation}
For an undirected graph $A$ is symmetric and the indegree is equal to the
outdegree.
The total connection weight is
$m = \sum_i k^{\text{in}}_i = \sum_i k^{\text{out}}_i = \sum_{ij} A_{ij}$.
The community matrix~$C$ defines the membership of the nodes in
different communities. The element $C_{i \mathcal{A}}$ is equal to unity if $i \in
\mathcal{A}$, otherwise zero.%
\footnote{For a fuzzy definition of membership we could
require $C_{i \mathcal{A}} \geq 0$ and $\sum_\mathcal{A} C_{i \mathcal{A}} = 1$ instead.}
The size of a community is given by $|\mathcal{A}| = \sum_i C_{i \mathcal{A}}$.
For strict (non-fuzzy) communities we can define $C$ using
an assignment vector~$\sigma$ (the entries being the communities of
each node): $C_{i \mathcal{A}} = \delta_{\mathcal{A}, \sigma_i}$. This yields
$(C C^T)_{ij} = \delta_{\sigma_i, \sigma_j}$.
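As a minimal numerical illustration (the assignment vector below is hypothetical, chosen only for this sketch), the strict community matrix and the identity $(C C^T)_{ij} = \delta_{\sigma_i, \sigma_j}$ can be checked directly:

```python
import numpy as np

# Hypothetical assignment vector: 5 nodes in 3 communities (labels 0..2).
sigma = np.array([0, 0, 1, 2, 1])
n, n_comm = len(sigma), sigma.max() + 1

# Strict community matrix: C[i, A] = 1 iff node i belongs to community A.
C = np.zeros((n, n_comm))
C[np.arange(n), sigma] = 1.0

# Community sizes |A| = sum_i C_{iA}.
assert list(C.sum(axis=0)) == [2.0, 2.0, 1.0]

# (C C^T)_{ij} = delta_{sigma_i, sigma_j}.
delta = (sigma[:, None] == sigma[None, :]).astype(float)
assert np.allclose(C @ C.T, delta)
```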
There are many different ways of partitioning a graph into communities.
A simple approach is to minimize the \emph{frustration} of the partition,
defined (up to an additive constant) as the sum of the absolute weights of positive links between communities and of negative links within
them:
\begin{equation}
F = -\sum_{ij} A_{ij} \delta_{\sigma_i, \sigma_j} = -\textrm{tr}\left(C^T A C\right).
\end{equation}
Frustration is inadequate as a goodness measure for partitioning nonnegative graphs
(in which a single community containing all the nodes minimizes it).
For nonnegative graphs we can instead maximize another measure called \emph{modularity}:
\begin{equation}
Q = \frac{1}{m}\sum_{\mathcal{A}, ij} (A_{ij}-p_{ij}) C_{i \mathcal{A}} C_{j \mathcal{A}}
= \frac{1}{m}\textrm{tr}\left(C^T (A-p) C\right),
\end{equation}
where $p_{ij}$ is the ``expected'' link weight from $i$ to~$j$, with~$\sum_{ij} p_{ij} = m$,
and is what separates modularity from plain frustration.
Different choices of the ``null model''~$p$ give different modularities.
Using degrees, we can define $p_{ij} = k^{\text{out}}_i k^{\text{in}}_j / m$.
For graphs with both positive and negative weights the usual definitions of degrees do not make much sense,
since usually negative and positive links should not simply cancel each other out.
Also, plain modularity will fail e.g.\ when~$m=0$.
This can be solved by treating positive and negative links separately~\cite{traag2009}.
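These definitions can be made concrete with a short sketch; the weighted graph and the two partitions below are hypothetical, chosen only to show that the natural split scores higher under the null model $p_{ij} = k^{\text{out}}_i k^{\text{in}}_j / m$:

```python
import numpy as np

# Hypothetical weighted adjacency matrix of a small graph
# with two obvious communities: {0, 1} and {2, 3}.
A = np.array([[0., 2., 0., 0.],
              [2., 0., 1., 0.],
              [0., 1., 0., 2.],
              [0., 0., 2., 0.]])

k_out = A.sum(axis=1)           # k_i^out = sum_j A_ij
k_in = A.sum(axis=0)            # k_i^in  = sum_j A_ji
m = A.sum()                     # total connection weight

# Configuration-model null expectation p_ij = k_i^out k_j^in / m.
p = np.outer(k_out, k_in) / m

def modularity(sigma):
    """Q = (1/m) tr(C^T (A - p) C) for a strict assignment sigma."""
    same = sigma[:, None] == sigma[None, :]
    return ((A - p) * same).sum() / m

good = modularity(np.array([0, 0, 1, 1]))
bad = modularity(np.array([0, 1, 0, 1]))
assert good > bad               # the natural split scores higher
```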
\subsubsection{Hierarchical clustering}
All our community detection approaches share a common theme.
For each (proposed) community~$\mathcal{A}$
we have a goodness measure~$M_\mathcal{A}(t)$ that depends on the system Hamiltonian, the initial state, and~$t$.
This induces a corresponding measure for a partition~$X$:
\begin{align}
M_{X}(t) = \sum_{\mathcal{A} \in X} M_\mathcal{A}(t).
\end{align}
Using this, we define a function for comparing two partitions, $X$
and $X'$, which only differ in a single merge that combines
$\mathcal{A}$ and~$\mathcal{B}$:
\begin{align}
M_{\mathcal{A}, \mathcal{B}}(t)
= M_{X'}(t)-M_{X}(t)
= M_{\mathcal{A} \cup \mathcal{B}}(t) -M_\mathcal{A}(t)-M_\mathcal{B}(t).
\end{align}
We can make $M_{\mathcal{A},\mathcal{B}}(t)$ into a symmetric closeness measure $c(\mathcal{A},\mathcal{B})$
by fixing the time~$t$ and normalizing it with~$|\mathcal{A}||\mathcal{B}|$.
Using this closeness measure together with the agglomerative
hierarchical clustering algorithm (as explained
in Sec.~\ref{sec:comdet}) we then obtain a community hierarchy.
The goodness of a specific partition in the hierarchy is given by its
modularity, obtained using the adjacency matrix given by
$A_{ij} = c(i,j)$.
The standard hierarchical clustering algorithm requires closeness to fulfill the \emph{monotonicity property}
\begin{align}
\label{eq:max}
\min(c(\mathcal{A},\mathcal{C}), c(\mathcal{B},\mathcal{C})) \le c(\mathcal{A} \cup \mathcal{B}, \mathcal{C}) \le \max(c(\mathcal{A},\mathcal{C}), c(\mathcal{B},\mathcal{C}))
\end{align}
for any communities~$\mathcal{A}, \mathcal{B}, \mathcal{C}$.
If this does not hold, we may encounter a situation where the
merging closeness sometimes increases, which in turn means that the
results cannot be presented as a dendrogram indexed by decreasing closeness.
The real downside of not having the monotonicity property, however, is stability-related.
The clustering algorithm should be stable, i.e. a small change in the system
should not dramatically change the resulting hierarchy.
Assume we encounter a situation where all the pairwise closenesses between a
subset of clusters $S = \{\mathcal{A}_i\}_i$
are within a given tolerance. A small perturbation can now change the
pair $\{\mathcal{A},\mathcal{B}\}$ chosen for the merge.
If Eq.~\eqref{eq:max} is fulfilled, then
the rest of $S$ is merged into the same new cluster during subsequent
rounds, and hence their relative merging order does not matter.
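The agglomerative procedure can be sketched as follows. The closeness function and the toy weight matrix are hypothetical, and a real implementation would cache pairwise closenesses rather than recompute them at every step:

```python
import numpy as np

def agglomerate(closeness_fn, n):
    """Greedy agglomerative clustering: repeatedly merge the pair of
    communities with the largest pairwise closeness, recording the
    resulting hierarchy of partitions. closeness_fn(A, B) takes two
    tuples of node indices and returns a symmetric closeness value."""
    partition = [(i,) for i in range(n)]
    hierarchy = [list(partition)]
    while len(partition) > 1:
        # Find the closest pair of current communities.
        a, b = max(((a, b) for i, a in enumerate(partition)
                    for b in partition[i + 1:]),
                   key=lambda ab: closeness_fn(*ab))
        partition = [c for c in partition if c not in (a, b)] + [a + b]
        hierarchy.append(list(partition))
    return hierarchy

# Toy closeness: nodes 0-2 are close to each other, as are nodes 3-4.
w = np.array([[0, 9, 8, 1, 1],
              [9, 0, 9, 1, 1],
              [8, 9, 0, 1, 1],
              [1, 1, 1, 0, 9],
              [1, 1, 1, 9, 0]], dtype=float)

# Merge-gain closeness normalized by |A||B|, as in the text.
close = lambda A, B: sum(w[i, j] for i in A for j in B) / (len(A) * len(B))

levels = agglomerate(close, 5)
# The two-community level separates {0, 1, 2} from {3, 4}.
two = [set(c) for c in levels[-2]]
assert {0, 1, 2} in two and {3, 4} in two
```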
\subsubsection{Notation}
Let the Hamiltonian of the system have the spectral decomposition
$H = \sum_k E_k \Lambda_k$.
The unitary propagator of the system decomposes as
$U(t) = \mathrm{e}^{-\mathrm{i} H t} = \sum_k e^{-i E_k t} \Lambda_k$.
We denote the state of the system at time~$t$ by
\begin{align}
\rho (t) = U(t) \rho(0) U(t)^\dagger.
\end{align}
Sometimes we make use of the state obtained
by measuring in which community subspace $\mathcal{V}_\mathcal{A}$ the quantum
state is located, and then discarding the result. The resulting state is
\begin{align}
\label{eq:measure}
\rho_X(t) &= \sum_{\mathcal{A} \in X} \Pi_\mathcal{A} \rho(t) \Pi_\mathcal{A}.
\end{align}
This state is normally not pure even if~$\rho(t)$ is.
The probability of transport from node~$b$ to node~$a$, the transfer matrix,
is given by the elements
\begin{align}
R_{ab}(t) = |\brackets{a}{U(t)}{b}|^2.
\end{align}
$R(t)$ is doubly stochastic, i.e.\ its rows and columns all sum up to unity.
We use $\sym{R} = (R+R^T)/2$ to denote its symmetrization.
The time average of a function $f(t)$ is denoted using~$\tave{f}(t)$:
\begin{align}
\tave{f}(t) = \frac{1}{t} \int_0^t f(t') \: \mathrm{d} t'.
\end{align}
Now we have
\begin{align}
\tave{R}_{ab}(t)
&= \sum_{jk} \frac{1}{t} \int_0^t e^{-i(E_j-E_k)t'} \: \mathrm{d} t' \brackets{a}{\Lambda_j}{b} \brackets{b}{\Lambda_k}{a}.
\end{align}
The $tH \ll 1$ and $t \to \infty$ limits of this average are
\begin{align}
\label{eq:T_ave_lims}
\notag
\tave{R}_{ab}(t \to 0) &= \delta_{ab}\left(1-\frac{t^2}{3}(H^2)_{aa}\right) +\frac{t^2}{3}|H_{ab}|^2 +O(t^3),\\
\tave{R}_{ab}(t \to \infty)
&= \sum_{jk} \delta_{jk} \brackets{a}{\Lambda_j}{b} \brackets{b}{\Lambda_k}{a}
= \sum_{k} |\brackets{a}{\Lambda_k}{b}|^2.
\end{align}
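The $t \to \infty$ limit in Eq.~\eqref{eq:T_ave_lims} is easy to verify numerically. The sketch below uses a random Hermitian Hamiltonian (an arbitrary choice for illustration), whose spectrum is almost surely non-degenerate, as the limit implicitly assumes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4-site Hamiltonian: a random Hermitian matrix.
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (M + M.conj().T) / 2
E, V = np.linalg.eigh(H)   # H = sum_k E_k Lambda_k, Lambda_k = |v_k><v_k|

def R(t):
    """Transfer matrix R_ab(t) = |<a|U(t)|b>|^2."""
    U = V @ np.diag(np.exp(-1j * E * t)) @ V.conj().T
    return np.abs(U) ** 2

# R(t) is doubly stochastic.
assert np.allclose(R(1.3).sum(axis=0), 1)
assert np.allclose(R(1.3).sum(axis=1), 1)

# Long-time average: Rbar_ab = sum_k |<a|Lambda_k|b>|^2.
Rbar_exact = sum(np.abs(np.outer(V[:, k], V[:, k].conj())) ** 2
                 for k in range(4))

# A brute-force time average over a long interval agrees with the formula.
ts = np.linspace(0, 2000, 20001)
Rbar_num = np.mean([R(t) for t in ts], axis=0)
assert np.allclose(Rbar_num, Rbar_exact, atol=1e-2)
```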
The time average of the state of the system is given by
\begin{align}
\tave{\rho}(t)
= \sum_{jk} \frac{1}{t} \int_0^t e^{-i(E_j-E_k)t'} \: \mathrm{d} t' \Lambda_j \rho(0) \Lambda_k.
\end{align}
It can be interpreted as the density matrix of a system that has
evolved for a random time, sampled from the uniform distribution on
the interval~$[0,t]$.
Again, in the short- and infinite-time limits this yields
\begin{align}
\label{eq:rho_ave_lims}
\notag
\tave{\rho}(t \to 0)
=& \rho(0) -\frac{it}{2} \left[H, \rho(0)\right]
+\frac{t^2}{3}\left(H \rho(0) H -\frac{1}{2}\left\{H^2, \rho(0)\right\} \right)
+O(t^3),\\
\tave{\rho}(t \to \infty)
=& \sum_{k} \Lambda_k \rho(0) \Lambda_k.
\end{align}
\subsection{Closeness measures}
\subsubsection{Inter-community transport}
\label{sec:S:mixing}
Considering the flow of probability during a continuous-time quantum
walk, let us investigate the \emph{change} in the probability of observing
the walker within a community:
\begin{align}
T_\mathcal{A} (t)
= \frac{1}{2}\left| p_\mathcal{A} \left \{ \rho (t) \right \} - p_\mathcal{A} \left \{ \rho (0) \right \} \right|,
\end{align}
where
$p_\mathcal{A} \left \{ \rho \right \} = \textrm{tr} \left( \Pi_\mathcal{A} \rho \right)$
is the probability of a walker in state~$\rho$ being found in
community~$\mathcal{A}$ upon a von Neumann-type measurement.\footnote{
Equivalently, $p_\mathcal{A} \left \{ \rho \right \}$ is
the norm of the projection (performed by projector $\Pi_\mathcal{A}$) of the
state $\rho$ onto the community subspace $\mathcal{V}_\mathcal{A}$.}
A good partition should intuitively minimize this change, keeping the walkers as localized to the communities as possible.
$T_X= \sum_{\mathcal{A} \in X}T_{\mathcal{A}}$ is of course minimized by the trivial choice of a single
community, $X = \{\mathcal{A}\}$, and any merging of communities can only decrease~$T_{X}$.
Therefore we have
$T_{\mathcal{A} \cup \mathcal{B}}(t) \le T_{\mathcal{A}}(t) +T_{\mathcal{B}}(t)$.
The initial state~$\rho (0)$ can be chosen freely.
For a pure initial state $\rho(0) = \ket{\psi}\bra{\psi}$ we obtain
\begin{align}
T_\mathcal{A} (t) = \frac{1}{2} \left| \bracket{\psi}{U^\dagger(t) \Pi_\mathcal{A} U(t)}{\psi} -\bracket{\psi}{\Pi_\mathcal{A}}{\psi} \right|.
\end{align}
The change in inter-community transport is clearest when the process begins either entirely inside or entirely outside each community. Because of this, we choose the walker to be initially localized
at a single node $\rho (0) = \proj{b}$ and then, for symmetry, sum (or average)
$T_\mathcal{A} (t)$ over all $b \in \mathcal{N}$:
\begin{align}
\label{eq:T}
\notag
T_\mathcal{A} (t) &= \frac{1}{2} \sum_b \left| \bracket{b}{U(t)^\dagger \Pi_\mathcal{A} U(t)}{b} -\bracket{b}{\Pi_\mathcal{A}}{b} \right|\\
\notag
&= \frac{1}{2} \sum_b \left|\sum_{a \in \mathcal{A}} (R_{ab}(t) -\delta_{ab}) \right|\\
\notag
&= \frac{1}{2} \left(\sum_{b \in \mathcal{A}} \left|1 -\sum_{a \in \mathcal{A}} R_{ab}(t) \right|
+\sum_{b \notin \mathcal{A}} \left|\sum_{a \in \mathcal{A}} R_{ab}(t) \right| \right)\\
\notag
&= \frac{1}{2} \left(\sum_{a \notin \mathcal{A},b \in \mathcal{A}} R_{ab}(t)
+\sum_{a \in \mathcal{A}, b \notin \mathcal{A}} R_{ab}(t)\right)\\
&= \sum_{a \in \mathcal{A}, b \notin \mathcal{A}} \frac{R_{ab}(t)+R_{ba}(t)}{2}
= \sum_{a \in \mathcal{A}, b \notin \mathcal{A}} \sym{R}_{ab}(t),
\end{align}
since $R(t)$ is doubly stochastic.
Now we have
\begin{align}
T_{\mathcal{A},\mathcal{B}}(t) = T_{\mathcal{A}}(t) +T_{\mathcal{B}}(t) -T_{\mathcal{A} \cup \mathcal{B}}(t)
= 2 \sum_{a \in \mathcal{A}, b \in \mathcal{B}} \sym{R}_{ab}(t)
\end{align}
with $0 \le T_{\mathcal{A},\mathcal{B}}(t) \le 2 \min(|\mathcal{A}|, |\mathcal{B}|)$.
The short- and long-time limits of the time-averaged $T_{\mathcal{A},\mathcal{B}}(t)$
can be found using Eqs.~\eqref{eq:T_ave_lims}:
\begin{align}
T_{\mathcal{A}, \mathcal{B}}^{t \to 0}
&= 2 \sum_{a \in \mathcal{A}, b \in \mathcal{B}}
\left( \delta_{ab} +\frac{t^2}{3}\left(|H_{ab}|^2 -\delta_{ab}(H^2)_{aa}\right) +O(t^3)\right),\\
T_{\mathcal{A}, \mathcal{B}}^{t \to \infty}
&= 2 \sum_{a \in \mathcal{A}, b \in \mathcal{B}} \sum_k |(\Lambda_k)_{ab}|^2.
\end{align}
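Both the relation $T_{\mathcal{A},\mathcal{B}}(t) = 2 \sum_{a \in \mathcal{A}, b \in \mathcal{B}} \sym{R}_{ab}(t)$ and the bound $0 \le T_{\mathcal{A},\mathcal{B}}(t) \le 2\min(|\mathcal{A}|,|\mathcal{B}|)$ can be checked directly; the Hamiltonian and the communities below are arbitrary choices made for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 5-site Hamiltonian and two disjoint communities.
M = rng.normal(size=(5, 5)) + 1j * rng.normal(size=(5, 5))
H = (M + M.conj().T) / 2
E, V = np.linalg.eigh(H)
t = 0.7
U = V @ np.diag(np.exp(-1j * E * t)) @ V.conj().T
R = np.abs(U) ** 2                 # transfer matrix R_ab(t)
Rsym = (R + R.T) / 2               # its symmetrization

def T_comm(A):
    """T_A(t) = sum_{a in A, b not in A} Rsym_ab."""
    notA = [b for b in range(5) if b not in A]
    return sum(Rsym[a, b] for a in A for b in notA)

A, B = (0, 1), (2, 3)
T_AB = T_comm(A) + T_comm(B) - T_comm(A + B)
direct = 2 * sum(Rsym[a, b] for a in A for b in B)
assert np.isclose(T_AB, direct)
assert 0 <= T_AB <= 2 * min(len(A), len(B))
```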
\subsubsection{Intra-community fidelity}
\label{sec:S:coher}
Our next measure aims to maximize the ``similarity'' between the
evolved and initial states when projected to a community subspace.
We do this using the squared fidelity
\begin{align}
F_\mathcal{A} (t) = F^2 \left \{ \Pi_\mathcal{A} \rho(t) \Pi_\mathcal{A}, \Pi_\mathcal{A} \rho(0) \Pi_\mathcal{A} \right \},
\end{align}
where $\Pi_\mathcal{A} \rho \Pi_\mathcal{A}$ is the projection of the state $\rho$ onto the subspace $\mathcal{V}_\mathcal{A}$ and
\begin{align}
F \left \{ \rho , \sigma \right \} = \textrm{tr} \left \{ \sqrt{ \sqrt{\rho} \sigma \sqrt{\rho} } \right \} \in [0, \sqrt{\textrm{tr} \{ \rho \} \textrm{tr} \{ \sigma \}} ],
\end{align}
is the fidelity, which is symmetric between $\rho$ and $\sigma$.
If either $\rho$ or $\sigma$ is rank-1, their fidelity reduces to
$
F \left \{ \rho, \sigma \right \} = \sqrt{\textrm{tr} \{\rho \sigma\}}
$.
Thus, if the initial state~$\rho(0)$ is pure, we have
\begin{align}
F_\mathcal{A} (t) = \textrm{tr} \left( \Pi_\mathcal{A} \rho(t) \Pi_\mathcal{A} \rho(0) \right).
\end{align}
This assumption makes
$F_X(t)$ equivalent to the squared fidelity between $\rho_X(t)$ and a pure~$\rho(0)$:
\begin{align}
\label{eq:FX}
\notag
F_X(t)
&= \sum_{\mathcal{A} \in X} \textrm{tr}\left(\Pi_\mathcal{A} \rho(t) \Pi_\mathcal{A} \rho(0)\right)
= \textrm{tr}\left(\rho_X(t) \rho(0)\right)\\
&= F^2\{\rho_X(t), \rho(0)\}
= F^2\{\rho(t), \rho_X(0)\},
\end{align}
and yields
\begin{align}
\label{eq:f2}
\notag
F_{\mathcal{A},\mathcal{B}}(t)
&= F_{\mathcal{A} \cup \mathcal{B}}(t)-F_\mathcal{A}(t)-F_\mathcal{B}(t)\\
\notag
&= 2 \real \textrm{tr} \left(\Pi_\mathcal{A} \rho(t) \Pi_\mathcal{B} \rho(0) \right)\\
&= 2 \sum_{a \in \mathcal{A}, b \in \mathcal{B}} \real \left(\rho_{ab}(t) \rho_{ba}(0) \right).
\end{align}
We use as the initial state the uniform superposition of all the basis states with arbitrary phases:
$\ket{\psi} = \frac{1}{\sqrt{n}}\sum_k e^{i \theta_k} \ket{k}$, which gives
\begin{align}
F_{\mathcal{A},\mathcal{B}}(t)
&=
\frac{2}{n^2} \sum_{a \in \mathcal{A}, b \in \mathcal{B}} \sum_{xy}
\real\left(e^{i(\theta_x -\theta_y +\theta_b -\theta_a)} U_{ax} \overline{U_{by}}\right).
\end{align}
In this case the short-time limit does not yield anything interesting.
The long-time limit of the time-average of $F_{\mathcal{A},\mathcal{B}}(t)$ is
\begin{align}
\notag
F_{\mathcal{A}, \mathcal{B}}^{t \to \infty}
&= \frac{2}{n^2} \sum_{a \in \mathcal{A}, b \in \mathcal{B}} \sum_{xy,k} \real \left(e^{i(\theta_x -\theta_y +\theta_b -\theta_a)}(\Lambda_k)_{ax} (\Lambda_k)_{yb} \right).
\end{align}
We may now (somewhat arbitrarily) choose all the phases~$\theta_k$ to be the same,
or average the closeness measure over all possible phases~$\theta_k \in [0, 2\pi]$.
\subsubsection{Purity}
The coherence between any communities $X = \{ \mathcal{A},\mathcal{B},\dots \}$ is completely destroyed
by measuring
in which community subspace $\mathcal{V}_\mathcal{A}$ the quantum
state is located, see Eq.~\eqref{eq:measure}. If the measurement outcome is not revealed, the
purity of the measured state~$\rho_X(t)$ is,
due to the orthogonality of the projectors,
\begin{align}
\notag
{\mathbbm P}_X(t) &= \textrm{tr}\left(\rho_X^2(t)\right)
= \sum_{\mathcal{A} \in X} \textrm{tr}\left((\Pi_\mathcal{A} \rho(t))^2\right)
= \sum_{\mathcal{A} \in X} {\mathbbm P}_\mathcal{A}(t),
\end{align}
where
\begin{align}
{\mathbbm P}_\mathcal{A}(t) &= \textrm{tr}\left((\Pi_\mathcal{A} \rho(t) \Pi_\mathcal{A})^2\right) = \textrm{tr}\left((\Pi_\mathcal{A} \rho(t))^2\right).
\end{align}
If $\rho(t)$ is pure, we have (cf. Eq.~\eqref{eq:FX})
\begin{align}
{\mathbbm P}_X(t)
= \sum_{\mathcal{A} \in X} \textrm{tr}(\Pi_\mathcal{A} \rho(t) \Pi_\mathcal{A} \rho(t))
= F^2\{\rho_X(t), \rho(t)\}.
\end{align}
The change in purity of the state after a projective measurement
locating the walker into one of the communities is
\begin{align}
\notag
{\mathbbm P}_{\mathcal{A}, \mathcal{B}}(t) &= {\mathbbm P}_{\mathcal{A} \cup \mathcal{B}}(t) -{\mathbbm P}_\mathcal{A}(t)-{\mathbbm P}_\mathcal{B}(t)\\
\notag
&= 2\textrm{tr}\left(\Pi_\mathcal{A} \rho(t) \Pi_\mathcal{B} \rho(t)\right)\\
&= 2 \sum_{a \in \mathcal{A}, b \in \mathcal{B}} | \rho_{ab} (t) |^2 \ge 0.
\end{align}
Again, we will use the initial state
$\ket{\psi}~=~\frac{1}{\sqrt{n}}\sum_k e^{i \theta_k} \ket{k}$:
\begin{align}
{\mathbbm P}_{\mathcal{A},\mathcal{B}}(t)
&=
\frac{2}{n^2} \sum_{a \in \mathcal{A}, b \in \mathcal{B}}
\left|\sum_{xy} e^{i(\theta_x-\theta_y)} U_{ax}(t) \overline{U_{by}(t)}\right|^2.
\end{align}
As with the fidelity-based measure, the short-time limit is uninteresting.
The long-time limit of the time-average of ${\mathbbm P}_{\mathcal{A},\mathcal{B}}(t)$ is
\begin{align}
\notag
{\mathbbm P}_{\mathcal{A},\mathcal{B}}^{t \to \infty}
&= 2 \sum_{a \in \mathcal{A}, b \in \mathcal{B}}
\left(|\bracket{a}{\tave{\rho}(\infty)}{b}|^2 +\sum_{k \neq m}
|\bracket{a}{\Lambda_k \rho_0\Lambda_m}{b}|^2\right)\\
&=
2 \sum_{a \in \mathcal{A}, b \in \mathcal{B}} \left(
|\sum_{kxy} e^{i(\theta_x -\theta_y)} (\Lambda_k)_{ax} (\Lambda_k)_{yb}|^2\right.\\
&\left.+\sum_{k \neq m}|\sum_{xy} e^{i(\theta_x -\theta_y)} (\Lambda_k)_{ax} (\Lambda_m)_{yb}|^2
\right).
\end{align}
\chapter{Visualization of quantum sequences}
\label{ch:qubsim}
\section{Introduction}
One of the key features of quantum mechanics is that increasing the number of particles results in an exponential increase of the number of parameters we need to describe the state.
For example, a pure state of $N$ qubits needs $2^N$ complex parameters.
It is a crucial feature, related to quantum phenomena such as entanglement and some aspects of quantum computation.
However, the exponential increase of parameters makes it problematic to store, analyze, process or visualize many-particle quantum states.
Moreover, sometimes we are interested in analyzing quantum states of infinitely many particles (for example, infinite spin chain lattices).
While it is impossible even to store all the parameters,
we can still work with (usually approximate) models describing the state.
The problem is not unique to quantum mechanics --- one already has it in statistical physics,
and more generally in statistics.
That is, while each state can be described with a number of parameters proportional to the number of particles,
the probability distribution requires an exponentially growing number of parameters.
However, while in classical systems we can avoid this problem by considering a single state, in quantum mechanics this is not the case \cite{Feynman1982}.
In this chapter we present methods for the analysis of many-particle (and infinite-particle) wavefunctions, based on (or inspired by) similar methods in statistics.
While the problem is general, in this chapter we will focus on sequences, i.e.\ configurations of particles that can be meaningfully arranged in a line.
Every pure quantum state of $N$ particles can be expanded in the computational basis, i.e.
\begin{align}
\ket{\Psi} = \sum_{s_1,s_2,\ldots,s_N} \alpha_{s_1,s_2,\ldots s_N} \ket{s_1} \ket{s_2} \cdots \ket{s_N},
\label{eq:state_in_computational_basis}
\end{align}
where $\alpha_{s_1,s_2,\ldots s_N}$ are complex parameters and the sum runs over the states of each particle, i.e. $s_i \in \{0, \ldots, d_i - 1 \}$.
We focus on systems of distinguishable particles with the same number of states, i.e. $d \equiv d_1 = \ldots = d_N$.
We will put special emphasis on translationally-invariant states.
That is, let us define the shift operator by
\begin{align}
T \ket{s_1} \ket{s_2} \cdots \ket{s_{N-1}} \ket{s_N} = \ket{s_2} \ket{s_3} \cdots \ket{s_N} \ket{s_1},
\end{align}
then translationally invariant states are the states fulfilling
\begin{align}
T \ket{\Psi} = \ket{\Psi}.\label{eq:transl-inv}
\end{align}
The dimension of the subspace of translationally invariant states still grows exponentially \cite{oeisA000031}.
To see this, let us take the computational basis of $N$ qudits, which has dimension $d^N$,
and construct the equivalence classes of basis states related by powers of~$T$.
As an orbit of $T$ has at most $N$ elements, the dimension of the translationally invariant subspace is at least $d^N/N$.
Nonetheless, this symmetry sometimes substantially simplifies the properties of the state.
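The orbit count, i.e. the dimension of the translationally invariant subspace, can be computed by brute force for small systems; the sketch below assumes nothing beyond the definition of the shift operator:

```python
from itertools import product

def n_orbits(d, N):
    """Dimension of the translationally invariant subspace of N qudits:
    the number of orbits of cyclic shifts acting on the computational
    basis (for d = 2 these are the binary necklace counts, OEIS A000031)."""
    seen, count = set(), 0
    for s in product(range(d), repeat=N):
        if s in seen:
            continue
        count += 1
        for k in range(N):        # mark the whole orbit of s under T
            seen.add(s[k:] + s[:k])
    return count

# Qubit necklace counts: 2, 3, 4, 6, 8, 14 for N = 1..6.
assert [n_orbits(2, N) for N in range(1, 7)] == [2, 3, 4, 6, 8, 14]
# The dimension is bounded below by d^N / N, hence grows exponentially.
assert all(n_orbits(2, N) >= 2 ** N / N for N in range(1, 10))
```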
In this chapter we present a pictorial representation of quantum many-body wavefunctions, for which we have coined the name \emph{qubism}\footnote{The name \emph{qubism} (inspired by Cubism, the art movement) should not be confused with \emph{QBism} (quantum Bayesianism) \cite{Caves2002,Fuchs2010}.} \cite{Rodriguez-Laguna2011}.
In this visualization, a wavefunction characterizing a pure state of a chain of $N$ qudits is mapped to an image with $d^{N/2} \times d^{N/2}$ pixels.
It is presented in a few flavors and applied to analyze properties of ground states of commonly used Hamiltonians in condensed matter and cold atom physics, such as the Heisenberg or the Ising model in a transverse field (ITF).
The main property of the plotting scheme is recursivity: increasing the number of qubits is reflected in an increase of the image resolution.
Thus, the plots are typically fractal-like, at least for translationally-invariant states.
The two-dimensional structure is especially capable of capturing correlations between neighboring particles.
Many features of the wavefunction, such as magnetization, correlations and criticality, are represented by visual properties of the images.
In particular, factorizability can be easily spotted: entanglement entropy turns out to be the deviation from exact self-similarity.
Furthermore, we use a similar scheme to visualize density matrices and operators.
We show that some properties of \emph{qubistic} plots do not depend on particular graphical representation, but are related to information theoretic properties of the state.
Once the measurement basis is chosen, we analyze outcomes as classical probabilistic sequences.
We use tools such as (classical) conditional entropy and mutual information, as well as R\'enyi fractal dimension, to describe the state.
\subsection{Classical sequence analysis}
The analysis of probabilistic sequences is one of the important problems in classical information theory.
Initial considerations of how much information can be sent as a probabilistic sequence of letters
gave rise to Shannon entropy \cite{Shannon1948original} and related tools such as conditional and mutual information.
These concepts have proven to be crucial in communication --- as they provide rigorous bounds both on how to avoid redundancy by efficiently compressing information and how to add minimal redundancy, so that the message can be decoded, even if it is subjected to noise \cite{CoverThomas2006book,MacKay2003}.
Moreover, they remain one of the main general-purpose approaches to data analysis, as these tools deal with abstract information and require few assumptions.
Information theory is widely used for the analysis of stationary processes, that is, probabilistic sequences of letters over an alphabet with probabilities invariant under translation.
They are a direct analogue of quantum translationally-invariant states \eqref{eq:transl-inv}.
Stationary processes are applied to topics as diverse as the analysis of the structure of languages \cite{Norvig2009}, DNA sequences \cite{Almeida2001}, heart arrhythmias \cite{Jones2008} and correlations of ground states of a Hamiltonian \cite{Bialek2001}.
One of the key techniques for the simplification and modeling of stationary processes is hidden Markov models \cite{Rabiner1989}.
That is, certain processes can be simulated as a memoryless stochastic process on internal states (a random walk on a fixed graph) together with an \emph{observation matrix} mapping the internal states to probabilities of observing particular outcomes.
Nonetheless, for some stationary processes memory properties are crucial \cite{Jones2008}.
\subsection{Data visualization}
It is common for communication in the technical sciences to involve presenting data, whether derived from an experiment, a numerical simulation or an exact formula.
It can be conveyed in the form of a table with numbers, a histogram, a line plot, a scatter plot or a density map --- to name only a few ways of visualizing data.
However, using plots to present data should not be taken for granted.
Even typical plots such as bar plots or scatter plots appeared for the first time only in the late 18th century \cite{Tufte2007}.
When we interact with data (especially data coming from an experiment or simulation), it is useful to have at the same time access to raw data and a representation enabling us to get further insight.
For example, when we are studying the correlation between two variables, a scatter plot is often a better way to show the data than the linear correlation coefficient alone.
First, from raw data presented in such a plot it is easy to see correlations.
Second, it also allows us to see why such a correlation happens (maybe it is only due to a few outliers, or there is no correlation but the data are still highly dependent in a non-linear way).
While most such plots are multi-purpose tools that can be applied to various kinds of data, some are more specific, with the visualization being deeply related to the properties of the visualized object.
Perhaps the most beautiful example, Mendeleev's periodic table of elements, arranges elements in a way related to their nuclear (number of protons) and chemical (electronic structure of orbitals) properties \cite{Marchese2012}.
It is important to remember that every data visualization puts emphasis on some aspects of data at the expense of others.
For example, scatter and bar plots are good at showing relative differences, and put emphasis on values standing out of the crowd.
Yet they may mask small but crucial changes, for example:
\begin{itemize}
\item Prices $\$5.00$ and $\$4.99$ convey a different message to the consumer \cite{Schindler2006}.
\item In some voting models \cite{Migdal2010mafia,Migdal2011twochoice} the parity of the number of participants may matter even in the limit of infinitely many participants --- i.e. adding two participants changes the value less than adding one participant.
\item The numerical value $1.57$ is close to $\pi/2$, but does not have the unique properties of the latter.
\end{itemize}
Consequently, depending both on our data and the features we want to put the emphasis on,
we need to choose, tweak or create visualization schemes according to our needs.
It is a choice we cannot avoid since, after all, even presenting numbers using Arabic numerals (e.g. $0.231 + 0.150i$) is a form of data visualization (and often of abstraction, if we round numbers to fixed precision).
\subsection{Visualizing sequences}
The analysis of the statistical distribution of sequences is important in several fields of science.
In natural language processing, texts are cut into so-called $N$-grams --- sequences of $N$ consecutive characters or words.
Their distribution is applied to language recognition and to various statistical inferences about language \cite{Norvig2009}.
Another application is in molecular genetics --- analysis of deoxyribonucleic acid (DNA) sequences.
From the information theory perspective, each DNA sequence is a word over the alphabet of 4 letters, $\{A,C,G,T\}$,
denoting nucleobases --- adenine, cytosine, guanine and thymine, respectively.
A triple of nucleobases encodes an amino acid, the building block of a protein.
Thus, the presence or absence of particular sequences of nucleobases is related to the structure of the proteins being encoded.
To visualize that, in 1990 Jeffrey \cite{Jeffrey1990,Jeffrey1992} used the so-called \emph{chaos game representation} to plot different sequences on the same graph.
We describe the scheme, as it is directly related to \emph{qubism}.
The chaos game representation applied to DNA sequences works as follows.
First, we plot a square and put the symbols $\{A,C,G,T\}$ at its corners, for example:
\begin{align}
\vec{A} = (0,0),\quad
\vec{C} = (1,0),\quad
\vec{G} = (0,1),\quad
\vec{T} = (1,1).
\label{eq:acgt_positions}
\end{align}
Then for each sequence $(s_1 s_2 s_3 \ldots s_N)$ we find its position with the following iterative procedure:
\begin{align}
\vec{r}_0 &= (1/2, 1/2)\\
\vec{r}_{i} &= (\vec{r}_{i-1} + \vec{s_i})/2.
\end{align}
That is, we start in the middle of the square and, for each consecutive symbol, we move to the position midway between the current position and the symbol's corner. See Fig.~\ref{fig:chaos_games}.
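The iterative procedure above can be sketched in a few lines of Python (an illustrative helper of our own, not part of any published implementation; the corner coordinates follow \eqref{eq:acgt_positions}):

```python
def chaos_game_position(sequence):
    """Return the chaos game point of a DNA sequence over {A, C, G, T}.

    Corners as in the equation above: A=(0,0), C=(1,0), G=(0,1), T=(1,1).
    """
    corners = {'A': (0.0, 0.0), 'C': (1.0, 0.0),
               'G': (0.0, 1.0), 'T': (1.0, 1.0)}
    x, y = 0.5, 0.5  # r_0: the middle of the unit square
    for symbol in sequence:
        cx, cy = corners[symbol]
        # move to the midpoint between the current position and the corner
        x, y = (x + cx) / 2.0, (y + cy) / 2.0
    return x, y
```

For instance, the points of Fig.~\ref{fig:chaos_games} are reproduced by evaluating this function on the successive prefixes \texttt{""}, \texttt{"T"}, \texttt{"TG"} and \texttt{"TGA"}.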
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.40\textwidth]{figs/qubism/chaos_games.pdf}
\caption{Construction of the chaos game representation for a DNA sequence.
Points for a null sequence, $T$, $TG$ and $TGA$.}
\label{fig:chaos_games}
\end{figure}
Written another way, this is just
\begin{align}
\vec{r}_{N} = \sum_{i=1}^N 2^{i-N-1} \vec{s_i},
\end{align}
that is, the two coordinates are related to the binary expansion of the reversed sequence, i.e.
\begin{align}
\vec{r}_{N} = (&0.(s_{N})_x (s_{N-1})_x \ldots (s_{2})_x (s_{1})_x 1,\label{eq:chaos_position}\\
&0.(s_{N})_y (s_{N-1})_y \ldots (s_{2})_y (s_{1})_y 1),\nonumber
\end{align}
where $x$ and $y$ mean the first and the second coordinate of the symbols,
as in \eqref{eq:acgt_positions}. Because of the reversed order, the position typically does not converge for infinite sequences.
If the sequence distribution is uniform, it gives rise to a uniform distribution of points (up to the discretization) on the square, see \eqref{eq:chaos_position}.
If it is not, it typically looks fractal, showing the presence (or absence) of some particular subsequences.
For example, we can cut a DNA into non-overlapping sequences of 6 nucleobases (each encoding 2 amino acids).
Then the presence of particular strings says which pairs of amino acids are being encoded.
Chaos game representation for DNA sequences was used as a starting point to compare genes and calculate their information content \cite{Almeida2001} and multifractal properties \cite{Yu2001}.
Moreover, it was applied to compare proteins based on their structure \cite{Liu2007}.
The idea was rediscovered in \cite{Hao2000}, with a mapping slightly different from \eqref{eq:chaos_position}.
The order of symbols in this formula is reversed, so the first symbols carry more weight in $\vec{r}$ than the last ones.
In particular, it allows every sequence to be convergent at the price of restricting ourselves to the analysis of sequences of the same length.
Unaware of these previous works, in 2005 Latorre \cite{Latorre2005} used this mapping to encode an image as a quantum state.
These quantum states were written down as states of a spin chain and then expressed as matrix product states (MPS).
This proof-of-principle encoding was called \emph{qpeg} compression.
\section{Qubism}\label{s:qubism}
\subsection{Basic mapping}
To plot a pure quantum state of many qubits, let us start by writing it in the computational basis \eqref{eq:state_in_computational_basis}.
Similarly to the DNA sequence, we want to map each sequence to a particular position (or region) on a unit square.
Then we will color the region depending on its amplitude.
For simplicity, in this section we concentrate on qubits, with an even number of particles.
Generalizations to qudits, to particles of different dimensions, and to an odd number of particles are straightforward.
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.60\textwidth]{figs/qubism/simplest.pdf}
\caption{Qubism mapping for $N=2$ and $N=4$ particles. Adding more particles results in recursive splitting.}
\label{fig:qubism_simplest}
\end{figure}
We proceed as in Fig.~\ref{fig:qubism_simplest}, constructing the mapping recursively.
We start with a unit square. We divide it into four quadrants. Depending on the first two bits, we pick a quadrant, according to:
\begin{align}
\begin{matrix}
00 \to \hbox{upper left} & 01 \to \hbox{upper right} \\
10 \to \hbox{lower left} & 11 \to \hbox{lower right}.
\end{matrix}
\end{align}
Then for each quadrant, we proceed recursively with the remaining part of the sequence.
After mapping sequences to squares, we create a complex function on the unit square,
$[0,1]\times[0,1] \rightarrow \mathbb{C}$,
whose values are taken from the wavefunction amplitudes.
To be specific, for each sequence $y_1 x_1 y_2 x_2 \ldots y_{N/2} x_{N/2}$ we create a square with edge size $2^{-N/2}$ and with position (i.e. its top left corner)
\begin{align}
x = \sum_{i=1}^{N/2} 2^{-i} x_i, \qquad
y = \sum_{i=1}^{N/2} 2^{-i} y_i, \label{eq:qubism_position}
\end{align}
where we plot the $x$ coordinate from left to right and the $y$ coordinate from top to bottom.
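As a sketch (with a hypothetical function name of our choosing), the coordinates \eqref{eq:qubism_position} of a basis state can be computed directly from its bit string:

```python
def qubism_position(bits):
    """Top-left corner (x, y) of the square assigned to a basis state.

    `bits` is the string y1 x1 y2 x2 ...: odd positions feed the y
    coordinate, even positions the x coordinate.
    """
    ys, xs = bits[0::2], bits[1::2]
    binary_fraction = lambda digits: sum(
        int(b) * 2.0 ** -(i + 1) for i, b in enumerate(digits))
    return binary_fraction(xs), binary_fraction(ys)
```

For example, the antiferromagnetic state $\ket{0101}$ lands in the upper right quadrant ($x=3/4$, $y=0$, with $y$ growing downwards), while $\ket{0000}$ stays in the upper left corner.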
We map complex numbers to colors \cite{Wegert2011,Petrisor2014plotting_complex}, using the absolute value $|z|$ for lightness or saturation and the phase ($\arg(z)$) for hue.
To be more specific, we use two mappings, defined in hue-saturation-value (HSV) coordinates as follows:
\begin{equation}
\text{light: }
\begin{bmatrix}
H\\S\\V
\end{bmatrix}
=
\begin{bmatrix}
\arg(z)/(2\pi)\\
\min(|z|, 1)\\
1
\end{bmatrix}
\qquad
\text{dark: }
\begin{bmatrix}
H\\S\\V
\end{bmatrix}
=
\begin{bmatrix}
\arg(z)/(2\pi)\\
1\\
\min(|z|, 1)
\end{bmatrix}
\label{eq:complex_to_hsv}
\end{equation}
as in Fig~\ref{fig:complex_colors}.
The mapping of the phase to hue is standard. However, there are various conventions for mapping $|z|$ to lightness or saturation;
we adopt the mapping above, without going into details.
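Both conventions admit a minimal implementation using the standard library's \texttt{colorsys} module (the exact clipping of $|z|$ at $1$ is our choice here):

```python
import cmath
import colorsys

def complex_to_rgb(z, dark=False):
    """Map a complex amplitude to an (R, G, B) triple in [0, 1]^3."""
    hue = (cmath.phase(z) / (2.0 * cmath.pi)) % 1.0  # phase -> hue
    mod = min(abs(z), 1.0)                            # clipped modulus
    if dark:
        # dark variant: zero amplitude is black, modulus controls value
        return colorsys.hsv_to_rgb(hue, 1.0, mod)
    # light variant: zero amplitude is white, modulus controls saturation
    return colorsys.hsv_to_rgb(hue, mod, 1.0)
```

In the light variant a zero amplitude is white and a positive real amplitude is red; in the dark variant a zero amplitude is black.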
\begin{figure}[!htbp]
\centering
\begin{tabular}{cc}
\includegraphics[width=0.40\textwidth]{figs/complex_color_circle_bright.pdf} &
\includegraphics[width=0.40\textwidth]{figs/complex_color_circle_dark.pdf}
\end{tabular}
\caption{Examples of color mappings for a complex number $z=x + i y$. For clarity, only the unit disk is shown.
Values can be scaled, so that full saturation takes place at $|z|_{\max}$.
\label{fig:complex_colors}
\end{figure}
For example, for state $\ket{0101}-\ket{1010}$, the qubistic plot is as in Fig.~\ref{fig:qubism_example_0101}.
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.30\textwidth]{figs/qubism/example_0101.pdf}
\caption{Plot for the state $(\ket{0101}-\ket{1010})/\sqrt{2}$. The red square denotes amplitude of $\ket{0101}$, while the teal --- of $\ket{1010}$.}
\label{fig:qubism_example_0101}
\end{figure}
\subsection{Properties}
The visualization scheme described above has some interesting geometrical properties,
which can be translated into symmetries of the state, as shown in Fig.~\ref{fig:qubism_symmetries}.
\begin{itemize}
\item Corners correspond to:
\begin{itemize}
\item ferromagnetic states (i.e. $\ket{0000\ldots}$ for upper left and $\ket{1111\ldots}$ for lower right), and
\item antiferromagnetic states (i.e. $\ket{0101\ldots}$ for upper right and $\ket{1010\ldots}$ for lower left).
\end{itemize}
\item Rotation of the plot by $180^\circ$ corresponds to $0\leftrightarrow1$ (changing zeros into ones and vice versa), or equivalently: application of a bit flip to all particles, $(\sigma^x)^{\otimes N}$.
\item Horizontal reflection flips every even qubit ($(\mathbb{I} \otimes \sigma^x)^{\otimes N/2}$).
\item Vertical reflection flips every odd qubit ($(\sigma^x \otimes \mathbb{I})^{\otimes N/2}$).
\end{itemize}
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{figs/qubism/symmetries.pdf}
\caption{Basic geometrical symmetries of the plotting scheme as in Fig~\ref{fig:qubism_simplest}.}
\label{fig:qubism_symmetries}
\end{figure}
The recursive structure is related to the state of consecutive pairs of qubits.
Each quadrant defines a subplot, related to the projection of a wavefunction on a certain state of the first two qubits.
For example, if we measure the first two qubits in the basis $\{\ket{0}, \ket{1}\}$, and obtain $10$, then our new wavefunction is $\bra{10}_{12} \ket{\Psi}$ up to its normalization.
But it is already in the plot --- it is just the lower left quadrant.
If we measure the last two particles, then $\bra{10}_{N-1,N} \ket{\Psi}$ is the same as taking every second pixel in both the $x$ and $y$ directions, i.e. taking all pixels corresponding to sequences ending with $\ldots10$.
In particular, if a state is translationally invariant then the two above coincide, i.e. $\bra{10}_{12} \ket{\Psi} = \bra{10}_{N-1,N} \ket{\Psi}$.
The plotting scheme is valid for an arbitrary number of qubits.
However, once the number of particles gets bigger, it makes little sense to plot anything but translationally invariant states \eqref{eq:transl-inv}.
\subsection{Examples}
\subsubsection{Product state}
Let us start with the simplest possible state --- a product state of the form
\begin{align}
\ket{\Psi} = (\alpha \ket{0} + \beta \ket{1})^{\otimes N},
\end{align}
which is depicted in Fig.~\ref{fig:product_state}.
The pattern is self-similar, but in a somewhat trivial way --- each subplot is proportional to the other subplots.
For example $\bra{00}_{12} \ket{\Psi} \propto \bra{10}_{12} \ket{\Psi}$.
It is directly related to the fact that by measuring the state of one particle we do not disturb the results for the others.
Or, in other words, that the particles are not correlated in any way.
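This proportionality is easy to check numerically. The sketch below (with a hypothetical helper of our own) returns the amplitudes of the product state in lexicographic basis order:

```python
def product_state(alpha, beta, n):
    """Amplitudes of (alpha|0> + beta|1>)^{(x) n}, ordered lexicographically."""
    amps = [1.0 + 0.0j]
    for _ in range(n):
        # append one more particle: each amplitude branches into 0 and 1
        amps = [a * c for a in amps for c in (alpha, beta)]
    return amps
```

For $N=4$, the sub-block of amplitudes with prefix $00$ is proportional to the one with prefix $10$, with the constant ratio $\alpha/\beta$.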
\begin{figure}[!htbp]
\centering
\includegraphics[width=8cm]{figs/qubism/product_state_grid}
\caption{\label{fig:product_state} An array of product states for $N=10$ particles,
for $\alpha=\cos(\theta/2)$ and $\beta=\sin(\theta/2) \exp(i \varphi)$.}
\end{figure}
\subsubsection{Dicke states}
The next state we would like to plot is the \emph{Dicke state} \cite{Dicke1954}, that is
\begin{align}
D^N_k = \binom{N}{k}^{-1/2} \sum_{\text{inequiv. perm.}} \ket{0}^{\otimes (N-k)} \ket{1}^{\otimes k},
\end{align}
or, in other words, the uniform superposition of all basis states with a fixed number of $1$s.
In particular, for $k=1$ we get the \emph{W state}, which for $N=3$ particles is
\begin{equation}
\frac{\ket{001} + \ket{010} + \ket{100}}{\sqrt{3}}.
\end{equation}
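Dicke amplitudes can be generated directly from the definition (a self-contained sketch; by convention, bit $i$ of the basis index encodes the state of particle $i$):

```python
from itertools import combinations
from math import comb, sqrt

def dicke_state(n, k):
    """Amplitudes of |D^n_k> in the computational basis (list of length 2^n)."""
    amplitude = 1.0 / sqrt(comb(n, k))
    state = [0.0] * 2 ** n
    for ones in combinations(range(n), k):  # choose positions of the k ones
        index = sum(1 << (n - 1 - i) for i in ones)
        state[index] = amplitude
    return state
```

For $n=3$, $k=1$ this reproduces the W state above, with amplitude $1/\sqrt{3}$ on $\ket{001}$, $\ket{010}$ and $\ket{100}$.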
For six particles we plot all Dicke states, in Fig.~\ref{fig:dicke-states-6}.
\begin{figure}[!htbp]
\centering
\includegraphics[width=8cm]{figs/qubism/dicke_n6_k_all}
\caption{\label{fig:dicke-states-6}Dicke states for $N=6$ particles, with $k=0,1,\ldots,N$. Notice how the plot changes from all zeros $\ket{000000}$, going through a state with the same number of 0s and 1s ($k=3$) to all ones $\ket{111111}$.}
\end{figure}
For even $N$ we can consider Dicke states with the same number of $0$s as $1$s, that is, with $k=N/2$.
They can be related to the ground state of a fermionic system at half-filling,
where every fermion interacts with every other with the same coefficient.
This state is plotted in Fig.~\ref{fig:dicke-states-hf}, for various particle numbers.
\begin{figure}[!htbp]
\centering
\includegraphics[width=3cm]{figs/article/qubism_njp/W08}
\includegraphics[width=3cm]{figs/article/qubism_njp/W10}
\includegraphics[width=3cm]{figs/article/qubism_njp/W12}
\includegraphics[width=3cm]{figs/article/qubism_njp/W14}
\caption{\label{fig:dicke-states-hf}Dicke states with half-filling for $N=8$, $10$, $12$
and $14$ qubits. Notice how the fractal structure develops.}
\end{figure}
Every Dicke state is permutation-symmetric, i.e. permutation of the particle order leaves it unchanged.
In fact, they form a basis for the permutation-symmetric subspace of qubits;
or equivalently --- for bosonic states in two modes, written in the particle basis.
This means that whenever we find a qubistic plot being a superposition of shapes as in Fig.~\ref{fig:dicke-states-6},
the state is permutation symmetric.
\subsubsection{Ising model in a transverse field}
\label{s:ising_model}
Let us consider the spin-$1/2$ antiferromagnetic Ising model in a transverse field, on a 1D spin chain:
\begin{equation}
H = \sum_{i=1}^N \sigma^z_i \sigma^z_{i+1} - \Gamma \sum_{i=1}^N \sigma^x_i,
\label{eq:itf_model}
\end{equation}
where $\Gamma$ is a parameter describing the strength of the transverse field.
Let us use periodic boundary conditions, i.e. $\sigma^z_{N+1} \equiv \sigma^z_{1}$.
That is, spins of neighboring particles are coupled through the $z$ component of their spins,
while at the same time a perpendicular field tries to align spins along its axis.
Depending on the strength of the transverse field, one of the two alignments dominates.
For $\Gamma=0$ the ground state consists only of two N\'eel states
(i.e. $\ket{0101\ldots}$ and $\ket{1010\ldots}$).
For $\Gamma \to \infty$ the ground state is a product of states pointing in the $x$ direction, i.e.
$\ket{+}=(\ket{0} + \ket{1})/\sqrt{2}$.
But the most interesting is what happens in between. At $\Gamma=1$ there is a quantum phase transition.
The transition is plotted in Fig.~\ref{fig:itf_transition}.
\begin{figure}[!htbp]
\centering
\includegraphics[width=\textwidth]{figs/qubism/ift_transition_z_x}
\caption{\label{fig:itf_transition}Ground state of the Ising model with transverse field Hamiltonian with $N=10$ qubits and periodic boundary conditions.
Values of the transverse field are $\Gamma=0.1$, $0.75$, $1.0$, $1.33$, and $10$.
The critical point, $\Gamma_c=1$, corresponds to the central panel.
We show results both in $\sigma^z$ and $\sigma^x$ basis.}
\end{figure}
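Such ground states are straightforward to obtain numerically for small chains. Below is a minimal, self-contained sketch (our own illustration, not the code behind the figures): it applies the Hamiltonian \eqref{eq:itf_model} to a dense state vector and extracts the ground state by a crude shifted power iteration.

```python
def apply_itf(n, gamma, psi):
    """Apply H = sum_i sz_i sz_{i+1} - gamma * sum_i sx_i (periodic chain)."""
    out = [0.0] * (2 ** n)
    for s in range(2 ** n):
        if psi[s] == 0.0:
            continue
        bits = [(s >> (n - 1 - i)) & 1 for i in range(n)]
        # diagonal part: sigma^z has eigenvalue +1 on |0> and -1 on |1>
        diag = sum((1 - 2 * bits[i]) * (1 - 2 * bits[(i + 1) % n])
                   for i in range(n))
        out[s] += diag * psi[s]
        # off-diagonal part: -gamma * sigma^x flips one spin at a time
        for i in range(n):
            out[s ^ (1 << (n - 1 - i))] -= gamma * psi[s]
    return out

def itf_ground_state(n, gamma, steps=500):
    """Power iteration on (shift*I - H); the shift exceeds the top eigenvalue."""
    shift = 2.0 * n + n * gamma
    psi = [1.0 / 2 ** (n / 2.0)] * 2 ** n  # uniform (spin-flip symmetric) start
    for _ in range(steps):
        h_psi = apply_itf(n, gamma, psi)
        psi = [shift * p - hp for p, hp in zip(psi, h_psi)]
        norm = sum(p * p for p in psi) ** 0.5
        psi = [p / norm for p in psi]
    return psi
```

For $N=6$ and $\Gamma=0.1$ the two dominant amplitudes sit on the N\'eel states, consistent with the small-$\Gamma$ limit described above.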
\subsubsection{Heisenberg Hamiltonian and Majumdar-Ghosh model}
The next system we want to study is the 1D Majumdar-Ghosh model \cite{Majumdar1969}
\begin{equation}
H = \sum_{i=1} \vec{\sigma}_i \cdot \vec{\sigma}_{i+1}
+ J \sum_{i=1} \vec{\sigma}_i \cdot \vec{\sigma}_{i+2},\label{eq:majumdar_ghosh_hamiltonian}
\end{equation}
that is, an antiferromagnetic model with spin-spin interactions between nearest neighbors and second-nearest neighbors.
The latter are parametrized by $J$.
We can consider different boundary conditions:
\begin{itemize}
\item periodic (spin chain forms a circle, summations in \eqref{eq:majumdar_ghosh_hamiltonian} are up to $N$; we identify $\vec{\sigma}_{N+1} \equiv \vec{\sigma}_{1}$ and $\vec{\sigma}_{N+2} \equiv \vec{\sigma}_{2}$),
\item open (spin chain does not form a circle, summations in \eqref{eq:majumdar_ghosh_hamiltonian} are up to $N-1$ and $N-2$).
\end{itemize}
Isotropic spin-spin interaction
\begin{equation}
\tfrac{1}{2}\vec{\sigma}_i \cdot \tfrac{1}{2}\vec{\sigma}_j
= \tfrac{1}{4}\left(\sigma_i^x \sigma_j^x + \sigma_i^y \sigma_j^y + \sigma_i^z \sigma_j^z\right)
\end{equation}
is invariant with respect to collective rotation, i.e. $U^{\otimes N}$ for any $U\in \text{SU}(2)$.
Consequently, all eigenstates of the Hamiltonian built from these operators can be labeled by their total spin number.
In this case, for an even number of particles $N$, we expect the ground state to be a singlet, i.e. to have total spin $0$.
\begin{figure}[!htbp]
\centering
\includegraphics[width=8cm]{figs/qubism/heisenberg_n10_trans_periodic_open}
\caption{\label{fig:heisenberg_majumdar_ghosh}
Majumdar-Ghosh model for periodic and open boundary conditions for $N=10$ qubits.
For $J=0$ it corresponds to the Heisenberg model, while for $J=1/2$ its ground state is the Majumdar-Ghosh state.
The qubistic plot is drawn only in one basis, as in other bases it is the same, due to the ground state being a singlet.
Notice the characteristic Z-like shape and how it is affected by changing boundary conditions. See also Fig.~\ref{fig:heisenberg_majumdar_ghosh_skewed}.
}
\end{figure}
For $J=0$ we have only nearest neighbor interactions --- the Heisenberg model.
For $J=0.5$ the ground state can be exactly found and is called the Majumdar-Ghosh state \cite{Majumdar1970}.
In Fig.~\ref{fig:heisenberg_majumdar_ghosh} we plot the ground state for various $J$,
both for periodic and open boundary conditions.
One of the striking features of this qubistic plot is the Z-like shape.
For now, let us focus only on the anti-diagonal line.
From their position in the plot, these basis states are products of the form:
\begin{equation}
\{\ket{01}, \ket{10} \}^{\otimes N/2}.
\end{equation}
As we see, the absolute values of their amplitudes are the same, but their signs vary.
Colors in the upper-right quadrant ($\ket{01}_{12}$) are complementary to colors in the lower-left quadrant ($\ket{10}_{12}$).
Consequently, we can write the state as
\begin{equation}
\ket{\psi_{ad}} = \frac{\ket{01}_{12} - \ket{10}_{12}}{\sqrt{2}} \otimes \ket{\psi_{ad}}_{34\ldots N}.
\end{equation}
Noticing that the plot is recursive (either graphically, or from the fact that we deal with a translation-invariant state, at least for periodic boundary conditions),
we see that the anti-diagonal state is a product of two-particle singlets $(1,2)(3,4)\ldots(N-1,N)$ --- with the bracket $(i,j)$ denoting the two-particle singlet state of the $i$-th and $j$-th particles --- or
\begin{equation}
\ket{\psi_{ad}} = \left(\frac{\ket{01} - \ket{10}}{\sqrt{2}} \right)^{\otimes N/2}\label{eq:singlet_product_even}.
\end{equation}
But how can we interpret the two remaining lines in the Z-like shape?
For periodic boundary conditions, the ground state needs to be translationally invariant.
After shifting \eqref{eq:singlet_product_even} by $1$ particle, we get a product of singlet pairs for $(2,3)(4,5)\ldots(N,1)$.
It should not be surprising that this state has a very low amplitude for open boundary conditions.
In fact, in \cite{Majumdar1970} it was shown that the ground state of \eqref{eq:majumdar_ghosh_hamiltonian} for $J=1/2$ and periodic boundary conditions
is exactly a superposition of \eqref{eq:singlet_product_even} and its shift, i.e. $(\ket{\psi_{ad}} + T\ket{\psi_{ad}})/\sqrt{2}$.
What may remain puzzling is why, in the qubistic plot, there are two lines for $T\ket{\psi_{ad}}$.
It is related to the fact that for plotting we use consecutive pairs of spins as our ``alphabet''.
Position of a single amplitude is, in the binary system,
\begin{align}
x &= 0.s_2 s_4 \ldots s_N,\\
y &= 0.s_1 s_3 \ldots s_{N-1}.
\end{align}
So, for the singlet pairs $(1,2)(3,4)\ldots(N-1,N)$ we have
\begin{equation}
s_{2k} = 1-s_{2k-1} \quad \text{so} \quad x \approx 1 - y,\label{eq:mg_antidiag}
\end{equation}
where the approximation is up to plot resolution.
For the singlet pairs $(2,3)(4,5)\ldots(N,1)$ we have
\begin{equation}
s_{2k+1} = 1-s_{2k} \quad \text{so} \quad x \approx (1 - 2y) \mod 1,
\end{equation}
as multiplying by $2$ shifts $y$ into the already solved instance \eqref{eq:mg_antidiag}.
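Both relations can be verified mechanically. In the standalone sketch below (our own illustration), \texttt{qubism\_xy} recomputes the coordinates \eqref{eq:qubism_position}, and the tolerances reflect the finite plot resolution:

```python
from itertools import product

def qubism_xy(bits):
    """(x, y) of a basis state under the typical qubism mapping."""
    ys, xs = bits[0::2], bits[1::2]
    frac = lambda digits: sum(int(b) * 2.0 ** -(i + 1)
                              for i, b in enumerate(digits))
    return frac(xs), frac(ys)

def circular_distance(u, v):
    """Distance between u and v, treating [0, 1) as a circle."""
    d = abs(u - v) % 1.0
    return min(d, 1.0 - d)

def antidiagonal_deviation(n):
    """Max deviation from x = 1 - y over singlet products (1,2)(3,4)..."""
    worst = 0.0
    for odd_bits in product("01", repeat=n // 2):
        # every even bit is the complement of the preceding odd bit
        bits = "".join(b + str(1 - int(b)) for b in odd_bits)
        x, y = qubism_xy(bits)
        worst = max(worst, abs(x - (1.0 - y)))
    return worst

def shifted_deviation(n):
    """Max deviation from x = (1 - 2y) mod 1 over singlets (2,3)...(N,1)."""
    worst = 0.0
    for even_bits in product([0, 1], repeat=n // 2):
        # every odd bit is the complement of the previous even bit (cyclically)
        odd = [1 - even_bits[-1]] + [1 - b for b in even_bits[:-1]]
        bits = "".join(str(o) + str(e) for o, e in zip(odd, even_bits))
        x, y = qubism_xy(bits)
        worst = max(worst, circular_distance(x, (1.0 - 2.0 * y) % 1.0))
    return worst
```

For $N=8$ the first family stays within one pixel of the anti-diagonal, and the shifted family within a couple of pixels of the doubled line.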
\subsubsection{Spin-1 and AKLT states}
The qubistic plotting scheme is by no means restricted to qubits.
While we are presenting more general theory in Sec.~\ref{s:qubism_general},
it is straightforward to make a generalization for quantum states built out of qudits ($d$-level systems).
For example, let us focus on $d=3$ in terms of a spin-1 system.
As a basis, we can use eigenstates of the spin operator in the $z$-th direction.
Then, the local basis is $\{-1,0,1\}$ or, for the sake of simplicity,
$\{-,0,+\}$.
The only difference from $d=2$ (i.e. qubits) is that instead of dividing the square into $2 \times 2$ quadrants, we divide it into a $3 \times 3$ grid, see Fig.~\ref{fig:qubism_qutrits}.
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.20\textwidth]{figs/qubism/simplest_qutrits.pdf}
\caption{Qubism plotting scheme for qutrits, analogous to Fig.~\ref{fig:qubism_simplest}.}
\label{fig:qubism_qutrits}
\end{figure}
As an example, let us choose the Affleck-Kennedy-Lieb-Tasaki (AKLT) state \cite{Affleck1987},
i.e. the ground state of the following Hamiltonian:
\begin{equation}
H = \sum_{i=1}^N \vec{S}_i \cdot \vec{S}_{i+1}
+ \frac{1}{3} (\vec{S}_i \cdot \vec{S}_{i+1})^2,
\label{eq:haldane_hamiltonian}
\end{equation}
where $\vec{S}$ is the spin-1 operator,
i.e. $\vec{S}=(S_x, S_y, S_z)$ and
\begin{align}
S_x =
\frac{1}{\sqrt{2}}
\left[
\begin{matrix}
0 & 1 & 0\\
1 & 0 & 1\\
0 & 1 & 0
\end{matrix}
\right]
\quad
S_y =
\frac{1}{\sqrt{2} i}
\left[
\begin{matrix}
0 & 1 & 0\\
-1 & 0 & 1\\
0 & -1 & 0
\end{matrix}
\right]
\quad
S_z =
\left[
\begin{matrix}
1 & 0 & 0\\
0 & 0 & 0\\
0 & 0 & -1
\end{matrix}
\right].
\end{align}
This state is an example of a valence bond solid, and has attracted considerable attention because of its relation to the Haldane conjecture \cite{Haldane1983},
its non-local order parameter \cite{DenNijs1989} and as a source of inspiration for tensor-network states \cite{Perez-Garcia2006}.
\begin{figure}
\centering
\includegraphics[width=6cm]{figs/qubism/aklt_ground_n06.png}
\caption{\label{fig:aklt} Ground state of the AKLT spin-1 Hamiltonian,
for $N=6$ spins.}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=4cm]{figs/article/qubism_njp/H06}
\includegraphics[width=4cm]{figs/article/qubism_njp/H08}
\includegraphics[width=4cm]{figs/article/qubism_njp/H10}
\caption{\label{fig:aklt_nonzero} Ground state of the AKLT spin-1 Hamiltonian,
for $N=6$, $8$ and $10$ spins. The same color is used to every non-zero amplitude.
Notice how the characteristic fractal structure of a snowflake develops.}
\end{figure}
We plot the ground state in Fig.~\ref{fig:aklt},
and show its fractal structure in Fig.~\ref{fig:aklt_nonzero},
where we reduce the wavefunction to zero and non-zero values.
The qubistic plot shows that no sequence with non-zero amplitude contains two consecutive $+$s or $-$s.
Furthermore, we see that when the last entry was $+$, the next one cannot start with $+$ or be $0+$ (and analogously for $-$).
Thus, we arrive at a rather accurate description of the AKLT state, which contains all sequences of alternating $+$ and $-$, separated by arbitrary numbers of $0$s.
It is worth noting that the AKLT state is a prototypical matrix product state.
That is, it can be written \cite[Sec. 4.1.5.]{Schollwock2011} as
\begin{equation}
\alpha_{s_1 s_2 \ldots s_N} = \hbox{Tr} \left[ A^{s_1} A^{s_2} \cdots A^{s_N} \right],
\end{equation}
where matrices are
\begin{equation}
A^{-} = -\sqrt{\tfrac{2}{3}}
\left[
\begin{matrix}
0 & 0\\
1 & 0
\end{matrix}
\right]
\quad
A^{0} = \sqrt{\tfrac{1}{3}}
\left[
\begin{matrix}
-1 & 0\\
0 & 1
\end{matrix}
\right]
\quad
A^{+} = \sqrt{\tfrac{2}{3}}
\left[
\begin{matrix}
0 & 1\\
0 & 0
\end{matrix}
\right].
\end{equation}
Alternatively, with the notation from \cite{Crosswhite2008}, i.e. using
\begin{equation}
A \equiv \ket{-} A^{-} + \ket{0} A^{0} + \ket{+} A^{+}
\end{equation}
we get
\begin{equation}
A =
\begin{bmatrix}
\tfrac{-1}{\sqrt{3}} \ket{0} & \tfrac{\sqrt{2}}{\sqrt{3}} \ket{+}\\
-\tfrac{\sqrt{2}}{\sqrt{3}} \ket{-} & \tfrac{1}{\sqrt{3}} \ket{0}
\end{bmatrix}.
\end{equation}
That is, we use pure states as matrix entries, and use tensor product when multiplying matrices.
For instance
\begin{align}
\hbox{Tr} [ A A ]
&= \hbox{Tr}
\begin{bmatrix}
\tfrac{1}{3} \ket{00} - \tfrac{2}{3} \ket{+-} &
- \tfrac{\sqrt{2}}{3} \ket{0+} + \tfrac{\sqrt{2}}{3} \ket{+0} \\
\tfrac{\sqrt{2}}{3} \ket{-0} - \tfrac{\sqrt{2}}{3} \ket{0-}&
- \tfrac{2}{3} \ket{-+} + \tfrac{1}{3} \ket{00}
\end{bmatrix}\\
&= \tfrac{2}{3} \left( \ket{00} - \ket{+-} - \ket{-+}\right).
\end{align}
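With the matrices above, the amplitude of any configuration follows from a product of $2\times2$ matrices. The sketch below (our own illustration) uses the sign convention of the matrices $A^{\pm}$, $A^{0}$ as given above:

```python
from math import sqrt

# AKLT MPS matrices, with the sign convention given above
A = {
    '-': [[0.0, 0.0], [-sqrt(2.0 / 3.0), 0.0]],
    '0': [[-1.0 / sqrt(3.0), 0.0], [0.0, 1.0 / sqrt(3.0)]],
    '+': [[0.0, sqrt(2.0 / 3.0)], [0.0, 0.0]],
}

def matmul(m, a):
    """Product of two 2x2 matrices."""
    return [[sum(m[i][k] * a[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def aklt_amplitude(sequence):
    """alpha_s = Tr[A^{s_1} ... A^{s_N}] for a string over the alphabet '-0+'."""
    m = [[1.0, 0.0], [0.0, 1.0]]
    for s in sequence:
        m = matmul(m, A[s])
    return m[0][0] + m[1][1]
```

One can check directly that configurations with two $+$s not separated by a $-$ (such as \texttt{+00+--}) have zero amplitude, reproducing the structure visible in Fig.~\ref{fig:aklt}.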
\subsection{General framework}\label{s:qubism_general}
From a very abstract point of view,
the set of all possible visualizations of an $N$-qudit wavefunction onto a unit square is:
\begin{align}
\left( \{0,1,\ldots,d-1\}^N \rightarrow \mathbb{C} \right)
\rightarrow \left( [0,1]^2 \rightarrow \mathbb{R}_{\geq 0}^3 \right),
\end{align}
where $\mathbb{R}_{\geq 0}^3$ stands for intensities of red, green and blue components.
This formula is very general --- it also includes writing amplitudes with fixed precision numbers.
However, we want to focus on specific visualizations, where
\begin{itemize}
\item all amplitudes are shown,
\item each amplitude is represented by a color,
\item the position of the region in which a certain amplitude is drawn does not depend on any of the amplitude values.
\end{itemize}
That is, we restrict ourselves to visualizations which can be formulated as
\begin{align}
[0,1] \times[0,1] &\to \{0,1,\ldots,d-1\}^N\label{eq:geometric_mapping}\\
\mathbb{C} &\to \mathbb{R}_{\geq 0}^3,\nonumber
\end{align}
that is, visualization schemes for which each position is related to some sequence $\vec{s}$, and the color there is the color for amplitude $\alpha_{\vec{s}}$.
When it comes to the spatial mapping, we would like to add assumptions related to its recursive structure (making it a \emph{qubistic} visualization, not just any ordering of amplitudes on a square).
For a function $f$ as in \eqref{eq:geometric_mapping}, we take the inverse image $f^{-1}(\{\cdot\})$, which for every spin sequence gives the region it is mapped to, and require:
\begin{align}
f^{-1}(\{s_1 s_2 \ldots s_N\})
= A_{s_1} f^{-1}(\{s_2 s_3 \ldots s_N\}),
\end{align}
where $A_{s}$ is an affine transformation; the images of the $A_{s_1}$ are both complete and non-intersecting:
\begin{align}
\bigcup_{s_1} A_{s_1} f^{-1}(\{s_2 s_3 \ldots s_N\})
= f^{-1}(\{s_2 s_3 \ldots s_N\})\\
s_i \neq s_j \Rightarrow
\left( A_{s_i} f^{-1}(\{s_2 s_3 \ldots s_N\}) \right)
\cap
\left( A_{s_j} f^{-1}(\{s_2 s_3 \ldots s_N\}) \right)
= \emptyset.
\end{align}
The second condition can be relaxed --- the intersection does not need to be empty; measure zero is enough.
Moreover, instead of using states of one particle as an alphabet, we can use states of a small number of consecutive particles,
e.g. $\{\ket{00}, \ket{01}, \ket{10}, \ket{11}\}$,
though we will not do it for all visualizations we study here.
This recipe can be easily generalized to qudits and to higher-dimensional representations.
For example, a 3D representation can easily show relations among 3 consecutive particles.
\subsubsection{Technical remarks}
Since the global phase has no physical meaning, we can fix it by setting the phase according to one of these recipes:
\begin{itemize}
\item Ensure that a certain selected amplitude is positive (arbitrary and not always possible),
\item Ensure that the sum of the wavefunction entries is positive (not always possible; for singlet states it is always impossible).
\item For a sequence of wavefunctions, ensure that $\braket{\psi_i}{\psi_{i+1}}$ is positive (works only for sequences, with consecutive entries being non-orthogonal; the starting global phase remains arbitrary).
\end{itemize}
For real wavefunctions it is somewhat easier, as only the sign can change. Yet even in this case, when, for example,
tracking how the ground state changes as Hamiltonian parameters are modified, it is better to have coherent colors.
It is especially important for processes where changes of the phase are important, for example --- the Berry phase \cite{Berry1984} acquired for a state evolving in an adiabatically changed setting.
As another remark, the recursive structure allows us to find the positions of ferromagnetic and antiferromagnetic states as the limit of
\begin{align}
\lim_{N \to \infty} A_{s_1 s_2}^N
\left[
\begin{matrix}
1/2\\
1/2\\
1
\end{matrix}
\right].
\end{align}
\subsection{Mappings}
\subsubsection{Typical mapping for qubits}
The typical mapping for qubits, which we used in the previous examples, is given by the following affine transformations (cf. Fig.~\ref{fig:qubism_mapping_simplest}):
\begin{align}
A_{s_1s_2} &=
\left[
\begin{matrix}
B & \vec{r}_{s_1s_2}\\
0 & 1
\end{matrix}
\right]
\end{align}
where $B$ is a matrix scaling down by a factor of $2$, and $\vec{r}_{s_1s_2}$ is a translation depending on two consecutive spins, i.e.:
\begin{equation}
B =
\left[
\begin{matrix}
1/2 & 0\\
0 & 1/2
\end{matrix}
\right]
\end{equation}
\begin{equation}
\vec{r}_{00} =
\left[
\begin{matrix}
-1/4\\
1/4
\end{matrix}
\right]
\quad
\vec{r}_{01} =
\left[
\begin{matrix}
1/4\\
1/4
\end{matrix}
\right]
\quad
\vec{r}_{10} =
\left[
\begin{matrix}
-1/4\\
-1/4
\end{matrix}
\right]
\quad
\vec{r}_{11} =
\left[
\begin{matrix}
1/4\\
-1/4
\end{matrix}
\right]
\end{equation}
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.30\textwidth]{figs/qubism/mapping_simplest.pdf}
\caption{Qubism mapping with affine transforms.}
\label{fig:qubism_mapping_simplest}
\end{figure}
or as depicted in Fig.~\ref{fig:qubism_mapping_simplest} (compare it to Fig.~\ref{fig:qubism_simplest}).
\subsubsection{Alternate mapping for qubits}
We can define an alternative mapping that puts emphasis on the difference between ferromagnetic and antiferromagnetic states.
It is similar to the original one, but the positions for $10$ and $11$ are swapped.
That is:
\begin{equation}
\vec{r}_{00} =
\left[
\begin{matrix}
-1/4\\
1/4
\end{matrix}
\right]
\quad
\vec{r}_{01} =
\left[
\begin{matrix}
1/4\\
1/4
\end{matrix}
\right]
\quad
\vec{r}_{10} =
\left[
\begin{matrix}
1/4\\
-1/4
\end{matrix}
\right]
\quad
\vec{r}_{11} =
\left[
\begin{matrix}
-1/4\\
-1/4
\end{matrix}
\right]
\end{equation}
In this mapping ferromagnetic states are on the left, while antiferromagnetic states are on the right.
In general, for qubits there are only three inequivalent qubistic square plotting schemes.
That is, there are $4!=24$ permutations of $\{ \ket{00}, \ket{01}, \ket{10}, \ket{11}\}$,
but the symmetry group of the square has $8$ elements (identity, 3 rotations, 4 reflections).
Or, in other words,
a square scheme for qubits can be defined by specifying which pair label sits in the square diagonally opposite to $\ket{00}$.
However, if we consider visualization of states up to translations, then we end up with only two schemes.
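The count of three inequivalent schemes can be confirmed by brute force (a standalone sketch of our own, not part of any published code): we enumerate all $4!$ assignments of the pair labels to the quadrants and merge those related by one of the $8$ symmetries of the square.

```python
from itertools import permutations

# the four quadrants of the square, as (x, y) coordinates
QUADRANTS = [(0, 0), (1, 0), (0, 1), (1, 1)]

def square_symmetries():
    """The 8 symmetries of the unit square, acting on quadrant coordinates."""
    ops = []
    for rotations in range(4):
        for flip in (False, True):
            def op(p, rotations=rotations, flip=flip):
                x, y = p
                if flip:
                    x, y = y, x        # reflection across the main diagonal
                for _ in range(rotations):
                    x, y = 1 - y, x    # rotation by 90 degrees
                return (x, y)
            ops.append(op)
    return ops

def count_inequivalent_schemes():
    """Number of orbits of label assignments under square symmetries."""
    seen, orbits = set(), 0
    for assignment in permutations(QUADRANTS):
        # assignment[i] is the quadrant of the i-th label (00, 01, 10, 11)
        if assignment in seen:
            continue
        orbits += 1
        for op in square_symmetries():
            seen.add(tuple(op(p) for p in assignment))
    return orbits
```

Since only the identity symmetry fixes all four (distinct) labels, every orbit has size $8$, giving $24/8 = 3$ schemes.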
Note that this alternate mapping is the same as the typical mapping of a state subjected to a controlled-NOT gate on each consecutive pair, i.e.
\begin{equation}
\ket{\psi} \mapsto
\Big( \ket{0}\bra{0} \otimes \mathbb{I} + \ket{1}\bra{1} \otimes \sigma^x \Big)^{\otimes N/2} \ket{\psi}.
\end{equation}
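This equivalence can be checked exhaustively for a small chain. The sketch below encodes our reading of the two mappings (an assumption we make explicit: in the typical mapping each pair contributes its first bit to $y$ and its second to $x$; in the alternate one the $x$ digit is the XOR of the pair):

```python
from itertools import product

def typical_xy(bits):
    """Typical mapping: within each pair, bit 1 sets y and bit 2 sets x."""
    frac = lambda ds: sum(int(b) * 2.0 ** -(i + 1) for i, b in enumerate(ds))
    return frac(bits[1::2]), frac(bits[0::2])

def alternate_xy(bits):
    """Alternate mapping (assumed): x digit is s1 XOR s2, y digit is s1."""
    pairs = [(int(bits[i]), int(bits[i + 1])) for i in range(0, len(bits), 2)]
    frac = lambda ds: sum(d * 2.0 ** -(i + 1) for i, d in enumerate(ds))
    return frac([a ^ b for a, b in pairs]), frac([a for a, _ in pairs])

def cnot_on_pairs(bits):
    """Controlled-NOT within each consecutive pair: (a, b) -> (a, a xor b)."""
    out = []
    for i in range(0, len(bits), 2):
        a, b = int(bits[i]), int(bits[i + 1])
        out += [str(a), str(a ^ b)]
    return "".join(out)

def mappings_agree(n):
    """Check alternate(s) == typical(CNOT(s)) for all n-bit basis states."""
    return all(alternate_xy(b) == typical_xy(cnot_on_pairs(b))
               for b in ("".join(p) for p in product("01", repeat=n)))
```

Under these conventions the two position assignments coincide for every basis state.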
For example, in Fig.~\ref{fig:heisenberg_majumdar_ghosh_skewed} we show the ground states of the Majumdar-Ghosh model using the alternate mapping. The physical content is the same as in Fig.~\ref{fig:heisenberg_majumdar_ghosh}.
\begin{figure}[!htbp]
\centering
\includegraphics[width=8cm]{figs/qubism/heisenberg_n10_trans_skewed}
\caption{\label{fig:heisenberg_majumdar_ghosh_skewed}
Majumdar-Ghosh model for periodic and open boundary conditions for $N=10$ qubits,
plotted using the alternate mapping for qubits. Cf. Fig.~\ref{fig:heisenberg_majumdar_ghosh}.
Note that for products of singlets of the form $(1,2)(3,4)\ldots(N-1,N)$, as in \eqref{eq:singlet_product_even}, we obtain a line on the right (instead of the anti-diagonal line).
}
\end{figure}
\subsubsection{Square mapping for qudits}
The square visualizations can be generalized for $d$-level systems, as exemplified in Fig.~\ref{fig:qudits_mapping}.
In this case we have:
\begin{equation}
B =
\left[
\begin{matrix}
1/d & 0\\
0 & 1/d
\end{matrix}
\right]
\end{equation}
and
\begin{equation}
\vec{r}_{s_1 s_2} =
\left[
\begin{matrix}
\frac{2 s_1 + 1}{2 d} - \frac{1}{2}\\
\frac{2 f(s_1, s_2) + 1}{2 d} - \frac{1}{2}
\end{matrix}
\right]
\end{equation}
for $s_1,s_2 \in \{0, 1, \ldots, d-1 \}$,
where for the typical scheme we have
\begin{equation}
f(s_1, s_2) = s_2
\end{equation}
and for the alternate one:
\begin{equation}
f(s_1, s_2) = (s_2 - s_1) \mod d.
\end{equation}
In general, $f$ can be any permutation of $s_2$ as a function of $s_1$.
Such a mapping can also be understood in terms of coordinates as in \eqref{eq:qubism_position},
where instead of base $2$ we use base $d$.
In fact, we have already used this mapping (for $d=3$) in Figures \ref{fig:aklt} and \ref{fig:aklt_nonzero}.
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.80\textwidth]{figs/qubism/mapping_qudits.pdf}
\caption{Qubism mapping for qudits in two variants, typical and alternate; example for $d=5$.}
\label{fig:qudits_mapping}
\end{figure}
\subsubsection{Triangular scheme}
Square plots are not the only possibility.
One of their shortcomings is that they look different for even and odd numbers of particles.
That is, for an odd number of particles each pixel is a rectangle instead of a square.
Let us create a plot starting from a right triangle, with vertices at $(-1,0)$, $(0,1)$ and $(1,0)$.
It can be split into two similar triangles, scaled by a factor $1/\sqrt{2}$.
The shifts, starting from the middle of the basis of the triangle, are
\begin{equation}
\vec{r}_0 =
\begin{bmatrix}
-\tfrac{1}{2}\\
+\tfrac{1}{2}
\end{bmatrix}
\quad
\vec{r}_1 =
\begin{bmatrix}
\tfrac{1}{2}\\
\tfrac{1}{2}
\end{bmatrix}
\end{equation}
and the linear transformations are
\begin{equation}
B_0 =
\begin{bmatrix}
-\tfrac{a}{2} & -\tfrac{1}{2}\\
\tfrac{a}{2} & -\tfrac{1}{2}
\end{bmatrix}
\quad
B_1 =
\begin{bmatrix}
-\tfrac{a}{2} & \tfrac{1}{2}\\
-\tfrac{a}{2} & -\tfrac{1}{2}
\end{bmatrix},
\end{equation}
where the parameter $a=1$ corresponds to a rotation and $a=-1$ to a reflection.
Both variants are depicted in Fig.~\ref{fig:qubism_scheme_triangle}.
We provide example plots in Fig.~\ref{fig:triangular_examples}.
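A direct transcription of these affine maps (a sketch with our own function name; we compose the maps along a bit string, starting from the middle of the triangle's base):

```python
import numpy as np

def triangle_point(bits, a=1):
    """Locate the sub-triangle for a bit string by composing the affine maps
    r_s + B_s * (...), with a = 1 (rotation) or a = -1 (reflection)."""
    r = {0: np.array([-0.5, 0.5]), 1: np.array([0.5, 0.5])}
    B = {0: np.array([[-a / 2, -0.5], [a / 2, -0.5]]),
         1: np.array([[-a / 2, 0.5], [-a / 2, -0.5]])}
    p = np.zeros(2)   # middle of the base of the big triangle
    M = np.eye(2)     # accumulated linear map
    for s in bits:
        p = p + M @ r[s]
        M = M @ B[s]
    return p
```

Each $B_s$ has orthogonal columns of norm $1/\sqrt{2}$, so every step contracts by the factor $1/\sqrt{2}$, as required by the similar-triangle construction.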
\begin{figure}[!htbp]
\centering
\includegraphics[width=10cm]{figs/qubism/mapping_triangle}
\caption{\label{fig:qubism_scheme_triangle} Two variants for the triangular qubistic plotting schemes.}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=6cm]{figs/article/qubism_njp/F12_new_tri}\\
product state
\vspace{5mm}
\includegraphics[width=6cm]{figs/article/qubism_njp/wf_itf_af_012_010_tri}\\
ITF ground state for $\Gamma=1$
\vspace{5mm}
\includegraphics[width=6cm]{figs/article/qubism_njp/H_012_001_tri}\\
Heisenberg ground state
\caption{\label{fig:triangular_examples}Triangular representations of many-body wavefunctions, all for $N=12$ qubits.}
\end{figure}
\subsubsection{Other qubistic schemes}
One can design other visualizations.
For example, a qubistic scheme based on splitting an equilateral triangle into $4$ similar triangles would produce a pattern similar to that of the Sierpiński triangle.
Furthermore, while the presented visualizations emphasize two-body correlations, it is possible to devise a qubistic scheme capturing few-body relations.
For instance, a three-dimensional visualization analogous to Fig.~\ref{fig:qubism_simplest} would reveal 3-particle correlations.
Alternatively, particles can be gathered in tuples: instead of considering a system of $n$ particles of $d$ levels, we can consider $n/k$ particles of $d^k$ levels.
\subsubsection{Schmidt plot}
\label{sec:schmidt_plot}
Besides qubistic plotting schemes, with their recursive structure,
we would like to discuss one more type of plots --- \emph{Schmidt plots}.
Justification for the name will be given later, related to the Schmidt decomposition.
A pure state of $N$ particles can be decomposed into correlated systems of $k$ and $N-k$ particles.
That is, parameterizing the wavefunction by indices related to the respective sets of particles, we effectively get a matrix $\ket{\psi}_{\mu,\nu}$.
We plot this matrix as a density plot, with the same color scheme as discussed throughout this chapter.
We call this kind of plot the \emph{Schmidt plot}, as it is related to the Schmidt decomposition (i.e. the Singular Value Decomposition of the matrix $\ket{\psi}_{\mu,\nu}$).
A product state with respect to a given partition is a state
\begin{equation}
\ket{\psi}_{\mu,\nu} = \ket{\phi_1}_\mu \ket{\varphi_1}_\nu.
\end{equation}
Consequently, we can see entanglement by observing the structure of the plot.
An alternative description of the Schmidt plot is plotting amplitudes of a wavefunction in a similar manner to that of \eqref{eq:qubism_position}, but using the $y$ coordinate for first $k$ particles and $x$ for the last $N-k$ particles.
Also note that the typical qubistic scheme is equivalent to the Schmidt plot for the partition into odd- and even-numbered particles.
In particular, if the qubistic scheme is a product of horizontal and vertical lines, it means there is no entanglement between subsets
\begin{equation}
(1,3,5,\ldots,N-1) \qquad \text{and} \qquad (2,4,6,\ldots,N).
\end{equation}
Furthermore, the Schmidt plot is related to a variant of the qubistic scheme where, instead of starting with the first particles, we start with the middle ones, i.e. with the consecutive pairs being $(N/2, N/2 + 1)$, $(N/2 - 1, N/2 + 2)$, $(N/2 - 2, N/2 + 3)$, $\ldots$.
This plot is the same as the Schmidt plot for the following ordering of particles:
\begin{equation}
(N/2, N/2 - 1, \ldots, 1, N/2 + 1, N/2 + 2, \ldots, N).
\end{equation}
Thanks to these similarities, some Schmidt plots look the same as their qubistic variants: for example plots of permutation-symmetric states, for which the ordering of particles is irrelevant.
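In code, a Schmidt plot is just a reshape of the amplitude vector into the matrix $\ket{\psi}_{\mu,\nu}$ (a minimal numpy sketch; the function name is ours and amplitudes are assumed ordered with the first particle most significant):

```python
import numpy as np

def schmidt_matrix(amplitudes, k, d=2):
    """Reshape a state vector on N qudits into psi_{mu, nu}:
    mu runs over the first k particles, nu over the remaining ones."""
    psi = np.asarray(amplitudes)
    rows = d ** k
    return psi.reshape(rows, psi.size // rows)

# Example: the two-qubit singlet has Schmidt rank 2 across the 1|1 cut,
# while a product state such as |00> has rank 1.
singlet = np.array([0, 1, -1, 0]) / np.sqrt(2)
rank = np.linalg.matrix_rank(schmidt_matrix(singlet, k=1))  # -> 2
```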
\subsection{Entanglement visualization}
\label{sec:entanglement-visualization}
One of the hallmark properties of quantum mechanics is the existence of entanglement \cite{Horodecki2009,Amico2008,Eisert2010} --- many-particle correlations that cannot be described by classical models.
In this section we present a general way to visualize quantum entanglement.
While, given a pure state, it is easy to compute whether a system split into two parties is entangled, is it possible to plot a state in a way that entanglement is visible?
For a pure state $\ket{\psi}$ we can perform the Schmidt decomposition:
\begin{equation}
\ket{\psi} = \sum_i \lambda_i \ket{\phi_i} \ket{\varphi_i},
\label{eq:schmidt_decomposition}
\end{equation}
and entanglement is stored in the Schmidt coefficients $\lambda_i$.
The Schmidt decomposition is the Singular Value Decomposition of the matrix $\ket{\psi}_{\mu,\nu}$,
where indices $\mu$ and $\nu$ are related to the first and the second subsystem, respectively.
A straightforward way to visualize such system would be to show the matrix using a Schmidt plot, as in Sec.~\ref{sec:schmidt_plot}.
However, it allows us to visualize entanglement only for one particular splitting.
We will show that with qubism it is possible to show entanglement for various splittings within one plot.
Alternatively to \eqref{eq:schmidt_decomposition}, we can perform the partial trace of one of the subsystems, and get a reduced density matrix
\begin{equation}
\rho_1 = \sum_i \lambda_i^2 \ket{\phi_i}\bra{\phi_i},
\label{eq:schmidt_partial_trace}
\end{equation}
where the states $\ket{\phi_i}$ and numbers $\lambda_i$ are exactly as in \eqref{eq:schmidt_decomposition}.
A general way to assess bipartite entanglement is to use the R\'enyi entropy of the squared Schmidt coefficients
\begin{equation}
H_q(\{\lambda_i^2 \}) = \tfrac{1}{1-q}
\ln \sum_i \lambda_i^{2q}
= \tfrac{1}{1-q} \ln \hbox{Tr}\, \rho_1^q.
\label{eq:schmidt_entropy}
\end{equation}
In particular, the most important entanglement measures can be expressed in terms of \eqref{eq:schmidt_entropy} as follows:
\begin{itemize}
\item The von Neumann entropy is the Shannon entropy of the Schmidt coefficients squared --- i.e. $H_{q \to 1}$. It plays an important role in quantum information.
\item The Schmidt rank (number of non-zero Schmidt components) is $\exp(H_{q \to 0})$. It is important for $GL$ transformations of states, i.e. which states can be obtained (with any non-zero probability) from a given state, when one can use any local operations.
\item State purity is $\exp(-H_2)=\sum_i \lambda_i^4$. It is often used, as it is easy to calculate and to relate to other quantities.
\end{itemize}
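Numerically, the Schmidt coefficients and the entropies above follow from an SVD (a minimal numpy sketch with our own function name; the probabilities entering the entropies are $\lambda_i^2$):

```python
import numpy as np

def schmidt_entropies(psi_matrix, qs=(0, 1, 2)):
    """Renyi entropies H_q of the probabilities lambda_i^2,
    where lambda_i are the singular values of psi_{mu, nu}."""
    lam = np.linalg.svd(psi_matrix, compute_uv=False)
    p = lam[lam > 1e-12] ** 2         # drop numerically-zero coefficients
    out = {}
    for q in qs:
        if q == 1:                    # von Neumann / Shannon limit
            out[q] = float(-np.sum(p * np.log(p)))
        elif q == 0:                  # log of the Schmidt rank
            out[q] = float(np.log(len(p)))
        else:
            out[q] = float(np.log(np.sum(p ** q)) / (1 - q))
    return out

# Example: a Bell state across the 1|1 cut has H_q = log 2 for every q.
bell = np.eye(2) / np.sqrt(2)
```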
When there is only one non-zero Schmidt coefficient, the state is not entangled --- it is a product state:
\begin{equation}
\ket{\psi} = \ket{\phi_1} \ket{\varphi_1}
\end{equation}
and has all entanglement entropies equal to $0$.
The maximally entangled state is
\begin{equation}
\ket{\psi} = \tfrac{1}{\sqrt{m}} \sum_{i=1}^m \ket{\phi_i} \ket{\varphi_i},
\end{equation}
where $m$ is the smaller of the two dimensions.
For such a state all entropies are $H_{q}=\ln m$.
It is easy to compute \eqref{eq:schmidt_entropy}, but how to visualize it?
Let us go back to writing the state as partitioned between two parties (but not Schmidt-decomposed as in \eqref{eq:schmidt_decomposition}; this time we use a fixed local basis)
\begin{align}
\ket{\phi} &= \sum_{ik} \alpha_{ik} \ket{i} \ket{k}\\
&\equiv \sum_k \ket{\Xi_k} \ket{k}.\nonumber
\label{eq:xi-parition}
\end{align}
So, the reduced density matrix for the first subsystem reads
\begin{align}
\rho_1 &= \sum_{ijk} \alpha_{ik}
\alpha_{jk}^* \ket{i} \bra{j}\\
&= \sum_k \ket{\Xi_k} \bra{\Xi_k}.
\end{align}
Bear in mind that the vectors $\ket{\Xi_k}$ are neither normalized nor orthogonal to each other.
The Schmidt number is, equivalently, the rank of $\rho_1$, i.e.
the dimension of the subspace spanned by
\begin{equation}
X = \left\{ \ket{\Xi_k} \right\}_{k},
\end{equation}
which equals the number of linearly independent vectors in $X$.
All other entanglement measures can be described by the set of vectors $X$.
In particular, the purity is
\begin{align}
\hbox{Tr}( \rho_1^2 ) &= \sum_{kk'} \left| \braket{\Xi_k}{\Xi_{k'}} \right|^2\\
&= \sum_{k} \left| \braket{\Xi_k}{\Xi_{k}} \right|^2
+ \sum_{k \neq k'} \left| \braket{\Xi_k}{\Xi_{k'}} \right|^2.\nonumber
\label{eq:tiles_purity}
\end{align}
As a side note, such vectors are related to classical probabilities once the off-diagonal terms in $\rho_1$ are removed.
Then, instead of the number of linearly independent terms, we get the number of non-zero terms; in the case of purity, only the $k=k'$ terms in \eqref{eq:tiles_purity} survive.
Intuitively speaking, we lose the interference between different vectors in $X$.
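The purity formula above can be checked directly from the $\ket{\Xi_k}$ vectors (a sketch; the function names and the parameter `n_first` are ours):

```python
import numpy as np

def purity_from_xi(psi, n_first, d=2):
    """Tr(rho_1^2) from the unnormalized vectors |Xi_k>: the columns of the
    state reshaped so that rows index the first n_first particles."""
    m = np.asarray(psi).reshape(d ** n_first, -1)
    gram = m.conj().T @ m            # entries <Xi_k | Xi_k'>
    return float(np.sum(np.abs(gram) ** 2))

def classical_part(psi, n_first, d=2):
    """Only the k = k' terms of the purity sum (interference dropped)."""
    m = np.asarray(psi).reshape(d ** n_first, -1)
    return float(np.sum(np.sum(np.abs(m) ** 2, axis=0) ** 2))
```

For a Bell state the purity drops to $1/2$ (maximally mixed single-qubit reduced state), while for a product state it stays $1$.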
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.40\textwidth]{figs/qubism/tiles_entanglement.pdf}
\caption{Schematic for presenting a qubistic plot for $N$ qubits as $2^4$ tiles, each being a plot of $N-4$ qubits.}
\label{fig:tiles_entanglement}
\end{figure}
Describing the system as two subsystems --- one with the first $k$ particles and the other with the last $N-k$ particles, as in \eqref{eq:xi-parition} --- can be readily presented graphically, see Fig.~\ref{fig:tiles_entanglement}.
The same presentation makes it easy to perform a projective measurement.
If we measure the state of, say, the first $4$ qubits, and the outcome is $0110$
(which happens with probability $\braket{\Xi_{0110}}{\Xi_{0110}}$),
the final state is
$\ket{\Xi_{0110}}/\sqrt{\braket{\Xi_{0110}}{\Xi_{0110}}}$.
As hinted, our goal is to visualize quantum entanglement. In Fig.~\ref{fig:tiles_entanglement_2x2} we provide plots for some $4$-qubit state and describe how entanglement between the first and last two particles can be spotted, with no calculations.
\begin{figure}
\begin{center}
\begin{tabular}{cccccc}
\raisebox{0.65cm}{$\{\ket{0},\ket{1}\}^{4}$} &
\includegraphics[width=1.3cm]{figs/article/qubism_njp/blocks_sep_01} &
\includegraphics[width=1.3cm]{figs/article/qubism_njp/blocks_ghz_01} &
\includegraphics[width=1.3cm]{figs/article/qubism_njp/blocks_w_01} &
\includegraphics[width=1.3cm]{figs/article/qubism_njp/blocks_d2_01} &
\includegraphics[width=1.3cm]{figs/article/qubism_njp/blocks_maxpmmp_01}\\
\raisebox{0.65cm}{$\{\ket{P},\ket{M}\}^{4}$} &
\includegraphics[width=1.3cm]{figs/article/qubism_njp/blocks_sep_pm} &
\includegraphics[width=1.3cm]{figs/article/qubism_njp/blocks_ghz_pm} &
\includegraphics[width=1.3cm]{figs/article/qubism_njp/blocks_w_pm} &
\includegraphics[width=1.3cm]{figs/article/qubism_njp/blocks_d2_pm} &
\includegraphics[width=1.3cm]{figs/article/qubism_njp/blocks_maxpmmp_pm}\\
&
$\ket{0000}$ &
$\ket{GHZ}$ &
$\ket{W}$ &
$\ket{D^4_2}$ &
$\ket{\chi}$ \\
Schmidt rank: &
1 &
2 &
2 &
3 &
4 \\
ent. entropy: &
$0$ &
$\log 2 = 1$ &
$\log 2 = 1$ &
$\log 3-\frac{1}{3}$ &
$\log 4 = 2$ \\
&
&
&
&
$\approx 1.25$ &
\end{tabular}
\end{center}
\caption{\label{fig:tiles_entanglement_2x2} Entanglement estimation for the 2--2 partition of four-qubit states. As examples we use a separable state $\ket{0000}$, the Greenberger–Horne–Zeilinger state $\ket{GHZ}$, the W state (i.e. the Dicke state with one excitation) $\ket{W}$, the Dicke state with two excitations $\ket{D^4_2}$ and $\ket{\chi}=(\ket{0000}-\ket{0101}-\ket{1010}+\ket{1111})/2$. They are presented in two different bases, where $\ket{P}=(\ket{0}+\ket{1})/\sqrt{2}$ and $\ket{M}=(-\ket{0}+\ket{1})/\sqrt{2}$. The division into blocks corresponds to separating the first two particles from the last two. The Schmidt rank ($m$) equals the number of linearly independent blocks, which can be counted by the naked eye. While the result is basis-invariant, in some bases the task may be simpler than in others. For example, for $\ket{D^4_2}$ in the computational basis one easily sees that there are $3$ different blocks, while in the other basis one needs to spot that the top left and bottom right blocks are linearly independent. Entanglement entropy is bounded from above by $\log m$.}
\end{figure}
\subsection{Fractal dimension of the state}
Qubistic plots often look fractal-like. This arises directly from the recursive nature of the plotting scheme, see Fig.~\ref{fig:translation_sym}.
\begin{figure}
\centering
\includegraphics[width=6cm]{figs/article/qubism_njp/trans}
\caption{\label{fig:translation_sym} The figures show what happens if we measure two particles in the computational basis $\sigma^z$ and get results $(+1,+1)$ (or state $\ket{00}$).
For particles $(1,2)$ we are in the upper left quadrant, as in the construction of a qubistic scheme.
For particles $(3,4)$ or $(5,6)$ we get tiles as in the picture.
For translationally invariant states all 3 subplots (once put together) are the same.}
\end{figure}
In this section we will show that indeed some plots are fractals \cite{Schroeder1991book} and calculate their fractal dimensions.
For a probability distribution $X=(p_1, \ldots, p_n)$,
the R\'enyi entropy of order $q$ \cite{Renyi1961,Beck1993} is defined as
\begin{align}
H_q(X) = \frac{1}{1-q} \log \left( \sum_{i=1}^n p_i^q \right).
\end{align}
Throughout this work we use $\log \equiv \ln$, so we measure information in \emph{nits} instead of \emph{bits}.
Some entropies have particular names:
\begin{align}
H_0(X) &= \log \left( \#\{i : p_i > 0\} \right)
\quad& \text{Hartley entropy}\\
H_1(X) &= - \sum_{i=1}^n p_i \log(p_i)
\quad& \text{Shannon entropy}\\
H_2(X) &= - \log\left(\sum_{i=1}^n p_i^2\right)
\quad& \text{collision entropy}\\
H_\infty(X) &= - \log \left( \max_i p_i \right)
\quad& \text{min-entropy}
\end{align}
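These special cases can be computed with one small routine (a sketch; note that the $q\to\infty$ limit equals $-\log \max_i p_i$):

```python
import numpy as np

def renyi(p, q):
    """Renyi entropy H_q of a probability vector, with natural log (nits)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if q == 1:
        return float(-np.sum(p * np.log(p)))   # Shannon limit
    if q == np.inf:
        return float(-np.log(p.max()))         # min-entropy limit
    return float(np.log(np.sum(p ** q)) / (1 - q))
```

For the uniform distribution on $n$ outcomes all orders coincide, $H_q = \log n$.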
Furthermore, as already discussed in Sec.~\ref{sec:entanglement-visualization},
R\'enyi entropies can be used for quantum states, in which case we use probabilities of pure components of a density matrix, see also \cite{Wehrl1978,Muller-Lennert2013}.
Fractal dimension can be defined for a set, being a subset of a hypercube $[0,1]^n$.
There are a few different definitions of the fractal dimension.
One numerical way for defining the fractal dimension is called \emph{box counting}.
We divide the hypercube into boxes (smaller hypercubes) of linear size $\varepsilon = 2^{-m}$, $m = 1, 2, \ldots$.
Then we count the number of non-empty boxes, $b(\varepsilon)$.
The box counting dimension is defined as
\begin{align}
d = \lim_{\varepsilon \to 0} \frac{\log(b(\varepsilon))}{\log(1/\varepsilon)},
\end{align}
provided the limit exists.
For example, as a function of $1/\varepsilon$, the number of non-empty boxes $b(\varepsilon)$ stays constant for a point, grows linearly for a line and quadratically for a square.
But what if instead of a set we have a probability distribution?
Then we can define the dimension, parametrized by a real number $q$, using the R\'enyi entropy:
\begin{align}
d_q = \lim_{\varepsilon \to 0} \frac{H_q(X(\varepsilon))}{\log(1/\varepsilon)},
\end{align}
where $X(\varepsilon)$ is the probability distribution coarse-grained by boxes of size $\varepsilon$, i.e.
\begin{align}
p_i = \int_{i\text{th box}} p(x) d^n x.
\end{align}
It is a generalization of the box-counting dimension: by mapping a set to a probability distribution that is non-zero on the set and zero everywhere else, we get $d = d_0$.
In general, $d_0$ is the dimension of the support.
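A numerical sketch of this coarse-grained estimate for a probability array on a dyadic grid (function names are ours; $d_q$ is obtained as the fitted slope of $H_q$ versus $\log(1/\varepsilon)$):

```python
import numpy as np

def renyi_entropy(p, q):
    p = p[p > 0]
    if q == 1:
        return -np.sum(p * np.log(p))
    return np.log(np.sum(p ** q)) / (1 - q)

def dimension_estimate(p, q=0):
    """Slope of H_q(coarse-grained p) vs log(1/epsilon) over dyadic
    box sizes, for a 2-D probability array of side 2^m."""
    n = p.shape[0]
    xs, ys = [], []
    size = 1
    while size < n:
        # sum probabilities inside each (size x size) box
        boxes = p.reshape(n // size, size, n // size, size).sum(axis=(1, 3))
        xs.append(np.log(n / size))          # log(1/epsilon), epsilon = size/n
        ys.append(renyi_entropy(boxes.ravel(), q))
        size *= 2
    return float(np.polyfit(xs, ys, 1)[0])
```

A uniform distribution on the diagonal of the grid gives $d_0 \approx 1$, and the uniform distribution on the full square gives $d_0 \approx 2$, as expected.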
To calculate the fractal dimension of a qubistic plot,
in the first step we change amplitudes into probabilities, so that we can use the above methods:
\begin{equation}
p_{s_1\ldots s_N} = |\alpha_{s_1\ldots s_N}|^2.
\end{equation}
The next step is to see how the \Renyi{} entropy scales with coarse-graining.
In the case of qubism, spatial coarse-graining is the same as coarse graining with respect to particles --- i.e. tracing out probabilities.
Let us have
\begin{equation}
P_k \equiv \{ p_{s_1\ldots s_k} \}
\end{equation}
where
\begin{equation}
p_{s_1\ldots s_k} \equiv \sum_{s_{k+1},\ldots,s_N}p_{s_1\ldots s_k \ldots s_N}.
\end{equation}
That is, $P_k$ is the set of probabilities (in a selected basis) if we forget about the state of the last $N-k$ particles.
The fractal dimension \cite{Halsey1986,Theiler1990} is
\begin{equation}
d_q = \lim_{k \to \infty} \frac{H_q(P_k)}{\log(1/\varepsilon)},
\label{eq:dq_linear}
\end{equation}
where $\varepsilon$ is the linear box size.
As we operate with two-dimensional visualizations, $\varepsilon = d^{-k/2}$.
Alternatively, using the Stolz--Ces\`aro theorem (a discrete analogue of l'H\^opital's rule), we get the difference formula
\begin{equation}
d_q = \lim_{k \to \infty} \frac{2\left(H_q(P_k) - H_q(P_{k-1})\right)}{\log d}.
\label{eq:dq_diff}
\end{equation}
In other words --- we can either look at the slope for linear fit \eqref{eq:dq_linear} or the derivative \eqref{eq:dq_diff}.
For ideal fractals $H_q(P_k)$ grows linearly with $k$, so both formulas give the same result.
In practical cases we work with systems of fixed size (for example, $N=12$, which is feasible for exact diagonalization).
In this case formula \eqref{eq:dq_linear} is sensitive to short-range correlations, whereas \eqref{eq:dq_diff} is sensitive to long-range ones.
What seems to be the best trade-off is to take the derivative in the middle $k=N/2$.
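A sketch of this midpoint estimate (function names are ours): we normalize the entropy difference by $\log d$, so that a uniformly random state of qubits gives dimension $2$, consistent with the $\log(1/\varepsilon)$ normalization of \eqref{eq:dq_linear}.

```python
import numpy as np

def fractal_dim(amplitudes, N, q=1, d=2):
    """Estimate d_q from the derivative at the midpoint k = N/2."""
    p = np.abs(np.asarray(amplitudes)) ** 2

    def H(k):
        # trace out the last N - k particles, then take the Renyi entropy
        pk = p.reshape(d ** k, -1).sum(axis=1)
        pk = pk[pk > 0]
        if q == 1:
            return -np.sum(pk * np.log(pk))
        return np.log(np.sum(pk ** q)) / (1 - q)

    k = N // 2
    return float(2 * (H(k) - H(k - 1)) / np.log(d))
```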
This definition of $d_q$ is basis-dependent. For example, for the product state
\begin{equation}
\left(
\cos(\tfrac{\theta}{2})\ket{0} + \sin(\tfrac{\theta}{2}) \exp(i \varphi) \ket{1}
\right)^N
\label{eq:product-theta}
\end{equation}
a rotation of the local basis (equivalently, changing $\theta$) results in a fractal dimension changing from $0$ to $2$.
For this state, $H_q(P_k) = k H_q(P_1)$, so the fractal dimension is
\begin{equation}
d_q = \tfrac{2}{(1-q)\log 2} \log\left( \cos^{2q}(\tfrac{\theta}{2}) + \sin^{2q}(\tfrac{\theta}{2}) \right),
\label{eq:product-theta-dq}
\end{equation}
see Fig.~\ref{fig:fractal_dim_product_state}.
\begin{figure}
\centering
\includegraphics[width=8cm]{figs/qubism/fractal_dim_product_state}
\caption{\label{fig:fractal_dim_product_state} Fractal dimension \eqref{eq:product-theta-dq} of the product state \eqref{eq:product-theta} changes with the basis.
Look at Fig.~\ref{fig:product_state} for the respective qubistic plots.}
\end{figure}
So, in general, fractal dimension alone does not suffice to tell much about entanglement or any other properties which are basis-independent.
Moreover, for states that are not translationally invariant, qubistic plots typically are not fractal-like and we do not have a well-defined fractal dimension as the relevant limit does not exist.
A more interesting example is the ITF model, already discussed in Sec.~\ref{s:ising_model}.
As we already saw in Fig.~\ref{fig:itf_transition}, the plot changes with the parameter $\Gamma$, from two points (N\'eel state) to uniform color (as all particles point in the $x$ direction).
We quantify these changes in Fig.~\ref{fig:fractal_dim_itf}.
\begin{figure}
\centering
\includegraphics[width=5cm, angle=270]{figs/article/qubism_njp/renyitf}
\caption{\label{fig:fractal_dim_itf} Fractal dimension of the Ising model in the transverse field.
Note that for the phase transition, $\Gamma=1$, the fractal dimension related to the Shannon entropy ($q=1$) seems to be close to $1$.}
\end{figure}
\subsection{Qubism for mixed states and operators}
Above we described the qubistic visualization for pure states.
Below, we show a representation of mixed states and operators.
As both of them are Hermitian matrices, we can propose a single visualization scheme suitable for both of them.
Note that the density matrix has twice as many coordinates as the wavefunction --- so, unless we are operating in four dimensions, we cannot straightforwardly use qubism for mixed states.
Also, a typical two-dimensional plot of a density matrix, i.e. $\rho_{ij}$ plotted as a density plot, does not show multiparticle relations.
Let us introduce so-called frame representations, that is, the expression of a density matrix as the following sum
\begin{equation}
\rho = \sum_{\vec{i}} t_{\vec{i}}
\sigma^{i_1} \otimes \sigma^{i_2} \otimes \cdots \otimes \sigma^{i_N},
\end{equation}
where $t_{\vec{i}}$ are real numbers and the $\sigma^{s}$ form a basis of Hermitian $d \times d$ matrices. For example, for qubits ($d=2$), they are the identity and the three Pauli matrices. The scheme, for qubits, is presented in Fig.~\ref{fig:scheme_mixed} (cf. Fig.~\ref{fig:qubism_simplest}), with examples for the Majumdar-Ghosh model \eqref{eq:majumdar_ghosh_hamiltonian} provided in Fig.~\ref{fig:qubism-opertors-mixed}.
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.40\textwidth]{figs/qubism/mixed.pdf}
\caption{Scheme for presenting mixed qubit states with qubism.}
\label{fig:scheme_mixed}
\end{figure}
\begin{figure}
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=5cm]{figs/qubism/hermitian_majumdarghosh_ham}
&
\includegraphics[width=5cm]{figs/qubism/hermitian_majumdarghosh_ground}
\end{tabular}
\end{center}
\caption{\label{fig:qubism-opertors-mixed} A plot of the Majumdar Ghosh Hamiltonian \eqref{eq:majumdar_ghosh_hamiltonian} for $J=1/2$ (left) and its ground state, expressed as a density matrix (right).}
\end{figure}
This scheme can thus be seen as a qubistic visualization of sequence-like objects where, instead of $d$ symbols, we have $d^2$ symbols.
Unfortunately, $t_{\vec{i}}$ can be interpreted as neither amplitudes nor probabilities.
They are related to purity, though,
\begin{equation}
\sum_{\vec{i}} t_{\vec{i}}^2 = d^N \hbox{Tr} [\rho^2].
\end{equation}
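A brute-force sketch of the expansion for qubits (the function name is ours; we adopt the convention $t_{\vec{i}} = \hbox{Tr}(\rho\, \sigma^{i_1}\otimes\cdots\otimes\sigma^{i_N})$, i.e. $\rho = 2^{-N}\sum_{\vec{i}} t_{\vec{i}}\, \sigma^{i_1}\otimes\cdots\otimes\sigma^{i_N}$, for which the purity relation above holds):

```python
import numpy as np
from itertools import product

# Identity and the three Pauli matrices
PAULI = {0: np.eye(2), 1: np.array([[0, 1], [1, 0]]),
         2: np.array([[0, -1j], [1j, 0]]), 3: np.array([[1, 0], [0, -1]])}

def pauli_coefficients(rho, N):
    """t_i = Tr(rho * sigma_{i1} x ... x sigma_{iN}) for an N-qubit rho."""
    t = {}
    for idx in product(range(4), repeat=N):
        op = np.eye(1)
        for i in idx:
            op = np.kron(op, PAULI[i])
        t[idx] = float(np.trace(rho @ op).real)
    return t
```

With this convention $\sum_{\vec{i}} t_{\vec{i}}^2 = 2^N \hbox{Tr}[\rho^2]$, so pure states saturate the sum at $2^N$.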
\subsection{Discussion}
Qubistic plots make it possible to visualize any pure state of $N$ qudits and, with modifications, any operator or mixed state of particles with a finite number of levels.
Moreover, such plots can show:
\begin{itemize}
\item two-particle correlations between nearest neighbors,
\item entanglement between the first $k$ particles and the rest,
\item some other patterns: for example permutation invariance or the structure of a singlet state.
\end{itemize}
However, it has its own limitations. For example:
\begin{itemize}
\item states which lack any symmetry may produce very cluttered plots,
\item three and more particle correlations are not always visible,
\item it is basis-dependent (as any representation of a state in a given basis),
\item for the square plot there is a difference between having even and odd number of particles,
\item adding particles changes the plot, although only in the resolution for translation-invariant states.
\end{itemize}
The last two remarks make it harder to compare, say, the ground states of a given system as a function of number of particles.
Moreover, plots of a state look the same when we append the $\ket{+}=(\ket{0}+\ket{1})/\sqrt{2}$ state at its end; in other words, $\ket{\Psi}$ and $\ket{\Psi}\otimes \ket{+}^{\otimes k}$ yield the same plot.
Furthermore, dependence on the local basis implies also dependence on phase.
This is a feature of the wavefunction description of quantum states, so it should not be surprising.
If we want to plot numerical or experimental data, it may be useful to know how to disregard the phase.
However, due to the phase-to-hue mapping, it should not be that hard, even visually.
We proposed a few variants of qubistic plots.
It seems that for most applications the typical, square qubistic plotting scheme should be the standard choice.
However, other plotting schemes may be useful for putting emphasis on certain features of the wavefunction, e.g. ferromagnetism.
\section{Introduction}
The low voltage distribution network (DN) in Europe consists predominantly of single phase (1-$\phi)$ loads, inverter interfaced PV, storage, and electric vehicle charging infrastructure.
Often the phase connectivity of such resources is not accurately known.
This lack of DN observability will restrict the monitoring and control of DN imbalances.
Existing DN consists of multiple measurements at the feeder level and end of the feeder, which provides utilities with some degree of observability.
The goal of the paper is to develop a scalable and robust phase connectivity identification (PCI) framework that considers multiple measurement points for improving the PCI.
\begin{table*}
\centering
\tiny
\caption{\small{Literature review on non-intrusive phase identification}}
\label{tab:phaselit}
\begin{tabular}{p{6mm}|p{30mm}|p{52mm}|p{42mm}|p{22mm}|p{13mm}}
\hline
Ref & Measurement dependency & Proposed solution & Remarks & Methodology & Input \\
\hline
\hline
\cite{pi1:blakely2019spectral} & AMI voltage time series, partially incorrect phase label information & Spectral clustering with a sliding window; does not require substation measurement & 91\% accuracy, Google street view analysed for phase identification & Clustering / Unsupervised ML & voltage \\
\hline
\cite{pi77:liu2020practical} & Voltage magnitude (denoted as $|V|$) & Spectral clustering is utilized and MILP model is used for unbalance mitigation. & 156 user DN in China is used for validation. Majority rule is applied to over predictions over new data. & Clustering / Unsupervised ML & voltage \\
\hline
\cite{pi3:ni2017phase} & Voltage magnitude & k-means clustering with Gaussian Mixture Model algorithm for phase id & 91\% accurate; with salient features accuracy is 100\% & Clustering / Unsupervised ML & voltage \\
\hline
\cite{pi34:mitra2015voltage} & Voltage magnitude & k-means clustering is used. Use multiple references with \textbf{Majority rule} based estimation. & 90\% accuracy & Clustering / Unsupervised ML & voltage \\
\hline
\cite{pi23:zaragoza2022denoising} & Voltage magnitude & k-medoids clustering is with denoised data & Singular value decomposition is used for denoising & Clustering / Unsupervised ML & voltage \\
\hline
\cite{pix1:wang2016phase} & Voltage magnitude &
principal
components are used to extract feature vectors over which
constrained k-means clustering is applied & 90\% accuracy & Clustering / Unsupervised ML & voltage \\
\hline
\cite{simonovska2021phase} & Voltage magnitude & k-means clustering with principal component analysis & Phase identification is applied for multiple days separately and a majority rule is applied & Clustering / Unsupervised ML & voltage \\
\hline
\cite{pi5:xu2016phase} & Active power measurements, substation as reference & extract distinct features from load profiles and correlate with phase load; limitation: high granularity data needed & 93\% accuracy with 10\% SMs in DN; results compared with \cite{pi7:arya2011phase} & Correlation & power \\
\hline
\cite{pi10:pezeshki2012consumer} & Voltage magnitude & Correlation based; the salient features of the time series are extracted. & Large enough data sheet leads to 100\% accuracy for 75 consumer DN & Correlation & voltage \\
\hline
\cite{pi17:olivier2017automatic} & Voltage magnitude & Relies on graph theory and the notion of maximum spanning tree. Correlation based PCI for a {four wire DN}. & \textbf{Closer the measurement points are geographically, the stronger the correlation between the voltages} & Correlation & voltage \\
\hline
\cite{pi6:vycital2019phase} & Voltage magnitude & Difference matrix is created & 82\% accuracy & Correlation & voltage \\
\hline
\cite{pi4:olivier2018phase} & Voltage magnitude time series & Correlation between voltage measurements of SMs with constrained k-means & substation voltage is used as reference for phase identification & Correlation with unsupervised ML & voltage \\
\hline
\cite{pi9:hoogsteyn2022low} & Active power and voltage time series data & Correlationship with clustering is performed. {Ensemble learning combines voltage and power-based estimation results. } & Impact of different SM accuracy class is evaluated & Correlation with unsupervised ML & voltage and power \\
\hline
\cite{pi7:arya2011phase} & Active power time series at the transformer and consumers & integer programming along with branch and bound search algorithm. Access sensitivity of the ratio of measurement points and total number of consumers & MILP based solution depends on the principle of conservation of energy. & Optimization & power \\
\hline
\cite{pi32:heidari2021phase} & P, Q and {$|V|$} measurements & MILP with Bender's decomposition is used. Accuracy of phase id is governed by number of data points, SM class, data resolution & For large EU feeder, the runtime with 5\% SM error is 39.2 hours. Difficult to scale. & Optimization & power (P \& Q) and voltage \\
\hline
\cite{vanin2022phase} & P, Q, $|V|$ measurements & Utilize state-estimation with MILP, also considers errors in layout information & Data needs are smaller compared to statistical and ML-based techniques & Optimization & power and voltage \\
\hline
\cite{pi18:xiaoqing2018phase} & Active power time series & LASSO based data driven approach. Also considers SM accuracy class & 97\% accuracy with 60\% SMs; LASSO immune to noise unlike \cite{pi7:arya2011phase} & Statistical or ML & power \\
\hline
\cite{pi14:jayadev2016novel, pi12:wen2015phase, pi2:pappu2017identifying} & Energy measurement time series & data-driven approach with Principal component analysis \& graph theory interpretations & Also considers noisy data & Statistical or ML & power \\
\hline
\cite{pi8:zhou2021consumer} & Voltage magnitude time series & Develops multi-dimensional calibration in phase id based on voltage characteristics in LVDN & Observe that voltage characteristics are more robust under incomplete data & Statistical or ML & voltage \\
\hline
\cite{pi30:short2012advanced} & Voltage magnitude & Linear regression and voltage drop relationship for phase identification & \textbf{Observe close measurement are strongly correlated } & Statistical or ML & voltage \\
\hline
\cite{pi12:wen2015phase} & Voltage phasor time series measured using microPMUs & Phase id analyses cross correlations over voltage magnitudes and detects the phase angle difference between reference and test nodes & multi-phase connections are also considered. Fine resolution of 120 samples per sec is used & Statistical or ML & voltage phasor \\
\hline
\cite{pi21:padullaparti2019considerations} & P, $|V|$ time series & Using statistical analysis of AMI data over a day for DN with PV. Regression model is used to model substation voltage using nodal P, V and substation power. & Explore data needs, granularity and impact of PV penetration levels & Statistical or ML & voltage and power \\
\hline
\cite{pi22:foggo2018comprehensive} & Voltage magnitude and phase info of a small representative set & Train an ML model of constrained function of voltage time series. Since manual measurements are needed thus may not be scalable & 5\% selected representative set leads to accuracy of 91.9\% & Statistical or ML & voltage \\
\hline
\cite{pi20:liao2019unbalanced} & Voltage phasor time series & Use sequence component for phase identification & Also utilize SM data for learning the topology of the DN & Statistical or ML & voltage phasor \\
\hline
\cite{pi13:foggo2019improving} & Voltage magnitude & Supervised machine learning with theory of information loss & Accuracy up to 97\% & Supervised ML & \\
\hline
\cite{pi101:bariya2021guaranteed} & Voltage phasor time series & Topology and phase identification using linearized model of three-phase unbalanced DN. & 120 Hz PMU measurements are used. Data collected for 1 second to 1 minute is used for estimation & Statistical or ML & voltage phasors \\
\hline
\multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{}
\end{tabular}
\end{table*}
\begin{figure}[!htbp]
\center
\includegraphics[width=0.98\linewidth]{literature.pdf}
\vspace{-7pt}
\caption{\small{Classification of phase connectivity identification literature based on parameter(s) and tool(s) used.}}
\label{fig:literature}
\end{figure}
Phase identification methodologies can be broadly classified as intrusive and non-intrusive. As the name suggests, intrusive methods require manual identification {of phases and are often labor-intensive and/or hardware-based \cite{kolwalkar2014phase}}. On the other hand, non-intrusive methods are often data-driven. A brief summary of non-intrusive phase identification methods is detailed in Table \ref{tab:phaselit}.
These data-driven methods can be classified based on the parameter used and tool used for phase identification.
For the literature summarized in Table \ref{tab:phaselit}, the classification is presented in Fig. \ref{fig:literature}.
From Fig. \ref{fig:literature}, we observe that 64\% of existing works utilize voltage magnitude time series and approximately 27\% utilize correlation as a tool for PCI.
In this work, we also adopt these widely used choices, developing a consensus-based phase identification framework with voltage time series as the parameter and correlation as the tool. Voltage time-series-based phase identification is more robust to limited observability in a DN, as also observed in \cite{matijavsevic2022voltage}; power-based methods, in contrast, rely on the law of conservation of energy and require a high degree of observability in a DN.
\textit{Motivation}:
With the increasing generation from renewable energy sources and the growing addition of flexible loads such as electric vehicles and heat pumps, congestions, voltage violations, and phase imbalances in the grid will likely become more frequent. Therefore, suitable corrective measures must be found and implemented. An important basis for mitigation measures is the detection of congestions and thus, first of all, improving the network model of the low-voltage grid, which is still largely unmonitored in many parts of the world today.
Many works assume that an accurate network topology is known \cite{wang2020optimal, HASHMI2022108608}.
However, this assumption does not hold in the case of \href{https://euniversal.eu/}{EUniversal's} demo networks for the German DN.
\subsection{Observations of this paper}
The contributions and observations of the paper are as follows:
\begin{itemize}
\item A tailor-made solution for the German DSO, \href{https://www.mitnetz-strom.de/}{Mitnetz Strom}, is proposed for phase identification, which considers multiple measurements in a DN zone for improving the PCI accuracy; this is detailed in Fig. \ref{fig:structure}.
{The proposed framework can also be applied for other DNs with limited DN observability.}
\item {Twelve} phase identification models are benchmarked over the na\"{i}ve model. The na\"{i}ve model considers only one {$3-\phi$ measurement} reference in a DN for phase estimation. The proposed phase identification models build a consensus among multiple measurements for robust phase estimation.
\item Metrics are proposed for evaluating phase identification models using (a1) accuracy of estimation, (a2) confidence factor, and (a3) sensitivity towards measurement errors. Metrics (a2) and (a3) provide a qualitative metric for evaluating estimation accuracy.
\item A detailed description is provided for synthetic data generation, which utilizes
{an example suburban LV DN grid model of Mitnetz Strom.}
{This is also crucial for DNs with limited or no historical measurement data. }
\item Four case studies are performed in the numerical results.
The performance of the na\"{i}ve model is used for benchmarking and comparing the proposed phase identification algorithms.
\begin{itemize}
\item Firstly, the proposed phase identification models are compared for the real German DN.
\item Secondly, we quantify the impact of measurement proximity on phase identification metrics. We observe that for a DN partitioned into zones, the estimation quality deteriorates as the measurement reference selected is farther away from the zone where the consumer is located.
\item Thirdly, the impact of measurement errors on PCI is assessed. For 1\% accuracy class measurements, an estimation accuracy exceeding 98.6\% is achieved for a DN with 646 nodes.
\item Most European DNs are four-wire systems. In our last case study, we quantify the impact of the neutral conductor model on phase estimation accuracy. We observe that if the neutral conductor is not modeled, then a pessimistic estimation is achieved. To the best of the authors' knowledge, this is the first assessment of the neutral conductor's impact on phase identification.
\end{itemize}
\end{itemize}
\begin{figure*}[!htbp]
\center
\includegraphics[width=0.78\linewidth]{structurePhPaper.pdf}
\vspace{-2pt}
\caption{\small{Consensus-based phase identification, synthetic data generation and metrics used}}
\label{fig:structure}
\end{figure*}
The paper is organized as follows.
Section~\ref{section2} presents the German DN case of low observability and the need for enhanced phase information for future DN operation.
Section~\ref{section3} outlines the different modeling steps used for generating synthetic data used for phase connectivity identification.
Section~\ref{section4} presents the methodology, and
Section~\ref{section5} presents the consensus algorithms used for phase identification.
Section \ref{section6} presents the four numerical case studies, and
Section~\ref{section7} concludes the paper.
\pagebreak
\section{Low observability in DN: the German case}
\label{section2}
The low voltage grid serves households and small consumers connecting at 230 V or 400 V \cite{link1}. German authorities have decided to implement an optional smart meter (SM) roll-out, as the information security standards need to be adjusted. Presently, less than 5\% of residential customers are equipped with SMs \cite{link2, link526}. Thus, the smart meter infrastructure is not widespread at the low-voltage level. The German Energy Industry Act requires that customers with a yearly consumption of over 6000 kWh are provided with smart measurement systems (when technically possible) \cite{link998,link999}. The requirement also applies to generators with an installed capacity above 7 kW. However, these requirements leave the majority of German households unaffected. The lack of sufficiently granular metering equipment at the household level is currently a barrier to implementing imbalance-sensitive flexibility activation for solving DN issues.
\subsection{Metering of German DN}
Mitnetz Strom is one of the largest regional distribution system operators in Eastern Germany and is responsible for supplying electricity to 2.2 million {electricity consumers}. The grid area of Mitnetz Strom covers an area of 30,804 km$^2$ and is characterized by rural conditions with a high share of renewables.
The installed capacity of renewable energy reached an all-time high of more than 10,000 MW (more than 64,000 plants) in 2021. This development was spurred primarily by rapid growth in solar energy, as the number of photovoltaic installations increased by more than 17 percent \cite{link222}.
{In Germany, the metering and DSO roles are decoupled. The Meter Point Operator (MPO) is responsible for the installation, operation, data gathering, and maintenance of energy meters. Note, in many locations, system operators also perform as MPO. However, the electricity consumer could opt for an independent MPO. This is in accordance with the § 43 German MsbG (measuring point operation law).}
In principle, also a third party can be commissioned as a meter operator with the operation of the metering point on the free market \cite{link3A1}.
Furthermore, there is presently no general legal obligation to share grid-relevant information obtained from smart meters with the DSO.
Due to these constraints, the DSO needs to request historical data, which can therefore not be utilized for short-term grid operation, congestion mitigation, etc. (owing to the delay between the DSO making a request and receiving the measurement data). Thus, observability in DNs is limited not only by the SM penetration level but also by data-sharing policies targeting system operators.
\subsection{Roadmap of meter rollout}
Fig. \ref{fig:pic1} shows the meter rollout phases in Germany. It is expected that by 2032, all German consumers will be equipped with modern metering devices (§ 29 para. 3 p. 1 MsbG). Compared to other countries this is slow; thus, it will still take several years before the DSO has sufficient data from SMs at its disposal.
{Complicating this schedule are also the}
legal issues concerning safety and privacy of smart meter operation and usage\footnote{On 20 May 2022, the Federal Office for Information Security (BSI) withdrew the general ruling of 7 February 2020 on the determination of technical feasibility pursuant § 30 MsbG (so-called market declaration on the rollout of smart metering systems) with effect for the past. In addition, the BSI issued a general ruling pursuant to §19 (6) MsbG in which it determined that the use and installation of smart metering systems available on the market do not pose any significant risks.}. The continued operation and installation of smart metering systems as defined by the Act by the MPOs is thus still possible. However, there is no longer an obligation to install them \cite{link5A3}. This makes the need for DN parameter estimation and mechanisms to enhance observability even more crucial for ensuring the operational integrity of the network.
\begin{figure*}[!htbp]
\center
\includegraphics[width=0.85\linewidth]{SMG_1.JPG}
\vspace{-2pt}
\caption{\small{Rollout Plan for smart meters in Germany by 2032 \cite{link4A2}} }
\label{fig:pic1}
\end{figure*}
Mitnetz Strom has invested a total of 19 million euros in 2022 in the conversion to digital local network stations. The plan is to install a total of 226 digiONS in the year 2022. By 2026, up to 30 percent of the transformer stations and cable distributors in the network area are to be digitally equipped or retrofitted with the corresponding metering technology. Measured secondary substations are an important component in the digital transition. They ensure better controllability and transparency of medium and low-voltage grids, which directly benefits the implementation of the energy transition and security of supply. Increasing feed-in from renewable energies, rising demand for charging power for electro-mobility, extreme weather conditions that endanger the energy supply, especially in areas with overhead lines - the reasons for the digital monitoring and control of electricity grids are many and have one goal: \textit{security of supply}.
\subsection{Demo network for EUniversal}
In EUniversal, Mitnetz Strom is testing the use of flexibility services and markets and is leading the German demonstration together with the parent company E.ON SE \cite{link6}. The German demonstration tries to combine principles of the German mandatory process Redispatch 2.0 \cite{link888} and a market-based approach to mitigate grid constraints in a cascaded operation across multiple voltage levels. The goal is to provide DSOs with access to flexibility from grid customers across the LV/MV level for their active system management. To this end, the EUniversal consortium is testing various optimization algorithms with the aim of minimizing activation costs while ensuring the secure operation of the grid and is developing concepts for grid state estimation of smart grids, of which the first interim results and experiences will be presented.
In the German demo of EUniversal, Mitnetz Strom and its partners are investigating the use of flexibility markets in low voltage grids for congestion management and voltage maintenance. An attempt is being made to develop an iterative procedure that will prevent new congestion from occurring when flexibility is activated.
\subsubsection{Network features and meter placement}
The network considered for numerical evaluation is a typical low-voltage network in Mitnetz's network area in a small town in Eastern Germany. There are already some flexible plants, but their penetration is not yet significant.
Given the application cases, certain locations in the LV grid are particularly interesting to equip with measurement technology. In particular, the end of a feeder is often an important indicator for the evaluation of potential voltage band violations, while the current at the beginning of a feeder is important for the thermal constraints in the network. Unfortunately, these points are often not available in practice due to ownership issues and the building development in some localities. Therefore, cable distribution cabinets were selected and equipped with measurements.
For EUniversal, the measurement devices are a bundle of voltage and current sensors (Rogowski coils), a gateway, and a power supply.
\subsection{Need for phase information}
LV DN topology identification is essential for efficient network operation, monitoring, and control. This also assists in planning the phases for new resources connected to the DN. As a real example, Mitnetz Strom faced an issue because 8 out of 9 electric vehicle chargers were connected using a single phase ($1-\phi$). This led to thermal limit violations in that particular phase. With topology identification, the phase connections were optimized.
According to DIN EN 50160 \cite{link7}, limits of voltage deviations are defined up to $\pm 10\%$. LV networks are asymmetrically loaded by 1-$\phi$ loads. Unbalanced new loads, such as electric fans{, heat pumps,} and unbalanced charging could amplify this effect. Unbalanced loaded DNs lose a substantial amount of their power transmission capability. A research project on automatic phase switching in EV charging showed the effects of $1-\phi$ or two-phase connected EVs or hybrids on the grid. The findings presented here are also applicable to $1-\phi$ heaters, inverter-interfaced PV, storage, and other unbalanced loads.
\begin{figure}[!htbp]
\center
\includegraphics[width=0.6\linewidth]{Picture2.png}
\vspace{-1pt}
\caption{\small{Line loading for unplanned EV charger placement}}
\label{fig:pic2}
\end{figure}
\begin{figure}[!htbp]
\center
\includegraphics[width=0.6\linewidth]{Picture3.png}
\vspace{-1pt}
\caption{\small{Line loading after phase redistribution }}
\label{fig:pic3}
\end{figure}
In the further evaluation of this field test, it is shown that with the average existing charging capacity, the penetration rate with EVs until a line limit is violated increased from 16\% to 47\% with symmetrical utilization {of DN with approximately balanced phases}.
In the case of flexibility markets, this means that the scope for using flexibility while ensuring grid integrity decreases. This underlines the importance of having accurate knowledge of the phase connections, as flexibility activation should not further increase the imbalance. In theory, flexibility activation could even limit DN imbalance.
Mitnetz Strom and most DSOs follow a passive way of identifying phase connections: accurate topology identification is performed manually when following up on customer complaints about power quality.
This motivates us to propose a scalable and robust phase identification mechanism using historical measurement data.
\pagebreak
\section{Synthetic data generation}
\label{section3}
In this section, we detail the steps we took for generating synthetic data for the DSO grid layout provided in DIgSILENT format. A digital twin is used for the process of phase and load placement.
Further, the neutral conductor modeling, noise injection, and zonal clustering model used in this work are elaborated {in this section}.
\subsection{DGS parser for network JSON}
A parser has been created to convert the DGS (\textbf{D}I\textbf{gS}ilent) \cite{manual2011digsilent} file format into a JSON file that is readable to the PowerModels script. The parser is derived from the GridCal python package \cite{gridcal}. The main difference between the two formats is that the DGS data format contains different classes with different information in a hierarchical structure, whereas the JSON file just requires information on the buses, branches, and devices in the grid.
\subsubsection{Bus information}
The DGS format relies on cubicle information, which can be seen as a connection point for the different elements in the grid. The cubicle information are the unique IDs given to each connection point, which are converted to simple grid ids starting from 1 and ending in the number of buses in the grid. The original ID is saved to allow for cross-checking data and for linking devices to the correct bus.
\subsubsection{Branch information}
The DGS branch format relies on cable-type information, but the JSON file requires the values directly in per-unit (pu). Therefore, the \textit{r} and \textit{x} parameters (amongst others) need to be converted from their ohmic values to pu over the total cable length. This requires multiplying the per-km values by the cable length and dividing by the base impedance.
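As an illustration, this per-unit conversion can be sketched in a few lines of Python. This is a hypothetical helper; the function name and base values are ours, not part of the actual parser:

```python
def branch_to_pu(r_ohm_per_km, x_ohm_per_km, length_km, v_base_kv, s_base_mva):
    """Hypothetical sketch of the DGS-to-JSON branch conversion:
    total ohmic impedance = per-km value * cable length, then
    divide by the base impedance Z_base = V_base^2 / S_base."""
    z_base = (v_base_kv * 1e3) ** 2 / (s_base_mva * 1e6)  # ohms
    r_pu = r_ohm_per_km * length_km / z_base
    x_pu = x_ohm_per_km * length_km / z_base
    return r_pu, x_pu
```

For example, on a 0.4 kV base with 1 MVA ($Z_{\text{base}} = 0.16\,\Omega$), a 500 m cable with 0.32 $\Omega$/km resistance maps to 1.0 pu.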
\subsubsection{Device information}
The DGS file format can contain detailed information on different loads and static/synchronous generators. The JSON file format only has information on the devices. Therefore, the parser extracts the relevant P and Q data from each device in the DGS file and compiles it as a separate entity in the device file. Additional information included is bus ID, PV size, and connected phase.
\subsubsection{Switches}
A crucial element of the parser is removing the switches that connect branches to substations and cabins. The switches are removed as they add numerical complexities when calculating the admittance matrix in PowerModels \cite{8442948}. Setting the \textit{r} and \textit{x} values to zero can lead to infinite values during the admittance calculation, but setting them to a very low value leads to inefficiencies when running the power flow. Removal of the switches not only removes numerical complexities but, since switches make up approximately 7\% of the branches in the system, removing the extra nodes also significantly decreases the computation time of the simulation (on the order of 10\%).
Once the switches have been removed, the IDs should be renumbered. This also serves the purpose of removing empty nodes in the grid, which has a positive effect on the computation time of the simulation as the admittance matrix only contains non-zero components, which helps reduce its size. This also tidies up the network data, making it easier to view from a simulation perspective.
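A minimal sketch of the switch-removal and renumbering step is given below. The data layout and function name are our assumptions, not the actual parser code: switch branches are collapsed by merging their end nodes with a union-find structure, and the surviving nodes are renumbered consecutively from 1.

```python
def remove_switches(branches, n_nodes):
    """Collapse switch branches (treated as zero-impedance jumpers)
    by merging their end buses, drop them from the branch list, and
    renumber the remaining merged buses consecutively from 1."""
    parent = list(range(n_nodes + 1))

    def find(i):  # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for b in branches:
        if b["is_switch"]:
            parent[find(b["f_bus"])] = find(b["t_bus"])
    kept = [b for b in branches if not b["is_switch"]]
    roots = sorted({find(i) for i in range(1, n_nodes + 1)})
    new_id = {r: k + 1 for k, r in enumerate(roots)}  # renumber from 1
    for b in kept:
        b["f_bus"], b["t_bus"] = new_id[find(b["f_bus"])], new_id[find(b["t_bus"])]
    return kept, len(roots)
```

This both eliminates the near-zero impedances from the admittance matrix and removes the empty nodes left behind by the merge.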
\subsection{Mitnetz Strom DN and metadata}
Along with the grid data, metadata on the loads in the grid is also provided in a separate file. This metadata contains details associated with different consumer devices in the grid. The information includes a node number to connect it to the grid data, the load type (e.g., household, PV, CHP, etc.), the annual energy consumption, single- or three-phase connection, and the available active and reactive power output (if applicable). The information is compiled and appended to the device's JSON file. Fig. \ref{fig:distkwh} shows the spread of the annual cumulative energy consumption of loads connected to the test DN used in this work.
Note that the exact phase connection of single-phase loads is not known. The goal of this paper is to present a framework for identifying the phase connection of such single-phase consumers.
\begin{figure}[!htbp]
\center
\includegraphics[width=5.5in]{distribution_kwh.pdf}
\vspace{-2pt}
\caption{\small{Metadata for annual kWh consumption of 331 consumers in the test LV DN considered. {Note that 94.5\% of consumers have an annual consumption of lower than 6000 kWh for the test DN.}}}
\label{fig:distkwh}
\end{figure}
\subsection{{Randomized} phase mapping}
DSOs actively try to balance the phases so that the load distribution is fairly even.
The Madeira island case study in \cite{hashmi2020towards} and the DSO questionnaire in \cite{hashmi:tel-02462786} detail the phase assignment procedure of a DSO.
In order to generate synthetic data, randomized load mapping is used. For a single phase load with different levels of annual kWh consumption, a phase is randomly selected from phases A, B, and C with equal likelihood.
\begin{figure}[!htbp]
\center
\includegraphics[width=5.1in]{phase_load.png}
\vspace{-2pt}
\caption{\small{Phase load distribution of three-phase distribution network for 100000 phase mapping scenarios.}}
\label{fig:phaseload}
\end{figure}
The randomized phase mapping is evaluated based on the sum of the absolute error in phase load (AEPL), which is given as
\begin{gather*}
AEPL = \frac{1}{3}\frac{1}{D}\sum_{i=1}^D\sum_{\phi\in\{A,B,C\}} |L_{\phi}^i - \bar{L}^i|,
\end{gather*}
where $D$ denotes the number of Monte Carlo phase mapping scenarios, $L_{\phi}^i$ denotes the load met by phase $\phi$ in scenario $i$, and $\bar{L}^i$ denotes the mean load over the three phases, given as $\bar{L}^i = \sum_{\phi\in\{A,B,C\}} L_{\phi}^i/3$.
Using 100000 Monte Carlo simulations for phase mapping, we observe that randomized phase mapping performs fairly well, with a maximum and mean per-phase load deviation of 30\% and 9.5\%, respectively, with respect to the mean load met by the three phases (Fig. \ref{fig:error}).
Fig. \ref{fig:error} shows the distribution of the phase load errors while performing randomized phase mapping.
\begin{figure}[!htbp]
\center
\includegraphics[width=5in]{plot_err.png}
\vspace{-2pt}
\caption{\small{Sum of the absolute value of the difference of phase load and mean load of all the phases.}}
\label{fig:error}
\end{figure}
Thus, in this work, we utilize randomized phase mapping for data generation for evaluating our proposed probabilistic phase identification mechanism.
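The randomized phase mapping and the AEPL metric above can be sketched as the following Monte Carlo loop (a simplified illustration assuming a vector of annual kWh values, one per single-phase consumer):

```python
import numpy as np

def aepl(loads_kwh, n_scenarios=1000, seed=0):
    """Monte Carlo evaluation of randomized phase mapping: each
    single-phase load is assigned to phase A, B, or C with equal
    likelihood; AEPL averages |L_phi - mean phase load| over the
    three phases and over all scenarios."""
    rng = np.random.default_rng(seed)
    loads_kwh = np.asarray(loads_kwh, dtype=float)
    errs = []
    for _ in range(n_scenarios):
        phases = rng.integers(0, 3, size=loads_kwh.size)  # random A/B/C draw
        phase_load = np.array([loads_kwh[phases == p].sum() for p in range(3)])
        errs.append(np.abs(phase_load - phase_load.mean()).mean())
    return float(np.mean(errs))
```

For roughly equal loads, the per-phase deviation shrinks relative to the total as the number of consumers grows, which is why the random assignment performs fairly well in practice.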
\subsection{Neutral modelling {for four-wire DN}}
European LV DNs usually differ from North American ones: a larger distribution transformer with multiple low-voltage feeders supplies a large number of consumers per transformer. German low-voltage feeders normally follow a four-wire three-phase configuration with a single-grounded neutral~\cite{en14051265}. Such systems are usually reduced to a three-wire equivalent using Kron's reduction~\cite{Kersting2001b}. In Kron's reduction, it is assumed that the neutral is grounded multiple times; for a perfectly grounded neutral\footnote{A perfectly grounded neutral refers to a grounding resistance of zero ohms. Typically, the grounding resistance is $\approx$ 5 ohms, which leads to a small voltage drop. In this work, we assume a perfectly grounded neutral.}, the neutral voltage equals zero. However, for the DN considered in this paper, this assumption does not hold, as the neutral is isolated from consumer grounding and is grounded only at the substation~(see Fig.~\ref{fig_EU}). A sparsely grounded neutral can be included in the modeling either by taking an exact four-wire model with four-wire power flow solvers or by reducing it to a three-wire equivalent and solving with three-wire solvers. In~\cite{geth2022computational}, a new reduction method is proposed for sparsely grounded European LV feeders so that the impact of the neutral is represented equivalently to a four-wire model without the necessity of carrying around extra variables and measurements. In this reduction, the 4$\times$4 impedance matrix is transformed to the 3$\times$3 matrix given in \eqref{eq:impedancematrix}.
\begin{figure}[!htb]
\centering
\includegraphics[width=12cm]{model_org_real_5.pdf}
\caption{Isolated neutral model of distribution network}
\label{fig_EU}
\end{figure}
\begin{equation}
\text{Impedance matrix}~ =
\begin{bmatrix}
\zijksAA - \zijksNA - \zijksAN + \zijksNN ~~&\zijksAB - \zijksNB - \zijksAN + \zijksNN ~~& \zijksAC - \zijksNC - \zijksAN + \zijksNN \\
\zijksBA - \zijksNA - \zijksBN + \zijksNN ~~&\zijksBB - \zijksNB - \zijksBN + \zijksNN ~~& \zijksBC - \zijksNC - \zijksBN + \zijksNN \\
\zijksCA - \zijksNA - \zijksCN + \zijksNN ~~& \zijksCB - \zijksNB - \zijksCN+ \zijksNN ~~& \zijksCC - \zijksNC - \zijksCN + \zijksNN \\
\end{bmatrix}
\label{eq:impedancematrix}
\end{equation}
\vspace{5mm}
In \eqref{eq:impedancematrix}, $\zijksAA$ is the self-impedance of phase $a$ of branch $l$, $\zijksNN$ is the self-impedance of the neutral of branch $l$, and so on. Similarly, $\zijksAB$ is the mutual impedance between phases ${a}$ and ${b}$ of branch $l$. This transformation is exact and eliminates the error introduced by Kron's reduction in three-phase DN modeling for sparsely grounded systems~\cite{geth2022computational, Koirala2019}. Furthermore, a minor boost in computation time is achieved compared to the exact four-wire model, as the necessity of carrying extra variables for the neutral voltage is removed. This reduction is all the more relevant as the measured voltages in the German demo grid are also phase-to-neutral. Interested readers are referred to~\cite{geth2022computational} for details about the transformation.
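Entrywise, the transformation in \eqref{eq:impedancematrix} is $z'_{pq} = z_{pq} - z_{nq} - z_{pn} + z_{nn}$, which can be written as a one-line numpy sketch (assuming conductor ordering $[a,b,c,n]$ in the 4$\times$4 matrix):

```python
import numpy as np

def reduce_four_wire(z4):
    """Reduce a 4x4 phase-neutral series impedance matrix to its
    3x3 equivalent: z'_pq = z_pq - z_nq - z_pn + z_nn, assuming
    conductor order [a, b, c, n]."""
    z4 = np.asarray(z4)
    return z4[:3, :3] - z4[3, :3][None, :] - z4[:3, 3][:, None] + z4[3, 3]
```

The same code works for complex-valued impedance matrices, and the reduced matrix inherits the symmetry of the original.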
\subsection{Metering noise injection model}
Prior works \cite{pi2:pappu2017identifying, pi32:heidari2021phase, pi18:xiaoqing2018phase, pi9:hoogsteyn2022low} consider smart meter measurement error based on the accuracy class of metering infrastructure.
Frequently, the measurement error is modeled using Gaussian noise. Further, the measurement accuracy is considered to hold within three sigma, which corresponds to 99.7\% of all instances.
The standard deviation of the Gaussian noise is related to the tolerance $\tau$ of the measuring device.
The noisy measurement is given as
\begin{equation}
\hat{Z} = Z \times \texttt{Norm}(1, \tau/3),
\label{eq:measurmenterror}
\end{equation}
where $Z$ denotes the true measurement, $\texttt{Norm}(\mu,\sigma)$ denotes a sample of a normal distribution with mean $\mu$ and standard deviation $\sigma$.
In order to evaluate the impact of measurement noise, 1000 Monte Carlo simulations are considered.
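The noise model of \eqref{eq:measurmenterror} can be sketched as follows (the function name and default tolerance are illustrative):

```python
import numpy as np

def add_meter_noise(z_true, tol=0.01, seed=42):
    """Multiplicative Gaussian noise, Z_hat = Z * Norm(1, tau/3),
    so that roughly 99.7% of samples (three sigma) stay within
    the accuracy-class tolerance tau."""
    rng = np.random.default_rng(seed)
    return np.asarray(z_true) * rng.normal(1.0, tol / 3.0, size=np.shape(z_true))
```

With $\tau = 1\%$, a 230 V reading is perturbed by a standard deviation of about 0.77 V, and nearly all noisy samples remain inside the $\pm 1\%$ band.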
\subsection{Zonal clustering of Mitnetz Strom DN}
Identifying the zones of an LV DN can be helpful to the DSO in planning the flexibility needs of a network. Due to the large number of DN feeders, a standardized approach to divide zones based on electrical and/or geographical distances is deemed essential \cite{https://doi.org/10.48550/arxiv.2207.10234}.
In this section, the summary of the clustering framework to identify the best-suited LV DN zonal partition using electrical distance as a measure is presented, which is explained in detail in \cite{https://doi.org/10.48550/arxiv.2207.10234}.
This zonal partition method uses an incidence matrix-based measure, which can be obtained with the help of the spectral decomposition of the admittance matrix. The adequate number of zones is obtained by maximizing the silhouette score while considering the desired number of clusters.
The zonal partition divides nodes $\mathscr{N}$ into $c \in\{1,...,C\}$ clusters.
The spectral clustering proposed in~\cite{ding2018clusters, sanchez2014hierarchical} for the creation of zones or network reduction is used for zonal partition.
A doubly stochastic matrix is formed, which is a special type of Markov matrix where not only each row but also each column adds to 1.
For this transformed matrix, all eigenvalues are real and smaller than or equal to 1, with one eigenvalue exactly equal to 1~\cite{mourad2012spectral}.
For identifying $C$ partitions in a graph, the $C$ highest eigenvalues and corresponding orthonormal eigenvectors are identified.
The eigenvector matrix of order $N \times C$ is used for DN partitioning, which in effect reduces the dimensionality of the problem.
$k$-means clustering is used to partition the spectral data points.
The goodness of a cluster is measured using the mean silhouette index of the network cluster.
The silhouette coefficient of a node is a confidence indicator of its association in a group~\cite{ding2018clusters, scarlatache2012using}.
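A compact numpy-only sketch of this pipeline is given below: Sinkhorn balancing of the adjacency into a doubly stochastic matrix, a spectral embedding from the $C$ largest eigenvalues, and a plain $k$-means on the embedded points. The initialization and iteration counts are our simplifications, not the exact procedure of \cite{https://doi.org/10.48550/arxiv.2207.10234}:

```python
import numpy as np

def zonal_partition(adj, n_zones, n_iter=100):
    """Toy spectral zonal partition: (1) balance the adjacency into
    a doubly stochastic matrix (Sinkhorn-Knopp), (2) embed nodes via
    the eigenvectors of the C largest eigenvalues, (3) cluster the
    N x C embedding with a minimal k-means."""
    w = np.asarray(adj, dtype=float) + np.eye(len(adj))  # self-loops for stability
    for _ in range(n_iter):                 # Sinkhorn-Knopp balancing
        w /= w.sum(axis=1, keepdims=True)
        w /= w.sum(axis=0, keepdims=True)
    w = 0.5 * (w + w.T)                     # symmetrise -> real spectrum
    vals, vecs = np.linalg.eigh(w)
    emb = vecs[:, np.argsort(vals)[-n_zones:]]  # N x C spectral embedding
    centers = [emb[0]]                      # deterministic farthest-point init
    while len(centers) < n_zones:
        d = np.min([((emb - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(emb[int(np.argmax(d))])
    centers = np.array(centers)
    for _ in range(n_iter):                 # plain k-means on the embedding
        labels = np.argmin(((emb[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
        for c in range(n_zones):
            if np.any(labels == c):
                centers[c] = emb[labels == c].mean(axis=0)
    return labels
```

On a graph with two well-separated components, the top eigenvectors are (combinations of) component indicators, so the $k$-means step recovers the zones exactly; on real feeders the silhouette score is then used to pick the number of zones.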
\subsection{Power flows for synthetic data}
Power flow equations translate the load information of consumers into nodal voltages and currents when the network topology and impedances are known. Three-phase unbalanced power flow equations were used to create the pseudo-measurement points based on the given load data and network topology. The open-source power flow solver \texttt{PowerModelsDistribution.jl} was used for creating these pseudo-measurement points~\cite{Fobes2020}.
\pagebreak
\section{Phase identification methodology}
\label{section4}
Using the synthetic data generated in the previous section, we develop the correlation-based voltage metrics that are used by the consensus-based PCI algorithms in the next section.
\subsection{Notation}
A three-phase distribution network (DN) consists of phases denoted as $\phi \in \{A,B,C\}$. The DN consists of branches, nodes, loads, and generators.
A DN is represented as a directed graph by $<\mathscr{N},E>$, where $\mathscr{N}$ denotes the set of nodes in all the phases and $E$ denotes the set of branches connecting a pair of nodes.
For each node $i$ in phase $\phi$ at any time $t$, we have two variables: (i) the voltage magnitude, denoted as $V_{{\phi},i,t}$, and (ii) the phase angle, denoted as $\theta_{{\phi},i,t}$. The voltage phasor at a node and phase is governed by the power injections.
The branch denoted as $(i,j) \in E$ is characterized by line admittance denoted as $Y_{\phi,ij}$. The line admittance governs power flow and line losses.
$\mathscr{N}_d \subset \mathscr{N}$ denotes the nodes with loads connected. For these nodes, the active and reactive power is given as $P^d_{{\phi},{i,t}}$ and $Q^d_{{\phi},{i,t}}$.
$\mathscr{N}_g \subset \mathscr{N}$ denotes the nodes with generators connected, which have active and reactive power generation denoted as $P^g_{{\phi},{i,t}}$ and $Q^g_{{\phi},{i,t}}$.
The time $t$ is sampled hourly, and its range is given as $t \in \{1,..,T\}$.
\begin{figure}[!htbp]
\center
\includegraphics[width=5.2in]{diag1.pdf}
\vspace{-2pt}
\caption{\small{A distribution network cluster with measurement points and single-phase loads with unknown phase connectivity.}}
\label{fig:diag1}
\end{figure}
The DN is clustered into $c \in \{1,...,C\}$ clusters.
A cluster $c$ consists of $M_c$ three-phase reference measurement points and $L_c$ single-phase consumers, and $N_c$ is the number of nodes present in that cluster.
We assume that all the reference measurements are time-aligned; no synchronization delays are considered in this work.
A stylized representation of a zone with $1-\phi$ consumers and $3-\phi$ reference measurement points is shown in Fig. \ref{fig:diag1}.
DN clustering is performed such that $N_i \subset \mathscr{N}$ and $N_i \cap N_j = \emptyset, ~ \forall i \neq j$.
The set of nodes where $M_c$ measurements are placed is denoted as $i_c^M$.
The set of nodes where $L_c$ single phase consumers are located is denoted as $i_c^L$.
For a vector parameter $K$, $\bar{K}$ is the mean value, and $|K|$ denotes its absolute value.
For a vector $K$, $\mathscr{C}(K)$ denotes its cardinality.
$\mathbbm{1}{(\text{condition})}$ returns 1 if the condition is true.
\subsection{Correlation based metrics}
In this work, we utilized voltage magnitude time series as a parameter for estimating phases of the single-phase consumers in each of the clusters.
Each of the measurement points is utilized for estimating the phases. Thus, a unique reference matrix is created using the phase voltage magnitudes at the measurement node. For cluster $c$, measurement $i_c^M(j)$ where $j\in \{1,..,M_c\}$, the reference voltage matrix is given as
\begin{equation}
\small
V^{\text{ref}}_{i_c^M(j)} =
{\begin{bmatrix}
V_{A,i_c^M(j), 1} & V_{B,i_c^M(j), 1} & V_{C,i_c^M(j), 1}\\
V_{A,i_c^M(j), 2} & V_{B,i_c^M(j), 2} & V_{C,i_c^M(j), 2}\\
:& : & :\\
:& : & :\\
V_{A,i_c^M(j), T} & V_{B,i_c^M(j), T} & V_{C,i_c^M(j), T}\\
\end{bmatrix}}
\end{equation}
The dimension of $V^{\text{ref}}_{i_c^M(j)}$ is $T\times 3$.
Note time $t$ is hourly, therefore, $\mathscr{C}(t\in\{1,..,T\})=T$.
The single-phase consumer nodal voltage time series in cluster $c$ form the columns of the matrix $V_c^L$, given as
\begin{equation}
\small
V^{\text{L}}_{c} =
{\begin{bmatrix}
V_{\phi,i_c^L(1), 1} & V_{\phi,i_c^L(2), 1} &..& V_{\phi,i_c^L(N_c), 1}\\
V_{\phi,i_c^L(1), 2} & V_{\phi,i_c^L(2), 2} &..& V_{\phi,i_c^L(N_c), 2}\\
\vdots & \vdots & & \vdots\\
V_{\phi,i_c^L(1), T} & V_{\phi,i_c^L(2), T} &..& V_{\phi,i_c^L(N_c), T}\\
\end{bmatrix}}
\end{equation}
The dimension of $V^{\text{L}}_{c}$ is $T\times N_c$.
The connection phases of $1-\phi$ consumers are assumed to be unknown.
{For the calculation of correlation between reference $3-\phi$ voltage and $1-\phi$ consumer voltage time series,}
Pearson correlation\footnote{The Pearson correlation between two vectors $X$ and $Y$ is given as
\begin{equation}
\rho({X,Y}) = \frac{\sum (X-\bar{X}) (Y-\bar{Y})}{ \sqrt{ \sum (X-\bar{X})^2 \sum (Y-\bar{Y})^2 } }
\end{equation}} is utilized.
Three models are presented for generating metrics for phase identification using a consensus algorithm.
These three metrics are described next.
\subsubsection{J1: correlation of voltage time series}
The correlation matrix for measurement $i_c^M(j)$ is denoted as
\begin{equation}
\small
\begin{split}
& \rho_{J1}^{c,i_c^M(j)} = \\
&{\begin{bmatrix}
\rho(V_{i_c^L(1)}, V_{A,i_c^M(j)}), & \rho(V_{i_c^L(1)}, V_{B,i_c^M(j)})&\rho(V_{i_c^L(1)}, V_{C,i_c^M(j)})\\
\vdots & \vdots & \vdots\\
\rho(V_{i_c^L(N_c)}, V_{A,i_c^M(j)}), &
\rho(V_{i_c^L(N_c)}, V_{B,i_c^M(j)})&\rho(V_{i_c^L(N_c)}, V_{C,i_c^M(j)})\\
\end{bmatrix}}
\end{split}
\end{equation}
Note that the voltage time series is denoted by dropping $t$ in the notation.
For instance, $V_{i_c^L(w)}$ denotes the voltage time series for $t \in \{1,..,T\}$ for single phase consumer located at node id $i_c^L(w): w \in \{1,..,N_c\}$.
For single-phase consumers with unknown phases, the phase notation is also dropped to avoid confusion.
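As an illustration, the J1 metric can be sketched in a few lines (a minimal NumPy sketch on synthetic data; the variable names, sizes, and random voltage profiles are our own assumptions, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
T, N_c = 24, 5  # hypothetical: 24 hourly samples, 5 single-phase consumers

# Synthetic stand-ins for the reference matrix (T x 3) and load matrix (T x N_c)
V_ref = 1.0 + 0.01 * rng.standard_normal((T, 3))
V_L = 1.0 + 0.01 * rng.standard_normal((T, N_c))

def pearson(x, y):
    """Pearson correlation of two 1-D series, as in the footnote definition."""
    xc, yc = x - x.mean(), y - y.mean()
    return float(xc @ yc / np.sqrt((xc @ xc) * (yc @ yc)))

# J1 correlation matrix: entry (w, phi) correlates consumer w with phase phi
rho_J1 = np.array([[pearson(V_L[:, w], V_ref[:, p]) for p in range(3)]
                   for w in range(N_c)])

# A single-reference phase estimate takes the arg max over the three phases
P_est = rho_J1.argmax(axis=1)  # 0 -> A, 1 -> B, 2 -> C
```

With $M_c$ reference points, one such $N_c \times 3$ matrix would be computed per reference and handed to the consensus stage.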
\subsubsection{J2: Salient features with voltage difference time series}
Salient features in the voltage time series can help improve phase identification.
The use of salient features has been explored in \cite{pi3:ni2017phase, pi6:vycital2019phase, pi5:xu2016phase}. In this work, we utilize the difference matrix and a zonal voltage fluctuation threshold to identify the salient features.
The difference matrix for the reference voltage matrix for cluster $c$ and measurement $i_c^M(j)$ is given as
\begin{equation}
\Delta V^{\text{ref}}_{i_c^M(j)} = \Big[V_{\phi,i_c^M(j), t+1} - V_{\phi,i_c^M(j), t}~~ \forall~ t, \forall~\phi \Big].
\end{equation}
The dimension of $\Delta V^{\text{ref}}_{i_c^M(j)}$ is $(T-1)\times 3$.
$\beta_c$ denotes the voltage change threshold for cluster $c$.
The salient features are extracted using the $\Delta V^{\text{ref}}_{i_c^M(j)}$ matrix as
\begin{equation}
i^{\text{salient}}_{c,i_c^M(j)} = \Big\{ t \in \{1,..,T-1\} : \big|\Delta V^{\text{ref}}_{i_c^M(j)}(t)\big| > \beta_c \Big\}.
\end{equation}
The voltage difference matrix for the connected load matrix is denoted as
\begin{equation}
\Delta V_c^L = \texttt{diff}(V_c^L),
\end{equation}
where the \texttt{diff} operator computes the difference of adjacent rows. The dimension of the $\Delta V_c^L$ matrix is $(T-1)\times N_c$.
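As an illustration of the J2 pipeline, the sketch below builds the difference matrices and extracts the salient rows (a minimal NumPy sketch on synthetic data; the sizes, the threshold value, and the reading of the salient-index condition as "any phase exceeds $\beta_c$" are our own assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
T = 24
beta_c = 0.02  # hypothetical voltage change threshold for this cluster

V_ref = 1.0 + 0.015 * rng.standard_normal((T, 3))  # synthetic reference, T x 3
V_L = 1.0 + 0.015 * rng.standard_normal((T, 4))    # synthetic loads, T x 4

# Difference matrices of adjacent rows: (T-1) x 3 and (T-1) x 4
dV_ref = np.diff(V_ref, axis=0)
dV_L = np.diff(V_L, axis=0)

# Salient time steps: some phase changes by more than beta_c
# (our reading of the condition; a per-phase variant is equally possible)
i_salient = np.flatnonzero(np.any(np.abs(dV_ref) > beta_c, axis=1))

# J2 keeps only the salient rows of both matrices before correlating
dV_ref_J2 = dV_ref[i_salient, :]
dV_L_J2 = dV_L[i_salient, :]
```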
The new reference and load matrices retain only the rows with salient features in the reference matrix and are given as
\begin{gather}
\Delta V^{\text{ref, J2}}_{i_c^M(j)} = \Big[ \Delta V^{\text{ref}}_{i_c^M(j)}(i^{\text{salient}}_{c,i_c^M(j)}, :) \Big],\\
\Delta V^{\text{L, J2}}_{c} = \Big[ \Delta V_c^L(i^{\text{salient}}_{c,i_c^M(j)}, :) \Big].
\end{gather}
The correlation matrix with salient features using the voltage difference as a metric is given as
\begin{equation}
\small
\begin{split}
&\rho_{J2}^{c,i_c^M(j)} = \\
&{\begin{bmatrix}
\rho(\Delta V^{\text{L, J2}}_{c}(1), \Delta V^{\text{ref, J2}}_{A,i_c^M(j)}) & ..&\rho(\Delta V^{\text{L, J2}}_{c}(1), \Delta V^{\text{ref, J2}}_{C,i_c^M(j)})\\
\vdots & \vdots & \vdots\\
\rho(\Delta V^{\text{L, J2}}_{c}(N_c), \Delta V^{\text{ref, J2}}_{A,i_c^M(j)}) & ..&\rho(\Delta V^{\text{L, J2}}_{c}(N_c), \Delta V^{\text{ref, J2}}_{C,i_c^M(j)})\\
\end{bmatrix}}
\end{split}
\end{equation}
\subsubsection{J3: Salient features with voltage magnitude time series}
Previously, we used the voltage difference for identifying the salient features. When these salient features are projected onto the voltage magnitude series, the preceding time stamp is also required to capture the voltage change trajectory. Capturing this trajectory improves the correlation-based metric used for phase identification.
The new salient feature matrix is given as
\begin{equation}
i^{\text{sal, plus}}_{c,i_c^M(j)} = \texttt{unique}(\big[i^{\text{salient}}_{c,i_c^M(j)}, i^{\text{salient}}_{c,i_c^M(j)} +1 \big]),
\end{equation}
where the \texttt{unique} operator removes repeated time stamps, since consecutive salient indices produce duplicates.
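The index augmentation can be sketched as follows (the salient indices are hypothetical):

```python
import numpy as np

i_salient = np.array([3, 7, 8, 15])  # hypothetical salient indices from J2

# J3 adds each salient step's successor so the magnitude series captures the
# full change trajectory, then drops duplicates (8 appears twice here)
i_sal_plus = np.unique(np.concatenate([i_salient, i_salient + 1]))
# i_sal_plus is now [3, 4, 7, 8, 9, 15, 16]
```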
The new reference and load voltage matrix are given as
\begin{gather}
V^{\text{ref, J3}}_{i_c^M(j)} = \Big[ V^{\text{ref}}_{i_c^M(j)}(i^{\text{sal, plus}}_{c,i_c^M(j)}, :) \Big],\\
V^{\text{L, J3}}_{c} = \Big[ V_c^L(i^{\text{sal, plus}}_{c,i_c^M(j)}, :) \Big].
\end{gather}
The correlation matrix for model J3 is given as
\begin{equation}\begin{split}
&\rho_{J3}^{c,i_c^M(j)} = \\
&{\begin{bmatrix}
\rho(V^{\text{L, J3}}_{c}(1), V^{\text{ref, J3}}_{A,i_c^M(j)}) & ..&\rho(V^{\text{L, J3}}_{c}(1), V^{\text{ref, J3}}_{C,i_c^M(j)})\\
\vdots & \vdots & \vdots\\
\rho(V^{\text{L, J3}}_{c}(N_c), V^{\text{ref, J3}}_{A,i_c^M(j)}) & ..&\rho(V^{\text{L, J3}}_{c}(N_c), V^{\text{ref, J3}}_{C,i_c^M(j)})\\
\end{bmatrix}}
\end{split}
\end{equation}
The dimension of $\rho_{J1}^{c,i_c^M(j)},\rho_{J2}^{c,i_c^M(j)}$ and $\rho_{J3}^{c,i_c^M(j)} $ equals $N_c \times 3$.
\pagebreak
\section{Consensus algorithm}
\label{section5}
In Section \ref{section4}, we calculated correlation matrices using the voltage time series for measurement reference located at node $i_c^M(j), j \in \{1,..,M_c\}$.
Thus, with multiple measurement points in a cluster, independent phase identifications can be performed. These estimations are combined using the consensus algorithms presented in this section.
A consensus algorithm is a strategy by which a group of agents agree on a common decision.
In a multi-sensor PCI scenario, there is a single true phase placement (ground truth), denoted $P^{\text{true}}_c$. Each measurement used as a reference, combined with metrics J1, J2, and J3, yields an estimate of the true phases of the single-phase consumers in cluster $c$.
The advantage of the consensus algorithm is that no one measurement point limits the PCI accuracy.
Consensus algorithms underpin blockchain technology and are widely used in state estimation \cite{con2:rana2017consensus, con3:soatti2016consensus, con4:xia2019distributed}.
In this work, we use consensus for phase identification in a distribution network.
\subsection{Na\"{i}ve phase identification}
The na\"{i}ve phase identification, {denoted as $S_0$}, uses only one of the $3-\phi$ reference measurement points, typically the substation measurement.
\begin{equation}
P^{\text{est}, Jx, S_0}_{c} = \arg\max \rho_{Jx}^{c,i_\text{sel}},
\end{equation}
where $i_\text{sel}$ denotes the node id for the {reference measurement point.}
\textcolor{black}{Most literature on phase identification uses this na\"{i}ve model with substation time series measurement as the reference, see Tab. \ref{tab:phaselit}.}
\subsection{Majority rule}
The majority rule, {denoted as $S_1$}, is one of the most commonly used consensus algorithms. It identifies the most agreed-upon estimation. Earlier works such as \cite{maj1:goloboff2001methods, maj2:chappell2004majority} detail applications of the majority rule in building consensus among agents (sensors).
We note that for a cluster $c$, measurement point located at node $i_c^M(j)$, and metric $Jx$, we can calculate $\rho_{Jx}^{c,i_c^M(j)}$. This correlation matrix is used for calculating the estimated phases, as
\begin{equation}
\begin{split}
P^{\text{est}, Jx}_{c,i_c^M(j)} = \arg \max \rho_{Jx}^{c,i_c^M(j)},\\
P^{\text{est}, Jx, S_1}_{c} = f_{S_1}\Big(P^{\text{est}, Jx}_{c,i_c^M(j)} ~ \forall j \in \{1,..,M_c\}\Big).
\end{split}
\end{equation}
The function $f_{S_1}$ selects the majority among the estimated phases. Consider a cluster with 7 measurement points. For a given node, suppose 3 estimations predict phase B and 2 each predict phases A and C. The majority rule then estimates the phase as B.
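The worked example can be sketched as follows (a minimal illustration; phases are coded 0, 1, 2 for A, B, C, and the vote vector reproduces the hypothetical 7-reference case above):

```python
import numpy as np

# Hypothetical per-reference phase estimates for one node in a cluster
# with 7 measurement points: three votes for B, two each for A and C
votes = np.array([1, 0, 2, 1, 0, 2, 1])

def f_S1(votes):
    """Majority rule: return the phase predicted by most references."""
    counts = np.bincount(votes, minlength=3)  # tally per phase A, B, C
    return int(counts.argmax())

assert f_S1(votes) == 1  # phase B wins with 3 of 7 votes
```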
\subsection{Weighted measure}
Previously, for the majority rule-based consensus algorithm, we assumed all agents to be of equal importance (weight). However, using the physical laws governing the system, we can calculate weights for the different agents. For phase identification, earlier works point out that measurement points in geographical proximity exhibit greater voltage correlation among similar phases \cite{pi17:olivier2017automatic, pi30:short2012advanced}, see Tab. \ref{tab:phaselit}.
In this work, we use the correlation value as a weighing factor for calculating the estimated phase.
A correlation value of 1 implies perfect correlation.
\subsubsection{Correlation as a measure}
\begin{equation}
\bar{\rho}^c_{Jx} = \sum_{k=1}^{M_c} \sum_{\phi=1}^3 \rho^{c,k}_{Jx},
\end{equation}
where $\bar{\rho}^c_{Jx}$ is $N_c\times1$ vector.
The normalized correlation coefficients are given as
\begin{equation}
G^{Jx,S_2}_c = \frac{\sum_{k=1}^{M_c} \rho^{c,k}_{Jx}}{\bar{\rho}^c_{Jx}},
\label{eq:gs2}
\end{equation}
where $G^{Jx,S_2}_c$ is $N_c\times3$ matrix.
The estimated phases are given as
\begin{equation}
P^{\text{est}, Jx, S_2}_{c} = \arg \max G^{Jx,S_2}_c.
\end{equation}
\subsubsection{Absolute value of correlation as a measure}
\begin{equation}
\bar{\rho}^c_{Jx, \texttt{abs}} = \sum_{k=1}^{M_c} \sum_{\phi=1}^3 |\rho^{c,k}_{Jx}|,
\end{equation}
where $\bar{\rho}^c_{Jx, \texttt{abs}}$ is $N_c\times1$ vector.
The normalized correlation coefficients are given as
\begin{equation}
G^{Jx,S_3}_c = \frac{\sum_{k=1}^{M_c} |\rho^{c,k}_{Jx}|}{\bar{\rho}^c_{Jx, \texttt{abs}}},
\label{eq:gs3}
\end{equation}
where $G^{Jx,S_3}_c$ is $N_c\times3$ matrix.
The estimated phases are given as
\begin{equation}
P^{\text{est}, Jx, S_3}_{c} = \arg \max G^{Jx,S_3}_c.
\end{equation}
\subsubsection{Maximum value of correlation as a measure}
\begin{equation}
\bar{\rho}^c_{Jx, \texttt{max}} = \max_{k=1,..,M_c} \max_{\phi=1,2,3} |\rho^{c,k}_{Jx}|,
\end{equation}
where $\bar{\rho}^c_{Jx, \texttt{max}}$ is $N_c\times1$ vector.
The normalized correlation coefficients are given as
\begin{equation}
G^{Jx,S_4}_c = \frac{\max_{k=1,..,M_c} |\rho^{c,k}_{Jx}|}{\bar{\rho}^c_{Jx, \texttt{max}}},
\label{eq:gs4}
\end{equation}
where $G^{Jx,S_4}_c$ is $N_c\times3$ matrix.
The estimated phases are given as
\begin{equation}
P^{\text{est}, Jx, S_4}_{c} = \arg \max G^{Jx,S_4}_c.
\end{equation}
Note that \eqref{eq:gs2}, \eqref{eq:gs3} and \eqref{eq:gs4} denote element-wise division by a vector of length $N_c$.
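The three weighted measures can be sketched on a stack of per-reference correlation matrices as follows (a minimal NumPy sketch; the sizes are our own assumptions, and for simplicity the synthetic "correlations" are drawn nonnegative, whereas S2 operates on signed values in general):

```python
import numpy as np

rng = np.random.default_rng(2)
M_c, N_c = 4, 6  # hypothetical: 4 reference points, 6 consumers
rho = rng.uniform(0.0, 1.0, size=(M_c, N_c, 3))  # stack of N_c x 3 matrices

# S2: sums over references, normalized node-wise (each row sums to 1)
num_S2 = rho.sum(axis=0)
G_S2 = num_S2 / num_S2.sum(axis=1, keepdims=True)

# S3: absolute sums over references, normalized node-wise
num_S3 = np.abs(rho).sum(axis=0)
G_S3 = num_S3 / num_S3.sum(axis=1, keepdims=True)

# S4: per-phase maxima over references, normalized by the node-wise maximum
num_S4 = np.abs(rho).max(axis=0)
G_S4 = num_S4 / num_S4.max(axis=1, keepdims=True)

# Estimated phases: arg max over the three phase columns
P_S2, P_S3, P_S4 = (G.argmax(axis=1) for G in (G_S2, G_S3, G_S4))
```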
\subsection{Metrics for phase identification models}
\subsubsection{Modelling accuracy}
Consider the true phase information in a cluster, given as $P^{\text{true}}_c$.
The estimation accuracy is denoted as
\begin{gather*}
\text{Estimation accuracy} = \frac{\text{number of correct phase estimations}}{\text{total number of single-phase consumers}}.
\end{gather*}
The phase estimation accuracy for metric $Jx \in \{J1, J2, J3\}$, cluster $c$ and measurement $i_c^M(j)$ is given as
\begin{equation}
A^{Jx}_{c, i_c^M(j)} = 100 \times \Big\{ 1- \frac{ \sum \mathbbm{1}\Big( P^{\text{est}, Jx}_{c, i_c^M(j)}- P^{\text{true}}_c \neq 0 \Big) }{N_c} \Big\},
\end{equation}
where $P^{\text{est}, Jx}_{c, i_c^M(j)}$ denotes the phase estimation vector for input metric $Jx$, cluster $c$ and measurement $i_c^M(j)$.
The estimation accuracy for consensus algorithm $Sy$ is given as
\begin{equation}
A^{Jx, Sy}_{c} = 100 \times \Big\{ 1- \frac{ \sum_{n=1}^{N_c} \mathbbm{1}\Big( P^{\text{est}, Jx, Sy}_{c}- P^{\text{true}}_c \neq 0 \Big) }{N_c} \Big\}.
\end{equation}
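The accuracy metric can be sketched as follows (hypothetical phase vectors, phases coded 0, 1, 2):

```python
import numpy as np

P_true = np.array([0, 1, 2, 0, 1, 2])  # hypothetical ground-truth phases
P_est = np.array([0, 1, 2, 0, 2, 2])   # one incorrect estimate (5th consumer)

def accuracy(P_est, P_true):
    """Estimation accuracy in %: share of correctly identified consumers."""
    N_c = len(P_true)
    return 100.0 * (1.0 - np.count_nonzero(P_est - P_true) / N_c)

# 5 of 6 consumers correct, i.e. about 83.3 %
assert abs(accuracy(P_est, P_true) - 500.0 / 6.0) < 1e-9
```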
Since there are three base metrics $Jx \in \{J1, J2, J3\}$ and five consensus models (including the na\"{i}ve model) $Sy \in \{ S_0, S_1, S_2, S_3, S_4\}$, we evaluate 13 phase identification models in total {(the na\"{i}ve model, $S_0$, is performed for J1 only)}. In the numerical results, we compare the benefits and shortcomings of these models. The estimation accuracy averaged over $Q$ Monte Carlo simulations is denoted as $\bar{A}^{Jx, Sy}_{c}$.
\subsubsection{Confidence factor for phase identification}
Note that models $S_1,S_3$, and $S_4$ provide coefficients that add up to 1 (node-wise). Thus, $G^{Jx,S_3}_c$ and $G^{Jx,S_4}_c$ can be used to indicate probabilities of phase estimation.
For $S_1$, the estimation probabilities can be calculated by dividing $P^{\text{est}, Jx, S_1}_{c} $ with the number of measurements in a cluster, $M_c$.
We define the confidence factor as the minimum distance between the factor associated with the correct phase and the maximum of the two incorrect phases, over all nodes in the cluster $c$.
For model $S_2$, we normalize the confidence factor by the range of variation of $G^{Jx,S_2}_c$.
The proposed confidence factor will provide us with additional information about the robustness of our phase estimation output.
Note that the proposed confidence factor lies in the range $[-1,1]$ for $S_1, S_3$, and $S_4$. A confidence factor close to 1 implies very high confidence in the phase estimation output.
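The confidence factor for one reference measurement can be sketched as follows (a hypothetical coefficient matrix $G$ and true phases; the names and values are our own):

```python
import numpy as np

G = np.array([[0.6, 0.3, 0.1],   # hypothetical normalized coefficients
              [0.2, 0.5, 0.3],   # (one row per node, one column per phase)
              [0.3, 0.3, 0.4]])
P_true = np.array([0, 1, 2])     # true phase index of each node

def confidence_factor(G, P_true):
    """Minimum over nodes of: coefficient of the correct phase minus the
    maximum coefficient of the two incorrect phases."""
    n = np.arange(len(P_true))
    correct = G[n, P_true]
    others = G.copy()
    others[n, P_true] = -np.inf  # mask the correct phase
    return float(np.min(correct - others.max(axis=1)))

# node-wise margins are 0.3, 0.2 and 0.1, so the cluster factor is 0.1
assert np.isclose(confidence_factor(G, P_true), 0.1)
```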
The confidence factor for measurement $k\in \{1,..,M_c\}$ and cluster $c$ is given as $F^{ Jx, S_y}_{c,k}$.
The confidence factor for a cluster $c$ for all Monte Carlo scenarios is given as
\begin{equation}
\bar{F}^{ Jx, S_y}_{c} = \frac{1}{Q\times M_c} \sum_{q=1}^Q \sum_{k=1}^{M_c} F^{ Jx, S_y}_{c,k},
\end{equation}
where $q\in\{1,..,Q\}$ denotes the Monte Carlo scenarios.
\subsubsection{Standard deviation with measurement error}
$Q$ Monte Carlo simulations are considered to minimize the measurement error biases in phase estimation. For each Monte Carlo iteration $q \in \{1,...,Q\}$, we calculate the standard deviation of the measure used for calculating
$P^{\text{est}, Jx, Sy}_{c}$, denoted as $D^{ Jx, S_y}_{c}$.
\pagebreak
\section{Numerical results}
\label{section6}
The numerical case study considers a German DN with 646 nodes and 331 connected loads. Out of these, 313 loads (94.6\% of all consumers) are single-phase. The phase connections are {largely unknown} to Mitnetz Strom. The objective of the case study is to assess the phase identification algorithms proposed in this work, {benchmarked against the na\"{i}ve phase identification model}.
The selected DN is part of the demo network selected for evaluation in the EUniversal project.
Mitnetz Strom placed 53 $3-\phi$ measurement {devices} in the DN. These measurement points will be considered as references used for PCI.
The flexibility participants will be provided with a Home Energy Management System (HEMS) which will provide measurements of load and voltages at the point of common coupling. In the case studies, we assume the time series of voltage measurements of all single-phase users are known. In the real world, only measurements of consumers with HEMS will be available.
There are four case studies performed in this paper.
The first case study compares the 12 phase identification models on different phase mappings.
The second case study quantifies the impact of reference location in a DN on the phase identification metrics proposed in the work.
The third case study assesses the impact of measurement error at the reference and/or at the consumer location on phase identification metrics.
The last case study compares the phase identification metrics for DN with and without the neutral conductor model. As most European DNs are four-wire, it is crucial to quantify the impact.
Prior to case studies, we detail the {test DN} clustering results and the performance of the na\"{i}ve phase identification algorithm. The benefits of the proposed phase identification algorithms are compared to the na\"{i}ve model.
\subsection{Clustering of distribution network}
Fig. \ref{fig:measurmentpts} shows the location of single-phase consumers and measurement points in the DN.
We apply the clustering algorithm for identifying the clusters in the DN.
\begin{figure}[!htbp]
\center
\includegraphics[width=0.9\linewidth]{MLq0094_loads_measurements.png}
\vspace{-10pt}
\caption{\small{Consumers (blue) and measurement points (orange) in the DN}}
\label{fig:measurmentpts}
\end{figure}
Since the number of clusters to be formed is not known a priori, we utilize the silhouette score, plotted in
Fig. \ref{fig:silhouette}, to fix the number of clusters.
Observe that the silhouette score is maximized for 3 clusters, with a value of 0.872. However, we select 7 clusters, since maximizing the silhouette score is not the only goal: the number of clusters must also keep the problem tractable {while sufficiently representing the DN}. Note there is a sharp decline in the silhouette score if the number of clusters is increased beyond 7.
\begin{figure}[!htbp]
\center
\includegraphics[width=0.9\linewidth]{silhouette.pdf}
\vspace{-1pt}
\caption{\small{Choosing the number of clusters of DN based on the silhouette coefficient. In (a) the variation of silhouette coefficient is plotted with increasing number of cluster. In (b) we zoom into the plot (a). Note the silhouette coefficient is maximum for 3 clusters, however, the best-suited number of clusters selected is 7 \cite{https://doi.org/10.48550/arxiv.2207.10234}.}}
\label{fig:silhouette}
\end{figure}
Previously, we defined $\beta_c$ as the voltage change threshold for cluster $c$. Note that $\beta_c$ varies across clusters, as voltage fluctuations differ drastically between zones. Fig. \ref{fig:clustervoltage} shows the variation of nodal voltages in the seven clusters of the Mitnetz Strom {example LV} distribution network. Observe that the voltage variation in cluster 2 is very small, ranging from 0.995 to 1.002 per unit. This narrow band arises because cluster 2 includes the substation and the slack bus, where the voltage is regulated at the 1 per unit level.
\begin{figure*}[!htbp]
\center
\includegraphics[width=1.07\linewidth]{cluster_voltage.pdf}
\vspace{-2pt}
\caption{\small{Distribution of voltage in phases in different clusters for Mitnetz Strom DN for cluster 1 to 7.}}
\label{fig:clustervoltage}
\end{figure*}
Fig. \ref{fig:clusterss} shows the DN clusters. The identified clusters are stable, as validated by 100 Monte Carlo (MC) simulations.
The stability of clusters under different initializations is discussed in \cite{bubeck2009initialization, kuncheva2006evaluation}.
Clustering based on $k$-means is sensitive to the randomized initialization of centroids and, if not stable, would yield different clusters in different iterations.
Observe that the numbering of clusters in Fig. \ref{fig:clusterss} is based on randomized initialization of the centroids and would indeed vary in a different iteration.
\begin{figure}[!htbp]
\center
\includegraphics[width=0.8\linewidth]{MLq0094_clusters.png}
\vspace{-10pt}
\caption{\small{Clusters of DN.}}
\label{fig:clusterss}
\end{figure}
Tab. \ref{tab:phasemapping} details the phase mapping scenarios. In the first case study, we assess the impact of different phase mappings.
C1 denotes balanced, C2 denotes moderately balanced, and C3 denotes unbalanced phase mapping based on the annual cumulative load on each phase.
For the rest of the paper, if phase mapping is not explicitly mentioned, then C2 phase mapping is used, see Tab. \ref{tab:phasemapping}.
\begin{table}[!htbp]
\centering
\caption{\small{Phase mapping scenarios}}
\begin{tabular}{c|c|c|c|c}
\cline{3-5}
\multicolumn{1}{l}{} & & \multicolumn{3}{l}{Annual cumulative load (MWh)} \\
\hline
ID & Cases & Phase A & Phase B & Phase C \\
\hline
\hline
C1 & Highly balanced & 274.6 & 282.3 & 275.7 \\
\hline
C2 & Fairly balanced & 288.6 & 292.4 & 251.6 \\
\hline
C3 & Highly unbalanced & 240.4 & 257.7 & 334.4 \\
\hline
\end{tabular}
\label{tab:phasemapping}
\end{table}
The details of the number of consumers, measurement points, and voltage variation for phase mapping C2 (see Table \ref{tab:phasemapping}) are provided in Table \ref{tab:clusterdetails}.
\begin{table}[!htbp]
\centering
\caption{\small{Network attributes {for C2 phase mapping}}}
\begin{tabular}{c|c|c|c|c}
\hline
\textbf{Cluster ID} & $N_c$ & $M_c$ & $\beta_c$ & {Max voltage deviation} \\
\hline
\hline
1 & 20 & 4 & 0.056 & 0.136 \\
\hline
2 & 73 & 8 & 0.008 & 0.018 \\
\hline
3 & 21 & 9 & 0.064 & 0.164 \\
\hline
4 & 56 & 2 & 0.051 & 0.172 \\
\hline
5 & 46 & 9 & 0.0493 & 0.112 \\
\hline
6 & 38 & 8 & 0.040 & 0.091 \\
\hline
7 & 59 & 13 & 0.031 & 0.058 \\
\hline
\end{tabular}
\label{tab:clusterdetails}
\end{table}
\subsection{Performance of na\"{i}ve model}
As the majority of prior works utilize a single voltage reference for PCI, we first show the performance of this model, {referred to as the na\"{i}ve model}, before evaluating the proposed phase identification models.
We utilize 4 different measurement points close to the transformer to evaluate the na\"{i}ve phase identification model.
The measurement points are shown in Fig. \ref{fig:measurmentpts}.
It is also indicated that nodes 1, 72, 74, and 511 are connected to feeders going towards clusters 6, 4, 5, and 7 respectively, see Fig. \ref{fig:clusterss}.
A zoomed-in plot of measurement points and the location of the nodes of measurement is shown in Fig. \ref{fig:naiveLoc}.
The results of PCI using the na\"{i}ve model are detailed in Table \ref{tab:naiveres}, which lists the cluster-wise PCI accuracy. Observe that the na\"{i}ve model correctly estimates the phase connectivity for the cluster to which its reference is directly connected; however, the estimation accuracy for other clusters can be as low as 0\%.
This is also shown in Fig. \ref{fig:naiveAccuracy}, where the measurement points are indicated by black squares, correctly estimated consumer phases by green circles, and incorrect identifications by red circles.
We can observe that \textit{\textbf{the selection of a reference highly affects the phase connectivity identification accuracy in a multi-feeder distribution network}}.
\begin{figure}[!htbp]
\center
\includegraphics[width=0.5\linewidth]{naiveM.pdf}
\vspace{-10pt}
\caption{\small{Measurement locations for na\"{i}ve model evaluation}}
\label{fig:naiveLoc}
\end{figure}
\begin{table}[!htbp]
\centering
\caption{PCI accuracy with na\"{i}ve model}
\begin{tabular}{c|c|c|c|c}
\cline{2-5}
\multicolumn{1}{c|}{} & Node 1 & Node 72 & Node 74 & Node 511 \\
\hline
\hline
Cluster 1 & 33.33 & 52.94 & \textbf{100 } & 61.11 \\
\hline
Cluster 2 & 44.44 & 50 & 56.76 & 29.03 \\
\hline
Cluster 3 & 82.35 & 47.37 & \textbf{100} & 73.68 \\
\hline
Cluster 4 & 12.5 & \textbf{100} & 35.14 & 57.14 \\
\hline
Cluster 5 & 69.77 & 23.26 & \textbf{100} & 61.36 \\
\hline
Cluster 6 & \textbf{100} & 0 & 63.64 & 15.63 \\
\hline
Cluster 7 & 47.06 & 15.56 & 19.57 & \textbf{100 } \\
\hline
overall accuracy & 47.92 & 44.09 & 55.59 & 52.72 \\
\hline
\end{tabular}
\label{tab:naiveres}
\end{table}
\begin{figure*}[t]
\centering
\begin{subfigure}{0.49\textwidth}
\raisebox{-\height}{\includegraphics[width=\textwidth]{MLq0094_phase_estimate_node1.png}}
\caption{Node 1 as reference}
\end{subfigure}
\hfill
\begin{subfigure}{0.49\textwidth}
\raisebox{-\height}{\includegraphics[width=\textwidth]{MLq0094_phase_estimate_node72.png}}
\caption{Node 72 as reference}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\raisebox{-\height}{\includegraphics[width=\textwidth]{MLq0094_phase_estimate_node74.png}}
\caption{Node 74 as reference}
\end{subfigure}
\hfill
\begin{subfigure}{0.49\textwidth}
\raisebox{-\height}{\includegraphics[width=\textwidth]{MLq0094_phase_estimate_node511.png}}
\caption{Node 511 as reference}
\end{subfigure}
\caption{Phase connectivity identification accuracy with na\"{i}ve model. The correct phase estimations are indicated with green circles, and incorrect with red circles. The location of the reference is indicated with a black square. The substation is marked with a yellow square.}
\label{fig:naiveAccuracy}
\end{figure*}
It is clear from the numerical evaluations that
\begin{itemize}
\item Na\"{i}ve model for phase connectivity identification is very sensitive towards the selection of the reference voltage node which is utilized for phase identification of a single phase consumer, and
\item Na\"{i}ve model does not consider multiple reference measurements in a DN,
\item The mean PCI accuracy with na\"{i}ve model in a multi-feeder DN considered in this work is below 56\%.
\end{itemize}
The numerical case studies are presented subsequently.
\subsection{Case study 1: Comparing phase identification models}
Previously, we proposed three metrics $\{J1, J2, J3\}$ and four consensus algorithms $\{S1, S2, S3, S4\}$. In this case study, the resulting 12 phase connectivity identification algorithms are compared on the German DN.
This case study provides recommendations for the best-suited metric and consensus algorithm to be used for phase identification. Note that these recommendations could vary for other DNs.
All results consider a measurement error of 1\%. To average out the impact of measurement error on PCI, we perform 1000 MC simulations with different measurement errors calculated using \eqref{eq:measurmenterror}.
\begin{table}[!htbp]
\centering
\caption{Majority rule (S1) model for cluster 2}
\begin{tabular}{c|c|c|c|c}
\hline
Case & Metric (Jx) & Accuracy & Confidence factor & Sensitivity \\
& & ($\bar{A}^{Jx, Sy}_{c}$) in \% & ($\bar{F}^{ Jx, S_y}_{c}$) & ($D^{ Jx, S_y}_{c}$) \\
\hline
\multirow{3}{*}{C1} & J1 & 50.99 & 0 & 0.3714 \\
\cline{2-5}
& J2 & 50.99 & 0 & 0.3714 \\
\cline{2-5}
& J3 & 59.99 & 0 & 0.4641 \\
\hline
\multirow{3}{*}{C2} & J1 & 60.76 & 0 & 0.3878 \\
\cline{2-5}
& J2 & 60.75 & 0 & 0.3879 \\
\cline{2-5}
& J3 & 61.14 & 0 & 0.4612 \\
\hline
\multirow{3}{*}{C3} & J1 & 58.91 & 0 & 0.3791 \\
\cline{2-5}
& J2 & 59.83 & 0 & 0.3792 \\
\cline{2-5}
& J3 & 59.41 & 0 & 0.4628 \\
\hline
\end{tabular}
\label{tab:cas1eliminateS1}
\end{table}
The majority rule consensus algorithm (S1) outperforms all other proposed models for every cluster except cluster 2: for those clusters, the majority rule provides 100\% accurate results with a confidence factor of 1.
As detailed earlier, cluster 2 is the part of the DN around the substation. The voltage deviation in this cluster is very small.
Table \ref{tab:cas1eliminateS1} shows the performance of the majority rule consensus algorithm for cluster 2. Observe that phase identification accuracy is very poor for all three metrics. Thus, we drop the majority rule consensus algorithm from subsequent evaluations.
\textcolor{black}{From Fig. \ref{fig:c1confidence}, the confidence factor deteriorates from case C1 (balanced phase mapping) to case C3 (highly unbalanced phase mapping). No noticeable change in PCI accuracy is observed for consensus algorithms S2, S3, and S4.
The mean confidence factors for cases C1, C2, and C3 are 0.201, 0.174, and 0.133 respectively.
Thus, \textbf{\textit{correlation-based phase connectivity identification tends to be more accurate for more balanced phase mapping}}.}
\begin{figure}[!htbp]
\center
\includegraphics[width=0.8\linewidth]{c1confidence.pdf}
\vspace{-2pt}
\caption{\small{Confidence factor for three-phase mappings for metrics \{J1, J2, J3\} and consensus algorithms \{S2, S3, S4\}.}}
\label{fig:c1confidence}
\end{figure}
\begin{figure}[!htbp]
\center
\includegraphics[width=0.75\linewidth]{c1_accuracy.pdf}
\vspace{-5pt}
\caption{\small{Accuracy for three-phase mappings.}}
\label{fig:c1accuracy}
\end{figure}
Fig. \ref{fig:c1accuracy} shows the mean phase estimation accuracy for metrics \{J1, J2, J3\}. Observe that the mean estimation accuracy deteriorates from metric J1 to J3. Further, the accuracy is higher for balanced than for unbalanced phase mapping.
Thus, Figs. \ref{fig:c1confidence} and \ref{fig:c1accuracy} quantify the impact of phase mappings \{C1, C2, C3\} on phase identification. We also observe that metric J1 outperforms the others, making it the more robust measure for phase identification.
\begin{figure}[!htbp]
\center
\includegraphics[width=0.7\linewidth]{modelSel.pdf}
\vspace{-2pt}
\caption{\small{For three different phase mappings and 7 clusters, the S3-J1 phase identification model outperforms the others most of the time. The plot shows a histogram of the best model per cluster.}}
\label{fig:best}
\end{figure}
{
For each cluster $\{1,...,7\}$ and each phase mapping \{C1, C2, C3\}, we rank the phase identification methods based on three metrics: (a) estimation accuracy in \%, denoted as $\bar{A}^{Jx, Sy}_{c}$, (b) confidence factor, denoted as $\bar{F}^{ Jx, S_y}_{c}$, and (c) sensitivity towards measurement errors, denoted as $D^{ Jx, S_y}_{c}$.}
Out of these 21 ranked cases, the combination of consensus algorithm S3 and metric J1, denoted as S3-J1\footnote{Model Sy-Jx denotes that the phase identification algorithm utilizes Jx as the metric and Sy as the consensus algorithm.}, outperformed all other combinations 13 times, see Fig. \ref{fig:best}.
Therefore, for all subsequent studies, we will utilize S3-J1 as the best-suited model for phase identification {for the DN considered}.
Table \ref{tab:case1consolidated} shows the consolidated results for metric J1.
Observe that the sensitivity of phase identification to measurement error is more than 10 times higher for cluster 2 (close to the substation, with low $\beta_c$) than for the other clusters.
This effect can again be attributed to low voltage fluctuations in cluster 2, see Fig. \ref{fig:clustervoltage}.
\begin{table*}[!htbp]
\centering
\footnotesize
\caption{PCI estimation for metric J1}
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c}
\cline{3-11}
\multicolumn{1}{c}{} & & \multicolumn{3}{c|}{Balanced (C1)} & \multicolumn{3}{c|}{Fairly balanced (C2)} & \multicolumn{3}{c|}{Unbalanced (C3)} \\
\hline
Cluster & Consensus & $\bar{A}^{Jx, Sy}_{c}$ & $\bar{F}^{ Jx, S_y}_{c}$ & $D^{ Jx, S_y}_{c}$ & $\bar{A}^{Jx, Sy}_{c}$ & $\bar{F}^{ Jx, S_y}_{c}$ & $D^{ Jx, S_y}_{c}$ & $\bar{A}^{Jx, Sy}_{c}$ & $\bar{F}^{ Jx, S_y}_{c}$ & $D^{ Jx, S_y}_{c}$ \\
\hline
1 & S2 & 100 & 0.5592 & 0.0295 & 100 & 0.2951 & 0.0344 & 40 & -0.0669 & 0.0386 \\
1 & S3 & 100 & 0.4509 & 0.016 & 100 & 0.376 & 0.0137 & 100 & 0.0615 & 0.0068 \\
1 & S4 & 100 & 0.4361 & 0.0179 & 100 & 0.3723 & 0.017 & 100 & 0.0654 & 0.0076 \\
\hline
2 & S2 & 86.7973 & -0.0022 & 3.6282 & 91.2137 & -0.0027 & 6.94 & 85.4356 & -0.0008 & 4.0251 \\
2 & S3 & 88.9466 & -0.0049 & 0.1293 & 87.2301 & -0.0038 & 0.1338 & 88.3151 & -0.0049 & 0.1344 \\
2 & S4 & 91.6849 & -0.0076 & 0.1202 & 90.3699 & -0.0065 & 0.1249 & 91.2589 & -0.007 & 0.1217 \\
\hline
3 & S2 & 100 & 0.4791 & 0.0264 & 100 & 0.1388 & 0.0567 & 100 & 0.4958 & 0.0261 \\
3 & S3 & 100 & 0.4115 & 0.0144 & 100 & 0.2644 & 0.0103 & 100 & 0.3907 & 0.0138 \\
3 & S4 & 100 & 0.3856 & 0.0159 & 100 & 0.2569 & 0.0122 & 100 & 0.3575 & 0.0157 \\
\hline
4 & S2 & 91.5196 & -0.0035 & 7.2585 & 100 & 0.1442 & 0.069 & 99.6857 & 0.0337 & 3.4869 \\
4 & S3 & 100 & 0.1738 & 0.0166 & 100 & 0.2467 & 0.0225 & 100 & 0.1704 & 0.0168 \\
4 & S4 & 100 & 0.1722 & 0.01471 & 100 & 0.261 & 0.236 & 100 & 0.1607 & 0.0172 \\
\hline
5 & S2 & 100 & 0.3729 & 0.0399 & 100 & 0.2052 & 0.0558 & 100 & 0.2329 & 0.0422 \\
5 & S3 & 100 & 0.3827 & 0.0235 & 100 & 0.3451 & 0.02 & 100 & 0.2453 & 0.0159 \\
5 & S4 & 100 & 0.3783 & 0.0265 & 100 & 0.3287 & 0.0242 & 100 & 0.2245 & 0.0194 \\
\hline
6 & S2 & 100 & 0.2579 & 0.0509 & 100 & 0.2887 & 0.0466 & 100 & 0.1096 & 0.1027 \\
6 & S3 & 100 & 0.3457 & 0.0191 & 100 & 0.2927 & 0.0201 & 100 & 0.2376 & 0.0196 \\
6 & S4 & 100 & 0.3153 & 0.0235 & 100 & 0.2923 & 0.0226 & 100 & 0.1968 & 0.0217 \\
\hline
7 & S2 & 100 & 0.0741 & 0.0961 & 99.9983 & 0.0746 & 0.5477 & 100 & 0.0406 & 0.1595 \\
7 & S3 & 100 & 0.1803 & 0.0461 & 100 & 0.1 & 0.0347 & 100 & 0.2281 & 0.0458 \\
7 & S4 & 100 & 0.1732 & 0.0491 & 100 & 0.0955 & 0.0479 & 100 & 0.1587 & 0.0484 \\
\hline
\end{tabular}
\label{tab:case1consolidated}
\end{table*}
\subsection{Case study 2: Effect of the proximity of measurement on PCI}
The goal of this case study is to assess the impact of measurement point proximity on phase identification performance metrics.
The simplified representation of the original network model in Fig. \ref{fig:clusterss} is shown in Fig. \ref{fig:zonessimplified}.
\begin{figure}[!htbp]
\center
\includegraphics[width=0.7\linewidth]{zoneSimplified.pdf}
\vspace{-2pt}
\caption{\small{Zones in a simplified network diagram.}}
\label{fig:zonessimplified}
\end{figure}
\begin{table}[!htbp]
\centering
\caption{\small{Zonal connections}}
\begin{tabular}{c|c|c|c|c|c|c|c}
\hline
L0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 \\
\hline
\hline
L1 & 5 & {[}4,5,6,7] & 5 & 2 & {[}1,2,3] & 2 & 2 \\
\hline
L2 & {[}2,3] & {[}1,3] & {[}1,2] & {[}5,6,7] & {[}4,6,7] & {[}4,5,7] & {[}4,5,6] \\
\hline
L3 & {[}4,6,7] & - & {[}4,6,7] & {[}1,3] & - & {[}1,3] & {[}1,3] \\
\hline
\end{tabular}
\label{tab:zoneorder}
\end{table}
Zonal connections are listed in Table \ref{tab:zoneorder}. L0 denotes the zone where the $1-\phi$ consumer is located. L1 denotes the zones that are directly connected to L0. Similarly, L2 denotes zones that are not connected directly to L0 but only via an L1 zone (second-order neighbors), and so on.
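The zone orders L1, L2, L3 are simply breadth-first-search distances on the zone adjacency graph. A minimal sketch (the \texttt{adjacency} dictionary below is read off the simplified diagram and Table \ref{tab:zoneorder}; the function and variable names are illustrative, not part of the implementation used in this work):

```python
from collections import deque

def neighbor_orders(adjacency, origin):
    """Breadth-first search that labels every zone with its order
    relative to the origin zone L0 (order 1 = L1, 2 = L2, ...)."""
    order = {origin: 0}
    queue = deque([origin])
    while queue:
        zone = queue.popleft()
        for nbr in adjacency[zone]:
            if nbr not in order:  # first visit gives the minimal order
                order[nbr] = order[zone] + 1
                queue.append(nbr)
    return order

# Zone adjacency consistent with the simplified diagram:
# zone 1 borders 5; zone 2 borders 4, 5, 6, 7; zone 3 borders 5; etc.
adjacency = {
    1: [5], 2: [4, 5, 6, 7], 3: [5],
    4: [2], 5: [1, 2, 3], 6: [2], 7: [2],
}
print(neighbor_orders(adjacency, 1))
# → {1: 0, 5: 1, 2: 2, 3: 2, 4: 3, 6: 3, 7: 3}
```

For zone 1 as L0 this reproduces the first column of Table \ref{tab:zoneorder}: L1 $=$ 5, L2 $=$ [2,3], L3 $=$ [4,6,7].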
\begin{table}[!htbp]
\centering
\caption{Mean PCI accuracy with reference selection}
\begin{tabular}{c|c|c|c|c|c|c|c}
\hline
level & 1 & 2 & 3 & 4 & 5 & 6 & 7 \\
\hline
\hline
L0 & 100 & 90.44 & 100 & 100 & 100 & 100 & 100 \\
\hline
L1 & 100 & 43.94 & 100 & 100 & 100 & 99.39 & 78.78 \\
\hline
L2 & 100 & 35.71 & 100 & 29.57 & 44.94 & 75.18 & 48.44 \\
\hline
L3 & 34.76 & - & 47.31 & 27.37 & - & 30.47 & 19.55 \\
\hline
\end{tabular}
\label{tab:c2r1}
\end{table}
\begin{table}[!htbp]
\centering
\caption{Mean confidence factor ($\bar{F}^{J_x,S_y}_{c}$) with reference selection}
\begin{tabular}{c|c|c|c|c|c|c|c}
\hline
level & 1 & 2 & 3 & 4 & 5 & 6 & 7 \\
\hline
\hline
L0 & 0.372 & -0.007 & 0.256 & 0.261 & 0.329 & 0.293 & 0.096 \\
\hline
L1 & 0.357 & -0.005 & 0.262 & 0.141 & 0.317 & 0.039 & -0.003 \\
\hline
L2 & 0.333 & -0.005 & 0.240 & -0.004 & -0.014 & -0.025 & -0.005 \\
\hline
L3 & -0.049 & - & -0.051 & -0.004 & - & -0.016 & -0.003 \\
\hline
\end{tabular}
\label{tab:c2r2}
\end{table}
\begin{table}[!htbp]
\centering
\caption{Mean PCI STD ($D^{J_x,S_y}_{c}$) with reference selection}
\begin{tabular}{c|c|c|c|c|c|c|c}
\hline
level & 1 & 2 & 3 & 4 & 5 & 6 & 7 \\
\hline
\hline
L0 & 0.017 & 0.125 & 0.012 & 0.023 & 0.024 & 0.022 & 0.048 \\
\hline
L1 & 0.019 & 0.223 & 0.018 & 0.074 & 0.039 & 0.078 & 0.108 \\
\hline
L2 & 0.044 & 0.259 & 0.042 & 0.099 & 0.088 & 0.100 & 0.124 \\
\hline
L3 & 0.068 & - & 0.066 & 0.094 & - & 0.101 & 0.116 \\
\hline
\end{tabular}
\label{tab:c2r3}
\end{table}
Tables \ref{tab:c2r1}, \ref{tab:c2r2}, and \ref{tab:c2r3} list the numerical results with mean phase estimation metrics for each consumer and each measurement point in the DN.
The following observations are made:
\begin{itemize}
\item Phase estimation accuracy deteriorates as the electrical distance between the reference measurement point used in correlation-based phase identification and the $1-\phi$ consumer with unknown phase connection increases;
\item The confidence of the estimation likewise deteriorates as this distance increases;
\item The mean estimation variance increases with this distance, implying that the estimation becomes more sensitive to measurement errors.
\end{itemize}
The phase connectivity identification metrics proposed in this work not only quantify the estimation accuracy in \%, but also qualitatively capture the confidence factor (the higher the better) and the sensitivity to measurement errors (the lower the better). Using these metrics, we observe that selecting measurement points in close proximity yields better phase estimation quality. This also \textbf{validates the zonal phase identification approach} established in this work.
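Assuming each Monte Carlo run returns, per consumer, an estimated phase and a correlation-based score, the three metrics can be aggregated roughly as follows. The definitions below are illustrative stand-ins, not the exact formulas used in this work; \texttt{pci\_metrics} and the score convention are our own names:

```python
import statistics

def pci_metrics(runs, true_phase):
    """Aggregate Monte Carlo phase-identification runs into three
    summary metrics: mean accuracy (%), mean confidence, and the
    spread of accuracy across runs (a proxy for error sensitivity).

    runs: list of dicts consumer -> (estimated_phase, score), where
          score is e.g. the winning correlation margin of the estimate.
    """
    accuracy = []    # per-run fraction of correctly identified consumers
    confidence = []  # per-run mean score (higher = clearer decision)
    for run in runs:
        correct = [est == true_phase[c] for c, (est, _) in run.items()]
        accuracy.append(100.0 * sum(correct) / len(correct))
        confidence.append(statistics.mean(s for _, s in run.values()))
    return {
        "mean_accuracy_pct": statistics.mean(accuracy),
        "mean_confidence": statistics.mean(confidence),
        "accuracy_std": statistics.stdev(accuracy) if len(accuracy) > 1 else 0.0,
    }
```

A higher mean confidence and a lower accuracy spread then correspond to the "higher the better" and "lower the better" readings used above.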
\subsection{Case study 3: Impact of measurement error}
In case study 1, we identified S3-J1 as the best-suited model for phase identification. We also observed that an imbalance in the DN leads to worse estimation than a balanced phase mapping. To avoid overly pessimistic or optimistic phase identification results, we therefore utilize the C2 phase mapping. We apply different levels of measurement error at the reference point and at the single-phase consumer's point of connection.
To avoid being biased by a single sample of the estimation error, we perform 1000 MC simulations.
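One simple way to model the swept measurement-error levels is zero-mean Gaussian noise proportional to each voltage sample. This noise model and the function name are assumptions for illustration; the error distribution used in the simulations is not restated here:

```python
import random

def add_measurement_error(voltages, error_pct, rng=random.Random(0)):
    """Perturb a voltage time series with zero-mean Gaussian noise whose
    standard deviation is error_pct percent of each sample (one way to
    model the 1%--10% measurement errors swept in this case study).
    The generator is seeded for reproducibility."""
    return [v + rng.gauss(0.0, abs(v) * error_pct / 100.0) for v in voltages]
```

Drawing a fresh perturbed series per MC iteration (e.g., 1000 times) then yields the distribution of PCI metrics under that error level.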
The three phase-identification metrics are shown in Fig. \ref{fig:case31}.
All three metrics are influenced almost equally by measurement error at the reference voltage measurement and at the 1-$\phi$ consumer end.
Observe from Fig. \ref{fig:case31} that:
\begin{itemize}
\item The mean phase estimation accuracy for a 1\% error in the reference and consumer voltage measurements is 98.7\%. This implies that our consensus-based phase estimation algorithm accurately estimated the phases of more than 308 of the 313 single-phase consumers on average.
\item For a 10\% error in the reference and consumer voltage measurements, the mean accuracy is still 82\%. Thus, the phase estimation proposed in this work is largely immune to measurement errors.
\item The confidence factor deteriorates with an increase in measurement errors, see Fig. \ref{fig:case31}(b).
\item The sensitivity of the phase estimation to measurement error, shown as the estimation variance in Fig. \ref{fig:case31}(c), increases with increasing measurement error.
\end{itemize}
\begin{figure}[!htbp]
\center
\includegraphics[width=0.75\linewidth]{case3_measuremtv3.pdf}
\vspace{-2pt}
\caption{\small{Impact of measurement accuracy on PCI metrics. (a) shows the accuracy, (b) shows the confidence factor and (c) shows the variance of PCI respectively.}}
\label{fig:case31}
\end{figure}
\subsection{Case study 4: Effect of size of neutral conductor}
European DNs are 4-wire systems with a neutral conductor. The impact of modeling the neutral conductor has not been assessed in any of the prior works. This is especially crucial if the digital twin is used to generate synthetic data.
Fig. \ref{fig:case41} shows the phase identification metrics for model S3-J1 and phase mapping C2 with and without neutral conductor considerations.
It is clear that modeling the neutral conductor improves all three phase estimation metrics.
\begin{figure}[!htbp]
\center
\includegraphics[width=0.5\linewidth]{case_4neutral.pdf}
\vspace{-2pt}
\caption{\small{Impact of neutral conductor modeling on phase estimation metrics.}}
\label{fig:case41}
\end{figure}
\pagebreak
\section{Conclusions}
\label{section7}
We propose a phase connectivity identification (PCI) algorithm that utilizes voltage time series, distribution network (DN) zones, and multiple measurements as references to improve the PCI accuracy.
The proposed phase identification algorithm builds a consensus among multiple estimations in a zone. The method is extended to metrics derived from the voltage time series, which filter out larger voltage deviations.
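The consensus idea can be sketched as follows: each reference measurement votes for the phase whose voltage series correlates best with the consumer's series, and votes are weighted by the absolute correlation. This is a simplified stand-in for the absolute-weighted consensus algorithm described above, with illustrative names throughout:

```python
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) *
           sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den if den else 0.0

def identify_phase(consumer_v, references):
    """Correlation-based phase identification with a simple consensus
    (a rough stand-in for the absolute-weighted consensus used here).

    references: list of dicts phase -> voltage series, one per
    reference measurement point, e.g. {"A": [...], "B": [...], "C": [...]}.
    """
    votes = {}
    for ref in references:
        best = max(ref, key=lambda ph: pearson(consumer_v, ref[ph]))
        votes[best] = votes.get(best, 0.0) + abs(pearson(consumer_v, ref[best]))
    return max(votes, key=votes.get)
```

Because several references vote, a single poorly correlated reference does not dominate the decision, which is the intuition behind the robustness claims above.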
Owing to the use of multiple measurements, the PCI drastically outperforms the widely used na\"{i}ve model. Further, our approach is robust to network topology and measurement errors, as the PCI accuracy does not depend on a single reference measurement point.
We also propose phase connectivity identification metrics that not only quantitatively describe the estimation accuracy, but also qualitatively describe the confidence and error sensitivity of the PCI.
We utilize a real German DN with limited observability for phase identification. This network has 602 nodes and 313 single-phase consumers.
It is observed that the original voltage time series without salient feature extraction, combined with the absolute-weighted consensus algorithm, outperforms all other phase estimation algorithms.
The proposed algorithm accurately identifies the phase connections of, on average, over 308 of the 313 consumers for a 1\% measurement error at the consumer end and the reference measurements, over 1000 Monte Carlo simulations. Thus, the proposed algorithm is robust to uncertainty, with high precision.
{In future work, we will consider synchronization errors in measurement while limiting DN observability even further. Further assessment is needed for selecting the best-suited algorithm for phase identification with varying network layouts, load profiles, and PV penetration. Finally, we will extend this work for executing the algorithms on real measurement data and verify algorithm efficacy by intrusive phase measurements.}
\section*{Acknowledgement}
This work is supported by the H2020 EUniversal project, grant agreement ID: 864334 (\url{https://euniversal.eu/}).
We would like to thank Clara Gouveia and Gil Silva Sampaio at INESC TEC, Porto for their comments on problem formulation.
We would like to thank Marta Vanin (KU Leuven), Deepjyoti Deka (Los Alamos National Lab), Lucas Pereira (Técnico Lisboa) for their insightful comments on the paper.
Special thanks to Kseniia Sinitsyna at Mitnetz Strom for her help in data handling.
\pagebreak
\bibliographystyle{elsarticle-num}
\section{Introduction}
Infrared luminous galaxies (L$_{\rm IR}$ $\gtrsim$
10$^{11}$L$_{\odot}$) radiate most of their luminosity as dust
emission in the infrared \citep{sam96}. Thus, powerful energy sources,
e.g., starbursts and/or active galactic nuclei (AGNs), must be present
hidden behind dust. In the local universe, these infrared luminous
galaxies dominate the bright end of the luminosity function
\citep{soi87}. In the distant universe, they dominate the cosmic
infrared background emission and have been used to trace the
dust-obscured star-formation rate, dust content, and metallicity in
the early universe, based on the assumption that they are powered by
starbursts \citep{bar00}. Estimating the AGN contribution to their
infrared luminosities is therefore important, not only to obtain a full
understanding of the nature of these galaxies, but also to study the
connections between obscured AGNs and starbursts in the universe.
In a powerful AGN that is obscured along our sightline by dust in a
{\it torus} geometry, the hard radiation of AGN can photo-ionize clouds
along the torus axis, above the torus scale height, and can produce the
so-called narrow line regions (NLRs; Antonucci 1993; Robson 1996).
Since the NLRs show
optical spectra that differ from normal star-forming galaxies, such
obscured AGNs should be detectable through optical spectroscopy
(classified as Seyfert 2s; Veilleux \& Osterbrock 1987). However,
since the nuclear regions of infrared luminous galaxies are very dusty
\citep{sam96}, virtually all lines of sight can be opaque to the bulk
of the ionizing radiation of AGNs. In such {\it buried} AGNs
\footnote{
We use the term ``obscured AGN'' if the AGN is obscured along
our line of sight, and ``buried AGN'' if the AGN is obscured along
virtually all sightlines.
},
X-ray dissociation regions (XDRs), which are dominated by low-ionization
species \citep{mal96}, develop instead of the NLRs, so that it is
difficult to find AGN signatures using either conventional optical
spectroscopy or high-resolution infrared spectroscopic searches for
high-excitation forbidden emission lines that originate in the NLRs
\citep{gen98,vei99,mur01}. From the spectral shape of the cosmic X-ray
background emission, it is inferred that buried AGNs are abundant in
the universe \citep{fab02}; therefore, establishing a method with which
to detect such optically elusive \citep{mai03} buried AGNs is very
important.
To find such buried AGNs, it is necessary to conduct observations at
wavelengths with low dust extinction. Infrared $L$-band (2.8--4.1 $\mu$m)
spectroscopy from the ground is one of the most powerful tools for
this purpose. First, dust extinction in the $L$-band is lower than at
shorter wavelengths (A$_{\rm L}$ $\sim$ 0.06 $\times$ A$_{\rm V}$;
Rieke \& Lebofsky 1985) and is as low as at 5--13 $\mu$m
\citep{lut96}. Second, the
contribution from AGN-powered dust emission to an observed flux,
relative to stellar emission, increases substantially
compared to $\lambda <$ 2 $\mu$m. Therefore, the detection of buried
AGNs becomes more feasible than at $\lambda <$ 2 $\mu$m. Third,
emission from a normal starburst and AGN are clearly
distinguishable based on $L$-band spectra \citep{moo86,imd00}. A
strong polycyclic aromatic hydrocarbon (PAH) emission feature is
usually found at 3.3 $\mu$m in a normal starburst galaxy
\citep{moo86}. In a pure normal starburst, high equivalent-width 3.3
$\mu$m PAH emission (EW$_{\rm 3.3PAH}$ $\sim$ 100 nm; Moorwood 1986;
Imanishi \& Dudley 2000) should always be found, since the equivalent
width is, by definition, insensitive to dust extinction of the
starburst. In a pure AGN, PAH molecules are destroyed, rather than excited,
by X-ray radiation of the AGN \citep{voi92,sie04}, so that no 3.3
$\mu$m PAH emission is expected \citep{moo86}. Instead, a featureless
continuum from submicron-sized \citep{dra84,mat89} hot dust heated by
an AGN is found, making a PAH-free smooth 3--4 $\mu$m spectrum. In an
AGN/starburst composite galaxy, 3.3 $\mu$m PAH emission from the
starburst should be found, unless the starburst regions are directly
exposed to the X-ray radiation of the AGN; however, the PAH emission is
strongly diluted by the AGN-originating PAH-free continuum, so the
EW$_{\rm 3.3PAH}$ value should be reduced compared to a pure normal
starburst galaxy. Thus, the presence of an obscured AGN can be
established using the EW$_{\rm 3.3PAH}$ value.
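The EW$_{\rm 3.3PAH}$ diagnostic amounts to integrating the continuum-normalized flux excess over the feature. A schematic implementation follows; the 3.24--3.35 $\mu$m feature window matches the template definition used below (\S 4.1.1), while the function name and trapezoidal scheme are illustrative:

```python
def pah_equivalent_width(wl_um, flux, cont, feature=(3.24, 3.35)):
    """Equivalent width of the 3.3 um PAH feature in nm:
    EW = integral of (F - F_cont) / F_cont over the feature window.
    wl_um must be sorted; simple trapezoidal integration."""
    ew = 0.0
    for i in range(len(wl_um) - 1):
        if feature[0] <= wl_um[i] and wl_um[i + 1] <= feature[1]:
            f0 = (flux[i] - cont[i]) / cont[i]
            f1 = (flux[i + 1] - cont[i + 1]) / cont[i + 1]
            ew += 0.5 * (f0 + f1) * (wl_um[i + 1] - wl_um[i])
    return ew * 1e3  # um -> nm

# A pure normal starburst shows EW ~ 100 nm; a strong AGN-powered
# PAH-free continuum dilutes the feature and reduces the EW.
```

Because both the feature and the continuum are attenuated together, this quantity is insensitive to dust extinction of the starburst itself, which is why a reduced EW signals dilution by an AGN continuum rather than reddening.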
If a highly dust-obscured AGN is present, and yet the AGN emission
contributes significantly to the observed $L$-band flux, then
absorption features at 3.1 and 3.4 $\mu$m produced by ice-covered dust
and bare carbonaceous dust, respectively, should be found
\citep{imd00,imm03,ris03,ris06,idm06}. Although a normal starburst,
in which stellar energy sources and dust are spatially well mixed
\citep{pux91,mcl93,for01}, can also show these absorption features
\citep{stu00}, there is an upper limit for the absorption optical
depths \citep{imm03,idm06}. This is because in this mixed source/dust
geometry, the less-obscured, less-attenuated emission from the
foreground (which shows weak dust absorption features) makes a
stronger contribution to the observed flux than the emission from the
distant side (which shows strong dust absorption features but is
highly attenuated). Thus, strong dust absorption features whose
optical depths are larger than the upper limit determined by the mixed
source/dust geometry indicate that obscuring dust is present as a
foreground screen distribution \citep{imm03,idm06}. A compact energy
source that is more centrally concentrated than the surrounding dust,
such as a buried AGN, is a natural explanation \citep{imm03,idm06},
unless a large amount of dust in a nearly edge-on host galaxy is
present in front of the primary 3--4 $\mu$m continuum-emitting energy
source.
\citet{idm06} presented $L$-band spectra of a large number of nearby
ultraluminous infrared galaxies (L$_{\rm IR}$ $>$ 10$^{12}$L$_{\odot}$;
Sanders \& Mirabel 1996) in the well-studied {\it IRAS} 1 Jy sample
\citep{kim98}, searched for possible signatures of powerful buried AGNs
in these galaxies, and statistically investigated the properties of
buried AGNs.
However, there also exist several ultraluminous infrared galaxies
(L$_{\rm IR}$ $>$ 10$^{12}$L$_{\odot}$) not included in the {\it IRAS}
1 Jy sample, as well as galaxies with L$_{\rm IR}$ $<$
10$^{12}$L$_{\odot}$, that show possible signatures of AGNs obscured
behind dust in data at other wavelengths.
In this paper, we present ground-based $L$-band spectra of
such galaxies, with the aim of testing if this $L$-band spectroscopic
method is indeed effective at finding AGN signatures.
Throughout this paper, $H_{0}$ $=$ 75 km s$^{-1}$ Mpc$^{-1}$,
$\Omega_{\rm M}$ = 0.3, and $\Omega_{\rm \Lambda}$ = 0.7 are adopted.
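For reference, the luminosity distance in this flat cosmology can be evaluated by direct numerical integration of the comoving distance; a short sketch (the function name and step count are arbitrary choices):

```python
import math

def luminosity_distance_mpc(z, h0=75.0, om=0.3, ol=0.7, n=10000):
    """Luminosity distance (Mpc) for the flat cosmology adopted here
    (H0 = 75 km/s/Mpc, Omega_M = 0.3, Omega_Lambda = 0.7), computed by
    trapezoidal integration of dz / E(z)."""
    c = 299792.458  # speed of light, km/s

    def inv_e(zz):
        return 1.0 / math.sqrt(om * (1.0 + zz) ** 3 + ol)

    dz = z / n
    integral = sum(0.5 * (inv_e(i * dz) + inv_e((i + 1) * dz)) * dz
                   for i in range(n))
    return (1.0 + z) * (c / h0) * integral
```

As a consistency check, at $z = 0.185$ this gives $D_{\rm L} \approx 840$ Mpc, which with $L = 4\pi D_{\rm L}^{2} f$ reproduces, to within rounding, the [Mg VIII] luminosity quoted later for 3C 234 from its measured flux.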
\section{Targets}
Infrared luminous galaxies that have the following properties were
observed.
(1) They do not belong to ultraluminous infrared galaxies in the
{\it IRAS} 1 Jy sample and thus were not presented by \citet{idm06}.
(2) Observations at other wavelengths have indicated at least some
signatures of obscured AGNs.
(3) No high-quality $L$-band spectra were published in the
literature at the time of our observations.
Table 1 summarizes the observed targets.
The sample selection is heterogeneous, but our $L$-band spectra
can provide useful information about the nature of these interesting
galaxies.
\subsection{Radio excess relative to far-infrared emission}
In normal starburst galaxies, the radio and far-infrared luminosities
are correlated, with a relatively small scatter \citep{con91}.
If a galaxy shows a
significant radio excess above this correlation, then the presence of
a radio-intermediate (or radio-loud) AGN is suggested. \citet{cra96}
investigated the radio to far-infrared luminosity ratios of
ultraluminous infrared galaxies and found that IRAS 04154+1755 shows
(1) the largest radio excess and (2) a jet-like radio structure.
This object is classified optically as a Seyfert 2 \citep{cra96}, so
that a powerful radio-intermediate AGN must be present, hidden behind
torus-shaped dust. No significant 2--10 keV X-ray emission from the
putative AGN was detected \citep{imu99}, which suggests that the AGN
suffers from Compton-thick (N$_{\rm H}$ $>$ 10$^{24}$ cm$^{-2}$) X-ray
absorption.
\subsection{Strong ice absorption in mid-infrared 5--11.5 $\mu$m spectra}
Mid-infrared 5--11.5 $\mu$m spectra of many infrared luminous galaxies
were taken with {\it ISO}. From these spectra, \citet{spo02} selected
18 sources that show strong ice absorption features at 6.0 $\mu$m
(their Table 1, excluding NGC 253 and M82). To detect ice absorption,
a significant quantity of dust grains covered with an
ice mantle must be present in the foreground of the continuum-emitting
source(s). Such ice-covered dust grains are usually found deep inside
molecular gas, where dust is sufficiently shielded from ambient UV
radiation \citep{whi88}. A normal starburst galaxy with mixed
source/dust geometry can produce ice absorption features, if a
sufficient quantity of ice-covered dust is present. An obscured AGN is
an alternative candidate, particularly for sources with strong
absorption and weak PAH emission features (see $\S$1).
In the 5--11.5 $\mu$m spectra, PAH emission features are found at 6.2,
7.7, and 8.6 $\mu$m. In addition to the H$_{2}$O ice absorption at 6.0
$\mu$m, there are also absorption features due to hydrogenated
amorphous carbon (HAC) at 6.85 and 7.3 $\mu$m, and due to silicate
dust at 9.7 $\mu$m. Since these emission and absorption features
overlap significantly in wavelength, it is often difficult to
distinguish whether these galaxies are absorption-dominated (obscured
AGN candidates) or PAH-emission-dominated (normal starbursts)
\citep{spo02}. $L$-band spectroscopy may help to distinguish
whether the sources are dominated by PAH emission or absorption
features \citep{ima00a,ima00b,imd00}.
Among the 18 strong ice absorption sources, eight (IRAS
00188$-$0856, IRAS 05189$-$2524, UGC 5101, NGC 4418, Mrk 231, NGC
4945, Mrk 273, Arp 220) have available high quality $L$-band spectra
\citep{spo00,imd00,idm01,imm03,ima04,idm06}. Of these eight sources,
$L$-band spectra of five sources display signatures of obscured AGNs
(IRAS 00188$-$0856, IRAS 05189$-$2524, UGC 5101, Mrk 231, and Mrk 273)
\citep{imd00,idm01,imm03,idm06}, suggesting that this sample contains
many obscured AGNs. For the remaining ten unobserved sources, we
exclude four southern sources with declinations $<$ $-$30$^{\circ}$
from our target list, because they are difficult to observe from Mauna
Kea, Hawaii, our main observation site. Of the remaining six sources, we
observed NGC 828, IRAS 15250+3609, and IRAS
17208$-$0014, of which IRAS 15250+3609 shows the strongest absorption
and weakest PAH emission features, making this source the strongest
obscured AGN candidate. In addition, NGC 1377 (IRAS 03344$-$2103) is
also included, because its mid-infrared spectrum \citep{lau00} is
interpreted as dominated by strong 9.7 $\mu$m silicate dust absorption
($\tau_{9.7}$ $>$ 3.5), with weak PAH emission \citep{spo04}.
\subsection{Weak [CII] 158 $\mu$m emission relative to the far-infrared
continuum}
\citet{mal97} measured the [CII] 158 $\mu$m emission lines of 30
optically normal (non-Seyfert) star-forming galaxies, and found that
three sources, NGC 4418, IC 860, and CGCG 1510.8+0725, show a significant
[CII] deficiency relative to their far-infrared continuum
luminosities. The presence of buried AGNs is a possible explanation
for the [CII] deficiency \citep{mal97}, although alternative scenarios
have also been proposed \citep{mal97,mal01}. In fact, detailed infrared
spectroscopic studies of NGC 4418 have provided many pieces of
evidence for a powerful buried AGN \citep{dud97,spo01,ima04}. For this
reason, IC 860 and CGCG 1510.8+0725 are also included as targets, in
the expectation that they may be objects similar to NGC 4418.
\subsection{Narrow-line radio galaxies}
Narrow-line radio galaxies show Seyfert-2-type optical spectra.
According to the AGN unification paradigm, these galaxies are taken
to contain powerful radio-loud AGNs hidden behind torus-geometry dust
\citep{urr95}. It is of interest to investigate whether our $L$-band
spectroscopic method can indeed successfully detect clear signatures
of obscured AGNs in these galaxies that are already known to possess
obscured AGNs. Two nearby narrow-line radio galaxies, Cygnus A and 3C
234, are selected.
Cygnus A is a nearby well-studied narrow-line radio galaxy
\citep{ost75}. Strong, but moderately absorbed, power-law X-ray
emission was detected \citep{uen94,you02}, suggesting the presence
of an obscured AGN. Near-infrared (1--4 $\mu$m) observations also
suggested the presence of a powerful obscured AGN (A$_{\rm V}$ = 40--150
mag) \citep{djo94,war96,pac98,tad99}. Based on a mid-infrared 8--13 $\mu$m
spectrum, \citet{imu00} suggested that the energy source is more
centrally concentrated than the nuclear obscuring dust, as is expected
for an obscured AGN.
3C 234 shows a broad H$\alpha$ emission line in its optical spectrum
\citep{gra78}.
However, \citet{ant90} and \citet{you98} argued that the broad
component is interpreted as a scattered component, rather than a
directly transmitted component, and that optical classification of
this galaxy as a narrow-line radio galaxy is more appropriate. The
dust extinction toward the AGN is estimated to be A$_{\rm V}$ = 60 mag
\citep{you98}. \citet{sam99} detected intrinsically luminous,
moderately absorbed X-ray emission from the putative AGN.
\section{Observations and Data Analysis}
Infrared $L$-band spectroscopic observations were made using
infrared spectrographs, IRCS \citep{kob00} on the Subaru 8.2-m telescope
\citep{iye04} and SpeX \citep{ray03} on the IRTF 3-m telescope.
Table 2 summarizes a detailed observation log.
For Subaru IRCS observation runs, the sky was clear during the
observations and seeing sizes at $\lambda$ = 2.2 $\mu$m were measured
to be 0$\farcs$5--0$\farcs$8 in full-width at half maximum (FWHM).
A 0$\farcs$9-wide slit and the $L$-grism were used with a 58-mas pixel
scale. The achievable spectral resolution was $\sim$140 at 3.5 $\mu$m.
The position angle of the slit was set along the north-south direction.
A standard telescope nodding technique (ABBA pattern) with a throw of 7
arcsec along the slit was employed to subtract background emission.
Each exposure was 1.5--2.0 sec, and 30--40 coadds were made at each
position.
For the IRTF SpeX observations, we employed the 1.9--4.2 $\mu$m
cross-dispersed mode with a 1\farcs6 wide slit. This mode enables $L$-
(2.8--4.1 $\mu$m) and $K$-band (2--2.5 $\mu$m) spectra to be obtained
simultaneously, with a spectral resolution of R $\sim$ 500. The sky
conditions were clear throughout the observations, and seeing sizes at
$\lambda$ = 2.2 $\mu$m were in the range 0$\farcs$4--1$\farcs$0 FWHM.
The position angle of the slit was set along the north-south
direction. A standard telescope nodding technique with a throw of 7.5
arcsec was employed along the slit. The telescope tracking was
monitored with the infrared slit-viewer of SpeX. Each exposure was 15
sec, and 2 coadds were made at each position.
A-, F-, and G-type main sequence stars (Table~\ref{tbl-2}) were
observed as standard stars, with airmass differences of $<$0.1 to the
individual target objects, to correct for the transmission of the
Earth's atmosphere. The $K$- and $L$-band magnitudes of the standard
stars were estimated from their $V$-band (0.6 $\mu$m) magnitudes,
adopting the $V-K$ and $V-L$ colors appropriate to the stellar types
of individual standard stars, respectively \citep{tok00}.
Standard data analysis procedures were employed, using the Image Reduction
and Analysis Facility (IRAF)
\footnote{
IRAF is a general purpose software system distributed by the National
Optical Astronomy Observatories, which are operated by the Association
of Universities for Research in Astronomy, Inc. (AURA), under
cooperative agreement with the National Science Foundation.}.
Initially, frames taken with an A (or B) beam were subtracted from
frames taken subsequently with a B (or A) beam, and the resulting
subtracted frames were added and divided by a spectroscopic flat
frame. Bad pixels and pixels hit by cosmic rays were then replaced
with the interpolated values from surrounding pixels. Corrections for
cosmic ray hits were much larger for IRTF SpeX than Subaru IRCS.
Finally, the spectra of the target objects and standard stars were
extracted. Wavelength calibration was performed taking into account
the wavelength-dependent transmission of the Earth's atmosphere. The
spectra of the targets were divided by the observed spectra of
standard stars, multiplied by the spectra of blackbodies with
temperatures appropriate to the individual standard stars
(Table~\ref{tbl-2}), and then flux-calibrated. Appropriate binning of
spectral elements was performed to achieve an adequate signal-to-noise
ratio ($\gtrsim$10) in most of the elements, particularly at
$\lambda_{\rm obs} < 3.3$ $\mu$m in the observed frame, where the
Earth's atmospheric transmission is poorer than at longer $L$-band
wavelengths.
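The flux-calibration step above (target divided by the observed standard-star spectrum, multiplied by a blackbody at the standard's effective temperature) can be sketched as follows; units are left arbitrary and the names are illustrative:

```python
import math

def telluric_correct(target, standard, wl_um, t_eff):
    """Divide the target spectrum by the observed standard-star spectrum
    and multiply by a blackbody at the star's effective temperature,
    removing the Earth's atmospheric transmission in the process."""
    h, c, k = 6.626e-34, 2.998e8, 1.381e-23  # SI constants

    def planck(wl_m):
        # Planck function B_lambda (W m^-3 sr^-1)
        return ((2.0 * h * c**2 / wl_m**5) /
                (math.exp(h * c / (wl_m * k * t_eff)) - 1.0))

    return [t / s * planck(w * 1e-6)
            for t, s, w in zip(target, standard, wl_um)]
```

The atmospheric transmission cancels in the ratio because target and standard are observed at nearly the same airmass (differences $<$0.1), which is why the standards were chosen that way.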
For all sources, whole spectral datasets were divided into a few
sub-groups, and error bars at each spectral element were estimated
from the scatter of actual signals among the sub-groups.
\section{Results}
\subsection{$L$-band spectra}
\subsubsection{3.3 $\mu$m PAH emission}
Figure 1 shows flux-calibrated nuclear $L$-band spectra of the nine
observed sources. All sources, except NGC 1377, Cygnus A, and
3C 234, show clear emission features at $\lambda_{\rm obs}$ $\sim$ (1
+ $z$) $\times$ 3.3 $\mu$m, which we identify as the 3.3 $\mu$m PAH
emission. The detection of the 3.3 $\mu$m PAH emission, an indicator
of starbursts, means that these galaxies contain detectable starburst
activity. To estimate the 3.3 $\mu$m PAH flux, we adopt a template
spectral shape for Galactic star-forming regions and nearby starburst
galaxies (type-1 sources; Tokunaga et al. 1991). In this template, the
main 3.3 $\mu$m PAH emission profile extends between $\lambda_{\rm
rest}$ $=$ 3.24--3.35 $\mu$m in the rest frame. Data
points at wavelengths slightly shorter than $\lambda_{\rm rest}$ $=$
3.24 $\mu$m and slightly longer than $\lambda_{\rm rest}$ $=$ 3.35
$\mu$m, unaffected by obvious absorption features, are adopted as the
continuum levels, which are shown as solid lines for PAH-detected
sources in Figure 1.
For sources with clearly detectable PAH emission, this template profile
reproduces the observed 3.3 $\mu$m PAH emission features reasonably
well, with our spectral resolution and signal-to-noise ratios, except
NGC 828 which displays a slightly broader profile than the template
(Figure 1, dotted lines).
Table 3 summarizes the observed properties of the 3.3 $\mu$m PAH
emission features.
The uncertainties of the 3.3 $\mu$m PAH fluxes, estimated from
the fittings, are $<$20\% in all cases.
Possible systematic uncertainties coming from continuum determination
ambiguities are also unlikely to exceed 20\%, as long as reasonable
continuum levels are adopted.
Thus, total uncertainties of the PAH fluxes are taken as $<$30\%.
Cygnus A, 3C 234, and NGC 1377 may also show excesses at the expected
wavelength of 3.3 $\mu$m PAH emission, but their significance is
marginal; we therefore estimate upper limits for the PAH fluxes.
\subsubsection{3.1 $\mu$m H$_{2}$O ice absorption}
IRAS 04154+1755 ($z =$ 0.056) shows a concave continuum, which is
naturally explained by the strong, broad 3.1 $\mu$m H$_{2}$O ice
absorption feature \citep{spo00,imm03,idm06}. The detection of this
ice absorption feature means that the $L$-band continuum-emitting
energy source is obscured by ice-covered dust grains deep inside
molecular gas \citep{whi88}. Following \citet{idm06}, we adopt a
linear continuum to estimate the optical depth of this 3.1 $\mu$m
H$_{2}$O ice absorption ($\tau_{3.1}$). The continuum level varies
slightly depending on the adopted data points used for the continuum
determination. We adopt a relatively low continuum level to obtain
a conservative lower limit for the $\tau_{3.1}$ value in IRAS
04154+1755.
The resulting lower limit is $\tau_{3.1}$ $\sim$ 0.9, which corresponds to A$_{\rm
V}$ $\sim$ 15 mag if the Galactic $\tau_{3.1}$/A$_{\rm V}$ ratio
($\sim$0.06; Smith et al. 1993; Tanaka et al. 1990; Murakawa et al.
2000) is assumed. The ice absorption feature of IRAS 04154+1755 has a
strong absorption wing at $\lambda_{\rm obs}$ = 3.6--3.9 $\mu$m or
$\lambda_{\rm rest}$ = 3.4--3.7 $\mu$m, which resembles the absorption
profile of GL 2591 (a Galactic protostar surrounded by circumstellar
material), rather than Elias 16 (a Galactic late-type field star
behind molecular clouds) \citep{smi89}.
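The conversion from feature depth to extinction used here is simply $\tau = \ln(F_{\rm cont}/F_{\rm obs})$, followed by the Galactic $\tau_{3.1}$/A$_{\rm V}$ ratio quoted above; a small sketch (function names are ours):

```python
import math

def optical_depth(f_obs, f_cont):
    """Apparent optical depth of an absorption feature:
    tau = ln(F_continuum / F_observed)."""
    return math.log(f_cont / f_obs)

def av_from_tau31(tau31, tau_per_av=0.06):
    """Visual extinction implied by the 3.1 um H2O ice depth, using the
    Galactic ratio tau_3.1 / A_V ~ 0.06 quoted above (assumes a
    foreground-screen dust geometry)."""
    return tau31 / tau_per_av

# tau_3.1 ~ 0.9 (IRAS 04154+1755) implies A_V ~ 15 mag
```

The same arithmetic with $\tau_{3.1} \sim 0.4$ gives the correspondingly smaller extinction toward IRAS 17208$-$0014.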
The continuum of IRAS 17208$-$0014 ($z =$ 0.042) is also concave-shaped.
We obtain $\tau_{3.1}$ $\sim$ 0.4, which is similar to the recent
estimate by \citet{ris06}, using independent data.
Unlike IRAS 04154+1755, the ice absorption feature of IRAS 17208$-$0014
has a relatively weak absorption wing at $\lambda_{\rm obs}$ = 3.55--3.85
$\mu$m or $\lambda_{\rm rest}$ = 3.4--3.7 $\mu$m, as seen in Elias 16,
rather than GL 2591 \citep{smi89}.
\subsubsection{3.4 $\mu$m bare carbonaceous dust absorption}
NGC 1377 ($z =$ 0.005) shows an absorption feature that peaks at
$\lambda_{\rm obs}$ = 3.4--3.45 $\mu$m, or $\lambda_{\rm rest}$ $\sim$
3.4 $\mu$m. We interpret this to be the 3.4 $\mu$m dust absorption
feature produced by bare carbonaceous dust grains, which is found in
Galactic stars highly reddened by the {\it diffuse} interstellar
medium outside molecular gas \citep{pen94,ima96,raw03}. Its optical
depth $\tau_{3.4}$ is estimated to be 0.17, which corresponds to
A$_{\rm V}$ = 25--40 mag, if the dust extinction curve in NGC 1377 is
similar to the Galactic diffuse interstellar medium
($\tau_{3.4}$/A$_{\rm V}$ $\sim$ 0.004--0.007; Pendleton et al. 1994).
For Cygnus A ($z =$ 0.056) and 3C 234 ($z =$ 0.185), the presence of the
3.4 $\mu$m absorption feature in our $L$-band spectra is uncertain. We
estimate conservative upper limits of $\tau_{3.4}$ $<$ 0.2 for both
sources, which correspond to A$_{\rm V}$ $<$ 50 mag toward the $L$-band
continuum emitting regions, if the Galactic relation of
$\tau_{3.4}$/A$_{\rm V}$
$\sim$ 0.004--0.007 is assumed. The A$_{\rm V}$ value derived from our
$L$-band spectra is at the lower end of the range for Cygnus A
(A$_{\rm V}$ = 40--150 mag; $\S$2.4) and is smaller than the other
estimate for 3C 234 (A$_{\rm V}$ = 60 mag; $\S$2.4).
\subsubsection{[MgVIII]3.028$\mu$m emission}
3C 234 ($z =$ 0.185) shows an emission line at $\lambda_{\rm obs}$
$\sim$ 3.58 $\mu$m or $\lambda_{\rm rest}$ $\sim$ 3.02 $\mu$m, which
we identify as the [Mg VIII] 3.028$\mu$m line. We estimate its flux to
be f([MgVIII]) = 1.5 $\times$ 10$^{-14}$ ergs s$^{-1}$ cm$^{-2}$, and
its luminosity to be L([MgVIII]) = 1.2 $\times$ 10$^{42}$ ergs
s$^{-1}$.
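The line luminosity follows from the flux via $L = 4\pi d_L^2 f$. The luminosity distance used below ($\sim$820 Mpc for $z = 0.185$) is our assumption, chosen to be consistent with the quoted flux and luminosity, since the adopted cosmology is not stated in the text:

```python
import math

MPC_CM = 3.086e24  # centimeters per megaparsec

def line_luminosity(flux_cgs, d_l_mpc):
    """L = 4 pi d_L^2 f, with flux in erg/s/cm^2 and distance in Mpc."""
    d_cm = d_l_mpc * MPC_CM
    return 4.0 * math.pi * d_cm**2 * flux_cgs

# f([MgVIII]) = 1.5e-14 erg/s/cm^2; d_L ~ 820 Mpc is an assumed value,
# back-solved from the quoted L ~ 1.2e42 erg/s.
L = line_luminosity(1.5e-14, 820.0)
print(f"{L:.1e}")  # -> 1.2e+42 erg/s
```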
\subsection{$K$-band spectra}
For the two sources observed with IRTF SpeX, i.e., IC 860 and CGCG
1510.8+0725 (weak [CII] emitters), $K$-band spectra are obtained
simultaneously with the $L$-band spectra. Figure 2 shows these
$K$-band spectra.
The spectra of IC 860 ($z =$ 0.013) and CGCG 1510.8+0725 ($z =$ 0.013)
show gaps in the continuum at $\lambda_{\rm obs} \sim$ 2.35 $\mu$m. We
attribute these gaps to CO absorption features at $\lambda_{\rm rest}$ =
2.31--2.4 $\mu$m produced by stars \citep{iva00,ia04,imw04}. For the
other weak [CII]
emitter, NGC 4418 \citep{mal97}, the CO absorption feature is clearly
detected \citep{ima04}. To estimate the CO absorption strengths in IC
860 and CGCG 1510.8+0725, we adopt the spectroscopic CO index
(CO$_{\rm spec}$) defined by \citet{doy94} and follow the procedures
applied to NGC 4418 \citep{ima04}. For both IC 860 and CGCG
1510.8+0725, power-law continuum levels (F$_{\rm \lambda}$ = $\alpha
\times \lambda^{\beta}$) are determined using data points at
$\lambda_{\rm obs}$ = 2.08--2.32 $\mu$m ($\lambda_{\rm rest}$ =
2.05--2.29 $\mu$m), excluding obvious emission lines. The adopted
continuum levels are shown as dashed lines in Figure 2. Data at
$\lambda_{\rm obs}$ = 2.34--2.43 $\mu$m ($\lambda_{\rm rest}$ =
2.31--2.40 $\mu$m) are used to derive the CO$_{\rm spec}$ values. We
obtain CO$_{\rm spec}$ $\sim$ 0.1 for IC 860 and $\sim$0.2 for CGCG
1510.8+0725.
However, the continuum determination of IC 860 is less secure than that
of CGCG 1510.8+0725 because of the large scatter of continuum data points
in the former (Figure 2).
We try several continuum levels within a reasonable range, and
obtain the largest plausible value of CO$_{\rm spec}$ $\sim$ 0.2 for IC 860.
The typical values observed in star-forming or elliptical (=spheroidal)
galaxies are CO$_{\rm spec}$ $>$ 0.15 \citep{gol97a,gol97b,iva00}.
Dilution of the CO absorption feature by a featureless continuum
from AGN-heated hot dust or from stars younger than a few
million years \citep{lei99} can decrease the CO$_{\rm spec}$ values.
Based on CO$_{\rm spec}$ values, there is no explicit AGN evidence in
IC 860 and CGCG 1510.8+0725.
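The CO$_{\rm spec}$ measurement described above can be sketched as a power-law continuum fit plus a band-averaged depression. The magnitude-style definition $-2.5\log_{10}\langle F/F_{\rm cont}\rangle$ assumed below is our reading of the \citet{doy94} index, not their code, and the spectrum is purely synthetic:

```python
import numpy as np

def co_spec(wave, flux, z, fit_band=(2.05, 2.29), co_band=(2.31, 2.40)):
    """Sketch of a spectroscopic CO index: fit F_lambda = a * lambda**b
    to rest-frame continuum points, then measure the mean depression in
    the rest-frame CO band.  Assumed definition: -2.5 log10 <F/F_cont>."""
    wrest = np.asarray(wave) / (1.0 + z)   # observed -> rest wavelength
    flux = np.asarray(flux)
    # power-law continuum: a linear fit in log-log space
    m = (wrest >= fit_band[0]) & (wrest <= fit_band[1])
    b, loga = np.polyfit(np.log10(wrest[m]), np.log10(flux[m]), 1)
    cont = 10**loga * wrest**b
    c = (wrest >= co_band[0]) & (wrest <= co_band[1])
    return -2.5 * np.log10(np.mean(flux[c] / cont[c]))

# Synthetic check: a pure power law with a 17% depression beyond 2.31 um
w = np.linspace(2.05, 2.45, 200) * 1.013      # "observed" grid at z = 0.013
f = 3.0 * (w / 1.013)**-1.5
f[w / 1.013 >= 2.31] *= 0.83                  # mimic the CO absorption
print(round(co_spec(w, f, 0.013), 2))         # -> 0.2
```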
The $K$-band spectrum of CGCG 1510.8+0725 shows strong H$_{2}$
emission features, as with NGC 4418 \citep{ima04}. For NGC 4418,
strong H$_{2}$ emission lines and other observational properties are
naturally explained by a powerful buried AGN
\citep{dud97,spo01,ima04}.
However, the signal-to-noise ratios for CGCG 1510.8+0725 are too low to
investigate quantitatively whether the strong H$_{2}$ emission originates
in a buried AGN or in other mechanisms related to starbursts.
\section{Discussion}
\subsection{Detected modestly obscured starbursts}
3.3 $\mu$m PAH emission, a signature of starbursts, is detected in six
of nine observed infrared luminous galaxies. Since dust extinction
in the $L$-band is about 0.06 times as large as
that in the optical $V$-band ($\lambda$ = 0.6 $\mu$m; Rieke \& Lebofsky
1985), the flux of the 3.3 $\mu$m PAH emission is not significantly
attenuated ($<$1 mag) if dust extinction is less than 15 mag in
A$_{\rm V}$. Thus, the observed 3.3 $\mu$m PAH emission luminosities
are a good measure of the absolute luminosity of modestly obscured
(A$_{\rm V}$ $<$ 15 mag) starburst activity.
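The "$<$1 mag" attenuation statement is a direct consequence of the quoted extinction ratio; a sketch (function names are ours):

```python
# Attenuation of the 3.3 um PAH flux behind a foreground screen of A_V,
# assuming A_L ~ 0.06 * A_V (Rieke & Lebofsky 1985, as quoted in the text).
def l_band_extinction_mag(a_v, l_to_v=0.06):
    return l_to_v * a_v          # extinction at L in magnitudes

def flux_fraction(a_mag):
    return 10 ** (-0.4 * a_mag)  # surviving flux fraction

a_l = l_band_extinction_mag(15.0)
print(round(a_l, 2), round(flux_fraction(a_l), 2))  # -> 0.9 0.44
```

So even at A$_{\rm V}$ = 15 mag, the PAH flux is dimmed by only 0.9 mag (a factor of $\sim$2.3).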
Table 3 summarizes the observed 3.3 $\mu$m PAH to infrared luminosity
ratios (L$_{\rm 3.3PAH}$/L$_{\rm IR}$). The ratios are factors of 3 to
$>$10 lower than those reported for modestly obscured starbursts
($\sim$10$^{-3}$; Mouri et al. 1990; Imanishi 2002). If these
ratios are taken at face value, the detected modestly obscured
starbursts can account for only some fraction of the infrared
luminosities of these galaxies. The infrared luminosities must
therefore be dominated by AGNs and/or very heavily obscured
(A$_{\rm V}$ $>>$ 15 mag) starbursts.
\subsection{AGNs obscured by torus-shaped dust in optical Seyfert 2s}
IRAS 04154+1755, Cygnus A, and 3C 234 are classified optically as
Seyfert 2s (Table 1). Their optical Seyfert 2 classifications indicate
that (1) the AGNs are obscured along our line of sight by torus-shaped
dust, (2) a large amount of the AGNs' ionizing photons are escaping
along the torus axis, and (3) emission lines, originating in the
well-developed narrow line regions (NLRs), are strong and clearly
detectable in the optical spectra. For these optical Seyfert 2s, the
presence of obscured AGNs is much more certain than in other infrared
luminous galaxies with no optical Seyfert signatures (non-Seyferts).
Hence, it is useful to test whether our $L$-band spectroscopic method
can succeed in detecting AGN signatures in these optical Seyfert 2s.
For Cygnus A and 3C 234, the 3.3 $\mu$m PAH equivalent widths (Table
3) are more than an order of magnitude lower than those of typical
starburst galaxies (EW$_{\rm 3.3PAH}$ $\sim$100 nm; Moorwood 1986;
Imanishi \& Dudley 2000). The observed $L$-band fluxes must be
dominated by PAH-free emission, most plausibly a featureless continuum
from AGN-heated hot dust. Thus, our $L$-band spectroscopy has
successfully revealed AGN signatures in these two optical Seyfert 2s.
For 3C 234, the [Mg VIII]3.028$\mu$m emission line is detected. The
ionization potential of the [Mg VIII] line is as high as 224.9 eV, and
this emission line has been detected in several nearby Seyfert
galaxies \citep{lut00,stu02,mar03}. The detection of such a high
excitation forbidden emission line requires a very hard radiation
field, further supporting the presence of an AGN.
The [Mg VIII] luminosity of 3C 234 is larger than that of the
prototypical nearby Seyfert 2 galaxy NGC 1068 by a factor of 40
\citep{mar03}. The observed 2--10 keV X-ray luminosity from the AGN in
NGC 1068 is
estimated to be $\sim$1 $\times$ 10$^{41}$ ergs s$^{-1}$, but this is
believed to be a reflected component \citep{koy89}. Assuming a
reflected fraction of 0.01 \citep{pie94}, the intrinsic 2--10 keV
luminosity in NGC 1068 is Lx(2--10 keV) $\sim$ 10$^{43}$ ergs
s$^{-1}$. If the [Mg VIII] to 2--10 keV luminosity ratio in 3C 234 is
similar to that of NGC 1068, then the predicted 2--10 keV luminosity
for 3C 234 is Lx(2--10 keV) $\sim$ 4 $\times$ 10$^{44}$ ergs s$^{-1}$,
which is roughly comparable to the measured value of $\sim$1 $\times$
10$^{44}$ ergs s$^{-1}$ \citep{sam99}.
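The scaling above involves only two multiplications; a sketch, assuming (as in the text) that the [Mg VIII] to 2--10 keV luminosity ratio is the same in both galaxies:

```python
# Predict Lx(2-10 keV) of 3C 234 by scaling from NGC 1068.
mgviii_ratio = 40.0          # L([MgVIII]) of 3C 234 relative to NGC 1068
Lx_1068_observed = 1e41      # erg/s; reflected component only (Koyama+ 89)
reflected_fraction = 0.01    # assumed reflection efficiency (Pier+ 94)

Lx_1068_intrinsic = Lx_1068_observed / reflected_fraction   # ~1e43 erg/s
Lx_3c234_predicted = mgviii_ratio * Lx_1068_intrinsic       # ~4e44 erg/s
print(f"{Lx_3c234_predicted:.0e}")  # -> 4e+44
```

This is within a factor of a few of the measured $\sim$1 $\times$ 10$^{44}$ ergs s$^{-1}$.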
For IRAS 04154+1755, the presence of an AGN is less certain than for
Cygnus A and 3C 234, if we consider the EW$_{\rm 3.3PAH}$ value
($\sim$60 nm for IRAS 04154+1755; Table 3). However, the relatively
large EW$_{\rm 3.3PAH}$ value in IRAS 04154+1755 is probably the
consequence of the strong ($\tau_{3.1}$ $\sim$ 0.9) 3.1 $\mu$m H$_{2}$O
ice absorption. This absorption feature strongly attenuates the
AGN-originated flux and reduces the dilution of the 3.3 $\mu$m PAH
emission by an AGN continuum, making the EW$_{\rm 3.3PAH}$ value
relatively large, even in the presence of a powerful obscured AGN. We
therefore search for signatures of an obscured AGN based on a
$\tau_{3.1}$ value.
As mentioned in $\S$1, and in more detail in \citet{imm03} and
\citet{idm06}, $\tau_{3.1}$ values can be used to distinguish between
a normal starburst (mixed source/dust geometry, small $\tau_{3.1}$) and
an obscured AGN (centrally-concentrated energy source geometry, large
$\tau_{3.1}$).
Since the fraction of dust covered with an ice mantle in the majority of
infrared luminous galaxies is likely to be smaller than in the
well-studied, less infrared-luminous starburst galaxy M82, because of the
harsher radiation environment in the former \citep{soi00,soi01}, a
$\tau_{3.1}$ larger than 0.3 suggests a centrally-concentrated energy
source geometry \citep{imm03,idm06}.
The large observed $\tau_{3.1}$ of IRAS 04154+1755 ($\sim$0.9) strongly
indicates a centrally-concentrated energy source, such as an obscured
AGN. We note
that IRAS 04154+1755 is located close to the Galactic Taurus molecular
clouds, but no significant molecular gas is detected in the exact
direction of
IRAS 04154+1755 \citep{ung87}. Furthermore, the wavelength range in which
H$_{2}$O ice absorption is strong in Figure 1 is similar
to that expected from the profile of GL 2591 redshifted with $z =$
0.056, rather than $z =$ 0. Thus, the strong 3.1 $\mu$m
H$_{2}$O absorption in IRAS 04154+1755 is believed to be nuclear
in origin.
In summary, our $L$-band spectroscopic method successfully provides
AGN evidence for the three optical Seyfert 2s, which are known to
possess luminous AGNs behind torus-shaped dust. \citet{idm06} also
detected AGN signatures in the $L$-band spectra of all observed
ultraluminous infrared galaxies classified optically as Seyferts.
We can therefore conclude
that our $L$-band spectroscopic method is at least as effective as
conventional optical spectroscopy for the purpose of finding AGNs
obscured behind dust. However, the question of how much more effective
our $L$-band method is depends on the detection rate of buried AGNs in
optical non-Seyfert galaxies.
\subsection{Buried AGNs in NGC 828 and NGC 1377}
Among the six observed optical non-Seyfert galaxies (NGC 828, IRAS
15250+3609, IRAS 17208$-$0014, NGC 1377, IC 860, and CGCG 1510.8+0725),
the EW$_{\rm 3.3PAH}$ values of NGC 828 and NGC 1377 are $\lesssim$ 20
nm. Starburst galaxies have an average value of EW$_{\rm 3.3PAH}$ $\sim$
100 nm, with some scatter,
but never lower than $\sim$40 nm \citep{moo86}. The small EW$_{\rm
3.3PAH}$ values of NGC 828 and NGC 1377 suggest that powerful
AGNs are present, which heat up dust grains and produce strong PAH-free
featureless continua that dilute the 3.3 $\mu$m PAH
emission from starbursts. Our $L$-band spectroscopy thus newly reveals
buried AGN signatures in two optical non-Seyfert galaxies.
The $L$-band continuum of NGC 1377 is unusually red compared to normal
starburst emission. Such red $L$-band continua have previously been
observed in the ultraluminous infrared galaxies Superantennae \citep{ris03}
and IRAS 08572+3915 \citep{idm06}, both of which show very weak 3.3
$\mu$m PAH emission and are therefore classified as
obscured-AGN-dominated. Very highly reddened hot dust emission heated
by an obscured AGN is the most natural explanation for the extremely
red $L$-band continuum of NGC 1377 \citep{ris06}. The observed
$\tau_{3.4}$ value of 0.17 corresponds to A$_{\rm V}$ = 25--40 mag,
or A$_{\rm 3.5 \mu m}$ = 1.4--2.5 mag if the Galactic dust extinction
curve is adopted. Thus, the dereddened luminosity heated by an AGN is
$\nu$L$_{\nu}$(3.8$\mu$m) = 0.4--1 $\times$ 10$^{43}$ ergs s$^{-1}$,
which can account for a significant fraction of the infrared
luminosity of NGC 1377 (L$_{\rm IR}$ $\sim$ 3 $\times$ 10$^{43}$ ergs
s$^{-1}$).
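The dereddening step above corrects the observed luminosity by $10^{0.4 A}$. The observed $\nu$L$_{\nu}$(3.8$\mu$m) $\sim$ 10$^{42}$ ergs s$^{-1}$ used below is our assumption, back-solved to reproduce the quoted dereddened range:

```python
# Deredden a luminosity for a given extinction A (in magnitudes):
def deredden(lum_obs, a_mag):
    return lum_obs * 10 ** (0.4 * a_mag)

# NGC 1377: A_(3.5um) = 1.4-2.5 mag (from A_V = 25-40 mag, as in the text);
# obs ~ 1e42 erg/s is an assumed observed value, not stated in the text.
obs = 1.0e42
print(f"{deredden(obs, 1.4):.1e} {deredden(obs, 2.5):.1e}")
# -> 3.6e+42 1.0e+43, i.e. the quoted 0.4-1 x 10^43 erg/s
```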
\subsection{Weak signatures of buried AGNs in IRAS 17208$-$0014 and CGCG
1510.8+0725}
For the two optical non-Seyfert galaxies IRAS 17208$-$0014 and CGCG
1510.8+0725, some indications of buried AGNs might be present.
For IRAS 17208$-$0014, the large $\tau_{3.1}$ value of $\sim$0.4
suggests a centrally-concentrated energy source, such as a buried AGN,
if the fraction of ice-covered dust is comparable to or smaller than
that in the well-studied starburst galaxy M82.
However, \citet{soi00} found that the mid-infrared 12.5 $\mu$m
dust emission from IRAS 17208$-$0014 is dominated by spatially
extended components, with no prominent core, indicating that dust
emission at this wavelength is dominated by extended starburst
activity.
The estimated surface brightness of IRAS 17208$-$0014 is exceptionally
low compared to other ultraluminous infrared galaxies
(L$_{\rm IR}$ $>$ 10$^{12}$L$_{\odot}$), and can even be smaller than
that of M82 \citep{soi00}.
Hence, the fraction of ice-covered dust can be larger than that in M82,
because of a weaker radiation density in IRAS 17208$-$0014.
Unlike the majority of other infrared luminous galaxies, the observed
$\tau_{3.1}$ value of $\sim$0.4 in IRAS 17208$-$0014 could be reproduced
by a mixed source/dust geometry ($\S$5.2).
Millimeter interferometric observations, based on the HCN (J = 1--0) to
HCO$^{+}$ (J = 1--0) ratio in brightness temperature, have suggested
that a buried AGN might be present, but failed to reveal clear AGN
evidence \citep{ink06}.
For this source, although absorption features at 5--11.5 $\mu$m are
found, 6.2 $\mu$m PAH emission is also strong \citep{spo02}.
The presence of a luminous buried AGN in IRAS 17208$-$0014 has not been
explicitly indicated in any available data and so is currently unclear.
For the weak [CII] emitter CGCG 1510.8+0725, strong H$_{2}$
emission might originate in a buried AGN ($\S$4.2), but the buried AGN
evidence is not strong.
Although the presence of a strong buried AGN is one possibility to
explain the small [CII] to far-infrared continuum luminosity ratio
\citep{mal97,mal01}, scenarios that do not invoke a powerful buried AGN,
such as large radiation density or optical depth effects in starbursts
\citep{mal97,mal01}, could be alternative possibilities for the weak
[CII] emission.
It is unclear whether CGCG 1510.8+0725 does in fact possess
a luminous buried AGN.
\subsection{No explicit buried AGN signatures in IC 860 and IRAS 15250+3609}
For the remaining two optical non-Seyfert galaxies IC 860 and IRAS 15250+3609,
no explicit buried AGN signatures are evident in our spectra.
For the weak [CII] emitter IC 860, it is unclear whether
there is indeed a powerful buried AGN, as in the case of CGCG
1510.8+0725 ($\S$5.4).
IRAS 15250+3609 shows strong absorption features, with very weak PAH
emission in its mid-infrared 5--11.5 $\mu$m spectrum. These
mid-infrared properties are very similar to those of NGC 1377, and yet the
$L$-band spectra are very different, in that only NGC 1377 shows
strong buried AGN signatures. This discrepancy probably derives from the
different overall spectral energy distributions. Figure 3 shows
infrared 1--11.5 $\mu$m spectral energy distributions of NGC 1377 and
IRAS 15250+3609. The emission at 5--11.5 $\mu$m is dominated by dust
emission powered by obscured energy sources (starbursts and/or AGNs),
whereas that at 1--2.5 $\mu$m originates in foreground stellar
emission. The signatures of obscured AGNs can only be detected in the
$L$-band spectra if AGN-heated hot dust emission contributes
significantly to the observed $L$-band flux.
In Figure 3 ({\it Left}), the $L$-band emission of NGC 1377 is
smoothly connected to the longer wavelength dust emission spectrum
observed with {\it ISO}, which is judged to be powered by an obscured
AGN, because of strong absorption and lack of PAH emission features
\citep{lau00}. Detection of strong AGN signatures in the $L$-band
spectrum of NGC 1377 is possible primarily because the AGN-heated dust
emission is a dominant component at $L$. In contrast, the $L$-band
emission of IRAS 15250+3609 (Figure 3, {\it Right}) corresponds to a
transition between stellar and dust emission. The shorter part of the
$L$-band spectrum ($<$3.5 $\mu$m) is likely to be dominated by
foreground stellar emission, which is why we missed clear AGN signatures
in our $L$-band spectrum of IRAS 15250+3609, despite its 5--11.5
$\mu$m spectrum strongly suggesting the presence of a powerful obscured
AGN.
\section{Summary}
We reported infrared $L$-band spectroscopic results for nine infrared
luminous galaxies that may possess signatures of AGNs hidden
behind dust based on observational data at other wavelengths. A radio to
far-infrared excess galaxy (IRAS 04154+1755), four sources that
exhibit absorption features in the 5--11.5 $\mu$m spectra (NGC 828,
IRAS 15250+3609, IRAS 17208$-$0014, and NGC 1377), two weak [CII]
158$\mu$m emitters (IC 860 and CGCG 1510.8+0725), and two narrow-line
radio galaxies (Cygnus A and 3C 234) were observed. Using the 3.3
$\mu$m PAH emission feature, the [Mg VIII] 3.028 $\mu$m emission line,
and absorption features at 3.1 $\mu$m from ice-covered dust and at 3.4
$\mu$m from bare carbonaceous dust, we searched for signatures of
obscured AGNs in our $L$-band spectra. For the two weak [CII]
emitters, $K$-band spectra were also simultaneously taken. We found
the following main results:
\begin{enumerate}
\item For two optical Seyfert 2 galaxies (Cygnus A and 3C 234), our
results strongly suggested that the observed $L$-band spectra are
dominated by a PAH-free featureless continuum, originating in
AGN-heated hot dust, because of very small 3.3 $\mu$m PAH
equivalent widths. A strong detection of the [Mg VIII] 3.028
$\mu$m line in the $L$-band spectrum of 3C 234 also supported
the presence of a luminous AGN behind torus-shaped dust in this
source.
\item For another optical Seyfert 2 galaxy, IRAS 04154+1755, an energy
source that is more centrally concentrated than dust was
suggested from the large optical depth of the 3.1 $\mu$m dust
absorption feature ($\tau_{3.1}$ $\sim$ 0.9). An obscured AGN is
a natural explanation for this energy source, because in
normal starburst galaxies, stellar energy sources and dust are
spatially well mixed.
\item Our $L$-band spectroscopic method succeeded in detecting clear
AGN signatures in all three optical Seyfert 2s
known to possess powerful AGNs behind torus-shaped dust.
This means that as a tool for finding obscured AGNs, our $L$-band
spectroscopic method is at least as powerful as conventionally used
optical spectroscopy.
\item Among the remaining six optical non-Seyferts (NGC 828, IRAS
15250+3609, IRAS 17208$-$0014, NGC 1377, IC 860, and CGCG
1510.8+0725), strong AGN signatures were found in NGC 828 and NGC 1377,
based on their very small 3.3 $\mu$m PAH equivalent widths.
NGC 1377 showed a very red $L$-band continuum, which also supports
the obscured-AGN-dominated nature of this galaxy.
The detection of AGNs in the two optical non-Seyferts clearly
indicated that our $L$-band spectroscopic method is more effective
than optical spectroscopy in finding obscured AGNs.
\item IRAS 17208$-$0014 and NGC 1377 showed
3.1 $\mu$m and 3.4 $\mu$m dust absorption features
with optical depths of $\tau_{3.1}$ $\sim$ 0.4 and $\tau_{3.4}$
$\sim$ 0.17, respectively.
The large $\tau_{3.1}$ value of IRAS 17208$-$0014 indicated a
centrally-concentrated energy source, such as a buried AGN, if
the fraction of ice-covered dust is similar to or smaller than
that in the well-studied starburst galaxy M82.
\item Strong H$_{2}$ emission found in the $K$-band spectrum of the weak
[CII] emitter CGCG 1510.8+0725 might come from phenomena related to
a buried AGN, like another weak [CII] emitter NGC 4418.
\item For IRAS 17208$-$0014 and CGCG 1510.8+0725, possible buried AGN
signatures were found, but they are weak.
For IC 860 and IRAS 15250+3609, we failed to provide any explicit
buried AGN evidence in our spectra.
The non-detection of buried AGN signatures in IRAS 15250+3609 was
explained by its infrared 1--11.5 $\mu$m
spectral energy distribution, where foreground stellar emission is
dominant at $L$.
However, for the remaining three optical non-Seyfert galaxies IRAS
17208$-$0014, IC 860, and CGCG 1510.8+0725, the evidence for
buried AGNs at other wavelengths is also very weak.
The absence of buried AGN signatures in these sources may be due
to their starburst-dominated nature.
\end{enumerate}
We are grateful to M. Ishii, R. Potter, E. Pickett, A. Hatakeyama, M.
Lemmen, B. Golisch, and D. Griep for their support during our Subaru and
IRTF observations.
We thank the anonymous referee for his/her useful comments.
Text data of ISO spectra of IRAS 15250+3609 and
NGC 1377 were kindly provided by H. W. W. Spoon. M.I. is supported by
Grants-in-Aid for Scientific Research (16740117). Some of the data
analysis was carried out using a computer system operated by the Astronomical
Data Analysis Center (ADAC) and the Subaru Telescope of the National
Astronomical Observatory, Japan. This research has made use of the
SIMBAD database, operated by the Centre de Donnees astronomiques de
Strasbourg (CDS), Strasbourg, France; of the NASA/IPAC
Extragalactic Database (NED), which is operated by the Jet Propulsion
Laboratory, California Institute of Technology, under contract with
the National Aeronautics and Space Administration (NASA); and of data
products from the Two Micron All Sky Survey, which is a joint project
between the University of Massachusetts and the Infrared Processing and
Analysis Center/California Institute of Technology, funded by NASA and
the National Science Foundation.
\section{Introduction}
Thirty X-ray soft polars with negative hardness ratios
HR1\footnote{Hardness ratio
$\mathrm{HR1_\mathrm{ROSAT}}=(H_\mathrm{R}-S_\mathrm{R})/(H_\mathrm{R}+S_\mathrm{R})$
of count rates in the $0.1-0.5\,\mathrm{keV}$ ($S_\mathrm{R}$) and
$0.5-2.5\,\mathrm{keV}$ ($H_\mathrm{R}$) energy bands, respectively.}
have been identified in the ROSAT all-sky survey (RASS) source catalog
\citep{voges:99} by \citet{thomas:98}, \citet{beuermann:99}, and
\citet{schwope:02}. They are potential members of the subgroup of
\object{AM~Her}-type systems that show a `soft X-ray excess', i.\,e.,\ a
significant dominance of soft X-ray ($E \lesssim 0.5\,\mathrm{keV}$) over
hard X-ray ($E \gtrsim 0.5\,\mathrm{keV}$) luminosity, reviewed for example
by \citet{ramsay:94}, \citet{beuermann:94}, and
\citet{beuermann:95}. \citet{ramsay:04ebalance} demonstrated the dependence
of the soft-to-hard luminosity ratios on calibration, geometrical effects,
and spectral models. Using accretion-column models \citep{cropper:99} for
recalibrated ROSAT and XMM-Newton spectra, they came to the conclusion that
fewer systems show a distinct soft X-ray excess than originally estimated
from the ROSAT detections.
\object{RS~Caeli} was among the softest X-ray sources at the epoch of the
RASS observations, with a hardness ratio of $\mathrm{HR1}=-1.00(1)$. It was
detected as an extreme ultraviolet source in the ROSAT/WFC and in the EUVE
surveys \citep{pounds:93,bowyer:94}. \citet{burwitz:96} published the first
pointed X-ray and additional optical observations of RS~Cae. They derived an
optical apparent magnitude of $m_V\sim19^\mathrm{m}$, an X-ray flux on the
order of $10^{-11}\,\mathrm{erg\,cm^{-2}\,s^{-1}}$ in the ROSAT energy band,
a distance to the binary of at least 440\,pc, and a magnetic field strength
$B=36(1)\,\mathrm{MG}$ for the white-dwarf primary. The pronounced
phase-dependent cyclotron harmonics in the optical spectra could be modeled
for two possible accretion geometries: a binary inclination of $i\sim
60\degr$ and a colatitude of the accretion region of $\beta\sim 25\degr$, or
$i\sim 25\degr$ and $\beta\sim60\degr$. In the first case, stream absorption
dips should be seen in the X-ray light curves. They found two candidate
orbital periods of $0\fd0708(14)$ or $0\fd0652(15)$, giving preference to
the longer one. The optical spectra showed no clear signature of the M-star
secondary.
By exploiting new XMM-Newton data of polars that had not been observed in
X-rays since ROSAT, we are studying their system properties and the energy
balance, in particular during high states (\object{AI~Tri},
\citealt{traulsen:10}, RS~Cae, this work) and intermediate high states of
accretion (AI~Tri, \object{QS~Tel}, \citealt{traulsen:10,traulsen:11}). In
this paper, we present our third pointed XMM-Newton observation of a soft
X-ray selected polar. Section~\ref{sec:data} introduces the X-ray and
optical data of RS~Cae on which our analysis is based. In
Sect.~\ref{sec:photo}, we describe the multiwavelength light curves and
confirm the orbital binary period of $0\fd071$. Section~\ref{sec:spectra} is
dedicated to the X-ray spectra, the spectral models, and the derived
parameters. The whole spectral energy distribution, an approach to a
consistent modeling of spectra and light curves, and the implications on the
system geometry are presented in Sect.~\ref{sec:disc_sedmodeling}. We close
the paper with a discussion of the component fluxes and the energy budget of
RS~Cae in Sect.~\ref{sec:disc_energy}.
\begin{table}
\caption{Barycentric timings and $1\sigma$ errors of the dip centers in
the XMM-Newton light curves of RS~Cae.}
\label{tab:minima}\centering
\begin{tabular}{c@{\qquad}c@{\qquad}r}
\hline\hline
$\mathrm{BJD}_\mathrm{min}$(TT) & $\Delta T_\mathrm{min}$
& $O-C~$ \\ \hline
$2\,454\,903.05846$ & $0.00044$ & $-0.0123$ \\
$2\,454\,903.12995$ & $0.00031$ & $-0.0042$ \\
$2\,454\,903.20209$ & $0.00037$ & $ 0.0130$ \\
$2\,454\,903.27310$ & $0.00031$ & $ 0.0143$ \\
$2\,454\,903.34195$ & $0.00043$ & $-0.0148$ \\
$2\,454\,903.41275$ & $0.00056$ & $-0.0165$ \\
\hline
\end{tabular}
\end{table}
\section{Observations and data reduction}
\label{sec:data}
RS~Cae was scheduled for a 50\,ks observation with XMM-Newton on March
12/13, 2009 (observation ID 0554740801). Owing to background radiation, the
EPIC/pn and MOS exposures had to be stopped after 35\,ks and 39\,ks,
respectively. All three EPIC detectors were operated in full frame mode with
the thin filter. Using standard \textsc{sas}\,v9.0 tasks, we extracted light
curves and spectra from circular source regions on the EPIC chips with a
radius of 27.5\,arcsec for EPIC/pn and of 22.5\,arcsec for EPIC/MOS. We used
large circular background regions with radii between 75 and 100\,arcsec on
the same chip as the source for the background correction. Spectra were
taken from the first 28\,ks of the exposure, excluding the high-background
intervals. During our pointing, the source reached net peak count
rates\footnote{maximum rate and Poissonian error derived from 1\,ks
light-curve segments} of $1.64\pm0.04\,\mathrm{cts\,s^{-1}}$ for EPIC/pn,
$0.16\pm0.01\,\mathrm{cts\,s^{-1}}$ for MOS1, and
$0.24\pm0.02\,\mathrm{cts\,s^{-1}}$ for MOS2, which is in the range that can
be expected from the ROSAT All-Sky Survey results \citep{voges:99} for a
high-state observation. The net source count rate measured with RGS was
consistent with zero.
With the optical monitor OM, we performed fast-mode photometry consecutively
in the 3000$-$3900\,{\AA} band using the $U$ filter (mean net count
rate\footnote{corrected count rates and errors given by the OM
source-detection tasks} $1.06\pm0.02\,\mathrm{cts\,s^{-1}}$), in the
2450$-$3200\,{\AA} band using the UVW1
($0.53\pm0.02\,\mathrm{cts\,s^{-1}}$), and in the 2050$-$2450\,{\AA} band
using the UVM2 filter ($0.13\pm0.01\,\mathrm{cts\,s^{-1}}$). The exposure
times of 8.2\,ks per light-curve segment corresponded to about 1.3 orbital
cycles. We extracted fast-mode light curves using the \textsc{sas} v10.0
version of the source detection algorithm and the task \textsc{omfchain} and
took the background information from the imaging data. The UVM2 data were
mostly affected by the increased background. For the rest of the visit, the
optical monitor was operated with the grism1 filter
(2000$-$3500\,{\AA}). This part of the observation fell completely in the
time interval of high background radiation. Three of the fourteen scheduled
800\,s exposures were taken, but are unusable due to a low signal-to-noise
ratio.
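As a quick consistency check of the quoted segment length, one 8.2\,ks exposure indeed covers about 1.3 orbital cycles for the $0\fd0709$ period derived in Sect.~\ref{sec:ephem}:

```python
# Orbital cycles covered by one 8.2 ks OM light-curve segment,
# using the 0.0709 d orbital period derived later in the paper.
P_orb_s = 0.0709 * 86400.0              # orbital period in seconds
print(round(8200.0 / P_orb_s, 1))       # -> 1.3
```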
To trigger the XMM-Newton observation, we repeatedly obtained optical
photometry of RS~Cae between December 2008 and March 2009 at the CTIO 1.3\,m
telescope/ANDICAM, operated by the SMARTS consortium. Two of the $B$-band
light curves cover more than one orbital period: one taken on September 22,
2008 during a low state of accretion and one taken simultaneously to the
last part of the XMM-Newton observations. Each of them comprises 32 data
points with integration times of 180\,s and a time resolution of
226$-$227\,s. Photometry was done relative to stars in the field as
described in \citet{gerke:06}.
We converted the ground-based data from UTC to terrestrial time TT,
consistently with the satellite data, and corrected all times used in this
paper to the barycenter of the solar system using the JPL ephemeris
\citep{standish:98}.
\section{Multiband light curves and photometric period}
\label{sec:photo}
Figure~\ref{fig:multilc} gives a synopsis of the X-ray, ultraviolet, and
optical light curves of RS~Cae. The corresponding X-ray light-curve profiles
are shown in Fig.~\ref{fig:lcprofile}. Our phase convention refers to the
centers of the pronounced X-ray dips and is derived in
Sect.~\ref{sec:ephem}.
The XMM-Newton X-ray light curves (Figs.~\ref{fig:multilc}a, b, and
\ref{fig:lcprofile}) show clear periodicity that could not be detected in
the 1992 ROSAT/PSPC light-curve segments presented by \citet{burwitz:96}.
With EPIC/pn, about 21\,400 source counts were collected in the soft X-ray
band at energies below 0.5\,keV and about 760 in the hard X-ray band at
energies above 0.5\,keV. Correspondingly, the hardness ratios
$\mathrm{HR}_\mathrm{XMM}=(H_\mathrm{X}-S_\mathrm{X})/(H_\mathrm{X}+S_\mathrm{X})$
between hard ($H_\mathrm{X}, 0.5-10.0\,\mathrm{keV}$) and soft
($S_\mathrm{X}, 0.1-0.5\,\mathrm{keV}$) counts are remarkably low
(Fig.~\ref{fig:multilc}c). During a sharp recurring light-curve dip, the
EPIC count rates almost drop to zero for 0.07 in phase, and the hardness
ratios increase. This dip is visible in all X-ray bands and coincides with
ultraviolet and optical light-curve minima. The optical and ultraviolet
light curves (Figs.~\ref{fig:multilc}d and e) are double-humped with
semi-amplitudes between 0.5 and 0.7\,mag, showing similar shapes during high
and low states of accretion (Fig.~\ref{fig:optlcs}). A minimum with a depth
of about $\Delta\mathrm{mag}\sim 0.8$ occurs at the time of maximum X-ray
flux. They resemble the double-humped white light curves of
\citet{burwitz:96}, taken in October 1992 and September 1993.
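For reference, the time-averaged hardness ratio implied by the quoted EPIC/pn source counts can be checked directly from its definition (the phase-resolved values in Fig.~\ref{fig:multilc}c are computed per time bin and therefore scatter around this number):

```python
# Hardness ratio HR = (H - S)/(H + S) from the EPIC/pn counts quoted
# above: ~21400 soft (< 0.5 keV) and ~760 hard (> 0.5 keV) source counts.
def hardness(soft, hard):
    return (hard - soft) / (hard + soft)

print(round(hardness(21400, 760), 2))  # -> -0.93
```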
\begin{figure}
\centering
\includegraphics[width=8.8cm]{21383fig1}
\caption{September 2009 light curves of RS~Cae in time bins of 100\,s,
folded on the orbital period with the center of the X-ray dips defining
phase zero (Eq.~\ref{eq:ephem}). The grey area marks the interval of
high background activity during the XMM-Newton pointing. \textbf{a--b)}
Energy-resolved EPIC/pn light curves. \textbf{c)} Corresponding hardness
ratios
$\mathrm{HR}_\mathrm{XMM}=(H_\mathrm{X}-S_\mathrm{X})/(H_\mathrm{X}+S_\mathrm{X})$.
\textbf{d)} Optical and ultraviolet light curves measured subsequently
with three filters at the optical monitor. \textbf{e)} Optical $B$-band
light curve with a time resolution of about 227\,s, plotted twice and
shifted by $-3$ and $-2$ orbital cycles, respectively.}
\label{fig:multilc}%
\end{figure}
\begin{figure}
\centering
\includegraphics[width=7.7cm]{21383fig2}
\caption{Phase-averaged X-ray light curves, excluding the high-background
interval, in time bins of 60\,s and with the same phase convention and
energy ranges as in Fig.~\ref{fig:multilc}. \textbf{a--b)}
Energy-resolved EPIC/pn light curves. \textbf{c)} Corresponding hardness
ratios.}
\label{fig:lcprofile}%
\end{figure}
\subsection{X-ray ephemeris}
\label{sec:ephem}
\citet{burwitz:96} determined a spectroscopic ephemeris for the system and
referred to the times of minimum spectral flux as phase zero. Using the
recurrent X-ray dips to constrain the binary period, we can confirm their
preferred period of $P_\mathrm{orb} = 102\,\mathrm{min} = 0.071\,\mathrm{d}$
independently by Lomb-Scargle analysis, epoch folding, and a least-squares
method that minimizes $\Sigma(O-C)^2$ (observed minus calculated dip
times). Observed dip times and their $1\sigma$ errors are derived by
Gaussian fits to $\Delta\varphi\sim 0.2$ segments in the 10\,s-binned
EPIC/pn light curve (Table~\ref{tab:minima}). The photometric ephemeris is
calculated by an error-weighted fit to the observed dip times using
1\,ms-spaced trial periods within a $\pm3\sigma$ search interval around the
longer period of \citet{burwitz:96}.
\begin{equation}
\label{eq:ephem}
\mathrm{BJD}_\mathrm{dip}(\mathrm{TT}) = 2\,454\,903\fd2012(4) +
0\fd0709(3) \times E
\end{equation}
defines phase zero throughout the paper. The uncertainties of the last
digit, given in parentheses, are $1\sigma$ errors of the $O-C$ method.
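The error-weighted period search over trial periods can be sketched as follows; minimizing the weighted $\Sigma(O-C)^2$ is equivalent to maximizing $1/\Sigma(O-C)^2$ (toy data; the actual search uses a much finer, 1\,ms-spaced grid):

```python
import numpy as np

def oc_period_search(t_dip, sigma, trial_periods, t0):
    """Error-weighted O-C search: for each trial period, assign each dip
    the nearest integer cycle number E and sum the weighted squared
    residuals O-C = t_dip - (t0 + E*P); return the best-fitting period."""
    t_dip = np.asarray(t_dip, dtype=float)
    weights = 1.0 / np.asarray(sigma, dtype=float) ** 2
    merit = np.empty(len(trial_periods))
    for i, p in enumerate(trial_periods):
        cycles = np.round((t_dip - t0) / p)
        oc = t_dip - (t0 + cycles * p)
        merit[i] = np.sum(weights * oc ** 2)
    return trial_periods[np.argmin(merit)]

# toy data: strictly periodic dips at P = 0.0709 d
t0, p_true = 2454903.2012, 0.0709
dips = t0 + p_true * np.array([0, 1, 2, 3, 10, 25])
trials = np.arange(0.0700, 0.0720, 1e-5)  # coarse toy grid
best = oc_period_search(dips, 1e-4 * np.ones(dips.size), trials, t0)
```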
\subsection{The nature of the soft X-ray dip}
\label{sec:Xraydip}
Light-curve dips may occur due to an eclipse by the secondary star, a
self-eclipse of the accretion region, or stream absorption. A self-eclipse
is unlikely for a geometry with $i+\beta<90\degr$, which is expected for
RS~Cae. The dips in the optical and UV light curves around phase zero
resemble a (partial) eclipse feature. In Sect.~\ref{sec:disc_lcsim},
however, we show that the modulation of these light curves is mostly due to
cyclotron emission and that their minima can be explained without assuming
an eclipse by the secondary. We conclude that the sharp dip in the X-ray
light curves is caused by absorption in the accretion stream when it crosses
our line of sight towards the accretion region, for three reasons:
\textit{(i)} The hardness
ratios increase rapidly at dip times, where photons at lower X-ray energies
are more strongly absorbed than those at higher energies. \textit{(ii)} The
depth of the minima varies slightly from cycle to cycle. \textit{(iii)}
Accretion-stream and absorption models for RS~Cae following
\citet{silva:12pre} reproduce the X-ray dip. The X-ray dip appears to be
broader toward higher energies (Fig.~\ref{fig:lcdips}). The energy
dependence may indicate that more extended, cooler parts of the accretion
region are still visible, while the harder emission region is absorbed by
the dense core of the stream.
\section{X-ray spectroscopy}
\label{sec:spectra}
From the first 28\,ks of our XMM-Newton exposure, we extract EPIC/pn, MOS1,
and MOS2 spectra as described in Sect.~\ref{sec:data}, excluding the phases
around the X-ray light curve dips $\varphi_\mathrm{X-ray}\sim 0.9-1.1$, and
fit them simultaneously to derive general system parameters such as
temperatures in the accretion region, plasma abundances, and the amount of
intrinsic absorption. The EPIC spectra show the typical components of
ROSAT-discovered polars: \textit{(i)} the black-body like, X-ray soft
emission, which can be attributed to the accretion-heated surface of the
white dwarf (Sect.~\ref{sec:softspec}), and \textit{(ii)} the
bremsstrahlung-like, X-ray hard component, which can be attributed to the
hot accretion column above the white-dwarf surface
(Sect.~\ref{sec:hardspec}). Since a very strong soft X-ray component was
seen in the ROSAT/PSPC spectra and almost $97\,\%$ of the EPIC/pn source
counts are measured at energies below 0.5\,keV, we investigate in
particular the soft and hard X-ray fluxes that we derive from the different
spectral model approaches and discuss in Sect.~\ref{sec:disc_energy}.
Our models in \textsc{xspec} v12.6 \citep{arnaud:96,dorman:03} consist of
two additive spectral components and up to two absorption terms:
\textit{(i)} the black-body-like component to describe the soft part of the
spectrum, dominating at energies up to about 0.5\,keV; and \textit{(ii)} the
plasma component to describe the hard part of the spectrum, dominating from
energies around $0.5-0.7$\,keV onward. Plasma abundances are given with
respect to the solar abundances of \citet{asplund:09}. In addition,
absorption by material on our line of sight is expected to affect the
emitted spectra: \textit{(i)} absorption by the interstellar medium. We fit
it with a \textsc{tbnew}\footnote{Most recent version of \textsc{tbvarabs}
in
\textsc{xspec}. See\\ \href{http://pulsar.sternwarte.uni-erlangen.de/wilms/research/tbabs/}{http://pulsar.sternwarte.uni-erlangen.de/wilms/research/tbabs/}.}
component and employ the abundances of \citet{wilms:00} and cross-sections
of \citet{verner:96a} and \citet{verner:96b} for it; \textit{(ii)}
absorption by diffuse gas around the X-ray emitting accretion regions, whose
amount can vary over the orbital cycle. We fit it with the partially
covering absorption model \textsc{pcfabs}. Reflection from the white-dwarf
surface may also contribute to the flux at higher energies. We consider it
optionally in the fits in Sect.~\ref{sec:hardspec}.
\begin{figure}
\includegraphics[width=8.3cm]{21383fig3}
\caption{SMARTS $B$-band light curves of RS~Cae obtained at 2008/09/22
during a low state and at 2009/03/12 during a high state of accretion,
each folded on a period of $0\fd07089$ and plotted twice.}
\label{fig:optlcs}%
\end{figure}
\begin{figure}
\centering
\includegraphics[width=8.8cm]{21383fig4}
\caption{The dip in phase-averaged, energy-resolved X-ray light
curves. Left to right: Light curves extracted from soft X-ray energy
intervals of constant width, compared to the harder
$0.5-10.0\,\mathrm{keV}$ band. The data are binned into a time
resolution of 150\,s and normalized to their nondip median count
rates. Right panel: Corresponding ratios of X-ray soft to X-ray hard
light curves, binned into a time resolution of 300\,s.}
\label{fig:lcdips}%
\end{figure}
\subsection{The black-body-like component}
\label{sec:softspec}
The soft X-ray component is fitted by an absorbed single-temperature black
body at a temperature of $35.7^{+0.6}_{-0.7}\,\mathrm{eV}$. The
\textsc{tbnew} absorption term of
$N_\textsc{H,tbnew}=2.1^{+1.0}_{-0.7}\times 10^{19}\,\mathrm{cm}^{-2}$ stays
below the upper limit of interstellar hydrogen absorption on our line of
sight, $N_\textsc{H}=1.2-1.6\times 10^{20}\,\mathrm{cm}^{-2}$
\citep{kalberla:05,dickey:90}. Using a \textsc{mekal} component for the
harder part of the spectrum (cf.\ Sect.~\ref{sec:hardspec}), we achieve
$\chi^2_\mathrm{red}=1.23$ at 199 degrees of freedom over the
whole XMM-Newton energy range. Residuals remain around the energies of the
emission lines of helium-like C\,V, N\,VI, and O\,VII
(Fig.~\ref{fig:spectra}).
Single-temperature models approximate a temporally and spatially averaged
temperature of a rather complex accretion area, which is expected to be
extended, comprising a wider spread of temperatures, and not necessarily
circular \citep[cf.][]{milgrom:75,kuijpers:82,ferrario:90}. Aiming at a
better description of the temperature gradient in the accretion region on
the white dwarf, we tested multitemperature black-body models. A fit with a
second black body shows that the resolution of the data allows for the
application of multicomponent models. It results in
$\chi^2_\mathrm{red}=1.22$ (197 d.\,o.\,f., \textsc{mekal} plasma
component) and an \textit{F-test} probability of $75\,\%$ that the fit is
improved.
In the fits with multiple black bodies, we employed models that have been
successfully used for other polars: black-body components whose effective
emitting surface areas obey an exponential distribution over temperatures
\citep[AM~Her,][]{beuermann:12} or whose individual temperatures obey a
Gaussian distribution over emitting radii
\citep[AI~Tri,][]{traulsen:10}. The model with a Gaussian temperature
distribution reproduces the low-energy continuum better, without changing
$\chi^2_\mathrm{red}=1.22$ over the full
$0.1-10.0\,\mathrm{keV}$ range.
To test the quality of the fits independently of the $\chi^2$ fit
statistics, we performed a \textit{runs test} for randomness for each model
and determined the probability that the residuals of the fit are randomly
distributed around zero. It increases from $P_\mathrm{random}=35\,\%$ for
the single-temperature black body to $45\,\%$ for the two black-body
components and to $76\,\%$ for the Gaussian temperature distribution, mainly
triggered by a low number of residuals changing sign and indicating a
somewhat higher preference for the multitemperature
approach.\footnote{Probabilities of the two-tailed \textit{runs test} have
been calculated as twice the single-tail values of a normalized Normal
distribution. If the data are described well by the model, we expect a
random distribution, while systematic deviations manifest themselves as
longer sequences of positive or of negative residuals and a lower
probability value.} The Gaussian-distribution model yields temperatures up to
$kT_{\textsc{bbody,}\mathrm{max}}=39.1^{+0.4}_{-0.9}\,\mathrm{eV}$ with
highest flux at 34.5\,eV and an interstellar absorption term of
$N_\textsc{H,tbnew}=3.3^{+1.1}_{-0.8}\times 10^{19}\,\mathrm{cm}^{-2}$.
Owing to the wider temperature range covered, the bolometric model flux
$1.2^{+0.2}_{-0.1}\times 10^{-11}\,\mathrm{erg\,cm}^{-2}\,\mathrm{s}^{-1}$
is about $50\,\%$ higher than for the single temperature.
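The two-tailed \textit{runs test} probabilities quoted here follow the prescription of the footnote; a sketch under the normal approximation of the Wald-Wolfowitz statistic:

```python
import math

def runs_test_pvalue(residuals):
    """Two-tailed runs test for randomness of fit residuals.

    Counts runs of equal sign, compares to the number expected under
    the null hypothesis, and returns twice the single-tail value of
    the normal approximation (zero residuals are discarded).
    """
    signs = [r > 0 for r in residuals if r != 0]
    n_pos = sum(signs)
    n_neg = len(signs) - n_pos
    n = n_pos + n_neg
    # runs = maximal sequences of equal sign
    runs = 1 + sum(1 for a, b in zip(signs, signs[1:]) if a != b)
    mu = 2.0 * n_pos * n_neg / n + 1.0
    var = (2.0 * n_pos * n_neg * (2.0 * n_pos * n_neg - n)
           / (n ** 2 * (n - 1.0)))
    z = (runs - mu) / math.sqrt(var)
    # two-tailed p: twice the upper tail of the standard normal
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

# long one-sided stretches of residuals -> low probability of randomness
p_bad = runs_test_pvalue([1] * 10 + [-1] * 10)
```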
\begin{table*}
\caption{Best-fit parameters for the non-dip EPIC spectra of RS~Cae,
employing single-temperature components
\textsc{tbnew(bbody+\,}\textit{plasma}\textsc{)}.}
\label{tab:xspecfits}
\centering
\begin{tabular}{l*{3}{c@{\quad}}r@{}l@{\quad}c@{\quad}r@{.}lcrc}
\hline\hline\\[-2ex]
Plasma component & $\chi^2_\mathrm{red}$ & $N_\textsc{H,tbnew}$ &
$kT_\textsc{bbody}$ & \multicolumn{2}{c}{$N_\textsc{H,pcfabs}$} &
cover. & \multicolumn{2}{c}{$kT_{\textsc{mekal}}$} & abund. &
$F_\mathrm{bol}(\textsc{bbody})$ & $F_\mathrm{bol}(\textsc{mekal})$ \\
& & [$10^{19}\,\mathrm{cm}^{-2}$] & [$\mathrm{eV}$] &
\multicolumn{2}{@{}c@{\quad}}{[$10^{23}\,\mathrm{cm}^{-2}$]} & [$\%$] &
\multicolumn{2}{c}{[$\mathrm{keV}$]} & (solar) &
\multicolumn{2}{c}{[$10^{-12}\,\mathrm{erg\,cm}^{-2}\,\mathrm{s}^{-1}$]}
\\
\hline\\[-1.5ex]
\textsc{mekal}
& 1.30 & $2.1^{+0.8}_{-0.7}$ & $35.6^{+0.6}_{-0.7}$ & & & &
$12$&$6^{+10.6}_{-2.7}$ & $7.8^{+5.4}_{-2.7}$ &
$7.7^{+0.8}_{-0.4}$ & $0.32^{+0.09}_{-0.05}$
\\[.8ex]
\textsc{pcfabs(mekal)}
& 1.23 & $2.1^{+1.0}_{-0.7}$ & $35.7^{+0.6}_{-0.7}$ &
\quad$3$&$.9^{+6.0}_{-2.0}$ & $72^{+8}_{-12}$ & $7$&$4^{+3.6}_{-2.6}$ &
$1.0^{+1.6}_{-0.7}$ & $7.9^{+1.0}_{-0.8}$ & $0.73^{+0.05}_{-0.05}$
\\[.8ex]
\textsc{pexmon+pcfabs(mekal)}
& 1.23 & $2.2^{+0.9}_{-0.7}$ & $35.7^{+0.7}_{-0.7}$ & $2$&$.9^{+11.9}_{-1.9}$
& $62^{+26}_{-20}$ & $6$&$8^{+3.3}_{-2.1}$ & $:= 1.0$ &
$7.9^{+0.5}_{-0.5}$ & $0.52^{+0.04}_{-0.04}$
\\[.3ex] \hline
\end{tabular}
\tablefoot{%
\textsc{pexmon} component: inclination $i:=65^\circ$, photon index
$1.1$, scaling factor $-0.7$. Unabsorbed bolometric fluxes have been
determined via \textsc{cflux} within \textsc{xspec}. Errors are given
within a 90\,\% confidence range.%
}
\end{table*}
\begin{figure}
\centering
\includegraphics[width=8.8cm]{21383fig5}
\caption{Nondip EPIC spectra of RS~Cae and the \textsc{xspec} black-body
plus plasma fits. \textbf{Small panel:} Corresponding unabsorbed model
fluxes.}
\label{fig:spectra}%
\end{figure}
\subsection{The plasma component}
\label{sec:hardspec}
The hard X-ray component is fitted by a \textsc{mekal} plasma model
\citep[cf.][]{mewe:85,liedahl:95}, comprising continuum and line
emission. Table~\ref{tab:xspecfits} summarizes the parameters of our three
main \textsc{mekal} fits, which we now describe. The pure \textsc{mekal}
model is not sufficient to reproduce the observed iron emission around
6.7\,keV. Underestimating the continuum flux, it results in an
unrealistically high element abundance of several times the solar values. We
include a partially covering absorber \textsc{pcfabs} as the simplest
approach to the complex absorption spectrum expected for the accretion
emission \citep[cf.][]{done:98,cropper:98}. The best fit yields a mean
plasma temperature of $kT_\textsc{mekal}=7.4^{+3.6}_{-2.6}\,\mathrm{keV}$ at
solar element abundances and intrinsic absorption of
$N_\textsc{H,pcfabs}=3.9^{+6.0}_{-2.0}\times 10^{23}\,\mathrm{cm}^{-2}$,
covering $72^{+8}_{-12}\,\%$ of the emission region. The bolometric model
flux of the \textsc{mekal} component increases by a factor of about three
when adding the absorption term, as is evident from the plot of the unabsorbed
model components in Fig.~\ref{fig:spectra}.
In addition, we test the spectra for a neutral Compton reflection
component. Reflection features were detected, for example, by
\citet{beardmore:95} for AM~Her, by \citet{done:98} for \object{BY~Cam}, or
by \citet{bernardini:12} for hard X-ray selected polars, and theoretically
investigated by \citet{teeseling:96,matt:04,mcnamara:08}. In the EPIC
spectra of RS~Cae, there is no direct evidence of a Fe K$\alpha$ fluorescent
line at 6.4\,keV, since the components of the line complex are not resolved. We
find an upper limit on the order of $2\times
10^{-5}\,\mathrm{photons\,cm}^{-2}\,\mathrm{s}^{-1}$ in a Gaussian emission
line component at 6.4\,keV. To test for a reflection continuum, we employed
the model by \citet{nandra:07}, developed for Compton reflection by neutral
gas in AGN with a power law as incident spectrum, which is based on a model
by \citet{magdziarz:95} and implemented as \textsc{pexmon} in
\textsc{xspec}. A higher \textit{runs-test} probability of
$P_\textrm{random}=56\,\%$, compared to $38\,\%$ for
\textsc{bbody+pcfabs(mekal)}, indicates that a reflection component might be
present.
As described for the accretion region on the white dwarf, a physically
realistic model for the post-shock accretion column needs to include its
temperature, density, and velocity structure
\citep[cf.][]{cropper:99,fischer:01}. We employ multitemperature
accretion-column models that are based on the models of \citet{fischer:01}
and described by \citet{traulsen:10} for AI~Tri. They fit the spectra of
RS~Cae well, without improving the $\chi^2$ statistical values of the fit
significantly. We refer to them in the discussion of the energy balance in
Sect.~\ref{sec:disc_energy}. Models with a magnetic field strength of
$B=36\,\mathrm{MG}$ \citep{burwitz:96} and different specific mass flow
rates $\dot{m}$ result in values of $\chi^2_\mathrm{red}\simeq 1.2$ similar
to those of the single-temperature models, but in different
probabilities $P_\mathrm{random}$ in the \textit{runs test} for
randomness. Best fits are reached for
$\dot{m}=5-10\,\mathrm{g}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$ in combination
with the single-temperature black body at $P_\mathrm{random}=68\,\%$, and
for $\dot{m}=0.1\,\mathrm{g}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$ in
combination with the multitemperature black bodies at a high
$P_\mathrm{random}=89\,\%$. This multi-black body and \textsc{mekal}
best-fit model results in essentially the same black-body temperatures as
described in Sect.~\ref{sec:softspec} and a wide range of plasma
temperatures between 2.6 and 62\,keV with a flux-weighted mean of
$kT_\mathrm{plasma}=13.3^{+8.3}_{-6.1}\,\mathrm{keV}$ at a total unabsorbed
flux of $F_\mathrm{bol,column}=1.3^{+8.3}_{-0.6}\times
10^{-12}\,\mathrm{erg\,cm}^{-2}\,\mathrm{s}^{-1}$ and higher intrinsic
absorption of $N_\textsc{H,pcfabs}=5.2^{+13.3}_{-2.9}\times
10^{23}\,\mathrm{cm}^{-2}$.
\begin{figure*}
\includegraphics[width=18.2cm]{21383fig6}
\caption{Spectral energy distribution of RS~Cae during high and low states
of accretion from 1992 to 2010: archival data of various missions from
the infrared to the X-ray bands and our 2009 observations. The optical
and UV spectroscopic and photometric data are given as orbital minimum
and maximum, and 2MASS $H$- and $K$-band fluxes as upper limits. The
X-ray spectra are shown with the models (solid lines) and their
components (dashed and dotted). The shaded areas mark the confidence
ranges of the ROSAT temperatures. The gray lines represent stellar and
cyclotron model spectra (light gray), and their sum (dark gray, details
are given in the text).}
\label{fig:sed}%
\end{figure*}
\section{Towards a combined SED model}
\label{sec:disc_sedmodeling}
\subsection{The spectral energy distribution}
\label{sec:disc_sed}
Aiming for a physically realistic description of the whole system, we
inspected the spectral energy distribution (SED) of RS~Cae and studied the
contributions of the individual system components over the whole energy
range.
Figure~\ref{fig:sed} shows the SED in a synopsis of high- and low state
observations between 1992 and 2010, including our SMARTS and XMM-Newton
data. The ROSAT/WFC count rates of \citet{pye:95} have been converted into
fluxes using the conversion factors according to \citet{hodgkin:94}, and the
EUVE Survey data according to \citet{bowyer:96}. From the first XMM-Newton
observation in 2002, obtained during a low state of the system, the
Optical-Monitor measurements are included in the plot, while RS~Cae was too
faint to be detected by the EPIC and RGS instruments \citep[observation ID
0109464301, ][]{ramsay:04lowstates}. Since it has low infrared and optical
fluxes even during high states of accretion, the WISE data are low
signal-to-noise data taken from the reject catalog, and the 2MASS data are
``B'' quality data (detection valid to $80\,\%-90\,\%$). The survey data
(WISE, 2MASS, EUVE, WFC) are snapshots of RS~Cae at unknown orbital phase,
while the ESO, SMARTS, and XMM-Newton/OM fluxes are given as orbital minimum
and maximum during high states and as orbital means during low states. The
X-ray spectra are shown unfolded with their respective best-fit models: the
XMM-Newton/EPIC data with the absorbed black-body plus \textsc{mekal} fit as
described in Sect.~\ref{sec:spectra}, and the archival PSPC data with
black-body plus bremsstrahlung fits. With the bremsstrahlung temperature
being fixed to $kT_\textsc{brems}:=5\,\mathrm{keV}$, they yield
$N_\textsc{H,tbnew}=1.5^{+1.1}_{-0.8}\times 10^{20}\,\mathrm{cm}^{-2}$,
$kT_\textsc{bbody}=17.9^{+9.6}_{-8.8}\,\mathrm{eV}$ for the 1992 January
data and $N_\textsc{H,tbnew}=1.3^{+0.6}_{-0.5}\times
10^{20}\,\mathrm{cm}^{-2}$,
$kT_\textsc{bbody}=19.6^{+6.3}_{-4.6}\,\mathrm{eV}$ for the 1992 September
data. These values are mostly independent of the bremsstrahlung temperature,
varying only at a sub-percent level within a $1-20\,\mathrm{keV}$ interval
of $kT_\textsc{brems}$, but they are poorly constrained.
We compare the observed data points with models for the different system
components and theoretical expectations.
\subsection{Spectral models and parameters}
\label{sec:disc_models}
In addition to our X-ray spectral fits, we apply three models for the cooler
emission in the infrared to the ultraviolet range:
\begin{description}
\item[\textit{(i)}] emission from the secondary M-dwarf atmosphere,
\item[\textit{(ii)}] cyclotron emission from the cooling post-shock
accretion flow, and
\item[\textit{(iii)}] emission from the unheated white-dwarf atmosphere.
\end{description}
The spectral contribution of the preshock
accretion stream is not considered in the plot. We include
\begin{description}
\item[\textit{(i)}] a PHOENIX model \citep{hauschildt:99}, as
employed by \citet{heller:11} in their analyses of white-dwarf
M-star binaries. It depends on temperature, surface gravity, and
chemical composition of the M-dwarf;
\item[\textit{(ii)}] a cyclotron model spectrum by
\citet{fischer:01}. It depends on the magnetic
field strength, the mass of the white dwarf, and the local
mass-flow densities in the accretion column; and
\item[\textit{(iii)}] a non-LTE model atmosphere of a hydrogen-helium
white dwarf, using routines of the T\"ubingen NLTE Model-Atmosphere
Package \citep{werner:86,werner:99}. It depends on temperature, surface
gravity (or: mass and age), and chemical composition of the white dwarf.
\end{description}
In the following, we describe the parameter set we use for the three
models.
Parameters that are constant in time are taken from the literature. For the
system geometry, we assume a binary inclination $i\sim 60\degr$ and a
colatitude $\beta\sim 25\degr$ of the accretion region on the white dwarf,
following the stream-eclipse scenario of \citet{burwitz:96}. For the
secondary star (\textit{i}), we estimate the parameters via the empirical
relations by \citet{knigge:06,knigge:07err} for
$P_\mathrm{orb}=1.7\,\mathrm{hrs}$: the effective temperature to
$T_{\mathrm{eff},2}=3\,000\,\mathrm{K}$, the surface gravity to $\log
g_2=5.0\,[\mathrm{cm\,s}^{-2}]$, and the radius to $R_2=0.166\,R_\odot$. For
the white dwarf (\textit{ii} and \textit{iii}), we use the magnetic field
strength $B=36\,\mathrm{MG}$ of \citet{burwitz:96} and typical values of
white dwarfs in polars as reviewed, for example, by
\citet{sion:99,kawka:07,townsley:09}:
$T_\mathrm{eff,WD}=15\,000\,\mathrm{K}$, $\log
g_\mathrm{WD}=8.0\,[\mathrm{cm\,s}^{-2}]$, and $M_\mathrm{WD}=0.6\,M_\odot$,
corresponding to a radius of about $R_\mathrm{WD}\sim 0.012\,R_\odot$
\citep{koester:86}. In all models, we assume solar element abundances.
The distance to the system cannot be derived directly from the optical
spectra, because they lack the features of the secondary star. The models
(\textit{i} to \textit{iii}) are fully consistent with the ultraviolet,
optical, and infrared measurements shown in Fig.~\ref{fig:sed} if we scale
them to a distance of 750\,pc. At this value, the M-star model coincides
with the ground-based $JHK$ measurements, which serve as an upper limit for
the stellar contribution and, thus, as lower limit for the distance
estimate. The low $J$-to-$H$ magnitude ratio supports the interpretation
that the 2008 data are M-star-dominated and represent a low state of
accretion, while the steeper 2MASS measurements include significant
cyclotron emission during a high state. The value of 750\,pc agrees with
the lower limit of 440\,pc given by \citet{burwitz:96} and with the
determination of $880^{+300}_{-220}\,\mathrm{pc}$ by \citet{pretorius:13}.
The estimate mainly depends on the stellar radii of the white dwarf and the
M-star. A distance less than
750\,pc would require smaller radii for both stars in order not to exceed
the data; a longer distance would require larger stellar radii to match the
observed data. These radii would conflict with the empirical values of
\citet{knigge:06}.
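The distance argument rests on the geometric dilution $F_\mathrm{obs}=F_\mathrm{surf}\,(R/d)^2$: at fixed observed flux, a smaller distance requires a smaller stellar radius and vice versa. A minimal numerical illustration (cgs constants; values are for illustration only):

```python
def observed_flux(surface_flux, radius_rsun, distance_pc):
    """Geometric dilution: F_obs = F_surface * (R / d)^2."""
    r_cm = radius_rsun * 6.957e10      # solar radius in cm
    d_cm = distance_pc * 3.0857e18     # parsec in cm
    return surface_flux * (r_cm / d_cm) ** 2

# doubling the distance dims the source by a factor of four, so
# matching the observed fluxes again would require a larger radius
f_750 = observed_flux(1.0, 0.166, 750.0)
f_1500 = observed_flux(1.0, 0.166, 1500.0)
```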
Time-variable parameters are mass-flow rates, accretion temperatures, and
emitting areas. They have to be fitted per observational epoch. X-ray
temperatures, emitting surface areas, and column densities are derived from
the X-ray spectral fits (Sects.~\ref{sec:spectra} and
\ref{sec:disc_sed}). The mass-flow density in the accretion column is a free
parameter of the models of \citet{fischer:01}. We estimate it by scaling the
models to approximately match the shape of the ESO spectra observed in 1993:
two local mass-flow densities in the accretion column,
$\dot{m}_1=0.01\,\mathrm{g\,cm}^{-2}\,\mathrm{s}^{-1}$ with a column base
area $A_1=10^{16}\,\mathrm{cm}^2$ and
$\dot{m}_2=0.1\,\mathrm{g\,cm}^{-2}\,\mathrm{s}^{-1}$ with $A_2=1.3\times
10^{15}\,\mathrm{cm}^2$. The bremsstrahlung fluxes of the \citet{fischer:01}
models are also consistent with the hard X-ray fluxes in the
XMM-Newton/EPIC observation and their upper limits in the ROSAT/PSPC
observations, when using a partially covering absorber on the same order as
in the spectral fits in Sect.~\ref{sec:spectra}.
\subsection{Application to the multiband light curves}
\label{sec:disc_lcsim}
From the combined spectral models, we derived model light curves in optical,
ultraviolet, and soft X-ray bands, aiming at a physical interpretation of
the light curves and a consistency check for our models. This is the first
effort to model the multiwavelength light curves of a polar, combining
photometric with spectroscopic information during high and low states of
accretion. For simulating the low-energy light curves, we consider cyclotron
emission from the accretion column, accretion-stream emission, and
white-dwarf emission. The contribution of the secondary star is negligible
in the wave bands of our photometry (cf.\ Fig.~\ref{fig:sed}).
We calculate the cyclotron light curves from the phase-resolved column
spectra (model \textit{ii} in Sect.~\ref{sec:disc_models}), folding them
with the transmission curves of the Johnson-Cousins and XMM-Newton/OM
filters. They show a double-humped structure that has been observed and
attributed to cyclotron beaming in other polars, as in AM~Her
\citep{gaensicke:01}, \object{AR~UMa} \citep{howell:01}, and \object{HU~Aqr}
\citep{schwope:03}. In our models, the phasing of the deepest minimum
changes from $\varphi_\mathrm{X-ray}=0.0$ in the infrared, $U$, and $B$
light curves to $\varphi_\mathrm{X-ray}=0.5$ in the $VRI$ light curves. The
white-dwarf fluxes are given by the atmosphere model (\textit{iii}) of
Sect.\ \ref{sec:disc_models}. The flux modulation of the preshock accretion
stream with the orbital phase is calculated within a 3D binary model for a
constant temperature along the stream \citep{staude:01}, which means the
same \textit{relative} stream-flux modulation in all filters. The
\textit{absolute} stream flux is estimated per filter from the light curve
minima as measured flux minus cyclotron and white-dwarf flux.
To combine the different light curve components, we need their relative
phasing, i.\,e.,\ information on the system geometry. The spectroscopic
ephemeris is not accurate enough to be extrapolated to 2009 and to
independently determine the orbital phasing. We therefore estimate the phase
shifts between the components directly from the observed light curves: the
shift between X-ray (dip) phase and cyclotron (magnetic) phase from the
primary optical light-curve minimum to
$\varphi_\mathrm{X-ray}-\varphi_\mathrm{cycl}\sim -0.04$, and the shift
between cyclotron and stream flux via the color dependence of the secondary
minimum and the asymmetric shape of the optical and UV light curves to
$\varphi_\mathrm{cycl}-\varphi_\mathrm{stream}\sim
-0.055$. Figure~\ref{fig:simlcs} shows our 2009 UVW1, $U$, and $B$
observations, along with the final simulations.
\begin{figure}
\centering
\includegraphics[width=7.2cm]{21383fig7}
\caption{Observed (2009) and simulated light curves of
RS~Cae. \textbf{Upper panel:} Phase-averaged soft X-ray light curve in
the black-body-dominated $0.1\,\mathrm{keV} \leq E \leq
0.5\,\mathrm{keV}$ band plus simulated light curves of a flat circular
(dashed green) and a cylindrical (red) accretion region. The solid curve
includes the absorption-dip fit described in the text. EPIC/pn time bins
are 100\,s. \textbf{Lower panel:} Optical monitor UVW1 filter (shifted
by $-0.7\,\mathrm{mag}$), $U$ filter, and SMARTS $B$-band. The dashed
and dotted lines represent the cyclotron and the accretion-stream
contribution to the $B$-band simulation, respectively. OM time bins are
300\,s.}
\label{fig:simlcs}%
\end{figure}
In addition, we model the soft X-ray light curves as orbital projections of
emitting black-body surface areas, to which the measured black-body flux is
proportional. We employ the same binary inclination $i\sim 60\degr$ and
colatitude $\beta\sim 25\degr$ of the accretion region as in
Sect.~\ref{sec:disc_models}. We start with the simplest approach of a flat
circular accretion region (Fig.~\ref{fig:simlcs}), which results in a
significantly higher light-curve amplitude than measured for RS~Cae. The
observed amplitude could only be explained by a flat emission region if both
inclination $i$ and colatitude $\beta$ were on the order of $25\degr$, which
is inconsistent with the optical data of \citet{burwitz:96}. In fact, the
soft X-ray and EUV emitting regions are expected to be bulging and
arc-shaped rather than flat and circular, corresponding to extended,
arc-shaped bases of the post-shock accretion columns
\citep[cf.][]{cropper:89,ferrario:90,potter:97}. Observationally, evidence
of more complexity was found, for example, by
\citet{vennes:95,sirk:98,gaensicke:98,szkody:99}. We tested for a
three-dimensional shape of the accretion region of RS~Cae with the
projection of a cylindrical emission region. The flux from an extended
cylinder with a height of 1.25 times its diameter reproduces the soft
EPIC/pn light curve reasonably well (Fig.~\ref{fig:simlcs}). We derive a
phase shift of about $\varphi_\mathrm{X-ray} - \varphi_\mathrm{softX}\sim
-0.2$ from the light curves, where $\varphi_\mathrm{softX}$ denotes maximum
soft X-ray flux, i.e.,\ maximum visibility of the accretion region. This
shift means that the centers of accretion region and accretion column might
be offset.
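The projection argument can be sketched numerically. With $\cos\theta=\cos i\cos\beta+\sin i\sin\beta\cos 2\pi\varphi$ for the angle $\theta$ between the line of sight and the surface normal at the spot, a flat spot contributes only its foreshortened top face, while a cylinder adds a side-wall term $\propto\sin\theta$. The geometry below is schematic (limb occultation of the wall is ignored):

```python
import numpy as np

def projected_area(phase, incl_deg=60.0, beta_deg=25.0, height_ratio=0.0):
    """Projected emitting area vs. orbital phase, in units of pi*r^2.

    height_ratio is the cylinder height in units of its diameter;
    height_ratio = 0 recovers the flat circular spot. Schematic only:
    the side wall is approximated as a projected rectangle 2*r*h*sin(theta).
    """
    i = np.radians(incl_deg)
    b = np.radians(beta_deg)
    cth = (np.cos(i) * np.cos(b)
           + np.sin(i) * np.sin(b) * np.cos(2.0 * np.pi * np.asarray(phase)))
    top = np.clip(cth, 0.0, None)
    side = (4.0 / np.pi) * height_ratio * np.sqrt(np.clip(1.0 - cth**2, 0.0, 1.0))
    return top + side

phase = np.linspace(0.0, 1.0, 201)
flat = projected_area(phase)                      # flat circular region
cyl = projected_area(phase, height_ratio=1.25)    # cylinder, h = 1.25 d
# the cylinder strongly damps the modulation, as required by the data
```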
To account for the stream-absorption dip in our simulations of the soft
X-ray light curves, we use phase-resolved spectral models of the dip phase.
We extract EPIC/pn spectra at energies below 0.5\,keV in a $\pm 0.225$ phase
range around the dip center with a phase resolution of $\Delta\varphi=0.05$
and fit them simultaneously with absorbed black-body models. Since the
$\Delta\varphi=0.05$ spectra during dip phase comprise only a few bins, we
repeat the fit with spectra extracted for overlapping $\Delta\varphi=0.1$
intervals, centered at $\varphi_\mathrm{X-ray}=0.00, 0.05, 0.10,$ etc.,
similar to the approach that \citet{girish:07} use to fit the iron lines of
AM~Her. We use identical black-body temperatures for all phase intervals and
couple the normalizations (i.e., emitting surface areas) according to the
projected areas of the cylindrical emission region. In our fits, the
hydrogen absorption on the line of sight, including the interstellar
contribution, increases up to $N_\textsc{H,tbabs}\sim
8\times 10^{22}\,\mathrm{cm}^{-2}$ during the dip phase. With
$\chi^2_\mathrm{red}=1.5$ for the
simultaneous black-body fit, modeling the emission region as a
three-dimensional cylinder is significantly more appropriate than as a flat
circle, but still imperfect. We convert the $\Delta\varphi=0.05$ model fluxes
in the $0.1-0.5\,\mathrm{keV}$ interval to absorption factors by dividing
the absorbed fluxes by the fluxes for a constant interstellar (nondip)
absorption of $N_\mathrm{H}=2.1\times 10^{19}\,\mathrm{cm}^{-2}$ and
multiply the simulated X-ray light curves by them. The results are shown in
the upper panel of Fig.~\ref{fig:simlcs} and give a reasonable description
of the absorption dip. Deficits of the model manifest themselves in
particular between $\varphi_\mathrm{X-ray}=0.1$ and 0.2, where the count
rate is underestimated because of the simplifications made in modeling the
emitting areas and the phase shifts.
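The conversion of the phase-resolved fits into a dip profile amounts to a multiplicative correction of the simulated light curve; a sketch with made-up factor values (the real factors come from the $\Delta\varphi=0.05$ spectral fits):

```python
import numpy as np

def apply_absorption_dip(phase, sim_flux, dip_phases, absorbed_flux,
                         nondip_flux):
    """Scale a simulated light curve by phase-dependent absorption factors.

    The factors are the absorbed model fluxes divided by the flux for the
    constant interstellar (nondip) absorption; outside the sampled dip
    interval the factor is 1. Phases are expected in [-0.5, 0.5).
    """
    factors = np.asarray(absorbed_flux, dtype=float) / float(nondip_flux)
    f = np.interp(np.asarray(phase), dip_phases, factors, left=1.0, right=1.0)
    return np.asarray(sim_flux) * f

phase = np.linspace(-0.5, 0.5, 101)
sim = np.ones_like(phase)                 # flat toy light curve
dip = apply_absorption_dip(phase, sim,
                           [-0.10, -0.05, 0.00, 0.05, 0.10],
                           [1.00, 0.40, 0.10, 0.40, 1.00], 1.00)
```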
\subsection{Results}
\label{sec:disc_sedres}
With our SED and light curve models, we separate the contributions of the
different system components to the spectral energy distribution and derive a
new lower limit estimate of the distance to RS~Cae. In the infrared range, a
large part of the total flux can be attributed to the M-type secondary, plus
cyclotron emission during high states of accretion. The cyclotron component
is dominating the stellar emission in the optical and near-infrared
range. In particular, large parts of the $B$-band modulation can be
attributed to cyclotron emission (Fig.~\ref{fig:simlcs}). The color
dependence of the light-curve minima is explained by the decreasing
contribution of the cyclotron flux and increasing contribution of
accretion-stream flux from the optical toward the ultraviolet. The cyclotron
model spectra reproduce the high-state data in 1993 and 2009, and the 2MASS
data indicate that even higher infrared fluxes may be reached at other
epochs. The unheated white dwarf contributes little to the high-state flux,
with the low-state UV flux serving as upper limit for the white dwarf.
\section{The energy balance of RS~Cae}
\label{sec:disc_energy}
Since RS~Cae was discovered at a hardness ratio close to $-1.0$ during the
ROSAT All-Sky Survey \citep{thomas:98}, it has been considered to be one of
the soft X-ray dominated polars. In these systems, the soft X-ray luminosity
exceeds the share of about $50\,\%$ of the total luminosity predicted by the
standard model of accretion \citep{king:79,lamb:79}. Former ROSAT results are biased
towards soft energies, since the energy range was limited to
$0.1-2.5\,\mathrm{keV}$, and have to be verified over a broader energy
range. With our fits to the XMM-Newton spectra, we investigate the energy
balance of RS~Cae on the basis of data in the $0.1-0.5\,\mathrm{keV}$ and
$0.5-10.0\,\mathrm{keV}$ bands, and with our SED models on the basis of the
optical and ultraviolet measurements.
\subsection{XMM-Newton fluxes and X-ray flux ratios}
From the absorbed single-temperature model described in
Sect.~\ref{sec:spectra}, we have derived bolometric fluxes of
$F_\mathrm{bol}(\textsc{bbody})=7.9^{+1.0}_{-0.8} \times
10^{-12}\,\mathrm{erg\,cm}^{-2}\,\mathrm{s}^{-1}$ and
$F_\mathrm{bol}(\textsc{mekal})=7.3^{+0.5}_{-0.5} \times
10^{-13}\,\mathrm{erg\,cm}^{-2}\,\mathrm{s}^{-1}$, and a soft-to-hard flux
ratio of $F_{\mathrm{bol,}\textsc{bbody}} /
F_{\mathrm{bol,}\textsc{mekal}}=10.8^{+1.4}_{-1.0}$ during non-dip
phases. For the cyclotron component, we estimate a bolometric flux of about
$F_\mathrm{bol,cycl}\sim 5\times
10^{-13}\,\mathrm{erg\,cm}^{-2}\,\mathrm{s}^{-1}$ from the accretion-column
model presented in Sect.~\ref{sec:disc_models}.
The flux values strongly depend on the choice of the spectral model
(cf.\ Table~\ref{tab:xspecfits}). \citet{ramsay:04ebalance} present flux and
luminosity ratios based on single-temperature black-body and
multitemperature column models. Multitemperature models typically result in
higher bolometric fluxes: Since the flux in the instrumental energy window
is scaled to the observed flux, the fluxes toward the (unobserved) lower and
higher energies are raised compared to a single-temperature continuum owing
to the broader temperature range (cf.\ small panel in
Fig.~\ref{fig:spectra}). Our fits indicate that the spectra reflect the
multitemperature nature both of accretion region and accretion column, while
higher spectral resolution and sensitivity would be needed to derive the
parameter distributions directly from the observed data. In the combined
multicomponent fit with predefined temperature distributions, the bolometric
fluxes of both the multi-\textsc{bbodyrad} and the multi-\textsc{mekal}
component each increase by a factor of 1.5, leaving the flux ratio
essentially unchanged.
The soft X-ray excess of polars that was measured from ROSAT data increases
with magnetic field strength \citep{beuermann:94,ramsay:94}. The excess in
RS~Cae is comparable to the XMM-Newton results for polars of similar field
strengths: \object{EK~UMa} ($B\sim 35\,\mathrm{MG}$) with a moderate
luminosity ratio of six for single-temperature models during a short 5\,ks
exposure \citep{ramsay:04ebalance}; AI~Tri ($B\sim 38\,\mathrm{MG}$) with
moderate to high flux ratios of 6 to at least 70, depending on the accretion
state \citep{traulsen:10}; HU~Aqr ($B\sim 35\,\mathrm{MG}$) with low flux
ratios during a low state of accretion, balanced fluxes or a slight excess
during an intermediate state, and a strong excess during a high state
\citep{schwarz:09}.
\begin{table}
\caption{Emitting black-body areas and bolometric fluxes of RS~Cae during
the high-state observations by ROSAT in 1992 and XMM-Newton in 2009,
together with their 90\,\% confidence intervals.}
\label{tab:fluxes}\centering
\begin{tabular}{lcc}
\hline\hline\\[-2ex]
& ROSAT 1992 & XMM-Newton 2009 \\ \hline\\[-1.5ex]
$A_\textsc{bbody}$
& $\sim 10^{16}$
& \multirow{2}{*}{$5.1^{+1.1}_{-0.8}\times 10^{13}$} \\
$~~~[\mathrm{cm}^{2}]$
& $([0.3, 9]\times 10^{16})$
& \\[.8ex]
$F_\textsc{bbody}$
& $\sim 2\times{10^{-10}}$
& \multirow{2}{*}{$7.9^{+1.0}_{-0.8}\times 10^{-12}$} \\
$~~~[\mathrm{erg\,cm}^{-2}\,\mathrm{s}^{-1}]$
& $(\gtrsim 5\times 10^{-11})$
& \\[.8ex]
$F_\textsc{brems}~~(5\,\mathrm{keV})$
& $9\times 10^{-14}$
& \multirow{2}{*}{$1.6^{+0.1}_{-0.1}\times 10^{-13}$} \\
$~~~[\mathrm{erg\,cm}^{-2}\,\mathrm{s}^{-1}]$
& $([6, 12]\times 10^{-14})$
& \\
\hline
\end{tabular}
\end{table}
\subsection{XMM-Newton and optical luminosities}
We convert the bolometric fluxes to luminosities as $L=\eta\pi Fd^2$, where
$\eta$ denotes the geometric correction factor. Typically, for the hard
component a factor of $3\pi$ (considering reflection effects) up to $4\pi$
is chosen, while \citet{king:87} emphasize that a factor of $2\pi$ for
column emission into one half space would be more appropriate. For the soft
emission of a flat accretion region, a factor of $2\pi$ or
$\pi\cos^{-1}\vartheta$, depending on the viewing angle $\vartheta$,
is used, and of $4\pi$ for blobby accretion when accretion mounds form at
the impact area due to the heating of the photosphere
\citep{hameury:88,beuermann:89}. Having shown evidence of a
three-dimensional structure of the accretion region in RS~Cae, we choose the
same geometric factor of $4\pi$ for the X-ray soft and for the X-ray hard
component, and $2\pi$ for the cyclotron component. For the multitemperature
model, we thus obtain $L_\mathrm{softX}=7.7\times
10^{32}\,\mathrm{erg\,s}^{-1}$, $L_\mathrm{hardX}=7.3\times
10^{31}\,\mathrm{erg\,s}^{-1}$, $L_\mathrm{cycl}\sim1.7\times
10^{31}\,\mathrm{erg\,s}^{-1}$ for a distance of $d=750\,\mathrm{pc}$, and
soft-to-hard luminosity ratios on the same order of 10 as the flux ratios.
The accretion-induced luminosities of soft X-ray, hard X-ray, and cyclotron
component add to $L_\mathrm{accr}=8.6\times 10^{32}\,\mathrm{erg\,s}^{-1}$,
without the (unknown) contribution of the preshock accretion stream. For
a white-dwarf mass of $M_\mathrm{WD}=0.6\,\mathrm{M}_\odot$ and radius of
$R_\mathrm{WD}\sim 0.012\,R_\odot$, this corresponds to a mass-accretion rate
on the order of
$\dot{M}=L_\mathrm{accr}\,R_\mathrm{WD}/(G\,M_\mathrm{WD})\sim
10^{-10}\,\mathrm{M}_\odot\,\mathrm{yr}^{-1}$, within the typical range of
high-state accretion rates of polars.
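As a plausibility check of the numbers above (an illustrative sketch, not part of the original analysis), the luminosities and the mass-accretion rate can be recomputed from the quoted fluxes; the CGS constants below are standard values assumed here.

```python
# Recompute the luminosities and the accretion rate quoted above.
# Fluxes: multitemperature values (factor 1.5 above the single-temperature
# fits), cyclotron estimate from the accretion-column model; d = 750 pc.
import math

PC = 3.0857e18        # cm per parsec
G = 6.674e-8          # gravitational constant [cgs]
MSUN = 1.989e33       # solar mass [g]
RSUN = 6.957e10       # solar radius [cm]
YR = 3.156e7          # seconds per year

d = 750.0 * PC
F_soft = 1.5 * 7.9e-12   # erg cm^-2 s^-1
F_hard = 1.5 * 7.3e-13
F_cycl = 5.0e-13

# L = eta * pi * F * d^2, with eta*pi = 4*pi (soft, hard) and 2*pi (cyclotron)
L_soft = 4.0 * math.pi * F_soft * d**2   # ~7.7e32 erg/s
L_hard = 4.0 * math.pi * F_hard * d**2   # ~7.3e31 erg/s
L_cycl = 2.0 * math.pi * F_cycl * d**2   # ~1.7e31 erg/s
L_accr = L_soft + L_hard + L_cycl        # ~8.6e32 erg/s

# Mdot = L_accr * R_WD / (G * M_WD), converted to solar masses per year
M_wd = 0.6 * MSUN
R_wd = 0.012 * RSUN
Mdot = L_accr * R_wd / (G * M_wd) * YR / MSUN   # ~1e-10 Msun/yr
print(L_soft, L_hard, L_cycl, Mdot)
```

With these constants, the script reproduces the quoted values to within a few percent; the residual differences reflect rounding of the input fluxes.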
\subsection{Long-term characteristics of soft and hard emission}
The SED plot in Fig.~\ref{fig:sed} shows the distinct soft X-ray components
in the ROSAT and XMM-Newton observations. They changed significantly from
1992 to 2009, seen in the emitting black-body areas, which represent the
area at the mean temperature of the accretion region, and, correspondingly,
in the (bolometric) fluxes of the black-body fits (Table~\ref{tab:fluxes}):
The soft X-ray fluxes during the ROSAT observation of September 1992 are
higher by a factor of at least 5.5 than during the 2009 XMM-Newton
observation. The hard X-ray and cyclotron fluxes, on the other hand, are on
the same order of magnitude during the 2009 observations as in 1992/93. Both
hard X-ray and cyclotron emission are assumed to arise from the accretion
column, so the similar flux levels may indicate similar physical properties
of the column at both epochs. Although the ROSAT fit results are poorly
constrained, they show clearly that the long-term characteristics of soft
and hard X-ray component are not correlated. \citet{gaensicke:95}
demonstrated for AM~Her that the bremsstrahlung and cyclotron flux are
balanced by the (reprocessed) UV flux both during high and low states of
accretion, independently of the soft X-ray component, which arises from
blobby accretion. Correspondingly, the soft stages of RS~Cae and its
decoupled soft and hard X-ray fluxes may indicate inhomogeneous accretion
processes.
\section{Summary and conclusions}
Our pointed XMM-Newton observation of RS~Cae, covering 4.5 orbital cycles,
provides the first opportunity to investigate its energy balance on the
basis of data at energies up to 10\,keV. RS~Cae was clearly in a high state
of accretion at the epoch of our observations. Archival infrared and X-ray
data indicated that still higher and potentially softer states might be
possible. Almost $97\,\%$ of the EPIC/pn photons were detected in the soft
range, yielding hardness ratios close to $-1$ in the X-ray light curves. The
light curves gave evidence of a three-dimensional shape of the accretion
region and inhomogeneous accretion events, as is typical of soft polars. We
identified the sharp, recurring light-curve dips as stream absorption and
used them to derive a photometric period of 0.0709(3)\,days, confirming the
preferred period of \citet{burwitz:96} and placing RS~Cae among the
short-period polars. The energy dependence of the width of the dip, becoming
broader towards higher energies, indicated different spatial extents of the
X-ray emitting regions.
Using SED modeling, we consistently connected the multiwavelength spectra
and light curves of the different system components. SEDs constructed from
nonsimultaneous observations provide insight into the long-term behavior and
accretion-stage changes of the system, but limit the accuracy of the
modeling. Our results demonstrate the potential of SED modeling, in
particular when combined with simultaneous multiwavelength observations of
polars.
Using single- and multitemperature fits to the EPIC spectra, we find a soft
X-ray excess with soft-to-hard luminosity ratios of about ten. Our
multitemperature spectral models give physically plausible descriptions of
the structure of the accretion region and column and the respective X-ray
and low-energy spectra and light curves. They result in the same
soft-to-hard flux ratios as the single-temperature models.
Up to now, we have studied three systems with a clear soft X-ray excess
during (intermediate) high states of accretion in the ROSAT and in our
pointed XMM-Newton observations: AI~Tri \citep{traulsen:10}, QS~Tel
\citep{traulsen:11}, and RS~Cae. Their soft X-ray luminosity correlates with
their accretion state and can change drastically on time scales as short as
days, in agreement with the dependence of the soft-to-hard luminosity ratio
on the accretion state as described by \citet{ramsay:04ebalance}. The three
systems augment the number of polars whose ROSAT-detected soft X-ray excess
could be confirmed over the broader energy range of XMM-Newton. Which
fraction of AM~Her-type systems actually shows a soft X-ray excess is hard
to determine from the currently available data. One limiting factor has
been instrumental bias: with higher sensitivity, in particular in the hard
X-ray regime, an increasing number of X-ray hard, low accretion rate
polars are being detected. Another limiting factor is observational
bias: in untriggered observations, a substantial number of polars are
caught during low states, so their soft stages would be missed. Considering
the selection effects and our observational results, we expect that the
fraction of 25\,\% X-ray soft polars among ROSAT detections might
overestimate the actual number, but that the soft systems form a significant
group among the polars.
\begin{acknowledgements}
This research has been supported by the DLR under project numbers
50\,OR\,0501, 50\,OR\,0807, and 50\,OR\,1011. We thank Ren\'e Heller for
providing the M-star spectrum shown in Fig.~\ref{fig:sed} and Karleyne Silva
for calculating accretion-column light curves and for fruitful
discussions. FWM's access to the SMARTS observatory is supported in part by
a NASA grant NNX10AE51G to Stony Brook University.
Figure~\ref{fig:sed} includes data from the High Energy Astrophysics Science
Archive Research Center (HEASARC), provided by NASA's Goddard Space Flight
Center; data products from the Two Micron All Sky Survey, which is a joint
project of the University of Massachusetts and the Infrared Processing and
Analysis Center/California Institute of Technology, funded by the National
Aeronautics and Space Administration and the National Science Foundation;
and data products from the Wide-field Infrared Survey Explorer, which is a
joint project of the University of California, Los Angeles, and the Jet
Propulsion Laboratory/California Institute of Technology, funded by the
National Aeronautics and Space Administration.
\end{acknowledgements}
\bibliographystyle{aa}
\section{Introduction}
The mathematical and physical significance of Painlev\'e equations \cite{INCE}
is well established \cite{ASF5}. In particular, regarding Painlev\'e II, we note the following:
\begin{enumerate}
\item[(a)] A new method for solving its initial value problem, the
so-called isomonodromy method, was introduced in \cite{Flasch}; this
method was imbedded within the framework of the Riemann-Hilbert
formalism in \cite{ASF4}. Rigorous aspects of this formalism,
including the proof that the solution possesses the so-called
Painlev\'e property, were discussed in Fokas and Zhou
\cite{ASF_Zhou}.
\item[(b)] There exist several Lax pairs for
Painlev\'e II, including those presented in \cite{Flasch}, in Jimbo
and Miwa \cite{JM}, and in Harnad, Tracy and Widom \cite{HTW}.
\item[(c)] Painlev\'e II is a Hamiltonian system
\cite{Flasch}\cite{Okamoto}\cite{M.Noumi}.
\item[(d)] It is possible
to construct certain particular explicit solutions of Painlev\'e II
using certain ``B\"acklund transformations'' \cite{Yablonskii}
\cite{M.Noumi} and \cite{ASF2}.
\end{enumerate}
The concept of conjugate Hamiltonian systems is introduced in
\cite{Yang}: The solution of the equation $h=H(p,q,t),$ where $H$ is
a given Hamiltonian which contains $t$ explicitly, yields the
function $t=T(p,q,h)$. The Hamiltonian system with Hamiltonian $T$
and independent variable $h$ is called \textit{conjugate} to the
Hamiltonian system with Hamiltonian $H$. The conjugate Hamiltonian
system has the following properties:
\begin{enumerate}
\item If $p=p(t),~q=q(t)$ is a solution of the Hamiltonian system with
Hamiltonian $H$, then $p=p(t(h)),q=q(t(h))$\footnote{
We sometimes write $p(h),q(h)$ instead of $p(t(h)),q(t(h))$.}, is a solution
of the conjugate Hamiltonian system, where $t=t(h)$ is the so-called
$t$-function, the inverse function of the $h-$function\footnote{We
recall that the $h-$function, $h=h(t)$, is defined by
$h(t)=H(p(t),q(t),t))$.}.
\item A first integral of a Hamiltonian system, also provides a first integral of
the associated conjugate Hamiltonian system.
\end{enumerate}
The classical Painlev\'e equations are Hamiltonian systems; thus we
can associate with each Painlev\'e equation a conjugate Hamiltonian
system. The gauge freedom of a Hamiltonian implies that
we can in fact associate an \emph{infinite} family of integrable
second-order nonlinear ODEs with a given Painlev\'e equation.
Furthermore, by utilising the gauge freedom of the conjugate
Hamiltonian, we can associate with any of the conjugate ODEs
constructed, another infinite family of integrable ODEs, etc.
Here, we only present the ODEs with the
simplest form.
This paper is organized as follows: In section 2, we construct the
conjugate Painlev\'e II and also derive an associated Lax pair. In
section 3, we construct a class of explicit solutions of the
conjugate Painlev\'e II. In section 4, we derive the conjugate ODEs
corresponding to Painlev\'e I and IV. In section 5, we prove a general theorem
for constructing Lax pairs for conjugate Painlev\'e equations. In
section 6, we discuss further these results.
\section{The conjugate Painlev\'e II equation}
Let $P_{I\!I}$ denote the second Painlev\'e equation, namely
\begin{equation}
\label{P2_q} P_{I\!I}:~~~\f{d^2q}{dt^2}=2q^3+tq+b-\f{1}{2},~~~~t,q\in\mathbb{C},
\end{equation}
where $b$ is an arbitrary complex constant. $P_{I\!I}$ possesses the
Hamiltonian $H$, where
\begin{equation}\label{P2_H}
H(p,q,t)=\f{1}{2}p^2-\Big(q^2+\f{t}{2}\Big)p-bq.
\end{equation}
Indeed, Hamilton's equations associated with $H$ are
\begin{subequations}\label{Ham_P2}
\begin{align}
\label{Ham_P2_a} &\f{dp}{dt}=-\f{\pa H}{\pa q}=2pq+b,\\
\label{Ham_P2_b} &\f{dq}{dt}=\f{\pa H}{\pa p}=p-q^2-\f{t}{2}.
\end{align}
\end{subequations}
Eliminating from equations \eqref{Ham_P2} the variable $p$ we find
$P_{I\!I}:$
$$\f{d^2q}{dt^2}=2pq+b-2q\Big(p-q^2-\f{t}{2}\Big)-\f{1}{2},$$
which is equation \eqref{P2_q}.
If we eliminate from equations \eqref{Ham_P2} the variable $q$, we
find another second-order integrable ODE, which appears in the list
of Ince \cite{INCE} as $XXXIV$, and which we denote by
$\widetilde{P_{I\!I}}$ \footnote{Either $\widetilde{P_{I\!I}}$ or
$P_{I\!I}$ could have been chosen as the second Painlev\'e equation. The
historical choice of $P_{I\!I}$ is due to Painlev\'e himself
\cite{INCE}.}:
\begin{equation}
\label{P2_p}
\widetilde{P_{I\!I}}:~~\f{d^2 p}{dt^2}=\f{1}{2p}\Big(\f{dp}{dt}\Big)^2-\f{b^2}{2p}+2p^2-tp,~~~~t,p\in\mathbb{C}.
\end{equation}
Indeed,
\begin{equation*}
\begin{split}
\f{d^2 p}{dt^2}&=2q(2pq+b)+2p\Big(p-q^2-\f{t}{2}\Big)\\
&=2p^2-tp+2pq^2+2bq.
\end{split}
\end{equation*}
Replacing in this equation $q$ from equation \eqref{Ham_P2_a}, we
find equation \eqref{P2_p}.
\begin{remark}\label{solution corres}
It has been shown in \cite{ASF2} that there exists a one-to-one
correspondence between solutions of $P_{I\!I}$ and
$\widetilde{P_{I\!I}}$. This result follows directly from
their Hamiltonian structure \eqref{Ham_P2}.
\end{remark}
\paragraph{Notation} A prime, ``${}'$'', denotes the derivative with respect to $h$.
\begin{proposition}
The conjugate equations of $P_{I\!I}$ and of $\widetilde{P_{I\!I}}$,
i.e., the conjugate equations of equations \eqref{P2_q} and
\eqref{P2_p}, are the following ODEs:
\begin{subequations} \label{C_P2}
\begin{align}
\label{P2_C_q}CP_{I\!I}:~~&\f{d^2 q}{dh^2}=(q'+1)\Big(\f{1-2b-bq'}{h+bq}+8q\Big(\f{-q'-1}{2h+2bq}\Big)^{\f{1}{2}}\Big),~~h,q\in\mathbb{C},\\
\label{P2_C_p}\widetilde{CP_{I\!I}}:~~&\f{d^2 p}{dh^2}=4+\f{8h}{p^2}-\f{4b^2}{p^3}, ~~~h,p\in \mathbb{C}.
\end{align}
\end{subequations}
Equations \eqref{C_P2} possess the Hamiltonian function $T$, where
\begin{equation}
\label{CH2_T} T(p,q,h)=(p-2q^2)-\f{2h+2bq}{p}.
\end{equation}
Moreover, equations \eqref{C_P2} admit the following Lax pair:
\begin{subequations}\label{Lax_CP2}
\begin{align}
\f{\pa \psi}{\pa \la}&=\left\{\f{1}{2\la}\left(
\begin{array}{llcl}
b & 0\\
-p & -b\\
\end{array}
\right)+\left(
\begin{array}{llcl}
q & \f{1}{p}(2h+2bq)\\
\f{1}{2} & -q\\
\end{array}
\right)+\la\left(
\begin{array}{llcl}
0 & 1\\
0 & 0\\
\end{array}
\right)\right\}
\psi,\\
\f{\pa \psi}{\pa h}&=\left\{-\f{2}{p}\left(
\begin{array}{llcl}
-q & 0\\
-\f{1}{2} & q\\
\end{array}
\right)-\la\f{2}{p}\left(
\begin{array}{llcl}
0 & -1\\
0 & 0\\
\end{array}
\right)\right\} \psi, ~~~~~~~~~\la\in\mathbb{C},
\end{align}
\end{subequations}
where $\psi$ is a $2\times 2$ matrix-valued function of $\la$ and
$h$.
\end{proposition}
\paragraph{Proof.}
The solution of the equation
\begin{equation}
h=\f{1}{2}p^2-(q^2+\f{t}{2})p-bq,
\end{equation}
yields
\begin{equation}
t=T(p,q,h),
\end{equation}
where the function $T$ denotes the RHS of equation \eqref{CH2_T}. The associated Hamilton's equations are:
\begin{subequations}\label{CH2_PQ}
\begin{align}
&\f{dp}{dh}=\f{\pa T}{\pa q}=-4q-\f{2b}{p},\\
&\f{dq}{dh}=-\f{\pa T}{\pa p}=-1-\f{2h+2bq}{p^2}.
\end{align}
\end{subequations}
Eliminating from equations \eqref{CH2_PQ} the function $q$ we find,
$$\f{d^2 p}{dh^2}=-4\Big(-1-\f{2h+2bq}{p^2}\Big)+\f{2b}{p^2}\Big(-4q-\f{2b}{p}\Big),$$
which is equation \eqref{P2_C_p}. Similarly, eliminating from equations \eqref{CH2_PQ} the function $p$ we find equation \eqref{P2_C_q}.
The Lax pair \eqref{Lax_CP2} can be verified directly. Indeed, under
the assumption that $p_{\la}=0$ and $q_{\la}=0$,
the compatibility condition
$$\f{\pa^2\psi}{\pa h\pa \la}=\f{\pa^2\psi}{\pa \la \pa h},$$
is equivalent to equations \eqref{CH2_PQ}, and hence to
equations \eqref{C_P2}.
Alternatively, equations \eqref{Lax_CP2} can be derived from the following Lax pair of
$P_{I\!I}$ and $\widetilde{P_{I\!I}}$\footnote{This pair is the
so-called Harnad-Tracy-Widom pair (HTW-pair), which was first
discovered by Harnad, Tracy and Widom in \cite{HTW} and first
written out explicitly by Joshi, Kitaev and Treharne in
\cite{Kita}.}:
\begin{subequations}\label{Lax_P2_HTW}
\begin{align}
\f{\pa \psi}{\pa \la}&=\left\{\f{1}{2\la}\left(
\begin{array}{llcl}
b & 0\\
-p & -b\\
\end{array}
\right)+\left(
\begin{array}{llcl}
q & p-2q^2-t\\
\f{1}{2} & -q\\
\end{array}
\right)+\la\left(
\begin{array}{llcl}
0 & 1\\
0 & 0\\
\end{array}
\right)\right\}
\psi,\\
\f{\pa \psi}{\pa t}&=-\left\{\left(
\begin{array}{llcl}
q & 0\\
\f{1}{2} & -q\\
\end{array}
\right)+\la\left(
\begin{array}{llcl}
0 & 1\\
0 & 0\\
\end{array}
\right)\right\}
\psi.
\end{align}
\end{subequations}
Indeed, the HTW-pair \eqref{Lax_P2_HTW} is a Lax pair for
Hamilton's equations \eqref{Ham_P2} \cite{Kita}. By applying Proposition \ref{Lax pair},
it can be shown that equations \eqref{Lax_P2_HTW} imply equations \eqref{Lax_CP2}.
\begin{flushright} $\square$ \end{flushright}
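Both claims of the proposition can be checked symbolically. The following sketch (illustrative, assuming sympy is available) verifies that $T$ inverts the $h$-function and that the zero-curvature condition of the Lax pair \eqref{Lax_CP2} reproduces Hamilton's equations \eqref{CH2_PQ}:

```python
# Verify H(p, q, T(p, q, h)) = h and the zero-curvature condition
# A_h - B_lambda + [A, B] = 0 of the Lax pair, with p_h, q_h taken
# from the conjugate Hamilton's equations (CH2_PQ).
import sympy as sp

p, q, t, h, b, lam = sp.symbols('p q t h b lam')

H = sp.Rational(1, 2)*p**2 - (q**2 + t/2)*p - b*q
T = (p - 2*q**2) - (2*h + 2*b*q)/p
assert sp.simplify(H.subs(t, T) - h) == 0

ph = -4*q - 2*b/p                        # dp/dh
qh = -1 - (2*h + 2*b*q)/p**2             # dq/dh

A = (sp.Matrix([[b, 0], [-p, -b]])/(2*lam)
     + sp.Matrix([[q, (2*h + 2*b*q)/p], [sp.Rational(1, 2), -q]])
     + lam*sp.Matrix([[0, 1], [0, 0]]))
B = (-2/p)*sp.Matrix([[-q, 0], [-sp.Rational(1, 2), q]]) \
    - lam*(2/p)*sp.Matrix([[0, -1], [0, 0]])

# total derivative of A with respect to h along the flow of (CH2_PQ)
A_h = A.diff(h) + A.diff(p)*ph + A.diff(q)*qh
Z = A_h - B.diff(lam) + A*B - B*A
assert Z.applyfunc(sp.simplify) == sp.zeros(2, 2)
print('conjugate Hamiltonian and Lax pair verified')
```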
\section{Solving $CP_{I\!I}$ and $\widetilde{CP_{I\!I}}$}
The discussion in the introduction implies that starting with the well-known
special solutions of $P_{I\!I}$ and $\widetilde{P_{I\!I}}$, we can
construct special solutions for $CP_{I\!I}$ and
$\widetilde{CP_{I\!I}}$. It also implies that we can solve, at least implicitly, the general
initial value problem.
\subsection{A class of special solutions}
First we recall the rational solutions of $P_{I\!I}$ and
$\widetilde{P_{I\!I}}$. There are two fundamental types of
B\"acklund transformations for $P_{I\!I}$ and $\widetilde{P_{I\!I}}$
which were derived in \cite{Yablonskii}, \cite{ASF2},
\cite{M.Noumi}. Taking into consideration Remark \ref{solution
corres}, we express these transformations for Hamilton's equations
\eqref{Ham_P2}:
\begin{enumerate}
\item[$(i)$] Suppose that $(q(t;b),p(t;b))$ is a solution of equations \eqref{Ham_P2}
with constant $b$. Then
$$(\hat{q}(t),\hat{p}(t))=(q(t;b)+\f{b}{p(t;b)},~p(t;b))$$ is a solution
of equations \eqref{Ham_P2} with constant $-b$.
\item[$(ii)$] Suppose that $(q(t;b),p(t;b))$ is a solution for equations \eqref{Ham_P2}
with constant $b$. Then
$$(\hat{q}(t),\hat{p}(t))=(-q(t;b),~-p(t;b)+2q^2(t;b)+t)$$ is a solution
of equations \eqref{Ham_P2} with constant $1-b$.
\end{enumerate}
The transformations $(i)$ and $(ii)$ imply, respectively,
$\hat{h}(t)=h(t)$ and $\hat{h}(t)=h(t)+q(t)$.
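Both transformations and these $\hat{h}$-relations can be checked symbolically; a sympy sketch (illustrative only):

```python
# Verify the Backlund transformations (i) and (ii) for Hamilton's
# equations (Ham_P2), together with hhat = h and hhat = h + q.
import sympy as sp

t, q, p, b = sp.symbols('t q p b')

def H(p_, q_, b_):
    return sp.Rational(1, 2)*p_**2 - (q_**2 + t/2)*p_ - b_*q_

qt = p - q**2 - t/2          # dq/dt from (Ham_P2)
pt = 2*p*q + b               # dp/dt from (Ham_P2)

# (i): (q + b/p, p) solves (Ham_P2) with constant -b, and hhat = h
qh, ph = q + b/p, p
assert sp.simplify((qt - b*pt/p**2) - (ph - qh**2 - t/2)) == 0
assert sp.simplify(pt - (2*ph*qh - b)) == 0
assert sp.simplify(H(ph, qh, -b) - H(p, q, b)) == 0

# (ii): (-q, -p + 2q^2 + t) solves (Ham_P2) with constant 1-b, hhat = h + q
qh2, ph2 = -q, -p + 2*q**2 + t
assert sp.simplify(-qt - (ph2 - qh2**2 - t/2)) == 0
assert sp.simplify((-pt + 4*q*qt + 1) - (2*ph2*qh2 + (1 - b))) == 0
assert sp.simplify(H(ph2, qh2, 1 - b) - (H(p, q, b) + q)) == 0
print('Backlund transformations verified')
```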
Starting from a particular solution $q=0,p=t/2$ with $b=1/2$, and
applying the above B\"acklund transformations, we can obtain a
class of rational solutions
\cite{Yablonskii}, \cite{ASF2}, \cite{M.Noumi} for $P_{I\!I}$ and
$\widetilde{P_{I\!I}}$. For example:
\begin{enumerate}
\item[ ] $q=\f{2(t^3-2)}{t(t^3+4)}$, $p=\f{t^3+4}{2t^2},$
$h=-\f{t^2}{8}+\f{1}{t}$ with $b=-\f{3}{2}$;
\item[ ] $q=\f{1}{t}$, $p=\f{t}{2}$, $h=-\f{t^2}{8}$ with $b=-\f{1}{2}$;
\item[ ] $q=0$, $p=\f{t}{2}$, $h=-\f{t^2}{8}$ with $b=\f{1}{2}$;
\item[ ] $q=-\f{1}{t}$, $p=\f{t^3+4}{2t^2}$, $h=-\f{t^2}{8}+\f{1}{t}$ with $b=\f{3}{2}$;
\end{enumerate}
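Each entry of this list can be checked against Hamilton's equations \eqref{Ham_P2}; a sympy sketch for the $b=-3/2$ case:

```python
# Check the b = -3/2 rational solution against Hamilton's equations
# (Ham_P2) and its h-function h(t) = -t^2/8 + 1/t.
import sympy as sp

t = sp.symbols('t')
b = -sp.Rational(3, 2)

q = 2*(t**3 - 2)/(t*(t**3 + 4))
p = (t**3 + 4)/(2*t**2)
H = sp.Rational(1, 2)*p**2 - (q**2 + t/2)*p - b*q

assert sp.simplify(sp.diff(q, t) - (p - q**2 - t/2)) == 0
assert sp.simplify(sp.diff(p, t) - (2*p*q + b)) == 0
assert sp.simplify(H - (-t**2/8 + 1/t)) == 0
print('b = -3/2 rational solution verified')
```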
By inverting the $h-$function and substituting the resulting
$t-$function to the rational solutions for $P_{I\!I}$ and
$\widetilde{P_{I\!I}}$, we obtain the following solutions for
$CP_{I\!I}$ and $\widetilde{CP_{I\!I}}$:
\begin{itemize}
\item{
\begin{subequations}\label{CP_sol_1}
\begin{align}
&q=\f{2\Big(\Big(\f{2}{3}\Big)^{2/3}D^{1/3}-\Big(\f{2}{3}\Big)^{1/3}4hD^{-1/3}\Big)^3-4}
{\Big(\f{2}{3}\Big)^{2/3}D^{1/3}-\Big(\f{2}{3}\Big)^{1/3}4hD^{-1/3}\Big[\Big(\Big(\f{2}{3}\Big)^{2/3}D^{1/3}-\Big(\f{2}{3}\Big)^{1/3}4hD^{-1/3}\Big)^3+4\Big]},\\
&p=\f{4+\Big(\Big(\f{2}{3}\Big)^{2/3}D^{1/3}-\Big(\f{2}{3}\Big)^{1/3}4hD^{-1/3}\Big)^3}{2\Big(\Big(\f{2}{3}\Big)^{2/3}D^{1/3}
-\Big(\f{2}{3}\Big)^{1/3}4hD^{-1/3}\Big)^2},\\
&t=\Big(\f{2}{3}\Big)^{2/3}D^{1/3}-\Big(\f{2}{3}\Big)^{1/3}4hD^{-1/3}
\end{align}
\end{subequations}
with $b=-\f{3}{2}$, where $D=9+\Big(81+96h^3\Big)^{1/2}$;}
\item{ \begin{equation}q=\f{1}{2(-2h)^{1/2}}, ~~p=(-2h)^{1/2}, ~~t=2(-2h)^{1/2} \end{equation} with
$b=-\f{1}{2}$;}
\item{
\begin{equation} q=0,~~ p=(-2h)^{1/2},~~ t=2(-2h)^{1/2} \end{equation} with $b=\f{1}{2}$;
}
\item{
\begin{subequations}\label{CP_sol_4}
\begin{align}
&q=-\Big(\Big(\f{2}{3}\Big)^{2/3}D^{1/3}-\Big(\f{2}{3}\Big)^{1/3}4hD^{-1/3}\Big)^{-1},\\
&p=\f{4+\Big(\Big(\f{2}{3}\Big)^{2/3}D^{1/3}-\Big(\f{2}{3}\Big)^{1/3}4hD^{-1/3}\Big)^3}{2\Big(\Big(\f{2}{3}\Big)^{2/3}D^{1/3}
-\Big(\f{2}{3}\Big)^{1/3}4hD^{-1/3}\Big)^2},\\
&t=\Big(\f{2}{3}\Big)^{2/3}D^{1/3}-\Big(\f{2}{3}\Big)^{1/3}4hD^{-1/3}
\end{align}
\end{subequations}
with $b=\f{3}{2}$, where $D=9+\Big(81+96h^3\Big)^{1/2}$;}
\end{itemize}
The four particular solutions computed above can be verified
directly. For $|b|>3/2$, computing the corresponding solutions explicitly
requires solving polynomial equations of degree higher than four.
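For instance, the $b=-1/2$ solution can be verified with a short sympy computation (illustrative sketch):

```python
# Check the b = -1/2 solution against the conjugate Hamilton's equations
# (CH2_PQ) and the conjugate equation p'' = 4 + 8h/p^2 - 4b^2/p^3.
import sympy as sp

h = sp.symbols('h', negative=True)   # so that sqrt(-2h) is real
b = -sp.Rational(1, 2)

p = sp.sqrt(-2*h)
q = 1/(2*sp.sqrt(-2*h))

assert sp.simplify(sp.diff(p, h) - (-4*q - 2*b/p)) == 0
assert sp.simplify(sp.diff(q, h) - (-1 - (2*h + 2*b*q)/p**2)) == 0
assert sp.simplify(sp.diff(p, h, 2) - (4 + 8*h/p**2 - 4*b**2/p**3)) == 0
print('b = -1/2 conjugate solution verified')
```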
Using the transformations $(i)$ and $(ii)$, we find the following result:
\begin{proposition} (B\"acklund transformations)\label{B_trans}
\begin{enumerate}
\item[$(i)$] Suppose that $(q(h;b),p(h;b))$ is a solution of equations
\eqref{CH2_PQ} with constant $b$. Then
$$(\hat{q}(h),~\hat{p}(h))=(q(h;b)+\f{b}{p(h;b)},~p(h;b))$$ is a solution of
equations \eqref{CH2_PQ} with constant $-b$.
\item[$(ii)$] Suppose that $(q(h;b),p(h;b))$ is a solution of equations \eqref{CH2_PQ}
with constant $b$. Then
$$(\hat{q}(\hat{h}),~\hat{p}(\hat{h}))=\Big(-q(h;b),~-\f{2h+2bq(h;b)}{p(h;b)}\Big)$$ is a solution
of equations \eqref{CH2_PQ} with constant $1-b$ and independent variable $\hat{h}$,
where $\hat{h}=h+q(h;b)$.
\end{enumerate}
\end{proposition}
\begin{remark}
The solutions \eqref{CP_sol_1} -- \eqref{CP_sol_4} can also be
generated by employing Proposition \ref{B_trans}. The main
difficulty for the explicit computation of these solutions is the
requirement of solving the equation $\hat{h}=h+q(h;b)$ for $h$ in
terms of $\hat{h}$, $h=h(\hat{h})$.
\end{remark}
\subsection{An implicit representation of the solution of the initial value problem}
We study the following initial value problem (IVP) of
$\widetilde{CP_{I\!I}}$:
\begin{subequations}
\begin{align}
&\f{d^2 p}{dh^2}=4+\f{8h}{p^2}-\f{4b^2}{p^3},\\
&p|_{h=h_0}=p_0,~~p'|_{h=h_0}=p_1.
\end{align}
\end{subequations}
$\widetilde{CP_{I\!I}}$ is equivalent to Hamilton's equations
\eqref{CH2_PQ}. Using these equations, we can find the
initial values of $q$ and $q'$ at $h=h_0:$
$$q|_{h=h_0}=-\f{b}{2p_0}-\f{p_1}{4}:=q_0,~~q'|_{h=h_0}=-1-\f{2h_0+2bq_0}{p_0^2}:=q_1.$$
Thus,
$$t_0=T(p_0,q_0,h_0),~~q|_{t=t_0}=q_0,~~p|_{t=t_0}=p_0.$$
Next, from the Hamiltonian structure \eqref{Ham_P2} of $P_{I\!I}$,
we obtain the following initial values:
$$\f{dp}{dt}\Big|_{t=t_0}=2p_0q_0+b,~~~\f{dq}{dt}\Big|_{t=t_0}=p_0-q_0^2-t_0/2.$$
The IVP of $P_{I\!I}$ with initial values $q|_{t=t_0}$ and
$\f{dq}{dt}\big|_{t=t_0}$, can be solved via the isomonodromy method and yields
$$q=q(t).$$
Substituting this solution into equations \eqref{Ham_P2}, we obtain
$$p=p(t).$$ The $h-$function is obtained by
$$h(t)=H(p(t),q(t),t).$$
By the inverse function theorem, at least locally, we obtain
$$t=t(h).$$ Thus, the implicit solution of the IVP of
$\widetilde{CP_{I\!I}}$ is given by
$$p=p(t(h)).$$
The IVP for $CP_{I\!I}$ can be solved in a similar way.
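The procedure is also easy to mimic numerically. The sketch below (an illustration, not part of the analysis) integrates Hamilton's equations \eqref{Ham_P2} for the exact seed $q=0$, $p=t/2$, $b=1/2$, records the $h$-function along the trajectory, and inverts it by interpolation; here $h(t)=-t^2/8$, so the recovered $p(h)$ must equal $\sqrt{-2h}$.

```python
# Numerically construct p(h) for the conjugate equation by integrating
# (Ham_P2) in t, evaluating h(t) = H(p, q, t), and inverting the
# monotone h-function.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.interpolate import interp1d

b = 0.5

def rhs(t, y):
    q, p = y
    return [p - q**2 - t/2, 2*p*q + b]       # (Ham_P2)

t0, t1 = 1.0, 4.0
sol = solve_ivp(rhs, (t0, t1), [0.0, t0/2],  # exact seed q = 0, p = t/2
                rtol=1e-10, atol=1e-12, dense_output=True)

ts = np.linspace(t0, t1, 400)
qs, ps = sol.sol(ts)
hs = 0.5*ps**2 - (qs**2 + ts/2)*ps - b*qs    # h-function along the orbit

# h(t) is monotone decreasing here; invert it by interpolation
p_of_h = interp1d(hs[::-1], ps[::-1], kind='cubic')

print(float(p_of_h(-1.5)))                   # expect sqrt(3) = 1.7320508...
```

The same recipe applies to any seed for which the $h$-function is locally monotone, which is the generic situation assumed in the text.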
\section{Conjugate equations of Painlev\'e I and IV}
Let $P_{I}$ denote the first Painlev\'e equation, namely
\begin{equation}\label{P1_q}
P_I:~~ \f{d^2 q}{dt^2}=6q^2+t,~~~~t,q\in\mathbb{C}.\\
\end{equation}
$P_{I}$ possesses the Hamiltonian $H$, where
\begin{equation} \label{P1_H}
H(p,q,t)=\f{1}{2}p^2-2q^3-tq.
\end{equation}
Indeed, the associated Hamilton's equations are
\begin{subequations}\label{Ham_P1}
\begin{align}
\label{Ham_P1_p}& \f{dp}{dt}=-\f{\pa H}{\pa q}=6q^2+t,\\
\label{Ham_P1_q}& \f{dq}{dt}=\f{\pa H}{\pa p}=p.
\end{align}
\end{subequations}
Eliminating from equations \eqref{Ham_P1} the variable $p$ we find $P_I$:
$$\f{d^2 q}{dt^2}=6q^2+t.$$
Eliminating from equations \eqref{Ham_P1} the variable $q$ we find $\widetilde{P_I}$:
\begin{equation}\label{P1_p}
\widetilde{P_I}:~~\f{d^2 p}{dt^2}=2p\Big(6\f{dp}{dt}-6t\Big)^{1/2}+1,~~~~t,p\in\mathbb{C}.
\end{equation}
Indeed,
$$\f{d^2 p}{dt^2}=12qp+1.$$
Replacing in this equation $q$ from equation \eqref{Ham_P1_p}, we find equation \eqref{P1_p}.
\begin{proposition}
The conjugate equations of equations $P_I$ and of $\widetilde{P_I}$,
i.e., the conjugate equations of equations \eqref{P1_q} and
\eqref{P1_p}, are the following ODEs:
\begin{subequations}\label{CP_1}
\begin{align}
CP_I:~~\label{CP1_q}&\f{d^2 q}{dh^2}=-\f{1}{2q}\Big(\f{dq}{dh}\Big)^2+4-\f{h}{q^3},~~~h,q\in\mathbb{C},\\
\widetilde{CP_I}:~~\label{CP1_p}&\f{d^2 p}{dh^2}=\f{2hp-p^3}{F(p',p,h)^4}+\f{1-pp'}{F(p',p,h)^2}+\f{4p}{F(p',p,h)},~~~h,p\in\mathbb{C},
\end{align}
\end{subequations}
where $F$ is a solution of the following equation $$4F^3+p'F^2+\f{1}{2}p^2-h=0,~~~h,p,F\in\mathbb{C}.$$
Equations \eqref{CP_1} possess the Hamiltonian $T$, where
\begin{equation}\label{CH1_T}
T(p,q,h)=\f{1}{2}\f{p^2}{q}-2q^2-\f{h}{q}.
\end{equation}
\end{proposition}
\paragraph{Proof.}
The solution of the equation
\begin{equation}
h=\f{1}{2}p^2-2q^3-tq,
\end{equation}
yields
\begin{equation}
t=T(p,q,h),
\end{equation}
where the function $T$ denotes the RHS of equation \eqref{CH1_T}. The associated Hamilton's equations are:
\begin{subequations}\label{CH1_PQ}
\begin{align}
&\f{dp}{dh}=\f{\pa T}{\pa q}=-\f{1}{2}\f{p^2}{q^2}-4q+\f{h}{q^2},\\
&\f{dq}{dh}=-\f{\pa T}{\pa p}=-\f{p}{q}.
\end{align}
\end{subequations}
Eliminating from equations \eqref{CH1_PQ} the function $p$ we find,
$$\f{d^2 q}{dh^2}=-\f{1}{2q}\Big(\f{dq}{dh}\Big)^2+4-\f{h}{q^3},$$
which is equation \eqref{CP1_q}. Similarly, eliminating from equations \eqref{CH1_PQ} the function $q$ we find equation \eqref{CP1_p}.
\begin{flushright} $\square$ \end{flushright}
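The two identities behind this proof can be confirmed with sympy (an illustrative sketch):

```python
# Verify H(p, q, T(p, q, h)) = h for Painleve I and that eliminating p
# from (CH1_PQ) reproduces equation (CP1_q).
import sympy as sp

p, q, t, h = sp.symbols('p q t h')

H = sp.Rational(1, 2)*p**2 - 2*q**3 - t*q
T = sp.Rational(1, 2)*p**2/q - 2*q**2 - h/q
assert sp.simplify(H.subs(t, T) - h) == 0

ph = -sp.Rational(1, 2)*p**2/q**2 - 4*q + h/q**2   # dp/dh
qh = -p/q                                          # dq/dh

# second derivative of q along the flow of (CH1_PQ)
qhh = qh.diff(h) + qh.diff(p)*ph + qh.diff(q)*qh
assert sp.simplify(qhh - (-qh**2/(2*q) + 4 - h/q**3)) == 0
print('conjugate Painleve I verified')
```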
Let $P_{I\!V}$ denote the fourth Painlev\'e equation, namely
\begin{equation}\label{P4_q}
P_{I\!V}:~~\f{d^2 q}{dt^2}=\f{1}{2q}\Big(\f{d q}{dt}\Big)^2+\f{3}{2}q^3+2tq^2+\Big(\f{t^2}{2}+a_1+2a_2-1\Big)q-\f{a_1^2}{2q}, ~t,q\in\mathbb{C},
\end{equation}
where $a_1,a_2$ are arbitrary complex constants.
$P_{I\!V}$ possesses the Hamiltonian $H$, where
\begin{equation}\label{P4_H}
H(p,q,t)=qp(p-q-t)-a_2q-a_1p.
\end{equation}
The associated Hamilton's equations are
\begin{subequations}\label{Ham_P4}
\begin{align}
\label{Ham_P4_p}& \f{dp}{dt}=-\f{\pa H}{\pa q}=-p^2+2pq+pt+a_2,\\
\label{Ham_P4_q}& \f{dq}{dt}=\f{\pa H}{\pa p}=-q^2+2pq-qt-a_1.
\end{align}
\end{subequations}
Eliminating from equations \eqref{Ham_P4} the variable $p$ we find $P_{I\!V}$:
$$\f{d^2 q}{dt^2}=\f{1}{2q}\Big(\f{d q}{dt}\Big)^2+\f{3}{2}q^3+2tq^2+\Big(\f{t^2}{2}+a_1+2a_2-1\Big)q-\f{a_1^2}{2q}.$$
Eliminating from equations \eqref{Ham_P4} the variable $q$ we find $\widetilde{P_{I\!V}}$:
\begin{equation}\label{P4_p}
\widetilde{P_{I\!V}}:~~\f{d^2 p}{dt^2}= \f{1}{2p}\Big(\f{d
p}{dt}\Big)^2+\f{3}{2}p^3-2tp^2+\Big(\f{t^2}{2}-2a_1-a_2+1\Big)p-\f{a_2^2}{2p},~~~t,p\in\mathbb{C}.
\end{equation}
Indeed,
$$\f{d^2 p}{dt^2}=-2p\f{dp}{dt}+2p(-q^2+2pq-qt-a_1)+2\f{dp}{dt}q+\f{dp}{dt}t+p.$$
Replacing in this equation $q$ from equation \eqref{Ham_P4_p}, we
find equation \eqref{P4_p}.
\begin{proposition}
The conjugate equations of equations $P_{I\!V}$ and of
$\widetilde{P_{I\!V}}$, i.e., the conjugate equations of equations
\eqref{P4_q} and \eqref{P4_p}, are the following ODEs:
\begin{subequations}\label{CP_4}
\begin{align}
\label{P4_C_q}
\begin{split}CP_{I\!V}:~~
\f{d^2 q}{dh^2}=\f{1+q'}{hq+a_2 q^2}\big(q-2q^2(1+q')G_1+&2(h+a_1G_1)+q'(h+2a_1G_1)\big),\\
&h,q\in\mathbb{C},
\end{split}\\
\label{P4_C_p}
\begin{split}
\widetilde{CP_{I\!V}}:~~
\f{d^2 p}{dh^2}=\f{1+p'}{hp+a_1 p^2}\big(p+2p^2(1+p')G_2+&2(h+a_2G_2)+p'(h+2a_2G_2)\big),\\
&h,p\in\mathbb{C},
\end{split}
\end{align}
\end{subequations}
where $G_1=\Big(-\f{h+a_2q}{q+qq'}\Big)^{1/2},~G_2=\Big(\f{h+a_1p}{p+pp'}\Big)^{1/2}.$
Equations \eqref{CP_4} possess the Hamiltonian $T$, where
\begin{equation}\label{CH4_T}
T(p,q,h)=p-q-\f{a_2}{p}-\f{a_1}{q}-\f{h}{pq}.
\end{equation}
\end{proposition}
\paragraph{Proof.}
The solution of the equation
\begin{equation}
h=qp(p-q-t)-a_2q-a_1p,
\end{equation}
yields
\begin{equation}
t=T(p,q,h),
\end{equation}
where the function $T$ denotes the RHS of equation \eqref{CH4_T}. The associated Hamilton's equations are:
\begin{subequations}\label{CH4_PQ}
\begin{align}
&p'=\f{\pa T}{\pa q}=\f{h}{q^2p}+\f{a_1}{q^2}-1,\\
&q'=-\f{\pa T}{\pa p}=-\f{h}{p^2q}-\f{a_2}{p^2}-1.
\end{align}
\end{subequations}
Eliminating from equations \eqref{CH4_PQ} the function $p$ we find equation \eqref{P4_C_q}.
Similarly, eliminating from equations \eqref{CH4_PQ} the function $q$ we find equation \eqref{P4_C_p}.
\begin{flushright} $\square$ \end{flushright}
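As in the previous cases, the statement can be confirmed symbolically; a sympy sketch:

```python
# Verify H(p, q, T(p, q, h)) = h for Painleve IV and that T generates
# Hamilton's equations (CH4_PQ).
import sympy as sp

p, q, t, h, a1, a2 = sp.symbols('p q t h a1 a2')

H = q*p*(p - q - t) - a2*q - a1*p
T = p - q - a2/p - a1/q - h/(p*q)
assert sp.simplify(H.subs(t, T) - h) == 0

assert sp.simplify(T.diff(q) - (h/(q**2*p) + a1/q**2 - 1)) == 0    # p' = dT/dq
assert sp.simplify(-T.diff(p) - (-h/(p**2*q) - a2/p**2 - 1)) == 0  # q' = -dT/dp
print('conjugate Painleve IV verified')
```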
\begin{remark}
Every Hamiltonian has the gauge freedom $\ti{H}=H+f(t)$, where
$f(t)$ is an arbitrary function of $t$. This implies that we can associate infinitely
many ODEs with each Painlev\'e equation. Among these, the ODEs presented here are expected to have
the simplest form.
\end{remark}
\begin{remark}
We note that conjugate Painlev\'e equations are of the form
$y''=F(y,y',t)$, where $F$ is algebraic in $y,y'$. The corresponding
conjugate Hamiltonian systems are of the form
\begin{subequations}
\begin{align}
p'=F_1(p,q,h),\\
q'=F_2(p,q,h),
\end{align}
\end{subequations}
where $F_1$ and $F_2$ are rational in $p,q.$
\end{remark}
\section{Lax pairs for conjugate equations}
The following proposition provides a method for constructing Lax pairs for conjugate Painlev\'e equations.
\begin{proposition}\label{Lax pair}
An explicit Lax pair for the Hamiltonian form of any Painlev\'e
equation leads to an explicit Lax pair for the Hamiltonian form of the
corresponding conjugate Painlev\'e equation. The relevant
construction involves the following steps:
\begin{enumerate}
\item [(i)] Substitute the $t-$function into the Lax pair of a given Painlev\'e equation, so that the new independent variables become $\la$ and $h$ (instead of $\la$ and $t$).
\item [(ii)] Replace in the resulting Lax pair the unknown functions by the associated explicit functions of $(p,q,h)$.
\end{enumerate}
\end{proposition}
\paragraph{Proof.}
Let $H(p,q,t)$ be a Hamiltonian of a given Painlev\'e equation, i.e., the given Painlev\'e equation is equivalent to
Hamilton's equations
\begin{equation}
\label{Hamilton} \f{dp}{dt}=-\f{\pa H}{\pa q},~~~\f{dq}{dt}=\f{\pa
H}{\pa p}.
\end{equation}
Let $T(p,q,h)$ be the associated conjugate Hamiltonian, i.e., the associated
conjugate Painlev\'e equation is equivalent to Hamilton's
equations
\begin{equation}
\label{CHamilton} \f{dp}{dh}=\f{\pa T}{\pa q},~~~\f{dq}{dh}=-\f{\pa
T}{\pa p}.
\end{equation}
Suppose equations \eqref{Hamilton} admit the following Lax pair:
\begin{subequations}\label{LAX pair general HAM}
\begin{align}
&\f{\pa \psi}{\pa \la}(\la,t)=A(p(t),q(t),t,\la)\psi(\la,t),\\
&\f{\pa \psi}{\pa t}(\la,t)=B(p(t),q(t),t,\la)\psi(\la,t),~~~~~~~~\la\in\mathbb{C},
\end{align}
\end{subequations}
where $A$ and $B$ are two known $k\times k$ matrix-valued functions
of $(p,q,t)$, and the function $\psi$ is a $k\times k$
matrix-valued function of $\la$ and $t$ (for some $k>1$). Equations \eqref{LAX pair general HAM} imply Lax's equation
\begin{equation} \label{LAX_General}
\pa_t A-\pa_{\la}B+[A,B]=0, \footnote{We mention that $\pa_t
A=\f{\pa A}{\pa p}\f{dp}{dt}+\f{\pa A}{\pa q}\f{dq}{dt}+\f{\pa
A}{\pa t}$.}
\end{equation}
where $[~\cdot~,~\cdot~]$ denotes the usual matrix commutator.
Let $h=h(t)$ denote the $h-$function and let $t=t(h)$ denote the
$t-$function, which is the inverse of the $h-$function. Let
$\phi(\la,h)=\psi(\la,t(h))$. Replacing in equations \eqref{LAX pair general HAM} $\psi$ by $\phi$, we find
\begin{subequations}\label{LAX pair general trans}
\begin{align}
&\f{\pa \phi}{\pa \la}(\la,h)=A(p(t(h)),q(t(h)),t(h),\la)\phi(\la,h),\\
&\f{\pa \phi}{\pa h}(\la,h)=\f{dt}{dh}
B(p(t(h)),q(t(h)),t(h),\la)\phi(\la,h).
\end{align}
\end{subequations}
Both the $h-$function and the $t-$function are unknown
functions (the knowledge of these functions requires solving
Hamilton's equations). However, by the definition of the
conjugate Hamiltonian, we have
\begin{equation}\label{a}
t(h)=T(p(t(h)),q(t(h)),h).
\end{equation}
Moreover, the conjugate Hamiltonian structure \eqref{CHamilton}
implies
\begin{equation}\label{b}
\f{dt}{dh}=\f{\pa T}{\pa h}.
\end{equation}
Using in equations \eqref{LAX pair general trans} equations \eqref{a} and
\eqref{b} to replace the unknown functions in terms of explicit
functions, we obtain the following Lax pair for equations
\eqref{CHamilton}:
\begin{subequations}\label{LAX pair final}
\begin{align}
&\f{\pa \phi}{\pa \la}=A(p,q,T(p,q,h),\la)\phi,\\
&\f{\pa \phi}{\pa h}=\f{\pa T}{\pa h} B(p,q,T(p,q,h),\la)\phi.
\end{align}
\end{subequations}
Indeed, Lax's equation reads:
\begin{equation}\label{LAX_Gen_C}
\pa_h A-\f{\pa T}{\pa h} \pa_{\la}B + \f{\pa T}{\pa h} [A, B]=0.
\end{equation}
Noting that $$\pa_h=\f{dt}{dh}\pa_t =\f{\pa T}{\pa h} \pa_t,$$ we
find that equation \eqref{LAX_Gen_C} is just equation
\eqref{LAX_General} using $h$ as the independent variable.
\begin{flushright}$\square$\end{flushright}
\begin{example} (A Lax pair for $CP_I$ and $\widetilde{CP_I}$)
Recall the Jimbo-Miwa pair \cite{JM}\cite{Kita} for Hamilton's equations \eqref{Ham_P1} of Painlev\'e I:
\begin{subequations}\label{Lax_P1_JM}
\begin{align}
\f{\pa \psi}{\pa \la}&=\left\{\left(
\begin{array}{llcl}
-p & q^2+t/2\\
-4q & p\\
\end{array}
\right)+\la\left(
\begin{array}{llcl}
0 & q\\
4 & 0\\
\end{array}
\right)+\la^2\left(
\begin{array}{llcl}
0 & 1\\
0 & 0\\
\end{array}
\right)\right\}
\psi,\\
\f{\pa \psi}{\pa t}&=\left\{\left(
\begin{array}{llcl}
0 & q\\
2 & 0\\
\end{array}
\right)+\la\left(
\begin{array}{llcl}
0 & 1/2\\
0 & 0\\
\end{array}
\right)\right\}
\psi.
\end{align}
\end{subequations}
Replacing in equations \eqref{Lax_P1_JM} $t$ by $$\f{1}{2}\f{p^2}{q}-2q^2-\f{h}{q}$$ and using
$$\f{\pa T}{\pa h}=-\f{1}{q},$$
we obtain the following Lax pair for $CP_I$ and $\widetilde{CP_I}$:
\begin{subequations}\label{Lax_CP1_JM}
\begin{align}
\f{\pa \psi}{\pa \la}&=\left\{\left(
\begin{array}{llcl}
-p & \f{p^2}{4q}-\f{h}{2q}\\
-4q & p\\
\end{array}
\right)+\la\left(
\begin{array}{llcl}
0 & q\\
4 & 0\\
\end{array}
\right)+\la^2\left(
\begin{array}{llcl}
0 & 1\\
0 & 0\\
\end{array}
\right)\right\}
\psi,\\
\f{\pa \psi}{\pa h}&=-\f{1}{q}\left\{\left(
\begin{array}{llcl}
0 & q\\
2 & 0\\
\end{array}
\right)+\la\left(
\begin{array}{llcl}
0 & 1/2\\
0 & 0\\
\end{array}
\right)\right\}
\psi.
\end{align}
\end{subequations}
This Lax pair can be verified directly.
\end{example}
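As a hedged illustration of such a direct verification, the following Python sketch checks Lax's equation for the pair \eqref{Lax_CP1_JM} numerically at an arbitrary sample point: the total $h$-derivative of the first matrix is taken along the flow $p'=\pa T/\pa q$, $q'=-\pa T/\pa p$ of the conjugate Hamiltonian $T=\f{1}{2}\f{p^2}{q}-2q^2-\f{h}{q}$, using central finite differences.

```python
# Numerical check (illustrative) of the conjugate Lax pair for CP_I:
# along the flow of T = p^2/(2q) - 2q^2 - h/q the residual
# dA/dh - dB/dlam + [A, B] should vanish.

def A(p, q, h, lam):
    # entry (1,2) is q^2 + T/2 = p^2/(4q) - h/(2q) after the substitution t -> T
    return [[-p,            p**2/(4*q) - h/(2*q) + lam*q + lam**2],
            [-4*q + 4*lam,  p]]

def B(p, q, h, lam):
    return [[0.0,  -(1/q)*(q + lam/2)],
            [-2/q, 0.0]]

def mul(X, Y):
    return [[sum(X[i][k]*Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def lax_residual(p, q, h, lam, eps=1e-6):
    # conjugate Hamilton's equations: p' = dT/dq, q' = -dT/dp
    dp = -p**2/(2*q**2) - 4*q + h/q**2
    dq = -p/q
    # total h-derivative of A along the conjugate flow (central differences)
    Ap = A(p + eps*dp, q + eps*dq, h + eps, lam)
    Am = A(p - eps*dp, q - eps*dq, h - eps, lam)
    dA = [[(Ap[i][j] - Am[i][j])/(2*eps) for j in range(2)] for i in range(2)]
    dB = [[(B(p, q, h, lam + eps)[i][j] - B(p, q, h, lam - eps)[i][j])/(2*eps)
           for j in range(2)] for i in range(2)]
    A0, B0 = A(p, q, h, lam), B(p, q, h, lam)
    AB, BA = mul(A0, B0), mul(B0, A0)
    return max(abs(dA[i][j] - dB[i][j] + AB[i][j] - BA[i][j])
               for i in range(2) for j in range(2))

print(lax_residual(1.3, 0.7, 0.4, 0.9))  # zero up to finite-difference error
```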
\section{Conclusions}
We have introduced a novel class of integrable ODEs, which are
related to $P_{I}$, $P_{I\!I}$, $P_{I\!V}$ and $\widetilde{P_{I}}$,
$\widetilde{P_{I\!I}}$ and $\widetilde{P_{I\!V}}$. The relation
between the new ODEs and the Painlev\'e equations is
\textit{implicit}. We recall that there exist analogous implicit
relations among integrable PDEs, namely the relations derived via the
so-called hodograph transformations. For example, the celebrated
Korteweg-de Vries and Harry-Dym equations are related by precisely
such a transformation \cite{CFA}.
Hodograph-type transformations do \textit{not} preserve the
Painlev\'e property (for example, solutions of the Harry-Dym
equation do \textit{not} possess this property \cite{CFA}).
Similarly, we do not expect the conjugate Painlev\'e equations to
possess the Painlev\'e property. Nevertheless, these equations
\textit{are} integrable. Indeed, it is possible to construct a large
class of solutions of the conjugate equations. Furthermore, in
principle, it is possible to express the solution of the general
initial value problem in terms of the solutions of the initial value
problem of the associated Painlev\'e equation. However, the most
efficient way to solve the initial value problem of a given
conjugate ODE, is to use its associated Lax pair. For the conjugate
equations of $P_{I\!I}$ and $P_{I}$, relevant Lax pairs are given by
equations \eqref{Lax_CP2} and \eqref{Lax_CP1_JM}. For other
conjugate ODEs, similar Lax pairs can be constructed using
Proposition \ref{Lax pair}.
Taking into consideration the relation between the implicit
transformations discussed here and hodograph type transformations,
it is natural to expect that the ODEs introduced here might appear
as ODE reductions of integrable PDEs (such as the Harry-Dym
equation), which are related to well known integrable PDEs (such as
the Korteweg-de Vries equation) via hodograph transformations.
\paragraph{Acknowledgements}
D. Yang would like to thank Professor Youjin Zhang for his advice
and for helpful discussions, as well as the China Scholarship
Council for supporting him for a joint PhD study at the University
of Cambridge. A. S. Fokas is grateful to the Guggenheim Foundation,
USA, for partial support.
\bibliographystyle{amsplain}
\label{Introduction}
Crystallographic methods of structure solution are the gold-standard for determining atomic arrangements in crystals including, in the absence of single crystals, structure solution from powder diffraction data~\cite{pecha;b;fopdascom05,david;b;sdfpdd02}. Here we show that crystal structure solution is also possible from experimentally determined atomic pair distribution functions (PDF) using the Liga algorithm that was developed for nanostructure determination~\cite{juhas;n06,juhas;aca08}. The PDF is the Fourier transform of the properly normalized intensity data from an isotropically scattering sample such as a glass or a crystalline powder. It is increasingly used as a powerful way to study atomic structure in nanostructured materials~\cite{billi;jssc08,egami;b;utbp03}. Such nanostructures do not scatter with well defined Bragg peaks and are not amenable to crystallographic analysis~\cite{billi;s07}, but refinements of models to PDF data yield quantitatively reliable structural information~\cite{proff;jac99,farro;jpcm07,tucke;jpcm07}. Recently \emph{ab initio} structure solution was demonstrated from PDF data of small elemental clusters~\cite{juhas;n06}. Here we show that these methods can be extended to solve the structure of a range of crystalline materials.
Whilst it is unlikely that this kind of structure solution will replace crystallographic methods for well-ordered crystals, this work demonstrates both that structure solution from PDF data can be extended to compounds, and that robust structure solutions are possible from the experimentally determined PDFs of a wide range of materials. We also note that there may be an application for this approach when the space-group of the crystal is not known, as the Liga algorithm does not make use of such symmetry information. In fact, the space group can be determined afterwards by analyzing the symmetry of the solved electron density map \cite{palat;jac08}.
However, this approach is promising for the case where the local
structure deviates from the average crystallographic structure, as has
been observed in a number of complex crystals, for example the
magnetoresistive La$_{1-x}$Ca$_x$MnO$_3$ system~\cite{qiu;prl05,bozin;pb06} or ferroelectric lead-based perovskites~\cite{dmows;jpcs00,juhas;prb04}.
The PDF contains this local information due to the inclusion of diffuse scattering intensities in the Fourier transform and it is possible to focus the modeling on a
specific length-scale when searching for matching structure models, allowing in principle structure solutions of local, intermediate, and long-range order to be obtained separately.
The procedure assumes a periodic system with known lattice parameters and
stoichiometry, but requires no information on the location or symmetry of the
atom sites in the unit cell. To solve the unit cell structure the technique
constructs a series of trial clusters using the PDF-extracted
distances. The tested structures are created by a direct use of the
distance information in the experimental data, which gives the method
significantly better performance than procedures that search by
random structure updates, such as Monte Carlo based minimization schemes~\cite{juhas;n06,juhas;aca08}.
\section{Experimental procedures}
\label{ExperimentalProcedures}
The extended Liga procedure has been tested with
experimental x-ray PDFs collected from inorganic test materials.
Powder samples of Ag, BaTiO$_{3}$, C-graphite, CaTiO$_3$, CdSe,
CeO$_2$, NaCl, Ni, PbS, PbTe, Si, SrTiO$_3$, TiO$_2$ (rutile),
Zn, ZnS (sphalerite) and ZnS (wurtzite) were obtained from commercial
suppliers. Samples were ground in an agate mortar to decrease their
crystallite size and improve powder averaging.
The experimental PDFs were measured using synchrotron
x-ray diffraction at the 6ID-D beamline of the Advanced Photon Source,
Argonne National Laboratory using the x-ray energies of 87 and 98~keV.
The samples were mounted using a thin kapton tape in a circular,
10~mm hole of a 1~mm thick flat plate holder, which was positioned
in transmission geometry with respect to the beam. The x-ray data were
measured using the ``Rapid Acquisition'' (RA-PDF) setup, where the
diffracted intensities were scanned by a MAR345 image plate
detector, placed about 20~cm behind the sample~\cite{chupa;jac03}.
All measurements were performed at room temperature.
The raw detector images were integrated using the Fit2D program
\cite{hamme;esrf98} and reduced to standard intensity
vs.\ $2\theta$ powder data. The integrated data were then
converted by the PDFgetX2 program~\cite{qiu;jac04i} to experimental PDFs.
The conversion to PDF was conducted with corrections for Compton
scattering, polarization and fluorescence effects, as available in
the PDFgetX2 program. The maximum value of the scattering
wavevector $Q_{\max}$ ranged from 19~Å$^{-1}$ to 29~Å$^{-1}$,
based on a visual inspection of the noise in the $F(Q) = Q [S(Q) - 1]$
curves.
The PDF function $G(r)$ was obtained by a Fourier transformation
of $F(Q)$,
\begin{equation}
\label{eq;sqtogr}
G(r) = \frac{2}{\pi}\int_{Q_{\min}}^{Q_{\max}} F(Q) \sin Qr \> \mathrm{d}Q,
\end{equation}
and provided a scaled measure of the probability of finding a pair of atoms
separated by distance $r$
\begin{equation}
\label{eq;grassum}
G(r) = \frac{1}{N r \langle f \rangle^2}
\sum_{i \neq j} f_i f_j \delta(r - r_{ij}) - 4 \pi \rho_0 r.
\end{equation}
The $G(r)$ function has the convenient property that its peak amplitudes
and standard deviations remain essentially constant with $r$, which makes it
suitable for curve fitting. A detailed discussion of the PDF theory,
data acquisition and applications for structure analysis can be found
in~\cite{egami;b;utbp03,farro;aca09}.
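As an illustration of equation \eqref{eq;grassum}, the following Python sketch evaluates $G(r)$ for a single-element f.c.c.\ structure (so $f_i=f_j=\langle f\rangle$), with the delta functions broadened into Gaussians; the lattice constant is an assumed, Ni-like value:

```python
import math
from itertools import product

a = 3.52                     # assumed f.c.c. lattice constant (Ni-like), in angstroms
basis = [(0, 0, 0), (0, .5, .5), (.5, 0, .5), (.5, .5, 0)]
cells = list(product(range(-1, 2), repeat=3))
atoms = [((i + x)*a, (j + y)*a, (k + z)*a)
         for (i, j, k) in cells for (x, y, z) in basis]
ref = [(x*a, y*a, z*a) for (x, y, z) in basis]   # reference atoms in the central cell

# all pair distances from the reference atoms, self-pairs excluded
dists = sorted(math.dist(u, v) for u in ref for v in atoms if u != v)

# Eq. (2) with f_i = f_j = <f>; delta functions broadened into Gaussians
sigma, rho0 = 0.1, len(basis)/a**3
def G(r):
    peaks = sum(math.exp(-(r - d)**2/(2*sigma**2)) for d in dists)
    return peaks/(sigma*math.sqrt(2*math.pi)*len(basis)*r) - 4*math.pi*rho0*r

print(round(min(dists), 3))   # 2.489, the nearest-neighbor distance a/sqrt(2)
```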
\section{Methods}
\label{Methods}
The structure solution procedure was carried out in three
separate steps, as described in the sections below.
The first step consists of peak search and profile fitting in
the experimental PDF to identify prominent inter-atomic distances up to
a cutoff distance $d_{\mathit{cut}}$.
We have developed an automated
peak extraction method which eases this task. In the second step these
distances are used as inputs for the Liga algorithm, which searches for
unit cell positions that give the structure with the best match in pair
lengths. If the sample contains several chemical species, a final ``coloring'' step
is necessary to assign the proper atom species to the unit cell
sites. This can be done by making use of PDF peak amplitude information.
However, we have found that the coloring can also be solved by optimizing the
overlap of the empirical atom radii at the neighboring sites, which is
simpler to implement and works with greater reliability.
To verify the quality and uniqueness of the structure, the Liga
algorithm has been run for each sample multiple (at least 10) times
with the same inputs, but different seeds of the random number generator. For most
samples the resulting structures were all equivalent, but sometimes
the program gave several geometries with similar agreement to
the PDF-extracted pair distances. In all these cases the correct
structure could be resolved in the coloring step, where it displayed a
significantly lower atom radii overlap and converged to the known structure
solution. A small number of structures could not be solved by this process;
the reasons for failure are discussed below.
\subsection{Extraction of pair distances from the experimental PDF}
In the PDF frequent
pair distances generate sharp peaks in the measured $G(r)$ curve
with amplitudes following Equation~(\ref{eq;grassum}). The peaks
are broadened
to approximately Gaussian shape that reflects atom thermal vibrations
and limited experimental resolution. Additional broadening and
oscillations are introduced to the PDF due to the maximum wavevector
$Q_{\max}$ that can be achieved in the measurement. This cutoff in
$Q_{\max}$ in effect convolutes ideal peak profiles with a
sinc function $\sin(Q_{\max} r) / r$ thus creating satellite
termination ripples.
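This truncation effect can be reproduced in a few lines: the Python sketch below (grid steps, peak parameters and $Q_{\max}$ are illustrative) sine-transforms a Gaussian peak, truncates at $Q_{\max}$, and transforms back, which broadens the peak and surrounds it with negative termination ripples.

```python
import math

r0, sigma, qmax = 2.0, 0.1, 10.0      # peak position, width, Q cutoff (illustrative)
dr, dq = 0.01, 0.05
rs = [i*dr for i in range(1, int(6.0/dr))]
qs = [j*dq for j in range(1, int(qmax/dq) + 1)]

g_ideal = [math.exp(-(r - r0)**2/(2*sigma**2)) for r in rs]

# sine transform to Q-space, truncate at qmax, and transform back
fq = [dr*sum(g*math.sin(q*r) for g, r in zip(g_ideal, rs)) for q in qs]
g_obs = [(2/math.pi)*dq*sum(f*math.sin(q*r) for f, q in zip(fq, qs)) for r in rs]

# the recovered peak is lower and broader, with negative satellite ripples
print(min(g_obs) < -0.01, max(g_obs) < max(g_ideal))  # True True
```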
Recovering the underlying peaks from the PDF is not trivial. The
experimental curve can have false peaks due to termination ripples.
Nearby peaks can overlap and produce complicated profiles that are
difficult to decompose. To simplify the process of extracting
inter-atomic distances we have developed an automated method for peak
fitting that adds peak profiles to fit the data to some user-defined
tolerance, while using as few peaks as possible to avoid over-fitting. This
method grows peak-like clusters of data points while fitting one or more
model peaks to each cluster. Adjacent clusters iteratively combine
until there is a single cluster with a model that fits the entire data
set. This allows a steady growth in model complexity by progressively
refining earlier and less accurate models. Furthermore, most adjustable
parameters can be estimated, in principle, from experimental knowns. A
full description of the peak extraction method will be presented in a
future paper.
The present work uses the simplest model for peaks, fitting the $G(r)$
data with Gaussian peaks over $r$ and using an assumed value of $\rho_{0}$. This
model ignores the effect of termination ripples, but for our data the
spurious peaks due to these ripples were usually identifiable by
their small size. Furthermore, the Liga algorithm is not required to
use every distance it is given, and should exhibit a limited tolerance
of faulty distances. The peak fitting procedure returns positions,
widths and integrated areas of the extracted peaks, of which only the
peak positions were used for structure determination.
The peak extraction procedure was implemented in Mathematica 6
and tested on both the experimental and
simulated data. A typical runtime was about 5 minutes.
Since the structures are known, the results of the peak extraction can be
compared with the expected ones. For both experimental and simulated PDFs of the tested structures
the extracted distances agreed qualitatively well with the ideal distances up to
$\sim$10-15~Å, including accurate identification of some
obscured peaks. Past that range the number of distinct, but very close,
distances in the actual structure is so great that reliable peak
extraction is much more difficult. For this reason we only performed
peak extraction up to 10~Å before running the trials
described in Section~\ref{Results}. Apart from removing peaks below a noise
threshold in order to filter termination ripples out, and one difficult
peak in the graphite data, all distances used in the structure solution
trials below come directly from the peak extraction method.
\subsection{Unit cell reconstruction using the Liga algorithm}
In the second step the Liga algorithm searches for the atom
positions in the unit cell that make the best agreement
to the extracted pair distances. The quality of distance match
is expressed by cost $C_d$ defined as a mean square difference between
observed and modeled pair distances.
\begin{equation}
\label{eq;ligacost}
C_d = \frac{1}{P} \sum_{d_k < d_{\mathit{cut}}}
\left( t_{k,\mathit{near}} - d_k \right)^2
\end{equation}
The index $k$ goes over all pair distances $d_k$ in the model that are
shorter than the cutoff length $d_{\mathit{cut}}$ and compares them with
the nearest observed distance $t_{k,\mathit{near}}$, while $P$ is the
number of model distances. This cost definition considers
only distance values as extracted from the PDF peak positions,
and ignores their relative occurrences.
For multi-component systems there is in fact no
straightforward way of extracting distance multiplicities,
because it is not known what atom pairs are present in each PDF peak.
Nevertheless, the cost definition still imposes strict
requirements on the model structure, as displayed in
Fig.~\ref{fig2dLattice}. A
site in the unit cell must be at a good, matching distance not
only from all other cell sites, but also from all of their translational
images within the cutoff radius.
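A minimal sketch of the cost evaluation in equation \eqref{eq;ligacost} is given below (the distance values are illustrative; in the full algorithm the list of model distances also includes pairs with translational images within $d_{\mathit{cut}}$):

```python
from bisect import bisect_left

def liga_cost(model_dists, extracted_dists, dcut):
    """Mean-square mismatch C_d between each model pair distance below dcut
    and the nearest PDF-extracted distance."""
    targets = sorted(extracted_dists)
    def nearest(d):
        i = bisect_left(targets, d)
        return min(targets[max(i - 1, 0):i + 1], key=lambda t: abs(t - d))
    below = [d for d in model_dists if d < dcut]
    return sum((nearest(d) - d)**2 for d in below) / len(below)

# toy example: three model distances against three extracted peak positions
print(round(liga_cost([2.51, 3.49, 4.40], [2.49, 3.52, 4.31], 10.0), 6))  # 0.003133
```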
\begin{figure}
\includegraphics[clip=true]{figures/fig2dLattice}
\caption{Schematic calculation of the distance cost $C_d$. To achieve
low $C_d$ a unit cell site needs to be at a correct distance from
other cell sites and from their translational images.
}
\label{fig2dLattice}
\end{figure}
To find an optimum atom position in the unit cell structure the
Liga algorithm uses input pair distances in an iterative build-up and
disassembly of partial structures~\cite{juhas;n06}.
The procedure maintains a pool of partial unit cell structures at each
possible size from a single atom up to a complete unit cell.
These ``candidate clusters'' are assigned to ``divisions''
according to the number of sites they contain, therefore there are
as many divisions as is the number of atoms in a complete unit cell.
The structures at each division compete
against each other in a stochastic process, where the probability of
winning is proportional to the reciprocal distance cost $C_d$, and a win is thus more
likely for low-cost structures. The winning cluster is selected for
``promotion,''
where it adds one or more atoms to the
structure and thus advances to a higher division. At the new division a
poorly performing high-cost candidate is ``relegated'' to the original
division of the promoted structure, thus keeping the total number of
structures at each division constant. The relegation is accomplished
by removing cell sites that have the largest contributions to the
total cost of the structure. Both promotion and relegation
steps are followed by downhill relaxation of the worst site,
i.e., the site with the largest share of the total cost $C_d$.
The process of promotion and relegation is performed at every division
in a ``season'' of competitions. These seasons are repeated many
times until a full sized structure
attains sufficiently low cost or until a user-specified time
limit. A complete description of the Liga algorithm details
can be found in~\cite{juhas;aca08}.
\subsection{Atom assignment}
The Liga algorithm used in the structure solution step has no notion of
chemical species and therefore returns only coordinates of the atom
sites in the unit cell. For a multi-component system an
additional step, dubbed coloring, is necessary to assign
chemical elements over known cell sites. To assess the
quality of different assignments we have tested two definitions
for a cost of a particular coloring. The first method uses
a weighted residuum, $R_w$, from a least-squares PDF
refinement to the input PDF data~\cite{egami;b;utbp03}.
The PDF refinement was performed with a fully automated PDFfit2 script,
where the atom positions were all fixed and only the atomic displacement
factors, PDF scale factor and $Q$-resolution damping factor were
allowed to vary. The second procedure
defines coloring cost $C_c$ as an average overlap of the empirical
atomic radii, so that
\begin{equation}
\label{eq;coloringcost}
C_c = \frac{1}{N} \sum_{d_{k} < r_{k,1} + r_{k,2}}
\left( r_{k,1} + r_{k,2} - d_{k} \right)^2
\end{equation}
The index $k$ runs over all atom pairs considering periodic boundary
conditions,
$r_{k,1}$ and $r_{k,2}$ are the empirical radii values of
the first and second atom in the pair $k$, and $N$ is the number of
atoms in the unit cell.
Considering an $N$ atom structure with $s$ different atom species,
the total number of possible assignments is given by the multinomial
expression $N! / (n_1! \: n_2! \: \ldots \: n_s!)$\@.
For a 1:1 binary system the number of possible assignments tends to
$2^N$ with increasing $N$. Such exponential growth in possible
configurations makes it quickly impossible to compare them all in
an exhaustive way.
We have therefore employed a simple downhill search, which starts with
a random element assignment. The initial coloring cost $C_c$ is calculated
together with a cost change for every possible
swap of two atoms between unit cell sites. The site flip that
results in the largest decrease of the total coloring cost is accepted and all
cost differences are evaluated again. The site swap is then repeated
until a minimum configuration is achieved, where all site flips
increase the coloring cost. The downhill procedure was
verified by repeating it 5 times using different initial assignments.
In nearly all cases these runs converged to the same
atom configurations.
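The overlap cost of equation \eqref{eq;coloringcost} and the downhill swap search can be sketched as follows, assuming for brevity an orthorhombic cell with the minimum-image convention; the rock-salt geometry and radii below are illustrative values:

```python
import math
from itertools import product

def pair_distance(u, v, cell):
    # minimum-image distance between fractional coordinates u, v
    return min(math.sqrt(sum(((u[k] - v[k] + s[k])*cell[k])**2 for k in range(3)))
               for s in product((-1, 0, 1), repeat=3))

def coloring_cost(sites, kinds, radii, cell):
    # average squared overlap of empirical radii (self-image overlaps omitted;
    # normalization up to the pair-counting convention)
    n, cost = len(sites), 0.0
    for i in range(n):
        for j in range(i + 1, n):
            over = radii[kinds[i]] + radii[kinds[j]] - pair_distance(sites[i], sites[j], cell)
            if over > 0:
                cost += over**2
    return cost / n

def downhill_coloring(sites, kinds, radii, cell):
    # best-improvement descent over pairwise species swaps
    kinds = list(kinds)
    while True:
        cur, best = coloring_cost(sites, kinds, radii, cell), None
        for i in range(len(kinds)):
            for j in range(i + 1, len(kinds)):
                if kinds[i] == kinds[j]:
                    continue
                kinds[i], kinds[j] = kinds[j], kinds[i]
                c = coloring_cost(sites, kinds, radii, cell)
                kinds[i], kinds[j] = kinds[j], kinds[i]
                if c < cur and (best is None or c < best[0]):
                    best = (c, i, j)
        if best is None:
            return kinds
        _, i, j = best
        kinds[i], kinds[j] = kinds[j], kinds[i]

# rock-salt test cell with one deliberately swapped pair of atoms
cell = (5.64, 5.64, 5.64)
sites = [(0, 0, 0), (.5, .5, 0), (.5, 0, .5), (0, .5, .5),   # cation sublattice
         (.5, 0, 0), (0, .5, 0), (0, 0, .5), (.5, .5, .5)]   # anion sublattice
radii = {'Na': 1.02, 'Cl': 1.81}                             # illustrative ionic radii
kinds = ['Cl', 'Na', 'Na', 'Na', 'Na', 'Cl', 'Cl', 'Cl']     # sites 0 and 4 swapped
print(downhill_coloring(sites, kinds, radii, cell))
# ['Na', 'Na', 'Na', 'Na', 'Cl', 'Cl', 'Cl', 'Cl']
```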
The downhill procedure was performed using both definitions of the
coloring cost. For the coloring cost obtained by PDF fitting the
procedure was an order of magnitude slower and less reliable, as the
underlying PDF refinements could converge badly for poor atom assignments.
The second method, which calculated cost from radii-overlap, was
considerably
faster and more robust. For all tested materials, the overlap-based
coloring assigned all atoms correctly, when run on correct structure
geometry. The overlap cost was evaluated using either the covalent
radii by~\cite{corde;d08}
or the ionic radii from~\cite{shann;aca76} for more ionic
compounds. For some ions the Shannon table provides
several radii values depending on their coordination number or
spin state. Although these variants in ionic radii can vary by
as much as about 30\%, the choice of particular radius had no
effect on the best assignment for all studied structures.
\section{Results}
\label{Results}
The experimental x-ray PDFs were acquired from 16 test samples
with well-known crystal structures. To verify that the measured
PDF data were consistent with established structural results,
refinements of the known structures were carried out using the PDFgui program~\cite{farro;jpcm07}. The PDF fits were done with structure data
obtained from the Crystallography Open Database (COD)~\cite{grazu;jac09}.
The structure parameters were all kept constant in the refinements,
which modified only parameters related to the PDF extraction, such
as the PDF scale, the $Q$-resolution damping envelope and a small rescaling
of the lattice parameters. These refinements are summarized in
Table~\ref{tab;PDFrefinements}, where low values of the fitting residual
$R_w$ confirm good agreement between experimental PDFs and expected
structure results.
The PDF datasets were then subjected to the peak search, Liga structure
solution and coloring procedures as described above. To check the
stability of this method, several structures were solved using an enlarged
periodicity of 1$\times$1$\times$2, 1$\times$2$\times$2 or
2$\times$2$\times$2 super cells. The lattice parameters used in the
Liga crystallography step were obtained from the positions of the
nearest PDF peaks. In several cases, such as for BaTiO$_{3}$ where peak
search could not resolve tetragonal splitting, the cell parameters were
taken from the respective CIF reference, as listed in
Table~\ref{tab;PDFrefinements}.
The structure solution was considered successful if the found structure
displayed the same nearest neighbor coordination as its CIF reference and
no site was offset by more than 0.3~Å from its correct
position. The solution accuracy was evaluated by finding the best
overlay of the found structure to the reference CIF data. The
optimum overlay was obtained by an exhaustive search over all symmetry
operations defined in the CIF file and over all mappings of solved atom
sites to all reference sites containing the same element. The
overlaid structures were then compared for the differences in
fractional coordinates and for the root mean square distortion $s_r$ of the solved
sites from their correct positions. Table~\ref{tab;SolvedStructures}
shows a summary of these results for all tested structures.
\begin{table}
\caption{List of measured x-ray PDFs and their fitting residua
$R_w$ with respect to established structures from the literature.
}
\label{tab;PDFrefinements}
\begin{tabular}{lll}
\hline
sample & $R_w$ & CIF reference \\
\hline
Ag & 0.095 & \cite{wycko;bk63} \\
BaTiO$_{3}$ & 0.123 & \cite{megaw;ac62} \\
C (graphite) & 0.248 & \cite{wycko;bk63} \\
CaTiO$_{3}$ & 0.083 & \cite{sasak;acc87} \\
CdSe & 0.149 & \cite{wycko;bk63} \\
CeO$_{2}$ & 0.098 & \cite{wycko;bk63} \\
NaCl & 0.161 & \cite{jurge;ic00} \\
Ni & 0.109 & \cite{wycko;bk63} \\
PbS & 0.085 & \cite{ramsd;ami25} \\
PbTe & 0.070 & \cite{wycko;bk63} \\
Si & 0.085 & \cite{wycko;bk63} \\
SrTiO$_{3}$ & 0.143 & \cite{mitch;apa02} \\
TiO$_{2}$ (rutile) & 0.146 & \cite{meagh;canmin79} \\
Zn & 0.105 & \cite{wycko;bk63} \\
ZnS (sphalerite) & 0.102 & \cite{skinn;ami61} \\
ZnS (wurtzite) & 0.174$^1$ & \cite{wycko;bk63} \\
\hline
\multicolumn{3}{l}{
$^1$ refined as mixture of wurtzite and sphalerite phases
} \\
\hline
\end{tabular}
\end{table}
The procedure converged to a correct structure for 14 out of 16 studied
samples and failed to find the remaining 2. The convergence was
more robust for high-symmetry structures, such as Ag (\textit{f.c.c.}),
NaCl or ZnS sphalerite, which could also be reliably solved in
enlarged [222] supercells. For all successful runs the distance cost $C_d$
of the Liga-solved structure was comparable to the one from the CIF
reference and the atom overlap measure $C_c$ was close to zero.
ZnS sphalerite shows a notable difference between the $C_d$ values of
the solution and its CIF reference, however this was caused by using
a PDF peak position as a cell parameter for the solved structure.
Apparently the PDF peak extracted at $r \approx a$ was slightly
offset with respect to other peaks, nevertheless the Liga algorithm
still produced atom sites with correct fractional coordinates. The mean
displacement $s_r$ for ZnS is 0~Å, because solved structures and CIF
references were compared using lattice parameters rescaled to their
CIF values.
\begin{table}
\caption{Summary of tested structure solutions from x-ray PDF data}
\label{tab;SolvedStructures}
\begin{tabular}{lr*{8}{l}}
\hline
\multicolumn{2}{l}{sample \hfill atoms} &
\multicolumn{2}{l}{cost $C_d$ (0.01~Å$^2$)} &
\multicolumn{2}{l}{cost $C_c$ (Å$^2$)} &
\multicolumn{4}{l}{deviation of coordinates} \\
(supercell) & & Liga & CIF & Liga & CIF &
$s_x$ & $s_y$ & $s_z$ & $s_r$ (Å) \\
\hline
\multicolumn{10}{l}{successful solutions} \\
\hline
Ag [111] & 4 & 0.0232 & 0.136 & 0 & 0.001 &
0 & 0 & 0 & 0 \\
Ag [222] & 32 & 0.0097 & 0.136 & 0 & 0.001 &
0.00025 & 0.00024 & 0.00003 & 0.0014 \\
BaTiO$_3$ [111] & 5 & 0.370 & 0.394 & 0.040 & 0.042 &
0.0057 & 0.0066 & 0.014 & 0.064 \\
BaTiO$_3$ [112] & 10 & 0.392 & 0.394 & 0.058 & 0.042 &
0.00023 & 0.039 & 0.018 & 0.16 \\
C graphite [111] & 4 & 0.396 & 0.574 & 0.010 & 0.016 &
0.0029 & 0.0029 & 0.036 & 0.14 \\
C graphite [221] & 16 & 0.420 & 0.574 & 0.010 & 0.016 &
0.0086 & 0.0065 & 0.036 & 0.15 \\
CdSe [111] & 4 & 0.107 & 0.138 & 0 & 0.001 &
0 & 0 & 0.0055 & 0.027 \\
CdSe [221] & 16 & 0.0856 & 0.138 & 0 & 0.001 &
0.00010 & 0.00013 & 0.0057 & 0.028 \\
CeO$_2$ [111] & 12 & 0.515 & 0.554 & 0 & 0 &
0 & 0 & 0 & 0 \\
NaCl [111] & 8 & 1.75 & 1.71 & 0 & 0 &
0 & 0 & 0 & 0 \\
NaCl [222] & 64 & 1.20 & 1.71 & 0 & 0 &
0.00031 & 0.00031 & 0.00035 & 0.0032 \\
Ni [111] & 4 & 0.0024 & 0.0024 & 0 & 0 &
0 & 0 & 0 & 0 \\
Ni [222] & 32 & 0.0025 & 0.0024 & 0 & 0 &
0.00015 & 0.00013 & 0.00013 & 0.0008 \\
PbS [111] & 8 & 0.0125 & 0.0104 & 0.010 & 0.011 &
0 & 0 & 0 & 0 \\
PbS [222] & 64 & 0.0140 & 0.0104 & 0.010 & 0.011 &
0.00005 & 0.00004 & 0.00005 & 0.0005 \\
PbTe [111] & 8 & 0.0024 & 0.0127 & 0.097 & 0.090 &
0 & 0 & 0 & 0 \\
PbTe [222] & 64 & 0.0022 & 0.0127 & 0.097 & 0.090 &
0.00011 & 0.00011 & 0.00008 & 0.0011 \\
Si [111] & 8 & 0.0045 & 0.0045 & 0 & 0 &
0 & 0 & 0 & 0 \\
Si [222] & 64 & 0.0048 & 0.0045 & 0 & 0 &
0.00010 & 0.00009 & 0.00008 & 0.0009 \\
SrTiO$_3$ [111] & 5 & 0.437 & 0.437 & 0.002 & 0.002 &
0 & 0 & 0 & 0 \\
Zn [111] & 2 & 0.495 & 0.470 & 0 & 0 &
0 & 0 & 0.027 & 0.095 \\
Zn [222] & 16 & 0.564 & 0.470 & 0 & 0 &
0.00010 & 0.00006 & 0.020 & 0.080 \\
ZnS sphalerite [111] & 8 & 0.150 & 0.0647 & 0 & 0 &
0 & 0 & 0 & 0 \\
ZnS sphalerite [222] & 64 & 0.160 & 0.0647 & 0 & 0 &
0.00029 & 0.00033 & 0.00031 & 0.0028 \\
ZnS wurtzite [111] & 4 & 0.141 & 0.152 & 0 & 0 &
0 & 0 & 0.0038 & 0.017 \\
ZnS wurtzite [221] & 16 & 0.165 & 0.152 & 0 & 0 &
0.00003 & 0.00002 & 0.0039 & 0.017 \\
\hline
\multicolumn{10}{l}{failed solutions} \\
\hline
CaTiO$_3$ [111] & 20 & 0.4967 & 0.902 & 0.52 & 0.072 &
0.16 & 0.14 & 0.17 & 1.6 \\
TiO$_2$ rutile [111] & 6 & 0.5358 & 0.758 & 0.40 & 0.009 &
0.081 & 0.24 & 0.00004 & 0.94 \\
\hline
\multicolumn{10}{p{\textwidth}}{
$C_d$, $C_c$ -- distance and atom overlap cost as defined in equations
(\ref{eq;ligacost}), (\ref{eq;coloringcost});
$s_x$, $s_y$, $s_z$ -- standard deviation in fractional coordinates
normalized to a simple [111] cell;
$s_r$ (Å) -- root mean square displacement of the solved sites from the
reference CIF positions.
} \\
\hline
\end{tabular}
\end{table}
The structure determination did not work for two lower-symmetry
samples, CaTiO$_3$ and TiO$_2$ rutile. In both cases,
the simulated structure showed a significantly lower distance cost
$C_d$, while its atom overlap $C_c$ was an order of magnitude higher
than for the correct structure and clearly indicated an unphysical result.
These failures were caused by the poor quality of the extracted
distances, which contained significant errors and omissions with
respect to an ideal distance list. Peak search and distance extraction
are more difficult for lower-symmetry structures, because their pair
distances are more spread out and produce small features that can fall
below the resolution of the technique. Because of the poor distance data,
the Liga algorithm converged to incorrect geometries that actually
displayed a better match with the input distances. Both CaTiO$_3$ and
TiO$_2$ were easily solved when run with ideal distances calculated
from the CIF structure.
The results in Table~\ref{tab;SolvedStructures} suggest several ways
to extend the method and improve its success rate. First, the
Liga geometry solution and coloring steps can be performed
together; in other words, the structure coloring step needs to be merged
into a chemistry-aware Liga procedure. Since the atom overlap
cost $C_c$ is meaningful and can be easily evaluated for partial
structures, the total cost minimized by the Liga algorithm should
be a weighted sum of $C_c$ and the distance cost $C_d$. Such a cost
definition would steer the Liga algorithm away from the faulty
structures found for CaTiO$_3$ and TiO$_2$ rutile, because both of them
had huge atom overlaps $C_c$. Another improvement is to perform PDF
refinement for a full-sized structure and update its cost formula so
that the PDF fit residuum $R_w$ is used instead of the distance cost $C_d$.
Such a modification would remove the cost advantage of wrong structures
arising from errors and omissions in the extracted distances. The assumption
is that the distance data are still good enough to let
the Liga algorithm construct the correct structure in one of its many
trials. Finally, the cost definition for partial structures can be
enhanced with other structural criteria such as bond valence sum (BVS)
agreement~\cite{brese;acb91,norbe;jac09}. Bond valence sums are
not well determined for incomplete intermediate structures and thus
cannot fully match their expected values. However, the BVS can only
increase as the structure is completed; therefore a BVS of some ion that is
significantly larger than its expected value is a clear sign of
a poor-quality partial structure.
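As a toy illustration of such a combined cost (the equal weighting $w=0.5$ and the direct use of values from Table~\ref{tab;SolvedStructures} are our assumptions for this sketch, not part of the Liga implementation):

```python
# Hypothetical sketch of the proposed merged cost: a weighted sum of the
# distance cost C_d and the atom-overlap cost C_c.  The weight w = 0.5 is
# an arbitrary choice made for illustration.

def combined_cost(c_d, c_c, w=0.5):
    """Weighted total cost; w balances distance vs. overlap terms."""
    return w * c_d + (1.0 - w) * c_c

# Values from the table above: the failed CaTiO3 run had a lower C_d
# (0.4967 vs 0.902) but a much larger overlap C_c (0.52 vs 0.072)
# than the reference CIF structure.
liga_wrong = combined_cost(c_d=0.4967, c_c=0.52)
reference = combined_cost(c_d=0.902, c_c=0.072)
print(liga_wrong > reference)  # True
```

With any appreciable weight on $C_c$, the unphysical CaTiO$_3$ solution would no longer be preferred over the reference structure.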
\section{Conclusions}
We have demonstrated that the Liga algorithm for structure determination from
the PDF can be extended from its original scope of single-element non-periodic
molecules~\cite{juhas;n06,juhas;aca08} to multi-component crystalline
systems. The procedure assumes known lattice parameters; it solves the
structure geometry by optimizing pair distances to match the PDF-extracted
values, while the chemical assignment is obtained by minimizing
the atomic radii overlap. The procedure was tested
on x-ray PDF data from 16 samples, for 14 of which it gave
the correct structure solution. These are promising results, considering
that the technique is at a prototype stage and will be further developed to
improve its ease of use and rate of convergence. The procedure can be
easily amended by a final PDF refinement step. Such an implementation
could significantly reduce the overhead in PDF analysis of crystalline materials,
because its most difficult step, the design of a suitable structure model,
would become fully automated.
\ack{\textbf{Acknowledgements}}
We gratefully acknowledge Dr.~Emil Božin, Dr.~Ahmad Masadeh and
Dr.~Douglas Robinson for help with x-ray measurements at the
Advanced Photon Source at the Argonne National Laboratory (APS, ANL).
We thank Dr.~Christopher Farrow for helpful suggestions and
consultations and Dr.~Christos Malliakas for providing NaCl and ZnS
wurtzite samples. We appreciate the computing time and support
at the High Performance Computing Center of the Michigan State
University, where we performed all calculations. This work has
been supported by the National Science Foundation (NSF) Division of
Materials Research through grant DMR-0520547. Use of the APS is supported by the U.S. DOE, Office
of Science, Office of Basic Energy Sciences, under Contract
No. W-31-109-Eng-38. The 6ID-D beamline in the MUCAT sector at the APS is
supported by the U.S. DOE, Office of Science, Office of
Basic Energy Sciences, through the Ames Laboratory under
Contract No. W-7405-Eng-82.
\section{Microscopic Fermi-liquid analysis}
\label{sec1}
The goal of this section is to relate the plasmon dispersion to standard quantities such as the Landau FL interactions and the renormalized velocity. The analysis proceeds by standard steps, resumming the ladder contributions to the dynamical polarization function, which account for the quasiparticle dynamics in Landau's FL framework. In doing so, we keep $\omega$ and $q$ small but finite, as appropriate for a plasmon dispersion analysis. This leads to a polarization response, $\Pi(q,\omega)\sim q^2/\omega^2$, describing plasmon excitations in the low-frequency and long-wavelength domain, $\omega\ll E_F$, $q\ll p_F$.
Charge carriers in a graphene single layer are described by the Hamiltonian for $N=4$ species of massless Dirac particles. In the second-quantized representation the Hamiltonian reads
\bea
&&
\mathcal{H}= \sum_{\vec p,i} \psi^\dag_{\vec p,i} v_0\boldsymbol{\sigma}\vec p \psi_{\vec p,i} + \mathcal{H}_{\rm el-el},
\label{eq:hamiltonian}
\\\label{eq:interaction}
&&
\mathcal{H}_{\rm el-el} = \frac12\sum_{\vec q, \vec p,\vec{p'},i,j}V(\vec q) \psi^\dag_{\vec p + \vec q, i} \psi^\dag_{\vec p' - \vec q,j} \psi_{\vec p',j} \psi_{\vec p, i}
,
\eea
where $i,j=1...N$, $v_0\approx10^6\,{\rm m/s}$ is the unrenormalized Fermi velocity, and $V(\vec q)=2\pi e^2/\kappa|\vec q|$ is the Coulomb interaction, with the dielectric constant $\kappa$ describing screening by the substrate.
Here $\psi_{\vec p,i}$ is a two-component spinor describing the wave-function amplitude on the two sublattices of the graphene crystal lattice. The amplitudes associated with the two sublattices are usually referred to as pseudospin up and down components, with the (pseudo)spin-$1/2$ Pauli matrices in Eq.(\ref{eq:hamiltonian}) acting on (pseudo)spinors $\psi_{\vec p,i}$.
\begin{figure}
\includegraphics[width=1\linewidth]{plasmon_fig2}
\caption{Resummed Feynman graphs for the polarization operator $\Pi(\vec q,\omega)$. The non-quasiparticle contribution $\Pi_0(\vec q,\omega)$ and the FL ladder $\sum_{n\geq1}\Pi_n(\vec q,\omega)$ are shown. Only the contributions $\Pi_1(\vec q,\omega)$ and $\Pi_2(\vec q,\omega)$ contribute to the low-energy plasmon dispersion, see text.}
\label{fig:pi}
\end{figure}
Plasmons are collective excitations of 2D electrons coupled by the electric field in 3D. They can be described microscopically using the density correlation function
\be
\label{eq:K}
K(\vec q,\omega)=i\int dt \la [\rho_{\vec q}(t),\rho_{\vec q}(t_0)] \ra e^{i\omega (t-t_0)}
\ee
where $\rho_{\vec q}(t)=\sum_{\vec p,i}\psi^\dagger_{\vec p,i}(t)\psi_{\vec p +\vec q,i}(t)$ are Fourier harmonics of the total electron density.
The quantity $K$ is expressed in a standard fashion\cite{lifshitz_pitaevskii} through geometric series involving the polarization function $\Pi(\vec q,\omega)$ defined as the irreducible density-density correlator,
\be\label{eq:vareps_zero}
K(\vec q,\omega)=\frac{\Pi(\vec q,\omega)}{\tilde{\kappa}(\vec q,\omega)}
,\quad
\tilde{\kappa}(\vec q,\omega)=1-V(\vec q)\Pi(\vec q,\omega)
\ee
Zeros of the dynamical screening function $\tilde\kappa(\vec q,\omega)$ give the poles of $K$, defining plasmon dispersion.
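As a quick numerical illustration of this route to the dispersion (a sketch in illustrative units; the constant `C` stands for the coefficient of $q^2/\omega^2$ in $\Pi$ and is set to unity here, an assumption made purely for illustration):

```python
# Zeros of kappa(q, w) = 1 - V(q)*Pi(q, w), with V = 2*pi*e2/q in 2D and
# Pi = C*q**2/w**2, give w**2 = 2*pi*e2*C*q, i.e. the square-root dispersion.
import math

e2 = 1.0   # e^2 / kappa, illustrative units
C = 1.0    # stands for the coefficient of q**2/w**2 in Pi (illustrative)

def plasmon_freq(q):
    # solve 1 - (2*pi*e2/q) * C*q**2/w**2 = 0 for w
    return math.sqrt(2.0 * math.pi * e2 * C * q)

# doubling q scales the frequency by sqrt(2), as expected for w ~ sqrt(q)
r = plasmon_freq(0.2) / plasmon_freq(0.1)
print(abs(r - math.sqrt(2.0)) < 1e-12)  # True
```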
To obtain the dispersion from the condition $\tilde\kappa(\vec q,\omega)=0$ we need microscopic input for $\Pi(\vec q,\omega)$. In the long-wavelength limit, $q\ll p_F $, $\omega\ll E_F$, the behavior of the quantity $\Pi(\vec q,\omega)$ is dominated by excitations near the Fermi surface, which can be described in the FL framework.
The microscopic approach used to justify the FL picture involves several standard steps. We start, as usual, by isolating a quasiparticle pole contribution to the
electron Greens function $G(x-x')=-i\left<\psi(x)\psi^\dagger(x')\right>$ near the Fermi surface,
\be\label{eq:G_pole}
G(\epsilon,\vec p)=
G^{\rm (reg)}(\epsilon,\vec p) + G^{\rm (sing)}(\epsilon,\vec p)
.
\ee
The first term is a regular part of the Greens function
behaving as a smooth function near the Fermi level. The second term is a singular contribution describing quasiparticles,
\be
G^{\rm (sing)}(\epsilon,\vec p)=\frac{Z}{i\epsilon-\xi(p)+i\gamma \,\mathrm{sgn} \epsilon}
.
\ee
Here $Z$ is a quasiparticle residue, $\gamma$ is a quasiparticle decay rate, and $\xi(p)=v(p-p_F)$ is a quasiparticle energy dispersion, with $v$ the renormalized velocity.
This general
discussion
can be specialized to the case of graphene as follows. The Green's function for electrons in graphene has a $2\times 2$ matrix pseudospin structure. By projecting on the conduction and valence bands, it can be represented as
\be
\hat G(\varepsilon,\vec p)
=G_<(\varepsilon, \vec p)\hat{P}_<+G_>(\varepsilon, \vec p)\hat{P}_>
\ee
where $\hat{P}_{>(<)}=(1\pm\boldsymbol{\sigma} \vec e_p)/2$ are projectors for the two bands (here $\vec e_p$ is a unit vector in the direction of momentum $\vec p$).
The quasiparticle excitations with low energies, which govern the low-frequency and long-wavelength response, reside near the Fermi level.
Without loss of generality, we assume n-type doping, so that the Fermi level lies in the upper band, $E_F>0$. In this case, excitations from the lower band do not appear explicitly in the FL theory and lead only to renormalization of various parameters such as the effective interactions and the quasiparticle velocity. The quasiparticle pole in Eq.(\ref{eq:G_pole}) therefore arises only from the upper-band contribution $G_>$ whereas the lower-band contribution $G_<$ can be absorbed into the regular part $G^{\rm (reg)}$. Below the subscripts $>$ and $<$ will be omitted for brevity.
The next step, which is key for understanding the role of low-energy excitations, is the analysis of the polarization function $\Pi(q,\omega)$ at small $\omega$ and $q$. This is done by identifying the contributions due to pairs of Greens functions with proximal poles (the ``dangerous'' two-particle cross-sections),\cite{lifshitz_pitaevskii} which we write symbolically as $G^{\rm (sing)}G^{\rm (sing)}\sim Z^2\frac{\vec v\vec k}{\omega-\vec v\vec k}$. One can represent $\Pi(q,\omega)$ as a sum of terms with different numbers of such contributions,
\bea\label{eq:P0+P1+P2}
&&\Pi(q,\omega)=\Pi_0(q,\omega)+\Pi_1(q,\omega)+\Pi_2(q,\omega)+...
\\\nonumber
&&
\Pi_1(q,\omega)=\mathcal{T}^\omega GG\mathcal{T}^\omega
,\quad
\Pi_2(q,\omega)=\mathcal{T}^\omega GG\Gamma^\omega GG\mathcal{T}^\omega
...
\eea
The corresponding graphs are shown in Fig.\ref{fig:pi}. Here we introduced so-called quasiparticle-irreducible quantities: the renormalized scalar vertex $\mathcal{T}^\omega$ and the two-particle scattering vertex $\Gamma^\omega$ (see Fig.\ref{fig:gamma_t} b,c). These quantities absorb all non-quasiparticle contributions in the upper band as well as the inter-band
processes and
the contribution of the states in the lower band.
We recall that the quasiparticle-irreducible quantities are distinct from the conventional irreducible quantities defined as sums of Feynman graphs that cannot be split in two by removing two electron lines.\cite{lifshitz_pitaevskii} For example, the quasiparticle-irreducible vertex $\Gamma^\omega$ is obtained by summing all kinds of graphs except the ones with dangerous cross-sections. The vertex $\Gamma^\omega$ ($\mathcal{T}^\omega$) can be obtained from the conventional irreducible vertex $\Gamma_0$ ($\mathcal{T}_0$) by the resummation procedure pictured in Fig.\ref{fig:gamma_t}, where the hatched blocks represent contributions due to pairs of Greens functions save for $G^{\rm (sing)}G^{\rm (sing)}$.
To analyze the dependence on $\omega$ and $\vec q$ in the long-wavelength limit, care must be taken to employ the quantities $\mathcal{T}^\omega$ and $\Gamma^\omega$ at small but nonzero frequency and momentum values. We therefore adopt an approach similar to that used in Ref.\onlinecite{luttinger}: our quasiparticle-irreducible quantities correspond to Luttinger's $\omega$-quantities which are taken at finite $\omega$ and $\vec q$. They are distinct from the conventional $\omega$-quantities\cite{lifshitz_pitaevskii} obtained in the limit $\omega,q\to 0,\,(\omega\gg v_Fq)$. This distinction, however, turns out to be inessential: Luttinger's $\omega$-quantities reproduce the conventional $\omega$-quantities in the limit $\omega,q\to 0$, which can be taken in arbitrary order since dangerous cross-sections were left out of the definition of $\mathcal{T}^\omega$ and $\Gamma^\omega$.
\begin{figure}
\includegraphics[width=1\linewidth]{plasmon_fig1}
\caption{
Feynman graphs for the ``quasiparticle-irreducible'' quantities $\mathcal{T}^\omega,\,\Gamma^\omega$.
The bold lines represent a full Green's function, the thin lines represent
a singular part $G^{\rm (sing)}$.
The vertex $\Gamma_0$ represents the conventional irreducible vertex, whereas hatched blocks represent products of two Green's
functions, with the
contributions $G^{\rm (sing)}G^{\rm (sing)}$ which give a non-analytical behavior at small $\omega,\vec q$,
taken out (see text).
}
\label{fig:gamma_t}
\end{figure}
Proceeding with the analysis, we note that the dependence on $\omega$ and $q$ is very different for $\Pi_0(q,\omega)$ and $\Pi_{n\ge 1}(q,\omega)$. We will first analyze the contribution $\Pi_0(q,\omega)$. This quantity does not contain dangerous cross-sections which can generate a nonanalytic behavior at small $\omega$ and $\vec q$. Taking $\Pi_0(q,\omega)$ to be analytic, we can represent it as
\be
\Pi_0(\vec q,\omega)=A(\omega)+B(\omega)\left(\frac{q}{p_F}\right)^2+...
\ee
where $A(\omega)$ and $B(\omega)$ are regular functions.
Further, we recall that gauge invariance prohibits any physical response to a spatially uniform, time-dependent scalar potential. Applying this to the full polarization function, Eq.(\ref{eq:P0+P1+P2}), we
see that setting $q=0$ yields $\Pi(q{=}0,\omega)=0$. Also, since the contributions of the dangerous cross-sections $GG$ vanish at $q=0$, all the quantities $\Pi_{n\ge 1}(q,\omega)$ vanish there as well. We therefore conclude that the function $A(\omega)$ vanishes, leaving us with $\Pi_0(\vec q,\omega)=(q/p_F)^2B(\omega)+O(q^4)$. This gives an effective $q$-dependent permittivity
\be
\label{eq:kappa}
\tilde\kappa(q,\omega)=1-V(q)\Pi_0(q,\omega).
\ee
The second term in Eq.(\ref{eq:kappa}) may therefore be ignored in the long-wavelength limit. Indeed, since $V(q)=2\pi e^2/\kappa q$ in 2D,
whereas $\Pi_0\propto q^2$, the quantity $\tilde{\kappa}$ tends to unity in the limit $q/p_F \to 0$.
It is instructive to compare
this behavior of $\tilde\kappa(q,\omega)$ with that arising for $q\gg p_F$. In this case the effects of finite doping are negligible and we can estimate the polarization function using the result obtained for massless Dirac particles at zero doping,
\be\label{eq:Pi1}
\Pi(q,\omega)=-\frac{N}{16}\frac{q^2}{\sqrt{q^2v^2-\omega^2}}
,
\ee
where $N=4$ is the number of spin/valley flavors. In the limit $qv\gg \omega$, we obtain a well known
renormalized permittivity
\be
\tilde\kappa(q,\omega)=1+\frac{\pi N\alpha}8
\ee
This $q$-independent expression describes the effect of interband polarization in undoped graphene. We stress, however,
that while $\tilde{\kappa}>1$, the above expression is
obtained for $q$ and $\omega$
values which are not relevant for plasmon excitations.
This is so because plasmons do not exist for such $\Pi(\vec q,\omega)$, as the plasmon dispersion terminates for $q\gtrsim p_F $.
In contrast, the permittivity in Eq.(\ref{eq:kappa}), evaluated in the long-wavelength limit relevant for plasmons,
$q\ll p_F$, $\omega\ll E_F$,
equals unity.
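The contrast between the two regimes can be checked numerically; a minimal sketch of the undoped, large-$q$ limit (the values of $\alpha=e^2/\kappa\hbar v$ and $v$ are illustrative, in units with $\hbar=1$):

```python
# Undoped-graphene permittivity from kappa = 1 - V(q)*Pi(q, w), with
# V = 2*pi*alpha*v/q (units hbar = 1, e^2/kappa = alpha*v) and the
# zero-doping Dirac polarization Pi = -(N/16)*q**2/sqrt(q**2*v**2 - w**2).
import math

N = 4
alpha = 0.9   # e^2 / (kappa * hbar * v), illustrative value

def kappa_undoped(q, w, v=1.0):
    V = 2.0 * math.pi * alpha * v / q
    Pi = -(N / 16.0) * q ** 2 / math.sqrt(q ** 2 * v ** 2 - w ** 2)
    return 1.0 - V * Pi

# static limit w << q*v: reduces to 1 + pi*N*alpha/8, independent of q
val = kappa_undoped(q=1.0, w=0.0)
print(abs(val - (1.0 + math.pi * N * alpha / 8.0)) < 1e-12)  # True
```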
Next, we proceed with the analysis of the remaining terms, $\Pi_{n\ge 1}(q,\omega)$, which
give the leading contribution to the low-energy plasmon dispersion.
This is the part of the polarization which
depends on the quasiparticle contributions. The corresponding Feynman graphs form a ladder whose rungs consist of two quasiparticle lines separated by vertex parts, as shown in Fig.\ref{fig:pi}. This gives a geometric series that can be easily summed:
\be\label{eq:pi_interacting}
\Pi(\vec q,\omega)=
N\oint\frac{d\theta}{2\pi}\mathcal{T}^\omega\lp \frac{\nu Z^2}{\omega-\vec q\vec v(1+\hat F)}\vec q\vec v\mathcal{T}^\omega\rp
,
\ee
where $\hat{F}$ is an integral operator
\be\label{eq:F=Gamma}
\hat F f(\theta)=\nu Z^2\oint\frac{d\theta'}{2\pi}\Gamma^\omega(\omega,\vec q,\theta,\theta')f(\theta')
.
\ee
Here $\nu=p_F /(2\pi\hbar^2 v)$ is
the density of states per flavor and $\theta$($\theta'$) is an angle between $\vec p$($\vec p'$) and $\vec q$.
For zero external momentum $\vec q\rightarrow0$ the kernel of the operator $\hat F$ depends only on the angle between $\vec p$ and $\vec p'$.
In what follows, we will need the quantity $\Gamma^\omega(0,0,\theta,\theta')\equiv \Gamma^\omega(\theta-\theta')$.
The scalar vertex $\mathcal{T}^\omega$
takes on a simple form on the Fermi surface. For small external frequency and momentum values $\omega,\,v_Fq\ll\varepsilon_F$ the vertex can be decomposed as
\be\label{eq:T_expanded}
\mathcal{T}^\omega(\omega,\vec q, \theta)=\mathcal{T}_0+\mathcal{T}_1\omega+\mathcal{T}_2 q\cos\theta+...
\ee
where $\mathcal{T}_0=Z^{-1}$ by virtue of the Ward identity.\cite{lifshitz_pitaevskii} The linear terms
are potentially relevant, if judged by power counting. However, these contributions drop out: for external frequency $\omega$ and momentum $\vec q$, the expressions in question contain both $\mathcal{T}^\omega(\omega,\vec q,\theta)$ and $\mathcal{T}^\omega(-\omega,-\vec q,\theta)$, which leads to a cancellation of the terms linear in $\omega$ and $\vec q$.
Continuing with the analysis, we note that only the first two terms of the series in Eq.(\ref{eq:P0+P1+P2}) are relevant for long-wavelength plasmons with $v_Fq\ll\omega\ll \varepsilon_F$. Anticipating the square root dependence for plasmon frequency vs. wavenumber, we expand in $qv/\omega$ to obtain
\bea\nonumber
\Pi_1=&&\oint\frac{d\theta}{2\pi}\mathcal{T}^\omega (\omega,\vec q,\theta)\frac{\nu Z^2 vq\cos\theta}{\omega-vq\cos\theta}\mathcal{T}^\omega (-\omega,-\vec q,\theta)
\\
\label{eq:P1}
&&=\nu Z^2\mathcal{T}_0^2\oint\frac{d\theta}{2\pi}\frac{(vq\cos\theta)^2}{\omega^2}
=\frac{\nu}{2}\frac{v^2q^2}{\omega^2}
,
\\
\nonumber
\Pi_2=&&\oint\frac{d\theta d\theta'}{(2\pi)^2}\mathcal{T}^\omega (\omega,\vec q,\theta)\frac{\nu Z^2vq\cos\theta}{\omega}\Gamma^\omega(\omega,\vec q,\theta,\theta')
\\\nonumber
&& \times \frac{\nu Z^2 vq\cos\theta'}{\omega}\mathcal{T}^\omega (-\omega,-\vec q,\theta')
\\\nonumber
&&=\nu^2Z^4\mathcal{T}_0^2\frac{v^2q^2}{\omega^2}\oint\frac{d\theta d\theta'}{(2\pi)^2}\Gamma^\omega(\theta-\theta')\cos\theta\cos\theta'
\\
&&
=\frac{\nu}{2}\frac{v^2q^2}{\omega^2}\nu Z^2\oint\frac{d\theta}{2\pi}\Gamma^\omega(\theta)\cos\theta
.
\eea
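The angular averages entering $\Pi_1$ and $\Pi_2$ can be verified numerically; a minimal sketch using a model first-harmonic kernel $\Gamma^\omega(\theta)=2\Gamma_1\cos\theta$ (the kernel choice and the value of $\Gamma_1$ are assumptions made purely for illustration):

```python
# Check the angular integrals used in Pi_1 and Pi_2 by direct averaging
# over equally spaced points on the Fermi surface.
import math

def angular_avg(f, n=20000):
    return sum(f(2 * math.pi * k / n) for k in range(n)) / n

# <cos^2 theta> = 1/2, as used in the Pi_1 integral
print(abs(angular_avg(lambda t: math.cos(t) ** 2) - 0.5) < 1e-9)  # True

# double average of Gamma(t - t')*cos(t)*cos(t') for Gamma(t) = 2*G1*cos(t);
# it equals (1/2) * <Gamma(t) cos t> = G1/2, as used in the Pi_2 result
G1 = 0.3
inner = angular_avg(lambda t: angular_avg(
    lambda tp: 2 * G1 * math.cos(t - tp) * math.cos(t) * math.cos(tp),
    400), 400)
print(abs(inner - 0.5 * G1) < 1e-6)  # True
```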
The terms $\Pi_{n\ge 3}$, expanded in $qv/\omega$, yield contributions of higher order in $q$. The same is true for contributions arising from expanding $\mathcal{T}^\omega$ and $\Gamma^\omega$ in powers of $\vec q$ and $\omega$ (with the exception of the potentially relevant linear terms $\mathcal{T}_1$, $\mathcal{T}_2$ in Eq.(\ref{eq:T_expanded}), which merely cancel out). These terms are therefore inessential in the long-wavelength limit.
Combining all the above results for $\Pi_0$ and $\Pi_{n\ge1}$, we find the long-wavelength asymptotic behavior for the net polarization function:
\bea
&&
\Pi(\vec q,\omega)=\frac{N}{2}\nu(1+F_1)\frac{v^2q^2}{\omega^2},
\quad
\\\label{F_1}
&&
F_1=\nu Z^2\oint\frac{d\theta}{2\pi}\Gamma^\omega(\theta)\cos\theta
.
\eea
The quantity $F_1$ also
gives the eigenvalues of the integral operator $\hat F$ corresponding to eigenfunctions $\cos\theta$ and $\sin\theta$. We can therefore write $F_1\cos\theta=\hat F[\cos\theta]$, which gives a Fourier harmonic of the operator kernel identical to Eq.(\ref{F_1}).
Plasmon dispersion can now be obtained from the relation $1-V(q)\Pi(q,\omega)=0$, giving Eq.(\ref{eq:plasmon_dispersion0}). The effects of interaction
are encoded in the quantity ${Y}$ which equals Fermi velocity $v_0$ in the absence of interactions and is renormalized to a different value in an interacting system.
We note a difference between the quantities $F_m$ used in the FL literature\cite{lifshitz_pitaevskii} and
those used here, which is manifest in {\it their sign}. The difference
arises due to the long-range character of the $1/r$ interaction. In our case the density-density interaction $F(\theta-\theta')$ accounts for the exchange correlation effects but not for the Hartree effects. The Hartree contribution is expressed through the $1/r$ interaction taken at the plasmon momentum $\vec q$, corresponding to the Feynman graphs which can be disconnected by cutting a single interaction line. These contributions are incorporated in the dynamically screened interaction, Eq.(\ref{eq:vareps_zero}), and hence not included in the definition of $\Gamma^\omega$ above.
For Fermi liquids with short-range interactions, by comparison, the Landau interactions describing the density-density response are dominated by the Hartree effects and therefore have a positive sign for weak repulsive interactions. Our $F_m$, in contrast, are negative, since they are dominated by exchange effects. In particular, we expect $F_1<0$. The negative sign, expected from this general reasoning, is also borne out by a microscopic analysis at weak coupling, see below.
We also note an interesting analogy between the approach developed in this section and the analysis of superconducting Fermi liquids by Larkin and Migdal,\cite{larkin_migdal} and Leggett.\cite{leggett} Refs.\onlinecite{larkin_migdal,leggett} were concerned with Fermi-liquid renormalization of the quantities such as superfluid density in a metal with BCS pairing. Their analysis focused on the current correlation function which determines the response of current to vector potential, and followed similar steps as in the above discussion of $\Pi(\vec q,\omega)$. The renormalization effects were expressed through a combination of FL parameters, featuring a cancellation for a system with a parabolic band.
\section{Density dependence from one-loop RG}
\label{sec2}
In this section we derive the plasmon dispersion for a simple model describing strongly interacting Dirac particles. This is done by employing the renormalization group analysis developed in Refs.\onlinecite{gonzalez1994,vafek2007,son2007}.
We treat the two-body scattering vertex by accounting for dynamical screening of the Coulomb interaction in the random-phase approximation (RPA),
\be
U_{\vec q,\omega} = \frac{V(\vec q)}{\tilde{\kappa}(\vec q,\omega)}
,\quad
\tilde{\kappa}(\vec q,\omega)=1- V(\vec{q}) \Pi (\vec q, \omega)
.
\ee
Here
the quantity $\tilde\kappa(\vec q,\omega)$ which describes dynamical screening is identical to that introduced in the above discussion of the dynamical density correlator, Eq.(\ref{eq:vareps_zero}).
Here $\Pi(\vec q, \omega)$ is the polarization function \cite{hwang1}
\be
\Pi (\vec q, \omega) =N\sum_{\vec k, s, s'} | F_{\vec k, \vec k + \vec q }|^2 \frac{ f(\epsilon_{\vec{k}, s}) - f(\epsilon_{{\vec{k}+\vec q}, s'})}{ i\omega + \epsilon_{\vec{k}, s}- \epsilon_{\vec{k} + \vec{q}, s'} + i0}
,
\ee
with the band indices $\{s, s'\} = \pm $ and the coherence factors $| F_{\vec k, \vec k'}|^2=|\la \vec k',s'|\vec k,s\ra|^2$ describing the overlap of different pseudospin states. The polarization function is a sum of interband and intraband contributions, $\Pi=\Pi_1+\Pi_2$, described by $s'\ne s$ and $s'=s$, respectively. The quantity $\tilde\kappa(\vec q,\omega)$ describes the effect of intrinsic screening in graphene arising due to both the interband and intraband polarization.
For undoped graphene, only interband transitions contribute, giving
$\Pi_1 (\vec q, \omega)=-\frac{N}{16}\frac{ \vec q^2}{\sqrt{v^2\vec q^2+\omega^2}}$. This
expression is sufficient for our RG analysis [for a comprehensive treatment of the quantity $\Pi(\vec q,\omega)$ we refer to Ref.\onlinecite{hwang1}].\cite{gonzalez1994,vafek2007,son2007}
The full RG analysis of log-divergent corrections to Greens functions and vertices was performed in Refs.\onlinecite{gonzalez1994,vafek2007,son2007}. Below we use the results of a one-loop RG calculation at large $N$. The
RG flow for the quasiparticle velocity takes the form
\be
\frac{dv}{d\ell}=\beta v,\quad
\beta=\frac{8}{N\pi^2}
,
\ee
where $\ell=\ln(p_0/p)$ is the RG time parameter (here the UV cutoff is set by interatomic spacing in graphene lattice, $p_0\sim a^{-1}$). This gives a power-law dependence
\be\label{eq:v(p)}
v(p)=(p/p_0)^{-\beta} v_0
.
\ee
For $N=4$ we find $\beta\approx 0.2$. This value is obtained from a one-loop RG which employs $1/N$ as a small parameter. The results for $N\sim 1$ are qualitatively similar; however, the mathematical expressions are more cumbersome. Acknowledging the approximate character of the scaling dimensions obtained from one-loop RG, we leave the exponent $\beta$ unspecified in the analytic expressions.
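For illustration, the flow $dv/d\ell=\beta v$ can be integrated numerically and compared with the power law of Eq.(\ref{eq:v(p)}); a minimal sketch (step count and units are arbitrary choices):

```python
# Forward-Euler integration of the one-loop flow dv/dl = beta*v, compared
# with the closed form v(p) = v0*(p/p0)**(-beta) at l = ln(p0/p).
import math

N = 4
beta = 8.0 / (N * math.pi ** 2)   # ~0.2 for N = 4

def v_rg(l_final, v0=1.0, steps=100000):
    dl = l_final / steps
    v = v0
    for _ in range(steps):
        v += beta * v * dl
    return v

l = math.log(10.0)                 # momentum reduced by a factor of 10
print(abs(v_rg(l) - 10.0 ** beta) < 1e-3)  # True
```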
In the case of interest (doped graphene) the interband contribution $\Pi_1$ follows the above dependence for large momenta and frequencies, $|\vec q|\gg p_F$, $\omega\gg E_F$,
which dominate the RG flow. The intraband contribution $\Pi_2$ is much smaller than $\Pi_1$ at
such $\vec q$ and $\omega$, with the two contributions becoming comparable for $|\vec q|\sim p_F$, $\omega\sim E_F$. In the static limit, $\omega\ll E_F$, the polarization is dominated by the $\Pi_2$ contribution. In the range $q<2p_F$, which is where we will need it below, it is identical to that for two-dimensional systems with parabolic band,
\be
\label{eq:Pi2}
\Pi (|\vec q|<2p_F)=-N\nu,
\ee
$\nu=p_F/(2\pi\hbar^2 v)$ (we refer to Ref.\onlinecite{hwang1} for the analysis of other regimes).
This gives a standard expression for the static RPA-screened interaction
\be\label{eq:U_q0}
U_{\vec q,0}=\frac{2\pi e^2}{\kappa|\vec q|+2\pi N\nu e^2}
\ee
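A quick numerical check of the small-$q$ (Thomas-Fermi) limit of Eq.(\ref{eq:U_q0}), in illustrative units with $e^2/\kappa=1$ and an arbitrary per-flavor density of states:

```python
# Static RPA-screened interaction U(q, 0) = 2*pi*e2 / (q + 2*pi*N*nu*e2);
# at q -> 0 it saturates at the Thomas-Fermi value 1/(N*nu).
import math

e2 = 1.0           # e^2 / kappa, illustrative units
N, nu = 4, 0.5     # flavor number and per-flavor DOS (illustrative values)

def U_static(q):
    return 2.0 * math.pi * e2 / (q + 2.0 * math.pi * N * nu * e2)

print(abs(U_static(1e-9) - 1.0 / (N * nu)) < 1e-8)  # True
```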
We can obtain the two-particle scattering vertex $\Gamma^\omega$ by taking the interaction on the Fermi surface, $\vec q\sim p_F$, $\omega\ll E_F$. This gives
\be\label{eq:Gamma_RPA}
\Gamma^\omega(\theta,\theta')= - g_{\vec p,\vec p'}[{\cal T}(\Delta \vec p)]^2 U_{\Delta \vec p,0}
,\quad
\Delta p=2p_F\sin\frac{\Delta\theta}2
,
\ee
where $g_{\vec p,\vec p'}=|\la \vec p'\alpha'|\vec p\alpha \ra|^2=\cos^2(\Delta\theta/2)$ is the coherence factor describing the overlap of (pseudo)spinors describing quasiparticles at different points of the Fermi surface (here $\Delta\theta=\theta-\theta'$). The minus sign in Eq.(\ref{eq:Gamma_RPA}) arises because this expression represents a contribution from an exchange part of the two-particle vertex.\cite{lifshitz_pitaevskii}
The FL interaction can now be obtained from its relation with the vertex
$\Gamma^\omega$, Eq.(\ref{eq:F=Gamma}). Combining Eq.(\ref{eq:F=Gamma}) and Eq.(\ref{eq:Gamma_RPA}), we find
\be\label{eq:F_definition}
F(\theta-\theta')=-\nu g_{\vec p,\vec p'} Z^2[{\cal T}(\Delta \vec p)]^2U_{\Delta \vec p,0}
\ee
In the large $N$ limit, the static RPA-screened interaction can be approximated as
$U_{\vec q,\omega}\approx -\frac1{\Pi(\vec q,\omega)}=\frac1{N\nu}$,
where we take into account that $|\Delta p|<2p_F$.
Both $Z$ and ${\cal T}$ flow under RG, however their product remains equal to unity because of the Ward identity. As a result,
FL interactions do not
undergo a power-law renormalization.
Starting from $Z(p){\cal T}(p)=1$, where both $Z(p)$ and ${\cal T}(p)$ are given by power laws drawn from RG, we set $p=p_F$. This gives
\be
F(\theta-\theta')=
-\frac1{N}\lp\frac{{\cal T}(\Delta \vec p)}{{\cal T}(p_F)}\rp^2\cos^2\lp\frac{\theta-\theta'}2\rp
.
\ee
We therefore conclude that, up to a remnant dependence on $p_F$ which may enter the angle dependence through the ratio ${\cal T}(\Delta \vec p)/{\cal T}(p_F)$, the function $F$ does not flow under RG.
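Setting the vertex ratio ${\cal T}(\Delta\vec p)/{\cal T}(p_F)$ to unity (an approximation made here purely for illustration), the first harmonic of this Landau function can be evaluated directly; a minimal numerical sketch:

```python
# First harmonic of F(theta) = -(1/N)*cos^2(theta/2).  Since
# cos^2(t/2) = (1 + cos t)/2, one finds F_1 = -1/(4N) < 0.
import math

N = 4

def F_landau(theta):
    # large-N result with the vertex ratio T(dp)/T(pF) set to 1 (assumption)
    return -(1.0 / N) * math.cos(theta / 2.0) ** 2

def first_harmonic(n=20000):
    return sum(F_landau(2 * math.pi * k / n) * math.cos(2 * math.pi * k / n)
               for k in range(n)) / n

print(abs(first_harmonic() - (-1.0 / (4 * N))) < 1e-9)  # True
```

The negative value, $F_1=-1/(4N)=-0.0625$ for $N=4$, confirms the sign expected from the exchange-dominated interaction.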
The function $F(\theta-\theta')$ is essentially independent of doping, whereas the velocity has a power-law dependence on doping, $v\propto p_F^{-\beta}$, given by Eq.(\ref{eq:v(p)}) for $p=p_F$. Combining these results, we find a power law dependence for plasmon dispersion,
\be\label{eq:w(k)_large_N}
\omega^2=A|\vec q|
,\quad
A=\frac{Ne^2 p_Fv(p_F)}{2\kappa\hbar^2}(1+F_1)\propto p_F^{1-\beta}.
\ee
This result is valid for plasmons with long wavelengths, $q\ll p_F$. The predicted power-law dependence
holds in a wide range of carrier densities, both large and small, except very near the neutrality point where spatial inhomogeneity and thermal broadening play a role.
To conclude, the plasmon renormalization results from a competition of two effects: plasmons tend to stiffen due to the RG enhancement of the velocity, and to soften due to the negative sign of $F_1$. However, since $F_1$ does not flow under RG whereas the velocity $v$ does, the net effect of interactions
is to stiffen the plasmon dispersion.
The predicted dependence
$A\propto n^{(1-\beta)/2}$
can be used for extracting the exponent $\beta$ from measurement results.
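Since $p_F\propto\sqrt{n}$, the coefficient scales as $A\propto n^{(1-\beta)/2}$; a minimal sketch of extracting $\beta$ from a log-log slope (the value of $\beta$ and the units are illustrative):

```python
# The dispersion coefficient A ~ pF * v(pF), with pF ~ sqrt(n) and
# v(pF) ~ pF**(-beta), scales as n**((1 - beta)/2); the log-log slope
# between two densities recovers this exponent.
import math

beta = 0.2   # illustrative value of the one-loop exponent

def A_of_n(n):
    pF = math.sqrt(n)
    return pF * pF ** (-beta)

slope = (math.log(A_of_n(4.0)) - math.log(A_of_n(1.0))) / math.log(4.0)
print(abs(slope - (1.0 - beta) / 2.0) < 1e-12)  # True
```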
\section{Magnetoplasmon in a Fermi liquid}
Below we analyze the plasmon dispersion using FL transport equations.
We first deal with plasmons in the absence of magnetic field and then proceed to add a $B$ field. Some of the relevant quantities, such as the Landau FL interaction $F(\theta-\theta')$, have already been introduced and analyzed above;
here we discuss them again to make the connection to the microscopic derivation in Sec.\ref{sec1}.
In a semiclassical picture, the main effect dominating the Fermi-liquid behavior is forward scattering, wherein the whole system of interacting particles acts as a refractive medium in which the quasiparticle energy is a function of the occupancies of the other particles. This is described by the so-called Landau functional,\cite{lifshitz_pitaevskii}
\be
\delta\epsilon(\vec p)=\int \frac{d^2p'}{(2\pi)^2} f(\vec p,\vec p')\delta n(\vec p',\vec r),
\ee
where $\delta n(\vec p,\vec r,t)$ accounts for the deviation of the quasiparticle distribution from equilibrium.
Since deviation from equilibrium occurs in a narrow band of states near Fermi surface, it is convenient to write the Landau functional by setting $|\vec p|=|\vec p'|=p_F$ and parameterizing the Fermi surface by a unit vector $\vec n=\hat{\vec p}$. Introducing the dimensionless Landau interaction $F(\vec p,\vec p')=\nu f(\vec p,\vec p')$, where $\nu=p_F /(2\pi \hbar^2 v)$ is the density of states per flavor, we write
\be\label{eq:epsilon(p)}
\epsilon(\vec p,\delta n)=\epsilon_0(\vec p)+\int \frac{d\theta'}{2\pi} F(\vec p,\vec p')\delta \tilde n(\vec p',\vec r)
.
\ee
Here $\epsilon_0(\vec p)=v(p-p_F)$ is linearized quasiparticle energy, the angle $\theta'$ describes orientation of $\vec p'$, and $\delta \tilde n(\vec p)$ is obtained by integrating $\delta n(\vec p)$ along the Fermi surface normal. The expression (\ref{eq:epsilon(p)}) can be treated as a Hamiltonian of one quasiparticle moving in a selfconsistent field of other quasiparticles. Equations of motion can then be obtained from Hamiltonian formalism via $\p_t n=\{H,n\}$. This gives
\be\label{eq:FL_dynamics}
(\p_t+\vec v\nabla) \delta n(\vec p,\vec r,t)=-\vec v\nabla\hat F\delta n(\vec p,\vec r,t),
\ee
where $\hat F$ is the integral operator defined in Eq.(\ref{eq:epsilon(p)}).
In a system with rotational symmetry, such as graphene and 2DEGs, the functional $F$ depends only on the angle between $\vec p$ and $\vec p'$:
\be
\hat F \delta n(\theta)\to\oint\frac{d\theta'}{2\pi}F(\theta-\theta') \delta n(\theta')
\ee
This expression defines a hermitian operator in the space of functions on the Fermi surface with the inner product
\be\label{eq:innerproduct}
\la f_1(\theta)|f_2(\theta)\ra=\oint\frac{d\theta}{2\pi} f^*_1(\theta) f_2(\theta).
\ee
The eigenvalues of $\hat F$ are simply given by the Fourier coefficients
\be
F_m=\overline{F(\theta)e^{-im\theta}}=\oint \frac{d\theta}{2\pi}F(\theta)e^{-im\theta}
.
\ee
The quantities $F_m$
parametrize FL interactions of a 2D system.
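For a concrete angular interaction the harmonics can be extracted numerically. A minimal sketch (the model Landau function and the coefficient values here are illustrative assumptions, not taken from the text):

```python
import numpy as np

def landau_harmonics(F, m_max=4, n_grid=4096):
    """Fourier coefficients F_m = \\oint dtheta/(2 pi) F(theta) exp(-i m theta)."""
    theta = np.linspace(0.0, 2*np.pi, n_grid, endpoint=False)
    Fvals = F(theta)
    # uniform-grid average approximates the contour integral to machine precision
    # for band-limited F(theta)
    return {m: np.mean(Fvals * np.exp(-1j*m*theta)).real
            for m in range(-m_max, m_max+1)}

# Example: F(theta) = F0 + 2*F1*cos(theta) has harmonics F_0 = F0, F_{+-1} = F1
F0, F1 = 0.3, -0.1
Fm = landau_harmonics(lambda t: F0 + 2*F1*np.cos(t))
```

For an even, real $F(\theta)$ the harmonics come out real and satisfy $F_m=F_{-m}$, as the dictionary above confirms.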
To describe plasmons,
we add to Eq.(\ref{eq:FL_dynamics}) a long range electric field arising due to oscillating charge density,
\be
\lp \p_t+\vec v\nabla(\hat 1+\hat F)\rp \delta n(\vec p,\vec r,t)+e\vec E\nabla_{\vec p}n_0(\vec p)=0
\ee
where $n_0(\vec p)$ is the equilibrium Fermi distribution. Here $\vec E=-\nabla\Phi$, where $\Phi(\vec r)$ is the potential
\[
\Phi(\vec r)=\sum_i\int d^2r'\oint \frac{d\theta'}{2\pi} \frac{e}{\kappa |\vec r-\vec r'|}\delta n(\theta',\vec r',t).
\]
Here the sum is taken over $N$ spin/valley flavors, and the dielectric constant $\kappa$ accounts for screening by the substrate.
Performing a Fourier transform, $\delta n(\theta,\vec r,t)=\int\int \frac{d\omega\, d^2q}{(2\pi)^3}\delta n_{\omega,\vec q}(\theta)e^{-i\omega t+i\vec q\vec r}$, we arrive at an eigenvalue equation of a form identical to that found in Sec.\ref{sec1} by analyzing the poles of the dynamical screening function,
\be\label{eq:RPA}
1-V(\vec q)\Pi(\vec q,\omega)=0
\ee
The quantity $\Pi(\vec q,\omega)$ is identical to that found above by summation of FL-type ladder graphs,
\be
\Pi(\vec q,\omega)=
N\nu \,\tr_\theta\, \lp \frac{1}{\omega-\vec q\vec v(1+\hat F)}\vec q\vec v\rp
\ee
where trace is taken with respect to the inner product defined by Eq.(\ref{eq:innerproduct}).
Plasmon dispersion in the long wavelength limit can be found by expanding in the ratio $v|\vec q|/\omega$. We obtain
\be
\Pi (\vec q,\omega)\approx\frac{N\nu}{\omega^2}\la \vec q\vec v|(1+\hat F)|\vec q\vec v\ra
=\frac{N\nu q^2v^2}{\omega^2}\la \cos\theta|(1+\hat F)|\cos\theta\ra
\ee
Expressing the angle-averaged quantity through the Fourier coefficient $F_1$ and using the relation $\nu=p_F/(2\pi v)$ (in units $\hbar=1$), we rewrite this result as
\be
\Pi (\vec q,\omega)=\frac{Np_F \vec q^2}{4\pi\omega^2}v(1+F_1)
\ee
Plugging this into Eq.(\ref{eq:RPA}) and restoring Planck's constant, we obtain the same expression for plasmon dispersion as above, see Eq.(\ref{eq:plasmon_dispersion0}).
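The resulting long-wavelength dispersion can be evaluated in closed form from $1-V(\vec q)\Pi(\vec q,\omega)=0$ with the $\Pi$ above. A numerical sketch in units $\hbar=1$ (all parameter values are placeholders, not measured quantities):

```python
import numpy as np

def plasmon_omega(q, v=1.0, pF=1.0, F1=-0.1, N=4, e2_kappa=1.0):
    """omega(q) from 1 = V(q) Pi(q, omega), with
    Pi = N pF q^2 v (1+F1) / (4 pi omega^2) and V(q) = 2 pi e^2/(kappa q)."""
    Vq = 2*np.pi*e2_kappa/q
    return np.sqrt(Vq * N * pF * v * (1 + F1) * q**2 / (4*np.pi))

q = np.array([0.01, 0.04])
w = plasmon_omega(q)
# the Coulomb tail V(q) ~ 1/q gives the 2D square-root law: quadrupling q doubles omega
```

Algebraically the expression collapses to $\omega^2(q)=N e^2 p_F v(1+F_1)q/2\kappa$, making the $\sqrt{q}$ scaling explicit.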
This analysis can be easily generalized to a system in the presence of an external magnetic field. This is done by accounting for the Lorentz force in the $\nabla_{\vec p}n$ term:
\be
\lp \p_t+\vec v\nabla(\hat 1+\hat F)\rp \delta n(\vec p,\vec r,t)+\lp e\vec E
+\frac{e}{c}\tilde{\vec v}\times \vec B\rp \cdot\nabla_{\vec p}n=0
\ee
where the velocity $\tilde{\vec v}=\nabla_{\vec p}\epsilon(\vec p,\delta n)$ includes
the contributions accounting for the distribution function change
$\delta n(\vec p)$.
This equation can be linearized as above, $n(\vec p,\vec r,t)=n_0(\vec p)+\delta n(\vec p,\vec r,t)$. In doing so, particular care must be taken with the Lorentz force term, since it is affected by the FL interactions. Accounting for the term in the velocity $\tilde{\vec v}$ that depends on $\delta n(\vec p)$, we write
\bea
\tilde{\vec v}(n_0+\delta n)&=&\nabla_{\vec p}\epsilon(n_0+\delta n)
\\
\nonumber
&=&\nabla_{\vec p}\epsilon_0+\nabla_{\vec p}\hat F\delta n =\vec v+\hat F \nabla_{\vec p'} \delta n.
\eea
Here we used Eq.(\ref{eq:epsilon(p)}), performing integration by parts in the last term.
Terms linear in $\delta n$ can arise both from $\nabla_{\vec p}n_0$ and $\tilde{\vec v}$. Taking a solution in the plane-wave form $\delta n(\vec r,\vec p,t)\propto e^{-i\omega t+i\vec q\vec r}\,\frac{\p n_0(\vec p)}{\p \epsilon}\, f(\theta)$
where $\theta$ is the angle between $\vec q$ and $\vec v$, we have
\bea
\lp i\omega-ivq \cos\theta(1+\hat F)\rp f(\theta)-\frac{e}{c}\lp\vec v\times \vec B\rp\cdot\nabla_{\vec p}f(\theta)
\\
\nonumber
-\frac{e}{c}\lp\hat F \nabla_{\vec p}f(\theta) \times\vec B\rp \cdot\vec v
=i\nu V(q) qv\cos\theta \oint\frac{d\theta'}{2\pi}f(\theta').
\eea
This equation can be simplified as follows:
\bea
&&\lp i\omega-ivq \cos\theta(1+\hat F)-\omega_B(1+\hat F)\partial_\theta\rp f(\theta)
\\
\nonumber
&&=i\nu V(q) qv\cos\theta \oint\frac{d\theta'}{2\pi}f(\theta'),\quad \omega_B=\frac{evB}{p_F c}
.
\eea
This gives
an eigenvalue problem with $\omega$ a spectral parameter and $f(\theta)$ an eigenfunction.
Inverting the operator on the left hand side gives a self-consistency equation
\[
\oint\frac{d\theta}{2\pi}\frac{\nu V(q)qv}{\omega+i\omega_B\left(1+\hat F\right)\partial_\theta
-qv\cos\theta(1+\hat F)}\cos\theta =1,
\]
where $\frac1{\omega+...}$ is a shorthand for operator inverse.
Magnetoplasmon dispersion can be obtained via perturbation theory in the parameter $qv/\omega\ll 1$, giving
\be
\label{eq:magnetoplasmon_FL}
\omega^2(\vec q)=(1+F_1)^2\omega_B^2+(1+F_1)\frac{N}{2}V(\vec q) \nu \vec q^2v^2
\ee
This analysis ignores Bernstein modes which appear for $q r_c \sim 1$,
where $r_c$ is the cyclotron radius.\cite{bernstein} The validity of Eq.(\ref{eq:magnetoplasmon_FL}) is therefore limited to long wavelengths, \mbox{$q r_c \ll 1$}.
Using the notation ${Y}=(1+F_1)v$ we arrive at Eq.(\ref{eq:magnetoplasmon_dispersion}).
Magnetoplasmon dependence on interactions is therefore described by the parameter ${Y}$ identical to that found for plasmons at $B=0$.
As discussed above, the density dependence of the quantities $v$ and $1+F_1$ can be linked to their flow under RG. The power-law RG flow of velocity leads to stiffening of plasmon dispersion, which overwhelms the effect of softening due to the negative sign of $F_1$.
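The $q\to 0$ limit of Eq.(\ref{eq:magnetoplasmon_FL}), $\omega=(1+F_1)\omega_B$, can be cross-checked by diagonalizing the kinetic operator in the angular-harmonic basis $e^{im\theta}$, where $\hat F$ is diagonal with eigenvalues $F_m$ and $\cos\theta$ couples $m\to m\pm1$. A numerical sketch (the self-consistent Coulomb term, which supplies the $q$-dependent piece, is omitted, and the Landau coefficients are placeholder values):

```python
import numpy as np

def kinetic_modes(qv, wB, Fm, m_max=8):
    """Eigenfrequencies of  omega f = [qv cos(theta)(1+F) - i wB (1+F) d_theta] f
    in the truncated harmonic basis e^{i m theta}; Fm maps harmonic index -> F_m."""
    ms = np.arange(-m_max, m_max + 1)
    n = len(ms)
    A = np.zeros((n, n))
    for j, m in enumerate(ms):
        A[j, j] = m * wB * (1 + Fm(m))          # -i (1+F) d_theta -> m (1+F_m)
        if j + 1 < n:
            A[j + 1, j] += 0.5 * qv * (1 + Fm(m))  # cos(theta): m -> m+1
        if j - 1 >= 0:
            A[j - 1, j] += 0.5 * qv * (1 + Fm(m))  # cos(theta): m -> m-1
    return np.sort(np.linalg.eigvals(A).real)

F1 = -0.2
Fm = lambda m: F1 if abs(m) == 1 else 0.0
modes = kinetic_modes(qv=0.0, wB=1.0, Fm=Fm)
# at q = 0 the m = +-1 modes sit at +-(1+F_1) wB, the renormalized cyclotron frequency
```

At $q=0$ the matrix is diagonal and the spectrum is $m\omega_B(1+F_m)$, reproducing the renormalized cyclotron mode; finite $qv$ shifts these levels, hinting at the Bernstein-mode structure mentioned above.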
\section{Comparison to systems with parabolic band dispersion}
\label{sec:galilean}
To put the above results in perspective,
we recall some important aspects of
long-wavelength plasmons in 2D electron systems with a parabolic band.
Such plasmons afford a simple description in terms of classical equations of motion for
collective ``center-of-mass'' variables describing oscillating charge density.\cite{giuliani2005} The result is expressed in a general form through unrenormalized band mass and electron interaction as
\be\label{eq:plasmon_unrenormalized}
m \omega^2 = n V(q) q^2
\ee
where $n$ is carrier density and $V(q)=\frac{2\pi e^2}{|q|\kappa}$ for two-dimensional systems. An identical result is found for the quantum problem, since Heisenberg evolution generates classical equations of motion for the operators corresponding to the center-of-mass variables describing collective charge dynamics.
The absence of renormalization of plasmon dispersion, Eq.(\ref{eq:plasmon_unrenormalized}), can be linked to Galilean invariance. In quantum systems, Galilean invariance is a symmetry of the Hamiltonian generated by the transformation $x'=x+vt$, $t'=t$. This symmetry, which holds for any system with parabolic band dispersion and instantaneous interactions,
ensures a complete cancellation of the effects of interaction,
rendering the plasmon dispersion unrenormalized.
As discussed above, the cancellation of Fermi-liquid corrections follows from the Fermi-liquid identity\cite{lifshitz_pitaevskii} which relates renormalized velocity with the quantity $F_1$,
\be\label{eq:v/v*}
{Y}=(1+F_1)v=v_0
,
\ee
where $v_0=p_F/m$ is Fermi velocity for noninteracting particles.
Crucially,
the validity of this identity depends on the band structure being parabolic on the scales
$\epsilon\gtrsim E_F$ and $\epsilon\sim E_F$
which determine the FL interactions.
The relation between unrenormalized plasmon dispersion and Galilean symmetry also holds in the presence of a magnetic field, wherein gapless plasmons turn into gapped magnetoplasmons. The magnetoplasmon dispersion is
$\omega^2(q)=\omega_0^2(q)+\omega_c^2$, where $\omega_0(q)$ is given by Eq.(\ref{eq:plasmon_unrenormalized}) and $\omega_c=eB/mc$ is the unrenormalized cyclotron frequency.
In this case, the absence of renormalization is guaranteed by Kohn's theorem.\cite{kohn} Kohn's theorem is established by treating collective charge dynamics in a magnetic field using the center-of-mass variables, in complete analogy with the derivation of Eq.(\ref{eq:plasmon_unrenormalized}). Because of Galilean invariance, the Heisenberg equations of motion for the center-of-mass variables obey classical dynamics with the unrenormalized cyclotron frequency.
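A minimal classical illustration of the mechanism behind Kohn's theorem: for equal charged particles in a magnetic field, internal forces cancel in the center-of-mass equation, so the center of mass gyrates at the bare $\omega_c$ regardless of the interaction. The sketch below uses two particles coupled by a spring as a stand-in for an arbitrary pair interaction; all parameter values are arbitrary:

```python
import numpy as np

def simulate(k_spring, wc=1.0, dt=1e-3, steps=4000):
    """Two equal particles in the plane, cyclotron frequency wc, coupled by a
    spring of stiffness k_spring; RK4 integration of [r1, r2, v1, v2]."""
    def deriv(s):
        r1, r2, v1, v2 = s.reshape(4, 2)
        # planar Lorentz acceleration: wc * (v x z_hat) = wc * (vy, -vx)
        lorentz = lambda v: wc * np.array([v[1], -v[0]])
        f12 = -k_spring * (r1 - r2)              # spring force on particle 1
        return np.concatenate([v1, v2, lorentz(v1) + f12, lorentz(v2) - f12])
    s = np.array([0.5, 0.0, -0.5, 0.0, 1.0, 0.3, 1.0, -0.3])
    com = []
    for _ in range(steps):
        k1 = deriv(s); k2 = deriv(s + 0.5*dt*k1)
        k3 = deriv(s + 0.5*dt*k2); k4 = deriv(s + dt*k3)
        s = s + dt/6*(k1 + 2*k2 + 2*k3 + k4)
        com.append(0.5*(s[0:2] + s[2:4]))        # center of mass
    return np.array(com)

# the center-of-mass trajectory is the same with and without the coupling
com_free = simulate(k_spring=0.0)
com_int  = simulate(k_spring=3.0)
```

The internal forces enter the two accelerations with opposite signs and drop out of the center-of-mass dynamics exactly, which is the classical content of the theorem.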
Unrenormalized plasmon dispersion also arises in other space dimensions, with $V(q)=\frac{4\pi e^2}{q^2\kappa}$ for 3D systems and $V(q)=\frac{2e^2}{\kappa}\ln\frac1{|q|a}$ for 1D systems. In the latter case, plasmon dispersion matches that of charge modes in one-dimensional Luttinger liquids. We stress that, in a general Luttinger liquid framework, the effective interaction for 1D plasmons
is distinct from
the bare interaction.
Nevertheless, due to Galilean invariance, plasmon dispersion in a 1D system with parabolic bands is expressed through unrenormalized bare interaction.
As noted above, what matters here
is the character of the overall band structure rather than the linear dispersion in a system linearized near the Fermi points.
In contrast to systems with parabolic dispersion, plasmons in graphene are sensitive to interactions.
This is so because Galilean invariance is not a symmetry for particles with linear dispersion, and hence the absence of renormalization is not guaranteed by any general principle. As a result, plasmons in graphene feature a nontrivial dependence on interactions. As we have seen above, plasmon dispersion is expressed through the Fermi velocity $v$, which is renormalized by interaction effects, and also through the FL interaction via the factor $1+F_1$.
We parenthetically note that the electronic spectrum of bilayer graphene, while featuring parabolic band dispersion, does not obey Galilean invariance. We therefore expect plasmon dispersion in a bilayer to exhibit a full-fledged Fermi-liquid renormalization, similar to monolayer graphene.
To summarize, renormalization of electron properties due to interactions results in a nonclassical dependence of plasmon frequency on carrier density. Using a nonperturbative approach based on the FL theory, we show that plasmon dispersion can be expressed through Landau FL interactions. Measurements of plasmon resonance can therefore be used to extract the interaction parameters in a model-free way, which is particularly useful for studying strongly interacting systems such as graphene. Our results indicate a significant deviation from the $n^{1/4}$ power law dependence predicted for weakly interacting electrons in Refs.[\onlinecite{wunsch06,hwang1,polini2009,dassarma09}]. The density dependence predicted by our approach derives from the RG flow of the quantity $Y=(1+F_1)v$, where the RG ``time'' parameter value tracks the Fermi momentum. As an illustration, we consider RG for a large number of fermion flavors, which yields a power law of the form $n^{(1-\beta)/4}$, $\beta>0$. The density dependence of the plasmon resonance can therefore provide a direct,
model-free probe of the RG theory of interaction effects in graphene.
We thank A. V. Chaplik, M. I. Dyakonov, F. H. L. Koppens, I. V. Kukushkin and M. Yu. Reizer for useful discussions. This work was supported in part under the MIT Skoltech Initiative, a collaboration between the Skolkovo Institute of Science and Technology (Skoltech), the Skolkovo foundation, and the Massachusetts Institute of Technology.
\section{Introduction}
Learning from demonstrations has gained a lot of popularity in decision-making and control tasks in a number of domains such as gaming, human-computer interactions, robotics, and self-driving vehicles. Commonly used methods include Imitation Learning (IL)~\cite{bojarski2016end}~\cite{ross2011reduction}, Inverse Reinforcement Learning (IRL)~\cite{ng2000algorithms}~\cite{ziebart2008maximum}, Generative Adversarial Imitation Learning (GAIL)~\cite{ho2016generative}, Adversarial Inverse Reinforcement Learning (AIRL)~\cite{fu2017learning}~\cite{wang2019human}, etc. These methods do not require manually designing a reward function, which can be hard for complex tasks, but they generally need an abundance of demonstrations in order to mimic the demonstrated behaviors. Furthermore, the learned behavior usually works only in that specific task environment and fails to generalize to new tasks with different distributions. In reality, it is usually the case that we continuously enrich the dataset by collecting data from new tasks or environments. For example, to learn an automated lane-change behavior, we may train our vehicle agent with thousands or even millions of labeled driving demonstrations from different cities or countries, but these demonstrations may not cover all possible situations, and we may still obtain new data from other locations that are not originally included in our training dataset. In this situation, it is labor intensive and costly to keep labeling all the newly acquired data and retraining the model from scratch. Therefore, it is essential to have a model that can make good use of the knowledge learned from existing tasks and generalize quickly to new tasks with limited data samples.
There have been a number of studies that applied meta-learning to decision-making and control tasks, such as Meta-Reinforcement Learning~\cite{wang2016learning}~\cite{xu2018meta}, Meta-Inverse Reinforcement Learning~\cite{xu2018learning}~\cite{yu2019meta}, Meta-DAgger~\cite{sallab2017meta}, etc. These studies target either policy learning or reward function learning. For a decision-making and control task, what we ultimately need is a policy that maps states to actions for maneuvering the agent to complete the task, but we may still want a reward function, which is considered a succinct, robust, and transferable representation of a task~\cite{abbeel2004apprenticeship}. Some learning from demonstrations methods, e.g. Adversarial Inverse Reinforcement Learning (AIRL)~\cite{fu2017learning}, can simultaneously recover a policy and a reward function from expert demonstrations for a single task. In this work, we propose a framework that combines meta-learning and Adversarial Inverse Reinforcement Learning to learn a model initialization that can be quickly adapted to new situations with limited data.
Our contributions in this work are as follows: 1) We propose a general Meta-AIRL algorithm that can efficiently generalize both the learned policy and the reward function to new tasks. 2) We apply the proposed method to decision-making tasks in the autonomous driving domain; such an application is challenging, as the learning agent needs to interact dynamically with adjacent agents to choose proper actions.
The rest of the paper is organized as follows. In Section 2, we present related work. Section 3 provides details on the formulation of the Meta-AIRL methodology. Experiments and results are given in Section 4. Section 5 concludes this paper.
\section{Related Work}
\label{sec:citations}
\subsection{Learning from Demonstrations}
There have been a number of studies focusing on learning from demonstrations. Pure Behavior Cloning~\cite{bojarski2016end} suffers from the dilemma of distribution shift, which leads to the failure of issuing correct actions when the states are out of the training distribution. DAgger~\cite{ross2011reduction} solves the problem by introducing a human expert or an oracle to give corrective labels for states that are not included in the collected data. However, human intervention in deployment is troublesome in many practical settings. Generative Adversarial Imitation Learning (GAIL)~\cite{ho2016generative} is proposed to learn a policy in an adversarial way by involving a discriminator and a generator (i.e. the policy). It saves the effort of querying a human annotator but uses a discriminator for distinguishing the expert data and the generated data. A limitation is that GAIL does not recover a reward function, which is generally considered robust over different environments. Adversarial Inverse Reinforcement Learning (AIRL)~\cite{fu2017learning} formulates the discriminator in a special form instead of directly using the neural network output, which enables AIRL to learn both the policy and the reward function simultaneously.
\subsection{Meta-learning}
Meta-learning, also known as learning to learn, is an approach to adapt learned models to novel settings by exploiting the inherent structural similarities across a distribution of tasks. Meta-learning can be metrics based~\cite{vinyals2016matching}~\cite{sung2018learning}, model based~\cite{ravi2016optimization}, or optimization based~\cite{ravi2016optimization}~\cite{finn2017model}~\cite{nichol2018first}. Optimization based meta-learning is a powerful approach that adjusts the optimization algorithm itself for fast learning. For example, Model-agnostic Meta-learning (MAML)~\cite{finn2017model} learns a good parameter initialization by performing one-step unrolling only. The First-order MAML (FOMAML) and REPTILE~\cite{nichol2018first} algorithms learn the initialization by looking ahead multiple gradient steps but ignore second-order derivatives. The initialization learned by these methods encodes common knowledge among different tasks and allows efficient adaptation in only a few gradient steps.
\subsection{Integration of Meta-learning and Learning from Demonstrations}
Some studies have explored integrating meta-learning and learning from demonstrations methods for decision-making and control tasks. One direction is Meta-IRL, which combines meta-learning and IRL to adapt the learned reward function to new tasks. In the studies of~\cite{xu2018learning}~\cite{gleave2018multi}, Meta-IRL has been applied to solve problems with discrete tabular settings, while in the study of~\cite{yu2019meta}, the authors extend the standard IRL framework to include latent variables and learn an adaptable reward function in an unsupervised way. The approach has been applied to continuous control tasks in Mujoco environments~\cite{todorov2012mujoco}. Xu et al.~\cite{xu2018learning} and Yu et al.~\cite{yu2019meta} leveraged Meta-IRL to learn prior distributions of the tasks. These approaches have generally been applied to simulated game environments where the background is static and no interactions between agents are involved. It can be intractable to model the distribution of tasks for highly dynamic environments such as the driving domain.
Autonomous driving is a much more challenging task in terms of the dynamically changing environment and complex interactions with surrounding agents. Recently, a couple of studies have explored applying meta-learning in this domain. \cite{sallab2017meta} proposed Meta-DAgger to improve the generalization of the learned policy in a simulated environment called TORCS. It involves a human or an oracle in the loop, which limits its application in the real world. \cite{jaafra2019context} is another work, which integrated policy-iteration-based RL into the meta-learning pipeline and was tested on a lane-keeping task in a simulated environment in CARLA. It requires access to a known reward function.
Our work aims to learn both a policy and a reward function that can be effectively adapted to new tasks by learning from limited demonstrations, and we target more complicated decision-making problems under dynamic environments where the learning agent interacts intensively with its surrounding agents.
\section{Meta-Adversarial Inverse Reinforcement Learning}
\subsection{Preliminaries}
\subsubsection{Adversarial Inverse Reinforcement Learning}
\label{AIRL}
Adversarial Inverse Reinforcement Learning~\cite{fu2017learning} builds on adversarial learning and the maximum causal entropy Inverse Reinforcement Learning (IRL) framework~\cite{ziebart2010modeling}.
It introduces a discriminator to obtain the formulation of the reward function. The optimization in AIRL is similar to that in GANs~\cite{radford2015unsupervised}, where the discriminator and generator are updated in an adversarial way. The difference is that AIRL takes a special form of the discriminator $D_{\omega}(s,a)$ instead of directly using the output of the neural network $f_{\omega}(s,a)$ as the discriminator value. The discriminator is given as:
\begin{equation}
\label{disc}
D_{\omega}(s,a) = \frac{\exp(f_{\omega}(s,a))}{\exp(f_{\omega}(s,a))+\pi(a|s)}
\end{equation}
where $s$ is the state, $a$ is the action, $\omega$ is the discriminator's parameter, and $\pi$ is the updated policy.
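Equation (\ref{disc}) can be sketched directly; the same computation exposes the identity $\log D_\omega - \log(1-D_\omega) = f_\omega - \log\pi$, which underlies the reward and the entropy-regularized objective discussed later in this section (the numeric values below are placeholders):

```python
import numpy as np

def airl_discriminator(f_value, pi_prob):
    """AIRL discriminator D(s,a) = exp(f) / (exp(f) + pi(a|s))."""
    return np.exp(f_value) / (np.exp(f_value) + pi_prob)

def airl_reward(f_value, pi_prob):
    """r = log D - log(1 - D); algebraically this equals f - log pi(a|s)."""
    D = airl_discriminator(f_value, pi_prob)
    return np.log(D) - np.log(1.0 - D)

f, pi = 0.7, 0.25
r = airl_reward(f, pi)
# when f = log pi the discriminator is maximally confused: D = 0.5, r = 0
```

Because $1-D_\omega = \pi/(\exp f_\omega + \pi)$, the logit of the discriminator reduces to $f_\omega - \log\pi$, so the recovered reward is the learned $f_\omega$ corrected by the policy's log-probability.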
\subsubsection{Meta Learning}
Meta-learning involves a meta-learner and a learner. The learner learns the model $f_{\theta_{T_i}}$ from individual tasks $T_i$ in an inner loop, while the meta-learner trains the learner to get an overall common structure $f_{\theta}$ over a set of training tasks $T=\{T_i|i=1,...,N\}$ in an outer loop.
In our study, we leverage the REPTILE pipeline, as it does not unroll a computation graph or calculate any second derivatives. This feature facilitates performing multiple gradient steps on the training tasks in the inner loop. More importantly, it allows us to apply different update frequencies to the discriminator and the generator.
\subsection{Meta-Adversarial Inverse Reinforcement Learning}
\label{method}
\subsubsection{Loss Definition}
In our study, we use the cross-entropy loss $L_D(\omega)$ in Equation (\ref{D_loss}) as the objective of the discriminator, which performs a binary classification task. We label the expert data as $1$ and the generated data as $0$. Instead of using the full trajectory as input to the discriminator, which might result in high variance, we input single state-action pairs $(s, a)$ at individual time steps, which has proven to be more stable in learning~\cite{fu2017learning}.
\begin{multline}
\label{D_loss}
L_{D}(\omega) = E_{(s,a)\sim \pi_{E}} [-\log D_{\omega}(s,a)] \\
+ E_{(s,a)\sim \pi_{\phi}} [-\log (1-D_{\omega}(s,a))]
\end{multline}
The generator represents the action policy of the agent, whose optimization is based on a reward function. In our formulation, the reward function in Equation (\ref{reward}) is built upon the output of the discriminator and is considered the connection between the discriminator and the generator. The functionality of the reward function is to assign high values to generated data that confuses the discriminator.
\begin{equation}
\label{reward}
r(s,a) = \log(D_{\omega}(s,a)) - \log (1-D_{\omega}(s,a))
\end{equation}
The objective of the generator is to generate trajectories as similar as possible to the expert demonstrations. That is achieved by maximizing the total reward accumulated in an episode or up to a horizon $H$ in the policy optimization procedure. The optimized policy is in fact equivalent to an entropy-regularized policy.
\begin{align}
\label{E_pi}
-L_G(\phi) =& E_{\pi_{\phi}} [\sum_{t=0}^{H} (\log (D_{\omega}(s_t,a_t)) - \log(1 - D_{\omega}(s_t,a_t)))] \nonumber\\
=& E_{\pi_{\phi}} [\sum_{t=0}^{H} (f_{\omega}(s_t,a_t) - \log \pi(a_t|s_t))]
\end{align}
\subsubsection{Meta Optimization}
Within meta-training, we sample $N$ training tasks $\{T_i| i=1,...,N\}$ from the distribution $p(T)$. Each decision-making task includes $K$ expert demonstrations, $S_{T_i}=\{\tau_1, ..., \tau_K\}$, where each demonstration $\tau_k$ is a sequence of state-action pairs $(s,a)_t, t\in\{1,...,H\}$.
In our study, the inner loop neural network in the meta-learning structure consists of two components, the discriminator and the generator, i.e., the parameters of the meta-learning model include parameters from the two individual neural networks, $\theta = \{\omega, \phi \}$. The loss of the overall model also includes two parts, $L = \{L_D(\omega), L_G(\phi) \}$. For the updates in the inner loop, we use the ADAM optimizer for the discriminator and Trust Region Policy Optimization (TRPO) for the generator, which is consistent with the AIRL and GAIL methods.
\begin{align}
\frac{\partial L_{D}(\omega)}{\partial \omega} =& E_{(s,a)\sim \pi_{E}} [- \nabla_{\omega} \log (D_{\omega}(s,a))] \nonumber \\
& + E_{(s,a)\sim \pi_{\phi}} [- \nabla_{\omega} \log (1-D_{\omega}(s,a))] \label{grad_D}
\end{align}
\begin{align}
\frac{\partial L_{G}(\phi)}{\partial \phi} = E_{(s,a)\sim \pi_{\phi}} [&\nabla_{\phi} \log \pi_{\phi}(a|s) (\log D_{\omega}(s,a) \nonumber\\
&- \log (1-D_{\omega}(s,a)))] \label{grad_G}
\end{align}
Note that the discriminator is used to provide reward signals to the generator, so it is beneficial to have a good discriminator as soon as possible. Therefore, we use a high update frequency $k_D$ for the discriminator and a low frequency $k_G$ for the generator.
In the outer loop, the model weights update is as follows:
\begin{equation}
\label{meta_update}
\omega \leftarrow \omega + \beta_\omega \frac{1}{N} \sum_{i=1}^N (\omega_{T_i} - \omega), \phi \leftarrow \phi + \beta_\phi \frac{1}{N} \sum_{i=1}^N (\phi_{T_i} - \phi)
\end{equation}
where $\omega_{T_i}$ and $\phi_{T_i}$ are the updated weights for the discriminator and the generator in the task $T_i$, $\beta = \{\beta_\omega, \beta_\phi\}$ are the meta learning rates. Similar to update frequencies, we set $\beta_\omega > \beta_\phi$, which gives better training stability.
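The outer-loop update of Equation (\ref{meta_update}) is a REPTILE-style averaging step applied separately, with its own rate $\beta$, to the discriminator and generator parameter sets. A minimal illustration (the toy weight vectors are made up, not taken from the experiments):

```python
import numpy as np

def meta_update(theta, task_thetas, beta):
    """Outer-loop step: theta <- theta + beta * mean_i(theta_Ti - theta),
    where theta_Ti are the parameters after inner-loop adaptation on task T_i."""
    task_thetas = np.asarray(task_thetas, dtype=float)
    return theta + beta * np.mean(task_thetas - theta, axis=0)

theta = np.zeros(3)                                  # meta-initialization
adapted = [np.array([1.0, 0.0, 2.0]),                # task-adapted weights
           np.array([3.0, 0.0, 0.0])]
theta = meta_update(theta, adapted, beta=0.5)
# moves the initialization halfway toward the task average [2.0, 0.0, 1.0]
```

In the paper's setting this helper would be called twice per meta-iteration, once with $(\omega, \beta_\omega)$ and once with $(\phi, \beta_\phi)$, matching the larger learning rate used for the discriminator.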
In our study, the data used for the discriminator update consist of two sets: an expert dataset $\mathcal{S}^E$ that is collected from simulated expert demonstrations and a generated dataset $\mathcal{S}^G$ that is produced by the generator under the current policy $\pi_{\phi}$. Each episode, $\tau = \{(s_1, a_1), ..., (s_H, a_H)\}$, is a sequence of state-action pairs of $H$ steps, where $H$ may vary across episodes. The overall algorithm is given in Algorithm \ref{algo}.
\begin{algorithm}[t!]
\caption{Meta Adversarial IRL}
\label{algo}
\begin{algorithmic}[1]
\State Initialize $\theta=\{\omega,\phi\}$, $\omega$: parameters of the discriminator, $\phi$: parameters of the policy
\For {iteration = $1,..., M$}
\State Sample training tasks $T_i\sim p(T)$, $i\in \{1, 2, ..., N\}$
\For{all tasks $T_i$}
\State Obtain expert demos $\mathcal{S}_{T_i}^D$
\For{update = $1,2,...,K$}
\State Obtain generated demos $\mathcal{S}_{T_i}^G$ under $\pi_{\phi_{T_i}}$
\State Calculate $L_{D}(\omega_{T_i})$ with $\mathcal{S}_{T_i}^D$ and $\mathcal{S}_{T_i}^G$
\State Update discriminator parameters $\omega_{T_i} \leftarrow \omega_{T_i}-\alpha_D \nabla_{\omega} L_{D}(\omega_{T_i})$ for $k_D$ iterations
\State Calculate reward $r_{T_i}$ with $\mathcal{S}_{T_i}^G$ and $D_{\omega_{T_i}}$
\State Calculate $L_{G}(\phi_{T_i})$ with $\mathcal{S}_{T_i}^G$ and $r_{T_i}$
\State Update policy parameters $\phi_{T_i} \leftarrow \phi_{T_i}-\alpha_{G} \nabla_{\phi} L_{G}(\phi_{T_i})$ for $k_G$ iterations
\EndFor
\State Obtain parameters $\theta_{T_i}=\{\omega_{T_i}, \phi_{T_i}\}$
\EndFor
\State Update $\theta \leftarrow \theta + \beta \frac{1}{N} \sum_{i=1}^N (\theta_{T_i} - \theta)$
\EndFor
\end{algorithmic}
\end{algorithm}
\section{Experiments}
\subsection{Application Case}
We test our proposed method on decision-making tasks for lane-change scenarios in autonomous driving. The task is to decide when and where to make a lane-change maneuver. For example, when a vehicle receives a lane-change command from a route navigation module, our proposed method is to tell the vehicle at what time and to which vehicle gap it should make the lane change maneuver.
It is common that the expert demonstrations are collected by drivers with different personalities and in different regions. Particularly, in terms of the decision-making task, different driving styles show different preferences of vehicle merging gaps, safety distance, maximum/minimum accelerations, etc., resulting in different data distributions in the states and actions. The driving tasks in our study are considered drawn from such a distribution of different driving styles. As most off-the-shelf simulators have not provided the functionality of conveniently simulating diverse driving behaviors, we develop a new platform with oracle modules to simulate expert driving in multiple styles. The demonstrations include various behaviors, such as yielding behavior where the ego vehicle waits for a safety gap to merge in, overtaking behavior where the ego vehicle accelerates to merge to a gap in front of it on the adjacent lane, and aborting behavior where the ego vehicle aborts the lane changing when it detects potential collisions measured by a safety margin. In this study, we take the conservative and neutral driving styles as the meta-training tasks and the challenging style (aggressive driving) as the meta-testing task to check whether the generalized model really learns the encoded structure across tasks or just performs an average over tasks.
As decisions are generally considered discrete, we design the action space as discrete by joining the decisions on the lateral and longitudinal directions. Specifically, the longitudinal decision decides which vehicle gap to merge into and the lateral decision decides whether to make the lateral movement right now. An illustration of the simulated scenarios is shown in Figure~\ref{fig:simulation}.
The state space incorporates vehicle kinematic information of the ego vehicle and its surrounding vehicles. It includes information such as vehicle positions, speeds, accelerations, lane ids, etc., which constitute a 44-dimension state space. More experiment details are given in the next subsection.
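The joint discrete action space described above can be sketched as a product of longitudinal gap choices and a lateral command; the gap count and command names below are illustrative assumptions, as the paper does not enumerate them:

```python
from itertools import product

def build_action_space(n_gaps):
    """Joint discrete actions: (target gap index, lateral command).
    Gap indices label candidate merging gaps on the adjacent lane
    (hypothetical labels for illustration)."""
    lateral = ["keep_lane", "change_now"]
    return list(product(range(n_gaps), lateral))

actions = build_action_space(n_gaps=3)
# 3 candidate gaps x 2 lateral commands -> 6 joint discrete actions
```

A policy network over this space would output one logit per joint action, so the longitudinal and lateral decisions are selected simultaneously rather than in sequence.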
\begin{figure}
\centering
\includegraphics[width=0.6\linewidth]{graphs/simulation.png}
\setlength{\belowcaptionskip}{-10pt}
\caption{\small An illustration of a simulated scenario. The learning agent (in yellow) interacts with its adjacent vehicles (in blue) and decides which vehicle gap (in yellow) to merge into and whether to make the lane change right now (in red).}
\label{fig:simulation}
\end{figure}
\subsection{Experiment Details}
For the two meta-training tasks, i.e. the conservative and neutral driving styles, each has 3000 demonstrations available for training, while the meta-testing task, i.e. the aggressive driving task, only has access to a limited number of demonstrations, such as $\{5, 10, ..., 50\}$.
As mentioned in Sec.~\ref{method}, the discriminator is trained with a higher update frequency than the generator. We fine-tune this hyperparameter with the meta-training tasks and find the optimal numbers of iterations are $k_D=50$ for the discriminator and $k_G=1$ for the generator, respectively. Also, for the meta learning rates in the outer loop, we fine-tune them and use $\beta_{\omega}=0.5$ for the discriminator and $\beta_{\phi}=0.25$ for the generator. The meta-training process is conducted for 2500 iterations.
In meta-testing, we test the generalization performance with different numbers of available demonstrations, i.e. $\{5,10,...,50\}$. For each case, we conduct 10 update iterations upon the model learned from meta-training.
\begin{figure*}[h]
\centering
\includegraphics[width=0.7\linewidth]{graphs/f1_d_rwd.png}
\setlength{\belowcaptionskip}{-10pt}
\caption{\small Performance of the discriminator (left and middle graphs) and the generator (right graph). The left and middle graphs display the classification probabilities of the expert data and the generated data, respectively, from the discriminator. The high values for the expert data and low values for the generated data in early episodes (x-axis) indicate that the discriminator is able to distinguish the experts from the generator well. The values then go to around 0.5 at convergence for both the expert data and the generated data, as desired, due to the adversarial effect with the generator. The right graph shows the accumulated total rewards over training episodes. The increasing and plateauing trend indicates the improved learning ability of the generator. (Blue and green curves represent the meta-training tasks, and the red curve represents the online-testing results of the meta-testing task.)}
\label{fig:d_rwd}
\end{figure*}
\begin{figure*}[h]
\centering
\includegraphics[width=0.7\linewidth]{graphs/f2_steps_ratio.png}
\setlength{\belowcaptionskip}{-10pt}
\caption{\small Driving performance. The left and middle graphs depict the agent's maneuvering performance in training via the roll-out steps and the decision-making steps. The agent takes a shorter and more stable time to make lane changes after being trained for hundreds of episodes. The right graph displays the ratio curves of successful lane-change episodes over all experimenting episodes. The values are close to 1 after convergence, indicating that the agent can always make successful lane changes when requested.}
\label{fig:steps_ratio}
\end{figure*}
\begin{figure*}[h]
\centering
\includegraphics[width=0.7\linewidth]{graphs/f3_max_min.png}
\setlength{\belowcaptionskip}{-10pt}
\caption{\small Vehicle kinematic performance in maximum acceleration (left graph), minimum acceleration (middle graph), and maximum speed (right graph). The distinguishable curves in the three graphs indicate that the agent has learned different driving characteristics. Furthermore, the aggressive style (in red) shows higher values in maximum acceleration and maximum speed, and more negative values in minimum acceleration, indicating that the meta-training procedure indeed encodes the features of different driving behaviors rather than just performing an average over them.}
\label{fig:max_min}
\end{figure*}
\begin{figure*}[h]
\centering
\includegraphics[width=0.45\linewidth]{graphs/f5_test_curves.png}
\setlength{\belowcaptionskip}{-10pt}
\caption{\small Testing performance of three learning models. Our Meta-AIRL model (in red) shows higher rewards and episode success ratios than the other two baselines (the pretrained model in blue and the learning-from-scratch AIRL model in green) as the number of available samples (x-axis) increases. It achieves satisfactory generalization performance with only about 10 demonstrations from the meta-testing task, indicating the effectiveness of the fast adaptation of the proposed model.}
\label{fig:new_test_curves}
\end{figure*}
\begin{figure*}[h]
\centering
\includegraphics[width=0.9\linewidth]{graphs/histograms.png}
\setlength{\belowcaptionskip}{-10pt}
\caption{\small Histograms of four driving kinematics (maximum acceleration, maximum speed, minimum acceleration, and minimum speed) in testing for three learned models and the expert. The distributions of the four metrics in the Meta-AIRL model (red bars) are much closer to the expert distributions (black bars) than those of the other two models (blue and green bars).}
\label{fig:histogram}
\end{figure*}
\subsection{Evaluation Metrics}
\label{subsec: metrics}
We assess the effectiveness of the proposed method in four different aspects: the discriminator performance, the generator performance, the driving performance, and the vehicle kinematic performance. Within each aspect, we specify some metrics in detail.
\textbf{Discriminator Performance}.
The discriminator performance indicates the capability of distinguishing the expert data from the generated data. For a good discriminator, the value of $D_{\omega}(s_E,a_E)$, where $(s_E,a_E)$ is from expert dataset $\mathcal{S}^E$, should be as high as $1$ whereas the value of $D_{\omega}(s_G,a_G)$, where $(s_G, a_G)$ is from $\mathcal{S}^G$, should be as low as $0$ (Remember that we label expert data as $1$ and generated data as $0$). When trained with the generator, at optimal, the discriminator should output $0.5$ for both $D_{\omega}(s_E,a_E)$ and $D_{\omega}(s_G,a_G)$.
\textbf{Generator Performance}.
The generator's capability is illustrated by the accumulated rewards in the roll-out episodes calculated by Equation (\ref{reward}). The larger the accumulated value is, the better the performance is.
\textbf{Driving Performance}.
We use three metrics to evaluate the driving performance. Specifically, the roll-out step represents the duration of an episode before its termination (e.g. a crash, prolonged driving, or a successful lane change). The decision-making step measures the duration of the decision-making process in an episode (i.e. how much time the agent takes to make the final decision). The episode success rate is the ratio of successful episodes to the total number of experimenting episodes. The values of the first two metrics should stay steady when the agent has learned the lane-change behavior, and the success rate should be as high as $1$ at optimal.
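These three metrics can be computed directly from episode records, as in the toy computation below. The record field names are assumptions made for illustration, not the actual logging format used in our experiments.

```python
# Toy computation of the three driving metrics from hypothetical episode
# records; field names are illustrative assumptions.
episodes = [
    {"rollout_steps": 120, "decision_steps": 35, "success": True},
    {"rollout_steps": 200, "decision_steps": 90, "success": False},  # e.g. a crash
    {"rollout_steps": 110, "decision_steps": 30, "success": True},
]

mean_rollout_steps = sum(e["rollout_steps"] for e in episodes) / len(episodes)
mean_decision_steps = sum(e["decision_steps"] for e in episodes) / len(episodes)
success_rate = sum(e["success"] for e in episodes) / len(episodes)
```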
\textbf{Vehicle Kinematic Performance}.
To further explore the features of different driving styles, we evaluate several vehicle kinematics, such as the maximum acceleration, the minimum acceleration, and the maximum speed, since these metrics are representative enough to distinguish different driving habits. Ideally, they should lie in different but stable ranges for different driving styles.
\subsection{Results}
In this subsection, we show the meta-training results in Figures \ref{fig:d_rwd}, \ref{fig:steps_ratio}, and \ref{fig:max_min}, and the meta-testing results in Figures \ref{fig:new_test_curves} and \ref{fig:histogram}. In meta-training, we plot the performance curves of the two meta-training tasks as well as the online-testing curve of the meta-testing task adapted upon the current model parameters. In meta-testing, we compare the generalization performance of our model with two baselines. One is a pretrained model that is trained with all training tasks and then fine-tuned on the meta-testing task. This comparison is to show that the model from the meta-training phase does encode the patterns of different training tasks instead of just being integrated as an averaged model. The other baseline is a learning-from-scratch model that is trained based on AIRL on the meta-testing task with randomly initialized parameters. This comparison is to show how fast our Meta-AIRL model can adapt to a satisfactory level.
\subsubsection{Meta-training Results}
In Figure~\ref{fig:d_rwd}, the left and middle graphs show the probability curves of the discriminator. As mentioned earlier, a well-trained discriminator should output high probability values for the expert data in its early stage, as we use a high update frequency for it, and these values should then go down to 0.5 at optimal due to the adversarial effect from the generator; the opposite holds for the probability of the generated data. These two graphs in Figure \ref{fig:d_rwd} satisfactorily show this trend.
The generator's performance is reflected by the total reward curve in the right graph in Figure~\ref{fig:d_rwd}. The increasing and plateauing trend of the total rewards indicates the improved learning ability on the driving task.
Figure~\ref{fig:steps_ratio} shows the driving performance, which is represented by three metrics. The roll-out steps in the left graph and the decision-making steps in the middle graph show that the agent takes a long time to make lane changes in early episodes, but learns to perform stably with fewer steps after being trained for hundreds of episodes. The distinguishable curves in these two graphs represent different styles, indicating that the meta-training procedure does encode the characteristics of different tasks instead of just simply averaging over tasks. Furthermore, the online-testing curves in red show that the task execution duration for aggressive drivers is shorter than that of conservative- and neutral-style drivers, which is in line with our expectation.
The episode success ratio of the three driving styles is shown in the right graph in Figure \ref{fig:steps_ratio}. The curves increase quickly from low values to high ones and are quite close to 1, indicating that the trained agent can always make successful and acceptable lane-change decisions when requested.
Figure~\ref{fig:max_min} displays the vehicle kinematic performance of the driving styles in three metrics, i.e. the maximum acceleration, minimum acceleration, and maximum speed. The distinguishable curves indicate that the agent does learn different driving characteristics of different styles, with different value ranges for their corresponding styles. More importantly, the aggressive style shows higher values in maximum acceleration and maximum speed, and more negative values in minimum acceleration. This further verifies that the trained model indeed encodes the features of different driving behaviors instead of just performing an average over them.
\subsubsection{Meta-testing Results}
In meta-testing, we fine-tune the meta-trained model on the meta-testing task with different numbers of limited demonstrations, i.e. $\{5, 10, 15, \ldots, 50\}$, separately, to check how well the model adapts under different levels of data availability. The baseline models are also fine-tuned (for the pretrained baseline model) or trained (for the learning-from-scratch (AIRL) model) with the same numbers of demonstrations. The performance of the model is evaluated with 300 episodes rolled out in the meta-testing environment. The comparisons are shown in Figures~\ref{fig:new_test_curves} and \ref{fig:histogram}.
Figure~\ref{fig:new_test_curves} displays the total rewards (left graph) and the success ratios (right graph) of the three learning methods. It is easy to observe that the Meta-AIRL model (red curves) performs much better than the other two baselines (blue and green curves): its total reward curve goes up much faster and stays higher, and its episode success ratio curve also rises faster and is closer to 1. Such good performance is achieved with only about 10 expert demonstrations through adaptation.
Figure~\ref{fig:histogram} shows the histograms of four driving kinematics, i.e. maximum acceleration, maximum speed, minimum acceleration, and minimum speed, of the expert data and the testing results of the three learning methods. As the number of test cases (300 tests) of the three trained models is different from that of the expert (3000 demonstrations), to make a fair comparison we divide each bin's raw count by the corresponding total number of counts and the bin width; the resulting density is plotted on the y-axis, while the x-axis represents the metric values. The graphs visually show that the distributions of all four metrics in the Meta-AIRL model (red blocks) are much closer to the expert distributions (black blocks) when compared with the other two baselines (green and blue blocks).
The quantitative difference between the histogram from each learned model and that from the expert is summarized in Table~\ref{tab:table_example}. The deviation is measured by the $l_1$ distance and calculated with the values on the y-axis. We can observe that Meta-AIRL has lower deviation values, which confirms that Meta-AIRL compares favorably with the other two. Therefore, our Meta-AIRL model can faithfully recover the expert behavior with a limited number of samples for challenging novel tasks.
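The normalization and deviation measure used above can be sketched as follows; this is a minimal re-implementation of the procedure described in the text, not the evaluation code itself.

```python
# Normalize a histogram so that different sample sizes are comparable:
# each bin's raw count is divided by (total count * bin width), giving a
# probability density, matching the y-axis convention described above.
def normalized_hist(samples, bin_edges):
    n = len(samples)
    densities = []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        count = sum(1 for x in samples if lo <= x < hi)
        densities.append(count / (n * (hi - lo)))
    return densities

# l1 deviation between two histograms, computed bin-wise on the densities.
def l1_deviation(hist_a, hist_b):
    return sum(abs(a - b) for a, b in zip(hist_a, hist_b))

# Toy data standing in for expert demonstrations vs. model roll-outs.
edges = [0.0, 1.0, 2.0, 3.0]
expert = normalized_hist([0.5, 1.5, 2.5, 2.6], edges)
model = normalized_hist([0.4, 0.6, 1.5, 2.5], edges)
deviation = l1_deviation(model, expert)
```

Because both histograms are densities, the deviation is insensitive to the mismatch between the 300 test episodes and the 3000 expert demonstrations.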
\begin{table}[h]
\caption{\small Comparison of three approaches in terms of the deviation of the learned distribution from the expert distribution. The deviation is measured by the $l_1$ distance between the histograms.}
\label{tab:table_example}
\begin{center}
\begin{tabular}{|c||c|c|c|}
\hline
\textbf{Metrics} & \textbf{Meta-AIRL} & \textbf{Pretrained model}
& \textbf{AIRL model} \\
\hline
\textbf{max\_acce} & \bf{0.32} & 0.48 & 0.83 \\
\hline
\textbf{max\_speed} & \bf{0.08} & 0.18 & 0.27 \\
\hline
\textbf{min\_acce} & \bf{0.30} & 0.41 & 0.42 \\
\hline
\textbf{min\_speed} & \bf{0.15} & 0.28 & 0.33 \\
\hline
\end{tabular}
\end{center}
\end{table}
\section{Conclusion}
\label{sec:conclusion}
In this work, we propose a framework (Meta-AIRL) to integrate Meta-learning and Adversarial Inverse Reinforcement Learning for fast adaptation of the learned model to new tasks. The model learns both the policy and reward function simultaneously from demonstrations by invoking a discriminator and a generator, and adapts quickly to new tasks with limited data samples by using different update frequencies and meta-learning rates for the discriminator and the generator.
The proposed model has been applied to the challenging decision-making task in the autonomous driving domain, where the learning vehicle intensively interacts with surrounding agents. Meta-training results show that the agent indeed learns the inherent features of different driving styles instead of just performing an average over tasks. The comparison of Meta-AIRL with other baselines in the meta-testing results shows that our model can quickly adapt to new tasks with limited demonstrations and achieve satisfactory results comparable to those of the experts.
\begin{comment}
\section*{APPENDIX}
\subsection{Proof of Equation (\ref{E_pi})}
\label{appd: method}
The proof of Equation (\ref{E_pi}) is as follows:
\begin{equation*}
\begin{split}
\label{proof}
& -L_G(\phi) = E_{\pi_{\phi}} [\sum_{t=0}^{H} r(s_t,a_t)] \\
& = E_{\pi_{\phi}} [\sum_{t=0}^{H} \log (D_{\omega}(s_t,a_t)) - \log(1 - D_{\omega}(s_t,a_t))] \\
& = E_{\pi_{\phi}} [\sum_{t=0}^{H} \log \frac{e^{f_{\omega}(s_t,a_t)}}{e^{f_{\omega}(s_t,a_t)} + \pi(a_t|s_t)}
- \log \frac{\pi(a_t|s_t)}{e^{f_{\omega}(s_t,a_t)} + \pi(a_t|s_t)}]\\
& = E_{\pi_{\phi}} [\sum_{t=0}^{H} \log \frac{e^{f_{\omega}(s_t,a_t)}}{\pi(a_t|s_t)}] \\
& = E_{\pi_{\phi}} [\sum_{t=0}^{H} f_{\omega}(s_t,a_t) - \log \pi(a_t|s_t)]\\
\end{split}
\end{equation*}
\end{comment}
\bibliographystyle{IEEEtran}
Sterile neutrinos, a proposed fourth neutrino flavor, are a viable dark matter candidate.\cite{abazajian2001b} A sterile neutrino is an electroweak singlet and is thus consistent with limits from the {\it LEP} measurement \cite{LEP2006} of the width of the $Z^{0}$ gauge boson. The standard model does not provide any predictions about this proposed particle, but bounds on the mass and mixing angle parameter space can be placed using astronomy, cosmology, and supernovae.\cite{abazajian2001a,abazajian2001b,boyarsky2006a,boyarsky2006b,boyarsky2007,boyarsky2009b,boyarsky2009c,boyarsky2014,bulbul2014,chan2014} If sterile neutrino dark matter can radiatively decay,\cite{pal1982} x-ray observations\cite{boyarsky2006b,boyarsky2007,boyarsky2006a,boyarsky2009b,boyarsky2009c,boyarsky2014,bulbul2014} from galaxies and galaxy clusters can also be used to place bounds, i.e. $1\text{ keV} < m_{s}< 18 \text{ keV}$ and $\sin^{2} 2 \theta_{s} \lesssim1.93 \times 10^{-5} \left({m_{s}}/{\text{keV}}\right)^{-5.04}$.
The explosion mechanism of CCSNe remains an outstanding problem in physics as well. Spherically symmetric models do not easily explode and two-dimensional and three-dimensional models that do explode often have too little energy to match observations.\cite{janka2012} The problem remains that, although a shock forms successfully and propagates outward in mass, it loses energy to the photodissociation of heavy nuclei and becomes a standing accretion shock.
However, even one-dimensional models explode in simulations with enhanced neutrino fluxes, either from convection below the neutrinosphere \cite{wilson1988,wilson1993,book} or a QCD phase transition.\cite{fischer2011} In this vein, we have explored the resonant mixing between sterile and electron neutrinos (or antineutrinos) as a means to increase the early neutrino luminosity and revitalize the shock.\cite{hidaka2006,hidaka2007,warren2014,warren2016}
Hidaka and Fuller \cite{hidaka2006,hidaka2007} were the first to propose that sterile neutrinos could serve as an efficient neutrino energy transport mechanism in the supernova core. They used a one zone collapse calculation to study the resonant oscillations of a sterile neutrino with the mass and mixing angle of a warm dark matter candidate. They found that the resonant mixing of electron and sterile neutrinos can serve as an efficient method of transporting neutrino energy from the protoneutron star core, where high energy neutrinos are trapped, to the stalled shock to assist in neutrino reheating. This mechanism is highly sensitive to the feedback between neutrino oscillations and the local composition, energy transport, and hydrodynamics and warranted detailed numerical studies.\cite{warren2014,warren2016}
In Refs.~\refcite{warren2014} \& \refcite{warren2016}, coherent active-sterile neutrino oscillations were studied using the University of Notre Dame-Lawrence Livermore National Laboratory (UND/LLNL) code,\cite{book,bowers1982} a spherically symmetric general relativistic supernova model with detailed neutrino transport and hydrodynamics. These works studied the impact on shock reheating of sterile neutrinos with masses and mixing angles consistent with dark matter constraints. Sterile neutrino dark matter candidates can enhance the shock energy and lead to a successful explosion, even in a simulation that would not otherwise explode.\cite{warren2014}
\section{Matter-enhanced Neutrino Oscillations\label{sec:osc}}
With the inclusion of a sterile neutrino, the full neutrino mixing problem requires a complete understanding of all mass differences and mixing angles in the $4\times 4$ mixing matrix. However, for this work, only two-flavor mixing between electron neutrinos $\nu_{e}$ and sterile neutrinos $\nu_{s}$ (or their antiparticles) has been considered. This is sufficient for exploring the impact on the explosion energy, since electron neutrinos and antineutrinos dominate in the gain region during shock reheating.
Matter-enhanced neutrino oscillations in supernovae occur via the Mikheyev-Smirnov-Wolfenstein (MSW) effect.\cite{mikheyev1985,wolfenstein1978} As neutrinos propagate through matter, they experience an effective potential from charged and neutral current interactions due to forward scattering on baryonic matter, electrons, and other neutrinos. Each neutrino flavor will experience a different potential because $\nu_{e}$ experience both charged and neutral current interactions whereas $\nu_{s}$ do not experience any weak interactions. This can induce a coherent effect where maximum mixing is possible, even for a small vacuum mixing angle, when the phase arising from the potential difference cancels the phase due to the mass difference. The forward scattering potential experienced by electron neutrinos is
\begin{equation}
V(r) = \frac{3 \sqrt{2}}{2} G_{F} n_{B} \left(Y_{e} + \frac{4}{3} Y_{\nu_{e}} + \frac{2}{3} Y_{\nu_{\tau}} -1\right)~,
\end{equation}
where $G_{F}$ is the Fermi coupling constant, $n_{B}$ is the baryon number density, and $Y_{i}$ is the number fraction of species $i$. The antineutrino species will experience forward scattering potentials with the opposite sign. In the supernova environment, one can assume $Y_{\nu_{\mu}} = Y_{\nu_{\tau}} = 0$, since $\nu_{\mu}$ and $\nu_{\tau}$ neutrinos and antineutrinos are produced via pair emission processes and thus occur in equal numbers.
This results in an ``effective'' in-medium mixing angle,\cite{warren2014,warren2016}
\begin{equation}
\sin^{2} 2 \theta_{M} (r) = \frac{\Delta^{2} \sin^{2} 2 \theta_{s}}{(\Delta \cos 2 \theta_{s} - V(r))^{2} + \Delta^{2} \sin^{2} 2 \theta_{s}}~.
\end{equation}
From this expression, in which $\Delta = \Delta m^{2}/(2E)$ for a neutrino of energy $E$, it is simple to see that one can achieve maximal mixing in matter, even for small vacuum mixing angles. Such resonant mixing will occur for neutrinos with the energy at which $\sin^{2} 2 \theta_{M} = 1$,
\begin{equation}
E_{res} = \frac{\Delta m^{2}}{2 V(r)} \cos 2 \theta_{s}~.
\end{equation}
In this work, only coherent and adiabatic oscillations are considered.\cite{warren2014,warren2016} This ensures that all electron neutrinos of the given flavor with the resonance energy $E_{res}$ will oscillate to sterile neutrinos, and vice versa.\cite{mikheyev1985,wolfenstein1978} It is probable that incoherent, scattering-induced oscillations will be significant in the high matter densities of the central core, but this will not be a dominant effect and will be left to future work.
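A quick numerical check of the two expressions above can be sketched as follows. This is a toy illustration in arbitrary consistent units, not the simulation code; we take $\Delta = \Delta m^{2}/(2E)$ for a neutrino of energy $E$, the standard two-flavor oscillation scale, which is an assumption since $\Delta$ is not spelled out explicitly in the text.

```python
import math

# In-medium mixing angle sin^2(2 theta_M) and resonance energy E_res,
# with Delta = dm2 / (2E) for neutrino energy E (assumed definition).
def sin2_2theta_matter(E, dm2, sin2_2theta, V):
    delta = dm2 / (2.0 * E)
    cos_2theta = math.sqrt(1.0 - sin2_2theta)
    num = delta**2 * sin2_2theta
    return num / ((delta * cos_2theta - V)**2 + num)

def resonance_energy(dm2, sin2_2theta, V):
    return dm2 * math.sqrt(1.0 - sin2_2theta) / (2.0 * V)

# Toy numbers (arbitrary consistent units): even a very small vacuum
# mixing angle gives maximal mixing at the resonance energy.
dm2, s2, V = 1.0, 1.0e-5, 1.0e-3
E_res = resonance_energy(dm2, s2, V)
mix_at_res = sin2_2theta_matter(E_res, dm2, s2, V)
mix_off_res = sin2_2theta_matter(10.0 * E_res, dm2, s2, V)
```

At $E = E_{res}$ the matter term cancels the mass term, so $\sin^{2} 2\theta_{M} = 1$; away from resonance the effective mixing collapses back toward the (tiny) vacuum value.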
\section{Results\label{sec:results}}
A model that can successfully explode was used as a baseline for comparison. The UND/LLNL supernova model\cite{bowers1982,book} is a spherically symmetric, general relativistic hydrodynamic supernova model. We have used the $20 \text{ M}_{\odot}$ progenitor from Woosley \& Weaver \cite{woosley1995}.
Figure~\ref{fig:parameter} shows the enhancement to the explosion energy in a simulation with a sterile neutrino compared to a simulation without a sterile neutrino. Sterile neutrino masses were considered in the range $1 \text{ keV} < m_{s} < 10 \text{ keV}$ and mixing angles in the range of $10^{-11} < \sin^{2} 2\theta_{s} < 10^{-2}$, which includes the region that corresponds with dark matter candidates. The shaded regions show the parameter space allowed for sterile dark matter by recent x-ray flux measurements.\cite{abazajian2001a,boyarsky2006a,boyarsky2006b,boyarsky2007,boyarsky2009c} There is a large region of the parameter space that both enhances the explosion energy of core-collapse supernovae and satisfies constraints on sterile neutrino dark matter.
\begin{figure}[h]
\centering
\includegraphics[width=3.0in]{eparameter.eps}
\caption{Sterile neutrino mass $m_{s}$ and mixing angle $\sin^{2} 2 \theta_{s}$ parameter space. The region above the solid line enhances the supernova explosion energy by $1.01\times$ compared to a simulation without a sterile neutrino. The region above the dashed line enhances the explosion energy by $1.1\times$. The dark gray shaded region shows the parameter space allowed by x-ray flux measurements if sterile neutrinos make up 100\% of the observed dark matter mass.\cite{abazajian2001a,boyarsky2006a,boyarsky2006b,boyarsky2007,boyarsky2009c} The medium gray region is for 10\% of the observed dark matter mass and the light gray region is for 1\% of the observed dark matter mass. The solid black square shows the most recent best fit point from the x-ray flux from Boyarsky et al.\cite{boyarsky2014} and Bulbul et al.\cite{bulbul2014} Figure from Warren et al.\cite{warren2016}}
\label{fig:parameter}
\end{figure}
To illustrate how the explosion energy enhancement occurs, consider a single choice of sterile neutrino mass $m_{s} = 1$ keV and mixing angle $\sin^{2} 2\theta_{s} = 10^{-5}$. Any mass and mixing angle that causes an enhancement will show similar behavior.
The enhancement to the kinetic energy is not evident until $\sim 0.2$~s post-bounce, when neutrino reheating becomes important. By 1~s post-bounce, the explosion energy is enhanced by a factor of $2.2\times$ compared to the simulation without a sterile neutrino.
This dramatic enhancement to the kinetic energy is due to increased neutrino heating in the simulation with a sterile neutrino. The presence of a sterile neutrino enhances the luminosities of all neutrino and antineutrino species considered here. The enhancement to the neutrino luminosities does not become significant until $\sim 0.1$s post-bounce, which is when oscillations to a sterile neutrino become important\cite{warren2014}. The luminosities of all neutrino and antineutrino species are enhanced by about $2\times$ until 0.4s post-bounce, which corresponds to the enhancement in the kinetic energy. The increased neutrino luminosities emerging from the protoneutron star increase the rate of neutrino heating in the gain region and lead to the enhanced explosion energy.
Although oscillations are only allowed between electron neutrinos $\nu_{e}$ and sterile neutrinos $\nu_{s}$ (and their antiparticles), enhancements are seen in the luminosities of all neutrino species. This is because the oscillations $\nu_{e} \leftrightarrow \nu_{s}$ leads to a ``double reheating'' scenario: the $\nu_{e} \leftrightarrow \nu_{s}$ oscillations cause additional heating near the location of the neutrinosphere, which causes increased neutrino cooling in all flavors, and finally these enhanced neutrino luminosities reheat the stalled shock.
In the simulation with a sterile neutrino, the neutrinosphere radius increases by about $1.4\times$ between 0.1s and 0.4s post-bounce, which corresponds with the increased neutrino luminosities. The neutrinosphere radius is increased because the oscillations heat the protoneutron star surface by depositing energy at the location of the $\nu_{s} \rightarrow \nu_{e}$ resonance. However, this additional heating of the protoneutron star surface does not increase the temperature of the neutrinosphere, but instead causes it to expand outward.
Although the location of the neutrinosphere is increased by $\sim1.4\times$, the temperature at the neutrinosphere is roughly the same in both simulations. Thus the larger neutrinosphere radius leads to enhanced emission of neutrinos, but the neutrinos in both simulations have roughly the same characteristic temperature.
\section{Conclusions\label{sec:conc}}
Recent observations of galaxies and galaxy clusters indicate an unidentified emission line at $\sim 3.5$~keV. This line may be due to the radiative decay of sterile neutrino dark matter with $m_{s} \approx 7$~keV. If this is the case, bounds can be placed on the sterile neutrino mass and mixing angle from the observed photon energies and fluxes. Further observations are needed to confirm the presence of this line in additional dark-matter-dominated environments, such as dwarf spheroidal galaxies.
For oscillations between an electron neutrino and sterile neutrino, a large region of the sterile neutrino mass and mixing angle parameter space that is allowed by these observations leads to an enhancement of the explosion energy in core-collapse supernovae. The enhancement is due to increased neutrino heating in the gain region caused by increased neutrino luminosities of all neutrino and antineutrino flavors. The neutrino luminosities are enhanced due to a ``double reheating'' mechanism in the protoneutron star, where the surface of the protoneutron star is heated due to the oscillations between electron and sterile neutrinos, and in turn, the heating of the protoneutron star increases the luminosities of all neutrino and antineutrino flavors.
\section*{Acknowledgments}
Work at the University of Notre Dame supported by the U.S. Department of Energy under Nuclear Theory Grant DE-FG02-95-ER40934. One of the authors (MW) is also supported by U.S. National Science Foundation through the Joint Institute for Nuclear Astrophysics (JINA) Frontier center. This work was also supported in part by Grants-in-Aid for Scientific Research of the JSPS (20105004, 24340060).
\section{Introduction}
In this work we develop a theory of asynchronous networks and event driven dynamics. This theory constitutes an approach to network dynamics
that takes account of features encountered in networks from modern technology, engineering, and biology, especially neuroscience.
For these networks dynamics can involve a mix of distributed and decentralized control,
adaptivity, event driven dynamics, switching, varying network topology and hybrid dynamics (continuous and discrete).
The associated network dynamics will generally only be piecewise smooth, nodes may stop and later restart
and there may be no intrinsic global time (we give specific examples and definitions later).
Significantly, many of these networks have a \emph{function}. For example, transportation
networks bring people and goods from one point to another and neural networks may perform pattern recognition or
computation.
Given the success of network models based on smooth differential equations and methods based on statistical physics,
thermodynamic formalism and averaging (which typically lead to smooth network dynamics),
it is not unreasonable to ask whether it is \emph{necessary} to incorporate issues such as nonsmoothness in a theory of network dynamics.
While nonsmooth dynamics is more familiar in engineering than in physics,
we argue below that ideas from engineering, control and nonsmooth dynamics apply to many classes of network
and that nonsmoothness often cannot be ignored in the analysis of network function.
As part of these introductory comments, we also explain the motivation underlying our work, and
describe one of our main results: the modularization of dynamics theorem.
\subsection*{Temporal averaging}
Consider the analysis of a network where links are added and removed over time. Two extreme cases have been widely considered in the literature.
If the network topology switches rapidly, relative to the time scale of the phenomenon being considered,
then we may be able to replace the varying topology by the time-averaged topology\footnote{For example, if the input structure is additive -- see section~\ref{generalities}.}. Providing that the network topology is not
state dependent, the resulting dynamics will typically be smooth. On the other hand, if the topology changes
slowly enough relative to the time scale of interest, we may regard the topology as constant and again we obtain smooth
network dynamics. Either one of these approaches may be applicable in a system where time scales are clearly separated.
However, in many situations, especially those involving control or close to bifurcation,
\emph{changes in network topology may play a crucial role in network function}
and an averaging approach may fail or neglect essential structure. This is well-known for problems in optimized control where
solutions are typically nonsmooth and averaging gives the \emph{wrong} solutions (for example, in switching problems using thermostats).
For an example with variable network topology, we cite the effects of changing connection
structure (transmission line breakdown), or adding/subtracting a microgrid, on a power grid.
Neither averaging nor the assumption of constant network structure are appropriate tools: we cannot average the problems away.
Instead, we are forced to engage with an intermediate regime, where nonsmoothness (switching) and control play a crucial role in network function.
\subsection*{Spatial averaging and network evolution}
Much current research on networks is related to the description and understanding of complex systems~\cite{BA,CH,LLK,RMc}.
Roughly speaking, and avoiding a formal definition~\cite{LLK}, we regard a complex system as a large network of nonlinearly interacting dynamical systems where there
are feedback loops, multiple time and/or spatial scales, emergent behaviour, etc.
One established approach to complex networks and systems uses ideas from statistical mechanics and thermodynamic formalism.
For example, models of complex networks of interconnected neurons can sometimes be described in terms of their information
processing capability and entropy~\cite{RNL}.
These methods originate from applications to large interacting systems of particles in physics.
As Schr\"odinger points out in his
1943 Trinity College, Dublin, lectures~\cite{Sc}
\vspace*{0.06in}
\centerline{``...the laws of physics and chemistry are statistical throughout.''}
\vspace*{0.06in}
In contrast to the laws of physics and chemistry, evolution plays a decisive role in the development
of complex biological structure. Functional biological structures that provided the basis for evolutionary development can be quite small -- the nematode worm \emph{Caenorhabditis elegans} has 302 neurons. If the underlying small-scale structure still has functional relevance, an approach to complex biological networks based on statistical averages has to be limited: on the one hand, averaging over the entire network will likely ignore any small-scale structure; on the other hand, statistical averages have little or no meaning for small systems -- \emph{at least on a short time scale}.
Reverse engineering large biological structures
appears completely impractical; in part this is because of the role that evolution plays in the development of complex structure. \emph{Evolution works
towards optimization of function, rather than simplicity}, and is often local in character with the flavour of decentralized control.
Similar issues arise in understanding evolved engineering structures. For example,
the internal combustion engine of a car in 1950 was a simple device, whose operation was synchronized through mechanical means.
A modern internal combustion engine is structurally complex and employs a mix of
synchronous and asynchronous systems controlled by multiple computer processors, sensors and complex computer code.
\subsection*{Reductionism}
In nonlinear network dynamics, and complex systems generally, there is the
question as to how far one can make use of reductionist techniques~\cite{Anderson1972}, \cite[2.5]{LLK}.
One approach, advanced by Alon and Kashtan~\cite{KU} in biology,
has been the identification and description of relatively simple and small dynamical units,
such as non-linear oscillators or network motifs (small network configurations that occur
frequently in large biological networks~\cite[Chapter 19]{CH}). Their premise is that a
modular, or engineering, approach to network dynamics is feasible: identify building blocks,
connect together to form networks and then describe
dynamical properties of the resulting network in terms of the dynamics of its components.
\begin{quote}
``Ideally, we would like to understand the dynamics of the entire network based on the dynamics of the individual building blocks.'' Alon~\cite[page 27]{Alon}.
\end{quote}
While such a reductionist approach works well in linear systems theory, where a superposition principle holds, or
in the study of synchronization in weakly coupled systems of nonlinear approximately identical oscillators~\cite{PC,PN,BBH,GST}, it is
usually unrealistic in the study of heterogeneous networks modelled by a system of analytic nonlinear differential
equations: network dynamics may bear little or no relationship to the intrinsic (uncoupled) dynamics of nodes.
\subsection*{A theory of asynchronous networks}
The theory of asynchronous networks we develop provides an approach to
the analysis of dynamics and function in complex networks.
We illustrate the setting for our main result with a simple example. Figure~\ref{schem1} shows the schematics of a network where there is only intermittent
connection between nodes\footnote{Figure~\ref{schem1} can be viewed as representing part of a threaded computer program.
The events~$\is{E}^a,\dotsc,\is{E}^h$ will represent synchronization events -- evolution of associated threads is stopped until each thread has finished its computation and then variables are
synchronized across the threads.}.
We assume eight nodes $N_1, \dotsc, N_8$. Each node~$N_i$ will be given an initial state
and started at time $T_i \ge 0$. Crucially, we assume the network has a function: reaching designated
terminal states in finite time -- indicated on the right hand side of the figure.
Nodes interact depending on their state. For example, referring to figure~\ref{schem1}, nodes $N_1$, $N_2$ first interact
during the event indicated by~$\is{E}^a$.
Observe there is no
global time defined for this system but there is a partially ordered temporal structure: event~$\is{E}^c$ always occurs after
event~$\is{E}^a$ but may occur before or after event~$\is{E}^b$. We caution that while the direction of time is from left to right,
there is no requirement of moving from left to right in the spatial variables: the phase space dimension for nodes could be
greater than one and the initialization and termination sets could be the same. This example can be generalized to allow for
changes in the number and type of nodes after each event.
The intermittent connection structure we use may be viewed as an extension of the
idea of \emph{conditional action} as defined by Holland in the context of complex adaptive systems~\cite{Holl}.
\begin{figure}[h]
\centering
\includegraphics[width=0.8 \textwidth]{figureI_01.eps}
\caption{A functional feedforward network with 8 nodes}
\label{schem1}
\end{figure}
Our main result, stated and proved in part II of this work~\cite{BF1}, is a \emph{modularization of dynamics theorem} that yields a functional decomposition for a
large class of asynchronous networks.
Specifically, we give general conditions that enable us to represent a large class of
functional asynchronous networks as feedforward functional networks of the type illustrated in figure~\ref{schem1}. As a consequence, the
function of the original network can be expressed explicitly in terms of uncoupled node dynamics and event functions.
Nonsmooth effects, such as changes in network topology through decoupling of nodes and stopping and restarting of nodes, are one of the crucial ingredients
needed for this result. In networks modelled by smooth dynamical systems,
all nodes are effectively coupled to each other at all times and information propagates instantly across the entire network. Thus, a spatiotemporal decomposition
is only possible if the network dynamics is nonsmooth and (subsets of) nodes are allowed to evolve independently of each other for periods of time.
This allows the identification of dynamical units, each with its own function, that together comprise the dynamics and function of the entire network.
The result highlights a drawback of averaging over a network: the loss of information
about the individual functional units, and their temporal relations, that yield network function.
A functional decomposition is natural from an evolutionary point of view: the goal of an evolutionary process is optimization of
(network) function. Thus, rather than asking how network dynamics can be understood in terms of the dynamics of constituent
subnetworks -- the classical reductionist question -- the issue is how network function can be understood in terms of the
function of network constituents. Our result not only gives a satisfactory answer to Alon's question for a
large class of functional asynchronous networks but suggests an approach to determining key structural features of
components of a complex system that is partly based on an evolutionary model for development of structure.
Starting with a small well understood model, such as the class of functional feedforward networks described above,
we propose looking at bifurcation in the context of optimising a network function -- for example, understanding the effect on function
when we break the feedforward structure by adding feedback loops.
\subsection*{Relations with distributed networks}
An underlying theme and guide for our formulation and theory of asynchronous networks is that of efficiency and cost in large distributed networks.
We recall the guidelines given by Tanenbaum \& van Steen~\cite[page 11]{TvS} for scalability in large distributed networks
(italicised comments added):
{\small
\begin{itemize}
\item No machine has complete information about the (overall) system state. [\emph{communication limited}]
\item Machines make decisions based only on local information. [\emph{decentralized control}]
\item Failure of one machine does not ruin the algorithm. [\emph{redundancy}]
\item There is no implicit assumption of global time.
\end{itemize}
}
Of course, network dynamics, in either technology, engineering or biology, is likely to involve a complex mix of synchronous and asynchronous components.
In particular, timing (clocks, whether local or global) may be used to trigger the onset of events or processes as part of a weak mechanism for
centralized control or resetting. Evolution is opportunistic -- whatever works well will be adopted (and adapted) whether synchronous or asynchronous in character.
In specific cases, especially in biology, it may be a matter of debate as to which viewpoint -- synchronous or asynchronous -- is the most appropriate.
The framework we develop is sufficiently flexible to allow for a wide mix of synchronous and asynchronous structure at the global or local level.
\subsection*{Past work}
Mathematically speaking, much of what we say has significant overlap with
other areas and past work. We cite in particular, the general area of nonsmooth dynamics,
Filippov systems and hybrid systems (for example, \cite{Fi,As,Min,BBCK})
and time dependent network structures (for example, \cite{BBKP,LuAtayJost,GBG,Holme}). While the theory of nonsmooth dynamics
focuses on problems in control, impact, and engineering,
rather than networks, there is significant work studying bifurcation (for example~\cite{KRG,bbc}) which is likely to apply
to parts of the theory we describe.
From a vast literature on networks and dynamics, we cite Newman's text~\cite{Newman1} for a comprehensive
introduction to networks, and the very recent tutorial of Porter \& Gleeson~\cite{PG} which addresses questions related to our work,
gives an overview and introduction to dynamics on networks, and includes an extensive bibliography of past work.
\subsection*{Outline of contents}
After preliminaries in section~\ref{generalities},
we give in section~\ref{asyncsec} vignettes (no technical details) of several asynchronous networks from technology, engineering,
transport and neuroscience.
In section~\ref{sec:AsynNetModel}, we give a mathematical formulation of an asynchronous network with a focus on event driven dynamics,
and constraints. We follow in section~\ref{ampex} with two more detailed examples of asynchronous networks
including an illuminating and simple example of
a transport network which requires minimal technical background yet exhibits many characteristic features of an asynchronous network,
and a discussion of power grid models that indicates both the limitations and possibilities of our approach.
We conclude with a discussion of products of asynchronous networks in section~\ref{sec:Products} that illuminates some of the
subtle features of the event map. In part II~\cite{BF1}, we develop the
theory of functional asynchronous networks and give the statement and proof of the modularization of dynamics theorem.
\medskip
\noindent \emph{Dedication.}
The genesis of this paper lies in a visit in 2010 by one of us (MF) to work with Dave Broomhead at Manchester University.
Dave was very interested in asynchronous processes and local clocks. During the visit, he came up with a 2-cell
random dynamical systems model for the investigation of asynchronous dynamics and local time. This 2-cell model provided the
seed and stimulus for the work described in this paper. Dave's illness and untimely death
sadly meant that our planned collaboration on this work could not be realized.
\section{Preliminaries and generalities on networks}
\label{generalities}
\subsection{Notational conventions}
We recall a few mostly standard notational conventions used throughout.
Let $\mathbb{N}$ denote the natural numbers (the strictly positive integers),
$\mathbb{Z}_+$ denote the set of nonnegative integers,
$\mathbb{R}_+ = \set{x \in \mathbb{R} }{ x \ge 0}$, and
$\mathbb{R}(>0) = \set{x \in \mathbb{R}_+ }{ x \ne 0 }$. Given
$n \in \mathbb{N}$, define $\is{n} = \sset{1,\dotsc,n}$. Let
$\iz{n} = \sset{0, 1, \dotsc, n}$ and, more generally, for
$A\subset\mathbb{N}$ define $\bu{A}=A\cup\sset{0}$.
\subsection{Network notation}
We establish our conventions on network notation; we follow these throughout this work.
Suppose the network $\mathcal{N}$ has $k$ nodes, $N_1,\dotsc,N_k$. Abusing notation, we often let $\mathcal{N}$ denote both network and the set of nodes $\{N_1,\dotsc,N_k\}$.
Denote the state or phase space for $N_i$ by $M_i$\footnote{We assume the phase space for each node is a connected differential manifold -- usually a domain in $\mathbb{R}^n$ or
the $n$-torus, $\mathbb{T}^n$.} and set $\mathbf{M} = \prod_{i \in \is{k}} M_i$ -- the network phase space.
Denote the state of node $N_i$ by ${\mathbf{x}}_i \in M_i$ and the
network state by $\mathbf{X} = ({\mathbf{x}}_1,\dotsc,{\mathbf{x}}_k) \in \mathbf{M}$.
Smooth dynamics on $\mathcal{N}$
will be given by a system of ordinary differential equations (ODEs) of the form
\begin{equation}
\label{EQ1}
{\mathbf{x}}_i' = f_i({\mathbf{x}}_i;{\mathbf{x}}_{j_1},\dotsc, {\mathbf{x}}_{j_{e_i}}),\;\; i \in \is{k},
\end{equation}
where the components $f_i$ are at least $C^1$ (usually $C^\infty$ or analytic) and
the following conditions are satisfied. \\
\noindent (N1) For all $i \in \is{k}$, $j_1 < \dotsc < j_{e_i}$ are distinct
elements of $\is{k} \smallsetminus \{i\}$ (and so $e_i < k$). \\
Set $J(i) = \{j_1, \dotsc, j_{e_i}\} \subset \is{k}$, $i \in \is{k}$ ($J(i)$ may be empty). \\
\noindent (N2) For each $i \in \is{k}$, the evolution of $N_i$ depends nontrivially on the state of
each $N_j$, $j \in J(i)$, in the sense that there exists a
choice of ${\mathbf{x}}_i\in M_i$ and ${\mathbf{x}}_{j_s} \in M_{j_s}$, $j_s \in J(i) \smallsetminus \{j\}$,
such that $f_i({\mathbf{x}}_i;{\mathbf{x}}_{j_1},\dotsc, {\mathbf{x}}_{j_{e_i}})$ is not constant as a function of ${\mathbf{x}}_j$. \\
\noindent (N3) We generally assume the evolution of $N_i$ depends
on the state of $N_i$. If we need to emphasize that $f_i$ does not depend on ${\mathbf{x}}_i$
in the sense of (N2), we write
$f_i({\mathbf{x}}_{j_1},\dotsc, {\mathbf{x}}_{j_{e_i}})$, if $J(i) \ne \emptyset$.
If $J(i) = \emptyset$, we regard the dependence of~$f_i$ on~${\mathbf{x}}_i$ as
nontrivial iff~$f_i$ is not identically zero and then write
$f_i({\mathbf{x}}_i)$. Otherwise $f_i \equiv 0$.
\begin{rem}
\label{remdep}
Given network equations~\Ref{EQ1} which do not satisfy (N1--3), we can first redefine the $f_i$ so as to satisfy (N1). Next we
remove trivial dependencies so as to satisfy (N2). Finally, we check for the dependence of $f_i$ on the internal state ${\mathbf{x}}_i$ and modify the $f_i$ as necessary to achieve (N3).
If $f_i \equiv 0$, we can remove the node from the network.
Consequently, it is no loss of generality to always assume that (N1--3) are satisfied, with $f_i \not\equiv 0$. A consequence is that any \emph{network vector field}
$\mathbf{f}=(f_1,\dotsc,f_k):\mathbf{M} \rightarrow T\mathbf{M}$
can be uniquely written in the form~\Ref{EQ1} so as to satisfy (N1--3).
\end{rem}
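To make these conventions concrete, the following minimal numerical sketch (entirely hypothetical -- the particular functions $f_i$ are our choices, not taken from the text) realizes a 3-node network vector field of the form~\Ref{EQ1} with $J(1)=\{2\}$, $J(2)=\{1,3\}$ and $J(3)=\emptyset$, satisfying (N1--3):

```python
import numpy as np

# Hypothetical 3-node network vector field of the form
#   x_i' = f_i(x_i; x_{j_1}, ..., x_{j_{e_i}})
# with J(1) = {2}, J(2) = {1, 3}, J(3) = {} (1-based node labels).
def f1(x1, x2):
    return -x1 + np.tanh(x2)           # depends nontrivially on x2 (N2)

def f2(x2, x1, x3):
    return -x2 + np.sin(x1) + 0.5 * x3  # depends nontrivially on x1 and x3

def f3(x3):
    return 1.0 - x3                     # J(3) empty: intrinsic dynamics only

def network_vector_field(X):
    """The network vector field f = (f1, f2, f3) on M = R^3."""
    x1, x2, x3 = X
    return np.array([f1(x1, x2), f2(x2, x1, x3), f3(x3)])

def euler_step(X, dt=0.01):
    """One explicit Euler step of the (synchronous) network dynamics."""
    return X + dt * network_vector_field(X)
```

Each $f_i$ depends nontrivially on exactly the states indexed by $J(i)$, and $f_3$ illustrates a node with purely intrinsic dynamics.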
Let $M(k)$ denote the space of $k\times k$ \mbox{$0$\,-$1$ } matrices $\beta = (\beta_{ij})_{i,j \in \is{k}}$
with $\beta_{ii} = 0$, all $i \in \is{k}$.
Each $\beta \in M(k)$ determines uniquely a directed graph $\Gamma_\beta$ with vertices
$N_1,\dotsc,N_k$ and directed edge $N_j \rightarrow N_i$ iff $\beta_{ij} = 1$ and $i \ne j$. The matrix $\beta$ is the
adjacency matrix of $\Gamma_\beta$. We refer to $\beta$ as a \emph{connection structure} on $\mathcal{N}$.
If $\mathbf{f}:\mathbf{M} \rightarrow T\mathbf{M}$ is a network vector field satisfying (N1--3), then $\mathbf{f}$
determines a unique connection structure $C(\mathbf{f}) \in M(k)$ with associated graph $\Gamma_{C(\mathbf{f})}$. In order to specify the graph uniquely,
it suffices to specify the set of directed edges.
We define the \emph{network graph} $\Gamma = \Gamma(\mathcal{N},\mathbf{f})$ to be the directed graph $\Gamma_{C(\mathbf{f})}$. Thus, $\Gamma(\mathcal{N},\mathbf{f})$ has node
set $\mathcal{N} = \{N_1,\dotsc,N_k\}$ and a directed connection $N_j \rightarrow N_i$ will be an edge of $\Gamma$
if and only if $j \ne i$ and the dynamical evolution of $N_i$ depends nontrivially on the state of $N_j$.
\begin{rem}
\label{remdep2}
Our conventions are different from formalisms involving
multiple edge types (for example, see \cite{GST,AADF} for continuous dynamics and \cite{AF1} for discrete dynamics).
We allow at most one connection between distinct nodes of the network graph and do not use self-loops: connections encode dependence.
\end{rem}
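As an illustration of how a connection structure determines the network graph, here is a hypothetical sketch (0-based indices; the names `beta` and `edges_from_connection_structure` are ours) that lists the directed edges $N_j \rightarrow N_i$ of $\Gamma_\beta$ from a matrix $\beta \in M(k)$:

```python
# Hypothetical sketch: a connection structure in M(k) as a 0-1 adjacency
# matrix beta with zero diagonal; beta[i][j] = 1 iff the evolution of
# node i depends nontrivially on the state of node j (0-based indices).
def edges_from_connection_structure(beta):
    """Return the directed edges (j, i), meaning N_j -> N_i, of Gamma_beta."""
    k = len(beta)
    return [(j, i) for i in range(k) for j in range(k)
            if i != j and beta[i][j] == 1]

# Connection structure for the earlier 3-node example with
# J(1) = {2}, J(2) = {1, 3}, J(3) = {}:
beta = [[0, 1, 0],
        [1, 0, 1],
        [0, 0, 0]]
```

Note that, as in the text, self-loops are excluded and edges encode dependence only.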
\subsubsection{Additive input structure}
In many cases of interest, we have an additive input structure~\cite{F2014} and the components $f_i$ of $\mathbf{f}$ may be written
\begin{equation}
\label{EQais}
f_i({\mathbf{x}}_i;{\mathbf{x}}_{j_1},\dotsc, {\mathbf{x}}_{j_{e_i}}) = F_i({\mathbf{x}}_i) + \sum_{s=1}^{e_i} F_{ij_s}({\mathbf{x}}_{j_s} , {\mathbf{x}}_i),
\;i \in \is{k}.
\end{equation}
Additive input structure implies that there are no interactions between
inputs $N_j, N_\ell \rightarrow N_i$, as long as $j,\ell \ne i$, $j \ne \ell$, and allows us to add and subtract inputs and nodes in a consistent way.
We may think of ${\mathbf{x}}_i' = F_i({\mathbf{x}}_i)$ as defining the \emph{intrinsic dynamics} of the node.
\begin{rems}
\label{addrems}
(1) Additive input structure is usually assumed for modelling weakly coupled nonlinear oscillators and is required
for reduction to the standard Kuramoto phase oscillator model~\cite{K1984,EK,HI}.\\
(2) If we identify a null state ${\mathbf{z}}_j^\star$ for each node~$N_j$,
then the decomposition \Ref{EQais} will be unique if we require
$F_{ij}({\mathbf{z}}_j^\star,{\mathbf{x}}_i) \equiv 0$\footnote{For
identical phase spaces, assume inputs are asymmetric -- $F_{ij}\ne F_{i\ell}$, if $j \ne \ell$.
For symmetric inputs see~\cite{GST,AF2}.}.
If a node is in the null state then it has no output to other
nodes and is `invisible' to the rest of the network. If
we have an additive structure on the phase spaces $M_i$ (for example,
each~$M_i$ is a domain in~$\mathbb{R}^n$ or an $n$-torus $\mathbb{T}^n$)
it is natural to take ${\mathbf{z}}_i^\star = 0$. \\
(3) If $M_i = \mathbb{R}^n$ or $\mathbb{T}^n$, $i \in \is{k}$, and
$F_{ij_s}({\mathbf{x}}_{j_s},
{\mathbf{x}}_i) = G_{ij_s}({\mathbf{x}}_{j_s} - {\mathbf{x}}_i)$,
$i,j_s\in\is{k}$, the coupling is \emph{diffusive} (see~\cite[\S 2.5]{AF1} for general
phase spaces).
\end{rems}
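Diffusive coupling with additive input structure can be sketched numerically. For the standard Kuramoto phase oscillator model mentioned in (1), one takes $F_{ij}({\mathbf{x}}_j,{\mathbf{x}}_i)=K_{ij}\sin({\mathbf{x}}_j-{\mathbf{x}}_i)$; the following minimal sketch is ours (parameter names are arbitrary):

```python
import numpy as np

# Hypothetical sketch of additive, diffusive input structure on the torus:
#   theta_i' = omega_i + sum_j K[i, j] * sin(theta_j - theta_i),
# the Kuramoto form with G_ij(u) = K[i, j] * sin(u).
def kuramoto_field(theta, omega, K):
    """Network vector field: intrinsic frequencies plus diffusive inputs."""
    diffs = theta[None, :] - theta[:, None]   # diffs[i, j] = theta_j - theta_i
    return omega + np.sum(K * np.sin(diffs), axis=1)
```

When all phases are equal, the diffusive terms vanish and each node follows its intrinsic dynamics $\theta_i' = \omega_i$, consistent with the decomposition~\Ref{EQais}.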
\subsection{Synchronous networks}
Systems of ordinary differential equations like \Ref{EQ1}
give mathematical models for \emph{synchronous} networks. By synchronous, we mean
nodes are all synchronized to a global clock -- the terminology comes from computer science. Indeed,
if each node comes with a local clock, then all the clocks can be set to the same time provided
that the network is connected (we ignore the issue of delays, but see~\cite{LL}).
The synchronization of local
clocks is essentially forced by the model and the connectivity of the network graph; nodes cannot evolve independently of one another
unless the network is disconnected.
We recall some characteristic features of
synchronous networks.
\begin{description}
\item[Global evolution]
Nodes \emph{never} evolve independently of each other: if the state of any node is perturbed, then generically the
evolution of the states of the remaining nodes changes.
\item[Stopped nodes]
If a node (or subset of node variables) is at equilibrium or ``stopped'' for a period of time, it will remain stopped for all future time.
If a node has a non-zero initialization, it will never stop (in finite time).
\item[Fixed connection structure]
The connection structure of a synchronous network is fixed:
it does not vary in time and is not dependent on node states: one system of ODEs suffices to
model network dynamics.
\item[Reversibility] Solutions are uniquely defined in backward time.
\end{description}
\section{Asynchronous networks: examples}
\label{asyncsec}
In this section, we give several vignettes of asynchronous networks that illustrate
the main features differentiating them from synchronous networks. We amplify
two of these examples in section~\ref{ampex} after we have developed our
basic formalism for asynchronous networks.
\begin{exam}[Threaded and parallel computation]
\label{thread}
Threaded or parallelized computation provides an example of a discrete stochastic asynchronous network.
Computation based on a single processor (or single core of a processor) proceeds synchronously and sequentially.
The speed of the computation is dependent on the clock speed of the processor as the processor clock synchronizes the various steps in the computation.
In threaded or parallel computation, computation is broken into
blocks or `threads' which are then computed \emph{independently} of each other at a rate that is partly dependent on the clock rates of the
processors involved in the computation (these need not be identical). At certain points in the computation,
threads need to exchange information with other threads. This process
involves stopping and synchronizing (updating) the thread states: a thread may have to stop and wait for other threads to complete their
computations and update data before it can continue with its own computation.
Threaded computation is non-deterministic: the running and stopping times of
each thread are unpredictable and differ from run to run.
Each thread has its own clock (determined by its associated processor). Threads will be unaware of
the clock times of other threads except during the stopping and synchronization events which can be managed
synchronously (central control) or asynchronously (local control).
This example shows many characteristic features of an asynchronous network: nodes (threads) evolving independently of each other,
and stopping, synchronization and restarting events. The network also has a \emph{function} -- transforming a set of initial data into a set
of final data in finite time -- and there is the possibility of incorrect code that can lead to
a process that stops before the computation is complete (a deadlock), or errors where threads try to access a resource at the same time (race condition). \hfill \mbox{$\diamondsuit$}
\end{exam}
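The stopping and synchronization events described above can be sketched directly in threaded code. In the following hypothetical Python fragment (our own construction, not from the text), a barrier plays the role of a synchronization event: each thread computes independently, at its own rate, and then waits until all threads have finished before proceeding.

```python
import threading

# Hypothetical sketch of a synchronization event between threads: each
# thread runs an independent computation, then stops at the barrier until
# all threads arrive -- only then is the shared state consistent.
results = [0, 0, 0]
barrier = threading.Barrier(3)

def worker(i, data):
    results[i] = data * data   # independent computation ("thread dynamics")
    barrier.wait()             # stopping/synchronization event
    # After the barrier, every thread sees the synchronized results.

threads = [threading.Thread(target=worker, args=(i, i + 1)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# The network "function": initial data (1, 2, 3) transformed into
# final data results == [1, 4, 9] in finite time.
```

The running order of the threads differs from run to run, but the barrier guarantees the partially ordered temporal structure: no thread continues past the synchronization event before all have reached it.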
\begin{exam}[Power grids \& microgrids]
\label{pg}
A power grid consists of a connected network of various types of generators and loads connected by transmission lines.
A critical issue for the stability of the power grid is maintaining tight voltage frequency synchronization across the
grid in the presence of voltage phase differences between generators and loads and variation in generator outputs and loads.
We refer to Kundur~\cite{Ku} for classical power grid theory, D\"orfler \emph{et al.}~\cite{DCB} or Nishikawa \& Motter~\cite{Motter2015} for some more recent and
mathematical perspectives, and \cite{Ku2} for general issues and definitions on power system stability.
Historically, power grids have been centrally controlled and one of the main stability issues has been the effect on
stability of a sudden change in structure -- such as the removal of a transmission line, breakdown of a generator or big change in load.
Detailed models of the power grid need to account for a complex multi-timescale stiff system. Typically stability has been analyzed using numerical methods.
However, relatively simple classes of network models for power grids based on frequency and phase synchronization
have been developed which are applicable for the analysis of some stability and control issues, especially those described in the next paragraph.
We describe these models in more technical detail in section~\ref{ampex}.
Interest has recently focused on renewable (small) energy sources in a power grid (for example, wind and solar power) and
how to integrate microgrids based on renewable sources into the power grid using a mix of centralized and decentralized control.
Concurrent with this interest is the issue of smart grids: modifying local loads in terms of the availability and real time costs of power.
While the classical power grid model is that of a synchronous network, albeit with asynchronous features such as the effects on stability of
the breakdown of a connection (transmission line), these problems focus on asynchronous networks. For example, given a microgrid
with renewable energy sources such as wind and solar, time varying loads and buffers (large capacity batteries), how can the microgrid
be switched in and out of the main power grid while maintaining overall system stability? In this case, switching will be determined by
state (for example, frequency changes in the main power grid signifying changes in power demand or changes in the output of renewable sources or battery reserves)
and stochastic effects (resulting, for example, from load changes and the incorporation of smart grid technology).
This is already a tricky problem of distributed and decentralized control with just one microgrid; in the presence of many microgrids
there is the potential problem of synchronization of switching microgrids in and out of the main power grid. Similar problems occur in smart grids~\cite{T}.
Asynchronous features of power grid networks include variation in connection and node structure (separation, or islanding, of microgrids from main power grid),
state dependence of connection structure, synchronization and restarting events (during incorporation of microgrid into main grid).
\hfill \mbox{$\diamondsuit$}
\end{exam}
\begin{exam}[Thresholds, spiking networks and adaptation]
\label{spike}
Many mathematical models from engineering and biology incorporate thresholds. For networks,
when a node attains a threshold, there are often changes (addition, deletion, weights) in connections to other nodes.
For networks of neurons, reaching a threshold can result in a neuron firing (spiking) and short term connections to
other neurons (for transmission of the spike). For learning mechanisms, such as Spike-Timing Dependent Plasticity (STDP)~\cite{Ge},
relative timings (the order of firing) are crucial~\cite{GK,CD,MDG} and so each neuron, or connection between a pair of neurons, comes
with a `local clock' that governs the adaptation in STDP. In general, networks with thresholds, spiking and adaptation
provide characteristic examples of asynchronous networks where dynamics is piecewise smooth and hybrid -- a mix of
continuous and discrete dynamics. Spiking networks also highlight the importance of
efficient communication in large networks: spiking induced connections between neurons are brief and low cost.
There is also no central oscillator clock governing all computations, as there is in a single-processor computer.
These examples all fit well into the framework of asynchronous networks but, on account of the background knowledge required,
we develop the theory and formalism elsewhere~\cite{BF2}. \hfill \mbox{$\diamondsuit$}
\end{exam}
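A minimal sketch of threshold dynamics (a leaky integrate-and-fire node; the model and all parameter choices here are ours, not taken from~\cite{BF2}): when the state crosses the threshold, the node `fires', which would create a brief connection to downstream nodes, and the state is reset -- a simple instance of hybrid continuous/discrete dynamics.

```python
# Hypothetical leaky integrate-and-fire sketch: continuous dynamics
# v' = -v / tau + I between spikes, with a discrete reset at threshold.
def lif_step(v, input_current, dt=0.1, tau=1.0, threshold=1.0, v_reset=0.0):
    """One Euler step of a leaky integrate-and-fire node.

    Returns (new_state, spiked); a spike would trigger a brief,
    low-cost connection to downstream nodes.
    """
    v = v + dt * (-v / tau + input_current)
    if v >= threshold:
        return v_reset, True   # threshold event: spike and reset
    return v, False
```

Between threshold events the dynamics is smooth; the spikes supply the state-dependent, transient connection structure characteristic of asynchronous networks.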
\begin{exam}[Transport \& production networks]
\label{pass1}
We discuss transport networks first. For simplicity, we work with a single transport mode: trains. Typically, trains have to be scheduled to be
in a station for overlapping times (stopping, restarting, connections and local times) so that passengers can transfer between trains, or to stop in a passing loop (so that trains can pass on a
single-track line). In addition, a train can divide into two parts or two trains can be combined (variation in node structure, stopping and synchronization event).
Generally, transport networks will have asynchronous features and exhibit state dependent connection structure, local times and have a strong stochastic component (for
example, in stopping and restarting times). We develop a simple formal transport model in section~\ref{ampex} (and in part II~\cite{BF1})
that illustrates basic ideas and results in the theory of asynchronous networks but does not require extensive background knowledge.
A simple example of a production
network is a paint mixer. Assume a
controller which accepts inputs -- requested colour -- which, after computation
to find tint weights (`tint code'), signals a request to inject
selected tints according to the tint code into the base paint
which is then mixed. The output is a can of coloured and
fully mixed paint. Dynamics plays a limited role --
except possibly at the mixing stage (for example, if there is a sensor
that can detect an acceptable level of mixing).
For this network, there is a varying connection
structure determined by the signalling and tint injection.
A characteristic feature
of this, and many production networks, is the large variation in time
scales. Signalling will typically be very fast, injection moderately fast
and mixing rather slow.
If the times of inputs to the controller are stochastic (for example,
follow a Poisson process), then there will be issues of queueing and prioritization
of inputs. If it is intended to maximize usage of the production
facilities and avoid long waits, then it is natural to suppose that there
are several mixing units and the output of the tint units is switched
between mixer units according to their availability.
Of course, the paint mixer may be a small part of a much
larger distributed production network for which we can expect multiple time scales,
switching between production units, changing the output of production
units, stopping or restarting units, etc.
The control of large distributed production systems will typically
involve a mix of decentralized and centralized control.
Synthesis of proteins at the cellular level can be viewed as a generalization
of the paint mixer model. We refer the reader to Alon~\cite[8, Chapter 1]{Alon} for background and
more details, especially on transcription networks.
\hfill \mbox{$\diamondsuit$}
\end{exam}
We summarize some of the key features of asynchronous networks illustrated by all of the preceding examples.
\begin{enumerate}
\item Variable connection structure and dependencies between nodes. Changes in connection structure may depend on the state of the system
or be given by a stochastic process.
\item Synchronization events associated with stopping or waiting states of nodes.
\item Order of events may depend on the initialization of the system or stochastic effects.
\item Dynamics is only piecewise smooth and there may be a mix of continuous and discrete dynamics.
\item Aspects involving function, adaptation and control.
\item Evolution only defined for forward time -- systems are non-reversible.
\end{enumerate}
\section{A Mathematical model for asynchronous networks}
\label{sec:AsynNetModel}
In this section we formalize the notion of an asynchronous network.
Our focus is on deterministic (not stochastic)
and continuous time asynchronous networks which are autonomous (no explicit dependencies on time)
and we use the term `asynchronous network' as a synonym for a deterministic and autonomous continuous time asynchronous network.
\subsection{Basic formalism for asynchronous networks}
\label{oview}
Consider a
network $\mathcal{N}$ with $k$ nodes, $N_1,\dotsc,N_k$, and follow the conventions of section~\ref{generalities}:
each node $N_i$ has phase space $M_i$, and
$\mathbf{M} = \prod_{i = 1}^k M_i$ -- the network phase space. A network vector field $\mathbf{f}$ on $\mathbf{M}$ is
assumed to satisfy conditions (N1--3) and so
determines a unique connection structure $C(\mathbf{f}) \in M(k)$
and associated network graph $\Gamma_{C(\mathbf{f})}$ (no self-loops).
Stopping, waiting, and synchronization are characteristic features of
asynchronous networks. If nodes of a network are stopped or partially
stopped, then node dynamics will be constrained to subsets of node phase
space. We codify this situation by introducing a \emph{constraining node}
$N_0$ that, when connected to $N_i$, implies that dynamics on $N_i$
is constrained. We give the precise definition of constraint shortly (in 4.3); for the present, the reader
may regard a constrained node as stopped -- node dynamics is defined by the zero vector field.
We only allow connections $N_0 \rightarrow N_i$, $i\in \is{k}$, and do not
consider connections $N_i \rightarrow N_0$, $i \in \is{k}^\bullet$.
Henceforth we usually assume there is a constraining node
and let $\mathcal{N} = \sset{N_0, N_1,\dotsc,N_k}$ denote the set of nodes.
We emphasize that the constraining node $N_0$ has no dynamics and no associated phase space.
In a network with no constraints (there are no connections $N_0\rightarrow N_i$),
the constraining node~$N_0$ plays no role and can be omitted. If we allow constraints,
there may be more than one type of constraint on a node~$N_i$.
Suppose that there are $p_i \in \mathbb{N}$ different constraints on the node $N_i$, $i \in \is{k}$.
Set $\is{P} = (p_1,\dotsc,p_k) \in \mathbb{Z}_+^k$ and let $\CC{k;\is{P}}$
denote the space of $k \times (k+1)$ matrices
$\aa = (\aa_{ij})_{i\in \is{k}, j \in \iz{k}}$ such that
\begin{enumerate}
\item $(\aa_{ij})_{i,j \in \is{k}} \in M(k)$ (and so $\alpha_{ii} = 0, i \in \is{k}$).
\item $\aa_{i0} \in \is{p}_i^\bullet$, $i \in \is{k}$.
\end{enumerate}
If $\alpha \in \CC{k;\is{P}}$, we define the directed graph $\Gamma_\alpha$ by
\begin{enumerate}
\item $\Gamma_\alpha$ has node set $\mathcal{N}$.
\item For all $i,j \in \is{k}$, $N_j \to N_i$ is an edge iff $\aa_{ij}=1$.
\item $N_0 \rightarrow N_i$ is an edge iff $\aa_{i0} \ne 0$. We write $N_0 \stackrel{\ell}{\rightarrow} N_i$ if we
need to specify the constraint corresponding to $\ell \in \is{p}_i$.
\end{enumerate}
We usually abbreviate $\CC{k;\is{P}}$ to $\CC{k}$.
Let ${\boldsymbol{\emptyset}}\in\CC{k}$ denote the empty connection structure (no edges).
If $\aa\in \CC{k}$, let $\CSz{\aa}$ denote the first column $(\aa_{i0})_{i \in \is{k}}$ of $\aa$. We have a natural projection
$\pi: \CC{k} \rightarrow M(k)$; $\aa \mapsto \CSf{\aa}$, defined by omitting the column $\CSz{\aa}$. We write $\aa \in \CC{k}$ uniquely as
\[
\aa = (\CSz{\aa} \,|\, \CSf{\aa}).
\]
The column vector $\CSz{\aa}$ codifies the connections from the constraining node and
$\CSf{\aa}$ encodes the connections between the nodes $\{N_1,\dotsc, N_k\}$.
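To make the matrix description concrete, the following Python sketch (illustrative only, not part of the formal development) represents $\aa \in \CC{k;\is{P}}$ as a $k \times (k+1)$ integer matrix, checks the two defining conditions, and computes the projection $\pi(\aa) = \CSf{\aa}$:

```python
# Illustrative sketch: a constrained connection structure alpha in C(k;P)
# as a k x (k+1) integer matrix with columns indexed 0,...,k.  Column 0
# holds the constraint entries alpha_{i0} in {0,...,p_i}; the remaining
# k x k block is a 0-1 adjacency matrix with zero diagonal (no self-loops).

def is_connection_structure(alpha, P):
    """Check the two defining conditions of C(k;P)."""
    k = len(alpha)
    for i in range(k):
        if len(alpha[i]) != k + 1:
            return False
        if not 0 <= alpha[i][0] <= P[i]:        # alpha_{i0} in {0,...,p_i}
            return False
        if alpha[i][i + 1] != 0:                # zero diagonal
            return False
        if any(a not in (0, 1) for a in alpha[i][1:]):
            return False
    return True

def project(alpha):
    """The projection pi: C(k) -> M(k); drop the constraint column."""
    return [row[1:] for row in alpha]

# k = 2, P = (1,1): constraint N_0 -> N_1 and connection N_2 -> N_1.
alpha = [[1, 0, 1],
         [0, 0, 0]]
assert is_connection_structure(alpha, (1, 1))
assert project(alpha) == [[0, 1], [0, 0]]
```

The splitting $\aa = (\CSz{\aa} \,|\, \CSf{\aa})$ corresponds to slicing off column~0 of the matrix.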
Let $\aa \in \CC{k}$. We provisionally define an \emph{$\aa$-admissible vector field} $\mathbf{f}=(f_1,\dotsc,f_k)$
to be a network vector field such that for $i,j \in \is{k}$, $i \ne j$, $f_i$
depends on the state ${\mathbf{x}}_j$ of $N_j$ iff~$\aa_{ij} = 1$.
If there is a connection $N_0 \rightarrow N_i$ ($\alpha_{i0} \ne 0$), then there is a nontrivial constraint
on~$N_i$.
An $\aa$-admissible vector field has constrained dynamics if there are
connections from the constraining node. If $\alpha = {\boldsymbol{\emptyset}}$, nodes are uncoupled and unconstrained.
\begin{Def}
(Notation and assumptions as above.)
\begin{enumerate}
\item A \emph{generalized connection structure}
$\mc{A}$ is a (nonempty) set of connection structures on $\mathcal{N}$.
\item An \emph{$\mc{A}$-structure $\mathcal{F}$} is a set
$\mathcal{F} = \{\mathbf{f}^\aa \,|\, \aa \in \mc{A}\}$ of network vector fields such that
each $\mathbf{f}^\aa\in \mathcal{F}$ is $\aa$-admissible.
\end{enumerate}
\end{Def}
Interactions between nodes in asynchronous networks may vary
and can be state or time dependent or both.
We focus on state dependence and assume interactions and
constraints are determined by the state of the network through an
\emph{event map} $\mc{E}: \mathbf{M} \rightarrow \mc{A}$.
\begin{Def}
Given a network $\mathcal{N}$, generalized connection structure
$\mc{A}$, $\mc{A}$-structure $\mathcal{F}$, and surjective event map $\mathcal{E}:\mathbf{M} \rightarrow \mc{A}$,
the quadruple $\mathfrak{N} = (\mathcal{N},\mathcal{A},\mathcal{F},\mathcal{E})$
defines an \emph{asynchronous network}.
The network vector field of $\mathfrak{N}$ is
given by the state dependent vector field $\is{F}:\mathbf{M} \rightarrow T\mathbf{M}$ defined by
\[
\is{F}(\mathbf{X}) = \mathbf{f}^{\mathcal{E}(\mathbf{X})}(\mathbf{X}), \; \mathbf{X} \in \mathbf{M}.
\]
\end{Def}
\begin{rems}
(1) Subject to simple regularity conditions, which we give later, the network vector field $\is{F}$
will have a uniquely defined semiflow.\\
(2) In the sequel we often use the notation $\mathfrak{N}$ as shorthand for the asynchronous network
$(\mathcal{N},\mc{A},\mathcal{F},\mc{E})$ (by extension, $\mathfrak{N}^a$ will be shorthand for $(\mathcal{N}^a,\mc{A}^a,\mathcal{F}^a,\mc{E}^a)$, etc.).
\end{rems}
\begin{exam}
\label{conex}
Let $k = 2$ and $M_1 = M_2 = \mathbb{R}\times \mathbb{T}$. Suppose that dynamics of the uncoupled node $N_i$
is given by the smooth vector field $V_i(x_i,\theta_i) = (f_i(x_i),\omega_i)$, where $f_i(0)\ne 0$, $\omega_i \in \mathbb{R}$, $i \in \is{2}$.
Assume constrained dynamics for either node is defined on the invariant circle $\{0\} \times \mathbb{T}\subset \mathbb{R}\times \mathbb{T}$ by
the vector field $Z_i(x_i,\theta_i) = (0,\omega_i)$, $i \in \is{2}$.
When both nodes are constrained ($x_1 =x_2 = 0$), assume (constrained) coupling is defined by the vector field $H = (H_1,H_2)$, where
\begin{eqnarray*}
H_1(x_1,\theta_1,x_2,\theta_2)&=&(0,\omega_1+h(\theta_2-\theta_1)) \\
H_2(x_1,\theta_1,x_2,\theta_2)&=&(0,\omega_2+h(\theta_1-\theta_2)),
\end{eqnarray*}
and $h:\mathbb{T} \rightarrow \mathbb{R}$ is smooth.
The 2-tori $\{(x_1,x_2)\} \times \mathbb{T}^2$ are invariant under the flow of $H$
for all $(x_1,x_2)\in \mathbb{R}^2$.
Revert to standard (uncoupled and unconstrained) dynamics when $|\theta_1-\theta_2| \le \varepsilon$, where $0 < \varepsilon \ll 1$.
We describe the network dynamics using asynchronous network formalism.
Take the generalized connection structure $\mc{A} = \{{\boldsymbol{\emptyset}},\alpha_1,\alpha_2,\beta\}$, where
$\alpha_i=N_0 \rightarrow N_i$, $i \in \is{2}$, and
$\beta=N_0 \rightarrow N_1 \leftrightarrow N_2 \leftarrow N_0$.
Take $\mathcal{F} = \{\mathbf{f}^\gamma \,|\, \gamma \in \mc{A}\}$, where
\[
\mathbf{f}^{\boldsymbol{\emptyset}} = (V_1,V_2),\;
\mathbf{f}^{\alpha_1} = (Z_1,V_2),\;
\mathbf{f}^{\alpha_2} = (V_1,Z_2),\;
\mathbf{f}^{\beta} = (H_1,H_2).
\]
Define the event map $\mc{E}: (\mathbb{R}\times\mathbb{T})^2 \rightarrow \mc{A}$ by
\begin{eqnarray*}
\mc{E}(0,\theta_1,0,\theta_2) & = & \beta,\;\text{if } |\theta_1-\theta_2| > \varepsilon\\
& = & {\boldsymbol{\emptyset}}, \;\text{if } |\theta_1-\theta_2| \le \varepsilon\\
\mc{E}(0,\theta_1,x_2,\theta_2) & = & \alpha_1,\;\text{if } x_2 \ne 0\\
\mc{E}(x_1,\theta_1,0,\theta_2) & = & \alpha_2,\;\text{if } x_1 \ne 0\\
\mc{E}(x_1,\theta_1,x_2,\theta_2) & = & {\boldsymbol{\emptyset}},\;\text{if } x_1x_2 \ne 0.
\end{eqnarray*}
Network dynamics is given by the vector field $\is{F}(\is{X}) = \mathbf{f}^{\mc{E}(\is{X})}(\is{X})$.
Trajectories for $\is{F}$ are built from pieces of the trajectories of $\mathbf{f}^{\boldsymbol{\emptyset}}$, $\mathbf{f}^{\alpha_1}$, $\mathbf{f}^{\alpha_2}$, and $\mathbf{f}^{\beta}$.
Using the condition $f_i(0) \ne 0$, $i \in \is{2}$,
we see easily that $\is{F}$ has a well-defined semiflow $\Phi_t(x_1,\theta_1,x_2,\theta_2)$,
which is continuous in time $t \ge 0$ but is not necessarily continuous in $(x_1,\theta_1,x_2,\theta_2)$.
\hfill \mbox{$\diamondsuit$}
\end{exam}
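The event-driven selection of vector fields in example~\ref{conex} can be sketched in code. In the sketch below, the particular choices $f_i(x) = x^2 + 1$, the frequencies, and the coupling $h = \sin$ are illustrative (any smooth $h$ and $f_i$ with $f_i(0) \ne 0$ would do):

```python
# Illustrative sketch of the event map and network vector field of
# example conex (specific f_i, omega_i and h are assumed choices).
import math

eps = 0.1
omega = (1.0, 1.5)

def h(theta):                       # assumed smooth coupling function
    return math.sin(theta)

def V(i, x, th):  return (x**2 + 1.0, omega[i])   # f_i(x) = x^2+1, f_i(0) != 0
def Z(i, x, th):  return (0.0, omega[i])          # constrained: stopped in x

def event(X):
    x1, th1, x2, th2 = X
    if x1 == 0.0 and x2 == 0.0:
        return 'empty' if abs(th1 - th2) <= eps else 'beta'
    if x1 == 0.0: return 'alpha1'
    if x2 == 0.0: return 'alpha2'
    return 'empty'

def F(X):                           # network vector field F(X) = f^{E(X)}(X)
    x1, th1, x2, th2 = X
    a = event(X)
    if a == 'beta':                 # both constrained, phases couple
        return (0.0, omega[0] + h(th2 - th1), 0.0, omega[1] + h(th1 - th2))
    v1 = Z(0, x1, th1) if a == 'alpha1' else V(0, x1, th1)
    v2 = Z(1, x2, th2) if a == 'alpha2' else V(1, x2, th2)
    return v1 + v2                  # tuple concatenation

assert event((0.0, 0.0, 0.0, 1.0)) == 'beta'
assert event((0.0, 0.0, 0.5, 1.0)) == 'alpha1'
assert F((0.0, 0.0, 0.5, 1.0))[0] == 0.0        # node 1 stopped in x
```

Integrating $\is{F}$ numerically then reproduces the piecewise trajectories described in the example.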
\subsection{Local foliations}
\label{constraint:sec}
Conditions for a constrained node $N_i$ will be given in terms of \emph{foliations} of open subsets of $M_i$. We
start by recalling basic definitions on foliations (see~\cite{BL} for a detailed review).
A $p$-dimensional
smooth (always $C^\infty$ here) foliation $\mathcal{L}$ of the $m$-dimensional manifold $W$
consists of a partition $\{L_\aa \,|\, \aa \in \Lambda\}$ of $W$ into
connected sets, called \emph{leaves}, such that for every $x \in W$,
we can choose an open neighbourhood $U$ of $x$ and smooth embedding
$\psi:U \rightarrow \mathbb{R}^m$ such that for each leaf $L_\aa$, the components
of $\psi(L_\aa \cap U)$ are given by equations $x^{p+1} = \text{constant},
\ldots, x^m = \text{constant}$.
Each leaf of a foliation will be an immersed $p$-dimensional submanifold
of $W$. For our applications, we always assume leaves are properly embedded
closed submanifolds of~$W$, $p < m$, and
that the manifold $W$ has \emph{finitely} many connected components.
In general, a smooth foliation of the manifold $W$
will consist of a smooth foliation of each connected component of $W$
such that the dimension of leaves is constant on each connected component of $W$.
\begin{exams}
\label{folexam}
(1) Every smooth nonsingular vector field on $W$ defines a 1-dimensional
smooth foliation of $W$ (``flow-box'' theorem of dynamical systems). The
leaves are trajectories of the vector field. \\
(2) If $W = A \times B$, where $A$ and $B$ are manifolds, we have the product foliations $\mathcal{L}(A)$ and $\mathcal{L}(B)$ of $W$ defined by
$\mathcal{L}(A) = \{A \times \{b\} \,|\, b \in B\}$ and $\mathcal{L}(B) = \{ \{a\} \times B\,|\, a \in A\}$. Each leaf of
$\mathcal{L}(A)$ is transverse to every leaf of $\mathcal{L}(B)$. More generally, foliations $\mathcal{L}, \mathcal{L}'$ are transverse
if leaves are transverse.
A foliation of $W$, even by compact 1-dimensional leaves, need not have a transverse foliation. The best-known example is
the Hopf fibration which defines a foliation of $S^3$ into circles.
\hfill \mbox{$\diamondsuit$}
\end{exams}
Suppose that $\mathcal{L}$ is a $p$-dimensional smooth foliation of $W$ with leaves
$\set{L_\aa }{ \aa \in \Lambda}$. The \emph{tangent bundle
along the foliation} $\tau: \mathbb{L} \rightarrow W$ is the smooth vector sub-bundle of the tangent bundle
$TW$ of $W$ defined by
\[
\mathbb{L} = \bigcup_{x \in L_\aa,\, \aa \in \Lambda} T_x L_\aa \subset TW.
\]
\subsection{Constrained nodes and admissible vector fields}
\label{constrained:nodes}
Following section~\ref{oview}, we assume $\mathcal{N} = \{N_0,N_1,\dotsc,N_k\}$, where the nodes $N_i$ have phase space $M_i$, $i \in \is{k}$.
Fix a $k$-tuple $\is{P} = (p_1,\dotsc,p_k) \in \mathbb{Z}_+^k$.
In what follows, we assume $\is{P} \ne \is{0}$.
\begin{Def}
(Notation and assumptions as above.) A family $\is{C} = \{(\is{W}_i,\boldsymbol{\mathcal{L}}_i) \,|\, i \in \is{k}\}$ is a \emph{constraint structure} on
$\mathcal{N}$ if, for all $i \in \is{k}$ with $p_i > 0$,
\begin{enumerate}
\item $\is{W}_i = \{W_i^\ell \,|\, \ell \in \is{p_i}\}$ is a family of nonempty open subsets of $M_i$.
\item $\boldsymbol{\mathcal{L}}_i = \{ \mathcal{L}_i^\ell \,|\, \ell \in \is{p_i}\}$, where $\mathcal{L}_i^\ell$ is a smooth foliation of $W_i^\ell$.
\end{enumerate}
\end{Def}
\begin{rems}
\label{constrain:rems}
(1) If $p_i = 0$, there are no constraints on $N_i$. \\
(2)
If $p_i = 1$, we set $\is{W}_i = (W_i,\mathcal{L}_i)$ and $\mathcal{L}_i$ is a smooth foliation of the
nonempty open subset $W_i$ of $M_i$. If we allow the dimension of leaves to vary
between different connected components, and the families
$\is{W}_i$ to consist of disjoint open subsets of $M_i$, $i \in \is{k}$, then we can reduce to the
case $p_i \le 1$ by taking $W_i = \bigcup_\ell W_i^\ell$ and $\mathcal{L}_i$ to be the
foliation determined on $W_i$ by $\mathcal{L}_i|W_i^\ell = \mathcal{L}_i^\ell$, $\ell \in \is{p_i}$.
For our applications, it is no loss of generality to assume that $\is{W}_i$ always consists of disjoint open subsets of $M_i$, $i \in \is{k}$.
\end{rems}
We can now give a precise definition of an $\aa$-admissible vector field when there are constraints.
\begin{Def}
\label{Cadmiss}
Fix a constraint structure $\is{C} = \{(\is{W}_i,\boldsymbol{\mathcal{L}}_i) \,|\, i \in \is{k}\}$ on
$\mathcal{N}$ and
let $\aa \in \CC{k}$. A smooth vector
field $\is{f} = (f_1,\dotsc,f_k)$ on $\mathbf{M}$ is an
\emph{$\aa$-admissible vector field} if
\begin{enumerate}
\item For $i,j \in \is{k}$, $i \ne j$, $f_i$ depends on $\is{x}_j$ iff $\aa_{ij} = 1$.
\item If $\aa_{i0} = \ell > 0 $, then $f_i$ is tangent to the smooth foliation $\mathcal{L}_i^\ell$ at all points of $W_i^\ell \subset M_i$. Equivalently, $f_i|W_i^\ell$
defines a section of~$\mathbb{L}_i^\ell$, the tangent bundle
along the foliation $\mathcal{L}_i^\ell$.
\end{enumerate}
\end{Def}
\begin{exam}
\label{stop:constraint}
Suppose that $p_i = 1$ and $\aa_{i0} = 1$ so that there is a constraining connection
$N_0 \rightarrow N_i$. Let $\mathbf{f} = (f_1,\dotsc,f_k)$ be $\aa$-admissible,
$M_i = \mathbb{R}^\ell$, and $\mathcal{L}_i$ be an $(\ell-p)$-dimensional foliation of $M_i$ with
leaves given by $x_{r_1} = c_1,\dotsc, x_{r_p} = c_p$. The components
$f_i^{r_1},\dotsc,f_i^{r_p}$ of $f_i=(f_i^1,\dotsc,f_i^\ell)$ will be
identically zero and the node $N_i$ is partially stopped on each leaf.
This is the situation described in example~\ref{conex}
where the 1-dimensional foliation of $\mathbb{R}\times\mathbb{T}$ is $\{\{x\} \times \mathbb{T} \,|\, x \in \mathbb{R}\}$.
\hfill \mbox{$\diamondsuit$}
\end{exam}
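The tangency condition in definition~\ref{Cadmiss}(2) makes the leaves invariant under the constrained dynamics. A minimal numerical sketch (with an illustrative vector field on $\mathbb{R}^3$ and the foliation by planes $x_1 = c$, so the first component of an admissible field vanishes):

```python
# Illustrative sketch: a partially stopped node on R^3 with foliation
# leaves {x1 = c}; an admissible field must have zero first component,
# so Euler integration never leaves the starting leaf.

def f_constrained(x):
    # tangent to the 2-dimensional leaves x[0] = const (assumed example)
    return (0.0, x[0] - x[2], x[1])

def euler(f, x, dt, n):
    for _ in range(n):
        v = f(x)
        x = tuple(xi + dt * vi for xi, vi in zip(x, v))
    return x

x0 = (2.0, 1.0, 0.0)
xT = euler(f_constrained, x0, 0.01, 100)
assert xT[0] == x0[0]          # trajectory stays on the leaf x1 = 2
```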
\begin{rem}
Note that if $N_0 \rightarrow N_i \leftarrow N_j$, then the coupling from $N_j$ must respect
constraints on $N_i$ though now of course the dynamics on
a leaf of $\mathcal{L}_i$ will depend on the state of $N_j$.
\end{rem}
\subsection{The event map}
Let $\mc{A}$ be a generalized connection structure with constraint
structure $\is{C} = \{(\is{W}_i,\boldsymbol{\mathcal{L}}_i) \,|\, i \in \is{k}\}$.
Let $\mc{E}: \mathbf{M} \rightarrow \mc{A}$ be an event map and recall $\mc{E}$ is always assumed to be surjective.
For each $\aa \in \mc{A}$, define the \emph{event set} $E^\aa \subset \mathbf{M}$ by
\[
E^\aa = \{\mathbf{X} \in \mathbf{M} \,|\, \mathcal{E}(\mathbf{X}) = \aa\}.
\]
The event sets $\{E^\aa \,|\, \aa \in \mc{A}\}$ partition the network phase space $\mathbf{M}$. We require additional conditions on the event map when there are
constraints. These conditions relate the event sets to the constraint structure $\is{C} $ and are required because
foliations are only locally defined.
Let $\pi_i:\mathbf{M} \rightarrow M_i$ denote the projection map onto the phase space of $N_i$, $i \in \is{k}$. Given $i \in \is{k}$,
$\ell \in \is{p_i}$, define
\[
E_i^\ell = \bigcup_{\{\aa \,|\, \aa_{i0} = \ell\}} \pi_i(E^\aa) \subset M_i.
\]
\begin{Def}
The event map
$\mathcal{E}: \mathbf{M} \to \mc{A}$ is \emph{constraint regular} if
for all $i \in \is{k}$, $\ell \in \is{p_i}$, we have
\[
\overline{E_i^\ell} \subset W_i^\ell.
\]
\end{Def}
Henceforth we assume that event maps are constraint regular.
\subsection{Asynchronous network with constraints}
\begin{Def}
\label{defasync}
An asynchronous network
$\mathfrak{N}=(\mathcal{N}, \mc{A},\mathcal{F},\mathcal{E})$, with constraint structure $\is{C}$,
consists of
\begin{enumerate}
\item A finite set $\mathcal{N}= \{N_0,N_1,\dotsc,N_k\}$ of nodes with associated phase spaces $M_i$, $i \in \is{k}$.
\item A generalized connection structure $\mc{A} \subset \CC{k}$.
\item An $\mc{A}$-structure $\mathcal{F} = \{\mathbf{f}^\alpha \,|\, \alpha \in \mc{A}\}$ consisting of
admissible vector fields.
\item A (constraint regular) event map $\mc{E}:\mathbf{M} \rightarrow \mc{A}$.
\end{enumerate}
\end{Def}
\begin{rem}
\label{comprem2}
If $\mc{A}$ consists of a single connection structure~$\aa$ (with or without constraints),
then~$\mathcal{F}$ consists of one vector field
$\mathbf{f}=\mathbf{f}^\aa$, with dependencies given by~$\aa$. We
recover a synchronous network with dynamics defined by~$\mathbf{f}$
and a fixed connection structure.
\end{rem}
\subsection{Network vector field of an asynchronous network}
An asynchronous network $\mathfrak{N}$
uniquely determines the \emph{network vector field} $\is{F}$ by
\begin{equation}
\label{Deq}
\is{F}(\mathbf{X}) = \mathbf{f}^{\mathcal{E}(\mathbf{X})}(\mathbf{X}),\; \mathbf{X} \in \mathbf{M}.
\end{equation}
\begin{rems}
\label{abc}
(1) We may give a discrete version of definition~\ref{defasync}: each $\mathbf{f}^\alpha$
will be a network map $\mathbf{f}^\alpha:\mathbf{M}\rightarrow\mathbf{M}$ and dynamics is defined
by the map $\is{F}:\mathbf{M} \rightarrow \mathbf{M}$ given by \Ref{Deq}. \\
\noindent (2) Equation \Ref{Deq} defines a \emph{state dependent} dynamical system. Similar
structures have been used in engineering
applications (for example, \cite{Haddad}). We indicate in
section~\ref{Filippov_dig} a relationship with Filippov systems (this is explored further in~\cite{FA}).
However, the notion of an integral curve for an asynchronous network is generally
different from that of a Filippov system, see examples~\ref{path}(2). \\
(3) The network vector field does not uniquely determine $\mc{A}$, $\mathcal{E}$ or
$\mathcal{F}$. Usually, however, the choice of $\mc{A}$, $\mathcal{E}$ and $\mathcal{F}$
is naturally determined by the problem. Sometimes it is convenient to view the
network vector field as the basic object and regard asynchronous networks
as being \emph{equivalent} if they define the same network vector field. \\
(4) Since the event sets $\{E^\aa \,|\, \aa \in \mc{A}\}$ partition $\mathbf{M}$, the network vector field $\is{F}$
only depends on $\mathbf{f}^{\aa}|E^\aa$. Rather than assume that
$\mathbf{f}^{\aa}$ is smooth on $\mathbf{M}$, we could have required that each $\mathbf{f}^{\aa}$ was defined as
smooth map in the sense of Whitney~\cite{Whitney34} on $\overline{E^\aa}$ (and so extends smoothly to $\mathbf{M}$).\\
(5) Although the vector fields $\mathbf{f}^\alpha \in \mathcal{F}$ are assumed to satisfy (N1--3), this may \emph{not} hold for
$\mathbf{f}^\alpha|E^\alpha$, $\alpha \in\mc{A}$. Sometimes, but \emph{not} always, there is an
equivalent network $\mathfrak{N}'$ such that the dependencies of
each admissible vector field for $\mathfrak{N}'$ are not changed by restriction to the corresponding event set.
\end{rems}
\subsection{Integral curves and proper asynchronous networks}
\label{reg:sec}
We start with a definition of integral curve suitable for asynchronous networks.
\begin{Def}
\label{sol:def}
Let $\mathfrak{N}$ be an asynchronous network with network vector field
$\is{F}$. An \emph{integral curve} or \emph{trajectory} for $\is{F}$ with initial condition $\mathbf{X}_0 \in \mathbf{M}$ is a
map $\boldsymbol{\phi}:[0,T) \rightarrow \mathbf{M}$, $T \in (0,\infty]$, satisfying
\begin{enumerate}
\item $\boldsymbol{\phi}(0) = \mathbf{X}_0$.
\item $\boldsymbol{\phi}$ is continuous.
\item There exists a closed countable subset $D$ of $[0,T)$ such that for every $u \in D$, there exists $v \in D\cup \{T\}$, $v > u$,
such that
\begin{enumerate}
\item $(u,v) \cap D = \emptyset$.
\item $\boldsymbol{\phi}$ is $C^1$ on $(u,v)$ and $\boldsymbol{\phi}'(t) = \is{F}(\boldsymbol{\phi}(t))$, $t \in (u,v)$.
\item $\lim_{t \rightarrow u+} \boldsymbol{\phi}'(t) = \is{F}(\boldsymbol{\phi}(u))$.
\end{enumerate}
\end{enumerate}
\end{Def}
\begin{rems}
(1) It is routine to verify that if $\boldsymbol{\psi}:[0,S)\rightarrow \mathbf{M}$ is another integral curve with initial condition $\mathbf{X}_0$, then
$\boldsymbol{\psi} = \boldsymbol{\phi}$ on $[0,\min\{S,T\})$ (uniqueness).
As a consequence we can define the \emph{maximal}
integral curve $\boldsymbol{\phi}:[0,T_{\text{max}}) \rightarrow \mathbf{M}$ with initial condition $\mathbf{X}_0$.
In the sequel, integral curves will be maximal unless otherwise indicated.\\
(2) If $T = \infty$ in the definition, the trajectory $\boldsymbol{\phi}:\mathbb{R}_+ \rightarrow \mathbf{M}$ is \emph{complete}.\\
(3) The set $D$ may have accumulation points in $D$ -- accumulation is always from the left on
account of condition (3a). In the examples we consider, $D$ will always be a finite set.\\
(4) Typically, for each $u \in D$, there exists $\alpha \in \mc{A}$
such that $\mc{E}(\boldsymbol{\phi}(t)) = \alpha$ for $t\in(u,v)$
and so $\boldsymbol{\phi}((u,v))\subset E^\alpha$.
Condition (3c) implies that if $\mc{E}(\boldsymbol{\phi}(u)) = \beta \ne \alpha$, we
must have $\mathbf{f}^\alpha(\boldsymbol{\phi}(u)) = \mathbf{f}^\beta(\boldsymbol{\phi}(u))$.
\end{rems}
Without further conditions on the event map, the vector field
$\is{F}$ determined by an asynchronous network $\mathfrak{N}$
may not have integral curves through every point of the phase space.
\begin{exams}
\label{path}
\begin{figure}[h]
\centering
\includegraphics[width=0.9\textwidth]{figureI_02.eps}
\caption{Integral curves for the network vector field may not be well defined (a) and may differ from those given by the
Filippov conventions (b).}
\label{ws}
\end{figure}
(1) Take event sets $E^1 = \{(x_1,x_2) \,|\, x_1 \le 0\}$, $E^2 = \mathbb{R}^2 \smallsetminus E^1$,
and corresponding constant vector fields $\mathbf{f}^1 = (1,-2)$, $\mathbf{f}^2 = (-1,0)$ (see figure~\ref{ws}(a)).
Trajectories cannot be continued, according to definition~\ref{sol:def}, once they meet $x_1 = 0$.
One way round this problem is to define a new event set $E^3 = \partial E^1$ and the \emph{sliding} vector field $\mathbf{f}^3 = \mathbf{f}^1+\mathbf{f}^2 = (0,-2)$.
There is then a complete integral curve through every point of $\mathbb{R}^2$ and the corresponding semiflow $\Phi: \mathbb{R}^2 \times \mathbb{R}_+ \rightarrow \mathbb{R}^2$
is continuous. This approach is based on the Filippov
construction~\cite[Chapter 2, page 50]{Fi} where we take a vector field in the positive cone defined by
$\mathbf{f}^1, \mathbf{f}^2$ (often the unique convex combination
$\lambda \mathbf{f}^1 + (1-\lambda)\mathbf{f}^2$)
which is tangent to $\partial E^1=E^3$.
\noindent (2) Take event sets $F^1 = \{(x_1,x_2) \,|\, x_1 \ne x_2\}$,
$F^2 = \{(x_1,x_2) \,|\, x_1 = x_2\}$, and corresponding vector fields
$\mathbf{f}^1(x_1,x_2) = (1,-1)$, $\mathbf{f}^2(x_1,x_2) = (0,0)$ (see figure~\ref{ws}(b) and note that the event $F^2$ models a collision, after which dynamics stops).
Integral curves are defined for all initial conditions in $\mathbb{R}^2$ but the semiflow $\Phi: \mathbb{R}^2 \times \mathbb{R}_+ \rightarrow \mathbb{R}^2$
will not be continuous on $F^2$. Here the Filippov construction gives the wrong network solution -- the diagonal
$F^2$ is regarded as a removable singularity.
We discuss the relationship between asynchronous
networks and Filippov systems further in section~\ref{Filippov_dig}; see also~\cite{FA}.
\hfill \mbox{$\diamondsuit$}
\end{exams}
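The Filippov convex combination in examples~\ref{path}(1) can be computed directly. The sketch below (values taken from the example) solves for the $\lambda$ making $\lambda \mathbf{f}^1 + (1-\lambda)\mathbf{f}^2$ tangent to the switching line $x_1 = 0$:

```python
# Sketch: the Filippov convex combination for examples path(1).
# Tangency to the line x1 = 0 means the first component vanishes.

f1 = (1.0, -2.0)
f2 = (-1.0, 0.0)

# lam*f1[0] + (1-lam)*f2[0] = 0  =>  lam = f2[0] / (f2[0] - f1[0])
lam = f2[0] / (f2[0] - f1[0])
sliding = tuple(lam * a + (1 - lam) * b for a, b in zip(f1, f2))

assert lam == 0.5
assert sliding == (0.0, -1.0)      # tangent to x1 = 0
# The sum f1 + f2 = (0, -2) used in the text lies in the same positive
# cone and is likewise tangent to the switching line.
```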
\begin{Def}
\label{ammen2}
The asynchronous network $\mathfrak{N}$ is
\emph{proper} if for all $\mathbf{X}\in\mathbf{M}$, the maximal integral curve through $\mathbf{X}$ is complete: $\boldsymbol{\phi}_\mathbf{X}:[0,\infty)\rightarrow \mathbf{M}$.
\end{Def}
\begin{rems}
\label{rem:TrajCont}
(1) If $\mathfrak{N}$ is proper,
network dynamics is given by a semiflow $\Phi:\mathbf{M} \times \mathbb{R}_+ \rightarrow \mathbf{M}$.
Although $\Phi(\mathbf{X}, t)$ will be continuous as a function of $t \in \mathbb{R}_+$,
it need not be continuous as a function of $\mathbf{X} \in \mathbf{M}$ (see examples~\ref{path}(2)).\\
(2) In many cases of interest, some of the node phase spaces $M_i$ may be open domains in $\mathbb{R}^n$ with $\partial M_i \ne \emptyset$.
Here there is the possibility that trajectories may exit $\mathbf{M}$: if
$\boldsymbol{\phi}=(\boldsymbol{\phi}_1,\dotsc,\boldsymbol{\phi}_k)$ is a trajectory, there may exist $i \in \is{k}$ and a smallest $s > 0$ such that
$\boldsymbol{\phi}_i(s) \stackrel{\mathrm{def}}{=} \lim_{t \rightarrow s-}\boldsymbol{\phi}_i(t) \in \partial M_i$.
The maximal domain for $\boldsymbol{\phi}$ is necessarily $[0,s)$.
Under additional hypotheses, it may be possible to extend~$\boldsymbol{\phi}$ to a complete trajectory by
setting $\is{F}_j \equiv 0$ on $\mathbb{R}^n \smallsetminus M_j$, $j \in \is{k}$ (the $j$th component of $\boldsymbol{\phi}$ is
stopped when it meets the boundary of $M_j$). In this way, we can regard $\mathfrak{N}$ as proper.
We develop this point of view further in part II~\cite{BF1}.
\end{rems}
Event sets are typically defined by analytic and algebraic conditions that reflect
logical conditions on the underlying dynamics.
\begin{Def}
Let $\mathfrak{N}$ be an asynchronous network.
The event structure $\{E^\aa \,|\, \aa\in\mc{A}\}$ of $\mathfrak{N}$ is \emph{regular} if
the event sets $E^\aa$ are all semianalytic subsets\footnote{Defined locally by analytic equations and inequalities.
We refer to \cite{Gibson,Boc} for precise definitions and properties.} of $\mathbf{M}$.
\end{Def}
\begin{rem}
For the examples in this paper, event sets will typically be
semialgebraic -- defined by polynomial equalities and inequalities.
\end{rem}
\begin{Def}
\label{ammen}
An asynchronous network $\mathfrak{N}$ is \emph{amenable} if
\begin{enumerate}
\item The event structure $\{E^\aa \,|\, \aa \in \mc{A}\}$ is regular.
\item If $\mathbf{X} \in E^\aa$, $\aa \in \mc{A}$, there exists
a maximal $t(\mathbf{X})\in (0, \infty]$
such that the integral curve $\boldsymbol{\phi}_\mathbf{X}$ through $\mathbf{X}$ is defined on $[0,t(\mathbf{X}))$ and
\[
\boldsymbol{\phi}_\mathbf{X}(t) \in E^\aa,\;\; t \in [0,t(\mathbf{X})).
\]
\item Either $M_i$ is compact without boundary or $M_i = \mathbb{R}^{n_i}$ and
vector fields have at most linear growth on $M_i$: $\exists a,b > 0$ such that
\[
\|\is{f}_i^\aa(\mathbf{X})\| \le a+ b \|\mathbf{X}\|, \;\mathbf{X} \in \mathbf{M},\; \aa \in \mc{A}.
\]
\end{enumerate}
\end{Def}
\begin{rems}
\label{defammen}
(1) Condition (2) of definition~\ref{ammen} suggests that the vector field $\is{f}^{\aa}$ should in some sense be tangent to $E^\aa$.
The issue of tangency can be made precise using the regularity assumption which implies that $E^\aa$ has a locally finite
stratification into submanifolds without boundary (for example, the canonical Whitney regular stratification of
each event set~\cite{Gibson,Mather}). This allows us to unambiguously define tangency at points of $E^\aa$ which do not lie in the boundary
of strata. Care is needed at points lying in the boundary of strata and in the example below we indicate how the geometric
structure of the event set can impose strong constraints on associated vector fields.\\
(2) If an event set is a closed submanifold without boundary, it follows from definition~\ref{ammen}(2) that
any trajectory that meets the event set will never leave the event set.\\
(3) In part II we extend definition~\ref{ammen}(3) to allow for trajectories to exit the domain and stop (see remark~\ref{rem:TrajCont}(2)).\\
(4) We may extend the definition of amenability to include asynchronous networks which are equivalent to
an amenable network.
\end{rems}
\begin{exams}
\label{amex}
Take $k = 2$, $M_1 = M_2 = \mathbb{R}$. \\
(1) As event sets take the semialgebraic subsets of $\mathbb{R}^2$ defined by
\[
E^1 = \{(x,0) \,|\, x < 0\},\;E^2 = \{(0,y) \,|\, y > 0\},\; E^0 = \mathbb{R}^2\smallsetminus \bigcup_{i=1}^2 E^i.
\]
The event sets are neither open nor closed. We define associated vector fields $\mathbf{f}^j$, $j \in \iz{2}$, on $\mathbb{R}^2$ by
\[
\mathbf{f}^1(x,y) = (1,0),\; \mathbf{f}^2(x,y) = (0,-1),\; \mathbf{f}^0 = \mathbf{f}^1+\mathbf{f}^2.
\]
It is a simple exercise to verify that the network is amenable and proper but that the associated semiflow
$\Phi: \mathbb{R}^2 \times \mathbb{R}_+ \rightarrow \mathbb{R}^2$ is not continuous along $E^1$ or $E^2$ (it is continuous at $(0,0)$).
\noindent (2) Suppose that the event set $E^1$ is the cusp defined by $\{(x,y) \in \mathbb{R}^2 \,|\, x \ne 0,\;y^2 = x^3\}$ and
$E^2 = \mathbb{R}^2 \smallsetminus E^1$.
In this case any smooth ($C^1$ suffices) vector field on $\mathbb{R}^2$
which is tangent to $E^1$ must vanish at $\{(0,0)\}$ (an example of such a vector field is $(2ax,3ay)$, $a \in \mathbb{R}$). If we require amenability,
then all trajectories which meet $E^1$ will never leave $E^1$.
\hfill \mbox{$\diamondsuit$}
\end{exams}
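The tangency claim in example (2) is easy to verify numerically: along the parametrization $(t^2,t^3)$ of the cusp, the directional derivative of $g(x,y) = y^2 - x^3$ in the direction of the field $(2ax,3ay)$ equals $6a(y^2 - x^3)$, which vanishes on the curve. A minimal sketch (the value of $a$ is arbitrary):

```python
# Numeric check (sketch) that the field (2ax, 3ay) from examples amex(2)
# is tangent to the cusp y^2 = x^3 and vanishes at the cusp point.

a = 1.7
def v(x, y):  return (2 * a * x, 3 * a * y)

def dg(x, y, vx, vy):               # directional derivative of g = y^2 - x^3
    return -3 * x**2 * vx + 2 * y * vy

for t in (0.5, 1.0, 2.0, -1.5):
    x, y = t**2, t**3               # parametrize the cusp
    vx, vy = v(x, y)
    assert abs(dg(x, y, vx, vy)) < 1e-9

assert v(0.0, 0.0) == (0.0, 0.0)    # field vanishes at (0,0)
```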
\begin{prop}
\label{reglemma1}
An amenable asynchronous network is proper.
\end{prop}
\begin{proof}We give details for the case when $\mathbf{M}$ is compact. Fix $\mathbf{X} \in \mathbf{M}$. Suppose that $\boldsymbol{\phi}_i: [0,s_i) \rightarrow \mathbf{M}$ are forward
trajectories for $\is{F}$ through $\mathbf{X}$, $i \in \is{2}$. Using uniqueness of solutions of
differential equations and definition~\ref{ammen}(2), it is easy to see that $\boldsymbol{\phi}_1 = \boldsymbol{\phi}_2$ on $[0,s_1) \cap [0,s_2)$.
It follows that if we define
\[
T = \sup\set{t}{\text{there is a trajectory $\boldsymbol{\psi}:[0,t)\rightarrow \mathbf{M}$ through $\mathbf{X}$}}
\]
then we have a unique trajectory $\boldsymbol{\phi}: [0,T) \rightarrow \mathbf{M}$ through $\mathbf{X}$. If $T = \infty$, we are done. But if $T < \infty$, then
we can extend $\boldsymbol{\phi}$ to $[0,T]$ by $\boldsymbol{\phi}(T) = \lim_{t \rightarrow T-} \boldsymbol{\phi}(t)$ (the limit exists since $\is{F}$ is bounded on the compact space $\mathbf{M}$ and so $\boldsymbol{\phi}$ is Lipschitz). If $\boldsymbol{\phi}(T) \in E^\aa$ then by
definition~\ref{ammen}(2), $\boldsymbol{\phi}$ extends to $[0,T+t(\boldsymbol{\phi}(T)))$, where $t(\boldsymbol{\phi}(T)) > 0$. This contradicts the maximality of $T$ and so
$T = \infty$.
\end{proof}
\begin{rems}
(1) Proposition~\ref{reglemma1} says nothing about the number of changes in
the event map that occur along a trajectory. Without further conditions,
there may be a countable infinity of changes with countably many accumulation points (see definition~\ref{sol:def}
and note the analogy with Zeno-like behaviour~\cite{BBCK}).\\
(2) As shown in examples~\ref{amex}(1), the semiflow given by proposition~\ref{reglemma1} need not be continuous (as
a function of $(\mathbf{X},t)$). \\
(3) Amenability is sufficient but not necessary for properness.
\end{rems}
\subsection{Semiflows for amenable asynchronous networks}
Assume $\mathfrak{N}$ is an amenable asynchronous network with network vector field $\is{F}$.
For each $\aa \in \mc{A}$, denote the flow of $\is{f}^\aa$ by $\Phi^\aa$.
Let $\mathbf{X} \in \mathbf{M}$ and $\boldsymbol{\phi}:\mathbb{R}_+ \rightarrow \mathbf{M}$ be the maximal integral curve through $\mathbf{X}$ for $\is{F}$. It follows from
the definition of integral curve and amenability that there is a countable closed subset $D = D(\mathbf{X})$ of $\mathbb{R}_+ \cup \{\infty\}$ such that for each
$u \in D$, there exist unique $\alpha \in \mc{A}$, $v = v(u) \in D$ such that
\[
(u,v) \cap D = \emptyset, \; \mc{E}(\boldsymbol{\phi}(v)) \ne \alpha, \; \boldsymbol{\phi}([u,v)) \subset E^\alpha.
\]
(For $\mc{E}(\boldsymbol{\phi}(u)) = \alpha$ we need amenability.)
\begin{prop}
Let $\mathfrak{N}$ be an amenable asynchronous network. Suppose that for all $\mathbf{X} \in \mathbf{M} $, $D(\mathbf{X})$ is finite
and set $D(\mathbf{X}) = \{ t^\mathbf{X}_j \,|\, 0 = t^\mathbf{X}_0 < t^\mathbf{X}_1 < \dotsc < t^\mathbf{X}_N < t^\mathbf{X}_{N+1} = \infty\}$, $\alpha^\mathbf{X}_j = \mc{E}(\boldsymbol{\phi}(t^\mathbf{X}_j))$, $j \in \iz{N}$.
The semiflow $\Phi: \mathbf{M} \times \mathbb{R}_+ \rightarrow \mathbf{M}$ for $\is{F}$ is given in terms of the flows $\Phi^\aa$ by
\[
\Phi_\mathbf{X}(t) = \Phi^{\aa_p^\mathbf{X}}(\cdots \Phi^{\alpha_1^\mathbf{X}}(\Phi^{\aa^\mathbf{X}_0}(\mathbf{X},t^\mathbf{X}_1),t^\mathbf{X}_2-t^\mathbf{X}_1)\cdots, t -t^\mathbf{X}_p),
\]
where $t \in [t^\mathbf{X}_p,t^\mathbf{X}_{p+1})$, $p \in \iz{N}$.
\end{prop}
\proof For $t \in [t^\mathbf{X}_p,t^\mathbf{X}_{p+1})$, $\Phi_\mathbf{X}(t) = \Phi^{\aa_p^\mathbf{X}}(\mathbf{X}_p,t-t^\mathbf{X}_p)$ is the solution to $\mathbf{X}' = \mathbf{f}^{\alpha_p^\mathbf{X}}(\mathbf{X})$
with initial condition $\mathbf{X}_p = \Phi_\mathbf{X}(t_p^\mathbf{X})$. \qed
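For constant admissible vector fields, the composition formula reduces to concatenating affine flows. A minimal sketch with a single event time $t_1$ (all values illustrative):

```python
# Sketch: the semiflow of an amenable network as a composition of the
# flows Phi^alpha on successive event intervals (constant fields here).

def flow(v, X, t):                  # flow of a constant vector field v
    return tuple(x + t * vi for x, vi in zip(X, v))

def semiflow(X, t, t1, f0, f1):
    """One event change at time t1: field f0 before, f1 after."""
    if t <= t1:
        return flow(f0, X, t)
    return flow(f1, flow(f0, X, t1), t - t1)

X = (0.0, 0.0)
assert semiflow(X, 1.0, 2.0, (1.0, 0.0), (0.0, 1.0)) == (1.0, 0.0)
assert semiflow(X, 3.0, 2.0, (1.0, 0.0), (0.0, 1.0)) == (2.0, 1.0)
```

With more event times, the nested calls mirror the expression $\Phi^{\aa_p^\mathbf{X}}(\cdots \Phi^{\aa^\mathbf{X}_0}(\mathbf{X},t^\mathbf{X}_1)\cdots)$ in the proposition.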
\subsection{Asynchronous networks with additive input structure}
\label{sec:AddStruct}
A natural source of asynchronous networks comes from synchronous networks with additive input structure.
The event map can be either state dependent (with constraints) or stochastic (see the following section).
Fix a~$k$ node synchronous
network $\mathcal{N}$ with additive input structure and network vector field $\mathbf{f}=(f_1,\dotsc,f_k)$ given by
\begin{equation}
\label{EQais2}
f_i(\is{x}_i;\is{x}_{j_1},\dotsc, \is{x}_{j_{e_i}}) = F_i(\is{x}_i) + \sum_{s=1}^{e_i} F_{ij_s}(\is{x}_{j_s} , \is{x}_i),
\;i \in \is{k}.
\end{equation}
On account of the additive input structure, it is natural to remove and later reinsert connections
between nodes.
For $i \in \is{k}$, let $(W_i,\mc{L}_i)$ be the constraint defined by the $0$-dimensional foliation of $W_i = M_i$.
If dynamics on $N_i$ is constrained, then dynamics is stopped: ${\mathbf{x}}_i' = 0$.
Let~$\Gamma$ be the network graph determined by~\Ref{EQais2} with
associated $0$-$1$ matrix $\gamma \in M(k)$. Take $\is{P} = (1,\dotsc,1)$ and let $\mc{A} \subset \CC{k}$ be
a generalized connection structure such that
\begin{enumerate}
\item $(0 \,|\, \gamma) \in \mc{A}$,
\item for all $\aa = (\CSz{\aa}\,|\, \CSf{\aa})$ the matrix
$\CSf{\aa}$ defines a subgraph of $\Gamma$, and
\item $\aa_{i0} \in \{0,1\}$ for all $i \in \is{k}$, $\aa \in \mc{A}$.
\end{enumerate}
For each $\aa\in\mc{A}$, define the $\aa$-admissible vector field
$\is{f}^\aa$ by
\begin{equation*}
f^\aa_i(\is{x}_i;\is{x}_{j_1},\dotsc, \is{x}_{j_{e_i}}) = (1-\aa_{i0})\left(F_i(\is{x}_i) + \sum_{s=1}^{e_i} \aa_{ij_s}F_{ij_s}(\is{x}_{j_s} , \is{x}_i)\right),
\;i \in \is{k},
\end{equation*}
and set $\mathcal{F} = \{\mathbf{f}^\alpha \,|\, \alpha \in \mc{A}\}$.
If we choose an event map $\mc{E}: \mathbf{M}\to\mc{A}$, then
$\mathfrak{N} = (\mathcal{N}, \mc{A}, \mathcal{F}, \mc{E})$ is an asynchronous network.
We refer to $\mathfrak{N}$ as an \emph{asynchronous
network with additive input structure}.
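The gating of intrinsic and coupling terms by $\aa$ can be sketched as follows. The intrinsic dynamics $F_i$ and couplings $F_{ij}$ below are illustrative choices with scalar node phase spaces; the constraint bit $\aa_{i0}$ stops the node and the bits $\aa_{ij}$ switch coupling terms on or off:

```python
# Sketch of an alpha-admissible field for an asynchronous network with
# additive input structure (illustrative F_i and F_ij on R).

def make_f(alpha, Fi, Fij):
    """alpha: k x (k+1) 0-1 matrix; Fi[i]: intrinsic; Fij[i][j]: coupling."""
    k = len(alpha)
    def f(X):
        out = []
        for i in range(k):
            v = Fi[i](X[i]) + sum(alpha[i][j + 1] * Fij[i][j](X[j], X[i])
                                  for j in range(k) if j != i)
            out.append((1 - alpha[i][0]) * v)   # alpha_{i0} = 1 stops node i
        return tuple(out)
    return f

Fi  = [lambda x: -x, lambda x: -2 * x]
Fij = [[None, lambda xj, xi: xj - xi],
       [lambda xj, xi: xj - xi, None]]

# node 1 stopped, edge N_2 -> N_1 present; edge N_1 -> N_2 removed
alpha = [[1, 0, 1],
         [0, 0, 0]]
f = make_f(alpha, Fi, Fij)
assert f((1.0, 2.0)) == (0.0, -4.0)
```

Input consistency is visible in the construction: $f_i^\aa$ depends only on row $i$ of $\aa$, so equal dependency sets give equal node vector fields.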
For $\alpha \in \mc{A}$, $i \in \is{k}$, let $J(i, \aa) = \{j \,|\, \alpha_{ij} =1, j \in \iz{k}\}$ be the
dependency set of $f_i^\alpha$.
\begin{Def}
An asynchronous network $\mathfrak{N}$ is
\emph{input consistent} if for any node~$N_i$ and $\aa, \beta\in\mc{A}$
with dependency sets satisfying $J(i, \aa) = J(i, \beta)$
we have
$f_i^\aa = f_i^\beta$.
\end{Def}
As an immediate consequence of our constructions we have
\begin{lemma}
Asynchronous networks with additive input structure are input
consistent.
\end{lemma}
In summary, if $\mathfrak{N}$ is an asynchronous network with additive input structure, then all the admissible vector fields
are derived from the network vector field of a synchronous network.
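To make the construction concrete, the following sketch evaluates an $\aa$-admissible vector field directly from the data $(\CSz{\aa}\,|\,\CSf{\aa})$. The node dynamics, the coupling term, and the particular matrices are our own illustrative choices, not taken from the text.

```python
import numpy as np

# Illustrative k = 3 node network with additive input structure.
k = 3
F = [lambda x: -x, lambda x: -2.0 * x, lambda x: -0.5 * x]

def coupling(xj, xi):
    # Additive coupling term F_ij(x_j, x_i): linear diffusive form (assumed).
    return xj - xi

def admissible_field(alpha0, alpha, X):
    """Evaluate f^alpha for alpha = (alpha0 | alpha):
    alpha0[i] = 1 means node i is constrained (stopped);
    alpha[i][j] = 1 means the connection j -> i is active."""
    dX = np.zeros(k)
    for i in range(k):
        total = F[i](X[i]) + sum(
            alpha[i][j] * coupling(X[j], X[i]) for j in range(k) if j != i)
        dX[i] = (1 - alpha0[i]) * total
    return dX

X = np.array([1.0, 2.0, 3.0])
alpha0 = [1, 0, 0]                       # node 0 constrained by N_0
alpha = [[0, 1, 1], [0, 0, 1], [1, 1, 0]]
dX = admissible_field(alpha0, alpha, X)  # node 0 component is forced to 0
```

Every $\is{f}^\aa$ is obtained this way by masking terms of the synchronous network vector field, which is why input consistency is automatic.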
\subsection{Local clocks on an asynchronous network}
In this section we describe \emph{local clocks} on an asynchronous network. We give only brief details sufficient for the examples we give later
(the general setup appears in~\cite{BF2}). Roughly speaking, a local clock will be associated to a set of nodes, or connections, and may be thought of
as a stopwatch with time $\tau \in \mathbb{R}_+$. In particular, the local clock will run intermittently and switching between
on and off states will be determined by thresholds.
Fix a finite set of nodes $\mathcal{N}= \{N_0,N_1,\dotsc,N_k\}$ with associated phase spaces $M_i$, $i \in \is{k}$,
a generalized connection structure $\mc{A} \subset \CC{k}$ and a constraint structure $\is{C}$. Local clocks will be
defined in terms of strongly connected components of elements of $\mc{A}$.
Suppose that $\alpha \in \mc{A}$ and let $\beta,\gamma$ be distinct strongly connected components of $\alpha$ with
respective node sets $A \subset \is{k}$, $B \subset \iz{k}$. A local time $\tau_{\beta,\gamma} \in \mathbb{R}_+$ will be defined on $\beta$ (or the nodes $A$)
if there exists a connection $N_j \rightarrow N_i$, $j \in B$, $i \in A$.
\begin{exams}
(1) The constraining node $N_0$ is always a strongly connected component of $\alpha$. If $\alpha=N_0\rightarrow N_i$, then
we may take $\beta = \{N_i\}$, $\gamma = \{N_0\}$ and define the local time $\tau_i$ on $N_i$.\\
(2) If $\alpha = N_0 \rightarrow N_i \leftrightarrow N_j \leftarrow N_0$, then we may take $\beta = N_i \leftrightarrow N_j$, $\gamma = \{N_0\}$ and
obtain the local time $\tau_{\beta} = \tau_{ij}$ defined on $N_i, N_j$ (or $N_i \leftrightarrow N_j$).
\hfill \mbox{$\diamondsuit$}
\end{exams}
Choose a set $\tau_1,\dotsc,\tau_s$ of local times and set
\[
\mc{T} = \mathbb{R}_+^s = \{\boldsymbol{\tau}=(\tau_1,\dotsc,\tau_s) \,|\, \tau_1,\dotsc,\tau_s \in \mathbb{R}_+\}.
\]
We extend the phase space of $\mathcal{N}$ to $\mc{M} = \mathbf{M} \times \mc{T}$. Given $\alpha \in \mc{A}$, an $\alpha$-admissible vector field
$\mathbf{f}^\alpha$ on $\mc{M}$ will be a smooth vector field of the form
\[
\mathbf{f}^\alpha(\mathbf{X},\boldsymbol{\tau}) = (f_1^\alpha(\mathbf{X},\boldsymbol{\tau}),\dotsc,f_k^\alpha(\mathbf{X},\boldsymbol{\tau}),h_1,\dotsc,h_s),
\]
where $h_1,\dotsc,h_s \in \{0,1\}$ are constant vector fields.
Just as before, we define an $\mc{A}$-structure $\mathcal{F}$, an event map $\mc{E}: \mc{M} \rightarrow \mc{A}$ and associated asynchronous network $(\mathcal{N},\mc{A},\mathcal{F},\mc{E})$. Our previous definitions and results
continue to apply.
\begin{exam}
Suppose $k = 1$, $\mathcal{N} = \{N_0,N_1\}$, and $M_1 = \mathbb{R}$. Choose a smooth vector field $f:\mathbb{R}\rightarrow\mathbb{R}$ such that $1 \ge f(x) > 0$ for all $x \in \mathbb{R}$.
Define $\mc{A} = \{{\boldsymbol{\emptyset}}, \alpha = N_0 \rightarrow N_1\}$. Define the local time $\tau \in \mathbb{R}_+$ associated to $\alpha$. Set $\mc{M} = \mathbb{R} \times \mathbb{R}_+$.
Define $\mathcal{F} = \{\mathbf{f}^{\boldsymbol{\emptyset}},\mathbf{f}^\aa\}$ by
\[
\mathbf{f}^{{\boldsymbol{\emptyset}}}(x,\tau) = (f(x),0),\; \mathbf{f}^{\aa}(x,\tau) = (0,1),\; (x,\tau) \in \mc{M}.
\]
Fix $T > 0$ and define the event map $\mc{E}: \mc{M} \rightarrow \mc{A}$ by
\begin{eqnarray*}
\mc{E}(x,\tau) & = & {\boldsymbol{\emptyset}}, \;\text{if } x \ne 0\; \text{or } \tau \ge T\\
& = & \alpha, \;\text{if } x = 0\; \text{and } \tau < T
\end{eqnarray*}
The asynchronous network $(\mathcal{N},\mc{A},\mathcal{F},\mc{E})$ is amenable.
If we initialize at $(x_0,0)$, $x_0 < 0$, then the system evolves until $x = 0$, stops for local time $T$ seconds and then restarts.
In practice, the local clock is reset to zero after the system restarts.
\hfill \mbox{$\diamondsuit$}
\end{exam}
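The example is easily simulated. In the sketch below we make the illustrative choices $f \equiv 1$ (so $1 \ge f > 0$), $x_0 = -1$, $T = 0.5$, and a simple Euler scheme with crossing detection; rather than resetting the clock, we flag the stop as completed once $\tau \ge T$.

```python
def simulate(x0, T, dt=0.01, t_end=3.0):
    """Euler simulation of the single-node example: the node moves right
    with unit speed, stops at x = 0 for local time T, then restarts."""
    x, tau, t = x0, 0.0, 0.0
    waited = False      # has the stop at x = 0 been completed?
    out = []
    while t < t_end:
        out.append((t, x, tau))
        if x == 0.0 and tau < T and not waited:
            tau += dt                  # connection N0 -> N1 active: clock runs
            if tau >= T:
                waited = True          # threshold met: node may restart
        else:
            x_new = x + dt             # f(x) = 1
            if x < 0.0 <= x_new and not waited:
                x_new = 0.0            # clamp the inbound crossing of x = 0
            x = x_new
        t += dt
    return out

traj = simulate(x0=-1.0, T=0.5)        # travel 1.0, wait 0.5, then restart
```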
\subsection{Stochastic event processes and asynchronous networks}
Given node set $\mathcal{N}$, constraint structure $\is{C}$, generalized connection structure $\mc{A}$ and
$\mc{A}$-structure $\mathcal{F}$, an
\emph{event process} is a state dependent stochastic process $\mc{E}_{(t,\mathbf{X})}$ taking values in
$\mc{A}$.
\begin{Def}
\label{defasync_stoc0}
(Notation as above.)
A \emph{stochastic asynchronous network} $\mathfrak{N}$ is a quadruple
$(\mathcal{N},\mathcal{A},\mathcal{F},\mc{E})$, where
$\mc{E}=\mc{E}_{(t,\mathbf{X})}$ is an event process.
\end{Def}
In the most general case there are no restrictions on the process
$\mc{E}_{(t,\mathbf{X})}$: there may be (stochastic) dependence on time~$t\in\mathbb{R}^+$, pure space
dependence ($\mc{E}_{(t,\mathbf{X})} = \mc{E}(\mathbf{X})$), or both. If $\mc{E}_{(t,\mathbf{X})}$ is independent of time, then
the event process reduces to an event map $\mc{E}: \mathbf{M}\rightarrow \mc{A}$. If $\mc{E}_{(t,\mathbf{X})}$ is independent of $\mathbf{X}$,
then under mild conditions on $\mc{E}$, such as assuming $\mc{E}$ is Poisson, integral curves
on the stochastic asynchronous network
$(\mathcal{N},\mc{A},\mathcal{F},\mc{E}_t)$ will be almost surely piecewise smooth.
We discuss stochastic asynchronous networks in more detail in~\cite{BF2}. We give one simple example here
related to additive input structure.
\begin{exam}
We follow the assumptions and notational conventions of section~\ref{sec:AddStruct} and assume given a synchronous network with additive
input structure and dynamics given by~\Ref{EQais2}. Let $\mc{A}$ be a generalized connection structure and $\mc{E}$ be a time dependent
event process taking values in $\mc{A}$. Assume $\mathbf{M}$ is compact and the set of times $t_0 < t_1 < \dotsc $ where the
connection structure changes has Poisson statistics. The stochastic asynchronous network $(\mathcal{N},\mc{A},\mathcal{F},\mc{E})$ is an example of
a stochastic asynchronous network with additive input structure. Almost surely, trajectories will be piecewise smooth
and defined for all positive time.
\hfill \mbox{$\diamondsuit$}
\end{exam}
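As a numerical sketch of such a process (scalar node dynamics, two admissible vector fields, and a switching rate all chosen purely for illustration), piecewise smoothness of trajectories is visible in a direct simulation:

```python
import random

def simulate_stochastic(rate=2.0, t_end=10.0, dt=0.01, seed=1):
    """Between Poisson event times the state follows one of two admissible
    vector fields (connection off: x' = -x; connection on: x' = -x + 1);
    at each event time the connection structure is redrawn at random."""
    rng = random.Random(seed)
    fields = {0: lambda x: -x, 1: lambda x: -x + 1.0}
    x, t, alpha = 0.0, 0.0, 0
    next_event = rng.expovariate(rate)
    n_events = 0
    while t < t_end:
        if t >= next_event:            # event time: redraw the connections
            alpha = rng.randrange(2)
            next_event += rng.expovariate(rate)
            n_events += 1
        x += dt * fields[alpha](x)     # smooth flow between events
        t += dt
    return x, n_events

x_final, n_events = simulate_stochastic()
# The trajectory is smooth between the (almost surely discrete) Poisson
# event times, hence piecewise smooth on [0, t_end].
```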
\section{Model examples of asynchronous networks}
\label{ampex}
In this section, we describe two asynchronous networks using the formalism and ideas developed in the previous section.
We refer also to~\cite{BF2} for a detailed description of an asynchronous network modelling spiking neurons, adaptivity and learning (STDP).
\renewcommand{\star}{*}
\subsection{A transport example: train dynamics}
\label{basic}
We use a simple transport example -- a single track line with a passing loop -- to illustrate characteristic features
of asynchronous networks in a setting requiring minimal structure and background knowledge.
Consider two trains $\mathfrak{T}_1, \mathfrak{T}_2$ travelling in opposite directions along a single track
railway line; see figure~\ref{strl}. We assume no central control and no communication between train drivers
unless both trains are in the passing loop.
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{figureI_03.eps}
\caption{Two trains on a single track railway line with a passing loop and stations.}
\label{strl}
\end{figure}
Take as phase spaces for the trains the closed interval $I = [-a,b]$, where $a,b > 0$. Suppose the end points of $I$ correspond to the
stations $A$ (at $-a$) and $B$ (at $b$) and that the passing loop is at $0 \in I$. Assume that the passing loop
is associated with a third station $P$.
The position of train $\mathfrak{T}_i$ at time $t\ge 0$ will be denoted by $x_i(t) \in I$, $i \in \is{2}$. Suppose
that $x_1(0) = -a$, $x_2(0) = b$. Assume that, outside of the stations $A,B,P$,
the velocity of the trains is given by smooth vector fields $V_1, V_2:I\rightarrow\mathbb{R}$ satisfying
\[
V_1(x) > 0 > V_2(x),\; x \in I.
\]
That is, $\mathfrak{T}_1$ is moving to the right and $\mathfrak{T}_2$ to the left.
In order to pass each other, the trains must enter the passing loop and stop at $P$.
Fix thresholds $S,S_1,S_2,T_1,T_2 \in \mathbb{R}_+$. Train $\mathfrak{T}_i$ will depart at time $T_i$, $i \in \is{2}$.
We require that trains have to be together in station $P$ for time $S$ and, additionally, the train $\mathfrak{T}_i$
must be in the station for time $S_i$, $i \in \is{2}$ (this is an additional condition on $\mathfrak{T}_i$ only if $S_i > S$).
The trains can move out of the station when these thresholds are met.
Note that
the trains will not generally leave the station at the same time if $S_1> S$ or $S_2 > S$. We model train dynamics by an asynchronous network.
First we discuss connection structures. Associate the node $N_i$ with train $\mathfrak{T}_i$, $i \in \is{2}$.
Train $\mathfrak{T}_i$ will be stopped at $P$ only if there is a
connection $\aa_i = N_0 \rightarrow N_i$, $i \in \is{2}$. We only allow communication between trains when both trains are
stopped at $P$. In this case, the connection structure will be $\beta = N_0 \rightarrow N_1 \leftrightarrow N_2 \leftarrow N_0$.
If either train is not stopped at $P$, there is no connection between the trains.
As the drivers of the trains cannot communicate (unless both trains are in the station $P$)
and there is no central control, the times associated
with the thresholds $S_1,S_2$ will be local times.
Specifically, when train $\mathfrak{T}_i$ stops at $P$, the driver's stopwatch will be started.
This will be a local time $\tau_i$ for $\mathfrak{T}_i$ and associated to the connection $N_0 \rightarrow N_i$.
When both trains are stopped at $P$, we use a third local time $\tau = \tau_{12}$ associated
to the connection $N_1 \leftrightarrow N_2$ (alternatively, the drivers could synchronize their stopwatches
but still the stopwatches may not run at the same speed).
We describe this setup using our formalism for asynchronous networks.
As network phase space we take
\[
\mc{M} = \{(\mathbf{X},\boldsymbol{\tau}) = (x_1,x_2,\tau_1,\tau_2,\tau) \,|\, x_1,x_2 \in I,\; \tau_1,\tau_2,\tau \in \mathbb{R}_+\} = I^2 \times\mathbb{R}_+^3.
\]
We define the generalized connection structure $\mc{A} = \{ \aa_1, \aa_2,\beta,{\boldsymbol{\emptyset}}\}$ and
let $\mathcal{F}$ be the $\mc{A}$-structure given by
\begin{eqnarray*}
\mathbf{f}^{\boldsymbol{\emptyset}}(\mathbf{X},\boldsymbol{\tau})&=&((V_1(x_1), V_2(x_2)),(0,0,0))\\
\mathbf{f}^{\aa_1}(\mathbf{X},\boldsymbol{\tau})&=&((0, V_2(x_2)),(1,0,0))\\
\mathbf{f}^{\aa_2}(\mathbf{X},\boldsymbol{\tau})&=&((V_1(x_1), 0),(0,1,0))\\
\mathbf{f}^{\beta}(\mathbf{X},\boldsymbol{\tau})&=&((0, 0),(1,1,1))
\end{eqnarray*}
We define the event map $\mathcal{E}: \mc{M}\rightarrow \mc{A}$ by
\[ {\small
\mathcal{E}(\mathbf{X},\boldsymbol{\tau}) =
\begin{cases}
\alpha_1& \text{if }(x_1 = 0, x_2 > 0) \vee ( (x_1 = 0, x_2 \le 0) \wedge (\tau_1 < S_1)) \\
\alpha_2& \text{if }(x_2 = 0, x_1 < 0) \vee ( (x_2 = 0, x_1 \ge 0) \wedge (\tau_2 < S_2)) \\
\beta&\text{if } (x_1 = x_2 = 0) \wedge ((\tau < S) \vee ((\tau_1 < S_1) \wedge (\tau_2 < S_2)))\\
{\boldsymbol{\emptyset}}& \text{otherwise}.
\end{cases}
}\normalsize
\]
\noindent Here we have used the logical connectives $\vee$ for \emph{or} and $\wedge$ for \emph{and}.
Dynamics on the asynchronous network $\mathfrak{N}=(\mathcal{N},\mc{A},\mathcal{F},\mc{E})$ is
given by the vector field $\is{F}(\mathbf{X},\boldsymbol{\tau}) = \mathbf{f}^{\mc{E}(\mathbf{X},\boldsymbol{\tau})}(\mathbf{X},\boldsymbol{\tau})$.
Provided that we initialize so that $x_1(0) < 0 < x_2(0)$, $\tau_1(0) = \tau_2(0) = \tau(0)=0$, it is easy to see that
$\mathfrak{N}$ is amenable.
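The event-driven train dynamics are easy to simulate. In the sketch below we make the illustrative choices $V_1 \equiv 1$, $V_2 \equiv -1$, $a = b = 1$, fixed thresholds, and an Euler scheme that clamps each inbound crossing of the passing loop; since the displayed cases of $\mc{E}$ overlap at $x_1 = x_2 = 0$, we evaluate the $\beta$ case first (an ordering choice of ours).

```python
def step_up(x, dt):
    # Train 1 moves right (V1 = 1); clamp the inbound crossing of P at 0.
    x_new = x + dt
    return 0.0 if x < 0.0 <= x_new else x_new

def step_down(x, dt):
    # Train 2 moves left (V2 = -1); clamp the inbound crossing of P at 0.
    x_new = x - dt
    return 0.0 if x > 0.0 >= x_new else x_new

def event_map(x1, x2, tau1, tau2, tau, S, S1, S2):
    """The event map E of the text, with the beta case evaluated first."""
    if x1 == 0.0 and x2 == 0.0 and (tau < S or (tau1 < S1 and tau2 < S2)):
        return 'beta'
    if x1 == 0.0 and (x2 > 0.0 or tau1 < S1):
        return 'alpha1'
    if x2 == 0.0 and (x1 < 0.0 or tau2 < S2):
        return 'alpha2'
    return 'empty'

def simulate_trains(a=1.0, b=1.0, S=0.3, S1=0.5, S2=0.4, dt=0.001, t_end=2.5):
    x1, x2 = -a, b
    tau1 = tau2 = tau = 0.0
    t = 0.0
    while t < t_end:
        case = event_map(x1, x2, tau1, tau2, tau, S, S1, S2)
        if case == 'beta':             # both stopped; all three clocks run
            tau1, tau2, tau = tau1 + dt, tau2 + dt, tau + dt
        elif case == 'alpha1':         # train 1 stopped, its clock runs
            tau1, x2 = tau1 + dt, step_down(x2, dt)
        elif case == 'alpha2':         # train 2 stopped, its clock runs
            tau2, x1 = tau2 + dt, step_up(x1, dt)
        else:                          # no constraint: both trains move
            x1, x2 = step_up(x1, dt), step_down(x2, dt)
        t += dt
    return x1, x2

x1_end, x2_end = simulate_trains()
# Both trains arrive at t ~ 1 and wait together until tau = S2 = 0.4;
# train 2 then departs, and train 1 follows once tau1 reaches S1 = 0.5,
# so the trains leave P at different times.
```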
\subsubsection{Initialization, termination and function}
\label{sec:FuncExample}
The network $\mathfrak{N}$ has a function: each train has to traverse the line to reach the opposite station. Thus we
can regard $\mathfrak{N}$ as a \emph{functional asynchronous network}. Formally,
define \emph{initialization} and \emph{termination} sets by $\mathbb{I}_1 = \{-a\}$, $\mathbb{I}_2 = \{b\}$
and $\mathbb{F}_1 = \{b\}$, $\mathbb{F}_2 = \{-a\}$ respectively.
We call $\mathbb{I} = \mathbb{I}_1 \times \mathbb{I}_2$ and $\mathbb{F} = \mathbb{F}_1 \times \mathbb{F}_2$ the
initialization and termination sets for $\mathfrak{N}$. The function of the network is to get from $\mathbb{I}$ to $\mathbb{F}$ in finite time.
Typically, the thresholds $S,S_1,S_2,T_1,T_2 \in \mathbb{R}_+$ will be chosen stochastically; for example,
the starting times $T_1, T_2$ may be drawn from an exponential distribution. If we initialize at $(-a,T_1), (b,T_2)$, and
take $\tau_1(0) = \tau_2(0) = \tau(0) = 0$, it is
easy to verify that solutions will be defined and continuous for all positive time under
the assumption that a train stops when it reaches its termination set.
\subsubsection{Adding dynamics}
\label{dynamics}
The trains only ``interact'' when both are stopped at $P$.
We now add a non-trivial dynamic interaction
when the trains are stopped at $P$. To this end, we
additionally require that
\begin{enumerate}
\item The drivers are running oscillators of approximately the same frequency
(randomly initialized at the start of the trip).
\item When both trains are at $P$, the oscillators are cross-coupled allowing for eventual approximate frequency synchronization.
\item The trains cannot restart until the oscillators have phase synchronized to within $\varepsilon$, where $0 < \varepsilon < 0.5$.
\end{enumerate}
For example, fix $\omega_1, \omega_2 \in \mathbb{R}$ and define $H(\theta) = k\sin 2\pi \theta$, $\theta \in \mathbb{T}$, where $k > 0$.
Take as network phase space $\mc{M}^\star = \mc{M} \times \mathbb{T}^2$. Define vector fields $\is{h}^{\boldsymbol{\emptyset}} = \is{h}^{\aa_1} = \is{h}^{\aa_2}$
and $\is{h}^{\beta}$ on $\mc{M}^\star$ by
\begin{eqnarray*}
\is{h}^{\boldsymbol{\emptyset}}(\mathbf{X},\boldsymbol{\tau},\theta_1,\theta_2)& =& (\is{0},\is{0},\omega_1,\omega_2)\\
\is{h}^\beta(\mathbf{X},\boldsymbol{\tau},\theta_1,\theta_2)& =& (\is{0},\is{0},\omega_1+ H(\theta_2-\theta_1), \omega_2 + H(\theta_1-\theta_2))
\end{eqnarray*}
Define a new $\mc{A}$-structure $\mc{F}^\star$ by
\[
\is{g}^{\boldsymbol{\emptyset}} = \mathbf{f}^{\boldsymbol{\emptyset}}+\is{h}^{\boldsymbol{\emptyset}},\; \is{g}^{\aa_1} = \mathbf{f}^{\aa_1}+\is{h}^{\aa_1},\; \is{g}^{\aa_2} = \mathbf{f}^{\aa_2}+\is{h}^{\aa_2},\;
\is{g}^\beta = \mathbf{f}^\beta+ \is{h}^\beta,
\]
where $\mathbf{f}^{\boldsymbol{\emptyset}}, \is{f}^{\aa_1}, \is{f}^{\aa_2}, \is{f}^\beta\in\mathcal{F}$ do not depend on $(\theta_1,\theta_2) \in \mathbb{T}^2$.
Modify the event map $\mc{E}$ by requiring that $\mc{E}(\mathbf{X},\boldsymbol{\tau},\theta_1,\theta_2) = \beta$ iff
\[
(x_1 = x_2 = 0) \wedge ((\tau < S)\vee (|\theta_1-\theta_2| > \varepsilon) \vee ((\tau_1 < S_1) \wedge (\tau_2 < S_2)))
\]
In this case, for almost all initializations, the oscillators will eventually phase synchronize to
within $\varepsilon$ provided that $\sin^{-1}(|\omega_1 - \omega_2|/2k) < 2 \pi \varepsilon$. In particular, if $\omega_1 = \omega_2$, the oscillators
will synchronize unless $|\theta_1(0)-\theta_2(0)|=0.5$.
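The time to phase synchronization is easily estimated numerically. The sketch below integrates the cross-coupled drivers' oscillators (coupling $H(\theta) = k\sin 2\pi\theta$) until the restart condition is met; the parameter values are our own illustrative choices and satisfy the locking condition $\sin^{-1}(|\omega_1-\omega_2|/2k) < 2\pi\varepsilon$.

```python
import math

def circle_dist(t1, t2):
    # Distance between phases on T = R/Z.
    d = abs(t1 - t2) % 1.0
    return min(d, 1.0 - d)

def synchronize(omega1=1.0, omega2=1.05, k=0.2, eps=0.05,
                theta1=0.2, theta2=0.7, dt=0.001, t_max=100.0):
    """Integrate the cross-coupled oscillators until the phases agree to
    within eps (the restart condition of the modified event map).
    Returns the elapsed time, or None if t_max is reached first."""
    t = 0.0
    while t < t_max:
        if circle_dist(theta1, theta2) < eps:
            return t
        h12 = k * math.sin(2 * math.pi * (theta2 - theta1))
        h21 = k * math.sin(2 * math.pi * (theta1 - theta2))
        theta1 = (theta1 + dt * (omega1 + h12)) % 1.0
        theta2 = (theta2 + dt * (omega2 + h21)) % 1.0
        t += dt
    return None

t_sync = synchronize()
```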
\subsubsection{Relations with Filippov systems}
\label{Filippov_dig}
Assume now that all the thresholds of our model are zero: $S = S_1 = S_2 = 0$. Then there is no need for local clocks and we may model by the
asynchronous network $\mathfrak{N}^\star = (\mathcal{N},\mc{A}^\star,\mathcal{F}^\star,\mc{E}^\star)$, where
$\mc{A}^\star = \{\aa_1,\aa_2,{\boldsymbol{\emptyset}}\}$ and $\mathcal{F}^\star = \{\mathbf{f}^{\boldsymbol{\emptyset}},\mathbf{f}^{\aa_1},\mathbf{f}^{\aa_2}\}$, with
$\mathbf{f}^{\boldsymbol{\emptyset}}(\mathbf{X})=(V_1(x_1), V_2(x_2))$, $\mathbf{f}^{\aa_1}(\mathbf{X})=(0, V_2(x_2))$,
$\mathbf{f}^{\aa_2}(\mathbf{X})=(V_1(x_1), 0)$,
and the event map $\mc{E}^\star$ is defined by
\[
\mathcal{E}^\star(\mathbf{X}) =
\begin{cases}
\alpha_1& \text{if }x_1 = 0, x_2 > 0 \\
\alpha_2& \text{if }x_2 = 0, x_1 < 0\\
{\boldsymbol{\emptyset}}& \text{otherwise}.
\end{cases}
\]
We show dynamics for $\mathfrak{N}^\star$ in figure~\ref{trains1} under the initialization assumption that $x_1(0) \le 0 \le x_2(0)$.
\begin{figure}[h]
\centering
\includegraphics[width=0.75\textwidth]{figureI_04.eps}
\caption{Dynamics on a one track line with passing loop.}
\label{trains1}
\end{figure}
Referring to the figure, trajectory $\eta$ corresponds to train $\mathfrak{T}_2$ reaching $P$ first
and restarting only when $\mathfrak{T}_1$ reaches $P$. Train $\mathfrak{T}_1$ reaches $P$ first for the
trajectory $\nu$. Regardless of which train reaches $P$ first, the `exit trajectory'
$\phi$ is always the same and so there is a reduction to 1-dimensional dynamics.
If both trains arrive simultaneously at $P$, neither stops.
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{figureI_05.eps}
\caption{Dynamics for the Filippov system. Trajectories $\eta$ and $\phi$ are unchanged; trajectories $\kappa$ and $\xi$
correspond to one train reversing after the other train enters the passing loop and are artifacts of the Filippov representation.}
\label{fil}
\end{figure}
The dynamics shown in figure~\ref{trains1} is suggestive of a Filippov system~\cite{Fi,BBCK} and it
is natural to ask whether there are connections between asynchronous networks and
Filippov systems. Set $\mathbb{R}^2_\circ = \{(x_1,x_2) \,|\, x_1x_2 \le 0\}$ and observe that dynamics on $\mathfrak{N}^\star$ is
given by a continuous semiflow $\Phi^\star:\mathbb{R}^2_\circ \times \mathbb{R}_+\rightarrow\mathbb{R}_\circ^2$.
We define a Filippov system on $\mathbb{R}^2$, with continuous semiflow $\Phi:\mathbb{R}^2 \times \mathbb{R}_+ \rightarrow \mathbb{R}^2$, such that
$\Phi = \Phi^\star$ on $\mathbb{R}^2_\circ$. To this end we let $Q_{ij}$, $i,j \in \{+,-\}$ denote the closed quadrants of
$\mathbb{R}^2$ (so $Q_{+-} = \{(x_1,x_2) \,|\, x_1 \ge 0, x_2 \le 0\}$, etc) and define smooth vector fields
on each quadrant by
\begin{eqnarray*}
\is{V}_{++}(x_1,x_2) & = & (-V_1(x_1),V_2(x_2)),\; (x_1, x_2) \in Q_{++} \\
\is{V}_{+-}(x_1,x_2) & = & (V_1(x_1),V_2(x_2)),\; (x_1, x_2) \in Q_{+-}\\
\is{V}_{--}(x_1,x_2) & = & (V_1(x_1),-V_2(x_2)),\; (x_1, x_2) \in Q_{--}\\
\is{V}_{-+}(x_1,x_2) & = & (V_1(x_1),V_2(x_2)),\; (x_1, x_2) \in Q_{-+}.
\end{eqnarray*}
These vector fields uniquely define a smooth vector field $\is{V}$ on the union of the interiors of
the quadrants. We extend $\is{V}$ to a piecewise smooth vector field on $\mathbb{R}^2 \smallsetminus \{(0,0)\}$ using
the Filippov conventions. Thus, we regard the $x_i$-axis as a sliding line $S^i$, $i \in \is{2}$,
and define $\is{V}$ on $\partial Q_{-+} \cap \partial Q_{--}=E^{\aa_2}\subset S^1$ to be the unique convex combination
of $\is{V}_{-+}$ and $\is{V}_{--}$ which is tangent to $S^1$ (in this case $(\is{V}_{-+}+\is{V}_{--})/2$).
Finally define $\is{V}(0,0) = (V_1(0),V_2(0))$.
The piecewise smooth vector field $\is{V}$ has a continuous flow $\Phi:\mathbb{R}^2 \times \mathbb{R}_+\rightarrow \mathbb{R}^2$
(integral curves are defined using the standard conventions of piecewise smooth dynamics -- see~\cite{Fi})
and $\Phi|\mathbb{R}_\circ^2 = \Phi^\star$.
Of course, the semiflow on
$\mathbb{R}^2\smallsetminus\mathbb{R}_\circ^2$ does not have an interpretation
in terms of trains on a line with a passing loop (see figure~\ref{fil}).
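The sliding vector field used in the Filippov convention is just the convex combination of the two one-sided fields that is tangent to the switching line, and can be computed directly; the constant fields below ($V_1 = 1$, $V_2 = -1$) are illustrative choices.

```python
def sliding_field(V_minus, V_plus):
    """Filippov sliding vector on the x1-axis: the convex combination
    lam * V_minus + (1 - lam) * V_plus whose x2-component vanishes
    (tangency to the sliding line S^1)."""
    a, b = V_minus[1], V_plus[1]
    if a == b:
        raise ValueError("no transversal crossing of the sliding line")
    lam = b / (b - a)
    return (lam * V_minus[0] + (1 - lam) * V_plus[0],
            lam * V_minus[1] + (1 - lam) * V_plus[1])

# On E^{alpha_2} the one-sided fields are V_{--} = (V1, -V2) from below and
# V_{-+} = (V1, V2) from above; with V1 = 1, V2 = -1 the tangent combination
# is the midpoint (V1, 0), as in the text.
V1, V2 = 1.0, -1.0
fx, fy = sliding_field((V1, -V2), (V1, V2))
```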
In an asynchronous network, dynamics on event sets
is given explicitly rather than by the conventions used in
Filippov systems. However, as we have shown,
asynchronous networks can sometimes be locally represented by a Filippov system
(see~\cite{FA} for more details and greater generality).
This relationship suggests the possibility of
applying methods and results from the extensive bifurcation theory of nonsmooth systems
to asynchronous networks.
\subsubsection{Combining and splitting nodes}
We conclude our discussion of asynchronous networks modelling transport with a brief
description of processes defined by combining or splitting nodes (a dynamical version of a
\emph{Petri Net}~\cite{PNet}). We consider the simplest cases of two trains combining to form a
single train or one train splitting to form two trains. We only give details for the first case but note that
both situations are easily generalized and also, like much of what we have discussed above,
apply naturally to production networks.
Consider node sets $\mathcal{N}^a = \{N_0,N_1,N_2\}$ and $\mathcal{N}^b = \{N_0,N_{12}\}$, where $N_1,N_2,N_{12}$ have phase space $\mathbb{R}$
and correspond to trains $\mathfrak{T}_1, \mathfrak{T}_2$, $\mathfrak{T}_{12}$ respectively. We give a network formulation of
the event where trains $\mathfrak{T}_1, \mathfrak{T}_2$ are combined to form a single train $\mathfrak{T}_{12}$ (see figure~\ref{comb}).
\begin{figure}[h]
\centering
\includegraphics[width=0.75\textwidth]{figureI_06.eps}
\caption{Combining two trains into a single train.}
\label{comb}
\end{figure}
Fix vector fields $V_1, V_2, V_{12}$ on $\mathbb{R}$ and assume $V_1(x), V_2(x), V_{12}(x) > 0$ for all $x \in \mathbb{R}$.
Define generalized connection structures
\begin{eqnarray*}
\mc{A}^a&=&\{{\boldsymbol{\emptyset}}, \alpha_1 = N_0 \rightarrow N_1, \alpha_2 = N_0 \rightarrow N_2, \beta = N_0\rightarrow N_1 \leftrightarrow N_2 \leftarrow N_0\}, \\
\mc{A}^b&=&\{ {\boldsymbol{\emptyset}}, \gamma=N_0 \rightarrow N_{12}\}.
\end{eqnarray*}
Assume a local clock with time $\tau = \tau_{12}$ that is shared between the connection $\beta \in \mc{A}^a$ and $\gamma \in \mc{A}^b$.
Define network phase spaces for $\mathcal{N}^a$, $\mathcal{N}^b$ to be $\mc{M}^a=\mathbb{R}^2 \times \mathbb{R}_+$, $\mc{M}^b=\mathbb{R}\times \mathbb{R}_+$ respectively.
Define the $\mc{A}^a$-structure $\mathcal{F}^a$ by
\[
\mathbf{f}^{a,{\boldsymbol{\emptyset}}} = ((V_1,V_2),0),\; \mathbf{f}^{a,\aa_1} = ((0,V_2),0),\; \mathbf{f}^{a,\aa_2} = ((V_1,0),0),\;\mathbf{f}^{a,\beta} = ((0,0),1).
\]
and the $\mc{A}^b$-structure $\mathcal{F}^b$ by $\mathbf{f}^{b,{\boldsymbol{\emptyset}}} = (V_{12},0),\; \mathbf{f}^{b,{\gamma}} = (0,1)$.
Fix thresholds $S_1, S_2 > 0$. The threshold $S_1$ gives the time taken to combine $\mathfrak{T}_1$ and $\mathfrak{T}_2$,
and $S_2$ models the time $\mathfrak{T}_{12}$ spends in the station before leaving. Initialize $\mathcal{N}^a$ so that $x_1(0), x_2(0) < 0$ and
$\tau(0) = 0$. The event map $\mc{E}^a(\mathbf{X},\tau)$ is defined for $x_1, x_2 \le 0$ and $\tau < S_1$ by
\begin{eqnarray*}
\mc{E}^a(\mathbf{X},\tau)&=&{\boldsymbol{\emptyset}}, \; x_1,x_2 < 0\\
& = & \aa_1, \; x_1 = 0, x_2 < 0 \\
& = & \aa_2, \; x_1 < 0, x_2 = 0\\
& = & \beta ,\; x_1 =x_2 = 0, \tau < S_1
\end{eqnarray*}
The event map $\mc{E}^b(x_{12},\tau)$ is defined for $x_{12} \ge 0$ and $\tau \ge S_1$ by
\begin{eqnarray*}
\mc{E}^b(x_{12},\tau)&=&\gamma, \; x_{12}= 0, \tau < S_1 + S_2\\
& = & {\boldsymbol{\emptyset}},\; \text{otherwise}
\end{eqnarray*}
When $\tau = S_1$, we switch from network $\mathcal{N}^a$ to $\mathcal{N}^b$.
The splitting construction is similar except that we need to split the local clock for the combined train into two clocks, one for each separated train.
\subsection{Power grids and microgrids}
\subsubsection{Power grids as asynchronous networks}
We first consider an unrealistic, but simple and instructive model that shows how asynchronous and event dependent effects can naturally fit into the framework of power grids.
In the following section, we describe how more realistic models are obtained, their limitations, and where we might expect asynchronous network models to be useful.
We use the simplest model~\cite{FNP} for power grid frequency stability, which assumes generators are synchronous and loads are synchronous motors, and consider the network of mechanical phase oscillators
\begin{equation}
\label{pn}
\theta_j'' + \alpha_j \theta_j' = P_j + \sum_{i=1}^n k_{ij}\sin (\theta_i - \theta_j), \; j \in \is{n},
\end{equation}
where~$(k_{ij})$ is a symmetric matrix with non-negative entries.
If $\sum_j P_j = 0$, the system can reach an equilibrium ($P_j < 0$ corresponds to a load).
Let $\Gamma$ be the (undirected) graph determined by the matrix of connections given by $(k_{ij})$.
While the network described by~\Ref{pn} is not asynchronous (and the main
interest lies with the stability of the equilibrium solution), the dynamics
of real-world power grids are subject to factors that cannot
be adequately described by a synchronous model. For integrity of transmission lines, as well as system stability, it is essential that the phase
differences $|\theta_i - \theta_j|$ are bounded away from $\pi/2$. For example, we might require $|\theta_i - \theta_j|\le T_{ij}$,
where $T_{ij} \in (0,\pi/2)$ will be a threshold determining the safe operational load for the transmission line. This leads to the
construction of state dependent event maps $\mc{E}_{ij}: \mathbb{T}^n\rightarrow \{\Gamma,\Gamma\smallsetminus \{i\leftrightarrow j\}\}$. If $|\theta_i - \theta_j|> T_{ij}$,
then $\mc{E}_{ij}(\boldsymbol{\theta}) = \Gamma\smallsetminus \{i\leftrightarrow j\}$ and the transmission line between nodes $i$ and $j$ is disconnected.
Equation \Ref{pn} is modified accordingly.
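A two-node numerical sketch illustrates the event maps $\mc{E}_{ij}$: a generator and a load coupled by a single line that is tripped once the phase difference exceeds its threshold. All parameter values, and the attracting (Kuramoto-type) sign of the coupling, are illustrative choices of ours.

```python
import math

def simulate_grid(P=0.5, k=1.0, alpha=2.0, T12=1.0, dt=0.001, t_end=50.0):
    """Two-node swing-equation sketch: node 1 a generator (P_1 = P > 0),
    node 2 a load (P_2 = -P); the event map cuts the transmission line
    once |theta_1 - theta_2| > T12, with T12 in (0, pi/2).
    Returns (line still connected, final phase difference)."""
    theta, omega, Pj = [0.0, 0.0], [0.0, 0.0], [P, -P]
    connected, t = True, 0.0
    while t < t_end:
        if connected and abs(theta[0] - theta[1]) > T12:
            connected = False                  # event: trip the line
        kij = k if connected else 0.0
        acc = [Pj[j] - alpha * omega[j]
               + kij * math.sin(theta[1 - j] - theta[j]) for j in range(2)]
        for j in range(2):                     # semi-implicit Euler step
            omega[j] += dt * acc[j]
            theta[j] += dt * omega[j]
        t += dt
    return connected, theta[0] - theta[1]

ok, phi = simulate_grid()           # strong line: locks at asin(P/k) = pi/6
tripped, _ = simulate_grid(k=0.4)   # k < P: no equilibrium, line is cut
```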
Similarly, lines or generators may be disconnected because of external events -- such
as lightning strikes or mechanical breakdowns. These can be modelled using a stochastic event map.
As indicated above, this model is unrealistic
(it is not true, for example, that typical loads are synchronous motors). In the next section, we indicate how more realistic models
are obtained, their limitations, and where we might expect asynchronous network models to be useful.
\subsubsection{Network-reduced model for power grids}
We give an overview of the network-reduced coupled phase oscillator model for power grids, largely based on D\"orfler~\cite{DT},
and refer the reader to \cite{Motter2015,DT} for greater generality, alternative models, and the many details we omit. Apart from
describing the model, our goal is part cautionary (it is not evident that general theories of synchronous or asynchronous networks
have much to contribute to stability problems involving structural change), and part comparative with the models we describe later for microgrids.
Assume a power grid with synchronous generators, DC power sources, transmission lines and various types of load.
We assume a reference frequency $\omega_R$ for the power grid, usually $50$Hz or $60$Hz, and note
that frequency synchronization is critical for the stability of the power grid: our equations will be
written nominally in terms of phases $\theta_i(t)$, but we can always replace $\theta_i(t)$ by $\theta_i(t)-\omega_R t$ to get the (same) equations
for the phase deviations that are needed for stability theory (phase differences, but not absolute phases, matter).
Formally, assume given an undirected (connected) weighted graph $\mc{G}$ with node set $\mc{V} = \is{n}$ and edge
set $\mc{E} \subset \mc{V}^2$. Nodes will be partitioned as
$
\mc{V} = \mc{V}_1 \cup \mc{V}_2 \cup \mc{V}_3,
$
where $ \mc{V}_1$ consists of synchronous generators, $\mc{V}_2$ are DC power sources, and $\mc{V}_3$
comprises various types of load (see below and note we do not consider all types of load).
Each edge $(i,j) \in \mc{E}$, $i \ne j$, is weighted by a non-zero admittance $Y_{ij}\in\mathbb{C}$ and corresponds to a transmission line. The imaginary part $\mathfrak{I}(Y_{ij})$
is the susceptance of the transmission line and $\mathfrak{R}(Y_{ij})$ is the conductance.
Typically, a high voltage AC transmission line is regarded as lossless ($\mathfrak{R}(Y_{ij}) = 0$) and inductive ($\mathfrak{I}(Y_{ij}) > 0$). We allow self-loops $i = j$; these
correspond to loads modelled as impedances to ground (nonzero ``shunt admittances'').
To each node is associated a voltage phasor $V_i = |V_i|e^{\imath \theta_i}$ corresponding to phase $\theta_i$ and magnitude $|V_i|$ of the sinusoidal solution to the circuit equations.
For a lossless network, the power flow from node $i$ to node $j$ is given by
$a_{ij} \sin(\theta_i - \theta_j)$, where $a_{ij} = |V_i||V_j| \mathfrak{I}(Y_{ij})$ gives the maximal power flow
(see Kundur~\cite[Chapter 6]{Ku}).
\subsubsection{Synchronous generators}
We assume dynamics of synchronous generators are given by
\begin{equation}
\label{sync}
M_i \theta_i'' + D_i \theta_i' = P_{m,i} + \sum_{j=1}^n a_{ij} \sin(\theta_j -\theta_i), \; i \in \mc{V}_1,
\end{equation}
where $\theta_i, \theta_i'$ are generator rotor angle and frequency, $M_i, D_i > 0$
are inertia and damping coefficients, and $P_{m,i}$ is mechanical power input.
\subsubsection{DC/AC inverters: droop controllers}
Each DC source in $\mc{V}_2$ is connected to the AC grid via a DC/AC inverter following a frequency droop control law which obeys
the dynamics~\cite{SPDB}
\begin{equation}
\label{dc}
D_i \theta_i' = P_{d,i} + \sum_{j=1}^n a_{ij} \sin(\theta_j -\theta_i), \; i \in \mc{V}_2.
\end{equation}
\subsubsection{Frequency dependent loads}
We assume the active power demand drawn by load $i$ consists of a constant term $P_{l,i}>0$ and a frequency dependent term
$D_i \theta_i'$, $D_i > 0$, leading to the power balance equation
\begin{equation}
\label{fdl}
D_i \theta_i' = -P_{l,i} + \sum_{j=1}^n a_{ij} \sin(\theta_j -\theta_i), \; i \in \mc{V}_{3,f},
\end{equation}
where $\mc{V}_{3,f}$ is the subset of $\mc{V}_3$ consisting of frequency dependent loads.
Equation~\Ref{fdl} is of the same form as \Ref{dc}, and
we may replace $\mc{V}_2$ by $\mc{V}_2 \cup \mc{V}_{3,f}$ and consider the general equation
\begin{equation}
\label{fdl1}
D_i \theta_i' = \omega_i + \sum_{j=1}^n a_{ij} \sin(\theta_j -\theta_i), \; i \in \mc{V}_{2},
\end{equation}
where $\omega_i$ is positive if the node is a DC generator and negative if it is a frequency dependent load.
We can similarly allow for loads which are synchronous motors, incorporate them in $\mc{V}_1$ and consider
\begin{equation}
\label{sync1}
M_i \theta_i'' + D_i \theta_i' = \omega_i + \sum_{j=1}^n a_{ij} \sin(\theta_j -\theta_i), \; i \in \mc{V}_1,
\end{equation}
where $\omega_i$ is positive if the node is a synchronous generator and negative if it is a synchronous motor.
\subsubsection{Constant current and constant admittance loads}
We assume the remaining loads each require a constant amount of current and have a shunt admittance (to ground).
In this case we have a current balance equation and, through the process of Kron reduction~\cite{DBKron}, may obtain a reduced
network the equations of which are
\begin{eqnarray}
\label{M1}
M_i \theta_i'' + D_i \theta_i' = \tilde{\omega}_i + \sum_{j=1}^n \tilde{a}_{ij} \sin(\theta_j -\theta_i+\varphi_{ij}), \; i \in \mc{V}_1,\\
\label{M2}
D_i \theta_i' = \tilde{\omega}_i + \sum_{j=1}^n \tilde{a}_{ij} \sin(\theta_j -\theta_i+\varphi_{ij}), \; i \in \mc{V}_{2}.
\end{eqnarray}
We refer to~\cite{DB} for the explicit form of the coefficients in (\ref{M1},\ref{M2}).
The original power grid network is typically sparse with many nodes -- $\mc{V}_3$ is large. The process of Kron reduction results in a much smaller
network which will be all-to-all coupled provided that the graph defined by $\mc{V}_3$ is connected~\cite{DBKron}. However, even if the original
transmission lines are lossless, the phase shifts $\varphi_{ij}$ will generally be non-zero and not necessarily always small
(we refer to~\cite[\S 6.2 Figure 4]{Motter2015} for data from a real power grid network). The presence of phase shifts can and does
make it harder to frequency synchronize (\ref{M1},\ref{M2}).
From the point of view of transmission line failure in a power grid, even if the removal of an edge still results in an all-to-all coupled reduced network,
many of the coupling coefficients $\tilde{a}_{ij}$ will change. It is a hard problem that goes beyond existing analytical theory
for synchronous and asynchronous networks, to get good insight into whether or not a breakdown will destabilize the network
(this is irrespective of phenomena like Braess's paradox~\cite{WT,PP}).
\subsubsection{Microgrids}
Assume given a stable power grid network, robust to ``small'' changes in power demand, and consider
the problem of modelling a microgrid and its combination or separation from the main grid.
We outline structural and logical issues to make transparent
the connection with asynchronous networks and largely ignore dynamics so as to keep the model simple and our discussion short
(we refer to~\cite{DCB,SPDB,DSPB,BGG} for more details and references on microgrids and control from a large and rapidly developing literature in this area).
Assume power generation in the microgrid is from
DC generators (such as solar power or DC wind power) and that $\mc{V}_1 = \emptyset$ (most motor loads are not synchronous).
Assume the microgrid is Kron reduced.
Unlike the power grid model described above, we
allow directed (one way) connections and a constraining node. Consider the simplified network $\mathcal{N} = \{N_0,N_B,N_G,N_P\}$, where
the nodes $N_B ,N_G,N_P$ correspond to a large capacity battery (buffer), a DC generator, and main power grid respectively, and define subnetworks $\mathcal{N}_M = \{N_B,N_G\}$ (microgrid) and
$\mathcal{N}_P = \{N_P\}$ (main power grid).
The battery acts as reserve storage or buffer for the microgrid; in particular to maintain power in the event of intermittent loss of
generated DC power or when the microgrid has been separated ``islanded'' from the main power grid. We suppose battery capacity $B= B(t) \in [0,B_M]$, where $B_M$ corresponds to the battery
being fully charged. We suppose that the DC generator produces power $O = O(t) \in [0,O_M]$, where $O_M$ is the maximum power that can be generated.
The constraining node will play a role when the microgrid is islanded and is to be reconnected to the main power grid: either because the
microgrid has insufficient power for the microgrid loads or because the microgrid has an excess of available power some of which can now be contributed to the
main power grid. In either case a transition process needs to be implemented where the droop controller for the DC/AC converter needs to bring the AC
output of the microgrid in precise voltage (phase, frequency and magnitude) synchronization with the state of the power grid at the connection point(s) to the microgrid.
Similarly, we can constrain when the microgrid is to be islanded from the main grid so that the reduction in power contributed to the main power grid is gradual
and done over an appropriate time scale so as not to destabilize the main power grid.
Leaving aside the dynamics of islanding and combining the microgrid with the main power grid, the generalized connection structures and control logic we need for
management of the microgrid are complex and depend on several thresholds which may need to be time dependent -- for example, if we
use a time dependent model for the projected microgrid power load. If the microgrid is islanded, we work with $\mathcal{N}_M$ and use the generalized connection structure
\[
\mc{A}_M = \{\alpha = N_G \rightarrow N_B, \beta=N_B \rightarrow N_G, {\boldsymbol{\emptyset}}\}.
\]
The connection structure $\alpha$ corresponds to the DC generator having sufficient output to supply all power needed for the microgrid load and with a surplus which can be used to
charge the battery, $\beta$ corresponds to battery and generator providing all necessary power for the microgrid, and
${\boldsymbol{\emptyset}}$ corresponds to the generator providing all needed power for the microgrid and either there is no surplus power available for battery charging or the battery is fully charged.
Thresholds that determine switching between these states are chosen so as to avoid ``chattering'' in the control system.
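The threshold logic for switching between $\alpha$, $\beta$ and ${\boldsymbol{\emptyset}}$ might be sketched as below; the fractional thresholds and the hysteresis band are our own illustrative choices, introduced only to show how chattering can be avoided.

```python
def next_connection_structure(current, battery, surplus, B_M,
                              low_frac=0.2, high_frac=0.8):
    """Choose among the islanded connection structures: 'alpha' (generator
    surplus charges the battery), 'beta' (battery assists the generator)
    and 'empty' (no generator/battery interaction).
    Hysteresis: charging starts only below `low` but, once started,
    continues up to `high`, so small fluctuations in the battery level do
    not cause rapid switching ("chattering")."""
    low, high = low_frac * B_M, high_frac * B_M
    if surplus < 0:
        # Generator cannot cover the microgrid load; draw on the battery.
        # (With the battery also empty, reconnection to the main power
        # grid would have to be initiated.)
        return 'beta' if battery > 0 else 'empty'
    if current == 'alpha':
        return 'alpha' if battery < high else 'empty'
    return 'alpha' if battery < low else 'empty'

B_M = 100.0
assert next_connection_structure('empty', 50.0, 5.0, B_M) == 'empty'
assert next_connection_structure('empty', 10.0, 5.0, B_M) == 'alpha'
assert next_connection_structure('alpha', 50.0, 5.0, B_M) == 'alpha'   # hysteresis
assert next_connection_structure('alpha', 90.0, 5.0, B_M) == 'empty'
assert next_connection_structure('empty', 50.0, -5.0, B_M) == 'beta'
```

Note that at battery level 50 the chosen structure depends on the current state, which is precisely what prevents chattering around a single threshold.
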
If the microgrid is combined with the main power grid, this can be either because battery and DC generators cannot provide sufficient power for the microgrid load or because
the microgrid has surplus power which can be contributed to the main power grid or because the main power grid is stressed (possibly locally detected by
frequency variation) and the battery state of the microgrid is sufficiently high to allow a temporary power contribution to the main grid. As generalized connection structure $\mc{A}$
we take the set of connection structures
\[
N_G \rightarrow N_P,\; N_G \rightarrow N_P\leftarrow N_B,
\]
\[
N_B \rightarrow N_G\leftarrow N_P,\; N_P \rightarrow N_G,\;
N_P\rightarrow N_G\rightarrow N_B.
\]
Each of these connection structures has a natural interpretation. For example, $N_P\rightarrow N_G\rightarrow N_B$ corresponds to the main power grid contributing to both the load of the
microgrid and battery charging while $N_G \rightarrow N_P\leftarrow N_B$ means battery and DC generator are contributing power to the main power grid as well as supplying all the power for the microgrid.
On the other hand, $N_G \rightarrow N_P$ means DC generated power, but not battery power, is being contributed to the main power grid.
Of course, what we have described above is highly simplified as we have taken no account of (1) multiple DC generators and batteries within a microgrid, or
(2) multiple microgrids. In the latter case, we need to take care that microgrid switching does not synchronize as this could lead to large destabilizing
changes in load on the main grid.
\section{Products of asynchronous networks}
\label{sec:Products}
We conclude part I with the definition of the product of asynchronous networks and give
sufficient conditions for an asynchronous network to decompose as a product of two or more
asynchronous networks. Although the methods we use are elementary, the study of products is
illuminating as it clarifies some subtleties in both the event map and the functional structure that
are not present in the theory of synchronous networks. These ideas play a central role in the
proof of the modularization of dynamics theorem in part II.
\subsection{Products}
\label{sec:prod}
Given $\aa,\beta \in M(k)$, define $\aa \vee \beta\in M(k)$ (the join
of~$\aa$ and~$\beta$) by
\[
(\aa \vee \beta)_{ij} = \max \sset{\aa_{ij},\beta_{ij}}, \;i,j \in\is{k}
\]
(the max-plus addition of tropical algebra~\cite{HOW}). We have
$\aa \vee {\boldsymbol{\emptyset}} = \aa$ for all $\aa\in M(k)$. If
$\mc{A}, \mathcal{B}\subset M(k)$ are generalized connection structures,
define the generalized connection structure $\mc{A} \vee \mathcal{B}$ by
\[
\mc{A} \vee \mathcal{B} = \set{\aa \vee \beta}{\aa \in \mc{A},\; \beta \in \mathcal{B}}.
\]
Note that ${\boldsymbol{\emptyset}} \in \mc{A} \vee \mathcal{B}$ if and only if
${\boldsymbol{\emptyset}} \in \mc{A}\cap\mathcal{B}$.
Consequently, if ${\boldsymbol{\emptyset}} \in \mc{A}\vee\mathcal{B}$, then $\mc{A} , \mathcal{B} \subset \mc{A} \vee \mathcal{B}$.
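Numerically the join is just an entrywise maximum of $0$-$1$ matrices; the tiny sketch below (with an arbitrary indexing convention for edges) also illustrates the observation that ${\boldsymbol{\emptyset}}$ lies in $\mc{A} \vee \mathcal{B}$ exactly when it lies in both factors.

```python
import numpy as np

def join(a, b):
    """Join of 0-1 connection matrices: an edge is present in a v b iff
    it is present in a or in b (entrywise maximum)."""
    return np.maximum(a, b)

empty = np.zeros((3, 3), dtype=int)                    # the empty structure
alpha = np.array([[0, 1, 0], [0, 0, 0], [0, 0, 0]])    # a single edge
beta  = np.array([[0, 0, 0], [0, 0, 1], [0, 0, 0]])    # a different single edge

assert np.array_equal(join(alpha, empty), alpha)       # alpha v empty = alpha

# Join of generalized connection structures: all pairwise joins.
A = [empty, alpha]
B = [empty, beta]
A_join_B = [join(a, b) for a in A for b in B]
# Since empty lies in both A and B, it also lies in A v B:
assert any(np.array_equal(m, empty) for m in A_join_B)
```
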
Suppose that~$A$ is a nonempty subset of $\is{k}$ containing $k_A$
elements. There is a natural embedding of $M(k_A)$ in $M(k)$ defined
by relabelling the matrices in $M(k_A)$ according to~$A$. Specifically,
map the matrix $(\aa_{ij})_{i,j \in A} \in M(k_A)$ to the matrix
$\widehat{\alpha}\in M(k)$ defined by
\[
\widehat{\alpha}_{ij} = \begin{cases}
\aa_{ij} & \text{for }i, j \in A,\\
0 & \text{otherwise}.
\end{cases}
\]
This embedding extends to an embedding of $\CC{k_A}$ in
$\CC{k}$ by
\[
\widehat{\alpha}_{i0} = \begin{cases}
\aa_{i0} & \text{for }i \in A, \\
0 & \text{otherwise.}
\end{cases}
\]
Given disjoint nonempty subsets $A, B$ of~$k$, regard $\CC{k_A}, \CC{k_B}$
as embedded in $\CC{k}$. Given $\aa \in \CC{k_A}$, $\beta\in \CC{k_B}$,
define
\[\aa \vee \beta = \widehat{\alpha}\vee\widehat{\beta}\in\CC{k}.\]
This extends to the join $\mc{A} \vee \mathcal{B}$ of generalized
connection structures on disjoint sets of nontrivial nodes.
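The relabelling embedding of $M(k_A)$ in $M(k)$ amounts to placing a submatrix at the index set~$A$; a short sketch (0-based labels, purely illustrative):

```python
import numpy as np

def embed(alpha_A, A, k):
    """Embed a connection matrix on the node subset A into M(k):
    entries indexed by A x A are copied, all other entries are 0."""
    hat = np.zeros((k, k), dtype=alpha_A.dtype)
    idx = np.asarray(A)
    hat[np.ix_(idx, idx)] = alpha_A
    return hat

alpha_A = np.array([[0, 1], [1, 0]])   # a structure on two nodes
hat = embed(alpha_A, A=[0, 3], k=5)    # relabel the two nodes as 0 and 3 of five
# hat is 5 x 5 with 1s exactly at positions (0, 3) and (3, 0).
```

The join over disjoint subsets $A$, $B$ is then just the entrywise maximum of the two embedded matrices.
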
Let $\mathcal{N} = \{N_0, \dotsc,N_k\}$ and $A$ be a proper subset of~$\is{k}$.
Define $\mathcal{N}^A = \{N_j \,|\, j \in \bu{A}\}$ and $\mathbf{M}_A = \prod_{i \in A} M_i$.
Denote points in $\mathbf{M}_A$ by $\mathbf{X}_A$. Suppose $B = \is{k}\smallsetminus A$. We have
$\mathcal{N}^{A}\cap \mathcal{N}^{B}=\sset{N_0}$ and $\mathbf{M}_A \times \mathbf{M}_B \approx \mathbf{M}$.
If $\is{C}^A$, $\is{C}^B$ are constraint structures on
$\mathcal{N}^A$, $\mathcal{N}^B$ respectively, let $\is{C} = \is{C}^A \vee \is{C}^B$ denote the induced constraint structure
on $\mathcal{N}$ -- well defined since constraints depend only on nodes and $A\cap B = \emptyset$.
More generally, given disjoint node sets $\mathcal{N}^A = \{N_j \,|\, j \in \bu{A}\}$, $\mathcal{N}^B = \{N_j \,|\, j \in \bu{B}\}$,
we can identify $A,B$ with complementary subsets of $\is{k}$, where $k$ is the total number of elements in $A \cup B$, and then follow the
conventions described above.
\begin{Def}
\label{prodasy}
(Notation and assumptions as above.)
Given asynchronous networks
$\mathfrak{N}^X = (\mathcal{N}^X,\mc{A}^X,\mathcal{F}^X,\mathcal{E}^X)$, $X\in\sset{A,B}$,
define the product
$\mathfrak{N}^A \times \mathfrak{N}^{B}$ to
be the asynchronous network $\mathfrak{N} =(\mathcal{N},\mc{A},\mathcal{F},\mathcal{E})$ where
\begin{enumerate}
\item $\mathcal{N} = \mathcal{N}^A \cup \mathcal{N}^B$,
\item $\is{C} = \is{C}^A \vee \is{C}^B$,
\item $\mc{A} = \mc{A}^A \vee \mc{A}^B$,
\item $\mathcal{F} = \mathcal{F}^A \times \mathcal{F}^B = \{ \mathbf{f}_A^\aa\times \mathbf{f}_{B}^\beta\,|\, \aa\in\mc{A}^A,\; \beta\in \mc{A}^B\}$, and
\item $\mc{E}$ is defined by $$\mathcal{E}(\mathbf{X}_A,\mathbf{X}_B) = \mathcal{E}^A(\mathbf{X}_A) \vee \mathcal{E}^B(\mathbf{X}_B),\; \text{for } (\mathbf{X}_A,\mathbf{X}_B) \in \mathbf{M}_A\times \mathbf{M}_B.$$
\end{enumerate}
\end{Def}
\begin{rem}
If $\mathfrak{N}^A, \mathfrak{N}^B$ are proper (or amenable), then so is $\mathfrak{N}^A\times\mathfrak{N}^B$.
\end{rem}
\begin{lemma}
(Notation of definition~\ref{prodasy}.)
The network vector field on $\mathfrak{N}^A\times\mathfrak{N}^B$ is given by
\begin{equation}
\label{DEQD}
\is{F}(\mathbf{X}_A,\mathbf{X}_B) = (\is{f}_A^{\mathcal{E}^A(\mathbf{X}_A)}(\mathbf{X}_A),\is{f}_B^{\mathcal{E}^B(\mathbf{X}_B)}(\mathbf{X}_B)),
\end{equation}
for all $(\mathbf{X}_A,\mathbf{X}_B) \in \mathbf{M}_A\times \mathbf{M}_B$.
\end{lemma}
\begin{proof}Immediate from the definitions. \end{proof}
\subsection{Decomposability}
\label{sec:Indecomposability}
\begin{Def}
\label{indec}
An asynchronous network $(\mathcal{N},\mc{A},\mathcal{F},\mathcal{E})$
is \emph{decomposable} if it can be written as a product of asynchronous
networks. If the network is not decomposable, it is \emph{indecomposable}.
\end{Def}
\begin{exam}
\label{syncexample}
Suppose that $\mathcal{N}$ is a synchronous network with connection structure $\alpha \in M(k)$ and
$\aa$-admissible network vector field $\mathbf{f}$ satisfying conditions (N1--3) of section~\ref{generalities}. Since $\alpha$ encodes the dependencies of $\mathbf{f}$
it is trivial that $\mathcal{N}$ can be written as a product of two synchronous networks iff the network graph $\Gamma_\aa$ is disconnected.
\hfill \mbox{$\diamondsuit$}
\end{exam}
Our aim is to find sufficient conditions on an asynchronous network for it to be decomposable.
\begin{Def}
The \emph{connection graph} of the asynchronous network
$\mathfrak{N} = (\mathcal{N},\mc{A},\mathcal{F},\mc{E})$ is the graph defined by the \mbox{$0$\,-$1$ } matrix
$\Gamma_\mathfrak{N}= \bigvee_{\aa\in\mc{A}}\CSf{\aa}$.
\end{Def}
\begin{lemma}
\label{lem:NecDecomp}
If an asynchronous network $\mathfrak{N}$ is decomposable,
then the {connection graph}
$\Gamma_\mathfrak{N}$ of $\mathfrak{N}$
has at least two connected components.
\end{lemma}
\begin{proof} If $\mathfrak{N}$ is decomposable, then
$\mathfrak{N} = \mathfrak{N}^A\times\mathfrak{N}^B$, where $A,B$ are proper complementary subsets of $\is{k}$.
Since there are no connections between nodes in $\mathcal{N}^A$ and $\mathcal{N}^B$,
$\Gamma_\mathfrak{N}$ has at least two connected components.
\end{proof}
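The necessary condition of lemma~\ref{lem:NecDecomp} is easy to test computationally: form the entrywise maximum of the (skeletons of the) connection structures and count connected components, treating edges as undirected. A sketch with an illustrative four-node generalized connection structure:

```python
import numpy as np

def connection_graph(alphas):
    """Connection graph of an asynchronous network: the entrywise maximum
    of the skeletons of all connection structures (constraining-node
    entries already dropped in this sketch)."""
    G = np.zeros_like(alphas[0])
    for a in alphas:
        G = np.maximum(G, a)
    return G

def num_components(G):
    """Connected components of the graph of a 0-1 matrix, edges undirected."""
    k = G.shape[0]
    U = np.maximum(G, G.T)
    seen, count = set(), 0
    for s in range(k):
        if s in seen:
            continue
        count += 1
        stack = [s]
        while stack:
            i = stack.pop()
            if i in seen:
                continue
            seen.add(i)
            stack.extend(j for j in range(k) if U[i, j] and j not in seen)
    return count

# Four nodes; every connection structure only ever links {0,1} or {2,3}:
alphas = [np.zeros((4, 4), dtype=int),
          np.array([[0,1,0,0],[0,0,0,0],[0,0,0,0],[0,0,0,0]]),
          np.array([[0,0,0,0],[0,0,0,0],[0,0,0,1],[0,0,0,0]])]
G = connection_graph(alphas)
assert num_components(G) == 2   # the necessary condition for decomposability holds
```
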
\begin{rem}\label{dep}
Lemma~\ref{lem:NecDecomp} gives a necessary condition for
decomposability which is not sufficient.
There are two issues.
First, the event map encodes information about spatial dependence of node interactions that cannot be
deduced from the connection graph. Second, the admissible vector fields may have dependencies that are
incompatible with decomposability.
\end{rem}
\begin{exam}\label{ex:StrucIndecomp}
Let $k = 2$, $M_1 = M_2 = \mathbb{R}$. Define connection structures $\aa_i = N_0 \rightarrow N_i$, $i \in \is{2}$ and
generalized connection structure
$\mc{A} = \{{\boldsymbol{\emptyset}}, \aa_1, \aa_2,\beta = \aa_1 \vee \aa_2\}$.
Suppose the event map is given by
\[
\mc{E}(x_1,x_2) = \begin{cases}
\aa_1,& \text{if } x_1 < 0, x_2 =0\\
\aa_2, & \text{if } x_1 = 0, x_2 >0\\
\beta, & \text{if } x_1 = x_2 = 0,\\
{\boldsymbol{\emptyset}}, & \text{otherwise}
\end{cases}
\]
In this case, $\mc{A} = \mc{A}^1\vee\mc{A}^2$, where $\mc{A}^i = \{{\boldsymbol{\emptyset}},\aa_i\}$, $ i \in \is{2}$,
and the network graph is disconnected. However,
there is no way to write $\mc{E}(x_1, x_2)$ as $\mc{E}^1(x_1)\vee\mc{E}^2(x_2)$ as
the event sets involving $x_1\in M_1$ depend nontrivially on~$x_2\in M_2$.
Hence the network cannot be decomposable or
even equivalent to a decomposable network whatever choice we make for admissible vector fields.
Suppose instead we define the event map by
\[\tilde\mc{E}(x_1,x_2) = \begin{cases}
\aa_1, & \text{if } x_1 = 0, x_2\neq 0\\
\aa_2, & \text{if } x_2 = 0, x_1\neq 0\\
\beta,& \text{if } x_1 = x_2 = 0\\
{\boldsymbol{\emptyset}}, & \text{otherwise}
\end{cases}
\]
In this case $\mc{A} = \mc{A}^1\vee\mc{A}^2$ and we may write $\mc{E} = \mc{E}^1 \vee \mc{E}^2$ where
$\mc{E}^i(0) = \aa_i$, and $\mc{E}^i(x_i) = {\boldsymbol{\emptyset}}$, $x_i\ne 0$, $i \in \is{2}$.
Suppose that $\mathbf{f}^{\aa_1}(x_1,x_2) = (0,v_2)$, $\mathbf{f}^{\aa_2}(x_1,x_2) = (v_1,0)$, $\mathbf{f}^{\boldsymbol{\emptyset}}(x_1,x_2) = (v_1,v_2)$, where $v_1,v_2 \ne 0$. For the
moment leave $\mathbf{f}^\beta$ unspecified.
Define $\mathcal{F}^i=\{\mathbf{f}_i^{\boldsymbol{\emptyset}}, \mathbf{f}_i^{\alpha_i}\}$, where $\mathbf{f}_i^{\boldsymbol{\emptyset}}(x_i) = v_i$, $\mathbf{f}_i^{\aa_i}(x_i) = 0$, $i \in \is{2}$. Observe that
$\mathbf{f}^{\boldsymbol{\emptyset}} = \mathbf{f}_1^{\boldsymbol{\emptyset}} \times \mathbf{f}_2^{\boldsymbol{\emptyset}}$, $\mathbf{f}^{\aa_1} = \mathbf{f}_1^{\aa_1} \times \mathbf{f}_2^{\boldsymbol{\emptyset}}$ and $\mathbf{f}^{\aa_2} = \mathbf{f}_1^{\boldsymbol{\emptyset}}\times \mathbf{f}_2^{\aa_2}$.
For $(\mathcal{N},\mc{A},\mathcal{F},\mc{E})$ to be a product we additionally
require $\mathbf{f}^\beta(x_1,x_2) = (\mathbf{f}_1^{\aa_1}(x_1),\mathbf{f}_2^{\aa_2}(x_2)) = (0,0)$, all $(x_1,x_2) \in \mathbb{R}^2$.
In particular, if $\mathbf{f}^\beta(0,0) \ne (0,0)$, the network $(\mathcal{N},\mc{A},\mathcal{F},\mc{E})$ is not even equivalent to a product network.
However, if $\mathbf{f}^\beta(0,0) = (0,0)$, then the network $(\mathcal{N},\mc{A},\mathcal{F},\mc{E})$ will be equivalent to a product network
if we redefine $\mathbf{f}^\beta$ to be $\mathbf{f}_1^{\aa_1} \times \mathbf{f}_2^{\aa_2}$ (this does not change the values of $\mathbf{f}^\beta$ on $E^\beta$).
\hfill \mbox{$\diamondsuit$}
\end{exam}
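The factorization $\tilde\mc{E} = \mc{E}^1 \vee \mc{E}^2$ claimed in the second half of the example can be verified pointwise. In the sketch below connection structures are represented as edge sets, so the join becomes set union; the sample points are arbitrary.

```python
EMPTY = frozenset()
A1 = frozenset({('N0', 'N1')})   # alpha_1 = N_0 -> N_1
A2 = frozenset({('N0', 'N2')})   # alpha_2 = N_0 -> N_2
BETA = A1 | A2                   # beta = alpha_1 v alpha_2

def E_tilde(x1, x2):
    """The second event map of the example."""
    if x1 == 0 and x2 != 0:
        return A1
    if x2 == 0 and x1 != 0:
        return A2
    if x1 == 0 and x2 == 0:
        return BETA
    return EMPTY

def E1(x1):
    return A1 if x1 == 0 else EMPTY

def E2(x2):
    return A2 if x2 == 0 else EMPTY

# E_tilde factorizes as the join (here: union) of single-node event maps.
samples = [-1.0, 0.0, 2.5]
assert all(E_tilde(x1, x2) == E1(x1) | E2(x2)
           for x1 in samples for x2 in samples)
# The first event map E of the example does not factorize: E(-1, 0) = alpha_1
# while E(-1, 1) = empty, so the alpha_1 event set depends nontrivially on x2.
```
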
\subsection{Sufficient conditions for decomposability}
\label{sec:Decomposability}
Let~$\mathfrak{N}$ be an asynchronous network with~$k$ nodes and $C$ be
a proper connected component of the connection
graph~$\Gamma_\mathfrak{N}$. Identify $C$ with
the nonempty subset of~$\is{k}$ corresponding to the labels of the
nodes in the component $C$. Let~$\ol{C} = \is{k}\smallsetminus C$.
Since~$C$ is a connected component of $\Gamma_\mathfrak{N}$,
we can write each $\aa\in \mc{A}$ uniquely as
$\aa=\aa_C\vee\aa_{\ol{C}}$, where $\aa_C$,
$\aa_{\ol{C}}$ are connection structures on $\mathcal{N}^C$ and
$\mathcal{N}^{\ol{C}}$ respectively. Set
$\mc{A}^C = \{\aa_C \,|\, \aa \in\mc{A}\}$. We have a well defined
projection $\pi_C:\mc{A}\rightarrow \mc{A}^C$ defined by $\pi_{C}(\aa)=\aa_C$.
Define the event map $\mathcal{E}^C: \mathbf{M}_C\times\mathbf{M}_{\ol{C}}\to\mc{A}^C$
by
\[
\mathcal{E}^C(\mathbf{X}_C, \mathbf{X}_{\overline{C}}) = \pi_C(\mathcal{E}(\mathbf{X}_C, \mathbf{X}_{\overline{C}})).
\]
\begin{Def}
An asynchronous network $\mathfrak{N}$
is \emph{structurally decomposable} if for
any connected component~$C$ of the connection graph $\Gamma_\mathfrak{N}$,
the map $\mc{E}^C$ is independent of
$\mathbf{X}_{\ol{C}}\in\mathbf{M}_{\ol{C}}$
(that is, $\mc{E}^C(\mathbf{X}_C, \mathbf{X}_{\ol{C}}) = \mc{E}^1(\mathbf{X}_C)$ where $\mc{E}^1: \mathbf{M}_C \to \mc{A}^C$).
\end{Def}
\begin{rem}
Structural decomposability implies conditions on
structural dependencies that will generally be different from
the dependencies of the network vector field. For example, suppose a component $C$ of the connection graph contains the node $N_1$. If the node $N_1$ is stopped there may
be a condition that $N_1$
will restart when the state of another node, say $N_2$, attains a certain
value. Necessarily, $N_2$ must lie in $C$ (structural decomposability). However, there need be no connection between $N_1$ and $N_2$ unless $C$ contains exactly two nodes.
\end{rem}
Suppose that $\mathfrak{N}$ is structurally decomposable and that $\Gamma_\mathfrak{N}$ has connected components $C_1,\dotsc,C_q$.
Set $\mathbf{M}_\ell = \mathbf{M}_{C_\ell}$, $\mc{A}^\ell = \pi_{C_\ell}(\mc{A})$, $\ell \in \is{q}$.
By structural decomposability we may write $\mc{E}(\mathbf{X}) = \bigvee_{\ell\in \is{q}} \mc{E}^\ell(\mathbf{X}_\ell)$ where
$\mc{E}^\ell:\mathbf{M}_\ell \rightarrow \mc{A}^\ell$. For $\alpha \in \mc{A}$, $\ell \in \is{q}$, set $\alpha_\ell = \pi_{C_\ell}(\alpha) \in \mc{A}^\ell$ and $E^\ell_{\alpha_\ell} = (\mc{E}^\ell)^{-1}(\alpha_\ell)\subset \mathbf{M}_\ell$.
\begin{lemma}
(Notation as above.)
If $\mathfrak{N}$ is structurally decomposable and $\Gamma_\mathfrak{N}$ has connected components $C_1,\dotsc,C_q$, then
\[
E_\alpha = \prod_{\ell \in \is{q}} E^\ell_{\aa_\ell} \subset \prod_{\ell \in \is{q}} \mathbf{M}_\ell,\;\text{for all } \aa\in\mc{A}.
\]
\end{lemma}
\proof An immediate consequence of structural decomposability. \qed
If $C$ is a proper connected component of the connection graph $\Gamma_\mathfrak{N}$ of an asynchronous network $\mathfrak{N}$, then
by admissibility
\[
\mathbf{f}^\aa = \mathbf{f}_C^\aa\times \mathbf{f}_{\ol{C}}^\aa, \; \text{for all } \alpha \in \mc{A},
\] where
$\mathbf{f}^\aa_C: \mathbf{M}_C\to T\mathbf{M}_C$ and
$\mathbf{f}^\aa_{\ol{C}}: \mathbf{M}_{\ol{C}}\to T\mathbf{M}_{\ol{C}}$.
In order that $\mathfrak{N}$
be decomposable, this decomposition has to be compatible
with the projections
$\pi_C: \mc{A}\to \mc{A}^C$, $\pi_{\ol{C}}: \mc{A}\to \mc{A}^{\ol{C}}$. In particular, if
connections in the set of nodes that are in~$\ol{C}$ are added or deleted,
dynamics on $\mathbf{M}_C$ is not affected.
\begin{Def}
(Notation as above.)
The asynchronous network $\mathfrak{N}$ is \emph{dynamically decomposable} if
for any connected component~$C$ of $\Gamma_\mathfrak{N}$, we have
\[
\mathbf{f}_C^\aa = \mathbf{f}_C^\beta
\]
for all $\alpha,\beta \in \mc{A}$ such that $\pi_C(\alpha) = \pi_C(\beta)$.
\end{Def}
\begin{lemma}
(Notation as above.)
Input consistent asynchronous networks are dynamically decomposable. In particular,
asynchronous networks with additive input structure are dynamically decomposable.
\end{lemma}
\begin{proof}
Given $i \in \is{k}, \alpha \in \mc{A}$, let $J(i, \aa)$ be the associated dependency set for node~$N_i$.
If $\alpha,\beta \in \mc{A}$ and $J(i, \aa) = J(i, \beta)$, then $f_i^\aa = f_i^\beta$ by
input consistency.
If $i\in C$, where~$C$ is a connected component of the network
graph~$\Gamma_\mathfrak{N}$, then $J(i, \aa)\cap\is{k}\subset C$ for
all~$\aa\in \mc{A}$. Hence
$J(i, \aa)=J(i, \aa_C\vee \aa_{\ol{C}})$ is independent of $\aa_{\ol{C}}$.
Input consistency implies that
$f_i^{\aa_C\vee\beta} = f_i^{\aa_C\vee\gamma}$ for all
$\beta, \gamma\in\mc{A}_{\ol{C}}$ which yields dynamical decomposability.
\end{proof}
We now state the main result of this section.
\begin{thm}\label{lem:SufDecomp}
Let $\mathfrak{N}$ be a structurally and dynamically
decomposable asynchronous network with connection graph~$\Gamma$.
If~$\Gamma$ has connected components $C_1, \dotsc, C_q$ then there
exist indecomposable asynchronous networks
$\mathfrak{N}^1, \dotsc, \mathfrak{N}^q$ such that
\[\mathfrak{N} = \mathfrak{N}^1\times\dotsb\times\mathfrak{N}^q.\]
\end{thm}
\begin{proof}
For $\ell \in \is{q}$, define $\mc{A}^\ell = \{\alpha_\ell \stackrel{\mathrm{def}}{=} \pi_{C_\ell}(\aa)\,|\,\aa\in\mc{A}\}$
and $\mathcal{F}^\ell = \{\mathbf{f}_\ell^{\aa_\ell} \stackrel{\mathrm{def}}{=} \mathbf{f}_{C_\ell}^\alpha:\mathbf{M}_\ell \rightarrow T\mathbf{M}_\ell \,|\, \aa \in \mc{A}\}$. By dynamical decomposability
we have $\mathbf{f}^\alpha = \prod_{\ell \in \is{q}} \mathbf{f}_\ell^{\aa_\ell}$, for all $\alpha \in \mc{A}$.
Constraint structures are
defined for individual nodes and so factorise naturally. Let $\mc{E}^\ell:\mathbf{M}_\ell\rightarrow \mc{A}^\ell$ be the event maps given by structural decomposability.
If we let $\mathfrak{N}^\ell$ be the asynchronous network $(\mathcal{N}^\ell,\mc{A}^\ell,\mathcal{F}^\ell,\mc{E}^\ell)$, where $\mathcal{N}^\ell = \{N_0\}\cup \{N_i \,|\, i \in C_\ell\}$, $\ell\in\is{q}$, then
$\mathfrak{N}=\prod_{\ell\in\is{q}} \mathfrak{N}^\ell$.
\end{proof}
Our concluding result on decomposability is an immediate consequence of lemma~\ref{lem:NecDecomp} and theorem~\ref{lem:SufDecomp}.
\begin{cor}\label{prop:DecompConnectionGraph}
A structurally and dynamically decomposable asynchronous
network~$\mathfrak{N}$ is decomposable if and only if its connection
graph has more than one nontrivial connected component.
\end{cor}
\subsection{Factorization of asynchronous networks}
Assume for this section that $\mathfrak{N} = (\mathcal{N},\mc{A},\mathcal{F},\mc{E})$ is an asynchronous network which is not necessarily
structurally or dynamically decomposable.
\begin{Def}
The asynchronous network $\mathfrak{N}^1$ is a factor of $\mathfrak{N}$ if there is an asynchronous network
$\mathfrak{N}^2$ such that $\mathfrak{N} = \mathfrak{N}^1 \times \mathfrak{N}^2$.
\end{Def}
The proof of the next lemma is immediate from the definition of a product.
\begin{lemma}\label{neccond}
If $\mathfrak{N}^1$ is a factor of $\mathfrak{N}$, then
the connection graph $\Gamma_{\mathfrak{N}^1}$ is a union of connected components of $\Gamma_{\mathfrak{N}}$.
\end{lemma}
\begin{rem}
If $\mathfrak{N}^1$ is indecomposable, the connection graph $\Gamma_{\mathfrak{N}^1}$ may have more than one component -- unless
$\mathfrak{N}$ is structurally and dynamically decomposable (theorem~\ref{lem:SufDecomp}).
\end{rem}
\begin{prop}
\label{uniqfactor}
Every asynchronous network $\mathfrak{N}$ has a factorization $\prod_{a\in\is{q}}\mathfrak{N}^a$ as a product of indecomposable
asynchronous networks. The factorization is unique, up to the order of factors.
\end{prop}
\proof Existence is obvious. The uniqueness of factorization follows easily from lemma~\ref{neccond}. \qed
\vspace*{0.1in}
\noindent {\bf Acknowledgements.}
CB would like to thank Marc Timme at the Max Planck Institute for Dynamics
and Self-Organization for continuing hospitality. MF would like to
thank the Mathematics Department at Rice University, where
much of this work was done, for providing a warm and supportive working environment, and
Steve Furber of Manchester University for his penetrating insights and questions.
Both authors would like to acknowledge fruitful discussions with
many people --- too many to name individually here --- whose input has proved
invaluable in helping to shape these ideas.
\section{Introduction}
Deep neural networks have been pushing the frontiers of artificial intelligence (AI) by yielding excellent performance in numerous tasks, from understanding images \citep{ResNet} to text \citep{VDCNN}.
Yet, high performance is not always sufficient -- some real-world deployment scenarios may require that an ideal AI system be `interpretable', such that it builds trust by explaining the rationale behind its decisions, allows detection of common failure cases and biases, and refrains from making decisions without sufficient confidence.
In their conventional form, deep neural networks are considered as \emph{black-box models} -- they are controlled by complex nonlinear interactions between many parameters that are difficult to understand.
There are numerous approaches, e.g. \citep{tcav,visualizeHF,visualizeconv1,visualizeconv2}, that bring post-hoc explainability of decisions to already-trained models.
Yet, these have the fundamental limitation that the models are not designed for interpretability.
There are also approaches that redesign neural networks to make them inherently interpretable, as in this paper. Notable examples include sequential attention \citep{Bahdanau}, capsule networks \citep{capsules}, and interpretable convolutional filters \citep{InterpretableCNN}.
\begin{figure*}[!htbp]
\centering
\includegraphics[width=0.6\textwidth]{prototypical_example.pdf}
\caption{ProtoAttend bases the decision on a few prototypes from the database. This enables interpretability of the prediction (by visualizing the highest weight prototypes) and confidence estimation for the decision (by measuring agreement across prototype labels).}
\label{fig:prototype_general}
\end{figure*}
We focus on inherently-interpretable deep neural network modeling with the foundations of \emph{prototypical learning}. Prototypical learning decomposes decision making into known samples (see Fig. \ref{fig:prototype_general}), referred here as prototypes.
We base our method on the principle that prototypes should constitute \emph{a minimal subset of samples with high interpretable value that can serve as a distillation or condensed view of a dataset} \citep{prototype_selection}.
Given that the number of objects a human can interpret is limited \citep{magicalseven}, outputting a few prototypes can be an effective approach for humans to understand the AI model behavior.
In addition to such interpretability, prototypical learning:
(1)~provides an efficient confidence metric by measuring mismatches in prototype labels, allowing performance to be improved by refraining from making predictions in the absence of sufficient confidence,
(2)~helps detect deviations in the test distribution by measuring mismatches in prototype labels that represent the support of the training dataset, and
(3)~enables performance in the high label noise regime to be improved by controlling the number of selected prototypes.
Given these motivations, prototypes should be controllable in number, and should be perceptually relevant to the input in explaining the decision making task.
Prototype selection in its naive form is computationally expensive and perceptually challenging \citep{prototype_selection}. We design ProtoAttend to address this problem in an efficient way.
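As a preview of benefit (1), a prototype-based confidence score can be sketched as the total prototype weight that agrees with the predicted label; this minimal formalization is our own reading and may differ in detail from the metric used in the experiments.

```python
import numpy as np

def prototype_confidence(weights, proto_labels, predicted_label):
    """Confidence = total prototype weight agreeing with the prediction.
    `weights` are the attention weights, summing to 1 over prototypes."""
    weights = np.asarray(weights, dtype=float)
    agree = (np.asarray(proto_labels) == predicted_label)
    return float(weights[agree].sum())

# Three prototypes dominate the decision; two share the predicted label.
w = np.array([0.6, 0.3, 0.1])
labels = np.array([1, 1, 0])
conf = prototype_confidence(w, labels, predicted_label=1)
# conf == 0.9: 90% of the prototype mass supports the predicted class, so
# the prediction can be accepted; a low value would trigger abstention.
assert abs(conf - 0.9) < 1e-12
```
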
Our contributions can be summarized as follows:
\begin{enumerate}[noitemsep, nolistsep, leftmargin=*]
\item We propose a novel method, ProtoAttend, for selecting input-dependent prototypes based on an attention mechanism between the input and prototype candidates. ProtoAttend is model-agnostic and can even be integrated with pre-trained models.
\item ProtoAttend allows interpreting the contribution of each prototype via the attention outputs.
\item For a `condensed view', we demonstrate that sparsity in weights can be efficiently imposed via the choice of the attention normalization and additional regularization.
\item On image, text and tabular data, we demonstrate the four key benefits of ProtoAttend: interpretability, confidence control, diagnosis of distribution mismatch, and robustness against label noise. ProtoAttend yields superior quality for sample-based interpretability, better-calibrated confidence scoring, and more sensitive out-of-distribution detection compared to alternative approaches.
\item ProtoAttend enables all these benefits via the same architecture and method, while maintaining comparable overall accuracy.
\end{enumerate}
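Regarding the sparse attention normalization in contribution 3: one standard normalization with this property is the sparsemax projection onto the probability simplex, sketched below. Whether this exact normalization is the one employed is an assumption here; the point is only that, unlike softmax, it drives many prototype weights exactly to zero, yielding a condensed set of prototypes.

```python
import numpy as np

def sparsemax(z):
    """Euclidean projection of z onto the probability simplex.
    Unlike softmax, many coordinates of the output are exactly zero."""
    z = np.asarray(z, dtype=float)
    z_sorted = np.sort(z)[::-1]
    cumsum = np.cumsum(z_sorted)
    ks = np.arange(1, len(z) + 1)
    support = 1 + ks * z_sorted > cumsum      # coordinates kept in the support
    k = ks[support][-1]
    tau = (cumsum[k - 1] - 1) / k
    return np.maximum(z - tau, 0.0)

p = sparsemax(np.array([2.0, 1.5, 0.1, -1.0]))
# p = [0.75, 0.25, 0, 0]: it sums to 1, and the low-scoring candidates
# receive weight exactly 0 (softmax would give them small positive weight).
```
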
\section{Related Work}
{\bf Prototypical learning:}~
The principles of ProtoAttend are inspired by \citep{prototype_selection}.
They formulate prototype selection as an integer program and solve it using a greedy approach with linear program relaxation.
It is unclear whether such approaches can be efficiently adapted to deep learning.
\citep{thislookslikethat} and \citep{casebasedreasoning} introduce a prototype layer for interpretability by replacing the conventional inner product with a distance computation for perceptual similarity.
In contrast, our method uses an attention mechanism to quantify perceptual similarity and can choose input-dependent prototypes from a large-scale candidate database.
\citep{representerpointselection} decomposes the prediction into a linear combination of activations of training points for interpretability using representer values.
The linear decomposition idea also exists in ProtoAttend, but the weights are learned via an attention mechanism and sparsity is encouraged in the decomposition.
In \citep{influence_functions}, the training points that are the most responsible for a given prediction are identified using influence functions via oracle access to gradients and Hessian-vector products.
{\bf Metric learning:}~
Metric learning aims to find an embedding representation of the data where similar data points are close and dissimilar data points are far from each other.
ProtoAttend is motivated by efficient learning of such an embedding space which can be used to decompose decisions.
Metric learning for deep neural networks is typically based on modifications to the objective function, such as using triplet loss and N-pair loss \citep{deepmetric1,deepmetric2,deepmetric3}. These yield perceptually meaningful embedding spaces yet typically require a large subset of nearest neighbors to avoid degradation in performance \citep{deepmetric2}.
\citep{attention_metric_learning} proposes a deep metric learning framework which employs an attention-based ensemble with a divergence loss so that each learner can attend to different parts of the object.
Our method has metric learning capabilities like relating similar data points, but also performs well on the ultimate supervised learning task.
{\bf Attention-based few-shot learning:}~
Some of our inspirations are based on recent advances in attention-based few-shot learning.
In \citep{matching_networks}, an attention mechanism is used to relate an example with candidate examples from a support set using a weighted nearest-neighbor classifier applied within an embedding space.
In \citep{attentionattractor_fewshot}, incremental few-shot learning is implemented using an attention attractor network on the encoded and support sets.
In \citep{prototypical}, a non-linear mapping is learned to determine the prototype of a class as the mean of its support set in the embedding space. During training, the support set is randomly sampled to mimic the inference task.
Overall, the attention mechanism in our method follows related principles but fundamentally differs in that few-shot learning aims for generalization to unseen classes whereas the goal of our method is robust and interpretable learning for seen classes.
{\bf Uncertainty and confidence estimation:}~
ProtoAttend takes a novel perspective on the perennial problem of quantifying how much deep neural networks' predictions can be trusted.
Common approaches are based on using the scores from the prediction model, such as the probabilities from the softmax layer of a neural network, yet it has been shown that the raw confidence values are typically poorly calibrated \citep{dnn_calibration}. Ensemble of models \citep{deep_ensemble} is one of the simplest and most efficient approaches, but significantly increases complexity and decreases interpretability. In \citep{deepKNN}, the intermediate representations of the network are used to define a distance metric, and a confidence metric is proposed based on the conformity of the neighbors. \citep{trust_classifier} proposes a confidence metric based on the agreement between the classifier and a modified nearest-neighbor classifier on the test sample. In \citep{DeVries2018Uncertainity}, direct inference of a confidence output is considered with a modified loss. Another direction of uncertainty and confidence estimation is Bayesian neural networks that return a distribution over the outputs \citep{bayesian_dnn,bayesian_dnn2,KendallGal2017Uncertainties}.
\section{ProtoAttend: Attention-based Prototypical Learning}
Consider a training set with labels, $\{\mathbf{x_i}, y_i \}$.
Conventional supervised learning aims to learn a model $s(\mathbf{x_i} ; \mathbf{S})$ that minimizes a predefined loss $1/B \cdot \sum\nolimits_{i=1}^{B} L(y_i, \hat{y_i}=s(\mathbf{x_i} ; \mathbf{S}))$\footnote{$\mathbf{S}$ represents the trainable parameters for $s( ;\mathbf{S})$ and is sometimes not shown for notational convenience.} at each iteration, where $B$ is the batch size for training.
Our goal is to impose that decision making should be based on only a small number of training examples, i.e. \emph{prototypes}, such that their linear superposition in an embedding space can yield the overall decision and the superposition weights correspond to their importance.
Towards this goal, we propose defining a solution to prototypical learning with the following six principles:
\begin{enumerate}[label=\roman*., noitemsep, nolistsep]
\item $\mathbf{v_i} = f(\mathbf{x_i}; \mathbf{\theta})$ encodes all relevant information of $\mathbf{x_i}$ for the final decision. $f( )$ considers the global distribution of the samples, i.e. learns from all $\{\mathbf{x_i}, y_i \}$. Although all the information in the training dataset is embodied in the weights of the encoder\footnote{Training of $f( )$ may also involve initializing with pre-trained models or transfer learning.}, we construct the learning method in such a way that the decision is dominated by the prototypes with high weights.
\item From the encoded information, we can find a decision function so that the mapping $g(\mathbf{v_i}; \mathbf{\eta})$ is close to the ground truth $y_i$, in a consistent way with conventional supervised learning.
\item Given candidates $\mathbf{x^{(c)}_j}$ to select the prototypes from, there exists weights $p_{i,j}$ (where $p_{i,j} \geq 0$ and $\sum_{j=1}^D p_{i,j} = 1$), such that the decision $g(\sum_{j=1}^D p_{i,j} \mathbf{v^{(c)}_j}; \mathbf{\eta})$ (where $\mathbf{v^{(c)}_j} = f(\mathbf{x^{(c)}_j}; \mathbf{\theta})$) is close to the ground truth $y_i$.
\item When the linear combination $\sum_{j=1}^D p_{i,j} \mathbf{v^{(c)}_j}$ is considered, prototypes with higher weights $p_{i,j}$ have higher contribution in the decision $g(\sum_{j=1}^D p_{i,j} \mathbf{v^{(c)}_j}; \mathbf{\eta})$.
\item The weights should be sparse -- only a controllable amount of weights $p_{i,j}$ should be non-zero. Ideally, there exists an efficient mechanism for outputting $p_{i,j}$ to control the sparsity without significantly affecting performance.
\item The weights $p_{i,j}$ depend on the relation between input and the candidate samples, $p_{i,j} = r(\mathbf{x_i}, \mathbf{x^{(c)}_j}; \mathbf{\Gamma})$, based on their perceptual relation for decision making. We do not introduce any heuristic relatedness metric such as distances in the representation space, but we allow the model to learn the relation function that helps the overall performance.
\end{enumerate}
Learning involves optimization of the parameters $\mathbf{\theta}, \mathbf{\Gamma}, \mathbf{\eta}$ of the corresponding functions.
If the proposed principles (such as reasoning from the linear combination of embeddings or assigning relevance to the weights) are not imposed during training but only at inference, high performance cannot be obtained due to the train--test mismatch, as the intermediate representations can be learned in an arbitrary way without any necessity to satisfy them.\footnote{For example, commonly-used distance metrics in representation spaces fail at determining the perceptual relevance between samples when the model is trained in a vanilla way \citep{deepknn_robust}.}
The subsequent section presents ProtoAttend and training procedure to implement it.
\subsection{Network architecture and training}
The principles above are conditioned on efficient learning of an encoding function that encodes the relevant information for decision making, a relation function that determines the prototype weights, and a final decision-making block that returns the output. Conventional supervised learning comprises only the encoding and decision blocks; designing a learning method that also includes a relation function of reasonable complexity is challenging. To this end, we adapt the idea of attention \citep{Corbetta2002ControlOG, attention}, where the model focuses on an adaptive small portion of the input while making the decision. Unlike conventional uses of attention in sequence or visual learning, we propose to use attention at the sample level, so that the attention mechanism determines the prototype weights by relating the input and the candidate samples via alignment of their keys and queries.
Fig. \ref{fig:model_architecture} shows the proposed architecture for training and inference. The three main blocks are described below:
\begin{figure*}[!htbp]
\centering
\includegraphics[width=0.99\textwidth]{training_inference_architecture.pdf}
\caption{ProtoAttend method for training and testing. A shared encoder between input samples and candidate samples generates representations that are mapped to key, query and value embeddings (with a single nonlinear layer). The alignment between keys and queries determines the weights of the prototypes, and the linear combination of the values determines the final decision. Conformity of the prototype labels is used as a confidence metric.}
\label{fig:model_architecture}
\end{figure*}
{\bf Encoder:}~
A trainable encoder is employed to transform $B$ input samples (note that $B$ may be 1 at inference) and $D$ samples from the database of prototype candidates (note that $D$ may be as large as the entire training dataset at inference) into keys, queries and values. The encoder is shared and jointly updated for the input samples and prototype candidate database, to learn a common representation space for the values.
The encoder architecture can be based on any trainable discriminative feature mapping function, e.g. ResNet \citep{ResNet} for images, with the modification of generating three types of embeddings. For mapping of the last encoder layer to key, query and value embeddings, we simply use a single fully-connected layer with a nonlinearity, separately for each.\footnote{There are other viable options for the mapping but we restrict it to a single layer to minimize the additional number of trainable parameters, which becomes negligible in most cases.}
For input samples, $\mathbf{V} \in \Re ^ {B \times d_{out}}$ and $\mathbf{Q} \in \Re ^ {B \times d_{att}}$ denote the values and queries, and for candidate database samples $\mathbf{K^{(c)}} \in \Re ^ {D \times d_{att}}$ and $\mathbf{V^{(c)}} \in \Re ^ {D \times d_{out}}$ denote the keys and values.
{\bf Relational attention:}~
The relational attention yields the weight between the $i^{th}$ sample and $j^{th}$ candidate, $p_{i,j}$, via alignment of the corresponding key and query in dot-product attention form\footnote{We use $\mathbf{A_i}$ to denote the $i^{th}$ row of $\mathbf{A}$.}:
\begin{equation}
p_{i,j} = n \left ( {\mathbf{K_j^{(c)}} \mathbf{Q_i}^T} / {\sqrt{d_{att}}} \right ),
\end{equation}
where $n()$ is a normalization function to satisfy $p_{i,j} \geq 0$ and $\sum_{j=1}^D p_{i,j} = 1$, for which we consider $\textrm{softmax}$ and $\textrm{sparsemax}$ \citep{sparsemax}\footnote{Sparsemax encourages sparsity by computing the Euclidean projection of the scores onto the probability simplex.}. The choice of the normalization function is an efficient mechanism to control the sparsity of the prototype weights, as demonstrated in the experiments.
Note that the relational attention mechanism does not introduce any extra trainable parameters.
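As an illustration, the normalization step can be sketched in a few lines of NumPy. The sparsemax implementation below follows the closed-form simplex projection of \citep{sparsemax}; the function names are ours and purely illustrative.

```python
import numpy as np

def sparsemax(z):
    # Euclidean projection of the score vector z onto the probability simplex
    z_sorted = np.sort(z)[::-1]
    cssv = np.cumsum(z_sorted)
    k = np.arange(1, len(z) + 1)
    k_max = k[1 + k * z_sorted > cssv][-1]   # size of the support
    tau = (cssv[k_max - 1] - 1) / k_max      # threshold
    return np.maximum(z - tau, 0.0)

def prototype_weights(K_c, q, normalization="softmax"):
    # scaled dot-product alignment between candidate keys K_c (D x d_att)
    # and a single query q (d_att,)
    scores = K_c @ q / np.sqrt(len(q))
    if normalization == "softmax":
        e = np.exp(scores - scores.max())
        return e / e.sum()
    return sparsemax(scores)
```

Both normalizations return non-negative weights summing to one; sparsemax sets many of them exactly to zero, which is what makes the number of prototypes directly controllable.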
{\bf Decision making:}~
The final decision block simply consists of a linear mapping from a convex combination of values that results in the output $\hat{y_i}$. Consider the convex combination of value embeddings, parameterized by $\alpha$:
\begin{equation}
\hat{y_i} (\alpha) = g\left ( (1 - \alpha) \mathbf{v_i} + \alpha \sum\nolimits_{j=1}^D p_{i,j} \mathbf{v^{(c)}_j} \right ).
\end{equation}
For $\alpha=0$, $L\left (y_i, \hat{y_i} (0) \right ) $ is the conventional supervised learning loss (ignoring the relational attention mechanism) that can only impose principles (i) and (ii), but not the principles (iii)-(vi). A high accuracy for $\hat{y_i} (0)$ merely indicates that the value embedding space represents each input sample accurately.
For $\alpha=1$, $L\left (y_i, \hat{y_i} (1) \right )$ encourages principles (i) and (iii)-(iv), but not principles (ii) and (vi).\footnote{For example, by simply assigning non-zero weights to another predetermined class, a prototypical learning method could obtain perfect accuracy, but the assignment of the predetermined class would be arbitrary.} A high accuracy for $\hat{y_i} (1)$ indicates that the linear combination of value embeddings accurately maps to the decision. For (vi), we propose that there should be a similar output mapping for the input and the prototypes; to encourage high accuracy for both $\hat{y_i} (0)$ and $\hat{y_i} (1)$, a loss term that mixes $L\left (y_i, \hat{y_i} (0) \right )$ and $L\left (y_i, \hat{y_i} (1) \right )$, or guidance with an intermediate term such as $\hat{y_i} (0.5)$, is required.
Lastly, when $\alpha \leq 0.5$, we obtain the condition that the input sample itself has the largest contribution to the linear combination. Intuitively, the sample itself should be more relevant to the output than the other samples, so principles (iii) and (iv) are encouraged.
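Putting the three blocks together, a single forward pass can be sketched as follows (NumPy, with random matrices standing in for the trained encoder head and decision layer; all names and dimensions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def fc_relu(h, W):
    # single fully-connected layer with ReLU, as used for the K/Q/V mappings
    return np.maximum(h @ W, 0.0)

def forward(h_in, h_cand, Wq, Wk, Wv, Wg, alpha):
    # h_in: (B, d_enc) input features; h_cand: (D, d_enc) candidate features
    Q = fc_relu(h_in, Wq)                            # (B, d_att) queries
    K = fc_relu(h_cand, Wk)                          # (D, d_att) keys
    V, Vc = fc_relu(h_in, Wv), fc_relu(h_cand, Wv)   # shared value mapping
    scores = Q @ K.T / np.sqrt(Q.shape[1])           # (B, D) alignment
    P = np.exp(scores - scores.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)                # softmax prototype weights
    mix = (1 - alpha) * V + alpha * (P @ Vc)         # convex combination
    return mix @ Wg, P                               # logits and weights

d_enc, d_att, d_out, n_cls = 8, 4, 6, 3
Wq, Wk, Wv = (rng.normal(size=(d_enc, d)) for d in (d_att, d_att, d_out))
Wg = rng.normal(size=(d_out, n_cls))
logits, P = forward(rng.normal(size=(2, d_enc)), rng.normal(size=(5, d_enc)),
                    Wq, Wk, Wv, Wg, alpha=0.5)
```

Note that the value mapping $Wv$ is shared between inputs and candidates, which is what makes the linear combination of value embeddings meaningful.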
We propose and compare different training objective functions in Table \ref{table:loss_functions}. We observe that the last four are all viable options as the training objective, with similar performance. We choose the last one for the rest of the experiments, as in some cases slightly better prototypes are observed qualitatively (see Sect. 5.2 for further discussion).
\begin{table}[!htbp]
\centering
\caption{Ablation study. Impact of various training losses on ProtoAttend with softmax attention for Fashion-MNIST. $1 \leq i \leq N_t$ is the training iteration index and $N_t$ is the total number of iterations.}
\begin{tabular}{|C{5 cm}|C{1.3 cm}|C{1.3 cm}|C{1.9 cm}|C{1.9 cm}|}
\hline
\multirow{2}{*}{Training objective function} & Acc. \% for $\hat{y_i} (0)$ & Acc. \% for $\hat{y_i} (1)$ & $-\underset{\hat{y} = y}{E} \{C\}$ & $-\underset{\hat{y} \neq y}{E} \{C\}$ \\ \hline
$L\left (y_i, \hat{y_i} (0) \right )$ & 94.28 & 13.13 & 0.029 & 0.194\\ \hline
$L\left (y_i, \hat{y_i} (1) \right )$ & 10.92 & 94.21 & 0.103 & 0.002\\ \hline
$L\left (y_i, \hat{y_i} (0.5) \right )$ & 94.01 & 94.25 & 0.927 & 0.049\\ \hline
$L\left (y_i, \hat{y_i} (0) \right ) $ + $L\left (y_i, \hat{y_i} (1) \right )$ & 94.37 & 94.38 & 0.931 & 0.047 \\ \hline
$ (1 - i/N_t) \cdot L\left (y_i, \hat{y_i} (0) \right ) $ + $ (i/N_t) \cdot L\left (y_i, \hat{y_i} (1) \right )$ & 94.14 & 94.18 & 0.927 & 0.049 \\ \hline
$L\left (y_i, \hat{y_i} (0) \right ) $ + $L\left (y_i, \hat{y_i} (1) \right ) $ + $L\left (y_i, \hat{y_i} (0.5) \right )$ & 94.37 & 94.45 & 0.928 & 0.047\\
\hline
\end{tabular}
\label{table:loss_functions}
\end{table}
To control the sparsity of the weights (beyond the choice of the attention operation), we also propose a sparsity regularization term with a coefficient $\lambda_{sparse}$ in the form of entropy, $L_{sparse}(\mathbf{p}) = - 1/B \sum\nolimits_{i=1}^{B} \sum\nolimits_{j=1}^{D} p_{i,j} \log({p_{i,j}} + \epsilon)$, where $\epsilon$ is a small number for numerical stability. $L_{sparse}(\mathbf{p})$ is minimized when each row of $\mathbf{p}$ has only one non-zero value.
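A direct implementation of this regularizer is straightforward; the sketch below assumes $\mathbf{p}$ is a $B \times D$ weight matrix whose rows sum to one.

```python
import numpy as np

def sparsity_loss(P, eps=1e-10):
    # entropy of the prototype weights averaged over the batch;
    # it approaches zero when each row of P has a single non-zero entry
    return -np.mean(np.sum(P * np.log(P + eps), axis=1))
```

For example, one-hot rows give (near-)zero loss, while uniform rows over $D$ candidates give the maximal value $\log D$.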
\subsection{Confidence scoring using prototypes}
\begin{figure*}[!htbp]
\centering
\includegraphics[width=0.49\textwidth]{reliability_diagram}
\caption{Impact of confidence on ProtoAttend accuracy. Reliability diagram for Fashion-MNIST, as in \citep{deepKNN}. Bars (left axis) indicate the mean accuracy of predictions binned by confidence; the red line (right axis) shows the number of samples across bins.}
\label{fig:reliability}
\end{figure*}
ProtoAttend provides a linear decomposition (via value embeddings) of the decision into prototypes that have known labels. Ideally, the labels of the prototypes should all be the same as the label of the input. When the prototypes with high weights belong to the same class, the model should be more confident and a correct classification result is expected, whereas in cases of disagreement between prototype labels, the model should be less confident and the likelihood of a wrong prediction is higher. With the motivation of separating correct from incorrect decisions via its value, we propose a confidence score based on the agreement between the prototypes:
\begin{equation}
C_i = \sum_{j=1}^{D} p_{i,j} \cdot \textrm{I}(y_j^{(c)} = \hat{y_i}),
\end{equation}
where $I( )$ is the indicator function.
Table \ref{table:loss_functions} shows the significant difference in the average confidence metric between correct and incorrect classification cases on the test dataset, as desired. In Fig. \ref{fig:reliability}, the impact of confidence on accuracy is further analyzed with a reliability diagram, as in \citep{deepKNN}. When test samples are binned according to their confidence, the bins with higher confidence yield much higher accuracy. There are few samples in the bins with lower confidence, and those tend to be the incorrect classification cases. In Section \ref{confidence_controlled}, the efficacy of the confidence score in separating correct from incorrect classifications is evaluated in the confidence-controlled prediction setting, demonstrating how much the prediction accuracy can be improved by refraining from predicting on a small number of low-confidence samples at test time.
To further encourage confidence during training, we also consider a regularization term $L_{conf}(\mathbf{p}) = - 1/B \sum\nolimits_{i=1}^{B} \sum\nolimits_{j=1}^{D} p_{i,j} \cdot \textrm{I}(y_j^{(c)} = y_i)$ with a coefficient $\lambda_{conf}$. $L_{conf}$ is minimized when all prototypes with $p_{i,j}>0$ are from the same ground-truth class as the output $y_i$.\footnote{Note that the gradient of this regularization term with respect to $p_{i,j}$ is either 0 or a constant, and it is often insufficient to train the model from scratch by itself. However, it is observed to provide further improvements in some cases.}
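Both the confidence score $C_i$ and the regularizer $L_{conf}$ reduce to weighted label agreements; a minimal NumPy sketch (illustrative names):

```python
import numpy as np

def confidence(P, y_cand, y_pred):
    # C_i: total prototype weight whose candidate label matches the prediction
    return np.array([P[i] @ (y_cand == y_pred[i]) for i in range(len(y_pred))])

def confidence_loss(P, y_cand, y_true):
    # L_conf: the same agreement, but against the ground-truth labels, negated
    return -np.mean(confidence(P, y_cand, y_true))
```

Each row of `P` holds the prototype weights of one sample, so `confidence` returns one score per sample in the batch.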
\section{Experiments}
\subsection{Setup}
We demonstrate the results of ProtoAttend for image, text and tabular data classification problems with different encoder architectures (see Supplementary Material for details).
Outputs of the encoders are mapped to queries, keys and values using a fully-connected layer followed by ReLU. For values, layer normalization \citep{layernorm} is employed for more stable training.
A fully-connected layer is used in the decision making block, yielding logits for determining the estimated class. Softmax cross entropy loss is used as $L()$.
Adam optimization algorithm is employed \citep{adam} with exponential learning rate decay (with parameters optimized on a validation set). For image encoding, unless specified, we use the standard ResNet model \citep{ResNet}. For text encoding, we use the very deep convolutional neural network (VDCNN) \citep{VDCNN} model, inputting sequence of raw characters. For tabular data encoding, we use an LSTM model \citep{lstm}, which inputs the feature embeddings at every timestep. See Supplementary Material for implementation details, additional results and discussions.
\subsection{Sparse explanations of decisions}
\begin{table}[htbp]
\centering
\caption{ProtoAttend achieves interpretability without significant degradation in performance. Accuracy and median number of prototypes to add up to 50\%, 90\% and 95\% of the decision, quantified with prototype weights.}
\begin{tabular}{|C{3 cm}|C{5 cm}|C{0.9 cm}|C{0.72 cm}|C{0.72 cm}|C{0.72 cm}|}
\cline{1-6}
\multirow{2}{*}{Dataset} & \multirow{2}{*}{Method} & \multirow{2}{*}{Acc. \% } & \multicolumn{3}{|c|}{No. of prototypes} \\ \cline{4-6}
& & & 50 \% & 90 \% & 95 \% \\ \cline{1-6}
\multirow{3}{*}{MNIST} & Baseline enc. & 99.70 & \multicolumn{3}{|c|}{-} \\ \cline{2-6}
& Softmax attn. & 99.66 & 365 & 1324 & 1648 \\ \cline{2-6}
& Sparsemax attn. & 99.69 & 2 & 4 & 5 \\ \cline{1-6}
\multirow{4}{*}{Fashion-MNIST} & Baseline enc. & 94.74 & \multicolumn{3}{|c|}{-} \\ \cline{2-6}
& Softmax attn. & 94.42 & 712 & 2320 & 2702 \\ \cline{2-6}
& Sparsemax attn. & 94.42 & 4 & 10 & 11 \\ \cline{2-6}
& Sparsemax attn. + sparsity reg. & 94.47 & 1 & 2 & 2 \\\cline{1-6}
\multirow{4}{*}{CIFAR-10} & Baseline enc. & 91.97 & \multicolumn{3}{|c|}{-} \\ \cline{2-6}
& Softmax attn. & 91.69 & 317 & 1453 & 1898 \\ \cline{2-6}
& Sparsemax attn. & 91.44 & 5 & 14 & 16 \\ \cline{2-6}
& Sparsemax attn. + sparsity reg. & 91.26 & 2 & 3 & 4 \\\cline{1-6}
\multirow{4}{*}{DBPedia} & Baseline enc. & 98.25 & \multicolumn{3}{|c|}{-} \\ \cline{2-6}
& Softmax attn. & 98.20 & 63 & 190 & 225 \\ \cline{2-6}
& Sparsemax attn. & 97.74 & 2 & 4 & 4 \\ \cline{1-6}
\multirow{4}{*}{Income} & Baseline enc. & 85.68 & \multicolumn{3}{|c|}{-} \\ \cline{2-6}
& Softmax attn. & 85.64 & 2263 & 9610 & 12419 \\ \cline{2-6}
& Sparsemax attn. & 85.58 & 20 & 57 & 67 \\ \cline{2-6}
& Sparsemax attn. + sparsity reg. & 85.41 & 3 & 6 & 7 \\\cline{1-6}
\end{tabular}
\label{table:prototype_count_acc}
\end{table}
We foremost demonstrate that our inherently-interpretable model design does not cause significant degradation in performance.
Table \ref{table:prototype_count_acc} shows the accuracy and the median number of prototypes required to add up to a particular portion of the decision\footnote{E.g. if the prototype weights are [0.2, 0.15, 0.15, 0.25, 0.1, 0.05, 0.08, 0.02], then 3 prototypes are required for 50\% of the decision, 6 for 90\% and 7 for 95\%.} for different prototypical learning cases.
In all cases, only a very small accuracy gap is observed relative to the baseline encoder trained with conventional supervised learning. The attention normalization function and sparsity regularization are efficient mechanisms to control the sparsity -- the number of prototypes required is much lower with sparsemax attention than with softmax attention, and can be further reduced with sparsity regularization (see Supplementary Material for details).
With a small decrease in performance, the number of prototypes can be reduced to just a handful.\footnote{We observe that excessively high sparsity (e.g. yielding 1-2 prototypes in most cases) may sometimes decrease the quality of prototypes due to overfitting to discriminative features that are less perceptually meaningful.} There are differences between datasets, as intuitively expected from the discrepancy in the degree of similarity between intra-class samples.
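The prototype counts in Table \ref{table:prototype_count_acc} can be computed per sample with a simple greedy accumulation over the sorted weights; a minimal sketch:

```python
def n_prototypes(weights, fraction):
    # smallest number of largest-weight prototypes whose total mass
    # reaches the requested fraction of the decision
    total, n = 0.0, 0
    for w in sorted(weights, reverse=True):
        total, n = total + w, n + 1
        if total >= fraction:
            break
    return n
```

For example, for weights [0.2, 0.15, 0.15, 0.25, 0.1, 0.05, 0.08, 0.02], this returns 3, 6 and 7 prototypes for the 50\%, 90\% and 95\% levels, respectively.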
\begin{figure*}[htbp]
\centering
\subfigure[MNIST \& Fashion MNIST]{\includegraphics[width=0.4\textwidth]{MNIST_fashion_MNIST.pdf}}
\hspace{1cm}
\subfigure[Fruits]{\includegraphics[width=0.4\textwidth]{fruits.pdf}}
\caption{Example inputs and ProtoAttend prototypes for (a) MNIST (with sparsemax), Fashion-MNIST dataset (with sparsemax and sparsity regularization) and (b) Fruits (with sparsemax and sparsity regularization).
For MNIST \& Fashion-MNIST, prototypes typically consist of discriminative features such as the straight line shape for the digit 1, and the long heels and strips for the sandal.
For Fruits, prototypes often correspond to the same fruit captured from a very similar angle.}
\label{fig:MNIST_example}
\label{fig:fruits_example}
\end{figure*}
\begin{figure*}[!htbp]
\centering
\includegraphics[width=0.99\textwidth]{dbpedia_examples.pdf}
\caption{Example inputs and ProtoAttend prototypes for DBPedia (with sparsemax).
While classifying the inputs as athlete, prototypes have very similar sentence structure, words and concepts.}
\label{fig:dbpedia_example}
\end{figure*}
\begin{figure*}[!htbp]
\centering
\includegraphics[width=0.99\textwidth]{income.pdf}
\caption{Example inputs and ProtoAttend prototypes for Adult Census Income (with sparsemax and sparsity regularization).
For the first example, all prototypes have similar age, two share similar education level and one has the same occupation.
For the second example, three prototypes have the same occupation, all work more than 40 hours/week, and three have postgraduate education.}
\label{fig:income_example}
\end{figure*}
Figs. \ref{fig:MNIST_example}, \ref{fig:dbpedia_example} and \ref{fig:income_example} exemplify prototypes for image, text and tabular data.
In general, perceptually-similar samples are chosen as the prototypes with the largest weights.
We also compare the relevant samples found by ProtoAttend with the methods of representer point selection \citep{representerpointselection} and influence functions \citep{influence_functions} (see Supplementary Material for details) on Animals with Attributes dataset.
As shown in Fig. \ref{fig:awa_examples}, our method finds qualitatively more relevant samples. This case also exemplifies the potential of our method for integration into pre-trained models by addition of simple layers for key, query and value generation.
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.99\textwidth]{awa_comparison.pdf}
\caption{Samples found by ProtoAttend vs. representer point selection \citep{representerpointselection} and influence function \citep{influence_functions} for the two examples from \citep{representerpointselection} on Animals with Attributes dataset. See Supplementary Material for more examples.}
\label{fig:awa_examples}
\end{figure}
\subsection{Robustness to label noise}
\begin{table}[!htbp]
\caption{Label noise ratio vs. accuracy for baseline encoder, dropout method \citep{dropout_noisy_label} (optimizing the keep probability) and ProtoAttend with sparsemax attention and sparsity regularization for CIFAR-10.}
\centering
\begin{tabular}{|C{1.8 cm}|C{1.8 cm}|C{1.8 cm}|C{1.8 cm}|}
\cline{1-4}
\multirow{2}{*}{Noise level} &\multicolumn{3}{|c|}{Test accuracy \%} \\ \cline{2-4}
& Baseline & Dropout & ProtoAttend \\ \cline{1-4}
0.8 & 57.02 & 56.76 & \textbf{60.50} \\ \cline{1-4}
0.6 & 71.27 & 72.15 & \textbf{74.67} \\ \cline{1-4}
0.4 & 77.47 & 78.99 & \textbf{80.04} \\ \cline{1-4}
\end{tabular}
\label{table:label_noise}
\end{table}
As prototypical learning with sparsemax attention aims to extract decision-making information from a small subset of training samples, it can be used to improve performance when the training dataset contains noisy labels (see Table \ref{table:label_noise}). The optimal value\footnote{For a fair comparison, we re-optimize the learning rate parameters on a separate validation set.} of $\lambda_{sparse}$ increases with higher noisy label ratios, underlining the increasing importance of sparse learning.
\subsection{Confidence-controlled prediction}
\label{confidence_controlled}
\begin{figure*}[!htbp]
\centering
\subfigure[]{\includegraphics[width=0.49\textwidth]{mnist_dknn_comparison}}
\hspace{0.1cm}
\subfigure[]{\includegraphics[width=0.49\textwidth]{cifar10_conf_acc}}
\hspace{0.1cm}
\caption{Confidence-controlled prediction.
(a) Accuracy vs. ratio of samples for MNIST. We compare dkNN~\citep{deepKNN} and prototypical learning (with softmax attention and $\lambda_{conf}$=0.1) using the same network architecture from ~\citep{deepKNN} without augmentation. (b) Accuracy vs. ratio of samples for CIFAR-10. We compare prototypical learning (with softmax attention and $\lambda_{conf}$=0.1) with trust score \citep{trust_classifier} and deep ensemble \citep{deep_ensemble} methods for the same baseline encoder network architecture.}
\label{fig:conf_controlled_prediction_mnist}
\label{fig:conf_controlled_prediction_cifar}
\end{figure*}
By varying the threshold for the confidence metric, a trade-off can be obtained for what ratio of the test samples that the model makes a prediction for vs. the overall accuracy it obtains on the samples above that threshold.\footnote{Note that this trade-off is often more meaningful to consider rather than the metrics based on the actual value of confidence score itself, as methods may differ in how they define the confidence metric, and thus yield very different ranges and distributions for it.}
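The trade-off curve is obtained by sweeping a threshold over the confidence scores; a minimal sketch (illustrative names):

```python
def confidence_controlled(confidences, correct, threshold):
    # keep only predictions whose confidence reaches the threshold;
    # return (fraction of samples kept, accuracy on the kept samples)
    kept = [c for conf, c in zip(confidences, correct) if conf >= threshold]
    coverage = len(kept) / len(correct)
    accuracy = sum(kept) / len(kept) if kept else float("nan")
    return coverage, accuracy
```

Raising the threshold trades coverage for accuracy, which is exactly the curve plotted in Fig. \ref{fig:conf_controlled_prediction_mnist}.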
Figs. \ref{fig:conf_controlled_prediction_mnist}(a) and \ref{fig:conf_controlled_prediction_cifar}(b) demonstrate this trade-off and compare it to alternative methods.
The sharper slope of the plots shows that our method is superior to dkNN~\citep{deepKNN} and trust score~\citep{trust_classifier}, methods based on quantifying the mismatch with nearest-neighbor samples, in terms of finding related samples.
Although the baseline accuracy is higher with the 4 ensemble networks obtained via deep ensemble \citep{deep_ensemble}, our method uses a single network, and the additional accuracy gains from refraining from uncertain predictions are similar, as shown by the similar slopes of the curves.
Overall, the baseline accuracy can be significantly improved by making fewer predictions. Compared to state-of-the-art models, our canonical method with simple and small models achieves similar accuracy by making slightly fewer predictions -- e.g. for MNIST, \citep{dropconnect} achieves a 0.21\% error rate, which our method matches by refraining from only 0.45\% of predictions using ResNet-32, and for DBpedia, \citep{revisiting_lstm} achieves 0.91\% error, which our method matches by refraining from 3\% of predictions using a 9-layer VDCNN.
In general, the smaller the number of prototypes, the smaller the trade-off space. Thus, softmax attention (which normally results in more prototypes) is better suited for confidence-controlled prediction compared to sparsemax (see Supplementary Material for more comparisons).
\subsection{Out-of-distribution samples}
Well-calibrated confidence scores at inference can be used to detect deviations from the training dataset. As the test distribution deviates from the training distribution, prototype weights tend to mismatch more and yield lower confidence scores. Fig. \ref{fig:ood_performance}(a) shows the ratio of samples above a given confidence level as the test dataset deviates. Rotations shift the distribution of test images away from the training images and cause significant degradation in confidence scores, as well as in the overall accuracy. Using test images from a different dataset degrades them even further. Next, Fig. \ref{fig:ood_performance2}(b) shows quantification of out-of-distribution detection with prototypical learning, using the method from \citep{Hendrycks1}. ProtoAttend yields an AUC of 0.838, on par with state-of-the-art approaches \citep{Hendrycks2}.
\begin{figure*}[!htbp]
\centering
\subfigure[]{\includegraphics[width=0.4\textwidth]{OOD_samples_fmnist}}
\hspace{0.1cm}
\subfigure[]{\includegraphics[width=0.44\textwidth]{OOD_svhn}}
\hspace{0.1cm}
\caption{Out-of-distribution detection.
(a) Ratio of samples above the confidence level for prototypical learning with softmax attention, trained on Fashion-MNIST and tested on the shown datasets. E.g. among the samples above a confidence of $\sim$0.9, it is far more likely that those samples come from the same distribution as the training dataset.
(b) ROC curve for in-distribution vs. out-of-distribution detection, using CIFAR-10 as in-distribution and SVHN as out-of-distribution, computed using the method from \citep{Hendrycks1} and compared to the proposed baseline in \citep{Hendrycks1}. Softmax attention and confidence regularization ($\lambda_{conf}=0.1$) are used.}
\label{fig:ood_performance}
\label{fig:ood_performance2}
\end{figure*}
\section{Computational Cost}
ProtoAttend requires only a very small increase in the number of learning parameters (merely two extra small matrices for the fully-connected layers to obtain queries and keys).
However, it does require a longer training time and has higher memory requirements to process the candidate database. At inference, keys and values for the candidate database can be computed only once and integrated into the model. Thus, the overhead merely becomes the computation of attention outputs (e.g. for CIFAR-10 model, the attention overhead at inference is less than 0.6 MFLOPs, orders of magnitude lower than the computational complexity of a ResNet model).
During training, on the other hand, both forward and backward propagation steps for the encoder need to be computed for all candidate samples, so the total time is higher (e.g. $\sim$4.45 times slower to train until convergence for CIFAR-10 compared to conventional supervised learning). The size of the candidate database is limited by the memory of the processor, so in practice we sample a different candidate database randomly from the training dataset at each iteration. For faster training, data and model parallelism approaches are straightforward to implement -- e.g., different processors can focus on different samples, or on different parts of the convolution or inner-product operations.
Further computationally-efficient approaches may involve less frequent updates for candidate queries and values.
\section{Conclusions}
We propose an attention-based prototypical learning method, ProtoAttend, and demonstrate its usefulness for a wide range of problems on image, text and tabular data.
By adding a relational attention mechanism to an encoder, prototypical learning enables novel capabilities.
With sparsemax attention, it can base learning on a few relevant samples that can be returned at inference for interpretability, and it also improves robustness to label noise.
With softmax attention, it enables confidence-controlled prediction that can outperform state-of-the-art results with simple architectures by making slightly fewer predictions, and it enables detecting deviations from the training data.
All these capabilities are achieved without sacrificing overall accuracy of the base model.
\section{Acknowledgements}
Discussions with Zizhao Zhang, Chih-Kuan Yeh, Nicolas Papernot, Ryan Takasugi, Andrei Kouznetsov, and Andrew Moore are gratefully acknowledged.
\newpage
\bibliographystyle{iclr2020_conference}
The field of causal inference has developed massively in recent decades. Most of this literature concerns central quantities such as expectations, but certain causal mechanisms manifest themselves only in rare events and or may simplify in distribution tails. Standard methods of causal inference are ill-suited for such situations, and recent work has begun to link causality and extreme value theory. Examples are \citet{Gissibl.Kluppelberg:2018}, who define recursive max-linear models on directed acyclic graphs, \cite{Kluppelberg21}, who propose a scaling technique to determine the causal order of the variables in such graphs, \citet{kiriliouk}, who use multivariate generalized Pareto distributions to study probabilities of necessary and sufficient causation as defined in the counterfactual theory of Pearl, and \citet{mhalla2018}, who construct a causal inference method for tail quantities relying on Kolmogorov complexity of extreme conditional quantiles. See surveys by \cite{naveau2020} on extreme event attribution and by \cite{Engelke2020} on the detection and modeling of sparse patterns in extremes.
Our work stems from that of \citet{Gnecco2019}, who propose an estimator of the causal tail coefficient and an algorithm that, under mild conditions, consistently retrieves a causal order on an underlying graph even in the presence of hidden confounders. Such an order helps to exclude some causal structures, but does not provide evidence for the existence of a specific structure, as in general a given order is causal for several possible graphs; in particular, all orders are causal for the empty graph corresponding to absence of causality.
Although it is asymptotically invariant to hidden confounders, this estimator can suffer from confounding in finite samples when inference on the direct relationship between two variables is needed, particularly when the confounding effects are strong or when the confounders have heavier tails than the two variables of interest.
This paper addresses a central challenge in causal inference: the presence of confounders. It is often assumed that all the relevant variables are observed and can be included in the model, but typically this cannot be ensured. The available variables are often subject to external influences, observed or unobserved, that affect the variables of interest and can make it harder or even impossible to infer a correct causal relationship. Our goals are to mitigate the effect of a known confounder on an extremal causal analysis by treating it as a covariate, and to present a permutation test for direct causality between the two observed variables. Such a model enables causal discovery and inference for a greater variety of situations.
Our work was stimulated by average daily discharge data from 68 gauging stations along the Rhine and Aare catchments in Switzerland, see Figure~\ref{fd:chstations}. The data were collected by the Swiss Federal Office for the Environment (\url{hydrodaten.admin.ch}), but were provided by the authors of~\citet{Engelke2020}, with some useful preliminary insights. We focus on the causal relationship between extreme discharges, for which precipitation is an obvious confounder, and use daily precipitation data from $105$ meteorological stations, provided by the Swiss Federal Office of Meteorology and Climatology, MeteoSwiss (\url{gate.meteoswiss.ch/idaweb}). Unlike in our simulation experiments, we know neither the true tail properties of the discharges and precipitation nor the effect of the confounder.
We use precipitation as a covariate and show that it allows correct inference on the direct causal relationships between discharges for the majority of the station pairs, with at least $95\%$ estimated confidence, which was impossible without our proposed approach.
\begin{figure}[!t]
\centering
\includegraphics[width=\textwidth]{CH/CH_stations_map.png}
\caption{Topographic map of Switzerland showing the $68$ gauging stations (red dots) along the Rhine, the Aare and their tributaries. Water flows towards station $68$. Adapted from~\citet{Engelke2020}.}
\label{fd:chstations}
\end{figure}
The paper is organised as follows. Section~\ref{s:ctc} discusses the causal tail coefficient, its interpretation and its properties. Section~\ref{s:pctc} introduces a new parametric estimator for it based on generalized Pareto modelling of threshold excesses, which allows a known confounder to be used as a covariate. A simulation study in Section~\ref{s:simul} underlines the strengths and limitations of the two estimators. Section~\ref{s:test} presents a permutation test intended to detect direct causality between two heavy-tailed variables, which is also assessed via simulation. Section~\ref{s:rivers} applies the methodology to the river discharges, and Section~\ref{s:conclu} gives a brief discussion.
\section{Causal tail coefficient and its estimation}\label{s:ctc}
We first give some basic notions needed to describe the setting in which causal relationships between random variables can be recovered.
\begin{definition}\label{df:scm}
A \emph{linear structural causal model (LSCM)} over a set of random variables $X_1,\dots,X_p$ satisfies
\begin{equation*}
X_j=\sum_{k\in \pa(j)}\beta_{jk}X_k+\varepsilon_j, \quad j\in V,%
\end{equation*}
where $V:=\{1,\dots,p\}$ is a set of nodes representing the corresponding random variables, $\pa(j)\subseteq V$ is the set of parents of $j$, $\beta_{jk}\in\mathbb{R}\setminus\{0\}$ is called the \emph{causal weight} of node $k$ on node $j$, and $\varepsilon_1,\dots,\varepsilon_p$ are jointly independent noise variables. We suppose that the \emph{associated graph} $G=(V,E)$, in which the directed edge $(i,j)\in V\times V$ belongs to $E$ if and only if $i\in\pa(j)$, is a directed acyclic graph (DAG).
\end{definition}
In a DAG $G=(V,E)$, we say that $i\in V$ \emph{is an ancestor of} $j\in V$ in $G$, if there exists a directed path from $i$ to $j$. The set of the ancestors of $j$ in $G$ is denoted by $\An(j,G)$, and we define $\an(j,G):=\An(j,G)\setminus\{j\}$. In a LSCM over random variables $X_1,\dots,X_p$, with associated DAG $G=(V,E)$, we say that $X_i$ \emph{causes} $X_j$, if $i\in\an(j,G)$. We call $X_i$ a \emph{confounder} (or \emph{common cause}) of $X_j$ and $X_k$ if there exist directed paths from $i$ to $j$ and from $i$ to $k$ in $G$ that do not include $k$ and $j$, respectively. We say that there is \emph{no causal link} between $X_i$ and $X_j$ if $\An(i,G)\cap\An(j,G)=\emptyset$. For any $i,j\in V$ we let $\beta_{i\rightarrow j}$ denote the sum of the products of the causal weights along the distinct directed paths from vertex $i$ to vertex $j$; we set $\beta_{j\rightarrow j}:=1$ and $\beta_{i\rightarrow j}:=0$ if $i\notin\An(j,G)$.
Let $X_i$ and $X_j$ be random variables from a LSCM with respective distributions $F_i$ and $F_j$. The \emph{causal (upper) tail coefficient}\ of a random variable $X_i$ on another random variable $X_j$ is defined as \citep{Gnecco2019}
\begin{equation}\label{eq:ctc}
\Gamma_{ij}:=\lim_{u\to 1^-}\mathbb{E}\left\{F_j(X_j)\mid F_i(X_i)>u\right\},
\end{equation}
if the limit exists. This coefficient lies between zero and one and captures the causal influence of $X_i$ on $X_j$ in their upper tails: if $X_i$ has a linear causal effect on $X_j$, then $\Gamma_{ij}$ equals unity. The coefficient is asymmetric, as extremes of $X_j$ need not lead to extremes of $X_i$, and in that case $\Gamma_{ji}$ will be appreciably smaller than $\Gamma_{ij}$. As $\Gamma_{ij}$ only depends on the rescaled margins of the variables, it is invariant to monotone increasing marginal transformations.
If both tails are of interest, the causal tail coefficient can be generalized to capture the causal effects in both directions, by considering the \emph{symmetric causal tail coefficient}\ of $X_i$ on $X_j$, i.e.,
\begin{equation*}
\Psi_{ij}:=\lim_{u\to 1^-}\mathbb{E}\left[\rho\left\{F_j(X_j)\right\}\mid\rho\left\{F_i(X_i)\right\}>u\right]
\end{equation*}
if the limit exists, where $\rho : x\mapsto \lvert 2x-1\rvert$.
As $F_i(X_i)\sim {\rm Unif}(0,1)$,
\begin{equation*}
\Psi_{ij}=\underbrace{\lim_{u\to 1^-}\dfrac{1}{2}\mathbb{E}\left[\rho\left\{F_j(X_j)\right\}\mid F_i(X_i)>u\right]}_{=:\Psi_{ij}^+} + \underbrace{\lim_{u\to 0^+}\dfrac{1}{2}\mathbb{E}\left[\rho\left\{F_j(X_j)\right\}\mid F_i(X_i)<u\right]}_{=:\Psi_{ij}^-}.
\end{equation*}
The interpretation and properties of $\Psi_{ij}$ are similar to those of $\Gamma_{ij}$. The symmetric version captures the causal influence of $X_i$ on $X_j$ in both of their tails.
For simplicity we focus on $\Gamma_{ij}$ in this paper, though all of our results and methods can be generalized to both tails by considering $\Psi_{ij}$ instead, if the assumptions for the upper tails are also satisfied in the lower tails of the variables considered.
Before stating the theorem that describes how the underlying causal relationships in a set of random variables can be recovered, we define the concept of regular variation.
\begin{definition}\label{df:rvf}
A positive measurable function $f$ is said to be \emph{regularly varying} with index $\alpha\in\mathbb{R}$, written $f\in\RV_\alpha$, if for all $c>0$, $\lim_{x\to\infty}f(cx)/f(x)=c^\alpha$.
If $f\in\RV_0$, then $f$ is said to be \emph{slowly varying}.
\end{definition}
\begin{definition}\label{df:rvx}
The random variable $X_j$ is said to be \emph{regularly varying} with index $\alpha\in\mathbb{R}$, if, for some $\ell\in\RV_0$, $\mathbb{P}(X_j>x)\sim\ell(x)x^{-\alpha}$ as $x\to\infty$.
\end{definition}
Independent regularly varying random variables $X_1,\dots,X_p$ are said to have \emph{comparable upper tails} if there exist $c_1,\dots,c_p>0$, $\alpha>0$ and $\ell\in\RV_0$ such that, for each $j\in\{1,\dots,p\}$, $\mathbb{P}(X_j>x)\sim c_j\ell(x)x^{-\alpha}$ as $x\to\infty$.
The following theorem describes how the causal relationships underlying a set of random variables can be recovered from their causal tail coefficients.
\begin{theorem}[\citeauthor{Gnecco2019}, \citeyear{Gnecco2019}]\label{thm:ctc1val}
Let $X_1,\dots,X_p$ be random variables from a LSCM, with associated directed acyclic graph $G=(V,E)$ and suppose that%
\begin{enumerate}
\item[(a)] the coefficients $\beta_{jk}$ of the linear structural causal relationship $X_j=\sum_{k\in \pa(j,G)}\beta_{jk}X_k+\varepsilon_j$ are strictly positive for all $j\in V$ and $k\in\pa(j,G)$, and %
\item[(b)] the real-valued noise variables $\varepsilon_1,\dots,\varepsilon_p$ are independent and regularly varying with comparable upper tails.
\end{enumerate}
Then the values of $\Gamma_{ij}$ and $\Gamma_{ji}$ allow one to distinguish between the different possible causal relationships between $X_i$ and $X_j$ summarized in Table~\ref{t:ctc1}.
\begin{table}[!t]
\caption{Equivalence of the possible values of $\Gamma_{ij}$ and $\Gamma_{ji}$ with the underlying causal relationship between $X_i$ and $X_j$.}%
\label{t:ctc1}
\centering
\begin{tabular}{@{}|l|ccc|@{}}%
\hline
& $\Gamma_{ji}=1$ & $\Gamma_{ji}\in(1/2,1)$ & $\Gamma_{ji}=1/2$ \\
\hline
$\Gamma_{ij}=1$ & & $X_i$ causes $X_j$ & \\
$\Gamma_{ij}\in(1/2,1)$ & $X_j$ causes $X_i$ & common cause only & \\
$\Gamma_{ij}=1/2$ & & & no causal link \\
\hline
\end{tabular}
\end{table}
\end{theorem}
Under the theorem's assumptions, the blank entries in Table~\ref{t:ctc1} cannot occur. Theorem~\ref{thm:ctc1val} is generalizable to the $\Psi_{ij}$ variant of the coefficient and possibly negative $\beta_{ij}$ values if the assumptions are also satisfied in the lower tails of the variables.
\citet{Gnecco2019} show that under the setup and assumptions of Theorem~\ref{thm:ctc1val}, the causal tail coefficient~\eqref{eq:ctc} for any distinct $i,j\in V$, and with $A_{ij}:=\An(i,G)\cap\An(j,G)$, is
\begin{equation}\label{eq:lemctc1}
\Gamma_{ij}=\dfrac{1}{2}+\dfrac{1}{2}\dfrac{\sum_{h\in A_{ij}}\beta_{h\rightarrow i}^\alpha}{\sum_{h\in\An(i,G)}\beta_{h\rightarrow i}^\alpha}.
\end{equation}
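To make this closed form concrete, the following minimal sketch (our own illustration; the function names and the toy graph are not from the original analysis) enumerates the directed paths of a small DAG to obtain the path weights $\beta_{h\rightarrow i}$, and then evaluates~\eqref{eq:lemctc1} for a graph in which $H$ confounds $X_1$ and $X_2$ and $X_1$ causes $X_2$, with all causal weights equal to one and $\alpha=3$.

```python
def path_weight(edges, i, j):
    # beta_{i->j}: sum over all directed paths from i to j of the product of
    # the causal weights; beta_{j->j} = 1 and beta_{i->j} = 0 if no path exists.
    if i == j:
        return 1.0
    return sum(w * path_weight(edges, dst, j)
               for (src, dst), w in edges.items() if src == i)

def gamma_coef(edges, nodes, i, j, alpha):
    # Closed-form causal tail coefficient:
    # Gamma_ij = 1/2 + (1/2) * sum_{h in An(i) & An(j)} beta_{h->i}^alpha
    #                        / sum_{h in An(i)} beta_{h->i}^alpha.
    an = lambda v: {u for u in nodes if path_weight(edges, u, v) > 0}
    num = sum(path_weight(edges, h, i) ** alpha for h in an(i) & an(j))
    den = sum(path_weight(edges, h, i) ** alpha for h in an(i))
    return 0.5 + 0.5 * num / den

# H confounds X1 and X2, and X1 causes X2 (all causal weights equal to one).
edges = {("H", "X1"): 1.0, ("H", "X2"): 1.0, ("X1", "X2"): 1.0}
nodes = ["H", "X1", "X2"]
print(gamma_coef(edges, nodes, "X1", "X2", alpha=3))  # 1.0, as X1 causes X2
print(gamma_coef(edges, nodes, "X2", "X1", alpha=3))  # 0.95 = 1/2 + (1/2)*9/10
```

Here $\beta_{h\rightarrow 2}=2$, since $H$ reaches $X_2$ both directly and through $X_1$, so $\Gamma_{21}=1/2+(2^3+1)/\{2(2^3+1+1)\}=0.95$, matching the output.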
If $\left\{(X_{i,1},X_{i,2})\right\}_{i=1}^n$ are independent replicates of $(X_1,X_2)$, with the random variables $X_i$ and $X_j$ from the LSCM, then the \emph{non-parametric estimator} of $\Gamma_{12}$ is defined to be
\begin{equation}\label{eq:npctc1}
\hat{\Gamma}_{12}=\dfrac{1}{k}\sum_{i=1}^n\hat{F}_2(X_{i,2})\mathds{1}(X_{i,1}>X_{(n-k),1})
\end{equation}
for some $k\in\{1,\ldots, n-1\}$, where $\mathds{1}(\cdot)$ denotes the indicator function, $X_{(h),1}$ denotes the $h^\text{th}$ order statistic and $\hat{F_j}$ is the empirical cumulative distribution function of $X_j$, i.e.,
\begin{equation*}
\hat{F}_j(x)=\dfrac{1}{n}\sum_{i=1}^n\mathds{1}(X_{i,j}\leq x),\quad j=1,2.
\end{equation*}
This estimator is the empirical counterpart to~\eqref{eq:ctc}, as $X_{(h),1}=\hat{F}_1^\leftarrow(h/n)$ is a quantile of the corresponding empirical distribution. The value of $k$ controls the number of data pairs in the upper tail of $X_1$ that contribute to the estimator. Under the assumptions of Theorem~\ref{thm:ctc1val} and a ``very mild assumption that is satisfied by most univariate regularly varying distributions of interest'', estimator~\eqref{eq:npctc1} is consistent as $n\to\infty$, for a choice of $k$ such that $k\to\infty$ and $k/n\to 0$ \citep{Gnecco2019}.
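A minimal implementation of the non-parametric estimator~\eqref{eq:npctc1} can be written as below (our own sketch, not the original code; the simulated LSCM, the seed and the choice $k=2\lfloor n^{0.4}\rfloor$ are illustrative assumptions). With $X_2=X_1+\varepsilon_2$, unit causal weight and Pareto-type noise of index $\alpha=3$, the closed form gives $\Gamma_{12}=1$ and $\Gamma_{21}=3/4$.

```python
import numpy as np

def gamma_hat(x1, x2, k):
    # Non-parametric causal tail coefficient: average of the empirical CDF of
    # X2 over the k observations with the largest X1 values.
    n = len(x1)
    f2 = (np.argsort(np.argsort(x2)) + 1) / n   # empirical CDF at the data
    return f2[np.argsort(x1)[-k:]].mean()

# Toy LSCM: X1 causes X2 with unit weight; regularly varying noise, alpha = 3.
rng = np.random.default_rng(1)
n = 100_000
k = 2 * int(n ** 0.4)
x1 = rng.pareto(3, n)
x2 = x1 + rng.pareto(3, n)
print(gamma_hat(x1, x2, k))  # close to 1
print(gamma_hat(x2, x1, k))  # close to 0.75
```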
A strength of the causal tail coefficient approach is its asymptotic robustness to hidden confounders. A frequent assumption in causal inference is that all the relevant variables are observed, but this usually cannot be verified. Theorem~\ref{thm:ctc1val} holds even when some variables in the underlying LSCM are unobserved.
The capacity to deal with confounders both when studying the causal relationship between two variables and when retrieving a causal order is not generally shared by other approaches in causal inference, as argued by~\citet{Gnecco2019}, but this valuable property has the limitation that the unobserved variables must satisfy the regular variation assumption of Theorem~\ref{thm:ctc1val}, which cannot be checked in practice and may be unrealistic. In our motivating setting, for example, the tail of the precipitation variable, the confounder, may not behave like the tails of the river discharges. Moreover, it may be difficult to distinguish between causal cases using empirical estimates. In particular, an increase in the strength of the causal effect of a common confounder of $X_i$ and $X_j$ will increase $\Gamma_{ij}$, making it harder to tell whether a high value of $\hat{\Gamma}_{ij}$ indicates that $\Gamma_{ij}=1$ or that $\Gamma_{ij}\lesssim 1$. This is illustrated in the simulations in Section~\ref{s:simul}.
\section{Parametric Tail Causality and Confounder Dependence}\label{s:pctc}
The causal tail coefficient~\eqref{eq:ctc} is useful when trying to understand a causal relationship between two variables, as it allows inference on the underlying extremal causal structure and has attractive asymptotic properties. However, the non-parametric estimator~\eqref{eq:npctc1} is inappropriate when the underlying assumptions are not met or when a hidden confounder has a strong influence.
This problem generally occurs when the confounder is different in nature from the variables of interest, as its tail is then rarely comparable to theirs. In particular, the tail of the confounder may be heavier, giving it an even larger unwanted effect on the coefficients.
Thus a useful modification would be to allow conditioning on the value of a known confounder in order to remove, or at least reduce, its effect on the estimated causal tail coefficient. In this section, we propose an approach to this through a peaks-over-threshold method. Another useful modification would be a reliable statistical test for direct causality. We propose and discuss such a test, based on a permutation resampling method, in Section~\ref{s:test}.
\subsection{Generalized Pareto Causal Tail Coefficient}\label{ss:gpdctc}
We first model the tails of the variables using the generalized Pareto distribution (GPD) to model the excesses~\citep[Chapter~4]{IExtr}. This has scale and shape parameters that are typically estimated using maximum likelihood and that can depend on other variables~\citep{davison1990}. For $j=1,2$, and under mild conditions on $X_j$, for a threshold $u_j$ large enough, the threshold excesses follow approximately a \emph{generalized Pareto distribution}. That is,
\begin{equation}\label{eq:gpd}
\mathbb{P}(X_j-u_j\leq x\mid X_j>u_j)\approx G(x;\sigma_j,\xi_j)=1-\left(1+\xi_j x/\sigma_j\right)^{-1/\xi_j}_+,
\quad x>0,
\end{equation}
with a {scale} parameter $\sigma_j>0$ and a {shape} parameter $\xi_j\in\mathbb{R}$ that determines $G$:
\begin{itemize}
\item $\xi_j =0$ corresponds to light-tailed distributions, and then $X_j$ lies in the maximum domain of attraction of the Gumbel distribution;
\item $\xi_j >0$ corresponds to heavy-tailed distributions, and then $X_j$ lies in the maximum domain of attraction of the Fr\'echet distribution; and
\item $\xi_j <0$ corresponds to distributions with bounded upper tails, and then $X_j$ lies in the maximum domain of attraction of the (reverse) Weibull distribution.
\end{itemize}
\indent
Any random variable satisfying the assumptions of Theorem~\ref{thm:ctc1val} satisfies~\eqref{eq:gpd}, as a regularly varying random variable with index $\alpha >0$ lies in the Fr\'echet maximum domain of attraction. If the threshold $u_j$ is chosen to be the $q$ quantile of $X_j$ for some $q\in (0,1)$, then we can write
\begin{equation*}
\begin{split}
\mathbb{P}(X_j\leq x) &\approx\left\{G(x-u_j;\sigma_j,\xi_j)(1-q)+q\right\}\mathds{1}(x>u_j) + \mathbb{P}(X_j\leq x)\mathds{1}(x\leq u_j),
\end{split}
\end{equation*}
and using the empirical distribution $\hat F(x)$ to estimate $\mathbb{P}(X_j\leq x)$ and maximum likelihood estimation using the excesses of $u_j$ to obtain $\hat{\sigma}_j$ and $\hat{\xi}_j$ yields an estimator of the distribution function $F_j(x)$ of $X_j$, i.e.,
\begin{equation*}
\hat{F}_j(x;\hat{\sigma}_j,\hat{\xi}_j)=\hat{F}(x)\mathds{1}(x\leq u_j)+\left\{G(x-u_j;\hat{\sigma}_j,\hat{\xi}_j)(1-q)+q\right\}\mathds{1}(x>u_j).
\end{equation*}
Using hybrid estimators for $F_1$ and $F_2$ for an integer $k\in\{1,\ldots, n-1\}$ yields the parametric \emph{GPD causal tail coefficient} estimator for $\Gamma_{12}$,
\begin{equation}\label{eq:gpdctc}
\hat{\Gamma}_{12}^{\rm GPD}=\dfrac{1}{k_1}\sum_{i=1}^n \hat{F}_2(X_{i,2};\hat{\sigma}_2,\hat{\xi}_2)\mathds{1}\left\{\hat{F}_1(X_{i,1};\hat{\sigma}_1,\hat{\xi}_1)>1-k/n\right\},
\end{equation}
where $k_1:=\lvert\{i\in[n] : \; \hat{F}_1(X_{i,1};\hat{\sigma}_1,\hat{\xi}_1)>1-k/n \}\rvert$. Unlike with the non-parametric estimator~\eqref{eq:npctc1}, the number of data pairs $k_1$ used in~\eqref{eq:gpdctc} may not equal $k$, as it depends on the fit of $\hat{F}_1(X_{i,1};\hat{\sigma}_1,\hat{\xi}_1)$.%
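The construction of~\eqref{eq:gpdctc} can be sketched as follows (our own simplified illustration, not the original code; it fixes the threshold at the $0.95$ quantile, uses the maximum likelihood GPD fit from \texttt{scipy}, and simulates an LSCM in which $X_1$ causes $X_2$ with Pareto-type noise).

```python
import numpy as np
from scipy.stats import genpareto

def hybrid_cdf(x, data, q=0.95):
    # Semi-parametric distribution estimate: empirical CDF below the q-quantile
    # u, and the GPD tail q + (1-q) * G(x - u; sigma, xi) above it, with
    # (sigma, xi) fitted by maximum likelihood to the threshold excesses.
    u = np.quantile(data, q)
    xi, _, sigma = genpareto.fit(data[data > u] - u, floc=0)
    emp = np.searchsorted(np.sort(data), x, side="right") / len(data)
    tail = q + (1 - q) * genpareto.cdf(x - u, xi, scale=sigma)
    return np.where(x <= u, emp, tail)

def gamma_hat_gpd(x1, x2, k, q=0.95):
    # GPD causal tail coefficient: average of the hybrid F2 over the k1
    # observations whose hybrid F1 value exceeds 1 - k/n.
    n = len(x1)
    f1, f2 = hybrid_cdf(x1, x1, q), hybrid_cdf(x2, x2, q)
    return f2[f1 > 1 - k / n].mean()

rng = np.random.default_rng(2)
n = 50_000
x1 = rng.pareto(3, n)
x2 = x1 + rng.pareto(3, n)    # X1 causes X2 with unit weight
k = 2 * int(n ** 0.4)
print(gamma_hat_gpd(x1, x2, k))  # close to 1
print(gamma_hat_gpd(x2, x1, k))  # close to 0.75
```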
The GPD model can be reparametrized to allow dependence on time or on another variable of interest.
More precisely, the model's parameters can be rewritten in the form $\theta (t)=h\{\mathbf{X}(t)^T\boldsymbol{\beta}\}$, where $\theta$ denotes either $\sigma$, $\xi$ or both, $h$ is an inverse link function, $\boldsymbol{\beta}$ is a vector of parameters and $\mathbf{X}(t)$ contains the values of explanatory variables on which the model might depend~\citep[Chapter~6]{IExtr}.
We wish to reparametrise the model to reduce or remove the effect on $\Gamma_{12}$ of a potential confounder $H$ of $X_1$ and $X_2$. If $H$ appears linearly in the LSCM then under the setup in Section~\ref{s:ctc} it is straightforward to show that $H$ affects the scale parameters of the GPD model that applies to $X_1$ and $X_2$ above high thresholds, and not their shapes.%
Thus we write
\begin{equation*}
\sigma_j(i) := \sigma_j^0+\sigma_j^1 H_i, \quad i=1,\ldots,n,\; j=1,2,
\end{equation*}
where $H_i$ is the replicate of $H$ corresponding to the observations $(X_{i,1},X_{i,2})$ of $(X_1,X_2)$.
This yields, for $k\in\{1,\ldots, n-1\}$, the parametric \emph{$H$-conditional linear generalized Pareto distribution (LGPD) causal tail coefficient estimator}\ for $\Gamma_{12}$,
\begin{equation}\label{eq:lgpdctc}
\hat{\Gamma}_{1,2\mid H}^{\rm GPD}=\dfrac{1}{k_1}\sum_{i=1}^n \hat{F}_2\{X_{i,2};\hat{\sigma}_2(i),\hat{\xi}_2\}\mathds{1}\left[\hat{F}_1\{X_{i,1};\hat{\sigma}_1(i),\hat{\xi}_1\}>1-k/n\right].
\end{equation}
Estimation of $\sigma_j^0$, $\sigma_j^1$ and $\xi_j$ is performed by maximum likelihood.
In applications it is preferable to standardize the values of $H$, so that they are centered at zero and have unit variance.
\subsection{The positive linear scale issue}\label{sss:posscaleissue}
Linear modelling of the GPD scale parameter may not yield positive scale estimates $\hat{\sigma}_j(i)>0$ for each $i=1,\ldots,n$ and $j=1,2$. Using a nonlinear link function to ensure that the scale estimates are positive would conflict with the assumption of extremal linearity of the causal relationships, under which the effect of $H$ on the scale is necessarily linear. We now describe two different solutions to this problem, which we compare by simulation in Section~\ref{s:simul}.
The first solution, \emph{post-fit correction},\ replaces $\hat{\sigma}_j(i)$ in~\eqref{eq:lgpdctc} by $\max\{\hat{\sigma}_j(i),\epsilon \}$ for an arbitrarily small positive $\epsilon$. The second solution, the \emph{constrained approach},\ applies the constraints
\begin{equation}\label{eq:linbounds}
\sigma_j^0+\sigma_j^1 \min_{i=1,\ldots,n} H_i>0, \quad \sigma_j^0+\sigma_j^1 \max_{i=1,\ldots,n} H_i>0, \quad j=1,2,
\end{equation}
to the estimates when maximizing the likelihood. When the data have a known distribution, box constraints can replace the linear ones.
For example, if $X_1$, $X_2$ and $X_h=H$ have $t_\nu$ distributions, then $\sigma_j^0=u_j/\nu$ and $\sigma_j^1=-\beta_{h\rightarrow j}/\nu$. Thus, if $\sigma_j(i)=\sigma_j^0+\sigma_j^1 H_i>0$ $(j=1,2;\; i=1,\ldots,n)$, then
\begin{equation}
-\dfrac{u_j}{\nu\max_{i=1,\ldots,n}H_i}<\sigma_j^1<-\dfrac{u_j}{\nu\min_{i=1,\ldots,n}H_i},
\label{eq:studentbounds}
\end{equation}
where the lower and upper bounds are needed for positive and negative $H_i$, respectively.
\section{Simulation Study}\label{s:simul}%
Here we perform a simulation study using the Student $t$, Pareto and log-normal distributions. The first two lie in the Fr\'echet maximum domain of attraction and are regularly varying with index $\alpha=1/\xi>0$. As the Pareto distribution exactly satisfies Definition~\ref{df:rvx}, one might expect better behaviour with this than with the Student $t$ distribution. The log-normal distribution lies in the maximum domain of attraction of the Gumbel distribution and is not regularly varying, but finite samples from it can appear to be heavy-tailed.
We focus on the behaviour of the causal tail coefficient estimators~\eqref{eq:npctc1} and~\eqref{eq:lgpdctc} between two variables $X_1$ and $X_2$ in their causal configurations, as shown in Figure~\ref{f:causalstructs}. As we study the estimators of causal effects of both $X_1$ on $X_2$ and of $X_2$ on $X_1$, we generated simulations only for the four causal cases, A, B, C and D.
The LSCM causal weights $\beta_{21}$, $\beta_{1h}$ and $\beta_{2h}$ were chosen to equal unity, by default, for each existing edge in all four cases. Hence, in D, $X_2$ is caused by $X_1$ and $H$ with equal strength, even though $H$ has a second effect on $X_2$ through $X_1$.
Unless stated otherwise, each estimate is based on a random sample of $n=10^6$ triples $(X_1,X_2,H)$, with $k=2\lfloor n^{0.4}\rfloor=502$, chosen following \citet{Gnecco2019}, who found that the optimal fractional exponent of $n$ for choosing $k$ seems to lie between $0.3$ and $0.4$. The factor $2$ doubles the number of data pairs used in the estimator, thus decreasing its variability, but does not introduce much bias for such large values of $n$. One thousand independent replicate estimates were calculated for each of the four causal configurations and three distributions.
We present only the highlights of the study; the code and all the results are available from \url{github.com/opasche/ExtremalCausalModelling}.
\begin{figure}[!t]
\centering
\includegraphics[width=0.8\textwidth]{other/Causal_structs.png}
\caption{The six possible causal configurations between $X_1$ and $X_2$ with a possible confounder $H$, separated into the four cases studied in the simulations, and the two omitted by symmetry.}
\label{f:causalstructs}
\end{figure}
\subsection{Variables with comparable tails}\label{ss:const}
Detailed results for variables with comparable tails are presented in Section~\ref{sm:comparableTails} of the Supplementary Material. In this case it is essentially always possible to infer the existence and direction of any causality between $X_1$ and $X_2$, based on the non-parametric or $H$-conditional LGPD estimators, \eqref{eq:npctc1} or~\eqref{eq:lgpdctc}, of $\Gamma_{12}$ and $\Gamma_{21}$ alone. When the causal effects of $H$ on $X_1$ and $X_2$, i.e., $\beta_{1h}$ and $\beta_{2h}$, are increased relative to the noise variance and any causal effect $\beta_{21}$ of $X_1$ on $X_2$, both $\Gamma_{12}$ and $\Gamma_{21}$ increase in configuration B, and $\Gamma_{21}$ increases in configurations C and D. This increase is more marked with the non-parametric estimators of $\Gamma_{12}$ and $\Gamma_{21}$, which are biased upwards in these configurations. When the confounder has a high causal impact, inference based on the non-parametric estimator~\eqref{eq:npctc1} for direct causal link between $X_1$ and $X_2$ is sometimes impossible, as $\hat{\Gamma}_{12},\hat{\Gamma}_{21}\approx 1$ and their difference is close to zero in configurations B and D.
Use of the $H$-conditional LGPD estimator~\eqref{eq:lgpdctc} greatly reduces the effect of $H$ on the coefficient estimates in configurations B and D. For Pareto and log-normal data, the results are indistinguishable from those without the confounder, both in terms of location and variability, as if the effect of $H$ had been entirely removed. For the Student distribution, the estimates are also shifted to around the same values as in the corresponding confounder-free configurations, though their upper tails are marginally heavier. These few greater values remain appreciably lower than without $H$ as a covariate. For configurations A and C, unlike for B and D, the estimator is almost unaffected by the addition of $H$ as a covariate when it is not a confounder. This is also a useful property, as it could allow tests of whether a specific covariate is a confounder of two variables, based on changes to the estimated coefficient when it is added to the model.
\subsection{Confounder with a different tail}\label{ss:difftail}
One generalisation allows the tail of the distribution of $H$ to be heavier or lighter than those of $X_1$ and $X_2$. A lighter tail does not negatively affect whether the non-parametric and $H$-conditional LGPD estimators can infer a direct causal relationship between $X_1$ and $X_2$, since the tails of $X_1$ and $X_2$ are then dominant.
Figure~\ref{fs:lnh1p5npctc} shows the sampling distributions of $\hat{\Gamma}_{12}$ and $\hat{\Gamma}_{21}$ for all four causal structures when the tail of $H$ is heavier than those of $X_1$ and $X_2$. The true coefficient values are unknown, as assumption (b) of Theorem~\ref{thm:ctc1val} is not satisfied, though the coefficient for comparable tails,~\eqref{eq:lemctc1}, is shown for comparison.
\begin{figure}[!t]
\centering
\includegraphics[width=0.92\textwidth]{sims_estimators_q90/n1e6/t4_H3_q90_1M2k1000_vals_npctc.png}\\
\includegraphics[width=0.92\textwidth]{sims_estimators_q90/n1e6/ln_H1p5_q90_1M2k1000_vals_npctc.png}
\caption{Histograms of $\hat{\Gamma}_{12}$ (turquoise) and $\hat{\Gamma}_{21}$ (blue) for $t_4$-distributed $X_1$ and $X_2$, and $t_3$-distributed $H$ (top four panels) and for ${\rm LogNormal}(0,1)$-distributed $X_1$ and $X_2$, and ${\rm LogNormal}(0,1.5)$-distributed $H$ (bottom four panels). Half-lines indicate $\Gamma_{12}$ and $\Gamma_{21}$ for comparable tails.
The panels for ${\rm Pareto}(1,3)$ distributed $X_1$ and $X_2$, and ${\rm Pareto}(1,1.5)$ distributed $H$ are very similar to the lower four panels.}
\label{fs:lnh1p5npctc}
\end{figure}
When $H$ has a heavier tail than $X_1$ and $X_2$, the non-parametric estimators $\hat{\Gamma}_{12}$ and $\hat{\Gamma}_{21}$ in configuration B and $\hat{\Gamma}_{21}$ in configuration D are shifted well towards unity. With an even heavier-tailed, Student $t_2$, distribution for $H$ (not shown here), the Student results resemble those for the Pareto and log-normal distributions. In all these cases it becomes impossible to infer a direct causal relationship between $X_1$ and $X_2$, due to the effect of the heavier confounder tail on the non-parametric estimators.
Figure~\ref{fs:lnh1p5pflgpdctc} shows the sampling distributions of $\hat{\Gamma}_{1,2\mid H}^{\rm GPD}$ and $\hat{\Gamma}_{2,1\mid H}^{\rm GPD}$ with post-fit correction, for all four causal configurations, when the tail of $H$ is heavier than those of $X_1$ and $X_2$.
\begin{figure}[tp]
\centering
\includegraphics[width=0.92\textwidth]{sims_estimators_q90/n1e6/t4_H3_q90_1M2k1000_vals_pfcorr_lgpdctc.png}\\
\includegraphics[width=0.92\textwidth]{sims_estimators_q90/n1e6/Pa3_H1p5_q90_1M2k1000_vals_pfcorr_lgpdctc.png}\\
\includegraphics[width=0.92\textwidth]{sims_estimators_q90/n1e6/ln_H1p5_q90_1M2k1000_vals_pfcorr_lgpdctc.png}
\caption{Histograms of $\hat{\Gamma}_{1,2\mid H}^{\rm GPD}$ (turquoise) and $\hat{\Gamma}_{2,1\mid H}^{\rm GPD}$ (blue) with post-fit correction for $t_4$ distributed $X_1$ and $X_2$, and $t_3$ distributed $H$ (top four panels), for ${\rm Pareto}(1,3)$ distributed $X_1$ and $X_2$, and ${\rm Pareto}(1,1.5)$ distributed $H$ (middle four panels), and ${\rm LogNormal}(0,1)$ distributed $X_1$ and $X_2$, and ${\rm LogNormal}(0,1.5)$ distributed $H$ (lower four panels). Half-lines indicate $\Gamma_{12}$ and $\Gamma_{21}$ for comparable tails.}
\label{fs:lnh1p5pflgpdctc}
\end{figure}
Figure~\ref{fs:lnh1p5npctc} shows that in configurations B and D the non-parametric estimator is badly affected by the heavier tail of $H$: all estimates lie very close to unity, rendering inference on the direct causal relationship between $X_1$ and $X_2$ impossible. In the $H$-conditional LGPD context, the use of $H$ as a covariate solves this problem: the estimates shift towards the coefficient values in the corresponding confounder-free cases, and consistently yield positive values of the difference of estimates $\hat{\Gamma}_{1,2\mid H}^{\rm GPD} - \hat{\Gamma}_{2,1\mid H}^{\rm GPD}$ for configuration D and differences centred at zero for configuration B; see also Section~\ref{sm:comparableTails} of the Supplementary Material. The estimates in configurations A and C, without the confounder causal effect, are barely changed by the addition of $H$ as a covariate.
Simulation results for $\hat{\Gamma}_{1,2\mid H}^{\rm GPD}$ and $\hat{\Gamma}_{2,1\mid H}^{\rm GPD}$ with the constrained fit are very similar to those for post-fit correction for the Pareto and log-normal distributions, but not for the Student distribution. Figure~\ref{fs:t4h3cstrlgpdctc} shows the sampling distributions of $\hat{\Gamma}_{1,2\mid H}^{\rm GPD}$ and $\hat{\Gamma}_{2,1\mid H}^{\rm GPD}$ with the constrained fit, for a heavier confounder tail.
\begin{figure}[!t]
\centering
\includegraphics[width=\textwidth]{sims_estimators_q90/n1e6/t4_H3_q90_1M2k1000_vals_constr_lgpdctc.png}
\caption{Histograms of $\hat{\Gamma}_{1,2\mid H}^{\rm GPD}$ (turquoise) and $\hat{\Gamma}_{2,1\mid H}^{\rm GPD}$ (blue) with constrained fit for $t_4$ distributed $X_1$ and $X_2$, and $t_3$ distributed $H$. Half-lines indicate $\Gamma_{12}$ and $\Gamma_{21}$ for comparable tails.}
\label{fs:t4h3cstrlgpdctc}
\end{figure}
For the Student distribution, the effect of the confounder on the estimator is reduced appreciably less by the constrained fit than by post-fit correction, compared to the non-parametric results. As the Student distribution is heavy in both tails, the lower inequality constraint in~\eqref{eq:linbounds} forces $\hat{\sigma}_j(i)$ ($j=1,2$) to have an appreciably smaller slope, explaining this reduced effect. In configurations with a confounder, the absolute values of the constrained $\hat{\sigma}_j^1$ may be up to ten times smaller than those for post-fit correction. With both approaches $\hat{\sigma}_j^1$ rarely differs greatly from zero for configurations without a confounder.
In the case of the Student distribution, both types of constraint yield very similar estimates; see \url{github.com/opasche/ExtremalCausalModelling}.
To summarize, the simulations show that both the non-parametric estimator~\eqref{eq:npctc1} and the $H$-conditional LGPD estimator~\eqref{eq:lgpdctc} perform well when the theoretical assumptions are met and the influence of a hidden confounder is not too strong. When this influence grows, it becomes increasingly difficult to confidently infer the causal relationship between the variables using the non-parametric estimator, but the $H$-conditional LGPD estimator allows us to detect this relationship by reducing the effect of the confounding.
\section{Testing for Direct Causality}\label{s:test}
\subsection{Permutation Test}\label{ss:testexpl}
In situations such as the causal analysis presented in Section~\ref{s:rivers}, the distributions of the $\Gamma_{12}$ and $\Gamma_{21}$ estimators must be estimated to be used for inference. One way to obtain such distributions would be bootstrap resampling, but the extremal nature of the causal tail coefficient would require an unrealistically large sample size for its bootstrap distributions to be trustworthy, as these distributions tend to be too discrete in the extremes.
We therefore propose a permutation test~\citep[Chapter~4]{DavisonBootstrap} for direct causality between two observed variables, measuring the asymmetry in their direct causal relationship. Suppose we have a sample $\left\{(X_{i,1},X_{i,2})\right\}_{i=1}^n$ from a LSCM and wish to test the null hypothesis of no direct causal relationship between $X_1$ and $X_2$, $H_0: \beta_{21}=0$, versus the alternative that $X_1$ causes $X_2$, $H_A: \beta_{21}> 0$. Our proposed procedure is as follows:
\begin{enumerate}
\item[1.] Rescale values $\tilde{X}_{i,j}=\tilde{F}_j(X_{i,j})$ ($i=1,\ldots,n$, $j=1,2$), where known confounders can be used in the distribution estimator $\tilde{F}_j$, as for $\hat{\Gamma}_{1,2\mid H}^{\rm GPD}$.
\item[2.] For $r=1,\ldots,R$, obtain $\tilde{X}_{i,1}^{(r)}$ and $\tilde{X}_{i,2}^{(r)}$ by randomly permuting the indices $j=1,2$ for each pair $(\tilde{X}_{i,1},\tilde{X}_{i,2})$ ($i=1,\ldots,n$).
\item[3.] Compute $\tilde{\Delta}_{12}=\tilde{\Gamma}_{12}-\tilde{\Gamma}_{21}$ on $\{(\tilde{X}_{i,1},\tilde{X}_{i,2})\}_{i=1}^n$ and $\tilde{\Delta}^{*r}_{12}=\tilde{\Gamma}_{12}^{*r}-\tilde{\Gamma}_{21}^{*r}$ on $\{(\tilde{X}_{i,1}^{(r)},\tilde{X}_{i,2}^{(r)})\}_{i=1}^n$ $(r=1,\ldots,R)$.
\item[4.] Obtain the Monte Carlo $p$-value, by comparing the value of the test statistic on the original rescaled data with the permutation distribution,
\begin{equation*}
p_{\rm mc}=\frac{1+\#_r\{\tilde{\Delta}^{*r}_{12}\geq\tilde{\Delta}_{12}\}}{R+1}.
\end{equation*}
\end{enumerate}
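The steps above can be sketched in a few lines of Python. This is a minimal illustration only: the rank-based rescaling and the simple ``mean of rescaled values over the $k$ largest'' form of the causal tail coefficient used here stand in for the estimators~\eqref{eq:npctc1} and~\eqref{eq:lgpdctc}, and all function names are ours.

```python
import numpy as np

def ecdf(x):
    """Empirical distribution function values, rank_i / (n + 1)."""
    ranks = np.argsort(np.argsort(x))
    return (ranks + 1) / (len(x) + 1)

def gamma_np(x1, x2, k):
    """Simple causal tail coefficient estimate Gamma_12: mean rescaled
    value of X2 over the k observations where X1 is largest."""
    top = np.argsort(x1)[-k:]
    return ecdf(x2)[top].mean()

def permutation_test(x1, x2, k, R=1000, seed=None):
    """Monte Carlo p-value for H0: beta_21 = 0 (steps 1-4 above)."""
    rng = np.random.default_rng(seed)
    u1, u2 = ecdf(x1), ecdf(x2)                          # step 1: rescale
    delta = gamma_np(u1, u2, k) - gamma_np(u2, u1, k)    # step 3: observed
    exceed = 0
    for _ in range(R):                                   # step 2: pair swaps
        swap = rng.random(len(u1)) < 0.5
        p1, p2 = np.where(swap, u2, u1), np.where(swap, u1, u2)
        exceed += (gamma_np(p1, p2, k) - gamma_np(p2, p1, k)) >= delta
    return (1 + exceed) / (R + 1)                        # step 4: p-value
```

Under $H_0$ the pairwise swaps leave the joint distribution unchanged, so the observed statistic is exchangeable with the permuted ones and the returned $p$-value is approximately uniform.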
If there are no asymmetric confounding effects on the two variables, then $\Delta_{12}:=\Gamma_{12}-\Gamma_{21}=0$ under $H_0$, whereas $\Delta_{12}>0$ under $H_A$; see Theorem~\ref{thm:ctc1val}. In the first case, the direct causal relationship is symmetric, i.e., $X_2$ is as likely to take extreme values when $X_1$ is extreme as is $X_1$ when $X_2$ is extreme. If so, then permutations such as those performed in step 2.\ are equally likely, $\tilde{\Delta}_{12}, \tilde{\Delta}^{*1}_{12},\ldots,\tilde{\Delta}^{*R}_{12}$ have a common distribution centered around zero, and $p_{\rm mc}$ will be uniformly distributed. Under the alternative, the direct causal relationship is ``asymmetric'', as $X_2$ is more likely to be extreme when $X_1$ is extreme than conversely; then $\tilde{\Delta}_{12}$ is more likely to lie in the upper tail of $\tilde{\Delta}^{*1}_{12},\ldots,\tilde{\Delta}^{*R}_{12}$. Thus the distribution of $p_{\rm mc}$ will become increasingly skewed towards zero as the causal strength of $X_1$ on $X_2$ increases.
If all asymmetric confounding effects are captured in $\tilde{F}_j$, $X_1$ and $X_2$ have comparable tails and causal effects behave linearly in the extremes, then the proposed procedure should provide a reliable $p$-value for testing direct causality of $X_1$ on $X_2$.
\subsection{Simulations}\label{ss:testsims}
We used simulations from different data distributions and different causal configurations involving $X_1$, $X_2$ and a potential confounder $H$ to assess our proposed test. We used values of $0, 0.01, 0.05, 0.1, 0.2$ for the causal strength $\beta_{21}$ of $X_1$ on $X_2$, with confounding effects both present and absent. Symmetric ($\beta_{1H}=\beta_{2H}=1$) and asymmetric ($\beta_{1H}=0.8$ and $\beta_{2H}=1$, or $\beta_{1H}=1$ and $\beta_{2H}=0.8$) confounding effects were considered, and the noise variables were Pareto, Student $t$ and log-normal distributed. We generated $m=10^3$ replicate samples of $n=10^4$ independent triples $(X_{i,1},X_{i,2},H_i)$ for each causal configuration and noise distribution. Three versions of the permutation test were performed for each sample, corresponding to the causal tail coefficient estimators discussed in Sections~\ref{s:ctc} and~\ref{s:pctc}: non-parametric~\eqref{eq:npctc1} and $H$-conditional LGPD~\eqref{eq:lgpdctc} with either post-fit correction or constrained fit. Each used $R=10^3$ permutations and the estimator hyper-parameters were set to $k=2\lfloor n^{0.4}\rfloor=78$ and $q=0.9$.
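For concreteness, one such configuration can be sampled as follows. This is a sketch assuming linear structural equations with unit-shifted Pareto noise and the coefficient notation above; the exact scalings and noise parameters of the study's LSCM may differ.

```python
import numpy as np

def simulate_lscm(n, beta21, beta1h=0.0, beta2h=0.0, alpha=2.0, seed=None):
    """Draw n triples (X1, X2, H) from a linear structural causal model:
    H is a (possible) confounder acting with strengths beta1h and beta2h,
    and X1 causes X2 with strength beta21. Noise is heavy-tailed
    Pareto(1, alpha)."""
    rng = np.random.default_rng(seed)
    h = 1 + rng.pareto(alpha, n)
    x1 = beta1h * h + 1 + rng.pareto(alpha, n)
    x2 = beta21 * x1 + beta2h * h + 1 + rng.pareto(alpha, n)
    return x1, x2, h
```

Setting `beta21 = 0` with nonzero `beta1h` and `beta2h` gives the pure-confounding configurations, under which the test should keep its nominal level once $H$ is accounted for.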
Figure~\ref{fs:QQtestPa2H1} shows uniform QQ-plots of $p_{\rm mc}$ for the Pareto and Student distributions, in the case of heavier confounder tail, with symmetric effects. In the absence of confounding the test behaves as expected in both cases, and adding dependence on the independent $H$ variable in the modelling through the parametric estimators has no visible effect on the distribution of $p_{\rm mc}$ compared to the non-parametric approach. For the Pareto distribution, the test has power of almost $0.9$ for a direct causal strength of $0.01$, and it behaves perfectly for higher causal strengths. For the Student distribution, the test reaches a power of $0.3$ for a direct causal strength of $0.05$, of $0.7$ for a causal strength of $0.1$ and of nearly $1$ for a causal strength of $0.2$.
\begin{figure}[p]
\centering
\includegraphics[width=\textwidth]{Permutation_test_q90/Pa1-2_H1-1_q90_10k2k_R1k_betas_Pmc_QQ.png}\\
\includegraphics[width=\textwidth]{Permutation_test_q90/t4_H3_q90_10k2k_R1k_betas_Pmc_QQ.png}
\caption{Uniform QQ-plots of Monte Carlo $p$-values $p_{\rm mc}$, with Kolmogorov--Smirnov confidence bands for different causal strengths $\beta_{21}$ (colors), the three estimators (columns) and optional symmetric confounding effects, $\beta_{1H}=\beta_{2H}=1$ (rows).
Top six panels: ${\rm Pareto}(1,2)$ distributed $X_1$ and $X_2$, and ${\rm Pareto}(1,1)$ distributed $H$.
Bottom six panels: $t_4$ distributed $X_1$ and $X_2$, and $t_3$ distributed $H$.}
\label{fs:QQtestPa2H1}
\end{figure}%
When the confounding effects are added, the test based on the non-parametric estimator fails for the Pareto distribution, as most of the $p_{\rm mc}$ then lie outside the $95\%$ confidence bands, indicating that the distribution of $p_{\rm mc}$ is highly non-uniform. This is corrected when the value of the confounder is taken into account using the parametric approaches, and power $0.9$ is attained for a direct causal strength of only one twentieth of the confounder's marginal effects. In the Student case, $p_{\rm mc}$ seems close to uniform in the absence of direct causality (the difference in tail shape is much greater in the Pareto case), but post-fit correction increases the power from below $0.2$ to above $0.4$ for a direct causal strength of one fifth of the confounder's marginal effects. Conclusions similar to those of Section~\ref{ss:difftail} about the constrained fit for distributions with both tails heavy apply here: compared with post-fit correction, the constrained fit estimator improves little on the non-parametric one.
Figure~\ref{fs:QQtestPa2asym0p8_1} shows the uniform QQ-plots with asymmetric confounding effects for the Pareto distribution with comparable tails.
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{Permutation_test_q90/Pa1-2_asym0p8_1_q90_10k2k_R1k_betas_Pmc_QQ.png}
\caption{QQ-plot of the $p_{\rm mc}$ estimates against the standard uniform distribution, with Kolmogorov--Smirnov confidence bands, for ${\rm Pareto}(1,2)$ distributed $X_1$ and $X_2$ and $H$, for different causal strengths $\beta_{21}$ (colors), the three estimators (columns) and optional asymmetric confounding effects, $\beta_{1H}=0.8$, $\beta_{2H}=1$ (rows).}
\label{fs:QQtestPa2asym0p8_1}
\end{figure}
Unlike in the corresponding symmetric case, the test here fails when using the non-parametric estimator, owing to the asymmetry induced by the confounder, but both parametric approaches remove this unwanted effect well enough that $p_{\rm mc}$ has a nearly uniform distribution, with almost perfect power, for causal strengths of one sixteenth and one twentieth of the marginal confounding effects.
\section{Application to Swiss rivers}\label{s:rivers}
We now illustrate how our method can discover direct causal relationships between the discharge extremes of pairs of river stations. This is a real example for which we know the `ground truth' of extremal causality, but unlike in the simulations of Section~\ref{s:simul}, we cannot control, and do not know, the true tail behaviour of the stations' discharges and their potential confounders.
\subsection{Data sources and additional collection}\label{sss:datasources}
We use the average daily discharges of the $68$ Swiss gauging stations shown in Figure~\ref{fd:chstations}, and add daily precipitation data from $105$ meteorological stations. Some additional information, such as the stations' elevation, their catchment surface area and mean elevation, their glaciation percent and their coordinates, was collected from the Federal Office for the Environment's website. To reduce any seasonal effects due to unobserved confounders, we only consider data during June, July and August, as the more extreme observations happen during this period when mountain rivers are less likely to be frozen.
Figure~\ref{fd:shapech} shows relationships between the estimates, station altitudes and average discharges. Although altitude does not greatly affect the estimates, the shape parameter estimates broadly decrease with increased average river discharge volume.
\subsection{Choice of stations and comonotonicity}\label{ss:choicestati}
For the causal analysis, we consider pairs of stations with known direct causal relationships, and pairs with no direct causal relationship.
Causal pairs are ordered by the flow of water, with one downstream of the other. The river volumes for the pairs should be as similar as possible, as our exploratory analysis indicated different tail behaviours for rivers with very different average discharges.
There should also be enough confluences between the two stations, otherwise one would observe \emph{comonotonicity}, i.e., almost perfect dependence, between their discharges. If there is comonotonicity between $X_1$ and $X_2$, then $F_1(X_{i,1})\approx F_2(X_{i,2})$, for all $i=1,\ldots,n$, and it is impossible to know which variable causes which based on the data alone, even if one is certain of direct causality. Confluences between the two stations reduce comonotonicity and make it possible to detect the direction of causality.
As we shall use precipitation as the confounding covariate, the stations must share likely meteorological effects and must lie in regions where precipitation data is available. Based on these criteria, we chose $7$ causal station pairs for the analysis: $(43,62)$, $(42,63)$, $(36,63)$, $(24,61)$, $(44,61)$, $(22,38)$, $(22,35)$.
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{CH/Summer_shapes_all_log_resh.png}
\caption{Relation between shape parameter estimates, scale parameter estimates (log scale), station elevation and average discharge (log scale), with standard errors ($\pm {\rm SE}$) shown as error bars.}
\label{fd:shapech}
\end{figure}%
The non-causal station pairs were selected to have similar average volume and similar shape parameter estimates. Both pairs with stations separated by long distances and pairs relatively close to each other were considered. The $13$ pairs selected are $(30,45)$, $(36,39)$, $(42,34)$, $(32,33)$, $(62,63)$, $(57,60)$, $(13,14)$, $(17,22)$, $(12,21)$, $(26,28)$, $(27,31)$, $(23,39)$, $(23,35)$.
For the causal pairs, the chosen covariate was the mean daily precipitation over the meteorological stations in the area and catchment of the two stations.
The choice of covariate was less meaningful for the non-causal pairs with large separating distances, as they have different meteorological conditions, so the average daily precipitation over the whole country was used. For the pair $(42,34)$, which has the closest stations and local precipitation data available, the daily average in the local catchments was also considered. In the latter case, the pair will be highlighted with an asterisk to avoid confusion.
\subsection{Causal Analysis Results}\label{ss:statiresults}
For each station pair, the permutation test for direct causality was performed using the non-parametric~\eqref{eq:npctc1} and $H$-conditional LGPD~\eqref{eq:lgpdctc} estimators with post-fit correction or constraints, with $R=10^4$ permutations and estimator hyper-parameters $k=1.5\lfloor n^{0.4}\rfloor$ and $q=0.9$. Table~\ref{t:swissresults} shows the values of $p_{\rm mc}$, the covariate shape estimate and its estimated extremal linear effects for the two stations, the latter estimated without constraints. The number of common observations for the pairs varies from $\numprint{2024}$ to $\numprint{8464}$, and $k$ lies between $31$ and $55$. With precipitation covariates added, the number of common observations ranges from $\numprint{1483}$ to $\numprint{7820}$, and $k$ lies between $27$ and $54$.
\begin{table}[t]
\centering
\caption{Permutation $p$-values $p_{\rm mc}$ for station pairs using the non-parametric approach (NP), the $H$-conditional post-fit corrected (PFC) and constrained fit (CF) LGPD approaches, and an $H$-conditional exponential inverse-link GPD approach (Exp). The shape estimate $\hat{\xi}_H$ for the precipitation covariate and the unconstrained scale slope estimates are also shown (with standard errors of at most $0.03$ for the former and in parentheses for the latter).\label{t:swissresults}}
\nprounddigits{2}
\begin{tabular}{ll|n{1}{2}|n{1}{2}|n{1}{2}|n{1}{2}|n{1}{2}|n{1}{2}|n{2}{2}|}
Stations & Pair type & \text{NP} & \text{PFC} & \text{CF} & \text{Exp} & \text{$\hat{\xi}_H$} & \text{$\hat{\sigma}_{1}^{1}$} & \text{$\hat{\sigma}_{2}^{1}$} \\
\hline
43-62 & causal & 0.0080991901 & 0.0079992001 & 0.0074992501 & 0.0080991901 & 0.0566669337 & 0.8842009468 (0.3) & 1.9066945453 (1.3) \\
42-63 & causal & 0.0279972003 & 0.0211978802 & 0.0245975402 & 0.0351964804 & 0.0601108523 & 6.4878543848 (1.1) & 8.6026410091 (2.2) \\
36-63 & causal & 0.0274972503 & 0.0226977302 & 0.0198980102 & 0.0295970403 & 0.0601108523 & 5.027641044 (1.1) & 7.2532593647 (2.8) \\
24-61 & causal & 0.0555944406 & 0.0074992501 & 0.0053994601 & 0.0039996 & -0.014601569 & 3.4157640955 (1.2) & -2.3350882618 (2.4) \\
44-61 & causal & 0.0104989501 & 0.00449955 & 0.00459954 & 0.0065993401 & 0.0127824624 & 1.8939044269 (0.7) & -1.2100340565 (2.0) \\
22-38 & causal & 0.5802419758 & 0.402859714 & 0.397660234 & 0.3305669433 & 0.0659749896 & 3.425892068 (0.8) & 8.000950484 (2.0) \\
22-35 & causal & 0.2177782222 & 0.1730826917 & 0.1745825417 & 0.097190281 & 0.0295179206 & 3.4292170544 (0.9) & 11.666130149 (3.0) \\\hline
30-45 & non-caus. & 0.5578442156 & 0.4658534147 & 0.4668533147 & 0.4561543846 & 0.0052839693 & 1.0132876404 (0.4) & 0.8868363734 (0.9) \\
36-39 & non-caus. & 0.796620338 & 0.699030097 & 0.698330167 & 0.6874312569 & 0.0052839693 & 4.6059105148 (1.1) & 4.168906454 (1.6) \\
42-34 & non-caus. & 0.2264773523 & 0.0422957704 & 0.0408959104 & 0.099690031 & 0.0052839693 & 5.967653909 (1.2) & 0.4274070406 (0.3) \\
42-34$^*$ & non-caus. & 0.2264773523 & 0.1303869613 & 0.1250874913 & 0.1125887411 & 0.0543840826 & 6.2882944671 (1.1) & 0.6556023806 (0.3) \\
32-33 & non-caus. & 0.0098990101 & 0.0096990301 & 0.0099990001 & 0.00159984 & 0.0052839693 & 0.6325946323 (0.4) & 1.0012907206 (0.3) \\
62-63 & non-caus. & 0.101389861 & 0.4889511049 & 0.4753524648 & 0.299970003 & 0.0052839693 & 1.08066753 (1.4) & 7.6742023274 (2.1) \\
57-60 & non-caus. & 0.9923007699 & 0.99930007 & 0.99910009 & 0.9990001 & 0.0052839693 & 6.3100949401 (3.7) & 5.2283442628 (1.8) \\
13-14 & non-caus. & 0.3215678432 & 0.5590440956 & 0.5639436056 & 0.5337466253 & 0.0052839693 & 0.5925547817 (0.2) & 1.1850691043 (0.3) \\
17-22 & non-caus. & 0.0050994901 & 0.0547945205 & 0.0565943406 & 0.0478952105 & 0.0052839693 & 0.776163342 (0.5) & 2.1809668187 (0.7) \\
12-21 & non-caus. & 0.5146485351 & 0.498550145 & 0.500349965 & 0.7209279072 & 0.0052839693 & 0.7126162033 (0.3) & 1.3336258551 (0.4) \\
26-28 & non-caus. & 0.6345365463 & 0.900109989 & 0.8927107289 & 0.9220077992 & 0.0052839693 & 1.9041701392 (0.5) & 1.6324917792 (0.4) \\
27-31 & non-caus. & 0.404459554 & 0.6259374063 & 0.6215378462 & 0.7521247875 & 0.0052839693 & 1.7129163243 (0.7) & 2.9055730916 (1.1) \\
23-39 & non-caus. & 0.802719728 & 0.9134086591 & 0.9184081592 & 0.9323067693 & 0.0052839693 & 2.4975701839 (0.6) & 4.2681161898 (1.5) \\
23-35 & non-caus. & 0.6484351565 & 0.8844115588 & 0.8877112289 & 0.8647135286 & 0.0052839693 & 2.4975701839 (0.6) & 6.6610856116 (1.7) \\
\hline
\end{tabular}
\npnoround
\end{table}
With the non-parametric approach for the causal stations, the absence of direct causality was rejected for four of the seven station pairs at significance level $5\%$, and for two of these four at level $2.5\%$. Adding daily precipitation as a covariate by either parametric approach decreases the $p$-values but two pairs remain non-significant; both lie in the same region and contain station number $22$.
With the non-parametric approach, the absence of direct causality was not rejected for ten of the $13$ non-causal station pairs. Adding precipitation as a covariate with the two parametric approaches `corrected' the $p$-value for one further pair. For the pair $(42,34)$, using local instead of global precipitation as a covariate gave a higher $p$-value.
We also considered using an exponential rather than a linear inverse-link function, i.e., taking $\log \sigma_j(i) = \sigma_j^0+\sigma_j^1 H_i$ $(i=1,\ldots,n;j=1,2)$, to avoid any need for correction or constraints. The resulting $p_{\rm mc}$ values, also shown in Table~\ref{t:swissresults}, lead to the same conclusions as with the linear approaches.
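A maximum-likelihood fit under this exponential inverse-link can be sketched as below. This is an illustration, not the analysis code; it assumes a nonzero shape parameter, uses a generic optimizer, and the function and variable names are ours.

```python
import numpy as np
from scipy.optimize import minimize

def fit_gpd_exp_link(y, h, theta0=(0.0, 0.0, 0.1)):
    """MLE for threshold exceedances y > 0 under a generalized Pareto model
    whose log-scale is linear in a covariate: sigma_i = exp(s0 + s1 * h_i),
    with a common shape xi. The exponential inverse-link keeps sigma_i > 0
    without any constraint or correction. Returns the fitted (s0, s1, xi)."""
    y, h = np.asarray(y, float), np.asarray(h, float)

    def nll(theta):
        s0, s1, xi = theta
        sigma = np.exp(s0 + s1 * h)
        z = 1.0 + xi * y / sigma
        if np.any(z <= 0):
            return np.inf           # outside the GPD support
        return np.sum(np.log(sigma) + (1.0 + 1.0 / xi) * np.log(z))

    return minimize(nll, theta0, method="Nelder-Mead").x
```

The exponentiated parametrisation is what removes the need for the linear-link corrections: any real $(\sigma_j^0, \sigma_j^1)$ yields a valid positive scale.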
Using the usual normal approximation, every $\hat{\sigma}_{1}^{1}$ is significantly positive for the causal pairs and $10$ of the $14$ estimates are positive for the non-causal pairs, with the highest confidence for the pair using local precipitation. Standard errors for $\hat{\sigma}_{2}^{1}$ are systematically larger than those for $\hat{\sigma}_{1}^{1}$ for the causal pairs, perhaps owing to the double causal effect of the covariate on the downstream station, both direct and indirect through the upstream station, as we do not observe this systematically for non-causal pairs. Consequently, the $\hat{\sigma}_{2}^{1}$ estimates are significantly positive for only four of the seven causal pairs, to be contrasted with $12$ of the $14$ estimates for the non-causal pairs. In particular, only the local precipitation effect is significant for the pair $(42,34)$.
\section{Discussion and Conclusion}
\label{s:conclu}
This paper addresses the reduction or removal of the unwanted effect of a known confounder from the extremal causal analysis between two variables and the discovery of extremal causal relationships using a parametric estimator of the causal tail coefficient, based on generalized Pareto modelling, and a permutation test for direct causality. Both allow the use of a known confounder as a covariate.
In our simulation study, the new estimator removed the confounder's unwanted effect almost entirely for variables with comparable tails, and reduced its effect enough to allow correct causal inference on the direct causal relationship in the case of a confounder with a heavier tail. The permutation test was shown to provide reliable $p$-values when all asymmetrical confounding effects are captured in the model.
When applied to Swiss river discharge data, our methodology allowed correct inference on the direct causal relationships between discharges for the majority of the chosen station pairs, and the parametric approach captured the confounding effect of precipitation.
In many real-life situations, significant covariates do not correspond to causal effects. \citet{CausalPred} have proposed a methodology for causal discovery when data is observed in different settings or regimes. Their method constructs invariant causal regression or classification models that should still make accurate predictions under interventions on the covariates or a change of environment. Adapting this approach to our setting would lead to a better understanding of extremal causality in fields of predictive statistics such as machine learning and multivariate time series modelling for extremes.
\section*{Acknowledgements}
The work was supported by the Swiss National Science Foundation.
\bibliographystyle{abbrvnat-namefirst}%
\section{Introduction}
\label{sec:Intro}
Traditional, simplified models of energetic particle propagation typically assume homogeneous turbulence generated by fields that are wave-like perturbations or random noise. These scenarios lack what would be called intermittency in turbulence theory. Indeed, a variety of indications, both theoretical \citep{ambrosiano1988test, drake2006electron, dalena2012magnetic, pecora2018ion} and observational \citep{mazur2000interplanetary, tessein2013association, tessein2016local, khabarova2017energetic}, build a case that interactions of particles with turbulence are structured and inhomogeneous. Depending on the topology and connectivity of the magnetic field, these interactions may involve temporary trapping \citep{ruffolo2003trapping}, as well as exclusion from certain regions of space \citep{kittinaradorn2009solar}. In some cases, such as solar energetic particle (SEP) ``dropouts'', the influence of the magnetic structure is dramatic \citep{mazur2000interplanetary}; in other cases it is more subtle, as when detected flux tube boundaries coincide with ``edges'' of
SEP events \citep{tessein2016local, khabarova2016small}.
With Parker Solar Probe (PSP) \citep{fox2016solar} now reaching distances closer to the Sun than any previous mission, novel opportunities are available for examination of the relationship between magnetic flux structures and energetic particle populations. In particular, energetic particle (EP) measurements from the Integrated Science Investigation of the Sun (IS$\odot$IS) \citep{mccomas2016integrated} along with FIELDS magnetic field measurements \citep{bale2016fields} and SWEAP plasma moments \citep{kasper2016solar}, are enabling characterization of observations of EPs and their transport properties closer to their sources than ever before possible.
We make use of a novel compact scheme \citep{pecora2020identification} for the detection of helical magnetic flux tubes. The approach makes use of an efficient real-space method for quantifying magnetic helicity \citep{matthaeus1982evaluation}, in conjunction with the partial variance of increments (PVI) \citep{greco2018partial} to detect boundaries within, or at the edges of, magnetic flux tubes.
We find evidence for helical flux ropes acting as transport boundaries for solar energetic particles. Such influence arises from the sudden appearance of SEP enhancements in the vicinity of helical flux tubes accompanied, at their edges, by clusters of enhanced PVI events. This elaborates on previous findings near 1 au \citep{tessein2016local} and indicates that the channeling of SEPs occurs closer to the Sun than has been previously observed.
The paper is organized as follows: in Sec.~\ref{sec:hmpvi} we describe the techniques used to evaluate magnetic helicity and PVI to detect helical flux ropes and their boundaries. In Sec.~\ref{sec:psp} we present the results obtained from the analysis of PSP orbit 5. Opposite but complementary views of energetic particle transport are shown. Finally, in the last Section, we discuss the results.
\section{Local magnetic helicity and PVI methods}
\label{sec:hmpvi}
To correlate EP populations with helical structures and strong PVI events, we make use of the technique described in \citet{pecora2020identification}. The technique exploits the property that flux ropes are, in general, helical. Indeed, flux ropes can be described as structures with approximate cylindrical symmetry, and magnetic field lines that wind about a central axis. From a quantitative perspective, an appropriate measure is the magnetic helicity of the magnetic fluctuations,
defined as $H_m = \langle \bm{a} \cdot \bm{b} \rangle$, where $\bm{a}$ is the vector potential associated with the magnetic field fluctuations $\bm{b} ={\bm \nabla}\times{\bm a}$. The averaging operation $\langle \dots \rangle$ is performed over an appropriate volume \citep{woltjer1958theorem, taylor1974relaxation, matthaeus1982measurement}.
Following the definition of \cite{matthaeus1982measurement}, magnetic helicity is related to the off-diagonal part of the magnetic field autocorrelation tensor and can be estimated from single-spacecraft 1D measurements as described below. This definition relies only on the assumption of homogeneity, a requirement that is further relaxed in the local implementation described below. The method is essentially free of strong symmetry assumptions and does not depend on complex transformations.
The procedure we proposed recently in \citet{pecora2020identification} calculates a {\it local} value of $H_m$, based on estimates of the elements of the magnetic field correlation matrix $R_{ij} = \langle b_i(x)b_j(x+l) \rangle$, at a certain point $x$, with increment $l$, and averaging the local correlator of magnetic fluctuations over a region of width $w_0$ centred about $x$. This procedure, implemented entirely in real space, provides a localized evaluation, as do, for example, certain wavelet transforms. The relevant indices $i, j$ for the calculation of helicity are those that refer to directions perpendicular to the relative solar wind-spacecraft motion. For this interval at $\sim$ 0.3 au, this ``sweeping direction'' is almost purely radial, due to the super-Alfv\'enic flow of the solar wind. In the Radial-Tangential-Normal (RTN) coordinate system, the sweeping of the solar wind is in the R direction, corresponding to increment lags $l$ also in the direction R. Then, the integral to calculate helicity involves the T and N magnetic field components. For the more general case, see \citet{pecora2020identification}. For the present choice of coordinates, $H_m$ is evaluated explicitly as
\begin{equation}
H_m(x,\ell) = \int_0^\ell dl~C(x,l) h(l),
\label{eq:Hm}
\end{equation}
where the integral is performed in scale-space and the chosen value of
$\ell$ represents the largest scale that will contribute to $H_m$. In particular, we choose $\ell$ to be a multiple of the correlation length $\lambda_c$ of the specific interval. $h(l) = \frac{1}{2}\left[ 1 + \cos\left(\frac{2\pi l}{w_0}\right) \right]$ is the Hann window used to smooth estimates to zero at the edges of the investigated data interval, thus avoiding spurious effects of boundary fluctuations. $C(x, l)$ is the correlation function defined as
\begin{equation}
C(x, l) = \frac{1}{w_0} \int_{x-\frac{w_0}{2}}^{x+\frac{w_0}{2}} \left[ b_T(\xi)b_N(\xi+l) -b_N(\xi)b_T(\xi+l)\right] d\xi,
\label{hm1}
\end{equation}
again, with increments along R. The interval of local integration $w_0$ is arbitrary, but we typically choose it to be an order-unity multiple of the scale $\ell$, such as $w_0=2\ell$. The above formulas convert directly to the time domain using the Taylor hypothesis. For a detailed derivation of the theory of helicity determination, see \citet{matthaeus1982evaluation} and \citet{matthaeus1982measurement}. In a subsequent paper, we will describe a statistical analysis that is able to separate helicity values attributable to stochastic magnetic field fluctuations from those due to coherent structures.
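A direct discretization of Eqs.~\ref{eq:Hm} and~\ref{hm1} might read as follows. This is a schematic sketch, with lags and window widths measured in samples and the sweeping (radial) direction mapped to the array index; the function and variable names are ours.

```python
import numpy as np

def local_helicity(bT, bN, ell, w0=None, dx=1.0):
    """Local magnetic helicity H_m(x, ell) from 1D series of the two field
    components transverse to the sampling direction: the antisymmetric
    correlator C(x, l) (Eq. 2), averaged over a window of width w0 about
    each point, tapered by the Hann window h(l), and summed over lags
    l = 0..ell (Eq. 1)."""
    bT, bN = np.asarray(bT, float), np.asarray(bN, float)
    if w0 is None:
        w0 = 2 * ell                      # the order-unity choice in the text
    n, half = len(bT), w0 // 2
    lags = np.arange(ell + 1)
    hann = 0.5 * (1.0 + np.cos(2.0 * np.pi * lags / w0))
    hm = np.full(n, np.nan)               # undefined near the series edges
    for x in range(half, n - half - ell):
        i0, i1 = x - half, x + half
        c = np.empty(ell + 1)
        for l in lags:                    # C(x, l), Eq. (2)
            c[l] = np.mean(bT[i0:i1] * bN[i0 + l:i1 + l]
                           - bN[i0:i1] * bT[i0 + l:i1 + l])
        hm[x] = np.sum(c * hann) * dx     # Eq. (1) as a Riemann sum
    return hm
```

Dividing the result by a local estimate of $\langle \delta b^2 \rangle \lambda_c$ then gives the normalized helicity $\tilde{H}_m$ introduced next.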
It is useful to define a normalized version of the magnetic helicity as
\begin{equation}
\tilde{H}_m = \frac{H_m}{\langle \delta b^2 \rangle \lambda_c},
\label{eq:sig}
\end{equation}
where $\langle \delta b^2 \rangle$ is the fluctuation energy computed from a local average taken on an interval of about a correlation length $\lambda_c$.
\comm{
This measure is useful for two purposes: the first is practical, as this definition does not involve bulk velocity measurements, which are not always available, and Taylor's hypothesis need not be used; the second is physical, as it quantifies magnetic helicity relative to the local energy content, thus permitting smaller helical structures to be readily detected. It is worthwhile to show both $H_m$ and $\tilde{H}_m$ to gain a more comprehensive understanding of the helicity-to-energy ratio of the emerging structures (note, for example, the flux tube from 05-24 14:00 to 17:00 in Fig.~\ref{fig:20200524}). Note that the normalized helicity is a generalization of the Fourier-space dimensionless helicity introduced by \citet{matthaeus1982evaluation}.
}
The correlation length $\lambda_c = V_{sw} \tau_c$ is related to the spacecraft correlation time $\tau_c$, and the solar wind speed $V_{sw}$ via the Taylor hypothesis. Computed in this way, the normalized magnetic helicity becomes more sensitive to local conditions. Note the analogy with the normalized magnetic helicity $\sigma_m$ obtained in the Fourier representation of $H_m$ \citep{matthaeus1982measurement}.
Sharp gradients, frequently current sheets, that often reside at external and internal boundaries of flux ropes are determined using the PVI \citep{greco2008intermittent,greco2018partial}, defined as
\begin{equation}
\mbox{PVI}(s, \ell) = \frac{ | \Delta {\bm B}(s,\ell) | }{ \sqrt{ \langle | \Delta {\bm B}(s,\ell) |^2 \rangle } },
\label{pvieq}
\end{equation}
where $\Delta {\bm B}(s,\ell) = {\bm B}(s+\ell) - {\bm B}(s)$ are the magnetic field vector increments evaluated at scale $\ell$ and the averaging operation $\langle \dots \rangle $ is performed over a suitable interval \citep{servidio2011statistical}. The function can be computed spatially in simulations or in magnetic field time series, again by assuming the Taylor hypothesis. The technique has been extensively validated, both in numerical simulations and observations, to be able to identify magnetic field discontinuities, current sheets, and reconnection events \citep{greco2009statistical, osman2014magnetic, greco2018partial, pecora2019single}.
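In code, Eq.~\ref{pvieq} amounts to only a few lines. This is a sketch in which the normalizing average $\langle \dots \rangle$ is taken over the whole interval; in practice a long sliding window may be used instead.

```python
import numpy as np

def pvi(b, lag):
    """PVI series (Eq. 4) from an (n, 3) array of magnetic field vectors:
    the magnitude of the vector increment at the given lag, normalized by
    its root-mean-square over the averaging interval."""
    db = np.linalg.norm(b[lag:] - b[:-lag], axis=1)   # |Delta B(s, lag)|
    return db / np.sqrt(np.mean(db ** 2))
```

By construction the returned series has unit mean square, so large values flag increments far out in the tail of the increment distribution, the candidate current sheets and flux tube boundaries.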
\section{Parker Solar Probe}
\label{sec:psp}
During PSP orbit 5, from 2020 May 24 to June 2, a sequence of several EP events was measured, at radial distances from 0.45 to 0.2 au, and has been extensively studied \citep{cohen2020parker, chhiber2020magnetic}. We will analyze selected properties of these events using several PSP data products. We employ magnetic field data from the MAG instrument on the FIELDS suite, resampled from the original four samples per cycle (4 Hz) to a 60-second resolution. PVI is calculated at this scale. Magnetic helicity is calculated with varying window sizes scaled to multiples of the regional correlation length $\lambda_c$. As the window size $\ell$ increases (decreases), the helicity measurement (Eq.~\ref{eq:Hm}) becomes sensitive to contributions from larger (smaller) scale flux tubes \citep{pecora2020identification}. Particle measurements are obtained from the IS$\odot$IS EPI--Lo and EPI--Hi instruments.
\comm{
We use count rates from EPI--Lo ChanP (80--200 keV protons) and ChanE (30--550 keV electrons). For EPI--Hi, we use end A of both the High Energy Telescope (HET A; 0.4--1.2 MeV electrons; 7--60 MeV protons) and the Low Energy Telescope 1 (LET1 A; 1--30 MeV protons).
}
All public data are available from the IS$\odot$IS database and on Coordinated Data Analysis Web (CDAWeb).
\comm{
Priority buffer (PBUF) rate measurements are considered uncalibrated (engineering) data. These are integrated counts measured at different stopping depths within the telescope and cannot be calibrated to fluxes in well-defined energy ranges. For our use, we summed the ranges R1 through R6, which is similar to integrating over energies. In the following, we do not use these measurements to obtain quantitative estimates; rather, they are used to show a better time-resolved envelope of the hourly-averaged fluxes (panels (j) and (k) of Figs.~\ref{fig:20200523}-\ref{fig:20200527}).
}
\begin{figure*}
\centering
\includegraphics[width=0.9\textwidth]{Fig1.png}
\caption{Period from 2020 May 24 to June 2, during which $\lambda_c \sim 6.5 \times 10^6$ km (corresponding to $\tau_c \sim 6$ hours). The stacked panels show (a) magnetic field measured by FIELDS resampled at 60-second cadence, (b) the PVI signal computed with a time lag of 60 seconds, (c) the magnetic helicity of fluctuations at different scales, and (d) its normalized version, (e) proton count rate at 60-second resolution measured by EPI--Hi LET1, (f) proton count rate at 60-second resolution measured by EPI--Hi HET, (g) electron count rate in the energy range 0.4--1.2 MeV at 1-hour resolution measured by EPI--Hi HET A, (h) proton count rate in the energy range 80--200 keV at 300-second resolution measured by EPI--Lo, (i) electron count rate in the energy range 30--550 keV at 300-second resolution measured by EPI--Lo, (j) proton flux at 1-hour resolution measured by EPI--Hi LET1 A, and (k) proton flux at 1-hour resolution measured by EPI--Hi HET A.}
\label{fig:20200523}
\end{figure*}
An overview of the five events occurring during the selected period is shown in Fig.~\ref{fig:20200523}. Even at this scale of 11 days, the sets of measurements show interesting behaviour and a correlation between energetic particles (both protons and electrons) and magnetic field properties. In particular, it is possible to notice that the large helical structure appearing around May 28 encloses both the energetic electrons (panels (g) and (i)) and the higher-energy portion of the energetic proton population (panels (f) and (k)). On the other hand, the LET channel (panel (e)) exhibits fewer structures, suggesting less confinement than its higher-energy counterpart HET in panel (f). We will focus on the events labelled as 1, 2, and 3 in the two intervals marked with grey shadings in Fig.~\ref{fig:20200523}, as they give opposite, but complementary, views of the ``exclusion'' and ``trapping'' phenomena (see below). We note, in passing, that another event just after the second shaded region, beginning around May 29, 08:00, appears to be nonhelical and possibly nondispersive. That interval will not be discussed in the present paper.
\begin{figure*}
\centering
\includegraphics[width=0.9\textwidth]{Fig2.png}
\caption{2020 May 24 from 00:30 to 23:30 UTC, $\lambda_c \sim 0.8 \times 10^6$ km ($\tau_c \sim 38$ minutes). The panels are arranged in the same fashion as in Fig.~\ref{fig:20200523}. The energetic proton population, appearing in panel (e), is confined between -- and excluded from -- the leading and trailing negative-helicity peaks. Vertical dashed lines highlight approximate flux tube boundaries, coinciding with strong PVI events near the edges of helical regions.}
\label{fig:20200524}
\end{figure*}
Figure~\ref{fig:20200524} shows the same quantities as Fig.~\ref{fig:20200523}, but the analysis is performed over the restricted time interval from May 24 00:30 to 23:30 UTC (first shaded region of Fig.~\ref{fig:20200523}) during which $\lambda_c \sim 0.8 \times 10^6$ km (corresponding to $\tau_c \sim 38$ minutes). The restriction to a shorter timescale enhances the smaller-scale helical structures that were obscured before. In this case, the energetic proton population appearing from 08:00 to 15:00 is confined between -- and excluded from -- two helical structures (each indicated with two vertical dashed lines). During this event, the helical field lines appear to act as excluding boundaries for the particles (that may be streaming along ambient solar wind magnetic field lines not organized in a helical fashion), which have suppressed transport across the structures. This kind of exclusionary behaviour is reminiscent of the phenomenon of SEP dropouts, which have been associated with topological structures and are frequently observed at 1 au and in simulations \citep{mazur2000interplanetary, ruffolo2003trapping, tooprakai2016simulations}.
\comm{
The exclusion of the energetic protons from spatial regions associated with times before 08:00 may be associated with transport effects mediated by the helical tubes that forbid SEPs to access those particular regions of space after their onset.
}
This event appears to be dispersive, as faster particles arrive first (panel (j)), so our suggestion is that the exclusion effect influenced which flux tubes guided the particles from the source region to the point of observation.
The spreading of particles following onset is typically associated with diffusive transport (e.g., \citet{droege2016multi}). In this case, the helical structure near 14:30 may be gradually inhibiting diffusion into the relatively quiet region (in terms of PVI) that resides beyond 16:00. Phenomena such as this have been observed in simulations and have been interpreted as temporary topological trapping \citep{tooprakai2016simulations}, possibly accompanied by suppressed diffusive transport \citep{chuychai2005suppressed}.
\begin{figure*}
\centering
\includegraphics[width=0.9\textwidth]{Fig3.png}
\caption{Period from 2020 May 27 04:30 to 29 08:00, during which $\lambda_c \sim 4.0 \times 10^6$ km ($\tau_c \sim 4$ hours). The panels are arranged in the same fashion as in Fig.~\ref{fig:20200523}.
Both protons and electrons show sudden onsets (panels (e--g), (i--k)) corresponding to the encounter of PSP with the first positive-helicity structure after a strong PVI event. The nature of the magnetic field changes in the period between the two pulses of high-energy particles in panel (f) -- the sign of helicity changes and there is a burst of PVI activity. This may signal complexity in the transport processes for these apparently distinct events (see \protect\cite{cohen2020parker}),
especially given that the LET1 counts show little change across this transition.
Vertical dashed lines highlight approximate flux tube boundaries, coinciding with strong PVI events near the edges of helical regions.
}
\label{fig:20200527}
\end{figure*}
The picture given in Fig.~\ref{fig:20200527}, over an approximately two-day period beginning on 2020 May 27, is complementary to the one of Fig.~\ref{fig:20200524}. Energetic particle events 2 and 3, shown here and indicated in panel (j) of Fig.~\ref{fig:20200523}, can be better distinguished in the HET signal of EPI--Hi, in which two separated populations clearly appear (panels (f) and (k)). In LET1, the signal of the first event does not fully decline before the onset of the second one (Fig. \ref{fig:20200527}, panels (e) and (j)). This local analysis shows that there are two adjacent flux tubes of opposite-sign helicity, possibly separated by a strong current sheet identified by the large-PVI region. One can notice that the dispersive onset of the first EP event, occurring over a period of about an hour, coincides with the appearance of the flux tube, suggesting that the spacecraft has suddenly experienced a different environment, passing from the ambient solar wind to a more confined plasma. In this case, contrary to the previous case in Fig.~\ref{fig:20200524}, each energetic population is confined within a different helical structure.
The lower-energy protons (Fig. \ref{fig:20200527} panel (h)) suggest a more complex history that may intertwine source and transport effects. The activity in panel (h) is not directly associated with the positive-helicity flux rope between 05-27 18:00 and 05-28 08:00 nor with the negative-helicity flux rope between 05-28 08:00 and 05-28 16:00 (the latter being better distinguishable at the scale of one correlation length). Rather, one sees an enhancement of the EPI--Lo 80--200 keV protons in association with the trailing edge of the negative-helicity flux tube. One cannot rule out that the rise in EPI--Lo activity around 05-28 01:00 (panel (h)) may share source region proximity with the EPI--Hi onset near 05-27 18:00 (panels (e) and (f)). Similarly, the larger EPI--Lo increase near 05-28 16:00 (panel (h)) may have an association with the EPI--Hi increase near 05-28 12:00 (panel (f)). If so, the delay in timing would be due to slower propagation of the EPI--Lo particles. However, it is also clear that the flux tubes guiding these lower energy particles to the PSP position have a distinct character, suggesting differences also in transport of the higher and lower energy particles.
As an aside, we note that these energetic particles have gyroradii much smaller than the dimensions of the detected flux ropes. The largest gyroradius -- corresponding to a 30 MeV proton in a 25 nT magnetic field -- is about $3 \times 10^4$ km. The average duration of the two flux tubes is about 10 hours, which corresponds to an extension of $\sim 10^7$ km (for an average solar wind speed of 275 km/s on these days).
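As a check on these scales, the relativistic gyroradius and the advected tube length can be computed directly. The following Python sketch (using only the values quoted in the text; it is illustrative, not part of the original analysis) reproduces the order-of-magnitude comparison:

```python
import math

# Physical constants (SI)
C = 299_792_458.0          # speed of light, m/s
M_P_MEV = 938.272          # proton rest energy, MeV

def proton_gyroradius_km(kinetic_mev, b_nt):
    """Relativistic proton gyroradius r = pc / (q c B), in km.

    Uses pc = sqrt((T + m c^2)^2 - (m c^2)^2) in MeV, so that
    r [m] = pc [V] / (c [m/s] * B [T]) for a unit charge.
    """
    total = kinetic_mev + M_P_MEV
    pc_mev = math.sqrt(total**2 - M_P_MEV**2)   # momentum times c, in MeV
    pc_volts = pc_mev * 1e6                     # 1 MeV per unit charge = 1e6 V
    r_m = pc_volts / (C * b_nt * 1e-9)
    return r_m / 1e3

# Largest gyroradius considered in the text: 30 MeV proton in a 25 nT field
r_gyro = proton_gyroradius_km(30.0, 25.0)       # ~3e4 km

# Flux-tube extent: ~10 h crossing at ~275 km/s solar wind speed
tube_km = 10 * 3600 * 275.0                     # ~1e7 km
```

The ratio of the two scales confirms that the flux ropes are some two to three orders of magnitude larger than the largest gyroradius, so guiding-centre confinement by the structures is plausible.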
\comm{
From this comparison of scales, a further consideration arises. Generally, charged particles are expected to follow magnetic field lines; both ions and electrons are subject to electromagnetic field properties. It is clear that the relationship between particle gyroradius and flux tube size can have a significant impact, as can the helicity and energy density of fluctuations. Therefore, each species may behave differently depending on charge sign and energy (rigidity). As an example, high-energy cosmic rays follow the topological properties of the heliosphere \citep{demarco2007numerical}, while lower-energy populations are modulated by more local properties \citep{tooprakai2016simulations}. Moreover, each of these particle properties may influence behaviour with regard to both transport and energization. The comparison of electron and ion behaviour is a useful step in attempting to unravel and identify these physical effects.
}
\comm{
In the trapping case of Fig.~\ref{fig:20200527}, even the most energetic ions ($\sim$~30 MeV) seem to be confined within the boundaries of flux tubes. Therefore, we can infer that the same behaviour is expected for lower energy and less massive particles (given that they share source properties as discussed above). The electron signals (panels (g) and (i)) support this view also for opposite-sign particles. Similar conclusions may be drawn for the ``exclusion'' phenomenon of Fig.~\ref{fig:20200524}, even though no high-energy ($>$~5 MeV) proton or electron measurements are available. In general, we suggest that both ``exclusion'' and ``trapping'' phenomena are energy-dependent, and a comparison of typical scales for both particles and structures needs to be considered when describing this type of interaction.
}
\comm{
A qualitative picture of the exclusion and trapping paradigms is suggested in Fig.~\ref{fig:cartoon}. Several approximations have been made to represent the two scenarios: Flux ropes have their axes directed along the local mean magnetic field, and the approximate path of the PSP trajectory, assumed radial, is shown relative to the flux tube magnetic axes in the two cases.
}
\begin{figure}
\centering
\includegraphics[width=0.36\textwidth]{Fig4.png}
\caption{Cartoon illustration of the (a) exclusion and the (b) trapping scenarios. We imagine the flux tubes to be elongated cylinders with magnetic axes aligned with the measured magnetic field. The PSP trajectory, assumed radial, is shown as a dashed line that indicates the sampling of the data. Energetic particles (EPs) are symbolized by clouds. In case (a), EPs transport along field lines outside the two flux tubes. In case (b), the EPs are within the cylindrical tubes, and an interaction region, possibly containing current sheets, is suggested, one that could be responsible for bursts of PVI activity.}
\label{fig:cartoon}
\end{figure}
For a more quantitative visualization of the (anti-)correlation between helical patches and both PVI and EPs in the observed events, we plot PVI versus the absolute value of the normalized helicity (Eq.~\ref{eq:sig}, calculated at the scale of two correlation lengths) as shown in Fig.~\ref{fig:scatter}. The Figure shows data for the May 24 (Fig.~\ref{fig:20200524}) and the May 27-29 intervals (Fig.~\ref{fig:20200527}). It should be apparent from the earlier discussion that the former interval shows particles excluded from a flux tube. For the latter interval, particles are (at least temporarily) confined or trapped in a flux tube. As discussed before, and as has been confirmed in simulations and observations, high-PVI regions frequently demarcate boundaries of flux ropes \citep{greco2009statistical, pecora2019single}, while relatively large values of $H_m$ characterize the flux tube interiors \citep{pecora2020identification}. This view is consistent with the panels of Fig.~\ref{fig:scatter}. This alternative and more compact way of representing the correlations of PVI and EPs with helical structures will be of even more practical use when longer surveys of several events are analyzed.
To include EP information, we colour-coded the symbols based on proton count rates, to indicate a general level of energetic particle activity. Specifically, we use LET1 PBUF counts for event 1 of May 24, and HET PBUF counts for events 2 and 3 starting on May 27. The separation into low, mid, and high count rates offers a more compact picture of the scenarios envisioned before. In fact, the distinction between {\it exclusion} and {\it trapping} events is very clear. In the event reported in Fig.~\ref{fig:20200524}, the population of EPs is confined between two consecutive helical structures and excluded from penetrating them; thus, the large count rates in panel (a) of Fig.~\ref{fig:scatter} are confined to lower $\tilde{H}_m$ values. On the contrary, Fig.~\ref{fig:20200527} shows two EP populations confined within two flux ropes and, indeed, in panel (b) of Fig.~\ref{fig:scatter} the large count rates are correlated with large $\tilde{H}_m$ values. Note that the data from the entire respective intervals in Figs.~\ref{fig:20200524} and \ref{fig:20200527} are included in the analysis in Fig.~\ref{fig:scatter}.
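The colour coding described above amounts to binning the count-rate series into classes and plotting one scatter series per class. A minimal numpy sketch with synthetic stand-in data (the real inputs would be the PVI series, $|\tilde{H}_m|$ at two correlation lengths, and the PBUF counts; the tercile thresholds here are illustrative, not the ones used for the Figure):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the measured series
pvi = rng.exponential(1.0, 500)        # PVI values
h_norm = rng.uniform(0.0, 1.0, 500)    # |H_m~|, normalized helicity magnitude
counts = rng.poisson(5.0, 500).astype(float)   # PBUF count rates

# Split count rates into low/mid/high classes at illustrative terciles
edges = np.quantile(counts, [1 / 3, 2 / 3])
labels = np.digitize(counts, edges)    # 0 = low, 1 = mid, 2 = high

# One scatter series per class, e.g. with matplotlib:
# for k, (name, color) in enumerate([("low", "C0"), ("mid", "C1"), ("high", "C2")]):
#     plt.scatter(h_norm[labels == k], pvi[labels == k], s=8, c=color, label=name)
```

With the real data, the high-count class would cluster at low $|\tilde{H}_m|$ for an exclusion event and at high $|\tilde{H}_m|$ for a trapping event.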
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{Fig5.png}
\caption{Scatter plot of PVI \textit{vs} $|\tilde{H}_m|$. Larger PVI values are mostly found at smaller $\tilde{H}_m$ (flux tube boundaries). The colour code indicates proton count rates $(\nu)$ of (a) LET1 PBUF (panel (e) of Fig.~\ref{fig:20200524}) and (b) HET PBUF (panel (f) of Fig.~\ref{fig:20200527}). Panel (a) is illustrative of an exclusion event, while (b) illustrates a trapping event (see text).}
\label{fig:scatter}
\end{figure}
\section{Discussion of the Results}
The association of magnetic field properties with energetic particle propagation is a subject of ongoing interest, especially with regard to understanding detailed observations of solar energetic particle events. In particular, there has been renewed interest \citep{ruffolo2004separation, chuychai2005suppressed, khabarova2014particle, tessein2016local} in understanding how magnetic flux tube structure and topology influence transport as well as acceleration processes \citep{giacalone2000small, drake2006electron}.
The problem of SEP dropouts has been tackled and addressed in several ways, invoking magnetic field lines connectivity, footpoints motion, and local turbulence properties \citep{mazur1998solar, mazur2000interplanetary, ruffolo2003trapping, chuychai2005suppressed, tooprakai2007temporary, kittinaradorn2009solar, tooprakai2016simulations}. There is also substantial evidence in the literature (e.g., \citet{sanderson1998wind}) that magnetic features associated with the heliospheric current sheet, compression regions, high-speed streams, etc., have a substantial influence on the intensity and occurrence of SEP events. In addition to associations with these large-scale structures, energetic particles (especially electrons) have provided probes of magneto-topology in large magnetic clouds or ICMEs \citep{larson1997using}. The ongoing PSP mission provides the opportunity to examine these issues and the regions of SEP origination closer to the Sun than has been previously possible.
So far, a number of techniques have been used to characterize relevant properties of magnetic field structures, including standard techniques of CME identification \citep{davies2020situ}, wavelet helicity measurements \citep{trenchi2013solar, zhao2020identification}, and Grad--Shafranov reconstruction \citep{hu2017grad, pecora2019single}. A recent study \citep{pecora2020identification} combined the use of PVI and a high-performance real-space computation of local magnetic helicity to identify flux tubes in the PSP data, finding results comparable with other methods \citep{zhao2020identification, chen2020small}. Generally speaking,
the present application of these novel statistical techniques extends to smaller-scale magnetic structures than those typically examined in, for example, CME or CIR observations \citep{larson1997using, sanderson1998wind}.
Here, we have applied the $H_m$--PVI technique to selected intervals in which energetic particles are observed by the IS$\odot$IS instruments on PSP. In particular, we examined the period 2020 May 24 to June 2, during which at least five SEP events were recorded \citep{cohen2020parker}. We focused on two sub-intervals in this period -- one in which energetic particles appear to be confined in the region {\it between} two helical flux ropes, and another in which the particles are confined {\it within} adjacent (and possibly interacting) flux ropes. The combined use of PVI and helicity measurements adds detail to this characterization, the basic properties of which were previously reported using ACE data and the PVI alone \citep{tessein2013association, tessein2016local, malandraki2019current, khabarova2021current}.
Both types of events, those described as exclusion events and those described as trapping events, confirm that helical flux structures can provide ``hard walls'' or transport barriers for energetic particles. In fact, numerical experiments have shown in various contexts that both field lines and energetic particles can be temporarily trapped within \citep{chuychai2005suppressed} or temporarily excluded from \citep{kittinaradorn2009solar} certain regions of space based on their points of origin and the intervening magnetic structures. In a complex environment in which there are many magnetic structures encountered by the particles or field line trajectories, complex patterns can emerge, including the formation of steep gradients. This has been offered as an explanation of dropouts \citep{ruffolo2003trapping, tooprakai2016simulations} as well as solar moss \citep{kittinaradorn2009solar}.
The use of the $H_m$--PVI method has so far produced results that support the interpretations given in the references above. The PSP observations, here made at $\sim 0.3$ au, are providing a characterization of energetic particles at closer distances to sources in the lower solar atmosphere than has been previously available. The main result of the present work is that within the cluster of SEP events from 2020 May 24 to 2020 May 29, we have been able to find indications of two major types of interactions between energetic particles and magnetic field structures -- namely {\it exclusion} events and {\it trapping} events. These two different, but complementary, empirical descriptions emerging from the events in Fig.~\ref{fig:20200524} and Fig.~\ref{fig:20200527} are not in contrast to one another in terms of basic physical causes; rather, they reflect the same underlying physical picture.
\comm{
The analysis of the two periods suggests that helical flux tubes act as difficult-to-penetrate transport boundaries, both for particles that are encapsulated within a structure and prevented from leaking outside, and for those that are found outside and cannot easily gain access to the interior.
}
Particles that are initially outside a strong helical flux tube may have difficulty breaking into the region of helical field lines and populating the core of the flux rope. As in the moss model \citep{kittinaradorn2009solar}, particles can impinge at flux rope boundaries and may get energized by discontinuities and reconnection events. An even more complex scenario observed by PSP was recently reported by \cite{giacalone2021energetic}. In this case, a flux tube-like structure is identified in the vicinity of, and likely passing through, an interplanetary shock and a local sea of energetic particles. The flux tube apparently provides a region of exclusion of particles, perhaps similar to what we have described here, but on a relatively smaller scale.
The picture proposed here is that EPs are guided by the observed helical flux ropes without influencing the magnetic field. Nonetheless, it is also worth mentioning that in some circumstances EPs can provide non-negligible pressure and actually affect the local topology of the magnetic field. Many classes of MHD equilibria, for example, Grad--Shafranov \citep{hu2017grad} and Chew--Goldberger--Low \citep{chew1956boltzmann}, take into account {\it total} particle pressure, which may include thermal and suprathermal contributions. In cases for which the EP pressure is non-negligible, one might anticipate that EPs confined within a flux tube would cause bulging of the magnetic structure; when EPs are squeezed between helical flux ropes, they might cause inward distortion of the boundaries of the excluding structures. Here the plasma beta based on thermal particle pressure is of order one, so to estimate the potential influence of EPs on the magnetic field it suffices to compare the EP pressure to the magnetic pressure. We carried out a simple estimate based on the formula given by \cite{lario2015energetic} and using the fluences for these events reported by \cite{cohen2020parker}. We estimate that the EP pressure associated with events 1, 2, and 3 in Figure \ref{fig:20200523} is at least five to seven orders of magnitude lower than the magnetic pressure. Thus, we can rule out dynamical changes of the magnetic field due to the EP events in these cases.
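A rough version of this comparison can be sketched as follows. The magnetic pressure uses the representative 25 nT field from the gyroradius estimate; the EP pressure value here is only a placeholder consistent with the quoted "five to seven orders of magnitude" (the actual EP pressures follow the formula of \cite{lario2015energetic} with the event fluences of \cite{cohen2020parker}):

```python
import math

MU0 = 4 * math.pi * 1e-7   # vacuum permeability, H/m

def magnetic_pressure_pa(b_nt):
    """Magnetic pressure P_B = B^2 / (2 mu0), with B given in nT."""
    b = b_nt * 1e-9
    return b**2 / (2 * MU0)

p_mag = magnetic_pressure_pa(25.0)   # ~2.5e-10 Pa for B ~ 25 nT

# Placeholder EP pressure, illustrative only (not a measured value)
p_ep = 1e-16

# Orders of magnitude by which EP pressure falls below magnetic pressure
orders_below = math.log10(p_mag / p_ep)
```

Any EP pressure this far below $B^2/2\mu_0$ cannot dynamically distort the flux ropes, which is the basis of the conclusion above.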
The overall picture presented here lends some detail to the well-known fact that charged particle transport is controlled by magnetic fields. The events discussed above have been previously analyzed to determine an effective path length \citep{chhiber2020magnetic}. Dispersion analysis suggests a path length of 0.625 au for transport, while the distance from a likely source to PSP along the mean magnetic field is approximately 0.3 au. It has been suggested \citep{chhiber2020clustering} that the extra distance is due to the meandering of the magnetic field lines that guide the particles. It is also clear that field lines are not independent of one another and tend to form structures, sometimes identified as flux tubes or helical flux ropes, often with discontinuities or current structures separating them at boundaries. In the analysis presented above, based on Fig.~\ref{fig:20200527}, we see evidence that several distinct SEP events have produced particle populations that transport to the point of detection at PSP along separate magnetic flux tubes that exhibit distinct properties. \comm{For a more complete discussion of possible sources of the particles, see \citet{cohen2020parker}.}
\comm{Moreover, as reported by \cite{cohen2020parker}, events 2 and 3 have He/H abundance ratios that differ by about two orders of magnitude. This finding may be consistent with the description given above. Indeed, like protons, He ions have orbits much smaller than the size of flux tubes and are likely to be confined in the same regions. Therefore, the impressive difference in abundances might be explained by confinement associated with the local flux tube topology. Leakage and mixing would have been expected for populations residing in adjacent and interacting (as the PVI suggests) structures, resulting in a smaller difference in particle composition.}
The different behaviour of higher- and lower-energy protons (panels (e) and (f)) may seem counterintuitive. In numerical experiments \citep{tooprakai2007temporary,tooprakai2016simulations},
it is clear that higher energy particles tend to escape trapping and exclusion barriers more readily. However, in the present observations, the lower-energy population appears to be less confined.
\comm{
The overall effect, in both simulations and observations, involves an interplay between the topological properties of the source region and, later, the transport properties. Indeed, the initial state leading to an SEP ``trapping'' event might require the source region to be encapsulated within helical magnetic field lines; on the other hand, no such constraint is evident for ``exclusion'' events. Later, as flux tubes and EPs propagate, both trapping and exclusion may influence transport by reducing diffusion into and out of flux tubes.
}
For the present case, it is reasonable to think of the source region of the very high energy particles (panel (f)) as being more localized in space. The generated population is then confined and advected following the coherent magnetic field lines that encapsulated the source region. On the other hand, the source region of the less energetic particles (panel (e)) may be broader in space. Such particles are then transported by a collection of flux tubes, filling entire regions that can include different helicities and even more dispersed/less coherent field lines within that region. This scenario might account for the apparently greater degree of confinement of the higher energy particles in this case.
Also, particles that are originally confined within a flux tube structure may transport more easily along the tube, and may also experience coherent energization processes due to inner reconnection events or due to flux rope topological evolution, such as contraction or expansion \citep{leroux2015energetic, leroux2015kinetic, leroux2018self, du2018plasma}. Whether energization, or perhaps re-energization \citep{khabarova2017energetic}, actually occurs locally in a given event is an intriguing question, but one that lies outside the current scope of this paper. The present results add direct support for the idea that magnetic field topology and structures can influence the transport properties of energetic particles in space plasmas. This itself presents a challenge on the theoretical front since the most widely used transport theories \citep{jokipii1966cosmic,shalchi2004nonlinear}, with rare exception \citep{chuychai2005suppressed}, assume a completely homogeneous plasma medium. We recall that the above interpretation may be visualized, although in cartoon-like form, in Fig.~\ref{fig:cartoon}.
A final point of discussion relates to the issue of how the channelling of EPs by magnetic structures might change in the sub-Alfv\'enic corona at altitudes lower than the Alfv\'en critical point. It is well known that the structures observed by coronagraphs in this region suggest a highly ordered magnetic field, and presumably low plasma beta (e.g. \citet{deforest2018highly}). Near the Alfv\'en critical region, these orderly flux tubes presumably begin to break up and isotropize \citep{deforest2016fading, ruffolo2020shear}. It is then reasonable to suppose that such more orderly flux tubes would act to more effectively confine EPs, though energy-dependent escape is still anticipated \citep{tooprakai2016simulations}. If particle events are detected by PSP below the Alfv\'en critical region, more strict confinement (or exclusion) of EPs by magnetic structures might be directly observed.
The flexibility of the $H_m$--PVI method provides the possibility to conduct this type of analysis on extended data sets at desired scales. In future works, the growing collection of identified EP-containing flux tubes is expected to provide more statistical insight into both the formation and evolution of helical structures and EP transport in space plasmas.
\section*{Acknowledgements}
This research is partially supported by the Parker Solar Probe mission and the IS$\odot$IS project (contract NNN06AA01C) and a subcontract to the University of Delaware from Princeton University (SUB0000165). Additional support is acknowledged from the NASA LWS program (NNX17AB79G) and the HSR program (80NSSC18K1210 \& 80NSSC18K1648). Parker Solar Probe was designed, built, and is now operated by the Johns Hopkins Applied Physics Laboratory as part of NASA's Living with a Star (LWS) program (contract NNN06AA01C). S. S. has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No. 776262 (AIDA, www.aida-space.eu). We thank the IS$\odot$IS team for its support as well as the FIELDS and SWEAP teams for cooperation.
\section*{Data availability}
The IS$\odot$IS data and visualization tools are available to the community at \url{https://spacephysics.princeton.edu/missions-instruments/isois}; data are also available via the NASA Space Physics Data Facility (\url{https://spdf.gsfc.nasa.gov/}).
\bibliographystyle{mnras}
\section{Introduction}
\label{sec:Intro}
Traditional, simplified models of energetic particle propagation typically assume homogeneous turbulence generated by fields that are wave-like perturbations or random noise. These scenarios lack what would be called intermittency in turbulence theory. Indeed, a variety of indications, both theoretical \citep{ambrosiano1988test, drake2006electron, dalena2012magnetic, pecora2018ion} and observational \citep{mazur2000interplanetary, tessein2013association, tessein2016local, khabarova2017energetic}, build a case that the interaction of particles with turbulence is structured and inhomogeneous. Depending on the topology and connectivity of the magnetic field, these interactions may involve temporary trapping \citep{ruffolo2003trapping}, as well as exclusion from certain regions of space \citep{kittinaradorn2009solar}. In some cases, such as solar energetic particle (SEP) ``dropouts'', the influence of the magnetic structure is dramatic \citep{mazur2000interplanetary}; in other cases it is more subtle, as where detected flux tube boundaries coincide with ``edges'' of
SEP events \citep{tessein2016local, khabarova2016small}.
With Parker Solar Probe (PSP) \citep{fox2016solar} now reaching distances closer to the Sun than any previous mission, novel opportunities are available for examination of the relationship between magnetic flux structures and energetic particle populations. In particular, energetic particle (EP) measurements from the Integrated Science Investigation of the Sun (IS$\odot$IS) \citep{mccomas2016integrated} along with FIELDS magnetic field measurements \citep{bale2016fields} and SWEAP plasma moments \citep{kasper2016solar}, are enabling characterization of observations of EPs and their transport properties closer to their sources than ever before possible.
We make use of a novel compact scheme \citep{pecora2020identification} for the detection of helical magnetic flux tubes. The approach makes use of an efficient real-space method for quantifying magnetic helicity \citep{matthaeus1982evaluation}, in conjunction with the partial variance of increments (PVI) \citep{greco2018partial} to detect boundaries within, or at the edges of, magnetic flux tubes.
We find evidence that helical flux ropes act as transport boundaries for solar energetic particles. This inference is based on the sudden appearance of SEP enhancements in the vicinity of helical flux tubes, accompanied, at their edges, by clusters of enhanced PVI events. This elaborates on previous findings near 1 au \citep{tessein2016local} and indicates that the channeling of SEPs occurs closer to the Sun than has been previously observed.
The paper is organized as follows: in Sec.~\ref{sec:hmpvi} we describe the techniques used to evaluate magnetic helicity and PVI to detect helical flux ropes and their boundaries. In Sec.~\ref{sec:psp} we present the results obtained from the analysis of PSP orbit 5. Opposite but complementary views of energetic particle transport are shown. Finally, in the last Section, we discuss the results.
\section{Local magnetic helicity and PVI methods}
\label{sec:hmpvi}
To correlate EP populations with helical structures and strong PVI events, we make use of the technique described in \citet{pecora2020identification}. The technique exploits the property that flux ropes are, in general, helical. Indeed, flux ropes can be described as structures with approximate cylindrical symmetry, and magnetic field lines that wind about a central axis. From a quantitative perspective, an appropriate measure is the magnetic helicity of the magnetic fluctuations,
defined as $H_m = \langle \bm{a} \cdot \bm{b} \rangle$ where $\bm{a}$ is the vector potential associated with magnetic field fluctuations $\bm{b} ={\bm \nabla}\times{\bm a}$. The averaging operation $\langle \dots \rangle$ is performed over an appropriate volume \citep{woltjer1958theorem, taylor1974relaxation, matthaeus1982measurement}.
Following the definition of \cite{matthaeus1982measurement}, magnetic helicity is related to the off-diagonal part of the magnetic field autocorrelation tensor and can be estimated from single-spacecraft one-dimensional measurements as described in the following. This definition relies only on the assumption of homogeneity, a requirement that is subsequently relaxed in implementations of local analyses. The method is essentially free of strong symmetry assumptions and does not depend on complex transformations.
The procedure we proposed recently in \citet{pecora2020identification} calculates a {\it local} value of $H_m$, based on estimates of the elements of the magnetic field correlation matrix $R_{ij} = \langle b_i(x)b_j(x+l) \rangle$, at a certain point $x$, with increment $l$, and averaging the local correlator of magnetic fluctuations over a region of width $w_0$ centred about $x$. This procedure, implemented entirely in real space, provides a localized evaluation, as do, for example, certain wavelet transforms. The relevant indices $i, j$ for the calculation of helicity are those that refer to directions perpendicular to the relative solar wind-spacecraft motion. For this interval at $\sim$ 0.3 au, this ``sweeping direction'' is almost purely radial, due to the super-Alfv\'enic flow of the solar wind. In the Radial-Tangential-Normal (RTN) coordinate system, the sweeping of the solar wind is in the R direction, corresponding to increment lags $l$ also in the direction R. Then, the integral to calculate helicity involves the T and N magnetic field components. For the more general case, see \citet{pecora2020identification}. For the present choice of coordinates, $H_m$ is evaluated explicitly as
\begin{equation}
H_m(x,\ell) = \int_0^\ell dl~C(x,l) h(l),
\label{eq:Hm}
\end{equation}
where the integral is performed in scale-space and the chosen value of
$\ell$ represents the largest scale that will contribute to $H_m$. In particular, we choose $\ell$ to be a multiple of the correlation length $\lambda_c$ of the specific interval. $h(l) = \frac{1}{2}\left[ 1 + \cos\left(\frac{2\pi l}{w_0}\right) \right]$ is the Hann window used to smooth estimates to zero at the edges of the investigated data interval, thus avoiding spurious effects of boundary fluctuations. $C(x, l)$ is the correlation function defined as
\begin{equation}
C(x, l) = \frac{1}{w_0} \int_{x-\frac{w_0}{2}}^{x+\frac{w_0}{2}} \left[ b_T(\xi)b_N(\xi+l) -b_N(\xi)b_T(\xi+l)\right] d\xi,
\label{hm1}
\end{equation}
again, with increments along R. The interval of local integration $w_0$ is arbitrary, but we typically choose it as an order-unity multiple of the scale $\ell$, such as $w_0=2\ell$. The above formulas convert directly to the time domain using the Taylor hypothesis. For a detailed derivation of the theory for helicity determination, see \citet{matthaeus1982evaluation} and \citet{matthaeus1982measurement}. In a subsequent paper, we will describe a statistical analysis that is able to separate helicity values attributable to stochastic magnetic field fluctuations from those that are due to coherent structures.
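As an illustrative sketch, the local helicity of Eqs.~(\ref{eq:Hm})--(\ref{hm1}) can be evaluated discretely as follows. The synthetic helical field, sampling, and parameter values are assumptions for demonstration only, not the analysis pipeline used for the PSP data:

```python
import numpy as np

def local_helicity(b_T, b_N, x, w0, ell, dx=1.0):
    """Discrete sketch of Eqs. (1)-(2): local magnetic helicity H_m(x, ell).

    b_T, b_N : perpendicular fluctuation components sampled along R
    x        : center index; w0 : averaging window width (samples)
    ell      : largest contributing lag (samples); dx : sample spacing
    """
    half = w0 // 2
    H_m = 0.0
    for l in range(ell + 1):
        # Correlator C(x, l) of Eq. (2), averaged over the window w0
        C = np.mean(b_T[x - half:x + half] * b_N[x - half + l:x + half + l]
                    - b_N[x - half:x + half] * b_T[x - half + l:x + half + l])
        h = 0.5 * (1.0 + np.cos(2.0 * np.pi * l / w0))  # Hann window h(l)
        H_m += C * h * dx                               # scale-space integral
    return H_m

# Synthetic right-handed helical fluctuation: b_T = cos(kx), b_N = sin(kx)
N, period = 4000, 100
xs = np.arange(N)
b_T = np.cos(2 * np.pi * xs / period)
b_N = np.sin(2 * np.pi * xs / period)
print(local_helicity(b_T, b_N, x=N // 2, w0=100, ell=50))  # positive H_m
```

For this right-handed rotation the correlator reduces to $C(l)\simeq\sin(kl)$, so the sketch returns a positive helicity; flipping the sign of $b_N$ negates it.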
It is useful to define a normalized version of the magnetic helicity as
\begin{equation}
\tilde{H}_m = \frac{H_m}{\langle \delta b^2 \rangle \lambda_c},
\label{eq:sig}
\end{equation}
where $\langle \delta b^2 \rangle$ is the fluctuation energy computed from a local average taken on an interval of about a correlation length $\lambda_c$.
\comm{
This measure is useful for two purposes: the first is practical, as this definition does not involve bulk velocity measurements, which are not always available, and Taylor's hypothesis need not be used; the second is physical, as it quantifies magnetic helicity relative to the local energy content, thus permitting smaller helical structures to be readily detected. It is worthwhile to show both $H_m$ and $\tilde{H}_m$ to have a more comprehensive understanding of the helicity-to-energy ratio of the emerging structures (note, for example, the flux tube from 05-24 14:00 to 17:00 in Fig.~\ref{fig:20200524}). Note that the normalized helicity is a generalization of the Fourier-space dimensionless helicity introduced by \citet{matthaeus1982evaluation}.
}
The correlation length $\lambda_c = V_{sw} \tau_c$ is related to the spacecraft correlation time $\tau_c$, and the solar wind speed $V_{sw}$ via the Taylor hypothesis. Computed in this way, the normalized magnetic helicity becomes more sensitive to local conditions. Note the analogy with the normalized magnetic helicity $\sigma_m$ obtained in the Fourier representation of $H_m$ \citep{matthaeus1982measurement}.
Sharp gradients, frequently current sheets, that often reside at the external and internal boundaries of flux ropes are identified using the PVI \citep{greco2008intermittent,greco2018partial}, defined as
\begin{equation}
\mbox{PVI}(s, \ell) = \frac{ | \Delta {\bm B}(s,\ell) | }{ \sqrt{ \langle | \Delta {\bm B}(s,\ell) |^2 \rangle } },
\label{pvieq}
\end{equation}
where $\Delta {\bm B}(s,\ell) = {\bm B}(s+\ell) - {\bm B}(s)$ are the magnetic field vector increments evaluated at scale $\ell$ and the averaging operation $\langle \dots \rangle $ is performed over a suitable interval \citep{servidio2011statistical}. The function can be computed spatially in simulations or in magnetic field time series, again by assuming the Taylor hypothesis. The technique has been extensively validated, both in numerical simulations and observations, to be able to identify magnetic field discontinuities, current sheets, and reconnection events \citep{greco2009statistical, osman2014magnetic, greco2018partial, pecora2019single}.
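A minimal discrete version of the PVI of Eq.~(\ref{pvieq}) might read as follows; the synthetic series and the averaging over the full interval are illustrative assumptions:

```python
import numpy as np

def pvi(B, lag):
    """Sketch of Eq. (4): PVI series for a vector field B at a given lag.

    B : (N, 3) array of magnetic field vectors; lag in samples.
    Here the normalization averages |dB|^2 over the whole interval.
    """
    dB = B[lag:] - B[:-lag]                  # vector increments at scale `lag`
    mag = np.linalg.norm(dB, axis=1)         # |Delta B(s, lag)|
    return mag / np.sqrt(np.mean(mag ** 2))  # normalize by the rms increment

# Gaussian fluctuations plus one sharp jump (a current-sheet-like feature)
rng = np.random.default_rng(1)
B = rng.normal(size=(5000, 3))
B[2500:, 0] += 30.0                          # abrupt discontinuity
series = pvi(B, lag=1)
print(series.argmax())                       # peaks at the discontinuity
```

By construction the mean of PVI$^2$ is unity, and isolated discontinuities stand out as large excursions above the Gaussian background.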
\section{Parker Solar Probe}
\label{sec:psp}
During PSP orbit 5, from 2020 May 24 to June 2, a sequence of several EP events was measured, at radial distances from 0.45 to 0.2 au, and has been extensively studied \citep{cohen2020parker, chhiber2020magnetic}. We will analyze selected properties of these events using several PSP data products. We employ magnetic field data from the MAG instrument on the FIELDS suite, resampled from the original four samples per cycle (4 Hz) to a 60-second resolution. PVI is calculated at this scale. Magnetic helicity is calculated with varying window sizes scaled to multiples of the regional correlation length $\lambda_c$. As the window size $\ell$ increases (decreases), the helicity measurement (Eq.~\ref{eq:Hm}) becomes sensitive to contributions from larger (smaller) scale flux tubes \citep{pecora2020identification}. Particle measurements are obtained from the IS$\odot$IS EPI--Lo and EPI--Hi instruments.
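The downsampling step can be sketched as a time-binned average; the column names and the synthetic series below are hypothetical and do not reflect the FIELDS/MAG product naming:

```python
import numpy as np
import pandas as pd

# One hour of synthetic 4 Hz magnetic field data (illustrative only)
t = pd.date_range("2020-05-24", periods=4 * 3600, freq="250ms")
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(len(t), 3)), index=t,
                  columns=["B_R", "B_T", "B_N"])  # hypothetical column names

df60 = df.resample("60s").mean()  # 60-second cadence used for the PVI
print(len(df60))                  # 60 one-minute samples per hour
```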
\comm{
We use count rates from EPI--Lo ChanP (80--200 keV protons) and ChanE (30--550 keV electrons). For EPI--Hi, we use the end A of both the High Energy Telescope (HET A; 0.4--1.2 MeV electrons; 7--60 MeV protons) and the Low Energy Telescope 1 (LET1 A; 1--30 MeV protons).
}
All public data are available from the IS$\odot$IS database and on Coordinated Data Analysis Web (CDAWeb).
\comm{
Priority buffer (PBUF) rate measurements are considered uncalibrated (engineering) data. These are integrated counts measured at different stopping depths within the telescope and cannot be calibrated to fluxes in well-defined energy ranges. For our use, we summed the ranges R1 through R6, which is similar to integrating over energies. In the following, we do not use these measurements to obtain quantitative estimates; rather, they serve to show a better time-resolved envelope of the hourly-averaged fluxes (panels (j) and (k) of Figs.~\ref{fig:20200523}-\ref{fig:20200527}).
}
\begin{figure*}
\centering
\includegraphics[width=0.9\textwidth]{Fig1.png}
\caption{Period from 2020 May 24 to June 2, during which $\lambda_c \sim 6.5 \times 10^6$ km (corresponding to $\tau_c \sim 6$ hours). The stacked panels show (a) magnetic field measured by FIELDS resampled at 60-second cadence, (b) the PVI signal computed with a time lag of 60 seconds, (c) the magnetic helicity of fluctuations at different scales, and (d) its normalized version, (e) proton count rate at 60-second resolution measured by EPI--Hi LET1, (f) proton count rate at 60-second resolution measured by EPI--Hi HET, (g) electron count rate in the energy range 0.4--1.2 MeV at 1-hour resolution measured by EPI--Hi HET A, (h) proton count rate in the energy range 80--200 keV at 300-second resolution measured by EPI--Lo, (i) electron count rate in the energy range 30--550 keV at 300-second resolution measured by EPI--Lo, (j) proton flux at 1-hour resolution measured by EPI--Hi LET1 A, and (k) proton flux at 1-hour resolution measured by EPI--Hi HET A.}
\label{fig:20200523}
\end{figure*}
An overview of the five events occurring during the selected period is shown in Fig.~\ref{fig:20200523}. Even at this scale of 11 days, the sets of measurements show interesting behaviour and a correlation between energetic particles (both protons and electrons) and magnetic field properties. In particular, it is possible to notice that the large helical structure appearing around May 28 encloses both the energetic electrons (panels (g) and (i)) and the higher-energy portion of the energetic proton population (panels (f) and (k)). On the other hand, the LET channel (panel (e)) exhibits fewer structures, suggesting less confinement than its higher-energy counterpart HET in panel (f). We will focus on the events labelled as 1, 2, and 3 in the two intervals marked with grey shadings in Fig.~\ref{fig:20200523}, as they give opposite, but complementary, views of the ``exclusion'' and ``trapping'' phenomena (see below). We note, in passing, that another event just after the second shaded region, beginning around May 29, 08:00, appears to be nonhelical and possibly nondispersive. That interval will not be discussed in the present paper.
\begin{figure*}
\centering
\includegraphics[width=0.9\textwidth]{Fig2.png}
\caption{2020 May 24 from 00:30 to 23:30 UTC, $\lambda_c \sim 0.8 \times 10^6$ km ($\tau_c \sim 38$ minutes). The panels are arranged in the same fashion as in Fig.~\ref{fig:20200523}. The energetic proton population, appearing in panel (e), is confined between -- and excluded from -- the leading and trailing negative-helicity peaks. Vertical dashed lines highlight approximate flux tube boundaries, coinciding with strong PVI events near the edges of helical regions.}
\label{fig:20200524}
\end{figure*}
Figure~\ref{fig:20200524} shows the same quantities as Fig.~\ref{fig:20200523}, but the analysis is performed over the restricted time interval from May 24 00:30 to 23:30 UTC (first shaded region of Fig.~\ref{fig:20200523}), during which $\lambda_c \sim 0.8 \times 10^6$ km (corresponding to $\tau_c \sim 38$ minutes). The restriction to a shorter timescale enhances the smaller-scale helical structures that were obscured before. In this case, the energetic proton population appearing from 08:00 to 15:00 is confined between -- and excluded from -- two helical structures (each indicated with two vertical dashed lines). During this event, the helical field lines appear to act as excluding boundaries for the particles (which may be streaming along ambient solar wind magnetic field lines not organized in a helical fashion), suppressing their transport across the structures. This kind of exclusionary behaviour is reminiscent of the phenomenon of SEP dropouts that have been associated with topological structures, and that are frequently observed at 1 au and in simulations \citep{mazur2000interplanetary, ruffolo2003trapping, tooprakai2016simulations}.
\comm{
The exclusion of the energetic protons from spatial regions associated with times before 08:00 may be associated with transport effects mediated by the helical tubes that forbid SEPs to access those particular regions of space after their onset.
}
This event appears to be dispersive, as faster particles arrive first (panel (j)); we therefore suggest that the exclusion effect influenced which flux tubes guided the particle transport from the source region to the point of observation.
The spreading of particles following onset is typically associated with diffusive transport (e.g., \citet{droege2016multi}). In this case, the helical structure near 14:30 may be gradually inhibiting diffusion into the relatively quiet region (in terms of PVI) that resides beyond 16:00. Phenomena such as this have been observed in simulations and have been interpreted as temporary topological trapping \citep{tooprakai2016simulations}, possibly accompanied by suppressed diffusive transport \citep{chuychai2005suppressed}.
\begin{figure*}
\centering
\includegraphics[width=0.9\textwidth]{Fig3.png}
\caption{Period from 2020 May 27 04:30 to 29 08:00, during which $\lambda_c \sim 4.0 \times 10^6$ km ($\tau_c \sim 4$ hours). The panels are arranged in the same fashion as in Fig.~\ref{fig:20200523}.
Both protons and electrons show sudden onsets (panels (e--g), (i--k)) corresponding to the encounter of PSP with the first positive-helicity structure after a strong PVI event. The nature of the magnetic field changes in the period between the two pulses of high-energy particles in panel (f) -- the sign of helicity changes and there is a burst of PVI activity. This may signal complexity in the transport processes for these apparently distinct events (see \protect\cite{cohen2020parker}),
especially given that the LET1 counts show little change across this transition.
Vertical dashed lines highlight approximate flux tube boundaries coinciding with strong PVI events near the edges of helical regions.
}
\label{fig:20200527}
\end{figure*}
The picture given in Fig.~\ref{fig:20200527}, over an approximately two-day period beginning on 2020 May 27, is complementary to the one of Fig.~\ref{fig:20200524}. Energetic particle events 2 and 3, shown here and indicated in panel (j) of Fig.~\ref{fig:20200523}, can be better distinguished in the HET signal of EPI--Hi, in which two separated populations clearly appear (panels (f) and (k)). In LET1, the signal of the first event does not fully decline before the onset of the second one (Fig.~\ref{fig:20200527}, panels (e) and (j)). This local analysis shows that there are two adjacent flux tubes of opposite-sign helicity, possibly separated by a strong current sheet identified by the large-PVI region. One can notice that the dispersive onset of the first EP event, occurring over a period of about an hour, coincides with the appearance of the flux tube, suggesting that the spacecraft has suddenly entered a different environment, passing from the ambient solar wind to a more confined plasma. In this case, contrary to the previous one in Fig.~\ref{fig:20200524}, each energetic population is confined within a different helical structure.
The lower-energy protons (Fig. \ref{fig:20200527} panel (h)) suggest a more complex history that may intertwine source and transport effects. The activity in panel (h) is not directly associated with the positive-helicity flux rope between 05-27 18:00 and 05-28 08:00 nor with the negative-helicity flux rope between 05-28 08:00 and 05-28 16:00 (the latter being better distinguishable at the scale of one correlation length). Rather, one sees an enhancement of the EPI--Lo 80--200 keV protons in association with the trailing edge of the negative-helicity flux tube. One cannot rule out that the rise in EPI--Lo activity around 05-28 01:00 (panel (h)) may share source region proximity with the EPI--Hi onset near 05-27 18:00 (panels (e) and (f)). Similarly, the larger EPI--Lo increase near 05-28 16:00 (panel (h)) may have an association with the EPI--Hi increase near 05-28 12:00 (panel (f)). If so, the delay in timing would be due to slower propagation of the EPI--Lo particles. However, it is also clear that the flux tubes guiding these lower energy particles to the PSP position have a distinct character, suggesting differences also in transport of the higher and lower energy particles.
As an aside, we note that these energetic particles have gyroradii much smaller than the dimensions of the detected flux ropes. The largest gyroradius -- corresponding to a 30 MeV proton in a 25 nT magnetic field -- is about $3 \times 10^4$ km. The average crossing duration of the two flux tubes is about 10 hours, which corresponds to an extension of $\sim 10^7$ km (for an average solar wind speed of 275 km/s on these days).
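The quoted scales can be checked with a quick relativistic estimate; the rounded constants below, and the 10-hour duration and 275 km/s wind speed, are the values given in the text:

```python
import numpy as np

# Relativistic gyroradius of a 30 MeV proton in a 25 nT field
m_p = 938.272            # proton rest energy [MeV]
T = 30.0                 # kinetic energy [MeV]
e = 1.602e-19            # elementary charge [C]
c = 2.998e8              # speed of light [m/s]
B = 25e-9                # magnetic field [T]

pc = np.sqrt((T + m_p) ** 2 - m_p ** 2)  # momentum times c [MeV]
p = pc * 1e6 * e / c                     # momentum [kg m/s]
r_g = p / (e * B)                        # gyroradius [m]

# Flux tube extension from a ~10-hour crossing at 275 km/s
L_tube = 275e3 * 10 * 3600               # [m]

print(r_g / 1e3, L_tube / 1e3)           # ~3e4 km vs ~1e7 km
```

The flux tube extension exceeds the largest gyroradius by more than two orders of magnitude, consistent with the particles being magnetically channelled by the structures.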
\comm{
From this comparison of scales, a further consideration arises. Generally, charged particles are expected to follow magnetic field lines; both ions and electrons are subject to electromagnetic field properties. It is clear that the relationship between particle gyroradius and flux tube size can have a significant impact, as can the helicity and energy density of fluctuations. Therefore, each species may behave differently depending on charge sign and energy (rigidity). As an example, high energy cosmic rays follow the topological properties of the heliosphere \citep{demarco2007numerical}, while lower energy populations are modulated by more local properties \citep{tooprakai2016simulations}. Moreover, each of these particle properties may influence its behaviour with regard to both transport and energization. The comparison of electron and ion behaviour is a useful step in attempting to unravel and identify these physical effects.
}
\comm{
In the trapping case of Fig.~\ref{fig:20200527}, even the most energetic ions ($\sim$~30 MeV) seem to be confined within the boundaries of flux tubes. Therefore, we can infer that the same behaviour is expected for lower energy and less massive particles (given that they share source properties as discussed above). The electron signals (panels (g) and (i)) support this view also for opposite-sign particles. Similar conclusions may be drawn for the ``exclusion'' phenomenon of Fig.~\ref{fig:20200524}, even though no high energy ($>$~5 MeV) proton or electron measurements are available. In general, we can suggest that both ``exclusion'' and ``trapping'' phenomena are energy-dependent, and a comparison of the typical scales of both particles and structures needs to be considered when describing this type of interaction.
}
\comm{
A qualitative picture of the exclusion and trapping paradigms is suggested in Fig.~\ref{fig:cartoon}. Several approximations have been made to represent the two scenarios: flux ropes have their axes directed along the local mean magnetic field, and the approximate path of the PSP trajectory, assumed radial, is shown relative to the flux tube magnetic axes in the two cases.
}
\begin{figure}
\centering
\includegraphics[width=0.36\textwidth]{Fig4.png}
\caption{Cartoon illustration of the (a) exclusion and the (b) trapping scenarios. We imagine the flux tubes to be elongated cylinders with magnetic axes aligned with the measured magnetic field. The PSP trajectory, assumed radial, is shown as a dashed line that indicates the sampling of the data. Energetic particles (EPs) are symbolized by clouds. In case (a), EPs are transported along field lines outside the two flux tubes. In case (b), the EPs are within the cylindrical tubes, and an interaction region, possibly containing current sheets, is suggested, one that could be responsible for bursting PVI events.}
\label{fig:cartoon}
\end{figure}
For a more quantitative visualization of the (anti-)correlation between helical patches and both PVI and EPs in the observed events, we plot PVI versus the absolute value of the normalized helicity (Eq.~\ref{eq:sig}, calculated at the scale of two correlation lengths) as shown in Fig.~\ref{fig:scatter}. The Figure shows data for the May 24 (Fig.~\ref{fig:20200524}) and the May 27-29 intervals (Fig.~\ref{fig:20200527}). It should be apparent from the earlier discussion that the former interval shows particles excluded from a flux tube. For the latter interval, particles are (at least temporarily) confined or trapped in a flux tube. As discussed before, and as has been confirmed in simulations and observations, high-PVI regions frequently demarcate boundaries of flux ropes \citep{greco2009statistical, pecora2019single}, while relatively large values of $H_m$ characterize the flux tube interiors \citep{pecora2020identification}. This view is consistent with the panels of Fig.~\ref{fig:scatter}. This alternative and more compact way of representing correlations of PVI and EPs with helical structures will be of even more practical use when longer surveys of several events are analyzed.
To include EP information, we colour-coded the symbols based on proton count rates, to indicate a general level of energetic particle activity. Specifically, we use LET1 PBUF counts for event 1 of May 24, and HET PBUF counts for events 2 and 3 starting on May 27. The separation in low, mid, and high count rates offers a more compact picture of the scenarios envisioned before. In fact, the distinction between {\it exclusion} and {\it trapping} events is very clear. In the event reported in Fig.~\ref{fig:20200524}, the population of EPs is confined between two consecutive helical structures and excluded from penetrating them; thus, the large count rates in panel (a) of Fig.~\ref{fig:scatter} are confined to lower $\tilde{H}_m$ values. On the contrary, Fig.~\ref{fig:20200527} shows two EP populations confined within two flux ropes and, indeed, in panel (b) of Fig.~\ref{fig:scatter} the large count rates are correlated with large $\tilde{H}_m$ values. Note that the data from the entire respective intervals in Figs.~\ref{fig:20200524} and \ref{fig:20200527} are included in the analysis in Fig.~\ref{fig:scatter}.
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{Fig5.png}
\caption{Scatter plot of PVI \textit{vs} $|\tilde{H}_m|$. Larger PVI values are mostly found at smaller $\tilde{H}_m$ (flux tube boundaries). The colour code indicates proton count rates $(\nu)$ of (a) LET1 PBUF (panel (e) of Fig.~\ref{fig:20200524}) and (b) HET PBUF (panel (f) of Fig.~\ref{fig:20200527}.) (a) is illustrative of an exclusion event, while (b) illustrates a trapping event (see text).}
\label{fig:scatter}
\end{figure}
\section{Discussion of the Results}
The association of magnetic field properties with energetic particle propagation is a subject of ongoing interest, especially with regard to understanding detailed observations of solar energetic particle events. In particular, there has been renewed interest \citep{ruffolo2004separation, chuychai2005suppressed, khabarova2014particle, tessein2016local} in understanding how magnetic flux tube structure and topology influence transport as well as acceleration processes \citep{giacalone2000small, drake2006electron}.
The problem of SEP dropouts has been tackled and addressed in several ways, invoking magnetic field line connectivity, footpoint motion, and local turbulence properties \citep{mazur1998solar, mazur2000interplanetary, ruffolo2003trapping, chuychai2005suppressed, tooprakai2007temporary, kittinaradorn2009solar, tooprakai2016simulations}. There is also substantial evidence in the literature (e.g., \citet{sanderson1998wind}) that magnetic features associated with the heliospheric current sheet, compression regions, high-speed streams, etc., have a substantial influence on the intensity and occurrence of SEP events. In addition to associations with these large-scale structures, energetic particles (especially electrons) have provided probes of magneto-topology in large magnetic clouds or ICMEs \citep{larson1997using}. The ongoing PSP mission provides the opportunity to examine these issues and the regions of SEP origination closer to the Sun than has been previously possible.
So far, a number of techniques have been used to characterize relevant properties of magnetic field structures, including standard techniques of CME identification \citep{davies2020situ}, wavelet helicity measurements \citep{trenchi2013solar, zhao2020identification}, Grad--Shafranov reconstruction \citep{hu2017grad, pecora2019single}. A recent study \citep{pecora2020identification} combined the use of PVI and a high-performance real-space computation of local magnetic helicity to identify flux tubes in the PSP data, finding results comparable with other methods \citep{zhao2020identification, chen2020small}. Generally speaking,
the present application of these novel statistical techniques extends to smaller-scale magnetic structures than those typically examined in, for example, CME or CIR observations \citep{larson1997using, sanderson1998wind}.
Here, we have applied the $H_m$--PVI technique to selected intervals in which energetic particles are observed by the IS$\odot$IS instruments on PSP, in particular the period 2020 May 24 to June 2, during which at least five SEP events were recorded \citep{cohen2020parker}. We focused on two sub-intervals in this period -- one in which energetic particles appear to be confined in the region {\it between} two helical flux ropes, and another in which the particles are confined {\it within} adjacent (and possibly interacting) flux ropes. The combined use of PVI and helicity measurements adds detail to this characterization, the basic properties of which were previously reported using ACE data and the PVI alone \citep{tessein2013association, tessein2016local, malandraki2019current, khabarova2021current}.
Both types of events, those described as exclusion events and those described as trapping events, confirm that helical flux structures can provide ``hard walls'' or transport barriers for energetic particles. In fact, numerical experiments have shown in various contexts that both field lines and energetic particles can be temporarily trapped within \citep{chuychai2005suppressed} or temporarily excluded from \citep{kittinaradorn2009solar} certain regions of space based on their points of origin and the intervening magnetic structures. In a complex environment in which there are many magnetic structures encountered by the particles or field line trajectories, complex patterns can emerge, including the formation of steep gradients. This has been offered as an explanation of dropouts \citep{ruffolo2003trapping, tooprakai2016simulations} as well as solar moss \citep{kittinaradorn2009solar}.
The use of the $H_m$--PVI method has so far produced results that support the interpretations given in the references cited above. The PSP observations, here made at $\sim 0.3$ au, are providing a characterization of energetic particles at closer distances to sources in the lower solar atmosphere than has been previously available. The main result of the present work is that within the cluster of SEP events from 2020 May 24 to 2020 May 29, we have been able to find indications of two major types of interactions between energetic particles and magnetic field structures -- namely {\it exclusion} events and {\it trapping} events. These two different, but complementary, empirical descriptions emerging from the events in Fig.~\ref{fig:20200524} and Fig.~\ref{fig:20200527} are not in contrast to one another in terms of basic physical causes; rather, they support the same overall picture.
\comm{
The analysis of the two periods suggests that helical flux tubes act as difficult-to-penetrate transport boundaries, both for particles that are encapsulated within the structure and prevented from leaking outside, and for those that are found outside and cannot easily gain access to the interior.
}
Particles that are initially outside a strong helical flux tube may have difficulty breaking into the region of helical field lines and populate the core of the flux rope. As in the moss model \citep{kittinaradorn2009solar}, particles can impinge at flux rope boundaries and may get energized by discontinuities and reconnection events. An even more complex scenario observed by PSP is reported recently by \cite{giacalone2021energetic}. In this case, a flux tube-like structure is identified in the vicinity of, and likely passing through, an interplanetary shock and a local sea of energetic particles. The flux tube apparently provides a region of exclusion of particles, perhaps similar to what we have described here, but on a relatively smaller scale.
The picture proposed here is that EPs are guided by the observed helical flux ropes without influencing the magnetic field. Nonetheless, it is also worth mentioning that in some circumstances EPs can provide non-negligible pressure and actually affect the local topology of the magnetic field. Many classes of MHD equilibria, for example, Grad--Shafranov \citep{hu2017grad} and Chew--Goldberger--Low \citep{chew1956boltzmann}, take into account {\it total} particle pressure, which may include thermal and suprathermal contributions. In cases for which the EP pressure is non-negligible, one might anticipate that EPs confined within a flux tube would cause bulging of the magnetic structure; when EPs are squeezed between helical flux ropes, they might cause inward distortion of the boundaries of the excluding structures. Here the plasma beta based on thermal
particle
pressure is of order one, so to estimate the potential for an influence of EPs on the magnetic field it suffices to compare EP pressure to magnetic pressure. We carried out a simple estimate based on the formula given by \cite{lario2015energetic} and using the fluences for these events reported by \cite{cohen2020parker}. We estimate that the EP pressure, associated with events 1, 2, and 3 in Figure \ref{fig:20200523}, is at least five to seven orders of magnitude lower than the magnetic pressure. Thus, we can rule out dynamical changes of the magnetic field due to the EP events in these cases.
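For reference, the magnetic side of this comparison follows directly from $P_B = B^2/2\mu_0$. The sketch below uses a representative 25 nT field; the EP partial pressure itself requires the measured event fluences of \cite{cohen2020parker} and is not reproduced here:

```python
import numpy as np

# Magnetic pressure P_B = B^2 / (2 mu_0) for a representative 25 nT field.
# The EP pressure estimate (via the formula of lario2015energetic and the
# fluences of cohen2020parker) is not reproduced in this sketch.
mu_0 = 4e-7 * np.pi       # vacuum permeability [T m / A]
B = 25e-9                 # magnetic field [T]
P_B = B ** 2 / (2 * mu_0) # magnetic pressure [Pa]
print(P_B)                # ~2.5e-10 Pa
```

An EP pressure five to seven orders of magnitude below this value, as estimated in the text, is therefore in the $10^{-15}$--$10^{-17}$ Pa range.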
The overall picture presented here lends some detail to the well-known fact that charged particle transport is controlled by magnetic fields. The events discussed above have been previously analyzed to determine an effective path length \citep{chhiber2020magnetic}. Dispersion analysis suggests a path length of 0.625 au for transport, while the distance from a likely source to PSP along the mean magnetic field is approximately 0.3 au. It has been suggested \citep{chhiber2020clustering} that the extra distance is due to the meandering of the magnetic field lines that guide the particles. It is also clear that field lines are not independent of one another and tend to form structures, sometimes identified as flux tubes or helical flux ropes, often with discontinuities or current structures separating them at boundaries. In the analysis presented above, based on Fig.~\ref{fig:20200527}, we see evidence that several distinct SEP events have produced particle populations that transport to the point of detection at PSP along separate magnetic flux tubes that exhibit distinct properties. \comm{For a more complete discussion of possible sources of the particles, see \citet{cohen2020parker}.}
\comm{Moreover, as reported by \cite{cohen2020parker}, events 2 and 3 have He/H abundance ratios that differ by about two orders of magnitude. This finding may be consistent with the description given above. Indeed, like protons, He ions have orbits much smaller than the size of flux tubes and are likely to be confined in the same regions. Therefore, the impressive difference in abundances might be explained by confinement associated with the local flux tube topology. Leakage and mixing would have been expected for populations residing in adjacent and interacting (as the PVI suggests) structures, resulting in a smaller difference in particle composition.}
The different behaviour of higher- and lower-energy protons (panels (e) and (f)) may seem counterintuitive. In numerical experiments \citep{tooprakai2007temporary,tooprakai2016simulations},
it is clear that higher energy particles tend to escape trapping and exclusion barriers more readily. However, in the present observations, the lower-energy population appears to be less confined.
\comm{
The overall effect, in both simulations and observations, involves an interplay between the topological properties of the source region and, later, on the transport properties. Indeed, the initial state leading to an SEP ``trapping'' event might require the source region to be encapsulated within helical magnetic field lines; on the other hand, no constraint is evident for ``exclusion'' events. Later, while flux tubes and EPs propagate, both trapping and exclusion may influence transport by reducing diffusion into and out of flux tubes.
}
For the present case, it is reasonable to think of the source region of very high energy particles (panel (f)) as being more localized in space. The generated population is then confined and advected following the coherent magnetic field lines that were encapsulating the source region. On the other hand, the source region of less energetic particles (panel (e)) may be broader in space. Such particles are then transported by a collection of flux tubes, filling entire regions that can include different helicities and even more dispersed/less coherent field lines. This scenario might account for the apparently greater degree of confinement of the higher energy particles in this case.
Also, particles that are originally confined within a flux tube structure may be transported more easily along the tube, and may also experience coherent energization processes due to inner reconnection events or due to flux rope topological evolution, such as contraction or expansion \citep{leroux2015energetic, leroux2015kinetic, leroux2018self, du2018plasma}. Whether energization, or perhaps re-energization \citep{khabarova2017energetic}, actually occurs locally in a given event is an intriguing question, but one that lies outside the current scope of this paper. The present results add direct support for the idea that magnetic field topology and structures can influence the transport properties of energetic particles in space plasmas. This itself presents a challenge on the theoretical front since the most widely used transport theories \citep{jokipii1966cosmic,shalchi2004nonlinear}, with rare exception \citep{chuychai2005suppressed}, assume a completely homogeneous plasma medium. We recall that the above interpretation may be visualized, although in cartoon-like form, in Fig.~\ref{fig:cartoon}.
A final point of discussion relates to the issue of how the channelling of EPs by magnetic structures might change in the sub-Alfv\'enic corona at altitudes lower than the Alfv\'en critical point. It is well known that the structures observed by coronagraphs in this region suggest a highly ordered magnetic field, and presumably low plasma beta (e.g. \citet{deforest2018highly}). Near the Alfv\'en critical region, these orderly flux tubes presumably begin to break up and isotropize \citep{deforest2016fading, ruffolo2020shear}. It is then reasonable to suppose that such orderly flux tubes would confine EPs more effectively, though energy-dependent escape is still anticipated \citep{tooprakai2016simulations}. If particle events are detected by PSP below the Alfv\'en critical region, more strict confinement (or exclusion) of EPs by magnetic structures might be directly observed.
The flexibility of the $H_m$--PVI method provides the possibility to conduct this type of analysis on extended data sets at desired scales. In future works, the growing collection of identified EP-containing flux tubes is expected to provide more statistical insight on both the formation and evolution of helical structures and EP transport in space plasmas.
\section*{Acknowledgements}
This research is partially supported by the Parker Solar Probe mission and the IS$\odot$IS project (contract NNN06AA01C) and a subcontract to the University of Delaware from Princeton University (SUB0000165). Additional support is acknowledged from the NASA LWS program (NNX17AB79G) and the HSR program (80NSSC18K1210 \& 80NSSC18K1648). Parker Solar Probe was designed, built, and is now operated by the Johns Hopkins Applied Physics Laboratory as part of NASA’s Living with a Star (LWS) program (contract NNN06AA01C). S. S. has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No. 776262 (AIDA, www.aida-space.eu). We thank the IS$\odot$IS team for its support as well as the FIELDS and SWEAP teams for cooperation.
\section*{Data availability}
The IS$\odot$IS data and visualization tools are available to the community at \url{https://spacephysics.princeton.edu/missions-instruments/isois}; data are also available via the NASA Space Physics Data Facility (\url{https://spdf.gsfc.nasa.gov/}).
\bibliographystyle{mnras}
The goal of the present paper is to understand the impact of large IR loop corrections on the vacuum states in field theory on the Poincar\'e patch (PP) of de Sitter (dS) space. In \cite{Krotov:2010ma} the one--loop correction to the scalar field Wightman function was calculated in the PP over the Bunch--Davies (BD) state \cite{Bunch:1978yq}. The calculation was done in the non--stationary (in--in or Schwinger--Keldysh) diagrammatic technique. There are large IR contributions in the one--loop correction even for very massive fields.
They reveal themselves through the particle creation --- via the vacuum averages $\langle a^+ \, a\rangle$ and $\langle a \, a\rangle$, where $a$ and $a^+$ are annihilation and creation operators.
E.g. for the real massive scalar field theory with the $\lambda \phi^3$ self--interaction one obtains that $\langle a^+_p \, a_p\rangle \propto \lambda^2 \, \log(p\eta)$ and $\langle a_p \, a_{-p}\rangle \propto \lambda^2 \, \log(p\eta)$ as the conformal time approaches the future infinity, $\eta\to 0$. Here $p$ is the modulus of the spatial co--moving momentum.
Similar IR contributions do appear in other field theories in PP independently of the spin of the fields and self--interaction potentials, as long as they do not respect conformal invariance \cite{Woodard}, \cite{Dolgov:1994cq}, \cite{Antoniadis:2006wq}, \cite{Xue:2012wi}, \cite{Giddings:2010ui}.
In \cite{Akhmedov:2011pj} the observations of \cite{Krotov:2010ma} were generalized to the other dS invariant states (so called $\alpha$--vacua \cite{Mottola:1984ar},\cite{Allen:1985ux}) and to the states containing finite densities of particles. Furthermore, in \cite{Akhmedov:2011pj} a kinetic equation was derived. Its solution sums up the leading IR contributions in all loops. One of the goals of the present paper is to show the latter statement explicitly, i.e. to derive that kinetic equation directly from the Dyson--Schwinger (DS) equation of the non--stationary diagrammatic technique. In the situation when, due to the large IR effects, the in--out S--matrix approach is not appropriate, the description of the physics via the quantum kinetic (Dyson--Schwinger) equation is more suitable, because the latter equation describes the time--evolution of the state occupation numbers.
The situation with the kinetic theory in dS space demands some clarifications.
The tree--level Wightman function for any $\alpha$--vacuum respects the whole dS isometry group \cite{Mottola:1984ar},\cite{Allen:1985ux} even if one restricts the field theory to the PP, which covers only half of dS. But there are generators of the dS isometry group which deform the PP. As a result,
this symmetry is naively broken in the vertices of the loop integrals to a subgroup, respecting only the PP.
One can prove\footnote{We would like to thank A.Polyakov for telling us the idea of this proof. Some elements of the proof can be found in \cite{Polyakov:2007mm},\cite{Polyakov:2009nq},\cite{Polyakovtalk}. See as well the discussion in the Appendix.}, however, that for the BD state the variation of the loop contributions, under those isometry transformations which deform PP, does vanish. Hence, exact Wightman function over the BD state depends only on the dS invariant distance between its two arguments. But for the other $\alpha$--vacua the isometry is broken in loop integrals down to the subgroup in question.
In curved space--times (or in flat space curvilinear coordinates) various coordinate systems frequently cover only parts of the space--time. Hence, to do calculations in such coordinates one has to specify suitable conditions at the boundaries of the corresponding patches. Obviously large IR effects are sensitive to the boundary conditions. Hence, if one does not perform a careful study of the matching between the boundary conditions, one obtains different physical results by doing calculations in different coordinate systems.
In particular it happens that loop contributions in the {\it global} dS space are not just large, but are explicitly IR divergent even for the massive fields \cite{Akhmedov:2008pu}, \cite{Krotov:2010ma}. In this respect dS space is similar to QED in strong background electric fields \cite{Akhmedov:2009vh}. The presence of such divergences shows that the moment when the interactions or the background field are switched on can not be taken to the past infinity \cite{Krotov:2010ma}. This puts an obstruction to the dS isometry invariance of the correlation functions in global dS and favors the conclusion that the cosmological constant should be secularly screened by large IR effects. At least with the appropriate choice of the boundary conditions, i.e. with those boundary conditions which do {\it not} put dS space on the ``life support'' \cite{Krotov:2010ma}.
The fact that the dS isometry is respected in the loops over the BD state is a good sign that the cosmological constant can not be secularly screened in the PP, if the initial conditions are just mild excitations over the BD state. But the presence of the large IR effects means that the BD state itself gets modified.
Indeed, the one--loop correction $G^1(Z)$ to the BD Wightman propagator $G^0(Z)$ is $G^1(Z) \propto \lambda^2\, \log(Z) \, G^0(Z)$, when the hyperbolic distance is taken to infinity, $Z\to \infty$ \cite{Krotov:2010ma}. This is just the Fourier transform of $\lambda^2 \log(p\eta)$ corrections. Thus, the factor $\lambda^2 \, \log(Z)$ can be big and the loop corrections are not suppressed even if $\lambda^2$ is small.
The question is: what is the dressed state? We show below that this question is related to the following one: what is the fate of small density perturbations over the BD vacuum in the future infinity?
To address these questions we derive and solve the kinetic equation which describes the dynamics of such density perturbations and, as a by--product, sums up the leading IR contributions.
From the solution we see that if one sets the BD state as the initial one at past infinity, where it is the ground state of the time--dependent free Hamiltonian, this state gets modified even if one switches off the coupling constant at future infinity. It will turn out that the result of the summation of all loops contains modifications of the BD propagator, which do not vanish as $\lambda \to 0$ in the future infinity, but which can not be seen in the free, $\lambda=0$, theory. This makes dS space quite different from Minkowski or Anti--de--Sitter spaces \cite{Akhmedov:2012hk}, where adiabatic variations of the self--interactions do not change the true vacuum state.
To avoid confusions at this point let us clarify our statement. For fixed co--moving momentum $p$, past infinity in the expanding PP, $\eta \to \infty$, corresponds to the UV limit of the physical momentum, $p\eta$. At the same time future infinity, $\eta\to 0$, corresponds to the IR limit of the physical momentum. So if one starts at past infinity with the BD state, the correlation functions have the proper Hadamard UV behavior. What we observe, however, is that for fixed $p$, as the time goes by, $\eta \to 0$, the IR behavior of the correlation functions is changed (without changing their UV properties) and is described by a different state --- a flat density of out--Jost harmonics on top of the corresponding vacuum.
The phenomenon we observe is a more complicated version of the following one. Consider a simple linear oscillator. In the perfectly linear case the oscillator will remain in an excited state forever, if it was originally in such a state. However, if one switches on an interaction of the oscillator with an external field and then switches it off, the oscillator will relax to the ground state. That will happen independently of the type of the interaction or of the type of the external field. The crucial difference of the dS system from the simple oscillator one is that in the case of the dS system the oscillator frequency changes in time. As a result, even if one had started at past infinity with the ground state of the future infinity, the system would deviate from this state at the intermediate times and then relax back into it in the future.
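The time--dependent frequency part of this analogy is the standard mode--mixing mechanism: evolving the positive--frequency ``in'' mode of $\ddot{f} + \omega^2(t)\, f = 0$ and projecting it onto ``out'' plane waves yields a Bogolyubov coefficient $\beta$, with $|\beta|^2$ the number of created quanta. The following toy numerical sketch (ours, with an illustrative sigmoid frequency profile; it is not the dS computation itself) shows that a fast variation of $\omega(t)$ excites the oscillator, while an adiabatic one does not:

```python
import cmath, math

def omega(t, w_i, w_f, tau):
    # Smooth profile: w_i in the far past, w_f in the far future (width ~ tau).
    x = t / tau
    if x < -60.0:
        return w_i
    if x > 60.0:
        return w_f
    return w_i + (w_f - w_i) / (1.0 + math.exp(-x))

def in_out_bogolyubov(w_i, w_f, tau, T, dt):
    """RK4-integrate f'' + omega(t)^2 f = 0 from the positive-frequency 'in'
    mode f = e^{-i w_i t}/sqrt(2 w_i), then project onto 'out' plane waves:
    f -> (alpha e^{-i w_f t} + beta e^{+i w_f t}) / sqrt(2 w_f)."""
    def rhs(t_, f_, g_):
        return g_, -omega(t_, w_i, w_f, tau) ** 2 * f_
    t = -T
    f = cmath.exp(-1j * w_i * t) / math.sqrt(2.0 * w_i)
    g = -1j * w_i * f                      # g = df/dt
    for _ in range(int(round(2.0 * T / dt))):
        k1f, k1g = rhs(t, f, g)
        k2f, k2g = rhs(t + dt/2, f + dt/2*k1f, g + dt/2*k1g)
        k3f, k3g = rhs(t + dt/2, f + dt/2*k2f, g + dt/2*k2g)
        k4f, k4g = rhs(t + dt, f + dt*k3f, g + dt*k3g)
        f += dt/6 * (k1f + 2*k2f + 2*k3f + k4f)
        g += dt/6 * (k1g + 2*k2g + 2*k3g + k4g)
        t += dt
    alpha = math.sqrt(2*w_f) * cmath.exp(1j*w_f*t) * (f - g/(1j*w_f)) / 2.0
    beta  = math.sqrt(2*w_f) * cmath.exp(-1j*w_f*t) * (f + g/(1j*w_f)) / 2.0
    return alpha, beta                     # |beta|^2 = number of created quanta
```

For a near--sudden jump the projection reproduces the textbook value $|\beta|^2 = (\omega_f-\omega_i)^2/4\,\omega_i\omega_f$, while for a slow variation $|\beta|^2$ is exponentially small; the Wronskian condition $|\alpha|^2-|\beta|^2=1$ is preserved by the integrator.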
In the second section we propose the dS invariant Kadanoff--Baym equation which may be suitable to sum the dS invariant IR corrections exactly over the BD state. This section just gives the idea of what kind of problem has to be solved if one would like to respect dS isometry exactly. However, we find it rather unphysical to address the question of the stability of the system in the circumstances when all the symmetries are respected exactly. We propose to consider slight excitations above the highly symmetric state and to trace where they evolve in the future infinity. For that reason, in the third section we derive the kinetic equation which does not respect dS isometry, but, unlike the full DS equation, is suitable for the separation of the IR renormalization from the UV one. The same equation was derived in \cite{Akhmedov:2011pj}. It was shown there that its collision integral is annihilated by the Gibbons--Hawking density of out--Jost states on top of the out--vacuum. The same state annihilates the collision integral of the Kadanoff--Baym equation of the second section up to subleading terms in the IR limit.
To make the paper self--contained we present the general discussion of the scalar fields in PP in the Appendix. All the notations, which are not defined in the main text, can be found in the Appendix.
\section{Towards invariant Kadanoff--Baym equation for BD state}
In this paper we are going to study the following field theory:
\bqa
L = \sqrt{|g|}\,\left[\frac{g^{\mu\nu}}{2}\, \pr_\mu \phi \, \pr_\nu\phi + \frac{m^2}{2}\, \phi^2 + \frac{\lambda}{3}\, \phi^3 + \dots\right].
\eqa
Dots here stand for the higher self--interaction terms, which make the theory stable.
The reason why below we present formulas only for the unstable cubic part of the potential is just to simplify them. This instability does not affect our conclusions \cite{Akhmedov:2011pj}.
For the BD state the one loop correction to the Wightman function $G_{-+}$ was calculated in \cite{Krotov:2010ma} (see as well \cite{Leblond}). The result for the sum of the tree--level and one--loop contributions in the IR limit, $Z\to\infty$, is as follows:
\bqa\label{polkrot}
G^{0+1}_{-+}(Z) \approx \left[1 - \frac{\lambda^2\,\left(1 - e^{-2\pi\mu}\right)}{4\,\mu} \, \left|\int_0^\infty dx \, x^{\frac{D-3}{2} - i\, \mu} \, h^2(x)\right|^2 \, \log(Z)\right]\, G^0_{-+}(Z).
\eqa
All notations in this formula and in the formulas that follow are given in the Appendix.
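To gauge when this correction matters, note that (\ref{polkrot}) implies (a rough estimate on our part) that the one--loop term becomes comparable to the tree--level one at invariant distances of order
\bqa
\log Z_* \sim \frac{4\,\mu}{\lambda^2\,\left(1 - e^{-2\pi\mu}\right)}\, \left|\int_0^\infty dx \, x^{\frac{D-3}{2} - i\, \mu} \, h^2(x)\right|^{-2},
\eqa
so taking $\lambda$ small only postpones, but does not remove, the breakdown of perturbation theory.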
For large enough $D$ the theory in question becomes non--renormalizable. But in the IR limit we do not care about UV divergences and renormalizability of the theory in question. We assume that all couplings in all equations below take their physical values, i.e. all UV divergent ($\sim \lambda^2 \, \log \Lambda$) or finite ($\lambda^2$) contributions are absorbed into their renormalization. For the propagators which have proper Hadamard behavior the UV divergences in dS space are the same as in flat space. Because of that we prefer to consider the BD state (or mild density excitations above it) as the initial state of our system. But below we keep track only of the leading large IR contributions.
As seen from (\ref{polkrot}), loops are not suppressed in comparison with the tree--level contribution for large enough $Z$. One has to understand what is the result of the summation of the leading IR contributions in all loops. The answer to this question can be obtained from the solution of the Dyson--Schwinger (DS) equation:
\bqa\label{DS}
\hat{G}(Z_{XY}) = \hat{G}^0(Z_{XY}) + \lambda^2 \int [dW]\int [dU] \hat{G}^0(Z_{XW}) \, \hat{\Sigma}(Z_{WU})\, \hat{G}(Z_{UY}),
\eqa
where $\hat{G}(Z)$ is the matrix of the exact propagators, while $\hat{G}^0(Z)$ is the matrix of the tree--level ones. All propagators in (\ref{DS}) are the functions of the invariant distance, because we are quantizing over the BD state.
Having in mind the physical and mathematical origin of the large IR effects \cite{Akhmedov:2011pj} we have simplified the complete system of DS equations in (\ref{DS}). We have assumed that the vertex $\lambda$ does not receive any new large IR contributions on top of those which are caused by the contributions contained in the two--point functions.
Eq. (\ref{DS}) is not suitable for the summation of only the large IR contributions $\lambda^2 \log(Z)$, because it does not separate the UV from the IR renormalization. One needs an equation which sums up only the leading IR contributions and does not even see the contributions which are either suppressed by higher powers of $\lambda$ or UV divergent ($\sim \lambda^2 \log \Lambda$). The proper equation is the kinetic one of the next section. However, it does not respect dS isometry, while one would like to sum the dS invariant contributions for the BD state.
One possible variant is as follows. We apply the Klein--Gordon operator to both sides of (\ref{DS}) to get rid of its dependence on the initial value of the propagator. This operator, when acting on a function of $Z$, is equivalent to $\Box(g) + m^2 = (Z^2 - 1) \pr_Z^2 + D\,Z\, \pr_Z + m^2$. That converts the DS equation into an integrodifferential equation of the Kadanoff--Baym form. Recalling that the tree--level Wightman functions, $G^0_{+-}$ and $G^0_{-+}$, solve the homogeneous equation, while the Feynman propagators, $G^0_{++}$ and $G^0_{--}$, solve the inhomogeneous one, we obtain the following equation for the Wightman function $G_{-+}(Z_{XY})$:
\bqa\label{kadbay}
\left[Z_{XY}^2 \pr_{Z_{XY}}^2 + D\,Z_{XY}\, \pr_{Z_{XY}} + m^2\right] \, G[Z_{XY}] = \nn \\ = \lambda^2 \, \int [dW] \, G^2[Z_{XW} + i \epsilon] \, G[Z_{WY} + i \epsilon \, sgn(\eta_w - \eta_y)] + \nn \\ + \lambda^2 \, \int [dW] \, G^2[Z_{XW} + i \epsilon \, sgn(\eta_x - \eta_w)] \, G[Z_{WY} - i \epsilon]
\eqa
in the limit $Z_{XY}\to \infty$. However, we do not see that this equation sums up only the leading IR terms and nothing else. A possible way to move further is to apply the ansatz $G(Z) = f(Z)\, G^0(Z)$ for $Z\to\infty$, where $f(Z)$ is slowly varying in comparison with $G^0(Z)$. But instead we are going to find the stationary IR solution of this equation by approaching the problem from a different perspective.
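As a quick consistency check (ours, symbolic, using sympy): at large $Z$, where the operator reduces to $Z^2\,\pr_Z^2 + D\,Z\,\pr_Z + m^2$, a power law $G\propto Z^{-h}$ solves the homogeneous equation precisely when $h^2 - (D-1)\,h + m^2 = 0$, i.e. $h = \frac{D-1}{2} \pm i\mu$ for the principal series:

```python
import sympy as sp

Z, h, m, D = sp.symbols('Z h m D', positive=True)

# Large-Z form of the operator (Z^2 - 1) d^2/dZ^2 + D Z d/dZ + m^2
# acting on the power-law ansatz G ~ Z^(-h).
G = Z**(-h)
expr = Z**2 * sp.diff(G, Z, 2) + D * Z * sp.diff(G, Z) + m**2 * G
indicial = sp.expand(expr / G)        # polynomial condition on the exponent h

roots = sp.solve(sp.Eq(indicial, 0), h)
```

The two roots are complex conjugates for $m > (D-1)/2$, matching the oscillatory $Z^{\pm i\mu}$ tails of the tree--level Wightman function.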
As a side remark let us mention that it was argued in \cite{Marolf:2010zp} that the result of the summation of the IR contributions should be the propagator built with the use of the exact Hartle--Hawking state, i.e. the one which is obtained via analytical continuation from the sphere and constructed with the use of the exact Hamiltonian. Obviously such a state depends on the coupling constant $\lambda$.
The exact state in dS should as well depend on $\lambda$, but besides that we encounter a new phenomenon, which can not be grasped through the analytical continuation from the sphere. We are going to show that the IR stationary solution of (\ref{kadbay}), the one which annihilates its RHS up to subleading terms, does not depend on the coupling constant. I.e. even if we start from the BD state (the ground state of the free Hamiltonian on the sphere) and then adiabatically switch on interactions and eventually switch them off, the theory relaxes to another state independently of the self--interactions.
In the next section we will propose the result of the IR dressing of the BD propagator, which, however, will not allow us to fix the function $f(Z)$. This is because we will be able to find the propagator of the stationary state, which is reached as $Z\to\infty$, but we will not be able to find the route by which it approaches stationarity in a dS invariant way. We will not be able to find the expression for the propagator at finite values of $Z$. We will find the form of its approach to stationarity only in the circumstances when the dS isometry is broken.
The hint for the expression of the stationary propagator comes from the following observations. First, the dressed propagator should respect dS isometry. Second, it should annihilate the RHS of (\ref{kadbay}) up to the suppressed terms. These subleading terms can be absorbed into the finite ($\sim \lambda^2$) and infinite ($\sim \lambda^2 \log \Lambda$) UV renormalization.
\section{Solution of the Dyson--Schwinger equation in IR limit}
Let us consider small density perturbation over any $\alpha$--vacuum. Then the dS invariance of the propagators is broken even at tree--level, but we still have large IR contributions.
To sum them up one again has to solve the DS equation, but this time it does not respect dS isometry.
Due to the relation $G^0_{+-} + G^0_{-+} = G^0_{++} + G^0_{--}$ it is convenient to perform the Keldysh rotation \cite{Kamenev}, \cite{LL} to the new basis: $D^K_0(X,Y) = -\frac{i}{2}\, \left[G^0_{+-}(X,Y) + G^0_{-+}(X,Y)\right]$, $D^R_0(X,Y) = \theta(\eta_y-\eta_x) \, \left[G^0_{-+}(X,Y) - G^0_{+-}(X,Y)\right]$ and $D^A_0(X,Y) = \theta(\eta_x-\eta_y) \, \left[G^0_{+-}(X,Y) - G^0_{-+}(X,Y)\right]$.
Here $D^{R,A}$ are retarded and advanced Green functions. They carry information about the quasi--particle spectrum of the theory. At the same time the Keldysh propagator $D^K$ describes the state of the theory. Thus, our main concern below should be the solution of the DS equation for the Keldysh propagator $D^K$.
Due to spatial homogeneity of the PP and due to its rapid expansion, which is supposed to wash away any initial inhomogeneity, we find it convenient to perform the Fourier transform of all quantities along the spatial directions: $D^{K,R,A}_p(\eta_1, \eta_2) \equiv \int d^{D-1}x \, e^{i\, \vec{p}\, \vec{x}} D^{K,R,A}(\eta_1, \vec{x}; \eta_2, 0)$. Then the Fourier transformed form of the DS equation for $D^K$ is as follows\footnote{Feynman rules can be found in \cite{vanderMeulen:2007ah}.}:
\bqa\label{DSDK}
D_p^{K}(\eta_1,\eta_2) = D_{0p}^K(\eta_1,\eta_2) + \nn \\ + \l^2 \int \fr{d^{D-1}\vec{q}}{(2\pi)^{D-1}} \iint_\infty^0 \fr{d\eta_3 d\eta_4}{(\eta_3\eta_4)^D} \, \Biggl[ \Biggr. D_{0p}^R(\eta_1,\eta_3) \, D_q^K(\eta_3,\eta_4) \, D_{p-q}^K(\eta_3,\eta_4) \, D_p^A(\eta_4,\eta_2) + \nn\\ + 2 \, D_{0p}^R(\eta_1,\eta_3) \, D_q^R(\eta_3,\eta_4) \, D_{p-q}^K(\eta_3,\eta_4) \, D_p^K(\eta_4,\eta_2) + 2 \, D_{0p}^K(\eta_1,\eta_3) \, D_q^K(\eta_3,\eta_4) \, D_{p-q}^A(\eta_3,\eta_4)\, D_p^A(\eta_4,\eta_2) - \nn\\ - \fr{1}{4} \, D_{0p}^R(\eta_1,\eta_3) \, D_q^R(\eta_3,\eta_4) \, D_{p-q}^R(\eta_3,\eta_4) \, D_p^A(\eta_4,\eta_2) - \fr{1}{4} \, D_{0p}^R(\eta_1,\eta_3) \, D_q^A(\eta_3,\eta_4) \, D_{p-q}^A(\eta_3,\eta_4) \, D_p^A(\eta_4,\eta_2) \Biggl. \Biggr].
\eqa
Note that we are looking for the kinetic equation whose collision integral is defined at the $\lambda^2$ order. In such an approximation $D_{0p}^K$ can be substituted by $D_p^K$ under the integral on the RHS of (\ref{DSDK}).
We propose the following ansatz to solve (\ref{DSDK}):
\bqa\label{ansatz}
D_p^{K}(\eta_1,\eta_2) = \left(\eta_1\eta_2\right)^{\fr{D-1}{2}}\, d^K(p\eta_1,p\eta_2) , \nn \\
d^K\bigl(p\eta_1,p\eta_2\bigr) = \fr12 h\bigl(p\eta_1\bigr)\,h^*\bigl(p\eta_2\bigr) \biggl[1+2\,n\bigl(p\eta_{12}\bigr)\biggr] + h\bigl(p\eta_1\bigr)h\bigl(p\eta_2\bigr)\,\kappa\bigl(p\eta_{12}\bigr) + c.c.,
\eqa
where $\eta_{12} = \sqrt{\eta_1 \eta_2}$. We also use the tree--level retarded and advanced propagators $D_p^{R}(\eta_1,\eta_2) = \theta\left(\eta_2-\eta_1\right) \, \left(\eta_1\eta_2\right)^{\fr{D-1}{2}} \,
d^-\bigl(p\eta_1,p\eta_2\bigr)$, $D_p^{A}(\eta_1,\eta_2) = - \theta\left(\eta_1-\eta_2\right) \, \left(\eta_1\eta_2\right)^{\fr{D-1}{2}}\,d^-\bigl(p\eta_1,p\eta_2\bigr)$, where $d^-\bigl(p\eta_1,p\eta_2\bigr)= 2\,{\rm Im}\left[h(p\eta_1)h^*(p\eta_2)\right]$. In (\ref{ansatz}) $n(p\eta)$ and $\kappa(p\eta)$ are unknown functions to be determined by the equations under derivation.
This ansatz is inspired by the following observations. The retarded and advanced Green functions can be found as classical objects if the spectrum of quasi--particles is known. The ansatz for the Keldysh propagator follows from the interpretation of $n(p\eta)$ as the particle density, $\langle a^+_p a_p \rangle$, and of $\kappa(p\eta)$ as the anomalous quantum average, $\langle a_p \, a_{-p}\rangle$, \cite{Akhmedov:2011pj}. We assume that in the future infinity $n$ and $\kappa$ are independent of the spatial coordinates. Furthermore, due to the symmetry of the PP under simultaneous rescalings of its coordinates, $\eta \to l\, \eta$ and $\vec{x} \to l \, \vec{x}$, we expect that in the future infinity $n$ and $\kappa$ should be functions of the physical momentum, $p\eta$, only: $n_p(\eta) = n(p\eta)$ and $\kappa_p(\eta) = \kappa(p\eta)$.
It is known in condensed matter physics that non--vanishing $\kappa$ signals that one has chosen the wrong harmonics to describe the quasi--particle spectrum. Moreover, for constant $\kappa$ one can always set it to zero by performing a Bogolyubov transformation, which leads to the same ansatz (\ref{ansatz}), but with harmonics corresponding to a different $\alpha$--vacuum and a different value of $n$. Because of these observations we do not specify the harmonics until the end, where we check the IR behavior of $\kappa(p\eta)$ for various choices of them.
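Since the rest of the analysis relies on this step, it can be checked directly. The following small numerical sketch (ours, plain Python) implements $h' = u\,h + v\,h^*$ with $|u|^2-|v|^2=1$, together with the induced transformation of $n = \langle a^+_p a_p\rangle$ and $\kappa = \langle a_p\, a_{-p}\rangle$; it picks $v^*/u^*$ as a root of the quadratic condition $\kappa' = 0$ and verifies that the Keldysh kernel of (\ref{ansatz}) is left invariant:

```python
import cmath, math

def dK(h1, h2, n, kappa):
    # Kernel of the ansatz: d^K = Re[h1 h2*] (1+2n) + 2 Re[h1 h2 kappa]
    # (the two "+ c.c." terms of the ansatz combined).
    return (1 + 2*n) * (h1 * h2.conjugate()).real + 2 * (h1 * h2 * kappa).real

def bogolyubov(h, n, kappa, u, v):
    # h' = u h + v h*, together with the transformation of the averages
    # under a = u b + v* b^+ (equivalently b = u* a - v* a^+).
    hp = u * h + v * h.conjugate()
    np_ = (abs(u)**2 * n + abs(v)**2 * (1 + n)
           - 2 * (u * v.conjugate() * kappa.conjugate()).real)
    kp = (u.conjugate()**2 * kappa
          - u.conjugate() * v.conjugate() * (1 + 2*n)
          + v.conjugate()**2 * kappa.conjugate())
    return hp, np_, kp

def kappa_removing_uv(n, kappa):
    # Choose r = v*/u* as the smaller root of  kappa - r(1+2n) + r^2 kappa* = 0,
    # so that the transformed anomalous average vanishes.
    disc = cmath.sqrt((1 + 2*n)**2 - 4 * abs(kappa)**2)
    r = ((1 + 2*n) - disc) / (2 * kappa.conjugate())
    u = 1.0 / math.sqrt(1 - abs(r)**2)
    v = r.conjugate() * u
    return u, v
```

The invariance of $d^K$ is guaranteed because both parametrizations represent the same symmetrized two--point function; choosing the root with $|v^*/u^*|<1$ keeps the transformation a proper Bogolyubov one.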
For general values of $\eta_1$ and $\eta_2$ the ansatz (\ref{ansatz}) does not solve the DS equation in question. However, in the limit $p\eta_{1,2}\to 0$ and $\eta_1/\eta_2=const$, one can neglect the difference between $\eta_1$ and $\eta_2$ in the expressions which follow. That can be done if one keeps track only of the leading large IR contributions.
As a result one can substitute the average conformal time $\eta_{12} = \sqrt{\eta_1\eta_2}$, instead of both $\eta_1$ and $\eta_2$ for the limits of integrations over $\eta_3$ and $\eta_4$. Then the ansatz in question reproduces itself under the substitution into the DS equation if $n$ and $\kappa$ obey:
\bqa\label{np}
n(p\eta_{12}) \approx n_p^{(0)} - \l^2 \, \int \fr{d^{D-1}q}{(2\pi)^{D-1}} \, \iint_{\infty}^{\eta_{12}} d\eta_3 \, d\eta_4 \, (\eta_3\eta_4)^{\fr{D-3}{2}} \times \nn\\
\times \Biggl\{ \Biggr. \Biggl[ d^K\biggl(q\eta_3,q\eta_4\biggr)\, d^K\biggl(|p-q|\eta_3,|p-q|\eta_4\biggr) + \fr14 \, d^-\biggl(q\eta_3,q\eta_4\biggr) \, d^-\biggl(|p-q|\eta_3,|p-q|\eta_4\biggr)
- \Biggr. \nn \\ - \Biggl. d^-\biggl(q\eta_3,q\eta_4\biggr)\, d^K\biggl(|p-q|\eta_3,|p-q|\eta_4\biggr) \, \biggl[1 + 2\,n\bigl(p\eta_{13}\bigr)\biggr] \Biggr] \, h^*\bigl(p\eta_3\bigr)\,h\bigl(p\eta_4\bigr)
+ \nn \\ + 4 \, \theta\left(\eta_4-\eta_3\right) \, d^K\biggl(|p-q|\eta_3,|p-q|\eta_4\biggr) \, {\rm Re} \biggl[d^-\biggl(q\eta_3,q\eta_4\biggr)\,h\bigl(p\eta_3\bigr)\,h\bigl(p\eta_4\bigr)\, \kappa\bigl(p\eta_{42}\bigr) \biggr] \Biggl. \Biggr\}
\eqa
and
\bqa\label{kappa}
\kappa(p\eta_{12}) \approx \kappa_p^{(0)} - \l^2 \, \int \fr{d^{D-1}q}{(2\pi)^{D-1}} \, \iint_{\infty}^{\eta_{12}} d\eta_3 \, d\eta_4 \, (\eta_3\eta_4)^{\fr{D-3}{2}} \times \nn\\
\times \Biggl\{ \Biggr. \Biggl[ d^K\biggl(q\eta_3,q\eta_4\biggr) \, d^K\biggl(|p-q|\eta_3,|p-q|\eta_4\biggr) + \fr14 \, d^-\biggl(q\eta_3,q\eta_4\biggr) \, d^-\biggl(|p-q|\eta_3,|p-q|\eta_4\biggr)
+ \Biggr. \nn \\ + \Biggl. d^-\biggl(q\eta_3,q\eta_4\biggr) \, d^K\biggl(|p-q|\eta_3,|p-q|\eta_4\biggr)\, \biggl[1 + 2\,n\bigl(p\eta_{13}\bigr)\biggr] \Biggr] \, h^*\bigl(p\eta_3\bigr) \, h^*\bigl(p\eta_4\bigr)
+ \nn \\ + 4 \, \theta\left(\eta_4-\eta_3\right) \, d^K\biggl(|p-q|\eta_3,|p-q|\eta_4\biggr) \, d^-\biggl(q\eta_3,q\eta_4\biggr) \, h^*\bigl(p\eta_3\bigr) \, h\bigl(p\eta_4\bigr) \, \kappa\bigl(p\eta_{42}\bigr) \Biggl. \Biggr\}
\eqa
where $n_p^{(0)}$ and $\kappa_p^{(0)}$ define the initial propagator $D^K_{0p}(\eta_1, \eta_2)$. Their presence is a drawback of the integral form of the equations under consideration, because then the equation itself depends on the initial conditions. The integrodifferential form of these equations is just the system of kinetic equations for $n$ and $\kappa$ together, which was derived by different methods in \cite{Akhmedov:2011pj}.
In the derivation of (\ref{np}) and (\ref{kappa}) we have used the following relations $d^-(p\eta_1,p\eta_2) = - d^-(p\eta_2,p\eta_1) = - \biggl[d^-(p\eta_1,p\eta_2)\biggr]^*$ and $\int d^{D-1}\vec{q} f\left(q,|p-q|\right) = \int d^{D-1}\vec{q} f\left(|p-q|,q\right)$.
We also assumed that $n(p\eta)$ and $\kappa(p\eta)$ are slow functions in comparison with $h(p\eta)$. Then one can safely change their positions under the $d\eta_3$ and $d\eta_4$ integrals, which we frequently do in the equations below. This is due to the usual separation of scales, which lies at the basis of kinetic theory \cite{LL}. In our case this approximation is correct only for the fields from the principal series, $m>(D-1)/2$, for which the harmonics $h(p\eta)$ oscillate at the future infinity.
However, the ansatz (\ref{ansatz}) solves (\ref{DSDK}) also for the scalars from the complementary series, $m\le (D-1)/2$. For them the harmonics do {\it not} oscillate at future infinity. The main problem with the situation when $h(p\eta)$ is as slow as $n(p\eta)$ and $\kappa(p\eta)$ is that then one can not derive the kinetic equation of the usual form. More complicated integrodifferential equations are available, whose solution and physical interpretation are not yet known to us.
To simplify (\ref{np}) and (\ref{kappa}) we change the variables as $q\eta_{1,2,3,4} = x_{1,2,3,4}$ and use some approximations \cite{Akhmedov:2011pj} to arrive at:
\bqa\label{np1}
n(p\eta_{12}) \approx n_p^{(0)} + \frac{\l^2\, S_{D-2}}{(2\pi)^{D-1}} \, \int_p^{1/\eta_{12}} \fr{dq}{q} \, \iint_{\infty}^{0} dx_3 \, dx_4 \, (x_3\,x_4)^{\fr{D-3}{2}} \times \nn\\
\Biggl\{ \Biggr. \Biggl[ \biggl[d^K\left(x_3,x_4\right)\biggr]^2 +
\fr14 \, \biggl[d^-\left(x_3,x_4\right)\biggr]^2
- d^-\left(x_3,x_4\right)\, d^K\left(x_3,x_4\right) \, \left[1 + 2^{\phantom{\frac12}} n\left(\frac{p}{q}x_{13}\right)\right] \Biggr] \times \nn \\ \times h^*\left(\frac{p}{q}x_3\right)\,h\left(\frac{p}{q}x_4\right) + 4 \, \theta\left(x_4-x_3\right) \, d^K\left(x_3,x_4\right) \, {\rm Re} \left[d^-\left(x_3,x_4\right)\,h\left(\frac{p}{q}x_3\right)\,h\left(\frac{p}{q}x_4\right)\, \kappa\left(\frac{p}{q}x_{42}\right) \right] \Biggl. \Biggr\}\nn \\
{\rm and} \quad
\kappa(p\eta_{12}) \approx \kappa_p^{(0)} - \frac{\l^2\, S_{D-2}}{(2\pi)^{D-1}} \, \int_p^{1/\eta_{12}} \fr{dq}{q} \, \iint_{\infty}^0 dx_3 \, dx_4 \, (x_3x_4)^{\fr{D-3}{2}} \times \nn\\
\Biggl\{ \Biggr. \Biggl[ \biggl[d^K\left(x_3,x_4\right)\biggr]^2 + \fr14 \, \biggl[d^-\left(x_3,x_4\right)\biggr]^2 + d^-\left(x_3,x_4\right) \, d^K\left(x_3,x_4\right)\, \left[1 + 2^{\phantom{\frac12}} n\left(\frac{p}{q}x_{13}\right)\right] \Biggr] \times \nn \\ \times h^*\left(\frac{p}{q}x_3\right) \, h^*\left(\frac{p}{q}x_4\right)
+ 4 \, \theta\left(x_4-x_3\right) \, d^K\left(x_3,x_4\right) \, d^-\left(x_3,x_4\right) \, h^*\left(\frac{p}{q}x_3\right) \, h\left(\frac{p}{q}x_4\right) \, \kappa\left(\frac{p}{q}x_{42}\right) \Biggl. \Biggr\}.
\eqa
Here $S_{D-2}$ is the volume of the $(D-2)$--dimensional sphere of unit radius and $x_{ij} = \sqrt{x_i\,x_j}$. In (\ref{np1}) we have neglected $p$ in comparison with $q$ inside the integrals to keep only the leading IR terms. See \cite{Akhmedov:2011pj} for more detailed discussion.
Now for the BD state $h(x)\propto {\cal H}^{(1)}_{i\mu}(x)$. Then the $x_{3,4}$ integrals are saturated around $x\sim \mu$, because of the rapid oscillations of the Hankel functions at large values of their arguments. Hence, $h(px_{3,4}/q)$ can be expanded around zero, because $p/q\ll 1$ in (\ref{np1}). Then, because ${\cal H}^{(1)}_{i\mu}(x)$ behaves as $C_+ \, x^{i\mu} + C_- \, x^{-i\mu}$, when $x\to 0$, there are interference terms under the $dq/q$ integral which do not depend on $q$. As a result, both $n(p\eta)$ and $\kappa(p\eta)$ behave as $\lambda^2 \, \log(p\eta)$ in the future infinity. Moreover, $\kappa(p\eta)$ is generated even if it was set to zero at the initial stage \cite{Akhmedov:2011pj}. Its presence in the future infinity signals that the backreaction on the BD state (please do not confuse it with the backreaction on the dS geometry) is huge.
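The mechanism behind the logarithm can be isolated in a toy numerical sketch (ours): in the region $p \ll q \ll 1/\eta_{12}$ the product $h^*(px_3/q)\,h(px_4/q)$ contains $q$--independent cross terms of the $C_+ C_-^*$ type together with pieces oscillating as $\cos(2\mu\log q)$; under $\int dq/q$ the former accumulate a $-\log(p\eta)$, while the latter stay bounded by $1/\mu$:

```python
import math

def dq_over_q(p, eta, mu, oscillatory, steps=100000):
    # Midpoint rule for \int_p^{1/eta} (dq/q) F(q), written in the variable ln q.
    # F = 1 mimics the q-independent interference term; F = cos(2 mu ln q)
    # mimics the terms coming from (p/q)^{±2 i mu}.
    a, b = math.log(p), math.log(1.0 / eta)
    h = (b - a) / steps
    total = 0.0
    for i in range(steps):
        lq = a + (i + 0.5) * h
        total += (math.cos(2.0 * mu * lq) if oscillatory else 1.0) * h
    return total
```

For the out--Jost harmonics, $h(x)\propto J_{i\mu}(x)\sim x^{i\mu}$, only the oscillatory pieces survive in the $\kappa$ integrand, which is why $\kappa$ is not regenerated for them.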
One should be a bit more careful with similar manipulations for the other $\alpha$--vacua, because their harmonics behave as linear combinations of $e^{ip\eta}$ and $e^{-ip\eta}$ at large momenta.
But a careful study reveals the same picture for most of them \cite{Akhmedov:2011pj}. The explanation comes from the fact that their harmonics $h(x)$ also behave as linear combinations of $x^{i\mu}$ and $x^{-i\mu}$ in the future infinity.
Only for the out--Jost harmonics, $h(x) \propto J_{i\mu}(x)$, which behave as single waves $x^{i\mu}$, is the situation different. In particular, if one sets $\kappa(p\eta)$ to zero, it is not generated back in (\ref{np1}). More precisely, the contribution to it behaves as $\lambda^2$, i.e. it is negligible in comparison with $\lambda^2\,\log(p\eta)$. That is because the integrand of the $dq/q$ integral, defining $\kappa$ in (\ref{np1}), contains only $q$--dependent terms and, hence, the corresponding integral is convergent as $p\eta \to 0$. At the same time, for the harmonics in question $n(p\eta)$ has contributions of the order of $\lambda^2 \log(p\eta)$.
(The physics for the general $\alpha$--vacua was discussed in greater detail e.g. in \cite{Kundu:2011sg}.)
All in all, the out--Jost harmonics represent the proper quasi--particle states in the future infinity. This means that for the out--Jost harmonics the ansatz (\ref{ansatz}) with $\kappa(p\eta)=0$ {\it does} reproduce itself after the substitution into the DS equation. That is possible if one neglects terms which are suppressed in comparison with powers of $\lambda^2 \log(p\eta)$. This is the argument which favors the interpretation that, independently of the initial state at the past infinity of the PP, the field theory state flows in the future infinity to the out--vacuum with some density of particles on top of it \cite{Akhmedov:2011pj}. To support such a conclusion we are going to show in a moment that for the out--Jost harmonics $\kappa(p\eta)$ indeed flows to zero in the future infinity, even if it was not zero originally.
The kinetic equation is obtained from (\ref{np1}), when $\kappa(p\eta)$ is set to zero, via application of the differential operator to both of its sides:
\bqa\label{cint}
\frac{d n(x)}{d\log (x)} = - \frac{\lambda^2\, S_{D-2}}{2 (2\,\pi)^{D-1}\, \mu} \, \int^0_{\infty} dx_3 \,x_3^{\frac{D-3}{2}} \, \int_{\infty}^{0} dx_4 \, x_4^{\frac{D-3}{2}} \times \nonumber \\ \times \left\{ {\rm Re} \left[x_3^{-i\mu} \, V(x_3) \,x_4^{i\mu}\, V^*(x_4)\right] \, \left[(1+n(x))\, n(x_3)^{2\phantom{\frac12}} - \,\, n(x) \, (1+n(x_3))^2\right] + \right. \nonumber \\
+ 2\,{\rm Re} \left[x_3^{i\mu} \, W(x_3) \, x_4^{- i\mu} \, W(x_4) \right]\, \left[ n(x_3)\, (1+n(x_3))\,(1+n(x))^{\phantom{\frac12}} - \,\,(1+n(x_3))\, n(x_3)\, n(x) \right] + \nonumber \\
+ \left. {\rm Re} \left[x_3^{i\mu} \, V(x_3) \, x_4^{- i\mu} \, V^*(x_4)\right] \, \left[ (1+n(x_3))^2 \,(1+n(x))^{\phantom{\frac12}} - \,\, n(x_3)^2\, n(x) \right]^{\phantom{2}}\right\}.
\eqa
Here $x=p\eta_{12}$, $V(x) = \left[h^2\left(x\right) - \frac{\pi \, e^{-\pi\mu}}{4\, \sinh(\pi\, \mu)\, |x|} - \dots\right]$ and $W(x) = \left[\left|h\left(x\right)\right|^2- \frac{\pi \, e^{-\pi\mu}}{4\, \sinh(\pi\, \mu)\, \left|x\right|} - \dots\right]$, where $h(x) = \sqrt{\frac{\pi}{\sinh(\pi\mu)}}\, J_{i\mu}(x)$ with $J$ being the Bessel function. Dots in these expressions stand for a finite number of terms with higher powers of $1/|x|$. The presence of such contributions makes the collision integral well defined after the Taylor expansion of $h(px/q)$ and can be explained by the behavior of the out--Jost harmonics in the limit $x\to\infty$. All this is clarified in \cite{Akhmedov:2011pj}.
This is exactly the kinetic equation which was derived in \cite{Akhmedov:2011pj}. If one has started with a small density perturbation over the BD vacuum, one can expect that $n(p\eta)$ is small in the future infinity. As is explained in \cite{Akhmedov:2011pj}, in this case (\ref{cint}) degenerates into a renormalization group type differential equation. The latter can be solved with the result:
\bqa\label{solution}
n(p\eta) = \frac{\Gamma_2}{\Gamma_1}\left[C \, \left( p\,\eta\right)^{\Gamma} + 1\right], \nonumber \\
\Gamma_1 = \frac{\lambda^2\, S_{D-2}}{(2\pi)^{D-1}\, \mu} \, \left|\int_0^{\infty} dy \,y^{\frac{D-3}{2} - i\, \mu} \, V(y) \right|^2, \nonumber \\ \Gamma_2 = \frac{\lambda^2\, S_{D-2}}{(2\,\pi)^{D-1}\, \mu} \, \left|\int_0^{\infty} dy \,y^{\frac{D-3}{2} + i\,\mu} \, V(y)\right|^2.
\eqa
where $C$ is the integration constant, which depends on the initial conditions.
This solution has a stable point $\frac{\Gamma_2}{\Gamma_1} \approx e^{-2\,\pi\mu}\ll 1$ for $\mu\gg 1$, which approximately annihilates the collision integral in (\ref{cint}). The stable point is reached when the production of particles is equilibrated by their decay \cite{Akhmedov:2011pj}. In fact, from the collision integral (\ref{cint}) it should be clear that $\Gamma_1$ defines the decay rate of the scalar particle into two, while $\Gamma_2$ defines the particle production rate. Note that $\log(p\eta)$ is decreasing as we approach the future infinity and $n(p\eta)$ is the density per co--moving volume, which does not depend on the scale $1/\eta$ \cite{Akhmedov:2011pj}.
The most interesting fact, from the perspective of the discussion above, is that the stable point in question does not depend on $\lambda$. (Of course, the way the solution (\ref{solution}) approaches stationarity (its value for non--zero $p\eta$) does depend on $\lambda$.) Furthermore, it is not hard to see that as a by--product we have shown that the stationary state of the kinetic equation (\ref{cint}) also annihilates, modulo subleading terms, the RHS (collision integral) of (\ref{kadbay}).
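To illustrate the attractor behavior numerically (the following sketch is ours and is not part of the derivation above), one can integrate the linear RG--type equation $dn/d\log(p\eta) = \Gamma_1\, n - \Gamma_2$, which is solved by (\ref{solution}) with the exponent $\Gamma = \Gamma_1$. The rates below are chosen by hand rather than computed from the integrals defining $\Gamma_{1,2}$:

```python
import math

# Illustrative sketch (not part of the derivation): integrate the linear
# RG-type equation dn/ds = gamma1*n - gamma2, with s = log(p*eta), whose
# solution is (gamma2/gamma1)*(C*(p*eta)^gamma1 + 1). Rates chosen by hand.
mu = 1.0
gamma1 = 1.0
gamma2 = math.exp(-2.0 * math.pi * mu)  # the ~e^{-2*pi*mu} suppression

def flow(n0, s_final=-20.0, ds=1e-3):
    """Euler-integrate from s = 0 down to s_final; s decreases toward the
    future infinity, since log(p*eta) -> -infinity as eta -> 0."""
    n, s = n0, 0.0
    while s > s_final:
        n += (gamma1 * n - gamma2) * (-ds)  # step s -> s - ds
        s -= ds
    return n

# Any initial density flows to the stable point gamma2/gamma1.
fixed_point = gamma2 / gamma1
for n0 in (0.0, 0.5, 1.0):
    assert abs(flow(n0) - fixed_point) < 1e-6
```

Independently of the initial density at $p\eta\sim 1$, the distribution settles at $\Gamma_2/\Gamma_1\approx e^{-2\pi\mu}$; only the rate of approach depends on the magnitude of $\Gamma_{1,2}\propto\lambda^2$.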
The last thing which we have to check is the behavior of $\kappa(p\eta)$ for the out--Jost harmonics if it was initially non--zero. We also assume that we have started from a small value of it in the past infinity and that it flows to zero in the future. Under these assumptions, if one keeps only the leading terms, the integrodifferential form of the equation for $\kappa(p\eta)$ from (\ref{np1}) degenerates to:
\bqa
\frac{d\kappa(p\eta)}{d\log(p\eta)} = \Gamma_3 \, \kappa(p\eta), \nn \\
\Gamma_3 = \frac{4\, i\, \lambda^2 \, S_{D-2}}{(2\pi)^{D-1}}\, \iint^0_\infty dx_3 \, dx_4 \, x_3^{\frac{D-3}{2} - i\mu}\, x_4^{\frac{D-3}{2} + i\mu}\, \theta(x_4-x_3) \, {\rm Im}\left[V(x_3)V^*(x_4)\right].
\eqa
Here ${\rm Re}\,\Gamma_3 = \Gamma_1 - \Gamma_2 \approx \left(1 - e^{-2\pi\mu}\right)\, \Gamma_1 > 0$ and, hence, the solution of this equation, $\kappa(p\eta) \propto \left(p\eta\right)^{\Gamma_3}$, flows to zero in the future infinity. I.e., our assumption is self--consistent and (\ref{solution}) is stable under linearized perturbations of $\kappa(p\eta)$.
\section{Conclusions and Acknowledgments}
We have found the result of IR dressing of the BD vacuum in PP of dS. The dressed state is
described by the out--Jost harmonics and corresponds to $\kappa(p\eta)=0$ with $n(p\eta) \approx e^{-2\pi\mu}$. Furthermore, the corresponding two--point correlation function depends, in the future infinity, only on the time difference, $\eta_1/\eta_2 = e^{t_2 - t_1}$, rather than on both of the times ($\eta_1$ and $\eta_2$) independently. This means that the dressed state also solves the kinetic problem in dS space.
Using the same methods as those which lead to (\ref{cint}) one can derive the kinetic equation in the contracting PP, $ds^2 = dt^2 - e^{-2t}\, d\vec{x}^2 = \frac{1}{\eta^2}\, \left(d\eta^2 - d\vec{x}^2\right)$ where $0\to \eta = e^t \to +\infty$. The solution of the latter equation for low momenta $p$ is \cite{Akhmedov:2011pj}:
\bqa
n(\eta) \sim \frac{1}{A - \bar{\Gamma}\, \log\eta} \sim \frac{1}{\bar{\Gamma}\,\log\frac{\eta_0}{\eta}},
\eqa
and is independent of $p$. It is valid for $\eta < \eta_0 = e^{const/\lambda^2}\gg 1$. Here $A$ is an integration constant, which depends on the initial state and $\bar{\Gamma} \propto \frac{\lambda^2}{m^2} > 0$ for $m \gg (D-1)/2$.
One can see that the distribution in question grows with time, due to the contraction of the space and constant particle production, and moreover has a pole at some finite $\eta_0$. In this case the backreaction on the gravitational background should be strong.
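The logarithmic growth and the finite--time pole can be illustrated numerically (an illustration of ours, with parameters chosen by hand): the quoted expression solves $dn/d\log\eta = \bar{\Gamma}\, n^2$, so a simple Euler integration of this equation tracks the closed form up to the vicinity of the pole at $\eta_0$.

```python
# Illustrative sketch (not from the original text): the quoted solution
# n(eta) = 1/(A - Gbar*log(eta)) solves dn/ds = Gbar * n^2 with s = log(eta).
Gbar = 0.01           # plays the role of lambda^2/m^2 > 0
n0 = 0.1              # initial density at s = 0, which fixes A = 1/n0
A = 1.0 / n0

def n_exact(s):
    """Closed-form solution; it blows up at s0 = A/Gbar."""
    return 1.0 / (A - Gbar * s)

# Forward Euler integration toward the pole (located at s0 = 1000 here).
n, s, ds = n0, 0.0, 1e-3
while s < 500.0:
    n += Gbar * n * n * ds
    s += ds

assert abs(n - n_exact(500.0)) / n_exact(500.0) < 1e-2
```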
This observation means that in global dS space the situation can be quite different from the one in the expanding PP, at least because global dS contains expanding and contracting PPs simultaneously. Then we have two competing processes --- the expansion of the space--time and the explosive particle production \cite{Akhmedov:2011pj}.
We would like to acknowledge discussions with A.Polyakov and I.Burmistrov. We would like to thank MPI, AEI, Golm, Germany for the hospitality during the final stage of the work on this project. The work of AET was partially supported by the grant "Leading Scientific Schools" No. NSh-6260.2010.2, RFBR-11-02-01227-a. The work of PhB was partially supported by the grant RFBR--11--02--01120 by the Dynasty Foundation. This work was done under the support of the grant from the Ministry of Education and Science of the Russian Federation, contract No. 14.740.11.0081.
\section{Appendix}
The $D$--dimensional de Sitter (dS) space is the hyperboloid, $X_\mu^2 \equiv - X_0^2 + X_i^2 = 1$, ($\mu = 0, 1, \dots, D$ and $i=1,\dots,D$) in the $(D+1)$--dimensional Minkowski space $ds^2 = dX_0^2 - dX_i^2$. Throughout this paper we fix the curvature of the hyperboloid to be one. The expanding Poincare patch (PP) of this space is defined by the coordinates:
\bqa \label{induced}
X_0 = \sinh t + \frac{\vec{x}^2}{2}\, e^t, \quad X_D = - \cosh t + \frac{\vec{x}^2}{2}\, e^t \nn \\
X_a = e^t \, x_a, \quad a=1,\dots, D-1
\eqa
and covers only half of dS space, $X_0 - X_D= e^t \geq 0$. The induced metric in these coordinates is $ds^2 = dt^2 - e^{2\,t}\, d\vec{x}^2 = \frac{1}{\eta^2}\, \left(d\eta^2 - d\vec{x}^2\right)$, where $\eta = e^{-t} = 1/(X_0 - X_D)$. The past infinity of the PP corresponds to $t\to -\infty$, i.e. to $\eta = + \infty$. This is the boundary of the PP inside global dS space, $X_0 = X_D$. The future infinity is at $t=+\infty$, i.e. at $\eta = 0$. The dS isometry is just the rotation symmetry group of the ambient Minkowski space, $SO(D,1)$.
To quantize scalar fields in the PP one has to specify the time--dependent part of the harmonics $g_p(\eta) = \frac{\sqrt{\pi}\, \eta^{\frac{D-1}{2}}}{2} \, h(p\eta)$ inside the harmonic expansion $\phi(\eta, \vec{x}) = \int d^{D-1}p \, \left[a_p \, g_p(\eta) \, e^{-i\, \vec{p}\, \vec{x}} + a^+_p \, g^*_p(\eta) \, e^{i\, \vec{p}\, \vec{x}}\right]$. From the Klein--Gordon equation in the PP it follows that $h(p\eta)$ has to solve the Bessel equation with the index $\mu = \sqrt{m^2 - \left(\frac{D-1}{2}\right)^2}$, where $m$ is the mass of the particle.
In time--dependent backgrounds there is no basis of harmonics which can diagonalize the free Hamiltonian once and for all. The choice of the harmonics in the calculations corresponds to the choice of the background state $a_p \, |vac\rangle = 0$, usually referred to as the vacuum. The Bunch--Davies (BD) vacuum \cite{Bunch:1978yq} corresponds to $h(p\eta) = e^{-\frac{\pi\mu}{2}}\, {\cal H}_{i\mu}^{(1)}(p\eta)$, where ${\cal H}^{(1)}$ is the Hankel function of the first kind. These harmonics behave as $e^{ip\eta}$ at the past infinity and diagonalize the free Hamiltonian only in that part of space--time.
The other so called $\alpha$--vacua can be obtained form the BD one via the corresponding Bogolyubov transformations and correspond to the harmonics which are linear combinations of the Hankel functions of both kinds ${\cal H}^{(1)}$ and ${\cal H}^{(2)}$. See e.g. \cite{Allen:1985ux} for a similar discussion in the global dS coordinates.
Because of the time dependence of the Hamiltonian one has to apply
the Schwinger--Keldysh diagrammatic technique instead of the Feynman one. In this technique every particle is characterized by a matrix of four propagators (see e.g. \cite{Kamenev}, \cite{LL}):
\bqa
G^0_{-+}(X,Y) = i\,\langle \phi(X) \phi(Y) \rangle, \quad G^0_{+-}(X,Y) = i\,\langle \phi(Y) \phi(X) \rangle, \nn \\ G^0_{++}(X,Y) = \langle T \,\phi(X) \phi(Y) \rangle = \theta(\eta_y - \eta_x) \, G^0_{-+}(X,Y) + \theta(\eta_x - \eta_y)\, G^0_{+-}(X,Y), \nn \\
G^0_{--}(X,Y) = \langle \bar{T} \,\phi(X) \phi(Y) \rangle = \theta(\eta_y - \eta_x) \, G^0_{+-}(X,Y) + \theta(\eta_x - \eta_y)\, G^0_{-+}(X,Y),
\eqa
which obey one relation $G^0_{+-} + G^0_{-+} = G^0_{++} + G^0_{--}$.
Here ($\bar{T}$) $T$ is the (anti--)time ordering. Note that the conformal time in our definition flows in the reverse direction ($\infty \to \eta \to 0$) with respect to the ordinary time $t$.
All these propagators can be written with the use of the
Wightman function $G^0(X,Y) \equiv i\,\langle \phi(X) \phi(Y)\rangle$. The latter solves the Klein--Gordon equation in the metric of the PP. This equation is invariant under the full dS isometry, although the coordinates (\ref{induced}) are restricted only to the half of dS space. Hence, the solution of the Klein--Gordon equation should depend on the invariant distance between its two arguments --- the two points on the hyperboloid, $X_\mu^2 = 1$ and $Y_\mu^2 = 1$. A convenient function of the latter on the hyperboloid is the so--called hyperbolic distance $Z = - X_\mu \, Y^\mu$. As follows from (\ref{induced}), it is equal to $Z = 1 + \frac{(\eta_x - \eta_y)^2 - |\vec{x} - \vec{y}|^2}{2\eta_x \, \eta_y}$ in the PP.
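The last expression for $Z$ can be verified directly from the embedding (\ref{induced}). The following numerical sketch (an illustration added here, with $D=4$ chosen by hand) embeds two random points via (\ref{induced}) and compares $-X_\mu\, Y^\mu$ with the PP formula:

```python
import math, random

D = 4  # spacetime dimension of dS; the ambient space has D+1 coordinates

def embed(t, x):
    """Embedding (X_0, X_1, ..., X_{D-1}, X_D) of PP coordinates (t, x)."""
    x2 = sum(c * c for c in x)
    X0 = math.sinh(t) + 0.5 * x2 * math.exp(t)
    XD = -math.cosh(t) + 0.5 * x2 * math.exp(t)
    return [X0] + [math.exp(t) * c for c in x] + [XD]

def Z_embedding(t1, x1, t2, x2):
    X, Y = embed(t1, x1), embed(t2, x2)
    # Minkowski product with signature (+, -, ..., -); Z = -X.Y
    return -(X[0] * Y[0] - sum(a * b for a, b in zip(X[1:], Y[1:])))

def Z_pp(t1, x1, t2, x2):
    ex, ey = math.exp(-t1), math.exp(-t2)  # conformal times eta = e^{-t}
    dx2 = sum((a - b) ** 2 for a, b in zip(x1, x2))
    return 1.0 + ((ex - ey) ** 2 - dx2) / (2.0 * ex * ey)

random.seed(0)
for _ in range(100):
    t1, t2 = random.uniform(-1, 1), random.uniform(-1, 1)
    x1 = [random.uniform(-1, 1) for _ in range(D - 1)]
    x2 = [random.uniform(-1, 1) for _ in range(D - 1)]
    assert abs(Z_embedding(t1, x1, t2, x2) - Z_pp(t1, x1, t2, x2)) < 1e-9
```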
The Klein--Gordon operator, when acting on a function of $Z$, rather than on a function of the two points $X$ and $Y$ separately, is equivalent to $\Box(g) + m^2 = (Z^2 - 1) \pr_Z^2 + D\,Z\, \pr_Z + m^2$ \cite{Allen:1985ux}, \cite{Mottola:1984ar}. After the change of variables to $x=(1+Z)/2$ the Klein--Gordon equation acquires the form of the hypergeometric one. Its solution (away from the singularity) is the following linear combination of the ${}_{2}F_1$ hypergeometric functions:
\bqa\label{green}
G^0(Z) = A_1 \, F\left(\frac{D-1}{2} + i\mu, \frac{D-1}{2} - i\mu; \frac{D}{2}; \frac{1+Z}{2}\right) + \nn \\ + A_2 \, F\left(\frac{D-1}{2} + i\mu, \frac{D-1}{2} - i\mu; \frac{D}{2}; \frac{1-Z}{2}\right), \quad
\mu = \sqrt{m^2 - \left(\frac{D-1}{2}\right)^2}.
\eqa
Here $A_{1,2}$ are some constants which depend on the choice of the $\alpha$--vacuum state with respect to which the averaging is done (see e.g. \cite{Allen:1985ux}). For the BD vacuum $A_2=0$.
To take care of the behavior of this function at its poles and to obtain it as the quantum average $\langle \phi(X) \phi(Y)\rangle$ one has to be more careful.
In fact, the Green function (\ref{green}) has three singular points in the complex $Z$--plane: $Z=-1,1,\infty$. They correspond to the usual singular points $x\equiv (1+Z)/2 = 0,1,\infty$ of the hypergeometric equation. The singular behavior $G^0(Z)\propto 1/(Z-1)^{D/2 - 1}$ corresponds to the situation when $X$ and $Y$ sit on the same light--cone --- the standard UV singularity of the propagator. A similar singularity of $G^0(Z)$ at $Z=-1$ corresponds to the situation when $X$ sits on the light--cone with the apex at the antipodal point of $Y$. The antipodal point is obtained via the reflection at the origin of the ambient Minkowski space \cite{Allen:1985ux}. Finally, at infinity $G^0(Z)$ has a branching point: $\lim_{Z\to\infty} G^0(Z) \propto Z^{-\frac{D-1}{2}} \, \left[C_1 \, Z^{i\,\mu} + C_2\, Z^{- i\, \mu}\right]$ with some constants $C_{1,2}$.
To understand the behavior of $G^0(Z)$ at its poles it is instructive to consider Fourier transform of $G^0(Z)$ along the homogeneous spatial directions:
\bqa\label{Fourier}
\left\langle \phi\left(\eta_x, \vec{p}\right) \phi\left(\eta_y, -\vec{p}\right)\right\rangle \equiv \int d^{D-1}x \, e^{i\, \vec{p}\, \left(\vec{x} - \vec{y}\right)}\, G^0(Z) = \frac{\left(\eta_x\, \eta_y\right)^{\frac{D-3}{2}}}{2}\, h(p\eta_x)\, h^*(p\eta_y).
\eqa
The appearance of different solutions of the Bessel equation in place of $h(p\eta)$ here is in one--to--one correspondence with the concrete values of $A_{1,2}$ in (\ref{green}) \cite{Allen:1985ux}.
Let us consider the BD propagator. Its only singularity inside the complex $Z$--plane is at $Z=1$ and corresponds to the limit $p\to\infty$ in momentum space. In this limit the Hankel functions behave as plane waves. As it should be, high--momentum modes are not sensitive to the curvature of the space--time, i.e. they coincide with the flat--space harmonics.
For the inverse of the transformation (\ref{Fourier}) to be well defined there should be an appropriate shift $\eta_x - \eta_y \to \eta_x - \eta_y \pm i\, \epsilon$ in (\ref{Fourier}). The sign of this shift depends on which one among $\eta_x$ and $\eta_y$ is greater. As a result, for the BD state \cite{Polyakov:2007mm}:
\bqa
G^0_{++}[Z] = G^0[Z - i\, \epsilon], \quad G^0_{+-}[Z] = G^0[Z - i \, \epsilon \, sgn(\eta_x - \eta_y)], \nn \\ G^0_{--}[Z] = G^0[Z + i\, \epsilon], \quad G^0_{-+}[Z] = G^0[Z + i \epsilon \, sgn(\eta_x - \eta_y)].
\eqa
Here $G^0(Z)$ is analytic on the complex $Z$--plane with the single cut going from $Z=1$ to infinity along the real axis.
The situation for the other $\alpha$--vacua is different because in those cases the harmonics are linear combinations of the Hankel functions of the two kinds, ${\cal H}^{(1)}$ and ${\cal H}^{(2)}$. The latter behave at large momenta as $e^{-ip\eta}$ instead of $e^{ip\eta}$. As a result, for the other $\alpha$--vacua $G^0(Z)$ is defined on the complex $Z$--plane with two cuts connecting $Z=1$ and $Z=-1$, respectively, to infinity and going, due to the $i\epsilon$ shifts, in the opposite halves of the complex $Z$--plane.
Let us say a few words about the one--loop contribution to the propagators due to the $\lambda \, \phi^3$ self--interaction. In the Schwinger--Keldysh diagrams there are two types of vertices: of the ``$+$'' and ``$-$'' type, respectively. In the ``$+$'' (``$-$'') type vertex only ``$+$'' (``$-$'') ends of the propagators can terminate. Correspondingly, the one--loop correction can be written as:
\bqa\label{oneloop}
\hat{G}^1(Z_{XY}) = \lambda^2 \, \int [dW] \int [dU]\, \hat{G}^0(Z_{XW}) \, \hat{\Sigma}^0(Z_{WU}) \, \hat{G}^0(Z_{UY}) \eqa
where
\bqa
\hat{G}^{0,1}(Z) = \left(
\begin{array}{cc}
G^{0,1}_{--}(Z) & G^{0,1}_{-+}(Z) \\
G^{0,1}_{+-}(Z) & G^{0,1}_{++}(Z) \\
\end{array}
\right), \quad {\rm and} \quad \hat{\Sigma}^0(Z) = \left(
\begin{array}{cc}
\left[G^0_{--}(Z)\right]^2 & \left[G^0_{-+}(Z)\right]^2 \\
\left[G^0_{+-}(Z)\right]^2 & \left[G^0_{++}(Z)\right]^2 \\
\end{array}
\right)
\eqa
and the measure is $[dW] = d^{(D+1)} W \, \delta\left(W_\mu^2 - 1\right)\,\theta \left(W_0 - W_D\right)$, which is equivalent to the measure $\frac{d\eta}{\eta^D}\, d^{D-1}x$ on PP.
This formula for $\hat{G}^1$ is valid for any $\alpha$--vacuum. Note that in (\ref{oneloop}) the dS isometry is naively broken by the presence of the Heaviside $\theta$--function in the integration measure, which restricts the integration to the PP.
But let us examine how $\hat{G}^{1}$ does change under those transformations of $SO(D,1)$ which change the argument of the $\theta$--function. (Here we reproduce the arguments of \cite{Polyakovtalk}.) Let us perform an infinitesimal rotation around $X_0$ towards say $X_1$: $X_D \to X_D - \varphi \, X_1$. Taylor expanding the integration measure up to the first order in $\varphi$, we get: $\delta\, \int[dW]\dots = \int d^{(D+1)} W \, \delta\left(W_\mu^2 - 1\right)\,\delta\left(W_0 - W_D\right)\, \varphi\, W_1 \dots = \int d(W_0 + W_D) \, d^{(D-1)}W \, \delta\left(W_\mu^2 - 1\right)\,\varphi\, W_1 \dots$.
Hence, the contribution of one diagram from (\ref{oneloop}) to the variation of, say, $G^1_{+-}$ over the BD vacuum state is as follows:
\bqa
\delta_{first} G^{1}_{+-}(X,Y) = \nn \\ \lambda^2 \, \varphi\,\int d^{(D+1)} W \, \delta\left(W_\mu^2 - 1\right)\,\delta\left(W_0 - W_D\right)\, W_1 \int [dU]\times \nn \\ \times G\left[Z_{XW} - i\, \epsilon\right] \, G^2\left[Z_{WU} - i \, \epsilon\right] \, G\left[Z_{UY} - i \epsilon \, sgn\left(\frac{1}{U_0 - U_D} - \frac{1}{Y_0-Y_D}\right)\right] + \nn \\
+ \lambda^2 \, \varphi\,\int [dW] \int d^{(D+1)} U \, \delta\left(U_\mu^2 - 1\right)\,\delta\left(U_0 - U_D\right)\, U_1\times \nn \\ \times G\left[Z_{XW} - i\, \epsilon\right] \, G^2\left[Z_{WU} - i \, \epsilon\right] \, G\left[Z_{UY} - i \epsilon \, sgn\left(\frac{1}{U_0 - U_D} - \frac{1}{Y_0-Y_D}\right)\right] = \nn \\
\lambda^2 \, \varphi\,\int d(W_0 + W_D) \, d^{(D-1)} W \, \delta\left(W_\mu^2 - 1\right)\, W_1 \int [dU]\times \nn \\ \times G\left[Z_{XW} - i\, \epsilon\right] \, G^2\left[Z_{WU} - i \, \epsilon\right] \, G\left[Z_{UY} - i \epsilon \, sgn\left(\frac{1}{U_0 - U_D} - \frac{1}{Y_0-Y_D}\right)\right] + \nn \\
+ \lambda^2 \, \varphi\,\int [dW] \int d(U_0+U_D) \, d^{(D-1)} U \, \delta\left(U_\mu^2 - 1\right)\, U_1\times \nn \\ \times G\left[Z_{XW} - i\, \epsilon\right] \, G^2\left[Z_{WU} - i \, \epsilon\right] \, G\left[Z_{UY} - i \epsilon \right]
\eqa
We are going to show now that
both integrals in the last expression vanish because the integrands of the $d(W_0+W_D)$ and $d(U_0+U_D)$ integrations are analytical functions in the lower complex $(W_0+W_D)$-- and $(U_0+U_D)$--planes, respectively.
Let us examine first the situation with the $d(W_0+W_D)$ integral. As we have pointed out above, its integrand is analytical in the lower half of the $Z$--plane, because the cut goes just above the real axis due to the shift by $i\epsilon$ in the arguments of the propagators. At the same time, $Z_{XW} = -\frac12\, (X_0-X_D)\, (W_0+W_D) - \frac12\, (X_0+X_D)\, (W_0-W_D) + X_a\, W_a$. But $W_0-W_D=0$, because of the presence of $\delta(W_0 - W_D)$ in the integration measure for $\delta G^1$, and $X_0 - X_D \ge 0$, because we are in the PP. Hence, $G(Z)$ as a function of $W_0 + W_D$ has the same analytical properties as it has as a function of $Z_{XW}$. Furthermore, because the propagators have a power--like decay as $(W_0+W_D)\to \infty$, one can close the integration contour by an infinite semicircle in the lower half of the complex $(W_0+W_D)$--plane. The integrand is analytical inside the contour. Hence, the integral is zero. Similar arguments work for the $d(U_0 + U_D)$ integral.
Along the same lines one can show that all the contributions to $\delta G^{1}_{+-}$ do vanish. That is true as well for the infinitesimal rotations in the other directions. Hence, in the case of BD state $G^{1}_{+-}(X,Y)$ is invariant under the full dS isometry and is the function of $Z_{XY}$ only. Similarly one can prove the invariance of the one loop contributions to the other propagators in BD vacuum. Furthermore, one can easily extend these arguments to higher loops.
But all this does not work for the other $\alpha$--vacua, because in that case, as we have mentioned, tree--level propagators have another cut going from $Z=-1$ to infinity and it should be shifted to the other half of the complex $Z$--plane. Hence, loop corrections to the propagators in $\alpha$--vacua respect only that subgroup of all dS isometry, which leaves the PP in question invariant.
1,314,259,993,302 | arxiv | \section{Introduction}\label{sec:Intro}
Protocol sequences, which were first introduced in~\cite{Massey}, provide feedback-free solutions for Media Access Control (MAC) in communication networks.
While the dominant MAC standards for cell-based systems, including cellular networks and Wireless LAN's, are feedback-based, the feedback-free approach has a strong appeal to networks without a backbone hierarchy.
For example, recent works have begun to explore the application of protocol sequences to ad hoc networks, such as \emph{vehicular ad hoc network} (VANET)~\cite{Wong_2014,Wu_Shum_Wong_Shen_2014}.
A fundamental challenge in MAC design is due to the lack of synchronicity among different users who try to access the shared medium.
Protocol sequences are constructed specifically to handle the asynchronous reality.
Intuitively, a good design should ensure that no matter how the sequences are shifted with respect to one another, each sequence should permit its affiliated user to transmit at least one packet without suffering interference from other users.
Protocol sequence sets with this property are commonly referred to as possessing the user-irrepressible (UI) property~\cite{Shum_Wong_2009,Wong_2007}.
It turns out that an important approach to construct UI protocol sequence sets is by means of CAC, which stands for Conflict-avoiding Codes \cite{Fu_Lo_Shum_2014,Momihara_2007,Shum_Wong_Chen_2010}.
Therefore, there is a close tie between protocol sequences and CAC.
The objective of finding UI protocol sequence sets with a large number of sequences and a short sequence period can thus be transformed into finding CACs with a large code size and a short code length.
Although it is difficult to ensure precise user-synchronicity in multi-user communication systems, in many applications it is relatively easy to maintain some rough degree of user synchronicity.
For example, mobile users may have access to a global clock via the GPS, which provides rough time synchronization.
However, due to propagation delays and other engineering restrictions, transmitted signals cannot be completely synchronized (see for example \cite{Wong_2014}).
For partially synchronous applications, protocol sequence sets are only required to observe the UI property for relative shifts up to a certain magnitude.
In this paper, we define a partial shift version of user-irrepressible sequence sets in Section~\ref{sec:UI}.
Two previously known constructions, TDMA and code-based scheduling (via Galois fields or Reed--Solomon codes), are then
introduced to provide a quick baseline for comparison.
Next, we introduce a new concept, called partially conflict-avoiding code (PCAC), in order to build a partially user-irrepressible sequence set.
The definition of a partially conflict-avoiding code will be given in Section~\ref{sec:combin} together with its graphic representation.
A useful tool in combinatorial design called disjoint difference set is also introduced.
In Section~\ref{sec:MainResult} we provide a few families of partially user-irrepressible sequence sets by means of disjoint difference sets.
Comparison of the PCAC approach with TDMA and code-based scheduling will also be given in Section~\ref{sec:MainResult}.
Finally, we study the optimal partially conflict-avoiding codes of small weights in Section~\ref{sec:PCAC23}.
\section{User-Irrepressible sequences}\label{sec:UI}
Let $n$ be a positive integer and $X$ be a binary sequence of length $n$.
The \emph{cyclic shift operator}, $\mathcal{R}$, on $X$ is defined by
$$\mathcal{R}(X(0),X(1),\ldots,X(n-1)):=(X(n-1),X(0),\ldots,X(n-2)),$$
where $X(i)$ denotes the $i$-th component of $X$.
The following definition is an extension of \emph{user-irrepressible} property which is proposed in \cite{Shum_Wong_Chen_2010}.
\begin{definition}
Let $n,k,\Delta$ be integers satisfying $0<k\leq n$ and $0\leq\Delta<n$.
Consider a sequence set with $N$ ($\geq k$) elements, each having a length $n$.
Each element is represented by a shifted version that is obtained by applying the operator $\mathcal{R}$ an arbitrary number of times (say $\tau$), where $0\leq\tau\leq\Delta$.
Denote by $\mathbf{M}$ the $k\times n$ matrix obtained by stacking any $k$ representations one above the other.
The sequence set is \emph{$(n,k;\Delta)$-User-Irrepressible} (UI for short) if we can always find a $k\times k$ submatrix of $\mathbf{M}$ which is a permutation matrix.
\end{definition}
An $(n,k;\Delta)$-UI sequence set is obviously a solution to the problem we formulated in Section~\ref{sec:Intro}.
Throughout this paper, we use $N$, $k$, and $n$ to denote respectively the number of potential users in a system, the maximum number of active users at any time, and the common sequence period.
It is not hard to find an $(n,k;\Delta)$-UI sequence set.
One simple way is based on the TDMA approach.
For $0\leq i \leq k-1$, let $X_i$ be the binary sequence of length $k(\Delta+1)$ composed of all zeroes except for the $i(\Delta+1)$-th position, that is, $X_i(i(\Delta+1))=1$.
Then $\{X_0,X_1,\ldots,X_{k-1}\}$ is obviously an $(n,k;\Delta)$-UI sequence set of length $n=k(\Delta+1)$ and size $N=k$.
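For small parameters the UI property of this TDMA set can be confirmed by exhaustive search. The sketch below (illustrative code added here, with $k=3$ and $\Delta=2$ chosen by hand) builds the sequences, applies every admissible tuple of shifts, and checks for a $k\times k$ permutation submatrix, which is equivalent to every user owning a column in which it transmits alone:

```python
from itertools import product

def tdma_sequences(k, delta):
    """X_i has length k*(delta+1) and a single `one' at position i*(delta+1)."""
    n = k * (delta + 1)
    seqs = []
    for i in range(k):
        x = [0] * n
        x[i * (delta + 1)] = 1
        seqs.append(x)
    return seqs

def shift(x, tau):
    """Apply the cyclic shift operator R tau times: (R^tau x)(j) = x(j - tau)."""
    n = len(x)
    return [x[(j - tau) % n] for j in range(n)]

def is_ui(rows):
    """A k x k permutation submatrix of the stacked matrix exists iff each
    user owns some column whose only `one' belongs to that user."""
    k = len(rows)
    owned = set()
    for j in range(len(rows[0])):
        col = [rows[i][j] for i in range(k)]
        if sum(col) == 1:
            owned.add(col.index(1))
    return len(owned) == k

k, delta = 3, 2
seqs = tdma_sequences(k, delta)
assert all(is_ui([shift(x, t) for x, t in zip(seqs, taus)])
           for taus in product(range(delta + 1), repeat=k))
```

The check runs over all $(\Delta+1)^k = 27$ shift tuples; the disjoint slot windows of width $\Delta+1$ guarantee the exclusive columns.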
In practice, however, the set size $N$ needs to be larger than $k$.
An alternative construction for the case where $N$ is much larger than $k$ is based on Galois fields.
After appending $\Delta$ `zeroes' to all entries of each sequence constructed in \cite{Chlamtac_1994}, we have the following result.
\begin{theorem}(\cite{Chlamtac_1994}, \cite{Wong_2014}) \label{thm:Galois_field}
Let $q$ be a prime power and $m$ a positive integer.
Then for any $\Delta\geq 0$, there exists a $((\Delta+1)q^2,k;\Delta)$-UI sequence set of size $N=q^m$, where the positive integer $k$ satisfies
\begin{equation}\label{eq:GF_condition}
q\geq (k-1)(m-1)+1.
\end{equation}
In general, it provides an $(n,k;\Delta)$-UI sequence set of size $N$ with length
$$n = O\left( \Delta k^2m^2 \right) = O\left( \frac{\Delta k^2\ln^2N}{\ln^2k} \right).$$
\end{theorem}
Note that the parameter $m$ above must be larger than $1$ to make \eqref{eq:GF_condition} meaningful.
It is worth mentioning that in \cite{Rentel_Kunz_2005}, a solution based on Reed-Solomon Codes was proposed which has the same order behavior.
\section{Combinatorial structure}\label{sec:combin}
In this section, we define the new concept of partially conflict-avoiding codes and introduce
two relevant combinatorial structures for analyzing them: graph packings and disjoint difference sets.
The connection of these terms with UI sequence sets can be summarized as follows:
\begin{align}
\begin{array}{rcc}
(n,k;\Delta)\text{-UI sequence set} & {\Longleftarrow} \atop \text{Prop.~\ref{pro:UI_PCAC}} & \text{PCAC}_{\Delta}(n,k) \\
& & \hspace{0.8cm}\Updownarrow \substack{\text{Prop.~\ref{pro:PCAC_packing}}} \\
(n,k,r)\text{-DDS} & {\Longrightarrow} \atop \text{Prop.~\ref{pro:DDS_packing}} & (k,\Delta)\text{-packing} \text{ of }K_n
\end{array}
\end{align}
\medskip
\subsection{CAC and $\text{PCAC}_\Delta$}
Given a binary sequence $X$, the \emph{weight} of $X$, denoted by $\omega(X)$, is the number of `ones' in it.
For integers $n>k>0$, let $\mathcal{S}(n,k)$ denote the set of all binary sequences of length $n$ and weight $k$.
The \emph{Hamming cross-correlation} of binary sequences $X$ and $Y$ is defined by
\begin{equation} \label{eq:full-crosscorel}
H(X,Y):= \max_{\tau}\sum_{i=0}^{n-1}X(i)\mathcal{R}^\tau Y(i),
\end{equation}
where $\tau$ goes from $0$ up to $n-1$.
Note that $H(X,X)=\omega(X)$ for all $X$ and $H(X,Y)\geq 1$ if $X\neq Y$.
\begin{definition} \label{defi:CAC}
A set $\mathcal{C}\subseteq\mathcal{S}(n,k)$ is a \emph{conflict-avoiding code}, CAC, of length $n$ and weight $k$ if $H(X,Y)=1$ for any distinct $X,Y\in\mathcal{C}$.
\end{definition}
Denote by CAC$(n,k)$ the class of all CACs of length $n$ and weight $k$.
The maximum size of codes in CAC$(n,k)$ is denoted by $M(n,k)$.
A code $\mathcal{C}\in \mbox{CAC}(n,k)$ is said to be \emph{optimal} if $|\mathcal{C}|=M(n,k)$.
For more results on optimal CACs, please refer to \cite{Fu_Lin_Mishima_2010,Fu_Lo_Shum_2014,Levenshtein_Tonchev_2005,Momihara_2007,Shum_Wong_Chen_2010}.
In what follows, we generalize the constraint that $\tau$ is arbitrary in \eqref{eq:full-crosscorel}.
Assume that $\Delta$, an integer between 0 and $n-1$, is the maximum number of relative cyclic shifts.
Then the \emph{Hamming cross-correlation of $X,Y\in\mathcal{S}(n,k)$ with respect to $\Delta$} is defined by
\begin{equation}
H_{\Delta}(X,Y):= \max_{0\leq\tau \leq\Delta}\sum_{i=0}^{n-1}X(i)\mathcal{R}^\tau Y(i).
\end{equation}
\begin{definition} \label{defi:partial-CAC}
Let $n,k,\Delta$ be integers with $0<k<n$ and $0\leq\Delta<n$.
A set $\mathcal{C}\subseteq\mathcal{S}(n,k)$ is a \emph{partially conflict-avoiding code with respect to $\Delta$}, $\mbox{PCAC}_{\Delta}$, of length $n$ and weight $k$ if $H_{\Delta}(X,Y)\leq 1$ for any distinct $X,Y\in \mathcal{C}$.
\end{definition}
Similarly, $\text{PCAC}_{\Delta}(n,k)$ denotes the class of all $\mbox{PCAC}_{\Delta}$s of length $n$ and weight $k$, and $M_{\Delta}(n,k)$ denotes the maximum size of codes in $\mbox{PCAC}_{\Delta}(n,k)$.
It is obvious that a $\text{PCAC}_{\Delta}$ admits the UI-property.
\begin{proposition}\label{pro:UI_PCAC}
A code $\mathcal{C}\in\text{PCAC}_{\Delta}(n,k)$ is an $(n,k;\Delta)$-UI sequence set with size $N=|\mathcal{C}|$.
\end{proposition}
Let $n,k,\Delta$ be integers satisfying the setting of Definition~\ref{defi:partial-CAC}.
It is clear that
$$\text{PCAC}_{\Delta}(n,k)\supseteq \text{PCAC}_{\Delta+1}(n,k)\supseteq\cdots\supseteq \text{PCAC}_{n-1}(n,k) = \text{CAC}(n,k),$$
and thus
$$M_{\Delta}(n,k)\geq M_{\Delta+1}(n,k)\geq\cdots\geq M_{n-1}(n,k) = M(n,k).$$
Here is an interesting observation.
\begin{lemma} \label{lem:large-delta}
Let $n,k$ be integers with $n>k>0$.
If $\Delta$ is an integer with $\lfloor\frac n2\rfloor \leq \Delta < n$, then $M_{\Delta}(n,k)=M(n,k)$.
\end{lemma}
\begin{proof}
We first claim that $H_\Delta(X,Y)\geq 1$ for any two distinct sequences $X,Y$ in $\mathcal{S}(n,k)$.
Assume to the contrary that $H_\Delta(X,Y)=0$. Pick any two indices $i,j$ with $X(i)=Y(j)=1$.
For every $\tau=0,1,\ldots,\Delta$, since $X(i)\mathcal{R}^\tau Y(i)=0$, we have $Y(i-\tau)=0$, where the subtraction is taken modulo $n$; that is, there are $\Delta+1$ consecutive `zeroes' from $Y(i-\Delta)$ to $Y(i)$.
Similarly, by considering the terms $X(j+\tau)\mathcal{R}^\tau Y(j+\tau)$, there are $\Delta+1$ consecutive `zeroes' from $X(j)$ to $X(j+\Delta)$.
Since $X(i)=Y(j)=1$, those $2(\Delta+1)$ indices are distinct (see Figure~\ref{fig:large-delta}).
Then we have $2(\Delta+1)\leq n$, which contradicts $\lfloor\frac n2\rfloor \leq \Delta$.
\begin{figure}[h]
\centering
\includegraphics[width=3.5in]{fig-large_delta.pdf}
\caption{Illustration of $X(i)=Y(j)=1$} \label{fig:large-delta}
\end{figure}
Let $\mathcal{C}\in \mbox{PCAC}_{\Delta}(n,k)$.
The above argument, together with the definition of $\mbox{PCAC}_{\Delta}$, shows that $H_\Delta(X,Y)=1$ for any two distinct sequences $X,Y\in\mathcal{C}$.
We now claim that $\mathcal{C}\in \mbox{CAC}(n,k)$.
Assume to the contrary that there exist two distinct sequences $X,Y\in\mathcal{C}$ so that $H(X,Y)\geq 2$.
Interchanging the roles of $X$ and $Y$ if necessary, there exist indices $i_1,i_2,j_1,j_2$ such that $X(i_1)=X(i_2)=1$ and $Y(j_1)=Y(j_2)=1$, where $i_1+\tau\equiv j_1$ (mod $n$) and $i_2+\tau\equiv j_2$ (mod $n$) for some $\tau\leq \Delta$; indeed, shifting $Y$ by $\tau$ coincides with shifting $X$ by $n-\tau$, and $\min\{\tau,n-\tau\}\leq\lfloor\frac n2\rfloor\leq\Delta$. This contradicts $H_{\Delta}(X,Y)=1$.
Hence the proof is completed.
\qed
\end{proof}
\subsection{Graphic representation}\label{sec:PCAC_graph}
Let $\mathbb{Z}_n=\{0,1,\ldots,n-1\}$ denote the ring of residues modulo $n$.
Let $K_n$ denote the complete graph of order $n$ whose vertices are labeled by elements in $\mathbb{Z}_n$.
Given any subset $A\subseteq\mathbb{Z}_n$, let $C_A$ denote the \emph{clique} induced by $A$, namely, the subgraph with vertex set $A$ whose vertices are pairwise adjacent.
A clique of order $t$ is usually called a $t$-clique.
Given an integer $\Delta$ with $0\leq\Delta < n$, the \emph{supporting graph} of $A$ with respect to $\Delta$ is defined as
$$G_{\Delta}(A):= C_{A}\cup C_{A+1}\cup \cdots \cup C_{A+\Delta},$$
where $A+\tau=\{i+\tau ~(\mbox{mod }n) :\, i\in A\}$.
By placing the $n$ vertices of $K_n$ in clockwise order from $0$ to $n-1$, $G_{\Delta}(A)$ can be viewed as the union of $(\Delta+1)$ $|A|$-cliques, each obtained by rotating $C_{A}$ clockwise step by step.
For example, let $n=8$, $\Delta=2$ and $A=\{0,1,2\}$, $B=\{3,5,7\}$, then $A+1=\{1,2,3\}$, $A+2=\{2,3,4\}$, $B+1=\{4,6,0\}$ and $B+2=\{5,7,1\}$.
See Figure~\ref{fig:supporting} for the two supporting graphs: $G_{2}(A)$ and $G_{2}(B)$.
\begin{figure}[h]
\centering
\includegraphics[width=3.5in]{fig-supporting.pdf}
\caption{$G_{2}(\{0,1,2\})$ and $G_{2}(\{3,5,7\})$ in $K_8$} \label{fig:supporting}
\end{figure}
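The supporting graphs in this example are easy to verify computationally. The following Python sketch (an illustration, not part of the construction itself) builds the edge set of $G_\Delta(A)$ and confirms that $G_2(\{0,1,2\})$ and $G_2(\{3,5,7\})$ in $K_8$ are edge-disjoint, with $7$ and $8$ edges, respectively.

```python
from itertools import combinations

def supporting_edges(A, delta, n):
    """Edge set of G_delta(A) = C_A ∪ C_{A+1} ∪ ... ∪ C_{A+delta} in K_n."""
    edges = set()
    for tau in range(delta + 1):
        shifted = [(a + tau) % n for a in A]
        edges.update(frozenset(e) for e in combinations(shifted, 2))
    return edges

GA = supporting_edges({0, 1, 2}, 2, 8)
GB = supporting_edges({3, 5, 7}, 2, 8)
```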
For a binary sequence $X$ of length $n$, the \emph{characteristic set} of $X$ is given by
$$\mathcal{I}_X :=\{t\in\mathbb{Z}_n :\, X(t)=1\}.$$
A cyclic shift of $X$ by $\tau$ corresponds to a translation of $\mathcal{I}_X$ by $\tau$ in $\mathbb{Z}_n$, that is, $\mathcal{I}_{\mathcal{R}^\tau X} = \mathcal{I}_X + \tau$.
Let $n,k,\Delta$ be integers with $0<k<n$ and $0\leq\Delta<n$.
Given two distinct binary sequences $X,Y\in\mathcal{S}(n,k)$, it is easy to see that $H_{\Delta}(X,Y)\leq 1$ if and only if $G_{\Delta}(\mathcal{I}_X)$ and $G_{\Delta}(\mathcal{I}_Y)$ are edge-disjoint.
\begin{definition}\label{defi:packing}
Let $\mathcal{P}=\{P_1,P_2,\ldots,P_N\}$ be a set of $k$-subsets of $\mathbb{Z}_n$.
We say $\mathcal{P}$ is a \emph{$(k,\Delta)$-packing} of $K_n$ if $G_{\Delta}(P_i)$ and $G_{\Delta}(P_j)$ are edge-disjoint whenever $i\neq j$.
\end{definition}
The following proposition follows directly from the definitions.
\begin{proposition}\label{pro:PCAC_packing}
Let $n,k,\Delta$ be integers with $0<k<n$ and $0\leq\Delta<n$.
There exists a code $\mathcal{C}\in\text{PCAC}_{\Delta}(n,k)$ with $|\mathcal{C}|=N$ if and only if $K_n$ has a $(k,\Delta)$-packing $\mathcal{P}=\{P_1,P_2,\ldots,P_N\}$.
More precisely, $\mathcal{P}=\{\mathcal{I}_X:\,X\in\mathcal{C}\}$.
\end{proposition}
A $(k,\Delta)$-packing $\mathcal{P}$ of $K_n$ is said to be \emph{maximum} if the size of $\mathcal{P}$ is maximum.
That is, a maximum $(k,\Delta)$-packing of $K_n$ is equivalent to an optimal $\text{PCAC}_\Delta$ of length $n$ and weight $k$.
\subsection{Disjoint difference set}
\begin{definition}\label{defi:DDS}
An \emph{$(n,k,r)$-disjoint difference set} (\emph{DDS}) is a family $\{B_1,B_2,$ $\ldots,B_r\}$ of $k$-subsets of $\mathbb{Z}_n$ such that among the differences $\{x-y:\,x,y\in B_i,x\neq y,1\leq i\leq r\}$ each nonzero element $g\in\mathbb{Z}_n$ occurs at most once.
\end{definition}
A necessary condition for the existence of an $(n,k,r)$-DDS is
\begin{equation}\label{eq:DDS_necessary}
n\geq r k(k-1)+1.
\end{equation}
An $(n,k,r)$-DDS is called an \emph{$(n,k)$-difference family} (DF) if equality holds in \eqref{eq:DDS_necessary}.
That is, an $(n,k)$-DF is an $(n,k,\frac{n-1}{k(k-1)})$-DDS.
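Definition~\ref{defi:DDS} can be checked mechanically. The Python sketch below is illustrative (the sample blocks are classical examples, not taken from a specific construction in this paper): for instance, $\{0,1,3\}$ is a $(7,3,1)$-DDS, and in fact a $(7,3)$-DF since $7=1\cdot 3\cdot 2+1$.

```python
def is_dds(blocks, n):
    """Return True if `blocks` is an (n, k, r)-DDS: every nonzero difference
    x - y (mod n) within a block occurs at most once overall."""
    seen = set()
    for B in blocks:
        for x in B:
            for y in B:
                if x != y:
                    d = (x - y) % n
                    if d in seen:
                        return False
                    seen.add(d)
    return True
```

Here `is_dds([{0, 1, 3}], 7)` succeeds, while `is_dds([{0, 1, 2}], 7)` fails because the difference $1$ repeats.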
Let $\{B_1,B_2,\ldots,B_r\}$ be an $(n,k,r)$-DDS.
It is easy to check that for any $\Delta,t,t'\geq 0$, the two cliques $C_{B_i+t}$ and $C_{B_i+t'}$ have no common edges whenever $t\neq t'$, and the two supporting graphs $G_\Delta(B_i+t)$ and $G_\Delta(B_j+t')$ are edge-disjoint whenever $i\neq j$.
Hence, we have the following proposition.
\begin{proposition}\label{pro:DDS_packing}
Let $\{B_1,B_2,\ldots,B_r\}$ be an $(n,k,r)$-DDS.
For $0\leq\Delta<n$, there exists a $(k,\Delta)$-packing of $K_n$ with size $r\left\lfloor\frac{n}{\Delta+1}\right\rfloor$.
\end{proposition}
\begin{proof}
By the observation above, the supporting graphs $G_\Delta(B_i+t)$ for $i=1,2,\ldots,r$ and $t=0,(\Delta+1),2(\Delta+1),\ldots, (\lfloor\frac{n}{\Delta+1}\rfloor-1)(\Delta+1)$ form a $(k,\Delta)$-packing of $K_n$.
This concludes the proof.
\qed
\end{proof}
Combining Propositions~\ref{pro:UI_PCAC}, \ref{pro:PCAC_packing} and \ref{pro:DDS_packing}, we conclude that
\begin{theorem}\label{thm:UI_DF}
If there exists an $(n,k,r)$-DDS, then for $0\leq\Delta<n$, there exists an $(n,k;\Delta)$-UI sequence set of size
\begin{equation}\label{eq:UI_size}
N=r\left\lfloor\frac{n}{\Delta+1}\right\rfloor.
\end{equation}
\end{theorem}
\smallskip
In order to obtain $(n,k,r)$-DDSs, we revisit a useful combinatorial structure called difference triangle sets.
\begin{definition} \label{defi:DTS}
A \emph{normalized $(r,k)$-difference triangle set} (\emph{DTS} for short) is a family $\{B_1,B_2,\ldots,B_r\}$, where $B_i=\{b_{i0},b_{i1},\ldots,b_{ik}\}$, $1\leq i\leq r$, are sets of integers such that $0=b_{i0}<b_{i1}<\cdots<b_{ik}$, for all $i$, and such that the differences $b_{ij'}-b_{ij}$ with $1\leq i\leq r$ and $0\leq j<j'\leq k$ are all distinct.
The \emph{scope} of an $(r,k)$-DTS is the maximum integer among $\{b_{1k},b_{2k},\ldots,b_{r k}\}$.
\end{definition}
It is known that a DDS can be obtained from a DTS.
\begin{theorem}\cite{Shearer_2007}\label{thm:DDS_DTS}
An $(r,k-1)$-DTS of scope $m$ is an $(n,k,r)$-DDS for all $n\geq 2m+1$.
\end{theorem}
Please refer to \cite{Chen_1994,Chee_Colbourn_1997,Chu_Colbourn_Golomb_2005,Chen_Fan_Jin_1992,Ling_2002,Shearer_2007} for more information on DDSs and DTSs.
Note that a DDS is also referred to in the literature as a \emph{difference packing (DP)}.
\subsection{An example}
We use an example to illustrate our idea.
Suppose that we aim to construct a $(19,3;5)\text{-UI}$ set of size as large as possible.
The first step is to find a $(19,3,3)\text{-DDS}: B_1=\{0,4,5\},B_2=\{0,6,8\},B_3=\{0,7,10\}$.
Note that $\{B_1,B_2,B_3\}$ forms a difference family.
By Proposition~\ref{pro:DDS_packing}, we have a $(3,5)$-packing of $K_{19}$ as follows:
\begin{align*}
\text{From } B_1&: \{0,4,5\}, \{6,10,11\}, \{12,16,17\},\\
\text{From } B_2&: \{0,6,8\}, \{6,12,14\}, \{12,18,1\},\\
\text{From } B_3&: \{0,7,10\}, \{6,13,16\}, \{12,0,3\}.
\end{align*}
Therefore, by Proposition~\ref{pro:UI_PCAC} and \ref{pro:PCAC_packing}, the $9$ desired sequences are listed below.
$$
\begin{array}{crrr}
B_1: & 1000110000000000000, & 0000001000110000000, & 0000000000001000110, \\
B_2: & 1000001010000000000, & 0000001000001010000, & 0100000000001000001, \\
B_3: & 1000000100100000000, & 0000001000000100100, & 1001000000001000000.
\end{array}
$$
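This construction and the required correlation property can be reproduced programmatically; the following Python sketch regenerates the nine codewords from the difference family and verifies that $H_5(X,Y)\leq 1$ for all distinct pairs.

```python
n, delta = 19, 5
base_blocks = [{0, 4, 5}, {0, 6, 8}, {0, 7, 10}]

codewords = []
for B in base_blocks:
    for t in (0, 6, 12):                      # translates in steps of delta + 1
        support = {(b + t) % n for b in B}
        codewords.append([1 if i in support else 0 for i in range(n)])

def h_delta(X, Y):
    return max(sum(X[i] * Y[(i - tau) % n] for i in range(n))
               for tau in range(delta + 1))

ok = all(h_delta(X, Y) <= 1
         for a, X in enumerate(codewords)
         for b, Y in enumerate(codewords) if a != b)
```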
Let us consider a network of $9$ potential users with the constraint that at most $3$ of them are active at the same time and the maximum relative shift is $5$.
Then the above example (the PCAC approach) provides a solution with sequence length $n=19$.
If we consider the TDMA approach, the length of the sequences must be larger than $9\times 5=45$.
If we consider the GF (or RS code) approach, by taking $k=3, \Delta=5$ and $N\geq 9$ in Theorem~\ref{thm:Galois_field}, we have $m\geq 2$ and $q\geq 3$, and thus $n\geq (5+1)\times 3^2=54$.
This indicates that the PCAC approach is more efficient than the other two methods.
We will study this phenomenon in more detail in the subsequent section.
\subsection{Remarks}
It must be noted that the connection in Proposition~\ref{pro:DDS_packing} is classical.
In fact, such a link is widely used to construct a block design from a difference family, see \cite{Chen_Fan_Jin_1992,Shearer_2007}.
However, it is new to connect it with CAC or protocol sequences.
If we let $D(B)$ denote the set of differences of any two elements in a set $B\subset\mathbb{Z}_n$, then any two sequences $X$ and $Y$ in a CAC have the property that $D(\mathcal{I}_X)\cap D(\mathcal{I}_Y) = \emptyset$.
Since the quantity of sequences is what counts here, a good (or optimal) CAC is designed to make sure each $|D(\mathcal{I}_X)|$ is as small as possible, which is different from the demand of a difference family or a disjoint difference set.
\section{New construction of UI sequence sets}\label{sec:MainResult}
In this section, we first construct a few families of UI sequence sets by means of disjoint difference sets, and then compare them with the UI sequence sets produced in Section~\ref{sec:UI}.
Singer~\cite{Singer_1938} constructed a $(q^2+q+1,q+1,1)$-DDS, and Bose~\cite{Bose_1942} constructed a $(q^2-1,q,1)$-DDS, where $q$ is a prime power. With these DDSs and a construction of Colbourn and Colbourn~\cite{Colbourn_1984}, Chen, Fan and Jin~\cite{Chen_Fan_Jin_1992} proposed two infinite families of disjoint difference sets.
\begin{theorem}\cite{Chen_Fan_Jin_1992}
Let $q$ be a prime power.
\begin{enumerate}[(a)]
\item There exists an $\left(r(q^2+q+1),q+1,r\right)$-DDS for any prime $r>q$.
\item There exists an $\left(r(q^2-1),q,r\right)$-DDS for any prime $r\geq q$.
\end{enumerate}
\end{theorem}
By Theorem~\ref{thm:UI_DF}, we have the following result.
\begin{theorem} \label{thm:constant-k}
Let $q$ be a prime power.
\begin{enumerate}[(a)]
\item For $r=1$, or any prime $r>q$, there exists an $\left(r(q^2+q+1),q+1;\Delta\right)$-UI sequence set with size $$N=r\left\lfloor\frac{r(q^2+q+1)}{\Delta+1}\right\rfloor.$$
\item For $r=1$, or any prime $r\geq q$, there exists an $\left(r(q^2-1),q;\Delta\right)$-UI sequence set with size $$N=r\left\lfloor\frac{r(q^2-1)}{\Delta+1}\right\rfloor.$$
\end{enumerate}
\end{theorem}
Theorem~\ref{thm:constant-k} provides a new method to construct $(n,k;\Delta)$-UI sequence sets for some particular $n$.
We now compare the properties of the three constructions: the PCAC, TDMA and GF (or RS code) methods; see Table~\ref{tab:comparison}.
\begin{table}[h]
\setlength{\belowcaptionskip}{0pt}
\scriptsize
\begin{tabular}{|c||c|c|c|c|}
\hline
& potential users & sequence period & active users & conditions \\ \hline \hline
\multirow{2}{*}{PCAC} & $r\left\lfloor\frac{r(q^2+q+1)}{\Delta+1}\right\rfloor$ & $r(q^2+q+1)$ &
$q+1$ & $\begin{array}{c} q\text{ is a prime power and} \\ r=1 \text{ or } r>q \text{ is a prime} \end{array}$ \\
\cline{2-5}
& $r\left\lfloor\frac{r(q^2-1)}{\Delta+1}\right\rfloor$
& $r(q^2-1)$ & $q$ & $\begin{array}{c} q\text{ is a prime power and} \\ r=1 \text{ or } r\geq q \text{ is a prime} \end{array}$ \\
\hline
$\begin{array}{c} \text{GF} \\ \text{RS code} \end{array}$ & $q^m$ & $q^2(\Delta+1)$ & $k$
& $\begin{array}{c} q\text{ is a prime power and} \\ q\geq (k-1)(m-1)+1 \end{array}$ \\ \hline
TDMA & $k$ & $k(\Delta+1)$ & $k$ & \\
\hline
\end{tabular}
\caption{Comparison of three approaches \label{tab:comparison}}
\normalsize
\end{table}
We first consider the case that all potential users can be active at the same time;
see Figure~\ref{fig_comparison_1} for examples.
For ease of illustration, we fix the number of active users (or potential users) to be $k=p^2+1$ and $\Delta=p^{3/2}$ or $p^2-1$, where $p$ is a prime.
In order to attain $p^2+1$ active users, by Table~\ref{tab:comparison}, the sequence period provided by the PCAC approach is at least $p^4+p^2+1$ (i.e., $r=1$ in Case $(a)$), and that provided by the GF/RS code approach is at least $(p^2+1)^2(\Delta+1)$ (since the parameter $q\geq p^2+1$ in this case).
Note that the curves of TDMA and PCAC approaches overlap in Figure~\ref{fig_comparison_1} (right) since the original sequence periods provided by them differ by 1 ($p^4+p^2+1$ for PCAC and $p^4+p^2$ for TDMA).
\begin{figure}[h]
\centering
\includegraphics[width=4.7in]{fig_comparison_1.pdf}
\caption{$(n,k;(k-1)^{3/4})$-UI and $(n,k;(k-2))$-UI sequence sets for $k=p^2+1$, where $p$ is a prime between $3$ and $73$} \label{fig_comparison_1}
\end{figure}
\noindent
The result reveals that when the number of potential users is almost equal to the maximum number of active users in a system, the TDMA approach performs better, and its advantage over the PCAC approach shrinks as $\Delta$ approaches $k$.
In practice, however, the number of potential users is much larger than the maximum number of active ones.
Consider the following two cases, shown in Figure~\ref{fig:comparison_2}:
The number of active users $k$ is set to be a prime $p$, the number of potential users is $p^3$, and $\Delta$ is $p-1$ or $p^2-1$.
For the PCAC approach, we adopt Case $(b)$ by letting $r=p$ in the case of $\Delta=p-1$, and $r$ be the smallest prime larger than $p^{3/2}$ in the case of $\Delta=p^2-1$.
By Table~\ref{tab:comparison}, the period of sequences with respect to the PCAC (resp. GF/RS code and TDMA) approach is approximately $p^3$ (resp. $4p^3$ and $p^4$) in the first case, where $\Delta=k-1$, and approximately $p^{7/2}$ (resp. $4p^4$ and $p^5$) in the second one, where $\Delta=k^2-1$.
Note that the parameter $m$ in GF/RS code approach is taken to be $3$ to attain the corresponding code size.
One can see that in these two cases, the PCAC approach is much more efficient than the other schemes.
\begin{figure}[h]
\centering
\includegraphics[width=4.7in]{fig_comparison_2.pdf}
\caption{$(n,k;k-1)$-UI and $(n,k;k^2-1)$-UI sequence sets with size $k^3$, where $k$ is a prime between 31 and 499}
\label{fig:comparison_2}
\end{figure}
Roughly speaking, by Table~\ref{tab:comparison}, the PCAC approach provides an $(n,k;\Delta)$-UI sequence set of length $O(k\sqrt{N\Delta})$, while the lengths
of sequences in the TDMA and GF/RS code approaches are respectively $O(N\Delta)$ and $O(\Delta k^2 m^2)$, where $N$ is the code size.
Therefore, the PCAC is more efficient under the condition:
$$k^2 m^4\Delta > N > \frac{k^2}{\Delta}.$$
\section{Partially conflict-avoiding codes of small weight} \label{sec:PCAC23}
In this section, we investigate optimal partially conflict-avoiding codes.
The main technique is to view an optimal $\text{PCAC}_\Delta$ of length $n$ as a maximum packing of $K_n$.
By Lemma~\ref{lem:large-delta}, we only need to consider $\Delta<\lfloor\frac{n}{2}\rfloor$.
\subsection{Weight $k=2$}
Let $i,j$ be the two endpoints of an edge $e$ in $K_n$.
The \emph{difference} of $e$, denoted by $d(e)$, is defined as the smallest nonzero integer $t$ such that
$$i+t\equiv j \text{ (mod $n$) or } j+t\equiv i \text{ (mod $n$).}$$
Note that $1\leq d(e)\leq\frac{n}{2}$ for any edge $e$ in $K_n$.
Note also that in $K_n$ there are exactly $n$ edges of difference $t$ for each $1\leq t < \frac{n}{2}$, and there are exactly $\frac{n}{2}$ edges of difference $\frac{n}{2}$ provided that $n$ is even.
We say an edge $e$ is \emph{exceptional} if $d(e)=\frac{n}{2}$ and is \emph{normal} otherwise.
\begin{lemma}\label{lem:PCAC_k=2}
For $0\leq\Delta<\lfloor\frac{n}2\rfloor$, the maximum size of a $(2,\Delta)$-packing of $K_n$ is $\frac{n-1}{2}\lfloor \frac n{\Delta+1} \rfloor$ if $n$ is odd, and $(\frac{n}{2}-1)\lfloor\frac{n}{\Delta+1}\rfloor + \lfloor\frac{n}{2\Delta+2}\rfloor$ if $n$ is even.
\end{lemma}
\begin{proof}
Assume that $\mathcal{P}$ is a maximum packing.
For each $A\in\mathcal{P}$, the supporting graph $G_\Delta(A)$ consists of $\Delta+1$ edges, all with the same difference $d$.
Hence the difference $d$ can support at most $\lfloor\frac{n}{\Delta+1}\rfloor$ supporting graphs if $d<\frac{n}{2}$, and at most $\lfloor\frac{n/2}{\Delta+1}\rfloor$ supporting graphs if $d=\frac{n}{2}$.
Conversely, a packing attaining these counts is straightforward to construct by translating each pair in steps of $\Delta+1$.
Hence the result follows.
\qed
\end{proof}
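The constructive direction of Lemma~\ref{lem:PCAC_k=2} can be made explicit: within each difference class, take pairs whose supporting graphs start $\Delta+1$ apart. The following Python sketch (illustrative) builds such a packing and checks that its size matches the formula and that the supporting graphs are pairwise edge-disjoint.

```python
def pairs_packing(n, delta):
    """A (2, delta)-packing of K_n built difference class by difference class."""
    blocks = []
    for d in range(1, (n + 1) // 2):          # normal differences
        for t in range(0, (n // (delta + 1)) * (delta + 1), delta + 1):
            blocks.append((t, (t + d) % n))
    if n % 2 == 0:                            # exceptional difference n / 2
        half = n // 2
        for t in range(0, (half // (delta + 1)) * (delta + 1), delta + 1):
            blocks.append((t, t + half))
    return blocks

def supporting_edges(pair, delta, n):
    i, j = pair
    return {frozenset(((i + tau) % n, (j + tau) % n)) for tau in range(delta + 1)}

def is_edge_disjoint(blocks, delta, n):
    graphs = [supporting_edges(b, delta, n) for b in blocks]
    union = set().union(*graphs)
    return len(union) == sum(len(g) for g in graphs)
```

For $(n,\Delta)=(12,2)$ the construction even uses all $\binom{12}{2}=66$ edges of $K_{12}$.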
Combining Lemma~\ref{lem:PCAC_k=2} and Proposition~\ref{pro:PCAC_packing} together with the fact that $M(n,2)=\lfloor\frac{n}{2}\rfloor$, we have:
\begin{theorem} Let $n,\Delta$ be integers with $0\leq\Delta<n$. Then \[ M_\Delta(n,2)=\left\{
\begin{array}{ll}
\frac{n-1}{2}\lfloor \frac n{\Delta+1} \rfloor & \mbox{if $n$ is odd and } \Delta\leq\frac{n-3}2; \\
(\frac{n}{2}-1)\lfloor\frac{n}{\Delta+1}\rfloor + \lfloor\frac{n}{2\Delta+2}\rfloor & \mbox{if $n$ is even and } \Delta\leq\frac{n-2}2; \\
\lfloor \frac n2 \rfloor & \mbox{otherwise.}
\end{array} \right. \]
\end{theorem}
\subsection{Weight $k=3$}
Let $A$ be a $3$-subset of $\mathbb{Z}_n$ and $\Delta$ be an integer with $0\leq\Delta < \frac n2$.
If two of the three edges in $C_A$ have the same difference, then the number of edges in $G_\Delta(A)$, denoted by $\|G_\Delta(A)\|$, can be determined by the two distinct differences.
For example, let $n=8$.
There are seven edges (four of difference $1$ and three of difference $2$) in $G_2(\{0,1,2\})$, and eight edges (five of difference $2$ and three of difference $4$) in $G_2(\{3,5,7\})$, see Figure~\ref{fig:supporting}.
We characterize this phenomenon below.
\begin{lemma} \label{lem:n=3-size}
Let $A$ be a 3-subset of $\mathbb{Z}_n$ and $\Delta$ be an integer with $0\leq\Delta < \lfloor\frac{n}2\rfloor$.
If there exist two edges in $C_A$ with the same difference $d$ such that $d\neq \frac{n}{3}$, then
$$ \|G_\Delta(A)\|=\left\{
\begin{array}{ll}
2\Delta+2+d & \text{if } d\leq\Delta, \\
3(\Delta+1)& \text{if } d>\Delta,
\end{array}\right. $$
where $\|G_\Delta(A)\|$ is the number of edges in $G_\Delta(A)$.
\end{lemma}
\begin{proof}
Assume $A=\{i,j,k\}$ and $i-j\equiv j-k\equiv d$ (mod $n$).
Let $E_1=\bigcup_{\tau=0}^\Delta \{i+\tau,j+\tau\}$, $E_2=\bigcup_{\tau=0}^\Delta \{j+\tau,k+\tau\}$ and $E_3=\bigcup_{\tau=0}^\Delta \{i+\tau,k+\tau\}$ be the sets of edges in $G_\Delta(A)$.
It is easy to see that $E_1\cap E_2$ is empty if $d>\Delta$ and is equal to $\{i,j\}\cup \cdots \cup\{i+\Delta-d,j+\Delta-d\}$ if $d\leq \Delta$.
That is, there are $\Delta-d+1$ repeated edges if $d\leq \Delta$.
Since $d\neq \frac n3$, $E_1\cap E_3=\emptyset$ and $E_2\cap E_3=\emptyset$.
This completes the proof. \qed
\end{proof}
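The case analysis in Lemma~\ref{lem:n=3-size} can be confirmed by brute force; the following Python sketch (with illustrative parameters) compares the formula with a direct edge count for $A=\{0,d,2d\}$ in $K_{30}$.

```python
from itertools import combinations

def num_edges(A, delta, n):
    """|G_delta(A)|: count the distinct edges of the union of rotated cliques."""
    edges = set()
    for tau in range(delta + 1):
        shifted = [(a + tau) % n for a in A]
        edges.update(frozenset(e) for e in combinations(shifted, 2))
    return len(edges)

n, delta = 30, 4
checks = []
for d in range(1, 10):                        # skip d = n/3 = 10
    A = {0, d, 2 * d}                         # two edges of C_A have difference d
    expected = (2 * delta + 2 + d) if d <= delta else 3 * (delta + 1)
    checks.append(num_edges(A, delta, n) == expected)
```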
We note here that if $d=\frac{n}{3}$ in the above lemma, then $\|G_\Delta(A)\|=3(\Delta+1)$ if $\frac{n}{3}>\Delta$ and $\|G_\Delta(A)\|=n$ if $\frac{n}{3}\leq\Delta$.
We have the following result.
\begin{lemma} \label{lem:k=3upper}
Let $\mathcal{P}$ be a maximum $(3,\Delta)$-packing of $K_n$, where $n$ and $\Delta$ are positive integers with $\Delta<\lfloor\frac{n}2\rfloor$.
Then
\begin{equation}
|\mathcal{P}| < \frac{n(n-1)}{6(\Delta+1)} + \frac{2\ln{2}-1}{3}n + \frac{n}{3(\Delta+1)}.
\end{equation}
\end{lemma}
\begin{proof}
We only consider $3\nmid n$ because the case $3|n$ can be dealt with in the same way.
For $d=1,\ldots,\Delta$, let $T_d\subset\mathcal{P}$ be the collection of 3-subsets $A$ such that in $C_A$, some two edges are of the same difference $d$.
The cardinality of $T_d$ is denoted by $t_d$.
By Lemma~\ref{lem:n=3-size}, each $T_d$ corresponds to $(2\Delta+2+d)t_d$ edges and each of the remaining 3-subsets (not in some $T_d$) corresponds to $3(\Delta+1)$ edges.
Furthermore, every $G_{\Delta}(A)$ for $A\in T_d$ contains exactly $\Delta+d+1$ edges with difference $d$, so $t_d\leq\frac{n}{\Delta+d+1}$.
Then, writing $M=|\mathcal{P}|$,
\begin{align*}
M &\leq ~t_1+t_2+\cdots+t_\Delta + \frac{{n\choose 2}-((2\Delta+3)t_1+(2\Delta+4)t_2+\cdots+(3\Delta+2)t_\Delta)}{3(\Delta+1)}\\
&= ~\frac{n(n-1)}{6(\Delta+1)} + \frac{\Delta t_1+(\Delta-1)t_2+\cdots +t_\Delta}{3(\Delta+1)}\\
&\leq ~\frac{n(n-1)}{6(\Delta+1)} + \frac{n}{3(\Delta+1)} \sum_{d=1}^\Delta \frac{\Delta+1-d}{\Delta+1+d}.
\end{align*}
For the last summation, we have
\begin{align*}
\sum_{d=1}^\Delta \frac{\Delta+1-d}{\Delta+1+d} \leq& ~\int_0^\Delta \left(\frac{\Delta+1-x}{\Delta+1+x}\right) \text{d}x = \int_0^\Delta \left(\frac{2(\Delta+1)}{\Delta+1+x}-1\right) \text{d}x \\
=& ~2(\Delta+1) \ln(\frac{2\Delta+1}{\Delta+1})-\Delta \leq 2(\Delta+1) \ln2 -\Delta,
\end{align*}
and thus the result follows. \qed
\end{proof}
The following difference triangle sets can be constructed from \emph{Skolem sequences}~\cite{Skolem_1957} and \emph{hooked Skolem sequences}~\cite{OKeefe_1961}.
\begin{theorem}\cite{OKeefe_1961,Skolem_1957} \label{thm:3-DF}
There exists an $(r,2)$-DTS with scope $3r$ whenever $r\equiv 0,1$ (mod 4), and with scope $3r+1$ whenever $r\equiv 2,3$ (mod 4).
\end{theorem}
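An $(r,2)$-DTS with scope $3r$ amounts to partitioning $\{1,\ldots,3r\}$ into $r$ triples $(a,b,a+b)$ and taking base blocks $\{0,a,a+b\}$, so that all $3r$ differences are distinct; Skolem-type sequences provide explicit solutions. The short backtracking search below is a brute-force illustration of this reformulation (not the Skolem construction itself), applied to $r=4$.

```python
def split_partition(r):
    """Partition {1, ..., 3r} into triples (a, b, a+b); each triple yields
    the DTS base block {0, a, a+b}."""
    def search(remaining, triples):
        if not remaining:
            return triples
        c = max(remaining)                    # the largest value must be a sum
        rest = remaining - {c}
        for a in sorted(rest):
            if a >= c - a:
                break                         # would repeat pairs (a, c-a)
            if c - a in rest:
                found = search(rest - {a, c - a}, triples + [(a, c - a, c)])
                if found:
                    return found
        return None
    return search(set(range(1, 3 * r + 1)), [])
```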
By Theorem~\ref{thm:DDS_DTS}, there exists an $(n,3,r)$-DDS for all $n\geq 6r+1$ whenever $r\equiv 0,1$ (mod 4), and $n\geq 6r+3$ whenever $r\equiv 2,3$ (mod 4).
Applying Proposition~\ref{pro:DDS_packing} we obtain the following result.
\begin{lemma} \label{lem:k=3lower}
Let $n,\Delta$ be positive integers such that $\Delta<\lfloor\frac{n}2\rfloor$.
There exists a $(3,\Delta)$-packing $\mathcal{P}$ of $K_n$ with
$$|\mathcal{P}|=\left\lfloor\frac{n-1}6\right\rfloor \left\lfloor\frac{n}{\Delta+1}\right\rfloor.$$
\end{lemma}
The following result can be obtained by Proposition~\ref{pro:PCAC_packing} together with Lemmas~\ref{lem:k=3upper} and \ref{lem:k=3lower}.
\begin{theorem}
Let $n,\Delta$ be positive integers such that $\Delta<\lfloor\frac{n}2\rfloor$.
Then
$$\left\lfloor\frac{n-1}6\right\rfloor \left\lfloor\frac{n}{\Delta+1}\right\rfloor \leq M_\Delta(n,3)\leq \frac{n(n-1)}{6(\Delta+1)} + \frac{2\ln{2}-1}{3}n + \frac{n}{3(\Delta+1)}.$$
\end{theorem}
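The two bounds are easy to evaluate numerically; the Python sketch below (with sample parameters chosen for illustration) computes them for $n=1000$ and $\Delta=\lfloor\sqrt{n}\rfloor$.

```python
import math

def lower_bound(n, delta):
    return ((n - 1) // 6) * (n // (delta + 1))

def upper_bound(n, delta):
    return (n * (n - 1) / (6 * (delta + 1))
            + (2 * math.log(2) - 1) / 3 * n
            + n / (3 * (delta + 1)))

n = 1000
delta = math.isqrt(n)          # delta = 31
lo, up = lower_bound(n, delta), upper_bound(n, delta)
```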
Table~\ref{tab:bounds_PCAC3} lists some upper and lower bounds of $M_{\Delta}(n,3)$ for $\Delta=\sqrt{n}$, where $n=200t$ for $t=1,2,\ldots,18$.
As the data suggest, the gap between the two bounds, relative to $n$, shrinks as $n$ grows.
Generally speaking, if $\Delta$ is fixed (a constant or a function of $n$), then the code size obtained from disjoint difference sets asymptotically attains the order $\frac{n^2}{6\Delta}$ of the theoretical upper bound as $n\rightarrow \infty$.
\begin{table}[h]
\setlength{\belowcaptionskip}{0pt}
\scriptsize
\begin{tabular}{|c||c|c|c|c|c|c|c|c|c|}
\hline
$n$ & $200$ & $400$ & $600$ & $800$ & $1000$ & $1200$ & $1400$ & $1600$ & $1800$ \\ \hline \hline
Upper bound & $442$ & $1273$ & $2357$ & $3647$ & $5114$ & $6739$ & $8509$ & $10413$ & $12441$ \\ \hline
Lower bound & $429$ & $1254$ & $2277$ & $3591$ & $4980$ & $6567$ & $8388$ & $10374$ & $12259$ \\ \hline \hline
$n$ & $2000$ & $2200$ & $2400$ & $2600$ & $2800$ & $3000$ & $3200$ & $3400$ & $3600$ \\ \hline \hline
Upper bound & $14588$ & $16846$ & $19212$ & $21679$ & $24244$ & $26904$ & $29655$ & $32494$ & $35419$ \\ \hline
Lower bound & $14319$ & $16470$ & $19152$ & $21650$ & $23766$ & $26447$ & $29315$ & $32262$ & $35341$ \\ \hline
\end{tabular}
\caption{Upper and lower bounds on $M_{\sqrt{n}}(n,3)$}
\label{tab:bounds_PCAC3}
\normalsize
\end{table}
\subsection{Weight $k=4,5,6,7$}
Here are some difference family results for $k=4,5,6,7$.
\begin{theorem}\cite{Chen_Zhu_1999,Chen_Zhu_1998,Chen_Zhu_2002} \label{thm:DF_4567}
\begin{enumerate}[(i)]
\item For any prime $p\equiv 1$ (mod $12$) there exists a $(p,4)$-DF.
\item For any prime $p\equiv 1$ (mod $20$) there exists a $(p,5)$-DF.
\item For any prime $p\equiv 1$ (mod $30$) there exists a $(p,6)$-DF with one exception of $p=61$.
\item Let $p\equiv 1$ (mod $42$) be a prime and $p\neq 43,127,211$. Then there exists a $(p,7)$-DF whenever $(-3)^{\frac{p-1}{14}}\neq 1$ in $\mathbb{Z}_p$ or $p<261239791$ or $p>1.236597\times 10^{13}$.
\end{enumerate}
\end{theorem}
Since an $(n,k)$-DF is an $(n,k,\frac{n-1}{k(k-1)})$-DDS, the corresponding $\text{PCAC}_\Delta$s are obtained directly by Proposition~\ref{pro:PCAC_packing} and \ref{pro:DDS_packing}.
In Table~\ref{tab:PCAC_4567} we consider $\Delta=\sqrt{n}$ and list some examples of small $n$ which satisfy the conditions in Theorem~\ref{thm:DF_4567}.
We note here that more $\text{PCAC}_\Delta$s, especially for small weights, can be produced by a recursive construction of DTSs with minimum scope \cite{Chu_Colbourn_Golomb_2005}.
\begin{table}[h]
\setlength{\belowcaptionskip}{0pt}
\scriptsize
\begin{tabular}{|c||c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
$n$ & $13$ & $37$ & $61$ & $73$ & $97$ & $109$ & $157$ & $181$ & $193$ & $229$ & $241$ & $277$ \\ \hline
$M_{\sqrt{n}}(n,4)$ & $2$ & $15$ & $30$ & $42$ & $64$ & $81$ & $143$ & $180$ & $192$ & $266$ & $280$ & $345$ \\ \hline \hline
$n$ & $41$ & $61$ & $101$ & $181$ & $241$ & $281$ & $401$ & $421$ & $461$ & $521$ & $541$ & $601$ \\ \hline
$M_{\sqrt{n}}(n,5)$ & $10$ & $18$ & $45$ & $108$ & $168$ & $210$ & $380$ & $399$ & $460$ & $546$ & $594$ & $690$ \\ \hline \hline
$n$ & $31$ & $151$ & $181$ & $211$ & $241$ & $271$ & $331$ & $421$ & $541$ & $571$ & $601$ & $631$ \\ \hline
$M_{\sqrt{n}}(n,6)$ & $4$ & $55$ & $72$ & $91$ & $112$ & $135$ & $187$ & $266$ & $396$ & $418$ & $460$ & $504$ \\ \hline \hline
$n$ & $337$ & $379$ & $421$ & $463$ & $547$ & $631$ & $673$ & $757$ & $883$ & $967$ & $1009$ & $1051$ \\ \hline
$M_{\sqrt{n}}(n,7)$ & $136$ & $162$ & $190$ & $220$ & $286$ & $360$ & $384$ & $468$ & $588$ & $690$ & $720$ & $775$ \\ \hline
\end{tabular}
\caption{Some lower bounds on $M_{\sqrt{n}}(n,k)$ for $k=4,5,6,7$}
\label{tab:PCAC_4567}
\normalsize
\end{table}
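As a concrete check, the classical planar difference set $\{0,1,3,9\}$ in $\mathbb{Z}_{13}$ is a $(13,4)$-DF (this particular base block is a standard example, not necessarily the one used to compile the table). The Python sketch below verifies that every nonzero residue occurs exactly once as a difference.

```python
def is_difference_family(blocks, n, k):
    """Check that `blocks` are k-subsets of Z_n whose internal differences
    cover every nonzero residue exactly once."""
    diffs = []
    for B in blocks:
        if len(B) != k:
            return False
        diffs += [(x - y) % n for x in B for y in B if x != y]
    return sorted(diffs) == list(range(1, n))

df13 = [{0, 1, 3, 9}]
```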
\section{Concluding remarks}\label{sec:conclusion}
In this paper we construct infinitely many new $(n,k;\Delta)$-UI sequence sets by means of $\text{PCAC}_\Delta$s and disjoint difference sets.
For some particular $n$, we are able to obtain an asymptotically optimal $\text{PCAC}_\Delta$ of length $n$ and weight three.
\section{Introduction}
Power generation currently undergoes a paradigm shift:
The rapid deployment of renewable energy generation units leads to varying operating conditions, fluctuating reserve capacities, and increasingly changing power flows on different levels of the grid, among other effects.
As the investment in sustainable energy production continues, operational control strategies have to be developed allowing secure grid operation despite the presence of uncertainties.
Worst-case methods \cite{Warrington13} and stochastic programming methods \cite{Bouffard08} have been proposed to deal with uncertainties in power systems applications on the secondary and tertiary control level.
The present paper focuses on chance-constrained optimal power flow (cc\textsc{opf}\xspace) formulations \cite{Zhang10,Vrakopoulou12,Bienstock14,Roald13}.
These formulations offer a framework to compute power injections that guarantee power balance, while respecting generation and transmission limits with a user-specified probability.
Much of the literature on cc\textsc{opf}\xspace focuses on the \textsc{dc}\xspace setting \cite{Vrakopoulou12,Bienstock14,Roald13,Roald15b,Muehlpfordt17a}.
Recently, however, the \textsc{ac}\xspace case has been considered too \cite{Roald17,Muehlpfordt16b,Vrakopoulou13}.
The setup of cc\textsc{opf}\xspace is conceptually appealing, because it extends conventional automatic generation control to the uncertain case; control policies rather than single generator set points are determined.
The parameters of the control policies are the decision variables of cc\textsc{opf}\xspace problems.
Immediately upon measuring a specific realization, the optimal control policies from cc\textsc{opf}\xspace provide power injections which satisfy the generation and transmission limits in the usual chance constraint sense.
However, chance-constrained optimization problems can be intrinsically difficult to solve.
This is why much effort has been put into reformulating chance constraints as deterministic (convex) constraints that lead to tractable optimization problems \cite{Bienstock14,Roald15b,Vrakopoulou13c}.
Effectively, chance constraint reformulations introduce constraint tightening; the so-called \emph{uncertainty margin} \cite{Qu15}.
Constraint tightening, in turn, may lead to conservative cc\textsc{opf}\xspace solutions, which may then result in higher (expected) operational costs of the power system.
In other words, as claimed in \cite[p. 5]{Kall94}: ``The stochastic solution [\ldots] is normally never optimal [\ldots], at the same time, it is also hardly ever really bad.''
Chance-constrained optimal power flow is fundamentally challenging because control policies have to be determined \emph{before} the realization of the uncertainty is known.
This induces a henceforth called ``price of uncertainty'' that has---as the above considerations show---at least four dimensions in the context of power flow problems:\footnote{Relative priorities of these dimensions are mostly case-specific and thus in general hard to state.}
\begin{itemize}
\item \emph{Cost}: What additional (expected) monetary costs are induced due to uncertainties?
\item \emph{Operation}: How does the grid operation change in the presence of uncertainties?
\item \emph{Computation}: Is the optimization problem from cc\textsc{opf}\xspace computationally tractable?
\item \emph{Feasibility}: Are the operating limits satisfied for all realizations?
\end{itemize}
In contrast to cc\textsc{opf}\xspace that considers stochasticity of the uncertainty explicitly via random variables, online optimization approaches to power flow problems \cite{Hauswirth17,DallAnese16,Gan16} react directly to the \emph{actual realization} of the uncertainty.
Online optimization approaches hence admit an intuitive interpretation as (implicitly defined) feedback controllers acting on the physical grid, which becomes the controlled system \cite{Hauswirth17,Gan16}.
We remark that in the last 25 years the systems and control community has witnessed vast research efforts in the field of feedback control via numerical online optimization, i.e. model predictive control; moreover, it has changed industrial control practice \cite[p. xi]{maciejowski2002predictive}.
When online optimization is applied to power systems, it consequently brings about the challenges known from model predictive control: state estimation; the immediate, accurate, and reliable solution of (non-convex) optimization problems; and the implementation thereof on existing automation systems.
In view of the above, it is fair to ask for the potential losses/gains of control policies obtained via cc\textsc{opf}\xspace compared to online optimization.
To the end of providing first elements of an answer, herein we introduce fully-informed in-hindsight \textsc{opf}\xspace (h\textsc{opf}\xspace).
This refers to the unrealistic case that the solution of the \textsc{opf}\xspace problem is immediately known \emph{for all} realizations of the uncertainty, whereby
each individual h\textsc{opf}\xspace solution is per-sample optimal.
Per-sample optimality here means that for the respective realization the minimum cost is attained, and the constraints are strictly satisfied.
In contrast to cc\textsc{opf}\xspace, the constraints are never violated with h\textsc{opf}\xspace; to quote again \cite[p. 5]{Kall94}: ``The \textsc{iq} of hindsight is very high.''
Note that the h\textsc{opf}\xspace solution contains the solution obtained via online optimization.
The contribution of the present paper is to answer the question when cc\textsc{opf}\xspace and h\textsc{opf}\xspace provide equivalent solutions.
In that case, the reaction to any realization of the uncertainty is computed in a single numerical run \emph{and} known to be per-sample optimal.
An online optimization is then unnecessary and can be replaced by a single control policy that is computed once offline.
In case of cc\textsc{opf}\xspace and h\textsc{opf}\xspace providing different solutions, we suggest using the total variational distance as a metric to quantify the price of uncertainty.
Our findings are illustrated by means of a tutorial three-bus system.
\section{Problem Setup}
Consider a power system with $n$ buses and $m$ lines.
The bus index set is $\mathcal{N} = \{1, \hdots, n\}$ with $| \mathcal{N} | = n$.
For ease of presentation, we assume that a single generator and a single load are connected to every bus.
Additionally, the load is uncertain in the sense that
the actual value of the load is unknown.
Hence, the load is modeled as a continuous second-order random variable, i.e. $\rvpdvar{i} \in \mathrm{L}^2(\Omega_i, \mu_i; \mathbb{R})$ for every bus $i \in \mathcal{N}$.\footnote{%
The space $\mathrm{L}^2(\Omega_i, \mu_i; \mathbb{R})$ is the Hilbert space of second-order $\mathbb{R}$-valued random variables with support $\Omega_i$ and probability measure $\mu_i$ \cite{Sullivan15book}.
For simplicity, we assume the support of $\mu_i$ is equal to the sample space $\Omega_i$.}
To ensure power balance in the presence of uncertainties, at least one controllable active power injection must then necessarily be modeled as a random variable, too.
In other words, if a load changes unexpectedly, some generator has to change its injection too.
To simplify presentation, we assume that all controllable generators might in principle react to the uncertainties; thus, every generator is modeled as a random variable.\footnote{\label{footnote:TrivialUncertainty}If a generator/load is \emph{not} uncertain, it can be modeled as a random variable with a Dirac-delta probability density centered around the deterministic value.}
Then, the net active and net reactive power for bus $i$ are modeled as the following random variables
\begin{subequations}
\label{eq:NetPower_rv}
\begin{align}
\rv{p}_i &= \rvpvar{i} + \rvpdvar{i} \in \mathrm{L}^2(\Omega_i, \mu_i; \mathbb{R}), \\
\rv{q}_i &= \rvqvar{i} + \rvqdvar{i} \in \mathrm{L}^2(\Omega_i, \mu_i; \mathbb{R}).
\end{align}
\end{subequations}
For each realization of the random variables in \eqref{eq:NetPower_rv},
\begin{subequations}
\label{eq:NetPower_realization}
\begin{align}
p_i &= \pvar{i} + \pdvar{i} \in \mathbb{R}, \\
q_i &= \qvar{i} + \qdvar{i} \in \mathbb{R},
\end{align}
\end{subequations}
the optimal power flow (\textsc{opf}\xspace) problem can be formulated as
\begin{subequations}
\label{eq:AC_OPF}
\begin{align}
\underset{\pvar{}, \qvar{}}{\min}\quad & \sum_{i \in \mathcal{N}} c_i(\pvar{i}) \\
\mathrm{s.\, t.} \, ~ ~ ~ \,
\label{eq:PFE}
& g(\pvar{},\qvar{},v,\theta; \pdvar{}, \qdvar{}) = 0,\\
\label{eq:CON_PV}
& x_i^{\text{min}} \leq x_i \leq x_i^{\text{max}}, && \forall i \in \mathcal{N},\\
& \nonumber \forall x_i \in \{ \pvar{i}, \, \qvar{i}, v_i, \theta_i \}, \\
& \theta_{i_s} = 0, && i_s \in \mathcal{N},\\
\label{eq:CON_i}
& i_{i,j}^{\text{min}} \leq i_{i,j} \leq i_{i,j}^{\text{max}}, && \forall i,j \in \mathcal{N},
\end{align}
\end{subequations}
where $v_i$ and $\theta_i$ are the magnitude and phase of the voltage phasor at bus~$i$, respectively, and $i_{i,j}$ is the magnitude of the transmitted current between buses~$i$ and~$j$.
Problem~\eqref{eq:AC_OPF} minimizes the sum of active power generation costs $c_i$ subject to the power flow equations \eqref{eq:PFE}, generation constraints and voltage constraints \eqref{eq:CON_PV}, the slack constraint, and transmission constraints \eqref{eq:CON_i}.
For simplicity, the high-voltage solution to Problem \eqref{eq:AC_OPF} is assumed to exist for all realizations of \eqref{eq:NetPower_rv}.
The minimizer of the \textsc{opf}\xspace Problem \eqref{eq:AC_OPF} depends on the specific realization of the power demands $\pdvar{}$ and $\qdvar{}$
\begin{equation}
\label{eq:argmin_operator}
\begin{bmatrix}
\pvar{}\phantom{}^\star \\
\qvar{}\phantom{}^\star
\end{bmatrix}
=
\begin{bmatrix}
\nu_p(\pdvar{}, \qdvar{}) \\
\nu_q(\pdvar{}, \qdvar{})
\end{bmatrix}
:= \underset{\pvar{}, \qvar{}}{\operatorname{argmin}} \: \, \text{Problem \eqref{eq:AC_OPF}},
\end{equation}
where $\nu_p, \nu_q: \mathbb{R}^{2 n} \rightarrow \mathbb{R}^{n}$ comprise the argmin operator of the \textsc{opf}\xspace Problem \eqref{eq:AC_OPF}.
In-hindsight \textsc{opf}\xspace refers to the unrealistic situation, in which the solution to the \textsc{opf}\xspace Problem~\eqref{eq:AC_OPF} is known and immediately available \emph{for all} realizations of \eqref{eq:NetPower_rv}.
Consequently, the result of h\textsc{opf}\xspace is itself a random variable.
More precisely, it is the pair of optimal active and reactive power generations $\rv{p}^{g \star}, \rv{q}^{g \star}$, which are themselves random variables.
Algorithm~\ref{fig:Algorithm_hopf} summarizes the pseudo code of h\textsc{opf}\xspace.
\begin{algorithm}
\setstretch{1.2}
\vspace{2mm}
\centering
\small
\begin{algorithmic}[1]
\State Choose number of samples $N \in \mathbb{N}$.
\State Draw $N$ samples $\{(\pdvar{})_k, (\qdvar{})_k \}_{k=1}^N$ from $\rvpdvar{}, \rvqdvar{}$.
\For{$k = 1, \hdots, N$}
\State Pick $k$\textsuperscript{th} sample $(\pdvar{})_k, (\qdvar{})_k$.
\State Solve
$$
\begin{bmatrix}
(\pvar{}\phantom{}^\star)_k\\
(\qvar{}\phantom{}^\star)_k
\end{bmatrix}
=
\begin{bmatrix}
\nu_p((\pdvar{})_k, (\qdvar{})_k) \\
\nu_q((\pdvar{})_k, (\qdvar{})_k)
\end{bmatrix}.
$$
\EndFor
\State Result: $\rv{p}^{g \star}, \rv{q}^{g \star}$.
\end{algorithmic}
\normalsize
\caption{Description of h\textsc{opf}\xspace.}
\label{fig:Algorithm_hopf}
\end{algorithm}
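Algorithm~\ref{fig:Algorithm_hopf} can be sketched compactly in Python. As an assumption for illustration, an equality-constrained economic dispatch with made-up cost data stands in for the full \textsc{opf}\xspace Problem~\eqref{eq:AC_OPF}, so that each per-sample solve reduces to a single linear \textsc{kkt}\xspace system:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-in for the per-sample OPF: an n-generator economic
# dispatch with quadratic costs 0.5*H_ii*(p_i)^2 + h_i*p_i and the single
# balance constraint 1^T (p + pd) = 0; the cost data are assumed values.
H = np.diag([0.2, 0.3, 0.25])
h = np.array([0.5, 0.6, 0.55])
n = len(h)

def opf_argmin(pd):
    """Per-sample argmin: solve the (regular) KKT system of the QP."""
    A = np.block([[H, np.ones((n, 1))],
                  [np.ones((1, n)), np.zeros((1, 1))]])
    b = -np.concatenate([h, [pd.sum()]])
    return np.linalg.solve(A, b)[:n]          # optimal generation p^g*

# hOPF: draw N load samples, then solve the OPF once per sample.
N = 2000
pd_samples = -1.0 + 0.1 * rng.standard_normal((N, n))  # demand counted negative
pg_star = np.array([opf_argmin(pd) for pd in pd_samples])

# Each per-sample solution balances the grid exactly.
assert np.allclose(pg_star.sum(axis=1) + pd_samples.sum(axis=1), 0.0)
```

The resulting array of per-sample solutions is exactly the ``look-up table'' character of h\textsc{opf}\xspace: one solve per realization, with no functional dependency on the demand.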
In-hindsight \textsc{opf}\xspace is an important limiting case with the following properties:
\begin{enumerate}
\item \label{item:OnlineOptimization} The solution of h\textsc{opf}\xspace includes the optimal realization $p^{g \star}$, $q^{g \star}$ of power injections for the a priori unknown \emph{actual} realization $\pdvar{}, \qdvar{}$ of random generation/demand from \eqref{eq:NetPower_rv}.
\item Every realization of the h\textsc{opf}\xspace solution $\rv{p}^{g \star}, \rv{q}^{g \star}$ satisfies the power flow equations and the inequality constraints.
\item h\textsc{opf}\xspace provides the best distribution of optimal costs in the sense that every sample solution is known to be optimal.
\item h\textsc{opf}\xspace provides the best distribution of optimal active power generations that are always feasible.
\end{enumerate}
Note that item \ref{item:OnlineOptimization}) corresponds to the situation from online optimization approaches to \textsc{opf}\xspace problems \cite{Hauswirth17,DallAnese16,Gan16}: assuming the state of the grid is immediately and accurately available, the \textsc{opf}\xspace Problem \eqref{eq:AC_OPF} is solved online to provide power injections that balance the grid and satisfy generation and transmission limits.\footnote{As mentioned in the introduction, the idea is similar to model predictive control, except that single-stage \textsc{opf}\xspace problems do not involve any dynamics.}
This way, stochasticity of uncertainties does not have to be accounted for explicitly, because the online optimization algorithm reacts to any of its effects, ideally, in real time.
In-hindsight \textsc{opf}\xspace has conceptual difficulties: The naive algorithm described in Algorithm~\ref{fig:Algorithm_hopf} results in a mere look-up table, i.e. how the optimal solution depends on the respective realization is not readily available as a function/feedback law.\footnote{In case of \textsc{dc}-\textsc{opf}\xspace, multiparametric programming techniques \cite{Bemporad02a,Vrakopoulou17} overcome this issue;
yet, this might lead to implementation difficulties. A thorough investigation is beyond the scope of this paper.}
In case one resorts to fast online solutions based on measured disturbances, the control policy is defined implicitly via optimization, which complicates its analysis.
The online optimization further has to be implemented using existing control hardware, which poses another challenge.
This motivates a different approach, namely aforementioned chance-constrained optimal power flow (cc\textsc{opf}\xspace).
It alleviates the conceptual disadvantages of h\textsc{opf}\xspace mentioned above: in a single numerical run the generation response to all load fluctuations is obtained.
This is achieved by optimizing over control policies.
The cc\textsc{opf}\xspace problem can be formulated as follows \cite{Roald13,Muehlpfordt17a,Roald17}
\begin{subequations}
\label{eq:AC_sOPF}
\begin{align}
\label{eq:AC_sOPF_cost}
\underset{\alpha_p, \alpha_q}{\min}\quad & \mathbb{E}\Big[\sum_{i \in \mathcal{N}} c_i(\rvpvar{i}) \Big] \\
\mathrm{s.\, t.} \, ~ ~ ~ \,
\label{eq:PFE_RV}
& g(\rvpvar{},\rvqvar{},\rv{v},\uptheta; \rvpdvar{}, \rvqdvar{}) = 0,\\
\label{eq:CON_CC1}
& \pr{x_i^{\text{min}} \leq \rv{x}_i} \geq 1 - \varepsilon, && \forall i \in \mathcal{N},\\
\label{eq:CON_CC2}
& \pr{ \rv{x}_i \leq x_i^{\text{max}}} \geq 1 - \varepsilon, && \forall i \in \mathcal{N},\\
\label{eq:CON_CC3}
& \nonumber\forall \rv{x}_i \in \{ \rvpvar{i},\, \rvqvar{i},\, \rv{v}_i,\, \uptheta_i \}, \\
& \uptheta_{i_s} = 0, && i_s \in \mathcal{N},\\
\label{eq:CON_CC_thermal1}
& \pr{ i_{i,j}^{\text{min}} \leq \rv{i}_{i,j}} \geq 1 - \varepsilon, && \forall i, j \in \mathcal{N},\\
\label{eq:CON_CC_thermal2}
& \pr{ \rv{i}_{i,j} \leq i_{i,j}^{\text{max}}} \geq 1 - \varepsilon, && \forall i, j \in \mathcal{N},\\
\label{eq:Policy_p}
& \rvpvar{} = \eta_p(\rvpdvar{}, \rvqdvar{}; \alpha_p), \\
\label{eq:Policy_q}
& \rvqvar{} = \eta_q(\rvpdvar{}, \rvqdvar{}; \alpha_q),
\end{align}
\end{subequations}
where $\mathbb{E}[\cdot]$ is the expected value, and all sans-serif symbols denote random variables.
We remark that minimizing the expected value of the cost as in \eqref{eq:AC_sOPF_cost} is a modeling choice. We refer to \cite{Muehlpfordt17a} for other choices; for instance, the objective may also include terms penalizing the variance of the cost function.
The formulation \eqref{eq:PFE_RV} of the power flow equations in terms of random variables ensures that the power balance holds for all realizations of the uncertainty \cite{Muehlpfordt16b,Muehlpfordt17a}, assuming power flow feasibility of the high-voltage solution.
The inequality constraints for generation and voltage limits are reformulated in \eqref{eq:CON_CC1}--\eqref{eq:CON_CC3} as single-sided chance constraints, which is again a modeling choice.
Similarly, the transmission limits are modeled as single-sided chance constraints \eqref{eq:CON_CC_thermal1}, \eqref{eq:CON_CC_thermal2}.
The control policies $\eta_p$, $\eta_q$ are introduced in \eqref{eq:Policy_p}, \eqref{eq:Policy_q}, and parameterized by vector-valued variables $\alpha_p$, $\alpha_q$, respectively.
The control policy parameters $\alpha_p$, $\alpha_q$ are the degrees of freedom of Problem \eqref{eq:AC_sOPF} and can be interpreted as automatic generation control coefficients.
The control policies are generic but not arbitrary because the power balance has to hold.
For the \textsc{dc}\xspace setting (piece-wise) affine control policies can be optimal \cite{Bienstock14,Vrakopoulou17}.
To the best of the authors' knowledge, it is an open research question what policies are optimal for the \textsc{ac}\xspace setting.
Table \ref{tab:Comparison} summarizes the differences between h\textsc{opf}\xspace and cc\textsc{opf}\xspace.
In essence, cc\textsc{opf}\xspace achieves computational tractability because constraint satisfaction \emph{for every realization} is sacrificed by the introduction of chance constraints.
In general, the optimal solutions of h\textsc{opf}\xspace and cc\textsc{opf}\xspace cannot be expected to be equivalent, which leads to a \emph{price of uncertainty}.
The monetary price of uncertainty, for example, may be inferred from the different optimal distributions of costs.
\begin{table}[t]
\centering
\caption{Comparison between features of h\textsc{opf}\xspace and cc\textsc{opf}\xspace.}
\label{tab:Comparison}
\footnotesize
\renewcommand{\arraystretch}{1.3}
\begin{tabular}{m{1.65cm}m{2.925cm}m{2.925cm}}
\toprule
& h\textsc{opf}\xspace & cc\textsc{opf}\xspace \\ \midrule
DoFs & power generation $\pvar{}, \qvar{}$ & policy parameters $\alpha_p, \alpha_q$ \\
Optimality & per-sample optimal & optimal policies \\
Solution characteristics & $\rv{p}^{g \star}$,
$\rv{q}^{g \star}$,
no functional dependency & $\rv{p}^{g \star} = \eta_p(\rvpdvar{}, \rvqdvar{}; \alpha_p^\star)$,
$\rv{q}^{g \star} = \eta_q(\rvpdvar{}, \rvqdvar{}; \alpha_q^\star)$,
functional dependency \\
Computation & one run per sample & single run \\
Power balance & satisfied & satisfied \\
Constraints & satisfied per sample & chance-constraint sense \\ \bottomrule
\end{tabular}
\vspace{-4.1mm}
\end{table}
It is worth asking whether the solutions of h\textsc{opf}\xspace and cc\textsc{opf}\xspace can be identical.
If so, what are necessary and sufficient conditions?
In this desirable case, a single numerical run provides the solution (cf. cc\textsc{opf}\xspace) that is known to strictly satisfy the inequality constraints for all realizations of uncertain generation/demand (cf. h\textsc{opf}\xspace).
The numerical and implementation challenges of online optimization would then be alleviated.
\section{Equivalence of h\textsc{opf}\xspace and cc\textsc{opf}\xspace}
We employ polynomial chaos expansion (\textsc{pce}\xspace) as a tool to derive conditions that lead to equivalence between h\textsc{opf}\xspace and cc\textsc{opf}\xspace.
Polynomial chaos is a spectral expansion technique for random variables with finite variance that allows one to represent a random variable entirely by its deterministic, real-valued \textsc{pce}\xspace coefficients.
With respect to cc\textsc{opf}\xspace polynomial chaos has the following advantages:
(i) a stochastic problem is reformulated as a deterministic problem in terms of the deterministic, real-valued \textsc{pce}\xspace coefficients, and
(ii) \textsc{pce}\xspace offers a unified framework for several uncertainty descriptions common in power systems applications, for example Gaussian, Beta, Gamma, and/or Uniform distribution, cf. \cite{Atwa10a, Carpaneto08a, Soubdhan09a}.
The space limitations prohibit a thorough introduction of \textsc{pce}\xspace; we refer to \cite{Sullivan15book,Xiu10book} for further details.
For applications of \textsc{pce}\xspace in the power systems context we refer to \cite{Muehlpfordt17a,Muehlpfordt16b,kit:appino17a,kit:engelmann17b,Ni17, Tang16}.
The next result relies on \textsc{pce}\xspace to the end of presenting sufficient equivalence conditions for h\textsc{opf}\xspace and cc\textsc{opf}\xspace.
\begin{thm}[Equivalence of h\textsc{opf}\xspace and cc\textsc{opf}\xspace]~\\
\label{propo:equivalence}
Consider the cc\textsc{opf}\xspace Problem \eqref{eq:AC_sOPF}, and let the following hold:
\begin{mylist}
\item \label{item:DCpowerflow}the \textsc{dc}\xspace power flow assumptions are valid;
\item \label{item:QuadraticCost} the cost functions are quadratic and positive definite;
\item \label{item:FinitePCE}the uncertain loads $\rvpdvar{}$, $\rvqdvar{}$ admit a finite \textsc{pce}\xspace that is exact with dimension $L + 1$ with respect to the orthonormal polynomial basis $\{ \psi_\ell \}_{\ell = 0}^L$, i.e. $\rvpdvar{} = \sum_{\ell = 0}^{L} \pdvar{\ell} \psi_\ell$, where $\pdvar{\ell} \in \mathbb{R}^{n}$;\footnote{This slight abuse of notation w.r.t. \eqref{eq:NetPower_realization} is manageable, because we strictly use the subscript $\ell$ for \textsc{pce}\xspace coefficients in the remainder.}
\item \label{item:ActiveSet}for all realizations of the uncertain loads $\rvpdvar{}$, $\rvqdvar{}$ the set of active inequality constraints for the respective solution to \eqref{eq:AC_OPF} is the same.
\end{mylist}
Then, if the active power policy in the cc\textsc{opf}\xspace Problem \eqref{eq:AC_sOPF} is chosen according to
\begin{align}
\label{eq:DC_ControlPolicy}
\rvpvar{} & = \eta_p(\alpha_p) = \sum_{\ell = 0}^{L} \alpha_{p,\ell} \psi_\ell,
\end{align}
the cc\textsc{opf}\xspace solution is identical to the h\textsc{opf}\xspace solution.
\hfill $\square$
\end{thm}
Before proving the assertions of the theorem, we remark that the seemingly technical assumption \ref{item:FinitePCE} has a clear practical interpretation: for example, any ``canonical'' uncertainty (e.g. Gaussian, Beta, Gamma, Uniform) admits an exact \textsc{pce}\xspace with just two coefficients, i.e. $L + 1 = 2$.
For example, it is sufficient to know the mean and standard deviation of a Gaussian random variable to obtain all statistical moments and its probability density function.
In fact, the \textsc{pce}\xspace of a Gaussian random variable is finite and exact with the mean and standard deviation being the 0\textsuperscript{th} and 1\textsuperscript{st} order \textsc{pce}\xspace coefficient, respectively.\footnote{A basis orthogonal w.r.t. the Gaussian measure on the real line is $\{1, x\}$.}
Note that the active power control policy parameters $\alpha_p$ correspond to the deterministic \textsc{pce}\xspace coefficients of $\rvpvar{}$.
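The Gaussian case can be illustrated in a few lines; the mean and standard deviation below are assumed values chosen for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)

# A Gaussian random variable admits an exact PCE of length L + 1 = 2 in the
# orthonormal basis {psi_0, psi_1} = {1, xi} with xi ~ N(0, 1):
#   x = x_0 * psi_0 + x_1 * psi_1,  where x_0 = mean, x_1 = std. deviation.
mean, std = -1.0, 0.1          # assumed illustrative values
x0, x1 = mean, std             # the two PCE coefficients

xi = rng.standard_normal(200_000)
x = x0 + x1 * xi               # realizations reconstructed from the PCE

# By orthonormality of the basis, E[x] = x_0 and Var[x] = x_1**2.
assert abs(x.mean() - x0) < 1e-2
assert abs(x.var() - x1**2) < 1e-2
```

The same two-coefficient structure carries over to the other ``canonical'' uncertainties in their respective orthogonal bases.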
\begin{proof}
Due to space limitations, we present the proof for the case that the set of active inequality constraints from item \ref{item:ActiveSet} is empty.
Under \textsc{dc}\xspace power flow conditions the reactive power is constant due to constant voltage magnitudes, hence not a degree of freedom for the \textsc{opf}\xspace problem.
According to \cite{Muehlpfordt17a} the cc\textsc{opf}\xspace Problem \eqref{eq:AC_sOPF}, under assumptions \ref{item:DCpowerflow}--\ref{item:ActiveSet} from Theorem \ref{propo:equivalence}, can be written as a convex quadratic program (\textsc{qp}\xspace)
\begin{subequations}
\label{eq:DC_sOPF}
\begin{align}
\underset{\bs{\alpha}}{\min}\quad & \frac{1}{2} \, \bs{\alpha}^\top (I_{L+1} \otimes H) \bs{\alpha} + (e \otimes h)^\top \bs{\alpha} \\
\mathrm{s.\, t.} \, ~ ~ ~ \,
\label{eq:PFE_DC_PCE}
& (I_{L+1} \otimes \bs{1}^\top) (\bs{\alpha} + \bs{\pdvar{}}) = 0,
\end{align}
\end{subequations}
where $\bs{\alpha} = [\alpha_{p,0}^\top, \hdots, \alpha_{p,L}^\top]^\top$, $\bs{\pdvar{}} = [\pdvar{0}\phantom{}^\top, \hdots, \pdvar{L}\phantom{}^\top]^\top$ are the vectors of stacked \textsc{pce}\xspace coefficients, and $e = [1, 0, \hdots, 0]^\top$ is the $(L+1)$-dimensional unit vector.
The matrix $H \in \mathbb{R}^{n \times n}$ is diagonal with positive entries, hence positive definite.
The \textsc{kkt}\xspace system for \eqref{eq:DC_sOPF} is linear
\begin{equation}
\label{eq:KKT_sOPF}
\bs{A}_{\text{s}} \bs{z}_{\text{s}} = \bs{b}_{\text{s}}.
\end{equation}
The coefficient matrix $\bs{A}_{\text{s}}$, the decision vector $\bs{z}_{\text{s}}$, and right-hand side vector $\bs{b}_{\text{s}}$ correspond to
\begin{align*}
\begin{bmatrix}
(I_{L+1} \otimes H) & (I_{L+1} \otimes \bs{1}^\top)^\top \\
(I_{L+1} \otimes \bs{1}^\top) & 0
\end{bmatrix}
\begin{bmatrix}
\bs{\alpha} \\
\bs{\lambda}
\end{bmatrix}
{=}
- \!
\begin{bmatrix}
(e \otimes h) \\
(I_{L+1} \otimes \bs{1}^\top) \bs{\pdvar{}}
\end{bmatrix}\!,
\end{align*}
where $\bs{\lambda}$ are the \textsc{pce}\xspace coefficients of the Lagrange multiplier for the power balance constraint \eqref{eq:PFE_DC_PCE}, and $I_{L+1}$ is the $(L+1)$ by $(L+1)$ identity matrix.
The coefficient matrix of the linear system \eqref{eq:KKT_sOPF} is regular due to positive definiteness of $H$.
Thus, \eqref{eq:KKT_sOPF} admits a unique solution for $\bs{\alpha}$.
We will now show that h\textsc{opf}\xspace leads to the same system of equations \eqref{eq:KKT_sOPF}, hence leads to the same control policy.
From items \ref{item:DCpowerflow}, \ref{item:QuadraticCost} it follows that the \textsc{opf}\xspace Problem \eqref{eq:AC_OPF} reduces to \textsc{dc}-\textsc{opf}\xspace which can be formulated as a convex \textsc{qp}\xspace
\begin{subequations}
\label{eq:DC_OPF}
\begin{align}
\underset{\pvar{}}{\min}\quad & \frac{1}{2} \, \pvar{}\phantom{}^\top H \pvar{} + h^\top \pvar{} \\
\mathrm{s.\, t.} \, ~ ~ ~ \,
\label{eq:PFE_DC}
& \bs{1}^\top (\pvar{} + \pdvar{}) = 0.
\end{align}
\end{subequations}
The \textsc{kkt}\xspace system for Problem \eqref{eq:DC_OPF} becomes
\begin{equation}
\label{eq:KKT_hOPF}
\begin{bmatrix}
H & \bs{1} \\
\bs{1}^\top & 0
\end{bmatrix}
\begin{bmatrix}
\pvar{} \\ \lambda
\end{bmatrix}
=
-
\begin{bmatrix}
h
\\
\bs{1}^\top \pdvar{}
\end{bmatrix},
\end{equation}
where $\lambda \in \mathbb{R}$ is the Lagrange multiplier for the \textsc{dc}\xspace power flow balance \eqref{eq:PFE_DC}.
The coefficient matrix in \eqref{eq:KKT_hOPF} is regular because $H$ is positive definite.
Hence, the argmin operator is affine in the demand $\pdvar{}$, and can be used directly for uncertainty propagation.
To that end, the \textsc{pce}\xspace for the uncertain demand $\rvpdvar{}$ and generation $\rvpvar{}$, see Assumption \ref{item:FinitePCE} and \eqref{eq:DC_ControlPolicy}, are substituted in \eqref{eq:KKT_hOPF}, and the \textsc{pce}\xspace for the Lagrange multiplier $\rv{\lambda} = \sum_{\ell = 0}^{L} \lambda_\ell \psi_\ell$ is introduced.
The Galerkin-projected system in matrix form becomes
\begin{subequations}
\label{eq:KKT_hOPF_Galerkin}
\begin{align}
\begin{bmatrix}
I_{L+1} \otimes \begin{bmatrix}
H & \bs{1} \\
\bs{1}^\top & 0
\end{bmatrix}
\end{bmatrix}
\bs{z}_{\text{h}}
=
\bs{A}_{\text{h}} \bs{z}_{\text{h}} =
\bs{b}_{\text{h}},
\end{align}
where $\bs{z}_{\text{h}}$ contains all \textsc{pce}\xspace coefficients $\alpha_{p,\ell}$ and $\lambda_\ell$, and $\bs{b}_{\text{h}}$ contains the \textsc{pce}\xspace coefficients $\pdvar{\ell}$ and the vector of linear cost coefficients~$h$,
\begin{align}
\bs{z}_{\text{h}} &= \begin{bmatrix}
\alpha_{p,0}\phantom{}^\top &
\lambda_0 &
\alpha_{p,1}\phantom{}^\top &
\lambda_1 &
\hdots &
\alpha_{p,L}\phantom{}^\top &
\lambda_L
\end{bmatrix}^\top, \\
\bs{b}_{\text{h}} &=
-
\begin{bmatrix}
h^\top & \bs{1}^\top \pdvar{0} & 0 & \bs{1}^\top \pdvar{1} & \hdots & 0 & \bs{1}^\top \pdvar{L}
\end{bmatrix}^\top.
\end{align}
\end{subequations}
To show equivalence between cc\textsc{opf}\xspace and h\textsc{opf}\xspace from Theorem~\ref{propo:equivalence} it remains to show that the linear systems \eqref{eq:KKT_sOPF} and \eqref{eq:KKT_hOPF_Galerkin} admit the same solution.
To do so, introduce the following permutation matrix
\begin{equation}
M = \begin{bmatrix}
I_{L+1} \otimes \begin{bmatrix}
I_{n} & 0
\end{bmatrix}
\\
I_{L+1} \otimes \begin{bmatrix}
0 & 1
\end{bmatrix}
\end{bmatrix}
\in \mathbb{R}^{(L+1)(n+1) \times (L+1)(n+1) },
\end{equation}
and observe that
\begin{align}
\bs{z}_{\text{s}}
&=
M
\bs{z}_{\text{h}}, ~
\bs{b}_{\text{s}}
= M \bs{b}_{\text{h}}, ~
\bs{A}_{\text{s}} = M \bs{A}_{\text{h}} M^\top.
\end{align}
In other words, the linear systems \eqref{eq:KKT_sOPF} and \eqref{eq:KKT_hOPF_Galerkin} are equivalent after permuting with $M$.
Recall that in \eqref{eq:AC_OPF} we consider box constraints. Moreover, observe that whenever the set of active inequality constraints is nonempty, yet does not change over all realizations, the chance constraints in \eqref{eq:AC_sOPF} are not active. Instead, the active inequalities hold with equality, i.e. they become linear equality constraints. Thus, the same steps as above prove the assertion.
\end{proof}
We remark that the optimal solution to the linear systems \eqref{eq:KKT_sOPF}, \eqref{eq:KKT_hOPF_Galerkin} is affine in the \textsc{pce}\xspace coefficients of the demand.
If the active set changes for some realizations, the optimal solution can still be parameterized affinely for each active set.
Loosely speaking, Theorem~\ref{propo:equivalence} states conditions that lead to the price of uncertainty being zero.
That in turn means that if the conditions from Theorem~\ref{propo:equivalence} are satisfied, no online optimization scheme will yield better solutions than the cc\textsc{opf}\xspace solution.
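The key step of the proof, namely that a single solve of the stacked \textsc{pce}\xspace \textsc{kkt}\xspace system reproduces every per-sample h\textsc{opf}\xspace solution, can be checked numerically. The following sketch uses assumed cost data and a Gaussian demand with two \textsc{pce}\xspace coefficients; it illustrates the unconstrained \textsc{dc}\xspace case only, not a general implementation:

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed cost data for two generators (DC setting, no active constraints).
H = np.diag([0.2, 0.3])
h = np.array([0.5, 0.6])
n = 2
A = np.block([[H, np.ones((n, 1))],
              [np.ones((1, n)), np.zeros((1, 1))]])

def kkt_solve(cost_lin, pd_sum):
    """Solve the DC-OPF KKT system for given linear cost and total demand."""
    return np.linalg.solve(A, -np.concatenate([cost_lin, [pd_sum]]))[:n]

# Exact 2-term PCE of the Gaussian demand: pd = pd0 + pd1 * xi, xi ~ N(0,1).
pd0 = np.array([-0.7, -0.5])
pd1 = np.array([0.05, 0.08])

# ccOPF in one numerical run: the Galerkin-projected KKT system decouples
# per PCE order, with h entering only the 0th-order block.
alpha0 = kkt_solve(h, pd0.sum())
alpha1 = kkt_solve(np.zeros(n), pd1.sum())

# hOPF: per-sample solutions coincide with the affine policy alpha0+alpha1*xi.
for xi in rng.standard_normal(5):
    per_sample = kkt_solve(h, (pd0 + pd1 * xi).sum())
    assert np.allclose(per_sample, alpha0 + alpha1 * xi)
```

Here the price of uncertainty is zero: the offline policy $(\alpha_0, \alpha_1)$ matches what any online per-sample solve would return.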
It is worth asking how the solutions from cc\textsc{opf}\xspace and h\textsc{opf}\xspace can be compared in case the assumptions of Theorem~\ref{propo:equivalence} do not hold, i.e. in case of a non-zero price of uncertainty.
An important instance is the case in which constraints are active only for a small number of realizations $\pdvar{}$ of the uncertain demand $\rvpdvar{}$.
Recall that the solutions stemming from cc\textsc{opf}\xspace and h\textsc{opf}\xspace are random variables.
Hence, their probability density functions (\textsc{pdf}s\xspace) can be compared using different metrics.
In statistics, popular choices include the Kullback-Leibler divergence or the Hellinger distance \cite{Gibbs02}.
However, the Kullback-Leibler divergence may not be meaningful, for instance, in case of compact supports that only partially overlap.
The Hellinger distance leads to numerical difficulties when a distribution contains Dirac-deltas---which can be the case for h\textsc{opf}\xspace when constraints are active.
In the present paper, we suggest relying on the total variational distance \cite{Gibbs02}.
For two random variables $\rv{x}$ and $\rv{y}$ with \textsc{pdf}s\xspace $f_{\rv{x}}$ and $f_{\rv{y}}$, the total variational distance is given by $\Delta: \mathrm{L}^2(\Omega_x, \mu_x; \mathbb{R}) \times \mathrm{L}^2(\Omega_y, \mu_y; \mathbb{R}) \to [0,1]$
\begin{equation}
\Delta(\rv{x}, \rv{y}) = \frac{1}{2} \, \int_{\mathbb{R}} | f_{\rv{x}}(\tau) - f_{\rv{y}}(\tau)| \mathrm{d}\tau.
\end{equation}
The total variational distance $\Delta$ can serve as an indicator of, for example, when to prefer cc\textsc{opf}\xspace over fast online optimization.
A small value of $\Delta$ indicates that cc\textsc{opf}\xspace and h\textsc{opf}\xspace lead to similar power injections.
Thus, computing the cc\textsc{opf}\xspace policy once offline and applying it online will yield good results.
In contrast, a large value of $\Delta$ indicates that fast online optimization approaches will outperform the cc\textsc{opf}\xspace solution in terms of constraint satisfaction and cost.
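A direct way to evaluate $\Delta$ is a Riemann sum over a common grid; the Gaussian test densities in the sketch below are assumptions for illustration only:

```python
import numpy as np

def tv_distance(f_x, f_y, tau):
    """Total variational distance 0.5 * int |f_x - f_y| dtau on a uniform grid."""
    dtau = tau[1] - tau[0]
    return 0.5 * np.sum(np.abs(f_x - f_y)) * dtau

tau = np.linspace(-8.0, 8.0, 4001)
gauss = lambda t, mu, s: np.exp(-0.5 * ((t - mu) / s) ** 2) / (s * np.sqrt(2 * np.pi))

# Identical pdfs give Delta = 0; (essentially) disjoint supports give Delta -> 1.
assert tv_distance(gauss(tau, 0, 1), gauss(tau, 0, 1), tau) < 1e-12
assert tv_distance(gauss(tau, -4, 0.3), gauss(tau, 4, 0.3), tau) > 0.999
```

The two extreme cases confirm the normalization of $\Delta$ to the interval $[0,1]$.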
\section{Tutorial Example}
\begin{figure}
\centering
\includegraphics[]{mygrid.pdf}
\caption{Three-bus network with two generators and one load.}
\label{fig:ThreeBus}
\vspace{-4.1mm}
\end{figure}
The following tutorial example demonstrates Theorem~\ref{propo:equivalence} in action, and quantifies the price of uncertainty in case of h\textsc{opf}\xspace and cc\textsc{opf}\xspace yielding different solutions.
We focus on the rather small three-bus example because it is simple enough to have an analytical solution, and it is complex enough to provide helpful insight.
All units are in per-unit values.
Consider the connected three-bus network from Figure \ref{fig:ThreeBus}, in which buses~1 and~2 each have a generator but no load, and bus~3 has a load but no generator.
With slight abuse of notation we set $\rvpdvar{} \equiv \rvpdvar{3}$, and $\rvpvar{} \equiv [\rvpvar{1}, \rvpvar{2}]^\top$.
The considered deterministic \textsc{dc}-\textsc{opf}\xspace is
\begin{subequations}
\label{eq:DCOPF_Example}
\begin{align}
\underset{\pvar{}}{\min}\quad & \frac{1}{2} \, \pvar{}\phantom{}^\top \begin{bmatrix}
H_{11} & 0 \\
0 & H_{22}
\end{bmatrix} \pvar{} + \begin{bmatrix}
h_1 & h_2
\end{bmatrix}^\top \pvar{} \\
\mathrm{s.\, t.} \, ~ ~ ~ \,
& \pvar{1} + \pvar{2} + \pdvar{} = 0, \\
& \pvar{1} \leq p_1^{\text{max}},
\end{align}
\end{subequations}
with $H_{11}, H_{22} > 0$.
Note that power demand is counted as negative.
The argmin operator to Problem \eqref{eq:DCOPF_Example} is
\begin{equation}
\label{eq:DCOPF_example_argmin}
\mathbb{R}^2 \ni \pvar{}\phantom{}^\star =
\begin{cases}
\begin{bmatrix}
-1 \\ \phantom{-}1
\end{bmatrix} \beta
-
\begin{bmatrix}
\gamma \\
1 - \gamma
\end{bmatrix}
\pdvar{}, &- \pdvar{} < \frac{p_1^{\text{max}} + \beta}{\gamma}, \\
\begin{bmatrix}
p_1^{\text{max}} \\
-(\pdvar{} + p_1^{\text{max}})\end{bmatrix},
& -\pdvar{} \geq \frac{p_1^{\text{max}} + \beta}{\gamma},
\end{cases}
\end{equation}
with $\beta {=} (h_1 {-} h_2)/(H_{11} {+} H_{22})$, and $\gamma {=} H_{22}/(H_{11} + H_{22}) {>} 0$.
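The piecewise-affine argmin \eqref{eq:DCOPF_example_argmin} is straightforward to code up; the sketch below uses the Case \textsc{c}2\xspace values from Table~\ref{tab:Numericalvalues} and checks power balance in both branches:

```python
import numpy as np

# Case C2 values from the paper's table (illustrative per-unit data).
H11, H22, h1, h2, p1_max = 0.2, 0.2, 0.5, 0.6, 0.85
beta = (h1 - h2) / (H11 + H22)       # = -0.25
gamma = H22 / (H11 + H22)            # = 0.5

def argmin_threebus(pd):
    """Return p^g* for a demand realization pd (counted as negative)."""
    if -pd < (p1_max + beta) / gamma:          # generator 1 below its limit
        return np.array([-beta, beta]) - np.array([gamma, 1 - gamma]) * pd
    return np.array([p1_max, -(pd + p1_max)])  # generator 1 at its limit

# Power balance p1 + p2 + pd = 0 holds in both branches, and generator 1
# never exceeds its upper limit.
for pd in (-0.5, -1.2, -2.0):
    pg = argmin_threebus(pd)
    assert abs(pg.sum() + pd) < 1e-12
    assert pg[0] <= p1_max + 1e-12
```

For small demands the split between the generators is governed by $\beta$ and $\gamma$; for large demands generator~2 absorbs the remainder.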
If the probability density function $f_{\rvpdvar{}}$ of the uncertain demand $\rvpdvar{}$ is given, the density of the h\textsc{opf}\xspace solution is obtained from the argmin~\eqref{eq:DCOPF_example_argmin} as
\begin{align}
\nonumber
&f_{\rvpvar{1}}(x_1) {=} \begin{cases}
\frac{1}{\gamma} \, f_{\rvpdvar{}}\!\left(\frac{x_1 + \beta}{-\gamma}\right), & x_1 < p_1^{\text{max}}, \\
F_{\rvpdvar{}}\!\! \left( \frac{p_1^{\text{max}} + \beta}{-\gamma} \right) \delta(p_1^{\text{max}} {-} x_1), & x_1 = p_1^{\text{max}},
\end{cases} \\
\label{eq:PDF_HOPF}
&f_{\rvpvar{2}}(x_2) {=} \begin{cases}
\frac{1}{1 - \gamma} \, f_{\rvpdvar{}}\!\!\left( \frac{x_2 - \beta}{\gamma - 1} \right), & x_2 < \frac{p_1^{\text{max}} + \beta}{\gamma} - p_1^{\text{max}}, \\
f_{\rvpdvar{}}(-x_2 - p_1^{\text{max}}), & x_2 \geq \frac{p_1^{\text{max}} + \beta}{\gamma} - p_1^{\text{max}},
\end{cases}
\end{align}
where $\delta$ denotes the Dirac delta and $F_{\rvpdvar{}}$ the cumulative distribution function of the demand $\rvpdvar{}$.
The case-dependent definition of the \textsc{pdf}s\xspace is due to the upper generation limit $p_1^{\text{max}}$.
If generator~1 operates below its limit $p_1^{\text{max}}$, the share of active power generation assigned to generator~1 and~2 is determined by the cost coefficients $H_{11}, H_{22}, h_1, h_2$ via $\beta$ and $\gamma$; this situation corresponds to the upper cases in \eqref{eq:PDF_HOPF}.
At the generation limit $p_1^{\text{max}}$ the \textsc{pdf}\xspace of generator 1 becomes a delta-pulse with ``height'' equal to the mass of the \textsc{pdf}\xspace that is cut off to the right of $p_1^{\text{max}}$.
In case of generator~1 hitting its limit, generator 2 supplies the remaining active power to meet the power demand, and to guarantee power balance.
As a result, the \textsc{pdf}\xspace of power generation at bus 2 has a discontinuity and becomes equivalent to the shifted \textsc{pdf}\xspace of the demand, $f_{\rvpdvar{}}(-x_2 - p_1^{\text{max}})$.
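The delta-pulse mass described above can be verified by Monte Carlo sampling. The following sketch assumes, for illustration, a Gaussian demand centered at the branch boundary, together with the Case \textsc{c}2\xspace values from Table~\ref{tab:Numericalvalues}:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(3)

# Case C2 values (illustrative per-unit data).
H11, H22, h1, h2, p1_max = 0.2, 0.2, 0.5, 0.6, 0.85
beta = (h1 - h2) / (H11 + H22)       # = -0.25
gamma = H22 / (H11 + H22)            # = 0.5

# Assumed Gaussian demand centered at the branch boundary -pd = (p1max+beta)/gamma,
# so generator 1 hits its limit for roughly half of the realizations.
mu, sigma = -1.2, 0.1
pd = mu + sigma * rng.standard_normal(100_000)

# Vectorized first component of the argmin: affine branch below the limit,
# clipped at p1_max otherwise (the branch condition -pd < (p1max+beta)/gamma
# is equivalent to -beta - gamma*pd < p1_max).
pg1 = np.minimum(-beta - gamma * pd, p1_max)

# Point mass at p1_max equals the demand CDF at the branch boundary:
#   P(pg1 = p1_max) = P(pd <= -(p1_max + beta)/gamma).
Phi = lambda t: 0.5 * (1 + erf((t - mu) / (sigma * sqrt(2))))
mass_mc = np.mean(pg1 >= p1_max)
assert abs(mass_mc - Phi(-(p1_max + beta) / gamma)) < 0.01
```

The empirical point mass matches the cut-off probability mass of the demand distribution, i.e. the ``height'' of the delta-pulse.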
\newpage
We now turn to the solution via cc\textsc{opf}\xspace.
We simplify the notation of the policy parameter to $\alpha \leftarrow \alpha_p$.
In light of Theorem~\ref{propo:equivalence}, Assumption~\ref{item:FinitePCE}, we assume that the \textsc{pce}\xspace for the demand $\rvpdvar{}$ is finite and exact with $L + 1 = 2$, and respective \textsc{pce}\xspace coefficients $\pdvar{0}$, $\pdvar{1}$.
For example, this is the case for any ``canonical'' uncertainty in the corresponding basis, i.e. Gaussian, Beta, Gamma, or Uniform distribution.
According to Theorem~\ref{propo:equivalence} the active power policy will consist of two elements, too.
Let $\alpha^\star_{\ell}$ denote the optimal solution for the $\ell$\textsuperscript{th} coefficient to Problem~\eqref{eq:AC_sOPF} tailored to the setting from Problem~\eqref{eq:DCOPF_Example}.
Then, the \textsc{pdf}\xspace of the optimal active power policies for generators~1 and 2 is
\begin{equation}
\label{eq:PDF_PCE}
f_{\rvpvar{i}}(\pvar{i}) = \left|\frac{\pdvar{1}}{\alpha_{1,i}^\star} \right| \, f_{\rvpdvar{}}\!\left( \frac{\alpha^\star_{1,i} \pdvar{0} - \alpha^\star_{0,i} \pdvar{1}}{\alpha^\star_{1,i}} + \frac{\pdvar{1} }{\alpha^\star_{1,i}}\, \pvar{i} \right),
\end{equation}
for $i = 1,2$, where $\alpha^\star_{\ell,i}$ denotes the $i$\textsuperscript{th} entry of the $\ell$\textsuperscript{th} coefficient $\alpha^\star_{\ell}$.
Compared to the \textsc{pdf}\xspace from h\textsc{opf}\xspace, the \textsc{pdf}\xspace \eqref{eq:PDF_PCE} from cc\textsc{opf}\xspace is always continuous, possibly leading to a price of uncertainty, because the generation limit may be violated (with user-specified low probability).
\begin{table}[t]
\centering
\caption{Numerical values for Problem~\eqref{eq:DCOPF_Example}. \label{tab:Numericalvalues}}
\footnotesize
\begin{tabular}{cccccc}
\toprule
Case & $H_{11}$ & $H_{22}$ & $h_1$ & $h_2$ & $p_1^{\text{max}}$ \\ \midrule
\textsc{c}1\xspace & $\{0.2, 0.3\}$ & $0.2$ & $0.5$ & $0.6$ & $1.5$ \\
\textsc{c}2\xspace & $0.2$ & $0.2$ & $0.5$ & $0.6$ & $0.85$ \\ \bottomrule
\end{tabular}
\vspace{-4.1mm}
\end{table}
Assume now, in light of item~\ref{item:ActiveSet} of Theorem~\ref{propo:equivalence}, that for all realizations $\pdvar{}$ of the uncertain demand $\rvpdvar{}$ it holds that $- \pdvar{} < (p_1^{\text{max}} + \beta)/\gamma$.
In other words, the demand never results in the generation constraint being active.
The \textsc{pce}\xspace coefficients of the optimal active power generation are obtained from the argmin operator \eqref{eq:DCOPF_example_argmin}
\begin{equation}
\label{eq:Example_PCE_Solution}
\mathbb{R}^2 \ni
\alpha_{\ell}^\star =
\begin{cases}
\begin{bmatrix}
-1 \\ \phantom{-}1
\end{bmatrix} \beta
-
\begin{bmatrix}
\gamma \\
1 - \gamma
\end{bmatrix}
\pdvar{0}, & \ell = 0, \\
\phantom{\begin{bmatrix}
-1 \\
\phantom{-}1
\end{bmatrix} \beta}
- \begin{bmatrix}
\gamma \\
1 - \gamma
\end{bmatrix}
\pdvar{\ell}, & \ell = 1, \hdots, L.
\end{cases}
\end{equation}
The coefficients from \eqref{eq:Example_PCE_Solution} are required to determine the \textsc{pdf}\xspace from cc\textsc{opf}\xspace according to \eqref{eq:PDF_PCE}.
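The coefficient formula \eqref{eq:Example_PCE_Solution} can be evaluated directly; a quick check is that the policy coefficients of the two generators sum to $-\pdvar{\ell}$ for every $\ell$, so that power balance holds for each realization. In the sketch below, $\beta = -0.25$ and $\gamma = 0.5$ are reconstructed cost parameters (an assumption; the paper defines them earlier), while $\pdvar{0} = -1.1$, $\pdvar{1} = 0.1$ are the demand \textsc{pce}\xspace coefficients from this section.

```python
import numpy as np

def pce_policy(beta, gamma, pd):
    """PCE coefficients alpha_l of the optimal policy, cf. eq. (Example_PCE_Solution).
    `pd` holds the demand PCE coefficients (pd0, pd1, ..., pdL)."""
    share = np.array([gamma, 1.0 - gamma])
    alphas = [np.array([-beta, beta]) - share * pd[0]]   # l = 0
    alphas += [-share * pdl for pdl in pd[1:]]           # l = 1, ..., L
    return alphas

alphas = pce_policy(beta=-0.25, gamma=0.5, pd=(-1.1, 0.1))
balance = [float(a.sum()) for a in alphas]   # must equal -pd_l, coefficient-wise
```

Here `alphas[0]` evaluates to $(0.8,\, 0.3)$, and the coefficient sums reproduce the negated demand coefficients, i.e., power balance holds realization-wise.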
\begin{figure*}
\centering
\subfloat[\textsc{pdf}\xspace of demand $\rvpdvar{}$.\label{fig:demand}]{%
\includegraphics[width=3.5cm]{ex_pd.pdf}%
}
\subfloat[Case \textsc{c}1\xspace yields zero price of uncertainty.\label{fig:equivalence}]{%
\includegraphics[width=3.5cm]{ex_p1g_noconstraints.pdf}%
\includegraphics[width=3.5cm]{ex_p2g_noconstraints.pdf}%
}
\subfloat[Case \textsc{c}2\xspace yields non-zero price of uncertainty.\label{fig:nonequivalence}]{%
\includegraphics[width=3.5cm]{ex_p1g_withconstraints.pdf}%
\includegraphics[width=3.5cm]{ex_p2g_withconstraints.pdf}%
}
\vspace{2mm}
\caption{Probability density functions of demand and generation for cases given in Table~\ref{tab:Numericalvalues}. Blue denotes h\textsc{opf}\xspace, black denotes cc\textsc{opf}\xspace.}
\vspace{-4.1mm}
\end{figure*}
Specifically, let the uncertain demand follow a Beta distribution with support $[p^{d,\text{min}}, p^{d,\text{max}}] = [-1.5, -0.9]$, and shape parameters $a = 4$, $b = 2$, i.e. $\rvpdvar{} \sim \mathrm{B}([-1.5, -0.9], 4, 2)$. The skewed \textsc{pdf}\xspace is shown in Figure~\ref{fig:demand}.
In the respective Jacobi polynomial basis the \textsc{pce}\xspace for $\rvpdvar{}$ is finite and exact with the following two \textsc{pce}\xspace coefficients
$ \pdvar{0} = -1.1$, $\pdvar{1} = 0.1$.
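As a quick sanity check (a minimal Monte Carlo sketch, not from the paper), the zeroth \textsc{pce}\xspace coefficient $\pdvar{0}$ coincides with the mean of the rescaled Beta distribution, and the skewness visible in Figure~\ref{fig:demand} shows up as a negative sample skew:

```python
import numpy as np

rng = np.random.default_rng(0)
lo, hi, a, b = -1.5, -0.9, 4.0, 2.0

# sample the rescaled Beta demand B([-1.5, -0.9], 4, 2)
d = lo + (hi - lo) * rng.beta(a, b, size=200_000)

mean_d = float(d.mean())               # reproduces the zeroth PCE coefficient -1.1
exact = lo + (hi - lo) * a / (a + b)   # closed-form mean: -1.5 + 0.6 * 2/3 = -1.1
skew = float(np.mean(((d - mean_d) / d.std())**3))  # negative: left-skewed density
```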
In case of all assumptions of Theorem~\ref{propo:equivalence} being satisfied, the densities from h\textsc{opf}\xspace, \eqref{eq:PDF_HOPF}, and cc\textsc{opf}\xspace, \eqref{eq:PDF_PCE}, are equivalent.
Consider case \textsc{c}1\xspace from Table \ref{tab:Numericalvalues}, for which all realizations satisfy
\begin{equation}
\label{eq:ConditionForEquivalence}
- \pdvar{} \leq 1.5 < (p_1^{\text{max}} + \beta)/\gamma \in \{2.50, 3.25 \}.
\end{equation}
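The threshold values in \eqref{eq:ConditionForEquivalence} can be reproduced numerically. Note that the closed forms for $\beta$ and $\gamma$ used below are reconstructed from the reported numbers and are therefore an assumption; the paper defines both quantities earlier.

```python
def dispatch_params(H11, H22, h1, h2):
    """Reduced parameters of the two-generator dispatch.
    NOTE: these closed forms are reconstructed from the numbers in the text
    and are an assumption, not the paper's stated definitions."""
    gamma = H22 / (H11 + H22)
    beta = (h1 - h2) / (H11 + H22)
    return beta, gamma

def equivalence_bound(p1max, H11, H22=0.2, h1=0.5, h2=0.6):
    """Right-hand side (p1max + beta) / gamma of condition (ConditionForEquivalence)."""
    beta, gamma = dispatch_params(H11, H22, h1, h2)
    return (p1max + beta) / gamma

bounds_c1 = [equivalence_bound(1.5, H11) for H11 in (0.2, 0.3)]  # {2.50, 3.25}
bound_c2 = equivalence_bound(0.85, 0.2)                          # 1.2: condition violated
```

With these formulas, case \textsc{c}1\xspace yields the bounds $2.50$ and $3.25$, and case \textsc{c}2\xspace yields $1.2$, matching the values reported in the text.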
Figure~\ref{fig:equivalence} shows the optimal densities for active power generation.
The share of generation is entirely determined by the cost coefficients from Table~\ref{tab:Numericalvalues} that enter the optimal solution~\eqref{eq:PDF_HOPF} via $\beta$ and $\gamma$.
The case $H_{11} = 0.2$, shown in solid blue in Figure~\ref{fig:equivalence}, leads to significantly higher generation at bus~1 compared to the case $H_{11} = 0.3$, shown dash-dotted blue in Figure~\ref{fig:equivalence}.
The active power limit $p_1^{\text{max}} = 1.5$ is not attained.
Equivalence of the optimal solutions means that the price of uncertainty is zero.
\begin{table}[t]
\centering
\caption{Constraint satisfaction and total variational distance.\label{tab:SimulationParameters}}
\footnotesize
\begin{tabular}{cccc}
\toprule
$\delta$ & $\pr{\rvpvar{1} \leq p_1^{\text{max}}}$ & $\Delta (\rvpvar{1,\text{h\textsc{opf}\xspace}}, \rvpvar{1,\text{cc\textsc{opf}\xspace}})$ & $\Delta (\rvpvar{2,\text{h\textsc{opf}\xspace}}, \rvpvar{2,\text{cc\textsc{opf}\xspace}})$ \\ \midrule
2.0 & 96.51\,\% & 0.3197 & 0.1882 \\
3.0 & 99.86\,\% & 0.4734 & 0.2451 \\ \bottomrule
\end{tabular}
\vspace{-4.1mm}
\end{table}
Consider now, instead, case \textsc{c}2\xspace from Table~\ref{tab:Numericalvalues}, for which condition \eqref{eq:ConditionForEquivalence} is violated:
$
- \pdvar{} \leq 1.5 \not< (p_1^{\text{max}} + \beta)/\gamma = 1.2.
$
The \textsc{pdf}\xspace~\eqref{eq:PDF_HOPF} from h\textsc{opf}\xspace is plotted in blue in Figure~\ref{fig:nonequivalence}.
It shows the discontinuity at the upper limit $p_1^{\text{max}} = 0.85$ where the \textsc{pdf}\xspace becomes a delta-pulse---denoted by the triangle in Figure~\ref{fig:nonequivalence}.
Effectively, the \textsc{pdf}\xspace for bus~1 from the unconstrained case~\textsc{c}1\xspace is cut off at the generation limit, and bus~2 accounts for the remainder.
To obtain the solution via cc\textsc{opf}\xspace the chance constraint $\pr{\rvpvar{1} \leq p_1^{\text{max}}}$ is reformulated as $\ev{\rvpvar{1}} + \delta \, \Sigma\left[\rvpvar{1}\right] \leq p_1^{\text{max}}$. The expected value $\ev{\rvpvar{1}}$ and the standard deviation $\Sigma[\cdot]$ can be computed from the \textsc{pce}\xspace coefficients \cite{Sullivan15book}.
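The reformulated constraint $\ev{\rvpvar{1}} + \delta\,\Sigma[\rvpvar{1}] \leq p_1^{\text{max}}$ can be evaluated by Monte Carlo in place of the \textsc{pce}\xspace moment formulas. The sketch below does this for the \emph{unconstrained} affine policy of case \textsc{c}2\xspace (with the reconstructed values $\beta = -0.25$, $\gamma = 0.5$, an assumption as before): the constraint is violated for both $\delta$ values, which is why cc\textsc{opf}\xspace must reshape the policy.

```python
import numpy as np

rng = np.random.default_rng(1)
d = -1.5 + 0.6 * rng.beta(4, 2, size=400_000)   # demand samples, B([-1.5, -0.9], 4, 2)

# affine h-opf-like policy for case C2; beta, gamma are reconstructed (an assumption)
beta, gamma, p1max = -0.25, 0.5, 0.85
p1 = -gamma * d - beta                          # unconstrained policy p1 = -gamma*d - beta

mean_p1, std_p1 = float(p1.mean()), float(p1.std())
# positive margin means E[p1] + delta * Sigma[p1] > p1max, i.e. constraint violated
margins = {delta: mean_p1 + delta * std_p1 - p1max for delta in (2.0, 3.0)}
```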
The continuous \textsc{pdf}\xspace \eqref{eq:PDF_PCE} from cc\textsc{opf}\xspace violates the constraint limit, in contrast to the \textsc{pdf}\xspace via h\textsc{opf}\xspace; in Figure~\ref{fig:nonequivalence} the dashed black plot is for $\delta = 2$, and the solid black plot is for $\delta = 3$.
Qualitatively, a higher value of $\delta$ leads the cc\textsc{opf}\xspace solution to stay away from the constraint.
Quantitatively, the probabilities for constraint satisfaction are summarized in Table \ref{tab:SimulationParameters}.
The parameter $\delta$ has another influence on the quality of the solution:
Compared to the unconstrained case~\textsc{c}1\xspace, the \textsc{pdf}\xspace for bus~1 in case~\textsc{c}2\xspace is significantly narrower and its mode is shifted towards higher injections; the opposite effect can be observed for the power generation at bus~2, whose \textsc{pdf}\xspace becomes wider.
This leads to less variability in the generation at bus~1, which necessarily leads to higher variability in power generation at bus~2 to ensure power balance.
The higher the value for $\delta$, the less variability is allowed at bus~1.
The value of $\delta$ also affects the total variational distance $\Delta(\cdot, \cdot)$; Table~\ref{tab:SimulationParameters} lists the numerical values.
The narrower \textsc{pdf}\xspace at bus~1 for $\delta = 3$ leads to a 48\,\% increase in the total variational distance compared to $\delta = 2$.
A similar behavior is observed at bus~2, where the ``true'' \textsc{pdf}\xspace from h\textsc{opf}\xspace is fairly narrow, but the \textsc{pdf}s\xspace from cc\textsc{opf}\xspace are structurally too wide.
In that case, for $\delta = 3$ the total variational distance is 30\,\% larger compared to $\delta = 2$.
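The total variational distance used above is taken here as half the $L^1$ distance between densities (an assumption about the definition, which appears earlier in the paper). A minimal sketch with Gaussian stand-ins for a narrow and a wide \textsc{pdf}\xspace:

```python
import numpy as np

def tv_distance(f, g, x):
    """Total variational distance 0.5 * int |f - g| between two densities
    sampled on a common grid x (trapezoidal rule)."""
    h = np.abs(f - g)
    return float(0.5 * np.sum(0.5 * (h[1:] + h[:-1]) * np.diff(x)))

gauss = lambda x, m, s: np.exp(-0.5 * ((x - m) / s)**2) / (s * np.sqrt(2 * np.pi))
x = np.linspace(-8.0, 8.0, 40001)

d_same = tv_distance(gauss(x, 0, 1), gauss(x, 0, 1), x)      # identical densities: 0
d_narrow = tv_distance(gauss(x, 0, 1), gauss(x, 0, 0.5), x)  # narrow vs. wide density
```

The distance vanishes for identical densities and grows as one density becomes structurally too narrow or too wide relative to the other, mirroring the behavior in Table~\ref{tab:SimulationParameters}.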
\section{Conclusion \& Outlook}
This paper relates chance-constrained \textsc{opf}\xspace to in-hindsight \textsc{opf}\xspace, which serves as a full-information yet unrealistic benchmark.
For cc\textsc{opf}\xspace, an entire control policy is computed by means of a single optimization problem \emph{before} the realization of the uncertainty is known; at the expense of possible constraint violations (with user-specified low probability).
For h\textsc{opf}\xspace, an \textsc{opf}\xspace problem is solved for every realization of the uncertainty, which leads to per-sample optimality, infinitely many \textsc{opf}\xspace problems, and no immediate control policy by means of a functional dependency.
We show that cc\textsc{opf}\xspace and h\textsc{opf}\xspace are equivalent for \textsc{dc}-\textsc{opf}\xspace problems for which the active set of inequality constraints is unchanged for all realizations of the uncertainty.
In that case, the policy from cc\textsc{opf}\xspace gives equivalent results to online optimization approaches.
If the solutions from cc\textsc{opf}\xspace and h\textsc{opf}\xspace do differ, the size of the total variational distance may indicate whether cc\textsc{opf}\xspace should be favored over fast online optimization, or vice versa.
A tutorial three-bus example underpins our results.
The present paper discusses elements of the relation of cc\textsc{opf}\xspace, h\textsc{opf}\xspace, and fast online optimization.
However, several questions remain open: What other dimensions enter the price of uncertainty? Can a measurement-based detection of changes in the active set trigger new cc\textsc{opf}\xspace computations? What can be said in the multi-stage \textsc{ac}\xspace setting?
\printbibliography
\end{document}
\subsection{Background and Motivation}
Switching systems are dynamical systems described by a real-valued state vector and a discrete-valued operating mode.
For each of the operating modes, the system dynamics and output mapping may be different.
This is a wide class of systems, finding applications in a myriad of
different fields \cite{fischer_OptimalSequencebasedLQG_2013,schuurmans_LearningBasedRiskAverseModel_2020a,blair_ContinuousTimeRegulationClass_1975}.
We refer to Markov switching systems as the particular subclass of
such systems in which the discrete mode switches are governed by
a Markov chain.
Our main objective is to control Markov switching systems, while
simultaneously learning the switching probabilities between the modes,
which are initially assumed to be completely unknown.
A promising approach, which systematically balances controller
performance and robustness with respect to
misestimations of the probability distributions, is called
distributionally robust control.
Due to its attractive theoretical properties,
it has become popular for an increasing number of control tasks \cite{rahimian_DistributionallyRobustOptimization_2019,
coppens_DatadrivenDistributionallyRobust_2020,xu_DistributionallyRobustMarkov_2010,
schuurmans_LearningBasedDistributionallyRobust_2020a}.
When designing controllers for this class of systems,
it is often assumed that the discrete mode is directly measurable \cite{schuurmans_LearningBasedDistributionallyRobust_2020a,schuurmans_LearningBasedRiskAverseModel_2020a,
chow_FrameworkTimeconsistentRiskaverse_2014,patrinos_StochasticModelPredictive_2014}. However, in practice,
the discrete mode typically needs to be inferred from measurements of the continuous state or from output measurements. In this work,
we will restrict our attention to the particular case of \ac{MJLS} \cite{costa_DiscretetimeMarkovJump_2005}, and we will only
assume output measurements to be available.
Our goal is to recursively estimate the active mode,
in order to continually learn the transition probabilities of the underlying Markov chain,
and integrate this procedure with an online controller design which leverages this information to gradually improve performance.
By adopting a distributionally robust framework, we can do so while
retaining system theoretic guarantees, such as stability. We
will illustrate this by means of a linear controller design, but
keeping in mind more advanced applications, involving model predictive
approaches \cite{schuurmans_LearningBasedRiskAverseModel_2020a}.
\subsection{Related work}
In general, the task of estimating the state of \ac{MJLS}
has been studied extensively under various assumptions.
In the most general setting, where both the discrete mode and the
continuous state are to be estimated from (noisy) output
measurements, it is well-known that the optimal observer
requires exponentially growing memory, as was first shown in
\cite{ackerson_StateEstimationSwitching_1970}.
To combat this restriction, several approximate estimators
have been proposed \cite{ackerson_StateEstimationSwitching_1970,tugnait_DetectionestimationSchemeState_1979}, e.g.,
the \ac{IMM} approach \cite{mazor_InteractingMultipleModel_1998}
has been a popular choice in the case where the transition probabilities are known. More recently, in \cite{alriksson_ObserverSynthesisSwitched_2006}, techniques from
relaxed dynamic programming were used to design an efficient
receding horizon estimator which discards unlikely mode sequences, while retaining accuracy bounds.
Since most of these approximate methods rely on the knowledge of the transition probabilities to merge or discard certain mode sequences, they are not directly applicable in the present setting, as no prior knowledge of the transition probabilities is available.
Others have proposed to circumvent the mode estimation problem
and directly estimate the underlying state, e.g., \cite{bako_NewStateObserver_2011}. However, since our end goal involves learning the underlying transition matrix for prediction purposes, continuous state estimates alone do not suffice for our purposes.
A parallel line of study has focused on the possibility of uniquely
identifying the mode sequence and initial state from noiseless
output measurements, leading to different notions
of observability in the context of (linear) switching systems \cite{ji_ControllabilityObservabilityDiscretetime_1988}.
In \cite{vidal_ObservabilityIdentifiabilityJump_2002}, simple rank conditions for the observability of autonomous jump linear systems with unknown switching sequence were given.
These results were later extended to systems with a control input
\cite{elhamifar_RankTestsObservability_2009}.
Although these conditions are attractive from a computational point of view, since they do not suffer from exponential growth in complexity, they require a minimal dwell-time assumption on the (unknown) mode sequence. This assumption excludes the case where the underlying mode sequence is generated by a Markov chain, especially if there is no prior information on the transition probabilities.
Around the same time, Babaali and Egerstedt proved complementary results,
allowing for arbitrary switching behavior \cite{babaali_PathwiseObservabilityControllability_2003,babaali_ObservabilitySwitchedLinear_2004,babaali_AsymptoticObserversDiscreteTime_2005}. It has been
shown that in these circumstances, it is often not possible
to uniquely identify the mode sequence at the current time step. This
consideration has led to more relaxed notions of mode observability \cite{baglietto_ActiveModeObservability_2007,baglietto_ActiveModeObservation_2009,alessandri_RecedinghorizonEstimationSwitching_2005,baglietto_ModeobservabilityDegreeDiscretetime_2014}, which we will
utilize in this work.
Roughly speaking, a system is said to be mode observable if there
exists a finite observation horizon $N$, such that the mode sequence
(and the initial state) can be recovered from $N$ measurements (see \Cref{sec:mode-observability} for a formal definition).
In prior observer designs, this horizon was assumed to be determined offline. Although the determination of this horizon was shown to
be decidable in theory \cite{babaali_PathwiseObservabilityControllability_2003}, the computational complexity explodes rapidly with the required horizon $N$, potentially rendering this offline procedure intractable, even for small-scale systems \cite{babaali_PathwiseObservabilityControllability_2003}.
\subsection{Contributions}
Our first and main contribution is the development of a recursive estimator
for the mode and the continuous state of Markov linear systems.
As we do not assume any prior knowledge on the transition probabilities,
our scheme permits arbitrary switching behavior. This recursive procedure will automatically select the required window size, alleviating the need to determine the mode-observability index offline.
Additionally, we integrate the proposed mode observer with a data-driven
distributionally robust controller design and illustrate how potential non-uniqueness of the current mode and current state can be circumvented
using an output-feedback approach.
Finally, we demonstrate the practicality of our approach using illustrative numerical experiments.
\subsection{Notation}
Given $a \leq b \in \N$, we define $\natseq{a}{b} \dfn \{
n \in \N \mid a \leq n \leq b \}$. We define the strictly positive natural numbers by $\N_+ \dfn \N \setminus \{0\}$. Let $x = (x_t)_{t \in \N}$ denote some (potentially vector-valued) sequence; given indices $k \leq l \in \N$, $\seq{x}{k}{l} = (x_{k}, \dots, x_{l})$ is the subsequence over the time steps $k$ to $l$.
Given a matrix $A$, $A^{\dagger}$ is its (Moore-Penrose) pseudo-inverse.
Given matrices $A_i$, $i=0, \dots, n$, we denote their
product (in the order of the index $i$) by $\prod_{i=0}^{n} A_i = A_0 A_1 \dots A_n$. We denote the cardinality of a (finite) set $\Theta$ by $|\Theta|$. Given symmetric matrices $Q, Q'$, we write $Q \succ Q'$ to signify that $Q-Q'$ is positive definite.
Finally, we denote by $\simplex_{\nModes} \dfn \{ p \in \Re^{\nModes} \mid p \geq 0, \trans{\1}p =1 \}$ the $\nModes$-dimensional probability simplex.
\section{Preliminaries and problem statement}
Let $\{\md_t\}_{t \in \N}$ be a time-homogeneous Markov chain, defined
on some probability space $(\Omega, \F, \prob)$ and taking
values on the finite set $\W \dfn \{1,\dots, \nModes\}$.
We denote its (unknown) transition matrix $\transmat = (\transmat_{i,j})_{i,j \in \W} \in \simplex_{\nModes}^{\nModes}$, where $\simplex_{\nModes}^{\nModes}$ denotes the set of $\nModes$-by-$\nModes$ stochastic matrices, that is, $P_{i,j} = \prob[\md_{t}=j \mid \md_{t-1}=i]$.
We will denote the $i$th row of $\transmat$ by $\row{\transmat}{i} \in \simplex_{\nModes}$. We will assume that the Markov chain
$(\md_t)_{t\in\N}$ is ergodic, i.e., there exists a $k \in \N_+$ such that $\transmat^k > 0$ element-wise.
This assumption ensures that every mode of the chain gets visited infinitely often \cite[Ex. 8.7]{billingsley_ProbabilityMeasure_1995}, effectively allowing us to learn the complete transition matrix from a sample trajectory.
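A minimal sketch of this learning task (illustrative transition matrix, not from the paper): simulate an ergodic two-mode chain and recover $\transmat$ from the empirical transition counts.

```python
import numpy as np

def simulate_chain(P, T, rng):
    """Sample a mode trajectory of length T from a row-stochastic transition matrix P."""
    w = [0]
    for _ in range(T - 1):
        w.append(rng.choice(P.shape[0], p=P[w[-1]]))
    return w

def estimate_transitions(w, n_modes):
    """Maximum-likelihood estimate of P from observed mode transitions."""
    C = np.zeros((n_modes, n_modes))
    for i, j in zip(w[:-1], w[1:]):
        C[i, j] += 1.0
    # ergodicity guarantees every row is eventually visited for T large enough
    return C / C.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
P = np.array([[0.9, 0.1], [0.3, 0.7]])   # ergodic: P^2 > 0 element-wise
P_hat = estimate_transitions(simulate_chain(P, 20_000, rng), 2)
```

Of course, this sketch assumes the mode trajectory is directly observed; the remainder of the paper addresses the harder case where the modes must themselves be inferred from output measurements.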
Driven by this Markov chain is the \ac{MJLS}
\begin{subequations} \label{eq:system}
\begin{align}\label{eq:dynamics}
x_{t+1} &= A_{\md_t} x_t + B_{\md_t}u_t\\
y_{t} &= C_{\md_t} x_t, \label{eq:measurement}
\end{align}
\end{subequations}
where $x_t \in \Re^{\ns}$ is the (continuous) state of the system, $u_t \in \Re^{\na}$ is the input and $y_{t} \in \Re^{\ny}$ is the output
at time step $t$.
We refer to the value $\md_t$ as the mode at time $t$.
Given the initial state $x_0$, a sequence of previous control
actions $\seq{u}{0}{t-1} = (u_0, \dots, u_{t-1}) \in \Re^{t \na}$ and the true mode sequence
$\md^{\star} = (\md_{0}, \dots, \md_t) \in \W^{t+1}$, the sequence
of measurements $\seq{y}{0}{t} = (y_{0}, \dots, y_{t}) \in \Re^{\ny(t+1)}$
satisfies $\seq{ y}{0}{t} = Y(\md^\star, x_0,\seq{ u}{0}{t-1})$, with
\begin{equation} \label{eq:measurement-equation}
Y(\md^{\star}, x_0, \seq{u}{0}{t-1}) \dfn \obs(\md^\star)x_0 + G(\md^\star)\seq{ u}{0}{t-1},
\end{equation}
where
\begin{align*}
\obs(\md)
{}\dfn{}
\smallmat{
C_{\md_0} \\
C_{\md_1} A_{\md_0} \\
\vdots \\
C_{\md_t} \prod_{k=1}^{t} A_{\md_{t-k}}
}
\end{align*}
is the observability matrix and
\begin{align} \label{eq:hankel}
G(\md) &\dfn H(\md) \blkdiag(B_{\md_0}, \dots, B_{\md_{t-1}}), \\
H(\md) &\dfn \smallmat{
0 & 0 & \dots & 0 \\
C_{\md_1} & 0 & \dots & 0 \\
C_{\md_2}A_{\md_{1}} & C_{\md_2} & \dots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
C_{\md_t}\prod_{k=1}^{t-1}A_{\md_{t-k}} & C_{\md_t}\prod_{k=1}^{t-2}A_{\md_{t-k}} & \dots & C_{\md_{t}}
},
\end{align}
describe the effects of the controls.
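The stacked matrices $\obs(\md)$ and $G(\md)$ can be assembled recursively. The sketch below verifies \eqref{eq:measurement-equation} on a toy two-mode system (illustrative matrices, not from the paper) by comparing the stacked outputs against a direct simulation.

```python
import numpy as np

def obs_matrix(A, C, theta):
    """Stacked observability matrix O(theta) for the mode path theta = (th_0,...,th_t)."""
    rows, Phi = [], np.eye(A[0].shape[0])
    for m in theta:
        rows.append(C[m] @ Phi)
        Phi = A[m] @ Phi      # after step k: Phi = A_{th_k} ... A_{th_0}
    return np.vstack(rows)

def input_matrix(A, B, C, theta):
    """Block lower-triangular G(theta) mapping (u_0,...,u_{t-1}) to (y_0,...,y_t)."""
    t, ns = len(theta) - 1, A[0].shape[0]
    ny, na = C[0].shape[0], B[0].shape[1]
    G = np.zeros((ny * (t + 1), na * t))
    for i in range(1, t + 1):
        Phi = np.eye(ns)
        for j in range(i - 1, -1, -1):    # block (i,j) = C_{th_i} A_{th_{i-1}}...A_{th_{j+1}} B_{th_j}
            G[i*ny:(i+1)*ny, j*na:(j+1)*na] = C[theta[i]] @ Phi @ B[theta[j]]
            Phi = Phi @ A[theta[j]]
    return G

# toy two-mode system (illustrative values)
A = [np.array([[0.9, 1.0], [0.0, 0.8]]), np.array([[0.5, 0.0], [0.2, 1.0]])]
B = [np.array([[0.0], [1.0]]), np.array([[1.0], [0.0]])]
C = [np.array([[1.0, 0.0]]), np.array([[0.0, 1.0]])]
theta, rng = [0, 1, 1, 0], np.random.default_rng(2)
x0, u = rng.standard_normal(2), rng.standard_normal(3)

# simulate the switched system and stack its outputs
x, ys = x0.copy(), []
for k, m in enumerate(theta):
    ys.append(C[m] @ x)
    if k < len(theta) - 1:
        x = A[m] @ x + B[m] @ u[k:k+1]

Y_stacked = obs_matrix(A, C, theta) @ x0 + input_matrix(A, B, C, theta) @ u
```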
We will now review several standard notions from the literature regarding
observability of the mode sequence, and adapt them to our purposes where
needed.
\subsection{Mode observability} \label{sec:mode-observability}
The task of recovering the mode sequence (or \emph{path}) from a sequence of measurements rests on our ability
to distinguish two mode sequences from one another,
which leads to the following concept, originally introduced by \cite{babaali_ObservabilitySwitchedLinear_2004} as \emph{controlled discernibility}.
\begin{definition}[Discernibility] \label{def:discernible}
We say that a pair of mode sequences $\md, \md' \in \W^{N}$
of length $N$ is discernible with respect to a control sequence $u \in \Re^{(N-1)\na}$ if,
\begin{equation} \label{eq:discernible-condition}
Y(\md, x, u) \neq Y(\md', x', u), \quad \text{ for all }x, x' \in \Re^{\ns}.
\end{equation}
Otherwise, we say that the pair is indiscernible with respect to
$u$.
\end{definition}
In words, two paths are discernible if there is no pair of initial states for which the two paths would yield the same output measurements.
\begin{remark}
In the context of autonomous systems, i.e., whenever $B_i = 0$ for all $i \in \W$,
a weaker notion of discernibility is typically considered,
where a condition akin to \eqref{eq:discernible-condition} is imposed for \textit{almost
every} $x, x'$ \cite{babaali_ObservabilitySwitchedLinear_2004},
or all $x, x'$, such that $x \neq 0$ or $x'\neq 0$
\cite{alessandri_RecedinghorizonEstimationSwitching_2005}.
This relaxation is necessary in this case,
since \eqref{eq:discernible-condition} would require the
column spaces of $\obs(\md)$ and $\obs(\md')$ to be
fully disjoint,
which is impossible as both trivially contain the zero vector.
Unfortunately, this weakened discernibility notion is not sufficient to ensure that the state and mode sequence can be recursively determined online. An additional assumption needed for this purpose is known as \emph{backward discernibility}. We refer to \cite{babaali_AsymptoticObserversDiscreteTime_2005,halimi_ModelBasedModesDetection_2015} for more details as we will
not consider this case here.
\end{remark}
We can now introduce the following notions of mode observability
(\acs{MO}\acused{MO}) for
system \eqref{eq:system}, following roughly the terminology
of \cite{baglietto_ActiveModeObservability_2007,alessandri_RecedinghorizonEstimationSwitching_2005}.
\begin{definition} \label{def:alpha-omega-MO}
Consider $N \in \N_+$ and $\alpha, \omega \in \N$, with
$\alpha + \omega < N$. We say that
system \eqref{eq:system} is $(N,\alpha,\omega)$-\ac{MO}
if for all paths $\md, \md' \in \W^{N+1}$, with
$\seq{\md}{\alpha}{N-\omega} \neq \seq{\md'}{\alpha}{N-\omega}$,
$\md$ and $\md'$ are discernible with respect to
\emph{almost every} control sequence $u \in \Re^{\na(N-1)}$,
that is, all $u \in \Ufeas_{N}^{\alpha,\omega} \dfn \Re^{\na (N-1)} \setminus \mathcal{K}^{\alpha,\omega}_N$, with $\mathcal{K}_{N}^{\alpha,\omega}$ a set of Lebesgue measure 0.
\end{definition}
We refer to $\Ufeas_N \dfn \bigcup_{\alpha, \omega} \Ufeas_{N}^{\alpha,\omega}$ as the set of \emph{discerning} $N$-step control sequences.
This notion can be characterized as follows \cite{baglietto_ActiveModeObservability_2007,babaali_ObservabilitySwitchedLinear_2004}.
\begin{lem}[Controlled discernibility \cite{babaali_ObservabilitySwitchedLinear_2004}]
\label{lem:controlled-discernibility}
Two paths $\md, \md' \in \W^{N+1}$ are discernible with respect to almost every control sequence if
\begin{equation} \label{eq:discernibility}
(I - P)(G(\md)-G(\md')) \neq 0,
\end{equation}
where $P$ denotes the orthogonal projection onto the column space of $\regmatrix{\obs(\md) & \obs(\md')}$.
\end{lem}
We call any system satisfying the above \emph{weakly} \ac{MO}.
\begin{definition}[Weak mode observability]
We say that system \eqref{eq:system}
is weakly \ac{MO} at index $N$, if there exists a pair $(\alpha, \omega)$,
such that it is $(N,\alpha, \omega)$-\ac{MO}.
We call the lowest such $N$ the index of observability.
\end{definition}
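The rank condition \eqref{eq:discernibility} of \Cref{lem:controlled-discernibility} is straightforward to evaluate numerically: build the projector onto the column space of $\regmatrix{\obs(\md) & \obs(\md')}$ and check whether the projected difference of input matrices vanishes. The sketch below uses small abstract stand-ins for the observability and input matrices of two candidate paths.

```python
import numpy as np

def discernible_ae(O1, O2, G1, G2, tol=1e-9):
    """Rank test of the lemma: paths are discernible w.r.t. almost every input
    if (I - P)(G1 - G2) != 0, where P projects onto col([O1 O2])."""
    M = np.hstack([O1, O2])
    P = M @ np.linalg.pinv(M)               # orthogonal projector onto col(M)
    residual = (np.eye(P.shape[0]) - P) @ (G1 - G2)
    return bool(np.linalg.norm(residual) > tol)

# abstract stand-ins for the observability / input matrices of two paths
O1 = np.array([[1.0], [0.0], [0.0]])
O2 = np.array([[1.0], [0.0], [0.0]])
G1 = np.array([[0.0], [1.0], [0.0]])
G2 = np.zeros((3, 1))

d_yes = discernible_ae(O1, O2, G1, G2)   # G1 - G2 leaves col([O1 O2]): discernible
d_no = discernible_ae(O1, O2, G1, G1)    # identical input matrices: indiscernible
```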
\begin{assumption} \label{assum:MO}
System \eqref{eq:system} is weakly \ac{MO} at some (unknown) index
$N$.
\end{assumption}
One can argue that weak \ac{MO} is the minimal requirement
to guarantee a priori that any mode observation scheme will produce a meaningful result. If it does not hold, then there is no guarantee that for any initial state, even a single mode can ever be uniquely determined, regardless of the number of output measurements.
In practice, verification of (weak) mode observability may require a rapidly growing number of verifications of \eqref{eq:discernibility}, rendering even offline computation of the observability index $N$ intractable.
For this reason, we propose a scheme which does not require the
observation horizon to be selected beforehand.
Rather, it is automatically tuned to the smallest required value.
We will see that this typically yields a smaller observation window than
required by mode observability (see \Cref{sec:examples}).
\section{Online mode estimation}
In this section, we describe a mode estimation procedure
which stores a window of $N_t$ past measurements, where $N_t$
is adaptively selected online.
To ease notation, we will denote by $\mathbf{y}_t \dfn\seq{ y}{t-N_t}{t}$ the sequence of measurements collected over the observation window at time $t$. Similarly, we will denote by $\mathbf{u}_t \dfn\seq{ u}{t-1-N_t}{t-1}$ the known control sequence over the same window.
At every time instance, the estimation procedure, summarized in \Cref{alg:mode_estimation}, consists of two main steps, discussed below:
\begin{inlinelist}
\item update the set of \emph{consistent mode sequences} (\Cref{sec:step1-consistent-mode-sequences}); and
\item update the horizon length (\Cref{sec:horizon-length-selection}).
\end{inlinelist}
In the remainder of this section, we will make the following assumption.
\begin{assumption}\label{assum:discernible}
The applied control sequences $\mathbf{u}_t$ are discerning for all $N_t \geq N$,
where $N$ denotes the index of mode-observability of the system.
\end{assumption}
\begin{remark} \label{rem:noise}
This assumption can be enforced by imposing it explicitly as a condition during controller synthesis \cite{baglietto_ActiveModeObservability_2007}. Unfortunately, this leads to a nonconvex constraint, which is highly undesirable in most design procedures for linear systems. However, since by assumption on the underlying system, the set of non-discerning control sequences is null, it suffices in practice to
add a very small random number (e.g., $e_u \sim \mathcal{N}(0, \epsilon I_{\na})$, with $\epsilon>0$ close to zero) to the control action and thus apply $\tilde u_t = u_t + e_u$ to the system instead. In doing so, the sequence $\mathbf{u}_t$ will be discerning with probability one.
\end{remark}
\subsection{Consistent mode sequences} \label{sec:step1-consistent-mode-sequences}
At every time instance $t$,
the procedure keeps track of the set of modes sequences of length $N_t$, defined as follows.
\begin{definition}[Consistent mode sequences]
For a given measurement sequence $y \in \Re^{(N+1)\ny}$,
and control sequence $u \in \Re^{N \na}$, we denote by
\begin{equation*} \label{eq:definition-consistency}
\Theta(y, u) \dfn \{ \md \in \W^{N+1} \mid \exists x \in \Re^{\ns}: y = Y(\md, x, u) \}
\end{equation*}
the set of mode sequences consistent with $y$ and $u$.
\end{definition}
\begin{remark} \label{rem:indiscernible}
Note that by definition, any pair of elements $\md, \md' \in \Theta(y,u)$ is indiscernible with respect to $u$.
\end{remark}
\begin{remark}
In the following, we will always index a mode sequence $\md \in \Theta(\mathbf{y}_t, \mathbf{u}_t)$ relative to the start of the prediction window,
i.e., $\md = (\md_0, \dots, \md_{N_t})$,
and not relative to the absolute time index $t$,
as in $(\md_{t- N_t}, \md_{t- N_t+1}, \dots, \md_{t})$.
This should cause no confusion as the final element in the sequence, $\md_{N_t}$, will always correspond to the absolute time $t$.
\end{remark}
Given a mode sequence $\md$,
one can verify its consistency by solving a single least-squares problem.
More specifically, $\md \in \Theta(y,u)$ if and only if
\begin{equation} \label{eq:check-consistent}
(I-\obs(\md)\obs(\md)^\dagger)\tilde y(\md) = 0,
\end{equation}
with $\tilde{y}(\md) \dfn y - G(\md) u$.
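The test \eqref{eq:check-consistent} amounts to a single least-squares solve: project $\tilde{y}(\md)$ onto the column space of $\obs(\md)$ and check that the residual vanishes. A sketch with toy matrices (illustrative values):

```python
import numpy as np

def is_consistent(O, G, y, u, tol=1e-8):
    """Consistency test (check-consistent): theta is consistent with (y, u)
    iff the least-squares residual of O x = y - G u vanishes."""
    ytil = y - G @ u
    xhat = np.linalg.lstsq(O, ytil, rcond=None)[0]
    # O @ xhat is the orthogonal projection of ytil onto col(O), i.e. O O^dagger ytil
    return bool(np.linalg.norm(ytil - O @ xhat) <= tol)

# toy single-mode observability matrix: both outputs read the same scalar state
O = np.array([[1.0], [1.0]])
G = np.zeros((2, 1))
u = np.zeros(1)

ok = is_consistent(O, G, np.array([2.0, 2.0]), u)    # some x (= 2) explains y
bad = is_consistent(O, G, np.array([1.0, 2.0]), u)   # no single x fits both outputs
```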
In a naive implementation, constructing the set $\Theta(\mathbf{y}_t, \mathbf{u}_t)$
would require $O(\nModes^{N_t+1})$ verifications of \eqref{eq:check-consistent}.
However, as the next result shows, only the extensions
of mode sequences that are in $\Theta(\mathbf{y}_{t-1}, \mathbf{u}_{t-1})$ need to
be considered.
This reduces the computational burden to $O(|\Theta(\mathbf{y}_{t-1}, \mathbf{u}_{t-1})| \nModes)$, which could potentially entail quite a considerable reduction.
\begin{lem} \label{lem:incremental-update}
Let $\md' \dfn \cat{\md\bar{\md}} \in \W^{N+1}$ denote
the concatenation of a path $\md \in \W^{N}$ with a single mode $\bar{\md} \in \W$, for some $N \in \N_+$.
If $\md' \in \Theta(\seq{y}{1}{N+1},\seq{u}{1}{N})$, then there must exist a mode $\underline{\md} \in \W$
such that $\cat{\underline{\md} \md} \in \Theta(\seq{y}{0}{N},\seq{ u}{0}{N-1})$.
Similarly, if $\cat{\underline \md \md \bar{\md}} \in \Theta(\seq{y}{0}{N+1},\seq{ u}{0}{N})$, then $\cat{\underline \md \md} \in \Theta(\seq{y}{0}{N},\seq{ u}{0}{N-1})$.
\end{lem}
\begin{proof}
It follows directly from the definition of $\Theta(\argdot)$. See the \Cref{proof:lem:incremental-update} for details.
\end{proof}
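The incremental construction justified by \Cref{lem:incremental-update} can be sketched as follows; the consistency predicate here is a toy stand-in for the least-squares test \eqref{eq:check-consistent}.

```python
def update_consistent(prev_paths, n_modes, consistent):
    """One step of the UPDATE procedure: extend every previously consistent path
    by each candidate mode and keep those passing the consistency predicate.
    Cost is O(|Theta| * n_modes) instead of O(n_modes ** (N+1))."""
    kept = []
    for path in prev_paths:
        for m in range(n_modes):
            cand = path + (m,)
            if consistent(cand):
                kept.append(cand)
    return kept

# toy predicate standing in for the least-squares consistency test
nondecreasing = lambda p: all(p[i] <= p[i + 1] for i in range(len(p) - 1))
theta_next = update_consistent([(0,), (1,)], 2, nondecreasing)
```

Only extensions of previously consistent paths are ever examined, exactly as the lemma permits.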
\subsection{Window size selection} \label{sec:horizon-length-selection}
Once the set of consistent mode sequences has been computed,
the next step is to determine whether it is required to increase the window size $N_t$.
To this end, let $n_{\text{c}} \in \N_+$ be a user-specified positive number and define the set
\begin{equation} \label{eq:indices}
\begin{aligned}
I_{t}^{n_{\text{c}}} \dfn
\left\{
k \in \natseq{0}{\bar{N}_t}
{}\sep{}
\begin{matrix}
\theta_{k+i} = \theta'_{k+i}, \forall \theta, \theta' \in \Theta(\mathbf{y}_t, \mathbf{u}_t),\\
\forall i \in \natseq{0}{n_{\text{c}}-1}
\end{matrix}
\right \},
\end{aligned}
\end{equation}
with $\bar{N}_t \dfn \max\{N_t-n_{\text{c}}, 0\}$,
containing the indices at which all consistent mode sequences are identical for $n_{\text{c}}$ consecutive time steps. If $I_t^{n_{\text{c}}} = \emptyset$, then we set $N_{t+1} = N_{t} + 1$. By default, one would take $n_{\text{c}}=1$. However, in the case where one is interested in observing mode \emph{transitions} rather than simply the modes at potentially non-consecutive time instances (as we are), it is desirable to take $n_{\text{c}}=2$.
This criterion for the selection of $N_t$ is justified by the following observations.
\begin{lem} \label{prop:extension-of-MO-index}
If system \eqref{eq:system} is $(N, \alpha, \omega)$-\ac{MO}, then it is also $(N+1, \alpha, \omega)$-MO.
\end{lem}
\begin{proof}
See \Cref{proof:prop:extension-of-MO-index}.
\end{proof}
\begin{proposition} \label{lem:condition-not-MO}
If the system is weakly \ac{MO} at index $N$, then
there exists an index $k$, such that for all sufficiently large
$t$,
\begin{equation} \label{eq:test-MO}
\md_{k+i} = \md_{k+i}'\quad \forall \md, \md' \in \Theta(\mathbf{y}_t, \mathbf{u}_t)
\end{equation}
for all $i \in \natseq{0}{n_c-1}$.
\end{proposition}
\begin{proof}
This is a direct consequence of \Cref{def:alpha-omega-MO} and the fact that two paths $\theta, \theta' \in \Theta(\mathbf{y}_t, \mathbf{u}_t)$
are indiscernible (see \Cref{rem:indiscernible}). We consider
two distinct cases.
\begin{proofsteps}
\item \label{case:large-N} Suppose that $N_t = N$, then weak mode-observability and
\Cref{assum:discernible} imply that for some $\alpha, \omega$,
\[
\seq{\md}{\alpha}{N_t-\omega} = \seq{\md'}{\alpha}{N_t-\omega}, \; \forall \md, \md' \in \Theta(\mathbf{y}_t, \mathbf{u}_t),
\]
since all paths in $\Theta(\mathbf{y}_t, \mathbf{u}_t)$ are indiscernible (see \Cref{rem:indiscernible}). By \Cref{prop:extension-of-MO-index}, this
can be extended inductively to any $N_t > N$. Hence, if $N_t \geq
\max\{N, n_{\text{c}} + \alpha + \omega - 1\}$, then \eqref{eq:test-MO} holds
for all $i \in \natseq{0}{n_{\text{c}} - 1}$.
\item If, alternatively, $N_t < N$, then two situations may occur.
Either \eqref{eq:test-MO} holds and there is nothing to prove,
or \eqref{eq:test-MO} does not hold.
The latter case implies that $I_{t}^{n_{\text{c}}} = \emptyset$ and therefore
$N_{t}$ is increased by one.
\end{proofsteps}
This situation may occur only a finite number
of times,
until eventually $N_{t} = \max\{N, n_{\text{c}} + \alpha + \omega -1\}$, at which point we arrive back in case \ref{case:large-N}.
\end{proof}
From \Cref{lem:condition-not-MO} it follows that the window sizes $N_t, t \in \N$ generated by \Cref{alg:mode_estimation} are upper bounded, and furthermore, that asymptotically, an infinite number of consecutive mode sequences of length $n_{\text{c}}$ will be observed. These mode sequences can be obtained at every time step from $\Theta_t$ and $I_{t}^{n_c}$.
\begin{algorithm}[t]
\caption{Mode estimation}\label{alg:mode_estimation}
\begin{algorithmic}[1]
\Require initial measurement $y_0$, system \eqref{eq:system}, $n_{\text{c}}>0$
\State $t\gets0$, $N_t \gets 0$, $\Theta_t \gets \{\md \in \W \mid \exists x: y_0 = C_{\md}x \}$,
\Loop
\State Compute $I_{t}^{n_{\text{c}}}$ using \eqref{eq:indices}
\If{$|I_{t}^{n_{\text{c}}}| = 0$} \label{step:MO-check}
\State $N_{t+1} \gets N_t+1$
\EndIf
\State $t \gets t + 1$
\State Measure: $\mathbf{y}_{t} \gets\seq{y}{t-N_{t}}{t}, \mathbf{u}_t \gets\seq{ u}{t-N_{t}-1}{t-1}$
\State $\Theta_{t} \gets $ \textproc{update}($\Theta_{t-1}$, $\mathbf{y}_t, \mathbf{u}_t$)
\EndLoop
\Procedure{update}{$\Theta, \mathbf{y}_t, \mathbf{u}_t$}
\State $\bar{\Theta} \gets \emptyset$
\ForAll{$\underline{\md} \in \Theta$}
\ForAll{$\bar \md \in \W$}
\State $\md \gets \cat{\seq{\underline{\md}}{t-N_t}{t} \bar{\md}}$ \Comment{Concatenate paths}
\If{\eqref{eq:check-consistent} holds for $y=\mathbf{y}_t$, $u=\mathbf{u}_t$}
\State $\bar \Theta \gets \bar \Theta \cup \{\md\}$
\EndIf
\EndFor
\EndFor
\Return $\bar \Theta$
\EndProcedure
\end{algorithmic}
\end{algorithm}
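\Cref{alg:mode_estimation} is straightforward to prototype. The sketch below is a brute-force Python analogue for an autonomous two-mode switched system (inputs are omitted for brevity, and the matrices, tolerance, and enumeration strategy are illustrative assumptions rather than the paper's implementation): a path is deemed consistent when the stacked output equations admit an initial state with negligible least-squares residual, and the window grows whenever the consistent paths fail to agree on $n_{\text{c}}$ consecutive modes.

```python
import itertools
import numpy as np

def obs_matrix(A, C, path):
    """Stacked observability-style matrix O(theta) for an autonomous
    switched system: rows are C_{m_k} A_{m_{k-1}} ... A_{m_0}."""
    rows, Phi = [], np.eye(A[0].shape[0])
    for m in path:
        rows.append(C[m] @ Phi)
        Phi = A[m] @ Phi
    return np.vstack(rows)

def consistent(A, C, path, y_win, tol=1e-8):
    """A path is consistent if some initial state reproduces the output
    window, i.e. the least-squares residual of O(path) x = y is ~0."""
    O = obs_matrix(A, C, path)
    y = np.concatenate(y_win)
    x, *_ = np.linalg.lstsq(O, y, rcond=None)
    return np.linalg.norm(O @ x - y) < tol

def estimate_modes(A, C, ys, n_c=2):
    """Adaptive-window enumeration: grow N until all consistent paths
    agree on n_c consecutive modes somewhere in the window."""
    W = range(len(A))
    N = 1
    while N < len(ys):
        window = ys[-(N + 1):]
        paths = [p for p in itertools.product(W, repeat=N + 1)
                 if consistent(A, C, p, window)]
        if paths:
            agreed = [k for k in range(N + 2 - n_c)
                      if all(p[k:k + n_c] == paths[0][k:k + n_c]
                             for p in paths)]
            if agreed:
                return paths, agreed
        N += 1  # I_t^{n_c} empty: enlarge the window
    return [], []
```

Exhaustive enumeration scales exponentially in the window length; it serves here only to make the consistency and agreement tests of the algorithm explicit.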
\subsection{Recovery of the continuous state} \label{sec:continuous-state}
For control purposes, one may additionally wish to
recover the initial state corresponding to a path in $\Theta(\mathbf{y}_t, \mathbf{u}_t)$. For any consistent mode sequence $\md \in \Theta(\mathbf{y}_t, \mathbf{u}_t)$, there exists an initial state $x_{t-N_t}$ such that the linear system
\begin{equation} \label{eq:state_estimation}
\obs(\md) x_{t-N_t} = \tilde{\mathbf{y}}_t(\md)
\end{equation}
with $\tilde{\mathbf{y}}_t(\md) = \mathbf{y}_t - G(\md)\mathbf{u}_t$ is
consistent. Of course, the solution of this system is unique
if and only if $\rank(\obs(\md)) = n$. If this is the
case for any path $\md$, then it is said that the system is \emph{pathwise observable} \cite{babaali_PathwiseObservabilityControllability_2003}. However, this may require a larger number of measurements than the requirement that $I_{t}^{n_{\text{c}}} \neq \emptyset$.
To account for this, one may simply add the condition that
$\rank(\obs(\md)) = n$ for all $\theta \in \Theta_t$ in step~\ref{step:MO-check} of \Cref{alg:mode_estimation}. Provided that the system is \emph{pathwise observable}
at some index $N_{\text{p}}$, then a modified version of \Cref{lem:condition-not-MO} can be shown analogously to establish finiteness
of the window size $N_t$. However, even under this assumption, it is not
clear that the solutions of \eqref{eq:state_estimation} for different
paths will always coincide. In \Cref{sec:design}, we
discuss several alternative approaches for circumventing this indeterminism of the state in the controller design.
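As a minimal sketch of \eqref{eq:state_estimation} (assuming the input contribution $G(\md)\mathbf{u}_t$ has already been subtracted to form $\tilde{\mathbf{y}}_t(\md)$; the helper name is ours), the state and a uniqueness flag can be recovered together, with the rank test encoding the pathwise-observability condition $\rank(\obs(\md)) = n$:

```python
import numpy as np

def recover_state(O_path, y_tilde, n, tol=1e-9):
    """Least-squares solution of O(theta) x = y_tilde for the state at
    the start of the window; unique iff rank(O(theta)) = n."""
    x, *_ = np.linalg.lstsq(O_path, y_tilde, rcond=None)
    unique = np.linalg.matrix_rank(O_path, tol=tol) == n
    return x, unique
```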
\section{Distributionally robust control} \label{sec:control}
We will now illustrate how the proposed mode-observation scheme
can be utilized for controller design.
Our objective is to construct a stabilizing controller
for system \eqref{eq:system}, given only the information obtained
by the observer described above.
To this end, we adopt a distributionally robust approach,
in which the available data is used to construct a
convex set of distributions which contains the true distribution with
high probability.
For simplicity, we will restrict our attention here
to the synthesis of stabilizing linear controllers, essentially
extending the work in
\cite{schuurmans_SafeLearningBasedControl_2019} to the partially observed case.
Although interesting in its own right,
this problem will prove particularly useful as a tool for the design of stabilizing model predictive control schemes \cite{rawlings_ModelPredictiveControl_2017}.
For instance, the controller described below could be used in conjunction with the methodology of \cite{bernardini_StabilizingModelPredictive_2012}
(with minor modifications), extending it to a distributionally robust variant.
\subsection{Data-driven ambiguity sets} \label{sec:ambiguity}
In order to account for misestimations of transition probabilities due to finite sample sizes, we replace empirical point estimates by so-called ambiguity sets.
Conceptually, an ambiguity set is a (data-dependent) set of probability distributions that contains the true data-generating distribution with
some specified confidence level.
Let $\set{D}_t \dfn \{(\md_k, \md_{k+1})\}_{k=0}^{t}$ denote the set of mode switches observed up to time $t$, using \Cref{alg:mode_estimation}.
Partitioning this set into sets $\set{D}_{t,i} \dfn \{ \md_{k+1} \mid (\md_{k}, \md_{k+1}) \in \set{D}_t, \md_k = i \}$, $i \in \W$, containing the
observed mode transitions originating in mode $i$, we obtain \iac{i.i.d.} sample
from the probability vector $\row{\transmat}{i}$ (i.e., the $i$th row
of the (unknown) transition matrix $\transmat$).
Let $\beta_t \in [0,1)$, $t \in \N$ denote a given (summable) sequence of desired confidence levels.
Following the approach of \cite{schuurmans_LearningBasedDistributionallyRobust_2020a},
we define the ambiguity set as
\begin{equation} \label{eq:ambiguity-set}
\amb_{t,i} = \amb(\set{D}_{t,i}) \dfn \{ p \in \simplex_{\nModes} \mid \nrm{p - \hat{p}_{t,i}}_{1} \leq r_{t,i}\},
\end{equation}
where
\(
\hat{p}_{t,i} = \tfrac{1}{|\set{D}_{t,i}|} \sum_{\md \in \set{D}_{t,i}} \delta(\md)
\) is the empirical distribution over the set $\set{D}_{t,i}$,
and the radius
\begin{equation} \label{eq:TV-radius}
r_{t,i} =
\sqrt{\tfrac{2(\nModes \log 2 - \log \beta_t)}{|\set{D}_{t,i}| }},
\end{equation}
is chosen using standard concentration inequalities to ensure that \cite[Thm. A.6.6]{vaart_WeakConvergenceEmpirical_2000}
\begin{equation} \label{eq:ambiguity-inclusion}
\prob[ \row{\transmat}{i} \in \amb(\set{D}_{t,i}) ] \geq 1 - \beta_t,
\end{equation}
for all $i \in \W$. Note that the probability here is taken with respect to the data set $\set{D}_{t,i}$.
We emphasize that the quantities $\hat{p}_{t,i}$ and $r_{t,i}$ can
easily be updated online, without requiring explicit storage of the
dataset $\set{D}_{t,i}$.
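To make the online update concrete, the following sketch maintains, for a single source mode $i$, only the transition counts, from which $\hat{p}_{t,i}$ and the radius $r_{t,i}$ of \eqref{eq:ambiguity-set}--\eqref{eq:TV-radius} follow in $O(1)$ time per observed switch (a hypothetical helper class, not the authors' code):

```python
import math

class L1Ambiguity:
    """Online l1-ambiguity set for one row of the transition matrix:
    keeps counts, so hat{p} and the radius are updated per observed
    switch without storing the data set D_{t,i}."""
    def __init__(self, n_modes):
        self.counts = [0] * n_modes
        self.total = 0

    def update(self, next_mode):
        self.counts[next_mode] += 1
        self.total += 1

    def p_hat(self):
        return [c / self.total for c in self.counts]

    def radius(self, beta_t):
        # r = sqrt(2 (d log 2 - log beta_t) / |D_{t,i}|)
        d = len(self.counts)
        return math.sqrt(
            2 * (d * math.log(2) - math.log(beta_t)) / self.total)
```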
Finally, it is worthwhile to mention that several other classes of ambiguity sets have been proposed in the literature (see, for instance, \cite{rahimian_DistributionallyRobustOptimization_2019} for an overview). For our purposes, however, the $\ell_1$-ambiguity set defined in \eqref{eq:ambiguity-set} is particularly suitable due to its polytopic
structure, as we will discuss in the next section.
\subsection{Controller design} \label{sec:design}
As the mode at time $t$ is not necessarily uniquely defined,
even under pathwise observability \cite{babaali_PathwiseObservabilityControllability_2003},
the continuous state at time $t$ may not be uniquely recoverable.
Several approaches have been proposed to circumvent this issue.
For instance, in \cite{alessandri_LuenbergerObserversSwitching_2007}, path-dependent Luenberger-type observers are designed,
together with conditions under which the error dynamics are stable for any consistent path at time $t$.
However, these developments were made assuming no stochastic structure in the switching behavior.
By contrast, our goal is to use the constructed ambiguity sets \eqref{eq:ambiguity-set} to incrementally improve the controller using the observed data.
We aim to do so without jeopardizing fundamental system-theoretic properties of the closed-loop system.
To this end, we employ a more direct approach based on linear output feedback.
More specifically,
our goal is to construct a (time-varying) control law $u_t = K_{t} y_t$
such that the closed-loop system
\begin{equation} \label{eq:closed-loop}
x_{t+1} = (A_{\md_t} + B_{\md_t}K_t C_{\md_t}) x_{t},
\end{equation}
is stable in the \emph{mean-square} sense.
System \eqref{eq:closed-loop} is said to be mean-square stable if $\lim_{t \to \infty} \E[x_t \trans{x_t}] = 0$ \cite{costa_DiscretetimeMarkovJump_2005}.
A sufficient condition for this is the existence of a mode-dependent matrix $V_i \succ 0$, $i \in \W$ such that the Lyapunov-type condition
\begin{equation} \label{eq:lyapunov}
\sum_{j \in \W} \transmat_{i,j} \trans{(A_{j}+ B_{j}K_t C_j)} V_{j} (A_{j} + B_{j} K_t C_j) - V_i < 0,
\end{equation}
holds for all $i$, and for all $t \in \N$ \cite{costa_DiscretetimeMarkovJump_2005}.
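Given a candidate gain and Lyapunov matrices, condition \eqref{eq:lyapunov} is cheap to verify numerically; the synthesis itself requires an SDP solver and is not shown. A sketch, where `Acl[j]` stands for the closed-loop matrix $A_j + B_j K C_j$:

```python
import numpy as np

def ms_lyapunov_holds(P, Acl, V, tol=1e-9):
    """Check sum_j P[i,j] Acl_j' V_j Acl_j - V_i < 0 (negative definite)
    for every mode i, a certificate of mean-square stability."""
    n_modes = len(Acl)
    for i in range(n_modes):
        S = sum(P[i, j] * Acl[j].T @ V[j] @ Acl[j]
                for j in range(n_modes))
        if np.max(np.linalg.eigvalsh(S - V[i])) >= -tol:
            return False
    return True
```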
The main challenge here is that $K_t$ is multiplied both on the
left and the right, which makes it impossible to use standard
techniques to reformulate \eqref{eq:lyapunov} as \iac{LMI} \cite{schuurmans_SafeLearningBasedControl_2019,bernardini_StabilizingModelPredictive_2012,kothare_RobustConstrainedModel_1996}.
This issue was addressed in \cite{shu_StaticOutputfeedbackStabilization_2010} for
the case where the transition matrix is known.
Exploiting the fact that the ambiguity sets $\amb_{t,i}$
are polytopic, i.e., they can be written as
$\amb_{t,i} = \conv\{q_{t,i}^{l}\}_{l=1}^{n_{\amb_{t,i}}}$, we can straightforwardly extend this methodology to the
distributionally robust case, leading to the following result.
\begin{thm}
Suppose that at every time $t$,
for sufficiently large $\alpha>0$, and any positive number $c>0$, there exists a solution $\gamma^{\star} < 0$ to
\begin{subequations} \label{eq:LMI}
\begin{align}
\gamma^{\star}{=}&\minimize_{\stackrel{\gamma, V_{1,i}, V_{2,i}, V_4, L}{Q \succ0, H_{j,i}, G_{j,i}}} &&\gamma \\
&\stt && \smallmat{V_{1,i} &\trans{V_{2,i}}\\
V_{2,i} & V_{4}} \succ 0\\
&&& O_{i}(q^{l}_{t,i}, M, \alpha) \prec \gamma I, \label{eq:condition-omega}\\
&&& \gamma \geq -c, \forall l \in \natseq{1}{n_{\amb_{t,i}}},
\end{align}
\end{subequations}
for all $j \in \{1,2\}$ and $i \in \W$, with $O_{i}$ as defined in \eqref{eq:control-design-matrix} (see \Cref{sec:LMI}), where
$M = (M_{i})_{i \in \W}$ is a given (potentially mode-dependent) stabilizing state-feedback gain with respect to all $\row{\transmat}{i} \in \amb_{t,i}, i \in \W$ and $\{q_{t,i}^{l}\}_{l=1}^{n_{\amb_{t,i}}}$ are the vertices of
the polytopic ambiguity set $\amb_{t,i}$.
Let $K_t = Q^{-1}L$ denote the corresponding output feedback gain.
If furthermore, the confidence levels $\beta_t$ are chosen to satisfy
$\sum_{t=0}^{\infty} \beta_{t} < \infty$,
then $u_t = K_t y_t$ is a mean-square stabilizing controller.
\end{thm}
\begin{proof}
If $\row{\transmat}{i} \in \amb_{t,i}$ for all $i \in \W$, then,
since $O_{i}(\argdot, M, \alpha)$ is affine, \cite[Thm. 1]{shu_StaticOutputfeedbackStabilization_2010} implies that at time
$t$ \eqref{eq:lyapunov} holds for all $i \in \W$.
Now let $E_t \dfn \{ \exists i \in \W : \row{\transmat}{i} \notin \amb_{t,i}\}$ denote the event where at least one row of the transition matrix does not lie in the corresponding ambiguity set at time $t$.
By \eqref{eq:ambiguity-inclusion} and the union bound, we have that $\prob[E_t] \leq \nModes \beta_t$.
Since $\beta_t$ was assumed to be summable, the Borel-Cantelli lemma \cite[Thm. 4.3]{billingsley_ProbabilityMeasure_1995} implies that
with probability 1, there exists a finite time $T$ such that the event $E_t$ does not occur for any $t > T$. Thus, the Lyapunov condition \eqref{eq:lyapunov} holds for all $i \in \W$ and for all sufficiently large $t$.
\end{proof}
The stabilizing state-feedback gain $M$ can be computed
using standard techniques, often yielding a separate \ac{LMI} (e.g., \cite{bernardini_StabilizingModelPredictive_2012,schuurmans_SafeLearningBasedControl_2019}).
For the confidence levels we choose $\beta_{t} = 0.5 (t+1)^{-2}$ in our experiments, as this sequence is summable, but decreases sufficiently slowly to ensure that the radii $r_{t,i}$ will converge to 0.
Indeed, since by assumption, the Markov chain is ergodic,
\cite[Lem. 6]{wolfer_MinimaxLearningErgodic_2019} essentially states that the number of visits to every mode (i.e., $|\mathcal{D}_{t,i}|$) asymptotically grows linearly with $t$, so that by \eqref{eq:TV-radius}, $r_{t,i} = \mathcal{O}\big(\sqrt{\tfrac{\log t}{t}}\big) \to 0$ as $t \to \infty$.
As a result, we may expect subsequent refinements of $K_t$ to approach
the control gain obtained when the transition matrix $\transmat$ is known.
\section{Illustrative examples} \label{sec:examples}
\subsection{Mode estimation}
Consider the system \eqref{eq:system} with
\(
A_1 = \smallmat{0.45 & 0\\ 0 & 0.4},
A_2 = \smallmat{0.25 &-0.20 \\ 0.04 & 0.4},
B_1 = B_2 = \smallmat{0.3\\ 0.4},
C_1 = C_2 = \smallmat{2&1},
\)
driven by random input $u_t$, drawn \ac{i.i.d.} from a Gaussian distribution, i.e., $u_t \sim \mathcal{N}(0,I_{\na})$ as described in \cite{ragot_SwitchingTimeEstimation_2003}.
This system is not weakly \ac{MO} at least up to index $N=15$, in the sense of \Cref{lem:controlled-discernibility}. However,
running \Cref{alg:mode_estimation} with $n_{\text{c}}=2$ for 5000 randomly drawn states $x_0 \sim \mathcal{N}(0,I_{\ns})$, and with uniform transition probabilities (i.e., $\transmat_{i,j} = 0.5$, $i,j\in\W$) we find that in all cases, $N_t$ converges to $3$, with $|\Theta_t| \leq 4$ for all $t$.
Similarly, the system with $A_i$, $B_i$ and $C_i$ as in \cite[sec. V]{bako_NewStateObserver_2011} (omitted here for sake of space) is $(4,1,1)$-MO, and thus weakly MO in the sense of \Cref{lem:controlled-discernibility}. Yet, in practice,
we find that in almost all cases, \Cref{alg:mode_estimation} with $n_{\text{c}} =2$ converges to $N_t = 3$, with the path uniquely identified, i.e., $|\Theta_t| = 1$ for all $t \geq 1$.
It is clear that manual selection of the observation window size based on
offline observability computations will -- when feasible -- tend to lead to a strict overestimation of the practically required window length.
\subsection{Distributionally robust control} \label{sec:dr-control-ex}
Consider the (autonomously unstable) system \eqref{eq:system} with
\[
\begin{aligned}
A_1 &= \smallmat{1.05& 1.8\\0 &1.1}, A_2 = \smallmat{0.95 & 0.7\\ 0 & 0.95},\\
B_1 &= \smallmat{0.9& 0 \\ 0&0}, B_2 = \smallmat{0.8 & 0 \\ 0& 1.4},
C_1 = \smallmat{1 & 1 \\ 0 & 0}, C_2 = \smallmat{1&0\\0&1},
\end{aligned}
\]
and with uniform transition matrix: $\bar{\transmat}_{i,j} = 0.5$, $i,j \in \W$.
We compare three controllers:
\begin{inlinelist}
\item \label{item:rob}A \textbf{robust} controller, assuming no knowledge of the
transition probabilities, computed by solving \eqref{eq:LMI}
with $\amb_{t,i} = \simplex_{2}^2$;
\item \label{item:stoch} A \textbf{stochastic} controller, which has access to the
transition matrix $\bar{\transmat}$, and is obtained by solving \eqref{eq:LMI} with $\amb_{t,i} = \{\row{\bar{\transmat}}{i}\}$; and
\item a \textbf{distributionally robust} (DR) controller, using the ambiguity set described in \Cref{sec:ambiguity}.
\end{inlinelist}
We perform a closed-loop simulation of the three controllers for a
random realization of the mode sequence from a fixed initial state.
Initially, the DR controller behaves indistinguishably from the
robust controller, as no data has been obtained yet and the ambiguity set coincides with the probability simplex. After 50 steps, we apply a sudden additive disturbance to the system.
Note that such a disturbance can be detected in the mode estimation scheme, as at this time, $\Theta_t$ will be empty.
In our implementation, we handle this case by discarding data within the observation window and resetting $N_t$ back to 0.
By this time, the ambiguity set radius has already decreased considerably, and as illustrated in \Cref{fig:state_vs_time},
the DR controller shows faster disturbance rejection than the robust approach.
As more data is observed and the ambiguity sets shrink towards $\{\row{\bar{\transmat}}{i}\}$, the DR controller converges to the stochastic controller.
\begin{figure}[ht!]
\centering
\includegraphics[]{states-vs-time}
\caption{Evolution of the state norm in the example of \Cref{sec:dr-control-ex}.}
\label{fig:state_vs_time}
\end{figure}
\section{Conclusion}
We have proposed an adaptive mode-observation scheme
for Markov jump linear systems, which does not require
explicit determination of the index of observability of the
system. We have illustrated, based on a simple
linear output feedback control design, how this mode observer can be used to construct ambiguity sets over the mode transition probabilities,
allowing control performance to improve over time.
In future work, we aim to derive more efficient formulations for the computation of stabilizing output feedback gains, akin to the ones proposed in \cite{schuurmans_SafeLearningBasedControl_2019}. Furthermore, we
plan to integrate the proposed mode observer with
distributionally robust model predictive control schemes \cite{schuurmans_LearningBasedDistributionallyRobust_2020a}.
\bibliographystyle{ieeetr}
\section*{Acknowledgements}
This work was supported by JST CREST Grant Number JPMJCR19A1, Japan.
\section{Conclusion}
We proposed CASPER, a dialog system that bridges the gap between task- and non-task-oriented dialog using the domain shift.
The shifter and selector enable a natural and seamless flow of dialog from open-domain chat to task-oriented conversation.
In a user study, we showed that CASPER gave a better impression in terms of naturalness of response, lack of forced topic switching, and satisfaction compared with a baseline.
We also discussed how shifter and selector naturally introduced task-oriented dialog by conducting a qualitative evaluation of the conversation. Selector detected opportunities for natural topic transitions, and shifter seamlessly changed topic, maintaining consistency in questions and answers.
\section{Discussion}
\begin{table}[t]
\centering
\begin{tabular}{rll}
\hline
Speaker & text \\
\hline\hline
user&What should I call you? \\
chatter&Call me whatever you want. \\
user&OK, I'll call you Mr. Anonymous. \\
chatter&Yes, sir! What would you like me to call you?\\
user&Please call me xx. (authors anonymized)\\
…&…\\
chatter&What kind of hobbies do you like? \\
user&I like to travel. \\
shifter&Traveling sounds nice! I want to travel too!\\
&I want to go on a anime pilgrimage to\\
&the locations for \textit{Laid-Back Camp}!\\
…&…\\
performer&How about \textit{Laid-Back Camp}? \\
system&task success: \textit{Laid-Back Camp} \\
\hline
\end{tabular}
\caption{
\textbf{Interaction between participant and CASPER during experiment}\label{table:ours_example}
}
\end{table}
\begin{table}[t]
\centering
\begin{tabular}{rll}
\hline
Speaker & text \\
\hline\hline
user & What are you addicted to these days? \\
baseline & An anime called \textit{Tokyo Revengers} is interesting. \\
user & I see. Anime. I sometimes read comics. \\
baseline & I recommend \textit{Tokyo Revengers}! \\
…&…\\
baseline & I like the songs of \textit{BUMP OF CHICKEN}!\\
&I recommend them!\\
user & What are some famous songs? \\
baseline & They're all famous! \\
…&…\\
system & task success: \textit{BUMP OF CHICKEN} \\
\hline
\end{tabular}
\caption{
\textbf{Interaction between participant and baseline during experiment}
\label{table:base_example}
}
\end{table}
\subsection{Comparison with Baseline}
The dialog of CASPER was more natural (naturalness), less forced (transition), and more satisfying (satisfaction) compared to baseline, which suggests that CASPER can bridge the gap between chat- and task-oriented dialog better than a single-model system.
One possible reason for this is that the recommendation dialog was developed carefully over more time steps than in the baseline. In practice, the median and mean time steps of the first recommendation utterance were 5 and 9.65 for the baseline, and 13 and 13.7 for CASPER.
To verify the interaction with the two dialog systems in more detail, we show the examples of CASPER and baseline dialogs in Tables \ref{table:ours_example} and \ref{table:base_example}, respectively. The baseline showed a dialog to accomplish a task by responding with praise and recommendations of products. CASPER, on the other hand, waited for the appropriate moment to make a natural topic transition by continuing a task-unrelated dialog for a while and shifted the topics naturally at specific times while maintaining consistency in questions and answers.
The quantitative evaluation of naturalness and transition and the qualitative evaluation using dialog examples indicated that selector appropriately selected the topic-transition destination and timing, and that shifter was able to shift the topic naturally to the target domain while maintaining consistency between questions and answers.
In other words, CASPER was able to address two issues that have not been sufficiently examined in prior studies: the seamless connection between chat- and task-oriented dialog and the appropriate timing of switching between the two.
CASPER also outperformed the baseline in attachment and satisfaction, indicating that it is more suitable as an artificial-intelligence assistant at home.
Unlike with store-assistant robots, with which people are likely to be prepared for task-oriented dialog, the policy of performing a task only when it arises naturally is suitable for the domestic use of dialog systems.
\subsection{Comparison with Ablation Systems}
The original CASPER provided more natural responses and higher dialog satisfaction compared to CASPER w/o shifter, which corresponds to bi-model systems. This indicates that the topic transition of shifter not only benefits the designer by guiding the topic to the task but also benefits the user by making the interaction natural and satisfying. One of the reasons is that shifter's moderate topic provision reduced the dullness of the chatter's generic-response dialog.
We expected the original CASPER to score higher than CASPER US, which employs a unified shifter for all the target task domains,
because the unified model would become overly generalized due to learning with responses from various domains.
However, although CASPER US scored slightly lower in attachment and satisfaction than CASPER, we could not find significant differences.
These results indicate that the unified shifter is capable of managing three domains; how many tasks it can manage remains a question for future work.
We would also like to further investigate the controllability of topics by changing the $\alpha$ parameters of selector (Sec. \ref{ss:selector}).
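As a concrete illustration of how such $\alpha$ parameters could steer model selection, one plausible rule weights the selector's per-model scores before taking the argmax; the multiplicative form and the names below are our assumptions for illustration, not the definition in Sec. \ref{ss:selector}:

```python
def select_model(scores, alpha):
    """Pick which model answers next: weight the selector's per-model
    scores by a controllability parameter alpha and take the argmax.
    scores: per-model scores (including 'chatter'); alpha: per-domain
    weights (unlisted models default to weight 1). A larger alpha[d]
    makes transitions into domain d more eager."""
    weighted = {m: s * alpha.get(m, 1.0) for m, s in scores.items()}
    return max(weighted, key=weighted.get)
```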
\section{Experiment}\label{sec:experiment}
We conducted experiments to investigate our research questions:
(RQ~1) whether selector activates the appropriate model at the appropriate time so as to make the dialog flow seamless, and
(RQ~2) whether shifter generates intermediate utterances that bridge the gap between chat- and task-oriented dialog.
\subsection{Procedure}
In the experiments, we conducted objective evaluations and user studies.
In the objective evaluations, we evaluated the fluency of the systems' responses by comparing perplexity with CASPER and baseline.
In the user studies, participants were asked to interact with a dialog system and were then asked to rate the items listed in Table \ref{table:eval_items} on a 7-point scale.
We validated our overall CASPER design by comparing these user ratings between CASPER and the baseline (RQ~1 and RQ~2).
We also evaluated the effectiveness of shifter by comparing user ratings between the original CASPER and the ablation models (RQ~2).
After the conversation, participants answered a questionnaire containing all the questions listed in Table~\ref{table:eval_items} on a 7-point scale.
In particular, transition and naturalness are the items aimed at validating the research questions (RQ~1 and RQ~2).
We expect the following results:
(1) detecting the appropriate timing of domain transitions by selector reduces the forcefulness and increases the transition score, and
(2) appropriate intermediate utterances by shifter increase the naturalness score.
\subsection{Experimental Setup\label{subsec: setup}}
\noindent\textbf{Dataset.}
All dialog samples were collected from Twitter, and the total number of context-reply pairs used for the entire training of CASPER was 17.8M. The domain annotations follow those of Twitter API v2. We used three domains: TV programs, musicians, and video games, and prepared a {shifter} and {performer} for each domain.
\noindent\textbf{Implementation of CASPER.}
\begin{comment}
\ifthenelse{\value{language} = 0}{
chatter, shifterの実装には,Encoder DecoderモデルであるBERT2BERT\cite{bert2bert}を用いた.また,selectorの文書分類モデルにはBERT\cite{bert}を用いた.
なお,日本語のBERTの事前学習済みチェックポイントとして\textit{cl-tohoku/bert-base-japanese-whole-word-masking}を用いた.
}{
\end{comment}
For the implementation of chatter and shifter, we used BERT2BERT \cite{bert2bert}, an encoder-decoder model initialized from BERT checkpoints. The document classification model for selector was based on BERT \cite{bert}. Both were implemented by fine-tuning \textit{cl-tohoku/bert-base-japanese-whole-word-masking}, a pre-trained Japanese BERT checkpoint.
We constructed {performer}, a simple rule-based model that recommends products through keyword matching and co-reference resolution, by collecting words that occur frequently in tweets containing the name of an entity but are unlikely to appear in tweets about other entities. We chose a simple single-turn model, rather than a multi-turn recommendation dialog system, so that the behavior of the recommendation algorithm after the transition to task-oriented dialog would not affect the experimental results.
\noindent\textbf{Baseline and ablation.}
The baseline dialog system corresponds to a single-model dialog system. That is, it performs all the roles of the models in CASPER alone. Its model is implemented as the same BERT2BERT model as {chatter}. The training procedure is as follows:
We first trained the model in the same way as {chatter}. Then, we performed transfer learning with all the datasets used for {shifter} models to learn domain-specific dialog. Lastly,
the baseline learned recommendation dialogs across the three domains in an end-to-end manner as in a previous study \cite{fb_end2end}.
We also conducted an ablation study to verify the following two points about shifter: (1) whether shifter contributes to smooth topic transitions and (2) whether it is appropriate to train shifter separately for each domain. To address these points, we prepared the following two ablation systems for comparison with the original CASPER: (1) CASPER without shifter (CASPER w/o shifter) is a model with only chatter and performer (without shifter), corresponding to bi-model systems; (2) CASPER unified shifter (CASPER US) is a model trained with a single shifter, without separating the corpus and model for each domain.
\noindent\textbf{Conversation.}
All conversations were conducted in Japanese.
For each model, we recruited 50 different participants through crowdsourcing and had each of them converse one-on-one with the system on a website of our own design. The conversation ended when the participant accepted the product recommendation or when 40 time steps had elapsed. Acceptance of a product recommendation was determined by a pop-up on the web system shown when the participant referred to the recommended product positively. After the conversation, the participants answered a questionnaire containing all the questions listed in Table~\ref{table:eval_items} on a 7-point scale.
\subsection{Results}
\noindent\textbf{Objective evaluation.}
The results of the comparison of perplexity and naturalness are shown in Table~\ref{table:ppl}. The perplexity of CASPER was calculated as the average of the perplexities of chatter and all shifters. CASPER's perplexity was slightly lower than the baseline's, with a difference of 0.22. In the user assessment of naturalness, CASPER exceeded the baseline by a difference of 0.79; the difference was statistically significant ($p = 0.012$, Student's t-test).
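As a minimal sketch of how the reported value can be computed, the snippet below derives each model's perplexity as the exponential of its mean token-level negative log-likelihood and then averages across chatter and the shifters; the NLL values and model names are hypothetical, not taken from the paper.

```python
import math

def perplexity(token_nlls):
    """Per-model perplexity: exp of the mean token-level negative log-likelihood."""
    return math.exp(sum(token_nlls) / len(token_nlls))

def casper_perplexity(nlls_by_model):
    """CASPER's reported perplexity: the average of the perplexities
    of chatter and every shifter, as described in the text."""
    ppls = [perplexity(nlls) for nlls in nlls_by_model.values()]
    return sum(ppls) / len(ppls)

# Hypothetical token NLLs for chatter and three shifters.
nlls = {
    "chatter": [2.1, 1.8, 2.4],
    "shifter_food": [2.0, 2.2],
    "shifter_music": [1.9, 2.3, 2.1],
    "shifter_travel": [2.5, 2.0],
}
print(round(casper_perplexity(nlls), 2))
```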
\noindent\textbf{Comparison with baseline.}
The experimental results for the comparison of user evaluation between CASPER and baseline are shown in Fig. \ref{fig:base_casper}.
In addition to the aforementioned naturalness, CASPER scored significantly higher than the baseline in two other items, attachment and transition. For satisfaction, CASPER scored higher than the baseline by a marginally significant difference ($p = 0.073$). For favor, there was no significant difference between CASPER and the baseline ($p = 0.354$).
\noindent\textbf{Comparison with ablated systems.}
The experimental results for CASPER and ablated systems are shown in Fig.~\ref{fig:ablation}.
The one-way ANOVA showed that there were significant differences for naturalness, attachment, and satisfaction ($p < .05$).
As post-hoc analysis, we conducted multiple comparisons using Tukey's test.
Compared with CASPER US, the original CASPER was slightly higher in attachment and satisfaction by 0.71 and 0.58, respectively, but the differences were not significant ($p > .1$).
Compared with CASPER w/o shifter, the original CASPER was significantly higher in naturalness, attachment, and satisfaction by 0.83, 0.99, and 0.86, respectively ($p < .05$).
\section{Introduction}
With the increase in the use of smart speakers, dialog systems have become more common in daily life. Unlike store-assistant robots whose main purpose is selling products in stores, these dialog systems are used for general purposes and are expected to perform a variety of task-oriented dialogs, including system-driven tasks such as item recommendation.
In addition, users expect to be able to chat with such systems: at least 38\% of calls to a dialog system were chatting~\cite{chat_detection}.
Therefore, dialog systems should support both chat- and task-oriented dialog.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth,bb=0 0 395 323,clip,page=1]{fig1.pdf}
\caption{\textbf{Example of dialog between user and {CASPER}}.
Chatter conducts chatting, shifters change topics from chatting to the target domain of the system's tasks, and performers perform tasks.
}
\label{fig:eyecatch}
\vspace{-1em}
\end{figure}
There are two main types of dialog systems that manage both chat- and task-oriented dialog: single-model and bi-model.
A single-model dialog system executes both chat- and task-oriented dialog using one model. A representative single-model system uses the knowledge-grounded Seq2Seq model~\cite{fb_end2end,akgcm,ner_seq2seq,cr_walker}.
A bi-model system combines a chat-oriented dialog model with a task-oriented dialog model~\cite{multi_expert_model,chat_detection,adversarial_chat_detection}. The multi-expert model~\cite{multi_expert_model} implemented both chat- and task-oriented dialog models in a single robot. In addition, the models proposed by~\cite{chat_detection,adversarial_chat_detection} determine whether a dialog system should respond in the chat- or task-oriented domain by detecting whether a user utterance is for chat- or task-oriented dialog.
Although these studies addressed both chat- and task-oriented dialog, few have examined the boundary between the two types of dialog. Single-model dialog systems generate only the utterance most likely under the training dataset and cannot control the flow between chat- and task-oriented dialog; such systems sometimes keep chatting and never move on to a task. Bi-model dialog systems have two problems: (1) they do not take into account an \textit{intermediate utterance} between chat- and task-oriented dialog, such as an utterance that guides the topic toward a specific item the system wants to recommend, so they cannot provide a seamless transition; and (2) their switching between chat- and task-oriented dialog is reactive, so it is still unclear when a system should switch in order to actively influence the dialog. As a result, with current dialog systems, task-oriented dialog is suddenly initiated in the midst of chatting, resulting in a breakdown of the dialog.
To address these issues, a dialog system should support both chat- and task-oriented dialog so that it can respond to open-domain chatting and then gradually change the topic to carry out a task.
We propose \textit{{CASPER}: {ChAt, Shift and PERform}}, a dialog system consisting of three types of dialog models ({chatter}, {shifter}, and {performer}) as well as a selector. {CASPER} was designed to seamlessly switch the dialog domain from chat- to task-oriented dialog to accomplish a task (Fig.~\ref{fig:eyecatch}).
Chatter and performer are equivalent to bi-model chat- and task-oriented dialog systems, respectively. The unique features of our system are {shifter} and selector.
To tackle problem (1) and make smooth topic transitions possible, {shifter} was designed to generate intermediate utterances. By training on a dataset of dialogs that end with responses related to the system's task domain (target domain), shifter generates intermediate utterances that seamlessly bridge the gap between chat- and task-oriented dialog.
In addition, selector determines when to activate the appropriate model, addressing problem (2).
In a user study, participants were asked to answer a questionnaire regarding their impressions of interaction with CASPER. As a result, CASPER was significantly better in terms of naturalness of responses and smoothness of topic transitions compared with single- and bi-model systems.
\section{Chat, Shift and Perform}
\subsection{General design of CASPER}
\textit{{CASPER}} steers the dialog domain from chat- to task-oriented dialog without breaking down the dialog by using {shifter} to generate topic-changing utterances and selector to determine the timing and destination of topic transitions (Fig.~\ref{fig:system}).
As mentioned above, CASPER consists of three types of dialog models: chatter, shifter, and performer, as well as selector. Chatter is a chat-oriented dialog model that conducts chatting. Shifter guides topics from chatting to a pre-defined task domain. Performer performs task-oriented dialog after the topic change. CASPER can have multiple performers, and each performer has its corresponding shifter. Selector is responsible for determining when and what model to activate in the dialog.
The input of chatter, shifter, and performer is the dialog history $s_q$, and the output is the response sentence $s_a$.
Shifter addresses the problem in prior bi-model dialog systems of being unable to handle the transitional dialog between chat- and task-oriented dialog.
Selector addresses the problem in prior single-model dialog systems of response strategies that are bound to the corpus scenario and cannot be controlled.
\begin{table*}[t]
\centering
\begin{tabular}{rlcc}
\hline
Label & Text & Negative label & Positive label \\
\hline\hline
Naturalness & Was the response of this dialog system natural to your utterance? & very unnatural & very natural \\
Attachment & Would you like to have this AI in your home? & not at all & definitely want \\
Satisfaction & How satisfied are you with the conversation? & not satisfied & very satisfied \\
Transition & Was the dialog system's choice of topic forceful or natural? & very forced & very natural \\
Favor & Did you like the products recommended by the dialog system? & hated them & loved them \\
\hline
\end{tabular}
\caption{
\label{table:eval_items}\textbf{Questionnaire items regarding conversational recommendation}.
Participants responded to each item on a 7-point scale, with 1 being the negative label and 7 the positive label.}
\end{table*}
\subsection{Chatter and performer}
Chatter is an open-domain and chat-oriented dialog model that can be implemented using current neural dialog models.
Performer is a task-oriented dialog model that can be implemented using prior research on task-oriented dialog systems, ranging from rule-based systems for simple single-turn question answering to more complex multi-turn dialog systems such as a conversational recommender system (CRS).
\subsection{{Shifter}}
Shifter is a dialog model that generates responses that bridge chatter and performer.
Shifter is expected to generate an utterance that seamlessly guides the dialog topic from chat to a specific task domain performer is in charge of.
For training shifter, we collected context-reply pairs that include topic transitions to a specific domain. To generate utterances that change the topic from outside to inside the target domain, we performed transfer learning on chatter in accordance with the following loss function:
\begin{align}
L(s_q, s_a) & = -\sum_{t=1}^T \log p(w_t^a\ |\ s_q, w_{<t}^a)\label{equaton: shifter} \\
s_q & =\left\{w^q \ |\ \forall w^q \notin D\right\}\notag \\
s_a & =\left\{w^a \ |\ \exists w^a \in D\right\}\notag,
\end{align}
where $w_t^a$ is the $t$-th word in the output sentence, $T$ is the number of words in the output sentence, $w^q$ is the word in the input dialog, and $D$ is the set of words belonging to the target domain.
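The set conditions on $s_q$ and $s_a$ amount to a corpus filter: a context-reply pair is kept for shifter training only if no context word belongs to the target-domain vocabulary $D$ while at least one reply word does. The sketch below illustrates this filter with a toy whitespace tokenizer and a hypothetical "music" domain vocabulary; it is not the paper's actual preprocessing code.

```python
def is_shifter_pair(context_words, reply_words, domain_vocab):
    """True iff the pair satisfies the conditions of Eq. (1):
    s_q contains no word in D, and s_a contains at least one word in D."""
    no_domain_in_context = all(w not in domain_vocab for w in context_words)
    domain_in_reply = any(w in domain_vocab for w in reply_words)
    return no_domain_in_context and domain_in_reply

def collect_shifter_pairs(dialog_pairs, domain_vocab):
    """Filter a corpus of (context, reply) pairs into shifter training data."""
    return [
        (q, a) for q, a in dialog_pairs
        if is_shifter_pair(q.split(), a.split(), domain_vocab)
    ]

# Toy example with a hypothetical "music" domain vocabulary.
music = {"guitar", "concert", "song"}
pairs = [
    ("how was your weekend", "great , I went to a concert"),  # kept
    ("I love this song", "me too"),                           # context already in-domain
    ("how was your weekend", "it rained all day"),            # reply not in-domain
]
print(collect_shifter_pairs(pairs, music))
```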
Shifter was trained as a separate model for each domain. This enables it to learn topic transitions unique to each domain, including the use of proper nouns in the domain, as shown in Fig.~\ref{fig:eyecatch}.
As we discuss in Section~\ref{sec:experiment}, if a single shifter were to learn transitions to multiple domains, it would converge to responses common to topic transitions to all domains as the number of domains increases, which may result in generic responses similar to those of chat-oriented dialog systems. Moreover, since the destination domain would be determined indirectly by the model's NLG, the designer would not be able to precisely control where the topic is guided.
As mentioned above, shifter is trained on a dataset of dialogs ending with responses in the target domain so that it generates biased utterances that bridge the gap between chat- and task-oriented dialog.
There is another approach to biasing an utterance:~\cite{kbrd} proposed directly manipulating token-generation probabilities at inference time. For shifter's biased response generation, we did not use this method but instead adopted the method of biasing the training corpus beforehand~\cite{emo_hred}.
Compared with methods that directly manipulate output tokens, methods that bias the training corpus are better suited to controlling the general domain of a response than to controlling utterances at the token level. This is not a problem in our case, because we use shifter for the rough task of guiding the topic toward the target domain, while performer conducts the specific dialog about the task. In addition, response control by corpus bias has the advantage of maintaining the plausibility of the output sentences, since shifter can learn from topic transitions that actually occurred in the corpus.
\subsection{Selector}\label{ss:selector}
Selector is a classifier that determines the response model given the dialog history.
Each dialog model (chatter, shifter, and performer) has an appropriate context in which to be activated.
Selector decides when and which model should generate the response according to the progress of the dialog.
Selector designates one model as the response model from among chatter and the multiple shifter models. However, if performer can respond, it takes priority over chatter and shifter.
We exclude performer from selector's response-model inference because task-oriented dialog systems often define their own response conditions (e.g., a confidence threshold in FAQ search or keyword matching).
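The control flow just described — performer takes priority when its own trigger fires, otherwise selector chooses among chatter and the shifters — can be sketched as one CASPER turn. The object interfaces below (`can_respond`, `respond`, `select`) are illustrative assumptions, not the paper's implementation:

```python
def casper_respond(dialog_history, chatter, shifters, performers, selector):
    """One CASPER turn: dialog history in, response sentence out.

    performers: task-oriented models with their own trigger conditions
    (e.g. keyword matching); they take priority when they can respond.
    selector: picks chatter (index 0) or one shifter otherwise.
    """
    # 1. A performer responds if its own trigger condition fires.
    for performer in performers:
        if performer.can_respond(dialog_history):
            return performer.respond(dialog_history)
    # 2. Otherwise selector chooses among chatter and the shifters.
    models = [chatter] + list(shifters)
    index = selector.select(dialog_history)  # index into `models`
    return models[index].respond(dialog_history)
```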
Appropriate switching of the models is expected to enable the introduction of task-oriented dialogs and to improve dialog satisfaction by moderately changing the topic.
Selector is trained by multi-class document classification, which predicts the domain $c_a$ of the future response $s_a$ given the dialog history $s_q$ with the following loss function:
\begin{align}
L(s_q, c_a) = -\sum_{i=1}^{C} c_a^i\log(f(s_q)_i)\label{equation: classification},
\end{align}
where $i$ is the index of a domain assigned to chatter or one of the shifters in CASPER, $C$ is the number of all domains handled by CASPER, including the open domain that chatter is in charge of,
$c_a^i$ is 1 if the ground-truth domain of $s_a$ is the $i$-th domain and 0 otherwise, $f(s_q)$ is the softmax output over the features extracted from $s_q$, and $f(s_q)_i$ is the confidence that the $i$-th domain is the correct one.
In other words, selector is trained by next-domain prediction with the previous dialog history as input.
Note that we do not use the response sentence $s_a$ to predict its domain $c_a$; $c_a$ is predicted using only the dialog history $s_q$ up to the time step before $s_a$ is uttered.
Selector then uses the confidence of the domain of the next utterance to infer the model that should actually respond next in accordance with
\begin{align}
\newcommand{\mathop{\rm arg~max}\limits}{\mathop{\rm arg~max}\limits}
\text{Selector}(s_q) = \mathop{\rm arg~max}\limits_{i} \left[\frac{f(s_q)_i}{\alpha_i}\right]\label{equation: selector},
\end{align}
where $\alpha_i$ is a parameter greater than zero that controls how difficult it is to switch to the $i$-th domain. By defining $\alpha_i$, we can easily control the forcefulness of topic changes and the ease of transition to each domain. We can also vary $\alpha_i$ over time.
In this study, we gradually decreased the $\alpha_i$ of the shifters so that they activate more often as time passes, allowing the task to be introduced in a less forceful way.
When $\alpha_i$ is the same for all $i$, selector is simply a next-domain estimator that selects the model of the most likely domain.
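The $\alpha$-scaled selection of Eq. (\ref{equation: selector}) can be sketched as follows; the decay schedule (`shifter_alpha`) and all numeric values are hypothetical illustrations of "decreasing the shifters' $\alpha$ over time", not the paper's actual parameters.

```python
def select_model(confidences, alphas):
    """Eq. (3): argmax_i of f(s_q)_i / alpha_i.

    confidences: softmax output over domains (index 0 = open domain / chatter).
    alphas: per-domain resistance; a larger alpha_i makes domain i harder to pick.
    """
    scores = [c / a for c, a in zip(confidences, alphas)]
    return max(range(len(scores)), key=lambda i: scores[i])

def shifter_alpha(t, start=4.0, decay=0.9, floor=1.0):
    """Hypothetical schedule: decay a shifter's alpha over time so that
    topic shifts become more likely as the dialog progresses."""
    return max(floor, start * decay ** t)

conf = [0.4, 0.45, 0.15]  # raw confidences for [chatter, shifter 1, shifter 2]
alphas_early = [1.0, shifter_alpha(0), shifter_alpha(0)]    # t = 0
alphas_late = [1.0, shifter_alpha(20), shifter_alpha(20)]   # t = 20
print(select_model(conf, alphas_early), select_model(conf, alphas_late))  # → 0 1
```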
Document classification for next-domain prediction normally requires training data of topic-switching dialogs, which is expensive to collect. However, in CASPER, we have already collected data with and without topic switching when training shifter and chatter, respectively, so we can reuse these dialog data as training data for next-domain inference. Therefore, no additional data need to be prepared for training selector beyond the training data of chatter and shifter.
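Since chatter's corpus contains histories whose next response stays open-domain and each shifter's corpus contains histories whose next response moves into its domain, the selector's (history, next-domain) training set can be assembled without new annotation. A sketch, with illustrative domain names; note that only the context $s_q$ is kept and the reply $s_a$ is discarded, matching the training setup above:

```python
def build_selector_dataset(chatter_pairs, shifter_pairs_by_domain):
    """Reuse chatter/shifter (context, reply) pairs as (dialog history,
    next-domain label) examples for selector. Label 0 is the open domain
    (chatter); labels 1..C-1 are the shifter domains, in a fixed order."""
    domains = ["open"] + sorted(shifter_pairs_by_domain)
    # Only the context is used; the reply itself is never shown to selector.
    examples = [(context, 0) for context, _reply in chatter_pairs]
    for label, domain in enumerate(domains[1:], start=1):
        examples += [(context, label)
                     for context, _reply in shifter_pairs_by_domain[domain]]
    return domains, examples

chatter_pairs = [("hello", "hi there")]
shifter_pairs = {
    "food": [("I am hungry", "the new ramen place is great")],
    "music": [("how was your day", "I practiced guitar all day")],
}
domains, data = build_selector_dataset(chatter_pairs, shifter_pairs)
print(domains)  # → ['open', 'food', 'music']
print(data)
```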
The fact that selector selects a different model at each time step simplifies the role of shifter.
Since shifter is transfer-learned on a corpus consisting of responses in a specific domain, it always returns responses in its responsible domain regardless of the input query. Therefore, if the user were to interact with shifter alone for a long time, the user would likely get bored and dialog satisfaction would decrease. In CASPER, however, the user never interacts continuously with a single shifter over many time steps, because selector re-selects the response model at every step. Under this assumption, shifter can be used as a response-generation module that seamlessly pulls the dialog from various domains into a specific domain.
\section{Related Work}
\noindent\textbf{Task-oriented dialog systems.}
Among task-oriented dialog systems, conversational recommender systems (CRSs) have been studied extensively in recent years. A typical CRS~\cite{crm} uses natural language understanding to track the user's beliefs and reinforcement learning to determine the policy for questions and recommendations.
However, most existing task-oriented dialog systems do not expect out-of-domain (OOD) utterances.
\noindent\textbf{Non-task-oriented dialog systems.}
The most basic non-task-oriented dialog systems are based on the Seq2Seq model~\cite{seq2seq}, which consists of an encoder-decoder pair.
When a Seq2Seq model is trained using maximum likelihood estimation (MLE), it generates generic responses such as ``\textit{I don't know}'' because MLE assumes a one-to-one context-response relationship, resulting in dialogs that do not change topic. This is a problem when considering how to steer from chat- to task-oriented dialog. To address this issue, several studies have investigated generating distinctive responses:~\cite{less_generic_response} applied an objective function that assumes a one-to-many problem for natural language generation (NLG), and~\cite{emo_hred} controls the training of NLG by biasing the training corpus beforehand.
In our {CASPER}, the chat-oriented dialog system {chatter} continues the dialog with simple responses, while {shifter} generates more meaningful responses to develop the conversation, thus avoiding the generic-response problem. The {shifter}'s response generation is controlled by biasing the training corpus, as in~\cite{emo_hred}.
\noindent\textbf{Hybrid model for task- and non-task-oriented dialog systems.}
As mentioned above, there are two main types of dialog systems for handling both task- and non-task-oriented dialog: the single-model approach, which handles both types of dialog with one model, and the bi-model approach, which uses two separate models.
Representative single-model dialog systems use the knowledge-grounded Seq2Seq model~\cite{fb_end2end,akgcm,ner_seq2seq,cr_walker}. These systems use Seq2Seq both for generating chatting responses and carrying out tasks.
For example,~\cite{fb_end2end} acquires task-specific knowledge through pre-training in an end-to-end manner. The system proposed in~\cite{akgcm} extracts knowledge from external sources at inference time and feeds it into the model together with the query. The method of~\cite{ner_seq2seq} applies named entity recognition (NER) to the query beforehand so that the Seq2Seq model does not need to handle task-specific queries. In addition,~\cite{cr_walker} learns language processing and recommendation-entity selection in an end-to-end manner.
Bi-model dialog systems that combine chat- and task-oriented dialog models are being actively studied. For example, the multi-expert model~\cite{multi_expert_model} adds a chat task to a robot capable of performing various tasks and implements both chat- and task-oriented dialog in a single framework. \cite{chat_detection,adversarial_chat_detection} detect whether a query is for chat- or task-oriented dialog, enabling bi-model dialog systems to determine in which domain they should respond.
Although these studies contributed to the compatibility of chat- and task-oriented dialog, there has not been sufficient research on the boundary between these two types of dialog. Specifically, single-model dialog systems do not control the switching between the two types of dialog, while bi-model dialog systems have a gap between them and do not provide a seamless connection.
With single-model systems, it is difficult to control the flow between dialog domains, since the model only follows the scenarios of the training corpus.
Bi-model systems are supposed to respond with chat for chat queries and with task execution for task-specific queries, and are not designed to seamlessly switch between chat- and task-oriented dialog within a single conversation.
If a dialog system that combines both chat- and task-oriented dialog cannot handle topic transitions between the two, the following problems may arise:
(a) the chatting simply continues and the task is never accomplished;
(b) task-oriented dialog is suddenly initiated in the midst of chatting, resulting in a breakdown of the dialog.
To address these issues, a dialog system that supports both chat- and task-oriented dialog should first respond to open-domain chat and then gradually change the topic toward carrying out a task.
{CASPER} solves the topic-transition problem by training {shifter} with a dialog dataset in which topic transitions occur. The timing of the topic transition is determined by likelihood estimation using the transition history of the dialog context.
\section{Introduction} \label{sec-intro}%
The object here is the best possible general estimate of the number of solutions to a special type of unit equation in two unknowns over the rationals.
The contents of this paper are motivated by some works of Scott \cite{Sc} and Bennett \cite{Be_cjm_01}, and they complement our previous work in \cite{MiPi}, but are independent of it.
First of all, we shall give a brief history on purely exponential Diophantine equations, and start with a general type, given as follows:
\begin{equation} \label{general}
a_1 {b_{11}}^{x_{11}} \cdots {b_{1 l}}^{x_{1 l}}
+a_2 {b_{21}}^{x_{21}} \cdots {b_{2 l}}^{x_{2 l}}
+ \cdots +a_k {b_{k1}}^{x_{k1}} \cdots {b_{k l}}^{x_{k l}}
=0
\end{equation}
with $k \ge 3$ and $l \ge 1$, where the letters $a$, $b$ and $x$ (with subscripts) denote fixed nonzero integers, fixed integers greater than 1, and unknown non-negative integers, respectively.
The above equation has a long and rich history, and includes several cases which have been actively studied to date (cf.~\cite[Ch.1]{ShTi}, \cite[D10]{Gu}, \cite[Ch.s 4 to 6]{EvGy}).
Since the number of prime divisors of each term in the left-hand side is finite, equation \eqref{general} is clearly a special case of unit equations, which are a very important object in Diophantine number theory and appear in a number of topics concerning usual polynomial Diophantine equations as well (cf.~\cite{EvGy}).
Schmidt's subspace theorem applies to the unit equations to conclude that
equation \eqref{general} has at most finitely many solutions $x_{i j}\,(1 \le i \le k, 1 \le j \le l)$ for which the left-hand side has no vanishing subsum, and the finiteness of solutions for general unit equations has been extensively investigated in the literature (cf.~\cite[Ch.6]{EvGy}).
Though it is in general not easy to find all solutions to even very special cases of \eqref{general}, it is only in the case where the number of terms in the equation is smallest, that is, $k=3$, that Baker's theory on linear forms in logarithms gives in general an explicit upper bound, effectively computable in terms of the equation's parameters, for each unknown exponent.
This fact plays a fundamental role in explicitly resolving exponential Diophantine equations of type \eqref{general} with $k=3$ in many of the existing works, including those of this paper.
From now on, we consider special cases of \eqref{general} with $k=3$, where the coefficients do not appear and the number of unknown exponents is very small.
They are closely related to the generalized Fermat conjecture (cf.~\cite{BeMihSi}) and Catalan's conjecture (Mih\u{a}ilescu's theorem \cite{Mih}) as well as to the well-known conjecture of Pillai which asserts that there are only finitely many pairs of distinct perfect powers with their difference fixed.
One of such examples is the following:
\begin{equation} \label{pillai}
a^x-b^y=c,
\end{equation}
where $a,b,c$ are fixed positive integers with $a>1$ and $b>1$, and $x,y$ are unknown positive integers.
Note that the unknown exponents here are allowed to equal 1.
After the pioneering works of Pillai \cite{Pi,Pi2} on the above equation, several researchers have attempted to obtain a general estimate of the number of its solutions (cf.~\cite[Sec.1]{Be_cjm_01}, \cite[Sec.1]{Be_JNT_03}).
A general and definitive result for this direction was finally obtained by Bennett \cite[Theorem 1.1]{Be_cjm_01} as follows:
\begin{prop} \label{atmost2pillai}
There are at most two solutions to equation $\eqref{pillai}.$
\end{prop}
The proof of this proposition is achieved by the combination of a special type of Baker's method on lower bounds for linear forms in two logarithms together with a gap principle arising from the existence of three hypothetical solutions.
It should be remarked that there are a number of examples which allow equation \eqref{pillai} to have two solutions (cf.~\eqref{excep-equs} below), and that the case in which $a$ and $b$ are not coprime is handled only by a simple (but skillful) argument.
In the same paper, as a further problem, Bennett posed a candidate for the complete list of triples $(a,b,c)$ for which equation \eqref{pillai} has two solutions (cf.~\cite[Conjecture 1.2]{Be_cjm_01}), and he gave a few partial results to support the validity of his question.
On the other hand, motivated by the celebrated theorem of Bennett (Proposition \ref{atmost2pillai}), we have attempted to generalize it to a 3-variable case, which concerns another particular case of equation \eqref{general} with $k=3$, given as follows:
\begin{equation} \label{abc}
a^x+b^y=c^z,
\end{equation}
where $a,b,c$ are fixed relatively prime positive integers greater than 1, and $x,y,z$ are unknown positive integers.
The above equation itself has a long history and has been actively studied by many researchers, including the pioneers Scott, Le and Terai (see for example \cite{CiMi,Lu_aa_12,Mi_aa18} and the references therein).
The main result of our previous work \cite{MiPi} is as follows:
\begin{prop}\label{atmost2}
There are at most two solutions to equation $\eqref{abc},$ except when $(a,b,c)$ or $(b,a,c)$ equals $(5,3,2),$ where there are exactly three solutions.
\end{prop}
The proof of this proposition is achieved by the combination of Baker's method in both complex and $p$-adic cases together with improving the gap principle established by Hu and Le \cite{HuLe,HuLe2,HuLe3} arising from the existence of three hypothetical solutions as well as a certain 2-adic argument relying upon the striking result of Scott and Styer \cite{ScSt}.
From the viewpoint of the generalized Fermat equation (cf.~\cite[Ch.14]{Co}), or of the classical problem of seeking all relations in which the sum of two relatively prime perfect powers equals another perfect power, Proposition \ref{atmost2} is regarded as a 3-variable version of Proposition \ref{atmost2pillai}.
Further it is definitive in the sense that there are (infinitely) many triples $(a,b,c)$ which allow equation \eqref{abc} to have two solutions.
Indeed, according to \cite[Sec.3]{ScSt}, we have
\begin{gather}
5+3=2^{3}, \ 5+3^{3}=2^{5}, \ 5^{3}+3=2^{7}; \nonumber\\
13+3=2^{4}, \ 13+3^{5}=2^{8}; \nonumber\\
5+2^{2}=3^{2}, \ 5^{2}+2=3^{3}; \nonumber\\
7+2=3^{2}, \ 7^{2}+2^{5}=3^{4}; \nonumber\\
3+2^{3}=11, \ 3^{2}+2=11; \nonumber\\
10+3=13, \ 10+3^{7}=13^{3}; \nonumber\\
\label{excep-equs} 3+2^{5}=35, \ 3^{3}+2^{3}=35;\\
89+2=91, \ 89+2^{13}=91^{2}; \nonumber\\
5+2^{7}=133, \ 5^{3}+2^{3}=133;\nonumber\\
3+2^{8}=259, \ 3^{5}+2^{4}=259; \nonumber\\
13+3^{7}=2200, \ 13^{3}+3=2200; \nonumber\\
91+2^{13}=8283, \ 91^{2}+2=8283; \nonumber\\
(2^k-1)+2=2^k+1, \ (2^k-1)^{2}+2^{k+2}=(2^k+1)^{2},\nonumber
\end{gather}
where $k \ge 2$ is any integer.
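These equalities are all routine to verify; the following quick check (ours, not part of the paper) confirms every identity listed above, together with the first few members of the infinite family.

```python
# Quick verification of the displayed equalities: each tuple is
# (a, x, b, y, c, z) with a^x + b^y = c^z.
cases = [
    (5, 1, 3, 1, 2, 3), (5, 1, 3, 3, 2, 5), (5, 3, 3, 1, 2, 7),
    (13, 1, 3, 1, 2, 4), (13, 1, 3, 5, 2, 8),
    (5, 1, 2, 2, 3, 2), (5, 2, 2, 1, 3, 3),
    (7, 1, 2, 1, 3, 2), (7, 2, 2, 5, 3, 4),
    (3, 1, 2, 3, 11, 1), (3, 2, 2, 1, 11, 1),
    (10, 1, 3, 1, 13, 1), (10, 1, 3, 7, 13, 3),
    (3, 1, 2, 5, 35, 1), (3, 3, 2, 3, 35, 1),
    (89, 1, 2, 1, 91, 1), (89, 1, 2, 13, 91, 2),
    (5, 1, 2, 7, 133, 1), (5, 3, 2, 3, 133, 1),
    (3, 1, 2, 8, 259, 1), (3, 5, 2, 4, 259, 1),
    (13, 1, 3, 7, 2200, 1), (13, 3, 3, 1, 2200, 1),
    (91, 1, 2, 13, 8283, 1), (91, 2, 2, 1, 8283, 1),
]
# first few members of the infinite family (2^k - 1, 2, 2^k + 1), k >= 2
for k in range(2, 10):
    cases += [(2**k - 1, 1, 2, 1, 2**k + 1, 1),
              (2**k - 1, 2, 2, k + 2, 2**k + 1, 2)]
assert all(a**x + b**y == c**z for a, x, b, y, c, z in cases)
```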
On the other hand, similarly to the situation of equation \eqref{pillai}, something rather stronger than Proposition \ref{atmost2} seems to be true.
Based on a computer search, Scott and Styer \cite{ScSt} conjectured that equalities in \eqref{excep-equs} present the complete list of all triples $(a,b,c)$ where there are at least two solutions to equation \eqref{abc}, as follows:
\begin{conj} \label{atmost1conj}
There is at most one solution to equation $\eqref{abc},$ except when $(a,b,c)$ or $(b,a,c)$ belongs to the following set$:$
\begin{align}\label{excep-set}
\{ \,&(5,3,2),(13,3,2),(5,2,3),(7,2,3),\\
&(3,2,11),(10,3,13),(3,2,35),(89,2,91),\nonumber\\
&(5,2,133),(3,2,259),(4,3,259),(16,3,259),\nonumber\\
&(13,3,2200),(91,2,8283),(2^k-1,2,2^k+1)\,\},\nonumber
\end{align}
where $k$ is any positive integer with $k \ge 2.$
\end{conj}
This conjecture may be regarded as an ultimate statement in the study of purely exponential Diophantine equations, and as a 3-variable version of Bennett's open question on equation \eqref{pillai} mentioned above.
The aim of this paper is to establish several results on Conjecture \ref{atmost1conj}.
Note that it is already known from the literature that the solutions to equation \eqref{abc} corresponding to each triple in \eqref{excep-set} are described as in \eqref{excep-equs}.
Before stating our results, we introduce a simple notion extending the definition of the multiplicative order on the reduced residue class groups.
\begin{definition} \label{extend}
Let $M$ be a positive integer.
For any integer $\mathcal A$ coprime to $M,$ the extended multiplicative order of $\mathcal A$ modulo $M$ is defined as the least positive integer $E$ such that ${\mathcal A}^E$ is congruent to $1$ or $-1$ modulo $M.$
\end{definition}
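The extended multiplicative order can be computed directly from the definition by successive multiplication; the following short routine (illustrative only; the name `ext_order` is ours) does so.

```python
from math import gcd

def ext_order(A, M):
    """Least E >= 1 with A^E congruent to +1 or -1 (mod M).
    Assumes gcd(A, M) == 1 and M >= 1."""
    assert gcd(A, M) == 1
    t, E = A % M, 1
    while t not in (1, M - 1):  # stop at +1 or -1 modulo M
        t, E = t * A % M, E + 1
    return E
```

For instance, the extended order of $2$ modulo $9$ is $3$ (since $2^3=8\equiv-1$), while its multiplicative order is $6$.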
Our first result is the fundamental work of this paper.
\begin{thm} \label{th1}
Assume that the extended multiplicative orders of $a$ and $b$ modulo $c$ are relatively prime.
Then Conjecture $\ref{atmost1conj}$ is true, namely, there is at most one solution to equation $\eqref{abc},$ except when $(a,b,c)$ or $(b,a,c)$ equals one of $(5,3,2),(13,3,2),(5,2,3)$ and $(7,2,3).$
\end{thm}
This has the following immediate corollary.
\begin{cor}\label{coro1}
Assume that at least one of $a$ and $b$ is congruent to $1$ or $-1$ modulo $c.$
Then there is at most one solution to equation $\eqref{abc},$ except when $(a,b,c)$ or $(b,a,c)$ equals one of $(5,3,2),(13,3,2),(5,2,3)$ and $(7,2,3).$
\end{cor}
Actually it will turn out that the above corollary is essentially equivalent to the first theorem (cf.~Section \ref{sec-reduce}).
The work for proving Corollary \ref{coro1} was motivated by attempting to obtain a 3-variable generalization of Bennett's result \cite[Theorem 1.6]{Be_cjm_01}, which seems to have been motivated by his open question on equation \eqref{pillai} when $a$ takes fixed values.
It is worth noting that Corollary \ref{coro1} proves Conjecture \ref{atmost1conj} for $c=2,3$ and $6$; in particular, this provides an analytic proof of the celebrated theorem of Scott \cite[Theorem 6; $p=2$]{Sc}, which solved Conjecture \ref{atmost1conj} for $c=2$ in a purely algebraic manner over imaginary quadratic fields. This is one of the most important advantages of this paper.
(For another direct application of Corollary \ref{coro1} see the end of Section \ref{sec-th1}.)
To state the next result, which is a non-explicit but effective generalization of Theorem \ref{th1}, we prepare some notation.
For a finite set $S$ of prime numbers, we denote by $\mathcal A[S]$ the $S$ part of a nonzero integer $\mathcal A$, namely,
\[
\mathcal A[S]=\prod_{p \in S} p^{\,\nu_p(\mathcal A)},
\]
where $\nu_p$ denotes the $p$-adic valuation.
For simplicity and convenience, we write $\mathcal A[\{p\}]=\mathcal A[p]$ and $\mathcal A[\emptyset]=1$, respectively.
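The $S$-part is computed by stripping off the prime factors in $S$; a minimal helper (the name `s_part` is ours, for illustration) might look as follows.

```python
def s_part(A, S):
    """S-part of a nonzero integer A: product over p in S of p^(v_p(A))."""
    A, out = abs(A), 1
    for p in S:
        while A % p == 0:   # extract the full power of p dividing A
            A //= p
            out *= p
    return out
```

For example, `s_part(360, {2, 3})` returns $2^3\cdot 3^2=72$, and `s_part(7, set())` returns $1$, in accordance with the conventions above.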
\begin{thm}\label{th2}
Let $S$ be a $($possibly empty$)$ set of odd prime factors of $c.$
Define $M_S$ and $c_S$ as either
\begin{align*}
&M_S=\prod_{p \in S}\,p, \ \ c_S=\max\{c[S],c[2]\}; \ \ \ \text{or} \tag{I}\\
&M_S=4\prod_{p \in S}\,p, \ \ c_S=\frac{1}{2}\,c[ S \cup \{2\} ]. \tag{II}
\end{align*}
Assume that the extended multiplicative orders of $a$ and $b$ modulo $M_S$ are relatively prime and that $c_S>\sqrt{c}.$
Then there is at most one solution to equation $\eqref{abc},$ except when $(a,b,c)$ satisfies at least one of the following two restrictions{\rm :}
\begin{align*}
&\bullet \, \max\{a,b,c\}<\mathcal C_1;\\
&\bullet \, \max\{a,b\}<\exp\biggl(\frac{\mathcal C_2}{(\log c_S)/\log \sqrt{c}\,-1}\biggr), \ \ c_S<\exp(\mathcal C_2) \sqrt{c},
\end{align*}
where $\mathcal C_1$ and $\mathcal C_2$ are some positive absolute constants which are effectively computable.
\end{thm}
Theorem \ref{th1} is (almost) included in the above theorem for the case where $S$ is the set of odd prime factors of $c$.
Theorem \ref{th2} has a fruitful application to cases where $c$ takes fixed values.
The following two results are obtained by applying Theorem \ref{th2} for the case where $S$ is the intersection of the set of prime factors of $c$ and $\{3\}$, and for setting $c=p^n \cdot k$ with $p \in \{2,3\}$, $k$ a positive integer prime to $p$ and $n$ suitably large relative to $k$, respectively.
\begin{cor}\label{c2c3}
For any fixed $c$ satisfying $\max\{c[2],c[3]\}>\sqrt{c},$ there is at most one solution to equation $\eqref{abc},$ except for only finitely many pairs $(a,b).$
\end{cor}
\begin{cor}
Conjecture $\ref{atmost1conj}$ is true for infinitely many values of $c$ which are not perfect powers.
\end{cor}
We emphasize that it is difficult to obtain from our method a version of Corollary \ref{c2c3} uniform in $c$, and further it seems to be hopeless to handle all exceptional triples arising from such a version.
Our final result, together with Corollary \ref{coro1}, confirms Conjecture \ref{atmost1conj} when $c$ is any of the Fermat primes found so far.
This is regarded as a 3-variable generalization of \cite[Corollary 1.7]{Be_cjm_01}.
\begin{thm}\label{th3}
If $c \in \{5,17,257,65537\},$ then Conjecture $\ref{atmost1conj}$ is true, namely, there is at most one solution to equation $\eqref{abc},$ except when $(a,b)$ or $(b,a)$ equals $(c-2,2).$
\end{thm}
The organization of this paper is as follows.
In the next section we present an idea for reducing each theorem to one of its weak forms.
In each of the proofs of our theorems, we examine solutions to the system of two equations arising from exceptional triples $(a,b,c)$ which allow equation \eqref{abc} to have at least two solutions.
Towards the proof of Theorem \ref{th1}, Sections \ref{sec-th1-pre} and \ref{sec-th1-bound} are respectively devoted to finding several congruence restrictions on the solutions, and to finding upper bounds for them in several cases by applying Bugeaud's results in \cite{Bu} on simultaneous non-Archimedean valuations of the difference between two powers of algebraic numbers, where the most important idea is found through applying Baker's method in its non-Archimedean analogue to a certain divisibility relation among the solutions.
These results together reduce the proof of Theorem \ref{th1} to a finite search, which we carry out in Section \ref{sec-th1} with extensive use of a computer, completing the proof.
The proof of Theorem \ref{th2} is basically similar to that of Theorem \ref{th1} and it is established in Section \ref{sec-th2}.
Sections 7 and 8 are devoted to proving Theorem \ref{th3}, where some other kinds of Baker's method and a striking result of Scott \cite{Sc} restricting parity information on the unknown exponents appearing in the left-hand side of the equations are also used, as well as some results on ternary equations based on the so-called modular approach.
In the final section we make some remarks on our results with a few problems for readers.
All computations in this paper were performed using the computer package MAGMA, and the total computation time was about 3 weeks.
\section{Reducing to weak forms} \label{sec-reduce}%
Let $M$ and $\mathcal A$ be as in Definition $\ref{extend}$.
The extended multiplicative order of $\mathcal A$ modulo $M$, denoted by $e_M(\mathcal A)$, has similar properties to those of the multiplicative order of $\mathcal A$ modulo $M$.
We state some of them in the following lemma without proof.
\begin{lem} \label{property}
Assume that $M>2.$
Define $\epsilon_0 \in \{1,-1\}$ by $\mathcal A^{e_M(\mathcal A)} \equiv \epsilon_0 \pmod{M}.$
Then the following hold.
\begin{itemize}
\item[\rm (i)]
Assume that $\mathcal A^n \equiv \epsilon \pmod{M}$ for some $n \in \mathbb{N}$ and $\epsilon \in \{1,-1\}.$
Then $n$ is a multiple of $e_M(\mathcal A)$ and $\epsilon={\epsilon_0}^{n/e_M(\mathcal A)}.$
\item[\rm (ii)] If $\epsilon_0=-1,$ then the multiplicative order of $\mathcal A$ modulo $M$ equals $2e_M(\mathcal A).$
\item[\rm (iii)] $e_M(\mathcal A^k)=e_M(\mathcal A)/\gcd(e_M(\mathcal A),k)$ for any positive integer $k.$
\end{itemize}
\end{lem}
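Since the proof is omitted, a quick numerical spot-check of assertions (ii) and (iii) may reassure the reader (illustrative code, not part of the paper; all names are ours).

```python
from math import gcd

def ext_order(A, M):
    # least E >= 1 with A^E = +-1 (mod M); assumes gcd(A, M) == 1
    t, E = A % M, 1
    while t not in (1, M - 1):
        t, E = t * A % M, E + 1
    return E

def mult_order(A, M):
    # usual multiplicative order of A modulo M; assumes gcd(A, M) == 1
    t, n = A % M, 1
    while t != 1:
        t, n = t * A % M, n + 1
    return n

# spot-check properties (ii) and (iii) for all coprime pairs (A, M), M > 2
for M in range(3, 60):
    for A in range(2, M):
        if gcd(A, M) != 1:
            continue
        e = ext_order(A, M)
        if pow(A, e, M) == M - 1:                 # epsilon_0 = -1
            assert mult_order(A, M) == 2 * e      # property (ii)
        for k in range(1, 6):                     # property (iii)
            assert ext_order(pow(A, k, M), M) == e // gcd(e, k)
```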
The next lemma gives a simple but non-trivial divisibility property of solutions.
\begin{lem} \label{trick}
Let $(x,y,z)$ be a solution to equation $\eqref{abc}.$
Then
\[ e_d(b)\,x \equiv 0 \mod{e_d(a)}, \quad e_d(a)\,y \equiv 0 \mod{e_d(b)} \]
for any positive divisor $d>2$ of $c.$
\end{lem}
\begin{proof}
Put $e_a:=e_d(a)$ and $e_b:=e_d(b)$.
Use the congruence $a^x \equiv - b^y \pmod{d}$ obtained from equation \eqref{abc} reduced modulo $d$.
Raising both sides of this congruence to the $e_a$-th power, one finds that $b^{e_a y} \equiv \pm (a^{e_a})^x \equiv \pm 1 \pmod{d}$.
Lemma \ref{property}\,(i) tells one that $e_b \mid e_a y$.
One also obtains $e_a \mid e_b x$ by symmetry of $a,b$.
\end{proof}
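As a sanity check (ours, not part of the paper), the divisibility of Lemma \ref{trick} can be verified on all solutions of $a^x+b^y=c^z$ found in a small search box.

```python
from math import gcd

def ext_order(A, M):
    # least E >= 1 with A^E = +-1 (mod M); assumes gcd(A, M) == 1
    t, E = A % M, 1
    while t not in (1, M - 1):
        t, E = t * A % M, E + 1
    return E

# search for solutions with a, b, c < 30 pairwise coprime and exponents < 7,
# then check Lemma trick for every divisor d > 2 of c
found = 0
for c in range(2, 30):
    divisors = [d for d in range(3, c + 1) if c % d == 0]
    for a in range(2, 30):
        for b in range(2, 30):
            if gcd(a, b) > 1 or gcd(a, c) > 1 or gcd(b, c) > 1:
                continue
            for x in range(1, 7):
                for y in range(1, 7):
                    s, z = a**x + b**y, 0
                    while s % c == 0:             # test whether s is a power of c
                        s, z = s // c, z + 1
                    if s != 1 or z == 0:
                        continue
                    found += 1
                    for d in divisors:
                        ea, eb = ext_order(a, d), ext_order(b, d)
                        assert eb * x % ea == 0 and ea * y % eb == 0
assert found > 0
```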
The following lemma tells us that proving Conjecture $\ref{atmost1conj}$ reduces to studying a special case of it.
Indeed, this fact will be applied for each of the proofs of our three theorems in the forthcoming sections.
\begin{lem} \label{weakform}
Let $d>2$ be a positive divisor of $c.$
Define the positive integers $A$ and $B$ by
\[
A=a^{\,e_d(a)/g}, \quad B=b^{\,e_d(b)/g},
\]
where $g=\gcd(e_d(a),e_d(b)).$
Then the following hold.
\begin{itemize}
\item[\rm (i)]
The number of solutions to equation $\eqref{abc}$ equals that to equation \eqref{abc} corresponding to the triple $(A,B,c).$
\item[\rm (ii)]
The extended multiplicative orders of $A$ and $B$ modulo $d$ equal $g.$
\item[\rm (iii)]
$(a,b,c)$ belongs to set \eqref{excep-set} if and only if $(A,B,c)$ belongs to the same.
\end{itemize}
\end{lem}
\begin{proof}
Note that $A,B>1$ and $\gcd(A,B,c)=1$.
Put $e_a:=e_d(a), e_b:=e_d(b)$ for simplicity.\par
(i) Let $(x,y,z)$ be a solution to equation \eqref{abc}.
By Lemma \ref{trick}, it follows that
\[
x \equiv 0 \mod{e_a/g}, \quad y \equiv 0 \mod{e_b/g}.
\]
This leads to
\[
A^{ \,x / (e_a/g) } + B^{ \,y / (e_b/g) } = c^z.
\]
It is clear that $(\,x / (e_a/g),\,y / (e_b/g),\,z\,)$ is a solution to equation \eqref{abc} for the triple $(A,B,c)$.
This correspondence proves the assertion.\par
(ii) This follows from Lemma \ref{property}\,(iii).\par
(iii) This is easy.
\end{proof}
Note that assertion (iii) of the above lemma can be applied for the subsets of \eqref{excep-set} appearing in the statements in Theorems \ref{th1} and \ref{th3}.
\section{Preliminaries for Theorem \ref{th1}} \label{sec-th1-pre}%
For the proof of Theorem \ref{th1}, it suffices to prove that $(a,b,c)$ has to equal one of $(5,3,2),(13,3,2),(5,2,3)$ and $(7,2,3)$, whenever equation \eqref{abc}, with each of $a$ and $b$ congruent to $1$ or $-1$ modulo $c$, has at least two solutions.
Indeed, suppose that this is established, and that equation \eqref{abc} has at least two solutions for some triple $(a,b,c)$ satisfying $\gcd(e_c(a),e_c(b))=1$.
Then Lemma \ref{weakform}\,(i,ii) tells us that the equation $A^x+B^y=c^z$ has at least two solutions, with $e_c(A)=e_c(B)=\gcd(e_c(a),e_c(b))=1$.
Thus one may conclude that $(A,B,c)$ equals one of the four exceptional triples mentioned before, so that the same holds for $(a,b,c)$.
Therefore, we assume that each of $a$ and $b$ is congruent to $1$ or $-1$ modulo $c$.
Clearly it suffices to consider only the case where $c$ is not a perfect power.
For each $h \in \{a,b\}$, we can uniquely define $\delta_h \in \{1,-1\}$ by the following congruence:
\begin{eqnarray}\label{cong-c'}
h \equiv \delta_h \mod {c'},
\end{eqnarray}
where
\[ c':= \begin{cases}
\, 4 & \text{if $c=2$},\\
\, c & \text{if $c>2$}.
\end{cases}\]
Note that
\[ \max\{a,b\} \ge c'+1, \quad \min\{a,b\} \ge c'-1.\]
Below, we often let $h$ denote any of $a$ and $b$.
We begin with the following lemma.
\begin{lem}\label{coprime}
Let $(x,y,z)$ be a solution to equation $\eqref{abc}.$
Then the following hold.
\begin{itemize}
\item[\rm (i)]
One of the following cases holds.
\[ \left\{ \begin{array}{lllll}
\delta_a=1, &\delta_b=-1, &y \text{ is odd\,}; \\
\delta_a=-1, &\delta_b=1, &x \text{ is odd\,}; \\
\delta_a=-1, &\delta_b=-1, &x \not\equiv y \pmod{2}.
\end{array} \right. \]
\item[\rm (ii)]
$x$ and $y$ are relatively prime.
\end{itemize}
\end{lem}
\begin{proof}
(i) It is easy to see that $c^z \equiv 0 \pmod{c'}$ since $z>1$ if $c=2$.
By congruence \eqref{cong-c'} one reduces equation \eqref{abc} modulo $c'$ to see that ${\delta_a}^x \equiv -{\delta_b}^y \pmod{c'}$.
This congruence holds trivially, namely ${\delta_a}^x=-{\delta_b}^y$, since $\delta_a, \delta_b \in \{1,-1\}$ and $c' >2$.
This implies the assertion.\par
(ii) Suppose on the contrary that $x,y$ have some common prime factor, say $p$.
Note that $p$ is odd by (i).
Write $x=p x_0,y=p y_0$.
Define positive integers $L,R$ as follows:
\[
L:=a^{x_0}+b^{y_0}, \quad R:=\frac{(a^{x_0})^p+(b^{y_0})^p}{a^{x_0}+b^{y_0}}.
\]
Note that $R$ is odd with $R>p$.
Equation \eqref{abc} becomes
\begin{equation}\label{factor}
L \cdot R=c^z.
\end{equation}
By elementary number theory we know that $\gcd(L,R) \in \{1,p\}$ and that $p \parallel R$ if $\gcd(L,R)=p$ (cf.~Lemma \ref{padic-lem} below).
Let $r$ be any prime factor of $c$.
It is obvious that $r$ divides $L$ or $R$, in particular,
\[
a^{x_0} + b^{y_0} \equiv 0 \mod{r} \quad \text{or} \quad (a^{x_0})^p + (b^{y_0})^p \equiv 0 \mod{r}.
\]
Since each of $a,b$ is congruent to $\pm 1$ modulo $r$ by the premise, and $p$ is odd, it follows that $(a^{x_0})^p \equiv a^{x_0} \pmod{r}$ and $(b^{y_0})^p \equiv b^{y_0} \pmod{r}$, so that
\[
a^{x_0} + b^{y_0} \equiv (a^{x_0})^p + (b^{y_0})^p \mod{r}.
\]
These obtained congruences show that $L$ is divisible by $r$.
Applying the above argument for $r$ as any prime factor of $R$, one concludes that $R$ has no prime factor other than $p$.
It follows that $R=p$, which is however absurd as $R>p$.
\end{proof}
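The elementary facts about $\gcd(L,R)$ used in the proof above can be spot-checked numerically (illustrative code, not part of the paper).

```python
from math import gcd

# for an odd prime p and coprime positive u, v, with L = u + v and
# R = (u^p + v^p)/L, check that gcd(L, R) is 1 or p, and that p divides
# R exactly once whenever gcd(L, R) = p
for p in (3, 5, 7):
    for u in range(1, 25):
        for v in range(1, 25):
            if gcd(u, v) != 1:
                continue
            L = u + v
            R = (u**p + v**p) // L     # exact division since p is odd
            g = gcd(L, R)
            assert g in (1, p)
            if g == p:
                assert R % p == 0 and R % (p * p) != 0   # p exactly divides R
```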
Suppose that equation \eqref{abc} has two solutions, say $(x,y,z)$ and $(X,Y,Z)$, with $(x,y,z) \ne (X,Y,Z)$.
Then
\begin{eqnarray}
&a^x+b^y=c^z, \label{1st}\\
&a^X+b^Y=c^Z. \label{2nd}
\end{eqnarray}
Without loss of generality, we may assume that $z \le Z$.
From \eqref{1st} and \eqref{2nd}, it holds trivially that
\begin{equation}\label{trivial-ineqs}
x<\frac{\log c}{\log a}\,z, \ \ y<\frac{\log c}{\log b}\,z, \ \ X<\frac{\log c}{\log a}\,Z, \ \ Y<\frac{\log c}{\log b}\,Z.
\end{equation}
In what follows, we put
\[
\Delta:=|x Y-X y|.
\]
Note that $\Delta$ is nonzero in general (cf.~\cite[Lemma 3.3]{HuLe}).
Further, we put
\[ C:=
\begin{cases}
\,c & \text{if $c=2$ or $c \not\equiv 2 \pmod{4}$},\\
\,c/2 & \text{if $c>2$ and $c \equiv 2 \pmod{4}$}.
\end{cases} \]
It will turn out that the size of $C$ relative to $c$ is crucially important at several places in the proof of Theorem \ref{th1}.
In particular,
\begin{equation}\label{essential}
C>\sqrt{c}, \quad \lim_{c \to \infty} \frac{C}{\sqrt{c}} = \infty.
\end{equation}
\begin{lem}\label{basic-cong}
$h^{\Delta} \equiv {\delta_h}^{\Delta} \pmod{C^z}$ for each $h \in \{a,b\}.$
\end{lem}
\begin{proof}
Since $z \le Z$, one reduces equations \eqref{1st}, \eqref{2nd} modulo $c^z$ to see that
\[
a^x \equiv -b^y \mod c^z, \quad a^X \equiv -b^Y \mod c^z,
\]
respectively.
From these observe that
\[
a^{xY} \equiv (-b^y)^Y \equiv (-1)^Y (b^Y)^y \equiv (-1)^Y (-a^X)^y \equiv (-1)^{y+Y} a^{Xy} \mod{c^z}.
\]
Similarly, $b^{xY} \equiv (-1)^{x+X} b^{Xy} \pmod{c^z}$.
Since $a,b$ are prime to the modulus, these obtained congruences imply
\[
h^{\Delta} \equiv \varepsilon \mod c^z
\]
for some $\varepsilon \in \{1,-1\}$.
It suffices to show that $\varepsilon={\delta_h}^{\Delta}$.
On the other hand, from congruence \eqref{cong-c'},
\[ h \equiv \delta_h \mod {c'}. \]
Thus Lemma \ref{property}\,(i) tells us that $\varepsilon={\delta_h}^{\Delta}$ if $\gcd(c^z,c')>2$.
It is clear that $\gcd(c^z,c')=c$ if $c>2$.
For $c=2$, since $z>1$ in equation \eqref{1st}, it follows that $\gcd(c^z,c')=\gcd(2^z,4)=4$.
To sum up, the lemma is proved.
\end{proof}
\begin{lem}\label{basic-cong-2}
For each $h \in \{a,b\},$ $h$ is congruent to $\delta_h$ modulo every prime factor of $c.$
Further, $h \equiv \delta_h \pmod{4}$ if either $c=2$ or $c \equiv 0 \pmod{4}.$
\end{lem}
\begin{proof}
The two assertions easily follow by reducing congruence \eqref{cong-c'} modulo every prime factor of $c$ and modulo 4, respectively.
\end{proof}
For any integer $M>1$, we denote by $\nu_M(\mathcal A)$ the $M$-adic valuation of a nonzero integer $\mathcal A$, that is, the highest exponent $e$ such that $M^e$ divides $\mathcal A$.
Further, if $p/q$ is a nonzero rational number with $p$ and $q$ coprime integers, we set $\nu_M(p/q):=\nu_M(p)-\nu_M(q)$.
The next lemma is well known and gives precise information on the $p$-adic valuations of integers of a special form.
\begin{lem} \label{padic-lem}
Let $p$ be a prime number.
Let $U$ and $V$ be relatively prime nonzero integers.
Assume that
\[
\begin{cases}
\, U \equiv V \mod{p} & \text{if $p \ne 2$},\\
\, U \equiv V \mod{4} & \text{if $p=2$}.
\end{cases}
\]
Then, for any positive integer $N,$
\[
\nu_p(U^N-V^N)=\nu_p(U-V)+\nu_p(N).
\]
\end{lem}
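This is the well-known `lifting-the-exponent' lemma; a brute-force numerical check over a small range (ours, for illustration) is immediate.

```python
from math import gcd

def nu(p, n):
    # p-adic valuation of a nonzero integer n
    e, n = 0, abs(n)
    while n % p == 0:
        e, n = e + 1, n // p
    return e

# spot-check the lemma on small coprime pairs (U, V)
for p in (2, 3, 5, 7):
    for U in range(-20, 21):
        for V in range(-20, 21):
            if U == 0 or V == 0 or abs(U) == abs(V) or gcd(U, V) != 1:
                continue
            ok = (U - V) % p == 0 if p != 2 else (U - V) % 4 == 0
            if ok:
                for N in range(1, 16):
                    assert nu(p, U**N - V**N) == nu(p, U - V) + nu(p, N)
```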
\begin{lem}\label{div}
$C^z$ divides $\gcd(a-\delta_a,b-\delta_b) \cdot \Delta.$
\end{lem}
\begin{proof}
Let $p$ be a prime factor of $C$.
If $p=2$, then $C$ is even, so that either $c=2$ or $c \equiv 0 \pmod{4}$.
Observe from Lemma \ref{basic-cong-2} that $h \equiv \delta_h \pmod{p}$, and that $h \equiv \delta_h \pmod{4}$ if $p=2$.
Then one may apply Lemma \ref{padic-lem} with $(U,V)=(h,\delta_h)$ and $N=\Delta$, to see that
\[
\nu_p (h^{\Delta}-{\delta_h}^{\Delta})=\nu_p( h-\delta_h) + \nu_p(\Delta) =\nu_p\bigr( (h-\delta_h) \cdot \Delta \bigr).
\]
From Lemma \ref{basic-cong} it follows that
\[
\nu_p (C^z) \le \nu_p\bigr( (h-\delta_h) \cdot \Delta \bigr).
\]
This inequality holds for an arbitrary prime factor $p$ of $C$.
Therefore, $C^z$ divides $(h-\delta_h) \cdot \Delta$, and the assertion follows.
\end{proof}
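As an illustration (ours, not part of the paper), Lemma \ref{div} can be checked directly on some of the exceptional triples in \eqref{excep-equs}, e.g.\ $(7,2,3)$ with its two solutions $7+2=3^2$ and $7^2+2^5=3^4$.

```python
from math import gcd

def delta(h, cp):
    # the sign delta_h defined by h = delta_h (mod c')
    assert h % cp in (1, cp - 1)
    return 1 if h % cp == 1 else -1

# (a, b, c, (x, y, z), (X, Y, Z)) with z <= Z
data = [
    (7, 2, 3, (1, 1, 2), (2, 5, 4)),
    (5, 2, 3, (1, 2, 2), (2, 1, 3)),
    (13, 3, 2, (1, 1, 4), (1, 5, 8)),
    (5, 3, 2, (1, 1, 3), (1, 3, 5)),
]
for a, b, c, (x, y, z), (X, Y, Z) in data:
    assert a**x + b**y == c**z and a**X + b**Y == c**Z and z <= Z
    cp = 4 if c == 2 else c                        # c' in the text
    C = c // 2 if (c > 2 and c % 4 == 2) else c    # C as defined above
    da, db = delta(a, cp), delta(b, cp)
    D = abs(x * Y - X * y)                         # Delta
    assert (gcd(a - da, b - db) * D) % C**z == 0   # Lemma div
```

For $(7,2,3)$ one has $\delta_a=1$, $\delta_b=-1$, $\Delta=3$ and $\gcd(6,3)\cdot 3=9=C^z$.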
When $C=c/2$, the above lemma lacks the 2-adic divisibility information.
The following lemma complements it (cf.~\cite[Lemma 4.2]{MiPi}).
\begin{lem}\label{complement}
Assume that $C=c/2.$
Then $2^z$ divides $\gcd(a-\delta_{a,4},b-\delta_{b,4}) \cdot \Delta,$ where $\delta_{a,4} \in \{1,-1\}$ and $\delta_{b,4} \in \{1,-1\}$ are defined as $a \equiv \delta_{a,4} \pmod{4}$ and $b \equiv \delta_{b,4} \pmod{4}$ respectively.
\end{lem}
\section{Upper bounds for solutions} \label{sec-th1-bound}%
For any algebraic number $\gamma$, we define the absolute logarithmic height of $\gamma$ as follows:
\[
{\rm h}(\gamma) =\frac{1}{[\mathbb{Q}(\gamma):\mathbb{Q}]}\,\Bigl(\, \log |c_0| \,+ \,\sum \,\log \max\bigl\{ 1,\,| \gamma' |\bigr\} \,\Bigr),
\]
where $c_0$ is the leading coefficient of the minimal polynomial of $\gamma$ over $\mathbb Z$, and the sum extends over all conjugates $\gamma'$ of $\gamma$ in the field of complex numbers.
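For a rational number $\gamma=p/q$ in lowest terms, the minimal polynomial is $qX-p$, so the definition specializes to ${\rm h}(p/q)=\log\max\{|p|,|q|\}$; a minimal helper (the name `height` is ours, for illustration) is given below.

```python
from fractions import Fraction
from math import log

def height(alpha):
    """Absolute logarithmic height of a rational number:
    h(p/q) = log max(|p|, |q|) for p/q in lowest terms."""
    f = Fraction(alpha)   # Fraction normalizes so that the denominator is > 0
    return log(max(abs(f.numerator), f.denominator))
```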
We will make use of the following result, which is a simple consequence of \cite[Theorem 2;\,$\mu = 4$]{Bu}.
\begin{prop} \label{Bu-madic}
Let $M$ be a positive integer with $M>1.$
Let $\alpha_1$ and $\alpha_2$ be multiplicatively independent rational numbers such that $\nu_q(\alpha_1)=0$ and $\nu_q(\alpha_2)=0$ for any prime factor $q$ of $M.$
Assume that ${\rm g}$ is a positive integer with $\gcd({\rm g},M)=1$ satisfying
\[
\nu_q( {\alpha_1}^{{\rm g}}-1 ) \ge \nu_{q}(M), \quad \nu_q( {\alpha_2}^{{\rm g}}-1 ) \ge 1
\]
for any prime factor $q$ of $M.$
If $M$ is even, then further assume that
\[
\nu_2( {\alpha_1}^{{\rm g}}-1 ) \ge 2, \quad \nu_2( {\alpha_2}^{{\rm g}}-1 ) \ge 2.
\]
Let $H_1$ and $H_2$ be positive numbers such that
\[
H_j \ge \max \{ {\rm h}(\alpha_j),\log M \} \quad (j=1,2).
\]
Then, for any positive integers $b_1$ and $b_2$ with $\gcd(b_1,b_2,M)=1,$
\[
\nu_M( {\alpha_1}^{b_1} - {\alpha_2}^{b_2}) \le \frac{53.6\,{\rm g}\,H_1 H_2}{\log^4 M}\, \Bigl(\!\max \{ \log b'+\log \log M+0.64,\,4 \log M \} \Bigr)^2
\]
with $b'=b_1/H_2+b_2/H_1.$
\end{prop}
\begin{lem} \label{Kc}
$Z<\dfrac{K_c (\log a) \log b}{\log^2 c}<\dfrac{K_c\,z^2}{x y},$ where $K_c$ is given by
\[ K_c= \begin{cases}
\,13100 & \text{if $c=2$},\\
\,7400 & \text{if $c=3$},\\
\,1900 & \text{if $c=5$},\\
\,12500 & \text{if $c=6$},\\
\,1100 & \text{if $c=7$},\\
\,3600 & \text{if $c=10$},\\
\,2000 & \text{if $c=14$},\\
\,\dfrac{857.6 \, \kappa_c\log^2 c}{\log^2 C} & \text{otherwise},
\end{cases} \]
where $\kappa_c=1$ if $c \equiv 2 \pmod{4},$ and $\kappa_c=\frac{\log c}{\log(c-1)}$ otherwise.
\end{lem}
\begin{proof}
First, observe from equation \eqref{2nd} that
\begin{equation}\label{Kc-lbound}
\nu_C (\varLambda) = Z,
\end{equation}
where $\varLambda:=a^X+b^Y$.
To obtain an upper bound for the left-hand side above, we apply Proposition \ref{Bu-madic} for $M=C$.
According to Lemma \ref{coprime}\,(i) for the solution $(X,Y,Z)$, we shall set the parameters $(\alpha_1,\alpha_2)$ and $(b_1,b_2)$ as follows:
\[ \left(\alpha_1,b_1,\alpha_2,b_2\right):= \begin{cases}
\,(a,X,-b,Y) & \text{if $\delta_a=1, \delta_b=-1$, $Y$\, is odd}, \\
\,(-a,X,b,Y) & \text{if $\delta_a=-1, \delta_b=1$, $X$\, is odd}, \\
\,(-a,X,-b,Y) & \text{if $\delta_a=\delta_b=-1$, $X \equiv Y \pmod{2}$}.
\end{cases} \]
Then ${\alpha_1}^{b_1} - {\alpha_2}^{b_2}=\pm \varLambda$, and both $\alpha_1,\alpha_2$ are congruent to 1 modulo $c'$ by congruence \eqref{cong-c'}.
Note that $C \mid c'$.
Since $c'$ is divisible by $4$ if $C$ is even, one may take ${\rm g}=1$.
Further, recall that
\[ \min\{a,b\} \ge c'-1 = \begin{cases}
\, 3 & \text{if $c=2$},\\
\, c-1 & \text{if $c>2$}.
\end{cases} \]
Since $\min\{a,b\}<C\,(=M)$ holds only when $C=c>2$ and $\min\{a,b\}=c-1$, one may set
\[ (H_1,H_2):= \begin{cases}
\,(\log a,\kappa_c\log{b})
& \text{if $a>b$},\\
\,(\kappa_c\log a,\log{b})
& \text{if $a<b$}.
\end{cases} \]
Since $\gcd(b_1,b_2)=\gcd(X,Y)=1$ by Lemma \ref{coprime}\,(ii), Proposition \ref{Bu-madic} gives
\begin{equation}\label{Kc-ubound}
\nu_{M}(\varLambda) \le \frac{53.6 \cdot 1 \cdot \kappa_c \log a\, \log b}{\log^4 C} \cdot \mathcal B^2,
\end{equation}
where
\[
\mathcal B=\max \biggl\{ \log \biggl( \frac{X}{H_2}+\frac{Y}{H_1} \biggr)+\log \log C+0.64, \,4\log C \biggr\}.
\]
Noting the latter two inequalities in \eqref{trivial-ineqs} and $\kappa_c \ge 1$, one has
\begin{align*}
&\log \biggl( \frac{X}{H_2}+\frac{Y}{H_1} \biggr)+\log \log C+0.64 \\
<&\log \biggl( \frac{Z(\log c)/\log a}{\log b}+\frac{Z(\log c)/\log b}{\log a} \biggr)+\log ({\rm e}^{0.64}\log C) \\
=&\log \biggl( \frac{2\,{\rm e}^{0.64}\log C}{\log c}\,T\biggr),
\end{align*}
where ${\rm e}=\exp(1)$, and
\[
T:=\frac{\log^2 c}{\log a\,\log b}\,Z.
\]
Thus \eqref{Kc-lbound}, \eqref{Kc-ubound} together lead to
\begin{equation} \label{ubound-T}
T < \frac{53.6 \, \kappa_c \log^2 c}{\log^4 C} \cdot {\mathcal B'}^2,
\end{equation}
where
\[
\mathcal B':=\log \,\max \biggl\{ \frac{2\,{\rm e}^{0.64}\log C}{\log c}\,T ,\,C^4 \biggr\}.
\]
It remains to find an absolute upper bound of $T$ for each $c$ by using \eqref{ubound-T}.
If $2\,{\rm e}^{0.64}(\log C)\,T \le C^4 \log c$, then \eqref{ubound-T} gives
\[
T < \frac{53.6 \, \kappa_c\log^2 c}{\log^4 C} \cdot (4\log C)^2 = \frac{857.6 \, \kappa_c\log^2 c}{\log^2 C},
\]
so that
\begin{equation}\label{Kc-1stcase}
T \le \min \biggl\{ \frac{C^4\log c}{2\,{\rm e}^{0.64} \log C}\,, \, \frac{857.6 \, \kappa_c\log^2 c}{\log^2 C} \biggr\}.
\end{equation}
While if $2\,{\rm e}^{0.64}(\log C)\,T>C^4 \log c$, then
\begin{equation} \label{Kc-2ndcase}
\frac{C^4 \log c}{2\,{\rm e}^{0.64}\log C} < T < \frac{53.6 \, \kappa_c \log^2 c}{\log^4 C} \cdot \log^2 \biggl(\frac{2\,{\rm e}^{0.64}\log C}{\log c}\,T \biggr).
\end{equation}
For each $c$, one can combine \eqref{Kc-1stcase} and \eqref{Kc-2ndcase} by elementary calculus to find an upper bound for $T$ as asserted; the inequalities in \eqref{Kc-2ndcase} are compatible only if $c \le 10$ or $c=14$.
\end{proof}
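The ``by calculus'' step can be mimicked numerically. The sketch below (assuming, as the case analysis in the lemma suggests, $C=c$ for $c \in \{2,3\}$) iterates $T \mapsto A\log^2(BT)$ from above to locate the largest root of the borderline case of \eqref{Kc-2ndcase}, confirming that the tabulated values $K_2=13100$ and $K_3=7400$ are admissible:

```python
# Numerically locate the largest T with T = A * log(B*T)^2, the borderline
# case of the second inequality in the proof of the lemma.
# Assumption (hypothetical): C = c for c in {2, 3}; kappa_2 = 1 and
# kappa_3 = log 3 / log 2, as in the statement of the lemma.
import math

def largest_root(c, C, kappa, iters=200):
    A = 53.6 * kappa * math.log(c) ** 2 / math.log(C) ** 4
    B = 2 * math.exp(0.64) * math.log(C) / math.log(c)
    T = 1e8  # start well above the root; the iteration contracts onto it
    for _ in range(iters):
        T = A * math.log(B * T) ** 2
    return T

T2 = largest_root(2, 2, 1.0)
T3 = largest_root(3, 3, math.log(3) / math.log(2))
assert T2 < 13100 and T3 < 7400  # consistent with the table in the lemma
```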
In what follows, we define $\Delta'$ and $\ell=\ell(c,z,\Delta')$ as follows:
\begin{align*}
&\Delta':=\gcd(\Delta,C^z), \\ &\ell:=\lcm (c',C^z/\Delta').
\end{align*}
By Lemma \ref{div}, together with congruence \eqref{cong-c'},
\begin{equation} \label{cong-ell}
h \equiv \delta_h \mod{\ell}
\end{equation}
for each $h \in \{a,b\}$.
In particular, $\min\{a,b\} \ge \ell-1$.
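Continuing the numeric example $5^1+3^3=2^5$, $5^3+3^1=2^7$ (and again assuming $C=2$, $c'=4$ for $c=2$), one has $\Delta=8$, $\Delta'=\gcd(8,2^5)=8$ and $\ell=\lcm(4,\,2^5/8)=4$, and congruence \eqref{cong-ell} can be checked directly:

```python
import math

# Delta' and ell for the double solution 5^1 + 3^3 = 2^5, 5^3 + 3^1 = 2^7,
# under the (hypothetical) conventions C = 2, c' = 4 for c = 2.
c_prime, C, z = 4, 2, 5
Delta = abs(1 * 1 - 3 * 3)                # |xY - Xy| = 8
Delta_p = math.gcd(Delta, C**z)           # Delta' = 8
ell = math.lcm(c_prime, C**z // Delta_p)  # lcm(4, 4) = 4

assert (5 - 1) % ell == 0 and (3 + 1) % ell == 0  # a ≡ 1, b ≡ -1 (mod ell)
assert min(5, 3) >= ell - 1
```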
The following proposition corresponds to a special case of \cite[Theorem 1]{Bu}.
In the notation of \cite{Bu} it corresponds to the case where $x_1/y_1$ and $x_2/y_2$ are multiplicatively independent, $h=0$ and ${\rm g}=1$, where the numbers $\max\{|x_i|,|y_i|\}$ for $i=1,2$ appearing in \cite[(1)]{Bu} should be replaced by their logarithms respectively.
\begin{prop} \label{Bu-madic-strong}
Let $M$ be a positive integer with $M>1.$
Let $\alpha_1$ and $\alpha_2$ be nonzero rational numbers which are multiplicatively independent.
Assume that
\begin{equation} \label{strong1}
\nu_q(\alpha_1-1) \ge \nu_q(M), \ \ \nu_q(\alpha_2-1) \ge 1
\end{equation}
for any prime factor $q$ of $M.$
If $M$ is even, then further assume that
\begin{equation} \label{strong2}
\nu_2(\alpha_1-1) \ge 2, \ \ \nu_2(\alpha_2-1) \ge 2.
\end{equation}
Put
\[
\varLambda={\alpha_1}^{b_1}-{\alpha_2}^{b_2},\]
where $b_1$ and $b_2$ are positive integers such that at least one of $b_1$ and $b_2$ is prime to $M.$
Let $K, L, R_1,R_2,S_1$ and $S_2$ be positive integers with $K \ge 3$ and $L \ge 2.$
Put $R=R_1+R_2-1$ and $S=S_1+S_2-1.$
Assume that
\begin{equation}\label{strong3}
R_1 S_1 \ge L,
\end{equation}
\begin{multline}\label{strong4}
\operatorname{Card}\, \{r b_2+s b_1\,|\, r \in \mathbb{Z}, s \in \mathbb{Z}, 0 \le r < R_2, 0 \le s < S_2 \}>(K-1)L.
\end{multline}
Then $\nu_M(\varLambda) \le KL-1,$ whenever
\begin{multline} \label{strong5}
K(L-1)\log M > (1+2w)\log (KL)+(K-1)\log{\beta}\\+\gamma L R\,{\rm h}(\alpha_1)+\gamma L S\,{\rm h}(\alpha_2),
\end{multline}
where $w$ is the number of distinct prime divisors of $M,$ and
\[
\beta=\frac{(R-1)b_2 + (S-1)b_1}{2}\left(\prod_{k=1}^{K-1}{k!}\right)^{-2/(K^2-K)}, \quad \gamma=\frac{1}{2}-\frac{KL}{6RS}.
\]
\end{prop}
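Condition \eqref{strong4} is a purely combinatorial count. A minimal sketch (the numerical inputs are hypothetical): when $\gcd(b_1,b_2)=1$, the set has full cardinality $R_2S_2$ unless both $b_1<R_2$ and $b_2<S_2$, since any coincidence $r_1b_2+s_1b_1=r_2b_2+s_2b_1$ forces $b_1 \mid r_1-r_2$ and $b_2 \mid s_2-s_1$:

```python
# Count Card{ r*b2 + s*b1 : 0 <= r < R2, 0 <= s < S2 }, the quantity in
# condition (strong4) of the proposition.
def card(b1, b2, R2, S2):
    return len({r * b2 + s * b1 for r in range(R2) for s in range(S2)})

# With gcd(b1, b2) = 1 and b1 >= R2, no collision is possible:
assert card(7, 5, 4, 4) == 16   # full cardinality R2 * S2
# Collisions occur only when b1 < R2 and b2 < S2:
assert card(2, 3, 4, 4) == 14   # coincidences of type 2*3 = 3*2
```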
Note that Lemma \ref{coprime}\,(ii) ensures that at least one of $X,Y$ is odd, and that at least one of $X,Y$ is prime to $C$ when $C$ is a prime power.
According to these, in the next lemma, we shall consider each of the following (not necessarily independent) three cases:
\[
{\rm (C1)} \ c=2, \ \ \ {\rm (C2)} \ c>2, \ \ \ {\rm (C3)} \ C^z/\Delta'>2.
\]
In what follows, we put \[ \mathcal X:=\max\{x,y\}.\]
The next lemma will actually be applied when $c$ is very small and $z$, $\mathcal X$, $\Delta'$ and an upper bound for $Z$ are given explicitly.
\begin{lem} \label{strong-applied}
Assume that $a>b$ and $C$ is a prime power.
Put
\[ M=\begin{cases}
\,4 & \text{for {\rm (C1)}},\\
\,C & \text{for {\rm (C2)}},\\
\,C^z/\Delta' & \text{for {\rm (C3)}}.
\end{cases} \]
Let $Z_u$ be an upper bound for $Z.$
Let $k>0$ be a real number and $L \ge 2$ be an integer.
Put $a_1,a_2,K$ and $B$ as follows$:$
\begin{gather*}
a_1=\frac{z \log c}{\log M}, \quad a_2=\frac{z \log c}{\mathcal X \log M}, \quad K=\lfloor{kLa_1a_2}\rfloor+1,\\
B=\log \log M+\log (Z_u/z) +\log \biggl( \frac{\mathcal X}{ \log(\ell+1)}+\frac{1}{\log(\ell-1)} \biggr)- \frac{1}{2}\log k+\varepsilon(K),
\end{gather*}
where $\varepsilon(K)=3/2+\log{\frac{(1+\sqrt{K-1})\sqrt{K}}{2K-2}}.$
Further, put $R_1,R_2,S_1,S_2,R,S,$ $\gamma,f_0, f_1, f_2, f_3$ and $f_4$ as follows$:$
\begin{align*}
&R_1=\lfloor \sqrt{La_2/a_1} \rfloor+1, \ R_2=\lfloor \sqrt{(K-1)La_2/a_1} \rfloor+1,\\
&S_1=\lfloor \sqrt{La_1/a_2} \rfloor+1, \ S_2=\lfloor \sqrt{(K-1)La_1/a_2} \rfloor+1,\\
&R=R_1+R_2-1, \ S=S_1+S_2-1, \ \gamma=\frac{1}{2}-\frac{KL}{6RS},\\
&f_0=K(L-1), \ f_1=\frac{3\log (KL)}{\log M}, \ f_2=\frac{(K-1)B}{\log M},\\
&f_3=\gamma L R a_1, \ f_4=\gamma L S a_2.
\end{align*}
If $K \ge 3$ and $f_0>f_1+f_2+f_3+f_4,$ then
\[ Z \le \begin{cases}
\,\max\bigl\{2KL-1, Z_2\bigr\} & \text{for {\rm (C1)}},\\
\,\max \bigl\{KL-1, Z_2 \bigr\}
& \text{for {\rm (C2)}},\\
\,\max \bigl\{KL (z-t)-1,Z_2 \bigr\}
& \text{for {\rm (C3)}},
\end{cases}\]
where $Z_2=\lfloor \sqrt{k}L z\, (a_1/\mathcal X+a_2)\rfloor+1$ and $t$ is the nonnegative integer defined as $C^t=\Delta'.$
\end{lem}
\begin{proof}
First, observe that
\begin{equation} \label{strong-applied-lbound}
\nu_M(\varLambda) \ge
\begin{cases}
\,\lfloor Z/2 \rfloor & \text{for {\rm (C1)},}\\
\,Z & \text{for {\rm (C2)},} \\
\,\lfloor Z/(z-t) \rfloor & \text{for {\rm (C3)},}
\end{cases}
\end{equation}
where $\varLambda:=a^X+b^Y$.
Indeed, since $\varLambda=c^Z$ by equation \eqref{2nd}, the above clearly holds for both cases (C1) and (C2); further, for (C3),
\[
\nu_M(\varLambda)=\nu_{C^{z-t}}(c^Z) \ge \nu_{c^{z-t}}(c^Z)=\lfloor Z/(z-t) \rfloor.
\]
To obtain upper bounds for the left-hand side of \eqref{strong-applied-lbound}, we will apply Proposition \ref{Bu-madic-strong}.
For this, set $(\alpha_1,\alpha_2)$ and $(b_1,b_2)$ in the same way as in the proof of Lemma \ref{Kc}.
Note that at least one of $b_1,b_2$ is prime to the prime power $M$ as $\gcd(b_1,b_2)=\gcd(X,Y)=1$ by Lemma \ref{coprime}\,(ii).
Recall that ${\alpha_1}^{b_1} - {\alpha_2}^{b_2}=\pm \varLambda$, and that $\alpha_1 \equiv \alpha_2 \equiv 1 \pmod{c'}$.
Next, we shall observe all conditions \eqref{strong1} to \eqref{strong5} required in Proposition \ref{Bu-madic-strong}.
In both cases (C1) and (C2), $M$ divides $c'$, so that $\alpha_1 \equiv \alpha_2 \equiv 1 \pmod{M}$, thereby condition \eqref{strong1} holds.
The same congruences hold also for case (C3) by congruence \eqref{cong-ell}.
These imply condition \eqref{strong2} since $M \not\equiv 2 \pmod{4}$ by assumption.
Condition \eqref{strong3} holds by the definitions of $R_1,S_1$.
To investigate the validity of condition \eqref{strong4}, we distinguish two cases.
\vspace{0.2cm}\noindent{\it Case I.}\,
${\rm Card} \{r Y+s X\,|\, 0 \le r < R_2, 0 \le s < S_2 \} < R_2 S_2$.\par
Clearly there exist two distinct pairs $(r_1,s_1)$ and $(r_2,s_2)$ of integers with $0 \le r_1,r_2 < R_2$ and $0 \le s_1,s_2 < S_2$ such that $r_1 Y+s_1 X=r_2 Y+s_2 X$.
Since $Y(r_1-r_2)=X(s_2-s_1)\,(\ne0)$ with $\gcd(X,Y)=1$, one has $X \mid r_1-r_2$ and $Y \mid s_2-s_1$, so that $X<R_2$ and $Y<S_2$.
Then
\[
X \le R_2-1=\lfloor \sqrt{(K-1)La_2/a_1} \rfloor \le \sqrt{k L a_1a_2 \cdot La_2/a_1}=\sqrt{k}La_2.
\]
Similarly, $Y \le \sqrt{k}La_1$.
Since, by equations \eqref{1st}, \eqref{2nd} with $b<a$,
\begin{align*}
Z&<\frac{1}{\log c}\, \log \bigr( 2 \max\{a^X,b^Y\} \bigr) \\
&<\frac{\log 2}{\log c}+\max \biggl\{ \frac{\log a}{\log c}\,X,\,\frac{\log b}{\log c}\,Y \biggr\}<1+\max \Bigl\{ z X,\,\frac{z}{\mathcal X}\,Y \Bigr\},
\end{align*}
one has
\begin{equation} \label{strong-applied-ubound1}
Z < \sqrt{k} L z (a_1/\mathcal X+a_2)+1.
\end{equation}
\vspace{0.2cm}\noindent{\it Case II.}\,
${\rm Card} \{r Y+s X \,|\, 0 \le r < R_2, 0 \le s < S_2 \} = R_2 S_2.$\par
Condition \eqref{strong4} holds by the definitions of $R_2,S_2$.
We shall check the last condition, namely, \eqref{strong5}, which is equivalent to
\[
f_0>f_1+f_2 \cdot \frac{\log \beta}{B}+f_3 \cdot \frac{\log a}{z\log c}+f_4 \cdot \frac{\mathcal X \log b}{z \log c},
\]
where $\beta$ is defined as in Proposition \ref{Bu-madic-strong}.
Since $\max\{a,b^{\mathcal X}\}<c^z$, the above inequality holds if $f_0>f_1+f_2+f_3+f_4$ and $\log \beta \le B$.
According to the proof of \cite[Lemme 13]{BuLa},
\[
\left(\prod_{k=1}^{K-1}{k!}\right)^{-2/(K^2-K)} \le \frac{{\rm e}^{3/2}}{K-1},
\]
whenever $K \ge 3$.
Also observe that
\begin{align*}
R-1 & = R_1-1 + R_2-1 \\
& \le \sqrt{L a_2 / a_1} + \sqrt{(K-1) L a_2 / a_1} \\
& = (1+\sqrt{K-1}) \sqrt{L a_1 a_2} \cdot 1/a_1
< (1+\sqrt{K-1}) \sqrt{K/k} \cdot 1/a_1.
\end{align*}
Similarly, $S-1 \le (1+\sqrt{K-1}) \sqrt{K/k} \cdot 1/a_2$.
These together with the inequalities $\min\{a,b\} \ge \ell-1,X<\frac{\log c}{\log a}\,Z_u, Y <\frac{\log c}{\log b}\,Z_u$ lead us to see that
\begin{align*}
\beta & = \frac{(R-1)Y + (S-1)X}{2} \cdot \left(\prod_{k=1}^{K-1}{k!}\right)^{-2/(K^2-K)}\\
&<\frac{(1+\sqrt{K-1}) \sqrt{K/k}}{2} \left( \frac{Y}{a_1}+\frac{X}{a_2} \right) \cdot \frac{{\rm e}^{3/2}}{K-1} \\
& = \exp( \varepsilon(K) ) \cdot 1/\sqrt{k} \cdot (Y+X \mathcal X) \cdot \frac{\log M}{z\log c}\\
& < \exp( \varepsilon(K) ) \cdot 1/\sqrt{k} \cdot \left( \frac{1}{\log b}+\frac{\mathcal X}{\log a} \right) \frac{Z_u\log M}{z} \le \exp(B),
\end{align*}
whenever $K \ge 3$.
To sum up, if $K \ge 3$ and $f_0>f_1+f_2+f_3+f_4$, then condition \eqref{strong5} holds, and Proposition \ref{Bu-madic-strong} gives
\begin{equation} \label{strong-applied-ubound2}
\nu_M(\varLambda) \le KL-1.
\end{equation}
Finally, the combination of \eqref{strong-applied-lbound}, \eqref{strong-applied-ubound1}, \eqref{strong-applied-ubound2} yields the asserted bounds for $Z$.
\end{proof}
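The parameter computations in the lemma are mechanical. The sketch below evaluates $K$, $R_1$, $S_1$ and $f_0,\dots,f_4$ for purely hypothetical inputs (the choice $M=4$ mimics case (C1)) and checks two constraints that hold by construction, namely \eqref{strong3} and $K \ge 3$:

```python
import math

# Evaluate the parameters of the lemma for hypothetical inputs
# (c, z, calX, ell, M, Zu, k, L are illustrative values only).
def params(c, z, calX, ell, M, Zu, k, L):
    a1 = z * math.log(c) / math.log(M)
    a2 = z * math.log(c) / (calX * math.log(M))
    K = math.floor(k * L * a1 * a2) + 1
    eps = 1.5 + math.log((1 + math.sqrt(K - 1)) * math.sqrt(K) / (2 * K - 2))
    B = (math.log(math.log(M)) + math.log(Zu / z)
         + math.log(calX / math.log(ell + 1) + 1 / math.log(ell - 1))
         - 0.5 * math.log(k) + eps)
    R1 = math.floor(math.sqrt(L * a2 / a1)) + 1
    R2 = math.floor(math.sqrt((K - 1) * L * a2 / a1)) + 1
    S1 = math.floor(math.sqrt(L * a1 / a2)) + 1
    S2 = math.floor(math.sqrt((K - 1) * L * a1 / a2)) + 1
    R, S = R1 + R2 - 1, S1 + S2 - 1
    gamma = 0.5 - K * L / (6 * R * S)
    f = [K * (L - 1), 3 * math.log(K * L) / math.log(M),
         (K - 1) * B / math.log(M), gamma * L * R * a1, gamma * L * S * a2]
    return K, R1, S1, f

K, R1, S1, f = params(c=2, z=20, calX=2, ell=4, M=4, Zu=10**5, k=2.0, L=8)
assert R1 * S1 >= 8   # condition (strong3) holds by construction
assert K >= 3
```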
Later we will be led to distinguish two cases according to whether $\mathcal X>1$ or $\mathcal X=1$.
For the latter case, we can find another application of Proposition \ref{Bu-madic} as follows:
\begin{lem}\label{bound-x1y1}
If $x=1$ and $y=1,$ then $Z<858.6\,z.$
\end{lem}
\begin{proof}
First, from equation \eqref{1st} with $x=y=1$, observe that
\[
-a/b-1=-c^z/b, \ \ -b/a-1=-c^z/a.
\]
Since both $a,b$ are prime to $c$, the above equalities particularly show that the rationals $-a/b$ and $-b/a$ are very close to $1$ in the $c$-adic sense.
This is a key idea in the proof.
By Lemma \ref{coprime}\,(i), we are in one of the following cases:
\[ \left\{ \begin{array}{lllll}
\delta_a=-1, &\delta_b=1, &X \text{ is odd}; \\
\delta_a=1, &\delta_b=-1, &Y \text{ is odd}.
\end{array} \right. \]
In each of these cases, to obtain an upper bound for $Z$, we will apply Proposition \ref{Bu-madic} for $M:=c^z$ in a different way from that of the proof of Lemma \ref{Kc}.
Note that $X \ne Y$ and $M \not\equiv 2 \pmod{4}$.
We shall set the parameters $(\alpha_1,\alpha_2)$ and $(b_1,b_2)$ as follows:
\[ \left(\alpha_1,b_1,\alpha_2,b_2\right):= \begin{cases}
\,(-a/b,X,b^{\,{\rm sgn}(Y-X)},|X-Y|) & \text{if $X$ is odd},\\
\,(-b/a,Y,a^{\,{\rm sgn}(X-Y)},|X-Y|) & \text{if $Y$ is odd},
\end{cases}\]
where ${\rm sgn}$ is the sign function.
Put $\varLambda:={\alpha_1}^{b_1} - {\alpha_2}^{b_2}$.
Then $\varLambda \in \{\pm c^Z / b^X\,, \pm c^Z / a^Y\}$, so that
\begin{equation}\label{bound-x1y1-lbound}
\nu_M (\varLambda) = \nu_{c^z} (c^Z) = \left\lfloor \frac{Z}{z} \right\rfloor.
\end{equation}
On the other hand, as remarked in the beginning, one has $\nu_M (\alpha_1-1) \ge 1$.
Also, $\nu_{c'} (\alpha_2-1) \ge 1$ by congruence \eqref{cong-c'}.
Thus one may take ${\rm g}=1$.
Further, since $\max\{a,b\}<c^z=M$, one may set $H_1:=\log M$ and $H_2:=\log M$.
To sum up, noting that $\gcd(b_1,b_2)=\gcd(X,Y)=1$ by Lemma \ref{coprime}\,(ii), Proposition \ref{Bu-madic} gives
\begin{equation}\label{bound-x1y1-ubound}
\nu_M (\varLambda) \le \frac{53.6}{z^2\log^2 c} \cdot \mathcal B^2,
\end{equation}
where
\[
\mathcal B=\log\,\max \bigl\{ {\rm e}^{0.64}(b_1+|X-Y|), \, c^{4z} \bigr\}.
\]
Since $\max\{b_1,|X-Y|\} \le \max\{X,Y\}<\frac{\log c}{\log \min\{a,b\}}\,Z$, one has
\[
\mathcal B \le \log \,\max \biggl\{ \frac{2\,{\rm e}^{0.64} \log c}{\log (c'-1)}\,Z, \, c^{4z} \biggr\}.
\]
Thus \eqref{bound-x1y1-lbound}, \eqref{bound-x1y1-ubound} together lead to
\begin{equation} \label{Z/z}
\left\lfloor \frac{Z}{z} \right\rfloor \le \frac{53.6}{z^2 \log^2 c} \cdot {\mathcal B'}^2,
\end{equation}
where
\[
\mathcal B':=\log\,\max \biggl\{ \frac{2\,{\rm e}^{0.64} \log c}{\log (c'-1)}\,Z, \, c^{4z} \biggr\}.
\]
If $2\,{\rm e}^{0.64}(\log c)Z \le c^{4z}\log (c'-1)$, then \eqref{Z/z} yields
\[
\left\lfloor \frac{Z}{z} \right\rfloor < \frac{53.6}{z^2 \log^2 c} \cdot (4z\log c)^2=857.6,
\]
which leads to the assertion.
While if $2\,{\rm e}^{0.64}(\log c)Z > c^{4z}\log (c'-1)$, then
\[
Z>\frac{c^{4z} \log (c'-1)}{2\,{\rm e}^{0.64} \log c}, \quad \left\lfloor \frac{Z}{z} \right\rfloor \le \frac{53.6}{z^2 \log^2 c} \cdot \log^2 \biggl( \frac{2\,{\rm e}^{0.64} \log c}{ \log (c'-1)}\,Z \biggr).
\]
It is not hard to see that there are only finitely many pairs $(c,z)$ satisfying the above inequalities.
Indeed, either $c=2$ and $z \le 3$, or $c=3$ and $z=2$.
This implies that $(a,b,c)$ is $(5,3,2),(5,4,3)$ or $(7,2,3)$, where the assertion holds by classical results in the literature.
\end{proof}
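The finitely many surviving pairs $(c,z)$ can be checked by a short enumeration; under the standing hypotheses $a>b>1$, $\gcd(ab,c)=1$ and $a,b \equiv \pm 1 \pmod c$, exactly the three listed triples appear:

```python
from math import gcd

# Enumerate a + b = c^z with a > b > 1, gcd(ab, c) = 1 and a, b ≡ ±1 (mod c)
# for the finitely many pairs (c, z) left at the end of the proof.
found = set()
for c, z in [(2, 1), (2, 2), (2, 3), (3, 2)]:
    for b in range(2, c**z // 2 + 1):
        a = c**z - b
        if a > b and gcd(a * b, c) == 1 \
                and a % c in (1, c - 1) and b % c in (1, c - 1):
            found.add((a, b, c))
assert found == {(5, 3, 2), (5, 4, 3), (7, 2, 3)}
```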
\section{Proof of Theorem \ref{th1}} \label{sec-th1}%
In this section, we solve the following system of the equations:
\begin{eqnarray}
&a^x+b^y=c^z, \label{1st-calX}\\ &a^X+b^Y=c^Z, \label{2nd-calX}
\end{eqnarray}
where $a,b,c$ are given positive integers such that each of $a,b$ is congruent to $1$ or $-1$ modulo $c$, and $x,y,z,X,Y,Z$ are unknown positive integers with $(x,y,z) \ne (X,Y,Z)$ and $z \le Z$.
It suffices to consider the case where $a>b$ and $c$ is not a perfect power.
We keep the notation $\delta_a,\delta_b,c',\Delta,C,K_c,\Delta',\ell,\mathcal X$ introduced earlier, and use the results established in the previous sections.
In particular, we will frequently and implicitly rely on inequalities \eqref{trivial-ineqs} and $\min\{a,b\} \ge \ell-1$, together with the following notation:
\[
\tau_b:=\frac{\log c}{\log b}, \quad \tau_\ell:=\frac{\log c}{\log(\ell-1)}, \quad \tau_c:=\frac{\log c}{\log(c'-1)} \ \biggl( \le \frac{\log 3}{\log 2} \biggr).
\]
We begin with giving some restrictions on the solutions to the system of equations \eqref{1st-calX} and \eqref{2nd-calX}.
\begin{lem} \label{Delta-ineqs}
Let $(x,y,z,X,Y,Z)$ be a solution to the system of equations \eqref{1st-calX} and \eqref{2nd-calX}$.$
Then the following hold.
\begin{itemize}
\item[\rm (i)]
$\mathcal X<\tau z$ and $\max\{X,Y\}<\tau Z$ for $\tau \in \{\tau_b,\tau_\ell,\tau_c\}.$
\item[\rm (ii)]
$\Delta<K_c\,z.$
\item[\rm (iii)]
$\Delta<\tau \mathcal X Z$ for $\tau \in \{\tau_b,\tau_\ell,\tau_c\}.$
\end{itemize}
\end{lem}
\begin{proof}
(i) This follows from inequalities \eqref{trivial-ineqs}.\par
(ii) Observe that if $x Y>X y$ then $\Delta=|x Y - X y|<x \cdot Y$, so that
\begin{equation} \label{Delta-ele-upp}
\Delta<\frac{\log c}{\log a}\,z \cdot \frac{\log c}{\log b}\,Z=\frac{\log^2 c}{\log a\,\log b}\,z Z.
\end{equation}
This holds also for $x Y<X y$.
The assertion now easily follows from Lemma \ref{Kc}.\par
(iii) This holds since $\Delta<\mathcal X \max\{X,Y\}$ with (i).
\end{proof}
\begin{lem} \label{Cz-ineqs}
Let $(x,y,z,X,Y,Z)$ be a solution to the system of equations \eqref{1st-calX} and \eqref{2nd-calX}$.$
Then
\[
C^z<K_c\,z\,\gcd(a-\delta_a,b-\delta_b)<K_c\,z\,(c^{z/\mathcal X}+1).
\]
\end{lem}
\begin{proof}
Since $C^z \mid G \cdot \Delta$ by Lemma \ref{div}, where $G:=\gcd(a-\delta_a,b-\delta_b)$, one has $C^z \le G \cdot \Delta$.
On the other hand,
\[
G \le \min\{a-\delta_a,b-\delta_b\} \le \min\{a,b\}+1<c^{z/\mathcal X}+1.
\]
This together with Lemma \ref{Delta-ineqs}\,(ii) yields the asserted inequalities.
\end{proof}
Note that the above lemma can give absolute upper bounds for both $c$ and $z$ only if $\mathcal X>1$; here the premise that the extended multiplicative orders of $a$ and $b$ modulo $c$ equal $1$ is used essentially, as it is in the proofs of Lemmas \ref{sols-calX=1-Z<2z} and \ref{Zge2z} below.
The following lemma corresponds to applying Lemma \ref{complement} to handle the case where $C=c/2$.
\begin{lem}\label{2adic-iota}
Assume that $C=c/2.$
Let $(x,y,z,X,Y,Z)$ be a solution to the system of equations \eqref{1st-calX} and \eqref{2nd-calX}$.$
Then
\[
h \equiv \delta_{h,4} \mod{2^{\iota}}
\]
for each $h \in \{a,b\},$ where $\delta_{h,4}$ is defined as in Lemma $\ref{complement}$ and
\[
\iota=\max \biggl\{2,\,z - \biggl\lfloor \frac{\log (\Delta / \Delta')}{\log 2} \biggr\rfloor \biggr\}.
\]
\end{lem}
\begin{proof}
Lemma \ref{complement} yields $\nu_2(h-\delta_{h,4}) \ge z-\nu_2(\Delta)$.
On the other hand, since $C$ is odd, one finds from the definition of $\Delta'$ that
\[
\nu_2(\Delta) = \nu_2(\Delta/\Delta') \le \frac{\log (\Delta / \Delta')}{\log 2}.
\]
The obtained two inequalities together show the assertion.
\end{proof}
Below we distinguish two cases according to $\mathcal X>1$ or $\mathcal X=1$.
\subsection{Case where $x>1$ or $y>1$} \label{subsec-calX>1}
The aim of this subsection is to prove the following:
\begin{lem} \label{sols-calX>1}
All solutions to the system of equations \eqref{1st-calX} and \eqref{2nd-calX} satisfying $\mathcal X>1$ are given by
\[ (x,y,z,X,Y,Z)=\begin{cases}
\,(1,3,5,3,1,7) & \text{for $(a,b,c)=(5,3,2)$},\\
\,(1,2,2,2,1,3) & \text{for $(a,b,c)=(5,2,3)$}.
\end{cases} \]
\end{lem}
For technical reasons in applying Lemma \ref{Kc} and Proposition \ref{Bu-madic-strong}, and to keep the presentation simple, we distinguish two cases according to whether or not $c$ takes one of a few very small values.
\subsubsection{Case where $c \notin \{2,3,5,6,7,10,14\}$}
\label{subsec-calX>1-clarge}
We perform an algorithm consisting of the following three steps to sieve all possible cases of the system of equations \eqref{1st-calX} and \eqref{2nd-calX}; the other three programs are based on it.
The computational time was less than 20 minutes.
\noindent \vskip.2cm
\noindent {\bf Step 1.}
{\it Find all possible pairs $(z,\mathcal X)$ with a corresponding upper bound for $c.$}\par
Observe that $c \ge 11$ and $C/\sqrt{c} \ge 9/\sqrt{18}\,(>1)$.
Since $\mathcal X \ge 2$, the second inequality in Lemma \ref{Cz-ineqs} yields an absolute upper bound for $z$, namely, $z \le 13$.
Thus $\mathcal X$ is also finite as $\mathcal X<\tau_c\,z$ by Lemma \ref{Delta-ineqs}\,(i).
Moreover, since $\lim_{c \to \infty}C/\sqrt{c}=\infty$ by \eqref{essential}, an upper bound for $c$ corresponding to each of all possible pairs $(z,\mathcal X)$ can be found, say $c_1$.
Put the found triples $[z,\mathcal X,c_1]$ into a list, say $list1$, which contains 25 elements.
\noindent \vskip.2cm
\noindent {\bf Step 2.}
{\it Find all possible numbers $a,b,c,x,y,z,\Delta'.$}\par
First, for each element in $list1$, take any possible $c$ at most $c_1$.
Second, take any possible $\Delta'$ satisfying
\[
\Delta' < \Delta_u, \quad \Delta' \mid C^z,
\]
where $\Delta_u:=K_c\,z$.
The above inequality follows from Lemma \ref{Delta-ineqs}\,(ii) as $\Delta' \le \Delta$.
Third, for each possible $(\delta_a,\delta_b)$ restricted by Lemma \ref{coprime}\,(i), take any possible $b$ satisfying congruence \eqref{cong-ell} and $b \le b_1$, where $b_1:=\lfloor c^{z/\mathcal X}\rfloor$.
Here \eqref{cong-ell} is a key sieving relation.
Fourth, after checking a restriction on the size of $\mathcal X$ from Lemma \ref{Delta-ineqs}\,(i), take any possible $x$ and $y$ satisfying $\gcd(x,y)=1$ by Lemma \ref{coprime}\,(ii), check whether $c^z-b^y$ is an $x$-th power, and if so put $a:=(c^z-b^y)^{1/x}$.
Finally, if $a$ and $\Delta'$ satisfy suitable conditions including the first inequality in Lemma \ref{Cz-ineqs}, put the tuple $[a,b,c,x,y,z,\Delta']$ into a new list.
Through this procedure on the case where $C=c/2$, further use the following congruence in Lemma \ref{2adic-iota}:
\begin{equation} \label{eff-sieve}
h \equiv \delta_{h,4} \mod{2^{\iota}}
\end{equation}
for each $h \in \{a,b\}$, where
\[
\iota=\max \biggl\{2,\,z - \biggl\lfloor \frac{\log (\Delta_1 / \Delta')}{\log 2} \biggr\rfloor \biggr\}
\]
with $\Delta_1$ any upper bound for $\Delta$.
The above congruence provides an efficient sieve (available only when $C=c/2$).
The resulting list, say $list2$, contains 3026 elements.
The program for this step on the case where $C=c/2$ is as follows:
\vspace{0.2cm}{\tt
begin
\hskip.2cm for each element in list1 do
\vskip.1cm
\hskip.2cm for $c:=11$ to $c_1$ do
\vskip.1cm
\hskip.2cm if $\text{IsPower}(c)=\text{false}$ and $c \ne 14$ then
\vskip.1cm
\hskip.2cm Create the set $H:=\{D_v: D_v < \Delta_u \text{ and } C^z \text{ mod } D_v =0\}$
\vskip.1cm
\hskip.2cm for $D_v$ in $H$ do
\vskip.1cm
\hskip.2cm $\ell:=\text{lcm}(c',C^z \text{ div } D_v)$;
\vskip.1cm
\hskip.2cm for $s$ in $[\,[1,-1],[-1,1],[-1,-1]\,]$ do
\vskip.1cm
\hskip.2cm $\delta_a:=s[1]$; \ $\delta_b:=s[2]$; \ $b_0:=\delta_b$;
\vskip.1cm
\hskip.2cm for $s_b$ in $[1,-1]$ do
\vskip.1cm
\hskip.2cm By the Chinese Remainder Theorem, calculate the least
\vskip.1cm
\hskip.2cm positive integer $b_0$ satisfying
\vskip.1cm
\hskip.2cm \hspace{2cm}$b_0 \equiv \delta_b \pmod{\ell}, \ b_0 \equiv s_b \pmod{2^\iota} $
\vskip.1cm
\hskip.2cm with $\Delta_1=\Delta_u$;
\vskip.1cm
\hskip.2cm for $b:=b_0$ to $b_1$ by $\ell$ do
\vskip.1cm
\hskip.2cm if $b >1$ and $\mathcal X<\tau_b\,z$
then
\vskip.1cm
\hskip.2cm for $x:=1$ to $\text{min}(\,\mathcal X,\,\text{floor}(\,z\,(\log c)/\log (b+1)\,)\,)$ do
\vskip.1cm
\hskip.2cm for $y:=1$ to $\mathcal X$ do
\vskip.1cm
\hskip.2cm if $\text{max}(x,y)=\mathcal X$ and $\text{gcd}(x,y)=1$ then
\vskip.1cm
\hskip.2cm if IsIntegral$(\,(c^z-b^y)^{1/x}\,)$ = true then
\vskip.1cm
\hskip.2cm $a:=(c^z-b^y)^{1/x}$;
\vskip.1cm
\hskip.2cm if $a>b$ and $(a-\delta_a)$ mod $\ell=0$
\vskip.1cm
\hskip.2cm and $C^z<\Delta_u\,\text{gcd}(a-\delta_a,b-\delta_b)$ then
\vskip.1cm
\hskip.2cm Sieve with \eqref{eff-sieve} with $h=a$ and $\Delta_1=\Delta_u$
\vskip.1cm
\hskip.2cm $\Delta':=D_v$ and put $[a,\delta_a,b,\delta_b,c,x,y,z,\Delta']$ into $list2$
end}
\vskip.3cm
In the above program the for-loop on $s_b$ composed of 5 lines is omitted when $C \ne c/2$.
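The Chinese Remainder computation of $b_0$ in the program can be made explicit. A sketch handling possibly non-coprime moduli (as happens when $\ell$ is even), returning the least positive solution or reporting incompatibility:

```python
from math import gcd

def crt_pair(r1, m1, r2, m2):
    """Least positive b0 with b0 ≡ r1 (mod m1) and b0 ≡ r2 (mod m2),
    or None if the congruences are incompatible (moduli may share factors)."""
    g = gcd(m1, m2)
    if (r1 - r2) % g != 0:
        return None
    l = m1 // g * m2  # lcm(m1, m2)
    # solve r1 + m1*t ≡ r2 (mod m2); gcd(m1/g, m2/g) = 1, so the inverse exists
    t = ((r2 - r1) // g * pow(m1 // g, -1, m2 // g)) % (m2 // g)
    b0 = (r1 + m1 * t) % l
    return b0 if b0 > 0 else l

# b0 ≡ -1 (mod 6) and b0 ≡ 1 (mod 8):
assert crt_pair(-1, 6, 1, 8) == 17
```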
\noindent \vskip.3cm
\noindent {\bf Step 3.}
{\it Find all possible numbers $a,b,c,x,y,z,X,Y,Z.$}\par
Use the following restrictions:
\begin{gather}
{\delta_a}^X=-{\delta_b}^Y, \ \gcd(X,Y)=1,\label{alg-calX>1-csmall-step3-1}\\
{\delta_a}^{X-1}(a-\delta_a)X+{\delta_b}^{Y-1}(b-\delta_b)Y \equiv 0 \mod{c^2}, \label{alg-calX>1-csmall-step3-2}\\
z \le \alpha+\nu_2(\Delta) \ \ \text{if $C=c/2$}, \label{alfa}
\end{gather}
where $\alpha:=\min\{ \nu_2(a^2-1), \nu_2(b^2-1)\}-1$.
These follow from Lemma \ref{coprime}, reducing equation \eqref{2nd-calX} modulo $c^2$ (cf.~proof of Lemma \ref{Zge2z} below) and Lemma \ref{complement}, respectively.
First, for each element in $list2$, take any possible $X$ by using the upper bound $Z_1$ for $Z$ from Lemma \ref{Kc}, where $Z_1:=\lfloor K_c(\log a)(\log b)/\log^2 c \rfloor$.
Second, take any possible product of $x$ and $Y$.
If the difference between its value and $X \cdot y$ satisfies suitable conditions, then define $Y$ suitably.
Third, sieve with \eqref{alg-calX>1-csmall-step3-1}, \eqref{alg-calX>1-csmall-step3-2}.
Fourth, define $\Delta$ suitably, and sieve with \eqref{alfa} and the definition of $\Delta'$.
Finally, check whether $a^X+b^Y$ is a power of $c$ and find $Z$.
The program for this step is as follows:
\vspace{0.2cm}{\tt
begin
\hskip.2cm for each element in list2 do
\vskip.1cm
\hskip.2cm $Xu:=\text{floor}(Z_1 (\log c)/\log a)$; \ $Yu:=\text{floor}(\tau_b Z_1)$;
\vskip.1cm
\hskip.2cm for $X:=1$ to $Xu$ do
\vskip.1cm
\hskip.2cm for $x Y:=x$ to $x \cdot Yu$ by $x$ do
\vskip.1cm
\hskip.2cm if $xY-X \cdot y \ne 0$ and $(xY-X \cdot y)$ mod $\Delta'=0$ then
\vskip.1cm
\hskip.2cm $Y:=xY$ div $x$;
\vskip.1cm
\hskip.2cm sieve with \eqref{alg-calX>1-csmall-step3-1} and \eqref{alg-calX>1-csmall-step3-2}
\vskip.1cm
\hskip.2cm $\Delta:=\text{abs}(xY - X \cdot y)$;
\vskip.1cm
\hskip.2cm sieve with \eqref{alfa}
\vskip.1cm
\hskip.2cm if $\text{gcd}(\Delta,C^z)=\Delta'$ then
\vskip.1cm
\hskip.2cm $W:=a^X+b^Y$; $i:=0$; repeat;
\vskip.1cm
\hskip.2cm if $W$ mod $c$ $=0$ then
\vskip.1cm
\hskip.2cm $W:=W$ div $c$; $i:=i+1$;
\vskip.1cm
\hskip.2cm until $W$ mod $c$ $\ne 0$;
\vskip.1cm
\hskip.2cm if $W=1$ then $Z:=i$; print $[a,b,c,x,y,z,X,Y,Z]$
\vskip.1cm
end}
\vskip.3cm
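The repeat-until loop at the end of the program extracts the $c$-part of $W=a^X+b^Y$ and tests whether $W$ is an exact power of $c$; in Python it amounts to the following (checked on the known double solution):

```python
# Compute Z with a^X + b^Y = c^Z, or report failure, mirroring the
# repeat-until loop in the program above.
def c_exponent(a, X, b, Y, c):
    W, i = a**X + b**Y, 0
    while W % c == 0:
        W, i = W // c, i + 1
    return i if W == 1 else None   # None: a^X + b^Y is not a power of c

assert c_exponent(5, 3, 3, 1, 2) == 7     # 5^3 + 3 = 128 = 2^7
assert c_exponent(5, 1, 3, 3, 2) == 5     # 5 + 27 = 32 = 2^5
assert c_exponent(5, 2, 3, 2, 2) is None  # 34 is not a power of 2
```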
It turns out that there is no output, and this completes the proof of Lemma \ref{sols-calX>1} for the values of $c$ under consideration.
\subsubsection{Case where $c \in \{2,3,5,6,7,10,14\}$}
\label{subsec-calX>1-csmall}
Note that $C$ is a prime, so that $\Delta'=C^t$ for some integer $t$ with $0 \le t \le z$, and $\ell=\lcm(c',C^{z-t})$.
We perform the algorithm below consisting of the following four steps.
The computational time was less than 5 hours.
\noindent \vskip.2cm
\noindent {\bf Step 1.}
{\it For each $c,$ find all possible pairs $(z,\mathcal X).$}\par
This step is basically the same as Step 1 of Section \ref{subsec-calX>1-clarge}.
Since $C/\sqrt{c}>1$ for each $c$, and $\mathcal X \ge 2$, the second inequality of Lemma \ref{Cz-ineqs} yields an absolute upper bound for $z$, so that $\mathcal X$ is also finite as $\mathcal X<\tau_c\,z$.
Put the found triples $[c,z,\mathcal X]$ into a list, say $list1$, which contains 526 elements.
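To illustrate this step for a single value of $c$: with $\mathcal X \ge 2$, the second inequality of Lemma \ref{Cz-ineqs} reads $C^z < K_c\,z\,(c^{z/2}+1)$, and for $c=2$ (assuming $C=2$ and the tabulated value $K_2=13100$) the largest admissible $z$ can be found directly; the precise element counts in the text of course depend on the full sieve:

```python
# Largest z with C^z < K_c * z * (c^(z/2) + 1) for c = 2, i.e. the z-bound
# used in building list1 (assumptions: C = 2 and K_2 = 13100 from Lemma Kc).
c, C, Kc = 2, 2, 13100
z = 1
while C ** (z + 1) < Kc * (z + 1) * (c ** ((z + 1) / 2) + 1):
    z += 1
assert z == 37
```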
\noindent \vskip.2cm
\noindent {\bf Step 1/a.}
{\it For each element in $list1,$ find all possible $t.$}\par
For each element in $list1$, take any $t$ with $0 \le t \le z$ satisfying
\[
{\rm lcm}(c',C^{z-t})-1 \le b_1, \quad C^t<K_c\,z,
\]
where $b_1$ is defined as in Step 2 of Section \ref{subsec-calX>1-clarge}.
The above first inequality holds since $b \ge \ell-1$, and the second one is by Lemma \ref{Delta-ineqs}\,(ii) as $C^t=\Delta'$.
Put the found quadruples $[c,z,\mathcal X,t]$ into a new list, say $list1a$, which contains 1322 elements.
\noindent \vskip.2cm
\noindent {\bf Step 1/b.}
{\it For each element in $list1a,$ find an upper bound for $Z.$}\par
Take any element in $list1a$.
In this step, in order to find an upper bound for $Z$ sharper than $Z_1:=\lfloor K_c\,z^2/\mathcal X \rfloor$ given by Lemma \ref{Kc}, apply Lemma \ref{strong-applied} with $Z_u:=Z_1$ as follows.
Put $M$ as
\[ M:=\begin{cases}
\,4 & \text{if $t \ge z-1$ and $c=2$},\\
\,C & \text{if $t \ge z-1$ and $c>2$},\\
\,C^{z-t} & \text{if $t<z-1$}.
\end{cases}\]
These choices correspond to cases (C1),(C2), (C3) respectively.
After defining $a_1=a_1(c,z,M),a_2=a_2(c,z,M,\mathcal X)$ as in Lemma \ref{strong-applied}, for suitable $k$ and $L$, use all other notation in that lemma.
Here find the pair $(k,L)$ in the following way.
First, for each pair $(k_0,L_0)$ of integers satisfying $1 \le k_0 \le 60$ and $2 \le L_0 \le 35$, put $k:=k_0/15$ and $L:=L_0$ and check whether inequalities $K \ge 3$ and $f_0>f_1+f_2+f_3+f_4$ hold.
Among all such suitable pairs, take one for which the upper bound for $Z$ obtained from Lemma \ref{strong-applied} becomes the least, and redefine $Z_u$ by the value found in this way.
Iterate this procedure three times, and let $Z_1$ be the resulting $Z_u$.
Finally sieve with the inequality $C^t<\tau_\ell \mathcal X Z_1$ by Lemma \ref{Delta-ineqs}\,(iii).
Put the found tuples $[c,z,\mathcal X,t,Z_1]$ into a new list, say $list1b$, which contains 700 elements.
\vskip.2cm \noindent {\bf Step 2.}
{\it Find all possible numbers $a,b,c,x,y,z,X,Y,Z.$}\par
For each element in $list1b$, perform the same algorithms as in Steps 2 \& 3 of Section \ref{subsec-calX>1-clarge}, with only one natural modification.
Namely, for generating $list2$ the definition of $D_v$ is simply replaced by $D_v:=C^t$.
The final output coincides with the solutions described in Lemma \ref{sols-calX>1}, and this completes the proof of Theorem \ref{th1} for the case where $\mathcal X>1$.
\subsection{Case where $x=1$ and $y=1$}
Here we examine the system of equations \eqref{1st-calX} and \eqref{2nd-calX} with $(x,y)=(1,1)$, that is,
\begin{eqnarray}
&a+b=c^z, \label{1st-calX=1}\\ &a^X+b^Y=c^Z. \label{2nd-calX=1}
\end{eqnarray}
The notation used in Section \ref{subsec-calX>1} is also used below.
The aim of this and the next subsections is to prove the following:
\begin{lem} \label{sols-calX=1}
All solutions to the system of equations \eqref{1st-calX=1} and \eqref{2nd-calX=1} are given by
\[ (z,X,Y,Z) = \begin{cases}
\,(3,1,3,5),(3,3,1,7) & \text{for $(a,b,c)=(5,3,2)$},\\
\,(4,1,5,8) & \text{for $(a,b,c)=(13,3,2)$}, \\
\,(2,2,5,4) & \text{for $(a,b,c)=(7,2,3)$}.
\end{cases} \]
\end{lem}
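All tuples listed in Lemma \ref{sols-calX=1} are easily verified by direct computation, for instance with the following Python check (the tuples are copied from the statement):

```python
# Each entry is ((a, b, c), (z, X, Y, Z)); check a + b = c^z and a^X + b^Y = c^Z.
solutions = [
    ((5, 3, 2), (3, 1, 3, 5)),
    ((5, 3, 2), (3, 3, 1, 7)),
    ((13, 3, 2), (4, 1, 5, 8)),
    ((7, 2, 3), (2, 2, 5, 4)),
]
for (a, b, c), (z, X, Y, Z) in solutions:
    assert a + b == c**z, (a, b, c, z)
    assert a**X + b**Y == c**Z, (a, b, c, X, Y, Z)
```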
Firstly, we finish the case where $Z<2z$.
\begin{lem} \label{sols-calX=1-Z<2z}
The only solution to the system of equations \eqref{1st-calX=1} and \eqref{2nd-calX=1} satisfying $Z<2z$ is given by $(z,X,Y,Z)=(3,1,3,5)$ for $(a,b,c)=(5,3,2).$
\end{lem}
\begin{proof}
Note that $z>1$ as it is clear that $z<Z$.
From equations \eqref{1st-calX=1} and \eqref{2nd-calX=1},
\begin{equation} \label{zZ}
a^X+b^Y=\frac{(a+b)^2}{c^{2z-Z}}.
\end{equation}
This yields that $a^X<a^X+b^Y<4a^2/c^{2z-Z}$, whence
\begin{equation} \label{zZ2}
a^{X-2}c^{2z-Z}<4
\end{equation}
with $2z>Z$.
In particular, $X \le 2$.
Suppose that $X=2$.
Then $c \le 3$ and $Z=2z-1$ by \eqref{zZ2}, thereby \eqref{zZ} becomes $(a^2+b^Y) c=(a+b)^2$.
Reducing this equation modulo $b$ implies that $c \equiv 1 \pmod{b}$, so that $(b,c)=(2,3)$.
However, this equation does not hold, since $3(a^2+2^Y)>(a+2)^2$.
Thus $X=1$.
Eliminating $a$ from equations \eqref{1st-calX=1}, \eqref{2nd-calX=1} yields
\begin{equation}\label{bYcZz}
b^Y-b=c^Z-c^z.
\end{equation}
Note that $Y \ge 3$ as $b^{Y-1} \equiv 1 \pmod{c^z}$ with $Y>1$ and $c^z=a+b>b$.
Since $2z>Z$, and $b \ge C^z/\Delta'-1$ by congruence \eqref{cong-ell}, it follows that
\[
c^{2z-1} \ge c^Z>b^Y \ge \biggl( \frac{C^z}{\Delta'}-1 \biggr)^Y,
\]
so that $C^z < \Delta'(c^{(2z-1)/Y}+1)$.
Since $\Delta' \le \Delta=Y-1$, one has
\[
C^z <(Y-1)(c^{(2/Y)z-1/Y}+1).
\]
Suppose that $Y \ge 4$.
Recall that $C>\sqrt{c}$ by \eqref{essential}.
The above displayed inequality, together with the inequalities $4 \le Y<\tau_c Z$ and $Z<2z$, implies that $(c,z,Y) \in \{(3,2,4),(6,3,4),(6,3,5)\}$.
For each of these triples equation \eqref{bYcZz} does not hold for any possible $b,Z$ with $z<Z<2z$.
Finally, we shall examine the case where $Y=3$, where \eqref{bYcZz} is
\begin{equation}\label{bYcZz-2}
b(b^2-1)=c^z(c^{Z-z}-1).
\end{equation}
Since $b^2=1+K c^z$ for some integer $K \ge 1$, one has $b K=c^{Z-z}-1$.
Noting that $z \ge Z-z$, one reduces this equation modulo $c^{Z-z}$ and squares the resulting one to see that $K^2 \equiv 1 \pmod{c^{Z-z}}$.
If $K=1$, then $b^2=1+c^z$ and $b=c^{Z-z}-1$.
It is not hard to see that these two equations together lead to $(b,c,z,Z)=(3,2,3,5)$, whence $a=c^z-b=5$.
Suppose that $K>1$.
Since $K^2 \equiv 1 \pmod{c^{Z-z}}$, it follows that $K>\sqrt{c^{Z-z}}$.
Therefore,
\[
b=\sqrt{1+K c^z}>\sqrt{K \cdot c^z}>c^{\frac{Z+z}{4}} \ge c^{\frac{3Z+1}{8}}.
\]
On the other hand, from \eqref{bYcZz-2},
\[
b^3=c^Z \cdot \frac{1-1/c^{Z-z}}{1-1/b^2} <c^Z \cdot \frac{4}{3}.
\]
These inequalities together imply that $c^{Z+3}<(4/3)^8\,(<10)$, which clearly does not hold.
\end{proof}
\subsection{Case where $Z \ge 2z$} \label{subsec-Zge2z}
Here we examine the case where $Z \ge 2z$.
By Lemma \ref{coprime}\,(i), we can write $\delta=\delta_a=-\delta_b$ for some $\delta \in \{1,-1\}$.
In what follows, we put
\[ \mathcal D:=\frac{C^z}{\Delta'}. \]
From equation \eqref{1st-calX=1} and congruence \eqref{cong-ell}, numbers $a$ and $b$ can be written as follows:
\begin{equation} \label{ab-form}
a=A \cdot \mathcal D+\delta, \quad b=B \cdot \mathcal D-\delta,
\end{equation}
where $A$ and $B$ are some positive integers satisfying
\[ A+B=\frac{c^z}{\mathcal D}.\]
One substitutes the forms of $a$ and $b$ in \eqref{ab-form} into equation \eqref{2nd-calX=1}:
\begin{equation} \label{ABcalDcZ}
(A \cdot \mathcal D+\delta)^X+(B \cdot \mathcal D-\delta)^Y=c^Z.
\end{equation}
\begin{lem}\label{Zge2z}
Let $(z,X,Y,Z)$ be a solution to the system of equations \eqref{1st-calX=1} and \eqref{2nd-calX=1} satisfying $Z \ge 2z.$
Then
\[ (C/\sqrt{c})^z \le {\Delta'} \sqrt{ \max\{X,Y\}}.\]
Further, $AX+BY \equiv 0 \pmod{\mathcal D}.$
\end{lem}
\begin{proof}
Since $c^Z$ is divisible by $\mathcal D^2$ as $Z \ge 2z$, one reduces equation \eqref{ABcalDcZ} modulo $\mathcal D^2$ to find that
\[ a_0+a_1\mathcal D \equiv 0 \mod{\mathcal D^2},\]
where $a_0=\delta^X+(-\delta)^Y$ and $a_1=\delta^{X-1} AX+(-\delta)^{Y-1} BY$.
Since $a_0=0$ by Lemma \ref{coprime}\,(i), one has $(-\delta)^{Y-1}=(-\delta)^Y/(-\delta)=-\delta^X/(-\delta)=\delta^{X-1}$, so that $a_1=\delta^{X-1} (AX+BY)$.
These together show that $a_1$ is divisible by $\mathcal D$, in particular,
\[ AX+BY \ge \mathcal D.\]
On the other hand, by \eqref{ab-form},
\[ AX+BY \le \max\{X,Y\}(A+B)=\max\{X,Y\} \cdot \frac{c^z}{\mathcal D}.\]
These obtained bounds for $AX+BY$ together yield
\[ \frac{\mathcal D^2}{c^z} \le \max\{X,Y\}, \]
which is equivalent to the asserted inequality.
\end{proof}
Below, we distinguish two cases in the same way as in the case where $\mathcal X>1$.
\subsubsection{Case where $c \not \in \{2,3,5,6,7,10,14\}$} \label{subsec-calX=1-clarge}
We perform the algorithm consisting of the following three steps to sieve all possible cases of the system of equations \eqref{1st-calX=1} and \eqref{2nd-calX=1}.
Since it is similar to that of Section \ref{subsec-calX>1-clarge}, we mainly indicate the places where each step differs.
The computational time was about 20 days.
\noindent \vskip.2cm
\noindent {\bf Step 1.}
{\it Find all possible $z$ with a corresponding upper bound for $c.$}\par
Adopt $Z_1$ as the upper bound from Lemma \ref{bound-x1y1}, namely, $Z_1:=\lfloor 858.6\,z \rfloor$.
Use the inequality
\[ (C/\sqrt{c})^z <(\tau_c Z_1)^{3/2}. \]
This follows by Lemma \ref{Zge2z} since $\Delta' \le \Delta=|X-Y|<\max\{X,Y\} <\tau_c Z$.
It yields an absolute upper bound for $z$, namely, $z \le 19$.
Moreover, since $\lim_{c \to \infty}C/\sqrt{c}=\infty$ by \eqref{essential}, an upper bound for $c$ corresponding to each of $z$ can be found, say $c_1$.
Put the found pairs $[z,c_1]$ into a list, say $list1$, which contains 18 elements.
\noindent \vskip.2cm
\noindent {\bf Step 2.}
{\it Find all possible numbers $a,b,c,z,\Delta'.$}\par
Set $\Delta_u:=\lfloor \tau_c Z_1 \rfloor, b_1:=\lfloor {c^z}/2 \rfloor$ and $\Delta_l:=\lceil (C/\sqrt{c})^z/ \sqrt{\tau_c Z_1}\, \rceil$.
Further, to restrict the size of $D_v$, use the inequalities
\[
\Delta'<\tau Z_1, \quad (C/\sqrt{c})^z < {\Delta'} \sqrt{\tau Z_1}
\]
for $\tau \in \{\tau_b,\tau_c\}$.
The second inequality easily follows from Lemma \ref{Zge2z}.
Put the found tuples $[a,b,c,z,\Delta']$ into a new list, say $list2$, which contains about 50 million elements.
The program for this step for the case where $C=c/2$ is as follows:
\vspace{0.2cm}{\tt
begin
\hskip.2cm for each element in list1 do
\vskip.1cm
\hskip.2cm for $c:=11$ to $c_1$ do
\vskip.1cm
\hskip.2cm if $\text{IsPower}(c)=\text{false}$ and $c \ne 14$ then
\vskip.1cm
\hskip.2cm Create the set
\vskip.1cm
\hskip.2cm $H:=\{D_v: \Delta_l \le D_v < \Delta_u \text{ and } C^z \text{ mod } D_v=0\}$
\vskip.1cm
\hskip.2cm for $D_v$ in $H$ do
\vskip.1cm
\hskip.2cm $\ell:=\text{lcm}(c',C^z \text{ div } D_v)$;
\vskip.1cm
\hskip.2cm for $s$ in $[1,-1]$ do
\vskip.1cm
\hskip.2cm $\delta_a:=s$; \ $\delta_b:=-s$; \ $b_0:=\delta_b$;
\vskip.1cm
\hskip.2cm for $s_b$ in $[1,-1]$ do
\vskip.1cm
\hskip.2cm By the Chinese Remainder Theorem, calculate the least
\vskip.1cm
\hskip.2cm positive integer $b_0$ satisfying
\vskip.1cm
\hskip.2cm \hspace{2cm}$b_0 \equiv \delta_b \pmod{\ell}, \ b_0 \equiv s_b \pmod{2^\iota} $
\vskip.1cm
\hskip.2cm with $\Delta_1=\tau_c Z_1$;
\vskip.1cm
\hskip.2cm for $b:=b_0$ to $b_1$ by $\ell$ do
\vskip.1cm
\hskip.2cm if $b>1$ and $(C/\sqrt{c})^z/\sqrt{\tau_b Z_1}<D_v<\tau_b Z_1$ then
\vskip.1cm
\hskip.2cm Sieve with \eqref{eff-sieve} with $h=b$ and $\Delta_1=\tau_b Z_1$
\vskip.1cm
\hskip.2cm $a:=c^z-b$; \ $\Delta':=D_v$; \ put $[a,\delta_a,b,\delta_b,c,z,\Delta']$ into $list2$
end}
\vskip.3cm
In the above program the for-loop on $s_b$ is omitted when $C \ne c/2$.
Distinguishing these cases seems unavoidable in order to keep the number of elements in $list2$, which is already huge, under control.
The most time consuming part was the case with $z=2$.
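The Chinese-Remainder step inside the above loop, which computes the least positive integer $b_0$ from the two congruences, can be made explicit; a minimal Python sketch, assuming (as the CRT requires) that the two moduli are coprime:

```python
from math import gcd

def crt_least_positive(r1, m1, r2, m2):
    """Least positive b0 with b0 ≡ r1 (mod m1) and b0 ≡ r2 (mod m2),
    for coprime moduli m1, m2; residues may be negative (e.g. r1 = -1)."""
    assert gcd(m1, m2) == 1
    u = pow(m1, -1, m2)                           # inverse of m1 modulo m2
    b0 = (r1 + m1 * (((r2 - r1) * u) % m2)) % (m1 * m2)
    return b0 if b0 > 0 else b0 + m1 * m2
```

For instance, $b_0 \equiv -1 \pmod{15}$ and $b_0 \equiv 1 \pmod{8}$ give $b_0=89$.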
\noindent \vskip.3cm
\noindent {\bf Step 3.}
{\it Find all possible numbers $a,b,c,z,X,Y,Z.$}\par
This is basically the same as Step 3 of Section \ref{subsec-calX>1-clarge}, applied to the elements in $list2$.
It turns out that there is no output, and this completes the proof of Lemma \ref{sols-calX=1} for the values of $c$ under consideration.
\subsubsection{Case where $c \in \{2,3,5,6,7,10,14\}$} \label{subsec-calX=1-csmall}
We perform the algorithm consisting of the following several steps, where the notation in Section \ref{subsec-calX=1-clarge} is used.
It is basically similar to that of Section \ref{subsec-calX=1-clarge}.
The computational time was about 1 hour.
\noindent \vskip.2cm
\noindent {\bf Step 1.}
{\it For each $c,$ find all possible $z.$}\par
For each $c$, find all $z$ satisfying the inequality
\[
(C/\sqrt{c})^z<(\tau_c Z_1)^{3/2},
\]
where $Z_1$ is defined as in Step 1 of Section \ref{subsec-calX=1-clarge}.
Put the found pairs $[c,z]$ into a new list, say $list1$, which contains 235 elements.
\noindent \vskip.2cm
\noindent {\bf Step 1/a.}
{\it For each element in $list1,$ find all possible $t.$}\par
This is similar to Step 1/a of Section \ref{subsec-calX>1-csmall}.
Find all $t$ satisfying
\begin{equation}\label{ineq-calX=1-csmall-1/b}
\ell-1 \le b_1, \ \ C^t<\tau_{\ell} Z_1, \ \ (C/\sqrt{c})^z < C^t \sqrt{\tau_{\ell} Z_1},
\end{equation}
where $b_1$ is defined as in Step 2 of Section \ref{subsec-calX=1-clarge}.
Put the found triples $[c,z,t]$ into a new list, say $list1/a$, which contains 629 elements.
\noindent \vskip.2cm
\noindent {\bf Step 1/b.}
{\it For each element in $list1/a,$ find an upper bound for $Z.$}\par
Perform the same algorithm as in Step 1/b of Section \ref{subsec-calX>1-csmall} with $Z_u:=Z_1$, and sieve with the last two inequalities in \eqref{ineq-calX=1-csmall-1/b}.
Put the found quadruples $[c,z,t,Z_1]$ into a new list, say $list1/b$, which contains 351 elements.
\vskip.2cm \noindent {\bf Step 2.}
{\it Find all possible numbers $a,b,c,z,X,Y,Z.$}\par
For each element in $list1/b$, perform the same algorithms as in Steps 2 \& 3 of Section \ref{subsec-calX=1-clarge}, with the definition of $D_v$ replaced by $D_v:=C^t$.
The output, together with Lemma \ref{sols-calX=1-Z<2z}, yields the solutions described in Lemma \ref{sols-calX=1}.
\vskip.3cm
This completes the proof of Theorem \ref{th1}.
\vskip.3cm We finish this section by giving one application of Corollary \ref{coro1}, concerning an unsolved problem posed by Je\'smanowicz on Pythagorean triples (for this topic see for example \cite{Mi_aa18}).
This is regarded as an analogue to \cite[Theorems 1.2 and 1.3]{FuMi_colloq12} and \cite[Theorems 1 and 2]{MiYuWu} in a reciprocal sense between modulus and residue.
\begin{cor}
Let $\{a,b,c\}$ be a primitive Pythagorean triple such that $a^2+b^2=c^2.$
If $a$ or $b$ is congruent to $1$ or $-1$ modulo $c,$ then the Sierpi\'nski-Je\'smanowicz conjecture is true, namely, equation \eqref{abc} has only the trivial solution.
\end{cor}
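To illustrate the scope of the hypothesis, one can enumerate small primitive triples via Euclid's parametrization and test the congruence condition; a short Python sketch:

```python
from math import gcd

def primitive_triples(limit):
    """Generate primitive Pythagorean triples (a, b, c) with c <= limit,
    via Euclid's parametrization a = m^2 - n^2, b = 2mn, c = m^2 + n^2."""
    for m in range(2, int(limit**0.5) + 1):
        for n in range(1, m):
            if (m - n) % 2 == 1 and gcd(m, n) == 1:
                a, b, c = m*m - n*n, 2*m*n, m*m + n*n
                if c <= limit:
                    yield a, b, c

# Triples to which the corollary applies: a or b ≡ ±1 (mod c).
covered = [t for t in primitive_triples(100)
           if any(h % t[2] in (1, t[2] - 1) for h in t[:2])]
```

Among the triples with $c \le 100$, exactly those with $m=n+1$, such as $(3,4,5)$ and $(5,12,13)$, satisfy the condition, since then $2mn \equiv -(m-n)^2 = -1 \pmod{m^2+n^2}$.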
\section{Proof of Theorem \ref{th2}} \label{sec-th2}%
The proof proceeds along similar lines to that of Theorem \ref{th1}.
Thus, below we mainly discuss the places where it differs.
Put $P:=\prod_{p \in S}p$.
Then $P$ is a divisor of $c$, and $M_S=P$ or $4P$ according to cases (I) or (II).
Note that we are in the case where $c$ is even if $S$ is empty, and recall that
\begin{equation}\label{cS-bound}
c_S > \sqrt{c}.
\end{equation}
By Lemma \ref{weakform}\,(i,ii) for $d=P$, we may assume for case (I) that each of $a$ and $b$ is congruent to $1$ or $-1$ modulo $M_S$.
The same holds for case (II), except when $c$ is not divisible by 4, in which case $4P$ is not a divisor of $c$.
In this exceptional case, $c[S]=\frac{1}{2}c[S] \cdot 2 \ge \frac{1}{2}c[S \cup \{2\}]=c_S>\sqrt{c}$ by \eqref{cS-bound}, so that this case is reduced to case (I).
If $S$ is non-empty, then, since $P>2$, for each $h \in \{a,b\}$, we can uniquely define $\delta_h \in \{1,-1\}$ by the following congruence:
\begin{eqnarray}\label{ass-cong-th3}
h \equiv \delta_h \mod{P}.
\end{eqnarray}
When $c$ is even, since $h$ is odd, we can also uniquely define $\delta_{h,4} \in \{1,-1\}$ by the following congruence:
\begin{eqnarray}\label{ass-cong-mod4-th3}
h \equiv \delta_{h,4} \mod 4.
\end{eqnarray}
Note that $\delta_{h,4}=\delta_h$ in case (II).
We begin with the following lemma.
\begin{lem}\label{coprime-th3}
Let $(x,y,z)$ be a solution to equation $\eqref{abc}.$
Then the following hold.
\begin{itemize}
\item[\rm (i)]
$x$ or $y$ is odd.
\item[\rm (ii)]
$x,y$ and $P$ are relatively prime.
\end{itemize}
\end{lem}
\begin{proof}
(i) First consider the case where $S$ is non-empty.
By \eqref{ass-cong-th3} one reduces equation \eqref{abc} modulo $P$ to find that ${\delta_a}^x \equiv -{\delta_b}^y \pmod{P}$.
Thus ${\delta_a}^x=-{\delta_b}^y$, implying the assertion.
Finally, suppose that $S$ is empty and $c$ is even.
Similarly to the previous case, one can obtain the assertion by using \eqref{ass-cong-mod4-th3} and reducing equation \eqref{abc} modulo $4$, whenever $c^z \equiv 0 \pmod{4}$.
If $c^z \not\equiv 0 \pmod{4}$, then $2 \parallel c$ and $z=1$, so that $c=2$ since $\sqrt{c}<c_S \le c[2]=2$ by \eqref{cS-bound}, which clearly contradicts equation \eqref{abc}.\par
(ii) By (i) suppose that $x,y$ have some common odd prime factor belonging to $S$, say $p$.
We shall use the notation in the proof of Lemma \ref{coprime}\,(ii), and we take $r$ as any prime belonging to $S'$, where $S'=S \cup \{2\}$ if $c$ is even, and $S'=S$ if $c$ is odd.
Since each of $a,b$ is congruent to $\pm1$ modulo $r$, it turns out that $L$ is divisible by $r$.
Recalling that $R$ is odd, and that $\nu_r(R)>0$ if and only if $r=p$ with $\nu_p(R)=1$, one infers from equation \eqref{factor} that $(c[S'])^z/p$ divides $L$.
Replacing this divisibility relation by an inequality, one finds that
\[
\frac{R}{L} = \frac{c^z}{L^2} \le \frac{c^z}{\bigr( (c[S'])^z/p \bigr)^2}=p^2 \cdot \left( \frac{c}{c[S']^2}\right)^z.
\]
Since $c[S']=c[S \cup \{2\}] \ge c_S$, it follows from \eqref{cS-bound} that
\begin{equation}\label{ABp}
\frac{A^p+B^p}{(A+B)^2}< p^2,
\end{equation}
where $A:=a^{x_0}$ and $B:=b^{y_0}$.
Note that each of $A,B$ is congruent to $\pm 1$ modulo $p$.
If $A>B$ (the other case is similar), then
\[ A^p<A^p+B^p<p^2(A+B)^2<p^2(2A)^2, \]
so that $A^{p-2}<4p^2$.
Since $A \ge p+1$, one has $p=3$, and $A<36$.
However, none of all possible pairs $(A,B)$ satisfies \eqref{ABp}.
\end{proof}
Suppose that the system of equations \eqref{1st} and \eqref{2nd} has a solution $(x,y,z,X,Y,Z)$ with $(x,y,z) \ne (X,Y,Z)$ and $z \le Z$.
Define the positive integer $\Delta$ as in the proof of Theorem \ref{th1}.
\begin{lem}\label{div-th3}
${c[S]}^z$ divides $\gcd(a-\delta_a,b-\delta_b) \cdot \Delta,$ and ${c[2]}^z$ divides $\gcd(a-\delta_{a,4},b-\delta_{b,4}) \cdot \Delta.$
\end{lem}
\begin{proof}
In the same way as in the proof of Lemma \ref{basic-cong}, for each $h \in \{a,b\}$ one obtains the congruence $h^{\Delta} \equiv \varepsilon \pmod{c^z}$ for some $\varepsilon=\pm1$.
Reducing this congruence modulo each $p \in S$, and combining the resulting one with congruence \eqref{ass-cong-th3} yields $\varepsilon \equiv {\delta_h}^{\Delta} \pmod{p}$, so that $\varepsilon={\delta_h}^{\Delta}$, and $h^{\Delta} \equiv {\delta_h}^{\Delta} \pmod{p^z}$.
Now the first assertion follows similarly to the proof of Lemma \ref{div}.
The second one is just a redisplaying of Lemma \ref{complement}.
\end{proof}
In what follows, we frequently use the Vinogradov symbol $f \ll g$, which means that $|f/g|$ is less than some absolute positive constant.
Any implied constants appearing below are effectively computable.
\begin{lem}\label{1st-bound-th2}
$Z \ll (\log \log c)^2(\log a)\log b.$
\end{lem}
\begin{proof}
We shall separately consider the two cases where $S$ is nonempty and where $c$ is even.
According to these cases, we put $M:=P$ and $M:=4$, respectively.
From equation \eqref{2nd},
\[ \nu_{M}(a^{2X}-b^{2Y}) \gg Z. \]
To obtain an upper bound for the left-hand side above, we apply Proposition \ref{Bu-madic} for $(\alpha_1,\alpha_2):=(a^2,b^2)$ and $(b_1,b_2):=(X,Y)$.
Congruences \eqref{ass-cong-th3}, \eqref{ass-cong-mod4-th3} together with Lemma \ref{coprime-th3} enable one to take ${\rm g}:=1$.
Also, since $M \le \min\{a,b\}+1 \le \min\{a,b\}^2$, one may set $H_1:=2\log a$ and $H_2:=2\log b$.
Proposition \ref{Bu-madic} gives
\[
\nu_{M}(a^{2X}-b^{2Y}) \ll \frac{\log a\,\log b}{\log^4 M} \cdot \mathcal B^2,
\]
where
\[
\mathcal B=\max \biggl\{ \log \biggl( \frac{X}{2\log b}+\frac{Y}{2\log a} \biggr)+\log \log M, \,\log M \biggr\}.
\]
Since
\[
\log \biggl( \frac{X}{2\log b}+\frac{Y}{2\log a} \biggr)+\log \log M <\log \biggl( \frac{\log c \ \log M}{\log a \ \log b}\,Z\biggr),
\]
the obtained bounds for $\nu_{M}(a^{2X}-b^{2Y})$ together lead to
\[ T \ll \frac{1}{\log^4 M} \cdot {\mathcal B'}^2, \]
where
\[
T:=\frac{Z}{\log a\,\log b}, \quad \mathcal B':=\log\,\max \bigl\{ (\log c)(\log M)\,T ,\,M \bigr\}.
\]
It is not hard to see from the above inequality that $T \ll (\log \log c)^2$.
\end{proof}
By Lemma \ref{1st-bound-th2} with inequality \eqref{Delta-ele-upp}, we estimate $\Delta$ from above as follows:
\[
\Delta<\frac{\log^2 c}{\log a\,\log b}\,zZ \ll (\log \log c)^2(\log c)^2 z.
\]
This shows that the size of $\Delta$ is much smaller than that of $c^z$.
Put $\Delta'$ and $\Delta_2$ as follows:
\[ \begin{cases}
\,\Delta':=\gcd(\Delta,{c[S]}^z), \ \ \Delta_2:=\gcd(\Delta,{c[2]}^z)
& \text{for case (I)},\\
\,\Delta':=\gcd(\Delta,(c[S]c[2])^z)
& \text{for case (II)}.
\end{cases}\]
By Lemma \ref{div-th3},
\[ \begin{cases}
\,h \equiv \delta_h \mod{{c[S]}^z/\Delta'}, \quad h \equiv \delta_{h,4} \mod{{c[2]}^z/\Delta_2}
& \text{for case (I)},\\
\,h \equiv \delta_h \mod{(c[S]c[2])^z/\Delta'}
& \text{for case (II)}
\end{cases}\]
for each $h \in \{a,b\}$.
In what follows, for case (I), we put
\[ \mathcal D:=\begin{cases}
\, {c[S]}^z/\Delta' & \text{if $c[S]>c[2]$},\\
\, {c[2]}^z/\Delta_2 & \text{if $c[2]>c[S]$},
\end{cases} \]
and $\mathcal D:=(c[S]c[2])^z/\Delta'$ for case (II).
Then, for each $h \in \{a,b\}$, it holds that
\begin{align}\label{cong-D-th3}
h \equiv \epsilon_h \mod{\mathcal D}
\end{align}
for some $\epsilon_h \in \{1,-1\}$.
Further,
\[
\mathcal D \ge \frac{{c_S}^z}{\Delta} \gg \frac{{c_S}^z}{(\log \log c)^2(\log c)^2 z}.
\]
In view of \eqref{cS-bound}, except for finitely many pairs $(c,z)$, which can be effectively determined, we may estimate $\mathcal D$ from below as follows:
\begin{equation}\label{calD-estimate}
\mathcal D>2, \quad \log \mathcal D \gg z \log c.
\end{equation}
In what follows, we assume that $\max\{a,b,c\}$ is sufficiently large so that at least one of $c$ and $z$ is sufficiently large, thereby inequalities \eqref{calD-estimate} hold.
\begin{lem}\label{second-bound-th2}
$z \cdot Z \ll \dfrac{(\log a)\log b}{\log^2 c}.$
\end{lem}
\begin{proof}
We proceed along similar lines to the proof of Lemma \ref{1st-bound-th2}, namely, we shall again apply Proposition \ref{Bu-madic}, with the same parameters other than $M$ as those in the proof of that lemma.
In this case, we set $M:=\mathcal D$.
This choice is justified by Lemma \ref{coprime-th3}, \eqref{cong-D-th3}, \eqref{calD-estimate}.
Proposition \ref{Bu-madic} gives
\[
\nu_{M}(a^{2X}-b^{2Y}) \ll \frac{\log a\, \log b}{\log^4 M} \cdot \mathcal B^2,
\]
where $\mathcal B$ is given as in the proof of Lemma \ref{1st-bound-th2}.
Since $\nu_{M}(a^{2X}-b^{2Y}) \gg Z/z$, the obtained two inequalities together lead to
\[ T \ll \frac{z^2}{\log^4 M} \cdot {\mathcal B'}^2, \]
where
\[
T:=\frac{zZ}{\log a\,\log b}, \quad \mathcal B':=\log\,\max \bigl\{ (\log c)(\log M)\,T/z, \,M \bigr\}.
\]
If $(\log c)(\log M)\,T/z \le M$, then, since one may assume that $\log M=\log \mathcal D \gg z \log c$ by \eqref{calD-estimate},
\[
T \ll \frac{z^2}{\log^2 M} \ll \frac{1}{\log^2 c},
\]
proving the assertion.
While if $(\log c)(\log M)\,T/z>M$, by Lemma \ref{1st-bound-th2},
\[ \frac{M}{(\log c)\log M}<T/z \ll (\log \log c)^2, \]
implying that $\max\{c,z\} \ll 1$ as $\log M \gg z \log c$.
\end{proof}
Now, Lemma \ref{second-bound-th2} immediately yields
\[
z \cdot Z \ll \frac{\log a}{\log c} \cdot \frac{\log b}{\log c} \le \min \biggl\{\dfrac{z^2}{xy},\dfrac{Z^2}{XY} \biggr\},
\]
so that $Z \ll z$ and $\max\{x,y,X,Y\} \ll 1$.
In particular, $\Delta \ll 1$ and $\mathcal D \gg {c_S}^z$.
\begin{lem}\label{xy>1-th2}
If $\max\{x,y\}>1,$ then the second restriction for the exceptional triples stated in Theorem $\ref{th2}$ holds.
\end{lem}
\begin{proof}
Since $\mathcal D \le G$, where $G:=\gcd(a \pm 1,b \pm 1)$ for some choice of signs, and $\Delta \ll 1$, it follows that ${c_S}^z \ll G$.
On the other hand, $G \le \min\{a,b\}+1$.
Since $\min\{a,b\}<c^{\,z/\max\{x,y\}}$, the obtained inequalities together lead to ${c_S}^z \ll c^{\,z/\max\{x,y\}}$.
Suppose that $\max\{x,y\}>1$.
Then ${c_S}^z \ll c^{z/2}$, so that $({c_S}/\sqrt{c})^z<\mathcal C$ for some absolute positive constant $\mathcal C$.
This together with \eqref{cS-bound} implies that
$c_S/\sqrt{c}<\mathcal C$.
Further, by equation \eqref{1st},
\[
\max\{a,b\}<c^z<c^{\,\frac{\log \mathcal C}{\log (\,c_S/\sqrt{c}\,)}} =\exp \biggl( \frac{2\log \mathcal C}{(\log c_S)/\log \sqrt{c}-1} \biggr).
\]
To sum up, the assertion holds by setting $\mathcal C_2$ as $2 \log \mathcal C$.
\end{proof}
By the above lemma, we may assume that $x=1$ and $y=1$.
Since $\mathcal D \mid c^z$ with $\mathcal D>2$, it turns out that $\epsilon_a=-\epsilon_b$ by reducing equation \eqref{1st} modulo $\mathcal D$ with congruence \eqref{cong-D-th3}.
Thus, $a$ and $b$ are written as in \eqref{ab-form} with $\delta$ replaced by $\epsilon_a$.
We finish the proof of Theorem \ref{th2} by proving the following lemma.
\begin{lem}
If $(x,y)=(1,1),$ then the same conclusion as that of Lemma $\ref{xy>1-th2}$ holds.
\end{lem}
\begin{proof}
It suffices to show that $({c_S}/\sqrt{c})^z \ll 1$ as seen in the proof of Lemma \ref{xy>1-th2}.
Suppose that $Z<2z$.
Assuming that $a>b$, one can closely follow the proof of Lemma \ref{sols-calX=1-Z<2z} to show that $X=1$, $Y \ge 4$ and, further, $\mathcal D<c^{(2/Y)z-1/Y}+1 \le c^{z/2}$ by the fact that $b \ge \mathcal D -1$.
This leads to $\mathcal D \ll c^{z/2}$, so that $({c_S}/\sqrt{c})^z \ll 1$.
Suppose that $Z \ge 2z$.
One can closely follow the proof of Lemma \ref{Zge2z} to show that $\mathcal D^2/c^z \le \max\{X,Y\}$.
This leads to $({c_S}/\sqrt{c})^z \ll 1$.
\end{proof}
\section{Preliminaries for Theorem \ref{th3}} \label{sec-th3-pre}%
Let $c \in \{5,17,257, 65537\}$.
Note that $c$ is an odd prime and $\varphi(c)=c-1$ is a power of 2.
Applying Lemma \ref{weakform} for $d=c$ in a way similar to that seen in the proof of Theorem \ref{th1}, we may assume that $e_c(a)=e_c(b)$.
Put $E:=e_c(a)=e_c(b)$.
Thanks to Theorem \ref{th1}, we may further assume that $E>1$.
Then
\begin{equation} \label{hE-cong}
h^E \equiv -1 \mod{c}
\end{equation}
for each $h \in \{a,b\}$.
Note that the multiplicative order of each of $a$ and $b$ modulo $c$ equals $2E$ (cf.~Lemma \ref{property}\,(ii)), in particular $E$ is a power of 2 as it divides $\varphi(c)/2$.
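For concrete values, the quantity $E=e_c(h)$, the least positive integer with $h^E \equiv -1 \pmod c$ (when one exists), can be found by direct search; a small Python sketch:

```python
def e_c(h, c):
    """Least E >= 1 with h^E ≡ -1 (mod c), or None if no such E exists."""
    v = h % c
    for E in range(1, c):
        if v == c - 1:
            return E
        v = v * h % c
    return None
```

For example, $e_{17}(2)=4$ since $2^4=16 \equiv -1 \pmod{17}$; accordingly the multiplicative order of $2$ modulo $17$ is $2E=8$, a power of $2$ dividing $\varphi(17)=16$.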
From now on, suppose that the following system of equations:
\begin{eqnarray}
&a^x+b^y=c^z,\label{1st-th3}\\ &a^X+b^Y=c^Z \label{2nd-th3}
\end{eqnarray}
holds for some positive integers $x,y,z,X,Y,Z$ with $(x,y,z) \ne (X,Y,Z)$.
Define $\Delta$ as in previous sections.
We show several lemmas below.
\begin{lem}\label{Delta-div}
$\Delta$ is divisible by $E.$
Further, $\Delta/E$ is odd if $x \not\equiv X \pmod{2}$ or $y \not\equiv Y \pmod{2}.$
\end{lem}
\begin{proof}
As observed in the proof of Lemma \ref{basic-cong},
\[
a^{\Delta} \equiv (-1)^{y+Y}, \ \ b^{\Delta} \equiv (-1)^{x+X} \mod{c}.
\]
By Lemma \ref{property}\,(i), this together with \eqref{hE-cong} implies the assertions.
\end{proof}
The next lemma directly follows from the above lemma, and it relies upon the fact that $\varphi(c)$ is a power of 2.
\begin{lem}\label{Deltaeven}
$\Delta$ is even.
\end{lem}
\begin{lem}\label{XYeven}
$x \not\equiv X \pmod{2}$ or $y \not\equiv Y \pmod{2}.$
Further, both $x$ and $y$ are even, or both $X$ and $Y$ are even.
\end{lem}
To prove this lemma, we rely on the following striking result of Scott which is a direct consequence of \cite[Lemma 6]{Sc}.
\begin{prop}\label{twoclass}
Assume that $c$ is a prime.
Let $(x_1,y_1,z_1)$ and $(x_2,y_2,z_2)$ be two solutions to equation $\eqref{abc}.$
Then $x_1 \not\equiv x_2 \pmod{2}$ or $y_1 \not\equiv y_2 \pmod{2},$ except when $(a,b,c)$ or $(b,a,c)$ equals one of $(5,3,2),(13,3,2)$ and $(10,3,13).$
\end{prop}
\begin{proof}[Proof of Lemma $\ref{XYeven}$]
Since $c \ne 2,13$, Proposition \ref{twoclass} applies to the two solutions $(x,y,z)$ and $(X,Y,Z)$, and the first assertion clearly follows.
For the second one, consider the case where $x \not\equiv X \pmod 2$.
Since $xY \equiv Xy \pmod{2}$ by Lemma \ref{Deltaeven}, it follows that $y$ or $Y$ is even according as $x$ or $X$ is even.
The case where $y \not\equiv Y \pmod{2}$ is handled similarly.
\end{proof}
By the above lemma, without loss of generality, we may assume that both $X,Y$ are even, and that at least one of $x,y$ is odd.
Write \[ X=2X', \quad Y=2Y'.\]
Equation \eqref{2nd-th3} becomes
\begin{eqnarray} \label{2nd'-th3}
a^{2X'}+b^{2Y'}=c^Z.
\end{eqnarray}
Further, by Lemma \ref{Delta-div},
\begin{equation}\label{E/2divDel}
E \parallel \Delta.
\end{equation}
In what follows, we often argue over the ring of Gaussian integers.
We can write \[c=m^2+1,\] where $m=2^e$ with $e \in \{1,2,4,8\}$.
Also, put \[ \beta := m+i,\] where $i:=\sqrt{-1}$.
Note that $\beta$ is a prime element in $\mathbb Z[i]$, since its norm $\beta\bar{\beta}=c$ is prime.
In what follows, without loss of generality, we may assume that $a$ is odd and $b$ is even.
\begin{lem}\label{gauss-fac}
The following hold.
\begin{itemize}
\item[\rm (i)]
$\{a^{X'},b^{Y'}\}=\{\,|\operatorname{Re} (\beta^Z)|,|\operatorname{Im} (\beta^Z)|\,\}.$
More precisely,
\[
a^{X'}=\dfrac{1}{2}\,|\beta^Z+(-\bar{\beta})^Z|, \ \ b^{Y'}=\dfrac{1}{2}\,|\beta^Z-(-\bar{\beta})^Z|,
\]
where $\bar{\beta}$ denotes the complex conjugate of $\beta.$
\item[\rm (ii)]
$Y'=\dfrac{e+\nu_2(Z)}{\nu_2(b)}.$
\end{itemize}
\end{lem}
\begin{proof}
(i) Equation \eqref{2nd'-th3} is rewritten as follows:
\[ (a^{X'}+b^{Y'} i)(a^{X'}-b^{Y'} i) = c^Z. \]
A usual argument on the above factorization yields that $a^{X'}\pm b^{Y'} i= u \beta^Z$ for some sign and some unit $u \in \{\pm1,\pm i\}$.
This immediately shows the first assertion.
Further,
\[
a^{X'}=\frac{u}{2}\,(\beta^Z+\bar{\beta}^Z\cdot \bar{u}/u), \quad
\pm\,b^{Y'}=\frac{u}{2}\,(\beta^Z-\bar{\beta}^Z\cdot \bar{u}/u).
\]
Now the second assertion follows since $b$ is assumed to be even, and the difference between $\beta,-\bar{\beta}$ is divisible by 4. \par
(ii) Since the numbers $\beta,-\bar{\beta}$ are coprime, and their difference is divisible by 4, one uses (i) to see that
\[
\nu_{2}(b^{Y'})=\nu_{2}\biggl( \frac{\beta+\bar{\beta}}{2}\biggr)+\nu_{2}\biggl( \frac{\beta^Z-(-\bar{\beta})^Z}{\beta-(-\bar{\beta})}\biggr)=\nu_{2}(m)+\nu_{2}(Z),
\]
leading to the assertion.
\end{proof}
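The formulas of Lemma \ref{gauss-fac}\,(i) are straightforward to check numerically; a minimal Python sketch with exact Gaussian-integer arithmetic, taking $c=5$ (so $m=2$, $\beta=2+i$) and $Z=2$:

```python
def gmul(u, v):
    """Product of Gaussian integers represented as pairs (re, im)."""
    return (u[0]*v[0] - u[1]*v[1], u[0]*v[1] + u[1]*v[0])

def gpow(u, n):
    r = (1, 0)
    for _ in range(n):
        r = gmul(r, u)
    return r

m, Z = 2, 2                       # c = m^2 + 1 = 5
beta = (m, 1)                     # beta = m + i
minus_conj = (-m, 1)              # -conj(beta) = -m + i
s, t = gpow(beta, Z), gpow(minus_conj, Z)
aX = abs(s[0] + t[0]) // 2        # |beta^Z + (-conj(beta))^Z| / 2: the sum is real
bY = abs(s[1] - t[1]) // 2        # |beta^Z - (-conj(beta))^Z| / 2: purely imaginary
```

This returns $a^{X'}=3$ and $b^{Y'}=4$, consistent with $3^2+4^2=5^2$.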
\begin{lem}\label{Zsmall}
If $Z \le 3,$ then
\[ (a,b)=(c-2,2), \ (x,y,z)=(1,1,1), \ (X,Y,Z)=(2,2e+2,2).\]
\end{lem}
\begin{proof}
We shall apply Lemma \ref{gauss-fac}\,(i) for each $Z \in \{1,2,3\}$.
It turns out that $Z>1$ as $\min\{a,b\}>1$, and that all possible pairs $(a^{X'},b^{Y'})$ satisfying $Z \le 3$ are given as follows:
\begin{align*}
(c,Z,a^{X'},b^{Y'}) \in \{ \, & (5,2,3,4),(17,2,15,8),(257,2,255,32),\\
&(65537,2,65535,512),(5,3,11,2),(17,3,47,52),\\
&(257,3,767,4048),(65537,3,196607,16776448)\,\}.
\end{align*}
If $Z=3$, then $(X',Y')=(1,1)$, so that equation \eqref{abc} corresponding to each of the above cases is one handled by \cite[Theorem 1]{CaDo}, which tells us that there is only one solution to it.
For the case where $Z=2$, one finds that $X'=1$, $a=c-2$ and $b$ is a power of 2.
Equation \eqref{abc} corresponding to each of the cases is one handled by \cite[Theorem 1.4; $m=1$]{Miy}, and it turns out that $b=2$, and solutions $(x,y,z),(X,Y,Z)$ are given as asserted.
\end{proof}
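Each tuple displayed in the proof can be confirmed mechanically; a short Python check that $(a^{X'})^2+(b^{Y'})^2=c^Z$ holds in every case:

```python
# Entries (c, Z, a^{X'}, b^{Y'}) copied from the proof of Lemma Zsmall.
tuples = [
    (5, 2, 3, 4), (17, 2, 15, 8), (257, 2, 255, 32),
    (65537, 2, 65535, 512), (5, 3, 11, 2), (17, 3, 47, 52),
    (257, 3, 767, 4048), (65537, 3, 196607, 16776448),
]
for c, Z, u, v in tuples:
    assert u*u + v*v == c**Z, (c, Z)
```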
By the above lemma, we may suppose in what follows that \[ Z \ge 4.\]
Below, we shall observe that this leads to a contradiction.
Though the following lemma could presumably be handled by the existing methods for determining all square terms in (concrete) binary linear recurrent sequences (cf.~\cite{NaPe}), we choose to rely on results on ternary Diophantine equations based on the so-called modular approach.
\begin{lem}\label{X'Y'odd}
$X'$ and $Y'$ are odd.
\end{lem}
\begin{proof}
This follows from a simple application of the works \cite{BeElNg,Br,El} on the generalized Fermat equation (cf.~\cite[Ch.14]{Co}) of signature $(2,4,n)$ with $n \ge 4$ to equation \eqref{2nd'-th3}.
\end{proof}
\begin{lem}\label{X'1Y'1}
$X'=1$ or $Y'=1$ according as $Z$ is even or odd.
\end{lem}
\begin{proof}
The assertion for odd $Z$ holds by Lemmas \ref{gauss-fac}\,(ii) and \ref{X'Y'odd}.
Assume that $Z$ is even and observe the factorization $a^{2X'}=(c^{Z/2}+b^{Y'})(c^{Z/2}-b^{Y'})$ with $a$ odd.
Then
\[ c^{Z/2}+b^{Y'}=u^{2X'}, \ \ c^{Z/2}-b^{Y'}=v^{2X'}. \]
for some coprime odd positive integers $u,v$ with $u>v$.
Adding these equations leads to the following factorization among integers:
\[ 2c^{Z/2}=(u^2+v^2) \cdot \frac{u^{2X'}+v^{2X'}}{u^2+v^2}, \]
where the fact that $X'$ is odd by Lemma \ref{X'Y'odd} is used.
By the primality of $c$, the above equation implies that the set of prime factors of $u^{2X'}+v^{2X'}$ (which is $\{2,c\}$) is included in that of $u^2+v^2$.
Then an old version of the primitive divisor theorem of Zsigmondy (cf.~\cite{Zs}), applied to the $X'$-th term of the sequence $\{(u^2)^t+(v^2)^t\}_{t \ge 1}$, yields $X'=1$.
\end{proof}
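The role of Zsigmondy's theorem here can be illustrated concretely: for odd $X'>1$, the $X'$-th term of the sequence $\{(u^2)^t+(v^2)^t\}$ acquires a primitive prime factor not dividing the first term $u^2+v^2$, contradicting the inclusion of prime-factor sets above. A quick Python illustration with the sample values $u=3$, $v=1$:

```python
def prime_factors(n):
    """Set of prime factors of n, by trial division."""
    fs, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            fs.add(d)
            n //= d
        d += 1
    if n > 1:
        fs.add(n)
    return fs

u, v = 3, 1
first = prime_factors(u**2 + v**2)      # factors of the first term
for Xp in (3, 5, 7):                    # odd X' > 1
    term = prime_factors(u**(2*Xp) + v**(2*Xp))
    assert not term <= first            # a primitive prime factor appears
```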
Note that the formula of Lemma \ref{gauss-fac}\,(i) helps us to easily find that $X'=1$ and $Y'=1$ for small values of $Z$ by checking that both $a^{X'}$ and $b^{Y'}$ are not perfect powers.
\begin{lem}\label{order8}
$E$ is divisible by $4.$
In particular, $\Delta$ is divisible by $4.$
\end{lem}
\begin{proof}
Suppose on the contrary that $4 \nmid E$, that is, $E=2$, so that $a^2 \equiv b^2 \equiv -1 \pmod{c}$.
However, in this case, it is observed from Lemma \ref{X'Y'odd} that the left-hand side of equation \eqref{2nd'-th3} should be congruent to $-2$ modulo $c$, which is clearly absurd.
\end{proof}
Note that the above lemma excludes the case where $c=5$.
\begin{lem}\label{xyodd}
$x$ and $y$ are odd.
\end{lem}
\begin{proof}
Since $\Delta=\pm 2(xY'-X'y)$, Lemmas \ref{X'Y'odd} and \ref{order8} together show that $x-y$ is even.
This implies the assertion as $x$ or $y$ is already known to be odd.
\end{proof}
\begin{lem}\label{Evalue}
$E=E(e,Z)=2e/\gcd(2e,Z-1).$
\end{lem}
\begin{proof}
Recall that $c=m^2+1$, and $\{a^{X'},b^{Y'}\}=\{|\operatorname{Re} (\beta^Z)|,|\operatorname{Im} (\beta^Z)|\}$ by Lemma \ref{gauss-fac}\,(i).
Since $E$ is a power of $2$, and $X',Y'$ are odd, one finds from Lemma \ref{property}\,(iii) that
\[
e_c(a^{X'})=\frac{e_c(a)}{\gcd(e_c(a),X')}=\frac{E}{\gcd(E,X')}=E,
\]
and $e_c(b^{Y'})=E$ similarly.
Therefore, it suffices to show that
\begin{equation}\label{E-value}
\begin{cases}
\,\operatorname{Re} (\beta^Z) \equiv \pm 2^{Z-1} \mod{c} & \text{if $Z$ is even},\\
\,\operatorname{Im} (\beta^Z) \equiv \pm 2^{Z-1} \mod{c} & \text{if $Z$ is odd}.
\end{cases}
\end{equation}
Indeed, $e_c(\pm 2^{Z-1})=e_c(2^{Z-1})=e_c(2)/\gcd(e_c(2),Z-1)$, where $e_c(2)=2e$ as $2^{2e}=m^2 \equiv -1 \pmod{c}$.
To show \eqref{E-value}, observe the following congruences modulo $m^2+1$.
If $Z$ is even or odd, then
\begin{align*}
\operatorname{Re} (\beta^Z)=\sum_{j=0}^{Z/2} \binom{Z}{2j}m^{Z-2j} i^{2j}
& \equiv \sum_{j=0}^{Z/2} \binom{Z}{2j}(m^2)^{Z/2-j}(-1)^j\\
& \equiv \sum_{j=0}^{Z/2} \binom{Z}{2j}(-1)^{Z/2} \equiv \pm 2^{Z-1},
\end{align*}
\begin{align*}
\operatorname{Im} (\beta^Z)&=\sum_{j=0}^{(Z-1)/2} \binom{Z}{2j+1}m^{Z-2j-1}\,i^{2j}\\
& \equiv \sum_{j=0}^{(Z-1)/2} \binom{Z}{2j+1}(m^2)^{(Z-1)/2-j}(-1)^j\\
& \equiv \sum_{j=0}^{(Z-1)/2} \binom{Z}{2j+1}(-1)^{(Z-1)/2} \equiv \pm 2^{Z-1},
\end{align*}
respectively.
\end{proof}
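The congruences \eqref{E-value} can also be checked numerically. The following Python snippet (a sanity check, not part of the proof) verifies that $\operatorname{Re}(\beta^Z)$ or $\operatorname{Im}(\beta^Z)$ is congruent to $\pm 2^{Z-1}$ modulo $c$, according as $Z$ is even or odd, for the three Fermat primes in question:

```python
def re_im_beta_pow(m, Z):
    """Return (Re, Im) of (m + i)^Z using exact integer arithmetic."""
    re, im = 1, 0
    for _ in range(Z):
        # (re + im*i)(m + i) = (re*m - im) + (re + im*m) i
        re, im = re * m - im, re + im * m
    return re, im

# c = m^2 + 1 with m = 2^e; check Re/Im(beta^Z) = +-2^(Z-1) (mod c)
for m in (4, 16, 256):                 # c = 17, 257, 65537
    c = m * m + 1
    for Z in range(1, 30):
        re, im = re_im_beta_pow(m, Z)
        val = re if Z % 2 == 0 else im
        assert val % c in {pow(2, Z - 1, c), (c - pow(2, Z - 1, c)) % c}
```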
Since $E \ge 4$ by Lemma \ref{order8}, it holds from Lemma \ref{Evalue} that $Z$ is even for $c=17$.
Further,
\begin{equation}\label{Ebounds}
4 \le E \le E_u,
\end{equation}
where $E_u=2e$ or $e$ according as $Z$ is even or odd.
\section{Proof of Theorem \ref{th3}} \label{sec-th3}%
We begin with the following lemma.
\begin{lem}\label{1stbound-th3}
The following hold.
\begin{itemize}
\item[\rm (i)]
$x,y$ and $c$ are relatively prime.
\item[\rm (ii)]
\[
z \le \max\biggl\{\frac{t_1 E^3}{\log^2 c},\,2.2\cdot10^4\biggr\}\, (\log a) \log b,
\]
where $t_1=53.6 \cdot 2 \cdot 4^2.$
\end{itemize}
These hold also for the solution $(X,Y,Z)$.
\end{lem}
\begin{proof}[Proof of Lemma $\ref{1stbound-th3}$]
(i) Suppose on the contrary that $x,y$ are divisible by the odd prime $c$.
Then, it is observed, similarly to the proof of Lemma \ref{coprime}\,(ii), that $R=(a^x+b^y)/(a^{x/c}+b^{y/c})$ has to equal $c$.
This is absurd as $R>c$.\par
(ii) We proceed similarly to the proof of Lemma \ref{Kc}.
By equation $\eqref{1st-th3},$
\[ \nu_c(a^{2x}-b^{2y}) \ge z. \]
To obtain an upper bound for the left-hand side above, we shall apply Proposition \ref{Bu-madic} for $(\alpha_1,\alpha_2):=(a,b)$, $(b_1,b_2):=(2x,2y)$.
Note that $\gcd(b_1,b_2,c)=1$ by (i).
In this case, we set $M:=c$.
One may set ${\rm g}:=2E$, and $H_1:=\log a', H_2:=\log b'$, where $a'=\max\{a,c\}$ and $b'=\max\{b,c\}$.
Then
\[
\nu_{c}(a^{2x}-b^{2y}) \le \frac{53.6\cdot 2E\,\log a'\,\log b'}{\log^4 c} \cdot \mathcal B^2,
\]
where
\[
\mathcal B=\max \biggl\{ \log \biggl( \frac{2x}{\log b'}+\frac{2y}{\log a'} \biggr)+\log \log c+0.64, \,4\log c \biggr\}.
\]
Observe that
\[
\log \biggl( \frac{2x}{\log b'}+\frac{2y}{\log a'} \biggr)+\log \log c+0.64 \le \log \biggl( \frac{4\,{\rm e}^{0.64} \log^2 c}{\log a\,\log b}\,z \biggr).
\]
The two obtained bounds for $\nu_{c}(a^{2x}-b^{2y})$ together yield
\[
T \le 53.6 \cdot 2 \cdot \frac{\log a'}{\log a} \cdot \frac{\log b'}{\log b} \cdot \frac{E}{\log^4 c} \cdot {\mathcal B'}^2,
\]
where
\[
T:=\frac{z}{\log a\,\log b}, \quad \mathcal B':=\log\, \max \bigl\{ 4\,{\rm e}^{0.64} (\log^2 c)\,T ,\,c^4 \bigr\}.
\]
Since $a \ge (2c-1)^{1/E}$ and $b \ge (c-1)^{1/E}$ as $a^E \equiv b^E \equiv -1 \pmod{c}$ with $a$ odd, one easily observes that
\[
\frac{\log a'}{\log a} \le \frac{E\log c}{\log(2c-1)}, \quad \frac{\log b'}{\log b} \le \frac{E\log c}{\log(c-1)}.
\]
Therefore,
\[
T \le 53.6 \cdot 2 \cdot \frac{E^3}{\log^4 c} \cdot {\mathcal B'}^2.
\]
If $4\,{\rm e}^{0.64} (\log^2 c)\,T \le c^4$, then $\mathcal B' =4\log c$, so that
\begin{equation}\label{ineq-max<<}
T \le 53.6 \cdot 2 \cdot 4^2 \cdot \frac{E^3}{\log^2 c}.
\end{equation}
Finally suppose that $4\,{\rm e}^{0.64} (\log^2 c)\,T>c^4$.
Then
\[
\frac{c^4}{4\,{\rm e}^{0.64} \log^2 c} < T \le \frac{53.6 \cdot 2 \cdot E^3}{\log^4 c} \cdot \log^2 \bigl(4\,{\rm e}^{0.64} (\log^2 c)\,T\bigr).
\]
Since $E \le 2e$ by Lemma \ref{Evalue}, the above inequalities together imply that $c=17$ and $T<2.2 \cdot 10^4$.
This together with \eqref{ineq-max<<} gives the assertion.
\end{proof}
In what follows, we put \[ \Delta':=\gcd(\Delta/E, c^{\,\min\{z,Z\}}).\]
Note that $\Delta'$ equals either 1 or a power of $c$.
\begin{lem}\label{DeltaDelta'}
The following hold.
\begin{itemize}
\item[\rm (i)]
$\Delta < \max\{ t_1 E^3, 2.2 \cdot 10^4\log^2 c\} \cdot \min\{z,Z\}.$
\item[\rm (ii)]
$\Delta' < \max\{t_1 E^2, \, 2.2 \cdot 10^4(\log^2 c)/E\} \cdot \min\{z,Z\}.$
\item[\rm (iii)]
$h^E \equiv -1 \pmod{ c^{\,\min\{z,Z\} } /\Delta' }$ for each $h \in \{a,b\}.$
\end{itemize}
\end{lem}
\begin{proof}
(i) This follows from Lemma \ref{1stbound-th3}\,(ii) with inequality \eqref{Delta-ele-upp}.\par
(ii) This follows from (i) since $\Delta' \le \Delta/E$.\par
(iii) Let $h \in \{a,b\}$.
We know that $h^{\Delta} \equiv \epsilon \pmod{c^{\,\min\{z,Z\}}}$ for some $\epsilon \in \{1,-1\}$.
Since $h^E \equiv -1 \pmod{c}$, it follows from Lemma \ref{property}\,(i) that $\epsilon=(-1)^{\Delta/E}$.
Lemma \ref{padic-lem} is applied for $p=c$ and $(U,V,N)=(h^E,-1,\Delta/E)$ to show that $c^{\,\min\{z,Z\}}$ divides $(h^E+1) \cdot \Delta/E$.
This yields the assertion.
\end{proof}
\begin{lem}\label{max<<min}
The following hold.
\begin{itemize}
\item[\rm (i)]
$Z<4z$ if $Z$ is even.
\item[\rm (ii)]
If $\Delta' \ge c^{\,\min\{z,Z\}/3},$ then
\[ \min\{z,Z\} \le \begin{cases}
\,10 & \text{for $c=17$}, \\
\,6 & \text{for $c=257$}, \\
\,3 & \text{for $c=65537$}.
\end{cases} \]
\item[\rm (iii)]
If $\Delta'<c^{\,\min\{z,Z\}/3},$ then
\[ \max\{z,Z\} < t_2 E \cdot \min\{z,Z\},\]
where $t_2=53.7 \cdot 2 \cdot 4^2 \cdot (3/2)^2.$
\end{itemize}
\end{lem}
\begin{proof}
(i) Assume that $Z$ is even.
Then $X'=1$ by Lemma \ref{X'1Y'1}.
Since $\{a,b^{Y'},c^{Z/2}\}$ forms a primitive Pythagorean triple, one has $c^{Z/2}<a^2$, so that $c^{Z/2}<c^{2z}$ by equation \eqref{1st}, whence $Z<4z$. \par
(ii) If $\Delta' \ge c^{\,\min\{z,Z\}/3}$, then, by inequalities \eqref{Ebounds} and Lemma \ref{DeltaDelta'}\,(ii),
\[
c^{\,\min\{z,Z\}/3}< \max \bigl\{ t_1 {E_u}^2, \, 2.2 \cdot 10^4(\log^2 c)/E_l \bigr\} \cdot \min\{z,Z\},
\]
where $E_l:=4, E_u:=2e$.
This implies the assertion. \par
(iii) We only consider the case where $z \le Z$, because the case where $z \ge Z$ is dealt with similarly by replacing the roles of $X,Y$ and $Z$ with those of $x,y$ and $z$, respectively.
The proof proceeds along similar lines to that of Lemma \ref{bound-x1y1}.
We shall apply Proposition \ref{Bu-madic} for $(\alpha_1,\alpha_2):=(a,b)$, $(b_1,b_2):=(2X,2Y)$ in a slightly rough manner.
In this case, we set $M:=c^z/\Delta'$.
From here we assume that $\Delta'<c^{\,z/3}$, that is,
\begin{equation}\label{M-low}
M>c^{\,2z/3}.
\end{equation}
By Lemma \ref{DeltaDelta'}\,(iii) one may take ${\rm g}:=2E$.
Since $\max\{a,b\}<c^z$, one may set $H_1:=z\log c$ and $H_2:=z\log c$.
Then
\[
\nu_{M}(a^{2X}-b^{2Y}) \le \frac{53.6 \cdot 2 E\log^2 c}{\log^4 M} \cdot z^2 \cdot \mathcal B^2,
\]
where
\[
\mathcal B=\max \biggl\{ \log \biggl( \frac{2X+2Y}{z\log c} \biggr)+\log (\log M)+0.64, \,4\log M \biggr\}.
\]
Observe that
\[
\mathcal B \le \log \max \{\kappa Z,\,M^4\}
\]
with $\kappa=\frac{4\,{\rm e}^{0.64}\log c}{\log \min\{a,b\}}$.
On the other hand,
\[
\nu_{M}(a^{2X}-b^{2Y}) \ge \left \lfloor \frac{Z}{z} \right \rfloor.
\]
Since we may assume that $Z/z$ is suitably large, the two obtained bounds for $\nu_{M}(a^{2X}-b^{2Y})$ together lead to
\begin{equation}\label{ineq-max/min}
Z \le \frac{53.7 \cdot 2 E\log^2 c}{\log^4 M} \cdot z^3 \cdot {\mathcal B}^2.
\end{equation}
Suppose that $\kappa Z \le M^4$.
Then $\mathcal B=4\log M$.
Since $\log M>\frac{2}{3}z \log c$ by \eqref{M-low}, inequality \eqref{ineq-max/min} gives
\[
Z \le 53.7 \cdot 2 \cdot 4^2 \cdot \frac{E\log^2 c}{\log^2 M} \cdot z^3 < t_2 E\cdot z,
\]
showing the assertion.
Suppose that $\kappa Z>M^4$.
Since $E \le 2e$ by \eqref{Ebounds}, it follows from \eqref{ineq-max/min} that
\[
\frac{z Z}{\log^2 (\kappa Z)} \le \frac{53.7 \cdot 4e \cdot (3/2)^4}{\log^2 c}.
\]
However this is not compatible with the inequality $Z>M^4/\kappa\,(>c^{8z/3}/\kappa)$.
\end{proof}
For a number field $\mathbb K$ and a prime ideal $\pi$ in $\mathbb K$, we denote by $\nu_{\pi}(\alpha)$ the exponent of $\pi$ in the factorization of the fractional ideal generated by a nonzero element $\alpha$ in $\mathbb K$.
\begin{prop}[Th\'eor\`eme 3 in \cite{BuLa}] \label{BL}
Let $\mathbb K$ be a number field.
Let $\pi$ be a prime ideal in $\mathbb K,$ and $p$ the rational prime lying above $\pi.$
Let $\alpha_1$ and $\alpha_2$ be nonzero elements in $\mathbb K$ which do not belong to $\pi.$
Assume that $\alpha_1$ and $\alpha_2$ are multiplicatively independent.
Let ${\rm g}$ be a positive integer such that
\[
{\alpha_1}^{\rm g} - 1 \in \pi, \quad {\alpha_2}^{\rm g} - 1 \in \pi.
\]
Let $H_1$ and $H_2$ be positive numbers such that
\[
\quad H_j \ge \max \biggl\{ \frac{D}{f_{\pi}}\,{\rm h}(\alpha_j), \, \log p \biggr\} \quad (j=1,2),
\]
where $D=[\mathbb Q(\alpha_1,\alpha_2):\mathbb Q]$ and $f_{\pi}$ is the inertia index of $\pi.$
Then, for any positive integers $b_1$ and $b_2,$
\begin{multline*}
\nu_{\pi}({\alpha_1}^{b_1}-{\alpha_2}^{b_2}) \le
\frac{24\,p\,{\rm g}\,D^2 H_1 H_2}{{f_{\pi}}^2(p-1)(\log p)^4}\\
\times \Bigl(\! \max\bigl\{ \log b'+\log \log p+0.4,{\textstyle \frac{10 f_{\pi}}{D}}\log p, 10\bigr\} \Bigr)^2
\end{multline*}
with $b'=b_1/H_2+b_2/H_1.$
\end{prop}
A clever idea of Luca \cite[Lemma 7]{Lu_aa_12}, used in the proof of the following lemma together with the earlier application of Proposition \ref{twoclass}, plays the key role in deriving absolute upper bounds for the solutions.
\begin{lem}\label{min<<1}
Assume that $\Delta'<c^{\,\min\{z,Z\}/3}.$
Then the following hold.
\begin{itemize}
\item[\rm (i)]
If $z \le Z$ and $Z$ is odd, then
\[ z< \begin{cases}
\,1.2 \cdot 10^5 & \text{for $c=257$},\\
\,77000 & \text{for $c=65537$}.
\end{cases} \]
\item[\rm (ii)]
If $z \le Z$ and $Z$ is even, then
\[ Z< \begin{cases}
\, 6 \cdot 10^5 & \text{for $c=17$}, \\
\, 3 \cdot 10^5 & \text{for $c=257$},\\
\, 3.1 \cdot 10^5 & \text{for $c=65537$}.
\end{cases} \]
\item[\rm (iii)]
If $Z \le z,$ then
\[ Z< \begin{cases}
\,1.4 \cdot 10^5 & \text{for $c=17$}, \\
\,69000 & \text{for $c=257$},\\
\,77000 & \text{for $c=65537$}.
\end{cases} \]
\end{itemize}
\end{lem}
\begin{proof}
We know that $a^{\Delta} \equiv \pm1 \pmod{c^{\,\min\{z,Z\}}}$ with $\Delta$ even.
One raises this congruence to the $2X'$-th power to find that
\[
(a^{4X'})^{\Delta/2} \equiv 1 \mod{c^{\,\min\{z,Z\}}}.
\]
Since, by Lemma \ref{gauss-fac}\,(i),
\[
(a^{X'})^4=\dfrac{1}{2^4}\,|\beta^Z + (-\bar{\beta})^Z|^4=\dfrac{1}{2^4}\,(\,\beta^Z + (-\bar{\beta})^Z\,)^4,
\]
it follows that
\[
(\,\beta^Z+(-\bar{\beta})^Z\,)^{2\Delta} \equiv 2^{2\Delta} \mod{c^{\,\min\{z,Z\}}}.
\]
Recalling that $\bar{\beta}$ is a prime element dividing $c$, one reduces the above congruence modulo $\bar{\beta}^{\,\min\{z,Z\}}$ to obtain $
\beta^{2Z\Delta} \equiv 2^{2\Delta} \pmod{\bar{\beta}^{\,\min\{z,Z\}}}$, whence
\[
\nu_{\bar{\beta}} (\beta^{2Z\Delta}-2^{2\Delta}) \ge \min\{z,Z\}.
\]
To obtain an upper bound for the left-hand side above, we apply Proposition \ref{BL} for $\pi:=\bar{\beta}$, $(\alpha_1,\alpha_2):=(\beta,2)$ and $(b_1,b_2):=(2Z\Delta,2\Delta)$.
Note that $(p,f_{\pi},D)=(c,1,2)$.
Since $\beta \equiv 2 \pmod{\bar{\beta}}$, one may take ${\rm g}:=4e$.
Further, one may set $H_1:=\log c$ and $H_2:=\log c$ as ${\rm h}(\beta)=\frac{1}{2}\log c$.
Therefore,
\[
\nu_{\bar{\beta}} (\beta^{2Z\Delta}-2^{2\Delta}) \le \frac{t_3 \,c\,e}{(c-1)\log^2 c} \cdot \mathcal B^2,
\]
where $t_3:=24\cdot4\cdot2^2$, and
\[
\mathcal B=\log\,\max \bigl\{ 2\,{\rm e}^{0.4}\Delta(Z+1), \, c^5 \bigr\}.
\]
To sum up, the two obtained bounds for $\nu_{\bar{\beta}} (\beta^{2Z\Delta}-2^{2\Delta})$ together yield
\begin{equation}\label{ineq-minzZ}
\min\{z,Z\} \le \frac{t_3\,c\,e}{(c-1)\log^2 c} \cdot \mathcal B^2.
\end{equation}
Below we mainly distinguish two cases.
\vspace{0.1cm}{\it Case where $z \le Z.$} \
If $2\,{\rm e}^{0.4}\Delta(Z+1) \le c^5$, then \eqref{ineq-minzZ} becomes
\begin{equation}\label{ineq-final-2-1}
z \le \frac{25 t_3\,c\,e}{c-1}.
\end{equation}
While if $2\,{\rm e}^{0.4}\Delta(Z+1)> c^5$, then
\begin{equation}\label{ineq-final-2-2}
Z+1>\frac{c^5}{2\,{\rm e}^{0.4}\Delta}, \quad z < \frac{t_3 \,c\,e\,\log^2 \bigl(2\,{\rm e}^{0.4}\Delta(Z+1)\bigr)}{(c-1)\log^2 c}.
\end{equation}
On the other hand, we know from Lemma \ref{DeltaDelta'}\,(i) and Lemma \ref{max<<min}\,(i,iii) that
\[ \Delta \le \Delta_u, \quad Z+1 \le T z, \]
respectively, where
\begin{gather*}
\Delta_u:=\max\{ t_1 {E_u}^3, 2.2 \cdot 10^4\log^2 c\} \cdot \min\{z,Z\},\\
T:=\begin{cases}
\, 4 & \text{for even $Z$}, \\
\, t_2 E_u & \text{for odd $Z$}.
\end{cases}
\end{gather*}
These together with \eqref{ineq-final-2-2} show that
\begin{equation}\label{ineq-final-2-3}
\frac{c^5}{2\,{\rm e}^{0.4}\,T\,\Delta_u}< z < \frac{t_3 \,c\,e\,\log^2 \bigl(2\,{\rm e}^{0.4}\,T\,\Delta_u\,z\bigr)}{(c-1)\log^2 c}.
\end{equation}
The combination of \eqref{ineq-final-2-1}, \eqref{ineq-final-2-3} (with Lemma \ref{max<<min}\,(i)) implies assertions (i,ii).
\vspace{0.1cm}{\it Case where $Z \le z.$} \
Similarly to the previous case, inequality \eqref{ineq-minzZ} leads to either
\[ \begin{array}{l}
Z \le \min \biggl\{\dfrac{25 t_3\,c\,e}{c-1}, \dfrac{c^5}{8\,{\rm e}^{0.4}}-1 \biggr\}
\ \ \ \text{or} \\
\dfrac{c^5}{2\,{\rm e}^{0.4}\Delta_u}-1<Z \le \dfrac{t_3\,c\,e\,\log^2 \bigl(2\,{\rm e}^{0.4}\Delta_u(Z+1)\bigr)}{(c-1)\log^2 c}.
\end{array} \]
This implies assertion (iii).
\end{proof}
The following is an easy consequence of \cite[Th\'eor\`eme 3]{LaMiNe} (cf.~\cite[Theorem 2.6]{Bu-book}).
\begin{prop} \label{bu-mig-thm2}
Let $\alpha$ be an algebraic number with $|\alpha|=1$ which is not a root of unity.
Put
\[
H(\alpha)=\max\bigl\{ D\,{\rm h}(\alpha)+22\,|\log \alpha|, \,40 \bigr\},
\]
where $D=[\mathbb{Q}(\alpha):\mathbb{Q}]$ and $\log$ denotes the principal value of the logarithm.
Then, for any positive integer $k,$
\[
\log|\alpha^k-1| \ge -\frac{9}{8}\,D^2\,H(\alpha)\,{\mathcal B}^2,
\]
where
\[
\mathcal B=\max\bigl\{ \log (k/25)+2.35+10.2/D, \, 34/D, \, 0.1/\sqrt{D/2}\, \bigr\}.
\]
\end{prop}
\begin{lem} \label{complex-baker}
Suppose that $Z>\chi z$ with a positive number $\chi>2$ and $Z$ is odd.
Then
\[
Z<\frac{9}{1-2/\chi} \left(1+\frac{22\pi}{\log c}\right) \bigl(\max \{ \log Z+4.3,\,17 \} \bigr)^2+1.
\]
\end{lem}
\begin{proof}
As seen in the proof of Lemma \ref{gauss-fac}\,(i), it holds that
\[
a^{X'}+b^{Y'} i=u \gamma^Z, \ \ a^{X'}-b^{Y'} i=\bar{u}\,{\bar{\gamma}}^{Z},
\]
where $u \in \{\pm1,\pm i\}$ and $\gamma \in \mathbb{Z}[i]$ is associated with $\beta$ or $\bar{\beta}$.
We may assume that $u=1$ since $Z$ is odd.
By eliminating the term $a^{X'}$ from the above two equations, since $Y'=1$ by Lemma \ref{X'1Y'1}, one has
\[
\biggl(\frac{\gamma}{\bar{\gamma}}\biggr)^Z-1=\frac{2bi}{{\bar{\gamma}}^{\,Z}}.
\]
Considering the absolute values of both sides above, since $b<c^z,|\bar{\gamma}|=|\beta|=c^{1/2}$, and $Z>\chi z$ by assumption, one obtains
\[
\left| (\gamma/\bar{\gamma})^Z-1 \right| < 2c^{z-Z/2}<2c^{-(1/2-1/\chi)Z}.
\]
To obtain a lower bound for the left-hand side above, we apply Proposition \ref{bu-mig-thm2} for $\alpha:=\gamma/\bar{\gamma}$ and $k:=Z$.
It is easy to see that the minimal polynomial of $\alpha$ over $\mathbb{Q}$ is $T^2 \pm(2-4/c)T+1$ for some sign.
From this it follows that $\alpha$ is quadratic and not an algebraic integer, further, ${\rm h}(\alpha)=\frac{1}{2}\log c$.
Since $|\log{\alpha}| \le \pi$, Proposition \ref{bu-mig-thm2} gives
\[
\log \left|(\gamma/\bar{\gamma})^Z-1 \right| \ge -\frac{\,9\,}{2}(\log c+22\pi)\,\bigl( \max \{ \log Z+4.3,\,17\}\bigr)^2.
\]
Finally, the two obtained bounds for $|(\gamma/\bar{\gamma})^Z-1|$ together imply the assertion.
\end{proof}
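The minimal polynomial claimed above can be verified with exact rational arithmetic. The snippet below (an illustrative check, written for the minus choice of sign with $\gamma=\beta$, so that $\alpha=\beta/\bar{\beta}=\beta^2/c$) confirms that $\alpha$ is a root of $T^2-(2-4/c)T+1$:

```python
from fractions import Fraction

def alpha_poly_residue(m):
    """Evaluate T^2 - (2 - 4/c)T + 1 at alpha = beta/conj(beta), beta = m + i.

    Returns the (real, imag) parts as exact fractions; both should vanish.
    """
    c = m * m + 1
    # alpha = beta^2 / c = ((m^2 - 1) + 2m i) / c
    ar, ai = Fraction(m * m - 1, c), Fraction(2 * m, c)
    t = Fraction(2) - Fraction(4, c)
    re = ar * ar - ai * ai - t * ar + 1
    im = 2 * ar * ai - t * ai
    return re, im

for m in (4, 16, 256):
    assert alpha_poly_residue(m) == (0, 0)
```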
\begin{lem} \label{Z-bound-sharp-thm2}
\[ Z< \begin{cases}
\, 6 \cdot 10^5 & \text{if $c=17$ and $Z$ is even}, \\
\, 3 \cdot 10^5 & \text{if $c=257$ and $Z$ is even},\\
\, 3.1 \cdot 10^5 & \text{if $c=65537$ and $Z$ is even},\\
\,2.8 \cdot 10^5 & \text{if $c=257$ and $Z$ is odd},\\
\,1.8 \cdot 10^5 & \text{if $c=65537$ and $Z$ is odd}.
\end{cases}\]
\end{lem}
\begin{proof}
The assertion for even $Z$ follows from the combination of Lemmas \ref{max<<min}\,(i,ii) and \ref{min<<1}\,(i,ii).
For the case where $Z$ is odd, we may assume by Lemmas \ref{max<<min}\,(ii) and \ref{min<<1}\,(i,iii) that $Z>\chi z$, where $\chi=2.29$ for $c=257$ and $\chi=2.24$ for $c=65537$.
Then applying Lemma \ref{complex-baker} yields the remaining assertions.
\end{proof}
Define the quantity $V$ as follows:
\[ V:=\begin{cases}
\,\nu_c \bigl({a(\beta,Z)}^{2e}+1\bigr) & \text{if $Z$ is even},\\
\,\nu_c \bigl({b(\beta,Z)}^{2e}-1\bigr) & \text{if $Z$ is odd},
\end{cases}\]
where
\[
a(\beta,Z):=\dfrac{1}{2}\,|\beta^Z+(-\bar{\beta})^Z|, \quad b(\beta,Z):=\dfrac{1}{2}\,|\beta^Z-(-\bar{\beta})^Z|.
\]
The number $V$ is an upper bound for $\min_{h \in \{a,b\}} \nu_c(h^E+1)$ by Lemmas \ref{gauss-fac}\,(i), \ref{X'1Y'1} and \ref{Evalue}, and it depends only on $c$ and $Z$ (recall that $c=m^2+1$ and $\beta=m+i$).
Heuristic observations in the study of Wieferich-type primes suggest that $V$ is very small in general.
For each $c$ and each $Z$ bounded from above as in Lemma \ref{Z-bound-sharp-thm2}, we use a computer to calculate $V$ (within 11 hours), and the result reads as follows:
\begin{lem} \label{Wieferich}
\[ V \le \begin{cases}
\,5 & \text{for $c=17$}, \\
\,3 & \text{for $c=257$},\\
\,2 & \text{for $c=65537$}.
\end{cases} \]
\end{lem}
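The computation of $V$ is straightforward to reproduce for small $Z$. Below is a Python sketch (illustrative, not the authors' code) that evaluates $V$ directly from the definition; note that for even $Z$ one has $a(\beta,Z)=|\operatorname{Re}(\beta^Z)|$, while for odd $Z$ one has $b(\beta,Z)=|\operatorname{Re}(\beta^Z)|$, so both cases use the real part. The cap of $16$ on the valuation is an arbitrary safety bound:

```python
def nu(n, c, cap=16):
    """c-adic valuation of n, capped at `cap` (a safety bound)."""
    v = 0
    while v < cap and n % c == 0:
        n //= c
        v += 1
    return v

def wieferich_V(m, Z):
    """V from the definition above, for c = m^2 + 1 with m = 2^e."""
    c, e = m * m + 1, m.bit_length() - 1
    re, im = 1, 0
    for _ in range(Z):                      # beta^Z for beta = m + i
        re, im = re * m - im, re + im * m
    # even Z: a(beta,Z) = |Re(beta^Z)|; odd Z: b(beta,Z) = |Re(beta^Z)|
    h = abs(re)
    t = pow(h, 2 * e, c ** 16) + (1 if Z % 2 == 0 else -1)
    return nu(t, c)
```

For instance, the bounds of the lemma can be spot-checked over a small range of $Z$ in well under a second.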
\begin{lem} \label{minzZ-very-sharp-thm2}
$Z<36000$ and
\[ \min\{z,Z\} \le \begin{cases}
\,12 & \text{for $c=17$}, \\
\,6 & \text{for $c=257$},\\
\,3 & \text{for $c=65537$}.
\end{cases} \]
\end{lem}
\begin{proof}
Lemma \ref{DeltaDelta'}\,(iii) says that $c^{\,\min\{z,Z\}}/\Delta'$ divides $h^E+1$ for each $h \in \{a,b\}$.
Since one may assume that $\Delta'<c^{\,\min\{z,Z\}/3}$ by Lemma \ref{max<<min}\,(ii), it follows that $
(2/3)\min\{z,Z\}<\nu_c(h^E+1) \le V$.
Now the second assertion follows from Lemma \ref{Wieferich}.
From this, for the first assertion we may assume that $Z>200z$.
Then $Z$ is odd by Lemma \ref{max<<min}\,(i).
Applying Lemma \ref{complex-baker} with $\chi=200$ gives the first assertion.
\end{proof}
We are ready to complete the proof of Theorem \ref{th3}.
\begin{proof}[Proof of Theorem $\ref{th3}$]
First suppose that $z \le Z$.
Then $z \le 12$ by Lemma \ref{minzZ-very-sharp-thm2}.
Lemmas \ref{X'1Y'1} and \ref{minzZ-very-sharp-thm2} say that $a=a(\beta,Z)$ or $b=b(\beta,Z)$, and that $Z < 36000$, respectively.
Then one can use a computer to check that $\min\{a(\beta,Z),b(\beta,Z)\}>c^{12}$ whenever $Z \ge 26$.
From equation \eqref{1st-th3} it turns out that $Z<26$.
A brute-force computation then suffices to check that the system of equations \eqref{1st-th3}, \eqref{2nd'-th3} does not hold in any possible case (with $Z \ge 4$).
Finally suppose that $z>Z$.
We know that $Z$ is even with $Z \le 12$ for $c=17$, and $Z \le 6$ for $c=257$.
Note that $c \ne 65537$ as $Z \ge 4$.
It is easy to see that $X'=1,Y'=1$, so that $a=a(\beta,Z),b=b(\beta,Z)$.
For each $m$ and each possible $Z$ one can fortunately find a positive integer $d>1$ satisfying either
\begin{equation}\label{Jacobi-2-th1}
\begin{split}
&d \mid a, \ \Big(\frac{b}{d}\Big)=-1, \ \left(\frac{c}{d}\right)=1 \quad \text{or}\\
&d \mid b, \ \left(\frac{a}{d}\right)=-1, \ \left(\frac{c}{d}\right)=1,
\end{split}
\end{equation}
where $\left(\frac{\cdot}{\cdot}\right)$ denotes the Jacobi symbol.
More precisely, $d$ (with $d \mid h$) is taken as in the following table:
\[
\begin{tabular}{c|cccccccc}
$c$ & 17 & 17 & 17 & 17 & 17 & 257 & 257 & 257 \\
$Z$ & 4 & 6 & 8 & 10 & 12 & 4 & 5 & 6 \\ \hline
$d$ & 15 & 15 & 15 & 19 & 47 & 15 & 139 & 6 \\
$h$ & $b$ & $a$ & $b$ & $b$ & $b$ & $b$ & $a$ & $b$
\end{tabular}
\]
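The conditions \eqref{Jacobi-2-th1} for the rows of this table with odd $d$ can be double-checked mechanically (for the row with $d=6$ one would need the Kronecker symbol instead of the Jacobi symbol). The following Python sketch (a verification aid, not part of the paper) recomputes $a(\beta,Z)$, $b(\beta,Z)$ and the Jacobi symbols:

```python
def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0, by the standard binary algorithm."""
    a %= n
    r = 1
    while a:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                r = -r
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            r = -r
        a %= n
    return r if n == 1 else 0

def ab_pair(m, Z):
    """(a(beta,Z), b(beta,Z)) for beta = m + i, computed exactly."""
    re, im = 1, 0
    for _ in range(Z):
        re, im = re * m - im, re + im * m
    return (abs(re), abs(im)) if Z % 2 == 0 else (abs(im), abs(re))

# rows (c, Z, d, h) of the table with odd d
rows = [(17, 4, 15, 'b'), (17, 6, 15, 'a'), (17, 8, 15, 'b'),
        (17, 10, 19, 'b'), (17, 12, 47, 'b'),
        (257, 4, 15, 'b'), (257, 5, 139, 'a')]
for c, Z, d, which in rows:
    m = int((c - 1) ** 0.5 + 0.5)
    a, b = ab_pair(m, Z)
    h, other = (a, b) if which == 'a' else (b, a)
    assert h % d == 0 and jacobi(other, d) == -1 and jacobi(c, d) == 1
```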
On the other hand, reducing equation \eqref{1st-th3} modulo such $d$ implies that
\begin{equation}\label{Jacobi-2-th2}
\begin{split}
\Big(\dfrac{b}{d}\Big)^y=\left(\dfrac{c}{d}\right)^z \ \ \text{if $d \mid a$},\\
\left(\dfrac{a}{d}\right)^x=\left(\dfrac{c}{d}\right)^z \ \ \text{if $d \mid b$}.
\end{split}
\end{equation}
However the combination of \eqref{Jacobi-2-th1}, \eqref{Jacobi-2-th2} implies that $x$ or $y$ is even, contradicting Lemma \ref{xyodd}.
\end{proof}
\section{Concluding remarks} \label{sec-rem}%
As introduced in Section \ref{sec-intro}, each of Theorems \ref{th1} and \ref{th3} gives a 3-variable version of some work in \cite{Be_cjm_01}, and this complements the work of \cite{MiPi} in a sense.
Unfortunately, the method of this paper does not seem sufficient for similar attempts on Bennett's other results (\cite[Theorems 1.4 to 1.6]{Be_cjm_01}), for instance, for the case where $a$ or $b$ is fixed.
Indeed, such a case seems to be much harder than the case where $c$ is fixed.
One reason is that even the case where $b=2$ in equation \eqref{pillai} remains unresolved, with only partial results obtained in \cite{Lu_indag_03,ScSt_jnt_04}.
Thus a genuinely new idea is surely needed for this purpose.
Finally, for ambitious readers, we leave a few problems, concerning Theorems \ref{th1} and \ref{th3}, for which the method of this paper can be applied in principle.
\begin{prob}
Prove Conjecture $\ref{atmost1conj}$ for each of the following cases$:$
\begin{itemize}
\item[\rm (i)]
each of $a$ and $b$ is congruent to $1$ or $-1$ modulo $\prod_{p \mid c}p.$
\item[\rm (ii)]
$e_c(a)=e_c(b)$ with $e_c(a)$ even and $c$ is a prime.
\end{itemize}
\end{prob}
It seems that for handling (i) one needs a more extensive computation than that needed for proving Theorem \ref{th1}, and that for (ii) one needs to find absolute upper bounds for the corresponding solutions.
\section{Introduction}
Sign language, as a visual language, is the primary communication tool for the deaf community.
To facilitate the communication between the deaf and hearing people, sign language recognition~(SLR) has been widely studied with broad social influence.
Isolated SLR serves as a fundamental task in visual sign language research.
It aims to recognize sign language at the word-level and is a challenging fine-grained classification problem.
\begin{figure}
\centering
\includegraphics[width=1.0\linewidth]{figure/Intro.pdf}
\caption{The overview of our framework, which contains self-supervised pre-training and downstream-task fine-tuning.}
\label{fig:intro}
\vspace{-0.4cm}
\end{figure}
Hand gestures play a dominant role in the expression of sign language.
The hands occupy a relatively small area against dynamic backgrounds, exhibit similar appearances and encounter self-occlusion among joints.
These facts make hand representation learning difficult.
Current deep-learning-based methods~\cite{camgoz2017subunets,koller2019weakly,huang2018video} learn feature representations adaptively from the cropped RGB hand sequence.
Given the highly articulated nature of hands, some methods represent them as sparse poses for recognition~\cite{albanie2020bsl,li2020word,joze2018ms}.
Pose is a compact and semantic representation, which is robust to appearance change and brings potential computational efficiency.
However, hand poses are usually extracted by an off-the-shelf extractor, which suffers from detection failures.
As a result, the performance of pose-based methods lags far behind that of their RGB-based counterparts.
Besides, the aforementioned methods all follow a purely data-driven paradigm and may suffer from insufficient interpretability and overfitting due to limited sign data sources.
Meanwhile, the effectiveness of pre-training has been validated for computer vision~(CV) and natural language processing~(NLP).
Recent advances in NLP are largely derived from self-supervised pre-training strategies on large text corpora~\cite{radford2018improving,devlin2018bert,yang2019xlnet}.
Among them, BERT~\cite{devlin2018bert} is one of the most popular methods due to its simplicity and superior performance.
Its success is largely attributed to the powerful attention-based Transformer backbone~\cite{vaswani2017attention}, jointly with a well-designed pre-training strategy for modeling context inherent in text sequence.
To tackle the aforementioned issues, we develop a self-supervised pre-trainable framework with model-aware hand prior incorporated, namely SignBERT, as shown in Figure~\ref{fig:intro}.
Considering the compactness and expressiveness of hand pose representation, we view hand pose as a visual token.
Each hand token is embedded with gesture state, temporal and hand chirality information, and both hands are involved as input.
SignBERT first performs self-supervised pre-training on a large volume of hand pose data, which is derived from sign language data sources using an off-the-shelf extractor.
Specifically, inspired by BERT~\cite{devlin2018bert}, we pre-train our framework on the encoder-decoder backbone by masking and reconstructing visual tokens.
We design several mask modeling strategies to enforce the network capturing hierarchical contextual information.
To better capture context and ease optimization, the decoder introduces hand prior in a model-aware method.
For the downstream isolated SLR, the pre-trained encoder is fine-tuned with the added prediction head to perform recognition.
Our contributions are summarized as follows,
\vspace{-0.26cm}
\begin{itemize}
\setlength{\itemsep}{0pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item To our best knowledge, we propose the \emph{first} model-aware pre-trainable framework for sign language recognition, namely SignBERT.
It performs self-supervised learning on a large volume of hand pose data for better performance on the downstream task.
\item To better exploit hierarchical contextual information contained in the sign data sources, we design mask modeling strategies and incorporate model-aware hand prior during self-supervised pre-training.
\item We perform extensive experiments to validate the feasibility of our framework and its effectiveness on the downstream SLR task.
Our method achieves state-of-the-art performance on four popular benchmarks, \emph{i.e.,} NMFs-CSL, SLR500, MSASL and WLASL.
\end{itemize}
\vspace{-0.2cm}
\section{Related Work}
In this section, we will briefly review the related topics, including sign language recognition, pre-training strategy and hand-modeling technique.
\subsection{Sign Language Recognition}
Previous works~\cite{koller2020quantitative} on sign language recognition are generally grouped into two categories based on the input modality, \emph{i.e.}, RGB-based~(using the RGB video) and pose-based~(using the pose sequence) methods.
\noindent \textbf{RGB-based methods.}
With the strong representation capability of CNNs, many works in SLR adopt it as the backbone~\cite{cheng2020fully,koller2018deep,joze2018ms,zhou2021improving}.
Necati~\emph{et al.}~\cite{camgoz2020sign} introduce a network consisting of 2D-CNNs for spatial representation and Transformer for modeling temporal dependencies by supervised learning.
Some other works~\cite{huang2018attention, joze2018ms, li2020transfer, li2020word, albanie2020bsl} utilize 3D-CNNs for modeling spatio-temporal information.
\noindent \textbf{Pose-based methods.}
As compact and semantic-aware data, pose sequences are processed by CNNs~\cite{li2018co, cao2018skeleton, albanie2020bsl} or RNNs~\cite{du2015hierarchical,min2020efficient,song2017end}.
Considering its well-structured nature, more and more works represent it as a graph and adopt graph convolutional networks~(GCNs) to model its representation~\cite{du2015hierarchical, song2017end,tunga2020pose}.
Yan~\emph{et al.} \cite{yan2018spatial} first propose a spatial-temporal GCN for action recognition.
These GCN-based methods show both efficiency and promising performance.
There also exists work combining Transformer without pre-training for SLR~\cite{tunga2020pose}.
\subsection{Pre-Training Strategy}
Pre-training, a common strategy in NLP and CV, produces more generic feature representations and may alleviate overfitting on target tasks.
In NLP tasks, early works focus on improving word embeddings~\cite{pennington2014glove,kiros2015skip}.
With the advance of Transformer~\cite{vaswani2017attention}, many works propose to pre-train generic feature representations~\cite{devlin2018bert, radford2018improving, yang2019xlnet}.
Of them, BERT is one of the most popular methods due to its simplicity and superior performance.
Specifically, two tasks are adopted in BERT pre-training, \emph{i.e.,} masked language modeling~(MLM) and next sentence prediction~(NSP).
In MLM, BERT attempts to predict the masked words based on the cues from unmasked context words.
In NSP, it defines a binary classification problem, which tries to predict whether two input sentences are consecutive.
In CV counterparts, it is common to pre-train the backbone on ImageNet~\cite{deng2009imagenet}, Kinetics~\cite{carreira2017quo} or large web sources~\cite{duan2020omni} for the downstream tasks.
There also exist works attempting to leverage the idea of BERT to CV tasks~\cite{sun2019videobert,Su2020VLBERT,li2020unicoder,Zhu_2020_CVPR,chen2020generative}.
In sign language, Albanie~\emph{et al.}~\cite{albanie2020bsl} propose to pre-train on a large annotated dataset and directly fine-tune on a small-scale one.
Li~\emph{et al.}~\cite{li2020transfer} fertilize recognition models by transferring knowledge of subtitled news sign videos to them.
To our best knowledge, there exists no work focusing on the self-supervised pre-training for SLR.
\subsection{Hand-Modeling Technique}
There have been many works to model the hand using various techniques, including sum-of-Gaussians~\cite{sridhar2013interactive}, shape primitives~\cite{oikonomidis2014evolutionary, qian2014realtime} and sphere-meshes~\cite{tkach2016sphere}.
In order to model the hand shape more precisely, some works~\cite{ballan2012motion, tzionas2016capturing} propose to utilize a triangulated mesh with Linear Blend Skinning~(LBS)~\cite{lewis2000pose}.
Recently, MANO~\cite{romero2017embodied} has become the most popular model with successful applications~\cite{habermann2020deepcap, boukhayma20193d, hu2021model, hu2021hand}.
As a statistical model, MANO is learned from a large volume of high-quality hand scans.
Considering its capability of representing hand geometric changes in the low-dimensional shape and pose space, we adopt it as a constraint in the pose decoder to import hand prior.
\begin{figure*}
\centering
\includegraphics[width=1.0\linewidth]{figure/Framework.pdf}
\caption{Illustration of our SignBERT framework, which contains self-supervised pre-training and fine-tuning for the downstream sign language recognition. The pre-extracted 2D hand pose sequence of both hands is fed into the framework. Each hand pose is viewed as a visual token, embedded with gesture state, temporal and hand chirality information. In self-supervised pre-training, we design several mask modeling strategies and incorporate model-aware hand prior to better exploit hierarchical contextual representation. For the downstream SLR task, the pre-trained Transformer encoder is fine-tuned with the prediction head to perform recognition.}
\label{fig:overview}
\vspace{-0.2cm}
\end{figure*}
\section{Our Approach}
\noindent \textbf{Overview.}
As shown in Figure~\ref{fig:overview}, SignBERT contains two stages, \emph{i.e.,} pre-training for modeling context in sign videos and fine-tuning for the downstream SLR task.
The hand poses, as visual tokens, are embedded with their gesture state, temporal and hand chirality information.
Since sign language is performed by two hands, we jointly feed them into our framework.
During pre-training, the whole framework works in a self-supervised paradigm by masking and reconstructing visual tokens.
Jointly with the mask modeling strategies, the decoder incorporates hand prior to better capture the hierarchical context of both hands and the temporal dependencies during signing.
When applying SignBERT to downstream recognition task, the hand-model-aware decoder is replaced by the prediction head, which is learned in a supervised paradigm by the corresponding video label.
In the following, we will first elaborate each component of our framework.
Then we will describe the proposed pre-training and fine-tuning procedures, respectively.
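As a toy illustration of the mask-and-reconstruct objective above (a simplified sketch, not the paper's exact masking recipe), the following Python snippet corrupts a fraction of per-frame gesture tokens so that a decoder could be trained to reconstruct only the masked positions:

```python
import random

def mask_hand_tokens(tokens, mask_ratio=0.15, seed=0):
    """Randomly choose ~mask_ratio of the per-frame gesture tokens,
    replace them with a zero 'masked' token, and return both the
    corrupted sequence and the masked indices (illustrative only)."""
    rng = random.Random(seed)
    T = len(tokens)
    k = max(1, int(T * mask_ratio))
    masked = set(rng.sample(range(T), k))
    corrupted = [[0.0] * len(tok) if t in masked else list(tok)
                 for t, tok in enumerate(tokens)]
    return corrupted, sorted(masked)
```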
\subsection{Framework Architecture}
The hand pose in each frame is viewed as a visual token.
For each visual token, its input representation is constructed by summing the corresponding gesture state, temporal and hand chirality embeddings.
\noindent \textbf{Gesture state embedding $f_p$.}
Since the hand pose is well-structured with the physical connection among joints, we organize it as a spatial graph.
In this work, we adopt the spectral-based GCN from~\cite{cai2019exploiting, yan2018spatial} with a few modifications.
Given a 2D hand pose $\widetilde{J}_{t}$ representing the 2D locations~(x and y coordinates) at frame $t$, an undirected spatial graph is defined by the node set $V$ and the edge set $E$.
The node set includes all the corresponding hand joints, while the edge set contains the physical and symmetrical connections.
The hand pose sequence is first fed into several graph convolutional layers frame-by-frame.
Then graph pooling is performed based on neighbors to generate the frame-level semantic representation $f_{p,t}$.
\noindent \textbf{Temporal embedding $f_o$.}
Temporal information matters in video-level SLR.
Since self-attention does not consider the order information, we add the temporal order information by utilizing the position encoding strategy in~\cite{vaswani2017attention}.
Specifically, for the same hand, we add different temporal embeddings for different moments.
Meanwhile, since both hands simultaneously convey meaning during signing, we add the same temporal embedding for the same moment, regardless of hand chirality.
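As a minimal sketch (assuming the standard sinusoidal encoding of Vaswani \emph{et al.}; the dimensionality here is illustrative), both hands at the same frame share one temporal embedding:

```python
import math

def temporal_embedding(t, d_model):
    """Sinusoidal position encoding for frame index t (Vaswani et al. style)."""
    emb = []
    for i in range(d_model // 2):
        freq = 1.0 / (10000 ** (2 * i / d_model))
        emb.append(math.sin(t * freq))
        emb.append(math.cos(t * freq))
    return emb

# Both hands at frame 5 get the identical embedding, regardless of chirality.
left_t5 = temporal_embedding(5, 8)
right_t5 = temporal_embedding(5, 8)
assert left_t5 == right_t5
```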
\noindent \textbf{Hand chirality embedding $f_h$.}
Considering that the meaning of sign language is conveyed by both hands, we introduce two special tokens to represent the hand chirality of each frame, \emph{i.e.,} `L' and `R' for the left and right hand, respectively.
Specifically, it is implemented by the WordPiece embeddings~\cite{wu2016google} with the same dimension as the gesture state and temporal embeddings.
Notably, all the frames belonging to the same hand contain the identical hand chirality embedding.
\noindent \textbf{Transformer encoder.}
Given the aforementioned embeddings representing the gesture state, temporal index and hand chirality, we sum them and feed the result into the Transformer encoder following the original architecture~\cite{vaswani2017attention}, which contains a multi-head attention module and a feed-forward network.
The encoder output $\mathbf{F}_N$, which retains the same size as the input, is computed as follows,
\begin{equation}
\begin{split}
\mathbf{F}_0 &= \{f_{p}+f_{o}+f_{h}\}, \\
\widetilde{\mathbf{F}}_i &= L(M(\mathbf{F}_{i-1}) + \mathbf{F}_{i-1}), \\
\mathbf{F}_i &= L(C(\widetilde{\mathbf{F}}_i) + \widetilde{\mathbf{F}}_i),
\end{split}
\end{equation}
where $i$ denotes the $i$-th layer of the Transformer encoder, and we utilize $N$ layers in total.
$L(\cdot)$, $M(\cdot)$ and $C(\cdot)$ denote layer normalization, multi-head self-attention and the feed-forward network, respectively.
$\mathbf{F}_i$ denotes the feature representation in $i$-th layer.
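A toy pure-Python sketch of the per-layer dataflow in the equations above; for illustration only, the query/key/value projections and the feed-forward network $C(\cdot)$ are reduced to identities, so this shows the residual-plus-normalization structure rather than a trainable layer:

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    z = sum(es)
    return [e / z for e in es]

def layer_norm(v, eps=1e-5):
    mu = sum(v) / len(v)
    var = sum((x - mu) ** 2 for x in v) / len(v)
    return [(x - mu) / math.sqrt(var + eps) for x in v]

def self_attention(F):
    """Toy single-head M(.) with identity query/key/value projections."""
    d = len(F[0])
    out = []
    for q in F:
        w = softmax([sum(a * b for a, b in zip(q, k)) / math.sqrt(d) for k in F])
        out.append([sum(w[j] * F[j][i] for j in range(len(F))) for i in range(d)])
    return out

def encoder_layer(F):
    """F~ = L(M(F) + F); with C(.) as identity, F_out = L(C(F~) + F~) = L(2 F~)."""
    A = self_attention(F)
    F_tilde = [layer_norm([a + f for a, f in zip(ar, fr)]) for ar, fr in zip(A, F)]
    return [layer_norm([2 * x for x in row]) for row in F_tilde]
```

The output keeps the same number of tokens and feature dimension as the input, mirroring the statement that $\mathbf{F}_N$ retains the input size.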
\noindent \textbf{Hand-model-aware decoder.}
In our self-supervised pre-training paradigm, the framework needs to reconstruct the masked input sequence, for which the hand-model-aware decoder converts the features back to a pose sequence.
Specifically, a fully-connected layer $D(\cdot)$ first extracts a latent semantic embedding describing the hand status and camera parameters from the representation generated by the Transformer encoder, which is formulated as follows,
\begin{equation}
\label{equ:cnn_tcn}
\mathbf{F}_{la} = \{\bm{\theta}, \bm{\beta}, \mathbf{c}_r, \mathbf{c}_o, c_s\}_{t=1}^T = D(\mathbf{F}_N),
\end{equation}
where $\bm{\theta} \in \mathbb{R}^{25}$ and $\bm{\beta} \in \mathbb{R}^{10}$ are the pose and shape embedding for the following MANO, while $\mathbf{c}_r \in \mathbb{R}^{3\times3}$, $\mathbf{c}_o \in \mathbb{R}^{2}$, and $c_s \in \mathbb{R}$ are the weak-perspective camera parameters, indicating the rotation, translation and scale, respectively.
Then MANO~\cite{romero2017embodied} imports hand prior in a model-aware manner and decodes the latent semantic embedding into a hand representation.
MANO is a fully-differentiable model providing a mapping from low-dimensional pose $\bm{\theta}$ and shape $\bm{\beta}$ space to the triangulated hand mesh $\mathbf{M} \in \mathbb{R}^{N_v \times 3}$ with $N_v=$778 vertices and $N_f=$1538 faces.
To produce a physically plausible mesh, the pose and shape are constrained in a PCA space learned from a large volume of hand scan data.
The decoding process is formulated as follows,
\begin{equation}
\label{equ:mano}
\mathbf{M}(\bm{\beta}, \bm{\theta}) = W(\mathbf{T}(\bm{\beta}, \bm{\theta}), J(\bm{\beta}), \bm{\theta}, \mathbf{W}),
\vspace{-0.5cm}
\end{equation}
\begin{equation}
\label{equ:mano2}
\mathbf{T}(\bm{\beta}, \bm{\theta}) = \bar{\mathbf{T}} + B_S(\bm{\beta}) + B_P(\bm{\theta}),
\end{equation}
where $\mathbf{W}$ is a set of blend weights.
$B_S(\cdot)$ and $B_P(\cdot)$ denote shape and pose blend functions, respectively.
The hand template $\bar{\mathbf{T}}$ is first posed and skinned based on the pose and shape corrective blend shapes, \emph{i.e.,} $B_P(\bm{\theta})$ and $B_S(\bm{\beta})$.
Then the mesh is generated by rotating each part around joints $J(\bm{\beta})$ using the linear skinning function $W(\cdot)$~\cite{kavan2005spherical}.
Besides, we are able to extract sparse 3D joints $\widetilde{J}_{3D}$ from the mesh.
To keep consistent with the widely-used hand annotation format, we further add 5 extra vertices with the indices 333, 443, 555, 678 and 734 as the fingertips, leading to a total of 21 3D joints.
Based on the predicted camera parameters, the predicted 3D joints are projected onto the 2D plane.
The projected 2D hand pose is derived as follows,
\begin{equation}
\label{equ:weak}
\widetilde{J}_{2D} = c_s\prod{({\mathbf{c}_{r}}{{\widetilde{J}}_{3D}})}+\mathbf{c}_o,
\end{equation}
where $\prod(\cdot)$ denotes the orthographic projection.
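The weak-perspective projection above amounts to rotating the joints, dropping the depth coordinate, then scaling and translating; a minimal sketch (variable names are ours for illustration):

```python
def weak_perspective_project(J3d, R, c_o, c_s):
    """J2d = c_s * Pi(R @ J3d) + c_o, where Pi drops the z coordinate
    (orthographic projection)."""
    J2d = []
    for x, y, z in J3d:
        rx = R[0][0] * x + R[0][1] * y + R[0][2] * z
        ry = R[1][0] * x + R[1][1] * y + R[1][2] * z
        J2d.append((c_s * rx + c_o[0], c_s * ry + c_o[1]))
    return J2d

# With identity rotation, unit scale and zero offset, projection
# simply discards the z coordinate.
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(weak_perspective_project([(1.0, 2.0, 3.0)], I, (0.0, 0.0), 1.0))  # [(1.0, 2.0)]
```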
\noindent \textbf{Prediction head.}
Since discriminative cues may appear in only certain frames, we utilize a simple attention mechanism to weight features temporally.
Then the weighted features are summed to perform final classification.
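A minimal sketch of such temporal attention pooling (the scoring vector $w$ stands in for the learned attention parameters, which are our illustrative assumption):

```python
import math

def attention_pool(features, w):
    """Score each frame feature with vector w, softmax the scores over time,
    and return the attention-weighted sum of frame features."""
    scores = [sum(wi * fi for wi, fi in zip(w, f)) for f in features]
    m = max(scores)
    es = [math.exp(s - m) for s in scores]
    z = sum(es)
    alphas = [e / z for e in es]
    d = len(features[0])
    return [sum(alphas[t] * features[t][i] for t in range(len(features)))
            for i in range(d)]
```

The pooled vector is then passed to a classifier for the final prediction.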
\subsection{Pre-Training SignBERT}
In this section, we elaborate on the SignBERT pre-training paradigm over a large volume of sign data sources to exploit semantic context hierarchically.
Different from the original BERT pre-training on discrete word space, we aim to pre-train on continuous hand pose space.
Essentially, the classification problem is transformed into regression, which poses new challenges for the reconstruction of the hand pose sequence.
To tackle this issue, we view hand poses as visual `words' (continuous tokens) and jointly utilize the aforementioned model-aware decoder as a constraint with hand prior incorporated.
Given a hand sequence containing both hands, we first randomly choose 50\% of the tokens.
Similar to BERT, if the token is chosen, we randomly perform one of three operations with equal probability, \emph{i.e.,} masked joint modeling, masked frame modeling and identity modeling.
\noindent \textbf{Masked joint modeling.}
Since current pose detectors may fail on some joints, we incorporate masked joint modeling to mimic these common failure cases.
In a chosen token, we randomly choose $m$ joints, where $m$ ranges from 1 to $M$.
For these chosen joints, we perform two operations with equal probability, \emph{i.e.,} zero masking~(masking the coordinates of joints with zeros) or random spatial disturbance.
This modeling attempts to endow our framework with the capability to infer the gesture state from the remaining hand joints, thus capturing context at the joint level.
\noindent \textbf{Masked frame modeling.}
Masked frame modeling is performed on a more holistic view.
For a chosen token, all the joints are zero masked.
The framework is enforced to reconstruct this token from the remaining pose tokens of the other hand or from different temporal points.
In this way, temporal context in each hand and mutual context between hands are captured.
\noindent \textbf{Identity modeling.}
Identity modeling feeds the chosen token into the framework unchanged.
This operation is indispensable for the framework to learn identity mapping on those unmasked tokens.
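The three per-token operations can be sketched as follows (a hedged illustration: the 50\% token-selection step is omitted, and the value of $M$ and the disturbance magnitude are our illustrative choices):

```python
import random

def mask_token(pose, rng, M=6):
    """Apply one of the three modeling ops to a hand pose (list of (x, y) joints)."""
    op = rng.choice(["joint", "frame", "identity"])
    pose = [list(j) for j in pose]
    if op == "joint":
        m = rng.randint(1, M)                       # masked joint count in [1, M]
        for idx in rng.sample(range(len(pose)), m):
            if rng.random() < 0.5:
                pose[idx] = [0.0, 0.0]              # zero masking
            else:                                   # random spatial disturbance
                pose[idx] = [c + rng.uniform(-0.05, 0.05) for c in pose[idx]]
    elif op == "frame":
        pose = [[0.0, 0.0] for _ in pose]           # mask every joint in the frame
    # "identity": the token passes through unchanged
    return op, pose
```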
\subsection{Objective Functions in Pre-Training}
The proposed three strategies allow the network to maximize the likelihood of the joint probability distribution to reconstruct the hand pose sequence.
In this manner, the context contained in the sequence is captured.
During pre-training, only the outputs corresponding to the chosen tokens are included in the loss calculation as follows,
\begin{equation}
\label{equ:pre-train}
\mathcal{L} = \mathcal{L}_{rec} + \lambda \mathcal{L}_{reg},
\end{equation}
where $\lambda$ denotes the weighting factor.
\noindent \textbf{Hand reconstruction loss $\mathcal{L}_{rec}$.}
Since the hand pose detection results $J_{2D}$ serve as pseudo labels, we ignore the joints with prediction confidence lower than $\epsilon$ and weight the remaining joints by their confidence in the calculation of this loss term.
\begin{equation}
\small
\label{equ:rec}
\mathcal{L}_{rec} = \sum\limits_{t, j}\mathds{1}(c(t,j)\geq\epsilon)\,c(t,j){{\left\|\widetilde{J}_{2D}(t,j) - J_{2D}(t,j) \right\|}_{1}},
\end{equation}
where $\mathds{1}(\cdot)$ denotes the indicator function, and $c(t,j)$ denotes the confidence of the ${J}_{2D}$ with joint $j$ at time $t$.
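A minimal per-joint sketch of this confidence-weighted L1 loss:

```python
def rec_loss(pred, target, conf, eps=0.5):
    """Confidence-weighted L1 loss over (x, y) joints, ignoring joints whose
    detection confidence falls below the threshold eps."""
    loss = 0.0
    for p, t, c in zip(pred, target, conf):
        if c >= eps:
            loss += c * (abs(p[0] - t[0]) + abs(p[1] - t[1]))
    return loss
```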
\noindent \textbf{Regularization loss $\mathcal{L}_{reg}$.}
To ensure that the hand model works properly, a regularization loss is added.
It is implemented by constraining the magnitude and derivative of the MANO inputs, which is responsible for generating a plausible mesh and keeping the signer identity unchanged.
The regularization loss is calculated as follows,
\begin{equation}
\label{equ:reg}
\mathcal{L}_{reg} = \sum\limits_{t}( {\left\|\theta_t\right\|}_{2}^2 + w_{\beta}{\left\|\beta_t \right\|}_{2}^2 + w_{\delta}{\left\|\beta_{t} - \beta_{t-1} \right\|}_{2}^{2}),
\end{equation}
where $w_{\beta}$ and $w_{\delta}$ denote the weighting factors.
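A minimal sketch of this regularization term (per-frame pose/shape vectors are illustrative placeholders):

```python
def reg_loss(thetas, betas, w_beta=10.0, w_delta=100.0):
    """L2 penalty on MANO pose/shape magnitudes plus a temporal smoothness
    term on the shape, keeping the signer identity stable across frames."""
    sq = lambda v: sum(x * x for x in v)
    loss = sum(sq(th) for th in thetas) + w_beta * sum(sq(b) for b in betas)
    loss += w_delta * sum(sq([a - b for a, b in zip(betas[t], betas[t - 1])])
                          for t in range(1, len(betas)))
    return loss
```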
\subsection{Fine-Tuning SignBERT}
After pre-training SignBERT, it is relatively simple to fine-tune it for the downstream SLR task.
The hand-model-aware decoder is replaced by the prediction head.
The input hand pose sequence is left fully unmasked, and we use the cross-entropy loss to supervise the output of the prediction head.
Considering that the hand pose sequence alone is insufficient to convey the full meaning of sign language, it is necessary to fuse the hand-based recognition results with those of the full frame.
The full frame can be represented by full RGB data or full keypoints.
In our work, we use the simple late fusion strategy, which directly sums their prediction results.
Besides, the full RGB and keypoint baseline methods utilized for fusion are marked for each dataset for clarity.
In the following, we refer to our method with only hands, the fusion of hands and full RGB data, and the fusion of hands and full keypoints as \textbf{Ours~(H)}, \textbf{Ours~(H + R)} and \textbf{Ours~(H + P)}, respectively.
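The late fusion step is a direct sum of per-class prediction scores from the two streams; a minimal sketch:

```python
def late_fusion(scores_hand, scores_full):
    """Late fusion: sum per-class scores from the hand-only stream and the
    full-frame (RGB or keypoint) stream, then take the argmax as Top-1."""
    fused = [a + b for a, b in zip(scores_hand, scores_full)]
    top1 = max(range(len(fused)), key=fused.__getitem__)
    return fused, top1
```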
\section{Experiments}
\subsection{Datasets and Evaluation}
\noindent
\textbf{Datasets.}
We evaluate our proposed method on four public sign language datasets, including NMFs-CSL \cite{hu2020global}, SLR500 \cite{huang2018attention}, MSASL \cite{joze2018ms} and WLASL \cite{li2020word}.
\textbf{NMFs-CSL} is the most challenging Chinese sign language~(CSL) dataset due to a large variety of confusing words caused by fine-grained cues.
It contains 1,067 words in total, with 610 confusing words and 457 normal words.
There are 25,608 and 6,402 samples for training and testing, respectively.
\textbf{SLR500} is another CSL dataset, which contains 500 daily words with 125,000 recording samples performed by 50 signers.
Specifically, 90,000 and 35,000 samples are utilized for training and testing, respectively.
\textbf{MSASL} is an American sign language~(ASL) dataset containing a vocabulary size of 1,000, with 25,513 samples in total for training, validation and testing.
Besides, the Top-100 and Top-200 most frequent words are chosen as its two subsets, referred to as MSASL100 and MSASL200.
\textbf{WLASL} is another ASL dataset with a vocabulary of 2,000 words and 21,083 samples.
Similar to MSASL, it releases WLASL100 and WLASL300 as its subsets.
MSASL and WLASL are both collected from Web videos and bring new challenges due to unconstrained real-life recording conditions and limited samples for each word.
Meanwhile, since STB~\cite{zhang2017hand} and HANDS17~\cite{yuan20172017} provide 2D hand joint annotations, we utilize them to validate the feasibility of our proposed framework.
\textbf{STB} is a real-world hand pose estimation dataset, which contains 18,000 samples.
Following Zimmermann~\emph{et al.} \cite{zimmermann2017learning}, we split this dataset into 15,000 training and 3,000 testing samples for single-frame validation.
\textbf{HANDS17} is a video-level hand pose estimation dataset, containing a total of 292,820 frames from 99 video sequences.
In this dataset, we split the first 70$\%$ and last 30$\%$ frames in each sequence for training and testing, respectively.
\noindent
\textbf{Evaluation.}
For the downstream isolated SLR task, we utilize the accuracy metrics, \emph{i.e.,} the per-class~(\textbf{P-C}) and per-instance~(\textbf{P-I}) metrics, which denote the average accuracy over each class and each instance, respectively.
We report the Top-1 and Top-5 accuracy under both per-instance and per-class for MSASL and WLASL.
Since NMFs-CSL and SLR500 contain the same number of samples for each class, we only report per-instance accuracy following~\cite{hu2020global, huang2018attention}.
For STB and HANDS17, we report the Percentage of Correct Keypoints (PCK) score and the area under the curve (AUC) on the PCK ranging from 20 to 40 pixels, which are widely-used criteria to evaluate pose estimation accuracy.
Specifically, PCK defines a candidate keypoint to be correct if it falls within a circle (2D) of a given radius around the ground truth, where the distances are expressed in pixels.
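A minimal sketch of the 2D PCK computation (per-image, with unnormalized pixel distances):

```python
import math

def pck(pred, gt, radius=20.0):
    """Fraction of predicted keypoints falling within `radius` pixels
    of the ground truth (2D PCK)."""
    correct = sum(1 for p, g in zip(pred, gt)
                  if math.hypot(p[0] - g[0], p[1] - g[1]) <= radius)
    return correct / len(gt)
```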
\subsection{Implementation Details}
In our experiment, all the models are implemented by PyTorch~\cite{paszke2019pytorch} and trained on NVIDIA RTX 3090.
Since no pose annotation is available in sign language datasets, we use MMPose~\cite{mmpose2020} for its efficiency to extract the 133 full 2D keypoints, \textit{i.e.}, the 23 body joints, 68 face and 42 hand joints.
The extracted hand and shoulder joints are further utilized to crop the left and right hand regions and rescale them to 256 $\times$ 256.
Both hands are fed into the framework.
The framework is trained with the Adam optimizer.
The weight decay and momentum are set to 0.0001 and 0.9, respectively.
We start at the initial learning rate of 0.001 and reduce it by a factor of 0.1 every 20 epochs.
In all experiments, the hyper parameters $\epsilon$, $\lambda$, $w_{\beta}$ and $w_{\delta}$ are set as 0.5, 0.01, 10.0 and 100.0, respectively.
During the pre-training stage, we include the training data from all four aforementioned sign language datasets.
For the downstream task, we temporally extract 32 frames using random and center sampling during training and testing, respectively.
\subsection{Ablation Study}
In this section, we first validate the feasibility of our framework.
Then we perform ablation studies to demonstrate the effectiveness of the main components in our framework.
\noindent \textbf{Framework feasibility.}
We validate the feasibility of our framework on the datasets with hand pose annotation available.
As shown in Table~\ref{STB}, we first validate reconstruction ability under the single-frame setting on the STB dataset.
Specifically, a single frame is fed into the framework.
We only perform the masked joint modeling, where the number of masked joints ranges from 1 to $M$, resulting in an average of roughly $M/2$.
With the gradual increase of $M$, the PCK and AUC metrics of reconstructed joints are consistently higher than those of the input.
It demonstrates that our framework is able to hallucinate the whole hand pose by observing partial joints.
\begin{table}
\small
\tabcolsep=11pt
\begin{center}
\begin{tabular}{c|cc|cc}
\hline
\multirow{2}{*}{M} & \multicolumn{2}{c|}{Input} & \multicolumn{2}{c}{Output} \\
& P@20 & AUC & P@20 & AUC \\ \hline \hline
3 & 88.81 & 91.02 & 99.90 & 99.54 \\
5 & 82.26 & 85.65 & 99.89 & 99.53 \\
7 & 76.19 & 80.91 & 99.85 & 99.53 \\
9 & 70.85 & 76.63 & 99.81 & 99.50 \\
11 & 66.29 & 72.85 & 99.79 & 99.44 \\ \hline
\end{tabular}
\end{center}
\caption{Frame-level framework feasibility on the STB dataset. `P@20' denotes the PCK metric with the error threshold set to 20 pixels. We only utilize the masked joint modeling, and the number of masked joints ranges from 1 to $M$.}
\label{STB}
\vspace{-0.3cm}
\end{table}
\begin{table}
\small
\tabcolsep=6.5pt
\begin{center}
\begin{tabular}{cc|cc|cc}
\hline
\multicolumn{2}{c|}{Mask} & \multicolumn{2}{c|}{Input} & \multicolumn{2}{c}{Output} \\
Joint & Frame & P@20 & AUC & P@20 & AUC \\ \hline \hline
\checkmark & & 86.38 & 89.02 & 95.13 & 95.49 \\
& \checkmark & 80.85 & 80.85 & 95.33 & 95.57 \\
\checkmark & \checkmark & 81.43 & 82.32 & 95.14 & 95.48 \\ \hline
\end{tabular}
\end{center}
\caption{Video-level framework feasibility on HANDS17. `P@20' denotes the PCK metric with the error threshold set to 20 pixels. `Joint' and `Frame' denote the masked joint modeling and masked frame modeling, respectively.}
\label{HANDS17}
\vspace{-0.2cm}
\end{table}
\begin{figure}
\centering
\includegraphics[width=1.0\linewidth]{figure/Vis.pdf}
\caption{Visualization of the framework feasibility on HANDS17. We choose 6 continuous frames from one video. The four rows denote the ground truth~(GT) pose sequence, the input sequence after masking the GT, the reconstructed sequence, and intermediate results of the mesh sequence, respectively. Notably, the two blanks in the second row indicate that these poses are fully masked.
}
\vspace{-0.3cm}
\label{fig:HANDS17}
\end{figure}
Table~\ref{HANDS17} tests the framework feasibility under the video-level setting on the HANDS17 dataset.
We utilize all masking strategies on the original pose sequence to formulate the input.
It can also be observed that the PCK and AUC performance of the output sequence is higher than that of the input, which verifies the framework's capability of reconstructing from an inaccurate hand joint sequence.
Besides, we visualize the hand pose reconstruction in Figure~\ref{fig:HANDS17}.
\begin{table*}
\small
\tabcolsep=11.2pt
\begin{center}
\begin{threeparttable}
\begin{tabular}{l|ccc|ccc|ccc}
\hline
\multirow{2}{*}{Method} & \multicolumn{3}{c|}{Total} & \multicolumn{3}{c|}{Confusing} & \multicolumn{3}{c}{Normal} \\
& Top-1 & Top-2 & Top-5 & Top-1 & Top-2 & Top-5 & Top-1 & Top-2 & Top-5 \\ \hline \hline
\textbf{Pose-based} & & & & & & & & & \\
ST-GCN~\cite{yan2018spatial} & 59.9 & 74.7 & 86.8 & 42.2 & 62.3 & 79.4 & 83.4 & 91.3 & 96.7 \\
Ours~(H) & 67.0 & 86.8 & 95.3 & 46.4 & 78.2 & 92.1 & 94.5 & 98.1 & 99.6 \\
Ours~(H + P) & \textbf{74.9} & \textbf{93.2} & \textbf{98.2} & \textbf{58.6} & \textbf{88.6} & \textbf{96.9} & \textbf{96.7} & \textbf{99.3} & \textbf{99.9} \\ \hline
\textbf{RGB-based} & & & & & & & & & \\
3D-R50~\cite{qiu2017learning} & 62.1 & 73.2 & 82.9 & 43.1 & 57.9 & 72.4 & 87.4 & 93.4 & 97.0 \\
DNF~\cite{cui2019deep} & 55.8 & 69.5 & 82.4 & 33.1 & 51.9 & 71.4 & 86.3 & 93.1 & 97.0 \\
I3D~\cite{carreira2017quo} & 64.4 & 77.9 & 88.0 & 47.3 & 65.7 & 81.8 & 87.1 & 94.3 & 97.3 \\
TSM~\cite{lin2019tsm} & 64.5 & 79.5 & 88.7 & 42.9 & 66.0 & 81.0 & 93.3 & 97.5 & 99.0 \\
Slowfast~\cite{feichtenhofer2019slowfast} & 66.3 & 77.8 & 86.6 & 47.0 & 63.7 & 77.4 & 92.0 & 96.7 & 98.9 \\
GLE-Net~\cite{hu2020global} & 69.0 & 79.9 & 88.1 & 50.6 & 66.7 & 79.6 & 93.6 & 97.6 & 99.3 \\
Ours~(H + R) & \textbf{78.4} & \textbf{92.0} & \textbf{97.3} & \textbf{64.3} & \textbf{86.5} & \textbf{95.4} & \textbf{97.4} & \textbf{99.3} & \textbf{99.9} \\ \hline
\end{tabular}
\end{threeparttable}
\end{center}
\caption{Accuracy comparison on NMFs-CSL dataset. \cite{yan2018spatial} and \cite{qiu2017learning} denote the pose and RGB baseline, respectively.}
\label{NMFs-CSL}
\vspace{-0.3cm}
\end{table*}
\begin{table}
\small
\tabcolsep=2.8pt
\begin{center}
\begin{tabular}{cc|cc|cc|cc}
\hline
\multicolumn{2}{c|}{Mask} & \multicolumn{2}{c|}{100} & \multicolumn{2}{c|}{200} & \multicolumn{2}{c}{1000} \\
Joint & Frame & P-I & P-C & P-I & P-C & P-I & P-C \\ \hline \hline
& & 63.01 & 62.72 & 57.69 & 57.56 & 41.85 & 38.30 \\
\checkmark & & 72.66 & 72.75 & 68.51 & 69.72 & 48.87 & 45.39 \\
& \checkmark & 74.77 & 75.48 & 68.65 & 69.20 & 49.02 & 46.02 \\
\checkmark & \checkmark & \textbf{76.09} & \textbf{76.65} & \textbf{70.64} & \textbf{70.92} & \textbf{49.54} & \textbf{46.39} \\ \hline
\end{tabular}
\end{center}
\caption{Effectiveness of the masking strategy on MSASL dataset. The first row denotes the baseline, \emph{i.e.,} our framework is trained without pre-training. `Joint' and `Frame' denote the masked joint modeling and masked frame modeling, respectively.}
\label{mask}
\vspace{-0.3cm}
\end{table}
Since we focus on the performance of the downstream recognition task, we perform extensive experiments on MSASL and its subsets to demonstrate the effectiveness of the masking strategies, model-aware decoder, Transformer layers $N$ and pre-training data scale.
We report per-instance and per-class Top-1 accuracy as the performance indicator.
\noindent \textbf{Effectiveness of the masking strategy.}
As illustrated in Table~\ref{mask}, the first row denotes the baseline method, \emph{i.e.,} our framework is directly trained under the video label supervision without pre-training.
It is worth mentioning that compared with this baseline, our designed pre-training brings notable performance gain, with 13.08\%, 12.95\% and 7.69\% Top-1 per-instance accuracy improvement.
Both joint-level and frame-level masking strategies are beneficial for the framework to capture different levels of context, thus bringing performance improvement.
When two masking strategies are both utilized, it reaches the best performance.
\begin{table}[t]
\small
\tabcolsep=4pt
\begin{center}
\begin{tabular}{c|cc|cc|cc}
\hline
\multicolumn{1}{c|}{\multirow{2}{*}{Decoder}} & \multicolumn{2}{c|}{100} & \multicolumn{2}{c|}{200} & \multicolumn{2}{c}{1000} \\
& P-I & P-C & P-I & P-C & P-I & P-C \\ \hline \hline
1-layer fc & 73.05 & 72.62 & 67.55 & 68.21 & 47.94 & 45.07 \\
2-layer fc & 74.24 & 74.21 & 68.29 & 69.12 & 48.03 & 45.25 \\
Ours & \textbf{76.09} & \textbf{76.65} & \textbf{70.64} & \textbf{70.92} & \textbf{49.54} & \textbf{46.39} \\ \hline
\end{tabular}
\end{center}
\caption{Effectiveness of the model-aware decoder on MSASL dataset. We compare ours with different pose decoders.}
\label{model-aware}
\vspace{-0.2cm}
\end{table}
\noindent \textbf{Effectiveness of the model-aware decoder.}
As shown in Table~\ref{model-aware}, we compare the effect of different pose decoders on SLR.
The first two rows denote utilizing the fully-connected layers to regress the hand pose.
Our decoder works in a model-aware manner to import hand prior during pre-training, which eases optimization and brings performance improvement for downstream isolated SLR.
Besides, the model-aware decoder has an additional benefit: it lifts the 2D hand pose sequence to 3D space.
\begin{table}[t]
\small
\tabcolsep=6pt
\begin{center}
\begin{tabular}{c|cc|cc|cc}
\hline
\multicolumn{1}{c|}{\multirow{2}{*}{$N$}} & \multicolumn{2}{c|}{100} & \multicolumn{2}{c|}{200} & \multicolumn{2}{c}{1000} \\
& P-I & P-C & P-I & P-C & P-I & P-C \\ \hline \hline
2 & 74.11 & 74.61 & 67.70 & 67.92 & 48.23 & 45.17 \\
3 & \textbf{76.09} & \textbf{76.65} & \textbf{70.64} & \textbf{70.92} & \textbf{49.54} & \textbf{46.39} \\
4 & 75.69 & 75.51 & 70.20 & 70.66 & 47.36 & 44.04 \\
5 & 74.90 & 75.68 & 68.14 & 68.40 & 47.29 & 44.42 \\ \hline
\end{tabular}
\end{center}
\caption{Effectiveness of the Transformer layers $N$ on MSASL dataset. $N$ denotes the number of the layers in the Transformer encoder.}
\label{layer}
\vspace{-0.3cm}
\end{table}
\noindent \textbf{Effectiveness of Transformer layers $N$.}
From Table~\ref{layer}, the accuracy increases when the number of Transformer layers increases.
It reaches the peak when $N=3$.
The difference in the optimal number of layers between BERT and our model may be due to the different characteristics of the sign pose and NLP domains, as well as overfitting.
Unless otherwise stated, we utilize $N=3$ in all our experiments.
\noindent \textbf{Effectiveness of the pre-training data scale.}
As shown in Table~\ref{data-scale}, as the ratio of pre-training data increases, the performance on the downstream SLR task gradually improves on the accuracy metrics.
It indicates that SignBERT may benefit from larger pre-training datasets.
\begin{table}[t]
\small
\tabcolsep=5pt
\begin{center}
\begin{tabular}{c|cc|cc|cc}
\hline
\multicolumn{1}{c|}{\multirow{2}{*}{Ratio}} & \multicolumn{2}{c|}{100} & \multicolumn{2}{c|}{200} & \multicolumn{2}{c}{1000} \\
& P-I & P-C & P-I & P-C & P-I & P-C \\ \hline \hline
0\% & 63.01 & 62.72 & 57.69 & 57.56 & 41.85 & 38.30 \\
25\% & 73.18 & 72.83 & 67.91 & 69.30 & 46.18 & 43.97 \\
50\% & 73.18 & 73.42 & 67.18 & 67.71 & 46.57 & 43.79 \\
75\% & 74.50 & 74.36 & 68.72 & 68.97 & 47.21 & 43.67 \\
100\% & \textbf{76.09} & \textbf{76.65} & \textbf{70.64} & \textbf{70.92} & \textbf{49.54} & \textbf{46.39} \\ \hline
\end{tabular}
\end{center}
\caption{Effectiveness of the ratio of pre-training data scale on the MSASL dataset.}
\label{data-scale}
\vspace{-0.2cm}
\end{table}
\begin{table*}
\small
\tabcolsep=7pt
\begin{center}
\begin{threeparttable}
\begin{tabular}{l|cc|cc|cc|cc|cc|cc}
\hline
\multirow{3}{*}{Method} & \multicolumn{4}{c|}{MSASL100}
& \multicolumn{4}{c|}{MSASL200}
& \multicolumn{4}{c}{MSASL1000}\\ \cline{2-13}
& \multicolumn{2}{c|}{Per-instance} & \multicolumn{2}{c|}{Per-class}
& \multicolumn{2}{c|}{Per-instance} & \multicolumn{2}{c|}{Per-class}
& \multicolumn{2}{c|}{Per-instance} & \multicolumn{2}{c}{Per-class} \\
& Top-1 & Top-5 & Top-1 & Top-5
& Top-1 & Top-5 & Top-1 & Top-5
& Top-1 & Top-5 & Top-1 & Top-5 \\ \hline \hline
\textbf{Pose-based} & & & & & & & & & & & & \\
ST-GCN~\cite{yan2018spatial} & 59.84 & 82.03 & 60.79 & 82.96
& 52.91 & 76.67 & 54.20 & 77.62
& 36.03 & 59.92 & 32.32 & 57.15 \\
Ours~(H) & 76.09 & 92.87 & 76.65 & 93.06
& 70.64 & 89.55 & 70.92 & 90.00
& 49.54 & 74.11 & 46.39 & 72.65 \\
Ours~(H + P) & \textbf{81.37} & \textbf{93.66} & \textbf{82.31} & \textbf{93.76}
& \textbf{77.34} & \textbf{91.10} & \textbf{78.02} & \textbf{91.48}
& \textbf{59.80} & \textbf{81.86} & \textbf{57.06} & \textbf{80.94} \\ \hline
\textbf{RGB-based} & & & & & & & & & & & & \\
I3D~\cite{joze2018ms} & - & - & 81.76 & 95.16
& - & - & 81.97 & 93.79
& - & - & 57.69 & 81.05 \\
TCK~\cite{li2020transfer} & 83.04 & 93.46 & 83.91 & 93.52
& 80.31 & 91.82 & 81.14 & 92.24
& - & - & - & - \\
BSL~\cite{albanie2020bsl} & - & - & - & -
& - & - & - & -
& 64.71 & 85.59 & 61.55 & 84.43 \\
Ours~(H + R) & \textbf{89.56} & \textbf{97.36} & \textbf{89.96} & \textbf{97.51}
& \textbf{86.98} & \textbf{96.39} & \textbf{87.62} & \textbf{96.43}
& \textbf{71.24} & \textbf{89.12} & \textbf{67.96} & \textbf{88.40} \\ \hline
\end{tabular}
\end{threeparttable}
\end{center}
\caption{Accuracy comparison on MSASL dataset. \cite{yan2018spatial} and \cite{joze2018ms} denote the pose and RGB baseline, respectively.}
\label{msasl}
\vspace{-0.2cm}
\end{table*}
\begin{table*}
\small
\tabcolsep=6.5pt
\begin{center}
\begin{threeparttable}
\begin{tabular}{l|cc|cc|cc|cc|cc|cc}
\hline
\multirow{3}{*}{Method} & \multicolumn{4}{c|}{WLASL100}
& \multicolumn{4}{c|}{WLASL300}
& \multicolumn{4}{c}{WLASL2000}\\ \cline{2-13}
& \multicolumn{2}{c|}{Per-instance} & \multicolumn{2}{c|}{Per-class}
& \multicolumn{2}{c|}{Per-instance} & \multicolumn{2}{c|}{Per-class}
& \multicolumn{2}{c|}{Per-instance} & \multicolumn{2}{c}{Per-class} \\
& Top-1 & Top-5 & Top-1 & Top-5
& Top-1 & Top-5 & Top-1 & Top-5
& Top-1 & Top-5 & Top-1 & Top-5 \\ \hline \hline
\textbf{Pose-based} & & & & & & & & & & & & \\
ST-GCN~\cite{yan2018spatial} & 50.78 & 79.07 & 51.62 & 79.47
& 44.46 & 73.05 & 45.29 & 73.16
& 34.40 & 66.57 & 32.53 & 65.45 \\
Pose-TGCN~\cite{li2020word} & 55.43 & 78.68 & - & -
& 38.32 & 67.51 & - & -
& 23.65 & 51.75 & - & - \\
PSLR~\cite{tunga2020pose} & 60.15 & 83.98 & - & -
& 42.18 & 71.71 & - & -
& - & - & - & - \\
Ours~(H) & 76.36 & 91.09 & 77.68 & 91.67
& 62.72 & 85.18 & 63.43 & 85.71
& 39.40 & 73.35 & 36.74 & 72.38 \\
Ours~(H + P) & \textbf{79.07} & \textbf{93.80} & \textbf{80.05} & \textbf{94.17}
& \textbf{70.36} & \textbf{88.92} & \textbf{71.17} & \textbf{89.36}
& \textbf{47.46} & \textbf{83.32} & \textbf{45.17} & \textbf{82.32} \\ \hline
\textbf{RGB-based} & & & & & & & & & & & & \\
I3D~\cite{li2020word} & 65.89 & 84.11 & 67.01 & 84.58
& 56.14 & 79.94 & 56.24 & 78.38
& 32.48 & 57.31 & - & - \\
TCK~\cite{li2020transfer} & 77.52 & 91.08 & 77.55 & 91.42
& 68.56 & 89.52 & 68.75 & 89.41
& - & - & - & - \\
BSL~\cite{albanie2020bsl} & - & - & - & -
& - & - & - & -
& 46.82 & 79.36 & 44.72 & 78.47 \\
Ours~(H + R) & \textbf{82.56} & \textbf{94.96} & \textbf{83.30} & \textbf{95.00}
& \textbf{74.40} & \textbf{91.32} & \textbf{75.27} & \textbf{91.72}
& \textbf{54.69} & \textbf{87.49} & \textbf{52.08} & \textbf{86.93} \\ \hline
\end{tabular}
\end{threeparttable}
\end{center}
\caption{Accuracy comparison on WLASL dataset. ST-GCN~\cite{yan2018spatial} and I3D~\cite{li2020word} denote the pose and RGB baseline, respectively.}
\label{wlasl}
\vspace{-0.3cm}
\end{table*}
\begin{table}
\small
\begin{center}
\tabcolsep=14pt
\begin{threeparttable}
\begin{tabular}{l|c}
\hline
Method & Accuracy \\ \hline \hline
\textbf{Pose-based} \\
ST-GCN~\cite{yan2018spatial} & 90.0 \\
Ours~(H) & 94.5 \\
Ours~(H + P) & \textbf{96.6} \\ \hline
\textbf{RGB-based} \\
STIP~\cite{laptev2005space} & 61.8 \\
GMM-HMM~\cite{tang2015real} & 56.3 \\
3D-R50~\cite{qiu2017learning} & 95.1 \\
GLE-Net~\cite{hu2020global} & 96.8 \\ \hline
Ours~(H + R) & \textbf{97.6} \\ \hline
\end{tabular}
\end{threeparttable}
\end{center}
\vspace{-0.2cm}
\caption{Accuracy comparison on SLR500 dataset. \cite{yan2018spatial} and \cite{qiu2017learning} denote the pose and RGB baseline, respectively. }
\label{slr500}
\vspace{-0.3cm}
\end{table}
\subsection{Comparison with State-of-the-art Methods}
We compare our method with previous state-of-the-art methods on four benchmark datasets.
For clarity, previous methods are grouped by their input modality, \emph{i.e.,} pose-based and RGB-based methods.
\noindent \textbf{Evaluation on NMFs-CSL.}
As illustrated in Table~\ref{NMFs-CSL}, we compare with methods~\cite{yan2018spatial, qiu2017learning, cui2019deep, carreira2017quo, lin2019tsm, feichtenhofer2019slowfast, hu2020global} utilizing the pose and RGB sequence as input.
GLE-Net \cite{hu2020global} is the strongest competitor, which enhances discriminative cues from global and local views.
It is worth noting that our method using purely hand pose achieves comparable performance with a majority of them.
Ours~(H + R) outperforms all previous methods with a notable margin.
\noindent \textbf{Evaluation on SLR500.}
As shown in Table~\ref{slr500}, STIP~\cite{laptev2005space} and GMM-HMM~\cite{tang2015real} are traditional methods based on hand-crafted features.
GLE-Net \cite{hu2020global} achieves the best performance among previous methods.
Notably, our method outperforms all of them, reaching $97.6\%$ top-1 accuracy.
\noindent \textbf{Evaluation on MSASL.}
MSASL brings new challenges due to unconstrained recording settings.
As shown in Table~\ref{msasl}, compared with the RGB baseline~\cite{joze2018ms}, ST-GCN~\cite{yan2018spatial} shows inferior performance.
It may be caused by the failure of pose detection on sign videos, which contain partially occluded upper bodies, motion blur and noisy backgrounds.
Albanie \emph{et al.}~\cite{albanie2020bsl} and Li~\emph{et al.} \cite{li2020transfer} both use more external RGB sign data to boost the performance on MSASL or its subsets.
It is worth noting that our method achieves noticeable performance improvement when compared with both pose-based and RGB-based methods.
\noindent \textbf{Evaluation on WLASL.}
Compared with MSASL, WLASL contains fewer samples and double the vocabulary size.
It can be observed that Ours~(H + P), which only utilizes pose as the input modality, even outperforms the strongest RGB-based method~\cite{albanie2020bsl}.
Besides, Ours~(H + R) further outperforms the best competitor by $7.87\%$ per-instance top-1 accuracy on WLASL2000.
With the incorporated hand prior and self-supervised pre-training, our method is especially effective on benchmarks with limited samples.
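As a side note on the metrics quoted above: per-instance top-1 accuracy averages hits over all samples, while per-class top-1 first computes the hit rate within each class and then averages over classes, so rare classes weigh as much as common ones. A minimal sketch (our own helper names, not from the SignBERT codebase):

```python
from collections import defaultdict

def topk_accuracy(labels, topk_preds, k=1):
    """Per-instance top-k accuracy: fraction of samples whose ground-truth
    label appears among the top-k predicted classes."""
    hits = sum(1 for y, preds in zip(labels, topk_preds) if y in preds[:k])
    return hits / len(labels)

def per_class_topk_accuracy(labels, topk_preds, k=1):
    """Per-class top-k accuracy: hit rate within each class, then
    averaged over classes."""
    per_class = defaultdict(list)
    for y, preds in zip(labels, topk_preds):
        per_class[y].append(1.0 if y in preds[:k] else 0.0)
    return sum(sum(v) / len(v) for v in per_class.values()) / len(per_class)

labels = [0, 0, 0, 1]            # class 0 is over-represented
preds = [[0], [0], [1], [1]]     # top-1 prediction lists per sample
print(topk_accuracy(labels, preds))            # 0.75
print(per_class_topk_accuracy(labels, preds))  # (2/3 + 1)/2
```

The two metrics diverge exactly when the class distribution is imbalanced, which is why benchmarks such as MSASL and WLASL report both.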
\section{Conclusion}
In this paper, we introduce the \emph{first} self-supervised pre-trainable SLR framework with model-aware hand prior incorporated, namely SignBERT.
We involve both hands and view hand pose as a visual token.
The visual token is embedded with gesture state, temporal and hand chirality information before feeding into the framework.
We first perform self-supervised pre-training on a large volume of hand poses by masking and reconstructing the hand tokens.
During pre-training, our framework consists of the Transformer encoder and hand-model-aware decoder.
Together with the hand prior incorporated by the decoder, we elaborately design several masking strategies to better capture hierarchical contextual information.
Then our pre-trained framework is fine-tuned to perform recognition.
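The mask-and-reconstruct objective described above can be sketched in a few lines; this is our own simplification (masking ratio, mask value, and token representation are illustrative, not the paper's exact settings):

```python
import random

def mask_tokens(tokens, mask_ratio=0.3, mask_value=0.0):
    """Randomly mask a fraction of hand-pose tokens; return the corrupted
    sequence and the indices the model must reconstruct, as in masked
    modeling."""
    n = len(tokens)
    idx = random.sample(range(n), max(1, int(n * mask_ratio)))
    corrupted = list(tokens)
    for i in idx:
        corrupted[i] = mask_value
    return corrupted, sorted(idx)

random.seed(0)
seq = [0.1 * t for t in range(10)]   # stand-in for per-frame pose tokens
corrupted, masked_idx = mask_tokens(seq)
print(masked_idx, [corrupted[i] for i in masked_idx])
```

During pre-training the reconstruction loss is evaluated only at `masked_idx`, so the encoder must infer the missing poses from temporal context.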
We perform extensive experiments on four popular benchmark datasets.
Experiment results demonstrate the effectiveness of our method, achieving new state-of-the-art performance on all benchmarks with a notable margin.
\footnotesize {\flushleft \bf Acknowledgements}.
This work was supported in part by the National Natural Science Foundation of China under Contract U20A20183, 61632019, and 62021001, and in part by the Youth Innovation Promotion Association CAS under Grant 2018497. It was also supported by the GPU cluster built by MCC Lab of Information Science and Technology Institution, USTC.
{\small
\bibliographystyle{ieee_fullname}
}
\subsection*{Acknowledgements
} M.V.d.H. gratefully acknowledges support from the Simons Foundation under the MATH + X program, the National Science Foundation under grant
DMS-1815143, and the corporate members of the Geo-Mathematical Imaging Group at Rice University. G.U. was partly supported by NSF, a Walker Family Endowed Professorship at UW and a Si-Yuan Professorship at IAS, HKUST. S.B. was partly supported by Project no.: 16305018 of the Hong Kong Research Grant Council.
\section{Introduction}
We consider the problem of recovering piecewise smooth wave speeds and density across a curved interface from reflected wave amplitudes.
Such amplitudes have been exploited for decades in seismology in this context. However, the analysis in seismology uses a linearization and assumes mostly flat interfaces. Here, we present a nonlinear analysis allowing curved interfaces, establish uniqueness and provide a reconstruction, while making the notion of amplitude precise through a procedure rooted in microlocal analysis. While our focus is on elastic waves in isotropic media, we consider in parallel the acoustic case with vanishing shear modulus.
By measuring the amplitudes of reflected acoustic or elastic waves above a curved interface at various incidence angles, we recover the jet of the material parameters infinitesimally below the interface as well as the shape operator associated to the interface.
Following the notation in \cite{SUV2019transmission} and \cite{Hansen-CPDEInverse}, let $\Omega \subset \mathbb R^3$ be a smooth, bounded domain and $\Gamma$ a closed, smooth hypersurface splitting $\Omega$ into two subdomains $\Omega_+$ and $\Omega_-$. For the isotropic elastic wave equation, we assume the density of mass $\rho$ and Lam\'{e} parameters $\lambda, \mu$ are smooth up to the surface $\Gamma$ with a possible jump there.
When an acoustic or elastic wave hits an interface, the strength of the reflected wave depends not only on the material parameters infinitesimally above and below the interface, but also on the angle of incidence and the curvature of the interface. There is a certain reflection operator, denoted $R$ throughout this work, which will be a PsiDO of order zero on the interface, that determines the amplitude of a reflected wave. Concretely, we aim to recover all material parameters (and their derivatives) directly below an interface from knowledge of $R$ and the material parameters above the interface. We will also determine the curvature of the interface from such data. This is a variant of the boundary determination problem (see for example \cite{RachBoundary}) from the hyperbolic Dirichlet-to-Neumann map, but in our case, reflected amplitudes (specifically, the full symbol of $R$) take the place of the Dirichlet-to-Neumann map. We are not aware of mathematical literature on this exact problem, even though there is plenty of geophysical literature on a simplified version of it (see \cite{DavydenkoScatteringandReflection,HammadAVO,Skopintsevacurvatureofinterface}). We treat this inverse problem in both the acoustic and elastic wave settings.
The result closest to ours was obtained by Rachele in \cite{RachBoundary}. She showed that one can uniquely determine the Lam\'{e} parameters and density of mass (including all their derivatives) at the boundary of a domain from the hyperbolic Dirichlet-to-Neumann map.
Aside from boundary recovery, through asymptotic analysis following the propagation of singularities, amplitudes and reflection coefficients have been used by seismologists to obtain wave speeds and density of mass just below an interface. The procedure, which is derived from a linearization of the inverse problem considered here, is termed ``amplitude versus offset analysis'' (AVO). In this procedure, ``locally'' plane elastic waves are sent into an elastic medium with a reflector at various angles, and their reflected amplitudes are measured. The amplitude variation with the angle of the wave hitting the reflector indicates contrasts in lithology, shear properties, and fluid content in rocks above and below the interface. Recent work for this type of analysis in heterogeneous media can be found in \cite{HammadAVO}, and an inverse problem that incorporates both multiple scattering and recovering reflection coefficients can be found in \cite{DavydenkoScatteringandReflection}. A genetic algorithm for nonlinear recovery of material parameters from reflection coefficients can be found in \cite{deHoopAVO00}. We also refer to \cite{HoopAVO_1997} for an inverse problem with anisotropic media and reflection coefficients. Our analysis here gives a concrete mathematical framework and proof that, at least in an isotropic setting (we will study anisotropic settings in another work), one can determine the full elastic properties and density across an interface from reflection operators. Most works in the geophysics literature assume some type of homogeneity and a simple interface, while several works, such as \cite{Skopintsevacurvatureofinterface,Cerveny1974Curvature}, consider curved interfaces and their effect on the reflection coefficients.
We make no simplifying geometric assumptions about the interface except that it is a smooth hypersurface, and a nice byproduct of our construction, aside from the inverse problem, shows concretely the effect of curvature on the reflection operators. In addition, we do not linearize the problem as is usually done in articles on AVO. Much of our proof is constructive, and will serve as a basis for reconstruction algorithms.
Our primary motivation is to eventually recover a piecewise smooth density of mass (in the isotropic elastic wave equation) in the interior of the domain, which we present in a subsequent paper. We essentially want to ``image'' the density using high frequency waves.
In fact, one may recover piecewise smooth wavespeeds without using reflected amplitudes as in \cite{SUV2019transmission}. In \cite{SUV2019transmission}, Stefanov, Vasy, and Uhlmann first construct the parametrix for the isotropic elastic wave equation away from any glancing rays. They use the principal symbol of the parametrix (in particular, the polarization set) to then recover local travel times for the $P$ and $S$ wavespeeds near a particular interface. By using rays that are near tangential to the interface, they can recover travel times (for both the $P$ and $S$ wavespeeds) between two nearby points at the interface. This allows them to recover the wavespeeds initially at the interface, and then in the interior using local boundary rigidity theorems.
This argument only relies on the principal symbol of the elastic operator and the parametrix. As noted in their \cite[Remark 10.2]{SUV2019transmission}, their argument does not address unique determination of the density of mass past the first interface, nor at the interface itself. Since the density appears in the lower order part of the elastic operator, it is natural to look at the lower order symbols of the elastic parametrix to initially recover the density at the interface. This leads to the inverse problem considered here where we study the full symbol of the reflection operator, which is a constituent of the elastic parametrix \cite{CHKUElastic}, to recover the jet of the density of mass at the interface.
In the smooth setting, boundary determination of material parameters is usually needed to prove uniqueness in the interior \cite{Oksanen2020}, while in our setting where material parameters have jump discontinuities, unique determination at the interface is needed to solve the interior problem for the density of mass (recovering piecewise smooth wavespeeds in the interior may already be done without such an interface determination result as in \cite{SUV2019transmission}). The focus of this paper is to prove an interface determination result from reflected amplitudes.
The proof leads us to compute the full symbol of the acoustic and elastic reflection (and transmission) operators that are used to construct a parametrix to solve the acoustic/elastic wave equation near an interface. We do not know of any literature that has done this computation beyond the principal symbol level, while the principal symbol is computed in many works such as \cite{SUV2019transmission,CHKUElastic,Knott1899,Zoeppritz1919} so that this paper may also be viewed as a generalization of these results. We provide a toy example to illustrate the type of inverse problem we are after.
Consider a simple half space in $\mathbb R^3$
with a flat interface $\Gamma$ that separates a layer above, denoted $\Omega_-$, from a layer below, $\Omega_+$. Suppose that there are two piecewise constant material parameters $c$ and $\rho$. Let $c_{\pm}$ be $c$ restricted to $\Omega_\pm$ and likewise for $\rho.$ For an elastic $P$-wave (say) that hits $\Gamma$ at angle $\theta$ from the normal, with transmitted angle $\theta_t$, the reflection coefficient that determines the reflected wave amplitude is
\[R(\theta) = \frac{\rho_+c_+\cot(\theta_t) -\rho_-c_-\cot(\theta)}{\rho_+c_+\cot(\theta_t) + \rho_-c_-\cot(\theta)}.\]
Ideally, one would like to reconstruct $c_+ - c_-$ and $\rho_+ - \rho_-$ from $R$ at various angles. Due to the nonlinearities involved, this is difficult even in this simplest of settings. Instead, we are interested in determining $c_+$ and $\rho_+$ from knowing $c_-, \rho_-$ and $R$ for different $\theta$ values. In our setting, we allow $c$ and $\rho$ to be piecewise smooth functions and $\Gamma$ is not restricted to be flat. Hence, we want to determine all derivatives of $c_+$ and $\rho_+$ restricted to $\Gamma$, and the shape operator of $\Gamma$.
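As a purely numerical illustration of this toy example (our own sketch; normalization and sign conventions for reflection coefficients vary in the literature), one can evaluate an angle-dependent reflection coefficient at several incidence angles, computing the transmitted angle $\theta_t$ from Snell's law:

```python
import math

def reflection_coefficient(theta, rho_m, c_m, rho_p, c_p):
    """Toy reflection coefficient at a flat interface, with incidence
    angle theta (radians) in the upper layer Omega_- and transmitted
    angle theta_t given by Snell's law sin(theta_t)/c_+ = sin(theta)/c_-."""
    s = (c_p / c_m) * math.sin(theta)
    if abs(s) >= 1.0:
        raise ValueError("beyond the critical angle: no real transmitted ray")
    theta_t = math.asin(s)
    num = rho_p * c_p / math.tan(theta_t) - rho_m * c_m / math.tan(theta)
    den = rho_p * c_p / math.tan(theta_t) + rho_m * c_m / math.tan(theta)
    return num / den

# Amplitude-versus-angle curve for a sample contrast (made-up values).
for deg in (10, 20, 30):
    R = reflection_coefficient(math.radians(deg), rho_m=1.0, c_m=1.5,
                               rho_p=1.3, c_p=2.0)
    print(f"theta = {deg:2d} deg  ->  R = {R:+.4f}")
```

The angle dependence of $R$ is exactly what AVO analysis exploits: several angle samples constrain both $\rho_+$ and $c_+$ given $\rho_-$, $c_-$.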
We consider two types of wave fields: acoustic and elastic waves, defined in a bounded domain $\Omega\subset \mathbb{R}^3$.
Though inverse problems for acoustic as well as elastic waves have been studied extensively in the last few decades, a large portion of this work concerns domains with smooth material parameters. This manuscript is the analog of a boundary determination result that will enable future results in interior determination, including the density of mass, when material parameters contain conormal singularities. The history of boundary determination of a coefficient from the Dirichlet-to-Neumann map is summarized in \cite{RachBoundary}. In the case of the conductivity equation involving an elliptic partial differential operator, Kohn and Vogelius \cite{KV84} and then Sylvester and Uhlmann \cite{SylU88} proved boundary determination in the case of real analytic and then $C^\infty$ conductivities. Sylvester and Uhlmann \cite{SylU87} and Nachman \cite{Na96} use the result at the boundary to show an interior uniqueness result in certain situations.
As described in \cite{RachBoundary}, in the case of a scalar hyperbolic wave operator associated with a Laplace-Beltrami operator $\Delta_g$ for a metric $g$, Sylvester and Uhlmann \cite{SylU91} show that the Dirichlet-to-Neumann map uniquely determines the metric (up to the pullback by a diffeomorphism of $\Omega$ that fixes $ \partial \Omega$) to infinite order at the boundary $\Omega$. In this setting, boundary determination of multiple parameters with stability results can be found in \cite{Montalto2014} and \cite{stefanovYang2018}.
For the elastic setting, Nakamura and Uhlmann solved the inverse problem for elasticity in the static case in \cite{NU93,NU94} together with the erratum \cite{NU03Erratum}; independently, in \cite{Eskin_2002}, Eskin and Ralston proved uniqueness for both Lam\'{e} coefficients in the isotropic setting when the Lam\'{e} parameter $\mu(x)$ is close to a constant.
The classic paper for boundary determination of parameters for the isotropic elastic wave equation is \cite{RachBoundary}. An extension of that result to elastic media with residual stress is in \cite{RachResidualStress}, and boundary determination in certain anisotropic cases can be found in \cite{HNZ19}. Our inverse problem is an analog to these results where the interface acts as our boundary and we instead determine the jet of multiple material parameters at one side of an interface from the reflection operator and knowledge of the material parameters on the other side of the interface.
\begin{comment}
The main hurdle of considering non-smooth parameters is that one needs to deal with the multiple scattering of the wave fields.
There are limited tools which have been developed to handle the scattered wave fields and one of them is the Scattering control method.
\emph{Scattering control} was introduced in \cite{CHKUControl}; in \cite{CHKUUniqueness,CHKUElastic} the authors have recovered discontinuous wave speeds in a bounded domain probed with acoustic or elastic waves.
Until now, the Scattering control method has been used successfully on domains with piecewise smooth wave speeds without considering density of mass.
In this article, we remove this limitation by considering a bounded domain with an unknown piecewise-smooth density of mass, probed with acoustic and elastic wave fields. We aim to eventually extend the Scattering control method to include jumps in the density of mass.
As a first step we consider, here, the (nonlinear) recovery of wave speeds and density of mass across a smooth interface.
\end{comment}
There is a natural forward problem associated to the inverse problem of this paper. In his 1975 paper \cite{Taylor75}, Michael Taylor microlocally analyzed the reflection and transmission of waves from a boundary or an interface. The scattering due to the boundary was governed by a boundary PsiDO denoted $\beta \in \Psi^0( \partial \Omega)$ in \cite{Taylor75} (giving boundary conditions) under certain geometric assumptions such as away from any glancing rays. Transmission conditions can locally be written as a boundary value problem (see \cite{Taylor75,yamamoto1989}) with such a PsiDO as well.
Taylor uses the original operator and the boundary conditions to construct tangential pseudodifferential operators, which he denotes $P^I,P^{II}, P^{III}, P^{IV}$, near the boundary. For a vector valued solution $u$ to a hyperbolic partial differential equation with boundary conditions, $P^I u\restriction_{\Omega}$ roughly represents the trace at the boundary of the ``incoming waves'' and $P^{II}u\restriction_{\Omega}$ represents the trace at the boundary of the ``outgoing waves''. The boundary condition leads to a pseudodifferential equation involving a derived operator $\gamma$ at the boundary \cite[Equation (3.2)]{Taylor75}. When this equation is elliptic, one may construct a parametrix near the boundary with a constituent at the boundary relating the incoming and outgoing waves. Our inverse problem is: given the full symbol of several entries of a matrix PsiDO $\gamma$ (or entries of a pseudodifferential operator involving submatrices of $\gamma$) that microlocally determines the amplitudes of scattered waves at an interface, can we recover the jets of certain parameters of the partial differential operator at the boundary, in particular for PDEs describing acoustic and elastic waves? We are not aware of any such results for interfaces.
\begin{comment}
In order to extend such results to a layered medium (which will be a forthcoming paper), the natural analogue to boundary determination of density and elasticity from the Dirichlet to Neumann map \cite{RachBoundary} is uniquely recovering the Lam\'{e} parameters at each interface using purely reflected data measured near the interface. In the geometric optics construction of elastic wave solutions, the reflected waves are naturally determined by a reflection operator which is a $3 \times 3$ matrix of pseudodifferential operators in $\Psi^0$ at the interface. For example, in a simple piecewise constant acoustic setting with speed $c^{(-)}$ above a horizontal interface and $c^{(+)}$ below it, the reflection operator at normal incidence is just the constant multiplier $(c^{(-)} - c^{(+)})/(c^{(-)} + c^{(+)})$.
Hence, if one has the reflection operator and the speed above the interface, one may determine the speed below. In the piecewise smooth elastic setting, the reflection operator becomes a $\Psi$DO that depends on all three elastic parameters and their derivatives restricted to the interface. In this note, we apply ideas in \cite{RachBoundary} to show that this reflection $\Psi$DO together with the parameters above an interface uniquely determine the parameters below the interface.
\end{comment}
\begin{comment}
Inverse problems in the piecewise smooth setting often involve layer stripping past an interface and using known local results. Recovery past an interface inevitably leads to a problem on unique determination of material parameters at the interface (both sides!), and this note demonstrate that this recovery is indeed possible by adapting techniques used for recovery at the boundary from the Dirichlet to Neumann map in \cite{RachBoundary}.
\end{comment}
\subsection{Basic notations and definitions}\label{Sec_Setup}
In this article we consider two types of wave operators, one is the acoustic wave operator $P$, given by (\ref{Acoustic_OP}) defined on a function $u(t,x)$, the second is the linear, isotropic elastic wave operator $Q$, given in (\ref{eq_2}), acting on a vector-field $u(t,x) = (u_1(t,x),u_2(t,x),u_3(t,x))$.
Here we fix our basic assumptions and notational conventions for the rest of the article. Though we work with two types of wave operators, the discussion in this section is common to both. Later we divide the article into two sections, each dedicated to one of the two types of waves.
Throughout this article, we will work on the space $(0,\infty)\times {\mathbb R}^3$ or its subsets.
We denote $(t,x)$ to be the coordinates on the space $(0,\infty)\times {\mathbb R}^3$.
Let $\Omega \subset {\mathbb R}^3$ be an open bounded domain with smooth boundary.
We assume that the parameters $\mu,\rho$ in the case of the acoustic waves and $\lambda,\mu,\rho$ in the case of the elastic waves are piecewise smooth functions in $\Omega$. Concretely, we assume that $\lambda, \mu, \rho$ are smooth on $\bar \Omega$ except for a jump discontinuity at a smooth closed connected hypersurface $\Gamma \subset \Omega$.
In general, $\Gamma$ can be a collection of disjoint closed, connected, orientable hypersurfaces, but we can deal with multiple interfaces via an iterative argument. For the purpose of this article, we restrict ourselves to the case where $\Gamma$ is a single smooth closed hypersurface in $\Omega$.
We define $\Omega_{\pm}$ to be the portions of $\Omega$ on the two sides of $\Gamma$, where $\Omega_{-}$ is the portion outside $\Gamma$ and $\Omega_{+}$ is the part inside $\Gamma$.
Let $u_I$ be a wave field (Acoustic/Elastic), travelling through $\Omega_{-}$, approaching the interface $\Gamma$. We write the suffix $I$ to indicate $u_I$ to be an incoming wave field (see \cite{SU-TATBrain, SUV2019transmission} for more details).
After hitting the interface $\Gamma$, the wave field splits into two parts $u_R$ and $u_T$, where $u_R$ is the reflected wave field travelling through $\Omega_{-}$ and
$u_T$ is the transmitted wave field travelling through $\Omega_{+}$, refracted according to Snell's law.
The wave fields $u_{I}$, $u_R$, $u_T$ are related in the standard way by the transmission conditions corresponding to the acoustic or the elastic waves, given on $(0,\infty)\times\Gamma$.
Thus, we can write the solution $u$ of the acoustic/elastic wave equation near an interface as (see \cite{SUV2019transmission})
\begin{equation}\label{e: u = uI + uR+uT}
u = u_I + u_R + u_T,
\end{equation}
where $u_I$ is the incident wave, $u_R$ is the reflected wave, and $u_T$ is the transmitted wave; here $u_T$ is supported in $\bar{\Omega}_+$ and $u_I,u_R$ are supported in $\bar\Omega_-$. Using various incident waves, we are interested in whether we can determine all the elastic parameters on $\Gamma^+$ from $u_R$. We restrict ourselves to hyperbolic ``points'' (see \cite{SUV2019transmission}), for which this data is sufficient. In this article we prove that, knowing the material parameters on one side of the interface along with $u_R$ at the interface, one can determine those parameters and their derivatives (of any order) on $\Gamma$. The theorems are stated precisely for the acoustic case in Section \ref{Sec_Acoustic} and in Section \ref{s: elastic statement of theorem} for the elastic case.
Without loss of generality, we assume $\Gamma \subset \{x_3 = 0\}$ in $ {\mathbb R}^3$ is a closed, connected smooth hypersurface. We show in Section \ref{s: nonflat case} how the general case of curved interfaces follows quite easily, with some additional terms showing the effect of curvature on the reflection operator. Using local diffeomorphisms, we can set $\Gamma$ to be any closed, connected, smooth hypersurface in $ {\mathbb R}^3$, but for the sake of simplicity, we work for now with $\Gamma \subset \{x_3 = 0\}$. We say that $x_3 < 0 $ is ``above'' while $x_3 > 0$ is ``below'' the interface.
Since our analysis in this article is mostly on the interface $\Gamma$, we can shrink $\Omega$ to a small neighborhood of $\Gamma$ in $ {\mathbb R}^3$. For notational convenience, we denote by $\Gamma_{\pm}$ the two copies of $\Gamma$ when approached from $\Omega_{\pm}$. We add the suffix $(\pm)$ to denote parameters on the different sides of $\Gamma$. For instance, we write $\rho^{(\pm)}$ to denote the density function $\rho$ on the domains $\Omega_{\pm}$, and similarly for the other material parameters.
We write $\Omega := \overline{\Omega}_{-} \sqcup \overline{\Omega}_{+}$ to be the disjoint union of $\overline{\Omega}_{+}$ and $\overline{\Omega}_{-}$, with $\Gamma_\pm$ included in the respective boundaries.
For any function $p(x)$ on $\Omega$, we denote $p\restriction_{\Gamma_\mp}$ as the limit of $p(x)$ as $x$ approaches $\Gamma$ from above/below. It will also be convenient to denote $p^{(\mp)} = p\restriction_{\Gamma_\mp}$. We also denote $ \partial_{\nu}p$ as the normal derivative to $\Gamma$ where $\nu$ is a fixed unit normal to $\Gamma$.
We consider two sets of parameters $(\mu,\rho)$, $(\widetilde{\mu},\widetilde{\rho})$ for the acoustic wave equation and $(\lambda,\mu,\rho)$, $(\widetilde{\lambda}, \widetilde{\mu},\widetilde{\rho})$ for the isotropic elastodynamic wave operator on $\Omega$.
We have two acoustic wave operators $P$ and $\widetilde{P}$, corresponding to the two sets of parameters $(\mu,\rho)$, $(\widetilde{\mu},\widetilde{\rho})$ and elastic wave operators $\Op$, $\widetilde{\Op}$ for the two sets of parameters $(\lambda,\mu,\rho)$, $(\widetilde{\lambda}, \widetilde{\mu},\widetilde{\rho})$ respectively.
We use the notation $\widetilde{f}$ to refer to the quantity associated to $\widetilde{P}$ (or $\widetilde{\Op}$) corresponding to a quantity $f$ associated to $P$ (or $\Op$). In the next two subsections, we state the main theorems in the acoustic and elastic cases.
\subsection{Notation and statement of the theorems}\label{s: elastic statement of theorem}
\subsection*{Acoustic case}
First, consider the acoustic wave equation written in the classical form as
\begin{equation}\label{e: acoustic wave eq strong form}
Pu := \rho \partial_{t}^2u - \nabla_x\cdot \mu \nabla_{x}u =0,
\end{equation}
where $u$ is a scalar function, and $\rho(x)$, $\mu(x)$ are two piecewise smooth functions. This is not the standard notation for an acoustic wave equation. Normally, one first considers the elastic equation (\ref{eq_2}), where $\kappa:= \lambda + 2\mu/3$ is the incompressibility (or bulk modulus). In the fluid regions, one sets $\mu \equiv 0$ in (\ref{eq_2}) and obtains the acoustic wave equation for the pressure field $p = -\kappa \nabla_x \cdot \partial_t u$, that is, $\kappa^{-1} \partial^2_{t} p - \nabla \cdot (\rho^{-1}\nabla p) = 0$. That notation is awkward for our purposes, since we will compare our formulas to those of \cite{RachBoundary, RachDensity}. To make the comparison clearer, we replace $\kappa^{-1}$ by $\rho$ and $\rho^{-1}$ by $\mu$ to obtain (\ref{e: acoustic wave eq strong form}), which we will refer to as the acoustic wave equation.
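For the reader's convenience, the passage from the fluid limit of (\ref{eq_2}) to the pressure equation is the following standard computation: setting $\mu \equiv 0$ (so that $\kappa = \lambda$) and writing $q := \kappa \nabla_x\!\cdot u$, the elastic equation becomes $\rho\, \partial_t^2 u = \nabla_x q$, and hence
\begin{align*}
 \partial_t^2\big(\nabla_x\!\cdot u\big) = \nabla_x\!\cdot\big(\rho^{-1}\nabla_x q\big)
\quad\Longrightarrow\quad
\kappa^{-1} \partial_t^2 q - \nabla_x\!\cdot\big(\rho^{-1}\nabla_x q\big) = 0.
\end{align*}
Since $p = - \partial_t q$, the pressure satisfies the same equation.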
On a bounded domain $\Omega$ with smooth boundary $\partial\Omega$, the wave equation for $P$ with initial Cauchy data at time $t=0$ and a boundary condition on $(0,\infty)\times \partial\Omega$ is well-posed. The acoustic wave equation in an open, bounded domain $\Omega \subset {\mathbb R}^3$ with transmission conditions is given as
\begin{equation} } \def\eeq{\end{equation}\label{Acoustic_OP}
\begin{aligned}
Pu(t,x) := \left(\rho(x) \partial_{t}^2 - \nabla_{x}\cdot\mu(x)\nabla_x\right)u(t,x) =& 0,\qquad &&\mbox{in }(0,\infty)\times\Omega\setminus \Gamma,\\
u\restriction_{\Gamma_-} =& u\restriction_{\Gamma_+}, \\
\mu \frac{ \partial u}{ \partial \nu}\restriction_{\Gamma_-} =& \mu \frac{ \partial u}{ \partial \nu}\restriction_{\Gamma_+},\\
u(t,x) =& f(t,x) \quad &&\mbox{on }(0,\infty)\times\partial\Omega,\\
u\restriction_{t=0} = 0, \quad &\partial_t u\restriction_{t=0} = 0\quad &&\mbox{on }\Omega,
\end{aligned}
\eeq
where as in \cite{SU-TATBrain}, $u\restriction_{\Gamma_\mp}$ is the limit value (the trace) of $u$ on $\Gamma$ when taking the limit from ``above'' and from ``below'' $\Gamma$ respectively. We will denote this limit when applied to a parameter $c$ by $c^{(\pm)}$. We similarly define the interior and exterior normal derivatives, and $\nu$ is the exterior unit (in the Euclidean metric) normal to $\Gamma$. The conditions at $\Gamma_\pm$ are called the \emph{transmission conditions.}
They can be shortened to $[u] = 0$, and $[\mathcal N u]=0$ where $[v]$ stands for the jump of $v$ from the exterior to the interior across $\Gamma$ and $\mathcal N$ is the normal operator: $\mathcal N u = \mu \frac{ \partial u}{ \partial \nu}$.
We consider the parameters $\mu$ and $\rho$ to be piecewise smooth in $\Omega$.
The set of discontinuities of $\mu$ and $\rho$ is known as the \emph{interface} and denoted by $\Gamma$.
We denote $c_S:=\sqrt{\mu/\rho}$ to be the wave speed in $\Omega$.
For each given $f$, a solution $u$ to \eqref{Acoustic_OP}, can be written microlocally near the interface in the form (\ref{e: u = uI + uR+uT}), with $u_I$ the incoming wave hitting $\Gamma_-$, $u_R$ the reflected wave, and $u_T$ the transmitted wave initially moving away from $\Gamma_+$ inside $\Omega_+$ (see \cite[Section 4]{SU-TATBrain} for the construction).\footnote{The notion of incoming and outgoing is characterized in terms of its wavefront set. See \cite{SUV2019transmission,SU-TATBrain}.} Denote
\begin{equation}\label{e: definition of h at Gamma}
h := \rho_{\Gamma_-}u_I \in \mathcal{E}'(\Gamma_- \times {\mathbb R}_t),
\end{equation}
where $\rho_{\Gamma_-}$ denotes restriction to $\Gamma$ from $\Omega_{-}$. We assume $h$ is microsupported away from the glancing set (see \cite{SUV2019transmission} for the relevant definition). It is well known \cite{SU-TATBrain} that
\[\rho_{\Gamma_-}u_R \equiv R(\rho_{\Gamma_-} u_I) = R h,
\]
where $R \in \Psi_{cl}^0(\Gamma_- \times {\mathbb R}_t)$ is the reflection operator (see \cite{CHKUControl,CHKUElastic}) derived explicitly in Section \ref{s: acoustic zeroth order derivation} with principal symbol given by $(a_R)_0$ in equation \eqref{zeroth_order_wave}; `$\equiv$' denotes equality modulo functions in the class $C^\infty(\Gamma_{\pm} \times {\mathbb R}_t)$, and, when used between pseudodifferential operators, equality modulo operators in $\Psi^{-\infty}$. In the notation of \cite[Section 4]{SU-TATBrain}, $R$ has principal symbol $b_R^{(0)}\restriction_\Gamma$, and $R$ is constructed microlocally from $P$ and the transmission conditions, so that if $\tilde P$ is another acoustic wave operator, there is a corresponding reflection operator $\tilde R$. Since we are interested in recovering the material parameters and their derivatives on $\Gamma$, we can shrink $\Omega$ so that $\Omega_{\pm}$ become small one-sided neighbourhoods of $\Gamma_{\pm}$. Hence, without loss of generality, we assume $\Omega$ to be a thin open neighbourhood of $\Gamma$ in $ {\mathbb R}^3$.
\begin{theorem}\label{Th_main_acoustic}
Let $P$ and $\widetilde{P}$ be two acoustic wave operators with parameters $(\mu,\rho)$ and $(\widetilde{\mu},\widetilde{\rho})$ on $(0,T_0)\times\Omega$. We assume the notations defined above.
Suppose that $R \equiv \tilde R$ on $(0,T_0)\times\Gamma_-$ and $c_{S} =\tilde c_{S}$ and $\rho = \tilde \rho$ in a small neighbourhood of $\Gamma_{-}$ in $\Omega_-$.
Then $ \partial_{\nu}^j c^{(+)}_{S} = \partial_{\nu}^j\tilde c^{(+)}_{S}$ and $ \partial_{\nu}^j\rho^{(+)} = \partial_{\nu}^j\tilde \rho^{(+)}$ on $\Gamma_+$ for $j = 0, 1, 2, \dots$.
\end{theorem}
\begin{rem}
Note that we need the data only on an infinitesimally small neighbourhood on only one side of the interface. From the data measured on one side of the interface, we can determine information about the transmitted wave, which lies on the other side of the interface.
\end{rem}
\begin{rem}
We require measurements for only a very short period of time at the interface, but we consider the time $T_0>0$ needed for the wave to travel from $\partial\Omega$ to $\Gamma$. That is, we start by generating an initial pulse at $t=0$ on $\partial\Omega$. Let $T>0$ be the time required for the wave to reach the interface $\Gamma$.
We then take $T_0>T$ slightly bigger than $T$ and take measurements in a small time neighbourhood of $T$ at $\Gamma$.
\end{rem}
\begin{rem}\label{rem_1}
An estimate for the time $T_0$ can be given as $\mbox{diam}_{g}(\Omega) < T_0 < 2\mbox{diam}_{g}(\Omega)$, where the diameter is taken in the wave-speed metric $g:=c_S^{-2}dx^2$ in $\Omega$.
Now, since the wave-speed $c_S$ is piecewise smooth in $\Omega$, we take the distance function defined by adding the lengths of the connected geodesic segments in $\Omega_{-}$ and $\Omega_{+}$. For details on such non-smooth distance functions, see \cite{CHKUControl}.
\end{rem}
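The broken-geodesic distance in Remark \ref{rem_1} adds up segment lengths weighted by the local wave speed; a minimal numerical sketch (our own illustration, with made-up layer speeds and a straight-segment ray path):

```python
import math

def travel_time(points, speeds):
    """Travel time along a broken ray: `points` are the path vertices
    (source, point on the interface, receiver) and `speeds[i]` is the
    constant wave speed on the i-th segment, as in the metric
    g = c_S^{-2} dx^2 with piecewise constant c_S."""
    assert len(speeds) == len(points) - 1
    total = 0.0
    for (a, b), c in zip(zip(points, points[1:]), speeds):
        total += math.dist(a, b) / c
    return total

# Source in Omega_-, refraction point on Gamma = {x3 = 0}, receiver in Omega_+.
path = [(0.0, 0.0, -1.0), (0.5, 0.0, 0.0), (1.0, 0.0, 1.0)]
print(travel_time(path, speeds=[1.5, 2.0]))
```

The true broken geodesic minimizes this quantity over the refraction point on $\Gamma$, which is precisely Snell's law in variational form.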
\subsection*{Elastic case}
For a bounded domain $\Omega \subset {\mathbb R}^3$ (representing an elastic object), we consider the isotropic elastic equation with operator $\Op$ given formally in the classical form as
\begin{equation}\label{eq_2}
\Op u:= \rho \partial_{t}^2u - \nabla_x \cdot( \lambda \operatorname{div} \otimes \operatorname{Id} + 2\mu \widehat{\nabla}_x)u = 0,
\end{equation}
where $\rho$ is the density, $\lambda$ and $\mu$ are the Lam\'{e} parameters, and $\widehat{\nabla}$ is the \emph{symmetric gradient} used to define the strain tensor for an elastic system via $\widehat{\nabla} u = (\nabla u + (\nabla u)^T)/2$ for a vector valued distribution $u$.
The operator $\Op$ acts on a vector-valued distribution $u(t,x)= (u_1,u_2,u_3)$, the \emph{displacement} of the elastic object.
We assume that the Lam\'{e} parameters $\lambda(x)$ and $\mu(x)$ are bounded and satisfy the \emph{strong convexity conditions}, namely $\mu>0$ and $3\lambda+2\mu >0$ on $\overline{\Omega}$.
We consider the initial boundary value problem as
\begin{align}\label{Elastic_OP_1}
\Op U(t,x) =& 0,\qquad &&\mbox{in }(0,\infty)\times\Omega,\\
U(t,x)\restriction_{\partial\Omega} =& f(t,x), \qquad &&\mbox{for } (t,x) \in (0,\infty)\times\partial\Omega,\\
U\restriction_{t=0} = 0,\quad \partial_t U\restriction_{t=0} =& 0, \qquad &&\mbox{in }\Omega.
\end{align}
Here $U(t,x)$ in the above system denotes the displacement in $\Omega$ at time $t\geq 0$ and $c_P = \sqrt{\frac{\lambda+2\mu}{\rho}}$, $c_S = \sqrt{\frac{\mu}{\rho}}$ are the compressional and the shear wave-speeds in $\Omega$ respectively. As described in \cite{SUV2019transmission,CHKUElastic} we impose the following transmission conditions
\begin{equation}\label{e: elastic transmission conditions}
[U] = 0, \qquad [ \mathcal N U ] = 0
\end{equation}
where $[v]$ stands for the jump of $v$ from the exterior to the interior across $\Gamma$ and $\mathcal N U$ denotes the normal components of the stress tensor (see (\ref{elastic_Neumann})).
The strong convexity conditions on $\lambda$, $\mu$ ensure that $c_P > c_S$ on $\overline{\Omega}$. Near an interface, we have the decomposition $u = u_I + u_R + u_T$ from (\ref{e: u = uI + uR+uT}). As in the acoustic setting, with $h = \rho_{\Gamma_-} u_I$ and assuming $h$ is microsupported away from the glancing set (see \cite{SUV2019transmission} for the relevant definition), one has $\rho_{\Gamma_-}u_R \equiv R h$, where $R \in \Psi^0(\Gamma_- \times {\mathbb R}_t)$ is the reflection operator (see \cite{CHKUControl,CHKUElastic}; it is denoted by $M_R$ in \cite{CHKUElastic}). Here `$\equiv$' denotes equality modulo functions in the class $C^\infty(\Gamma_{\pm} \times {\mathbb R}_t)$. The operator $R$ will be derived microlocally from $Q$ and the transmission conditions. Note that in this setting, $\Psi^*(\Gamma_- \times {\mathbb R}_t)$ consists of pseudodifferential operators acting on vector bundles.
We now state our theorem:
\begin{theorem}\label{t: Elastic case}
Let $\Op$ and $\widetilde{\Op}$ be two isotropic elastic wave operators with parameters $(\lambda,\mu,\rho)$ and $(\widetilde{\lambda},\widetilde{\mu},\widetilde{\rho})$ on $(0,T_0)\times\Omega$. Assuming the above notational conventions,
suppose that $R \equiv \tilde R$ on $(0,T_0)\times\Gamma_-$ and $c_{\PS} =\tilde c_{\PS}$, $\rho = \tilde \rho$ near $\Gamma$ in $\overline \Omega_-$.
Then $ \partial_{\nu}^j c^{(+)}_{\PS} = \partial_{\nu}^j\tilde c^{(+)}_{\PS}$ and $ \partial_{\nu}^j\rho^{(+)} = \partial_{\nu}^j\tilde \rho^{(+)}$ on $\Gamma_+$ for all $j = 0, 1, 2, \dots$.
\end{theorem}
\begin{rem}
More generally, we can show unique determination of the parameters below the interface using \emph{relative amplitude reflections}. In seismic experiments, one often only has access to normalized reflected amplitudes rather than the exact ones (see Remark \ref{rem: relative amplitudes computation} for the normalization; c.f. \cite{ZhouRelAmp}). Hence, it becomes natural to ask whether our reconstruction methods apply to these cases. For a reconstruction formula, the answer is essentially yes, modulo solving an intricate nonlinear equation involving one of the parameters, while unique determination can be done completely. The argument is briefly summarized in Remark \ref{rem: relative amplitudes computation}, which deals with the simpler acoustic case, but similar arguments hold for the elastic case.
\end{rem}
\begin{comment}
\begin{rem}
We assumed in this inverse problem that the upper layer ($\rho^{(-)}, c_\PS^{(-)}$) is known, but it is also natural to ask how much can one recover knowing only the reflected operator $R$ without any knowledge of the material parameters on either side of the interface. Concretely, can one reconstruct $c_\PS^{(+)} - c_\PS^{(-)}, \rho^{(+)}- \rho^{(-)}$ and all their derivatives at an interface from $R$? This question is more subtle due to the nonlinear coupling of the material parameters in the equation for the symbol of $R$, but we give some partial results in the acoustic case with an explanation of the obstructions in Remark \ref{rem: recovery of contrasts}.
\end{rem}
\end{comment}
We also state an obvious corollary regarding the unique recovery of the transmission operator $T \in \Psi^0(\Gamma_- \times \mathbb R)$ using the reflection operator. Here, $\rho_{\Gamma_+} u_T = Th$ with $u_T$ and $h$ introduced earlier. On a principal symbol level, this is usually proved using a conservation of energy argument. However, since we show that the full symbol of the transmission operator is determined by the jet of all three parameters on both sides of the interface, we also have
\begin{cor}\label{c: recover T from R}
Suppose that $R = \tilde R \text{ mod }\Psi^{-\infty}(\Gamma_- \times \mathbb R)$ and $c_{\PS} =\tilde c_{\PS}$ and $\rho = \tilde \rho$ in $\overline \Omega_-$. Then $T = \tilde T \text{ mod }\Psi^{-\infty}(\Gamma_{\pm} \times \mathbb R)$.
\end{cor}
\begin{rem}
We require measurements for a very short period of time at the interface, but we allow a time $T_0>0$ for the waves to travel from $\partial\Omega$ to $\Gamma$. Since we have two waves (compressional and shear) travelling with different wave-speeds ($c_{\PS}$), the estimate of the time $T_0$ is not as straightforward as in the acoustic case (Remark \ref{rem_1}).
We resolve this with the help of the strong convexity condition on the Lam\'e parameters, which ensures that $c_P>c_S$ in $\Omega$.
We can define distance functions corresponding to the non-smooth metrics $g_{\PS}$ by joining geodesics on both sides of the interface (see \cite{CHKUElastic}) and estimate the time $T_0$ for elastic waves as $\mbox{diam}_{g_S}(\Omega) < T_0 < 2\mbox{diam}_{g_P}(\Omega)$, where the two wave-speed metrics are given as $g_{\PS}:=c_{\PS}^{-2}dx^2$ in $\Omega$.
\end{rem}
\begin{rem}
Since $T_0$ is bounded above and below in terms of $\text{diam}_{g_{\PS}}(\Omega)$, one can make $T_0$ as small as needed by choosing a thin enough $\Omega$.
\end{rem}
For clarity of the exposition, we first assume a flat interface and prove the theorems in this case in Sections \ref{Sec_Acoustic} and \ref{Sec_statement}. In Section \ref{s: nonflat case}, we extend the arguments to the general case.
\section{Acoustic waves and proof of Theorem \ref{Th_main_acoustic}}\label{Sec_Acoustic}
The parameters for the acoustic waves are $\mu(x)>0$ and $\rho(x)>0$, where $\rho$ is the density of the domain $\Omega$ and $c_S := \sqrt{\frac{\mu}{\rho}}$ is the wave speed in $\Omega$.
We write the coordinates in $(0,\infty)\times\Omega$ as $(t,x)=(t,x_1,x_2,x_3) = (t,x',x_3)$ and consider $(\tau,\xi)=(\tau,\xi_1,\xi_2,\xi_3):=(\tau,\xi',\xi_3)$ to be the dual coordinates of $(t,x)$ in the cotangent space $T^*\Omega$.
\subsubsection*{Summary of the proof of Theorem \ref{Th_main_acoustic}} Let us provide a brief summary of how we prove Theorem \ref{Th_main_acoustic}.
As a first step, we give a complete derivation of the reflection operator $R$ by using ``geometric optics solutions'' of \eqref{Acoustic_OP} near $\Gamma$. One can use Fourier integral operators to construct a microlocal parametrix for the fundamental solution of \eqref{Acoustic_OP} when $f$ is a delta distribution. After imposing the transmission conditions, we derive the reflection operator $R$ that is a constituent of this parametrix. $R$ will end up being a $0$'th order classical $\Psi$DO and we derive each term in the polyhomogeneous expansion of its symbol. We also show how the curvature of $\Gamma$ affects the lower order symbols.
Afterwards, we proceed with a series of lemmas and propositions showing how to recover the material parameters $\mu$ and $\rho$, and their derivatives, at the interface using each term in the polyhomogeneous expansion of the full symbol of $R$. We start with the principal symbol of $R$ and then successively use lower order symbols to recover more derivatives of the coefficients. The shape operator of $\Gamma$ gets recovered as well. The final proof just combines the lemmas and propositions and will follow easily.
Since the elastic case follows an analogous procedure in a more complicated case, we leave most of the proofs to Appendix \ref{s: proofs of acoustic lemmas}, and instead focus on the proofs for the elastic case.
We consider a geometric optic solution for the acoustic wave equation \eqref{Acoustic_OP} locally near $\Gamma$ as \[
U = U_I+ U_R + U_T
\]
with
\begin{equation*}
U_{\bullet}(t,x)
= \int e^{i\phi_{\bullet}(t, x, \tau, \xi')} a_{\bullet}(t, x, \tau, \xi')\hat h(\tau, \xi') d \tau d\xi',
\end{equation*}
where $\bullet = I/R/T$ denotes the incoming, reflected or transmitted wave field and $\hat{h}$ is the Fourier transform of $h:=\rho_{\Gamma_{-}}U_I$.
The wave fields $U_{I/R}$ are supported on $\overline\Omega_{-}$ and $U_T$ is supported in $\overline \Omega_{+}$.
The phase function $\phi_{\bullet}(t,x,\tau,\xi')$ satisfies the usual Eikonal equation
\begin{equation}\label{Eikonal_eq}
\abs{\partial_t \phi_{\bullet}}^2 = c^2_{S}\abs{\nabla_{x} \phi_{\bullet}}^2,
\end{equation}
with the boundary condition $\phi_{\bullet}\restriction_{x_3 = 0} = -t\tau + x' \cdot \xi'$. We observe that $\phi_{\bullet}$ is of homogeneity $1$ in the $(\tau,\xi')$ variables.
We write a formal asymptotic series for the amplitude function $a_{\bullet}(t,x,\tau,\xi')$ as
\begin{equation*}
a_{\bullet}(t,x,\tau,\xi^{\prime}) = \sum_{J=0}^{-\infty} (a_{\bullet})_J(t,x,\tau,\xi^{\prime}), \qquad \bullet = I,R,T,
\end{equation*}
where $(a_{\bullet})_J$ is homogeneous of degree $J$ in $\abs{\left(\tau,\xi^{\prime}\right)}$.
From the equation $PU=0$, separating orders of $\abs{(\tau,\xi^{\prime})}$ we obtain recursive transport equations for the terms $(a_{\bullet})_J$.
Without loss of generality, we assume a flat metric near $\Gamma$, i.e. $g = c_S^{-2}dx^2$ and assume $\Gamma \subset \{ x_3 = 0 \}$ so that $x_3$ is a defining function for $\Gamma$. We show in Section \ref{s: nonflat case} how the general case follows easily from this case.
The reflection and the transmission operators $R$, $T$ on the interface are derived from the transmission conditions so that $R(U_I) = U_R$ and $T(U_I) = U_T$ when restricted to $\Gamma$. One can calculate the full symbol of $R$ and $T$ microlocally (see \cite{SU-TATBrain,Hansen-CPDEInverse} as well) from the transmission conditions on $\Gamma$ induced by the acoustic wave equation.
In Theorem \ref{Th_main_acoustic} we prove that one can determine $\partial_{\nu}^k\mu$, $\partial_{\nu}^k\rho$ on $\Gamma$, for $k=0,1,2,\dots$, from the knowledge of the reflection operator $R$ at the interface $\Gamma$ and the parameters $\rho$, $\mu$ on $\Omega_{-}$.
\subsection{Derivation of reflection operator}\label{s: acoustic zeroth order derivation}
Since $h(t,x')$ on $\Gamma$ can be made arbitrary, one can work with the acoustic wave parametrix
\begin{equation*}
u_{\bullet}(t,x,\tau,\xi') = e^{i\phi_{\bullet}(t,x,\tau,\xi')}a_{\bullet}(t,x,\tau,\xi').
\end{equation*}
Let $u_I$ be an incoming wave approaching the interface $\Gamma$ and $u_R$ be the reflected wave with the condition $\rho_{\Gamma_{-}}u_R \equiv R h$, where $R \in \Psi^0(\Gamma_{-} \times \mathbb R)$ is a well-known pseudodifferential reflection operator. Thus, $a_R\restriction_{\Gamma}$ is the symbol of $R$ in the statement of Theorem \ref{Th_main_acoustic} and the discussion preceding it, and the symbol of $R$ has an asymptotic expansion as $\sum_{J=0}^{-\infty} (a_{R})_J\restriction_{\Gamma_-}$.
The interface condition for acoustic waves reads
\begin{align*}\label{e: interface conditions}
a_I + a_R &= a_T, \\
\mu^{(-)}( \partial_{x_3} \phi_I a_I + \partial_{x_3}a_I) + \mu^{(-)}( \partial_{x_3} \phi_R a_R + \partial_{x_3}a_R) &= \mu^{(+)}( \partial_{x_3} \phi_T a_T + \partial_{x_3}a_T ), \\
& \text{ on }\Gamma.
\end{align*}
Now, we must have $u_I\restriction_\Gamma = h$, which imposes boundary conditions on $(a_{I})_J$, $J=0,-1,\dots$. Indeed we get
\begin{equation}\label{e: bdy conditions of a_I}
(a_I)_0 = 1 \text{ and }
(a_I)_J = 0 \text{ on } \Gamma.
\end{equation}
Now, observe that, from the interface conditions of $\phi_{\bullet}$ in (\ref{Eikonal_eq}) we obtain $ \partial_{x_k}e^{i\phi_{\bullet}} = i\xi_k e^{i\phi_{\bullet}}$ for $k=1,2$ and $ \partial_{t}e^{i\phi_{\bullet}} = -i\tau e^{i\phi_{\bullet}}$ on $\Gamma$.
Furthermore, we set $\xi_{3,\bullet} := \partial_{x_3}\phi_{\bullet}\restriction_{\Gamma}$; by the Eikonal equation (\ref{Eikonal_eq}),
\begin{equation}
\xi_{3, \bullet}^2 = c^{-2}_{S}\abs{ \partial_t \phi_{\bullet}}^2 - \abs{ \partial_{x'}\phi_{\bullet}}^2 = c^{-2}_{S}\abs{\tau}^2 - \abs{\xi'}^2,
\qquad \mbox{where}\quad \bullet = I, R, T.
\end{equation}
Also, note that $\xi_{3,R} = -\xi_{3,I}$ on $\Gamma$. The interface conditions for the $0$'th order term $(a_{\bullet})_0$ are
\begin{align*}
&\begin{cases}
-(a_R)_0 + (a_T)_0 = (a_I)_0 \\
-\mu^{(-)}\xi_{3, R} (a_R)_0 + \mu^{(+)}\xi_{3, T} (a_T)_0
= \mu^{(-)}\xi_{3, I} \quad (\text{using } (a_I)_0 = 1)
\end{cases}
\qquad \mbox{on }\Gamma.
\\
\text{Hence, } \quad
&\left[\begin{matrix} -1 & 1 \\ -\mu^{(-)}\xi_{3, R} & \mu^{(+)}\xi_{3, T}\end{matrix}\right] \col{(a_R)_0 \\ (a_T)_0 }
= \col{ 1 \\ \mu^{(-)}\xi_{3, I}} \qquad \text{ on } \Gamma.
\end{align*}
Since $\xi_{3, R} = - \xi_{3, I}$, we compute
\begin{equation}\label{zeroth_order_wave}
\col{(a_R)_0\\ (a_T)_0}
= \frac{-1}{\mu^{(-)}\xi_{3, I} + \mu^{(+)}\xi_{3, T}}
\left[\begin{matrix} \mu^{(+)}\xi_{3, T} & -1 \\ -\mu^{(-)}\xi_{3,I} & -1\end{matrix}\right]
\col{1 \\ \mu^{(-)}\xi_{3, I} } = B_0\col{1 \\ \mu^{(-)}\xi_{3, I} },
\end{equation}
where the matrix $B_0$ above depends only on the parameters $\mu,\rho$ at the boundary, but \emph{not} on their derivatives.
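As a numerical sanity check, one can solve the zeroth-order interface system above directly and compare with the classical impedance form of the leading reflection coefficient. A minimal sketch with hypothetical parameter values, using the convention that $\xi_{3,I}$, $\xi_{3,T}$ are the positive vertical wavenumbers $\sqrt{c_S^{-2}\tau^2-\abs{\xi'}^2}$ in the propagating (non-glancing) regime:

```python
import numpy as np

# Hypothetical parameters on the two sides of the interface (illustrative only)
mu_m, rho_m = 1.0, 1.0        # mu^(-), rho^(-) on the known side
mu_p, rho_p = 2.5, 1.3        # mu^(+), rho^(+) below the interface
cS2_m, cS2_p = mu_m / rho_m, mu_p / rho_p

tau, xip = 3.0, 1.0           # a non-glancing covector: |xi'| < tau/c_S on both sides
xi3_I = np.sqrt(tau**2 / cS2_m - xip**2)   # vertical wavenumber, incident side
xi3_T = np.sqrt(tau**2 / cS2_p - xip**2)   # vertical wavenumber, transmitted side
xi3_R = -xi3_I                             # reflected wave

# Zeroth-order interface system for ((a_R)_0, (a_T)_0):
#   -(a_R)_0 + (a_T)_0                         = (a_I)_0 = 1
#   -mu^- xi3_R (a_R)_0 + mu^+ xi3_T (a_T)_0   = mu^- xi3_I
A = np.array([[-1.0, 1.0],
              [-mu_m * xi3_R, mu_p * xi3_T]])
b = np.array([1.0, mu_m * xi3_I])
aR0, aT0 = np.linalg.solve(A, b)

# Classical impedance form of the leading reflection coefficient
aR0_closed = (mu_m * xi3_I - mu_p * xi3_T) / (mu_m * xi3_I + mu_p * xi3_T)
print(np.isclose(aR0, aR0_closed), np.isclose(aT0, 1.0 + aR0))  # True True
```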
For the orders of homogeneity $J = -1,-2,\dots$ in $\abs{\xi^{\prime}}$, using the boundary conditions for $(a_{\bullet})_{J}$, we get
\begin{multline}\label{e: transmission conditions}
\left[\begin{matrix} -1 & 1 \\ -\mu^{(-)}\xi_{3, R} & \mu^{(+)}\xi_{3, T}\end{matrix}\right]\col{(a_R)_J \\ (a_T)_J }
\\ = \col{ 0 \\ \mu^{(-)} \partial_{x_3}(a_I)_{J+1} + \mu^{(-)} \partial_{x_3}(a_R)_{J+1}-\mu^{(+)} \partial_{x_3}(a_T)_{J+1}} \quad \text{on } \Gamma.
\end{multline}
Thus, $(a_R)_0$ restricted to $\mathbb R_t \times \Gamma$ is the principal symbol of $R$ and $(a_R)_J$ for $J = -1, -2, \dots$ restricted to $\mathbb R_t \times \Gamma$ are the lower order symbols in the polyhomogeneous expansion of the symbol of $R$.
\subsection{Some lemmas and proof of Theorem \ref{Th_main_acoustic}}\label{s: lemmas and proof of acoustic thm}
We can uniquely determine both material parameters restricted to the interface from $(a_R)_0$. The proofs of the lemmas in this section are in Appendix \ref{s: proofs of acoustic lemmas}.
\begin{lemma}\label{l: recover 2 params from 0'th order reflect}
Suppose $(a_R)_0(x,\tau_i,\xi'_i) = (\tilde a_R)_0(x,\tau_i,\xi_i')$ for $i=1,2$ such that $\abs{\xi_1'}/\tau_1 \neq \pm \abs{\xi_2'}/\tau_2$ and $(x,\tau_i,\xi'_i)$ are not in the glancing set. Suppose also that $\mu^{(-)} = \tilde \mu^{(-)}, \rho^{(-)} = \tilde \rho^{(-)}$. Then $\mu^{(+)} = \tilde \mu^{(+)}, \rho^{(+)} = \tilde \rho^{(+)}$. That is, the reflection coefficient at two different covectors in the nonglancing region uniquely determines both material parameters infinitesimally below the interface when those parameters are known above the interface.
\end{lemma}
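The inversion behind this lemma can be made concrete: writing the leading reflection symbol in impedance form $r=(\mu^{(-)}\xi_{3,I}-\mu^{(+)}\xi_{3,T})/(\mu^{(-)}\xi_{3,I}+\mu^{(+)}\xi_{3,T})$, each measurement determines $k:=\mu^{(+)}\xi_{3,T}$, and $k^2=\mu^{(+)}\rho^{(+)}\tau^2-(\mu^{(+)})^2\abs{\xi'}^2$ is \emph{linear} in the unknowns $X=\mu^{(+)}\rho^{(+)}$ and $Y=(\mu^{(+)})^2$; two covectors with distinct slownesses $\abs{\xi'}/\tau$ give an invertible $2\times2$ system. A minimal numerical sketch, all values hypothetical:

```python
import numpy as np

# Known side and hypothetical "true" unknown side (illustrative values)
mu_m = 1.0
mu_p_true, rho_p_true = 2.5, 1.3
cS2_m, cS2_p = 1.0, mu_p_true / rho_p_true   # squared shear speeds (rho^- = 1)

def xi3(tau, xip, cS2):
    # vertical wavenumber in the propagating (non-glancing) regime
    return np.sqrt(tau**2 / cS2 - xip**2)

def reflect(tau, xip):
    # leading reflection symbol in impedance form
    zm = mu_m * xi3(tau, xip, cS2_m)
    zp = mu_p_true * xi3(tau, xip, cS2_p)
    return (zm - zp) / (zm + zp)

# Two non-glancing covectors with distinct slownesses |xi'|/tau
covs = [(3.0, 1.0), (3.0, 2.0)]
A, b = [], []
for tau, xip in covs:
    r = reflect(tau, xip)                                  # "measured" coefficient
    k = mu_m * xi3(tau, xip, cS2_m) * (1 - r) / (1 + r)    # = mu^+ xi_{3,T}, known
    # k^2 = (mu^+ rho^+) tau^2 - (mu^+)^2 |xi'|^2, linear in X, Y below
    A.append([tau**2, -xip**2])
    b.append(k**2)
X, Y = np.linalg.solve(np.array(A), np.array(b))   # X = mu^+ rho^+, Y = (mu^+)^2
mu_p, rho_p = np.sqrt(Y), X / np.sqrt(Y)
print(mu_p, rho_p)   # recovers 2.5 and 1.3 (up to rounding)
```

The condition $\abs{\xi_1'}/\tau_1 \neq \pm \abs{\xi_2'}/\tau_2$ is exactly what makes the $2\times2$ matrix above invertible.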
The next step is to recover all higher order normal derivatives of the parameters at the non-glancing region of the interface from the lower order symbols in the polyhomogeneous expansion of the full symbol of $R$.
Since we consider only the non-glancing region, we may assume that $\xi_{3,\bullet}$ is bounded away from $0$.
{\bf Notation:} We denote by $R_j$ terms that depend on
\begin{enumerate}
\item[$\bullet$]
normal derivatives of $c_S,\rho$ of order at most $j$, and
\item[$\bullet$]
quantities determined completely by the transmission conditions (\ref{zeroth_order_wave}),(\ref{e: transmission conditions}) on $\Gamma$ for $J=0,-1, \dots, 1-j$, and
\item[$\bullet$] any quantity in the known region $\overline\Omega_-$.
\end{enumerate}
By a direct calculation, $P u_{\bullet} =0$ reduces to the equation
\begin{equation*}
p(x, \partial_{t,x} \phi_\bullet) a_{\bullet}
+ 2i(\rho \partial_t \phi_\bullet \partial_t - \mu \partial_x \phi_\bullet \cdot \partial_x)a_\bullet
+ i (P\phi_\bullet)a_\bullet + P(x,D_t,D_x) a_{\bullet} = 0,
\end{equation*}
where $p(t,x,\tau,\xi)$ is the principal symbol of the operator $P = \rho \partial_t^2 - \nabla \cdot \mu \nabla$, given as
\begin{equation*}
p(t,x,\tau,\xi) = -\left(\rho(x)\tau^2 - \mu(x)\abs{\xi}^2\right).
\end{equation*}
Separating orders of $\abs{\xi^{\prime}}$ we obtain the transport equations
\begin{equation}\label{e: transport equations}
\begin{gathered}
( \partial_t \phi_\bullet \partial_t - c_S^2 \partial_x \phi_\bullet \cdot \partial_x)(a_\bullet)_0
+ ((1/2\rho)P\phi_\bullet)(a_\bullet)_0 = 0, \\
( \partial_t \phi_\bullet \partial_t - c_S^2 \partial_x \phi_\bullet \cdot \partial_x)(a_\bullet)_J
+ ((1/2\rho)P\phi_\bullet)(a_\bullet)_J \\ = -(1/(2i\rho))P(x,D_t,D_x) (a_{\bullet})_{J+1},
\quad \mbox{for }J<0.
\end{gathered}
\end{equation}
The Hamiltonian to describe downgoing and upgoing waves is
\begin{equation*}
q_\pm(t,x,\tau,\xi) = \xi_3 \mp \sqrt{c_S^{-2}\tau^2 - \abs{\xi'}^2},
\end{equation*}
with the Hamilton's equations
\begin{align*}
\frac{dt}{ds} = \frac{-\tau}{c_S^2 \xi_{3,\bullet}}
\qquad\mbox{and}\qquad
\frac{dx}{ds} = \frac{\xi}{\xi_{3,\bullet}}.
\end{align*}
Here $s$ is the parameter along the Hamiltonian flow. Along the Hamilton vector fields we obtain
\begin{multline}\label{key_5}
( \partial_t \phi_\bullet \partial_t - c_S^2 \partial_x \phi_\bullet \cdot \partial_x)a_\bullet
= -c_S^2\xi_{3,\bullet} \left(\frac{d}{ds} a_\bullet\right)
\\
= -c_S^2\xi_{3,\bullet} \left( \partial_{x_3}a_\bullet
+ (\tau/\xi_{3,\bullet}) \partial_t a_\bullet - (\xi'/\xi_{3,\bullet}) \cdot \partial_{x'}a_\bullet\right).
\end{multline}
Observe that, when we restrict to $\Gamma$, the last two terms,
viz. $(\tau c^2_S)\partial_t a_{\bullet}$ and $c^2_S \left(\xi^{\prime}\cdot\partial_{x^{\prime}}a_{\bullet}\right)$,
are completely determined by $c_S$, $\rho$ at $\Gamma$ (no normal derivatives) and the transmission conditions (\ref{e: transmission conditions}).
Thus, using our notation $R_j$, for $J=0$, (\ref{key_5}) can be written as
\begin{equation}\label{e: Hamilton derivative to normal deriv}
( \partial_t \phi_\bullet \partial_t - c_S^2 \partial_x \phi_\bullet \cdot \partial_x)(a_\bullet)_0
= -(c_S^2\xi_{3,\bullet}) \frac{d}{ds}(a_\bullet)_0
= -c_S^2\xi_{3,\bullet} \partial_{x_3}(a_\bullet)_0
+ R_0.
\end{equation}
Along with the transport equations, this identity shows that the term $\partial_{x_3}(a_\bullet)_0$ can be expressed in terms of the normal derivatives of the parameters at $\Gamma$.
In order to illustrate this fact and to get an explicit relation between the normal derivatives of the amplitude and the normal derivatives of the coefficients, we state the following technical lemma.
\begin{lemma}\label{l: acoustic partial_x_3 a_0 formula}
$ \partial_{x_3}(a_\bullet)_0$ are of type $R_1$; that is, they depend on at most $1$ normal derivative of $\rho$, $c_S$ on $\Gamma$.
In fact we have the following explicit relation
\begin{equation}\label{e: acoustic partial_{x_3}a_0 formula}
\partial_{x_3}(a_\bullet)_0 = -\left[( \partial_{x_3}\log \sqrt \rho)
- ( \partial_{x_3}\log c_S)\left(1 - \frac{( \partial_t \phi_\bullet)^2}{2c_S^2 \xi^2_{3,\bullet}}\right)\right](a_\bullet)_0 + R_0.
\end{equation}
\end{lemma}
Next, higher order normal derivatives of the material parameters can be uniquely determined from the knowledge of $(a_{\bullet})_J$ at $\Gamma$.
We first consider the case for the first order normal derivatives.
\begin{lemma}\label{l: first derivative}
One may recover the first normal-derivatives of both parameters, that is $ \partial_{x_3}\rho^{(+)}$ and $ \partial_{x_3}c_S^{(+)}$ at $\Gamma$ from $(a_R)_{-1}$.
\end{lemma}
For the higher order derivatives of the coefficients we have an analogous lemma; in the smooth case, it reduces to \cite[Lemma 3.10]{RachBoundary}.
\begin{lemma}\label{l: acoustic higher derivatives derivation}
Fix $J \in \{-2, -3, \dots\}$ and assume that $\mu$, $\rho$ are known on $\Omega_{-}$, and $(a_R)_0,\dots,(a_R)_{1+J}$ are known on $\Gamma$. Then
$ \partial_{x_3}^{\abs{J}}c^{(+)}_S$ and $ \partial_{x_3}^{\abs{J}}\rho^{(+)}$ are uniquely determined by $(a_R)_J$ at $\Gamma$. In fact, we have the following explicit relation
\begin{align}\label{e: (a_R)_J equation for acoustic}
(a_R)_J &= -(-i/(2\xi_{3,T}))^J \left[( \partial^{\abs{J}}_{x_3}\log \sqrt {\rho^{(+)}}) \right.\\ \nonumber
&\qquad \qquad \qquad
\left.+ \partial^{\abs{J}}_{x_3}\log c^{(+)}_S\left(1 - \frac{( \partial_t \phi_T)^2}{2c_S^2 \xi^2_{3,T}}\right)\right]\frac{(a_T)_{J+1}}{R_{\abs{J+1}}} + R_{\abs{J+1}}
\end{align}
\end{lemma}
\begin{proof}[Proof of Theorem \ref{Th_main_acoustic}] The assumption $R \equiv \tilde R$ implies that the full symbol of $R$ coincides with that of $\tilde R$ away from the glancing set. By construction, this means $(a_R)_J = (\tilde a_R)_J$ for each $J$ on $\mathbb R_t \times \Gamma_-$, so by Lemma \ref{l: recover 2 params from 0'th order reflect} we may recover $\mu^{(+)}$ and $\rho^{(+)}$. By Lemmas \ref{l: first derivative} and \ref{l: acoustic higher derivatives derivation} we recover $ \partial_{x_3}^{J}c^{(+)}_S$ and $ \partial_{x_3}^{J}\rho^{(+)}$ for each $J=1, 2,\dots$.
\end{proof}
\section{Elastic waves and proof of Theorem \ref{t: Elastic case}}\label{Sec_statement}
Recall the isotropic elastodynamic wave equation as
\begin{equation}\label{Elastic_OP}
\begin{aligned}
\rho \partial_{t}^2U - \nabla \cdot( \lambda \operatorname{div} \otimes \operatorname{Id} + 2\mu \widehat{\nabla})U =& 0,\qquad &&\mbox{in }(0,\infty)\times\Omega,\\
U(t,x)\restriction_{\partial\Omega} =& f(t,x), \qquad &&\mbox{for } (t,x) \in (0,\infty)\times\partial\Omega,\\
U\restriction_{t=0} = 0,\quad \partial_t U\restriction_{t=0} =& 0, \qquad &&\mbox{in }\Omega.
\end{aligned}
\end{equation}
We define the compressional wave speed $c_P$ and the shear wave speed $c_S$ as
\begin{equation*}
c_P= \sqrt{\frac{\lambda+2\mu}{\rho}}, \qquad c_S = \sqrt{\frac{\mu}{\rho}}, \quad \mbox{in }\Omega.
\end{equation*}
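The strong convexity conditions imply $c_P > c_S$: indeed $c_P^2 - c_S^2 = (\lambda+\mu)/\rho$, and $\mu>0$ together with $3\lambda+2\mu>0$ forces $\lambda+\mu > \mu/3 > 0$. A minimal numerical sketch sampling random admissible parameters (all ranges illustrative):

```python
import numpy as np

# Random (lam, mu, rho) satisfying the strong convexity conditions
# mu > 0 and 3*lam + 2*mu > 0; then lam + mu > mu/3 > 0, hence c_P > c_S.
rng = np.random.default_rng(0)
mu = rng.uniform(0.1, 5.0, size=1000)
lam = rng.uniform(-2 * mu / 3 + 1e-6, 5.0)   # low bound enforces 3*lam + 2*mu > 0
rho = rng.uniform(0.5, 3.0, size=1000)
c_P = np.sqrt((lam + 2 * mu) / rho)
c_S = np.sqrt(mu / rho)
print(bool(np.all(c_P > c_S)))  # True
```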
The proof of Theorem \ref{t: Elastic case} will proceed in a series of steps analogous to the acoustic case, detailed at the start of Section \ref{Sec_Acoustic}. As in that case, we start with a geometric optics solution of \eqref{Elastic_OP} near $\Gamma$ and then derive the reflection operator $R$ with its full symbol.
Since our analysis takes place only near $\Gamma$, we may shrink $\Omega$ to a small neighbourhood of $\Gamma$. With $\Omega$ such a neighbourhood,
we construct the geometric optics solutions for the elastic wave equation \eqref{Elastic_OP} given as
\begin{equation}\label{GO_form}
\left(U_{\bullet}\right)_l
= \sum_{\star=\PS}\sum_{m=1,2,3}\int e^{i\phi_{\bullet,\star}(t, x, \tau, \xi')} A^{l,m}_{\bullet,\star}(t, x, \tau, \xi')\widehat{f}_m(\tau, \xi') d \tau d\xi', \quad l=1,2,3
\end{equation}
where $\bullet = I/R/T$ denotes the incoming, the reflected or the transmitted wave field.
Note that $U_{I/R}$ travels through $\Omega_{-}$, whereas $U_T$ is on $\Omega_{+}$.
The phase functions $\phi_{\bullet,\PS}$ satisfy the Eikonal equations
\begin{equation}\label{elastic_eikonal_equation}
\abs{ \partial_t \phi_{\bullet,\PS}}^2 = c^2_{\PS}\abs{\nabla_{x} \phi_{\bullet,\PS}}^2,
\quad \mbox{such that }\quad
\phi_{\bullet,\PS}(t,x)\restriction_{x_3=0} = -t \tau + x' \cdot \xi'.
\end{equation}
Similar to the acoustic wave case, $\phi_{\bullet,\PS}$ is homogeneous of order $1$ in $\abs{(\tau,\xi')}$.
We set $\xi_{3,\bullet,\PS} := \partial_{x_3}\phi_{\bullet,\PS}\restriction_{\Gamma}$, so that by the Eikonal equations \eqref{elastic_eikonal_equation}
\begin{equation}\label{xi_3_ps}
\xi_{3,\bullet,\PS}^2 = c_{\PS}^{-2}\abs{ \partial_t \phi_{\bullet,\PS}}^2 - \abs{\nabla_{x'}\phi_{\bullet,\PS}}^2,
\end{equation}
and observe that $\xi_{3,R,\PS} = -\xi_{3,I,\PS}$.
The amplitudes $(A_{\bullet, \PS})_J$ are homogeneous of degree $J$ in $\abs{(\tau,\xi')}$ and solve the following iterative equations
\begin{multline}\label{eq_transport}
p(t,x,\partial_t\phi_{\bullet,P/S},\nabla_x \phi_{\bullet,P/S})(A_{\bullet,\PS})_{J-1}\\
= \mathcal{B}_{\bullet,P/S}(A_{\bullet,\PS})_{J} + \mathcal{C}_{\bullet,P/S}(A_{\bullet,\PS})_{J+1}, \quad \forall J=0,-1,-2,\dots,
\end{multline}
where $(A_{\bullet,\PS})_{1} = 0$ and $p(t,x,\tau,\xi)$ is the principal symbol of the operator $P$, given as $p(t,x,\tau,\xi):= (-\rho\tau^2 + \mu\abs{\xi}^2) \operatorname{Id} + (\lambda+\mu)(\xi\otimes\xi)$. Also, $p_{i,j}$ refers to the $ij$'th entry of the matrix symbol.
The operators $\mathcal{B}_{\bullet,P/S}$ and $\mathcal{C}_{\bullet,P/S}$ are given as
\begin{align*}
\left(\mathcal{B}_{\bullet,P/S}M\right)_{k_1,k_2}
&:=i\left(\partial_{\tau,\xi}p(t,x,\partial_t\phi_{\bullet,P/S},\nabla_x \phi_{\bullet,P/S}) \cdot \partial_{t,x}M\right)_{k_1,k_2}\\
&\qquad - \left(p_1(t,x,\partial_t\phi_{\bullet,P/S},\nabla_x \phi_{\bullet,P/S})M\right)_{k_1,k_2}\\
&\qquad +\frac{i}{2}\sum_{\abs{\alpha}=2}\sum_{l=1}^{3}\partial^{\alpha}_{\tau,\xi}p_{k_1,l}(t,x,\partial_t\phi_{\bullet,P/S},\nabla_x \phi_{\bullet,P/S}) \\
&\qquad \qquad \qquad \qquad \qquad \cdot \left(\partial^{\alpha}_{t,x}\phi_{\bullet,P/S}\right)M_{l,k_2}\\
\left(\mathcal{C}_{\bullet,P/S}M\right)_{k_1,k_2}
:=\ &i\left(\partial_{\tau,\xi}p_1(t,x,\partial_t\phi_{\bullet,P/S},\nabla_x \phi_{\bullet,P/S}) \cdot \partial_{t,x}M\right)_{k_1,k_2}\\
&+\frac{1}{2}\sum_{\abs{\alpha}=2}\sum_{l=1}^{3}\partial^{\alpha}_{\tau,\xi}p_{k_1,l}(t,x,\partial_t\phi_{\bullet,P/S},\nabla_x \phi_{\bullet,P/S}) \cdot \left(\partial^{\alpha}_{t,x}M_{l,k_2}\right),
\end{align*}
where $p_1(t,x,\tau,\xi) = -i\left[ \nabla_x\lambda \otimes \xi + (\nabla_x\mu\cdot\xi) \operatorname{Id} + (\xi \otimes \nabla_x\mu) \right]$ is the lower order term of the symbol of $P$.
We consider the elastic wave parametrix as
\begin{multline*}
\left(u_\bullet(t,x,\tau,\xi')\right)_m := \sum_{\star=\PS} e^{i\phi_{\bullet,\star}(t, x, \tau, \xi')} A^{\cdot,m}_{\bullet,\star}(t, x, \tau, \xi')
\\= \sum_{\star=\PS} e^{i\phi_{\bullet,\star}(t, x, \tau, \xi')} a^{m}_{\bullet,\star}(t, x, \tau, \xi'),
\end{multline*}
where $a^{m}_{\bullet,\PS}$ is the vector-field with components to be $\left(a^{m}_{\bullet,\PS}\right)_l = A^{l,m}_{\bullet,\PS}$.
For the sake of notational simplicity, we denote $a^{1}_{\bullet,\PS}$ by $a_{\bullet,\PS}$.
We define $N:=\frac{\nabla_x\phi_{\bullet,P}} {\abs{\nabla_x\phi_{\bullet,P}}}$ to be the unit vector in the kernel of $p(t,x,\partial_t\phi_{\bullet,P},\nabla_x \phi_{\bullet,P})$. Take $N_1,N_2$ to be two orthonormal vectors in the kernel of $p(t,x,\partial_t\phi_{\bullet,S},\nabla_x \phi_{\bullet,S})$ such that $\{ N_1, N_2, \frac{\nabla_x\phi_{\bullet,S}}{\abs{\nabla_x\phi_{\bullet,S}}}\}$ forms an orthonormal basis for $ {\mathbb R}^3$.
From the transport equation \eqref{eq_transport} one easily obtains the following compatibility condition
\begin{equation}\label{eq_compatibility}
N_{P/S}\left[ \mathcal{B}_{\bullet,P/S}(a_{\bullet,\PS})_{J} + \mathcal{C}_{\bullet,P/S}(a_{\bullet,\PS})_{J+1} \right] = 0, \quad \forall J=0,-1,-2,\dots,
\end{equation}
where $N_P = N = \frac{\nabla_x\phi_{\bullet,P}}{\abs{\nabla_x\phi_{\bullet,P}}}$ and $N_S = N_1$ or $N_2$.
The amplitudes $(a_{\bullet,\PS})_J$ are written in the form
\begin{multline}\label{eq_1}
(a_{\bullet,\PS})_J = (h_{\bullet,\PS})_J+\begin{cases}
(\alpha_{0,\bullet})_J N_\bullet & \text{ if }\phi_\bullet = \phi_{\bullet, P} \\
(\alpha_{1,\bullet})_J N_{1, \bullet}
+(\alpha_{2,\bullet})_J N_{2, \bullet} & \text{ if }\phi_\bullet = \phi_{\bullet, S}
\end{cases}, \\ \qquad J=0, -1,\dots,
\end{multline}
for some vector $(h_{\bullet,\PS})_J$ in the co-kernel of $p(t,x, \partial_{t,x}\phi_\bullet)$, to be determined ($(h_{\bullet,\PS})_0 = 0$).
We write
\begin{equation*}
\mbox{for } J=-1,-2,\dots,\qquad
\begin{cases}
(h_{\bullet,P})_J = (\gamma_{1,\bullet})_J M_1 + (\gamma_{2,\bullet})_J M_2,\\
(h_{\bullet,S})_J = (\gamma_{\bullet})_J M,
\end{cases}
\end{equation*}
where $M$, $M_1$ and $M_2$ are orthogonal to the $P$ and $S$ polarizations, given as
\begin{equation*}
M_1 = -ie_3 \times (\xi',0),
\qquad M_2 = -\xi_{3,\bullet,P}(\xi',0) + \abs{\xi'}^2e_3,
\qquad M = -i\xi_{\bullet,S},
\end{equation*}
where $\xi_{\bullet,\PS} := (\xi',0) + \xi_{3,\bullet,\PS}e_3$ and the $\alpha_{i,\bullet}$ are scalar symbols for $i=0,1,2$.
\subsection{P/S mode projections}
First we construct a $\PS$-mode projector $\Pi_{\PS}$, which microlocally projects the elastic wave field $u$ onto the compressional ($P$) and shear ($S$) wave fields for a small time-interval, as
$\Pi_{\PS} u_{\bullet} = u_{\bullet,\PS} = e^{i\phi_{\bullet}} a_{\bullet,\PS}$.
Observe that the elasticity operator $\Op$, as defined in \eqref{Elastic_OP}, has the principal symbol $p(t,x,\tau,\xi)$ given by a $3\times 3$-matrix as
\begin{equation}
p(t,x,\tau,\xi) = -\rho\left[\left(\tau^2-c_S^2\abs{\xi}^2\right)I_{3\times3} - \left(c_P^2-c_S^2\right)(\xi\otimes\xi)\right].
\end{equation}
Observe that $p(t,x,\tau,\xi)$ has eigenvalues $-\rho\left(\tau^2-c_P^2\abs{\xi}^2\right)$ and $-\rho\left(\tau^2-c_S^2\abs{\xi}^2\right)$ with multiplicity $1$ and $2$ respectively. The matrix $p$ can be diagonalised: there exists a unitary matrix $V(t,x,\tau,\xi)$ such that
\begin{equation*}
V p(t,x,\tau,\xi) V^{-1} = -\rho\begin{pmatrix}\tau^2-c_P^2\abs{\xi}^2 &0 &0\\0 &\tau^2-c_S^2\abs{\xi}^2 &0\\0 &0 &\tau^2-c_S^2\abs{\xi}^2\end{pmatrix} = D(t,x,\tau,\xi).
\end{equation*}
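The eigenvalue computation can be verified numerically from the displayed formula for $p$ (note the overall factor $-\rho$); a minimal sketch with hypothetical parameter values and a sample covector:

```python
import numpy as np

rho, lam, mu = 1.2, 2.0, 1.5                 # hypothetical Lame parameters, rho
cP2, cS2 = (lam + 2 * mu) / rho, mu / rho    # squared wave speeds

tau, xi = 2.0, np.array([0.3, -0.7, 1.1])    # a sample covector
n2 = xi @ xi

# Principal symbol of the isotropic elastic operator, as displayed above
p = -rho * ((tau**2 - cS2 * n2) * np.eye(3) - (cP2 - cS2) * np.outer(xi, xi))

evals = np.sort(np.linalg.eigvalsh(p))
expected = np.sort([-rho * (tau**2 - cP2 * n2),    # multiplicity 1 (P)
                    -rho * (tau**2 - cS2 * n2),    # multiplicity 2 (S)
                    -rho * (tau**2 - cS2 * n2)])
print(np.allclose(evals, expected))  # True
```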
We now consider the symbol
\begin{equation}\label{Symbol_mode-projector}
\Pi_P(t,x,\tau,\xi) := V^{-1}\left[\begin{matrix} 1 &0 &0\\0 &0 &0\\0 &0 &0\end{matrix}\right] V
\qquad \mbox{and}\qquad
\Pi_S(t,x,\tau,\xi) := V^{-1}\left[\begin{matrix} 0 &0 &0\\0 &1 &0\\0 &0 &1 \end{matrix}\right] V.
\end{equation}
One can equivalently write the mode projection operators $\Pi_{\PS}$ as
\begin{equation*}
\Pi_{\PS}u_{\bullet} = \int e^{i\phi_{\bullet,\PS}}a_{P/S}(t,x,\tau,\xi) \widehat{h}(\tau,\xi')\,d\tau\,d\xi' = u_{\bullet,\PS},
\end{equation*}
where $u_{\bullet}$ is as defined in \eqref{GO_form}.
Observe that the symbol of $\Pi_{\PS}$ is homogeneous of degree $0$ in $\abs{\xi}$ and thus $\Pi_{\PS}$ is a $0$-th order pseudodifferential operator.
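At the symbol level one can also write the projectors without computing $V$ explicitly: the eigenspace of $p$ for the $P$ eigenvalue is spanned by $\xi$ itself, and the $S$ eigenspace is its orthogonal complement, so $\Pi_P = \hat\xi\otimes\hat\xi$ with $\hat\xi = \xi/\abs{\xi}$ and $\Pi_S = \operatorname{Id} - \Pi_P$. A minimal numerical check (parameter values hypothetical):

```python
import numpy as np

rho, lam, mu = 1.2, 2.0, 1.5
cP2, cS2 = (lam + 2 * mu) / rho, mu / rho
tau, xi = 2.0, np.array([0.3, -0.7, 1.1])
n2 = xi @ xi
xhat = xi / np.sqrt(n2)

p = -rho * ((tau**2 - cS2 * n2) * np.eye(3) - (cP2 - cS2) * np.outer(xi, xi))

# Symbol-level projectors: P eigenspace spanned by xi, S eigenspace orthogonal
Pi_P = np.outer(xhat, xhat)
Pi_S = np.eye(3) - Pi_P

ok = (np.allclose(Pi_P + Pi_S, np.eye(3))
      and np.allclose(Pi_P @ Pi_P, Pi_P)                              # idempotent
      and np.allclose(p @ Pi_P, -rho * (tau**2 - cP2 * n2) * Pi_P)    # P eigenvalue
      and np.allclose(p @ Pi_S, -rho * (tau**2 - cS2 * n2) * Pi_S))   # S eigenvalue
print(ok)  # True
```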
\subsection{The elastic transmission conditions}
Let $u_I$ be the incoming wave parametrix corresponding to \eqref{Elastic_OP}, travelling through $\Omega_{-}$ and approaching the interface $\Gamma$. Let $h:=\rho_{\Gamma_{-}}u_I$, where $\rho_{\Gamma_{\pm}}$ are the restriction operators on $\Gamma_{\pm}$.
Let $R$ and $T$ denote the well-known reflection and transmission operators on $\Gamma$ respectively. As calculated in \cite{CHKUElastic}, $R$ and $T$ are pseudodifferential operators ($\Psi$DOs) of order $0$ on $\Gamma$.
The reflected wave field $u_R$ and the transmitted wave field $u_T$ start from $\Gamma_{-}$ and $\Gamma_{+}$ respectively, with boundary data $\rho_{\Gamma_{-}} u_R = Rh$ and $\rho_{\Gamma_{+}} u_T = Th$.
We define the Neumann operator at $\Gamma$ as
\begin{equation}\label{elastic_Neumann}
\mathcal N_{\bullet} u_{\bullet} = (\lambda\, \mathrm{div} \otimes \mathrm{I} + 2\mu \hat \nabla)u_{\bullet} \cdot \nu_{\bullet}\restriction_\Gamma,
\end{equation}
where $\nu_{\bullet}$ is the outward unit normal vector at $\Gamma$; i.e., for $\bullet = I/R$ we take $\nu_{\bullet}$ to be the unit normal vector on $\Gamma$ pointing towards $\Omega_{+}$, and for $\bullet=T$, $\nu_{T}$ is the unit normal vector pointing towards $\Omega_{-}$.
The elastic transmission conditions on the interface $\Gamma$ from \eqref{e: elastic transmission conditions} become
\begin{align}\label{elastic_transmission}
u_I + u_R =& u_T\\
\mathcal{N}_{I}u_I + \mathcal{N}_{R}u_R =& \mathcal{N}_{T}u_T. \nonumber
\end{align}
Recall that we assume $\Gamma = \{ x_3=0\}$ and $\Omega_{\pm} \subset \{ x\in {\mathbb R}^3 : \pm x_3>0 \}$.
Now, with $\nu_{\bullet} = \pm (0,0,1) = \pm e_3$ we see that
\begin{equation*}
\mathcal{N}_{\bullet}u_{\bullet} = B_{\bullet}(x,D_x)u_{\bullet},
\end{equation*}
where the matrix operator $B_{\bullet} \in \text{Diff}^1(\Omega)$ is defined as
\begin{multline*}
B_{\bullet}(x,D_x) u_{\bullet}
= \left[\begin{matrix} 0 &0 &\mu^{(\pm)} \partial_{x_1}\\0&0&\mu^{(\pm)} \partial_{x_2}\\\lambda^{(\pm)} \partial_{x_1}&\lambda^{(\pm)} \partial_{x_2}&0 \end{matrix}\right] u_{\bullet}\\
\pm \left[\begin{matrix} \mu^{(\pm)}&0&0\\ 0&\mu^{(\pm)} &0\\0&0&(\lambda^{(\pm)}+2\mu^{(\pm)})\end{matrix}\right] \partial_{x_3} u_{\bullet},
\quad \mbox{on }\Gamma,
\end{multline*}
where the sign $(\pm)$ in the above expression changes according to the sign of $\nu_{\bullet}$.
Since we are working only at the boundary, we will sometimes use the operators $B_{\bullet}$ and $\rho_\Gamma \circ B_{\bullet}$ interchangeably where $\rho_\Gamma$ is restriction to the interface.
First, we work with the case of $(u_{\bullet})_0$, i.e. the term of $u_{\bullet}$ which is homogeneous of order $0$ in $\abs{\xi}$.
We also compute
\begin{equation*}
B_{\bullet}(x,D_x)e^{i\phi}a = e^{i\phi}(B_{\bullet}(x, \partial_x \phi)a + B_{\bullet}(x,D_x)a).
\end{equation*}
It is convenient to introduce the shorthand $B(\phi_{\bullet,\PS})$ as the bundle endomorphism $B_{\bullet}(x, \partial_x \phi_{\bullet,\PS}).$
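Concretely, on $\Gamma$, where $ \partial_{x_k}\phi_{\bullet,\PS}=\xi_k$ for $k=1,2$ and $ \partial_{x_3}\phi_{\bullet,\PS}=\xi_{3,\bullet,\PS}$, the endomorphism $B(\phi_{\bullet,\PS})$ is obtained by substituting these derivatives for $ \partial_{x_1}, \partial_{x_2}, \partial_{x_3}$ in $B_{\bullet}(x,D_x)$. As a sketch (with the sign $\pm$ determined by $\nu_{\bullet}$, as in the definition of $B_{\bullet}$):

```latex
B(\phi_{\bullet,\PS})
= \left[\begin{matrix} 0 &0 &\mu^{(\pm)}\xi_1\\ 0 &0 &\mu^{(\pm)}\xi_2\\ \lambda^{(\pm)}\xi_1 &\lambda^{(\pm)}\xi_2 &0 \end{matrix}\right]
\pm \xi_{3,\bullet,\PS}\left[\begin{matrix} \mu^{(\pm)} &0 &0\\ 0 &\mu^{(\pm)} &0\\ 0 &0 &\lambda^{(\pm)}+2\mu^{(\pm)} \end{matrix}\right].
```

In particular, each entry of $B(\phi_{\bullet,\PS})$ is homogeneous of order $1$ in $\abs{(\tau,\xi)}$.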
Note that transmission conditions of different orders of homogeneity must be dealt with separately.
Here we discuss the $0$-th order transmission conditions on the interface $\Gamma$; the higher order transmission conditions are discussed in the later subsections.
For $J=0$, using the form of the parametrix \[(u_{\bullet})_0 = e^{i\phi_{\bullet,P}}(a_P)_0 + e^{i\phi_{\bullet,S}}(a_S)_0,\]
we form the $3 \times 3$ matrix $S_\bullet = \left[\, N_\bullet \;\big|\; N_{1,\bullet} \;\big|\; N_{2,\bullet} \,\right]$, where $N_\bullet$, $N_{1,\bullet}$, $N_{2,\bullet}$ are as in \eqref{eq_1} and \[(\mathcal{A}_\bullet)_0= \col{(\alpha_{\bullet})_0\\(\alpha_{1,\bullet})_0\\ (\alpha_{2,\bullet})_0}.\]
It is convenient to define
\begin{equation*}
\mathcal{T}_{\bullet} = \left[\, B(\phi_{\bullet,P})N_\bullet \;\big|\; B(\phi_{\bullet,S})N_{1,\bullet} \;\big|\; B(\phi_{\bullet,S})N_{2,\bullet} \,\right],
\end{equation*}
a $3 \times 3$ matrix whose columns are homogeneous of order $1$ in $\abs{\xi}$.
With this notation, the transmission conditions in \eqref{elastic_transmission} become
\begin{align*}
S_I(\mathcal{A}_{I})_0 + S_R (\mathcal{A}_{R})_0 =& S_T (\mathcal{A}_{T})_0\\
\mathcal{T}_{I} (\mathcal{A}_I)_0 + \mathcal{T}_{R} (\mathcal{A}_R)_0
=& \mathcal{T}_{T} (\mathcal{A}_T)_0
+ \left(B_{T}(x,D_x)\left(a_{T,P}+a_{T,S}\right)\right)_{1}\\
&- \left(B_{I}(x,D_x)(a_{I,P} + a_{I,S})\right)_{1}\\
&- \left(B_{R}(x,D_x)(a_{R,P} + a_{R,S})\right)_{1}.
\end{align*}
Since $B_{\bullet}(x,D_x)\left(a_{\bullet,\PS}\right)_J$ is homogeneous of order $J$ in $\abs{(\tau,\xi)}$, we have $\left(B_{\bullet}(x,D_x)a_{\bullet,\PS}\right)_1 = 0$.
Therefore, the elastic transmission conditions for $J=0$ yield
\begin{equation}\label{e: elastic 0'th order trans conditions}
\left[\begin{matrix} -S_R & S_T \\- \mathcal{T}_R & \mathcal{T}_T \end{matrix}\right] \col{ (\mathcal{A}_R)_0 \\
(\mathcal{A}_T)_0} = \col{ S_I (\mathcal{A}_I)_0 \\ \mathcal{T}_I (\mathcal{A}_I)_0}
\text{ on }\Gamma.
\end{equation}
\subsection{Parameters at the interface}
We start from the transmission conditions \eqref{e: elastic 0'th order trans conditions}. Observe that this is not quite the same situation as in the acoustic case \eqref{zeroth_order_wave}, since the given reflection coefficient is not $\alpha_R$ but rather $S_R \alpha_R$.
However, note that $S_R$ is completely determined by the material parameters above the interface, i.e. on $\Gamma_-$, to which we have access. Hence, instead of $S_R \alpha_R$, we may assume that we are given $\alpha_R$ itself, and the goal is to determine the material parameters below the interface. This is a calculation already done in \cite{CHKUElastic}. To make the connection, we use the ansatz $(\mathcal{A}_R)_0 = R(\mathcal A_I)_0$, $(\mathcal A_T)_0 = T(\mathcal A_I)_0$, with $R$, $T$ being $3\times 3$ matrices of symbols. Then
\begin{equation*}
\col{(\mathcal{A}_R)_0 \\ (\mathcal{A}_T)_0} = \left[\begin{matrix} -S_R & S_T \\- \mathcal{T}_R & \mathcal{T}_T \end{matrix}\right]^{-1} \col{ S_I \\ \mathcal{T}_I}(\mathcal{A}_I)_0.
\end{equation*}
The symbols $R, T$ are exactly those computed in \cite{CHKUElastic}. Since $(\mathcal{A}_I)_0$ can be anything, we indeed recover $R$.
\begin{lemma}\label{lem_1}
Let $\rho^{(-)}$, $c_S^{(-)}$, $c_P^{(-)}$ be known on $\Omega_{-}$. Then the knowledge of $R$ at the interface $\Gamma$ uniquely determines $\rho^{(+)}$, $c_P^{(+)}$ and $c_S^{(+)}$ on $\Gamma$.
\end{lemma}
\begin{proof}
From \cite[Appendix A]{CHKUElastic}, the $r_{33}$ entry of $R$ is given as
\begin{equation}\label{key_2}
r_{33} = \frac{\mu^{(-)}\xi_{3,I,S} - \mu^{(+)}\xi_{3,T,S}}{ \mu^{(-)}\xi_{3,I,S} + \mu^{(+)}\xi_{3,T,S}}.
\end{equation}
For the term $r_{33}$ we are in the same situation as in Lemma \ref{l: recover 2 params from 0'th order reflect} for the acoustic case, and by the same calculations as in its proof we recover $\rho^{(+)}$ and $c_S^{(+)}$ using just two values of $\abs{\xi'}/\tau$.
For completeness we present the proof here.
We denote $f = \xi_{3,T,S}/\xi_{3,I,S}$, $a = \mu^{(-)}$, $c = \mu^{(+)}$. Note that $f = f(\abs{\xi'}/\tau)$ i.e. it is a function of the parameter $\abs{\xi'}/\tau$ while $a,c$ only depend on $x$.
Now, assume that $r_{33} = \widetilde{r}_{33}$ and $\mu^{(-)} = \tilde{\mu}^{(-)}$ (i.e. $a=\tilde{a}$) on $\Gamma$. We obtain
\begin{align}\label{key_3_1}
\frac{a-cf}{a+cf} = \frac{a-\tilde{c}\tilde{f}}{a+\tilde{c}\tilde{f}},\quad
\Leftrightarrow \quad cf = \tilde{c}\tilde{f}, \quad
\Leftrightarrow \quad \frac{c}{\tilde c} = \frac{\tilde f}{f}.
\end{align}
Varying $\abs{\xi'}/\tau$ and keeping everything else constant, we get
\begin{equation*}
\frac{c}{\tilde c} = \frac{\tilde f_1}{f_1},
\end{equation*}
where $f_1$ is $f$ evaluated at different value of $\abs{\xi'}/\tau$. Thus,
\begin{align*}
\frac{\tilde f}{f}=\frac{\tilde f_1}{f_1}\quad
\Leftrightarrow \quad
\frac{\tilde f^2}{f^2}=\frac{\tilde f_1^2}{f_1^2}\quad
\Leftrightarrow \quad
\frac{(\tilde c^{(+)}_S)^{-2} - b^2}{(c^{(+)}_S)^{-2} - b^2}
= \frac{(\tilde c^{(+)}_S)^{-2} - b_1^2}{(c^{(+)}_S)^{-2} - b_1^2},
\end{align*}
where we write $b = \abs{\xi'}/\tau$, $b_1 = \abs{\xi'_1}/\tau_1$ and observe that $c^{(-)}_S = \tilde c^{(-)}_S$ on $\Gamma$.
Cross multiplying we get the algebraic equation
\begin{equation*}
((\tilde c^{(+)}_S)^{-2}- ( c^{(+)}_S)^{-2})(b^2 - b_1^2) = 0.
\end{equation*}
Note that, as long as we pick $b_1 \neq \pm b$, we recover $c_S^{(+)} = \tilde{c}_S^{(+)}$.
Going back to \eqref{key_3_1}, one then gets $c=\tilde{c}$, that is, $\mu^{(+)} = \tilde \mu^{(+)}$ on $\Gamma$.
Finally, from $c_S^{(+)} = \tilde{c}_S^{(+)}$ and $\mu^{(+)} = \tilde \mu^{(+)}$, together with $(c_S^{(+)})^2 = \mu^{(+)}/\rho^{(+)}$, we obtain $\rho^{(+)} = \tilde{\rho}^{(+)}$.
The other entries of $R$ can be used to recover the remaining parameter $c_P^{(+)}$ by an analogous argument.
\end{proof}
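The uniqueness argument in the proof can be illustrated numerically: sampling $r_{33}$ at two distinct values of $b=\abs{\xi'}/\tau$ pins down $(\mu^{(+)}, c_S^{(+)})$. The following is an illustrative sketch (the parameter values and the brute-force search are ours, not from the paper); we normalise $\tau=1$:

```python
import math

def xi3(c, b):
    # Vertical wavenumber xi_3 = tau*sqrt(c^{-2} - b^2), with b = |xi'|/tau and tau = 1.
    return math.sqrt(1.0 / c**2 - b**2)

def r33(b, mu_m, cs_m, mu_p, cs_p):
    # r_33 = (mu^- xi_{3,I,S} - mu^+ xi_{3,T,S}) / (mu^- xi_{3,I,S} + mu^+ xi_{3,T,S})
    num = mu_m * xi3(cs_m, b) - mu_p * xi3(cs_p, b)
    den = mu_m * xi3(cs_m, b) + mu_p * xi3(cs_p, b)
    return num / den

# Known parameters above the interface, and the "unknown" truth below it.
mu_m, cs_m = 1.0, 1.0
mu_p_true, cs_p_true = 2.0, 1.5

# "Measured" reflection coefficient at two distinct slownesses b1 != b2.
b1, b2 = 0.10, 0.25
meas = (r33(b1, mu_m, cs_m, mu_p_true, cs_p_true),
        r33(b2, mu_m, cs_m, mu_p_true, cs_p_true))

# Brute-force inversion: scan candidate (mu^+, c_S^+) pairs and keep the best fit.
best, best_err = None, float("inf")
for i in range(1, 301):
    for j in range(1, 301):
        mu_p, cs_p = 0.01 * i + 0.5, 0.01 * j + 0.5
        err = ((r33(b1, mu_m, cs_m, mu_p, cs_p) - meas[0]) ** 2
               + (r33(b2, mu_m, cs_m, mu_p, cs_p) - meas[1]) ** 2)
        if err < best_err:
            best, best_err = (mu_p, cs_p), err
```

The minimiser lands on the true pair, reflecting the injectivity established in the lemma.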
\begin{comment}
For instance, the $r_{11}$ entry of $R$ is (as computed in \cite{CHKUElastic})
\begin{equation}\label{key_1}
\begin{aligned}
r_{11} = \frac{1}{\mathcal{A}}&\left[\hat{\tau}^4\left(\xi_{3,I,S} + \xi_{3,T,S}\right)\left(\xi_{3,I,P} - \xi_{3,T,P}\right) + 4\hat{\tau}^2\left(\mu^{(-)} - \mu^{(+)}\right)\left(\xi_{3,I,P}\xi_{3,I,S} - \xi_{3,T,S}\xi_{3,T,P}\right) \right.\\
&\left.+ 4\left(\mu^{(-)} - \mu^{(+)}\right)^2\left(1+\xi_{3,I,S}\xi_{3,I,P}\right)\left(1-\xi_{3,T,S}\xi_{3,T,P}\right)\right],\\
\text{ where }\quad
\mathcal{A} =& \hat{\tau}^4\left(\xi_{3,I,S} - \xi_{3,T,S}\right)\left(\xi_{3,I,P} + \xi_{3,T,P}\right) - 4\hat{\tau}^2\left(\mu^{(-)} - \mu^{(+)}\right)\left(\xi_{3,I,P}\xi_{3,I,S} - \xi_{3,T,S}\xi_{3,T,P}\right) \\
&+ 4\left(\mu^{(-)} - \mu^{(+)}\right)^2\left(1+\xi_{3,I,S}\xi_{3,I,P}\right)\left(1+\xi_{3,T,S}\xi_{3,T,P}\right),
\end{aligned}
\end{equation}
and $\hat{\tau} = \frac{\tau}{|\xi^{\prime}|} = \frac{\tau}{\left(|\xi_1|^2 + |\xi_2|^2\right)^{1/2}}$.
Having $r_{11} = \tilde{r}_{11}$ we obtain
\begin{equation*}
\begin{aligned}
\tilde{\mathcal{A}}&\left[\hat{\tau}^4\left(\xi_{3,I,S} + \xi_{3,T,S}\right)\left(\xi_{3,I,P} - \xi_{3,T,P}\right) + 4\hat{\tau}^2\left(\mu^{(-)} - \mu^{(+)}\right)\left(\xi_{3,I,P}\xi_{3,I,S} - \xi_{3,T,S}\xi_{3,T,P}\right) \right.\\
&\left.+ 4\left(\mu^{(-)} - \mu^{(+)}\right)^2\left(1+\xi_{3,I,S}\xi_{3,I,P}\right)\left(1-\xi_{3,T,S}\xi_{3,T,P}\right)\right],\\
%
&= \mathcal{A}\left[\hat{\tau}^4\left(\xi_{3,I,S} + \xi_{3,T,S}\right)\left(\xi_{3,I,P} - \tilde{\xi}_{3,T,P}\right) + 4\hat{\tau}^2\left(\mu^{(-)} - \mu^{(+)}\right)\left(\xi_{3,I,P}\xi_{3,I,S} - \xi_{3,T,S}\tilde{\xi}_{3,T,P}\right) \right.\\
&\quad\left.+ 4\left(\mu^{(-)} - \mu^{(+)}\right)^2\left(1+\xi_{3,I,S}\xi_{3,I,P}\right)\left(1-\tilde{\xi}_{3,T,S}\tilde{\xi}_{3,T,P}\right)\right].
\end{aligned}
\end{equation*}
Here we use that fact that so far we have $\rho^{(\pm)} = \tilde{\rho}^{(\pm)}$, $\mu^{(\pm)} = \tilde{\mu}^{(\pm)}$, $c_S^{(\pm)} = \tilde{c_S}^{(\pm)}$ on $\Gamma$ and therefore, $\xi_{3,\bullet,S} = \tilde{\xi}_{3,\bullet,S}$.
Equating the coefficient of $\hat{\tau}^8$ on the both sides of the above equation we obtain
\begin{equation*}
\begin{aligned}
\frac{\left(\xi_{3,I,S} + \xi_{3,T,S}\right)\left(\xi_{3,I,P} - \xi_{3,T,P}\right)}{\left(\xi_{3,I,S} - \xi_{3,T,S}\right)\left(\xi_{3,I,P} + \xi_{3,T,P}\right)}
=& \frac{\left(\xi_{3,I,S} + \xi_{3,T,S}\right)\left(\xi_{3,I,P} - \tilde{\xi}_{3,T,P}\right)}{\left(\xi_{3,I,S} - \xi_{3,T,S}\right)\left(\xi_{3,I,P} + \tilde{\xi}_{3,T,P}\right)}\\
\implies\quad
\frac{\left(\xi_{3,I,P} - \xi_{3,T,P}\right)}{\left(\xi_{3,I,P} + \xi_{3,T,P}\right)}
=& \frac{\left(\xi_{3,I,P} - \tilde{\xi}_{3,T,P}\right)}{\left(\xi_{3,I,P} + \tilde{\xi}_{3,T,P}\right)}.
\end{aligned}
\end{equation*}
After a cross multiplication and simplification, on $\Gamma$ we obtain
\begin{equation*}
\frac{\xi_{3,T,P}}{\tilde{\xi}_{3,T,P}} = \frac{\xi_{3,I,P}}{\xi_{3,I,P}} = 1
\quad \implies\
\frac{|\xi'|^2 - \tau^2(c_P^{(+)})^{-2}}{|\xi'|^2 - \tau^2(\tilde{c}_P^{(+)})^{-2}} = 1
\quad\implies\
(c_P^{(+)})^{-2} = (\tilde{c}_P^{(+)})^{-2}, \quad [\because \tau \neq 0].
\end{equation*}
Since $c_P^{(+)}$ and $\tilde{c}_P^{(+)}$ are wave speeds and cannot be negative, hence, we obtain $c_P^{(+)} = \tilde{c}_P^{(+)}$ on $\Gamma$.
\end{comment}
So far we have seen that, from the knowledge of the $0$-th order reflection data on $\Gamma$ and of the parameters in $\Omega_{-}$, we can uniquely determine $c_P^{(+)}$, $c_S^{(+)}$ and $\rho^{(+)}$ on $\Gamma_{+}$.
\subsection{Recovery of the derivatives of the material parameters at the interface}
In this subsection we determine the $J$-th order derivatives of the material parameters at the interface $\Gamma$ from the $J$-th transmission conditions.
We first establish a relation between the $J$-th order asymptotic term of the reflected wave on $\Gamma_{-}$ and the Neumann data of the $J$-th order asymptotic term of the transmitted wave on $\Gamma_{+}$. Then we study the relation between the lower order asymptotic terms of the transmitted rays and the higher order derivatives of the material parameters at $\Gamma$.
We observe that the calculations for the higher order derivatives of the parameters at the interface $\Gamma$ are similar to those done in \cite[Section 3]{RachBoundary}. We use similar notation, wherever possible, to make the correspondence between the two articles transparent.
{\bf Notation:} We denote by $R_j$ the terms depending on
\begin{enumerate}
\item[$\bullet$]
normal derivatives of $c_P$, $c_S$, $\rho$ of order at most $j$, and
\item[$\bullet$]
quantities determined by the transmission conditions \eqref{e: elastic 0'th order trans conditions}, \eqref{e: elastic J'th order trans conditions} in $\Gamma$ for $J=0,-1, \dots,$ $1-j$.
\end{enumerate}
\begin{lemma}\label{lemma_4}
If $(u_R)_j = (\tilde{u}_R)_{j}$, $(u_I)_j = (\tilde{u}_I)_{j}$, $c_{\PS} = \widetilde{c}_{\PS}$ and $\rho = \widetilde{\rho}$ on $\Omega_{-}$, for $j = 0,-1,-2,\dots,$ $J$, then
\begin{equation*}
\left( \partial_{x_3}u_{T}\right)_{J}
= \left( \partial_{x_3}\widetilde{u}_{T}\right)_{J},\quad\mbox{for }J\leq 0, \quad \mbox{on }\Gamma_{+}.
\end{equation*}
\end{lemma}
\begin{proof}
We recall the elastic transmission conditions \eqref{elastic_transmission} and write the boundary Neumann data as
\begin{align}\label{Neumann_data_J-th}
\mathcal{N}u_{\bullet} =& B(\phi_{\bullet,P})a_{\bullet,P} + B(\phi_{\bullet,S})a_{\bullet,S} + B_{\bullet}(x,D_x)\left(a_{\bullet,P}+a_{\bullet,S}\right) \nonumber\\
=& \sum_{\star=P/S}B(\phi_{\bullet,\star})a_{\bullet,\star} + \left[\begin{matrix} 0 &0 &\mu \partial_{x_1}\\0 &0 &\mu \partial_{x_2}\\\lambda \partial_{x_1} &\lambda \partial_{x_2} &0 \end{matrix}\right] \sum_{\star=P/S} a_{\bullet,\star}
\nonumber\\
&\qquad \qquad \qquad \qquad \qquad + \left[\begin{matrix} \mu &0 &0\\ 0&\mu&0\\ 0&0&(\lambda+2\mu)\end{matrix}\right] \sum_{\star=P/S} \partial_{x_3} a_{\bullet,\star}.
\end{align}
Observe that $\mathcal{N}^{(-)}u_{I/R}$ is completely determined on $\Gamma_{-}$ from the knowledge of $\phi_{I/R,\PS}$, $c_{\PS}$, $\rho$ and $a_{I/R,\PS}$ on $\Omega_{-}$, except for the term $ \partial_{x_3}a_{I/R,\PS}$. That is, one may write $\mathcal{N}^{(-)}u_{I/R} = R_0 + P^{(-)} \partial_{x_3} u_{I/R}$, where $P^{(\pm)}$ is the diagonal matrix $\mathrm{diag}(\mu^{(\pm)},\mu^{(\pm)},\lambda^{(\pm)}+2\mu^{(\pm)})$.
Note that $ \partial_{x_k}\phi_{\bullet,\PS} = \xi_k$, for $k=1,2$ and $ \partial_{x_3}\phi_{\bullet,\PS} = \xi_{3,\bullet,\PS}$ on $\Gamma$.
Therefore, the $J$-th order transmission conditions become
\begin{equation}
\begin{aligned}\label{e: elastic J'th order trans conditions}
(u_T)_{J} =& (u_I)_{J} + (u_R)_{J}\\
\left(\mathcal{N}_{T}(x,\xi)u_{T}\right)_{J+1}
=&\left(\mathcal{N}_{I}(x,\xi)u_{I}\right)_{J+1} + \left(\mathcal{N}_{R}(x,\xi)u_{R}\right)_{J+1}
\end{aligned}
\end{equation}
Now if $(u_I)_J = (\widetilde{u}_I)_J$ and $(u_R)_J = (\widetilde{u}_R)_J$ on $\Gamma$ for $J \leq 0$, then
\begin{equation}
(u_T)_J = (u_I)_J + (u_R)_J = (\widetilde{u}_I)_J + (\widetilde{u}_R)_J = (\widetilde{u}_T)_J, \qquad \mbox{on }\Gamma.
\end{equation}
Moreover, $\Pi_{\PS}(u_{\bullet})_J = \Pi_{\PS}(\tilde{u}_{\bullet})_J$ on $\Gamma$, since $\Pi_{\PS}$ is an $R_0$ quantity (see \eqref{Symbol_mode-projector}).
Therefore, from \eqref{Neumann_data_J-th}, along with the fact that $(u_{I/R})_{J+1}=(\widetilde{u}_{I/R})_{J+1}$ on $\Gamma$, we obtain
\begin{equation*}
\left(\mathcal{N}_Iu_{I}\right)_{J+1} = \left(\widetilde{\mathcal{N}}_I \widetilde{u}_{I}\right)_{J+1}
\qquad \mbox{and}\quad
\left(\mathcal{N}_Ru_{R}\right)_{J+1} = \left(\widetilde{\mathcal{N}}_R \widetilde{u}_{R}\right)_{J+1}
\qquad \mbox{on }\Gamma.
\end{equation*}
Now, let $\widetilde{\mathcal{N}}$ be the Neumann operator associated with the parameters $\widetilde{\lambda}, \widetilde{\mu}, \widetilde{\rho}$ on $\Gamma$.
We readily obtain
\begin{multline}
\left(\mathcal{N}_{T} u_{T}\right)_{J+1}
=\left(\mathcal{N}_{I} u_{I}\right)_{J+1}
+ \left(\mathcal{N}_{R} u_{R}\right)_{J+1}
= \left(\widetilde{\mathcal{N}}_{I} \widetilde{u}_{I}\right)_{J+1} + \left(\widetilde{\mathcal{N}}_{R} \widetilde{u}_{R}\right)_{J+1}
\\= \left(\widetilde{\mathcal{N}}_{T} \widetilde{u}_{T}\right)_{J+1},
\qquad \mbox{on }\Gamma.
\end{multline}
Note that, from $\Pi_{\PS}(u_T)_{J+1}=\Pi_{\PS}(\widetilde{u}_T)_{J+1}$ on $\Gamma$ and \eqref{Neumann_data_J-th} we see
\begin{multline*}
0 = \left(\mathcal{N}_{T} u_T\right)_{J+1} - \left(\widetilde{\mathcal{N}}_{T} \widetilde{u}_T\right)_{J+1}
= P^{(+)}\left( \partial_{x_3}u_{T}\right)_{J} - \widetilde{P}^{(+)}\left( \partial_{x_3}\widetilde{u}_{T}\right)_{J}
\\= P^{(+)}\left[\left( \partial_{x_3}u_{T}\right)_{J} - \left( \partial_{x_3}\widetilde{u}_{T}\right)_{J}\right],
\end{multline*}
on $\Gamma$. The last identity holds due to the fact that Lemma \ref{lem_1} asserts $P^{(+)} = \widetilde{P}^{(+)}$ on $\Gamma$.
Therefore, we obtain $\left( \partial_{x_3}\widetilde{u}_{T}\right)_{J} = \left( \partial_{x_3}u_{T}\right)_{J}$ on $\Gamma$, since $P^{(+)}(x)$ is invertible.
\end{proof}
\begin{rem}
A similar lemma can be proved if the transmitted wave fields $(u_T)_J$ are known instead of $(u_R)_J$ on $\Gamma$. That is, if we know the elastic parameters on $\Omega_{-}$, $(u_I)_J$ on $\Gamma_{-}$ and $(u_T)_J$ on $\Gamma_{+}$, then one can determine the reflected wave fields $( \partial_{x_3}u_R)_J$ on $\Gamma_{-}$.
\end{rem}
In the rest of the section we will show that knowing $\left( \partial_{x_3}u_T\right)_J$ on $\Gamma_{+}$ implies knowing $ \partial_{x_3}^{\abs{J}+1}c_{P/S}^{(+)}$ and $ \partial_{x_3}^{\abs{J}+1}\rho^{(+)}$ on $\Gamma_{+}$.
We start with the following computation whose proof follows from \cite[Proposition 3.1]{RachBoundary}. One can describe $\left( \partial_{x_3}u_{\bullet}\right)_J\restriction_{\Gamma_{+}}$ as
\begin{multline}\label{eq3.1}
\left( \partial_{x_3}u_{\bullet}\right)_J
= i(\gamma_{2,\bullet})_{J-1} \left[ \xi_{3,\bullet,P}\left(M_2 + \frac{\abs{\xi'}^2(\xi_{3, \bullet,S}
- \xi_{3,\bullet,P})}{\abs{\xi'}^2 + \xi_{3, \bullet,P}\xi_{3, \bullet,S}} \xi_{\bullet,P}\right)\right.
\\\left.
- \xi_{3,\bullet,S}\left(\frac{\abs{\xi'}\,\abs{\xi_{\bullet,P}}^2 \,\abs{\xi_{\bullet,S}}}{\abs{\xi'}^2 + \xi_{3, \bullet,P}\xi_{3, \bullet,S}} N_2\right)\right]\\
\qquad - (\gamma_{\bullet})_{J-1} \left[ \xi_{3,\bullet,P}\left(\frac{\abs{\xi_{\bullet,S}}^2}{\abs{\xi'}^2 + \xi_{3, \bullet,P}\xi_{3, \bullet,S}} \xi_{\bullet,P}\right)
\right.\\
\left.-\xi_{3,\bullet,S}\left(\xi_{\bullet,S} + \frac{\abs{\xi'}\,\abs{\xi_{\bullet,S}}(\xi_{3,\bullet,S} - \xi_{3,\bullet,P})}{\abs{\xi'}^2 + \xi_{3, \bullet,P}\xi_{3, \bullet,S}} N_2\right)\right]\\
\vspace{2 mm} \qquad + i(\gamma_{1,\bullet})_{J-1}\left[ \left(\xi_{3,\bullet,P}-\xi_{3,\bullet,S}\right) M_1 \right]
+ \left[ \partial_{x_3}\left(\gamma_{\bullet}\right)_J\right]M
+ \left[ \partial_{x_3}\left(\gamma_{1,\bullet}\right)_J\right]M_1\\
\qquad +\left[ \partial_{x_3}\left(\gamma_{2,\bullet}\right)_J\right]M_2
+ \left[ \partial_{x_3}\left(\alpha_{\bullet}\right)_J\right]N
+ \left[ \partial_{x_3}\left(\alpha_{1,\bullet}\right)_J\right]N_1 \\
\qquad + \left[ \partial_{x_3}\left(\alpha_{2,\bullet}\right)_J\right]N_2
+ \left(\gamma_{\bullet}\right)_J\left[ \partial_{x_3}M\right]
+ \left(\gamma_{1,\bullet}\right)_J\left[ \partial_{x_3}M_1\right]\\
\qquad + \left(\gamma_{2,\bullet}\right)_J\left[ \partial_{x_3}M_2\right]
+ \left(\alpha_{\bullet}\right)_J\left[ \partial_{x_3}N\right]
+ \left(\alpha_{1,\bullet}\right)_J\left[ \partial_{x_3}N_1\right]
+ \left(\alpha_{2,\bullet}\right)_J\left[ \partial_{x_3}N_2\right].
\end{multline}
Before going into the full general case for recovering $|J|$-th order derivatives of the elastic parameters on the interface, we consider the case for $|J|=1$ in the following proposition.
\begin{prop}\label{prop 5}
The terms $ \partial_{x_3}c_P^{(+)}$, $ \partial_{x_3}c_S^{(+)}$ and $ \partial_{x_3}\rho^{(+)}$ are uniquely determined on $\Gamma_{+}$ from the knowledge of $(u_R)_0$ and $(u_R)_{-1}$ on $\Gamma_{-}$.
\end{prop}
\begin{proof}
We start with the following relation, obtained from a calculation similar to \cite[Equation (64)]{RachBoundary}:
\begin{equation}\label{key_e_5}
\frac{M_1}{\abs{M_1}^2}\cdot \partial_{x_3}(u_{T})_0
= -\left( \partial_{x_3}\log\sqrt{\rho^{(+)}} + \frac{1}{2(c_S^{(+)})^2 f_S^2} \partial_{x_3}\log c_S^{(+)}\right)(\alpha_{1,T})_0 + R_0,
\end{equation}
where \[f_S = f(\abs{\xi'}/\tau) = \sqrt{\frac{1}{(c_S^{(+)})^2} - \frac{\abs{\xi'}^2}{\tau^2}} = \frac{\xi_{3,T,S}}{\tau}\] is an $R_0$ quantity.
Since $M_1$ is an $R_0$ quantity, from Lemma \ref{lemma_4} with $J=0$ and \eqref{key_e_5} we get
\begin{equation}\label{key_e_6}
\partial_{x_3}\log\sqrt{\frac{\rho^{(+)}}{\tilde{\rho}^{(+)}}}
= \frac{1}{2(c_S^{(+)})^2 f_S^2}\left( \partial_{x_3}\log \tilde{c}_S^{(+)} - \partial_{x_3}\log c_S^{(+)}\right), \qquad\mbox{on }\Gamma_{+}.
\end{equation}
Similar to the case of acoustic waves, we fix $x$ and consider two different values of $(\abs{\xi'}/\tau)$ to have two different quantities $f_S$ and $f_S^{(1)}$.
Thus, we obtain
\begin{equation*}
\left(\frac{1}{f_S^2} - \frac{1}{(f^{(1)}_S)^2} \right)\left( \partial_{x_3}\log \tilde{c}_S^{(+)} - \partial_{x_3}\log c_S^{(+)}\right) = 0
\quad\mbox{on }\Gamma.
\end{equation*}
Hence, $ \partial_{x_3}\log \tilde{c}_S^{(+)} = \partial_{x_3}\log c_S^{(+)}$ on $\Gamma_{+}$. From \eqref{key_e_6} we get $ \partial_{x_3}\log \sqrt{\tilde{\rho}^{(+)}}\restriction_{\Gamma_{+}}$ $= \partial_{x_3}\log \sqrt{\rho^{(+)}}\restriction_{\Gamma_{+}}$.
Now, to recover $ \partial_{x_3}c_P^{(+)}\restriction_{\Gamma_+}$, we observe that a similar calculation as in \cite[Proposition 3.8]{RachBoundary} gives us
\begin{equation}
\begin{aligned}\label{key_e_8}
\frac{M_2}{\abs{M_2}^2}\cdot \partial_{x_3}(&u_{T})_0
%
\\=& \left(( \partial_{x_3}\log c_P^{(+)})\left[ i\frac{(\xi_{3,T,P} - \xi_{3,T,S})} {2\xi_{3,T,P}^2} + \frac{\abs{\xi'}}{\xi_{3,T,P}}\right]\right.
\\
&-( \partial_{x_3}\log c_S^{(+)}) \left[i\frac{4 (c_S^{(+)})^2(\xi_{3,T,P} - \xi_{3,T,S})\abs{\xi'}}{\abs{\xi_{T,P}}^2}\right] \\
&\left. - \left( \partial_{x_3}\log \sqrt{\rho^{(+)}}\right) \left[ i\left(1- \frac{2(c_S^{(+)})^2}{(c_{\lambda+\mu}^{(+)})^2}\right)\frac{(\xi_{3,T,P} - \xi_{3,T,S})\abs{\xi'}}{\abs{\xi_{T,P}}^2}\right] \right)(\alpha_{1,T})_0 \\
&\qquad + R_0,
\end{aligned}
\end{equation}
where $c_{\lambda+\mu}^{(\pm)}:= \sqrt{(c_P^{(\pm)})^2 - (c_S^{(\pm)})^2} = \sqrt{\frac{\lambda^{(\pm)} + \mu^{(\pm)}}{\rho^{(\pm)}}}$.
Since \[ \partial_{x_3}\log \tilde{c}_S^{(+)}\restriction_{\Gamma_{+}} = \partial_{x_3}\log c_S^{(+)}\restriction_{\Gamma_{+}} ,\qquad \partial_{x_3}\log\sqrt{\tilde{\rho}^{(+)}}\restriction_{\Gamma_{+}}= \partial_{x_3}\log\sqrt{\rho^{(+)}}\restriction_{\Gamma_{+}},\] the equality $(a_R)_{-1} = (\tilde{a}_R)_{-1}$ on $\Gamma_{-}$ implies
\begin{multline*}
( \partial_{x_3}\log c_P^{(+)})\left[ i\frac{(\xi_{3,T,P} - \xi_{3,T,S})} {2\xi_{3,T,P}^2} + \frac{\abs{\xi'}}{\xi_{3,T,P}}\right]\\
= ( \partial_{x_3}\log \tilde{c}_P^{(+)})\left[ i\frac{(\xi_{3,T,P} - \xi_{3,T,S})} {2\xi_{3,T,P}^2} + \frac{\abs{\xi'}}{\xi_{3,T,P}}\right]
\quad \mbox{on }\Gamma,\\
\text{which implies } \quad
\partial_{x_3} c_P^{(+)}\restriction_{\Gamma_{+}} =\ \partial_{x_3} \tilde{c}_P^{(+)}\restriction_{\Gamma_{+}} \quad\quad
\left[\because \frac{(\xi_{3,T,P} - \xi_{3,T,S})} {2\xi_{3,T,P}^2} \neq i\frac{\abs{\xi'}}{\xi_{3,T,P}}\right].
\end{multline*}
\end{proof}
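The two-slowness trick used in the proof above amounts to solving an invertible $2\times 2$ linear system for the pair $( \partial_{x_3}\log c_S^{(+)}, \partial_{x_3}\log\sqrt{\rho^{(+)}})$. A minimal numerical sketch, with illustrative values and with the coefficient $K(b)=1/(2(c_S^{(+)})^2 f_S^2)$ taken in the assumed form $f_S^2 = (c_S^{(+)})^{-2}-b^2$:

```python
c_s = 1.5                                  # illustrative c_S^{(+)}
d_log_cs, d_log_sqrt_rho = 0.3, -0.2       # preset "unknown" normal derivatives

def coeff(b):
    # K(b) = 1/(2 c_S^2 f_S^2), with f_S^2 = c_S^{-2} - b^2 (an assumed form).
    f_sq = 1.0 / c_s**2 - b**2
    return 1.0 / (2.0 * c_s**2 * f_sq)

def datum(b):
    # Synthetic boundary datum: K(b)*dlog(c_S) + dlog(sqrt(rho)), as in the text.
    return coeff(b) * d_log_cs + d_log_sqrt_rho

b1, b2 = 0.1, 0.4                          # two slownesses below 1/c_S
det = coeff(b1) - coeff(b2)                # determinant of [[K(b1),1],[K(b2),1]]
assert det != 0.0                          # b1 != b2 makes the system invertible

rec_cs = (datum(b1) - datum(b2)) / det     # eliminating the rho term
rec_rho = datum(b1) - coeff(b1) * rec_cs
```

Since $b\mapsto K(b)$ is strictly monotone on $[0, 1/c_S^{(+)})$, any two distinct slownesses give a nonzero determinant, and the preset derivatives are recovered exactly.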
Next we define the quantities
\begin{align*}
A_P =& \left[\begin{matrix} 1 &-(c_P/\tau)^{3}\\ \\ 0 &-2c_P^2\xi_{3,\bullet,P} \end{matrix}\right],
\quad C_P = \left[\begin{matrix}(c_P/\tau)^{2}\left(\frac{c_S^2}{c_{\lambda+\mu}^2}+\frac{c_P^2 \abs{\xi'}^2}{\tau^2}\right) &0\\ \\ -c_{\lambda+\mu}^2\frac{c_P}{\tau}\abs{\xi'}^2\xi_{3,\bullet,P} &0 \end{matrix}\right],\\
B_P =& \left[\begin{matrix} \frac{2ic_S^2c_P^2\xi_{3,\bullet,P}}{c_{\lambda+\mu}^2\tau^2} &\frac{c_P^5\xi_{3,\bullet,P}}{\tau^5}\\ \\ -c_{\lambda+\mu}^2\abs{\xi'}^2\frac{\tau}{c_P} &c_{\lambda+\mu}^2\frac{c_P\xi_{3,\bullet,P}}{\tau}+c_S^2 \end{matrix}\right],\quad
\\
&
D_P^{J+1} = \col{-(\gamma_{2,\bullet})_{J+1}\frac{c_P^2 c_S^2}{c_{\lambda+\mu}^2\tau^2} - \frac{(\alpha_{\bullet})_{J+1}c_{P}^3 \left(\frac{c_{\lambda+\mu}^2c_P^2\abs{\xi'}^2}{\tau^2} + c_S^2\right)}{c_{\lambda+\mu}^2\tau^3\xi_{3,\bullet,P}}\\ \\
(\gamma_{2,\bullet})_{J+1}c_S^2 \frac{\tau \abs{\xi'}^2}{c_P\xi_{3,\bullet,P}}- (\alpha_{\bullet})_{J+1}c_{\lambda+\mu}^2\frac{c_P^2\abs{\xi'}^2}{\tau^2}}.
\end{align*}
Observe that $A_P$, $B_P$, $C_P$ are $R_0$ terms and $D_{P}^{J+1}$ are $R_{\abs{J}-1}$ terms, for $J = 0,-1,-2,\dots$.
We have the following recurrence relation from \cite[Theorem 3.7]{RachBoundary}.
\begin{prop}\label{Prop_3.7}
For P waves, we have the following recurrence relation for $(\gamma_{2,\bullet})_{J-1}$ and $ \partial_{x_3}(\alpha_{\bullet})_{J}$:
\begin{multline*}
A_P\col{(\gamma_{2,\bullet})_{J-1}\\ \partial_{x_3}(\alpha_{\bullet})_{J}}
= B_P \partial_{x_3}\col{(\gamma_{2,\bullet})_{J}\\ \partial_{x_3}(\alpha_{\bullet})_{J+1}} + C_P \partial_{x_3}^2\col{(\gamma_{2,\bullet})_{J+1}\\ \partial_{x_3}(\alpha_{\bullet})_{J+2}}
\\
+ D_P^{J+1}\left( \partial_{x_3}^2\log c_{\bullet,P}\right) + R_{\abs{J}+1},
\end{multline*}
for $J\leq-1$.
\end{prop}
Using the recurrence relation in Proposition \ref{Prop_3.7} we state the following lemma.
\begin{lemma}\label{Lem_3.12}
$(\gamma_{2,\bullet})_{J-1}$ and $ \partial_{x_3}(\alpha_{\bullet})_J$ can be written in terms of $ \partial_{x_3}^{1+\abs{J}} \log c_P$, $ \partial_{x_3}^{1+\abs{J}} \log c_S$ and
$ \partial_{x_3}^{1+\abs{J}} \log \sqrt \rho$.
In fact,
\begin{multline*}
\col{ (\gamma_{2,\bullet})_{J-1}\\ \partial_{x_3}(\alpha_{\bullet})_J}
= (I \ 0) \cdot \mathcal M_J \cdot \mathcal M
\cdot \partial_{x_3}^{1+\abs{J}} \col{ \log c_P \\ \log c_S \\ \log \sqrt \rho}
(\alpha_{\bullet})_0 + R_{\abs{J}}, \\
\qquad \mbox{for } J=-1, -2,\dots,
\end{multline*}
where
\begin{align*}
\mathcal{M}
=& \col{ A_P^{-1}B_P \\ I}\mathcal M_{\gamma_2,\alpha} + \col{I\\0}
\left[\begin{matrix} \left[\begin{matrix} A_P^{-1}D_P^0 \end{matrix}\right] & \left[\begin{matrix} 0&0\\0&0 \end{matrix}\right] \end{matrix}\right],
\qquad I = \left[\begin{matrix} 1 &0\\ 0 &1 \end{matrix}\right],\\
\mathcal{M}_J
=& \col{ A_P^{-1}B_P & A_P^{-1}C_P\\ I &0}^{\abs{J}-1},
\end{align*}
in which,
\begin{align*}
&\col{ (\gamma_2)_{-1}\\ \partial_{x_3}(\alpha)_0}
=\mathcal{M}_{\gamma_2,\alpha}\cdot \partial_{x_3}\col{ \log c_P \\ \log c_S \\ \log \sqrt \rho}(\alpha_{\bullet})_0,\\
\mathcal{M}_{\gamma_2,\alpha} =& \left[\begin{matrix} -\frac{c_{P,\bullet}^2}{2\tau^2\xi_{3,\bullet,P}^2} &\frac{4ic_{P,\bullet}^3c_{S,\bullet}^2}{\tau^3c_{\lambda+\mu,\bullet}^2} &i\left(1-\frac{2c_{S,\bullet}^2}{c_{\lambda+\mu,\bullet}^2}\right)\frac{c_{P,\bullet}^3}{\tau^3}\\
-\frac{1}{2}\left(1- \frac{\abs{\xi'}^2}{\xi_{3,\bullet,P}^2}\right) &0 &-1 \end{matrix}\right].
\end{align*}
\end{lemma}
\noindent The proof of the above lemma follows from calculations similar to those in \cite[Lemma 3.12]{RachBoundary}.
\begin{lemma}\label{Lem_3.13_1}
One can determine $ \partial_{x_3}^{\abs{J}+1}c^{(+)}_{S}$ and $ \partial_{x_3}^{\abs{J}+1}\rho^{(+)}$ on $\Gamma_{+}$ from the knowledge of $(u_R)_{j}$ on $\Gamma_{-}$, for $j = 0,-1,\dots,J$, together with $c_{\PS}$ and $\rho$ on $\Omega_{-}$.
\end{lemma}
\begin{proof}
From the equation \eqref{eq3.1} and Lemma \ref{Lem_3.12} we obtain the following relation
\begin{multline}\label{equation_67_1}
\left( \partial_{x_3}u_{T}\right)_J \cdot \frac{M_1}{\abs{M_1}^2}
\\= -\left(\frac{i}{2\xi_{3,T,S}}\right)^{\abs{J}} \col{0\\\frac{1}{2}\left(1+\frac{\abs{\xi'}^2}{\xi_{3,T,S}^2}\right)\\1} \cdot \partial_{x_3}^{\abs{J}+1}\col{ \log c_P \\ \log c_S \\ \log \sqrt \rho}(\alpha_{1,T})_0 + R_{\abs{J}}.
\end{multline}
From Lemma \ref{lemma_4} we have $\left( \partial_{x_3}u_{T}\right)_J = \left( \partial_{x_3}\widetilde{u}_{T}\right)_J$; using this and the fact that $M_1$ is an $R_0$ quantity, we obtain
\begin{equation}
\begin{aligned}\label{J-th_relation_1}
\left(\frac{i}{2\xi_{3,T,S}} \right)^{\abs{J}}
&\left[
\frac{1}{2}( \partial_{x_3}^{1+\abs{J}} \log c_S^{(+)})\left(1 + \frac{\abs{\xi'}^2}{(\xi_{3,T,S})^2} \right) + ( \partial_{x_3}^{1+\abs{J}}\log \sqrt{\rho^{(+)}})
\right]\\
&=
\left(\frac{i}{2\xi_{3,T,S}} \right)^{\abs{J}}
\left[
\frac{1}{2}( \partial_{x_3}^{1+\abs{J}} \log \tilde{c}_S^{(+)})\left(1 + \frac{\abs{\xi'}^2}{(\xi_{3,T,S})^2} \right)\right. \\
&\left. \qquad \qquad \qquad \qquad \qquad \qquad + ( \partial_{x_3}^{1+\abs{J}}\log \sqrt{\tilde{\rho}^{(+)}})\right],
\quad\mbox{on }\Gamma_{+}.
\end{aligned}
\end{equation}
Varying $(1+\abs{\xi'}^2/\xi_{3,T,S}^2) = (c_S^{(+)})^{-2}f_S^{-2}$ as in the proof of Proposition \ref{prop 5} we get
\begin{multline*}
\frac{1}{(c_S^{(+)})^2f_S^2}( \partial_{x_3}^{1+\abs{J}} \log c_S^{(+)})
- \frac{1}{(\tilde{c}_S^{(+)})^2f_S^2}( \partial_{x_3}^{1+\abs{J}} \log \tilde{c}_S^{(+)})\\
= \frac{1}{(c_S^{(+)})^2(f_S^{(1)})^2}( \partial_{x_3}^{1+\abs{J}} \log c_S^{(+)})
- \frac{1}{(\tilde{c}_S^{(+)})^2(f_S^{(1)})^2}( \partial_{x_3}^{1+\abs{J}} \log \tilde{c}_S^{(+)}),\\
\text{so that}\quad
(c_S^{(+)})^2\left(f_S^2 - (f_S^{(1)})^2\right)\left( \partial_{x_3}^{1+\abs{J}} \log c_S^{(+)} - \partial_{x_3}^{1+\abs{J}} \log \tilde{c}_S^{(+)} \right) = 0
\quad\mbox{on }\Gamma_{+}.
\end{multline*}
Choosing $f_S \neq f_S^{(1)}$ we obtain $ \partial_{x_3}^{1+\abs{J}}c_S^{(+)}\restriction_{\Gamma_{+}} = \partial_{x_3}^{1+\abs{J}}\tilde{c}_S^{(+)}\restriction_{\Gamma_{+}}$. Going back to \eqref{J-th_relation_1} we further obtain $ \partial_{x_3}^{1+\abs{J}}\rho^{(+)}\restriction_{\Gamma_{+}} = \partial_{x_3}^{1+\abs{J}}\tilde{\rho}^{(+)}\restriction_{\Gamma_{+}}$.
\end{proof}
\begin{lemma}\label{l: higher order p speed from reflection}
One can determine $ \partial_{x_3}^{\abs{J}+1}c_P^{(+)}\restriction_{\Gamma_{+}}$ from the knowledge of $(u_R)_{j}\restriction_{\Gamma_{-}}$, for $j=0,-1,$ $\dots,J-1$, where $J\leq -1$.
\end{lemma}
\begin{proof}
In order to determine $ \partial_{x_3}^{\abs{J}+1}c_P^{(+)}$ on $\Gamma_{+}$ we go back to equation \eqref{eq3.1}, Lemma \ref{Lem_3.12} and observe that
\begin{multline*}
\partial_{x_3}(u_{T})_J \cdot \frac{M_2}{\abs{M_2}^2}
=\left(\col{i(\xi_{3,T,P}-\xi_{3,T,S}) \\0 \\1 \\0}^{t} \mathcal{M}_J\mathcal{M}\right) \cdot \partial_{x_3}^{\abs{J}+1}\col{\log c_P\\ \log c_S\\ \log \sqrt{\rho}}(\alpha_{T})_0 + R_{\abs{J}}, \\ \mbox{for } J=-1,-2,\dots.
\end{multline*}
Now we are in exactly the same situation as in the proof of \cite[Theorem 3.13]{RachBoundary}, and following the same calculations there one finally obtains $ \partial_{x_3}^{\abs{J}+1}\left(c_P^{(+)}\right) = \partial_{x_3}^{\abs{J}+1}\left(\widetilde{c}_P^{(+)}\right)$ on ${\Gamma_{+}}$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{t: Elastic case}]
The proof follows the same argument as the proof of Theorem \ref{Th_main_acoustic} in section \ref{s: lemmas and proof of acoustic thm} using the above lemmas in this section.
\end{proof}
\section{Extending the previous results to a non-flat interface}\label{s: nonflat case}
We will briefly show how the earlier proofs extend to the non-flat case. Essentially, the only change is that lower order terms such as $(a_R)_J$, $J=-1,-2,\dots$, will contain terms involving the curvature of the interface, encoded by the shape operator. However, when we try to determine $ \partial_\nu^j c_\PS$, any such curvature term in $(a_R)_{-j-1}$ is a $R_j$ term and hence completely determined from the previous step in the induction argument. Hence, the proof proceeds with little change, and the formulas from the previous sections continue to hold. Nevertheless, it is worthwhile to do the calculation to see how the geometry of the interface enters the reflection operator. We carry out the main calculation in the acoustic case, but the analogous calculations continue to hold in the elastic case.
\subsection*{Some geometric notation}
First, define boundary normal coordinates
\[
\tilde x = (\tilde x', \tilde x_3)
\]
near $\Gamma = \{ \tilde x_3 = 0\}$, here with respect to the Euclidean metric. Then $\Omega_-$ is given by $\tilde x_3 < 0$ and $\Omega_+$ by $\tilde x_3 > 0$ (see \cite{RachBoundary}). The directions of the boundary-normal-coordinate axes are given by the orthogonal vectors $\nabla_x \tilde x_1, \nabla_x \tilde x_2, \nabla_x \tilde x_3 = -\nu$. Here, $\nu$ denotes the vector field whose restriction to $\Gamma$ is the unit normal to the interface. In semigeodesic coordinates, the Euclidean metric has the form $g = d\tilde x_3^2 + h(x, d\tilde x')$, where $h\restriction_\Gamma$ is the induced metric on $\Gamma$.
If $\tilde \xi'$ are the dual coordinates to $\tilde x'$, then $\xi_{tan} = \left( \frac{ \partial \tilde x'}{ \partial x}\right)^t \tilde \xi'$. We also have $\nabla_{tan} \phi = \left( \frac{ \partial \tilde x'}{ \partial x}\right)^t (\nabla_{\tilde x'} \phi)$, and $\nabla_x \phi = \nabla_{tan} \phi + \partial_\nu \phi \nabla_x \tilde x_3$.
Similarly, $\xi_{3,\bullet}$ is defined as before using the $\tilde x$ coordinates: \[ \partial_\nu \phi_{\bullet}= \xi_{3,\bullet} =
\sqrt{ c_S^{-2}\abs{ \partial_t \phi_\bullet}^2-\abs{\nabla_{tan} \phi_\bullet }^2 } =
\sqrt{c_S^{-2}\abs{\tau}^2 - \abs{ \xi'_{tan}}^2}\] on $\Gamma$.
We also define a useful object for studying submanifolds.
\begin{definition}
Let $\Gamma$ be a surface, $p \in \Gamma$, and $\nu$ a smooth unit normal vector field defined along a neighborhood of $p$. The \emph{shape operator} is the map $S_\Gamma: T_p \Gamma \to T_p \Gamma$ defined by,
\[
S_\Gamma(X) = -\nabla_X \nu
\]
where $\nabla_X$ is the covariant derivative.
\end{definition}
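For instance (an illustration we add for concreteness), for the round sphere $\Gamma = \{\abs{x} = r\}$ in $\mathbb{R}^3$ with outward unit normal $\nu(x) = x/r$, any tangent vector $X$ satisfies
\[
S_\Gamma(X) = -\nabla_X \left(\frac{x}{\abs{x}}\right) = -\frac{1}{r}X,
\]
so $S_\Gamma = -\tfrac{1}{r}\,\mathrm{Id}$ and both principal curvatures equal $-1/r$; choosing the inward normal flips the sign. The eigenvalues of $S_\Gamma$ are the principal curvatures that appear throughout this section.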
\subsection*{Curvature contributions to the symbols}
We are ready to prove the following proposition. $a_R \sim \sum (a_R)_J$ will be defined as before, but we assume $\Gamma$ is a smooth interface and not necessarily flat.
\begin{prop}Assume $\Gamma$ is a smooth hypersurface.
Equation \eqref{e: (a_R)_J equation for acoustic} continues to hold for $(a_R)_J$ with $ \partial_{x_3}$ replaced by $ \partial_\nu$. That is,
\begin{multline*}
(a_R)_J
= -(-i/(2\xi_{3,T}))^J \left[( \partial^{\abs{J}}_{\nu}\log \sqrt {\rho^{(+)}}) \right.
\\
\left. + \partial^{\abs{J}}_{\nu}\log c^{(+)}_S\left(1 - \frac{( \partial_t \phi_T)^2}{2c_S^2 \xi^2_{3,T}}\right)\right](a_T)_{J+1} + R_{\abs{J+1}}
\end{multline*}
Hence, Theorem \ref{Th_main_acoustic} continues to hold. Moreover, $R_{\abs{J+1}}$ above differs from $R_{\abs{J+1}}$ computed in \eqref{e: (a_R)_J equation for acoustic} by terms depending \emph{only} on $S_\Gamma$. Thus, the full reflection operator $R$ in the general case differs from $R$ in the flat case only by terms depending on $S_\Gamma$, i.e. the curvature of $\Gamma$. Similarly, in the elastic case Theorem \ref{t: Elastic case} and Corollary \ref{c: recover T from R} continue to hold as well.
\end{prop}
The second statement about curvature is nontrivial and requires a careful geometric argument since, as seen in the above equation, $(a_R)_J$ involves higher order normal derivatives, and in the non-flat case it will involve higher order normal derivatives of quantities related to the curvature of $\Gamma$ (c.f. \eqref{e: (a)_J with higher curvature}). Nevertheless, we show that all such higher order derivatives are still determined by just the curvature of $\Gamma$ (in fact, by the eigenvalues of $S_\Gamma$), and no other information is needed to compute the full reflection operator.
\begin{proof}
First, we consider the acoustic case since the elastic case will follow from analogous calculations. Our goal is to compute the full symbol of the reflection operator in the non-flat case and show that the only additional terms from that of the flat case done earlier are completely determined by the shape operator $S_\Gamma$.
Observe that
\begin{align*}
\nabla \cdot \mu \nabla_x \phi
&= \nabla_x \mu \cdot \nabla_x \phi+ \mu \nabla \cdot (\nabla_{tan} \phi + \partial_\nu \phi \nabla_x \tilde x_3)\\
&= \nabla_x \mu \cdot \nabla_x \phi
+ \mu\, \text{div}_x(\nabla_{tan}\phi) + \mu\nabla \partial_\nu \phi \cdot \nabla_x \tilde x_3
+ \mu \partial_\nu \phi\, \text{div}_x(\nabla_x \tilde x_3) \\
&= \partial_\nu \mu \partial_\nu \phi + \mu R(\nabla_{tan}\phi)+
\mu \partial^2_\nu \phi
+ \mu \partial_\nu \phi H(x) + R_0
\end{align*}
where $H(x)$ is proportional to the mean curvature of the interface at $x$, which can be computed by taking the divergence of the normal vector field and is determined by the eigenvalues of $S_\Gamma$. $R(X) = \langle \nabla_\nu X, \nu \rangle$ will also be a term containing curvature. However, here, $R_0$ will be non-curvature terms with $0$ normal derivatives of the material parameters.
From \eqref{e: transport equations} and \eqref{e: Hamilton derivative to normal deriv} we obtain
\begin{multline*}
i c_S^2 \xi_{3,\bullet} \partial_\nu (a)_0
= -c_S^2 \xi_{3,\bullet}\left[( \partial_{\nu}\log \sqrt \rho)
- ( \partial_{\nu}\log c_S)\left(1 - \frac{( \partial_t \phi_\bullet)^2}{2c_S^2 \xi^2_{3,\bullet}}\right)\right.
\\\left.+ H(x)/2 + \frac{R(\nabla_{tan}\phi)}{2\xi_{3,\bullet}}\right](a_\bullet)_0 + R_0,
\end{multline*}
so that
\begin{multline}
\partial_\nu (a_\bullet)_0 = -\left[( \partial_{\nu}\log \sqrt \rho)
- ( \partial_{\nu}\log c_S)\left(1 - \frac{( \partial_t \phi_\bullet)^2}{2c_S^2 \xi^2_{3,\bullet}}\right)\right.
\\\left.+ H(x)/2 + \frac{R(\nabla_{tan}\phi)}{2\xi_{3,\bullet}}\right](a_\bullet)_0 + R_0
\end{multline}
Note that the $R$ and $H$ terms are $R_0$ terms and have no normal derivatives of any material parameters, so that Lemma \ref{l: acoustic partial_x_3 a_0 formula} continues to hold even in this non-flat case.
Using semigeodesic coordinates actually allows us to simplify the $R(\nabla_{tan}\phi)$ term.
Let $e_1, e_2, e_3$ denote the basis of vector fields corresponding to the coordinates $\tilde x_1, \tilde x_2, \tilde x_3$ with $e_3 = \nu$ being the normal vector.
Then
\begin{equation}
\langle \nabla_\nu\nabla_{tan}\phi_\bullet, \nu \rangle
=\langle \nabla_\nu(\sum_{j=1}^2 \partial_{\tilde x_j} \phi_\bullet e_j), \nu \rangle
=\sum_{j=1}^2 \partial_{\tilde x_j} \phi_\bullet\langle \nabla_{e_3} e_j, e_3 \rangle
=\sum_{j=1}^2 \partial_{\tilde x_j} \phi_\bullet \Gamma^3_{j3} =0
\end{equation}
where $\Gamma^3_{j3}$ are Christoffel symbols, which vanish in semigeodesic coordinates \cite[Section 2.4]{SUV2019transmission}. Hence, we conclude that $R(\nabla_{tan}\phi)=0$.
First note that the computation for $(a_R)_0$ in \eqref{zeroth_order_wave} is identical to the flat case and has no curvature terms.
For $(a_R)_{-1}$ we need to compute $P\phi_\bullet$ and $ \partial_\nu (a_T)_0$
which will have the curvature terms above. But to compute $(a_R)_{-2}$, we will need $ \partial_\nu (a_T)_{-1}$ which will involve $P\phi_T$ and $P (a_T)_0$. This will involve $ \partial_\nu^2 (a_T)_0$ which in turn involves $ \partial_\nu P\phi_T$ which will have second derivatives of the elastic parameters, first normal derivatives of curvature terms, and curvature terms. However, any term with curvature will be $R_1$ and known from the previous step.
Thus,
\begin{align*}
i c_S^2 \xi_{3,\bullet} \partial_\nu(a_\bullet)_{-1}
&= \frac{1}{2\rho} (P\phi_\bullet)(a_\bullet)_{-1}
+ \tau \partial_{t}(a_\bullet)_{-1} -c_S^2 \eta_{tan} \cdot \nabla_{tan}(a_\bullet)_{-1}
-P(a_\bullet)_0\\
&=-c_S^2 \left[( \partial^2_{\nu}\log \sqrt \rho)
- ( \partial^2_{\nu}\log c_S)\left(1 - \frac{( \partial_t \phi_\bullet)^2}{2c_S^2 \xi^2_{3,\bullet}}\right)+ \partial_\nu H(x)/2 \right](a_\bullet)_0 \\
&\qquad \qquad + R_1
\end{align*}
where $R_1$ has at most one normal derivative of parameters and no normal derivatives of the curvature. The quantity $ \partial_\nu H$ is related to both the mean curvature and the Gauss curvature of $\Gamma$ \cite[Lemma 3.2]{DouganMean_Curv_derivative}. Again, any curvature term will be $R_1$ so that the main formulas remain the same.
After iteration as in the previous section, we obtain
\begin{multline}\label{e: (a)_J with higher curvature}
\partial_\nu(a_\bullet)_{J}
=\ \left(\frac{-i}{\xi_{3,\bullet}}\right)^{\abs{J}+1} \left[( \partial^{\abs{J}+1}_{\nu}\log \sqrt \rho) \right.
\\\left.- ( \partial^{\abs{J}+1}_{\nu}\log c_S)\left(1 - \frac{( \partial_t \phi_\bullet)^2}{2c_S^2 \xi^2_{3,\bullet}}\right)+ \partial^{\abs{J}}_\nu H(x)/2 \right](a_\bullet)_0 + R_{\abs{J}},
\end{multline}
where $R_{\abs{J}}$ includes up to $|J|-1$ normal derivatives of $H$. Using \eqref{e: transport equations} and \eqref{e: transmission conditions} with the same argument as in the flat case, we arrive at the equation for $(a_R)_J$ in the statement of the Proposition. Lemma \ref{l: higher der of mean curvature in terms of S_Gamma} applied to $ \partial^{\abs{J}}_\nu H$ shows that all curvature terms can be determined from $S_\Gamma$.
This implies that one only needs the shape operator (and not its derivatives!) to compute the full reflection operator.
In the elastic case as well, we may use boundary normal coordinates and this creates interface curvature terms as in \cite{RachBoundary}. However, these terms will contain one normal derivative less than the highest order normal derivatives of the material parameters, and would still be included in the $R_{\abs{J}}$ remainder terms. Hence, as in the above calculation for the acoustic case and in \cite{RachBoundary}, the same formulas hold in Proposition \ref{prop 5}, Proposition \ref{Prop_3.7}, Lemma \ref{Lem_3.12}, and Lemma \ref{l: higher order p speed from reflection} where $ \partial_{x_3}$ becomes $ \partial_\nu$. The remaining argument to prove Theorem \ref{t: Elastic case} proceeds as in the flat case in the previous section.
\end{proof}
The higher order normal derivatives $ \partial_\nu^k H(x)$ can be related to the principal curvatures of the interface, using the methods of \cite{DouganMean_Curv_derivative}. Note this is irrelevant for Theorem \ref{Th_main_acoustic}, since we just showed that \eqref{e: p_x_3 (a)_J higher order} continues to hold even in the general case, the terms $ \partial^{\abs{J}}_\nu H(x)$ being indeed $R_{\abs{J}}$ terms. Next, we show that even these higher order normal derivatives depend only on the curvature (shape operator) of the interface and not on higher order derivatives of the curvature.
We follow \cite{DouganMean_Curv_derivative} and introduce a natural defining function for $\Gamma$, adapted to the interface normal coordinates in which we compute.
The signed distance function $b(x)$ to the surface $\Gamma$ is defined as
\[
b(x,\Gamma) = \begin{cases}
\text{dist}(x,\Gamma) & \mbox{for } x \in \Omega_- \\
0 & \mbox{for } x \in \Gamma \\
-\text{dist}(x,\Gamma) & \mbox{for } x \in \Omega_+ \\
\end{cases}
\]
where
\[
\text{dist}(x,\Gamma) = \text{inf}_{y \in \Gamma}|y-x|.
\]
Then $\tilde x_3 = b$ is the defining function of $\Gamma$ and
\[
\nu = \nabla b(x)\restriction_\Gamma
\]
and we sometimes write $\nu$ for the vector field $\nabla b = \nabla_x \tilde x_3$ where convenient. Since $b$ is a distance function, $\abs{\nabla b} = 1$ \cite{DouganMean_Curv_derivative}. Denote by $\kappa(x)$ the mean curvature of $\Gamma$ at $x$, and by $\kappa_i$ the principal curvatures of the surface, which are the eigenvalues of $S_\Gamma$. As mentioned, $H(x)$ is proportional to $\kappa$ by a constant, so all our results for $\kappa$ extend naturally to $H$. We first mention the following important lemma.
\begin{lemma}(\cite[Lemma 3.2]{DouganMean_Curv_derivative})\label{l: first der of mean curvature}
The normal derivative of the mean curvature of a surface $\Gamma$ of class $C^3$ only depends on the shape operator of $\Gamma$. More precisely
\[
\partial_\nu \kappa = - \sum_i \kappa_i^2.
\]
For a two-dimensional surface in $3d$, this is equal to
\[
\partial_\nu \kappa = - (\kappa_1^2 + \kappa_2^2) =
- (\kappa^2 - 2\kappa_G)
\]
where $\kappa_G = \kappa_1 \kappa_2$ denotes the Gauss curvature.
\end{lemma}
We shall extend this type of result to higher order derivatives as well.
\begin{lemma}
\label{l: higher der of mean curvature in terms of S_Gamma}
All higher order normal derivatives of the mean curvature of a surface $\Gamma$ of class $C^\infty$ depend only on the shape operator of $\Gamma$. More precisely,
\[
\partial^J_\nu \kappa = (-1)^J J! \sum_i \kappa_i^{J+1},
\]
and $ \partial_\nu^J H$ differs from this by a constant depending only on the dimension.
\end{lemma}
\begin{proof}
Observe that
\[
\partial_\nu \kappa = (\nabla \kappa) \cdot \nabla b \restriction_\Gamma.
\]
It is shown in \cite[Lemma 3.2]{DouganMean_Curv_derivative} that $ \partial_{\tilde x_3} \kappa = -\abs{D^2b}^2$ where $\abs{ \cdot }$ denotes the Frobenius norm of a matrix.
Thus
\begin{equation*}
- \partial^2_{\tilde x_3} \kappa = \nabla (\abs{D^2 b}^2) \cdot \nabla b
= \nabla (b^2_{x_i x_j}) \cdot \nabla b
= 2b_{x_i x_j}b_{x_i x_j x_k}b_{x_k}
\end{equation*}
Next, we can use $1 = \abs{\nabla b}^2 = \sum_k b^2_{x_k} = b_{x_k} b_{x_k}$
where we understand the last equality as a sum over $k$, to obtain after a brief calculation
\begin{align}\label{e: partial^2 kappa with b_ij only}
\partial_{\tilde x_3}^2 \kappa
&= 2 \text{tr}( (D^2 b)^3)
\end{align}
Next, $D^2 b\restriction_\Gamma = \nabla_\Gamma \nu$, whose eigenvalues are precisely the principal curvatures $\kappa_1, \dots, \kappa_{n-1}$, so that the eigenvalues of $(D^2 b)^3$ are $\kappa_i^3$ for $i = 1, \dots, n-1$.
Thus, for a constant $c_n$, we conclude
\[
\partial_\nu^2 H = 2c_n \sum_i \kappa_i^3.
\]
We can obtain the higher order derivatives $ \partial^J_\nu \kappa$ analogously by using $1 = \abs{\nabla b}$ together with \eqref{e: partial^2 kappa with b_ij only} so that only terms in $D^2 b$ appear. In fact, we can show inductively that
\[
\partial_{\tilde x_3}^J \kappa = (-1)^J J! \text{trace}((D^2 b)^{J+1}).
\]
Denote $b_{pq} = b_{qp} = b_{x_p x_q}$. Then by the inductive step
\[
\frac{1}{(-1)^{J-1} (J-1)!} \partial_{\tilde x_3}^{J-1} \kappa = \text{trace}((D^2 b)^{J}) = \sum_{i_1,\dots,i_{J}} b_{i_1 i_2} b_{i_2 i_3} \dots b_{i_{J-1} i_{J}} b_{i_{J} i_1}
\]
After a brief computation we obtain
\begin{align*}
\frac{1}{(-1)^{J-1} (J-1)!} \partial_{\tilde x_3}^{J} \kappa &= \nabla \left(\sum_{i_1,\dots,i_{J}} b_{i_1 i_2} b_{i_2 i_3} \dots b_{i_{J-1} i_{J}} b_{i_{J} i_1} \right) \cdot \nabla b \\
&= -\sum_{p=1}^J \text{tr}((D^2b)^{J+1}) = - J\, \text{tr}((D^2b)^{J+1}).
\end{align*}
Hence, using induction and taking the trace in the above formula allows us to conclude
\begin{equation}\label{e: partial^J H in terms of shape operator}
\partial^J_\nu H = c_n(-1)^J J! \sum_i \kappa_i^{J+1}.
\end{equation}
\end{proof}
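As a quick sanity check of \eqref{e: partial^J H in terms of shape operator} (an example we add for illustration), let $\Gamma$ be the sphere of radius $r$ in $\mathbb{R}^3$ and take $b(x) = \abs{x} - r$, so that $\nabla b = x/\abs{x}$ and $D^2 b$ has eigenvalues $1/\abs{x}$, $1/\abs{x}$, $0$. Hence $\kappa_1 = \kappa_2 = 1/r$ on $\Gamma$, $\kappa = \text{tr}(D^2 b) = 2/\abs{x}$, and the normal derivative is the radial one, so
\[
\partial^J_\nu \kappa = \partial_r^J\left(\frac{2}{r}\right) = (-1)^J J!\, \frac{2}{r^{J+1}} = (-1)^J J! \sum_{i=1}^{2} \kappa_i^{J+1},
\]
in agreement with the lemma.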
\section{An alternate viewpoint that relates to boundary determination}
As mentioned in the introduction, a forward problem relevant to our inverse problem is treated in Michael Taylor's work \cite{Taylor75}. Following \cite{yamamoto1989}, the elastic transmission problem may locally be cast as a first order boundary value problem near the interface, with $\Gamma$ acting as a boundary.
Denote $\Omega_1 = \Omega_+$ and $\Omega_2 = \Omega_-$ from the introduction. We assume boundary normal coordinates are chosen so that locally, $\Gamma$ is given by $\{ x_3 = 0\}$. Again, $u$ denotes the solution to the elastic transmission problem on $\Omega \times \mathbb R$ and we denote $u_i$ as $u$ restricted to $\Omega_i \times \mathbb R$.
Then, we denote $U_i = {}^t(\Lambda(D_{x'},D_t) u_i, D_{x_3}u_i)$, where $\Lambda$ is a pseudo-differential operator with the symbol $\Lambda_1(\xi',\tau) = (\abs{\xi'}^2+\tau^2+1)^{1/2}.$
The transmission problem becomes a boundary value problem of the following form (taken from \cite{yamamoto1989})
\[\begin{cases}
D_{x_3}U_i = M_i(x',D_{x'}, D_t) U_i \qquad & \text{ in }(-1)^{i+1}x_3 > 0\\
(I_3, 0 ) U_1 = (I_3, 0)U_2 \qquad & \text{ on } x_3 = 0, \\
B_1(x',D_{x'},D_t)U_1 = B_2(x',D_{x'}, D_t) U_2 \qquad & \text{ on } x_3 = 0
\end{cases}
\]
where $I_3$ is the $3 \times 3$ identity matrix, $M_i$ is a $6 \times 6$ matrix pseudo-differential operator of order one depending on the parameters in $\Omega_i$, and the $3 \times 6$ matrix principal symbol $(B_{i1}, B_{i2})(x',\xi',\tau)$ of $B_i = (B_{i1},B_{i2})(x',D_{x'}, D_t)$ is determined by the Neumann operator \eqref{elastic_Neumann} and depends on the parameters in region $\Omega_i$ (see \cite[Equation (2.2)]{yamamoto1989} for the exact definitions).
One may then construct the boundary operator $\gamma$ appearing in \cite{Taylor75} that determines a pseudodifferential equation between the ``incoming'' and ``outgoing'' elastic waves at the interface \cite{yamamoto1989}. The principal amplitudes of the outgoing waves at the interface are determined by $\gamma$ and are used to form the parametrix for the elastic wave equation away from glancing rays. Our inverse problem is to use these scattered amplitudes at the interface to determine the jet of the material parameters on one side of an interface.
\section{Declarations}
\subsection*{Funding} M.V.d.H. gratefully acknowledges support from the Simons Foundation under the MATH + X program, the National Science Foundation under grant
DMS-1815143, and the corporate members of the Geo-Mathematical Imaging Group at Rice University. G.U. was partly supported by NSF, a Walker Family Endowed Professorship at UW and a Si-Yuan Professorship at IAS, HKUST. S.B. was partly supported by Project no.: 16305018 of the Hong Kong Research Grant Council.
\subsection*{Conflict of interest/Competing interests}
{\bf Financial interests:} The authors declare they have no financial interests.
\\
\noindent {\bf Non-financial interests:} The authors declare they have no non-financial interests.
\subsection*{Availability of data and material} Not applicable
\subsection*{ Code availability} Not applicable
\section{Introduction}
Some programming techniques make use of the ability to keep a~pointer to internal
parts of a~data structure. Such a~pointer is usually called a~\emph{finger}
\cite{finger}. As an example, a~finger can be used to track the
most recently used node in a~tree. Tree operations can then start from the finger
instead of starting from the root of the tree, which can lead to a~speedup if the
program frequently operates on elements that are stored near each other.
However, fingers lose most of their utility when applied to purely functional data
structures. Operations that make use of fingers frequently require the structure to
contain pointers to parent nodes or require mutability. Pointers to parent nodes
create loops which hugely complicate update operations.
A~\emph{zipper} \cite{huet} is a~technique of representing purely functional data
structure in a~way that allows direct access to an element at a~selected position.
Different data structures have different zipper representations: we, therefore,
distinguish between list zippers, tree zippers, etc. Zippers differ from fingers
in a~crucial way. Unlike a~finger, a~zipper contains the data structure. A~finger
can be removed, and the structure it was pointing to remains intact, while removing
a~zipper removes the structure it contains. As a~consequence, while two fingers
give direct access to two positions, two zippers do not.
Despite these differences, there is a~variety of tasks that can be solved by both
approaches. Our goal was to compare the effectiveness of these two techniques. We
chose two tasks where the ability to directly access a~position inside a~data
structure and perform local updates is beneficial: traversing a~tree in an arbitrary
way and building a~tree from a~sorted sequence. Each task was implemented in Haskell,
\mbox{C\Rplus\Rplus}\xspace, and \mbox{C\raisebox{.35ex}{$\sharp$}}\xspace, using the programming style common to that language. Note that we
compared the performance difference between these techniques, rather than
performance across programming languages.
This work is organized as follows. In the next section, we discuss zipper
representations. The third section looks at single position zippers in detail.
The testing methodology, as well as the programming tasks themselves, are presented
in the fourth section. Finally, the fifth section details our findings.
The source code used for performance testing is available online.
\footnote{\url{https://github.com/vituscze/performance-zippers}}
\section{Related Work}
Huet's original zipper technique \cite{huet} relies on manually analyzing the data
type and then defining the corresponding zipper structure. \autoref{lst:listzip}
shows an example of such a~zipper.
\begin{lstlisting}[caption={List and its zipper},label={lst:listzip},language=haskell]
data List a = Nil | Cons a (List a)
data ListZipper a = ListZipper
{ before :: List a
, focus :: a
, after :: List a
}
\end{lstlisting}
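The focus of such a~zipper can be moved in constant time. For illustration (these functions are ours and assume that \inlcode{before} stores the preceding elements nearest-first), the movement operations are:
\begin{lstlisting}[caption={List zipper movement},language=haskell]
-- Assumes 'before' is kept in reversed order,
-- nearest neighbor first.
moveLeft, moveRight :: ListZipper a -> Maybe (ListZipper a)
moveLeft (ListZipper (Cons b bs) x xs) =
  Just (ListZipper bs b (Cons x xs))
moveLeft _ = Nothing
moveRight (ListZipper bs x (Cons a as)) =
  Just (ListZipper (Cons x bs) a as)
moveRight _ = Nothing
\end{lstlisting}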
This approach becomes problematic when working with heterogeneous data structures
(a~structure containing elements of multiple types), or when working with many
different zipper representations.
For heterogeneous collections, Huet's zipper can be used to represent only the
positions of one type of elements, which is quite limiting. Adams
\cite{scrapzippers} shows how to build a~zipper for heterogeneous collections by
using generic programming techniques based on the ideas of L\"ammel and Peyton Jones
\cite{scrapboilerplate}. Another benefit of this approach is that new data
structures do not need a~custom implementation of the zipper structure, which
reduces the boilerplate that is usually present when dealing with zippers.
Instead of using an explicit data structure, the zipper can be represented as
a~suspended traversal of the original structure. Kiselyov \cite{oleg} uses
delimited continuations to implement suspended computation to great effect.
Applications include creating a~zipper for any type that is a~member of Haskell's
\inlcode{Traversable} type class, zipping two data structures for side-by-side
comparison and various operations on zippers capable of representing multiple positions.
Another way of dealing with the boilerplate code is to automate the generation of
auxiliary data structures. For each regular algebraic data type, the type of
one-hole contexts can be obtained by differentiating the original type, not unlike
differentiation in calculus \cite{derive,formalderive}. A~zipper is obtained by
combining an element of the original structure and the one-hole context. As a~result, the
zipper does not need to be defined for each data structure separately
\cite{typeindexeddata}. We explore this technique in more detail in the following
section.
Ramsey and Dias \cite{controlflow} use zippers to represent control flow
graphs in a~low-level optimizing compiler. The compiler is written in OCaml, giving the
opportunity to use an imperative approach based on mutable pointers as well as
a~purely functional approach based on zippers. As part of their analysis, the authors
also include performance comparison. Zippers are shown to perform slightly better
than mutable pointers.
\section{Zipper}
Huet's zipper is based on the idea of pointer reversal. Reversing all pointers
along the path from the root of the structure to a~selected position called a~\emph{focus}
creates a~structure that is rooted at the focus. This reversal has multiple advantages.
Direct access to the focus allows its modification in constant time. Even in
a~purely functional setting where in-place modifications are not available, creating
a~copy of the focused node may be used instead. The rest of the structure stays
intact and can be shared.
Similarly, accessing the parent and children of the focus can be done in
constant time, which can be used to efficiently move the focus around the
structure. Moving the focus is accomplished by reversing the pointers.
Huet shows how to represent this kind of pointer reversal as a~purely functional
structure. The nodes on the path from the root to the focus are stored in a~list.
Each element of the list must contain the values and substructures that are not
descended into as well as the direction taken when moving towards the focus. The
list is reversed, ensuring the parent of the focus is in the head position
(instead of the root of the structure).
\begin{lstlisting}[caption={Binary tree and its zipper},label={lst:treezip},language=haskell]
data Tree a = Leaf | Node (Tree a) a (Tree a)
data PathChoice a
= NodeL a (Tree a) -- Focus is in the left subtree
| NodeR (Tree a) a -- Focus is in the right subtree
data Context a = Context
(Tree a) -- Left subtree of the focus
(Tree a) -- Right subtree of the focus
[PathChoice a] -- Path to the root
data Zipper a = Zipper a (Context a)
\end{lstlisting}
\autoref{lst:treezip} defines a binary tree and its zipper. \autoref{lst:focusmove}
shows how to move the focus of this zipper to the parent node.
\begin{lstlisting}[caption={Focus movement},label={lst:focusmove},language=haskell]
up :: Zipper a -> Maybe (Zipper a)
up (Zipper _ (Context _ _ [])) = Nothing
up (Zipper x (Context l r (NodeL p pr:ps))) = Just $
Zipper p (Context (Node l x r) pr ps)
up (Zipper x (Context l r (NodeR pl p:ps))) = Just $
Zipper p (Context pl (Node l x r) ps)
\end{lstlisting}
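Moving the focus to a~child is analogous; a~sketch of our own (each function is inverse to \inlcode{up}):
\begin{lstlisting}[caption={Moving the focus to a child},language=haskell]
downLeft, downRight :: Zipper a -> Maybe (Zipper a)
downLeft (Zipper x (Context (Node ll lx lr) r ps)) =
  Just $ Zipper lx (Context ll lr (NodeL x r:ps))
downLeft _ = Nothing
downRight (Zipper x (Context l (Node rl rx rr) ps)) =
  Just $ Zipper rx (Context rl rr (NodeR l x:ps))
downRight _ = Nothing
\end{lstlisting}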
However, since the zipper structure depends on the original data structure, these
types and operations need to be defined for each structure separately. One way to
solve this problem is to automate this process by using data type differentiation
\cite{derive,formalderive}. We give a~brief overview of this technique here.
An \emph{algebraic data type} is a~data type defined as a~combination of products (tuples)
and sums (variants), potentially in a~recursive way. Algebraic data types that do
not change the parameters in recursive occurrences are known as \emph{regular types}. For
these types, the derivative is defined as follows.
\begin{align*}
\partial_x(0)& = 0 & \text{(empty type)}\\
\partial_x(1)& = 0 & \text{(unit type)}\\
\partial_x(y)& = 0 & \text{(type variable)} \\
\partial_x(x)& = 1 & \text{(type variable)} \\
\partial_x(F + G)& = \partial_x(F) + \partial_x(G) & \text{(sum type)}\\
\partial_x(F \times G)& = \partial_x(F) \times G +
F \times \partial_x(G) & \text{(product type)}\\
\partial_x(\mu y. F)& = [\mu y. F/y] \partial_x(F) \times
\text{List}\ ([\mu y. F/y] \partial_y(F)) & \text{(least fixed point)}
\end{align*}
The expression $[s/x]t$ denotes the capture-avoiding substitution of $s$ for $x$ in $t$.
The variables can be introduced as parameters of the entire type (such as $a$ in $\text{List}\ a$)
or by the least fixed point operation, which is used to define recursive types.
The resulting derivative is a~type of \emph{one-hole contexts}. A~one-hole context is
a~structure that uniquely describes one position within the original data structure.
Zipper then consists of a~one-hole context together with an element of the original
structure.
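As a~simpler warm-up (a~computation we include for illustration), the list type from \autoref{lst:listzip} satisfies $\text{List}\ a = \mu x.\, 1 + a \times x$, and
\begin{align*}
\partial_a(\text{List}\ a)
&= [\text{List}\ a/x]\partial_a(1 + a \times x) \times
\text{List}\ ([\text{List}\ a/x]\partial_x(1 + a \times x))\\
&= \text{List}\ a \times \text{List}\ a,
\end{align*}
so a~one-hole context of a~list consists of two lists, and the resulting zipper $a \times \text{List}\ a \times \text{List}\ a$ matches the \inlcode{ListZipper} record of \autoref{lst:listzip}.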
For example, a~binary tree is a~regular algebraic data type, and its zipper can be
obtained by computing the derivative.
\begin{align*}
\partial_a(\text{Tree}\ a)
&= \partial_a(\mu x. 1 + x \times a \times x)\\
&= [\text{Tree}\ a/x]\partial_a(1 + x \times a \times x) \times
\text{List}\ ([\text{Tree}\ a/x]\partial_x(1 + x \times a \times x))\\
&= [\text{Tree}\ a/x](x \times x) \times
\text{List}\ ([\text{Tree}\ a/x](a \times x + x \times a))\\
&= \text{Tree}\ a \times \text{Tree}\ a \times
\text{List}\ (a \times \text{Tree}\ a + \text{Tree}\ a \times a)
\end{align*}
This derivative matches the definition of the tree context given in
\autoref{lst:treezip}.
The zippers used for performance testing in this work were based on algebraic
data type differentiation. The resulting zipper representation was manually adjusted
to provide better control over its strictness properties.
\section{Performance Testing}
To compare the performance of zippers and fingers, we implemented tree traversal
and tree insertion in three different programming languages. The approach based
on zippers was implemented in Haskell. The approach based on fingers was implemented
in \mbox{C\Rplus\Rplus}\xspace and \mbox{C\raisebox{.35ex}{$\sharp$}}\xspace. We included two imperative languages, one with manual memory
management and the other with garbage collection, to check how the memory management
model affected the relative performance. Unless specified otherwise, when discussing
the imperative solutions, we are talking about the \mbox{C\Rplus\Rplus}\xspace solution.
The tasks were chosen to test the performance under two different memory allocation
requirements. Tree traversal can avoid memory allocation altogether, while tree
insertion cannot. Both tasks were tailored to the finger- and zipper-based
approaches, which was done to better represent the common use case of these approaches.
In the following, we use the term \emph{cursor} to refer to either a~zipper or
a~finger.
\subsection{Tree Traversal}
The first task focuses on tree traversal. We are given a~binary tree and a~vector
describing positions within the tree together with replacement values. The goal is
to replace the specified elements of the original tree with the given values.
For cursor-based approach, the input vector contains instructions that specify
the movement of the cursor relative to its previous position. These movement
instructions are interspersed with the replacement instructions. The element
under the cursor is replaced with the given value whenever such instruction is
encountered. As an example, replacing the left child of the root with 10 and
the right child with 20 would be represented as
\inlcode{Vector.fromList [Mov~L, Set~10, Mov~U, Mov~R, Set~20]}.
We compared this approach to a~solution where the replacement operation always
starts at the root of the tree. The input vector describes the positions relative
to the root of the tree. When a~replacement value is encountered, the specified
element is replaced, and the position is reset back to the root of the tree. The
vector corresponding to the previous example would be \inlcode{Vector.fromList [Mov~L, Set~10, Mov~R, Set~20]}.
We do not allow \inlcode{Mov~U} as it is not necessary to describe a~position.
This input format was chosen for better control over the spatial locality of
the positions, which allowed us to observe how the cursor-based approach behaves
depending on the average distance between positions. This task also allowed us to
compare the performance of imperative solutions when memory allocation is not
a~factor.
\autoref{lst:ttspec} specifies the desired behavior of the root- and cursor-based
approaches. For simplicity, the specification does not handle incorrect inputs
(such as positions outside the tree).
\begin{lstlisting}[caption={Tree traversal specification},label={lst:ttspec},language=haskell]
data Tree a = Leaf | Node (Tree a) a (Tree a)
data Dir = L | R | U
-- Replace an element at position determined by a list
-- of left/right directions.
replace :: a -> [Dir] -> Tree a -> Tree a
replace v [] (Node l _ r) = Node l v r
replace v (L:ds) (Node l x r) = Node (replace v ds l) x r
replace v (R:ds) (Node l x r) = Node l x (replace v ds r)
replace _ _ t = t
data Cmd a = Mov Dir | Set a
-- Specifies the behavior of the cursor-based approach.
cursor :: Tree a -> Vector (Cmd a) -> Tree a
cursor tree = fst . Vector.foldl step (tree, [])
where
step (t, ds) (Mov U) = (t, tail ds)
step (t, ds) (Mov d) = (t, d:ds)
step (t, ds) (Set v) = (replace v (reverse ds) t, ds)
-- Specifies the behavior of the root-based approach.
root :: Tree a -> Vector (Cmd a) -> Tree a
root tree = fst . Vector.foldl step (tree, [])
where
step (t, ds) (Mov d) = (t, d:ds)
step (t, ds) (Set v) = (replace v (reverse ds) t, [])
\end{lstlisting}
\subsubsection{Imperative Solution}
\autoref{lst:ttlayout} defines the structures used to represent the binary
tree. Member functions are omitted for brevity.
\begin{lstlisting}[caption={Imperative binary tree (memory layout)},label={lst:ttlayout},language=c++]
struct node_t {
node_t* parent;
node_t* left;
node_t* right;
int64_t value;
};
struct tree_t {
node_t* root;
node_t* finger;
};
\end{lstlisting}
Movement instructions are represented by integer constants to simplify the code.
The input vector is processed by iterating over all its elements, applying
the corresponding finger operation at each step. We evaluated the imperative
solutions on a~perfect binary tree of a~specified depth.
\subsubsection{Functional Solution}
The functional solution is more involved. Since the task is designed with a~cursor-based
approach in mind, the zipper lends itself to this problem naturally. However, the root-based
approach presents a~few problems that have to be addressed.
The tree and zipper definitions shown in \autoref{lst:ttzip} follow the
definitions from \autoref{lst:treezip}, with the exception that each data type
contains strictness annotations. Fields annotated with \inlcode{!} are evaluated
whenever the enclosing data constructor is, which ensures that these structures
are fully evaluated at all times.
\begin{lstlisting}[caption={Binary tree and its zipper (with strictness annotations)},label={lst:ttzip},language=haskell]
data Tree = Node !Tree !Int64 !Tree | Leaf
data Path
= PathLeft !Int64 !Tree !Path
| PathRight !Tree !Int64 !Path
| Nil
data Zipper = Zipper !Tree !Int64 !Tree !Path
\end{lstlisting}
As a~consequence, the standard list type is replaced with a~custom type. GHC is also
instructed to unbox the integer fields, which is done to ensure that the cost of
operating on boxed values does not have any impact on the performance. Unboxed
vectors from the vector package are used to represent the input vector.
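As a~short illustration (not taken from the benchmark code), the unboxing of a~strict
field can also be requested explicitly with GHC's \inlcode{UNPACK} pragma; with
optimizations enabled, the \inlcode{-funbox-small-strict-fields} optimization performs
the same transformation for small strict fields such as \inlcode{Int64} automatically.
\begin{lstlisting}[caption={Explicit field unboxing (illustrative)},label={lst:unpackex},language=haskell]
data Tree
  = Node !Tree {-# UNPACK #-} !Int64 !Tree
  | Leaf
\end{lstlisting}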
The zipper comes with operations that replace the focused element and move
the focus left, right, and up. Processing the input vector is implemented as
a~strict left fold. The zipper is the accumulator value, and in each step, we
apply the zipper operation that corresponds to the current element of the vector.
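For concreteness, the zipper operations used in this fold can be sketched as follows.
The names are ours, and the partial cases (such as moving left at a~\inlcode{Leaf} or
moving up at the root) are omitted.
\begin{lstlisting}[caption={Sketch of the zipper operations (illustrative)},label={lst:zipopsex},language=haskell]
moveLeft, moveRight, moveUp :: Zipper -> Zipper
moveLeft  (Zipper (Node ll lx lr) x r p) = Zipper ll lx lr (PathLeft x r p)
moveRight (Zipper l x (Node rl rx rr) p) = Zipper rl rx rr (PathRight l x p)
moveUp    (Zipper l x r (PathLeft px pr p))  = Zipper (Node l x r) px pr p
moveUp    (Zipper l x r (PathRight pl px p)) = Zipper pl px (Node l x r) p

setValue :: Int64 -> Zipper -> Zipper
setValue v (Zipper l _ r p) = Zipper l v r p
\end{lstlisting}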
When starting from the root, replacing an element of the tree can be done easily
with a~recursive function that reads the vector in each recursive call and
descends into the correct subtree. The problem is propagating the information
about how many elements of the input vector were consumed so that the next
operation can start from the correct position. To make sure the root-based
approach is efficient, we compared a~few ways of dealing with this issue.
\paragraph{State Monad Solution}
The obvious solution is to use a~state monad. Note that laziness in the state is
unwanted, and the strict monad version is about twice as fast. Inspection of GHC's
core language \cite{ghccore} showed that the monadic code was optimized away and most
values were unboxed. The only value that was not unboxed was the state returned
by the replacement operation. However, replacing the standard state monad with
a~handwritten one that uses an unboxed integer did not improve the performance in
a~statistically significant way.
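The following sketch (our names, using a~boxed vector for simplicity) shows the shape
of this solution: the strict \inlcode{State} monad threads the current vector index
through the recursive descent, and one descent from the root is performed per
\inlcode{Set} command.
\begin{lstlisting}[caption={Root-based replacement with a state monad (sketch)},label={lst:statesketch},language=haskell]
import Control.Monad.State.Strict
import qualified Data.Vector as V

run :: V.Vector (Cmd Int64) -> Tree -> Tree
run v t0 = evalState (loop t0) 0
  where
    loop t = do
      i <- get
      if i >= V.length v then pure t else loop =<< descend t
    -- One descent from the root: consume Mov commands until a Set.
    descend t = do
      i <- get
      put (i + 1)
      case v V.! i of
        Set x -> case t of
          Node l _ r -> pure (Node l x r)
          Leaf       -> pure Leaf
        Mov L -> case t of
          Node l x r -> (\l' -> Node l' x r) <$> descend l
          Leaf       -> pure Leaf
        Mov R -> case t of
          Node l x r -> Node l x <$> descend r
          Leaf       -> pure Leaf
        Mov U -> pure t  -- not produced by the root-based input format
\end{lstlisting}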
\paragraph{ST Monad Solution}
Another way of passing the state is to use the imperative \inlcode{ST} monad.
The standard implementation of \inlcode{STRef} is limited to boxed types, which
severely degraded the performance. The standard references therefore had to be
replaced with unboxed references from the unboxed-ref package.
\paragraph{findIndices Solution}
Instead of propagating the new position via various versions of the state monad,
the replacement operation can be given hints on where to start. These hints can
be provided by an auxiliary vector containing the positions where each descent
starts. We can create this vector by using the \inlcode{findIndices} function
from the vector package. This solution has two drawbacks: the input vector has to
be traversed twice, and the auxiliary vector has to be stored in memory.
\paragraph{findIndex Solution}
We can avoid the memory allocation by computing the hints as needed, instead of
all at once, by using the \inlcode{findIndex} function.
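As a~small illustration (our names, boxed vector for simplicity), the position of the
\inlcode{Set} command that ends a~descent starting at index \inlcode{i} can be computed
on demand with:
\begin{lstlisting}[caption={Computing a hint on demand (illustrative)},label={lst:findidxex},language=haskell]
import qualified Data.Vector as V

-- Index of the first Set command at or after position i, if any.
nextSet :: Int -> V.Vector (Cmd a) -> Maybe Int
nextSet i v = (+ i) <$> V.findIndex isSet (V.drop i v)
  where
    isSet (Set _) = True
    isSet _       = False
\end{lstlisting}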
\paragraph{Precomputed Vector Solution}
To measure the impact of the double traversal, we also implemented a~function
where the vector of hints is a~part of its input. The vector is precomputed, and
its time requirements were not included in the comparison.
Much like the imperative solution, all functional solutions were evaluated on
a~perfect binary tree of a~specified depth.
\subsection{Tree Insertion}
The second task focuses on tree building. Building a~search tree can be done much
more efficiently when the input sequence is sorted. The search for a~new insertion
point can be skipped since it will always be the leftmost or the rightmost node
(depending on the order of the input sequence).
This node can be tracked with a~finger that is updated
each time a~new element is inserted. The same can be done with a~zipper, although
the standard tree insert operation cannot be reused.
To test a~zipper for a~different structure, we chose 2-3 trees \cite{algo}
for this task. The structure is redundant: all data is kept in the leaf nodes,
and internal nodes contain the minimum of their right subtree (and of the middle
subtree, whenever applicable). The task is then to build a~redundant 2-3 tree
from a~descending sequence of a~given length. The standard approach starts
from the root of the tree when looking for the insertion point. The cursor-based
approach starts in the leftmost node and performs no additional search.
\subsubsection{Imperative Solution}
\autoref{lst:tilayout} defines the structures used to represent the 2-3 tree.
Member functions are omitted for brevity.
\begin{lstlisting}[caption={Imperative 2-3 tree (memory layout)},label={lst:tilayout},language=c++]
struct node_t {
std::array<int64_t, 2> values;
std::array<node_t*, 3> children;
node_t* parent;
bool is_two_node;
};
struct tree_t {
node_t* root;
node_t* last_inserted;
};
\end{lstlisting}
Tree insertion follows the standard algorithm. We obtain the insertion point
and attempt to insert the element into the corresponding leaf node. When the leaf
node is full, we allocate a~new node and redistribute all the elements from the
original node. After this split, we are left with a~two-node and a~three-node.
We take the middle element and the right node and attempt to insert them into the
parent node. We repeat this process until no split occurs or the root is reached.
Note that splitting an inner node results in two two-nodes because the middle
element does not need to be duplicated.
The split operation puts the inserted element into a~two-node when inserting
elements in descending order. As a~result, leaf nodes are only split every second
insertion. The implementation could be extended to provide a~similar benefit
for insertion in ascending order.
We also tried the following variations of the tree operations: non-recursive
destructor, split operation that allocates the left node, and recursive
root-based insertion. The impact on the performance was either detrimental or
statistically insignificant.
We repeatedly inserted elements into the tree in descending order and measured the
time taken. In the case of the \mbox{C\Rplus\Rplus}\xspace solution, this measurement also included the time
spent on deallocation, giving a~fairer comparison to the languages with
garbage collection.
\subsubsection{Functional Solution}
\autoref{lst:titree} shows a~definition of the 2-3 tree type with strictness annotations.
\begin{lstlisting}[caption={Functional 2-3 tree},label={lst:titree},language=haskell]
data Tree
= Leaf
| Node2 !Tree !Int64 !Tree
| Node3 !Tree !Int64 !Tree !Int64 !Tree
\end{lstlisting}
To insert an element into the tree, we recursively insert it into the correct subtree.
The result of this insertion is either one subtree or two subtrees and an element.
The first case is handled by replacing the corresponding subtree; the second case
indicates that a~split occurred and is handled similarly to the imperative solution.
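The shape of this case analysis can be sketched with a~hypothetical result type
(our names; the benchmark code additionally maintains the minima stored in the
internal nodes, as described above).
\begin{lstlisting}[caption={Root-based insertion result type (sketch)},label={lst:resultsketch},language=haskell]
-- Result of a recursive insertion: either the updated subtree, or two
-- subtrees and the element to be pushed up after a split.
data Result
  = Done  !Tree
  | Split !Tree !Int64 !Tree

-- A two-node absorbs a split of its left child into a three-node;
-- a split below a three-node would propagate another Split upward.
absorbLeft :: Result -> Int64 -> Tree -> Result
absorbLeft (Done l')      x r = Done (Node2 l' x r)
absorbLeft (Split a m b)  x r = Done (Node3 a m b x r)
\end{lstlisting}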
To obtain a~zipper, we compute the derivative of a~parametrized version of the
2-3 tree type.
\begin{align*}
F &= 1 + a x^2 + a^2 x^3\\
\partial_a(F) &= x^2 + 2 a x^3\\
\partial_x(F) &= 2 a x + 3 a^2 x^2\\
\partial_a(\text{Tree}\ a)
&= \partial_a(\mu x. F)\\
&= [\text{Tree}\ a/x]\partial_a(F)
\times \text{List}\ ([\text{Tree}\ a/x]\partial_x(F))\\
&= ((\text{Tree}\ a)^2 + 2 a (\text{Tree}\ a)^3)
\times \text{List}\ (2 a (\text{Tree}\ a) + 3 a^2 (\text{Tree}\ a)^2)
\end{align*}
If the focus is in a~two-node, then there is only one choice for the position, and
the context is given by the two subtrees. This case is represented by
$(\text{Tree}\ a)^2$. If the focus is in a~three-node, there are two choices for
the position (left or right). The context is given by the three subtrees and the
element that is not focused, or $2 a (\text{Tree}\ a)^3$.
The path also distinguishes between two-nodes and three-nodes. In the case of a~two-node,
there are two choices for the focus position (left or right subtree). The context
is given by the element and the other subtree. This case is represented by
$2 a (\text{Tree}\ a)$. In the case of a~three-node, there are three choices for
the focus position (left, middle, or right subtree) and the context is given by
the two elements and the other two subtrees, resulting in the final term
$3 a^2 (\text{Tree}\ a)^2$.
Since the insertion algorithm only needs to know the leftmost node and not the
particular element, we simplify the zipper by removing this choice point. The
type variable is replaced with \inlcode{Int64} and the list type is replaced
with a~custom strict list. \autoref{lst:tizip} shows the resulting type.
\begin{lstlisting}[caption={2-3 tree zipper},label={lst:tizip},language=haskell]
data Nonempty
= Nonempty2 !Tree !Int64 !Tree
| Nonempty3 !Tree !Int64 !Tree !Int64 !Tree
data PathChoice
= Path2L !Int64 !Tree
| Path2R !Tree !Int64
| Path3L !Int64 !Tree !Int64 !Tree
| Path3M !Tree !Int64 !Int64 !Tree
| Path3R !Tree !Int64 !Tree !Int64
data Path = Nil | Cons !PathChoice !Path
data Zipper = Zipper !Nonempty !Path
\end{lstlisting}
Inserting an element by using a~zipper more closely resembles the imperative
solution. The key difference is that instead of pointers to parent nodes, the
zipper contains a~list of choices along the path from the root to the focus.
Instead of descending into the tree, the zipper-based insertion recurses along
this list, effectively moving from the focus toward the root.
When a~node splits and we attempt to add the element and one of the freshly split
nodes to the parent node, we also need to include information about the position
of the split node in relation to the element. This position is necessary to
reconstruct the extra information contained in the zipper. The imperative
solution assumes the split node is always to the right.
Much like the imperative solution, we repeatedly inserted elements into the tree in
descending order and measured time taken.
\section{Results}
All experiments were performed on an Intel Core i7-4750HQ processor with 24~GB of main
memory under the Windows 10 operating system. Each program was compiled with the
highest available level of compiler optimizations, and in the case of GHC, the
LLVM backend was used for code generation. Garbage collectors were only allowed
to run in a~single thread. Each solution was executed with an increasing number of
iterations until a~time limit of three minutes was reached. The measured times were
normalized to one iteration. The mean execution time and the standard deviation
were computed. Error bars represent one standard deviation. The raw measurements
are available online.\footnote{\url{https://github.com/vituscze/performance-zippers/blob/master/data.csv}}
\subsection{Tree Traversal}
The input files were generated by randomly picking 1,000,000 elements out of a~perfect binary
tree with 20 levels and outputting the paths between consecutive elements. We evaluated the tree
traversal in four scenarios, which were obtained by biasing the random generator
towards particular areas of the tree: no bias, bottom bias, right bias, and bottom-right bias.
One input file was generated for each scenario to ensure any performance differences
were not due to different input data.
The results of the functional root-based approach are based on the
\inlcode{findIndex} solution. Its precomputed version is only marginally faster,
showing that the double traversal has a~low impact on the performance. The state
and \inlcode{ST} solutions are much slower. Interestingly, the \inlcode{ST}
solution is slightly slower than the purely functional state solution. A full
comparison of these variants can be found in \autoref{fig:traversal-hs}.
\begin{figure}
\centering
\begin{tikzpicture}
\begin{axis}[
ybar,
ymin=0,
enlarge x limits=0.2,
legend pos=outer north east,
ymajorgrids,
yminorgrids,
minor grid style={line width=.1pt,draw=gray!20},
minor y tick num=4,
legend entries={State,ST,findIndices,findIndex,Precomputed},
bar width=5pt,
ylabel={Relative time (\%)},
xtick=data,
symbolic x coords={Uniform, Bottom, Right, Bottom-right},
x tick label style={text width=40pt, align=center},
width=\axisdefaultwidth,
height=150pt
]
\addplot [area legend,fill=red!30,error bars/.cd,y dir=both,y explicit]
coordinates
{
(Uniform,129.01)
(Bottom,127.91)
(Right,184.83)
(Bottom-right,193.06)
};
\addplot [area legend,fill=orange!30,error bars/.cd,y dir=both,y explicit]
coordinates
{
(Uniform,129.62)
(Bottom,127.58)
(Right,198.96)
(Bottom-right,208.25)
};
\addplot [area legend,fill=yellow!30,error bars/.cd,y dir=both,y explicit]
coordinates
{
(Uniform,100.97)
(Bottom,98.60)
(Right,122.80)
(Bottom-right,127.44)
};
\addplot [area legend,fill=green!30,error bars/.cd,y dir=both,y explicit]
coordinates
{
(Uniform,100.00)
(Bottom,100.00)
(Right,100.00)
(Bottom-right,100.0)
};
\addplot [area legend,fill=cyan!30,error bars/.cd,y dir=both,y explicit]
coordinates
{
(Uniform,97.98)
(Bottom,97.40)
(Right,91.61)
(Bottom-right,94.75)
};
\end{axis}
\end{tikzpicture}
\caption{Tree Traversal Performance (Haskell)}
\label{fig:traversal-hs}
\end{figure}
\begin{figure}
\centering
\begin{tikzpicture}
\begin{axis}[
ybar,
ymin=0,
enlarge x limits=0.3,
legend pos=outer north east,
ymajorgrids,
yminorgrids,
minor grid style={line width=.1pt,draw=gray!20},
minor y tick num=4,
legend entries={Root,Cursor},
bar width=10pt,
legend columns=1,
ylabel={Time (ms)},
xtick=data,
symbolic x coords={\mbox{C\Rplus\Rplus}\xspace,\mbox{C\raisebox{.35ex}{$\sharp$}}\xspace,Haskell},
width=\axisdefaultwidth,
height=150pt
]
\addplot [area legend,fill=red!30,error bars/.cd,y dir=both,y explicit]
coordinates
{
(\mbox{C\Rplus\Rplus}\xspace,341.16)+-(2.17,2.17)
(\mbox{C\raisebox{.35ex}{$\sharp$}}\xspace,361.12)+-(6.78,6.78)
(Haskell,1272.65)+-(24.84,24.84)
};
\addplot [area legend,fill=blue!30,error bars/.cd,y dir=both,y explicit]
coordinates
{
(\mbox{C\Rplus\Rplus}\xspace,537.31)+-(2.84,2.84)
(\mbox{C\raisebox{.35ex}{$\sharp$}}\xspace,536.74)+-(5.93,5.93)
(Haskell,1532.65)+-(13.72,13.72)
};
\end{axis}
\end{tikzpicture}
\caption{Tree Traversal Performance (no bias)}
\label{fig:traversal-no}
\end{figure}
\begin{figure}
\centering
\begin{tikzpicture}
\begin{axis}[
ybar,
ymin=0,
enlarge x limits=0.3,
legend pos=outer north east,
ymajorgrids,
yminorgrids,
minor grid style={line width=.1pt,draw=gray!20},
minor y tick num=4,
legend entries={Root,Cursor},
bar width=10pt,
legend columns=1,
ylabel={Time (ms)},
xtick=data,
symbolic x coords={\mbox{C\Rplus\Rplus}\xspace,\mbox{C\raisebox{.35ex}{$\sharp$}}\xspace,Haskell},
width=\axisdefaultwidth,
height=150pt
]
\addplot [area legend,fill=red!30,error bars/.cd,y dir=both,y explicit]
coordinates
{
(\mbox{C\Rplus\Rplus}\xspace,388.17)+-(1.72,1.72)
(\mbox{C\raisebox{.35ex}{$\sharp$}}\xspace,393.65)+-(5.02,5.02)
(Haskell,1417.70)+-(17.41,17.41)
};
\addplot [area legend,fill=blue!30,error bars/.cd,y dir=both,y explicit]
coordinates
{
(\mbox{C\Rplus\Rplus}\xspace,589.31)+-(4.64,4.64)
(\mbox{C\raisebox{.35ex}{$\sharp$}}\xspace,573.92)+-(4.98,4.98)
(Haskell,1693.21)+-(14.43,14.43)
};
\end{axis}
\end{tikzpicture}
\caption{Tree Traversal Performance (bottom bias)}
\label{fig:traversal-bottom}
\end{figure}
\begin{figure}
\centering
\begin{tikzpicture}
\begin{axis}[
ybar,
ymin=0,
enlarge x limits=0.3,
legend pos=outer north east,
ymajorgrids,
yminorgrids,
minor grid style={line width=.1pt,draw=gray!20},
minor y tick num=4,
legend entries={Root,Cursor},
bar width=10pt,
legend columns=1,
ylabel={Time (ms)},
xtick=data,
symbolic x coords={\mbox{C\Rplus\Rplus}\xspace,\mbox{C\raisebox{.35ex}{$\sharp$}}\xspace,Haskell},
width=\axisdefaultwidth,
height=150pt
]
\addplot [area legend,fill=red!30,error bars/.cd,y dir=both,y explicit]
coordinates
{
(\mbox{C\Rplus\Rplus}\xspace,47.84)+-(0.40,0.40)
(\mbox{C\raisebox{.35ex}{$\sharp$}}\xspace,115.82)+-(1.09,1.09)
(Haskell,137.94)+-(1.25,1.25)
};
\addplot [area legend,fill=blue!30,error bars/.cd,y dir=both,y explicit]
coordinates
{
(\mbox{C\Rplus\Rplus}\xspace,19.18)+-(0.29,0.29)
(\mbox{C\raisebox{.35ex}{$\sharp$}}\xspace,49.01)+-(1.37,1.37)
(Haskell,43.64)+-(0.31,0.31)
};
\end{axis}
\end{tikzpicture}
\caption{Tree Traversal Performance (right bias)}
\label{fig:traversal-right}
\end{figure}
\begin{figure}
\centering
\begin{tikzpicture}
\begin{axis}[
ybar,
ymin=0,
enlarge x limits=0.3,
legend pos=outer north east,
ymajorgrids,
yminorgrids,
minor grid style={line width=.1pt,draw=gray!20},
minor y tick num=4,
legend entries={Root,Cursor},
bar width=10pt,
legend columns=1,
ylabel={Time (ms)},
xtick=data,
symbolic x coords={\mbox{C\Rplus\Rplus}\xspace,\mbox{C\raisebox{.35ex}{$\sharp$}}\xspace,Haskell},
width=\axisdefaultwidth,
height=150pt
]
\addplot [area legend,fill=red!30,error bars/.cd,y dir=both,y explicit]
coordinates
{
(\mbox{C\Rplus\Rplus}\xspace,48.89)+-(0.68,0.68)
(\mbox{C\raisebox{.35ex}{$\sharp$}}\xspace,117.86)+-(0.88,0.88)
(Haskell,135.69)+-(0.73,0.73)
};
\addplot [area legend,fill=blue!30,error bars/.cd,y dir=both,y explicit]
coordinates
{
(\mbox{C\Rplus\Rplus}\xspace,15.94)+-(0.14,0.14)
(\mbox{C\raisebox{.35ex}{$\sharp$}}\xspace,42.95)+-(0.81,0.81)
(Haskell,35.91)+-(0.44,0.44)
};
\end{axis}
\end{tikzpicture}
\caption{Tree Traversal Performance (bottom-right bias)}
\label{fig:traversal-bottomright}
\end{figure}
When the spatial locality is low (\autoref{fig:traversal-no} and
\autoref{fig:traversal-bottom}), the root-based approach shows a~clear advantage
over the cursor-based approach. The relative gains of the root-based approach
are in the range of 50\% to 60\% for the imperative solutions and around 20\%
for the functional solution.
When the spatial locality is high (\autoref{fig:traversal-right} and
\autoref{fig:traversal-bottomright}), the cursor-based approach takes over. In
the case of the right bias, \mbox{C\Rplus\Rplus}\xspace reaches a~150\% speedup, \mbox{C\raisebox{.35ex}{$\sharp$}}\xspace 135\%, and Haskell
220\%. The bottom-right bias increases this gap even more: \mbox{C\Rplus\Rplus}\xspace reaches a~205\% speedup,
\mbox{C\raisebox{.35ex}{$\sharp$}}\xspace 175\%, and Haskell 280\%.
Notice that the root-based approach also shows a~considerable performance boost
when the input data has high spatial locality. This boost is a~consequence of
a~cache-friendly memory access pattern. In all scenarios, the zipper-based
approach exhibits smaller performance losses (low spatial locality) and higher
performance gains (high spatial locality) when compared to the finger-based
approach.
\subsection{Tree Insertion}
Evaluating insertion into a~2-3 tree was done by repeatedly constructing a~tree
containing 10,000,000 elements. The ordered sequence was not part of the input.
Instead, the elements of this sequence were generated on the fly and inserted into the
tree directly, without any auxiliary structure. As mentioned earlier, this task
compared fingers and zippers in an environment where memory allocation is necessary.
For this reason, the \mbox{C\Rplus\Rplus}\xspace solution also evaluated the time it took to deallocate
the structure, giving a~better comparison with \mbox{C\raisebox{.35ex}{$\sharp$}}\xspace and Haskell.
\begin{figure}
\centering
\begin{tikzpicture}
\begin{axis}[
ybar,
ymin=0,
enlarge x limits=0.3,
legend pos=outer north east,
ymajorgrids,
yminorgrids,
minor grid style={line width=.1pt,draw=gray!20},
minor y tick num=4,
legend entries={Root,Cursor},
bar width=10pt,
legend columns=1,
ylabel={Time (ms)},
xtick=data,
symbolic x coords={\mbox{C\Rplus\Rplus}\xspace,\mbox{C\raisebox{.35ex}{$\sharp$}}\xspace,Haskell},
width=\axisdefaultwidth,
height=150pt
]
\addplot [area legend,fill=red!30,error bars/.cd,y dir=both,y explicit]
coordinates
{
(\mbox{C\Rplus\Rplus}\xspace,1543.57)+-(32.32,32.32)
(\mbox{C\raisebox{.35ex}{$\sharp$}}\xspace,3504.60)+-(89.59,89.59)
(Haskell,3256.66)+-(64.52,64.52)
};
\addplot [area legend,fill=blue!30,error bars/.cd,y dir=both,y explicit]
coordinates
{
(\mbox{C\Rplus\Rplus}\xspace,1290.40)+-(41.57,41.57)
(\mbox{C\raisebox{.35ex}{$\sharp$}}\xspace,3015.65)+-(109.26,109.26)
(Haskell,1052.39)+-(54.06,54.06)
};
\end{axis}
\end{tikzpicture}
\caption{2-3 Tree Insertion Performance}
\label{fig:insert}
\end{figure}
The results are shown in \autoref{fig:insert}. All three solutions show
a~preference for the cursor-based approach. In \mbox{C\Rplus\Rplus}\xspace and \mbox{C\raisebox{.35ex}{$\sharp$}}\xspace, the finger-based
insertion is roughly 20\% faster than the root-based insertion. In Haskell,
the zipper-based insertion is 210\% faster.
Note that both the root-based and finger-based insertion allocate $\mathcal{O}(1)$
nodes (amortized) per insertion in imperative languages. The root-based
functional solution needs to copy the path from the root to the insertion point,
leading to $\mathcal{O}(\log{} n)$ new nodes per insertion. The zipper-based
insertion, therefore, not only avoids the cost of finding the insertion point
but also leads to significantly reduced allocation count.
Comparing the \mbox{C\Rplus\Rplus}\xspace and \mbox{C\raisebox{.35ex}{$\sharp$}}\xspace results did not point to memory management as a major
factor. Reducing the size of the tree (by performing fewer insertions) showed that
the gap between \mbox{C\Rplus\Rplus}\xspace and \mbox{C\raisebox{.35ex}{$\sharp$}}\xspace decreased slightly, which hints at a~minor performance
benefit from using garbage collection.
The \mbox{C\Rplus\Rplus}\xspace solution could be further optimized by using a memory pool instead of
the standard \inlcode{new} and \inlcode{delete} operators. However, we did not
want to deviate from the standard memory management models. In a similar vein,
we decided against fine-tuning the garbage collector parameters for the
Haskell and \mbox{C\raisebox{.35ex}{$\sharp$}}\xspace solutions.
\section{Conclusion}
While zippers lack the flexibility and ease of use of mutable pointers, they are
nevertheless a~powerful tool when working with purely functional data
structures. However, it was unclear whether zippers offer the same performance
benefit as the imperative approach.
We compared fingers and zippers in two scenarios: arbitrary tree traversal
and tree insertion. The first test measured the effectiveness of zippers when the
imperative counterpart does not have to allocate memory. This test focused on
fast access to a~selected element as well as the ability to move the focus. The second
test considered the case where both the imperative and functional solutions need
to allocate memory. This test focused on the pointer reversal aspect of zippers.
We provided evidence that when zippers are used in a functional setting, they offer
higher performance gains compared to mutable pointers used in an imperative setting.
More importantly, zippers provide this gain without undermining the benefits of
purely functional data structures. We hope that this work encourages functional
programmers to use zippers before reaching for imperative techniques when optimizing
their code.
\clearpage
\bibliographystyle{splncs03}
\section{Introduction}
The Hochschild (co)homology of associative algebras has been extensively studied since its first appearance in 1945 in the paper {\it On The Cohomology Groups of an Associative Algebra} by Gerhard Hochschild \cite{Hoch}. The Hochschild cohomology of an associative algebra carries a rich algebraic structure: the cup product makes it a graded algebra. In \cite{Ger}, Gerstenhaber proves that the cup product is graded commutative and, moreover, that there exists a Lie bracket that endows $HH^{*}(A,A)$ with the structure of a graded Lie algebra. These two structures satisfy compatibility conditions that are now known to define a \textit{Gerstenhaber algebra}.
In \cite{Tradler}, Tradler proves that if $A$ is a symmetric algebra up to homotopy, then $HH^{*}(A,A)$ is a \textit{Batalin-Vilkovisky algebra}. In \cite{M2}, Menichi presents another proof of Tradler's result for symmetric differential graded algebras. These structures play an important role due to their connection with string topology, as can be found in \cite{Bur}, \cite{cohenjonesyan}, \cite{cohenjones}, \cite{FelMenTh}, \cite{FelTh}, \cite{M3}, \cite{Wes1} and \cite{Wes2}.
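Recall that, up to sign conventions, a Batalin-Vilkovisky algebra is a Gerstenhaber algebra equipped with an operator $\Delta$ of degree $-1$ such that $\Delta^{2}=0$ and the Gerstenhaber bracket measures the failure of $\Delta$ to be a derivation of the cup product:
\begin{align}
[a,b] = -(-1)^{|a|}\left(\Delta(a\smile b)-\Delta(a)\smile b-(-1)^{|a|}\, a\smile\Delta(b)\right) \notag
\end{align}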
Given a symmetric algebra, such as the group ring of a finite group, the Batalin-Vilkovisky structure depends on the duality isomorphism: using different symmetric forms, we get different Batalin-Vilkovisky structures with the same underlying Gerstenhaber algebra. The Batalin-Vilkovisky algebra structure on the Hochschild cohomology of cyclic groups of prime order over $\ensuremath{\mathbb{F}}_p$ was calculated by Yang \cite{Yang} using the isomorphism between the group ring and the truncated polynomial ring. However, the symmetric form used in those calculations does not correspond to the canonical form on group rings. For cyclic groups, using the canonical symmetric form, we get
\begin{teo*}
Let $R$ be an integral domain with $char(R)\nmid n$ and $A=R[\ensuremath{\mathbb{Z}}/n\ensuremath{\mathbb{Z}}]$. Then as a BV-algebra
\begin{align}
HH^{*}(A;A)&=R [x,z]/(x^{n}-1,nz) \notag \\
\Delta(a)&=0 \quad \forall a\in HH^{*}(A;A) \notag
\end{align}
where $|x|=0$ and $|z|=2$.
\end{teo*}
\begin{teo*}
Let $R$ be a commutative ring with $char(R)=p>0$ and $A=R[\ensuremath{\mathbb{Z}}/n\ensuremath{\mathbb{Z}}]$ with $n=mp$. If $p\neq 2$, or $p=2$ and $m$ is even. Then as a BV-algebra
\begin{align}
HH^{*}(A;A)&=R [x,y,z]/(x^{n}-1,y^{2}) \notag \\
\Delta (z^{k}y^{r}x^{l}) &= r(l-1)z^{k}x^{l-1} \notag
\end{align}
If $p=2$ and $m$ is odd. Then as a BV-algebra
\begin{align}
HH^{*}(A;A)&=R [x,y,z]/(x^{n}-1,y^{2}-x^{n-2}z) \notag \\
\Delta (z^{k}y^{r}x^{l}) &= r(l-1)z^{k}x^{l-1} \notag
\end{align}
where $|x|=0$, $|y|=1$ and $|z|=2$.
\end{teo*}
The aim of this paper is to present a Batalin-Vilkovisky algebra structure on the Hochschild cohomology of the group ring of a finitely generated abelian group. In order to achieve this goal, we study the behavior of the Batalin-Vilkovisky structure under tensor products. Over fields, Le and Zhou prove in \cite{Tensor} that the Künneth formula for Hochschild cohomology is an isomorphism of Gerstenhaber algebras if at least one of the algebras is finite dimensional, and an isomorphism of Batalin-Vilkovisky algebras if the algebras are symmetric. In Section 3, we extend their result to a general class of rings. As a particular case over the integers, we get the following new result
\begin{teo*}
Let $A=\ensuremath{\mathbb{Z}}[\ensuremath{\mathbb{Z}}/n\ensuremath{\mathbb{Z}}]$ and $B=\ensuremath{\mathbb{Z}}[\ensuremath{\mathbb{Z}}/m\ensuremath{\mathbb{Z}}]$ with $n=km$. Then, as a BV-algebra
\begin{align}
HH^{*}(A\otimes B;A\otimes B)&= \frac{\ensuremath{\mathbb{Z}}[x,t,a,b,c]}{(x^n-1, t^m-1, na, mb, mc, c^2)} \notag \\
\Delta(x^{i}t^{j}a^{l}b^{r}c^{s}) &= sx^{i-1}t^{j}a^{l}b^{r}((i-1)b-jka) \notag
\end{align}
in all cases except when $m$ is even and $k$ is odd, in which case we get
\begin{align}
HH^{*}(A\otimes B;A\otimes B)&= \frac{\ensuremath{\mathbb{Z}}[x,t,a,b,c]}{(x^n-1, t^m-1, na, mb, mc, c^2-\frac{m}{2}x^{n-2}ab(b+ka))} \notag \\
\Delta(x^{i}t^{j}a^{l}b^{r}c^{s}) &= sx^{i-1}t^{j}a^{l}b^{r}((i-1)b-jka) \notag
\end{align}
where $|x|=|t|=0$, $|a|=|b|=2$ and $|c|=3$.
\end{teo*}
Notice that the tensor product of the corresponding Hochschild cohomology rings gives a trivial BV-structure. Nevertheless, the Hochschild cohomology of the tensor product gives a highly non-trivial BV-structure.
When the algebra is not symmetric but satisfies some form of Poincaré duality, Ginzburg \cite{Ginz} and Menichi \cite{M2} prove that $HH^*(A;A)$ is also a Batalin-Vilkovisky algebra by transferring the Connes $B$-operator through the isomorphism between Hochschild homology and Hochschild cohomology. For the tensor product of two such algebras, we prove that if the algebras satisfy a finiteness condition on their resolutions (\ref{isoResAB}), there is also an isomorphism of Batalin-Vilkovisky algebras between the Hochschild cohomology of the tensor product and the tensor product of their cohomologies. In particular, for free abelian groups of finite rank, we have
\begin{teo*}
As BV-algebras,
\begin{align}
HH^{*}(R[\ensuremath{\mathbb{Z}}^{n}]&;R[\ensuremath{\mathbb{Z}}^{n}])= R[x_1,x_1 ^{-1},\dots,x_n,x_n ^{-1}]\otimes \Lambda(y_1,\dots ,y_n) \notag \\
\Delta(x_1 ^{i_1}\cdots x_n ^{i_n}y_1 ^{r_1}\cdots y_n ^{r_n}) &= \displaystyle \sum_{k=1} ^{n} (-1)^{^{r_1+\cdots +r_{k-1}}} r_k( i_k-1) x_1 ^{i_1}\cdots x_k^{i_k-1}\cdots x_n ^{i_n}y_1 ^{r_1}\cdots \widehat{y_k ^{r_k}}\cdots y_n ^{r_n}\notag
\end{align}
where $|x_i|=|x_i ^{-1}|=0$ and $|y_i|=1$ for $1\leq i\leq n$.
\end{teo*}
\section{Hochschild (Co)homology}\label{Hoch(co)}
Let $R$ be a commutative ring and let $A$ be an $R$-projective $R$-algebra with unit. Denote by $A^{op}$ the opposite algebra of $A$ and by $A^{e}$ the enveloping algebra $A\otimes A^{op}$. Recall that any $A$-bimodule can be considered as a left, or right, $A^{e}$-module. Let $M$ be an $A^{e}$-module. The \textit{Hochschild homology of $A$ with coefficients in $M$} is
$$
HH_{*}(A;M) := Tor_{*} ^{A^{e}}(A;M)
$$
and the \textit{Hochschild cohomology of $A$ with coefficients in $M$} is
$$
HH^{*}(A;M) := Ext^{*} _{A^{e}}(A;M)
$$
Besides the additive structure, the Hochschild cohomology $HH^{*}(A;A)$ has a graded algebra structure induced from the cup product defined over cochains by
\begin{equation}\label{usucup}
(f \smile g)(a_1, \dots ,a_{k+j}) = f (a_1 , \dots , a_{k})g(a_{k+1}, \dots , a_{k+j})
\end{equation}
where $f\in Hom(\bar{A}^k, A)$ and $g\in Hom(\bar{A}^j, A)$.
Since Hochschild cohomology can be computed using different resolutions, a more general notion of the cup product can be defined as follows. Let ${\ensuremath{\mathbb{P}}}(A)\xrightarrow{\mu}A$ be an $A^{e}$-projective resolution of $A$, and let $\Delta: {\ensuremath{\mathbb{P}}}(A) \rightarrow {\ensuremath{\mathbb{P}}}(A) \underset{A}\otimes {\ensuremath{\mathbb{P}}}(A)$ be a diagonal approximation map, i.e., an $A^{e}$-chain map such that $(\mu\otimes \mu)\circ \Delta=\mu$. If $M$ and $N$ are $A^e$-modules, the Hochschild cup product is defined by
\begin{align}
\smile: HH^{*}(A;M)\otimes HH^{*}(A;N) &\longrightarrow HH^{*}(A;M \underset{A}\otimes N) \notag \\
\alpha\otimes \beta &\longmapsto (-1)^{|\alpha||\beta|}(\alpha \underset{A}\otimes \beta)\Delta
\end{align}
Notice that if $M=A$ the cup product endows $HH^{*}(A;N)$ with the structure of $HH^{*}(A;A)$-module
$$
\xymatrix{
HH^{*}(A;A)\otimes HH^{*}(A;N) \ar[r]^(0.58){\smile} & HH^{*}(A;A \underset{A}\otimes N) \ar[r]^(0.56){\cong} & HH^{*}(A;N)
}
$$
and if $M=A=N$ the cup product is a product in $HH^{*}(A;A)$
$$
\xymatrix{
HH^{*}(A;A)\otimes HH^{*}(A;A) \ar[r]^(0.58){\smile} & HH^{*}(A;A \underset{A}\otimes A) \ar[r]^(0.56){\cong} & HH^{*}(A;A)
}
$$
which, as the following lemma shows, coincides with the one defined over the bar resolution.
\begin{remk}
The diagonal approximation map that recovers the cup product defined on the bar resolution is given by
\begin{align}
\Delta_{{\ensuremath{\mathbb{B}}}(A)}: {\ensuremath{\mathbb{B}}}(A) &\longrightarrow {\ensuremath{\mathbb{B}}}(A) \underset{A}\otimes {\ensuremath{\mathbb{B}}}(A) \notag \\
a_{0}\otimes \cdots \otimes a_{n+1} &\longmapsto \sum_{i=0}^{n} a_{0} \otimes \cdots \otimes a_{i}\otimes 1 \underset{A}\otimes 1 \otimes a_{i+1} \otimes \cdots \otimes a_{n+1}
\end{align}
\end{remk}
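In degree $0$, for instance, this map satisfies the required compatibility with $\mu$:
$$
(\mu \underset{A}\otimes \mu)\Delta_{{\ensuremath{\mathbb{B}}}(A)}(a_{0}\otimes a_{1})=(\mu \underset{A}\otimes \mu)(a_{0}\otimes 1 \underset{A}\otimes 1 \otimes a_{1})=a_{0}\underset{A}\otimes a_{1}
$$
which corresponds to $\mu(a_{0}\otimes a_{1})=a_{0}a_{1}$ under the identification $A\underset{A}\otimes A\cong A$.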
\begin{lem}
Let $A$ be an $R$-projective $R$-algebra. Then any Hochschild diagonal approximation map calculates the cup product in $HH^{*}(A;A)$.
\end{lem}
\begin{proof}
Let ${\ensuremath{\mathbb{P}}}(A)\xrightarrow{\mu}A$ be an $A^{e}$-projective resolution of $A$, and let $\Delta: {\ensuremath{\mathbb{P}}}(A) \rightarrow {\ensuremath{\mathbb{P}}}(A) \underset{A}\otimes {\ensuremath{\mathbb{P}}}(A)$ be a diagonal approximation map. We only need to prove that ${\ensuremath{\mathbb{P}}}(A) \underset{A}\otimes {\ensuremath{\mathbb{P}}}(A) \xrightarrow{\mu\otimes \mu}A$ is an $A^{e}$-projective resolution. Since
\[
\left({\ensuremath{\mathbb{P}}}(A) \underset{A}\otimes {\ensuremath{\mathbb{P}}}(A) \right)_{n}=\bigoplus_{i+j=n} P_i\underset{A}\otimes P_j
\]
and each $P_i$ is $A^{e}$-projective $(P_i\oplus Q\cong \oplus A^{e})$, it suffices to show that $A^{e}\underset{A}\otimes A^{e}$ is $A^{e}$-projective. By hypothesis $A$ is $R$-projective, and $A^{e}\underset{A}\otimes A^{e}\cong A^{e}\otimes A$ as $A^{e}$-modules, so $A^{e}\underset{A}\otimes A^{e}$ is $A^{e}$-projective.
Now, to see that the complex is acyclic, notice that each $P_i$ is $A$-projective because $A$ is $R$-projective, and $H_{*}({\ensuremath{\mathbb{P}}}(A))\cong A$ is $A$-free. Hence
\[
Tor^{A}_{p}(H_s({\ensuremath{\mathbb{P}}}(A));H_t({\ensuremath{\mathbb{P}}}(A)))=0 \qquad \forall p\geq 1
\]
and
\[
Tor^{A}_{0}(H_s({\ensuremath{\mathbb{P}}}(A));H_t({\ensuremath{\mathbb{P}}}(A)))=H_s({\ensuremath{\mathbb{P}}})\underset{A}\otimes H_t({\ensuremath{\mathbb{P}}})=\begin{cases}
A\underset{A}\otimes A &\text{if $s=t=0$} \notag\\
0 &\text{otherwise}
\end{cases}
\]
Applying the K\"{u}nneth spectral sequence, we get
$$
H_{*}({\ensuremath{\mathbb{P}}}\otimes_{A}{\ensuremath{\mathbb{P}}})\cong A\underset{A}\otimes A\cong A
$$
Since ${\ensuremath{\mathbb{P}}}(A)\xrightarrow{\mu}A$ and ${\ensuremath{\mathbb{P}}}(A) \underset{A}\otimes {\ensuremath{\mathbb{P}}}(A)\xrightarrow{\mu\otimes \mu}A$ are both $A^{e}$-projective resolutions of $A$, by the comparison theorem, $\Delta: {\ensuremath{\mathbb{P}}}(A) \rightarrow {\ensuremath{\mathbb{P}}}(A) \underset{A}\otimes {\ensuremath{\mathbb{P}}}(A)$ exists and it is unique up to homotopy. Therefore, the usual cup product given by the bar resolution (\ref{usucup}) coincides with any other cup product given by different resolutions and diagonal approximation maps.
\end{proof}
Recall that $HH^*(A;A)$ acts on $HH_*(A;A)$. For $n\geq m$, $f\in Hom(\bar{A}^m, A)$ and $a_1\otimes \cdots \otimes a_{n}\otimes a\in \bar{A}^n\otimes A$ the action is given by
$$
(a_1\otimes \cdots \otimes a_{n}\otimes a)\cdot f= (-1)^{nm} a_{m+1}\otimes \cdots \otimes a_n \otimes a f(a_1\otimes \cdots \otimes a_{m})
$$
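For instance, when $n=m$ the action lands in degree $0$ and, since $(-1)^{m^{2}}=(-1)^{m}$, it reduces to
$$
(a_1\otimes \cdots \otimes a_{m}\otimes a)\cdot f= (-1)^{m}\, a f(a_1\otimes \cdots \otimes a_{m})
$$
whose class lives in $HH_{0}(A;A)\cong A/[A,A]$.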
This action can be calculated over any resolution as follows
\begin{pro}\label{actionHH}
Let $A$ be an $R$-projective $R$-algebra and let $\Delta$ be any diagonal approximation map. The action of Hochschild cohomology on Hochschild homology is given by
\begin{align}
\rho: HH_{n}(A;A)\otimes HH^m(A;A)&\longrightarrow HH_{n-m}(A;A) \notag \\
(x \underset{A^e}\otimes a)\otimes f&\longmapsto (-1)^{nm} (f\underset{A}\otimes id)\Delta(x)\underset{A^e}\otimes a \notag
\end{align}
\end{pro}
\begin{proof}
Notice that $f$ is a cocycle iff the map $f:{\ensuremath{\mathbb{P}}}(A)\rightarrow A$ is a chain map. Then $\rho$ is well defined because $(f\underset{A}\otimes id)\Delta:{\ensuremath{\mathbb{P}}}(A)\rightarrow {\ensuremath{\mathbb{P}}}(A)$ is a chain map. Since any diagonal approximation map is unique up to homotopy, it is sufficient to prove that the formula coincides with the one given for the bar resolution. Let $1\otimes a_1\otimes \cdots \otimes a_n\otimes 1\underset{A^e}\otimes a\in B_{n}(A)$ and $f\in Hom_{A^e}(B_m(A),A)$. Then
\begin{align}
& (-1)^{nm} (f\underset{A}\otimes id)\Delta(1\otimes a_1\otimes \cdots \otimes a_n\otimes 1)\underset{A^e}\otimes a \notag \\
&=(-1)^{nm} (f\underset{A}\otimes id) \left( \sum_{i=0}^{n} 1 \otimes a_1\otimes \cdots \otimes a_{i}\otimes 1 \underset{A}\otimes 1 \otimes a_{i+1} \otimes \cdots \otimes a_{n} \otimes 1 \right) \underset{A^e}\otimes a\notag \\
&=(-1)^{nm} f(1 \otimes a_1\otimes \cdots \otimes a_{m}\otimes 1 ) \underset{A}\otimes 1 \otimes a_{m+1} \otimes \cdots \otimes a_{n} \otimes 1 \underset{A^e}\otimes a \notag
\end{align}
\end{proof}
In \cite{Ger}, Gerstenhaber proves that the cup product on Hochschild cohomology is graded commutative and that there exists a Lie bracket that endows $HH^{*}(A;A)$ with the structure of a graded Lie algebra. The Gerstenhaber bracket on $HH^{*}(A;A)$, using the bar resolution, is defined as follows
\[
\lbrace f, g \rbrace = f \circ g - (-1)^{(|f|-1)(|g|-1)} g \circ f
\]
where $\circ$ is defined by
\begin{align}\label{Bracket}
(f \circ g)(a_1\otimes \cdots \otimes a_{k+j-1}) &= \notag \\
\sum_{i=1}^{k}(-1)^{(j-1)(i-1)} f (a_1\otimes \cdots &\otimes a_{i-1}\otimes g(a_i\otimes \cdots \otimes a_{i+j-1})\otimes a_{i+j} \otimes \dots \otimes a_{k+j-1}) \notag
\end{align}
The cup product and the bracket satisfy the following compatibility conditions.
\begin{defi}\label{DefGersalg}
A \textit{Gerstenhaber algebra} is a graded commutative algebra $A$ with a linear map $\left\{-,-\right\}: A_{i} \otimes A_{j}\rightarrow A_{i+j-1}$ of degree $-1$ such that
\begin{enumerate}
\item The bracket $\left\{-,-\right\}$ endows $A$ with a structure of graded Lie algebra of degree $1$, i.e., for all $a,b$ and $c\in A$
\begin{align}
&\left\{a,b\right\}=-(-1)^{(\left|a\right|+1)(\left|b\right|+1)}\left\{b,a\right\} \notag \\
&\left\{a,\left\{b,c\right\}\right\}=\left\{\left\{a,b\right\},c\right\}+(-1)^{(\left|a\right|+1)(\left|b\right|+1)}\left\{b,\left\{a,c\right\}\right\} \notag
\end{align}
\item The product and the Lie bracket satisfy the Poisson identity, i.e., for all $a,b$ and $c\in A$
\[
\left\{a,bc\right\}=\left\{a,b\right\}c+(-1)^{(\left|a\right|+1)\left|b\right|}b\left\{a,c\right\}
\]
\end{enumerate}
\end{defi}
If a Gerstenhaber algebra carries a differential of degree $-1$ such that the Gerstenhaber bracket is the obstruction to this operator being a graded derivation of the product, then the Gerstenhaber algebra is called a Batalin-Vilkovisky algebra.
\begin{defi}\label{DefBValg}
A \textit{Batalin-Vilkovisky algebra} is a Gerstenhaber algebra $A$ with a linear map of degree $-1$, $\Delta: A_{i} \rightarrow A_{i-1}$ such that $\Delta \circ \Delta=0$ and
\[
\left\{a,b\right\}=-(-1)^{\left|a\right|}(\Delta(ab)-\Delta(a)b-(-1)^{\left|a\right|}a\Delta(b))
\]
for all $a$ and $b\in A$.
\end{defi}
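For instance, if $a$ has degree $1$, the identity reads
$$
\left\{a,b\right\}=\Delta(ab)-\Delta(a)b+a\Delta(b)
$$
so the bracket measures exactly the failure of $\Delta$ to be a graded derivation of the product.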
The way to construct BV-structures on Hochschild cohomology is by dualizing or transferring the Connes $B$-operator.
\begin{defi}\label{BConnesHH}
Let $A$ be a unital algebra. The {\it Connes $B$-operator} is a map on Hochschild homology defined on normalized chains as follows
\begin{align}
&B_n: \bar{A}^n \otimes A \longrightarrow \bar{A}^{n+1} \otimes A \notag \\
&B_n(a_1\otimes \cdots \otimes a_{n}\otimes a)= \sum^{n}_{i=0} (-1)^{in} a_i\otimes \cdots \otimes a_{n}\otimes a\otimes a_1 \otimes \cdots \otimes a_{i-1}\otimes 1
\end{align}
\end{defi}
The dual of this operator
$$
B^{\vee} : Hom(\bar{A}^{*+1}\otimes A, R)\rightarrow Hom(\bar{A}^{*}\otimes A,R)
$$
defines by adjunction an operator on $Hom(\bar{A}^{*},A^{\vee})\cong Hom(\bar{A}^{*}\otimes A,R)$, where $A^{\vee}=Hom(A,R)$. When $A$ is a symmetric algebra, the non-degenerate bilinear form of $A$ induces a chain complex isomorphism
\[
Hom(\bar{A}^{*},A^{\vee})\cong Hom(\bar{A}^{*},A)
\]
which defines a BV-operator, $\Delta$, on the Hochschild cochains.
\begin{defi}
Let $A$ be a finitely generated projective $R$-algebra. $A$ is called a {\it Frobenius algebra} if there exists an isomorphism of left, or right, $A$-modules
$$
\varphi: A \xrightarrow{\cong} A^{\vee}=Hom_R (A,R)
$$
If the isomorphism is of $A^e$-modules, $A$ is called a {\it symmetric algebra}.
\end{defi}
\begin{remk}
Given a Frobenius algebra $A$, one can define a non-degenerate bilinear form,
$$
\langle \cdot , \cdot \rangle : A\otimes A \longrightarrow R
$$
as follows
$$
\langle a, b\rangle :=
\begin{cases}
\varphi(b)(a)=\varphi(1)(ab) &\text{if $\varphi$ is a left isomorphism} \notag \\
\varphi(a)(b)=\varphi(1)(ab) &\text{if $\varphi$ is a right isomorphism} \end{cases}
$$
Notice that the pairing is associative
\begin{align}
\langle ab,c\rangle=\varphi(c)(ab)=\varphi(1)(abc)=\varphi(bc)(a)=\langle a,bc\rangle \quad &\text{($\varphi$ left isomorphism)}\notag \\
\langle ab,c\rangle=\varphi(ab)(c)=\varphi(1)(abc)=\varphi(a)(bc)=\langle a,bc\rangle \quad &\text{($\varphi$ right isomorphism)}\notag
\end{align}
Moreover, if $\varphi$ is a two sided isomorphism the pairing is symmetric
$$
\langle a,b\rangle=\varphi(a)(b)=\varphi(1)(ab)=\varphi(1)(ba)=\varphi(b)(a)=\langle b,a\rangle
$$
\end{remk}
From now on, an associative non-degenerate bilinear form will be called a \textit{Frobenius form}. As in the case over fields, Frobenius algebras over commutative rings can be characterized by Frobenius forms.
\begin{pro}
A finitely generated projective $R$-algebra $A$ is Frobenius if and only if it admits a Frobenius form, and it is symmetric if and only if it admits a Frobenius form which is also symmetric.
\end{pro}
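A standard example is the matrix algebra $M_{n}(R)$, which is symmetric with Frobenius form given by the trace,
$$
\langle a, b\rangle := \mathrm{tr}(ab)
$$
Indeed, on the basis of elementary matrices $\mathrm{tr}(E_{ij}E_{kl})=\delta_{jk}\delta_{il}$, so the form is non-degenerate with dual basis $E_{ij}^{\vee}=E_{ji}$, and it is associative and symmetric since $\mathrm{tr}(ab)=\mathrm{tr}(ba)$.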
\begin{exa}\label{frob}
Let $R$ be a commutative ring. If $G$ is a finite group then the group ring $R\left[G\right]$ is a symmetric algebra with Frobenius form given by
$$
\langle \cdot,\cdot \rangle: R\left[G\right]\times R\left[G\right] \longrightarrow R, \qquad \qquad \langle g, h \rangle = \begin{cases}
1 &\text{if $g=h^{-1}$} \notag\\
0 &\text{otherwise}
\end{cases}
$$
Notice that the Frobenius form of the group ring $R\left[G\right]$ can equivalently be defined using the canonical augmentation of the group ring,
$$
\langle a, b \rangle:=\varepsilon(ab)
$$
where
$$
\varepsilon: R\left[G\right] \longrightarrow R, \quad \quad \varepsilon \left( \sum_{g\in G} \alpha_{_{g}} g \right)=\alpha_{e}
$$
\end{exa}
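Both descriptions agree: $\varepsilon(gh)=1$ precisely when $gh=e$, that is, when $g=h^{-1}$, and $\varepsilon(gh)=0$ otherwise. Moreover, since $gh=e$ if and only if $hg=e$,
$$
\langle g, h \rangle=\varepsilon(gh)=\varepsilon(hg)=\langle h, g \rangle
$$
so the form is symmetric, confirming that $R\left[G\right]$ is symmetric and not merely Frobenius.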
In the case when $A$ is a symmetric algebra, the BV-operator, $\Delta$, is defined as follows
\begin{pro}
The operator $\Delta: Hom(\bar{A}^{m+1},A) \rightarrow Hom(\bar{A}^{m},A)$ is given by
$$
\Delta(f)(a_1,\dots ,a_m)= \sum_{j=1} ^{N} \sum_{i=0} ^{m} (-1)^{im} \langle 1, f(a_i,\dots ,a_m,a^j,a_1,\dots ,a_{i-1})\rangle {a^{j}}^{\vee}
$$
where $\lbrace a^1,\dots ,a^N\rbrace$ is a basis of $A$ and $\lbrace {a^1}^{\vee},\dots ,{a^N}^{\vee}\rbrace$ is the dual basis with respect to the Frobenius form.
\end{pro}
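In the lowest degree, $m=0$, the formula reduces to
$$
\Delta(f)=\sum_{j=1} ^{N} \langle 1, f(a^{j})\rangle\, {a^{j}}^{\vee}
$$
for $f\in Hom(\bar{A},A)$, where the resulting $0$-cochain is viewed as an element of $A$.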
In \cite{Tradler}, Tradler proves that $\Delta$ induces a BV-structure on $HH^{*}(A;A)$, which furthermore induces the Gerstenhaber structure of $HH^{*}(A;A)$.
\begin{teo}[\cite{Tradler}, \cite{M3}]\label{HHBV}
Let $A$ be a symmetric $R$-algebra. Then $HH^*(A,A)$ is a BV-algebra with $\Delta$ given by the dual of the Connes operator.
\end{teo}
When the algebra is not symmetric but satisfies some sort of Poincaré duality, it is possible to obtain a BV-algebra structure on Hochschild cohomology by transferring the Connes operator.
\begin{teo}[\cite{Ginz}, \cite{M2}]\label{HHBVact}
Let $a\in HH_{n}(A,A)$ such that
\begin{align}
\rho_a:HH^*(A;A)&\longrightarrow HH_{n-*}(A;A) \notag \\
b&\longmapsto \rho(a\otimes b) \notag
\end{align}
is an isomorphism. If $B(a)=0$ then $HH^*(A,A)$ is a BV-algebra with $\Delta$ given by $\Delta_a:=\rho_{a}^{-1}B\rho_{a}$.
\end{teo}
\section{Hochschild (Co)homology for Tensor Products}\label{TensorHH(co)}
In \cite{Tensor}, Le and Zhou prove the following
\begin{teo}[\cite{Tensor} Theorem 3.3]\label{AxB}
Let $R$ be a field and $A$ and $B$ be two $R$-algebras such that one of them is finite dimensional. Then there is an isomorphism of Gerstenhaber algebras
$$
HH^{*}(A\otimes B; A\otimes B) \cong HH^{*}(A; A)\otimes HH^{*}(B; B)
$$
If furthermore, $A$ and $B$ are finite dimensional symmetric algebras, the above isomorphism becomes an isomorphism of Batalin-Vilkovisky algebras.
\end{teo}
In this section, we extend their result to a general class of rings and present an analogue for algebras that satisfy some sort of Poincaré duality.
\begin{pro}
Let $A$ and $B$ be $R$-projective $R$-algebras with $R$ a commutative ring. Suppose that ${\ensuremath{\mathbb{P}}}(A)\rightarrow A$ is an $A^{e}$-projective resolution of $A$ and ${\ensuremath{\mathbb{P}}}(B)\rightarrow B$ is a $B^{e}$-projective resolution of $B$. Then
\[
{\ensuremath{\mathbb{P}}}(A\otimes B):={\ensuremath{\mathbb{P}}}(A)\otimes {\ensuremath{\mathbb{P}}}(B) \longrightarrow A\otimes B
\]
is an $(A\otimes B)^{e}$-projective resolution of $A\otimes B$.
\end{pro}
\begin{proof}
Since
\[
{\ensuremath{\mathbb{P}}}_n(A\otimes B)=\bigoplus_{i+j=n} P_i(A)\otimes P_j(B)
\]
and $A^{e}\otimes B^{e}\cong (A\otimes B)^{e}$, ${\ensuremath{\mathbb{P}}}(A)\otimes {\ensuremath{\mathbb{P}}}(B)\rightarrow A\otimes B$ is a complex of $(A\otimes B)^{e}$-projective modules over $A\otimes B$. It only remains to check that the complex is acyclic. Since $H_{*}({\ensuremath{\mathbb{P}}}(A))\cong A$ and $H_{*}({\ensuremath{\mathbb{P}}}(B))\cong B$ are $R$-projective, we have
\[
Tor^{R}_{p}(H_s({\ensuremath{\mathbb{P}}}(A));H_t({\ensuremath{\mathbb{P}}}(B)))=0 \qquad \forall p\geq 1
\]
and
\[
Tor^{R}_{0}(H_s({\ensuremath{\mathbb{P}}}(A));H_t({\ensuremath{\mathbb{P}}}(B)))=H_s({\ensuremath{\mathbb{P}}}(A))\otimes H_t({\ensuremath{\mathbb{P}}}(B))=\begin{cases}
A\otimes B &\text{if $s=t=0$} \notag\\
0 &\text{otherwise}
\end{cases}
\]
Applying the K\"{u}nneth spectral sequence, we have
$$
H_{*}({\ensuremath{\mathbb{P}}}(A) \otimes {\ensuremath{\mathbb{P}}}(B))\cong A\otimes B
$$
Therefore, ${\ensuremath{\mathbb{P}}}(A)\otimes {\ensuremath{\mathbb{P}}}(B)\rightarrow A\otimes B$ is an $(A\otimes B)^{e}$-projective resolution of $A\otimes B$.
\end{proof}
\begin{pro}
The following map is an isomorphism of complexes
\begin{align}
\tau: ({\ensuremath{\mathbb{P}}}(A)\underset{A}\otimes {\ensuremath{\mathbb{P}}}(A))\otimes ({\ensuremath{\mathbb{P}}}(B)\underset{B}\otimes {\ensuremath{\mathbb{P}}}(B)) &\longrightarrow {\ensuremath{\mathbb{P}}}(A\otimes B) \underset{A\otimes B}\otimes {\ensuremath{\mathbb{P}}}(A\otimes B) \notag \\
a_1\underset{A}\otimes a_2 \otimes b_1\underset{B}\otimes b_2 &\longmapsto (-1)^{|a_2||b_1|}a_1\otimes b_1 \underset{A\otimes B}\otimes a_2\otimes b_2\notag
\end{align}
\end{pro}
\begin{proof}
Let $a_1, a_2\in {\ensuremath{\mathbb{P}}}(A)$ and $b_1,b_2\in {\ensuremath{\mathbb{P}}}(B)$ with $|a_1|=i$, $|a_2|=j$, $|b_1|=k$ and $|b_2|=l$.
\begin{align}
\tau \delta_n(a_1 \underset{A}\otimes a_2 \otimes b_1\underset{B}\otimes &b_2) = \tau ( \partial^{A}_{i+j}(a_1 \underset{A}\otimes a_2)\otimes b_1\underset{B}\otimes b_2 \notag \\
&\quad + (-1)^{i+j}a_1\underset{A}\otimes a_2 \otimes \partial^{B}_{k+l}(b_1\underset{B}\otimes b_2) ) \notag \\
&= \tau((d^A_i(a_1)\underset{A}\otimes a_2 + (-1)^i a_1\underset{A}\otimes d^A_j(a_2))\otimes b_1\underset{B}\otimes b_2 \notag \\
&\quad + (-1)^{i+j}a_1\underset{A}\otimes a_2 \otimes (d^B_k(b_1)\underset{B}\otimes b_2 + (-1)^k b_1\underset{B}\otimes d^B_l(b_2))) \notag \\
&= (-1)^{kj} d^A_i(a_1)\otimes b_1 \underset{A\otimes B}\otimes a_2\otimes b_2 \notag \\
&\quad + (-1)^{i+ k(j-1)} a_1\otimes b_1 \underset{A\otimes B}\otimes d^A_j(a_2)\otimes b_2 \notag \\
&\quad + (-1)^{i+ kj} a_1\otimes d^B_k(b_1) \underset{A\otimes B}\otimes a_2\otimes b_2 \notag \\
&\quad + (-1)^{i+j+k(j+1)} a_1\otimes b_1 \underset{A\otimes B}\otimes a_2\otimes d^B_l(b_2) \notag \\
&= (-1)^{kj}((d^A_i(a_1)\otimes b_1 + (-1)^{i} a_1\otimes d^B_k(b_1))\underset{A\otimes B}\otimes a_2\otimes b_2) \notag \\
&\quad + (-1)^{kj+i+k} a_1\otimes b_1 \underset{A\otimes B}\otimes (d^A_j(a_2)\otimes b_2 + (-1)^{j} a_2\otimes d^B_l(b_2)) \notag \\
&= (-1)^{kj}(d^{\otimes}_{i+k}(a_1\otimes b_1)\underset{A\otimes B}\otimes a_2\otimes b_2 \notag \\
&\quad + (-1)^{i+k}a_1\otimes b_1\underset{A\otimes B}\otimes d^{\otimes}_{j+l}(a_2\otimes b_2))\notag \\
&= (-1)^{kj}\partial^{\otimes}_n(a_1\otimes b_1 \underset{A\otimes B}\otimes a_2\otimes b_2) \notag \\
&= \partial^{\otimes}_n \tau(a_1\underset{A}\otimes a_2 \otimes b_1\underset{B}\otimes b_2) \notag
\end{align}
Therefore, $\tau$ is a map of complexes, and it is clearly an isomorphism in each degree, since the inverse of $\tau$ is $\tau$ itself.
\end{proof}
\begin{pro}\label{TenDiag}
Let $\Delta^{A}:{\ensuremath{\mathbb{P}}}(A)\rightarrow {\ensuremath{\mathbb{P}}}(A)\underset{A}\otimes {\ensuremath{\mathbb{P}}}(A)$ and $\Delta^{B}:{\ensuremath{\mathbb{P}}}(B)\rightarrow {\ensuremath{\mathbb{P}}}(B)\underset{B}\otimes {\ensuremath{\mathbb{P}}}(B)$ be diagonal approximation maps. Then
\[
\Delta: {\ensuremath{\mathbb{P}}}(A\otimes B) \xrightarrow{\Delta^{A}\otimes \Delta^{B}} ({\ensuremath{\mathbb{P}}}(A)\underset{A}\otimes {\ensuremath{\mathbb{P}}}(A))\otimes ({\ensuremath{\mathbb{P}}}(B)\underset{B}\otimes {\ensuremath{\mathbb{P}}}(B)) \xrightarrow{\tau} {\ensuremath{\mathbb{P}}}(A\otimes B)\underset{A\otimes B}\otimes {\ensuremath{\mathbb{P}}}(A\otimes B)
\]
is a diagonal approximation map for $A\otimes B$.
\end{pro}
\begin{proof}
Let $a\in {\ensuremath{\mathbb{P}}}(A)$ and $b\in {\ensuremath{\mathbb{P}}}(B)$ with $|a|=i$ and $|b|=j$
\begin{align}
\partial^{\otimes}_{i+j}\Delta_{i+j}(a\otimes b)&= \partial^{\otimes}_{i+j}\tau(\Delta^{A}_i(a)\otimes \Delta^{B}_j(b)) = \tau \delta_{i+j}(\Delta^{A}_i(a)\otimes \Delta^{B}_j(b)) \notag \\
&= \tau (\partial^{A}_{i}\Delta^{A}_i(a)\otimes \Delta^{B}_j(b) + (-1)^i \Delta^{A}_i(a)\otimes \partial^{B}_{j}\Delta^{B}_j(b)) \notag \\
&= \tau (\Delta^{A}_{i-1}d^{A}_{i}(a)\otimes \Delta^{B}_j(b) + (-1)^i \Delta^{A}_i(a)\otimes \Delta^{B}_{j-1}d^{B}_{j}(b)) \notag \\
&= \Delta_{i+j-1}(d^{A}_{i}(a)\otimes b + (-1)^i a\otimes d^{B}_{j}(b)) \notag \\
&= \Delta_{i+j-1} d^{\otimes}_{i+j}(a\otimes b) \notag
\end{align}
For $|a|=|b|=0$, we have
\[
((\mu_A\otimes\mu_B)\underset{A\otimes B}\otimes (\mu_A\otimes\mu_B))\tau(\Delta^{A}_0\otimes \Delta^{B}_0)(a\otimes b)= (\mu_A\otimes\mu_B)(a\otimes b)
\]
\end{proof}
\begin{teo}\label{injection}
Let $A$ and $B$ be $R$-projective $R$-algebras with $R$ a commutative hereditary ring. Suppose that ${\ensuremath{\mathbb{P}}}(A)\rightarrow A$ is a resolution of $A$ by finitely generated projective $A^{e}$-modules and ${\ensuremath{\mathbb{P}}}(B)\rightarrow B$ is a $B^{e}$-projective resolution of $B$ such that
\begin{equation}\label{isoResAB}
Hom_{(A\otimes B)^{e}}({\ensuremath{\mathbb{P}}}(A\otimes B), A\otimes B)\cong Hom_{A^{e}}({\ensuremath{\mathbb{P}}}(A), A)\otimes Hom_{B^{e}}({\ensuremath{\mathbb{P}}}(B), B)
\end{equation}
Then
\[
HH^{*}(A; A)\otimes HH^{*}(B;B) \hookrightarrow HH^{*}(A\otimes B; A\otimes B)
\]
is an injection of graded algebras.
\end{teo}
\begin{proof}
By K\"{u}nneth theorem, there is an injective map of modules. Let $\Delta^{A}:{\ensuremath{\mathbb{P}}}(A)\rightarrow {\ensuremath{\mathbb{P}}}(A)\underset{A}\otimes {\ensuremath{\mathbb{P}}}(A)$ and $\Delta^{B}:{\ensuremath{\mathbb{P}}}(B)\rightarrow {\ensuremath{\mathbb{P}}}(B)\underset{B}\otimes {\ensuremath{\mathbb{P}}}(B)$ be diagonal approximation maps. By proposition \ref{TenDiag}, $\Delta=\tau (\Delta^{A}\otimes \Delta^{B})$ is a diagonal approximation map for ${\ensuremath{\mathbb{P}}}(A\otimes B)$. Let $f,f'\in Hom_{A^{e}}({\ensuremath{\mathbb{P}}}(A), A)$ and $g,g'\in Hom_{B^{e}}({\ensuremath{\mathbb{P}}}(B), B)$. Notice that the following diagram commutes
\[
\xymatrix{
{\ensuremath{\mathbb{P}}}(A)\otimes {\ensuremath{\mathbb{P}}}(B) \ar[d]_{\Delta^A\otimes \Delta^B} & \\
{\ensuremath{\mathbb{P}}}(A)\underset{A}\otimes {\ensuremath{\mathbb{P}}}(A)\otimes {\ensuremath{\mathbb{P}}}(B)\underset{B}\otimes {\ensuremath{\mathbb{P}}}(B) \ar[d]_{(-1)^{|f'||g|}f\underset{A}\otimes f'\otimes g\underset{B}\otimes g'} \ar[r]^-{\tau} & \left({\ensuremath{\mathbb{P}}}(A\otimes B)\right)\underset{A\otimes B}\otimes \left({\ensuremath{\mathbb{P}}}(A \otimes B)\right) \ar[d]^{f\otimes g\underset{A\otimes B}\otimes f'\otimes g'} \\
A\underset{A}\otimes A\otimes B\underset{B}\otimes B \ar[r]^{\tau} & A\otimes B \underset{A\otimes B}\otimes A\otimes B
}
\]
Therefore,
\begin{align}
((f\otimes g)\smile (f'\otimes g')) &= (-1)^{(|f|+|g|)(|f'|+|g'|)}((f\otimes g)\underset{A\otimes B}\otimes (f'\otimes g'))\Delta \notag \\
&= (-1)^{|f'||g|+|f||f'|+|g||g'|} ((f \underset{A}\otimes f')\Delta^{A}) \otimes ((g \underset{B} \otimes g')\Delta^{B})\notag \\
&= (-1)^{|f'||g|}(f \smile_A f') \otimes (g \smile_B g') \notag
\end{align}
\end{proof}
\begin{cor}\label{isoalgebras}
Under the same hypotheses as in theorem \ref{injection}, if $HH^{*}(A; A)$, or $HH^{*}(B;B)$, is $R$-projective, then
\[
HH^{*}(A; A)\otimes HH^{*}(B;B) \cong HH^{*}(A\otimes B; A\otimes B)
\]
as graded algebras.
\end{cor}
\begin{proof}
By K\"{u}nneth Theorem, there is an isomorphim of modules
\begin{align}
HH^{n}(A\otimes B; A\otimes B)\cong &\bigoplus_{r+s=n} HH^{r}(A; A)\otimes HH^{s}(B;B) \notag \\
&\oplus \bigoplus_{r+s=n+1} Tor_1 ^{R} (HH^{r}(A; A), HH^{s}(B;B)) \notag
\end{align}
Since $HH^{*}(A; A)$, or $HH^{*}(B;B)$, is $R$-projective, the $Tor_1$ term vanishes and the proof of Theorem \ref{injection} extends to an isomorphism of graded algebras.
\end{proof}
\begin{defi}
A $(p,q)${\it -shuffle} is a sequence of integers
$$
[i_1\cdots i_{p}|j_1\cdots j_q]
$$
represented by a permutation $\sigma\in S_{p+q}$, such that
$$
\sigma(1)=i_1<\cdots <i_p=\sigma(p) \quad \text{and} \quad \sigma(p+1)=j_1<\cdots <j_q=\sigma(p+q)
$$
The sign of a $(p,q)$-shuffle is defined by
$$
|\sigma|:=|\lbrace (i,j)|1\leq i<j\leq p+q \text{ and } \sigma(i)>\sigma(j)\rbrace|
$$
The set of $(p,q)$-shuffles will be denoted by $S_{p,q}$.
\end{defi}
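For example, the $(2,1)$-shuffles in $S_{3}$ are
$$
[12|3], \qquad [13|2], \qquad [23|1]
$$
with signs $0$, $1$ and $2$ respectively, so they contribute to shuffle sums with signs $+$, $-$ and $+$.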
\begin{defi}
The {\it Alexander-Whitney map} $AW: {\ensuremath{\mathbb{B}}}(A\otimes B) \rightarrow {\ensuremath{\mathbb{B}}}(A)\otimes {\ensuremath{\mathbb{B}}}(B)$ is defined as follows
\begin{align}
AW_0(a_1\otimes b_1\otimes a_2\otimes b_2) &= a_1\otimes a_2\otimes b_1\otimes b_2 \notag \\
AW_r(1\otimes 1 \otimes a_1\otimes b_1\otimes \cdots \otimes a_r\otimes b_r\otimes 1\otimes 1)&= \notag \\
\sum_{t=0}^{r} (-1)^{t(r-t)}a_1a_2\cdots a_t\otimes a_{t+1}\otimes \cdots \otimes a_r &\otimes 1\otimes 1\otimes b_1\otimes \cdots \otimes b_t\otimes b_{t+1}\cdots b_r \notag
\end{align}
for $r\geq 1$, and by convention for $t=0$, $a_1\cdots a_t =1$ and for $t=r$, $b_{t+1}\cdots b_r =1$.
The {\it Eilenberg-Zilber map} $EZ: {\ensuremath{\mathbb{B}}}(A)\otimes {\ensuremath{\mathbb{B}}}(B) \rightarrow {\ensuremath{\mathbb{B}}}(A\otimes B)$ is defined as follows
\begin{align}
EZ_0(a_1\otimes a_2\otimes b_1\otimes b_2) &= a_1\otimes b_1\otimes a_2\otimes b_2 \notag \\
EZ_r(1\otimes a_1\otimes \cdots \otimes a_{r-t}\otimes 1\otimes 1 \otimes b_1\otimes \cdots \otimes b_t \otimes 1)&= \notag \\
\sum_{\sigma\in S_{r-t,t}} (-1)^{|\sigma|} 1\otimes 1\otimes F(x_{\sigma^{-1}(1)})&\otimes \cdots \otimes F(x_{\sigma^{-1}(r)}) \otimes 1\otimes 1 \notag
\end{align}
for $r\geq 1$, where $F(a)=a\otimes 1$ and $F(b)=1\otimes b$.
\end{defi}
\begin{remk}\label{AWEZid}
These two maps give a homotopy equivalence of complexes. Moreover,
$$
AWEZ=id \quad \text{and} \quad EZAW\simeq id
$$
\end{remk}
\begin{pro}
The maps induced by $AW$ and $EZ$ on Hochschild chains are
\begin{align}
&\overline{AW}_n: (A\otimes B)^n \otimes A\otimes B \longrightarrow \displaystyle \bigoplus_{i+j=n} A^i \otimes A \otimes B^j \otimes B \notag \\
&\overline{AW}_0\equiv id \notag \\
&\overline{AW}_n((a_1\otimes b_1\otimes \cdots \otimes a_n \otimes b_n) \otimes a\otimes b)= \notag \\
&\qquad \sum_{k=0}^{n} (-1)^{k(n-k)}(a_{k+1}\otimes \cdots \otimes a_n\otimes a a_1a_2\cdots a_k) \otimes (b_1\otimes \cdots \otimes b_k\otimes b_{k+1}\cdots b_n b) \notag \\
&\overline{EZ}_n: \displaystyle \bigoplus_{i+j=n} A^i\otimes A \otimes B^j\otimes B \longrightarrow (A\otimes B)^n\otimes A\otimes B \notag \\
&\overline{EZ}_0\equiv id \notag \\
&\overline{EZ}_n((a_1\otimes \cdots \otimes a_{n-t}\otimes a) \otimes ( b_1\otimes \cdots \otimes b_t \otimes b))= \notag \\
&\qquad \sum_{\sigma\in S_{n-t,t}} (-1)^{|\sigma|} \left( F(x_{\sigma^{-1}(1)})\otimes \cdots \otimes F(x_{\sigma^{-1}(n)})\right)\otimes a\otimes b \notag
\end{align}
\end{pro}
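In degree $n=1$, for instance, these formulas evaluate to
\begin{align}
\overline{AW}_1((a_1\otimes b_1)\otimes a\otimes b)&=(a_1\otimes a)\otimes(b_1 b)+(a a_1)\otimes (b_1\otimes b) \notag \\
\overline{EZ}_1((a_1\otimes a)\otimes b)&=(a_1\otimes 1)\otimes a\otimes b \notag \\
\overline{EZ}_1(a\otimes (b_1\otimes b))&=(1\otimes b_1)\otimes a\otimes b \notag
\end{align}
where, in the first line, the two summands lie in $A^{1}\otimes A\otimes B^{0}\otimes B$ and $A^{0}\otimes A\otimes B^{1}\otimes B$ respectively.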
The Connes $B$-operator on the Hochschild homology of the tensor product of two algebras satisfies the following equation
\begin{pro}\label{TensorConnes}
$
\overline{AW}B^{^{A\otimes B}}\overline{EZ}=B^{^{A}}\otimes id + id\otimes B^{^{B}}
$
where the tensor product of operators is understood with Koszul signs, i.e., $(id\otimes B^{^{B}})(x\otimes y)=(-1)^{|x|}x\otimes B^{^{B}}(y)$.
\end{pro}
\begin{proof}
Let $(a_1\otimes \cdots \otimes a_{n-t}\otimes a) \otimes (b_1\otimes \cdots \otimes b_t\otimes b)\in A^{n-t}\otimes A\otimes B^t\otimes B$. Applying $B^{^{A\otimes B}}_n\overline{EZ}_n$, we get
$$
\sum_{\sigma\in S_{n-t,t}} \sum^n_{i=0}(-1)^{|\sigma|+in} F_i\otimes \cdots \otimes F_n \otimes a\otimes b \otimes F_1 \otimes \cdots \otimes F_{i-1}\otimes 1\otimes 1
$$
where $F_i=F(x_{\sigma^{-1}(i)})$ for $1\leq i\leq n$. Reordering the inner sum, we get
\begin{align}
\sum_{\sigma} \Bigg( &(-1)^{|\sigma|}a\otimes b \otimes F_1 \otimes \cdots \otimes F_{n}\otimes 1\otimes 1 \tag*{\circled{1}} \\
&+ \sum^{n-t}_{i=1}(-1)^{|\sigma|+in} F_i\otimes \cdots \otimes F_n \otimes a\otimes b \otimes F_1 \otimes \cdots \otimes F_{i-1}\otimes 1\otimes 1 \tag*{\circled{2}} \\
&+(-1)^{|\sigma|+(n-t+1)n} F_{n-t+1}\otimes \cdots \otimes F_n \otimes a\otimes b \otimes F_1 \otimes \cdots \otimes F_{n-t}\otimes 1\otimes 1 \tag*{\circled{3}} \\
&+ \sum^t_{i=2}(-1)^{|\sigma|+(n-t+i)n} F_{n-t+i}\otimes \cdots \otimes F_n\otimes \notag \\
& \qquad \qquad\otimes a\otimes b \otimes F_1 \otimes \cdots \otimes F_{n-t+i-1}\otimes 1\otimes 1 \Bigg) \tag*{\circled{4}}
\end{align}
Consider the following permutations
\begin{align}
\sigma_i &=
\begin{pmatrix}
1 & \cdots & i-1 & i & \cdots & n-t & n-t+1 & \cdots & n \\
1 & \cdots & i-1 & i+t & \cdots & n & i & \cdots & i+t-1
\end{pmatrix} \notag \\
\tag*{$1\leq i\leq n-t$} \\
\tilde{\sigma}_j &=
\begin{pmatrix}
1 & \cdots & n-t & n-t+1 & \cdots & n-t+j-1 & n-t+j & \cdots & n \\
j & \cdots & n-t+j-1 & 1 & \cdots & j-1 & n-t+j & \cdots & n
\end{pmatrix} \notag \\
\tag*{$2\leq j\leq t$}
\end{align}
Notice that $|\sigma_i|=(n-t-i+1)t$ and $|\tilde{\sigma}_j|=(n-t)(j-1)$. Now, applying $\overline{AW}_{n+1}$ to \circled{1}, the only non-zero term arises when $\sigma=\sigma_1$ and $k=0$
$$
\overline{AW}_{n+1}(\circled{1})=(-1)^{n-t}(a_1\otimes \cdots \otimes a_{n-t}\otimes a) \otimes (b\otimes b_1\otimes \cdots \otimes b_t\otimes 1)
$$
Applying $\overline{AW}_{n+1}$ to \circled{2} the only non-zero terms arise when $\sigma=\sigma_i$ for $1\leq i\leq n-t$ and $k=t$
\begin{align}
\overline{AW}_{n+1}(\circled{2})&=\sum^{n-t}_{i=1} (-1)^{i(n-t)} (a_i\otimes \cdots \otimes a_{n-t}\otimes a\otimes a_1\otimes \cdots \otimes a_{i-1}\otimes 1)\notag \\
&\qquad \qquad \otimes (b_1\otimes \cdots \otimes b_t\otimes b) \notag
\end{align}
Applying $\overline{AW}_{n+1}$ to \circled{3} the only non-zero terms arise when $\sigma=\sigma_i$ for $1\leq i\leq n-t$, $k=t$ and $k=t+1$
\begin{align}
\overline{AW}_{n+1}(\circled{3})=& (a\otimes a_1\otimes \cdots \otimes a_{n-t}\otimes 1) \otimes (b_1\otimes \cdots \otimes b_t\otimes b) \notag \\
&+(-1)^n(a_1\otimes \cdots \otimes a_{n-t}\otimes a) \otimes (b_1\otimes \cdots \otimes b_t\otimes b\otimes 1) \notag
\end{align}
Applying $\overline{AW}_{n+1}$ to \circled{4} the only non-zero terms arise when $\sigma=\tilde{\sigma}_i$ for $2\leq i\leq t$ and $k=t+1$
\begin{align}
\overline{AW}_{n+1}(\circled{4})&=\sum^{t}_{i=2} (-1)^{it+n-t} (a_1\otimes \cdots \otimes a_{n-t}\otimes a) \notag \\
&\qquad \qquad \otimes (b_i\otimes \cdots \otimes b_{t}\otimes b\otimes b_1\otimes \cdots \otimes b_{i-1}\otimes 1) \notag
\end{align}
Applying $B^{^{A}}_{n-t}\otimes id + (-1)^{n-t} id\otimes B^{^{B}}_t$ to $( a_1\otimes \cdots \otimes a_{n-t}\otimes a) \otimes (b_1\otimes \cdots \otimes b_t\otimes b)$, we get
\begin{align}
&\sum^{n-t}_{i=0} (-1)^{i(n-t)} (a_i\otimes \cdots \otimes a_{n-t}\otimes a\otimes a_1\otimes \cdots \otimes a_{i-1}\otimes 1)\otimes (b_1\otimes \cdots \otimes b_t\otimes b) \notag \\
&+(-1)^{n-t}\sum^{t}_{i=0} (-1)^{it} (a_1\otimes \cdots \otimes a_{n-t}\otimes a) \otimes (b_i\otimes \cdots \otimes b_{t}\otimes b\otimes b_1\otimes \cdots \otimes b_{i-1}\otimes 1) \notag \\
= &(a\otimes a_1\otimes \cdots \otimes a_{n-t}\otimes 1) \otimes (b_1\otimes \cdots \otimes b_t\otimes b) \notag \\
&+ \sum^{n-t}_{i=1} (-1)^{i(n-t)} (a_i\otimes \cdots \otimes a_{n-t}\otimes a\otimes a_1\otimes \cdots \otimes a_{i-1}\otimes 1)\otimes (b_1\otimes \cdots \otimes b_t\otimes b) \notag \\
&+ (-1)^{n-t}(a_1\otimes \cdots \otimes a_{n-t}\otimes a) \otimes (b\otimes b_1\otimes \cdots \otimes b_t\otimes 1) \notag \\
&+\sum^{t}_{i=2} (-1)^{it+n-t} (a_1\otimes \cdots \otimes a_{n-t}\otimes a) \otimes (b_i\otimes \cdots \otimes b_{t}\otimes b\otimes b_1\otimes \cdots \otimes b_{i-1}\otimes 1) \notag \\
&+(-1)^n(a_1\otimes \cdots \otimes a_{n-t}\otimes a) \otimes (b_1\otimes \cdots \otimes b_t\otimes b\otimes 1) \notag \\
=& \overline{AW}_{n+1}(\circled{1}) +\overline{AW}_{n+1}(\circled{2})+\overline{AW}_{n+1}(\circled{3})+\overline{AW}_{n+1}(\circled{4}) \notag
\end{align}
Therefore,
\begin{align}
\overline{AW}_{n+1}B^{^{A\otimes B}}_n&\overline{EZ}_n((a_1\otimes \cdots \otimes a_{n-t}\otimes a) \otimes (b_1\otimes \cdots \otimes b_t\otimes b)) \notag \\
&=\left(B^{^{A}}_{n-t}\otimes id + (-1)^{n-t} id\otimes B^{^{B}}_t\right) \left((a_1\otimes \cdots \otimes a_{n-t}\otimes a) \otimes (b_1\otimes \cdots \otimes b_t\otimes b)\right) \notag
\end{align}
\end{proof}
\begin{teo}\label{BVinj}
Let $A$ and $B$ be finite dimensional symmetric $R$-algebras with $R$ a commutative hereditary ring. Then
\[
HH^{*}(A; A)\otimes HH^{*}(B;B) \hookrightarrow HH^{*}(A\otimes B; A\otimes B)
\]
is an injection of BV-algebras.
\end{teo}
\begin{proof}
Since both algebras are finite dimensional, we have
$$
Hom_{(A\otimes B)^{e}}({\ensuremath{\mathbb{B}}}(A\otimes B), A\otimes B)\cong Hom_{A^{e}}({\ensuremath{\mathbb{B}}}(A), A)\otimes Hom_{B^{e}}({\ensuremath{\mathbb{B}}}(B), B)
$$
Therefore, by theorem \ref{injection} there is an injection of graded algebras
\[
HH^{*}(A; A)\otimes HH^{*}(B;B) \hookrightarrow HH^{*}(A\otimes B; A\otimes B)
\]
By theorem \ref{HHBV}, the BV-operator is given by the dual of the Connes operator. Dualizing the identity of proposition \ref{TensorConnes}, we get
$$
\overline{EZ}^{\vee} \Delta^{A\otimes B} \overline{AW}^{\vee} = \Delta^{A}\otimes id+ id\otimes \Delta^{B}
$$
on the cochain level, which gives the desired injection on the cohomological level.
\end{proof}
\begin{cor}[\cite{Tensor} Theorem 3.5]\label{tensorBViso}
Let $A$ and $B$ be finite dimensional symmetric $R$-algebras with $R$ a commutative hereditary ring. If $HH^{*}(A; A)$, or $HH^{*}(B;B)$, is $R$-projective, then
\[
HH^{*}(A; A)\otimes HH^{*}(B;B) \cong HH^{*}(A\otimes B; A\otimes B)
\]
is an isomorphism of BV-algebras.
\end{cor}
Next, we study the action of Hochschild cohomology on Hochschild homology of tensor products.
\begin{pro}\label{tensoractionfinite}
If at least one of the algebras is finite dimensional, the action of $HH^{*}(A\otimes B;A\otimes B)$ on $HH_{*}(A\otimes B;A\otimes B)$ is given by the tensor product of the actions.
\end{pro}
\begin{proof}
Let $(a_1\otimes \cdots \otimes a_{n-t}\otimes a) \in A^{n-t}\otimes A$, $(b_1\otimes \cdots \otimes b_t\otimes b)\in B^t\otimes B$, $\alpha\in Hom(A^{m-i},A)$ and $\beta\in Hom(B^{i},B)$ with $n-t\geq m-i \geq 0$ and $n-m\geq t-i \geq 0$. We claim that
$$
\overline{AW}\rho^{A\otimes B} (\overline{EZ}\otimes \overline{AW}^{\vee})=\pm\left(\rho^{A}\otimes \rho^{B}\right)
$$
on the (co)chain level, which implies the assertion on the (co)homological level.
\begin{align}
&(a_1\otimes \cdots \otimes a_{n-t}\otimes a)\otimes (b_1\otimes \cdots \otimes b_t\otimes b)\otimes (\alpha \otimes \beta) \notag \\
&\xmapsto{\overline{EZ}_n\otimes \overline{AW}^{\vee}_m} \left(\sum_{\sigma}(-1)^{t(m-i)+|\sigma|} F_1\otimes \cdots \otimes F_{n}\otimes a\otimes b\right)\otimes \overline{AW}^{\vee}_m(\alpha\otimes \beta) \xmapsto{\rho^{A\otimes B}} \notag \\
&\sum_{\sigma}(-1)^{t(m-i)+|\sigma|+nm} F_{m+1}\otimes \cdots \otimes F_{n} \otimes (a\otimes b) \; \overline{AW}^{\vee}_m(\alpha\otimes \beta)(F_1\otimes \cdots \otimes F_m) \notag
\end{align}
Applying $\overline{AW}_{n-m}$, the only non-zero term arises when $k=t-i$ and $\sigma$ is the following permutation
\begin{equation}
\resizebox{\textwidth}{!}{$
\begin{pmatrix}
1 &\cdots & m-i & m-i+1 & \cdots & n-t & n-t+1 & \cdots & n-t+i & n-t+i+1 \cdots n\\
i+1 & \cdots & m & m+t-i+1 & \cdots & n & 1 & \cdots & i & m+1 \cdots m+t-i
\end{pmatrix}$} \notag
\end{equation}
Since $|\sigma|=i(m-i)+t(n-t-m+i)$, we get
\begin{align}
(-1)^{t(m-i)+(m-i)(n-t)+it}(a_{m-i+1}\otimes \cdots \otimes a_{n-t}\otimes a\, \alpha(a_1\otimes \cdots \otimes a_{m-i}))\notag \\
\otimes (b_{i+1}\otimes \cdots \otimes b_{t}\otimes b\, \beta(b_1\otimes \cdots \otimes b_{i})) \notag
\end{align}
which is precisely
$$
(-1)^{t(m-i)}\left(\rho^A\otimes \rho^B\right)\left((a_1\otimes \cdots \otimes a_{n-t}\otimes a)\otimes \alpha \otimes (b_1\otimes \cdots \otimes b_t\otimes b) \otimes \beta \right)
$$
\end{proof}
The following proposition is a slight generalization of proposition \ref{tensoractionfinite}.
\begin{pro}\label{tensoraction}
Under the same hypothesis as in theorem \ref{injection}, the action of $HH^{*}(A\otimes B;A\otimes B)$ on $HH_{*}(A\otimes B;A\otimes B)$ is given by the tensor product of the actions.
\end{pro}
\begin{proof}
Let $\Delta^{A}:{\ensuremath{\mathbb{P}}}(A)\rightarrow {\ensuremath{\mathbb{P}}}(A)\underset{A}\otimes {\ensuremath{\mathbb{P}}}(A)$ and $\Delta^{B}:{\ensuremath{\mathbb{P}}}(B)\rightarrow {\ensuremath{\mathbb{P}}}(B)\underset{B}\otimes {\ensuremath{\mathbb{P}}}(B)$ be diagonal approximation maps. By proposition \ref{TenDiag}, $\Delta=\tau (\Delta^{A}\otimes \Delta^{B})$ is a diagonal approximation map for ${\ensuremath{\mathbb{P}}}(A\otimes B)$. Let $x\otimes a\in {\ensuremath{\mathbb{P}}}(A)\underset{A^e}\otimes A$, $y\otimes b\in {\ensuremath{\mathbb{P}}}(B)\underset{B^e}\otimes B$, $f\in Hom_{A^{e}}({\ensuremath{\mathbb{P}}}(A), A)$ and $g\in Hom_{B^{e}}({\ensuremath{\mathbb{P}}}(B), B)$. Notice that the following diagram commutes up to the sign $(-1)^{|x||g|}$
$$
\xymatrix{
{\ensuremath{\mathbb{P}}}(A)\otimes {\ensuremath{\mathbb{P}}}(B) \ar[d]_{\Delta^A\otimes \Delta^B} & \\
{\ensuremath{\mathbb{P}}}(A)\underset{A}\otimes {\ensuremath{\mathbb{P}}}(A)\otimes {\ensuremath{\mathbb{P}}}(B)\underset{B}\otimes {\ensuremath{\mathbb{P}}}(B) \ar[d]_{(-1)^{|x||g|} (f\underset{A}\otimes id) \otimes (g\underset{B}\otimes id)} \ar[r]^-{\tau} & {\ensuremath{\mathbb{P}}}(A\otimes B)\underset{A\otimes B}\otimes {\ensuremath{\mathbb{P}}}(A \otimes B) \ar[d]^{(f\otimes g)\underset{A\otimes B}\otimes id} \\
{\ensuremath{\mathbb{P}}}(A)\underset{A}\otimes A \otimes {\ensuremath{\mathbb{P}}}(B)\underset{B}\otimes B \ar[r]^{\tau} & {\ensuremath{\mathbb{P}}}(A\otimes B) \underset{A\otimes B}\otimes A\otimes B
}
$$
Therefore,
$$
\rho^{A\otimes B}(((x\otimes a)\otimes (y\otimes b))\otimes (f\otimes g))=(-1)^{|f||y|}\rho^{A}((x\otimes a)\otimes f)\otimes \rho^{B}((y\otimes b)\otimes g)
$$
\end{proof}
To sum up, we get the following
\begin{teo}\label{BVisoCY}
Let $R$ be a commutative hereditary ring. Let $A$ and $B$ be two $R$-algebras satisfying the following hypothesis:
\begin{itemize}
\item Suppose that ${\ensuremath{\mathbb{P}}}(A)\rightarrow A$ is a resolution of $A$ by finitely generated projective $A^{e}$-modules and ${\ensuremath{\mathbb{P}}}(B)\rightarrow B$ is a $B^{e}$-resolution of $B$ such that
$$
Hom_{(A\otimes B)^{e}}({\ensuremath{\mathbb{P}}}(A\otimes B), A\otimes B)\cong Hom_{A^{e}}({\ensuremath{\mathbb{P}}}(A), A)\otimes Hom_{B^{e}}({\ensuremath{\mathbb{P}}}(B), B)
$$
\item $HH^{*}(A; A)$ or $HH^{*}(B;B)$ is $R$-projective.
\item Let $a\in HH_n(A;A)$ and $b\in HH_m(B;B)$ such that
\begin{align}
\rho_a:HH^*(A;A)&\longrightarrow HH_{n-*}(A;A) & \rho_b:HH^*(B;B)&\longrightarrow HH_{m-*}(B;B) \notag \\
c&\longmapsto \rho(a\otimes c) & c&\longmapsto \rho(b\otimes c)\notag
\end{align}
are isomorphisms, and $B(a)=0=B(b)$.
\end{itemize}
Then there is an isomorphism of BV-algebras
\[
HH^{*}(A\otimes B) \cong HH^{*}(A)\otimes HH^{*}(B)
\]
\end{teo}
\begin{proof}
By proposition \ref{tensoraction}, the action for $A\otimes B$ is given by the tensor product of the actions. Therefore,
$$
\rho_a\otimes \rho_b: HH^*(A\otimes B; A\otimes B) \rightarrow HH_{n+m-*}(A\otimes B;A\otimes B)
$$
is an isomorphism. Then
$$
\Delta^{A\otimes B}= (\rho^{-1}_a\otimes \rho^{-1}_b) (B^A\otimes id + id\otimes B^B)( \rho_a\otimes \rho_b) = \Delta^{A}\otimes id + id\otimes \Delta^{B} \notag
$$
\end{proof}
\section{BV-Algebra Structure on \texorpdfstring{\boldmath{$HH^{*}(R[\ensuremath{\mathbb{Z}}/n\ensuremath{\mathbb{Z}}])$}}{HH*(R[Z/nZ])}}
From now on, we assume that $A$ is $R\left[\ensuremath{\mathbb{Z}} /n \ensuremath{\mathbb{Z}} \right]\cong R [\sigma] /(\sigma^{n}-1)$ with $R$ a commutative ring. Since the Hochschild (co)homology of an associative algebra can be calculated using any projective $A^{e}$-resolution, and the bar construction is not convenient for explicit calculations, we will use the following 2-periodic resolution \cite{cyho}, \cite{Holm}.
\begin{pro}
The following is an $A^{e}$-projective resolution of $A$
\[
{\ensuremath{\mathbb{P}}}(A): \, \cdots \rightarrow A\otimes A \xrightarrow{d_2} A\otimes A \xrightarrow{d_1} A\otimes A \xrightarrow{\mu} A \rightarrow 0
\]
with
\begin{align}
\mu (a\otimes b)&=ab \notag \\
d_{2k+1}(a\otimes b)&= (a\otimes b)(1\otimes \sigma - \sigma \otimes 1) \notag \\
d_{2k}(a\otimes b)&= (a\otimes b) \sum_{i=0} ^{n-1} \sigma^i \otimes \sigma^{n-i-1} \notag
\end{align}
\end{pro}
\begin{proof}
First of all, notice that $A\otimes A\cong A^{e}$ as $A^{e}$-modules, so $A^2$ is $A^{e}$-free. From the definition, it follows that $d_{r}d_{r+1}=0$. Now, we define the following right $A$-module maps
\[
\, \cdots \xleftarrow{\tilde{s}_3} A\otimes A \xleftarrow{\tilde{s}_2} A\otimes A \xleftarrow{\tilde{s}_1} A\otimes A \xleftarrow{\tilde{s}_0} A \leftarrow 0
\]
\begin{align}
\tilde{s}_0 : A &\longrightarrow A^2, & \tilde{s}_0 (\sigma ^{i})&=1\otimes \sigma^{i} \notag \\
\tilde{s}_{2k+1} : A^2 &\longrightarrow A^2, & \tilde{s}_{2k+1} (\sigma ^{i}\otimes 1)&= \begin{cases}
-\displaystyle\sum_{j=0} ^{i-1} \sigma^j \otimes \sigma^{i-j-1} &\text{if $i\neq 0$} \notag\\
0 &\text{if $i=0$}
\end{cases} \notag \\
\tilde{s}_{2k} : A^2 &\longrightarrow A^2, & \tilde{s}_{2k} (\sigma ^{i}\otimes 1)&= \begin{cases}
1\otimes 1 &\text{if $i=n-1$} \notag\\
0 &\text{otherwise}
\end{cases} \notag
\end{align}
and by direct calculations, it follows that $\mu \tilde{s}_0= id$ and $d_{k+1}\tilde{s}_{k+1} + \tilde{s}_{k}d_k = id$ for all $k\geq 1$. Therefore, the complex is acyclic.
\end{proof}
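As a sanity check, not part of the proof, the identities $d_{r}d_{r+1}=0$ can be verified mechanically: both products below telescope and vanish because $\sigma^{n}=1$. The following Python script (the order $n$ is an arbitrary sample) multiplies the two differentials inside $A\otimes A$ with exponents reduced modulo $n$.

```python
from collections import defaultdict

n = 6  # sample order; the identity holds for every n >= 2

def mult(u, v):
    """Multiply elements of A ⊗ A, A = R[σ]/(σ^n − 1), written as
    dicts {(i, j): coeff} over the basis σ^i ⊗ σ^j."""
    out = defaultdict(int)
    for (a, b), c in u.items():
        for (p, q), d in v.items():
            out[((a + p) % n, (b + q) % n)] += c * d
    return {k: c for k, c in out.items() if c != 0}

d_odd = {(0, 1): 1, (1, 0): -1}                 # 1⊗σ − σ⊗1
d_even = {(i, n - i - 1): 1 for i in range(n)}  # Σ_i σ^i ⊗ σ^{n−i−1}

# The differentials are right multiplications by these elements, so
# d_r d_{r+1} = 0 follows from both products vanishing.
assert mult(d_odd, d_even) == {}
assert mult(d_even, d_odd) == {}
```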
Tensoring this resolution with $A$ over $A^{e}$ and using the identification $A^2 \underset{A^e}\otimes A \cong A$, $((x\otimes y)\otimes a\mapsto yax)$, we obtain the complex
\begin{equation}\label{ho2Per}
\cdots \rightarrow A \xrightarrow{n\sigma ^{n-1}} A \xrightarrow{0} A \xrightarrow{n\sigma ^{n-1}} A \xrightarrow{0} A
\end{equation}
Taking $Hom_{A^{e}}(- , A)$ of ${\ensuremath{\mathbb{P}}}(A)$ and using the identification $Hom_{A^{e}}(A^2 , A)\cong A$, $(f \mapsto f(1\otimes 1))$, we obtain the complex
\begin{equation}\label{coh2Per}
A \xrightarrow{0} A \xrightarrow{n\sigma ^{n-1}} A \xrightarrow{0} A \xrightarrow{n\sigma ^{n-1}} A \rightarrow \cdots
\end{equation}
Then
$$
HH_i(A) = \begin{cases}
A &\text{if $i=0$} \notag\\
A/(n\sigma ^{n-1}A) &\text{if $i=2k+1$} \notag \\
Ann(n\sigma ^{n-1}) &\text{if $i=2k$}
\end{cases} \notag
$$
$$
HH^i(A) = \begin{cases}
A &\text{if $i=0$} \notag\\
Ann(n\sigma ^{n-1}) &\text{if $i=2k+1$} \notag \\
A/(n\sigma ^{n-1}A) &\text{if $i=2k$}
\end{cases} \notag
$$
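Both complexes are built from the single map ``multiplication by $n\sigma^{n-1}$''. As a quick verification aid (a Python sketch, not part of the argument; the order and primes are arbitrary samples), this map is $n$ times a cyclic permutation matrix in the basis $1,\sigma,\dots,\sigma^{n-1}$, so it is invertible whenever $n$ is invertible in $R$, and it vanishes over a ring of characteristic $p$ with $p\mid n$.

```python
n = 6  # sample order

# Matrix of multiplication by nσ^{n−1} on A = R[σ]/(σ^n − 1) in the
# basis 1, σ, ..., σ^{n−1}: since σ^{n−1}·σ^j = σ^{(j−1) mod n}, the
# matrix is n times a cyclic permutation matrix.
M = [[0] * n for _ in range(n)]
for j in range(n):
    M[(j - 1) % n][j] = n

# Every column is n times a standard basis vector, so det M = ±n^n:
# when n is a unit the map is bijective, giving Ann(nσ^{n−1}) = 0 and
# A/(nσ^{n−1}A) = 0, i.e. HH^i(A) = 0 for i > 0.
for j in range(n):
    col = [M[i][j] for i in range(n)]
    assert col.count(0) == n - 1 and col[(j - 1) % n] == n

# Over F_p with p | n (here p = 2, 3) the map is zero, so every
# differential vanishes and HH^i(A) ≅ A in all degrees.
for p in (2, 3):
    assert all(M[i][j] % p == 0 for i in range(n) for j in range(n))
```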
To calculate the algebraic structures of $HH^{*}(A;A)$, we use two chain maps between ${\ensuremath{\mathbb{P}}}(A)$ and the normalized bar resolution ${\ensuremath{\mathbb{B}}}(A)$
\begin{align}
\psi_*:& \; {\ensuremath{\mathbb{P}}}(A)\rightarrow {\ensuremath{\mathbb{B}}}(A) \notag \\
\varphi_*:& \; \,{\ensuremath{\mathbb{B}}}(A)\rightarrow {\ensuremath{\mathbb{P}}}(A) \notag
\end{align}
which are homotopy equivalences.
The $A^{e}$-homomorphisms $\psi_*$ are defined by
\begin{align}
&\psi_0 = id : A^{2} \longrightarrow A^{2} \notag \\
&\psi_{r+1}: A^{2} \longrightarrow A\otimes \bar{A}^{r+1}\otimes A, \qquad \psi_{r+1}(1\otimes 1):= s_r \psi_{r} d_{r+1}(1\otimes 1) \notag
\end{align}
By direct computations, it follows that
\begin{align}
&\psi_{2r}(1\otimes 1)= \displaystyle \sum_{0\leq i_1,\dots ,i_r \leq n-1} (-1)^r 1\otimes \sigma^{i_1}\otimes \sigma \otimes \sigma^{i_2}\otimes \cdots \otimes \sigma^{i_r}\otimes \sigma\otimes \sigma^{r(n-1)-\sum_{k=1} ^{r} i_k} \notag \\
&\psi_{2r+1}(1\otimes 1)= \displaystyle \sum_{0\leq i_1,\dots ,i_r \leq n-1} (-1)^{r+1} 1\otimes \sigma \otimes \sigma^{i_1}\otimes \cdots \otimes \sigma^{i_r}\otimes \sigma\otimes \sigma^{r(n-1)-\sum_{k=1} ^{r} i_k} \notag
\end{align}
The $A^{e}$-homomorphisms $\varphi_*$ are defined by
\begin{align}
&\varphi_0 = id : A^{2} \longrightarrow A^{2} \notag \\
&\varphi_1: A\otimes \bar{A}\otimes A \longrightarrow A^{2}, \; \varphi_1 (1\otimes \sigma^{i}\otimes 1):= -\sum_{j=0} ^{i-1} \sigma^j \otimes \sigma^{i-j-1} \notag \\
&\varphi_2: A\otimes \bar{A}^{2}\otimes A \longrightarrow A^{2}, \, \varphi_2 (1\otimes \sigma^{i}\otimes \sigma^{k}\otimes 1):= \begin{cases}
-1\otimes \sigma^{i+k-n} &\text{if $i+k\geq n$} \notag\\
0 &\text{otherwise}
\end{cases} \notag
\end{align}
and, for $r>2$, $\varphi_r: A\otimes \bar{A}^{r}\otimes A \longrightarrow A^{2}$ is given by
\begin{equation}
\resizebox{\textwidth}{!}{$
\varphi_r(1 \otimes \sigma^{i_1}\otimes \cdots \otimes \sigma^{i_r}\otimes 1)= \varphi_{r-2}(1 \otimes \sigma^{i_1}\otimes \cdots \otimes \sigma^{i_{r-2}}\otimes 1)\cdot \varphi_2 (1\otimes \sigma^{i_{r-1}}\otimes \sigma^{i_r}\otimes 1) \notag $}
\end{equation}
By direct computations, it follows that
\begin{align}
\varphi_{2r}(1 \otimes \sigma^{i_1}\otimes \cdots \otimes &\sigma^{i_{2r}}\otimes 1)= \displaystyle \prod_{k=1}^{r}\varphi_2 (1\otimes \sigma^{i_{2k-1}}\otimes \sigma^{i_{2k}}\otimes 1) \notag \\
&= \begin{cases}
(-1)^r \, 1\otimes \sigma^{\sum_{k=1} ^{2r} i_k-rn} &\text{if $i_{2k-1}+i_{2k}\geq n$ for $1\leq k \leq r$} \notag\\
0 &\text{otherwise}
\end{cases} \notag
\end{align}
\begin{align}
\varphi_{2r+1}(1 &\otimes \sigma^{i_1}\otimes \cdots \otimes \sigma^{i_{2r+1}}\otimes 1)= \varphi_1(1\otimes \sigma^{i_1}\otimes 1)\displaystyle \prod_{k=1}^{r}\varphi_2 (1\otimes \sigma^{i_{2k}}\otimes \sigma^{i_{2k+1}}\otimes 1) \notag \\
&= \begin{cases}
(-1)^{r+1} \displaystyle \sum_{j=0}^{i_1 -1} \sigma^{j} \otimes \sigma^{\sum_{k=1} ^{2r+1} i_k-j-rn-1} &\text{if $i_{2k}+i_{2k+1}\geq n$ for $1\leq k \leq r$} \notag\\
0 &\text{otherwise}
\end{cases} \notag
\end{align}
\begin{remk}
These two maps give a homotopy equivalence of complexes. Moreover,
$$
\varphi_* \psi_*=id \quad \text{and} \quad \psi_* \varphi_* \simeq id
$$
\end{remk}
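The equality $\varphi_*\psi_*=id$ can be replayed degree by degree from the explicit formulas. The following Python script is a verification aid only (a sample $n$ is fixed; terms carrying a factor $\sigma^{0}$ in a $\bar{A}$ slot are discarded, since they vanish in the normalized bar resolution): it checks $\varphi_r\psi_r(1\otimes 1)=1\otimes 1$ for small $r$.

```python
from collections import defaultdict
from itertools import product

n = 5  # sample order; any n >= 2 works

def psi(r):
    """psi_r(1 ⊗ 1) as {(left, mids, right): coeff} in A ⊗ Abar^r ⊗ A."""
    out = defaultdict(int)
    s, odd = divmod(r, 2)
    for idx in product(range(n), repeat=s):
        mids = [1] if odd else []
        for i in idx:
            mids += [i, 1]
        out[(0, tuple(mids), s * (n - 1) - sum(idx))] += (-1) ** (s + odd)
    return out

def phi(r, left, mids, right):
    """phi_r(σ^left ⊗ σ^mids ⊗ σ^right) as {(i, j): coeff} in A ⊗ A."""
    out = defaultdict(int)
    s, odd = divmod(r, 2)
    if any(mids[2*k + odd] + mids[2*k + 1 + odd] < n for k in range(s)):
        return out  # a pair condition fails: phi_r kills this term
    total = sum(mids)
    if odd:  # phi_{2s+1}: a sum over j = 0, ..., mids[0] − 1
        for j in range(mids[0]):
            out[((left + j) % n,
                 (total - j - s * n - 1 + right) % n)] += (-1) ** (s + 1)
    else:    # phi_{2s}: a single term
        out[(left % n, (total - s * n + right) % n)] += (-1) ** s
    return out

def phi_psi(r):
    out = defaultdict(int)
    for (l, mids, rt), c in psi(r).items():
        if any(m % n == 0 for m in mids):  # σ^0 = 0 in normalized Abar
            continue
        for key, d in phi(r, l, list(mids), rt).items():
            out[key] += c * d
    return {k: v for k, v in out.items() if v != 0}

for r in range(6):
    assert phi_psi(r) == {(0, 0): 1}, r
```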
\begin{pro}
Using the identifications
$$
A\otimes \bar{A}^*\otimes A \underset{A^{e}}\otimes A\cong \bar{A}^*\otimes A \quad \text{and} \quad A^2 \underset{A^{e}}\otimes A\cong A
$$
the induced maps for $\psi_*$ and $\varphi_*$ are
\begin{align}
&\bar{\psi}_{*}: A \longrightarrow \bar{A}^* \otimes A \notag \\
&\bar{\psi}_{2r}(a)= \displaystyle \sum_{0\leq i_1,\dots ,i_r \leq n-1} (-1)^r \sigma^{i_1}\otimes \sigma\otimes \sigma^{i_2}\otimes \cdots \otimes \sigma^{i_r}\otimes \sigma \otimes \sigma^{r(n-1)-\sum_{k=1} ^{r} i_k} a \notag \\
&\bar{\psi}_{2r+1}(a)= \displaystyle \sum_{0\leq i_1,\dots ,i_r \leq n-1} (-1)^{r+1} \sigma\otimes \sigma^{i_1}\otimes \cdots \otimes \sigma^{i_r}\otimes \sigma \otimes \sigma^{r(n-1)-\sum_{k=1} ^{r} i_k} a \notag \\
&\bar{\varphi}_{*}:\bar{A}^* \otimes A \longrightarrow A \notag \\
&\bar{\varphi}_{2r}(\sigma^{i_1}\otimes \cdots \otimes \sigma^{i_{2r}}\otimes a)= \begin{cases}
(-1)^r \sigma^{\sum_{k=1} ^{2r} i_k-rn} a &\text{if $i_{2k-1}+i_{2k}\geq n$} \notag\\
0 &\text{otherwise}
\end{cases} \notag \\
&\bar{\varphi}_{2r+1}(\sigma^{i_1}\otimes \cdots \otimes \sigma^{i_{2r+1}}\otimes a)= \begin{cases}
(-1)^{r+1} a i_1 \sigma^{\sum_{k=1} ^{2r+1} i_k-rn-1} &\text{if $i_{2k}+i_{2k+1}\geq n$} \notag\\
0 &\text{otherwise}
\end{cases} \notag
\end{align}
where $1\leq k \leq r$.
\end{pro}
\begin{pro}
Using the identifications
$$
Hom_{_{A^{e}}} (A \otimes \bar{A}^* \otimes A, A)\cong Hom (\bar{A}^*, A) \quad \text{and} \quad Hom_{A^{e}}(A^2 , A)\cong A
$$
the induced maps for $\psi_*$ and $\varphi_*$ are
\begin{align}
&\bar{\psi}_{r}^{*}: Hom (\bar{A}^r, A) \longrightarrow A \notag \\
&\bar{\psi}_{2r}^{*}(f)= \displaystyle \sum_{0\leq i_1,\dots ,i_r \leq n-1} (-1)^r f(\sigma^{i_1}, \sigma, \sigma^{i_2}, \dots ,\sigma^{i_r}, \sigma) \sigma^{r(n-1)-\sum_{k=1} ^{r} i_k} \notag \\
&\bar{\psi}_{2r+1}^{*}(f)= \displaystyle \sum_{0\leq i_1,\dots ,i_r \leq n-1} (-1)^{r+1} f(\sigma, \sigma^{i_1}, \dots , \sigma^{i_r}, \sigma ) \sigma^{r(n-1)-\sum_{k=1} ^{r} i_k} \notag \\
&\bar{\varphi}_{r}^{*}:A \longrightarrow Hom_{_{R}} (\bar{A}^r, A) \notag \\
&\bar{\varphi}_{2r}^{*}(a)(\sigma^{i_1}, \dots , \sigma^{i_{2r}})= \begin{cases}
(-1)^r a \sigma^{\sum_{k=1} ^{2r} i_k-rn} &\text{if $i_{2k-1}+i_{2k}\geq n$} \notag\\
0 &\text{otherwise}
\end{cases} \notag \\
&\bar{\varphi}_{2r+1}^{*}(a)(\sigma^{i_1}, \dots , \sigma^{i_{2r+1}})= \begin{cases}
(-1)^{r+1} a i_1 \sigma^{\sum_{k=1} ^{2r+1} i_k-rn-1} &\text{if $i_{2k}+i_{2k+1}\geq n$} \notag\\
0 &\text{otherwise}
\end{cases} \notag
\end{align}
where $1\leq k \leq r$.
\end{pro}
\subsection{Cup Product and Cohomology Ring}
\begin{lem}\label{cupeven}
Let $R$ be a commutative ring. Then the cup product on the even Hochschild cohomology of $A=R [\sigma] /(\sigma^{n}-1)$ is induced by multiplication in $A$.
\end{lem}
\begin{proof}
Let $a\in HH^{2r}(A;A)$ and $b\in HH^{2s}(A;A)$. Then
$$
\bar{\varphi}_{2r}^{*}(a) \smile \bar{\varphi}_{2s}^{*}(b)\in Hom(\bar{A}^{2(r+s)}, A)
$$
and
\begin{equation}
\resizebox{\textwidth}{!}{$
(\bar{\varphi}_{2r}^{*}(a) \smile \bar{\varphi}_{2s}^{*}(b))(\sigma^{i_1}, \dots , \sigma^{i_{2(r+s)}}) = (\bar{\varphi}_{2r}^{*}(a))(\sigma^{i_1}, \dots , \sigma^{i_{2r}})\cdot (\bar{\varphi}_{2s}^{*}(b))(\sigma^{i_{2r+1}}, \dots , \sigma^{i_{2(r+s)}})$} \notag
\end{equation}
\begin{align}
&= \begin{cases}
(-1)^{r+s} ab \sigma^{\sum_{k=1} ^{2(r+s)} i_k-(r+s)n} &\text{if $i_{2k-1}+i_{2k}\geq n$ for $1\leq k \leq r+s$} \notag\\
0 &\text{otherwise}
\end{cases} \notag \\
&= (\bar{\varphi}_{2(r+s)}^{*}(ab))(\sigma^{i_1}, \dots , \sigma^{i_{2(r+s)}})\notag
\end{align}
Since $\bar{\psi}^{*}\bar{\varphi}^{*}=id$, the cup product is induced by multiplication in $A$.
\end{proof}
\begin{lem}\label{cupodd}
$\smile : HH^{i}(A;A)\otimes HH^{j}(A;A)\rightarrow HH^{i+j}(A;A)$ is induced by multiplication if $i$ or $j$ is even, and by the formula
$$
a\smile b = - \frac{(n-1)n}{2} ab\sigma^{n-2}
$$
if $i$ and $j$ are odd.
\end{lem}
\begin{proof}
Let $a\in HH^{2r+1}(A;A)$ and $b\in HH^{2s}(A;A)$. Then
$$
\bar{\varphi}_{2r+1}^{*}(a) \smile \bar{\varphi}_{2s}^{*}(b)\in Hom (\bar{A}^{2(r+s)+1}, A)
$$
and
\begin{equation}
\resizebox{\textwidth}{!}{$
(\bar{\varphi}_{2r+1}^{*}(a) \smile \bar{\varphi}_{2s}^{*}(b))(\sigma^{i_1}, \dots , \sigma^{i_{2(r+s)+1}}) = (\bar{\varphi}_{2r+1}^{*}(a))(\sigma^{i_1}, \dots , \sigma^{i_{2r+1}})\cdot (\bar{\varphi}_{2s}^{*}(b))(\sigma^{i_{2r+2}}, \dots , \sigma^{i_{2(r+s)+1}})$} \notag
\end{equation}
\begin{align}
&= \begin{cases}
(-1)^{r+s+1} ab i_1 \sigma^{\sum_{k=1} ^{2(r+s)+1} i_k-(r+s)n-1} &\text{if $i_{2k}+i_{2k+1}\geq n$ for $1\leq k \leq r+s$} \notag\\
0 &\text{otherwise}
\end{cases} \notag \\
&= (\bar{\varphi}_{2(r+s)+1}^{*}(ab))(\sigma^{i_1}, \dots , \sigma^{i_{2(r+s)+1}})\notag
\end{align}
Then the cup product is induced by multiplication in $A$ if $\left|a\right|$ or $\left|b\right|$ is even.
Assume now that $a\in HH^{2r+1}(A;A)$ and $b\in HH^{2s+1}(A;A)$. Then
$$
\bar{\varphi}_{2r+1}^{*}(a) \smile \bar{\varphi}_{2s+1}^{*}(b)\in Hom (\bar{A}^{2(r+s+1)}, A)
$$
and
\begin{equation}
\resizebox{\textwidth}{!}{$
(\bar{\varphi}_{2r+1}^{*}(a) \smile \bar{\varphi}_{2s+1}^{*}(b))(\sigma^{i_1}, \dots , \sigma^{i_{2(r+s+1)}}) = (\bar{\varphi}_{2r+1}^{*}(a))(\sigma^{i_1}, \dots , \sigma^{i_{2r+1}})\cdot (\bar{\varphi}_{2s+1}^{*}(b))(\sigma^{i_{2r+2}}, \dots , \sigma^{i_{2(r+s+1)}})$} \notag
\end{equation}
\begin{align}
&= \begin{cases}
(-1)^{r+s} ab i_1 i_{2r+2} \sigma^{\sum_{k=1} ^{2(r+s+1)} i_k-(r+s)n-2} &\text{\footnotesize{if $i_{2k}+i_{2k+1}\geq n$ for $1\leq k \leq r$ and}} \notag\\
&\text{\footnotesize{$i_{2k-1}+i_{2k}\geq n$ for $r+2\leq k \leq r+s+1$}} \notag\\
0 &\text{otherwise}
\end{cases} \notag
\end{align}
Applying $\bar{\psi}_{2(r+s+1)}^{*}$ to $f=\bar{\varphi}_{2r+1}^{*}(a) \smile \bar{\varphi}_{2s+1}^{*}(b)$, we have
\begin{align}
&\bar{\psi}_{2(r+s+1)}^{*}(f) \notag \\
&= \displaystyle \sum_{0\leq i_1,\dots ,i_{r+s+1} \leq n-1} (-1)^{r+s+1} f(\sigma^{i_1}, \sigma, \sigma^{i_2}, \dots ,\sigma^{i_{r+s+1}}, \sigma) \sigma^{(r+s+1)(n-1)-\sum_{k=1} ^{r+s+1} i_k} \notag \\
&= - \displaystyle \sum_{i_1 =1}^{n-1} ab i_1\sigma^{n-2} = - \frac{(n-1)n}{2} ab\sigma^{n-2} \notag
\end{align}
Therefore, if $char(R)=p>0$ and $n=mp$, we have
\begin{equation}\label{cupp}
a\smile b = \begin{cases}
mab\sigma^{2m-2} &\text{if $p=2$} \\
0 &\text{if $p\neq 2$}
\end{cases}
\end{equation}
\end{proof}
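The reduction of the coefficient $-\tfrac{(n-1)n}{2}$ claimed in (\ref{cupp}) can be checked numerically; the following Python loop is a verification aid only (the primes and range are arbitrary samples).

```python
# For n = m·p: the coefficient −n(n−1)/2 of the odd⌣odd cup product
# vanishes mod p for p odd, and reduces to m mod 2 for p = 2,
# matching equation (cupp).
for p in (2, 3, 5, 7, 11):
    for m in range(1, 30):
        n = m * p
        coeff = (-(n * (n - 1)) // 2) % p   # n(n−1) is even, so exact
        assert coeff == (m % 2 if p == 2 else 0)
```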
From these results, we can now describe the cohomology ring.
\begin{teo}\label{ringeven}
Let $R$ be a commutative ring and $A=R [\sigma] /(\sigma^{n}-1)$. Then
$$
HH^{2*}(A;A)=R [x,z]/(x^{n}-1,nz)
$$
where $x\in HH^{0}(A;A)$ and $z\in HH^{2}(A;A)$.
\end{teo}
\begin{proof}
Consider $x\in HH^{0}(A;A)$ to be the coset $[\sigma]\in A$ and $z\in HH^{2}(A;A)$ the coset $[1]\in A$. By lemma \ref{cupeven}, the cup product for even degrees is induced by multiplication in $A$. Then $x$ generates $HH^{0}(A;A)$, and $HH^{2}(A;A)$ is generated by $z$ and $HH^{0}(A;A)$. In higher degrees $HH^{2i}(A;A)$ is generated by $z^{i}$ and $HH^{0}(A;A)$. The relations are given by $x^{n}-1=0$ and $nx^{n-1}z=0$; since $x$ is invertible, the latter is equivalent to $nz=0$.
\end{proof}
\begin{cor}\label{integraldomain}
Let $R$ be an integral domain with $char(R)\nmid n$. Then
$$
HH^{*}(A;A)=R [x,z]/(x^{n}-1,nz)
$$
\end{cor}
\begin{proof}
Since $R$ is an integral domain with $char(R)\nmid n$, we have $Ann(n\sigma^{n-1})=0$, so the odd cohomology groups vanish and the result follows from theorem \ref{ringeven}.
\end{proof}
\begin{cor}
Let $R$ be a commutative ring such that $n\in R^{*}$. Then
$$
HH^{*}(A;A)=R [x]/(x^{n}-1)=A
$$
\end{cor}
\begin{proof}
Since $n\in R^{*}$, we have $Ann(n\sigma^{n-1})=Ann(\sigma^{n-1})=0$, and $x^{n-1}z=0$ implies that $z=0$.
\end{proof}
\begin{teo}\label{algebraHHZp}
Let $R$ be a commutative ring with $char(R)=p>0$ and $A=R [\sigma] /(\sigma^{n}-1)$ with $n=mp$. If $p\neq 2$, or if $p=2$ and $m$ is even, then
$$
HH^{*}(A;A)=R [x,y,z]/(x^{n}-1,y^{2})
$$
If $p=2$ and $m$ is odd, then
$$
HH^{*}(A;A)=R [x,y,z]/(x^{n}-1,y^{2}-x^{n-2}z)
$$
where $x\in HH^{0}(A;A)$, $y\in HH^{1}(A;A)$ and $z\in HH^{2}(A;A)$.
\end{teo}
\begin{proof}
By theorem \ref{ringeven}, we know that
$$
HH^{2*}(A;A)=R [x,z]/(x^{n}-1)
$$
Consider $y\in HH^{1}(A;A)$ to be the coset $[1]\in A$. Since the cup product of an odd degree cohomology class and an even degree cohomology class is induced by multiplication in $A$, $HH^{1}(A;A)$ is generated by $y$ and $HH^{0}(A;A)$. By (\ref{cupp}), for $p\neq 2$, or $p=2$ and $m$ even, the cup product in odd degrees is zero. Therefore, $y^{2}=0$ and we have
$$
HH^{*}(A;A)=R [x,y,z]/(x^{n}-1,y^{2})
$$
For $p=2$ and $m$ odd, $y^{2}$ is the coset $[\sigma^{n-2}]\in A$, so $y^{2}-x^{n-2}z=0$, and
$$
HH^{*}(A;A)=R [x,y,z]/(x^{n}-1,y^{2}-x^{n-2}z)
$$
\end{proof}
\begin{remk}
These calculations agree with the ones presented in \cite{SolotarCibils} and \cite{Holm}.
\end{remk}
\subsection{BV-Algebra Structure}
\begin{teo}\label{BVZn}
Let $R$ be an integral domain with $char(R)\nmid n$ and $A=R [\sigma] /(\sigma^{n}-1)$. Then the canonical Frobenius form of the group ring induces a BV-algebra structure on $HH^{*}(A;A)$ given by
\begin{align}
HH^{*}(A;A)&=R [x,z]/(x^{n}-1,nz) \notag \\
\Delta(a)&=0 \quad \forall a\in HH^{*}(A;A) \notag
\end{align}
\end{teo}
\begin{proof}
By corollary \ref{integraldomain}, we have $HH^{*}(A;A)=HH^{2*}(A;A)$, so the degree $-1$ operator $\Delta$ maps even degrees to odd degrees, which vanish; hence $\Delta(a)=0$ for all $a\in HH^{*}(A;A)$. However, this can also be proved directly from the definition of $\Delta$ and the fact that in a BV-algebra we have the following equation
\begin{align}\label{Dabc}
\Delta(abc)=&\,\Delta(ab)c+(-1)^{|a|}a\Delta(bc)+(-1)^{(|a|-1)|b|}b\Delta(ac) \\
&-\Delta(a)bc-(-1)^{|a|}a\Delta(b)c-(-1)^{|a|+|b|}ab\Delta(c) \notag
\end{align}
Since the BV-operator is defined over the bar complex, we need the cochains that represent the generators. The class $x$ is represented by the cochain
$$
\bar{\varphi}^{*}_{0}(\sigma)(1)=\sigma
$$
and the class $z$ by
$$
\bar{\varphi}^{*}_{2}(1)(\sigma^{i},\sigma^{k})= \begin{cases}
-\sigma^{i+k-n} &\text{if $i+k\geq n$} \\
0 &\text{otherwise}
\end{cases}
$$
Now, taking $\lbrace 1,\sigma,\dots ,\sigma^{n-1}\rbrace$ as a basis for $A$ and $\lbrace 1,\sigma^{n-1},\dots ,\sigma\rbrace$ as the dual basis induced by the canonical Frobenius form (\ref{frob}), we have
\begin{align*}
\Aboxed{&\Delta(x)=\, \, 0} \; \text{ by degree.} \notag \\
\Delta(\bar{\varphi}^{*}_{2}(1))(\sigma^{i})=\, &\sum_{k=0} ^{n-1} \langle 1,\bar{\varphi}^{*}_{2}(1)(\sigma^{k},\sigma^{i})\rangle \sigma^{n-k} - \sum_{k=0} ^{n-1} \langle 1,\bar{\varphi}^{*}_{2}(1)(\sigma^{i},\sigma^{k})\rangle \sigma^{n-k} \notag \\
\Aboxed{&\Delta(z) =\; 0} \notag \\
\Delta(\bar{\varphi}^{*}_{2}(1)\smile \bar{\varphi}^{*}_{0}(\sigma))(\sigma^{i})=\, &\sum_{k=0} ^{n-1} \langle 1,\bar{\varphi}^{*}_{2}(\sigma)(\sigma^{k},\sigma^{i})\rangle \sigma^{n-k} - \sum_{k=0} ^{n-1} \langle 1,\bar{\varphi}^{*}_{2}(\sigma)(\sigma^{i},\sigma^{k})\rangle \sigma^{n-k} \notag \\
\Aboxed{&\Delta(zx) =\; 0} \notag \\
\Delta(\left(\bar{\varphi}^{*}_{2}(1)\right)^{2})(\sigma^{i}, \sigma^{j}, \sigma^{h})=\, &\sum_{k=0} ^{n-1} \langle 1,\bar{\varphi}^{*}_{4}(1)(\sigma^{k},\sigma^{i},\sigma^{j},\sigma^{h})\rangle \sigma^{n-k} - \sum_{k=0} ^{n-1} \langle 1,\bar{\varphi}^{*}_{4}(1)(\sigma^{i},\sigma^{j},\sigma^{h},\sigma^{k})\rangle \sigma^{n-k}\notag \\
+&\sum_{k=0} ^{n-1} \langle 1,\bar{\varphi}^{*}_{4}(1)(\sigma^{j},\sigma^{h},\sigma^{k},\sigma^{i})\rangle \sigma^{n-k} - \sum_{k=0} ^{n-1} \langle 1,\bar{\varphi}^{*}_{4}(1)(\sigma^{h},\sigma^{k},\sigma^{i},\sigma^{j})\rangle \sigma^{n-k}\notag \\
\Aboxed{&\Delta(z^{2}) =\, \, 0} \notag
\end{align*}
In the last case, $\langle 1,\cdot \rangle\neq 0$ only if $k+i+j+h-2n=n$, i.e., $k+i+j+h=3n$, but also $k+i,j+h,i+j,h+k\geq n$. Therefore, all the coefficients are zero. Using equation (\ref{Dabc}) and induction on powers of $x$ and $z$, we have $\Delta(a)=0$ for all $a\in HH^{*}(A;A)$.
\end{proof}
\begin{teo}\label{BVZncharp}
Let $R$ be a commutative ring with $char(R)=p>0$ and $A=R [\sigma] /(\sigma^{n}-1)$ with $n=mp$. If $p\neq 2$, or $p=2$ and $m$ is even. Then the canonical Frobenius form of the group ring induces a BV-algebra structure on $HH^{*}(A;A)$ given by
\begin{align}
HH^{*}(A;A)&=R [x,y,z]/(x^{n}-1,y^{2}) \notag \\
\Delta (z^{k}x^{l}) &=0 \notag \\
\Delta (z^{k}yx^{l}) &= (l-1)z^{k}x^{l-1} \notag
\end{align}
If $p=2$ and $m$ is odd. Then as a BV-algebra
\begin{align}
HH^{*}(A;A)&=R [x,y,z]/(x^{n}-1,y^{2}-x^{n-2}z) \notag \\
\Delta (z^{k}x^{l}) &=0 \notag \\
\Delta (z^{k}yx^{l}) &= (l-1)z^{k}x^{l-1} \notag
\end{align}
where $x\in HH^{0}(A;A)$, $y\in HH^{1}(A;A)$ and $z\in HH^{2}(A;A)$.
\end{teo}
\begin{proof}
As in the previous theorem, we need the cochains that represent the generators. The class $y$ is represented by the cochain
$$
\bar{\varphi}^{*}_{1}(1)(\sigma^{i})=-i\sigma^{i-1}
$$
\begin{align*}
\Delta(\bar{\varphi}^{*}_{1}(1))(1)=\, &\sum_{j=0} ^{n-1} \langle 1,\bar{\varphi}^{*}_{1}(1)(\sigma^{j})\rangle \sigma^{n-j} = \sum_{j=0} ^{n-1} - j\langle 1,\sigma^{j-1}\rangle \sigma^{n-j} \notag \\
= & - \sigma^{n-1} = -\bar{\varphi}^{*}_{0}(\sigma^{n-1})(1) \notag \\
\Aboxed{&\Delta(y)=\, -x^{n-1}} \notag \\
\Delta(\bar{\varphi}^{*}_{0}(\sigma)\smile \bar{\varphi}^{*}_{1}(1))(1)=\, &\sum_{j=0} ^{n-1} \langle 1,\bar{\varphi}^{*}_{1}(\sigma)(\sigma^{j})\rangle \sigma^{n-j} = \sum_{j=0} ^{n-1} - j\langle 1,\sigma^{j}\rangle \sigma^{n-j} \notag \\
\Aboxed{&\Delta(xy)=\, 0} \notag \\
\Delta(\bar{\varphi}^{*}_{2}(1)\smile \bar{\varphi}^{*}_{1}(1))(\sigma^{i}, \sigma^{j})=\, &\sum_{k=0} ^{n-1} \langle 1,\bar{\varphi}^{*}_{3}(1)(\sigma^{k},\sigma^{i},\sigma^{j})\rangle \sigma^{n-k} \notag \\
+ &\sum_{k=0} ^{n-1} \langle 1,\bar{\varphi}^{*}_{3}(1)(\sigma^{i},\sigma^{j},\sigma^{k})\rangle \sigma^{n-k}\notag \\
+&\sum_{k=0} ^{n-1} \langle 1,\bar{\varphi}^{*}_{3}(1)(\sigma^{j},\sigma^{k},\sigma^{i})\rangle \sigma^{n-k} \notag
\end{align*}
If $i+j<n$, $\langle 1,\cdot \rangle= 0$ for all $k$. When $i+j\geq n$, $\langle 1,\cdot \rangle\neq 0$ only if $k+i+j-1-n=n$, i.e., $k=2n+1-(i+j)$. Therefore,
\begin{align*}
\Delta(\bar{\varphi}^{*}_{3}(1))(\sigma^{i}, \sigma^{j})=\, &(2n+1-(i+j)) \sigma^{i+j-n-1} + i\sigma^{i+j-n-1} + j\sigma^{i+j-n-1}\notag \\
= &(2n+1) \sigma^{i+j-n-1} = \sigma^{i+j-n-1} = -\bar{\varphi}^{*}_{2}(\sigma^{n-1})(\sigma^{i}, \sigma^{j}) \tag{$char(R)=p$ and $n=mp$} \\
\Aboxed{&\Delta(zy)=\, -zx^{n-1}} \notag
\end{align*}
Using equation (\ref{Dabc}) and induction on powers of $x$, $y$ and $z$, we have
\begin{align}
\Delta (z^{k}x^{l}) &=0 \notag \\
\Delta (z^{k}yx^{l}) &= (l-1)z^{k}x^{l-1} \notag
\end{align}
\end{proof}
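The low-degree computations above can be replayed numerically. The following Python sketch is a verification aid only: it assumes that the canonical Frobenius form satisfies $\langle 1,\sigma^{t}\rangle=1$ precisely when $\sigma^{t}=1$ in $A$, and it fixes the sample values $p=3$, $m=5$.

```python
n, p = 15, 3  # sample: n = m·p with m = 5

def eps(t):
    """Assumed pairing: <1, σ^t> = 1 iff σ^t = 1 in A."""
    return 1 if t % n == 0 else 0

# Δ(y): Σ_j −j·<1, σ^{j−1}>·σ^{n−j}; only j = 1 contributes, giving
# −σ^{n−1}, i.e. Δ(y) = −x^{n−1}.
delta_y = [0] * n
for j in range(n):
    delta_y[(n - j) % n] += -j * eps(j - 1)
assert delta_y == [0] * (n - 1) + [-1]

# Δ(xy): Σ_j −j·<1, σ^{j}>·σ^{n−j}; only j = 0 contributes, with
# coefficient −0 = 0, so Δ(xy) = 0.
delta_xy = [0] * n
for j in range(n):
    delta_xy[(n - j) % n] += -j * eps(j)
assert delta_xy == [0] * n

# In the Δ(zy) computation the surviving coefficients add up to
# (2n+1−(i+j)) + i + j = 2n+1 ≡ 1 (mod p), since p divides n.
assert (2 * n + 1) % p == 1
```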
In a BV-algebra, the Gerstenhaber bracket is determined by the following equation
\begin{align}\label{aDb}
\lbrace a,b\rbrace = -(-1)^{\left|a\right|}( \Delta(ab) - \Delta(a)b - (-1)^{|a|}a\Delta(b))
\end{align}
It follows that
\begin{cor}
Let $A=R [\sigma] /(\sigma^{n}-1)$ with $R$ an integral domain and $char(R)\nmid n$. The Gerstenhaber bracket on $HH^{*}(A;A)$ is given by
$$
\lbrace a, b\rbrace =0 \quad \forall a,b\in HH^{*}(A;A)
$$
\end{cor}
\begin{cor}
Let $R$ be a commutative ring with $char(R)=p>0$ and $A=R [\sigma] /(\sigma^{n}-1)$ with $n=mp$. The Gerstenhaber bracket on $HH^{*}(A;A)$ is given by
\begin{align}
\lbrace z^{k_1}x^{l_1},z^{k_2}x^{l_2}\rbrace &= 0 \notag \\
\lbrace z^{k_1}x^{l_1},z^{k_2}yx^{l_2}\rbrace &= -l_1 z^{k_1 + k_2}x^{l_1 + l_2 -1} \notag \\
\lbrace z^{k_1}yx^{l_1}, z^{k_2}yx^{l_2}\rbrace &= (l_2 - l_1)z^{k_1 + k_2}yx^{l_1 + l_2 -1} \notag
\end{align}
\end{cor}
\begin{remk}
These calculations agree with the Gerstenhaber bracket presented in \cite{Sanchez} and \cite{AltBracket}.
\end{remk}
For the cyclic group of prime order $p$, $\ensuremath{\mathbb{Z}} /p \ensuremath{\mathbb{Z}}$, the group ring $\ensuremath{\mathbb{F}_{p}}\left[\ensuremath{\mathbb{Z}} /p \ensuremath{\mathbb{Z}} \right]= \ensuremath{\mathbb{F}_{p}} [\sigma] /(\sigma^{p}-1)$ is naturally isomorphic, as an algebra, to the truncated polynomial ring $\ensuremath{\mathbb{F}_{p}} [x] /(x^{p})$. In \cite{AADD}, the authors transfer the canonical Frobenius form of the group ring to the truncated polynomial ring and obtain the following Frobenius form
$$
\varepsilon \left( \sum_{i=0} ^{p-1} \alpha_i x^i \right) = \sum_{i=0} ^{p-1} (-1)^{i}\alpha_i
$$
Using this Frobenius form, the BV-algebra structure is given by
\begin{teo}
Let $A=\ensuremath{\mathbb{F}_{p}} [x] /(x^{p})$ with $p$ an odd prime. Then the canonical Frobenius form of the group ring induces a BV-algebra structure on $HH^{*}(A;A)$ given by
\[
HH^{*}(A;A)=\ensuremath{\mathbb{F}_{p}} [x,v,t]/(x^{p},v^{2})
\]
\begin{align}
&\Tilde{\Delta} (t^{k}x^{l}) =0 \notag \\
&\Tilde{\Delta} (t^{k}vx^{2l}) = 2lt^{k}x^{2l-1} + \sum_{i=2l} ^{p-1} (-1)^{i+1}t^{k}x^{i} \notag \\
&\Tilde{\Delta} (t^{k}vx^{2l+1}) = (2l+1)t^{k}x^{2l} + \sum_{i=2l+1} ^{p-1} (-1)^{i}t^{k}x^{i} \notag
\end{align}
where $x\in HH^{0}(A;A)$, $v\in HH^{1}(A;A)$ and $t\in HH^{2}(A;A)$.
\end{teo}
\begin{cor}
There is an isomorphism of BV-algebras
$$
\phi: HH^{*}(\ensuremath{\mathbb{F}_{p}} [x] /(x^{p}); \ensuremath{\mathbb{F}_{p}} [x] /(x^{p}))\overset{\cong}{\longrightarrow} HH^{*}(\ensuremath{\mathbb{F}_{p}}\left[\ensuremath{\mathbb{Z}} /p \ensuremath{\mathbb{Z}} \right];\ensuremath{\mathbb{F}_{p}}\left[\ensuremath{\mathbb{Z}} /p \ensuremath{\mathbb{Z}} \right])
$$
\end{cor}
\begin{proof}
The isomorphism $\phi$ is defined as follows
$$
\phi(x)=x-1, \qquad \phi(v)=y \qquad \text{and} \qquad \phi(t)=z
$$
It is clear that $\phi$ is a ring isomorphism. To verify that it is an isomorphism of BV-algebras, we need to check that $\phi \Tilde{\Delta}=\Delta \phi$.
\begin{itemize}
\item $\phi \Tilde{\Delta}(x)=\phi(0)=0=\Delta (x-1)=\Delta \phi(x)$.
\item $\Tilde{\Delta} (v) = \sum_{i=0} ^{p-1} (-1)^{i+1}x^{i}$, then
\begin{align}
\phi\Tilde{\Delta} (v) &= \sum_{i=0} ^{p-1} (-1)^{i+1}(x-1)^{i} = \sum_{i=0} ^{p-1} \sum_{k=0}^{i}(-1)^{k+1} \binom {i}{k}x^{k} \notag \\
&= \sum_{k=0} ^{p-1} (-1)^{k+1}\sum_{i=k}^{p-1} \binom {i}{k}x^{k} \notag \\
&\equiv -x^{p-1} \tag{mod $p$} \\
&=\Delta \phi(v) \notag
\end{align}
the congruence modulo $p$ follows from
$$
\sum_{i=k}^{p-1} \binom {i}{k} = \binom {p}{k+1} \equiv 0 \qquad \text{(mod $p$)}
$$
for $0<k+1<p$; the only surviving term is $k=p-1$.
\item $\phi \Tilde{\Delta}(t)=\phi(0)=0=\Delta (z)=\Delta \phi(t)$.
\item $\phi \Tilde{\Delta}(t^{2})=\phi(0)=0=\Delta (z^{2})=\Delta \phi(t^{2})$.
\item $\phi \Tilde{\Delta}(tx)=\phi(0)=0=\Delta (zx)-\Delta(z)=\Delta(z(x-1))=\Delta \phi(tx)$.
\item $\Tilde{\Delta} (tv) = \sum_{i=0} ^{p-1} (-1)^{i+1}tx^{i}$, then
\begin{align}
\phi\Tilde{\Delta} (tv) &= \sum_{i=0} ^{p-1} (-1)^{i+1}z(x-1)^{i} = z\sum_{i=0} ^{p-1} \sum_{k=0}^{i}(-1)^{k+1} \binom {i}{k}x^{k} \notag \\
&= z\sum_{k=0} ^{p-1} (-1)^{k+1}\sum_{i=k}^{p-1} \binom {i}{k}x^{k} \notag \\
&\equiv -zx^{p-1} \tag{mod $p$} \\
&= \Delta \phi(tv) \notag
\end{align}
\item $\Tilde{\Delta} (vx) = \sum_{i=0} ^{p-1} (-1)^{i}x^{i}$, then
\begin{align}
\phi\Tilde{\Delta} (vx) &= \sum_{i=0} ^{p-1} (-1)^{i}(x-1)^{i} = \sum_{i=0} ^{p-1} \sum_{k=0}^{i}(-1)^{k} \binom {i}{k}x^{k} \notag \\
&= \sum_{k=0} ^{p-1} (-1)^{k}\sum_{i=k}^{p-1} \binom {i}{k}x^{k} \notag \\
&\equiv x^{p-1} \tag{mod $p$} \\
&= \Delta(yx)-\Delta(y)=\Delta(y(x-1))=\Delta \phi(vx) \notag
\end{align}
\end{itemize}
Since both are BV-algebras, formula (\ref{Dabc}) holds and $\phi \Tilde{\Delta}=\Delta \phi$.
\end{proof}
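The binomial congruence used above is the hockey-stick identity reduced modulo $p$; the following Python loop is a verification aid only (the primes are arbitrary samples).

```python
from math import comb

# Hockey-stick identity: Σ_{i=k}^{p−1} C(i, k) = C(p, k+1); the right
# side is ≡ 0 (mod p) exactly when 0 < k+1 < p, so only the term
# k = p−1 survives in the sums above.
for p in (3, 5, 7, 11, 13):
    for k in range(p):
        total = sum(comb(i, k) for i in range(k, p))
        assert total == comb(p, k + 1)
        assert (total % p == 0) == (k < p - 1)
```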
And for $p=2$,
\begin{teo}
Let $A=\ensuremath{\mathbb{F}_{2}} [x] /(x^{2})$. Then the canonical Frobenius form of the group ring induces a BV-algebra structure on $HH^{*}(A;A)$ given by
$$
HH^{*}(A;A)=\ensuremath{\mathbb{F}_{2}} [x,v,t]/(x^{2},v^{2}-t)\cong \Lambda(x)\otimes \ensuremath{\mathbb{F}_{2}} [v]
$$
$$
\Tilde{\Delta}(v^{k}x^{l})=k(1+x)v^{k-1}
$$
where $x\in HH^{0}(A;A)$ and $v\in HH^{1}(A;A)$.
\end{teo}
\begin{cor}
There is an isomorphism of BV-algebras
$$
\phi: HH^{*}(\ensuremath{\mathbb{F}_{2}} [x] /(x^{2}); \ensuremath{\mathbb{F}_{2}} [x] /(x^{2}))\overset{\cong}{\longrightarrow} HH^{*}(\ensuremath{\mathbb{F}_{2}}\left[\ensuremath{\mathbb{Z}} /2 \ensuremath{\mathbb{Z}} \right];\ensuremath{\mathbb{F}_{2}}\left[\ensuremath{\mathbb{Z}} /2 \ensuremath{\mathbb{Z}} \right])
$$
\end{cor}
\begin{proof}
For $p=2$ and $n=2$, we have
$$
HH^{*}(\ensuremath{\mathbb{F}_{2}}\left[\ensuremath{\mathbb{Z}} /2 \ensuremath{\mathbb{Z}} \right];\ensuremath{\mathbb{F}_{2}}\left[\ensuremath{\mathbb{Z}} /2 \ensuremath{\mathbb{Z}} \right])=\ensuremath{\mathbb{F}_{2}} [x,y,z]/(x^{2}-1,y^{2}-z)\cong \ensuremath{\mathbb{F}_{2}} [y]\otimes \ensuremath{\mathbb{F}_{2}} [x]/(x^{2}-1)
$$
and the BV-operator, $\Delta$, is given by
\begin{align}
\Delta (y^{k}x) &=0 \notag \\
\Delta (y^{k}) &= ky^{k-1}x \notag
\end{align}
The isomorphism $\phi$ is defined as follows
$$
\phi(x)=x-1 \qquad \text{and} \qquad \phi(v)=y
$$
It is clear that $\phi$ is a ring isomorphism. Now,
$$
\phi \Tilde{\Delta}(v^{k}x^{l})=\phi(k(1+x)v^{k-1})=kxy^{k-1}=\Delta(y^{k}(x-1)^{l})=\Delta \phi(v^{k}x^{l})
$$
Therefore, $\phi$ is an isomorphism of BV-algebras.
\end{proof}
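In coordinates, the verification above can be replayed mechanically: write elements of $\ensuremath{\mathbb{F}_{2}}\left[\ensuremath{\mathbb{Z}}/2\ensuremath{\mathbb{Z}}\right]$ as pairs $(a,b)=a+bg$ with $g^{2}=1$. The sketch below (an illustration, not part of the proof) checks that $\phi(x)=g-1$ squares to zero and that $\phi\Tilde{\Delta}$ and $\Delta\phi$ produce the same $y^{k-1}$-coefficient, namely $kg$, on every $v^{k}x^{l}$:

```python
def add(u, v):
    """Add coefficient pairs (a, b) representing a + b*g in F_2[Z/2Z]."""
    return ((u[0] + v[0]) % 2, (u[1] + v[1]) % 2)

def mul(u, v):
    """Multiply in F_2[Z/2Z], using the relation g^2 = 1."""
    a, b = u
    c, d = v
    return ((a * c + b * d) % 2, (a * d + b * c) % 2)

def scale(k, u):
    """Multiply by the integer k, which only matters modulo 2."""
    return ((k % 2) * u[0], (k % 2) * u[1])

one, g = (1, 0), (0, 1)
phi_x = add(g, one)  # phi(x) = x - 1, i.e. g + 1 in characteristic 2

# The relation x^2 = 0 is preserved: (g + 1)^2 = g^2 + 1 = 0.
assert mul(phi_x, phi_x) == (0, 0)

# On v^k x^l, phi(Delta~(v^k x^l)) has y^{k-1}-coefficient k * phi(1 + x),
# while Delta(phi(v^k x^l)) has y^{k-1}-coefficient k * g; they agree.
for k in range(8):
    assert scale(k, add(one, phi_x)) == scale(k, g)
print("phi preserves x^2 = 0 and intertwines the BV-operators")
```

Both checks are finite because the coefficients live in a four-element ring and $k$ only matters modulo $2$.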
\section{BV-Algebra Structure for Finite Abelian Groups}
Let $G$ be a finite abelian group. Then $G$ can be decomposed as follows
$$
G \cong \ensuremath{\mathbb{Z}} /p_1^{\alpha_1} \ensuremath{\mathbb{Z}} \oplus \ensuremath{\mathbb{Z}} /p_2^{\alpha_2} \ensuremath{\mathbb{Z}} \oplus \cdots \oplus \ensuremath{\mathbb{Z}} /p_k^{\alpha_k} \ensuremath{\mathbb{Z}}
$$
with the property that $\alpha_i \leq \alpha_{i+1}$, if $p_i = p_{i+1}$. Therefore,
$$
R[G] \cong R[\ensuremath{\mathbb{Z}} /p_1^{\alpha_1} \ensuremath{\mathbb{Z}}] \otimes R[\ensuremath{\mathbb{Z}} /p_2^{\alpha_2} \ensuremath{\mathbb{Z}}] \otimes \cdots \otimes R[\ensuremath{\mathbb{Z}} /p_k^{\alpha_k} \ensuremath{\mathbb{Z}}]
$$
and by corollary \ref{tensorBViso}, we have
\begin{teo}\label{BVfiniteabelian}
Let $R$ be a field and $G$ a finite abelian group. Then as BV-algebras
$$
HH^*(R[G];R[G]) \cong HH^*(R[\ensuremath{\mathbb{Z}} /p_1^{\alpha_1} \ensuremath{\mathbb{Z}}];R[\ensuremath{\mathbb{Z}} /p_1^{\alpha_1} \ensuremath{\mathbb{Z}}]) \otimes \cdots \otimes HH^*(R[\ensuremath{\mathbb{Z}} /p_k^{\alpha_k} \ensuremath{\mathbb{Z}}]; R[\ensuremath{\mathbb{Z}} /p_k^{\alpha_k} \ensuremath{\mathbb{Z}}])
$$
where the BV-structure for each factor is given by theorem \ref{BVZn} or \ref{BVZncharp}.
\end{teo}
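Concretely, the decomposition into prime-power cyclic factors is obtained by splitting each cyclic group via the Chinese remainder theorem; the helper below (hypothetical code, not from the text) computes it for a single cyclic group. For instance $R[\ensuremath{\mathbb{Z}}/12\ensuremath{\mathbb{Z}}]\cong R[\ensuremath{\mathbb{Z}}/4\ensuremath{\mathbb{Z}}]\otimes R[\ensuremath{\mathbb{Z}}/3\ensuremath{\mathbb{Z}}]$, so the theorem reduces $HH^{*}(R[\ensuremath{\mathbb{Z}}/12\ensuremath{\mathbb{Z}}])$ to the two prime-power factors.

```python
def primary_decomposition(n: int) -> list[int]:
    """Prime-power orders q = p^a with Z/nZ isomorphic to the direct sum
    of the groups Z/qZ (Chinese remainder theorem); plain trial division."""
    factors, d = [], 2
    while d * d <= n:
        if n % d == 0:
            q = 1
            while n % d == 0:
                n //= d
                q *= d
            factors.append(q)
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(primary_decomposition(12))   # -> [4, 3]
print(primary_decomposition(360))  # -> [8, 9, 5]
```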
\section{BV-Algebra Structure on \texorpdfstring{\boldmath{$HH^{*}(\ensuremath{\mathbb{Z}}[\ensuremath{\mathbb{Z}}/n\ensuremath{\mathbb{Z}}]\otimes\ensuremath{\mathbb{Z}}[\ensuremath{\mathbb{Z}}/m\ensuremath{\mathbb{Z}}])$}}{HH*(R[Z/nZxZ/mZ])}}
By theorem \ref{injection}, we have an injection of BV-algebras
$$
HH^{*}(\ensuremath{\mathbb{Z}}[\ensuremath{\mathbb{Z}}/n\ensuremath{\mathbb{Z}}])\otimes HH^{*}(\ensuremath{\mathbb{Z}}[\ensuremath{\mathbb{Z}}/m\ensuremath{\mathbb{Z}}])\hookrightarrow HH^{*}(\ensuremath{\mathbb{Z}}[\ensuremath{\mathbb{Z}}/n\ensuremath{\mathbb{Z}}]\otimes\ensuremath{\mathbb{Z}}[\ensuremath{\mathbb{Z}}/m\ensuremath{\mathbb{Z}}])
$$
where the BV-operator on the left hand side is trivial. Nevertheless, the BV-operator on the right hand side is highly non-trivial, as the following theorem shows.
\begin{teo}
Let $A=\ensuremath{\mathbb{Z}}[\ensuremath{\mathbb{Z}}/n\ensuremath{\mathbb{Z}}]$ and $B=\ensuremath{\mathbb{Z}}[\ensuremath{\mathbb{Z}}/m\ensuremath{\mathbb{Z}}]$ with $n=km$. Then as a BV-algebra,
\begin{align}
HH^{*}(A\otimes B;A\otimes B)&\cong \frac{\ensuremath{\mathbb{Z}}[x,t,a,b,c]}{(x^n-1, t^m-1, na, mb, mc, c^2)} \notag \\
\Delta(x^{i}t^{j}a^{l}b^{r}c^{s}) &= sx^{i-1}t^{j}a^{l}b^{r}((i-1)b-jka) \notag
\end{align}
in all cases except when $m$ is even and $k$ is odd, in which case we get
\begin{align}
HH^{*}(A\otimes B;A\otimes B)&\cong \frac{\ensuremath{\mathbb{Z}}[x,t,a,b,c]}{(x^n-1, t^m-1, na, mb, mc, c^2-\frac{m}{2}x^{n-2}ab(b+ka))} \notag \\
\Delta(x^{i}t^{j}a^{l}b^{r}c^{s}) &= sx^{i-1}t^{j}a^{l}b^{r}((i-1)b-jka) \notag
\end{align}
where $x,t\in HH^{0}(A\otimes B;A\otimes B)$, $a,b\in HH^{2}(A\otimes B;A\otimes B)$ and $c\in HH^{3}(A\otimes B;A\otimes B)$.
\end{teo}
\begin{proof}
By the K\"{u}nneth theorem, there is an isomorphism of modules
\begin{align}
HH^{i}(A\otimes B; A\otimes B)\cong &\bigoplus_{r+s=i} HH^{r}(A; A)\otimes HH^{s}(B;B) \notag \\
&\oplus \bigoplus_{r+s=i+1} Tor_1 ^{\ensuremath{\mathbb{Z}}} (HH^{r}(A; A), HH^{s}(B;B)) \notag
\end{align}
Recall that
$$
HH^{*}(A;A)=\ensuremath{\mathbb{Z}} [x,a]/(x^{n}-1,na) \qquad \text{and} \qquad HH^{*}(B;B)=\ensuremath{\mathbb{Z}} [t,b]/(t^{m}-1,mb)
$$
where $|x|=|t|=0$ and $|a|=|b|=2$. Hence all Tor groups vanish except when $r$ and $s$ are both even. In order to calculate $Tor_1 ^{\ensuremath{\mathbb{Z}}} (A/nx^{n-1}A, B/mt^{m-1}B)$, we use the following $\ensuremath{\mathbb{Z}}$-projective resolution
$$
0 \rightarrow A \xrightarrow{nx^{n-1}} A \xrightarrow{} A/nx^{n-1}A \rightarrow 0
$$
Applying $-\otimes B/mt^{m-1}B$, we get
$$
0 \rightarrow A\otimes B/mt^{m-1}B \xrightarrow{nx^{n-1}\otimes id} A \otimes B/mt^{m-1}B \rightarrow 0
$$
Thus,
\begin{align}
Tor_1 ^{\ensuremath{\mathbb{Z}}} (A/nx^{n-1}A, B/mt^{m-1}B)&=Ker(nx^{n-1}\otimes id)=Ker(kx^{n-1}\otimes m\cdot \,) \notag \\
&= A\otimes B/mt^{m-1}B \notag
\end{align}
Therefore,
$$
HH^i(A\otimes B) = \begin{cases}
A\otimes B &\text{if $i=0$} \notag\\
0 &\text{if $i=1$} \notag \\
A/nx^{n-1}A\otimes B \oplus A\otimes B/mt^{m-1}B &\text{if $i=2j$} \notag \\
\quad \oplus \displaystyle\bigoplus_{l=1}^{j-1} A/nx^{n-1}A\otimes B/mt^{m-1}B \notag \\
\displaystyle\bigoplus_{l=1}^{j} A\otimes B/mt^{m-1}B &\text{if $i=2j+1$} \notag \\
\end{cases} \notag
$$
Since $$HH^{*}(A; A)\otimes HH^{*}(B;B) \hookrightarrow HH^{*}(A\otimes B; A\otimes B)$$ is an injection of BV-algebras, we only need to find a generator for the odd dimensions. Let $c\in HH^{3}(A\otimes B; A\otimes B)$ be the coset $[1\otimes 1]\in A\otimes B/mt^{m-1}B$. Consider
$$
c = 1\otimes 1 - kx^{n-1}\otimes t
$$
to be the representative of $[1\otimes 1]$ in the total complex that calculates the Tor group. In the tensor product of the bar resolutions, $c$ is represented by
$$
c=\overline{\varphi_{_A}}_{1}^{*}(1)\otimes \overline{\varphi_{_B}}_{2}^{*}(1)-k \overline{\varphi_{_A}}_{2}^{*}(x^{n-1})\otimes \overline{\varphi_{_B}}_{1}^{*}(t)
$$
Since $HH^{2*}(A\otimes B;A\otimes B)\cong HH^{2*}(A;A)\otimes HH^{2*}(B;B)$, let $y\in HH^{2i}(A;A)$ and $z\in HH^{2j}(B;B)$. Then
\begin{align}
y\otimes z \smile c \Rightarrow & \; (\overline{\varphi_{_A}}_{2i}^{*}(y)\otimes \overline{\varphi_{_B}}_{2j}^{*}(z))\smile (\overline{\varphi_{_A}}_{1}^{*}(1)\otimes \overline{\varphi_{_B}}_{2}^{*}(1)-k \overline{\varphi_{_A}}_{2}^{*}(x^{n-1})\otimes \overline{\varphi_{_B}}_{1}^{*}(t)) \notag \\
=& \; (\overline{\varphi_{_A}}_{2i}^{*}(y)\smile \overline{\varphi_{_A}}_{1}^{*}(1))\otimes (\overline{\varphi_{_B}}_{2j}^{*}(z))\smile \overline{\varphi_{_B}}_{2}^{*}(1)) \notag \\
&-k (\overline{\varphi_{_A}}_{2i}^{*}(y)\smile \overline{\varphi_{_A}}_{2}^{*}(x^{n-1}))\otimes (\overline{\varphi_{_B}}_{2j}^{*}(z))\smile \overline{\varphi_{_B}}_{1}^{*}(t)) \notag \\
=& \; (\overline{\varphi_{_A}}_{2i+1}^{*}(y)\otimes \overline{\varphi_{_B}}_{2(j+1)}^{*}(z)) - k (\overline{\varphi_{_A}}_{2(i+1)}^{*}(yx^{n-1})\otimes \overline{\varphi_{_B}}_{2j+1}^{*}(zt)) \notag
\end{align}
Applying $\bar{\psi_{_A}^{*}}\otimes \bar{\psi_{_B}^{*}}$, we have
$$
(y\otimes z)(1\otimes 1 - k x^{n-1}\otimes t)
$$
Therefore, $HH^{2k+3}(A\otimes B;A\otimes B)$ is generated by $HH^{2k}(A\otimes B;A\otimes B)$ and $c\in HH^{3}(A\otimes B;A\otimes B)$. Now, consider $x$ to be the coset $[x\otimes 1]\in HH^{0}(A;A)\otimes HH^{0}(B;B)$, $t$ to be the coset $[1\otimes t] \in HH^{0}(A;A)\otimes HH^{0}(B;B)$, $a$ to be the coset $[1\otimes 1]\in HH^{2}(A;A)\otimes HH^{0}(B;B)$ and $b$ to be the coset $[1\otimes 1] \in HH^{0}(A;A)\otimes HH^{2}(B;B)$. Notice that $x,t,a$ and $b$ generate $HH^{2*}(A\otimes B;A\otimes B)$, and satisfy the relations $x^n-1=0$, $t^m-1=0$, $nx^{n-1}a=0$ and $mt^{m-1}b=0$. Now,
\begin{equation}
\resizebox{\textwidth}{!}{$
\begin{aligned}
c^2 =& \; (\overline{\varphi_{_A}}_{1}^{*}(1)\otimes \overline{\varphi_{_B}}_{2}^{*}(1)-k \overline{\varphi_{_A}}_{2}^{*}(x^{n-1})\otimes \overline{\varphi_{_B}}_{1}^{*}(t))\smile (\overline{\varphi_{_A}}_{1}^{*}(1)\otimes \overline{\varphi_{_B}}_{2}^{*}(1)-k \overline{\varphi_{_A}}_{2}^{*}(x^{n-1})\otimes \overline{\varphi_{_B}}_{1}^{*}(t)) \notag \\
=& \; \overline{\varphi_{_A}}_{1}^{*}(1)\smile \overline{\varphi_{_A}}_{1}^{*}(1)\otimes \overline{\varphi_{_B}}_{2}^{*}(1)\smile \overline{\varphi_{_B}}_{2}^{*}(1) + k \overline{\varphi_{_A}}_{2}^{*}(x^{n-1})\smile \overline{\varphi_{_A}}_{1}^{*}(1)\otimes \overline{\varphi_{_B}}_{1}^{*}(t)\smile \overline{\varphi_{_B}}_{2}^{*}(1) \notag \\
&- k \overline{\varphi_{_A}}_{1}^{*}(1)\smile \overline{\varphi_{_A}}_{2}^{*}(x^{n-1})\otimes \overline{\varphi_{_B}}_{2}^{*}(1)\smile \overline{\varphi_{_B}}_{1}^{*}(t) + k^2 \overline{\varphi_{_A}}_{2}^{*}(x^{n-1})\smile \overline{\varphi_{_A}}_{2}^{*}(x^{n-1}) \otimes \overline{\varphi_{_B}}_{1}^{*}(t)\smile \overline{\varphi_{_B}}_{1}^{*}(t) \notag \\
=& -\frac{(n-1)n}{2} x^{n-2}\otimes 1 - \frac{(m-1)mk^2}{2} x^{n-2}\otimes 1 \notag \\
=& -\frac{(n-1)n}{2} x^{n-2}ab^2 - \frac{(m-1)mk^2}{2} x^{n-2}a^2b \notag
\end{aligned}$}
\end{equation}
Thus,
\begin{itemize}
\item If $n$ and $m$ are odd then $c^2=0$.
\item If $n$ is even and $m$ is odd then $k$ is even and $c^2=0$.
\item If $m$ is even then $n$ is even and
\begin{align}
c^2&= -\frac{km}{2}((km-1)x^{n-2}ab^2+(m-1)kx^{n-2}a^2b) \notag \\
c^2&=\frac{km}{2}x^{n-2}ab(b+ka) \notag
\end{align}
In this case, if $k$ is even then $c^2=0$, and if $k$ is odd then $c^2=\frac{m}{2}x^{n-2}ab(b+ka)$.
\end{itemize}
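Since $mb=0$, any coefficient standing in front of a monomial containing $b$ may be reduced modulo $m$, and the case analysis above becomes a finite arithmetic statement. A numerical sanity check (an illustration under that reduction, not part of the proof):

```python
def c2_coeffs(k: int, m: int) -> tuple[int, int]:
    """Coefficients of x^{n-2}ab^2 and x^{n-2}a^2b in c^2 with n = km,
    reduced mod m (legitimate because mb = 0 kills multiples of m)."""
    n = k * m
    c_ab2 = (-(n - 1) * n // 2) % m
    c_a2b = (-(m - 1) * m * k * k // 2) % m
    return c_ab2, c_a2b

for k in range(1, 13):
    for m in range(1, 13):
        # m even and k odd: coefficients m/2 and k*m/2 = m/2 mod m; else zero.
        expected = (m // 2, m // 2) if m % 2 == 0 and k % 2 == 1 else (0, 0)
        assert c2_coeffs(k, m) == expected
print("c^2 = 0 unless m is even and k is odd")
```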
Also, notice that $mc=0$. To sum up, as algebras
$$
HH^{*}(A\otimes B;A\otimes B)\cong \frac{\ensuremath{\mathbb{Z}}[x,t,a,b,c]}{(x^n-1, t^m-1, na, mb, mc, c^2)}
$$
in all cases except when $m$ is even and $k$ is odd, in which case we get
$$
HH^{*}(A\otimes B;A\otimes B)\cong \frac{\ensuremath{\mathbb{Z}}[x,t,a,b,c]}{(x^n-1, t^m-1, na, mb, mc, c^2-\frac{m}{2}x^{n-2}ab(b+ka))}
$$
It only remains to calculate the BV-operator. By theorem \ref{BVinj}, the BV-operator $\Delta^{A\otimes B}$ can be expressed in terms of $\Delta^A$ and $\Delta^B$. Using the formulas obtained before for $\Delta^A$ and $\Delta^B$ on the cochain level (\ref{BVZncharp}), we have
\begin{align*}
\Delta(c)=&\Delta ( \overline{\varphi_{_A}}_{1}^{*}(1)\otimes \overline{\varphi_{_B}}_{2}^{*}(1)- k \overline{\varphi_{_A}}_{2}^{*}(x^{n-1})\otimes \overline{\varphi_{_B}}_{1}^{*}(t) ) \notag \\
=& \Delta^A (\overline{\varphi_{_A}}_{1}^{*}(1))\otimes\overline{\varphi_{_B}}_{2}^{*}(1) - \overline{\varphi_{_A}}_{1}^{*}(1)\otimes \Delta^B(\overline{\varphi_{_B}}_{2}^{*}(1)) \notag \\
&- k \Delta^A(\overline{\varphi_{_A}}_{2}^{*}(x^{n-1}))\otimes \overline{\varphi_{_B}}_{1}^{*}(t) - k \overline{\varphi_{_A}}_{2}^{*}(x^{n-1})\otimes \Delta^B(\overline{\varphi_{_B}}_{1}^{*}(t)) \notag \\
=& -\overline{\varphi_{_A}}_{0}^{*}(x^{n-1})\otimes \overline{\varphi_{_B}}_{2}^{*}(1) \notag \\
\Aboxed{&\Delta(c)=-x^{n-1}b} \notag
\end{align*}
\begin{align*}
\Delta(xc)=&\Delta ( \overline{\varphi_{_A}}_{0}^{*}(x)\smile \overline{\varphi_{_A}}_{1}^{*}(1)\otimes \overline{\varphi_{_B}}_{0}^{*}(1)\smile \overline{\varphi_{_B}}_{2}^{*}(1) \notag \\
&- k \overline{\varphi_{_A}}_{0}^{*}(x)\smile \overline{\varphi_{_A}}_{2}^{*}(x^{n-1})\otimes \overline{\varphi_{_B}}_{0}^{*}(1)\smile \overline{\varphi_{_B}}_{1}^{*}(t) ) \notag \\
=& \Delta^A (\overline{\varphi_{_A}}_{1}^{*}(x))\otimes\overline{\varphi_{_B}}_{2}^{*}(1) - \overline{\varphi_{_A}}_{1}^{*}(x)\otimes \Delta^B(\overline{\varphi_{_B}}_{2}^{*}(1)) \notag \\
&- k \Delta^A(\overline{\varphi_{_A}}_{2}^{*}(1))\otimes \overline{\varphi_{_B}}_{1}^{*}(t) - k \overline{\varphi_{_A}}_{2}^{*}(1)\otimes \Delta^B(\overline{\varphi_{_B}}_{1}^{*}(t)) \notag \\
=& 0 \notag \\
\Aboxed{&\Delta(xc)=0} \notag
\end{align*}
\begin{align*}
\Delta(tc)=&\Delta ( \overline{\varphi_{_A}}_{0}^{*}(1)\smile \overline{\varphi_{_A}}_{1}^{*}(1)\otimes \overline{\varphi_{_B}}_{0}^{*}(t)\smile \overline{\varphi_{_B}}_{2}^{*}(1) \notag \\
&- k \overline{\varphi_{_A}}_{0}^{*}(1)\smile \overline{\varphi_{_A}}_{2}^{*}(x^{n-1})\otimes \overline{\varphi_{_B}}_{0}^{*}(t)\smile \overline{\varphi_{_B}}_{1}^{*}(t) ) \notag \\
=& \Delta^A (\overline{\varphi_{_A}}_{1}^{*}(1))\otimes\overline{\varphi_{_B}}_{2}^{*}(t) - \overline{\varphi_{_A}}_{1}^{*}(1)\otimes \Delta^B(\overline{\varphi_{_B}}_{2}^{*}(t)) \notag \\
&- k \Delta^A(\overline{\varphi_{_A}}_{2}^{*}(x^{n-1}))\otimes \overline{\varphi_{_B}}_{1}^{*}(t^2) - k \overline{\varphi_{_A}}_{2}^{*}(x^{n-1})\otimes \Delta^B(\overline{\varphi_{_B}}_{1}^{*}(t^2)) \notag \\
=& - \overline{\varphi_{_A}}_{0}^{*}(x^{n-1})\otimes \overline{\varphi_{_B}}_{2}^{*}(t) - k \overline{\varphi_{_A}}_{2}^{*}(x^{n-1})\otimes \overline{\varphi_{_B}}_{0}^{*}(t) \notag \\
\Aboxed{&\Delta(tc)=-x^{n-1}t(b+ka)} \notag
\end{align*}
\begin{align*}
\Delta(ac)=&\Delta ( \overline{\varphi_{_A}}_{2}^{*}(1)\smile \overline{\varphi_{_A}}_{1}^{*}(1)\otimes \overline{\varphi_{_B}}_{0}^{*}(1)\smile \overline{\varphi_{_B}}_{2}^{*}(1) \notag \\
&- k \overline{\varphi_{_A}}_{2}^{*}(1)\smile \overline{\varphi_{_A}}_{2}^{*}(x^{n-1})\otimes \overline{\varphi_{_B}}_{0}^{*}(1)\smile \overline{\varphi_{_B}}_{1}^{*}(t) ) \notag \\
=& \Delta^A (\overline{\varphi_{_A}}_{3}^{*}(1))\otimes\overline{\varphi_{_B}}_{2}^{*}(1) - \overline{\varphi_{_A}}_{3}^{*}(1)\otimes \Delta^B(\overline{\varphi_{_B}}_{2}^{*}(1)) \notag \\
&- k \Delta^A(\overline{\varphi_{_A}}_{4}^{*}(x^{n-1}))\otimes \overline{\varphi_{_B}}_{1}^{*}(t) - k \overline{\varphi_{_A}}_{4}^{*}(x^{n-1})\otimes \Delta^B(\overline{\varphi_{_B}}_{1}^{*}(t)) \notag \\
=& -(2n+1)\overline{\varphi_{_A}}_{2}^{*}(x^{n-1}) \otimes \overline{\varphi_{_B}}_{2}^{*}(1) \notag \\
\Aboxed{&\Delta(ac)=-x^{n-1}ab} \notag
\end{align*}
\begin{align*}
\Delta(bc)=&\Delta ( \overline{\varphi_{_A}}_{0}^{*}(1)\smile \overline{\varphi_{_A}}_{1}^{*}(1)\otimes \overline{\varphi_{_B}}_{2}^{*}(1)\smile \overline{\varphi_{_B}}_{2}^{*}(1) \notag \\
&- k \overline{\varphi_{_A}}_{0}^{*}(1)\smile \overline{\varphi_{_A}}_{2}^{*}(x^{n-1})\otimes \overline{\varphi_{_B}}_{2}^{*}(1)\smile \overline{\varphi_{_B}}_{1}^{*}(t) ) \notag \\
=& \Delta^A (\overline{\varphi_{_A}}_{1}^{*}(1))\otimes\overline{\varphi_{_B}}_{4}^{*}(1) - \overline{\varphi_{_A}}_{1}^{*}(1)\otimes \Delta^B(\overline{\varphi_{_B}}_{4}^{*}(1)) \notag \\
&- k \Delta^A(\overline{\varphi_{_A}}_{2}^{*}(x^{n-1}))\otimes \overline{\varphi_{_B}}_{3}^{*}(t) - k \overline{\varphi_{_A}}_{2}^{*}(x^{n-1})\otimes \Delta^B(\overline{\varphi_{_B}}_{3}^{*}(t)) \notag \\
=& -\overline{\varphi_{_A}}_{0}^{*}(x^{n-1}) \otimes \overline{\varphi_{_B}}_{4}^{*}(1) + 2km \overline{\varphi_{_A}}_{2}^{*}(x^{n-1}) \otimes \overline{\varphi_{_B}}_{2}^{*}(1) \notag \\
\Aboxed{&\Delta(bc)=-x^{n-1}b^2} \notag
\end{align*}
Using equation (\ref{Dabc}) and induction on the powers of $x,t,a,b$ and $c$, we have
$$
\Delta(x^{i}t^{j}a^{l}b^{r}c^{s}) = sx^{i-1}t^{j}a^{l}b^{r}((i-1)b-jka)
$$
\end{proof}
\section{BV-Algebra Structure on \texorpdfstring{\boldmath{$HH^{*}(R[\ensuremath{\mathbb{Z}}^{k}])$}}{HH*(R[Zk])}}
From now on, we assume that $A$ is $R[\ensuremath{\mathbb{Z}}]\cong R [t,t^{-1}]$ with $R$ a commutative ring.
\begin{pro}
The following is an $A^{e}$-projective resolution of $A$
\begin{equation}\label{Pres}
{\ensuremath{\mathbb{P}}}(A): 0 \rightarrow A\otimes A \xrightarrow{d_1} A\otimes A \xrightarrow{\mu} A \rightarrow 0
\end{equation}
with $\mu (a\otimes b)=ab$ and $d_{1}(a\otimes b)= (a\otimes b)(1\otimes t - t \otimes 1)$.
\end{pro}
\begin{proof}
From the definition, it follows that $\mu d_{1}=0$. Now, we define the following maps of right $A$-modules
\[
0 \leftarrow A\otimes A \xleftarrow{s_1} A\otimes A \xleftarrow{s_0} A \leftarrow 0
\]
\begin{align}
s_0 (a)&=1\otimes a \notag \\
s_1 (t^{i}\otimes 1)&= \begin{cases}
-\displaystyle\sum_{j=0} ^{i-1} t^j \otimes t^{i-j-1} &\text{if $i\geq 1$} \notag\\
0 &\text{if $i=0$} \notag \\
\displaystyle\sum_{j=0} ^{-i-1} t^{-j-1} \otimes t^{i+j} &\text{if $i\leq -1$} \notag
\end{cases} \notag
\end{align}
By direct calculation, it follows that $\mu s_0= id$ and $d_{k+1}s_{k+1} + s_{k}d_{k} = id$ for all $k\geq 0$ (with $d_0=\mu$ and $s_{k}=0$ for $k\geq 2$). Therefore, the complex is acyclic.
\end{proof}
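The homotopy identities can also be tested mechanically on basis elements $t^{i}\otimes t^{j}$, encoding formal $\ensuremath{\mathbb{Z}}$-linear combinations as dictionaries (an illustrative check, not part of the proof):

```python
from collections import defaultdict

def combine(*terms):
    """Sum formal Z-linear combinations {basis element: coefficient}."""
    out = defaultdict(int)
    for term in terms:
        for key, c in term.items():
            out[key] += c
    return {key: c for key, c in out.items() if c}

def d1(i, j):
    """d_1(t^i (x) t^j) = (t^i (x) t^j)(1 (x) t - t (x) 1)."""
    return combine({(i, j + 1): 1}, {(i + 1, j): -1})

def s1(i, j):
    """Right A-linear extension of s_1(t^i (x) 1) from the proof."""
    if i >= 1:
        return {(r, i - r - 1 + j): -1 for r in range(i)}
    if i <= -1:
        return {(-r - 1, i + r + j): 1 for r in range(-i)}
    return {}

def apply_map(f, term):
    """Apply a basis-defined map linearly to a formal combination."""
    return combine(*[{key: c * v for key, v in f(*basis).items()}
                     for basis, c in term.items()])

for i in range(-4, 5):
    for j in range(-4, 5):
        basis = {(i, j): 1}
        # d_1 s_1 + s_0 mu = id on P(A)_0, with s_0(mu(t^i (x) t^j)) = 1 (x) t^{i+j}
        assert combine(apply_map(d1, s1(i, j)), {(0, i + j): 1}) == basis
        # s_1 d_1 = id on P(A)_1
        assert apply_map(s1, d1(i, j)) == basis
print("contracting homotopy identities hold on all tested monomials")
```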
Tensoring this resolution by $A$ as $A^{e}$-modules, we obtain the complex
\begin{equation}\label{hoZ}
0 \rightarrow A \xrightarrow{0} A
\end{equation}
Taking $Hom_{A^{e}}(-, A)$ of ${\ensuremath{\mathbb{P}}}(A)$, we obtain the complex
\begin{equation}\label{cohZ}
A \xrightarrow{0} A \rightarrow 0
\end{equation}
Then
$$
HH_i(A;A) = HH^i(A;A) = \begin{cases}
A &\text{if $i=0,1$} \notag\\
0 &\text{otherwise} \notag
\end{cases} \notag
$$
To calculate the cup product, we define $\Delta_{{\ensuremath{\mathbb{P}}}(A)}: {\ensuremath{\mathbb{P}}}(A) \longrightarrow {\ensuremath{\mathbb{P}}}(A)\underset{A}\otimes {\ensuremath{\mathbb{P}}}(A)$ as follows
\begin{align}\label{diagA}
\Delta_{{\ensuremath{\mathbb{P}}}(A)_{0}}: A^{2} &\longrightarrow A^{2}\underset{A}\otimes A^{2} \notag \\
a\otimes b &\longmapsto a\otimes 1\underset{A}\otimes 1\otimes b \\
\Delta_{{\ensuremath{\mathbb{P}}}(A)_{1}}: A^{2} &\longrightarrow A^{2}\underset{A}\otimes A^{2} \oplus A^{2}\underset{A}\otimes A^{2} \notag \\
a\otimes b &\longmapsto (a\otimes 1\underset{A}\otimes 1\otimes b, a\otimes 1\underset{A}\otimes 1\otimes b) \notag
\end{align}
By direct computations, it follows that $\Delta_{{\ensuremath{\mathbb{P}}}(A)}$ is a diagonal approximation map.
\begin{pro}\label{isoalgZ}
As algebras,
$$
HH^{*}(R[\ensuremath{\mathbb{Z}}];R[\ensuremath{\mathbb{Z}}])\cong R[x,x^{-1}]\otimes \Lambda(y)
$$
where $x,x^{-1}\in HH^{0}(A;A)$ and $y\in HH^{1}(A;A)$.
\end{pro}
\begin{proof}
Using the diagonal approximation map (\ref{diagA}), it can be checked that the cup product is given by multiplication in degrees $0$ and $1$, and vanishes in degrees greater than or equal to $2$. Therefore, taking $x,x^{-1}\in HH^{0}(A;A)$ to be $t,t^{-1}\in A$ and $y\in HH^{1}(A;A)$ to be $1\in A$, we get the desired isomorphism of algebras.
\end{proof}
From the definition of the action (\ref{actionHH}) and the diagonal map (\ref{diagA}), it follows that
\begin{lem}
The action of $HH^*(A;A)$ on $HH_1(A;A)$ is given by
\begin{align}
\rho: HH_{1}(A;A)\otimes HH^*(A;A)&\longrightarrow HH_{1-*}(A;A) \notag \\
a\otimes b&\longmapsto (-1)^{|b|} ab \notag
\end{align}
\end{lem}
Let $\psi: {\ensuremath{\mathbb{P}}}(A) \rightarrow {\ensuremath{\mathbb{B}}}(A)$ and $\varphi: {\ensuremath{\mathbb{B}}}(A) \rightarrow {\ensuremath{\mathbb{P}}}(A)$ be the chain maps defined as follows
\[
\xymatrix{ \cdots \ar[r] & 0 \ar[r]^{0} \ar@<-0.5ex>[d]_{0} & A^{2} \ar[r]^{d_1} \ar@<-0.5ex>[d]_{\psi_1} & A^{2} \ar[r]^(0.56){\mu} \ar@<-0.5ex>[d]_{\psi_0} & A \ar[r] \ar@{=}[d] & 0\\
\cdots \ar[r] & A\otimes \bar{A}^{2}\otimes A \ar[r]^(0.53){\partial_{2}} \ar@<-0.5ex>[u]_{0} & A\otimes \bar{A}\otimes A \ar[r]^(0.64){\partial_{1}} \ar@<-0.5ex>[u]_{\varphi_1} & A^{2} \ar[r]^(0.56){\partial_{0}} \ar@<-0.5ex>[u]_{\varphi_0} & A \ar[r]& 0 }
\]
\begin{align}
\psi_0 &\equiv id & \varphi_0 &\equiv id \notag \\
\psi_{1}(1\otimes 1) &= -1\otimes t\otimes 1 & \varphi_{1}(1\otimes t^{k} \otimes 1)&=\begin{cases}
-\displaystyle\sum_{j=0} ^{k-1} t^j \otimes t^{k-j-1} &\text{if $k\geq 1$} \notag\\
0 &\text{if $k=0$} \notag \\
\displaystyle\sum_{j=0} ^{-k-1} t^{-j-1} \otimes t^{k+j} &\text{if $k\leq -1$} \notag
\end{cases} \notag \notag
\end{align}
\begin{pro}\label{iden}
Using the identifications $A^{n+2} \underset{A^e}\otimes A\cong A^{n}\otimes A$ and $A^{2}\underset{A^e}\otimes A\cong A$ \\
the induced maps for $\psi_*$ and $\varphi_*$ are
\begin{align}
\bar{\psi}_0 &\equiv id & \bar{\varphi}_0 &\equiv id \notag \\
\bar{\psi}_1:A &\rightarrow A\otimes A & \bar{\varphi}_1:A\otimes A &\rightarrow A \notag \\
a &\mapsto -a\otimes t & a\otimes t^k &\mapsto -kat^{k-1} \notag
\end{align}
Using the identifications $Hom_{A^{e}} (A^{n+2}, A)\cong Hom (A^{n}, A)$ and $Hom_{A^{e}}(A^{2} , A)\cong A$ \\
the induced maps for $\psi_*$ and $\varphi_*$ are
\begin{align}
\bar{\psi}^*_0 &\equiv id & \bar{\varphi}^*_0 &\equiv id \notag \\
\bar{\psi}^*_1:Hom(A,A) &\rightarrow A & \bar{\varphi}^*_1:A &\rightarrow Hom(A,A) \notag \\
f &\mapsto -f(t) & a &\mapsto f_a: A\rightarrow A \notag \\
& & & \qquad \quad \; t^k\mapsto -kat^{k-1} \notag
\end{align}
\end{pro}
The BV-structure on Hochschild cohomology of the group ring of the integers is given by
\begin{teo}\label{BVKZ}
Let $a=ut^k$ with $u\in R^{\times}$ and $k\in\ensuremath{\mathbb{Z}}$. As a BV-algebra,
\begin{align}
HH^{*}(R[\ensuremath{\mathbb{Z}}];R[\ensuremath{\mathbb{Z}}])&\cong R[x,x^{-1}]\otimes \Lambda(y) \notag \\
\Delta_a (x^{i}) &= 0 \notag \\
\Delta_a (yx^{i}) &= (i+k)x^{i-1} \notag
\end{align}
where $x,x^{-1}\in HH^{0}(A;A)$ and $y\in HH^{1}(A;A)$.
\end{teo}
\begin{proof}
Let $a\in HH_1(A;A)\cong R[\ensuremath{\mathbb{Z}}]$ and $\rho_a$ be the map defined as follows
\begin{align}
\rho_a:HH^*(A;A)&\longrightarrow HH_{1-*}(A;A) \notag \\
b&\longmapsto \rho(a\otimes b) \notag
\end{align}
Since the action is given by multiplication, $\rho_a$ is an isomorphism for any unit $a\in R[\ensuremath{\mathbb{Z}}]$. Moreover, any unit in $R[\ensuremath{\mathbb{Z}}]$ is of the form $a=ut^k$ with $u\in R^{\times}$ and $k\in\ensuremath{\mathbb{Z}}$. By theorem \ref{HHBVact}, $HH^*(A;A)$ is a BV-algebra and the BV-operator $\Delta_a$ is given by
\[
\xymatrix{
\Delta_a: HH^*(A;A) \ar[d]_{\rho_a} \ar[r] & HH^{*-1}(A;A) \\
HH_{1-*}(A;A) \ar[r]^-{B} & HH_{1-(*-1)}(A;A) \ar[u]_{\rho^{-1}_a}
}
\]
For degree reasons, $\Delta_a(x^i)=0$, and $\Delta_a(yx^i)$ is given by
$$
t^i \xmapsto{\rho_a} -ut^{i+k} \xmapsto{\bar{\psi}^*_{0}} -ut^{i+k} \xmapsto{B} -u\otimes t^{i+k} \xmapsto{\bar{\varphi}_{1}} u(i+k)t^{i+k-1} \xmapsto{\rho^{-1}_a} (i+k)t^{i-1}
$$
\end{proof}
In \cite{M3}, Menichi calculates the BV-algebra structure of the homology of the free loop space of ${\ensuremath{\mathbb{S}^{1}}}$
\begin{teo}[\cite{M3} Theorem 10]
As a BV-algebra,
\begin{align}
\ensuremath{\mathbb{H}}_{*}(L{\ensuremath{\mathbb{S}^{1}}};R)&\cong R[x,x^{-1}]\otimes \Lambda(z) \notag \\
\Delta(x^{i}) &= 0 \notag \\
\Delta(zx^{i}) &= ix^{i} \notag
\end{align}
where $|x|=0$ and $|z|=-1$.
\end{teo}
This BV-algebra and the BV-algebra of the Hochschild cohomology of the group ring of the integers are related by
\begin{cor}
There is an isomorphism of BV-algebras
$$
\phi: \ensuremath{\mathbb{H}}_{*}(L{\ensuremath{\mathbb{S}^{1}}};R)\xrightarrow{\cong} HH^{*}(R[\ensuremath{\mathbb{Z}}];R[\ensuremath{\mathbb{Z}}])
$$
\end{cor}
\begin{proof}
By theorem \ref{BVKZ}, $HH^{*}(R[\ensuremath{\mathbb{Z}}];R[\ensuremath{\mathbb{Z}}])$ can be endowed with as many BV-algebra structures as there are units in $R[\ensuremath{\mathbb{Z}}]$. For the existence of this isomorphism, we consider the BV-operator given by the unit $a=t^{-1}$. Then as a BV-algebra
\begin{align}
HH^{*}(R[\ensuremath{\mathbb{Z}}];R[\ensuremath{\mathbb{Z}}])&\cong R[x,x^{-1}]\otimes \Lambda(y) \notag \\
\tilde{\Delta} (y^r x^{i}) &= r(i-1)x^{i-1} \notag
\end{align}
The isomorphism $\phi$ is defined as follows
$$
\phi(x)=x \qquad \text{and} \qquad \phi(z)=yx
$$
It is clear that $\phi$ is an isomorphism of graded algebras, and
$$
\phi\Delta(z^r x^i) = \phi(rix^i) = rix^{i} =r(i+r-1)x^{i+r-1}= \tilde{\Delta}(y^rx^{i+r}) = \tilde{\Delta}\phi(z^r x^i)
$$
\end{proof}
Since $HH^{*}(R[\ensuremath{\mathbb{Z}}];R[\ensuremath{\mathbb{Z}}])$ is $R$-projective and the resolution ${\ensuremath{\mathbb{P}}}(A)$ (\ref{Pres}) satisfies the conditions of theorem \ref{injection}, by theorem \ref{BVisoCY} we get
\begin{teo}\label{BVfiniterank}
As BV-algebras,
\begin{align}
HH^{*}(R[\ensuremath{\mathbb{Z}}^{n}]&;R[\ensuremath{\mathbb{Z}}^{n}])= R[x_1,x_1 ^{-1},\dots,x_n,x_n ^{-1}]\otimes \Lambda(y_1,\dots ,y_n) \notag \\
\Delta(x_1 ^{i_1}\cdots x_n ^{i_n}y_1 ^{r_1}\cdots y_n ^{r_n}) &= \displaystyle \sum_{k=1} ^{n} (-1)^{^{r_1+\cdots +r_{k-1}}} r_k( i_k-1) x_1 ^{i_1}\cdots x_k^{i_k-1}\cdots x_n ^{i_n}y_1 ^{r_1}\cdots \widehat{y_k ^{r_k}}\cdots y_n ^{r_n}\notag
\end{align}
where $|x_i|=|x_i ^{-1}|=0$ and $|y_i|=1$ for $1\leq i\leq n$.
\end{teo}
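As a consistency check, the BV axiom $\Delta^{2}=0$ holds for this formula: removing a pair $y_{k}, y_{l}$ in the two possible orders produces opposite Koszul signs. The sketch below (an illustration with the $k$-th $x$-exponent decremented, not part of the proof) verifies the cancellation exhaustively for $n=3$:

```python
from itertools import product

def bv_delta(mono):
    """Delta on a monomial ((i_1,...,i_n), (r_1,...,r_n)), r_k in {0, 1}:
    each y_k present contributes (-1)^{r_1+...+r_{k-1}} (i_k - 1) times the
    monomial with y_k removed and the k-th x-exponent decremented."""
    i, r = mono
    out, sign_exp = {}, 0
    for k in range(len(r)):
        if r[k]:
            coeff = (-1) ** sign_exp * (i[k] - 1)
            if coeff:
                key = (i[:k] + (i[k] - 1,) + i[k + 1:],
                       r[:k] + (0,) + r[k + 1:])
                out[key] = out.get(key, 0) + coeff
            sign_exp += r[k]
    return out

def delta_linear(comb):
    """Extend Delta linearly to integer combinations of monomials."""
    out = {}
    for mono, c in comb.items():
        for mono2, c2 in bv_delta(mono).items():
            out[mono2] = out.get(mono2, 0) + c * c2
    return {m: c for m, c in out.items() if c}

# The BV axiom Delta^2 = 0, tested exhaustively for n = 3:
for i in product(range(-2, 3), repeat=3):
    for r in product((0, 1), repeat=3):
        assert delta_linear(bv_delta((i, r))) == {}
print("Delta^2 = 0 on all tested monomials")
```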
As a corollary, we have
\begin{cor}
As Gerstenhaber algebras,
$$
HH^{*}(R[\ensuremath{\mathbb{Z}}^{n}];R[\ensuremath{\mathbb{Z}}^{n}])= R[x_1,x_1 ^{-1},\dots,x_n,x_n ^{-1}]\otimes \Lambda(y_1,\dots ,y_n)
$$
where $|x_i|=|x_i ^{-1}|=0$ and $|y_i|=1$ for $1\leq i\leq n$. The bracket is generated by
$$
\lbrace x^{r}_i,x^{s}_j\rbrace =0, \qquad \lbrace y_i,y_j\rbrace=0, \qquad \text{and} \qquad \lbrace x^{r}_i,y_j\rbrace=-r\delta_{ij}x_i^{r-1}
$$
\end{cor}
Let $G$ be a finitely generated abelian group. Then $G$ can be decomposed as $G \cong \ensuremath{\mathbb{Z}}^{n}\oplus H
$ with $H$ a finite abelian group. Therefore,
$$
R[G] \cong R[\ensuremath{\mathbb{Z}}^n]\otimes R[H]
$$
By theorem \ref{AxB}, there is an isomorphism of Gerstenhaber algebras
$$
HH^*(R[G];R[G]) \cong HH^*(R[\ensuremath{\mathbb{Z}}^{n}];R[\ensuremath{\mathbb{Z}}^{n}]) \otimes HH^*(R[H]; R[H])
$$
\begin{cor}
Let $G$ be a finitely generated abelian group. Then as a BV-algebra
$$
HH^*(R[G];R[G]) \cong HH^*(R[\ensuremath{\mathbb{Z}}^{n}];R[\ensuremath{\mathbb{Z}}^{n}]) \otimes HH^*(R[H]; R[H])
$$
with BV-operator given by
$$
\Delta= \Delta^{\ensuremath{\mathbb{Z}}^n}\otimes id \pm id \otimes \Delta^{H}
$$
where $\Delta^{\ensuremath{\mathbb{Z}}^n}$ is given by theorem \ref{BVfiniterank} and $\Delta^H$ is the BV-operator for the finite group $H$.
\end{cor}
The development of efficient and selective catalysts has been one of the
keystones of technological progress in the last century \cite{catalysis}.
Currently, most fossil fuels used in transportation are refined using catalytic
processes \cite{Lee2014,deLasa2011,Primo2014}, while a substantial fraction of
chemical products are manufactured through technologies based on catalysis
\cite{Jessop1994,Busca2007,Suryanto2019}. Additionally, catalysis is critical
for the electrochemical processes necessary to generate clean energy or/and to
produce clean fuels, such as hydrogen \cite{Norskov2006, Koper2010}. In
particular, the hydrogen economy is based on two critical reactions that
account for the production of hydrogen and the generation of clean energy: the
hydrogen evolution reaction (HER) and the oxygen reduction reaction (ORR),
respectively \cite{Vesborg2015,Shao2016}. So far, Pt stands as the best
catalyst for these reactions, but its high cost and limited availability
hinder the industrial application of this technology, leading to a search for
cheaper and more widely available catalysts \cite{Hansen2021}.
It is known that the rate limiting steps for both HER and ORR reactions are
associated with the adsorption of intermediate species (H, O, and OH) onto the
surface of the catalyst \cite{Nrskov2004,Dubouis2019}. These processes are controlled by
the electronic structure of the catalysts, which can be modified using different techniques, e.g. the addition of alloying elements, the introduction of defects, and/or the surface orientation \cite{EscuderoEscribano2016, Fu2019,ZamoraZeledn2021}.
Another effective mechanism to modify the electronic structure is through the
introduction of large elastic strains (1-3\%) in the lattice
\cite{Shuttleworth2016,Shuttleworth20177}. This strategy - denominated elastic
strain engineering - opens the possibility to modify the electronic, optical,
magnetic, and catalytic properties of solids through the systematic application
of elastic strains \cite{Feng2018, Wang2018,Kong2017,Rudi2012}. Moreover,
recent experimental work has shown that much larger tensile elastic strains
(close to the theoretical limit of 10\%) can be attained in particular
nanomaterials (such as nanoneedle diamond structures as well as carbon
nanotubes) \cite{LLorca2018,Banerjee2018} and metallization of diamond has been
predicted at this strain level by means of first principles calculations
\cite{Shi2019}. Dramatic changes in both the electronic structure and the
catalytic properties of materials can therefore be expected when the
elastic strains approach the instability limit. However, this territory is
unexplored due to the experimental difficulties associated with the application
of such large elastic strains.
The effect of mechanical strains on the electronic properties of transition
metals, which are the most important for the HER and ORR reactions, has been
qualitatively analyzed by many authors
\cite{Bhattacharjee2016,Grunze1982,Rao1991,Ruban1997,Cheng1995,Xin2014}. Their
findings can be summarized in the so-called d-band center model
\cite{Hammer2000, Hammer1995, Hammeeer1995}, developed more than two decades
ago. This model is based on the effective medium theory
\cite{PhysRevB.35.7423,Jacobsen1996} and the Newns-Anderson model
\cite{NEWNS1969,Anderson1961}, and relates the adsorption energy to the change
in the local electron density of the surface. The changes in the adsorption
energy with mechanical strain in the d-band model are related to the change of
the energy of the d-band center, $\epsilon_{d}$, and the variation in the
adsorption energy from one transition metal surface to another correlates with
the upward shift of $\epsilon_{d}$ with respect to the Fermi level. A shift
towards higher energies allows the formation of a larger number of empty
anti-bonding states, leading to a stronger (more negative) binding energy. Even
though the d-band model can be used to rationalize the influence of elastic strains on the catalytic activity, there is no model to directly predict the adsorption energy as a function of the applied strain using $\epsilon_{d}$.
In this investigation, the relationship between the applied elastic strain
tensor and the adsorption energy of H, O, and OH on the surface of eleven
transition metals is determined by means of density functional theory (DFT)
calculations. A simple relationship between the area of the deformed surface
hole where the adsorbates lay and the energy associated with the adsorption
process is found for all metals. It was also determined that the adsorption
energy only depends on the deformed area of the hole and is independent of the
elastic strain tensor applied to achieve this area. Thus, variations in the
adsorption energy with elastic strain could be obtained with very limited
computational effort. Moreover, a linear relationship between the adsorption
energy and the Fermi energy was also found for all metals, with the slope
indicating the sensitivity of the catalytic properties of each metal to
elastic strain engineering. This information is relevant as
a first approach to provide a quantitative understanding of the application of elastic
strain engineering to modulate the adsorption energy of transition metals and
support the search for better catalysts.
\section{Computational details}
The DFT plane wave simulations were performed using the Quantum Espresso
package \cite{Giannozzi2009}. The electron exchange-correlation was described
using the generalized gradient approximation (GGA) with the
Perdew-Burke-Ernzerhof (PBE) exchange-correlation functional \cite{Perdew1996}
and the calculations were carried out using ultrasoft pseudopotentials.
Brillouin zone calculations were performed using a
Marzari-Vanderbilt-DeVita-Payne cold smearing of 0.015 Ry \cite{Marzari1999}.
The planewave basis was expanded to a cutoff energy of 80 Ry, and the
Monkhorst-Pack k-point grids were ($20 \times 20 \times 20$) for the unit
cells of all metals, and ($4 \times 4 \times 1$) for all slab supercells. DFT
calculations were performed for 11 transition metals from the groups 9th to
12th in the periodic table with either fcc (Rh, Ir, Ni, Pd, Pt, Cu, Ag, Au) or
hcp (Co, Zn, Cd) structure. The corresponding equilibrium bulk lattice
constants are shown in Tables S1 and S2 of the supporting information for fcc
and hcp metals, respectively.
The adsorption of H, O, and OH was modeled on four-layer slabs with the (111)
surface facet in the fcc metals and the (0001) surface facet in the hcp metals.
These surfaces were selected because they present the lowest surface energy, and are thus the most stable, for each lattice~\cite{Jain2013}. The (2x2) slab supercells with four layers of atoms perpendicular to the
surface were generated with the Atomic Simulation Environment (ASE)
\cite{998641} from the equilibrium lattice parameters. The periodic slabs were separated by 10 \AA~ of vacuum (Figure \ref{supercells}) in the direction perpendicular to the surface. Adsorption energies were
calculated for each chemical species assuming a coverage of 1/4 monolayer. All metal atoms in the top two layers of the slab and all adsorbed H, O, and OH
were fully relaxed, while the positions of metal atoms in the bottom two layers
were fixed. The adsorption energies of H in the case of Pt (111) surfaces were
evaluated for slabs including 3, 4, and 5 atomic layers perpendicular to the
surface and the results obtained with 4 and 5 layers were very close,
indicating that 4 layers were enough to reach convergence. Moreover,
simulations of H adsorption on Pt (111) surfaces carried with ultrasoft
pseudopotentials were compared with those obtained using the projector
augmented-wave method (PAW), which provides more accurate
results~\cite{Blchl1994,Prandini2018,Fearon2006,Hanh2014,Kolb2014} at a
higher computational cost. The differences in adsorption energies between both simulations were
negligible. Thus, it was concluded that the combination of the PBE functional
with an ultrasoft pseudopotential in four-layer surface slabs offered the best
balance between accuracy and computational time.
\begin{figure}[!]
\centering
\includegraphics[width=\textwidth]{supercells.pdf}
\caption{(a) Four-layer slab supercell of a (111) fcc surface. (b) {\it Idem}
for a four-layer slab supercell of a (0001) hcp surface. The surfaces in the
supercells are separated by 10~\AA~of vacuum.}
\label{supercells}
\end{figure}
The adsorption energy of atomic H and O was calculated as
\begin{align}
E_\mathrm{adsX} = E_\mathrm{slab+X} - ( E_\mathrm{slab} + \frac{1}{2} E_\mathrm{X_{2}} )
\end{align}
\noindent where $E_\mathrm{slab}$ and $E_\mathrm{slab+X}$ stand for the total
energies of the slab without and with the adsorbate X (X = H or O),
respectively, and $E_\mathrm{X_{2}}$ accounts for the total energy of the hydrogen
or oxygen molecule in the gaseous state. It should be noted that the adsorption of O from a molecule of H$_2$O (instead of from O$_2$) should be used to calculate the catalytic activity, but the difference between both adsorption energies is given by a constant that depends only on the formation energies of O$_2$, H$_2$, and H$_2$O and is independent of the material of the slab and of the applied strain. Thus, isolated molecules of H$_2$ and O$_2$ were used to calculate the adsorption energies in this investigation.
The adsorption energy of OH was calculated as
\begin{align}
E_\mathrm{adsOH} = E_\mathrm{slab+OH} - ( E_\mathrm{H_{2}O} - \frac{1}{2} E_\mathrm{H_{2}} )- E_\mathrm{slab}
\end{align}
\noindent where $E_\mathrm{slab+OH}$ stands for the total energy of the slab
with OH, and $E_\mathrm{H_{2}}$ and $E_\mathrm{H_{2}O}$ account for the total
energies of the hydrogen and water molecules in the gaseous state,
respectively.
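The two definitions above reduce to simple arithmetic on DFT total energies. The following sketch restates them as helper functions; the total energies (in eV) that would be passed in come from the slab and molecule calculations described above, and the numbers in the usage note are purely illustrative.

```python
def e_ads_atomic(e_slab_x, e_slab, e_x2):
    """Adsorption energy of an atom X (X = H or O), as defined above:
    E_adsX = E_slab+X - (E_slab + 1/2 E_X2)."""
    return e_slab_x - (e_slab + 0.5 * e_x2)

def e_ads_oh(e_slab_oh, e_slab, e_h2o, e_h2):
    """Adsorption energy of OH, as defined above:
    E_adsOH = E_slab+OH - (E_H2O - 1/2 E_H2) - E_slab."""
    return e_slab_oh - (e_h2o - 0.5 * e_h2) - e_slab
```

With illustrative (made-up) totals, \texttt{e\_ads\_atomic(-110.0, -100.0, -20.0)} evaluates to 0.0 eV, i.e. an adsorption process that is neither exothermic nor endothermic.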
The metal surfaces in Figure \ref{supercells} were subjected to normal and shear
stresses in the surface plane. Mixed boundary conditions were imposed to solve the elastic problem in the DFT calculations: prescribed strains in the slab plane and zero stresses on the free surface. The deformation gradient ${\mathbf F}$ applied
to the supercell was
\begin{equation}
\begin{array}{ccc}
{\mathbf F}= \begin{pmatrix}
1+\epsilon_1 & \gamma & 0\\
0 & 1+\epsilon_2 & 0\\
0 & 0 & 1
\end{pmatrix}
\end{array}
\end{equation}
\noindent where $\epsilon_1$ and $\epsilon_2$ stand for the normal strains along the $x$ and $y$ directions, while $\gamma$ stands for the shear distortion in the $xy$ plane. Uniaxial deformation was applied when $\epsilon = \epsilon_1$ and $\epsilon_2 = \gamma = 0$, while $\epsilon = \epsilon_1 = \epsilon_2$ and $\gamma = 0$ for biaxial deformation.
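The loading modes can be made explicit with a few lines of plain Python. The sketch below builds the deformation gradient ${\mathbf F}$ above and applies it to a lattice (or position) vector; it is only a geometric illustration of how the supercell is deformed, not part of the DFT workflow itself.

```python
def deformation_gradient(eps1, eps2, gamma):
    """Deformation gradient F applied to the supercell (see matrix above)."""
    return [[1.0 + eps1, gamma,      0.0],
            [0.0,        1.0 + eps2, 0.0],
            [0.0,        0.0,        1.0]]

def deform(F, v):
    """Apply F to a vector v: v' = F v."""
    return [sum(F[i][j] * v[j] for j in range(3)) for i in range(3)]

# uniaxial deformation: eps = eps1, eps2 = gamma = 0
F_uni = deformation_gradient(0.08, 0.0, 0.0)
# biaxial deformation: eps = eps1 = eps2, gamma = 0
F_bi = deformation_gradient(0.08, 0.08, 0.0)
# pure shear: gamma != 0, eps1 = eps2 = 0
F_sh = deformation_gradient(0.0, 0.0, 0.04)
```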
In addition, the mechanical stability of surface slab supercells under strain
was analyzed from the harmonic phonon spectrum using the Phonopy code
\cite{phonopy}. To calculate the phonon spectrum, the atomic forces after
perturbing slightly the atomic positions from the equilibrium positions were
calculated for different strains in large slab supercells. They were obtained by repeating
the basic slab supercells in Figure \ref{supercells} two times in the $x$
direction and two times in the $y$ direction, leading to supercells with 64
atoms. The number of perturbations to obtain the phonon spectrum depends on the
symmetries of the supercell which in turn depend on the applied strain and can
vary from 4 (no strain applied) to 24 (15\% biaxial deformation).
It should be noted that the phonon calculations are computationally very
expensive and they were used to assess the maximum strain levels that should be
explored in the adsorption calculations under normal and shear strains because
mechanical instabilities are likely to appear well before these limits.
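The stability criterion used here reduces to scanning the computed spectrum for imaginary modes, which codes such as Phonopy report as negative frequencies. A minimal sketch of that check, assuming the frequencies are supplied as a flat list, with a small tolerance to absorb the near-zero acoustic modes:

```python
def dynamically_stable(frequencies, tol=-1e-3):
    """Harmonic stability check: a slab is deemed dynamically stable if
    no phonon frequency is significantly negative (imaginary modes are
    conventionally reported as negative numbers).  The tolerance absorbs
    the numerically near-zero acoustic modes at the Gamma point."""
    return all(f >= tol for f in frequencies)
```

For instance, a spectrum containing a mode at $-0.5$ THz would be flagged as unstable, which is the situation found above for the Pt slab at 15\% biaxial tension.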
\section{Results}
\subsection{Adsorption energies of H, O, and OH}
Adsorption energies were calculated in the absence of applied strains in all
the available positions for all studied metals. The four possible positions in
which H, O, and OH can be adsorbed onto the (111) fcc and the (0001) hcp
surfaces are shown in Figure \ref{junta}(a-b) and (c-d), respectively. They are labeled H, F, O,
and B in the figure and stand for the HCP, FCC, ONTOP, and BRIDGE positions on
the surfaces, respectively. It was found that adsorbates at the BRIDGE position
diffused towards more favorable positions and they were omitted from this
study. In the case of OH adsorption, the molecule was placed perpendicularly
to the adsorption plane with the O atom closer to the surface.
The adsorption energies for H, O, and OH onto the surfaces are shown in Table
\ref{table:nostrain} for the different adsorption sites. There are very large
differences between the strongest (FCC) and the weakest (ONTOP) adsorption
sites in many cases. For instance, the adsorption of oxygen onto nickel is
associated with an energy of -3.13 eV in the FCC position but is reduced to
-1.29 eV on the ONTOP site. On the contrary, the differences in adsorption
energies between the FCC and HCP positions are much smaller.
These trends are in agreement with previous experimental and theoretical results~\cite{Nrskov2004,Nrskov2005,Pang2011,Lvvik1998}. For instance, the difference with the adsorption energy of H/Co calculated in~\cite{Nrskov2005} was only 0.03 eV, and 0.23 eV in the case of OH/Cu adsorption~\cite{Nrskov2004}. Moreover, the FCC position was found to be the most favorable adsorption
site (most negative energy) in the fcc metals while the FCC and HCP positions
had very similar adsorption energies in the hcp metals. The ONTOP site
presented the least favorable adsorption energy in all cases, except for
Ir.
Regarding the different adsorbates, the adsorption process of O is always
exothermic for the FCC and HCP positions whereas the adsorption of OH is an
endothermic process in most cases (with the exception of Co) with positive
adsorption energies. The adsorption energies for H are, in general, smaller in
absolute values and can be endothermic or exothermic depending on the
transition metal.
\begin{figure}[!t]
\centering
\includegraphics[width=\textwidth]{junta.pdf}
\caption{(a) fcc unit cell. The (111) plane is shadowed in red. (b) Perpendicular view to the (111) plane of the fcc lattice showing the binding sites
for adsorbates. (c) hcp unit cell. The (0001) plane is shadowed in red. (d) Perpendicular view to the (0001) plane of the hcp lattice showing the binding
sites for adsorbates. The red circles in (b) and (d) denote the binding
sites and the labels 'H', 'F', 'O', and 'B' represent the HCP, FCC, ONTOP,
and BRIDGE positions, respectively.}
\label{junta}
\end{figure}
\begin{table}[h]
\centering
\caption{Adsorption energies of H, O, and OH on FCC, HCP, and ONTOP sites for
all the metals. The adsorption surface of the fcc metals (Ni, Cu, Pd, Ag, Pt,
Au, Rh, and Ir) was the (111) plane, whereas adsorption occurred at the (0001)
plane in the hcp metals (Zn, Cd, and Co). The adsorption energies for OH in Zn
and Cd in the ONTOP position are not indicated because the molecule moves
towards the BRIDGE position. The energy values are expressed in eV.}
\small
\hspace*{-2cm}
\begin{tabular}{@{}lccc|lccc|lccc@{}}
\toprule
System & FCC & HCP & ONTOP & System & FCC & HCP & ONTOP & System & FCC & HCP & ONTOP \\
\midrule
H/Ni & -0.52 & -0.50 & 0.06 & O/Ni & -3.13 & -3.03 & -1.29 & OH/Ni & 0.03 & 0.57 & 1.04 \\
H/Cu & -0.25 & -0.23 & 0.39 & O/Cu & -2.68 & -2.54 & -0.82 & OH/Cu & 0.14 & 0.18 & 1.05 \\
H/Pd & -0.54 & -0.51 & -0.01 & O/Pd & -2.16 & -1.99 & -0.58 & OH/Pd & 0.71 & 0.85 & 1.64 \\
H/Ag & 0.17 & 0.18 & 0.73 & O/Ag & -1.38 & -1.27 & 0.13 & OH/Ag & 0.72 & 0.75 & 1.57 \\
H/Pt & -0.49 & -0.43 & -0.42 & O/Pt & -2.16 & -1.75 & -0.73 & OH/Pt & 1.19 & 1.50 & 1.91 \\
H/Au & 0.09 & 0.14 & 0.39 & O/Au & -0.98 & -0.72 & 0.50 & OH/Au & 1.52 & 1.58 & 2.22 \\
H/Rh & -0.53 & -0.53 & -0.25 & O/Rh & -2.96 & -2.84 & -1.55 & OH/Rh & 0.27 & 0.45 & 1.15 \\
H/Ir & -0.39 & -0.36 & -0.47 & O/Ir & -2.62 & -2.40 & -1.52 & OH/Ir & 0.76 & 0.98 & 1.29 \\
\midrule
H/Co & -0.54 & -0.51 & 0.11 & O/Co & -3.34 & -3.42 & -1.97 & OH/Co & -0.13 & -0.20 & 0.55 \\
H/Zn & 0.70 & 0.67 & 0.81 & O/Zn & -2.53 & -2.62 & -2.59 & OH/Zn & 0.46 & 0.37 & --- \\
H/Cd & 0.80 & 0.81 & 0.79 & O/Cd & -2.19 & -2.24 & -2.22 & OH/Cd & 0.36 & 0.31 & --- \\
\bottomrule
\end{tabular}
\label{table:nostrain}
\end{table}
\subsection{Effect of elastic strain on the adsorption energies of H, O, and OH}
\subsubsection{Stability limits}
The mechanical stability limits of fcc Cu and Pt slabs under biaxial tension
and compression as well as pure shear were determined from the harmonic phonon
spectra. Calculations were limited to 5\%, 10\%, and 15\% biaxial strains in
tension, -5\% and -10\% biaxial strains in compression, and 10\% strain in shear
because of the high computational cost. The phonon density of states spectra
for the Pt and Cu slabs can be found in the supporting information (Figures S1
and S2). Negative frequencies appeared in the fcc slabs subjected to 15\%
biaxial tension in Pt and to 10\% biaxial tension in Cu. Similarly, negative
frequencies appeared in both metal surfaces at -10\% biaxial compression or
shear. In the case of the (0001) slabs of the hcp metals, it was found
that the order in the supercell was lost for small strains, indicating that the mechanical instabilities triggered by compressive or shear strains in (0001) hcp slabs develop sooner than in (111) fcc slabs.
Taking into account the theoretical values of the stability limits obtained
with the phonon calculations, it was decided to explore the effect of elastic
strains on the adsorption energies in the (111) fcc slabs up to 8\% tensile
strain, -5\% compressive strain, and 4\% shear strain. In the case of (0001)
hcp slabs, only the effect of tensile strains (and of small compressive strains in particular
cases) on the adsorption energy was analyzed.
\subsubsection{Influence of mechanical strains on the optimum adsorption site}
The effect of mechanical strains (either uniaxial, biaxial or shear) on the
adsorption energies of H and O onto the different adsorption sites (FCC, HCP,
ONTOP) was determined by DFT calculations in Pt. The results are plotted in
Figure \ref{Ptpositions} for either uniaxial ($\epsilon = \epsilon_1$,
$\epsilon_2 = \gamma$ = 0) or biaxial ($\epsilon = \epsilon_1=\epsilon_2$,
$\gamma$ = 0) deformations in the range -3\% up to 8\%, and for shear strains
$\gamma$ in the range 0\% to 4\% ($ \epsilon_1=\epsilon_2$ =0), in agreement
with the limits indicated above.
\begin{figure}[h!]
\hspace*{-1.5cm}
\centering
\includegraphics[width=1.2\textwidth]{Ptpositions.pdf}
\caption{Effect of strains (either uniaxial, biaxial or shear) on the adsorption
energy of (a) H, $E\mathrm{ads_H}$ and (b) O, $E\mathrm{ads_O}$ onto different sites of (111) Pt surfaces.}
\label{Ptpositions}
\end{figure}
The same behavior can be observed for all the adsorption sites and adsorbates:
compressive strains increase the adsorption energy (less negative and,
therefore, adsorption is less favorable) while tensile strains lead to the
opposite behavior. Moreover, the variation in adsorption energy with strain is
always higher in the case of biaxial deformation. In addition, shear
deformations behave as compressive deformations and increase the adsorption
energy. The only exception to these trends is found in the adsorption energy of
H onto the ONTOP position subjected to very large biaxial tensile strains ($>$
5\%), which lead to a slight increase in the adsorption energy. This difference
may be caused by the proximity to the instability limits of the slab at
this strain.
The effect of mechanical strains on the adsorption energy of O is higher than
that of H but it should be noted that the absolute values of the adsorption
energies are also much higher in the former. In addition, the effect of
mechanical strains is similar for all the adsorption sites for a given
adsorbate. Thus, application of mechanical strains does not change the most
favorable adsorption site for H and O in Pt, which is always FCC. Similar
results were obtained for the (111) surfaces of other fcc metals, while the
optimum adsorption site for (0001) surfaces of hcp metals is the HCP and it
is also independent of the strain state.
\subsubsection{Influence of mechanical strains on the adsorption energy of H, O and OH}
The analysis of the influence of mechanical strains on the adsorption energy of
H, O, and OH was focused on the FCC sites of the (111) fcc surfaces and on the HCP
sites of the (0001) hcp surfaces, which are the most favorable locations for
adsorption.
The effect of mechanical strain (either uniaxial, biaxial or shear) on the
adsorption energies of H, O, and OH on the FCC sites of the (111) surfaces of
eight fcc transition metals (Cu, Pt, Ni, Au, Ir, Rh, Pd, and Ag) is plotted
in Figure \ref{Figure_1y2}. The corresponding results on the HCP sites of the
(0001) surfaces are plotted for Co, Zn, and Cd in Figure \ref{nueva}. As
indicated above, application of compressive and shear strains was restricted in
hcp metals because of the development of instabilities in the supercell.
\begin{figure}[t!]
\hspace*{-1.5cm}
\centering
\includegraphics[width=1.2\textwidth]{Figure_1y2}
\caption{Effect of strain (either uniaxial, biaxial or shear) on the
adsorption energy of (a) H, $E\mathrm{ads_{H}}$; (b) O, $E\mathrm{ads_{O}}$
and (c) OH, $E\mathrm{ads_{OH}}$ onto the FCC sites of (111) surfaces of fcc
transition metals. Curves for Ag, Pd, and Rh are omitted because they overlap
with the curves for Au, Pt, and Ni, respectively. Separate figures of all
metals can be found in Figures S3 to S8 of the supporting information.}
\label{Figure_1y2}
\end{figure}
\begin{figure}[t!]
\hspace*{-1.5cm}
\centering
\includegraphics[width=1.2\textwidth]{nueva}
\caption{Effect of strains (either uniaxial or biaxial) on the adsorption
energy of (a) H, $E\mathrm{ads_H}$; (b) O, $E\mathrm{ads_O}$ and (c) OH,
$E\mathrm{ads_{OH}}$ onto the HCP sites of (0001) surfaces of hcp transition
metals.}
\label{nueva}
\end{figure}
The highest (less favorable) adsorption energies are always found for OH while
the lowest (more favorable) are reported for O. Adsorption energies for H are
smaller (in absolute values) than those calculated for O and OH in all metals,
following the trends reported above for Pt. In addition, the adsorption
energies increase (become less negative) with compressive strains and decrease (become more
negative) with tensile strains, while biaxial deformations have stronger
influence than uniaxial deformation on the adsorption energy for the same
strain. Shear strains increase slightly the adsorption energies in all cases.
For a given metal, the effect of mechanical strains on the adsorption energy of
O and OH is quite similar, very likely because adsorption is dominated by the
larger O atom.
The variation in the adsorption energy of O and OH with strain is significantly
higher than that of H and these differences can be attributed to the larger
atomic radius of the O atom. The additional p-orbitals increase the size of the
electronic environment and, therefore, the overlap with the d-band structure of
the metals. The presence of p-electrons further apart from the nucleus favors
the interaction with the d-electrons of the surfaces.
Nevertheless, the effect of mechanical strains on the adsorption energies depends on
the metal. Noble metals, such as Au and Pt, show the highest sensitivity to the
strain while Ni shows much lower sensitivity. It is also worth noting that the
adsorption of H onto Co is not affected by either uniaxial or biaxial strains
in the range -5\% to 8\%, but the largest changes in the adsorption energy of O
and OH ($>$ 0.5 eV) are found in Co for the same strain range.
The trends observed for the adsorption energies in this work are in agreement
with the predictions of the d-band theory, according to which the more favorable
adsorption energies associated with tensile strains and the less favorable ones
corresponding to compressive and shear strains can be explained in terms of
displacements of the d-band center \cite{Hammer2000, Hammer1995, Pang2011,Mavrikakis1998}. Indeed, the d-band theory indicates that tensile strains shift the d-band center towards higher energies for transition metals with more than half-filled d-bands. A higher d-band center results in stronger bonding and leads to more favorable adsorption energies, while compressive and shear strains produce a shift towards a lower d-band center and lead to less favorable interactions.
\subsection{Relationship between adsorption energy and hole area}
While the d-band model provides a qualitative explanation of the trends
reported in Figures \ref{Figure_1y2} and \ref{nueva}, models that are able to
quantify the effect of mechanical strains on the adsorption energy are lacking.
In this respect, the adsorption energies of H, O, and OH in the eight fcc transition
metals and in the three hcp transition metals are plotted in Figures \ref{AREAS1}
and \ref{AREAS2}, respectively, as a function of the area of the hole (either FCC
or HCP) at which adsorption takes place. The area of the hole was calculated
from the length of the sides of the triangle that conforms the FCC and the HCP
adsorption sites (Figure \ref{junta}(b) and (d)), which depends on the
deformation gradient $\bf{F}$ applied to the supercell. After relaxation, the
area of the hole was measured again using the code XCrysDen\citep{Kokalj1999}
and it was found that the differences in the hole area between the unrelaxed
and relaxed structure are negligible.
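Because the difference between the unrelaxed and relaxed hole areas is negligible, the area can be obtained directly from the deformation gradient with a few lines: the vertices of the triangular hollow are mapped by the in-plane part of ${\mathbf F}$ and the area follows from the shoelace formula (equivalent to applying Heron's formula to the three deformed side lengths). The nearest-neighbour spacing \texttt{a\_nn} below is a placeholder to be taken from the relaxed lattice parameter of each metal.

```python
import math

def hole_area(a_nn, eps1=0.0, eps2=0.0, gamma=0.0):
    """Area of the triangular FCC/HCP hollow after applying the in-plane
    part of the deformation gradient F defined in the text.
    a_nn: nearest-neighbour distance on the undeformed surface."""
    # vertices of the undeformed equilateral triangle of side a_nn
    verts = [(0.0, 0.0),
             (a_nn, 0.0),
             (0.5 * a_nn, 0.5 * math.sqrt(3.0) * a_nn)]
    # in-plane map: x' = (1+eps1) x + gamma y,  y' = (1+eps2) y
    pts = [((1.0 + eps1) * x + gamma * y, (1.0 + eps2) * y) for x, y in verts]
    (x1, y1), (x2, y2), (x3, y3) = pts
    # shoelace formula for the triangle area
    return 0.5 * abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1))
```

Since the map is linear, the area scales exactly by $\det {\mathbf F} = (1+\epsilon_1)(1+\epsilon_2)$, so, for instance, an 8\% biaxial tension enlarges the hole by about 17\%.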
The adsorption energies under biaxial strains (solid circles), uniaxial strains
(open circles) and a combination of shear and axial strains (open triangles)
are plotted for each metal in these figures. They show that the actual
adsorption energy only depends on the area of the hole and that the effect of
different strains can be superposed, i.e. different combinations of normal and
shear strains that lead to the same hole area have the same effect on the
adsorption energy. Moreover, the relationship between the adsorption energy and
the hole area is fairly linear in most cases (although the DFT results are
better approximated by a parabola in several cases). Finally, the data in
Figures \ref{AREAS1} and \ref{AREAS2} allow a fast and accurate
estimation of the effect of mechanical strains on the adsorption energies of H,
O, and OH for any of these transition metals from the geometrical analysis of
the change in either FCC or HCP hole area under the application of a
deformation gradient $\bf{F}$. Because the hole area does not change during
relaxation, it is not necessary to perform the DFT simulations to determine the
adsorption energy.
\begin{figure}[t!]
\hspace*{-1.5cm} \centering
\includegraphics[width=1.2\textwidth]{AREAS1}
\caption{Adsorption energies as a function of the area of the adsorption hole,
which is a function of the applied deformation gradient. (a) H adsorption at FCC
positions of (111) surfaces of fcc transition metals. (b) O adsorption at FCC
positions of (111) surfaces of fcc transition metals. (c) OH adsorption at
FCC positions of (111) surfaces of fcc transition metals. Solid and open
circles stand for the adsorption energies calculated under biaxial and
uniaxial strains, respectively, while open triangles refer to adsorption
energies under shear strains or a combination of shear and axial strains. The
lines show the best fit to the DFT results in the strain ranges indicated in
Figure \ref{Figure_1y2}.}
\centering
\label{AREAS1}
\end{figure}
\begin{figure}[t!]
\hspace*{-1.5cm}
\centering
\includegraphics[width=1.2\textwidth]{AREAS2}
\caption{Adsorption energies as a function of the area of the adsorption hole,
which is a function of the applied deformation gradient. (a) H adsorption
at HCP positions at (0001) surfaces of hcp transition metals. (b) O adsorption
at HCP positions at (0001) surfaces of hcp transition metals. (c) OH
adsorption at HCP positions at (0001) surfaces of hcp transition metals.
Solid and open circles stand for the adsorption energies calculated under
biaxial and uniaxial strains, respectively, and lines show the best fit to the DFT results in the strain ranges
indicated in Figure \ref{nueva}.}
\centering
\label{AREAS2}
\end{figure}
The curves corresponding to the 1st row transition metals (Cu, Ni, Zn, and Co)
in Figures \ref{AREAS1} and \ref{AREAS2} are located in the region with
smaller hole areas because of the smaller size of these atoms. In addition, Ni
and Co also show very low adsorption energies, which can be attributed to
magnetism, while the adsorption energies of Cu and Zn are much higher for
similar hole areas. The curves corresponding to the 2nd and 3rd row transition
metals (Au, Ag, Ir, Pt, Rh, Pd, and Cd) are spread towards the right in Figures
\ref{AREAS1} and \ref{AREAS2} as a result of the larger atom size. In general,
the adsorption energies become more positive (less favorable) when going from
left to right in the periodic table periods. This tendency is related to the
electronic density of the metals, whose d-bands become more populated as the atomic
number of the metal increases along a period. Thus, as a general rule, the
adsorption process is favored by a reduction in the number of valence
electrons in the metal.
In most cases, the relationship between the adsorption energy and the hole area
is linear, although a parabola is more appropriate in several
cases. Moreover, the slope of the linear fit is similar for most transition
metals and large differences are only found in the case of H adsorption on Ni,
which is almost insensitive to the applied strains. These results indicate
that the mechanisms that control the adsorption of H, O, and OH are very
similar and the differences that appear in these figures are ultimately
related to the particularities of d-orbitals of each metal and to magnetic
effects.
Finally, the adsorption energies of H and O onto the HCP position of (111) Pt
surfaces were determined for different magnitudes of axial strains to check
whether the relationship between adsorption energy and hole area could be
extrapolated to other adsorption sites. The results, plotted in Figure S9 in the
supporting information together with the adsorption energies onto the FCC
positions of (111) Pt surfaces, support the previous findings: for a given
adsorption site, the effect of mechanical strains on the adsorption energy of
transition metals only depends on the area of the adsorption hole.
\section{Discussion}
\subsection{Electronic structure}
It is known that changes in the surface geometry are accompanied by changes in the surface electronic structure \citep{Hammer2000}. To quantify these effects, the projected density of states (PDOS) onto the d-band of the surface of all metals and adsorbates was analyzed to explore the electronic origin of the
relationship between adsorption energy and hole area. The PDOS corresponding to the (111) fcc Pt slab subjected to biaxial tensile ($\epsilon$ = 2\% and 8\%) or compressive ($\epsilon$ = -2\%) strains are plotted in Figure
\ref{Figure_6y7}(a) and compared with the PDOS at $\epsilon = 0$. The overlap of metal d-states at neighboring sites will either increase or decrease when a surface undergoes compressive or
tensile strains, and so will the d-bandwidths in order to maintain a constant filling. Thus, compressive or tensile strains lead to downshifts or upshifts of the d-band centers, respectively \cite{Kattel2014}, as shown in Figure \ref{Figure_6y7}(a).
\begin{figure}[t!]
\hspace*{-1.5cm}
\centering
\includegraphics[width=1.2\textwidth]{Figure_6y7}
\caption{(a) PDOS onto the 5d orbitals of the (111) fcc Pt surface slab
subjected to biaxial strain states characterized by $\epsilon$ = -2\%, 0\%,
2\%, and 8\%. (b), (c), and (d) PDOS of the 5d orbitals of the (111) fcc Pt surface
with a H, O, and OH, respectively, adsorbed at FCC and ONTOP positions. The
corresponding PDOS of the 5d orbitals of the clean (111) fcc Pt slab, the
H$_{2}$ molecule, the O$_{2}$ molecule, and the H$_{2}$O molecule at gaseous state are also included
for comparison. The energy values are referenced to the Fermi level of the
(111) Pt slab when $\epsilon$ = 0 ($E_F$ = 3.5668 eV).}
\label{Figure_6y7}
\end{figure}
The PDOS onto the 5d orbitals of a (111) fcc Pt surface after the adsorption of
a H atom onto the FCC site and the ONTOP site are plotted in Figure
\ref{Figure_6y7}(b), together with the PDOS of a hydrogen molecule and of the
clean (111) fcc Pt slab. The adsorption of a H atom on the ONTOP position
generates strong electronic changes, which in turn are reflected in the PDOS.
In contrast, the PDOS when adsorption takes place onto the FCC site remains
practically identical to the one corresponding to the clean (111) Pt slab, with
the only difference being a small shoulder at $-$2 eV. This variation in the
PDOS can be considered negligible, as compared with the electronic changes
produced by the application of strain and, therefore, makes it possible to establish a
direct link between the adsorption energy and the geometrical changes induced
at the FCC adsorption site by the application of strain. The PDOS of a (111) fcc Pt surface after the
adsorption of an O atom onto the FCC site and the ONTOP site together with the
PDOS of an oxygen molecule, and of the clean (111) fcc Pt slab are plotted in Figure
\ref{Figure_6y7}(c) and very similar trends are observed. The PDOS is localized in a single peak on the left after the adsorption of the O atom on the ONTOP position, whereas the PDOS becomes broader but maintains its original shape after the adsorption of the O atom onto the FCC position. Similarly, the PDOS of a (111) fcc Pt surface after the adsorption of OH onto the FCC site and the ONTOP site are plotted in Figure \ref{Figure_6y7}(d) together with the PDOS of a hydrogen and a water molecule, and of the clean (111)
fcc Pt slab. The trends are equivalent to those observed in Figs. \ref{Figure_6y7}(b) and (c). While the adsorption of OH at the ONTOP position produces significant changes in the PDOS, the adsorption of OH at the FCC position leads to negligible changes in the PDOS as compared with that of the clean (111) Pt slab. The above results suggest that the analysis of the electronic structure of the
clean surface slabs may be enough to determine the effect of strain on the
adsorption energy of H, O, and OH in the transition metals studied in this investigation.
Moreover, the application of strain does not modify the shape of the PDOS
curves but only leads to a shift of the energy levels (Figure \ref{Figure_6y7}(a)). Thus, the Fermi level could be used as a metric of the electronic changes in the slabs upon the application of mechanical strains and, eventually, of the
adsorption energies. This hypothesis is supported by the results in Figures
\ref{ADS_FERMI}(a), (b), and (c), in which the adsorption energies computed under
mechanical strains are plotted as a function of the Fermi level of the
clean, strained surface slabs for H, O, and OH adsorbates, respectively. The
linear correlation between both magnitudes is obvious for all metals and
adsorbates and can be represented by
\begin{equation}
E_\mathrm{adsX} = b +m E_F
\label{AdsorptionFermi}
\end{equation}
\noindent where X = H, O or OH and the coefficients $b$ and $m$ for each pair of adsorbate and transition metal can be found in Table S3 in the supporting information. It should be noted that $E_F$ in eq. \eqref{AdsorptionFermi} stands for the Fermi energy of the clean, strained slab. Obviously, these similarities indicate that the underlying adsorption processes are governed by the same physical mechanisms.
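The coefficients $b$ and $m$ of eq. \eqref{AdsorptionFermi} follow from an ordinary least-squares fit. The sketch below uses plain Python for clarity (\texttt{numpy.polyfit} would do the same); the data points would be the $(E_F, E_\mathrm{adsX})$ pairs computed for each strained slab, and the values in the usage note are synthetic.

```python
def fit_linear(x, y):
    """Ordinary least-squares fit of y = b + m*x.  Returns (b, m)."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)          # sum of squares of x
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    m = sxy / sxx
    b = my - m * mx
    return b, m
```

For a perfectly linear synthetic set such as $x = (0, 1, 2)$ and $y = (1.0, 0.5, 0.0)$, the fit returns $b = 1.0$ and $m = -0.5$.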
\begin{figure}[h!]
\hspace*{-1.5cm}
\centering
\includegraphics[width=1.2\textwidth]{ADS_FERMI}
\caption{Adsorption energies of strained slabs as a function of the Fermi
level in the clean, strained slab for all metals. (a) H adsorption. (b) O
adsorption. (c) OH adsorption. The adsorption energies were calculated onto
the FCC sites on (111) fcc slabs and onto HCP sites of (0001) hcp slabs.
Values in both axes are expressed in eV. Solid and open circles represent
the biaxial and uniaxial DFT calculations, respectively. The solid lines stand for the fit of the DFT results with eq. \eqref{AdsorptionFermi}. Black circles indicate the adsorption energy without strain.}
\label{ADS_FERMI}
\end{figure}
The last step to link the adsorption energy with the area of adsorption site is
to find the relationship between the latter and the Fermi energy. The values of
both magnitudes obtained by DFT calculations on clean, strained slabs of the 11
transition metals are plotted in Figure \ref{fermi}. The straight lines in this
figure are given by
\begin{equation}
E_F = 7.4 -7.37 A + E_F(A_0)
\label{FermiArea}
\end{equation}
\noindent where $E_F$ is the Fermi energy (in eV), and $A$ the area of the
adsorption site in the surface subjected to a given strain state. $E_F(A_0)$ is
the Fermi energy of the surface corresponding to the undeformed state, which is given in Table S4 in the supporting information for each transition metal. This equation captures the independent contributions of the metal (expressed by
$E_F(A_0)$) and of the mechanical strain (given by $A$) to the Fermi energy and, thus, through eq. \eqref{AdsorptionFermi}, to the adsorption energy for each adsorbate. It should be noted that the excellent agreement of this simple linear equation with the DFT calculations for most transition metals
indicates that changes in the local electronic environment as a result of strain are better represented by the area of the hole than by the distance between one atom and its different neighbours. Moreover, eq. \eqref{FermiArea} provides the explanation of the link between the adsorption energy and the area of the adsorption site in Figures \ref{AREAS1} and \ref{AREAS2}. The different behavior of Co, which does not follow equation \eqref{FermiArea}, may be due to the magnetic properties of this metal but
further research is needed to clarify this point. Moreover, it should be noted
that preliminary studies on (110) bcc Cr and (100) bcc V surfaces did not find
a clear relationship between adsorption energy, Fermi energies, and area of the
adsorption site. These differences may be attributed to the nature of the
d-orbitals in these metals, that are less than half or half filled in metals on
the left side of the periodic table. Thus, further research is needed to reach
a full understanding of the effect of mechanical strains on the adsorption
properties of transition metals.
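Chaining eqs. \eqref{AdsorptionFermi} and \eqref{FermiArea} gives a DFT-free estimate of the adsorption energy under strain: the deformation gradient fixes the hole area, the area fixes the Fermi level, and the Fermi level fixes $E_\mathrm{adsX}$. The sketch below only composes the two fits; the coefficients $b$, $m$, and $E_F(A_0)$ are placeholders to be read from Tables S3 and S4 for each metal/adsorbate pair, and the numbers in the test are synthetic.

```python
def fermi_from_area(area, e_f_a0):
    """Eq. (FermiArea): E_F = 7.4 - 7.37*A + E_F(A0), energies in eV.
    e_f_a0 is the Fermi energy of the undeformed surface (Table S4)."""
    return 7.4 - 7.37 * area + e_f_a0

def predict_e_ads(area, e_f_a0, b, m):
    """Eq. (AdsorptionFermi) evaluated at the Fermi level predicted from
    the hole area; b and m are per metal/adsorbate (Table S3)."""
    return b + m * fermi_from_area(area, e_f_a0)
```

It should be stressed that this shortcut inherits the limitations discussed above: it is not expected to hold for Co or for early transition metals with half-filled or less than half-filled d-bands.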
\begin{figure}[h!]
\centering
\includegraphics[width=0.8\textwidth]{fermi}
\caption{Evolution of the Fermi energy, $E_F$, at the surface of the
different transition metals as a function of the area of the adsorption site,
$A$. Circles stand for the results of the Fermi levels corresponding to the clean, deformed slabs obtained by DFT and the straight lines stand for the predictions of equation \eqref{FermiArea}.}
\label{fermi}
\end{figure}
\section*{Conclusions}
The influence of elastic strains on the adsorption of H, O, and OH on the (111)
surfaces of 8 fcc (Ni, Cu, Pd, Ag, Pt, Au, Rh, Ir) and on the (0001) surfaces
of 3 hcp (Co, Zn, Cd) transition metals was analyzed by means of DFT
calculations. The surface slabs were subjected to different strain states
(uniaxial, biaxial, shear, and a combination of them) up to strains dictated by
the mechanical stability limits indicated by the phonon calculations. It was
found that tensile strains favored the adsorption of the three adsorbates, while
compressive and shear strains increased the adsorption energy (made it less negative)
and, thus, hindered the adsorption, in agreement with the d-band theory. Adsorption energies for H were smaller in absolute value than those calculated for O and OH in all cases. Moreover, the optimum
adsorption (lowest energy) of the three species was found onto the FCC sites
of the (111) fcc surfaces and onto the HCP sites of the (0001) hcp surfaces and
did not change with strain.
It was found that the variation of the adsorption energy in all metals due to
the application of mechanical strains was only a function of the change in the
area of the adsorption site and the relationship between both magnitudes was
fairly linear in most cases. Thus, different combinations of normal and shear
strains that lead to the same change in the area of the adsorption site have
identical effect on the adsorption energy. This general behavior indicated
that the physical mechanisms of adsorption were equivalent in all metals. The
analysis of the electronic structure showed that the application of strains did
not modify the shape of PDOS of the d-orbitals of the transition metals but
only led to a shift in the energy levels. Moreover, the adsorption of H and O
on the surfaces led to negligible changes in the PDOS. Thus, the adsorption
energies of all adsorbates in all metals were a function of the Fermi energy
which in turn was associated with the change in the area of the adsorption
site through a linear law that was valid for all metals.
As the change in the area of the adsorption site due to the application of
strain can be accurately determined by purely geometrical considerations, the
information in this paper allows the immediate and accurate estimation of the
effect of any elastic strain on the adsorption energies of H, O, and OH on 11
transition metals with more than half-filled d-orbitals.
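The purely geometric determination of the area change can be made explicit: with an in-plane strain tensor $\varepsilon$, the deformation gradient is $F = I + \varepsilon$, and any planar region (in particular, the triangle spanned by the metal atoms of the site) scales its area by $\det F$. A minimal sketch of this computation (ours, not from the paper):

```python
def site_area_ratio(exx, eyy, exy=0.0, eyx=None):
    """Area ratio A/A0 of a planar adsorption site under the in-plane
    strain tensor [[exx, exy], [eyx, eyy]]: with deformation gradient
    F = I + strain, any planar region scales its area by det(F)."""
    if eyx is None:
        eyx = exy  # symmetric strain tensor by default
    return (1.0 + exx) * (1.0 + eyy) - exy * eyx

# The area change is fixed by geometry alone, independently of the metal:
biaxial = site_area_ratio(0.02, 0.02)    # 2% biaxial tension  -> +4.04%
uniaxial = site_area_ratio(0.02, 0.0)    # 2% uniaxial tension -> +2%
shear = site_area_ratio(0.0, 0.0, 0.01)  # pure shear -> small area decrease
```

To first order, $\Delta A/A_0 \approx \varepsilon_{xx}+\varepsilon_{yy}$, so a pure shear changes the area of the site only at second order.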
This information can be used to predict the activation free energies of the different intermediates in the HER and ORR (as well as in other catalytic reactions) as a function of the applied strain for different transition metals with very limited computational cost, indicating the optimum combination of material and strain to enhance the catalytic activity. Moreover, the results in this paper can spur the search for correlations between geometrical descriptors of the elastic deformation and adsorption energies that can be used to make accurate predictions of the latter with minimum computational cost for other compounds and adsorbates.
\section*{Acknowledgments}
This investigation was supported by the MAT4.0-CM project funded by the Madrid
region under program S2018/NMT-4381 and by the HexaGB project of the Spanish Ministry of Science (reference RTI2018-098245). Computer resources and technical
assistance provided by the Centro de Supercomputaci\'on y Visualizaci\'on de Madrid
(CeSViMa) are gratefully acknowledged. Additionally, the authors thankfully
acknowledge the computer resources at CTE-Power and Minotauro in the Barcelona
Supercomputing Center (project QS-2021-1-0013). Finally, use of the
computational resources of the Center for Nanoscale Materials, an Office of
Science user facility, supported by the U.S. Department of Energy, Office of
Science, Office of Basic Energy Sciences, under Project No. 73377, is
gratefully acknowledged. CMA also acknowledges the support from the Spanish
Ministry of Education through the Fellowship FPU19/02031.
\section{Introduction}
A complex of rank $\frac 7 4$\ is a 2-complex with triangle faces, whose links are isomorphic to the Moebius--Kantor graph.
\begin{figure}[H]
\includegraphics[width=4cm]{IRL_MoebiusKantor.pdf}
\caption{The Moebius--Kantor graph}
\end{figure}
\noindent In this paper we continue our investigations of complexes of rank $\frac 7 4$, which was begun in \cite{rd}. We study the isomorphism type of simply connected complexes of rank $\frac 7 4$, and aim to prove that
\begin{enumerate}
\item[] (a) there are infinitely many isomorphism types with a cocompact automorphism group;
\item[] (b) there are uncountably many isomorphism types, in general.
\end{enumerate}
These statements should be compared to similar ones, for Euclidean Tits buildings, especially in the rank 2 case.
The second statement (b) is reminiscent of the ``free constructions'' of Ronan and Tits in \cite{tits1981local,ronan1986construction,ronan1987building} for buildings of type $\tilde A_2$, which can be used to show the existence of uncountably many such buildings.
The main tool to establish free constructions is a prescription theorem for a local invariant of the complex. For buildings of type $\tilde A_2$, Ronan has proved such a theorem in \cite{ronan1986construction}, showing that the residues at the vertices can be prescribed arbitrarily, using any set of projective planes of fixed order $q$.
Furthermore, in the case $q=2$, Tits has proved in \cite{tits1988spheres} that there are precisely two isomorphism types of spheres of radius 2, and by \cite{barre2000immeubles} these spheres of radius 2 can also be used as a local invariant.
The use of spheres of radius 2 is not very practical in our situation. Our results will in fact imply that there are at least 174163 isomorphism types of such spheres. We shall rely instead on a local invariant which makes use of the intermediate rank properties of complexes of rank $\frac 7 4$, and in particular, how the roots of rank $2$ (and those of rank $\neq 2$) are organized in the complex. The invariant is defined in \textsection\ref{S - rank parity} and called the parity. A free prescription theorem for the parity is established in \textsection\ref{S- Prescription}.
The first statement (a) is also true for Euclidean buildings. A way to prove it, which is in fact the only way we know of, even in the rank 2 case, is to use Euclidean buildings associated with non isomorphic local fields. For example, if $(K_r)_r$ is a sequence of totally ramified finite extensions of $\mathbb{Q}_p$ which are pairwise nonisomorphic, then (by the classical results of Tits \cite{tits1974buildings}) the Euclidean Tits buildings $X_r$ of $\mathrm{PSL}_d(K_r)$ will be also pairwise nonisomorphic.
For complexes of rank $\frac 7 4$, constructions using local fields are not available. However, an alternative approach for building such complexes was proposed in \cite{surgery}, using surgery theory. This construction provides compact complexes of rank $\frac 7 4$, and therefore, by taking universal covers, simply connected ones with cocompact automorphism groups. The question addressed in (a), which was raised in a recent talk by one of us, is to distinguish these complexes up to isomorphism. The problem is that different constructions may lead to isomorphic universal covers.
In \textsection\ref{S - surgery construtions}, we show that the parity can be computed explicitly in these constructions, and that it provides a way to distinguish these complexes up to isomorphism.
Note that infinitely many of the Euclidean buildings $X_r$ defined in the previous paragraph coincide on arbitrarily large balls, since there are only finitely many isomorphism types of balls of radius $n$ in this case. In order to prove (a), we must in fact show that a similar construction, where the complexes coincide on arbitrarily large balls, can be done in the case of complexes of rank $\frac 7 4$\ (with cocompact automorphism group). This is because the existence of infinitely many isomorphism types of complexes of rank $\frac 7 4$\ with cocompact automorphism group, as stated in (a), implies the same statement where, furthermore, the complexes coincide on arbitrarily large finite balls. (Another way to phrase this is to use the space of simply connected complexes of rank $\frac 7 4$, an analog of the space of triangle buildings $E_q$ of \cite{E1,E2}, which is a compact space.) Therefore, we might as well look directly for such complexes, which is what we shall do.
We note however that, for complexes of rank $\frac 7 4$, the quotient by the automorphism group in (a) has to be arbitrarily large, because the automorphism group is uniformly discrete. An analogous statement for buildings of type $\tilde A_2$ is not known to hold: questions regarding the existence or number of buildings whose quotient by their automorphism group has a prescribed, e.g., arbitrarily large, number of vertices are generally open. This is one of our motivations to look for new construction techniques.
The following is a further analogy between the rank $\frac 7 4$ constructions and the local field constructions for $X_r$. The buildings $X_r$ can be distinguished up to isomorphism because the field $K_r$ can be recovered abstractly, by the results of Tits \cite{tits1974buildings}, from the building at infinity. However, it is not clear a priori how distant these complexes are from each other in the space of buildings (say in the $\tilde A_2$ case). A similar phenomenon occurs in our rank $\frac 7 4$\ constructions. It is not clear a priori how distant the complexes we build will be from each other in the space of complexes of rank $\frac 7 4$, since there might a priori exist ``exotic'' isomorphisms between the spheres of large radius which we have not detected. Thus, we only prove an injectivity result, see the proof of Theorem \ref{T - pairwise non isomorphic cover}, which explains why the final statement is not more explicit than the one put forward in (a) above.
\bigskip
\textbf{Acknowledgements.} The second author was partially supported by an NSERC discovery grant and a JSPS award.
\section{The parity invariant}\label{S - rank parity}
We first define a metric (and simplicial) invariant for complexes of rank $\frac 7 4$, called the parity (Def.\ \ref{D - rank parity}).
Let $X$ be a complex of rank $\frac 7 4$. For convenience, we shall call 2-triangle a simplicial equilateral triangle in $X$ whose sides have length 2. Every 2-triangle contains 4 faces which are themselves (small) equilateral triangles.
Let $x\in X$ be a vertex and $L_x$ the link at $x$ (namely, the sphere of small radius around $x$ in $X$, endowed with the angular metric). Recall that we call \emph{root} at $x$ a metric embedding $\alpha\colon [0,\pi]\inj L_x$ in the link of $x$, such that $\alpha(0)$ is a link vertex. Every root has a rank $\rk(\alpha)$ which is a rational number in $[1,2]$. It is defined by
\[
\rk(\alpha)=1+{N(\alpha)\over q_\alpha}\tag{$\dagger$}
\]
where
\[
N(\alpha):=|\{\beta\in \Phi_x\mid \alpha\neq \beta, \alpha(0)=\beta(0), \alpha(\pi)=\beta(\pi)\}|,
\]
writing $\Phi_x$ for the set of roots at $x$ and, for a root $\alpha$, $q_\alpha$ for the valency of $\alpha(0)$ minus 1:
\[
q_\alpha:=\val(\alpha(0))-1.
\]
We refer to \cite[\textsection 4]{chambers} for more details on this definition. In a complex of rank $\frac 7 4$, the rank of a root can be 2 or \th.
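The two possible values of the rank can be verified directly on the Moebius--Kantor graph. In the sketch below (a computation of ours, with the graph realized in its standard description as the generalized Petersen graph $GP(8,3)$), every pair of vertices at distance 3 is joined by either 2 or 3 geodesics, so that $(\dagger)$ with $q_\alpha=2$ gives rank $\frac 3 2$ or $2$; moreover, every non-backtracking path of length 2 extends to exactly two roots, one of each rank, which is the key fact behind the lemma below.

```python
from itertools import product

# Moebius-Kantor graph, realized as the generalized Petersen graph GP(8,3):
# an outer 8-cycle ('u', i), inner vertices ('v', i) with ('v',i) ~ ('v',i+3),
# and spokes ('u', i) ~ ('v', i).  The graph is cubic, so q_alpha = 3 - 1 = 2.
adj = {(s, i): set() for s in 'uv' for i in range(8)}
for i in range(8):
    for a, b in [(('u', i), ('u', (i + 1) % 8)),
                 (('v', i), ('v', (i + 3) % 8)),
                 (('u', i), ('v', i))]:
        adj[a].add(b)
        adj[b].add(a)

def dist(a, b):
    """Breadth-first graph distance from a to b."""
    frontier, seen, d = {a}, {a}, 0
    while b not in frontier:
        frontier = {y for x in frontier for y in adj[x]} - seen
        seen |= frontier
        d += 1
    return d

def rank(a, b):
    """Rank (dagger) of a root with endpoints a, b at distance 3:
    1 + N/2, where N + 1 is the number of geodesics joining a and b
    (girth 6 makes every length-3 walk between them a geodesic)."""
    n_geo = sum(1 for x in adj[a] for y in adj[x] if b in adj[y])
    return 1 + (n_geo - 1) / 2

ranks = {rank(a, b) for a, b in product(adj, adj) if dist(a, b) == 3}

# Each 2-arc (non-backtracking path of length 2) extends to exactly two
# roots, one of each rank.
extension_pairs = set()
for a in adj:
    for x in adj[a]:
        for y in adj[x] - {a}:
            exts = [z for z in adj[y] - {x} if dist(a, z) == 3]
            extension_pairs.add(tuple(sorted(rank(a, z) for z in exts)))
```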
Every side in a 2-triangle defines two roots at every vertex $x$ of its center (small) triangle, which are of the form $\alpha(t)$ and $\alpha(\pi-t)$ for $t\in [0,\pi]$ for some $\alpha\in \Phi_x$. These two roots have the same rank, which we call the rank of the corresponding side of the 2-triangle.
If $T_1$, $T_2$ are two 2-triangles, we say that $T_1$ and $T_2$ are in \emph{branching configuration} if the intersection $T_1\cap T_2$ contains 3 triangles. In this case, we call \emph{branching permutation}, and we write $T_1\to T_2$, the transformation that fixes the intersection $T_1\cap T_2$ pointwise, and takes the free triangle in $T_1$ isometrically to the free triangle in $T_2$. Such a transformation is involutive. The rank of the two sides of $T_1$, which are not included in the fixed set $T_1\cap T_2$, must change under such a transformation:
\begin{lemma}\label{L - rank parity lemma} Let $X$ be a complex of rank $\frac 7 4$. Let $T_1, T_2$ be two 2-triangles of $X$ in branching configuration.
If $\alpha$ is the root of a side of $T_1$ and $s\colon T_1\to T_2$ is a branching permutation, then the corresponding root transformation $\alpha \mapsto s(\alpha)$ permutes the values of the rank:
\[
\begin{cases}\frac 3 2\mapsto 2\\ 2\mapsto \frac 3 2 \end{cases}
\]
unless $s(\alpha)=\alpha$.
\end{lemma}
\begin{proof}
Consider such a root $\alpha$ at a vertex
$x$. In the link $L_x$, every isometric embedding $\alpha\colon [0,\frac{2\pi}3]\inj L_x$, where $\alpha(0)$ is a vertex, admits exactly two isometric extensions $\alpha\colon [0,\pi]\inj L_x$ into a root. One of them has rank \th, the other rank 2. Hence the two values of the rank are permuted by a branching permutation.
\end{proof}
Using Lemma \ref{L - rank parity lemma} one can define a metric invariant of the complex taking values in $\mathbb{Z}/2\mathbb{Z}$, and attached to every face of $X$.
\begin{definition}\label{D - rank parity}
The \emph{parity} of a triangle face $t$ in a complex of rank $\frac 7 4$\ is the parity of the number of roots of rank 2 in a 2-triangle $T$ in which $t$ embeds as the centerpiece.
\end{definition}
It follows from Lemma \ref{L - rank parity lemma} that this is a well defined map
\[
X^{2}\to \mathbb{Z}/2\mathbb{Z}
\]
where $X^{2}$ denotes the set of 2-faces of the 2-complex $X$. We call this map the parity map.
It is clearly an invariant of isomorphism:
\begin{lemma}
Simplicial isomorphisms between complexes of rank $\frac 7 4$\ preserve the parity of faces.
\end{lemma}
In particular, the automorphism group $\Aut(X)$ of a complex $X$ of rank $\frac 7 4$\ acts in a parity preserving way.
\begin{definition}
We will say that a complex of rank $\frac 7 4$\ is even (resp.\ odd) if its faces are even (resp.\ odd).
\end{definition}
In general, one might expect (compare \textsection\ref{S- Prescription}) that a complex of rank $\frac 7 4$\ has mixed parity.
\section{Explicit computations}\label{S- Explicit}
In \cite[\textsection 4]{rd} we found 13 complexes of rank $\frac 7 4$. The parity map can be computed explicitly in these examples, which provides new information on these complexes. We explain how in this section.
The simplest case to consider is that of $X_0:=\tilde V_0$. The complex can be described by its fundamental set of triangles as follows:
\[
V_0:=[[1,2,6],[2,3,7],[3,4,8],[4,5,1],[5,6,2],[6,7,3],[7,8,4],[8,1,5]].
\]
It is the simplest case because the automorphism group is face transitive, and we know in advance that the complex $X_0$ is either odd or even.
\begin{proposition}\label{P - V0 odd}
$X_0$ is odd.
\end{proposition}
\begin{proof}
Consider for instance the face $[1,2,6]$. It is contained in $8(=2^3)$ 2-triangles. If the face $[1,2,6]$ is oriented counterclockwise, one of these 2-triangles is given by the faces $[2,3,7]$, $[6,7,3]$, and $[4,5,1]$ in this order. Labelling the link vertices by a signed integer $\pm k$, where $k\in \{1,\ldots, 8\}$, corresponding to an incoming ($+k$) or an outgoing ($-k$) edge with label $k$ in $X_0$, this gives us three roots at the vertices of $[1,2,6]$
\[
\alpha_1\colon 3\to -6\to 2\to-3
\]
\[
\alpha_2\colon 5\to-1\to6\to-7
\]
\[
\alpha_3\colon 7\to-2\to1\to-4
\]
corresponding to the sides of the given 2-triangle. A direct computation in the labelled link using $(\dagger)$ shows that
\[
\rk(\alpha_1)=\rk(\alpha_3)=\frac 3 2
\]
\[
\rk(\alpha_2)=2.
\]
where the link and its labelling are represented by the following figure:
\begin{figure}[H]
\begin{tikzpicture}[shift ={(0.0,0.0)},scale = 2.0]
\tikzstyle{every node}=[font=\small]
\node (v1) at (1.0,0.0){$1$};
\node (v2) at (0.92,0.38){$-2$};
\node (v3) at (0.71,0.71){$6$};
\node (v4) at (0.38,0.92){$-7$};
\node (v5) at (-0.0,1.0){$3$};
\node (v6) at (-0.38,0.92){$-4$};
\node (v7) at (-0.71,0.71){$8$};
\node (v8) at (-0.92,0.38){$-1$};
\node (v9) at (-1.0,-0.0){$5$};
\node (v10) at (-0.92,-0.38){$-6$};
\node (v11) at (-0.71,-0.71){$2$};
\node (v12) at (-0.38,-0.92){$-3$};
\node (v13) at (0.0,-1.0){$7$};
\node (v14) at (0.38,-0.92){$-8$};
\node (v15) at (0.71,-0.71){$4$};
\node (v16) at (0.92,-0.38){$-5$};
\draw[solid,thin,color=black,-] (v1) -- (v2);
\draw[solid,thin,color=black,-] (v2) -- (v3);
\draw[solid,thin,color=black,-] (v3) -- (v4);
\draw[solid,thin,color=black,-] (v4) -- (v5);
\draw[solid,thin,color=black,-] (v5) -- (v6);
\draw[solid,thin,color=black,-] (v6) -- (v7);
\draw[solid,thin,color=black,-] (v7) -- (v8);
\draw[solid,thin,color=black,-] (v8) -- (v9);
\draw[solid,thin,color=black,-] (v9) -- (v10);
\draw[solid,thin,color=black,-] (v10) -- (v11);
\draw[solid,thin,color=black,-] (v11) -- (v12);
\draw[solid,thin,color=black,-] (v12) -- (v13);
\draw[solid,thin,color=black,-] (v13) -- (v14);
\draw[solid,thin,color=black,-] (v14) -- (v15);
\draw[solid,thin,color=black,-] (v15) -- (v16);
\draw[solid,thin,color=black,-] (v16) -- (v1);
\draw[solid,thin,color=black,-] (v1) -- (v6);
\draw[solid,thin,color=black,-] (v3) -- (v8);
\draw[solid,thin,color=black,-] (v5) -- (v10);
\draw[solid,thin,color=black,-] (v7) -- (v12);
\draw[solid,thin,color=black,-] (v9) -- (v14);
\draw[solid,thin,color=black,-] (v11) -- (v16);
\draw[solid,thin,color=black,-] (v13) -- (v2);
\draw[solid,thin,color=black,-] (v15) -- (v4);
\end{tikzpicture}
\end{figure}
\noindent This proves that $X_0$ is odd.
\end{proof}
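The rank computation in this proof can be reproduced mechanically. The sketch below (ours) encodes the labelled link of the figure as a 16-cycle, with positions numbered from 0 at the vertex labelled 1, plus a chord from every even position $p$ to position $p+5$, and evaluates $(\dagger)$ for the three roots; exactly one of them has rank 2, so the face is odd.

```python
# Link of the vertex of X0, read off the figure: positions 0..15 on a
# 16-cycle carrying the labels below, plus a chord from every even
# position p to position p + 5 (mod 16).
labels = [1, -2, 6, -7, 3, -4, 8, -1, 5, -6, 2, -3, 7, -8, 4, -5]
pos = {lab: p for p, lab in enumerate(labels)}
adj = {p: {(p - 1) % 16, (p + 1) % 16} for p in range(16)}
for p in range(0, 16, 2):
    adj[p].add((p + 5) % 16)
    adj[(p + 5) % 16].add(p)

def rank(path):
    """Rank (dagger) of the root whose 4 labels are `path`: its endpoints
    are at graph distance 3, so every length-3 walk between them is a
    geodesic, and the rank is 1 + N/2 with N + 1 the number of geodesics."""
    a, b = pos[path[0]], pos[path[-1]]
    n_geo = sum(1 for x in adj[a] for y in adj[x] if b in adj[y])
    return 1 + (n_geo - 1) / 2

r1 = rank([3, -6, 2, -3])   # alpha_1
r2 = rank([5, -1, 6, -7])   # alpha_2
r3 = rank([7, -2, 1, -4])   # alpha_3
```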
This proof scheme works for every face in any complex of rank $\frac 7 4$. Similar computations lead, for example, to the following statement.
\begin{proposition}\label{P - 74 even}
The following complexes of rank $\frac 7 4$\ are even:
\begin{align*}
V^1_0=[[1, 2, 3], [1, 4, 5], [1, 6, 4], [2, 6, 8], [2, 8, 5], [3, 6, 7], [3, 7, 5], [4, 8, 7]]\\
V^2_0=[[1, 2, 3], [1, 4, 5], [1, 6, 7], [2, 4, 6], [2, 8, 5], [3, 6, 8], [3, 7, 5], [4, 8, 7]]\\
\check V_0^2=[[1, 2, 3], [1, 4, 5], [1, 6, 7], [2, 6, 4], [2, 8, 5], [3, 6, 8], [3, 7, 5], [4, 8, 7]]\\
V_4^1=[[1, 1, 5], [2, 2, 5], [3, 3, 6], [4, 4, 6], [1, 3, 8], [2, 7, 4], [5, 8, 7], [6, 7, 8]]
\end{align*}
\end{proposition}
The classification given in \textsection 4 of \cite{rd} contains eight other complexes of rank $\frac 7 4$. These complexes are of mixed parity. The details of the parity map are given in an appendix.
\section{Constructing universal covers}\label{S - surgery construtions}
Our goal in this section is to use the parity to construct compact complexes of rank $\frac 7 4$\ with non isomorphic universal covers.
\begin{theorem}\label{T - pairwise non isomorphic cover}
There exists infinitely many compact complexes of rank $\frac 7 4$\ with pairwise non isomorphic universal covers.
\end{theorem}
An idea is to construct complexes with ``different proportions'' of even faces, whose universal covers will therefore be non isomorphic. A direct implementation of this would require assigning to a complex of rank $\frac 7 4$, in a computable way, an average number of the even faces it contains. In view of the discussion in the introduction, we shall instead use the following invariant, which measures the `radius of evenness' of the complex $X$.
\begin{definition} Let $X$ be a simply connected complex of rank $\frac 7 4$. We let $e(X)$ be the largest integer $n$ such that there exists a vertex $x\in X$ for which the ball $B(x,n)$ of radius $n$ is even.
\end{definition}
Clearly, if $e(X)\neq e(Y)$, the complexes $X$ and $Y$ are not isomorphic. To prove the theorem, we will construct a sequence $X_n$ of universal covers on which $e$ is injective.
\begin{lemma}\label{L - e finite}
Let $X$ be a simply connected complex of rank $\frac 7 4$\ with cocompact isometry group. Then $X$ is even if and only if $e(X)=\infty$.
\end{lemma}
\begin{proof}
It is clear that $e(X)=\infty$ if $X$ is even. Conversely, assume that $e(X)=\infty$. Then there exist a sequence $(x_n)$ of vertices of $X$ and a sequence $r_n\to \infty$ of radii such that the balls $B(x_n,r_n)$ in $X$ are even. Since $\Isom(X)$ has a compact fundamental set, we may find a vertex $x$ in $X$ such that for every $n$, the ball $B(x,r_n)$ in $X$ is even. Thus, $X$ is even.
\end{proof}
In order to find an injectivity set for $e$ we will use the surgery construction of \cite{surgery}. In fact, this is the only construction of (infinitely many) compact complexes of rank $\frac 7 4$\ that we are aware of at this moment.
Let us start with a brief review of these constructions for groups of rank $\frac 7 4$. The surgery is described by a category $\mathrm{Bord}_{\frac 7 4}$, whose arrows are called group cobordisms. The objects in this category are called collars. Both the objects and the morphisms in $\mathrm{Bord}_{\frac 7 4}$ correspond to 2-dimensional complexes.
The following 2-complex defines a collar in $\mathrm{Bord}_{\frac 7 4}$. It was used in \cite{randomsurgery} to define a model of random groups of rank $\frac 7 4$.
\[
(x,a,d),\ (y,c,d),\ (z,c,b),\ (x',d,a),\ (y',b,a),\ (z',b,c)
\]
This 2-complex, which we will denote $C$, is obtained from a set of
six oriented equilateral triangles, with labeled edges, by identifying the edges respecting the labels and the orientations. It is not hard to check that this is an object in $\mathrm{Bord}_{\frac 7 4}$; in fact this collar is included in the ST lemma of \cite[\textsection 8]{surgery}.
We will use two arrows
\[
X_{00},Y_{00}\colon C\to C
\]
in $\mathrm{Bord}_{\frac 7 4}$, keeping the notation of \cite[\textsection 2]{randomsurgery} for consistency. As 2-complexes, these arrows have the following respective presentations
\[
(x,a,d),(y,c,d), (z,c,b),\ (1,1,2),\ (2,a',d'),(4,c',d'),(3,c',b')
\]
\[
(4,d,a),(3,b,a), (2,b,c),\ (1,3,4),\ (x',d',a'),(y',b',a'),(z',b',c')
\]
and
\[
(x,a,d),(y,c,d), (z,c,b),\ (1,2,3),\ (4,a',d'),(2,c',d'),(1,c',b')
\]
\[
(1,d,a),(3,b,a), (4,b,c),\ (2,4,3),\ (x',d',a'),(y',b',a'),(z',b',c')
\]
described in \cite[Remark 2.8]{randomsurgery}.
As group cobordisms, when viewed as arrows $C\to C$ in the category $\mathrm{Bord}_{\frac 7 4}$, these complexes $Z$ are endowed with two maps $L_Z\colon C\to Z$ and $R_Z\colon C\to Z$, which in the case of $Z=X_{00}$ and $Z=Y_{00}$ are the obvious ones.
As in \textsection\ref{S- Explicit}, Prop.\ \ref{P - V0 odd}, a direct computation shows that:
\begin{lemma} The triangles $(1,1,2)$ and $(1,3,4)$ (resp.\ $(1,2,3)$ and $(2,3,4)$) are even in $X_{00}$ (resp.\ in $Y_{00}$).
\end{lemma}
Note that the cobordisms themselves are not complexes of rank $\frac 7 4$, since they have a non trivial boundary. However, since the triangles referred to in the above lemma belong to the core, the link involved is indeed the Moebius--Kantor graph, and the parity makes sense in this case.
On the other hand, the parity of the collar faces is not determined a priori by the cobordism alone, since it may depend on composition. We shall use this fact to construct complexes with different parity ratios.
\begin{lemma}\label{L - even central}
In the composition $Y_{00}Y_{00}$, the central collar is even.
\end{lemma}
\begin{proof}
Indeed, the $n$-fold composition $\underbrace{Y_{00}\cdots Y_{00}}_n$ of the cobordism $Y_{00}$ leads to finite covers of $V_0^1$, which is an even complex, as pointed out in Prop.\ \ref{P - 74 even}.
\end{proof}
On the other hand we have:
\begin{lemma}\label{L - product odd}
The central collar in the composition $X_{00}Y_{00}$ contains an odd face.
\end{lemma}
\begin{proof}
Let us show that the face $(4,c',d')$ is odd. We can relabel the second cobordism $Y_{00}$ as follows
\[
(2,a',d'),(4,c',d'), (3,c',b'),\ (11,12,13),\ (14,a'',d''),(12,c'',d''),(11,c'',b'')
\]
\[
(11,d',a'),(13,b',a'), (14,b',c'),\ (12,14,13),\ (x'',d'',a''),(y'',b'',a''),(z'',b'',c'')
\]
viewing it as embedded in the composition $X_{00}Y_{00}$. A 2-triangle containing this face is described, in counterclockwise order, by the 3 faces $(1,3,4)$, $(3,c',b')$, and $(2,a',d')$, which gives us three roots
\[
\alpha_1\colon a'\to -d'\to c'\to -b'
\]
\[
\alpha_2\colon 3\to -c'\to 4\to -1
\]
\[
\alpha_3\colon 3\to -4\to d'\to -2
\]
of which we have to compute the rank.
The root $\alpha_1$ belongs to the following link in $X_{00}Y_{00}$:
\begin{center}
\begin{tikzpicture}[shift ={(0.0,0.0)},scale = 2.0]
\tikzstyle{every node}=[font=\small]
\node (v1) at (1.0,0.0){$6$};
\node (v2) at (0.92,0.38){$-14$};
\node (v3) at (0.71,0.71){$12$};
\node (v4) at (0.38,0.92){$-7$};
\node (v5) at (-0.0,1.0){$8$};
\node (v6) at (-0.38,0.92){$-5$};
\node (v7) at (-0.71,0.71){$14$};
\node (v8) at (-0.92,0.38){$-13$};
\node (v9) at (-1.0,-0.0){$a'$};
\node (v10) at (-0.92,-0.38){$-11$};
\node (v11) at (-0.71,-0.71){$13$};
\node (v12) at (-0.38,-0.92){$-b'$};
\node (v13) at (0.0,-1.0){$c'$};
\node (v14) at (0.38,-0.92){$-d'$};
\node (v15) at (0.71,-0.71){$11$};
\node (v16) at (0.92,-0.38){$-12$};
\draw[solid,thin,color=black,-] (v1) -- (v2);
\draw[solid,thin,color=black,-] (v2) -- (v3);
\draw[solid,thin,color=black,-] (v3) -- (v4);
\draw[solid,thin,color=black,-] (v4) -- (v5);
\draw[solid,thin,color=black,-] (v5) -- (v6);
\draw[solid,thin,color=black,-] (v6) -- (v7);
\draw[solid,thin,color=black,-] (v7) -- (v8);
\draw[solid,thin,color=black,-] (v8) -- (v9);
\draw[solid,thin,color=black,-] (v9) -- (v10);
\draw[solid,thin,color=black,-] (v10) -- (v11);
\draw[solid,thin,color=black,-] (v11) -- (v12);
\draw[solid,thin,color=black,-] (v12) -- (v13);
\draw[solid,thin,color=black,-] (v13) -- (v14);
\draw[solid,thin,color=black,-] (v14) -- (v15);
\draw[solid,thin,color=black,-] (v15) -- (v16);
\draw[solid,thin,color=black,-] (v16) -- (v1);
\draw[solid,thin,color=black,-] (v1) -- (v6);
\draw[solid,thin,color=black,-] (v3) -- (v8);
\draw[solid,thin,color=black,-] (v5) -- (v10);
\draw[solid,thin,color=black,-] (v7) -- (v12);
\draw[solid,thin,color=black,-] (v9) -- (v14);
\draw[solid,thin,color=black,-] (v11) -- (v16);
\draw[solid,thin,color=black,-] (v13) -- (v2);
\draw[solid,thin,color=black,-] (v15) -- (v4);
\end{tikzpicture}
\end{center}
Therefore it is of rank 2.
The roots $\alpha_2$ and $\alpha_3$ belong to the following link:
\begin{center}
\begin{tikzpicture}[shift ={(0.0,0.0)},scale = 2.0]
\tikzstyle{every node}=[font=\small]
\node (v1) at (1.0,0.0){$1$};
\node (v2) at (0.92,0.38){$-1$};
\node (v3) at (0.71,0.71){$4$};
\node (v4) at (0.38,0.92){$-6$};
\node (v5) at (-0.0,1.0){$5$};
\node (v6) at (-0.38,0.92){$-3$};
\node (v7) at (-0.71,0.71){$b'$};
\node (v8) at (-0.92,0.38){$-c'$};
\node (v9) at (-1.0,-0.0){$3$};
\node (v10) at (-0.92,-0.38){$-4$};
\node (v11) at (-0.71,-0.71){$d'$};
\node (v12) at (-0.38,-0.92){$-a'$};
\node (v13) at (0.0,-1.0){$2$};
\node (v14) at (0.38,-0.92){$-8$};
\node (v15) at (0.71,-0.71){$7$};
\node (v16) at (0.92,-0.38){$-2$};
\draw[solid,thin,color=black,-] (v1) -- (v2);
\draw[solid,thin,color=black,-] (v2) -- (v3);
\draw[solid,thin,color=black,-] (v3) -- (v4);
\draw[solid,thin,color=black,-] (v4) -- (v5);
\draw[solid,thin,color=black,-] (v5) -- (v6);
\draw[solid,thin,color=black,-] (v6) -- (v7);
\draw[solid,thin,color=black,-] (v7) -- (v8);
\draw[solid,thin,color=black,-] (v8) -- (v9);
\draw[solid,thin,color=black,-] (v9) -- (v10);
\draw[solid,thin,color=black,-] (v10) -- (v11);
\draw[solid,thin,color=black,-] (v11) -- (v12);
\draw[solid,thin,color=black,-] (v12) -- (v13);
\draw[solid,thin,color=black,-] (v13) -- (v14);
\draw[solid,thin,color=black,-] (v14) -- (v15);
\draw[solid,thin,color=black,-] (v15) -- (v16);
\draw[solid,thin,color=black,-] (v16) -- (v1);
\draw[solid,thin,color=black,-] (v1) -- (v6);
\draw[solid,thin,color=black,-] (v3) -- (v8);
\draw[solid,thin,color=black,-] (v5) -- (v10);
\draw[solid,thin,color=black,-] (v7) -- (v12);
\draw[solid,thin,color=black,-] (v9) -- (v14);
\draw[solid,thin,color=black,-] (v11) -- (v16);
\draw[solid,thin,color=black,-] (v13) -- (v2);
\draw[solid,thin,color=black,-] (v15) -- (v4);
\end{tikzpicture}
\end{center}
Both of them are of rank $\frac 32$.
This proves that the face is odd.
\end{proof}
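As in the proof of Prop.\ \ref{P - V0 odd}, this computation can be reproduced mechanically. The sketch below (ours) encodes the two labelled links of the proof, which share the same underlying 16-cycle-with-chords structure, and evaluates $(\dagger)$ for $\alpha_1$, $\alpha_2$, $\alpha_3$.

```python
# Both labelled links in the proof have the same underlying graph: a
# 16-cycle (positions 0..15) with a chord from every even position p to
# position p + 5 (mod 16).  Only the labels differ.
link1 = ['6', '-14', '12', '-7', '8', '-5', '14', '-13',
         "a'", '-11', '13', "-b'", "c'", "-d'", '11', '-12']
link2 = ['1', '-1', '4', '-6', '5', '-3', "b'", "-c'",
         '3', '-4', "d'", "-a'", '2', '-8', '7', '-2']

adj = {p: {(p - 1) % 16, (p + 1) % 16} for p in range(16)}
for p in range(0, 16, 2):
    adj[p].add((p + 5) % 16)
    adj[(p + 5) % 16].add(p)

def rank(labels, start, end):
    """Rank (dagger) of a root with the given endpoint labels; the
    endpoints are at graph distance 3, so every length-3 walk between
    them is a geodesic, and the rank is 1 + N/2 as in the text."""
    pos = {lab: p for p, lab in enumerate(labels)}
    a, b = pos[start], pos[end]
    n_geo = sum(1 for x in adj[a] for y in adj[x] if b in adj[y])
    return 1 + (n_geo - 1) / 2

r1 = rank(link1, "a'", "-b'")   # alpha_1
r2 = rank(link2, '3', '-1')     # alpha_2
r3 = rank(link2, '3', '-2')     # alpha_3
```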
\begin{proof}[Proof of Theorem \ref{T - pairwise non isomorphic cover}]
Consider the compact complexes of rank $\frac 7 4$\
\[
V_n:=X_{00}\underbrace{Y_{00}\cdots Y_{00}}_{2n}/\sim
\]
where $\sim$ identifies the two extremal copies of $C$ in the composition $X_{00}Y_{00}\cdots Y_{00}$. Let $X_n$ denote the universal cover of $V_n$. Since the projection $X_n\surj V_n$ reduces distances, it is clear that if $x_n$ denotes a lift of the core vertex in the $n^\text{th}$ cobordism $Y_{00}$ in $V_n$, then by Lemma \ref{L - even central} the ball of radius $n$ around $x_n$ in $X_n$ is even. It follows that $e(X_n)\geq n$. By Lemma \ref{L - e finite} and Lemma \ref{L - product odd}, we have $e(X_n)<\infty$. Therefore, we can find a subsequence of $(V_n)_n$ on which $e$ is injective.
\end{proof}
\begin{remark} 1) By their definition, the groups $\pi_1(V_n)$ in Theorem \ref{T - pairwise non isomorphic cover} are accessible by surgery.
2) The proof shows that the universal cover of $V_0^1$ is not determined locally among the complexes of rank $\frac 7 4$: one can find compact complexes of rank $\frac 7 4$\ with universal cover distinct from $X_0^1$, but which coincide with it on arbitrarily large balls. Another way to phrase this is to say that $V_0^1$ is an accumulation point in the space of complexes of rank $\frac 7 4$.
3)
Lemma \ref{L - even central} shows that the universal covers constructed in the present section always contain a definite amount of even faces, which can never be reduced to zero.
On the other hand, we have an example $X_0=\tilde V_0$ of a complex with no even face (Prop.\ \ref{P - V0 odd}). In the forthcoming section we show how to construct (simply connected) complexes of rank $\frac 7 4$\ with an arbitrary parity map.
\end{remark}
\section{Free constructions}\label{S- Prescription}
J.\ Tits observed that certain free constructions of (e.g.) Euclidean buildings can be given in the rank 2 case, in analogy with the construction of free projective planes. In fact, it was shown by M.\ Ronan in \cite{ronan1986construction} that these Euclidean buildings (in the $\tilde A_2$ case) can all be obtained by such free constructions, and furthermore that a prescription theorem holds, to the effect that the residues at the vertices can be taken to run over any set of projective planes of the given order. Thus, there is complete freedom in the construction of buildings of type $\tilde A_2$. We refer to \cite{ronan1986construction,ronan1987building} for details on these results, and to \cite[\textsection 2]{barre2000immeubles} for a different free prescription theorem along these lines, for buildings of type $\tilde A_2$ and order 2.
Our goal in this section is to show that complexes of rank $\frac 7 4$\ behave similarly in this respect. We prove an analogous prescription theorem, in which the prescription of projective planes is replaced by a prescription of the parity map.
\begin{theorem}\label{T - Free constructions}
There are free constructions of simply connected complexes of rank $\frac 7 4$\ with a given parity map. In particular, there exist uncountably many pairwise non isomorphic simply connected complexes of rank $\frac 7 4$.
\end{theorem}
The free construction is (as in the above references) by induction, starting with a ball $B_1$ in a complex of rank $\frac 7 4$\ and extending successively the balls $B_1\subset B_2\subset\cdots$, and setting $X:=\bigcup B_n$, where we have to show that the parity can be chosen freely. This is done in the following lemma, from which the theorem follows easily.
\begin{lemma}
Let $B_n$ be a ball of radius $n$ in a complex of rank $\frac 7 4$, and let $S_n:=B_n\setminus (B_{n-1})^\circ$ denote the simplicial sphere of radius $n$. Let $p\colon S_n\to\mathbb{Z}/2\mathbb{Z}$ be a map defined on the 2-skeleton. Then there exists a ball $B_{n+1}$ in a complex of rank $\frac 7 4$, containing $B_n$, and such that the parity of the faces of $S_n$ in $B_{n+1}$ is given by $p$.
\end{lemma}
\begin{proof}
Let $f$ be a face in $S_n$. Then $f\cap \del B_n$ is either a point, an edge, or a pair of adjacent edges (where $\del B_n$ denotes the topological boundary).
If $f\cap \del B_n$ is a point $x$, then $f$ is adjacent to 4 faces in $S_n$ whose intersection with $\del B_n$ is an edge. Let $T$ be a 2-triangle containing the face $f$ and let $\alpha$ be a root corresponding to $T$ at $x$. We can choose to embed $\alpha$ in the Moebius--Kantor graph $L_x$ in such a way that its rank is $\frac 3 2$ or 2, and we can therefore freely decide the parity of $f$ in the construction, which can be chosen to be $p(f)$. Furthermore, this operation only determines the following subgraph of $L_x$.
\begin{center}
\begin{tikzpicture}[shift ={(0.0,0.0)},scale = 2.0]
\tikzstyle{every node}=[font=\small]
\coordinate (v1) at (0.0,0.0);
\coordinate (v2) at (1,0);
\coordinate (v3) at (-0.5,0.5);
\coordinate (v4) at (0.1,.9);
\coordinate (v5) at (.9,.9);
\coordinate (v6) at (1.5,.5);
\coordinate (v7) at (0,1.1);
\coordinate (v8) at (1,1.1);
\coordinate (w3) at (-0.5,-0.5);
\coordinate (w4) at (0.1,-.9);
\coordinate (w5) at (.9,-.9);
\coordinate (w6) at (1.5,-.5);
\coordinate (w7) at (0,-1.1);
\coordinate (w8) at (1,-1.1);
\draw (v2) node[above] {$u$} ;
\draw (v4) node[above] {$v$};
\draw (v7) node[above] {$w$} ;
\path (v1) edge[solid,thin,color=black,-] node[above] {$f$} (v2);
\draw[solid,thin,color=black,-] (v1) -- (v3);
\draw[solid,thin,color=black,-] (v3) -- (v4);
\draw[solid,thin,color=black,-] (v4) -- (v5);
\draw[solid,thin,color=black,-] (v5) -- (v6);
\draw[solid,thin,color=black,-] (v2) -- (v6);
\draw[solid,thin,color=black,-] (v3) -- (v7);
\draw[solid,thin,color=black,-] (v7) -- (v8);
\draw[solid,thin,color=black,-] (v8) -- (v6);
\draw[solid,thin,color=black,-] (v1) -- (w3);
\draw[solid,thin,color=black,-] (w3) -- (w4);
\draw[solid,thin,color=black,-] (w4) -- (w5);
\draw[solid,thin,color=black,-] (w5) -- (w6);
\draw[solid,thin,color=black,-] (v2) -- (w6);
\draw[solid,thin,color=black,-] (w3) -- (w7);
\draw[solid,thin,color=black,-] (w7) -- (w8);
\draw[solid,thin,color=black,-] (w8) -- (w6);
\end{tikzpicture}
\end{center}
In this graph the edge labelled $f$ corresponds to the centerpiece of the root $\alpha$, which extends on both sides, in a direction depending on its rank. The origin and extremity of $\alpha$ belong to the two faces in $T$ adjacent to $f$ and intersecting the boundary. Let $g$ be one of these two faces and $\beta$ be a root in $L_x$ corresponding to $g$. Note that $g$ is such that $g\cap \del B_n$ is a single edge. We have to show that the parity of $g$ can be chosen according to $p$. By symmetry, one may assume that $\beta(0) = u$ and $\beta(\pi)=v$ or $w$. Lemma \ref{L - rank parity lemma} ensures that the rank of $\beta$ is permuted according to the choice $\beta(\pi)=v$ or $\beta(\pi)=w$. This shows that one can freely choose the rank parity of $g$ by choosing an appropriate embedding of the above graph in the Moebius--Kantor graph $L_x$.
This argument shows that for any face $g$ such that $g\cap \del B_n$ is a single edge $[x,y]$, the ranks of the two roots $\beta_x$ and $\beta_y$ corresponding to $x$ and $y$ in a 2-triangle containing $g$ can be chosen independently. In particular, the parity of $g$ can be chosen to be $p(g)$.
In the last case, $f\cap \del B_n$ is a pair of adjacent edges intersecting at a point $x$. In that case the ball $B_n$ determines a single edge of $L_x$, and therefore the parity of $f$ can be freely chosen.
\end{proof}
\begin{remark}
1) It seems clear from the proof that the parity alone will not determine the isomorphism class of a complex of rank $\frac 7 4$, and that there ought to exist uncountably many complexes with a given parity (for example, uncountably many even, and uncountably many odd, complexes of rank $\frac 7 4$). Providing a formal proof of this assertion however requires a more refined invariant that distinguishes between complexes with a given parity map. We shall not pursue these investigations further.
2) It follows from Theorem \ref{T - Free constructions} that the equivalence in Lemma \ref{L - e finite} fails without the assumption that the isometry group has compact quotient.
\end{remark}
The following should be compared to the results of \cite{tits1988spheres}.
\begin{corollary}
There exist at least $174763$ isomorphism types of spheres of radius 2 in complexes of rank $\frac 7 4$.
\end{corollary}
\begin{proof}
A sphere of radius 2 determines the parity map on the ball $B$ of radius 1. By Theorem \ref{T - Free constructions}, for every map $p\colon B\to \mathbb{Z}/2\mathbb{Z}$ defined on the 2-skeleton, there exists a ball $B_p$ of radius 2 extending $B$, such that the parity of the faces of the ball of radius 1 in $B_p$ is given by $p$. Since the parity is an invariant, if $B_p$ and $B_{p'}$ are abstractly isomorphic, then there exists an automorphism $\theta\colon B\to B$ such that $p'=p\circ \theta$. Thus, the number of spheres of radius 2 is at least the number of $p$'s modulo the action of $\Aut(B)\simeq \Aut(G)$, where $G$ is the Moebius--Kantor graph. This gives at least $\lceil 2^{24}/96\rceil=174763$ isomorphism types.
\end{proof}
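As an illustrative aside (not part of the proof above), this count can be verified by machine: realizing the Moebius--Kantor graph as the generalized Petersen graph $GP(8,3)$ (the encoding below is an assumption of this sketch), a brute-force search confirms that $|\Aut(G)|=96$, so that at least $\lceil 2^{24}/96\rceil = 174763$ classes of parity maps survive the group action.

```python
import math

# Moebius--Kantor graph realized as the generalized Petersen graph GP(8,3):
# outer cycle 0..7, inner vertices 8..15, spokes i -- i+8,
# inner edges 8+i -- 8+((i+3) mod 8).
adj = [set() for _ in range(16)]
def add_edge(a, b):
    adj[a].add(b)
    adj[b].add(a)
for i in range(8):
    add_edge(i, (i + 1) % 8)
    add_edge(i, i + 8)
    add_edge(8 + i, 8 + (i + 3) % 8)

def automorphism_count(adj):
    # backtracking over bijections preserving adjacency and non-adjacency
    n = len(adj)
    total = 0
    def extend(mapping):
        nonlocal total
        i = len(mapping)
        if i == n:
            total += 1
            return
        used = set(mapping)
        for img in range(n):
            if img not in used and all(
                (j in adj[i]) == (mapping[j] in adj[img]) for j in range(i)
            ):
                extend(mapping + [img])
    extend([])
    return total

n_autos = automorphism_count(adj)
lower_bound = math.ceil(2 ** 24 / n_autos)
assert n_autos == 96 and lower_bound == 174763
```

The search is instantaneous on a cubic graph with 16 vertices, since the degree constraints prune almost all partial maps.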
\section{Generic constructions}\label{S - generic}
In \cite{E1,E2} we studied Euclidean buildings of type $\tilde A_2$ from a dynamical point of view, motivated by
some questions in orbit equivalence theory. In view of the result in \textsection\ref{S- Prescription}, it seems obvious that these results will carry over to complexes of rank $\frac 7 4$. In the present section we mention an analog of \cite[Theorem 5]{E1}, which can be stated as follows:
\begin{theorem}
A generic complex of rank $\frac 7 4$\ has trivial automorphism group.
\end{theorem}
The notion of genericity used in this result (and in \cite{E1}) is topological, in the sense of Baire category, meaning that the statement holds for a dense $G_\delta$ set in the space of pointed complexes of rank $\frac 7 4$.
Rather than giving a formal proof of this result (which would significantly overlap with \cite{E1}), let us reproduce the figure from \cite[\textsection 6]{E1} since it is helpful to understand how the generic triviality of the automorphism group occurs.
\begin{figure}[H]
\includegraphics[width=7cm]{AT.pdf}
\end{figure}
This figure symbolizes a complex of rank $\frac 7 4$\ (or a building in the situation of \cite{E1}). The central (white) disk represents a ball of ``small'' (i.e., arbitrarily large, but fixed) radius, which corresponds to fixing a neighborhood in the space of pointed complexes of rank $\frac 7 4$. Above this ball are larger balls which we can control in order to trivialize automorphisms, where the shades of grey account for the ``density'' of odd faces in the given portion of the complex. Modulo some details that need to be checked (as we did in \cite{E1}, in the case of buildings), this proves that the automorphism group of the generic (in the topological sense) complex of rank $\frac 7 4$\ is trivial.
\begin{remark}
1) This argument is quite robust. It shows that the automorphism group of the spaces is generically trivial as soon as one can establish a free prescription theorem in the style of Ronan \cite{ronan1986construction} for a local isomorphism invariant (or at least a partial prescription theorem offering enough, if not total, freedom, as in \cite{E1} in the case $q\geq 3$). The parity is such a local isomorphism invariant, by Theorem \ref{T - Free constructions}.
2) In the case of complexes of rank $\frac 7 4$, it is easy to check that the isotropy groups are finite, and therefore the full use of shadings is not required. Namely, one may trivialize the isotropy groups by using appropriate shades of grey in the first wreath, and then confine oneself to prescribing even wreaths, followed by odd ones in alternation.
3) The parity allows us to distinguish universal covers up to isometry, but due to its local nature, there seems to be no direct way to make use of it for quasi-isometries. It is of course natural to wonder what can be said about the relation of quasi-isometry in the space of complexes of rank $\frac 7 4$.
4) One motivation for studying the space of Euclidean buildings was, in the theory of orbit equivalence, to give new constructions of probability measure preserving standard equivalence relations with the property (T) of Kazhdan. One difficulty is the construction of diffuse invariant measures on this space. In \cite{surgery} it is shown that the (similarly defined) space of complexes of rank $\frac 7 4$, contrary to what we currently know in the case of Euclidean buildings, admits diffuse invariant measures, albeit ones with amenable support. It would be interesting to study quasiperiodic spaces of rank $\frac 7 4$\ in more detail, using the techniques of \cite{E1,E2} for example.
\end{remark}
\section{Appendix}
In this appendix we compute the parity map for the remaining eight complexes of rank $\frac 7 4$\ from \cite{rd}, namely $V_1$, $V_3$, $V_5$, $V_2^i$ for $i=1,\ldots,4$, and $V_4^2$.
\begin{table}[H]
\centering
\begin{tabular}{c@{\hspace{.1cm}} | c@{\hspace{.1cm}} c@{\hspace{.1cm}} c@{\hspace{.1cm}}c@{\hspace{.1cm}} c@{\hspace{.1cm}} c@{\hspace{.1cm}}c@{\hspace{.1cm}} c@{\hspace{.1cm}}}
$V_1$& $[1,1,2]$&$[1,3,4]$&$[2,5,6]$&$[2,7,8]$&$[3,5,7]$&$[3,6,5]$&$[4,6,8]$&$[4,8,7]$\\
\hline
Parity&0&0&1&1&0&0&0&0
\end{tabular}
\end{table}
\begin{table}[H]
\centering
\begin{tabular}{c@{\hspace{.1cm}} | c@{\hspace{.1cm}} c@{\hspace{.1cm}} c@{\hspace{.1cm}}c@{\hspace{.1cm}} c@{\hspace{.1cm}} c@{\hspace{.1cm}}c@{\hspace{.1cm}} c@{\hspace{.1cm}}}
$V_2^1$& $[1,1,3]$&$[2,2,3]$&$[1,4,5]$&$[2,7,8]$&$[3,5,7]$&$[4,6,8]$&$[4,7,6]$&$[5,8,6]$\\
\hline
Parity&0&0&1&1&0&0&0&0
\end{tabular}
\end{table}
\begin{table}[H]
\centering
\begin{tabular}{c@{\hspace{.1cm}} | c@{\hspace{.1cm}} c@{\hspace{.1cm}} c@{\hspace{.1cm}}c@{\hspace{.1cm}} c@{\hspace{.1cm}} c@{\hspace{.1cm}}c@{\hspace{.1cm}} c@{\hspace{.1cm}}}
$V_2^2$& $[1,1,3]$&$[2,2,4]$&$[3,7,4]$&$[1,4,6]$&$[2,5,3]$&$[5,7,8]$&$[5,8,6]$&$[6,8,7]$\\
\hline
Parity&1&1&0&1&1&0&0&0
\end{tabular}
\end{table}
\begin{table}[H]
\centering
\begin{tabular}{c@{\hspace{.1cm}} | c@{\hspace{.1cm}} c@{\hspace{.1cm}} c@{\hspace{.1cm}}c@{\hspace{.1cm}} c@{\hspace{.1cm}} c@{\hspace{.1cm}}c@{\hspace{.1cm}} c@{\hspace{.1cm}}}
$V_2^3$& $[1,1,3]$&$[2,2,4]$&$[1,5,2]$&$[3,6,4]$&$[3,7,6]$&$[4,6,8]$&$[5,7,8]$&$[5,8,7]$\\
\hline
Parity&0&0&0&1&1&1&0&0
\end{tabular}
\end{table}
\begin{table}[H]
\centering
\begin{tabular}{c@{\hspace{.1cm}} | c@{\hspace{.1cm}} c@{\hspace{.1cm}} c@{\hspace{.1cm}}c@{\hspace{.1cm}} c@{\hspace{.1cm}} c@{\hspace{.1cm}}c@{\hspace{.1cm}} c@{\hspace{.1cm}}}
$V_2^4$& $[1,1,3]$&$[2,2,4]$&$[1,5,2]$&$[3,6,5]$&$[3,7,8]$&$[4,5,8]$&$[4,6,7]$&$[6,8,7]$\\
\hline
Parity&0&0&0&1&1&1&1&0
\end{tabular}
\end{table}
\begin{table}[H]
\centering
\begin{tabular}{c@{\hspace{.1cm}} | c@{\hspace{.1cm}} c@{\hspace{.1cm}} c@{\hspace{.1cm}}c@{\hspace{.1cm}} c@{\hspace{.1cm}} c@{\hspace{.1cm}}c@{\hspace{.1cm}} c@{\hspace{.1cm}}}
$V_3$& $[1,1,4]$&$[2,2,4]$&$[3,3,5]$&$[1,3,6]$&$[2,5,7]$&$[4,7,8]$&$[5,8,6]$&$[6,8,7]$\\
\hline
Parity&0&0&0&0&1&1&1&0
\end{tabular}
\end{table}
\begin{table}[H]
\centering
\begin{tabular}{c@{\hspace{.1cm}} | c@{\hspace{.1cm}} c@{\hspace{.1cm}} c@{\hspace{.1cm}}c@{\hspace{.1cm}} c@{\hspace{.1cm}} c@{\hspace{.1cm}}c@{\hspace{.1cm}} c@{\hspace{.1cm}}}
$V_4^2$& $[1,1,5]$&$[2,2,5]$&$[3,3,6]$&$[4,4,7]$&$[1,3,8]$&$[2,7,6]$&$[4,8,6]$&$[5,8,7]$\\
\hline
Parity&0&0&1&1&0&1&1&1
\end{tabular}
\end{table}
\begin{table}[H]
\centering
\begin{tabular}{c@{\hspace{.1cm}} | c@{\hspace{.1cm}} c@{\hspace{.1cm}} c@{\hspace{.1cm}}c@{\hspace{.1cm}} c@{\hspace{.1cm}} c@{\hspace{.1cm}}c@{\hspace{.1cm}} c@{\hspace{.1cm}}}
$V_5$& $[1,1,2]$&$[3,3,2]$&$[4,4,2]$&$[5,5,6]$&$[7,7,8]$&$[1,8,6]$&$[3,6,7]$&$[5,8,4]$\\
\hline
Parity&0&0&0&1&1&0&0&0
\end{tabular}
\end{table}
\begin{remark}
The complex $V_0$ is therefore the unique complex in the list of \cite{rd}
such that every face is the centerpiece of a 2-triangle whose three sides are of rank 2.
\end{remark}
\section{Introduction}
This paper deals with weak limit theorems involving the high-frequency
components (in the sense of the spherical harmonics decomposition) of random
fields defined on the unit sphere $\mathbb{S}^{2}$. Our results are
motivated by a number of mathematical issues arising in connection with the
probabilistic and statistical analysis of the Cosmic Microwave Background
radiation (see e.g. \cite{dodelson}). We start by giving a description of
our abstract mathematical framework, along with a sketch of the main results
of the paper; the subsequent Section \ref{ss : PhysMot} focuses on the
physical motivations and applications of our research. Here, and for the
rest of the paper, all random elements are defined on a suitable probability
space $\left( \Omega ,\mathcal{F},\mathbb{P}\right) $.
\subsection{General framework and outline of the main results}
We shall consider real-valued random fields $\{\widetilde{T}(x):x\in \mathbb{%
S}^{2}\}$ enjoying the following properties:%
\begin{equation}
\mathbb{E}\widetilde{T}(x)=0\text{ , }\mathbb{E}\widetilde{T}^{2}(x)<+\infty
\text{ \ and \ }\widetilde{T}(gx)\overset{law}{=}\widetilde{T}(x)\text{,}
\label{IntroDEF}
\end{equation}%
for all $x\in \mathbb{S}^{2}$ and all $g\in SO(3)$, where $\overset{law}{=}$
denotes equality in law (in the sense of stochastic processes). A field
verifying the last relation in (\ref{IntroDEF}) is usually called \textsl{%
isotropic }or \textsl{rotationally-invariant }(in law). It is a standard
result that the following spectral representation holds in the mean-square
sense:%
\begin{equation}
\widetilde{T}\left( x\right) =\sum_{l=0}^{\infty }\widetilde{T}%
_{l}(x)=\sum_{l=0}^{\infty }\sum_{m=-l}^{l}a_{lm}Y_{lm}\left( x\right) \text{%
,} \label{specrap}
\end{equation}%
where $\left\{ Y_{lm}:l\geq 0\text{, }m=-l,...,l\right\} $ is the
collection of the spherical harmonics, and the $\left\{
a_{lm}\right\} $ are the associated (harmonic) Fourier coefficients. For $%
l\geq 0$, we also write $C_{l}\triangleq \mathbb{E}\left\vert
a_{lm}\right\vert ^{2}$, and we call the sequence $\left\{ C_{l}:l\geq
0\right\} $ the \textsl{angular power spectrum} of the random field $%
\widetilde{T}$ (note that $C_{l}$ does not depend on $m$ -- see e.g. \cite%
{BaMa}). For every $l\geq 0$, the field $\widetilde{T}_{l}$ provides the
projection of $\widetilde{T}$ on the subspace of $L^{2}(\mathbb{S}^{2},dx)$
spanned by the class $\left\{ Y_{lm}:m=-l,...,l\right\} $. The spherical
harmonics form an orthonormal basis of $L^{2}(\mathbb{S}^{2},dx)$ which can
be derived from the restriction to the sphere of harmonic polynomials. In
particular, in spherical coordinates $x=(\theta ,\varphi )$ they can be
written explicitly as: $Y_{00}\equiv 1/\sqrt{4\pi }$ and%
\begin{eqnarray}
Y_{lm}(\theta ,\varphi ) &=&\sqrt{\frac{2l+1}{4\pi }\frac{(l-m)!}{(l+m)!}}%
P_{lm}(\cos \theta )e^{im\varphi }\text{, }m\geq 0\text{ ,} \label{IntroSH1}
\\
Y_{lm}(\theta ,\varphi ) &=&(-1)^{m}\overline{Y_{l,-m}}(\theta ,\varphi )%
\text{, }m<0,\text{ }0\leq \theta \leq \pi ,\text{ }0\leq \varphi <2\pi
\text{ ,} \label{IntroSH2}
\end{eqnarray}%
where, for $l\geq 1$ and $m=0,1,2,...,l,$ $P_{lm}(\cdot )$ denotes the
Legendre polynomial of index $l,m,$ i.e.,%
\begin{equation}
P_{lm}(x)=(-1)^{m}(1-x^{2})^{m/2}\frac{d^{m}}{dx^{m}}P_{l}(x)\text{ , }%
P_{l}(x)=\frac{1}{2^{l}l!}\frac{d^{l}}{dx^{l}}(x^{2}-1)^{l}.
\label{Legendre}
\end{equation}%
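As a numerical aside (a sketch, not part of the text), the polynomials $P_{l}$ in (\ref{Legendre}) can be generated by Bonnet's three-term recurrence, which is equivalent to the Rodrigues formula above; a quadrature check of the classical orthogonality relation $\int_{-1}^{1}P_{l}(x)P_{l^{\prime }}(x)\,dx=\frac{2}{2l+1}\delta _{l}^{l^{\prime }}$ serves as a sanity test.

```python
def legendre(l, x):
    # Bonnet recurrence (k+1) P_{k+1} = (2k+1) x P_k - k P_{k-1},
    # with P_0 = 1 and P_1 = x
    p_prev, p = 1.0, x
    if l == 0:
        return p_prev
    for k in range(1, l):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p

def simpson(f, a, b, n=2000):
    # composite Simpson rule on n (even) subintervals
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# spot-check against the closed form P_3(x) = (5x^3 - 3x)/2
assert abs(legendre(3, 0.5) - (5 * 0.5 ** 3 - 3 * 0.5) / 2) < 1e-12
# orthogonality: the integral of P_2 P_3 vanishes, that of P_3^2 equals 2/7
assert abs(simpson(lambda x: legendre(2, x) * legendre(3, x), -1.0, 1.0)) < 1e-8
assert abs(simpson(lambda x: legendre(3, x) ** 2, -1.0, 1.0) - 2.0 / 7.0) < 1e-8
```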
For a discussion of these and other properties of the spherical harmonics
see e.g. \cite[Chapter 9]{Libo}, or \cite[Chapter 5]{VMK}. For $l\geq 0$,
the real-valued field $\widetilde{T}_{l}$ is called the $l$th \textsl{%
frequency component }of $\widetilde{T}$. The expansion (\ref{specrap}) can
be achieved by many different routes, for instance by a Karhunen-Lo\`{e}ve
argument or by means of the stochastic Peter-Weyl theorem, see for instance
\cite{Adler}, \cite{BaMaVa}, \cite{Leon} and \cite{PePy}. The random
harmonic coefficients $\left\{ a_{lm}\right\} $ appearing in (\ref{specrap})
form a triangular array of zero-mean random variables, which are
complex-valued for $m\neq 0$ and such that $\mathbb{E}a_{lm}\overline{%
a_{l^{\prime }m^{\prime }}}=\delta _{l}^{l^{\prime }}\delta
_{m}^{m^{\prime}}C_{l}$ (the bar denotes complex conjugation and
$\delta $ is Kronecker's symbol; note also that
$a_{lm}=(-1)^{m}\overline{a_{l-m}}$). For a Gaussian random field
$\widetilde{T}$ verifying (\ref{IntroDEF}), it is trivial that the
set $\left\{ a_{lm}\right\} $ is itself a complex-Gaussian array,
with independent elements for $m\geq 0$. It is a simple but
interesting fact that
the converse also holds, i.e. that, under an isotropy assumption on $%
\widetilde{T}$, the independence of the $a_{lm}$'s for $m\geq 0$ implies
Gaussianity, see \cite{BaMa}. Apart from this result, the behaviour of the
array $\left\{ a_{lm}\right\} $ and of the projections $\{\widetilde{T}%
_{l}\} $ for non-Gaussian isotropic fields is so far almost completely
unexplored and open for research, although such objects are highly relevant
for cosmological applications (see the next subsection). It should be
stressed that the coefficients $\left\{ a_{lm}\right\} $ depend on the
choice of coordinates and are not intrinsic to the field, although their law
is. In this sense, it is sometimes physically more sound to focus on the
behaviour of the sequence of projections $\{\widetilde{T}_{l}\}$, which are
indeed invariant with respect to the choice of coordinates.
In what follows, we focus on non-Gaussian fields $\widetilde{T}$ that are
\textsl{Gaussian-subordinated}, and we address the previous topic by
studying the asymptotic behaviour of $\left\{ a_{lm}\right\} $ and $\{%
\widetilde{T}_{l}\}$, as $l\rightarrow +\infty $. Recall that $\widetilde{T}$
is called \textsl{Gaussian-subordinated} whenever $\widetilde{T}\left(
x\right) =F\left( T\left( x\right) \right) $, where $F$ is a suitable
real-valued function, and $T$ is an isotropic spherical (real) Gaussian
field. In particular, our purpose is to establish sufficient (and sometimes,
also necessary) conditions on $F$ and on the law of $T$ to have that the
following two phenomena take place: (\textbf{I}) as $l\rightarrow +\infty $,
for a fixed $m$ and for an appropriate sequence $\tau _{1}\left( l\right) $ (%
$l\geq \left\vert m\right\vert $), the sequence $$\tau _{1}\left(
l\right) \times a_{lm} =\tau _{1}\left( l\right)
\int_{\mathbb{S}^{2}}F\left( T\left( z\right) \right)
\overline{Y_{lm}\left( z\right) }dz, \text{ \ \ } l\geq \left\vert
m\right\vert$$ converges in law to a Gaussian random variable
(real-valued for $m=0$, and complex-valued for $m\neq 0$);
(\textbf{II}) for a suitable real-valued sequence $\tau _{2}\left(
l\right) $ ($l\geq 0$) and for $l$ sufficiently large, the
finite-dimensional distributions of the field $$\tau _{2}\left(
l\right) \times \widetilde{T}_{l}\left( \cdot \right)=\tau
_{2}\left( l\right) \sum_{m=-l,...,l}a_{lm}Y_{lm}\left( \cdot
\right), $$
are close (for instance, in the sense of the Prokhorov
distance -- see \cite{Pesco}) to those of a real spherical
Gaussian field. Note that
both results (\textbf{I}) and (\textbf{II}) can be interpreted as CLTs%
\textsl{\ in the high-frequency }(or \textsl{high-resolution}) \textsl{sense}%
, since they involve Gaussian approximations and are established by letting
the frequency index $l$ diverge to infinity.
Our findings generalize previous results, obtained in \cite{MaPe}, for
fields defined on Abelian compact groups. One of our main tools is a result
concerning the Gaussian approximation of multiple Wiener-It\^{o} integrals
established in \cite{Pesco} (see also \cite{NuPe}, \cite{PeTaq2} and \cite%
{PT}). These CLTs can be seen as a simplification of the combinatorial
\textsl{method of diagrams and cumulants }(see e.g. \cite{Surg}). These
techniques, combined with the use of group representation theory, lead to
one of the main contributions of this paper: the derivation of sufficient
(or necessary and sufficient) conditions for (\textbf{I}) and (\textbf{II}),
expressed in terms of convolutions of \textsl{Clebsch-Gordan coefficients}
(see e.g. \cite[Ch. 4]{VMK}), which are the elements of unitary matrices
connecting specific reducible representations of $SO\left( 3\right) $.
Clebsch-Gordan coefficients are widely used in quantum mechanics, and admit
a well-known interpretation in terms of probability amplitudes related to
the coupling of angular momenta in a quantum mechanical system (see \cite%
{Libo}, \cite{VMK} or Sections \ref{S : Clebsch-Gordan} and \ref{S :
Roynette} below). We will also show that many of our conditions can be
alternatively restated in terms of `bridges' of random walks on $\widehat{%
SO\left( 3\right) }$ (the dual of $SO\left( 3\right) $). The definition of
such random walks differs from the classic one given in \cite{GKR}, although
the two approaches can be related through the notion of \textsl{mixed
quantum state} (see Section \ref{S : Roynette}). Note that an analogous
connection with random walks on $\mathbb{Z}^{d}$ was pointed out in \cite%
{MaPe}.
\subsection{Cosmological motivations\label{ss : PhysMot}}
The Cosmic Microwave Background radiation (hereafter CMB) can be viewed as a
relic radiation of the Big Bang, providing maps of the primordial Universe
before the formation of any of the current structures (approximately, $%
3\times 10^{5}$ years after the Big Bang); as such, it is acknowledged as a
goldmine of information for fundamental physics. Many satellite experiments
involving hundreds of physicists throughout the world are devoted to the
construction of spherical maps of the CMB radiation, and for pioneering work
in this area G. Smoot and J. Mather were awarded the Nobel Prize for Physics
in 2006 -- see for instance \texttt{http://map.gsfc.nasa.gov/} for more
details.
The crucial point is that most cosmological models imply that the CMB
radiation is the realization of a random field $\{\widetilde{T}(x):x\in
\mathbb{S}^{2}\}$, verifying the three conditions in (\ref{IntroDEF}); each $%
x\in \mathbb{S}^{2}$ corresponds to a direction in which the CMB radiation\
is measured. The isotropic property can be seen as a consequence of
Einstein's \textsl{cosmological principle}, roughly stating that, on
sufficiently large distance scales, the Universe looks identical everywhere
in space (homogeneity) and appears the same in every direction (isotropy). A
central issue in modern cosmology relates therefore to the distribution of
the CMB random field $\widetilde{T}$, which is predicted to be (close to)
Gaussian by some models for the dynamics at primordial epochs (for instance,
by the so-called \textsl{inflationary scenario}), and non-Gaussian by other
models, where fluctuations are generated by topological defects arising in
phase transitions of a thermodynamical nature -- see for instance \cite%
{dodelson}. Many testing procedures have been proposed to tackle this issue;
in some form, they all rely asymptotically on the behaviour of the field at
the highest frequencies (see for instance \cite{Bart}, \cite{Marinucci} and
the references therein). This is a sort of unescapable, foundational issue
in Cosmology. By definition, the latter is a science based on a single
realization, e.g. our Universe or the trace of its primordial structure in
the form of the CMB\ radiation, which is observed at higher and higher
resolutions. As such, an asymptotic theory for statistical tests is possible
only in the sense of observations at higher and higher frequencies (smaller
and smaller scales) becoming available as the experiments become more
sophisticated. In particular, any satellite experiment measuring the CMB
radiation can reconstruct the spherical harmonic development appearing in (%
\ref{specrap}) only up to a finite frequency $l_{\max }$, the quantity $\pi
/l_{\max }$ representing approximately the \textsl{angular resolution} of
the experiment (the pioneering satellite COBE (1993) could reach a frequency
$l_{\max }\simeq 20$, WMAP (2003, 2006) improved this limit to $l_{\max
}\simeq 600/800$, and Planck (to be launched in 2008) is expected to reach $%
l_{\max }\simeq 2500/3000$). In order for such procedures to yield
consistent outcomes, one should therefore figure out what is the limiting
behaviour of $\{\widetilde{T}_{l}\}$, for $l\gg 0$, under different
distributional assumptions on $\widetilde{T}$. Some Monte Carlo evidence
(see for instance \cite{MaPi} and the references therein) has suggested that
this behaviour may be close to Gaussian even in circumstances where the
underlying field $\widetilde{T}$ clearly is not. The investigation of this
issue is necessary for rigorous inference on CMB data, and in particular for
non-Gaussianity tests. The relevance of the asymptotic behaviour of the $\{%
\widetilde{T}_{l}\}$, however, goes much beyond the issue of such tests, and
relates indeed to the whole statistical analysis of CMB -- which is largely
dominated by likelihood approaches (see \cite{efst}).
We stress that the results we provide cover models that are quite
relevant for cosmological applications, for instance the so called \textsl{%
Sachs-Wolfe model}, which represents the standard starting model for the
inflationary scenario (see for instance \cite{Bart}, \cite{dodelson}). In
its simplest version, this model implies that the CMB is a straightforward
quadratic transformation of an underlying Gaussian field, i.e.%
\begin{equation}
\widetilde{T}(x)=T(x)+f_{NL}\left\{ T(x)^{2}-\mathbb{E}T(x)^{2}\right\}
\text{ , \ \ }x\in \mathbb{S}^{2}\text{,} \label{swmodel}
\end{equation}%
where $f_{NL}$ is a nonlinearity parameter depending on constants from
particle physics and $T$ is Gaussian and isotropic. As a special case, our
results do allow for a complete characterization of the high-frequency
behaviour of models such as (\ref{swmodel}), and in this sense they are
immediately applicable in the cosmological literature.
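As a numerical aside (a minimal Monte Carlo sketch; the value of $f_{NL}$ and the sample size are illustrative, and $T(x)$ is taken standard normal), the pointwise moments of (\ref{swmodel}) are easily checked: writing $\widetilde{T}(x)=H_{1}(T(x))+f_{NL}H_{2}(T(x))$ with Hermite polynomials $H_{1}(u)=u$ and $H_{2}(u)=u^{2}-1$, one gets $\mathbb{E}\widetilde{T}(x)=0$ and $\mathbb{E}\widetilde{T}(x)^{2}=1+2f_{NL}^{2}$.

```python
import random

rng = random.Random(2008)
f_nl = 0.5                  # illustrative value, not from the text
n = 200_000
vals = []
for _ in range(n):
    t = rng.gauss(0.0, 1.0)                  # T(x) ~ N(0,1)
    vals.append(t + f_nl * (t * t - 1.0))    # Sachs-Wolfe transform
mean = sum(vals) / n
var = sum((v - mean) ** 2 for v in vals) / n
# E[T~] = 0 and Var[T~] = 1 + f_nl^2 * Var(T^2) = 1 + 2 f_nl^2
assert abs(mean) < 0.02
assert abs(var - (1.0 + 2.0 * f_nl ** 2)) < 0.05
```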
\subsection{Plan}
In Section 2 we provide some background material on isotropic
random fields on the sphere. Section 3 is devoted to a discussion
on representation theory for the group of rotations $SO(3)$ and
the so-called Clebsch-Gordan coefficients, which will play a
crucial role in the analysis to follow. In Section 4 we state and
prove a general CLT result for the spherical harmonics
coefficients and the high-frequency components of a field arising
from polynomial transformations of arbitrary order of a
subordinating Gaussian process. In Section 5 we provide a more
detailed analysis of necessary and sufficient conditions for the
CLT to hold in the case of quadratic and cubic transformations; we
also highlight the connections between our conditions and the
theory of random walks on hypergroups. The interplay with random
walks on hypergroups is further explored in Section 6, where some
comparisons with the existing literature are provided, and some
physical interpretations of our conditions in terms of randomly
interacting quantum particles are given. In Section 7, we turn our
attention to more explicit conditions on the angular power
spectrum, and we discuss an exponential/algebraic duality which
parallels to some extent some earlier findings in the Abelian
case.
\section{Preliminaries on Gaussian and Gaussian-subordinated\
i\-so\-tro\-pic fields\label{S : GaussSub}}
As in the Introduction, we denote by $\mathbb{S}^{2}$ the unit sphere $%
\mathbb{S}^{2}=\left\{ x\in \mathbb{R}^{3}:\left\Vert x\right\Vert
=1\right\} $. For every rotation $g\in SO\left( 3\right) $ and every $x\in
\mathbb{S}^{2}$, the symbol $gx$ indicates the canonical action of $g$ on $x$
(see \cite[Ch. 1]{VMK}, as well as Section \ref{S : Clebsch-Gordan} below,
for further details). We will systematically write $dx$ for the Lebesgue
measure on $\mathbb{S}^{2}$, and we denote by $L^{2}\left( \mathbb{S}%
^{2},dx\right) $ the class of complex-valued functions on $\mathbb{S}^{2}$
which are square-integrable with respect to $dx$. We denote by $\left\{
Y_{lm}:l\geq 0\text{, }m=-l,...,l\right\} $ the basis of $L^{2}\left(
\mathbb{S}^{2},dx\right) $ given by spherical harmonics, as defined via (\ref%
{IntroSH1}) and (\ref{IntroSH2}). From now on, we shall denote by $T=\left\{
T\left( x\right) :x\in \mathbb{S}^{2}\right\} $ a centered, real-valued and
\textsl{Gaussian} random field parametrized by $\mathbb{S}^{2}$. We also
suppose that $T$ is \textsl{isotropic}, that is, for every $g\in SO\left(
3\right) $ one has that $T\left( x\right) \overset{law}{=}T\left( gx\right) $%
, where the equality holds in the sense of finite dimensional distributions.
To simplify the notation, we also assume that $\mathbb{E}T\left( x\right)
^{2}=1$. Following e.g. \cite{BaMa} (but see also \cite{BaMaVa}, \cite{PePy}
and \cite{Pyc}), one deduces from isotropy that $T$ admits the spectral
decomposition
\begin{equation}
T\left( x\right) =\sum_{l=0}^{\infty }\sum_{m=-l}^{l}a_{lm;1}Y_{lm}\left(
x\right) =\sum_{l=0}^{\infty }T_{l}\left( x\right) \text{, \ \ }x\in \mathbb{%
S}^{2}\text{,} \label{spectral-G}
\end{equation}%
where $a_{lm;1}\triangleq \int_{\mathbb{S}^{2}}T\left( x\right) \overline{%
Y_{lm}\left( x\right) }dx$ (the role of the subscript \textquotedblleft $%
lm;1 $\textquotedblright\ will be clarified in the following discussion), $%
T_{l}\left( x\right) $ $\triangleq $ $\sum_{m=-l}^{l}a_{lm;1}Y_{lm}\left(
x\right) $, and the convergence takes place in $L^{2}\left( \mathbb{P}%
\right) $ for every fixed $x$, as well as in $L^{2}\left( \mathbb{P}\otimes
dx\right) $. The next result gives a simple and very useful characterization
of the joint law of the complex-valued array $\left\{ a_{lm;1}:l\geq 0\text{%
, }m=-l,...,l\right\} $. For every $z\in \mathbb{C}$, the symbols $\Re
\left( z\right) $ and $\Im \left( z\right) $ indicate, respectively, the
real and the imaginary part of $z$.
\begin{proposition}
\label{P : BaMa}Let $T$ be the centered, isotropic and Gaussian random field
appearing in (\ref{spectral-G}). Then: (i) for every $l\geq 0$ the random
variable $a_{l0;1}$ is real-valued, centered and Gaussian; (ii) for every $%
l\geq 1$, and every $m=1,...,l$, the random variable $a_{lm;1}$ is
complex-valued and such that $a_{lm;1}$ $=$ $\left( -1\right) ^{m}\overline{%
a_{l-m;1}}$, and moreover $\mathbb{E(}\Re \left( a_{lm;1}\right) ^{2})$ $=$ $%
\mathbb{E(}\Im \left( a_{lm;1}\right) ^{2})$ $=$ $\mathbb{E(}a_{l0;1}^{2})/2$
$=$ $C_{l}/2$, for some constant $C_{l}\in \left[ 0,+\infty \right) $ not
depending on $m$, and%
\begin{equation}
\mathbb{E(}\Re \left( a_{lm;1}\right) \Im \left( a_{lm;1}\right) )=0;
\label{aa'}
\end{equation}%
(iii) for every $l\geq 1$ and every $m=-l,...,l$, the random coefficient $%
a_{lm;1}$ is independent of $a_{l^{\prime }m^{\prime };1}$ for every $%
l^{\prime }\geq 0$ such that $l^{\prime }\neq l$ and every $m^{\prime
}=-l^{\prime },...,l^{\prime }$. Setting $C_{0}\triangleq \mathbb{E(}%
a_{00;1}^{2})$, one also has the relation
\begin{equation}
1=\mathbb{E}\left[ T\left( x\right) ^{2}\right] =\sum_{l=0}^{\infty }\frac{%
2l+1}{4\pi }C_{l}\text{.} \label{var}
\end{equation}
\end{proposition}
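As a purely illustrative numerical aside (not needed for any argument below), the structure described in Proposition \ref{P : BaMa} and relation (\ref{var}) can be checked by Monte Carlo. The sketch below evaluates the field at the north pole, where $Y_{lm}(0,0)=\sqrt{(2l+1)/4\pi }\,\delta _{m}^{0}$, so that only the coefficients $a_{l0;1}$ contribute; the power spectrum $C_{l}=(1+l)^{-3}$, the truncation level, and the tolerance are arbitrary choices made for this example.

```python
import math
import random

random.seed(0)

# Illustrative (arbitrary) power spectrum, truncated at multipole L.
L = 20
C = [(1.0 + l) ** (-3) for l in range(L + 1)]

# At the north pole x0, Y_lm(x0) = sqrt((2l+1)/(4*pi)) * delta_{m,0}, so the
# truncated field reduces to T(x0) = sum_l sqrt((2l+1)/(4*pi)) * a_{l0;1},
# with a_{l0;1} ~ N(0, C_l), independent over l (points (i) and (iii)).
def sample_T_at_pole():
    return sum(
        math.sqrt((2 * l + 1) / (4 * math.pi)) * random.gauss(0.0, math.sqrt(C[l]))
        for l in range(L + 1)
    )

n = 50_000
samples = [sample_T_at_pole() for _ in range(n)]
empirical_var = sum(s * s for s in samples) / n  # mean is zero by construction

# Relation (var), truncated at l = L:
theoretical_var = sum((2 * l + 1) / (4 * math.pi) * C[l] for l in range(L + 1))

assert abs(empirical_var - theoretical_var) < 0.05 * theoretical_var
```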
The reader is referred to \cite{BaMa} for a proof of Proposition \ref{P :
BaMa}, as well as for several converse statements. Here, we shall only
stress that formula (\ref{var}) is a consequence of the well-known relation
(see e.g. \cite{VMK})
\begin{equation}
\sum_{m=-l}^{l}Y_{lm}(x)\overline{Y_{lm}(y)}=\frac{2l+1}{4\pi }P_{l}(\cos
\left\langle x,y\right\rangle )\text{, \ \ }x,y\in \mathbb{S}^{2}\text{,}
\label{angle}
\end{equation}%
where $\left\langle x,y\right\rangle $ is the angle between $x$ and $y$.\
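As an aside, the addition formula (\ref{angle}) is easy to verify numerically for small $l$. The following sketch (our own illustration, using the standard closed-form expressions for $Y_{1m}$ with the Condon-Shortley phase) checks the case $l=1$ at two arbitrarily chosen points.

```python
import cmath
import math

# Closed-form spherical harmonics for l = 1 (standard convention,
# including the Condon-Shortley phase).
def Y1(m, theta, phi):
    if m == 0:
        return math.sqrt(3.0 / (4.0 * math.pi)) * math.cos(theta)
    if m == 1:
        return -math.sqrt(3.0 / (8.0 * math.pi)) * math.sin(theta) * cmath.exp(1j * phi)
    if m == -1:
        return math.sqrt(3.0 / (8.0 * math.pi)) * math.sin(theta) * cmath.exp(-1j * phi)
    raise ValueError(m)

# Two arbitrary points x, y in spherical coordinates (theta, phi).
tx, px = 0.7, 1.1
ty, py = 2.1, -0.4

# Left-hand side of the addition formula: sum_m Y_1m(x) * conj(Y_1m(y)).
lhs = sum(Y1(m, tx, px) * Y1(m, ty, py).conjugate() for m in (-1, 0, 1))

# cos<x,y> via the spherical law of cosines; P_1(t) = t.
cos_angle = math.cos(tx) * math.cos(ty) + math.sin(tx) * math.sin(ty) * math.cos(px - py)
rhs = (2 * 1 + 1) / (4.0 * math.pi) * cos_angle

assert abs(lhs.imag) < 1e-12
assert abs(lhs.real - rhs) < 1e-12
```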
Observe that property (\ref{aa'}) implies that $\Re \left( a_{lm;1}\right) $
and $\Im \left( a_{lm;1}\right) $ are independent centered Gaussian random
variables. Moreover, the combination of (\ref{aa'}) and point (iii) in the
statement of Proposition \ref{P : BaMa} yields that $\mathbb{E(}a_{lm;1}%
\overline{a_{l^{\prime }m^{\prime };1}})$ $=$ $0$, $\forall \left(
l,m\right) $ $\neq $ $\left( l^{\prime },m^{\prime }\right) $. Finally, it
is also evident that points (i)-(iii) in the previous statement imply that
the law of an isotropic Gaussian field such as $T$ is completely
characterized by its angular power spectrum $\left\{ C_{l}:l\geq 0\right\} $%
. To avoid trivialities, we will always work under the following assumption:
\smallskip \textbf{Assumption. }The angular power spectrum $\left\{
C_{l}:l\geq 0\right\} $ is such that $C_{l}>0$ for every $l$. \smallskip
Note that the results of this paper could be extended without difficulties
(but at the cost of a heavier notation) to the case of a power spectrum
such that $C_{l}\neq 0$ for infinitely many $l$'s. In the subsequent
sections, we shall obtain high-frequency CLTs for centered isotropic
spherical fields that are \textsl{subordinated} to the Gaussian field $T$
defined above.
\smallskip
\textbf{Definition A }(\textit{Subordinated fields}).\textbf{\ }Let $%
L_{0}^{2}(\mathbb{R}$, $e^{-z^{2}/2}dz)$ indicate the class of
real-valued functions $F\left( z\right) $ on $\mathbb{R}$, which
are square-integrable with respect to the measure $e^{-z^{2}/2}dz$
and such that $\int F\left( z\right) e^{-z^{2}/2}dz=0$. A
(centered) random field $\widetilde{T}= \{\widetilde{T}\left(
x\right) :x\in \mathbb{S}^{2}\} $ is said to be
\textsl{subordinated} to the Gaussian field $T$ appearing in
(\ref{specrap})
if there exists $F\in L_{0}^{2}(\mathbb{R}$, $e^{-z^{2}/2}dz)$ such that $%
\widetilde{T}\left( x\right) $ $=F\left[ T\right] \left( x\right) $,\ $%
\forall x\in \mathbb{S}^{2}$, where the symbol $F\left[ T\right] \left(
x\right) $ stands for $F\left( T\left( x\right) \right) $. Whenever $%
\widetilde{T}$ is subordinated, we will rather use the notation $F\left[ T%
\right] \left( x\right) $ instead of $\widetilde{T}\left( x\right) $, in
order to emphasize the role of the function $F$. Of course, if $F\left(
z\right) =z$, then $F\left[ T\right] \left( x\right) $ $=\widetilde{T}\left(
x\right) $ $=T\left( x\right) $.
\smallskip
It is easy to check that, since $T$ is isotropic, a subordinated field $%
F\left[ T\right] \left( \cdot \right) $ as in Definition A is necessarily
isotropic. As a consequence, following again \cite{BaMa} or \cite{PePy}, one
deduces that $F\left[ T\right] $ admits the spectral representation
\begin{equation}
F\left[ T\right] \left( x\right) =\sum_{l=0}^{\infty
}\sum_{m=-l}^{l}a_{lm}\left( F\right) Y_{lm}\left( x\right)
=\sum_{l=0}^{\infty }F\left[ T\right] _{l}\left( x\right) \text{, \ \ }x\in
\mathbb{S}^{2}\text{,} \label{SpecSub}
\end{equation}%
with convergence in $L^{2}\left( \mathbb{P}\right) $ (for fixed $x$) and in $%
L^{2}\left( \Omega \times \mathbb{S}^{2}\text{, }\mathbb{P}\otimes dx\right)
$. Here,
\begin{eqnarray}
a_{lm}\left( F\right) &\triangleq &\int_{\mathbb{S}^{2}}F\left[ T\right]
\left( y\right) \overline{Y_{lm}\left( y\right) }dy\text{,\ and}
\label{subCoeff} \\
F\left[ T\right] _{l}\left( x\right) &\triangleq
&\sum_{m=-l}^{l}a_{lm}\left( F\right) Y_{lm}\left( x\right) .
\label{subProj}
\end{eqnarray}%
The complex-valued array $\left\{ a_{lm}\left( F\right) :l\geq 0,\text{ \ }%
m=-l,...,l\right\} $ always enjoys the following properties (\textbf{a})-(%
\textbf{c}): (\textbf{a}) for every $l\geq 0$, the random variable $%
a_{l0}\left( F\right) $ is real-valued, centered and Gaussian; (\textbf{b})
for every $l\geq 1$, and every $m=1,...,l$, the random variable $%
a_{lm}\left( F\right) $ is complex-valued and such that
\begin{align*}
a_{lm}\left( F\right) & =\left( -1\right) ^{m}\overline{a_{l-m}\left(
F\right) }\text{ \ ; \ }\mathbb{E(}\Re \left( a_{lm}\left( F\right) \right)
\Im \left( a_{lm}\left( F\right) \right) )=0 \\
\mathbb{E(}\Re \left( a_{lm}\left( F\right) \right) ^{2})& =\mathbb{E(}\Im
\left( a_{lm}\left( F\right) \right) ^{2})=\mathbb{E(}a_{l0}\left( F\right)
^{2})/2=C_{l}\left( F\right) /2,
\end{align*}%
where the finite constant $C_{l}\left( F\right) \geq 0$ depends solely on $F$
and $l$; (\textbf{c}) $\mathbb{E(}a_{lm}\left( F\right) $ $\times $ $%
\overline{a_{l^{\prime }m^{\prime }}\left( F\right) })$ $=0$, $\forall
\left( l,m\right) $ $\neq $ $\left( l^{\prime },m^{\prime }\right) $. Note
that, in general, it is no longer true that $\Re \left( a_{lm}\left(
F\right) \right) $ and $\Im \left( a_{lm}\left( F\right) \right) $ are
independent random variables. Moreover, we state the following consequence
of \cite[Th. 7]{BaMa}: \textit{for every }$l\geq 1$, \textit{the coefficients%
} $\left( a_{l0}\left( F\right) ,...,a_{ll}\left( F\right) \right) $ \textit{%
are stochastically independent if, and only if, they are Gaussian. }Also, $%
\mathbb{E(}F\left[ T\right] \left( x\right) ^{2})=\sum_{l=0}^{\infty }\frac{%
2l+1}{4\pi }C_{l}\left( F\right) $.
In the subsequent sections, a crucial role will be played by the class of
\textit{Hermite polynomials}. Recall (see e.g. \cite[p. 20]{Janss}) that the
sequence $\left\{ H_{q}:q\geq 0\right\} $ of Hermite polynomials is defined
by the differential relation
\begin{equation}
H_{q}\left( z\right) =\left( -1\right) ^{q}e^{\frac{z^{2}}{2}}\frac{d^{q}}{%
dz^{q}}e^{-\frac{z^{2}}{2}}\text{, \ \ }z\in \mathbb{R}\text{, \ }q\geq 0;
\label{Her}
\end{equation}%
it is well-known that the sequence $\{ \left( q!\right)
^{-1/2}H_{q}:q\geq 0\} $ defines an orthonormal basis of the space $%
L^{2}(\mathbb{R},\left( 2\pi \right) ^{-1/2}e^{-z^{2}/2}dz)$. When a
subordinated field has the form (for $q\geq 2$) $H_{q}\left[ T\right] \left(
x\right) $, $x\in \mathbb{S}^{2}$ (that is, when $F=H_{q}$ in Definition A),
we will use the shorthand notation:
\begin{eqnarray}
T^{\left( q\right) }\left( x\right) &\triangleq &H_{q}\left[ T\right] \left(
x\right) ,\text{ \ }x\in \mathbb{S}^{2}\text{,} \label{Short1} \\
a_{lm;q} &\triangleq &a_{lm}\left( H_{q}\right) \text{,} \label{Short1.5} \\
T_{l}^{\left( q\right) }\left( x\right) &\triangleq &H_{q}\left[ T\right]
_{l}\left( x\right) ,\text{ \ }l\geq 1\text{, }x\in \mathbb{S}^{2}\text{,}
\label{Short2} \\
\overline{T}_{l}^{\left( q\right) }\left( x\right) &\triangleq &Var\left(
T_{l}^{\left( q\right) }\left( x\right) \right) ^{-1/2}T_{l}^{\left(
q\right) }\left( x\right) ,\text{ \ }l\geq 1\text{, }x\in \mathbb{S}^{2}%
\text{,} \label{Short2.5} \\
\widetilde{C}_{l}^{(q)} &\triangleq &C_{l}\left( H_{q}\right) =\mathbb{E}%
|a_{lm;q}|^{2}\text{, }l\geq 1\text{, }m=-l,...,l\text{.} \label{Short3}
\end{eqnarray}
To justify our notation (\ref{Short1})--(\ref{Short3}), we recall that for
every fixed $x$ the random variable $H_{q}\left[ T\right] \left( x\right)
=H_{q}\left( T\left( x\right) \right) $ is just the $q$th \textsl{Wick power
}of $T\left( x\right) $ (see for instance \cite{Janss}). We conclude the
section with an easy Lemma, which will be used in Section \ref{S: CLT}.
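Before doing so, we record a small numerical sanity check (an illustration of ours, not part of the argument): the Rodrigues-type definition (\ref{Her}) is equivalent to the three-term recurrence $H_{q+1}\left( z\right) =zH_{q}\left( z\right) -qH_{q-1}\left( z\right) $, and the normalization $\int H_{p}H_{q}\left( 2\pi \right) ^{-1/2}e^{-z^{2}/2}dz=q!\,\delta _{p}^{q}$ can be confirmed by quadrature; the truncation interval and tolerances below are ad hoc choices.

```python
import math

# Probabilists' Hermite polynomials via the three-term recurrence
# H_{q+1}(z) = z*H_q(z) - q*H_{q-1}(z), equivalent to definition (Her).
def hermite(q, z):
    if q == 0:
        return 1.0
    h_prev, h = 1.0, z
    for k in range(1, q):
        h_prev, h = h, z * h - k * h_prev
    return h

# Spot-check small cases against the explicit polynomials H_2, H_3:
for z in (-1.3, 0.0, 0.8, 2.5):
    assert abs(hermite(2, z) - (z * z - 1)) < 1e-12
    assert abs(hermite(3, z) - (z ** 3 - 3 * z)) < 1e-12

# Orthogonality under the standard Gaussian weight:
# int H_p(z) H_q(z) (2*pi)^(-1/2) exp(-z^2/2) dz = q! * delta_{pq},
# checked by trapezoidal quadrature on [-12, 12] (spectrally accurate
# here since the integrand decays to zero at the endpoints).
def gauss_inner(p, q, n=20000, a=12.0):
    h = 2 * a / n
    total = 0.0
    for i in range(n + 1):
        z = -a + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * hermite(p, z) * hermite(q, z) * math.exp(-z * z / 2)
    return total * h / math.sqrt(2 * math.pi)

assert abs(gauss_inner(3, 3) - math.factorial(3)) < 1e-6
assert abs(gauss_inner(2, 4)) < 1e-6
```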
\begin{lemma}
\label{L : Pl}Let $F\left[ T\right] \left( x\right) $, $x\in \mathbb{S}^{2}$%
, be an (isotropic) subordinated field as in Definition A. Then, for every $%
l\geq 1$ one has the following:
\begin{enumerate}
\item The random field $x\mapsto F\left[ T\right] _{l}\left( x\right) $
defined in (\ref{subProj}) is real-valued and isotropic;
\item For every fixed $x\in \mathbb{S}^{2}$, $F\left[ T\right] _{l}\left(
x\right) $ $\overset{law}{=}$ $\sqrt{\frac{2l+1}{4\pi }}a_{l0}\left(
F\right) $, where the coefficient $a_{l0}\left( F\right) $ is defined
according to (\ref{subCoeff}), and consequently $\mathbb{E(}F\left[ T\right]
_{l}\left( x\right) ^{2})$ $=$ $\frac{2l+1}{4\pi }C_{l}\left( F\right) $;
\item The normalized random field
\begin{equation}
\overline{F\left[ T\right] }_{l}\left( x\right) =\left[ \frac{\left(
2l+1\right) C_{l}\left( F\right) }{4\pi }\right] ^{-1/2}F\left[ T\right]
_{l}\left( x\right) \label{SubNorm}
\end{equation}%
has a covariance structure given by: for every $x,y\in \mathbb{S}^{2}$,%
\begin{equation}
\mathbb{E}\left( \overline{F\left[ T\right] }_{l}\left( x\right) \times
\overline{F\left[ T\right] }_{l}\left( y\right) \right) =P_{l}\left( \cos
\left\langle x,y\right\rangle \right) \text{,} \label{covSubNorm}
\end{equation}%
where $P_{l}\left( \cdot \right) $ is the $l$\textrm{th} Legendre polynomial
defined in (\ref{Legendre}) and, as before, $\left\langle x,y\right\rangle $
is the angle between $x$ and $y$.
\end{enumerate}
\end{lemma}
\begin{proof}
Point 1. is straightforward. To prove point 2. define (in polar coordinates)
$x_{0}=\left( 0,0\right) $ and use the isotropy property stated at point 1.
to write
\begin{equation*}
F\left[ T\right] _{l}\left( x\right) \overset{law}{=}F\left[ T\right]
_{l}\left( x_{0}\right) =\sum_{m=-l}^{l}a_{lm}\left( F\right) Y_{lm}\left(
x_{0}\right) =\sqrt{\frac{2l+1}{4\pi }}a_{l0}\left( F\right) \text{,}
\end{equation*}%
since (\ref{IntroSH1}) implies that $Y_{lm}\left( x_{0}\right) =\sqrt{\left(
2l+1\right) /4\pi }\delta _{m}^{0}$. Finally, to prove relation (\ref%
{covSubNorm}) we use (\ref{angle}) to deduce that, for every $x,y\in \mathbb{%
S}^{2}$,%
\begin{equation*}
\mathbb{E}(F\left[ T\right] _{l}\left( x\right) F\left[ T\right]
_{l}\left( y\right) ) =C_{l}\left( F\right) \frac{2l+1}{4\pi
}P_{l}(\cos \left\langle x,y\right\rangle )\text{,}
\end{equation*}%
thus giving the desired conclusion (recall that $P_{l}\left( 1\right) =1$).
\end{proof}
A first consequence of Lemma \ref{L : Pl} is that, for every $%
q\geq 2$,
\begin{equation}
\mathbb{E(}T_{l}^{\left( q\right) }\left( x\right) ^{2})=\left( 2l+1\right)
\widetilde{C}_{l}^{\left( q\right) }/4\pi \text{,} \label{tq}
\end{equation}%
where we used the notation introduced at (\ref{Short1})-(\ref{Short3}), so
that $\overline{T}_{l}^{\left( q\right) }\left( x\right) $ $=$ $[\left(
2l+1\right) \widetilde{C}_{l}^{(q)}/4\pi ]^{-1/2}$ $T_{l}^{\left( q\right)
}\left( x\right)$.
The main aim of the subsequent sections is to provide an accurate solution
to the following problems (\textbf{P-I})--(\textbf{P-III}).
\smallskip
\noindent%
(\textbf{P-I}) For a fixed $q\geq 2$, find conditions on the power spectrum $%
\left\{ C_{l}:l\geq 0\right\} $ of $T$, to have that the subordinated
process $T^{\left( q\right) }=\left\{ T^{\left( q\right) }\left( x\right)
:x\in \mathbb{S}^{2}\right\} $ defined in (\ref{Short1}) is such that, for
every $x\in \mathbb{S}^{2}$,%
\begin{equation}
\left[ \left( 2l+1\right) \widetilde{C}_{l}^{\left( q\right) }/4\pi \right]
^{-1/2}T_{l}^{\left( q\right) }\left( x\right) \underset{l\rightarrow +\infty }{%
\overset{law}{\longrightarrow }}N\text{,} \label{CLT1}
\end{equation}%
where $N\ $is a centered standard Gaussian random variable.
\noindent%
(\textbf{P-II}) Under the conditions found at (\textbf{P-I}), study the
asymptotic behaviour, as $l\rightarrow +\infty $, of the vector
\begin{equation}
\left[ \left( 2l+1\right) \widetilde{C}_{l}^{\left( q\right) }/4\pi \right]
^{-1/2}\left( T_{l}^{\left( q\right) }\left( x_{1}\right) ,...,T_{l}^{\left(
q\right) }\left( x_{k}\right) \right) \text{,} \label{vector}
\end{equation}%
for every $x_{1},...,x_{k}\in \mathbb{S}^{2}$.
\noindent%
(\textbf{P-III}) Combine (\textbf{P-I}) and (\textbf{P-II}) to study the
asymptotic behaviour (in particular, the asymptotic Gaussianity), as $%
l\rightarrow +\infty $, of vectors of the type
\begin{equation}
\left[ \left( 2l+1\right) C_{l}\left( F\right) /4\pi \right] ^{-1/2}\left( F\left[ T%
\right] _{l}\left( x_{1}\right) ,...,F\left[ T\right] _{l}\left(
x_{k}\right) \right) \text{,} \label{vector2}
\end{equation}%
for every $x_{1},...,x_{k}\in \mathbb{S}^{2}$ and every $F\in
L_{0}^{2}\left( \mathbb{R},e^{-z^{2}/2}dz\right) $.
\smallskip
Note that Problems (\textbf{P-I})-(\textbf{P-III}) are stated in increasing
order of generality. We observe also the following fact: since (\ref%
{covSubNorm}) holds, and since the limit of $P_{l}\left( \cos \left\langle
x,y\right\rangle \right) $ ($l\rightarrow +\infty $) does not exist in
general, it will not be possible to prove that the vectors in (\ref{vector})
and (\ref{vector2}) converge in law to some Gaussian limit. However, by
using the results developed in \cite{Pesco}, we will be able to establish
conditions under which the laws of such vectors are \textquotedblleft
asymptotically close\textquotedblright\ to a sequence of $k$-dimensional
Gaussian distributions. As already mentioned, to study (\textbf{P-I})--(%
\textbf{P-III}) we shall use estimates involving the so-called \textsl{%
Clebsch-Gordan }coefficients, that are elements of unitary matrices
connecting some reducible representations of $SO\left( 3\right) $. The
definition and the analysis of some crucial properties of Clebsch-Gordan
coefficients are the object of the next section.
\section{A primer on Clebsch-Gordan coefficients\label{S : Clebsch-Gordan}}
In this section, we review some basic representation theory
results for $SO(3)$, the group of rotations in $\mathbb{R}^{3}$. We refer
the reader to standard textbooks (for instance, \cite{VMK} and \cite{VilKly}%
) for further details, as well as for any unexplained notion or definition.
It should be stressed that most of our arguments below could be extended to
general compact groups with known representations; however, throughout the
following we shall stick to the group of rotations $SO(3)$, mainly for the
sake of notational simplicity.
We recall first that each element $g\in SO(3)$ can be parametrized by the
set $(\alpha ,\beta ,\gamma )$ of so-called \textsl{Euler angles}, where $%
0\leq \alpha <2\pi ,$ $0\leq \beta \leq \pi $ and $0\leq \gamma <2\pi $. In
these coordinates, a complete set of irreducible matrix representations for $%
SO(3)$ is provided by the so-called \textsl{Wigner's }$D$\textsl{\ matrices}
$D^{l}(\alpha ,\beta ,\gamma )$, of dimensions ($2l+1)\times (2l+1)$ for $%
l=0,1,2,...$ -- see \cite[Ch. 4]{VMK} for an analytic expression.
Here, we simply point out that the elements of $D^{l}(\alpha
,\beta ,\gamma )$ are related to the spherical harmonics by the
relationship
\begin{equation}
D_{m0}^{l}(\alpha ,\beta ,\gamma )=(-1)^{m}\sqrt{\frac{4\pi }{2l+1}}%
Y_{l-m}(\beta ,\alpha )=\sqrt{\frac{4\pi }{2l+1}}Y_{lm}^{\ast }(\beta
,\alpha )\text{ ,} \label{spherwig}
\end{equation}%
from which it is not difficult to show how the usual spectral representation
for random fields on the spheres (for instance (\ref{specrap}) and (\ref%
{spectral-G})) is really just the stochastic Peter-Weyl Theorem on $\mathbb{S%
}^{2}=SO(3)/SO(2)$. The reader is referred e.g. to \cite{VilKly} and \cite%
{Varadara} for further discussions on the Peter-Weyl Theorem, and to \cite%
{BaMa}, \cite{BaMaVa} and \cite{PePy} for several related probabilistic
results.
It follows from standard representation theory that we can exploit
the family $\{ D^{l}\} _{l=0,1,2,...}$ to build alternative
(reducible) representations, either by taking the tensor product family $%
\{ D^{l_{1}}\otimes D^{l_{2}}\} _{l_{1},l_{2}}$, or by considering
direct sums $\{ \oplus _{l=|l_{2}-l_{1}|}^{l_{2}+l_{1}}D^{l}\}
_{l_{1},l_{2}}$; these
representations have dimensions $(2l_{1}+1)(2l_{2}+1)$ $\times $ $%
(2l_{1}+1)(2l_{2}+1)$ and are unitarily equivalent, whence there exists a
unitary matrix $C_{l_{1}l_{2}}$ such that%
\begin{equation}
\left\{ D^{l_{1}}\otimes D^{l_{2}}\right\} =C_{l_{1}l_{2}}\left\{ \oplus
_{l=|l_{2}-l_{1}|}^{l_{2}+l_{1}}D^{l}\right\} C_{l_{1}l_{2}}^{\ast }\text{.}
\label{clebun}
\end{equation}%
Here, $C_{l_{1}l_{2}}$ is a $\left\{ (2l_{1}+1)(2l_{2}+1)\times
(2l_{1}+1)(2l_{2}+1)\right\} $ block matrix with blocks $%
C_{l_{1}(m_{1})l_{2}}^{l}$ of dimensions $(2l_{2}+1)\times (2l+1)$, $%
m_{1}=-l_{1},...,l_{1}$. The elements of such a block are indexed by $m_{2}$
(over rows) and $m$ (over columns). More precisely,%
\begin{eqnarray*}
C_{l_{1}l_{2}} &=&\left[ C_{l_{1}(m_{1})l_{2}\cdot }^{l\cdot }\right]
_{m_{1}=-l_{1},...,l_{1};l=|l_{2}-l_{1}|,...,l_{2}+l_{1}} \\
C_{l_{1}(m_{1})l_{2}.}^{l.} &=&\left\{ C_{l_{1}m_{1}l_{2}m_{2}}^{lm}\right\}
_{m_{2}=-l_{2},...,l_{2};m=-l,...,l}\text{ .}
\end{eqnarray*}
The \textsl{Clebsch-Gordan coefficients} for $SO(3)$ are then defined as $%
\{\!C_{l_{1}m_{1}l_{2}m_{2}}^{lm}\!\} $, that is, as the elements
of the unitary matrices $C_{l_{1}l_{2}}$ (note that such matrices
are real-valued, and so are the $C_{l_{1}m_{1}l_{2}m_{2}}^{lm}$).
These coefficients were introduced in mathematics in the nineteenth
century, motivated by the analysis of invariants in Algebraic
Geometry; in the twentieth century, they gained
enormous importance in the quantum theory of angular momentum, where $%
C_{l_{1}m_{1}l_{2}m_{2}}^{lm}$ represents the \textsl{probability amplitude}
that two quantum particles with total angular momentum $l_{1}$ and $l_{2}$
and momentum projections on the $z$-axis $m_{1}$ and $m_{2}$ are coupled to
form a system with total angular momentum $l$ and projection $m$ (see e.g.
\cite{Libo}). Their use in the analysis of isotropic random fields is much
more recent, see for instance \cite{Hu} and the references therein. Explicit
expressions for the Clebsch-Gordan coefficients of $SO(3)$ are known, but
they are in general hardly manageable (see e.g. \cite[Section 8.2]{VMK}).
However, these expressions become somewhat neater when $m_{1}=m_{2}=m_{3}=0$%
, in which case one has the relations: $C_{l_{1}0l_{2}0}^{l_{3}0}=0$, when $%
l_{1}+$ $l_{2}+$ $l_{3}$ is odd, and, for $l_{1}+l_{2}+l_{3}$ even,%
\begin{eqnarray*}
C_{l_{1}0l_{2}0}^{l_{3}0} &=&\frac{(-1)^{\frac{l_{1}+l_{2}-l_{3}}{2}}\sqrt{%
2l_{3}+1}\left[ (l_{1}+l_{2}+l_{3})/2\right] !}{\left[ (l_{1}+l_{2}-l_{3})/2%
\right] !\left[ (l_{1}-l_{2}+l_{3})/2\right] !\left[ (-l_{1}+l_{2}+l_{3})/2%
\right] !} \\
&&\times \left\{ \frac{%
(l_{1}+l_{2}-l_{3})!(l_{1}-l_{2}+l_{3})!(-l_{1}+l_{2}+l_{3})!}{%
(l_{1}+l_{2}+l_{3}+1)!}\right\} ^{1/2}.
\end{eqnarray*}
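The closed-form expression above is straightforward to implement. The following sketch (our own illustration, not taken from the cited references) checks it against the known special values $C_{0000}^{00}=1$ and $C_{1010}^{20}=\sqrt{2/3}$, the vanishing for odd $l_{1}+l_{2}+l_{3}$, and the column normalization implied by unitarity.

```python
import math

# Closed form for C^{l3 0}_{l1 0, l2 0} (all projections zero), as stated
# above: nonzero only when l1 + l2 + l3 is even and the triangle
# condition |l1 - l2| <= l3 <= l1 + l2 holds.
def cg_zero(l1, l2, l3):
    if (l1 + l2 + l3) % 2 == 1:
        return 0.0
    if not (abs(l1 - l2) <= l3 <= l1 + l2):
        return 0.0
    J = (l1 + l2 + l3) // 2
    num = (-1) ** (J - l3) * math.sqrt(2 * l3 + 1) * math.factorial(J)
    den = (math.factorial(J - l3) * math.factorial(J - l2)
           * math.factorial(J - l1))
    root = math.sqrt(
        math.factorial(l1 + l2 - l3) * math.factorial(l1 - l2 + l3)
        * math.factorial(-l1 + l2 + l3) / math.factorial(l1 + l2 + l3 + 1)
    )
    return num / den * root

# Known special values:
assert abs(cg_zero(0, 0, 0) - 1.0) < 1e-12
assert abs(cg_zero(1, 1, 2) - math.sqrt(2.0 / 3.0)) < 1e-12
assert cg_zero(1, 1, 1) == 0.0  # odd sum => coefficient vanishes

# Unitarity (column normalization): sum over l3 of the squares equals 1.
l1, l2 = 3, 5
total = sum(cg_zero(l1, l2, l3) ** 2 for l3 in range(abs(l1 - l2), l1 + l2 + 1))
assert abs(total - 1.0) < 1e-12
```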
The Clebsch-Gordan coefficients enjoy also a nice set of symmetry and
orthogonality properties which will play a crucial role in our results to
follow (see \cite{Marinucci} and \cite{MarPTRF} for an account of such properties).
Note in particular that the Clebsch-Gordan coefficients are
different from zero only if $m_{1}+m_{2}=m$ and $|l_{2}-l_{1}|\leq l\leq
l_{1}+l_{2}$ (the \emph{triangle conditions}). Also, from unitary
equivalence we deduce that%
\begin{equation}
\sum_{m_{1},m_{2}}\!
C_{l_{1}m_{1}l_{2}m_{2}}^{lm}\!C_{l_{1}m_{1}l_{2}m_{2}}^{l^{\prime
}m^{\prime }} \!=\! \delta _{l}^{l^{\prime} }\!\delta _{m}^{m^{\prime} }%
\text{ and }\sum_{l,m}\!C_{l_{1}m_{1}l_{2}m_{2}}^{lm}\!C_{l_{1}m_{1}^{\prime
}l_{2}m_{2}^{\prime }}^{lm}\!=\!\delta _{m_{1}}^{m_{1}^{\prime }}\delta
_{m_{2}}^{m_{2}^{\prime}}\!. \label{orto1}
\end{equation}
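The relations (\ref{orto1}) can also be verified numerically. The sketch below implements general Clebsch-Gordan coefficients through the standard Racah sum formula (see e.g. \cite[Section 8.2]{VMK}; the implementation is our own and only meant as an illustration), and checks the first relation in (\ref{orto1}) for $l_{1}=l_{2}=2$.

```python
import math

f = math.factorial

# General Clebsch-Gordan coefficient C^{lm}_{l1 m1, l2 m2} via the
# standard Racah sum formula (real-valued by convention).
def cg(l1, m1, l2, m2, l, m):
    if m1 + m2 != m or not (abs(l1 - l2) <= l <= l1 + l2):
        return 0.0
    if abs(m1) > l1 or abs(m2) > l2 or abs(m) > l:
        return 0.0
    pref = math.sqrt(
        (2 * l + 1) * f(l1 + l2 - l) * f(l1 - l2 + l) * f(-l1 + l2 + l)
        / f(l1 + l2 + l + 1)
    )
    pref *= math.sqrt(f(l + m) * f(l - m) * f(l1 + m1) * f(l1 - m1)
                      * f(l2 + m2) * f(l2 - m2))
    total = 0.0
    for k in range(0, l1 + l2 - l + 1):
        denoms = (k, l1 + l2 - l - k, l1 - m1 - k, l2 + m2 - k,
                  l - l2 + m1 + k, l - l1 - m2 + k)
        if any(d < 0 for d in denoms):
            continue
        term = 1.0
        for d in denoms:
            term /= f(d)
        total += (-1) ** k * term
    return pref * total

# Spot checks against known values: <1 1; 1 -1 | 0 0> = 1/sqrt(3) and
# <1 0; 1 0 | 2 0> = sqrt(2/3).
assert abs(cg(1, 1, 1, -1, 0, 0) - 1.0 / math.sqrt(3.0)) < 1e-12
assert abs(cg(1, 0, 1, 0, 2, 0) - math.sqrt(2.0 / 3.0)) < 1e-12

# First orthogonality relation in (orto1), for l1 = l2 = 2:
# sum_{m1,m2} C^{lm} C^{l'm'} = delta_{l,l'} delta_{m,m'}.
l1 = l2 = 2
for l in range(0, 5):
    for m in range(-l, l + 1):
        for lp in range(0, 5):
            for mp in range(-lp, lp + 1):
                s = sum(cg(l1, m1, l2, m2, l, m) * cg(l1, m1, l2, m2, lp, mp)
                        for m1 in range(-l1, l1 + 1)
                        for m2 in range(-l2, l2 + 1))
                expected = 1.0 if (l == lp and m == mp) else 0.0
                assert abs(s - expected) < 1e-10
```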
\textbf{Remark on Notation. }Depending on notational convenience, we
sometimes write sums of Clebsch-Gordan coefficients without specifying the
range of the indices $l$ and/or $m$. In such cases, the range of the sums is
conventionally taken to be the set of indices where the Clebsch-Gordan
coefficients are different from zero. For instance, in (\ref{orto1}) one
should read: $\sum_{m_{1},m_{2}}=\sum_{m_{1}=-l_{1},...,l_{1}}%
\sum_{m_{2}=-l_{2},...,l_{2}}$ and $\sum_{l,m}=\sum_{l=0}^{+\infty
}\sum_{m=-l,...,l}$. Similar conventions are adopted (without further
notice) throughout the paper. We recall also that the Clebsch-Gordan
coefficients are equivalent, up to a normalization factor, to the \emph{%
Wigner 3j coefficients}, which are used in
related works such as \cite{MarPTRF}.
The Clebsch-Gordan coefficients play a crucial role in the evaluation of
integrals involving products of spherical harmonics. In particular, the
so-called \textsl{Gaunt integral} gives%
\begin{equation}
\int_{\mathbb{S}^{2}}\!Y_{l_{1}m_{1}}\left( x\right)\! Y_{l_{2}m_{2}}\left(
x\right)\! \overline{Y_{lm}\left( x\right) }dx \!=\!\sqrt{\frac{\left(
2l_{1}\!+\!1\right)\! \left( 2l_{2}\!+\!1\right) }{4\pi \left( 2l+1\right) }}%
C_{l_{1}m_{1}l_{2}m_{2}}^{lm}C_{l_{1}0l_{2}0}^{l0}\text{ }. \label{gauint}
\end{equation}%
Relation (\ref{gauint}) can be established using (\ref{spherwig}), (\ref%
{clebun}) and resorting to standard orthonormality properties of
the elements of group representations -- see \cite[Expression
5.9.1.4]{VMK}. More generally, define
\begin{equation}
\mathcal{G}\left\{ l_{1},m_{1};...;l_{r},m_{r}\right\} \triangleq \int_{%
\mathbb{S}^{2}}Y_{l_{1}m_{1}}\left( x\right) \cdot \cdot \cdot
Y_{l_{r}m_{r}}\left( x\right) dx\text{,} \label{supergaunt}
\end{equation}%
and call the quantity $\mathcal{G}\left\{
l_{1},m_{1};...;l_{r},m_{r}\right\} $ a \textsl{generalized Gaunt integral}.
Then, iterating the previous argument, for $q\geq 3$ it can be shown
that (by using for instance \cite[Expression 5.6.2.12]{VMK})%
\begin{eqnarray}
&&\quad \quad \mathcal{G}\left\{ l_{1},m_{1};...;l_{q},m_{q};l,-m\right\}
\label{megagaunt} \\
&=&\sum_{L_{1}...L_{q-2}}\sum_{M_{1}...M_{q-2}}\left\{
\prod_{i=1}^{q-3}\left( \sqrt{\frac{2l_{i+2}+1}{4\pi }}%
C_{L_{i}0l_{i+2}0}^{L_{i+1}0}C_{L_{i}M_{i}l_{i+2}m_{i+2}}^{L_{i+1}M_{i+1}}%
\right) \right\} \notag \\
&&\times \sqrt{\frac{(2l_{1}+1)(2l_{2}+1)}{4\pi (2l+1)}}%
C_{l_{1}0l_{2}0}^{L_{1}0}C_{l_{1}m_{1}l_{2}m_{2}}^{L_{1}M_{1}}
\!\sqrt{\frac{2l_{q}+1}{4\pi }}
C_{L_{q-2}0l_{q}0}^{l0}C_{L_{q-2}M_{q-2}l_{q}m_{q}}^{lm} \text{,}
\notag
\end{eqnarray}%
where, for $q=3$, we have used the convention that the empty product equals
one, that is, $\prod_{i=1}^{0}\equiv 1$. Note that expressions such as
(\ref{megagaunt}) imply that the generalized Gaunt integrals of
the type (\ref{supergaunt}) are indeed real-valued. To
simplify the expression (\ref{megagaunt}), let us introduce the coefficients%
\begin{equation*}
C_{l_{1},m_{1};...;l_{p}m_{p}}^{\lambda _{1},\lambda _{2},...,\lambda
_{p-1};\mu }\!\triangleq \!\sum_{\mu _{1}=-\lambda _{1}}^{\lambda
_{1}}\!...\!\sum_{\mu _{p-2}=-\lambda _{p-2}}^{\lambda
_{p-2}}C_{l_{1},m_{1},l_{2},m_{2}}^{\lambda _{1},\mu _{1}}C_{\lambda
_{1},\mu _{1};l_{3},m_{3}}^{\lambda _{2},\mu _{2}}\cdot \cdot \cdot
C_{\lambda _{p-2},\mu _{p-2};l_{p},m_{p}}^{\lambda _{p-1},\mu }.
\end{equation*}
These coefficients are themselves the elements of unitary matrices
connecting tensor product and direct sum representations of $SO(3)$, and
thus it follows easily that the following orthonormality conditions hold
\begin{equation}
\sum_{m_{1},...m_{p}}\left\{ C_{l_{1},m_{1};...;l_{p}m_{p}}^{\lambda
_{1},\lambda _{2},...,\lambda _{p-1};\mu }\right\} ^{2}=\sum_{\lambda
_{1}}...\sum_{\lambda _{p-1}}\sum_{\mu =-\lambda _{p-1}}^{\lambda
_{p-1}}\left\{ C_{l_{1},m_{1};...;l_{p}m_{p}}^{\lambda _{1},\lambda
_{2},...,\lambda _{p-1};\mu }\right\} ^{2}=1\text{ ;} \label{ortoorto}
\end{equation}%
it is important to note that, due to the condition $m_{1}+m_{2}=m$, the
sums may actually collapse to a single term; for instance%
\begin{equation}
C_{l_{1},0;...;l_{p}0}^{\lambda _{1},\lambda _{2},...,\lambda
_{p-1};0}=C_{l_{1},0,l_{2},0}^{\lambda _{1},0}C_{\lambda
_{1},0;l_{3},0}^{\lambda _{2},0}\cdot \cdot \cdot C_{\lambda
_{p-2},0;l_{p},0}^{\lambda _{p-1},0}\text{ .} \label{0_conv}
\end{equation}
We have also that
\begin{eqnarray}
&&\mathcal{G}\left\{ l_{1},m_{1};...;l_{q},m_{q};l,-m\right\}
\label{gagaunt} \\
&=&\sqrt{\frac{4\pi }{2l+1}}\left\{ \prod_{i=1}^{q}\sqrt{\frac{2l_{i}+1}{%
4\pi }}\right\}
\sum_{L_{1}...L_{q-2}}C_{l_{1},0;...;l_{q}0}^{L_{1},L_{2},...,L_{q-2},l;0}C_{l_{1},m_{1};...;l_{q}m_{q}}^{L_{1},L_{2},...,L_{q-2},l;m}.
\notag
\end{eqnarray}
\smallskip
\textbf{Remark. }The coefficients $C_{l_{1},m_{1};...;l_{p}m_{p}}^{\lambda
_{1},\lambda _{2},...,\lambda _{p-1};\mu }$ defined above admit a physical
interpretation in terms of coupling of angular momenta in a quantum
mechanical system. Consider indeed a system composed of $p$ particles, say $%
\alpha _{1},...,\alpha _{p}$, such that $\alpha _{i}$ has total angular
momentum equal to $l_{i}$, and projection on the $z$-axis given by $m_{i}$.
Then, the coefficient $C_{l_{1},m_{1};...;l_{p}m_{p}}^{\lambda _{1},\lambda
_{2},...,\lambda _{p-1};\mu }$ is exactly the probability amplitude of the
intersection of the following $p-1$ events $\mathbf{E}_{1}$,..., $\mathbf{E}%
_{p-1}$:
\noindent $\mathbf{E}_{1}$ $=$ $\{\alpha _{1}$ and $\alpha _{2}$ couple to
form a particle $\eta _{1}$ with total angular momentum $\lambda _{1}\}$, $%
\mathbf{E}_{2}$ $=$ $\{\eta _{1}$ couples with $\alpha _{3}$ to form a
particle $\eta _{2}$ with total angular momentum $\lambda _{2}\}$,...,
\textbf{E}$_{i}=$ $\{\eta _{i-1}$ couples with $\alpha _{i+1}$ to form a
particle $\eta _{i}$ with total angular momentum $\lambda _{i}\}$,...,
\textbf{E}$_{p-1}=$ $\{\eta _{p-2}$ couples with $\alpha _{p}$ to form a
particle with total angular momentum $\lambda _{p-1}$ and projection $\mu $
on the $z$-axis$\}$.
\smallskip
In the sequel, we shall also need the so-called \textsl{Wigner }$6j$\textsl{%
\ (or Racah) coefficients}, which are related to the Clebsch-Gordan by the
identity (see (\cite[Eq. 9.1.1.8]{VMK}))%
\begin{equation}
\left\{
\begin{array}{ccc}
\!l_{1}\! & l_{2}\! & l_{3}\! \\
\!l_{4}\! & l_{5}\! & l_{6}\!%
\end{array}%
\right\} \!=\!K\!\left(l_{1},...,\!l_{6}\!\right)\! \sum_{\substack{ %
m_{1}m_{3} \\ m_{4}m_{6}}}\!C_{l_{1}m_{1}l_{2}m_{2}}^{l_{3}m_{3}}%
\!C_{l_{1}m_{1}l_{6}m_{6}}^{l_{5}m_{5}}
\!C_{l_{3}m_{3}l_{4}m_{4}}^{l_{5}m_{5}}%
\!C_{l_{2}m_{2}l_{4}m_{4}}^{l_{6}m_{6}}\!\text{,} \label{wig6j}
\end{equation}%
where $K\left( l_{1},...,l_{6}\right) =\left[ (2l_{3}+1)(2l_{6}+1)\right]
^{-1/2}(-1)^{l_{1}+l_{2}+l_{4}+l_{5}}$ (note that the previous sum does not
involve $m_{2}$ and $m_{5}$, because of the general relation: $C_{\alpha
t_{1}\beta t_{2}}^{\gamma t_{3}}=0$, whenever $t_{3}\neq t_{1}+t_{2}$).
Although the Wigner 6j coefficients themselves play a very important role
in Quantum Mechanics and Representation Theory, for brevity's sake we avoid
a full discussion of their properties; the interested reader can consult (%
\cite[Ch.9]{VMK}) or (\cite[pp. 529-542]{VilKly}).
\section{High-frequency CLTs: conditions in terms of Gaunt integrals\label%
{S: CLT}}
The aim of this section is to obtain conditions for high-frequency CLTs in
terms of Gaunt integrals of the type (\ref{megagaunt}). We start by
focussing on Hermite polynomials, and then we deal with general subordinated
fields.
\subsection{Hermite subordination\label{SS : Hermite}}
We focus on the spherical field $T^{\left( q\right) }$ ($q\geq 2$) defined
in (\ref{Short1}), which is obtained by composing the Gaussian field $T$ in (%
\ref{specrap}) with the $q$th Hermite polynomial $H_{q}$ (or, equivalently,
by taking the $q$th Wick power of the random variable $T\left( x\right) $
for every $x$). Our first purpose is to characterize the asymptotic
Gaussianity (when $l\rightarrow +\infty $) of the spherical harmonic
coefficients $\left\{ a_{lm;q}\right\} $ defined in (\ref{Short1.5}).
\begin{theorem}
\label{teo1}Fix $q\geq 2$.
\textrm{1.} For every $l\geq 1$, the positive constant $\widetilde{C}%
_{l}^{\left( q\right) }$ in (\ref{Short3}) (which does not depend on $m$)
equals the quantity%
\begin{eqnarray}
&&q!\!\sum_{l_{1},m_{1}}\!\cdot \!\cdot \!\cdot
\!\sum_{l_{q},m_{q}}C_{l_{1}}C_{l_{2}}\cdot \cdot \cdot C_{l_{q}}\left\vert
\mathcal{G}\left\{ l_{1},m_{1};...;l_{q},m_{q};l,-m\right\} \right\vert ^{2}
\label{VAR} \\
&&= \!q!\!\sum_{l_{1},...,l_{q}=0}^{\infty }C_{l_{1}}\!\cdot
\!\cdot \!\cdot
\!C_{l_{q}}\frac{4\pi }{2l+1}\left\{ \prod_{i=1}^{q}\frac{2l_{i}+1}{4\pi }%
\right\} \sum_{L_{1}...L_{q-2}}\left\{
C_{l_{1},0;...;l_{q}0}^{L_{1},L_{2},...,L_{q-2},l;0}\right\} ^{2}
\label{VAR2}
\end{eqnarray}%
for every $m=-l,...,l$, where the (generalized) Gaunt integral $\mathcal{G}%
\left\{ \cdot \right\} $ is defined via (\ref{supergaunt}).
\textrm{2.} Fix $m\neq 0$. As $l\rightarrow +\infty $, the following two
conditions (\textbf{A}) and (\textbf{B}) are equivalent: (\textbf{A})%
\begin{equation}
(\widetilde{C}_{l}^{\left( q\right) })^{-1/2}\times a_{lm;q}\overset{law}{%
\rightarrow }N+iN^{\prime }\text{,} \label{as0}
\end{equation}%
where $N,N^{\prime }\sim \mathcal{N}\left( 0,1/2\right) $ are independent; (%
\textbf{B}) for every $p=\frac{q-1}{2}+1,...,q-1$, if $q-1$ is even, and
every $p=q/2,...,q-1$ if $q-1$ is odd%
\begin{eqnarray}
&&(\widetilde{C}_{l}^{\left( q\right) })^{-2}\sum_{n_{1},j_{1}}\cdot \cdot
\cdot \sum_{n_{2\left( q-p\right) },j_{2\left( q-p\right) }}C_{j_{1}}\cdot
\cdot \cdot C_{j_{2\left( q-p\right) }}\left\vert \sum_{l_{1},m_{1}}\cdot
\cdot \cdot \sum_{l_{p},m_{p}}C_{l_{1}}\cdot \cdot \cdot C_{l_{p}}\right.
\notag \\
&&\text{ \ \ }\times \!\mathcal{G}\!\left\{
l_{1},m_{1};...;l_{p},m_{p};j_{1},n_{1};...;j_{q-p},n_{q-p};l,-m\right\}
\times \label{as11} \\
&&\text{ \ \ }\left. \times \!\mathcal{G}\!\left\{
l_{1},m_{1};...;l_{p},m_{p};j_{q-p+1},n_{q-p+1};...;j_{2\left( q-p\right)
},n_{2\left( q-p\right) };l,-m\right\} \!\right\vert
^{2} \! \rightarrow 0 \text{.} \notag
\end{eqnarray}
\textrm{3.} Let $N$ be a centered Gaussian random variable with unitary
variance. As $l\rightarrow +\infty $, the CLT
\begin{equation}
(\widetilde{C}_{l}^{\left( q\right) })^{-1/2}\times a_{l0;q}\overset{law}{%
\rightarrow }N \label{as00}
\end{equation}%
takes place if, and only if, the asymptotic condition (\ref{as11}) holds for
$m=0$ and for every $p=\frac{q-1}{2}+1,...,q-1$, if $q-1$ is even, and every
$p=q/2,...,q-1$ if $q-1$ is odd.
\end{theorem}
\begin{proof}
Consider a standard Brownian motion $W=\left\{ W_{t}:t\in \left[ 0,1\right]
\right\} $, and denote by $L_{\mathbb{C}}^{2}\left( \left[ 0,1\right]
\right) $ $=$ $L_{\mathbb{C}}^{2}\left( \left[ 0,1\right] ,d\lambda \right) $
the class of complex-valued and square integrable functions on $\left[ 0,1%
\right] $, with respect to the Lebesgue measure $d\lambda $. Now select a
complex-valued family $\left\{ g_{lm}:l\geq 0\text{, \ }-l\leq m\leq
l\right\} \subseteq L_{\mathbb{C}}^{2}\left( \left[ 0,1\right] \right) $
with the following five properties: (1) $g_{l0}$ is real for every $l\geq 0$%
, (2) $g_{lm}=\left( -1\right) ^{m}\overline{g_{l-m}}$, (3)
$\int g_{lm}\overline{g_{l^{\prime }m^{\prime }}}d\lambda $ $=$ $0,$\ $%
\forall \left( l,m\right) $ $\neq $ $\left( l^{\prime },m^{\prime }\right) $%
, (4) $\int \Re \left( g_{lm}\right) \Im \left( g_{lm}\right)
d\lambda $ $=$ $0$, (5) $\int \Re \left( g_{lm}\right)
^{2}d\lambda $ $=$ $\int \Im \left( g_{lm}\right) ^{2}d\lambda $
$=$ $\int g_{l0}^{2}d\lambda /2$ $=C_{l}/2$, where $\left\{
C_{l}:l\geq 0\right\} $ is the power spectrum of the Gaussian
field $T$. According to Proposition \ref{P : BaMa}, the following
identity
in law holds:%
\begin{equation*}
\left\{ a_{lm;1}:l\geq 0\text{, \ }-l\leq m\leq l\right\} \overset{law}{=}%
\left\{ I_{1}\left( g_{lm}\right) :l\geq 0\text{, \ }-l\leq m\leq l\right\} ,
\end{equation*}%
where $I_{1}\left( g_{lm}\right)
=\int_{0}^{1}g_{lm}dW=\int_{0}^{1}\Re (g_{lm})dW$ $+$
$\mathrm{i}\int_{0}^{1}\Im (g_{lm})dW$ is the usual
(complex-valued) Wiener-It\^{o} integral of $g_{lm}$ with respect
to $W$. From this last relation, it also follows that, in the
sense of stochastic processes, $T\left( x\right) $
$\overset{law}{=}$ $I_{1}\left( \sum_{l=0}^{\infty
}\sum_{m=-l}^{l}g_{lm}Y_{lm}\left( x\right) \right) $ (note that
the function $z\mapsto \sum_{l,m}g_{lm}(z)Y_{lm}\left( x\right)
$ is real-valued for every fixed $x\in \mathbb{S}^{2}$ and with
norm equal to $1$). Now define $L_{s,\mathbb{C}}^{2}\left( \left[
0,1\right]
^{q}\right) $ to be the class of complex-valued and symmetric functions on $%
\left[ 0,1\right] ^{q}$, that are square-integrable with respect to Lebesgue
measure. For every $f\in L_{s,\mathbb{C}}^{2}\left( \left[ 0,1\right]
^{q}\right) $, we define $I_{q}\left( f\right) =I_{q}\left( \Re (f)\right) +%
\mathrm{i}I_{q}\left( \Im (f)\right) $ to be the multiple Wiener-It\^{o}
integral, of order $q$, of $f$ with respect to the Brownian motion $W$ (see
e.g. \cite[Ch. 1]{NualartBook}, or \cite{Janss}). From the previous
discussion it follows that, for every $q\geq 2$,
\begin{equation}
T^{\left( q\right) }\left( x\right) =H_{q}\left( T\left( x\right) \right)
\overset{law}{=}I_{q}\left[ \left\{ \sum_{l=0}^{\infty
}\sum_{m=-l}^{l}g_{lm}Y_{lm}\left( x\right) \right\} ^{\otimes q}\right] ,
\label{gr}
\end{equation}%
where the equality in law holds in the sense of finite dimensional
distributions and, for every $f\in L_{\mathbb{C}}^{2}\left( \left[ 0,1\right]
\right) $, we use the notation $f^{\otimes q}\left( a_{1},...,a_{q}\right) $
$=$ $f\left( a_{1}\right) $ $\times $ $\cdot \cdot \cdot $ $\times $ $%
f\left( a_{q}\right) .$ Note that, to obtain the last equality in (\ref{gr}%
), we used the well-known relation (see e.g. \cite{Janss}): for every
real-valued $f\in L_{\mathbb{R}}^{2}\left( \left[ 0,1\right] \right) $ such
that $\left\Vert f\right\Vert _{L_{\mathbb{R}}^{2}\left( \left[ 0,1\right]
\right) }=1$, it holds that $H_{q}\left[ I_{1}\left( f\right) \right] $ $=$ $%
I_{q}\left( f^{\otimes q}\right) $. Now set $h_{l,m}^{\left( q\right) }$ $=$
$\left( -1\right) ^{m}\sum_{l_{1},m_{1}}$ $\cdot \cdot \cdot $ $%
\sum_{l_{q},m_{q}}g_{l_{1}m_{1}}$ $\cdot \cdot \cdot $ $g_{l_{q}m_{q}}$ $%
\mathcal{G\{}l_{1},m_{1};...;$ $l_{q},m_{q};l,-m\}$, so that%
\begin{equation}
a_{lm;q}\overset{law}{=}\int_{S^{2}}I_{q}\left[ \left\{ \sum_{l=0}^{\infty
}\sum_{m=-l}^{l}g_{lm}Y_{lm}\left( x\right) \right\} ^{\otimes q}\right]
\overline{Y_{lm}\left( x\right) }dx=I_{q}\left[ h_{l,m}^{\left( q\right) }%
\right] \label{fubz}
\end{equation}%
so that (\ref{VAR}) follows immediately from the well-known isometry
relation:%
\begin{equation*}
\mathbb{E}\left[ \left\vert I_{q}\left[ h_{l,m}^{\left( q\right) }\right]
\right\vert ^{2}\right] =q!\left\Vert h_{l,m}^{\left( q\right) }\right\Vert
_{L^{2}\left( \left[ 0,1\right] ^{q}\right) }^{2}
\end{equation*}%
(to obtain (\ref{fubz}) we interchanged stochastic and
deterministic integration, by means of a standard stochastic Fubini
argument). To prove that (\ref{VAR2}) is equal to (\ref{VAR}), observe first
that (\ref{ortoorto}) yields that
\begin{equation*}
\sum_{m_{1}=-l_{1}}^{l_{1}}\cdot \cdot \cdot
\sum_{m_{q}=-l_{q}}^{l_{q}}C_{l_{1},m_{1};...;l_{q}m_{q}}^{L_{1},L_{2},...,L_{q-2},l;m}C_{l_{1},m_{1};...;l_{q}m_{q}}^{L_{1}^{\prime },L_{2}^{\prime },...,L_{q-2}^{\prime },l;m}=\delta _{L_{1}}^{L_{1}^{\prime }}...\delta _{L_{q-2}}^{L_{q-2}^{\prime }}
\end{equation*}%
(the RHS of the previous expression does not depend on $m$). Then, use (\ref%
{gagaunt}) to deduce that
\begin{eqnarray}
&&\sum_{m_{1}=-l_{1}}^{l_{1}}\cdot \cdot \cdot \sum_{m_{q}=-l_{q}}^{l_{q}}%
\mathcal{G}\left\{ l_{1},m_{1};...;l_{q},m_{q};l,-m\right\} ^{2} \notag \\
&=&\frac{4\pi }{2l+1}\left\{ \prod_{i=1}^{q}\frac{2l_{i}+1}{4\pi }\right\}
\sum_{L_{1}...L_{q-2}}\left\{
C_{l_{1},0;...;l_{q}0}^{L_{1},L_{2},...,L_{q-2},l;0}\right\} ^{2}. \notag
\end{eqnarray}%
This proves Point 1 in the statement. To prove Point 2, recall that,
according to \cite[Proposition 6]{MaPe}, relation (\ref{as0}) holds if, and
only if,
\begin{equation*}
(\widetilde{C}_{l}^{\left( q\right) })^{-2}\left\Vert h_{l,m}^{\left(
q\right) }\otimes _{p}\overline{h_{l,m}^{\left( q\right) }}\right\Vert
_{L^{2}(\left[ 0,1\right] ^{2\left( q-p\right) })}^{2}\rightarrow 0,
\end{equation*}%
for every $p=1,...,q-1$, where the complex-valued (and not necessarily
symmetric) function $h_{l,m}^{\left( q\right) }\otimes _{p}\overline{%
h_{l,m}^{\left( q\right) }}$ (which is an element of $L^{2}(\left[ 0,1\right]
^{2\left( q-p\right) })$) is defined as the \textsl{contraction}%
\begin{eqnarray}
&&h_{l,m}^{\left( q\right) }\otimes _{p}\overline{h_{l,m}^{\left( q\right) }}%
\left( a_{1},...,a_{2\left( q-p\right) }\right) \label{contpr} \\
&=&\int_{\left[ 0,1\right] ^{p}}h_{l,m}^{\left( q\right) }\left( \mathbf{x}%
_{p},a_{1},...,a_{q-p}\right) \overline{h_{l,m}^{\left( q\right) }\left(
\mathbf{x}_{p},a_{q-p+1},...,a_{2\left( q-p\right) }\right) }d\mathbf{x}_{p},
\notag
\end{eqnarray}%
for every $( a_{1},...,a_{2\left( q-p\right) }) \in \left[ 0,1%
\right] ^{2\left( q-p\right) }$, where $d\mathbf{x}_{p}$ is the
Lebesgue measure on $\left[ 0,1\right] ^{p}$. Since, trivially,
$\Vert h_{l,m}^{\left( q\right) }\otimes _{p}\overline{h_{l,m}^{\left( q\right) }}%
\Vert ^{2}$ $=$ $\Vert h_{l,m}^{\left( q\right) }\otimes _{q-p}%
\overline{h_{l,m}^{\left( q\right) }} \Vert ^{2}$ (we stress that,
in the last equality, the first norm is taken in $L^{2}( \left[
0,1\right] ^{2\left( q-p\right)})$, whereas the second is in
$L^{2}( \left[ 0,1\right]^{2p})$ ), one deduces that it is
sufficient to check that
the norm of $h_{l,m}^{\left( q\right) }\otimes _{p}%
\overline{h_{l,m}^{\left( q\right) }}$ is asymptotically
negligible for every $p=\frac{q-1}{2}+1,...,q-1$, if $q-1$ is
even, and every $p=q/2,...,q-1$ if $q-1$ is odd. It follows that
the result is proved once it is shown that, for every $p$ in such
range, the norm $\Vert h_{l,m}^{\left( q\right) }\otimes _{p}\overline{%
h_{l,m}^{\left( q\right) }}\Vert ^{2}$ equals the multiple sum
appearing in (\ref{as11}). To see this, use (\ref{contpr}) to
deduce that
(recall that Gaunt integrals are real-valued)%
\begin{eqnarray*}
&&h_{l,m}^{\left( q\right) }\otimes _{p}\overline{h_{l,m}^{\left( q\right) }}%
\left( a_{1},...,a_{2\left( q-p\right) }\right) \\
&=&\sum_{n_{1},j_{1}}\cdot \cdot \cdot \sum_{n_{2\left( q-p\right)
},j_{2\left( q-p\right) }}g_{j_{1}n_{1}}\cdot \cdot \cdot g_{j_{q-p}n_{q-p}}%
\overline{g_{j_{q-p+1}n_{q-p+1}}\cdot \cdot \cdot g_{j_{2\left( q-p\right)
}n_{2\left( q-p\right) }}} \\
&&\sum_{l_{1},m_{1}}\cdot \cdot \cdot \sum_{l_{p},m_{p}}C_{l_{1}}\cdot \cdot
\cdot C_{l_{p}}\mathcal{G}\left\{
l_{1},m_{1};...;l_{p},m_{p};j_{1},n_{1};...;j_{q-p},n_{q-p};l,-m\right\} \\
&&\text{ \ }\mathcal{G}\left\{
l_{1},m_{1};...;l_{p},m_{p};j_{q-p+1},n_{q-p+1};...;j_{2\left( q-p\right)
},n_{2\left( q-p\right) };l,-m\right\} ,
\end{eqnarray*}%
and the result is obtained by using the orthogonality properties of the $%
g_{jn}$'s. Point 3 in the statement is proved in exactly the same way, by
first observing that $a_{l0;q}$ is a real-valued random variable, and then
by applying Theorem 1 in \cite{NuPe}.
\end{proof}
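The Clebsch--Gordan reduction behind Point 1 can also be checked numerically in the simplest case $q=2$. The following sketch (the multipoles $l_{1}=l_{2}=1$, $l=2$, $m=0$ are an arbitrary choice of ours) uses SymPy's exact Gaunt and Clebsch--Gordan routines; both sides evaluate to $3/(10\pi )$:

```python
import math
from sympy.physics.quantum.cg import CG
from sympy.physics.wigner import gaunt

l1, l2, l, m = 1, 1, 2, 0   # toy multipoles (our choice)

# LHS: sum over m1, m2 of the squared Gaunt integral G{l1,m1; l2,m2; l,-m}
lhs = sum(gaunt(l1, l2, l, m1, m2, -m) ** 2
          for m1 in range(-l1, l1 + 1) for m2 in range(-l2, l2 + 1))

# RHS: (4*pi/(2l+1)) * prod_i (2l_i+1)/(4*pi) * (C^{l0}_{l1 0, l2 0})^2
cg = CG(l1, 0, l2, 0, l, 0).doit()
rhs = (2 * l1 + 1) * (2 * l2 + 1) / (4 * math.pi * (2 * l + 1)) * float(cg) ** 2

assert abs(float(lhs) - rhs) < 1e-12
```

Here `gaunt(l1, l2, l3, m1, m2, m3)` returns the exact integral of a product of three spherical harmonics, which vanishes unless $m_{1}+m_{2}+m_{3}=0$, so only the terms with $m_{1}+m_{2}=m$ contribute to the sum.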
\textbf{Remark}.\textbf{\ }One has the relation $\mathbb{E}\left[ T^{\left(
q\right) }\left( x\right) ^{2}\right] =q!\left[ \mathbb{E}\left\{ T(x)^{2}\right\} %
\right] ^{q}$. This equality can be proved in two ways: (i) by exploiting
the representation of $T^{\left( q\right) }\left( x\right) $ as a multiple
Wiener-It\^{o} integral, or (ii) by using the equality $\mathbb{E}\left[
T^{\left( q\right) }\left( x\right) ^{2}\right] =\sum_{l}\frac{2l+1}{4\pi }%
\widetilde{C}_{l}^{(q)}$, and then by expanding
$\widetilde{C}_{l}^{(q)}$ according to Theorem \ref{teo1}, so that
one can apply the orthogonality relations (\ref{ortoorto}).
\smallskip
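For a normalized field ($\mathbb{E}T(x)^{2}=1$), the moment identity of the Remark reduces to $\mathbb{E}[H_{q}(N)^{2}]=q!$ for a standard Gaussian $N$. This can be checked symbolically; the sketch below builds the probabilists' Hermite polynomials through the Rodrigues formula and integrates against the standard Gaussian density:

```python
import sympy as sp

z = sp.symbols('z')
phi = sp.exp(-z**2 / 2) / sp.sqrt(2 * sp.pi)    # standard Gaussian density

for q in range(1, 5):
    # probabilists' Hermite polynomial via the Rodrigues formula
    Hq = sp.simplify((-1)**q * sp.exp(z**2 / 2) * sp.diff(sp.exp(-z**2 / 2), z, q))
    second_moment = sp.integrate(Hq**2 * phi, (z, -sp.oo, sp.oo))
    assert second_moment == sp.factorial(q)     # E[H_q(N)^2] = q!
```

The range $q\leq 4$ is just to keep the symbolic computation fast; the identity is exact for every $q$.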
Now recall that, according to part 2 of Lemma \ref{L : Pl}, $T_{l}^{\left(
q\right) }\left( x\right) \overset{law}{=}\sqrt{\frac{2l+1}{4\pi }}a_{l0;q}$%
, so that relation (\ref{tq}) holds. This gives immediately a first
(exhaustive) solution to Problem (\textbf{P-I}), as stated in Section \ref{S
: GaussSub}.
\begin{corollary}
\label{C : PunctualCLT}For every $q\geq 2$ the following conditions are
equivalent:
\begin{enumerate}
\item The CLT (\ref{CLT1}) holds for every $x\in \mathbb{S}^{2}$;
\item The asymptotic relation (\ref{as11}) takes place for $m=0$ and for
every $p=\frac{q-1}{2}+1,...,q-1$, if $q-1$ is even, and every $%
p=q/2,...,q-1 $ if $q-1$ is odd.
\end{enumerate}
\end{corollary}
To deal with Problem (\textbf{P-II}) of Section \ref{S : GaussSub}, we
recall the notation $\overline{T}_{l}^{\left( q\right) }$ (indicating the $l$%
th normalized frequency component of $T^{\left( q\right) }$) introduced in (%
\ref{Short2.5}). We also introduce (for every $l\geq 1$) the \textsl{%
normalized }$l$\textsl{th frequency component} of the Gaussian field $T$,
which is defined as
\begin{equation}
\overline{T}_{l}\left( x\right) =\frac{T_{l}\left( x\right) }{%
Var(T_{l}\left( x\right) )^{1/2}}=\frac{T_{l}\left( x\right) }{(\frac{2l+1}{%
4\pi }C_{l})^{1/2}}\text{, \ \ }x\in \mathbb{S}^{2}\text{.} \label{ti_barra}
\end{equation}
According to Lemma \ref{L : Pl} (in the special case $F\left(
z\right) =z$), $\overline{T}_{l}$ is a real-valued, isotropic,
centered and Gaussian field. Moreover, one has that $\mathbb{E}[
\overline{T}_{l}\left( x\right)
\overline{T}_{l}\left( y\right) ] $ $=$ $\mathbb{E}[ \overline{T}%
_{l}^{\left( q\right) }\left( x\right) \overline{T}_{l}^{\left( q\right)
}\left( y\right) ] =P_{l}\left( \left\langle x,y\right\rangle \right) $%
, for every $q\geq 2$ and every $l\geq 1$. The next result -- which gives an
exhaustive solution to Problem (\textbf{P-II}) -- states that, whenever
Condition 1 (or, equivalently, Condition 2) in the statement of Corollary %
\ref{C : PunctualCLT} is verified (and without\textsl{\ any }additional
assumption), the \textquotedblleft distance\textquotedblright\ between the
finite dimensional distributions of the normalized field $\overline{T}%
_{l}^{\left( q\right) }$ and those of $\overline{T}_{l}$ converge
to zero. For every $k\geq 1$, we denote by $\mathbf{P}
(\mathbb{R}^{k}) $ the class of all probability measures on
$\mathbb{R}^{k}$. We say that a metric $\gamma \left( \cdot ,\cdot
\right) $ \textsl{metrizes the weak convergence} \textsl{on}
$\mathbf{P}( \mathbb{R}^{k}) $ whenever the following double
implication holds for every $Q\in \mathbf{P}( \mathbb{R}^{k}) $
and every $\left\{ Q_{l}:l\geq 1\right\} \subset
\mathbf{P}(\mathbb{R}^{k}) $ (as $l\rightarrow +\infty $): $%
\gamma \left( Q_{l},Q\right) \rightarrow 0$ if, and only if,
$Q_{l}$ converges weakly to $Q$. The quantity $\gamma(P,Q)$ is
sometimes called the $\gamma$\textsl{--distance} between $P$ and
$Q$.
\begin{theorem}
\label{T : VectorCV}Let $q\geq 2$ be fixed, and suppose that Condition 1 (or
2) of Corollary \ref{C : PunctualCLT} is satisfied.
\begin{enumerate}
\item For every $k\geq 1$, every $x_{1},...,x_{k}\in \mathbb{S}^{2}$ and
every compact subset $M\subset \mathbb{R}^{k}$,%
\begin{equation}
\sup_{\left( \lambda _{1},...,\lambda _{k}\right) \in M}\left\vert \mathbb{E}%
\left[ e^{\mathrm{i}\sum_{j=1}^{k}\lambda _{j}\overline{T}_{l}^{\left(
q\right) }\left( x_{j}\right) }\right] -\mathbb{E}\left[ e^{%
\mathrm{i}\sum_{j=1}^{k}\lambda _{j}\overline{T}_{l}\left( x_{j}\right)
}\right] \right\vert \underset{l\rightarrow +\infty }{\longrightarrow }0.
\label{THcv}
\end{equation}
\item Fix $x_{1},...,x_{k}$ and denote by $\mathcal{L}\left( \overline{T}%
_{l}^{\left( q\right) };x_{1},...,x_{k}\right) $ and $\mathcal{L}\left(
\overline{T}_{l};x_{1},...,x_{k}\right) $ ($l\geq 1$), respectively, the law
of $\left( \overline{T}_{l}^{\left( q\right) }\left( x_{1}\right) ,...,%
\overline{T}_{l}^{\left( q\right) }\left( x_{k}\right) \right) $
and the law of $\left( \overline{T}_{l}\left( x_{1}\right)
,...,\overline{T}_{l}\left( x_{k}\right) \right) $. For every
metric $\gamma \left( \cdot ,\cdot \right) $ on $\mathbf{P}(
\mathbb{R}^{k}) $ such that $\gamma \left( \cdot ,\cdot \right) $
metrizes the weak convergence, it holds that
\begin{equation*}
\lim_{l\rightarrow +\infty }\gamma \left( \mathcal{L}\left( \overline{T}%
_{l}^{\left( q\right) };x_{1},...,x_{k}\right) ,\mathcal{L}\left( \overline{T%
}_{l};x_{1},...,x_{k}\right) \right) =0.
\end{equation*}
\end{enumerate}
\end{theorem}
\begin{proof}
The crucial point is that the spherical\ field $x\mapsto $ $\overline{T}%
_{l}^{\left( q\right) }\left( x\right) $ lives in the $q$th Wiener
chaos associated with the Gaussian space generated by $T$. By
using this fact, and by arguing as in the proof of Theorem
\ref{teo1}, one can show that the
vector $(\overline{T}_{l}^{\left( q\right) }\left( x_{1}\right) ,...,%
\overline{T}_{l}^{\left( q\right) }\left( x_{k}\right) )$ is
indeed equal in law to a vector of multiple Wiener-It\^{o}
integrals, of order $q$, with respect to a Brownian motion. Since
each element of this vector converges in law to a standard
Gaussian random variable, one can directly apply Theorem 1 and Proposition 2 in \cite%
{Pesco} to achieve the desired conclusion (see also \cite[Proposition 5]%
{Pesco}).
\end{proof}
\subsection{General subordination\label{SS : General Sub}}
We now give a solution to Problem (\textbf{P-III}), as stated at the end of
Section \ref{S : GaussSub}, where $F$ is a general real-valued function
belonging to the class $L_{0}^{2}\left( \mathbb{R},e^{-x^{2}/2}dx\right) $.
The function $F$ admits a unique representation of the form
\begin{equation}
F\left( z\right) =\sum_{q=1}^{\infty }\frac{c_{q}\left( F\right) }{q!}%
H_{q}\left( z\right) \text{, \ \ }z\in \mathbb{R}\text{,} \label{devF}
\end{equation}%
where the Hermite polynomials $H_{q}$ are given by (\ref{Her}) and the real
coefficients $c_{q}\left( F\right) $, $q=1,2,...$, are such that
\begin{equation}
\sum_{q\geq 1}\frac{c_{q}\left( F\right) ^{2}}{q!}<+\infty \text{ .}
\label{proprCF}
\end{equation}
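The coefficients $c_{q}\left( F\right) =\mathbb{E}\left[ F(N)H_{q}(N)\right] $ can be computed by Gauss--Hermite quadrature. The following is a minimal sketch (the test function $F(z)=z^{3}=H_{3}(z)+3H_{1}(z)$ is our own choice, for which $c_{1}=3$, $c_{3}=6$ and all other coefficients vanish):

```python
import numpy as np
from numpy.polynomial import hermite_e as He    # probabilists' Hermite H_q

def hermite_coeff(F, q, deg=60):
    """c_q(F) = E[F(N) H_q(N)], N ~ N(0,1), via Gauss-Hermite quadrature."""
    x, w = He.hermegauss(deg)                   # nodes/weights for weight e^{-x^2/2}
    w = w / np.sqrt(2.0 * np.pi)                # renormalize to the N(0,1) law
    return float(np.sum(w * F(x) * He.hermeval(x, [0.0] * q + [1.0])))

F = lambda z: z ** 3                            # = H_3(z) + 3 H_1(z)
c = [hermite_coeff(F, q) for q in range(1, 6)]  # c_1, ..., c_5
```

Since the quadrature with `deg` nodes is exact for polynomials of degree up to $2\,\mathrm{deg}-1$, the computed coefficients are exact up to floating-point error whenever $F$ is polynomial.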
As a consequence, for every $l\geq 0$, the frequency component $F\left[ T%
\right] _{l}\left( x\right) $ defined in (\ref{subProj}) admits the expansion%
\begin{equation}
F\left[ T\right] _{l}\left( x\right) =\sum_{q=1}^{\infty }\frac{c_{q}\left(
F\right) }{q!}T_{l}^{\left( q\right) }\left( x\right) \text{, \ \ }x\in
\mathbb{S}^{2}\text{,} \label{dev coeff}
\end{equation}%
where the series converges in $L^{2}\left( \mathbb{P}\right) $ for every
fixed $x$. Formula (\ref{dev coeff}) combined with Lemma \ref{L : Pl} yields
also that
\begin{equation*}
\mathbb{E(}F\left[ T\right] _{l}\left( x\right) F\left[ T\right] _{l}\left(
y\right) )=\frac{2l+1}{4\pi }P_{l}\left( \cos \left\langle x,y\right\rangle
\right) \sum_{q=1}^{\infty }\left( \frac{c_{q}\left( F\right) }{q!}\right)
^{2}\widetilde{C}_{l}^{(q)}\text{,}
\end{equation*}%
where $\widetilde{C}_{l}^{(q)}$ is given by (\ref{Short3}) or, equivalently,
by (\ref{VAR2}). The next result characterizes the asymptotic Gaussianity of
$F$-subordinated spherical random fields. Recall the definition of $%
\overline{F\left[ T\right] }_{l}$ given in (\ref{SubNorm}). The proof is
standard, and therefore omitted (it can be obtained e.g. along the lines of
\cite[Th.\ 4]{HuNu}).
\begin{theorem}
\label{T : GenCLT}Suppose that the following relations hold
\begin{enumerate}
\item For every $q\geq 1$, $\lim_{l\rightarrow +\infty }\frac{2l+1}{4\pi }%
\left( \frac{c_{q}\left( F\right) }{q!}\right) ^{2}\widetilde{C}_{l}^{(q)}/%
\mathbb{E(}F\left[ T\right] _{l}\left( x\right) ^{2})$ $\rightarrow $ $%
\sigma _{q}^{2}\in \lbrack 0,+\infty );$
\item $\sum_{q\geq 1}\left\{ c_{q}\left( F\right) /q!\right\} ^{2}\sigma
_{q}^{2}$ $\triangleq $ $\sigma ^{2}\left( F\right) $ $<$ $+\infty ;$
\item For every $q\geq 2$, the asymptotic relation (\ref{as11}) takes place
for $m=0$ and for every $p=\frac{q-1}{2}+1,...,q-1$, if $q-1$ is even, and
every $p=q/2,...,q-1$ if $q-1$ is odd;
\item $\lim_{p\rightarrow +\infty }\overline{\lim }_{l}\left( 2l+1\right)
\sum_{q=p+1}^{\infty }\left( \frac{c_{q}\left( F\right) }{q!}\right) ^{2}%
\widetilde{C}_{l}^{(q)}=0.$
\end{enumerate}
Then, for every $k\geq 1$, every $x_{1},...,x_{k}\in \mathbb{S}^{2}$ and
every compact $M\subset \mathbb{R}^{k}$,%
\begin{equation*}
\sup_{\left( \lambda _{1},...,\lambda _{k}\right) \in M}\left\vert \mathbb{E}%
\left[ e^{\mathrm{i}\sum_{j=1}^{k}\lambda _{j}\overline{F\left[ T\right] }%
_{l}\left( x_{j}\right) }\right]\! -\!\mathbb{E}\left[
e^{\mathrm{i}\sigma ^{2}\left( F\right)
^{1/2}\sum_{j=1}^{k}\lambda _{j}\overline{T}_{l}\left(
x_{j}\right) }\right] \right\vert \underset{l\rightarrow +\infty }{%
\!\rightarrow \!}0\text{,}
\end{equation*}%
where we used the notation (\ref{ti_barra}). In particular, the
last asymptotic relation implies that, for every
$\gamma(\cdot,\cdot)$ metrizing the weak convergence on
$\mathbf{P}( \mathbb{R}^{k}) $, the $\gamma$--distance between
$$(\overline{F\left[ T\right]} _{l}\left(
x_{1}\right),...,\overline{F\left[ T\right]} _{l}\left(
x_{k}\right))$$ and $\sigma ^{2}\left( F\right)^{1/2}(
\overline{T}_{l}\left( x_{1}\right),$ $...,$
$\overline{T}_{l}\left( x_{k}\right))$ converges to zero as
$l\rightarrow +\infty$.
\end{theorem}
\textbf{Remark. }A sufficient condition, ensuring that points 1 and 3 in the
statement of Theorem \ref{T : GenCLT} are verified, is the following: there
exist constants $\rho \left( q\right) >0$ such that (a) $\left( 2l+1\right)
\widetilde{C}_{l}^{(q)}\leq \rho \left( q\right) $ for every $q\geq 1$ and
every $l$, and (b) $\sum_{q=1}^{\infty }\left( \frac{c_{q}\left( F\right) }{%
q!}\right) ^{2}\rho \left( q\right) <+\infty $.
\section{Explicit sufficient conditions: convolutions and random walks\label%
{S : Explicit}}
In this section, we make more explicit the conditions for the CLTs proved in
Section \ref{S: CLT} for the (Hermite) frequency components $T_{l}^{\left(
q\right) }$, $l\geq 0$. In particular, we shall establish sufficient
conditions that are more directly linked to primitive assumptions on the
behaviour of the angular power spectrum $\left\{ C_{l}:l\geq 0\right\} $.
The results of Section \ref{SS : Q2} and Section \ref{SS : Q3} cover,
respectively, the case $q=2$ and $q=3$. Section \ref{SS : CONJQ} contains
some partial findings for the case of a general $q$, as well as several
conjectures. These results will be used in Section \ref{S : Ang PS} to
deduce explicit conditions on the rate of decay of the angular power
spectrum $\left\{ C_{l}:l\geq 0\right\} $.
Our analysis is inspired by the following result, which is a particular case
of the statements contained in \cite[Section 3]{MaPe}, concerning fields on
Abelian groups. Consider indeed a centered real-valued Gaussian field $%
V=\left\{ V\left( \theta \right) :\theta \in \mathbb{T}\right\} $ defined on
the torus $\mathbb{T}=[0,2\pi )$ (that we regard as an Abelian
compact group with group operation given by $xy=\left( x+y\right) \mathbf{mod%
}(2\pi )$). We suppose that the law of $V$ is \textsl{isotropic}, i.e. that $%
V\left( \theta \right) \overset{law}{=}V\left( x\theta \right) $ (in the
sense of stochastic processes) for every $x\in \mathbb{T}$, and also $%
\mathbb{E}V\left( \theta \right) ^{2}=1.$ We denote by $V\left( \theta
\right) =\sum_{l\in \mathbb{Z}}a_{l}e^{\mathrm{i}l\theta }$ the Fourier
decomposition of $V$, and we write $\Gamma _{l}^{V}=\mathbb{E}\left\vert
a_{l}\right\vert ^{2}$ (note that $\Gamma _{l}^{V}=\Gamma _{-l}^{V}$). Fix $%
q\geq 2$, and consider the Hermite-subordinated field $H_{q}\left[ V\right]
\left( \theta \right) $ $=H_{q}\left( V\left( \theta \right) \right) $,
where $H_{q}$ is the $q$th Hermite polynomial. The Fourier decomposition of $%
H_{q}\left[ V\right] $ is $H_{q}\left[ V\right] \left( \theta \right) $ $=$ $%
\sum_{l\in \mathbb{Z}}a_{l}^{\left( q\right) }e^{\mathrm{i}l\theta }$. We
write $N,N^{\prime }$ to indicate a pair of independent centered Gaussian
random variables with common variance equal to $1/2$: in \cite{MaPe} it is
proved that to have the \textsl{high-frequency CLT}%
\begin{equation}
\frac{a_{l}^{\left( q\right) }}{Var\left( a_{l}^{\left( q\right) }\right)
^{1/2}}=\frac{\int_{\mathbb{T}}H_{q}\left[ V\right] \left( \theta \right)
e^{-\mathrm{i}l\theta }d\theta }{Var\left( a_{l}^{\left( q\right) }\right)
^{1/2}}\underset{l\rightarrow \infty }{\overset{law}{\rightarrow }}N+%
\mathrm{i}N^{\prime } \label{clll}
\end{equation}%
it is \textsl{necessary and sufficient} that, for every $p=1,...,q-1$,
\begin{equation}
\lim_{l\rightarrow +\infty }\sup_{j\in \mathbb{Z}}\mathbb{P}\left[
U_{p}=j\mid U_{q}=l\right] =0\text{,} \label{rwAbel}
\end{equation}%
where $\left\{ U_{n}:n\geq 0\right\} $ is the random walk on $\mathbb{Z}$
whose law is given by $U_{0}=0$ and $$\mathbb{P}\left[ U_{n+1}=j\mid U_{n}=k%
\right] =\Gamma _{j-k}^{V}.$$ Note that the law of the random variable $%
U_{n}$ is trivially given by an $n$-fold \textsl{convolution }of the
coefficients $\Gamma _{l}^{V}$ (see also the discussion below).
The correspondence between (\ref{clll}) and the \textquotedblleft
random walk bridge\textquotedblright\ (\ref{rwAbel}) has been used
in \cite{MaPe} to establish explicit conditions on the power
spectrum $\{ \Gamma _{l}^{V}\} $ to have that (\ref{clll}) holds.
In what follows, we shall unveil (and apply) an analogous connection between
the CLTs proved in Section \ref{S: CLT} and some specific convolutions and
random walks on $\widehat{SO\left( 3\right) }$.
\subsection{Convolutions on $\widehat{SO\left( 3\right) }$}
In the light of Part 3 of Theorem \ref{teo1} and by Corollary \ref{C :
PunctualCLT}, we will focus on the sequence $\left\{ a_{l0;q}:l\geq
0\right\} $ (see (\ref{Short1.5})), whose behaviour as $l\rightarrow +\infty
$ yields an asymptotic characterization of the fields $T_{l}^{\left(
q\right) }\left( \cdot \right) $ defined in (\ref{Short2}). A crucial point
is the simple fact that the numerator of (\ref{as11}), for $m=0$, can be
developed as a multiple sum involving products of four generalized Gaunt
integrals, so that, by (\ref{megagaunt}), the asymptotic expressions
appearing in Theorem \ref{teo1} can be studied by means of the properties of
linear combinations of products of Clebsch-Gordan coefficients. As
anticipated, a very efficient tool for our analysis will be the use of
convolutions on $\mathbb{N}$, that we endow with a hypergroup structure
isomorphic to $\widehat{SO\left( 3\right) }$, i.e. the dual of $SO\left(
3\right) $. This will be the object of the subsequent discussion.
From now on, and for the rest of the section, we shall fix a sequence $%
\left\{ C_{l}:l\geq 0\right\} $, representing the angular power spectrum of
an isotropic centered, normalized Gaussian field $T$ over $\mathbb{S}^{2}$,
as in Section \ref{S : GaussSub}. Whenever convenient we shall write
\begin{equation}
\Gamma _{l}\triangleq (2l+1)C_{l}\text{ , \ }l\geq 0\text{,}
\label{FreqSpectrum}
\end{equation}%
so that, for $l\geq 1$ and up to the constant $1/4\pi $, the parameter $%
\Gamma _{l}$ represents the variance of the projection of the Gaussian field
$T$ in (\ref{specrap}) on the frequency $l$: indeed, according to Lemma \ref%
{L : Pl}, $Var(T_{l})=\Gamma _{l}/4\pi $. Also, we define the following
\textsl{convolutions} of the coefficients $\Gamma _{l}$ (in the following
expressions, the sums over indices $l_{i}$, $L_{i}$ ... range implicitly
from $0$ to $+\infty $):%
\begin{align}
& \!\!\!\!\!\!\!\!\widehat{\Gamma }_{2,l}\!=\!\sum_{l_{1},l_{2}}\Gamma
_{l_{1}}\Gamma _{l_{2}}(C_{l_{1}0l_{2}0}^{l0})^{2}\text{ ,}\qquad \qquad
\qquad \qquad \qquad \qquad \label{cgconv-2} \\
& \!\!\!\!\!\!\!\!\widehat{\Gamma }_{3,l}\!=\!\sum_{L_{1},l_{3}}\!\widehat{%
\Gamma }_{2,L_{1}}\Gamma
_{l_{3}}(C_{L_{1}0l_{3}0}^{l0})^{2}\!=\!\sum_{l_{1},l_{2},l_{3}}\!\Gamma
_{l_{1}}\Gamma _{l_{2}}\Gamma
_{l_{3}}\sum_{L_{1}}(C_{l_{1}0l_{2}0l_{3}0}^{L_{1}l;0})^{2}\text{, ...}
\label{cgconv-1}
\end{align}%
$\ $ \\[-30pt]
\begin{equation}
\widehat{\Gamma }_{q,l}\!=\!\sum_{L_{1},l_{q}}\!\widehat{\Gamma }%
_{q-1,L_{q-1}}\!\Gamma
_{l_{q}}\!(C_{L_{q-1}0l_{q}0}^{l0})^{2}\!=\!\sum_{l_{1}...l_{q}}\!\Gamma
_{l_{1}}\!...\!\Gamma
_{l_{q}}\!\sum_{L_{1}\!...\!L_{q-2}}%
\!(C_{l_{1}0...l_{q}0}^{L_{1}...L_{q-2}l;0})^{2}. \label{cgconv}
\end{equation}%
We stress that the equalities in formulae (\ref{cgconv-1}) and (\ref{cgconv}%
) are consequences of (\ref{0_conv}). It will be also convenient to define a
\textsl{*-convolution} of order $p\geq 2$ as:%
\begin{eqnarray}
\widehat{\Gamma }_{p,l;l_{1}}^{\ast } &=&\sum_{l_{2}}\cdot \cdot \cdot
\sum_{l_{p}}\Gamma _{l_{2}}\cdot \cdot \cdot \Gamma
_{l_{p}}\sum_{L_{1}...L_{p-2}}\left\{
C_{l_{1}0l_{2}0}^{L_{1}0}C_{L_{1}0l_{3}0}^{L_{2}0}...C_{L_{p-2}0l_{p}0}^{l0}%
\right\} ^{2} \notag \\
&=&\sum_{l_{2}}\cdot \cdot \cdot \sum_{l_{p}}\Gamma _{l_{2}}\cdot \cdot
\cdot \Gamma _{l_{p}}\sum_{L_{1}...L_{p-2}}\left\{
C_{l_{1}0l_{2}0...l_{p}0}^{L_{1}...l;0}\right\} ^{2}. \label{starconv}
\end{eqnarray}%
Note that the number of sums following the equalities in formula
(\ref{starconv}) is $p-1$: however, we choose to keep the symbol
$p$ to denote *-convolutions, since it
is consistent with the probabilistic representations given in formulae (\ref%
{pint1}) and (\ref{pint2}) below. The above *-convolution has the following
property: for every $p=2,...,q$%
\begin{equation*}
\sum_{l_{1}}\widehat{\Gamma }_{q+1-p,l_{1}}\widehat{\Gamma }%
_{p,l;l_{1}}^{\ast }=\widehat{\Gamma }_{q,l}\text{ , and, in particular, }%
\sum_{l_{1}}\Gamma _{l_{1}}\widehat{\Gamma }_{q,l;l_{1}}^{\ast }=\widehat{%
\Gamma }_{q,l}\text{ .}
\end{equation*}%
The *-convolution of order 2 can be written more explicitly as
\begin{equation}
\widehat{\Gamma }_{2,l;l_{1}}^{\ast }=\sum_{l_{2}}\Gamma
_{l_{2}}(C_{l_{1}0l_{2}0}^{l0})^{2}. \label{cgconv1}
\end{equation}
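The convolutions defined above can be evaluated exactly for a truncated toy spectrum (our arbitrary choice: $\Gamma _{0}=1/2$, $\Gamma _{1}=\Gamma _{2}=1/4$, so that $\Gamma _{\ast }=1$), using SymPy's exact Clebsch--Gordan routine. This sketch computes $\widehat{\Gamma }_{2,l}$ and the iterated form of $\widehat{\Gamma }_{3,l}$, and checks that both have total mass $\Gamma _{\ast }^{2}=\Gamma _{\ast }^{3}=1$:

```python
from sympy import S
from sympy.physics.quantum.cg import CG

Gamma = {0: S(1)/2, 1: S(1)/4, 2: S(1)/4}     # toy spectrum, Gamma_* = 1 (our choice)

def cg0_sq(l1, l2, l):
    return CG(l1, 0, l2, 0, l, 0).doit() ** 2  # (C^{l0}_{l1 0, l2 0})^2, exact

# Ghat_{2,l} as a double sum, nonzero only for l = |l1-l2|, ..., l1+l2 <= 4
Ghat2 = {l: sum(Gamma[a] * Gamma[b] * cg0_sq(a, b, l)
                for a in Gamma for b in Gamma) for l in range(5)}
# Ghat_{3,l} through the iterated (two-step) convolution, support l <= 6
Ghat3 = {l: sum(Ghat2[L] * Gamma[c] * cg0_sq(L, c, l)
                for L in Ghat2 for c in Gamma) for l in range(7)}

assert Ghat2[0] == S(17) / 60      # = sum_{l1} Gamma_{l1}^2 / (2 l1 + 1)
assert sum(Ghat2.values()) == 1    # l -> Ghat_{2,l}/Gamma_*^2 is a probability
assert sum(Ghat3.values()) == 1    # likewise for Ghat_{3,l}/Gamma_*^3
```

The value $\widehat{\Gamma }_{2,0}=17/60$ follows because only the diagonal terms $l_{1}=l_{2}$ contribute at $l=0$, with $(C_{l_{1}0l_{1}0}^{00})^{2}=1/(2l_{1}+1)$.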
\textbf{Remarks. (1) }(\textit{Probabilistic interpretation of the
convolutions}) Write first $\Gamma _{\ast }\triangleq \sum_{l}\Gamma _{l}$
(plainly, in our framework $\Gamma _{\ast }=4\pi $, but the following
discussion applies to coefficients $\left\{ \Gamma _{l}\right\} $ such that $%
\Gamma _{\ast }>0$ is arbitrary) so that $l\longmapsto \Gamma _{l}/\Gamma
_{\ast }$ defines a probability on $\mathbb{N}$. The second orthonormality
relation in (\ref{orto1}) implies that, for fixed $l_{1},l_{2}$, the
application $l\longmapsto (C_{l_{1}0l_{2}0}^{l0})^{2}$ is a probability on $%
\mathbb{N}$. Now define the law of a (homogeneous) \textsl{Markov chain }$%
\left\{ Z_{n}:n\geq 1\right\} $ as follows:
\begin{align}
\mathbb{P}\left\{ Z_{1}=l\right\} & =\Gamma _{l}/\Gamma _{\ast } \label{rw1}
\\
\mathbb{P}\left\{ Z_{n+1}=l\mid Z_{n}=L\right\} & =\sum_{l_{0}}\frac{\Gamma
_{l_{0}}}{\Gamma _{\ast }}\left( C_{l_{0}0L0}^{l0}\right) ^{2}\text{.}
\label{rw2}
\end{align}%
It is clear that $\mathbb{P}\left\{ Z_{q}=l\right\} =\widehat{\Gamma }%
_{q,l}/\left( \Gamma _{\ast }\right) ^{q}$, and also, for $p\geq 2$,%
\begin{eqnarray}
\frac{\widehat{\Gamma }_{p,l;l_{1}}^{\ast }}{\left( \Gamma _{\ast }\right)
^{p-1}} &=&\mathbb{P}\left\{ Z_{p}=l\mid Z_{1}=l_{1}\right\} \label{pint1}
\\
\frac{\widehat{\Gamma }_{p,l;l_{1}}^{\ast }\widehat{\Gamma }_{q+1-p,l_{1}}}{%
\left( \Gamma _{\ast }\right) ^{q}} &=&\mathbb{P}\left\{ \left(
Z_{q}=l\right) \cap \left( Z_{q+1-p}=l_{1}\right) \right\} \text{ \ \ (}q>p-1%
\text{).} \label{pint2}
\end{eqnarray}%
The following quantity will be crucial in the subsequent sections:%
\begin{equation}
\frac{\widehat{\Gamma }_{q+1-p,l;\lambda }^{\ast }\widehat{\Gamma }%
_{p,\lambda }}{\sum_{L}\widehat{\Gamma }_{p,L}\widehat{\Gamma }%
_{q+1-p,l;L}^{\ast }}=\frac{\widehat{\Gamma }_{q+1-p,l;\lambda }^{\ast }%
\widehat{\Gamma }_{p,\lambda }}{\widehat{\Gamma }_{q,l}}=\mathbb{P}\left\{
Z_{p}=\lambda \mid Z_{q}=l\right\} \text{ \ (}q>p\text{);} \label{impint}
\end{equation}%
observe that the last relation in (\ref{impint}) derives from $$\widehat{%
\Gamma }_{q+1-p,l;\lambda }^{\ast }/\left( \Gamma _{\ast }\right) ^{q-p}=%
\mathbb{P}\left\{ \left( Z_{q+1-p}=l\right) \mid \left( Z_{1}=\lambda
\right) \right\} =\mathbb{P}\left\{ \left( Z_{q}=l\right) \mid
\left( Z_{p}=\lambda \right) \right\} ,$$where the last equality is a
consequence of the
homogeneity of $Z$. Note also that we can identify each natural number $%
l\geq 0$ with an irreducible representation of $SO\left( 3\right) $. It
follows that the formal addition $l_{1}+l_{2}\triangleq
\sum_{l}l(C_{l_{1}0l_{2}0}^{l0})^{2}$ may be used to endow $\widehat{%
SO\left( 3\right) }$ with a hypergroup structure. In this sense, we can
interpret the chain $\left\{ Z_{n}:n\geq 1\right\} $ as a \textsl{random walk%
} on the hypergroup $\widehat{SO\left( 3\right) }$, in a spirit similar to
\cite{GKR}. In Section \ref{S : Roynette}, we will discuss a physical
interpretation of these convolutions and establish a precise connection
between the objects introduced in this section and the notion of convolution
appearing in \cite{GKR}.
(\textbf{2}) (\textit{A comparison with the Abelian case}) In
\cite{MaPe}, where we dealt with similar problems in the case of
homogeneous spaces of Abelian groups, we used extensively
convolutions over $\mathbb{Z}$. These convolutions, which we
denote by $_{A}\widehat{\Gamma }_{q,l}$ ($q\geq 2$, $l\in \mathbb{Z}$),
are obtained as in (\ref{cgconv-2})-(\ref{cgconv1}), by taking
sums over $\mathbb{Z}$ (instead of over $\mathbb{N}$) and by
replacing the Clebsch-Gordan symbols $( C_{l_{1}0l_{2}0}^{l0} )
^{2}$ with the indicator $\mathbf{1}_{l_{1}+l_{2}=l}$. Note that
these indicator functions do indeed provide the Clebsch-Gordan
coefficients associated with the irreducible representations of
the $1$-dimensional torus $\mathbb{T}=[0,2\pi )$,
regarded as a compact Abelian group with group operation
$xy=\left( x+y\right) \left( \mathbf{mod}(2\pi )\right) $
(this is equivalent to the trivial relation $e^{\mathrm{i}l_{1}x}e^{\mathrm{i%
}l_{2}x}=\sum_{l}\mathbf{1}_{l_{1}+l_{2}=l}e^{\text{i}lx}=e^{\text{i}\left(
l_{1}+l_{2}\right) x}$)$.$ Note also that in the Abelian case one has $_{A}%
\widehat{\Gamma }_{p,l;l_{1}}^{\ast }$ $=$ $_{A}\widehat{\Gamma }%
_{p,l-l_{1}}.$ Also, if $\Gamma _{l}=\Gamma _{l}^{V}$, where $\{
\Gamma
_{l}^{V}\} $ is the power spectrum of the Gaussian field $V$ on $%
\mathbb{T}$ appearing in (\ref{clll}), one has that
$_{A}\widehat{\Gamma }_{q,l}^{V}$ $=$ $\mathbb{P}\left[
U_{q}=l\right] $, where $\left\{ U_{n}\right\} $ is the random
walk given in (\ref{rwAbel}).
\subsection{The case $q=2$\label{SS : Q2}}
In this subsection, we provide a sufficient condition on the spectrum $%
\left\{ C_{l}:l\geq 0\right\} $ (or, equivalently, on $\left\{ \Gamma
_{l}:l\geq 0\right\} $, as defined in (\ref{FreqSpectrum})) to have the CLT (%
\ref{as00}) in the quadratic case $q=2$. This condition is stated in
Proposition \ref{cglem}, and is obtained via some preliminary (technical)
computations and lemmas.
According to Part 3 of Theorem \ref{teo1}, to deal with (\ref{as00}) we
shall find sufficient conditions to have that (\ref{as11}) takes place for $%
m=0$, $q=2$ and $p=1$. From (\ref{VAR2}) we deduce
\begin{equation}
\widetilde{C}_{l}^{\left( 2\right) }=2\sum_{l_{1},l_{2}=0}^{\infty }%
\frac{(2l_{1}+1)(2l_{2}+1)}{4\pi (2l+1)}%
C_{l_{1}}C_{l_{2}}(C_{l_{1}0l_{2}0}^{l0})^{2}. \label{den2}
\end{equation}%
On the other hand, the multiple sums appearing in the numerator of (\ref%
{as11}) become ($q=2$, $p=1$)%
\begin{eqnarray}
&&\sum_{j_{1},n_{1},j_{2},n_{2}}\!\!\!C_{j_{1}}\!C_{j_{2}}\!\left\vert
\sum_{l_{1},m_{1}}C_{l_{1}}\!\mathcal{G}\!\left\{
l_{1},m_{1};j_{1},n_{1};l,-m\right\} \!\mathcal{G}\!\left\{
l_{1},m_{1};j_{2},n_{2};l,-m\right\} \right\vert ^{2} \notag \\
&=&\! \! \frac{1}{\left[ 4\pi (2l+1)\right] ^{2}}%
\sum_{j_{1},n_{1},j_{2},n_{2}}\! \Gamma _{j_{1}}\Gamma _{j_{2}}\!\left\vert
\sum_{l_{1},m_{1}}\Gamma
_{l_{1}}C_{l_{1}m_{1}j_{1}n_{1}}^{lm}C_{l_{1}0j_{1}0}^{l0}C_{l_{1}m_{1}j_{2}n_{2}}^{lm}C_{l_{1}0j_{2}0}^{l0}\right\vert ^{2}
\notag \\
&=&\frac{1}{\left[ 4\pi (2l+1)\right] ^{2}}\sum_{j_{1},n_{1},j_{2}}\Gamma
_{j_{1}}\Gamma _{j_{2}}\left\vert \sum_{l_{1},m_{1}}\Gamma
_{l_{1}}C_{l_{1}m_{1}j_{1}n_{1}}^{lm}C_{l_{1}0j_{1}0}^{l0}C_{l_{1}m_{1}j_{2}n_{2}}^{lm}C_{l_{1}0j_{2}0}^{l0}\right\vert ^{2}
\notag \\
&=&\frac{1}{\left[ 4\pi (2l+1)\right] ^{2}}\sum_{j_{1}j_{2}}%
\sum_{l_{1}l_{2}}\Gamma _{j_{1}}\Gamma _{j_{2}}\Gamma _{l_{1}}\Gamma
_{l_{2}}C_{l_{1}0j_{1}0}^{l0}C_{l_{1}0j_{2}0}^{l0}C_{l_{2}0j_{1}0}^{l0}C_{l_{2}0j_{2}0}^{l0}
\notag \\
&&\text{\ \ \ \ \ \ \ }\times \left\{
\sum_{n_{1}n_{2}m_{1}m_{2}}C_{l_{1}m_{1}j_{1}n_{1}}^{lm}C_{l_{1}m_{1}j_{2}n_{2}}^{lm}
C_{l_{2}m_{2}j_{1}n_{1}}^{lm}C_{l_{2}m_{2}j_{2}n_{2}}^{lm}\right\}.
\label{num2}
\end{eqnarray}%
Now, from \cite[Eq. 8.7.4.20]{VMK} we deduce that%
\begin{eqnarray*}
&&\sum_{n_{1}n_{2}}%
\sum_{m_{1}m_{2}}C_{l_{1}m_{1}j_{1}n_{1}}^{lm}C_{l_{1}m_{1}j_{2}n_{2}}^{lm}C_{l_{2}m_{2}j_{1}n_{1}}^{lm}C_{l_{2}m_{2}j_{2}n_{2}}^{lm}
\\
&=&(-1)^{\beta }\sum_{s\sigma }(2s+1)(2l+1)(C_{lms\sigma }^{lm})^{2}\left\{
\begin{array}{ccc}
l_{1} & j_{1} & l \\
l & s & l_{2}%
\end{array}%
\right\} \left\{
\begin{array}{ccc}
l_{1} & j_{2} & l \\
l & s & l_{2}%
\end{array}%
\right\} \\
&=&\sum_{s}(2s+1)(2l+1)(C_{lms0}^{lm})^{2}\left\{
\begin{array}{ccc}
l_{1} & j_{1} & l \\
l & s & l_{2}%
\end{array}%
\right\} \left\{
\begin{array}{ccc}
l_{1} & j_{2} & l \\
l & s & l_{2}%
\end{array}%
\right\} \text{,}
\end{eqnarray*}%
where $\beta =l_{1}+j_{1}+l_{2}+j_{2}$, and we used the Wigner $6j$ symbols,
as defined in (\ref{wig6j}). The last equality follows because the quantity $%
l_{1}+j_{1}$ $+l_{2}+j_{2}$ $+2l$ must be necessarily even, and therefore $%
\beta $ must be even as well. It should be noted that the role of the pairs $%
(j_{1},n_{1})$ and $(l_{1},m_{1})$ is perfectly symmetric, so we obtain also
\begin{eqnarray*}
&&\sum_{n_{1}n_{2}}%
\sum_{m_{1}m_{2}}C_{l_{1}m_{1}j_{1}n_{1}}^{lm}C_{l_{1}m_{1}j_{2}n_{2}}^{lm}C_{l_{2}m_{2}j_{1}n_{1}}^{lm}C_{l_{2}m_{2}j_{2}n_{2}}^{lm}
\\
&=&\sum_{s}(2s+1)(2l+1)(C_{lms0}^{lm})^{2}\left\{
\begin{array}{ccc}
j_{1} & l_{1} & l \\
l & s & j_{2}%
\end{array}%
\right\} \left\{
\begin{array}{ccc}
j_{1} & l_{2} & l \\
l & s & j_{2}%
\end{array}%
\right\} \text{ ,}
\end{eqnarray*}%
whence
\begin{eqnarray}
&&\sum_{s}(2s+1)(2l+1)(C_{lms0}^{lm})^{2}\left\{
\begin{array}{ccc}
j_{1} & l_{1} & l \\
l & s & j_{2}%
\end{array}%
\right\} \left\{
\begin{array}{ccc}
j_{1} & l_{2} & l \\
l & s & j_{2}%
\end{array}%
\right\} \label{pau1} \\
&\equiv &\sum_{s}(2s+1)(2l+1)(C_{lms0}^{lm})^{2}\left\{
\begin{array}{ccc}
l_{1} & j_{1} & l \\
l & s & l_{2}%
\end{array}%
\right\} \left\{
\begin{array}{ccc}
l_{1} & j_{2} & l \\
l & s & l_{2}%
\end{array}%
\right\} \text{ .} \label{pau2}
\end{eqnarray}
\begin{lemma}
\label{lemma6j}For all integers $l,l_{1},l_{2},j_{1},j_{2}$ it holds that,
for some positive constant $c$,
\begin{eqnarray*}
&&\sum_{s}(2s+1)(2l+1)(C_{l0s0}^{l0})^{2}\left\{
\begin{array}{ccc}
l_{1} & j_{1} & l \\
l & s & l_{2}%
\end{array}%
\right\} \left\{
\begin{array}{ccc}
l_{1} & j_{2} & l \\
l & s & l_{2}%
\end{array}%
\right\} \\
&\leq &c\max \left[ \frac{1}{\sqrt[5]{2l_{1}+1}}\wedge \frac{1}{\sqrt[5]{%
2l_{2}+1}},\frac{1}{\sqrt[5]{2j_{1}+1}}\wedge \frac{1}{\sqrt[5]{2j_{2}+1}}%
\right] .
\end{eqnarray*}
\end{lemma}
\begin{proof}
Assume, without loss of generality, that $j_{1},j_{2}>l_{1}$; otherwise we focus on
(\ref{pau2}) rather than (\ref{pau1}). For $\alpha \in (0,1),$ we have that%
\begin{equation*}
\sum_{s}(2s+1)(2l+1)(C_{l0s0}^{l0})^{2}\left\{
\begin{array}{ccc}
l_{1} & j_{1} & l \\
l & s & l_{2}%
\end{array}%
\right\} \left\{
\begin{array}{ccc}
l_{1} & j_{2} & l \\
l & s & l_{2}%
\end{array}%
\right\}
\end{equation*}%
\begin{eqnarray*}
&\leq &\sum_{s\leq l_{1}^{\alpha }}(2s+1)(2l+1)(C_{l0s0}^{l0})^{2}\left\{
\begin{array}{ccc}
l_{1} & j_{1} & l \\
l & s & l_{2}%
\end{array}%
\right\} \left\{
\begin{array}{ccc}
l_{1} & j_{2} & l \\
l & s & l_{2}%
\end{array}%
\right\} \\
&&+\sum_{s>l_{1}^{\alpha }}(2s+1)(2l+1)(C_{l0s0}^{l0})^{2}\left\{
\begin{array}{ccc}
l_{1} & j_{1} & l \\
l & s & l_{2}%
\end{array}%
\right\} \left\{
\begin{array}{ccc}
l_{1} & j_{2} & l \\
l & s & l_{2}%
\end{array}%
\right\}
\end{eqnarray*}\\[-30pt]
\begin{eqnarray*}
&\leq &Cl_{1}^{2\alpha }(2l+1)\max_{s\leq l_{1}^{\alpha }}\left\{
\begin{array}{ccc}
l_{1} & j_{1} & l \\
l & s & l_{2}%
\end{array}%
\right\} \left\{
\begin{array}{ccc}
l_{1} & j_{2} & l \\
l & s & l_{2}%
\end{array}%
\right\} \\
&&+\left\{ \max_{s>l_{1}^{\alpha }}(C_{l0s0}^{l0})^{2}\right\}
\sum_{s}(2s+1)(2l+1)\left\{
\begin{array}{ccc}
l_{1} & j_{1} & l \\
l & s & l_{2}%
\end{array}%
\right\} \left\{
\begin{array}{ccc}
l_{1} & j_{2} & l \\
l & s & l_{2}%
\end{array}%
\right\}
\end{eqnarray*}\\[-30pt]
\begin{eqnarray*}
\leq Cl_{1}^{2\alpha }(2l+1)\frac{1}{(2l+1)(2l_{1}+1)}+\frac{C}{%
l_{1}^{\alpha /2}}\frac{2l+1}{\sqrt{j_{1}j_{2}}}=O(l_{1}^{2\alpha
-1}+l_{1}^{-\alpha /2})=O(\frac{1}{\sqrt[5]{l_{1}}}),
\end{eqnarray*}%
where the last equality has been obtained by setting $\alpha = 2/5$.
The second-to-last step follows because $j_{1},j_{2}\geq l_{1},l_{2}$ implies $%
j_{1},j_{2}>l/2,$ in view of the triangle inequalities $%
l_{1}+j_{1},l_{1}+j_{2}>l$; also, we used the inequality $\left\{
\max_{s>l_{1}^{\alpha }}(C_{l0s0}^{l0})^{2}\right\} \leq l_{1}^{-\alpha /2}$%
, see Lemma \ref{cglem} below. The bound with $l_{2}$ can be obtained by
exploiting the symmetries of the $6j$ coefficients; in particular, we recall
that (see (\cite[Eq. 9.4.2.2]{VMK}))%
\begin{equation*}
\left\{
\begin{array}{ccc}
l_{1} & j_{1} & l \\
l & s & l_{2}%
\end{array}%
\right\} \equiv \left\{
\begin{array}{ccc}
l & j_{1} & l_{2} \\
l_{1} & s & l%
\end{array}%
\right\} \equiv \left\{
\begin{array}{ccc}
l_{2} & j_{1} & l \\
l & s & l_{1}%
\end{array}%
\right\} \text{ .}
\end{equation*}
\end{proof}
\textbf{Remark. }The bound provided in Lemma \ref{lemma6j} is
sufficient for our purposes below, and we did not investigate its
efficiency in detail. We remark, however, that by setting
$j_{1}=j_{2}=0$ one obtains explicitly (see \cite[Eq. 8.5.1.2]{VMK})
\begin{align}
\sum_{n_{1}n_{2}m_{1}m_{2}}C_{l_{1}m_{1}j_{1}n_{1}}^{lm}C_{l_{1}m_{1}j_{2}n_{2}}^{lm}C_{l_{2}m_{2}j_{1}n_{1}}^{lm}C_{l_{2}m_{2}j_{2}n_{2}}^{lm}
\notag \\
=\sum_{m_{1}m_{2}}C_{l_{1}m_{1}00}^{lm}C_{l_{1}m_{1}00}^{lm}C_{l_{2}m_{2}00}^{lm}C_{l_{2}m_{2}00}^{lm}\equiv 1%
\text{ .} \notag
\end{align}%
\begin{lemma}
\label{cglem} As $l_{1}\rightarrow +\infty $, $C_{l0l_{1}0}^{l0}=O(\frac{1}{%
\sqrt[4]{l_{1}}})$.
\end{lemma}
\begin{proof}
Unless the triangle condition $2l\geq l_{1}$ is satisfied, the
Clebsch-Gordan coefficient is identically zero and the bound is trivial. Now
recall that%
\begin{equation*}
C_{l0l_{1}0}^{l0}=\frac{\sqrt{2l+1}\left[ (2l+l_{1})/2\right] !}{\left[
l_{1}/2\right] !\left[ (2l-l_{1})/2\right] !\left[ l_{1}/2\right] !}\left\{
\frac{l_{1}!(2l-l_{1})!l_{1}!}{(2l+l_{1}+1)!}\right\} ^{1/2}.
\end{equation*}%
For sequences $\left\{ a_{l}\right\} $ and $\left\{ b_{l}\right\} $, write $%
a_{l}\approx b_{l}$ when both $a_{l}=O(b_{l})$ and $b_{l}=O(a_{l})$ hold
true. From Stirling's formula%
\begin{align*}
C_{l0l_{1}0}^{l0} \!&\approx \!\frac{\sqrt{2l+1}\left[
(2l+l_{1})/2\right]
^{(2l+l_{1})/2+1/2}}{\left[ l_{1}/2\right] ^{l_{1}+1}\left[ (2l-l_{1}+1)/2%
\right] ^{(2l-l_{1})/2+1/2}}\!\left\{ \frac{%
l_{1}^{2l_{1}+1}(2l-l_{1})^{(2l-l_{1})+1/2}}{(2l+l_{1}+1)^{2l+l_{1}+3/2}}%
\right\} ^{1/2} \\
&=\!\frac{\sqrt{2l+1}(2l+l_{1})^{(2l+l_{1})/2+1/2}}{%
l_{1}^{l_{1}+1}(2l-l_{1}+1)^{(2l-l_{1})/2+1/2}}\left\{ \frac{%
l_{1}^{2l_{1}+1}(2l-l_{1})^{(2l-l_{1})+1/2}}{(2l+l_{1}+1)^{2l+l_{1}+3/2}}%
\right\} ^{1/2} \\
&=\!\!\frac{\sqrt{2l+1}}{l_{1}^{1/2}(2l\!-l_{1}\!+\!1)^{1/4}}\frac{1}{%
(2l\!+\!l_{1}\!+\!1)^{1/4}}\!\leq\! \frac{\sqrt[4]{2l+1}}{l_{1}^{1/2}(2l\!-\!l_{1}\!+\!1)^{1/4}}%
\!=\!O(\frac{1}{\sqrt[4]{l_{1}}})
\end{align*}
\end{proof}
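The rate in Lemma \ref{cglem} can be checked numerically. The sketch below is our own illustration (the helper name \texttt{cg\_l0} and the test values are not from the text): it implements the explicit factorial formula quoted at the beginning of the proof, using log-Gamma in place of the factorials to avoid overflow, and verifies that the scaled coefficient $l_{1}^{1/4}\,C_{l0l_{1}0}^{l0}$ stays bounded over a range of degrees.

```python
import math

def cg_l0(l: int, l1: int) -> float:
    """|C^{l0}_{l0, l1 0}| via the explicit factorial formula,
    evaluated with log-Gamma to avoid overflow; returns 0 outside
    the parity/triangle constraints (l1 even, l1 <= 2l)."""
    if l1 % 2 != 0 or l1 < 0 or l1 > 2 * l:
        return 0.0
    a, b, c = (2 * l + l1) // 2, l1 // 2, (2 * l - l1) // 2
    log_pre = (0.5 * math.log(2 * l + 1) + math.lgamma(a + 1)
               - 2.0 * math.lgamma(b + 1) - math.lgamma(c + 1))
    log_rad = 0.5 * (2.0 * math.lgamma(l1 + 1)
                     + math.lgamma(2 * l - l1 + 1)
                     - math.lgamma(2 * l + l1 + 2))
    return math.exp(log_pre + log_rad)

# exact small case: |C^{10}_{10,20}| = 2/sqrt(10)
exact_check = abs(cg_l0(1, 2) - 2.0 / math.sqrt(10.0))

# the scaled coefficient l1^{1/4} * C^{l0}_{l0,l1 0} should remain bounded
max_scaled = max(cg_l0(l, l1) * l1 ** 0.25
                 for l in (10, 100, 1000)
                 for l1 in range(2, 2 * l + 1, 2))
print(exact_check, max_scaled)
```

In agreement with the lemma, the maximum of the scaled coefficient stays well below a fixed constant even as $l_{1}$ approaches the boundary value $2l$.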
\noindent We can finally state a sufficient condition for the CLT
(\ref{as00}) in the case $q=2.$
\begin{proposition}
\label{lemmaq2}\ For $q=2$, a sufficient condition for the CLT (\ref{as00})
is the following asymptotic relation%
\begin{equation}
\lim_{l\rightarrow +\infty }\sup_{l_{1}}\frac{\sum_{l_{2}}\Gamma
_{l_{1}}\Gamma _{l_{2}}\left\{ C_{l_{1}0l_{2}0}^{l0}\right\} ^{2}}{%
\sum_{l_{1},l_{2}}\Gamma _{l_{1}}\Gamma _{l_{2}}(C_{l_{1}0l_{2}0}^{l0})^{2}}%
\!=\!\lim_{l\rightarrow +\infty }\sup_{l_{1}}\mathbb{P}\left\{
Z_{1}\!=\!l_{1}\!\mid \! Z_{2} \! = \!l\right\} \!=0\text{,}
\label{sufcon2}
\end{equation}%
where the $\left\{ \Gamma _{l}\right\} $ are given by (\ref{FreqSpectrum})
and $\left\{ Z_{n}\right\} $ is the Markov chain defined in formulae (\ref%
{rw1}) and (\ref{rw2}).
\end{proposition}
\begin{proof}
In the sequel, we shall use repeatedly the trivial inequality
\begin{equation}
\sum_{j=0}^{n}\left| \frac{c_{j}}{a_{j}\wedge b_{j}}\right|
=\sum_{j:a_{j}\leq b_{j}}\left| \frac{c_{j}}{a_{j}}\right|
+\sum_{j:a_{j}>b_{j}}\left| \frac{c_{j}}{b_{j}}\right| \leq
\sum_{j=0}^{n}\left| \frac{c_{j}}{a_{j}}\right| +\sum_{j=0}^{n}\left| \frac{%
c_{j}}{b_{j}}\right| \text{,} \label{trivine}
\end{equation}%
which holds for arbitrary $n$ and real vectors $\left\{ a_{j}\right\} $, $%
\left\{ b_{j}\right\} $ and $\left\{ c_{j}\right\} $. In view of Lemma \ref%
{lemma6j}, by using a generalized Cauchy-Schwarz inequality, (\ref{trivine}%
) and symmetry considerations, we obtain that the expression
(\ref{num2}) is such that
\begin{eqnarray*}
(\ref{num2}) &\leq &\frac{1}{[4\pi (2l+1)]^{2}}\sum_{j_{1},j_{2}}%
\sum_{l_{1},l_{2}}\Gamma _{j_{1}}\Gamma _{j_{2}}\Gamma _{l_{1}}\Gamma
_{l_{2}}\left|
C_{l_{1}0j_{1}0}^{l0}C_{l_{1}0j_{2}0}^{l0}C_{l_{2}0j_{1}0}^{l0}C_{l_{2}0j_{2}0}^{l0}\right|
\\
&\leq &\frac{2}{[4\pi
(2l+1)]^{2}}\!\sum_{j_{1},j_{2}}\!\sum_{l_{1},l_{2}}\Gamma
_{j_{1}}\Gamma _{j_{2}}\Gamma _{l_{1}}\Gamma _{l_{2}}\!\left|
C_{l_{1}0j_{1}0}^{l0}C_{l_{1}0j_{2}0}^{l0}C_{l_{2}0j_{1}0}^{l0}C_{l_{2}0j_{2}0}^{l0}\right|
\frac{1}{\sqrt[5]{j_{1}}}\\
&\leq &\frac{1}{8[\pi
(2l+1)]^{2}}\sqrt{\sum_{l_{1}j_{1}}\frac{\Gamma _{l_{1}}\Gamma
_{j_{1}}}{\sqrt[5]{j_{1}^{2}}}\left\{
C_{l_{1}0j_{1}0}^{l0}\right\} ^{2}\sum_{l_{1}j_{2}}\Gamma
_{l_{1}}\Gamma
_{j_{2}}\left\{ C_{l_{1}0j_{2}0}^{l0}\right\} ^{2}} \\
&&\times \sqrt{\sum_{l_{2}j_{1}}\Gamma _{l_{2}}\Gamma
_{j_{1}}\left\{ C_{l_{2}0j_{1}0}^{l0}\right\}
^{2}\sum_{l_{2}j_{2}}\Gamma _{l_{2}}\Gamma _{j_{2}}\left\{
C_{l_{2}0j_{2}0}^{l0}\right\} ^{2}}.
\end{eqnarray*}%
The last expression is less than
\begin{equation}%
\frac{ \sum_{l_{1}l_{2}}\Gamma _{l_{1}}\Gamma _{l_{2}}\left\{
C_{l_{1}0l_{2}0}^{l0}\right\} ^{2}}{8[\pi (2l+1)]^{2}}
\sqrt{\sum_{l_{1}j_{1}}\frac{\Gamma _{l_{1}}\Gamma
_{j_{1}}}{\sqrt[5]{j_{1}^{2}}}\left\{
C_{l_{1}0j_{1}0}^{l0}\right\} ^{2}\sum_{l_{1}j_{2}}\Gamma
_{l_{1}}\Gamma _{j_{2}}\left\{ C_{l_{1}0j_{2}0}^{l0}\right\}
^{2}}. \label{urca}
\end{equation}%
Now, for any fixed integer $j^{\ast }\geq 1$,
\begin{eqnarray*}
\sum_{l_{1}j_{1}}\frac{\Gamma _{l_{1}}\Gamma _{j_{1}}}{\sqrt[5]{j_{1}^{2}}}%
\left\{ C_{l_{1}0j_{1}0}^{l0}\right\} ^{2} &\leq &j^{\ast }\max_{j_{1}\leq
j^{\ast }}\left[ \sum_{l_{1}\geq 0}\Gamma _{l_{1}}\Gamma _{j_{1}}\left\{
C_{l_{1}0j_{1}0}^{l0}\right\} ^{2}\right] \\
&&+\frac{1}{\sqrt[5]{(j^{\ast })^{2}}}\sum_{l_{1},j_{1}\geq 0}\Gamma
_{l_{1}}\Gamma _{j_{1}}\left\{ C_{l_{1}0j_{1}0}^{l0}\right\} ^{2}.
\end{eqnarray*}%
It follows that%
\begin{eqnarray*}
\frac{(\ref{num2})}{(\ref{den2})} &\leq
&\frac{(\ref{urca})}{\left\{ \sum_{l_{1},l_{2}=0}^{\infty }\Gamma
_{l_{1}}\Gamma
_{l_{2}}(C_{l_{1}0l_{2}0}^{l0})^{2}\right\} ^{2}}\!=\!
\sqrt{\frac{\sum_{l_{1}j_{1}}\frac{\Gamma _{l_{1}}\Gamma _{j_{1}}}{\sqrt[5%
]{j_{1}^{2}}}\left\{ C_{l_{1}0j_{1}0}^{l0}\right\} ^{2}}{%
\sum_{l_{1},l_{2}=0}^{\infty }\Gamma _{l_{1}}\Gamma
_{l_{2}}(C_{l_{1}0l_{2}0}^{l0})^{2}}} \\
&\leq& 2\sqrt{\frac{j^{\ast }\max_{j_{1}\leq j^{\ast }}\left[
\sum_{l_{1}\geq 1}\Gamma _{l_{1}}\Gamma
_{j_{1}}\left\{ C_{l_{1}0j_{1}0}^{l0}\right\} ^{2}\right] }{%
\sum_{l_{1},l_{2}=0}^{\infty }\Gamma _{l_{1}}\Gamma
_{l_{2}}(C_{l_{1}0l_{2}0}^{l0})^{2}}+\frac{1}{\sqrt[5]{(j^{\ast })^{2}}}}%
\text{ .}
\end{eqnarray*}%
Now fix $\varepsilon >0$. Under (\ref{sufcon2}) we have that, for
any fixed integer $j^{\ast }>1/\varepsilon $,
\begin{eqnarray*}
&&\lim_{l\rightarrow \infty }\left[ \frac{j^{\ast }\max_{j_{1}\leq j^{\ast }}%
\left[ \sum_{l_{1}\geq 1}\Gamma _{l_{1}}\Gamma _{j_{1}}\left\{
C_{l_{1}0j_{1}0}^{l0}\right\} ^{2}\right] }{\sum_{l_{1},l_{2}=1}^{\infty
}\Gamma _{l_{1}}\Gamma _{l_{2}}(C_{l_{1}0l_{2}0}^{l0})^{2}}+\frac{1}{\sqrt[5]%
{(j^{\ast })^{2}}}\right] \\
&\leq &j^{\ast }\lim_{l\rightarrow \infty }\sup_{l_{1}}\frac{%
\sum_{l_{2}=1}^{\infty }\Gamma _{l_{1}}\Gamma _{l_{2}}\left\{
C_{l_{1}0l_{2}0}^{l0}\right\} ^{2}}{\sum_{l_{1},l_{2}=1}^{\infty }\Gamma
_{l_{1}}\Gamma _{l_{2}}(C_{l_{1}0l_{2}0}^{l0})^{2}}+\sqrt[5]{\varepsilon ^{2}%
}=\sqrt[5]{\varepsilon ^{2}}\text{ .}
\end{eqnarray*}%
Because $\varepsilon $ is arbitrary, the proof is concluded.
\end{proof}
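Condition (\ref{sufcon2}) can also be explored numerically. The sketch below is our own illustration (the helper names and the spectrum $\Gamma _{l}=(1+l)^{-1}$ are arbitrary choices, not from the text): it evaluates $(C_{l_{1}0l_{2}0}^{l0})^{2}$ through the standard closed form for Clebsch-Gordan coefficients with vanishing magnetic quantum numbers, checks the unitarity relation $\sum_{L}(C_{l_{1}0l_{2}0}^{L0})^{2}=1$, and estimates $\sup_{l_{1}}\mathbb{P}\left\{ Z_{1}=l_{1}\mid Z_{2}=l\right\}$ with all sums truncated at $3l$.

```python
import math

def cg0_sq(L: int, l1: int, l2: int) -> float:
    """(C^{L0}_{l1 0, l2 0})^2 from the standard closed form for zero
    magnetic quantum numbers; 0 if parity or triangle conditions fail."""
    J = l1 + l2 + L
    if J % 2 != 0 or not (abs(l1 - l2) <= L <= l1 + l2):
        return 0.0
    g = J // 2
    log_val = (math.log(2 * L + 1)
               + 2.0 * (math.lgamma(g + 1) - math.lgamma(g - l1 + 1)
                        - math.lgamma(g - l2 + 1) - math.lgamma(g - L + 1))
               + math.lgamma(2 * g - 2 * l1 + 1)
               + math.lgamma(2 * g - 2 * l2 + 1)
               + math.lgamma(2 * g - 2 * L + 1)
               - math.lgamma(2 * g + 2))
    return math.exp(log_val)

# unitarity of the coupling: summing over the coupled momentum L gives 1
norm = sum(cg0_sq(L, 7, 4) for L in range(3, 12))

def sup_cond_prob(l, gamma, lmax):
    """sup_{l1} P{Z_1 = l1 | Z_2 = l} for the hypergroup walk with
    spectrum gamma(.), the sums truncated at lmax."""
    num = [gamma(l1) * sum(gamma(l2) * cg0_sq(l, l1, l2)
                           for l2 in range(abs(l - l1), min(l + l1, lmax) + 1))
           for l1 in range(lmax + 1)]
    return max(num) / sum(num)

gamma = lambda l: 1.0 / (1.0 + l)
r4 = sup_cond_prob(4, gamma, 12)
r32 = sup_cond_prob(32, gamma, 96)
print(norm, r4, r32)
```

For this slowly decaying spectrum the supremum decreases as $l$ grows, in the direction required by (\ref{sufcon2}); for rapidly decaying (summable) spectra one expects the ratio not to vanish, the conditioned walk concentrating on a few privileged values of $l_{1}$.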
\textbf{Remark. }Note that, using (\ref{cgconv}) and (\ref{cgconv1}),
condition (\ref{sufcon2}) becomes%
\begin{equation}
\lim_{l\rightarrow \infty }\sup_{\lambda }\frac{\Gamma _{\lambda }\widehat{%
\Gamma }_{2,l;\lambda }^{\ast }}{\sum_{l_{1}}\Gamma _{l_{1}}\widehat{\Gamma }%
_{2,l;l_{1}}}=0\text{ .} \label{rwhq21}
\end{equation}%
Note also that if, in the convolutions (\ref{cgconv}), one replaces each
squared Clebsch-Gordan coefficient $\left( C_{l_{1}0l_{2}0}^{l0}\right) ^{2}$
by the indicator $\mathbf{1}_{l_{1}+l_{2}=l}$ and extends the sums over $%
\mathbb{Z}$, one obtains the relation
\begin{equation}
\lim_{l\rightarrow \infty }\sup_{l_{1}}\frac{\Gamma _{l_{1}}\Gamma _{l-l_{1}}%
}{\sum_{l_{1}}\Gamma _{l_{1}}\Gamma _{l-l_{1}}}=0\text{.} \label{rwhq2}
\end{equation}%
In particular, when $\left\{ \Gamma _{l}\right\} =\{ \Gamma
_{l}^{V}\} $ (the power spectrum of the field $V$ on $\mathbb{T}$
given in (\ref{clll})) it is not difficult to show that formula (\ref{rwhq2}%
) gives exactly the asymptotic (necessary and sufficient) condition (\ref%
{rwAbel}).
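The torus condition (\ref{rwhq2}) admits a simple numerical illustration (our own example, with arbitrarily chosen step laws, not part of the argument): for a light-tailed step distribution the conditioned walk spreads its mass over roughly $l$ sites and the ratio vanishes, whereas a heavy polynomial tail concentrates the bridge on a single \textquotedblleft big jump\textquotedblright\ and the ratio stays bounded away from zero.

```python
import math

def sup_bridge_prob(p, l, K=2000):
    """sup_k p(k) p(l-k) / (p*p)(l): the Abelian analogue of the
    'no privileged path' condition, truncating the lattice at |k| <= K."""
    terms = [p(k) * p(l - k) for k in range(-K, K + 1)]
    return max(terms) / sum(terms)

# geometric tail: the bridge mass spreads over ~l sites, ratio -> 0
c_geo = (1.0 - 0.5) / (1.0 + 0.5)          # normalizes sum_k c*0.5^|k| = 1
geo = lambda k: c_geo * 0.5 ** abs(k)

# heavy (quadratic) tail: a single big jump dominates the convolution
c_pol = 1.0 / sum((1 + abs(k)) ** -2.0 for k in range(-10**5, 10**5 + 1))
pol = lambda k: c_pol * (1 + abs(k)) ** -2.0

geo_ratio = sup_bridge_prob(geo, 200)
pol_ratio = sup_bridge_prob(pol, 200)
print(geo_ratio, pol_ratio)
```

In the geometric case every path $0\rightarrow k\rightarrow l$ with $0\leq k\leq l$ has the same weight, so the supremum behaves like $1/(l+1)$; in the heavy-tailed case the two paths with one big jump carry a non-vanishing fraction of the bridge mass.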
\subsection{The case $q=3$\label{SS : Q3}}
Our results for $q=3$ closely mirror the conditions derived in the
previous subsection.
\begin{proposition}
\label{lemmaq3} A sufficient condition for the CLT (\ref{as00}) when $q=3$
is
\begin{eqnarray}
\lim_{l\rightarrow \infty }\sup_{L_{1}}\frac{\sum_{l_{1}l_{2}j_{1}}\Gamma
_{l_{1}}\Gamma _{l_{2}}\Gamma _{j_{1}}\left\{
C_{l_{1}0l_{2}0j_{1}0}^{L_{1}l0}\right\} ^{2}}{\sum_{L_{1}}%
\sum_{l_{1},l_{2},l_{3}}\Gamma _{l_{1}}\Gamma _{l_{2}}\Gamma
_{l_{3}}\left\{ C_{l_{1}0l_{2}0l_{3}0}^{L_{1}l0}\right\} ^{2}}
&=&0,\text{
and} \label{condq31} \\
\lim_{l\rightarrow \infty }\sup_{j_{1}}\frac{\sum_{l_{1}l_{2}L_{1}}\Gamma
_{l_{1}}\Gamma _{l_{2}}\Gamma _{j_{1}}\left\{
C_{l_{1}0l_{2}0j_{1}0}^{L_{1}l0}\right\} ^{2}}{\sum_{L_{1}}%
\sum_{l_{1},l_{2},l_{3}}\Gamma _{l_{1}}\Gamma _{l_{2}}\Gamma
_{l_{3}}\left\{ C_{l_{1}0l_{2}0l_{3}0}^{L_{1}l0}\right\} ^{2}}
&=&0\text{ .} \label{condq32}
\end{eqnarray}
\end{proposition}
\textbf{Remark. }In the light of (\ref{cgconv})-(\ref{cgconv1}) and of the
definition of the random walk $Z$ given in (\ref{rw1}) and (\ref{rw2}), it
is not difficult to see that (\ref{condq31}) can be rewritten as%
\begin{align}
\lim_{l\rightarrow \infty }\sup_{\lambda }\frac{\widehat{\Gamma }_{2,\lambda
}\sum_{j_{1}}\Gamma _{j_{1}}\left\{ C_{\lambda j_{1}0}^{l0}\right\} ^{2}}{%
\widehat{\Gamma }_{3,l}}& =\!\lim_{l\rightarrow \infty }\sup_{\lambda }\frac{%
\widehat{\Gamma }_{2,\lambda }\widehat{\Gamma }_{2,l;\lambda }^{\ast }}{%
\sum_{L_{1}}\left[ \widehat{\Gamma }_{2,L_{1}}\widehat{\Gamma }%
_{1,l;L_{1}}^{\ast }\right] } \label{convq31} \\
& =\!\lim_{l\rightarrow \infty }\sup_{\lambda }\mathbb{P}\left[
Z_{2}=\lambda \mid Z_{3}=l\right] =0\text{.} \notag
\end{align}%
Likewise, one obtains that (\ref{condq32}) is equivalent to
\begin{eqnarray}
&&\lim_{l\rightarrow \infty }\sup_{j_{1}}\frac{\Gamma _{j_{1}}\widehat{%
\Gamma }_{3,l;j_{1}}^{\ast }}{\sum_{L_{1}}\sum_{l_{1},l_{2},l_{3}}\Gamma
_{l_{1}}\Gamma _{l_{2}}\Gamma _{l_{3}}\left\{
C_{l_{1}0l_{2}0l_{3}0}^{L_{1}l0}\right\} ^{2}} \label{convq32a} \\
&=&\lim_{l\rightarrow \infty }\sup_{j_{1}}\mathbb{P}\left[ Z_{1}=j_{1}\mid
Z_{3}=l\right] =0\text{ .} \notag
\end{eqnarray}%
It should be noted that the two conditions (\ref{convq31}) and (\ref%
{convq32a}) can be written compactly as
\begin{equation}
\lim_{l\rightarrow \infty }\max_{q=1,2}\sup_{j_{1}}\frac{\widehat{\Gamma }%
_{q,j_{1}}\widehat{\Gamma }_{3-q,l;j_{1}}^{\ast }}{\sum_{L_{1}}%
\sum_{l_{1},l_{2},l_{3}}\Gamma _{l_{1}}\Gamma _{l_{2}}\Gamma _{l_{3}}\left\{
C_{l_{1}0l_{2}0l_{3}0}^{L_{1}l0}\right\} ^{2}}=0\text{ .} \label{rwhq3}
\end{equation}%
Relation (\ref{rwhq3}) parallels once again analogous conditions
established for stationary fields on a torus -- see \cite{MaPe}.
\smallskip
\textbf{Proof of Proposition \ref{lemmaq3}. }In view of Part 3 of Theorem %
\ref{teo1}, we shall focus on the asymptotic negligibility of the ratio
appearing in (\ref{as11}), in the case where $q=3$ and $p=2.$ As before, the
denominator of (\ref{as11}) is proportional to%
\begin{eqnarray}
&&\left\{ \sum_{l_{1},l_{2},l_{3}}C_{l_{1}}C_{l_{2}}C_{l_{3}}\frac{1}{2l+1}%
\left\{ \prod_{i=1}^{3}(2l_{i}+1)\right\} \sum_{L_{1}}\left\{
C_{l_{1}0l_{2}0}^{L_{1}0}C_{L_{1}0l_{3}0}^{l0}\right\} ^{2}\right\} ^{2}
\label{den3} \\
&=&\frac{1}{(2l+1)^{2}}\left\{ \sum_{l_{1},l_{2},l_{3}}^{\infty }\Gamma
_{l_{1}}\Gamma _{l_{2}}\Gamma _{l_{3}}\sum_{L_{1}}\left\{
C_{l_{1}0l_{2}0l_{3}0}^{L_{1}l0}\right\} ^{2}\right\} ^{2}. \notag
\end{eqnarray}%
On the other hand, the numerator is proportional to%
\begin{eqnarray}
&&\frac{1}{(2l+1)^{2}}\sum_{j_{1},j_{2}}\sum_{n_{1},n_{2}}\Gamma
_{j_{1}}\Gamma _{j_{2}}\times \notag \\
&&\left\vert \sum_{l_{1},l_{2},m_{1},m_{2}}\Gamma _{l_{1}}\Gamma
_{l_{2}}%
\sum_{L_{1}}C_{l_{1}0l_{2}0j_{1}0}^{L_{1}l0}C_{l_{1}m_{1}l_{2}m_{2}j_{1}n_{1}}^{L_{1}lm}\sum_{L_{2}}C_{l_{1}0l_{2}0j_{2}0}^{L_{2}l0}C_{l_{1}m_{1}l_{2}m_{2}j_{2}n_{2}}^{L_{2}lm}\right\vert ^{2}
\notag \\
&=&\frac{1}{(2l+1)^{2}}\sum_{j_{1},j_{2}}\sum_{n_{1},n_{2}}\Gamma
_{j_{1}}\Gamma _{j_{2}}\times \notag \\
&&\left\vert \sum_{l_{1},l_{2},m_{1},m_{2}}\Gamma _{l_{1}}\Gamma
_{l_{2}}\sum_{L_{1}}C_{l_{1}0l_{2}0j_{1}0}^{L_{1}l0}%
\sum_{M_{1}}C_{l_{1}m_{1}l_{2}m_{2}}^{L_{1}M_{1}}C_{L_{1}M_{1}j_{1}n_{1}}^{lm}\right.
\notag \\
&&\text{ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \
\ \ }\left.
\sum_{L_{2}}C_{l_{1}0l_{2}0j_{2}0}^{L_{2}l0}%
\sum_{M_{2}}C_{l_{1}m_{1}l_{2}m_{2}}^{L_{2}M_{2}}C_{L_{2}M_{2}j_{2}n_{2}}^{lm}\right\vert ^{2}
\label{num3}
\end{eqnarray}%
This last expression equals in turn%
\begin{eqnarray}
&=&\frac{1}{(2l+1)^{2}}\sum_{j_{1},j_{2}}\sum_{n_{1},n_{2}}\Gamma
_{j_{1}}\Gamma _{j_{2}}\times \notag \\
&&\left\vert \sum_{l_{1},l_{2}}\Gamma _{l_{1}}\Gamma
_{l_{2}}\sum_{L_{1}}C_{l_{1}0l_{2}0j_{1}0}^{L_{1}l0}%
\sum_{M_{1}}C_{L_{1}M_{1}j_{1}n_{1}}^{lm}%
\sum_{L_{2}}C_{l_{1}0l_{2}0j_{2}0}^{L_{2}l0}%
\sum_{M_{2}}C_{L_{2}M_{2}j_{2}n_{2}}^{lm}\delta _{L_{1}}^{L_{2}}\delta
_{M_{1}}^{M_{2}}\right\vert ^{2} \notag \\
&=&\frac{1}{(2l+1)^{2}}\sum_{j_{1},j_{2}}\sum_{n_{1},n_{2}}\Gamma
_{j_{1}}\Gamma _{j_{2}}\times \notag \\
&&\left\vert \sum_{l_{1},l_{2}=0}\Gamma _{l_{1}}\Gamma
_{l_{2}}%
\sum_{L_{1}}C_{l_{1}0l_{2}0j_{1}0}^{L_{1}l0}C_{l_{1}0l_{2}0j_{2}0}^{L_{1}l0}%
\sum_{M_{1}}C_{L_{1}M_{1}j_{1}n_{1}}^{lm}C_{L_{1}M_{1}j_{2}n_{2}}^{lm}\right%
\vert ^{2} \notag
\end{eqnarray}%
\begin{eqnarray*}
&=&\frac{1}{(2l+1)^{2}}\sum_{j_{1},j_{2}}\sum_{n_{1},n_{2}}\Gamma
_{j_{1}}\Gamma _{j_{2}}\times \\
&&\left\vert \sum_{l_{1},l_{2}}\Gamma _{l_{1}}\Gamma
_{l_{2}}\sum_{L_{1}}\!C_{l_{1}0l_{2}0j_{1}0}^{L_{1}l0}\!\sum_{M_{1}}%
\!C_{L_{1}M_{1}j_{1}n_{1}}^{lm}\!\sum_{L_{2}}%
\!C_{l_{1}0l_{2}0j_{2}0}^{L_{2}l0}\!\sum_{M_{2}}%
\!C_{L_{2}M_{2}j_{2}n_{2}}^{lm}\delta _{L_{1}}^{L_{2}}\delta
_{M_{1}}^{M_{2}}\!\right\vert ^{2}
\end{eqnarray*}%
and we can use the same argument as for $q=2.$ More precisely, one can write%
\begin{eqnarray}
&&\left\vert \sum_{l_{1},l_{2}=1}\Gamma _{l_{1}}\Gamma
_{l_{2}}%
\sum_{L_{1}}C_{l_{1}0l_{2}0j_{1}0}^{L_{1}l0}C_{l_{1}0l_{2}0j_{2}0}^{L_{1}l0}%
\sum_{M_{1}}C_{L_{1}M_{1}j_{1}n_{1}}^{lm}C_{L_{1}M_{1}j_{2}n_{2}}^{lm}\right%
\vert ^{2} \notag \\
&=&\sum_{l_{1}...l_{4}}\Gamma _{l_{1}}...\Gamma
_{l_{4}}%
\sum_{L_{1}L_{2}}C_{l_{1}0l_{2}0j_{1}0}^{L_{1}l0}C_{l_{1}0l_{2}0j_{2}0}^{L_{1}l0}C_{l_{3}0l_{4}0j_{1}0}^{L_{2}l0}C_{l_{3}0l_{4}0j_{2}0}^{L_{2}l0}
\notag \\
&&%
\sum_{M_{1}M_{2}}C_{L_{1}M_{1}j_{1}n_{1}}^{lm}C_{L_{1}M_{1}j_{2}n_{2}}^{lm}C_{L_{2}M_{2}j_{1}n_{1}}^{lm}C_{L_{2}M_{2}j_{2}n_{2}}^{lm}
\notag \\
&=&\sum_{l_{1}...l_{4}}\Gamma _{l_{1}}...\Gamma
_{l_{4}}%
\sum_{L_{1}L_{2}}C_{l_{1}0l_{2}0j_{1}0}^{L_{1}l0}C_{l_{1}0l_{2}0j_{2}0}^{L_{1}l0}C_{l_{3}0l_{4}0j_{1}0}^{L_{2}l0}C_{l_{3}0l_{4}0j_{2}0}^{L_{2}l0}
\notag \\
&&(-1)^{\zeta }\sum_{s\sigma }(2s\!+\!1)(2l\!+\!1)(C_{l0s\sigma
}^{l0})^{2}\!\left\{
\begin{array}{ccc}
L_{1} & j_{1} & l \\
l & s & L_{2}%
\end{array}%
\right\} \!\left\{
\begin{array}{ccc}
L_{1} & j_{2} & l \\
l & s & L_{2}%
\end{array}%
\right\} \label{so}
\end{eqnarray}%
where $\zeta =L_{1}+j_{1}+L_{2}+j_{2}$, and (\ref{so}) equals%
\begin{eqnarray*}
&=&\sum_{l_{1}...l_{4}}\Gamma _{l_{1}}...\Gamma
_{l_{4}}%
\sum_{L_{1}L_{2}}C_{l_{1}0l_{2}0j_{1}0}^{L_{1}l0}C_{l_{1}0l_{2}0j_{2}0}^{L_{1}l0}C_{l_{3}0l_{4}0j_{1}0}^{L_{2}l0}C_{l_{3}0l_{4}0j_{2}0}^{L_{2}l0}
\\
&&(-1)^{2l}\!\sum_{s}\!(2s\!+\!1)(2l\!+\!1)(C_{lms0}^{lm})^{2}\!\left\{
\begin{array}{ccc}
L_{1} & j_{1} & l \\
l & s & L_{2}%
\end{array}%
\right\} \!\left\{
\begin{array}{ccc}
L_{1} & j_{2} & l \\
l & s & L_{2}%
\end{array}%
\right\} .
\end{eqnarray*}%
From (\ref{lemma6j}) we now obtain that the last expression is bounded by%
\begin{eqnarray*}
&&\sum_{l_{1}...l_{4}}\Gamma _{l_{1}}...\Gamma
_{l_{4}}%
\sum_{L_{1}L_{2}}C_{l_{1}0l_{2}0j_{1}0}^{L_{1}l0}C_{l_{1}0l_{2}0j_{2}0}^{L_{1}l0}C_{l_{3}0l_{4}0j_{1}0}^{L_{2}l0}C_{l_{3}0l_{4}0j_{2}0}^{L_{2}l0}%
\frac{1}{\sqrt[5]{L_{1}}} \\
&&+\sum_{l_{1}...l_{4}}\Gamma _{l_{1}}...\Gamma
_{l_{4}}%
\sum_{L_{1}L_{2}}C_{l_{1}0l_{2}0j_{1}0}^{L_{1}l0}C_{l_{1}0l_{2}0j_{2}0}^{L_{1}l0}C_{l_{3}0l_{4}0j_{1}0}^{L_{2}l0}C_{l_{3}0l_{4}0j_{2}0}^{L_{2}l0}%
\frac{1}{\sqrt[5]{j_{1}}}\text{ ,}
\end{eqnarray*}%
whence all the terms are bounded by
\begin{eqnarray}
&&\sum_{j_{1}j_{2}}\sum_{l_{1}...l_{4}}\sum_{L_{1}L_{2}}\Gamma
_{j_{1}}\Gamma _{j_{2}}\Gamma _{l_{1}}...\Gamma
_{l_{4}}C_{l_{1}0l_{2}0}^{L_{1}0}C_{L_{1}0j_{1}0}^{l0}C_{l_{1}0l_{2}0}^{L_{1}0}\times
\label{q3bou1} \\
&&\text{ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }%
\times
C_{L_{1}0j_{2}0}^{l0}C_{l_{3}0l_{4}0}^{L_{2}0}C_{L_{2}0j_{1}0}^{l0}C_{l_{3}0l_{4}0}^{L_{2}0}C_{L_{2}0j_{2}0}^{l0}%
\frac{1}{\sqrt[5]{L_{1}}} \notag \\
&&+\sum_{j_{1}j_{2}}\sum_{l_{1}...l_{4}}\sum_{L_{1}L_{2}}\Gamma
_{j_{1}}\Gamma _{j_{2}}\Gamma _{l_{1}}...\Gamma
_{l_{4}}C_{l_{1}0l_{2}0}^{L_{1}0}C_{L_{1}0j_{1}0}^{l0}C_{l_{1}0l_{2}0}^{L_{1}0}\times
\label{q3bou2} \\
&&\text{ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }%
\times
C_{L_{1}0j_{2}0}^{l0}C_{l_{3}0l_{4}0}^{L_{2}0}C_{L_{2}0j_{1}0}^{l0}C_{l_{3}0l_{4}0}^{L_{2}0}C_{L_{2}0j_{2}0}^{l0}%
\frac{1}{\sqrt[5]{j_{1}}}\text{ .} \notag
\end{eqnarray}%
Also,
\begin{eqnarray*}
&&\sum_{j_{1}j_{2}}\sum_{l_{1}...l_{4}}\Gamma _{j_{1}}\Gamma _{j_{2}}\Gamma
_{l_{1}}...\Gamma _{l_{4}}\sum_{L_{1}L_{2}}\!\frac{%
C_{l_{1}0l_{2}0j_{1}0}^{L_{1}l0}C_{l_{1}0l_{2}0j_{2}0}^{L_{1}l0}C_{l_{3}0l_{4}0j_{1}0}^{L_{2}l0}C_{l_{3}0l_{4}0j_{2}0}^{L_{2}l0}%
}{\sqrt[5]{L_{1}}} \\
&=&\sum_{\substack{ j_{1}j_{2} \\ l_{1}...l_{4}}}\Gamma _{j_{1}}\Gamma
_{j_{2}}\Gamma _{l_{1}}...\Gamma _{l_{4}}\sum_{L_{1}...L_{4}}\!\frac{%
C_{l_{1}0l_{2}0j_{1}0}^{L_{1}l0}C_{l_{1}0l_{2}0j_{2}0}^{L_{3}l0}C_{l_{3}0l_{4}0j_{1}0}^{L_{2}l0}C_{l_{3}0l_{4}0j_{2}0}^{L_{4}l0}%
}{\sqrt[5]{L_{1}}}\delta _{L_{1}}^{L_{3}}\delta _{L_{2}}^{L_{4}} \\
&\leq &\sqrt{\sum_{l_{1}l_{2}j_{1}L_{1}}\Gamma _{l_{1}}\Gamma _{l_{2}}\Gamma
_{j_{1}}\frac{\left\{ C_{l_{1}0l_{2}0j_{1}0}^{L_{1}l0}\right\} ^{2}}{%
L_{1}^{2/5}}}\sqrt{\sum_{l_{1}l_{2}j_{2}L_{1}}\Gamma _{l_{1}}\Gamma
_{l_{2}}\Gamma _{j_{2}}\left\{ C_{l_{1}0l_{2}0j_{2}0}^{L_{1}l0}\right\} ^{2}}%
\times \\
&&\sqrt{\sum_{l_{3}l_{4}j_{1}L_{2}}\Gamma _{l_{3}}\Gamma _{l_{4}}\Gamma
_{j_{1}}\left\{ C_{l_{3}0l_{4}0j_{1}0}^{L_{2}l0}\right\} ^{2}}\sqrt{%
\sum_{l_{3}l_{4}j_{2}L_{2}}\Gamma _{l_{3}}\Gamma _{l_{4}}\Gamma
_{j_{2}}\left\{ C_{l_{3}0l_{4}0j_{2}0}^{L_{2}l0}\right\} ^{2}}
\end{eqnarray*}\\[-30pt]
\begin{equation}
\leq \sqrt{\sum_{l_{1}l_{2}j_{1}L_{1}}\!\Gamma _{l_{1}}\Gamma _{l_{2}}\Gamma
_{j_{1}}\!\frac{\left\{ C_{l_{1}0l_{2}0j_{1}0}^{L_{1}l0}\right\} ^{2}}{%
L_{1}^{2/5}}}\!\left\{ \sum_{l_{1}l_{2}j_{2}L_{1}}\!\Gamma _{l_{1}}\Gamma
_{l_{2}}\Gamma _{j_{2}}\!\left\{ C_{l_{1}0l_{2}0j_{2}0}^{L_{1}l0}\right\}
^{2}\!\right\} ^{3/2}\text{.} \label{socc}
\end{equation}%
To sum up, we have obtained%
\begin{eqnarray}
\frac{(\ref{q3bou1})}{(\ref{den3})} &\leq &\frac{\frac{1}{(2l+1)^{2}}%
\times (\ref{socc})}{\frac{1}{ (2l+1)^{2}}\left\{
6\sum_{l_{1},l_{2},l_{3}}\Gamma _{l_{1}}\Gamma _{l_{2}}\Gamma
_{l_{3}}\sum_{L_{1}}\left\{
C_{l_{1}0l_{2}0l_{3}0}^{L_{1}l0}\right\}
^{2}\right\} ^{2}} \notag \\
&\leq &\left[ \frac{\sum_{l_{1}l_{2}j_{1}L_{1}}\Gamma
_{l_{1}}\Gamma _{l_{2}}\Gamma _{j_{1}}\left\{
C_{l_{1}0l_{2}0j_{1}0}^{L_{1}l0}\right\}
^{2}/L_{1}^{2/5}}{6\sum_{l_{1},l_{2},l_{3}}\Gamma _{l_{1}}\Gamma
_{l_{2}}\Gamma _{l_{3}}\sum_{L_{1}}\left\{
C_{l_{1}0l_{2}0l_{3}0}^{L_{1}l0}\right\} ^{2}}\right] ^{1/2}
\text{.} \label{joe}
\end{eqnarray}%
By an identical argument, we obtain also
\begin{equation}
\frac{(\ref{q3bou2})}{(\ref{den3})}\leq \left[ \frac{%
\sum_{l_{1}l_{2}j_{1}L_{1}}\Gamma _{l_{1}}\Gamma _{l_{2}}\Gamma
_{j_{1}}\left\{ C_{l_{1}0l_{2}0j_{1}0}^{L_{1}l0}\right\} ^{2}/j_{1}^{2/5}}{%
6\sum_{l_{1},l_{2},l_{3}}\Gamma _{l_{1}}\Gamma _{l_{2}}\Gamma
_{l_{3}}\sum_{L_{1}}\left\{ C_{l_{1}0l_{2}0l_{3}0}^{L_{1}l0}\right\} ^{2}}%
\right] ^{1/2}. \label{zzzz}
\end{equation}%
Now we can adopt exactly the same line of reasoning as in the proof of
Proposition \ref{lemmaq2}, so that by trivial manipulations we deduce that (%
\ref{condq31}) and (\ref{condq32}) are indeed sufficient to have
that the right-hand sides of (\ref{joe}) and (\ref{zzzz}) converge to zero as
$l\rightarrow +\infty $. \hfill $\square $
\subsection{The case of a general $q$: results and conjectures\label{SS :
CONJQ}}
The following proposition gives a general version of the results proved in
Sections \ref{SS : Q2} and \ref{SS : Q3}. The proof (omitted) is rather
long, and can be obtained along the lines of those of Proposition \ref%
{lemmaq2}\ and Proposition \ref{lemmaq3}.
\begin{proposition}
\label{P : qq-1}Fix $q\geq 4$. Then, a sufficient condition to have the
asymptotic relation (\ref{as11}) in the case $p=q-1$ is the following:%
\begin{eqnarray}
&&\lim_{l\rightarrow \infty }\left\{ \sup_{\lambda }\frac{\widehat{\Gamma }%
_{q-1,\lambda }\widehat{\Gamma }_{2,l;\lambda }^{\ast }}{\sum_{L}\widehat{%
\Gamma }_{q-1,L}\widehat{\Gamma }_{1,l;L}^{\ast }}+\sup_{\lambda }\frac{%
\widehat{\Gamma }_{q,l;\lambda }^{\ast }\Gamma _{\lambda }}{\sum_{L}\widehat{%
\Gamma }_{q-1,L}\widehat{\Gamma }_{1,l;L}^{\ast }}\right\} \notag \\
&=&\lim_{l\rightarrow \infty }\left\{ \sup_{\lambda }\frac{\widehat{\Gamma }%
_{q-1,\lambda }\widehat{\Gamma }_{2,l;\lambda }^{\ast }}{\widehat{\Gamma }%
_{q,l}}+\sup_{\lambda }\frac{\widehat{\Gamma }_{q,l;\lambda }^{\ast }\Gamma
_{\lambda }}{\widehat{\Gamma }_{q,l}}\right\} =0\text{.} \label{qq-1}
\end{eqnarray}
\end{proposition}
\textbf{Remarks. }(\textbf{1}) As in the proofs of Proposition \ref{lemmaq2}%
\ and Proposition \ref{lemmaq3}, a crucial technique in proving Proposition %
\ref{P : qq-1} consists in the simplification of sums of the type%
\begin{equation}
\sum_{\substack{ m_{1}m_{2}m_{3} \\ M_{1}...M_{4}}}%
C_{l_{1}m_{1}l_{2}m_{2}}^{L_{1}M_{1}}C_{L_{1}M_{1}l_{3}m_{3}}^{L_{2}M_{2}}C_{L_{2}M_{2}j_{1}n_{1}}^{lm}C_{l_{1}m_{1}l_{2}m_{2}}^{L_{3}M_{3}}C_{L_{3}M_{3}l_{3}m_{3}}^{L_{4}M_{4}}C_{L_{4}M_{4}j_{2}n_{2}}^{lm}%
\text{,} \label{goo}
\end{equation}%
by means of the general relation%
\begin{equation}
\sum_{m_{1}m_{2}}C_{l_{1}m_{1}l_{2}m_{2}}^{L_{1}M_{1}}C_{l_{1}m_{1}l_{2}m_{2}}^{L_{3}M_{3}}=\delta _{L_{1}}^{L_{3}}\delta _{M_{1}}^{M_{3}}.
\label{fff}
\end{equation}%
This basically means that, if in (\ref{goo}) each Clebsch-Gordan coefficient
is represented as the vertex of a connected graph, then it is possible to
\textquotedblleft reduce\textquotedblright\ such graph by cutting edges
corresponding to 2-loops -- see \cite{Marinucci} for a more detailed
discussion on these graphical methods.
(\textbf{2}) Note that, since $q\geq 4$ and according to Part 3 of Theorem %
\ref{teo1}, condition (\ref{as11}) \textsl{is only necessary} to have the
CLT (\ref{as00}), so that (\ref{qq-1}) cannot be used to deduce the
asymptotic Gaussianity of the frequency components of Hermite-subordinated
fields of the type $H_{q}\left[ T\right] $. Some conjectures concerning the
case $q\geq 4$, $p\neq q-1$ are presented at the end of the section.
(\textbf{3}) Observe that, in terms of the random walk $\left\{
Z_{n}\right\} $ defined in (\ref{rw1})-(\ref{rw2}),%
\begin{equation*}
\frac{\widehat{\Gamma }_{q-1,\lambda }\widehat{\Gamma
}_{2,l;\lambda }^{\ast }}{\widehat{\Gamma
}_{q,l}}=\mathbb{P}\left\{ Z_{q-1}=\lambda \mid Z_{q}=l\right\}
\text{ \ and \ }\frac{\widehat{\Gamma }_{q,l;\lambda }^{\ast
}\Gamma _{\lambda }}{\widehat{\Gamma }_{q,l}}=\mathbb{P}\left\{
Z_{1}=\lambda \mid Z_{q}=l\right\} .
\end{equation*}
\smallskip
As mentioned before, the relation (\ref{as11}) (which implies (\ref{as00})),
in the general case where $q\geq 4$ and $p\neq q-1$, is still being
investigated, as it requires a hard analysis of higher order Clebsch-Gordan
coefficients by means of graphical techniques (see for instance \cite[Ch. 11]%
{VMK}). At this stage, it is however natural to propose the following
conjecture. Recall that we focus on the CLT (\ref{as00}) because of the
equality in law $T_{l}^{\left( q\right) }\left( x\right) =\sqrt{\frac{2l+1}{%
4\pi }}a_{l0;q}$, and Corollary \ref{C : PunctualCLT}.
\textbf{Conjecture A }(\textit{Weak})\textbf{\ }\textsl{A sufficient
condition for the CLT (\ref{as00}) is}%
\begin{eqnarray}
&&\lim_{l\rightarrow \infty }\max_{1\leq p\leq q-1}\sup_{\lambda }\frac{%
\widehat{\Gamma }_{p,\lambda }\widehat{\Gamma }_{q+1-p,l;\lambda }^{\ast }}{%
\sum_{L}\widehat{\Gamma }_{p,L}\widehat{\Gamma }_{q+1-p,l;L}^{\ast }}
\label{rwhqw} \\
&=&\lim_{l\rightarrow \infty }\max_{1\leq p\leq q-1}\sup_{\lambda }\mathbb{P}%
\left\{ Z_{p}=\lambda \mid Z_{q}=l\right\} =0\text{ .} \notag
\end{eqnarray}
It is worth emphasizing how condition (\ref{rwhqw}) is the exact analogous
of the necessary and sufficient condition (\ref{rwAbel}), established in
\cite{MaPe} for the high-frequency CLT\ on the torus $\mathbb{T}=[0,2\pi )$.
This remarkable circumstance may suggest the following (much more general
and, for the time being, quite imprecise) extension.
\textbf{Conjecture B }(\textit{Strong})\textbf{\ }\textsl{Let }$T$ \textsl{%
be an isotropic Gaussian field defined on the homogeneous space of a}
\textsl{compact group }$G$, \textsl{and set }$T^{\left( q\right)
}=H_{q}\left( T\right) $ ($q\geq 2$). \textsl{Then, the high-frequency}
\textsl{components of }$T^{\left( q\right) }$\textsl{\ are asymptotically
Gaussian if, and only if, it holds a condition of the type}%
\begin{equation}
\lim_{l\rightarrow l_{0}}\max_{1\leq p\leq q-1}\sup_{\lambda \in \widehat{G}}%
\frac{\widehat{\Gamma }_{p,\lambda }^{\ast }\widehat{\Gamma }%
_{q+1-p,l;\lambda }}{\sum_{L\in \widehat{G}}\widehat{\Gamma }_{p,L}^{\ast }%
\widehat{\Gamma }_{q+1-p,l;L}}=0\text{ ,} \label{rwhqs}
\end{equation}%
\textsl{where }$\widehat{G}$ \textsl{is the dual of} $G,$ $l_{0}$ \textsl{is
some point at the boundary of} $\widehat{G}$\textsl{, and the convolutions }$%
\widehat{\Gamma }$\textsl{\ and }$\widehat{\Gamma }^{\ast }$\textsl{\ are
defined (analogously to (\ref{cgconv-2})-(\ref{starconv})) on the power
spectrum of }$T$,\textsl{\ by means of the appropriate Clebsch-Gordan
coefficients of the group.}
We leave the two Conjectures A and B as open issues for future research.
\textbf{Remark. }(\textit{On} \textquotedblleft \textit{no privileged path}%
\textquotedblright \textit{\ conditions}) In terms of $Z$, condition (\ref%
{rwhqw}) can be further interpreted as follows: for every $l$, define a
\textquotedblleft bridge\textquotedblright\ of length $q$, by conditioning $%
Z $ to equal $l$ at time $q$. Then, (\ref{rwhqw}) is verified if, and only
if, the probability that the bridge hits $\lambda $ at time $p$ converges to
zero, uniformly in $\lambda $, as $l\rightarrow +\infty $. It is also
evident that, when (\ref{rwhqw}) is verified for every $p=1,...,q-1$, one
also has that
\begin{equation}
\lim_{l\rightarrow +\infty }\sup_{\lambda _{1},...,\lambda _{q-1}\in \mathbb{%
N}}\mathbb{P}\left[ Z_{1}=\lambda _{1},...,Z_{q-1}=\lambda _{q-1}\mid Z_{q}=l%
\right] =0\text{,} \label{rwint}
\end{equation}%
meaning that, asymptotically, the law of $Z$ does not charge any
\textquotedblleft privileged path\textquotedblright\ of length $q$\ leading
to $l$. The interpretation of condition (\ref{rwint}) in terms of bridges
can be reinforced by putting by convention $Z_{0}=0$, so that the
probability in (\ref{rwint}) is that of the particular path $0\rightarrow
\lambda _{1}\rightarrow ...\rightarrow \lambda _{q-1}\rightarrow l$,
associated with a random bridge linking $0$ and $l$.
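To make the bridge picture concrete, the conditional law $\mathbb{P}\left\{ Z_{1}=\lambda \mid Z_{2}=l\right\} $ can be evaluated numerically for $q=2$. The sketch below is purely illustrative: it assumes a truncated toy spectrum $\Gamma _{k}\propto (k+1)e^{-k}$, takes the one-step coupling probabilities to be the squared Clebsch-Gordan coefficients $(C_{\lambda 0l_{2}0}^{l0})^{2}$ (computed from their standard closed form for zero projections), and is not part of the formal development.

```python
from math import exp, factorial

def cg0_sq(l1, l2, l):
    """(C^{l0}_{l1 0 l2 0})^2, closed form for zero magnetic quantum numbers."""
    if l < abs(l1 - l2) or l > l1 + l2 or (l1 + l2 + l) % 2:
        return 0.0  # triangle and parity conditions
    J, g = l1 + l2 + l, (l1 + l2 + l) // 2
    w3j_sq = (factorial(J - 2*l1) * factorial(J - 2*l2) * factorial(J - 2*l)
              / factorial(J + 1)) * (factorial(g) / (factorial(g - l1)
              * factorial(g - l2) * factorial(g - l)))**2
    return (2*l + 1) * w3j_sq

Lmax = 15                                     # truncation of the toy spectrum
raw = [(k + 1) * exp(-k) for k in range(Lmax + 1)]
p = [g / sum(raw) for g in raw]               # mixing probabilities Γ_k / Γ_*

def joint(lam, l):
    """P{Z_1 = λ, Z_2 = l} for the length-2 walk."""
    return p[lam] * sum(p[l2] * cg0_sq(lam, l2, l) for l2 in range(Lmax + 1))

for l in (4, 8, 12):
    marg = sum(joint(lam, l) for lam in range(Lmax + 1))
    bridge = [joint(lam, l) / marg for lam in range(Lmax + 1)]
    assert abs(sum(bridge) - 1.0) < 1e-9      # the bridge law is normalised
    print(l, max(bridge))                     # sup_λ P{Z_1 = λ | Z_2 = l}
```

Printing $\sup_{\lambda }$ of the bridge law for increasing $l$ gives a quick empirical feel, in this toy setting, for what condition (\ref{rwhqw}) requires.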
\section{Further physical interpretation of the convolutions and connection
with other random walks on hypergroups\label{S : Roynette}}
\subsection{Convolutions as mixed states}
We recall that, in quantum mechanics, it is customary to consider two
possible types of initial states for a particle: the so-called
\textsl{pure states},\emph{\ }where the state of the particle is completely
specified, and the so-called \textsl{mixed states},\emph{\ }where the
state of the particle is given by a mixture (in the usual probabilistic
sense) over different quantum states. We refer the reader to \cite{Libo} for
an introduction to these ideas. From this standpoint, the quantity $\widehat{%
\Gamma }_{q,l}$ defined in (\ref{cgconv}) is the probability associated with a
mixed state, where the mixing is performed over all possible values of the
total angular momentum. To illustrate this point, we use the standard
bra-ket notation $\left| l0\right\rangle $ to indicate the state of a
particle having total angular momentum equal to $l$ and projection $0$ on
the $z$-axis. By using this formalism, the quantity $\widehat{\Gamma }_{q,l}$
can be obtained as follows:
\begin{description}
\item[(i)] consider a system of $q$ particles $\alpha _{1},...,\alpha _{q}$
such that each $\alpha _{j}$ is in the mixed state $\Xi $ according to which
a particle is in the state $\left\vert k0\right\rangle $ with probability $%
\Gamma _{k}/\Gamma _{\ast }$ ($k\geq 0$);
\item[(ii)] obtain $\widehat{\Gamma }_{q,l}$ as the probability that the
elements of this system are coupled pairwise to form a particle in the state
$\left\vert l0\right\rangle $.
\end{description}
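The recipe (i)-(ii) can be checked numerically for $q=2$: mixing over the states $\left\vert k0\right\rangle $ and coupling pairwise yields a genuine probability distribution over $l$, by the completeness relation $\sum_{l}(C_{l_{1}0l_{2}0}^{l0})^{2}=1$. The sketch below assumes a toy truncated spectrum and the standard closed form for the zero-projection Clebsch-Gordan squares; it is illustrative only.

```python
from math import exp, factorial

def cg0_sq(l1, l2, l):
    """(C^{l0}_{l1 0 l2 0})^2, closed form for zero magnetic quantum numbers."""
    if l < abs(l1 - l2) or l > l1 + l2 or (l1 + l2 + l) % 2:
        return 0.0  # triangle and parity conditions
    J, g = l1 + l2 + l, (l1 + l2 + l) // 2
    w3j_sq = (factorial(J - 2*l1) * factorial(J - 2*l2) * factorial(J - 2*l)
              / factorial(J + 1)) * (factorial(g) / (factorial(g - l1)
              * factorial(g - l2) * factorial(g - l)))**2
    return (2*l + 1) * w3j_sq

Lmax = 12
raw = [(k + 1) * exp(-k) for k in range(Lmax + 1)]   # toy spectrum Γ_k
p = [g / sum(raw) for g in raw]                      # step (i): Γ_k / Γ_*

# step (ii): probability that two |k0> particles couple to |l0>
Gamma2 = [sum(p[l1] * p[l2] * cg0_sq(l1, l2, l)
              for l1 in range(Lmax + 1) for l2 in range(Lmax + 1))
          for l in range(2 * Lmax + 1)]
assert abs(sum(Gamma2) - 1.0) < 1e-10   # Γ̂_{2,·} is a probability distribution
```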
Now denote by $\mathbf{A}_{p,\left\vert \lambda 0\right\rangle }$
the event that the first $p$ particles $\alpha _{1},...,\alpha
_{p}$ have coupled pairwise to generate the state $\left\vert
\lambda
0\right\rangle $. Then, one also has that%
\begin{equation}
\frac{\widehat{\Gamma }_{p+1,\lambda }\widehat{\Gamma }_{q-p,l;\lambda
}^{\ast }}{\widehat{\Gamma }_{q,l}}=\Pr \text{ }\{\text{the }q\text{
particles generate }\left\vert l0\right\rangle \text{ \ }|\text{ \ }\mathbf{A%
}_{p,\left\vert \lambda 0\right\rangle }\text{ }\}. \label{ourpint}
\end{equation}%
In particular, relation (\ref{ourpint}) yields a further physical
interpretation of the \textquotedblleft no privileged path
condition\textquotedblright\ discussed in (\ref{rwint}).
\subsection{Other convolutions and random walks on group duals}
Random walks on hypergroups, and specifically on group duals, have been
actively studied in the seventies -- see \cite[Ch. 6]{GKR}. Our aim in the
sequel is to compare our definitions with those provided in this earlier
literature, mainly by discussing the alternative physical meanings of the
associated notion of convolution. We recall from Section \ref{S :
Clebsch-Gordan} that, starting from Wigner's $D$-matrix representation
of $SO(3)$, we obtain the unitarily equivalent reducible representations $%
\{ D^{l_{1}}(g)\otimes D^{l_{2}}(g)\} $ and $\{ \oplus
_{l=|l_{2}-l_{1}|}^{l_{2}+l_{1}}D^{l}(g)\} $. Now denote by $\chi _{l}(g)$
the character of $D^{l}(g)$; for all $g\in SO(3)$, we have immediately%
\begin{equation*}
\chi _{l_{1}}(g)\chi _{l_{2}}(g)=\sum_{l=|l_{2}-l_{1}|}^{l_{2}+l_{1}}\chi
_{l}(g)\text{ .}
\end{equation*}%
In \cite[p. 222]{GKR}, an alternative class of Clebsch-Gordan coefficients $%
\{C_{l_{1}l_{2}|G}^{l}$ $:$ $l_{1},l_{2},l\geq 0\}$ is defined by means of
the identity%
\begin{equation*}
\frac{1}{2l_{1}+1}\chi _{l_{1}}(g)\frac{1}{2l_{2}+1}\chi
_{l_{2}}(g)=\sum_{l}C_{l_{1}l_{2}|G}^{l}\frac{1}{2l+1}\chi _{l}(g)
\end{equation*}%
which leads to%
\begin{equation*}
C_{l_{1}l_{2}|G}^{l}=\frac{2l+1}{(2l_{1}+1)(2l_{2}+1)}\left\{
l_{1}l_{2}l\right\} \text{,}
\end{equation*}%
where we use the same notation as in \cite{VMK} and in many other physical
textbooks, i.e. we take $\left\{ l_{1}l_{2}l\right\} $ to represent the
indicator function of the event $|l_{2}-l_{1}|\leq l\leq l_{2}+l_{1}$. Of
course%
\begin{equation}
\sum_{l}C_{l_{1}l_{2}|G}^{l}=\sum_{l=|l_{2}-l_{1}|}^{l_{2}+l_{1}}\frac{2l+1}{%
(2l_{1}+1)(2l_{2}+1)}\equiv 1\text{ .} \label{sum}
\end{equation}%
As observed in \cite{GKR}, relation (\ref{sum}) can be used to endow $%
\widehat{SO\left( 3\right) }$ with a hypergroup structure, via the formal
addition $l_{1}+l_{2}\triangleq \sum_{l}lC_{l_{1}l_{2}|G}^{l}$. Now let $%
\left\{ \Gamma _{l}:l\geq 0\right\} $ be a collection of positive
coefficients such that $\sum_{l}\Gamma _{l}=1$. The convolutions and
*-convolutions of the $\left\{ \Gamma _{l}\right\} $ that are naturally
associated with the above formal addition are given by%
\begin{eqnarray}
\widetilde{\Gamma }_{2,l} &=&\sum_{l_{1},l_{2}}\Gamma _{l_{1}}\Gamma
_{l_{2}}C_{l_{1}l_{2}|G}^{l}\text{ , \ }\widetilde{\Gamma }%
_{3,l}=\sum_{L_{1},l_{3}}\widetilde{\Gamma }_{2,L_{1}}\Gamma
_{l_{3}}C_{L_{1}l_{3}|G}^{l}\text{, ...} \label{Roy1} \\
\widetilde{\Gamma }_{q,l} &=&\sum_{L_{1},l_{q}}\widetilde{\Gamma }%
_{q-1,L_{q-1}}\Gamma _{l_{q}}C_{L_{q-1}l_{q}|G}^{l}\text{ }, \label{Roy3}
\end{eqnarray}%
and, for $p\geq 2$,%
\begin{equation}
\widetilde{\Gamma }_{p,l;l_{1}}^{\ast }=\sum_{l_{2}}\cdot \cdot \cdot
\sum_{l_{p}}\Gamma _{l_{2}}\cdot \cdot \cdot \Gamma
_{l_{p}}%
\sum_{L_{1}...L_{p-2}}C_{l_{1}l_{2}|G}^{L_{1}}C_{L_{1}l_{3}|G}^{L_{2}}...C_{L_{p-2}l_{p}|G}^{l}%
\text{ }. \label{Roy4}
\end{equation}
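Since on the triangle $C_{l_{1}l_{2}|G}^{l}$ reduces to the explicit ratio $(2l+1)/((2l_{1}+1)(2l_{2}+1))$, relation (\ref{sum}) and the convolutions (\ref{Roy1}) are easy to check numerically. The sketch below uses an arbitrary toy spectrum purely for illustration.

```python
def C_G(l1, l2, l):
    """C^l_{l1 l2 | G}: (2l+1)/((2l1+1)(2l2+1)) on the triangle, 0 otherwise."""
    if abs(l1 - l2) <= l <= l1 + l2:
        return (2*l + 1) / ((2*l1 + 1) * (2*l2 + 1))
    return 0.0

# relation (sum): for every (l1, l2) the coefficients sum to one
for l1 in range(6):
    for l2 in range(6):
        assert abs(sum(C_G(l1, l2, l) for l in range(l1 + l2 + 1)) - 1.0) < 1e-12

# two-fold convolution of a toy normalized spectrum Γ_l ∝ 2^{-l}
Lmax = 10
raw = [2.0**(-k) for k in range(Lmax + 1)]
Gamma = [g / sum(raw) for g in raw]
Gamma2 = [sum(Gamma[l1] * Gamma[l2] * C_G(l1, l2, l)
              for l1 in range(Lmax + 1) for l2 in range(Lmax + 1))
          for l in range(2 * Lmax + 1)]
assert abs(sum(Gamma2) - 1.0) < 1e-12     # Γ~_{2,·} is again a distribution
```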
As shown in \cite{GKR}, the objects appearing in
(\ref{Roy1})-(\ref{Roy4}) can be used to define the law of a
random walk $\widetilde{Z}=\{ \widetilde{Z}_{n}:n\geq 1\} $ on
$\mathbb{N}$ (regarded as a hypergroup isomorphic to
$\widehat{SO\left( 3\right) }$), exactly as we did in
(\ref{rw1})-(\ref{rw2}). In particular, since $\Gamma _{\ast
}=\sum
\Gamma _{l}=1$, one has that $\widetilde{\Gamma }_{p,l;l_{1}}^{\ast }=%
\mathbb{P}\{ \widetilde{Z}_{p}=l\mid \widetilde{Z}_{1}=l_{1}\} $.
Also, the convolutions (\ref{Roy1})-(\ref{Roy4}) (and therefore
the random walk $\widetilde{Z}$) enjoy a physical interpretation
which is interesting to compare with our previous result. To see
this, assume we have two mixed states $\Xi _{l_{1}}$ and $\Xi
_{l_{2}}$: in state $\Xi _{l_{1}}$, the
particle has total angular momentum $l_{1}$ and its projection on the axis $%
z $ takes values $m_{1}=-l_{1},...,l_{1}$ with uniform (classical)
probability $(2l_{1}+1)^{-1}$; analogous conditions are imposed for $\Xi
_{l_{2}}$. Let us now compute the probability $\Pr \left\{ l\mid \Xi
_{l_{1}},\Xi _{l_{2}}\right\} $ that the system will couple to form a
particle with total angular momentum $l$ and arbitrary projection on $z$.
Start by observing that the probability that a particle in the state $%
\left\vert l_{1}m_{1}\right\rangle $ will couple with another particle in
the state $\left\vert l_{2}m_{2}\right\rangle $ to yield the state $%
\left\vert lm\right\rangle $ is exactly given by $%
\{C_{l_{1}m_{1}l_{2}m_{2}}^{lm}\}^{2}. $ Hence, with straightforward
notation,%
\begin{eqnarray}
\Pr \left\{ l\mid \Xi _{l_{1}},\Xi _{l_{2}}\right\} &=&\sum_{m_{1}m_{2}}\Pr
\left\{ l\mid \left\vert l_{1}m_{1}\right\rangle ,\left\vert
l_{2}m_{2}\right\rangle \right\} \Pr \left\{ m_{1},m_{2}\right\} \notag \\
&=&\sum_{m_{1}m_{2}}\Pr \left\{ l\mid \left\vert l_{1}m_{1}\right\rangle
,\left\vert l_{2}m_{2}\right\rangle \right\} \frac{1}{2l_{1}+1}\frac{1}{%
2l_{2}+1} \notag \\
&=&\sum_{m}\sum_{m_{1}m_{2}}\left\{ C_{l_{1}m_{1}l_{2}m_{2}}^{lm}\right\}
^{2}\frac{1}{2l_{1}+1}\frac{1}{2l_{2}+1} \notag \\
&=&\sum_{m}\frac{\left\{ l_{1}l_{2}l\right\} }{(2l_{1}+1)(2l_{2}+1)}=\frac{%
(2l+1)\left\{ l_{1}l_{2}l\right\} }{(2l_{1}+1)(2l_{2}+1)}=C_{l_{1}l_{2}|G}^{l}%
\text{ }. \label{cgr}
\end{eqnarray}%
It follows from (\ref{cgr}) that the quantity $\widetilde{\Gamma }_{q,l}$
can be obtained as follows:
\begin{description}
\item[(i)] consider a system of $q$ particles $\alpha _{1},...,\alpha _{q}$
such that each $\alpha _{j}$ is in the mixed state $\Xi $ according to which
a particle is in the state $\left\vert ku\right\rangle $, $u=-k,...,k$, with
probability $(2k+1)^{-1}\Gamma _{k}/\Gamma _{\ast }$ ($k\geq 0$);
\item[(ii)] obtain $\widetilde{\Gamma }_{q,l}$ as the probability that the
elements of this system are coupled pairwise to form a particle in the state
$\left\vert lm\right\rangle $, for an arbitrary $m=-l,...,l$.
\end{description}
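A quick numerical sanity check of (\ref{cgr}) is also possible by implementing the Clebsch-Gordan coefficients through Racah's closed formula (a standard expression, reproduced below without proof): averaging the squared couplings uniformly over the projections $m_{1},m_{2}$ indeed recovers the coefficient $C_{l_{1}l_{2}|G}^{l}$.

```python
from math import factorial, sqrt

def cg(l1, m1, l2, m2, l, m):
    """Clebsch-Gordan coefficient C^{lm}_{l1 m1 l2 m2} via Racah's formula."""
    if m1 + m2 != m or not abs(l1 - l2) <= l <= l1 + l2 or abs(m) > l:
        return 0.0
    pre = sqrt((2*l + 1) * factorial(l1 + l2 - l) * factorial(l1 - l2 + l)
               * factorial(-l1 + l2 + l) / factorial(l1 + l2 + l + 1))
    pre *= sqrt(factorial(l + m) * factorial(l - m) * factorial(l1 - m1)
                * factorial(l1 + m1) * factorial(l2 - m2) * factorial(l2 + m2))
    s = 0.0
    for k in range(l1 + l2 - l + 1):
        args = (k, l1 + l2 - l - k, l1 - m1 - k, l2 + m2 - k,
                l - l2 + m1 + k, l - l1 - m2 + k)
        if min(args) < 0:       # factorial arguments must be non-negative
            continue
        denom = 1
        for a in args:
            denom *= factorial(a)
        s += (-1)**k / denom
    return pre * s

# check (cgr): Pr{l | Ξ_{l1}, Ξ_{l2}} coincides with the GKR coefficient
for l1, l2, l in [(1, 1, 2), (2, 3, 4), (3, 3, 1)]:
    pr = sum(cg(l1, m1, l2, m2, l, m1 + m2)**2
             for m1 in range(-l1, l1 + 1)
             for m2 in range(-l2, l2 + 1)) / ((2*l1 + 1) * (2*l2 + 1))
    assert abs(pr - (2*l + 1) / ((2*l1 + 1) * (2*l2 + 1))) < 1e-10
```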
To sum up, both convolutions $\widehat{\Gamma }$ and $\widetilde{\Gamma }$
can be interpreted in terms of random interacting quantum particles: $%
\widehat{\Gamma }$-type convolutions are obtained from particles in mixed
states where the mixing is performed over pure states of the form $%
\left\vert k0\right\rangle $; on the other hand, $\widetilde{\Gamma }$-type
convolutions are associated with mixed state particles where mixing is over
pure states of the type $\left\{ \left\vert ku\right\rangle
:u=-k,...,k\right\} $, uniformly in $u$ for every fixed $k.$
\section{Application: algebraic/exponential dualities\label{S : Ang PS}}
In this section we discuss explicit conditions on the angular
power spectrum $\left\{ C_{l}:l\geq 0\right\} $ of the Gaussian
field $T$ introduced in Section \ref{S : GaussSub}, ensuring that
the CLT (\ref{as00}) may hold. Our results show that, if the power
spectrum decreases exponentially, then a high-frequency CLT\
holds, whereas the opposite implication holds if the spectrum
decreases as a negative power. This duality mirrors analogous
conditions previously established in the Abelian case -- see
\cite{MaPe}. For simplicity, we stick to the case $q=2.$ Note that
the results below allow us to deal with the asymptotic
(high-frequency) behaviour of the Sachs-Wolfe model
(\ref{swmodel}).
\subsection{The Exponential case}
Assume
\begin{equation}
C_{l}\approx (l+1)^{\alpha }\exp (-l)\text{ },\text{ }\alpha \geq 0.
\label{fdsq}
\end{equation}%
To prove that, in this case, (\ref{as00}) is verified for $q=2$, we will
prove that (\ref{sufcon2}) holds (recall the definition of $\Gamma _{l}$
given in (\ref{FreqSpectrum})). For the denominator of the previous
expression we obtain the lower bound%
\begin{eqnarray}
\sum_{l_{1},l_{2}=1}^{\infty }\Gamma _{l_{1}}\Gamma
_{l_{2}}(C_{l_{1}0l_{2}0}^{l0})^{2} &\geq &\sum_{l_{1}=[l/3]}^{[2l/3]}\Gamma
_{l_{1}}\Gamma _{l-l_{1}}(C_{l_{1}0l-l_{1}0}^{l0})^{2} \notag \\
&\approx &\exp (-l)l^{2(\alpha
+1)}\sum_{l_{1}=[l/3]}^{[2l/3]}(C_{l_{1}0l-l_{1}0}^{l0})^{2} \label{seg2}
\end{eqnarray}%
and in view of \cite{VMK}, equation 8.5.2.33, and Stirling's formula%
\begin{eqnarray*}
(\ref{seg2}) &\approx &\exp (-l)l^{2(\alpha
+1)}\sum_{l_{1}=[l/3]}^{[2l/3]}\left( \frac{l!}{l_{1}!(l-l_{1})!}\right)
^{2}\left( \frac{(2l_{1})!(2l-2l_{1})!}{(2l)!}\right) \\
&\approx &\exp (-l)l^{2(\alpha +1)}\sum_{l_{1}=[l/3]}^{[2l/3]}\frac{l^{2l+1}%
}{l_{1}^{2l_{1}+1}(l-l_{1})^{2l-2l_{1}+1}} \\
&&\times \left( \frac{(2l_{1})^{2l_{1}+1/2}(2l-2l_{1})^{2l-2l_{1}+1/2}}{%
(2l)^{2l+1/2}}\right) \\
&\approx &\exp (-l)l^{2(\alpha +1)}\sum_{l_{1}=[l/3]}^{[2l/3]}\frac{l^{1/2}}{%
l_{1}^{1/2}(l-l_{1})^{1/2}}\approx \exp (-l)l^{2(\alpha +1)}l^{1/2}.
\end{eqnarray*}%
On the other hand, recall that by the triangle conditions (Section
\ref{S : Clebsch-Gordan}) $\{ C_{l_{1}0l_{2}0}^{l0}\} ^{2}$
$\equiv 0$ unless
$l_{1}+l_{2}\geq l.$ Hence%
\begin{eqnarray*}
&&\sup_{l_{1}}\sum_{l_{2}}\Gamma _{l_{1}}\Gamma _{l_{2}}\left\{
C_{l_{1}0l_{2}0}^{l0}\right\} ^{2}\leq K\sup_{l_{1}}\exp (-l)l_{1}^{\alpha
+1} \\
&&\times \left\{ \left\vert l-l_{1}\right\vert ^{\alpha +1}+\sum_{u=1}^{%
\infty }\exp (-u)\left\vert l_{1}+u\right\vert ^{\alpha +1}\right\} \approx
\exp (-l)l^{2(\alpha +1)}\text{ .}
\end{eqnarray*}%
It is then immediate to see that (\ref{sufcon2}) is satisfied.
\subsection{Regularly varying functions}
For $q=2$, we show below that the CLT fails for all sequences $C_{l}$ such
that: (a) $C_{l}$ is quasi monotonic, i.e. $C_{l+1}\leq C_{l}(1+K/l)$, and
(b) $C_{l}$ is such that $\lim \inf_{l\rightarrow \infty }C_{l}/C_{l/2}>0$.
In particular, a necessary condition for the CLT (\ref{as00}) to hold is that%
\emph{\ }$C_{l}/C_{l/2}\rightarrow 0$. This is exactly the same
necessary condition as was derived by \cite{MaPe} in the Abelian
case. For the general case $q\geq 2$, we expect the CLT fails for
all regularly varying angular power spectra, i.e. for all $C_{l}$
such that $\lim \inf_{l\rightarrow \infty }C_{l}/C_{\alpha
l}>0$ for all $\alpha >0$. Note that we are thus covering all
polynomial forms for $C_{l}^{-1}.$
Since (\ref{sufcon2}) only provides a sufficient condition for the
CLT, we need to analyze directly the more primitive condition
(\ref{as11}) for $m=0$ (the case $m\neq 0$ merely entails more
complicated notation). We
consider first an upper bound for the square root of the denominator of (\ref%
{as11}), which is given by $\widetilde{C}_{l}^{\left( 2\right) }$.
We have%
\begin{eqnarray*}
\widetilde{C}_{l}^{\left( 2\right) } &=&\sum_{j_{1},j_{2}}C_{j_{1}}C_{j_{2}}%
\frac{(2j_{1}+1)(2j_{2}+1)}{4\pi (2l+1)}\left( C_{j_{1}0j_{2}0}^{l0}\right)
^{2} \\
&\leq &2\sum_{j_{1}}\sum_{j_{2}=j_{1}}^{\infty }C_{j_{1}}C_{j_{2}}\frac{%
(2j_{1}+1)(2j_{2}+1)}{4\pi (2l+1)}\left( C_{j_{1}0j_{2}0}^{l0}\right) ^{2} \\
&=&\frac{1}{2\pi }\sum_{j_{1}}C_{j_{1}}(2j_{1}+1)\sum_{j_{2}=j_{1}}^{\infty
}C_{j_{2}}\left( C_{j_{1}0l0}^{j_{2}0}\right) ^{2} \\
&\leq &\frac{1}{2\pi }\sum_{j_{1}}C_{j_{1}}(2j_{1}+1)\left\{ \sup_{j_{2}\geq
j_{1},j_{1}+j_{2}>l}C_{j_{2}}\right\} \sum_{j_{2}=0}^{\infty }\left(
C_{j_{1}0l0}^{j_{2}0}\right) ^{2}\leq KC_{l/2}\text{ ,}
\end{eqnarray*}%
where we have used the relation $\frac{2j_{2}+1}{2l+1}%
(C_{j_{1}0j_{2}0}^{l0})^{2}=(C_{j_{1}0l0}^{j_{2}0})^{2}$, as well as%
\begin{equation*}
\sup_{j_{2}\geq j_{1},j_{1}+j_{2}>l}C_{j_{2}}\leq KC_{l/2}\text{ , and }%
\sum_{l=|l_{2}-l_{1}|}^{l_{2}+l_{1}}\left( C_{l_{1}0l_{2}0}^{l0}\right)
^{2}\equiv 1\text{ .}
\end{equation*}%
The numerator of (\ref{as11}), in turn, is bounded below by%
\begin{eqnarray*}
&&\sum_{j_{1},j_{2}}C_{j_{1}}C_{j_{2}}\frac{(2j_{1}+1)(2j_{2}+1)}{(4\pi
(2l+1))^{2}}\left\vert \sum_{l_{1}}C_{l_{1}}(2l_{1}+1)\left(
C_{l_{1}0j_{1}0}^{l0}\right) ^{2}\left( C_{l_{1}0j_{2}0}^{l0}\right)
^{2}\right\vert ^{2} \\
&\geq &\sum_{j_{1},j_{2}}C_{j_{1}}C_{j_{2}}\frac{(2j_{1}+1)(2j_{2}+1)}{(4\pi
(2l+1))^{2}}\left\vert 5C_{2}\left\{
C_{20j_{1}0}^{l0}C_{20j_{2}0}^{l0}\right\} ^{2}\right\vert ^{2} \\
&\geq &C_{l}^{2}\frac{1}{(4\pi )^{2}}\left\vert 5C_{2}\left\{
C_{20l0}^{l0}\right\} ^{2}\right\vert ^{2}\geq KC_{l}^{2}.
\end{eqnarray*}%
The left-hand side of condition (\ref{as11}) is then bounded below by $%
\lim \inf_{l\rightarrow \infty }\left( K_{1}C_{l}^{2}\right)
/(K_{2}C_{l/2}^{2})>0$, so that the CLT (\ref{as00}) cannot
hold.
\section{Introduction}
Optimal control methodology has attracted notable attention in both
applications and research over recent years~\cite{dolk2016output,jiang2019optimal}.
During this time, sustained technological advancement has
facilitated and enabled
the application of optimal control to
a wide range of theoretical and practical problems~\cite{zhang2014distributed,wei2015value}.
Among the available optimization approaches,
the linear quadratic regulator (LQR) methodology garners particular attention and interest~\cite{wu2018optimal,kanieski2015robust,zhang2015linear}.
In LQR problems, the cost function is defined as a quadratic function
of the state variables and the control inputs,
and the methodology is effective and straightforward to apply
when the dynamic system to be controlled
can be modeled as linear and time-invariant.
Note, however, that
the LQR methodology requires full state feedback as a prerequisite.
However, in many practical applications,
some of the system state variables are not measurable or available
for feedback purposes;
such a situation can arise from
real-world constraints of feasibility, complexity, and reconfigurability.
In these cases,
the Kalman filter is commonly used to estimate the unavailable system state variables,
and this
is an important extension of the LQR concept to systems with additive Gaussian noise~\cite{tanaka2017lqg}.
The resulting linear quadratic Gaussian (LQG) control methodology
couples the LQR with the Kalman filter via the separation principle,
which, as an evident consequence, increases the complexity of the controller structure.
Although certainly very useful in many situations,
the methodology suffers from the key constraint that
when the system disturbances and noise cannot be suitably characterized by the normal distribution,
the Kalman filter cannot be applied successfully.
In these situations,
compared to a controller structure with the Kalman filter,
an output feedback controller is more straightforward
and can be applied effectively
to a much wider range of applications.
When some of the system state variables are not measurable,
a static output feedback (SOF) controller can alternatively be utilized to satisfy
the prescribed system performance requirements.
With this approach,
the optimal control problem can thus
be formulated as the SOF LQR problem.
The necessary and sufficient conditions for finding a stable solution
for the SOF LQR problem are discussed in~\cite{levine1970determination},
and an iterative solution is obtained by solving the associated Lyapunov equations.
Notably there, the controller gain resulting from the Lyapunov equations solution is a full matrix
without any prescribed structural constraints.
However, as indicated earlier, structural constraints on the controller gain can arise in certain scenarios,
such as decentralized control and sparse control problems.
For these problems, it is then not straightforward to derive an optimal solution~\cite{ma2019parameter}.
The evident reason here is that finding an optimal solution to the SOF problem
is a Bilinear Matrix Inequality (BMI) optimization problem,
which is generally non-convex~\cite{sadabadi2016static}.
Moreover, it has been shown in~\cite{blondel1997np} that the SOF stabilization problem is an NP-hard problem;
and unless it can be proved $P=NP$, there is no polynomial-time algorithm to solve this problem.
In the existing literature then, most of the algorithms for finding
a stable solution to the non-convex SOF problem are based on the Lyapunov equation approach,
such as the D-K iteration optimization technique~\cite{el1994synthesis,lind1994evaluating},
the min-max iteration technique~\cite{geromel1994lmi,geromel1998static},
and the projection algorithm~\cite{peres1993h}.
Also, the cone complementarity linearization algorithm proposed in~\cite{el1997cone}
introduces an efficient technique for finding a stable controller gain matrix
with certain specifications.
To cater to the situation with structural constraints,
substantial work has been conducted in the core area of gradient projection.
In~\cite{ma2017integrated}, a first-order gradient projection method
is implemented to enhance the linear quadratic performance;
and which also considers the linear equality constraints
such that the method can be used to solve decentralized control and sparse control problems.
In~\cite{chanekar2017optimal}, generalized benders decomposition (GBD)
and gradient projection are combined and utilized
to solve a constrained linear quadratic problem on the condition
that the closed-loop system is stable
and a box constraint on the controller gain matrix is satisfied.
However, all these existing algorithms are essentially first-order methods,
and thus their rate of convergence is limited.
Notably, due to the high complexity of calculating
the Hessian matrix and its possible indefiniteness,
the more promising second-order methods are rarely used in
developing effective solutions to
these non-convex optimal control problems.
To the best of our knowledge,
in existing developments,
the Hessian matrix can only be calculated
in terms of the entire controller gain matrix
instead of element-wise~\cite{tassa2012synthesis,lin2013design,rautert1997computational}.
When the controller gain matrix is sparse, or the dimension of
the controller gain matrix is much less than the dimension
of the system state,
the computational complexity of calculating the Hessian matrix is then very high.
Against this backdrop,
in this work we aim
to develop a second-order optimization approach to solve the SOF LQR problem effectively.
An efficient method is proposed in the matrix space to calculate the Hessian matrix
by solving several associated Lyapunov equations.
A new optimization technique is then applied to deal with the indefiniteness of the Hessian matrix,
and through the constrained Newton's method,
a second-order optimization method is developed to solve the
specified constrained SOF LQR problem.
It is also worth noting that
the proposed approach is generally applicable
to many commonly encountered classes of optimal control problems,
including the controller synthesis problem with a prescribed sparsity pattern, the decentralized control problem,
and the controller optimization problem without structural constraints.
The paper is organized as follows.
In Section~\ref{section:preliminary}, the constrained SOF LQR problem is formulated,
and the first-order gradient projection method
for solving the linear equality constrained optimization problem is introduced.
In Section~\ref{section:second_order},
we develop our second-order optimization method:
first, the Hessian matrix is derived,
with a detailed discussion on dealing with its indefiniteness;
then, the linear equality constrained Newton's method is given
to solve the formulated optimization problem.
In Section~\ref{section:numerical_example},
we examine the performance and effectiveness
of our proposed methodology on
illustrative examples,
and the results validate the applicability and effectiveness of the proposed method.
Section~\ref{section:conclusion} then concludes the paper.
\section{Preliminaries}\label{section:preliminary}
The following notations are used in the remaining text. $\mathbb R^{m\times n}$ ($\mathbb R^{n}$) denotes the set of real matrices with $m$ rows and $n$ columns (the set of $n$-dimensional real column vectors). $\mathbb S^{n}_{++}$ ($\mathbb S^{n}_{+}$) denotes the set of $n$-dimensional positive definite (positive semi-definite) real symmetric matrices. The symbol $A\succ0$ ($A\succeq0$) means that the matrix $A$ is positive definite (positive semi-definite). $A^T$ ($x^T$) denotes the transpose of the matrix $A$ (vector $x$). $J^{ij}$ denotes the single-entry matrix with a single entry $1$ located at the $i$th row and $j$th column, and the other entries are zero. $I$ represents the identity matrix with appropriate dimensions. The operator $\text{Tr}(\cdot)$ denotes the trace of a matrix. The operator $\langle \cdot,\cdot \rangle$ denotes the Frobenius inner product, i.e., $\langle A,B\rangle= \text{Tr}\left(A^TB\right)$ for $A,B \in \mathbb R^{m\times n}$. The norm based on this inner product is defined by $\|x\|_F=\sqrt{\langle x,x\rangle}$ for $x\in \mathbb R^{m\times n}$. The operator $\otimes$ denotes the Kronecker product. The operator $\mathrm{vec}(\cdot)$ denotes the vectorization operator that expands a matrix by columns into a column vector. The operator $\text{det}(\cdot)$ denotes the determinant of a square matrix. $[A_1,A_2,\dots,A_n]$ ($[A_1;A_2;\dots;A_n]$) denotes the block matrix organized by rows (columns). $\mathbb E(\cdot)$ denotes the expectation. The operator $\lambda(\cdot)$ represents the eigenvalues of a matrix, and $\text{Re}(\cdot)$ returns the real part of a complex number.
\subsection{Problem Statement}
A linear time-invariant (LTI) system with an SOF controller can be expressed as
\begin{IEEEeqnarray}{rCl}\label{equation:linear_system}
\dot x(t)&=&Ax(t)+Bu(t)\IEEEyesnumber\IEEEyessubnumber\\
z(t)&=&C_1x(t)+D_1u(t)\IEEEyessubnumber\\
y(t)&=&Cx(t)\IEEEyessubnumber\\
u(t)&=&Ky(t),\IEEEyessubnumber
\end{IEEEeqnarray}
where $x(t) \in \mathbb R^n$ is the state vector, $u(t)\in \mathbb R^{m}$ is the control input vector, $z(t)\in \mathbb R^p$ is the performance output vector used for specifying the system performance, $y(t)\in \mathbb R^q$ is the measured output vector for the controller, $A\in \mathbb R^{n\times n}$ is the state matrix, $B\in \mathbb R^{n\times m}$ is the input matrix, $C_1\in \mathbb R^{p\times n}$ and $D_1 \in \mathbb R^{p\times m}$ are the output matrix and the direct output matrix for specifying the system performance, $C\in \mathbb R^{q\times n}$ is the output matrix for the controller, and $K\in \mathbb R^{m\times q}$ is the SOF controller gain matrix.
For an SOF linear quadratic optimization problem with respect to (\ref{equation:linear_system}), the cost function in the infinite horizon is defined as
\begin{IEEEeqnarray}{rCl}
J(K) &=& \displaystyle\int_0^\infty z(t)^T\mathscr Q z(t)dt\IEEEnonumber\\
&=&\displaystyle\int_0^\infty \left[x(t)^TC_1^T \mathscr Q C_1 x(t)+u(t)^TD_1^T\mathscr Q D_1 u(t)\right] dt,
\end{IEEEeqnarray}
where $\mathscr Q \in \mathbb S^{p}_+$ is a weighting matrix for the performance output vector $z(t)$. For simplicity, we define $Q=C_1^T \mathscr Q C_1$ and $R=D_1^T\mathscr Q D_1$, as is the usual practice. Notably, $Q\in\mathbb S^n$ is positive semi-definite, and $R\in\mathbb S^m$ is positive definite. Then the cost function can be converted to
\begin{IEEEeqnarray}{rCl}\label{equation:cost_function}
J(K) &=&
x_0^T\left(\displaystyle\int_0^\infty \Lambda_c(t)^T \left[Q+(KC)^T RKC\right] \Lambda_c(t)dt\right)x_0,
\end{IEEEeqnarray}
where $\Lambda_c(t) = e^{(A+BKC)t}$, and $x_0\in\mathbb R^n$ denotes the initial state vector of the system. The following matrices are used in the remaining text for the sake of brevity,
\begin{IEEEeqnarray}{rClr}
A_c&=&A+BKC\IEEEyesnumber\IEEEyessubnumber\label{equation:A_c}\\
Q_c&=&Q+(KC)^TRKC&\quad\IEEEyessubnumber\label{equation:Q_c}\\
X_0&=&x_0x_0^T\IEEEyessubnumber\\
P&=&\displaystyle\int _0^\infty \Lambda_c^T(t)Q_c\Lambda_c(t)dt.\IEEEyessubnumber
\end{IEEEeqnarray}
Then the cost function can be expressed by
\begin{IEEEeqnarray}{rClr}
J(K)=\text{Tr} (PX_0).
\end{IEEEeqnarray}
Define the set of the stable controller gains by $\mathscr K_s=\{K\in \mathbb R^{m\times q}\,|\, \max \{\text{Re} (\lambda(A_c))\}<0\}$. Then for each $K\in \mathscr K_s$, there exists a $P\in \mathbb S_{++}^{n}$ such that
\begin{IEEEeqnarray}{rCl}
A_c^TP+PA_c\prec 0.
\end{IEEEeqnarray}
Define the generalized Lyapunov operator $L: \mathbb R^{n\times n}\rightarrow \mathbb R^{n\times n}$ given by $P\mapsto A_c^TP+PA_c$, where $A_c$ is defined in \eqref{equation:A_c}. To derive the important properties of the generalized Lyapunov operator, the following lemma for the Lyapunov operator, a special case of the generalized Lyapunov operator with both the domain and co-domain restricted to $\mathbb S^{n}$, is introduced.
\begin{lemma}\label{theorem:unique}
For the LTI system~\eqref{equation:linear_system} and $K\in \mathscr K_s$, there exists a unique solution $P\in \mathbb S_{++}^{n}$ to the equation
\begin{IEEEeqnarray}{rCl}\label{equation:lyapunov_equation}
A_c^TP+PA_c+Q_c=0,
\end{IEEEeqnarray}
with $A_c$ and $Q_c$ defined in \eqref{equation:A_c} and \eqref{equation:Q_c}.
\end{lemma}
\begin{proof}
From \eqref{equation:lyapunov_equation}, it follows that
\begin{IEEEeqnarray}{rCl}\label{equation:lyapunov_equation_expand}
\left(I\otimes A_c^T+A_c^T\otimes I\right)\mathrm{vec}(P)=\mathrm{vec}(-Q_c),
\end{IEEEeqnarray}
where $\left(I\otimes A_c^T+A_c^T\otimes I\right)$ is a parameter matrix with dimensions $n^2\times n^2$. There exists a unique solution to \eqref{equation:lyapunov_equation_expand} if and only if the parameter matrix is full rank, i.e., $\text{det}\left(I\otimes A_c^T+A_c^T\otimes I\right)\neq 0$.
The eigenvalues of the parameter matrix can be listed as
\begin{IEEEeqnarray}{rCl}
\lambda_1+\lambda_1,\dots,\lambda_1+\lambda_n,\lambda_2+\lambda_1,\dots,\lambda_2+\lambda_n,\dots,\lambda_n+\lambda_n,
\end{IEEEeqnarray}
where $\lambda_i$ is the $i$th eigenvalue of the matrix $A_c$. Then $\text{det}\left(I\otimes A_c^T+A_c^T\otimes I\right)\neq 0$ if and only if $A_c$ and $-A_c$ have no common eigenvalues. If $K\in\mathscr K_s$, then $\max \{\text{Re} (\lambda(A_c))\}<0$, which is sufficient for this condition. This completes the proof of Lemma~\ref{theorem:unique}.
\end{proof}
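The vectorized equation \eqref{equation:lyapunov_equation_expand} also gives a direct numerical recipe for evaluating the cost of a stabilizing gain: solve the $n^2\times n^2$ linear system for $\mathrm{vec}(P)$, then read off $J(K)=\text{Tr}(PX_0)$. The sketch below uses hypothetical system matrices chosen only for illustration.

```python
import numpy as np

# hypothetical system data, chosen only so that A_c = A + BKC is Hurwitz
A = np.array([[-1.0, 0.5], [0.0, -2.0]])
B = np.array([[1.0], [0.5]])
C = np.array([[1.0, 0.0]])
K = np.array([[0.2]])
Q, R = np.eye(2), np.array([[1.0]])

Ac = A + B @ K @ C                       # closed-loop state matrix A_c
Qc = Q + (K @ C).T @ R @ (K @ C)         # closed-loop weight Q_c
assert np.linalg.eigvals(Ac).real.max() < 0    # K is a stabilizing gain

# solve A_c^T P + P A_c + Q_c = 0 through (I⊗A_c^T + A_c^T⊗I) vec(P) = vec(-Q_c)
n = A.shape[0]
M = np.kron(np.eye(n), Ac.T) + np.kron(Ac.T, np.eye(n))
P = np.linalg.solve(M, (-Qc).flatten(order="F")).reshape((n, n), order="F")

assert np.allclose(Ac.T @ P + P @ Ac + Qc, 0)  # Lyapunov residual vanishes
assert np.allclose(P, P.T) and np.linalg.eigvalsh(P).min() > 0  # unique P, PD

x0 = np.array([1.0, -1.0])               # illustrative initial state
J = x0 @ P @ x0                          # J(K) = Tr(P X_0) with X_0 = x_0 x_0^T
```

In practice a dedicated Lyapunov solver (e.g. a Bartels-Stewart implementation) avoids forming the $n^2\times n^2$ Kronecker system, but the vectorized solve directly mirrors the uniqueness argument in the proof above.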
If there is no constraint on the LQR problem, and all system state variables
can be measured, then the optimal static state feedback gain can be directly obtained by solving the Algebraic Riccati Equation (ARE). However, in some real-world applications, it is impossible to measure all of the system state variables. Moreover, some constraints on the controller structure must be considered. In these cases, the optimal controller gain matrix for the linear quadratic static state feedback problem cannot be directly obtained. In this work, we assume linear equality constraints are imposed on the controller structure, and we denote the controller parameters satisfying the desired linear equality constraints by $K\in {\mathscr C}$, where ${\mathscr C}=\{K\in {\mathbb R}^{m\times q}\,|\, {\mathcal C}(K)={\mathcal C}_0\}$. Considering the scenarios with multiple linear equality constraints, we denote the linear equality constraints on the controller structure by
\begin{IEEEeqnarray}{rCl}\label{eqaution:matrix_constraints}
{\mathcal C}_1(K)&=&{\mathcal A}_1^{(1)}K{\mathcal B}_1^{(1)}+\dots+{\mathcal A}_{m_1}^{(1)}K{\mathcal B}_{m_1}^{(1)}={\mathcal C}_0^{(1)}\IEEEnonumber\\
{\mathcal C}_2(K)&=&
{\mathcal A}_1^{(2)}K{\mathcal B}_1^{(2)}
+\dots
+{\mathcal A}_{m_2}^{(2)}K{\mathcal B}_{m_2}^{(2)}={\mathcal C}_0^{(2)}\IEEEnonumber\\
&\dots&\IEEEnonumber\\
{\mathcal C}_N(K)&=&{\mathcal A}_1^{(N)}K{\mathcal B}_1^{(N)}+\dots+{\mathcal A}_{m_N}^{(N)}K{\mathcal B}_{m_N}^{(N)}={\mathcal C}_0^{(N)},\IEEEeqnarraynumspace
\end{IEEEeqnarray}
where $\mathcal A_1^{(1)},\dots,\mathcal A_{m_1}^{(1)},\mathcal A_1^{(2)},\dots,\mathcal A_{m_2}^{(2)},\dots,\mathcal A_1^{(N)},\dots,\mathcal A_{m_N}^{(N)}$ and $\mathcal B_1^{(1)},\dots,\mathcal B_{m_1}^{(1)},\mathcal B_1^{(2)},\dots,\mathcal B_{m_2}^{(2)},\dots,$ $\mathcal B_1^{(N)},\dots,\mathcal B_{m_N}^{(N)}$ are constraint matrices given by the optimization problem, $m_i$ for all $i=1,2,\dots,N$ in the subscript represents the number of constraint matrices in one equality for the $i$th equality constraint, and $N$ is the total number of the equality constraints.
Then the constrained SOF problem can be summarized as
\begin{IEEEeqnarray}{lrCl}\label{eqn:optimization_problem}
\underset{K\in\mathbb R^{m\times q}}{\mathrm{minimize}}& \quad\quad J(K)\IEEEnonumber\\
\text{subject to}& \dot x(t)&=& Ax(t)+Bu(t)\IEEEnonumber\\
& u(t) &=& KCx(t)\IEEEnonumber\\
& K&\in&\mathscr C \cap \mathscr K_s.
\end{IEEEeqnarray}
The basic requirement for the controller design is closed-loop stability. For a minimum-phase SISO system with relative degree no greater than one, necessary and sufficient conditions for stability can be given by graphical methods~\cite{syrmos1997static}, and the root-locus method readily shows that the closed-loop system can be stabilized by a sufficiently large controller gain. However, when we seek the optimal solution of an SOF problem for a multi-input-multi-output (MIMO) system, guaranteeing stability remains an open question; in some cases it is even difficult to determine whether a stabilizing SOF controller exists at all. In this work, we assume that a stabilizing initial controller gain matrix always exists for the system to be controlled and can be found by existing algorithms such as the D-K iteration technique.
\subsection{First-Order Method with Gradient Projection}
When the gradient projection method is applied to solve the constrained SOF problem, the problem can be divided into two sub-problems. First, the gradient of the cost function with respect to the controller gain matrix is obtained without any constraint. Second, the unconstrained gradient is projected onto the linear equality constraints of the controller structure. By solving the two sub-problems in each iteration, we obtain a descent direction of the linear quadratic cost function that preserves the linear equality constraints on the controller gain matrix.
To solve the first sub-problem, we first present Property 1 and Property 2, and then introduce Theorem~\ref{theorem:gradient}.
\begin{property}\label{property:linear}
The generalized Lyapunov operator is linear, bounded, and invertible with a bounded inverse.
\end{property}
\begin{proof}
From the definition, it is straightforward to show that the generalized Lyapunov operator $L$ is linear and bounded. Then we prove that the generalized Lyapunov operator is invertible.
Notice that the generalized Lyapunov operator $L$ has the following property,
\begin{IEEEeqnarray}{rCl}
LP=0\implies P=0\quad \text{for } P\in \mathbb R^{n\times n},
\end{IEEEeqnarray}
which follows easily from Lemma~\ref{theorem:unique}; indeed, since $L$ acts on a finite-dimensional space, it is invertible if and only if $LP=0$ implies $P=0$.
This completes the proof of the invertibility of the generalized Lyapunov operator. The inverse of the generalized Lyapunov operator is bounded by the bounded inverse theorem~\cite{renardy2006introduction}.
This completes the proof of Property~\ref{property:linear}.
\end{proof}
\begin{property}\label{property:adjoint_invert}
The generalized Lyapunov operator $L$ has the following property,
\begin{IEEEeqnarray}{l}
\left(L^{-1}\right)^*=\left(L^*\right)^{-1},
\end{IEEEeqnarray}
where $L^*$ is the adjoint operator of the linear operator $L$ which can be expressed as
\begin{IEEEeqnarray}{l}
L^*\Gamma=\Gamma A^T_c+A_c \Gamma,
\end{IEEEeqnarray}
for all $\Gamma\in \mathbb R^{n\times n}$.
\end{property}
\begin{proof}
To show $L^*\Gamma=\Gamma A^T_c+A_c\Gamma$ for all $\Gamma\in \mathbb R^{n\times n}$, we have
\begin{IEEEeqnarray}{rCl}
{\left\langle LP,\Gamma\right\rangle} &=&{\text{Tr}} \left((LP)^T\Gamma\right)\IEEEnonumber\\
&=&{\text{Tr}} \left(A^T_cP^T\Gamma+P^TA_c\Gamma\right)\IEEEnonumber\\
&=&{\left\langle P,\Gamma A^T_c+A_c\Gamma\right\rangle}\IEEEnonumber\\
&=& {\left\langle P, L^*\Gamma\right\rangle},
\end{IEEEeqnarray}
for all $P,\Gamma\in \mathbb R^{n\times n}$.
It is straightforward to show that the adjoint operator $L^*$ of the generalized Lyapunov operator is also bounded. It then remains to prove that if the bounded generalized Lyapunov operator $L$ has a bounded inverse, its adjoint $L^*$ is invertible with $\left(L^*\right)^{-1}=\left(L^{-1}\right)^*$.
By the definition of the adjoint operator ${\left\langle LP,\Gamma\right\rangle}={\left\langle P, L^*\Gamma\right\rangle}$, we notice
\begin{IEEEeqnarray}{rCl}
{\left\langle L^* \left(L^{-1}\right)^*P,\Gamma\right\rangle}
&=&{\left\langle \left(L^{-1}\right)^*P,L \Gamma\right\rangle}\IEEEnonumber\\
&=&{\left\langle P,\left(L^{-1}\right) L \Gamma\right\rangle}\IEEEnonumber\\
&=&{\left\langle P, \Gamma\right\rangle},
\end{IEEEeqnarray}
which means $L^* \left(L^{-1}\right)^* = I$.
The proof of Property~\ref{property:adjoint_invert} is completed.
\end{proof}
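The adjoint identity $\left\langle LP,\Gamma\right\rangle=\left\langle P,L^*\Gamma\right\rangle$ can also be verified numerically. A minimal Python sketch (random matrices, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
Ac = rng.standard_normal((n, n))
P = rng.standard_normal((n, n))
Gamma = rng.standard_normal((n, n))

LP = Ac.T @ P + P @ Ac               # L P       = A_c^T P + P A_c
LsGamma = Gamma @ Ac.T + Ac @ Gamma  # L^* Gamma = Gamma A_c^T + A_c Gamma

inner = lambda X, Y: np.trace(X.T @ Y)  # trace inner product <X, Y>
lhs = inner(LP, Gamma)
rhs = inner(P, LsGamma)
```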
\begin{theorem}\label{theorem:gradient}
For the LTI system (\ref{equation:linear_system}) with the cost function (\ref{equation:cost_function}), the gradient of the cost function with respect to the controller gain matrix is given by
\begin{IEEEeqnarray}{c}\label{equation:gradient}
\dfrac{dJ}{dK}=2\left(B^TP_g +RKC\right)\Gamma C^T,
\end{IEEEeqnarray}
where $P_g\in\mathbb S^n_+$ and $\Gamma\in\mathbb S^n_+$ can be obtained by solving the following two Lyapunov equations,
\begin{IEEEeqnarray}{rCl}
LP_g &=&-Q_c\IEEEyesnumber\IEEEyessubnumber\label{eq:lya1}\\
L^*\Gamma &=&-X_0.\IEEEyessubnumber\label{eq:lya2}
\end{IEEEeqnarray}
\end{theorem}
\begin{proof}
Define $P_g\in \mathbb R^{n\times n}$ such that $J(K)=\text{Tr}(P_g X_0)$.
Equation \eqref{eq:lya1} can be derived by the definition of the linear operator $L$.
By Property~\ref{property:linear}, we have
\begin{IEEEeqnarray}{C}
P_g=-L^{-1}Q_c.
\end{IEEEeqnarray}
Define the partial differential operator $\partial_k=\partial/\partial k$ and the linear operator $L_k: \mathbb R^{n\times n}\rightarrow \mathbb R^{n\times n}$ given by $P\mapsto(\partial_kA_c^T)P+P(\partial_k A_c)$. By the continuity and linearity of the trace operator, we have
\begin{IEEEeqnarray}{C}
\partial_{k_{ij}}J={\left\langle\partial_{k_{ij}}P_g,X_0\right\rangle}.
\end{IEEEeqnarray}
It can be easily proved that
\begin{IEEEeqnarray}{rCl}
\partial_{k_{ij}}Q_c&=&\partial_{k_{ij}}(-LP_g)\IEEEnonumber\\
&=&-L_{k_{ij}}P_g-L\left(\partial_{k_{ij}}P_g\right),
\end{IEEEeqnarray}
and then the partial derivative of $P_g$ can be expressed by
\begin{IEEEeqnarray}{C}
\partial_{k_{ij}} P_g=-L^{-1}\left(L_{k_{ij}}P_g\right)-L^{-1}\left(\partial_{k_{ij}}Q_c\right).
\end{IEEEeqnarray}
Then we can denote the partial derivative of the cost function $J$ by
\begin{IEEEeqnarray}{rCl}
\partial_{k_{ij}} J
&=&{\left\langle\partial_{k_{ij}} P_g,X_0\right\rangle}\IEEEnonumber\\
&=&{\left\langle -L^{-1}\left(L_{k_{ij}}P_g\right)-L^{-1}\left(\partial_{k_{ij}}Q_c\right),X_0\right\rangle}\IEEEnonumber\\
&=&{\left\langle L_{k_{ij}}P_g+\partial_{k_{ij}}Q_c,\left(L^{-1}\right)^*(-X_0)\right\rangle}.
\end{IEEEeqnarray}
Define a new matrix $\Gamma\in \mathbb R^{n\times n}$ such that $L^*\Gamma=-X_0$. Then we have
\begin{IEEEeqnarray}{rCl}
\partial_{k_{ij}} J
&=&{\left\langle L_{k_{ij}}P_g+\partial_{k_{ij}}Q_c,
\left(L^*\right)^{-1}(-X_0)\right\rangle} \IEEEnonumber\\
&=&{\left\langle \left(BJ^{ij}C\right)^T P_g+P_g\left(BJ^{ij}C\right)+\left(\partial_{k_{ij}} Q_c\right),\Gamma\right\rangle}\IEEEnonumber\\
&=&2\text{Tr}\left(C\Gamma P_gBJ^{ij}\right)+2\text{Tr}\left(C\Gamma (KC)^TR J^{ij}\right).
\end{IEEEeqnarray}
Writing the above expression in matrix form yields (\ref{equation:gradient}), where $\Gamma$ and $P_g$ can be obtained by directly solving the two Lyapunov equations (\ref{eq:lya1}) and (\ref{eq:lya2}). This completes the proof of Theorem~\ref{theorem:gradient}.
\end{proof}
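The gradient formula of Theorem~\ref{theorem:gradient} lends itself to a direct numerical check. The Python sketch below (with NumPy/SciPy; the randomly generated stable system is an illustrative assumption, not part of the paper's experiments) solves the two Lyapunov equations and compares the analytic gradient against central finite differences of $J(K)=\text{Tr}(P_gX_0)$:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov as lyap

rng = np.random.default_rng(0)
n, m, q = 4, 2, 2
A = rng.standard_normal((n, n))
A -= (np.max(np.linalg.eigvals(A).real) + 1.0) * np.eye(n)  # shift A to be Hurwitz
B = rng.standard_normal((n, m))
C = rng.standard_normal((q, n))
Q, R, X0 = np.eye(n), np.eye(m), np.eye(n)
K = np.zeros((m, q))  # K = 0 stabilizes since A is Hurwitz

def cost(K):
    Ac = A + B @ K @ C
    Pg = lyap(Ac.T, -(Q + C.T @ K.T @ R @ K @ C))  # L P_g = -Q_c
    return np.trace(Pg @ X0)

def grad(K):
    Ac = A + B @ K @ C
    Pg = lyap(Ac.T, -(Q + C.T @ K.T @ R @ K @ C))  # L P_g = -Q_c
    Gam = lyap(Ac, -X0)                            # L* Gamma = -X_0
    return 2.0 * (B.T @ Pg + R @ K @ C) @ Gam @ C.T

# reference: central finite differences of J
h = 1e-5
fd = np.zeros((m, q))
for i in range(m):
    for j in range(q):
        E = np.zeros((m, q)); E[i, j] = h
        fd[i, j] = (cost(K + E) - cost(K - E)) / (2 * h)
```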
After the gradient of the cost function with respect to the controller gain matrix is obtained, we consider the linear equality constraints for the desired controller structure. We seek the constrained gradient that is as close as possible to the unconstrained one, which can be formulated as the optimization problem
\begin{IEEEeqnarray}{ll}\label{eqn:optimization_problem2}
\underset{\mathcal G\in\mathbb R^{m\times q}}{\mathrm{minimize}}&\quad \dfrac{1}{2} \left\|{\dfrac{d J}{d K}-{\mathcal G}}\right\|^2_F\IEEEnonumber\\
\text{subject to}& \quad {\mathcal{C}}_i({\mathcal G})=\mathcal C_0^{(i)},\quad i=1,\dots,N,
\end{IEEEeqnarray}
where $\mathcal G$ is the gradient with the linear equality constraints taken into consideration.
The dual problem of problem~(\ref{eqn:optimization_problem2}) can be expressed as
\begin{IEEEeqnarray}{l}\label{equation:optimization_problem2_dual}
\underset{\Lambda_i}{\mathrm{maximize}}
\inf_{\mathcal G \in \mathbb R^{m\times q}} \left(\dfrac{1}{2}\left\|{\dfrac{ d J}{d K}-{\mathcal{G}}}\right\|^2_F+\text{Tr} \left(\displaystyle\sum_{i=1}^N \Lambda_i^T\left({\mathcal C}_i({\mathcal G})-{\mathcal C}_0^{(i)}\right)\right)\right),
\end{IEEEeqnarray}
where $\Lambda_i$ is the dual variable with appropriate dimensions corresponding to the $i$th equality constraint.
Once the dual problem~(\ref{equation:optimization_problem2_dual}) is solved, the solution to the primal problem~(\ref{eqn:optimization_problem2}) follows easily. We have the following theorem.
\begin{theorem}\label{theorem:gradient_projection}
In terms of the cost function defined in \eqref{equation:cost_function} and the linear equality constraints for the controller structure defined in \eqref{eqaution:matrix_constraints}, the optimal gradient $\mathcal G^\star$ of the cost function with respect to the controller gain matrix under the linear equality constraints is given by
\begin{IEEEeqnarray}{l}\label{equation:optimal_projection_gradient}
{\mathcal G^\star} = \dfrac{ d J}{ d K}-
\displaystyle\sum_{i=1}^N \displaystyle\sum_{j=1}^{m_i} \left(
\left({\mathcal A}_j^{(i)}\right)^T\Lambda_i\left({\mathcal B}_j^{(i)}\right)^T
\right).
\end{IEEEeqnarray}
\end{theorem}
\begin{proof}
By the KKT optimality conditions, the necessary conditions can be expressed as
\begin{IEEEeqnarray}{rCl}
\dfrac{\partial }{\partial \mathcal G} \left[\dfrac{1}{2}\left\|{\dfrac{ d J}{ d K}-{\mathcal{G}}}\right\|^2_F\right]
&+&
\dfrac{\partial }{\partial {\mathcal G}}\left[\text{Tr} \left(\displaystyle\sum_{i=1}^N \Lambda_i^T\left({\mathcal C}_i({\mathcal G})-{\mathcal C}_0^{(i)}\right)\right)\right]=0
\IEEEyesnumber\IEEEyessubnumber\label{equation:KKT_condition}\\
{\mathcal{C}}_i({\mathcal G})&=&{\mathcal C}_0^{(i)}.\IEEEeqnarraynumspace\IEEEeqnarraynumspace\IEEEyessubnumber\label{equation:equality_constraints_gradient_projection}
\end{IEEEeqnarray}
For the first part of~(\ref{equation:KKT_condition}), the following result is derived,
\begin{IEEEeqnarray}{rCl}\label{equation:gradient_first_part}
\dfrac{\partial }{\partial {\mathcal G}}\left[\dfrac{1}{2}\left\|{\dfrac{d J}{ d K}-{\mathcal{G}}}\right\|^2_F\right]&=& -\left(\dfrac{ d J}{ d K}-\mathcal G\right).
\end{IEEEeqnarray}
For the second part of~(\ref{equation:KKT_condition}), the following result can be achieved,
\begin{IEEEeqnarray}{RL}\label{equation:gradient_second_part}
\dfrac{\partial }{\partial {\mathcal G}}\left[\text{Tr} \left(\displaystyle\sum_{i=1}^N \Lambda_i^T\left({\mathcal C}_i({\mathcal G})-{\mathcal C}_0^{(i)}\right)\right)\right]
=\displaystyle\sum_{i=1}^N \displaystyle\sum_{j=1}^{m_i} \left(
\left({\mathcal A}_j^{(i)}\right)^T\Lambda_i\left({\mathcal B}_j^{(i)}\right)^T
\right).
\end{IEEEeqnarray}
From (\ref{equation:KKT_condition}), (\ref{equation:gradient_first_part}) and (\ref{equation:gradient_second_part}), \eqref{equation:optimal_projection_gradient} follows directly. This completes the proof of Theorem~\ref{theorem:gradient_projection}.
\end{proof}
By (\ref{equation:equality_constraints_gradient_projection}), the dual variable $\Lambda_i$ for the optimization problem can be calculated. Intuitively, this technique projects a known gradient onto the linear equality constraints. In most optimization problems this method works well, apart from its slow convergence. One reason is the linear convergence rate of most first-order optimization methods; another is that the projection operation discards part of the gradient information. Therefore, in the next section, we propose a second-order optimization method.
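For intuition, consider the special case of a single constraint ${\mathcal A}_1K{\mathcal B}_1={\mathcal C}_0$ whose selector matrices pin one entry of the gain to zero. The Python sketch below (the selector matrices and the stand-in gradient are illustrative assumptions) solves the KKT conditions for $\Lambda$ in closed form and applies the projection of Theorem~\ref{theorem:gradient_projection}:

```python
import numpy as np

rng = np.random.default_rng(1)
m, q = 2, 3
g = rng.standard_normal((m, q))  # stands in for the unconstrained gradient dJ/dK

# Single constraint  A1 G B1 = C0,  pinning entry (0, 1) to zero
A1 = np.zeros((1, m)); A1[0, 0] = 1.0
B1 = np.zeros((q, 1)); B1[1, 0] = 1.0
C0 = np.zeros((1, 1))

# KKT conditions give  G = g - A1^T Lam B1^T  with  A1 G B1 = C0, so
# Lam = (A1 A1^T)^{-1} (A1 g B1 - C0) (B1^T B1)^{-1}
Lam = np.linalg.solve(A1 @ A1.T, A1 @ g @ B1 - C0) @ np.linalg.inv(B1.T @ B1)
G = g - A1.T @ Lam @ B1.T  # projected gradient
```

For these unit selectors the projection simply zeroes the constrained entry while leaving all other entries of the gradient untouched, illustrating the loss of gradient information mentioned above.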
\section{Second-Order Optimization Method}\label{section:second_order}
\subsection{Derivation of the Hessian Matrix}
On the basis of Theorem~\ref{theorem:gradient}, Theorem~\ref{theorem:hessian_matrix} is introduced to calculate the Hessian matrix of the cost function with respect to the controller gain matrix.
\begin{theorem}\label{theorem:hessian_matrix}
For the LTI system~(\ref{equation:linear_system}), the Hessian matrix of the cost function~(\ref{equation:cost_function}) with respect to the controller gain matrix can be expressed element-wise as
\begin{IEEEeqnarray}{rCl}\label{equation:hessian}
\partial_K\partial_{k_{ij}} J &=& 2B^T\left[\left(P_1^{ij}\right)^T+P_1^{ij}\right]\Gamma C^T+2\left[B^TP_g+RKC\right]\left[\left(\Gamma_1^{ij}\right)^T+\Gamma_1^{ij}\right]C^T\IEEEnonumber\\
&&+2B^T\left[\left(R_1^{ij}\right)^T+R_1^{ij}\right]\Gamma C^T+2RJ^{ij} C \Gamma C ^T,
\end{IEEEeqnarray}
where $k_{ij}$ denotes the entry in the $i$th row and $j$th column of the controller gain matrix $K$, and a set of Lyapunov equations is defined as follows,
\begin{IEEEeqnarray}{rCl}
LP_1^{ij}&=&-P_gBJ^{ij}C\IEEEyesnumber\IEEEyessubnumber\label{equation:lypunov_hessian_1}\\
L^*\Gamma_1^{ij}&=&-\Gamma \left(BJ^{ij}C\right)^T\IEEEyessubnumber\label{equation:lypunov_hessian_2}\\
LR_1^{ij}&=&-(KC)^TRJ^{ij}C.\IEEEyessubnumber\label{equation:lypunov_hessian_3}
\end{IEEEeqnarray}
\end{theorem}
\begin{proof}
By Theorem~\ref{theorem:gradient}, denote the gradient of the cost function in terms of the single element of the controller gain matrix in the inner product form, and then we have
\begin{IEEEeqnarray}{rCl}
\partial_{k_{ij}} J&=&2\left\langle{\Gamma ,P_gBJ^{ij}C}\right\rangle
+2\left\langle{ \Gamma,(KC)^TR J^{ij}C}\right\rangle.
\end{IEEEeqnarray}
Then the Hessian matrix of the cost function can be expressed in the scalar form,
\begin{IEEEeqnarray}{rCl}
\partial_{k_{mn}}\partial_{k_{ij}} J
&=&2\left\langle{\partial_{k_{mn}}\Gamma,P_gBJ^{ij}C}\right\rangle
+2\left\langle{\Gamma ,\partial_{k_{mn}}P_gBJ^{ij}C}\right\rangle\IEEEnonumber\\
&&+2\left\langle{\partial_{k_{mn}}\Gamma,(KC)^TR J^{ij}C}\right\rangle+2\left\langle{\Gamma,\left(J^{mn}C\right)^TR J^{ij}C}\right\rangle.
\end{IEEEeqnarray}
Note that $\Gamma = -\left(L^*\right)^{-1}(X_0)$. Define $L_k^*\Gamma=\Gamma \left(\partial_kA_c^T\right)+\left(\partial_k A_c\right)\Gamma$. Then it can be easily proved that
\begin{IEEEeqnarray}{rCl}
\partial_{k_{mn}}\Gamma&=&-\left(L^*\right)^{-1}L^*_{k_{mn}}\Gamma-\left(L^*\right)^{-1}\partial_{k_{mn}}X_0\IEEEnonumber\\
&=&-\left(L^*\right)^{-1}L^*_{k_{mn}}\Gamma.
\end{IEEEeqnarray}
Then we have
\begin{IEEEeqnarray}{rCl}
\partial_{k_{mn}}\partial_{k_{ij}} J
&=&2\left\langle{L^*_{k_{mn}}\Gamma,L^{-1}\left(-P_gBJ^{ij}C\right)}\right\rangle+2\left\langle{\Gamma ,\left(-L^{-1}L_{k_{mn}}P_g-L^{-1}\partial_{k_{mn}}Q_c\right)BJ^{ij}C}\right\rangle\IEEEnonumber\\
&&+2\left\langle{L^*_{k_{mn}}\Gamma,L^{-1}\left(-(KC)^TR J^{ij}C\right)}\right\rangle+2\left\langle{\Gamma,\left(J^{mn}C\right)^TR J^{ij}C}\right\rangle.
\end{IEEEeqnarray}
Since we have
$$\left\langle{\Gamma ,\left(-L^{-1}L_{k_{mn}}P_g-L^{-1}\partial_{k_{mn}}Q_c\right)BJ^{ij}C}\right\rangle = \left\langle{\Gamma\left(BJ^{ij}C\right)^T ,-L^{-1}L_{k_{mn}}P_g-L^{-1}\partial_{k_{mn}}Q_c}\right\rangle,
$$ the Hessian matrix is given by
\begin{IEEEeqnarray}{rCl}
\partial_{k_{mn}}\partial_{k_{ij}} J&=&2\left\langle{L^*_{k_{mn}}\Gamma,L^{-1}\left(-P_gBJ^{ij}C\right)}\right\rangle+2\left\langle{\left(L^*\right)^{-1}\left(-\Gamma\left(BJ^{ij}C\right)^T\right),L_{k_{mn}}P_g+\partial_{k_{mn}}Q_c}\right\rangle\IEEEnonumber\\
&&+2\left\langle{L^*_{k_{mn}}\Gamma,L^{-1}\left(-(KC)^TR J^{ij}C\right)}\right\rangle+2\left\langle{\Gamma,\left(J^{mn}C\right)^TR J^{ij}C}\right\rangle.
\end{IEEEeqnarray}
Note that $L_{k_{mn}}^*\Gamma=\Gamma\left(\partial_{k_{mn}}A_c^T\right)+\left(\partial_{k_{mn}}A_c\right)\Gamma=\Gamma \left(BJ^{mn}C\right)^T+\left(BJ^{mn}C\right)\Gamma$. Then the Hessian matrix can be expressed as
\begin{IEEEeqnarray}{rCl}
\partial_{k_{mn}}\partial_{k_{ij}} J&=&2\left\langle{\Gamma\left(BJ^{mn}C\right)^T+\left(BJ^{mn}C\right)\Gamma,L^{-1}\left(-P_gBJ^{ij}C\right)}\right\rangle\IEEEnonumber\\
&&+2\Big\langle\left(L^*\right)^{-1}\left(-\Gamma\left(BJ^{ij}C\right)^T\right),\left(BJ^{mn}C\right)^TP_g\IEEEnonumber\\
&&\quad+P_g\left(BJ^{mn}C\right)+\left(J^{mn}C\right)^TRKC+\left(KC\right)^TRJ^{mn}C\Big\rangle\IEEEnonumber\\
&&+2\left\langle{\Gamma\left(BJ^{mn}C\right)^T+\left(BJ^{mn}C\right)\Gamma,L^{-1}\left(-\left(KC\right)^TR J^{ij}C\right)}\IEEEnonumber\right\rangle\\
&&+2\left\langle{\Gamma,\left(J^{mn}C\right)^TR J^{ij}C}\right\rangle.
\end{IEEEeqnarray}
From \eqref{equation:lypunov_hessian_1}$-$\eqref{equation:lypunov_hessian_3}, it follows that
\begin{IEEEeqnarray}{rCl}
\partial_{k_{mn}}\partial_{k_{ij}} J
&=&2\left\langle{\Gamma\left(BJ^{mn}C\right)^T+\left(BJ^{mn}C\right)\Gamma,P_1^{ij}}\right\rangle\IEEEnonumber\\
&&+2\Big\langle\Gamma_1^{ij},\left(BJ^{mn}C\right)^TP_g+P_g\left(BJ^{mn}C\right)\IEEEnonumber\\
&&\quad+\left(J^{mn}C\right)^TRKC+\left(KC\right)^TRJ^{mn}C\Big\rangle\IEEEnonumber\\
&&+2\left\langle{\Gamma\left(BJ^{mn}C\right)^T+\left(BJ^{mn}C\right)\Gamma,R_1^{ij}}\right\rangle\IEEEnonumber\\
&&+2\left\langle{\Gamma,\left(J^{mn}C\right)^TR J^{ij}C}\right\rangle.
\end{IEEEeqnarray}
Then the Hessian matrix in the trace form is expressed as
\begin{IEEEeqnarray}{rCl}
\quad\partial_{k_{mn}}\partial_{k_{ij}} J
&=&2\text{Tr}\left(C\Gamma P_1^{ij}BJ^{mn}\right)+2\text{Tr}\left(B^T P_1^{ij}\Gamma C^T\left(J^{mn}\right)^T\right)\IEEEnonumber\\
&&+2\text{Tr}\left(C\Gamma_1^{ij}P_gBJ^{mn}\right)+2\text{Tr}\left(B^TP_g\Gamma_1^{ij} C^T\left(J^{mn}\right)^T\right)\IEEEnonumber\\
&&+2\text{Tr}\left(C\Gamma_1^{ij}\left(KC\right)^TRJ^{mn}\right)+2\text{Tr}\left(RKC\Gamma_1^{ij}C^T\left(J^{mn}\right)^T\right) \IEEEnonumber\\
&&+2\text{Tr}\left(C\Gamma R_1^{ij}BJ^{mn}\right)+2\text{Tr}\left(B^TR_1^{ij}\Gamma C^T\left(J^{mn}\right)^T\right) \IEEEnonumber\\
&&+2\text{Tr}\left(R J^{ij}C\Gamma C^T\left(J^{mn}\right)^T\right).
\end{IEEEeqnarray}
By the continuity and linearity of the trace operator, the Hessian matrix can be expressed as (\ref{equation:hessian}).
This completes the proof of Theorem~\ref{theorem:hessian_matrix}.
\end{proof}
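The element-wise Hessian formula can be cross-checked against finite differences of the analytic gradient. The Python sketch below (random stable system, purely for illustration) solves the three auxiliary Lyapunov equations \eqref{equation:lypunov_hessian_1}$-$\eqref{equation:lypunov_hessian_3} and compares one Hessian block with a central difference of the gradient:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov as lyap

rng = np.random.default_rng(0)
n, m, q = 4, 2, 2
A = rng.standard_normal((n, n))
A -= (np.max(np.linalg.eigvals(A).real) + 1.0) * np.eye(n)  # make A Hurwitz
B = rng.standard_normal((n, m))
C = rng.standard_normal((q, n))
Q, R, X0 = np.eye(n), np.eye(m), np.eye(n)
K = 0.01 * rng.standard_normal((m, q))  # small perturbation keeps A + BKC stable

def grad(K):
    Ac = A + B @ K @ C
    Pg = lyap(Ac.T, -(Q + C.T @ K.T @ R @ K @ C))  # L P_g = -Q_c
    Gam = lyap(Ac, -X0)                            # L* Gamma = -X_0
    return 2.0 * (B.T @ Pg + R @ K @ C) @ Gam @ C.T

def hess_block(K, i, j):
    # d/dK of the (i,j) gradient entry, from the three auxiliary Lyapunov equations
    Ac = A + B @ K @ C
    Pg = lyap(Ac.T, -(Q + C.T @ K.T @ R @ K @ C))
    Gam = lyap(Ac, -X0)
    Jij = np.zeros((m, q)); Jij[i, j] = 1.0
    E = B @ Jij @ C
    P1 = lyap(Ac.T, -Pg @ E)                   # L P_1^{ij}  = -P_g B J^{ij} C
    G1 = lyap(Ac, -Gam @ E.T)                  # L* G_1^{ij} = -Gamma (B J^{ij} C)^T
    R1 = lyap(Ac.T, -(K @ C).T @ R @ Jij @ C)  # L R_1^{ij}  = -(KC)^T R J^{ij} C
    return (2 * B.T @ (P1.T + P1) @ Gam @ C.T
            + 2 * (B.T @ Pg + R @ K @ C) @ (G1.T + G1) @ C.T
            + 2 * B.T @ (R1.T + R1) @ Gam @ C.T
            + 2 * R @ Jij @ C @ Gam @ C.T)

# reference: central finite difference of the analytic gradient in direction (i, j)
h, (i, j) = 1e-5, (0, 1)
E = np.zeros((m, q)); E[i, j] = h
fd = (grad(K + E) - grad(K - E)) / (2 * h)
```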
\subsection{Indefiniteness of the Hessian Matrix}
Indefiniteness of the Hessian matrix is a pervasive issue in non-convex optimization, and second-order algorithms for nonlinear optimization problems have been widely studied.
Intuitively, finding a locally optimal point of a non-convex problem should be about as simple as finding the globally optimal point of a convex problem, but in practice many more steps are required to reach a locally optimal point. This is because of the saddle points that pervade non-convex problems. It has been shown that for non-convex optimization problems, it is the saddle points that impede the optimization procedure~\cite{dauphin2014identifying}. Therefore, how to evade the saddle points becomes a critical problem.
An intuitive way to evade saddle points is to rescale each component of the gradient by the inverse of the absolute value of the corresponding eigenvalue, i.e., rescale $\left({dJ}/{dK}\right)_i$ by ${1}/{|\lambda_i|}$, where $\lambda_i$ is the $i$th eigenvalue of the Hessian matrix~\cite{dauphin2014identifying}. Adding a multiple of the identity to the indefinite Hessian so that $(\alpha I+H)$ is positive definite~\cite{tassa2012synthesis} and taking the absolute value of the Hessian~\cite{nocedal2006numerical} are also commonly used in the existing literature. However, such heuristics still lack full theoretical support, and often even an intuitive explanation.
Even though many algorithms have been proposed, how to evade saddle points when second-order methods are applied to non-convex optimization problems is still an open question. In this paper, the positive definite truncated (PT)-inverse method proposed in~\cite{paternain2019newton} is utilized.
Since the PT-inverse guarantees a positive definite curvature matrix, each iteration step is a proper descent direction, along which a sub-optimal point can be reached.
\subsection{Equality Constrained Newton's Method}
Since the controller gain matrix $K\in \mathbb R^{m\times q}$ is not in vector form, the Hessian matrix of the cost function cannot be written explicitly. By expanding the controller gain matrix into vector form, we can carry out the optimization in terms of the vectorized controller gain; afterwards, the gain can easily be converted back to matrix form for implementation.
Theorem~\ref{theorem:vector_constraints} shows that the linear equality constraints can be expressed explicitly in vector form.
\begin{theorem}\label{theorem:vector_constraints}
The linear equality constraints defined in \eqref{eqaution:matrix_constraints} can be converted to the vector form, which can be expressed as
\begin{IEEEeqnarray}{rCl}
\bar{\mathcal{A}} \mathrm{vec}(K) = \bar{\mathcal{C}},
\end{IEEEeqnarray}
where
\begin{IEEEeqnarray}{rCl}\label{equation:transformation}
\bar{\mathcal{A}} &=&
\Bigg[
\sum_{i=1}^{m_1}\left(\left({\mathcal B}_i^{(1)}\right)^T\otimes{\mathcal A}_i^{(1)}\right);
\sum_{i=1}^{m_2}\left(\left({\mathcal B}_i^{(2)}\right)^T\otimes{\mathcal A}_i^{(2)}\right);\dots;\sum_{i=1}^{m_N}\left(\left({\mathcal B}_i^{(N)}\right)^T\otimes{\mathcal A}_i^{(N)}\right)\Bigg]\IEEEyesnumber\IEEEyessubnumber\\
\bar{\mathcal{C}} &=& \left[\mathrm{vec} \left({\mathcal C}_0^{(1)}\right); \mathrm{vec} \left({\mathcal C}_0^{(2)}\right);\dots;\mathrm{vec} \left({\mathcal C}_0^{(N)}\right)\right].\IEEEyessubnumber
\end{IEEEeqnarray}
\end{theorem}
\begin{proof}
By vectorizing both sides of the constraints in matrix form (\ref{eqaution:matrix_constraints}), we can derive
\begin{IEEEeqnarray}{rCl}
\Bigg[\sum_{i=1}^{m_j}\left(\left({\mathcal B}_i^{(j)}\right)^T\otimes{\mathcal A}_i^{(j)}\right)\Bigg]\mathrm{vec}(K) = \mathrm{vec}\left({\mathcal C}_0^{(j)}\right),
\end{IEEEeqnarray}
where $j$ denotes the $j$th linear equality constraint. Stacking these equations in block matrix form completes the proof of Theorem~\ref{theorem:vector_constraints}.
\end{proof}
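The proof relies on the standard identity $\mathrm{vec}({\mathcal A}K{\mathcal B})=({\mathcal B}^T\otimes{\mathcal A})\mathrm{vec}(K)$ for column-major vectorization, which the following Python sketch (random matrices, purely for illustration) confirms:

```python
import numpy as np

rng = np.random.default_rng(0)
m, q = 3, 4
K = rng.standard_normal((m, q))
A1 = rng.standard_normal((2, m))  # plays the role of one constraint matrix A_i^{(j)}
B1 = rng.standard_normal((q, 5))  # plays the role of one constraint matrix B_i^{(j)}

vec = lambda X: X.flatten(order="F")  # column-major vectorization

lhs = vec(A1 @ K @ B1)
rhs = np.kron(B1.T, A1) @ vec(K)
```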
For the linear equality constrained Newton's method, the iterate after each step must stay in the feasible region, i.e., $\bar{\mathcal{A}} \mathrm{vec} (K+\Delta K) = \bar{\mathcal{C}}$. Therefore, temporarily ignoring the stability constraint, we have the following optimization problem at a specific point $K=K_s$,
\begin{IEEEeqnarray}{rl}\label{equation:Newton_optimization_problem}
\underset{\mathrm{vec}(\Delta K)\in\mathbb R^{mq}}{\mathrm{minimize}}\quad &\bar J(\mathrm{vec}(K_s+\Delta K))=J(K_s)+G_v^T\mathrm{vec}(\Delta K)+\frac{1}{2}\mathrm{vec}(\Delta K)^T H_v\mathrm{vec}(\Delta K) \IEEEnonumber\\
\;\;\text{subject to}\quad &\bar{\mathcal{A}} \left(\mathrm{vec} (K_s)+\mathrm{vec}(\Delta K)\right) = \bar{\mathcal{C}}.
\end{IEEEeqnarray}
By using the analytical solution to the linear quadratic optimization problem, we can write (\ref{equation:Newton_optimization_problem}) in the matrix form,
\begin{IEEEeqnarray}{LL}\label{equation:Newton_step}
\begin{bmatrix}
H_v & \bar {\mathcal{A}}^T\\
\bar {\mathcal {A}} & 0
\end{bmatrix}
\begin{bmatrix}
\mathrm{vec}(\Delta K)\\ w
\end{bmatrix}=
\begin{bmatrix}
-G_v\\0
\end{bmatrix},
\end{IEEEeqnarray}
where $w$ is the dual variable vector with the appropriate dimension for the linear quadratic optimization problem, $G_v\in\mathbb R^{mq}$ and $H_v\in \mathbb R^{mq\times mq}$ are given as
\begin{IEEEeqnarray}{rCl}
G_v&=&\mathrm{vec}\left(\frac{dJ}{dK}\right)\IEEEyesnumber\IEEEyessubnumber~\label{equation:gradient_vector}\\
H_v&=&\bigg[
\mathrm{vec}\left(\frac{\partial^2 J}{\partial k_{11}\partial K}\right),
\dots,
\mathrm{vec}\left(\frac{\partial^2 J}{\partial k_{m1}\partial K}\right),\quad\mathrm{vec}\left(\frac{\partial^2 J}{\partial k_{12}\partial K}\right),
\dots,
\mathrm{vec}\left(\frac{\partial^2 J}{\partial k_{m2}\partial K}\right),\IEEEnonumber\\
&&\quad\dots,\quad\mathrm{vec}\left(\frac{\partial^2 J}{\partial k_{1q}\partial K}\right),
\dots,
\mathrm{vec}\left(\frac{\partial^2 J}{\partial k_{mq}\partial K}\right)
\bigg].\IEEEyessubnumber~\label{equation:hessian_vector}
\end{IEEEeqnarray}
Then in each iteration, we can derive the Newton step $\mathrm{vec}(\Delta K)$ by solving (\ref{equation:Newton_step}).
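A minimal Python sketch of this step (with a stand-in positive definite Hessian and random constraint data, purely for illustration) assembles and solves the saddle-point system (\ref{equation:Newton_step}):

```python
import numpy as np

rng = np.random.default_rng(0)
p, d = 6, 2  # p = mq entries of vec(K), d = number of scalar equality constraints
Hv = rng.standard_normal((p, p))
Hv = Hv @ Hv.T + np.eye(p)       # stand-in positive definite Hessian
Abar = rng.standard_normal((d, p))
Gv = rng.standard_normal(p)

# Assemble and solve the KKT (saddle-point) system for the Newton step
KKT = np.block([[Hv, Abar.T],
                [Abar, np.zeros((d, d))]])
sol = np.linalg.solve(KKT, np.concatenate([-Gv, np.zeros(d)]))
dK_vec, w = sol[:p], sol[p:]
```

The solved step satisfies the stationarity condition and lies in the null space of $\bar{\mathcal A}$, so a feasible iterate stays feasible.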
However, since this problem is non-convex, the indefiniteness of the Hessian matrix must be considered. Integrated with the PT-inverse method, the Newton step $\mathrm{vec}(\Delta K)$ is given by solving the following matrix equation,
\begin{IEEEeqnarray}{LL}\label{equation:Newton_step_PT}
\begin{bmatrix}
H_{v,\epsilon} & {\bar {\mathcal{A}}}^T\\
\bar {\mathcal {A}} & 0
\end{bmatrix}
\begin{bmatrix}
\mathrm{vec}(\Delta K)\\ w
\end{bmatrix}=
\begin{bmatrix}
-G_v\\0
\end{bmatrix},
\end{IEEEeqnarray}
where $H_{v,\epsilon}$ is the PT-matrix for the Hessian matrix $H_v$. To calculate the PT-matrix,
we use the eigenvalue decomposition of the symmetric matrix $H_v$. Denote $H_v=M\Lambda M^T$, where $M\in\mathbb R^{mq\times mq}$ is an orthogonal matrix, and $\Lambda \in \mathbb S^{mq}$ is the diagonal matrix of eigenvalues. Define the positive definite truncated eigenvalue matrix $\Lambda_\epsilon$ with the parameter $\epsilon$ as
\begin{IEEEeqnarray}{c}
(\Lambda_\epsilon)_{ii}=
\begin{cases}
|\Lambda_{ii}|& \text{if } |\Lambda_{ii}|\ge \epsilon \\
\epsilon & \text{otherwise}.
\end{cases}
\end{IEEEeqnarray}
The PT-matrix of the Hessian matrix $H_v$ with the parameter $\epsilon$, which is denoted by $H_{v,\epsilon}$, is given by $H_{v,\epsilon} = M\Lambda_\epsilon M^T$.
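A minimal Python sketch of the PT-matrix construction (the indefinite test matrix is an illustrative assumption):

```python
import numpy as np

def pt_matrix(Hv, eps):
    """Positive definite truncated (PT) matrix of a symmetric Hv."""
    lam, M = np.linalg.eigh(Hv)  # Hv = M diag(lam) M^T, M orthogonal
    lam_eps = np.where(np.abs(lam) >= eps, np.abs(lam), eps)
    return M @ np.diag(lam_eps) @ M.T

rng = np.random.default_rng(0)
S = rng.standard_normal((5, 5))
Hv = 0.5 * (S + S.T)             # indefinite symmetric stand-in Hessian
H_eps = pt_matrix(Hv, 1e-6)
```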
From~\cite{nocedal2006numerical}, each step $\mathrm{vec}(\Delta K)$ is then guaranteed to be a descent step. Since the cost of an unstable closed-loop system is infinite, stability is preserved throughout the iterations as long as the initial gain stabilizes the closed-loop system and the cost values form a decreasing sequence.
Algorithm~\ref{algorithm: back_tracking_line_search} summarizes the modified backtracking line search used in this paper.
Then the linear equality constrained second-order non-convex optimization algorithm is summarized in Algorithm~\ref{algorithm:optimization}.
\begin{remark}
$P_g$ must be positive definite to ensure the stability of the system, which follows directly from the Lyapunov stability theorem for linear systems. Therefore, at each step, the positive definiteness of the $P_g$ matrix must be guaranteed in the backtracking line search algorithm.
\end{remark}
\begin{remark}
The optimization method proposed by this paper can be applied to the linear equality constrained SOF optimization problem as well as the unconstrained SOF optimization. For the unconstrained SOF problem, Newton's method can be applied to find the descent step. For both the constrained and unconstrained cases, super-linear convergence can be achieved due to the second-order optimization algorithm.
\end{remark}
\begin{algorithm}
\caption{Backtracking line search with guaranteed stability }
\begin{algorithmic}[1]\label{algorithm: back_tracking_line_search}
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\REQUIRE Current controller gain matrix $K$, descent direction $\Delta K$, gradient $dJ/dK$, and backtracking parameters $\alpha \in (0,0.5)$, $\beta\in(0,1)$
\ENSURE Controller gain matrix after iteration $K'$
\\ \textit{Initialization} $t=1$:
\WHILE {\TRUE}
\STATE Compute $P_g\in\mathbb R^{n\times n}$ for $J(K+t\Delta K)$ \\
\IF {$J(K+t\Delta K) < J(K)+\alpha t \text{Tr}\left(\left(dJ/dK\right)^T\Delta K\right)$ \AND $\min \{\text{eig}(P_g)\} > 0$}
\STATE \textbf{break}
\ELSE
\STATE $t=\beta t$
\ENDIF
\ENDWHILE
\RETURN $K'=K+t\Delta K$
\end{algorithmic}
\end{algorithm}
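Algorithm~\ref{algorithm: back_tracking_line_search} can be sketched in Python on a small randomly generated SOF instance (illustrative only; the paper's experiments use MATLAB). The stability test checks the positive definiteness of $P_g$, as required by the remark above:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov as lyap

rng = np.random.default_rng(0)
n, m, q = 3, 1, 2
A = rng.standard_normal((n, n))
A -= (np.max(np.linalg.eigvals(A).real) + 1.0) * np.eye(n)  # make A Hurwitz
B = rng.standard_normal((n, m))
C = rng.standard_normal((q, n))
Q, R, X0 = np.eye(n), np.eye(m), np.eye(n)

def solve_Pg(K):
    Ac = A + B @ K @ C
    return lyap(Ac.T, -(Q + C.T @ K.T @ R @ K @ C))  # L P_g = -Q_c

def cost(K):
    return np.trace(solve_Pg(K) @ X0)

def grad(K):
    Ac = A + B @ K @ C
    Gam = lyap(Ac, -X0)                              # L* Gamma = -X_0
    return 2.0 * (B.T @ solve_Pg(K) + R @ K @ C) @ Gam @ C.T

def backtrack(K, dK, G, alpha=0.2, beta=0.5):
    """Shrink t until the Armijo condition holds and P_g is positive definite."""
    t = 1.0
    while True:
        Kn = K + t * dK
        P = solve_Pg(Kn)
        stable = np.min(np.linalg.eigvalsh(0.5 * (P + P.T))) > 0
        if stable and cost(Kn) < cost(K) + alpha * t * np.trace(G.T @ dK):
            return Kn
        t *= beta

K0 = np.zeros((m, q))  # stabilizing since A is Hurwitz
G = grad(K0)
K1 = backtrack(K0, -G, G)  # one line-searched gradient step
```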
\begin{algorithm}
\caption{Second-order optimization algorithm for the SOF LQR problem}
\begin{algorithmic}[1]\label{algorithm:optimization}
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\REQUIRE Stable controller gain matrix $K$, tolerance $\varepsilon>0$
\ENSURE Sub-optimal controller gain matrix $K^*$
\WHILE {\TRUE}
\STATE Compute the gradient vector of the cost function $G_v$ by \eqref{equation:gradient} and (\ref{equation:gradient_vector})
\STATE Compute the Hessian matrix of the cost function $H_v$ by \eqref{equation:hessian} and (\ref{equation:hessian_vector})
\STATE Compute the PT-matrix $H_{v,\epsilon}$ of the Hessian matrix $H_v$
\STATE Compute the Newton step $\mathrm{vec}(\Delta K)$ by (\ref{equation:Newton_step_PT})
\IF {$\|\mathrm{vec}(\Delta K)\|\le \varepsilon$}
\STATE \textbf{break}
\ENDIF
\STATE Conduct the line search using Algorithm~\ref{algorithm: back_tracking_line_search} to find the controller gain matrix $K'$ for the next iteration
\ENDWHILE
\RETURN $K^*=K$
\end{algorithmic}
\end{algorithm}
\section{Numerical Examples}\label{section:numerical_example}
In this section, two examples are worked through
to demonstrate the effectiveness and applicability
of the proposed second-order optimization method.
The first example, which is a benchmark problem introduced in~\cite{choi1974computation},
is to design an SOF controller for a given fourth-order system without any constraints.
The second example is to design a linear equality constrained SOF controller
for a third-order decentralized system.
Both the first-order optimization algorithm with the gradient projection method and
the proposed second-order optimization algorithm are applied to solve the SOF problem.
Comparative results are given to demonstrate the performance of both methods.
Both of the SOF problems in the given examples are solved on
a computer with 16 GB of RAM and a 2.2 GHz Intel i7-8750H processor (6 cores),
and the optimization algorithms are implemented and executed in MATLAB R2019b.
\begin{example}
The fourth-order model of an aircraft system is given by
\begin{IEEEeqnarray}{rCl}
\dot x(t) &=& Ax(t)+Bu(t)\IEEEnonumber\\
y(t)&=& Cx(t)\IEEEnonumber\\
u(t)&=&Ky(t),
\end{IEEEeqnarray}
where
\begin{IEEEeqnarray}{l}
A=\left[
\begin{array}{llll}
-0.03700 & \phantom+0.01230 & \phantom+0.00055 & -1.00000\\
\phantom+ 0.00000 & \phantom+0.00000 & \phantom+1.00000 & \phantom+0.00000\\
-6.37000 & \phantom+0.00000 & -0.23000 & \phantom+0.06180\\
\phantom+ 1.25000 & \phantom+0.00000 & \phantom+0.01600 & -0.04570
\end{array}
\right]\IEEEnonumber\\
B=\left[
\begin{array}{ll}
\phantom+0.000840 & \phantom+0.000236\\
\phantom+0.000000 & \phantom+0.000000\\
\phantom+0.080000 & \phantom+0.804000\\
-0.086200 & -0.066500
\end{array}
\right]\IEEEnonumber\\
C=\left[
\begin{array}{llll}
0 & 1 & 0 & 0\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1
\end{array}
\right].
\end{IEEEeqnarray}
An optimal controller, which is denoted by
\begin{IEEEeqnarray}{l}
K=\left[
\begin{array}{llll}
k_{11} & k_{12} & k_{13} \\
k_{21} & k_{22} & k_{23}
\end{array}
\right],
\end{IEEEeqnarray}
is designed to minimize the cost function as given by
\begin{IEEEeqnarray}{rCl}
J &=&\displaystyle\int_0^\infty \left(x(t)^TQ x(t)+u(t)^TR u(t)\right) dt,
\end{IEEEeqnarray}
where the weighting parameters are chosen as $Q=I,\; R=I$ for demonstrative purposes. The initial system state $x_0$ is chosen as a random vector with $\mathbb E\left(x_0x_0^T\right)=I$.
\end{example}
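With $\mathbb E(x_0x_0^T)=I$ and a stabilizing gain, the expected cost equals $J(K)=\operatorname{tr}(P)$, where $P$ solves the closed-loop Lyapunov equation $(A+BKC)^TP+P(A+BKC)+Q+C^TK^TRKC=0$. A minimal Python sketch of this evaluation (assuming SciPy is available; the matrix entries are copied from the example, and `sof_cost` is an illustrative name):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[-0.0370, 0.0123, 0.00055, -1.0],
              [ 0.0,    0.0,    1.0,      0.0],
              [-6.37,   0.0,   -0.23,     0.0618],
              [ 1.25,   0.0,    0.016,   -0.0457]])
B = np.array([[ 0.00084,  0.000236],
              [ 0.0,      0.0],
              [ 0.08,     0.804],
              [-0.0862,  -0.0665]])
C = np.array([[0., 1., 0., 0.],
              [0., 0., 1., 0.],
              [0., 0., 0., 1.]])
Q, R = np.eye(4), np.eye(2)

def sof_cost(K):
    """Expected LQR cost J(K) = trace(P) with E[x0 x0^T] = I."""
    Acl = A + B @ K @ C
    if np.max(np.linalg.eigvals(Acl).real) >= 0:
        return np.inf  # unstable closed loop: infinite cost
    Qcl = Q + C.T @ K.T @ R @ K @ C
    # Solve Acl^T P + P Acl = -Qcl
    P = solve_continuous_lyapunov(Acl.T, -Qcl)
    return float(np.trace(P))
```

Evaluating at the zero initial gain gives a finite cost, consistent with the stability of the open-loop matrix used as the starting point below.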
The initial controller gain matrix is chosen as
\begin{IEEEeqnarray}{l}
K_0=\left[
\begin{array}{llll}
0 & 0 & 0 \\
0 & 0 & 0
\end{array}
\right],
\end{IEEEeqnarray}
with which the closed-loop system is stable.
The stopping criterion is chosen as $\varepsilon = 1\times 10^{-9}$. For both the first-order and the second-order optimization methods, Algorithm~\ref{algorithm: back_tracking_line_search} is used to choose a suitable step size. The parameters for the backtracking line search are chosen as $\alpha = 0.2$ and $\beta = 0.1$. The parameter for the PT-matrix, used in the second-order optimization method, is chosen as $\epsilon = 1\times 10^{-9}$.
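The backtracking line search referred to above can be sketched as a generic Armijo-type loop with the paper's parameters $\alpha=0.2$, $\beta=0.1$; this is an illustrative sketch (not the paper's exact Algorithm~\ref{algorithm: back_tracking_line_search}), in which the stability safeguard is implemented by letting the cost return $+\infty$ for destabilizing gains.

```python
import numpy as np

def backtracking_step(K, dK, grad, cost, alpha=0.2, beta=0.1,
                      t0=1.0, max_iter=50):
    """Armijo backtracking: shrink the step t by beta until the cost
    decreases by at least alpha * t * <grad, dK>.  A cost that returns
    +inf for destabilizing gains automatically enforces closed-loop
    stability of the accepted iterate."""
    t = t0
    J0 = cost(K)
    slope = np.sum(grad * dK)  # directional derivative along dK
    for _ in range(max_iter):
        if cost(K + t * dK) <= J0 + alpha * t * slope:
            return t
        t *= beta
    return t
```

For a simple quadratic cost and a steepest-descent direction, the full step $t=1$ fails the sufficient-decrease test and the first shrink to $t=0.1$ is accepted.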
\begin{figure}[!t]
\centering
\includegraphics[width=1\textwidth]{first_order_norm.pdf}
\caption{Norm of the gradient during iterations with the first-order optimization method in Example 1 (in log scale).}
\label{fig:first_order_norm}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[width=1\textwidth]{first_order_cost.pdf}
\caption{Distance to the sub-optimal point during iterations with the first-order optimization method in Example 1 (in log scale).}
\label{fig:first_order_cost}
\end{figure}
Fig.~\ref{fig:first_order_norm} shows the norm of the gradient of the cost function with respect to the controller gain matrix for the first-order optimization method. The norm exhibits a decreasing trend over the iterations. Since it takes too many iterations to satisfy the stopping criterion, and the trend of the curve is much clearer with fewer data points, a relaxed stopping criterion $\varepsilon = 1\times 10^{-5}$ is chosen for the first-order method. It takes 624 iterations to reach the sub-optimal point with the norm of the gradient $\|\mathrm{vec}(\Delta K^*)\|=9.5772\times 10^{-6}$; if the backtracking line search iterations are also counted, it takes 1696 iterations in total to reach the sub-optimal point under this stopping criterion. Except for the first few iterations, the rate of convergence is linear.
Fig.~\ref{fig:first_order_cost} shows the distance to the reachable sub-optimal point, defined as $E=J(K)-J(K^*)$, at each iteration of the first-order optimization method. The distance $E$ decreases after each iteration. The reachable sub-optimal cost with the first-order optimization method in this example is $J(K^*)=159.0686$, and it takes 154.4332 seconds to reach this point. Except for the first few iterations, the rate of convergence for the distance $E$ is almost linear. The sub-optimal gain matrix given by the first-order method is
\begin{IEEEeqnarray}{l}
K^*_{(1)}=\left[
\begin{array}{llll}
\phantom+0.3975 & \phantom+1.5925 & \phantom+7.8522 \\
-1.2575 & -3.4823 & -5.0040
\end{array}
\right].
\end{IEEEeqnarray}
\begin{figure}[!t]
\centering
\includegraphics[width=1\textwidth]{second_order_norm.pdf}
\caption{Norm of the gradient during iterations with the second-order optimization method in Example 1 (in log scale).}
\label{fig:second_order_norm}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[width=1\textwidth]{second_order_cost.pdf}
\caption{Distance to the sub-optimal point during iterations with the second-order optimization method in Example 1 (in log scale).}
\label{fig:second_order_cost}
\end{figure}
Fig.~\ref{fig:second_order_norm} shows the norm of the gradient of the cost function with respect to the controller gain matrix for the proposed second-order optimization method. The norm decreases after each iteration. Compared with the first-order optimization method, the second-order optimization method
converges significantly faster.
It takes only 23 iterations to reach the sub-optimal point with the norm of the gradient $\|\mathrm{vec}(\Delta K^*)\|=1.7571\times 10^{-13}$; counting the backtracking line search iterations as well, it takes 24 iterations in total. In this example, therefore, the backtracking line search finds a satisfactory point in almost every iteration, which means the second-order optimization method saves much computational effort in finding the step size. This is because the second-order method uses a quadratic approximation of the cost function instead of an affine one, so the step size can be estimated directly, whereas in the first-order method it can only be chosen by the line search.
Fig.~\ref{fig:second_order_cost} shows the distance $E$ during the iterations of the proposed second-order optimization method. The distance $E$ decreases after each iteration. The reachable sub-optimal cost with the second-order optimization method in this example is $J(K^*)=159.0686$, and it takes only 3.0150 seconds to reach this point. When the parameters approach the sub-optimal point, the method achieves second-order convergence, so the parameters converge much faster than with the first-order method. The sub-optimal gain matrix given by the second-order method is
\begin{IEEEeqnarray}{l}
K^*_{(2)}=\left[
\begin{array}{llll}
\phantom+0.3975 & \phantom+1.5925 & \phantom+7.8522 \\
-1.2575 & -3.4823 & -5.0041
\end{array}
\right].
\end{IEEEeqnarray}
\begin{example}
Next, a third-order system with the following structure is considered,
\begin{IEEEeqnarray}{rCl}
\dot x(t) &=& Ax(t)+Bu(t)\IEEEnonumber\\
y(t)&=& Cx(t)\IEEEnonumber\\
u(t)&=&Ky(t),
\end{IEEEeqnarray}
where
\begin{IEEEeqnarray}{l}
A=\left[
\begin{array}{lll}
-4 & \phantom+2 & \phantom+1\\
\phantom+3 & -2 & \phantom+5\\
-7 & \phantom+0 & \phantom+3
\end{array}
\right]\quad
B=\left[
\begin{array}{ll}
1 & 0\\
1 & 0\\
0 & 1
\end{array}
\right]\quad
C=\left[
\begin{array}{llll}
0 & 1 & 0\\
0 & 0 & 1
\end{array}
\right].
\end{IEEEeqnarray}
A decentralized optimal controller, which is denoted by
\begin{IEEEeqnarray}{l}
K=\left[
\begin{array}{llll}
k_{11} & 0 \\
0 & k_{22}
\end{array}
\right],
\end{IEEEeqnarray}
is designed to minimize the cost function as given by
\begin{IEEEeqnarray}{rCl}
J &=&\displaystyle\int_0^\infty \left(x(t)^TQ x(t)+u(t)^TR u(t)\right) dt,
\end{IEEEeqnarray}
where the weighting parameters are chosen as $Q=I,\; R=I$ for demonstrative purposes.
\end{example}
The decentralized linear equality constraints are denoted as
\begin{IEEEeqnarray}{rcl}
\mathcal A_1^{(1)} K \mathcal B_1^{(1)} &=& \mathcal C_1^{(1)}\IEEEnonumber\\
\mathcal A_1^{(2)} K \mathcal B_1^{(2)} &=& \mathcal C_1^{(2)},
\end{IEEEeqnarray}
where
\begin{IEEEeqnarray}{l}
\mathcal A_1^{(1)}=\left[
\begin{array}{lll}
1 & 0
\end{array}
\right]\quad
\mathcal B_1^{(1)}=\left[
\begin{array}{ll}
0 \\ 1
\end{array}
\right]\quad
\mathcal C_1^{(1)}=0 \IEEEnonumber\\
\mathcal A_1^{(2)}=\left[
\begin{array}{lll}
0 & 1
\end{array}
\right]\quad
\mathcal B_1^{(2)}=\left[
\begin{array}{ll}
1 \\ 0
\end{array}
\right]\quad
\mathcal C_1^{(2)}=0.
\end{IEEEeqnarray}
By using (\ref{equation:transformation}), we have
\begin{IEEEeqnarray}{l}
\bar {\mathcal A}=\left[
\begin{array}{llll}
0 & 0 & 1 & 0\\
0 & 1 & 0 & 0
\end{array}
\right]\quad
\bar {\mathcal C}=\left[
\begin{array}{ll}
0 \\ 0
\end{array}
\right].
\end{IEEEeqnarray}
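The transformation (\ref{equation:transformation}) from the constraint pairs $(\mathcal A_1^{(i)}, \mathcal B_1^{(i)})$ to $\bar{\mathcal A}$ is the standard vectorization identity $\mathrm{vec}(\mathcal A K \mathcal B) = (\mathcal B^T \otimes \mathcal A)\,\mathrm{vec}(K)$ with column-major $\mathrm{vec}$. A quick numerical check of the matrices displayed above:

```python
import numpy as np

A1 = np.array([[1., 0.]]); B1 = np.array([[0.], [1.]])
A2 = np.array([[0., 1.]]); B2 = np.array([[1.], [0.]])

# vec(A K B) = (B^T kron A) vec(K), with column-major (Fortran-order) vec,
# so each constraint row is kron(B_i^T, A_i).
Abar = np.vstack([np.kron(B1.T, A1), np.kron(B2.T, A2)])
```

The rows reproduce $[0\;0\;1\;0]$ and $[0\;1\;0\;0]$, i.e., the constraints $k_{12}=0$ and $k_{21}=0$ on $\mathrm{vec}(K) = (k_{11}, k_{21}, k_{12}, k_{22})^T$.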
In this example, the stopping criterion is chosen as $\varepsilon = 1\times 10^{-9}$ and the initial system state vector is chosen as a random vector with $\mathbb E\left(x_0x_0^T\right)=I$. For both the first-order and the second-order optimization methods, the backtracking line search with stability guarantee is used. The parameters for the backtracking line search are chosen as $\alpha = 0.2$ and $\beta = 0.1$. The parameter for the PT-matrix, used in the second-order optimization method, is chosen as $\epsilon = 1\times 10^{-6}$. The initial controller gain matrix is chosen as
\begin{IEEEeqnarray}{l}
K_0=\left[
\begin{array}{llll}
-2 & \phantom+0 \\
\phantom+0 & -3
\end{array}
\right],
\end{IEEEeqnarray}
which stabilizes the closed-loop system.
\begin{figure}[!t]
\centering
\includegraphics[width=1\textwidth]{first_order_norm_second_example.pdf}
\caption{Norm of the gradient during iterations with the first-order optimization method in Example 2 (in log scale).}
\label{fig:first_order_norm_second_example}
\end{figure}
Fig.~\ref{fig:first_order_norm_second_example} shows the norm of the gradient of the cost function with respect to the controller gain matrix during the iterations of the first-order optimization method. The first-order optimization method with the gradient projection method takes 118 iterations (209 iterations in total when the backtracking line search iterations are included) to satisfy the stopping criterion, and 16.9911 seconds to reach the sub-optimal point. The cost function value at the initial controller gain matrix is 22.2010, and after 118 iterations the value of the cost function decreases to 12.8281.
The rate of convergence is clearly linear in most of the iterations.
\begin{figure}[!t]
\centering
\includegraphics[width=1\textwidth]{second_order_norm_second_example.pdf}
\caption{Norm of the gradient during iterations with the second-order optimization method in Example 2 (in log scale).}
\label{fig:second_order_norm_second_example}
\end{figure}
Fig.~\ref{fig:second_order_norm_second_example} shows the norm of the gradient of the cost function with respect to the controller gain matrix during the iterations of the second-order optimization method. The second-order optimization method with the equality-constrained Newton's method needs only 8 iterations (8 iterations in total, including the backtracking line search iterations) to satisfy the stopping criterion, and 1.5021 seconds to reach the sub-optimal point. The cost function value at the initial controller gain matrix is 22.2010, and after 8 iterations the value of the cost function decreases to 12.8281. Compared with the first-order method, the second-order method achieves a much higher rate of convergence. The sub-optimal gain matrices given by both methods are the same:
\begin{IEEEeqnarray}{l}
K^*=\left[
\begin{array}{ll}
-1.3211 & \phantom+0 \\
\phantom+0 & -6.0723
\end{array}
\right].
\end{IEEEeqnarray}
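The equality-constrained Newton direction used in this example can be obtained from the standard KKT system; the sketch below (with illustrative function name `constrained_newton_step`) solves for a direction $d$ satisfying $\bar{\mathcal A} d = 0$ given a Hessian surrogate $H$ and gradient $g$ in vectorized form.

```python
import numpy as np

def constrained_newton_step(H, g, Abar):
    """Solve the KKT system [[H, A^T], [A, 0]] [d; nu] = [-g; 0],
    returning the Newton direction d restricted to the null space of
    the constraint matrix A (so the structural zeros of K are kept)."""
    n, m = H.shape[0], Abar.shape[0]
    KKT = np.block([[H, Abar.T],
                    [Abar, np.zeros((m, m))]])
    rhs = np.concatenate([-g, np.zeros(m)])
    sol = np.linalg.solve(KKT, rhs)
    return sol[:n]
```

With $H=I$ this reduces to projecting $-g$ onto the feasible directions, zeroing exactly the off-diagonal entries of the decentralized gain.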
\section{Conclusion}\label{section:conclusion}
In this paper, a second-order non-convex optimization method is proposed
to solve the constrained fixed-structure SOF problem.
Firstly, an efficient method in the matrix space is proposed to derive the Hessian matrix
of the cost function with respect to the controller gain matrix.
Secondly, the PT-inverse method is utilized to handle possible indefiniteness of the Hessian matrix.
Thirdly, the equality-constrained Newton's method is proposed to solve
the controller optimization problem with structural constraints.
Finally, two illustrative examples are given to verify the
applicability and effectiveness of the
proposed method.
Comparisons between the first-order method and the second-order method proposed here
show the greatly improved performance of the proposed algorithm.
With this algorithm, SOF LQR problems can
be solved with high accuracy and efficiency.
\bibliographystyle{ieeetran}
\section{Introduction}
The subject of this paper is a set of related topics centered around the
Landau analysis \cite{Landau:1959fi}\footnote{See also many textbooks on
QFT.} of singularities of Feynman graphs. What makes this subject
currently important is not merely the classic application to the locating
of singularities of Feynman amplitudes, but its application by Libby and
Sterman \cite{Libby:1978bx} to determine and analyze regions of low
virtuality for amplitudes and cross sections in various asymptotic
high-momentum limits. Their analysis shows that, in the loop-momentum
space of Feynman graphs, these regions are determined by the locations of
pinches in the massless limit (without any requirement that the full theory
itself is massless).\footnote{The exact wording of this sentence might
appear to be somewhat at odds with what Libby and Sterman actually wrote.
See Ch.\ 5 of Ref.\ \cite{Collins:2011qcdbook} for my attempt to explain
the logic.} In addition, they formulate a power-counting analysis to
determine which regions contribute at leading power in a given theory. To
locate the pinches, they use the Landau criterion applied in a massless
theory, in a form given by Coleman and Norton \cite{Coleman:1965xm}. That
form is that the pinches correspond to classically allowed processes; this
is rather easy to apply in the massless limit. Libby and Sterman's
analysis results, among other things, in the well-known classification of
momenta into hard, collinear and soft. It then underlies all results in
factorization, which is an essential tool in most current QCD
phenomenology. Moreover, as will be explained in more detail below, a
number of extensions are needed to the Landau results for current and
future work.
The primary outcome of the Landau analysis is a criterion for the existence
of a pinch of the contour of integration in terms of what are called the
Landau equations\footnote{In this paper we will solely be concerned with
physical-region singularities and pinches (or their equivalent for more
general integrals).} when the objects of study are standard
momentum-space Feynman graphs. To cover more general situations, I will use
the term ``Landau condition'' (or criterion, depending on the shade of
meaning needed).
However, given that the Landau criterion and the work of Coleman and Norton
are foundational to most work on perturbative QCD (pQCD), it is very
disconcerting that there are notable deficiencies in existing treatments of
the Landau criterion, and that these become particularly noticeable in
massless theories. I will review some of the problems in the next
paragraphs, and then in more detail in Sec.\ \ref{sec:literature}. One of
the problems is that the actual proof by Coleman and Norton, of the Landau
criterion for pinches, fails completely in the massless case. This is not
simply a matter of a subtle issue in a high-order graph, but something that
happens in a one-loop self-energy graph. It turns out that an implicit and
apparently obvious and uncontroversial assumption is false. None of the
deficiencies necessarily entail that the Landau criterion is incorrect.
Indeed, a primary result of the present paper is a proof that does work and
that is valid for the massless case, as well as for other cases needed in
work on QCD, among others.
Nevertheless the problems indicate areas where some conceptual
understanding has been missing. This can seriously impact efforts to use
the methods in other situations. In addition, loopholes in the original
arguments suggest the possibility of interesting new results.
The aim of Coleman and Norton's derivation was to show that the Landau
condition is both necessary and sufficient for physical-region pinches in
the space of loop momenta. From it they then derived the well-known result
that the location of a pinch corresponds to a classically allowed
process.\footnote{In a theory with no massless particles, this part of the
proof is correct. But as pointed out by Ma \cite{Ma:2019hjq}, the proof
needs extensions to make it work in the massless case.} They apply a
Feynman parameter representation and then perform the momentum integrals.
Their proof is applied to the integral over Feynman parameters, with its
single denominator. There is an unstated assumption that there is a pinch
in the original momentum-space integral if and only if there is a pinch in
the parameter integral. But in the massless case that implication simply
fails, and it fails in the simplest graph, a one-loop self energy. As
shown in the present paper in App.\ \ref{sec:F.param.massless}, the
parameter integral for this graph has no pinch at all corresponding to the
well-known collinear pinch in momentum space; this is quite unlike the
situation for the normal threshold singularity in a massive theory.
Therefore, the first aim of this paper is to prove the necessity and
sufficiency of the Landau criterion for a pinch directly in momentum space.
The proof given here applies to a whole class of integrals, of which
standard Feynman graphs are only one example. Unfortunately some
restrictions are applied to make the proof work, but these are obeyed both
for standard Feynman graphs and for various other kinds of graph that are
commonly used in QCD. More work is needed to investigate more general
cases.
It should be noted that there are important shifts of emphasis between the
Landau analysis and the QCD applications. The Landau analysis was
concerned mainly with where actual singularities of graphs occur as
functions of their external parameters, and was almost entirely confined to
the massive case. But the QCD applications are fundamentally concerned
with locating regions where an integration contour is trapped by propagator
singularities to be in a region of low virtuality compared with some large
scale $Q^2$. These regions correspond to manifolds of exact pinches in the
massless theory; the regions in a possibly massive theory are neighborhoods
of the pinch singular surfaces in the massless theory.
Moreover, once one has a trap of the contour, one is also interested in
what contour deformations are allowed and hence which subset of propagator
singularities are involved in trapping the contour and which can be
avoided.
Symptoms of problems in the available treatments are found in the classic
book by Eden, Landshoff, Olive, and Polkinghorne \cite{ELOP} (ELOP). In
their Sec.\ 2.1, they give a general treatment of singularities of
integrals over an arbitrary number of variables. They present derivations
of the Landau condition for a singularity, in a form appropriate for a
general integral, and not merely for those integrals that arise from
momentum-space Feynman graphs. They are very explicit (p.\ 48) that a
``proper proof needs the use of topology'' and that they will ``be content
with plausibility arguments''. However, they do not give any real
indications of the deficiencies of their arguments. At the end of the
section, they write: ``A rigorous treatment requires homology theory and
for this we refer to the paper by Fotiadi, Froissart, Lascoux, and Pham
(1964).'' They do not explain the need for homology theory, and the paper
by Fotiadi et al.\ is listed as unpublished in the bibliography, with
a statement to see also a published paper by the same authors
\cite{Fotiadi:1965}. The published paper does contain relevant material,
but not what is needed. It is clear that there is a second paper by the same
authors that contains the missing proof. However, as far as I can
determine, only one paper by these authors can now be found, and that is
the one that ELOP listed as published. The then-unpublished paper that
contains the referred-to proof appears to remain both unpublished and
inaccessible.
As regards the application of homology theory to this problem, there is a
book by Hwa and Teplitz \cite{Hwa.Teplitz.1966} on the subject. They give
much relevant material, including a reprint of the published Fotiadi et
al.\ paper. But on p.\ 51, they write that an extension of the treatment
is needed for Feynman graphs of more than one loop, and that ``at present
no such extension has been made''. (Their book is dated 1966.) A much
later book by Pham \cite{Pham:2011} gives many relevant results, but, as
far as I can see, not the ones that are otherwise missing. In a sense, one
basic problem with both references is that they try to be too general in
the integrals they work with. For the results in which we are interested
for Feynman graphs, the denominators are real for real values of their
arguments and there is an $i\epsilon$ prescription, and we are interested in
physical-region pinches. These properties are what enable the proof in the
present paper to work.
Primary new results of the present paper are as follows:
\begin{enumerate}
\item A full proof is given that the Landau condition is both necessary and
sufficient to determine the locations of physical-region pinch
configurations in a class of integrals that includes momentum-space
integrals for Feynman graphs. The applicability to Feynman graphs
includes not only standard relativistic Feynman graphs, but also the
various modified graphs that appear in factorization (notably including
Wilson lines and the approximated graphs that arise in a treatment of the
Glauber region\footnote{See Ref.\ \cite{Collins:2011qcdbook} for
details, including in its Sec.\ 5.6 an analysis that uses the Landau
criterion.}).
The proof applies directly to the momentum-space integrals for Feynman
graphs without any need to invoke the Feynman parameter method.
\item The proof is in two parts. One part is a detailed analysis of the
conditions for a trapped contour in terms of constraints on the direction
of contour deformation. Certain restrictions apply to this part of the
proof --- see item \ref{item:restrictions}. The other part of the proof
is embodied in a purely geometric theorem in arbitrarily high dimension
on whether or not the constraints can be satisfied.\footnote{Undoubtedly
the second part of the proof is closely related to the mathematical
subjects of which an account is given in books by Gallier
\cite{Gallier:2008,Gallier:2011}. But I have not yet found the result
that is needed for the applications treated in this paper.}
The presentation of the two parts is in the reverse order to the
description just given. The geometric part comes first, since its
results are used in analyzing properties of contour deformations.
\item
\label{item:first.order}
A simple example is given, in App.\ \ref{sec:2D.first.order}, to
illustrate a difficulty that has to be overcome in the part of the proof
analyzing contour deformations.
Based on experience in visualizable examples in integrals over one
complex dimension, it is natural to assume that if a contour deformation
avoids the singularity due to a zero of a denominator, then there must be
a (non-zero) positive first-order shift in the imaginary part of the
denominator, given the usual $i\epsilon$ prescription. The example in App.\
\ref{sec:2D.first.order} shows that this supposition is false;
singularities can be avoided with a contour deformation that gives a zero
first-order shift; I term this an ``anomalous deformation''. Two or more
complex dimensions are needed for an anomalous deformation to exist.
In the particular example given, the contour is not pinched and one can
equally avoid the singularities by a non-anomalous deformation. But to
make a satisfactory determination of the condition(s) for a pinch, it is
essential to exclude the possibility of an anomalous deformation that
avoids the singularities of the integrand when the Landau criterion is
satisfied. This leads to considerable complications in the proof in this
paper.
\item
\label{item:restrictions}
Overcoming the difficulties just mentioned leads to a need to impose
certain non-trivial restrictions on the denominators in the integral, in
order for the methods of proof used here to succeed. The restrictions
are that the denominators are at most quadratic in their arguments and
that any quadratic terms obey a certain sign constraint --- see the
statement of Thm.\ \ref{thm:main.contour}.
Luckily the restrictions apply to the pure momentum-space form of
standard Feynman graphs, including many of the modified graphs used in
QCD factorization. However, it would obviously be useful to find
better proofs that would eliminate the restrictions as much as possible.
The difficulties suggest areas for further investigation that appear not
to have been properly considered in the original proofs.
\item A simple explanation is given that the Landau condition is necessary
but not sufficient for a singularity as a function of external parameters
(contrary to the situation concerning pinches of the contour of
integration). A supplementary analysis is needed to determine whether or
not there is an actual singularity given that the contour is trapped.
\item The proofs apply to more general situations than standard Feynman
graphs. The range of applicability includes the modified and
approximated Feynman graphs ubiquitous in QCD factorization. Others
include systematic treatments of properties of Feynman graphs in
coordinate space. Some illustrations are provided.
In coordinate space, only some restrictions on contour deformations arise
from singularities of the integrand. Other restrictions arise when one
has rapidly oscillating factors like $e^{ik\cdot x}$. These give strong
cancellations in the integral, and to get a good analysis one needs,
where possible, to deform the contour in a direction that gives an
exponential suppression. The geometrical part of this paper's proofs
applies directly to such cases to determine where such a deformation is
not possible.
\end{enumerate}
Many or all of the issues are elementary or even trivial in integrals over
one complex variable. In that situation, issues about contour deformation
are readily visualizable. But this is no longer the case in higher
dimensions, as illustrated by the simple example in App.\
\ref{sec:2D.first.order}.
Some areas for future extension and use of the methods and results in this
paper are:
\begin{enumerate}
\item For a number of purposes, it would be useful to have a systematic and
general determination for a given process of which regions of space-time
for vertices dominate. For example, Brodsky et al.\
\cite{Brodsky:2019jla} have given an argument that the momentum sum rule
is violated in deep inelastic scattering, in contradiction with standard
results from the operator product expansion (OPE) and factorization.
Their argument depends on properties of the regions of space-time
involved. To assess their work completely, it is necessary to have
systematic and fully deductive derivations of the space-time regions
involved in processes such as those to which the OPE and standard
factorization are applied.
Here we see examples of the situations mentioned above where one needs to
determine where there is a lack of suppression in an integral containing
multiple oscillating exponentials of the form $e^{ik\cdot x}$, and to be able
to do this systematically to all orders of perturbation theory (at
least).
\item Given that there is a pinch at some point in an integration for a
Feynman graph, it is common that the pinch is restricted to a subset of
the propagators. Then the contour of integration can be deformed to take
the other propagators off shell, while not crossing the poles for the
pinched propagators. How in general is one to characterize the allowed
directions for such deformations and determine unambiguously which
propagators are trapped and which not?
\item The methods in this paper could be very useful in calculations of
hard scattering coefficients and other quantities, as is needed for much
Standard Model phenomenology. Loop integrals are encountered that are
not readily amenable to analytic calculations, so that numerical
calculations are needed. In the numerical implementation of integrals,
it is very desirable to have \emph{algorithmic} methods to deform
integration contours away from non-pinch singularities of the integrand.
In addition, where there is a pinch, it is important to deform the contour
away from singularities that do not participate in the pinch.
Some important work in this area is by Gong, Nagy and Soper
\cite{Gong:2008ww}, and by Becker and Weinzierl
\cite{Becker:2012nk,Becker:2012bi}. The geometric methods obtained in
the present paper should be able to contribute to more general methods.
\end{enumerate}
Although the results in this paper are in principle purely mathematical,
the motivations and the situations considered arise from certain kinds of
physics problem. Thus the presentation, the examples, and the terminology
are strongly influenced by the physics applications.
A guide to the statements of the main results is as follows:
\begin{itemize}
\item The statement of the main result on pinches is Thm.\
\ref{thm:main.contour}.
\item It applies to an integral of the form (\ref{eq:integral}).
\item It uses Definition \ref{def:Landau.point} for a Landau point.
\item The proof of the theorem uses a corresponding geometrical result,
Thm.\ \ref{thm:main.geom}, and the relation to the notation for the integral
is specified in Eq.\ (\ref{eq:denom.series}).
\end{itemize}
\section{Pinches of singularities of integrand in momentum-space
Feynman graphs, etc}
\label{sec:pinch}
In this section, I present the classic problem of determining where the
contour of integration is trapped in the kind of situation exemplified by
momentum-space Feynman graphs. The integrals are restricted to those such
as occur in momentum-space Feynman graphs in the physical
region\footnote{For the purposes of this paper, saying that a graph is in
the physical region means that the external momenta are real and that
before contour deformation the integral is calculated with internal loop
momenta all real \cite{Coleman:1965xm}. There is no requirement that the
external momenta be on-shell.}. These restrictions are: (a) the external
parameters are real; (b) the integration variables before contour
deformation are real; (c) the denominators giving the singularities of the
integrand are real for real values of their arguments; (d) the integral is
defined by an $i\epsilon$ prescription.
Although much of the material is basically standard, the presentation here
is needed to emphasize particular issues that are important in the sequel,
and to define the notation to be used.
Motivation can be made, both for the general problem and for the
geometrical formulation, from an elementary example. To this end, App.\
\ref{sec:self-energy} gives the well-known example of a one-loop
self-energy.
The general case is in an arbitrarily high dimension with arbitrarily many
denominators whose zeros give singularities of the integrand. Considerable
subtleties occur, as we will see. Hence to provide fully water-tight
derivations, it is important to have precise operational definitions of the
relevant concepts about a given contour deformation, about its
compatibility or not with the integrand's singularities, and about its
avoidance or non-avoidance of the singularities. It is important that the
definitions can be applied mechanically and essentially computationally,
without the need for creativity or special insights.
The work in this section will motivate the geometric theorem to be proved
in Secs.\ \ref{sec:overall}--\ref{sec:landau.proof}. Only after that will we
be able to give a full proof that a necessary and sufficient condition for
a pinch of the integration contour is that a particular Landau condition is
obeyed.
This is the canonical application of the more abstract geometrical theorem,
and it will influence the terminology used. Some further applications are
summarized in Sec.\ \ref{sec:extra.cases}.
\subsection{Formulation of problem}
\label{sec:formulation}
\subsubsection{Momentum-space Feynman graphs}
The value of a momentum-space Feynman graph has the form
\begin{equation}
\label{eq:F.graph}
I(p,m) = \lim_{\epsilon \to 0+} \int_{\Gamma_0} \diff[d]{k}
\frac{ X(k;p,m) }
{ \prod_{j=1}^N \left[ f_j(k;p,m) + i\epsilon \right]^{n_j} }.
\end{equation}
Here $p$ is the multi-dimensional array of variables for the external
momenta, and $m$ is the array of masses of the theory.\footnote{No
restriction is placed on whether the masses are zero or non-zero.} The
integration variable $k$ is the array of all loop momenta, and has
dimension $d$, which may be arbitrarily high. We call each $f_j+i\epsilon$ a
denominator factor, and we call $X$ the numerator factor. Each is a
function of the integration variable $k$ and the external parameters $p$
and $m$.
We restrict from now to situations where:
\begin{itemize}
\item The denominator factors $f_j$ are real-valued when their arguments
are real.
\item The values of the external parameters $p$ and $m$ are real, and the
initial contour of integration, denoted by $\Gamma_0$, gives an integral over
all real values of $k$. This we will call the restriction to the
physical region, and it implies a restriction solely to physical-region
pinches and singularities.
\item There is an $i\epsilon$ with each denominator, and the end result is for the
boundary value as $\epsilon$ goes to zero from positive values.
\item All the functions $f_j$ and $X$ are analytic functions of their
arguments. In particular, $X$ has no singularities for any finite value
of its arguments. Then all singularities of the integrand are due to zeros in one
or more of the denominator factors.
\end{itemize}
These properties evidently apply to a much wider class of integrands than
those for standard relativistic Feynman graphs, but they are motivated by
that situation. The methods we use could be easily applied to certain more
general classes of integrand, but we will not do so. In the standard case,
each $f_j$ is a quadratic function of its arguments and the numerator $X$
is polynomial in momenta and masses. But other possibilities can and do
arise. Notably, denominators from straight Wilson lines (or eikonal lines)
have linear dependence on momentum instead of quadratic.
Since the numerator factor $X$ is non-singular as a function of $k$, it
does not affect the determination of where pinch singularities occur. In
contrast, the numerator factor often affects power-counting analyses
\cite{Libby:1978bx} for quantifying the size of the contribution associated
with a pinch, but that is not the concern of the present paper.
The exponent $n_j$ of a denominator factor is typically unity; however, one
regularly meets other cases. Our concern is with singularities of the
integrand caused by zeros of one or more $f_j$, and with whether the
singularities obstruct contour deformations. The most general case is that
each $n_j$ is not zero or a negative integer, and this is what we will
assume henceforth. (In the remaining cases, where an $n_j$ is a negative
integer or zero, a zero in $f_j$ does not cause a singularity of the
integrand, and then the factor $1/(f_j+i\epsilon)^{n_j}$ can be incorporated in
the numerator factor $X$.)
Furthermore, it is possible that when a particular $f_j$ is zero, the
numerator factor is also zero in such a way as to remove the singularity of
the integrand due to a factor $1/f_j^{n_j}$. In such cases the denominator
factor can be removed and compensated by a corresponding change in the
numerator factor. So we remove such cases from consideration.
For most values of its arguments, $I(p,m)$ is an analytic function; this is
shown by differentiating the integrand with respect to $p$ and $m$.
However, that argument fails if one or more $f_j$ is zero somewhere on the
initial contour $\Gamma_0$. But if the contour can be deformed away from the
singularity/ies of the integrand, then the differentiation argument can be
applied on the deformed contour, and gives analyticity of $I(p,m)$ at the
values of $p$ and $m$ under consideration.
Hence our primary aim is to determine situations where such a deformation
away from a singularity of the integrand is not possible. We term such a
situation a pinch (by analogical generalization from corresponding
situations in one-dimensional contour integrals).
A contour of integration is a surface that in terms of real variables has
dimension $d$. It is embedded in a space of $d$ complex dimensions, i.e.,
of $2d$ real dimensions.
The only singularities of the integrand are at zeros of one or more
denominators. When $\epsilon$ is nonzero and when all of $k$, $p$, and $m$ are
real, the integrand is non-singular on all of the initial contour $\Gamma_0$,
because the imaginary part of each denominator is non-zero. When $\epsilon\to0+$,
i.e., $\epsilon$ approaches zero from positive values, singularities may appear on
$\Gamma_0$ at positions where one or more $f_j$ is zero. In analyzing a
candidate deformation to a contour $\Gamma$, we wish first to know whether or
not a singularity is encountered for positive $\epsilon$ during the deformation
and before $\epsilon$ is finally taken to zero. Such a deformation is disallowed.
Finally, for an allowed deformation we wish to know whether the deformed
contour avoids a particular given singularity after $\epsilon\to0+$.
\subsubsection{Feynman parameters}
One technique for evaluating Feynman graphs is to use Feynman parameters.
This converts the original formula (\ref{eq:F.graph}) to an integral with a
single denominator factor. It has more integration variables, but the
integrals over momenta can be performed analytically.
After Feynman parameterization, but before the integral over momenta, the
integral becomes
\begin{multline}
\label{eq:F.graph.param.mom}
I(p,m) = \lim_{\epsilon \to 0+} \int \diff[d]{k} \prod_j \left( \int_0^1 \diff{\alpha_j} \right)
~ \delta\biggl( \sum_j \alpha_j - 1 \biggr)
\\
\frac{ \Gamma\mathopen{}\left( \sum_jn_j \right) }{ \prod_j \Gamma(n_j) }
\frac{ X(k;p,m) \prod_j \alpha_j^{n_j-1} }
{ \left( \sum_j \alpha_j f_j(k;p,m) + i\epsilon \right)^{\sum_jn_j} }.
\end{multline}
The single denominator has an exponent that is the sum of the exponents in
the original problem. The extra normalization factor with the
Gamma-functions can be absorbed into a redefinition of the numerator
factor. The same applies to the factor of powers of $\alpha_j$, which can be
singular only at endpoints of the integration, and then only if some $n_j$s
are not positive integers.
After exchanging the order of the integrations and performing the momentum
integrals, one gets an integral of the form
\begin{multline}
\label{eq:F.graph.param.only}
I(p,m) = \lim_{\epsilon \to 0+} \prod_j \left( \int_0^1 \diff{\alpha_j} \right)
~ \delta\biggl( \sum_j \alpha_j - 1 \biggr)
\\
\frac{ C(\alpha;p,m) }
{ \bigl( D(\alpha;p,m) + i\epsilon \bigr)^{\sum_jn_j} },
\end{multline}
where $\alpha$ without a subscript denotes the array $j\mapsto\alpha_j$. The rules for
obtaining the functions $C$ and $D$ can be found in textbooks, e.g.,
\cite{ELOP}.
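As a quick numerical sanity check (an illustration added here, not part of the derivation), the simplest instance of the parameterization, $\frac{1}{A_1 A_2} = \int_0^1 \diff{\alpha}\, \bigl[\alpha A_1 + (1-\alpha)A_2\bigr]^{-2}$, can be verified directly. The sketch below gives each denominator a small positive imaginary part playing the role of the $i\epsilon$, with one denominator negative so that the combined denominator passes near zero inside the integration range:

```python
# Numeric check of the two-denominator Feynman-parameter identity
#   1/(A1*A2) = integral_0^1 d(alpha) / (alpha*A1 + (1-alpha)*A2)^2.
# Small positive imaginary parts stand in for the i*epsilon prescription.

def feynman_combine(A1, A2, n=200_000):
    """Midpoint-rule evaluation of the alpha integral."""
    h = 1.0 / n
    total = 0j
    for i in range(n):
        alpha = (i + 0.5) * h
        total += h / (alpha * A1 + (1 - alpha) * A2) ** 2
    return total

A1 = -1.3 + 0.05j   # f_1 + i*eps, with f_1 real and negative
A2 = 2.0 + 0.05j    # f_2 + i*eps: the combination crosses zero in alpha
lhs = 1.0 / (A1 * A2)
rhs = feynman_combine(A1, A2)
print(abs(lhs - rhs))  # agreement to numerical-integration accuracy
```

With both imaginary parts positive, the combined denominator never vanishes on the real $\alpha$ interval, so the integrand is smooth and the identity holds without any contour subtlety.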
\subsubsection{General situation}
We can treat all of these integrals as special cases of the following form:
\begin{equation}
\label{eq:integral}
I(z) = \lim_{\epsilon \to 0+} \int_{\Gamma_0} \diff[d]{w}
\frac{ B(w;z) }
{ \prod_{j=1}^N \bigl( A_j(w;z) + i\epsilon \bigr)^{n_j} },
\end{equation}
with each $n_j$ not equal to zero or a negative integer. This is the most
general form we will consider for our work. It is just like
(\ref{eq:F.graph}) except that we allow the contour of integration to have
boundaries, and the values of $d$, $N$, and $n_j$ may not be the same as
before. To indicate the more general situation, the notation has been
changed: all external parameters are folded into a single multidimensional
variable $z$, and the symbols are changed for the integration variables,
the denominators, and the numerator.
The numerator $B$ and the denominators $A_j$ are analytic for all values of
their arguments, and we restrict attention to the case that every $A_j$ is
real when its arguments are real.\footnote{This restriction and the $i\epsilon$
prescription do not appear in the mathematical work in Refs.\
\cite{Fotiadi:1965,Pham:2011}. The restrictions lead here to more
powerful results of physical relevance \cite{Coleman:1965xm,Libby:1978bx}
for Feynman graphs in the physical region. } The numerator factor $B$
need not be real when its arguments are real, and in fact $B$ will play no
role in our work.
\subsubsection{Singularities of integral}
Landau's original problem was to determine where the integral $I(p,m)$ is
singular as a function of $p$ and/or $m$. Now we change to the more
general notation of Eq.\ (\ref{eq:integral}). By definition, $I(z)$ is
analytic at some point when it is complex-differentiable in a neighborhood
of the point. If it is analytic, then all derivatives exist, by standard
theorems. Hence a function is singular at some point if and only if one or
more derivatives fails to exist at that point, or arbitrarily close to it.
As already observed, we can apply derivatives with respect to $z$ inside
the integration. Then a singularity can only arise in the dependence on
the external variable $z$ if the initial contour of integration is pinched
somewhere by a singularity caused by a zero of one or more $A_j$s. The
integral might actually diverge if the integrand is singular enough, or an
actual divergence might only occur in a derivative or multiple derivative.
So in a general integral, like (\ref{eq:integral}), we can only get a
singularity as a function of external parameters if one of the following
occurs (see \cite{ELOP} and other references):
\begin{enumerate}
\item The integration contour is trapped, i.e., pinched, by singularities
of the integrand. That is, there is no contour deformation that avoids
these singularities.
\item Singularities of the integrand occur on a boundary of the
integration, and cannot be avoided by a deformation that preserves the
boundary of the contour.\footnote{The condition of preserving the
boundary of the contour is actually not quite what we need. In a
multidimensional case, Cauchy's theorem can allow deformations that
preserve the value of an integral while moving the boundary. Since our
concern in this paper is non-boundary singularities, we will not
consider the ramifications of this remark. It can matter when a
momentum-space integral is decomposed into sectors and contour
deformations determined separately on each sector, as in applications
of the numerical methods of Ref.\ \cite{Gong:2008ww}.} This case does
not occur for pure momentum-space Feynman integrals of a standard kind,
but can occur in more general situations (including Feynman graphs with
the use of Feynman parameters).
\end{enumerate}
In a general integral of the form of (\ref{eq:integral}), it is also
possible that for particular values of $z$, the integral acquires
divergences from where some integration variables go to infinity; this
requires that the integration range is infinite in those variables. This
situation does not arise for Feynman graphs in a pure momentum-space
formulation, since differentiation with respect to $p$ or $m$ in
(\ref{eq:F.graph}) always improves ultra-violet convergence. But it can
occur when $I(p,m)$ is expressed as an integral over space-time coordinates
of vertices, and divergences occur for large positions. Such cases do in
fact appear to be treatable by elementary generalizations of the
methods considered here, but we leave that for future work.
Notice that the above argument says that the existence of a singularity of
$I(z)$ at a particular value of $z$ implies that the contour of integration
is trapped at a singularity of the integrand considered as a function of
the integration variable $w$. The converse is definitely not always valid,
as shown in App.\ \ref{sec:trap.no.sing} with the aid of a trivial
counterexample. Given that a pinch has been found, a separate calculation
of the contribution to the integral from a neighborhood of the pinch point
is needed to determine the existence or non-existence of an actual singularity.
But, as already observed in the introduction, what is important to many
modern applications is not the actual existence of a singularity of $I(z)$
as a function of the external parameter(s) $z$, but whether or not the
integration is trapped, and where.
\subsection{Deformations of contour}
\label{sec:deformation}
In the integral (\ref{eq:integral}), the contour deformations to be
considered are replacements of the real values of the integration
variable $w$ by
\begin{equation}
\label{eq:deform}
w = w_R + i \lambda v(w_R).
\end{equation}
Here $w_R$ is a real variable that ranges over all values on the original
real contour $\Gamma_0$. The real variable $\lambda$ is in the range 0 to 1, and
parameterizes the amount of deformation. For each $\lambda$ we have a particular
contour $\Gamma_\lambda$, parameterized by $w_R$. The original contour is at $\lambda=0$,
and the final deformed contour is at $\lambda=1$. The function $w_R \mapsto v(w_R)$ is
from real values $w_R$ to real $d$-dimensional values, so that $i\lambda v(w_R)$
gives the imaginary part of $w$ on the deformed contour. The function $v$
must be continuous and piecewise differentiable (but only in the sense of
differentiation with respect to \emph{real} variables, not necessarily with
respect to complex variables). We take it to be zero on any boundary of
$\Gamma_0$, so that the boundaries of the contour are unchanged.
The value of the integral on the deformed contour is
\begin{multline}
\label{eq:integral.deformed}
\hspace*{-3mm}I(z,\lambda,\epsilon)
\\
\stackrel{\textrm{def}}{=} \int_{\text{real},\Gamma_0} \diff[d]{w_R} J(w_R,\lambda)
\frac{ B(w;z) }
{ \prod_{j=1}^N \bigl( A_j(w;z) + i\epsilon \bigr)^{n_j} }.
\end{multline}
Here, the integral symbol is still equipped with the symbol for the
undeformed contour $\Gamma_0$, but now it is the real variable of integration
$w_R$ that is on $\Gamma_0$, and not the argument $w$ of the integrand. We have
chosen not to take the limit $\epsilon\to0+$ yet. Here $w$ is given by
(\ref{eq:deform}), and $J$ is the Jacobian of the transformation from $w_R$
to $w$. According to Cauchy's theorem\footnote{An accessible and
elementary proof of the Cauchy theorem beyond the one-dimensional case is
given by Soper \cite{Soper:1999xk}. Note that it is possible to
generalize the theorem to certain cases where the boundary changes, but
we will not deal with that issue here.}, the value of the integral is
independent of $\lambda$ provided that the integrand has no singularities on the
contour when $0\leq\lambda\leq1$, and hence that every $A_j(w_R+i\lambda v(w_R);z)+i\epsilon$ is
non-zero for all $w_R$ on $\Gamma_0$, for $0\leq\lambda\leq1$. Thus $I(z,1,\epsilon)=I(z,0,\epsilon)$.
The target value of the integral is the limit as $\epsilon$ decreases to zero of
the integral on the undeformed contour, i.e., of $I(z,0,0+)$. On the
undeformed contour, $\lambda=0$, there are typically singularities of the
integrand for some values of $w_R$; these are where there are zeros of one
or more denominators. To get the same value for the integral on the
deformed contour, we must apply the condition of all $A_j$s being nonzero
for all $\lambda$ in the range $0\leq\lambda\leq1$ and for all $\epsilon$ in a range $0<\epsilon\leq\epsilon_0$, for
some positive number $\epsilon_0$. Notice that the range of $\epsilon$ considered in the
condition \emph{excludes} zero. Then taking the limit $\epsilon\to0+$ gives
$I(z,1,0+)=I(z,0,0+)$.
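A one-dimensional toy example (constructed here purely for illustration) shows this logic at work: $\lim_{\epsilon\to0+}\int_{-2}^{2}\diff{w}/(w+i\epsilon) = -i\pi$. The single denominator is $A(w)=w$ with $\partial A = 1$, so a deformation (\ref{eq:deform}) with $v>0$ at the zero $w_R=0$ and $v=0$ at the endpoints is allowed and avoids the singularity, and on the fully deformed contour one may set $\epsilon=0$ directly:

```python
import math

def deformed_integral(n=200_000, a=2.0):
    """Evaluate the integral of dw / w on the deformed contour
    w = w_R + i v(w_R), with v(w_R) = 1 - (w_R/a)^2, which vanishes on
    the boundary and is positive at the singular point w_R = 0
    (so v * dA/dw > 0 there)."""
    h = 2 * a / n
    total = 0j
    for i in range(n):
        wR = -a + (i + 0.5) * h
        v = 1.0 - (wR / a) ** 2
        dv = -2.0 * wR / a ** 2          # v'(w_R); the Jacobian is 1 + i v'
        total += h * (1 + 1j * dv) / (wR + 1j * v)
    return total

print(deformed_integral())   # approximately -i*pi, with epsilon set to zero
print(-1j * math.pi)
```

The deformed-contour value at $\epsilon=0$ reproduces the $\epsilon\to0+$ limit on the undeformed contour, exactly as guaranteed by Cauchy's theorem for an allowed, singularity-avoiding deformation.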
Even if the contour does not avoid all singularities, it is useful to
deform a contour to avoid as many singularities as is possible. For such a
deformation to be useful, we require that no singularities of the integrand
appear on the contour until the very last step of taking $\epsilon$ to zero with
the contour fully deformed. We call such a deformation allowed, while any
deformation that encounters a singularity before that step is not an
allowed deformation.
Given some of the complications that arise in a complete analysis, it is
useful to have very precise operational specifications of what is meant by
an allowed deformation, and of what is meant by a singularity-avoiding
deformation. In analyzing problems concerning possible deformations, it is
also useful to define such concepts relative to particular subsets of
denominators. In particular, we may be interested not only in whether a
contour deformation avoids all singularities of the integrand, but also in
whether it avoids singularities at particular values of $w_R$, and possibly
only for those singularities caused by zeros in particular subsets of the
$A_j$. One physical motivation is that in many applications in QCD, sets
of singular propagators correspond to factors in a factorization theorem,
each of which can be considered separately; given a kinematic configuration
of momenta, only in a part of a graph that corresponds to a hard scattering
factor can we deform away from propagator singularities.
In all of the following definitions, we will assume a particular value of
the external parameter $z$.
\begin{definition}
\label{def:compat.deform.point}
Given a particular subset $\mathcal{A}$ of the $A_j$s, and given
particular values $z$ and $w_S$ of the external parameter and
integration variable, a deformation is \emph{compatible}
with the denominators $\mathcal{A}$ if there exists a positive
$\epsilon_0$ such that $A_j(w_R+i\lambda v(w_R);z)+i\epsilon$ is non-zero when $w_R$ is in a
neighborhood of $w_S$, for all $A_j$ in $\mathcal{A}$, for $0\leq\lambda\leq1$,
and for $0<\epsilon\leq\epsilon_0$.
\end{definition}
Note that the case $\epsilon=0$ is specifically \emph{not} included. The lack of
singularities in the given range indicates that the zeros of the specified
denominators do not give singularities that obstruct the use of Cauchy's
theorem. We specify that there is a lack of zeros not only at $w_S$ but in
a neighborhood. The reason is that if there is a zero of $A_j(w)$ at
$w=w_S$, then as $\lambda$ is increased from zero, the position of the zero
usually migrates to nearby values of $w_R$; such a zero is equally
effective at obstructing a contour deformation.
\begin{definition}
\label{def:allowed.deform}
A \emph{locally allowed deformation} at $z$ and $w_S$ is one that is
compatible with all the denominators at $w_S$. An \emph{allowed
deformation} is one that is locally allowed at all $w_S$ in
the range of integration.
\end{definition}
As already observed, for an allowed deformation there is no obstruction to
the contour deformation, so that $I(z,0,\epsilon)=I(z,1,\epsilon)$ when $0<\epsilon\leq\epsilon_0$, and
hence $I(z,0,0+)=I(z,1,0+)$. (It is sufficient to take a common nonzero
maximum $\epsilon_0$ for all denominators.)
It should be noted that a trivial special case of an allowed deformation is
when there is no change in the contour at all, i.e., when $v(w_R)$ is zero
for all $w_R$.
We next have the definition of a deformation that avoids singularities:
\begin{definition}
\label{def:sing.avoid,j.kS}
Given a particular subset $\mathcal{A}$ of the $A_j$s, we define that at
$z$ and $w_S$ a deformation \emph{avoids an $\mathcal{A}$-associated
singularity} if the deformation is allowed at $z$ and $w_S$, and if
the non-zero condition on $A_j+i\epsilon$ also applies for the given
$A_j\in\mathcal{A}$ at $\epsilon=0$ and $0<\lambda\leq1$, for $w_R$ in some neighborhood of
$w_S$. By bringing in the definition of an allowed deformation, the
condition for a singularity-avoiding deformation is that the only place
where $A_j+i\epsilon$ is zero (with $A_j\in\mathcal{A}$) is where $\epsilon$ and $\lambda$ are
both zero. (Here it is taken for granted that the relevant ranges for
$\epsilon$ and $\lambda$ are for non-negative values below $\epsilon_0$ and 1, respectively.)
\end{definition}
Of course, no requirement is placed when $\lambda=\epsilon=0$, because the interesting
case is when one or more $A_j(w_S)$ is zero, and then we ask whether a
particular deformation avoids the resulting singularity in the integrand.
We can define more global kinds of singularity avoidance:
\begin{definition}
\label{def:sing.avoid,kS}
A deformation \emph{avoids any singularity} at $w_S$ if it is allowed
and avoids $A_j$-associated singularities for all $A_j$.
\end{definition}
\begin{definition}
\label{def:sing.avoid}
We define that at $z$ a \emph{deformation (globally) avoids any
singularity} if it is allowed and avoids $A_j$-associated singularities
for \emph{all} $A_j$ and for \emph{all} $w_S$ on $\Gamma_0$.
\end{definition}
\begin{figure}
\centering
\begin{tabular}{c@{\hspace*{1cm}}c@{\hspace*{1cm}}c}
\includegraphics[scale=0.7]{figures/block}
&
\includegraphics[scale=0.7]{figures/no-block}
&
\includegraphics[scale=0.7]{figures/no-block-no-avoid}
\\
(a) & (b) & (c)
\end{tabular}
\caption{(a) To illustrate where in the $(\epsilon,\lambda)$ plane singularities can
be encountered that prevent a deformation from being allowed. The
thick solid line indicates where the contour deformation first hits a
singularity as $\lambda$ is increased from zero. The dashed line is a path
in $(\epsilon,\lambda)$ to get from the initial situation with $\lambda=\epsilon=0$ to the
deformed contour $(\epsilon,\lambda)=(0,1)$. The thick solid line extends to the
origin. (b) The case where no singularity is encountered except at
$\epsilon=\lambda=0$, i.e., the deformation avoids the singularity. (c) A situation
with an allowed deformation that does not avoid a singularity: The
thick solid line on the vertical axis indicates that the integrand is
singular when $\epsilon=0$ for all values of $\lambda$.}
\label{fig:eps.lam.sing}
\end{figure}
An illustration of how these definitions are used is given in Fig.\
\ref{fig:eps.lam.sing}. The left-hand diagram of the $(\epsilon,\lambda)$ plane
illustrates the situation for a disallowed contour deformation. The dashed
line indicates the sequence of values we wish to use to get from the target
value of the integral, i.e., $I(z,0,0+)$, to the value on the deformed
contour. The solid line is where the contour deformation first hits a zero
of $A_j+i\epsilon$ somewhere on the contour of integration as $\lambda$ is increased
from zero. On the right-hand horizontal axis, where $\lambda$ is zero and $\epsilon$ is
positive, there is no singularity. As $\lambda$ is increased eventually a zero
of an $A_j$ hits the contour, and that is indicated by where the dashed
line intersects the solid line. For larger $\lambda$ Cauchy's theorem
fails.\footnote{It may be that for yet larger $\lambda$ there remains a zero of
an $A_j$ on the contour. That is irrelevant to our considerations.} The
solid line goes all the way to the origin; otherwise, simply by restricting
$\lambda$ to a smaller range, the zero(s) of $A_j+i\epsilon$ are avoided, and we can
convert the deformation to the standard form by rescaling it.
In contrast, for a singularity-avoiding deformation, any line or region of
zeros of $A_j+i\epsilon$ does not come all the way to the origin, as in the middle
diagram. There may be a singularity when both $\lambda$ and $\epsilon$ are zero; that
is the case of interest, i.e., of a singularity on the undeformed contour.
It is possible that there are singularities when large enough deformations
are considered, as shown in the middle diagram above $\lambda=1$. Thus $v(w_R)$ has
been scaled down enough to avoid encountering the singularity/ies.
If the deformation is allowed but doesn't avoid singularities, then we
have a line of singularity-encounters on the vertical
axis, i.e., where $\epsilon=0$ and $\lambda>0$, and this line goes all the way to
$\lambda=0$, as in the right-hand diagram.
Note that in the diagrams, we are concerned with singularities of the
integrand anywhere on the integration over $w_R$. The solid lines
correspond to the existence of a singularity somewhere in the integration
range. A singularity that is at some point $w_R=w_S$ when $\epsilon=\lambda=0$ often
migrates to other values of $w_R$ as $\lambda$ is increased.
\subsection{Conversion to a geometrical problem}
\label{sec:pinch.to.geom}
Based on experience with simple examples, it is natural to suppose that one
can determine whether or not a singularity due to a zero of $A_j(w_R)$ is
avoided or collided with, by examining the sign of the (imaginary)
first-order shift of $A_j$ in $\lambda$. Suppose there is a zero of $A_j$ at
$w=w_S$. Then a Taylor expansion in powers of $\lambda$ gives
\begin{equation}
A_j(w_S+i\lambda v(w_S)) +i\epsilon = i\lambda v(w_S) \cdot \partial A_j(w_S) + O(\lambda^2) +i\epsilon,
\end{equation}
where $\partial_\mu A_j(w_R)= \partial A_j(w_R)/\partial w_R^\mu$. Then one would normally expect
that the singularity is avoided if and only if $v(w_S) \cdot \partial A_j(w_S)$ is
strictly positive, which is a geometric condition on the deformation vector
$v(w_S)$ at the zero of $A_j$. In contrast $v(w_S) \cdot \partial A_j(w_S)$ would be
zero if the deformation is allowed but doesn't avoid the singularity, while
if $v(w_S) \cdot \partial A_j(w_S)$ were negative, then the deformation would not be
allowed. If these statements were all exactly correct, then applying the
positivity condition on $v(w_S) \cdot \partial A_j(w_S)$ to all $A_j$ for which
$A_j(w_S)=0$ would give the condition that the deformation avoids any
singularity at $w_S$.
As we will see, the Landau condition gives a necessary and sufficient
criterion that these positivity conditions are incompatible and hence that
the contour is trapped at $w_S$.
However, it is possible to arrange a contour deformation that avoids a
singularity by use of second-order or higher-order terms in $\lambda$, as shown
in App.\ \ref{sec:2D.first.order}. Now our interest is in the exact
conditions under which contours are trapped or not trapped, i.e., we need a
condition for a trap that is both necessary and sufficient. Therefore the
result in App.\ \ref{sec:2D.first.order} suggests that there could be an
interesting loophole in the Landau analysis.
To exclude the loophole, we need a more detailed analysis, which will be
given in Sec.\ \ref{sec:deal.with.zero.first.order}. The precise
definitions given above will assist that analysis. In addition, the
following observations concerning the $\lambda$ dependence of $A_j$ will also be
useful. They are used in treating the zeros of $A_j$ as a function of $\lambda$
at a fixed value of the real part $w_R$ of the integration variable.
Define $f_{j,w_S}(\lambda)=A_j(w_S+i\lambda v(w_S))$. Since $A_j(w)$ is analytic as a
function of $w$, $f_{j,w_S}(\lambda)$ is analytic as a function of the
one-dimensional variable $\lambda$ with $w_S$ fixed.\footnote{Note that it is not
necessary that the function $v(w_R)$ specifying the contour deformation
be analytic as a function of $w_R$. Hence $f_{j,w_R}(\lambda)$ is not
necessarily analytic as a function of $w_R$.} Suppose that $A_j(w_S)=0$.
Then $f_{j,w_S}(0)=0$. By standard properties of analytic functions,
either this is an isolated zero of $f_{j,w_S}(\lambda)$ or $f_{j,w_S}(\lambda)=0$ for
all $\lambda$. In the first case, $f_{j,w_S}(\lambda)$ is non-zero for all sufficiently
small non-zero $\lambda$. In the second case, we haven't avoided the singularity
by the contour deformation under consideration.
\subsection{Primary theorems}
\label{sec:primary.theorems}
In order to provide context for later sections, I state here the main
theorems to be proved.
First come a couple of convenient terminological definitions, of a Landau
point and a Landau condition:
\begin{definition}
\label{def:Landau.point}
Let $(D_1,\dots,D_N)$ be a list of dual vectors on a real vector space
$V$. We define a \emph{Landau point} for $(D_1,\dots,D_N)$ to be a list
of real numbers $\lambda_j$ ($1\leq j \leq N$) such that
\begin{itemize}
\item All the $\lambda_j$ are non-negative, and at least one is strictly
positive,
\item $\sum_{j=1}^N \lambda_j D_j = 0$.
\end{itemize}
\end{definition}
\begin{definition}
\label{def:Landau.condition}
We define the \emph{Landau condition} for $(D_1,\dots,D_N)$ to be obeyed
if and only if there exists a Landau point for them.
\end{definition}
\noindent
Recall that a dual vector on a vector space $V$ is a linear map from $V$ to
the scalars of the space, here the real numbers. A standard
example is the derivative of a function on $V$. Thus in the preceding
subsection, the derivative of a function $A_j(w_R)$ is $\partial A_j$. It can be
considered a dual vector $D_j$ by the mapping of vectors to scalars that is
given by $D_j(v)= v\cdot\partial A_j = \sum_\mu v^\mu \, \partial A_j(w_R)/\partial w_R^\mu$.
Observe that in the case that there is only a single denominator $A$, i.e.,
$N=1$, a Landau point is one where $A=0$ and $D=0$.
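As an elementary illustration of this single-denominator case (a standard textbook-style example, added here for orientation), take $d=1$ and $A(w)=w^2$:
```latex
\begin{equation}
\int_{-1}^{1} \frac{\diff{w}}{w^2 + i\epsilon}
= \int_{-1}^{1} \frac{\diff{w}}
  {\bigl(w - e^{-i\pi/4}\sqrt{\epsilon}\,\bigr)
   \bigl(w + e^{-i\pi/4}\sqrt{\epsilon}\,\bigr)}.
\end{equation}
```
The two zeros of the denominator approach $w=0$ from opposite sides of the real axis as $\epsilon\to0+$, so the contour is trapped between them; correspondingly $A=0$ and $\partial A=2w=0$ at $w=0$, i.e., a Landau point exists. For $A(w)=w$ instead, $\partial A=1\neq0$, no Landau point exists, and an upward deformation avoids the singularity.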
The main theorem to be proved concerning pinches is:
\begin{theorem}
\label{thm:main.contour}
Given an integral of the form (\ref{eq:integral}), but subject to the
extra restrictions stated below, consider a real (vector) valued point
$w_S$ where a nonempty set of denominators is zero. Then the
integration is trapped at $w_S$ if and only if a Landau point exists for
the first derivatives of those denominators that are zero.
The extra restrictions are that (a) the denominators $A_j(w)$ are at
most quadratic in $w$, and (b) the signs of the nonzero quadratic terms
obey a condition which is stated below in (\ref{eq:restrict}) and the
following paragraphs. This condition is obeyed for the denominators
encountered in Feynman graphs, including cases with Wilson lines. It
also applies to the modified Feynman graphs obtained by applying the
typical approximations used in deriving factorization.
\end{theorem}
\noindent
The theorem may well be true without these extra restrictions or with
weaker restrictions. But the proof that we will give only applies when the
restrictions are valid. Either better methods or much more work would be
needed to give a proof of a less restricted theorem.
However, we do in all cases impose the reality conditions, etc., that were
listed below Eq.\ (\ref{eq:integral}).
Our proof of the main theorem is made by combining two subsidiary theorems.
The first subsidiary theorem relates the trapping or non-trapping of a
contour to the positivity of the first-order (imaginary) shift in
denominators:
\begin{theorem}
\label{thm:pinch.to.1st.order.shift}
With the same hypotheses as in Thm.\ \ref{thm:main.contour}, the
integration is not trapped at $w_S$ if and only if there is a direction $v$
such that $v\cdot\partial A_j(w_S)$ is strictly positive for every $A_j$ which is
zero at $w_S$.
\end{theorem}
\noindent
Notice that the theorem does \emph{not} say that a contour deformation
given by a function $w_R \mapsto v(w_R)$ avoids a singularity at $w_S$ if and only
if $v(w_S)\cdot\partial A_j(w_S)$ is strictly positive for every $A_j$ that is zero
at $w_S$. That property appears to be universally assumed in textbook
proofs, but it is in fact false, as shown by the example in App.\
\ref{sec:2D.first.order}. That is, it is possible to avoid singularities
with an anomalous contour deformation, i.e., one for which $v\cdot\partial A_j(w_S)$ is
zero instead of positive for one or more of the relevant denominators.
Hence some trouble is needed to prove a correct theorem, as we will do
later. What the theorem does enable one to say is that if there exists an
anomalous deformation there is also a non-anomalous deformation that avoids
the singularity.
The second subsidiary theorem is a purely geometrical result:
\begin{theorem}
\label{thm:main.geom}
Let $V$ be a real vector space, and let $(D_1,\dots,D_N)$ be dual vectors
on $V$. Then, there is a direction $v$ for which $D_j(v)>0$ for all $j$,
if and only if there is no Landau point for the $D_j$.
\end{theorem}
\noindent
When $v$ is the deformation direction of a contour, this theorem gives the
condition under which the first order imaginary-direction shifts in
denominators can be made all positive.
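In two dimensions the geometric theorem is easy to check by brute force, since the positivity of each $D_j(v)$ depends only on the direction of $v$. The following illustrative Python sketch (the function and variable names are ours, not part of the formal development) contrasts a set of dual vectors that has a Landau point with one that does not:

```python
import math

def has_good_direction(duals, samples=3600):
    # In 2D, D_j(v) > 0 is invariant under positive rescaling of v,
    # so it suffices to scan directions on the unit circle.
    for k in range(samples):
        th = 2 * math.pi * k / samples
        v = (math.cos(th), math.sin(th))
        if all(D[0] * v[0] + D[1] * v[1] > 1e-12 for D in duals):
            return True
    return False

# Landau point exists: lambda = (1, 1, 1) gives sum_j lambda_j D_j = 0,
# so by the theorem no good direction can exist.
trapped = [(1.0, 0.0), (0.0, 1.0), (-1.0, -1.0)]
# No Landau point: v = (1, 1) makes all three D_j(v) strictly positive.
free = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]

assert not has_good_direction(trapped)
assert has_good_direction(free)
```

The scan over directions is of course only practical in very low dimensions; it serves here purely as a sanity check of the statement of the theorem.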
It will be convenient to prove these theorems in the opposite order to
which they are stated, since the proof of the contour deformation theorem
uses the geometrical theorem. But the motivation and relevance of the
geometric theorem arises from considerations about contour deformations, so
it was convenient to state the contour deformation theorems first.
\subsection{Elementary parts of proofs}
\label{sec:elem.parts}
Certain directions of implication in Thms.\
\ref{thm:pinch.to.1st.order.shift} and \ref{thm:main.geom} are almost trivial to
prove, as follows:
Suppose at some point $w_S$ in the integration range, a set of denominators
is zero and that a contour deformation gives positive first order shifts in
these denominators. Without loss of generality, list the denominators that
are zero as $(A_1,\ldots,A_N)$. Let the derivatives be $D_j=\partial A_j(w_S)$. The
positivity of first-order shifts means that $D_j(v(w_S))>0$ for $1\leq j \leq N$.
Then we have already seen that the contour deformation avoids the
integrand's singularity at $w_S$.
Now, under the same conditions, consider any array of real numbers $\lambda_j$
($1\leq j \leq N$) which are non-negative and for which at least one is positive.
Then $\sum_{j=1}^N\lambda_j D_j(v(w_S)) >0$ and hence the dual vector
$\sum_{j=1}^N\lambda_jD_j$ is nonzero. Therefore there is no Landau point.
In order to get the desired necessary and sufficient conditions, we also
need to prove the reverse implications, which is quite non-trivial. It is
interesting to observe that even to obtain one of the directions of
implication in the main theorem requires the use of a non-trivial
direction of implication in one or the other of the subsidiary theorems.
\section{Useful generalizations}
\label{sec:extra.cases}
In this section, I gather a couple of illustrations of applications of the
derived theorems to situations beyond the standard analyses of
singularities of ordinary Feynman amplitudes. The standard application to
momentum-space Feynman graphs is illustrated in App.\
\ref{sec:self-energy}.
\subsection{Glauber region}
\label{sec:Glauber}
One example of the need for a more general derivation of the Landau
criterion is the general analysis of the Glauber region given in Sec.\
5.6.3 of Ref.\ \cite{Collins:2011qcdbook}. In that situation,
approximations have been made for a Feynman graph that are valid in a
certain region of its loop momenta, with the momenta being classified into
soft, collinear, and hard categories. It is desired to determine when
there is a trap in the Glauber region; this is important because the
approximations used for soft momenta fail when a soft momentum is of the
kind called Glauber. The importance of this issue is that in some
situations, there are uncanceled Glauber contributions, and these break
standard formulations of factorization in interesting cases, e.g.,
\cite{Collins:2007nk}.
An appropriate method \cite[Sec.\ 5.6.3]{Collins:2011qcdbook} to locate
Glauber contributions uses a version of the Libby-Sterman argument, but
applied to the approximated graph in which standard soft and collinear
approximations have been made. If a contour deformation cannot be made to
avoid the Glauber regions, then there is a corresponding exact pinch in the
approximated graph. The use of the relevant Landau condition gives a
necessary and sufficient condition for the Glauber pinch.
The importance of this analysis, with its systematic use of an improved
Landau analysis, is that it can be used to locate in full generality where
extra regions and scalings in momentum space are important beyond the usual
classification into soft, collinear, and hard, with associated scalings of
momentum components.
\subsection{Coordinate space}
\label{sec:coord}
Another example is the extraction of coordinate-space properties of
amplitudes. For example, the Fourier transform of a free propagator is
\begin{equation}
\label{eq:coord}
S_F(x) = \int \frac{ \diff[n]{k} }{ (2\pi)^n }
e^{-i k \cdot x}
\frac{ i }{ k^2-m^2 +i\epsilon },
\end{equation}
where $n$ is the number of space-time dimensions, and the limit
$\epsilon \to 0$ from positive values is implicit, as usual.
Suppose we are interested in how this integral behaves when $x$ is scaled
to large values: $x \mapsto \kappa x$ with $\kappa \to \infty$. Of course, in the particular case
given, a solution can be found analytically, since the free propagator is a
kind of Bessel function with known asymptotics. But it is important to
have a method that can be applied much more generally without appealing to
properties of known special functions. To do this, we observe that over
much of the space of real $k$, one can deform the contour of $k$ so as to
give $k\cdot x$ a negative imaginary part. But near the pole at $k^2=m^2$, we
need to have the deformation compatible with the $i\epsilon$ prescription in the
denominator. If these two conditions on contour deformation are
incompatible, then we must leave the contour on the real ``axis'' and get
an unsuppressed contribution to the large $\kappa$ asymptotics.
Let us specify the deformed contour as
\begin{equation}
k = k_R + iv(k_R).
\end{equation}
Then the condition for an exponential suppression is
\begin{equation}
\label{eq:exp.supp}
-v(k_R)\cdot x > 0,
\end{equation}
while the condition for avoiding the propagator pole is\footnote{In this
statement, we are assuming that avoiding the pole can always be done by a
contour deformation that gives a positive first-order shift to the
imaginary part of the denominator. The complications hidden in
justifying this assumption have already been mentioned. Nevertheless,
use of the methods of Sec.\ \ref{sec:deal.with.zero.first.order} will
show that an exponential suppression with a singularity-avoiding contour
occurs if and only if there is a contour obeying Eqs.\
(\ref{eq:exp.supp}) and (\ref{eq:pole.avoid}).}
\begin{equation}
\label{eq:pole.avoid}
v(k_R) \cdot k_R > 0 \quad \mbox{when $k_R^2=m^2$}.
\end{equation}
These conditions are incompatible when $x$ is proportional to $k_R$ with a
\emph{positive} coefficient and $k_R$ is on-shell. If $k_R$ has positive
energy, then the relevant values of $x$ are future pointing in the same
direction as $k_R$, while if $k_R$ has negative energy, $x$ is past
pointing.
Given a value of $x$, this observation determines which values (if any) of
$k_R$ give unsuppressed contributions to $S_F(x)$. Here ``unsuppressed''
means ``not exponentially suppressed''; on this usage, contributions that
are merely power suppressed still count as unsuppressed.
Now an on-shell value of $k_R$ is time-like. Hence, when $x$ is space-like,
there is no value of $k_R$ giving an unsuppressed contribution. Then there
is no obstruction to deforming the contour, and an exponential suppression
of $S_F(x)$ is a consequence.
In contrast, when $x$ is time-like, the deformation cannot be made, and
that gives power-law behavior as $x$ is scaled. The dominant contribution
comes from near the pole in momentum-space, and the asymptote can be
extracted by suitable approximation methods. These methods continue to
apply if the free momentum-space propagator is replaced by the full
propagator in an interacting theory, which has a more general dependence on
momentum, but with its strongest singularity still being a pole at the
physical particle mass.
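Although the Minkowski integral (\ref{eq:coord}) is awkward to evaluate numerically, the mechanism can be illustrated in a simplified setting. For the one-dimensional Euclidean analogue $\int \diff{k}\, e^{-ikx}/(k^2+m^2)$, the pole sits at $k=\pm im$ off the real axis, and the closed form $(\pi/m)e^{-m|x|}$ decays exponentially at a rate set by the pole position, exactly as the contour-deformation argument suggests. A minimal pure-Python check (the function name is ours):

```python
import math

def propagator_ft(x, m=1.0, kmax=200.0, n=400001):
    # Trapezoidal rule for I(x) = \int dk cos(k x) / (k^2 + m^2)
    # over [-kmax, kmax]; the sine part cancels by symmetry.
    dk = 2.0 * kmax / (n - 1)
    total = 0.0
    for i in range(n):
        k = -kmax + i * dk
        w = 0.5 if i in (0, n - 1) else 1.0
        total += w * math.cos(k * x) / (k * k + m * m)
    return total * dk

# Closed form: (pi/m) * exp(-m*|x|); the decay rate m is the distance
# of the pole from the real axis.
for x in (1.0, 2.0, 3.0):
    exact = math.pi * math.exp(-x)
    assert abs(propagator_ft(x) - exact) < 1e-3
```

This is only an analogy for the Minkowski case; there the deformation is obstructed for time-like $x$, and the result is power-law rather than exponential behavior, as described above.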
It is worth noting that similar methods can also be applied to get from the
behavior of a coordinate-space Green function to particular properties in
momentum space. Thus one can determine for the vertices of a graph the
dominant regions in coordinate space that contribute to a particular
process. We leave the systematic codification of such results to future
work.
Some relevant recent work is by Erdo\u{g}an and Sterman
\cite{Erdogan:2014gha,Erdogan:2016ylj,Erdogan:2017gyf}.
\section{Literature review}
\label{sec:literature}
In this section I assess some of the classic literature about the Landau
analysis. Since many of these works continue to be cited regularly as the
primary sources for results on singularities and pinches of contours, it is
useful to examine their arguments in detail. The review in this section
extends observations already made in the introduction.
It should be observed that typical treatments rely on the use of Feynman
parameters to combine the denominators into a single denominator. Then
they examine the conditions for a pinch of the integration contour, rather
than trying to create a more detailed geometrical argument that applies to
the multiple-denominator situation. This rules out any easy application of
the methods to more general situations, e.g., examining properties of
integrals involving coordinate space properties, as in Sec.\
\ref{sec:coord}, or the issues of algorithmic deformation of a contour for
numerical integration of a Feynman graph in the pure momentum-space
formulation.
\paragraph{Landau \cite{Landau:1959fi}}
Landau's paper \cite{Landau:1959fi} gives the original treatment of his
criterion for singularities of a Feynman graph as a function of its
external parameters.
The analysis solely uses the Feynman parameter representation in the form
(\ref{eq:F.graph.param.mom}). It relies on the denominators being those of
standard Feynman graphs. Then in Landau's Eq.\ (4) the single denominator
is written as $\phi+K(k',l',\ldots)$, where $\phi$ is a function only of the external
parameters and $K$ is a homogeneous quadratic form in a set of variables
that are formed by a (parameter-dependent) linear transformation from the
original loop momenta. This by itself rules out the case that some
denominators have linear dependence on some (or all) loop momenta. Such
cases arise in practice. For example, in QCD applications we have cases
with Wilson denominators. In such a situation, the equivalent of $K$ is not
a homogeneous quadratic function.
The argument then continues to determine that a singularity of the integral
(as a function of external parameters) occurs when there is a point in
integration space where the denominator and its first derivative vanish.
No detailed argument is given, the core parts of the argument being treated
as ``easy to verify''. However, a detailed derivation, in Sec.\
\ref{sec:one-denom} of the present paper, is not at all easy. In fact the
proof fails whenever the matrix of second derivatives of the denominator
has an eigenvector with zero eigenvalue. This situation does in fact
sometimes arise in practice, as mentioned in a later paper by Coleman and
Norton \cite{Coleman:1965xm}.
Moreover, Landau's argument is rather difficult to apply as written if
there are massless particles, as is essential in applications to QCD
factorization. In contrast, the methods of the present paper do apply
unchanged to such cases. They are also applied directly to the momentum
space integral without an appeal to Feynman parameters.
A minor problem is that the $i\epsilon$ prescription is not mentioned explicitly
even though that is critical in determining whether or not there is a
pinch.
\paragraph{Coleman and Norton \cite{Coleman:1965xm}}
Coleman and Norton \cite{Coleman:1965xm} again use a parametric
representation. In the first part of the paper, they discuss the version
with both momentum and parameter integrations. They state, rather like
Landau, that to get a pinch there needs to be either a coalescing pair of
singularities or an end-point singularity. This immediately gives the
Landau equations. However, given this first part of the derivation, the
Landau condition is clearly necessary but not sufficient, since it has not
yet been determined whether or not coalescing singularities actually pinch
the contour. It is also not really obvious what the term ``coalescing
singularities'' means except in one dimension. In addition, it is not
clear why attention is restricted to pairs of singularities.
To provide an actual proof of necessity and sufficiency, Coleman and Norton
perform the momentum integrals analytically, and work with an integral
solely over the parameters, i.e., an integral of the form
(\ref{eq:F.graph.param.only}), and restore the $i\epsilon$. It is not actually
clear why they switch to this kind of integral. The rest of their argument
appears to apply to a general multidimensional integral (subject to certain
conditions on the quadratic terms, as we will see). Thus their arguments
appear to apply equally to the integral with both momentum and parameter
integrations. But they clearly think that this approach would fail.
Then they examine the denominator in the neighborhood of a point where both
the denominator and its first derivative are zero. This is a place where
the Landau condition is satisfied, because of the zero first derivative.
They expand the denominator to quadratic order in small deviations from the
candidate pinch location, which gives a formula for the denominator of the
form
\begin{equation}
A = \frac12 \sum_{ij} E_{ij} \eta_i\eta_j.
\end{equation}
The authors then state that it is easy to show that the contour is trapped,
but only if none of the eigenvalues of $E_{ij}$ is zero. However, as will be
seen later in the present paper, in Sec.\ \ref{sec:one-denom}, an adequate
proof is not entirely trivial. The proof does indeed fail when zero
eigenvalues exist. It is not at all clear whether the failure can be
remedied, or how that can be done.
It is mentioned that cases of zero eigenvalues do in reality arise in
massive theories, but that they occur only at ``very exceptional points''.
referred to Ref.\ \cite{Eden:1961} for more details. But that paper
appears not to contain a clear statement of whether such singularities
can occur in the physical region. Considerable further work is apparently
needed to resolve the issue.
In contrast, in a massless theory, a much simpler failure happens, as will
be explained in this paper in App.\ \ref{sec:F.param.massless} for the case
of a one-loop self-energy with massless particles. This graph has a
well-known collinear pinch when the external momentum is light-like. But
it is found that in the parameter integral there is no pinch that
corresponds to the collinear pinch in momentum space.
A further complication is found in App.\ \ref{sec:F.param.linear} in an
example graph where propagators are linear in a momentum component. For
that graph the pure parameter integral has a pinch independently of whether
there is a pinch in the momentum integral.
Evidently Coleman and Norton have assumed that a pinch in momentum space
occurs if and only if a corresponding pinch occurs in parameter space, and
that this is so obvious as to need neither mention nor proof. The examples
just mentioned show that the implication is not even correct, in general,
even if it works in the case of standard massive Feynman graphs.
After giving their derivation of the Landau condition, Coleman and Norton
derive their well-known result that a pinch configuration corresponds to a
situation with classical particles propagating and scattering in space-time
with momenta corresponding to the on-shell momenta of the lines
participating in the pinch.
It is important to remember that it is not the result that breaks down, but
the proof. But the proof's breakdown is a symptom of things that were not
understood. For example, in Apps.\ \ref{sec:F.param.massless} and
\ref{sec:F.param.linear} are given counterexamples that imply a failure of
Coleman and Norton's proof. But in both cases the Landau condition
correctly locates pinch(es) in the momentum-space integral. The general
proof in the present paper applies perfectly well to those cases. Of
course the new proof is much longer than those in the old papers.
\paragraph{ELOP \cite{ELOP}}
The venerable book by Eden et al.\ \cite{ELOP} remains a standard reference
for analyticity properties of Feynman graphs. Therefore it is worth
carefully assessing its treatment. As was remarked in the introduction,
the authors do say that their treatment lacks rigor, but do not make
explicit what is not rigorous.
After a clear discussion of the one-dimensional case, they come to the
multidimensional case on p.\ 47. Their subject matter is a general
integral over multiple complex variables, but without the further
``physical region'' restrictions inherent in our (\ref{eq:integral}); these
are a reality property of the denominators and an $i\epsilon$ prescription.
Theirs is therefore in principle a more general treatment. Their equations
for singularity surfaces $S_r=0$ correspond to the equations $A_j=0$ for
the zeros of our denominator factors.
The first problem is that, when a singularity surface advances on the
contour of integration, they say that if the singularity is to be
avoided, the contour should be distorted ``in the direction of the normal''
to the singularity surface. This appears to say that there is a unique
direction in which to distort the contour. But we have seen that in fact
there is a whole half-space of possible directions, and it is absolutely
necessary to take this into account. In addition, the concept of an
unambiguous ``normal'' to a surface only makes sense in a Euclidean space,
which is not the case for multidimensional complex variables with which we
are concerned.
In addition, they appear to assume as so obvious as not to need a proof
that for a contour deformation to avoid a singularity surface it must give
a nonzero first order shift in the denominator factors (or the equivalent
in their more general integral). But this is definitely not the case ---
see App.\ \ref{sec:2D.first.order}.
Then in Eq.\ (2.1.19) they assert the conditions for singularity surfaces
to trap the integration contour. These are a form of the Landau condition.
But no proof and no reference to a proof is given. It is as if they think
the equation is obvious. But as we will see in Secs.\
\ref{sec:overall}--\ref{sec:deal.with.zero.first.order}, the condition is
rather non-trivial to derive. They continue to refer to normals to
surfaces, but have evidently confused the concept with the relevant one of
dual vectors, so that there is considerable conceptual confusion not
conducive to adequate reasoning. It is not at all obvious whether they
consider the conditions to be both necessary and sufficient, and why.
Finally, their statement (2.1.19b) of the condition for a version of a
Landau point lacks the positivity constraint needed for the kind of
``physical region'' pinch we consider. Recall that the positivity
constraint is that the $\lambda_j$ parameters in Defn.\ \ref{def:Landau.point}
are non-negative, and that at least one is positive. While the more
general version is appropriate for pinches outside the physical region,
further conditions are needed to determine whether or not there is a pinch.
This can be seen from the fact that their version of the condition is
trivially satisfied whenever the number of singularity surfaces is larger
than the dimension of the integration space, as the authors do indeed
observe. Hence some stronger condition than (2.1.19b) is needed to provide
sufficient conditions to determine that there is a pinch.
In stark contrast, for a physical region pinch, the Landau condition (with
the positivity constraint) is both necessary and sufficient. Of course
this only applies given both the reality conditions on our denominator
factors $A_j$ and the $i\epsilon$ prescription; the relevant theorem is Thm.\
\ref{thm:main.contour}, and its very non-trivial proof appears in later
sections. (Our proof also has some further restrictions, given in the
statement of the theorem; these are obeyed by standard and by important
non-standard Feynman graphs.) It is worth re-emphasizing that it is solely
the physical-region pinches that are relevant to QCD applications, and the
positivity constraints on the $\lambda_j$ parameters in the Landau point
definition are very important in delimiting collinear configurations of
partons.
The positivity conditions do appear in the ELOP treatment for physical
region pinches/singularities, but only when they consider Feynman graphs in
a Feynman parametric representation. Then the positivity conditions arise
from the range of the Feynman parameters. But they do not derive the same
constraints when they derive the conditions for a pinch from the pure
momentum-space formula for a Feynman graph. Moreover, working in parameter
space leads to the issues explained in the analysis of the Coleman-Norton
treatment.
\section{The geometrical theorem: Set up}
\label{sec:overall}
In this section and the next two sections, we will prove the last of the
theorems listed in Sec.\ \ref{sec:primary.theorems}, i.e., the purely
geometric Thm.\ \ref{thm:main.geom}. It can be regarded as giving a
compatibility condition for linear constraints on directions in a vector
space.
Throughout the treatment of this theorem, we work with a
finite-dimensional\footnote{The assumption of finite dimensionality can be
relaxed, but we will not need to do so.} real vector space $V$ of
dimension $d$, and we suppose given a list $\Dset$ of dual vectors $D_j$ on
$V$ ($1\leq j \leq N$). By definition, each $D_j$ is a real-valued linear
function from $V$ to the space of real numbers. The constraints on vectors
with which we are concerned are written $D_j(v) > 0$. In component
notation, we write
\begin{equation}
\label{eq:Dj.compt}
D_j(v) = \sum_\alpha D_{j\alpha}v^\alpha,
\end{equation}
where $v^\alpha$ denotes the components of $v$ with respect to some basis. But
we will use coordinate-independent notation much of the time. The space of
all dual vectors is a vector space $V^*$ of the same dimension as $V$ (if
$V$ is finite dimensional). We do not assume that there is any metric
given on $V$ or $V^*$.
Observe that although our original subject was integration in a complex
space, the manipulations involved in analyzing possible directions of
deformation, and hence of the constraints $D_j(v)>0$, only concern a real
vector space.
In the integration problem, we were concerned with whether or not a contour
deformation exists that avoids a singularity of the integrand. In the
geometric problem that we are addressing at the moment, a concept
corresponding to singularity avoidance in integrations is what we call a
``good direction'', defined by
\begin{definition}
\label{def:all.dir}
A \emph{good direction} for $(D_1,\dots,D_N)$ is defined to be a $v \in V$
such that $D_j(v)>0$ for all $j$.
\end{definition}
Throughout this and the next two sections, we use the terminology of Landau
points and Landau conditions that was defined in Defns.\
\ref{def:Landau.point} and \ref{def:Landau.condition}, names motivated by
the application to integrals. The theorem to be proved is that a good
direction exists for $(D_1,\dots,D_N)$ if and only if there is no Landau
point. Alternatively, there is no good direction if and only if there is
at least one Landau point.
We have already observed, in Sec.\ \ref{sec:elem.parts}, that if there is a
good direction then there is no Landau point and hence the Landau condition
holds. Equivalently, if the Landau condition holds, then there is no good
direction.
To complete the proof of Thm.\ \ref{thm:main.geom}, we need to prove the
converse, i.e., that if there is no good direction then there is a Landau
point. What is needed is to exclude with full generality the possibility
that there might fail to exist both a Landau point and a good direction.
In simple examples, it is not too hard to see that the theorem is valid,
with both directions of implication; such examples can often be visualized.
But in general the vector space $V$ can be of arbitrarily high dimension,
and the number of $D_j$ can be arbitrarily large. Then visualizing the
details of the proof is hard. Thus careful abstract arguments are needed.
In making the detailed analysis, we will encounter methods and results that
should be useful for algorithmically determining good directions for contour
deformations in numerical integration over loop momenta in Feynman graphs.
We will start in Sec.\ \ref{sec:pos.sets} by characterizing properties of
the set of good directions, and especially the boundaries of this set.
Then in Sec.\ \ref{sec:landau.proof}, we will use these properties to
complete the proof of the geometric theorem. A reader may find it unclear
what the motivation is for deriving some of the earlier properties, i.e.,
those in Sec.\ \ref{sec:pos.sets}. So it may be useful to skip ahead to
Sec.\ \ref{sec:landau.proof} to see what use is made of the results of
Sec.\ \ref{sec:pos.sets}.
\section{Geometry of positive regions of sets of dual vectors}
\label{sec:pos.sets}
\subsection{Setting up the problem}
We use the notation of the previous section, and define the positive region
of a list $\Dset=(D_1,\ldots,D_N)$ of dual vectors by
\begin{definition}
We define $P_{\Dset}$ to be the region of $V$ in which all the
$D_j$s in $\Dset$ are strictly positive:
\begin{equation}
P_{\Dset} \stackrel{\textrm{def}}{=} \left\{ v \in V : \forall D_j \in \Dset, D_j(v) > 0 \right\}.
\end{equation}
We call this the \emph{``positive region''} of $\Dset$.
\end{definition}
The overall issue we are addressing is the determination of whether or not
$P_{\Dset}$ is empty.
In this section, we will examine the case that $P_{\Dset}$ is non-empty,
and determine properties of its boundary that we will need later. Observe
that if $P_{\Dset}$ is non-empty, then all the $D_j\in\Dset$ are necessarily
non-zero.
\begin{definition}
The complement of $P_{\Dset}$ is notated as:
\begin{equation}
\widehat{P}_{\Dset} \stackrel{\textrm{def}}{=} V \setminus P_{\Dset}
= \left\{ v \in V : \exists D_j \in \Dset : D_j(v) \leq 0 \right\}.
\end{equation}
\end{definition}
We make a lot of use of the intersection of the kernels of $D_j$.
So we define
\begin{definition}
\begin{equation}
K_{\Dset} \stackrel{\textrm{def}}{=} \left\{ v \in V : \forall D_j \in \Dset, D_j(v) = 0 \right\}.
\end{equation}
\end{definition}
\begin{definition}
Define $n_{\Dset}$ to be the codimension of $K_{\Dset}$ in $V$, i.e.,
$n_{\Dset} = d - \dim(K_{\Dset})$.
\end{definition}
It is well-known that $K_{\Dset}$ is a vector subspace of $V$. When
$P_{\Dset}$ is non-empty, $K_{\Dset}$ cannot be the whole of $V$, so
that in this case its codimension obeys $n_{\Dset} \geq 1$.
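In a coordinate basis, $n_{\Dset}$ is simply the rank of the $N\times d$ matrix of components $D_{j\alpha}$ from (\ref{eq:Dj.compt}), since $K_{\Dset}$ is the null space of that matrix. A small Python sketch of this observation (the \texttt{rank} helper is ours, implemented by Gauss--Jordan elimination):

```python
def rank(rows, tol=1e-9):
    # Numerical rank of a matrix given as a list of rows,
    # via Gauss-Jordan elimination with a simple pivot search.
    rows = [list(r) for r in rows]
    r = 0
    for c in range(len(rows[0])):
        piv = next((i for i in range(r, len(rows)) if abs(rows[i][c]) > tol), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and abs(rows[i][c]) > tol:
                f = rows[i][c] / rows[r][c]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

# Example: three duals on R^3 spanning a 2-dimensional subspace of V*.
duals = [(1, 0, 0), (0, 1, 0), (1, 1, 0)]
assert rank(duals) == 2      # n_D = 2
# K_D = {(0, 0, t)} has dimension 1 = d - n_D = 3 - 2.
```

For serious numerical work one would instead use a library routine with a properly chosen tolerance, but the toy version makes the rank--codimension relation explicit.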
We can decompose $V$ as a direct sum of the form
\begin{equation}
\label{eq:V.decomp}
V = V_{\perp \Dset} \oplus K_{\Dset}.
\end{equation}
The dimension of $V_{\perp \Dset}$ is $n_{\Dset}$. Note that $V_{\perp
\Dset}$ is non-unique, since its basis vectors can be changed by the
addition of elements of $K_{\Dset}$. If we are given that $P_{\Dset}$
is non-empty, then there must be a region of $V_{\perp \Dset}$ where the
$D_j$ are positive.
The critical result that we are working towards in this section is Thm.\
\ref{thm:PD.decomp} below, where we find a set of non-zero ``edge vectors''
$e_L$ for $P_{\Dset}$ such that every element $v$ of $P_{\Dset}$ has the
form $v = \sum_L C_L e_L + v_K$, where all the $C_L$ are positive real
numbers, $C_L>0$, and $v_K \in K_{\Dset}$.
To derive Thm.\ \ref{thm:PD.decomp}, we will need a series of subsidiary
results, many of which are very elementary, and are obvious in
low-dimensional examples. But these results need to be explicitly stated
in order to ensure that the main theorem is properly proved in a space of
arbitrarily high dimension; their cumulative effect is quite non-trivial.
Many of the subsidiary results are likely to be useful in themselves for
applications, e.g., for searching for good directions to deform a contour
when there is no pinch.
\subsection{Elementary properties of \texorpdfstring{$P_{\Dset}$}{PD}}
\begin{theorem}
\label{thm:P.basic}
$P_{\Dset}$ obeys
\begin{enumerate}[(a)]
\item It is convex, i.e., if $v_1,v_2 \in P_{\Dset}$ and $\kappa$ is any
real number between 0 and 1 inclusive (i.e., $0\leq\kappa\leq1$), then
$\kappa v_1 + (1-\kappa)v_2 \in P_{\Dset}$.
\item If $v\in P_{\Dset}$ then $\lambda v\in P_{\Dset}$ for any positive real
$\lambda$.
\item $P_{\Dset}$ is connected.
\item It is an open set.
\end{enumerate}
\end{theorem}
\begin{proof}
Suppose that $v_1,v_2 \in P_{\Dset}$, that $\lambda_1$ and $\lambda_2$ are real
numbers, that both $\lambda_1,\lambda_2 \geq 0$, and that at least one is strictly
positive. Then each $D_j(\lambda_1v_1 + \lambda_2v_2)=\lambda_1D_j(v_1) + \lambda_2D_j(v_2)$ is
strictly positive, and hence $\lambda_1v_1 + \lambda_2v_2 \in P_{\Dset}$. (This
demonstrates that $P_{\Dset}$ is an example of a convex cone in
mathematical terminology.)
Properties (a) and (b) immediately follow, and then so does (c) from
(a).
To derive part (d), let $v \in P_{\Dset}$, and let $l = \min_{D_j \in \Dset}
D_j(v) > 0$. Now let $\delta v$ be another element of $V$. Then
\begin{equation}
D_j(v+\delta v) = D_j(v) + D_j(\delta v) \geq l + D_j(\delta v).
\end{equation}
For all small enough $\delta v$, we have $|D_j(\delta v)|<l$ for every $D_j\in
\Dset$, and then $v+\delta v \in P_{\Dset}$. Hence $P_{\Dset}$ is open.
\end{proof}
Since $P_{\Dset}$ is open, it is a manifold of the same dimension as
$V$, i.e., $d$, provided only that it is non-empty.
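The convex-cone structure just established is easy to verify numerically in a small example. The following Python sketch (all names are ours) builds a list of dual vectors whose positive region is non-empty by construction, since each $D_j$ is clustered around the direction $(1,1,1)$, and checks that positive combinations of points of $P_{\Dset}$ remain in $P_{\Dset}$:

```python
import random

random.seed(7)
d, N = 3, 6
# Duals clustered around (1, 1, 1), so that D_j((1,1,1)) is about 3
# for every j, and P_D is guaranteed non-empty.
duals = [[1.0 + random.uniform(-0.3, 0.3) for _ in range(d)] for _ in range(N)]

def D(j, v):
    return sum(duals[j][a] * v[a] for a in range(d))

def in_P(v):
    return all(D(j, v) > 0.0 for j in range(N))

# Two points of P_D near (1, 1, 1).
v1 = [1.0, 0.9, 1.1]
v2 = [1.2, 1.0, 0.8]
assert in_P(v1) and in_P(v2)

# Cone property: combinations lam1*v1 + lam2*v2 with lam1, lam2 >= 0
# and lam1 + lam2 > 0 stay inside P_D.
for lam1, lam2 in [(0.5, 0.5), (2.0, 0.0), (0.3, 1.7)]:
    comb = [lam1 * a + lam2 * b for a, b in zip(v1, v2)]
    assert in_P(comb)
```

The check exercises exactly the computation in the proof: linearity of each $D_j$ carries positivity through to the combination.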
\begin{quote}
\textbf{From now on, we will assume that $P_{\Dset}$ is non-empty,
unless explicitly stated otherwise, and will only reiterate this
assumption when it seems particularly important.}
\end{quote}
\subsection{Basic properties of the boundary of
\texorpdfstring{$P_{\Dset}$}{PD}}
\label{sec:bdy.props}
We now consider the boundary $\partial P_{\Dset}$ of $P_{\Dset}$, i.e., the
set of points of $V$ that are limit points both of $P_{\Dset}$ and its
complement $\widehat{P}_{\Dset}$.
\begin{theorem}
\label{thm:bdy.char}
If $P_{\Dset}$ is non-empty, the boundary of $P_{\Dset}$ is
characterized by
\begin{align}
\partial P_{\Dset}
= \bigl\{ v \in V : {}& \forall D_j \in \Dset, D_j(v) \geq 0
\nonumber\\
& \mbox{ \rm and } \exists D_j \in \Dset : D_j(v) = 0
\bigr\}.
\end{align}
\end{theorem}
It follows that the boundary is contained in $\widehat{P}_{\Dset}$.
\begin{proof}
Suppose we have a point $v \in \partial P_{\Dset}$. Then there is a sequence
$v_a$ in $P_{\Dset}$ whose limit is $v$. So for all $D_j\in \Dset$
\begin{equation}
D_j(v) = \lim_{a\to\infty} D_j(v_a) \geq 0.
\end{equation}
If $D_j(v)$ were also nonzero for all $D_j$, then $v$ would be in
$P_{\Dset}$. Since $P_{\Dset}$ is open, this would imply that $v$ is not
in its boundary. Hence we must have $D_j(v)=0$ for at least one $D_j$.
Conversely, suppose we have a point $v \in V$ for which all the $D_j(v)$ are
positive or zero, and at least one of which is zero, i.e.,
\begin{equation}
\label{eq:P.bc}
\forall D_j \in \Dset, D_j(v) \geq 0,
\mbox{ and }
\exists D_j \in \Dset : D_j(v) = 0.
\end{equation}
Then choose $\delta v \in P_{\Dset}$. For every positive real number $\lambda$,
$D_j(v+\lambda\delta v) = D_j(v) + \lambda D_j(\delta v) > 0$, so that $v+\lambda\delta v \in P_{\Dset}$.
Thus $v$ is a limit point of $P_{\Dset}$. But it is not in $P_{\Dset}$, so
it must be in the complement $\widehat{P}_{\Dset}$. It follows that $v$ is
trivially a limit point of $\widehat{P}_{\Dset}$.
\end{proof}
\begin{theorem}
(a) The subspace where all the $D_j$s are zero is inside the boundary
of $P_{\Dset}$. I.e., $K_{\Dset} \subseteq \partial
P_{\Dset}$.
\\
(b) $\partial P_{\Dset}$ is connected.
\end{theorem}
\begin{proof}
Every element $k$ of $K_{\Dset}$ obeys $D_j(k)=0$, for all $j$, and is
thus in $\partial P_{\Dset}$, by Thm.\ \ref{thm:bdy.char}. Hence
$K_{\Dset} \subseteq \partial P_{\Dset}$.
Since the zero vector is in $K_{\Dset}$, it is also in $\partial
P_{\Dset}$. For any $v$ in $\partial P_{\Dset}$, $\lambda v$ is also
in $\partial P_{\Dset}$ whenever $\lambda \geq 0$, by Thm.\
\ref{thm:bdy.char}. The segment $\{\lambda v : 0\leq\lambda\leq1\}$ thus
connects an arbitrary element of $\partial P_{\Dset}$ to one
particular element, the zero vector. Hence $\partial P_{\Dset}$
is connected.
\end{proof}
For our purposes, the interesting parts of $\partial P_{\Dset}$ are those that are
not in $K_{\Dset}$, i.e., where at least one $D_j(v)$ is strictly positive.
Therefore we define
\begin{definition}
The non-trivial part of the boundary of $P_{\Dset}$ is
\begin{equation}
\widetilde{\partial} P_{\Dset} \stackrel{\textrm{def}}{=} \partial P_{\Dset} \setminus K_{\Dset}.
\end{equation}
\end{definition}
The set $\widetilde{\partial} P_{\Dset}$ may be empty; our later work
shows that this happens if and only if $n_{\Dset}=1$ (or, of course, if
$P_{\Dset}$ itself is empty).
From Thm.\ \ref{thm:bdy.char} it follows that the non-trivial part of the
boundary obeys
\begin{align}
\widetilde{\partial} P_{\Dset}
= \bigl\{ v \in V : {}& \forall D_j \in \Dset, D_j(v) \geq 0
\nonumber\\
& \mbox{ \rm and } \exists D_j \in \Dset : D_j(v) = 0
\nonumber\\
& \mbox{ \rm and } \exists D_j \in \Dset : D_j(v) > 0
\bigr\},
\end{align}
i.e., all the $D_j(v)$ are non-negative, at least one is zero, and at
least one is positive.
\begin{definition}
Here we define some auxiliary objects at a point $w$ that is in the
non-trivial part of the boundary, $w \in \widetilde{\partial} P_{\Dset}$.
\begin{enumerate}[(a)]
\item The sets of $D_j$ with zero and non-zero values are:
\begin{subequations}
\label{ref:Zw.hatZw}
\begin{align}
Z(w) & \stackrel{\textrm{def}}{=} \left\{ D_j \in \Dset : D_j(w) = 0 \right\},
\\
\widehat{Z}(w) & \stackrel{\textrm{def}}{=} \left\{ D_j \in \Dset : D_j(w) > 0 \right\}.
\end{align}
\end{subequations}
Given $w$, each $D_j$ is in exactly one of these sets, of course. Both
sets are non-empty when $w$ is in the non-trivial part of the boundary.
\item The minimum non-zero value of the $D_j(w)$s is:
\begin{equation}
m(w) \stackrel{\textrm{def}}{=} \min_{D_j \in \widehat{Z}(w)} D_j(w) > 0.
\end{equation}
\item Let $K(w)$ be the intersection of the kernels of those $D_j$
that are in $Z(w)$:
\begin{align}
K(w) & \stackrel{\textrm{def}}{=} \left\{ v \in V : \forall D_j \in Z(w) : D_j(v) = 0 \right\}
\nonumber\\
& = \bigcap_{D_j \in Z(w)} \ker(D_j).
\end{align}
\item Let $n(w)$ be the codimension of $K(w)$, so that the dimension
of $K(w)$ is $d-n(w)$.
\end{enumerate}
\end{definition}
Note that $w$ is one (non-zero) element of the subspace $K(w)$.
Since $P_{\Dset}$ is non-empty, there are vectors $v$ for which $D_j(v)>0$
for all $j$. It follows that $K(w)$ cannot be the whole of $V$. Hence
\begin{equation}
n(w)\geq1 .
\end{equation}
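The auxiliary objects just defined are straightforward to compute in a finite-dimensional example. The sketch below (with our own helper name \texttt{boundary\_data}) uses the fact that the codimension $n(w)$ of the common kernel $K(w)$ equals the rank of the set of functionals in $Z(w)$:

```python
# Computing Z(w), Z-hat(w), m(w), and the codimension n(w) at a point w;
# the name boundary_data and the example are our own choices.
import numpy as np

def boundary_data(D, w, tol=1e-12):
    vals = D @ w
    Z = np.where(np.abs(vals) <= tol)[0]         # indices with D_j(w) = 0
    Zhat = np.where(vals > tol)[0]               # indices with D_j(w) > 0
    m = vals[Zhat].min() if len(Zhat) else None  # minimum non-zero value
    # n(w) = codim K(w) = rank of the functionals that vanish at w.
    n = np.linalg.matrix_rank(D[Z]) if len(Z) else 0
    return Z, Zhat, m, n

# Three functionals on R^3; w lies on the kernel of the first one only.
D = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [1.0, 1.0, 0.0]])
w = np.array([0.0, 2.0, 5.0])
Z, Zhat, m, n = boundary_data(D, w)
assert list(Z) == [0] and sorted(Zhat) == [1, 2]
assert m == 2.0 and n == 1        # so dim K(w) = d - n(w) = 2
```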
\subsection{Decomposition of the boundary
of \texorpdfstring{$P_{\Dset}$}{PD}}
\label{sec:bdy.decomp}
In this section, we show that the boundary of $P_{\Dset}$ can be
decomposed into a hierarchy of disjoint flat segments. On each segment,
one subset of the $D_j$s is strictly positive and the remaining $D_j$s are zero.
First, given $w \in \widetilde{\partial} P_{\Dset}$, we construct the
boundary segment of which it is part. We define
\begin{align}
B(w) \stackrel{\textrm{def}}{=} {}& \bigl\{ v \in K(w): \forall D_j \in \hat{Z}(w), D_j(v) > 0 \bigr\}
\nonumber\\
={}& \bigl\{ v \in V: \forall D_j \in Z(w), D_j(v) = 0;
\nonumber\\
&\hspace*{1cm}
\mbox{and}\ \forall D_j \in \hat{Z}(w), D_j(v) > 0 \bigr\}.
\end{align}
Note that $B(w)$ is a subset of $K(w)$.
The boundary segments have the following elementary properties:
\begin{theorem}
\label{thm:B.props}
\begin{enumerate}[(a)]
\item $B(w)$ is convex.
\label{thm:part:B.convex}
\item Whenever $v\in B(w)$, so is $\lambda v$ for positive $\lambda$.
\label{thm:part:B.scale}
\item $B(w)$ is a flat connected manifold of the same dimension as
$K(w)$, i.e., $d-n(w)$.
\label{thm:part:B.manif}
\item For every point $v\in B(w)$,
\label{thm:part:B.const.ZK}
\begin{equation}
\label{eq:B.ZK}
\begin{split}
Z(v) = Z(w), \quad
\hat{Z}(v) = \hat{Z}(w), \\
K(v) = K(w), \quad
n(v) = n(w).
\end{split}
\end{equation}
\item When $v \in B(w)$, we have $B(v)=B(w)$.
\label{thm:part:B.const.B}
\item For any $v$ and $w$ in $\widetilde{\partial} P_{\Dset}$, either $B(v)$ and
$B(w)$ are non-intersecting or they are equal. It immediately follows
that the boundary of $P_{\Dset}$ is decomposed into a set of disjoint flat
segments.
\label{thm:part:B.overlap}
\end{enumerate}
\end{theorem}
\begin{proof}
Parts (\ref{thm:part:B.convex}) and (\ref{thm:part:B.scale}) follow by the
same method used to prove the corresponding properties for $P_{\Dset}$.
It immediately follows that $B(w)$ is connected and flat. As to the
dimension, first note that from its definition, $B(w)$ is contained in the
kernel space $K(w)$, so that its dimension is at most that of $K(w)$, i.e.,
$d-n(w)$. Furthermore, let $\delta w$ be any element of $K(w)$. For all small
enough $\delta w$, $w+\delta w$ is in $B(w)$. This is because when $D_j\in Z(w)$,
$D_j(w+\delta w)=D_j(w) + D_j(\delta w)=0$, and because when $D_j\in\hat{Z}(w)$ and $\delta w$
is small enough, the value of $D_j(\delta w)$ cannot compensate the positive value
of $D_j(w)$. Hence the dimension of $B(w)$ is at least $d-n(w)$.
Property (\ref{thm:part:B.manif}) now follows.
Next suppose $v \in B(w)$. By the definition of $B(w)$, $D_j(v) = 0$ for
every $D_j$ in $Z(w)$, and $D_j(v) > 0$ for every $D_j$ in $\hat{Z}(w)$.
Hence the set $Z(v)$ is the same as $Z(w)$, since it is the set of $D_j$
for which $D_j$ is zero at $v$. From this follows all of (\ref{eq:B.ZK}).
It then follows from the definition of $B(v)$ that $B(v)=B(w)$ whenever
$v\in B(w)$, which thereby proves property (\ref{thm:part:B.const.B}).
Now consider $B(v)$ and $B(w)$ for two points $v$ and $w$. Either
they do not intersect or they intersect. In the second case, pick $k$
in the intersection. From the previous result it follows that
$B(k)=B(v)$ and $B(k)=B(w)$, and hence that $B(v)=B(w)$. This proves
property (\ref{thm:part:B.overlap}).
\end{proof}
Since the sets $Z(v)$, $\hat{Z}(v)$, and $K(v)$, and the number $n(v)$ are
constant on any given boundary segment $B$, we can say that $Z$, etc., are
determined by the set of points $B$. Thus we can write
\begin{align}
Z(B) &\stackrel{\textrm{def}}{=} \{ D_j : D_j(v) = 0 \mbox{~for every~} v \in B \}
\nonumber\\
&= Z(v) \mbox{ for every $v \in B$},
\end{align}
and similarly for $\hat{Z}(B)$, $K(B)$ and $n(B)$.
Notice that the subspace $K(B)$ contains the common kernel subspace $K_{\Dset}$
of all the $D_j$. For a non-trivial boundary segment $B$, the subspace
$K(B)$ is strictly larger than $K_{\Dset}$: in this case there are points
of $K(B)$ where at least one $D_j$ is non-zero, and such points cannot be
in $K_{\Dset}$. Hence the codimensions obey $n(B) \leq n_{\Dset}$, with
equality only for the trivial boundary segment, $n_{\Dset}$ being the
codimension of the smallest (trivial) boundary segment, i.e., of the common
kernel of all the $D_j$.
Observe that each boundary segment $B$ obeys all of the properties of
positive regions, but with respect to $K(B)$ instead of the whole
space $V$, and with respect to $\widehat{Z}(B)$ instead of $\Dset$.
In particular, it is an open and convex set in $K(B)$. Moreover, the
same arguments as given above for $P_{\Dset}$ show that each $B$
itself has a boundary consisting of boundary segments, which are also
boundary segments of $P_{\Dset}$ itself, with all the associated
properties.
There is in fact a hierarchy of boundary segments, for which it is possible
to prove the following results:
\begin{enumerate}
\item The unique lowest dimension boundary segment is the subspace
$K_{\Dset}$, of dimension $d-n_{\Dset}$.
\item There are boundary segments of every dimension between the minimum
dimension $d-n_{\Dset}$ and the maximum dimension $d-1$, inclusive.
\item Each boundary segment of non-maximal dimension is a boundary segment
of a boundary segment of one dimension higher. If it has the maximal
dimension $d-1$, it is a boundary segment only of $P_{\Dset}$ itself.
\item $P_{\Dset}$ and non-minimal boundary segments have one or more
boundary segments of one dimension lower.
\end{enumerate}
In visualizable examples, the existence of this hierarchy and many of its
properties are quite obvious. But the general case needs a proof, which is
non-trivial. For the purposes of this paper, we will not need the whole
collection of properties of the hierarchy, so we will not make all the
proofs.
What we do need are the boundary segments of one dimension higher than the
minimal dimension, whose existence we will prove. Projected onto a
subspace $V_{\perp \Dset}$ that gives a decomposition of the form in Eq.\
(\ref{eq:V.decomp}), the next-to-minimal boundary segments become line
segments. This leads us to the concept of edge vectors specifying the
directions of the next-to-minimal boundary segments. The edge vectors play
a critical role in our later analysis.
\subsection{Edge vectors \texorpdfstring{$e_L$}{eL}}
Now we construct what we call the edge vectors $e_L$ of $P_{\Dset}$. Each
edge vector has a label $L$, whose meaning will be given below. There are
two cases (with $P_{\Dset}$ being non-empty, as we are assuming): One is
where the subspace $V_{\perp \Dset}$ in Eq.\ (\ref{eq:V.decomp}) has dimension
$n_{\Dset}=1$ and the other is where it has a higher dimension.
\subsubsection{Case \texorpdfstring{$n_{\Dset}=1$}{n(D)=1}}
\label{sec:edge.1}
First is the case $n_{\Dset}=1$, i.e., that the subspace $V_{\perp \Dset}$
defined in Eq.\ (\ref{eq:V.decomp}) has dimension one. As observed below
that equation, there is a region of $V_{\perp \Dset}$ where all the $D_j$ are
positive. We choose any vector in this region to be the single edge vector
$e$ for $P_{\Dset}$; no more will be needed. For every $D_j$, $D_j(e)>0$.
Every vector $v \in V$ is of the form $v=Ce + k$ for some $k\in K_{\Dset}$
and some real number $C$. Then
\begin{equation}
\label{eq:Dj.nD.1}
D_j(v) = C D_j(e) + D_j(k) = C D_j(e).
\end{equation}
So the condition that $v \in P_{\Dset}$ is simply that $C>0$. Then
\begin{equation}
\label{eq:std.decomp.1}
P_{\Dset} = \left\{ Ce + k : C>0 \mbox{ and } k \in K_{\Dset} \right\}.
\end{equation}
Note that $e$ is non-unique, but only up to a scaling by a positive factor
and the addition of an element of $K_{\Dset}$. Any single choice of $e$ is
sufficient for our purposes.
From Eq.\ (\ref{eq:Dj.nD.1}) it follows that
$D_j=\frac{D_j(e)}{D_1(e)}D_1$ and hence that all the $D_j$ are
proportional to each other, with positive coefficients.
\subsubsection{Case \texorpdfstring{$n_{\Dset}\geq2$}{nD.ge.2}}
\label{sec:edge.ge.2}
For all the higher co-dimension cases, we will see that $P_{\Dset}$
has non-trivial boundary segments, with lower dimension. These in
turn have boundary segments, etc. At each stage of taking boundaries,
one has a strictly lower dimension.
The minimum possible dimension for a non-trivial boundary segment is
$d-n_{\Dset} +1$. Later, we will prove results about the existence and
properties of such next-to-minimal boundary segments. Here we will simply
provide a definition of the corresponding edge vectors, i.e., a vector $e_L$
for each next-to-minimal-dimension boundary segment $L$.
Let $L$ be one such boundary segment. We apply the argument of Sec.\
\ref{sec:edge.1}, but for $L$ with respect to $K(L)$ instead of
$P_{\Dset}$ with respect to $V$, and with the set $\hat{Z}(L)$ instead of
$\Dset$. We then choose a corresponding vector $e_L$ in the boundary
segment. A general $v$ in $K(L)$ is $\lambda e_L+k$, where $\lambda$ is real and $k \in
K_{\Dset}$.
We find the conditions for $v$ to be in $L$ as follows: For $D_j \in
Z(L)$, $D_j(e_L)=0$ by the construction of $e_L$, so $D_j(v)=0$. For
$D_j \in \widehat{Z}(L)$, $D_j(v)=\lambda D_j(e_L)$. Hence
\begin{equation}
L = \left\{ \lambda e_L + k : \lambda>0 \mbox{ and } k \in K_{\Dset} \right\}.
\end{equation}
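As a toy check of this description (our own example: $D_1=x$, $D_2=y$, $D_3=x+y$ on $\mathbb{R}^3$, so that $K_{\Dset}$ is the $z$-axis and $n_{\Dset}=2$), a next-to-minimal segment inside $\ker D_2$ is swept out by $\lambda e_L + k$ with $e_L=(1,0,0)$:

```python
# Points lambda*e_L + k with lambda > 0 and k in K_D land on the
# non-trivial boundary segment in ker(D_2); example values are ours.
import numpy as np

D = np.array([[1.0, 0.0, 0.0],      # D_1 = x
              [0.0, 1.0, 0.0],      # D_2 = y
              [1.0, 1.0, 0.0]])     # D_3 = x + y; K_D is the z-axis
e_L = np.array([1.0, 0.0, 0.0])     # edge vector of the segment in ker(D_2)
k = np.array([0.0, 0.0, -4.0])      # an element of the common kernel K_D
x = 2.5 * e_L + k                   # lambda = 2.5 > 0
vals = D @ x
# x is on the non-trivial boundary: all values non-negative, D_2 vanishes,
# and the other functionals are strictly positive.
assert np.all(vals >= 0) and vals[1] == 0 and vals[0] > 0 and vals[2] > 0
```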
\subsubsection{Overall definition of set of edge vectors}
\label{sec:eL.def}
If $n_{\Dset}\geq2$, we define the set of edge vectors to be all the $e_L$
found in Sec.\ \ref{sec:edge.ge.2} for each boundary segment $L$ that obeys
$n(L)=n_{\Dset}-1$.
If $n_{\Dset}=1$, the set of edge vectors is simply the set consisting of
the one element $e$ constructed in Sec.\ \ref{sec:edge.1}.
The name ``edge vector'' is appropriate when $n_{\Dset} \geq 2$, since each
$e_L$ then corresponds to a projection of boundary segment $L$ onto a line
in $V_{\perp\Dset}$, a projection onto a segment of a line. But ``edge
vector'' is a bit of a misnomer in the case that $V_{\perp\Dset}$ is
one-dimensional, i.e., $n_{\Dset}=1$.
\subsection{The main decomposition theorem}
\label{ref:decomp.thm}
We are now ready to prove the following theorem:
\begin{theorem}
\label{thm:PD.decomp}
Every element $v$ of $P_{\Dset}$ can be written in the form
\begin{equation}
\label{eq:std.decomp}
v = \sum_L C_L e_L + v_K,
\end{equation}
where all the $C_L$ are positive real numbers, $C_L>0$, and $v_K \in
K_{\Dset}$, and where the set of $e_L$ is a set of edge vectors, as
defined in Sec.\ \ref{sec:eL.def}. Conversely, every $v$ of the form
Eq.\ (\ref{eq:std.decomp}) with positive $C_L$ is in $P_{\Dset}$.
Thus $P_{\Dset}$ is exactly the set of vectors of the form
(\ref{eq:std.decomp}) with the stated restrictions.
\end{theorem}
Before proving the theorem, we make the following comments:
\begin{itemize}
\item The values $C_L$ need not be unique, since it may happen that
the number of $e_L$s is larger than the dimension $n_{\Dset}$ of
$V_{\perp \Dset}$. In that case, the $e_L$s are over-complete as a
spanning set. If we removed the extra $e_L$s compared with those
needed to make a basis for $V_{\perp \Dset}$, we could still express $v$
in the form (\ref{eq:std.decomp}), but some of the coefficients
might need to be negative for some values of $v$.
\item The edge vectors $e_L$ are not actually in $P_{\Dset}$ except in the
almost trivial case that $n_{\Dset}=1$. In other cases, they are always
on the boundary of $P_{\Dset}$, as we saw.
\end{itemize}
\subsubsection{Examples}
Before treating the general case, we examine examples with effective
dimension one and two, i.e., $n_{\Dset}=1$ and $n_{\Dset}=2$. Then the
derivation of the corresponding specializations of the theorem will be
elementary. The trick for the general case is to find a way of
successively reducing the dimension of the problem by repeated application
of the two-dimensional version.
In setting up the examples in a fairly general context, it is useful
to recall the following theorem of linear algebra:
\begin{quote}
Let $\mathcal{E}=(E_1,\dots,E_A)$ be dual vectors on a vector space
$V$, and let $K_{\mathcal{E}}$ be the intersection of their kernels,
as defined earlier. Let $F$ be another dual vector. Then $F$ is a
linear combination of $E_1,\dots,E_A$ if and only if the kernel of
$F$ contains $K_{\mathcal{E}}$, i.e., $\ker F \supseteq K_{\mathcal{E}}$.
\end{quote}
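Numerically, with row vectors representing the dual vectors, the quoted criterion is equivalent to a rank condition: $F$ lies in the span of $E_1,\dots,E_A$ exactly when appending $F$ as an extra row does not increase the rank. A small check (our own construction):

```python
# F is in the row span of E iff appending F does not raise the rank,
# i.e., iff ker F contains the common kernel K_E of the rows of E.
import numpy as np

def in_span(E, F):
    return np.linalg.matrix_rank(np.vstack([E, F])) == np.linalg.matrix_rank(E)

E = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])                   # K_E is the z-axis
assert in_span(E, np.array([2.0, -3.0, 0.0]))     # kernel contains the z-axis
assert not in_span(E, np.array([0.0, 0.0, 1.0]))  # kernel misses the z-axis
```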
The example of $n_{\Dset}=1$ was already treated in Sec.\ \ref{sec:edge.1}.
Observe that the common kernel $K_{\Dset}$ of the $D_j$ has its maximum
possible dimension $d-1$, and is equal to the kernel of every $D_j$, and
that all the $D_j$ are proportional to each other (with positive
coefficients, so that $P_{\Dset}$ is non-empty). We constructed an instance
of the single edge vector needed for the problem, and obtained the
decomposition Eq.\ (\ref{eq:std.decomp.1}). Positivity constraints can be
obtained by examining values of $D_j$ on the space $V_{\perp \Dset}$, and the
results visualized because it is one-dimensional, as in Fig.\
\ref{fig:PD.1}.
\begin{figure}
\centering
\includegraphics[scale=0.7]{figures/P-ex-1}
\caption{Positivity constraint in $V_{\perp \Dset}$ for the case that it is
one-dimensional, i.e., $n_{\Dset}=1$. All the
$D_j$ are necessarily proportional. The solid line is where $D_j(v)>0$,
i.e., it is $P_{\Dset}$ projected onto $V_{\perp \Dset}$. The dotted line
is where $D_j(v)\leq0$.}
\label{fig:PD.1}
\end{figure}
In the case $n_{\Dset}=2$, $V_{\perp \Dset}$ is a two-dimensional space
illustrated in Fig.\ \ref{fig:PD.2}. Each of the $D_j$ has a positive
space delimited by its kernel. Let us parameterize vectors in $V_{\perp
\Dset}$ by polar coordinates $(r,\theta)$ with respect to some axes. Then the
positive region for each $D_j$ is a range $r>0$ with $\theta$ in a continuous
range of size $\pi$. The kernel of each $D_j$ is a line of fixed $\theta$. The
common positive region is of the form $\alpha<\theta<\beta$, where $0<\beta-\alpha<\pi$. The most
limiting directions are given by two distinct $D_j$ whose kernels are the
lines of angles $\alpha$ and $\beta$; we use vectors in these directions for the
edge vectors $e_L$, and it is evident that the common positive region
$P_{\Dset}$ is the set of all linear combinations of the two $e_L$ with
positive coefficients. In polar coordinates, the edge vectors can be
chosen as unit vectors with angles $\alpha$ and $\beta$; they are linearly
independent because $0<\beta-\alpha<\pi$.
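The two-dimensional construction can be carried out explicitly: writing each $D_j$ as a row vector with direction angle $c_j$, its positive half-plane is the angular interval $(c_j-\pi/2,\,c_j+\pi/2)$, so the common wedge is $(\max_j c_j - \pi/2,\ \min_j c_j + \pi/2)$, and the edge vectors point along the two bounding angles. The following sketch (our own code, which assumes the wedge is non-empty and the $c_j$ do not wrap past $\pi$ relative to $c_1$) illustrates this for three $D_j$s as in Fig.\ \ref{fig:PD.2}:

```python
# Edge directions of the common positive wedge of row functionals on R^2.
import numpy as np

def edge_vectors_2d(D):
    """Return the two edge directions of P_D in the plane (wedge assumed
    non-empty, i.e., the spread of the direction angles is < pi)."""
    c = np.arctan2(D[:, 1], D[:, 0])          # direction angle of each D_j
    c = np.where(c - c[0] > np.pi, c - 2*np.pi,
                 np.where(c[0] - c > np.pi, c + 2*np.pi, c))  # unwrap near c[0]
    alpha, beta = c.max() - np.pi/2, c.min() + np.pi/2
    assert beta > alpha, "wedge is empty"
    return (np.array([np.cos(alpha), np.sin(alpha)]),
            np.array([np.cos(beta), np.sin(beta)]))

D = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
e1, e2 = edge_vectors_2d(D)
# Each edge vector is on the boundary: all D_j >= 0 with at least one zero.
for e in (e1, e2):
    assert np.all(D @ e > -1e-12) and np.any(np.abs(D @ e) < 1e-12)
# Positive combinations of the edge vectors lie in P_D.
assert np.all(D @ (0.3*e1 + 1.7*e2) > 0)
```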
\begin{figure}
\centering
\includegraphics[scale=0.7]{figures/P-ex-2}
\caption{Positivity constraints in $V_{\perp \Dset}$ for a case where it is
two-dimensional, i.e., $n_{\Dset}=2$. The diagram depicts the case
that there are three different $D_j$s involved. The diagonal lines are
the locations of the kernels of the $D_j$, and the shaded parts point
to the negative regions of the $D_j$.}
\label{fig:PD.2}
\end{figure}
If we had made a mistake in stating the situation, and in fact all the
$D_j$ were proportional to each other, then all the kernels would lie on
top of each other, and we would get the situation shown in Fig.\
\ref{fig:PD.2-1}. Then the positive range is $\alpha<\theta<\beta$, but now with
$\beta-\alpha=\pi$, so that the would-be $e_L$ vectors from Fig.\ \ref{fig:PD.2}, at
angles $\alpha$ and $\beta$, are exactly opposite to each other, and are therefore
linearly dependent. These now span one dimension of the kernel space
instead of the positive manifold. The kernel space has its dimension
increased by one, and correspondingly $V_{\perp \Dset}$ has its dimension
reduced by one. To get an exemplar of the single edge vector that is
needed, we choose a vector pointing in a direction intermediate between
angles $\alpha$ and $\beta$. To get the results in terms of $V_{\perp \Dset}$, we
simply project onto a one-dimensional space in the direction of the edge
vector, after which we recover a version of Fig.\ \ref{fig:PD.1}.
\begin{figure}
\centering
\includegraphics[scale=0.7]{figures/P-ex-2-1}
\caption{Like Fig.\ \ref{fig:PD.2}, but for the case that all the
$D_j$ are linearly dependent. The diagram now depicts the space $V_{\perp
\Dset}$ plus one dimension of the kernel space $K_{\perp \Dset}$.}
\label{fig:PD.2-1}
\end{figure}
\subsubsection{General case}
If $n_{\Dset}=1$, we already proved the appropriate specialization of
Thm.\ \ref{thm:PD.decomp} in Sec.\ \ref{sec:edge.1}, with $L$ having
one value and the associated $e_L$ being the $e$ of that section.
We now provide a method to deal with all the remaining cases $n_{\Dset}\geq2$
(including the already treated case $n_{\Dset}=2$). Necessarily, at
least two of the $D_j$ are linearly independent. Otherwise all of them
would be proportional to each other (with positive coefficients, to allow
$P_{\Dset} \neq \emptyset$), and then the positive space would be the positive space
for a single $D_j$, so that we would get $n_{\Dset}=1$.
Let $v$ be any vector in $P_{\Dset}$. Then to prove that it is of the
form Eq.\ (\ref{eq:std.decomp}), we adopt the following recursive
strategy:
\begin{enumerate}
\item Construct an expression for $v$ as a linear combination of two
vectors on non-trivial boundary segments. This we will do quite easily,
by a simple generalization of the two-dimensional case that was
illustrated in Fig.\ \ref{fig:PD.2}.
\item For each of these vectors:
\begin{enumerate}
\item Either its boundary segment is of the lowest possible dimension for
a non-trivial boundary segment, i.e., $d-n_{\Dset}+1$, and we can write
the vector as a positive coefficient times the chosen edge vector for
the segment, plus a contribution from a vector in the kernel
$K_{\Dset}$.
\item Or the boundary segment has a higher dimension, in which case we
repeat the procedure to express the vector in terms of vectors on
non-trivial boundary segments of yet lower dimension.
\end{enumerate}
\item All of this terminates when we get to the lowest dimension
non-trivial boundary segments. This gives the desired expansion.
\end{enumerate}
To implement this strategy, given a vector $v$ in $P_{\Dset}$, we first
pick two independent $D_j$ in $\Dset$, and call them $D_a$ and $D_b$. Then
pick any vector $\delta v$ in the kernel of $D_b$ such that $D_a(\delta v)>0$, and
make it small enough that $v+\delta v$ is still in $P_{\Dset}$. Then let $w=v+\delta
v$. The geometry of this situation in the two-dimensional space spanned by
$v$ and $w$ is shown in Fig.\ \ref{fig:P-vw}. The vectors $v$ and $w$ are
linearly independent, so that they do in fact span a two-dimensional
space.
\begin{figure}
\centering
\includegraphics[scale=0.8]{figures/P-vw}
\caption{The dotted line is the circle explored to express $v$ in terms
of boundary vectors, defined to be where the circle first hits the
kernel of a $D_j$. Here are seen the intersections of $\ker D_a$ and
$\ker D_b$ with the two dimensional space spanned by $v$ and $w$.
\emph{Note that there is not necessarily any metric specified on the
space $V$, and even if there were there would be no guaranteed
constraint on the angle between $v$ and $w$. Nevertheless, it is
always possible to change the coordinate system by applying a linear
transformation. One can do this to go from a situation where $v$ and
$w$ are in general directions to one where they are drawn at right
angles, as is the case here. With this choice of coordinates, the
loop of vectors in Eq.\ (\ref{eq:loop}) becomes a circle.}}
\label{fig:P-vw}
\end{figure}
Now $D_j(v)$ and $D_j(w)$ are positive, for all $j$ including $j=a$
and $j=b$, and in addition
\begin{align}
D_a(w) & = D_a(v) + D_a(\delta v) > D_a(v),
\\
D_b(w) & = D_b(v) + D_b(\delta v) = D_b(v).
\end{align}
Let $r = D_a(\delta v)/D_a(v) > 0$, so that $D_a(w) = (1+r) D_a(v)$.
Then consider the following loop of vectors in the plane of $v$ and
$w$, parameterized by an angle $\theta$:
\begin{equation}
\label{eq:loop}
u(\theta) \stackrel{\textrm{def}}{=} v \cos \theta + w \sin \theta,
\end{equation}
on which for a general $D_j$
\begin{equation}
D_j(u(\theta)) = D_j(v) \cos \theta + D_j(w) \sin \theta.
\end{equation}
Since both of $D_j(v)$ and $D_j(w)$ are positive, $D_j(u(\theta))$ is
positive in the range $0\leq\theta\leq\pi/2$, and also somewhat beyond
this range. Now define
\begin{equation}
\theta_j \stackrel{\textrm{def}}{=} \arctan \frac{D_j(v)}{D_j(w)},
\end{equation}
which is in the range $0<\theta_j<\pi/2$. The zeros of
$D_j(u(\theta))$ are at $\theta=-\theta_j$ and $\theta=\pi-\theta_j$,
so that $D_j(u(\theta))$ is positive when
$-\theta_j<\theta<\pi-\theta_j$.
For the specific cases of $D_a$ and $D_b$
\begin{align}
D_a(u(\theta)) & = D_a(v) \left[ \cos \theta + (1+r) \sin \theta \right],
\\
D_b(u(\theta)) & = D_b(v) \left[ \cos \theta + \sin \theta \right],
\end{align}
so that
\begin{equation}
\label{eq:theta.a.b}
\theta_a = \arctan \frac{1}{1+r} < \frac{\pi}{4},
\qquad
\theta_b = \frac{\pi}{4}.
\end{equation}
Now define the minimum and maximum values of the $\theta_j$:
\begin{equation}
\alpha \stackrel{\textrm{def}}{=} \min_j \theta_j,
\qquad
\beta \stackrel{\textrm{def}}{=} \max_j \theta_j.
\end{equation}
Then for $-\alpha<\theta<\pi-\beta$, all $D_j(u(\theta))$ are
positive, so $u(\theta)\in P_{\Dset}$. But at each of
$\theta=-\alpha$ and $\theta=\pi-\beta$, at least one $D_j(u(\theta))$
is zero, so that $u(-\alpha)$ and $u(\pi-\beta)$ are on the boundary
of $P_{\Dset}$. They are in fact on the non-trivial part of the
boundary of $P_{\Dset}$ and are linearly independent. To see this, we
first observe that from Eq.\ (\ref{eq:theta.a.b}) and from
$0<\theta_j<\pi/2$, it follows that $0<\alpha\leq\theta_a<\pi/4$, while
$\pi/2>\beta\geq\theta_b=\pi/4$. It follows that $D_b(u(-\alpha))$
and $D_a(u(\pi-\beta))$ are both positive, which puts the vectors
$u(-\alpha)$ and $u(\pi-\beta)$ on the non-trivial part of the
boundary of $P_{\Dset}$, where at least one $D_j$ is positive.
Furthermore, from the same bounds, it follows that $-\alpha$ and
$\pi-\beta$ are not opposite angles, and hence that $u(-\alpha)$ and
$u(\pi-\beta)$ are linearly independent. See Fig.\ \ref{fig:P-vw-1}
for an illustration of how another $D_j$ can impose a more restrictive
bound on where $u(\theta) \in P_{\Dset}$ than is given by $D_a$ and
$D_b$ alone.
Since $u(-\alpha)$ and $u(\pi-\beta)$ are on the non-trivial part of
the boundary of $P_{\Dset}$, we have incidentally proved that for the
case we are treating, $n_{\Dset}\geq2$, there are in fact non-trivial
boundary segments.
\begin{figure}
\centering
\includegraphics[scale=0.8]{figures/P-vw-1}
\caption{The same as Fig.\ \ref{fig:P-vw}, except that the position of
the kernel of another $D_j$ is shown, in a situation where it provides
a more restrictive region of positive $D_j(u(\theta))$ than is given by $D_a$
and $D_b$ alone.}
\label{fig:P-vw-1}
\end{figure}
We can now express $v$ in terms of non-trivial boundary vectors:
\begin{equation}
\label{eq:v.v1.v2}
v = v_1 \frac{ \sin\beta }{ \sin(\beta-\alpha) }
+ v_2 \frac{ \sin\alpha }{ \sin(\beta-\alpha) }
\end{equation}
where
\begin{equation}
v_1 = u(-\alpha),
\qquad
v_2 = u(\pi-\beta).
\end{equation}
The coefficients in Eq.\ (\ref{eq:v.v1.v2}) are positive, so we have
accomplished our aim of expressing $v$ in terms of vectors on the
non-trivial part of the boundary of $P_{\Dset}$ with positive coefficients.
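The whole construction of this subsection is easily verified numerically in a toy instance (our own example below): starting from $v\in P_{\Dset}$ and $w=v+\delta v$, the angles $\theta_j$ determine $\alpha$ and $\beta$, the points $u(-\alpha)$ and $u(\pi-\beta)$ land on the non-trivial boundary, and Eq.\ (\ref{eq:v.v1.v2}) reconstructs $v$ with positive coefficients:

```python
# Toy instance of the construction: v in P_D, dv in ker(D_b) with
# D_a(dv) > 0, and w = v + dv; all names and values are our choices.
import numpy as np

D = np.array([[1.0, 0.0, 0.0],    # D_a
              [0.0, 1.0, 0.0],    # D_b
              [1.0, 2.0, 0.0]])   # one further functional
v = np.array([1.0, 1.0, 0.0])     # all D_j(v) > 0
dv = np.array([0.2, 0.0, 0.0])    # in ker D_b, with D_a(dv) > 0
w = v + dv

def u(theta):
    return v*np.cos(theta) + w*np.sin(theta)

theta = np.arctan2(D @ v, D @ w)  # theta_j = arctan(D_j(v)/D_j(w))
alpha, beta = theta.min(), theta.max()
v1, v2 = u(-alpha), u(np.pi - beta)
# v1 and v2 lie on the non-trivial boundary of P_D ...
for x in (v1, v2):
    vals = D @ x
    assert np.all(vals > -1e-9)
    assert np.any(np.abs(vals) < 1e-9) and np.any(vals > 1e-9)
# ... and Eq. (v.v1.v2) reproduces v with positive coefficients.
c1 = np.sin(beta) / np.sin(beta - alpha)
c2 = np.sin(alpha) / np.sin(beta - alpha)
assert c1 > 0 and c2 > 0 and np.allclose(v, c1*v1 + c2*v2)
```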
Let the boundary segments in which $v_1$ and $v_2$ lie be $B_1$ and $B_2$.
First consider the case that $v_1$'s (non-trivial) boundary segment has the
minimum dimension $d-n(B_1)=d-n_{\Dset}+1$. Then there is an edge vector
for that segment, as defined in Sec.\ \ref{sec:edge.ge.2}, and $v_1$ can be
expressed in terms of the edge vector, with a positive coefficient, plus an
element of the common kernel $K_{\Dset}$.
The other case is that $v_1$'s boundary segment $B_1$ is of higher
dimension. Then we apply the whole argument of this section to $v_1$, but
now instead of ${\Dset}$ and $V$, we apply the argument to the dual vectors
$\widehat{Z}(B_1)$ that are non-zero at $v_1$ and work in the space
$K(B_1)$. The argument needs to be extended only by the observation that
all the vectors involved give zero for any $D_j \in Z(B_1)$, i.e., for any
$D_j$ that is zero at $v_1$ and hence on $B_1$.
The result is to express $v_1$ in terms of vectors in
yet lower dimension boundary segments.
The same argument applies equally to $v_2$.
Iterating the argument eventually stops when all the vectors obtained are
proportional to edge vectors (plus elements of $K_{\Dset}$), with positive
coefficients. Thus any element $v\in P_{\Dset}$ is a linear combination of
edge vectors with positive coefficients, plus a vector in $K_{\Dset}$.
Hence any vector in $P_{\Dset}$ is of the form (\ref{eq:std.decomp}) given
in the statement of Thm.\ \ref{thm:PD.decomp}.
To complete the proof of Thm.\ \ref{thm:PD.decomp}, we need to show that
any vector of the form (\ref{eq:std.decomp}) is in the positive manifold
$P_{\Dset}$, as opposed to being in its boundary. So let $v$ be any vector
of the form (\ref{eq:std.decomp}). Since $D_j(v_K)=0$, we have
$D_j(v)=\sum_L C_L D_j(e_L)$. For each $D_j$, all the $D_j(e_L)$ are
non-negative and at least one is positive; since every $C_L>0$, $D_j(v)$ is
strictly positive. Hence $v\in P_{\Dset}$.
\section{The Landau theorem for dual vectors}
\label{sec:landau.proof}
Now we come to the already-stated Thm.\ \ref{thm:main.geom} relating the Landau
condition to the non-existence of good directions, i.e., to the
non-existence of a $v$ for which all of $D_j(v)$ are positive. To prove
the theorem, we consider the cases that there is and that there is not a
good direction.
We already saw in Sec.\ \ref{sec:elem.parts} that if there is a good
direction, then there can be no Landau point. It remains to show that if
there is no good direction, then a Landau point exists. Given that there
fails to be a good direction for $(D_1,\dots,D_N)$, we will construct a set
of $\lambda_j$s that instantiates a Landau point.
It might be that one or more of the $D_j$s is zero. In that case, let
$D_{j_0}$ be one of the zero dual vectors. Then set $\lambda_{j_0}=1$
and set the remaining $\lambda_j$ to zero, and we have a Landau point.
So we only need further to consider the case that every $D_j$ is
non-zero.
Consider the following subsets of $D_j$s, where we start with $D_1$,
and successively add an extra $D_j$: $S_1=(D_1)$, $S_2=(D_1,D_2)$,
\dots, $S_N=(D_1,\dots,D_N)$. Since $D_1\neq 0$, we can find a vector
$v\in V$ with $D_1(v)=1$, and so there exists a good direction
for $S_1$. But by hypothesis there is no good direction for
$S_N$. Therefore there is a last one in this sequence, $S_{n_0}$, for
which there is a good direction; for the next set, $S_{n_0+1}$,
there is no good direction.
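For small examples the index $n_0$ can be located by direct search; the sketch below (our own brute-force angular scan, adequate only for a two-dimensional toy and not a general algorithm) finds the last set $S_n$ admitting a good direction:

```python
# Locating n_0 by brute force in a two-dimensional toy example.
import numpy as np

def has_good_direction_2d(D, samples=100000):
    """Scan directions on the unit circle for one where all D_j are positive."""
    t = np.linspace(0.0, 2.0*np.pi, samples, endpoint=False)
    dirs = np.stack([np.cos(t), np.sin(t)])            # shape (2, samples)
    return bool(np.any(np.all(D @ dirs > 1e-9, axis=0)))

D = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [-1.0, -1.0]])     # S_3 has no good direction
n0 = max(n for n in range(1, len(D) + 1) if has_good_direction_2d(D[:n]))
assert n0 == 2                   # good direction exists for S_2 but not S_3
```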
In the following, two different vector spaces come into play. One is the
space $V$ on which the $D_j$s act, with an important role played by its
submanifold where all the $D_j$s are positive. The other space is a space
$\Lset$ of the coefficients $\lambda_j$ used in linear combinations of the form
$\sum_{j=1}^{n_0} \lambda_j D_j + D_{n_0+1}$, with its definition in Eq.\
(\ref{eq:Lset.def}) below.
\subsection{The positive hyperplane \texorpdfstring{$P$}{P}}
Let $P$ be the set of good directions for $S_{n_0}$, i.e., $P$ is
the positive space for the corresponding $D_j$s:
\begin{equation}
P = P_{S_{n_0}}
= \left\{
v \in V :
D_j(v)>0 \mbox{ whenever $1\leq j \leq n_0$}
\right\}.
\end{equation}
Then the lack of a good direction for $S_{n_0+1}$ immediately
shows that $D_{n_0+1}(v) \leq 0$ for every $v \in P$. In fact, strict
inequality holds:
\begin{lemma}
\begin{equation}
\label{eq:n0+1.P}
D_{n_0+1}(v) < 0 \mbox{ for every $v \in P$}.
\end{equation}
\end{lemma}
\begin{proof}
We use the fact, following from Thm.\ \ref{thm:P.basic}, that $P$ is an
open set. Suppose that the strict inequality did not hold. Then there
would be a $v \in P$ for which $D_{n_0+1}(v) = 0$. Since $D_{n_0+1}$ is
non-zero, we can find a $w \in V$ for which $D_{n_0+1}(w)=1$. Then for
every $\kappa > 0$, $D_{n_0+1}(v+\kappa w) = \kappa >0$. Since $P$ is an open set, $v+\kappa
w \in P$ for all small enough $\kappa$, and we would therefore find a vector in
$P$ on which $D_{n_0+1}$ is positive. That is, we would find a good
direction for the set $S_{n_0+1}$. This is contrary to hypothesis, so we
need the strict inequality (\ref{eq:n0+1.P}).
\end{proof}
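A quick numerical spot-check of the lemma in a standalone toy instance (our own choice: $D_1=x$, $D_2=y$, $D_3=-x-y$ on $\mathbb{R}^2$, so that $n_0=2$):

```python
# D_1 = x and D_2 = y on R^2 give P = open first quadrant; the lemma then
# requires D_3 = -x - y to be strictly negative everywhere on P.
import numpy as np

rng = np.random.default_rng(0)
pts = rng.uniform(0.01, 10.0, size=(1000, 2))   # random points with x, y > 0
D3 = np.array([-1.0, -1.0])
assert np.all(pts @ D3 < 0)
```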
\subsection{Kernels of \texorpdfstring{$D_j$ $(1\leq j \leq
n_0+1)$}{Dj}}
\label{sec:ker.Dj}
Next, let $K$ be the intersection of the kernels of $D_1$, \dots,
$D_{n_0}$:
\begin{equation}
\label{eq:K.def}
K \stackrel{\textrm{def}}{=} \left\{ v \in V : D_j(v) = 0
\mbox{ for $1 \leq j \leq n_0$}
\right\}.
\end{equation}
It is a vector subspace of $V$.
Now the other $D_j$ we consider, i.e., $D_{n_0+1}$, is also zero on
$K$. To see this, suppose otherwise, and we will prove a
contradiction. Thus, suppose that there is a $u \in K$ such that
$D_{n_0+1}(u) \neq 0$. By scaling $u$, we can arrange
$D_{n_0+1}(u)=1$, while maintaining $D_j(u) = 0$ for the other $D_j$.
Pick any $v \in P$, so that $D_j(v)>0$ for every $1 \leq j \leq n_0$.
Then for every real number $\kappa>0$
\begin{equation}
D_{n_0+1}(\kappa u + v) = \kappa + D_{n_0+1}(v),
\end{equation}
while
\begin{multline}
D_j(\kappa u + v) = \kappa D_j(u) + D_j(v) = D_j(v) > 0
\\
\mbox{for $1 \leq j \leq n_0$}.
\end{multline}
It follows that $\kappa u+v$ is also in $P$. But by choosing $\kappa$
large enough, we can make $D_{n_0+1}(\kappa u + v)$ positive, which
would give us a good direction for the set $S_{n_0+1}$. We only
avoid this by having $D_{n_0+1}(u)=0$ for every element $u\in K$.
Thus the kernel of $D_{n_0+1}$ contains the intersection of the kernel of
the other $D_j$s. It follows, by a standard theorem of linear algebra,
that $D_{n_0+1}$ is a linear combination of the other $D_j$s. But we do
not need to use this. In fact, we will prove the stronger result that a
linear combination can be found in which all the coefficients are negative or
zero.
\subsection{Spanning vectors of \texorpdfstring{$P$}{P}}
We now recall results from Sec.\ \ref{sec:pos.sets}, but applied with
$\Dset$ set equal to $S_{n_0}=(D_1,\dots,D_{n_0})$ instead of the original
set of dual vectors. The space $V$ can be decomposed as a direct sum $V=K\oplus
V_\perp$. Then there is a set of non-zero edge vectors $e_L$ that give
one-dimensional edges for $P\cap V_\perp$, and the general form for a vector $v \in
P$ is
\begin{equation}
\label{eq:P.decomp}
v = k + \sum_L C_L e_L,
\end{equation}
where $k \in K$ and all the real-valued coefficients $C_L$ are
strictly positive: $C_L>0$. The vectors $e_L$ span $V_\perp$, but
they could be an over-complete set; the extra elements are needed to
maintain the positivity property on the $C_L$s for every $v \in P$.
All the edge vectors obey $D_je_L \geq 0$ for $1 \leq j \leq n_0$ and any $L$. For
every $j$ in the range $1\leq j \leq n_0$, there is at least one value of $L$ for
which $D_je_L$ is strictly positive. Similarly for every $L$ there is at
least one value of $j$ in the range $1\leq j \leq n_0$ for which $D_je_L$ is
strictly positive.
From the properties that $D_{n_0+1}(v) < 0$ for every $v \in P$ and that
$D_{n_0+1}(v) = 0$ for every $v \in K$, it follows that
\begin{lemma}
\begin{equation}
D_{n_0+1}(e_L) \leq 0 \mbox{ for all $L$},
\end{equation}
and at least one $D_{n_0+1}(e_L)$ is strictly negative.
\end{lemma}
\subsection{Linear combinations of \texorpdfstring{$D_j$s}{Dj}; the
regions \texorpdfstring{$\Lset$}{Lambda}, \texorpdfstring{$M$}{M},
and \texorpdfstring{$\notM$}{M-tilde}}
\label{ref:sec.M}
Our aim is to find a set of $\lambda_j$ for which $\sum_{j=1}^{n_0+1}\lambda_jD_j=0$,
with all $\lambda_j\geq0$, and with at least one non-zero (positive) $\lambda_j$. To
obtain this, it is necessary that the last $\lambda$ is non-zero, i.e.,
$\lambda_{n_0+1}>0$. This is because if it were zero, we would have the
Landau point for $S_{n_0}=(D_1,\dots,D_{n_0})$, i.e., we would have
$\sum_{j=1}^{n_0}\lambda_jD_j=0$ (with the sum up to $j=n_0$). But the
definition of $n_0$ is that there is a good direction for $S_{n_0}$,
which implies that there is no Landau point for $S_{n_0}$.
Therefore, to avoid a contradiction, any Landau point for $S_{n_0+1}$ must
have $\lambda_{n_0+1}$ strictly greater than zero. We can now scale all the
$\lambda_j$'s to make $\lambda_{n_0+1}=1$, and still have a Landau point. So we will
work with
\begin{equation}
D(\3\lambda) \stackrel{\textrm{def}}{=} \sum_{j=1}^{n_0} \lambda_j D_j + D_{n_0+1},
\end{equation}
where we use boldface notation $\3\lambda = (\lambda_1,\dots,\lambda_{n_0})$ to denote a
vector of only the first $n_0$ values, and we simply require allowed
values to obey $\lambda_j\geq0$. Then our aim is to find an allowed $\3\lambda$ for
which $D(\3\lambda)=0$.
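The target condition $D(\3\lambda)=0$ is trivial to test numerically once the dual vectors are given. A minimal Python sketch on invented data (the arrays `D`, `Dlast` and the function name `D_of` are assumptions for illustration, not from the text):

```python
import numpy as np

# Invented toy data: rows of D are D_1, ..., D_{n_0} as dual vectors on
# V = R^2, and Dlast stands for D_{n_0+1}; values chosen so that an
# allowed lambda with D(lambda) = 0 exists.
D = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
Dlast = np.array([-2.0, -3.0])

def D_of(lam):
    """Evaluate D(lambda) = sum_j lambda_j D_j + D_{n_0+1} as a dual vector."""
    return lam @ D + Dlast

# lambda = (2, 3, 0) is allowed (all entries >= 0) and gives a Landau point:
lam = np.array([2.0, 3.0, 0.0])
assert np.all(lam >= 0)
assert np.allclose(D_of(lam), 0.0)
```

The existence proof below constructs such a $\3\lambda$ geometrically rather than by search.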
Define $\Lset$ to be the set of allowed $\3\lambda$:
\begin{equation}
\label{eq:Lset.def}
\Lset \stackrel{\textrm{def}}{=} \left\{ \3\lambda : \lambda_j \geq 0 \mbox{ for all $j$} \right\},
\end{equation}
and define the following subset of $\Lset$:
\begin{equation}
\label{eq:M.def}
M \stackrel{\textrm{def}}{=} \left\{ \3\lambda \in \Lset : D(\3\lambda)(v) \leq 0
\mbox{ for all $v \in P$} \right\},
\end{equation}
i.e., $M$ is the set of all $\3\lambda$ in $\Lset$ for which
$D(\3\lambda)$ is negative or zero for every vector that makes all of
$D_1$, \dots, $D_{n_0}$ positive. Its complement in $\Lset$ is the
set $\3\lambda$ for which we have a positive value for $D(\3\lambda)$
somewhere in $P$:
\begin{align}
\notM
\stackrel{\textrm{def}}{=} \Lset \backslash M
= {}& \bigl\{ \3\lambda \in \Lset :
\exists v \in V
\mbox{ such that }
D(\3\lambda)(v) > 0,
\nonumber\\ & ~
\mbox{ and }
D_j(v) > 0 \mbox{ for $1 \leq j \leq n_0$}
\bigr\}.
\end{align}
We will find a Landau point at a certain corner or edge of the set
$M$.
To visualize the kind of set that $M$ is, it is useful to refer to the
simple example given in App.\ \ref{sec:M.simple} below. It results in
a region for $M$ that is illustrated in Fig.\ \ref{fig:ex.M}. Notice
that $M$ is convex and is a closed set. The boundaries are segments
of straight lines. The value of $\3\lambda$ giving a Landau point is
at the upper right-hand corner.
\begin{figure}
\centering
\includegraphics[scale=0.9]{figures/M-ex}
\caption{Set $M$ for the example given in App.\ \ref{sec:M.simple}.}
\label{fig:ex.M}
\end{figure}
\subsection{Properties of \texorpdfstring{$M$}{M}}
We first derive some elementary properties of $M$ and $\notM$ for
the general case:
\begin{enumerate}
\item The zero vector $\3{0}$ is in $M$, so that $M$ is non-empty.
This is simply because $D(\3{0})=D_{n_0+1}$, and $D_{n_0+1}(v)$ is
negative for all vectors in $P$, Eq.\ (\ref{eq:n0+1.P}).
\item $M$ is convex. Suppose that $\3\lambda_a$ and $\3\lambda_b$ are
any two elements of $M$ and that $t$ is any real number obeying $0 \leq
t \leq 1$. Then for every $v \in P$
\begin{multline}
D\mathopen{}\left( t\3\lambda_a + (1-t)\3\lambda_b \right)(v)
\\
= t D(\3\lambda_a)(v)
+ (1-t) D(\3\lambda_b)(v),
\end{multline}
which is zero or negative because each term is. It follows that
$t\3\lambda_a + (1-t)\3\lambda_b \in M$. Hence $M$ is convex.
\item $M$ is a closed set. Let $\3\lambda_\alpha$ be any sequence of
elements of $M$ that converges to some element $\3\lambda$ of
$\Lset$. To show that $M$ is closed, we need to show that the
limit point $\3\lambda$ is actually in $M$. To do this, we observe
that for every $v \in P$, all the $\3\lambda_\alpha$ obey
$D(\3\lambda_\alpha)(v) \leq 0$, by the definition of $M$. Hence
\begin{equation}
D(\3\lambda)(v) = \lim_{\alpha\to\infty} D(\3\lambda_\alpha)(v) \leq 0,
\end{equation}
by the continuity of linear functions. Hence $\3\lambda\in M$.
\item Consider an arbitrary non-zero $\3\lambda \in \Lset$, and consider
an arbitrarily scaled value $\kappa \3\lambda$, where $\kappa$ is a
positive real number. Then for large enough $\kappa$, $\kappa
\3\lambda \in \notM$, and hence not in $M$.
\emph{Proof}: For any $v \in P$
\begin{equation}
\label{eq:D.kappa.v}
D(\kappa \3\lambda)(v)
= \kappa \3\lambda \cdot \3D(v) + D_{n_0+1}(v).
\end{equation}
Since $\3\lambda$ is non-zero and in $\Lset$, at least one $\lambda_j>0$ while the
others are non-negative; and since $v \in P$, every $D_j(v)>0$. Hence
$\3\lambda \cdot \3D(v)$ is positive, so for large enough $\kappa$, the quantity in
(\ref{eq:D.kappa.v}) is positive, and hence $\kappa \3\lambda \in \notM$, not in
$M$. The ray of $\kappa \3\lambda$ intersects the boundary of $M$ at some point,
which may in degenerate situations be at $\3{0}$.
\end{enumerate}
It is easily checked that these properties are obeyed in the example shown
in Fig.\ \ref{fig:ex.M}.
\subsection{Characterization of \texorpdfstring{$M$}{M} in terms of
properties of edge vectors \texorpdfstring{$e_L$}{eL}}
We have seen in Eq.\ (\ref{eq:P.decomp}) that any vector in $P$ can be
written as a sum of edge vectors with strictly positive coefficients plus
an element of the kernel $K$, i.e., $v=k+\sum_LC_Le_L$. Since $D_j(k)=0$ for
$1 \leq j \leq n_0+1$, it follows that
\begin{equation}
D(\3\lambda)(v)= \sum_L C_L f(L,\3\lambda),
\end{equation}
where
\begin{equation}
f(L,\3{\lambda}) \stackrel{\textrm{def}}{=} \sum_{j=1}^{n_0} \lambda_jD_j(e_L) + D_{n_0+1}(e_L).
\end{equation}
Therefore for any given $\3\lambda\in \Lset$, we can characterize
whether $\3\lambda$ is in $M$ or $\notM$, and whether it is in the
boundary of $\notM$ by the following exclusive criteria:
\begin{enumerate}
\item \emph{Either at least one $f(L,\3{\lambda})$ is strictly positive.} In this
case $\3\lambda \in \notM$.
The last statement is proved by letting $L_0$ be one of the cases for
which $f(L_0,\3{\lambda})>0$, and we set $C_L = \delta_{L,L_0} + \kappa$, with $\kappa>0$.
Then $v =\sum_L C_Le_L \in P$ and
\begin{equation}
D(\3\lambda)(v)
= f(L_0,\3{\lambda}) + \kappa \sum_L f(L,\3{\lambda}).
\end{equation}
By making $\kappa$ small enough (but non-zero), we can make this
positive. Hence $\3\lambda \in \notM$.
\item \emph{Or all of $f(L,\3{\lambda})$ are strictly negative.} Then $\3\lambda \in M$
and $\3\lambda$ is in the interior of $M$, not on its boundary with $\notM$.
First, we observe that the negativity of $f(L,\3{\lambda})$ implies
that $D(\3\lambda)(v)$ is negative for all $v \in P$, so that
$\3\lambda \in M$. Then we consider a nearby point $\3\lambda_a =
\3\lambda + \delta\3\lambda$ that is still in $\Lset$ (i.e., the
components obey $\lambda_{a,j}\geq 0$), and we let a general element
of $P$ be $v = k + \sum_L C_L e_L$. We let
\begin{align}
-F &= \max_{L} f(L,\3\lambda) < 0,
\\
G &= \max_{j,L} D_j(e_L) > 0.
\end{align}
Then we can bound
\begin{align}
D(\3\lambda+\delta\3\lambda)(v)
& = \sum_L C_L \biggl[ f(L,\3\lambda)
+ \sum_{j=1}^{n_0} \delta\lambda_j D_j(e_L)
\biggr]
\nonumber\\
& \leq \sum_L C_L \biggl[ -F + \sum_{j=1}^{n_0} |\delta\lambda_j| G \biggr].
\end{align}
Now take $\sum_{j=1}^{n_0} |\delta\lambda_j| < F/G$. Then
$D(\3\lambda+\delta\3\lambda)$ is negative on the whole of $P$, and
so $\3\lambda+\delta\3\lambda$ is in $M$. Hence all points
sufficiently close to $\3\lambda$ are themselves in $M$, and not in
$\notM$. Therefore $\3\lambda$ is not on the boundary with $\notM$.
\item \emph{Or for all $L$, $f(L,\3{\lambda}) \leq 0$, and at least one is zero.}
Then $\3\lambda \in M$ and $\3\lambda$ is on its boundary with $\notM$.
Given that none of $f(L,\3{\lambda})$ is positive, $\3\lambda$ must be in $M$,
not $\notM$. It remains to show that it is on the boundary.
So pick $L_0$ such that $f(L_0,\3{\lambda}) = 0$, and pick $\delta\3\lambda$ such that all
the $\delta\lambda_j$ are strictly positive, $\delta\lambda_j>0$, for $1 \leq j \leq n_0$. We will
show that $\3\lambda+\delta\3\lambda$ is in $\notM$ no matter how small $\delta\3\lambda$ is. First,
$\3\lambda+\delta\3\lambda$ is in $\Lset$, because each component of the vector is
non-negative. Let
\begin{equation}
-A = \min_{L} f(L,\3\lambda) \leq 0,
\end{equation}
and choose an element of $P$ by $v = \kappa e_{L_0} + \sum_{L}e_L$,
with $\kappa>0$. Then
\begin{equation}
\label{eq:A.v}
D(\3\lambda+\delta\3\lambda)(v)
\geq - A \#(L)
+ \kappa \sum_{j} \delta\lambda_jD_j(e_{L_0}),
\end{equation}
with $\#(L)$ being the number of $e_L$ vectors. Hence, by choosing $\kappa$
large enough, we make $D(\3\lambda+\delta\3\lambda)(v)$ positive. Therefore $\3\lambda+\delta\3\lambda$
is in $\notM$ no matter how small the non-zero $\delta\3\lambda$ is, and so
$\3\lambda$ is on the boundary between $M$ and $\notM$.
\end{enumerate}
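The three exclusive criteria translate directly into a sign test on the numbers $f(L,\3\lambda)$. The following Python sketch classifies a given $\3\lambda$; the toy arrays are invented, and `DjeL`, `DlasteL` are assumed names for the tables $D_j(e_L)$ and $D_{n_0+1}(e_L)$:

```python
import numpy as np

# Invented toy tables consistent with the sign constraints of the text:
# DjeL[j, L] = D_j(e_L) >= 0, and DlasteL[L] = D_{n_0+1}(e_L) <= 0.
DjeL = np.array([[1.0, 0.0],
                 [0.0, 1.0]])
DlasteL = np.array([-1.0, -2.0])

def classify(lam):
    """Return 'notM', 'interior of M', or 'boundary of M' from the signs
    of f(L, lambda) = sum_j lambda_j D_j(e_L) + D_{n_0+1}(e_L)."""
    f = lam @ DjeL + DlasteL
    if np.any(f > 0):
        return "notM"                 # case 1: some f(L, lambda) > 0
    if np.all(f < 0):
        return "interior of M"        # case 2: all f(L, lambda) < 0
    return "boundary of M"            # case 3: all f <= 0, at least one zero

assert classify(np.array([0.0, 0.0])) == "interior of M"
assert classify(np.array([1.0, 2.0])) == "boundary of M"
assert classify(np.array([5.0, 5.0])) == "notM"
```

In particular, $\3\lambda=\3 0$ lands in the interior of $M$, consistent with property 1 of the previous subsection.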
\subsection{Moving along boundary between \texorpdfstring{$M$}{M}
and \texorpdfstring{$\notM$}{M-tilde}}
\label{sec:notM.M}
Now consider a point $\3{\lambda}_0$ on the boundary between $M$ and $\notM$.
Such a point exists. At it, $f(L,\3{\lambda}_0)$ is zero for some number of
edges $L$ of the positive region $P$, and negative for any others. We will
now show that we can move from $\3{\lambda}_0$ along the boundary of $M$ in such
a way that we find a place where there is an increase in the number of $L$
for which $f(L,\3{\lambda})=0$. We keep going, repeating this process, which
terminates only when all are zero. At that point $D(\3{\lambda})$ is itself zero
and we have a Landau point, as we aimed to find.
Let\footnote{Note that $Z$ is now used with a different meaning and type of
argument than before.} $Z(\3{\lambda}_0)$ and $\widehat{Z}(\3{\lambda}_0)$ be the set
of $L$ for which $f(L,\3{\lambda}_0)$ is zero and non-zero (necessarily
negative):
\begin{align}
Z(\3{\lambda}_0) & =
\left\{ L : f(L,\3{\lambda}_0) = 0 \right\},
\\
\widehat{Z}(\3{\lambda}_0) & =
\left\{ L : f(L,\3{\lambda}_0) < 0 \right\}.
\end{align}
This is a partition of the set of all edges of $P$.
Let $v$ be a general element of $P$, decomposed as in Eq.\
(\ref{eq:P.decomp}). Then
\begin{equation}
\label{eq:D.lambda.0}
D(\3{\lambda}_0)(v)
=
\sum_{\text{all }L} C_L f(L,\3{\lambda}_0)
=
\sum_{L \in \widehat{Z}(\3{\lambda}_0)} C_L f(L,\3{\lambda}_0).
\end{equation}
It is possible that all the $f(L,\3{\lambda}_0)$ are zero, so that $D(\3\lambda_0)(v)=0$
for every $v\in P$. Since $P$ is a manifold of the same dimension as the
whole space $V$, it follows that $D(\3\lambda_0)$ itself is zero, so that we have
a Landau point, and we need go no further.
Otherwise at least one $f(L,\3{\lambda}_0)$ is strictly negative. To deal with
this case, our method will be to first prove that the set of $e_L$ with $L
\in Z(\3{\lambda}_0)$ spans a boundary segment of $P$, rather than the whole of
$P$, and then that there is a $j_0$ for which
\begin{equation}
\label{eq:on.bound.M}
D_{j_0}(e_L) = 0 \mbox{ for all } L \in Z(\3{\lambda}_0).
\end{equation}
We will use this to provide a direction $\delta\3\lambda$ in which to
move while staying on the boundary of $M$ and then eventually find a
point where yet another $f(L,\3{\lambda})$ is zero.
The following simple results are useful in the sequel:
\begin{lemma}
\label{lem:neg.D.P}
$D(\3\lambda_0)(v)$ is negative for every $v$ in $P$.
\end{lemma}
\begin{proof}
The $f(L,\3\lambda_0)$ in (\ref{eq:D.lambda.0}) are negative or zero, and at
least one is nonzero. Hence $D(\3\lambda_0)(v)$ is negative for every $v$ in $P$,
since all the $C_L$ are strictly positive.
\end{proof}
\begin{lemma}
\label{lem:null.D.P}
There is a null intersection between the kernel of
$D(\3\lambda_0)$ and the positive region $P$:
\begin{equation}
\label{eq:K.D.intersect}
\ker D(\3\lambda_0) \cap P = \emptyset.
\end{equation}
\end{lemma}
\begin{proof}
This follows from Lemma \ref{lem:neg.D.P}, since $D(\3\lambda_0)$ is nonzero on
the whole of $P$.
\end{proof}
\begin{lemma}
\label{lem:P.and.bound}
For every $v$ in both $P$ and its boundary, $D(\3\lambda_0)(v) \leq 0$.
\end{lemma}
\begin{proof}
We know that $D(\3\lambda_0)$ is negative for every element of $P$. A boundary
point is obtained by taking a limit of points in $P$. Therefore
$D(\3\lambda_0)$ is negative or zero on the boundary of $P$.
\end{proof}
\begin{lemma}
For every $k \in K$ and any $\3\lambda$,
\begin{equation}
\label{eq:D.lambda.K}
D(\3\lambda)(k) = 0.
\end{equation}
\end{lemma}
\begin{proof}
Since $k\in K$, every $D_j(k)=0$, for $1 \leq j \leq n_0$. We have
also seen in Sec.\ \ref{sec:ker.Dj} that $\ker D_{n_0+1} \supseteq
K$, so $D_{n_0+1}(k)=0$. Equation (\ref{eq:D.lambda.K}) follows.
\end{proof}
Now consider vectors of the form
\begin{equation}
\label{eq:v.Z}
w = k + \sum_{L \in Z(\3{\lambda}_0)} C_L e_L,
\end{equation}
where $C_L>0$, $k \in K$, and we have only used the subset of $e_L$ for which
$f(L,\3{\lambda}_0)=0$. The vector $w$ is in the kernel of $D(\3\lambda_0)$, i.e.,
$D(\3\lambda_0)(w)=0$. So by Lemma \ref{lem:null.D.P} it cannot be in $P$. But
by adding a term $\kappa \sum_{L \in \hat{Z}(\3{\lambda}_0)} e_L$, with $\kappa$ non-zero and
positive, but arbitrarily small, we get a vector in $P$ itself. Hence the
given $w$ is in a non-trivial boundary segment $B$ of $P$.
Now from Thm.\ \ref{thm:bdy.char}, applied to $P$ instead of $P_{\Dset}$,
we know that on the boundary segment $B$, there is a value $j_0$ for which
$D_{j_0}$ is zero on $B$, and hence $D_{j_0}(w)=0$.
Now, in Eq.\ (\ref{eq:v.Z}), for all $j$ in the range $1\leq j \leq n_0$,
$D_j(k)=0$, and $D_j(e_L)\geq0$ for all $L$. Hence from the zero value of
$D_{j_0}(w)$ it follows that $D_{j_0}(e_L)$ is zero for all $L \in
Z(\3{\lambda}_0)$, i.e., for all those $L$ for which $D(\3\lambda_0)(e_L)$ is zero
rather than negative. (At least one such $L$ exists, since $\3\lambda_0$ is on
the boundary of $M$.)
\medskip
Now let us make an increment $\delta \3\lambda$ to $\3\lambda_0$ defined by
\begin{equation}
\delta\lambda_j = \delta_{jj_0}\kappa,
\end{equation}
with $\kappa \geq 0$. All its components are non-negative, so
$\3\lambda_0 +\delta \3\lambda$ remains in $\Lset$. To determine
its location with regards to $M$ and $\notM$, we calculate
\begin{equation}
f(L,\3{\lambda}_0 + \delta\3\lambda)
=
f(L,\3{\lambda}_0) + \kappa D_{j_0}(e_L).
\end{equation}
When $L\in Z(\3\lambda_0)$, this is zero. When, instead, $L\in \widehat{Z}(\3\lambda_0)$,
this starts out negative and either stays at the same value or increases
with $\kappa$, depending on whether $D_{j_0}(e_L)$ is zero or not. (Recall that
every $D_j(e_L)$ is positive or zero.)
Thus for small enough $\kappa$, $\3{\lambda}_0 + \delta\3\lambda$ is still on the boundary of
$M$. But given $j_0$, there is at least one $e_L$ for which $D_{j_0}(e_L)$
is strictly positive; this $e_L$ is necessarily one of those corresponding
to $\widehat{Z}(\3\lambda_0)$. So at least one of the initially negative
$f(L,\3{\lambda}_0 + \delta\3\lambda)$ increases. There is a least $\kappa$ for which one (or
more) of these reaches zero. Let $\3\lambda_1$ be the resulting position on the
boundary of $M$.
At this point, we go back to the start of this Sec.\ \ref{sec:notM.M}, and
replace $\3\lambda_0$ by $\3\lambda_1$. We keep iterating this procedure, getting a
sequence of boundary points $\3\lambda_i$, with at each stage getting an
increased number of $L$ for which $f(L,\3{\lambda}_i)=0$.
Eventually this procedure has to stop because we run out of
values of $L$. The only way this happens in the argument is that there
are no values of $L$ for which $f(L,\3{\lambda}_{i_{\rm max}})$ is negative,
i.e., all the $f(L,\3{\lambda}_{i_{\rm max}})$ are zero. It then follows that
$D(\3\lambda_{i_{\rm max}})=0$, i.e., we have a Landau point.
This completes our proof of Thm.\ \ref{thm:main.geom}.
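The boundary walk just completed is constructive, and it can be illustrated numerically. The following Python sketch implements the idea on invented toy data (the arrays and the function name are assumptions for illustration, not a verified general algorithm):

```python
import numpy as np

# Invented toy tables (assumptions for illustration): DjeL[j, L] = D_j(e_L),
# all >= 0, and DlasteL[L] = D_{n_0+1}(e_L), all <= 0.
DjeL = np.array([[2.0, 0.0],
                 [0.0, 1.0]])
DlasteL = np.array([-1.0, -3.0])

def walk_to_landau_point(DjeL, DlasteL, tol=1e-12):
    """Increase one component of lambda at a time, keeping every vanishing
    f(L, lambda) at zero, until all f(L, lambda) = 0 (a Landau point)."""
    n0, nL = DjeL.shape
    lam = np.zeros(n0)
    while True:
        f = lam @ DjeL + DlasteL
        neg = f < -tol                    # L in Z-hat: f strictly negative
        if not neg.any():
            return lam                    # all f(L, lambda) = 0: done
        zero = ~neg                       # L in Z: f(L, lambda) = 0
        for j in range(n0):
            # j is usable if it keeps every vanishing f at zero ...
            if np.any(DjeL[j, zero] > tol):
                continue
            # ... and strictly increases at least one negative f.
            if not np.any(DjeL[j, neg] > tol):
                continue
            grow = neg & (DjeL[j] > tol)
            kappa = np.min(-f[grow] / DjeL[j, grow])
            lam[j] += kappa               # least step making another f zero
            break
        else:
            raise RuntimeError("no usable direction in this toy sketch")

lam = walk_to_landau_point(DjeL, DlasteL)
assert np.all(lam >= 0) and np.allclose(lam @ DjeL + DlasteL, 0.0)
```

On this data the walk returns $\3\lambda=(1/2,\,3)$, at which every $f(L,\3\lambda)$ vanishes.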
\section{Contour deformations and pinches}
\label{sec:deal.with.zero.first.order}
We now return to the determination of the conditions for the existence of a
pinch at a particular value $w_S$ of the integration variable in an
integral of the form given in Eq.\ (\ref{eq:integral}). We will complete
the proof of Thm.\ \ref{thm:main.contour}, that there is a pinch if and
only if the corresponding Landau condition holds. To do this, we need to
prove Thm.\ \ref{thm:pinch.to.1st.order.shift} relating a pinch to the
non-existence of an allowed deformation with positive first-order shifts in
the imaginary parts of the relevant denominators. Once that is proved, the
already-proved geometric Thm.\ \ref{thm:main.geom} gives Thm.\
\ref{thm:main.contour} as an immediate consequence.
We saw in Sec.\ \ref{sec:elem.parts} that if an allowed deformation exists
with positive first-order shifts in the relevant denominators, then the
integration is not trapped. So it remains to show that if there is no such
deformation, then the integral is trapped. Now in the one-dimensional
case, this is easy to show, because a contour deformation avoids a
singularity due to a zero of a denominator if and only if the first order
shift in the denominator is positive --- see App.\ \ref{sec:1D.first.order}
for explicit details. But in higher dimensions, the example in App.\
\ref{sec:2D.first.order} shows that a singularity can be avoided while
having a first-order shift that is zero, i.e., the direction of contour
deformation can be tangent to the singularity surface. Hence a more
detailed argument is needed for the general case, and it will turn out to
be annoyingly difficult for such an apparently elementary result.
\subsection{Elementary results}
First we prove some elementary results that strongly restrict the kinds of
contour deformation that do or do not avoid singularities.
Given a denominator $A_j(w_R)$ that is zero at $w_R=w_S$, define its
derivative by $D_j = \partial A_j(w_S)$. Then consider a candidate contour
deformation specified by $v(w_R)$, and let $v_S=v(w_S)$, the direction of
deformation at $w_S$. We classify what happens by the sign of $D_j(v_S) =
v_S\cdot \partial A_j$, and codify the results in some theorems.
It is useful to define $\delta w=w_R-w_S$ and $\delta v(\delta w) = v(w_S+\delta w)-v_S$, i.e., the
deviations from values at $w_S$.
First, for positive $D_j(v_S)$:
\begin{theorem}
\label{thm:v.D.pos}
Suppose, with the notation and conditions just stated, that the
deformation is allowed and that $D_j(v_S)>0$. Then the deformation avoids
the singularity at $w_S$ associated with the zero of $A_j$.
\end{theorem}
\begin{proof}
Although this result is elementary, we will give a detailed argument,
since this will introduce techniques to be used in more difficult
situations.
The denominator is $A_j(w_S+\delta w+i\lambda (v_S + \delta v))+i\epsilon$. Now we always
require that $A_j(w)$ is analytic and that it is real when $w$ is real.
Therefore all the Taylor coefficients for an expansion about $w_S$ are
real. We expand the denominator $A_j+i\epsilon$ in powers of $\delta w$ and $\lambda$.
The imaginary part comes solely from the odd terms in $\lambda$, and hence
\begin{equation}
\label{eq:im.aj}
\Im(A_j+i\epsilon) = \epsilon + \lambda \left[ D_j(v_S) + O(\lambda^2) + O(\delta w) \right].
\end{equation}
For small enough $\delta w$ and $\lambda$, the correction terms are smaller in size
than the positive $D_j(v_S)>0$ term, and we therefore have a sum of two
non-negative terms. Therefore the imaginary part of $A_j$ is zero only
if both $\epsilon$ and $\lambda$ are zero.
Now a zero of the denominator occurs when both its real and imaginary
parts are zero. Hence in a neighborhood of $w_S$, the denominator is
nonzero when $\lambda$ is small and positive, and therefore the singularity due
to $A_j(w_S)=0$ is avoided by the contour deformation.
\end{proof}
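The sign argument around Eq.\ (\ref{eq:im.aj}) is easy to probe numerically. A minimal Python sketch, using an invented quadratic denominator (an assumption for illustration) with $D_j(v_S)=1>0$:

```python
import numpy as np

# Invented example: the real-analytic denominator A_j(w) = w_1 + w_1^2 + w_2^2
# vanishes at w_S = 0, and the deformation direction v_S = (1, 0) has
# D_j(v_S) = 1 > 0.
def A(w):
    return w[0] + w[0]**2 + w[1]**2

vS = np.array([1.0, 0.0])

rng = np.random.default_rng(1)
for _ in range(200):
    dw = rng.uniform(-0.05, 0.05, size=2)   # small real displacement delta-w
    lam = rng.uniform(1e-4, 0.05)           # small positive lambda
    val = A(dw + 1j * lam * vS)
    # Here Im A = lam * (1 + 2*dw_1) > 0, matching
    # Im(A_j + i*eps) = eps + lam * [D_j(vS) + O(lam^2) + O(dw)],
    # so the deformed denominator cannot vanish even at eps = 0.
    assert val.imag > 0
```

The imaginary part stays strictly positive throughout the sampled neighborhood, so the singularity at $w_S$ is avoided, as the theorem states.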
Next, if $D_j(v_S)$ is negative, the candidate deformation is not even
allowed:
\begin{theorem}
\label{thm:v.D.neg}
Suppose that $v(w_R)$ specifies a candidate deformation, and that
$D_j(v_S) < 0$. Then the deformation is not allowed.
\end{theorem}
\begin{proof}
We need to show that if $D_j(v_S)$ is negative then we have a situation
like that shown in Fig.\ \ref{fig:eps.lam.sing}(a). That is, for all
small $\lambda$, there is a zero of $A_j\!\bigl(w_S+\delta w+i\lambda (v_S+\delta v(\delta
w))\bigr)+i\epsilon$ for some small positive $\epsilon$ and some value(s) of $\delta w$.
Furthermore (at least one of) these values of $\epsilon$ and $\delta w$ approach zero
as $\lambda\to0+$. This corresponds to the negation of the definition of an
allowed deformation as given by Defns.\
\ref{def:compat.deform.point} and \ref{def:allowed.deform}.
We start with $\epsilon$ slightly positive, increase $\lambda$ from zero, and then
decrease $\epsilon$ to zero. We encounter a situation where the imaginary part
of $A_j+i\epsilon$ is zero. This can be seen from Eq.\ (\ref{eq:im.aj}) given
that $D_j(v_S)$ is negative. The zero of the imaginary part occurs both
when $\delta w=0$ and for all nearby values of $\delta w$, and it occurs for all
small positive $\lambda$.
But a zero in the imaginary part of the denominator does not itself show
that the deformation encounters a singularity from a zero in $A_j+i\epsilon$,
because to get a zero we also need the real part to be zero, and we need
to show that such a zero occurs independently of any higher order terms
in the Taylor expansion of $A_j$ in powers of small quantities $\lambda$, $\delta w$
and $\delta v$.
Choose $w_R=w_S+xv_S$, i.e., $\delta w=xv_S$. Thus $x$ parameterizes a
particular line in the space of $w_R$. Then $\delta v=O(x)$ as $x\to0$. Hence
applying a Taylor expansion of $A_j(w)$ about $w=w_S$ gives
\begin{multline}
\label{eq:case.linear}
A_j(w_S+xv_S + i\lambda(v_S+\delta v)) +i\epsilon
\\
= i\epsilon + (x+i\lambda) D_j(v_S) + O\mathopen{}\left( |x|^2, \lambda^2, |x|\lambda \right) .
\end{multline}
Recall that $v_S\cdot D_j$ is real, and is negative in the situation that we
are currently considering. Define a complex variable
\begin{equation}
\zeta = x+i\lambda,
\end{equation}
and consider values $\zeta=re^{i\theta}$ for positive $r$ and for $0\leq\theta\leq\pi$, so that
both $x$ and $\lambda$ are at most of size $r$, with $x=r\cos\theta$ and $\lambda=r\sin\theta$.
As we increase $\theta$ from 0 to $\pi$, $(x+i\lambda) v_S \cdot D_j$ traces out the
semicircle in the lower half plane shown in Fig.\ \ref{fig:tour1}. It
necessarily crosses the negative imaginary axis. The value of $A_j$
differs from $(x+i\lambda) v_S \cdot D_j$ by terms of order $r^2$, so for small
enough $r$, they only slightly modify the path in Fig.\ \ref{fig:tour1}.
It still starts on the negative real axis and ends on the positive real
axis, and crosses the negative imaginary axis. But it crosses the
imaginary axis with a value of $x$ that is order $r^2$ (and hence of
order $\lambda^2$), instead of exactly zero. We therefore get a zero of
$A_j+i\epsilon$ for any small $\lambda$ for some small $\epsilon$, and have not avoided a
singularity. This gives the situation shown in Fig.\
\ref{fig:eps.lam.sing}(a), which shows where, as we take $\epsilon$ and $\lambda$
through the values used to try to get a successful deformation, we first
encounter a singularity.
Hence the deformation is not allowed.
\begin{figure}
\centering
\includegraphics[scale=0.7]{figures/tour}
\caption{Value of $A_j$ takes approximately this tour in complex
plane, in the case of (\ref{eq:case.linear}), with
$\zeta=x+i\lambda=re^{i\theta}$, over $\theta$ from $0$ to $\pi$.}
\label{fig:tour1}
\end{figure}
\end{proof}
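The tour of Fig.\ \ref{fig:tour1} can be traced numerically. A hedged Python sketch on an invented example, $A_j \approx \zeta\, D_j(v_S)$ with $D_j(v_S)=-1$ plus a quadratic correction:

```python
import numpy as np

# Invented example (assumption for illustration): leading behavior
# A_j ~ zeta * D_j(v_S) with D_j(v_S) = -1, plus a quadratic correction.
r = 1e-2
theta = np.linspace(0.0, np.pi, 2001)
zeta = r * np.exp(1j * theta)            # zeta = x + i*lambda = r e^{i theta}
A = -zeta + 0.3 * zeta**2

# The tour starts on the negative real axis and ends on the positive one ...
assert A[0].real < 0 and abs(A[0].imag) < 1e-9
assert A[-1].real > 0 and abs(A[-1].imag) < 1e-9
# ... so Re A must change sign; at the crossing Im A < 0, i.e., the path
# crosses the negative imaginary axis and A_j + i*eps vanishes there for
# some small eps > 0.
crossings = np.where(np.diff(np.sign(A.real)) != 0)[0]
assert crossings.size >= 1 and np.all(A[crossings].imag < 0)
```

As in the proof, the order-$r^2$ correction shifts the crossing slightly away from $\theta=\pi/2$ but does not remove it.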
From Thm.\ \ref{thm:v.D.neg}, the following property of allowed
deformations immediately follows:
\begin{theorem}
\label{thm:v.D.neg.bis}
Suppose that $v(w_R)$ specifies an allowed deformation. Then $v(w_S)\cdot D_j
\geq 0$, whenever $A_j(w_S)=0$, for every $j$ and every real $w_S$ in the
integration range.
\end{theorem}
The remaining case for $D_j(v_S)$ is that it is zero. One possibility is
that $v_S$ itself is zero. In that case $A_j(w_S+i\lambda v(w_S))=A_j(w_S)=0$.
Hence
\begin{theorem}
\label{thm:v.zero}
Suppose that $v_S=0$. Then the deformation does not avoid the
singularity due to the zero of $A_j$ at $w_S$.
\end{theorem}
This leaves one situation to treat, that $D_j(v_S)=0$ but $v_S\neq0$, which we
defer to Sec.\ \ref{sec:v.D.zero}.
The difficulties in its analysis concern the possibility of a non-constant
dependence of $v(w_R)$ on $w_R$. So it is useful to prove the simple
results that obtain if $v(w_R)$ is independent of $w_R$, at least in a
neighborhood of $w_S$.
\begin{theorem}
\label{thm:v.D.zero.nonzero.const}
Suppose that $v(w_R)$ is a candidate deformation that has no dependence
on $w_R$ near $w_S$, and that $D_j(v(w_S)) = 0$ but $A_j(w_S+i\lambda v_S)$ is
non-zero for some (non-zero) $\lambda$. Then the deformation is not allowed.
\end{theorem}
\begin{proof}
The non-zero value of $A_j(w_S+i\lambda v_S)$ implies that on the deformed
contour we no longer need encounter a zero of $A_j+i\epsilon$ when we restrict
attention to $w_R=w_S$. To see this, first observe that the analyticity
of $A_j(w_S+i\lambda v_S)$ as a function of $\lambda$ and its nonzero value for some
value of $\lambda$ imply that the zero of $A_j$ at $\lambda=0$ is isolated. Then we
can get a situation where no zero of $A_j(w_S+i\lambda v_S)+i\epsilon$ is encountered,
for all small enough non-zero $\lambda$.
But there are, in fact, zeros at nearby values of $w_R$, and these
obstruct the deformation, as we now show. Consider values $w_R =
w_S+xv_S$, with $x$ real, so that on the deformed contour we have
$A_j(w_S+(x+i\lambda)v_S)$. This is an analytic function of $x+i\lambda$. The
function is zero when $x=\lambda=0$, and by the hypothesis of the theorem is
not zero for some values of $x+i\lambda$.
Therefore there is a first non-zero term in the Taylor expansion:
\begin{equation}
\label{eq:v.D.zero.Taylor}
A_j(w_S+(x+i\lambda)v_S) = C (x+i\lambda)^n + O\mathopen{}\left( |x+i\lambda|^{n+1} \right),
\end{equation}
with $n\geq2$ since $v_S \cdot D_j=0$. Since $A_j$ is real for real values of
its argument, so is $C$. Set $x+i\lambda = r e^{i\theta}$, with $r$ positive.
Allowed values have $0\leq\theta\leq\pi$, and any small $r$ is possible. The value of
$A_j$ is
\begin{equation}
Cr^n e^{i\theta n} + O(r^{n+1}).
\end{equation}
The value is real when $\theta$ is $0$ or $\pi$. As $\theta$ is increased from $0$
to $\pi$, the value of $A_j$ must wind around the origin $n/2$ times, i.e., at
least once. So it crosses the negative imaginary axis. The order-$r^{n+1}$
terms from higher orders in the Taylor expansion can affect the
position of this crossing, but do not affect its existence, at least when
$r$ is small enough.
Hence for small $\lambda$ we find a zero of $A_j+i\epsilon$ during the contour
deformation, which therefore encounters a singularity, as in Fig.\
\ref{fig:eps.lam.sing}(a). Hence the deformation was not allowed,
contrary to hypothesis.
\end{proof}
\begin{theorem}
\label{thm:v.D.zero.const}
Suppose that $v(w_R)$ is also required to be an \emph{allowed}
deformation as well as having no dependence on $w_R$ near $w_S$, and that
$v(w_S)\cdot D_j = 0$. Then $A_j(w_S+i\lambda v_S)=0$ for all $\lambda$, and so the
singularity due to the zero in $A_j$ is not avoided.
\end{theorem}
\begin{proof}
This is an immediate consequence of Thm.\
\ref{thm:v.D.zero.nonzero.const}.
\end{proof}
\begin{theorem}
\label{thm:v.D.avoid.const}
Suppose that $v(w_R)$ is required to be an \emph{allowed} deformation as
well as having no dependence on $w_R$ near $w_S$, and that $A_j(w_S)=0$
for one or more $A_j$, where $w_S$ is real. Then the singularity due to
the zero in $A_j$ is avoided if and only if $v(w_S)\cdot D_j$ is strictly
positive, i.e., $D_j(v(w_S)) > 0$, for every one of the zero
denominators.
\end{theorem}
\begin{proof}
This follows directly from the application of the last few theorems
proved so far to multiple denominators, together with the results of
Sec.\ \ref{sec:elem.parts}.
\end{proof}
This last theorem is Thm.\ \ref{thm:pinch.to.1st.order.shift} with a
restriction on the $w_R$ dependence of $v(w_R)$, but without any of the
extra restrictions on the denominator that appear in the statement of Thm.\
\ref{thm:main.contour}.
Combined with Thm.\ \ref{thm:main.geom}, it gives our primary Theorem
\ref{thm:main.contour} under the same conditions.
\subsection{Analysis of neighborhood of singularity of integrand}
\label{sec:v.D.zero}
Consider a point $w_S$ in an integral where some denominators are zero.
First consider the case that there is a vector $v_S$ such that $D_j(v_S)>0$
for the derivatives of all the zero denominators. Then we choose a contour
deformation function which at $w_S$ is equal to $v_S$ (or proportional to
it with a positive coefficient). Then we saw in Sec.\ \ref{sec:elem.parts}
that the deformation avoids the singularity at $w_S$. This gives part of
the result stated in Thm.\ \ref{thm:pinch.to.1st.order.shift}.
To complete the proof of Thm.\ \ref{thm:pinch.to.1st.order.shift} (and
hence of Thm.\ \ref{thm:main.contour}) we now show that if no such $v_S$
exists, then the integration is trapped at $w_S$, i.e., that no allowed
deformation avoids the integrand's singularity at $w_S$.
We start by assuming that we have an allowed deformation, given by
$v(w_R)$, and that there is no $v_S$ such that $D_j(v_S)$ is strictly
greater than zero for all those $D_j$ that correspond to zero denominators.
We will obtain constraints that $v(w_R)$ must obey, and hence show that in
all cases the deformation does not avoid the singularity, thereby
completing the proof of the theorem. As given in the statement of the
theorem, we will restrict attention to denominators that are at most
quadratic in the integration variable, and the reason for the remaining
restriction in the statement of the theorem will emerge in the course of
making the proof.
To simplify the notation, we shift the integration variable so that
$w_S=0$. We define $v_0=v(0)$, the deformation at the singular point being
examined.
Since we cannot make all the relevant $D_j(v_0)$ positive, Thm.\
\ref{thm:main.geom} shows that the array of $D_j$s has a Landau point, i.e.,
there are values $\alpha_j$ such that
\begin{equation}
\label{eq:Landau.pt.deriv}
\sum_j \alpha_j D_j = 0,
\end{equation}
with all the $\alpha_j$s being non-negative and at least one being positive. The
denominators for which $\alpha_j=0$ will play no role in the proof, and so we
now focus attention on only those values of $j$ with nonzero $\alpha_j$. With
this focus, the denominators $A_j(w)$ in the retained set are zero at
$w=0$ and have strictly positive $\alpha_j$ in Eq.\
(\ref{eq:Landau.pt.deriv}).
Now for an allowed deformation, $D_j(v_0)\geq0$. From Eq.\
(\ref{eq:Landau.pt.deriv}), $\sum_j \alpha_j D_j(v_0) = 0$. So strict positivity
of the $\alpha_j$ implies that each $D_j(v_0)$ is actually zero.
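In symbols, this is the elementary observation that a vanishing sum of non-negative terms with strictly positive weights must vanish term by term:
\begin{equation*}
0 = \sum_j \alpha_j D_j(v_0), \quad \alpha_j > 0, \quad D_j(v_0) \geq 0
\quad\Longrightarrow\quad
D_j(v_0) = 0 \mbox{ for each retained $j$}.
\end{equation*}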
We expand each denominator in powers of $w$:
\begin{align}
\label{eq:denom.series}
A_j(w) &= \sum_a D_{j,a} w^a + \frac12 \sum_{a,b} w^a E_{j,ab} w^b
\nonumber\\
&= D_j \cdot w + \frac12 w \cdot E_j \cdot w,
\end{align}
given that the $A_j$ are at most quadratic in the integration variables.
On the deformed contour $w=w_R+i\lambda v(w_R)$, as usual.
For the particular case of $w_R=0$, a denominator on the deformed contour
is
\begin{equation}
\label{eq:denom.0}
A_j(i\lambda v_0) = i\lambda D_j \cdot v_0 - \frac{\lambda^2}{2} v_0 \cdot E_j \cdot v_0 = -\frac{\lambda^2}{2} v_0 \cdot E_j \cdot v_0.
\end{equation}
If $v_0 \cdot E_j \cdot v_0$ is zero for at least one $j$, then we have a zero of
$A_j$ on the deformed contour, so that the deformation has not avoided the
singularity. Then we need go no further for proving the target result.
So we now restrict attention to the case that $v_0\cdot E_j \cdot v_0$ is nonzero
for all of the retained denominators. We now examine the denominators near
the origin, to search for possible zeros and to determine whether or not
they are avoided. Define $\delta v$ by
\begin{equation}
v(w_R) = v_0 + \delta v(w_R),
\end{equation}
so that $\delta v(w_R)$ goes to zero as $w_R$ goes to zero, i.e., $\delta v(w_R) =
o(1)$ in this limit. Then
\begin{align}
A_j(w_R+i\lambda v(w_R))
\hspace*{-20mm}&
\nonumber\\
= {}& D_j \cdot w_R + \frac12 w_R \cdot E_j \cdot w_R - \frac{\lambda^2}{2} v_0 \cdot E_j \cdot v_0
\nonumber\\ &
- \lambda^2 v_0 \cdot E_j \cdot \delta v
- \frac{\lambda^2}{2} \delta v \cdot E_j \cdot \delta v
\nonumber\\ &
+ i\lambda \left[ D_j \cdot\delta v + w_R \cdot E_j \cdot v_0 + w_R \cdot E_j \cdot \delta v \right],
\end{align}
where the first two lines give the real part and the last line gives the
imaginary part.
We now search for possible obstructions to the contour deformation, i.e.,
zeros of $A_j+i\epsilon$ for small $w_R$, $\lambda$ and $\epsilon$. Avoiding these will give
constraints on the functional form of $\delta v(w_R)$. We do this by choosing a
small value of $w_R$, finding a value of $\lambda$ for which the real part of
$A_j+i\epsilon$ is zero, and then investigating the imaginary part. A zero of the
real part is obtained by setting $\lambda=\lambda(w_R)$, where
\begin{equation}
\lambda(w_R)
=
\sqrt{
\frac{ 2 D_j \cdot w_R + w_R \cdot E_j \cdot w_R }
{ v_0 \cdot E_j \cdot v_0 + 2 v_0 \cdot E_j \cdot \delta v + \delta v \cdot E_j \cdot \delta v }
},
\end{equation}
provided that the argument of the square root is positive.
Now let us consider the particular case that $w_R$ is in the direction
$v_0$ and set $w_R = xv_0$. Then
\begin{align}
\lambda(xv_0)
&=
\sqrt{
\frac{ x^2 v_0 \cdot E_j \cdot v_0 }
{ v_0 \cdot E_j \cdot v_0 + 2 v_0 \cdot E_j \cdot \delta v + \delta v \cdot E_j \cdot \delta v }
}
\nonumber\\
& =
|x| ~
\left( 1 + \frac{ 2 v_0 \cdot E_j \cdot \delta v + \delta v \cdot E_j \cdot \delta v }{ v_0 \cdot E_j \cdot v_0} \right)^{-1/2}.
\end{align}
Then there is a zero of the real part of $A_j$ for all small enough $x$,
both positive and negative, and the solution has $\lambda(xv_0)\simeq |x|$. At such
a zero, the value of $A_j$ arises only from its imaginary part
\begin{align}
A_j(x v_0+i\lambda(x v_0) \, v(x v_0))
\hspace*{-35mm}&
\nonumber\\
&= i \lambda(x v_0) \left[ D_j \cdot\delta v + xv_0 \cdot E_j \cdot v_0 + xv_0 \cdot E_j \cdot \delta v \right]
\nonumber\\
&= i\lambda(x v_0) \left[ D_j\cdot \delta v + x v_0 \cdot E_j \cdot v_0 + o(x) \right].
\end{align}
If at any point we were to get a negative imaginary part for all small $x$,
then a zero of $A_j+i\epsilon$ would be encountered in the contour deformation, so
that the deformation would not be allowed. We therefore ask what
constraints apply to $\delta v(w_R)$ to avoid such a negative imaginary part.
For the deformation to be allowed, we must have
\begin{equation}
\label{eq:Im.Aj}
D_j \cdot\delta v(x v_0) + x v_0 \cdot E_j \cdot v_0 + x v_0 \cdot E_j \cdot \delta v(x v_0) \geq 0
\end{equation}
for all small $x$.
First notice that $x$ can have either sign, and that when $x$ has the
opposite sign to $v_0 \cdot E_j \cdot v_0$, the term $x v_0 \cdot E_j \cdot v_0$ is
negative. The $o(x)$ term is strictly smaller (in the limit $x\to0$).
There are two cases to consider, according to whether $D_j$ itself is zero
or not.
If $D_j$ is zero, then $D_j\cdot\delta v=0$, and the negative term $x v_0 \cdot E_j \cdot
v_0$ dominates; we have a negative imaginary part, and the contour
deformation is not allowed, contrary to our initial assumption.
Therefore $D_j$ must be nonzero. Then it is conceivable that the $D_j\cdot\delta v$
term compensates the negativity of $x v_0 \cdot E_j \cdot v_0$.
As announced in the statement of the theorem, we now restrict\footnote{It
would be desirable to make a proof without this restriction, but it would
require a harder proof beyond the scope of this paper.} attention only
to cases with the property that all non-zero $v_0 \cdot E_j \cdot v_0$ have the
same sign. Thus
\begin{align}
\label{eq:restrict}
\mbox{Either for all ``relevant'' $j$, $v_0 \cdot E_j \cdot v_0 \geq 0$;}
\nonumber\\
\mbox{or for all ``relevant'' $j$, $v_0 \cdot E_j \cdot v_0 \leq 0$},
\end{align}
where a ``relevant'' $j$ is one for which $\alpha_j$ is non-zero in Eq.\
(\ref{eq:Landau.pt.deriv}), and for which $A_j$ is zero, $D_j$ is nonzero,
and $D_j\cdot v_0=0$, both at the point of integration space under
consideration. As already mentioned, if $v_0 \cdot E_j \cdot v_0$ is zero for at
least one relevant $j$, then the contour is definitely trapped, and we
now examine only the case where all the $v_0 \cdot E_j \cdot v_0$ are nonzero.
The restriction is obeyed for standard applications to Feynman graphs and
certain generalizations. To see this, observe that the standard Feynman
denominator for a line of a Feynman graph has the form $A_j=k^2-m^2$, where
$k$ is the line's momentum. It is zero when $k^2=m^2$. Let the projection
of an allowed deformation onto the momentum of the line be $\hat{v}_0$. We
then have $D_j\cdot v_0 = 2k\cdot \hat{v}_0$ and $\frac12 v_0\cdot E_j \cdot v_0 =
\hat{v}_0 \cdot \hat{v}_0$. For a massive line (i.e., $m\neq0$), all deformations
that obey $D_j\cdot v_0=0$ must have a space-like (or zero) $\hat{v}_0$, and
hence $v_0\cdot E_j \cdot v_0\leq0$. In the massless case with $k^2=0$ and $k$
nonzero, $\hat{v}_0$ is either space-like or null (or zero), and again
$v_0\cdot E_j \cdot v_0\leq0$. In the massless case with $k=0$, i.e., a soft line,
$D_j=2k=0$, so the denominator is not one of the relevant ones in Eq.\
(\ref{eq:restrict}). Another important case is of a Wilson line, for which
the denominator is simply linear: $A_j=k\cdot n$ for some vector $n$, and hence
$E_j$ itself is zero. A non-relativistic propagator, with denominator
$E-\3p^2/2m$, has the same sign for the quadratic term as in the massive
relativistic case. One other case that can be met in QCD is an approximation where a
longitudinal light-front component of momentum is set to zero, but
transverse momenta are preserved. Then the quadratic terms involve only
transverse momentum, and the quadratic terms obey the same sign condition
as for an unapproximated denominator.
Hence in all of these cases, the restriction (\ref{eq:restrict}) is
obeyed.
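As an explicit check of the massive case (a standard computation, with the metric convention in which timelike vectors have positive square assumed here):
\begin{equation*}
A_j(k + i\lambda \hat{v}_0) = k^2 - m^2 + 2i\lambda\, k\cdot\hat{v}_0 - \lambda^2\, \hat{v}_0\cdot\hat{v}_0,
\end{equation*}
which reproduces $D_j\cdot v_0 = 2k\cdot\hat{v}_0$ and $\frac12 v_0\cdot E_j \cdot v_0 = \hat{v}_0\cdot\hat{v}_0$. When $k^2=m^2>0$, the condition $k\cdot\hat{v}_0=0$ confines $\hat{v}_0$ to the orthogonal complement of a timelike vector, which contains only spacelike vectors and zero; hence $\hat{v}_0\cdot\hat{v}_0\leq0$.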
Given this restriction (independently now of which sign occurs), the second
term in (\ref{eq:Im.Aj}) is negative when we give $x$ the opposite sign to
$v_0\cdot E_j \cdot v_0$. Most importantly, the same value of $x$ can be used for
all the relevant denominators. The third term in the imaginary part is
always smaller when the size of $x$ is small enough. So the only hope for
getting a non-negative imaginary part is for the first term, $D_j \cdot\delta v(x
v_0)$, to compensate by being sufficiently positive. Since $\delta v$ is zero
when $x$ is zero, this compensation relies on $x$-dependence in $v(x v_0)$,
and hence on $w_R$ dependence in $v(w_R)$.
In the present case, there is a Landau point, so that $\sum_j \alpha_j D_j \cdot\delta
v(xv_0)=0$. Hence at least one $D_j \cdot\delta v(xv_0)$ is not positive. Hence,
for at least one $j$, the first term in (\ref{eq:Im.Aj}) cannot compensate
the negative value of the sum of the second and third terms. Then there is
a zero in $A_j$ that causes an obstruction to the contour deformation, and
the proposed deformation would not be allowed.
We have now covered all the cases, so that given the existence of a Landau
point, we have shown that all allowed contour deformations fail to avoid
the singularity. This completes the proof of Thm.\
\ref{thm:pinch.to.1st.order.shift} and hence of Thm.\
\ref{thm:main.contour}. Notice how we used the existence of a Landau
point, which was shown by a use of the geometrical Thm.\
\ref{thm:main.geom}.
We now revisit the rationale for the extra restriction (\ref{eq:restrict}). If
the restriction were not obeyed, then there would be at least one positive
and one negative $v_0 \cdot E_j \cdot v_0$. The negative values of the second term
in (\ref{eq:Im.Aj}) would occur for opposite signs of $x$ for different
denominators. Hence an appeal to $\sum_j \alpha_j D_j \cdot \delta v(xv_0) = 0$ would not be
sufficient to rule out a compensation of the negative terms by some choice
of $\delta v(xv_0)$. A better argument would be needed, but I have not found
one that is watertight.
\subsection{Anomalous deformations}
All but the very last part of the derivation in the previous subsection
gives a strategy for finding examples like that in App.\
\ref{sec:2D.first.order}, where a singularity is avoided by a deformation
that has zero first-order shifts at the singular point(s). Let us call
such a deformation an ``anomalous deformation'', formally defined by:
\begin{definition}
\label{def:anom.def}
An \emph{anomalous deformation} means an allowed deformation that avoids
the singularity due to a zero of one or more denominators $A_j$, but
where the first-order imaginary part is zero.
\end{definition}
What we did in the previous subsection was to exclude the possibility that,
when the Landau condition is obeyed, an anomalous deformation could exist
that avoids singularities due to all of the denominators. But the proof
relied on the extra restrictions on the denominators stated in Thm.\
\ref{thm:main.contour}.
If, in contrast, there is no Landau point, then we can find a vector that
gives positive first-order shifts in the denominators. Hence, in this
situation of no Landau point, given the existence of an anomalous
deformation we can find another that is not anomalous and is still
singularity-avoiding.
\subsection{Patching local deformations to global}
The arguments in the preceding sections as to whether or not an integrand's
singularity can be avoided by a contour deformation were local. That is,
the arguments were applied at each position $w_S$ where there is a
singularity, and they involved determining (a) which directions of
deformation at $w_S$ are compatible with the integrand's singularities, and
most importantly (b) which directions avoid a singularity.
The question now arises as to whether such locally determined directions
can be globally patched together consistently, so as to give a contour
deformation $v(w_R)$ for all $w_R$ that has one of the determined
directions at each point of singularity of the integrand, and that can be
implemented without some kind of discontinuity.
As an indication of possible issues, the example of determining normal
directions to a M\"obius strip comes to mind. This is a situation in which
the global topology of a surface prevents global patching of locally
determined vectors. But the present situation is different. At each point
on the initial integration contour, properties of the denominators
determine a manifold of directions of singularity-avoiding deformations
(and similarly for allowed deformations that don't avoid singularities).
The boundary of the manifold of possible directions depends continuously on
position in the manifold, and the denominators are single-valued functions
of position. So we can steer the deformation to stay within the allowed
manifold. Of course, at some parts of the original contour we may find
that no deformation avoids singularities.
If we take a tour of the initial integration contour going from some
initial point back to the same point, then we have the same restrictions on
the direction of deformation at the start and end, and no inconsistency.
This is an extremely simple-minded argument, and undoubtedly too weak to be
fully persuasive. An improved argument would be useful.
Of course, if we changed our integral to one in which a denominator $A_j$
had a branch cut on the initial integration contour, then the situation
would be different. But that is not the case for the integrals that we
consider in this paper. Admittedly, non-integer exponents are allowed
in Eq.\ (\ref{eq:integral}), so that the integrand itself can have branch
points and cut(s) that are on the initial integration contour. But that
does not affect the possible directions of deformation, which are all
determined by the denominator functions themselves, $A_j$, which we require
always to be analytic and single valued.
\subsection{Case of one denominator}
\label{sec:one-denom}
We now examine the situation when there is only one denominator. This is a
common special case, because it occurs when Feynman parameters are used.
Its analysis has some special features compared with the case of multiple
denominators, so it is useful to treat this case specially. In particular,
we will understand explicitly why Coleman and Norton \cite{Coleman:1965xm}
needed to put the restriction on their proof, that the matrix of second
derivatives of the denominator has no zero eigenvalues.
Let the denominator be $A(w)$. The Landau criterion for a putative pinch
at some point $w_S$ is simply that the denominator and its first derivative
$D(w)\stackrel{\textrm{def}}{=} \diff{A(w)}/\diff{w}$ are zero at $w_S$. The aim is to show,
if possible, that the contour of integration is trapped at that point. Of
course, given the zero derivative, any deformation that avoids the
singularity has a zero first-order shift in the denominator, and hence is
anomalous. If we don't succeed in excluding the possibility of an
anomalous deformation, then at least we can strongly constrain its
properties and those of the denominator. Of course, if $A(w_S)=0$ but the
derivative were non-zero, then we can certainly avoid the singularity at
$w_S$ by a deformation $w\mapsto w_R + i\lambda v(w_S)$ with $D(w_S)\cdot v(w_S)>0$. So
the case of a zero derivative is the only one to examine further.
If the denominator is quadratic in the integration variables, then the
restrictions in Thm.\ \ref{thm:main.contour} are obeyed, and the derivation
in previous sections is valid. But the denominator from applying the
Feynman parameter method to a standard Feynman graph is cubic if the
momentum integrals are not performed.
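Concretely (a one-loop illustration with hypothetical masses $m_1$, $m_2$ and external momentum $q$, not taken from the text): combining propagators with denominators $k^2 - m_1^2$ and $(k+q)^2 - m_2^2$ by a Feynman parameter $x$ gives
\begin{equation*}
A(x, k) = x\left[(k+q)^2 - m_2^2\right] + (1-x)\left[k^2 - m_1^2\right],
\end{equation*}
in which terms such as $x\,k^2$ are cubic in the combined integration variables $(x, k)$.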
If the momentum integrals are performed, as can be done analytically for
standard Feynman graphs, then the denominator is a polynomial of order one
plus the number of loops. It can therefore be of arbitrarily high order.
But as we have already observed, a pinch in momentum space does not always
entail a pinch in parameter space, so this case isn't so generally useful.
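To illustrate the order counting in a standard one-loop example (a bubble graph with hypothetical masses $m_1$, $m_2$ and external momentum $q$): performing the momentum integral leaves, up to overall factors,
\begin{equation*}
\left[ x\, m_2^2 + (1-x)\, m_1^2 - x(1-x)\, q^2 \right]^{d/2 - 2},
\end{equation*}
whose base is a polynomial of degree two in the Feynman parameter $x$, i.e., of order one plus the single loop.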
Given the significance of the Feynman parameter representation of a Feynman
graph before the momentum integrals are performed, we will restrict
attention to the case that the denominator is at most cubic in the
integration variables. As before, given a point $w_S$ where the denominator
and its derivative are zero, we simplify the notation by shifting variables
so that $w_S=0$. Then the denominator has the form:
\begin{align}
\label{eq:one.denom}
A(w) &= \frac12 \sum_{ab} w^a E_{ab} w^b + \frac16 \sum_{abc} w^aw^bw^c F_{abc}
\nonumber\\
&= \frac12 w \cdot E \cdot w + \frac16 w w w \cdot F,
\end{align}
where each of the arrays $E$ and $F$ is symmetric in its indices.
At certain points it will be useful to follow Coleman and Norton, and
diagonalize $E$ by a change of variable, to write
\begin{equation}
\label{eq:E.diag}
w \cdot E \cdot w = \sum_j c_j \eta_j^2,
\end{equation}
with each $\eta_j$ being a linear combination of $w^a$s. By rescaling the
$\eta_j$, we can arrange that each non-zero $c_j$ has absolute value unity.
Thus without loss of generality, we can arrange that each $c_j$ is either
$+1$, $-1$ or $0$.
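For instance (a two-variable illustration, not part of the general argument), a hyperbolic quadratic form is diagonalized as
\begin{equation*}
w \cdot E \cdot w = 2 w^1 w^2 = \eta_1^2 - \eta_2^2,
\qquad
\eta_{1,2} = \frac{w^1 \pm w^2}{\sqrt{2}},
\end{equation*}
so that $c_1 = +1$ and $c_2 = -1$, with no further rescaling needed.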
There are several different cases to consider, so it is convenient to
encapsulate in a lemma each of the separate cases, as well as several
subsidiary results. The first lemma is elementary:
\begin{lemma}
For a contour deformation $w = w_R +i\lambda v(w_R)$ to avoid the singularity
caused by the denominator (\ref{eq:one.denom}) at $w=0$, it is necessary
(but not sufficient, as we will see), for $v_0\cdot E\cdot v_0$ or $v_0v_0v_0\cdot F$
(or both) to be nonzero. Here $v_0=v(0)$.
Conversely, if both of $v_0\cdot E\cdot v_0$ and $v_0v_0v_0\cdot F$ are zero, the
singularity is not avoided.
\end{lemma}
\begin{proof}
The trivial proof is to observe that $A(w_R+i\lambda v(w_R))$ needs to be nonzero
at $w_R=0$ if the singularity is to be avoided.
\end{proof}
\begin{lemma}
For an allowed deformation, $v_0\cdot E\cdot v_0$ must be zero.
\end{lemma}
\begin{proof}
The proof is a minor modification of the argument leading to Eq.\
(\ref{eq:Im.Aj}). If $v_0\cdot E\cdot v_0$ were nonzero, then we would have a
zero of the real part of $A$ with $\lambda$ close to $|x|$. The
higher-than-quadratic terms that we now have do not affect that result. But in Eq.\
(\ref{eq:Im.Aj}) we no longer have the single-derivative term, which is
therefore not available to compensate the negative value of $xv_0\cdot E\cdot
v_0$ that occurs when $x$ has the opposite sign to $v_0\cdot E\cdot v_0$. The
cubic term does not affect that for small $\lambda$ (and $x$). So the
constraint Eq.\ (\ref{eq:Im.Aj}) for an allowed deformation cannot be
obeyed.
This leaves only the case of zero $v_0\cdot E\cdot v_0$ for an allowed deformation.
\end{proof}
\begin{lemma}
For an allowed deformation, $v_0$ must be an eigenvector of $E$ with
eigenvalue 0.
\end{lemma}
\begin{proof}
We now use the change of variables that gives the diagonalized form for
the quadratic term, Eq.\ (\ref{eq:E.diag}), with each $c_j$ being either
$+1$, $-1$ or $0$. Let $h$ be the result of applying the change of
variables to $v_0$. Then
\begin{equation}
\label{eq:vEv.diag}
v_0\cdot E\cdot v_0 = \sum_j c_j h_j^2 = 0.
\end{equation}
There are two cases to consider. One is where the only nonzero values of
$h_j$ are with $c_j=0$. Then $v_0$ is an eigenvector of $E$ with
eigenvalue zero, so we are done.
The other case is where there is at least one $j$ with both of $h_j$ and
$c_j$ nonzero. To get the zero value in Eq.\ (\ref{eq:vEv.diag}), there
must be at least one positive term and one negative term. Permute the
labels so that the $j=1$ term is positive and the $j=2$ term is
negative: $v_0\cdot E\cdot v_0 = h_1^2-h_2^2 + \mbox{terms from other $j$}$, with
both of $h_1$ and $h_2$ nonzero.
We now find a zero of the denominator that gives deformation-obstructing
singularity in the integrand. Choose $w_R$ to correspond to
\begin{equation}
\eta_j = x \delta_{j1} + y \delta_{j2}.
\end{equation}
\begin{widetext}
Then the denominator is
\begin{align}
A(w_R(x,y) + i \lambda (v_0+\delta v))
={}& \frac12 x^2 - \frac12 y^2 + O(\lambda^2\delta v^2) + O(\lambda^3)
+i\lambda \left[ xh_1 -yh_2 + O(|x|\delta v) + O(|y|\delta v) + O(\lambda^2) \right].
\end{align}
\end{widetext}
If the first two terms in the real part had no corrections, then it would
be zero whenever $|x|=|y|$, with all combinations of signs allowed. By
taking $x$ to have the opposite sign to $h_1$ and $y$ to have the same
sign as $h_2$, we get a negative value for the first two terms in the
imaginary part. We now choose $x$ and $y$ to be of order $\lambda$, and take
$\lambda$ to zero. Then the correction terms in the real part are smaller than
the first two terms, and cause the position of the zero to move slightly,
with the fractional change decreasing to zero as $\lambda\to0$. The correction
terms in the imaginary part are similarly smaller than the first two terms,
and leave the imaginary part negative.
Hence when $\lambda$ and $\epsilon$ are decreased to zero, a zero of
$A(w_R+i\lambda v(w_R))+i\epsilon$ is always encountered somewhere on the integration
contour, and hence the deformation is obstructed by a singularity and is
not allowed.
Hence the case that $v_0$ is not an eigenvector of $E$ of eigenvalue zero
is ruled out, and the lemma is established.
\end{proof}
\begin{lemma}
\label{lemma:PSS.tangent}
Suppose $w=0$ is part of a manifold $M$ of points satisfying the Landau
condition, i.e., $A=0$ and $D=0$. Then (a) any tangent vector $t$ to the
manifold (at $w=0$) is an eigenvector of $E$ of eigenvalue zero; (b) $t t
t \cdot F=0$; (c) hence a contour deformation whose $v_0$ is tangent to $M$
at $w=0$ does not avoid the singularity.
\end{lemma}
\begin{proof}
Consider a path within the manifold $M$, starting at the origin and with
initial direction $t$. Let the path be parameterized by $P(x)$, where $x$
is real, $P(0)=0$, and $P'(0)=t$, with $P'(x)$ denoting $\diff{P(x)}/
\diff{x}$. Then for all $x$ for which $P(x)\in M$,
\begin{subequations}
\begin{align}
\label{eq:A.path.0}
0 & = A(P(x)) = \frac12 P(x) \cdot E \cdot P(x) + \frac16 P(x)P(x)P(x) \cdot F,
\\
\label{eq:D.path.0}
0 & = D(P(x)) = E \cdot P(x) + \frac12 P(x)P(x) \cdot F.
\end{align}
\end{subequations}
with the second equation meaning
\begin{align}
0 & = D_a(P(x)) = \sum_b E_{ab} P^b(x) + \frac12 \sum_{bc} F_{abc} P^b(x)P^c(x).
\end{align}
Differentiate Eq.\ (\ref{eq:D.path.0}) with respect to $x$ to get
\begin{equation}
0 = \frac{\diff{D(P(x))}}{\diff{x}}
= E \cdot P'(x) + P'(x)P(x) \cdot F.
\end{equation}
Set $x=0$ to get $E\cdot t=0$, i.e., $t$ is an eigenvector of $E$ of
eigenvalue zero.
Now differentiate Eq.\ (\ref{eq:D.path.0}) twice with respect to $x$ to
get
\begin{equation}
0 = E \cdot P''(x) + P'(x)P'(x) \cdot F + P(x)P''(x) \cdot F.
\end{equation}
Setting $x=0$, using $P(0)=0$ and $P'(0)=t$, and contracting with $t$
gives
\begin{equation}
t t t \cdot F = 0,
\end{equation}
which gives item (b) in the lemma.
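In more detail, the contraction step uses $P(0)=0$, the symmetry of $E$, and item (a):
\begin{equation*}
0 = t \cdot E \cdot P''(0) + t\,t\,t \cdot F = (E \cdot t) \cdot P''(0) + t\,t\,t \cdot F = t\,t\,t \cdot F.
\end{equation*}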
Then because both $t\cdot E\cdot t$ and $ttt \cdot F$ are zero, a contour deformation
with $v_0\propto t$ gives a zero of $A$ at $w_R=0$ on the deformed contour, so
that the singularity is not avoided. This proves item (c).
\end{proof}
\begin{lemma}
For any contour deformation that avoids the singularity (necessarily an
anomalous deformation), $v_0$ is an eigenvector of $E$ with eigenvalue zero,
but is not tangent to any manifold such as $M$ in the previous lemma.
\end{lemma}
\begin{proof}
This is immediate from the previous two lemmas.
\end{proof}
\begin{lemma}
\label{lemma:PSS.anomalous}
For there to exist a contour deformation that avoids the singularity, $E$
must have an eigenvector of eigenvalue zero that is not in the space of
directions of manifolds of the form of $M$.
\end{lemma}
\begin{proof}
Immediate from the previous lemma.
\end{proof}
We now see why Coleman and Norton needed their restriction on the
eigenvalues of $E$. However, they did not explain why, and the argument in
this section appears to show that the derivation is non-trivial. They
refer to Ref.\ \cite{Eden:1961} for situations when the zero eigenvalue
problem arises.
There is one common case of zero eigenvalues in a massless theory, and that
is when there is a collinear region. In that case the corresponding
pinch-singular-surface is not simply a point, being parameterized by
longitudinal momentum fraction(s). But Lemmas \ref{lemma:PSS.tangent} and
\ref{lemma:PSS.anomalous} show that if the only zero eigenvalues are for
tangents to the surface, the singularity is not avoided. There is perhaps
an obscure reference to this in the third and fourth lines of p.\ 441 of
Ref.\ \cite{Coleman:1965xm}.
Undoubtedly, it is possible to examine in more detail the case with a zero
eigenvalue, and to find further constraints on allowed deformations. If
the contour deformation direction $v(w_R)$ were required to be independent
of $w_R$, or sufficiently slowly varying, then Thms.\
\ref{thm:v.D.zero.nonzero.const} and \ref{thm:v.D.zero.const} show that in
full generality it is not possible to avoid the singularity due to a zero
of the denominator and its derivative. The elementary proof simply uses
the first non-zero term in the Taylor expansion of $A(zv_0)$ in powers of
$z$. In our case it would be the cubic term that is relevant.
However, the deformation direction can depend on $w_R$. The
non-trivial problem is that there can then arise a nonzero contribution to the
quadratic term involving $\lambda\delta v$, and this is potentially capable of
compensating the part of the cubic term that would otherwise result in an
unavoidable singularity in the contour deformation.
\section{Conclusions and implications}
In this paper, I have provided a complete proof of the necessity and
sufficiency of the Landau condition for a pinch in the kind of integral
typified by Feynman graphs in the physical region. The proof overcomes a
number of deficiencies in existing work, and it can be applied directly to
Feynman graphs in momentum space (unlike many previous proofs). The
analysis of pinch singularities is foundational to perturbative QCD, so it
is important not only to have a full explicit proof, but to have one whose
domain of application, as here, includes Feynman graphs with massless
propagators as well as massive ones, and also modified propagators such as
the Wilson-line denominators that are common in QCD applications.
The methods and intermediate results in the proof have further
implications, beyond simply determining where pinches occur. For example,
an analysis of coordinate-space behavior can be made by deforming a contour
of integration as much as possible to convert rapidly oscillating
exponential factors into strongly decaying exponentials. Dominant regions
are determined by the locations where such a deformation cannot be made.
Allowed directions of deformation are constrained not only by the need to
avoid singularities of the integrand, but also to avoid making the
exponentials rapidly growing. Dominant regions of the integration variables
are where the constraints cannot be satisfied, and it is useful to have an
analysis that works at all orders of perturbation theory.
Another possible application, especially of the geometric results in Secs.\
\ref{sec:overall}--\ref{sec:landau.proof}, is to improve algorithmic
methods for deforming contours in numerical calculations of Feynman
graphs, as in Refs.\ \cite{Gong:2008ww,Becker:2012nk,Becker:2012bi}.
In constructing the proof, some interesting subsidiary results were found.
Some of these simply resulted from a close analysis of treatments in the
classic literature (which give a strong inspiration to treatments in
textbooks). Particular problems and even a demonstrably false assumption
were found. Awareness of such issues is important to provide sound and
properly persuasive pedagogical treatments.
Another notable case was to recognize the possibility of avoiding a
singularity in the integrand by a contour deformation in a direction that
is tangent to the singularity surface. Such a deformation I termed
``anomalous''. With such a deformation, the first order shift of a
denominator due to the contour deformation is zero. This contrasts with
the natural intuition (engendered by experience in one dimensional cases)
that, in order to avoid a singularity of the integrand, the contour
deformation must give a positive first-order shift to the imaginary part of
the denominator, matching the sign of the $i\epsilon$. An example of an anomalous
deformation was found. But this was a case where the contour is not
trapped, in which case there are also non-anomalous contour deformations
that avoid the singularity.
As regards the proof given here, considerable complications were
encountered in excluding the possibility that one can have a situation
where a Landau condition is obeyed, but the contour is \emph{not} trapped;
that is, it was necessary to rule out the possibility of an anomalous
deformation when a Landau condition is obeyed. A proof was found only when
the denominators in the integral obeyed certain conditions. Luckily, these
conditions are indeed obeyed for Feynman graphs --- see the statement of
Thm.\ \ref{thm:main.contour} and Eq.\ (\ref{eq:restrict}). A more general
proof (or counterexample) would obviously be useful.
Here are some possible directions for further work.
\begin{enumerate}
\item It would be useful to apply the methods to give a fully systematic
and general account in coordinate space of the large-$Q$ behavior of
amplitudes, such as appear in QCD factorization. This would extend, for
example, the work of Erdo\u{g}an and Sterman
\cite{Erdogan:2014gha,Erdogan:2016ylj,Erdogan:2017gyf}.
\item Another direction is to determine from the geometrical considerations
given in this paper the possible directions for allowed contour
deformations at a pinch. Given the existence of a pinch at a particular
point or at points on some manifold, there is a certain set of
denominator(s) whose corresponding singularity/ies of the integrand
cannot be avoided. These effectively are the denominators that actually
cause the pinch. But it is possible that other denominators are zero,
but that the corresponding singularities can be avoided by a contour
deformation that respects the constraints given by the pinching
denominators. It would be useful to have a determination of the range of
allowed directions.
Such issues were not important in the original application of the Landau
analysis to determine singularities of an integral as a function of
\emph{external} parameters. But in QCD applications, the focus is rather
on the momentum configurations at a pinch and their neighborhoods. The
exact pinches of relevance in standard pQCD applications are in a
massless theory, whereas the true theory is not massless; the massless
version is simply a useful tool for locating relevant regions in the
space of loop momenta. Moreover, a subtracted hard scattering
coefficient, calculated in the massless limit as usual, is singular at
zero mass, but the singularity is not strong enough to make the hard
scattering actually divergent there, given the subtractions. (The same
is not true of the derivatives of sufficiently high order with respect to
mass at zero mass.)
\item Consider the Coleman-Norton result that locations where a Landau
condition is obeyed correspond to possible classical processes. Their
result is very useful for readily determining the well-known results on
regions involved in asymptotic large $Q$ behavior, notably the
classification into hard, collinear, and soft subgraphs. Coleman and
Norton's proof works in the massive case, but it becomes singular in the
massless case and doesn't fully capture \cite{Ma:2019hjq} what is
actually needed in QCD applications. It would be useful to remedy this
problem, perhaps in conjunction with a systematic treatment in coordinate
space.
\end{enumerate}
\section*{Acknowledgments}
I thank Marko Berghoff, Yao Ma, Maximilian M\"uhlbauer, Dave Soper, and
George Sterman for useful conversations.
\section{Introduction}
\begin{figure}[t]
\begin{subfigure}[t]{\linewidth}
\centering
\includegraphics[width=0.6\linewidth]{figures/sample_467.png}
\caption{Image with human and object detections.}
\label{fig:teaser-sample}
\end{subfigure}
\begin{subfigure}[t]{\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/tokens.pdf}
\caption{Unary and pairwise tokens with predicted scores (\emph{riding a motorcycle}).}
\label{fig:teaser-tokens}
\end{subfigure}
\caption{Our Unary--Pairwise Transformer encodes human and object instances individually and in pairs, allowing it to reason about the data in complementary ways. In this example, our network correctly identifies the interactive pairs for the action \textit{riding a motorcycle}, while suppressing the visually-similar non-interactive pairs and those with different associated actions.
}
\label{fig:teaser}
\vspace{-10pt}
\end{figure}
Human--object interaction (HOI) detectors localise interactive human--object pairs in an image and classify the actions. They can be categorised as one- or two-stage, mirroring the grouping of object detectors.
Exemplified by Faster R-CNN~\cite{fasterrcnn}, two-stage object detectors typically include a region proposal network, which explicitly encodes potential regions of interest in the form of bounding boxes. These bounding boxes can then be classified and further refined via regression in a downstream network. In contrast, one-stage detectors, such as RetinaNet~\cite{retinanet}, retain the abstract feature representations of objects throughout the network, and decode them into bounding boxes and classification scores at the end of the pipeline.
In addition to the same categorisation convention, HOI detectors need to localise two bounding boxes per instance instead of one. Early works~\cite{hicodet,gpnn,no-frills,tin} employ a pre-trained object detector to obtain a set of human and object boxes, which are paired up exhaustively and processed by a downstream network for interaction classification. This methodology coincides with that of two-stage detectors and quickly became the mainstream approach due to the accessibility of high-quality pre-trained object detectors.
The first one-stage HOI detector was introduced by Liao \etal~\cite{ppdm}, who characterised human--object pairs as interaction points, represented as the midpoint of the human and object box centres. Recently, due to the great success in using learnable queries in transformer decoders for localisation~\cite{detr}, the development of one-stage HOI detectors has been greatly advanced. However, HOI detectors that adapt the DETR model rely heavily on the transformer, which is notoriously difficult to train~\cite{train-xfmer}, to produce discriminative features. In particular, when initialised with DETR's pre-trained weights, the decoder attends to regions of high objectness by default. The heavy-weight decoder stack then has to be adapted to attend to regions of high interactiveness. Consequently, training such one-stage detectors often consumes large amounts of memory and time as shown in \cref{fig:convg-time}.
In contrast, two-stage HOI detectors do not repurpose the backbone network, but maintain it as an object detector. Since the first half of the pipeline already functions as intended at the beginning of training, the second half can be trained quickly for the specific task of HOI detection. Furthermore, since the object detector can be decoupled from the downstream interaction head during training, its weights can be frozen, and a lighter-weight network can be used for interaction detection, saving a substantial amount of memory and computational resources.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figures/conv_time.png}
\caption{Mean average precision as a function of the number of epochs (left) and training time (right) to convergence. The backbone networks for all methods have been initialised with the same weights and trained on 8 GeForce GTX TITAN X GPUs.}
\label{fig:convg-time}
\end{figure}
\begin{table}[t]\small
\vspace{-4pt}
\caption{The performance discrepancy between existing state-of-the-art one-stage and two-stage HOI detectors is largely attributable to the choice of backbone network. We report the mean average precision ($\times 100$) on the HICO-DET~\cite{hicodet} test set.}
\label{tab:one-vs-two}
\setlength{\tabcolsep}{6pt}
\vspace{-4pt}
\begin{tabularx}{\linewidth}{l l l C}
\toprule
\textbf{Method} & \textbf{Type} & \textbf{Detector Backbone} & \textbf{mAP} \\
\midrule
SCG~\cite{scg} & two-stage & Faster R-CNN R-50-FPN & 24.88 \\
SCG~\cite{scg} & two-stage & DETR R-50 & 28.79 \\
SCG~\cite{scg} & two-stage & DETR R-101 & \textbf{29.26} \\
\midrule
QPIC~\cite{qpic} & one-stage & DETR R-50 & 29.07 \\
QPIC~\cite{qpic} & one-stage & DETR R-101 & \textbf{29.90} \\
\midrule
Ours & two-stage & DETR R-50 & 31.66 \\
Ours & two-stage & DETR R-101 & \textbf{32.31} \\
\bottomrule
\end{tabularx}
\vspace{-6pt}
\end{table}
Despite these advantages, the performance of two-stage detectors has lagged behind that of their one-stage counterparts. However, most of these two-stage models used Faster R-CNN~\cite{fasterrcnn} rather than more recent object detectors. We found that simply replacing Faster R-CNN with the DETR model in an existing two-stage detector (SCG)~\cite{scg} resulted in a significant improvement, putting it on par with a state-of-the-art one-stage detector (QPIC), as shown in \cref{tab:one-vs-two}. We attribute this performance gain to the representation power of transformers and the bipartite matching loss~\cite{detr}. The latter is particularly important because it resolves the misalignment between the training procedure and evaluation protocol. The evaluation protocol dictates that, amongst all detections associated with the same ground truth, the highest scoring one is the true positive while the others are false positives. Without bipartite matching, all such detections will be labelled as positives. The detector then has to resort to heuristics such as non-maximum suppression to mitigate the issue, resulting in procedural misalignment.
We propose a two-stage model that refines the output features from DETR with additional transformer layers for HOI classification. As shown in \cref{fig:teaser}, we encode the instance information in two ways: a unary representation where individual human and object instances are encoded separately, and a pairwise representation where human--object pairs are encoded jointly. These representations provide orthogonal information, and we observe different behaviours in their associated layers. The unary encoder layer preferentially increases the predicted interaction scores for positive examples, while the pairwise encoder layer suppresses the negative examples. As a result, this complementary behaviour widens the gap between scores of positive and negative examples, particularly benefiting ranking metrics such as mean average precision (mAP).
Our primary contribution is a novel and efficient two-stage HOI detector with unary and pairwise encodings. Our secondary contribution is demonstrating how pairwise box positional encodings---critical for HOI detection---can be incorporated into a transformer architecture, enabling it to jointly reason about unary appearance and pairwise spatial information. We further provide a detailed analysis on the behaviour of the two encoder layers, showing that they have complementary properties. Our proposed model not only outperforms state-of-the-art methods, but also consumes much less time and memory to train. The latter allows us to employ more memory-intensive backbone networks, further improving the performance.
\begin{figure*}[t]
\centering
\includegraphics[width=\linewidth]{figures/upt.pdf}
\caption{Flowchart for our unary--pairwise transformer. An input image is processed by a backbone CNN to produce image features, which are partitioned into patches of equal size and augmented with sinusoidal positional encodings. These tokens are fed into the DETR~\cite{detr} transformer encoder--decoder stack, generating new features for a fixed number of learnable object queries. These are decoded by an MLP as object classification scores and bounding boxes, and are also passed to the interaction head as unary tokens. The interaction head also receives pairwise positional encodings computed from the predicted bounding box coordinates. A modified transformer encoder layer then refines the unary tokens using the pairwise positional encodings. The output tokens are paired up and fused with the same positional encodings to produce pairwise tokens, which are processed by a standard transformer encoder layer before an MLP decodes the final features as action classification scores.
}
\label{fig:diagram}
\end{figure*}
\section{Related work}
Transformer networks~\cite{xfmer}, initially developed for machine translation, have recently become ubiquitous in computer vision due to their representation power, flexibility, and global receptive field via the attention mechanism. The image transformer ViT~\cite{vit} represented an image as a set of spatial patches, each of which was encoded as a token through simple linear transformations. This approach for tokenising images rapidly gained traction and inspired many subsequent works~\cite{swint}. Another key innovation of transformers is the use of learnable queries in the decoder, which are initialised randomly and updated through alternating self-attention and cross-attention with encoder tokens. Carion \etal~\cite{detr} use these as object queries in place of conventional region proposals for their object detector. Together with a bipartite matching loss, this design gave rise to a new class of one-stage detection models that formulate the detection task as a set prediction problem. It has since inspired numerous works in HOI detection~\cite{qpic, hoitrans, hotr, asnet}.
To adapt the DETR model to HOI detection, Tamura \etal~\cite{qpic} and Zou \etal~\cite{hoitrans} add additional heads to the transformer in order to localise both the human and object, as well as predict the action. As for bipartite matching, additional cost terms are added for action prediction. On the other hand, Kim \etal~\cite{hotr} and Chen \etal~\cite{asnet} propose an interaction decoder to be used alongside the DETR instance decoder. It is specifically responsible for predicting the action while also matching the interactive human--object pairs. These aforementioned one-stage detectors have achieved tremendous success in pushing the state-of-the-art performance. However, they all require significant resources to train the models. In contrast, this work focuses on exploiting novel ideas to produce equally discriminative features while preserving the memory efficiency and low training time of two-stage detectors.
Two-stage HOI detectors have also undergone significant development recently. Li \etal~\cite{idn} studied the integration and decomposition of HOIs in an analogy to the superposition of waves in harmonic analysis. Hou \etal explored few-shot learning by fabricating object representations in feature space~\cite{fcl} and learning to transfer object affordance~\cite{atl}. Finally, Zhang \etal~\cite{scg} proposed to fuse features of different modalities within a graphical model to produce more discriminative features. We make use of this modality fusion in our transformer model and show that it leads to significant improvements.
\section{Unary--pairwise transformers}
To leverage the success of transformer-based detectors, we use DETR~\cite{detr} as our backbone object detector and focus on designing an effective and efficient interaction head for HOI detection, as shown in \cref{fig:diagram}. The interaction head consists of two types of transformer encoder layers, with the first layer modified to accommodate additional pairwise input. The first layer operates on unary tokens, \ie, individual human and object instances, while the second layer operates on pairwise tokens, \ie, human--object pairs. Based on our analysis and experimental observations in \cref{sec:macro} and \cref{sec:micro}, self-attention in the unary layer preferentially increases the interaction scores for positive HOI pairs, whereas self-attention in the pairwise layer decreases the scores for negative pairs. As such, we refer to these layers as \textit{cooperative} and \textit{competitive} layers respectively.
\subsection{Cooperative layer}
\label{sec:coop}
A standard transformer encoder layer takes as input a set of tokens and performs self-attention. Positional encodings are usually indispensable to compensate for the lack of order in the token set. Typically, sinusoidal functions of the position~\cite{xfmer} or learnable embeddings~\cite{detr} are used for this purpose. It is possible to extend sinusoidal encodings to bounding box coordinates, however, our unary tokens already contain positional information, since they were decoded into bounding boxes. Instead, we take this as an opportunity to inject pairwise spatial information into the transformer, something that has been shown to be helpful for the task of HOI detection~\cite{scg}. Specifically, we compute the unary and pairwise spatial features used by Zhang \etal~\cite{scg} from the bounding boxes, including the unary box centre, width and height, and pairwise intersection-over-union, relative area, and direction, and pass this through an MLP to obtain the pairwise positional encodings. We defer the full details to~\cref{app:pe}.
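To make the feature construction concrete, the following is a minimal sketch of a few representative pairwise spatial features (intersection-over-union, relative area, and the direction between box centres); the full feature set follows Zhang \etal~\cite{scg} and is detailed in~\cref{app:pe}, so this sketch is illustrative only.

```python
import numpy as np

def box_pair_spatial(b1, b2, eps=1e-6):
    """Illustrative pairwise spatial features for two boxes in
    (x1, y1, x2, y2) format: IoU, relative area, and the normalised
    direction between box centres. The full feature set used in the
    paper is larger; this is a sketch only."""
    def area(b):
        return max(b[2] - b[0], 0.0) * max(b[3] - b[1], 0.0)

    def centre(b):
        return np.array([(b[0] + b[2]) / 2.0, (b[1] + b[3]) / 2.0])

    # Intersection-over-union.
    ix1, iy1 = max(b1[0], b2[0]), max(b1[1], b2[1])
    ix2, iy2 = min(b1[2], b2[2]), min(b1[3], b2[3])
    inter = max(ix2 - ix1, 0.0) * max(iy2 - iy1, 0.0)
    iou = inter / (area(b1) + area(b2) - inter + eps)

    # Relative area and unit direction from the first to the second centre.
    rel_area = area(b1) / (area(b2) + eps)
    direction = centre(b2) - centre(b1)
    direction = direction / (np.linalg.norm(direction) + eps)

    return np.concatenate([[iou, rel_area], direction])
```

In the model, a feature vector of this kind is passed through an MLP to produce the $m$-dimensional pairwise positional encoding $\by_{i,j}$.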
We also found that the usual additive approach did not perform as well for our positional encodings, so we slightly modify the attention operation in the transformer encoder layer to directly inject the pairwise positional encodings into the computation of the values and attention weights.
\begin{figure}[t]
\centering
\includegraphics[width=0.89\linewidth]{figures/modified_layer.pdf}
\caption{Architecture of the modified transformer encoder layer (left) and its attention module (right). FFN stands for feedforward network~\cite{xfmer}. ``Pairwise concat.'' refers to the operation of pairing up all tokens and concatenating the features. ``Duplicate'' refers to the operation of repeating the features along a new dimension.}
\label{fig:modified-layer}
\end{figure}
More formally, given the detections returned by DETR, we first apply non-maximum suppression and thresholding. This leaves a smaller set $\{d_i\}_{i=1}^{n}$, where a detection $d_i=(\bb_i, s_i, c_i, \bx_i)$ consists of the box coordinates $\bb_i \in \reals^4$, the confidence score $s_i \in [0, 1]$, the object class $c_i \in \cK$ for a set of object categories $\cK$, and the object query or feature $\bx_i \in \reals^{m}$. We compute the pairwise box positional encodings $\{\by_{i, j} \in \reals^m\}_{i, j=1}^{n}$ as outlined above.
We denote the collection of unary tokens by $X \in \reals^{n \times m}$ and the pairwise positional encodings by $Y \in \reals^{n \times n \times m}$. The complete structure of the modified transformer encoder layer is shown in \cref{fig:modified-layer}. For brevity of exposition, let us assume that the number of heads $h$ is 1, and define
\begin{align}
\dot{X} \in \reals^{n \times n \times m},\: \dot{X}_i & \triangleq X \in \reals^{n \times m}, \\
\ddot{X} \in \reals^{n \times n \times 2m},\: \ddot{\bx}_{i,j} & \triangleq \bx_{i} \oplus \bx_{j} \in \reals^{2m},
\end{align}
where $\oplus$ denotes vector concatenation. That is, the tensors $\dot{X}$ and $\ddot{X}$ are the results of duplication and pairwise concatenation. The equivalent values and attention weights can then be computed as
\begin{align}
V &= \dot{X} \otimes Y, \\
W &= \text{softmax}( (\ddot{X} \oplus Y) \bw + b ),
\end{align}
where $\otimes$ denotes elementwise product and $\bw \in \reals^{3m}$ and $b \in \reals$ are the parameters of the linear layer. The output of the attention layer is then computed as $W \otimes V$.
Additional details can be found in~\cref{app:me}.
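As a sanity check of the tensor manipulations above, the following NumPy sketch implements the single-head case. The final aggregation over the second axis, which reduces $W \otimes V$ back to per-token features, is our assumption, following standard attention practice.

```python
import numpy as np

def softmax(a, axis=-1):
    a = a - a.max(axis=axis, keepdims=True)
    e = np.exp(a)
    return e / e.sum(axis=axis, keepdims=True)

def modified_attention(X, Y, w, b):
    """Single-head sketch of the modified attention.

    X: (n, m) unary tokens; Y: (n, n, m) pairwise positional encodings;
    w: (3m,) and b: scalar -- parameters of the linear layer producing
    the attention logits."""
    n, m = X.shape
    # Duplication: X_dup[i, j] = x_j.
    X_dup = np.broadcast_to(X[None, :, :], (n, n, m))
    # Pairwise concatenation: X_cat[i, j] = x_i (+) x_j.
    X_cat = np.concatenate(
        [np.broadcast_to(X[:, None, :], (n, n, m)), X_dup], axis=-1
    )
    V = X_dup * Y                                         # elementwise values
    logits = np.concatenate([X_cat, Y], axis=-1) @ w + b  # (n, n)
    W = softmax(logits, axis=1)                           # weights over j
    # Aggregate W (x) V over the second axis to recover (n, m) features.
    return (W[..., None] * V).sum(axis=1)
```

With $Y$ all ones and $\bw = 0$, the weights are uniform and each output token is the mean of the inputs, as expected of unbiased attention.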
\subsection{Competitive layer}
To compute the set of pairwise tokens, we form all pairs of distinct unary tokens and remove those where the first token is not human, as object--object pairs are beyond the scope of HOI detection. We denote the resulting set as $\{p_k = (\bx_i, \bx_j, \by_{i, j}) \mid i \neq j, c_i = ``\text{human}''\}$.
\begin{equation}
\bz_k = \text{MBF}(\bx_i \oplus \bx_j, \by_{i, j}).
\end{equation}
Specifically, the MBF module fuses the two modalities in multiple homogeneous branches and returns a unified feature representation. For completeness, full details are provided in~\cref{app:mbf}. Lastly, the set of pairwise tokens is fed into an additional transformer encoder layer, allowing the network to compare the HOI candidates, before an MLP predicts each HOI pair's action classification logits $\widetilde{\bs}$.
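The pair enumeration step can be sketched as follows, where \texttt{classes} is a hypothetical list of detected class names; self-pairs and pairs whose first token is not human are excluded, matching the set definition above.

```python
def enumerate_pairs(classes):
    """Indices (i, j) of candidate pairs where token i is a human
    detection. `classes` is an illustrative list of class names;
    object--object pairs and self-pairs are excluded."""
    return [
        (i, j)
        for i, ci in enumerate(classes)
        for j in range(len(classes))
        if i != j and ci == "human"
    ]
```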
\subsection{Training and inference}
To make full use of the pre-trained object detector, we incorporate the object confidence scores into the final scores of each human--object pair. Denoting the action logits of the $k^{\text{th}}$ pair $p_k$ as $\widetilde{\bs}_k$, the final scores are computed as
\begin{equation}
\bs_k=(s_i)^\lambda \cdot (s_j)^\lambda \cdot \sigma(\widetilde{\bs}_k),
\label{eq:scores}
\end{equation}
where $\lambda > 1$ is a constant used during inference to suppress overconfident objects~\cite{scg} and $\sigma$ is the sigmoid function. We use focal loss\footnote{Final scores in \cref{eq:scores} are normalised to the interval $[0, 1]$. In training, we instead recover the scale prior to normalisation and use the corresponding loss with logits for numerical stability. See more details in~\cref{app:loss}.}~\cite{retinanet} for action classification to counter the imbalance between positive and negative examples. Following previous practice~\cite{no-frills,scg}, we only compute the loss on valid action classes for each object type, specified by the dataset. During inference, scores for invalid combinations of actions and objects (\eg, \textit{eating a car}) are zeroed out.
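A direct transcription of \cref{eq:scores} for a single human--object pair is sketched below; $\lambda = 1$ during training and $2.8$ at inference, as reported later in the implementation details.

```python
import math

def final_scores(s_h, s_o, logits, lam=2.8):
    """Sketch of the final score computation: for each action logit z,
    s = s_h**lam * s_o**lam * sigmoid(z), where s_h and s_o are the
    detector's human and object confidence scores."""
    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))
    return [(s_h ** lam) * (s_o ** lam) * sigmoid(z) for z in logits]
```

Raising the object confidences to a power $\lambda > 1$ suppresses pairs built on low-confidence detections more aggressively than on high-confidence ones.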
\section{Experiments}
\begin{table*}[t]\small
\centering
\caption{Comparison of HOI detection performance (mAP$\times100$) on the HICO-DET~\cite{hicodet} and V-COCO~\cite{vcoco} test sets. The highest result in each section is highlighted in bold.}
\label{tab:results}
\begin{tabularx}{\linewidth}{@{\extracolsep{\fill}} l l cccccccc}
\toprule
& & \multicolumn{6}{c}{\textbf{HICO-DET}} & \multicolumn{2}{c}{\textbf{V-COCO}} \\ [4pt]
& & \multicolumn{3}{c}{Default Setting} & \multicolumn{3}{c}{Known Objects Setting} & & \\
\cline{3-5}\cline{6-8}\cline{9-10} \\ [-8pt]
\textbf{Method} & \textbf{Backbone} & Full & Rare & Non-rare & Full & Rare & Non-rare & AP$_{role}^{S1}$ & AP$_{role}^{S2}$ \\
\midrule
HO-RCNN~\cite{hicodet} & CaffeNet & 7.81 & 5.37 & 8.54 & 10.41 & 8.94 & 10.85 & - & - \\
InteractNet~\cite{interactnet} & ResNet-50-FPN & 9.94 & 7.16 & 10.77 & - & - & - & 40.0 & - \\
GPNN~\cite{gpnn} & ResNet-101 & 13.11 & 9.34 & 14.23 & - & - & - & 44.0 & - \\
TIN~\cite{tin} & ResNet-50 & 17.03 & 13.42 & 18.11 & 19.17 & 15.51 & 20.26 & 47.8 & 54.2 \\
Gupta \etal~\cite{no-frills} & ResNet-152 & 17.18 & 12.17 & 18.68 & - & - & - & - & - \\
VSGNet~\cite{vsgnet} & ResNet-152 & 19.80 & 16.05 & 20.91 & - & - & - & 51.8 & 57.0 \\
DJ-RN~\cite{djrn} & ResNet-50 & 21.34 & 18.53 & 22.18 & 23.69 & 20.64 & 24.60 & - & - \\
PPDM~\cite{ppdm} & Hourglass-104 & 21.94 & 13.97 & 24.32 & 24.81 & 17.09 & 27.12 & - & - \\
VCL~\cite{vcl} & ResNet-50 & 23.63 & 17.21 & 25.55 & 25.98 & 19.12 & 28.03 & 48.3 & - \\
ATL~\cite{atl} & ResNet-50 & 23.81 & 17.43 & 27.42 & 27.38 & 22.09 & 28.96 & - & - \\
DRG~\cite{drg} & ResNet-50-FPN & 24.53 & 19.47 & 26.04 & 27.98 & 23.11 & 29.43 & 51.0 & - \\
IDN~\cite{idn} & ResNet-50 & 24.58 & 20.33 & 25.86 & 27.89 & 23.64 & 29.16 & 53.3 & 60.3 \\
HOTR~\cite{hotr} & ResNet-50 & 25.10 & 17.34 & 27.42 & - & - & - & 55.2 & \textbf{64.4} \\
FCL~\cite{fcl} & ResNet-50 & 25.27 & 20.57 & 26.67 & 27.71 & 22.34 & 28.93 & 52.4 & - \\
HOI-Trans~\cite{hoitrans} & ResNet-101 & 26.61 & 19.15 & 28.84 & 29.13 & 20.98 & 31.57 & 52.9 & - \\
AS-Net~\cite{asnet} & ResNet-50 & 28.87 & 24.25 & 30.25 & 31.74 & 27.07 & 33.14 & 53.9 & - \\
SCG~\cite{scg} & ResNet-50-FPN & 29.26 & \textbf{24.61} & 30.65 & \textbf{32.87} & \textbf{27.89} & \textbf{34.35} & 54.2 & 60.9 \\
QPIC~\cite{qpic} & ResNet-101 & \textbf{29.90} & 23.92 & \textbf{31.69} & 32.38 & 26.06 & 34.27 & \textbf{58.8} & 61.0 \\
\midrule
Ours (UPT) & ResNet-50 & 31.66 & 25.94 & 33.36 & 35.05 & 29.27 & 36.77 & 59.0 & 64.5 \\
Ours (UPT) & ResNet-101 & 32.31 & 28.55 & 33.44 & 35.65 & \textbf{31.60} & 36.86 & 60.7 & 66.2 \\
Ours (UPT) & ResNet-101-DC5 & \textbf{32.62} & \textbf{28.62} & \textbf{33.81} & \textbf{36.08} & 31.41 & \textbf{37.47} & \textbf{61.3} & \textbf{67.1} \\
\bottomrule
\end{tabularx}
\end{table*}
In this section, we first demonstrate that the proposed unary--pairwise transformer achieves state-of-the-art performance on both the HICO-DET~\cite{hicodet} and V-COCO~\cite{vcoco} datasets, outperforming the next best method by a significant margin. We then provide a thorough analysis on the effects of the cooperative and competitive layers. In particular, we show that the cooperative layer increases the scores of positive examples while the competitive layer suppresses those of the negative examples. We then visualise the attention weights for specific images, and show how these behaviours are achieved by the attention mechanism.
At inference time, our method with ResNet-50~\cite{resnet} runs at 24 FPS on a single GeForce RTX 3090 device.
\paragraph{Datasets:}
HICO-DET~\cite{hicodet} is a large-scale HOI detection dataset with $37\,633$ training images, $9\,546$ test images, $80$ object types, $117$ actions, and $600$ interaction types. The dataset has $117\,871$ human--object pairs with annotated bounding boxes in the training set and $33\,405$ in the test set.
V-COCO~\cite{vcoco} is much smaller in scale, with $2\,533$ training images, $2\,867$ validation images, $4\,946$ test images, and only $24$ different actions.
\subsection{Implementation details}
We fine-tune the DETR model on the HICO-DET and V-COCO datasets prior to training and then freeze its weights. For HICO-DET, we use the publicly accessible DETR models pre-trained on MS COCO~\cite{coco}. However, for V-COCO, as its test set is contained in the COCO val2017 subset, we first pre-train DETR models from scratch on MS COCO, excluding those images in the V-COCO test set. For the interaction head, we filter out detections with scores lower than $0.2$, and sample at least $3$ and up to $15$ humans and objects each, prioritising high scoring ones. For the hidden dimension of the transformer, we use $m=256$, the same as DETR. Additionally, we set $\lambda$ to $1$ during training and $2.8$ during inference~\cite{scg}. For the hyperparameters used in the focal loss, we use the same values as SCG~\cite{scg}.
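The detection sampling rule above can be sketched as follows; the exact tie-breaking behaviour is our assumption.

```python
def sample_detections(scores, is_human, thresh=0.2, low=3, high=15):
    """Sketch of the sampling rule: keep detections scoring above
    `thresh`, but retain at least `low` and at most `high` humans and
    objects each, taking the highest-scoring ones first."""
    def pick(idx):
        idx = sorted(idx, key=lambda i: scores[i], reverse=True)
        n_above = sum(1 for i in idx if scores[i] >= thresh)
        n = min(max(n_above, low), high)
        return idx[:n]

    humans = [i for i, h in enumerate(is_human) if h]
    objects = [i for i, h in enumerate(is_human) if not h]
    return sorted(pick(humans) + pick(objects))
```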
We apply a few data augmentation techniques used in other detectors~\cite{detr,qpic}. Input images are scaled such that the shortest side is at least $480$ and at most $800$ pixels, with the longest side limited to $1333$ pixels. Additionally, each image is cropped with a probability of $0.5$ to a random rectangle with each side being at least $384$ pixels and at most $600$ pixels before being scaled. We also apply colour jittering, where the brightness, contrast and saturation values are adjusted by a random factor between $0.6$ and $1.4$. We use AdamW~\cite{adamw} as the optimiser with an initial learning rate of $10^{-4}$. All models are trained for $20$ epochs with a learning rate reduction at the $10^{\text{th}}$ epoch by a factor of $10$. Training is conducted on $8$ GeForce GTX TITAN X devices, with a batch size of $2$ per GPU---an effective batch size of $16$.
\subsection{Comparison with state-of-the-art methods}
\begin{table*}[t]\small
\caption{Comparing the effect of the cooperative (coop.) and competitive (comp.) layers on the interaction scores. We report the change in the interaction scores as the layer in the $\Delta$ Architecture column is added to the reference network, for positives, easy negatives and hard negatives, with the number of examples in parentheses. As indicated by the bold numbers, the cooperative layer significantly increases the scores of positive examples while the competitive layer suppresses hard negative examples. Together, these layers widen the gap between scores of positive and negative examples, improving the detection mAP.}
\label{tab:delta}
\setlength{\tabcolsep}{3pt}
\begin{tabularx}{\linewidth}{@{\extracolsep{\fill}} l l c c c c c c}
\toprule
& & \multicolumn{2}{c}{$\Delta$ \textbf{Positives} ($25\,391$)} & \multicolumn{2}{c}{$\Delta$ \textbf{Easy Negatives} ($3\,903\,416$)} & \multicolumn{2}{c}{$\Delta$ \textbf{Hard Negatives} ($510\,991$)}\\ [4pt]
\cline{3-4} \cline{5-6} \cline{7-8} \\ [-8pt]
\textbf{Reference} & $\Delta$ \textbf{Architecture} & Mean & Median & Mean & Median & Mean & Median \\
\midrule
Ours w/o coop. layer & + coop. layer & \textbf{+0.1487} & +0.1078 & +0.0001 & +0.0000 & +0.0071 & +0.0000 \\
Ours w/o comp. layer & + comp. layer & -0.0463 & -0.0310 & -0.0096 & -0.0024 & \textbf{-0.1080} & -0.0922 \\
Ours w/o both layers & + both layers & \textbf{+0.0799} & +0.0390 & -0.0076 & -0.0018 & \textbf{-0.0814} & -0.0748 \\
\bottomrule
\end{tabularx}
\end{table*}
\begin{figure*}[t]
\begin{subfigure}[t]{0.33\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/add_coop.png}
\caption{\cref{tab:delta} first row}
\label{fig:scatter-left}
\end{subfigure}
\begin{subfigure}[t]{0.33\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/add_comp.png}
\caption{\cref{tab:delta} second row}
\label{fig:scatter-mid}
\end{subfigure}
\begin{subfigure}[t]{0.33\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/add_both.png}
\caption{\cref{tab:delta} third row}
\label{fig:scatter-right}
\end{subfigure}
\caption{Change in the interaction score (delta) with respect to the reference score. \subref{fig:scatter-left} The distribution of score deltas when adding the cooperative layer (first row of \cref{tab:delta}). \subref{fig:scatter-mid} Adding the competitive layer to the model (second row). \subref{fig:scatter-right} Adding both layers (last row). For visualisation purposes, only $20\%$ of the negatives are sampled and displayed.
}
\label{fig:scatter}
\end{figure*}
The performance of our model is compared to existing methods on the HICO-DET~\cite{hicodet} and V-COCO~\cite{vcoco} datasets in \cref{tab:results}. There are two different settings for evaluation on HICO-DET. \textit{Default Setting}: A detected human--object pair is considered matched with a ground truth pair, if the minimum intersection over union (IoU) between the human boxes and object boxes exceeds $0.5$. Amongst all matched pairs, the one with the highest score is considered the true positive while others are false positives. Pairs without a matched ground truth are also considered false positives. \textit{Known Objects Setting}: Besides the aforementioned criteria, this setting assumes the set of object types in ground truth pairs are known. Therefore, detected pairs with an object type outside the set are removed automatically, thus reducing the difficulty of the problem. For V-COCO, the average precision (AP) is computed under two scenarios, differentiated by the superscripts $S1$ and $S2$. This is to account for missing objects due to occlusion. For scenario $1$, empty object boxes should be predicted in case of occlusion for a detected pair to be considered a match with the corresponding ground truth, while for scenario $2$, object boxes are always assumed to be matched in such cases.
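The matching criterion of the Default Setting can be sketched as follows: a detected pair matches a ground-truth pair when the smaller of the two box IoUs exceeds the threshold.

```python
def iou(b1, b2):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(b1[0], b2[0]), max(b1[1], b2[1])
    ix2, iy2 = min(b1[2], b2[2]), min(b1[3], b2[3])
    inter = max(ix2 - ix1, 0.0) * max(iy2 - iy1, 0.0)
    a1 = (b1[2] - b1[0]) * (b1[3] - b1[1])
    a2 = (b2[2] - b2[0]) * (b2[3] - b2[1])
    return inter / (a1 + a2 - inter) if inter > 0 else 0.0

def pair_matches(det, gt, thresh=0.5):
    """A detected (human_box, object_box) pair matches a ground-truth
    pair when the minimum of the human-box IoU and the object-box IoU
    exceeds the threshold."""
    return min(iou(det[0], gt[0]), iou(det[1], gt[1])) > thresh
```

Amongst all detections matching the same ground-truth pair, only the highest-scoring one is counted as a true positive; the rest, and any unmatched detections, are false positives.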
We report our model's performance for three different backbone networks. Notably, our model with the lightest-weight backbone already outperforms the next best method by a significant margin in almost every category. This gap is further widened with more powerful backbone networks. In particular, since the backbone CNN and object detection transformer are detached from the computational graph, our model has a small memory footprint. This allows us to use a higher-resolution feature map by removing the stride in the $5^{\text{th}}$ convolutional block (C5) of ResNet~\cite{resnet}, which has been shown to improve detection performance on small objects~\cite{detr}. We denote this as dilated C5 (DC5).
\subsection{Macroscopic effects of the interaction head}
\label{sec:macro}
In this section, we compare the effects of the unary (cooperative) and pairwise (competitive) layers on the HICO-DET test set, with ResNet-50~\cite{resnet} as the CNN backbone.
Since the parameters in the object detector are kept frozen for our model, the set of detections processed by the downstream network remains the same, regardless of any architectural changes in the interaction head. This allows us to compare how different variants of our model perform on the same human--object pairs. To this end, we collected the predicted interaction scores for all human--object pairs over the test set and compare how adding certain layers influences them. In \cref{tab:delta}, we show some statistics on the change of scores upon an architectural modification. In particular, note that the vast majority of collected pairs are easy negatives with scores close to zero. For analysis, we divide the negative examples into easy and hard, where we define an easy negative as one with a score lower than $0.05$ as predicted by the ``Ours w/o both layers'' model, which accounts for $90\%$ of the negative examples. In addition, we also show the distribution of the change in score with respect to the reference score as scatter plots in \cref{fig:scatter}. The points are naturally bounded by the half-spaces $0 \leq x+y \leq 1$.
Notably, adding the cooperative layer results in a significant average increase ($+0.15$) in the scores of positive examples, with little effect on the negative examples. This can be seen in \cref{fig:scatter-left} as well, where the score changes for almost all positive examples are larger than zero.
In contrast, adding the competitive layer leads to a significant average decrease ($-0.11$) in the scores of hard negative examples, albeit with a small decrease in the score of positive examples as well. This minor decrease is compensated by the cooperative layer as shown in the last row of \cref{tab:delta}. Furthermore, looking at \cref{fig:scatter-mid}, we can see a dense mass near the line $y=-x$, which indicates that many negative examples have had their scores suppressed to zero.
\begin{table}[t]\small
\caption{Effect of the cooperative and competitive layers on the HICO-DET test set under the default settings.}
\label{tab:ablation}
\setlength{\tabcolsep}{3pt}
\begin{tabularx}{\linewidth}{l C C C}
\toprule
\textbf{Model} & \textbf{Full} & \textbf{Rare} & \textbf{Non-rare} \\
\midrule
Ours w/o both layers & 29.22 & 23.09 & 31.05 \\
Ours w/o comp. layer & 30.78 & 24.92 & 32.53 \\
Ours w/o coop. layer & 30.68 & 24.69 & 32.47 \\
Ours w/o pairwise pos. enc. & 29.98 & 23.72 & 31.64 \\
\midrule
Ours ($1 \times$ coop., $1 \times$ comp.) & 31.33 & 26.02 & 32.91 \\
Ours ($1 \times$ coop., $2 \times$ comp.) & 31.62 & \textbf{26.18} & 33.24 \\
Ours ($2 \times$ coop., $1 \times$ comp.) & \textbf{31.66} & 25.94 & \textbf{33.36} \\
\bottomrule
\end{tabularx}
\end{table}
\paragraph{Ablation study:}
In \cref{tab:ablation}, we ablate the effect of different design decisions on performance. Adding the cooperative and competitive layers individually improves performance by around $1.5$~mAP, while adding both layers jointly improves it by over $2$~mAP. We also demonstrate the significance of the pairwise position encodings by removing them from the modified encoder and the multi-branch fusion module, which results in a $1.3$~mAP decrease. Finally, we observe a slight improvement ($0.3$~mAP) when adding an additional cooperative or competitive layer, but no further improvements with more layers. As the competitive layer is more costly, we use two cooperative layers.
\subsection{Microscopic effects of the interaction head}
\label{sec:micro}
\begin{figure}[t]
\centering
\includegraphics[height=0.36\linewidth]{figures/image.png} \hspace{3pt}
\includegraphics[height=0.36\linewidth]{figures/unary_attn.png}
\caption{Detected human and object instances (left) and the unary attention map for these instances (right).}
\label{fig:unary_attn}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figures/pairwise_attn.png}
\caption{Pairwise attention map for the human and object instances in \cref{fig:unary_attn}.}
\label{fig:pairwise_attn}
\end{figure}
\begin{figure*}[t]
\begin{subfigure}[t]{0.19\linewidth}
\centering
\includegraphics[height=0.68\linewidth]{figures/stand_on_snowboard_6544.png}
\caption{\textit{standing on a snowboard}}
\label{fig:standing-on-snowboard}
\end{subfigure}
\hfill%
\begin{subfigure}[t]{0.19\linewidth}
\centering
\includegraphics[height=0.68\linewidth]{figures/holding_umbrella_7243.png}
\caption{\textit{holding an umbrella}}
\label{fig:holding-umbrella}
\end{subfigure}
\hfill%
\begin{subfigure}[t]{0.19\linewidth}
\centering
\includegraphics[height=0.68\linewidth]{figures/carrying_suitcase_357.png}
\caption{\textit{carrying a suitcase}}
\label{fig:carrying-suitcase}
\end{subfigure}
\hfill%
\begin{subfigure}[t]{0.19\linewidth}
\centering
\includegraphics[height=0.68\linewidth]{figures/sitting_at_dining_table_8701.png}
\caption{\textit{sitting at a dining table}}
\label{fig:sitting-at-dinning-table}
\end{subfigure}
\hfill%
\begin{subfigure}[t]{0.19\linewidth}
\centering
\includegraphics[height=0.68\linewidth]{figures/sitting_on_bench_934.png}
\caption{\textit{sitting on a bench}}
\label{fig:sitting-on-bench}
\end{subfigure}
\begin{subfigure}[t]{0.19\linewidth}
\centering
\includegraphics[height=0.68\linewidth]{figures/flying_airplane_573.png}
\caption{\textit{flying an airplane}}
\label{fig:flying-airplane}
\end{subfigure}
\hfill%
\begin{subfigure}[t]{0.19\linewidth}
\centering
\includegraphics[height=0.68\linewidth]{figures/holding_surfboard_1681.png}
\caption{\textit{holding a surfboard}}
\label{fig:holding-surfboard}
\end{subfigure}
\hfill%
\begin{subfigure}[t]{0.19\linewidth}
\centering
\includegraphics[height=0.68\linewidth]{figures/wielding_baseball_bat_1860.png}
\caption{\textit{wielding a baseball bat}}
\label{fig:wielding-baseball-bat}
\end{subfigure}
\hfill%
\begin{subfigure}[t]{0.19\linewidth}
\centering
\includegraphics[height=0.68\linewidth]{figures/riding_bike_998.png}
\caption{\textit{riding a bike}}
\label{fig:riding-bike}
\end{subfigure}
\hfill%
\begin{subfigure}[t]{0.19\linewidth}
\centering
\includegraphics[height=0.68\linewidth]{figures/holding_wine_glasses_2661.png}
\caption{\textit{holding a wineglass}}
\label{fig:holding-wineglass}
\end{subfigure}
\caption{Qualitative results of detected HOIs. Interactive human--object pairs are connected by red lines, with the interaction scores overlaid above the human box. Pairs with scores lower than $0.2$ are filtered out.}
\label{fig:qualitative}
\end{figure*}
\begin{figure*}[t]
\centering
\begin{subfigure}[t]{0.19\linewidth}
\centering
\includegraphics[height=0.67\linewidth]{figures/driving_truck_8578.png}
\caption{\textit{driving a truck}}
\label{fig:driving-truck}
\end{subfigure} \hfill
\begin{subfigure}[t]{0.19\linewidth}
\includegraphics[height=0.67\linewidth]{figures/buying_bananas_2502.png}
\caption{\textit{buying bananas}}
\label{fig:buying-bananas}
\end{subfigure} \hfill
\begin{subfigure}[t]{0.19\linewidth}
\centering
\includegraphics[height=0.67\linewidth]{figures/repairing_laptops_607.png}
\caption{\textit{repairing a laptop}}
\label{fig:repairing-laptop}
\end{subfigure} \hfill
\begin{subfigure}[t]{0.19\linewidth}
\centering
\includegraphics[height=0.67\linewidth]{figures/washing_bicycle_4213.png}
\caption{\textit{washing a bicycle}}
\label{fig:washing-bike}
\end{subfigure} \hfill
\begin{subfigure}[t]{0.19\linewidth}
\centering
\includegraphics[height=0.67\linewidth]{figures/cutting_tie_9522.png}
\caption{\textit{cutting a tie}}
\label{fig:cutting-tie}
\end{subfigure}
\caption{Failure cases often occur when there is ambiguity in the interaction~\subref{fig:driving-truck},~\subref{fig:buying-bananas},~\subref{fig:repairing-laptop} or a lack of training data~\subref{fig:repairing-laptop},~\subref{fig:washing-bike},~\subref{fig:cutting-tie}.}
\label{fig:failure}
\end{figure*}
In this section, we focus on a specific image and visualise the effect of attention in our cooperative and competitive layers. In \cref{fig:unary_attn}, we display a detection-annotated image and its associated attention map from the unary (cooperative) layer. The human--object pairs $(1, 4)$, $(2, 5)$ and $(3, 6)$ are engaged in the interaction \textit{riding a horse}. Excluding attention weights along the diagonal, we see that the corresponding human and horse instances attend to each other.
We hypothesise that attention between pairs of unary tokens (e.g., $1$ and $4$) helps increase the interaction scores for the corresponding pairs. To validate this hypothesis, we manually set the attention logits between the three positive pairs to minus infinity, thus zeroing out the corresponding attention weights. The effect of this was an average decrease of $0.06$ ($8\%$) in the interaction scores for the three pairs, supporting the hypothesis.
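The reason that setting a logit to minus infinity removes the corresponding attention weight is immediate from the softmax normalisation. Writing $z_{ij}$ for the attention logits (a generic notation, not tied to our specific implementation), the attention weights are
$$
\alpha_{ij}=\frac{\exp(z_{ij})}{\sum_{k}\exp(z_{ik})}\:,
$$
so $z_{ij}=-\infty$ forces $\alpha_{ij}=0$, while the remaining weights in row $i$ are renormalised over the unmasked entries.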
In \cref{fig:pairwise_attn}, we visualise the attention map of the pairwise (competitive) layer. Notably, all human--object pairs attend to the interactive pairs $(1, 4)$, $(2, 5)$ and $(3, 6)$ in decreasing order, except for the interactive pairs themselves. We hypothesise that attention is acting here to have the dominant pairs suppress the other pairs. To investigate, we manually set the weights such that the three interactive pairs all attend to $(1, 4)$ as well, with a weight of $1$. This resulted in a decrease of their interaction scores by $0.08$ ($11\%$). We then instead zeroed out the attention weights between the rest of the pairs and ($1, 4$), which resulted in a small increase in the scores of negative pairs. These results together suggest that attention in the competitive layer is acting as a soft version of non-maximum suppression, where pairs less likely to foster interactions attend to, and are suppressed by, the most dominant pairs. See~\cref{app:qual} for more examples.
\subsection{Qualitative results and limitations}
\vspace{-8pt}
In \cref{fig:qualitative}, we present several qualitative examples of successful HOI detections, where our model accurately localises the human and object instances and assigns high scores to the interactive pairs. For example, in \cref{fig:holding-umbrella}, our model correctly identifies the subject of an interaction (the lady in red) despite her proximity to a non-interactive human (the lady in black). We also observe in \cref{fig:standing-on-snowboard} that our model becomes less confident when there is overlap and occlusion. This stems from the use of object detection scores in our model. Confusion in the object detector often translates to confusion in action classification.
We also show five representative failure cases for our model, illustrating its limitations. In \cref{fig:driving-truck}, due to the indefinite position of drivers in the training set (and in real life), the model struggles to identify the driver. In \cref{fig:washing-bike}, the model fails to recognise the interaction due to a lack of training data (a single training example), even though the action is well-defined. Overall, ambiguity in the actions and insufficient data are the biggest challenges for our model.
Another limitation, specific to our model, is that the computation and memory requirements of our pairwise layer scale quadratically with the number of unary tokens. For scenes involving many interactive humans and objects, this becomes quite costly.
Moreover, since the datasets we used are limited, we may expect poorer performance on data in the wild, where image resolution, lighting conditions, etc.\ are less controlled.
\section{Conclusion}
In this paper, we have proposed a two-stage detector of human--object interactions using a novel transformer architecture that exploits both unary and pairwise representations of the human and object instances. Our model not only outperforms the current state-of-the-art---a one-stage detector---but also consumes much less time and memory to train. Through extensive analysis, we demonstrate that attention between unary tokens acts to increase the scores of positive examples, while attention between pairwise tokens acts like non-maximum suppression, reducing the scores of negative examples. We show that these two effects are complementary, and together boost performance significantly.
\vspace{-10pt}
\paragraph{Potential negative societal impact:}
Transformer models are large and computationally expensive, and so have a significant negative environmental impact. To mitigate this, we use pre-trained models and a two-stage architecture, since fine-tuning an existing model requires fewer resources, as does training a single stage with the other stage fixed. There is also the potential for HOI detection models to be misused, such as for unauthorised surveillance, which disproportionately affects minority and marginalised communities.
\vspace{-10pt}
\paragraph{Acknowledgments:}
We are grateful for support from Continental AG (D.C.). We would also like to thank Jia-Bin Huang and Yuliang Zou for their help with the reproduction of some experiment results.
\clearpage
{\small
\bibliographystyle{ieee_fullname}
While studying an invariant of $\mathrm{II_1}$-factors related to Connes'
embedding conjecture, Brown~\cite{Br} found that there is a natural way of
defining convex combinations on this invariant. However, there seemed to be no
evident embedding of this set into some linear space such that the convex
combinations are precisely those inherited from the vector space structure.
Searching for an axiomatization of those metric spaces where it makes sense to
talk about convex combinations without having any linear structure, he proposed
the notion of a convex-like structure. The obvious examples of convex-like
structures are closed convex subsets of Banach spaces. The very basic question
is whether any convex-like structure is of this form. Besides being interesting
in itself, this question has also a technical reason: there are many properties
of convex combinations which are trivial to verify in vector spaces, but are
hard to prove in the context of convex-like structures. Here we give a
positive answer to this problem.
Actually, we obtain this result as a consequence of a more general
one: four of Brown's five axioms, namely the algebraic ones, are equivalent
to certain well-known axioms of abstract convexity. These were introduced by
Stone~\cite{St} and have since been discussed and sometimes rediscovered,
modulo minor variations, several times~\cite{Fr,Gu,Mo,PR,Se}, using various
terminology; here, we shall follow the notation and terminology of~\cite{Fr}.
\section{Convex-like structures and convex spaces}
In order to be precise, and also for the convenience of the
reader, we recall the definitions and the Stone embedding theorem
which we are going to use. The following two definitions are both
abstractions of the properties of convex combinations in vector
spaces.
\begin{defin}[{\cite{Br}}]\label{convex-like}
Let $(X,d)$ be a complete metric space. Take $X^{(n)} = X\times
\cdots \times X$ to be the $n$-fold Cartesian product and $\mathrm{Prob}_n$
the set of probability measures on the $n$-element set
$\{1,2,\ldots,n\}$ endowed with the $\ell_1$-metric $\| \mu -
\tilde{\mu} \| = \sum_{i=1}^n |\mu(i) - \tilde{\mu}(i) |$. We say
that $(X,d)$ has a \emph{convex-like structure} if for every $n \in
\mathbb N$ and $\mu \in \mathrm{Prob}_n$ there is given a continuous map
$\gamma_\mu \colon X^{(n)} \to X$ such that
\begin{enumerate}[label={\normalfont{($\gamma.\arabic*$)}}]
\item \label{abelian} for each permutation $\sigma \in S_n$ and
$x_1,\ldots, x_n \in X$, $$\gamma_\mu(x_1,\ldots,x_n) =
\gamma_{\mu\circ\sigma} (x_{\sigma(1)},\ldots, x_{\sigma(n)});$$
\item \label{linearity} if $x_1 = x_2$, then $\gamma_\mu(x_1,x_2,
\ldots,x_n) = \gamma_{\tilde{\mu}} (x_1, x_3, \ldots, x_n)$, where
$\tilde{\mu} \in \mathrm{Prob}_{n-1}$ is given by $\tilde{\mu} (1) = \mu(1) +
\mu(2)$ and $\tilde{\mu} (j) = \mu(j+1)$ for $2 \leq j \leq n-1$;
\item\label{dirac} if $\mu(i) = 1$, then $\gamma_\mu(x_1,\ldots,x_n) = x_i$;
\item\label{metric} (metric compatibility\footnote{Brown's original metric compatibility
axiom actually consisted of two conditions; see remark~\ref{finalrem}.}) for all $x_1,\ldots,x_n,y_1,\ldots,y_n \in X$,
$$d (\gamma_{\mu} (x_1,\ldots,x_n), \gamma_{\mu} (y_1,\ldots,y_n))
\leq \sum_{i = 1}^n \mu(i) d(x_i, y_i);$$
\item\label{algebraic} for all $\nu \in \mathrm{Prob}_2$, $\mu \in \mathrm{Prob}_n$,
$\tilde{\mu} \in \mathrm{Prob}_m$ and $x_1,\ldots, x_n, \tilde{x}_1,\ldots,
\tilde{x}_m \in X$, $$\gamma_{\nu}(\gamma_\mu(x_1,\ldots,x_n),
\gamma_{\tilde{\mu}}( \tilde{x}_1,\ldots, \tilde{x}_m) ) =
\gamma_{\eta} ( x_1,\ldots,x_n, \tilde{x}_1,\ldots, \tilde{x}_m),$$
where $\eta \in \mathrm{Prob}_{n+m}$ is given by $\eta(i) = \nu(1)\mu(i)$, if
$1\leq i \leq n$, and $\eta(j + n) = \nu(2) \tilde{\mu}(j)$, if
$1\leq j \leq m$.
\end{enumerate}
\end{defin}
The idea behind this definition is that the $n$-ary operation $\gamma_\mu$ is supposed to stand for a convex combination with weights given by the coefficients of $\mu$:
\begin{equation}
\label{gammadef}
\gamma_\mu(x_1,\ldots,x_n) \:\widehat{=}\: \sum_{i=1}^n\mu(i)x_i\:.
\end{equation}
With this intuition, it is clear why one wants the
properties~\ref{abelian} through~\ref{algebraic} to hold.
\begin{defin}[{\cite{Fr}}]\label{convex}
A convex space is given by a set $X$ and a family of binary
operations $\{cc_\lambda\}_{\lambda\in[0,1]}$ on $X$ such that
\begin{enumerate}[label={\normalfont{(cs.\arabic*)}}]
\item\label{unitlaw} $cc_0(x,y)=y \quad \forall x,y\in X$,
\item\label{idempotency} $cc_\lambda(x,x)=x \quad \forall x\in X,\:\lambda\in[0,1]$,
\item\label{commutativity} $cc_\lambda(x,y)=cc_{1-\lambda}(y,x) \quad \forall x,y\in X,\:\lambda\in[0,1]$,
\item\label{associativity} $cc_\lambda(cc_\mu(x,y),z)=cc_{\lambda\mu}(x,cc_\nu(y,z)) \quad \forall x,y,z\in X,\:\lambda,\mu\in[0,1]$,
where $\nu$ is arbitrary if $\lambda=\mu=1$ and $\nu=\frac{\lambda(1-\mu)}{1-\lambda\mu}$ otherwise.
\end{enumerate}
\end{defin}
Now an interpretation analogous to~(\ref{gammadef}) holds: the
$cc_\lambda$ simply model binary convex combinations with weight
$\lambda$:
\begin{equation}
\label{ccdef}
cc_\lambda(x,y) \:\widehat{=}\: \lambda x+(1-\lambda)y\:.
\end{equation}
Again, properties~\ref{unitlaw} through~\ref{associativity} clearly
hold for convex combinations in vector spaces.
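As a consistency check on the weight $\nu$ appearing in~\ref{associativity}, one can expand both sides of that axiom in a vector space using the interpretation~(\ref{ccdef}):
$$
cc_\lambda(cc_\mu(x,y),z)\:\widehat{=}\:\lambda\mu x+\lambda(1-\mu)y+(1-\lambda)z\:,\qquad
cc_{\lambda\mu}(x,cc_\nu(y,z))\:\widehat{=}\:\lambda\mu x+(1-\lambda\mu)\nu y+(1-\lambda\mu)(1-\nu)z\:.
$$
Comparing the coefficients of $y$ yields $(1-\lambda\mu)\nu=\lambda(1-\mu)$, which is the stated formula for $\nu$; the coefficients of $z$ then agree automatically, since the weights on each side sum to $1$.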
Our first result follows now. It states that convex-like structures differ from
convex spaces just by the metric compatibility axiom~\ref{metric}. The following
equation~(\ref{gammacc}) is motivated by the correspondences~(\ref{gammadef})
and~(\ref{ccdef}).
A piece of notation: when $\mu(1)=\lambda\in[0,1]$ and $\mu(2)=1-\lambda$ are the parameters of a distribution $\mu\in \mathrm{Prob}_2$, then we also write $\gamma_{\lambda,1-\lambda}$ instead of $\gamma_\mu$.
\begin{teo}\label{equivalence}
The algebraic axioms~\ref{abelian},~\ref{linearity},~\ref{dirac} and~\ref{algebraic} of Definition
\ref{convex-like} are equivalent to the axioms of a convex space in
Definition \ref{convex}. More precisely: for a given set $X$, a family of operations $\gamma_\mu$ satisfying these axioms and the structure of a convex space on $X$ mutually determine each other by the identity
\begin{equation}
\label{gammacc}
cc_{\lambda}(x,y)=\gamma_{\lambda,1-\lambda}(x,y)\:.
\end{equation}
\begin{proof}
Let us start proving that Brown's axioms~\ref{abelian},~\ref{linearity},~\ref{dirac} and~\ref{algebraic} for a convex-like structure imply
the axioms of convex spaces when the $cc_{\lambda}$ are defined as in~(\ref{gammacc}).
\begin{enumerate}
\item[\ref{unitlaw}] We have $cc_0(x,y)=\gamma_{0,1}(x,y)=y$ thanks to Brown's axiom~\ref{dirac}.
\item[\ref{idempotency}] We have $cc_\lambda(x,x)=\gamma_{\lambda,1-\lambda}(x,x)=\gamma_{\tilde\mu}(x)=x$, where $\tilde\mu\in\mathrm{Prob}_1$, thanks to Brown's axioms~\ref{linearity} and~\ref{dirac}.
\item[\ref{commutativity}] We have
$$
cc_\lambda(x,y)=\gamma_{\lambda,1-\lambda}(x,y)=\gamma_{1-\lambda,\lambda}(y,x)=cc_{1-\lambda}(y,x)
$$
thanks to Brown's axiom~\ref{abelian}.
\item[\ref{associativity}] This is implied by the previous axioms when $\lambda=\mu=1$, so it is enough to treat the case $\lambda\mu\neq1$. We will evaluate $cc_\lambda(cc_\mu(x,y),z)$ and $cc_{\lambda\mu}(x,cc_{\frac{\lambda(1-\mu)}{1-\lambda\mu}}(y,z))$ separately and obtain two identical expressions. Using axiom~\ref{algebraic}, we have
$$
cc_\lambda(cc_\mu(x,y),z)=\gamma_\eta(x,y,z)
$$
where $\eta(1)=\lambda\mu, \eta(2)=\lambda(1-\mu)$ and
$\eta(3)=1-\lambda$. On the other hand, the same~\ref{algebraic} also implies
$$
cc_{\lambda\mu}(x,cc_{\frac{\lambda(1-\mu)}{1-\lambda\mu}}(y,z))=\gamma_\eta(x,y,z)
$$
with the same distribution $\eta\in \mathrm{Prob}_3$.
\end{enumerate}
We now proceed to the proof of the converse implication. Given a
family of binary operations $cc_{\lambda}$ which satisfy the axioms
of convex spaces, we first define $\gamma_{\lambda,1-\lambda}$ according to equation~(\ref{gammacc}). Given this, it then has to be shown that there exist unique choices for the $\gamma_\eta$ with $\eta\in\mathrm{Prob}_n$ for all $n\in\mathbb{N}$ such that~\ref{abelian},~\ref{linearity},~\ref{dirac} and \ref{algebraic} hold. Since $\gamma_{\iota}=\mathrm{id}_X$ for $\iota\in\mathrm{Prob}_1$, and, for $n\geq 3$, any $\eta\in\mathrm{Prob}_n$ can appear on the right-hand side of~\ref{algebraic}, we can already conclude the uniqueness: it is enough to specify the $\gamma_\eta$ with $\eta\in\mathrm{Prob}_n$ for $n=2$.
We still need to show the existence part. To this end, we first define $\gamma_\mu$ for $\mu\in \mathrm{Prob}_n$ recursively by setting
$$
\gamma_\mu(x_1,\ldots,x_n)\equiv \left\{\begin{array}{cc} x_n & \textrm{if }\mu(n)=1\\ cc_{1-\mu(n)}\left(\gamma_\nu(x_1,\ldots,x_{n-1}),x_n\right) & \textrm{if }\mu(n)\neq 1\end{array}\right.
$$
where $\nu\in\mathrm{Prob}_{n-1}$ is given by
$\nu(i)=\frac{\mu(i)}{1-\mu(n)}$. So one obtains all $n$-ary operations by
repeated application of the binary ones.
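For a concrete instance of this recursion, take $\mu=(\tfrac{1}{2},\tfrac{1}{3},\tfrac{1}{6})\in\mathrm{Prob}_3$. Then $\nu=(\tfrac{3}{5},\tfrac{2}{5})$ and
$$
\gamma_\mu(x_1,x_2,x_3)=cc_{5/6}\left(cc_{3/5}(x_1,x_2),x_3\right)\:,
$$
which, under the interpretations~(\ref{gammadef}) and~(\ref{ccdef}), stands for $\tfrac{1}{2}x_1+\tfrac{1}{3}x_2+\tfrac{1}{6}x_3$, as it should.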
Due to $cc_{\lambda}(x,y)=cc_{1-\lambda}(y,x)$, this definition
respects the permutation invariance~\ref{abelian} when the permutation does
nothing but exchange $x_1$ with $x_2$. For any $n\geq 3$, the definition can be expanded to
$$
\gamma_\mu(x_1,\ldots,x_n)=\left\{\begin{array}{cc} x_n & \textrm{if }\mu(n)=1\\ x_{n-1} & \textrm{if }\mu(n-1)=1\\ cc_{1-\mu(n)}\left(cc_{1-\frac{\mu(n-1)}{1-\mu(n)}}\left(\gamma_\eta(x_1,\ldots,x_{n-2}),x_{n-1}\right),x_n\right) & \textrm{otherwise} \end{array}\right.
$$
with $\eta\in\mathrm{Prob}_{n-2}$ given by
$\eta(i)=\frac{\nu(i)}{1-\nu(n-1)}=\frac{\mu(i)}{1-\mu(n-1)-\mu(n)}$.
Writing $y=\gamma_\eta(x_1,\ldots,x_{n-2})$, the associativity rule~\ref{associativity}
gives
\begin{align*}
\gamma_\mu(x_1,\ldots,x_n)&=cc_{1-\mu(n)}\left(cc_{1-\frac{\mu(n-1)}{1-\mu(n)}}(y,x_{n-1}),x_n\right) \\
&=cc_{1-\mu(n-1)-\mu(n)}\left(y,cc_{\frac{\mu(n-1)}{\mu(n-1)+\mu(n)}}(x_{n-1},x_n)\right)
\end{align*}
and hence~\ref{commutativity} implies the permutation invariance~\ref{abelian} also for $\gamma_\mu$ when
exchanging $x_{n-1}$ with $x_n$ while keeping all other arguments fixed. By
the recursive definition of $\gamma_\mu$, this argument also proves
invariance under transposing $x_{k-1}$ with $x_k$ for any $k<n$.
Hence now we know that~\ref{abelian} holds with respect to all transpositions
of neighboring arguments. But since the latter generate all permutations,~\ref{abelian} holds in complete generality.
With this, Brown's~\ref{linearity} and~\ref{dirac} are straightforward to prove: by~\ref{abelian},
the property~\ref{linearity} is equivalent to the analogous one with
$x_{n-1}=x_n$ instead of $x_1=x_2$. The latter follows from the
previous considerations together with the axiom $cc_\lambda(x,x)=x$.
The statement~\ref{dirac}, for $i=1$, follows directly from the definition
of $\gamma_\mu$ together with $cc_1(x,y)=x$; the general case then follows via~\ref{abelian}.
Finally, we prove~\ref{algebraic} by induction on $m$. For $m=1$, this equation
coincides with our definition of its right-hand side. For $m\geq 2$, we can assume $\tilde{\mu}(m)\neq 1$ by appealing to~\ref{abelian}. Then the left-hand side of~\ref{algebraic} can be written as
$$
cc_{\nu(1)}\left(\gamma_\mu(x_1,\ldots,x_n),cc_{1-\tilde{\mu}(m)}\left(\gamma_{\mu'}(\tilde{x}_1,\ldots,\tilde{x}_{m-1}),\tilde{x}_m\right)\right)
$$
where $\mu'(i)=\frac{\tilde{\mu}(i)}{1-\tilde{\mu}(m)}$. An
application of the associativity rule~\ref{associativity} evaluates this to
$$
cc_{1-\tilde{\mu}(m)\nu(2)}\left(cc_{\frac{\nu(1)}{1-\tilde{\mu}(m)\nu(2)}}\left(\gamma_{\mu}(x_1,\ldots,x_n),\gamma_{\mu'}(\tilde{x}_1,\ldots,\tilde{x}_{m-1})\right),\tilde{x}_m\right)
$$
Now by the induction assumption, this can be written as
$$
cc_{1-\tilde{\mu}(m)\nu(2)}\left(\gamma_\delta(x_1,\ldots,x_n,\tilde{x}_1,\ldots,\tilde{x}_{m-1}),\tilde{x}_m\right)
$$
where $\delta$ is the distribution with
$\delta(i)=\frac{\nu(1)}{1-\tilde{\mu}(m)\nu(2)}\mu(i)$ for $1\leq
i\leq n$ and
$\delta(i+n)=\frac{\nu(2)}{1-\tilde{\mu}(m)\nu(2)}\tilde{\mu}(i)$.
This equation is the definition of the right-hand side of~\ref{algebraic}.
\end{proof}
\end{teo}
\section{Embeddings into vector spaces}
The following theorem and proof have been adapted from~\cite{St}.
\begin{teo}[{\cite{St}}]\label{stone}
A convex space embeds into a real vector space, with the operations given by~(\ref{ccdef}), if and only if the following
cancellation property holds:
$$
cc_\lambda(x,y)=cc_\lambda(x,z)\:\textrm{ with }\:\lambda\in(0,1)\quad\Longrightarrow\quad y=z\:.
$$
\end{teo}
\begin{proof}
It is clear that every convex subset of a vector space satisfies this cancellation property, so that it remains to prove the ``if'' direction.
Given a convex space $X$ with the cancellation property, we define a
real vector space as follows: let $V_X$ be the real vector space
formally generated by all points of $X$, so that $V_X$ has a basis
$(e_x)_{x\in X}$. The vectors of the form \begin{equation} \label{quotientsub}
e_{cc_\lambda(x,y)}-\lambda e_x-(1-\lambda)e_y\:,\qquad x,y\in
X,\:\lambda\in[0,1]\:, \end{equation} generate a subspace $U_X\subseteq V_X$.
Let $W_X$ be the quotient space $V_X/U_X$ and let $\tilde{e}_x$
denote the image of $e_x$ under the canonical projection. Then the
mapping
$$
X\rightarrow W_X,\qquad x\mapsto\tilde{e}_x
$$
preserves convex combinations.
In order to see that this mapping is injective, it is first necessary
to take a closer look at the subspace $U_X$. The vectors in $U_X$
are all the finite linear combinations of vectors of the
form~(\ref{quotientsub}). Taking the coefficients $\alpha_i$ and
$\beta_i$ to be non-negative, we can write such a linear combination
as
$$
\sum_{i=1}^m\alpha_i\left(e_{cc_{\lambda_i}(a_i,b_i)}-\lambda_i e_{a_i}-(1-\lambda_i)e_{b_i}\right)-\sum_{i=1}^m\beta_i\left(e_{cc_{\mu_i}(c_i,d_i)}-\mu_i e_{c_i}-(1-\mu_i)e_{d_i}\right)
$$
for certain points $a_i,b_i,c_i,d_i\in X$ and weights $\lambda_i,\mu_i\in[0,1]$. We split this into positive terms and negative terms as follows:
\begin{equation}
\label{uxvectors}
\sum_{i=1}^m\left(\alpha_i e_{cc_{\lambda_i}(a_i,b_i)}+\beta_i\mu_i e_{c_i}+\beta_i(1-\mu_i)e_{d_i}\right)-\sum_{i=1}^m\left(\beta_i e_{cc_{\mu_i}(c_i,d_i)}+\alpha_i\lambda_i e_{a_i}+\alpha_i(1-\lambda_i)e_{b_i}\right)
\end{equation}
This expression has two important properties: firstly, the sum of the coefficients of all negative terms equals the sum of the coefficients of all positive terms, namely $\sum_i(\alpha_i+\beta_i)$. If we assume this sum to be $1$ without loss of generality, then, secondly, both sums are just convex combinations. Interpreting these as convex combinations in $X$, these sums moreover define the same point in $X$.
We now prove the required injectivity property by showing that $\tilde{e}_x=\tilde{e}_y$ implies $x=y$ for any two points $x,y\in X$. The equation $\tilde{e}_x=\tilde{e}_y$ holds whenever $e_x-e_y$ lies in $U_X$. If this is the case, then there exists an expression of the form~(\ref{uxvectors}) where the first sum contains the term $\kappa e_x$ for some $\kappa>0$ and the second sum contains the term $\kappa e_y$ for the same $\kappa$, while all other terms cancel. Then by the above, the two sums in~(\ref{uxvectors}) define convex combinations of the same points with the same weights, except that the first one contains the point $x$ with weight $\kappa$, while the second one contains the point $y$ with weight $\kappa$. If one combines all the other points besides these $x$ and $y$ to a single point $z$ which carries a weight $1-\kappa$, one ends up with the equation
$$
cc_\kappa(x,z)=cc_\kappa(y,z)\:,
$$
which implies $x=y$ by the cancellation condition.
\end{proof}
The similarity to the Grothendieck construction which embeds a cancellative abelian monoid into an abelian group should be clear. Just like the latter proceeds by constructing a left adjoint to the inclusion functor of the category of abelian groups into the category of abelian monoids, Stone's embedding theorem implicitly constructs a left adjoint to the inclusion functor of the category of real vector spaces into the category of convex spaces.
We will soon prove that the metric compatibility axiom~\ref{metric} guarantees that the cancellation condition holds in a convex-like structure. This requires a bit of preparation:
\begin{lem}
If the equation
$$
cc_{\lambda}(y,x)=cc_{\lambda}(z,x)
$$
holds for some $x,y,z\in X$ and $\lambda\in(0,1)$, then it also holds for all $\lambda\in(0,1)$.
\end{lem}
\begin{proof}
Let us write $\lambda_0$ for the original value for which the equation holds. Then for all $\lambda<\lambda_0$,
$$
cc_{\lambda}(y,x)=cc_{\lambda/\lambda_0}(cc_{\lambda_0}(y,x),x)=cc_{\lambda/\lambda_0}(cc_{\lambda_0}(z,x),x)=cc_{\lambda}(z,x)
$$
by~\ref{associativity} and~\ref{idempotency}, so that the equation is also true in that case. Hence it is enough to find a sequence $\left(\lambda_n\right)_{n\in\mathbb{N}}$ with $\lambda_n\stackrel{n\rightarrow\infty}{\longrightarrow}1$ for which the equation holds. We construct such a sequence by starting from the given $\lambda_0$ and defining $\lambda_{n+1}=\frac{2\lambda_n}{1+\lambda_n}$; it indeed converges to $1$, since $1-\lambda_{n+1}=\frac{1-\lambda_n}{1+\lambda_n}\leq\frac{1-\lambda_n}{1+\lambda_0}$. An inductive argument shows the validity of the equation:
\begin{align*}
cc_{\lambda_{n+1}}(y,x)&=cc_{\lambda_n/(1+\lambda_n)}(y,cc_{\lambda_n}(y,x))=cc_{\lambda_n/(1+\lambda_n)}(y,cc_{\lambda_n}(z,x))\\[.3cm]
&=cc_{\lambda_n/(1+\lambda_n)}(z,cc_{\lambda_n}(y,x))=cc_{\lambda_n/(1+\lambda_n)}(z,cc_{\lambda_n}(z,x))=cc_{\lambda_{n+1}}(z,x)\:.
\end{align*}
\end{proof}
\begin{cor}\label{embedding}
Let $X$ be a convex-like structure. Then there is a linear embedding of $X$
into some vector space.
\begin{proof}
By Theorem \ref{equivalence} and Theorem \ref{stone} it suffices to
prove the cancellation property: if
$\gamma_{\lambda,1-\lambda}(x,y)=\gamma_{\lambda,1-\lambda}(x,z)$ for some $\lambda\in (0,1)$, then $y=z$. By the previous lemma, we know that if $\gamma_{\lambda,1-\lambda}(x,y)=\gamma_{\lambda,1-\lambda}(x,z)$ holds for some $\lambda\in(0,1)$, then it holds for all $\lambda\in(0,1)$. But then, we get from~\ref{metric}, for any $\lambda>0$,
$$
d(y,z)\leq d(y,\gamma_{\lambda,1-\lambda}(x,y))+d(z,\gamma_{\lambda,1-\lambda}(x,z))\leq \lambda d(x,y)+\lambda d(x,z)=\lambda\left[d(x,y)+d(x,z)\right]
$$
Since $\lambda$ was arbitrary, we conclude $d(y,z)=0$, and hence $y=z$.
\end{proof}
\end{cor}
\begin{rem}
The proof of Corollary \ref{embedding} indeed strongly depends on
Brown's axiom~\ref{metric}: in \cite{Fr} there are examples of
convex spaces which do not embed into a vector space.
\end{rem}
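For illustration, here is a simple two-point instance of this phenomenon, in the spirit of the examples in~\cite{Fr}: on $X=\{a,b\}$, let $cc_0$ and $cc_1$ be the projections required by~\ref{unitlaw} and~\ref{commutativity}, and for $\lambda\in(0,1)$ set $cc_\lambda(a,a)=a$ and $cc_\lambda(x,y)=b$ whenever $b\in\{x,y\}$. The axioms~\ref{unitlaw}--\ref{associativity} are readily checked, but $cc_{1/2}(b,a)=b=cc_{1/2}(b,b)$ although $a\neq b$, so the cancellation property of Theorem~\ref{stone} fails and this convex space admits no embedding into a vector space.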
\section{Isometric embeddings into normed spaces}
\begin{lem}
\label{metricnorm}
Let $(X,d)$ be a metric space which is a convex subset $X\subseteq E$ of some vector space $E$ such that
\begin{equation}
\label{metricconv}
d(\lambda y+(1-\lambda)x,\lambda z+(1-\lambda)x)\leq \lambda d(y,z)\quad\forall x,y,z\in X,\:\lambda\in[0,1]
\end{equation}
holds. Then there is a norm $||\cdot||$ on $E$ such that for
all $x,y\in X$,
$$
d(x,y)=||x-y||\:.
$$
\end{lem}
\begin{proof}
As a special case,~(\ref{metricconv}) gives for $z=x$,
$$
d(\lambda y+(1-\lambda)x,x)\leq \lambda d(y,x)
$$
which yields, in combination with the triangle inequality,
$$
d(y,x)\leq d(y,\lambda y+(1-\lambda)x)+d(\lambda y+(1-\lambda)x,x)\leq (1-\lambda) d(y,x)+\lambda d(y,x)\:.
$$
Since the term on the left-hand side equals the term on the right-hand side, we deduce that both inequalities are actually equalities. In particular, the metric is ``uniform on lines'' in the sense that
$$
d(x,(1-\lambda)x+\lambda y)=\lambda d(x,y)\quad\forall x,y\in X,\:\lambda\in [0,1]\:.
$$
Now in order to prove the assertion, it needs to be shown that $d$
is translation-invariant in the following sense: suppose that
$x_0,x_1,y_0,y_1\in X$
are such that
$$
y_1-x_1=y_0-x_0\:,
$$
then $d(x_1,y_1)=d(x_0,y_0)$. See figure~\ref{parallelogram} for an illustration. For $\varepsilon\in(0,1)$, we will
also consider the points
$$
x_{\varepsilon}=\varepsilon x_1+(1-\varepsilon)x_0\:,\qquad y_{\varepsilon}=\varepsilon y_1+(1-\varepsilon)y_0\:,\qquad z_\varepsilon=(1-\varepsilon)x_\varepsilon + \varepsilon y_\varepsilon=\varepsilon y_1+(1-\varepsilon)x_0\:.
$$
Then by the assumption~(\ref{metricconv}),
$$
d(x_{\varepsilon},z_\varepsilon)=d\left(\varepsilon x_1+(1-\varepsilon)x_0,\varepsilon y_1+(1-\varepsilon)x_0\right)\leq \varepsilon d(x_1,y_1)\:.
$$
By the definition of $z_\varepsilon$ and the uniformity of $d$ on the line connecting $z_\varepsilon$ with $x_\varepsilon$ and $y_\varepsilon$, we have
$$
d(x_\varepsilon,y_\varepsilon)=\varepsilon^{-1}d(x_\varepsilon,z_\varepsilon)\leq d(x_1,y_1)\:.
$$
Upon taking the limit $\varepsilon\rightarrow 0$ we therefore arrive at
$$
d(x_0,y_0)\leq d(x_1,y_1)\:,
$$
and the other inequality direction is then clear by symmetry, so that $d$ is indeed translation invariant.
Now $d$ can be uniquely extended to a translation-invariant metric on the affine hull of $X$. Assuming $0\in X$ without loss of generality, this affine hull equals the linear hull, and then the translation-invariant metric on $\mathrm{lin}(X)$ comes from a norm. If necessary, this norm can be extended from the subspace $\mathrm{lin}(X)$ to all of $E$.
\end{proof}
\begin{figure}
\begin{centering}
\begin{tikzpicture}
\draw (0,0)node[anchor=north east]{$x_0$}--(5,1)node[anchor=north west]{$x_1$}--(5,4)node[anchor=south west]{$y_1$}--(0,3)node[anchor=south east]{$y_0$}--(0,0)--(1,.2)node[anchor=north]{$x_\varepsilon$}--(1,3.2)node[anchor=south]{$y_\varepsilon$};
\draw (0,0)--(5,4);
\draw (1,.8)node[anchor=south east]{$z_\varepsilon$};
\fill (0,0) circle (.05);
\fill (5,1) circle (.05);
\fill (5,4) circle (.05);
\fill (0,3) circle (.05);
\fill (1,.2) circle (.05);
\fill (1,3.2) circle (.05);
\fill (1,.8) circle (.05);
\end{tikzpicture}
\end{centering}
\caption{Illustration of the proof of lemma~\ref{metricnorm}.}
\label{parallelogram}
\end{figure}
Now we have assembled all the ingredients for our main theorem:
\begin{teo}\label{banach}
Every convex-like structure is affinely and isometrically isomorphic
to a closed convex subset of a Banach space.
\end{teo}
\begin{proof}
Since the inequality~(\ref{metricconv}) is an instance of the metric compatibility axiom~\ref{metric}, this is a direct consequence of corollary~\ref{embedding}, lemma~\ref{metricnorm}, and the fact that every normed space embeds isometrically into its completion, which is a Banach space. Closedness then follows since a convex-like structure is assumed to be complete.
\end{proof}
\begin{rem}
We have not used the completeness of $X$ in the derivation of
corollary~\ref{embedding} or lemma~\ref{metricnorm}. So if we removed this
hypothesis from the axioms, we would get that (not necessarily complete) convex-like structures are
precisely the convex subsets of normed spaces.
\end{rem}
\begin{rem}
\label{finalrem}
Given the axioms in Definition \ref{convex-like}, Brown's original first metric compatibility condition
\begin{quote}
``\: There is a constant $C$ such that for all $x_1,\ldots, x_n \in X$,
$$d (\gamma_{\mu} (x_1,\ldots,x_n), \gamma_{\tilde{\mu}}
(x_1,\ldots,x_n)) \leq C \sum_{i=1}^n| \mu(i) - \tilde{\mu}(i)|\:,\textrm{''}
$$
\end{quote}
holds if and only if $X$ is bounded (as a metric space).
\end{rem}
\begin{proof}
By Theorem \ref{banach}, we can take $X$ to be a closed convex subset of a Banach space, with the metric $d$ induced by the norm. Brown's condition then just states that
\begin{equation}
\label{firstmetricaxiom}
\left\|\sum_{i=1}^n(\mu(i)-\tilde{\mu}(i))\, x_i\right\|\leq C\sum_{i=1}^n| \mu(i) - \tilde{\mu}(i)|\:.
\end{equation}
If $X$ is bounded, then we can set $C=\sup_{x\in X}\|x\|$, and the inequality holds by the triangle inequality. Conversely, we can use~(\ref{firstmetricaxiom}) to deduce the boundedness of $X$: taking $n=2$ and $\mu(1)=\tilde{\mu}(2)=1$ gives
$$
d(x_1,x_2)=\|x_1-x_2\|\leq 2C\:.
$$
\end{proof}
The following corollary is a reformulation of our previous results.
It provides a simple way to axiomatize (closed) convex subsets of Banach
spaces.
\begin{cor}
Let $(X,\{cc_\lambda\},d)$ be a convex space in the sense of Definition
\ref{convex} together with a (complete) metric $d$. It is a (closed)
convex subset of a Banach space if and only if it satisfies the inequality
$$
d(cc_\lambda(y,x),cc_\lambda(z,x)) \leq \lambda d(y,z) \quad\forall x,y,z\in X\:,\lambda\in[0,1]\:.
$$
\end{cor}
\section*{Acknowledgments}
We are grateful to our anonymous reviewers for their valuable comments, and to \textit{oo7}'s~\cite{wang2018oo7} author Ivan Gotovchits and \textit{SpecFuzz}'s~\cite{oleksenko2019specfuzz} author Oleksii Oleksenko for helping us run the \textit{oo7} and \textit{SpecFuzz} tools accurately. This work is supported by the National Science Foundation, under grant CNS-1814406, and in part by Intel Corporation.
\bibliographystyle{IEEEtran}
\section{FastSpec: Fast Gadget Detection Using BERT}\label{sec:NLP_classifier}
\begin{figure}[t]
\centering
\includegraphics[width=0.47\textwidth]{figures/embedding_figure_withLABELS_withoutCIRCLES.PNG}
\caption{3-D visualization for the distribution of instructions and registers after t-SNE is applied to embedding vectors. Similar instructions and registers have the same colors. The unrelated instructions are separated from each other in the three-dimensional space after the pre-training.}
\label{fig:bert_embedding}
\end{figure}
In an assembly function representation model, the main challenge is to obtain the representation vectors, namely embedding vectors, for each token in a function. Since skip-gram and RNN-based training models are surpassed by attention-only models in sentence classification tasks, we introduce FastSpec, which applies a lightweight version of BERT.
\subsection{Training Procedures}
We adopt the same training procedures as BERT on assembly functions, namely, \textit{pre-training} and \textit{fine-tuning}.
\subsubsection{Pre-training}
The first procedure is \textit{pre-training}, which includes two unsupervised tasks. The first task follows a similar approach to MaskGAN by masking a portion of the tokens in an assembly function. The mask positions are selected from 15\% of the training sequence; each selected position is replaced with the \texttt{<MASK>} token with probability 0.80, replaced with a random token with probability 0.10, or kept unchanged with probability 0.10. The masked tokens are then predicted from the context of the remaining tokens, where the context vectors are obtained by the multi-head self-attention mechanism.
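As a concrete illustration, the 80/10/10 masking scheme can be sketched as follows; the token names, toy vocabulary, and helper function are illustrative assumptions, not FastSpec's actual implementation:

```python
import random

MASK = "<MASK>"
VOCAB = ["mov", "add", "cmp", "jmp", "leaq", "<imm>", "<label>"]  # toy vocabulary

def mask_tokens(tokens, mask_frac=0.15, seed=0):
    """Pick 15% of the positions; replace each picked token with <MASK>
    (p=0.80), a random vocabulary token (p=0.10), or keep it (p=0.10)."""
    rng = random.Random(seed)
    tokens = list(tokens)
    n_pick = max(1, int(len(tokens) * mask_frac))
    positions = rng.sample(range(len(tokens)), n_pick)
    for pos in positions:
        r = rng.random()
        if r < 0.80:
            tokens[pos] = MASK
        elif r < 0.90:
            tokens[pos] = rng.choice(VOCAB)
        # otherwise the token is kept unchanged
    return tokens, positions
```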
The second task is next sentence prediction, where the preceding sequence is given as input. Since our assembly code data has no paragraph structure in which separate long sequences follow each other, each assembly function is split into pieces with a maximum token size of 50. For the next sentence prediction task, we prepend \texttt{<CLS>} to each piece. For each piece of a function, the following piece is given with the label \texttt{IsNext}, and a random piece is given with the label \texttt{NotNext}. FastSpec is thus pre-trained in a self-supervised fashion.
At the end of the \textit{pre-training} procedure, each token is represented by an embedding vector of size $H$. Since it is impossible to visualize the high-dimensional embedding vectors directly, we leverage the t-SNE algorithm~\cite{maaten2008t-sne}, which maps the embedding vectors to a three-dimensional space, as shown in~\autoref{fig:bert_embedding}. The embedding vectors of similar tokens lie close to each other in three-dimensional space, which indicates that the embedding vectors are learned efficiently. In~\autoref{fig:bert_embedding}, registers of different sizes, floating-point instructions, control-flow instructions, shift/rotate instructions, set instructions, and MMX instructions/registers accumulate in separate clusters. The separation among different types of tokens enables a higher success rate in the Spectre gadget detection phase.
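A minimal sketch of this visualization step, using scikit-learn's t-SNE on synthetic stand-ins for the learned embeddings (the cluster structure and sizes are assumptions for illustration):

```python
import numpy as np
from sklearn.manifold import TSNE

# Two synthetic clusters of 64-dimensional "token embeddings" (H = 64),
# standing in for, e.g., control-flow tokens vs. MMX tokens.
rng = np.random.default_rng(0)
embeddings = np.vstack([
    rng.normal(0.0, 0.1, size=(20, 64)),
    rng.normal(5.0, 0.1, size=(20, 64)),
])

# Project the embedding vectors to three dimensions for plotting.
coords = TSNE(n_components=3, perplexity=10, init="random",
              random_state=0).fit_transform(embeddings)
print(coords.shape)  # one 3-d point per token embedding
```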
\iffalse
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{figures/BERT_gadget_added.pdf}
\caption{(Above) The confidence rates of sliding windows with token size 80 is shown for a benign assembly function. (Below) The inserted Spectre-V1 gadget is detected with 90\% confidence rate in the same function. }
\label{fig:bert_spec}
\end{figure}
\fi
\subsubsection{Fine-tuning}
The second procedure is called \textit{fine-tuning}, which corresponds to supervised sequence classification in FastSpec. This phase enables FastSpec to learn the conceptual differences between Spectre gadgets and general-purpose functions through labeled pieces. The pieces created for the pre-training phase are merged into a single sequence with a maximum of 250 tokens; disassembled object files with more than 250 tokens are split into separate sequences. Each sequence is represented by a single~\texttt{<CLS>} token at the beginning. The benign files are labeled with~\texttt{0}, and the gadget samples are labeled with~\texttt{1} for the supervised classification. Then, the embedding vector of the corresponding~\texttt{<CLS>} token and the position embedding vector for the first position are summed up. Finally, the resulting vector is fed into the softmax layer, which is fine-tuned with supervised training. The output probabilities of the softmax layer are the predictions on the assembly code sequence.
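The fine-tuning head described above — summing the \texttt{<CLS>} embedding with the first positional embedding and applying a softmax layer — can be sketched as follows (the weights and dimensions are illustrative, not FastSpec's trained parameters):

```python
import numpy as np

H = 64  # hidden size, matching the configuration used for FastSpec

def classify_sequence(cls_emb, pos0_emb, W, b):
    """Sum the <CLS> token embedding with the position-0 embedding and
    feed the result through a 2-class softmax layer
    (class 0 = benign, class 1 = Spectre gadget)."""
    h = cls_emb + pos0_emb               # (H,)
    logits = W @ h + b                   # (2,)
    exp = np.exp(logits - logits.max())  # numerically stable softmax
    return exp / exp.sum()

rng = np.random.default_rng(0)
probs = classify_sequence(rng.normal(size=H), rng.normal(size=H),
                          rng.normal(size=(2, H)), np.zeros(2))
```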
\subsection{Training Details and Evaluation}\label{sec:training_details}
We combine the assembly data set generated in~\autoref{sec:dataset} and the disassembled Linux libraries to train FastSpec. Although the Linux libraries may contain Spectre-V1 gadgets, we assume that the total number of hidden Spectre gadgets is negligible compared to the total size of the data set. Therefore, the model treats those gadgets as noise, which does not affect the performance of FastSpec. In total, a data set of 107 million lines of assembly code is collected, which consists of 370 million tokens after pre-processing. We use 80\% of the data set for training and validation, and the remaining 20\% for the evaluation of FastSpec.
In addition to the pre-processing phase in~\autoref{sec:gan_details}, we merge similar tokens to reduce the total vocabulary size. We replace all labels, immediate values, and out-of-vocabulary tokens with \texttt{<label>}, \texttt{<imm>}, and \texttt{<UNK>}, respectively. After the pre-processing, the vocabulary size is reduced to 960.
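The token-merging step can be sketched with simple pattern rules; the regular expressions and toy vocabulary below are illustrative assumptions, and FastSpec's actual rules may differ:

```python
import re

VOCAB = {"mov", "movl", "cmpq", "jbe", "leaq", "movzbl", "andb", "retq",
         "<label>", "<imm>", "<UNK>"}  # toy vocabulary

def normalize(token):
    """Map labels to <label>, immediate values to <imm>, and any token
    outside the vocabulary to <UNK>."""
    if token.startswith(".L") or re.fullmatch(r"\.?[A-Za-z_][\w.]*:", token):
        return "<label>"
    if re.fullmatch(r"\$?-?(0x[0-9a-fA-F]+|\d+)", token):
        return "<imm>"
    return token if token in VOCAB else "<UNK>"
```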
We choose the number of Transformer blocks as $L=3$, the hidden size as $H=64$, and the number of self-attention heads as $A=2$. We train FastSpec on NVIDIA Titan XP GPU. The \textit{pre-training} phase takes approximately 6 hours, with a sequence length of 50. We further train the positional embeddings for 1 hour with a sequence length of 250. The fine-tuning takes only 20 minutes on the pre-trained model to classify all types of samples in the test data set correctly. Note that the training time is less than previous NLP techniques in the literature since BERT~\cite{devlin2019bert} leverages GPU parallelization significantly. The analysis duration is measured on Intel Xeon CPU E5-2637 v2 @3.50GHz.
In the evaluation of FastSpec, we obtained 1.3 million true positives and 110 false positives (99.9\% precision rate) in the test data set, demonstrating the high performance of FastSpec. We assume that the false positives are Spectre-like gadgets in Linux libraries, which need to be explored deeply in future work. Moreover, we only have 55 false negatives (99.9\% recall rate), which yield a 0.99 F-1 score on the test data set.
In the next section, we show that FastSpec achieves high performance and extremely fast gadget detection without needing any GPU acceleration since FastSpec is built on a lightweight BERT implementation.
\iffalse
\subsection{Synthetic Data Analysis}\label{sec:window_size}
\begin{figure}[h]
\centering
\includegraphics[width=\columnwidth]{figures/BERT_confidence_openssl.pdf}
\caption{OpenSSL crypto library analysis by FastSpec (libcrypto$-$lib$-$ex$\_$data). The blue line represents the confidence ratio for a window of 80 tokens. When the confidence exceeds 0.6, the code snippet is classified as a Spectre gadget which is shown with red peaks.}
\label{fig:BERT_confidence_openssl}
\end{figure}
We analyze the performance of FastSpec with synthetic data to gain an intuition about the behavior on real-world data.
We insert a known Spectre code with a size of 89 tokens to a random benchmark code given in ~\autoref{lst:spectre}. In~\autoref{fig:bert_spec}, FastSpec detects a simple Spectre gadget with a 0.9 confidence rate for a contiguous interval of 162 tokens. This result shows that FastSpec can detect a hidden gadget with a sliding window size of 80, even though the gadget length is larger than the window size.
\fi
\subsection{Case Study: OpenSSL Analysis}
\label{sec:openssl}
We validate FastSpec with a ground-truth data set separate from the one generated in~\autoref{sec:dataset}. The purpose of this analysis is to measure the effect of covariate shift and the robustness of FastSpec on a real-world benchmark. We focus on the OpenSSL~v3.0.0 libraries~\cite{openssl}, as OpenSSL is a popular general-purpose cryptography library in commercial software. We use a subset of functions from the RSA, ECDSA, and DSA ciphers in the OpenSSL \textit{speed} benchmark. The function labels are obtained by running \textit{SpecFuzz}, a dynamic detection tool that finds Spectre-V1 vulnerabilities using fuzzing~\cite{oleksenko2019specfuzz}. The functions in which \textit{SpecFuzz} finds vulnerabilities are labeled as positive, and the remaining ones are labeled as negative. We also exclude from the positive class the functions without any conditional branch instructions, as well as the functions that call them. In total, 4242 functions are extracted from the aforementioned cryptography libraries to analyze with FastSpec. The positive and negative classes include 720 and 2500 functions, respectively.
First, we apply the same pre-processing procedures as explained in~\autoref{sec:training_details} to obtain the tokens; the total number of tokens is more than 4 million. Since the labels are assigned at the function level, we take the maximum confidence rate over all sliding windows as the model's prediction for the corresponding input function. In order to find the optimal sliding window size, we scan the functions with various window sizes and compare the performance. \autoref{fig:roc} shows that FastSpec achieves the highest performance in detecting functions with a Spectre-V1 vulnerability when the window size is set to 80 tokens, which corresponds to an area under the curve (AUC) of 0.998. The optimal threshold value, which maximizes the F-score, is 0.48. The highest F-score achieved is 0.99, where the false positive rate (benign functions mistakenly classified as Spectre gadgets) is 0.04\% and the false negative rate (gadget functions mistakenly classified as benign) is 2\%. We claim that further analysis of the detected functions with symbolic execution or taint analysis tools can reduce the number of false positives and provide an efficient end-to-end security solution against the Spectre-V1 vulnerability.
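The function-level scoring described above can be sketched as follows; \texttt{classify} is a stand-in for FastSpec's softmax confidence for the Spectre-gadget class, and the toy classifier is purely illustrative:

```python
def function_confidence(tokens, classify, window=80, threshold=0.48):
    """Slide a fixed-size window over the function's tokens, score every
    window, and label the function by the maximum window confidence."""
    if len(tokens) <= window:
        spans = [tokens]
    else:
        spans = [tokens[i:i + window] for i in range(len(tokens) - window + 1)]
    confidence = max(classify(span) for span in spans)
    return confidence, confidence >= threshold

# Toy classifier: flags windows that contain a bounds check (jbe)
# followed by a dependent byte load (movzbl).
toy = lambda span: 0.9 if "jbe" in span and "movzbl" in span else 0.1
```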
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{figures/roc3.pdf}
\caption{Solid line stands for the ROC curve of FastSpec for Spectre gadget class. Dashed line represents the reference line.}
\label{fig:roc}
\end{figure}
\subsection{Case Study: Phoronix Test Suite Analysis}\label{sec:phoronix}
The performance comparison between FastSpec and the other static analysis tools is evaluated on the Phoronix Test Suite v5.2.1~\cite{phoronix}. For the ground truth, \textit{SpecFuzz} is chosen, since it analyzes the binaries dynamically and detects exploitable gadgets with a higher success rate than static tools. The selected benign files come with source code, since the assembly files are required for the \textit{Spectector} tool; the assembly files are generated by compiling the C source code with the \textit{GCC} compiler. On the other hand, the binary files are generated at test installation, so no further processing is required before testing the binary files with \textit{oo7}. For FastSpec, the disassembled binary files are given as input. Note that since larger benchmarks take more time to be analyzed by \textit{oo7}, we preferred small files to ease the comparison with \textit{Spectector} and FastSpec.
\textbf{Timing} The overall timing results for various benchmarks are given in~\autoref{tab:phoronix}. The analysis time of \textit{oo7} and \textit{Spectector} increases drastically with the number of conditional branches since both tools analyze both paths whenever a conditional branch is encountered. In contrast, the analysis time of FastSpec increases linearly with the binary size. We observe that the pre-processing phase takes the major portion of FastSpec's analysis time, while the inference time is on the order of microseconds. We fuzz the Crafty benchmark for 24 hours and the other benchmarks for 1 hour using \textit{SpecFuzz} under the default configuration~\footnote{https://github.com/OleksiiOleksenko/SpecFuzz}.
The effect of the increasing number of branches on time consumption is clear in the Crafty and Clomp benchmarks in~\autoref{tab:phoronix}. Even though the Crafty benchmark has only 10,796 branches, \textit{oo7} and \textit{Spectector} analyze the file in more than \textbf{10 days} (the analysis process was terminated after 10 days) and \textbf{2 days}, respectively. In~\autoref{fig:fastspec_branch}, we show that both tools are not sufficiently scalable for real-world applications, especially when the files contain thousands of conditional branches. In particular, \textit{oo7} shows exponential behavior because of its forced execution approach, which executes every possible path of the conditional branches. In contrast, FastSpec analyzes the same Crafty benchmark in under 6 minutes, which is a significant improvement.
Note that the Byte benchmark has a higher number of branches than most of the remaining benchmarks. However, it consists of multiple independent files that are tested separately, so it takes less time to analyze in total. Consequently, FastSpec is on average 455 times faster than \textit{oo7} and 75 times faster than \textit{Spectector}.
\begin{table*}
\centering
\footnotesize
\caption{Comparison of \textit{oo7}~\cite{wang2018oo7}, Spectector~\cite{guarnieri2020spectector}, and FastSpec on the Phoronix Test Suite. The last column shows that FastSpec is on average 455 times faster than \textit{oo7} and 75 times faster than \textit{Spectector}. (\#CB: Number of conditional branches, \#Fc: Number of functions, \#DFc: Number of detected functions) }
\resizebox{\textwidth}{!}{%
\begin{tabular}{l|c|c|c||c|c|c|c|c|c|c|c|c|c} \toprule
& & & & \textbf{SpecFuzz} & \multicolumn{3}{c|}{\textbf{oo7}} & \multicolumn{3}{c|}{\textbf{Spectector}} & \multicolumn{3}{c}{\textbf{FastSpec}} \\
%
Benchmark & \thead{Size\\(KB)} & \thead{\#CB} & \#Fc & \thead{\#DFc} & \thead{\\Precision} & \thead{\\Recall} & \thead{Time \\(sec)} & \thead{\\Precision} & \thead{\\Recall} & \thead{Time \\(sec)} & \thead{\\Precision} & \thead{\\Recall} & \thead{Time \\(sec)}\\
\midrule
Byte & 183.5 & 363 & 83 &7 & 0.70 &\textbf{0.90} & 400 &1.00 &0.43 & 115 & \textbf{1.00} & 0.86 & \textbf{14} \\
Clomp & 79.4 & 1464 & 45 &1 & 0 &0 & 17.5 hr & 0.05 &0.9 & 2.8 hr & \textbf{1.00} & \textbf{1.00} & \textbf{35} \\
Crafty & 594.8 & 10796 & 207 &44 & \textbf{1.00} &0.54 & $>$10 day & 0.60 &\textbf{0.91} & 48 hr & 0.23 & 0.80 & \textbf{315}\\
C-ray & 27.2 & 139 & 11 &1 & \textbf{1.00} &1.00 & 395 & 0.2 &0.9 & 153 & 0.50 & \textbf{1.00} & \textbf{8} \\
Ebizzy & 18.5 & 104 & 6 &3 & 0 &0 & 467 & 0.60 &1.00 & 206 & \textbf{1.00} & 0.33 & \textbf{3} \\
Mbw & 13.2 & 70 & 5 &1 & 0 &0 & 145 & \textbf{0.50} &1.00 & 34 & 0.33 & \textbf{1.00} & \textbf{2} \\
M-queens & 13.4 & 51 & 4 &1 & 1.00 &1.00 & 136 & 0.50 &1.00 & 24 & \textbf{1.00} & \textbf{1.00} & \textbf{2} \\
Postmark & 38.0 & 309 & 49 &6 & 1.00 &0.83 & 3409 & 0.43 &0.95 & 1202 & \textbf{1.00} & \textbf{1.00} & \textbf{10} \\
Stream & 22.0 & 113 & 4 &3 & 0 &0 & 231 & 0 &0 & 63 & \textbf{1.00} & \textbf{0.66} & \textbf{4} \\
Tiobench & 36.1 & 169 & 19 &1 & 0 &0 & 813 & 0.25 &0.8 & 201 & \textbf{0.33} & \textbf{1.00} & \textbf{9} \\
Tscp & 40.8 & 651 & 38 &13 & 0 &0 & 6667 & 1.00 &0.15 & 972 & \textbf{1.00} & \textbf{0.92} & \textbf{12} \\
Xsbench & 27.9 & 153 & 32 &1 & \textbf{1.00} &\textbf{1.00} & 1985 & 0 &0 & 249 & 0.50 & 0.90 & \textbf{7} \\
\midrule
Average & \multicolumn{4}{c|}{ } & 0.47 &0.44 & & 0.43 &0.67 & & \textbf{0.74} & \textbf{0.87} &
\\ \bottomrule
\end{tabular}
}
\label{tab:phoronix}
\end{table*}
\begin{figure}[t]
\centering
\includegraphics[width=1\columnwidth]{figures/FastSpec_branch_only.pdf}
\caption{The processing time of FastSpec is independent of the number of branches whereas for Spectector and oo7 the analysis time increases drastically.}
\label{fig:fastspec_branch}
\end{figure}
\textbf{Baseline Evaluation} The number of gadgets found by the tools varies significantly. While \textit{oo7} and FastSpec report each Spectre gadget in a binary file, \textit{Spectector} outputs whether a function contains a Spectre gadget or not. To be consistent, if a control or data leakage is found in a function, it is reported as a vulnerable function by all three tools.
The precision and recall rates for \textit{oo7}, \textit{Spectector}, and FastSpec are given in~\autoref{tab:phoronix}. The precision is calculated as $TP/(TP+FP)$, where TP is the number of detected gadgets that overlap with the ground truth and FP is the number of functions mistakenly classified as Spectre gadgets. The recall is computed as $TP/(TP+FN)$, where FN is the number of gadgets that a tool fails to detect.
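These metrics follow directly from the counts; the example numbers below are chosen to be consistent with the Byte row of~\autoref{tab:phoronix} for FastSpec (precision 1.00, recall 0.86) and are illustrative only:

```python
def precision_recall(tp, fp, fn):
    """Precision = TP/(TP+FP), recall = TP/(TP+FN)."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# 7 ground-truth gadget functions, 6 detected, no false positives.
p, r = precision_recall(tp=6, fp=0, fn=1)
```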
In some cases, \textit{oo7} is not able to track the control flow when the number of function calls increases in a gadget, which yields many false negatives and a low recall; thus, \textit{oo7} suffers from incomplete control-flow graph extraction. \textit{Spectector} tends to give more false positives compared to \textit{oo7} and FastSpec because some unsupported instructions are skipped by the tool, so gadgets broken by such instructions are still classified as Spectre gadgets. On the other hand, FastSpec has few false negatives since almost all Spectre gadget patterns are detected with a confidence rate higher than 0.48. When the file size increases, the false positives may increase in parallel; however, these candidates can be verified with other tools to increase the confidence. As a result, FastSpec scans the functions far more quickly than the other tools without sacrificing precision or recall, and it detects almost all Spectre gadgets in the scanned assembly functions with low FN rates. FastSpec outperforms all the compared tools in terms of recall and precision rates by a large margin.
\section{}
\subsection{Assembly Gadget Examples}
In this section, corresponding assembly gadget of given examples in~\autoref{sec:dataset} are provided.
\label{sec:gadgets_bypass_oo7_spectector}
\begin{lstlisting}[style=ASMstyleCMOVXCHGSET,
frame=single,
caption={When the C code in~\autoref{lst:cmov} compiled with certain optimizations (gcc 7-4 with O2 enabled), the generated assembly code contains CMOV instruction which fools \textit{oo7}.},
label={lis:assembly_cmov},
xleftmargin=2em,
framexleftmargin=1.5em,
basicstyle=\footnotesize]
victim_function:
.LFB23:
movl global_condition
testl
movl $0,
cmovne
movslq array1_size
cmpq
jbe .L1
leaq array1
leaq array2
movzbl
sall $12,
cltq
movzbl
andb
.L1:
rep ret
\end{lstlisting}
\begin{lstlisting}[style=ASMstyleCMOVXCHGSET,
frame=single,
caption={While generating gadgets with the mutational fuzzing technique, this code is generated by our algorithm from Kocher's example 3 (using clang-6.0 with O2 optimization).},
label={lis:assembly_xchg},
xleftmargin=2em,
framexleftmargin=1.5em,
basicstyle=\footnotesize]
victim_function:
xchg
cmpl
movl array1_size
shr $1,
cmpq
jbe .LBB1_1
addq
leaq array1
movzbl
jmp leakByteNoinlineFunction
.LBB1_1:
retq
leakByteNoinlineFunction:
movl
shlq $9,
leaq array2
movb
andb
retq
\end{lstlisting}
\newpage
\begin{lstlisting}[style=ASMstyleCMOVXCHGSET,
frame=single,
caption={While generating gadgets with the mutational fuzzing technique, this code is generated by our algorithm from Kocher's example 9 (using clang-6.0 with O2 optimization). The \textcolor{red}{seta \%sil} instruction sets the lowest 8 bits of the \%rsi register based on a condition, which is not detected by \textit{oo7}.},
label={lis:assembly_set},
xleftmargin=2em,
framexleftmargin=1.5em,
basicstyle=\footnotesize]
victim_function:
seta
cmpl $0,
je .LBB0_2
addl
sarq $1,
addb
movzbl array1
ja .L1324337
testw
shlq $12,
nop
movb array2
.L1324337:
andb
.LBB0_2:
retq
\end{lstlisting}
\subsection{Instructions and registers inserted randomly in the fuzzing technique} \label{sec:instructions_inserted_by_fuzzing}
\newcolumntype{s}{>{\hsize=0.5\hsize}X}
\newcolumntype{v}{>{\hsize=1.2\hsize}X}
\begin{table}[h]
\caption{Instructions and registers inserted randomly in the fuzzing technique.
}
\begin{tabularx}{\columnwidth}{X|X|X|v|X|s}
\hline \toprule
\multicolumn{6}{c}{\textbf{Instructions}} \\ \hline
add & cmovll & jns & movzbl & ror & subl \\
addb & cmp & js & movzwl & sall & subq \\
addl & cmpb & lea & mul & salq & test \\
addpd & cmpl & leal & nop & sarq & testb \\
addq & cmpq & leaq & not & sar & testl \\
andb & imul & lock & notq & sal & testq \\
andl & incq & mov & or & sbbl & testw \\
andq & ja & movapd & orl & sbbq & xchg \\
call & jae & movaps & orq & seta & xor \\
callq & jbe & movb & pop & setae & xorb \\
cmova & je & movd & popq & sete & xorl \\
cmovaeq& jg & movdqa & prefetcht0 & shll & xorq \\
cmovbe & jle & movl & prefetcht1 & shlq & lfence \\
cmovbq & jmp & movq & push & shr & sfence \\
cmovl & jmpq & movslq & pushq & sub & mfence \\
cmovle & jne & movss & rol & subb & \\
\toprule
\multicolumn{6}{c}{\textbf{Registers}} \\ \hline
rax & eax & ax & al & xmm0 & ymm0 \\
rbx & ebx & bx & bl & xmm1 & ymm1 \\
rcx & ecx & cx & cl & xmm2 & ymm2 \\
rdx & edx & dx & dl & xmm3 & ymm3 \\
rsp & esp & sp & spl & xmm4 & ymm4 \\
rbp & ebp & bp & bpl & xmm5 & ymm5 \\
rsi & esi & si & sil & xmm6 & ymm6 \\
rdi & edi & di & dil & xmm7 & ymm7 \\
r8 & r8d & r8w & r8b & xmm8 & ymm8 \\
r9 & r9d & r9w & r9b & xmm9 & ymm9 \\
r10 & r10d & r10w & r10b & xmm10 & ymm10 \\
r11 & r11d & r11w & r11b & xmm11 & ymm11 \\
r12 & r12d & r12w & r12b & xmm12 & ymm12 \\
r13 & r13d & r13w & r13b & xmm13 & ymm13 \\
r14 & r14d & r14w & r14b & xmm14 & ymm14 \\
r15 & r15d & r15w & r15b & xmm15 & ymm15 \\
\bottomrule
\end{tabularx}
\label{tab:inst_reg}
\end{table}
\end{appendices}
\section{Background} \label{sec:background}
\subsection{Transient Execution Attacks}
In order to keep the pipeline occupied at all times, modern CPUs have sophisticated microarchitectural optimizations to predict the control flow and data dependencies, where some instructions can be executed ahead of time in the transient domain.
However, the control-flow predictions are not 100\% accurate, so some instructions are executed on a mispredicted path. These instructions trigger a pipeline flush once the misprediction is detected, and their results are never committed. Interestingly, their microarchitectural side effects make it possible to leak secrets. The critical period before the flush is commonly referred to as the transient domain.
There are two classes of attacks in the transient domain~\cite{Canella2019extended}. The first one is called Meltdown-type attacks~\cite{Lipp2018meltdown, Schwarz2019ZombieLoad, canella2019fallout, stecklina2018lazyfp, vanbulck2018foreshadow, ridl} which exploit delayed permission checks and lazy pipeline flush in the re-order buffer.
The other class is Spectre-type attacks~\cite{kocher2019spectre, storetoload, kiriansky2018speculative, spectrereturns, Maisuradze_2018} that exploit the speculative execution. As most Meltdown-type attacks are fixed in latest microarchitectures and Spectre-type attacks are still applicable to a wide range of targets~\cite{Canella2019extended}, i.e., Intel, AMD, and ARM CPUs, we focus on Spectre-V1 attacks in this study.
Some researchers have proposed new designs requiring changes at the silicon level~\cite{khasawneh2018safespec, koruyeh2019speccfi}, while others have proposed software solutions to mitigate transient execution attacks~\cite{msvcspectre, retpoline}. Although these mitigations are effective against Spectre-type attacks, most of them are not deployed because of drastic performance degradation~\cite{speculativeloadhardening} or the lack of support in current hardware. Hence, Spectre-type attacks are not entirely resolved yet, and finding an efficient countermeasure is still an open problem.
\subsubsection{Spectre}
Since a typical software consists of branches and instruction/data dependencies, modern CPUs have components for predicting conditional branches' outcomes to execute the instructions speculatively. These components are called branch prediction units (BPU), which use a history table and other components to make predictions on branch outcomes.
\vspace{0.3cm}
\begin{lstlisting}[ backgroundcolor=\color{white},
frame=single,
xleftmargin=2em,
framexleftmargin=1.5em,
language=C,
caption= Spectre-V1 C Code,
label={lst:spectre}]
void victim_function(size_t x){
if(x < size)
temp &= array2[array1[x] * 512];
}
\end{lstlisting}
In Spectre attacks, a user fills the history table with malicious entries such that the BPU makes a misprediction. Then, the CPU executes a set of instructions speculatively. As a result of misprediction, sensitive data can be leaked through microarchitectural components, for instance, by encoding the secret to the cache lines to establish a covert channel. For example, in the Spectre gadget in~\autoref{lst:spectre}, the $2^{nd}$ line checks whether the user input \texttt{x} is in the bound of \texttt{array1}. In a normal execution environment, if the condition is satisfied, the program retrieves $x^{th}$ element of \texttt{array1}, and a multiple of the retrieved value (512) is used as an index to access the data in \texttt{array2}. However, under some conditions, the \texttt{size} variable might not be present in the cache. In such occurrences, instead of waiting for \texttt{size} to be available, the CPU executes the next instructions speculatively.
This avoids unnecessary stalls in the pipeline. When \texttt{size} becomes available, the CPU checks whether it made a correct prediction. If the prediction was wrong, the CPU rolls back and executes the correct path. Although the results of speculatively executed instructions are not observable in architectural components, the access to \texttt{array2} leaves a footprint in the cache, making it possible to leak the data through side-channel analysis.
\subsubsection{Program Analysis Techniques}\label{sec:program_analysis_techniques}
There are two main program analysis techniques that are commonly used to detect Spectre gadgets.
\textbf{Taint Analysis:} Taint analysis tracks outside, user-controlled variables that could possibly leak secret data. If a tainted variable is consumed by a new variable in the program flow, the latter is also tainted in the information flow. This technique is commonly used in vulnerability detection~\cite{newsome2005dynamic}, malware analysis~\cite{bayer2009scalable,yin2007panorama}, and web applications~\cite{balzarotti2008saner,nguyen2005automatically}, where user input misuses are highly likely. Similarly, in Spectre gadgets, the secret-dependent operations after conditional branches are potential sources of secret leakage.
In particular, when the branch decision depends on the user input, the secret is subject to be revealed in the speculative execution state.
In order to detect the Spectre-V1 based leakage in benign programs, the taint analysis technique is used in \textit{oo7}.
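The propagation rule can be sketched on toy three-address pseudo-instructions; this is purely illustrative, and real tools such as \textit{oo7} track taint at the binary level with far richer semantics:

```python
def propagate_taint(instructions, tainted):
    """Forward taint propagation over (dst, op, srcs) pseudo-instructions:
    the destination becomes tainted iff any source operand is tainted."""
    tainted = set(tainted)
    for dst, _op, srcs in instructions:
        if any(s in tainted for s in srcs):
            tainted.add(dst)
        else:
            tainted.discard(dst)
    return tainted

# 'x' is attacker-controlled; the index into array2 inherits its taint.
prog = [
    ("t1", "load", ["array1", "x"]),   # t1 = array1[x]
    ("t2", "mul",  ["t1"]),            # t2 = t1 * 512
    ("t3", "load", ["array2", "t2"]),  # t3 = array2[t2]  <- tainted access
]
```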
\textbf{Symbolic Execution:} Symbolic execution is a technique to analyze the program with symbolic inputs. Each path of the conditional branch is executed symbolically to determine the values, resulting in unexpected bugs. The symbolic execution is applied to detect potential information leakage in benign applications. For instance, \textit{Spectector}~\cite{guarnieri2020spectector} aims to identify the memory and control leaks by supplying symbolic inputs to target functions. While the symbolic execution provides a good understanding of underlying bugs for different input values, it is challenging to apply for large-scale projects due to high resource demand.
\subsection{Natural Language Processing}
\subsubsection{seq2seq Architecture}
Sequence-to-sequence mapping is a challenging process since a text data set has no numeric values. First, the text data is converted to numeric values with embedding methods~\cite{mikolov2013word2vec,mikolov2013efficient}. Then, a DNN model is trained with the vector representations of the text.
A new approach called seq2seq~\cite{sutskever2014seq2seq} was introduced to model sequence-to-sequence relations. The seq2seq architecture consists of encoder and decoder units. Both units leverage multi-layer Long Short-Term Memory (LSTM) structures, where the encoder produces a fixed-dimension encoder vector. The encoder vector represents the information learned from the input sequence. Then, the decoder unit is fed with the encoder vector to predict the sequence that the input sequence maps to. The prediction phase stops once the decoder produces the end-of-sequence token. The seq2seq structure is commonly used in chatbot engines~\cite{qiu2017alime} since sequences with different lengths can be mapped to each other.
\subsubsection{Generative Adversarial Networks}\label{sec:GAN}
A specialized method of training generative models, called generative adversarial networks (GANs), was proposed by Goodfellow et al.~\cite{goodfellow2014generative}. The generative model is trained against a separate discriminator model in an adversarial setting.
In~\cite{goodfellow2014generative}, the training of the generative model is defined as,
\begin{equation} \label{eq:GAN}
\begin{split}
\min_{G}\max_{D}V(D,G) & = \mathbb{E}_{x\sim p_{data}(x)}[\log D(x)] \\
& + \mathbb{E}_{z\sim p_z(z)}[\log(1-D(G(z)))].
\end{split}
\end{equation}
In~\autoref{eq:GAN}, the generator $G$ and the discriminator $D$ are trained such that $D$, as a regular binary classifier, tries to maximize its confidence $D(x)$ on real data $x$ while minimizing $D(G(z))$ on samples generated by $G$. At the same time, $G$ tries to maximize the discriminator's confidence $D(G(z))$ on its generated samples $G(z)$.
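The value $V(D,G)$ in \autoref{eq:GAN} can be estimated numerically over finite samples, which makes the equilibrium behavior concrete. The sketch below uses toy closed-form $D$ and $G$ chosen purely for illustration; the sample sets are arbitrary.

```python
import math

# Numerical sketch of the minimax value V(D, G) from the GAN objective:
# V = E_x[log D(x)] + E_z[log(1 - D(G(z)))], estimated over finite samples.

def V(D, G, real_xs, zs):
    term_real = sum(math.log(D(x)) for x in real_xs) / len(real_xs)
    term_fake = sum(math.log(1.0 - D(G(z))) for z in zs) / len(zs)
    return term_real + term_fake

# A "perfectly confused" discriminator outputs 1/2 everywhere, giving the
# well-known equilibrium value V = log(1/2) + log(1/2) = -log 4.
D_equilibrium = lambda x: 0.5
G_identity = lambda z: z

value = V(D_equilibrium, G_identity, real_xs=[0.1, 0.9], zs=[0.3, 0.7])
```

At the optimum, when the generated distribution matches the data distribution, the discriminator cannot do better than $D(x)=1/2$, which is exactly the $-\log 4$ value the sketch reproduces.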
MaskGAN~\cite{fedus2018maskgan} is a conditional GAN technique designed to obtain better sample quality than traditional GANs. MaskGAN is based on the seq2seq architecture with an attention mechanism, which mitigates the limitations of fixed-length encoder vectors. Each time the decoder unit makes a prediction, a part of the input sequence is used instead of the encoder vector; hence, each token in the input sequence has a different weight on the decoder output. The main difference of MaskGAN from other GAN-based text generation techniques is its token masking approach, which teaches the model to fill in missing text in a sequence. For this purpose, some tokens are masked, and their reconstruction is conditioned on the surrounding context. This technique increases the chance of generating longer and more meaningful sequences with GANs.
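The masking step itself is simple to sketch: a fraction of tokens is replaced by a mask symbol, and the model is asked to recover the originals from the unmasked context. The token names and masking rate below are illustrative, not taken from the MaskGAN paper.

```python
import random

# Sketch of MaskGAN-style conditioning: some tokens are replaced by a mask
# symbol, and the generator must fill the blanks given the surrounding
# (unmasked) context. Rate and vocabulary here are illustrative.

MASK = "<m>"

def mask_tokens(tokens, rate, rng):
    masked, targets = [], {}
    for i, tok in enumerate(tokens):
        if rng.random() < rate:
            masked.append(MASK)
            targets[i] = tok      # ground truth the generator must recover
        else:
            masked.append(tok)
    return masked, targets

rng = random.Random(0)
seq = ["movl", "size(%rip)", ",", "%eax", "jae", ".L2"]
masked_seq, fill_targets = mask_tokens(seq, rate=0.3, rng=rng)
```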
\subsubsection{Transformer and BERT}
Although recurrent models with attention mechanisms learn the representations of long sequences, attention-only models, namely \textit{Transformer} architectures~\cite{vaswani2017attention}, are shown to be highly effective in terms of computational complexity and performance on long-range dependencies. Similar to \textit{seq2seq} architecture, \textit{the Transformer} architecture consists of an encoder-decoder model. The main difference of \textit{Transformer} is that recurrent models are not used in encoder or decoder units. Instead, the encoder unit is composed of $L$ hidden layers where each layer has a multi-head self-attention mechanism with $A$ attention heads and a fully connected feed-forward network. The input embedding vectors are fed into the multi-head attention, and the output of the encoder stack is formed by a feed-forward network, which takes the output of the attention sub-layer. The decoder unit also has $L$ hidden layers, and it has the same sub-layers as the encoder. In addition to one multi-head attention unit and one feed-forward network, the decoder unit has an extra multi-head attention layer that processes the encoder stack output. To process the information in the sequence order, positional embeddings are used with token embeddings where both embedding vectors have a size of $H$.
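The core operation of each attention head described above is scaled dot-product attention, $\mathrm{softmax}(QK^{\top}/\sqrt{d})V$. The dependency-free sketch below shows one head on toy matrices; multi-head attention runs $A$ such computations in parallel over projected copies of $Q$, $K$, and $V$, and all numeric values here are invented for the example.

```python
import math

# Minimal scaled dot-product attention with plain lists:
# out = softmax(Q K^T / sqrt(d)) V, one output row per query.

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(Q, K, V):
    d = len(K[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in K]
        w = softmax(scores)                      # attention weights
        out.append([sum(wi * v[j] for wi, v in zip(w, V))
                    for j in range(len(V[0]))])
    return out

Q = [[1.0, 0.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[10.0, 0.0], [0.0, 10.0]]
ctx = attention(Q, K, V)
```

Because the query aligns with the first key, the first value row dominates the output, which is the "different weight per token" behavior the prose describes.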
Keeping the same Transformer architecture, Devlin et al.~\cite{devlin2019bert} introduced a new language representation model called BERT (Bidirectional Encoder Representations from Transformers), which surpasses the state-of-the-art scores on language representation learning. BERT is designed to pre-train the token representation vectors of deep bidirectional Transformers. For a detailed description of the architecture, we refer the readers to~\cite{vaswani2017attention,devlin2019bert}. The heavy part of the training is handled by processing unlabeled data in an unsupervised manner. The unsupervised phase is called~\textit{pre-training}, which consists of masked language model training and next sentence prediction procedures.
The supervised phase is referred to as~\textit{fine-tuning}, where the model representations are further trained with labeled data for a text classification task. Both phases are further explained in detail for Spectre gadget detection model in~\autoref{sec:NLP_classifier}.
\section{Conclusion}\label{sec:conclusion}
This work, for the first time, proposed NLP-inspired approaches for Spectre gadget generation and detection. First, we extended our gadget corpus to 1.1 million samples with a mutational fuzzing technique. We introduced the SpectreGAN tool, which achieves a high success rate in creating new Spectre gadgets by automatically learning the structure of gadgets in assembly language. SpectreGAN overcomes the difficulties of training a large assembly language model, an entirely different domain than natural language. We demonstrate that 72\% of the compiled code snippets behave as Spectre gadgets, a massive improvement over fuzzing-based generation. Furthermore, we show that our generated gadgets span the speculative domain by introducing new instructions and their perturbations, yielding diverse and novel gadgets. The most interesting gadgets are also presented as new examples of Spectre-V1 gadgets.
Finally, we propose FastSpec, based on BERT-style neural embedding, to detect the hidden Spectre gadgets. We demonstrate that for large binary files, FastSpec is 2 to 3 orders of magnitude faster than \textit{oo7} and \textit{Spectector} while it still detects more gadgets. We also demonstrate the scalability of FastSpec on OpenSSL libraries to detect potential gadgets.
\section{Discussion and Limitations}\label{sec:discussion}
\subsection{Gadget Verification}
The gadget verification process in~\autoref{sec:gadget_verification} is implemented on an isolated core since system interrupts frequently mistrain the targeted branch instructions in the gadgets, which significantly decreases the gadget verification success rate. This situation particularly affects the first step of the verification process, where all the inputs are out-of-bounds and the target branch is not expected to be mistrained. Therefore, an isolated environment is needed to run the verification code for Spectre gadgets. Even though the data cache side-channel is used for the secret decoding, other side-channels, such as the TLB, can be used to decode the secret in a Spectre gadget. The secret elements in \textit{array1} should be multiplied by a constant to decode the secret into different cache lines or pages. In the base examples~\cite{kocher2018spectre}, the secret elements are multiplied by 512 or 4096. The verification code only selects the Spectre gadgets with these specific multiplicands, which potentially introduces a bias into the data set. Since all multipliers in the Spectre gadgets are represented with the same token, \texttt{<imm>}, our detection tool is not affected by the bias introduced by different multipliers. For instance, in OpenSSL and Phoronix, we observed that gadgets with different multiplicands are detected by our detection tool.
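Why the multiplicand matters for verification can be checked with a few lines of arithmetic: multiplying the secret byte spreads the probe accesses across distinct cache lines (64 bytes each) or distinct pages (4096 bytes), so each of the 256 possible byte values lands in its own probe slot. The constants below are the standard x86 cache-line and page sizes; the helper name is ours.

```python
# Sketch: secret_byte * multiplier selects a distinct probe slot per byte
# value when the multiplier matches the probing granularity.

CACHE_LINE = 64    # bytes per cache line on x86
PAGE = 4096        # bytes per page

def probe_slot(secret_byte, multiplier, granularity):
    return (secret_byte * multiplier) // granularity

# With multiplier 512, every distinct byte maps to a distinct cache line...
lines = {probe_slot(b, 512, CACHE_LINE) for b in range(256)}
# ...and with multiplier 4096, to a distinct page.
pages = {probe_slot(b, 4096, PAGE) for b in range(256)}
```

A multiplier smaller than the probing granularity would alias several byte values into one slot, which is why a verification code hard-wired to 512 or 4096 rejects gadgets that encode the secret differently.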
Our verification codes also focus on more complex leakage snippets in which the secret is not simply leaked with a simple multiplication. We observed that similar control-flow statements and more complex encoding techniques are present among Kocher's examples~\cite{kocher2018spectre} (Examples 10-15). After new gadgets are generated from these examples, we observed that these gadgets can still be detected by our verification code. However, if the leakage mechanism in the gadget is altered significantly, it is likely that the secret in the generated gadget is not recovered during verification. Unfortunately, this introduces a bias in our data set as the diversity of the gadgets is limited. Moreover, our detection tool might not be able to detect more complex gadgets as these gadgets are not included in the training data set. To include more complex gadgets in the data set, the verification code can be changed dynamically by analyzing each generated assembly code, which is left as future work.
\subsection{Scalability and Flexibility}
\textbf{Other Spectre Variants:}
Since pre-training teaches the general assembly syntax and takes a major part in the training process, our pre-trained FastSpec model can be used, after fine-tuning, for any assembly code task. Modifications are needed only in Step~1 and Step~4 of~\autoref{sec:gadget_verification} since we need an initial data set and a verification code to build up a larger data set. For Spectre~v1.1~\cite{kiriansky2018speculative}, our verification code can be adapted by adding one more attacker-controlled input to verify whether a speculative load is executed or not. Similarly, the speculatively written value in Spectre~v1.2~\cite{kiriansky2018speculative} can be mapped to cache lines to verify the generated gadgets. For Spectre~v2~\cite{kocher2019spectre}, the verification procedure needs to be completely changed as the branch instruction is no longer a conditional branch. For this purpose, the verification code can be modified to mistrain the indirect jumps with attacker-known addresses; then, the secret bytes in the attacker-controlled function are mapped to separate cache lines. Since Spectre-RSB~\cite{spectrereturns} works in a similar way, except that the \textit{ret} instruction is targeted, the same verification procedure can be adapted.
Finally, in Spectre~v4~\cite{storetoload}, the verification code can supply attacker-controlled variables to specific registers, and then, speculatively loaded data can be decoded to a shared memory to verify the gadgets.
\textbf{Other Attacks:}
Our approach can detect the target \texttt{SMoTher}-gadgets~\cite{bhattacharyya2019smotherspectre} in the code space. The verification procedure in \autoref{sec:gadget_verification}, specifically Step~4, needs to be changed to analyze port fingerprints. For this purpose, the timing of various instructions that are mapped to certain ports can be measured to detect the leaked secrets, as implemented in~\cite{aldaya2019port}. It is highly likely that the verification takes more time for the generated gadgets since more timings need to be collected to distinguish secret leakage from no secret leakage. In NetSpectre~\cite{schwarz2019netspectre}, there are two types of gadgets. The leak gadget is very similar to Spectre-V1, except that only one bit is transmitted. Hence, the verification procedure can be modified to profile a single cache line instead of 256 cache lines. The transmit gadget is used to leak the secret data over the network and has a different structure than the leak gadget. To detect the transmit gadgets with our verification code, the Thrash+Reload technique can be adapted to measure the timing difference between cached and non-cached accesses over the network. Again, the verification procedure potentially takes more time to analyze the generated gadgets since the secret transmission speed is significantly lower than for Spectre-V1.
\textbf{Other Architectures and Applications:}
Although we limit the scope of this paper to generating and detecting the Spectre-V1 gadgets on x86 assembly code, the use of SpectreGAN and FastSpec can always be extended to other architectures and applications with only mild effort.
Furthermore, specially designed architectures are not needed when pre-trained embedding representations are used~\cite{devlin2019bert}. Therefore, the pre-trained FastSpec model can be used for any other vulnerability detection, cross-architecture code migration, binary clone detection, and many other assembly-level tasks.
The fuzzing tool increases the diversity of the generated gadgets by introducing variations that are later learned by the FastSpec tool. In addition, the detection tool learns the generic gadget type rather than training on small details. In \autoref{sec:training_details}, the evaluation of FastSpec also shows that the tool can detect the potential Spectre gadgets with a 99.9\% precision rate.
\subsection{Comparison of FastSpec with Other Tools}
The most significant advantage of FastSpec is its capability of detecting Spectre gadgets more quickly than other tools. If an instruction is not introduced in the training phase, it is treated as unknown, which has only a slight effect on the accuracy of FastSpec since a large window of instructions is analyzed to decide on the Spectre gadgets. While unsupported instructions are an important issue for the \textit{Spectector} tool, FastSpec can be deployed to other architectures such as ARM and AMD. While small modifications in the assembly code increase the chance of bypassing other tools, our tool is more robust against them. It is easier to adapt FastSpec to other Spectre variants, as the vector representations of assembly instructions can be directly used to train a separate model for each variant. Moreover, over-tainting and under-tainting issues decrease the accuracy of taint-based static analysis techniques, whereas FastSpec tracks registers, instructions, and memory accesses with vector representations, which makes it more reliable in large-scale projects.
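The graceful handling of unseen instructions mentioned above can be made concrete: an out-of-vocabulary mnemonic maps to a shared unknown token rather than halting the analysis, as an unsupported instruction does in a rule-based symbolic tool. The vocabulary below is a tiny illustrative stand-in, not FastSpec's actual token table.

```python
# Sketch: unseen mnemonics degrade to a shared <unk> id instead of
# aborting the analysis of the surrounding instruction window.

VOCAB = {"movl": 0, "cmpq": 1, "jae": 2, "<unk>": 3}

def tokenize(instructions):
    return [VOCAB.get(ins, VOCAB["<unk>"]) for ins in instructions]

# The middle instruction is not in the vocabulary and maps to <unk>,
# but the rest of the window is still analyzed normally.
ids = tokenize(["movl", "prefetchnta", "jae"])
```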
\subsection{Scope and Limitations}
\textbf{Scope:} Our scope is to generate Spectre-V1 gadgets by using mutational fuzzing and SpectreGAN methods as well as to detect potential Spectre gadgets in benign programs by significantly reducing the analysis time.
\textbf{Guarantees:} Our verification methods in Step 4.1 guarantee that the generated Spectre-V1 gadgets leak the secret bytes through cache side-channel attacks. Moreover, the FastSpec tool detects the Spectre gadgets with a high precision and recall rate by identifying the gadget patterns at the assembly level. Possible False Positive outputs do not affect the security guarantee provided by FastSpec. The analysis time is significantly reduced compared to rule-based detection tools.
FastSpec generalizes well, i.e., it can recognize similar patterns that are not in our training data set. However, since FastSpec is not based on hand-written rules or formal analysis, it does not provide assurance of coverage (completeness). To decrease the False Negative rate, the probabilistic threshold is kept low in the case studies. In exchange for these weaker guarantees, FastSpec is much faster and scales to larger code bases.
\textbf{Assembly Code Generation:}
The challenges faced in regular text generation with GANs~\cite{yu2017seqgan,fedus2018maskgan} also exist in assembly code generation. One of the challenges is \textit{mode collapse} in the generator models. Although training the model and generating the gadgets with masking helps reduce mode collapse, we observed that our generator model still generates some tokens or patterns of tokens repetitively, reducing the quality of the generated samples as well as the compilation and real-gadget generation rates.
In regular text generation, even if the position of a token changes in a sequence, the meaning of the sequence may change while it would still be somewhat acceptable. However, if the position of a token in an assembly function changes, it may result in a compilation error because of the incorrect syntax. Even if the generated assembly function has the correct assembly syntax, the function behavior may be completely different from the expected one due to the position of a few instructions and registers.
The fuzzing-based gadget generation technique is based on known gadget examples. Since there are already 15 versions of Spectre-V1, we use these gadgets as the starting point for fuzzing. On the other hand, far fewer gadget examples are available for other variants compared to Spectre-V1. To solve this issue, other detection tools can be used to detect Spectre gadgets in benign programs; then, new gadgets can be generated with the fuzzing technique. We leave the further investigation of generating gadgets for other Spectre variants as future work.
\textbf{Window Size:}
Since the Transformer architecture does not use recurrence as RNNs do, the maximum sequence length must be set before training. Therefore, the sliding window size can be at most the maximum sequence length. On the other hand, our experiments show that window sizes smaller than the maximum sequence length provide more accurate Spectre gadget detection and fine-grained information on the sequence.
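The sliding-window scan can be sketched in a few lines: a long disassembled token stream is split into overlapping windows no larger than the maximum sequence length, each window is scored independently, and windows above a probability threshold flag candidate gadget regions. The scorer and threshold below are toy stand-ins for the trained classifier.

```python
# Sketch of sliding-window gadget scanning over a token stream.

def windows(tokens, size, stride):
    """Yield (start, window) pairs covering the token stream."""
    for start in range(0, max(len(tokens) - size, 0) + 1, stride):
        yield start, tokens[start:start + size]

def scan(tokens, size, stride, score, threshold):
    """Return start offsets of windows whose score crosses the threshold."""
    return [start for start, w in windows(tokens, size, stride)
            if score(w) >= threshold]

toy_score = lambda w: 1.0 if "jae" in w else 0.0   # toy detector stand-in
tokens = ["movl", "cmpq", "jae", "movzbl", "shlq", "movb", "ret"]
hits = scan(tokens, size=3, stride=2, score=toy_score, threshold=0.5)
```

Smaller windows localize the flagged region more precisely, at the cost of more scoring calls per binary, which matches the accuracy/granularity trade-off discussed above.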
\section{Gadget Generation}\label{sec:detection_tools} \label{sec:dataset}
We propose both mutational fuzzing and GAN-based gadget generation techniques to create novel and diverse gadgets. In the following sections, we give the details of both techniques and the diversity analysis of the gadgets.
\subsection{Gadget Generation via Fuzzing}\label{sec:gadget_verification}
We begin with fuzzing techniques to extend the base gadgets, creating an extensive data set consisting of a million Spectre gadgets in four steps.
\begin{algorithm}
\small
\DontPrintSemicolon
\SetAlgoLined
\KwIn{An Assembly function $A$, a set of instructions $\mathbb{I}_b$ and sets of registers $\mathbb{R}_b$ for different sizes of $b$}
\KwOut{A mutated Assembly function $A'$ }
$\mathbb{G} := \{\mathbb{R}_b \mapsto \mathbb{I}_b\}$\;
$A' = A$\;
$\textit{MaxOffset}=length(A)$\;
\For{1:\textit{Diversity}}{
\For{Offset=1:\textit{MaxOffset}}{
\For{1:Offset}{
$\textit{i}_b \gets random(\mathbb{I})$\;
$\textit{r}_b \gets random(\mathbb{R}_b | \mathbb{G})$\;
$\textit{l} \gets random(0:length(A'))$\;
$Insert(\{\textit{i}_b | \textit{r}_b\},A',l)$\;
}
\textit{Test boundary check}($A'$)\;
\textit{Test Spectre leakage}($A'$)\;
}
}
\caption{Gadget generation using mutational fuzzing}
\label{alg:generation}
\end{algorithm}
\setlength{\textfloatsep}{3pt}
\begin{itemize}[leftmargin=8pt]
\item \textbf{Step 1: Initial Data Set}
There are 15 Spectre-V1 gadgets written in C by Kocher~\cite{kocher2018spectre} and two modified examples introduced by \textit{Spectector}~\cite{guarnieri2020spectector}. For each example, a different attacker code is written to leak the entire secret data in a reasonable time.
%
\item \textbf{Step 2: Compiler variants and optimization levels}
Since our target data set is in assembly code format, each Spectre gadget written in C is compiled into x86 assembly functions using different compilers. We compiled each example with \textit{GCC}, \textit{clang}, and \textit{icc} compilers using \textit{-o0} and \textit{-o2} optimization flags. Therefore, we obtain 6 different assembly functions from each C function with AT\&T syntax.
%
\item \textbf{Step 3: Mutational fuzzing based generation}
We generated new samples with an approach inspired by mutation-based fuzzing technique~\cite{sutton2007fuzzing} as introduced in Algorithm~\ref{alg:generation}. Our mutation operator is the insertion of random assembly instructions with random operands. For an assembly function $A$ with length $L$, we create a mutated assembly function $A'$. We set a limit on the number of generated samples per assembly function $A$ for each \textit{Offset} value, denoted as \textit{Diversity}. We choose a random instruction $\textit{i}_b$ from the instruction set $\mathbb{I}$, and depending on the instruction format of $\textit{i}_b$; we choose random operands $\textit{r}_b$, which are compatible with the instruction in terms of bit size, $b$. After proper instruction-operand selection, we choose a random position $l$ in $A'$ and insert $\{\textit{i}_b | \textit{r}_b\}$ into that location. We repeat the insertion process until we reach the~\textit{Offset} value. The randomly inserted instruction and register list are given in Appendix~\ref{sec:instructions_inserted_by_fuzzing}.
%
\item \textbf{Step 4: Verification of the generated gadgets}
Finally, $A'$ is tested whether it is still a Spectre-V1 gadget or not. There are two verification tests that are applied to the generated functions.
The first verification test is applied to make sure that the function still has the proper array boundary-check for given user inputs. Since random instructions are inserted in random locations in the gadget, a new instruction may alter the flags whose value is checked by the original conditional jump. Once the flags are broken, the secret may be leaked without any speculative execution. To test this case, the PoC Spectre-V1 attacker code~\cite{kocher2019spectre} is modified to supply only out-of-bounds inputs to $A'$, which prevents mistraining the branch predictor. If the secret bytes in the PoC code are still leaked, we conclude that the candidate gadget is broken and exclude it from the pool.
If a generated function $A'$ passes the first test, we apply the PoC Spectre-V1 attack to the gadget and exclude it if it does not leak the secret data through speculative execution. Additionally, the verification code is modified based on Kocher's examples since each example gadget leaks the secret in a different way. For instance, the $4^{th}$ example shifts the user input by 1, which affects the leakage mapping in the cache. Therefore, we modified the PoC code to compile it together with the generated gadgets to leak the whole secret. This process is repeated for each example in Kocher's gadget data set~\cite{kocher2018spectre}, which yields 16 different verification codes. The secret in the gadgets is only decoded by implementing the Flush+Reload technique; other microarchitectural side-channels are not in the scope of the verification phase.
Other Spectre variants such as SmotherSpectre~\cite{bhattacharyya2019smotherspectre} and NetSpectre~\cite{schwarz2019netspectre} are not in our scope. Hence, the generated gadgets that potentially include SmotherSpectre and NetSpectre variants are not verified with other side-channel attacks. Our verification procedure only guarantees that the extracted gadgets leak secret information through cache side-channel attacks. The verification method can be adjusted to other Spectre variants, which is explained further in~\autoref{sec:discussion}.
\end{itemize}
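The mutation operator of Step~3 (Algorithm~\ref{alg:generation}) can be sketched as follows: pick a random instruction with size-compatible random operands and insert it at a random position in the assembly listing. The instruction and register pools below are tiny illustrative subsets of the real fuzzing lists given in the appendix.

```python
import random

# Sketch of the insertion mutation from Algorithm 1. Pools are toy subsets;
# the real fuzzer draws from the full x86 instruction/register lists.

INSTRUCTIONS = {32: ["addl", "xorl"], 64: ["addq", "xorq"]}
REGISTERS = {32: ["%eax", "%ebx"], 64: ["%rax", "%rbx"]}

def mutate(asm_lines, n_inserts, rng):
    mutated = list(asm_lines)
    for _ in range(n_inserts):
        b = rng.choice([32, 64])                  # operand bit width
        ins = rng.choice(INSTRUCTIONS[b])
        src = rng.choice(REGISTERS[b])            # width-compatible operands
        dst = rng.choice(REGISTERS[b])
        pos = rng.randrange(len(mutated) + 1)     # random insertion point
        mutated.insert(pos, f"  {ins} {src}, {dst}")
    return mutated

rng = random.Random(1)
gadget = ["victim_function:", "  cmpq %rax, %rbx", "  jae .L2", "  ret"]
mutant = mutate(gadget, n_inserts=2, rng=rng)
```

Each mutant would then go through the two verification tests of Step~4 before being admitted to the data set.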
At the end of the fuzzing-based generation, we obtained a data set of almost 1.1 million Spectre gadgets\footnote{The attacker codes for each example, the entire data set, SpectreGAN, and FastSpec code are available at \url{https://github.com/vernamlab/FastSpec}}. The overall success rate of the fuzzing technique is 5\% of the compiled gadgets. The generated gadgets are used to train SpectreGAN in the next section.
\iffalse
\subsection{oo7 Analysis} \label{sec:oo7}
The \textit{oo7} tool leverages taint analysis to detect Spectre-V1 gadgets. It is based on the Binary Analysis Platform (BAP)~\cite{BAP}, which forwards taint propagation along all possible paths after a conditional branch is encountered. \textit{oo7}~\footnote{https://gitlab.com/igoto/spectre-detector} is built on a set of hand-written rules which cover the existing examples by Kocher~\cite{kocher2018spectre}. Although our data set size is 1.1 million, we selected 100,000 samples from each gadget example uniformly at random due to the immense time consumption of \textit{oo7} (150 hours for $\sim$100K gadgets). The detection rate of \textit{oo7} is 94\%. Even though the detection rate seems high, several issues remain in \textit{oo7}, which are given below:
\paragraph{\textbf{Lack of Generalization}}
Even though Kocher's 15 examples are detected by the tool, the hand-written rules are not efficient in detecting newly introduced gadgets. Solely relying on a rule-based approach means that if new Spectre-V1 gadgets introduce different ways to leak the secret data, the gadgets found in deployed software are not detected. We also realized that the $17^{th}$ example in~\autoref{sec:gadget17} proposed by \textit{Spectector} fooled \textit{oo7}. According to the authors of \textit{oo7}, it is possible to add a new rule to the taint propagation policy to catch the control flow violation. However, the tool possibly faces an over-tainting problem, which increases time and resource consumption. We analyzed gadgets undetected by \textit{oo7} and found the following code segments that can bypass the tool. The detailed analysis can be found in~\autoref{sec:oo7_table}.
\paragraph{\textbf{\texttt{CMOVcc} instruction}} \texttt{CMOVcc} performs the move operations if the flag conditions are satisfied in \texttt{EFLAGS} register. After careful inspection of \texttt{CMOVcc} in the gadgets, we crafted a novel C example given in~\autoref{lst:cmov} which bypasses \textit{oo7}. When the C code is compiled with \textit{gcc-7.5} and \texttt{O2} optimization, the $2^{nd}$ and $3^{rd}$ lines are merged as a \texttt{CMOVcc} instruction in assembly. Due to the inadequacy of hand-written rules in \textit{oo7}, the gadget is not detected.
\begin{lstlisting}[ backgroundcolor=\color{white},
frame=single,
xleftmargin=2em,
framexleftmargin=1.5em,
language=C,
caption= {\texttt{CMOVcc} example: An example Spectre gadget in C format. When it is compiled with \textit{gcc-7.5} at the O2 optimization level, the \texttt{CMOVcc} instruction fools the \textit{oo7} tool 100\% of the time},
label={lst:cmov}]
void victim_function(size_t x){
if(global_condition)
x = 0;
if(x < size)
temp &= array2[array1[x] * 512];
}
\end{lstlisting}
\paragraph{\textbf{\texttt{XCHG} instruction}}
\texttt{XCHG} instruction is used to swap the contents in the destination and source operands, and it is generally used when atomicity is needed. We inspected Assembly files with \texttt{XCHG} instruction and created a C function that mimics the execution behavior. In this scenario, the gadget has no leakage in the first access. Instead, it stores the user-controlled variable in a separate variable. However, when the gadget is called again in the next iteration, it uses the previous variable to achieve subtle leakage. Since the variable is not tainted by \textit{oo7}, the leakage is not detected.
\begin{lstlisting}[ backgroundcolor=\color{white},
frame=single,
xleftmargin=2em,
framexleftmargin=1.5em,
language=C,
caption= {Mocked behavior of \texttt{XCHG} instruction: When a past value controlled by the attacker is used in the Spectre gadget, \textit{oo7} is fooled 100\% of the time},
label={lst:prev}]
size_t prev = 0xff;
void victim_function(size_t x) {
if (prev < size)
temp &= array2[array1[prev] * 512];
prev = x;
}
\end{lstlisting}
\paragraph{\textbf{\texttt{SETcc} instruction}}
\texttt{SETcc} instruction sets a byte of the destination operand depending on status flags in \texttt{EFLAGS} register. Once this instruction is used similarly to \texttt{CMOVcc} instruction, \textit{i.e.}, changing a byte of the tainted variable based on another condition, the gadget is not detected by the \textit{oo7}.
Although we found the top three gadget types that can be generated in C and that \textit{oo7} cannot detect, our findings are not an exhaustive search. As stated earlier, the run time of \textit{oo7} is massive, and we only selected 10\% of our data set. Hence, there may be more Spectre gadgets that we have not covered and, with high probability, the tool lacks rules for detecting them.
\subsection{Spectector}
\label{sec:Spectector}
\textit{Spectector}~\cite{guarnieri2020spectector} makes use of a symbolic execution technique to detect the potential Spectre-V1 gadgets. The tool directly takes an Assembly file as input, and the instructions are executed sequentially. If the tool finds data or control leakage, the leakage is reported along with example values, which cause the leakage. For each Assembly file, \textit{Spectector} is adjusted to track 25 symbolic paths of at most 5000 instructions each, with a global timeout of 30 minutes. The remaining parameters are kept as default.
When we analyze 100,000 gadgets with \textit{Spectector}, 23.75\% of the gadgets are not detected by \textit{Spectector}. The main reason behind the low detection rate is the unsupported instructions and registers since 96\% of the undetected gadgets contain unsupported instruction/register. In our analysis, we observed that \textit{Spectector} lacks some features: (1) a high number of unsupported instructions and unknown registers, (2) misinterpretation of fence instructions, (3) misinterpretation of \texttt{\%al, \%bl, \%cl} and \texttt{\%dl} registers.
\paragraph{\textbf{Unsupported instructions/registers}} A large set of instructions and registers in x86 are not supported by \textit{Spectector}, which lowers the detection rate in the real world. For instance, \texttt{prefetch, rol, ror, addpd, jnp} instructions are not recognized by \textit{Spectector}, where registers, such as \texttt{xmm, sp, r14w, bp, r14b}, are also unknown. If unknown registers and instructions are encountered during the symbolic execution, they are not executed and skipped. Therefore, the behavior of a potential Spectre gadget is not observed totally, which decreases the detection rate drastically.
\paragraph{\textbf{Fence instructions}} The fence instructions \texttt{lfence}, \texttt{mfence} and \texttt{sfence} are used to serialize the operations, which also affect the characteristic of the gadgets. Some compilers are also updated to insert fence instructions to prevent the leakages if there is a Spectre gadget~\cite{microsoft}. If \texttt{mfence} or \texttt{lfence} instruction is inserted between "\texttt{movl size(\%rip), \%eax}" (boundary check) and "\texttt{movzbl array2(\%rax), \%edx}" (secret encoding to cache) instructions, then the code snippet has no leakage. On the other hand, the presence of \texttt{sfence} instruction has no influence on the secret leakage.
In order to test the \textit{Spectector} tool, an automated analysis tool is written, which inserts fence instructions at different positions in a gadget. Then, it is checked whether the gadget still leaks the secret. In the last step, the same gadgets are analyzed with \textit{Spectector}. The results demonstrate that if the \texttt{mfence} or \texttt{lfence} instruction is inserted between "\texttt{movl size(\%rip), \%eax}" and the \texttt{jae} instruction in~\autoref{lis:xorb}, \textit{Spectector} gives false positives. It means that even though there is no leakage in the gadget, the tool mistakenly reports a data leakage. In addition, when the \texttt{sfence} instruction is introduced between \texttt{jae} and "\texttt{movb array2(\%rax), \%dl}", \textit{Spectector} cannot detect the leakage, which yields a false negative. Due to these potential misinterpretations of fence instructions, \textit{Spectector}'s success rate drops on our test data set.
\paragraph{\textbf{\%al, \%bl, \%cl, \%dl registers}} The \texttt{\%xl} registers are used to execute the instructions which are based on 8-bit operations. For instance, \texttt{\%al} is the lower 8 bits of \texttt{\%rax} register and when \texttt{\%al} is modified, the lowest 8 bits of \texttt{\%rax} are affected.
When the basic example Spectre gadget in~\autoref{lst:spectre} is compiled with icc compiler and -O2 optimization, the \texttt{array1[x]} variable is stored in \texttt{\%eax} register. Once the \texttt{\%al} register is zeroed by executing "\texttt{xorb \%al, \%al}" or "\texttt{subb \%al, \%al}" instruction in~\autoref{lis:xorb}, \textit{Spectector} no longer marks the Assembly file as a Spectre gadget. Moreover, this misdetection problem can be repeated for other 64-bit registers used by the secret variable. Hence, \textit{Spectector} is vulnerable to 8-bit registers if a portion of the secret resides in the corresponding 64-bit register.
When the Assembly files are generated from~\autoref{lst:cmov} and~\autoref{lst:prev} by using the \textit{GCC} compiler with -O0 optimization level, \textit{Spectector} can detect the gadgets. These results demonstrate that the past information and conditional moves are tracked with symbolic execution better than taint analysis.
\begin{lstlisting}[style=ASMstyle,
frame=single,
caption={\textcolor{red}{\textit{xorb \%al, \%al}} is added to $1^{st}$ example in Kocher examples~\cite{kocher2018spectre}. \textit{Spectector} is no longer able to detect the leakage due to the zeroing \%al register.},
label={lis:xorb},
xleftmargin=2em,
framexleftmargin=1.5em,
basicstyle=\footnotesize]
victim_function:
movl size
cmpq
jae .B1.2
movzbl array1
shlq $9,
xorb
movb array2
andb
.B1.2:
ret
\end{lstlisting}
\textit{oo7} and \textit{Spectector} use a set of hand-written rules, and the majority of the Spectre-gadgets are detected. Although this is a remarkable result, they fall short in generalizing to new Spectre gadgets. As a result, newly introduced gadgets and their variations can bypass these tools 100\% of the time. The comprehensive set of instructions that can fool these tools are given in ~\autoref{sec:gadgets_bypass_oo7_spectector}. Therefore, there is a need for a tool that can detect the gadgets we created and the unknown, not yet discovered gadgets. In order to achieve that, we first introduce SpectreGAN to diversify our random instruction data set automatically. Then, we train the BERT-based Spectre gadget detector, namely, FastSpec, to identify the gadgets in benign applications quicker than other methods.
\fi
\section{Introduction}
The new era of microarchitectural attacks began with the newly discovered Spectre~\cite{kocher2019spectre} and Meltdown~\cite{Lipp2018meltdown} attacks, which may be exploited to exfiltrate confidential information through microarchitectural channels during speculative and out-of-order execution.
The Spectre attacks target vulnerable code patterns called gadgets, which leak information during speculatively executed instructions.
While the initial variants of Spectre~\cite{kocher2019spectre} exploit conditional and indirect branches, Koruyeh et al.~\cite{spectrereturns} propose another Spectre variant by poisoning the entries in Return-Stack-Buffers (RSBs). Moreover, new Spectre-type attacks~\cite{chen2019sgxpectre,spectrereturns} are implemented against the SGX environment and even remotely over the network~\cite{schwarz2019netspectre}. These attacks show the applicability of Spectre attacks in the wild.
Unfortunately, chip vendors try to patch the leakages one by one with microcode updates rather than fixing the flaws by changing their hardware designs. Therefore, developers rely on automated malware analysis tools to eliminate inadvertently introduced Spectre gadgets in their programs. The proposed detection tools mostly implement taint analysis~\cite{wang2018oo7} and symbolic execution~\cite{wang2019kleespectre,guarnieri2020spectector} to identify potential gadgets in benign applications. However, the methods proposed so far suffer from two shortcomings: (1) the low number of available Spectre gadgets prevents a comprehensive evaluation of the tools, and (2) the analysis time increases drastically as the binary files become larger.
Thus, there is a need for a robust and fast analysis tool that can automatically discover potential Spectre gadgets in large-scale commercial software.
Natural Language Processing (NLP) techniques are applied to automate challenging natural language and text processing tasks~\cite{radford2019language}.
Later, NLP techniques have been used in the security domain, such as network traffic~\cite{radford2018sequence} and vulnerability analysis~\cite{redmond2018cross}.
Such applications leverage word~\cite{mikolov2013word2vec} or paragraph~\cite{le2014distributed} embedding techniques to learn the vector representations of the text.
The success of these techniques heavily depends on large data sets, which ease the training of scalable and robust NLP models. For Spectre, however, only $15$ gadget examples are available, making it crucial to create new Spectre gadgets before building an NLP-based detection tool.
Generative Adversarial Networks (GANs)~\cite{goodfellow2014generative} are a class of generative models that aim to produce new examples by learning the distribution of training instances in an adversarial setting. Since adversarial learning makes GANs more robust and applicable in real-world scenarios, GANs have become quite popular in recent years, with applications ranging from image generation~\cite{wang2018high,odena2017conditional} to text-to-image translation~\cite{reed2016generative}. While the early applications of GANs focused on computer vision, implementing the same techniques in NLP tasks poses a challenge due to the lack of a continuous space in text. To overcome this obstacle, various GAN-based techniques have been proposed to achieve better success in generating human-like sentences~\cite{fedus2018maskgan,gulrajani2017improved}. However, it is still unclear whether GANs can be used in the context of computer security to create application-specific code snippets. Additionally, each computer language has a distinct structure, semantics, and other features that make it more difficult to generate meaningful snippets for a specific application.
Neural vector embeddings~\cite{mikolov2013word2vec,le2014distributed}, used to obtain the vector representations of words, have proven extremely useful in NLP applications. Such embedding techniques also enable one to perform vector operations in high-dimensional space while preserving the meaningful relations between similar words. Typically, supervised techniques apply word embedding tools as an initial step to obtain the vector embedding of each token and then build a supervised model on top. For instance, BERT~\cite{devlin2019bert}, proposed by the Google AI team, learns the relations between different words in a sentence by applying a self-attention mechanism~\cite{vaswani2017attention}. BERT has exhibited superior performance compared to previous techniques~\cite{sutskever2014seq2seq,mikolov2010rnnlm} when combined with bi-directional learning. Furthermore, the attention mechanism improves GPU utilization while learning long sequences more efficiently. Recently, BERT-like architectures have been shown to be capable of modeling high-level programming languages~\cite{lachaux2020unsupervised, feng2020codebert}. However, it is still unclear whether they can effectively model a low-level programming language, such as Assembly, and help build more robust malware detection tools, which is the goal of this paper.
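To make the mechanism concrete, the following is a minimal numpy sketch of the scaled dot-product self-attention operation~\cite{vaswani2017attention} that BERT-like models build on. The dimensions, random inputs, and weight matrices are purely illustrative and do not correspond to BERT's actual configuration.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a token sequence X of shape (N, d)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # pairwise token affinities
    A = softmax(scores, axis=-1)             # each row is a distribution over tokens
    return A @ V, A

rng = np.random.default_rng(0)
N, d = 5, 8                                  # toy sequence length and embedding size
X = rng.normal(size=(N, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out, A = self_attention(X, Wq, Wk, Wv)
```

Each output row is a weighted mixture of all token representations, which is what allows the model to relate distant tokens in a sequence efficiently on a GPU.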
\textbf{Our Contributions} Our contributions are twofold. First, we increase the diversity of Spectre gadgets with the mutational fuzzing technique. We start with 15 examples~\cite{kocher2018spectre} and produce 1 million gadgets by introducing various instructions and operands to the existing gadgets. Then, we propose a GAN-based tool, namely, SpectreGAN, which learns the distribution of 1 million Spectre gadgets to generate new gadgets with high accuracy. The generated gadgets are evaluated from both semantic and microarchitectural aspects to verify their diversity and quality. Furthermore, we introduce novel gadgets that are not detected by state-of-the-art detection tools.
In the second part, we introduce FastSpec, a high-dimensional neural embedding based detection technique derived from BERT, to obtain a highly accurate and fast classifier for Spectre gadgets. We train FastSpec with the generated gadgets and achieve a 0.998 Area Under the Curve (AUC) score for OpenSSL libraries in the test phase. Further, we apply FastSpec to the Phoronix benchmark tests to show that FastSpec outperforms taint-analysis-based and symbolic-execution-based detection tools while significantly decreasing the analysis time.
In summary,
\begin{itemize}[itemsep=0pt,parsep=0pt,leftmargin=3mm]
\item We extend 15 base Spectre examples to 1 million gadgets by applying a mutational fuzzing technique,
\item We propose SpectreGAN which leverages conditional GANs to create new Spectre gadgets by learning the distribution of existing Spectre gadgets in a scalable way,
\item We show that both mutational fuzzing and SpectreGAN create diverse and novel gadgets which are not detected by \textit{oo7} and \textit{Spectector} tools,
\item We introduce FastSpec, which is based on supervised neural word embeddings to identify the potential gadgets in benign applications orders of magnitude faster than rule-based methods.
\end{itemize}
\textbf{Outline} The paper is organized as follows: First, the background on transient execution attacks and NLP is given in~\autoref{sec:background}. Then, related work is discussed in~\autoref{sec:related_work}. Next, we introduce both the fuzzing-based and SpectreGAN generation techniques in~\autoref{sec:detection_tools}. A new Transformer-based detection tool, namely FastSpec, is proposed in~\autoref{sec:NLP_classifier}. Finally, we conclude the paper with discussions and limitations in~\autoref{sec:discussion} and the conclusion in~\autoref{sec:conclusion}.
\section{Related Work}\label{sec:related_work}
\subsection{Spectre attacks and detectors}
\textbf{Spectre Variations and Covert Channels} In the first Spectre study~\cite{kocher2019spectre}, two variants were introduced. While Spectre-V1 exploits the conditional branch prediction mechanism when a bound check is present, Spectre-V2 manipulates the indirect branch predictions to leak the secret. Next, researchers discovered new variants of Spectre-based attacks. For instance, a variant of Spectre focuses on poisoning Return-Stack-Buffer (RSB) entries with the desired malicious return addresses~\cite{spectrereturns, Maisuradze_2018}. Another variant of Spectre, called Speculative Store Bypass~\cite{storetoload}, takes advantage of the memory disambiguator's prediction to create leakage. Traditionally, secrets are leaked through cache timing differences. Researchers then showed that there are also other covert channels to measure the time difference: network latency~\cite{schwarz2019netspectre}, port contention~\cite{bhattacharyya2019smotherspectre}, or a control-flow hijack based on return-oriented programming~\cite{mambretti2020bypassing}.
\textbf{Defenses against Spectre} There are various detection methods for speculative execution attacks. Taint analysis is used in the \textit{oo7}~\cite{wang2018oo7} software tool to detect leakages. Alternatively, taint analysis has been implemented in the hardware context to stop the speculative execution of secret-dependent data~\cite{schwarz2019context,yu2019speculative}. The second method relies on symbolic execution analysis. Spectector~\cite{guarnieri2020spectector} symbolically executes the programs where the conditional branches are treated as mispredicted. Furthermore, SpecuSym~\cite{guo2019specusym} and KleeSpectre~\cite{wang2019kleespectre} aim to model cache usage with symbolic execution to detect speculative interference, building on the Klee symbolic execution engine. Following a different approach, Speculator~\cite{mambretti2019speculator} collects performance counter values to detect mispredicted branches and the speculative execution domain. Finally, SpecFuzz~\cite{oleksenko2019specfuzz} leverages a fuzzing strategy to test functions with a diverse set of inputs. The tool then analyzes the control flow paths and determines the code snippets most likely vulnerable to speculative execution attacks.
\subsection{Binary Analysis with Embedding}
Binary analysis is one of the methods to analyze the security of a program. The analysis can be performed dynamically~\cite{nethercote2007valgrind} by observing the binary code running in the system. Alternatively, the binary can also be analyzed statically~\cite{song2008bitblaze}.
NLP techniques have been applied to binary analysis in recent years. Mostly, the studies leverage the aforementioned techniques to embed Assembly instructions and registers into a vector space. The most common usage of NLP in binary analysis is to find similarities between files. Asm2Vec~\cite{ding2019asm2vec} leverages a modified version of the PV-DM model to solve the obfuscation and optimization issues in a clone search. Zuo et al.~\cite{zuo2018neural} and Redmond et al.~\cite{redmond2018cross} solve the binary similarity problem with NLP techniques when the same file is compiled for different architectures. SAFE~\cite{massarelli2019safe} proposes a combination of skip-gram and RNN self-attention models to learn the embeddings of the functions from binary files to find the similarities.
\subsection{GAN-based Text Generation}
The first applications of GANs were mostly in computer vision, creating new images such as human faces~\cite{karras2017progressive,karras2019style}, photo blending~\cite{wu2019gp}, video generation~\cite{vondrick2016generating}, and so on. However, text generation is a more challenging task since it is more difficult to evaluate the performance of the outputs. One application~\cite{li2017adversarial} of GANs is in dialogue generation, where adversarial learning and reinforcement learning are applied together. SeqGAN~\cite{yu2017seqgan} introduces the gradient policy update with Monte Carlo search. LeakGAN~\cite{guo2018long} implements a modified policy gradient method to increase the usage of word-based features in adversarial learning. RelGAN~\cite{nie2019relgan} applies Gumbel-Softmax relaxation for training GANs as an alternative to the gradient policy update. SentiGAN~\cite{wang2018sentigan} proposes multiple generators to focus on several sentiment labels with one multi-class generator. However, to the best of our knowledge, the literature lacks GANs applied to Assembly code generation. To fill this gap, we propose SpectreGAN in~\autoref{sec:SpectreGAN}.
\subsection{SpectreGAN: Assembly Code Generation with GANs}\label{sec:SpectreGAN}
We introduce SpectreGAN, which learns the fuzzing generated gadgets in an unsupervised way and generates new Spectre-V1 variants from existing assembly language samples. The purpose of SpectreGAN is to develop an intelligent way of creating assembly functions instead of randomly inserting instructions and operands. Hence, the low success rate of gadget generation in the fuzzing technique can be improved further with GANs.
We build SpectreGAN based on the MaskGAN~\cite{fedus2018maskgan} model, with the 1.1 million examples generated in~\autoref{sec:dataset}. Since MaskGAN was originally designed for text generation, we modify the MaskGAN architecture to train SpectreGAN on assembly language. Finally, we evaluate the performance of SpectreGAN and discuss the challenges in assembly code generation.
\begin{figure*}
\centering
\scalefont{1.50}
\include{figures/GAN}
\vspace{-0.7cm}
\caption{SpectreGAN architecture. Blue and red boxes represent the encoder and decoder LSTM units, respectively. Green boxes represent the softmax layers. The listed assembly function (AT\&T format) on the left is fed to the models after the tokenization process. The critic model and the decoder part of the discriminator get the same sequence of instructions in the adversarial training.}
\label{fig:gan}
\end{figure*}
\subsubsection{SpectreGAN Architecture}
SpectreGAN has a generator model that learns and generates x86 assembly functions and a discriminator model that gives feedback to the generator model by classifying the generated samples as real or fake as depicted in~\autoref{fig:gan}.
\textbf{Generator} The generator model consists of an encoder-decoder (seq2seq) architecture~\cite{sutskever2014seq2seq} composed of two-layer stacked LSTM units. Firstly, the input assembly functions are converted to a sequence of tokens $T'=\{x'_1,...,x'_N\}$, where each token represents an instruction, register, parenthesis, comma, immediate value, or label. SpectreGAN is conditionally trained with each sequence of tokens, for which a masking vector $m=(m_1,...,m_N)$ with elements $m_t \in \{0,1\}$ is generated. The masking rate of $m$ is defined as $r_m = \dfrac{1}{N}\sum_{t=1}^N{m_t}$. $m(T')$ is the modified sequence in which $x'_t$ is replaced with the \texttt{<MASK>} token at the positions where $m_t=1$. Both $T'$ and $m(T')$ are converted into the lists of vectors $T=\{x_1,...,x_N\}$ and $m(T)$ by a lookup in a randomly initialized embedding matrix of size $V\times H$, where $V$ and $H$ are the vocabulary size and embedding vector dimension, respectively. In order to learn the masked tokens, $T$ and $m(T)$ are fed into the encoder LSTM units of the generator model. Each encoder unit outputs a hidden state $\overline{h}_s$, which is also given as an input to the next encoder unit. The last encoder unit ($e_{G}^{6}$ in~\autoref{fig:gan}) produces the final hidden state, which encapsulates the information learned from all assembly tokens.
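The masking and embedding-lookup pipeline above can be sketched as follows; the toy vocabulary, masking vector, and matrix dimensions are illustrative only and are not SpectreGAN's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(3)

# toy vocabulary mapping tokens to row indices of the embedding matrix
vocab = {"movq": 0, "(": 1, "%rax": 2, ")": 3, ",": 4, "%rdx": 5, "<MASK>": 6}
V, H = len(vocab), 16
E = rng.normal(size=(V, H))        # randomly initialized embedding matrix (V x H)

tokens = ["movq", "(", "%rax", ")", ",", "%rdx"]   # T'
m = [0, 0, 1, 0, 0, 1]                             # masking vector, r_m = 2/6
masked = [t if mt == 0 else "<MASK>" for t, mt in zip(tokens, m)]  # m(T')

T = E[[vocab[t] for t in tokens]]                  # T: embedding vectors of T'
mT = E[[vocab[t] for t in masked]]                 # m(T)
r_m = sum(m) / len(m)                              # masking rate
```

Both `T` and `m(T)` would then be consumed by the encoder LSTM stack.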
The decoder state is initialized with the encoder's final hidden state, and the decoder LSTM units are fed with $m(T)$ at each iteration. To calculate the hidden state $\widetilde{h}_t$ of each decoder unit, the attention mechanism output and the current state of the decoder $h_t$ are combined.
The attention mechanism reduces the information bottleneck between the encoder and decoder and eases training~\cite{bahdanau2015attention} on the long token sequences in the assembly function data set. The attention mechanism is implemented in exactly the same way for both the generator and discriminator models, as illustrated in the discriminator part of~\autoref{fig:gan}. The alignment score vector $a_t$ is calculated as:
\begin{equation}\label{eq:alignment}
a_t(s) =\dfrac{e^{h_t^\top \overline{h}_s}}{\sum_{s'=1}^N{e^{h_t^\top \overline{h}_{s'}}}},
\end{equation}
where $a_t$ describes the weights of $\overline{h}_s$ for a token $x'_t$ at time step $t$, and $h_t^\top \overline{h}_s$ is the score value between the token $x'_t$ and $T'$. This forces the decoder to consider the relation between each instruction, register, label, and the other tokens before generating a new token. The context vector $c_t$ is calculated as the weighted sum of $\overline{h}_s$ as follows:
\begin{equation}\label{eq:context}
c_t = \sum_{s=1}^N{a_t(s)\overline{h}_{s}}.
\end{equation}
For a context vector $c_t$, the final attention-based hidden state $\widetilde{h}_t$ is obtained by a fully connected layer with a hyperbolic tangent activation function,
\begin{equation}\label{eq:attention}
\widetilde{h}_t = tanh(W_c[c_t;h_t]),
\end{equation}
where $[c_t;h_t]$ is the concatenation of $c_t$ and $h_t$, and $W_c$ contains the trainable weights. The output list of tokens $\widetilde{T}=(\widetilde{x}_1,...,\widetilde{x}_N)$ is then generated by filling the masked positions of $m(T')$ where $m_t=1$. The probability distribution $p(y_t|y_{1:t-1},x_t)$ is calculated as,
\begin{equation}\label{eq:gen_prob}
p(y_t|y_{1:t-1},x_t) = \dfrac{e^{W_s\widetilde{h}_t}}{\sum e^{W_s\widetilde{h}_t}},
\end{equation}
where $y_t$ is the output token and the attention-based hidden state $\widetilde{h}_t$ is fed into the softmax layer, which is represented by the green boxes in~\autoref{fig:gan}. It is important to note that the softmax layer is modified to introduce randomness at the output of the decoder through a sampling operation. The predicted token is selected based on the probability distribution over the vocabulary, \textit{i.e.}, if a token has a probability of 0.3, it will be selected with a 30\% chance. This prevents the token with the highest probability from being selected every time. Hence, the predicted token can differ at each run, which increases the diversity of the generated gadgets.
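The attention and sampling steps in Eqs.~(1)--(4) can be sketched numerically as follows. All sizes, random states, and weight matrices are toy values chosen for illustration; they do not reflect the trained model.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

N, H, V = 6, 16, 32                       # sequence length, hidden size, vocabulary size
h_bar = rng.normal(size=(N, H))           # encoder hidden states \bar{h}_s
h_t = rng.normal(size=H)                  # current decoder state
W_c = rng.normal(size=(H, 2 * H)) * 0.1   # attention projection weights
W_s = rng.normal(size=(V, H)) * 0.1       # softmax (output) layer weights

a_t = softmax(h_bar @ h_t)                # Eq. (1): alignment scores over all s
c_t = a_t @ h_bar                         # Eq. (2): context vector
h_tilde = np.tanh(W_c @ np.concatenate([c_t, h_t]))  # Eq. (3)
p = softmax(W_s @ h_tilde)                # Eq. (4): distribution over the vocabulary
token = rng.choice(V, p=p)                # sample instead of taking the argmax
```

The final line is the modified softmax behavior: sampling from `p` rather than selecting the maximum, so repeated runs can emit different tokens.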
\textbf{Discriminator} The discriminator model has a very similar architecture to the generator model. The encoder and decoder units in the discriminator model are again two-layer stacked LSTM units. The embedding vectors $m(T)$ of tokens $m(T')$, where we replace $x'_t$ with \texttt{<MASK>} when $m_t=1$, are fed into the encoder. The hidden vector encodings $\overline{h}_s$ and the encoder's final state are given to the decoder.
The LSTM units in the decoder are initialized with the final hidden state of the encoder and $\overline{h}_s$ is given to the attention layer. The list of tokens $\widetilde{T}$ which represents the generated assembly function by the generator model is fed into the decoder LSTM unit with \textit{teacher forcing}. The previous calculations for $a_t(s)$, $c_t$ and $\widetilde{h}_t$ stated in~\autoref{eq:alignment},~\ref{eq:context}, and~\ref{eq:attention} are valid for the attention layer in the discriminator model as well. The attention-based state value $\widetilde{h}_t$ is fed through the softmax layer which outputs only one value at each time step $t$,
\begin{equation}
p_D(\widetilde{x}_t=x^{real}_t|\widetilde{T}) = \dfrac{e^{W_s\widetilde{h}_t}}{\sum e^{W_s\widetilde{h}_t}},
\end{equation}
which is the probability that the generated token is the real target token $x_t^{real}$.
SpectreGAN has one more model apart from the generator and the discriminator, called the critic model, which consists of a single two-layer stacked LSTM unit. The critic model is initialized with zero states and receives the same input $\widetilde{T}$ as the decoder. The output of the LSTM unit at each time step $t$ is given to the softmax layer, and we obtain
\begin{equation}
p_C(\widetilde{x}_t=x^{real}_t|\widetilde{T}) = \dfrac{e^{W_b {h}_t}}{\sum e^{W_b {h}_t}},
\end{equation}
which is an estimated version of $p_D$. The purpose of introducing a critic model for probability estimation will be explained in~\autoref{sec:gan_training}.
\subsubsection{Training}\label{sec:gan_training}
The training procedure consists of two main phases, namely pre-training and adversarial training.
\paragraph{Pre-training phase} The generator model is first trained with maximum likelihood estimation. The real token sequence $T'$ and masked version $m(T')$ are fed into the generator model's encoder. Only the real token sequence $T'$ is fed into the decoder using \textit{teacher forcing} in the pre-training. The training maximizes the log-probability of generated tokens, $\widetilde{x}_t$ given the real tokens, $x'_t$, where $m_t=1$. Therefore, the pre-training objective is
\begin{equation}
\dfrac{1}{N} \sum_{t=1}^N{\log{p(m(\widetilde{x}_t)|m(x'_t))}},
\end{equation}
where $p(m(\widetilde{x}_t)|m(x'_t))$ is calculated only for the masked positions. The masked pre-training objective ensures that the model is trained for a \textit{Cloze} task~\cite{taylor1953cloze}.
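As a toy illustration of the pre-training objective, the sketch below averages the log-probabilities of the generated tokens over the whole sequence length $N$ while only the masked positions contribute, matching the equation above. The function name and the numeric values are made up for illustration.

```python
import numpy as np

def pretrain_objective(log_probs, mask):
    """(1/N) * sum over masked positions (m_t = 1) of log p(x~_t | x'_t)."""
    log_probs = np.asarray(log_probs, dtype=float)
    mask = np.asarray(mask, dtype=float)
    return float((mask * log_probs).sum() / len(log_probs))

# toy example: only positions 1 and 2 (m_t = 1) contribute to the objective
obj = pretrain_objective(np.log([0.5, 0.25, 0.125, 0.5]), [0, 1, 1, 0])
```

Maximizing this quantity trains the generator to fill in the masked tokens, i.e., to solve a \textit{Cloze}-style task.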
\paragraph{Adversarial training phase} The second phase is adversarial training, where the generator and the discriminator are trained with the GAN framework. Since the generator model has a sampling operation from the probability distribution stated in~\autoref{eq:gen_prob}, the overall GAN framework is not differentiable. We utilize the policy gradients to train the generator model, as described in the previous works~\cite{yu2017seqgan,fedus2018maskgan}.
The reward $r_t$ for a generated token $\widetilde{x}_t$ is calculated as the logarithm of $p_D(\widetilde{x}_t=x^{real}_t|\widetilde{T})$. The aim of the generator model is to maximize the total discounted rewards $R_t=m(\sum_{s=t}^{N} \gamma^{s} r_s)$ for the fake samples, where $\gamma$ is the discount factor. Therefore, for each token, the generator is updated with the gradient in~\autoref{eq:reinforce_gradient} using the REINFORCE algorithm, where $b_t=\log{p_C(\widetilde{x}_t=x^{real}_t|\widetilde{T})}$ is the baseline reward given by the critic model. Subtracting $b_t$ from $R_t$ helps reduce the variance of the gradient~\cite{fedus2018maskgan}.
\begin{equation}\label{eq:reinforce_gradient}
\nabla_\theta\mathbb{E}_G[R_t] = (R_t - b_t)\nabla_\theta \log G_\theta(\tilde{x}_t)
\end{equation}
To train the discriminator model, both real sequence $T$ and fake sequence $\widetilde{T}$ are fed into the discriminator. Then, the model parameters are updated such that $\log{p_D(\widetilde{x}_t=x^{real}_t|\widetilde{T})}$ is minimized and $\log{p_D(x_t=x^{real}_t|T)}$ is maximized using maximum log-likelihood estimation.
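The reward and baseline computation can be sketched as follows. The sketch follows the discounted-reward expression in the text (with $\gamma^{s}$ and the masking term omitted for brevity); the probability values are made up, standing in for the discriminator and critic outputs.

```python
import numpy as np

def discounted_rewards(r, gamma):
    """Total discounted reward R_t = sum_{s >= t} gamma^s * r_s, per token."""
    N = len(r)
    return np.array([sum(gamma ** s * r[s] for s in range(t, N)) for t in range(N)])

# rewards are log p_D for generated tokens; baseline b_t is log p_C from the critic
r = np.log([0.6, 0.4, 0.7])          # toy discriminator probabilities
b = np.log([0.5, 0.5, 0.5])          # toy critic (baseline) probabilities
R = discounted_rewards(r, gamma=0.89)
advantage = R - b                    # (R_t - b_t) scales each token's log-likelihood gradient
```

The `advantage` vector is what multiplies $\nabla_\theta \log G_\theta(\tilde{x}_t)$ in the REINFORCE update of~\autoref{eq:reinforce_gradient}.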
\subsubsection{Tokenization and Training Parameters} \label{sec:gan_details}
Firstly, we pre-process the fuzzing-generated data set to convert the assembly functions into sequences of tokens, $T'=(x'_1,...,x'_N)$. We keep commas, parentheses, immediate values, labels, and instruction and register names as separate tokens. To decrease the complexity, we reduce the vocabulary size of the tokens and simplify the labels in each function so that the total number of distinct labels is minimal.
The tokenization process converts the instruction "\texttt{movq~(\%rax),~\%rdx}" into the list \texttt{["movq", "(", "\%rax", ")", ",", "\%rdx"]} where each element of the list is a token $x'_t$. Hence, each token list $T'=\{x'_1,...,x'_N\}$ represents an assembly function in the data set.
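A minimal regex-based sketch of this tokenization step is given below; the actual pre-processing pipeline may differ, but it reproduces the example above.

```python
import re

def tokenize(instruction):
    """Split one AT&T-syntax instruction into tokens, keeping ( ) , separate."""
    return re.findall(r"[(),]|[^\s(),]+", instruction)

tokens = tokenize("movq (%rax), %rdx")
# tokens == ["movq", "(", "%rax", ")", ",", "%rdx"]
```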
The masking vector has two different roles in the training. While a random masking vector $m=(m_1,...,m_N)$ is initialized for the pre-training, we generate $m$ as a contiguous block with a random starting position in the adversarial training. In both training phases, the first token's mask is always selected as $m_1=0$, meaning that the first token given to the model is always real. The masking rate, $r_m$ determines the ratio of masked tokens in an assembly function whose effect on code generation is analyzed further in~\autoref{sec:gan_eval}.
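The two masking modes can be sketched as follows, with the constraint $m_1=0$ enforced in both cases. The helper name and the seed are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(7)

def make_mask(N, r_m, contiguous=False):
    """Masking vector m with m_1 = 0: random positions for pre-training,
    a contiguous block with a random start for adversarial training."""
    m = np.zeros(N, dtype=int)
    k = int(round(r_m * N))                      # number of masked tokens
    if contiguous:
        start = rng.integers(1, N - k + 1)       # random start, never position 0
        m[start:start + k] = 1
    else:
        pos = rng.choice(np.arange(1, N), size=k, replace=False)
        m[pos] = 1
    return m

m = make_mask(N=10, r_m=0.3, contiguous=True)
```

In both modes the fraction of ones equals the masking rate $r_m$, whose effect on gadget quality is examined in~\autoref{sec:gan_eval}.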
SpectreGAN is configured with the embedding vector size of $d=64$, generator learning rate of $\eta_G=5\times10^{-4}$, discriminator learning rate of $\eta_D=5\times10^{-3}$, critic learning rate of $\eta_C=5\times10^{-7}$ and discount rate of $\gamma=0.89$ based on the MaskGAN implementation~\cite{fedus2018maskgan}. We select the sequences with a maximum length of 250 tokens, building the vocabulary with a size of $V=419$. We separate 10\% of the data set for model validation. SpectreGAN is trained with a batch size of $100$ on NVIDIA GeForce GTX 1080 Ti until the validation perplexity converges in~\autoref{fig:test}. The pre-training lasts about 50 hours, while the adversarial training phase takes around 30 hours.
\subsubsection{Evaluation} \label{sec:gan_eval}
SpectreGAN is based on learning masked tokens from the surrounding tokens. The masking rate is not a fixed value; it is determined based on the context. Since SpectreGAN is the first study to train on Assembly functions, the choice of masking rate is of utmost importance for generating high-quality gadgets. Typically, NLP-based generation techniques are evaluated by their perplexity score, which indicates how well the model predicts a token. Hence, we evaluate the performance of SpectreGAN for various masking rates via their perplexity scores. In~\autoref{fig:test}, the perplexity converges with the increasing number of training steps, which means the tokens are predicted with higher accuracy towards the end of the training. SpectreGAN achieves lower perplexity with higher masking rates, which indicates that higher masking rates are preferable for SpectreGAN.
Even though higher masking rates yield lower perplexity and assembly functions of high quality in terms of token probabilities, our purpose is to create functions that behave as Spectre gadgets. Therefore, as a second test, we generate 100,000 gadgets for 5 different masking rates. Next, we compile the gadgets with the \textit{GCC} compiler and then test them with the attacker code to verify their secret leakage. When SpectreGAN is trained with a masking rate of 0.3, the success rate of the gadgets reaches up to 72\%. Interestingly, the success rate drops for other masking rates, demonstrating the importance of the masking rate choice. In total, 70,000 gadgets are generated with a masking rate of 0.3 to evaluate the performance of SpectreGAN in terms of gadget diversity in~\autoref{sec:diversity}.
\begin{figure}[t!]
\centering
\includegraphics[width=\columnwidth, height=5cm]{figures/perplexity_success_rate_combined.pdf}
\caption{(Above) The validation perplexity decreases at each training step and converges for all $r_m$. (Below) Spectre gadget success rates when different masking rates are used to train SpectreGAN.
The Spectre gadget success rate is the percentage of working gadgets among the compiled functions.
}
\label{fig:test}
\end{figure}
To illustrate the generated samples, we feed the gadget in~\autoref{lst:spec_gadget} to SpectreGAN and generate a new gadget, shown in~\autoref{lst:gan_gadget}. We demonstrate that SpectreGAN is capable of generating realistic assembly code snippets by inserting, removing, or replacing instructions, registers, and labels. In~\autoref{lst:gan_gadget}, the lines that begin with instructions written in red are generated by SpectreGAN, and they correspond to the masked portion of the Spectre-V1 gadget given in~\autoref{lst:spec_gadget}.
\begin{minipage}[b]{0.35\columnwidth}
\vspace{3mm}
\centering
\lstset{label=SliceExaple,columns=flexible}
\begin{lstlisting}[style=ASMstyle2,
caption=Input Spectre-V1 gadget,backgroundcolor=\color{white},frame=single,
xleftmargin=-0.4em,
framexleftmargin=1em,
framexrightmargin=-2.5em,
numbers=left, label={lst:spec_gadget}, linewidth =3.96cm,
basicstyle=\footnotesize]
victim_function:
.cfi_startproc
movl size
cmpq
jbe .L0
leaq array1
movzbl
ror $1
shlq $9
leaq array2
movss
movb
andb
movd
test
sbbl
.L0:
retq
cmovll
.cfi_endproc
\end{lstlisting}
\end{minipage}
\hspace{-5mm}
\begin{minipage}[b]{0.49\columnwidth}
\centering
\lstset{label=SliceExaple2,columns=flexible}
\begin{lstlisting}[style=ASMcaner,
escapechar=!,
caption=Generated gadget by SpectreGAN,backgroundcolor=\color{white},frame=single,
xleftmargin=2em,
framexleftmargin=1em,
framexrightmargin=0em,
label={lst:gan_gadget},
linewidth = 3.92cm, basicstyle=\footnotesize]
victim_function:
.cfi_startproc
movl size
cmpq
jbe .L0
leaq array1
movzbl
ror $1
shlq $9
movb array2
andb
.L1:
andb
movb array2
andb
sbbl
.L0:
retq
cmovll
.cfi_endproc
\end{lstlisting}
\end{minipage}
\subsection{Diversity and Quality Analysis of Generated Gadgets}\label{sec:diversity}
In total, 1.2 million gadgets are generated by the mutational fuzzing technique and SpectreGAN. Since the gadgets are derived from existing examples, it is crucial to analyze their diversity and quality. The diversity is measured by syntactic analysis, e.g., counting the number of unique n-grams in the gadgets. As a quality metric, we monitor performance counters while the gadgets are executed. 5,000 gadgets are randomly selected from each gadget generation technique to perform the syntactic and microarchitectural analyses. Furthermore, novel gadgets that are not detected by the \textit{oo7}~\cite{wang2018oo7} and \textit{Spectector}~\cite{guarnieri2020spectector} tools are given to show that our gadget generation techniques produce meaningful Spectre-V1 gadgets.
\subsubsection{Syntactic Analysis}
In NLP applications, the diversity of generated texts is evaluated by counting the number of unique n-grams. The most common metrics for text diversity are perplexity and BLEU scores, which are calculated based on the probabilistic occurrences of n-grams in a sequence. A higher number of unique n-grams indicates that an NLP model learns the data set distribution efficiently and produces new sequences with high diversity. However, both scores are obtained during the training phase, making it impossible to evaluate the fuzzing-generated gadgets, since fuzzing has no training phase. Instead, we conduct the diversity analysis by counting the unique n-grams introduced by the fuzzing and SpectreGAN methods after all the gadgets are generated.
The number of unique n-grams in the generated gadgets is compared with the 17 base examples in \autoref{tab:n-grams}. The unique n-grams are calculated as follows: first, the unique n-grams produced by fuzzing are identified and stored in a list; then, the additional unique n-grams introduced by SpectreGAN are noted. Therefore, the SpectreGAN column in~\autoref{tab:n-grams} represents the number of n-grams introduced by SpectreGAN, excluding the fuzzing-generated n-grams.
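The accounting just described can be sketched as follows (a sketch of ours, not the paper's implementation: token lists stand in for disassembled gadgets, and the helper names are our own):

```python
from itertools import islice

def ngrams(tokens, n):
    """Yield all contiguous n-grams of a token sequence as tuples."""
    return zip(*(islice(tokens, i, None) for i in range(n)))

def unique_ngram_counts(base, fuzzing, gan, n):
    """Mirror the accounting described above: n-grams introduced by
    fuzzing are counted first; SpectreGAN is credited only with n-grams
    that fuzzing did not already produce, so the total is the union."""
    base_set = {g for gadget in base for g in ngrams(gadget, n)}
    fuzz_set = {g for gadget in fuzzing for g in ngrams(gadget, n)}
    gan_set = {g for gadget in gan for g in ngrams(gadget, n)}
    gan_only = gan_set - fuzz_set
    total = fuzz_set | gan_set
    return len(base_set), len(fuzz_set), len(gan_only), len(total)
```

With this accounting, the totals in the table decompose exactly as fuzzing plus SpectreGAN-exclusive n-grams.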
\begin{table}[h]
\centering
\caption{Number of unique n-grams in the base gadgets and in the gadgets generated by the fuzzing and SpectreGAN methods. The last column gives the total number of unique n-grams together with the increase factor over the base gadgets, which grows with n.}
\small
\begin{tabular}{>{\centering\arraybackslash}m{0.03\columnwidth}|>{\centering\arraybackslash}m{0.09\columnwidth}|>{\centering\arraybackslash}m{0.12\columnwidth}|>{\centering\arraybackslash}m{0.2\columnwidth}|>{\centering\arraybackslash}m{0.28\columnwidth}}
\hline \toprule
\textbf{n} & \textbf{Base} & \textbf{Fuzzing} & \textbf{SpectreGAN} & \textbf{Total}\\ \midrule
2 & 2,069 & 15,448 & 7,462 & 22,910 ($\times$11)\\
3 & 3,349 & 181,606 & 91,851 & 273,457 ($\times$82)\\
4 & 4,161 & 639,608 & 460,317 & 1,099,925 ($\times$264)\\
5 & 4,747 & 998,279 & 921,519 & 1,919,798 ($\times$404)\\
\bottomrule
\end{tabular}
\label{tab:n-grams}
\end{table}
In total, the number of unique bigrams (2-grams) increases from 2,069 to 22,910, a more than tenfold rise. While the new instructions and registers added by fuzzing improve gadget diversity, SpectreGAN contributes by producing unique perturbations. Since instruction diversity increases drastically compared to the base gadgets, the number of unique 5-grams reaches almost 2 million, 400 times higher than in the base gadgets. The results show that both fuzzing and SpectreGAN broaden the diversity of the generated gadgets. High diversity in the gadget data set also yields diverse microarchitectural behavior, as well as new Spectre-V1 gadgets that were not considered during the design of existing detection mechanisms.
\subsubsection{Microarchitectural Analysis}
Another purpose of gadget generation is to introduce new instructions and operands to create high-quality gadgets. To assess the quality of the gadgets, we analyze their microarchitectural characteristics. The first challenge is to examine the effects of instructions in the transient domain, since they are not visible in the architectural state. After carefully analyzing the performance counters for the Haswell architecture, we determined that two counters, namely $uops\_issued:any$ and $uops\_retired:any$, give insight into a gadget's microarchitectural behavior. The $uops\_issued:any$ counter is incremented every time a $\mu$op is issued; it counts both speculative and non-speculative $\mu$ops. In contrast, the $uops\_retired:any$ counter only counts executed and committed $\mu$ops, which automatically excludes speculatively executed $\mu$ops.
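As a rough sketch (ours, not part of the original evaluation code), the transient work of a gadget can be estimated as the difference between the two counters; assigning issued $\mu$ops to the x-axis and retired $\mu$ops to the y-axis is our reading of the figure below:

```python
def speculative_uops(issued, retired):
    """uops_issued:any counts both speculative and non-speculative uops,
    while uops_retired:any counts only committed ones, so the difference
    estimates the work done on the transient (squashed) path."""
    if retired > issued:
        raise ValueError("retired uops cannot exceed issued uops")
    return issued - retired

def quality_score(issued, retired):
    """Heuristic reading of the figure: a gadget is considered higher
    quality when it lies close to the x-axis (few retired uops) and far
    from the y-axis (many issued uops), i.e. most uops are transient."""
    if issued == 0:
        return 0.0
    return speculative_uops(issued, retired) / issued
```

A gadget with 100 issued but only 40 retired $\mu$ops would thus score 0.6, i.e. most of its work happened speculatively.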
\vspace{0.3cm}
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{figures/counter_diversity.pdf}
\caption{The distribution of base (red triangle), fuzzing-generated (blue square) and SpectreGAN-generated (green circle) gadgets for the issued and retired $\mu$ops counters. Both SpectreGAN and the fuzzing technique generate a diverse set of gadgets on the Haswell architecture.}
\label{fig:counter_diversity}
\end{figure}
The performance counter distributions of the generated and base gadgets are given in~\autoref{fig:counter_diversity}. Gadget quality is measured by the number of instructions in the transient domain after a gadget passes the verification step. Exploitable gadgets in commercial software contain many instructions that are speculatively executed before the secret is leaked. If our detection tool in~\autoref{sec:NLP_classifier} were trained only with the simple gadgets from Kocher's examples, its success rate on large-scale software binaries would be low. Moreover, the gadgets detected in the case studies are very similar to the generated gadgets, which have more instructions in the transient domain. A similar observation is shared in~\cite{wang2019oo7}, where the authors report Spectre gadgets with up to 150 instructions between the conditional branch and the speculative memory access. Since our aim is to create realistic gadgets by inserting various instructions, we consider a gadget to be of higher quality when it lies close to the x-axis and far from the y-axis.
High-quality gadgets are more likely to be obtained with the fuzzing method, as new instructions and operands are added randomly. On the other hand, SpectreGAN learns the essential structure of the fuzzing-generated gadgets, which yields almost the same number of samples close to the x-axis in~\autoref{fig:counter_diversity}. Moreover, SpectreGAN automates the creation of gadgets with a higher accuracy (72\%) compared to the fuzzing technique (5\%).
\subsubsection{Detection Analysis}
Even though the microarchitectural and syntactic analyses show that fuzzing and SpectreGAN produce diverse, high-quality sets of gadgets, we also aim to enable a comprehensive evaluation of detection tools and to determine the most interesting gadgets in our data set. For this reason, the generated gadgets are fed into the \textit{Spectector}~\cite{guarnieri2020spectector} and \textit{oo7}~\cite{wang2018oo7} tools to determine their novelty.
\textbf{oo7} leverages taint analysis to detect Spectre-V1 gadgets. It is built on the Binary Analysis Platform (BAP)~\cite{BAP}, which performs forward taint propagation along all possible paths once a conditional branch is encountered. \textit{oo7}~\footnote{https://gitlab.com/igoto/spectre-detector} relies on a set of hand-written rules which cover the existing examples by Kocher~\cite{kocher2018spectre}. Although our data set contains 1.2 million gadgets, we selected 100,000 samples, drawn uniformly at random across the gadget examples, due to the immense time consumption of \textit{oo7} (150 hours for $\sim$100K gadgets), on which it achieves a 94\% detection rate.
Interestingly, certain gadget types from both fuzzing and SpectreGAN are not caught by \textit{oo7}. When a gadget contains a \textit{cmov}, \textit{xchg}, or \textit{set} instruction (or one of their variants), it is not identified as a Spectre gadget. Hence, we introduce these as novel Spectre-V1 gadgets, listed in~\autoref{lst:cmov} and~\autoref{lst:prev}. Their corresponding assembly snippets are given in Appendix~\ref{sec:gadgets_bypass_oo7_spectector}.
\vspace{0.3cm}
\begin{lstlisting}[ backgroundcolor=\color{white},
frame=single,
xleftmargin=2em,
framexleftmargin=1.5em,
language=C,
caption= {\texttt{CMOV} gadget: An example Spectre gadget in C. When compiled with \textit{gcc-7.5} at the -O2 optimization level, the \texttt{CMOVcc} gadget bypasses the \textit{oo7} tool. The generated assembly version is given in Appendix~\ref{sec:gadgets_bypass_oo7_spectector}.},
label={lst:cmov}]
void victim_function(size_t x) {
    if (global_condition)
        x = 0;
    if (x < size)
        temp &= array2[array1[x] * 512];
}
\end{lstlisting}
\vspace{0.1cm}
\begin{lstlisting}[ backgroundcolor=\color{white},
frame=single,
xleftmargin=2em,
framexleftmargin=1.5em,
language=C,
caption= {\texttt{XCHG} gadget: When a past attacker-controlled value is used to leak the secret, \textit{oo7} cannot detect the gadget. This example shows that control-flow graph extraction is not implemented efficiently in the \textit{oo7} tool.},
label={lst:prev}]
size_t prev = 0xff;
void victim_function(size_t x) {
    if (prev < size)
        temp &= array2[array1[prev] * 512];
    prev = x;
}
\end{lstlisting}
We identified two potential issues in the static taint analysis of the \textit{oo7} tool. First, if a portion of a tainted variable is modified by an instruction such as \textit{cmov} or \textit{set}, the variable is no longer tracked by the tool. However, an attacker still controls the remaining portion of the variable, which makes it possible to leak the secret from memory. In other words, the static taint analysis is not sufficiently accurate to track partially modified tainted variables, leading to under-tainting. Second, tainted variables are not tracked between the iterations of a loop. If an old attacker-controlled variable is used to access the secret, \textit{oo7} fails to taint that variable across the iterations of a \textit{for} loop; hence, any old attacker-controlled variable can be used to bypass the tool. This indicates that control-flow graphs spanning multiple iterations may not be extracted correctly by \textit{oo7}. Both weaknesses show that hand-written rules do not generalize well for Spectre gadget detection once new Spectre-V1 gadgets are discovered.
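The two weaknesses can be illustrated with a toy taint tracker (purely our sketch; the real \textit{oo7} operates on BAP's intermediate representation, not on triples like these):

```python
def naive_taint(instrs, tainted):
    """Toy forward taint propagation over (op, dst, src) triples that
    mimics the two weaknesses described above: a partial write via
    cmov/set drops taint from the whole register, and a single pass
    over a loop body misses taint carried between iterations."""
    t = set(tainted)
    for op, dst, src in instrs:
        if op in ("cmov", "set"):
            t.discard(dst)       # under-tainting: partial write clears taint
        elif op == "mov":
            if src in t:
                t.add(dst)       # taint flows from src to dst
            else:
                t.discard(dst)   # overwrite with untainted data
    return t

def loop_taint(body, tainted, iterations=1):
    """Unrolling the loop body more than once reveals taint that a
    single-pass analysis (iterations=1) misses, as in the XCHG gadget
    where `prev = x` only becomes exploitable on the next iteration."""
    t = set(tainted)
    for _ in range(iterations):
        t = naive_taint(body, t)
    return t
```

Running the XCHG-style body `[("mov", "load_addr", "prev"), ("mov", "prev", "x")]` once leaves `load_addr` untainted, while a second pass exposes the leak, matching the behavior we observed in \textit{oo7}.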
\textbf{Spectector}~\cite{guarnieri2020spectector} uses symbolic execution to detect potential Spectre-V1 gadgets. For each assembly file, \textit{Spectector} is configured to track 25 symbolic paths of at most 5,000 instructions each, with a global timeout of 30 minutes. The remaining parameters are kept at their default values.
First, we eliminate the gadgets that include unsupported instructions, as these are never detected by \textit{Spectector}. Among the remaining gadgets, 1\% are not detected successfully. The undetected gadgets are then examined to identify novel ones.
We determined two issues in the \textit{Spectector} tool. The first concerns barrier instructions. Even though the \textit{lfence}, \textit{sfence} and \textit{mfence} instructions have different purposes, the tool treats them as equivalent. For instance, if an \textit{sfence} instruction is present after the conditional branch, the tool classifies the gadget as safe. However, \textit{sfence} has no effect on load operations, so the gadget still leaks the secret. Hence, Spectector's modeling of fences does not distinguish between the various x86 fence instructions. The second issue concerns 8-bit registers in which partial information about the elements of \textit{array[x]} is stored. When 8-bit registers are used to modify the elements, as in~\autoref{lst:xorb}, \textit{Spectector} is no longer able to detect the gadgets. This issue is also mentioned in~\cite{guarnieri2020spectector}: sub-registers are currently not supported by the tool. Overall, these issues stem from the translation of x86 assembly into Spectector's intermediate language.
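The fence issue admits a simplified model (our sketch; the load-ordering properties follow the Intel SDM, where \textit{lfence} and \textit{mfence} order subsequent loads while \textit{sfence} orders stores only):

```python
# Which fences serialize subsequent *loads*, the property that matters
# for stopping a Spectre-V1 transient access; sfence orders stores only.
SERIALIZES_LOADS = {"lfence": True, "mfence": True, "sfence": False}

def fence_mitigates(instrs):
    """Scan an instruction window for an effective load-serializing
    fence occurring before the first load (mov-family) instruction."""
    for ins in instrs:
        if SERIALIZES_LOADS.get(ins):
            return True          # effective fence reached before the load
        if ins.startswith("mov"):
            return False         # (possibly transient) load happens first
    return False
```

A sound fence model must distinguish these cases rather than treating all three mnemonics as one barrier, which is precisely where \textit{Spectector} misclassifies the \textit{sfence} variant as safe.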
We show that our large-scale, diverse gadget data set establishes a ground truth for evaluating detection tools accurately. As the case studies on \textit{Spectector} and \textit{oo7} demonstrate, the detection rate on our 1.1 million sample data set can serve as a generic evaluation metric while identifying flaws in detection tools.
\begin{lstlisting}[style=ASMstyle,
frame=single,
caption={\textcolor{red}{\textit{xorb \%al, \%al}} is added to the $1^{st}$ of Kocher's examples~\cite{kocher2018spectre}. \textit{Spectector} is no longer able to detect the leakage due to the zeroing of the \%al register.},
label={lst:xorb},
xleftmargin=2em,
framexleftmargin=1.5em,
basicstyle=\footnotesize,
float=h]
victim_function:
movl size
cmpq
jae .B1.2
movzbl array1
shlq $9,
xorb
movb array2
andb
.B1.2:
ret
\end{lstlisting}
\section{Introduction}
The black-hole evaporation~\cite{Hawking} is a remarkable discovery in black-hole physics. This
effect reveals itself as a positive energy-density flux measurable sufficiently far away from the event
horizon. The thermal profile of the mode distribution characterising this energy flux suggests that one
could define other \emph{local observables} which are usually attributed to normal/classical many-particle
systems (rarefied gases, plasmas and so on). To our knowledge, there has been no progress in this direction.
In this paper, we attempt to make it by exploiting quantum kinetic theory.
Our main purpose in this paper is therefore to study local kinetic state variables in the background
of evaporating spherically symmetric black holes. Some of the local macroscopic variables correspond
to the elements of the renormalised stress tensor $\langle \hat{T}_\nu^\mu(x) \rangle$ associated with
a given field model. These are the energy density, its flux and the pressure. In principle, these require
no reference to kinetic theory, being quantities directly computable from first principles.
However, there are many other variables which are not. These are the particle density $n(x)$ and the
particle density flux $\mathbf{N}(x)$, as well as the local entropy density $s(x)$ and the entropy density
flux $\mathbf{S}(x)$.
The framework within which we shall be working below is based on a massless scalar field conformally
coupled to gravity. To do quantum kinetic theory, we need the scalar 2-point function $W(x,x')$. It is in
general a difficult problem to derive $W(x,x')$ analytically in Schwarzschild space. Nevertheless, if one
exploits the conformal symmetry of the scalar model, one can compute an approximate expression
for the Wightman function close to and far away from the black-hole horizon~\cite{Page}. However, the
2-point function has so far been unknown for physical black holes, i.e. those formed through
gravitational collapse. This is the technical problem we address analytically in this paper. The
basic structure of $W(x,x')$ has already been conjectured by us in~\cite{Emelyanov-16b}
(with further applications in~\cite{Emelyanov-16c}) by exploiting the results of~\cite{Page,Candelas}.
We prove this
conjecture in Sec.~\ref{sec:sfm} and derive the higher-order corrections to that result as well. These
corrections turn out to play a crucial role in recovering the correct expression for the renormalised
stress-energy tensor.
These preliminary steps allow us to derive a covariant Wigner distribution $\mathcal{W}(x,p)$,
where $x$ and $p$ denote a space-time point and four-momentum (e.g.,
see~\cite{deGroot&vanLeeuwen&vanWeert}). We then apply this distribution to derive the
local macroscopic variables in the regions far away from and near to the black-hole horizon. Our results are
presented in Sec.~\ref{sec:qkf}. In summary, we recover the standard picture far away from the event horizon,
whereas it becomes inapplicable in the near-horizon region, where $n(x)$ and $s(x)$ turn out to be negative and
imaginary, respectively.
We discuss our results in Sec.~\ref{sec:cr} and propose a physical interpretation of how they
can be understood in a self-consistent manner.
Throughout this paper the fundamental constants are set to $c=G=k_\text{B} = \hbar = 1$, unless stated
otherwise. We employ the convention that the indices $``0"$ and $``1"$ refer to the far-horizon
region and the near-horizon region, respectively. The logic behind this notation is that $r_H/r \rightarrow 0$
holds far away from the event horizon and $r_H/r\sim 1$ near it.
\section{Scalar field model}
\label{sec:sfm}
We shall be dealing with a scalar field $\Phi(x)$ in the background of a Schwarzschild black hole of
astrophysical mass $M$. We set the scalar-field mass to zero and assume the field is conformally
coupled to gravity. The scalar Lagrangian is thus taken to be of the form
\begin{eqnarray}
\mathcal{L} &=& - \frac{1}{2}\Phi\Box\Phi + \frac{1}{12}\,R\Phi^2\,,
\end{eqnarray}
where $R$ is the Ricci scalar which is, however, identically zero in the Schwarzschild geometry
described by
\begin{eqnarray}
ds^2 &=& g_{\mu\nu}dx^\mu dx^\nu \;=\; f(r)dt^2 - \frac{dr^2}{f(r)} - r^2d\Omega^2\,,
\end{eqnarray}
where the lapse function is $f(r) \equiv 1 - r_H/r$ and $d\Omega$ is an element of solid angle.
The parameter $r_H \equiv 2M$ is the Schwarzschild radius, i.e. the size of the event horizon.
\subsection{Scalar Wightman function in Schwarzschild frame}
We have recently conjectured the structure of the Wightman 2-point function in the background of
an evaporating Schwarzschild black hole~\cite{Emelyanov-16b},
and have used it to study one-loop effects in QED in the far-from-horizon region
as well as a massless scalar model with a quartic self-interaction term in~\cite{Emelyanov-16c}.
In this section, we prove that conjecture and also derive the higher-order corrections
in $\Delta\mathbf{x}$ which were neglected in~\cite{Emelyanov-16b}.
It turns out that an approximate analytic expression for the scalar Wightman function $W(x,x')$ can be found
with comparatively little computational effort if one takes advantage of the conformal symmetry of the
scalar model. This observation allowed the computation of the Wightman function for the Hartle-Hawking (HH) as
well as the Boulware state~\cite{Page}.
To derive the 2-point function for the physical black holes, we first perform a conformal transformation
of the Schwarzschild metric to its ultra-static form
$\bar{g}_{\mu\nu}(x)$, namely
\begin{eqnarray}
g_{\mu\nu}(x) &=& f(r)\,\bar{g}_{\mu\nu}(x)\,.
\end{eqnarray}
Correspondingly, the Wightman function $W(x,x')$ fulfilling $\Box W(x,x') = 0$ (we have taken into
account here that $R = 0$ for the Schwarzschild black hole) can be written in the ultra-static
metric as follows
\begin{eqnarray}
W(x,x') &=& \bar{W}(x,x')/\big(f(r)f(r')\big)^{\frac{1}{2}}\,,
\end{eqnarray}
where $\bar{W}(x,x')$ satisfies
\begin{eqnarray}\label{eq:gi-sp:equation}
\Big(\bar{\Box} - \frac{1}{6}\bar{R}\Big)\bar{W}(x,x') &=& 0\,.
\end{eqnarray}
According to our convention being extensively used below, all barred quantities are
defined with respect to the ultra-static metric $\bar{g}_{\mu\nu}(x)$.
Since the Killing algebra of the space under consideration consists of the time translation as well
as three generators of the rotational group, we look for a solution of~\eqref{eq:gi-sp:equation}
in the form
\begin{eqnarray}\label{eq:gi-sp:ansatz}
\bar{W}(x,x') &=& \int_\mathbf{R}\frac{d\omega}{4\pi\omega}\,
e^{-i\omega(t-t')}\bar{K}_\omega(\mathbf{x},\mathbf{x}')\,,
\end{eqnarray}
where the integral is over $\omega \in (-\infty,+\infty)$ and by definition
\begin{eqnarray}\label{eq:gi-sp:ansatz-2}
\bar{K}_\omega(x,x') &\equiv& \frac{1}{4\pi}\bar{\Delta}^{\frac{1}{2}}(\mathbf{x},\mathbf{x}')\,
\frac{\sin(\omega\rho)}{\omega\rho}\,\chi_\omega(\mathbf{x},\mathbf{x}') \quad
\text{with} \quad \rho \;\equiv\; \big(2\bar{\sigma}(\mathbf{x},\mathbf{x}')\big)^{\frac{1}{2}}\,,
\end{eqnarray}
where $\bar{\sigma}(\mathbf{x},\mathbf{x}')$ is a geodetic interval for the spatial section of the
ultra-static metric $\bar{g}_{\mu\nu}(x)$ and $\bar{\Delta}(\mathbf{x},\mathbf{x}')$ the Van Vleck-Morette
determinant.
Substituting~\eqref{eq:gi-sp:ansatz} into~\eqref{eq:gi-sp:equation}, one obtains an equation which
the unknown bi-scalar $\chi_\omega(\mathbf{x},\mathbf{x}')$ must satisfy. Specifically, this reads
\begin{eqnarray}\label{eq:eq-for-bi-scalar-chi}
\bar{\Box}\chi_\omega
+ \frac{1}{3}\big(\bar{R}_i^j - 2\omega^2\delta_i^j \big)\bar{\sigma}^i\bar{\nabla}_j\chi_\omega
-\frac{1}{12}\big(2\bar{R}_{ik;j} - \bar{R}_{ij;k}\big)\bar{\sigma}^i\bar{\sigma}^j\bar{\nabla}^k\chi_\omega
+ \text{O}\big((\bar{\sigma}^i)^3\big) &=& 0\,,
\end{eqnarray}
where $\bar{\sigma}^i \equiv \bar{\nabla}^i\bar{\sigma}$ and $i,j$ run from $1$ to $3$. In the
derivation of Eq.~\eqref{eq:eq-for-bi-scalar-chi} we have employed the fact that
$\bar{R}_{;i} - 2\bar{R}_{i;j}^j$ identically vanishes for \emph{any} lapse function $f(r)$ and
\begin{eqnarray}
9\bar{R}_{;ij} + 9\bar{R}_{ij;k}^{\;\;\;;k} - 24\bar{R}_{ik;j}^{\;\;\;;k}
-12\bar{R}_{ik}\bar{R}_j^k + 6\bar{R}^{kn}\bar{R}_{ikjn} +
4\bar{R}_{iknm}\big(\bar{R}_{j}^{\;\;mnk} + \bar{R}_j^{\;\;knm}\big) &=& 0
\end{eqnarray}
for a lapse function of the form $1 - r_H/r + \Lambda r^2/3$. This means that Eq.~\eqref{eq:eq-for-bi-scalar-chi}
is applicable to a wide class of static spacetimes. As a consequence,
we have
\begin{eqnarray}
\bar{\Box}\bar{\Delta}^\frac{1}{2}(\mathbf{x},\mathbf{x}') &=&
\frac{1}{6}\,\bar{R}(\mathbf{x})\bar{\Delta}^\frac{1}{2}(\mathbf{x},\mathbf{x}')
+ \text{O}\big((\bar{\sigma}^i)^3\big)\,.
\end{eqnarray}
We now need to solve Eq.~\eqref{eq:eq-for-bi-scalar-chi} to obtain the 2-point function.
To leading order in the approximation, the bi-scalar $\chi_\omega(\mathbf{x},\mathbf{x}')$ reads
\begin{eqnarray}\label{eq:bi-scalar}
\chi_\omega(\mathbf{x},\mathbf{x}') &=& a_\omega + b_\omega\frac{(f(r)f(r'))^{\frac{1}{2}}}{rr'}
\Big(1 \pm i\omega\Delta{r}_\star + \alpha_{\omega}(r,r')\Delta{r}_\star^2
+ \beta_{\omega}(r,r')\bar{\sigma}(\mathbf{x},\mathbf{x}')\Big),
\end{eqnarray}
where $\Delta{r}_\star \equiv r_\star - r_\star'$ with $r_\star$ denoting the Regge-Wheeler radial
coordinate and
\begin{subequations}\label{eq:fun-coefficients}
\begin{eqnarray}
\alpha_\omega(r,r') &\approx& - \frac{\omega^2}{2}\,,
\\[1mm]
\beta_\omega(r,r') &\approx& + \frac{\omega^2}{3}+\frac{r_H}{12(rr')^\frac{3}{2}}
+ \frac{r_H}{4(rr')^\frac{3}{2}}\big(f(r)f(r')\big)^\frac{1}{2}\,.
\end{eqnarray}
\end{subequations}
The sign in front of the second term in the parentheses of Eq.~\eqref{eq:bi-scalar} cannot be fixed without
referring to the mode expansion of the scalar field. We take it negative in the far-horizon region and positive
in the near-horizon region, because then $\langle \hat{T}_t^r\rangle$ has the correct sign.
The bi-scalar $\chi_\omega(\mathbf{x},\mathbf{x}')$ given in \eqref{eq:bi-scalar} with
\eqref{eq:fun-coefficients} is a solution of the equation $\bar{\Box}\chi_\omega(\mathbf{x},\mathbf{x}') = 0$
up to the order of $\text{O}(\mathbf{x}-\mathbf{x}')$. There are infinitely many solutions of this type.
However, we shall show below that the bi-functions defined in Eq.~\eqref{eq:fun-coefficients} yield the
stress-energy tensor of the scalar field as found in~\cite{Christensen&Fulling,Candelas}.\footnote{There are
extra terms in $\alpha_\omega(r,r')$ and $\beta_\omega(r,r')$ which give sub-leading
contributions to the diagonal elements of $\langle \hat{T}_\nu^\nu\rangle$ in both far-horizon and
near-horizon region. For instance, there are additional terms vanishing as $f(r)$ near the horizon which we have omitted. We shall study these in detail elsewhere.}
We now need to determine the functions $a_\omega$ and $b_\omega$. With this purpose in mind,
we consider the far-horizon ($r,r' \gg r_H$) and near-horizon ($r,r' \sim r_H$) regions separately.
\subsubsection{Far-horizon region}
One might expect on physical grounds that the Wightman function $W(x,x')$ must reduce to
the Minkowski 2-point function, $W_M(x,x')$, in the asymptotically flat region, i.e.
$W(x,x') \rightarrow W_M(x,x')$ in the limit $r \rightarrow \infty$ with $|r-r'| \ll R$, where $R$ is the
distance to the black-hole centre here and below. Indeed, were this not
the case, it would not be legitimate to use the Minkowski-space approximation to
describe and test particle physics in colliders on Earth. This implies
\begin{eqnarray}
a_{\omega,0} &\approx& +4\omega^2\,\theta(+\omega)\,,
\end{eqnarray}
where $\theta(z)$ is the Heaviside step function.
The function $b_{\omega,0}$ cannot be so simply determined. However, if we set
\begin{eqnarray}
b_{\omega,0} &\approx& +27\,(\omega M)^2n_\beta(\omega)\big(\theta(\omega)
+ e^{\beta\omega}\theta(-\omega)\big)
\quad \text{with} \quad n_\beta(\omega) \;\equiv\; 1/(e^{\beta\omega} - 1)\,,
\end{eqnarray}
where $\beta = 2\pi/\kappa \equiv 8\pi M$ is the inverse Hawking temperature
$T_H \equiv 1/8\pi M$~\cite{Hawking}, then we obtain our previous
result~\cite{Emelyanov-16b,Emelyanov-16c} (with $\chi_\omega(\mathbf{x},\mathbf{x}')$ in the
limit $\mathbf{x}' \rightarrow \mathbf{x}$) which is in agreement with~\cite{Candelas,Page}.
Specifically, we find
\begin{eqnarray}\label{eq:2pf-far}
W_0(x,x') &=& W_M(x,x') + \Delta{W}_0(x,x')\,,
\end{eqnarray}
where the first term is the Minkowski 2-point function, i.e.
\begin{eqnarray}
W_M(x,x') &\approx& {\int}\frac{d^3\mathbf{k}}{(2\pi)^3}\frac{1}{2k_0}\,\exp(-ik\Delta{x})
\quad \text{with} \quad k_0 \;=\;|\mathbf{k}|\,,
\end{eqnarray}
and the higher-order correction to $W_M(x,x')$ reads
\begin{eqnarray}\label{eq:2pf-far-correction}
\Delta W_0(x,x') &\approx& \frac{27r_H^2}{16R^2}
{\int}\frac{d^3\mathbf{k}}{(2\pi)^3}\frac{n_\beta(k_0)}{k_0}\,\cos(\mathbf{k}\Delta\mathbf{x})
\\[1mm]\nonumber
&& \quad\quad\quad \times
\left[\Big(1 - \frac{k_0^2}{6}\big(3\Delta{r}^2 - \Delta\mathbf{x}^2\big)\Big)\cos(k_0\Delta{t})
+k_0\Delta{r}\sin(k_0\Delta{t})\right],
\end{eqnarray}
where $k\Delta{x} \equiv k_\mu\Delta{x}^\mu$ and
$\Delta\mathbf{x}^2 \equiv \Delta{x}^2 + \Delta{y}^2 + \Delta{z}^2$ by our convention. The
coordinates $x$, $y$ and $z$ here are local Cartesian coordinates introduced at the
distance $R$ from the centre of the black hole.
The 2-point function~\eqref{eq:2pf-far} is more general than the one we found
in~\cite{Emelyanov-16b,Emelyanov-16c}, as it also contains the higher-order corrections
in $\Delta\mathbf{x}$. We shall show below that $\Delta W_0(x,x')$ yields the correct expression
for the renormalised stress tensor $\langle \hat{T}_\nu^\mu \rangle$ at spatial infinity.
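Restoring the fundamental constants in the relation $T_H = 1/8\pi M$ gives $T_H = \hbar c^3/(8\pi G M k_\text{B})$. As a quick numerical sanity check (ours, not part of the original text; standard SI constant values are hard-coded below), the Hawking temperature of a solar-mass black hole comes out at roughly $6\times 10^{-8}$ K:

```python
import math

# Standard SI values of the fundamental constants (assumed here)
HBAR = 1.054571817e-34   # J s
C = 2.99792458e8         # m / s
G = 6.67430e-11          # m^3 / (kg s^2)
K_B = 1.380649e-23       # J / K
M_SUN = 1.98892e30       # kg

def hawking_temperature(mass_kg):
    """T_H = hbar c^3 / (8 pi G M k_B), the dimensionful form of
    T_H = 1/(8 pi M) used in the text (units c = G = k_B = hbar = 1)."""
    return HBAR * C**3 / (8 * math.pi * G * mass_kg * K_B)
```

The inverse scaling with mass makes astrophysical black holes extremely cold, which is why the thermal corrections above are suppressed by powers of $r_H/R$ far from the horizon.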
\subsubsection{Near-horizon region}
One might expect from the equivalence principle that the Wightman function $W(x,x')$ in the near-horizon
region $r,r' \sim r_H$ with $|r-r'| \ll r_H$ must be approximately given by the 2-point function $W_M(x,x')$
of Minkowski space when expressed in \emph{local inertial coordinates}. We take this for granted
below in order to fix the function $a_{\omega,1}$.
To determine $a_{\omega,1}$, one first needs to introduce new spatial coordinates in place of the
angular coordinates as follows:
\begin{eqnarray}
y^2 + z^2 &=& 4r_H^2\tan^2(\theta/2) \quad \text{and} \quad z/y \;=\; \tan\phi
\quad \text{with} \quad y^2 + z^2 \;\ll\; r_H^2\,.
\end{eqnarray}
The geodetic distance $\sigma(x,x')$ in these coordinates acquires a comparably simple structure, namely
\begin{eqnarray}
\sigma(x,x') &\approx& \frac{1}{\kappa^2}\big(f(r)f(r')\big)^\frac{1}{2}\bigg(\cosh\kappa\Delta{t} -
\frac{f(r) + f(r') + \kappa^2(\Delta{y}^2 + \Delta{z}^2)}{2\big(f(r)f(r')\big)^\frac{1}{2}}\bigg)\,,
\end{eqnarray}
where we have neglected the higher-order corrections and $\kappa \equiv 1/2r_H$ by definition.
It is worth emphasising that $\sigma(x,x')$ has been computed directly as a \emph{geometrical}
quantity in Schwarzschild space. Having calculated the geodetic distance
$\bar{\sigma}(\mathbf{x},\mathbf{x}')$ of the spatial section of the ultra-static metric
$\bar{g}_{\mu\nu}(x) = g_{\mu\nu}(x)/f(r)$ in the near-horizon region, we then obtain
\begin{eqnarray}
\sigma(x,x') &\approx& \frac{1}{\kappa^2}\big(f(r)f(r')\big)^\frac{1}{2}
\big(\cosh\kappa\Delta{t} - \cosh\kappa\rho\big)\,,
\end{eqnarray}
where $\rho \equiv (2\bar{\sigma}(\mathbf{x},\mathbf{x}'))^\frac{1}{2}$ as defined in
Eq.~\eqref{eq:gi-sp:ansatz-2}. Therefore, in order to have $W(x,x')$ be approximately equal to
$-1/(8\pi^2\sigma(x,x'))$ in the near-horizon region, one must set
\begin{eqnarray}
a_{\omega,1} &\approx& +4\omega^2 n_\beta(\omega)
e^{\beta\omega}\big(\theta(+\omega) + \theta(-\omega)\big)\,.
\end{eqnarray}
In order to determine the function $b_{\omega,1}$, one needs to consider the mode expansion of the
scalar field operator $\hat{\Phi}(x)$. Employing the results of~\cite{Candelas,Page}, we obtain
\begin{eqnarray}
b_{\omega,1} &\approx& -27(\omega M)^2n_\beta(\omega)\big(
\theta(+\omega) + e^{\beta\omega}\theta(-\omega)\big)\,.
\end{eqnarray}
Thus, the 2-point function near the horizon reads
\begin{eqnarray}\label{eq:2pf-near}
W_1(x,x') &=& W_M(x,x') + \Delta{W}_1(x,x')\,,
\end{eqnarray}
where $W_M(x,x') \approx W_\text{HH}(x,x')$ for $r,r' \rightarrow r_H$ and
\begin{eqnarray}
W_\text{HH}(x,x') &\approx& \frac{\bar{\Delta}^\frac{1}{2}(\mathbf{x},\mathbf{x}')}{(f(r)f(r'))^\frac{1}{2}}
{\int}\frac{\omega d\omega}{(2\pi)^2}\,n_\beta(\omega)e^{\beta\omega}
\Big[e^{-i\omega\Delta{t}} + e^{-\beta\omega}e^{+i\omega\Delta{t}}\Big]\frac{\sin \omega\rho}{\omega\rho}
\end{eqnarray}
with $\omega > 0$, and the correction to $W_M(x,x')$ reads
\begin{eqnarray}\label{eq:2pf-near-correction}
\Delta{W}_1(x,x') &\approx& - \frac{27}{16}\bar{\Delta}^\frac{1}{2}(\mathbf{x},\mathbf{x}')
{\int}\frac{\omega d\omega}{(2\pi)^2}\,n_\beta(\omega)\,\frac{\sin \omega\rho}{\omega\rho}
\\[1mm]\nonumber
&&\;\;\times
\left[\Big(1-\frac{\omega^2}{2}\Delta{r}_\star^2 + \frac{1}{3}(\omega^2 + \kappa^2)\bar{\sigma}(\mathbf{x},\mathbf{x}')\Big)\cos(\omega\Delta{t}) - \omega\Delta{r}_\star\sin(\omega\Delta{t}) \right].
\end{eqnarray}
The 2-point function $W_1(x,x')$ is likewise more general than the one we found
in~\cite{Emelyanov-16b,Emelyanov-16c} for the near-horizon region, but reduces to it if one takes
the limit $\mathbf{x}' \rightarrow \mathbf{x}$ in the square brackets.
\subsection{Scalar Wightman function in Fermi frame}
For the applications below, we need to express the scalar 2-point function $W_1(x,x')$ in Fermi
normal coordinates. These coordinates are associated with a chosen geodesic $G$~\cite{Manasse&Misner}.
One can introduce an orthonormal tetrad $e_a^\mu = (e_{t_F}^\mu, e_{x_F}^\mu, e_{y_F}^\mu, e_{z_F}^\mu)$,
such that $e_{t_F}^\mu$ is the (time-like) tangent vector to the geodesic $G$ describing a \emph{radial free fall}
towards the black hole. This tetrad is given by
\begin{subequations}\label{eq:fermi-tetrad}
\begin{eqnarray}
e_{t_F}^\mu\partial_\mu &=& \frac{1}{f(r)}\,\partial_t - \big(1 - f(r)\big)^{\frac{1}{2}}\partial_r\,,
\\[0mm]
e_{x_F}^\mu\partial_\mu &=& - \frac{\big(1 - f(r)\big)^{\frac{1}{2}}}{f(r)}\,\partial_t + \partial_r\,,
\\[1mm]
e_{y_F}^{\mu}\partial_\mu &=& \frac{1}{r}\,\partial_\theta\,,
\quad
e_{z_F}^{\mu}\partial_\mu \;=\; \frac{1}{r\sin\theta}\,\partial_\phi\,.
\end{eqnarray}
\end{subequations}
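As a numerical sanity check (ours, not from the original text), one can verify that the tetrad above is orthonormal, $g_{\mu\nu}\,e_a^\mu e_b^\nu = \eta_{ab}$, by evaluating the contractions at an arbitrary sample point in units $r_H = 1$:

```python
import math

R_H = 1.0   # Schwarzschild radius in units r_H = 1

def metric(r, theta):
    """Diagonal Schwarzschild metric components g_{mu mu} in the
    coordinate order (t, r, theta, phi), signature (+,-,-,-)."""
    f = 1.0 - R_H / r
    return [f, -1.0 / f, -r**2, -(r * math.sin(theta))**2]

def fermi_tetrad(r, theta):
    """Tetrad components e_a^mu of the radially infalling frame."""
    f = 1.0 - R_H / r
    v = math.sqrt(1.0 - f)   # infall velocity sqrt(r_H / r)
    return [
        [1.0 / f, -v, 0.0, 0.0],                        # e_{t_F}
        [-v / f, 1.0, 0.0, 0.0],                        # e_{x_F}
        [0.0, 0.0, 1.0 / r, 0.0],                       # e_{y_F}
        [0.0, 0.0, 0.0, 1.0 / (r * math.sin(theta))],   # e_{z_F}
    ]

def tetrad_products(r, theta):
    """Return the matrix g_{mu nu} e_a^mu e_b^nu, which should equal
    eta_ab = diag(1, -1, -1, -1) for an orthonormal tetrad."""
    g = metric(r, theta)
    e = fermi_tetrad(r, theta)
    return [[sum(g[m] * e[a][m] * e[b][m] for m in range(4))
             for b in range(4)] for a in range(4)]
```

Evaluating at, say, $r = 3r_H$ reproduces $\eta_{ab}$ to machine precision, confirming the algebra behind the tetrad.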
In the Fermi normal coordinates $x^a = (t_F,x_F,y_F,z_F)$, the metric tensor along the geodesic has
the Minkowski form, i.e. $g_{ab}|_G = \eta_{ab}$, such that $\Gamma_{ab}^c|_G = 0$. It is worth noticing
that the Fermi time $t_F$ is identical to the Painlev\'{e}-Gullstrand time $\tau$ we have made use
of in~\cite{Emelyanov-16c}. For space-time points close to the geodesic $G$, one has
\begin{eqnarray}\label{eq:fermi-metric}
g_{ab}(x_F) &=& \eta_{ab} - \kappa_{ab} R_{acbd}(x_F)x_F^cx_F^d + \text{O}\big(x_F^3\big)\,,
\end{eqnarray}
as shown in~\cite{Manasse&Misner}, where there is no summation over $a$ and $b$
in the second term of~\eqref{eq:fermi-metric} and
\begin{eqnarray}
\kappa_{ab} &=& \frac{1}{2}\Big(\delta_a^0 + \delta_b^0
+ \frac{1}{3}\sum\limits_{i = 1}^3\big(\delta_a^i + \delta_b^i\big)\Big)\,.
\end{eqnarray}
The geodetic distance $\sigma(x,x')$ in these coordinates is given by
\begin{eqnarray}
\sigma(x,x') &=& \sigma(x_F,x_F') \;=\;
\frac{1}{2} \eta_{ab}(x_F - x_F')^a(x_F -x_F')^b + \text{O}\Big(\frac{r_H\Delta{x}_F^4}{R^3}\Big),
\end{eqnarray}
where we have taken one of the points to lie on the geodesic.
In the local inertial frame associated with the geodesic $G$, the Wightman function $W(x,x')$
should naturally be given by $W_M(x,x')$ whenever it is legitimate to neglect geometrical corrections.
This is exactly what we have obtained in Eq.~\eqref{eq:2pf-far} and Eq.~\eqref{eq:2pf-near}. This means
that no significant (quantum) effect can be discovered in the near-horizon region by an observer freely
falling into a black hole of sufficiently large mass $M$.\footnote{For black holes of mass in the range
$10^{10}\,\text{g} \lesssim M \ll 10^{16}\,\text{g}$,
there are certain tiny effects (the modification of the light deflection angle and Debye-like screening of a
point-like charge) which might be testable~\cite{Emelyanov-16a,Emelyanov-16b} assuming
these exist in nature and their number density is sufficiently large.}
Far away from the black-hole horizon, the Schwarzschild coordinates $x$ go over to the Minkowski
coordinates $x_M$. Therefore, the 2-point function $W_0(x,x')$ is already given in the flat coordinates
for $R \gg r_H$. Near the black-hole horizon, the Schwarzschild coordinates $x$
differ considerably from the Fermi ones $x_F$. Having computed $x_F$ as functions of $x$ from
\eqref{eq:fermi-tetrad}, we obtain
\begin{eqnarray}
W_1(x_F,x_F') &=& W_M(x_F,x_F') + \Delta{W}_1(x_F,x_F')\,,
\end{eqnarray}
where
\begin{eqnarray}
\Delta{W}_1(x_F,x_F') &\approx& -\frac{27}{4}{\int}\frac{d^3\mathbf{k}}{(2\pi)^3}\frac{n_{2\beta}(k_0)}{k_0}\,
\cos(\mathbf{k}\Delta\mathbf{x}_F)
\\[1mm]\nonumber
&& \quad\quad \times
\left[\Big(1 - \frac{k_0^2}{6}\big(3\Delta{x}_F^2 - \Delta\mathbf{x}_F^2\big)\Big)\cos(k_0\Delta{t}_F)
- k_0\Delta{x}_F\sin(k_0\Delta{t}_F)\right],
\end{eqnarray}
where $\mathbf{x}_F = (x_F,y_F,z_F)$ by definition. It should be mentioned that we have explicitly checked
that $\Delta{W}_1(x_F,x_F')$ coincides with
$\Delta{W}_1(x,x')$ given in Eq.~\eqref{eq:2pf-near-correction} up to and including order $\Delta{x}_F^2$.
Therefore, the above expression for $\Delta{W}_1(x_F,x_F')$ should be considered reliable up to this order only.
The same holds for the 2-point function $W_0(x,x')$ far away from the black hole. The reason for this
limitation is the bi-scalar $\chi_\omega(\mathbf{x},\mathbf{x'})$, which we have determined only
up to that order. This approximation is, however, adequate for our purposes below.
\section{Quantum kinetic approach to black-hole physics}
\label{sec:qkf}
\subsection{Relativistic kinetic theory: Brief introduction}
Many-particle systems can be described with the aid of local macroscopic state variables. These
variables are the particle number density, energy density, pressure and so on. In the framework of
kinetic theory, these are defined through the one-particle distribution function. This distribution is
usually denoted by $f_\text{cl}(x,p)$, where $x^\mu = (t,\mathbf{x})$ and $p^\mu = (p_0,\mathbf{p})$ with
$g_{\mu\nu}p^\mu p^\nu = m^2$ are a space-time coordinate and a momentum coordinate, respectively.
The space-time evolution of the distribution function $f_\text{cl}(x,p)$ is governed by the transport
equation, which is a relativistic generalisation of the famous Boltzmann equation. Specifically, this
reads
\begin{eqnarray}\label{eq:transport-equation}
\Big(p^\mu\frac{\partial}{\partial x^\mu} -
\Gamma_{\mu\nu}^\lambda p^\mu p^\nu \frac{\partial}{\partial p^\lambda}
\Big) f_\text{cl}(x,p) &=& C[f_\text{cl}(x,p)]
\end{eqnarray}
in curved spacetime, where $C[f_\text{cl}(x,p)]$ is a collision integral taking into account
binary scattering processes of the constituent particles of the system, and no external field apart
from gravity has been assumed (e.g., see~\cite{Cercignani&Kremer}).
The main state variables are the particle four-current, i.e.
\begin{eqnarray}
N^{\mu}(x) &=& (-g)^\frac{1}{2}{\int}\frac{d^3\mathbf{p}}{p_0}\,p^\mu f_\text{cl}(x,p)\,,
\end{eqnarray}
the energy-momentum tensor reading
\begin{eqnarray}
T^{\mu\nu}(x) &=& (-g)^\frac{1}{2}{\int}\frac{d^3\mathbf{p}}{p_0}\,p^\mu p^\nu f_\text{cl}(x,p)\,,
\end{eqnarray}
and the entropy four-flow defined as
\begin{eqnarray}
S^{\mu}(x) &=& -(-g)^\frac{1}{2}{\int}\frac{d^3\mathbf{p}}{p_0}\,p^\mu f_\text{cl}(x,p)
\big(\ln(h^3f_\text{cl}(x,p))-1\big)\,.
\end{eqnarray}
These macroscopic variables are local. This fact allows one to describe equilibrium as well
as non-equilibrium macroscopic states of the system by these variables.
The macroscopic conservation laws
\begin{subequations}
\begin{eqnarray}
\nabla_\mu N^\mu(x) &=& 0\,,
\\[1mm]
\nabla_\mu T^{\mu\nu}(x) &=& 0
\end{eqnarray}
\end{subequations}
can be shown to hold as a consequence of the transport equation \eqref{eq:transport-equation} and the
microscopic conservation laws of the particle number as well as the four-momentum. These can in
turn be used to derive the well-known hydrodynamic equations, such as the continuity equation and the Euler
equations with relativistic corrections.
In the kinetic theory, one can also prove the Boltzmann $H$-theorem. This theorem states that the entropy
production rate $\sigma(x)$ at any space-time point is never negative, i.e.
\begin{eqnarray}
\sigma(x) &\equiv& \nabla_\mu S^\mu(x) \;\geq\; 0\,.
\end{eqnarray}
For the sake of completeness, let us finally recall the role of the hydrodynamic (e.g.,
Eckart or Landau-Lifshitz) velocity $U^\mu(x)$. It allows one to define covariant state variables corresponding
to the particle number density $N^\mu U_\mu$, the energy density $T_{\mu\nu} U^\mu U^\nu$ and so on.
For more details, we refer to the references~\cite{deGroot&vanLeeuwen&vanWeert,Cercignani&Kremer}.
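To make the role of these moments concrete, the following numerical sketch (Python; the temperature $T = 1$ and the classical equilibrium form $f_\text{cl} = e^{-p_0/T}/(2\pi)^3$ for a massless gas are illustrative choices, not taken from the text) evaluates $N^0$ and $T^{00}$ in the rest frame $U^\mu = (1,0,0,0)$ and recovers the familiar relations $n = T^3/\pi^2$ and $\rho = 3P$:

```python
import numpy as np
from scipy.integrate import quad

# Classical massless gas in equilibrium, f_cl = exp(-p0/T)/(2*pi)**3;
# the temperature T = 1 and the normalisation are illustrative choices.
T = 1.0
f = lambda p: np.exp(-p / T) / (2*np.pi)**3

# Moments over d^3p = 4*pi*p^2 dp in the rest frame U^mu = (1,0,0,0):
n,   _ = quad(lambda p: 4*np.pi * p**2 * f(p), 0, np.inf)  # N^0
rho, _ = quad(lambda p: 4*np.pi * p**3 * f(p), 0, np.inf)  # T^00

print(n, T**3 / np.pi**2)        # n = T^3/pi^2
print(rho, 3 * T**4 / np.pi**2)  # rho = 3P = 3T^4/pi^2 (traceless stress)
```

The vanishing trace $\rho = 3P$ reflects the masslessness of the constituents, in line with the conformal behaviour of the scalar model.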
\subsection{Covariant Wigner distribution $\mathcal{W}(x,p)$}
Starting with local quantum field theory, one can introduce the distribution function and the transport equation
associated with it. This leads to quantum kinetic theory~\cite{deGroot&vanLeeuwen&vanWeert}. The
main object in this theory is the Wigner operator
\begin{eqnarray}\label{eq:wigner-distribution}
\hat{\mathcal{W}}(x,p) &\equiv& \frac{4\pi}{(2\pi)^5}{\int}d^4y\, e^{-ipy}\,\hat{\Phi}\big(x+y/2\big)
\hat{\Phi}\big(x-y/2\big)\,.
\end{eqnarray}
Note that we are working in the Fermi coordinate frame here. It allows us to employ the ordinary Fourier
transform as if the Fermi frame were infinitely large. This poses no problem whenever the physics
we are interested in is characterised by a length scale much smaller than the characteristic curvature
scale.
Once we have a quantum system described by a certain state, we can associate with it the Wigner
distribution
\begin{eqnarray}
\mathcal{W}(x,p) &=& \langle\hat{\mathcal{W}}(x,p)\rangle\,.
\end{eqnarray}
It is worth pointing out that there is no direct probabilistic interpretation of the covariant
Wigner distribution in terms of particles~\cite{deGroot&vanLeeuwen&vanWeert}. This can be understood
from its very definition, which is entirely independent of the covariant wave function. This is in contrast
to the classical one-particle distribution $f_\text{cl}(x,p)$ introduced above, which is a probability density
giving the number of particles in a spatial volume $\Delta\mathbf{x}^3$ with momentum in the interval
between $\mathbf{p}$ and $\mathbf{p} + \Delta{\mathbf{p}}$. Despite the lack of a straightforward
physical meaning of $\mathcal{W}(x,p)$ in terms of particles, we want to employ $\mathcal{W}(x,p)$
in Schwarzschild geometry in order to get further insights about black holes and physics related to their
evaporation.
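The absence of a probabilistic interpretation can be illustrated with a toy example (a one-dimensional quantum-mechanical sketch in Python, not the field-theoretic construction above): the Wigner transform of the first excited harmonic-oscillator state, with $\hbar = m = \omega = 1$ as illustrative units, is negative at the phase-space origin and hence cannot be a probability density.

```python
import numpy as np

# First excited harmonic-oscillator state (hbar = m = omega = 1); a toy
# state chosen for illustration, unrelated to the field state of the text.
psi = lambda x: np.sqrt(2) * np.pi**(-0.25) * x * np.exp(-x**2 / 2)

def wigner(x, p):
    # W(x,p) = (1/2pi) \int dy e^{-i p y} psi(x + y/2) psi(x - y/2)
    y, dy = np.linspace(-20.0, 20.0, 400001, retstep=True)
    integrand = np.exp(-1j * p * y) * psi(x + y/2) * psi(x - y/2)
    return (integrand.sum() * dy).real / (2*np.pi)

# The quasi-probability is negative at the phase-space origin:
print(wigner(0.0, 0.0))   # about -1/pi = -0.3183...
```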
\subsection{Wigner distribution in presence of evaporating black hole}
We now derive the covariant Wigner distribution associated with the scalar model in the background
of the black hole formed via the gravitational collapse.
\subsubsection{Far-horizon region}
Far away from the black-hole horizon $R \gg r_H$, the 2-point function is given by \eqref{eq:2pf-far}.
The contribution to the Wigner distribution \eqref{eq:wigner-distribution} comes from only those
terms in $W_0(x,x')$ which contain the positive frequency modes. The result reads
\begin{eqnarray}\label{eq:wd-far}
\mathcal{W}_0(x,p) &\approx& \frac{3^3M^2n_\beta(p_0)}{2^5\pi^3 R^2p_0}
\left[1 - p_0P_1\Big(\frac{\mathbf{n}\mathbf{p}}{p}\Big)\partial_p
+ \frac{p_0^2}{3p}P_2\Big(\frac{\mathbf{n}\mathbf{p}}{p}\Big)
\big(p \partial_p^2 - \partial_p\big)\right]{\delta(p_0 - p)}\,,
\end{eqnarray}
where $P_n(z)$ is the Legendre polynomial of order $n \in \mathbb{N}_0$ and
\begin{eqnarray}
p &\equiv& |\mathbf{p}| \quad \text{and} \quad \mathbf{n} \;\equiv\; \mathbf{R}/R\,.
\end{eqnarray}
In deriving the Wigner distribution $\mathcal{W}_0(x,p)$, we have omitted terms which
vanish faster than $1/R^2$ in the limit $R \rightarrow \infty$.
The prefactors of the first, second and third term in $\mathcal{W}_0(x,p)$ are proportional to
the Legendre polynomials of zeroth, first and second order, respectively. This resembles
the structure of the monopole, dipole and quadrupole potentials in the multipole expansion of the
electrostatic potential performed sufficiently far away from a gas of charged particles. Therefore,
we shall occasionally refer to these terms in the following as the monopole-, dipole- and
quadrupole-\emph{like} terms, respectively.
\subsubsection{Near-horizon region}
In the near-horizon region $R \sim r_H$, the 2-point function has the form \eqref{eq:2pf-near}.
Substituting this into \eqref{eq:wigner-distribution}, we obtain
\begin{eqnarray}\label{eq:wd-near}
\mathcal{W}_1(x,p) &\approx& -\frac{3^3n_{2\beta}(p_0)}{2^5\pi^3p_0}
\left[1 + p_0P_1\Big(\frac{\mathbf{m}\mathbf{p}}{p}\Big)\partial_p
+ \frac{p_0^2}{3p}P_2\Big(\frac{\mathbf{m}\mathbf{p}}{p}\Big)
\big(p \partial_p^2 - \partial_p\big)\right]{\delta(p_0 - p)}\,,
\end{eqnarray}
where we have taken into account the condition $p_0 > 0$ and
\begin{eqnarray}
\mathbf{m} \equiv (1,0,0)
\end{eqnarray}
in the Fermi coordinate frame. Note that the structure of $\mathcal{W}_1(x,p)$ is the same as that of
$\mathcal{W}_0(x,p)$ up to the total prefactor and the sign of the dipole-like term.
The distribution $\mathcal{W}_1(x,p)$ does not contain a contribution from
$W_M(x_F,x_F') \approx W_\text{HH}(x,x')$, which is an important part of the total 2-point function $W_1(x,x')$,
as it provides the proper singularity structure for the Feynman propagator. The reason is that it has
no modes with positive frequency.\footnote{Thus, this part will contribute if we extend the allowed values
of $p_0$ from $-\infty$ to $+\infty$ in the formula \eqref{eq:wigner-distribution}, but the physics in terms
of particles then becomes obscure.} This is reasonable as the Wigner distribution of the vacuum
2-point function in Minkowski space is trivial as well.
However, there is a \emph{non}-vanishing contribution of $W_{M}(x,x')$ to the renormalised stress tensor if we approximate it by $W_\text{HH}(x,x')$.\footnote{Strictly speaking, $W_M(x,x')$ and
$W_\text{HH}(x,x')$ are not equal to each other near horizon in the Schwarzschild frame, unless one
sets $f'(r)/2 = \kappa$ in $W_M(x,x')$ for $r \sim r_H$.
This subtlety originates from the coordinate transformation to the local inertial frame near horizon and is
inessential for the derivation of $\mathcal{W}_1(x,p)$.}
The renormalisation is performed by employing the point-splitting technique and making
the substitution
\begin{eqnarray}
W_\text{HH}(x,x') &\rightarrow& W_\text{HH}(x,x') - H(x,x')\,,
\end{eqnarray}
where $H(x,x')$ is the Hadamard parametrix (e.g., see~\cite{Moretti,Decanini&Folacci}). The parametrix
serves to cancel the singular part of $W_\text{HH}(x,x')$ in the coincidence limit $x' \rightarrow x$. It
also provides extra non-vanishing (geometrical) terms in $\langle T_\nu^\mu \rangle$, including the trace
anomaly, also known as the conformal anomaly. Specifically, it was found in~\cite{Page} that
\begin{eqnarray}\label{eq:hh-nbhh}
\langle T_b^a \rangle_\text{HH} &\approx& - \frac{\kappa^4}{120\pi^2}\,\text{diag}(3,3,1,1)
\quad \text{for} \quad R \;\sim\; r_H\,.
\end{eqnarray}
This result for $\langle T_\nu^\mu\rangle_\text{HH}$ is close to that obtained by the numerical
calculations \cite{Candelas,Howard&Candelas}. This implies that the prefactor in front of the parameter $a_\omega$
appearing in the bi-scalar $\chi_\omega(\mathbf{x},\mathbf{x}')$ actually has to depend on the
spatial coordinates and asymptotically approach $1$ as fast as $1/R^4$ for $R \rightarrow \infty$.
To sum it up, the distribution $\mathcal{W}_1(x,p)$ is expected to give the stress tensor renormalised
as in~\cite{Candelas}, i.e. the \emph{relative} part of the total energy-momentum tensor
$\langle T_\nu^\mu \rangle$ with respect to $\langle T_\nu^\mu \rangle_M$. This seems to be
reasonable as only this part provides the crucial term resulting in the black-hole evaporation and, hence,
related to the Hawking particles.
\subsection{Energy-momentum tensor $\langle \hat{T}_{\mu\nu}(x) \rangle$}
Having the distribution function $\mathcal{W}(x,p)$, we are able to compute the energy-momentum
tensor $\langle \hat{T}_{\mu\nu} \rangle$ as its second moment with respect to the momentum $p_\mu$,
namely
\begin{eqnarray}\label{eq:emt}
\langle \hat{T}_{\mu\nu}(x) \rangle &=& {\int}d^4p \; p_\mu\,p_\nu \,\langle\hat{\mathcal{W}}(x,p)\rangle\,,
\end{eqnarray}
where the integration over $p_0$ is in the interval $(0,+\infty)$.
\subsubsection{Far-horizon region}
Employing our result for the Wigner distribution far away from the black hole, we obtain
\begin{eqnarray}
\langle \hat{T}_\nu^\mu \rangle &\approx&
\frac{1}{4\pi R^2}{\int}\frac{dp_0}{2\pi}\,\frac{p_0 \Gamma_{p_0}}{e^{\beta p_0} - 1}
\left[
\begin{array}{cc}
+1 & +1\\
-1 & -1
\end{array}
\right] \quad \text{for} \quad R \;\gg\; r_H\,,
\end{eqnarray}
where the indices $\mu,\nu$ run over $\{t,r\}$ and the remaining elements of $\langle \hat{T}_\nu^\mu \rangle$ vanish.
We have introduced $\Gamma_{p_0} = 27(p_0 M)^2$, which corresponds to the DeWitt approximation that we
have been employing throughout this paper. It is worth pointing out
that the $tt$-component of $\langle \hat{T}_\nu^\mu \rangle$ is due to the monopole-like term in~\eqref{eq:wd-far},
whereas its non-diagonal elements come from the dipole-like term in $\mathcal{W}_0(x,p)$ and the
$rr$-component of the stress tensor originates from the monopole- and quadrupole-like term of the Wigner
distribution. This result for the stress tensor $\langle \hat{T}_\nu^\mu \rangle$ given above is consistent
with~\cite{Christensen&Fulling,Candelas} and can also be directly obtained by using, e.g., the point-splitting
technique (see Appendix~\ref{app:emt} for some details).
It is a well-known fact that the energy density far away from the black hole is positive, i.e.
$\langle \hat{T}_t^t \rangle >0$, and that its flux in the radial direction is positive as well, i.e.
$\langle \hat{T}_t^r \rangle >0$. This implies that there is a positive energy flux from the black
hole. Thus, we re-derive Hawking's discovery by means of the quantum kinetic approach.
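The spectral integral of this flux can be checked numerically (a Python sketch; we set $M = 1$ for brevity, and the only physical inputs are the DeWitt factor $\Gamma_{p_0} = 27(p_0 M)^2$ and $T_H = 1/8\pi M$ from the text). The closed form follows from $\int_0^\infty x^3\,dx/(e^x - 1) = \pi^4/15$:

```python
import numpy as np
from scipy.integrate import quad

# Hawking luminosity integrand from the far-horizon stress tensor, with
# the DeWitt greybody factor Gamma_p = 27 (p M)^2; M = 1 for brevity.
M = 1.0
beta = 8 * np.pi * M   # inverse Hawking temperature, T_H = 1/(8 pi M)

L_num, _ = quad(lambda p: p * 27*(p*M)**2 / np.expm1(beta*p) / (2*np.pi),
                0, np.inf)

# Closed form via int_0^inf x^3/(e^x - 1) dx = pi^4/15:
L_exact = (27 * M**2 / (2*np.pi)) * (np.pi**4 / 15) / beta**4
print(L_num, L_exact)   # both about 7.0e-5 / M^2
```

The smallness of the result, $27/(122880\pi M^2)$, reflects the greybody suppression of the low-frequency part of the thermal spectrum.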
It is tempting to define an \emph{effective} Wigner distribution as follows
\begin{eqnarray}\nonumber
\mathcal{W}_{\text{eff},0}(x,p) &=& \frac{\Gamma_{p_0}}{32\pi^3p_0^{3}R^2}\,n_\beta(p_0)
\sum_{l = 0}^{2}(2l + 1)P_l\Big(\frac{\mathbf{n}\mathbf{p}}{p}\Big)\delta(p_0 - p)
\\[1mm]
&\approx& \frac{1}{8\pi^2}\frac{\Gamma_{p_0}}{p_0^{3}R^2}\,n_\beta(p_0)\,\delta(p_\theta)\,
\delta(p_\phi)\,\delta(p_0 - p)\,,
\end{eqnarray}
where we have extended the finite summation over $l$ to infinity and used a sum representation of
the delta function in terms of the spherical harmonics. Furthermore, we may define an effective
one-particle distribution
\begin{eqnarray}
f_{\text{eff},0}(x,p) &=& \frac{1}{8\pi^2}\,\frac{\Gamma_{p_0}}{p_0^2R^2}\frac{1}{e^{\beta p_0} - 1}\,
\delta(p^\theta)\delta(p^\phi)\,,
\end{eqnarray}
which has already been introduced in~\cite{Emelyanov-16c} with slightly different notations. We shall
demonstrate below its usefulness.
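The extension of the Legendre sum used here can be verified numerically (a Python sketch; the test function $e^z$ is an arbitrary illustrative choice): partial sums of $\frac{1}{2}\sum_l (2l+1)P_l(z)$, integrated against a smooth function on $[-1,1]$, rapidly converge to its value at $z = 1$, as expected for a representation of $\delta(z-1)$.

```python
import numpy as np
from scipy.special import eval_legendre

# Partial sums of (1/2) sum_l (2l+1) P_l(z) act as delta(z - 1) on [-1, 1].
z, dz = np.linspace(-1.0, 1.0, 200001, retstep=True)
g = np.exp(z)   # arbitrary smooth test function; exact answer is g(1) = e

# simple trapezoid rule on the uniform grid
trap = lambda h: (h.sum() - (h[0] + h[-1]) / 2) * dz

for L in (2, 6, 12):
    S = sum((2*l + 1) * eval_legendre(l, z) for l in range(L + 1)) / 2
    print(L, trap(S * g))   # converges to e = 2.71828...
```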
\subsubsection{Near-horizon region}
Substituting $\mathcal{W}_1(x,p)$ into Eq.~\eqref{eq:emt}, we obtain
\begin{eqnarray}
\langle \hat{T}_b^a \rangle &\approx&
\frac{1}{\pi r_H^2}{\int}\frac{dp_0}{2\pi}\,\frac{p_0 \Gamma_{p_0}}{e^{2\beta p_0} - 1}
\left[
\begin{array}{cc}
-1 & +1 \\
-1 & +1
\end{array}
\right] \quad \text{for} \quad R \;\sim\; r_H\,,
\end{eqnarray}
where $a,b$ run over $\{t_F,x_F\}$ and the remaining elements of $\langle \hat{T}_b^a \rangle$ are
suppressed in the Schwarzschild frame (see Appendix~\ref{app:emt} for further details).
The energy density $\langle \hat{T}_{t_F}^{t_F} \rangle$ is negative near the horizon, whereas
$\langle \hat{T}_{t_F}^{x_F} \rangle$ is positive. This physically implies that there is a flux of negative energy
towards the black hole. The change of the energy flux direction well away from the black-hole horizon
was first found in~\cite{Unruh} with the physical insight that the vacuum spacetime itself is unstable at the
quantum level. The same observation has been recently made in~\cite{Giddings}.
Analogously to the far-horizon region, one can introduce an effective Wigner distribution and an associated
effective one-particle distribution function, namely
\begin{eqnarray}\label{eq:eopdf-nhr}
f_{\text{eff},1}(x,p) &=& -\frac{1}{2\pi^2r_H^2}\,\frac{\Gamma_{p_0}}{e^{2\beta p_0} - 1}\,
\theta(-p_x)\delta(p_y)\delta(p_z)\,.
\end{eqnarray}
This correctly reproduces $\langle \hat{T}_b^a \rangle$ as well as $\langle N^a \rangle$ which we shall
compute below.
\subsection{Particle four-current $\langle \hat{N}_{\mu}(x) \rangle$}
In the kinetic theory, the first moment with respect to the four-momentum $p_\mu$ of the distribution
function $\mathcal{W}(x,p)$ gives the particle four-current. Specifically, we have
\begin{eqnarray}\label{eq:particle-current}
\langle \hat{N}_{\mu}(x) \rangle &=& {\int}d^4p \; p_\mu \;\langle\hat{\mathcal{W}}(x,p)\rangle
\quad \text{with} \quad p_0 \;\in\; (0,+\infty)\,.
\end{eqnarray}
Accordingly, the particle number density and its current are
\begin{eqnarray}
n(x) &=& \langle \hat{N}^{0}(x) \rangle \quad \text{and} \quad
N^i(x) \;=\; \langle \hat{N}^{i}(x)\rangle\,,
\end{eqnarray}
where the hydrodynamical velocity $U^\mu$ has been chosen of the form $(1,0,0,0)$. We merely
note that $U^\mu$ corresponds to neither the Eckart nor the Landau-Lifshitz velocity, as these have to be
light-like for the scalar model we have been considering.
We now go over to the study of this local macroscopic observable far away and close to the
black-hole horizon.
\subsubsection{Far-horizon region}
Substituting $\mathcal{W}_0(x,p)$ given in Eq.~\eqref{eq:wd-far} into the formula \eqref{eq:particle-current},
we obtain
\begin{eqnarray}
n_0(x) &=& N_0^r(x) \;=\;
\frac{1}{4\pi R^2}{\int}\frac{dp_0}{2\pi}\,\frac{\Gamma_{p_0}}{e^{\beta p_0} - 1}
+ \text{O}\Big(\frac{r_H^2T_H^4}{R^3}\Big)\,,
\end{eqnarray}
whereas $N_0^\theta(x) = N_0^\phi(x) = 0$ identically. Note that this result
can also be obtained with the aid of the effective one-particle distribution $f_{\text{eff},0}(x,p)$. It should also
be emphasised that $n_0(x)$ originates from the monopole-like term of $\mathcal{W}_0(x,p)$,
whereas $N_0^r$ comes from the dipole-like term in the Wigner distribution.
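The spectral integral entering $n_0$ and $N_0^r$ admits the closed form $(27M^2/2\pi)\,2\zeta(3)/\beta^3$, which the following numerical sketch verifies (Python; $M = 1$ and the DeWitt factor $\Gamma_{p_0} = 27(p_0M)^2$ are the only inputs, and the geometric factor $1/4\pi R^2$ is stripped off):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import zeta

# Spectral integral of the outgoing particle flux, with the DeWitt factor
# Gamma_p = 27 (p M)^2; the geometric dilution 1/(4 pi R^2) is factored out.
M = 1.0
beta = 8 * np.pi * M

flux_num, _ = quad(lambda p: 27*(p*M)**2 / np.expm1(beta*p) / (2*np.pi),
                   0, np.inf)

# Closed form via int_0^inf x^2/(e^x - 1) dx = 2 zeta(3):
flux_exact = (27 * M**2 / (2*np.pi)) * 2 * zeta(3) / beta**3
print(flux_num, flux_exact)
```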
The number density as well as its current are positive, i.e. there is a positive radial particle flux
from the black hole. To better understand what this means, we go over to the region near the horizon.
\subsubsection{Near-horizon region}
Substituting $\mathcal{W}_1(x,p)$ in the definition of the particle four-current, we obtain
\begin{eqnarray}
n_1(x) &=& -N_1^x(x) \;\approx\; -
\frac{1}{\pi r_H^2}{\int}\frac{dp_0}{2\pi}\,\frac{\Gamma_{p_0}}{e^{2\beta p_0} - 1}\,,
\end{eqnarray}
while $N_1^{y}(x)$ and $N_1^{z}(x)$ are zero.\footnote{We have suppressed the index
``$F$" in the Fermi coordinates for the sake of transparency of the formulas. This should
not cause any confusions as we employ all the time local inertial coordinates in both regions.}
This result implies that the number density of particles is \emph{negative} at $R \sim r_H$,
whereas its current is positive. The physical interpretation of $n_1(x) < 0$ in terms of particles
is problematic here, as the number of particles per cubic centimetre cannot make any physical
sense whenever it is negative.
One possible explanation of this result might be that quantum kinetic theory cannot
adequately describe local physics near the horizon. Although the Wigner distribution
$\mathcal{W}_1(x,p)$ properly reproduces the evaporation effect of black holes, the particle
density may not be physically meaningful, as the notion of a particle may not be well defined at
$R \sim r_H$. On the other hand, it seems that Wigner's concept of a particle should be
applicable in any local Minkowski frame; otherwise it would be unnatural to assume that
this concept holds only in the local (approximately) Minkowski frame on Earth. If the Wigner
particle turns out to be physically realised at $R \sim r_H$, then $n_1(x) < 0$ has to be physically
understood.
Taking into account that the Wigner distribution of a quantum system does not necessarily admit
a probabilistic interpretation~\cite{deGroot&vanLeeuwen&vanWeert},
it seems that there is still a physically admissible way of understanding $n_1(x) < 0$.
Specifically, one might think of $n_1(x)$ as the number of field \emph{modes} per cubic
centimetre \emph{relative} to their number density in the local Minkowski frame. If so, then $n_1(x) < 0$
would mean that the number of field modes near the horizon is smaller than in the flat
case.\footnote{Note that if we consider a one-cubic-metre box with a gas of scalar particles
of temperature $T > T_H$, then $n_1(x)$ will be positive within the volume of this box. We explain
below why $T$ must actually be much bigger than $T_H$, i.e. $T \gg T_H$, in order for this
set-up to make physical sense.} As a
consequence, $n_1(x) = -N_1^x(x) < 0$ should then imply that the number of modes decreases when
mode, one can then understand $\langle \hat{T}_{t_F}^{t_F} \rangle < 0$ as the total mode
energy density relative to their total energy density in the absence of the black hole. In other
words, this picture seems to fit well the near-horizon behaviour of the stress tensor
$\langle \hat{T}_b^a \rangle$ following from $\mathcal{W}_1(x,p)$.
This manner of interpreting $n_1(x) < 0$ as well as $\langle \hat{T}_{t_F}^{t_F} \rangle < 0$
is mostly motivated by the physical understanding of the Casimir effect. This viewpoint is also
consistent with our previous insights~\cite{Emelyanov-15b}. We come back to this issue below.
Comparing the behaviour of the particle four-current at $R \sim r_H$ and $R \gg r_H$, we find
that $n(x)$ changes its sign at a certain distance $R_c$ away from the event horizon. We
expect that it is of the order of $3M$, i.e. at the distance where the energy flux changes its sign (e.g.,
see~\cite{Giddings,Giddings-2,Hod,Dey&Liberati&Pranzetti,Giddings-3}). In one of our forthcoming papers, we
shall try to carefully study this region with the help of the particle four-current.
\subsection{Entropy four-current $\langle \hat{S}^\mu(x) \rangle$}
The macroscopic variables $\langle \hat{T}_\nu^\mu \rangle$ and
$\langle \hat{N}^\mu \rangle$ at $R \gg r_H$ behave like those of a steady stellar-wind
flux of distance-independent temperature. Therefore, the entropy density $s_0(x)$ coincides
with that of such an idealised stellar wind.
As shown above, this picture is inapplicable in the near-horizon region. Moreover, the
entropy density $s_1(x)$ turns out to be imaginary. Specifically, its imaginary part is
ambiguous and reads
\begin{eqnarray}
\text{Im}\, s_1(x) &=& (\pi + 2\pi k)\,n_1(x) \quad \text{with} \quad k \;\in\; \mathbb{Z}\,.
\end{eqnarray}
We do not understand how it can be interpreted in terms of statistical properties of
some normal many-particle system.
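The origin of this imaginary part can be made explicit with an elementary sketch (Python; the negative value $f = -1/4$ is purely illustrative): the entropy integrand contains $\ln(h^3 f)$, and the complex logarithm of a negative argument carries the branch-dependent imaginary part $\pi + 2\pi k$, exactly the prefactor appearing in $\text{Im}\,s_1$.

```python
import cmath

# For n_1 < 0 the entropy integrand contains ln(h^3 f) with a negative
# argument; its imaginary part is pi + 2 pi k, one value per branch.
f = -0.25                      # an illustrative negative distribution value
w = cmath.log(f)               # principal branch
print(w.real, w.imag)          # ln|f| and +pi

branches = [w.imag + 2*cmath.pi*k for k in range(3)]
print(branches)                # pi, 3 pi, 5 pi, ...
```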
\section{Concluding remarks}
\label{sec:cr}
\subsection{Scalar field splitting}
If we consider the \emph{fundamental} field operator $\hat{\Phi}(x)$ in the local inertial frame
in the far-horizon and near-horizon region, then we find that it possesses the following structure:
\begin{eqnarray}\label{eq:operator-sum}
\hat{\Phi}(x) &=& \hat{\Phi}_M(x) + \delta\hat{\Phi}(x) \quad \text{with} \quad
[\hat{\Phi}_M(x),\delta\hat{\Phi}(x')] \;=\; 0\,,
\end{eqnarray}
where $\hat{\Phi}_M(x)$ is the field operator as if there is no black hole, whereas $\delta\hat{\Phi}(x)$
vanishes as $r_H/R$ in the asymptotically flat region ($R \gg r_H$) and is of $\text{O}(1)$ near the
black-hole horizon ($R \sim r_H$).
The field operator $\hat{\Phi}(x)$ before the collapse can be split into
a sum of two \emph{non}-fundamental operators with non-intersecting supports, namely $\hat{\Phi}_<(x)$
and $\hat{\Phi}_>(x)$, such that $\hat{\Phi}_<(x)$ vanishes for the Finkelstein-Eddington time $v > v_H$,
where $v_H$ corresponds to the moment when the event horizon forms, whereas $\hat{\Phi}_>(x)$
vanishes for $v < v_H$. One can further split $\hat{\Phi}_<(x)$ into $\hat{\Phi}_c(x)$ and $\hat{\Phi}_b(x)$~\cite{Hawking,DeWitt},
such that $\hat{\Phi}_c(x)$ has a vanishing support outside of the black-hole horizon, whereas $\hat{\Phi}_b(x)$
vanishes inside the horizon.\footnote{It should be noted that the modes $u_{in}(x|l,m,\omega,1)$ defined in~\cite{DeWitt}
are associated with the operator $\hat{\Phi}_<(x)$, while $u_{in}(x|l,m,\omega,2)$ with $\hat{\Phi}_>(x)$. The modes
$u_{in}(x|l,m,\omega,1)$ can be further split into $u_{out}(x|l,m,\omega,0)$ and $u_{out}(x|l,m,\omega,1)$.
These are related to $\hat{\Phi}_c(x)$ and $\hat{\Phi}_b(x)$, respectively.}
In terms of the non-fundamental operators $\hat{\Phi}_>(x)$ and $\hat{\Phi}_b(x)$ for $R > r_H$,
we have
\begin{subequations}
\begin{eqnarray}
\hat\Phi_M(x) &=& \hat{\Phi}_>(x)
\quad\text{and}\quad
\delta\hat\Phi(x) \;=\; \hat{\Phi}_b(x) \hspace{5.1mm} \text{for} \quad R \;\gg\; r_H\,,
\\[1mm]
\hat\Phi_M(x) &=& \hat{\Phi}_b(x)
\hspace{5.1mm}\text{and}\quad
\delta\hat\Phi(x) \;=\; \hat{\Phi}_>(x) \quad \text{for} \quad R \;\sim\; r_H\,.
\end{eqnarray}
\end{subequations}
Therefore, the operator $\hat{\Phi}_b(x)$ is as physically relevant as $\hat{\Phi}_>(x)$, and
\emph{vice versa}, for having a proper singularity structure in the field propagator far away from as well as near
the event horizon. This implies, for instance, that it is not legitimate to omit $\hat{\Phi}_>(x)$ in the
asymptotically flat region, contrary to common practice. Precisely this part of $\hat{\Phi}(x)$ has been
successfully exploited in particle physics, but it does \emph{not} contribute to the covariant Wigner
distribution $\mathcal{W}_0(x,p)$.
The crucial role near the event horizon is, however, played by $\hat{\Phi}_b(x)$, as this part of
$\hat{\Phi}(x)$ provides the proper singularity in the 2-point function $W_1(x,x')$ and, hence, allows one to
have the Feynman propagator with its ordinary interpretation in particle physics. The Wigner distribution
$\mathcal{W}_1(x,p)$ we have derived above is completely independent of $\hat{\Phi}_b(x)$.
The splitting~\eqref{eq:operator-sum} has no physical meaning in a local inertial frame falling in the black-hole
geometry. Still, the fundamental field operator $\hat{\Phi}(x)$ and its Hilbert space representation
make physical sense all the way down to the black hole. This is contrary to the tacitly proposed idea of
defining a separate Hilbert space for each of the non-fundamental operators on the right-hand side of
Eq.~\eqref{eq:operator-sum}. This idea eventually leads to the conclusion that the far-horizon region has to
be described by a thermal density matrix. We do not share this point of view, as it is beyond our current
understanding of local quantum field theory and, actually, inconsistent with it by
construction~\cite{Emelyanov-15b}.
\subsection{Scalar field particles and Wigner distribution}
The scalar operator $\hat{\Phi}(x)$ acquires a rich physical meaning in QFT when one represents it as
the sum of two non-Hermitian field operators, namely $\hat{\Phi}(x) = \hat{a}(x) + \hat{a}^\dagger(x)$. The
operator $\hat{a}(x)$ is in turn defined through the equation
\begin{eqnarray}
\hat{a}(x) &=& {\int}\frac{d^4k}{(2\pi)^3}\,\theta(k_0)\delta(k^2)\,\Phi_\mathbf{k}(x)\,\hat{a}_\mathbf{k}\,,
\end{eqnarray}
where $\Phi_\mathbf{k}(x)$ are the mode functions being positive-frequency solutions (with respect to
$P_0$ of the \emph{local}\footnote{We find ourselves in a local (approximately)
inertial frame on earth.
Therefore, the Poincar\'{e} group in particle physics is local as well. Although the universe is not globally flat at
macroscopic scales, the Minkowski-space approximation is fully enough to successfully describe
scattering processes in the particle colliders.} Poincar\'{e} group) of the scalar field equation and satisfy
the normalisation condition $(\Phi_\mathbf{p},\Phi_\mathbf{k})_\text{KG} = \delta(\mathbf{p}-\mathbf{k})$.
The vacuum $|\Omega\rangle$ is defined through the equation $\hat{a}_\mathbf{k}|\Omega\rangle = 0$.
The one-particle state $|\mathbf{k}\rangle = \hat{a}_\mathbf{k}^\dagger|\Omega\rangle$ is not normalisable.
The \emph{physical} 1-particle state is defined through the covariant wave packet $h(x)$:
\begin{eqnarray}
h(x) &=& {\int}\frac{d^4k}{(2\pi)^3}\,\theta(k_0)\delta(k^2)\,h(k)\Phi_\mathbf{k}(x)\,,
\end{eqnarray}
where $h(k)$ is a square-integrable function. A \emph{localised} particle state described by $h(k)$ is
\begin{eqnarray}
|h\rangle &\equiv& \hat{a}^\dagger(h)|\Omega\rangle
\;\equiv\; (h^*,\hat{\Phi})_\text{KG}|\Omega\rangle
\;=\; {\int}\frac{d^4k}{(2\pi)^3}\,\theta(k_0)\delta(k^2)\,h(k)|\mathbf{k}\rangle\,,
\end{eqnarray}
which is normalisable, i.e. $\langle h|h\rangle = 1$, since it has a finite support.
There are infinitely many ways of splitting the field operator $\hat{\Phi}(x)$ into a sum of non-Hermitian
operators. This is a direct consequence of the linearity of the field equation. The proposal was to choose
different mode functions for different coordinate frames. This usually implies that it is meaningful to have
different notions of particles in different frames. This eventually resulted in a belief that ``quantum
mechanics is observer-dependent". We do not share this point of view, as it leads to various
paradoxical or unphysical conclusions. Recently, we have
proposed another principle which is conservative in its spirit and based on the idea of the equivalence
principle~\cite{Emelyanov-16c}. To make it short, the mode functions $\Phi_\textbf{p}(x)$ defining a
\emph{physical}, observer-\emph{in}dependent notion of particles are those which acquire the Minkowski
structure, namely
\begin{eqnarray}\label{eq:pm}
\Phi_\mathbf{k}(x) &\sim& \exp(-ik_\mu x^\mu)
\end{eqnarray}
in a local inertial frame defined at each point of spacetime. This makes sense only in space-time regions with
not too strong gravity. The main argument in favour of this definition is that we have been doing this all the
time on earth to predict and describe various scattering processes in the particle colliders.
Indeed, a well-tested notion of the particle is associated with the unitary, irreducible representations of the
Poincar\'{e} group $\mathcal{P}_+^\uparrow$. This idea was proposed long ago by Wigner (e.g.,
see~\cite{Haag}). The Poincar\'{e} group here forms the isometry of the \emph{local} Minkowski frame only, as
the universe is globally non-flat. This is the basic idea behind our proposal of relating the well-tested notion
of the particle in Minkowski space to its definition in curved spacetime. Note that a particle in a
non-inertial frame is described by an appropriate covariant wave packet of non-vanishing acceleration.
Once we have defined a wave packet, we have the 1-particle state carrying information about the
particle. The wave packet is characterised by its non-vanishing support. Normally, it should correspond
to the size of the particle. In our case, this is given by the de Broglie wavelength $\lambda_\mathbf{k}$
of the scalar particle. Therefore, the correction to the right-hand side of \eqref{eq:pm} near
horizon must be suppressed by a factor of $(\lambda_\mathbf{k}/r_H)^2 \ll 1$, otherwise there is no
well-defined notion of the particle in the Wigner sense.\footnote{It seems that we are in agreement at this
point with~\cite{Bardeen} (see paragraph 3 on p. 2).} This is indeed the case.
Thus, we cannot relate the Wigner distribution $\mathcal{W}(x,p)$ we have found above to the \emph{real}
particles as this originates from the suppressed correction to the right-hand side of~\eqref{eq:pm}.
\subsection{Negative particle density and quantum noise}
The main idea of defining $\mathcal{W}(x,p)$ in QFT is to have a distribution function, derived from
first principles, with the aid of which one can determine the local macroscopic state variables characterising
many-particle systems~\cite{deGroot&vanLeeuwen&vanWeert}. Indeed, we have seen that the Wigner
distribution $\mathcal{W}(x,p)$ can be used to compute the stress tensor $\langle \hat{T}_\nu^\mu\rangle$
and the particle four-flow $\langle \hat{N}^\mu\rangle$ as its second and first moment with respect to the
four-momentum $p_\mu$, respectively.
We have shown that the particle four-current
$N^\mu = (n_0,N_0^r,0,0)$ can make physical sense as a steady outward particle flow in the
asymptotically flat region. This is in full agreement with~\cite{Hawking}. However, this interpretation of
$N^\mu = (n_1,N_1^x,0,0)$ is inapplicable in the near-horizon region, because $n_1 < 0$ is impossible
for real particles and qualitatively differs from the behaviour of a normal relativistic gas~\cite{Kremer}.
As pointed out above, $\mathcal{W}(x,p)$ comes in the present set-up from the correction to the leading
term of the mode functions (see Eq.~\eqref{eq:pm}). This correction plays a sub-leading role in the definition
of the particle creation operator $\hat{a}^\dagger(h)$ of the wave function $h(x)$, but the leading role
in making $\mathcal{W}(x,p)$ non-trivial. Therefore, we think that $\mathcal{W}(x,p)$ with its moments
are entirely due to the quantum fluctuations described by that correction which is in turn induced by the
presence of the black hole. The number of the modes characterising these fluctuations turns out to be
smaller at $R < R_c$ than that in the absence of the black hole. As a consequence, the relative number
density and energy density are negative.
If so, a novel property of the quantum fluctuations would be their ``ability" to transfer energy (with gravity
playing the role of the ``working body"). This does not seem to be a completely speculative idea, bearing in mind
the lab set-up we described in~\cite{Emelyanov-16d}. Specifically, one can compute the vacuum energy density
in two cavities separated by an extra metallic plate in the Casimir set-up, first when this plate is in the middle and
then when it is shifted in such a way that the dynamical Casimir effect is negligible.
Comparing the total vacuum energy density after and before the shift, one finds that its absolute value
has increased. Thus, the negative vacuum energy has been partially redistributed between the cavities and
partially dissipated in the middle plate by heating it up. The middle plate in this process plays the role of the
working body.
\section*{ACKNOWLEDGMENTS}
It is a pleasure to thank Frans Klinkhamer and Jos\'{e} Queiruga for discussions.
\begin{appendix}
\section{Vacuum expectation value of stress tensor $\hat{T}_{\mu\nu}(x)$}
\label{app:emt}
The stress tensor $T_{\mu\nu}(x)$ of the (classical) massless scalar field $\Phi(x)$ conformally coupled to
gravity is given by
\begin{eqnarray}
T_{\mu\nu} &=& \frac{2}{3}\nabla_\mu\Phi\nabla_{\nu}\Phi - \frac{1}{6}\,g_{\mu\nu}
\nabla_\lambda\Phi \nabla^\lambda\Phi
-\frac{1}{3}\Phi\nabla_\mu\nabla_\nu\Phi\,.
\end{eqnarray}
Employing the point-splitting technique to get the renormalised value of the radial energy flux, we obtain
\begin{eqnarray}
\langle \hat{T}_{tr} \rangle &=& \frac{1}{3}{\lim_{x' \rightarrow x}}\Big[
(\partial_t\partial_{r'} + \partial_{t'}\partial_r)
- \frac{1}{2}(\nabla_t\nabla_r + \nabla_{t'}\nabla_{r'})\Big]W(x,x') \;=\; \pm
\frac{r_H^2}{r^2f}{\int}\frac{d\omega\,\omega}{(4\pi)^2}\,b_\omega\,
\end{eqnarray}
for both the far-horizon and near-horizon regions.
The $\Delta{r}_\star$-term in the bi-scalar $\chi_\omega(\mathbf{x},\mathbf{x}')$ is crucial for having
non-vanishing radial energy flux.
It is straightforward to further show that
\begin{subequations}
\begin{eqnarray}
\langle (\partial_t\hat{\Phi})^2\rangle &=&
+\frac{g_{tt}}{f}{\int}\frac{\omega d\omega}{(4\pi)^2}\,\bar{\chi}_\omega(\mathbf{x},\mathbf{x}),
\\[2mm]
\langle (\partial_i\hat{\Phi})^2\rangle &=&
-\frac{g_{ii}}{3f}{\int}\frac{\omega d\omega}{(4\pi)^2}
\left[1+\frac{\bar{R}_i^i}{2\omega^2f}\right]\bar{\chi}_\omega(\mathbf{x},\mathbf{x})
+{\int}\frac{d\omega}{(4\pi)^2\omega}
\lim_{\mathbf{x}' \rightarrow \mathbf{x}}\partial_{(i}\partial_{i')}\bar{\chi}_\omega(\mathbf{x},\mathbf{x}'),
\end{eqnarray}
\end{subequations}
where there is no summation over $i = \{r,\theta,\phi\}$ in the second line, and we have introduced
a new bi-scalar as follows
\begin{eqnarray}
\bar{\chi}_\omega(\mathbf{x},\mathbf{x}') &\equiv&
\frac{\chi_\omega(\mathbf{x},\mathbf{x}')}{(f(r)f(r'))^\frac{1}{2}}\,,
\end{eqnarray}
and
\begin{subequations}
\begin{eqnarray}
\langle \hat{\Phi}\hat{\Phi}_{;tt} \rangle &=&
-\frac{g_{tt}}{f}{\int}\frac{\omega d\omega}{(4\pi)^2}\,\bar{\chi}_\omega(\mathbf{x},\mathbf{x})
+ \frac{1}{2}{\int}\frac{d\omega}{(4\pi)^2\omega}\lim_{r' \rightarrow r}
\big[\nabla_t^2{+}\nabla_{t'}^2\big]\bar{\chi}_\omega(r,r'),
\\[2mm]
\langle \hat{\Phi}\hat{\Phi}_{;ii} \rangle &=&
\frac{g_{ii}}{3f}{\int}\frac{\omega d\omega}{(4\pi)^2}
\left[1{+}\frac{\bar{R}_i^i}{2\omega^2f}\right]\bar{\chi}_\omega(\mathbf{x},\mathbf{x})
{+}\frac{1}{2}{\int}\frac{d\omega}{(4\pi)^2\omega}
\lim_{\mathbf{x}' \rightarrow \mathbf{x}}\big[\nabla_i^2{+}\nabla_{i'}^2\big]
\bar{\chi}_\omega(\mathbf{x},\mathbf{x'}).
\end{eqnarray}
\end{subequations}
The vacuum expectation value of the trace of the non-renormalised stress tensor must vanish:
\begin{eqnarray}
\langle \hat{T}_\mu^\mu \rangle &=& -\frac{1}{3}
\langle \hat{\Phi}\Box\hat{\Phi} \rangle \;=\; -\frac{1}{3f^2}\int_\mathbf{R}\frac{d\omega}{(4\pi)^2\omega}\,
\lim_{\mathbf{x}' \rightarrow \mathbf{x}}\bar{\Box}\chi_\omega(\mathbf{x},\mathbf{x}') \;=\; 0\,.
\end{eqnarray}
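Indeed, the first equality can be checked by contracting the stress tensor above with $g^{\mu\nu}$ (using $g_{\mu\nu}g^{\mu\nu} = 4$):
\begin{eqnarray}
g^{\mu\nu}T_{\mu\nu} &=& \frac{2}{3}\nabla_\lambda\Phi\nabla^\lambda\Phi
- \frac{4}{6}\,\nabla_\lambda\Phi \nabla^\lambda\Phi
-\frac{1}{3}\Phi\Box\Phi \;=\; -\frac{1}{3}\Phi\Box\Phi\,,
\end{eqnarray}
and the right-hand side vanishes by the field equation $\Box\Phi = 0$, which holds for the conformally coupled massless field on the Ricci-flat Schwarzschild background.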
\subsubsection{Far-horizon region}
Substituting the bi-scalar $\chi_\omega(\mathbf{x},\mathbf{x}')$ given in Eq.~\eqref{eq:bi-scalar}
with Eq.~\eqref{eq:fun-coefficients}, we
obtain
\begin{eqnarray}
\langle \hat{T}_\nu^\mu\rangle
&=& \frac{1}{f^2}{\int}\frac{d\omega\,\omega a_\omega}{(4\pi)^2}
\left[
\begin{array}{cc}
1 & 0 \\
0 & -\frac{1}{3}{\cdot}1_{3{\times3}} \\
\end{array}
\right]
+ \frac{1}{fR^2}{\int}\frac{d\omega\,\omega b_\omega}{(4\pi)^2}
\left[
\begin{array}{ccc}
+1 & +1 & 0 \\
-1 & -1 & 0\\
0 & 0 & 0_{2{\times}2} \\
\end{array}
\right] +\text{O}\Big(\frac{1}{R^5}\Big)
\end{eqnarray}
in the far-horizon region, i.e. for $R \gg r_H$.
\subsubsection{Near-horizon region}
In the near-horizon region, i.e. $R \sim r_H$, the non-renormalised stress-energy tensor reads
\begin{eqnarray}\label{eq:emt-nhr}
\langle \hat{T}_\nu^\mu\rangle
&=& \frac{1}{f^2}{\int}\frac{d\omega\,\omega a_\omega}{(4\pi)^2}
\left[
\begin{array}{cc}
1 & 0 \\
0 & -\frac{1}{3}{\cdot}1_{3{\times3}} \\
\end{array}
\right]
+ {\int}\frac{d\omega\,\omega b_\omega}{(4\pi)^2r_H^2}\left[
\begin{array}{ccc}
+\frac{1}{f} & - 1 & 0 \\
+\frac{1}{f^2} & -\frac{1}{f} & 0\\
0 & 0 & 0_{2{\times}2} \\
\end{array}
\right]
\\[1mm]\nonumber
&&\hspace{20mm} -\frac{1}{3r_H^4}{\int}\frac{d\omega\, b_\omega}{(4\pi)^2\omega}
\left[
\begin{array}{cc}
+1_{2{\times}2} & 0 \\
0 & -1_{2{\times}2} \\
\end{array}
\right]{\times}\;\Big(1 -\frac{11}{2}f + \frac{35}{2}f^2+\text{O}\big(f^3\big)\Big)
\end{eqnarray}
for $\alpha_\omega(r,r')$ and $\beta_\omega(r,r')$ given in~\eqref{eq:fun-coefficients}.
The matrix structure of the third term in Eq.~\eqref{eq:emt-nhr} is of crucial importance, because
it guarantees that this term is also finite on the horizon in the Fermi frame. It should
be noted that the extra corrections to $\alpha_\omega(r,r')$ and $\beta_\omega(r,r')$ also contribute
to this term at leading order, changing its numerical value and sign, as follows from
the numerical results of~\cite{Elster}. This contribution to the stress tensor does
not change its value and structure when rewritten in the Fermi frame like the Hartle-Hawking part
(given in Eq.~\eqref{eq:hh-nbhh}).
The terms vanishing as $f(R)$ in the Schwarzschild frame also contribute in the Fermi frame.
This implies that the difference
$\langle \Delta\hat{T}_b^a\rangle \equiv \langle \hat{T}_b^a\rangle - \langle \hat{T}_b^a\rangle_\text{HH}$
in the Fermi frame is actually given by
\begin{eqnarray}\label{eq:delta-emt-nhr}
\langle \Delta\hat{T}_b^a\rangle &\approx& - \frac{L}{16\pi r_H^2}
\left[
\begin{array}{ccc}
+1 & - 1 & 0 \\
+1 & -1 & 0\\
0 & 0 & 0_{2{\times}2} \\
\end{array}
\right]
-{\gamma_1}\left[
\begin{array}{cc}
+1_{2{\times}2} & 0 \\
0 & -1_{2{\times}2} \\
\end{array}
\right]
-{\gamma_2}\left[
\begin{array}{ccc}
+1 & +1 & 0 \\
-1 & -1 & 0\\
0 & 0 & 0_{2{\times}2} \\
\end{array}
\right]
\end{eqnarray}
near the event horizon with
\begin{subequations}
\begin{eqnarray}
\gamma_1 &=& \sum_{l = 0}^{+\infty}{\int}\frac{dx}{4\pi x}
\frac{(2l{+}1)|B_{\omega l}|^2}{e^{8\pi x}-1}\,\frac{l(l{+}1)(1 + 24x^2)+8x^2}
{6\pi r_H^4(1+16x^2)}\,,
\\[1mm]
\gamma_2 &=& \sum_{l = 0}^{+\infty}{\int}\frac{dx}{4\pi x}\frac{(2l{+}1)|B_{\omega l}|^2}{e^{8\pi x}-1}\,
\frac{2l(l{+}1)(1+40x^2)+3[l(l{+}1)]^2(1+8x^2) +72x^2}
{12\pi r_H^4(1+20x^2 +64x^4)}\,,
\end{eqnarray}
\end{subequations}
where $x \equiv \omega M$, with the numerical values $\gamma_1 \approx 1.25{\times}10^{-6}/M^{4}$
(in agreement with~\cite{Elster}) and $\gamma_2 \approx 4.45{\times}10^{-6}/M^{4}$. It should be noted
that $L/16\pi r_H^2 \approx 9.25{\times}10^{-8}/M^{4}$, which is much smaller than $\gamma_2$. Still,
the decrease of the black-hole mass $M$ is entirely due to $L$, namely $\dot{M} = -L$, where the dot stands for
differentiation with respect to the Schwarzschild time coordinate. This means
that the last term in Eq.~\eqref{eq:delta-emt-nhr} is geometrical, and it might be that one should discard
it from the solution of the field equation. Note that this term
vanishes as $f(R)$ in the near-horizon region in the Schwarzschild frame.
The last two terms in~\eqref{eq:delta-emt-nhr} cannot be described by any one-particle distribution
$\tilde{f}_{\text{eff},1}(\mathbf{p})$, because any such distribution would have to satisfy the condition
\begin{eqnarray}
{\int}\frac{d^3\mathbf{p}}{|\mathbf{p}|}\,\tilde{f}_{\text{eff},1}(\mathbf{p}) &=& 0\,.
\end{eqnarray}
This follows from
\begin{eqnarray}
\Delta\tilde{W}_1(x,x') &=& \frac{1}{2}
\big((\gamma_1+\gamma_2)\Delta{t}^2 -2\gamma_2\Delta{t}\Delta{x}
- (\gamma_1-\gamma_2)\Delta{x}^2 + \gamma_1\Delta{y}^2 + \gamma_1\Delta{z}^2 \big)\,,
\end{eqnarray}
which vanishes in the coincidence limit, i.e. $x' \rightarrow x$. Note that
$\Box \Delta\tilde{W}_1(x,x') = 0$ holds exactly, and $\Delta\tilde{W}_1(x,x')$ is locally suppressed
as $(\Delta{x}/r_H)^2$ with respect to $\Delta{W}_1(x,x')$ and as $(\Delta{x}/r_H)^4$
with respect to $W_M(x,x') \approx W_\text{HH}(x,x')$.
\end{appendix}
\section{Introduction}\label{sec:intro}
As defined in \cite{bib:Finch}, a {\em $y$-rough number} is an integer whose prime factors are all at least $y$. For real numbers $x$ and $y$, the function $\Phi(x,y)$ counts the number of $y$-rough numbers less than or equal to $x$. This function was studied by Buchstab, who showed in \cite{bib:Buchstab} that for any fixed $u>1$,
\[\Phi(x,x^{1/u})\sim \omega(u)\frac{x}{\log x^{1/u}}, \hspace{12pt} x\rightarrow \infty,\] where $\omega(u)$ is the unique continuous function $\omega:[1,\infty]\rightarrow (0,\infty)$ satisfying \[\begin{array}{ll}
\omega(u)=\frac{1}{u}, & 1\leq u\leq 2, \\
\\
\frac{d}{du}(u\omega(u))=\omega(u-1), & u\geq 2. \\
\end{array}\]
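On $[2,3]$ the delay differential equation integrates in closed form to $u\,\omega(u) = 1+\ln(u-1)$, which provides a check for a simple numerical integration of Buchstab's function. A minimal Euler sketch (the step size $h$ is an arbitrary choice of ours):

```python
import math

# omega(u) = 1/u on [1, 2]; for u >= 2, d(u*omega(u))/du = omega(u - 1).
h = 1e-4
M = int(round(1 / h))
omega = [1.0 / (1 + i * h) for i in range(M + 1)]     # grid over u in [1, 2]
for i in range(M + 1, 2 * M + 1):                     # extend to u in (2, 3]
    u = 1 + i * h
    w_prev = (u - h) * omega[i - 1]                   # W(u-h), with W(u) = u*omega(u)
    omega.append((w_prev + h * omega[i - 1 - M]) / u) # Euler step: W'(u) = omega(u-1)

# Closed form on [2, 3]: u*omega(u) = 1 + ln(u-1), so omega(3) = (1 + ln 2)/3.
exact = (1 + math.log(2)) / 3
```

The computed $\omega(3)\approx 0.5644$ agrees with the closed form to the accuracy of the Euler scheme.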
The current study of $\Phi(x,y)$ is motivated by a graph labeling algorithm used in \cite{bib:mainpaper}. For a graph $G$ with $n$ vertices, a {\em prime labeling} is a bijection $f:V(G)\rightarrow \{1,2,...,n\}$ such that $\gcd(f(v),f(w))=1$ for any edge $vw$ in $G$. We seek a prime labeling of a bipartite graph $G=G[A,B]$ with $|A|=|B|=n/2$. We begin by placing the even multiples of 3 at most $n$ on vertices in $A$ and the odd multiples of 3 at most $n$ on vertices in $B$ such that no two vertices labeled with a multiple of 3 are adjacent. Starting with $p=5$ and continuing with all odd primes $p<n$, we place the unused even multiples of $p$ at most $n$ on unlabeled vertices in $A$ and the unused odd multiples of $p$ at most $n$ on unlabeled vertices in $B$ such that no two vertices labeled with a multiple of $p$ are adjacent. If this process can be completed, then $G$ has a prime labeling. At any step of the process, before assigning the unused multiples of a prime $p$, the number of unlabeled vertices in $B$ is given by $\Phi(n,p)$; the goal of this paper is to prove the following lower bound.
\begin{thm}\label{thm:mainthm}
Suppose $p\geq 11$ is prime. Then for all $n\geq 2p$,
\[\Phi(n,p) \geq \left\lfloor\frac{2n}{p}\right\rfloor+1.\]
\end{thm}
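For small cases the bound is easy to check by brute force; a sketch (trial division, with helper names of our choosing):

```python
def is_rough(n, y):
    # True if every prime factor of n is at least y (vacuously true for n = 1)
    d = 2
    while d * d <= n:
        if n % d == 0:
            if d < y:
                return False
            while n % d == 0:
                n //= d
        d += 1
    return n == 1 or n >= y

def phi(x, p):
    # Phi(x, p): count of p-rough integers in [1, x]
    return sum(is_rough(n, p) for n in range(1, x + 1))

# Check the bound for p = 11, 13 and all n from 2p up to 500.
holds = all(phi(n, p) >= 2 * n // p + 1
            for p in (11, 13) for n in range(2 * p, 501))
```

For instance, $\Phi(22,11)=5$, counting $\{1,11,13,17,19\}$, which meets the bound $\lfloor 44/11\rfloor+1=5$ with equality.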
The proof of this theorem breaks into four cases depending on the value of $n$; these cases are covered by Lemmas \ref{lem:phiboundcase1}-\ref{lem:phiboundcase3} and Lemma \ref{lem:phiboundcase4} in Section \ref{sec:proof}. For a real number $x$, let $\pi(x)$ denote the number of prime numbers less than or equal to $x$; we will need the following results on the distribution of prime numbers.
\begin{lem}[Corollary 1 in \cite{bib:Rosser}]\label{lem:pibound}
For $x>1$, \[\pi(x)<\frac{1.25506x}{\ln x},\] and for $x\geq 17$, \[\pi(x)>\frac{x}{\ln x}.\]
\end{lem}
\begin{lem}[Theorem, page 180 in \cite{bib:Nagura}]\label{lem:Bertrandtype}
Let $x\geq 25$. There is at least one prime number in the interval $\left(x,6x/5\right)$.
\end{lem}
\section{Proof of Theorem \ref{thm:mainthm}}\label{sec:proof}
For any $n$ and $p$, set $z=2n/p$; then $\left\lfloor 2n/p \right\rfloor+1=\lfloor z\rfloor+1$ is a step function that only increments at integer values of $z$. Therefore, in what follows, we can restrict our attention to $\Phi(zp/2,p)$ for integer values of $z$. In particular, several of the proofs in this section require checking a finite number of small cases. The author has verified these by hand, and the details are provided in Appendix A.
\begin{lem}\label{lem:phiboundcase1}
If $p\geq 11$ is prime and $2p\leq n\leq 3p$, then $\Phi(n,p) \geq \left\lfloor 2n/p \right\rfloor+1$.
\end{lem}
\begin{proof}
Let $p\geq 11$ be prime and $2p\leq n\leq 3p$; set $n=pz/2$, where $4\leq z\leq 6$ is a real number. Since we only need to consider integer values of $z$, it suffices to show that $\Phi(2p,p)\geq 5$, $\Phi(5p/2,p)\geq 6$, and $\Phi(3p,p)\geq 7$.
Let $X_n^p$ denote the set of all positive integers at most $n$ with no prime factors less than $p$, so that $|X_n^p|=\Phi(n,p)$. If $11\leq p\leq 23$, then Table \ref{tab:dataphi1} on page \pageref{tab:dataphi1} provides the set $X_n^p$ and the value of $\Phi(zp/2,p)$ for $z=4,5,6$.
If $p\geq 29$, then the set of integers less than $2p$ that are not divisible by any prime $q<p$ includes $1$ and $p$; moreover, because $p>25$, Lemma \ref{lem:Bertrandtype} implies that there is at least one prime in each interval \[\left(p,\frac{6}{5}p\right),\left(\frac{6}{5}p,\frac{36}{25}p\right),\left(\frac{36}{25}p,\frac{216}{125}p\right).\] Thus, $\Phi(2p,p)\geq 5$. We apply Lemma \ref{lem:Bertrandtype} again to obtain at least one prime in $\left(2p,\frac{12}{5}p\right)\subset \left(2p,\frac{5}{2}p\right)$, so $\Phi(5p/2,p)\geq \Phi(2p,p)+1\geq 6$. Similarly from Lemma \ref{lem:Bertrandtype}, there is at least one prime in $\left(\frac{5}{2}p,3p\right)$, so $\Phi(3p,p)\geq \Phi(5p/2,p)+1\geq 7$, and the proof is complete.
\end{proof}
\begin{lem}\label{lem:phiboundcase2}
If $p\geq 11$ is prime and $3p<n\leq p^2$, then $\Phi(n,p) \geq \left\lfloor 2n/p\right\rfloor+1$.
\end{lem}
\begin{proof}
Assume first that $11\leq p\leq 89$ and set $n=pz/2$, where $6<z\leq 2p$ is a real number. Note that $\Phi(n,p)$ is a nondecreasing function in $n$, so if $\Phi(n,p)\geq 2p+1$, then $\Phi(n_0,p)\geq 2p+1$ for all $n_0\geq n$. Therefore, for each $p$, we simply need to compute $\Phi(pz/2,p)$ starting with $z=7$ and continuing until we obtain a value at least $2p+1$. The necessary values of $\Phi(pz/2,p)$ for $7\leq z\leq 18$ are given in Table \ref{tab:dataphi2a} on page \pageref{tab:dataphi2a}, and the necessary values of $\Phi(pz/2,p)$ for $19\leq z\leq 28$ are given in Table \ref{tab:dataphi2b} on page \pageref{tab:dataphi2b}. No further computations are required for $11\leq p\leq 89$.
Assume now that $p\geq 97$ and again set $n=pz/2$, where $6<z\leq 2p$ is a real number. We first show that \[\frac{p}{\ln p}\geq \frac{4z-4}{z-5.02024}\] for all $z>6$. When $z=6$, $(4z-4)/(z-5.02024)=20/0.97976\leq 20.42$, and for $p\geq 97$, $p/\ln p\geq 20.42$. The derivative of $f(z)=(4z-4)/(z-5.02024)$ is \[f'(z)=\frac{-16.08096}{(z-5.02024)^2}<0.\] Thus, the right hand side decreases as $z$ increases, so \[\frac{p}{\ln p}\geq \frac{4z-4}{z-5.02024}\] for all $z>6$ as well.
From this we derive that
\begin{equation}\label{eqn:ineq}
\frac{p}{4\ln p}(z-5.02024)\geq z-1.
\end{equation} Since $\Phi(n,p)$ counts 1, $p$, and all primes greater than $p$ and less than or equal to $n$, we have (using Lemma \ref{lem:pibound}) \[\Phi(n,p)\geq \pi(pz/2)-\pi(p)+2\geq \frac{\frac{pz}{2}}{\ln(\frac{pz}{2})}-\frac{1.25506p}{\ln p}+2.\] Since $z\leq 2p$ implies $pz/2\leq p^2$, we have \[\frac{\frac{pz}{2}}{\ln(\frac{pz}{2})}-\frac{1.25506p}{\ln p}+2\geq \frac{\frac{pz}{2}}{\ln p^2}-\frac{1.25506p}{\ln p}+2=\frac{pz-5.02024p}{4\ln p}+2.\] Finally, using (\ref{eqn:ineq}), we have \[\frac{pz-5.02024p}{4\ln p}+2\geq z-1+2=\frac{2n}{p}+1\geq \left\lfloor\frac{2n}{p}\right\rfloor+1\] as desired.
\end{proof}
\begin{lem}\label{lem:phiboundcase3}
If $p\geq 11$ is prime and $p^2<n\leq p^d$, where $d=(p-2c)/2\ln p$ and $c=1.25506$, then $\Phi(n,p) \geq \left\lfloor 2n/p\right\rfloor+1$.
\end{lem}
\begin{proof}
If $p=11$ then $d<2$, so the statement is vacuously true. Now let $p\geq 13$, and set $n=p^\alpha$, where $2<\alpha\leq d=(p-2c)/2\ln p$ and $c=1.25506$. We know \[\frac{\alpha}{p^{\alpha-2}}<2\] for all $\alpha>2$, because the terms are equal for $\alpha=2$ and \[\frac{d}{d\alpha}\left(\frac{\alpha}{p^{\alpha-2}}\right)=\frac{1-\alpha\ln p}{p^{\alpha-2}}<0.\] From this we obtain \[\frac{c\alpha}{p^{\alpha-2}}< 2c\Rightarrow p-\frac{c\alpha}{p^{\alpha-2}}> p-2c=2d\ln p\geq 2\alpha\ln p.\] Multiplying both sides of the inequality $p-c\alpha/p^{\alpha-2}\geq 2\alpha\ln p$ by $p^{\alpha-1}/\alpha\ln p$ yields \[\frac{p^\alpha-c\alpha p}{\alpha\ln p}\geq 2p^{\alpha-1}\geq \left\lfloor\frac{2p^\alpha}{p}\right\rfloor.\] Again $\Phi(n,p)$ counts 1, $p$, and all primes greater than $p$ and less than or equal to $n$, so we have (using Lemma \ref{lem:pibound}) \[\Phi(p^\alpha,p)\geq \pi(p^\alpha)-\pi(p)+2\geq \frac{p^\alpha}{\alpha\ln p}-\frac{cp}{\ln p}+2=\frac{p^\alpha-c\alpha p}{\alpha\ln p}+2\geq \left\lfloor\frac{2p^\alpha}{p}\right\rfloor+1.\] This completes the proof.
\end{proof}
The fourth and final case requires an extended argument. Let $p_1=2,p_2=3,p_3=5,...$ denote the sequence of primes, and for $k\geq 1$ set $Q_k=\prod_{i\leq k} p_i$. Moreover, let $\mu(n)$ be the M\"{o}bius function, defined for a positive integer $n$ by \[\mu(n)=\left\{\begin{array}{ll}
1 & n\text{ is square-free with an even number of prime factors;}\\
-1 & n\text{ is square-free with an odd number of prime factors;}\\
0 & n\text{ is not square-free.}\\
\end{array}\right.\] Note that if $d|Q_k$, then $\mu(d)=\pm 1$. We prove the following estimate, which is provided with a brief explanation in \cite[Equation 1.1]{bib:DeBruijn}.
\begin{lem}\label{lem:phiform}
Suppose $p$ is prime. Then \[\Phi(p^\alpha,p)\geq p^\alpha\prod_{p_i<p}\left(1-\frac{1}{p_i}\right)-2^{\pi(p)}.\]
\end{lem}
\begin{proof}
If $p=p_{k+1}$, then an exact formula for $\Phi(p^\alpha,p)$ (sometimes called Legendre's formula) is given by \[\Phi(p^\alpha,p)=\sum_{d|Q_k}\mu(d)\left\lfloor \frac{p^\alpha}{d}\right\rfloor.\] Set \[\phi(p^\alpha,p)=\sum_{d|Q_k}\mu(d)\frac{p^\alpha}{d};\] we now obtain the approximation \[\left|\Phi(p^\alpha,p)-\phi(p^\alpha,p)\right|=\left|\sum_{d|Q_k}\mu(d)\left(\left\lfloor\frac{p^\alpha}{d}\right\rfloor-\frac{p^\alpha}{d}\right)\right|\leq \sum_{d|Q_k} 1=2^{\pi(p_k)}<2^{\pi(p)}.\] From this we obtain \[\Phi(p^\alpha,p)\geq \phi(p^\alpha,p)-2^{\pi(p)}.\] Finally, note that \[\frac{\phi(p^\alpha,p)}{p^\alpha}=\sum_{d|Q_k}\mu(d)\frac{1}{d}=\prod_{p_i<p}\left(1-\frac{1}{p_i}\right).\] The result follows.
\end{proof}
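Legendre's formula can be implemented directly by inclusion--exclusion over the squarefree divisors of $Q_k$ (exponential in $\pi(p)$, so only practical for small $p$); a sketch:

```python
from itertools import combinations

def primes_below(p):
    return [q for q in range(2, p)
            if all(q % r for r in range(2, int(q ** 0.5) + 1))]

def legendre_phi(x, p):
    # Phi(x, p) = sum over d | Q_k of mu(d) * floor(x / d), where Q_k is the
    # product of the primes below p and mu(d) = (-1)^(number of prime factors of d).
    total = 0
    qs = primes_below(p)
    for k in range(len(qs) + 1):
        for combo in combinations(qs, k):
            d = 1
            for q in combo:
                d *= q
            total += (-1) ** k * (x // d)
    return total
```

For instance, `legendre_phi(22, 11)` recovers $\Phi(22,11)=5$, counting $\{1,11,13,17,19\}$.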
\begin{lem}\label{lem:power2bound}
Suppose $p\geq 19$, and let $c=1.25506$. Then \[\frac{2^{\frac{cp}{\ln p}}+1}{p^{\frac{p-2c}{2\ln p}-1}}<\frac{11}{8}.\]
\end{lem}
\begin{proof}
Set \[f(p)=\frac{2^{\frac{cp}{\ln p}}+1}{p^{\frac{p-2c}{2\ln p}-1}}.\] Then $f(19)\approx 1.373<11/8$; to establish the result, we will show that $f$ is decreasing. Let \[f_1(p)=\frac{2^{\frac{cp}{\ln p}}}{p^{\frac{p-2c}{2\ln p}-1}}\text{ and }f_2(p)=\frac{1}{p^{\frac{p-2c}{2\ln p}-1}},\] so that $f=f_1+f_2$. Consider first \[\ln f_1(p)=\frac{cp}{\ln p}\ln 2-\left(\frac{p-2c}{2\ln p}-1\right)\ln p=\frac{(c\ln 2)p}{\ln p}-\frac{p-2c}{2}+\ln p.\] Logarithmic differentiation yields \[f_1'(p)=f_1(p)\left[\frac{c\ln 2}{\ln p}-\frac{c\ln 2}{\ln^2 p}-\frac{1}{2}+\frac{1}{p}\right].\] Since $f_1(p)>0$ for all $p$ and the expression in brackets is easily shown to be negative for all $p\geq 19$, we conclude that $f_1'(p)<0$. Similarly, we consider \[\ln f_2(p)=\ln p-\frac{p}{2}+c\] and see that \[f_2'(p)=f_2(p)\left[\frac{1}{p}-\frac{1}{2}\right]<0\] as well. Thus $f(p)$ is decreasing, and $f(p)<11/8$ for all $p\geq 19$ as desired.
\end{proof}
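The numerical claims $f(19)\approx 1.373<11/8$ and the monotone decrease are straightforward to confirm; a quick sketch:

```python
import math

C = 1.25506
def f(p):
    # f(p) = (2^(c p / ln p) + 1) / p^((p - 2c)/(2 ln p) - 1)
    return ((2 ** (C * p / math.log(p)) + 1)
            / p ** ((p - 2 * C) / (2 * math.log(p)) - 1))

values = [f(p) for p in (19, 23, 29, 31)]   # strictly decreasing, all below 11/8
```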
For Lemmas \ref{lem:eulerconstant}-\ref{lem:phiboundcase4}, let \[\gamma=\lim_{m\rightarrow \infty} \left(-\ln m+\sum_{k=1}^{m}\frac{1}{k}\right)\approx 0.577\] be the Euler-Mascheroni constant. Note that 2971 and 2999 are consecutive primes.
\begin{lem}\label{lem:eulerconstant}
For a fixed $p$ prime, set $d=(p-2c)/2\ln p$, where $c=1.25506$, and assume $\alpha\geq d$. If $19\leq p\leq 2971$, then \[\frac{p}{e^{\gamma}\ln p}\left(1-\frac{0.2}{\ln^2 p}\right)-0.0058p-\frac{2^{\frac{cp}{\ln p}}}{p^{\alpha-1}}\geq 2+\frac{1}{p^{\alpha-1}},\] and if $p\geq 2999$, then \[\frac{p}{e^{\gamma}\ln p}\left(1-\frac{0.2}{\ln^2 p}\right)-\frac{2^{\frac{cp}{\ln p}}}{p^{\alpha-1}}\geq 2+\frac{1}{p^{\alpha-1}}.\]
\end{lem}
\begin{proof}
Assume first that $19\leq p\leq 2971$, and let $c=1.25506$ and $d=(p-2c)/2\ln p$. Since $\alpha\geq d$, it suffices to show that \[\frac{p}{e^{\gamma}\ln p}\left(1-\frac{0.2}{\ln^2 p}\right)-0.0058p-\frac{2^{\frac{cp}{\ln p}}}{p^{d-1}}\geq 2+\frac{1}{p^{d-1}}.\] Set \[f(p)=\frac{p}{e^{\gamma}\ln p}\left(1-\frac{0.2}{\ln^2 p}\right)-0.0058p-\frac{2^{\frac{cp}{\ln p}}}{p^{d-1}}\] and \[g(p)=2+\frac{1}{p^{d-1}}.\] It is evident that $d>2$ and, hence, $g(p)<2.01$ for all $p\geq 19$, so it suffices to show that $f(p)>2.01$. Tables \ref{tab:dataDusart1} and \ref{tab:dataDusart2} on pages \pageref{tab:dataDusart1} and \pageref{tab:dataDusart2} provide the value of $f(p)$ rounded to two decimal places; clearly, $f(p)>2.01>g(p)$ for all prime $19\leq p\leq 2971$.
Assume now that $p\geq 2999$. Since $\alpha\geq d$, it suffices to show that \[\frac{p}{e^{\gamma}\ln p}\left(1-\frac{0.2}{\ln^2 p}\right)-\frac{2^{\frac{cp}{\ln p}}}{p^{d-1}}\geq 2+\frac{1}{p^{d-1}}.\] Moreover, Lemma \ref{lem:power2bound} implies that it is sufficient to show \[h(p)=\frac{p}{e^{\gamma}\ln p}\left(1-\frac{0.2}{\ln^2 p}\right)\geq \frac{27}{8}.\] But this follows immediately from the fact that $h(2999)\approx 209.7\geq 27/8$ and $h(p)$ is clearly increasing.
\end{proof}
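The numerical value quoted for $p=2999$ can be confirmed directly (the Euler--Mascheroni constant is hard-coded):

```python
import math

gamma = 0.5772156649015329      # Euler-Mascheroni constant
def bound(p):
    # p / (e^gamma * ln p) * (1 - 0.2 / ln^2 p)
    return p / (math.exp(gamma) * math.log(p)) * (1 - 0.2 / math.log(p) ** 2)

val = bound(2999)               # approximately 209.7, far above 27/8
```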
\begin{lem}[Theorem 6.12 in \cite{bib:Dusart}]\label{lem:Dusart}
If $x\geq 2973$, then \[\prod_{p_i\leq x}\left(1-\frac{1}{p_i}\right)>\frac{1}{e^{\gamma}\ln x}\left(1-\frac{0.2}{\ln^2 x}\right).\]
\end{lem}
\begin{cor}\label{cor:Dusartcor}
If $19\leq p\leq 2971$ is prime, then \[\prod_{p_i<p}\left(1-\frac{1}{p_i}\right)>\frac{1}{e^{\gamma}\ln p}\left(1-\frac{0.2}{\ln^2 p}\right)-0.0058,\] and if $p\geq 2999$ is prime, then \[\prod_{p_i<p}\left(1-\frac{1}{p_i}\right)>\frac{1}{e^{\gamma}\ln p}\left(1-\frac{0.2}{\ln^2 p}\right).\]
\end{cor}
\begin{proof}
Assume first that $19\leq p\leq 2971$, and set \[f(p)=\prod_{p_i<p}\left(1-\frac{1}{p_i}\right)\] and \[g(p)=\frac{1}{e^{\gamma}\ln p}\left(1-\frac{0.2}{\ln^2 p}\right)-0.0058.\] Tables \ref{tab:dataDusartCor1}-\ref{tab:dataDusartCor4} on pages \pageref{tab:dataDusartCor1}-\pageref{tab:dataDusartCor4} provide the values of $f(p)$ and $g(p)$ rounded to three decimal places; clearly, $f(p)>g(p)$ for all prime $19\leq p\leq 2971$.
The inequality for $p\geq 2999$ follows immediately from Lemma \ref{lem:Dusart}.
\end{proof}
We are now ready to complete the proof of Theorem \ref{thm:mainthm}. In what follows, we make repeated use of the inequality $\left\lfloor x/y\right\rfloor \geq x/y-1$.
\begin{lem}\label{lem:phiboundcase4}
If $p\geq 11$ is prime and $n>p^d$, where $d=(p-2c)/2\ln p$ and $c=1.25506$, then $\Phi(n,p) \geq \left\lfloor 2n/p\right\rfloor+1$.
\end{lem}
\begin{proof}
First, suppose $p=11$ and assume $11^d\approx 69.8 < n<167$. Recall that for a fixed $p$, $\Phi(n,p)$ is nondecreasing in $n$. The maximum value of $\left\lfloor 2n/11 \right\rfloor+1$ for $11^d\approx 69.8 < n<167$ is $\left\lfloor 2(166)/11 \right\rfloor+1=31$, so we simply need to verify the inequality starting at $n=70$ and continuing until we obtain a value of $\Phi(n,11)$ at least 31. The required values are given in Table \ref{tab:dataphi4a} on page \pageref{tab:dataphi4a}, where $\Phi$ stands in the place of $\Phi(n,11)$ and $b=\left\lfloor 2n/11 \right\rfloor+1$.
Assume now that $n\geq 167$; we estimate $\Phi(n,11)$ directly: \[\begin{array}{rcl}
\Phi(n,11) & = & n-\left\lfloor\frac{n}{2}\right\rfloor-\left\lfloor\frac{n}{3}\right\rfloor-\left\lfloor\frac{n}{5}\right\rfloor-\left\lfloor\frac{n}{7}\right\rfloor+\left\lfloor\frac{n}{6}\right\rfloor+\left\lfloor\frac{n}{10}\right\rfloor+\left\lfloor\frac{n}{14}\right\rfloor+\left\lfloor\frac{n}{15}\right\rfloor \\
\\
& & +\left\lfloor\frac{n}{21}\right\rfloor+\left\lfloor\frac{n}{35}\right\rfloor-\left\lfloor\frac{n}{30}\right\rfloor-\left\lfloor\frac{n}{42}\right\rfloor-\left\lfloor\frac{n}{70}\right\rfloor-\left\lfloor\frac{n}{105}\right\rfloor+\left\lfloor\frac{n}{210}\right\rfloor \\
\\
& \geq & n-\frac{n}{2}-\frac{n}{3}-\frac{n}{5}-\frac{n}{7}+\left(\frac{n}{6}-1\right)+\left(\frac{n}{10}-1\right)+\left(\frac{n}{14}-1\right)+\left(\frac{n}{15}-1\right) \\
\\
& & +\left(\frac{n}{21}-1\right)+\left(\frac{n}{35}-1\right)-\frac{n}{30}-\frac{n}{42}-\frac{n}{70}-\frac{n}{105}+\left(\frac{n}{210}-1\right) \\
\\
& = & \frac{48n}{210}-7.
\end{array}\] For $n\geq 167$, \[\frac{48n}{210}-7\geq \left\lfloor\frac{2n}{11}\right\rfloor+1.\] Indeed, $\frac{48n}{210}-\frac{2n}{11}\geq 8$ for $n\geq 172$, and the remaining cases $167\leq n\leq 171$ are verified directly.
Suppose now that $p=13$, and assume $13^d\approx 189.6 < n<297$. Again, $\Phi(n,p)$ is nondecreasing in $n$ for $p=13$. The maximum value of $\left\lfloor 2n/13 \right\rfloor+1$ for $13^d\approx 189.6 < n<297$ is $\left\lfloor 2(296)/13 \right\rfloor+1=46$, so we simply need to verify the inequality starting at $n=190$ and continuing until we obtain a value of $\Phi(n,13)$ at least 46. The required values are given in Table \ref{tab:dataphi4b} on page \pageref{tab:dataphi4b}, where $\Phi$ stands in the place of $\Phi(n,13)$ and $b=\left\lfloor 2n/13 \right\rfloor+1$.
Assume now that $n\geq 297$; we use an argument similar to the one above to obtain $\Phi(n,13)\geq 480n/2310-15$. For $n\geq 297$, \[\frac{480n}{2310}-15\geq \left\lfloor\frac{2n}{13}\right\rfloor+1.\]
Likewise for $p=17$, we obtain $\Phi(n,17)\geq 5760n/30030-31$. For all $n> 17^d\approx 1401$, \[\frac{5760n}{30030}-31\geq \left\lfloor\frac{2n}{17}\right\rfloor+1.\]
Suppose now that $19\leq p\leq 2971$ and $n>p^d$. Set $n=p^\alpha$, where $\alpha>d$; from Lemma \ref{lem:phiform} we know \[\Phi(p^\alpha,p)\geq p^\alpha\prod_{p_i<p}\left(1-\frac{1}{p_i}\right)-2^{\pi(p)}.\] Using Lemma \ref{lem:pibound} and Corollary \ref{cor:Dusartcor}, this implies \[\begin{array}{rcl}\Phi(p^\alpha,p)& \geq & p^\alpha\left[\frac{1}{e^{\gamma}\ln p}\left(1-\frac{0.2}{\ln^2 p}\right)-0.0058\right]-2^{\frac{cp}{\ln p}}\\ \\
& = & p^{\alpha-1}\left[\frac{p}{e^{\gamma}\ln p}\left(1-\frac{0.2}{\ln^2 p}\right)-0.0058p-\frac{2^{\frac{cp}{\ln p}}}{p^{\alpha-1}}\right],
\end{array}\] and using Lemma \ref{lem:eulerconstant} yields \[\Phi(p^\alpha,p)\geq p^{\alpha-1}\left(2+\frac{1}{p^{\alpha-1}}\right)=2p^{\alpha-1}+1\geq \left\lfloor\frac{2p^\alpha}{p}\right\rfloor+1.\]
Finally, suppose that $p\geq 2999$ and $n>p^d$. We again set $n=p^\alpha$, where $\alpha>d$; using an argument similar to the preceding paragraph, we obtain via Lemmas \ref{lem:pibound}, \ref{lem:phiform}, \ref{lem:eulerconstant} and Corollary \ref{cor:Dusartcor} that \[\begin{array}{rcl}
\Phi(p^\alpha,p) & \geq & {\displaystyle p^\alpha\prod_{p_i<p}\left(1-\frac{1}{p_i}\right)-2^{\pi(p)} }\\ \\
& \geq & p^\alpha\frac{1}{e^{\gamma}\ln p}\left(1-\frac{0.2}{\ln^2 p}\right)-2^{\frac{cp}{\ln p}}\\ \\
& = & p^{\alpha-1}\left[\frac{p}{e^{\gamma}\ln p}\left(1-\frac{0.2}{\ln^2 p}\right)-\frac{2^{\frac{cp}{\ln p}}}{p^{\alpha-1}}\right] \\ \\
& \geq & p^{\alpha-1}\left(2+\frac{1}{p^{\alpha-1}}\right)=2p^{\alpha-1}+1\geq \left\lfloor\frac{2p^\alpha}{p}\right\rfloor+1.
\end{array}\] This completes the proof.
\end{proof}
\section{Introduction}\label{sec:intro_ES}
A tail quantile of the profit-and-loss distribution, known as Value-at-Risk (VaR), measures the risk of loss for investments and has
been widely applied for capital allocation and risk management over the past two decades \citep{mcneil2015quantitative}. Although it is an intuitive measure, VaR has been criticized since it fails to capture tail risks beyond itself. Expected Shortfall (ES), defined as the average above or below a certain quantile, remedies this deficiency as it better characterizes the tail behavior by consolidating information from the entire tail region. In addition, ES has the desired property of subadditivity, which VaR lacks in general \citep{artzner1997thinking, artzner1999coherent}.
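As a concrete (toy) illustration of these two measures on a simulated profit-and-loss sample:

```python
import numpy as np

rng = np.random.default_rng(0)
pnl = rng.standard_normal(200_000)     # simulated profit-and-loss
tau = 0.05
var = np.quantile(pnl, tau)            # tau-quantile of the distribution: VaR
es = pnl[pnl <= var].mean()            # ES: average of the losses beyond VaR
# For a standard normal, var ~ -1.645 and es ~ -2.063 at tau = 0.05.
```

The ES is always at least as extreme as the VaR at the same level, reflecting that it consolidates the whole tail.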
With these appealing features, ES has attracted increasing attention and has been more widely applied for risk management in recent years. In financial risk management, the Basel Committee recently proposed to shift the quantitative risk metrics system from VaR to ES \citep{committee_2013}.
In many applications, risk measures might depend on exogenous covariates. For instance, market risks often change across investment conditions, such as macroeconomic, financial, and political environments. In most clinical studies, patient outcomes are usually associated with demographic and therapeutic information. It is thus of interest to focus on conditional risk measures adjusting for certain covariates. In this paper, we consider the problem of inference for conditional expected shortfall (CES).
\cite{he2010detection} introduced a COVariate-adjusted Expected Shortfall (\textit{COVES}) test to detect treatment effects through CES, motivated by a clinical study with a balanced design. However, as shown in Sections \ref{sec:simulation_ES} and \ref{sec:real_data_ES},
the statistical power of the \textit{COVES} test may deteriorate when covariates are unbalanced.
To evaluate CES beyond the scope of treatment differences, we consider inference based on a regression framework. Nevertheless, as pointed out by \cite{gneiting2011making}, CES is not ``elicitable" in the sense that it cannot be represented as the minimizer of an expected loss, and hence the stand-alone regression for CES is infeasible.
To overcome the problem of ``elicitability," \cite{leorato2012asymptotically} and \cite{peracchi2008estimating} suggested to approximate CES by fitting an entire quantile process, which imposes both computational and theoretical challenges. Alternative methods such as those proposed by \cite{cai2008nonparametric}, \cite{kato2012weighted} and \cite{xiao2014right} rely on kernel-smoothing estimation for the conditional distribution function, which are subject to the ``curse-of-dimensionality" and practically feasible only for data with a few covariates.
For inference on CES, we consider an alternative approach through the joint modeling of conditional quantile and CES. \cite{fissler2016higher} recently showed that VaR and ES are jointly ``elicitable," and they provided a class of strictly consistent joint loss functions for the pairs of quantile and ES at the same probability level. \cite{dimitriadis2019joint} utilized the joint loss functions in a regression setup for quantile and ES. The computation of the joint estimator is challenging, especially when the dimension of parameters is large, since the joint loss function is neither differentiable nor convex. To reduce the computational effort, we propose a two-step estimation procedure. We first estimate the quantile parameters with standard quantile regression \citep{koenkerquantile}, and then estimate the ES regression coefficients by minimizing the simplified objective function with the quantile estimators plugged in. We show that the two-step estimator has the same asymptotic properties as the joint estimator, but the former is numerically more efficient. In addition, the CES estimation in the second step is locally robust to the quantile estimation in the first step, which implies that local misspecification of the quantile parameters has no effect on the asymptotic distribution of the ES estimator; see \cite{chernozhukov2016locally} for an elaboration on local robustness.
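As a toy illustration of the flavour of the two-step idea (our own simplified construction, not the exact objective function used here): in a simulated Gaussian linear model one can use the true $\tau$-quantile coefficients in place of the first-step estimates, form a plug-in variable $Z$ whose conditional mean equals the CES, and recover the CES coefficients by ordinary least squares of $Z$ on the covariates.

```python
import numpy as np

rng = np.random.default_rng(1)
n, tau = 200_000, 0.1
x = rng.uniform(0.0, 1.0, n)
y = 1.0 + 2.0 * x + rng.standard_normal(n)   # Q_tau(Y|X) = 1 + 2x + z_tau

# Step 1 (idealised here): true tau-quantile of Y given X, z_tau = Phi^{-1}(0.1).
z_tau = -1.2815515655446004
q = 1.0 + 2.0 * x + z_tau
# Step 2: Z has conditional mean E[Y | X, Y <= Q_tau(Y|X)], the lower-tail CES;
# regress Z on (1, x) by ordinary least squares.
z = q + (y - q) * (y <= q) / tau
X = np.column_stack([np.ones(n), x])
theta = np.linalg.lstsq(X, z, rcond=None)[0]
# True CES coefficients: intercept 1 - phi(z_tau)/tau (about -0.755), slope 2.
```

In practice step one would itself be a quantile regression; local robustness is what makes the plug-in of estimated quantile parameters asymptotically harmless.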
The Wald-type inference method can be conducted based on the asymptotic distribution of the parameter estimator.
However, it has been shown in the quantile regression literature that the Wald-type test is generally unstable for small sample sizes, partly due to the uncertainty from estimating nuisance parameters involved in the asymptotic variance, such as the conditional densities of the response \citep{chen2005computational, kocherginsky2005practical}. We develop a score-type inference method for hypothesis testing and confidence interval construction. Numerical studies suggest that the proposed score-type method is superior to the Wald-type method in finite samples, especially when the data are heterogeneous and involve a large number of confounding factors. Furthermore, the method provides more accurate results than the \textit{COVES} approach for unbalanced designs.
In Section \ref{sec:proposed_method_ES}, we first present the two-step estimation procedure for the joint regression model and the large sample properties of the resulting estimators, and then develop the score-type inference method for the ES regression parameters. We assess the finite sample performance of the proposed inference procedure with simulation studies in Section \ref{sec:simulation_ES}. The merit of the proposed method is illustrated by analyzing two real data sets in Section \ref{sec:real_data_ES}. Some concluding remarks are provided in Section \ref{sec:conclusion_ES}. Proofs are deferred to the Appendix.
\section{Proposed Method}\label{sec:proposed_method_ES}
\subsection{Joint regression model}\label{subsec:joint_reg_model_ES}
Consider a continuous response $Y$ and a $p$-dimensional design vector $\mathbf{X}$. At a given probability level $\tau \in (0, 1)$, the conditional quantile of $Y$ given $\mathbf{X}$ is defined as
$$
Q_\tau(Y | \mathbf{X}) = \inf\{y \in \mathbb{R}: F_{Y|\mathbf{X}}(y) \ge \tau\},
$$
where $F_{Y|\mathbf{X}}$ is the conditional distribution function of $Y$ given $\mathbf{X}$. The corresponding CES is defined as $$
ES_\tau(Y|\mathbf{X}) = \tau^{-1} \int_0^\tau F^{-1}_{Y|\mathbf{X}} (u) du,
$$
which is deemed to be more informative than the conditional quantile as CES summarizes the entire tail behavior of the conditional distribution.
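As a quick numerical check of this definition (our own illustration, not part of the original analysis; the paper's computations are in R, while this sketch assumes NumPy/SciPy), the Monte Carlo tail average matches the closed-form lower-tail ES of a standard normal, $ES_\tau = -\phi(\Phi^{-1}(\tau))/\tau$:

```python
import numpy as np
from scipy.stats import norm

def lower_tail_es_normal(tau):
    # Closed form for Y ~ N(0, 1): ES_tau = -phi(Phi^{-1}(tau)) / tau
    return -norm.pdf(norm.ppf(tau)) / tau

def lower_tail_es_mc(sample, tau):
    # Empirical version of tau^{-1} * int_0^tau F^{-1}(u) du:
    # average the observations at or below the empirical tau-th quantile
    q = np.quantile(sample, tau)
    return sample[sample <= q].mean()

rng = np.random.default_rng(0)
y = rng.standard_normal(200_000)
print(lower_tail_es_normal(0.1))   # approximately -1.755
print(lower_tail_es_mc(y, 0.1))    # close to the closed form
```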
In this paper, we are interested in the inference for the CES of $Y$ given $\mathbf{X}$ at a certain probability level.
To evaluate CES with a wide range of applications, we consider inference based on a regression framework.
However, as pointed out by \cite{gneiting2011making}, the stand-alone regression for CES is infeasible since it cannot be represented as the minimizer of an expected loss. To overcome this problem, we adopt the idea in \citet{fissler2016higher} and employ a joint regression framework that simultaneously models the conditional quantile and CES.
For ease of presentation, let $\mathbf{X}$ denote the design vectors for both quantile and ES regression models, but we should bear in mind that one can consider different design vectors for these two models.
For a fixed probability level $\tau \in (0, 1)$, we jointly model the conditional quantile and CES of $Y$ given $\mathbf{X}$ as
\begin{align*}
Q_\tau(Y | \mathbf{X}) = \mathbf{X}^\prime \boldsymbol{\theta}^q_0 \quad
\text{and} \quad ES_\tau(Y | \mathbf{X}) = \mathbf{X}^\prime \boldsymbol{\theta}^e_0,
\end{align*}
where the parameter vector $\boldsymbol{\theta}_0 = (\boldsymbol{\theta}^{q \prime}_0, \boldsymbol{\theta}^{e \prime}_0)^\prime$ is $\tau$-specific. Let $u^q(\tau) = Y - Q_\tau(Y | \mathbf{X})$ and $u^e(\tau) = Y - ES_\tau(Y | \mathbf{X})$; we assume $Q_\tau(u^q | \mathbf{X}) = ES_\tau(u^e | \mathbf{X}) = 0$ for identifiability.
To obtain the estimated regression coefficients, we utilize the class of strictly consistent joint loss functions for the pair of quantile and ES \citep{fissler2016higher},
\begin{align}\label{equ:joint_loss}
\rho_\tau(Y, \mathbf{X}, \boldsymbol{\theta}) &= \big\{I(Y \le \mathbf{X}^\prime \boldsymbol{\theta}^q) - \tau\big\} \cdot G_1(\mathbf{X}^\prime \boldsymbol{\theta}^q) - I(Y \le \mathbf{X}^\prime \boldsymbol{\theta}^q) \cdot G_1(Y) \nonumber \\
&+ G_2(\mathbf{X}^\prime \boldsymbol{\theta}^e) \cdot \left\{\mathbf{X}^\prime \boldsymbol{\theta}^e - \mathbf{X}^\prime \boldsymbol{\theta}^q + \frac{(\mathbf{X}^\prime \boldsymbol{\theta}^q - Y) I(Y \le \mathbf{X}^\prime \boldsymbol{\theta}^q)}{\tau}\right\} \\
&- \mathcal{G}_2(\mathbf{X}^\prime \boldsymbol{\theta}^e) + a(Y), \nonumber
\end{align}
where $G_1$ is an increasing and twice continuously differentiable function, $\mathcal{G}_2$ is a three-times continuously differentiable function, $\mathcal{G}_2^{(1)} = G_2$, $G_2$ and $G_2^{(1)}$ are strictly positive, and $G_1$ and $a$ are integrable functions.
\cite{fissler2016higher} also showed that, under some regularity conditions, there exist no strictly consistent loss functions outside the class of functions given above, which implies that \eqref{equ:joint_loss} is the most general class of objective functions that can be applied for the joint regression model.
Given data $(Y_i, \mathbf{X}_i)$, the corresponding joint estimators $\tilde{\boldsymbol{\theta}} = (\tilde{\boldsymbol{\theta}}^{q \prime}, \tilde{\boldsymbol{\theta}}^{e \prime})^\prime$ can be obtained by
\begin{equation}\label{equ:joint_estimation}
\tilde{\boldsymbol{\theta}} = \underset{\boldsymbol{\theta}}{\arg \min} \sum_{i=1}^{n}\rho_\tau(Y_i, \mathbf{X}_i, \boldsymbol{\theta}).
\end{equation}
\cite{dimitriadis2019joint} utilized the joint loss functions in a regression setup and proposed to estimate the quantile and ES parameters jointly by \eqref{equ:joint_estimation}.
However, as a one-step procedure, this estimation is computationally challenging, especially when the dimension of the parameters is large, since the loss functions are neither differentiable nor convex. To reduce the computational effort, we propose a two-step procedure that first estimates the quantile parameters by standard quantile regression.
\begin{remark}
Following \cite{gneiting2011making} and \cite{fissler2016higher}, we introduce the concept of strictly consistent loss functions. A statistical functional, such as the mean or the $\tau$th quantile, is called elicitable if there is a loss function such that the functional is the unique minimizer of the expected loss. Such a loss function is said to be strictly consistent for the functional. The strict consistency of $\rho_\tau(Y, \mathbf{X}, \boldsymbol{\theta})$ in \eqref{equ:joint_loss} implies that the parameter vector $\boldsymbol{\theta}_0 = (\boldsymbol{\theta}^{q \prime}_0, \boldsymbol{\theta}^{e \prime}_0)^\prime$ is the unique minimizer of $E\{\rho_\tau(Y, \mathbf{X}, \boldsymbol{\theta})\}$, and thus $\rho_\tau(Y, \mathbf{X}, \boldsymbol{\theta})$ can be used as the objective function to estimate the regression coefficients.
\end{remark}
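For concreteness, the following Python sketch (our own construction, not the authors' code) evaluates the sample average of the joint loss with the specification $G_1(z) = z$ and $\mathcal{G}_2(z) = -\log(-z)$ used later in the numerical studies, so that $G_2(z) = -1/z$, which requires the fitted ES values $\mathbf{X}^\prime \boldsymbol{\theta}^e$ to be negative:

```python
import numpy as np

def joint_loss(y, X, theta_q, theta_e, tau):
    """Sample average of the joint (quantile, ES) loss with G1(z) = z and
    calG2(z) = -log(-z), so G2(z) = -1/z (requires X @ theta_e < 0).
    The term a(Y) is dropped since it does not affect minimization."""
    q = X @ theta_q                     # fitted tau-th quantiles
    e = X @ theta_e                     # fitted ES values (negative)
    hit = (y <= q).astype(float)
    g2 = -1.0 / e                       # G2 = calG2', strictly positive
    loss = ((hit - tau) * q - hit * y
            + g2 * (e - q + (q - y) * hit / tau)
            + np.log(-e))               # this term is -calG2(e)
    return loss.mean()
```

On a large sample, the average loss is smallest near the true quantile and ES parameters, consistent with strict consistency of the loss class.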
\subsection{Two-step estimation}\label{subsec:two_step_estimation_ES}
The first part of the joint loss function \eqref{equ:joint_loss} corresponds to the quantile model and depends only on the quantile parameters. If the quantile parameters were known by an ``oracle'', we could plug them into the joint loss function and use only the second part to estimate the ES parameters, thus effectively reducing the computational complexity.
Let $\hat{\boldsymbol{\theta}}^q$ be a consistent estimator of $\boldsymbol{\theta}^q_0$; then the first part of \eqref{equ:joint_loss} is fixed given the quantile estimate. In addition, the function $a$ depends only on $Y$ and does not affect the estimation procedure. Therefore, in practice, we can consider the following much simpler plug-in objective function to obtain the estimated ES regression parameters,
\begin{align} \label{equ:sep_loss}
\rho_\tau(Y, \mathbf{X}, \hat{\boldsymbol{\theta}}^q, \boldsymbol{\theta}^e) &= G_2(\mathbf{X}^\prime \boldsymbol{\theta}^e) \cdot \left\{\mathbf{X}^\prime \boldsymbol{\theta}^e - \mathbf{X}^\prime \hat{\boldsymbol{\theta}}^q + \frac{(\mathbf{X}^\prime \hat{\boldsymbol{\theta}}^q - Y) I(Y \le \mathbf{X}^\prime \hat{\boldsymbol{\theta}}^q)}{\tau}\right\} - \mathcal{G}_2(\mathbf{X}^\prime \boldsymbol{\theta}^e),
\end{align}
and the associated ES coefficient estimator $\hat{\boldsymbol{\theta}}^e$ can be represented as
\begin{align}\label{equ:two-step_estimator}
\hat{\boldsymbol{\theta}}^e = \underset{\boldsymbol{\theta}^e}{\arg \min} \sum_{i=1}^n \rho_\tau(Y_i, \mathbf{X}_i, \hat{\boldsymbol{\theta}}^q, \boldsymbol{\theta}^e).
\end{align}
In practice, we estimate $\hat{\boldsymbol{\theta}}^q$ with the \textit{quantreg} package in R. Numerical methods for linear quantile regression are well developed, and the regression coefficients can be obtained efficiently via linear programming; see \cite{koenkerquantile} for details on the linear quantile regression specification.
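The two steps can be sketched as follows. This is a Python analogue of the procedure (our own, minimizing the pinball loss and the plug-in objective with Nelder-Mead rather than calling \textit{quantreg}), again under the specification $G_2(z) = -1/z$, $\mathcal{G}_2(z) = -\log(-z)$; the starting value for the ES step is a crude heuristic:

```python
import numpy as np
from scipy.optimize import minimize

def two_step_es(y, X, tau):
    """Two-step fit: (1) quantile regression by minimizing the pinball
    loss; (2) ES regression by minimizing the plug-in objective with the
    quantile fit held fixed, using G2(z) = -1/z and calG2(z) = -log(-z)
    (fitted ES values must stay negative)."""
    p = X.shape[1]

    def pinball(b):
        r = y - X @ b
        return np.mean(np.maximum(tau * r, (tau - 1.0) * r))

    b_q = minimize(pinball, np.zeros(p), method="Nelder-Mead").x

    q = X @ b_q
    hit = (y <= q).astype(float)

    def plug_in(b):
        e = X @ b
        if np.any(e >= 0):              # keep calG2 well defined
            return np.inf
        return np.mean((-1.0 / e) * (e - q + (q - y) * hit / tau)
                       + np.log(-e))

    # crude starting value pushing the ES fit below the quantile fit
    b_e = minimize(plug_in, b_q - 1.0, method="Nelder-Mead").x
    return b_q, b_e
```

In an intercept-only standard normal model at $\tau = 0.1$, the two fitted coefficients approach the true quantile $\Phi^{-1}(0.1) \approx -1.28$ and true ES $\approx -1.75$.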
Compared with the joint estimator $\tilde{\boldsymbol{\theta}}^e$, the two-step estimator $\hat{\boldsymbol{\theta}}^e$ is computationally more efficient. Furthermore, under the following regularity assumptions, we can show that the two estimators are asymptotically equivalent.
\begin{enumerate}
\item [A1] The matrix $E\big(\mathbf{X} \mathbf{X}^\prime\big)$ is positive definite.
\item [A2] The data $(Y_i, \mathbf{X}_i)$ are an independent and identically distributed (i.i.d.) sample of size $n$. Furthermore, the conditional distribution of $Y$ given $\mathbf{X}$, $F_{Y|\mathbf{X}} (\cdot)$, has a finite second moment and is absolutely continuous with a continuous density $f_{Y|\mathbf{X}}$,
which is strictly positive and bounded in a neighborhood of the $\tau$th conditional quantile of $Y$.
\item [A3] The class of strictly consistent joint loss functions is given by \eqref{equ:joint_loss}, where $G_1$ is an increasing and twice continuously differentiable function, $\mathcal{G}_2$ is a three-times continuously differentiable function, $\mathcal{G}_2^{(1)} = G_2$, $G_2$ and $G_2^{(1)}$ are strictly positive, and $G_1$ and $a$ are integrable functions.
\item [A4] $\hat{\boldsymbol{\theta}}^q$ is a $\sqrt{n}$-consistent estimator of $\boldsymbol{\theta}^q_0$.
\end{enumerate}
\begin{theorem} \label{thm:sep_asy_normality}
Under Assumptions A1, A2, A3, A4 and the Moment Conditions ($\mathcal{M}$-1) in Section \ref{subsec:moment_cond}, we have
\begin{equation}\label{equ:sep_asy_normality}
\sqrt{n} \big(\hat{\boldsymbol{\theta}}^e - \boldsymbol{\theta}^e_0\big) \overset{d}{\to} N(\mathbf{0}, \Lambda^{-1} \Omega \Lambda^{-1}),
\end{equation}
where $\boldsymbol{\theta}_0 = (\boldsymbol{\theta}^{q \prime}_0, \boldsymbol{\theta}^{e \prime}_0)^\prime$ is the true parameter vector and
\begin{align}\label{equ:joint_asy_cov1}
&\Lambda = E\big\{(\mathbf{X}\mathbf{X}^\prime) G_2^{(1)}(\mathbf{X}^\prime \boldsymbol{\theta}^e_0)\big\}, \\
\label{equ:joint_asy_cov2}
&\Omega = E\left[\big(\mathbf{X}\mathbf{X}^\prime\big) \big\{G_2^{(1)}(\mathbf{X}^\prime \boldsymbol{\theta}^e_0)\big\}^2 \times \big\{\frac{1}{\tau} \psi\big(u^q\big) + \frac{1-\tau}{\tau} \phi^2(\mathbf{X}, \boldsymbol{\theta}^q_0, \boldsymbol{\theta}^e_0)\big\}\right], \\
\label{equ:joint_asy_cov3}
&\psi\big(u^q\big) = \text{Var}\big(u^q | u^q \le 0, \mathbf{X}\big) = \text{Var}\big(Y - \mathbf{X}^\prime \boldsymbol{\theta}^q_0 | Y \le \mathbf{X}^\prime \boldsymbol{\theta}^q_0, \mathbf{X}\big), \\
\label{equ:joint_asy_cov4}
&\phi(\mathbf{X}, \boldsymbol{\theta}^q_0, \boldsymbol{\theta}^e_0) = \mathbf{X}^\prime \boldsymbol{\theta}^q_0 - \mathbf{X}^\prime \boldsymbol{\theta}^e_0.
\end{align}
\end{theorem}
Assumption A1 is required to exclude multicollinearity of the stochastic explanatory variables. Assumption A2 includes standard assumptions in mean and quantile regression, and the finite conditional second moment of $Y$ given $\mathbf{X}$ is assumed since ES is a truncated mean of quantiles. The conditions on the functions $G_1$ and $\mathcal{G}_2$ are required for the strict consistency of the joint loss functions.
Assumption A4 is made for convenience, and it can be relaxed to $\sqrt{n}\|\hat{\boldsymbol{\theta}}^q - \boldsymbol{\theta}^q_0\|^2 = o_p(1)$; see discussions in the proof of Lemma \ref{lemma:sep_asy_normality_lemma3} in Section \ref{subsec:proof_of_thm}.
\begin{remark}
\cite{dimitriadis2019joint} established the asymptotic normality of the joint estimators $(\tilde{\boldsymbol{\theta}}^{q}, \tilde{\boldsymbol{\theta}}^{e})$ and showed that the two estimators are asymptotically independent. Theorem \ref{thm:sep_asy_normality} implies that the proposed two-step estimator $\hat{\boldsymbol{\theta}}^e$ is asymptotically equivalent to the joint estimator $\tilde{\boldsymbol{\theta}}^{e}$, but the former is numerically more efficient. In addition, the error involved in the quantile estimation $\hat{\boldsymbol{\theta}}^q$ in the first step does not affect the asymptotic distribution of $\hat{\boldsymbol{\theta}}^e$, which agrees with the results of \cite{dimitriadis2019joint}. This asymptotic independence follows because
\begin{align*}
\frac{\partial^2 E\{\rho(Y, \mathbf{X}, \boldsymbol{\theta}^q, \boldsymbol{\theta}^e) | \mathbf{X}\}}{(\partial \boldsymbol{\theta}^q \partial \boldsymbol{\theta}^{e \prime})} \big|_{\boldsymbol{\theta}^q = \boldsymbol{\theta}^q_0} & = (\mathbf{X} \mathbf{X}^\prime) G_2^{(1)}(\mathbf{X}^\prime \boldsymbol{\theta}^e) \frac{F_{Y|\mathbf{X}}(\mathbf{X}^\prime \boldsymbol{\theta}^q) - \tau}{\tau} \big|_{\boldsymbol{\theta}^q = \boldsymbol{\theta}^q_0} = \boldsymbol{0}.
\end{align*}
That is, the partial derivative of the joint loss function \eqref{equ:joint_loss} evaluated at the true quantile coefficient is zero.
Therefore, when $\sqrt{n}\|\hat{\boldsymbol{\theta}}^q - \boldsymbol{\theta}^q_0\|^2 = o_p(1)$, $\hat{\boldsymbol{\theta}}^e$ is locally robust to the prior quantile estimation or its local misspecification; see \cite{chernozhukov2016locally} for an elaboration on local robustness. Furthermore, even though we assume both linear models for quantile and ES regression, the local robustness property enables us to consider more general models for the quantile estimation in the first step. For instance, the conditional quantile in the first step can be obtained by nonparametric regression, and this will not affect the asymptotic property of the two-step estimator $\hat{\boldsymbol{\theta}}^e$ as long as the ES regression model is correctly specified and the conditional quantile estimation is consistent with a certain rate.
\end{remark}
Based on the asymptotic normality of the two-step estimator $\hat{\boldsymbol{\theta}}^e$,
a Wald-type test can be constructed for inference on $\boldsymbol{\theta}^e_0$ through direct estimation of the covariance matrix, which involves the conditional variance of the quantile residuals $\psi(u^q)$ given in \eqref{equ:joint_asy_cov3}.
However, accurate estimation of this nuisance quantity is challenging. First, for tail probability levels, e.g., $\tau$ close to 0, corresponding to the left tail,
there exist very few (about $n \cdot \tau$) observations after conditioning on $u^q \le 0$. Moreover, the dependence on the covariates $\mathbf{X}$ further complicates the estimation of the conditional variance, especially when the sample size is small.
\cite{dimitriadis2019joint} provided four different ways to directly estimate the asymptotic variance of the joint estimator $\tilde{\boldsymbol{\theta}}^{e}$. Since the joint estimator $\tilde{\boldsymbol{\theta}}^{e}$ and our proposed two-step estimator $\hat{\boldsymbol{\theta}}^{e}$ are asymptotically equivalent, we can adopt the same variance estimation procedures as in \cite{dimitriadis2019joint}. We summarize these methods below.
The first three approaches involve the estimation of the nuisance quantity $\psi(u^q) = \text{Var}(u^q | u^q \le 0, \mathbf{X})$.
When errors are homogeneous such that the distribution of $u^q$ is independent of the regression covariates $\mathbf{X}$, we can estimate $\psi(u^q)$ simply by the sample variance of the negative residuals, that is,
\begin{align*}
\hat{\psi}_I = \text{Var}(\hat{u}^q | \hat{u}^q \le 0),
\end{align*}
where $\hat{u}^q = Y - \mathbf{X}^\prime \hat{\boldsymbol{\theta}}^q$ is the estimated quantile residual, and we refer to $\hat{\psi}_I$ as the \textit{iid} estimator. The second estimator allows for a location-scale dependence structure of the quantile residuals on $\mathbf{X}$,
\begin{equation}\label{equ:ls_quant_res}
u^q = \mathbf{X}^\prime \boldsymbol{\alpha} + \big(\mathbf{X}^\prime \boldsymbol{\nu}\big) \cdot \epsilon,
\end{equation}
where $\boldsymbol{\alpha}$ and $\boldsymbol{\nu}$ are $p$-dimensional parameter vectors, and $\epsilon \sim F_\epsilon(0, 1)$ follows some distribution $F_\epsilon$ with zero-mean and unit variance. The conditional distribution of $u^q$ given $\mathbf{X}$ is $F_\epsilon\big(\mathbf{X}^\prime \boldsymbol{\alpha}, (\mathbf{X}^\prime \boldsymbol{\nu})^2 \big)$,
and the truncated conditional density of $u^q$ given $u^q \le 0$ and $\mathbf{X}$ is
\begin{align} \label{equ:h_fun}
h(z | \mathbf{X}) = (\mathbf{X}^\prime \boldsymbol{\nu})^{-1} f_\epsilon \big(\frac{z - \mathbf{X}^\prime \boldsymbol{\alpha}}{\mathbf{X}^\prime \boldsymbol{\nu}}\big) \big/ F_\epsilon\big(-\frac{\mathbf{X}^\prime \boldsymbol{\alpha}}{\mathbf{X}^\prime \boldsymbol{\nu}} \big).
\end{align}
The conditional variance $\psi(u^q)$ can be estimated by quasi generalized pseudo maximum likelihood (\citeauthor{gourieroux1984pseudo}, \citeyear{gourieroux1984pseudo}) based on the scaling formula $$
\text{Var}\big(u^q | u^q \le 0, \mathbf{X}\big) = \int_{-\infty}^{0} z^2 h(z | \mathbf{X}) dz - \big(\int_{-\infty}^{0} z h(z | \mathbf{X}) dz\big)^2.
$$
In practice, we first estimate the conditional mean and variance of $u^q$ given $\mathbf{X}$ by maximum likelihood, and then employ kernel density estimation to estimate the unknown distribution $F_\epsilon$ nonparametrically. The truncated density $h(\cdot | \mathbf{X})$ is then calculated by \eqref{equ:h_fun}. The resulting estimator of $\psi$ is referred to as the \textit{nid} estimator (denoted by $\hat{\psi}_N$).
The third option discussed in \cite{dimitriadis2019joint} is by assuming that $\epsilon \sim N(0, 1)$ in \eqref{equ:ls_quant_res}. However, our empirical investigation suggests that this approach does not perform well in some situations. Therefore, throughout the numerical studies, we focus on the $\textit{iid}$ and $\textit{nid}$ approaches for estimating the conditional variance $\psi(u^q)$.
Another feasible alternative for covariance estimation is the \textit{bootstrap} method \citep{efron1992bootstrap}. We generate $B$ bootstrap samples by randomly selecting $n$ pairs of $(Y_i, \mathbf{X}_i)$ with replacement, and then obtain $B$ bootstrap ES coefficient estimates by applying either the one-step or the proposed two-step estimation approach to each bootstrap sample.
The bootstrap covariance is then approximated by the sample covariance of the $B$ bootstrap parameter estimates.
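The pairs-bootstrap loop can be sketched as below (a generic Python illustration assuming NumPy; the `estimator` argument stands for any of the coefficient estimators discussed above and is our abstraction, not a fixed API):

```python
import numpy as np

def bootstrap_cov(y, X, tau, estimator, B=200, seed=0):
    """Pairs bootstrap: resample (Y_i, X_i) with replacement, re-run
    `estimator` (one-step or two-step) on each resample, and return the
    sample covariance of the B coefficient estimates."""
    rng = np.random.default_rng(seed)
    n = len(y)
    est = np.empty((B, X.shape[1]))
    for b in range(B):
        idx = rng.integers(0, n, size=n)
        est[b] = estimator(y[idx], X[idx], tau)
    return np.atleast_2d(np.cov(est, rowvar=False))
```

Any consistent ES estimator can be passed in; for an intercept-only model, even the plain tail average below the empirical quantile serves as a quick stand-in.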
\begin{theorem} \label{thm:joint_var_consistency}
Let $\hat{\boldsymbol{\theta}}^e$ be the two-step estimator obtained by \eqref{equ:two-step_estimator} and denote $\hat{u}^q_i = Y_i - \mathbf{X}_i^\prime \hat{\boldsymbol{\theta}}^q$,
\begin{align*}
&\hat{\Lambda} = n^{-1} \sum_{i=1}^n (\mathbf{X}_i \mathbf{X}_i^\prime) \cdot G_2^{(1)} (\mathbf{X}_i^\prime \hat{\boldsymbol{\theta}}^e), \\
&\hat{\Omega}(\hat{\psi}) = n^{-1} \sum_{i=1}^n \big(\mathbf{X}_i \mathbf{X}_i^\prime\big) \cdot \big\{G_2^{(1)} (\mathbf{X}_i^\prime \hat{\boldsymbol{\theta}}^e)\big\}^2 \cdot \left\{\frac{1}{\tau} \hat{\psi}(\hat{u}^q_i) + \frac{1-\tau}{\tau}\phi^2(\mathbf{X}_i, \hat{\boldsymbol{\theta}}^q, \hat{\boldsymbol{\theta}}^e)\right\}.
\end{align*}
In addition, denote by $\hat{\Omega}(\hat{\psi}_I)$ and $\hat{\Omega}(\hat{\psi}_N)$ the estimated covariance matrices with the conditional variance $\psi(u^q) = \text{Var}(u^q | u^q \le 0, \mathbf{X})$ estimated by $\hat{\psi}_I$ and $\hat{\psi}_N$, respectively. If the assumptions of Theorem \ref{thm:sep_asy_normality} hold, then $\hat{\Lambda} - \Lambda = o_p(1)$ and $\hat{\Omega}(\hat{\psi}_N) - \Omega = o_p(1)$. Furthermore, if $u^q$ is independent of $\mathbf{X}$, then $\hat{\Omega}(\hat{\psi}_I) - \Omega = o_p(1)$.
\end{theorem}
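To make the plug-in sandwich covariance $\hat{\Lambda}^{-1} \hat{\Omega}(\hat{\psi}_I) \hat{\Lambda}^{-1}$ concrete, the following NumPy sketch (our own; it hard-codes the specification $G_2(z) = -1/z$, so $G_2^{(1)}(z) = 1/z^2$, and the \textit{iid} estimator of $\psi$) assembles it from the fitted quantile and ES coefficients:

```python
import numpy as np

def wald_cov_iid(y, X, b_q, b_e, tau):
    """Plug-in sandwich covariance Lambda^{-1} Omega Lambda^{-1} / n for
    the two-step ES estimator, with psi estimated by the iid estimator
    (sample variance of the nonpositive quantile residuals).
    Uses G2(z) = -1/z, so G2^{(1)}(z) = 1/z^2."""
    n = len(y)
    u = y - X @ b_q
    g2p = 1.0 / (X @ b_e) ** 2               # G2^{(1)} at the fitted ES
    psi = np.var(u[u <= 0])                   # homogeneous-error psi-hat
    phi2 = (X @ b_q - X @ b_e) ** 2
    w_omega = g2p**2 * (psi / tau + (1.0 - tau) / tau * phi2)
    Lam = (X * g2p[:, None]).T @ X / n
    Omg = (X * w_omega[:, None]).T @ X / n
    Lam_inv = np.linalg.inv(Lam)
    return Lam_inv @ Omg @ Lam_inv / n        # approx Var(hat theta^e)
```

In an intercept-only standard normal model the weight $G_2^{(1)}$ cancels, and $n \cdot \text{Var}$ reduces to $\psi/\tau + \tau^{-1}(1-\tau)(Q_\tau - ES_\tau)^2$.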
\subsection{Proposed score test}\label{subsec:proposed_score_test_ES}
Theorem \ref{thm:joint_var_consistency} shows that the covariance matrix of $\hat{\boldsymbol{\theta}}^e$ can be estimated consistently.
However, in the quantile regression literature, the Wald-type test based on direct estimation of the asymptotic covariance matrix has been shown to be unstable for small sample sizes.
One reason is the sensitivity of the Wald-type test to the smoothing parameter involved in estimating the unknown conditional density function. In finite samples, the score test has been shown to have more stable performance than the Wald test for quantile regression \citep{chen2005computational, kocherginsky2005practical}.
Due to the connection between quantile and ES, we propose an alternative score-type test for the inference on ES regression parameters.
We partition $\boldsymbol{\theta}^e$ into two parts $\boldsymbol{\theta}_1^e \in \mathbb{R}^{p_1}$ and $\boldsymbol{\theta}_2^e \in \mathbb{R}^{p_2}$ with $p_1 + p_2 = p$, and let $\mathbf{W}$ and $\mathbf{Z}$ be the design vectors corresponding to $\boldsymbol{\theta}_1^e$ and $\boldsymbol{\theta}_2^e$, respectively. Suppose we want to test the hypotheses $H_0: \boldsymbol{\theta}_2^e = \mathbf{0}$ against $H_a: \boldsymbol{\theta}_2^e \ne \mathbf{0}$ in the joint regression model
\begin{equation*}
Q_\tau(Y | \mathbf{X}) = \mathbf{X}^\prime \boldsymbol{\theta}^q \quad \text{and} \quad ES_\tau(Y | \mathbf{X}) = \mathbf{W}^\prime \boldsymbol{\theta}^e_1 + \mathbf{Z}^\prime \boldsymbol{\theta}^e_2.
\end{equation*}
Denote
\begin{align} \label{equ:ortho_score}
&\Pi_W = (\mathbf{W}_1, \dots, \mathbf{W}_n)^\prime, \quad \Pi_Z = (\mathbf{Z}_1, \dots, \mathbf{Z}_n)^\prime, \nonumber \\
& G = \text{diag}\{G_2^{(1)}(\mathbf{X}_1^\prime \boldsymbol{\theta}^e), \dots, G_2^{(1)}(\mathbf{X}_n^\prime \boldsymbol{\theta}^e)\}, \\
&P = \Pi_W(\Pi_W^\prime G \Pi_W)^{-1} \Pi_W^\prime G, \qquad
\Pi_Z^* = (I - P)\Pi_Z, \nonumber
\end{align}
and let $\mathbf{Z}_i^{*\prime}$ be the rows of $\Pi_Z^*$ corresponding to the $i$th subject. We consider the orthogonal transformation on $\mathbf{Z}$ to adjust for the dependence between $\mathbf{Z}$ and $\mathbf{W}$. This transformation is needed to cancel out the first-order bias involved in $\hat{\boldsymbol{\theta}}^e_1$ in order to prove Lemma \ref{lemma:score_stat_null_lemma2} in Section \ref{subsec:proof_of_thm}. The weighted projection through $G$ in the orthogonal transformation is needed to account for heteroscedasticity; see related discussion for quantile regression in \cite{koenker1999goodness}.
Our proposed score test statistic is defined as
\begin{equation}\label{equ:score_stat}
T_n(\hat{\psi}) = S_n^\prime \big\{\hat{\Sigma}_n(\hat{\psi})\big\}^{-1} S_n,
\end{equation}
where
\begin{align*}
&S_n = n^{-1/2}\sum_{i=1}^n \hat{\mathbf{Z}}_i^* G_2^{(1)}(\mathbf{X}_i^\prime \hat{\boldsymbol{\theta}}^e) \big\{\mathbf{W}_i^\prime \hat{\boldsymbol{\theta}}_1^e - \mathbf{X}_i^\prime \hat{\boldsymbol{\theta}}^q - \tau^{-1} \hat{u}^q_i I(\hat{u}^q_i \le 0)\big\}, \quad \hat{u}^q_i = Y_i - \mathbf{X}_i^\prime \hat{\boldsymbol{\theta}}^q, \\
&\hat{\Sigma}_n(\hat{\psi}) = n^{-1} \sum_{i=1}^n \big(\hat{\mathbf{Z}}_i^* \hat{\mathbf{Z}}_i^{*\prime}\big) \big\{G_2^{(1)}(\mathbf{X}_i^\prime \hat{\boldsymbol{\theta}}^e)\big\}^2 \cdot \left\{\frac{1}{\tau} \hat{\psi}(\hat{u}^q_i) + \frac{1-\tau}{\tau} \phi^2(\mathbf{X}_i, \hat{\boldsymbol{\theta}}^q, \hat{\boldsymbol{\theta}}^e)\right\}.
\end{align*}
Here $\hat{\boldsymbol{\theta}}_1^e$ is the two-step estimator of the ES regression parameter $\boldsymbol{\theta}^{e}_1$ under $H_0$, and $\hat{\boldsymbol{\theta}}^e$ is the two-step estimator of $\boldsymbol{\theta}^e = (\boldsymbol{\theta}^{e \prime}_1, \boldsymbol{\theta}^{e \prime}_2)^\prime$ under the unrestricted model. To obtain $\hat{\mathbf{Z}}_i^*$, the weight matrix $G$ given in \eqref{equ:ortho_score} can be estimated with $\hat{\boldsymbol{\theta}}^e$.
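The assembly of $T_n$ from these pieces can be sketched as follows (a Python illustration of our own, again fixing $G_2(z) = -1/z$ so that $G_2^{(1)}(z) = 1/z^2$; `psi_hat` is a vector of per-observation estimates of $\psi(u^q)$):

```python
import numpy as np

def score_test(y, X, W, Z, b_q, b_e1, b_e, tau, psi_hat):
    """Score statistic T_n for H0: theta_2^e = 0. `b_e1` is the restricted
    two-step fit (ES regressed on W only), `b_e` the unrestricted fit used
    to form the weights. Uses G2(z) = -1/z, so G2^{(1)}(z) = 1/z^2."""
    n = len(y)
    g = 1.0 / (X @ b_e) ** 2                  # G2^{(1)} weights
    GW = W * g[:, None]
    Zs = Z - W @ np.linalg.solve(W.T @ GW, GW.T @ Z)   # Z* = (I - P) Z
    u = y - X @ b_q
    m = W @ b_e1 - X @ b_q - u * (u <= 0) / tau        # generalized residual
    S = (Zs * (g * m)[:, None]).sum(axis=0) / np.sqrt(n)
    phi2 = (X @ b_q - X @ b_e) ** 2
    w = g**2 * (psi_hat / tau + (1.0 - tau) / tau * phi2)
    Sig = (Zs * w[:, None]).T @ Zs / n
    return S @ np.linalg.solve(Sig, S)        # approx chi^2_{p2} under H0
```

Under the null, repeated draws of the statistic average to about $p_2$, in line with the chi-squared limit stated in the next theorem.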
Before presenting the asymptotic distribution of $T_n$, we
define $$
\Sigma = E\left[\big(\mathbf{Z}^* \mathbf{Z}^{*\prime}\big) \big\{G_2^{(1)}(\mathbf{X}^\prime \boldsymbol{\theta}_{0}^e)\big\}^2\left\{\frac{1}{\tau} \psi(u^q) + \frac{1-\tau}{\tau} \phi^2(\mathbf{X}, \boldsymbol{\theta}_0^q, \boldsymbol{\theta}_0^e)\right\}\right].
$$
We also introduce an additional assumption A5.
\begin{itemize}
\item [A5] The minimum eigenvalue of $\Sigma$ is bounded away from zero.
\end{itemize}
\begin{theorem}\label{thm:score_stat_null}
Suppose that the assumptions of Theorem \ref{thm:sep_asy_normality} and A5 hold. Then:
\begin{itemize}
\item [(i)] under $H_0$, $T_n(\hat{\psi}_N) \overset{d}{\to} \chi^2_{p_2}$ as $n \to \infty$. Furthermore, if $u^q$ is independent of $\mathbf{X}$, $T_n(\hat{\psi}_I) \overset{d}{\to} \chi^2_{p_2}$ as $n \to \infty$;
\item [(ii)] under the local alternative hypothesis $H_n: \boldsymbol{\theta}_2^e = \boldsymbol{\theta}_{20}^e / \sqrt{n}$, where $\boldsymbol{\theta}_{20}^e$ is some non-zero parameter vector corresponding to $\mathbf{Z}$, $T_n(\hat{\psi}_N)$ asymptotically follows a non-central $\chi^2_{p_2}$ distribution with the noncentrality parameter $$
\zeta = E\big\{\mathbf{Z}^* G_2^{(1)}(\mathbf{X}^\prime \boldsymbol{\theta}_{0}^e) (\mathbf{Z}^\prime \boldsymbol{\theta}_{20}^e)\big\}^\prime \Sigma^{-1} E\big\{\mathbf{Z}^* G_2^{(1)}(\mathbf{X}^\prime \boldsymbol{\theta}_{0}^e) (\mathbf{Z}^\prime \boldsymbol{\theta}_{20}^e)\big\}.
$$
Furthermore, if $u^q$ is independent of $\mathbf{X}$, then $T_n(\hat{\psi}_I)$ converges in distribution to the same non-central $\chi^2_{p_2}$ distribution with noncentrality parameter $\zeta$.
\end{itemize}
\end{theorem}
\begin{remark}
The matrix $\hat{\Sigma}_n(\hat{\psi})$ in the score test statistic involves the estimation of the truncated conditional variance of $u^q$ given $u^q \le 0$ and $\mathbf{X}$. Similar to the Wald-type test, we consider both $\textit{iid}$ and $\textit{nid}$ estimators for the nuisance parameter $\psi$ to accommodate different scenarios.
\end{remark}
\begin{remark}
The proposed estimation and inference methods can be directly applied for analyzing the upper tail CES at the probability level $\tau$, which is defined as
\begin{equation} \label{equ:upper_CES}
ES_\tau(Y | \mathbf{X}) = (1-\tau)^{-1}\int_\tau^1 F^{-1}_{Y|\mathbf{X}}(u)du = (1-\tau)^{-1}\int_\tau^1 Q_u(Y | \mathbf{X}) du.
\end{equation}
Assume the upper tail ES regression model
$$
ES_\tau(Y | \mathbf{X}) = \mathbf{X}^\prime \boldsymbol{\theta}^{e \star}.
$$
Here, $\boldsymbol{\theta}^{e \star}$ and $\boldsymbol{\theta}^{e}$ denote the upper and lower tail ES regression parameters, respectively.
Note that $F^{-1}_{Y|\mathbf{X}}(u) = -F^{-1}_{-Y|\mathbf{X}}(1-u)$, which implies the $u$th conditional quantile of $Y$ given $\mathbf{X}$ is the negative of the $(1-u)$th conditional quantile of $-Y$ given $\mathbf{X}$.
In practice, if we are interested in the inference on $\boldsymbol{\theta}^{e \star}$, we can (1) change $Y$ to $Y^\star = -Y$ and let $\tau^\star = 1 - \tau$; (2) apply the proposed two-step estimation and inference methods on the lower tail ES of $Y^\star$ conditional on $\mathbf{X}$ with probability level $\tau^\star$. Then we have $\hat{\boldsymbol{\theta}}^{e \star} = -\hat{\boldsymbol{\theta}}^{e}$ and $\text{Var}(\hat{\boldsymbol{\theta}}^{e \star}) = \text{Var}(\hat{\boldsymbol{\theta}}^{e})$.
\end{remark}
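The sign-flip reduction in the remark above amounts to a one-line wrapper around any lower-tail fitting routine (a Python sketch of our own; `lower_tail_fitter` is a placeholder for whichever lower-tail ES estimator is in use):

```python
import numpy as np

def upper_tail_es_fit(y, X, tau, lower_tail_fitter):
    """Upper-tail ES via the sign-flip identity: fit the lower tail of
    Y* = -Y at level tau* = 1 - tau, then negate the coefficients."""
    return -lower_tail_fitter(-y, X, 1.0 - tau)
```

For a standard normal response and $\tau = 0.9$, the recovered upper-tail ES is $\phi(\Phi^{-1}(0.9))/0.1 \approx 1.755$.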
\section{Simulation study}\label{sec:simulation_ES}
In this section, we investigate the finite sample performance of the proposed inference method through Monte Carlo simulation studies. For comparison purposes, we include the results given by the Wald-type and \textit{bootstrap} methods, which are based on the joint estimation and are implemented in the R package \textit{esreg} \citep{dimitriadis2019joint}.
For both Wald and score methods, we consider the \textit{W-IID} and \textit{S-IID} approaches, where the asymptotic variance is estimated under the homogeneous error assumption, and the \textit{W-NID} and \textit{S-NID} methods, where the conditional variance $\psi(u^q)$ is estimated by $\hat{\psi}_N$.
In addition, we also report the results of the \textit{COVES} test introduced by \cite{he2010detection}. Since the \textit{COVES} method focuses on the treatment difference at the right tail of the response distribution, for the simulation study in Section \ref{sec:simulation_ES} and the real data analysis in Section \ref{sec:real_data_ES}, we will focus on the inference for the upper tail CES at the probability level $\tau$, as defined in \eqref{equ:upper_CES}.
\subsection{Simulation Design}\label{subsec:simulation_design_ES}
The first two models we consider have simple setups:
\begin{itemize}
\item \textit{Scenario 1}: $Y = 5 + \eta D + x_1 + \epsilon$;
\item \textit{Scenario 2}: $Y = 5 - \eta D + C + (1 + 0.25D + 2C)\epsilon$;
\end{itemize}
where $D$ is the binary treatment indicator, $x_1 \sim N(2.5, 0.5^2)$, $\epsilon \sim N(0, 1)$, and $C$ is an unbalanced covariate following the truncated normal distribution $TN(\min = -0.5, \mu = 0.5, \sigma^2 = 0.5^2)$ in the treatment group and $TN(\min = -0.5, \mu = 0, \sigma^2 = 0.5^2)$ in the control group.
The first scenario is a homogeneous model where the regression error does not interact with any covariates. In the second model, the error depends on both the treatment variable and the unbalanced covariate $C$. At the probability level $\tau$, the marginal impact on the CES due to the treatment is $$
\eta(\tau) = -\eta + 0.25 \cdot ES_\tau(\epsilon),$$
which is the ES coefficient associated with the treatment variable $D$.
In contrast, the expectation of the \textit{COVES} test statistic is given by \begin{equation}\label{equ:coves_stat}
\mathcal{T}_\tau = \eta(\tau) + 2\big\{E(C|D=1) - E(C|D=0)\big\} \cdot \big\{ES_\tau(\epsilon) - Q_\tau(\epsilon)\big\},
\end{equation}
where $Q_\tau(\epsilon)$ and $ES_\tau(\epsilon)$ are the marginal $\tau$th quantile and upper tail ES of $\epsilon$.
Due to the imbalance of covariate $C$, \textit{COVES} may inflate or deflate the treatment difference $\eta(\tau)$, which consequently makes the test either too liberal or too conservative.
\begin{remark}
The difference between $\mathcal{T}_\tau$ and the treatment difference $\eta(\tau)$ is $$
\mathcal{T}_\tau - \eta(\tau) = 2 \big\{E(C|D=1) - E(C|D=0)\big\} \cdot \big\{ES_\tau(\epsilon) - Q_\tau(\epsilon)\big\}.
$$
Suppose that $C$ is an unbalanced covariate such that the mean of $C$ differs between the two treatment groups and the error $\epsilon$ depends on $C$. Then it is possible to have $\mathcal{T}_\tau - \eta(\tau) \ne 0$, and this may affect the power of the \textit{COVES} test.
Specifically, if the second term on the right-hand-side (RHS) of \eqref{equ:coves_stat} cancels out with $\eta(\tau)$, \textit{COVES} may fail to detect the treatment difference.
On the other hand, if the treatment has no impact on CES such that $\eta(\tau) = 0$, but the second term on the RHS of \eqref{equ:coves_stat} is non-zero, then \textit{COVES} may over-reject the null hypothesis and thus lead to higher false positive rate.
\end{remark}
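These two quantities are easy to evaluate numerically for Scenario 2 (an illustrative Python sketch of our own, assuming SciPy; the group-mean difference of $C$ is passed in as `dC` rather than computed from the truncated normals):

```python
import numpy as np
from scipy.stats import norm

def upper_es_normal(tau):
    # Upper-tail ES of N(0, 1): phi(Phi^{-1}(tau)) / (1 - tau)
    return norm.pdf(norm.ppf(tau)) / (1.0 - tau)

def scenario2_effects(eta, tau, dC):
    """eta(tau) = -eta + 0.25 * ES_tau(eps) for Scenario 2, and the COVES
    bias 2 * dC * {ES_tau(eps) - Q_tau(eps)}, where
    dC = E(C|D=1) - E(C|D=0) is the group-mean difference of C."""
    es, q = upper_es_normal(tau), norm.ppf(tau)
    return -eta + 0.25 * es, 2.0 * dC * (es - q)
```

For instance, at $\tau = 0.9$ with $\eta = 0$ and a group-mean difference of $0.5$, the bias term alone is about $0.47$, which can push \textit{COVES} toward over-rejection even without any true treatment effect on CES.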
There is only one confounding variable in each of the first two models, and both error terms follow normal distributions. To examine the robustness of the proposed method, we further consider two additional scenarios in which more regression covariates are included (Scenario 3) and the error has a heavy-tailed distribution (Scenario 4). The data are generated from
\begin{equation*}\label{equ:simeq3}
Y = 5 + \eta D + \sum_{i=2}^{7} x_i + (1 + \gamma D)\epsilon,
\end{equation*}
where $x_2$ is $Ber(0.4)$, $x_3$ and $x_4$ have standard log-normal distributions, $(x_5, x_6)$ is bivariate normal with mean $(2,2)$, variance $(1,1)$, and correlation 0.8, and $x_7$ is chi-square distributed with one degree of freedom. Except for the correlation between $x_5$ and $x_6$, all variables are independently generated.
\begin{itemize}
\item \textit{Scenario 3}: we take $\gamma = 0$, and the error term $\epsilon \sim N(0, 1)$.
\item \textit{Scenario 4}: we take $\gamma = 0.2$, and the error term $\epsilon \sim t_3/2$.
\end{itemize}
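A minimal data-generating sketch for Scenarios 3 and 4, using only the standard library (the helper names are ours; a chi-square draw with one degree of freedom is obtained as a squared standard normal, and a $t_3$ draw as a standard normal divided by the square root of an independent chi-square over its degrees of freedom):

```python
import math
import random

rng = random.Random(2024)

def chisq1():
    return rng.gauss(0, 1) ** 2            # chi-square with 1 df

def t3():
    z = rng.gauss(0, 1)
    v = sum(rng.gauss(0, 1) ** 2 for _ in range(3))
    return z / math.sqrt(v / 3)            # Student t with 3 df

def generate(n, d, eta, gamma, heavy_tail):
    """One treatment group of size n with indicator D = d."""
    ys = []
    for _ in range(n):
        x2 = 1.0 if rng.random() < 0.4 else 0.0        # Ber(0.4)
        x3 = math.exp(rng.gauss(0, 1))                  # standard log-normal
        x4 = math.exp(rng.gauss(0, 1))
        z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
        x5 = 2 + z1                                     # bivariate normal,
        x6 = 2 + 0.8 * z1 + math.sqrt(1 - 0.8**2) * z2  # correlation 0.8
        x7 = chisq1()
        eps = t3() / 2 if heavy_tail else rng.gauss(0, 1)
        ys.append(5 + eta * d + x2 + x3 + x4 + x5 + x6 + x7
                  + (1 + gamma * d) * eps)
    return ys

# Scenario 3: gamma = 0 with N(0,1) errors; Scenario 4: gamma = 0.2 with t_3/2 errors
y_trt = generate(100, 1, eta=2.5, gamma=0.0, heavy_tail=False)
y_ctl = generate(100, 0, eta=2.5, gamma=0.0, heavy_tail=False)
```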
We consider two sample sizes $n = 50$ and $n = 100$ for each treatment group, and we focus on $\tau = 0.8$ and $\tau = 0.9$ in this study. The simulation is repeated 600 times for each scenario with a given value of $\eta$.
\subsection{Statistical Power for Testing the Treatment Effect}\label{subsec:power_ES}
For both the Wald-type and score-type approaches, the estimation efficiency depends on the choice of the specification functions $G_1$ and $\mathcal{G}_2$.
\cite{dimitriadis2019joint} discuss several feasible choices, and their simulation analysis suggested that $G_1(z) = z$ and $\mathcal{G}_2(z) = -\log(-z)$ provide the most consistent estimation results under all scenarios considered. Throughout, we employ these two functions in the regression procedure.
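In the joint regression framework of \cite{dimitriadis2019joint}, $G_2 = \mathcal{G}_2^{(1)}$, so with this choice $G_2(z) = -1/z$ and $G_2^{(1)}(z) = 1/z^2$ on the domain $z < 0$. A quick finite-difference sketch confirming these derivative relations (the function names are ours):

```python
import math

def curly_G2(z):          # specification function G_2(z) = -log(-z), z < 0
    return -math.log(-z)

def G2(z):                # its first derivative: -1/z
    return -1.0 / z

def G2_prime(z):          # its second derivative: 1/z**2
    return 1.0 / z**2

def num_deriv(f, z, h=1e-6):
    """Central finite-difference approximation of f'(z)."""
    return (f(z + h) - f(z - h)) / (2 * h)

z = -1.7
print(abs(num_deriv(curly_G2, z) - G2(z)))       # ~0
print(abs(num_deriv(G2, z) - G2_prime(z)))       # ~0
```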
Table \ref{tab:T1E} shows that the Wald-type and score-type approaches both maintain the significance level reasonably well, with type I errors staying close to the nominal level of 0.05. In contrast, the bootstrap method and the \textit{COVES} test yield inflated false positive rates in most cases, especially when the sample size is $n = 50$ and $\tau = 0.9$.
Under Scenario 1, with only one covariate besides the treatment indicator, all methods perform similarly. As more confounding factors are added, however, the score-type tests achieve higher statistical power than the Wald-type approaches; see Figure \ref{fig:PA_plots} for representative examples from Scenarios 3 and 4 at $\tau = 0.8$ and $n = 100$. The power curves in Scenario 2 further confirm that the \textit{COVES} test gives a biased estimate of the treatment difference due to the unbalanced covariate.
\begin{table}[H]
\centering
\caption{Type I error (percentage) for testing $H_0: \eta(\tau) = 0$ with nominal level of 5\%.}
\label{tab:T1E}
\begin{tabular}{ccccccccc}
\hline
Scenario & $n$ & $\tau$ & \textit{W-IID} & \textit{W-NID} & \textit{S-IID} & \textit{S-NID} & \textit{BOOT} & \textit{COVES} \\
\hline
\multirow{4}{*}{1}& \multirow{2}{*}{50} & 0.8 & 7.3 & 6.7 & 6.2 & 6.3 & 6.7 & 8.5 \\
&& 0.9 & 9.3 & 8.5 & 9.7 & 8.0 & 8.0 & \textbf{13.2} \\
&\multirow{2}{*}{100}& 0.8 & 6.5 & 6.0 & 6.2 & 6.2 & 6.3 & 6.7 \\
&& 0.9 & 6.7 & 5.5 & 6.2 & 5.5 & 6.7 & 7.7 \\
\hline
\multirow{4}{*}{2}& \multirow{2}{*}{50} & 0.8 & 2.3 & 3.8 & 2.7 & 4.2 & 6.8 & \textbf{13.0} \\
&& 0.9 & 4.7 & 6.2 & 5.8 & 7.0 & 7.0 & \textbf{12.5} \\
& \multirow{2}{*}{100} & 0.8 & 1.2 & 2.5 & 1.7 & 3.0 & 5.5 & \textbf{18.3} \\
&& 0.9 & 2.0 & 3.0 & 2.3 & 3.2 & 5.3 & \textbf{11.8} \\
\hline
\multirow{4}{*}{3}& \multirow{2}{*}{50} & 0.8 & 5.7 & 6.0 & 4.5 & 4.8 & 8.2 & \textbf{12.5} \\
&& 0.9 & 8.2 & 8.2 & 8.5 & 8.2 & \textbf{10.0} & \textbf{21.8} \\
& \multirow{2}{*}{100} & 0.8 & 3.5 & 4.3 & 4.0 & 4.3 & 5.2 & 6.7 \\
&& 0.9 & 4.0 & 4.0 & 5.2 & 4.8 & 6.2 & 9.8 \\
\hline
\multirow{4}{*}{4}& \multirow{2}{*}{50} & 0.8 & 2.5 & 3.0 & 2.0 & 3.0 & 6.0 & 9.5 \\
&& 0.9 & 3.0 & 3.5 & 4.7 & 5.0 & \textbf{11.0} & \textbf{17.8} \\
& \multirow{2}{*}{100} & 0.8 & 3.2 & 3.7 & 1.8 & 2.7 & 7.7 & 5.5 \\
&& 0.9 & 2.7 & 3.3 & 3.2 & 4.0 & 9.5 & \textbf{10.7} \\
\hline
\end{tabular}
\begin{itemize}
\item [] \footnotesize \textit{W-IID} (\textit{W-NID}): Wald-type methods with $\psi(u^q)$ estimated by $\hat{\psi}_I$ ($\hat{\psi}_N$); \textit{S-IID} (\textit{S-NID}): score methods with $\psi(u^q)$ estimated by $\hat{\psi}_I$ ($\hat{\psi}_N$); \textit{BOOT}: bootstrap method based on the joint estimation; \textit{COVES}: method in \cite{he2010detection}.
\end{itemize}
\end{table}
\subsection{Comparing the Confidence Interval of Treatment Coefficient}\label{subsec:CI_ES}
To assess the performance of the different methods for confidence interval construction, we fix $\eta = 1.35, 2, 2.5$ and $3.5$ in Scenarios 1--4, respectively.
Tables \ref{tab:CI_re_50} and \ref{tab:CI_re_100} summarize the coverage percentage and average length of 95\% confidence intervals for the treatment difference $\eta(\tau)$. Under Scenario 1, the Wald-type and score methods show similar accuracy. For Scenarios 3 and 4 with more confounding factors, the score-type methods provide shorter confidence intervals with relatively higher coverage, which agrees with the results of the power analysis in Section \ref{subsec:power_ES}. Furthermore, when the errors are i.i.d., the
\textit{W-IID} and \textit{W-NID} approaches perform similarly. For Scenarios 2 and 4, where the errors are heterogeneous, the \textit{W-NID} method performs better in the sense that it provides confidence intervals with coverage closer to the nominal level and shorter length than the \textit{W-IID} method.
On the other hand, the score methods are less sensitive to violations of the homogeneity assumption, as \textit{S-IID} and \textit{S-NID} give similar results across all scenarios considered. Overall, the \textit{COVES} method gives the lowest coverage, well below the nominal level of 95\%, especially under Scenario 2 where the error term depends on an unbalanced covariate.
\begin{table}[H]
\centering
\caption{Coverage probabilities and average lengths (inside the parentheses) of $95\%$ confidence intervals from different methods when sample size $n = 50$. All values are in percentages.}
\label{tab:CI_re_50}
\begin{tabular}{cccccccc}
\hline
Scenario & $\tau$ & \textit{W-IID} & \textit{W-NID} & \textit{S-IID} & \textit{S-NID} & \textit{BOOT} & \textit{COVES} \\
\hline
\multirow{4}{*}{1}& \multirow{2}{*}{0.8} & 92.2 & 92.8 & 94.0 & 94.0 & 93.0 & 91.8 \\
&& (120) & (122) & (120) & (120) & (124) & (113) \\
& \multirow{2}{*}{0.9} &90.5 & 91.5 & 90.7 & 90.7 & 91.3 & \textcolor{red}{88.3} \\
&& (146) & (149) & (146) & (146) & (144) & (133) \\
\hline
\multirow{4}{*}{2}& \multirow{2}{*}{0.8}&98.0 & 96.0 & 97.3 & 97.3& 93.5 & \textcolor{red}{85.8} \\
&& (268) & (240) & (265) & (265) & (203) & (232) \\
& \multirow{2}{*}{0.9} &95.5 & 94.5 & 94.2 & 94.2& 93.3 & \textcolor{red}{84.7} \\
&& (327) & (297) & (320) & (320) & (253) & (268) \\
\hline
\multirow{4}{*}{3}& \multirow{2}{*}{0.8} &92.7 & 92.3 & 95.5 & 95.5 & 94.7 & \textcolor{red}{87.7} \\
&& (184) & (182) & (\textcolor{blue}{157}) & (\textcolor{blue}{157}) & (187) & (110) \\
&\multirow{2}{*}{0.9} & 90.8 & 91.5 & 91.5 & 91.5 & 92.5 & \textcolor{red}{77.3} \\
&& (210) & (210) & (\textcolor{blue}{175}) & (\textcolor{blue}{175}) & (192) & (119) \\
\hline
\multirow{4}{*}{4}& \multirow{2}{*}{0.8}&97.0 & 96.3 & 98.0 & 98.0 & 96.5 & 90.7 \\
&& (234) & (227) & (\textcolor{blue}{200}) & (\textcolor{blue}{200}) & (197) & (143) \\
& \multirow{2}{*}{0.9} &96.3 & 96.2 & 95.5 & 95.5 & 94.3 & \textcolor{red}{83.3} \\
&& (343) & (333) & (\textcolor{blue}{284}) & (\textcolor{blue}{284}) & (238) & (201) \\
\hline
\end{tabular}
\begin{itemize}
\item [] \footnotesize \textit{W-IID} (\textit{W-NID}): Wald-type methods with $\psi(u^q)$ estimated by $\hat{\psi}_I$ ($\hat{\psi}_N$); \textit{S-IID} (\textit{S-NID}): score methods with $\psi(u^q)$ estimated by $\hat{\psi}_I$ ($\hat{\psi}_N$); \textit{BOOT}: bootstrap method based on the joint estimation; \textit{COVES}: method in \cite{he2010detection}.
\end{itemize}
\end{table}
\begin{table}[H]
\centering
\caption{Coverage probabilities and average lengths (inside the parentheses) of $95\%$ confidence intervals from different methods when sample size $n = 100$. All values are in percentages.}
\label{tab:CI_re_100}
\begin{tabular}{cccccccc}
\hline
Scenario & $\tau$ & \textit{W-IID} & \textit{W-NID} & \textit{S-IID} & \textit{S-NID} & \textit{BOOT} & \textit{COVES} \\
\hline
\multirow{4}{*}{1}& \multirow{2}{*}{0.8}&93.3 & 93.8 & 93.8 & 93.8 & 93.7 & 92.3 \\
&& (87.1) & (88.4) & (87.2) & (87.2) & (87.8) & (83.8) \\
& \multirow{2}{*}{0.9} &93.0 & 93.2 & 93.5 & 93.5 & 92.5 & 91.5 \\
&& (109) & (110) & (108) & (108) & (107) & (103) \\
\hline
\multirow{4}{*}{2}& \multirow{2}{*}{0.8}&98.7 & 97.7 & 98.3 & 98.3 & 94.7 & \textcolor{red}{82.7} \\
&& (194) & (173) & (192) & (192) & (143) & (174) \\
& \multirow{2}{*}{0.9} &98.2 & 97.5 & 97.7 & 97.7 & 94.7 & \textcolor{red}{86.8} \\
&& (242) & (215) & (238) & (238) & (178) & (213) \\
\hline
\multirow{4}{*}{3}& \multirow{2}{*}{0.8} &94.7 & 94.5 & 96.2 & 96.2 & 95.7 & 93.7 \\
&& (129) & (129) & (\textcolor{blue}{108}) & (\textcolor{blue}{108}) & (128) & (81.4) \\
& \multirow{2}{*}{0.9} &94.8 & 94.0 & 95.2 & 95.2 & 95.0 & \textcolor{red}{89.5} \\
&& (150) & (149) & (\textcolor{blue}{130}) & (\textcolor{blue}{130}) & (137) & (96.3) \\
\hline
\multirow{4}{*}{4}& \multirow{2}{*}{0.8}&96.2 & 95.7 & 98.2 & 98.2 & 96.8 & 94.3 \\
&& (170) & (161) & (\textcolor{blue}{152}) & (\textcolor{blue}{152})& (142) & (113) \\
& \multirow{2}{*}{0.9} &96.2 & 95.3 & 96.8 & 96.8 & 93.7 & \textcolor{red}{89.3} \\
&& (260) & (247) & (\textcolor{blue}{233}) & (\textcolor{blue}{233}) & (188) & (176) \\
\hline
\end{tabular}
\begin{itemize}
\item [] \footnotesize \textit{W-IID} (\textit{W-NID}): Wald-type methods with $\psi(u^q)$ estimated by $\hat{\psi}_I$ ($\hat{\psi}_N$); \textit{S-IID} (\textit{S-NID}): score methods with $\psi(u^q)$ estimated by $\hat{\psi}_I$ ($\hat{\psi}_N$); \textit{BOOT}: bootstrap method based on the joint estimation; \textit{COVES}: method in \cite{he2010detection}.
\end{itemize}
\end{table}
\begin{figure}[H]
\centering
\includegraphics[width=15cm]{figures/PA_S1S2_200405.jpeg}
\includegraphics[width=15cm]{figures/PA_S3_S4_210105.jpeg}
\caption{Power analysis plots for testing $H_0: \eta(\tau) = 0$ at $\tau = 0.8$ and $n = 100$.}
\label{fig:PA_plots}
\end{figure}
\section{Real Data Analysis}\label{sec:real_data_ES}
\subsection{Application to 2018 CPS Income Data}\label{subsec:real_data_CPS_ES}
We illustrate the merit of the proposed score-type inference approach by analyzing a data set from the Current Population Survey (CPS) database, which can be accessed at \url{https://www.census.gov/programs-surveys/cps.html}. The CPS is a monthly survey of about 60,000 U.S. households conducted by the United States Census Bureau for the Bureau of Labor Statistics. Information collected in the survey includes employment status, income from work and a number of demographic characteristics. From the 2018 CPS Annual Social and Economic Supplement (ASEC) Bridge Files, we compile a data set containing 870 individuals (401 male and 469 female). To investigate whether there is a pay gap between women and men, we use hourly wage in U.S. dollars as the response and consider \textit{Gender}, \textit{Age}, $\textit{Age}^2$, \textit{Education level} (an ordinal variable with three levels) and \textit{Work status} (full-time vs. part-time) as explanatory variables in the joint regression model.
To check whether the quantile error depends on the predictors, we conduct a heterogeneity analysis by examining the residual patterns. At a given quantile level $\tau$, we fit a linear quantile regression model and compare the variances of the quantile residuals across covariate groups.
Table \ref{tab:res_var_edu} summarizes the quantile residual variances associated with the different education levels at $\tau = 0.7, 0.75$ and $0.8$. The results show that the quantile residuals appear to depend on \textit{Education level}. Therefore, for the Wald-type and score-type inference approaches, we focus on the \textit{W-NID} and \textit{S-NID} methods.
Moreover, \textit{Education level} is an unbalanced covariate, since the female group has a higher proportion of subjects with education level 3.
\begin{table}[H]
\centering
\caption{The sample variance of quantile residuals for different education levels. Values inside the parentheses are standard errors obtained with jackknife.}
\label{tab:res_var_edu}
\begin{tabular}{cccc}
\hline
\multirow{2}{*}{Education level} & \multicolumn{3}{c}{$\tau$} \\ \cline{2-4}
& 0.7 & 0.75 & 0.8 \\
\hline
1 & 13.18 (2.17) & 14.03 (2.17) & 14.59 (2.09) \\
2 & 29.49 (3.03) & 29.49 (2.96) & 29.64 (2.96) \\
3 & 170.55 (39.25) & 170.49 (39.71) & 169.30 (39.52) \\
\hline
\end{tabular}
\end{table}
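Given quantile residuals and group labels, entries of the form shown in Table \ref{tab:res_var_edu} can be produced by a routine like the following sketch (the helper names and the toy residuals are ours; the delete-one jackknife standard error is computed directly from the leave-one-out variances):

```python
import random

def sample_var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def jackknife_se(xs, stat=sample_var):
    """Delete-one jackknife standard error of `stat`."""
    n = len(xs)
    loo = [stat(xs[:i] + xs[i + 1:]) for i in range(n)]
    m = sum(loo) / n
    return (((n - 1) / n) * sum((v - m) ** 2 for v in loo)) ** 0.5

def var_by_group(residuals, groups):
    """Sample variance and jackknife SE of residuals within each group."""
    out = {}
    for g in sorted(set(groups)):
        xs = [r for r, gg in zip(residuals, groups) if gg == g]
        out[g] = (sample_var(xs), jackknife_se(xs))
    return out

# toy residuals: group 3 is much more dispersed, as in the table
rng = random.Random(7)
groups = [1] * 50 + [2] * 50 + [3] * 50
res = ([rng.gauss(0, 3) for _ in range(50)]
       + [rng.gauss(0, 5) for _ in range(50)]
       + [rng.gauss(0, 13) for _ in range(50)])
print(var_by_group(res, groups))
```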
Let $\theta_g^e$ denote the upper-tail CES difference in hourly wage between the female and male groups. At the probability levels $0.7, 0.75$ and $0.8$, we apply the proposed inference methods to test $H_0: \theta_g^e = 0$ against $H_a: \theta_g^e \ne 0$. Except for the \textit{COVES} method, all approaches suggest a significant pay gap between the two gender groups. We further calculate 95\% confidence intervals for $\theta_g^e$ using the different methods; the results are summarized in Table \ref{tab:CI_re_CPS}.
At the 5\% significance level, the \textit{W-NID}, \textit{S-NID} and bootstrap approaches all suggest that female employees are substantially underpaid relative to male employees with the same characteristics. In contrast, the \textit{COVES} test appears to be adversely affected by the unbalanced covariate \textit{Education level} and thus fails to capture the tail difference.
\begin{table}[H]
\centering
\caption{The 95\% confidence intervals of $\theta_g^e$ given by different approaches, where $\theta_g^e$ is the CES difference of the hourly wage between female and male groups for the 2018 CPS income data.}
\label{tab:CI_re_CPS}
\begin{tabular}{cccccc}
\hline
$\tau$& \textit{W-NID} & \textit{S-NID} & \textit{BOOT} & \textit{COVES} \\
\hline
0.7 & (-3.09, -0.47) & (-3.27, -0.44) & (-3.24, -0.32) & \textbf{(-4.34, 0.12)} \\
0.75 & (-3.31, -0.45) & (-3.41, -0.37) & (-3.39, -0.37) & \textbf{(-4.26, 0.6)} \\
0.8 & (-3.35, -0.15) & (-3.63, -0.14) & (-3.45, -0.05) & \textbf{(-4.49, 1.14)} \\
\hline
\end{tabular}
\end{table}
\subsection{Power Analysis for the Opportunity Knocks Data}\label{subsec:real_data_OK_ES}
To further assess the finite sample performance of the proposed score test, we conduct a power analysis based on the ``Opportunity Knocks" (OK) experiment \citep{angrist2014opportunity}, which was designed to explore the effects of academic achievement awards for first-year and second-year college students.
For our analysis, we consider a subset with all second-year students, which consists of 183 treated subjects and 337 untreated subjects. Treated students can receive bonus awards and have the opportunity to interact with randomly assigned peer advisors who can provide advice about study strategies, time management, and university bureaucracy.
The award scheme offered cash incentives to students with course grades above 70. Therefore, the academic performance of students can be measured by the amount they earned in the OK experiment.
To determine how the academic performance of students is affected by the merit award, we define the response variable $Y$ as the earnings of students (in 1000 U.S. dollars) from the OK program. Besides the treatment indicator $D$, we consider six additional covariates: gender ($X_1$), high school grade ($X_2$), an indicator for English mother tongue ($X_3$), whether the student answered the scholarship formula question correctly ($X_4$, yes vs. no), and mother's and father's education levels ($X_5$ and $X_6$, defined as above a college degree or not).
We apply the proposed inference methods to test the treatment effect $\theta^e_D$ (the upper-tail ES regression coefficient associated with the treatment variable $D$) at $\tau = 0.7, 0.75$ and $0.8$; the results are summarized in Table \ref{tab:CI_OK}. The data indicate a tendency for the merit award to have a positive effect on academic performance. However, none of the methods finds a statistically significant treatment effect on the CES based on the original data, which may be due to the small sample size. To determine the sample size needed for the different methods to capture the treatment difference, we conduct a power analysis.
To mimic the response distribution under the linear model assumption, we fit a linear quantile regression model to the original data at $\tau = 0.75$,
\begin{equation}
\label{equ:qr_OK}
\hat{Q}_\tau(Y | D, X_1, \dots, X_6) = \hat{\beta}_0(\tau) + \hat{\beta}_1(\tau)D + \sum_{i=1}^6 \hat{\alpha}_i(\tau) X_i.
\end{equation}
We then obtain the quantile residuals, defined as $\hat{\epsilon}_i(\tau) = Y_i - \hat{Q}_\tau(Y_i | D_i, X_1, \dots, X_6)$, and perform a heterogeneity analysis similar to that in Section \ref{subsec:real_data_CPS_ES}. The results indicate that the residual term depends on $X_4$. Therefore, for the power analysis, we focus on the \textit{S-NID} and \textit{W-NID} approaches, and the response is generated by
\begin{equation}
\label{equ:sim_res_OK}
Y_{\text{sim}} = \hat{\beta}_0(\tau) + \hat{\beta}_1(\tau)D + \sum_{i=1}^6 \hat{\alpha}_i(\tau) \tilde{X}_i^d + \xi \cdot I(D = 1) + \tilde{\epsilon},
\end{equation}
where $\hat{\beta}_0(\tau), \hat{\beta}_1(\tau)$ and $\hat{\alpha}_i(\tau)$ are the regression coefficient estimates from \eqref{equ:qr_OK}, $\tilde{X}_i^d$ follows the empirical distribution of covariate $X_i$ in group $D = d$, and $\tilde{\epsilon}$ is randomly sampled from the quantile residuals $\{\hat{\epsilon}_i(\tau)\}$ stratified by the values of $X_4$. Since the original treatment effect is not statistically significant, we add an additional signal $\xi$ in \eqref{equ:sim_res_OK} to increase the treatment difference, and we set the sample sizes of the two treatment groups to be equal.
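The resampling scheme in \eqref{equ:sim_res_OK} can be sketched as follows. This is a simplified illustration with invented coefficient values and a single binary covariate standing in for $X_4$; in the actual study the fitted coefficients and all six covariates are used.

```python
import random

rng = random.Random(11)

def simulate_group(n, d, beta0, beta1, alpha4, x4_pool, res_by_x4, xi):
    """Simulated responses for a treatment group of size n (D = d),
    resampling X4 from its within-group empirical distribution and
    residuals stratified by the value of X4."""
    ys = []
    for _ in range(n):
        x4 = rng.choice(x4_pool[d])         # empirical draw within group d
        eps = rng.choice(res_by_x4[x4])     # residual stratified by X4
        ys.append(beta0 + beta1 * d + alpha4 * x4 + xi * (d == 1) + eps)
    return ys

# hypothetical inputs standing in for the fitted quantities
x4_pool = {0: [0, 0, 1, 1, 1], 1: [0, 1, 1, 1, 1]}
res_by_x4 = {0: [-0.3, -0.1, 0.2], 1: [-0.8, -0.2, 0.5, 1.1]}
y_sim = simulate_group(200, 1, beta0=0.1, beta1=0.3, alpha4=0.2,
                       x4_pool=x4_pool, res_by_x4=res_by_x4, xi=0.4)
```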
We apply each method to the simulated data. Table \ref{tab:sample_size_OK} summarizes the sample size needed for each method to reach a power of 0.9 at $\tau = 0.75$. The results show that the score test clearly outperforms the Wald test and the bootstrap method, with the latter two requiring trials with more subjects.
\begin{table}[H]
\centering
\caption{Point estimation and the 95\% confidence intervals (within the parentheses) of $\theta^e_D$ given by different approaches based on the original OK data, where $\theta^e_D$ is the ES regression coefficient associated with treatment variable $D$ and measures the treatment effect of the merit award on the upper tail CES.}
\label{tab:CI_OK}
\begin{tabular}{ccccc}
\hline
$\tau$ & \textit{S-NID} & \textit{W-NID} & \textit{BOOT} & \textit{COVES} \\
\hline
\multirow{2}{*}{0.70} & 0.298 & 0.375 & 0.375 & 0.215 \\
& (-0.086, 0.651) & (-0.001, 0.751) & (-0.327, 1.078) & (-0.076, 0.506) \\
\multirow{2}{*}{0.75} & 0.286 & 0.304 & 0.304 & 0.192 \\
& (-0.126, 0.678) & (-0.116, 0.724) & (-0.379, 0.987) & (-0.124, 0.507) \\
\multirow{2}{*}{0.80} & 0.278 & 0.204 & 0.204 & 0.207 \\
& (-0.190, 0.726) & (-0.229, 0.637) & (-0.469, 0.878) & (-0.151, 0.564) \\
\hline
\end{tabular}
\end{table}
\begin{table}[H]
\centering
\caption{Sample size needed for each treatment group in the simulated OK data to reach power 0.9 at $\tau = 0.75$.}
\label{tab:sample_size_OK}
\begin{tabular}{ccccccc}
\hline
$\xi$ & \textit{S-NID} & \textit{W-NID} & \textit{BOOT} & \textit{COVES} \\
\hline
0.2 & 436 & 484 & 494 & 402 \\
0.3 & 282 & 292 & 314 & 236 \\
0.4 & 210 & 215 & 218 & 171 \\
0.5 & 134 & 155 & 168 & 128 \\
0.6 & 107 & 120 & 129 & 100 \\
0.7 & 82 & 94 & 101 & 82 \\
\hline
\end{tabular}
\end{table}
\section{Conclusion}\label{sec:conclusion_ES}
In this paper, we considered the joint modeling of the conditional quantile and ES. A two-step estimation procedure was proposed to reduce the computational effort.
We showed that the resulting two-step estimator is asymptotically equivalent to the joint estimator while being numerically more efficient. In addition, the two-step estimator is locally robust to perturbations of the quantile estimation in the first step.
We further developed a score-type inference method for hypothesis testing and confidence interval construction. The proposed score method is robust in performance, especially for cases with a large number of confounding factors and heterogeneous errors.
We chose parametric linear models for the joint-regression framework because of their computational efficiency and interpretability. This framework can be further extended to more general models. \cite{wang2018semi}, \cite{taylor2019forecasting} and \cite{patton2019dynamic} consider dynamic models for ES with autoregressive features. To model the CES based on exogenous covariates, another feasible alternative is to employ nonparametric or semiparametric models, e.g., varying-coefficient models \citep{hastie1993varying} and generalized additive models \citep{hastie1990generalized}. The proposed two-step estimation procedure and inference methods can be adapted accordingly, but further theoretical and practical investigation is needed.
\section{Appendix}\label{sec:appendix_ES}
\subsection{Finite moment conditions} \label{subsec:moment_cond}
For some constant $c > 0$, define a neighborhood of $\boldsymbol{\theta}^q_0$ as $U_c(\boldsymbol{\theta}^q_0) = \{\boldsymbol{\theta}^q \in \boldsymbol{\Theta}^q: \|\boldsymbol{\theta}^q - \boldsymbol{\theta}^q_0\| \le c\}$. Similarly, we denote a neighborhood of $\boldsymbol{\theta}^e_0$ as $U_c(\boldsymbol{\theta}^e_0)$ and a neighborhood of $\boldsymbol{\theta}_0 = (\boldsymbol{\theta}^{q \prime}_0, \boldsymbol{\theta}^{e \prime}_0)^\prime$ as $U_c(\boldsymbol{\theta}_0)$.
\begin{enumerate} \label{cond:moment}
\item [($\mathcal{M}$-1)]
We assume the following moments are finite for a given constant $c > 0$: $E\big\{|G_1(Y)|\big\}$, $E\big\{|a(Y)|\big\}$, $E\big\{\|\mathbf{X}\|^r \sup_{\boldsymbol{\theta}^q \in U_c(\boldsymbol{\theta}^q_0)}|G_1^{(1)}(\mathbf{X}^\prime \boldsymbol{\theta}^q)|^r\big\}$ with $r = 1$ and 2, \\ $E\big\{\|\mathbf{X}\|^2 \sup_{\boldsymbol{\theta} \in U_c(\boldsymbol{\theta}_0)}|G_1^{(1)}(\mathbf{X}^\prime \boldsymbol{\theta}^q) G_2(\mathbf{X}^\prime \boldsymbol{\theta}^e)|\big\}$, \\
$E\big\{\|\mathbf{X}\|^r \sup_{\boldsymbol{\theta}^e \in U_c(\boldsymbol{\theta}^e_0)}|G_2^{(1)}(\mathbf{X}^\prime \boldsymbol{\theta}^e)|^r\big\}$ with $r = 1$ and 2,\\ $E\big\{\|\mathbf{X}\|^{2r} \sup_{\boldsymbol{\theta}^e \in U_c(\boldsymbol{\theta}^e_0)}|G_2^{(1)}(\mathbf{X}^\prime \boldsymbol{\theta}^e)|^r\big\}$ with $r = 1$ and 2, \\ $E\big\{\|\mathbf{X}\| \sup_{\boldsymbol{\theta}^e \in U_c(\boldsymbol{\theta}^e_0)}|G_2^{(1)}(\mathbf{X}^\prime \boldsymbol{\theta}^e)|E(|Y| \big | \mathbf{X}) \big\}$, \\ $E\big[\|\mathbf{X}\|^3 \sup_{\boldsymbol{\theta}^e \in U_c(\boldsymbol{\theta}^e_0)}\big\{G_2^{(1)}(\mathbf{X}^\prime \boldsymbol{\theta}^e)\big\}^2E(|Y| \big | \mathbf{X}) \big]$ and \\ $E\big[\|\mathbf{X}\|^2 \sup_{\boldsymbol{\theta}^e \in U_c(\boldsymbol{\theta}^e_0)}\big\{G_2^{(1)}(\mathbf{X}^\prime \boldsymbol{\theta}^e)\big\}^2E(Y^2 \big | \mathbf{X}) \big]$.
\end{enumerate}
\subsection{Proofs of Theorems}\label{subsec:proof_of_thm}
Let $\omega_\tau(Y, \mathbf{X}, \boldsymbol{\theta}^q, \boldsymbol{\theta}^e)$ be the derivative of $\rho_\tau(Y, \mathbf{X}, \boldsymbol{\theta}^q, \boldsymbol{\theta}^e)$ w.r.t. $\boldsymbol{\theta}^e$. That is,
\begin{align*}
\omega_\tau(Y, \mathbf{X}, \boldsymbol{\theta}^q, \boldsymbol{\theta}^e) &= \frac{\partial}{\partial (\boldsymbol{\theta}^e)^\prime} \rho_\tau(Y, \mathbf{X}, \boldsymbol{\theta}^q, \boldsymbol{\theta}^e) \\
&= \mathbf{X} G_2^{(1)}(\mathbf{X}^\prime \boldsymbol{\theta}^e) \left\{\mathbf{X}^\prime \boldsymbol{\theta}^e - \mathbf{X}^\prime \boldsymbol{\theta}^q + \frac{(\mathbf{X}^\prime \boldsymbol{\theta}^q - Y) I(Y \le \mathbf{X}^\prime \boldsymbol{\theta}^q)}{\tau}\right\}.
\end{align*}
For simplicity, denote $\omega_\tau(Y_i, \mathbf{X}_i, \boldsymbol{\theta}^q, \boldsymbol{\theta}^e)$ by $\omega_i(\boldsymbol{\theta}^q, \boldsymbol{\theta}^e)$ for subject $i$. To derive the asymptotic behavior of $\sqrt{n} (\hat{\boldsymbol{\theta}}^e - \boldsymbol{\theta}^e_0)$, we first present and prove three lemmas.
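As a sanity check, the derivative above can be verified numerically in the scalar case. Assuming the joint loss takes the Fissler--Ziegel form whose $\boldsymbol{\theta}^e$-dependent part is $G_2(e)\{e - q + (q - Y)I(Y \le q)/\tau\} - \mathcal{G}_2(e)$, with $\mathcal{G}_2(z) = -\log(-z)$ so that $G_2(z) = -1/z$ and $G_2^{(1)}(z) = 1/z^2$ on $z < 0$, a central finite difference recovers $\omega_\tau$. The numerical values below are illustrative only.

```python
import math

tau, q, y = 0.8, -1.0, -1.5     # illustrative scalar values (X = 1)

def h(e):
    """Bracketed term: e - q + (q - y) I(y <= q) / tau."""
    return e - q + (q - y) * (1.0 if y <= q else 0.0) / tau

def loss_e_part(e):
    """theta^e-dependent part of the joint loss (valid for e < 0)."""
    return (-1.0 / e) * h(e) + math.log(-e)

def omega(e):
    """Analytic derivative: G2'(e) * h(e) with G2'(z) = 1/z**2."""
    return h(e) / e**2

e = -2.0
d = 1e-6
num = (loss_e_part(e + d) - loss_e_part(e - d)) / (2 * d)
print(num, omega(e))     # both about -0.094
```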
\begin{lemma} \label{lemma:sep_asy_normality_lemma1}
Assume the conditions in Theorem \ref{thm:sep_asy_normality} hold. Then we have \begin{equation}
\label{equ:sep_asy_normality_lemma1}
\frac{1}{n} \sum_{i=1}^n \frac{\partial}{\partial (\boldsymbol{\theta}^e)^\prime} \omega_i(\hat{\boldsymbol{\theta}}^q, \boldsymbol{\theta}^e) \big|_{\boldsymbol{\theta}^e = \boldsymbol{\theta}^e_0} \overset{P}{\to} E\big\{\mathbf{X} \mathbf{X}^\prime G_2^{(1)}(\mathbf{X}^\prime \boldsymbol{\theta}^e_0)\big\}.
\end{equation}
\end{lemma}
\begin{proof}
By the law of large numbers (LLN), we have \begin{equation*}
\frac{1}{n} \sum_{i=1}^n \frac{\partial}{\partial (\boldsymbol{\theta}^e)^\prime} \omega_i(\hat{\boldsymbol{\theta}}^q, \boldsymbol{\theta}^e) \big|_{\boldsymbol{\theta}^e = \boldsymbol{\theta}^e_0} \overset{a.s.}{\to} E\left\{\frac{\partial}{\partial (\boldsymbol{\theta}^e)^\prime} \omega(Y, \mathbf{X}, \hat{\boldsymbol{\theta}}^q, \boldsymbol{\theta}^e) \big|_{\boldsymbol{\theta}^e = \boldsymbol{\theta}^e_0}\right\},
\end{equation*}
where \begin{align*}
&\frac{\partial}{\partial (\boldsymbol{\theta}^e)^\prime} \omega(Y, \mathbf{X}, \hat{\boldsymbol{\theta}}^q, \boldsymbol{\theta}^e) \big|_{\boldsymbol{\theta}^e = \boldsymbol{\theta}^e_0} \\
&= \mathbf{X} \mathbf{X}^\prime \left[G_2^{(1)}(\mathbf{X}^\prime \boldsymbol{\theta}^e_0) + G_2^{(2)}(\mathbf{X}^\prime \boldsymbol{\theta}^e_0)\left\{\mathbf{X}^\prime \boldsymbol{\theta}^e_0 - \mathbf{X}^\prime \hat{\boldsymbol{\theta}}^q + \frac{(\mathbf{X}^\prime \hat{\boldsymbol{\theta}}^q - Y) I(Y \le \mathbf{X}^\prime \hat{\boldsymbol{\theta}}^q)}{\tau}\right\}\right].
\end{align*}
It then suffices to show that the expectation of the second term in the above equation is $o_p(1)$. Notice that
\begin{align*}
& E\left[\mathbf{X} \mathbf{X}^\prime G_2^{(2)}(\mathbf{X}^\prime \boldsymbol{\theta}^e_0)\left\{\mathbf{X}^\prime \boldsymbol{\theta}^e_0 - \mathbf{X}^\prime \hat{\boldsymbol{\theta}}^q + \frac{(\mathbf{X}^\prime \hat{\boldsymbol{\theta}}^q - Y) I(Y \le \mathbf{X}^\prime \hat{\boldsymbol{\theta}}^q)}{\tau}\right\}\right] \\
&= E\left[\mathbf{X} \mathbf{X}^\prime G_2^{(2)}(\mathbf{X}^\prime \boldsymbol{\theta}^e_0) \cdot E\left\{\mathbf{X}^\prime \boldsymbol{\theta}^e_0 - \mathbf{X}^\prime \hat{\boldsymbol{\theta}}^q + \frac{(\mathbf{X}^\prime \hat{\boldsymbol{\theta}}^q - Y) I(Y \le \mathbf{X}^\prime \hat{\boldsymbol{\theta}}^q)}{\tau} \big| \mathbf{X}\right\}\right],
\end{align*}
where \begin{align*}
& E\big\{\mathbf{X}^\prime \boldsymbol{\theta}^e_0 - \mathbf{X}^\prime \hat{\boldsymbol{\theta}}^q + \frac{(\mathbf{X}^\prime \hat{\boldsymbol{\theta}}^q - Y) I(Y \le \mathbf{X}^\prime \hat{\boldsymbol{\theta}}^q)}{\tau} \big| \mathbf{X}\big\} \\
&= E\left[\mathbf{X}^\prime (\boldsymbol{\theta}^q_0 - \hat{\boldsymbol{\theta}}^q) + \frac{1}{\tau} \big\{\mathbf{X}^\prime(\hat{\boldsymbol{\theta}}^q - \boldsymbol{\theta}^q_0)I(Y \le \mathbf{X}^\prime \hat{\boldsymbol{\theta}}^q) + (\mathbf{X}^\prime \boldsymbol{\theta}^q_0 - Y) (I(Y \le \mathbf{X}^\prime \hat{\boldsymbol{\theta}}^q) - I(Y \le \mathbf{X}^\prime \boldsymbol{\theta}^q_0)) \big\} \big| \mathbf{X} \right].
\end{align*}
For the above equation, it is easy to verify that the terms involving $\mathbf{X}^\prime (\boldsymbol{\theta}^q_0 - \hat{\boldsymbol{\theta}}^q)$ are $o_p(1)$ due to the consistency of $\hat{\boldsymbol{\theta}}^q$. Moreover, by a Taylor expansion we have
\begin{align*}
E\big\{I(Y \le \mathbf{X}^\prime \hat{\boldsymbol{\theta}}^q) - I(Y \le \mathbf{X}^\prime \boldsymbol{\theta}^q_0) \big| \mathbf{X} \big\} &= F_{Y | \mathbf{X}} (\mathbf{X}^\prime \hat{\boldsymbol{\theta}}^q) - F_{Y | \mathbf{X}} (\mathbf{X}^\prime \boldsymbol{\theta}^q_0) \\
&= f_{Y | \mathbf{X}} (\mathbf{X}^\prime \boldsymbol{\theta}^q_0) \mathbf{X}^\prime(\hat{\boldsymbol{\theta}}^q - \boldsymbol{\theta}^q_0) + o\big\{\mathbf{X}^\prime(\hat{\boldsymbol{\theta}}^q - \boldsymbol{\theta}^q_0)\big\}\\
&= o_p(1).
\end{align*}
Therefore, given that the conditional distribution of $Y$ given $\mathbf{X}$ has a finite second moment, the Cauchy--Schwarz inequality yields $E\big\{Y\big(I(Y \le \mathbf{X}^\prime \hat{\boldsymbol{\theta}}^q) - I(Y \le \mathbf{X}^\prime \boldsymbol{\theta}^q_0)\big) \big| \mathbf{X} \big\} = o_p(1)$ for the last term.
\end{proof}
\begin{lemma} \label{lemma:sep_asy_normality_lemma2}
Assume the conditions in Theorem \ref{thm:sep_asy_normality} hold. Then we have \begin{equation}
\label{equ:sep_asy_normality_lemma2}
\underset{\hat{\boldsymbol{\theta}}^q: ||\hat{\boldsymbol{\theta}}^q - \boldsymbol{\theta}_0^q|| \le c \cdot n^{-1/2}}{\sup}\left \| \frac{1}{\sqrt{n}} \sum_{i=1}^{n} \left[ \big\{\omega_i(\hat{\boldsymbol{\theta}}^q, \boldsymbol{\theta}_0^e) - \omega_i(\boldsymbol{\theta}_0^q, \boldsymbol{\theta}_0^e)\big\} - \big\{\lambda_i(\hat{\boldsymbol{\theta}}^q) - \lambda_i(\boldsymbol{\theta}_0^q)\big\} \right] \right \|
\overset{P}{\to} 0,
\end{equation}
where $c$ is a positive constant and
\begin{equation*}
\lambda_i(\boldsymbol{\theta}^q) = E\big\{\omega(Y_i, \mathbf{X}_i, \boldsymbol{\theta}^q, \boldsymbol{\theta}_0^e) | \mathbf{X}_i\big\}.
\end{equation*}
\end{lemma}
\begin{proof}
According to the Remark on page 410 of \cite{doukhan1995invariance}, we can obtain the stochastic equicontinuity of $n^{-1/2} \sum_{i=1}^n \big\{\omega_i(\boldsymbol{\theta}^q, \boldsymbol{\theta}^e_0) - \lambda_i(\boldsymbol{\theta}^q)\big\}$. That is, for any $\epsilon > 0$, there exists a $\delta_\epsilon > 0$ such that
\begin{align*}
\underset{n \to \infty}{\limsup} P\Bigg(\underset{\hat{\boldsymbol{\theta}}^q: ||\hat{\boldsymbol{\theta}}^q - \boldsymbol{\theta}_0^q|| \le \delta_\epsilon}{\sup} \Bigg\|\frac{1}{\sqrt{n}} \sum_{i=1}^n \big[ \big\{\omega_i&(\hat{\boldsymbol{\theta}}^q, \boldsymbol{\theta}_0^e) - \lambda_i(\hat{\boldsymbol{\theta}}^q)\big\} - \big\{\omega_i(\boldsymbol{\theta}_0^q, \boldsymbol{\theta}_0^e) - \lambda_i(\boldsymbol{\theta}_0^q)\big\} \big] \Bigg\| > \epsilon \Bigg) < \epsilon,
\end{align*}
which implies the desired result.
\end{proof}
\begin{lemma} \label{lemma:sep_asy_normality_lemma3}
Under the conditions in Theorem \ref{thm:sep_asy_normality}, we have \begin{equation}
\label{equ:sep_asy_normality_lemma3}
\frac{1}{\sqrt{n}}\sum_{i=1}^{n} \big\{\lambda_i(\hat{\boldsymbol{\theta}}^q) - \lambda_i(\boldsymbol{\theta}_0^q)\big\} = \frac{1}{\sqrt{n}}\sum_{i=1}^{n}\big[E\big\{\omega(Y_i, \mathbf{X}_i, \hat{\boldsymbol{\theta}}^q, \boldsymbol{\theta}_0^e) | \mathbf{X}_i\big\} - E\big\{\omega(Y_i, \mathbf{X}_i, \boldsymbol{\theta}^q_0, \boldsymbol{\theta}_0^e) | \mathbf{X}_i\big\}\big] \overset{P}{\to} 0.
\end{equation}
\end{lemma}
\begin{proof}
Recall that \begin{align*}
\lambda_i(\boldsymbol{\theta}^q) &= E\big\{\omega(Y_i, \mathbf{X}_i, \boldsymbol{\theta}^q, \boldsymbol{\theta}^e_0 )\big | \mathbf{X}_i\big\} \\
&= \mathbf{X}_i G_2^{(1)}(\mathbf{X}^\prime_i \boldsymbol{\theta}^e_0) \left[\mathbf{X}^\prime_i \boldsymbol{\theta}^e_0 - \mathbf{X}^\prime_i \boldsymbol{\theta}^q + \tau^{-1}E\big\{ (\mathbf{X}^\prime_i\boldsymbol{\theta}^q - Y_i)I(Y_i \le \mathbf{X}^\prime_i\boldsymbol{\theta}^q)\big| \mathbf{X}_i\big\}\right] \\
&= \mathbf{X}_i G_2^{(1)}(\mathbf{X}^\prime_i \boldsymbol{\theta}^e_0) \left[\mathbf{X}^\prime_i \boldsymbol{\theta}^e_0 - \mathbf{X}^\prime_i \boldsymbol{\theta}^q + \tau^{-1} \mathbf{X}^\prime_i\boldsymbol{\theta}^q F_{Y_i|\mathbf{X}_i}(\mathbf{X}^\prime_i\boldsymbol{\theta}^q) - \tau^{-1}E\big\{Y_iI(Y_i \le \mathbf{X}^\prime_i\boldsymbol{\theta}^q)\big| \mathbf{X}_i\big\}\right].
\end{align*}
Therefore we have
\begin{align}
\label{equ:lemma3_lambdadiff_ES}
\lambda_i(\hat{\boldsymbol{\theta}}^q) - \lambda_i(\boldsymbol{\theta}^q_0) = \mathbf{X}_i G_2^{(1)}(\mathbf{X}^\prime_i \boldsymbol{\theta}^e_0) \left\{ \mathbf{X}_i^\prime(\boldsymbol{\theta}^q_0 - \hat{\boldsymbol{\theta}}^q) + \tau^{-1} k_i(\hat{\boldsymbol{\theta}}^q, \boldsymbol{\theta}_0) + \tau^{-1} l_i(\hat{\boldsymbol{\theta}}^q, \boldsymbol{\theta}_0)\right\},
\end{align}
where \begin{align*}
k_i(\hat{\boldsymbol{\theta}}^q, \boldsymbol{\theta}_0) &= \mathbf{X}^\prime_i \hat{\boldsymbol{\theta}}^q F_{Y_i|\mathbf{X}_i}(\mathbf{X}^\prime_i\hat{\boldsymbol{\theta}}^q) - \mathbf{X}^\prime_i\boldsymbol{\theta}^q_0 F_{Y_i|\mathbf{X}_i}(\mathbf{X}^\prime_i\boldsymbol{\theta}^q_0),\\
l_i(\hat{\boldsymbol{\theta}}^q, \boldsymbol{\theta}_0) &= E\big\{Y_iI(Y_i \le \mathbf{X}^\prime_i\boldsymbol{\theta}^q_0)\big| \mathbf{X}_i\big\} - E\big\{Y_iI(Y_i \le \mathbf{X}^\prime_i\hat{\boldsymbol{\theta}}^q)\big| \mathbf{X}_i\big\}.
\end{align*}
By Taylor expansion we have
\begin{align*}
k_i(\hat{\boldsymbol{\theta}}^q, \boldsymbol{\theta}_0) &= \mathbf{X}^\prime_i \hat{\boldsymbol{\theta}}^q\big\{F_{Y_i|\mathbf{X}_i}(\mathbf{X}^\prime_i\hat{\boldsymbol{\theta}}^q) - F_{Y_i|\mathbf{X}_i}(\mathbf{X}^\prime_i\boldsymbol{\theta}^q_0)\big\} + F_{Y_i|\mathbf{X}_i}(\mathbf{X}^\prime_i\boldsymbol{\theta}^q_0) \mathbf{X}^\prime_i (\hat{\boldsymbol{\theta}}^q - \boldsymbol{\theta}^q_0)\\
&= \mathbf{X}^\prime_i \hat{\boldsymbol{\theta}}^q f_{Y_i | \mathbf{X}_i}(\mathbf{X}^\prime_i\boldsymbol{\theta}^q_0) \mathbf{X}^\prime_i(\hat{\boldsymbol{\theta}}^q - \boldsymbol{\theta}^q_0) + \tau \mathbf{X}^\prime_i (\hat{\boldsymbol{\theta}}^q - \boldsymbol{\theta}^q_0) + O\big(\|\hat{\boldsymbol{\theta}}^q - \boldsymbol{\theta}^q_0\|^2\big),
\end{align*}
where we used $F_{Y_i|\mathbf{X}_i}(\mathbf{X}^\prime_i\boldsymbol{\theta}^q_0) = \tau$ in the second step, and where the first term can be written as
\begin{align*}
& \mathbf{X}^\prime_i \hat{\boldsymbol{\theta}}^q f_{Y_i | \mathbf{X}_i}(\mathbf{X}^\prime_i\boldsymbol{\theta}^q_0) \mathbf{X}^\prime_i(\hat{\boldsymbol{\theta}}^q - \boldsymbol{\theta}^q_0)\\
&= \mathbf{X}^\prime_i \boldsymbol{\theta}^q_0 f_{Y_i | \mathbf{X}_i}(\mathbf{X}^\prime_i\boldsymbol{\theta}^q_0) \mathbf{X}^\prime_i(\hat{\boldsymbol{\theta}}^q - \boldsymbol{\theta}^q_0) + \big\{\mathbf{X}^\prime_i(\hat{\boldsymbol{\theta}}^q - \boldsymbol{\theta}^q_0)\big\}^2 f_{Y_i | \mathbf{X}_i}(\mathbf{X}^\prime_i\boldsymbol{\theta}^q_0) \\
&= \mathbf{X}^\prime_i \boldsymbol{\theta}^q_0 f_{Y_i | \mathbf{X}_i}(\mathbf{X}^\prime_i\boldsymbol{\theta}^q_0) \mathbf{X}^\prime_i(\hat{\boldsymbol{\theta}}^q - \boldsymbol{\theta}^q_0) + O\big(\|\hat{\boldsymbol{\theta}}^q - \boldsymbol{\theta}^q_0\|^2\big).
\end{align*}
Therefore,
\begin{align}
\label{equ:lemma3_lambdadiff_term1_ES}
k_i(\hat{\boldsymbol{\theta}}^q, \boldsymbol{\theta}_0) = \mathbf{X}^\prime_i \boldsymbol{\theta}^q_0 f_{Y_i | \mathbf{X}_i}(\mathbf{X}^\prime_i\boldsymbol{\theta}^q_0) \mathbf{X}^\prime_i(\hat{\boldsymbol{\theta}}^q - \boldsymbol{\theta}^q_0) + \tau \mathbf{X}^\prime_i (\hat{\boldsymbol{\theta}}^q - \boldsymbol{\theta}^q_0) + O\big(\|\hat{\boldsymbol{\theta}}^q - \boldsymbol{\theta}^q_0\|^2\big).
\end{align}
For the second term $l_i(\hat{\boldsymbol{\theta}}^q, \boldsymbol{\theta}_0)$, notice that $E\big\{Y_iI(Y_i \le \mathbf{X}_i^\prime\boldsymbol{\theta}^q)\big| \mathbf{X}_i\big\}$ is continuously differentiable for all $\boldsymbol{\theta}^q$ in some neighborhood $U(\boldsymbol{\theta}_0^q)$ of $\boldsymbol{\theta}_0^q$, since we assume that $F_{Y_i|\mathbf{X}_i}$ has a density which is strictly positive, continuous and bounded on this neighborhood. Hence, for every $\boldsymbol{\theta}^q \in U(\boldsymbol{\theta}_0^q)$ we can choose $\boldsymbol{\theta}_1^q \in U(\boldsymbol{\theta}_0^q)$ such that $\mathbf{X}_i^\prime \boldsymbol{\theta}_1^q \le \mathbf{X}_i^\prime \boldsymbol{\theta}^q$; then
\begin{align}
\label{equ:quantile_partial_detivative_ES}
\frac{\partial}{\partial (\boldsymbol{\theta}^q)^\prime} &E\{Y_i I(Y_i \le \mathbf{X}_i^\prime \boldsymbol{\theta}^q) | \mathbf{X}_i\} \nonumber\\
&= \frac{\partial}{\partial (\boldsymbol{\theta}^q)^\prime} E\{Y_i I(Y_i \le \mathbf{X}_i^\prime \boldsymbol{\theta}_1^q) | \mathbf{X}_i\} + \frac{\partial}{\partial (\boldsymbol{\theta}^q)^\prime} E\{Y_i I(\mathbf{X}_i^\prime \boldsymbol{\theta}_1^q < Y_i \le \mathbf{X}_i^\prime \boldsymbol{\theta}^q) | \mathbf{X}_i\} \nonumber\\
&= \frac{\partial}{\partial (\boldsymbol{\theta}^q)^\prime} \int_{-\infty}^{\mathbf{X}_i^\prime \boldsymbol{\theta}_1^q} u \ dF_{Y_i|\mathbf{X}_i}(u) + \frac{\partial}{\partial (\boldsymbol{\theta}^q)^\prime} \int_{\mathbf{X}_i^\prime \boldsymbol{\theta}_1^q}^{\mathbf{X}_i^\prime \boldsymbol{\theta}^q} u \ dF_{Y_i|\mathbf{X}_i}(u) \nonumber\\
&= \mathbf{X}_i^\prime(\mathbf{X}_i^\prime \boldsymbol{\theta}^q) f_{Y_i|\mathbf{X}_i}(\mathbf{X}_i^\prime \boldsymbol{\theta}^q).
\end{align}
It then follows that \begin{align}
\label{equ:lemma3_lambdadiff_term2_ES}
l_i(\hat{\boldsymbol{\theta}}^q, \boldsymbol{\theta}_0) = \mathbf{X}_i^\prime \boldsymbol{\theta}^q_0 f_{Y_i | \mathbf{X}_i}(\mathbf{X}_i^\prime\boldsymbol{\theta}^q_0) \mathbf{X}_i^\prime(\boldsymbol{\theta}^q_0 - \hat{\boldsymbol{\theta}}^q) + O\big(\|\hat{\boldsymbol{\theta}}^q - \boldsymbol{\theta}^q_0\|^2\big).
\end{align}
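As a quick numerical sanity check of the partial-derivative identity above (a sketch, not part of the proof): with $\mathbf{X}_i \equiv 1$ and $Y \sim N(0,1)$ we have $E\{Y I(Y \le t)\} = -\varphi(t)$, whose derivative is $t\,\varphi(t)$, matching $t\, f_Y(t)$.

```python
import math

def phi(t):
    # standard normal density
    return math.exp(-t * t / 2.0) / math.sqrt(2.0 * math.pi)

def truncated_first_moment(t):
    # E[Y 1{Y <= t}] for Y ~ N(0, 1); closed form is -phi(t)
    return -phi(t)

def central_diff(f, t, h=1e-5):
    return (f(t + h) - f(t - h)) / (2.0 * h)

# the identity: d/dt E[Y 1{Y <= t}] = t * f_Y(t)
t0 = 0.7
assert abs(central_diff(truncated_first_moment, t0) - t0 * phi(t0)) < 1e-8
```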
Substituting $k_i(\hat{\boldsymbol{\theta}}^q, \boldsymbol{\theta}_0)$ and $l_i(\hat{\boldsymbol{\theta}}^q, \boldsymbol{\theta}_0)$ in \eqref{equ:lemma3_lambdadiff_ES} by \eqref{equ:lemma3_lambdadiff_term1_ES} and \eqref{equ:lemma3_lambdadiff_term2_ES}, we have
\begin{equation*}
\lambda_i(\hat{\boldsymbol{\theta}}^q) - \lambda_i(\boldsymbol{\theta}^q_0) = O\big(\|\hat{\boldsymbol{\theta}}^q - \boldsymbol{\theta}^q_0\|^2\big).
\end{equation*}
Together with the condition $\|\hat{\boldsymbol{\theta}}^q - \boldsymbol{\theta}^q_0\|^2 = O_p(n^{-1}) = o_p(n^{-1/2})$, we can obtain that \begin{align*}
\frac{1}{\sqrt{n}} \sum_{i=1}^n \big\{\lambda_i(\hat{\boldsymbol{\theta}}^q) - \lambda_i(\boldsymbol{\theta}^q_0)\big\} = o_p(1).
\end{align*}
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:sep_asy_normality}]
Applying the Taylor expansion, we have \begin{align}
\mathbf{0} &= \frac{1}{n} \sum_{i=1}^n \omega_i(\hat{\boldsymbol{\theta}}^q, \hat{\boldsymbol{\theta}}^e) \nonumber \\
&= \frac{1}{n} \sum_{i=1}^n \omega_i(\hat{\boldsymbol{\theta}}^q, \boldsymbol{\theta}^e_0) + \frac{1}{n} \sum_{i=1}^n \frac{\partial}{\partial (\boldsymbol{\theta}^e)^\prime} \omega_i(\hat{\boldsymbol{\theta}}^q, \boldsymbol{\theta}^e) \big|_{\boldsymbol{\theta}^e = \boldsymbol{\theta}^e_0} \cdot (\boldsymbol{\theta}^e_0 - \hat{\boldsymbol{\theta}}^e) + R_n, \nonumber
\end{align}
where $\boldsymbol{\theta}^e_0$ is the true parameter vector and $R_n$ is a remainder term such that $\sqrt{n} R_n \to \mathbf{0}$ as $n \to \infty$. It then follows that \begin{align*}
\sqrt{n} (\hat{\boldsymbol{\theta}}^e - \boldsymbol{\theta}^e_0) = &\left[\frac{1}{n} \sum_{i=1}^n \frac{\partial}{\partial (\boldsymbol{\theta}^e)^\prime} \omega_i(\hat{\boldsymbol{\theta}}^q, \boldsymbol{\theta}^e) \big|_{\boldsymbol{\theta}^e = \boldsymbol{\theta}^e_0}\right]^{-1} \\
&\times \left[\frac{1}{\sqrt{n}} \sum_{i=1}^n \omega_i(\boldsymbol{\theta}^q_0, \boldsymbol{\theta}^e_0) + \frac{1}{\sqrt{n}} \sum_{i=1}^n \big\{\omega_i(\hat{\boldsymbol{\theta}}^q, \boldsymbol{\theta}^e_0) - \omega_i(\boldsymbol{\theta}^q_0, \boldsymbol{\theta}^e_0) \big\} + \sqrt{n} R_n\right].
\end{align*}
Combining this with Lemmas 1--3, the result of Theorem \ref{thm:sep_asy_normality} follows from
\begin{equation*}
\sqrt{n}(\hat{\boldsymbol{\theta}}^e - \boldsymbol{\theta}^e_0) \overset{d}{\simeq} \left[E\big\{\mathbf{X} \mathbf{X}^\prime G_2^{(1)}(\mathbf{X}^\prime \boldsymbol{\theta}^e_0)\big\}\right]^{-1} \left\{\frac{1}{\sqrt{n}} \sum_{i=1}^n \omega_i(\boldsymbol{\theta}^q_0, \boldsymbol{\theta}^e_0)\right\} \overset{d}{\to} N(\mathbf{0}, \Lambda^{-1} \Omega \Lambda^{-1}).
\end{equation*}
\end{proof}
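The limiting covariance above has the usual sandwich form $\Lambda^{-1} \Omega \Lambda^{-1}$. A minimal sketch of assembling such an estimate from hypothetical $\hat{\Lambda}$ and $\hat{\Omega}$ (the $2\times 2$ matrices below are placeholders, not estimates from data):

```python
# hypothetical 2x2 estimates (placeholders, purely for illustration)
Lambda_hat = [[2.0, 0.3], [0.3, 1.5]]
Omega_hat = [[1.0, 0.2], [0.2, 0.8]]

def inv2(m):
    # inverse of a 2x2 matrix
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def mul2(a, b):
    # product of two 2x2 matrices
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

L_inv = inv2(Lambda_hat)
V_hat = mul2(mul2(L_inv, Omega_hat), L_inv)  # sandwich covariance

# the sandwich is symmetric with positive variances on the diagonal
assert abs(V_hat[0][1] - V_hat[1][0]) < 1e-12
assert V_hat[0][0] > 0 and V_hat[1][1] > 0
```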
\begin{proof}[Proof of Theorem \ref{thm:joint_var_consistency}]
Based on a Taylor expansion of $\hat{\Lambda}$, the asymptotic normality of $\hat{\boldsymbol{\theta}}^e$, and Slutsky's theorem, we have
\begin{align*}
\hat{\Lambda} &= n^{-1} \sum_{i=1}^n (\mathbf{X}_i \mathbf{X}_i^\prime) \cdot \left[G_2^{(1)} (\mathbf{X}_i^\prime \boldsymbol{\theta}^e_0) + \big\{G_2^{(1)} (\mathbf{X}_i^\prime \hat{\boldsymbol{\theta}}^e) - G_2^{(1)} (\mathbf{X}_i^\prime \boldsymbol{\theta}^e_0)\big\}\right] \\
&= n^{-1} \sum_{i=1}^n (\mathbf{X}_i \mathbf{X}_i^\prime) \cdot G_2^{(1)} (\mathbf{X}_i^\prime \boldsymbol{\theta}^e_0) + n^{-1} \sum_{i=1}^n (\mathbf{X}_i \mathbf{X}_i^\prime) \left\{G_2^{(2)} (\mathbf{X}_i^\prime \boldsymbol{\theta}^e_0) \mathbf{X}_i^\prime (\hat{\boldsymbol{\theta}}^e - \boldsymbol{\theta}^e_0) + O\big(\|\hat{\boldsymbol{\theta}}^e - \boldsymbol{\theta}^e_0\|^2\big)\right\}\\
& \overset{P}{\to} \Lambda.
\end{align*}
Similar arguments yield the result for $\hat{\Omega}$.
\end{proof}
The proof of Theorem \ref{thm:score_stat_null} requires the following Lemmas \ref{lemma:score_stat_null_lemma1} and \ref{lemma:score_stat_null_lemma2}.
\begin{lemma}\label{lemma:score_stat_null_lemma1}
Let $\boldsymbol{\theta}^q_0$ and $\boldsymbol{\theta}^e_{10}$ be the true parameters under $H_0$. Under the assumptions of Theorem \ref{thm:score_stat_null},
we have \begin{align*}
\sup_{\|(\boldsymbol{\theta}^{q \prime}, \boldsymbol{\theta}_1^{e \prime})^\prime - (\boldsymbol{\theta}^{q \prime}_0, \boldsymbol{\theta}_{10}^{e \prime})^\prime \| \le c \cdot n^{-1/2}} \left\| S_n(\boldsymbol{\theta}^q, \boldsymbol{\theta}_1^e) - S_n(\boldsymbol{\theta}^q_0, \boldsymbol{\theta}_{10}^e) - E\big\{S_n(\boldsymbol{\theta}^q, \boldsymbol{\theta}_{1}^e) - S_n(\boldsymbol{\theta}^q_0, \boldsymbol{\theta}_{10}^e) \big| \mathbf{X}_i\big\} \right\| = o_p(1).
\end{align*}
\end{lemma}
\begin{proof}
Applying arguments similar to those in the proof of Lemma \ref{lemma:sep_asy_normality_lemma2}, the result follows from the stochastic equicontinuity of $S_n(\boldsymbol{\theta}^q, \boldsymbol{\theta}_1^e) - E\big\{S_n(\boldsymbol{\theta}^q, \boldsymbol{\theta}_1^e) \big| \mathbf{X}_i\big\}$.
\end{proof}
\begin{lemma}\label{lemma:score_stat_null_lemma2}
With the assumptions in Theorem \ref{thm:score_stat_null}, under $H_0$, we have
\begin{align*}
E\big\{S_n(\hat{\boldsymbol{\theta}}^q, \hat{\boldsymbol{\theta}}_1^e) - S_n(\boldsymbol{\theta}^q_0, \boldsymbol{\theta}_{10}^e) \big| \mathbf{X}_i\big\} = o_p(1).
\end{align*}
\end{lemma}
\begin{proof}
The difference can be written as \begin{align*}
&E\big\{S_n(\hat{\boldsymbol{\theta}}^q, \hat{\boldsymbol{\theta}}_1^e) - S_n(\boldsymbol{\theta}^q_0, \boldsymbol{\theta}_{10}^e) \big| \mathbf{X}_i\big\} \\
&= \frac{1}{\sqrt{n}} \sum_{i=1}^n \mathbf{Z}_i^* G_2^{(1)}(\mathbf{X}_i^\prime \boldsymbol{\theta}_{0}^e) \left\{\mathbf{W}_i^\prime (\hat{\boldsymbol{\theta}}^e_1 - \boldsymbol{\theta}^e_{10}) - \mathbf{X}_i^\prime(\hat{\boldsymbol{\theta}}^q - \boldsymbol{\theta}^q_0) + \tau^{-1}k_i(\hat{\boldsymbol{\theta}}^q, \boldsymbol{\theta}_0) + \tau^{-1} l_i(\hat{\boldsymbol{\theta}}^q, \boldsymbol{\theta}_0)\right\},
\end{align*}
where the functions $k_i$ and $l_i$ are as defined below \eqref{equ:lemma3_lambdadiff_ES}. Based on \eqref{equ:lemma3_lambdadiff_term1_ES} and \eqref{equ:lemma3_lambdadiff_term2_ES}, the difference can be simplified to \begin{align*}
&E\big\{S_n(\hat{\boldsymbol{\theta}}^q, \hat{\boldsymbol{\theta}}_1^e) - S_n(\boldsymbol{\theta}^q_0, \boldsymbol{\theta}_{10}^e) \big| \mathbf{X}_i\big\} \\
&= \frac{1}{\sqrt{n}} \sum_{i=1}^n \mathbf{Z}_i^* G_2^{(1)}(\mathbf{X}_i^\prime \boldsymbol{\theta}_{0}^e) \mathbf{W}_i^\prime (\hat{\boldsymbol{\theta}}^e_1 - \boldsymbol{\theta}^e_{10}) + o_p(1)\\
&= \left\{\frac{1}{n} \sum_{i=1}^n \mathbf{Z}_i^* G_2^{(1)}(\mathbf{X}_i^\prime \boldsymbol{\theta}_{0}^e) \mathbf{W}_i^\prime\right\} \sqrt{n}(\hat{\boldsymbol{\theta}}^e_1 - \boldsymbol{\theta}^e_{10}) + o_p(1) \\
&= \left[\Pi_Z^\prime G \Pi_W - \Pi_Z^\prime G \Pi_W (\Pi_W^\prime G \Pi_W)^{-1} \Pi_W^\prime G \Pi_W \right] \sqrt{n}(\hat{\boldsymbol{\theta}}^e_1 - \boldsymbol{\theta}^e_{10}) + o_p(1) = o_p(1).
\end{align*}
The last equality holds due to the orthogonal transformation given in \eqref{equ:ortho_score}.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:score_stat_null}]
For any $(\boldsymbol{\theta}^{q \prime}, \boldsymbol{\theta}^{e \prime}_1)^\prime$ such that $\|(\boldsymbol{\theta}^{q \prime}, \boldsymbol{\theta}^{e \prime}_1)^\prime - (\boldsymbol{\theta}^{q \prime}_0, \boldsymbol{\theta}^{e \prime}_{10})^\prime\| \le c \cdot n^{-1/2}$, define \begin{align*}
S_n(\boldsymbol{\theta}^q, \boldsymbol{\theta}^e_1) = \frac{1}{\sqrt{n}} \sum_{i=1}^n \mathbf{Z}^*_i G_2^{(1)}(\mathbf{X}^\prime_i \boldsymbol{\theta}^e_{0}) \left\{\mathbf{W}^\prime_i \boldsymbol{\theta}^e_1 - \mathbf{X}^\prime_i \boldsymbol{\theta}^q + \tau^{-1} (\mathbf{X}^\prime_i \boldsymbol{\theta}^q - Y_i)I(Y_i \le \mathbf{X}^\prime_i \boldsymbol{\theta}^q)\right\}.
\end{align*}
Under $H_0: \boldsymbol{\theta}^e_2 = \mathbf{0}$, $\|\mathbf{Z}^*_i G_2^{(1)}(\mathbf{X}^\prime_i \boldsymbol{\theta}^e_{0}) - \hat{\mathbf{Z}}^*_i G_2^{(1)}(\mathbf{X}^\prime_i \hat{\boldsymbol{\theta}}^e)\| = O_p(\|\hat{\boldsymbol{\theta}}^e - \boldsymbol{\theta}^e_{0}\|) = O_p(n^{-1/2})$. Therefore, if we have the asymptotic normality of $S_n(\hat{\boldsymbol{\theta}}^q, \hat{\boldsymbol{\theta}}^e_1)$, then $$
S_n - S_n(\hat{\boldsymbol{\theta}}^q, \hat{\boldsymbol{\theta}}^e_1) = O_p(n^{-1/2}) O_p(1) = o_p(1),
$$ which implies $S_n \overset{d}{\simeq} S_n(\hat{\boldsymbol{\theta}}^q, \hat{\boldsymbol{\theta}}^e_1)$.
According to Lemmas \ref{lemma:score_stat_null_lemma1} and \ref{lemma:score_stat_null_lemma2}, we can obtain that $S_n \overset{d}{\simeq} S_n(\boldsymbol{\theta}_0^q, \boldsymbol{\theta}_{10}^e)$, where \begin{align*}
S_n(\boldsymbol{\theta}_0^q, \boldsymbol{\theta}_{10}^e) &= \frac{1}{\sqrt{n}} \sum_{i=1}^{n} \mathbf{Z}_i^* G_2^{(1)}(\mathbf{X}_i^\prime \boldsymbol{\theta}_{0}^e) \big\{\mathbf{W}_i^\prime\boldsymbol{\theta}_{10}^e - \mathbf{X}_i^\prime \boldsymbol{\theta}_0^q + \tau^{-1} (\mathbf{X}_i^\prime \boldsymbol{\theta}_0^q - Y_i) I(Y_i \le \mathbf{X}_i^\prime \boldsymbol{\theta}_0^q)\big\} \\
&\sim AN(\mathbf{0}, \Sigma),
\end{align*}
and
\begin{align*}
\Sigma &= \frac{1}{n} \sum_{i=1}^n \left[\mathbf{Z}_i^* \mathbf{Z}_i^{*\prime} \big\{G_2^{(1)}(\mathbf{W}_i^\prime \boldsymbol{\theta}_{10}^e)\big\}^2 \frac{1}{\tau^2} \text{Var}\big\{\epsilon_i^q I(\epsilon_i^q \le 0) | \mathbf{X}_{i} \big\}\right],
\end{align*}
where $\epsilon_i^q = Y_i - \mathbf{X}_i^\prime \boldsymbol{\theta}_0^q$ are the quantile residuals. Similar to the arguments in the proof of Theorem \ref{thm:joint_var_consistency}, we have $\hat{\Sigma}_n(\hat{\psi}) \overset{P}{\to} \Sigma$, and therefore the desired result under $H_0$ follows.
Under the local alternative $H_n: \boldsymbol{\theta}_2^e = \boldsymbol{\theta}^e_{20} / \sqrt{n}$, notice that
\begin{align*}
&E\big\{\mathbf{W}_i^\prime\boldsymbol{\theta}_{10}^e - \mathbf{X}_i^\prime \boldsymbol{\theta}_0^q + \tau^{-1} (\mathbf{X}_i^\prime \boldsymbol{\theta}_0^q - Y_i) I(Y_i \le \mathbf{X}_i^\prime \boldsymbol{\theta}_0^q) \big | \mathbf{X}_i\big\} \\
&= \mathbf{W}_i^\prime\boldsymbol{\theta}_{10}^e - \mathbf{X}_i^\prime \boldsymbol{\theta}_0^q + \tau^{-1}\mathbf{X}_i^\prime \boldsymbol{\theta}_0^q F_{Y_i | \mathbf{X}_i}(\mathbf{X}_i^\prime \boldsymbol{\theta}_0^q) - \tau^{-1} E\big\{Y_i I(Y_i \le \mathbf{X}_i^\prime \boldsymbol{\theta}_0^q) \big| \mathbf{X}_i\big\} \\
&= -\mathbf{Z}_i^\prime \boldsymbol{\theta}^e_{20} / \sqrt{n}.
\end{align*}
The last equality uses $F_{Y_i | \mathbf{X}_i}(\mathbf{X}_i^\prime \boldsymbol{\theta}_0^q) = \tau$ and $\mathbf{X}_i^\prime \boldsymbol{\theta}_0^e = \tau^{-1} E\big\{Y_i I(Y_i \le \mathbf{X}_i^\prime \boldsymbol{\theta}_0^q) \big| \mathbf{X}_i\big\}$, so that the expression equals $\mathbf{W}_i^\prime\boldsymbol{\theta}_{10}^e - \mathbf{X}_i^\prime \boldsymbol{\theta}_0^e$.
The result follows since $S_n \overset{d}{\simeq} S_n(\boldsymbol{\theta}_0^q, \boldsymbol{\theta}_{10}^e)$ under $H_n$.
\end{proof}
The evaluation of
integrals over functions with an unbounded or even infinite number of variables is
an important task
in physics, quantum chemistry or in quantitative finance, see, e.g.,
\cite{Gil08a, WW96} and the references therein. In recent years a large number of
researchers contributed to the design of new algorithms, such as multilevel and changing dimension algorithms or dimension-wise quadrature methods, to approximate such integrals efficiently.
Multilevel algorithms were introduced by Heinrich and Sindambiwe \cite{Hei98, HS99} in the context of integral equations and parametric integration, and by Giles
\cite{Gil08a, Gil08b} in the context of stochastic differential equations.
Changing dimension algorithms were introduced by Kuo et al. \cite{KSWW10} in the context of infinite-dimensional integration in weighted Hilbert spaces and dimension-wise quadrature methods were introduced by Griebel and Holtz \cite{GH10} for multivariate integration. (Changing dimension algorithms and dimension-wise quadrature methods are based on a similar idea.)
In this paper we want to study infinite-dimensional numerical integration on a weighted reproducing kernel Hilbert space of functions with infinitely many variables as it has
been done in
\cite{HW01, KSWW10, NH09, HMNR10, NHMR11, Gne10, PW11, Gne12, B10, BG12, Gne12a}. The Hilbert spaces we consider here possess so-called
anchored function space decompositions.
For a motivation of this specific function space setting and connections to problems in the theory of stochastic processes and mathematical finance we refer to \cite{HMNR10, NH09, NHMR11}.
We provide error bounds for the worst case error of deterministic linear algorithms; these bounds are expressed in terms
of the cost of the algorithms.
We solely take into account function evaluations, i.e., the
cost of function sampling, and neglect other costs such as, e.g., combinatorial cost.
To evaluate the cost of sampling, we
consider two cost models: the \emph{nested subspace sampling model} (introduced in \cite{CDMR09}, where it was called \emph{variable subspace sampling model}) and the \emph{unrestricted subspace sampling model}
(introduced in \cite{KSWW10}).
In the nested subspace sampling model lower error bounds for infinite-dimensional
integration were provided in \cite{NHMR11} for general $n$-point quadrature formulas in the case where
the weighted Hilbert space of integrands is defined via an anchored kernel and the weights are product weights. We generalize these error bounds to general weights.
In the unrestricted subspace sampling model lower error bounds were provided
for product weights and anchored kernels in \cite{KSWW10}, and for general weights and
the Wiener kernel in \cite{Gne10}. We generalize these results to anchored kernels and general weights. (Let us mention that in the randomized setting similar general lower
error bounds for
infinite-dimensional integration on weighted Hilbert spaces are provided
for anchored decompositions in \cite{Gne12} and for underlying ANOVA-type decompositions
in \cite{BG12}; to treat the latter decompositions, a technically more involved analysis
is necessary.)
In this paper we further study two classes of weights in more depth: The class
of \emph{product and order-dependent (POD) weights}, which includes, in particular,
product weights and finite-product weights, and the class of \emph{weights of finite active dimension}, which includes, in particular, finite-diameter weights and (the more general)
finite-intersection weights. We derive several new results for both classes of weights
which might also be of interest for other tractability studies of continuous numerical
problems on weighted spaces, apart from the infinite-dimensional integration problem.
For these two classes of weights we provide upper error bounds with the help of multilevel algorithms and
changing dimension algorithms. These bounds show that for the cost functions most relevant
in applications, namely those cost functions which grow at least linearly in the number
of active variables, the convergence rate of our algorithms is arbitrarily close to the convergence rate of the $N$th minimal integration error and our lower bounds are thus sharp. For the remaining cost functions, which grow sub-linearly in the number of active variables, our bounds are still sharp in most of the cases (depending on the smoothness of the kernel and the decay rate of the weights).
These new upper bounds improve on the results obtained for product weights in \cite{NHMR11} and \cite{Gne10}. Furthermore, in contrast to \cite[Thm.~3]{NHMR11}, we are
able to formulate our results on upper bounds without introducing additional auxiliary weights that are not problem inherent.
We provide explicit quasi-Monte Carlo multilevel and changing dimension algorithms based
on higher order polynomial lattice rules for weighted Hilbert spaces of integrands that
correspond to anchored Sobolev spaces with smoothness parameter $\alpha >1$.
These algorithms are optimal in the sense that they achieve convergence rates arbitrarily
close to the optimal convergence rate (i.e., the convergence rate of the $N$th minimal
integration error).
The article is organized as follows: In Section \ref{TGS} the setting we want to study is
introduced. In Section \ref{LB} we provide lower error bounds for deterministic quadrature formulas for solving the infinite-dimensional integration problem on weighted Hilbert spaces.
In Section \ref{LB_GEN} we present the most general form of the lower bounds which is valid
for arbitrary weights. In Section \ref{LB_SPEC} we state the form of the lower
bounds for the two specific classes of weights we consider.
In Sections \ref{MLA} and \ref{CDA} we explain multilevel and changing dimension
algorithms. In Section \ref{UB_POD} we provide upper error bounds for POD weights,
and in Section \ref{UB_FAD} for weights with finite active dimension.
In Section \ref{HOC} we illustrate the upper and lower bounds in the situation where
the space of integrands is based on the univariate anchored Sobolev space with smoothness parameter $\alpha >1$. Here we consider specific quasi-Monte Carlo multilevel and changing dimension
algorithms that achieve higher-order convergence.
\section{The general setting}
\label{TGS}
\subsection{Notation}
For $n\in\N$ we denote the set $\{1,\ldots,n\}$ by $[n]$.
If $u$ is a finite set, then its size is denoted by $|u|$.
We put
\begin{equation*}
\U := \{ u\subset \N \,|\, |u| < \infty \}.
\end{equation*}
We use the common Landau $O$-notation. For two non-negative functions $f$ and $g$
we occasionally write $f=\Omega(g)$ for $g=O(f)$, and $f=\Theta(g)$ if $f=\Omega(g)$
and $f=O(g)$ holds.
\subsection{The function spaces}
As spaces of integrands of infinitely many variables, we consider
\emph{reproducing kernel Hilbert spaces} which are discussed in
more detail in
\cite{HMNR10, GMR12}.
Our standard reference for
general reproducing kernel Hilbert spaces is \cite{Aro50}.
We start with univariate functions. Let $D\subseteq \R$ be a Borel
measurable subset of
$\R$ and let $K:D\times D\to \R$ be a measurable reproducing kernel
with \emph{anchor}
$c\in D$, i.e., $K(c,c) = 0$. This implies
$K(\cdot,c) \equiv 0$, since $\|K(\cdot,c)\|^2_H = K(c,c) = 0$. We assume that $K$ is non-trivial,
i.e., $K\neq 0$.
We denote the reproducing kernel Hilbert space with kernel $K$ by
$H = H(K)$ and its scalar product and norm by $\langle \cdot, \cdot \rangle_H$ and
$\|\cdot\|_H$, respectively. We use corresponding notation for other
reproducing kernel Hilbert spaces. If $g$ is a constant function in $H(K)$,
then the reproducing property implies
$g = g(c) = \langle g, K(\cdot,c) \rangle_H = 0$.
Let $\rho$ be a probability measure on $D$. We assume that
\begin{equation}
\label{cond-M}
M:= \int_D K(x,x) \,\rho({\rm d}x) <\infty.
\end{equation}
For arbitrary $\bsx,\bsy \in D^\N$ and $u\in \U$ we define
\begin{equation*}
K_u(\bsx,\bsy) := \prod_{j\in u} K(x_j, y_j),
\end{equation*}
where by convention $K_{\emptyset} \equiv 1$.
The Hilbert space with reproducing kernel $K_u$ will be denoted
by $H_u = H(K_u)$.
Its functions depend only on the coordinates $j\in u$. If it is convenient for us,
we identify $H_u$ with the space of functions defined on $D^u$ determined by
the kernel $\prod_{j\in u} K(x_j, y_j)$, and write $f_u(\bsx_u)$ instead of $f_u(\bsx)$
for $f_u\in H_u$ and $\bsx\in D^{\N}$, where $\bsx_u := (x_j)_{j\in u}\in D^u$. For all $f_u\in H_u$ and $\bsx\in D^{\N}$ we have
\begin{equation}
\label{vanish}
f_u(\bsx) = 0
\hspace{3ex}\text{if $x_j=c$ for some $j\in u$.}
\end{equation}
This property yields an \emph{anchored decomposition} of functions,
see, e.g., \cite{KSWW10a}.
Let now $\bsgamma = (\gamma_u)_{u\in \U}$ be weights, i.e.,
a family of non-negative numbers. We assume that $\bsgamma$ satisfies
\begin{equation}
\label{summable}
\sum_{u\in\U} \gamma_u M^{|u|} <\infty.
\end{equation}
(One may also consider slightly weaker conditions as done, e.g., in \cite[Sect.~5]{KSWW10}
or \cite{PW11}; for a comparison of these different conditions see \cite{GMR12}.)
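For product weights $\gamma_u = \prod_{j\in u}\gamma_j$, the sum in \eqref{summable} factorizes as $\prod_{j}(1+\gamma_j M)$. A small numerical illustration of the summability condition, with the assumed choices $M = 1/2$ and $\gamma_j = j^{-2}$ (both placeholders, for illustration only):

```python
import math

M = 0.5                      # assumed value of the kernel integral in (cond-M)
gamma_j = lambda j: j ** -2  # assumed product-weight generators

# for product weights gamma_u = prod_{j in u} gamma_j, the sum
# sum_u gamma_u * M^{|u|} factorizes as prod_j (1 + gamma_j * M)
log_prod = sum(math.log1p(gamma_j(j) * M) for j in range(1, 100000))

# log1p(x) <= x gives the bound log_prod <= M * sum_j j^{-2} = M * pi^2 / 6
assert 0 < log_prod <= M * math.pi ** 2 / 6
```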
We denote the \emph{set of active
coordinate sets}, $\{u\in \U \,|\, \gamma_u>0\}$ by $\AC = \AC(\bsgamma)$. (Sets $u \subseteq \N$ with $|u| = \infty$ are always assumed to be inactive.) We always assume that $\AC$ is non-trivial, i.e., that there
exists a non-empty $u \in \U$ with $u\in \AC$.
Let us define the domain $\X$ of functions of infinitely many variables by
\begin{equation*}
\X:= \left\{\bsx \in D^{\N} \,|\, \sum_{u\in \AC} \gamma_u \prod_{j\in u}
K(x_j,x_j) <\infty \right\}.
\end{equation*}
Let $\mu$ be the infinite-product probability measure of $\rho$ on $D^{\N}$.
Due to our assumptions we have $\mu(\X) = 1$, see \cite[Lemma~1]{HMNR10}
or \cite{GMR12}.
For $\bsx,\bsy \in\X$ we define
\begin{equation*}
\mathcal{K}_{\bsgamma}(\bsx,\bsy) := \sum_{u\in\AC} \gamma_u K_u(\bsx,\bsy).
\end{equation*}
$\mathcal{K}_{\bsgamma}$ is well-defined and, since $\mathcal{K}_{\bsgamma}$ is symmetric and positive
semi-definite, it is a reproducing
kernel on $\X \times \X$, see \cite{Aro50}. We denote the corresponding
reproducing kernel Hilbert space by $\mathcal{H}_{\bsgamma} = H(\mathcal{K}_{\bsgamma})$ and its norm by $\|\cdot\|_{\bsgamma}$.
For the next lemma see \cite[Cor.~5]{HW01} or \cite{GMR12}.
\begin{lemma}
\label{Lemma6}
The space $\Hg$ consists of all functions
$f=\sum_{u\in\AC} f_u$, $f_u\in H_u$, such that
\begin{equation*}
\sum_{u\in\AC} \gamma^{-1}_u \|f_u\|^2_{H_u} <\infty.
\end{equation*}
In the case of convergence, we have
\begin{equation*}
\|f\|^2_{\bsgamma} = \sum_{u\in\AC} \gamma^{-1}_u \|f_u\|^2_{H_u}.
\end{equation*}
\end{lemma}
For $u\in \AC$ let $P_u$ denote the orthogonal projection
$P_u:\Hg \to H_u$, $f\mapsto f_u$ onto $H_u$. Then each $f\in \Hg$
has a unique representation
\begin{equation*}
f = \sum_{u\in \AC} f_u
\hspace{2ex}\text{with $f_u = P_u(f) \in H_u$, $u\in \AC$.}
\end{equation*}
\subsection{Infinite-dimensional integration}
\label{IDI}
Due to (\ref{summable}), we have $\Hg \subseteq L_1(\X,{\rm d}\mu)$, and
the integration functional
\begin{equation*}
I(f) := \int_{\X} f(\bsx)\,\mu({\rm d}\bsx)
\end{equation*}
is continuous on
$\Hg$, i.e., the operator norm of $I$ is finite:
\begin{equation}
\label{bedingung}
\|I\|^2_{{\mathcal{H}_{\bsgamma}}} = \sum_{u\in \AC} \gamma_u C^{|u|}_0 <\infty,
\hspace{2ex}\text{where}\hspace{2ex}
C_0:= \int_D\int_D
K(x,y)\,\rho({\rm d}x)\,\rho({\rm d}y) < \infty,
\end{equation}
see, e.g., \cite{GMR12}.
We assume that $I$ is non-trivial, i.e., that $C_0>0$.
Notice that $C_0 \le M$.
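A concrete instance for illustration: for the Wiener kernel $K(x,y)=\min(x,y)$ on $D=[0,1]$ with uniform $\rho$ and anchor $c=0$ (an assumed example, not the general setting), one has $M = 1/2$ and $C_0 = 1/3$. A quick midpoint-rule check (the grid size is arbitrary):

```python
n = 400
xs = [(i + 0.5) / n for i in range(n)]

# Wiener kernel K(x, y) = min(x, y), rho uniform on [0, 1], anchor c = 0
M = sum(x for x in xs) / n                            # int K(x, x) dx = 1/2
C0 = sum(min(x, y) for x in xs for y in xs) / n ** 2  # double integral = 1/3

assert abs(M - 0.5) < 1e-9
assert abs(C0 - 1.0 / 3.0) < 1e-4
assert C0 <= M
```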
For a given set of weights $\bsgamma$ we denote by $\widehat{\bsgamma}$
the set of weights defined by
\begin{equation}
\label{gammahut}
\widehat{\gamma}_u := \gamma_u C_0^{|u|}
\hspace{2ex}\text{for all $u\in \U$.}
\end{equation}
The representer $h\in \Hg$ of $I$, i.e., the function $h$ satisfying
$I(f) = \langle f, h \rangle_{\bsgamma}$ for all $f\in\Hg$, is given by
\begin{equation*}
h(\bsx) = \int_{\X} \mathcal{K}_{\bsgamma}(\bsx,\bsy) \mu({\rm d}\bsy)
\end{equation*}
and consequently the operator norm of the functional $I$ satisfies
$\|I\|_{\Hg} = \|h\|_{\bsgamma}$.
For $u\in\AC$ we define $I_u := I\circ P_u$ on $\Hg$,
i.e., $I_u(f) = \langle f, P_u(h) \rangle_{\bsgamma}$ for all $f\in\Hg$.
More concretely, we have
\begin{equation*}
I_u(f) = \int_{D^u} f_u(\bsx_u) \,\rho^u({\rm d}\bsx_u),
\end{equation*}
and the representer $h_u$ of $I_u$ in $\Hg$ is given by
$h_u(\bsx_u) = P_u(h)(\bsx_u)$.
Thus we have
\begin{equation*}
I(f) = \sum_{u\in\AC} I_u(f_u)
\hspace{2ex}\text{for all $f\in\Hg$.}
\end{equation*}
\subsection{Admissible algorithms, errors, and cost models}
We define the \emph{set of admissible sample points} $S$ by
\begin{equation}
\label{admissable_set}
S:= \{ (\bsx_u;\bsc) \,|\, u\in \U\}.
\end{equation}
Here again $\bsx_u = (x_j)_{j\in u}\in D^u$, and
$(\bsx_u;\bsc)$ denotes the vector $\bsy = (y_1,y_2,\ldots) \in D^{\N}$
with $y_j = x_j$ if $j\in u$ and $y_j = c$ otherwise.
Note that $(\bsx_u;\bsc) \in \X$.
We consider algorithms of the form
\begin{equation}
\label{def-alg}
Q(f) = \sum_{i=1}^n a_i f(\bst_{v_i};\bsc),
\hspace{2ex}\text{for $v_1, \ldots, v_n\in \U$,}
\end{equation}
with points $\bst_{v_i}\in (D\setminus \{c\})^{v_i}$ and coefficients $a_i\in\R$.
The worst case error is given by
\begin{equation*}
e(Q; \Hg) := \sup_{\|f\|_{\bsgamma} \le 1}
|I(f)-Q(f)|.
\end{equation*}
For an algorithm $Q$ of the form (\ref{def-alg}) we put
$(Q)_{u} := Q\circ P_u$, i.e.,
\begin{equation*}
(Q)_{u}(f) = \sum^n_{i=1} a_i f_u(\bst_{v_i\cap u}; \bsc).
\end{equation*}
We have the identity
\begin{equation}
\label{worid}
[e(Q;\Hg)]^2 =
\sum_{u\in\AC} \gamma_u [e((Q)_{u}; H_u)]^2,
\end{equation}
where
\begin{equation*}
e((Q)_{u}; H_u) = \sup_{\|g\|_{H_u} \le 1}
|I_u(g)- (Q)_{u}(g)|.
\end{equation*}
For the cost of an algorithm we only take into account the cost
for function evaluations.
To make this more precise, let us fix a
\emph{cost function} $\$:\N\to [1,\infty)$, which is non-decreasing.
In this paper we consider two models for the cost of function
evaluations,
the nested subspace
sampling and the unrestricted subspace sampling model.
In the \emph{nested subspace sampling model} we first define for a fixed
strictly increasing sequence $\bsw = (w_i)_{i\in\N}$ of coordinate sets
$w_1\subset w_2 \subset \cdots \in \U$ the cost of a function evaluation
in $\bsx\in\X$ to be
\begin{equation}
\label{sechs}
\mathfrak{c}_{\bsw,c}(\bsx) := \inf\{\$(|w_i|) \,|\, i\in\N, \hspace{1ex} x_j=c \hspace{1ex}\forall j\notin w_i\}.
\end{equation}
Here we use the standard convention that $\inf\emptyset = \infty$.
For a linear algorithm $Q$ of the form (\ref{def-alg}) we define
\begin{equation*}
\mathfrak{c}_{\bsw,c}(Q) := \sum^n_{i=1} \mathfrak{c}_{\bsw,c}(\bst_{v_i};\bsc).
\end{equation*}
Let $C_{\nes}$ denote the set of all cost functions $\mathfrak{c}_{\bsw, c}$ of the form \eqref{sechs}, where $\bsw$ runs through all strictly increasing sequences of coordinate sets. Then we define the cost of $Q$ in the nested subspace sampling model to be
\begin{equation*}
\cost_{\nes}(Q) := \inf_{\mathfrak{c}_{\bsw,c}\in C_{\nes}} \mathfrak{c}_{\bsw,c}(Q).
\end{equation*}
This model was introduced in
\cite{CDMR09}.\footnote{In \cite{CDMR09} it was actually called ``variable subspace sampling
model''. We have chosen a different name to emphasize the difference between
this model and the ``unrestricted subspace sampling model''
explained below.}
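A minimal sketch of the evaluation cost \eqref{sechs} in the nested model (the linear cost function $\$(m)=m$ and the sequence of sets below are assumptions chosen purely for illustration):

```python
import math

def nested_cost(support, w_seq, dollar=lambda m: m):
    # cost of one evaluation at (x_u; c): the cheapest w_i in the
    # increasing sequence w_1 subset w_2 subset ... covering the support of x
    for w in w_seq:
        if support <= w:  # subset test
            return dollar(len(w))
    return math.inf  # infimum over the empty set

w_seq = [{1}, {1, 2}, {1, 2, 3, 4}]
assert nested_cost({2}, w_seq) == 2         # first covering set is {1, 2}
assert nested_cost({1, 3}, w_seq) == 4      # needs {1, 2, 3, 4}
assert nested_cost({5}, w_seq) == math.inf  # no w_i covers coordinate 5
```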
In the \emph{unrestricted subspace sampling model} a function
evaluation $f(\bsx)$ costs
\begin{equation*}
\mathfrak{c}_{c}(\bsx) := \inf\{\$(|u|) \,|\, u\in \U\,, \hspace{1ex}
x_j=c \hspace{1ex}\forall j\notin u\}.
\end{equation*}
The cost of a linear algorithm $Q$ of the form (\ref{def-alg}) in the unrestricted
subspace sampling model is given by
\begin{equation*}
\cost_{\unr}(Q) := \sum^n_{i=1} \mathfrak{c}_{c}(\bst_{v_i};\bsc) = \sum^n_{i=1} \$(|v_i|).
\end{equation*}
The \emph{unrestricted subspace sampling model} was introduced in
\cite{KSWW10}.\footnote{In \cite{KSWW10} the cost model did not get a specific name.}
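In the unrestricted model the cost of $Q$ depends only on the sizes $|v_i|$ of the coordinate sets used; a one-line sketch (again with the assumed cost function $\$(m)=m$):

```python
def cost_unr(coordinate_sets, dollar=lambda m: m):
    # each sample point (t_{v_i}; c) costs $(|v_i|)
    return sum(dollar(len(v)) for v in coordinate_sets)

v_sets = [{1}, {1, 2}, {1, 2, 3}]
assert cost_unr(v_sets) == 6
# with the super-linear cost function $(m) = 2**m the same rule costs more
assert cost_unr(v_sets, dollar=lambda m: 2 ** m) == 14
```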
We denote the cost of an algorithm $Q$ in the nested and
unrestricted subspace sampling model by
$\cost_{\nes}(Q)$ and $\cost_{\unr}(Q)$,
respectively.
Obviously, the unrestricted subspace sampling model is more generous
than the nested subspace sampling model.
Note that in both sampling models the cost for function evaluations in
non-admissible sample points is infinite.
\subsection{Strong tractability}
\label{tractabilitysection}
Let $\mo \in \{ \nes, \unr\}$.
The {\em $\e$-complexity} is defined as the minimal
cost among all algorithms of the form (\ref{def-alg}), whose worst case
errors are at most $\e$, i.e.,
\begin{equation}
\label{comp}
{\rm comp}_{\mo}(\e;\Hg)
\,:=\, \inf\left\{{\rm cost_{\mo}}(Q) \,|\,
Q \hspace{1ex}\text{is of the form (\ref{def-alg}) and}
\hspace{1ex} e(Q;\Hg)\le\e\right\}.
\end{equation}
The integration problem $I$ is said to be {\em strongly
tractable}\footnote{We chose this notion, since it seems to us to be consistent with the usual notion of tractability in the multivariate setting. A more precise notion would be
``strongly polynomially tractable'', to distinguish this kind of tractability from more
general notions of tractability as introduced in \cite{GW07}, see also \cite{NW08}. But for
convenience we stay with the shorter notion ``strongly tractable''.}
if there are non-negative constants $C$ and $p$ such that
\begin{equation}
\label{pol-tr}
{\rm comp}_{\mo}(\e;\Hg)\le C \,\e^{-p} \qquad
\mbox{for all $\e>0$}.
\end{equation}
The {\em exponent of strong tractability} is given by
\begin{equation*}
p^{\mo}= p^{\mo}(\bsgamma) := \inf\{ p\,|\, \text{$p$ satisfies \eqref{pol-tr}}\}.
\end{equation*}
Essentially, $1/p^{\mo}$ is the
\emph{convergence rate} of the
\emph{$N$th minimal worst case error}
\begin{equation}
\label{worstcaseerr}
e^{\mo}(N;\Hg) := \inf\{ e(Q;\Hg )\,|\,
Q \hspace{1ex}\text{is of the form (\ref{def-alg}) and}
\hspace{1ex} \cost_{\mo}(Q)\le N\}.
\end{equation}
In particular, we have for all $p>p^{\mo}$ that
$e^{\mo}(N;\Hg) = O(N^{-1/p})$.
\subsection{Weights}
\label{WEIGHTS}
Here we introduce further definitions and notation that are necessary
for our analysis of lower and upper bounds on the exponents of
strong tractability in the different models.
Let $\bsgamma=(\gamma_u)_{u\in \U}$
be a given family of weights.
Weights $\bsgamma$ are called \emph{finite-order weights
of order $\omega$}
if there exists an
$\omega\in \N$ such that
$\gamma_{u}=0$ for all $u\in \U$ with $|u|>\omega$.
Finite-order weights were introduced in \cite{DSWW06} for spaces
of functions with a finite number of variables.
The following definition is taken from \cite{Gne10}.
\begin{definition}
\label{Cut-Off}
For weights $\bsgamma$ and $\sigma \in\N$ let us define the \emph{cut-off weights}
of order $\sigma$
\begin{equation}
\label{gammasigma}
\bsgamma^{(\sigma)}
= (\gamma_u^{(\sigma)})_{u\in \U}
\hspace{2ex}\text{via}\hspace{2ex}
\gamma^{(\sigma)}_{u} =
\begin{cases}
\,\gamma_u
\hspace{2ex}&\text{if $|u| \le \sigma$},\\
\,0
\hspace{2ex} &\text{otherwise.}
\end{cases}
\end{equation}
\end{definition}
Clearly, cut-off weights of order $\sigma$ are in particular
finite-order weights of order $\sigma$.
We always assume that the weights $\bsgamma$ we consider satisfy
(\ref{summable}).
Let us denote by $u_1(\sigma), u_2(\sigma),\ldots$
the distinct non-empty sets $u\in \U$ with
$\gamma_u^{(\sigma)} >0$
for which
$\widehat{\gamma}_{u_1(\sigma)}^{(\sigma)} \ge
\widehat{\gamma}_{u_2(\sigma)}^{(\sigma)} \ge \cdots$.
Let us put $u_0(\sigma) := \emptyset$. We can make the
same definitions for
$\sigma = \infty$; then we have obviously
$\bsgamma^{(\infty)} = \bsgamma$.
For convenience we will usually suppress any reference to $\sigma$
in the case where $\sigma = \infty$.
For $\sigma\in\N\cup\{\infty\}$ let us define
\begin{equation*}
\tail_{\bsgamma,\sigma} (d):= \sum_{j=d+1}^\infty
\widehat{\gamma}_{u_j(\sigma)}^{(\sigma)} \in [0,\infty]
\hspace{2ex}\text{and}\hspace{2ex}
\decay_{\bsgamma,\sigma} :=
\sup \left\{ p\in \R \,\Big|\, \lim_{j\to\infty}
\widehat{\gamma}_{u_j(\sigma)}^{(\sigma)}j^p =0
\right\}.
\end{equation*}
The following definition is from \cite{Gne10}.
\begin{definition}
For $\sigma\in\N\cup\{\infty\}$ let $t^*_\sigma \in [0,\infty]$
be defined as
\begin{equation*}
t^*_\sigma := \inf \big\{t\ge 0\,|\, \,\exists\, C_t>0 \,\,\forall
\,v \in \U: |\{i\in \N \,|\,
u_i(\sigma) \subseteq v\}| \le C_t|v|^t \big\}.
\end{equation*}
\end{definition}
Let $\sigma\in\N$. Since $|u_i(\sigma)|\le \sigma$ for all $i\in\N$, we have
obviously $t^*_\sigma \le \sigma$.
On the other hand, if we have an infinite sequence
$(u_j(\sigma))_{j\in\N}$, it is not hard to verify that
$t^*_\sigma \ge 1$, see \cite{Gne10}.
In the following two subsections we describe the classes of weights we
want to consider in this article.
\subsubsection{Product and order-dependent weights}
\emph{Product and order-dependent (POD) weights} $\bsgamma$ were introduced in
\cite{KSS11} and are a hybrid of so-called \emph{product weights} and
\emph{order-dependent weights}.
Their general form is
\begin{equation}
\label{pod}
\gamma_u = \Gamma_{|u|} \prod_{j\in u}\gamma_j,
\hspace{3ex}\text{where $\gamma_1\ge\gamma_2\ge \cdots \ge 0$, and
$\Gamma_0=\Gamma_1=1$, $\Gamma_2,\Gamma_3,\ldots \ge 0$.}
\end{equation}
Special cases are product and finite-product weights that are defined as
follows.
\begin{definition}
Let $(\gamma_j)_{j\in\N}$ be a sequence of non-negative real
numbers satisfying $\gamma_1\ge \gamma_2 \ge \ldots.$ With the
help of this sequence we define for $\omega\in\N\cup\{\infty\}$ weights
$\bsgamma = (\gamma_{u})_{u\subset_f\N}$ by
\begin{equation}
\label{gammafpw}
\gamma_{u} =
\begin{cases}
\prod_{j\in u} \gamma_j
\hspace{2ex}&\text{if $|u| \le \omega$},\\
\,0
\hspace{2ex} &\text{otherwise,}
\end{cases}
\end{equation}
where we use the convention that the empty product is $1$.
In the case where $\omega=\infty$, we call such weights \emph{product weights};
in the case where $\omega$ is finite, we call them \emph{finite-product weights of order}
(at most) $\omega$.
\end{definition}
Product weights were introduced by Sloan and Wo\'zniakowski in \cite{SW98}
and have been studied extensively since then.
Finite-product weights were considered in \cite{Gne10} and are
obviously finite-order weights of order at most $\omega$.
It is easily seen that product weights and finite-product weights of order $\omega$ are POD weights;
in (\ref{pod}) one just has to choose $\Gamma_\nu =1$ for all $\nu\in\N$ to obtain
product weights, and $\Gamma_{|u|} = 1$ for $|u| \le \omega$
and $\Gamma_{|u|} = 0$ for $|u| > \omega$ to obtain finite-product weights.
Other concrete examples of POD weights can be found in \cite{KSS11, KSS12}.
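To make the specializations above concrete, here is a minimal Python sketch (the function names are hypothetical, not taken from the cited papers) that evaluates a POD weight $\gamma_u = \Gamma_{|u|}\prod_{j\in u}\gamma_j$ and recovers product and finite-product weights via the choices of $\Gamma_\nu$ just described.

```python
from math import prod

def pod_weight(u, gamma, Gamma):
    """Evaluate the POD weight gamma_u = Gamma_{|u|} * prod_{j in u} gamma_j.

    u     : finite set of coordinate indices (1-based)
    gamma : callable j -> gamma_j (a non-increasing sequence)
    Gamma : callable nu -> Gamma_nu, with Gamma_0 = Gamma_1 = 1
    """
    return Gamma(len(u)) * prod(gamma(j) for j in u)

gamma_j = lambda j: j ** -2.0  # an example generator sequence

# Product weights: Gamma_nu = 1 for all nu.
def product_weight(u):
    return pod_weight(u, gamma_j, lambda nu: 1.0)

# Finite-product weights of order omega: Gamma_nu = 1 for nu <= omega, else 0.
def finite_product_weight(u, omega=2):
    return pod_weight(u, gamma_j, lambda nu: 1.0 if nu <= omega else 0.0)
```

For instance, `product_weight({1, 2})` evaluates to $1^{-2}\cdot 2^{-2} = 0.25$, while `finite_product_weight({1, 2, 3})` vanishes because $|u| = 3 > \omega = 2$.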
\subsubsection{Algorithmic dimension}\label{sec_alg_dim}
The following definition introduces the concept of the \emph{algorithmic dimension} of
a family of weights.
\begin{definition}
Let $\mathcal{W}\subseteq \U$. Let $d \in \mathbb{N} \cup \{\infty\}$ be such that there exists a function
\begin{equation}
\label{phi}
\phi: \mathbb{N} \to [d]
\hspace{2ex}\text{with the property}\hspace{2ex}
\forall u \in \mathcal{W}\,\, \forall j
\neq j' \in u:\, \phi(j) \neq \phi(j'),
\end{equation}
where $[\infty] = \mathbb{N}$. That is, $\phi|_u$ is injective for each $u\in\mathcal{W}$. If $d \in \mathbb{N}$, then we say that $\mathcal{W}$ has
\emph{finite algorithmic dimension}. In this case we call the minimal $d^\ast = d^\ast(\mathcal{W})$ for which such a $\phi$ exists the \emph{algorithmic dimension} of $\mathcal{W}$.
Let $\bsgamma = (\gamma_u)_{u\in \U}$ be a family of weights.
If its set $\mathcal{A}$ of active coordinate sets has algorithmic
dimension $d^\ast(\mathcal{A})$, we say that the family of weights $\bsgamma$ has
\emph{algorithmic dimension $d^\ast(\bsgamma) := d^\ast(\mathcal{A})$}.
If we do not want to specify the
algorithmic dimension $d^\ast$, we just say that $\bsgamma$ has
\emph{finite algorithmic dimension}.
\end{definition}
Weights $\bsgamma$ of finite algorithmic dimension $d^*$ are obviously
finite-order weights of order $\omega \le d^*$, but finite-order weights do
not necessarily have finite algorithmic dimension.
We define a graph associated with $\mathcal{W}$ in the following way. For a given set $\mathcal{W} \subseteq \U$ we consider the infinite simple graph $G_{\mathcal{W}} =(\N,E_{\mathcal{W}})$, where $(i,j)$, $i \neq j$, belongs to the set of edges $E_{\mathcal{W}}$ if and only if there exists
a $u\in \mathcal{W}$ with $i,j \in u$. The graph $G_{\mathcal{W}}$ does not contain loops, i.e., edges of the form $(i,i)$. We call $G_{\mathcal{W}}$ the \emph{associated graph}
of $\mathcal{W}$. Notice that two different subsets $\mathcal{W}$, $\mathcal{W}'$ of
$\U$ may have the same associated graph.
The following lemma connects the algorithmic dimension to the \emph{chromatic number} $\chi(G_{\mathcal{W}})$ of $G_{\mathcal{W}}$. Recall that the chromatic number of a graph $G$ is the minimal number of colors needed to color the vertices of $G$ in such a way that any two vertices connected by an edge receive different colors.
\begin{lemma}\label{lem_graph}
Let $\mathcal{W} \subseteq \U$ and let $G_{\mathcal{W}}$ be the associated graph. Then the algorithmic dimension $d^\ast(\mathcal{W})$ of $\mathcal{W}$ and the chromatic number $\chi(G_{\mathcal{W}})$ coincide, i.e.,
\begin{equation*}
d^\ast(\mathcal{W}) = \chi(G_{\mathcal{W}}).
\end{equation*}
\end{lemma}
\begin{proof}
Assume that we are given a coloring of the vertices of the graph $G_{\mathcal{W}}$ with colors $1,2,\ldots, \chi(G_{\mathcal{W}})$. Then we can define the function $\phi:\mathbb{N} \to [\chi(G_{\mathcal{W}})]$ by setting $\phi(i) = c_i$, where $c_i \in [\chi(G_{\mathcal{W}})]$ denotes the color of the vertex $i$. Since the vertices of any $u \in \mathcal{W}$ are pairwise adjacent in $G_{\mathcal{W}}$, the restriction $\phi|_u$ is injective, i.e., $\phi$ satisfies (\ref{phi}). Conversely, if a function $\phi:\mathbb{N} \to [d^\ast(\mathcal{W})]$ satisfying (\ref{phi}) is given, then coloring the vertex $i$ by $\phi(i)$ yields a proper coloring of $G_{\mathcal{W}}$, since any two adjacent vertices lie in a common $u \in \mathcal{W}$ and therefore receive different colors. Since both $d^\ast(\mathcal{W})$ and $\chi(G_{\mathcal{W}})$ are minimal, the result follows.
\end{proof}
With the help of Lemma \ref{lem_graph} we derive in the following remark a lower bound on the algorithmic dimension.
\begin{remark}
A complete graph $G$ with $n$ vertices has chromatic number $n$, since all vertices are connected to each other by an edge and hence all vertices must have a different color.
If $\mathcal{W}$ has algorithmic dimension $d\in\N$, then $|u| \le d$ for all coordinate sets $u$ in $\mathcal{W}$, since $G_{\mathcal{W}}$ contains a subgraph which is a complete graph with $|u|$ vertices. Hence
\begin{equation}\label{lowboualgdim}
d^*(\mathcal{W}) \ge \sup_{u\in\mathcal{W}}|u|.
\end{equation}
Thus weights with algorithmic dimension
$d\in\N$ are necessarily finite-order weights of order $\omega \le d$.
The lower bound (\ref{lowboualgdim}) is not necessarily sharp, as
shown by the following example: Let $|u| \le 2$ for all $u \in \mathcal{W}$ and let there exist a sequence of sets $\{i_1,i_2\}, \{i_2,i_3\}, \ldots, \{i_{k-1}, i_k\}$, $\{i_k,i_1\} \in \mathcal{W}$ where $k$ is odd. In other words, $G_{\mathcal{W}}$ contains an odd cycle. Then this graph has chromatic number $3$ as can easily
be shown. An even more drastic example is the set $\mathcal{W}:= \{ u\in\U \,|\, |u|=2\}$,
which has not even finite algorithmic dimension.
\end{remark}
Let us now turn to upper bounds on the algorithmic dimension.
\begin{remark}
As a consequence of Lemma \ref{lem_graph}, we obtain that if $G_{\mathcal{W}}$ is a planar graph (meaning that every finite subgraph is planar), then the famous Four Color Theorem
\cite{AH77a, AH77b}
says that $G_{\mathcal{W}}$ can be colored with at most four colors. Hence in
this situation the algorithmic dimension of $\mathcal{W}$ is at most four.
\end{remark}
We provide further upper bounds on the algorithmic dimension in Theorems \ref{bound_dstar} and \ref{Brooks}.
\begin{theorem}\label{bound_dstar}
Let $\mathcal{W} \subseteq \U$. Then the algorithmic dimension of $\mathcal{W}$ is bounded by
\begin{equation*}
d^\ast(\mathcal{W}) \le \sup_{i \in \mathbb{N}} \left|\bigcup_{u \in \mathcal{W}: i \in u} u \right|.
\end{equation*}
\end{theorem}
\begin{proof}
By Lemma~\ref{lem_graph} it suffices to show that $\chi(G_{\mathcal{W}})$ satisfies the bound. By \cite[Theorem~8.1.3]{Diestel}, $\chi(G_{\mathcal{W}})$ is equal to the maximum of the chromatic numbers $\chi(H)$ over all finite subgraphs $H$ of $G_{\mathcal{W}}$. Thus it suffices to show that the chromatic number $\chi(H)$ of every finite subgraph $H$ of $G_\mathcal{W}$ satisfies the bound.
Let $H$ be an arbitrary finite subgraph of $G_{\mathcal{W}}$ and let $V_H$ denote the set of vertices of $H$. By \cite[p.115]{Diestel} we have $\chi(H) \le \Delta(H)+1$, where $\Delta(H)$ is the maximum degree of the vertices of $H$. But the degree of a vertex $i$ in the graph $G_{\mathcal{W}}$ is equal to
\begin{equation*}
\Delta(i) = \left|\bigcup_{u \in \mathcal{W}: i \in u} u \right| - 1.
\end{equation*}
By taking the maximum of the degrees over all vertices in the graph $G_{\mathcal{W}}$ we obtain the result.
\end{proof}
In some circumstances the above result can be slightly improved using Brooks' theorem from graph theory, see \cite[Theorem~8.1.3]{Diestel}.
\begin{theorem}\label{Brooks}
Let $\mathcal{W}\subseteq \U$ be such that $\sup_{u \in \mathcal{W}} |u| \ge 3$, and let $Z = \sup_{i \in \mathbb{N}} \left|\bigcup_{u \in \mathcal{W}: i \in u} u \right|$. Let $i_1,i_2,\ldots$ be the vertices for which $|\cup_{u \in \mathcal{W}: i_k \in u} u| = Z$. Assume that for each $k \ge 1$ the subgraph induced by the vertices in $\cup_{u \in \mathcal{W}: i_k \in u} u$ is not complete. Then
\begin{equation*}
d^\ast(\mathcal{W}) \le Z - 1.
\end{equation*}
\end{theorem}
Various other bounds on $d^\ast$ can be obtained from graph theory via bounds on the chromatic number of the associated graph, see for instance \cite{Diestel}.
\begin{remark}
In general it is difficult to find an optimal function $\phi$ as in (\ref{phi})
for a given set $\mathcal{W}$. A valid (but not necessarily optimal) $\phi$ can be obtained by a greedy algorithm for graph coloring, see \cite[p.~114]{Diestel}. However, this algorithm does not necessarily find a coloring with the smallest possible number of colors.
\end{remark}
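As an illustration (not part of the cited construction), the greedy coloring just mentioned can be sketched in a few lines of Python: we build the associated graph of a finite collection $\mathcal{W}$ and color its vertices in increasing order, which yields a valid $\phi$ and hence an upper bound on $d^\ast(\mathcal{W})$, though not necessarily the chromatic number.

```python
def greedy_algorithmic_dimension(W):
    """Greedy upper bound on d*(W) for a finite collection W of coordinate sets.

    Builds the associated graph G_W (i ~ j iff i, j lie in a common u in W)
    and colors its vertices greedily; returns (number of colors, coloring).
    """
    vertices = sorted(set().union(*W)) if W else []
    adjacency = {v: set() for v in vertices}
    for u in W:
        for i in u:
            adjacency[i] |= set(u) - {i}
    color = {}
    for v in vertices:
        used = {color[w] for w in adjacency[v] if w in color}
        c = 1
        while c in used:  # smallest color not used by an already colored neighbor
            c += 1
        color[v] = c
    return max(color.values(), default=0), color
```

For the odd cycle $\mathcal{W} = \{\{1,2\},\{2,3\},\{3,1\}\}$ from the previous remark the greedy bound is $3$, which here coincides with $\chi(G_{\mathcal{W}})$.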
A particular class of weights whose set $\mathcal{W} = \AC$ of active coordinate sets has finite algorithmic dimension is the class of finite-intersection weights defined in \cite{Gne10}.
\begin{definition}
\label{def-fiw}
Let $\rho \in \N_0$.
The finite-order weights $(\gamma_{u_i})_{i\in \N}$, where $\gamma_{u_i} > 0$, are
called \emph{finite-intersection weights} with \emph{intersection
degree} at most $\rho$ if we have
\begin{equation}
\label{fiw}
|\{j\in\N \, | \, u_i\cap u_j \neq \emptyset \}| \le 1+\rho
\hspace{2ex}\text{for all $i\in\N$.}
\end{equation}
\end{definition}
Note that for finite-order weights condition (\ref{fiw}) is
equivalent to the following
condition: There exists an $\eta\in\N$ such that
\begin{equation}
\label{cond}
|\{ i\in\N \,|\, k\in u_i \}| \le \eta
\hspace{2ex}\text{for all $k\in\N$.}
\end{equation}
Indeed, if (\ref{fiw}) is satisfied, then (\ref{cond}) holds with
$\eta \le 1+\rho$, and if (\ref{cond}) is satisfied, then
(\ref{fiw}) holds with $\rho \le (\eta-1)\omega$.
Due to \cite[Lemma~3.10]{Gne10} the set $\AC$ of active
coordinate sets of finite-intersection
weights has algorithmic dimension $d^\ast(\AC)$ at most
$\eta(\omega -1)+1$; this was shown by constructing inductively a
mapping $\phi:\N \to [\eta(\omega -1)+1]$ that satisfies (\ref{phi}). The bound also follows from Theorem~\ref{bound_dstar} via
\begin{equation*}
d^\ast(\mathcal{A}) \le \sup_{i \in \mathbb{N}} \left|\bigcup_{u \in \mathcal{A}: i \in u} u\right| \le \sup_{i \in \mathbb{N}} \big( |\{ u \in \mathcal{A}\,|\, i \in u\} | (\omega-1) + 1 \big) \le \eta (\omega-1) + 1.
\end{equation*}
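The estimate above is easy to check numerically on a concrete family. The following Python sketch (helper names hypothetical) computes $\eta$ from condition (\ref{cond}) and $\omega$ for a finite collection of coordinate sets, colors the associated graph greedily, and verifies that the number of colors used stays below $\eta(\omega-1)+1$.

```python
def eta_and_omega(W):
    """eta: max number of sets containing a fixed coordinate (condition (cond));
    omega: maximal cardinality of a coordinate set in W."""
    coords = set().union(*W)
    eta = max(sum(1 for u in W if k in u) for k in coords)
    omega = max(len(u) for u in W)
    return eta, omega

def greedy_colors(W):
    """Number of colors a greedy coloring of the associated graph of W uses."""
    vertices = sorted(set().union(*W))
    adjacency = {v: set() for v in vertices}
    for u in W:
        for i in u:
            adjacency[i] |= set(u) - {i}
    color = {}
    for v in vertices:
        used = {color[w] for w in adjacency[v] if w in color}
        color[v] = min(c for c in range(1, len(vertices) + 2) if c not in used)
    return max(color.values())

# Overlapping pairs u_i = {i, i+1}: eta = 2, omega = 2, so the bound is 3.
W = [{i, i + 1} for i in range(1, 20)]
eta, omega = eta_and_omega(W)
assert greedy_colors(W) <= eta * (omega - 1) + 1
```

In this example the greedy coloring actually needs only two colors, so the bound $\eta(\omega-1)+1 = 3$ is not sharp here.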
\section{Lower bounds}
\label{LB}
Here we provide lower bounds for the exponents of tractability in the nested
and in the unrestricted subspace sampling model.
We assume that there exist constants $\varrho,\beta >0$ such that
the $n$th minimal error of univariate integration on $H=H(K)$ satisfies
\begin{equation}
\label{assuni}
e(n;H) \ge \varrho (n+1)^{-\beta}
\hspace{2ex}\text{for all $n\in\N_0$,}
\end{equation}
where
\begin{equation}
e(n;H) := \inf \bigg\{e(Q;H) \,\bigg|\,
Q(f) = \sum^n_{i=1} a_i f(x^{(i)}) \hspace{1ex}
\text{with $a_i\in \R$, $x^{(i)}\in D$} \bigg\}.
\end{equation}
Since for $\emptyset \neq u \in \U$ the integration
problem over $H_u$ is at least as hard as in the univariate
case, assumption (\ref{assuni}) results in
\begin{equation}
\label{betalowbou}
e(Q_{u}; H_u) \ge \varrho C^{\frac{|u|-1}{2}}_0 (n+1)^{-\beta}
\end{equation}
for any quadrature of the form
\begin{equation*}
Q_{u}(f) = \sum_{i=1}^n a_i f(x^{(i)}),
\hspace{2ex} a_i\in\R, x^{(i)}\in D^u, f\in H_u,
\end{equation*}
see \cite[Theorem~17.11]{NW10}.
If now $Q$ is an algorithm of the form (\ref{def-alg}) and
$(Q)_u=Q\circ P_u$, then (\ref{worid}) and (\ref{betalowbou}) imply
\begin{equation}
\label{blowbou}
[e(Q;\Hg)]^2 = \sum^\infty_{j=0} \gamma_{u_j}
[e((Q)_{u_j}; H_{u_j})]^2
\ge b^2 \sum^\infty_{j=1}
\frac{\widehat{\gamma}_{u_j}}{(n_j+1)^{2\beta}},
\end{equation}
where $b^2 := \varrho^2 C_0^{-1}$ and
$n_j := |\{i\in [n] \,|\, u_j\subseteq v_i\}|$.
Since we assumed that $\AC$ is non-trivial, we obtain from (\ref{blowbou})
\begin{equation}
\label{pmodbeta}
p^{\nes} \ge p^{\unr} \ge 1/\beta.
\end{equation}
\subsection{Lower bounds for general weights}
\label{LB_GEN}
In this section we study general weights; here ``general'' means that we only
require the condition (\ref{summable}) to hold.
\subsubsection{Nested subspace sampling}
We start with a new lower bound for the exponent of strong tractability
for general weights in the nested subspace sampling model.
\begin{theorem}
\label{NesLowBou}
Let $\$(k) = \Omega(k^s)$ for some $s> 0$, and let
$\bsgamma$ be weights that satisfy (\ref{summable}).
Then $I$ is only strongly tractable in the nested subspace sampling model if
$\decay_{\bsgamma} >1$. In this case,
\begin{equation}
\label{neslowbou}
p^{\nes} \ge \max \left\{ \frac{1}{\beta}\,,\,
\sup_{\sigma\in\N} \frac{2 s/t^*_{\sigma}}{\decay_{\bsgamma,\sigma} - 1} \right\}.
\end{equation}
\end{theorem}
\begin{proof}
Let $Q$ be of the form (\ref{def-alg}) with $\cost_{\nes}(Q) \le N$.
Then there exists an increasing sequence of sets $\bsw = (w_i)_{i\in\N}$
such that $c_{\bsw,c}(Q) \le N+1$. Let $m$ be the largest integer that satisfies
$\$(|w_m|) \le N+1$. Hence, $v_1,\ldots, v_n \subseteq w_m$. Let $\sigma \in \N$, and let $\bsgamma^{(\sigma)}$ be the corresponding
cut-off weights of $\bsgamma$. Then it is easily seen that
$e(Q; \mathcal{H}_{\bsgamma}) \ge e(Q; \mathcal{H}_{\bsgamma^{(\sigma)}})$,
cf. \cite[Remark~3.3]{Gne10}. Thus we get from (\ref{blowbou})
\begin{equation*}
[e(Q; \mathcal{H}_{\bsgamma})]^2 \ge b^2 \sum_{j: u_j(\sigma) \nsubseteq
w_m} \widehat{\gamma}^{(\sigma)}_{u_j(\sigma)}.
\end{equation*}
Let now $t>t^*_{\sigma}$. Then, for a suitable constant $C_t>0$,
\begin{equation*}
\tau_m := |\{j \,|\, u_j(\sigma) \subseteq w_m\}| \le C_t |w_m|^t
= O(N^{t/s}),
\end{equation*}
since $N +1 \ge \$(|w_m|) = \Omega(|w_m|^s)$. Hence we obtain
for every $p_\sigma > \max\{1,\decay_{\bsgamma, \sigma}\}$
\begin{equation*}
[e(Q; \mathcal{H}_{\bsgamma})]^2 \ge b^2 \sum_{j=\tau_m +1}^\infty \widehat{\gamma}^{(\sigma)}_{u_j(\sigma)}
= \Omega(\tau_m^{1-p_\sigma})
= \Omega (N^{t(1-p_\sigma)/s}).
\end{equation*}
This shows that $I$ is only strongly tractable if $\decay_{\bsgamma} >1$.
In that case,
\begin{equation*}
p^{\nes} \ge \frac{2 s/t^*_{\sigma}}{\decay_{\bsgamma,\sigma} - 1}.
\end{equation*}
From this and (\ref{pmodbeta}) follows the statement of the theorem.
\end{proof}
Note that we have on the one hand $t^*_1 \le t^*_2 \le t^*_3 \le
\cdots$, and on the other hand $\decay_{\bsgamma,1} \ge
\decay_{\bsgamma,2} \ge \decay_{\bsgamma,3} \ge \cdots$.
Thus it is not a priori clear for which $\sigma\in \N$ the suprema
in (\ref{neslowbou}) and (\ref{unrlowbou}) are attained. As shown in \cite{Gne10} and as we will
see below,
this may vary for different classes of weights.
\subsubsection{Unrestricted subspace sampling}
The next theorem is a generalization of \cite[Cor.~4.1]{Gne10}, where
only the specific kernel $K(x,y) = \min\{x,y\}$ on $D\times D = [0,1]^2$
was treated.
\begin{theorem}
\label{UnrLowBou}
Let $\$(k) = \Omega(k^s)$ for some $s> 0$, and let
$\bsgamma$ be weights that satisfy (\ref{summable}).
Then $I$ is only strongly tractable in the unrestricted subspace sampling model
if $\decay_{\bsgamma} >1$. In this case,
\begin{equation}
\label{unrlowbou}
p^{\unr} \ge \max \left\{ \frac{1}{\beta}\,,\,
\sup_{\sigma\in\N} \frac{2\min\{ 1,s/t^*_{\sigma} \}}{\decay_{\bsgamma,\sigma} - 1} \right\}.
\end{equation}
\end{theorem}
\begin{proof}
The proof of Theorem \ref{UnrLowBou} is essentially identical to the
proofs of Theorem 3.4 and Corollary 4.1 in \cite{Gne10}. One just has
to keep in mind that the simple lower bound $p^* \ge 1$ appearing there has to
be replaced by $p^{\unr} \ge 1/\beta$, see (\ref{pmodbeta}).
\end{proof}
\subsection{Lower bounds for special classes of weights}
\label{LB_SPEC}
\subsubsection{Product and order-dependent weights}
Recall that POD weights include as special cases product weights and finite-product
weights. We now present a generalized version of \cite[Lemma~3.8]{Gne10},
which holds not only for product and finite-product weights, but for general
POD weights.
\begin{lemma}
\label{Lemma3.8}
Let $\bsgamma = (\gamma_u)_{u\in \U}$ be POD weights as in (\ref{pod}). Then
\begin{equation*}
\decay_{\bsgamma,1} = \decay_{\bsgamma,\sigma}
\hspace{3ex}\text{for all $\sigma \in\N$.}
\end{equation*}
This still holds if we replace condition (\ref{summable}) by the weaker condition
that the weights $\widehat{\bsgamma}$ are bounded and have $0$ as their only accumulation point.
\end{lemma}
\begin{proof} Let $\sigma\in \N$. Since
$\decay_{\bsgamma,1} \ge \decay_{\bsgamma,\sigma} \ge 0$,
it remains to show that $\decay_{\bsgamma,1} \le \decay_{\bsgamma,\sigma}$.
We can confine ourselves to the case $\decay_{\bsgamma,1}>0$. Let $p\in (0,\decay_{\bsgamma,1})$.
This implies $\sum_{j\in\N} \gamma^{1/p}_j < \infty$. Thus we get
\begin{equation*}
\begin{split}
&\sum_{j\in\N}\widehat{\gamma}^{1/p}_{u_j(\sigma)}
\le \max_{\nu\in [\sigma]} \Gamma^{1/p}_\nu \sum_{j\in\N}\prod_{i\in u_j(\sigma)}
(\gamma_i C_0)^{1/p}
\le \max_{\nu\in [\sigma]} \Gamma^{1/p}_\nu \prod_{i\in\N} \big(
1 + (\gamma_i C_0)^{1/p} \big)\\
\le &\max_{\nu\in [\sigma]} \Gamma^{1/p}_\nu \exp \bigg( \sum_{i\in\N} \ln \big(
1 + (\gamma_i C_0)^{1/p} \big) \bigg)
\le \max_{\nu\in [\sigma]} \Gamma^{1/p}_\nu \exp \bigg( \sum_{i\in\N}
(\gamma_i C_0)^{1/p} \bigg) < \infty,
\end{split}
\end{equation*}
where we used the estimate $\ln(1+x) \le x$, which holds for all non-negative $x$.
Since the sequence $\widehat{\gamma}_{u_j(\sigma)}$, $j\in\N$, is monotonically
decreasing, this implies $\widehat{\gamma}_{u_j(\sigma)} = o(j^{-p})$.
Hence $p\le \decay_{\bsgamma,\sigma}$. Since we may choose $p$ arbitrarily close to
$\decay_{\bsgamma,1}$, we obtain $\decay_{\bsgamma,1} \le \decay_{\bsgamma,\sigma}$.
\end{proof}
For POD weights, Lemma \ref{Lemma3.8} together with Theorems \ref{NesLowBou} and
\ref{UnrLowBou} implies that $I$ is only strongly tractable if $\decay_{\bsgamma}>1$, and that in this case
\begin{equation}
\label{neslowboupod}
p^{\nes} \ge \max \left\{ \frac{1}{\beta}, \frac{2 s}{\decay_{\bsgamma,1} - 1}
\right\}
\hspace{2ex}\text{and}\hspace{2ex}
p^{\unr} \ge \max \left\{ \frac{1}{\beta}\,,\,
\frac{2\min\{ 1,s \}}{\decay_{\bsgamma,1} - 1} \right\}.
\end{equation}
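For orientation, the bounds in (\ref{neslowboupod}) can be evaluated for concrete parameters; the small Python sketch below (parameter values chosen purely for illustration) shows how a super-linear cost function, $s>1$, separates the two sampling models.

```python
def pod_lower_bounds(beta, s, decay1):
    """Evaluate the lower bounds (neslowboupod) for POD weights.

    beta   : exponent in the univariate lower bound (assuni)
    s      : cost exponent, $(k) = Omega(k^s)
    decay1 : decay_{gamma,1}, which must exceed 1 for strong tractability
    """
    assert decay1 > 1.0
    p_nes = max(1.0 / beta, 2.0 * s / (decay1 - 1.0))
    p_unr = max(1.0 / beta, 2.0 * min(1.0, s) / (decay1 - 1.0))
    return p_nes, p_unr
```

For $\beta = 1$ and $\decay_{\bsgamma,1} = 2$ one obtains $p^{\nes} \ge 2s$ but $p^{\unr} \ge 2$, so for $s = 2$ the nested bound ($4$) is strictly larger than the unrestricted one ($2$).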
For product weights
the lower bound for $p^{\nes}$ can be derived from \cite[Thm.~4]{NHMR11}, and the one for $p^{\unr}$ from \cite[Thm.~3.3 \& Sect.~5.6]{KSWW10}.
Notice that the lower bounds for $p^{\nes}$ and $p^{\unr}$ for finite-product weights are not weaker than for product weights.
\subsubsection{Weights with finite algorithmic dimension}
For the special case of finite-intersection weights of order $\omega$
it was observed in \cite{Gne10}
that if $|\AC(\bsgamma^{(\sigma)})| = \infty$, then $t^*_\sigma = 1$ for all $\sigma \in \N$.
Hence for finite-intersection weights the lower bounds (\ref{neslowbou}) and (\ref{unrlowbou}) result in
\begin{equation}
\label{neslowboufad}
p^{\nes} \ge \max \left\{ \frac{1}{\beta}\,,\,
\frac{2s}{\decay_{\bsgamma,\omega} - 1} \right\}
\hspace{2ex}\text{and}\hspace{2ex}
p^{\unr} \ge \max \left\{ \frac{1}{\beta}\,,\,
\frac{2\min\{ 1,s \}}{\decay_{\bsgamma,\omega} - 1} \right\}.
\end{equation}
For the Wiener kernel $K(x,y) = \min\{x,y\}$, defined on $[0,1]^2$, the lower
bound for $p^{\unr}$ in (\ref{neslowboufad}) was already proved in
\cite[Sect.~3.1.1]{Gne10}.
For general weights of finite algorithmic dimension it is however not necessarily true that $t^*_\sigma = 1$ for all $\sigma\in\N$ as the following two lemmas show.
\begin{lemma}\label{lem_alg_dim1}
Let $d\in\N$. Then there exists a set of weights $\bsgamma$ with algorithmic dimension $d$ such that for all $k>d$ there exists a $v \in \U$ with $|v| = k$ and
\begin{equation}
\label{vest}
|\{u \subseteq v: u \in \AC(\bsgamma^{(\sigma)})\}| \ge
\left( \left\lfloor \frac{|v|}{d} \right\rfloor \right)^\sigma {d \choose \sigma} > \left(\frac{|v|}{d}-1\right)^\sigma
\hspace{2ex}\text{for all $\sigma \in [d]$.}
\end{equation}
\end{lemma}
\begin{proof}
We construct a graph $\widetilde{G}$ with vertex set $\mathbb{N}$ and chromatic number $d$ in the following way: color the vertex $j \in \mathbb{N}$ by the color $c \in [d]$ given by $c \equiv j \pmod{d}$. Now each pair of vertices $(i,j) \in \mathbb{N}^2$ is an edge of the graph $\widetilde{G}$ if and only if
$i \not\equiv j \pmod{d}$, i.e., if the coloring of the vertices $i$ and $j$ differs. Let $k>d$ and $\sigma \in [d]$ be given. Let $G$ be the subgraph of $\widetilde{G}$ with vertex set $v:= [k]$. Thus for any set $u\subset v$ that consists of $\sigma$ differently colored vertices, the corresponding subgraph is complete.
We now provide a lower bound for the number of ways a subset of $v$ having $\sigma$ differently colored vertices can be chosen. Let $r=\lfloor k/d \rfloor$.
For each color $c \in [d]$, there are at least $r$ vertices in $v$ with color $c$. There are ${d \choose \sigma}$ ways of choosing a set of $\sigma$ different colors out of the $d$ possible colors and for each color $c$ there are at least $r$ possible choices of vertices with this color $c$. Thus the number of possible choices is at least $r^\sigma {d \choose \sigma}$. Hence $G$ contains
at least $r^{\sigma}{d \choose \sigma}$ cliques of size $\sigma$. We may now
define $\bsgamma$, e.g., by $\gamma_u = \prod_{j\in u} j^{-2}$ if $u$ is a clique
in $\widetilde{G}$ and $\gamma_u =0$ otherwise. By construction the algorithmic dimension
of $\bsgamma$ is $d$, see Lemma \ref{lem_graph}, and in addition (\ref{vest}) holds.
\end{proof}
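The counting step in the proof can be verified directly for small parameters. The Python sketch below (an illustration only) colors $[k]$ by $j \bmod d$ and counts the subsets of $[k]$ of size $\sigma$ whose elements carry pairwise distinct colors, i.e., the $\sigma$-cliques of the subgraph $G$, checking them against the lower bound $\lfloor k/d\rfloor^\sigma \binom{d}{\sigma}$.

```python
from itertools import combinations
from math import comb

def count_rainbow_subsets(k, d, sigma):
    """Count subsets u of [k] with |u| = sigma whose elements have pairwise
    distinct colors under the coloring j -> j mod d (the sigma-cliques of G)."""
    return sum(
        1
        for u in combinations(range(1, k + 1), sigma)
        if len({j % d for j in u}) == sigma
    )

k, d = 10, 3
r = k // d
for sigma in range(1, d + 1):
    assert count_rainbow_subsets(k, d, sigma) >= r ** sigma * comb(d, sigma)
```

For $k = 10$, $d = 3$, $\sigma = 3$ there are $4\cdot 3\cdot 3 = 36$ such subsets, comfortably above the guaranteed $3^3\binom{3}{3} = 27$.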
\begin{lemma}
\label{Tsternsigma}
For each $d \in\N$ there exists a set of weights $\bsgamma$ such that $\AC(\bsgamma)$ has algorithmic dimension $d$ and such that for all
$\sigma \in \N \cup \{\infty\}$ we have
$t^\ast_\sigma = \min\{\sigma, d\}$.
\end{lemma}
\begin{proof}
For $d\in\N$ let $\bsgamma$ be weights as in Lemma \ref{lem_alg_dim1}. Due
to (\ref{vest}) we have for all $\sigma \in \N \cup \{\infty\}$ that
$t^*_{\sigma} \ge \min\{\sigma,d\}$. Since the algorithmic dimension of
$\bsgamma$ is $d$, we have additionally that $t^*_{\sigma} \le d$.
Since always $t^*_{\sigma} \le \sigma$, the statement of the lemma is valid.
\end{proof}
For general weights with finite algorithmic dimension we
just know that the values $\decay_{\bsgamma,1},\ldots,\decay_{\bsgamma,\omega}$
satisfy the relation $\decay_{\bsgamma,1} \ge \ldots \ge \decay_{\bsgamma,\omega}$. We can, e.g., easily construct weights of finite algorithmic dimension
whose set of active coordinate sets $\AC(\bsgamma)$ consists only of sets
of size at least $\sigma \in \{2,\ldots,\omega\}$. Thus $\decay_{\bsgamma,1}
= \ldots = \decay_{\bsgamma, \sigma-1} = \infty$, but $\decay_{\bsgamma, \sigma}$
may be either finite or infinite. Together with Lemma \ref{Tsternsigma} this
argument shows that for general weights with finite algorithmic dimension
we should use the general form of the bounds (\ref{neslowbou}) and (\ref{unrlowbou})
to fully exploit the specific features of the weights we are working with.
\section{Upper bounds}
\label{UB}
Here we provide constructive upper bounds on the exponents of tractability
in the nested and in the unrestricted subspace sampling model. To this purpose
we consider two types of algorithms: \emph{multilevel algorithms}, which perform well
in the nested subspace sampling model, and \emph{changing dimension algorithms},
which are well suited for the unrestricted subspace sampling model.
\subsection{Multilevel algorithms}
\label{MLA}
Let us describe more precisely the
general form of the algorithms we want to use.
Let $L_0:=0$, let $L_1<L_2<L_3 <\ldots$ be natural numbers,
and let
\begin{equation}
\label{vk12}
v^{(1)}_k := \cup_{j\in [L_k]} u_j
\hspace{2ex}\text{and}\hspace{2ex}
v^{(2)}_k := [L_k]
\hspace{2ex}\text{for $k\in\N$}.
\end{equation}
In the general case we will use the sets
$v^{(1)}_k$, $k=1,\ldots,m$.
In the special case of POD weights,
it is more convenient to
make use of the relatively simple
ordering of the corresponding set system $u_j$, $j\in\N$,
and choose
the sets $v^{(2)}_k$ for $k=1,\ldots,m$.
In all definitions and results that hold for both choices
of the $v_k^{(i)}$, $i=1,2$, we simply write $v_k$, and we put $v_0:=\emptyset$.
We will choose the numbers $L_1,L_2,\ldots$ in general such that
$|v_k| = \Theta(a^{k})$ for some $a\in (1,\infty)$. (A default choice
would be $a=2$.) Let
\begin{equation*}
V_k := \{j\in\N \,|\, u_j\subseteq v_k
\hspace{1ex}\text{and}\hspace{1ex} u_j\not\subseteq v_{k-1}\}
\hspace{2ex}\text{for $k\ge 1$.}
\end{equation*}
Let us furthermore define
$$
U(m) := \cup_{k=1}^m V_k \cup \{0\}.
$$
For $u\in \U$ we define the mapping $\Psi_{u}:\Hg\to \Hg$ by
\begin{equation*}
(\Psi_{u}f)(\bsx) = f(\bsx_u;\bsc)
\hspace{2ex}\text{for all $\bsx\in D^\N$.}
\end{equation*}
We put
\begin{equation}
\label{algobaustein}
Q_{v_k}(f)
:= \sum_{j=1}^{n_k} a_j^{(k)}
f(\bst^{(j,k)}_{v_{k}};{\bf c}), \hspace{2ex}\text{and}\hspace{2ex} \widehat{Q}_{k}(f) := Q_{v_k}(f - \Psi_{v_{k-1}}f),
\end{equation}
where the numbers $n_1\ge n_2 \ge \ldots \ge n_m$, the coefficients $a_j^{(k)}$,
and the points
$\bst^{(1,k)}_{v_{k}},
\ldots,\bst^{(n_k,k)}_{v_{k}}\in [0,1]^{v_k}$
will be chosen later, depending on the weights $\bsgamma$.
Define the \emph{multilevel algorithm} $Q^{\ML}_m$ via
\begin{equation}
\label{multilevel-algo}
Q^{\ML}_m(f)
:= f(\bsc) + \sum_{k=1}^m \widehat{Q}_{k}(f)
= f(\bsc) + \sum_{k=1}^m \sum_{j=1}^{n_k} a_j^{(k)}
(f - \Psi_{v_{k-1}}f) (\bst^{(j,k)}_{v_{k}};{\bf c}).
\end{equation}
If we choose the nested sequence of coordinate sets $v_1 \subset v_2
\subset v_3 \subset \ldots$ in the nested subspace sampling model,
then the cost of the multilevel algorithm $Q^{\ML}_m$ satisfies
\begin{equation}
\label{costML}
\cost_{\nes}(Q^{\ML}_m) \le \$(0) + 2 \sum^m_{k=1} n_k \$(|v_k|),
\end{equation}
and the same cost bound is valid in the more generous unrestricted
subspace sampling model.
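To illustrate (\ref{costML}), the following Python sketch (parameter choices hypothetical) evaluates the cost bound $\$(0) + 2\sum_{k=1}^m n_k\,\$(|v_k|)$ for a linear cost function and the geometric choice $|v_k| = 2^k$, with sample sizes $n_k$ balanced so that every level contributes the same cost.

```python
def ml_cost_bound(dollar, sizes, n):
    """Evaluate the upper bound (costML) on cost_nes of the multilevel algorithm.

    dollar : cost function $(k) of one evaluation depending on k variables
    sizes  : |v_1|, ..., |v_m| (nested coordinate sets)
    n      : sample sizes n_1 >= ... >= n_m per level
    """
    assert all(n[i] >= n[i + 1] for i in range(len(n) - 1))
    return dollar(0) + 2 * sum(nk * dollar(vk) for nk, vk in zip(n, sizes))

dollar = lambda k: max(k, 1)                 # linear cost model
m = 5
sizes = [2 ** k for k in range(1, m + 1)]    # |v_k| = 2^k
n = [2 ** (m - k) for k in range(1, m + 1)]  # n_k |v_k| = 2^m on every level
cost = ml_cost_bound(dollar, sizes, n)       # = 1 + 2 * m * 2**m
```

Equalizing the per-level cost in this way is a natural starting point before the $n_k$ are optimized against the error bound, as discussed at the end of this subsection.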
From (\ref{worid}) we obtain
\begin{equation*}
[e(Q^{\ML}_m;\Hg)]^2 = \sum_{j\in\N_0}
\gamma_{u_j} [e((Q^{\ML}_{m})_{u_j};H_{u_j})]^2,
\end{equation*}
where $(Q^{\ML}_{m})_{u_j} = Q^{\ML}_{m} \circ P_{u_j} = \sum^m_{k=1}
(\widehat{Q}_{k})_{u_j}$. Note that
$e((Q^{\ML}_{m})_{u_0};H_{u_0}) =
e((Q^{\ML}_{m})_{\emptyset};H_{\emptyset}) = 0$,
since $Q^{\ML}_m$ is exact on constant functions. Notice furthermore that
we have $(\widehat{Q}_{k})_{u_j}(f) = 0$ whenever $j\notin V_k$, and
$(\widehat{Q}_{k})_{u_j}(f) = (Q_{v_k})_{u_j}(f) = Q_{v_k}(f_{u_j})$
if $j\in V_k$. Thus we get
\begin{equation}
\label{worMLid}
[e(Q^{\ML}_m;\Hg)]^2 = \sum^m_{k=1} \sum_{j\in V_k}
\gamma_{u_j} [e((Q_{v_k})_{u_j};H_{u_j})]^2
+ \sum_{j\notin U(m)} \widehat{\gamma}_{u_j}.
\end{equation}
Let us now for simplicity assume that $v_k = [\max v_k]$ for all $k\in\N$, which
is always possible by simply renumbering the variables recursively.
For the construction of good multilevel algorithms achieving higher-order
convergence for general weights, a result of the following kind is helpful:
There exists an $\alpha \ge 1/2$ such that for each $k\in \N$ and each $n_k\in\N$
we find a quadrature $Q_{v_k}$ as in (\ref{algobaustein})
which satisfies in the case $\alpha = 1/2$ for $\tau = 1/2$, and in the case
$\alpha >1/2$ for $\tau \in [1/2, \min\{\alpha, \decay_{\bsgamma}/2\})$, $\tau$
arbitrarily close to $\min\{\alpha, \decay_{\bsgamma}/2\}$, the bound
\begin{equation}
\label{assumption3.9}
\sum_{\ell \in u \subseteq [\ell]} \gamma_{u} \left[ e \left( \left(
Q_{v_k} \right)_{u}; H_{u} \right) \right]^2
\le \widehat{C}_{\ell,\tau,\gamma} n_k^{-2\tau}
\hspace{3ex}\text{for all $\ell \in v_k\setminus v_{k-1}$,}
\end{equation}
where
\begin{equation}
\label{cktaugamma}
\widehat{C}_{\ell,\tau,\gamma} = \left( \sum_{\ell \in u \subseteq [\ell]} \gamma^{1/(2\tau)}_{u}
C^{|u|}_\tau \right)^{2\tau}
\hspace{3ex}\text{for some $C_\tau$ independent of $k$.}
\end{equation}
For many reproducing kernels $K$ such quadratures can be constructed
as quasi-Monte Carlo quadratures. Examples are (shifted) rank-$1$
lattice rules or polynomial lattice rules constructed with the
help of a component-by-component algorithm, see Section \ref{NUMINT}
or, e.g., \cite[Theorem~8]{K}, \cite[Corollary~5.4]{DKPS}, \cite[Prop.~3.9]{Gne10}.
If we use algorithms $Q_{v_k}$ that satisfy condition (\ref{assumption3.9}) to define $\widehat{Q}_k$ as in (\ref{algobaustein}),
then we obtain from (\ref{worMLid})
\begin{equation}
\label{errorestimate}
[e(Q^{\ML}_m;\Hg)]^2 \le \sum^m_{k=1} C_{k,\tau,\gamma} n_k^{-2\tau}
+ \sum_{j\notin U(m)} \widehat{\gamma}_{u_j},
\end{equation}
where
\begin{equation}
\label{c_k_tau_gamma}
C_{k,\tau,\gamma} = \sum_{\ell \in v_k\setminus v_{k-1}} \widehat{C}_{\ell,\tau,\gamma}.
\end{equation}
The aim is to minimize the right hand side of this error bound for
given cost by choosing $\tau$, $m$, and $n_1,\ldots,n_m$
(nearly) optimal. To this purpose one needs a good estimate for the
constants $C_{k,\tau,\gamma}$ and for the tail
$\sum_{j\notin U(m)} \widehat{\gamma}_{u_j}$, i.e., more specific
information about the weights.
\subsection{Changing dimension algorithms}
\label{CDA}
For given weights $\bsgamma$ let $\mathcal{A}_0$ be a finite subset
of $\mathcal{A}(\bsgamma)$.
A \emph{changing dimension algorithm} $Q^{\CD}$ is an algorithm of the
form
\begin{equation}
\label{CD-algo}
Q^{\CD}(f) = \sum_{u \in \mathcal{A}_0} Q_{n_u,u}(f_u),
\end{equation}
where the integrand $f \in \Hg$ has the uniquely determined anchored decomposition
\begin{equation*}
f(\bsx) = \sum_{u\in\mathcal{A}} f_u(\bsx)
\end{equation*}
and $Q_{n_u,u}$ is a quadrature rule for approximating $I_u(f_u)$.
If the building blocks $Q_{n_u,u}$ are linear algorithms, then $Q^{\CD}$
is linear as well; this follows from the explicit formula
\begin{equation*}
f_u(\bsx) = \sum_{v \subseteq u} (-1)^{|u\setminus v|} f(\bsx_v;\bsc)
\end{equation*}
for arbitrary $u\in\AC$, see \cite{KSWW10a}.
Thus a function evaluation $f_u(\bsx)$ can be done at cost bounded by
$|\{v\in\AC \,|\, v\subseteq u\}| \$(|u|) \le 2^{|u|}\$(|u|)$.
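To make this concrete, the explicit formula can be turned into a short evaluation routine. The following Python sketch is our own and assumes the anchor is the constant vector $(c,\ldots,c)$; the test integrand is purely illustrative.

```python
from itertools import combinations

def anchored_term(f, x, u, c):
    """Evaluate f_u(x) = sum over v subseteq u of (-1)^{|u minus v|} f(x_v; c),
    where (x_v; c) keeps the coordinates in v and anchors the rest at c."""
    total = 0.0
    for k in range(len(u) + 1):
        for v in combinations(u, k):
            y = [c] * len(x)          # start from the anchor
            for j in v:
                y[j] = x[j]           # restore the coordinates in v
            total += (-1) ** (len(u) - k) * f(y)
    return total
```

For $f(\bsy)=y_0y_1+y_0$ anchored at $c=0$ this returns $f_{\{0\}}(\bsx)=x_0$ and $f_{\{0,1\}}(\bsx)=x_0x_1$; it uses $2^{|u|}$ evaluations of $f$, in line with the cost bound above.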
Changing dimension algorithms for infinite-dimension\-al integration were
introduced in \cite{KSWW10}. For POD weights we use a slight modification
of the changing dimension algorithms presented in \cite{PW11} and for weights with
finite active dimension we employ the changing dimension algorithms from \cite[Sect.~4]{KSWW10}.
\subsection{Product and order-dependent weights}
\label{UB_POD}
We consider now product and order-dependent (POD) weights, where for each $u \in\U$ we have
\begin{equation*}
\gamma_u = \Gamma_{|u|} \prod_{j \in u} \gamma_j,
\end{equation*}
where $(\Gamma_k)_{k \in \N_0}$ and $(\gamma_j)_{j\in \N}$ are sequences of nonnegative real numbers as in (\ref{pod}). (Note that one has to distinguish between $\gamma_u$, where $u \in \U$ is a finite set of positive integers, and $\gamma_d, \gamma_j$, where $d, j \in \N$ are positive integers.)
Before we present the concrete algorithms that we use to obtain upper
bounds for the exponents of tractability $p^{\nes}$ and $p^{\unr}$,
we provide some useful results on POD weights.
\begin{lemma}\label{lem_pod_example}
Let $p^\ast \ge 2q^\ast \ge 2$ be such that $p^\ast/(2q^\ast) \in \mathbb{N}$. For the POD weights determined by $\gamma_j = j^{-p^*}$ for $j\in\N$, $\Gamma_0=1=\Gamma_1$, and
\begin{equation*}
\Gamma_k= (k!)^{p^\ast} k^{p^\ast/2-q^\ast} \left(\frac{(p^\ast/q^\ast) \sin ( q^\ast \pi/ p^\ast )}{\pi} \right)^{k p^\ast}
\hspace{2ex}\text{for $k\ge 2$,}
\end{equation*}
we have
\begin{equation*}
\decay_{\bsgamma,\infty} = q^\ast \quad \mbox{ and } \quad \decay_{\bsgamma,\sigma} = p^\ast \quad \mbox{for all } \sigma \in \mathbb{N}.
\end{equation*}
\end{lemma}
A rigorous proof of Lemma \ref{lem_pod_example} can be found in
Section \ref{APPENDIX}.
We suspect that the condition $p^\ast/(2q^\ast) \in \mathbb{N}$ in the above lemma is not necessary. If the condition $q \le p^\ast/2$ can be replaced by $q \le p^\ast$ in Corollary~\ref{cor_pod_criteria} in Section \ref{APPENDIX}, then the condition $p^\ast \ge 2 q^\ast$ can be replaced by $p^\ast \ge q^\ast$ in the above lemma.
Lemma~\ref{lem_pod_example} considers the boundary case where for given product weights $\gamma_j$, the $\Gamma_k$ are made as large as possible such that the POD weights still have finite decay. This allows us to obtain cases where the decay of the POD weights differs from the decay of the corresponding product weights, cf. also Lemma \ref{Lemma3.8}. In the following theorem we consider POD weights where $\Gamma_k$ is smaller such that the decay of the POD weights is always the same as the decay of the corresponding product weights.
\begin{theorem}\label{Corollary2}
Let $\bsgamma = (\gamma_u)_{u\in \U}$ be POD weights with $\gamma_u
= \Gamma_{|u|}\prod_{j\in u} \gamma_j$. Let $p^*:= \decay_{\bsgamma,1} >1$
and $q\le p^*$. Let there exist a constant $C_q>0$ such that
$\Gamma_k \le C_q (k!)^q$ for all $k\in\N$.
In the case where $q=p^*$, we additionally assume
$\sum^\infty_{j=1} \gamma_j^{1/p^*} <1$.
Then we get the following results:
If $p^*=q$, then
\begin{equation*}
\sum_{d\in u\subseteq [d]} \gamma_u^{1/p^*} = \Theta(\gamma_d^{1/p^*}).
\end{equation*}
If $p^* > q$, then
\begin{equation*}
\sum_{d\in u\subseteq [d]} \gamma_u^{1/p} = \Theta(\gamma_d^{1/p})
\hspace{2ex}\text{for all $p\in (q,p^*)$.}
\end{equation*}
The last identity also holds for $p = p^*$ if $\sum^\infty_{j=1} \gamma_j^{1/p^*} <\infty$.
In particular, our assumptions lead for all $q\le p^*$ to
$\decay_{\bsgamma,\infty} = \decay_{\bsgamma,1}$.
\end{theorem}
In the proof we use the multi-index notation, which we recall here:
For $\bsnu=(\nu_j)^d_{j=1} \in \N_0^d$ we write $|\bsnu| := \nu_1 + \cdots
+\nu_d$ and $\bsnu !:= \prod^d_{j=1} \nu_j !$.
\begin{proof}
Obviously, we always have
\begin{equation*}
\gamma_d^{1/p} = \Gamma_1 \gamma_d^{1/p}
\le \sum_{d\in u\subseteq [d]} \gamma_u^{1/p}
\end{equation*}
and $\decay_{\bsgamma,\infty} \le \decay_{\bsgamma,1}$.
Now let us consider the case where $q=p^*$ and $T:= \sum^\infty_{j=1} \gamma_j^{1/p^*} <1$. Then
\begin{equation*}
\sum_{d\in u\subseteq [d]} \gamma_u^{1/p^*} = \sum_{d\in u\subseteq [d]} \Gamma_{|u|}^{1/p^*}\prod_{j\in u} \gamma_j^{1/p^*}
\le C_{p^*}^{1/p^*} \sum_{d\in u\subseteq [d]} (|u|!)\prod_{j\in u} \gamma_j^{1/p^*}.
\end{equation*}
Similar as in \cite[Lemma~6.2]{KSS11} we now employ the multinomial formula
and the formula for (finite) geometric series to obtain
\begin{equation*}
\begin{split}
\sum_{d\in u\subseteq [d]} \gamma_u^{1/p^*}
&\le C_{p^*}^{1/p^*} \sum_{\bsnu \in \N_0^d; \nu_d \neq 0} \frac{|\bsnu|!}{\bsnu !}
\prod_{j=1}^d \gamma_j^{\nu_j/p^*}\\
&= C_{p^*}^{1/p^*} \sum_{\kappa =0}^\infty \left( \sum_{\bsnu \in \N_0^d; |\bsnu|
= \kappa} \frac{\kappa !}{\bsnu !} \prod_{j=1}^d \gamma_j^{\nu_j/p^*}
- \sum_{\bsnu \in \N_0^{d-1}; |\bsnu|
= \kappa} \frac{\kappa !}{\bsnu !} \prod_{j=1}^{d-1} \gamma_j^{\nu_j/p^*}
\right) \\
&= C_{p^*}^{1/p^*} \sum_{\kappa =0}^\infty \left[ \left( \sum^d_{j=1} \gamma_j^{1/p^*}
\right)^{\kappa} - \left( \sum^{d-1}_{j=1} \gamma_j^{1/p^*} \right)^{\kappa}
\right] \\
&\le C_{p^*}^{1/p^*} (1-T)^{-2} \gamma_d^{1/p^*}.
\end{split}
\end{equation*}
In particular, we showed that $\sum_{u\in \U} \gamma_u^{1/p^*} < \infty$,
which implies that $\decay_{\bsgamma,\infty} \ge p^*$.
Let now $\Gamma_{k} \le C_q (k!)^{q}$ for some $q<p^*$, and let $p\in (q,p^*]$
with $\sum^\infty_{j=1} \gamma_j^{1/p}<\infty$. (Recall that this sum is
always finite if $p < p^*$.)
Let $S := \left( 2 \sum_{j=1}^\infty \gamma_j^{1/p} \right)^p$ and set $\gamma^\ast_j := \gamma_j/S$.
Then $\sum_{j=1}^\infty (\gamma_j^\ast)^{1/p} = 1/2 < 1$.
Set $\Gamma^\ast_{k} = S^{k} \Gamma_{k}$ for all $k\in\N_0$. Then there is a constant $C^\ast > 0$ such that $\Gamma^\ast_{k} = S^{k} \Gamma_{k} \le S^{k} C_q (k!)^{q} \le C^\ast (k!)^p$. Thus, by the argument used in the case
$p^*=q$, we get
\begin{equation*}
\sum_{d\in u\subseteq [d]} \gamma_u^{1/p} =
\sum_{d\in u\subseteq [d]} \left( \Gamma^*_{|u|} \prod_{j\in u} \gamma_j^* \right)^{1/p} = O(\gamma_d^{1/p}).
\end{equation*}
In particular, we showed that $\sum_{u\in \U} \gamma_u^{1/p} < \infty$
for all $p<p^*$, which implies that $\decay_{\bsgamma,\infty} \ge p^*$.
The same holds for $p=p^*$ if $\sum_{j=1}^\infty \gamma_j^{1/p} < \infty$.
\end{proof}
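The two-sided bound of Theorem \ref{Corollary2} is easy to check numerically. The sketch below is our own; the weight choice $\Gamma_k = k!$ (i.e., $q=1$) and $\gamma_j = j^{-4}$ (i.e., $p^* = 4$) is purely illustrative, with $p = 2 \in (q, p^*)$.

```python
from itertools import combinations
from math import factorial

def gamma_u(u, q=1.0, pstar=4.0):
    """Illustrative POD weight gamma_u = (|u|!)^q * prod_{j in u} j^(-pstar)."""
    g = float(factorial(len(u))) ** q
    for j in u:
        g *= float(j) ** (-pstar)
    return g

def ratio(d, p):
    """sum_{d in u subseteq [d]} gamma_u^{1/p}, normalized by gamma_{{d}}^{1/p}."""
    total = 0.0
    for k in range(d):
        for v in combinations(range(1, d), k):
            total += gamma_u(v + (d,)) ** (1.0 / p)
    return total / gamma_u((d,)) ** (1.0 / p)
```

The normalized sums increase monotonically in $d$ (a larger $d$ only adds positive terms) and remain bounded, as the $\Theta(\gamma_d^{1/p})$ statement predicts.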
\begin{corollary}\label{Corollary}
Let $\bsgamma = (\gamma_u)_{u\in \U}$ be POD weights with $\gamma_u
= \Gamma_{|u|}\prod_{j\in u} \gamma_j$. Let $p^*:= \decay_{\bsgamma,1} >1$
and $q < p^*$. Let there exist a constant $C_q>0$ such that
$\Gamma_k \le C_q (k!)^q$ for all $k\in\N$.
Then we have for every $\tau\in [1,p^*)$ and every constant $\widetilde{C}_\tau>0$ that
\begin{equation*}
\sum_{d\in u\subseteq [d]} \gamma_u^{1/\tau} \widetilde{C}_{\tau}^{|u|}
= \Theta(\gamma_d^{1/\tau}).
\end{equation*}
\end{corollary}
\begin{proof}
Let $\tau$ and $\widetilde{C}_\tau$ be given.
Obviously, $\sum_{d\in u\subseteq [d]} \gamma_u^{1/\tau} \widetilde{C}_{\tau}^{|u|}
= \Omega(\gamma_d^{1/\tau})$. Now let
$p\in (\max\{\tau,q\}, p^*)$. Define the POD weights $\widetilde{\bsgamma}
= (\widetilde{\gamma}_u)_{u\in\U}$ by $\widetilde{\gamma}_u = \Gamma_{|u|}\prod_{j\in u} \widetilde{\gamma}_j$ with
$\widetilde{\gamma}_j = \gamma_j \widetilde{C}^\tau_\tau$. Then $p^* = \decay_{\widetilde{\bsgamma},1}$ and, due to Jensen's inequality and
Theorem \ref{Corollary2}, we obtain
\begin{equation*}
\sum_{d\in u\subseteq [d]} \gamma_u^{1/\tau} \widetilde{C}^{|u|}_\tau
= \sum_{d\in u\subseteq [d]} \widetilde{\gamma}_u^{1/\tau}
\le \left( \sum_{d\in u\subseteq [d]} \widetilde{\gamma}_u^{1/p} \right)^{p/\tau}
= \Theta((\widetilde{\gamma}_d^{1/p})^{p/\tau}) = \Theta(\gamma_d^{1/\tau}).
\end{equation*}
\end{proof}
From Corollary \ref{Corollary} we immediately get the following useful corollary.
\begin{corollary}\label{Lemma3.15}
Let $\bsgamma$ be POD weights that satisfy the assumptions of
Corollary \ref{Corollary}, and let $v_k = v_k^{(2)} = [L_k]$
for all $k\in\N$. Let $\tau\in [1/2, \decay_{\bsgamma,1}/2)$.
Then we have for $C_{k,\tau,\gamma}$ as in (\ref{c_k_tau_gamma})
\begin{equation*}
C_{k,\tau,\gamma} = \Theta(\sigma_k),
\hspace{3ex}\text{where $\sigma_k:= \sum_{j=L_{k-1}+1}^{L_k} \gamma_j$,}
\end{equation*}
and furthermore
\begin{equation*}
\sum_{j\notin U(m)} \widehat{\gamma}_{u_j}
= \Theta \left( \sum_{j=L_m+1}^\infty \gamma_j \right).
\end{equation*}
\end{corollary}
\subsubsection{Nested subspace sampling}
Let $\bsgamma$ be POD weights that satisfy the assumptions of
Corollary \ref{Corollary}.
Let $L_k := L\lceil a^{k-1} \rceil$ for $k\in\N$, where
$L\in\N$ and $a\in(1,\infty)$ are fixed.
(A canonical
choice would be $L=1$ and $a=2$, but in some applications other
choices may be more convenient.)
Furthermore, let $v_k = v_k^{(2)} = [L_k]$
for all $k\in\N$. Let $\alpha \ge 1/2$.
We use multilevel algorithms $Q^{\ML}_m$ as in (\ref{multilevel-algo})
that employ quadratures
$Q_{v_k}$ fulfilling the estimate (\ref{assumption3.9}).
In particular, these multilevel algorithms satisfy
the error estimate (\ref{errorestimate}).
\begin{theorem}\label{Theorem6}
Let $\$(k)=O(k^s)$ for some $s\ge 0$. Let $\bsgamma = (\gamma_u)_{u\in \U}$ be POD weights that satisfy the assumptions of
Corollary \ref{Corollary}.
We assume that there exists an $\alpha \ge 1/2$ such that for all $k\in\N$ and
all $n_k\in \N$ we find quadratures $Q_{v_k}$ as in (\ref{algobaustein})
that satisfy (\ref{assumption3.9}). Then our multilevel algorithms $Q^{\ML}_m$,
defined as in (\ref{multilevel-algo}), establish the following result:
In the case where $s\ge (2\alpha-1)/2\alpha$ we obtain
\begin{equation}
\label{uppboupod}
p^{\nes} \le \max \left\{ \frac{1}{\alpha}, \frac{2s}{\decay_{\bsgamma,1} - 1} \right\}.
\end{equation}
In the case where $0 \le s < (2\alpha-1)/2\alpha$, we obtain for
\begin{itemize}
\item[] $\decay_{\bsgamma,1} \ge 2\alpha$:
\begin{equation*}
p^{\nes} \le \frac{1}{\alpha},
\end{equation*}
\item[] $2\alpha > \decay_{\bsgamma,1} > 1/(1-s)$:
\begin{equation*}
p^{\nes} \le \frac{2}{\decay_{\bsgamma,1}},
\end{equation*}
\item[] $1/(1-s) \ge \decay_{\bsgamma,1} >1$:
\begin{equation*}
p^{\nes} \le \frac{2s}{\decay_{\bsgamma,1}-1}.
\end{equation*}
\end{itemize}
\end{theorem}
If the assumptions of Theorem \ref{Theorem6} hold and if additionally
the $n$th minimal worst case error of univariate integration satisfies
$e(n; H(K)) = \Omega(n^{-\alpha})$, then, due to the lower bound on $p^{\nes}$ in
(\ref{neslowboupod}), we have a sharp upper bound on the exponent $p^{\nes}$ if $s\ge (2\alpha-1)/2\alpha$, and for
$\decay_{\bsgamma,1} \ge 2\alpha$ and for $1/(1-s) \ge \decay_{\bsgamma,1} >1$
if $0 \le s < (2\alpha-1)/2\alpha$.
Observe that the case $s\ge(2\alpha -1)/2\alpha$ is more interesting and relevant
than the case $0 \le s < (2\alpha-1)/2\alpha$, see, e.g., \cite{Gil08a, NHMR11, PW11}.
Notice further that Theorem \ref{Theorem6} improves on the corresponding
results in \cite{Gne10, NHMR11} for product weights. (Compare, e.g., Theorem \ref{Theorem6}
with \cite[Thm.~4.2]{Gne10} and \cite[Cor.~2]{NHMR11}, where the Wiener kernel
$K(x,y) = \min\{x,y\}$ is treated.)
\begin{proof}
Let $p \in (1,\decay_{\bsgamma,1})$ and let $\tau\in [1/2,\min\{\alpha, p/2\})$
satisfy (\ref{assumption3.9}).
(Here we treat in detail only the case $\alpha > 1/2$; in the easier case $\alpha =1/2$
one chooses always $\tau = 1/2$.)
Let $\sigma_k$ be as in Corollary \ref{Lemma3.15}. Then we get from (\ref{errorestimate})
and Corollary \ref{Lemma3.15} that
\begin{equation*}
[e(Q^{\ML}_m;\Hg)]^2 = O \left( \sum^m_{k=1} \sigma_{k} n_k^{-2\tau}
+ \sum_{j= L_m +1}^\infty \gamma_{j} \right).
\end{equation*}
Let $m$ be given, and put $M:= \sum^m_{k=1}L_k^s$. For given cost $S \ge M$
of order $S= \Theta(L^s_m)$ we choose the number of sample points $n_k$ as $n_k := \lceil x_k \rceil$,
where
\begin{equation*}
x_k = C \sigma_k^{\frac{1}{2\tau+1}} L_k^{-\frac{s}{2\tau+1}},
\hspace{3ex}\text{with}\hspace{3ex}
C= S \left( \sum^m_{k=1} \sigma_k^{\frac{1}{2\tau+1}} L_k^{\frac{2\tau s}{2\tau+1}}
\right)^{-1}.
\end{equation*}
The cost of the multilevel algorithm $Q^{\ML}_m$ is then of order
$\cost_{\nes}(Q^{\ML}_m) = O(S)$.
We get
\begin{equation*}
\sum_{k=1}^m \sigma_k n_k^{-2\tau}
\le S^{-2\tau} \left( \sum_{k=1}^m \sigma_k^{\frac{1}{2\tau +1}}
L_k^{\frac{2\tau s}{2\tau +1}} \right)^{2\tau +1}.
\end{equation*}
Since $\sigma_k = O(L_{k-1}^{1-p})$ and $\sum_{j= L_m+1}^\infty \gamma_{j}
= O(L_m^{1-p})$, we obtain the error estimate
\begin{equation}\label{ausgangsgleichung}
[e(Q^{\ML}_m;\Hg)]^2 = O \left( S^{-2\tau} \big(1 + L_m^{1- p +2s\tau} \big)
+ L_m^{1-p} \right) = O \left( S^{-2\tau} + S^{-\frac{p-1}{s}} \right).
\end{equation}
\emph{Case 1}: $s\ge (2\alpha -1)/2\alpha$.
Here we have two subcases.
\emph{Subcase 1a}: $p\ge 1+2\alpha s$.
This implies $(p-1)/s \ge 2\alpha$ and $p \ge 2\alpha$. Hence we obtain
\begin{equation}\label{Fall1a}
[e(Q^{\ML}_m;\Hg)]^2 = O \left( S^{- 2\tau } \right),
\end{equation}
and we may choose $\tau$ arbitrarily close to $\alpha$.
\emph{Subcase 1b}: $1+ 2\alpha s> p >1$. Then it is not hard to verify that
$(p -1)/s \in (0,\min\{2\alpha, p\})$.
Thus we may choose $\tau \ge (p -1)/2s$ and get
\begin{equation}\label{Fall1b}
[e(Q^{\ML}_m;\Hg)]^2 = O \left( S^{- \frac{p -1}{s} } \right).
\end{equation}
If we let $p$ tend to $\decay_{\bsgamma,1}$, we see that the estimates (\ref{Fall1a}) and (\ref{Fall1b}) imply (\ref{uppboupod}).
\emph{Case 2}: $(2\alpha -1)/2\alpha > s \ge 0$.
Here we have three subcases.
\emph{Subcase 2a}: $p\ge 2\alpha$. Then $(p -1)/s > 2\alpha$
and we get (\ref{Fall1a}), where we again can choose $\tau$ arbitrarily close
to $\alpha$.
\emph{Subcase 2b}: $2\alpha > p > 1/(1-s)$. Then $(p -1)/s > p$. Hence we get (\ref{Fall1a}) and may choose $\tau$ arbitrarily close to $p/2$.
\emph{Subcase 2c}: $1/(1-s) \ge p >1$. Then $2\alpha > p \ge (p-1)/s$.
Choosing $\tau \ge (p-1)/2s$, we obtain (\ref{Fall1b}).
Letting again $p$ tend to $\decay_{\bsgamma,1}$, we have thus verified the theorem.
\end{proof}
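The sample-size choice used in the proof can be sketched in a few lines. All names and the toy values below are our own; the routine simply evaluates $n_k = \lceil x_k \rceil$ for a given budget $S$:

```python
import math

def allocate_samples(sigma, L, S, s, tau):
    """Per-level sample sizes n_k = ceil(x_k), where
    x_k = C * sigma_k^(1/(2 tau + 1)) * L_k^(-s/(2 tau + 1)) and
    C = S / sum_k sigma_k^(1/(2 tau + 1)) * L_k^(2 tau s/(2 tau + 1))."""
    e = 1.0 / (2.0 * tau + 1.0)
    denom = sum(sig ** e * Lk ** (2.0 * tau * s * e)
                for sig, Lk in zip(sigma, L))
    C = S / denom
    return [math.ceil(C * sig ** e * Lk ** (-s * e))
            for sig, Lk in zip(sigma, L)]
```

By construction $\sum_k x_k L_k^s = S$, so the realized cost $\sum_k n_k L_k^s$ exceeds $S$ only by the rounding overhead $\sum_k L_k^s$.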
\subsubsection{Unrestricted subspace sampling}
If the cost function satisfies $\$(k) = O(k^s)$ for $0\le s \le 1$,
we may again use multilevel algorithms as done in the previous
subsection.
In the case where we have a cost function $\$(k) = \Omega(k^s)$ for
$s \ge 1$ and product weights, changing dimension algorithms, as
considered in \cite{KSWW10, PW11}, have proved to be the essentially
optimal choice in the unrestricted subspace sampling setting,
see the analysis in \cite{PW11}.
We present here a slight modification of the changing dimension
algorithms from \cite{PW11} which ensures that the results from
\cite{PW11} do not only hold for product weights but for all
POD weights that satisfy the conditions of Corollary \ref{Corollary}.
As in \cite{PW11}, we assume that there exist positive constants
$c$, $C$, $\tau$, a non-negative $\lambda_1$, and a $\lambda_2\in [0,1]$ such
that for each $u\in\U\setminus \{\emptyset\}$ and $n\in\N$ there are algorithms $Q_{n,u}$
using $n$ function evaluations of functions $f_u\in H_u$ with
\begin{equation}
\label{CDA-Bausteine}
e(Q_{n,u};H_u)^2 \le \frac{cC^{|u|}}{(n+1)^{2\tau}}
\left( 1 + \frac{\ln(n+1)}{(|u|-1)^{\lambda_2}} \right)^{\lambda_1(|u|-1)^{\lambda_2}},
\end{equation}
where by convention the last factor in (\ref{CDA-Bausteine}) should be $1$ for $|u|=1$.
We may assume that $c\ge 1$ and $C\ge C_0$, so that (\ref{CDA-Bausteine}) also holds
for $n=0$.
With the help of the building blocks $Q_{n,u}$ one can define changing
dimension algorithms for a fixed $\lambda_0 \in (0, 1-1/\decay_{\bsgamma})$ and
any given $\varepsilon >0$ in the
following way:
Let us put
\begin{equation}
\label{Ls}
L_r := \sum_{\emptyset \neq u \in \U} \gamma_u^r
\end{equation}
for suitable $r\ge 0$.
Choose $\tau$ such that
$\tau < \lambda_0\cdot \decay_{\bsgamma}/2$.
For each $u\in \U$ satisfying
$\gamma_u^{\lambda_0} L_{1-{\lambda_0}} c\, C^{|u|} \le \varepsilon^2$
we choose
$n_u = n_u(\e, \lambda_0)$ to be zero and $Q_{n_u,u}$ to be the trivial zero
algorithm $Q_{n_u,u}f_u = 0$ for all $f_u\in H_u$.
Otherwise, we put
$n_u = \lfloor (\gamma^{\lambda_0}_u L_{1-{\lambda_0}}
c\,C^{|u|} \e^{-2})^{1/2\tau} \rfloor$
and choose $Q_{n_u,u}$ as in (\ref{CDA-Bausteine}).
We define the changing dimension algorithm $Q^{\CD}_{\e}$ by
\begin{equation}
\label{def-CD}
Q^{\CD}_{\e}(f) = f(\bsc) + \sum_{\emptyset \neq u \in \U}
Q_{n_u,u}(f_u).
\end{equation}
Observe that for any $\varepsilon >0$ there are only finitely many $u\in \U$ with
$n_u \ge 1$.
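The truncation rule behind $Q^{\CD}_{\e}$ can be sketched as follows; the weight dictionary and all parameter values in the test are illustrative assumptions of our own:

```python
import math

def truncation_sizes(gammas, lam0, L1m, c, C, tau, eps):
    """n_u = 0 if gamma_u^lam0 * L_{1-lam0} * c * C^|u| <= eps^2, else
    n_u = floor((gamma_u^lam0 * L_{1-lam0} * c * C^|u| / eps^2)^(1/(2 tau)))."""
    out = {}
    for u, gamma_u in gammas.items():
        crit = gamma_u ** lam0 * L1m * c * C ** len(u)
        if crit <= eps ** 2:
            out[u] = 0          # use the trivial zero algorithm on H_u
        else:
            out[u] = math.floor((crit / eps ** 2) ** (1.0 / (2.0 * tau)))
    return out
```

For small enough weights the criterion fails and the zero algorithm is used, so only finitely many $n_u$ are positive.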
For given $\varepsilon>0$ let
\begin{equation*}
d(\e) := \max\left\{ \ell \in \N \,|\, c\,C^{\ell}
L_{1-{\lambda_0}}\gamma^{\lambda_0}_{[\ell]} > \varepsilon^2 \right\}.
\end{equation*}
Then it is easily verified that $|u|> d(\varepsilon)$ implies $n_u=0$.
Thus the ``$\e$-dimension'' $d(\e)$ is the largest number of active variables used by the
changing dimension algorithm $Q^{\CD}_{\e}$.
Due to Section \ref{CDA} we obtain
\begin{equation*}
\cost_{\unr}(Q^{\CD}_{\e})
\le \$(0) + \sum_{\emptyset \neq u \in \U} 2^{|u|}\$(|u|) n_u
\le \$(0) + \$(d(\e)) \sum_{\ell =1}^{d(\e)} 2^{\ell}\sum_{|u|=\ell} n_u.
\end{equation*}
The following theorem is a slight generalization of \cite[Thm.~1]{PW11}.
\begin{theorem}\label{Thm_hoc_cda}
Let $\bsgamma = (\gamma_u)_{u\in \U}$ be POD weights that satisfy the assumptions of
Corollary \ref{Corollary}. Let $\lambda_0 \in (0, 1-1/\decay_{\bsgamma})$, and let
$\tau <\lambda_0 \cdot \decay_{\bsgamma}/2$ satisfy (\ref{CDA-Bausteine}).
Then the changing dimension algorithm $Q^{\CD}_{\e}$ defined in
(\ref{def-CD}) satisfies
\begin{equation*}
e(Q^{\CD}_{\e};\Hg) \le \e^{1-o(1)}
\hspace{2ex}\text{as $\e\to 0$,}
\end{equation*}
and its cost satisfies
\begin{equation*}
\cost_{\unr}(Q^{\CD}_{\e})
= O\left( \$(d(\varepsilon))
\varepsilon^{-1/\tau} \right),
\end{equation*}
where
\begin{equation*}
d(\e)
= O \left( \frac{\ln(1/\e)}{\ln\ln(1/\e)} \right)
= o(\ln(1/\e)).
\end{equation*}
If the cost function $\$$ satisfies
$\$(d) = O(e^{\ell d})$ for some $\ell \ge 0$, then
the integration problem is strongly tractable with exponent
\begin{equation*}
p^{\unr} \le \max \left\{ \frac{1}{\tau},
\frac{2}{\decay_{\bsgamma}-1} \right\}.
\end{equation*}
Let us now additionally assume that $\$(d) = \Omega(d)$ and that
the $n$th minimal worst case error of univariate integration satisfies
$e(n;H(K))= \Omega(n^{-\alpha})$. If (\ref{CDA-Bausteine}) holds for
$\tau$ arbitrarily close to $\alpha$, then
\begin{equation*}
p^{\unr} = \max \left\{ \frac{1}{\alpha},
\frac{2}{\decay_{\bsgamma}-1} \right\}.
\end{equation*}
\end{theorem}
In the case of product weights, the statement of Theorem \ref{Thm_hoc_cda}
was proved in \cite{PW11}, see Theorems 1 and 2 there.
In the case where we have general POD weights satisfying the assumptions of
Corollary \ref{Corollary}, we see that $\decay_{\bsgamma,\infty} = \decay_{\bsgamma,1}$,
see Theorem \ref{Corollary2}, and these quantities do not change if we multiply
the $\gamma_j$, $j\in\N$, by some constant.
With the help of this observation one can verify that for the upper bound on
$p^{\unr}$ the analysis in \cite{PW11}
only needs to be slightly modified to carry over to POD weights that satisfy the
assumptions of Corollary \ref{Corollary}. The lower bound follows from
(\ref{neslowboupod}).
\subsection{Weights with finite algorithmic dimension}
\label{UB_FAD}
Let $\mathcal{W}\subseteq \U$ with
minimal algorithmic dimension $d\in\N$, and let $(\gamma_u)_{u\in \U}$
be weights with $\gamma_u = 0$ for all $u\notin \mathcal{W}$ (i.e.,
$\AC = \AC(\bsgamma) \subseteq \mathcal{W}$).
Assume furthermore that there exist non-negative constants $c,C,\beta_1,\beta_2$,
an $\alpha >0$, and for any $n\in\N_0$ a
quadrature $Q_n$, given by
\begin{equation}
\label{algobaustein-FAD}
Q_n(f) = \sum^n_{i=1} a_i^{(n)} f(\bst^{(i,n)})
\hspace{2ex}
\text{with $a_i^{(n)}\in\R$, $\bst^{(i,n)} \in D^{d}$,}
\end{equation}
such that
\begin{equation}
\label{assuconv}
e((Q_n)_{u}; H_u) \le
cC^{|u|}\,(n+1)^{-\alpha}\, (1+\ln(n+1))^{\beta_1 |u| + \beta_2}
\hspace{2ex}\text{for all $u\subseteq [d]$.}
\end{equation}
With the help of the algorithms $Q_n$ and a mapping $\phi$ that
satisfies
(\ref{phi}), we can construct for arbitrary $v\in \U$
algorithms $Q^{\mathcal{W}}_{n,v}$ on
$\Hg$ in the following way (cf. also \cite[Prop.~3.11]{Gne10}):
First we formally consider infinite vectors
\begin{equation*}
\bst^{(i,n)}_{\infty} \in D^\N,
\hspace{2ex}\text{where the $j$th component is}\hspace{2ex}
t^{(i,n)}_{\infty,j} := t^{(i,n)}_{\phi(j)}.
\end{equation*}
Then we define the quadrature
$Q^{\mathcal{W}}_{n,v}$ by
\begin{equation}
\label{algoW}
Q^{\mathcal{W}}_{n,v}(f) := \sum^n_{i=1} a_i^{(n)}
f(\bst^{(i,n)}_{\infty,v};\bsc)
\hspace{2ex}\text{for all $f\in\Hg$.}
\end{equation}
Note that for $u\subseteq v$, $u\in \mathcal{W}$, we have $|u|=|\phi(u)|$
and $e((Q^{\mathcal{W}}_{n,v})_u; H_u) = e((Q_n)_{\phi(u)};H_{\phi(u)} )$.
By combining such algorithms in a suitable way, we get the
following results for nested and unrestricted subspace
sampling.
\subsubsection{Nested subspace sampling}
In the nested and in the unrestricted subspace sampling regime
we propose to use multilevel algorithms $Q^{\ML}_m$ that employ
the quadratures $Q_{v_k} = Q^{\mathcal{W}}_{n_k,v_k}$ defined in (\ref{algoW}).
Here we consider for the $k$th level the set of coordinates
$v_k = v_k^{(1)} = \cup_{j\in [L_k]} u_j$ and
$L_k:= L \lceil a^{k-1} \rceil$,
where $L\in \N$ and $a\in (1,\infty)$ are fixed.
As in (\ref{algobaustein}),
the quadrature $\widehat{Q}_k$ on the $k$th level is given by
\begin{equation*}
\widehat{Q}_k(f) := Q^{\mathcal{W}}_{n_k,v_k}(f-\Psi_{v_{k-1}}f).
\end{equation*}
Due to (\ref{worMLid}) and (\ref{assuconv}) we get for arbitrarily
small $\delta >0$
\begin{equation*}
\begin{split}
[e(Q^{\ML}_m;\Hg)]^2 &=
\sum^m_{k=1}\sum_{j\in V_k} \gamma_{u_j}
[e((Q^{\mathcal{W}}_{n_k, v_k})_{u_j};H_{u_j})]^2
+ \sum_{j\notin U(m)} \widehat{\gamma}_{u_j}\\
&\le \widetilde{C}^2 \sum^m_{k=1} \left( \sum^{L_k}_{j=L_{k-1}+1}
\gamma_{u_j} \right) (n_k+1)^{2(\delta- \alpha)}
+\tail_{\bsgamma}(L_m),
\end{split}
\end{equation*}
where the constant $\widetilde{C}$ depends on
$d, \alpha,\delta, c,C, \beta_1$, and $\beta_2$, but not on $m$ or the
specific values $n_k$, $k=1,\ldots,m$. Notice that in the last
inequality we implicitly used $n_1\ge n_2 \ge \cdots \ge n_m$,
since it might happen for some $L_{k-1} < j \le L_k$ that
$u_j\subseteq v_l$ for an $l<k$.
This estimate is almost identical to estimate (45) in \cite[Sect.~3.2.2]{Gne10}:
there one just has to replace $n_k$ by $n_k+1$ and $\delta - 1$ by $\delta-\alpha$,
and rename the constant $C_{\eta,\omega,\delta}$ as $\widetilde{C}^2$. Adapting
the reasoning in \cite{Gne10} that follows after estimate (45), we obtain the
following theorem.
\begin{theorem}
\label{UppBouFAD}
Let $\$(k) = O(k^s)$ for some $s\ge 0$. Let the weights $\bsgamma$ have finite
algorithmic dimension, and let $\decay_{\bsgamma} >1$.
Assume that there exist, for some $\alpha >0$ and all $n\in\N$, algorithms $Q_n$ as in
(\ref{algobaustein-FAD}) that satisfy (\ref{assuconv}). For $k=1,2,\ldots$,
let $Q_{v_k} = Q^{\mathcal{W}}_{n_k,v_k}$ be as in (\ref{algoW}).
Then the multilevel algorithms $Q^{\ML}_m$, defined as in (\ref{multilevel-algo}),
establish the following result:
The exponent of strong tractability in the nested subspace sampling
model satisfies
\begin{equation}
\label{uppboufad}
p^{\nes} \le \max \left\{ \frac{1}{\alpha}, \frac{2s}{\decay_{\bsgamma} - 1} \right\}.
\end{equation}
\end{theorem}
If the assumptions of Theorem \ref{UppBouFAD} hold and if additionally
the $n$th minimal worst case error of univariate integration satisfies
$e(n; H(K)) = \Omega(n^{-\alpha})$, then, due to the lower bound on $p^{\nes}$ in
(\ref{neslowboupod}), we see that our upper bound on $p^{\nes}$ in (\ref{uppboufad})
is sharp for finite-intersection weights; cf. also
Section \ref{HOC-POD-NEST}.
\subsubsection{Unrestricted subspace sampling}
\label{SUBSEC_4.4.2}
In the case where the cost function $\$$ is of the form $\$(k) = \Omega(k^s)$ for some
$s>1$, we can improve the bound on the exponent of tractability from Theorem
\ref{UppBouFAD} by changing from the nested to the more generous unrestricted
subspace sampling model. For general finite-order weights $\bsgamma$ of order $\omega$ appropriate changing dimension algorithms were provided
in \cite{KSWW10}. These algorithms can in particular be used
for weights with finite algorithmic dimension $d$, which are
finite-order weights of order $\omega = d$.
If $\decay_{\bsgamma, \omega} > 1$ and if there exist algorithms $Q_n$ as in
(\ref{algobaustein-FAD}) satisfying (\ref{assuconv}), then
changing dimension algorithms lead to an upper bound
\begin{equation}
\label{ksww10}
p^{\unr} \le \max \left\{ \frac{1}{\alpha}, \frac{2}{\decay_{\bsgamma}-1} \right\},
\end{equation}
see \cite[Thm.~5(a) \& Sect.~5.7]{KSWW10}.
Together with Theorem \ref{UppBouFAD} this implies the following result.
\begin{theorem}
\label{UppBouFADunr}
Let $\$(k) = O(k^s)$ for some $s\ge 0$. Let the weights $\bsgamma$ have finite
algorithmic dimension, and let $\decay_{\bsgamma} >1$. Assume that there exist,
for some $\alpha >0$ and all $n\in\N$, algorithms $Q_n$ as in (\ref{algobaustein-FAD})
that satisfy assumption (\ref{assuconv}).
Then the exponent of tractability in the unrestricted subspace sampling model
satisfies
\begin{equation}
\label{uppboufadunr}
p^{\unr} \le \max \left\{ \frac{1}{\alpha}, \frac{2\min\{1,s\}}{\decay_{\bsgamma}-1} \right\}.
\end{equation}
\end{theorem}
Our lower bound on $p^{\unr}$ in (\ref{neslowboufad}) shows that the upper bound
(\ref{uppboufadunr}) is sharp for the sub-class of finite-intersection weights
if $e(n;H(K)) = \Omega(n^{-\alpha})$.
For finite-intersection weights and the Wiener kernel
$K(x,y) = \min\{x,y\}$ the bound (\ref{uppboufadunr}) was proved in
\cite[Thm.~3.12]{Gne10}.
\section{Higher Order Convergence}
\label{HOC}
In this section we confine ourselves to the domain $D=[0,1]$, endowed with
the restricted Lebesgue measure. We assume that $\alpha \ge 1$ is an integer.
\subsection{Higher order polynomial lattice rules}
Here we introduce polynomial lattice rules which can achieve arbitrary high convergence rates of the integration error for suitably smooth functions, see \cite{DP07}.
Classical polynomial lattices were introduced in \cite{nie92} (see also \cite[Section 4.4]{niesiam}) by Niederreiter. These lattices are obtained from rational functions over finite fields. For a prime $b$ let $\FF_b((x^{-1}))$ be the field of formal Laurent series over $\FF_b$. Elements of $\FF_b((x^{-1}))$ are formal Laurent series, $$L=\sum_{l=w}^{\infty}t_l x^{-l},$$ where $w$ is an arbitrary integer and all $t_l \in \FF_b$. Note that $\FF_b((x^{-1}))$ contains the field of rational functions over $\FF_b$ as a subfield. Further let $\FF_b[x]$ be the set of all polynomials over $\FF_b$.
The following definition is a slight generalization of the definition from \cite{nie92}, see also \cite{niesiam}, which first appeared in \cite{DP07}; see also \cite[Chapter~15.7]{DP12}.
\begin{definition}
Let $b$ be prime and $1 \le m \le n$. Let $\vartheta_n$ be the map from $\FF_b((x^{-1}))$ to the interval $[0,1)$ defined by $$\vartheta_n\left(\sum_{l=w}^{\infty}t_l x^{-l}\right)=\sum_{l=\max(1,w)}^n t_l b^{-l}.$$
For a given dimension $s \ge 1$, choose an irreducible polynomial $p \in \FF_b[x]$ with $\deg(p)=n \ge 1$ and let $\bsq = (q_1,\ldots ,q_s) \in (\FF_b[x])^s$. For $0 \le h<b^m$ let $h=h_0+h_1 b +\cdots +h_{m-1}b^{m-1}$ be the $b$-adic expansion of $h$. With each such $h$ we associate the polynomial $$h(x)=\sum_{r=0}^{m-1}h_r x^r \in \FF_b[x].$$ Then $\cS_{p,m,n}(\bsq)$ is the point set consisting of the $b^m$ points $$\bsx_h=\left(\vartheta_n\left(\frac{h(x) q_1(x)}{p(x)}\right),\ldots ,\vartheta_n\left(\frac{h(x) q_s(x)}{p(x)}\right)\right) \in [0,1)^s,$$ for $0 \le h < b^m$. The equal-weight quadrature rule $\frac{1}{b^m} \sum_{h=0}^{b^m-1} f(\bsx_h)$ using the point set $\cS_{p,m,n}(\bsq) = \{\bsx_0, \bsx_1,\ldots, \bsx_{b^m-1}\}$ is called a polynomial lattice rule.
\end{definition}
We call $\bsq$ the generating vector of the polynomial lattice rule and $p$ the modulus. For more information on (higher order) polynomial lattice rules see \cite{DP07,DP12}.
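For $b=2$ the point set $\cS_{p,m,n}(\bsq)$ can be generated with a few lines of integer arithmetic; the following sketch is our own, storing a polynomial over $\FF_2$ as a bitmask (bit $r$ holds the coefficient of $x^r$). Since the polynomial part of $h(x)q(x)/p(x)$ only contributes terms $x^{-l}$ with $l\le 0$, which $\vartheta_n$ discards, one may first reduce $h(x)q(x)$ modulo $p(x)$ and then read off the Laurent digits by repeated division:

```python
def f2_mul(a, b):
    """Multiply two polynomials over F_2 (coefficients stored as bits)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def f2_mod(a, p):
    """Reduce the polynomial a modulo p over F_2."""
    dp = p.bit_length() - 1
    while a and a.bit_length() - 1 >= dp:
        a ^= p << (a.bit_length() - 1 - dp)
    return a

def poly_lattice(p, m, n, qs):
    """Points x_h of the polynomial lattice rule S_{p,m,n}(q) in base b = 2."""
    pts = []
    for h in range(2 ** m):
        pt = []
        for q in qs:
            r = f2_mod(f2_mul(h, q), p)  # h(x) q(x) mod p(x), degree < n
            x = 0.0
            for l in range(1, n + 1):
                r <<= 1                  # multiply the remainder by x
                if r.bit_length() == p.bit_length():
                    r ^= p               # Laurent digit t_l = 1
                    x += 2.0 ** (-l)
            pt.append(x)
        pts.append(tuple(pt))
    return pts
```

For $p(x)=x^2+x+1$, $m=n=2$, and $\bsq=(1)$ this produces the points $0, 1/4, 3/4, 1/2$ for $h=0,1,2,3$, i.e., all multiples of $1/4$.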
Let $x = \sum_{i=1}^\infty \tfrac{x_i}{b^i} \in [0,1)$ and let $\sigma = \sum_{i=1}^\infty \tfrac{\sigma_i}{b^i} \in [0,1)$, where $x_i, \sigma_i \in \{0, \ldots, b-1 \}$. We define the digital $b$-adic shifted point $y$ by
\[
y = x \oplus \sigma = \sum_{i=1}^\infty \tfrac{y_i}{b^i},
\]
where $y_i = x_i + \sigma_i \in \integer_b$, the addition being carried out modulo $b$. For points $\bsx \in [0,1)^s$ and $\bssigma \in [0,1)^s$ the digital $b$-adic shift $\bsx \oplus \bssigma$ is defined componentwise.
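In base $b=2$ the digital shift is a digit-wise XOR. A minimal sketch (our own), exact for points with finite $b$-adic expansions:

```python
def digital_shift(x, sigma, b=2, prec=53):
    """y = x (+) sigma: add the b-adic digits of x and sigma modulo b,
    digit by digit, without carry, over the first `prec` digits."""
    y = 0.0
    for i in range(1, prec + 1):
        xi = int(x * b ** i) % b       # i-th b-adic digit of x
        si = int(sigma * b ** i) % b   # i-th b-adic digit of sigma
        y += ((xi + si) % b) * float(b) ** (-i)
    return y
```

Applying the same shift $\bssigma$ to every quadrature point yields the shifted rule of Definition \ref{defshiftednet}.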
\begin{definition}\label{defshiftednet}
A polynomial lattice rule $Q_{\bsq,p}$ for which the underlying quadrature points are digitally shifted by the same $\bssigma \in [0,1)^s$ is called a digitally shifted polynomial lattice rule or simply a shifted polynomial lattice rule $Q_{\bsq, p}(\bssigma)$.
\end{definition}
\subsection{Reproducing kernel of smoothness $\alpha$}
Let $c \in [0,1]$ and let $\alpha \ge 1$ be an integer. We consider the anchored reproducing kernel for smooth functions anchored at $c$ given by (see
\cite[Example~4.2]{KSWW10a})
\begin{equation*}
K_{\alpha,c}(x,y) = \left\{\begin{array}{ll} \sum_{r=1}^{\alpha-1} \frac{(x-c)^{r}}{r!}
\frac{(y-c)^r}{r!} + \int_c^1
\frac{(x-t)_+^{\alpha-1}}{(\alpha-1)!}
\frac{(y-t)_+^{\alpha-1}}{(\alpha-1)!} \,\mathrm{d} t, & \mbox{if
} x, y > c, \\ \sum_{r=1}^{\alpha-1} \frac{(x-c)^{r}}{r!}
\frac{(y-c)^r}{r!} + \int_0^c \frac{(t-x)_+^{\alpha-1}}{(\alpha-1)!}
\frac{(t-y)_+^{\alpha-1}}{(\alpha-1)!} \,\mathrm{d} t, & \mbox{if }
x, y < c, \\ 0 & \mbox{otherwise},
\end{array} \right.
\end{equation*}
where $(x-t)_+ = \max(x-t,0)$ and $(x - t)_+^0 := 1_{x > t}$ and for $\alpha=1$ the empty sum $\sum_{r=1}^{\alpha-1}$ is defined as $0$. The
inner product of the corresponding reproducing kernel Hilbert space
$H(K_{\alpha,c})$ is given by
\begin{equation*}
\langle f, g \rangle_{H(K_{\alpha,c})} = \sum_{r=1}^{\alpha-1} f^{(r)}(c)
g^{(r)}(c) + \int_0^1 f^{(\alpha)}(x) g^{(\alpha)}(x) \,\mathrm{d}
x,
\end{equation*}
with corresponding norm $\|\cdot \|_{H(K_{\alpha,c})} = \sqrt{ \langle \cdot, \cdot \rangle_{H(K_{\alpha, c})}}$. Note that for every $f \in H(K_{\alpha, c})$ we have $f(c) =0$.
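For $\alpha = 1$ the empty sum vanishes and the kernel reduces to $K_{1,c}(x,y) = \min(x,y) - c$ for $x,y > c$ and $c - \max(x,y)$ for $x,y < c$, with inner product $\int_0^1 f'(t)g'(t)\,\mathrm{d}t$. The following snippet is a numerical sanity check of the reproducing property $\langle f, K_{1,c}(\cdot,x_0)\rangle = f(x_0)$; the test function, anchor and evaluation point are arbitrary illustrative choices.

```python
import math

# Reproducing property check for alpha = 1: <f, K_{1,c}(., x0)> = f(x0),
# using d/dt K_{1,c}(t, x0) = 1 on (c, x0) and 0 elsewhere (for x0 > c),
# with the test function f(t) = sin(t - c), which satisfies f(c) = 0.

c, x0, n = 0.3, 0.8, 200_000
h = 1.0 / n

def dK(t, x):
    # derivative of t -> K_{1,c}(t, x) for x > c
    return 1.0 if c < t < x else 0.0

# midpoint rule for <f, K_{1,c}(., x0)> = int_0^1 f'(t) dK(t, x0) dt
ip = sum(math.cos((i + 0.5) * h - c) * dK((i + 0.5) * h, x0) for i in range(n)) * h
assert abs(ip - math.sin(x0 - c)) < 1e-4
```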
It is well known that the $n$th minimal error of univariate integration on $H(K_{\alpha,c})$
satisfies the lower bound
\begin{equation}\label{univariate_convergence}
e(n;H(K_{\alpha,c})) = \Omega(n^{-\alpha}).
\end{equation}
\subsection{Embedding theorem}
\label{EMBEDDING}
We now investigate the decay of the Walsh coefficients for functions in $H(K_{\alpha,c})$. To do so, we briefly introduce Walsh functions in base $b$ \cite{chrest, Fine, walsh}. Let $b \ge 2$ be an integer and let $\omega_b = \mathrm{e}^{2\pi \mathrm{i} / b}$ be a primitive $b$-th root of unity. For a nonnegative integer $k$ let $k = \kappa_0 + \kappa_1 b + \cdots + \kappa_{a-1} b^{a-1}$ denote the $b$-adic representation of $k$ and for $x \in [0,1)$ let $x = \xi_1 b^{-1} + \xi_2 b^{-2} + \cdots$ denote the $b$-adic representation of $x$, where we assume that infinitely many $\xi_i$ are different from $b-1$. Then the $k$th Walsh function in base $b$ is given by
\begin{equation*}
\wal_k(x) = \omega_b^{\kappa_0 \xi_1 + \kappa_1 \xi_2 + \cdots + \kappa_{a-1} \xi_a}.
\end{equation*}
For a function $f$ defined on $[0,1]$ we define the $k$th Walsh coefficient by
\begin{equation*}
\widehat{f}(k) = \int_0^1 f(x) \overline{\wal_k(x)} \,\mathrm{d} x.
\end{equation*}
See also \cite[Chapter~14, Appendix~A]{DP12} for more information on Walsh functions in the context of numerical integration.
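The Walsh functions as defined above are easy to implement and are orthonormal in $L_2([0,1])$; since $\wal_k$ is constant on intervals of length $b^{-a}$ for $k < b^a$, orthonormality can be checked exactly on a finite grid. The following sketch (base $b=2$, grid of $8$ points) illustrates this.

```python
import cmath

# Walsh functions in base b as defined above; wal_k is constant on intervals of
# length b^-a for k < b^a, so the discrete orthonormality check below is exact.

def wal(k, x, b=2):
    e, i = 0, 1
    while k:
        xi = int(x * b**i) % b        # digit xi_i of x
        e += (k % b) * xi             # kappa_{i-1} * xi_i
        k //= b
        i += 1
    return cmath.exp(2j * cmath.pi * e / b)

# discrete inner product over the grid {t/b^a : t = 0,...,b^a - 1}, here b=2, a=3
N = 8
def inner(k, l):
    return sum(wal(k, t / N) * wal(l, t / N).conjugate() for t in range(N)) / N
```

For example, `inner(5, 5)` is $1$ and `inner(2, 3)` vanishes (up to floating-point error), in accordance with the orthonormality of the Walsh system.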
Let $k = \kappa_1 b^{a_1-1} + \cdots + \kappa_\nu b^{a_\nu-1}$ with
$a_1 > \cdots > a_\nu > 0$ and $\kappa_1,\ldots, \kappa_\nu \in
\{1,\ldots, b-1\}$. Set
\begin{equation*}
\mu_\alpha(k) = \left\{\begin{array}{ll} 0 & \mbox{if } k = 0, \\
a_1 + \cdots + a_{\min(\alpha,\nu)} & \mbox{if } k > 0.
\end{array} \right.
\end{equation*}
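In words, $\mu_\alpha(k)$ sums the $\alpha$ largest positions of nonzero base-$b$ digits of $k$. A small illustrative helper (base $2$ by default):

```python
# mu_alpha(k): sum of the alpha largest digit positions a_1 > a_2 > ... of the
# nonzero base-b digits of k (position i corresponds to the coefficient of b^{i-1}).

def mu(alpha, k, b=2):
    pos, i = [], 1
    while k:
        if k % b:
            pos.append(i)
        k //= b
        i += 1
    return sum(sorted(pos, reverse=True)[:alpha])
```

For instance, $k = 11 = b^3 + b^1 + b^0$ in base $b = 2$ has $a_1 = 4$, $a_2 = 2$, $a_3 = 1$, so `mu(1, 11)` is $4$, `mu(2, 11)` is $6$ and `mu(3, 11)` is $7$.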
For $\alpha \ge 2$ let $\mathcal{W}_\alpha$ denote the space of all Walsh series
$f:[0,1) \to\mathbb{R}$ given by
\begin{equation*}
f(x) = \sum_{k=0}^\infty \widehat{f}(k) \wal_k(x),
\end{equation*}
with
\begin{equation*}
\|f\|_{\mathcal{W}_\alpha} := \sup_{k \in \mathbb{N}_0}
|\widehat{f}(k)| b^{\mu_\alpha(k)} < \infty.
\end{equation*}
It was shown in \cite[Lemma~3]{D09} that there is a constant $C_{1,r}
> 0$ such that
\begin{equation*}
\left| \int_0^1 \frac{x^r}{r!} \overline{\wal_k(x)} \,\mathrm{d} x
\right| \le \left\{\begin{array}{ll} 0 & \mbox{if } \nu > r, \\ C_{1,r}
b^{-\mu_r(k)} & \mbox{if } 0 \le \nu \le r
\end{array} \right\} \le C_{1,r} b^{-\mu_{\alpha}(k)}.
\end{equation*}
The constant $C_{1,r}$ can be chosen as
\begin{equation}\label{const_c1}
C_{1,r} = r! \left(\frac{3}{2\sin \pi/b}\right)^r \left(1 + \frac{1}{b} + \frac{1}{b(b+1)}\right)^{r-1}.
\end{equation}
Thus, there is a constant $C_{2,\alpha} > 0$ such that
\begin{equation*}
\left| \sum_{r=1}^{\alpha-1} \int_0^1 \frac{(x-c)^r}{r!}
\overline{\wal_k(x)} \,\mathrm{d} x \int_0^1 \frac{(y-c)^r}{r!}
\wal_l(y) \,\mathrm{d} y \right| \le C_{2,\alpha} b^{-\mu_{\alpha}(k) -
\mu_{\alpha}(l)}.
\end{equation*}
We can choose $$C_{2,\alpha} = \sum_{r=1}^{\alpha-1} C_{1,r}^2.$$
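These coefficient bounds can be observed concretely. As an illustration (base $b = 2$, $\alpha = 2$, anchor $c = 0$, test function $f(x) = x^2$; all of these are our own choices), the Walsh coefficients can be computed exactly with rational arithmetic, since $\wal_k$ is constant on dyadic intervals of length $2^{-m}$ for $k < 2^m$. The computed values display the decay $b^{-\mu_2(k)}$ and vanish for $\nu > 2$, in accordance with \cite[Lemma~3]{D09}.

```python
from fractions import Fraction

# Exact base-2 Walsh coefficients of f(x) = x^2: wal_k is constant on dyadic
# intervals of length 2^-m for k < 2^m, so piecewise integration is exact.

def walsh_coeff_x2(k, m=6):
    N = 2 ** m
    total = Fraction(0)
    for t in range(N):
        # wal_k(t/N): digit xi_i of t/N is bit (m - i) of t
        e = sum(((k >> j) & 1) * ((t >> (m - 1 - j)) & 1) for j in range(m))
        sign = -1 if e % 2 else 1
        total += sign * Fraction((t + 1) ** 3 - t ** 3, 3 * N ** 3)  # int of x^2
    return total
```

For example, the coefficients for $k = 1, 2, 3, 5$ are $-1/4$, $-1/8$, $1/16$, $1/32$ (with $\mu_2(k) = 1, 2, 3, 4$), while the coefficient for $k = 7$ (which has $\nu = 3 > 2$) vanishes.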
For $k \in \mathbb{N}_0$ let $J_k(x) = \int_0^x
\overline{\wal_k(t)}\,\mathrm{d} t$. Note that for $k > 0$ we have
$J_k(0) = J_k(1) = 0$. The following result goes back to
Fine~\cite{Fine} (see also \cite[Lemma~1]{D09}): the function
$J_k$ can be represented by the Walsh series
\begin{equation*}
J_k(x) = \sum_{m=0}^\infty r_k(m) \wal_m(x),
\end{equation*}
where for $k \in \mathbb{N}$ with $k = \kappa_1 b^{a_1-1} + \cdots + \kappa_\nu b^{a_\nu-1}$ and $k' = k-\kappa_1 b^{a_1-1}$ we have
\begin{equation*}
r_{k}(m) = \left\{\begin{array}{ll} b^{-\mu_1(k)}
(1-\omega_b^{-\kappa_1})^{-1} & \mbox{if } m = k', \\
b^{-\mu_1(k)} (1/2+(\omega_b^{-\kappa_1}-1)^{-1}) & \mbox{if } m =
k, \\ b^{-\mu_1(m)} (\omega_b^{\theta}-1)^{-1} & \mbox{if } m =
\theta b^{a_1+a-1} + k \mbox{ with } \theta \in \{1,\ldots,b-1\} \mbox{ and } a \ge 1, \\ 0 & \mbox{otherwise}.
\end{array} \right.
\end{equation*}
For $k=0$ we have
\begin{equation*}
r_{0}(m) = \left\{\begin{array}{ll} b^{-\mu_1(m)}
(\omega_b^{\theta}-1)^{-1} & \mbox{if } m = \theta b^{a} \mbox{ with } \theta \in \{1,\ldots,b-1\} \mbox{ and } a \ge 0,
\\ 0 & \mbox{otherwise}.
\end{array} \right.
\end{equation*}
For $k \in \mathbb{N}_0$ and $\alpha =1$ let $\chi^{(+)}_1(k) = \int_0^1
1_{[t,1]}(x) \overline{\wal_k(x)} \,\mathrm{d} x$ and $\chi^{(-)}_1(k) = \int_0^1
1_{[0,t]}(x) \overline{\wal_k(x)} \,\mathrm{d} x$, and for $\alpha >
1$ let
\begin{equation*}
\chi^{(+)}_\alpha(k) = \int_0^1 (x-t)_+^{\alpha-1}
\overline{\wal_k(x)}\,\mathrm{d} x
\end{equation*}
and
\begin{equation*}
\chi^{(-)}_\alpha(k) = \int_0^1 (t-x)_+^{\alpha-1}
\overline{\wal_k(x)}\,\mathrm{d} x.
\end{equation*}
\begin{lemma}\label{lem_bound_taylor}
For every $\alpha \in \mathbb{N}$ there is a constant $C > 0$, independent of $t \in [0,1]$, such that
\begin{equation*}
\left|\chi^{(+)}_\alpha(k) \right|, \left|\chi^{(-)}_\alpha(k)
\right| \le C b^{-\mu_{\alpha}(k)} \quad \mbox{for all } k \in
\mathbb{N}_0.
\end{equation*}
\end{lemma}
\begin{proof}
We show the result by induction. Let $\alpha = 1$. Then $(x-t)_+^0 = 1_{[t,1]}(x)$ and $(t-x)_+^0 = 1_{[0,t]}(x)$ and therefore the result follows from
\cite[Lemma~1]{D09}. Assume now the result holds for some $\alpha
\in \mathbb{N}$. Let $k \in \mathbb{N}$, $k = \kappa_1 b^{a_1-1} + \cdots + \kappa_\nu b^{a_\nu-1}$ and $k' = k- \kappa_1 b^{a_1-1}$ with $\kappa_1, \ldots, \kappa_\nu \in \{1,\ldots, b-1\}$, $a_1 > a_2 > \cdots > a_\nu >0$ and $0 \le k' <
b^{a_1-1}$. Then
\begin{eqnarray*}
\chi^{(+)}_{\alpha+1}(k) & = & \int_0^1 (x-t)_+^{\alpha}
\overline{\wal_k(x)} \,\mathrm{d} x \\ & = & J_k(x) (x-t)_+^{\alpha}
\mid_{x=0}^1 - \alpha \int_0^1 (x-t)_+^{\alpha-1} J_k(x)
\,\mathrm{d} x \\ & = & - \alpha \int_0^1 (x-t)_+^{\alpha-1} J_k(x)
\,\mathrm{d} x \\ & = & -\alpha \sum_{m=0}^\infty r_k(m) \int_0^1
(x-t)_+^{\alpha-1} \overline{\wal_m(x)} \,\mathrm{d} x \\ & = &
-\alpha \sum_{m=0}^\infty r_k(m) \chi^{(+)}_\alpha(m).
\end{eqnarray*}
Thus there is some constant $C > 0$ such that
\begin{equation*}
|\chi^{(+)}_{\alpha+1}(k)| \le C \alpha \left(b^{-\mu_1(k) -
\mu_{\alpha}(k')} + b^{-\mu_1(k) - \mu_{\alpha}(k)} + b^{-\mu_1(k)
- \mu_{\alpha}(k)} \sum_{a=1}^\infty b^{-a} \right) \le C'_\alpha
b^{-\mu_{\alpha+1}(k)}.
\end{equation*}
The result for $\chi^{(-)}_{\alpha+1}$ can be shown by the same
arguments.
\end{proof}
By keeping track of the constant in Lemma~\ref{lem_bound_taylor} one can show that the constant can be chosen as $C_{1,\alpha}$ given by \eqref{const_c1}.
We now prove the following continuous embedding.
\begin{theorem}\label{thm_embedding}
Let $\alpha \in \mathbb{N}$ with $\alpha \ge 2$. There is a constant $C > 0$ such that for all $f \in
H(K_{\alpha,c})$ we have
\begin{equation*}
\|f\|_{\mathcal{W}_\alpha} \le C \|f\|_{H(K_{\alpha,c})}.
\end{equation*}
Thus we have the continuous embedding
\begin{equation*}
H(K_{\alpha,c}) \hookrightarrow \mathcal{W}_\alpha.
\end{equation*}
\end{theorem}
\begin{proof}
Let $f \in H(K_{\alpha,c})$. Then for $x \in [c,1]$ we
have the Taylor expansion with integral remainder
$$f(x) = \langle f, K_{\alpha,c}(\cdot, x) \rangle_{H(K_{\alpha,c})} =
\sum_{r=1}^{\alpha-1} f^{(r)}(c) \frac{(x-c)^r}{r!} + \int_c^1 f^{(\alpha)}(t)
\frac{(x-t)_+^{\alpha-1}}{(\alpha-1)!} \,\mathrm{d} t$$ and for $x
\in [0,c]$ we have the Taylor expansion with integral remainder
$$f(x) = \langle f, K_{\alpha,c}(\cdot, x) \rangle_{H(K_{\alpha,c})} =
\sum_{r=1}^{\alpha-1} f^{(r)}(c) \frac{(x-c)^r}{r!} + \int_0^c f^{(\alpha)}(t)
\frac{(t-x)_+^{\alpha-1}}{(\alpha-1)!} \,\mathrm{d} t.$$ Therefore
\begin{eqnarray*}
\widehat{f}(k) & = & \int_0^1 f(x) \overline{\wal_k(x)} \,\mathrm{d}
x \\ & = & \sum_{r=1}^{\alpha-1} f^{(r)}(c) \int_0^1 \frac{(x-c)^r}{r!}
\overline{\wal_k(x)}\,\mathrm{d} x + \int_0^1 \int_0^1 1_{[c,1]}(t)
f^{(\alpha)}(t) \frac{(x-t)_+^{\alpha-1}}{(\alpha-1)!}
\overline{\wal_k(x)} \,\mathrm{d} t \,\mathrm{d} x \\ && + \int_0^1
\int_0^1 1_{[0,c]}(t) f^{(\alpha)}(t)
\frac{(t-x)_+^{\alpha-1}}{(\alpha-1)!} \overline{\wal_k(x)}
\,\mathrm{d} t \,\mathrm{d} x \\ & = & \sum_{r=1}^{\alpha-1}
f^{(r)}(c) \int_0^1 \frac{(x-c)^r}{r!} \overline{\wal_k(x)}\,\mathrm{d} x +
\int_0^1 1_{[c,1]}(t) f^{(\alpha)}(t) \frac{\chi_\alpha^{(+)}(k)}{(\alpha-1)!} \,\mathrm{d} t \\
&& + \int_0^1 1_{[0,c]}(t) f^{(\alpha)}(t) \frac{\chi_\alpha^{(-)}(k)}{(\alpha-1)!}
\,\mathrm{d} t.
\end{eqnarray*}
Thus, using \cite[Lemma~3]{D09} and Lemma~\ref{lem_bound_taylor}
there is some constant $C > 0$ such that
\begin{eqnarray*}
|\widehat{f}(k)| & \le & \sum_{r=1}^{\alpha-1} |f^{(r)}(c)|
\left|\int_0^1 \frac{(x-c)^r}{r!} \overline{\wal_k(x)}\,\mathrm{d} x \right|
\\ && + \int_0^1 |f^{(\alpha)}(t)| \left[1_{[c,1]}(t) |\chi_\alpha^{(+)}(k)| +
1_{[0,c]}(t) |\chi_\alpha^{(-)}(k) | \right] \,\mathrm{d} t \\ & \le & C
b^{-\mu_\alpha(k)} \left(\sum_{r=1}^{\alpha-1} |f^{(r)}(c)|^2 +
\int_0^1 |f^{(\alpha)}(t)|^2 \,\mathrm{d} t \right)^{1/2} \\ & = & C
b^{-\mu_\alpha(k)} \|f\|_{H(K_{\alpha,c})},
\end{eqnarray*}
where the constant $C >0$ is independent of $k$ and $f$.
\end{proof}
One can show that the constant in Theorem~\ref{thm_embedding} can be chosen as $C_{3,\alpha} := \sqrt{\alpha} C_{1,\alpha}$, where $C_{1,\alpha}$ is given by \eqref{const_c1}.
The result can be generalized to tensor product spaces. Let $u
\subset \mathbb{N}$ be a finite set. For $\bsx_u = (x_i)_{i \in u},
\bsy_u = (y_i)_{i \in u} \in [0,1]^{|u|}$ let
\begin{equation*}
K_{\alpha,c,u}(\bsx_u,\bsy_u) = \prod_{i \in u}
K_{\alpha,c}(x_i,y_i).
\end{equation*}
This reproducing kernel defines a reproducing kernel Hilbert space
$H(K_{\alpha,c,u})$ with inner product $\langle \cdot,
\cdot \rangle_{\alpha,c,u}$ and corresponding norm $\|\cdot
\|_{\alpha,c,u}$.
For $\bsk_u = (k_i)_{i \in u} \in \mathbb{N}_0^{|u|}$ let
\begin{equation*}
\mu_{\alpha}(\bsk_u) = \sum_{i \in u} \mu_{\alpha}(k_i).
\end{equation*}
We define the Walsh functions
\begin{equation*}
\wal_{\bsk_u}(\bsx_u) = \prod_{i\in u} \wal_{k_i}(x_i).
\end{equation*}
For $\alpha \ge 2$ we define the Walsh space $\mathcal{W}_{\alpha,u}$ as the
space of all Walsh series
\begin{equation*}
f(\bsx_u) = \sum_{\bsk_u \in \mathbb{N}^{|u|}_0} \widehat{f}(\bsk_u)
\wal_{\bsk_u}(\bsx_u)
\end{equation*}
with
\begin{equation*}
\|f\|_{\mathcal{W}_{\alpha,u}} = \sup_{\bsk_u \in \mathbb{N}^{|u|}_0}
|\widehat{f}(\bsk_u)| b^{\mu_\alpha(\bsk_u)} < \infty.
\end{equation*}
Using the representation $f(\bsx) = \langle f, K_{\alpha,c,u}
(\cdot, \bsx)
\rangle_{H(K_{\alpha,c,u})}$ one obtains a multidimensional Taylor series
with integral remainder. The $k_i$th Walsh coefficients of products
of $(x_i-c)^{r_i}$, $(x_i-t_i)_+^{\alpha-1}$ and
$(t_i-x_i)_+^{\alpha-1}$ can all be estimated by $C
b^{-\mu_\alpha(k_i)}$. Thus we obtain the following corollary.
\begin{corollary}
\label{Tau_Konvergenz}
Let $u \subset \mathbb{N}$ be a finite set. For $\alpha \ge 2$ the tensor product space
$H(K_{\alpha,c,u})$ is continuously embedded in
$\mathcal{W}_{\alpha,u}$. That is, there is a constant
$C_{4,\alpha,|u|} > 0$ such that for all $f \in
H(K_{\alpha,c,u})$ we have
\begin{equation*}
\|f\|_{\mathcal{W}_{\alpha,u}} \le C_{4,\alpha,|u|}
\|f\|_{H(K_{\alpha,c,u})}.
\end{equation*}
\end{corollary}
The constant $C_{4,\alpha,|u|}$ can be chosen as $C_{4,\alpha,|u|} = (C_{3,\alpha})^{|u|} = \alpha^{|u|/2} (C_{1,\alpha})^{|u|}$.
Consider now a reproducing kernel of the form
\begin{equation}
\label{alpha-kern}
K_{\alpha, \bsgamma}(\bsx,\bsy) = \sum_{u \subseteq [s]} \gamma_u K_{\alpha, c, u}(\bsx_u, \bsy_u),
\end{equation}
which defines the reproducing kernel Hilbert space $H(K_{\alpha, \bsgamma})$ with inner product $\langle \cdot, \cdot \rangle_{H(K_{\alpha, \bsgamma})}$ and corresponding norm $\|\cdot \|_{H(K_{\alpha, \bsgamma})}$. Further, for weights $\widetilde{\bsgamma} = (\widetilde{\gamma}_u)_{u\subseteq [s]}$ we define the Walsh space $\mathcal{W}_{\alpha, \widetilde{\bsgamma}}$ as the space of all Walsh series
\begin{equation*}
f(\bsx) = \sum_{\bsk \in \mathbb{N}_0^s} \widehat{f}(\bsk) \wal_{\bsk}(\bsx),
\end{equation*}
with finite norm
\begin{equation*}
\|f\|_{\mathcal{W}_{\alpha,\widetilde{\bsgamma}}} = \max_{u \subseteq [s]} \widetilde{\gamma}_u^{-1} \|f_u\|_{\mathcal{W}_{\alpha,u}}
\end{equation*}
where $f_u = \langle f, K_{\alpha,c,u}\rangle_{H(K_{\alpha,\bsgamma})}$ is the projection of $f$ onto $H(K_{\alpha,c,u})$. Then we have
\begin{equation*}
\|f\|_{\mathcal{W}_{\alpha,\widetilde{\bsgamma}}} \le \left( \sum_{u\subseteq [s]} \gamma_u^{-1} \|f_u\|^2_{H(K_{\alpha,c,u})} \right)^{1/2} = \|f\|_{H(K_{\alpha, \bsgamma})},
\end{equation*}
where $\widetilde{\bsgamma} = (\widetilde{\gamma}_u)_{u \subseteq [s]}$ and $\widetilde{\gamma}_u = C_{4,\alpha,|u|} \sqrt{\gamma_u}$.
\subsection{Numerical integration}
\label{NUMINT}
Let $\alpha > 1$ be an integer. The worst-case integration error in $H(K_{\alpha,c,u})$ of a quasi-Monte Carlo algorithm $Q_P(f) = \frac{1}{N} \sum_{\bsx \in P} f(\bsx)$ based on the point set $P = \{\bsx_0,\ldots, \bsx_{N-1}\} \subset [0,1]^{|u|}$ is given by
\begin{equation*}
e(Q_P; H(K_{\alpha,c,u})) = \sup_{f \in
H(K_{\alpha,c,u}), \|f\|_{H(K_{\alpha,c,u})} \le 1}
\left|\int_{[0,1]^{|u|}} f(\bsx_u) \,\mathrm{d} \bsx_u - \frac{1}{N}
\sum_{n=0}^{N-1} f(\bsx_n) \right|.
\end{equation*}
Since the reproducing kernel Hilbert space
$H(K_{\alpha,c,u})$ is continuously embedded in
$\mathcal{W}_{\alpha,u}$, the results on numerical integration of
\cite{BDGP11} in $\mathcal{W}_{\alpha,u}$ apply. From
\cite[Theorem~3.1]{BDGP11} we obtain the following result which will be used in the changing dimension algorithm.
\begin{proposition}\label{prop1}
Let $b$ be a prime number, $m \ge 1$ and $\alpha \ge 2$ be integers. Then a higher order polynomial lattice point set $\cS_{p,m,\alpha m}(\bsg)$ with modulus $p$ of degree $\alpha m$ constructed over the finite field $\mathbb{Z}_b$ of order $b$
and generating vector $\bsg \in (\mathbb{Z}_b[x])^{|u|}$ can be constructed component-by-component such that the quasi-Monte Carlo rule $Q_{\bsg,p}$ using the quadrature points $\cS_{p,m,\alpha m}(\bsg)$ satisfies
\begin{equation}
\label{estimate-for-CDA}
e(Q_{\bsg,p}; H(K_{\alpha,c,u})) \le \frac{1}{b^{\tau m}}
C_{b,\alpha,1/\tau}^{|u| \tau} \quad \mbox{for all } 1 \le \tau <
\alpha.
\end{equation}
The constant here is given by
\begin{equation*} C_{b,\alpha,1/\tau} := 1 + C_{3,\alpha} \left(\widetilde{C}_{b,\alpha,1/\tau} + \frac{(b-1)^\alpha}{b^{\alpha/\tau} -b} \prod_{j=1}^{\alpha -1} \frac{1}{b^{j/\tau}-1} \right),
\end{equation*}
where $C_{3,\alpha}$ is as in Section \ref{EMBEDDING} and
\begin{equation*}
\widetilde{C}_{b,\alpha,1/\tau} := \left\{
\begin{array}{ll}
\alpha-1 & \mbox{ if } \tau =1,\\
\frac{(b-1)((b-1)^{\alpha-1} - (b^{1/\tau} -1)^{\alpha-1})}{(b-b^{1/\tau})(b^{1/\tau}-1)^{\alpha-1}}& \mbox{ if } \tau > 1 .
\end{array}\right.
\end{equation*}
\end{proposition}
Note that one does not require a random digital shift of the polynomial lattice point set in Proposition~\ref{prop1} due to the embedding of the function space $H(K_{\alpha,c,u})$ in the Walsh space. This random digital shift is however required for $\alpha = 1$ to get a corresponding result (which is not covered in Proposition~\ref{prop1}).
The construction cost of the component-by-component algorithm is of $O(|u| N^\alpha \alpha \log N)$ operations using $O(N^\alpha)$ memory (where $N = b^m$ is the number of points), see \cite{BDLNP12}.
Consider now a reproducing kernel of the form (\ref{alpha-kern}).
For functions $f \in H(K_{\alpha,\bsgamma})$ with anchored decomposition $f= \sum_{u\subseteq [s]} f_u = \sum_{u \subseteq [s]} \sum_{\bsk_u \in \mathbb{N}_0^{|u|}} \widehat{f}_u(\bsk_u) \wal_{\bsk_u}$
we have
\begin{align*}
\left|\int_{[0,1]^s} f(\bsx)\,\rd \bsx - \frac{1}{b^m} \sum_{n=0}^{b^m-1} f(\bsx_n) \right| \le & \|f\|_{\mathcal{W}_{\alpha,\widetilde{\bsgamma}}} \sum_{\emptyset \neq u \subseteq [s]} \widetilde{\gamma}_u \frac{1}{b^m} \sum_{n=0}^{b^m-1} \sum_{\bsk_u \in \mathbb{N}_0^{|u|} \setminus \{\bszero\}} b^{-\mu_\alpha(\bsk_u)} \wal_{\bsk_u}(\bsx_{n,u}) \\ \le & \|f\|_{\mathcal{W}_{\alpha,\widetilde{\bsgamma}}} \sum_{\emptyset \neq u \subseteq [s]} \frac{1}{b^m} \sum_{n=0}^{b^m-1} \widetilde{\gamma}'_u \sum_{\bsk_u \in \mathbb{N}^{|u|}} b^{-\mu_\alpha(\bsk_u)} \wal_{\bsk_u}(\bsx_{n,u}) \\ \le & \|f\|_{H(K_{\alpha,\bsgamma})} \sum_{\emptyset \neq u \subseteq [s]} \frac{1}{b^m} \sum_{n=0}^{b^m-1} \widetilde{\gamma}'_u \sum_{\bsk_u \in \mathbb{N}^{|u|}} b^{-\mu_\alpha(\bsk_u)} \wal_{\bsk_u}(\bsx_{n,u}),
\end{align*}
where $\widetilde{\gamma}_u = C_{3,\alpha}^{|u|} \sqrt{\gamma_u}$ and
$\widetilde{\gamma}'_u = \sum_{u \subseteq v\subseteq [s]} \widetilde{\gamma}_v$ (note that $\frac{1}{b^m} \sum_{n=0}^{b^m-1} \wal_{\bsk_u}(\bsx_{n,u})$ only takes on the values $0$ or $1$). Let $\gamma'_u = \sum_{v \supseteq u} \gamma_v$. Using a slight generalization of \cite[Theorem~3.1]{BDGP11} we obtain that a higher order polynomial
lattice point set $\cS_{p,m,\alpha m}(\bsq)$ with modulus $p$ of degree $\alpha m$
and generating vector $\bsg$ can be constructed
component-by-component such that the quasi-Monte Carlo rule $Q_{\bsg,p}$ using the quadrature points $\cS_{p,m,\alpha m}(\bsq)$ satisfies
\begin{align*}
e(Q_{\bsg,p};H(K_{\alpha,\bsgamma})) \le & \frac{1}{b^{\tau m}}
\left( \sum_{u \subseteq [s]} (\widetilde{\gamma}'_u)^{1/\tau} (2 C_{b,\alpha,1/\tau})^{|u|}
\right)^\tau \\ \le & \frac{1}{b^{\tau m}}
\left( \sum_{u \subseteq v \subseteq [s]} \gamma_v^{1/(2\tau)} C_{3,\alpha}^{|v|/\tau} (2 C_{b,\alpha,1/\tau})^{|u|}
\right)^\tau \\ = & \frac{1}{b^{\tau m}}
\left( \sum_{v \subseteq [s]} \gamma_v^{1/(2\tau)} C_{3,\alpha}^{|v|/\tau} (1 + 2 C_{b,\alpha,1/\tau})^{|v|} \right)^\tau \quad \mbox{for all } 1 \le \tau < \alpha.
\end{align*}
Note that the construction above is explicit; however, the range of $\tau$ is restricted to $1 \le \tau < \alpha$. We therefore also consider the range $1/2 \le \tau < 1$. In this case one can use the construction of polynomial lattice rules from \cite{DKPS} to obtain the result that there exists a digital shift $\bssigma \in [0,1)^s$ such that
\begin{align}\label{eq_error_tauhalb}
e(Q_{\bsg,p}(\bssigma);H(K_{1,\bsgamma})) \le & \frac{1}{b^{\tau m}}
\left( \sum_{u \subseteq [s]} \gamma_u^{1/(2\tau)} (C'_{\tau})^{|u|}
\right)^\tau \quad \mbox{for all } 1/2 \le \tau < 1,
\end{align}
for some suitable constant $C'_\tau > 0$ independent of $s$ and $m$. Note that the space $H(K_{\alpha,\bsgamma})$ is continuously embedded in the space $H(K_{\alpha-1, \bsgamma'})$, where $\bsgamma' = (2^{|u|} \gamma_u)_{u \subseteq [s]}$. This follows from the tensor product structure of the reproducing kernel Hilbert spaces $H(K_{\alpha,c,u})$ and
\begin{equation*}
\frac{1}{2} \int_0^1 |f^{(\alpha-1)}(x)|^2 \rd x \le |f^{(\alpha-1)}(c)|^2 + \int_0^1 |f^{(\alpha)}(x)|^2 \rd x,
\end{equation*}
which in turn follows from
\begin{equation*}
f^{(\alpha-1)}(x) = f^{(\alpha-1)}(c) + \int_c^x f^{(\alpha)}(t) \rd t,
\end{equation*}
for $x \ge c$ and an analogous expression for $x < c$. Thus functions in $H(K_{\alpha,\bsgamma})$ are also in $H(K_{1,\bsgamma''})$, where $\bsgamma'' = (2^{(\alpha-1) |u|} \gamma_u)_{u \subseteq [s]}$. Therefore \eqref{eq_error_tauhalb} applies for functions in $H(K_{\alpha,\bsgamma})$ where one replaces the constant $C'_\tau$ with $2^{\frac{\alpha -1}{2\tau}} C'_\tau$.
Note that we have
\begin{align*}
& [e(Q_{\bsg,p}(\bssigma); H(K_{\alpha,\bsgamma}))]^2 \\ = & \sum_{u \subseteq [s-1]} \gamma_u \left[ e \left( \left( Q_{\bsg, p}(\bssigma) \right)_{u}; H(K_{\alpha,c,u}) \right) \right]^2 + \sum_{s \in u \subseteq [s]} \gamma_u \left[ e \left( \left(
Q_{\bsg, p}(\bssigma) \right)_{u}; H(K_{\alpha,c,u}) \right) \right]^2.
\end{align*}
In the component-by-component algorithm one updates the components $g_j$ of $\bsg$ inductively. The first sum over all subsets $u \subseteq [s-1]$ does not depend on the last component and is therefore fixed when updating $g_s$. The component-by-component algorithm then minimizes the second sum over all subsets $s \in u \subseteq [s]$ and this sum is then shown to satisfy the bound
\begin{equation}\label{genau-richtig}
\sum_{s \in u \subseteq [s]} \gamma_u \left[ e \left( \left(
Q_{\bsg, p}(\bssigma) \right)_{u}; H(K_{\alpha,c,u}) \right) \right]^2 \le \frac{1}{b^{2 \tau m}} \left( \sum_{s \in u \subseteq [s]} \gamma_u^{1/(2\tau)} (C'_{\tau})^{|u|}
\right)^{2\tau }.
\end{equation}
This implies that for any $1/2 \le \tau < \alpha$ there is a polynomial lattice rule together with a digital shift $\bssigma$ such that
(\ref{genau-richtig}) holds. For $\tau \ge 1$ one can choose the digital shift $\bssigma = \bszero$.
Such polynomial lattice rules can be constructed using a component-by-component algorithm, as shown in \cite{DKPS} for $1/2 \le \tau < 1$ (the case $\alpha = 1$) and in \cite{BDGP11} for $1 \le \tau < \alpha$.
\subsection{Results for product and order-dependent weights}
\label{HOC-POD}
\subsubsection{Nested subspace sampling}
\label{HOC-POD-NEST}
Let $\$(k)=O(k^s)$ for some $s\ge 0$. Let $\bsgamma = (\gamma_u)_{u\in \U}$ be POD weights that satisfy the assumptions of
Corollary \ref{Corollary} and have $\decay_{\bsgamma,1} >1$. For $k\in\N$ and the set
$v_k = v_k^{(2)} = [L_k]$, see Section \ref{MLA},
we may apply estimate (\ref{genau-richtig}) to see that our assumption (\ref{assumption3.9}) holds.\footnote{Recall that polynomial lattice rules consist
of $n$ points, where $n$ is a power of a prime $b$. If required to construct a quadrature rule consisting of $n$ points, $n\in\N$ arbitrary, we generate a polynomial lattice rule consisting of $b^m$ points, $b^m\le n <b^{m+1}$, and set the quadrature weights corresponding to the ``missing'' $n-b^m$ points simply to zero.}
Thus the estimates from Theorem \ref{Theorem6} for $p^{\nes}$ can be established
by multilevel algorithms using as
building blocks the polynomial lattice rules explained above.
Due to the fact that $e(n;H(K_{\alpha,c})) = \Omega(n^{-\alpha})$ and our lower bound (\ref{neslowboupod}) we get, in particular, the following result.
\begin{corollary}\label{Cor_Theorem6}
Let $\$(k)=\Theta(k^s)$ for some $s\ge 0$. Let $\bsgamma = (\gamma_u)_{u\in \U}$ be POD weights that satisfy the assumptions of
Corollary \ref{Corollary}. Let $\alpha \ge 1$ be an integer.
Then our quasi-Monte Carlo multilevel algorithms $Q^{\ML}_m$,
defined as in (\ref{multilevel-algo}) with polynomial
lattice rules as in Section \ref{NUMINT} as quadrature rules $Q_{v_k}$, establish the following result: The infinite-dimensional integration problem is strongly tractable in the nested subspace sampling model.
In the case where $s\ge (2\alpha-1)/(2\alpha)$ we obtain
\begin{equation}
p^{\nes} = \max \left\{ \frac{1}{\alpha}, \frac{2s}{\decay_{\bsgamma,1} - 1} \right\}.
\end{equation}
In the case where $0 \le s < (2\alpha-1)/(2\alpha)$, we obtain for
\begin{itemize}
\item[] $\decay_{\bsgamma,1} \ge 2\alpha$:
\begin{equation*}
p^{\nes} = \frac{1}{\alpha},
\end{equation*}
\item[] $2\alpha > \decay_{\bsgamma,1} > 1/(1-s)$:
\begin{equation*}
\max \left\{ \frac{1}{\alpha}, \frac{2s}{\decay_{\bsgamma,1}-1} \right\} \le p^{\nes} \le \frac{2}{\decay_{\bsgamma,1}},
\end{equation*}
\item[] $1/(1-s) \ge \decay_{\bsgamma,1} >1$:
\begin{equation*}
p^{\nes} = \frac{2s}{\decay_{\bsgamma,1}-1}.
\end{equation*}
\end{itemize}
\end{corollary}
\subsubsection{Unrestricted subspace sampling}
\label{USSHIGHERORDER}
If the cost function satisfies $\$(k) = O(k^s)$ for $0\le s\le 1$, we can use
the quasi-Monte Carlo multilevel algorithms from Section \ref{HOC-POD-NEST} and achieve the
same result as in Corollary~\ref{Cor_Theorem6}.
If $\$(k) = \Omega(k^s)$ for $s \ge 1$, we can use changing dimension algorithms
as in (\ref{def-CD}) with polynomial lattices rules as in Proposition \ref{prop1}.
Due to Corollary \ref{Cor_Theorem6} and Theorem \ref{Thm_hoc_cda} these QMC multilevel
and changing dimension algorithms lead to
the following result.
\begin{corollary}
Let $\$(k) =\Theta(k^s)$ for some $s\ge 0$.
Let $\bsgamma = (\gamma_u)_{u\in \U}$ be POD weights that satisfy the assumptions of
Corollary \ref{Corollary}. Let $\alpha > 1$ be an integer.
If $s\ge (2\alpha - 1)/(2\alpha)$, then
the infinite-dimensional integration problem is strongly tractable with exponent
\begin{equation*}
p^{\unr} = \max \left\{ \frac{1}{\alpha},
\frac{2 \min\{1,s\}}{\decay_{\bsgamma,1}-1} \right\}.
\end{equation*}
If $s < (2\alpha - 1)/(2\alpha)$, then
the infinite-dimensional integration problem is strongly tractable and
$p^{\unr}$ satisfies the same relations as $p^{\nes}$ in Corollary
\ref{Cor_Theorem6}.
\end{corollary}
\subsection{Results for weights with finite algorithmic dimension}
Let us briefly mention the results that our quasi-Monte Carlo multilevel and
changing dimension algorithms achieve in the case of weights with finite algorithmic
dimension.
We now show how quadrature rules which satisfy \eqref{assuconv} can be constructed explicitly. Choose $\bst^{(i,n)}$ in \eqref{algobaustein-FAD} to be the first $n$ points of a $(t,\alpha,d)$-sequence as constructed in \cite{D08}. The weights $a_i^{(n)}$ can be chosen in the following way: Let $m$ be an integer such that $b^m \le n < b^{m+1}$. Then set $a_i^{(n)} = b^{-m}$ for $1 \le i \le b^m$ and $0$ for $b^m < i \le n$. Then \cite[Theorem~5.4]{D08} together with Corollary~\ref{Tau_Konvergenz} implies that this quadrature rule satisfies \eqref{assuconv}. In the following two theorems let $Q_n$ denote the higher order quasi-Monte Carlo rule as described in this paragraph.
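The choice of quadrature weights described above is easy to write down explicitly; the following small sketch computes $m$ with $b^m \le n < b^{m+1}$ and assigns the weights $a_i^{(n)}$.

```python
# Quadrature weights a_i^(n) as described above: equal weights b^-m on the first
# b^m nodes, where b^m <= n < b^{m+1}, and weight 0 on the remaining nodes.

def quad_weights(n, b=2):
    m = 0
    while b ** (m + 1) <= n:     # largest m with b^m <= n
        m += 1
    return [1.0 / b ** m] * (b ** m) + [0.0] * (n - b ** m)
```

For instance, for $n = 11$ and $b = 2$ one has $2^3 = 8 \le 11 < 16$, so the first $8$ nodes receive weight $1/8$ and the last $3$ receive weight $0$; the weights sum to $1$.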
\subsubsection{Nested subspace sampling}
Due to Theorem \ref{UppBouFAD} we obtain the following corollary.
\begin{corollary}\label{Vorletztes_Korollar}
Let $\$(k) = O(k^s)$ for some $s\ge 0$. Let the weights $\bsgamma$ have finite
algorithmic dimension, and let $\decay_{\bsgamma} >1$. Let $\alpha \ge 1$ be an integer. Then for all $n \in \N$ the higher order quasi-Monte Carlo rule $Q_n$ satisfies (\ref{assuconv}). For $k=1,2,\ldots$,
let $Q_{v_k} = Q^{\mathcal{W}}_{n,v_k}$ be as in (\ref{algoW}).
Then the multilevel algorithms $Q^{\ML}_m$, defined as in (\ref{multilevel-algo}),
establish the following result:
The exponent of strong tractability in the nested subspace sampling
model satisfies
\begin{equation*}
p^{\nes} \le \max \left\{ \frac{1}{\alpha}, \frac{2s}{\decay_{\bsgamma} - 1} \right\}.
\end{equation*}
\end{corollary}
The lower bound (\ref{neslowboufad}) on $p^{\nes}$ shows that the upper bound in
Corollary \ref{Vorletztes_Korollar} is sharp for finite-intersection weights.
\subsubsection{Unrestricted subspace sampling}
In the unrestricted subspace sampling setting we use for $\$(k)=O(k^s)$ and $s\le 1$
multilevel algorithms $Q^{\ML}_m$ as in Corollary \ref{Vorletztes_Korollar}, and for
$s>1$ changing dimension algorithms, see Section \ref{SUBSEC_4.4.2},
that rely on the higher order quasi-Monte Carlo rules $Q_n$ described above.
This results in the following corollary.
\begin{corollary}\label{Letztes_Korollar}
Let $\$(k) = O(k^s)$ for some $s\ge 0$. Let the weights $\bsgamma$ have finite
algorithmic dimension, and let $\decay_{\bsgamma} >1$.
Let $\alpha \ge 1$ be an integer. Then the exponent of strong tractability in the
unrestricted subspace sampling model satisfies
\begin{equation*}
p^{\unr} \le \max \left\{ \frac{1}{\alpha}, \frac{2\min\{1,s\}}{\decay_{\bsgamma} - 1} \right\}.
\end{equation*}
\end{corollary}
The lower bound (\ref{neslowboufad}) on $p^{\unr}$ shows that the upper bound in
Corollary \ref{Letztes_Korollar} is sharp for finite-intersection weights.
\section{Appendix}
\label{APPENDIX}
Here we provide a detailed proof of Lemma \ref{lem_pod_example}.
\begin{lemma}\label{lem_pod_gen}
Let $r > 1$ be a real number and define the POD weights $\gamma_u = \Gamma_{|u|} \prod_{j\in u} j^{-r}$ for $u \in \U$. Then there is a constant $c_r > 0$ such that
\begin{equation}
\label{hurwitz1}
\sum_{u \in \U} \gamma_u \ge \Gamma_0 + c_r \sum_{k=1}^\infty \frac{\Gamma_k}{(k!)^{2 \lceil r/2 \rceil}} k^{-\lceil r/2 \rceil} \left(\frac{\pi}{2 \lceil r/2 \rceil \sin \pi / (2 \lceil r/2 \rceil) } \right)^{rk}.
\end{equation}
If $r \ge 2$, then there is a constant $C_r > 0$ such that
\begin{equation}
\label{hurwitz2}
\sum_{u \in \U} \gamma_u \le \Gamma_0 + C_r \sum_{k=1}^\infty \frac{\Gamma_k}{(k!)^r} k^{-1/2} \left(\frac{\pi}{2 \lfloor r/2 \rfloor \sin \pi / (2 \lfloor r/2 \rfloor) }\right)^{rk}.
\end{equation}
\end{lemma}
Note that $\sin x < x$ for $x > 0$, thus $\sin \pi/r < \pi/r$, which implies
\begin{equation*}
1 < \frac{\pi}{r \sin \pi/r}.
\end{equation*}
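The proof below relies on a closed-form evaluation of $\zeta(r,\ldots,r)$ for even $r$; in the case $r = 2$ it reduces to the classical identity $\zeta(\underbrace{2,\ldots,2}_{k}) = \pi^{2k}/(2k+1)!$. The following snippet numerically checks this instance by computing the depth-$k$ value as the $k$-th elementary symmetric function of $\{j^{-2} : j \ge 1\}$ over a truncated range; the truncation parameters are illustrative.

```python
import math

# Numerical check of zeta(2,...,2) (k twos) = pi^(2k)/(2k+1)!: the depth-k value
# is the k-th elementary symmetric function of {1/j^2 : j >= 1}, computed here by
# the standard recursion e_k <- e_k + e_{k-1}/j^2 over j = 1,...,J.

K, J = 4, 4000
e = [1.0] + [0.0] * K
for j in range(1, J + 1):
    x = 1.0 / (j * j)
    for k in range(K, 0, -1):
        e[k] += e[k - 1] * x

for k in range(1, K + 1):
    exact = math.pi ** (2 * k) / math.factorial(2 * k + 1)
    assert abs(e[k] - exact) < 1e-3
```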
\begin{proof}
We have
\begin{equation*}
\sum_{u \in \U} \gamma_u = \sum_{k=0}^\infty \Gamma_{k} \sum_{\satop{u \in \U}{|u| = k}} \prod_{j\in u} j^{-r} = \Gamma_0 + \sum_{k=1}^\infty \Gamma_{k} \sum_{1 \le j_1 < j_2 < \cdots < j_k} \prod_{i=1}^k j_i^{-r} = \Gamma_0 + \sum_{k=1}^\infty \Gamma_k \zeta(\underbrace{r,\ldots, r}_{k \mbox{ times}}),
\end{equation*}
where $\zeta(\underbrace{r,\ldots, r}_{k \mbox{ times}})$ is the multiple zeta function of depth $k$.
The general behavior of this multiple zeta function is given in \cite[Eq. (48)]{BBB97}. From \cite[p. 8]{BBB97} it is known that if $r \ge 2$ is an even integer, then
\begin{equation*}
\zeta(\underbrace{r,\ldots, r}_{k \mbox{ times}}) = \frac{r (2\pi)^{rk}}{(rk + r/2)!} \left(\frac{1}{2 \sin \pi/r} \right)^{rk+r/2} \left(1 + \sum_{j=2}^{N_{r}} R_{r,j}^{rk+r/2} \right),
\end{equation*}
where $R_{r,j}$ are some numbers with $|R_{r,j}| < 1$ and $N_{r}$ is a positive integer satisfying $N_r < 2^{r/2}/r$. From Stirling's formula we obtain
\begin{equation*}
\frac{(k!)^r}{(rk)!} \asymp_k \frac{(2\pi k)^{r/2}}{\sqrt{2\pi r k}} \frac{k^{kr} \mathrm{e}^{-rk}}{(rk)^{rk} \mathrm{e}^{-rk}} \asymp_k k^{(r-1)/2}\, r^{-rk},
\end{equation*}
where $f(k) \asymp_k g(k)$ means that there are constants $C,c> 0$ independent of $k$ such that $c g(k) \le f(k) \le C g(k)$. Together with $(rk+r/2)! \asymp_k (rk)!\,(rk)^{r/2}$ this yields
\begin{align*}
(k!)^r \zeta(\underbrace{r,\ldots, r}_{k \mbox{ times}}) & = \frac{(k!)^r r (2\pi)^{rk}}{(rk + r/2)!} \left(\frac{1}{2 \sin \pi/r} \right)^{rk+r/2} \left(1 + \sum_{j=2}^{N_{r}} R_{r,j}^{rk+r/2} \right) \\ & \asymp_k k^{(r-1)/2} \frac{1}{(rk)^{r/2}} \left(\frac{\pi}{r \sin \pi/r} \right)^{rk} \left(1 + \sum_{j=2}^{N_{r}} R_{r,j}^{rk+r/2} \right) \\ & \asymp_k \frac{1}{k^{1/2}} \left(\frac{\pi}{r \sin \pi/r} \right)^{rk}.
\end{align*}
Thus, for any fixed positive even integer $r$ we have
\begin{equation*}
\sum_{k=1}^\infty \Gamma_k \zeta(\underbrace{r,\ldots, r}_{k \mbox{ times}}) = \sum_{k=1}^\infty \frac{\Gamma_k}{(k!)^r} (k!)^r \zeta(\underbrace{r,\ldots, r}_{k \mbox{ times}}) \asymp \sum_{k=1}^\infty \frac{\Gamma_k}{(k!)^r} k^{-1/2} \left(\frac{\pi}{r \sin \pi/r}\right)^{rk}.
\end{equation*}
Therefore (\ref{hurwitz1}) follows since decreasing $r$ only increases the sum $\sum_{u \in \U} \gamma_u$ and the result holds for all even integers $r \ge 2$ as shown above.
Now assume that $r \ge 2$. For $1/r < \lambda \le 1$ we have by Jensen's inequality that
\begin{equation*}
[\zeta(r,\ldots, r)]^\lambda = \left[\sum_{1\le j_1 < \cdots < j_k} \prod_{i=1}^k j_i^{-r} \right]^\lambda \le \sum_{1 \le j_1 < \cdots < j_k} \prod_{i=1}^k j_i^{-r \lambda} = \zeta(r\lambda,\ldots, r\lambda).
\end{equation*}
Choose $1/r < \lambda \le 1$ such that $\lambda r$ is the largest even integer smaller or equal than $r$. Then
\begin{equation*}
(k!)^r \zeta(r,\ldots, r) \le \left[(k!)^{\lambda r} \zeta(r\lambda,\ldots, r\lambda)\right]^{1/\lambda} \le C_r \frac{1}{k^{r/2}} \left(\frac{\pi}{\lambda r \sin \pi/ (\lambda r)} \right)^{rk},
\end{equation*}
for some constant $C_r > 0$. Thus
\begin{equation*}
\sum_{u \in \U} \gamma_u \le \Gamma_0 + C_r \sum_{k=1}^\infty \frac{\Gamma_k}{(k!)^r} k^{-r/2} \left(\frac{\pi}{\lambda r \sin \pi/ (\lambda r)} \right)^{rk},
\end{equation*}
from which (\ref{hurwitz2}) follows.
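The Jensen-type inequality used in this proof can be spot-checked numerically; it already holds for the finite partial sums, so truncation is harmless. The parameters $r=3$, $\lambda=2/3$, $k=2$ and the cutoff below are arbitrary illustrative choices:

```python
# spot-check of [zeta(r,...,r)]^lambda <= zeta(r*lambda,...,r*lambda)
# on truncated double sums (k = 2), for r = 3 and lambda = 2/3
lam, r, N = 2 / 3, 3, 200
pairs = [(j1, j2) for j1 in range(1, N) for j2 in range(j1 + 1, N + 1)]
zeta_rr = sum((j1 * j2) ** (-float(r)) for j1, j2 in pairs)           # ~ zeta(3,3)
zeta_rlam = sum((j1 * j2) ** (-float(r) * lam) for j1, j2 in pairs)   # ~ zeta(2,2)
assert zeta_rr ** lam <= zeta_rlam
```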
\end{proof}
\begin{corollary}\label{cor_pod_criteria}
Let $\bsgamma = (\gamma_u)_{u \in \U}$ be POD weights with $\gamma_u = \Gamma_{|u|} \prod_{j\in u} \gamma_j$. Let
$p^*:=\decay_{\bsgamma,1}
< \infty$.
Further let $c, c_0 > 0$ be constants such that
\begin{equation*}
c_0 j^{-p^\ast} \le \gamma_j \le c j^{-p^\ast} \quad \mbox{for all } j \ge 1.
\end{equation*}
If for some $q \le p^\ast/2$ we have
\begin{align}\label{eq_pod1}
\sum_{k=1}^\infty \frac{c^{k/q} \Gamma_k^{1/q}}{(k!)^{p^\ast /q}} k^{-p^\ast/ (2q)} \left(\frac{\pi}{2 \lfloor p^\ast/(2q) \rfloor \sin (\pi/(2 \lfloor p^\ast / (2q) \rfloor))} \right)^{k p^\ast/q} < \infty,
\end{align}
then $\mathrm{decay}_{\bsgamma,\infty} \ge q$.
On the other hand, if for $q < p^\ast$ we have
\begin{equation}\label{eq_pod2}
\sum_{k=1}^\infty \frac{c_0^{k/q} \Gamma_k^{1/q}}{(k!)^{2 \lceil p^\ast/(2q) \rceil}} k^{-\lceil p^\ast/(2q) \rceil} \left(\frac{\pi}{2 \lceil p^\ast/(2q) \rceil \sin \pi / (2 \lceil p^\ast/(2q) \rceil) } \right)^{k p^\ast /q} = \infty,
\end{equation}
then $\mathrm{decay}_{\bsgamma,\infty} \le q$.
\end{corollary}
\begin{proof}
We have
\begin{equation*}
\mathrm{decay}_{\bsgamma,\infty} = \sup\left\{q \in \mathbb{R}: \sum_{u \in \U} \gamma_u^{1/q} < \infty \right\}.
\end{equation*}
Thus for $q \le p^\ast/2$ we have
\begin{align*}
\sum_{u \in \U} \gamma_u^{1/q} \le \Gamma_0^{1/q} + C_{p^*/q} \sum_{k=1}^\infty \frac{c^{k/q} \Gamma_k^{1/q}}{(k!)^{p^\ast /q}} k^{-p^\ast/ (2q)} \left(\frac{\pi}{2 \lfloor p^\ast/(2q) \rfloor \sin (\pi/(2 \lfloor p^\ast / (2q) \rfloor))} \right)^{k p^\ast/q}.
\end{align*}
If the right hand side is finite, then $\mathrm{decay}_{\bsgamma,\infty} \ge q$.
On the other hand, for $q < p^\ast$ we have
\begin{equation*}
\sum_{u \in \U} \gamma_u^{1/q} \ge \Gamma_0^{1/q} + c_{p^*/q} \sum_{k=1}^\infty \frac{c_0^{k/q} \Gamma_k^{1/q}}{(k!)^{2 \lceil p^\ast/(2q) \rceil}} k^{-\lceil p^\ast/(2q) \rceil} \left(\frac{\pi}{2 \lceil p^\ast/(2q) \rceil \sin \pi / (2 \lceil p^\ast/(2q) \rceil) } \right)^{p^\ast k/q}.
\end{equation*}
If the right hand side is infinite for some $q < p^\ast$, then $\mathrm{decay}_{\bsgamma,\infty} \le q$.
\end{proof}
We suspect that the condition $q \le p^\ast/2$ in the above corollary can be replaced by $q \le p^\ast$.
The corollary above allows us to construct an example of POD weights where
\begin{equation*}
1 \le \mathrm{decay}_{\bsgamma,\infty} < \mathrm{decay}_{\bsgamma,1}.
\end{equation*}
For instance, let $\gamma_j = j^{-p^\ast}$. Thus $\mathrm{decay}_{\bsgamma,1} = p^\ast$ and $c_0 = c = 1$ in the above corollary.
Let $q^\ast$ be such that $p^\ast/(2q^\ast) \in \mathbb{N}$.
For $k \in \mathbb{N}_0$ let
\begin{equation*}
\Gamma_k = (k!)^{p^\ast} k^{p^\ast/2-q^\ast} \left(\frac{(p^\ast/q^\ast) \sin (q^*\pi/p^\ast)}{\pi} \right)^{k p^\ast}.
\end{equation*}
Then we have for $q=q^*$ that \eqref{eq_pod2} is of the same form as \eqref{eq_pod1}, which is
\begin{equation}\label{eq_pod_example}
\sum_{k=1}^\infty \frac{\Gamma_k^{1/q}}{(k!)^{p^\ast/q}} k^{-p^\ast/(2q)} \left(\frac{\pi}{2 \lfloor p^\ast/(2q) \rfloor \sin \pi / (2 \lfloor p^\ast/(2q) \rfloor) } \right)^{k p^\ast/q} = \sum^\infty_{k=1} k^{-1}= \infty.
\end{equation}
Due to (\ref{eq_pod2}) we have $\decay_{\bsgamma,\infty} \le q^*$.
Let now $q<q^*$ be such that $\lfloor p^*/(2q) \rfloor = p^*/(2q^*)$.
For this $q$ the left hand side of (\ref{eq_pod1}) is
\begin{equation*}
\sum^\infty_{k=1} \frac{\Gamma_k^{1/q}}{(k!)^{p^*/q}}
k^{-p^*/2q} \left( \frac{\pi}{(p^*/q^*) \sin(q^*\pi/p^*)} \right)^{kp^*/q}
= \sum^\infty_{k=1} k^{-q^*/q} <\infty.
\end{equation*}
Thus (\ref{eq_pod1}) gives us $\decay_{\bsgamma,\infty} \ge q$.
Together with Lemma \ref{Lemma3.8} this establishes Lemma \ref{lem_pod_example}.
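The cancellation behind \eqref{eq_pod_example} can also be verified numerically for a concrete parameter choice; below, $p^\ast=4$ and $q^\ast=1$ (so that $p^\ast/(2q^\ast)=2\in\mathbb{N}$) are illustrative values of our own choosing:

```python
import math

p, qs = 4.0, 1.0  # p* = 4, q* = 1, so p*/(2 q*) = 2 is an integer
base = (p / qs) * math.sin(qs * math.pi / p) / math.pi

def Gamma(k):
    # Gamma_k = (k!)^{p*} k^{p*/2 - q*} ((p*/q*) sin(q* pi/p*) / pi)^{k p*}
    return math.factorial(k) ** p * k ** (p / 2 - qs) * base ** (k * p)

def term(k, q):
    # k-th summand of the series in (eq_pod1), with c = 1
    f = math.floor(p / (2 * q))
    return (Gamma(k) ** (1 / q) / math.factorial(k) ** (p / q)
            * k ** (-p / (2 * q))
            * (math.pi / (2 * f * math.sin(math.pi / (2 * f)))) ** (k * p / q))

# at q = q* every summand collapses to 1/k, so the series diverges
for k in (1, 2, 5, 10):
    assert abs(term(k, qs) - 1 / k) < 1e-9
```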
\subsection*{Acknowledgment}
Both authors want to thank Michael Griebel for suggesting that they
study algorithms of higher order convergence for
infinite-dimensional integration. We are grateful for the opportunity to work at the Hausdorff Institute in Bonn, where the work on this paper was initiated.
Josef Dick is supported by an ARC Queen Elizabeth II Fellowship.
Michael Gnewuch was supported by the German Science Foundation DFG under grant GN 91/3-1 and by the Australian Research Council ARC.
\section{Introduction}
A \textit{transcendental function} is a function $f(x)$ such that the only complex polynomial $P$ satisfying $P(x, f(x)) =0$ for all $x$ in its domain is the null polynomial; examples include the trigonometric functions, the exponential function, and their inverses.
The study of the arithmetic behavior of transcendental functions at complex points has attracted the attention of many mathematicians for decades. The first result concerning this subject goes back to 1884, when Lindemann proved that the transcendental function $e^z$ assumes transcendental values at all nonzero algebraic points. In 1886, Strauss tried to prove that an analytic transcendental function cannot assume rational values at all rational points in its domain. However, in 1886, Weierstrass supplied him with a counter-example and also stated that there are transcendental entire functions which assume algebraic values at all algebraic points. This assertion was proved in 1895 by St\"{a}ckel \cite{19}, who established a much more general result: for each countable subset $\Sigma\subseteq \mathbb{C}$ and each dense subset $T\subseteq \mathbb{C}$, there exists a transcendental entire
function $f$ such that $f(\Sigma) \subseteq T$. In another construction, St\"{a}ckel \cite{20} produced a transcendental function $f(z)$, analytic in a neighbourhood of the origin, and with the property that both $f(z)$ and its inverse function assume, in this neighbourhood, algebraic values at all algebraic points. Based on this result, in 1976, Mahler \cite[p. 53]{bookmahler} suggested the following question
\begin{question}\label{q1}
Does there exist a transcendental entire function
\[
f(z)=\sum_{n=0}^{\infty}a_nz^n,
\]
with rational coefficients $a_n$ and such that the image and the preimage of $\overline{\Q}$ under $f$ are subsets of $\overline{\Q}$?
\end{question}
We refer the reader to \cite{bookmahler,wal1} (and references therein) for more about this subject.
In this paper, we give an affirmative answer to Mahler's question. For the sake of preciseness, we state it as a theorem.
\begin{theorem}\label{1}
There are uncountably many transcendental entire functions
\[
f(z)=\sum_{n=0}^{\infty}a_nz^n,
\]
with rational coefficients $a_n$ and such that $f(\overline{\Q})\subseteq \overline{\Q}$ and $f^{-1}(\overline{\Q})\subseteq \overline{\Q}$.
\end{theorem}
\section{The proof}
In order to simplify our presentation, we set $\mathbb{A}=\overline{\Q}\cap \mathbb{R}$ and we use the familiar notation $[a, b] = \{a, a + 1,\ldots, b\}$, for integers $a < b$.
Let $\{\alpha_1,\alpha_2,\alpha_3,\ldots\}$ be an enumeration of $\overline{\Q}$ such that $\alpha_1=0$, and for any $n\geq 1$, the numbers $\alpha_{3n-1},\alpha_{3n}\notin \mathbb{R}$ with $\alpha_{3n}=\overline{\alpha}_{3n-1}$, $\alpha_{3n+1}\in \mathbb{R}$ and $|\alpha_{3n+i}|<n$, for $i\in [-1,1]$. Now, let us construct our desired function inductively.
Define $f_1(z)=z$. Observe that $f_1(\alpha_1)=0=(f_1)^{-1}(\alpha_1)$. Now, we want to construct a sequence of analytic functions $f_2(z), f_3(z),\ldots$ recursively of the form
\[
f_m(z)=f_{m-1}(z)+\epsilon_mz^mP_m(z),
\]
with
\begin{itemize}
\item[(i)] $f_{m-1},P_m\in \mathbb{A}[z]$ and hence $f_m(z)=\sum_{i=1}^{t_m}a_iz^i$ with $t_m>m$;
\item[(ii)] $P_{m-1}(z)\mid P_m(z)$ and $P_m(0)\neq 0$;
\item[(iii)] $\epsilon_m\in \mathbb{A}$;
\item[(iv)] $0<|\epsilon_m|<\frac{1}{L(P_m)m^{m+\deg P_m}}$;
\item[(v)] $a_1,\ldots, a_m\in \mathbb{Q}$;
\end{itemize}
Here $L(P)$ denotes the length of a polynomial (the sum of the absolute values of its coefficients).
The requested function will have the form $f(z)=z+\sum_{n\geq 2}\epsilon_nz^nP_n(z)$. Since $|P(z)|\leq L(P)\max\{1,|z|\}^{\deg P}$, we have that, for all $z$ belonging to the open ball $B(0,R)$,
\[
|\epsilon_nz^nP_n(z)|<\dfrac{1}{L(P_n)n^{n+\deg P_n}}L(P_n)\max\{1,R\}^{n+\deg P_n}=\left(\dfrac{\max\{1,R\}}{n}\right)^{n+\deg P_n}.
\]
Thus $f$ is an entire function, since the series $f(z)=z+\sum_{n=2}^{\infty}\epsilon_nz^nP_n(z)$, which defines $f$, converges uniformly in any of these balls.
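The displayed bound is exactly a Weierstrass $M$-test: on $B(0,R)$ the $n$-th term is dominated by $(\max\{1,R\}/n)^{n+\deg P_n}\le(\max\{1,R\}/n)^{n}$ once $n\ge\max\{1,R\}$, and these majorants decay superexponentially. A tiny numerical illustration (the radius $R=3$ is an arbitrary choice):

```python
R = 3.0
M = max(1.0, R)

# majorants t_n = (max{1, R} / n)^n for the series terms on B(0, R)
terms = [(M / n) ** n for n in range(2, 60)]

# the tail is negligible, so the Weierstrass M-test applies
tail = sum((M / n) ** n for n in range(60, 200))
assert tail < 1e-40

# the majorants decrease once n > max{1, R}
assert all(terms[i + 1] < terms[i] for i in range(int(M), len(terms) - 1))
```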
Suppose that we have a function $f_{n}$ satisfying (i)-(v). Now, let us construct $f_{n+1}$ with the desired properties.
Since $f_{n}^{-1}(\{\alpha_1,\ldots, \alpha_{3n+1}\})$ is finite, we can choose $n+1<r_{n+1}<n+2$ such that $f_{n}^{-1}(\{\alpha_1,\ldots, \alpha_{3n+1}\})\cap \partial B(0:r_{n+1})=\emptyset$. Write $f_{n}^{-1}(\{\alpha_1,\ldots, \alpha_{3n+1}\})\cap B(0:r_{n+1})=\{0,y_1,\ldots, y_s\}$ and define
\[
f_{n+1}(z)=f_{n}(z)+\epsilon_{n+1}z^{n+1}P_{n+1}(z),
\]
where $P_{n+1}(z):=P_n(z)(z-\alpha_{3n-1})(z-\alpha_{3n})(z-\alpha_{3n+1})\prod_{i=1}^s(z-y_i)^{\deg f_n}$. Note that since $f_n(z)\in \mathbb{A}[z]$, one has that if $y_i\notin \mathbb{R}$, then $\overline{y_i}=y_j$, for some $j\in [1,s]$ (since $f_{n+1}(y_i)=f_n(y_i)$ and $\{\alpha_1,\ldots, \alpha_{3n+1}\}$ is closed under complex conjugation). Thus $P_{n+1}\in \mathbb{A}[z]$ and $P_{n+1}(0)\neq 0$.
For $i\in [1,3n+1]$, since $f_{n}^{-1}(\alpha_i)\cap \partial B(0:r_{n+1})$ is empty, then $\min_{|z|=r_{n+1}} |f_{n}(z)-\alpha_i|>0$ and we can choose $\epsilon_{n+1}$ satisfying
\begin{equation}\label{rouche0}
|\epsilon_{n+1}|<\dfrac{\displaystyle\min_{|z|=r_{n+1}}|f_n(z)-\alpha_i|}{\displaystyle\max_{|z|=r_{n+1}}|z^{n+1}P_{n+1}(z)|}.
\end{equation}
In particular,
\begin{eqnarray}\label{rouche}
|f_{n}(z)-\alpha_i| & \geq & \min_{|z|=r_{n+1}} |f_{n}(z)-\alpha_i|\nonumber \\
& >& |\epsilon_{n+1}|\max_{|z|=r_{n+1}}|z^{n+1}P_{n+1}(z)|\nonumber \\
& \geq & |\epsilon_{n+1}z^{n+1}P_{n+1}(z)|.\nonumber
\end{eqnarray}
Now, by using Rouch\' e's theorem, we ensure that the functions $f_n(z)-\alpha_i$ and $f_n(z) -\alpha_i+ \epsilon_{n+1}z^{n+1}P_{n+1}(z)=f_{n+1}(z)-\alpha_i$ have the same number of zeros (counted with multiplicity) inside $B(0:r_{n+1})$. Note that $z=0$ is a zero of $f_n$ and $f_{n+1}$ of multiplicity 1 (since $f'_n(0)=f_{n+1}'(0)=1$). Suppose that $y_j$ ($j\in [1,s]$) is a zero of $f_n(z)-\alpha_i$ of multiplicity $m\geq 1$. Since $m \le \deg f_n$, $y_j$ is a zero of $f_{n+1}(z)-\alpha_i$ with multiplicity at least $m$. Thus, the sets $f_{n}^{-1}(\{\alpha_j\})\cap B(0:r_{n+1})$ and $f_{n+1}^{-1}(\{\alpha_j\})\cap B(0:r_{n+1})$ have the same cardinality and hence they are equal, for $1\le j\le 3n+1$. This argument ensures that in our construction no new preimage under $f_{n+1}$ of $\{\alpha_1,\ldots, \alpha_{3n+1}\}$ lying in $B(0:r_{n+1})$ will appear apart from those under $f_n$. Note also that, since we will only choose $\epsilon_{n+1}\in \mathbb{A}$, we have that, for every $n$, $f_{n+1}(\{\alpha_1,\ldots, \alpha_{3n+1}\})$ and $f_{n+1}^{-1}(\{\alpha_1,\ldots, \alpha_{3n+1}\})$ are subsets of $\overline{\Q}$.
Now, we shall prove that it is possible to choose a nonzero $\epsilon_{n+1}\in \mathbb{A}$ satisfying (iv) and such that $a_{n+1}$ is a rational number (note that, by construction, the first $n$ coefficients of $f_{n+1}$ remain unchanged). In fact, let $c_{n+1}$ be the coefficient of $z^{n+1}$ in $f_n(z)$. We have that $a_{n+1}=c_{n+1}+P_{n+1}(0)\epsilon_{n+1}$. Thus, one can choose a rational number $p/q$ such that
\[
0<|c_{n+1}-p/q|<\min\left\{\dfrac{|P_{n+1}(0)|}{L(P_{n+1})(n+1)^{n+1+\deg P_{n+1}}}, \dfrac{|P_{n+1}(0)|\displaystyle\min_{|z|=r_{n+1}}|f_n(z)-\alpha_i|}{\displaystyle\max_{|z|=r_{n+1}}|z^{n+1}P_{n+1}(z)|}\right\},
\]
for all $i\in [1,3n+1]$. Therefore, by defining $\epsilon_{n+1}=(p/q-c_{n+1})/P_{n+1}(0)$, we get $a_{n+1}=p/q$ and $0<|\epsilon_{n+1}|<\frac{1}{L(P_{n+1})(n+1)^{n+1+\deg P_{n+1}}}$ (also it satisfies (\ref{rouche0})).
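Such a rational $p/q$ always exists because $\mathbb{Q}$ is dense in $\mathbb{R}$ and only the single value $c_{n+1}$ itself must be avoided. A small illustrative helper (the function name and the particular denominator shift are our own choices, not part of the proof):

```python
import math
from fractions import Fraction

def rational_near(c, delta):
    # return p/q with 0 < |c - p/q| < delta; shifting floor(c*q)/q
    # by 1/(3q) keeps the difference nonzero (illustrative sketch)
    q = int(2 / delta) + 1
    p = math.floor(c * q)
    return Fraction(3 * p + 1, 3 * q)

c, delta = math.sqrt(2), 1e-6
approx = rational_near(c, delta)
assert 0 < abs(c - float(approx)) < delta
```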
Thus, by construction, the function $f(z)=z+\sum_{n\geq 2}\epsilon_nz^nP_n(z)=\sum_{n\geq 1} a_nz^n$ is entire, $f(\overline{\Q})\cup f^{-1}(\overline{\Q})\subseteq \overline{\Q}$ and $a_n\in \mathbb{Q}$ as desired. Indeed, if $j\le 3n+1$, then $f_{n+1}(\alpha_j)=f_n(\alpha_j)$, so in particular $f_n(\alpha_j)=f_j(\alpha_j), \forall n\ge j$, and $f(\alpha_j)=\lim_{n\to \infty} f_n(\alpha_j)=f_j(\alpha_j)\in \overline{\Q}$. On the other hand, if $j\le 3n+1$ and $i\le n$ then $f_{n+1}^{-1}(\alpha_j)\cap B(0,r_i)=f_n^{-1}(\alpha_j)\cap B(0,r_i)$. Therefore, if $k=\max\{i,j\}$, we have $f_n^{-1}(\alpha_j)\cap B(0,r_i)=f_k^{-1}(\alpha_j)\cap B(0,r_i)$ for every $n\ge k$. This implies that $f^{-1}(\alpha_j)\cap B(0,r_i)\supseteq f_k^{-1}(\alpha_j)\cap B(0,r_i)$. Since $f$ is non-constant, we should have $f^{-1}(\alpha_j)\cap B(0,r_i)=f_k^{-1}(\alpha_j)\cap B(0,r_i)\subseteq \overline{\Q}$. Indeed, if there were another element $w$ of $f^{-1}(\alpha_j)\cap B(0,r_i)$, it would be at positive distance from the finite set $f_k^{-1}(\alpha_j)\cap B(0,r_i)$, but, since $f=\lim_{n\to \infty} f_n$, arbitrarily close to $w$ there would be, for $n$ large, an element of $f_n^{-1}(\alpha_j)$ (again by Rouch\' e's theorem), which contradicts the equality $f_n^{-1}(\alpha_j)\cap B(0,r_i)=f_k^{-1}(\alpha_j)\cap B(0,r_i)$.
The proof that we can choose $f$ to be transcendental follows because there is an $\infty$-ary tree of different possibilities for $f$ (in each step we have infinitely many possible choices for $\epsilon_{n+1}$, and so for $a_{n+1}$). Thus, we have constructed uncountably many possible functions, and the algebraic entire functions taking $\overline{\Q}$ into itself must be polynomials belonging to $\overline{\Q}[z]$, which is a countable subset.
\qed
\section{Introduction}
Let $\mathfrak F$ be a class of finite groups and $G$ a finite group. We may consider a graph $\w{\Gamma}_\mathfrak{F}(G)$ whose vertices are the elements of $G$ and where two vertices $g,h\in G$ are connected \ifa $\gen{g,h}\notin\mathfrak{F}$. We denote by $\mathcal{I}_\ff(G)$ the set of isolated vertices of $\w{\Gamma}_\mathfrak{F}(G)$. We define the non-$\mathfrak{F}$ graph $\G_\ff(G)$ of $G$ as the subgraph of $\w{\Gamma}_\mathfrak{F}(G)$ obtained by deleting the isolated vertices.
In the particular case when $\mathfrak F$ is the class $\mathfrak A$ of the abelian groups, the graph $\Gamma_{\mathfrak A}(G)$ was introduced by Erd\H{o}s and is known as the non-commuting graph (see for example \cite{ncg}, \cite{neu}). If $\mathfrak F$ is the class $\mathfrak N$ of the finite nilpotent groups, then $\Gamma_{\mathfrak N}(G)$ is the non-nilpotent graph, studied for example in \cite{az}. When $\mathfrak F$ is the class $\mathfrak S$ of the finite soluble groups, we obtain the non-soluble graph (see \cite{ns}).
\
A group (resp. subgroup) is called an $\mathfrak{F}$-\emph{group} (resp. $\mathfrak{F}$-\emph{subgroup}) if it belongs to $\mathfrak{F}$. We say that $\mathfrak F$ is \emph{hereditary} whenever if $G\in\mathfrak{F}$ and $H\leq G$, then $H\in\mathfrak{F}$. If $\mathfrak F$ is hereditary, it is interesting to consider the intersection $\phi_\mathfrak{F}(G)$ of all maximal $\mathfrak F$-subgroups of $G,$ that is, the subgroups which are maximal with respect to being an $\mathfrak F$-group.
It turns out that if $\mathfrak F \in \{\mathfrak A, \mathfrak N,
\mathfrak S\},$ then $\phi_\mathfrak{F}(G)=\mathcal{I}_\ff(G)$ for any finite group $G.$ Indeed $\mathcal{I}_{\mathfrak A}(G)=Z(G),$ $\mathcal{I}_{\mathfrak N}(G)=Z_\infty(G)$ \cite[Proposition 2.1]{az}, $\mathcal{I}_{\mathfrak S}(G)=\R(G)$
\cite[Theorem 1.1]{gu}, denoting by $Z_\infty(G)$ and $\R(G)$, respectively, the hypercenter and the soluble radical of $G.$
This motivates the following definition: we say that $\mathfrak{F}$ is \emph{regular} if $\mathfrak{F}$ is hereditary and $\fr_\ff(G)=\mathcal{I}_\ff(G)$ for every finite group $G$.
\
The first question that we address in the paper is how to characterize the hereditary saturated formations that are regular. Recall that a formation $\mathfrak F$ is a class of groups which is closed under taking homomorphic images and subdirect products. The second condition ensures the existence of the
$\mathfrak F$-residual $G^{\mathfrak F}$ of each group $G,$ that is, the smallest normal subgroup of $G$ whose factor
group is in $\mathfrak F$. A formation $\mathfrak F$ is said to be saturated if $G \in\mathfrak F$ whenever the Frattini factor
$G/\Phi(G)$ is in $\mathfrak F.$ A group $G$ is \emph{critical} for $\mathfrak{F}$ (or $\mathfrak{F}$-\emph{critical}) if $G\notin\mathfrak{F}$ and every proper subgroup of $G$ lies in $\mathfrak{F}$, while a group $G$ is \emph{strongly critical} for $\mathfrak{F}$ if $G\notin\mathfrak{F}$ and every proper subgroup and proper quotient of $G$ lies in $\mathfrak{F}$.
\begin{thm}\label{regolari}Let $\mathfrak F$ be an hereditary saturated formation, with $\mathfrak A \subseteq \mathfrak{F} \subseteq \mathfrak S.$ Then $\mathfrak{F}$ is regular if and only if every finite group $G$ which is soluble and strongly critical for $\mathfrak{F}$ has the property that $G/\soc(G)$ is cyclic.
\end{thm}
It follows from Theorem \ref{regolari} that a formation is not in general regular. For example, if $\mathfrak U$ is the formation of the finite supersoluble groups, then there exists a strongly critical group $G$ for $\mathfrak U$ such that $\soc(G)$ is an elementary abelian group of order 25 and $G/\soc(G)$ is isomorphic to the quaternion group $Q_8.$ It is an interesting question to see if and when $\mathcal{I}_\ff(G)$ is a subgroup of $G$.
\
Consider the class $\mathfrak{F}$ of finite groups in which normality is transitive. The group $G:=\langle a,b,c \mid a^5=1, b^5=1, c^4=1, [a,b]=1, a^c=a^2, b^c=b^3\rangle$
is critical for $\mathfrak{F}$ (see \cite{transitiveN}). Then $\gen{a,g}$ and $\gen{b,g}$ are proper subgroups for every $g\in G$, so they belong to the class, while $\gen{ab,c}=G$ does not belong to the class. Thus $a, b \in \mathcal{I}_\ff(G)$ but $ab\notin \mathcal{I}_\ff(G)$. So in general $\mathcal{I}_\ff(G)$ is not a subgroup of $G.$
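This computation can be double-checked in a concrete model of $G\cong(C_5\times C_5)\rtimes C_4$: encode elements as triples $(u,v,t)\in\mathbb{Z}_5\times\mathbb{Z}_5\times\mathbb{Z}_4$, with $c=(0,0,1)$ acting by $a\mapsto a^2$, $b\mapsto b^3$ (the encoding is an illustrative choice of ours):

```python
def mul(x, y):
    # multiplication in (C5 x C5) : C4 with c acting as a -> a^2, b -> b^3
    u1, v1, t1 = x
    u2, v2, t2 = y
    return ((u1 + pow(2, t1, 5) * u2) % 5,
            (v1 + pow(3, t1, 5) * v2) % 5,
            (t1 + t2) % 4)

def closure(gens):
    # subgroup generated by gens (finite group, so monoid closure suffices)
    elems, frontier = {(0, 0, 0)}, set(gens)
    while frontier:
        new = {mul(x, g) for x in elems | frontier for g in gens}
        new -= elems | frontier
        elems |= frontier
        frontier = new
    return elems

a, b, c = (1, 0, 0), (0, 1, 0), (0, 0, 1)
G = closure([a, b, c])
assert len(G) == 100
assert len(closure([mul(a, b), c])) == 100          # <ab, c> = G
assert all(len(closure([a, g])) < 100 for g in G)   # <a, g> always proper
assert all(len(closure([b, g])) < 100 for g in G)   # <b, g> always proper
```

In particular, in this model $\langle a,c\rangle$ has order $20$, while $\langle ab,c\rangle$ is all of $G$.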
\
We say that a formation $\mathfrak F$ is \emph{semiregular} if $\mathcal{I}_\ff(G)\leq G$ for any finite group $G.$ In Section \ref{riduzione} we will investigate the structure of a group $G$ which is minimal with respect to the property that $\mathcal{I}_\ff(G)$ is not a subgroup. To state our result we need to recall another definition: we say that $\mathfrak{F}$ is \emph{2-recognizable} whenever a group $G$ belongs to $\mathfrak{F}$
if all $2$-generated subgroups of $G$ belong to $\mathfrak{F}.$
\begin{thm}\label{semi}
Let $\mathfrak F$ be an hereditary saturated formation, with $\mathfrak A \subseteq \mathfrak{F} \subseteq \mathfrak S.$
Assume that $\mathfrak{F}$ is 2-recognizable and not semiregular and let $G$ be a finite group of minimal order with respect to the property that $\mathcal{I}_\ff(G)$ is not a subgroup of $G.$ Then $G$ is a primitive monolithic soluble group. Moreover, if $N=\soc(G)$ and $S$ is a complement of $N$ in $G,$ then the following hold.
\begin{enumerate}
\item $N=\soc(G)=G^{\ff}$.
\item $N\!\gen{s}\in\mathfrak{F}$ for every $s\in S$; in particular $S$ is not cyclic.
\item if $n\in N$ and $s\in S,$ then $ns\in\mathcal{I}_\ff(G)$ \ifa $N\!\gen{s,t}\in\mathfrak{F}$ for all $t\in S$; in particular $ns\in\mathcal{I}_\ff(G)$ \ifa $s\in\mathcal{I}_\ff(G)$.
\item Suppose that $\mathfrak{F}$ is locally defined by the formation function $f$ and, for every prime $p,$ let
$\overline{f(p)}$ be the formation of the finite groups $X$ with the property that $X/\oo_p(X)\in f(p).$
If $K\leq S$, we have that $NK\in\mathfrak{F}$ \ifa $K\in\overline{f(p)}$, in particular $\mathcal{I}_\ff(G)=N\is{\overline{f(p)}}(S)$, where $p$ is the unique prime dividing $|N|$.
\end{enumerate}
\end{thm}
As an application of the previous theorem we will prove.
\begin{thm}\label{appli}
The following formations are semiregular:
\begin{enumerate}
\item the formation $\mathfrak U$ of the finite supersoluble groups.
\item the formation $\mathfrak D=\mathfrak N\mathfrak A$ of the finite groups with nilpotent derived subgroup.
\item the formation $\mathfrak{N}^t$ of the finite groups with Fitting length less than or equal to $t,$ for any $t\in \mathbb N.$
\item the formation $\mathfrak{S}_p\mathfrak{N}^t$ of the finite groups $G$ with $G/\oo_p(G) \in \mathfrak{N}^t.$
\end{enumerate}
\end{thm}
We will say that a formation $\mathfrak{F}$ is \emph{connected} if
the graph $\G_\ff(G)$ is connected for any finite group $G.$
In Section \ref{connesso} we consider the case when
$\mathfrak{F}$ is a 2-recognizable hereditary saturated semiregular formation with $\mathfrak A \subseteq \mathfrak{F} \subseteq \mathfrak S$. In particular we investigate the structure of a group $G$ of minimal order with the property that $\G_\ff(G)$ is not connected (when $\mathfrak{F}$ is not connected) and we use this information to prove the following result.
\begin{thm}\label{conreg}
Let $\mathfrak F$ be an hereditary saturated formation, with $\mathfrak A \subseteq \mathfrak{F} \subseteq \mathfrak S.$ If $\mathfrak{F}$ is regular, then $\mathfrak{F}$ is connected.
\end{thm}
A corollary of this result is \cite[Theorem 5.1]{az}, stating that the non-nilpotent graph $\Gamma_{\mathfrak N}(G)$ is connected for any finite group $G$. Moreover our approach allows us to prove:
\begin{thm}
\label{conaltri}
If $\mathfrak{F}\in\{\mathfrak{U},\mathfrak{D},\mathfrak{S}_p\mathfrak{N}^t,\mathfrak{N}^t\}$, then $\mathfrak{F}$ is connected.
\end{thm}
Recall that a graph is said to be embeddable in the plane, or \emph{planar}, if it can be drawn in the plane so that its edges intersect only at their ends. Abdollahi and Zarrin proved
that if $G$ is a finite non-nilpotent group, then the non-nilpotent graph $\Gamma_{\mathfrak N}(G)$ is planar if and only if $G\cong S_3$ (see \cite[Theorem 6.1]{az}). We generalize this result proving:
\begin{thm}\label{planar}
Let $\mathfrak{F}$ be a 2-recognizable, hereditary, semiregular formation, with $\mathfrak{N}\subseteq \mathfrak{F},$ and let $G$ be a finite group. Then $\G_\ff(G)$ is planar if and only if either $G\in \mathfrak{F}$ or $G\cong S_3$.
\end{thm}
\section{Some preliminary results}
This section contains some auxiliary results, that will be needed in our proofs.
\begin{defn} Let $G$ be a finite group. We denote by $V(G)$ the subset of $G$ consisting of the elements $x$ with the property that $G=\langle x, y\rangle$ for some $y.$
\end{defn}
\begin{prop}\label{lemma10}
Let $G$ be a primitive monolithic soluble group. Let $N=\soc(G)$ and $H$ a core-free maximal subgroup of $G$. Given $1\neq h\in H$ and $n\in N$, $hn \in V(G)$ if and only if $h\in V(H).$
\end{prop}
\begin{proof} Clearly if $hn\in V(G),$ then $h\in V(H).$ Conversely assume that $h\in V(H)$ and let $n\in N.$ There exists $k\in H$ such that $\langle h, k\rangle=H.$ For any $m\in N,$ let $H_m:=\langle hn, km\rangle.$ Since $H_mN=\langle h, k \rangle N=G,$ either $H_m=G$ or $H_m$ is a complement of $N$ in $G.$ In particular, if we assume, by contradiction, $hn\notin V(G),$ then $H_m$ is a complement of $N$ in $G$ for any $m\in N$, and consequently $H_m=H^{g_m}$ for some $g_m\in G.$ If $H_{m_1}=H_{m_2}$ then $m_1^{-1}m_2=(km_1)^{-1}(km_2)\in H_{m_1}\cap N=1$ so $m_2=m_1.$ Since $N_G(H)=H,$ $H$ has precisely $|G:H|=|N|$ conjugates in $G$ and therefore $\{H_m\mid m\in N\}$ is the set of all the conjugates of $H$ in $G$. This implies $1\neq hn \in \bigcap_{g\in G}H^g=\core_G(H)=1,$ a contradiction.
\end{proof}
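A quick computational sanity check of the proposition in the smallest instance $G=S_3$, with $N=\langle(1\,2\,3)\rangle$ and $H=\langle(1\,2)\rangle$ (the permutation encoding below is an illustrative choice):

```python
from itertools import permutations

def compose(p, q):
    # (p * q)(i) = p(q(i)) for permutations of {0, 1, 2} given as tuples
    return tuple(p[q[i]] for i in range(3))

def generated(gens):
    elems, frontier = {(0, 1, 2)}, set(gens)
    while frontier:
        new = {compose(x, g) for x in elems | frontier for g in gens}
        new -= elems | frontier
        elems |= frontier
        frontier = new
    return elems

G = set(permutations(range(3)))          # S_3
N = generated([(1, 2, 0)])               # <(1 2 3)>, the socle
H = generated([(1, 0, 2)])               # <(1 2)>, core-free maximal

def V(group):
    # elements x with group = <x, y> for some y
    return {x for x in group if any(generated([x, y]) == group for y in group)}

h = (1, 0, 2)
assert h in V(H)
assert all(compose(h, n) in V(G) for n in N)   # hn in V(G) for every n in N
```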
\begin{lemma}\label{N}Let $\mathfrak{F}$ be a saturated formation with $\mathfrak{F} \subseteq \mathfrak S$ and let $G$ be a finite group. Suppose $G\notin\mathfrak{F}$ but every proper quotient is in $\mathfrak{F}$. Then either $\R(G)=1$ or $G$ is a primitive monolithic soluble group and $\soc(G)=G^{\ff}$.
\end{lemma}
\begin{proof}
If $\R(G)\neq1$, we have $G/\R(G)\in\mathfrak{F}$, hence $G/\R(G)$ is soluble, which implies that $G$ is soluble. If $G$ contains two different minimal normal subgroups, $N_1$ and $N_2,$ then $G=G/(N_1\cap N_2)\leq G/N_1\times G/N_2 \in \mathfrak{F},$ against our assumption. So $\soc(G)$ is the unique minimal normal subgroup of $G$. Moreover $G/\soc(G)\in \mathfrak{F}$, hence $\soc(G)=G^{\ff}.$ Finally, since $\mathfrak{F}$ is a saturated formation and $G\notin \mathfrak{F}$, it must be $\phi(G)=1,$ so $G$ is a primitive monolithic soluble group.
\end{proof}
The following is immediate.
\begin{lemma}\label{FFF} Let $g,h\in G$ and $N\mathrel{\unlhd} G$.
\begin{enumerate}[label=(\alph*)]
\item If $gN$ and $hN$ are adjacent vertices of $\G_\ff(G/N)$, then $g$ and $h$ are adjacent vertices of $\G_\ff(G)$.
\item If $g\in \mathcal{I}_\ff(G),$ then $gN\in \mathcal{I}_\ff(G/N).$
\item $\mathcal{I}_\ff(G)^\sigma=\mathcal{I}_\ff(G)$ for every $\sigma\in\aut(G)$.
\end{enumerate}
\end{lemma}
\begin{prop}\cite[Theorem A]{sk}\label{frf}
Let $\mathfrak{F}$ be a saturated formation. Let $H\leq G$ and $N\mathrel{\unlhd} G$. Then
\begin{enumerate}[label=(\alph*)]
\item If $H\in\mathfrak{F}$, then $H\fr_\ff(G)\in\mathfrak{F}$;
\item If $N\mathrel{\unlhd}\fr_\ff(G)$, then $\fr_\ff(G)/N=\fr_\ff(G/N)$.
\end{enumerate}
\end{prop}
\section{Proof of Theorem \ref{regolari}}
Let $\mathfrak F$ be an hereditary saturated formation, with $\mathfrak A \subseteq \mathfrak{F} \subseteq \mathfrak S.$
\
First we claim that
$\fr_\ff(G)\subseteq \mathcal{I}_\ff(G)$.
Since $\mathfrak{F}$ contains all the cyclic groups, by Proposition \ref{frf} (a), $\gen{x}\fr_\ff(G)\in\mathfrak{F}$ for any $x\in G.$ The conclusion follows from the fact that $\mathfrak{F}$ is hereditary.
\
Suppose that $\mathfrak{F}$ is regular and let $G$ be a soluble strongly critical group for $\mathfrak{F}$. By Lemma \ref{N}, $G$ is a primitive monolithic soluble group. Moreover, since $G$ is critical for $\mathfrak{F}$,
all the maximal subgroups of $G$ are in $\mathfrak{F},$ and therefore
$\mathcal{I}_\ff(G)=\fr_\ff(G)=\phi(G)=1.
$ Let $N=\soc(G)$ and $S$ a complement of $N$ in $G.$ Fix $1\neq n \in N.$ Since $n\notin \mathcal{I}_\ff(G)$, $\langle n, g\rangle \notin \mathfrak{F}$ for some $g\in G.$ Since $G$ is $\mathfrak{F}$-critical, it must be $\langle n, g\rangle=G$ and therefore $G/N$ is cyclic.
\
Conversely, suppose that $\mathfrak{F}$ is not regular and every soluble strongly critical group $G$ for $\mathfrak{F}$ is such that $G/\soc(G)$ is cyclic. Let $G$ be a smallest finite group such that $\fr_\ff(G)\subset\mathcal{I}_\ff(G)$. Of course $G\notin\mathfrak{F}$, otherwise $G=\fr_\ff(G)=\mathcal{I}_\ff(G)$. Let $x\in\mathcal{I}_\ff(G)\setminus\fr_\ff(G)$ and let $H$ be an $\mathfrak{F}$-maximal subgroup of $G$ which does not contain $x$.
\begin{step}
$G=\gen{x,H}$.
\end{step}
\begin{proof}
Suppose, by contradiction, $\gen{x,H}<G$. Then $x\in\mathcal{I}_\ff(\gen{x,H})=\fr_\ff(\gen{x,H})$, hence, by Proposition \ref{frf} (a), $\gen{x,H}=\fr_\ff(\gen{x,H})H\in\mathfrak{F}$, against the fact that $H$ is an $\mathfrak{F}$-maximal subgroup of $G$.
\end{proof}
\begin{step}\label{regolari:quo}
If $1\neq M\mathrel{\unlhd} G$, then $G/M\in\mathfrak{F}$.
\end{step}
\begin{proof}
By Lemma~\ref{FFF} and the minimality of $G,$ $xM\in\mathcal{I}_\ff(G/M)=\fr_\ff(G/M),$ hence $G/M=\gen{xM,HM/M}=\fr_\ff(G/M)HM/M\in\mathfrak{F}$, since $HM/M\cong H/(M\cap H)\in\mathfrak{F}$.
\end{proof}
\begin{step}
$G$ is a primitive monolithic soluble group and $\soc(G)=G^{\ff}$.
\end{step}
\begin{proof}
From Step \ref{regolari:quo} we are in the hypotheses of Lemma \ref{N}. If $\R(G)=1$, by \cite[Theorem 6.4]{gu}, for every $1\neq g_1\in G$ there exists $g_2\in G$ such that $\gen{g_1,g_2}$ is not soluble, and then $\gen{g_1,g_2}\notin\mathfrak{F}$, since $\mathfrak{F}$ contains only soluble groups. So, $\mathcal{I}_\ff(G)=1$, hence $\fr_\ff(G)=1$, which means $\mathcal{I}_\ff(G)=\fr_\ff(G)$, against the assumptions on $G$.
\end{proof}
Let $N=\soc(G)$, $S$ a complement of $N$ in $G$ and write $x=\bar n\bar s$ with $\bar n\in N, \bar s\in S.$
\begin{step}
There exists $1\neq n^*\in N\cap\mathcal{I}_\ff(G)$.
\end{step}
\begin{proof} We may assume $\bar s\neq 1$ (otherwise $x=\bar n\in N\cap \mathcal{I}_\ff(G)$) and $\bar s\notin V(S)$ (otherwise, by Proposition \ref{lemma10}, $\langle x, g\rangle=G\notin \mathfrak{F}$ for some $g\in G$ and $x=\bar n \bar s\notin \mathcal{I}_\ff(G)).$ Since $C_G(N)=N,$ there exists $m\in N$ such that $x^m\neq x.$ We claim that $n^*=[m,x^{-1}]\in N\cap \mathcal{I}_\ff(G).$ Indeed let $g\in G.$ Since $\bar s\not\in V(S),$ $K:=\gen{x,x^m,g}=
\gen{x, n^*x, g}=\gen{\bar n\bar s,n^*\bar n\bar s,g}\leq N\gen{\bar s,g}<G.$
In particular, again by the minimality of $G,$ $x,x^m\in\mathcal{I}_\ff(K)=\fr_\ff(K)$, hence $K=\fr_\ff(K)\gen{g}$ and, since $\gen{g}\in\mathfrak{F}$, $K\in\mathfrak{F}$. Since $\langle n^*, g\rangle \leq K,$ we conclude $\langle n^*, g\rangle \in \mathfrak{F}.$
\end{proof}
\begin{step}
$S$ is not cyclic.
\end{step}
\begin{proof}
Suppose, by contradiction, $S=\gen{s}.$ Since $N$ is an irreducible $S$-module and $n^*\neq 1,$ we have $\gen{n^*,s}=G.$ However $n^*\in \mathcal{I}_\ff(G),$ so this would imply $G\in \mathfrak{F}.$
\end{proof}
\begin{step}
$N\subseteq \mathcal{I}_\ff(G)$.
\end{step}
\begin{proof}
Suppose, by contradiction, that there exist $m\in N$ and $g\in G$ such that
$\gen{g,m}\not\in \mathfrak{F}.$ This implies $K:=N\gen{g}\notin \mathfrak{F}.$ By the previous step, $K<G.$ By Lemma \ref{FFF}, $(n^*)^s\in \mathcal{I}_\ff(G)$ for any $s\in S.$ So in particular
$X=\{(n^*)^s\mid s\in S\}\subseteq \mathcal{I}_\ff(K).$ However, by the minimality of $G,$ $\mathcal{I}_\ff(K)=\fr_\ff(K)$ is a subgroup of $G$, so $\langle X\rangle = N\leq \fr_\ff(K)$ and consequently $K=\fr_\ff(K)\langle g \rangle\in \mathfrak{F}.$
\end{proof}
\begin{step}
$G$ is a strongly critical group for $\mathfrak{F}$.
\end{step}
\begin{proof}
By Step \ref{regolari:quo}, we just need to prove that every maximal subgroup of $G$ is in $\mathfrak{F}$. Notice that $S\cong G/N\in\mathfrak{F}$, and so does every conjugate of $S$. The other maximal subgroups of $G$ are of the form $K:=NM$, with $M$ maximal in $S$. In particular, by the minimality of $G$, $\mathcal{I}_\ff(K)=\fr_\ff(K)$, and, by the previous step, $N\leq\fr_\ff(K)$. Hence $K=\fr_\ff(K)M\in\mathfrak{F}$, since $M\in\mathfrak{F}$.
\end{proof}
Finally, $G$ is a soluble strongly critical group for $\mathfrak{F}$, so $G/N\cong S$ is cyclic, but we excluded this possibility in Step 5. We have a contradiction, so $\mathfrak{F}$ must be regular.
\section{Proof of Theorem \ref{semi}}\label{riduzione}
To prove the theorem we need the following lemma.
\begin{lemma}\label{Icyc}
Suppose that $\mathfrak{F}$ is a 2-recognizable formation. If $\mathcal{I}_\ff(G)$ is a subgroup of $G$ and $G=\mathcal{I}_\ff(G)\!\gen{g}$ for some $g\in G$, then $G\in\mathfrak{F}$.
\end{lemma}
\begin{proof}
Let $x$ be an arbitrary element of $G$. We have $x=ig^\alpha$ for some $i\in\mathcal{I}_\ff(G)$ and $\alpha\in\mathbb{N}$. Moreover $\gen{g,ig^\alpha}=\gen{g,i}\in\mathfrak{F}$, since $i\in\mathcal{I}_\ff(G)$. Hence $g\in\mathcal{I}_\ff(G)$, so $G=\mathcal{I}_\ff(G)$ and, because $\mathfrak{F}$ is 2-recognizable, $G\in\mathfrak{F}$.
\end{proof}
\begin{proof}[Proof of the Theorem \ref{semi}]
Let $x,y\in\mathcal{I}_\ff(G)$ such that $xy\notin\mathcal{I}_\ff(G)$. There exists $g\in G$ such that $\gen{xy, g}\notin \mathfrak{F}$. Notice that the minimality property of $G$ implies $G=\gen{x,y,g}.$ Let $M$ be a non-trivial normal subgroup of $G$ and set $I/M:=\mathcal{I}_\ff(G/M)\mathrel{\unlhd} G/M$. By Lemma \ref{FFF}, $xM,yM\in\mathcal{I}_\ff(G/M)$. Since $G=\gen{x,y,g}$, we have $\gen{gM}I/M=G/M$.
By Lemma \ref{Icyc}, $G/M\in\mathfrak{F}$.
So we are in the hypotheses of Lemma \ref{N}. If $\R(G)=1$, then, as in the proof of Theorem \ref{regolari}, $\mathcal{I}_\ff(G)=1$, in contradiction with the assumption that $\mathcal{I}_\ff(G)$ is not a subgroup of $G.$ So $G$ is a primitive monolithic soluble group and $N=\soc(G)=G^{\ff}.$
We will show now that there is an element $1\neq n^*\in N\cap \mathcal{I}_\ff(G)$. We write $x$ in the form $x=\bar{n}\bar{s},$ with $\bar n\in N$ and $\bar s\in S$. If $\bar s=1,$ then $x\in N\cap \mathcal{I}_\ff(G)$ and we are done (notice that $xy \not\in \mathcal{I}_\ff(G)$ implies $x\neq 1).$
Suppose $\bar s\neq1$. Since $G\notin \mathfrak{F},$ $x\notin V(G)$, hence $\bar s\notin V(S)$ by Proposition \ref{lemma10}. Since $C_N(x)\neq N,$ there exists $m\in N$ such that $x^m \neq x.$ We claim that $n^*:=[m,x^{-1}]\in N\cap \mathcal{I}_\ff(G).$ Indeed let $g\in G.$ Since $\bar s\not\in V(S),$ $K:=\gen{x,x^m,g}=
\gen{x, n^*x, g}=\gen{\bar n\bar s,n^*\bar n\bar s,g}\leq N\gen{\bar s,g}<G.$
In particular $x,x^m\in\mathcal{I}_\ff(K)$ and $K=\mathcal{I}_\ff(K)\gen{g}$ and therefore $K\in\mathfrak{F}$ by Lemma \ref{Icyc}.
We prove now that $N\subseteq\mathcal{I}_\ff(G)$. As in Step 6 of the proof of Theorem \ref{regolari}, assume by contradiction that $\gen{g,m}\notin \mathfrak{F},$ for some $m\in N$ and $g\in G.$ Setting $K:=N\gen{g},$ it follows, with the same argument, that $N\leq \mathcal{I}_\ff(K)$ and consequently $K=\mathcal{I}_\ff(K)\gen{g}\in \mathfrak{F}$ by Lemma \ref{Icyc}, a contradiction.
Let $s$ be an arbitrary element of $S$ and let
$H:=N\!\gen{s}.$ Since $N\subseteq \mathcal{I}_\ff(G)\cap H\subseteq \mathcal{I}_\ff(H),$ we deduce that $H\in \mathfrak{F}$ from Lemma \ref{Icyc}. This proves (2).
Let now $n\in N$ and $s\in S$. If $ns\in\mathcal{I}_\ff(G)$,
then $ns\notin V(G)$ and therefore $s\notin V(S)$ by
Proposition \ref{lemma10}. Let $t$ be an arbitrary element of $S$ and set $H:=N\!\gen{s,t}<G$. Since $H<G$, by the minimality of $G,$ $\mathcal{I}_\ff(H)$ is a subgroup of $G,$ and therefore $N\!\gen{s}\leq \mathcal{I}_\ff(H)$, and consequently $H=\mathcal{I}_\ff(H)\!\gen{t}$ and $H\in\mathfrak{F}$ by Lemma \ref{Icyc}. If, on the contrary, $ns\notin\mathcal{I}_\ff(G)$, then there exist $n^*\in N$ and $s^*\in S$ such that $\gen{n^*s^*,ns}\notin \mathfrak{F}$, hence $N\!\gen{s,s^*}\notin\mathfrak{F}$. This proves (3).
Finally, we prove (4). Let $K\leq S$. Suppose $H:=NK\in\mathfrak{F}$. Let $U/V$ be a $p$-chief factor of $H$ with $U\leq N$. Since $H\in\mathfrak{F}$, we have $\aut_H(U/V)=H/C_H(U/V)\in f(p)$; moreover, since $N$ is abelian, $N\leq\C_H(U/V)$, so $\C_H(U/V)=N\C_K(U/V)$ and hence $\aut_K(U/V)\cong\aut_H(U/V)\in f(p)$. Let $1=N_0\mathrel{\unlhd} N_1\mathrel{\unlhd}\dots\mathrel{\unlhd} N_t=N$ with $N_i/N_{i-1}$ a chief factor of $H$ for every $i$. Since $N$ is a $p$-group, $\aut_K(N_i/N_{i-1})\in f(p)$ for every $i$, so $K/T\in f(p)$ with $T:=\bigcap_{i=1}^t C_K(N_i/N_{i-1})$. Since $C_T(N)\leq C_K(N)=1,$ $T$ is a $p$-group, hence $K^{f(p)}\leq T\leq \oo_p(K)$ and $K\in\overline{f(p)}$. Conversely,
suppose $K\in\overline{f(p)}$. Let $1=N_0\mathrel{\unlhd}\dots\mathrel{\unlhd} N_t=N=NK_0\mathrel{\unlhd}\dots\mathrel{\unlhd} NK_s=NK=H$ be a chief series of $H$ and denote by $\F(H)$ the Fitting subgroup of $H.$ If $1\leq i \leq t,$ then $\aut_H(N_i/N_{i-1})$ is an epimorphic image of $H/\F(H)$, since $\F(H)\leq C_H(N_i/N_{i-1})$. On the other hand, $\F(H)=N\oo_p(K)$, hence $H/\F(H)\cong K/\oo_p(K)\in f(p)$, and so $\aut_H(N_i/N_{i-1})\in f(p)$. Consider now $\aut_H(NK_j/NK_{j-1})$ for $1\leq j\leq s$ and let $q$ be the prime dividing $|NK_j/NK_{j-1}|$. Then we have $H/C_H(NK_j/NK_{j-1})\cong K/C_K(K_j/K_{j-1})=\aut_K(K_j/K_{j-1})\in f(q)$, since $NK_j/NK_{j-1}\cong K_j/K_{j-1}$ is a chief factor of $K$ and $K\in\mathfrak{F}$. So $H$ satisfies all the local conditions, and therefore $H\in\mathfrak{F}$.
\end{proof}
\section{Proof of Theorem \ref{appli}}
\begin{prop}\label{ssol}
The formation $\mathfrak{U}$ of finite supersoluble groups is semiregular.
\end{prop}
\begin{proof}The formation
$\mathfrak{U}$ is 2-recognizable since every $\mathfrak{U}$-critical group is 2-generated (see for instance \cite[Example 1]{minnonF}). Assume by contradiction that $\mathfrak{U}$ is not semiregular and let $G$ be a group of minimal order with respect to the property that $\mathcal{I}_\uu(G)$ is not a subgroup. We can apply Theorem \ref{semi}. Let $N=\soc(G)$: we have $|N|=p^k$ for a prime $p$ and some $k$. Let $q\neq p$ be another prime divisor of the order of a complement $S$ of $N$ in $G$ and choose $s\in S$ with $|s|=q$. By Theorem \ref{semi}, $N\!\gen{s}\in\mathfrak{U}.$ Applying Maschke's Theorem, $N$ can be decomposed into a direct sum of irreducible submodules and, since $N\!\gen{s}$ is supersoluble, these submodules must have order $p$. So $s$ acts faithfully on a cyclic group of order $p$, hence $q$ divides $p-1$ and in particular $q<p$. If $p\mid |S|$, then $p$ would be the greatest prime divisor of $|S|$. Since $S\in\mathfrak{U}$, the Sylow $p$-subgroup of $S$ is normal in $S$. However, since $S$ acts faithfully and irreducibly on the finite $p$-group $N$,
$\oo_p(S)=1$. This implies $\gcd(|N|,|S|)=1$ and, since $N\gen{s}\in\mathfrak{U}$ for every $s\in S$, the exponent of $S$ divides $p-1$. The local definition $f(p)$ of $\mathfrak{U}$ is the formation of abelian groups with exponent dividing $p-1$; therefore, since $p$ does not divide $|S|$, $NK\in\mathfrak{U}$ \ifa $K$ is abelian. Hence $\mathcal{I}_\uu(G)=N\Z(S)$ is a subgroup of $G$, a contradiction.
\end{proof}
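The number-theoretic fact used in the proof above~--- a group of prime order $q$ acts faithfully on a cyclic group of order $p$ only if $q$ divides $p-1$, since $\aut(C_p)\cong(\mathbb{Z}/p\mathbb{Z})^*$ is cyclic of order $p-1$~--- can be checked numerically. The following Python sketch is purely illustrative (it is not part of the proof): it tests, for a prime $p$, which prime orders occur among elements of $(\mathbb{Z}/p\mathbb{Z})^*$.

```python
# Illustrative check: a faithful action of C_q on C_p corresponds to an element
# of order q in Aut(C_p) = (Z/pZ)^*, which is cyclic of order p - 1.
# Hence such an action exists iff q divides p - 1.

def multiplicative_order(a, p):
    """Order of a modulo the prime p (for a not divisible by p)."""
    order, x = 1, a % p
    while x != 1:
        x = (x * a) % p
        order += 1
    return order

def faithful_action_exists(q, p):
    """True iff some a in (Z/pZ)^* has multiplicative order exactly q."""
    return any(multiplicative_order(a, p) == q for a in range(1, p))

# q = 2 acts faithfully on C_7 (inversion), but q = 5 cannot act faithfully:
assert faithful_action_exists(2, 7)      # 2 divides 7 - 1
assert not faithful_action_exists(5, 7)  # 5 does not divide 6
assert all(faithful_action_exists(q, 7) == ((7 - 1) % q == 0) for q in (2, 3, 5))
```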
\begin{prop}
The formation $\mathfrak{D}$ of finite groups with nilpotent derived subgroup is semiregular.
\end{prop}
\begin{proof}
The $\mathfrak{D}$-critical groups are $2$-generated (see for instance \cite[Example 2]{minnonF}), so $\mathfrak{D}$ is 2-recognizable.
Suppose by contradiction that it is not semiregular and let $G$ be a group of minimal order such that $\mathcal{I}_\dd(G)$ is not a subgroup. We can apply Theorem \ref{semi}. Let $N=\soc(G)$ and let $S$ be a complement of $N$. We will prove that if $H\leq S$, then $NH\in\mathfrak{D}$ \ifa $H$ is abelian. Since $\mathfrak{D}$ has local screen $f$ with $f(q)$ the formation of abelian groups for every prime $q$, if $H$ is abelian, then $NH\in\mathfrak{D}$. On the other hand, suppose $NH\in\mathfrak{D}$. Let $1=N_0\mathrel{\unlhd}\dots\mathrel{\unlhd} N_l=N$ be a composition series of $N$ as an $H$-module. Let $V_i:=N_i/N_{i-1}$ and $C_i:=\C_H(V_i)$. For every $1\leq i\leq l,$ we have that $H/C_i\cong \aut_{NH}(V_i)$ is abelian, since $V_i$ is a chief factor of a group in $\mathfrak{D}$. Then $H/T$ is abelian, with $T:=\bigcap_{i=1}^lC_i$. Therefore $H'\leq T$.
Since $C_T(N)\leq C_H(N)=1,$ $T$ is a $p$-group, but $|S'|$ is not divisible by $p$ (otherwise, since $S^\prime$ is nilpotent,
we would have $\oo_p(S)\neq 1),$
so $H^\prime \leq T\cap S^\prime=1$ and $H$ is abelian. Hence $\mathcal{I}_\dd(G)=N\Z(S)$, a contradiction.
\end{proof}
Let $\mathfrak{N}^t$ be the formation of finite groups with Fitting length at most $t$. It is a 2-recognizable, saturated formation \cite[Example 3]{minnonF}. As an immediate application of Theorem \ref{semi}, we prove its semiregularity by proving that the formation $\overline{f(p)}=\mathfrak{S}_p\mathfrak{N}^{t-1}$ is semiregular for every prime $p$.
We will need two preliminary lemmas.
\begin{lemma}\label{snt}
$\mathfrak{S}_p\mathfrak{N}$ is regular for every prime $p$.
\end{lemma}
\begin{proof}
Let $G=N\rtimes S$ be a strongly critical group for $\mathfrak{S}_p\mathfrak{N}$. The socle $N$ of $G$ is a $q$-group. If $q=p$, then, since $S\cong G/N\in\mathfrak{S}_p\mathfrak{N}$ and $\oo_p(S)=1$, it follows that $S\in\mathfrak{N}$ and $G\in\mathfrak{S}_p\mathfrak{N}$, so it must be that $q\neq p$. If $K<S$, then $NK\in\mathfrak{S}_p\mathfrak{N}$. Since $C_S(N)=1,$ we deduce $\oo_p(NK)=1$, hence $NK\in\mathfrak{N}$, which implies that $NK$ is a $q$-group (otherwise $C_K(N)\neq 1$). We have then that all proper subgroups of $S$ are $q$-groups, but $S$ itself is not a $q$-group, so $S$ must be cyclic of order a prime $r\neq q$. We deduce from Theorem \ref{regolari} that $\mathfrak{S}_p\mathfrak{N}$ is regular.
\end{proof}
\begin{lemma}
$\mathfrak{S}_p\mathfrak{N}^t$ is a 2-recognizable saturated formation for every $t$ and every prime $p$.
\end{lemma}
\begin{proof}The formation $\mathfrak{S}_p\mathfrak{N}^t$ is saturated (see \cite[IV, 3.13 and 4.8]{dh}).
We prove by induction on $t$ that $\mathfrak{S}_p\mathfrak{N}^t$ is 2-recognizable. We have seen in Lemma \ref{snt} that $\mathfrak{S}_p\mathfrak{N}$ is 2-recognizable for every prime $p$. Let $t\neq 1$ and let $G$ be a group of minimal order with respect to the property that every 2-generated subgroup of $G$ is in
$\mathfrak{S}_p\mathfrak{N}^t$ but $G$ is not. Clearly $G$ is strongly critical for $\mathfrak{S}_p\mathfrak{N}^t$, so, by Lemma \ref{N}, $G=N\rtimes S$, where $N=\soc(G)$ is an elementary abelian group of prime power order and $S\in\mathfrak{S}_p\mathfrak{N}^t$. If $N$ is a $p$-group, then $G\in\mathfrak{S}_p\mathfrak{N}^t$, hence $N$ is a $q$-group with $q\neq p$. If $K<S$, then $NK\in \mathfrak{S}_p\mathfrak{N}^t.$ Since $C_K(N)=1,$ it must be $\oo_p(NK)=1$ so $NK\in\mathfrak{N}^t$. Moreover the Fitting subgroup $\F(NK)$ of $NK$ coincides with $\oo_q(NK)=N\oo_q(K)$ and therefore $K\in\mathfrak{S}_q\mathfrak{N}^{t-1}$, so $S$ is critical for $\mathfrak{S}_q\mathfrak{N}^{t-1}$. Since, by induction,
$\mathfrak{S}_q\mathfrak{N}^{t-1}$ is 2-recognizable, the group $S$ is 2-generated. By Proposition \ref{lemma10}, $G$ itself is $2$-generated and hence $G\in\mathfrak{S}_p\mathfrak{N}^t$, a contradiction.
\end{proof}
\begin{prop}
$\mathfrak{S}_p\mathfrak{N}^t$ is semiregular for every $t$ and every prime $p$.
\end{prop}
\begin{proof}
We prove by induction on $t$ that $\mathfrak{S}_p\mathfrak{N}^t$ is semiregular for every $t$. By Lemma \ref{snt} we may assume $t>1.$ Suppose by contradiction that $\mathfrak{S}_p\mathfrak{N}^t$ is not semiregular and let $G$ be a group of minimal order such that $\is{\mathfrak{S}_p\mathfrak{N}^t}(G)$ is not a subgroup. We can apply Theorem \ref{semi}. Let $N=\soc(G)$ and let $S$ be a complement of $N$. Since $S\in\mathfrak{S}_p\mathfrak{N}^t$, if $N$ were a $p$-group, then $G$ would be in $\mathfrak{S}_p\mathfrak{N}^t$, hence $N$ is a $q$-group with $q\neq p$. Let now $s_1,s_2\in S$ and $K:=\gen{s_1,s_2}$: since $\F(NK)=N\oo_q(K)$, we have $NK\in\mathfrak{S}_p\mathfrak{N}^t$ if and only if $NK\in\mathfrak{N}^t$, if and only if $K\in\mathfrak{S}_q\mathfrak{N}^{t-1}$. Hence by induction we conclude that $\is{\mathfrak{S}_p\mathfrak{N}^t}(G)=N\is{\mathfrak{S}_q\mathfrak{N}^{t-1}}(S)$ is a subgroup, a contradiction.
\end{proof}
\begin{prop}
$\mathfrak{N}^t$ is semiregular for every $t$.
\end{prop}
\begin{proof}
Since $\overline{f(p)}=\mathfrak{S}_p\mathfrak{N}^{t-1}$, the statement follows from Theorem \ref{semi} and the previous proposition.
\end{proof}
\section{Connectedness of $\G_\ff$}\label{connesso}
In this section we study for which formations the graph $\G_\ff(G)$ is connected for every finite group $G$. In the spirit of the previous sections, we will describe, under the additional assumption that $\mathfrak{F}$ is semiregular, the structure of a smallest group $G$ such that $\G_\ff(G)$ is not connected. First we need a preliminary lemma.
\begin{lemma}\label{swa}
Let $G$ be a 2-generated finite soluble group, with $G\notin \mathfrak{F}.$ If $x,y\in V(G),$ then $x$ and $y$ belong to the same connected component of $\G_\ff(G).$
\begin{proof}
Consider the graph $\Delta(G)$ whose vertices are the elements of
$V(G)$ and in which $g_1, g_2$ are adjacent if and only if $\langle g_1, g_2 \rangle=G.$ If $G$ is soluble then $\Delta(G)$ is a connected graph (see \cite[Theorem 1]{cl}). The conclusion follows from the fact that $\Delta(G)$ is a subgraph of $\G_\ff(G).$
\end{proof}
\end{lemma}
\begin{thm}\label{connection}
Let $\mathfrak F$ be a 2-recognizable, hereditary, saturated formation, with $\mathfrak A \subseteq \mathfrak{F} \subseteq \mathfrak S.$
Assume that $\mathfrak{F}$ is semiregular and suppose that there exists a finite group $G$ such that $\G_\ff(G)$ is not connected. If $G$ has minimal order with respect to this property, then $G$ is a primitive monolithic soluble group, $N=\soc(G)=G^{\ff}$ and $N\subseteq\mathcal{I}_\ff(G)$. Moreover, statements (2)--(4) of Theorem \ref{semi} hold. With the same notation, we have also that $\Gamma_{\overline{f(p)}}(S)$ is not connected.
\end{thm}
\noindent Given a finite group $X$, we will write $x_1\sim x_2$ to denote that $x_1$ and $x_2$ are two adjacent vertices of $\G_\ff(X)$
and $x_1\approx x_2$ if $x_1$ and $x_2$ belong to the same connected component of $\G_\ff(X).$
We divide the proof in the following steps.
\begin{stepp}
$G$ is a primitive monolithic soluble group and
$N=\soc(G)=G^{\ff}$.
\end{stepp}
\begin{proof} Suppose there exists $1\neq M\mathrel{\unlhd} G$ such that $G/M\notin\mathfrak{F}$. Set $I/M:=\mathcal{I}_\ff(G/M)\mathrel{\unlhd} G/M$ and let $a_1M,a_2M\notin I/M$. We have $a_1M\approx a_2M$ by minimality of $G$. Since, by Lemma \ref{FFF} (a), $g_1M\sim g_2M$ implies $g_1\sim g_2$, we can ``lift'' a path from $a_1M$ to $a_2M$ in $\G_\ff(G/M)$ to a path from $a_1$ to $a_2$ in $\G_\ff(G)$, so $a_1\approx a_2$. So there exists a unique connected component of $\G_\ff(G),$ say $\Omega,$ containing $G\setminus I.$
If $I\in\mathfrak{F}$, then every element of $I\setminus \mathcal{I}_\ff(G)$ must be adjacent to an element of $G\setminus I,$ so $I\setminus \mathcal{I}_\ff(G) \subseteq \Omega$. But this implies $\Omega=G\setminus \mathcal{I}_\ff(G)$, and consequently $\G_\ff(G)$ is connected. Therefore $I\notin\mathfrak{F}$. Since $\mathfrak{F}$ is 2-recognizable, this implies $\mathcal{I}_\ff(I)<I.$ Let $H$ be a maximal subgroup of $G$ containing $I$. Since
$\G_\ff(H)$ is connected, there exists a unique connected component of
$\G_\ff(G),$ say $\Delta,$ containing $H\setminus \mathcal{I}_\ff(H).$ Of course $I\setminus \mathcal{I}_\ff(I) \subseteq H\setminus \mathcal{I}_\ff(H),$ so $I\setminus \mathcal{I}_\ff(I)\subseteq \Delta.$ Recall that $G\setminus I\subseteq \Omega.$ Moreover if $x\in \mathcal{I}_\ff(I) \setminus \mathcal{I}_\ff(G),$ then $x\sim y$ for some $y\in G\setminus I,$ so $\mathcal{I}_\ff(I) \setminus \mathcal{I}_\ff(G)\subseteq \Omega$. If $\Delta\cap \Omega\neq \emptyset,$ then $\Delta=\Omega=G\setminus \mathcal{I}_\ff(G)$ and $\G_\ff(G)$ is connected. So we may assume $\Delta\cap \Omega=\emptyset,$ and consequently
$(H\setminus \mathcal{I}_\ff(H))\cap (H\setminus I)=\emptyset$, i.e. $H=I\cup \mathcal{I}_\ff(H).$ Since $H\notin \mathfrak{F}$ and $\mathfrak{F}$ is 2-recognizable, $\mathcal{I}_\ff(H)\neq H,$ and consequently $H=I.$ If $g\in G\setminus I,$ then $G=\gen{g}I$, so $G/M=\gen{gM}I/M=\gen{gM}\mathcal{I}_\ff(G/M)$ and, by Lemma \ref{Icyc}, $G/M\in\mathfrak{F}$, a contradiction.
So all the proper factors of $G$ are in $\mathfrak{F}$ and we may use Lemma \ref{N}. If $\R(G)=1$, then $\mathcal{I}_\ff(G)=1$. Let $a,b\in G,$ both different from $1$. By \cite[Theorem 6.4]{gu} there is a path in $\G_\mathfrak{S}(G)$ from $a$ to $b$. This path is also a path in $\G_\ff(G)$ since $H\notin\mathfrak{S}$ implies $H\notin\mathfrak{F}$ for every group $H$. So if $\R(G)=1$, then $\G_\ff(G)$ is connected. Hence we conclude that
$G$ is a primitive monolithic soluble group and $N=\soc(G)=G^{\ff}$.
\end{proof}
\begin{stepp}
$N\subseteq\mathcal{I}_\ff(G)$.
\end{stepp}
\begin{proof} Since $\mathcal{I}_\ff(G)\mathrel{\unlhd} G$ and $N$ is the unique minimal normal subgroup, if $\mathcal{I}_\ff(G)\neq 1$, then $N\subseteq\mathcal{I}_\ff(G)$. Hence we may assume by contradiction that $\mathcal{I}_\ff(G)=1$.
Let $S$ be a complement of $N$ in $G.$ Suppose that $S=\gen{s}$ is cyclic.
Since $S$ is a maximal subgroup of $G$, $\langle g,s\rangle =G$ for any $g\notin \gen{s},$ hence there exists a connected component $\Lambda$ of $\G_\ff(G)$ containing $s$ and $G\setminus \gen{s}.$
Moreover, every nontrivial element of $S$, being non-isolated in $\G_\ff(G),$ is adjacent to some element of $G\setminus S$, so $\Lambda=G\setminus \{1\}$ and $\G_\ff(G)$ is connected, a contradiction. So we may assume that $S$ is not cyclic. Take now $n_1, n_2\in N\setminus \{1\}$ and for $i\in\{1,2\}$ let $M_i<S$ be such that $n_i\notin\mathcal{I}_\ff(NM_i)$ (this is possible since $S$ is not cyclic). We have $N_i:=N\cap\mathcal{I}_\ff(NM_i)<N$, so $N_1\cup N_2\neq N$~(a group is never the union of two proper subgroups) and there exists $n\in N\setminus(N_1\cup N_2)$. We have then $n_1 \approx n$ in $\G_\ff(NM_1)$ and $n_2\approx n$ in $\G_\ff(NM_2)$, therefore $n_1\approx n_2$ in $\G_\ff(G)$. Hence there exists a connected component $\Pi$ of $\G_\ff(G)$ containing $N\setminus\{1\}$. Let now $g=ns$ be an arbitrary element of $G\setminus N.$ First assume $g\notin V(G)$. Since $N\not\leq C_G(g),$ there exists $n^*\in N\setminus \{n\}$ with the property that $g=(n^*s)^x$ for some $x\in G.$
We claim that $g\in \Pi.$ Since $n^*n^{-1}\neq 1$, there exists $\bar g= \bar n \bar s$ such that $\bar g\sim n^*n^{-1}$. Set $H:=N\!\gen{s,\ov{s}}$~(a proper subgroup of $G$ since, by Proposition \ref{lemma10}, $s\notin V(S)$). If $g\notin\mathcal{I}_\ff(H)$, then $ns\approx n^*n^{-1}$ (since $\G_\ff(H)$ is connected) and then $g\in \Pi.$ Assume $g\in\mathcal{I}_\ff(H)$.
We have $n^*s\not\in\mathcal{I}_\ff(H)$, (otherwise, since $\mathcal{I}_\ff(H)$ is a subgroup, $(n^*s)(ns)^{-1}=n^*n^{-1}\in \mathcal{I}_\ff(H)$), but then
$n^*s\approx n^*n^{-1}$ in $\G_\ff(H)$ and consequently $n^*s\in \Pi.$ This implies $g=(n^*s)^x\in \Pi^x=\Pi$ (notice that $\Pi^x=\Pi$ since $N\setminus \{1\}\subseteq \Pi \cap \Pi^x$). Suppose now $g\in V(G)$.
Choose $n_1, n_2\in N$ and $t \in S$ such that $n_2\sim n_1t$ and
set $H:=N\gen{s,t}.$
If $H=G$, then $t\in V(S)$ and consequently $n_1t\in V(G).$ Since $G$ is soluble, it follows from Lemma \ref{swa} that $g\approx n_1t \approx n_2$ and $g\in \Pi.$
If $H<G$, then $ms \notin \mathcal{I}_\ff(H)$ for some $m\in N$ (otherwise $N\leq \mathcal{I}_\ff(H)).$ By Proposition \ref{lemma10}, $ms\in V(G)$ and, again by Lemma \ref{swa},
$g \approx ms.$ Moreover, since $\G_\ff(H)$ is connected, $ms\approx n_2$. So $g \approx n_2$ and therefore $g\in \Pi.$
We reached in this way the conclusion that $\G_\ff(G)$ is connected, against the assumptions on $G$.
\end{proof}
\begin{stepp}
Statements (2)--(4) of Theorem \ref{semi} hold.
\end{stepp}
\begin{proof} We can use the same argument as in the proof of Theorem \ref{semi}.
\end{proof}
\begin{stepp}
$\Gamma_{\overline{f(p)}}(S)$ is not connected.
\end{stepp}
\begin{proof}
Suppose that $\Gamma_{\overline{f(p)}}(S)$ is connected. Let $s,t\in S$ such that $s\sim t$ in $\Gamma_{\overline{f(p)}}(S)$. We claim that $ns\approx mt$ for every $n,m\in N$. Suppose $\gen{s,t}=S$. By Proposition \ref{lemma10}, $ns,mt\in V(G)$, so, by Lemma \ref{swa}, they are in the same connected component of $\G_\ff(G)$. Suppose instead that $\gen{s,t}<S$. We have that $H:=N\gen{s,t}<G$ is not in $\mathfrak{F}$, since $\gen{s,t}\notin\overline{f(p)}$. Therefore $ns$ and $mt$ are not isolated in $\G_\ff(H)$ and, by the minimality of $G$, $\G_\ff(H)$ is connected, so $ns\approx mt$ in $\G_\ff(G)$ too. Choose now two non-isolated vertices $n_1s_1,n_2s_2\in\G_\ff(G)$ with $n_1,n_2\in N$ and $s_1,s_2\in S$. Since they are not isolated, $s_1,s_2\notin\is{\overline{f(p)}}(S)$, hence there is a path $s_1=z_0\sim\dots\sim z_l=s_2$ in $\Gamma_{\overline{f(p)}}(S)$ and, since $z_i\sim z_{i+1}$, we have, for every $m,h\in N$ and every $i$, that $mz_i\approx hz_{i+1}$ in $\G_\ff(G)$; so $n_1s_1\approx n_2s_2$, a contradiction.
\end{proof}
\begin{proof}[Proof of Theorem \ref{conreg}]
Suppose $G$ has minimal order with respect to the property that $\G_\ff(G)$ is not connected. By Theorem \ref{connection}, $G$ is a primitive monolithic group and $N\mathrel{\unlhd}\mathcal{I}_\ff(G)=\fr_\ff(G)$. By Proposition \ref{frf}, $\fr_\ff(G)/N=\fr_\ff(G/N)=G/N$, hence $\fr_\ff(G)=G$, a contradiction.
\end{proof}
\begin{proof}[Proof of Theorem \ref{conaltri}]
It follows by applying Theorem \ref{connection} and noticing that:
\begin{itemize}
\item If $\mathfrak{F}\in\{\mathfrak{U},\mathfrak{D}\}$, then $\Gamma_{\overline{f(p)}}(S)=\Gamma_{\mathfrak{A}}(S)$ is connected.
\item If $\mathfrak{F}=\mathfrak{S}_p\mathfrak{N}^t$ for some prime $p$ and some $t$, then $\Gamma_{\overline{f(p)}}(S)=\Gamma_{\mathfrak{S}_q\mathfrak{N}^{t-1}}(S)$ for some other prime $q$. Therefore we can use induction on $t$, considering that $\mathfrak{S}_p\mathfrak{N}$ is regular for every $p$ and that Theorem \ref{conreg} holds.
\item If $\mathfrak{F}=\mathfrak{N}^t$ for some $t$, then $\Gamma_{\overline{f(p)}}(S)=\Gamma_{\mathfrak{S}_p\mathfrak{N}^{t-1}}(S)$ for some prime $p$ and we can use the point above.\qedhere
\end{itemize}
\end{proof}
\section{Planarity of $\G_\ff$}
The generating graph $\tilde \Delta(G)$ of a finite group $G$ is the graph whose vertices are the elements of $G$ and in which two vertices $g_1$ and $g_2$ are adjacent if and only if $\langle g_1, g_2\rangle =G.$ Moreover $\Delta(G)$ is the subgraph of $\tilde \Delta(G)$ induced by the subset of its non-isolated vertices. Notice that if $G$ is a 2-generated $\mathfrak{F}$-critical group, then $\G_\ff(G)\cong \Delta(G).$
\begin{proof}[Proof of Theorem \ref{planar}]
One implication is easy: if $G\in \mathfrak{F}$ then $\G_\ff(G)$ is a null graph, while if $G\cong S_3$ and $S_3\notin \mathfrak F,$ then $\G_\ff(G)\cong \Delta(G)$ is planar, as noticed in \cite{planar}. Conversely, suppose $G\notin \mathfrak{F}$ and $\G_\ff(G)$ is planar. Since $\mathfrak{F}$ is 2-recognizable, there exist $a, b \in G$ such that $\langle a, b \rangle \notin \mathfrak{F}.$ Since $\Delta(\langle a, b\rangle)$ is a subgraph of $\G_\ff(G)$, it must be planar. Finite groups with planar generating graph have been completely classified in \cite{planar}. In particular, if $\Delta(X)$ is planar, then either $X$ is nilpotent or $X\in \{S_3, D_6\}.$ Since $\mathfrak{N} \subseteq \mathfrak{F},$ $\langle a, b \rangle$ is not nilpotent, so either $\langle a, b \rangle \cong S_3$ or $\langle a, b \rangle \cong D_6.$ Since $D_6\cong S_3\times C_2$ and $C_2\in \mathfrak{F},$ $D_6\notin \mathfrak{F}$ implies $S_3\notin \mathfrak{F}.$ Let $A$ be the set of non-central involutions of $D_6$ and let $B$ be the set of elements of $D_6$ of order divisible by 3: then $\Gamma_\mathfrak{F}(D_6)$ contains the complete bipartite graph with parts $A$ and $B$, so it is not planar.
Hence $\gen{a,b}$ can only be isomorphic to $S_3$. We show that all the elements of $G$ have order at most $3$. Suppose in fact that there is $g\in G$ such that $|g|\geq 4$. Since $\G_\ff(G)$ is planar, $g\notin \mathcal{I}_\ff(G)$ would imply that it generates a copy of $S_3$ with another element, but this is impossible since $|g|\geq 4$. We have then that $g\in \mathcal{I}_\ff(G)$ and therefore $|\mathcal{I}_\ff(G)|\geq 4.$ We claim that this is not possible. Indeed $G$ contains $X=\langle a, b \rangle \cong S_3\notin \mathfrak F$. Since $\mathfrak{F}$ is semiregular, $I:=\mathcal{I}_\ff(G)$ is a normal subgroup of $G$. Since $I\cap X=1,$ for every $x,y \in I$ we have
$$\frac{\langle ax, by \rangle}{I\cap \langle ax, by \rangle} \cong \frac{\langle ax, by \rangle I}{I} \cong \frac{\langle a, b \rangle I}{I}\cong {\langle a, b \rangle }\cong S_3\notin \mathfrak{F},
$$ hence
$\langle ax, by \rangle \notin \mathfrak F.$ But then $\G_\ff(G)$ contains the complete bipartite graph on the two parts $aI$ and $bI$
and then it is not planar. We have thus proved that all the elements of $G$ have order at most $3$.
Groups with this property have been classified in \cite{groups23}.
Since $G$ is not nilpotent and contains a subgroup isomorphic to
$S_3$, we have $G\cong A\rtimes \langle x \rangle$, with $A\cong C_3^t$ and $x$ acting on $A$ by sending every element to its inverse. In particular, the subgraph of $\G_\ff(G)$ induced by the $3^t$ involutions is complete, so it is planar only if $t=1$, i.e., $G\cong S_3.$
\end{proof}
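The base case of the theorem can be verified computationally: the generating graph $\Delta(S_3)$ has five vertices (the identity is isolated) and nine edges, and it is $K_5$ minus the edge between the two $3$-cycles, which is planar. The following Python sketch is illustrative only (names are ad hoc, not from the paper): it computes $\Delta(S_3)$ by brute force and checks the Euler bound $e\le 3v-6$ for planar graphs.

```python
from itertools import permutations, combinations

# All 6 elements of S_3 as tuples (images of 0,1,2); composition (f*g)(i) = f[g[i]].
S3 = list(permutations(range(3)))

def compose(f, g):
    return tuple(f[g[i]] for i in range(3))

def generated(*gens):
    """Subgroup generated by the given permutations (finite closure under composition)."""
    elems = {(0, 1, 2), *gens}
    while True:
        new = {compose(a, b) for a in elems for b in elems} - elems
        if not new:
            return elems
        elems |= new

# Edges of the generating graph: unordered pairs generating all of S_3.
edges = [(a, b) for a, b in combinations(S3, 2) if len(generated(a, b)) == 6]
vertices = {v for e in edges for v in e}

assert len(vertices) == 5 and len(edges) == 9   # identity is isolated
assert len(edges) <= 3 * len(vertices) - 6      # necessary Euler bound for planarity
```

The only non-adjacent pair of non-identity elements is the pair of $3$-cycles (they generate only $C_3$), which is why the graph is $K_5$ minus one edge.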
\section{HyFL\xspace{} Framework}
\label{sec:framework}
We now present the details of our~HyFL\xspace{} framework, which aims to address multiple key requirements:
complete model privacy, scalability, support for resource-constrained~(mobile/edge) devices, reduction of attack surface, ability to defend against multiple remaining threats, and high levels of user engagement.
Additionally, our framework seeks to capture various proposed architectures for~FL in a single abstraction.
We illustrate our framework in~\figref{fig:architecture} and detail the underlying algorithm in~\algref{alg:algorithm-workflow}.
\begin{figure*}[htb!]
\centering
\resizebox{0.98\textwidth}{!}{
\input{figures/architecture.tex}
}
\vspace{-3mm}
\caption{Three-layer architecture in HyFL\xspace{} for federated training of a machine learning model.}
\label{fig:architecture}
\end{figure*}
\subsection{HyFL\xspace{} Architecture}
\label{sec:framework-layers}
Our~HyFL\xspace{} framework is based on a three-layer architecture and extends the established hierarchical~FL paradigm~\cite{MLSys:BonawitzEGHIIKK19,ICLR:LSPJ20,IJCAI:Yang21}.
In hierarchical~FL, clients are initially organized into clusters, and their updates are aggregated at the cluster level.
These cluster-level aggregates are then further aggregated globally, resulting in an additional layer of aggregation.
We pursue a hierarchical approach as it facilitates large-scale deployments: Firstly, it can effectively model the taxonomy of trust among clients in real-world scenarios, such as trust among members of the same region or country~\cite{CN:MartiG06}.
Secondly, it distributes the workload among multiple clusters instead of relying on a central aggregator.
Furthermore, this type of hierarchy is ubiquitous in real-world scenarios such as~P2P gaming, organizations, and network infrastructures~\cite{INFOCOM:SARK02,CSUR:HV19,ICLR:LSPJ20}.
HyFL\xspace{} takes a different approach to model training than prior works: some require interactive collaboration among clients to preserve privacy during model training and thus are costly for a large number of clients~\cite{NDSS:SavPTFBSH21}.
Another existing method is pairwise training among clients and a representative, followed by an aggregation step similar to~FL~\cite{CCSW:MandalG19}; however, this approach preserves privacy only among clients and not against the aggregator, assuming no collusion to violate privacy.
We provide details for each layer in~HyFL\xspace next.
We focus on the sequence of operations performed by the entities in our architecture~(cf.~\figref{fig:architecture}) for a training over~$T$ iterations while considering necessary~MPC protocols and setup requirements.
Since we have a generic design, MPC protocols are abstracted and their specifics are given in~\tabref{tab:mpc-functionalities} in~\secref{app:mpc-functionalities}.
The notations used in~HyFL\xspace{} are listed in~\tabref{tab:notation}.
\input{figures/tables/notations.tex}
\subsubsection{Layer III: Clients}
\label{sec:framework-layer-III}
This layer is composed of~$\sizeFL{\footnotesize \mathcal{M}}$ distinct sets of clients~$\clientFLsetTotal{i}$~(with~$i \in [\sizeFL{\footnotesize \mathcal{M}}]$), called~\emph{clusters}, which are formed based on specific criteria relevant to the application~(e.g., European Union~(EU) member states for the~EU smart metering scheme~\cite{CHAPTER:CK13,REPORT:EUSmartMeter}).
Similar to standard~FL, only a random subset of clients, denoted by~$\clientFLset{i} \subseteq \clientFLsetTotal{i}$, will be selected by the training algorithm in an iteration~$t \in [1,T]$.
During iteration~$t$, each client~$\clientFL{i}{j} \in \clientFLset{i}$~(with~$j \in [\clusterclientsize{i}]$) holding data~$\HyFLData{t}{\clientFL{i}{j}}$ uses the~$\textsc{Share}$ protocol to securely distribute its data to a set of cluster servers~$\clusterserverset{i}$.
As detailed in~\secref{sec:framework-layer-II}, $\clusterserverset{i}$ constitutes a representative group of high-performance servers in which clients have a sufficient level of trust.
HyFL\xspace{} allows clients to share input and then leave at any time.
They can also rejoin the system later and provide additional data in the next iteration they get selected.
Hence, the clusters~$\clientFLsetTotal{i}$ are dynamic and change with each iteration.
Our method differs from the standard concept of~\enquote{data residing at the clients} in~FL, but we expect it to not negatively impact user engagement as data remains within the users' trust zone.
Additionally, the reduced computational load allows for the use of resource-constrained devices in training complex models and eliminates the need for shared-key setup among clients, making it easier to handle dropouts.
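To make the $\textsc{Share}$ step concrete, the sketch below shows additive secret sharing over a prime field, one standard instantiation; the concrete scheme, modulus, and corruption threshold are abstracted in HyFL\xspace{} and depend on each cluster's MPC configuration, so the names and modulus here are illustrative assumptions rather than part of the framework.

```python
import secrets

PRIME = 2**61 - 1  # illustrative modulus; the actual MPC ring/field is cluster-specific

def share(value, n):
    """Split `value` into n additive shares: any proper subset of shares is
    uniformly random, yet all n shares sum to `value` modulo PRIME."""
    parts = [secrets.randbelow(PRIME) for _ in range(n - 1)]
    parts.append((value - sum(parts)) % PRIME)
    return parts

def reconstruct(parts):
    return sum(parts) % PRIME

# A client splits one data value among the 3 servers of its MPC cluster:
data_value = 42
client_shares = share(data_value, 3)
assert len(client_shares) == 3
assert reconstruct(client_shares) == data_value
```

Because no per-client keys are involved, a client that leaves after sharing creates no key-management problem, matching the dropout handling described above.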
\subsubsection{Layer II: MPC Clusters}
\label{sec:framework-layer-II}
The second layer consists of~$\sizeFL{\footnotesize \mathcal{M}}$ sets of distributed training servers~$\clusterserverset{i}$~(with~$i \in [\sizeFL{\footnotesize \mathcal{M}}]$), called~\emph{MPC clusters}, with each~$\clusterserverset{i}$ corresponding to the cluster~$\clientFLsetTotal{i}$ in~Layer~III.
In iteration~$t$, Layer~I servers~(denoted by $\ensuremath{\mathcal{G}}$) initiate~ML training by sharing the current global model~$\HyFLmodel{t-1}$ among servers in~$\clusterserverset{i}$.
As will be discussed in~\secref{sec:framework-layer-I}, $\HyFLmodel{t-1}$ is also in a secret-shared form among~$\ensuremath{\mathcal{G}}$, represented by~$\MPCSharing{\HyFLmodel{t-1}}{\ensuremath{\mathcal{G}}}$.
To account for varying availability and trustworthiness of servers across regions, MPC clusters in~HyFL\xspace{} may use different~MPC configurations and differ, e.g., in their corruption threshold and security model~\cite{FTPS:EvansKR18}.
Therefore, $\ensuremath{\mathcal{G}}$ uses the~$\textsc{Reshare}$ protocol to convert the secret shares of~$\MPCSharing{\HyFLmodel{t-1}}{\ensuremath{\mathcal{G}}}$ to those of~$\clusterserverset{i}$, i.e., $\MPCSharing{\HyFLmodel{t-1}}{\clusterserverset{i}}$.
Given~$\MPCSharing{\HyFLmodel{t-1}}{\clusterserverset{i}}$, servers in~$\clusterserverset{i}$ use~$\textsc{Train}$ to employ~MPC-based~PPML techniques for private~ML training~\cite{NeurIPS:KnottVHSIM21,ICML:Keller022} on the cumulative data from all clients in the cluster~$\clientFLset{i}$, denoted by~$\MPCSharing{\HyFLData{t}{}}{\clusterserverset{i}}$.
This data may include leftover data from the same cluster in the previous iteration.
Furthermore, by utilizing a larger pool of training data, we can leverage the known benefits of batching, resulting in faster convergence~\cite{ARXIV:GDGNWKTJH17,SIAM:BCN18}.
After completing training, servers in~$\clusterserverset{i}$ utilize~$\textsc{Reshare}$ to secret-share the updated model with the~Layer~I servers, i.e., $\MPCSharing{\FLmodel{i}{t}}{\ensuremath{\mathcal{G}}}$.
To preserve the system's integrity, the servers for each~MPC cluster must be chosen with care to ensure that clients are willing to share their data among the servers and that not all of the servers are colluding.
One possible option is to build non-profit partnerships, such as in the~MOC alliance~\cite{CLOUDNET:ZinkICSKHDDLH21}, where organizations with mutual distrust can securely co-locate servers in the same data center with high-speed network connections.
Alternatively, trusted entities like government organizations with limited infrastructure can host their servers in confidential cloud computing environments~\cite{CACM:RCFCD21}.
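For additive sharing, the $\textsc{Reshare}$ conversion between committees admits a simple instantiation: each source server re-shares its own share towards the destination committee, and each destination server sums the sub-shares it receives. The following single-process Python sketch is a hypothetical illustration of this idea; HyFL\xspace{} abstracts the actual protocol, which must also bridge differing MPC configurations.

```python
import secrets

PRIME = 2**61 - 1  # illustrative modulus; HyFL abstracts the concrete MPC scheme

def share(value, n):
    parts = [secrets.randbelow(PRIME) for _ in range(n - 1)]
    return parts + [(value - sum(parts)) % PRIME]

def reshare(src_shares, n_dst):
    """Each source server re-shares its own share to the destination committee;
    destination server j then sums the j-th sub-share received from every source."""
    sub = [share(s, n_dst) for s in src_shares]      # done locally by each source server
    return [sum(col) % PRIME for col in zip(*sub)]   # done locally by each destination server

model_weight = 123456
global_shares = share(model_weight, 3)       # held by the Layer-I servers
cluster_shares = reshare(global_shares, 4)   # now held by a 4-server MPC cluster
assert sum(cluster_shares) % PRIME == model_weight
```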
\subsubsection{Layer I: Global Servers}
\label{sec:framework-layer-I}
The top layer consists of a set of~MPC servers~$\ensuremath{\mathcal{G}}$, named~\emph{Global Servers}, that~\emph{securely} aggregate trained models from all the~MPC clusters in~Layer~II, similarly to a standard~FL scheme with a distributed aggregator~\cite{DLS:FereidooniMMMMN21}.
Concretely, given the locally trained models~$\FLmodel{i}{t}$ for~$i \in [\sizeFL{\footnotesize \mathcal{M}}]$, servers in~$\ensuremath{\mathcal{G}}$ execute the secure aggregation protocol~$\textsc{Agg}$~\cite{PETS:MOJC23} to compute the updated global model in secret-shared form, i.e.,~$\MPCSharing{\FLmodel{}{t}}{\ensuremath{\mathcal{G}}}$.
Global servers~$\ensuremath{\mathcal{G}}$ use the~$\textsc{Reshare}$ protocol to distribute the aggregated model~$\FLmodel{}{t}$ to each of the~Layer~II clusters~$\clusterserverset{i}$ to start the next iteration~($t+1$).
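Aggregation of additively shared models is a local operation: each global server sums the shares it holds, yielding shares of the summed update without reconstructing any individual cluster model. The sketch below illustrates this for single-weight "models"; the actual $\textsc{Agg}$ protocol additionally handles averaging, fixed-point encoding, and robustness checks, all omitted in this illustrative assumption-laden example.

```python
import secrets

PRIME = 2**61 - 1  # illustrative modulus

def share(value, n):
    parts = [secrets.randbelow(PRIME) for _ in range(n - 1)]
    return parts + [(value - sum(parts)) % PRIME]

# Two clusters' trained model updates (one scaled-integer weight each for brevity),
# each secret-shared among 3 global servers:
updates = [100, 200]
shared = [share(u, 3) for u in updates]

# Each global server locally sums its own shares -> shares of the aggregate:
agg_shares = [sum(col) % PRIME for col in zip(*shared)]
assert sum(agg_shares) % PRIME == 300   # the sum of updates; no server saw either model
```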
\subsubsection*{HyFL\xspace{} - The Complete Picture}
\label{sec:framework-workflow}
\looseness=-1
\algref{alg:algorithm-workflow} provides the model training algorithm in HyFL\xspace.
Though HyFL\xspace{} has a three-layer architecture, it can easily accommodate more levels of hierarchy depending on the size of the deployment. For this, additional layers of~MPC clusters can be added between layers~I~and~II, with the clusters performing secure aggregation instead of~PPML training.
Existing schemes for global model privacy, such as~\citet{CCSW:MandalG19} and~\citet{DLS:FereidooniMMMMN21}, only protect the model from either clients or aggregator servers, leaving the possibility of collusion especially in a cross-device setting.
HyFL\xspace{} addresses this issue by keeping the global model in a secret-shared fashion, ensuring that no single entity or group of colluding entities~(up to an allowed corruption threshold) can access the model.
This provides a stronger sense of privacy and also protection against unauthorized use or misuse, such as a client disclosing the trained model to another organization for further training or commercial use.
\input{figures/algorithm_workflow.tex}
\subsection{Private Inference in HyFL\xspace}
\label{sec:framework-inference}
In~HyFL\xspace, after the defined number~$T$ of training iterations has been completed, the~MPC clusters begin to function as~ML inference clusters.
Here, we again utilize~PPML techniques to enable clients to query their clusters in a privacy-preserving way~\cite{NeurIPS:KnottVHSIM21,EPRINT:MWCB22}.
Consider the scenario where client~$\clientFL{}{}$ holding query~$\MPCQuery{}{}$ wants to use the inference service on a model~$\HyFLmodel{}$ that is secret shared with a cluster~$\clusterserverset{k}$.
This is accomplished by~$\clientFL{}{}$ generating~$\MPCSharing{\MPCQuery{}{}}{\clusterserverset{k}}$ using~$\textsc{Share}$, followed by cluster servers in~$\clusterserverset{k}$ invoking~$\textsc{Predict}$ on~$\MPCSharing{\HyFLmodel{}}{\clusterserverset{k}}$ and~$\MPCSharing{\MPCQuery{}{}}{\clusterserverset{k}}$ to generate the inference result in secret-shared form.
Finally, $\clusterserverset{k}$ reveals the result to~$\clientFL{}{}$ using~$\textsc{Reveal}$ protocol.
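To make this flow concrete, the following is a minimal plaintext sketch of the~$\textsc{Share}$/$\textsc{Predict}$/$\textsc{Reveal}$ round-trip using additive secret sharing. The modulus and the reduction of~$\textsc{Predict}$ to a public-weight linear layer are simplifying assumptions for illustration; real~PPML inference (e.g., in~CrypTen) works on fixed-point shares and also handles secret weights via multiplication triples.

```python
import random

P = 2**61 - 1  # toy modulus; any sufficiently large prime works for this sketch

def share(x, n=2):
    """Share: split x into n additive shares modulo P."""
    parts = [random.randrange(P) for _ in range(n - 1)]
    parts.append((x - sum(parts)) % P)
    return parts

def reveal(parts):
    """Reveal: reconstruct the secret from all shares."""
    return sum(parts) % P

def predict_on_shares(w, q_shares, b_shares):
    """Predict, reduced to a public-weight linear layer w*q + b.

    Each server applies the layer to its own shares, so no single
    server ever sees the query q or the bias b.
    """
    return [(w * qs + bs) % P for qs, bs in zip(q_shares, b_shares)]
```

For example, sharing the query $q=7$ and bias $b=5$ among two servers and evaluating with public weight $w=3$ reveals $3 \cdot 7 + 5 = 26$, while each individual share is uniformly random.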
\subsection{Abstraction of Existing FL Schemes}
\label{sec:framework-abstraction}
Our three-layer~HyFL\xspace{} architecture~(cf.~\figref{fig:architecture}) consolidates many existing~FL frameworks~(cf.~\tabref{tab:abstraction}).
This abstraction simplifies comparisons and facilitates advanced hybrid designs, such as incorporating differential privacy.
Standard~FL with a single aggregator~(\textsc{Single}~$\ensuremath{\mathcal{S}}$)~\cite{AISTATS:McMahanMRHA17} is a variant of~HyFL\xspace, where each~Layer~III cluster~$\clientFLset{i}$ consists of only one client that also serves as the~MPC cluster server~$\clusterserverset{i}$ in~Layer~II.
Thus, it is sufficient to conduct~ML training without privacy concerns and then send the results to a single global server~$\ensuremath{\mathcal{G}}$ in~Layer~I for aggregation.
The case of distributed aggregators~(\textsc{Multi}~$\ensuremath{\mathcal{S}}$)~\cite{DLS:FereidooniMMMMN21} follows similarly, except secure aggregation being performed at~Layer~I with multiple~($\ensuremath{\sizeFL{\globalserverset}}>1$) global servers.
Finally, existing hierarchical~FL schemes~\cite{IJCAI:Yang21} share a similar three-layer architecture with~HyFL\xspace, but have a single server at both the global and cluster level~($\ensuremath{\sizeFL{\globalserverset}}=1$, $\clusterserversize{i}=1$).
While~HyFL\xspace{} employs~PPML training at the cluster-server level, hierarchical~FL uses secure aggregation.
Additionally, clients in the hierarchical~FL approach perform local model training, as opposed to data sharing in~HyFL\xspace.
\vspace{-2mm}
\input{figures/tables/abstraction.tex}
\vspace{-2mm}
\subsection{MPC Functionalities in~HyFL\xspace}
\label{app:mpc-functionalities}
The~MPC functionalities utilized in~HyFL\xspace{} are summarized in~Tab.~\ref{tab:mpc-functionalities}.
While~$\textsc{Share}$ and~$\textsc{Reshare}$ are used for generating the secret-shares as per the underlying~MPC semantics, $\textsc{Reveal}$ is used to reconstruct the secret towards a designated party.
The~$\textsc{Train}$ and~$\textsc{Predict}$ functionalities correspond to~PPML training and inference protocols, respectively.
Similarly, $\textsc{Agg}$ denotes the secure aggregation functionality in~FL~(cf.~\secref{app:related_work_secagg}).
In~HyFL\xspace{}, these functionalities are realized using the~CrypTen framework, in which two semi-honest~MPC servers carry out the computation with the help of a trusted third server~\cite{CRYPTO:DamgardPSZ12,NDSS:Demmler0Z15,NeurIPS:KnottVHSIM21}.
However, HyFL\xspace{} is not bound to any specific~MPC setting and could be instantiated using any~MPC protocol.
\vspace{-4mm}
\input{figures/tables/mpc_functions_wide.tex}
\vspace{-3mm}
The functionalities~$\textsc{TM-List}$ and~$\textsc{TopK-Hitter}$ are used in our proposed~\enquote{Trimmed Mean Variant} given in~\algref{alg:algorithm-tr}.
$\textsc{TM-List}$ takes as input a set of vectors, say~$\myMPCSet{W}$, consisting of~$\ensuremath{{\beta}}$-sized vectors of the form~$\FLmodel{i}{j}$ for~$i \in [\ensuremath{{\beta}}], j \in [\sizeFL{\myMPCSet{W}}]$.
Moreover, the values in the vector come from a fixed source, i.e., the~Layer~II~MPC clusters in our case, and are thus represented as tuples of the form~$\FLmodel{i}{j} = (u_i,v_i)_j$.
Here~$u_i$ denotes the source~ID~(MPC cluster in HyFL\xspace) and~$v_i$ represents the corresponding value.
W.l.o.g., consider the first index position of these vectors~($i=1$). $\textsc{TM-List}$ sorts the list~$\{(u,v)_j\}_{j \in [\sizeFL{\myMPCSet{W}}]}$ using the value~$v$ as the key and selects the IDs~($u$) associated with the top and bottom~$\ensuremath{{\alpha}}$ values.
Intuitively, the operation results in selecting the~MPC clusters whose local update fall in either the top-$\ensuremath{{\alpha}}$ or bottom-$\ensuremath{{\alpha}}$ position among all the updates at that index.
This procedure is performed in parallel for all~$\ensuremath{{\beta}}$ indices and results in a set~$\myMPCSet{U}$ of~$2\ensuremath{{\alpha}}\ensuremath{{\beta}}$ IDs~(with duplicates).
The~$\textsc{TopK-Hitter}$ functionality, parameterized by~$\Gamma$, takes this set~$\myMPCSet{U}$ as input and returns a set of~$\Gamma$ values that occur most frequently in~$\myMPCSet{U}$.
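A plaintext sketch of this selection logic follows (numpy-based; the row index doubles as the source~ID~$u$, and the~MPC version performs the sorting obliviously):

```python
import numpy as np
from collections import Counter

def tm_list(W, alpha):
    """TM-List: IDs of clusters whose value falls in the top-alpha or
    bottom-alpha positions at each coordinate.

    W has shape (num_clusters, beta); the row index is the cluster ID u
    and the entries are the values v.
    """
    order = np.argsort(W, axis=0)                    # per-column ranking by value
    extremes = np.concatenate([order[:alpha], order[-alpha:]])
    return extremes.ravel().tolist()                 # 2*alpha*beta IDs, duplicates kept

def topk_hitter(ids, gamma):
    """TopK-Hitter: the gamma IDs occurring most frequently in the TM-List output."""
    return [u for u, _ in Counter(ids).most_common(gamma)]
```

On a toy input where one cluster submits extreme values at every coordinate, that cluster's ID dominates the multiset and is returned by~$\textsc{TopK-Hitter}$.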
\subsection{Incorporating Data-Poisoning Attacks in HyFL\xspace}
\label{app:data-poisoning-attacks}
In this section, we provide a comprehensive overview of the attack setting and the types of poisoning attacks implemented in the HyFL\xspace{} framework. Note that all attacks implemented here are data-poisoning attacks, as model-poisoning attacks are not applicable in the context of HyFL\xspace{}.
Regarding the attack setting, it is assumed that all malicious clients are controlled by a single attacker and are executing a coordinated attack. This implies that only one attack is executed at a time, and if two malicious clients possess the same data sample, the poisoned labels of these samples will be identical. Additionally, it is assumed that a malicious client will always poison all of their data samples to maximize the impact of the attack. Furthermore, the data is only poisoned once, and the labels remain constant throughout the training of the model.
For data poisoning attacks, we implement four different types of label-flipping attacks: random label flipping~(RLF~\cite{ECAI:XiaoXE12}), static label flipping~(SLF~\cite{USENIX:FangCJG20, SP:SHKR22}), dynamic label flipping~(DLF~\cite{SP:SHKR22}), and~targeted label flipping~(TLF~\cite{ESORICS:TolpeginTGL20}).
RLF is the most basic label-flipping attack.
As the name suggests, each poisoned sample is assigned a random class label, i.e., $\textit{new}\_\textit{label} = \texttt{randint}(0, \textit{num}\_\textit{classes} - 1)$ with~$\textit{num}\_\textit{classes}$ as the number of classes in the dataset.
This attack shows the effect of random label noise added to the training.
SLF, originally proposed by~\citet{USENIX:FangCJG20} and named by~\citet{SP:SHKR22}, uses a fixed permutation to determine the new label for each poisoned sample.
The attack is described by the following equation: $\textit{new}\_\textit{label} = \textit{num}\_\textit{classes} - \textit{old}\_\textit{label} - 1$.
For a~10-class dataset, this means that labels~0 and~9, 1 and~8, 2 and~7, and so on are switched.
Since the permutation is fixed, it is called a static attack~\cite{SP:SHKR22}.
DLF~\cite{SP:SHKR22} uses a surrogate model to flip the labels of each sample.
In our implementation, the data from all malicious clients is combined and used to train a model of the same architecture as used in~HyFL\xspace training.
After training, the model is used for inference on the data and the labels are set to the least probable output by the surrogate model.
The name dynamic is chosen because the labels depend on the trained model.
By varying the training setting, the poisoned labels will change.
The exact training settings of the surrogate model are given in~Tab.~\ref{tab:dlf_setting}.
\vspace{-4mm}
\input{figures/tables/parameters_DLF_attack.tex}
TLF~\cite{ESORICS:TolpeginTGL20} is the only targeted data poisoning attack.
It simply flips all labels from a source class to a target class.
In the evaluation, we always set the source class as~0 and the target class as~1.
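Three of the four attacks are simple label maps and can be sketched in a few lines (DLF is omitted here since it requires training a surrogate model); the constant and function names are illustrative:

```python
import random

NUM_CLASSES = 10  # num_classes of the dataset

def rlf(label):
    """RLF: assign a uniformly random class label."""
    return random.randint(0, NUM_CLASSES - 1)

def slf(label):
    """SLF: fixed permutation new_label = num_classes - old_label - 1."""
    return NUM_CLASSES - label - 1

def tlf(label, source=0, target=1):
    """TLF: relabel only samples of the source class to the target class."""
    return target if label == source else label
```

For a 10-class dataset, SLF swaps labels~0 and~9, 1 and~8, and so on, while TLF with source~0 and target~1 leaves all other classes untouched.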
\subsubsection{CrypTen Implementation Details}
CrypTen itself has no oblivious sorting functionality built in, so we implement privacy-preserving sorting for so-called~CrypTensors.
Sorting is necessary to compute trimmed mean and our optimized trimmed mean variant.
We minimize the number of comparisons by implementing a bitonic sorting network that generates the necessary comparisons between elements to allow for a non-stable sorting algorithm.
For trimmed mean, it is not necessary to preserve the relative order of same valued keys as each of them would have been seen as suspicious anyway.
The comparisons are stored in plaintext, as they do not reveal any information about the data.
The comparison steps are only computed once and then executed in parallel for each coordinate.
For~100 elements, we perform~1077 comparisons and for~10 elements only~31.
As the result of each comparison is hidden, we perform the swap operation for each pair as described in~Lst.~\ref{listing:sorting_crypten}.
\lstset{style=python, escapechar=|}
\begin{figure}[htb!]
\lstinputlisting[language=python,
caption={{CrypTen: Parallel Sorting}},
basicstyle=\scriptsize\ttfamily,
label={listing:sorting_crypten}
]
{figures/source_code/sorting.py}
\end{figure}
\lstset{style=python, escapechar=|}
\begin{figure}[htb!]
\lstinputlisting[language=python,
caption={{CrypTen: Sum of Benign Updates}},
basicstyle=\scriptsize\ttfamily,
label={listing:sumofbenign}
]
{figures/source_code/sumbenign.py}
\end{figure}
When computing the proposed trimmed mean variant, we are only interested in the indices of the outliers and therefore only perform the swap operations on an indices list.
This is done to minimize the compute operations and thereby reduce time and communication.
After identifying which gradients were most often detected as outliers and computing the set of benign indices, we finally compute the sum over those indices while preserving privacy.
The procedure is shown in~Lst.~\ref{listing:sumofbenign}.
\subsubsection{Run configuration}
In this section, we list all hyperparameters used during training that do not change between runs.
The parameters which change between runs will be explained in the appropriate section.
Tab.~\ref{tab:basic_run_config} shows the settings that affect the training by the clients and~MPC clusters respectively.
We consider~1000 clients overall from which~100 are randomly sampled during each training iteration.
In the~FL setting, all sampled clients provide their computed update directly to the aggregation server.
In the~HyFL\xspace setting, each of the~10~MPC clusters has~100 associated clients from which each samples~10 clients at random.
Each client has~200 data points assigned to it at random with duplicates allowed between clients.
To allow for a fair comparison between~FL and~HyFL\xspace, we scale the learning rate from~0.005 for the~FL setting to~0.05 for the~HyFL\xspace setting.
Each client and~MPC cluster performs~5 local training epochs with either a batch size of~8 or~80, respectively, following the recommendation of~\citet{ARXIV:GDGNWKTJH17}.
\input{figures/tables/parameters_training_setting.tex}
Tab.~\ref{tab:aggregation_run_config} shows the hyperparameters used by the aggregation server.
The trimmed mean threshold~$\ensuremath{{\alpha}}$ has been chosen such that it can exclude all malicious updates on either side in the~FL setting.
The worst-case scenario we consider is a~0.2 poison rate, which is equivalent to approximately~20 malicious clients selected per round.
In the~HyFL\xspace setting, the cluster-focused setting is the strongest adversarial setting and we therefore appropriately scale~$\ensuremath{{\alpha}}$.
For the trimmed mean variant, we exclude complete updates based on how often they were detected as outliers and therefore need to double~$\ensuremath{{\alpha}}$ to later aggregate the same number of parameters per coordinate.
FLTrust uses a root data set to calculate the server model.
To be most effective, the server model must be similar to the benign models, and therefore the root dataset must be representative of the whole dataset.
Therefore, we sample~(like for all clients)~200 data points for that dataset.
\input{figures/tables/parameters_aggregation_setting.tex}
\subsubsection{Additional Results for Q1}
\label{sec:app:CIFAR10-attacks-Q1}
Fig.~\ref{fig:FL-HyFL-FedAvg-2000-no-attack} shows the full~2000 training iterations.
As seen in~Fig.~\ref{fig:FL-HyFL-FedAvg-500-no-attack}, HyFL\xspace trains the model faster than regular~FL and converges to a higher validation accuracy.
For~MNIST, HyFL\xspace achieves~98.95\% compared to~98.72\% for regular~FL, and~89.04\% compared to~82.14\% on CIFAR10, respectively.
\begin{figure}[htb!]
\centering
\includegraphics[width=8.5cm]{figures/plots/MNIST_FedAvg_2000_no_attack_HyFL_FL_Comparison.pdf}
%
\includegraphics[width=8.5cm]{figures/plots/CIFAR10_FedAvg_2000_no_attack_HyFL_FL_Comparison.pdf}
\caption{Validation accuracy for~FL and~HyFL\xspace for~2000 iterations~(top:~LeNet/MNIST, bottom:~ResNet9/CIFAR10)}
\label{fig:FL-HyFL-FedAvg-2000-no-attack}
\end{figure}
\subsubsection{Additional Results for Q4}
\label{sec:app:CIFAR10-attacks}
\paragraph{ResNet9/CIFAR10}
We present the full evaluation results for~ResNet9 trained on~CIFAR10 under attack.
We evaluate~FedAvg, FLTrust, and trimmed mean in regular~FL and~HyFL\xspace under four different data-poisoning attacks with three different poison rates and three different distributions of malicious clients.
Fig.~\ref{fig:CIFAR10-FL-HyFL-FedAvg-FLTrust-TrimmedMean-2000-all-attack-equally-distributed} shows the results for equally-distributed malicious clients for~2000 rounds.
Fig.~\ref{fig:CIFAR10-FL-HyFL-FedAvg-FLTrust-TrimmedMean-2000-all-attack-focused} shows the focused attack and~Fig.~\ref{fig:CIFAR10-FL-HyFL-FedAvg-FLTrust-TrimmedMean-2000-all-attack-cluster-focused} shows cluster-focused results.
HyFL\xspace outperforms regular~FL in nearly all settings and has more than~5\% higher validation accuracy after~2000 rounds.
In both regular~FL and~HyFL\xspace, all three aggregation schemes show robustness against data poisoning in the realistic settings with~0.01 and~0.1 poison rate and equally-distributed as well as focused attacks.
This is no longer the case for a poison rate of~0.2.
Especially~FedAvg as a non-robust aggregation scheme struggles to converge.
Even robust aggregations cannot fully counteract the effect of the attacks.
The same is true for the cluster-focused attack distribution.
Here, all aggregations have spikes in their validation accuracy; only~FLTrust manages to train a model comparable to the other settings.
This is because~FLTrust is designed to be robust against model poisoning, and the cluster-focused setting is the closest to a model-poisoning attack.
\paragraph{LeNet/MNIST}
We now show additional evaluation results for~LeNet trained on~MNIST.
The settings are the same as previously.
Fig.~\ref{fig:MNIST-FL-HyFL-FedAvg-FLTrust-TrimmedMean-2000-all-attack-equally-distributed} shows equally-distributed, Fig.~\ref{fig:MNIST-FL-HyFL-FedAvg-FLTrust-TrimmedMean-2000-all-attack-focused} shows focused, and~Fig~\ref{fig:MNIST-FL-HyFL-FedAvg-FLTrust-TrimmedMean-2000-all-attack-cluster-focused} shows the cluster-focused attack distribution for~2000 iterations.
As for~ResNet9, HyFL\xspace trains the model faster than regular~FL, but both converge to a similar accuracy in most cases after~2000 rounds.
Here, the advantage of~HyFL\xspace over regular~FL mostly lies in the faster convergence: after only~100 rounds of training, HyFL\xspace nearly reaches~98\% accuracy, regardless of the attack or attack setting.
Also, there is little difference between the three aggregation schemes for the realistic settings.
For the~0.2 poison rate, FedAvg struggles considerably to train the model.
This can mostly be seen for~FedAvg in regular~FL and sometimes in~HyFL\xspace.
\subsubsection{Additional Results for Q6}
\label{sec:app:Q6-res}
\paragraph{ResNet9/CIFAR10}
We present the extended evaluation of the trimmed mean variant for~CIFAR10 under all attacks and attack settings in~HyFL\xspace.
The plots are divided into the three distributions for malicious clients and show the evaluation results for~2000 iterations.
Fig.~\ref{fig:CIFAR10-HyFL-TrimmedMean-Variant-2000-all-attack-equally-distributed} shows equally-distributed, Fig.~\ref{fig:CIFAR10-HyFL-TrimmedMean-Variant-2000-all-attack-focused} shows focused, and~Fig.~\ref{fig:CIFAR10-HyFL-TrimmedMean-Variant-2000-all-attack-cluster-focused} shows cluster-focused.
We compare trimmed mean with our trimmed mean variant when sampling~10, 100, and~1000 coordinates.
In most cases, all four aggregations perform equally and the final validation accuracy is nearly the same.
The only difference can be seen with the cluster-focused attack distribution and~0.2 poison rate.
Here, trimmed mean achieves a substantially lower accuracy than the trimmed mean variant.
We assume this is because cluster-focused acts more like a model-poisoning attack, and the trimmed mean variant deals better with this by discarding whole gradients rather than operating coordinate-wise.
\paragraph{LeNet/MNIST}
We also compared trimmed mean against our trimmed mean variant for~LeNet being trained on~MNIST.
However, in almost all settings, the aggregation schemes perform very similarly.
Hence, we omit the full plots.
\begin{figure*}[htb!]
\centering
\includegraphics[width=0.925\linewidth]{figures/plots/CIFAR10_FedAvg_FLTrust_TrimmedMean_2000_all_attacks_Equally_Distributed_Aggregation_Comparison.pdf}
\vspace{-4mm}
\caption{Validation accuracy for~ResNet9/CIFAR10 training with~FedAvg, FLTrust, and~trimmed mean for~2000 iterations under~RLF, SLF, DLF, and~TLF attacks in the equally-distributed setting.}
\label{fig:CIFAR10-FL-HyFL-FedAvg-FLTrust-TrimmedMean-2000-all-attack-equally-distributed}
\end{figure*}
\begin{figure*}[htb!]
\centering
\includegraphics[width=0.925\linewidth]{figures/plots/CIFAR10_FedAvg_FLTrust_TrimmedMean_2000_all_attacks_Focused_Aggregation_Comparison.pdf}
\vspace{-4mm}
\caption{Validation accuracy for~ResNet9/CIFAR10 training with~FedAvg, FLTrust, and~trimmed mean for~2000 iterations under~RLF, SLF, DLF, and~TLF attacks in the focused setting.}
\label{fig:CIFAR10-FL-HyFL-FedAvg-FLTrust-TrimmedMean-2000-all-attack-focused}
\end{figure*}
\begin{figure*}[htb!]
\centering
\includegraphics[width=0.925\linewidth]{figures/plots/CIFAR10_FedAvg_FLTrust_TrimmedMean_2000_all_attacks_Cluster_Focused_Aggregation_Comparison.pdf}
\vspace{-4mm}
\caption{Validation accuracy for~ResNet9/CIFAR10 training with~FedAvg, FLTrust, and~trimmed mean for~2000 iterations under~RLF, SLF, DLF, and~TLF attacks in the cluster-focused setting.}
\label{fig:CIFAR10-FL-HyFL-FedAvg-FLTrust-TrimmedMean-2000-all-attack-cluster-focused}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.925\linewidth]{figures/plots/MNIST_FedAvg_FLTrust_TrimmedMean_2000_all_attacks_Equally_Distributed_Aggregation_Comparison.pdf}
\vspace{-4mm}
\caption{Validation accuracy for~LeNet/MNIST training with~FedAvg, FLTrust, and~trimmed mean for~2000 iterations under~RLF, SLF, DLF, and~TLF attacks in the equally-distributed setting.}
\label{fig:MNIST-FL-HyFL-FedAvg-FLTrust-TrimmedMean-2000-all-attack-equally-distributed}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.925\linewidth]{figures/plots/MNIST_FedAvg_FLTrust_TrimmedMean_2000_all_attacks_Focused_Aggregation_Comparison.pdf}
\vspace{-4mm}
\caption{Validation accuracy for~LeNet/MNIST training with~FedAvg, FLTrust, and~trimmed mean for~2000 iterations under~RLF, SLF, DLF, and~TLF attacks in the focused setting.}
\label{fig:MNIST-FL-HyFL-FedAvg-FLTrust-TrimmedMean-2000-all-attack-focused}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.925\linewidth]{figures/plots/MNIST_FedAvg_FLTrust_TrimmedMean_2000_all_attacks_Cluster_Focused_Aggregation_Comparison.pdf}
\vspace{-4mm}
\caption{Validation accuracy for~LeNet/MNIST training with~FedAvg, FLTrust, and~trimmed mean for~2000 iterations under~RLF, SLF, DLF, and~TLF attacks in the cluster-focused setting.}
\label{fig:MNIST-FL-HyFL-FedAvg-FLTrust-TrimmedMean-2000-all-attack-cluster-focused}
\end{figure*}
\begin{figure*}[htb!]
\centering
\includegraphics[width=0.925\linewidth]{figures/plots/CIFAR10_FedAvg_FLTrust_TrimmedMean_2000_all_attacks_Equally_Distributed_Aggregation_Comparison.pdf}
\vspace{-4mm}
\caption{Validation accuracy for~ResNet9/CIFAR10 training with~trimmed mean and our~trimmed mean variant for~2000 iterations under~RLF, SLF, DLF, and~TLF attacks in the equally-distributed setting.}
\label{fig:CIFAR10-HyFL-TrimmedMean-Variant-2000-all-attack-equally-distributed}
\end{figure*}
\begin{figure*}[htb!]
\centering
\includegraphics[width=0.925\linewidth]{figures/plots/CIFAR10_FedAvg_FLTrust_TrimmedMean_2000_all_attacks_Focused_Aggregation_Comparison.pdf}
\vspace{-4mm}
\caption{Validation accuracy for~ResNet9/CIFAR10 training with~trimmed mean and our~trimmed mean variant for~2000 iterations under~RLF, SLF, DLF, and~TLF attacks in the focused setting.}
\label{fig:CIFAR10-HyFL-TrimmedMean-Variant-2000-all-attack-focused}
\end{figure*}
\begin{figure*}[htb!]
\centering
\includegraphics[width=0.925\linewidth]{figures/plots/CIFAR10_FedAvg_FLTrust_TrimmedMean_2000_all_attacks_Cluster_Focused_Aggregation_Comparison.pdf}
\vspace{-4mm}
\caption{Validation accuracy for~ResNet9/CIFAR10 training with~trimmed mean and our~trimmed mean variant for~2000 iterations under~RLF, SLF, DLF, and~TLF attacks in the cluster-focused setting.}
\label{fig:CIFAR10-HyFL-TrimmedMean-Variant-2000-all-attack-cluster-focused}
\end{figure*}
\section{Introduction}
\label{sec:intro}
Federated learning~(FL) as proposed by~\citet{ARXIV:konevcny2016federated,AISTATS:McMahanMRHA17} is a leading paradigm in distributed machine learning; in a~cross-device setting, FL allows thousands or more clients to participate in a training process.
To improve~\emph{scalability} for large-scale deployments, hierarchical~FL has also been proposed~\cite{MLSys:BonawitzEGHIIKK19,ICLR:LSPJ20,IJCAI:Yang21}, which layers multiple levels of aggregators.
One of the primary benefits of~FL outlined in~\citet{AISTATS:McMahanMRHA17} is the~(perceived)~\emph{privacy} of training data and thus increased user engagement: as participating clients train the model locally and transfer only gradient updates to an aggregator, training data never leaves the clients' devices.
However, it was shown that these gradient updates still leak a significant amount of information~\cite{NIPS:ZhuLH19,NIPS:GeipingBD020}.
Hence, \emph{secure aggregation} protocols have been introduced that either ensure that a single aggregator sees only blinded~(or~\emph{masked}) values~\cite{CCS:BIKMMP17,CCS:BellBGL020}, use a distributed aggregator based on secure multi-party computation~(MPC)~\cite{DLS:FereidooniMMMMN21,ARXIV:BMPSTVWYY22}, or guarantee differential privacy~(DP) by adding noise~\cite{EPRINT:OA22}.
Unfortunately, there are still three pressing issues:
\begin{enumerate}[label=P\arabic*]
\item
Malicious participants can perform~\emph{attacks}~(e.g., backdoor~\cite{AISTATS:BagdasaryanVHES20,ICLR:XieHCL20}, model-poisoning~\cite{NIPS:WangSRVASLP20,USENIX:FangCJG20}, or data-poisoning attacks~\cite{ICML:BiggioNL12,ESORICS:TolpeginTGL20}) to manipulate the aggregated model.
\item
Recently, serious concerns about privacy vulnerabilities when using secure aggregation with a~\emph{single} aggregator have been raised~\cite{ARXIV:SAGJA21,ARXIV:BDSSSP21,WEB:BDSSSP22,ICLR:FowlGCGG22,ICML:WenGFGG22}.
\item
Research has shown that with unrestricted access to the aggregated model, it is still possible to extract traces of the original training data~\cite{CCS:PFA22,ARXIV:BDSSSP23}.
\end{enumerate}
\paragraph{Our Contributions}
In this paper, we address all of the issues outlined above in a unified framework called~HyFL\xspace that enables private and robust distributed machine learning at scale.
Our framework is based on a novel abstraction that also captures existing regular and hierarchical~FL architectures in a~\emph{hybrid} manner.
One key property of~HyFL\xspace is that we achieve~\emph{complete model privacy}.
Briefly, in our framework, FL participants use secret-sharing techniques to securely outsource training data to~\emph{distributed training clusters} that are based on~MPC.
The participants then might leave and only sporadically return to provide more training data -- this makes our framework robust against real-world issues such as drop-outs, requires no interaction between clients, and relieves resource-constrained~(mobile or edge) devices from significant workload.
The trained models are then aggregated across all training clusters using one or multiple levels of distributed aggregators.
For secure distributed aggregation, we again utilize~MPC.
Note that after aggregation, models are not publicly released but handed back to the training clusters in secret-shared form for the next training iteration.
After training is completed, known secure inference protocols can be used to allow private queries~\cite{EPRINT:MWCB22} in a controlled~(potentially rate-limited) way.
This architecture design addresses issues~P2 and~P3.
We observe that a neat property of our framework is the strictly limited attack surface: malicious participants are restricted to data-poisoning attacks as there is no possibility to access and manipulate the model itself.
We show experimentally that state-of-the-art data-poisoning attacks in the suggested hierarchical configuration are less effective than in plain~FL.
Furthermore, we implement and evaluate different robust aggregation schemes to further mitigate the effect of such attacks; for this, we additionally propose new heuristics that improve the efficiency for the corresponding~MPC implementation.
This addresses issue~P1.
Finally, we implement all~HyFL\xspace components based on~Meta's~CrypTen~MPC framework~\cite{NeurIPS:KnottVHSIM21} and evaluate the performance when training neural networks for standard image classification tasks in realistic network settings and using~GPU-accelerated~AWS~EC2 instances.
In summary, we provide the following contributions:
\begin{myitemize}
\item
New scalable~(hierarchical)~FL framework called~HyFL\xspace{} that achieves complete model privacy, supports resource-limited mobile or edge devices, and significantly limits the attack surface for malicious participants.
\item
Analysis of data-poisoning attacks by malicious participants with new efficiency improvements for secure robust aggregation.
\item
Open-source implementation and evaluation of~HyFL\xspace on standard image classification tasks.
\end{myitemize}
In~Tab.~\ref{tab:relatedworksconcise}, we furthermore clarify how~HyFL\xspace distinguishes itself from related works.
In addition to this concise summary, we provide a detailed overview in~\secref{app:related_work_fl}.
\input{figures/tables/related_works_concise.tex}
\section{Performance Evaluation}
\label{sec:evaluation}
We evaluate the practical performance of~HyFL\xspace in terms of computation and communication overhead empirically.
\paragraph{Implementation}
We implement~HyFL\xspace based on the~CrypTen framework developed by Meta~\cite{NeurIPS:KnottVHSIM21}.
CrypTen provides a~TensorFlow/PyTorch-style interface but implements operations based on secure multi-party computation~(MPC) with~GPU support.
Specifically, CrypTen implements semi-honest arithmetic and~Boolean two- and multi-party protocols that use a third~\enquote{helper} party to generate correlated randomness.
CrypTen provides a~\enquote{simulation} mode where the specified computation is performed on a single node in plaintext yet simulates all effects that computation in~MPC would have on the results~(e.g., due to limited fixed-point precision and truncation).
We leverage this mode to efficiently evaluate~HyFL\xspace accuracy and later the impact of data-poisoning attacks; yet we run the full~MPC computation to obtain realistic run-time and communication measurements.
In all our experiments, the fixed-point precision in~CrypTen is set to~22 fractional bits~(the maximum developer-recommended number).
We use~CrypTen to implement~(I) private training on~Layer~II and~(II)~distributed aggregation on~Layer~I.
CrypTen out of the box supports private inference between~Layer~II and~III, which, however, is not the focus of our evaluation.
We extend~CrypTen with an identity layer to enable model conversions and re-sharing.
Additionally, we extend the implementation of convolutional layers to enable full~GPU-accelerated training for such model architectures.
Moreover, we provide the necessary code to orchestrate the various parties and components, thereby creating a unified simulation framework.
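The fixed-point effects that the simulation mode reproduces can be illustrated in a few lines (22 fractional bits as in our experiments; the function names are illustrative, not CrypTen's API):

```python
FRAC_BITS = 22  # fractional bits used in our CrypTen configuration

def to_fixed(x, f=FRAC_BITS):
    """Encode a float as the scaled integer that is actually secret-shared."""
    return round(x * (1 << f))

def from_fixed(v, f=FRAC_BITS):
    """Decode the scaled integer back to a float."""
    return v / (1 << f)

def fixed_mul(a, b, f=FRAC_BITS):
    """Multiplying two encodings doubles the scale, so the product must be
    truncated back -- this truncation is a main source of the small accuracy
    deviation between plaintext and MPC training."""
    return (a * b) >> f
```

Values that are exact in binary (e.g., $0.5 \cdot 0.25 = 0.125$) survive the round-trip exactly, whereas values like~$0.1$ incur a quantization error bounded by~$2^{-22}$.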
\paragraph{Setup}
Plaintext~FL and~CrypTen-based~HyFL\xspace~\emph{simulations} are run on a single computing platform equipped with two~Intel Xeon Platinum~8168~CPUs, 1.5TB~RAM, and~16~NVIDIA Tesla~V100 GPUs.
To provide realistic results for a~\emph{distributed~MPC deployment} with two computational and one helper party, we use three~Amazon~AWS~g3s.xlarge instances with~4 vCPUs, and~8GB~GPU memory on a~NVIDIA Tesla~M60.
These instances are located in the same~AWS availability zone~(due to high costs associated with routing traffic between different zones), yet we simulate intra- and inter-continental network connections by setting the bandwidth and latency to~1Gbps/100Mbps and~20ms/100ms, respectively.
\paragraph{Tasks}
Following prior work~\cite{ARXIV:BMPSTVWYY22}, we evaluate~HyFL\xspace on two standard image classification tasks: recognizing~(I)~hand-written digits using~LeNet trained on~MNIST~\cite{DBLP:journals/pieee/LeCunBBH98} and~(II)~objects in one of~10 classes using~ResNet9 trained on~CIFAR10~\cite{Krizhevsky_2009_17719}.
We simulate~1000 clients from which~100 are randomly selected per round.
For plain~FL, we use batch size~8 and learning rate~0.005, and train locally for~5 epochs before central aggregation.
For~HyFL\xspace, we simulate~10 Layer~II clusters that use a correspondingly~\emph{scaled} batch size of~80 and a learning rate of~0.05~\cite{ARXIV:GDGNWKTJH17}.
\paragraph{Overview}
Using the implementation and setups described above, we run an empirical accuracy evaluation to answer the following questions:
\begin{enumerate}[label=Q\arabic*]
\item
What is the accuracy difference between~FL~\cite{AISTATS:McMahanMRHA17} and~HyFL\xspace~(in plaintext)?
\item
What is the impact on accuracy for~HyFL\xspace when moving from plaintext to~(simulated)~MPC?
\item
What are the run-time and communication overheads of~(MPC-based)~HyFL\xspace compared to~FL?
\end{enumerate}
In the following, we describe how we answer the individual questions and discuss our results.
\paragraph{Q1 -- FL vs~HyFL\xspace}
In~Fig.~\ref{fig:FL-HyFL-FedAvg-500-no-attack}, we compare the validation accuracy of~FL and~HyFL\xspace for image classification tasks for~500 rounds.
Here, we note that~HyFL\xspace converges significantly faster than regular~FL, e.g., after~500 rounds of training~ResNet9 on~CIFAR10, HyFL\xspace reaches~85.68\% validation accuracy, whereas regular~FL only reaches~65.95\%.
We attribute this to~HyFL\xspace pooling training data at cluster level and thus being able to exploit the known benefits of batching~\cite{ARXIV:GDGNWKTJH17,SIAM:BCN18}.
Plots for up to~2000 rounds can be found in~\figref{fig:FL-HyFL-FedAvg-2000-no-attack} in~App.~\ref{sec:app:CIFAR10-attacks-Q1}.
\begin{figure}[htb!]
\centering
\includegraphics[width=8.5cm]{figures/plots/MNIST_FedAvg_500_no_attack_HyFL_FL_Comparison.pdf}
\vspace{-5mm}
\includegraphics[width=8.5cm]{figures/plots/CIFAR10_FedAvg_500_no_attack_HyFL_FL_Comparison.pdf}
\caption{Validation accuracy for~FL and~HyFL\xspace for~500 iterations~(top:~LeNet/MNIST, bottom:~ResNet9/CIFAR10).}
\label{fig:FL-HyFL-FedAvg-500-no-attack}
\end{figure}
\paragraph{Q2 -- Impact of~MPC}
In~Fig.~\ref{fig:FL-HyFL-FedAvg-500-2000-no-attack-Crypten-Comparison}, we compare the plaintext validation accuracy~(cf.~Q1) to our~CrypTen simulation to measure the impact of~MPC~(i.e., fixed-point representation with~22 bit decimal representation and truncation).
Here, we can only provide results for~LeNet/MNIST, as~ResNet9 training on~GPU in~CrypTen is currently not supported due to limitations in the backward pass implementation.
While there is a slight difference in initial rounds, both implementations quickly converge to almost the same validation accuracy, with only a small difference on the order of~0.1\%.
\begin{figure}[htb!]
\centering
\includegraphics[width=8.5cm]{figures/plots/MNIST_FedAvg_500_no_attack_Crypten_Comparison.pdf}
\vspace{-5mm}
\caption{Validation accuracy for~FL and~HyFL\xspace in plaintext and~MPC~(CrypTen simulation) for~LeNet/MNIST training.}
\label{fig:FL-HyFL-FedAvg-500-2000-no-attack-Crypten-Comparison}
\end{figure}
\paragraph{Q3 -- MPC Overhead}
Finally, we study the overhead of~MPC for secure training and aggregation.
For this, we measure the run-times and communication for one iteration of~LeNet/MNIST training~(i.e., 5 local epochs) in~AWS for one cluster~(with~1Gbps bandwidth and~20ms latency) and one iteration of global aggregation~(with~100Mbps bandwidth and~100ms latency).
The training on cluster level takes~315.17s and requires~5279.25MB inter-server communication, which is multiple orders of magnitude overhead compared to local plaintext training in~PyTorch~(which only takes~0.07s).
The aggregation over~10 cluster inputs is very efficient with~0.023s run-time and has no communication overhead since only linear operations are required, which can be conducted locally over shares in~MPC.
Additional overhead that must be considered for clients is sharing data with the training cluster servers.
In our setup, clients on expectation have to upload~3.31MB and~9.86MB in total for~500 rounds of training for~MNIST and~CIFAR10, respectively.
Furthermore, we have to account for sharing the trained models from training clusters to the aggregation servers.
Given the number of model parameters and~CrypTen sharing semantics, each training cluster must transfer~0.49MB and~39.19MB per server for~LeNet and~ResNet9, respectively.
This clearly shows that it is significantly more efficient for participants to upload their training data in secret-shared form compared to down- and uploading model parameters for each training round.
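For concreteness, the efficiency argument can be made with back-of-the-envelope arithmetic over the LeNet/MNIST numbers measured above. Two assumptions in this sketch are ours: the plaintext model transfer is approximated by the~0.49MB share size, and clients are selected with the expected rate of~100 out of~1000 per round.

```python
# Back-of-the-envelope comparison of total client traffic, using the
# LeNet/MNIST numbers measured above.  Two assumptions are illustrative:
# the plaintext model transfer is approximated by the 0.49MB share size,
# and clients are selected with probability 100/1000 per round.
ROUNDS = 500
SELECT_PROB = 100 / 1000      # expected fraction of rounds a client joins
DATA_UPLOAD_MB = 3.31         # one-time secret-shared data upload (HyFL)
MODEL_MB = 0.49               # LeNet model transfer size (approximation)

# HyFL client: upload the training data once, never touch the model.
hyfl_client_mb = DATA_UPLOAD_MB

# Regular FL client: download and upload the model in each selected round.
fl_client_mb = ROUNDS * SELECT_PROB * 2 * MODEL_MB

print(f"HyFL client traffic: {hyfl_client_mb:.2f} MB")
print(f"FL client traffic (expected): {fl_client_mb:.2f} MB")
```

Even under this conservative estimate, the one-time data upload is more than an order of magnitude cheaper than repeated model exchanges.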
Note that in our evaluation setup, training clusters and the aggregation layer use the same~MPC configuration, hence no interactive re-sharing is necessary.
\section{Attacks}
\label{sec:attacks_defenses}
Malicious~FL participants can try to manipulate the global model to either produce specific outputs for specific inputs or simply degrade the overall accuracy.
These attacks are referred to as backdoor~\cite{AISTATS:BagdasaryanVHES20,ICLR:XieHCL20} and poisoning attacks~\cite{ACMCS:ZLJS22}, respectively.
In terms of poisoning attacks, the two options are to perform data poisoning~\cite{ICML:BiggioNL12,ESORICS:TolpeginTGL20} or model poisoning~\cite{NIPS:WangSRVASLP20,USENIX:FangCJG20}.
Since models in our setting are not available to clients at any time, malicious participants are~\emph{inherently} limited to manipulate the training data they provide.
This rules out the entire class of more powerful model-poisoning attacks~\cite{ICML:BCMC19}.
Hence, we evaluate the effectiveness of state-of-the-art data-poisoning attacks in the~HyFL\xspace setting as well as possible mitigations.
\subsection{Data-Poisoning Attacks}
In data-poisoning attacks, malicious clients can perform arbitrary manipulations to the training data.
State-of-the-art attacks are based on label flipping, where clients keep the legitimate training samples, yet exchange the associated labels according to different strategies.
Specifically, we consider the following attacks~(cf.~App.~\ref{app:data-poisoning-attacks}): random~(RLF), static~(SLF), dynamic~(DLF), and~targeted~(TLF) label flipping.
RLF changes the labels of samples at random~\cite{ECAI:XiaoXE12}.
In~SLF, labels are swapped following a fixed assignment~\cite{USENIX:FangCJG20,SP:SHKR22}.
In~DLF, the attacker trains a surrogate model locally; this is then used to flip the label of each sample to the least probable output and thus can be considered the most powerful attack~\cite{SP:SHKR22}.
Finally, TLF changes all labels from a source class to a specified target class~\cite{ESORICS:TolpeginTGL20}.
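The four strategies can be sketched as follows. The surrogate-model output in~DLF is represented by a placeholder array~\texttt{probs}; all names and parameter choices are illustrative and not taken from the cited implementations.

```python
import numpy as np

# Sketch of the four label-flipping strategies for a 10-class task.
# `probs` stands in for per-sample class probabilities produced by the
# attacker's surrogate model (DLF); names are illustrative.
rng = np.random.default_rng(0)
NUM_CLASSES = 10

def rlf(labels):
    # random label flipping: replace each label by a random class
    return rng.integers(0, NUM_CLASSES, size=len(labels))

def slf(labels):
    # static label flipping: fixed assignment, e.g. c -> (C - 1) - c
    return (NUM_CLASSES - 1) - labels

def dlf(labels, probs):
    # dynamic label flipping: pick the least probable class under the
    # surrogate model for each sample
    return np.argmin(probs, axis=1)

def tlf(labels, source=1, target=7):
    # targeted label flipping: move one source class to a target class
    return np.where(labels == source, target, labels)
```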
\subsection{Robust Aggregation Schemes}
The most common~FL aggregation scheme is~\enquote{\mbox{FedAvg}}, which simply computes a~(weighted) average of all inputs~(cf.~App.~\ref{app:related_work_fl}).
In contrast, \emph{robust} aggregation schemes detect and then exclude outliers, and are thus a suitable mitigation against data poisoning.
An overview of such schemes is given in~\citet{SP:SHKR22}.
From the surveyed schemes, we identify~\enquote{FLTrust}~\cite{NDSS:CaoF0G21} and~\enquote{Trimmed Mean}~(TM)~\cite{ICML:YinCRB18} as the most~MPC-efficient ones.
\emph{FLTrust} assumes the aggregator has access to a clean training set and can train the global model of the previous iteration on that; then it measures the cosine similarity of the own training result against the inputs of participants and excludes the least similar ones.
\emph{Trimmed Mean}, for each coordinate, excludes the largest and smallest values across the provided gradient updates and computes the mean of the remaining ones.
For our experiments, the number of excluded coordinates corresponds to the maximum assumed poison rate in the system~(e.g., when assuming at most~20\% of clients are corrupted, we discard the top and bottom~20\%).
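In plaintext, this aggregation rule amounts to the following sketch (illustrative NumPy code, not our~MPC implementation):

```python
import numpy as np

# Coordinate-wise trimmed mean: for each coordinate, drop the k largest
# and k smallest values across the n gradient updates and average the
# rest.  k corresponds to the assumed poison rate.
def trimmed_mean(updates, poison_rate=0.2):
    # updates: (n_clients, n_params) array of gradient updates
    n = updates.shape[0]
    k = int(n * poison_rate)
    sorted_updates = np.sort(updates, axis=0)  # sort each coordinate
    return sorted_updates[k:n - k].mean(axis=0)
```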
Performing this aggregation obliviously in~MPC requires implementing costly sorting to determine the ranking in each coordinate.
We observe that, intuitively, data poisoning, in contrast to model poisoning, does not result in specific coordinates producing extreme outliers.
Hence, we propose a heuristic~\enquote{Trimmed Mean Variant} that computes the mean and ranking only for a small~\emph{randomly sampled subset} of coordinates.
Then, during aggregation, it excludes those gradient updates that occurred the most as outliers in the sample.
We detail the algorithm of our variant in~Alg.~\ref{alg:algorithm-tr}.
\input{figures/algorithm_tr.tex}
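A plaintext sketch of this variant (an illustrative re-implementation of the algorithm above, not our~CrypTen code) is:

```python
import numpy as np

# Heuristic trimmed-mean variant: rank outliers only on a small random
# sample of coordinates, then exclude whole gradient updates that were
# flagged most often as outliers, and average the rest.
def trimmed_mean_variant(updates, poison_rate=0.2, sample_size=100, seed=0):
    rng = np.random.default_rng(seed)
    n, d = updates.shape
    k = int(n * poison_rate)
    coords = rng.choice(d, size=min(sample_size, d), replace=False)
    # per sampled coordinate, flag the k smallest and k largest updates
    ranks = np.argsort(updates[:, coords], axis=0)
    outlier_counts = np.zeros(n, dtype=int)
    for j in range(len(coords)):
        outliers = np.concatenate([ranks[:k, j], ranks[n - k:, j]])
        outlier_counts[outliers] += 1
    # exclude the 2k updates flagged most often, average the remainder
    keep = np.argsort(outlier_counts)[: n - 2 * k]
    return updates[keep].mean(axis=0)
```

Note that, unlike the coordinate-wise original, this variant excludes flagged updates in their entirety.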
\subsection{Evaluation}
\label{sec:attack-evaluation}
We now empirically evaluate the impact of data-poisoning attacks on~HyFL\xspace considering different~(robust) aggregation schemes.
For this, we implement all four attacks in our framework and add~CrypTen-based implementations of the three robust aggregation schemes.
Using the setup described in~\secref{sec:evaluation}, we want to answer the following questions:
\begin{enumerate}[label=Q\arabic*]
\setcounter{enumi}{3}
\item
What is the impact of data-poisoning attacks on the accuracy of~FL and~HyFL\xspace using~FedAvg and robust aggregation schemes?
\item
What is the run-time and communication overhead for different robust aggregation schemes in~MPC?
\item
How does our~TM variant compare to regular~TM~w.r.t.\ accuracy and~MPC performance?
\end{enumerate}
\paragraph{Q4 -- Attack Impact}
For our evaluation, we consider three poison rates~(0.01, 0.1, 0.2) and distributions of attackers:
in the~\emph{equally-distributed} setting, we assume that on expectation each cluster has the same number of malicious clients; in the~\emph{focused} setting, we assume that malicious clients are concentrated on as few clusters as possible while there is still an honest majority in each cluster~(a standard assumption in~FL); finally, in the~\emph{cluster-focused} setting, we see what happens if we lift the honest-majority assumption and concentrate all malicious clients in as few clusters as possible.
In~Fig.~\ref{fig:main_CIFAR10-FL-HyFL-FedAvg-FLTrust-TrimmedMean-2000-DLF-attack-focused-equally-distributed}, we study the effectiveness of the most powerful~DLF data-poisoning attack on both regular~FL and~HyFL\xspace when training~ResNet9 on~CIFAR10 in the equally distributed and focused setting.
Results for less powerful attacks, the unrealistic cluster-focused setting, and training of~LeNet on~MNIST can be found in~App.~\ref{sec:app:CIFAR10-attacks}.
\begin{figure*}[htb!]
\centering
\includegraphics[width=\textwidth]{figures/plots/CIFAR10_FedAvg_FLTrust_TrimmedMean_2000_DLF_equally_distributed_focused_Comparison.pdf}
\vspace{-6mm}
\caption{Validation accuracy for~FL and~HyFL\xspace with~FedAvg, FLTrust, and~trimmed mean as aggregation schemes under~DLF attack for three different poison rates~(top: equally distributed, bottom: focused setting).}
\label{fig:main_CIFAR10-FL-HyFL-FedAvg-FLTrust-TrimmedMean-2000-DLF-attack-focused-equally-distributed}
\end{figure*}
For the fairly aggressive~0.2 poison rate, we see in both visualized attacker distributions a significant negative impact of the~DLF attack on~FL when using~FedAvg with drops below~30\% accuracy.
However, these can be successfully mitigated with robust aggregation schemes.
While there is also negative impact on~HyFL\xspace, especially in the focused setting, the accuracy even with~FedAvg never drops below that of~FL.
And even though robust aggregation schemes help to slightly smoothen the curve, we conclude that~\emph{applying defenses in~HyFL\xspace against data-poisoning attacks can be considered optional but not strictly necessary}.
\paragraph{Q5 -- Robust Aggregation in MPC}
We evaluate the run-time and communication overhead of our~FLTrust and~TM implementation in~CrypTen in~Tab.~\ref{tab:crypten_aggregation}.
The run-time overhead for both robust aggregation schemes compared to~Fed\-Avg is four to five orders of magnitude.
Also, FLTrust requires~5$\times$ more run-time and communication than~TM.
Given that both produce fairly similar results when applied to~HyFL\xspace, the overhead for~FLTrust seems not warranted.
\vspace{-2mm}
\input{figures/tables/communication_crypten_aggregation.tex}
\paragraph{Q6 -- Trimmed Mean Variant}
In~Fig.~\ref{fig:main_HyFL-TrimmedMean-variant-2000-DLF-focused}, we additionally compare the effectiveness of our~TM variant to the original~TM~\cite{ICML:YinCRB18} for three sample sizes~(10, 100, and~1000).
It turns out that our heuristic approach barely reduces the effectiveness, even with aggressive parameters.
In fact, in the focused setting, the~TM variant outperforms the original.
This is because our variant completely excludes gradient updates of~(poisoned) outliers, whereas in regular trimmed mean, those poisoned updates might still be considered for some coordinates.
Results for all other settings are presented in~App.~\ref{sec:app:Q6-res}.
\begin{figure}[htb!]
\centering
\includegraphics[width=8.5cm]{figures/plots/CIFAR10_TrimmedMean_2000_DLF_Focused_0_2_TrimmedMean_Variants_Comparison.pdf}
\vspace{-5mm}
\caption{Effectiveness of trimmed mean~(TM) and our variant~(with sample sizes~10, 100, and~1000) against focused~DLF attacks on~HyFL\xspace at~0.2 poison rate for~ResNet9/CIFAR10 training.}
\label{fig:main_HyFL-TrimmedMean-variant-2000-DLF-focused}
\end{figure}
In~Tab.~\ref{tab:crypten_trimmed_mean_variant}, we also provide run-times and communication results for our optimizations.
Compared to the original with~1021.59MB of communication, we can see an improvement by two orders of magnitude with a communication of~11.90MB for the variant with~100 random samples.
However, we see a higher and fairly stable run-time across all three examined variants.
This is because the algorithm for determining the overall ranking of outliers across coordinates increases the number of~MPC communication rounds compared to the original.
In the studied inter-continental~WAN setting, this has severe impact but does not correspond to actual compute time.
\input{figures/tables/communication_crypten_trimmedmean_variant.tex}
Overall, if~HyFL\xspace is combined with a robust aggregation scheme, our~TM variant offers an excellent trade-off between accuracy and~MPC overhead compared to significantly more expensive~FLTrust and the original~TM.
\subsection{Privacy Enhancing Technologies~(PETs)}
\label{app:related_work_pets}
Privacy enhancing technologies~(PETs) are techniques that allow for computations to be performed on data while keeping that data private.
There are a variety of~PETs available, but the three most widely considered in the literature are secure multi-party computation~(MPC), homomorphic encryption~(HE), and differential privacy~(DP).
Below, we provide a brief overview of each of these methods.
\subsubsection{Secure Multi-Party Computation~(MPC)}
\label{app:related_work_mpc}
MPC~\cite{FOCS:Yao86,STOC:GolMicWig87} enables a set of mutually distrusting parties to evaluate a public function~$f()$ on their private data while preserving input data privacy.
Corruption among the parties is typically modelled via an~\emph{adversary} that takes control of the corrupted parties and coordinates their actions.
There exist various orthogonal dimensions of adversarial corruption, such as honest vs.\ dishonest majority and semi-honest vs.\ malicious corruption~\cite{FTPS:EvansKR18,JACM:Lin20}.
For practical efficiency, MPC with a small number of parties is often considered~\cite{CCSW:CCPS19,NDSS:PatSur20,NDSS:ChaudhariRS20}.
In~HyFL\xspace{}, we use a minimal setting of two parties with semi-honest corruption and resort to a trusted third party for efficiency~\cite{CRYPTO:DamgardPSZ12,IMA:DamgardH0O19,NeurIPS:KnottVHSIM21,USENIX:PSSY21}.
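As a minimal illustration of the sharing primitive underlying such protocols, consider additive secret sharing between two servers over a~64-bit ring. Its linearity is what makes share-wise local aggregation communication-free (cf.~Q3). The sketch below is generic textbook material, not our deployed protocol.

```python
import random

# Minimal 2-party additive secret sharing over a 64-bit ring.  Linear
# operations (here: summing client inputs) can be performed by each
# server locally on its shares, without any interaction.
RING = 2 ** 64

def share(x, rng=random):
    s0 = rng.randrange(RING)
    s1 = (x - s0) % RING
    return s0, s1          # one share per server

def reconstruct(s0, s1):
    return (s0 + s1) % RING

# Aggregating secret-shared inputs: servers add their shares locally.
inputs = [3, 14, 25]
shares = [share(x) for x in inputs]
agg0 = sum(s[0] for s in shares) % RING   # server 0, local only
agg1 = sum(s[1] for s in shares) % RING   # server 1, local only
```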
\subsubsection{Homomorphic Encryption~(HE)}
\label{app:related_work_he}
HE schemes~\cite{Rivest1978,STOC:Gentry09} enable computations on encrypted data without the need for decryption.
The additive homomorphic encryption~(AHE) scheme is a widely used method that allows for the generation of a new ciphertext representing the sum of multiple plaintexts through operations on their corresponding original ciphertexts~\cite{EUROCRYPT:paillier1999public}.
In scenarios involving multiple parties, recent schemes like multi-party homomorphic encryption~(MHE) have been shown to reduce the communication complexity of their~MPC counterparts, but incur a significant computation overhead~\cite{PETS:mouchet2021multiparty}.
\citet{CSURV:carAUC18} presents a survey of various~HE schemes.
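The additive homomorphism can be illustrated with a toy textbook Paillier instance. The tiny parameters are chosen purely for readability; this is a pedagogical sketch and in no way a secure implementation.

```python
from math import lcm

# Toy Paillier encryption (textbook scheme, insecure toy parameters)
# illustrating the additive homomorphism used by AHE-based aggregation:
# Enc(a) * Enc(b) mod n^2 decrypts to a + b.
p, q = 61, 53              # toy primes
n = p * q
n2 = n * n
g = n + 1
lam = lcm(p - 1, q - 1)
mu = pow(lam, -1, n)       # simplified key because g = n + 1

def encrypt(m, r):
    # r must be coprime to n
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    L = (pow(c, lam, n2) - 1) // n
    return (L * mu) % n

a, b = 123, 456
c = (encrypt(a, 17) * encrypt(b, 23)) % n2   # homomorphic addition
```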
\subsubsection{Differential Privacy~(DP)}
\label{app:related_work_dp}
The concept of~DP~\cite{TTC:DworkR14} is based on the idea of adding noise to data in order to reduce information leakage when sharing it, while still allowing for meaningful computations to be carried out on the data.
A randomized algorithm~$\Psi$ is said to satisfy~$(\epsilon, \delta)$-DP, if for all adjacent datasets~$d, d' \in \mathcal{D}$ and for all~$S\subseteq \mathsf{Range}(\Psi)$, it holds that
\[
\textsf{Pr}[\Psi(d)\in S]\leq e^\epsilon \cdot \textsf{Pr}[\Psi(d')\in S] + \delta
\]
At a high level, this means that when given a dataset~$d$, the likelihood of the algorithm~$\Psi$ producing a result within set~$S$ should not be greatly different from the likelihood of the algorithm producing a result within the same set~$S$ when given an adjacent dataset~$d'$.
We refer to~\citet{PETS:DP20} for more elaborate details.
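As a concrete example, the standard Laplace mechanism releases~$f(d) + \mathrm{Lap}(\Delta/\epsilon)$ for a numeric query~$f$ with sensitivity~$\Delta$ and satisfies~$(\epsilon, 0)$-DP. The sketch below checks the density-ratio bound directly on exact Laplace densities rather than by sampling; the query values are illustrative.

```python
import math

# Laplace mechanism sanity check: for adjacent datasets d, d' with query
# answers f(d), f(d') differing by at most the sensitivity Delta, the
# output densities must satisfy pdf_d(x) <= exp(eps) * pdf_d'(x).
def laplace_pdf(x, loc, scale):
    return math.exp(-abs(x - loc) / scale) / (2 * scale)

eps, delta_f = 0.5, 1.0            # privacy parameter, query sensitivity
scale = delta_f / eps              # Lap(Delta / eps) noise scale
f_d, f_dprime = 10.0, 11.0         # query answers on adjacent datasets

for x in [-5.0, 0.0, 10.0, 10.5, 42.0]:
    ratio = laplace_pdf(x, f_d, scale) / laplace_pdf(x, f_dprime, scale)
    assert ratio <= math.exp(eps) + 1e-12   # the (eps, 0)-DP bound
```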
\subsection{Privacy-preserving Machine Learning~(PPML)}
\label{app:related_work_ppml}
In recent years, there has been a surge in privacy concerns and privacy rules, such as the~European General Data Protection Regulation~(GDPR) and California Consumer Privacy Act~(CCPA), leading the community to develop privacy-preserving algorithms for machine learning applications, often known as~PPML~\cite{ICML:Gilad-BachrachD16,PLDI:DathathriS0LLMM19,MPCLeague}.
These techniques aim to protect individuals' privacy while still allowing for the development of accurate and effective machine learning models.
While there exists a plethora of such~PPML techniques utilizing~PETs~(cf.~\secref{app:related_work_pets}), we resort to~PPML training and inference using MPC techniques in HyFL\xspace~\cite{SP:MohasselZ17,NDSS:KotiPRS22}.
In particular, most of these works employ~2-4 MPC servers~\cite{CCS:MohRin18,PETS:ByaliCPS20,USENIX:KPPS21,NDSS:KotiPRS22}, while recent works started focusing on more servers~\cite{EPRINT:KPPS22}.
We refer to~\citet{PETS:Cabrero-Holgueras21} for an overview of various PPML approaches.
\subsection{Federated Learning~(FL)}
\label{app:related_work_fl}
Unlike conventional~PPML techniques, FL~\cite{ARXIV:konevcny2016federated} enables the training of machine learning models over distributed data by having each device train the model locally on its own data. The locally trained models are then transferred to a central server and combined to form a global model.
This enables large models to be trained using data scattered across different devices or companies without the need to centralize the data.
At a high level, an~FL scheme iterates through the following steps:
\begin{enumerate}
\item The global server~$\ensuremath{\mathcal{S}}$ sends the current global model~$\FLmodel{}{t}$ to a selected subset of~$n$ out of~$N$ clients.
\item Each selected client~$C^i$, $i \in [n]$ utilizes its own local training data~$D^i$ for~$E$ epochs to fine-tune the global model and obtains an updated local model~$w_{t+1}^{i}$:
\[
w_{t+1}^{i} \leftarrow W_{t}-\eta_{C^i} \frac{\partial L (W_{t},B_{i,e})}{\partial W_{t}},
\]
where~$L$ is a loss function, $\eta_{C^i}$ is the clients' learning rate, and~$B_{i,e}\subseteq D^i$ is a batch drawn from~$D^i$ in epoch~$e$, where~$e \in [E]$. The local model updates~$w_{t+1}^{i}$ are then transmitted back to~$\ensuremath{\mathcal{S}}$.
\item $\ensuremath{\mathcal{S}}$ employs an aggregation rule~$f_{\textit{agg}}$ to combine the received local model updates~$w_{t+1}^{i}$, resulting in a global model~$W_{t+1}$, which will serve as the starting point for next iteration:
\[
W_{t+1} \leftarrow W_{t}-\eta_\mathsf{S} \cdot f_{\textit{agg}}(w_{t+1}^{1},\ldots,w_{t+1}^{n}),
\]
where~$\eta_\mathsf{S}$ is the server's learning rate.
\end{enumerate}
The above procedure is repeated until a predefined stopping criterion, such as a specified number of training iterations or a specific level of accuracy, is satisfied.
\subsubsection{Aggregation}
\label{app:related_work_agg}
With the advent of~FL, the focus shifted towards developing efficient aggregation functions that scale.
Google presented~FedAvg~\cite{AISTATS:McMahanMRHA17}, which aggregates the models using a simple weighted averaging algorithm, given by
\[
\mathsf{FedAvg} (w_{t+1}^{1},\ldots,w_{t+1}^{n}) = \sum_{i=1}^{n} \frac{|D^i|}{|D|} w_{t+1}^{i},
\]
where~$w_{t+1}^{i}$ is the model trained on~$|D^i|$ data samples and~$|D|$ denotes the total number of training data samples.
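This weighted average can be sketched in a few lines of~NumPy (illustrative plaintext code, not a~CrypTen implementation):

```python
import numpy as np

# FedAvg as a weighted average of local models, with weights
# proportional to the local dataset sizes |D^i|.
def fedavg(local_models, dataset_sizes):
    weights = np.asarray(dataset_sizes, dtype=float)
    weights /= weights.sum()            # |D^i| / |D|
    return sum(w * m for w, m in zip(weights, local_models))

models = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]
# client 2 holds 3x the data, so it dominates: 0.25*m1 + 0.75*m2
agg = fedavg(models, dataset_sizes=[1, 3])
```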
One potential drawback of~FL over~PPML techniques is that~FL increases the attack surface for malicious actors.
This is because in~FL, each user trains their own model locally, allowing them the potential to manipulate their model instead of data~\cite{ICML:BiggioNL12,ICML:BCMC19,ESORICS:TolpeginTGL20,AISTATS:BagdasaryanVHES20,USENIX:FangCJG20}.
To counter these kind of~\emph{poisoning} attacks, various robust aggregation methods have been proposed, in which local models are analyzed and appropriate measures such as outlier removal, norm scaling, and so on are applied~\cite{NIPS:BlanchardMGS17,ICML:YinCRB18,AAAI:li2019rsa,NDSS:CaoF0G21,TIFS:SSY23}.
See~\citet{NDSS:ShejwalkarH21} for a comprehensive overview of different robust aggregations.
\subsubsection{Secure Aggregation}
\label{app:related_work_secagg}
Early research assumed that standard~FL with plain aggregation would protect the privacy of client data.
However, later works have shown that it is possible to extract private information from individual model updates~\cite{NIPS:ZhuLH19,NIPS:GeipingBD020}.
To address this concern, secure aggregation~(SA) schemes have been proposed~\cite{CCS:BIKMMP17,PETS:MOJC23}.
These schemes ensure that the aggregator~$\ensuremath{\mathcal{S}}$ only receives the final aggregated model, rather than individual models, thus preserving the privacy of client data.
\paragraph{Single Server~SA (\textsc{Single~$\ensuremath{\mathcal{S}}$}).}
When utilizing a single aggregator~$\ensuremath{\mathcal{S}}$, several works used masking techniques~\cite{cryptologia:Rubin96a} to hide individual updates from the aggregator~\cite{CCS:BIKMMP17,CCS:BellBGL020}.
However, more recent studies, such as~\citet{TIFS:PhongAHWM18}, have shifted their focus towards using~HE-based techniques, specifically~AHE~\cite{CSURV:carAUC18}, to completely hide the aggregated model from the server and make it available to the users.
Similarly, PrivFL~\cite{CCSW:MandalG19} proposed a HE-based training to protect the global model from the users and reveal only to the aggregator, while preserving privacy of individual updates.
However, recent research has shown that a single malicious server can reconstruct individual training data points from users' local models even when using secure aggregation protocols~\cite{CCS:BIKMMP17,CCS:BellBGL020}.
\paragraph{Multi Server~SA (\textsc{Multi~$\ensuremath{\mathcal{S}}$}).}
In this case, the aggregation is carried out by a group of servers in a private and distributed fashion~\cite{DLS:FereidooniMMMMN21}.
One major advantage of these schemes is that, unlike in the single-aggregator case, users do not need to communicate with each other or perform any key setup.
In an orthogonal line of work, SPINDLE~\cite{PETS:froelicher2021scalable} and~POSEIDON~\cite{NDSS:SavPTFBSH21} adopt~MHE techniques to perform collaborative learning in a federated fashion.
However, these works do not scale well with the number of parties and are thus limited for a cross-device setting.
Recent works like~\citet{ESORICS:DongCLWZ21} and \citet{USENIX:NguyenRCYMFMMMZ22} proposed private and robust~FL by combining techniques from~HE and~MPC, but will incur significant overhead in both computation and communication for a cross-device setting.
In~HyFL\xspace, we employ a multi-server aggregation scheme with three servers and mitigate scalability issues of existing works, especially for a cross-device setting.
\subsubsection{Hierarchical Federated Learning~(HFL)}
\label{app:related_work_hfl}
A modified approach to standard~FL, called hierarchical~FL~(HFL), was introduced to address scalability and heterogeneity issues in real-world systems~\cite{MLSys:BonawitzEGHIIKK19}.
In~HFL, aggregation occurs at multiple levels, forming a hierarchical structure in which the aggregated values from one level serve as inputs for the next higher level.
This procedure eventually leads to the top level, when the final model is aggregated~\cite{ICDCS:DengLRZZZY21,INFOCOM:WangXLHQZ21}.
Previous works in the~HFL setting have primarily focused on improving scalability and communication efficiency.
However, the recent work of~\citet{IJCAI:Yang21} introduced methods for ensuring the privacy of individual updates in~HFL, allowing for secure aggregation.
Despite this advancement, \citet{IJCAI:Yang21} did not address global model privacy or the robustness against malicious users, which are crucial goals in our proposed~HyFL\xspace{} framework.
\section{Conclusion}
\label{sec:conclusion}
In this work, we presented~HyFL\xspace, a novel unified abstraction and framework for~(hierarchical) federated learning that provides complete model privacy, faster convergence, smaller attack surface and better resilience against poisoning attacks than regular~FL.
As part of future work, we plan to investigate potential further performance improvements by incorporating quantization techniques for private training~\cite{ICML:Keller022} and secure aggregation~\cite{ARXIV:BMPSTVWYY22}.
\section{Related Work}
\label{app:related-work}
\input{Sections/6_Appendix_Related_Work}
\section{Additional Details for~HyFL\xspace}
\label{app:extras}
\input{Sections/7_Appendix_AdditionalDetails}
\end{document}
\section{Introduction}
Take a system in an equilibrium or nonequilibrium steady state and apply a small drive to it.
We expect the system to move in the direction of the drive, and increasingly so with stronger drives. However, various setups have been found where this intuition is violated and the response is more surprising. Among the simplest manifestations of this behavior are Negative Differential Mobility (NDM), where the response coefficient depends on the applied perturbation in a nonmonotonic way, and Absolute Negative Mobility (ANM), where the sign of the response coefficient is opposite to what would intuitively be expected.
Theoretical examples of NDM include uniformly driven systems~\cite{bottger_b1982, hopfel_s_g1986, vrhovac_p1996, benenti2009, turci_p_s2012, reichhardt_r2017} or single driven tracers in quiescent media~\cite{cecchi_m1996, slater_g_n1997, zia_p_m2002, perondi2005, kostur2006, jack2008, sellitto2008, leitmann_f2013, baerts2013, basu_m2014, benichou2014, baiesi_s_v2015, benichou2016, sarracino2016, cecconi2017, burlatsky1992, burlatsky1996, landim_o_v1998, benichou1999, deconinck_o_m1997, benichou2001, benichou2000b, benichou2015, brummelhuis_h1989, illien2013, illien2015, benichou2013a,benichou2013b,benichou2013c, benichou2016, demery_d2010a, demery_d2011, cividini2016a, cividini2016b, cividini_m_p2017}. Also condensed matter experiments have been performed~\cite{conwell1970, nava1976, bottger_b1982, stanton_b_w1986, lei_h_c1991}.
The appearance of NDM mostly relies on trapping mechanisms that can be implemented through \textit{e.g.} complicated potentials~\cite{stanton_b_w1986, cecchi_m1996, kostur2006, sarracino2016, cecconi2017} or impurities, either present by definition~\cite{zia_p_m2002, perondi2005, jack2008, sellitto2008, leitmann_f2013, baerts2013, baiesi_s_v2015} or effectively created by a slow relaxation of the surrounding medium~\cite{turci_p_s2012, basu_m2014, benichou2014, benichou2016}. A pedagogical explanation for NDM is given in Ref.~\cite{zia_p_m2002}, and a modified Green-Kubo formula that accounts for NDM has been proposed in Ref.~\cite{baerts2013}.
Absolute Negative Mobility has been observed in a variety of setups.
Typically, one does not expect ANM to take place when the unperturbed system is in equilibrium, since, as has been argued, this would constitute a violation of the Second Law of Thermodynamics~\cite{eichhorn_r_h2002, eichhorn_r_h2002b, cleuren_v2002}. Thus, previous studies demonstrating ANM considered a driving field which acts on nonequilibrium steady states. These include systems with a periodic~\cite{keay1995, ignatov1995, hartmann_g_h1997, aguado_p1997, cannon2000, machura2007, eichhorn2010, spiechowicz_l_h2013, slapik_l_s2018, hanggi_m2009, speer_e_r2007} or a random~\cite{goychuk_p_m1998, haljas2004, spiechowicz_h_l2014b, spiechowicz_l_m2016, hanggi_m2009} drive, random walkers~\cite{cleuren_v2002, eichhorn_r_h2002, eichhorn_r_h2002b, sarracino2016}, strong interactions and noise in spatially periodic potentials~\cite{reimann1999, reimann_b_k1999, buceta2000, cleuren_v2001, mangioni_d_w2001}, and others~\cite{rozenberg_l_r1988, reguera_r_p2000, ghosh2014, dotsenko_m_o2017}. A different setup where ANM has been found both experimentally and theoretically involves quantum mechanical effects such as absolute negative conductivity for semiconductors, where negative conductivity is associated with a negative effective mass of the carriers (either electrons or holes)~\cite{kromer1958, mattis_s1959, cannon2000}, or interactions between light and matter~\cite{aronov_s1975, gershenzon_f1986, gershenzon_f1988, dakhnovskii_m1995, aguado_p1997}.
In the present work we focus on cases where the unperturbed system is in \emph{thermal equilibrium}, and demonstrate that, depending on the drive mechanism, ANM can take place in such systems. This is done in the context of a model of a tracer moving on a discrete ring populated by neutral particles, which obey a Simple Symmetric Exclusion Process (SSEP)-type dynamics. We show that, when a driving force is applied to the tracer, it moves in a direction opposite to the
drive. We then consider a continuum analogue of the model by studying the Langevin-type motion of a tracer particle in a narrow channel of a gas of hard disks. Here we find that the model exhibits NDM but not ANM.
In the discrete ring model introduced in the present work, the dynamics of the tracer is such that it can move by two different processes, either by hopping towards a neighboring vacant site or by exchanging its position with close bath particles.
The exchange move requires enough ``free volume'' in the vicinity, which places restrictions on the dynamics and is reminiscent of kinetically-constrained models~\cite{ritort_s2003, jack2008, sellitto2008, turci_p_s2012}.
The system is studied analytically within a mean-field approximation and numerically, and the existence of ANM is demonstrated. The model may be considered a toy model for hard disks performing Brownian dynamics in a narrow planar channel. We therefore carried out Langevin-type simulation studies of this hard-disk model, in which NDM but no ANM was found. We believe, however, that simple, as yet untested variants of this model could exhibit ANM for suitable parameter sets.
The paper is organized as follows: in Section~\ref{section:ssep} we study the lattice model analytically and numerically. In Section~\ref{section:hd} we introduce the model of hard disks moving in a narrow channel and present the results of molecular dynamics simulations. In Section~\ref{section:ccl} we conclude with a discussion of the results.
\section{Hard-core particles on a lattice}
\label{section:ssep}
\subsection{Definition}
Consider a set of $N$ bath particles and a tracer particle occupying $N+1$ of the $L$ sites of a ring, subject to the simple exclusion constraint. Time is continuous and each transition occurs with a probability $R \mathrm{d} t$ during each infinitesimal time step $\mathrm{d} t$, where $R$ is the rate of the transition. The bath particles are regular SSEP particles and can hop towards the site directly to their left or to their right, each with constant rate $1$, under the condition that the target site is vacant. Their average density is denoted by $\overline{\rho} = \frac{N}{L-1}$.
The tracer is different from the bath particles in two ways. First, it hops to the right and to the left with different rates that we call $p$ and $q$, respectively. Second, it can exchange its position with a bath particle two sites away under the condition that the site between the tracer and the bath particle is vacant. This process takes place with rate $p'$ to the right and $q'$ to the left. The condition that the intermediate site has to be empty mimics the fact that, in more realistic systems of \textit{e.g.} particles moving in a narrow channel, overtakes are easier when particles have more space.
The allowed transitions are summarized in Fig.~\ref{fig:system}. Such a system can be easily simulated using the Monte Carlo algorithm.
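As an illustration, a continuous-time (Gillespie) Monte Carlo step for this dynamics can be sketched as follows. This is our own minimal implementation of the transition rules of Fig.~\ref{fig:system}; all function and variable names are chosen for illustration only.

```python
import random

def allowed_moves(occ, tr, p, q, pp, qp):
    """List (rate, kind, dest, src) for every allowed transition of the ring model.

    occ : list of 0/1 site occupations (the tracer's site counts as occupied)
    tr  : index of the tracer site
    """
    L = len(occ)
    moves = []
    for i in range(L):
        if occ[i] and i != tr:                       # bath-particle hops, rate 1
            if not occ[(i + 1) % L]:
                moves.append((1.0, 'bath', (i + 1) % L, i))
            if not occ[(i - 1) % L]:
                moves.append((1.0, 'bath', (i - 1) % L, i))
    r1, r2 = (tr + 1) % L, (tr + 2) % L
    l1, l2 = (tr - 1) % L, (tr - 2) % L
    if not occ[r1]:
        moves.append((p, 'tracer', r1, tr))          # simple hop to the right
        if occ[r2]:
            moves.append((pp, 'exchange', r2, tr))   # overtake to the right
    if not occ[l1]:
        moves.append((q, 'tracer', l1, tr))
        if occ[l2]:
            moves.append((qp, 'exchange', l2, tr))
    return moves

def gillespie_step(occ, tr, moves, rng):
    """Draw an exponential waiting time, pick one transition with probability
    proportional to its rate, and apply it (assumes moves is non-empty).
    Returns the new tracer position and the elapsed time."""
    total = sum(m[0] for m in moves)
    dt = rng.expovariate(total)
    x = rng.random() * total
    for rate, kind, dest, src in moves:
        x -= rate
        if x < 0.0:
            break
    if kind == 'bath' or kind == 'tracer':
        occ[src], occ[dest] = 0, 1
        if kind == 'tracer':
            tr = dest
    else:                   # exchange: tracer and bath particle swap sites
        tr = dest
    return tr, dt
```

Iterating `gillespie_step` and accumulating the tracer displacement per unit time gives an estimator of the tracer velocity.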
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=0.4\textwidth]{scheme.eps}
\end{center}
\caption{\small Allowed transitions. Bath particles (black) hop to the right and to the left with constant rate $1$ each (top row), the tracer (red) hops to the right with rate $p$ and to the left with rate $q$ (middle row) and can exchange its position with a particle two sites away with rates $p'$ and $q'$ if the intermediate site is empty (bottom row).}
\label{fig:system}
\end{figure}
At large times we expect the tracer to have a finite velocity and the bath to reach a stationary nonequilibrium state in the frame of the tracer. We define the occupation variables in the frame of the tracer by $\tau_l = 0$ or $1$ for $l=1,\ldots,L-1$ and use $\langle \ldots \rangle$ for ensemble averages. The tracer occupies site $l=0$.
The average velocity of the tracer is given by
\begin{eqnarray}
\label{eq:vtrdef}
{v_\text{tr}} &=& p (1-\langle \tau_1 \rangle) -q (1- \langle \tau_{L-1} \rangle) \nonumber \\
&&+ 2 p' \langle (1-\tau_1) \tau_2 \rangle - 2 q' \langle (1-\tau_{L-1}) \tau_{L-2}\rangle.
\end{eqnarray}
The first term accounts for hops of the tracer one step to the right: the transition is allowed if site $l=1$ is empty, contributing a factor $1-\tau_1$, and then occurs with rate $p$. The third term accounts for exchanges to the right: this transition is allowed if site $l=1$ is empty and site $l=2$ is occupied (factor $\tau_2(1-\tau_1)$). It occurs with rate $p'$, and the tracer moves two steps to the right (factor $2$). The second and fourth terms are hops and exchanges to the left, respectively. An ensemble average of the whole expression is taken.
We also define the densities $\rho_l = \langle \tau_l \rangle$ for $l=1,\ldots,L-1$.
A configuration of the system is entirely specified by the $\{\tau_l\}_{l=1,\ldots,L-1}$, supplemented with the position of the tracer in the lab frame. In the case $p=q$ and $p' = q'$ it is clear that the rate of every allowed transition between two states of the system is equal to the rate of the inverse transition. This implies that detailed balance is satisfied and that the stationary distribution is flat.
Interesting phenomena happen when a drive is applied to the system, namely $q \neq p$ or $q' \neq p'$. We now present numerical results for the tracer velocity and the density profile before showing how they can be understood analytically.
\begin{figure}
\begin{center}
\includegraphics[width=0.45\textwidth]{vtrd.eps}
\end{center}
\caption{\small Velocity of the tracer as a function of $\delta$ for fixed $r=1$, $r'=0.5$ and $\delta'=0$ and various densities in a system of length $L=500$. For large densities the sign of the velocity is opposite to the one of $\delta$. Note also the NDM occurring for $\overline{\rho} = 0.25$ and $\delta \gtrsim 0.5$. Numerical results (symbols) are compared to the theory of Section~\ref{section:linres} for small $\delta$ (lines).}
\label{fig:vtrd}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.45\textwidth]{vtrdp.eps}
\end{center}
\caption{\small Velocity of the tracer as a function of $\delta'$ for fixed $r=1$, $r'=0.5$ and $\delta=0$ and various densities in a system of length $L=500$.
Numerical results (symbols) are compared to the theory of Section~\ref{section:linres} for small $\delta'$ (lines).}
\label{fig:vtrdp}
\end{figure}
\subsection{Numerical results}
The system defined above can be easily simulated for any values of $p = r + \frac{\delta}{2}$, $q = r -\frac{\delta}{2}$, $p' = r' + \frac{\delta'}{2}$ and $q' = r' - \frac{\delta'}{2}$. In Figures~\ref{fig:vtrd} and~\ref{fig:vtrdp} we present numerical results for the tracer velocity at fixed values of $r$ and $r'$ as a function of the respective biases, first $\delta \neq 0$ and then $\delta' \neq 0$. In particular, for $\delta \neq 0$ and $\delta' = 0$ (Fig.~\ref{fig:vtrd}), the curves are monotonically increasing for small densities, but start to exhibit NDM and even ANM for larger densities. By contrast, for $\delta = 0$ and $\delta' \neq 0$ the curves are monotonically increasing (Fig.~\ref{fig:vtrdp}).
In Figures~\ref{fig:dend} and~\ref{fig:dendp} we plot the corresponding density profiles for different values of the average density. The density is found to be flat in the bulk of the system, \textit{i.e.} far from the tracer, with a meniscus appearing on one side of the tracer. It appears that the change of sign in the velocity for $\delta \neq 0$ as the density is increased is accompanied by a qualitative change in the density profile, where a meniscus appears at the front of the tracer for low densities and at its back for high densities (see Fig.~\ref{fig:dend}).
\begin{figure}
\begin{center}
\includegraphics[width=0.45\textwidth]{dendb.eps}
\end{center}
\caption{\small Density profiles in the tracer frame for different average densities $\overline{\rho}$ in systems with $r=1$, $r'=0.5$, $\delta=0.4$ and $\delta' = 0$. Numerical results (solid lines) are compared to the
predictions of section~\ref{section:density} (dashed lines). The decay length~\eqref{eq:decaylngth} changes sign at $\overline{\rho} = 0.5$, which goes hand in hand with the change in the sign of the velocity observed in Figure~\ref{fig:vtrd}.}
\label{fig:dend}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.45\textwidth]{dendpb.eps}
\end{center}
\caption{\small Density profiles in the tracer frame for different average densities $\overline{\rho}$ in systems with $r=1$, $r'=0.5$, $\delta=0$ and $\delta' = 0.4$. Numerical results (solid lines) are compared to the
predictions of section~\ref{section:density} (dashed lines). The density difference between the right and the left of the tracer changes sign at $\overline{\rho} = 0.5$, see the theoretical expression~\eqref{eq:solphi} with the simplification~\eqref{eq:signC1} for the prefactor.
}
\label{fig:dendp}
\end{figure}
In the next subsection we study the system around its equilibrium state using a mean-field approximation, and compute ${v_\text{tr}}$ in the linear response regime.
\subsection{Tracer velocity}
\label{section:linres}
We start by writing mean-field equations for the densities $\{\rho_l\}_{l=1,\ldots,L-1}$,
\begin{widetext}
\begin{eqnarray}
\label{eq:mfdensity}
\frac{\mathrm{d} \rho_1}{\mathrm{d} t} &=& \rho_2 -\rho_1 + p (1-\rho_1) \rho_2 - q (1-\rho_{L-1} ) \rho_1 +p' (1-\rho_1) \rho_2 \rho_3 - q' (1-\rho_{L-1}) \rho_{L-2} \rho_1, \nonumber \\
\frac{\mathrm{d} \rho_2}{\mathrm{d} t} &=& \rho_3 -2 \rho_2 + \rho_1 + p (1-\rho_1) (\rho_3 - \rho_2) + q (1-\rho_{L-1} ) (\rho_1-\rho_2) \nonumber \\ && +p' (1-\rho_1) \rho_2 (\rho_4 -1) + q' (1-\rho_{L-1}) \rho_{L-2} (1-\rho_2), \nonumber \\
\frac{\mathrm{d} \rho_l}{\mathrm{d} t} &=& \rho_{l+1} -2 \rho_l + \rho_{l-1} + p (1-\rho_1) (\rho_{l+1} - \rho_l) + q (1-\rho_{L-1} ) (\rho_{l-1}-\rho_l) \\ && +p' (1-\rho_1) \rho_2 (\rho_{l+2} -\rho_l) + q' (1-\rho_{L-1}) \rho_{L-2} (\rho_{l-2}-\rho_l), \qquad \qquad l=3,\ldots, L-3, \nonumber \\
\frac{\mathrm{d} \rho_{L-2}}{\mathrm{d} t} &=& \rho_{L-1} -2 \rho_{L-2} + \rho_{L-3} + p (1-\rho_1) (\rho_{L-1} - \rho_{L-2}) + q (1-\rho_{L-1} ) (\rho_{L-3}-\rho_{L-2}), \nonumber \\ && + p' (1-\rho_1) \rho_2 (1 - \rho_{L-2}) + q' (1-\rho_{L-1}) \rho_{L-2} (\rho_{L-4}-1), \nonumber \\
\frac{\mathrm{d} \rho_{L-1}}{\mathrm{d} t} &=& -\rho_{L-1} + \rho_{L-2} - p (1-\rho_1) \rho_{L-1} + q (1-\rho_{L-1} ) \rho_{L-2} - p' (1-\rho_1) \rho_2 \rho_{L-1} + q' (1-\rho_{L-1}) \rho_{L-2} \rho_{L-3}, \nonumber
\end{eqnarray}
where we factorized correlations $\langle \tau_{l_1} \ldots \tau_{l_k}\rangle = \rho_{l_1} \ldots \rho_{l_k}$ for distinct positions. Since correlations are factorized in equilibrium, it is reasonable to expect that this approximation will give good results at least close to equilibrium. A similar technique has also been proven accurate in closely related systems, see \textit{e.g.}~\cite{benichou1999, deconinck_o_m1997, benichou2001, benichou2000b, benichou2015, brummelhuis_h1989, illien2013, illien2015, benichou2013a,benichou2013b,benichou2013c, benichou2016}. The tracer velocity~\eqref{eq:vtrdef} becomes
\begin{eqnarray}
\label{eq:vtrmf}
{v_\text{tr}} &=& p (1-\rho_1) -q (1- \rho_{L-1}) + 2 p' (1-\rho_1) \rho_2 - 2 q' (1-\rho_{L-1}) \rho_{L-2}.
\end{eqnarray}
\end{widetext}
We consider equations~\eqref{eq:mfdensity} in the stationary state $\mathrm{d} \rho_l/\mathrm{d} t = 0$. Because of particle number conservation, they give only $L-2$ independent conditions. The missing $(L-1)^\mathrm{th}$ condition is obtained by fixing the number of particles,
\begin{equation}
\label{eq:norm}
\sum_{l=1}^{L-1} \rho_l = N.
\end{equation}
When there is no bias, $\delta = \delta' = 0$, it is easy to confirm that a flat density profile $\rho_l = \overline{\rho}$ solves equations~\eqref{eq:mfdensity}-\eqref{eq:norm} so that the tracer velocity~\eqref{eq:vtrmf} vanishes.
For simplicity, we now consider the case where the biases $\delta$ and $\delta'$ are small and of the same order. We expand the density,
\begin{equation}
\label{eq:defsigma}
\rho_l = \overline{\rho} + \delta \sigma_l + \delta' \sigma_l' + \mathcal{O}(\delta^2),
\end{equation}
and study the solution to linear order in $\delta$ and $\delta'$. While the tracer velocity is rather well-described to this order, this is not the case for the density profile. In order to obtain the density profile, one has to study the equations to second order in $\delta$ and $\delta'$. This will be done in the next subsection.
We start by solving the equations for the bulk sites $l=3,\ldots,L-3$. The terms linear in $\delta$ and $\delta'$ give equations for $\sigma_l$ and $\sigma_l'$, respectively. Both bulk equations turn out to be the same,
\begin{eqnarray}
\label{eq:bulk}
(1+r (1-\overline{\rho})) (\sigma_{l+1}-2 \sigma_l+\sigma_{l-1}) && \\ + r' \overline{\rho} (1-\overline{\rho}) (\sigma_{l+2}-2 \sigma_l+\sigma_{l-2}) &=& 0,\nonumber
\end{eqnarray}
for $l=3,\ldots,L-3$, and the very same equation for the $\{\sigma_l'\}_{l=1,\dots,L-1}$. Note that in the continuum limit this equation would reduce to a Laplace equation $\partial_l^2 \sigma = 0$, giving a linear density profile.
The solution of the discrete equation is
\begin{eqnarray}
\label{eq:solbulk}
\sigma_l &=& \alpha + \beta l +\gamma_+ X^l + \gamma_- X^{L-l}, \nonumber \\
\sigma_l' &=& \alpha' + \beta' l +\gamma_+' X^l + \gamma_-' X^{L-l},
\end{eqnarray}
for $l=1,\ldots,L-1$, where
\begin{eqnarray}
\label{eq:Xdef}
X &=& - \left( 1+ \frac{1+r (1-\overline{\rho})}{2 r' \overline{\rho} (1-\overline{\rho})}\right) \\ &&+ \sqrt{\left( 1+ \frac{1+r (1-\overline{\rho})}{2 r' \overline{\rho} (1-\overline{\rho})}\right)^2-1} \nonumber
\end{eqnarray}
and $X^{-1}$ are the roots of
\begin{eqnarray}
\label{eq:Xeq}
(1+r(1-\overline{\rho})) X +r' \overline{\rho} (1-\overline{\rho}) (1+X)^2 = 0.
\end{eqnarray}
Note that $|X| < 1$.
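As a sanity check (our own, with arbitrarily chosen parameters), one can verify numerically that the explicit form~\eqref{eq:Xdef} solves equation~\eqref{eq:Xeq} and satisfies $|X| < 1$:

```python
import math

def X_root(r, rp, rho):
    """Decaying root X of (1 + r(1-rho)) X + rp*rho*(1-rho)*(1+X)^2 = 0."""
    a = 1.0 + (1.0 + r * (1.0 - rho)) / (2.0 * rp * rho * (1.0 - rho))
    return -a + math.sqrt(a * a - 1.0)

def characteristic_poly(X, r, rp, rho):
    """Left-hand side of the quadratic equation satisfied by X and 1/X."""
    return (1.0 + r * (1.0 - rho)) * X + rp * rho * (1.0 - rho) * (1.0 + X) ** 2
```

Since the prefactor in front of the square root exceeds $1$, the root always lies in $(-1,0)$, so the corresponding contribution to the density decays (with alternating sign) away from the tracer.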
We now have to satisfy the boundary and normalization conditions. In the large $N$ and $L$ limit with $\frac{N}{L-1} = \overline{\rho}$, the normalization~\eqref{eq:norm} gives $\beta = - \frac{2 \alpha}{L}$ and $\beta' = - \frac{2 \alpha'}{L}$. Let us now take the sum of the equations for $\rho_1$ and $\rho_{L-1}$. Sorting the $\delta$ and $\delta'$ terms, we again get the same equation for $\sigma_l$ and $\sigma_l'$,
\begin{eqnarray}
\label{eq:s1Lm1}
(1+r(1-\overline{\rho})) (\sigma_2 - \sigma_1 + \sigma_{L-2} - \sigma_{L-1})&& \\ + r' \overline{\rho} (1-\overline{\rho}) (\sigma_3 - \sigma_1 + \sigma_{L-3} - \sigma_{L-1}) &=& 0. \nonumber
\end{eqnarray}
Taking the sum of the equations for $\rho_2$ and $\rho_{L-2}$
leads to the same result.
For large $L$, we have that $\sigma_l \simeq \alpha + \gamma_+ X^l$ for $l=\mathcal{O}(1)$, and $\sigma_{L-l} \simeq -\alpha + \gamma_- X^l$ for $L-l=\mathcal{O}(1)$. Inserting these forms in equation~\eqref{eq:s1Lm1}, we get $\gamma_- = - \gamma_+$ and, similarly, $\gamma_-' = - \gamma_+'$. The linear perturbations to the densities therefore have the form
\begin{eqnarray}
\label{eq:solbulks}
\sigma_l &=& \alpha \left(1- 2 \frac{l}{L} \right) +\gamma_+ \left( X^l - X^{L-l} \right), \nonumber \\
\sigma_l' &=& \alpha' \left(1- 2 \frac{l}{L} \right) +\gamma_+' \left( X^l - X^{L-l} \right).
\end{eqnarray}
The equations for, say, $\rho_1$ and $\rho_2$ give two systems of two linear equations for $\alpha$ and $\gamma_+$, and for $\alpha'$ and $\gamma_+'$. The solutions to these equations are obtained using Mathematica and are given in Appendix~\ref{section:appagag}.
The density profile obtained in this analysis is linear in the bulk with exponential layers on both sides of the tracer. This is different from the numerical observation of a flat profile in the bulk and an exponential layer only on one side of the tracer. This discrepancy is a result of the fact that the analysis has been carried out to linear order in $\delta$ and $\delta'$. This will be corrected in Section~\ref{section:density}.
The velocity of the tracer can be obtained from the mean-field expression~\eqref{eq:vtrmf} and the solutions~\eqref{eq:solbulks}, where the coefficients are given by~\eqref{eq:ag}. We separate it into two contributions, namely the one coming from the hops of the tracer towards an empty site (terms proportional to $p$ and $q$ in
equation~\eqref{eq:vtrmf}) and the one coming from exchanges (terms proportional to $p'$ and $q'$ in
the same equation). We can write ${v_\text{tr}} = v_\text{tr,H} + v_\text{tr,E}$, where the subscripts $\text{H}$ and $\text{E}$ indicate the contributions coming from hops and exchanges, respectively. Each of these pieces has a term proportional to its
corresponding bias,
\begin{equation}
{v_\text{tr}}_{,A} = \mu_{A,\text{H}} \delta + \mu_{A,\text{E}} \delta',
\end{equation}
which gives four coefficients $\mu_{A,B}$ with $A,B \in \{\text{H},\text{E}\}$. Explicitly, they are
\begin{widetext}
\begin{eqnarray}
\label{eq:musol}
\mu_{\text{H},\text{H}} &=& (1-\overline{\rho}) - r (\sigma_1 - \sigma_{L-1}) = (1-\overline{\rho}) - 2 r (\alpha + \gamma_+ X) = \frac{r' (1-2 \overline{\rho})^2}{2 r \overline{\rho}^2} \mu_{\text{E},\text{E}}, \nonumber \\
\mu_{\text{H},\text{E}} &=& - r (\sigma_1' - \sigma_{L-1}') = - 2 r (\alpha' + \gamma_+' X) = \frac{1-2 \overline{\rho}}{2 \overline{\rho}} \mu_{\text{E},\text{E}}, \\
\mu_{\text{E},\text{H}} &=& 2 r' \left( (1-\overline{\rho}) (\sigma_2 - \sigma_{L-2}) + \overline{\rho} (\sigma_{L-1} - \sigma_1)\right) = 4 r' \left( (1-\overline{\rho}) (\alpha + \gamma_+ X^2) - \overline{\rho} (\alpha + \gamma_+ X) \right) = \frac{r' (1-2 \overline{\rho})}{ r \overline{\rho}} \mu_{\text{E},\text{E}}, \nonumber
\end{eqnarray}
with the last coefficient
\begin{eqnarray}
\label{eq:muee}
\mu_{\text{E},\text{E}} &=& 2 \overline{\rho} (1-\overline{\rho}) + 2 r' (1-\overline{\rho}) (\sigma_2' - \sigma_{L-2}') + 2 r' \overline{\rho} (\sigma_{L-1}'-\sigma_1') = 2 \overline{\rho} (1-\overline{\rho}) + 4 r' (1-\overline{\rho}) (\alpha' + \gamma_+' X^2) - 4 r' \overline{\rho} (\alpha' + \gamma_+' X) \nonumber \\
&=& \frac{2 \overline{\rho}^2 (1-\overline{\rho}) r \sqrt{1+r (1-\overline{\rho})}}{(2 \overline{\rho} - 1) (r+r' (2 \overline{\rho} - 1)) \sqrt{1+r (1-\overline{\rho})} + r (1-\overline{\rho}) \sqrt{1+r(1-\overline{\rho}) + 4 r' \overline{\rho} (1-\overline{\rho})}}.
\end{eqnarray}
\end{widetext}
In Appendix~\ref{section:appmuee} we show that $\mu_{\text{E},\text{E}} > 0$ for all $0 < \overline{\rho} < 1$, $r > 0$, $r' > 0$.
For clarity, let us group the linear response coefficients~\eqref{eq:musol}-\eqref{eq:muee} into a linear response matrix,
\begin{equation}
\label{eq:linresmat}
\left( \begin{array}{c} v_\text{tr,H} \\ v_\text{tr,E} \end{array} \right)
= \frac{r' \mu_{\text{E},\text{E}}}{2 \overline{\rho}^2} \left( \begin{array}{cc} (1-2\overline{\rho})^2 & 2 \overline{\rho} (1-2\overline{\rho}) \\ 2 \overline{\rho} (1-2\overline{\rho}) & 4 \overline{\rho}^2 \end{array} \right) \left( \begin{array}{c} \delta/r \\ \delta'/(2r') \end{array} \right),
\end{equation}
where the entries of the column vector on the RHS are the thermodynamically conjugate forces. In this basis the response matrix is symmetric, as expected from the Onsager relations.
In~\eqref{eq:linresmat} we see that the diagonal coefficients of the response matrix are always positive, consistent with fluctuation-dissipation relations. Conversely, the off-diagonal coefficients need not be positive, and indeed they change sign at $\overline{\rho} = \frac{1}{2}$, which allows for ANM. Thus, the ANM found in this model in the linear response regime is a direct consequence of the fact that the dynamics involves two driving mechanisms, namely hopping ($p/q$) and exchange ($p'/q'$).
Note that the columns of the linear response matrix are proportional, which shows that the response of the tracer to the two driving fields is the same. This indicates that exchange and hopping are completely coupled in the sense of~\cite{kedem_c1965}.
The total velocity becomes
\begin{equation}
\label{eq:linresvtr}
{v_\text{tr}} = \frac{\mu_{\text{E},\text{E}}}{2 r \overline{\rho}^2} \left( r' (1-2\overline{\rho}) \delta + r \overline{\rho} \delta' \right).
\end{equation}
For fixed $\delta, \delta' > 0$, the velocity starts out positive for small $\overline{\rho}$, and changes sign for $\overline{\rho} = \left(2 - \frac{r \delta'}{r' \delta} \right)^{-1}$, which is smaller than $1$ for $r \delta' < r' \delta$.
The prediction~\eqref{eq:linresvtr} is compared to the results of Monte-Carlo simulations in Figures~\ref{fig:vtrd} and~\ref{fig:vtrdp}, and the agreement is very good.
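As an independent check (ours, not part of the original analysis), the closed forms~\eqref{eq:muee} and~\eqref{eq:linresvtr} are easily evaluated numerically; the sketch below confirms $\mu_{\text{E},\text{E}} > 0$ and the predicted sign change of ${v_\text{tr}}$:

```python
import math

def mu_EE(r, rp, rho):
    """Closed-form coefficient mu_{E,E} of the linear response matrix."""
    s1 = math.sqrt(1.0 + r * (1.0 - rho))
    s2 = math.sqrt(1.0 + r * (1.0 - rho) + 4.0 * rp * rho * (1.0 - rho))
    num = 2.0 * rho**2 * (1.0 - rho) * r * s1
    den = (2.0 * rho - 1.0) * (r + rp * (2.0 * rho - 1.0)) * s1 \
        + r * (1.0 - rho) * s2
    return num / den

def v_tr_linear(r, rp, rho, delta, deltap):
    """Linear-response tracer velocity, eq. (linresvtr)."""
    return mu_EE(r, rp, rho) / (2.0 * r * rho**2) \
        * (rp * (1.0 - 2.0 * rho) * delta + r * rho * deltap)
```

For $r=1$, $r'=0.5$, $\delta=0.4$, $\delta'=0.1$ the velocity vanishes at $\overline{\rho} = \left(2 - r\delta'/(r'\delta)\right)^{-1} = 2/3$ and is negative above this density, i.e. opposite to the bias $\delta$.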
Similarly, one can predict the current $\mathcal{J}_\mathrm{B}$ of bath particles in the linear regime. It can be expressed as a function of the densities in the neighborhood of the tracer (see Appendix~\ref{section:appJb}),
\begin{equation}
\label{eq:Jb}
\mathcal{J}_\mathrm{B} = \frac{\rho_1 - \rho_{L-1} + 2 q' (1-\rho_{L-1}) \rho_{L-2} - 2 p' (1-\rho_1) \rho_2}{L}.
\end{equation}
Using the computed values of $\rho_1$, $\rho_2$, $\rho_{L-2}$ and $\rho_{L-1}$, one obtains analytical predictions for $\mathcal{J}_\mathrm{B}$. They are compared to Monte Carlo simulations in Figures~\ref{fig:Jbd} and~\ref{fig:Jbdp}. The slope at the origin is predicted accurately.
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=0.45\textwidth]{Jbd.eps}
\end{center}
\caption{\small The bath particle current, for the same parameters as in Fig.~\ref{fig:vtrd}, is compared
to the analytical prediction~\eqref{eq:Jb} for small $\delta$.}
\label{fig:Jbd}
\end{figure}
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=0.45\textwidth]{Jbdp.eps}
\end{center}
\caption{\small The bath particle current, for the same parameters as in Fig.~\ref{fig:vtrdp}, is compared
to the analytical prediction~\eqref{eq:Jb} for small $\delta'$.}
\label{fig:Jbdp}
\end{figure}
Note that the analysis presented in this section yields good agreement for ${v_\text{tr}}$ and $\mathcal{J}_\mathrm{B}$, since they are determined by the average density of the sites close to the tracer (Eqs.\,\eqref{eq:vtrmf},\eqref{eq:Jb}). The total current can be obtained as $\frac{{v_\text{tr}}}{L} + \mathcal{J}_\mathrm{B}$ and is therefore predicted accurately by the linear analysis as well.
While these densities are well described by Eqs.\,\eqref{eq:solbulk},\eqref{eq:Xdef}, the overall density profile is not.
Let us now focus on correcting the discrepancy in the density profile by extending the analysis to second order in $\delta$ and $\delta'$.
\subsection{Density profile}
\label{section:density}
In order to explain the form of the density profile for small biases, one has to keep higher-order terms in the expansion. We consider a system with small biases $\delta$ and $\delta'$ and write $\phi_l = \rho_l - \overline{\rho}$. Expanding the bulk equation to second order in $\phi_l$ gives
\begin{widetext}
\begin{eqnarray}
\label{eq:phi}
0 &=& (1+r(1-\overline{\rho})) (\phi_{l+1} - 2 \phi_l + \phi_{l-1}) + r' \overline{\rho} (1-\overline{\rho}) (\phi_{l+2} - 2 \phi_l + \phi_{l-2}) \nonumber \\ &&+ \left( (1-\overline{\rho}) \frac{\delta}{2} - r \phi_1 \right) (\phi_{l+1} - \phi_l) + \left( -(1-\overline{\rho}) \frac{\delta}{2} - r \phi_{L-1} \right) (\phi_{l-1} - \phi_l) \\ && + \left( (1-\overline{\rho}) \overline{\rho} \frac{\delta'}{2} - r' \overline{\rho} \phi_{1} + r' (1-\overline{\rho}) \phi_{2} \right) (\phi_{l+2} - \phi_l) + \left( -(1-\overline{\rho}) \overline{\rho} \frac{\delta'}{2} - r' \overline{\rho} \phi_{L-1} + r' (1-\overline{\rho}) \phi_{L-2} \right) (\phi_{l-2} - \phi_l) \nonumber \\
&\simeq& \left[1+r(1-\overline{\rho}) + 4 r' \overline{\rho} (1-\overline{\rho})\right] \partial_l^2 \phi + \left[ (1-\overline{\rho}) \delta + 2 \overline{\rho} (1-\overline{\rho}) \delta' - (r+2r' \overline{\rho}) (\phi_1 - \phi_{L-1}) + 2 r' (1-\overline{\rho}) (\phi_2 - \phi_{L-2}) \right] \partial_l \phi, \nonumber
\end{eqnarray}
\end{widetext}
where we approximated discrete differences by derivatives. The coefficient of the first order derivative is exactly the part of ${v_\text{tr}}$ linear in $\phi$, and it depends on values of $\phi$ close to the tracer.
Let us replace them by their values from the preceding section,
\begin{equation}
\label{eq:approxphi}
\phi_n - \phi_{L-n} \simeq 2 \delta (\alpha + \gamma_+ X^n) + 2 \delta' (\alpha' + \gamma_+' X^n),
\end{equation}
for $n=1,2$, so that the aforementioned coefficient exactly becomes the ${v_\text{tr}}$ as obtained in equation~\eqref{eq:linresvtr}. The solution of~\eqref{eq:phi} is exponential,
\begin{equation}
\label{eq:solphi0}
\phi_l = C_1 \mathrm{e}^{-\frac{l}{\xi}} + C_2,
\end{equation}
where $C_1$ and $C_2$ are integration constants, and the decay length is
\begin{equation}
\label{eq:decaylngth}
\xi = \frac{1+r(1-\overline{\rho}) + 4 r' \overline{\rho} (1-\overline{\rho})}{{v_\text{tr}}}.
\end{equation}
We use the definition~\eqref{eq:decaylngth}, where $\xi$ can be either positive or negative, to make the presentation simpler.
The decay length is of order $\delta^{-1}$ or $\delta'^{-1}$, which explains why we found linear profiles when we neglected $\mathcal{O}(\delta^2)$ terms. One can check that expression~\eqref{eq:decaylngth} diverges at $\overline{\rho}=\frac{1}{2}$ for $\delta' = 0$, and at $\overline{\rho}=1$ for any $\delta$,$\delta'$. This is consistent with the respective linear profiles obtained in Fig.\,\ref{fig:dend} and the high-density calculation of Section~\ref{section:highrho}. In the low-density limit $\overline{\rho} \to 0$ one simply gets $\xi \sim \frac{1+r}{\delta}$, which is the diffusion coefficient of the free tracer divided by the bias.
The constants $C_1$ and $C_2$ are linked by mass conservation, $\sum_{l=1}^{L-1} \phi_l = 0$, giving
\begin{equation}
\label{eq:consmC1C2}
C_2 = - \frac{1}{L-1} \frac{\mathrm{e}^{-\xi^{-1}} - \mathrm{e}^{-\xi^{-1} L}}{1-\mathrm{e}^{-\xi^{-1}}} C_1.
\end{equation}
As $C_1 = \left( \mathrm{e}^{-\xi^{-1}} - \mathrm{e}^{-\xi^{-1}(L-1)} \right)^{-1} \left( \phi_1 - \phi_{L-1}\right)$, we may employ approximation~\eqref{eq:approxphi} again in order to obtain the value of $C_1$. We end up with
\begin{eqnarray}
\label{eq:solphi}
\phi_l &=& \frac{2 \left[ (\alpha+\gamma_+ X) \delta + (\alpha'+\gamma_+' X) \delta' \right]}{1 - \mathrm{e}^{-\xi^{-1}(L-2)}} \\ && \times \left[ \mathrm{e}^{-\xi^{-1} (l-1)} - \frac{1}{L-1} \frac{1 - \mathrm{e}^{-\xi^{-1}(L-1)}}{1-\mathrm{e}^{-\xi^{-1}}} \right]. \nonumber
\end{eqnarray}
The form~\eqref{eq:solphi} is shown to be in good agreement with simulations in Figs.\,\ref{fig:dend} and~\ref{fig:dendp}.
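The profile~\eqref{eq:solphi0} with the constant~\eqref{eq:consmC1C2} conserves mass by construction; a short numerical check (our own, with arbitrarily chosen $C_1$ and $\xi$, including negative $\xi$) makes this explicit:

```python
import math

def phi_profile(C1, xi, L):
    """phi_l = C1 exp(-l/xi) + C2, with C2 fixed by sum_{l=1}^{L-1} phi_l = 0."""
    e = math.exp(-1.0 / xi)
    C2 = -C1 / (L - 1) * (e - e**L) / (1.0 - e)
    return [C1 * math.exp(-l / xi) + C2 for l in range(1, L)]
```

The sign of $\xi$ only decides on which side of the tracer the exponential meniscus sits; the flat offset $C_2$ absorbs the excess mass in either case.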
\subsection{High density regime}
\label{section:highrho}
Here we go beyond linear response for high densities $\overline{\rho} \simeq 1$. In that case the holes are very sparse and can be considered independent, so that we can start by studying a system with $L-2$ bath particles and only one hole. Let $l = 1,\ldots,L-1$ denote the position of the hole with respect to the tracer. Examination shows that the probability distribution of the position of the hole, $P_l(t)$, obeys
\begin{eqnarray}
\label{eq:meqhole}
\frac{\mathrm{d} P_l}{\mathrm{d} t} &=& P_{l+1} - 2 P_l + P_{l-1} \\ && + \delta_{l,1} [(1-p-p')P_1 + (q+q') P_{L-1}] \nonumber \\ && + \delta_{l,L-1} [ (1-q-q')P_{L-1} + (p+p') P_1 ], \nonumber
\end{eqnarray}
with the convention $P_0 = P_L = 0$, and the normalization $\sum_{l=1}^{L-1} P_l = 1$. When the hole is far from the tracer, $l \neq 1, L-1$, it simply diffuses, as seen in the first line of~\eqref{eq:meqhole}. The two other terms in this equation correspond to hopping and exchange processes, which take place when the hole is next to the tracer.
In the stationary state and large $L$ limit, these equations give
\begin{equation}
\label{eq:solPY}
P_l = \frac{2}{L(p+p'+q+q')} \left( (p+p'-q-q') \frac{l}{L} +q+q' \right).
\end{equation}
This gives, for one hole,
\begin{equation}
\label{eq:linresmathighole}
\left( \begin{array}{c} v_\text{tr,H}^{(1)} \\ v_\text{tr,E}^{(1)} \end{array} \right)
= \left( \begin{array}{c} p P_1 - q P_{L-1} \\ 2 p' P_1 - 2 q' P_{L-1} \end{array} \right)
= \frac{r \delta' - r' \delta}{L (r + r')} \left( \begin{array}{c} -1 \\ 2 \end{array} \right),
\end{equation}
to all orders in $\delta$ and $\delta'$. For a system in which the number of holes, $L (1-\overline{\rho})$, is not too large, we can simply add the contributions of the individual holes. This gives
\begin{equation}
\label{eq:vtrhighrho}
{v_\text{tr}} = - v_\text{tr,H} = \frac{v_\text{tr,E}}{2} = (1-\overline{\rho}) \frac{r \delta' - r' \delta}{r + r'},
\end{equation}
and the density profile is linear,
\begin{equation}
\label{eq:denhighrho}
\rho_l = \overline{\rho}- (1-\overline{\rho}) \frac{\delta+\delta'}{r +r'} \left(\frac{l}{L} - \frac{1}{2}\right).
\end{equation}
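A direct numerical evaluation (our own check; the parameter values are arbitrary) confirms that the distribution~\eqref{eq:solPY} reproduces the one-hole velocities~\eqref{eq:linresmathighole} up to $\mathcal{O}(1/L^2)$ corrections:

```python
def one_hole_velocities(r, rp, delta, deltap, L):
    """Hop and exchange velocity contributions of a single hole."""
    p, q = r + delta / 2.0, r - delta / 2.0
    pp, qp = rp + deltap / 2.0, rp - deltap / 2.0
    S = p + pp + q + qp

    def P(l):  # stationary hole distribution, eq. (solPY)
        return 2.0 / (L * S) * ((p + pp - q - qp) * l / L + q + qp)

    v_H = p * P(1) - q * P(L - 1)            # hops towards the hole
    v_E = 2.0 * pp * P(1) - 2.0 * qp * P(L - 1)  # exchanges through the hole
    return v_H, v_E
```

For large $L$ the two contributions approach $-(r\delta' - r'\delta)/[L(r+r')]$ and $2(r\delta' - r'\delta)/[L(r+r')]$, respectively, as in the column vector of~\eqref{eq:linresmathighole}.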
The results~\eqref{eq:vtrhighrho} and~\eqref{eq:denhighrho} are expected to be exact in the high-density limit.
They also match the high-density limits of the mean-field predictions for the velocity~\eqref{eq:linresvtr} and the density profile~\eqref{eq:solphi}.
Indeed, the agreement between the predicted profile~\eqref{eq:denhighrho} and
the numerical results can be checked to be very good for large densities.
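The matching between the mean-field linear-response result and the high-density formula can also be checked explicitly; the snippet below is our own numerical sketch, valid for small biases where the two expressions are comparable:

```python
import math

def v_mf(r, rp, rho, delta, deltap):
    """Mean-field linear-response velocity, eq. (linresvtr)."""
    s1 = math.sqrt(1.0 + r * (1.0 - rho))
    s2 = math.sqrt(1.0 + r * (1.0 - rho) + 4.0 * rp * rho * (1.0 - rho))
    mu = 2.0 * rho**2 * (1.0 - rho) * r * s1 / (
        (2.0 * rho - 1.0) * (r + rp * (2.0 * rho - 1.0)) * s1
        + r * (1.0 - rho) * s2)
    return mu / (2.0 * r * rho**2) * (rp * (1.0 - 2.0 * rho) * delta
                                      + r * rho * deltap)

def v_high_density(r, rp, rho, delta, deltap):
    """High-density velocity, eq. (vtrhighrho)."""
    return (1.0 - rho) * (r * deltap - rp * delta) / (r + rp)
```

The relative difference between the two expressions shrinks proportionally to $1-\overline{\rho}$ as the density approaches $1$.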
More importantly, considering a system with one hole helps to shed light on the way ANM occurs, see Fig.\,\ref{fig:mechanm} and the explanation in the caption.
The sequence of transitions shown in Fig.\,\ref{fig:mechanm} contains a step where the tracer hops to the right and is therefore favored by an increase of $p$. The net result of this sequence is an overall displacement of the tracer to the left. Symmetrically, an increase of $q$ favors a sequence of transitions that
results in a net displacement of the tracer to the right. Therefore, when $p > q$ the tracer moves preferentially to the left and ANM is observed.
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=0.48\textwidth]{mechanm_modif.eps}
\end{center}
\caption{\small Mechanism leading to ANM in a system with one hole. Far from the tracer, the hole diffuses symmetrically and will at some point reach a site neighboring the tracer, say to its right. In that case, the tracer may either hop forward (A) or exchange positions with the next bath particle (B). Option (B) eventually brings the tracer back to its starting state shifted one site to the right and is therefore the strategy to make progress to the right. On the other hand, after another exchange step option (A) also brings the tracer back to its starting position, but this time shifted one step to the left. For $p q' > q p'$ the preferred route is (A), which leads to ${v_\text{tr}} < 0$.}
\label{fig:mechanm}
\end{figure}
\section{Brownian hard disks in a narrow channel}
\label{section:hd}
Motivated by the lattice model of a driven tracer presented in the preceding section, we take a small step toward a more realistic system and consider
a gas of hard disks in a planar narrow channel. Even though the fluctuation-dissipation relation forbids ANM in linear response, the gas could in principle exhibit ANM at large driving field. The channel is oriented parallel to the
$x$ axis with the origin of the coordinate system in its center. It is periodic in
$x$ with periodicity $L_x$. The channel width $L_y' = 2 \sigma + \epsilon $ is
chosen to allow passing and overtaking of the particles with diameter $\sigma$. Thus, $\epsilon > 0$
is assumed.
In this channel there are $N-1$ neutral particles with mass $m_k$, position $\mathbf{r}_k = (x_k,y_k)$,
and velocity $\mathbf {v}_k = (v_{k,x},v_{k,y})$, $k \in \{1,2,\cdots,N-1\}$. In addition, there is a
tracer particle with index $k = 0$, which is driven by a homogeneous force ${\bf F} = (F,0)$
parallel to the channel axis.
The dynamics of all disks is assumed to follow an underdamped Langevin equation,
\begin{eqnarray}
\frac{ d \mathbf{r}_k}{dt} & = &\mathbf{v}_k \label{eom1} \\
m_k \frac{ d \mathbf{v}_k}{dt} & = &
\mathbf{F} \delta_{k,0} -\gamma \mathbf{v}_k + \sqrt{ 2 \gamma k_B T}\, \boldsymbol{\xi}_k + \{\mbox{coll.}\}, \label{eom2}
\end{eqnarray}
for $ k \in \{0,1,\cdots,N-1 \}$, where $\{\mbox{coll.}\}$ stands for the hard-core collision terms. As usual, $\boldsymbol{\xi}_k$ is a delta-correlated Gaussian white noise,
$\delta_{k,0} $ is unity for $k = 0$ and zero otherwise, $k_B$ is the
Boltzmann constant, and $T$ is the temperature of the bath.
The stochastic equations of motion are integrated to first order in the time step $\Delta t$ according to the updating formulas of Gillespie~\cite{gillespie1996a, gillespie1996b}. Reduced units are used
throughout, for which the mass of the tracer, $m_0$, the particle diameter $\sigma$ and the
energy $k_B T$ are unity. In these units, the Langevin friction parameter $\gamma$, which
determines the noise strength, is set to $2$ and the time step $\Delta t$ to $10^{-3}$. In all simulations below, the following parameters are
used: channel length $L_x = 300$, total number of particles $N = 200$, and tracer mass $m_0 = 1.0$. The
masses $m_k \equiv m$ of the neutral particles $k = 1,\cdots, 199$ and the driving force $F$ are indicated
where needed.
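A minimal sketch of such a first-order update, for a single driven degree of freedom along the channel and with the hard-disk collisions and thermal walls omitted, reads as follows (the function names are ours, chosen for illustration):

```python
import math
import random

def langevin_step(x, v, F, m, gamma, kT, dt, rng):
    """One first-order step of m dv = (F - gamma*v) dt + sqrt(2*gamma*kT) dW."""
    noise = math.sqrt(2.0 * gamma * kT * dt) * rng.gauss(0.0, 1.0)
    v_new = v + ((F - gamma * v) * dt + noise) / m
    x_new = x + v * dt
    return x_new, v_new

def mean_drift(F, m=1.0, gamma=2.0, kT=1.0, dt=1e-3, steps=400_000, seed=0):
    """Time-averaged velocity of a single driven particle (no collisions)."""
    rng = random.Random(seed)
    x = v = 0.0
    total = 0.0
    for _ in range(steps):
        x, v = langevin_step(x, v, F, m, gamma, kT, dt, rng)
        total += v
    return total / steps
```

Without collisions the stationary drift is simply $F/\gamma$; it is the interactions with the bath particles and with the walls that produce the NDM discussed below.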
There are two kinds of collisions, namely collisions between particles and collisions of a particle with a
long hard boundary of the channel. Particle-particle collisions are strictly elastic. The long channel boundaries, however,
are thermal van Beijeren walls~\cite{vanbeijeren2014}, which re-emit colliding particles in equilibrium with the
boundary temperature $T_b$. The latter is taken to agree with the bath temperature, $T_b = T$.
Since in such a narrow channel the long thermal walls are of significant importance for the
non-equilibrium transport, a short description of the van Beijeren walls used here is given in
Appendix~\ref{section:appvbthermostat}.
The dependence of the mean tracer speed $\langle v_0 \rangle$ on the drive $F$ is shown in Fig.~\ref{NDM}
for a channel of width $L_y' = 2.6$. The
four curves are for different masses of the neutral particles as indicated by the labels. For low $F$ the velocity increases almost linearly with $F$ (Ohm's law). For
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=0.5\textwidth]{v_vs_F_Lx_300_Ly_2_6.eps}
\end{center}
\caption{\small Dependence of the mean tracer velocity on the homogeneous
force $F$ for channels of width $L_y' = 2.6$.
The masses of the neutral
particles differ for the curves as indicated by the labels.}
\label{NDM}
\end{figure}
larger fields, however, the curves bend over and reach a regime with a negative slope indicating
negative differential mobility.
This behavior is most prominent for $2.3 < L_y' < 2.8 $ and deteriorates quickly for broader channels.
This is demonstrated by a comparison of the
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=0.5\textwidth]{v_vs_F_Lx_300_Ly_2_8.eps} \\
\includegraphics[width=0.5\textwidth]{v_vs_F_Lx_300_Ly_2_9.eps}
\end{center}
\caption{\small Dependence of the mean tracer velocity on the
driving force $F$ for a channel width $L_y' = 2.8$ (top panel) and
$2.9 $ (bottom panel). The masses of the neutral
particles differ from curve to curve as indicated by the labels.}
\label{NDM1}
\end{figure}
top and bottom panels of Fig.~\ref{NDM1} for $L_y' = 2.8$ and $2.9$, respectively.
For $L_y' = 2.8$, one observes very strong negative mobility.
But an increase of the channel width by a moderate amount to $2.9$ gives a totally
different picture. The slopes of the characteristic curves always remain positive.
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=0.5\textwidth]{f_2_7_569.eps}
\end{center}
\caption{\small Instantaneous configuration in a small neighborhood of the tracer (red disk).
The neutral particles (green disks) accumulate in front of the tracer and
form crystal-like structures, which do not dissolve easily and slow down the tracer.
The channel width $L_y' = 2.7$.
The masses of the tracer and of the neutral particles are set to unity,
the driving force $F = 70$. }
\label{crystal}
\end{figure}
The origin of this strange and unexpected behavior may be understood by inspecting a snapshot of the particle configuration
in the neighborhood of the tracer, see Fig.~\ref{crystal}. The (green) neutral particles accumulate in front of the tracer, which drifts in the direction of $F$. The particles in front condense into crystal-like structures, which do not dissolve easily and which become longer and more stable with increasing force $F$. In the limit of very large $F$ all neutral particles are included in this cluster, and the average speed of the tracer becomes minimal but still remains positive. The transition from the Ohmic regime for small $F$ to this nearly blocked transport for large $F$ is characterized by the negative differential mobility mentioned above.
For a channel width larger than $2.8$, the tracer and the neutral particles have a tendency
to stay closer to one of the long channel boundaries than to the channel center. If this happens, the tracer
has no difficulty passing other neutral particles
close to the opposite boundary, which, consequently, enhances its speed
and suppresses NDM. Very narrow channels, on the other hand, geometrically do not favor the nucleation of
crystal-like clusters in front of the tracer, which could block its advance. Consequently, NDM is
also not observed in this limit.
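As a purely illustrative sketch (not a fit to the simulation data), a characteristic curve of the form $v(F)=\mu F/[1+(F/F_*)^2]$ reproduces the qualitative shape seen in Figs.~\ref{NDM} and \ref{NDM1}: an Ohmic rise for $F\ll F_*$ followed by a branch of negative differential mobility for $F>F_*$. The mobility $\mu$ and the crossover scale $F_*$ below are arbitrary placeholders:

```python
def drift_velocity(F, mobility=1.0, F_star=20.0):
    """Toy characteristic curve: Ohmic rise at small F, suppression by
    cluster formation at large F (illustrative form, not a fit)."""
    return mobility * F / (1.0 + (F / F_star) ** 2)
```

The maximum of this toy curve sits at $F=F_*$; beyond it the slope is negative, mimicking NDM.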
At this stage a few comments are in order:
i) We have stressed above the importance of the boundary collisions for the nonlinear transport.
We have verified that NDM also takes place
- albeit for stronger forces $F$ - if the thermal walls are replaced by simple elastically-reflecting
boundaries. Thus, this effect appears to be very robust. For small $F$, however, the nature of this boundary becomes unimportant~\cite{cividini_m_p2017}.
ii) In our quest for absolute negative mobility, also modifications of the equations of motion (\ref{eom2}) were investigated.
In one attempt, an attractive harmonic force acting on the $y$ coordinate of the tracer was added.
This increases its position probability in the center of the channel. In another, also a short-range force between
the tracer and bath particles was added, which was designed to enhance the passing probability
of these particles when close by. None of these attempts has indicated ANM.
Unfortunately, such modifications enlarge the parameter space enormously, and it has not yet been
explored to any significant extent.
\section{Conclusion}
\label{section:ccl}
In this work we exhibited two instances of negative response. The first one features ANM around an equilibrium state and occurs in a simple lattice model. We are able to predict this phenomenon analytically for small biases and to shed light on the underlying mechanism in very dense systems. Keeping the same spirit, we then probed a more realistic system composed of hard disks in a channel. No ANM was found in that case; however, we showed that a ``crystallization'' of the neutral Brownian particles at large biases can lead to NDM.
It is natural to ask which kind of modification would allow ANM in the hard-disks model in a way similar to the lattice model. Since the drive is homogeneous, the model as studied in this work cannot be expected to exhibit ANM in linear response around equilibrium. However, in principle it could exhibit ANM at large drive, something which we have not observed. Extending the model, \textit{e.g.} by introducing a $y$-dependent force or an extra potential, would introduce a second driving field that could reproduce the behavior observed in the lattice model.
Emulating the exchange move in the hard-disks system is however not such an easy task. Other variants also come to mind, like changing the masses or the radii of the particles, not to mention their shapes. The parameter space being extremely large, we leave this question open for future research.
\section*{Acknowledgments}
We thank M.~Aizenmann, H.~van Beijeren, B.~Derrida, Y.~Kafri, A.~Kundu, S.N.~Majumdar and A.~Miron for interesting discussions about this problem. One of us (HAP) wants to acknowledge the hospitality and support of the Weizmann Institute of Science, where parts of the work reported
here have been performed. The support of the Israel Science Foundation (ISF) is gratefully acknowledged. The molecular dynamics simulations were carried out on the Vienna Scientific Cluster (VSC).
We are grateful for the generous allocation of computer resources.
\section{Introduction}
Among the different semileptonic processes, muon capture
is one
of the weak observables that, together with $\beta$-decay, offers a rich set of experimental data collected
over the last fifty years. Several works have focused on establishing
the universal $V-A$ character of nuclear muon capture, the role of
induced currents, second-class currents, and the nonexistence of $V+A$
interactions. It is known that the experimental value of the
induced pseudoscalar coupling $g_{\sss P}$ is the least known
of the four constants ($g_{\sss V}, g_{\sss A}, g_{\sss M},g_{\sss P}$)
defining the weak nucleon
current.
Its size is dictated by chiral symmetry arguments,
and its measurement represents an important test of quantum
chromodynamics at low energies~\cite{Fea97}. During the past two decades a large
body of new data relevant to the coupling $ g_{\sss P}$ has been accumulated
from measurements of radiative and nonradiative
muon capture on targets ranging from $^{3}$He to complex nuclei. Only
transitions to unnatural parity states depend on $ g_{\sss P}$, as can be seen
from Eq. \rf{2.5}.
A summary of references on these issues is given in the review papers of Refs.~\cite{Mea01,Gor04,Gor06}.
Simultaneously, the muon capture processes have been used to scrutinize the
nuclear structure models, since they provide a testing ground for wave functions and, indirectly,
for the interactions that generate them. Being the momentum transfer of the order of the
muon mass $\mass_\mu= 105.6$ MeV, the phase space and the nuclear response favor
lower nuclear excitation energies, and thus the transitions to nuclear
states in the giant resonance region are the dominate ones.
We will cite only a few of them. Most of these works were done within the shell model (SM)
framework~\cite{Gor06,Hax90,War94,Vol00,Aue02}. Several studies were performed by
employing the random phase approximation (RPA)~\cite{Vol00,Kol94,Kol94a,Zin06}.
In the last of these works, where the total muon capture rates for a large number
of nuclei with $6<Z<94$ were evaluated, the authors claimed that an
important benchmark was obtained by introducing pairing correlations.
They did this ad hoc, by multiplying the one-body transition
matrix elements by the BCS occupation probabilities.
However, we know that
the quasiparticle RPA (QRPA) formalism
is a fully self-consistent procedure that describes both i) short-range
particle-particle ($pp$) pairing correlations, and ii) long-range particle-hole ($ph$)
correlations, the latter handled within the RPA. Quite recently, the relativistic
QRPA (RQRPA) was applied in the calculation of total muon capture rates
on a large set of nuclei from $^{12}$C to $^{244}$Pu, for which
experimental values are available~\cite{Mar09}.
In the present work we perform a systematic study of the muon
capture rates of nuclei with masses $12 \leq A \leq 56$ ($^{12}$C, $^{20}$Ne, $^{24}$Mg,
$^{28}$Si, $^{40}$Ar, $^{52}$Cr, $^{54}$Cr, $^{56}$Fe) within
the number projected QRPA (PQRPA).
The motivation for this investigation comes from the successful description
of weak observables in the triad
$\{{{^{12}{\rm B}},{^{12}{\rm C}},{^{12}{\rm N}}}\}$ within this model~\cite{Krm02,Sam11}.
There, it was shown that the projection procedure played an essential role
in properly accounting for the configuration mixing in the ground state wave
function of $^{12}$N.
The employment of PQRPA for the inclusive
$^{12}$C$(\nu_e,e^-)^{12}$N cross section, instead of the continuum
RPA (CRPA) used by the LSND collaboration in the analysis of
${\nu}_\mu \go{\nu}_e$ oscillations of the 1993-1995 data sample,
leads to an increased oscillation probability~\cite{Sam06}.
The charge-exchange PQRPA,
derived from the time-dependent variational principle, was used to
study the two-neutrino $\beta\beta$-decay amplitude $\M_{2\nu}$ in $^{76}$Ge~\cite{Krm93}.
In that work, the projection procedure was less important and the
QRPA and PQRPA yield qualitatively similar results for $\M_{2\nu}$.
The PQRPA was recently used to calculate the
$^{56}$Fe$(\nu_e,e^-)^{56}$Co cross section~\cite{Sam08}.
We will also take a brief look at the violation of the CVC by the Coulomb field, which
was worked out recently~\cite{Sam11}, and which appears in the first operator \rf{2.4}
for natural parity states
\footnote{ When the consequences of the CVC are not considered, as in Ref.~\cite{Gor04},
the factor $({\mass_\mu-\Delta E_{\rm
Coul}-E_B^\mu})/{E_\nu}$ in this relation goes to unity.}.
This effect is expected to be tiny for the nuclei studied here, since $\Delta E_{\rm Coul}$ is relatively small in comparison
with $\mass_\mu$; it goes from $3.8$ MeV in $^{12}$C to $9.8$ MeV in $^{56}$Fe.
\section{$\mu$-capture rates formalism}
When negative muons pass through matter, they can
be captured into high-lying atomic orbitals. From there they then
quickly cascade down into the $1S$ orbit with binding energy $E_B^\mu$, where two competing
processes occur: one is ordinary decay $\mu^-\go
e^-+\nu_\mu+{\tilde \nu}_e$ with characteristic free lifetime $2.197\times 10^6$ sec, and the other is (weak) capture by the nucleus
$\mu^- + (Z, A)\go (Z-1, A)+ \nu_\mu$. The latter, naively
expected to scale with $Z$, is drastically enhanced by an additional factor of
$Z^3$, originating from the square of the atomic wave function $\phi_{1S}$ evaluated at
the origin [2]. Thus, its rate is roughly proportional to $Z^4$ and dominates decay at
large $Z$. This dominance is however significantly diminished by the gradual decrease of the
effective-charge correction factor $\R(Z)$ \cite{Mar09,Wal04}.
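The $Z^3$ growth of $|\phi_{1S}(0)|^2$, and hence the naive $Z^4$ scaling of the capture rate, follow from the hydrogen-like ground-state density at the origin, $|\phi_{1S}(0)|^2=(Z\alpha \mass_\mu)^3/\pi$ (point nucleus, no reduced-mass or finite-size corrections). A minimal numerical check, taking $\R(Z)\equiv 1$:

```python
import math

ALPHA = 1.0 / 137.036   # fine-structure constant
M_MU = 105.6            # muon mass in MeV (natural units)

def phi_1s_sq_origin(Z, m=M_MU):
    """Hydrogen-like 1S probability density at the origin (point nucleus,
    no reduced-mass correction): |phi(0)|^2 = (Z alpha m)^3 / pi."""
    return (Z * ALPHA * m) ** 3 / math.pi

def naive_capture_scaling(Z):
    """Naive capture rate ~ Z protons x |phi(0)|^2, i.e. proportional to Z^4."""
    return Z * phi_1s_sq_origin(Z)
```

In a real nucleus the finite nuclear size and screening, encoded in $\R(Z)$, progressively soften this $Z^4$ growth.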
Then
the muon capture rate from the ground state in the initial nucleus $(Z, A)$ to the
state ${\sf J}^\pi_n$ in the final nucleus $(Z-1, A)$
reads
\br
\Lambda({\sf J}^\pi_n)&=&\frac{E_\nu^2}{2\pi}|\phi_{1S}|^2\R(Z)
\T_{\Lambda}(E_\nu,{\sf J}^\pi_n), \label{2.1}\er
where
\br
E_\nu\equiv\kappa=\mass_\mu-(\Mass_n-\Mass_p)-E_B^\mu-\w_{{\sf J}^\pi_n}
\label{2.2}\er
is the neutrino energy, and \br
\T_{\Lambda}(E_\nu,{\sf J}^\pi_n)&=&4\pi G^2
\left[|\Bra{{\sf J}^\pi_n}{\sf O}_{\emptyset{ {\sf J}}}(E_\nu)-
{\sf O}_{0{ {\sf J}}}(E_\nu)\Ket{0^+}|^2+2|\Bra{{\sf J}^\pi_n}{\sf O}_{-1{ {\sf J}}}(E_\nu)
\Ket{0^+}|^2\right],
\label{2.3}\er
is the transition probability, with $G=(3.04545\pm 0.00006){\times} 10^{-12}$ (in natural units) being the Fermi
coupling constant.
The nuclear operators are:
\br
{\sf O}_{\emptyset{\sf J}}-{\sf O}_{0, \sf J}&=& \gV\frac{\mass_\mu-\Delta E_{\rm
Coul}-E_B^\mu}{E_\nu}\M^{\sss V}_{\sf J},
\nn\\
{\sf O}_{-1{\sf J}} &=&-(\gA +\gw){\M}^{\sss A,I}_{-1{\sf J}}
+\gV \M^{\sss V,R}_{-1\sf J}, \label{2.4}\er
for {\it natural parity states} ($\pi=(-)^J$,
i.e., $J^\pi=0^+,1^-,2^+,3^-,\cdots$),
and
\br
{\sf O}_{\emptyset{\sf J}}-{\sf O}_{0 \sf J}&=&\gA\M^{\sss A}_{\sf J}
+ \left(\gA+\ga-\gp\right)\M^{\sss A}_{0{\sf J}},
\nn\\
{\sf O}_{-1{\sf J}} &=&-(\gA +\gw)\M^{\sss A,R}_{-1\sf J}
-\gV{\M}^{\sss V,I}_{-1{\sf J}},
\label{2.5}\er
for {\it unnatural parity states} ($\pi=(-)^{J+1}$, i.e., $J^\pi=0^-,1^+,2^-,3^+,\cdots$).
The elementary operators are:
\br
\M^{\sss V}_{\sf J}=j_{\sf J}(\rho) Y_{{\sf J}}(\hat{\rb})&,&
\M^{\sss V}_{{m\sf J}}={\rm M}^{-1}\sum_{{\sf L}\ge 0}i^{ {\sf J-L}-1}
F_{{{\sf LJ}m}}j_{\sf L}(\rho)[ Y_{\sf L}(\hat{\rb})\otimes\mbn]_{{\sf J}},
\nn\\
\M^{\sss A}_{\sf J}=
{\rm M}^{-1}j_{\sf J}(\rho)Y_{\sf J}(\hat{\rb})(\mbs\cdot\mbn)&,&
\M^{\sss A}_{{m\sf J}}=\sum_{{\sf L}\ge 0}i^{ {\sf J-L}-1}
F_{{{\sf LJ}m}}j_{\sf L}(\rho)
\left[Y_{{\sf L}}(\hat{\rb})\otimes{\mbs}\right]_{{\sf J}}
\label{2.6}\er
where $F_{{\sf LJ}m}=(-) ^{1+ m}(1,-m{\sf J}m|{\sf L}0)$
is a Clebsch-Gordan coefficient, $\rho=\absk r$, and the superscripts
$ R$, and $I$ in \rf{2.4} and \rf{2.5} stand for real and imaginary
pieces of the operators \rf{2.6}. Moreover,
\br
\Delta E_{\rm Coul}\cong \frac{6e^2Z}{5R} \cong 1.45 ZA^{-1/3}~~\mbox{MeV},~~~~
E_B^\mu=(eZ)^2\frac{\mass_\mu}{2}\cong 2.66\times 10^{-5}Z^2{\mass_\mu},
\label{2.7}\er
and
\be
\ga=\gA\frac{E_\nu}{2\Mass},~~
\gw=(\gV+\gM)\frac{E_\nu}{2\Mass};~~\gp=\gP\frac{E_\nu}{2\Mass},
\label{2.8}\ee
with $g_{\sss V}$, $g_{\sss A}$, $g_{\sss M}$, and $g_{\sss P}$, being the effective
vector, axial-vector, weak magnetism, and pseudo-scalar
coupling constants, respectively.
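The estimates in Eq.~\rf{2.7} can be checked numerically against the values quoted in the text ($\Delta E_{\rm Coul}=3.8$ MeV for $^{12}$C and $9.8$ MeV for $^{56}$Fe). A short Python check:

```python
M_MU = 105.6  # muon mass in MeV

def delta_E_coul(Z, A):
    """Coulomb energy shift of Eq. (2.7), in MeV: 1.45 Z A^(-1/3)."""
    return 1.45 * Z * A ** (-1.0 / 3.0)

def muon_binding(Z, m_mu=M_MU):
    """1S muon binding energy of Eq. (2.7), in MeV: 2.66e-5 Z^2 m_mu."""
    return 2.66e-5 * Z ** 2 * m_mu
```

The same two formulas also reproduce the combination $\Delta E_{\rm Coul}+E_B^\mu$ quoted later for $^{56}$Fe ($\cong 11.7$ MeV) and $^{208}$Pb ($\cong 39.0$ MeV).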
We adopt
\br g_{\sss V}&=&1, ~~~g_{\sss A}=1.135 , ~g_{\sss M}=3.70,
~g_{\sss P}=g_{\sss A}\frac{2\Mass \mass_\mu
}{k^{2}+\mass_\pi^2}\cong 6.7, \label{2.9}\er where the value for
$g_{\sss P}$ comes from the PCAC, pion-pole dominance and the
Goldberger-Treiman relation~\cite{Gol58}, and for $g_{\sss A}$ we
use the same value as in Ref.~\cite{Mar09}.
From Eqs. (A6) and (A7) in Ref.~\cite{Sam11}
one sees that $g_{\sss P}$ is contained in
axial-vector pieces of both operators ${\sf O}_{\emptyset{\sf
J}}$ (temporal) and ${\sf O}_{0, \sf J}$ (spatial). They
contribute destructively, the second one being dominant.
In Ref. \cite{Mar09} $g_{\sss P}$ appears only in the temporal operator.
However, after making use of the energy conservation condition
\rf{2.2}, \ie $\kappa\cong \mass_\mu+k_{\emptyset}$
($k_{\emptyset}=-\w_{{\sf J}^\pi_n}$) one ends up with the same result for
${\sf O}_{\emptyset{\sf J}}-{\sf O}_{0\sf J}$.
The $0^+\lra0^-$ transitions are determined by two nuclear
matrix elements only: $\M^{\sss A}_{\sf 0}$ and $\M^{\sss A}_{0{\sf
0}}$, as can be seen from the first relation in \rf{2.5}. As such
they are
the most appropriate to extract the magnitude of $g_{\sss P}$ from
the muon capture experiments.
In fact, studies
of the $^{16}{\rm O}(0^+_1) \go ^{16}{\rm N}(0^-_1)$ transition
within large-basis SM calculations have yielded values of $g_{\sss P}= 6-9$
\cite{Hax90}, and $g_{\sss P}=7.5\pm0.5$
\cite{War94} that are consistent with the estimate \rf{2.9} as well as with
theoretical prediction $g_{\sss P}=8.2$ from
chiral symmetry arguments \cite{Fea97}.
More recently, Gorringe~\cite{Gor06}, in a SM study of the muon capture
$^{40}{\rm Ca}(0^+_1) \go ^{40}{\rm K}(0^-_1)$, extracted from the experimental result
for $\Lambda$ the values $g_{\sss P}=14.3^{+ 1.8}_{-1.6}$ and $g_{\sss P}=10.3^{+ 2.1}_{-1.9}$.
\section{Numerical results}
For the set of nuclei discussed here we have adopted the single-particle energies (s.p.e.)
from the self-consistent calculation performed by Marketin
\etal~\cite{Mar09} within the relativistic Hartree-Bogoliubov model
(RHB), using effective Lagrangians with density-dependent
meson-nucleon couplings and DD-ME2 parametrization.
The residual interaction is approximated by the $\delta$-force
(in MeV fm$^3$)
\[
V=-4 \pi \left(v_sP_s+v_tP_t\right) \delta(r),
\]
with singlet ($v_s$) and triplet ($v_t$) coupling constants
that are different in the $ph$, $pp$, and pairing
channels.
The proton and neutron pairing parameters $v_s^{pair}(p)$ and
$v_s^{pair}(n)$, used in solving the BCS and PBCS equations,
were determined from the experimental data by the adjusting
procedure described in Ref.~\cite{Hir90}.
For the parameters in the $ph$ channel we employ the values
$v_s^{ph}=27$ and $v_t^{ph}=64$, which were obtained in
a systematic study of the GT resonances~\cite{Krm94}.
The ratio $t={v_t^{pp}}/{v_s^{pair}}$ was considered as a free
parameter within the $pp$ channel. It was found~\cite{Kor02} that
the muon capture, just like the $\b\b$-decay, probes the final leg
of a $\b\b$-transition and as such strongly depends on the
strength of the $pp$ interaction. Even worse, the QRPA model
collapses as a whole in the physical region of
$t$~\cite{Krm93,Krm94,Krm93a}.
Yet, the distinction
between the initial and final legs in the $\b\b$-decay only makes
sense in nuclei that possess an appreciable neutron excess, which is not
the case for the nuclei under discussion, where $N\cong Z$. Moreover, the results of
the PQRPA calculations in $^{12}$C, displayed in Fig. 5 of
Ref.~\cite{Krm05}, suggest that the choice
$t=0$ could be appropriate for the description of $N\cong Z$ nuclei.
Therefore, this value of the $pp$ coupling strength is adopted here.
\begin{figure}
\vspace{-3.cm}
\begin{center}
\includegraphics[width=9.cm,height=12.cm]{grafico3a.eps}
\end{center}
\vspace{-3.cm} \caption{\label{figure1}(Color online)
Ratios of theoretical to experimental inclusive muon capture rates for different
nuclear models, as a
function of the mass number $A$. The present QRPA and PQRPA results, as well as
the RQRPA calculation~\cite{Mar09},
were obtained with $\gA=1.135$, while the
RPA+BCS model~\cite{Zin06} used the unquenched value $\gA=1.26$ for all multipole
operators, except for the GT ones, where it was reduced to $\gA\sim1$.}
\end{figure}
Ratios of theoretical to experimental inclusive muon capture rates for different
nuclear models are exhibited in Fig. \ref{figure1}. It is self-evident that the number
projection plays an important role in light nuclei with $A \lsim 30$, making the
PQRPA agree better with the data than the plain QRPA. On the other hand, it is difficult
to judge whether our estimates are better or worse than the previous
ones~\cite{Zin06,Mar09}.
We have found that the consequences of the violation of the CVC by the Coulomb
potential~\cite{Sam11} are very tiny for the nuclei considered here. In fact,
the major effect appears in $^{56}$Fe, where the total muon capture
is reduced from $\Lambda=4260 \times 10^3$~s$^{-1}$ to $\Lambda=4056 \times 10^3$~s$^{-1}$.
\begin {table}[h]
\begin{center}
\caption{\label{table1} Energies (in units of MeV)
and exclusive muon capture rates (in units of $10^3$~s$^{-1}$) for
the bound excited states in $^{12}$B.
Besides the present PQRPA result, we also show a previous one~\cite{Krm02},
as well as those evaluated within the SM~\cite{Aue02}, and the RPA~\cite{Kol94,Kol94a}.}
\newcommand{\cc}[1]{\multicolumn{1}{c}{#1}}
\renewcommand{\tabcolsep}{0.3pc}
\renewcommand{\arraystretch}{1.2}
\bigskip
\begin{tabular}{| c c|c|c|c|c|c|}\hline
Model &${\sf J}^\pi_n$&$1^+_1$&$2^+_1$&$2^-_1$&$1^-_1$&$\Lambda_{inc}$\\\hline\hline
PQRPA &E & $0.00$& $0.43$& $6.33$& $6.83$&\\
&$\Lambda$ &$8.80$& $0.20$& $0.60$& $0.85$&$37$\\
\hline
PQRPA~\cite{Krm02}&E & $0.00$& $0.50$& $2.82$& $3.31$&\\
&$\Lambda$ & $6.50$& $0.16$& $0.18$& $0.51$&$40$\\
\hline
SM~\cite{Aue02} &E & $0.00$& $0.76$& $1.49$& $1.99$&\\
&$\Lambda$ & $6.0 $& $0.25$& $0.22$& $1.86$&\\\hline
RPA {\cite{Kol94,Kol94a}} &$\Lambda$&$25.4~(22.8)$&$\leq
10^{-3}$&$0.04~(0.02)$&$0.22~(0.74)$&\\
\hline
Exp.~\cite{Mea01,Sto02}&E & $0.00$& $0.95$& $1.67$& $2.62$&\\
&$\Lambda$ & $6.00\pm 0.40$& $0.21\pm 0.10$& $0.18\pm
0.10$& $0.62\pm 0.20$&$38\pm 1$\\
\hline\hline
\end{tabular}
\end{center}
\end {table}
In the case of $^{12}$C we have at our disposal also the experimental data
for exclusive muon capture rates to bound excited states
${\sf J}^\pi_n=1^+_1,2^+_1,2^-_1$, and $1^-_1$
in $^{12}$B~\cite{Mea01,Sto02}. They have been discussed
previously in the framework of the PQRPA~\cite{Krm02,Krm05}, but for
the sake of completeness we show them again in Table \ref{table1}.
The most relevant point to highlight in this table is that, while both
PQRPA calculations of the inclusive muon capture rate agree fairly
well with the experiment, the corresponding exclusive rates are
very different in the two calculations.
In other words, the agreement between theory and data for the
inclusive muon capture does not guarantee the reliability of the model that is used.
\section{Final remarks}
We have shown that, when the capture of muons
is evaluated in the context of the QRPA, the conservation of the number of
particles is very important not only for carbon but for all light nuclei with $A < 30$.
The consequence of this is the superiority of the PQRPA over the QRPA in this
nuclear mass region, as can be seen from Fig. \ref{figure1}.
The violation of the CVC by the Coulomb field in this mass region is of minor importance, since in \rf{2.4}
$\Delta E_{\rm Coul}+E_B^\mu \cong 11.7$ MeV, which is small in comparison with $\mass_\mu$.
However, this effect could be quite relevant for medium and heavy nuclei studied in
Refs.~\cite{Gor06,Zin06}. For instance, for $^{208}$Pb one has $\Delta E_{\rm
Coul}+E_B^\mu\cong 39.0$ MeV, which implies a reduction
of the operator ${\sf O}_{\emptyset{\sf J}}-{\sf O}_{0, \sf J}$ for
natural parity states by a factor $0.37$, or equivalently that its contribution is only
$\sim 13\%$ of that when the Coulomb field is not considered.
We agree with the finding of Kortelainen and Suhonen~\cite{Kor02}
on the extreme sensitivity of the muon capture rates to the $pp$ coupling strength
when described within the QRPA, as well as on a possible collapse of
this approximation for the ${\sf J}^\pi_n=1^+_1$ state.
Yet, in our opinion the QRPA behaves in this way mainly in nuclei with a
large neutron excess, such as those analyzed in Refs.~\cite{Zin06,Mar09}.
It is clear that the RQRPA calculation~\cite{Mar09} is sensitive to the $pp$ coupling, while the
RPA+BCS model~\cite{Zin06} is not since it totally ignores the $pp$ interaction.
Finally, we conclude that the comparison between theory and data for the
inclusive muon capture is not a fully satisfactory test on a nuclear model.
The exclusive muon transitions are much more robust with respect to such a comparison.
\section*{Acknowledgements}
This work was partially supported by the Argentinean agency CONICET under
contract PIP 0377. A.R.S. and D.S.S. acknowledge the support of the Brazilian agency FAPESB and of
UESC, and thank Nils Paar for the values of the s.p.e.
used in this work. D.S.S. thanks CPqCTR, where the numerical calculations were
performed.
\section{Introduction}\label{sec1}
Planetary systems constitute a paradigm of classical N-body problems. It has long been known that a general N-body system with $N \ge 3$ is not integrable. \citet{arnold_1963} showed that a typical near-integrable Hamiltonian system (HS) with more than 2 degrees of freedom is topologically unstable, even for a negligible value of the perturbation. Thus, given a sufficiently long period of time, the actions in the phase space could diffuse from their initial values and lead to orbital instabilities. However, estimates of the instability time-scales are available only for extremely small perturbations \citep{neckhoroshev_1977, chirikov_1979, cincotta_2014}, for which they are exponentially large. General estimates of diffusion time-scales for low-to-moderate perturbations are still lacking.
In planetary systems, the diffusion timescale may be a strong function of the initial conditions, particularly in the vicinity of mean-motion resonances. Thus, how long a system can last until it is completely destroyed is an unsolved problem of great astronomical interest \citep{laskar_1989}.
In Hamiltonian systems, orbital instabilities (and, consequently, strong chaotic diffusion) are generated by the overlap of resonances \citep{wisdom_1980}, and planetary dynamics are no exception. Although
many works in recent times have tried to establish a relationship between chaos and instability \citep{marchal_saari_1975, marchal_bozis_1982, chambers_1996, smit_lissauer_2009, giuppone_morais_correia_2013, deck_etal_2013, ramos_etal_2015}, no general results have been so far obtained, particularly for the case $N > 2$.
As the number of detected exoplanets increased, so did their orbital diversity. Short-period planets with nearly circular orbits are supposed to have undergone large-scale orbital migration from beyond the snow line, where giant planets are known to form. Many of these short-period planets are so close to their parent star that tidal dissipation would have likely circularized their orbits \citep{marti_beauge_2015}. Thus, the current orbital parameters of such bodies do not provide a good indicator of their dynamical history.
On the other hand, planets in eccentric orbits are generally believed to have formed on nearly circular orbits and later evolved to their presently observed large eccentricities. Among the proposed mechanisms for producing large eccentricities are a passing binary star \citep{laughlin_adams_1998}, secular perturbations due to a distant stellar or planetary companion \citep{ford_etal_2000} and strong planet-planet scattering events \citep{rasio_ford_1996, weidenshilling_marzari_1996, juric_tremaine_2008, beauge_nesvorny_2012}.
Multi-resonant configurations are supposed to be a natural outcome of disk-driven planetary migration \citep{masset_snellgrove_2001, morbidelli_etal_2007ii,hands_dehnen_2014}, and their orbital features are not believed to have been affected by planetary instabilities such as planet-planet scattering or Lidov-Kozai resonance. Thus, their configurations should be more representative of the end product of the formation process, and thus indicative of the stability of ``dynamically quiet'' systems.
Among the population of resonant and near-resonant systems \citep{rivera_etal_2010, fabrycky_etal_2012, wang_etal_2012, marti_giuppone_beauge_2013, rowe_etal_2014}, a large number has been discovered by the \emph{Kepler} mission. However, some of these are still awaiting confirmation, and several key orbital parameters (including their masses) are not known. Thus, in order to perform a detailed dynamical analysis of resonant systems, it seems preferable to turn to those radial velocity detections in which the inclination of the orbital plane has been (at least qualitatively) estimated. One of the best choices is GJ-876, and will be used as our main target in our analysis of diffusion in extrasolar multi-resonant planetary systems.
The GJ-876 system contains, up to date, four confirmed planets orbiting an M-type central star ($M_{\bigstar}$ from 0.32 to 0.334 $M_{\odot}$ depending on author). The inner planet (GJ-876 d) is
very small, located very close to the star, and dynamically detached from the rest of the system. The three other planets are known to be in the vicinity of a Laplace-type resonance, and have been the subject of several investigations (e.g. \citet{rivera_etal_2010, baluev_2011, marti_giuppone_beauge_2013, batygin_holman_2015}).
A detailed dynamical analysis of this system was presented in \citet{marti_giuppone_beauge_2013}, where it was shown that the multi-resonant configuration displayed by GJ-876 is chaotic, albeit long-term stable. In that paper we presented a series of dynamical maps and found stability limits on the mass ratio of the outer planets, as well as precise boundaries on the mutual inclination of the system, inferring that the most likely dynamically relaxed configuration is the co-planar case. Most importantly, once it was acknowledged that the system is actually multi-resonant, we retrieved specific values for the angular parameters of the planets to ensure a better representation of the plane of initial conditions. In this way we were able to fix the initial angular variables so as to define a representative plane, obtained via dynamical considerations, where the Laplace resonance can easily be identified.
In this work we aim to give a qualitative picture of the different chaotic processes (regimes) that can be explored by the three-body resonant configuration depicted by the paradigmatic GJ-876 system. Through this, we want to quantify the variation of the actions of the system, associated with fundamental orbital parameters of the planets, by means of a realistic numerical computation of the diffusion coefficients.
\section{Chaotic diffusion}
\label{chaos0}
\subsection{Summary of resonant perturbation theory}
\label{chaos1}
In order to sketch the geometry of resonant dynamics in action space, following \citet{chirikov_1979} and \citet{cincotta_2002}, let $\bm{I}$ denote the $N$-dimensional action vector and $\bm{\theta}$ its conjugate canonical $N$-dimensional angle, and $H_0(\bm{I})$ the unperturbed integrable non-linear Hamiltonian. Then the frequency vector $\bm{\omega}(\bm{I})=\nabla_{\bm{I}}H_0$ is always normal to the unperturbed energy surface $H_0(\bm{I})=h$. The resonance condition $\bm{k}\cdot\bm{\omega}(\bm{I^r})=0$, where $\bm{k}$ is a non-zero
$N$-dimensional vector of integers and $\bm{I^r}$ the resonant action, leads to the resonance surface $\Sigma_{\bm{k}}$. Thus on any resonant torus, the resonant vector $\bm{k}$, is tangent to the energy surface.
Any perturbation to $H_0(\bm{I})$, $\varepsilon V(\bm{I},\bm{\theta})$, where $\varepsilon\ll 1$ and $V$ is an analytic function introduces variations in the unperturbed actions or global integrals. The latter can be Fourier expanded in the angular variables with coefficients that depend on the actions as:
$$\varepsilon V=\varepsilon\sum_{\bm{k}\ne 0}V_{\bm{k}}(\bm{I})\exp{(i\bm{k}\cdot\bm{\theta})}.$$
In the single-resonance formulation, for sufficiently small $\varepsilon$ and initial conditions such that the system is close to the resonance $\bm{m}\cdot\bm{\omega}(\bm{I^r})=0$, retaining only
the largest (real) term corresponding to the resonant phase, $\bm{m}\cdot\bm{\theta}$, and averaging out all the remaining ones, we get for $|\bm{I}-\bm{I^r}|\lesssim 2\sqrt{\varepsilon}$ the local Hamiltonian
\begin{equation}
H(\bm{I},\bm{\theta})=H_0(\bm{I})+\varepsilon V_{\bm{m}}(\bm{I})\cos(\bm{m}\cdot\bm{\theta}),
\label{eq1}
\end{equation}
and thus
\begin{equation}
\dot{\bm{I}}=-\frac{\partial H}{\partial\bm{\theta}}=\varepsilon \bm{m}V_{\bm{m}}(\bm{I})
\sin(\bm{m}\cdot\bm{\theta}).
\label{eq2}
\end{equation}
The above relation shows that the variation of $\bm{I}$ has the direction of the resonant vector $\bm{m}$, tangent to the energy surface.
Since the motion is one-dimensional, it is possible to introduce a canonical local change of coordinates (or local change of basis) around
$\bm{I^r}$:
$(\bm{I},\bm{\theta})\to(\bm{J},\bm{\psi})$ such that
$\psi_1=\bm{m}\cdot\bm{\theta}$,
and $\bm{I}=\bm{I^r}+\bm{m}J_1$, where
$J_1\lesssim |\bm{I}-\bm{I^r}|\sim\mathcal {O}(\sqrt{\varepsilon})$. Since the resonant Hamiltonian is cyclic in $\psi_2,\cdots, \psi_N$, we can neglect $J_2,\cdots,J_N$ and then,
keeping terms up to $J_1^2$,
it takes the well-known pendulum form
\begin{equation}
H_r(J_1,\psi_1)=\frac{J_1^2}{2M}+V_{\bm{m}}(\bm{I^r})\cos\psi_1
\label{eqHr}
\end{equation}
where
$$M^{-1}=\sum_{i,j}
m_{i}\left(\frac{\partial\omega_i}
{\partial I_j}\right)_{\bm{I^r}}\!\!m_{j},$$
is the inverse of the non-linear mass, assumed to be different from zero. All the $N-1$ actions $J_2,\cdots, J_N$ are local integrals of the motion whose numerical values should be zero for $\bm{I^r}$ to be an allowed value for the perturbed motion. While $J_1$ is the action component in the direction of $\bm{m}$, $J_2$ could be taken normal to the energy surface (in the direction of
$\bm{\omega}(\bm{I^r})\equiv\bm{\omega^r}$) and thus motion in $J_2$ could be ignored. The remaining $N-2$ components, $J_3,\cdots, J_N$ belong to the $N-2$ dimensional manifold, the diffusion manifold, defined
by the intersection of the energy and resonance surfaces.
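To make the pendulum reduction concrete, the following minimal sketch (in Python, with illustrative values $M=V=1$ that do not correspond to any physical system) integrates Eq. (\ref{eqHr}) with a symplectic leapfrog scheme; an orbit with energy below the separatrix value librates around the stable fixed point.

```python
import numpy as np

def integrate_pendulum(psi0, J0, M=1.0, V=1.0, dt=2e-3, n_steps=50_000):
    """Leapfrog (kick-drift-kick) integration of H = J^2/(2M) + V cos(psi)."""
    psi, J = psi0, J0
    traj = np.empty((n_steps, 2))
    for k in range(n_steps):
        J += 0.5 * dt * V * np.sin(psi)   # kick:  dJ/dt   = -dH/dpsi
        psi += dt * J / M                 # drift: dpsi/dt =  dH/dJ
        J += 0.5 * dt * V * np.sin(psi)
        traj[k] = psi, J
    return traj

def energy(psi, J, M=1.0, V=1.0):
    return J**2 / (2.0 * M) + V * np.cos(psi)

# initial condition with H below the separatrix value +V:
# libration around the stable point psi = pi
traj = integrate_pendulum(psi0=np.pi, J0=0.5)
E0 = energy(np.pi, 0.5)
```

Since the leapfrog map is symplectic, the pendulum energy is conserved to $\mathcal{O}(\mathrm{d}t^2)$, and for $H_r<V$ the resonant angle $\psi_1$ remains bounded.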
Now, let us discuss a crucial difference between HS with $N\le 2$ and $N>2$ degrees of freedom.
In low-dimensional non-degenerate HS, for instance $N=2$, the unperturbed
energy surface $H_0(I_1,I_2)=h$ is 1-dimensional, just a curve. The resonance surface
$m_1\omega_1(I_1,I_2)+m_2\omega_2(I_1,I_2)=0$ is also 1-dimensional. Therefore the intersection
of both the energy and resonance surfaces is a single point, $(I_1^r, I_2^r)$: a unique torus, the resonant torus. The motion then takes place along the resonant vector $\bm{m}$, tangent to the energy surface. Due to the dimensionality of the energy surface and of the invariant tori, any transition from one torus to another is only possible through all the intermediate tori between them; the motion under a single resonant perturbation is tangent to the energy surface (curve) and \emph{transverse} to the resonance surface (curve). Since the dense set of resonance surfaces do not intersect each other over the energy surface, large-scale chaos, and possibly diffusion, is only possible if the perturbation is large enough for nearby resonances to overlap. For very small perturbations, chaos is confined to the thin chaotic layers around the unperturbed separatrix of each resonance, and the motion is mostly stable.
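The overlap condition can be made quantitative directly from the pendulum reduction above: each resonance has half-width $\Delta J = 2\sqrt{\varepsilon V M}$ in the action, and large-scale chaos is expected when the half-widths of adjacent resonances exceed their separation. A minimal sketch of this Chirikov-style estimate (Python; all numerical values illustrative):

```python
import numpy as np

def half_width(eps, V, M):
    """Half-width of a pendulum resonance in action: 2 sqrt(eps V M)."""
    return 2.0 * np.sqrt(eps * V * M)

def resonances_overlap(eps, V1, V2, M, spacing):
    """Chirikov criterion: overlap when the sum of half-widths
    exceeds the action separation between the two resonance centres."""
    return half_width(eps, V1, M) + half_width(eps, V2, M) > spacing
```

With $V_1=V_2=M=1$ and an action spacing of $0.1$, overlap sets in near $\varepsilon \simeq 6\times 10^{-4}$.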
For $N$--dimensional HS with $N\ge 3$, the intersection of energy and resonance surfaces has dimension $N-2\ge 1$. Now it is clear that the set of all resonance surfaces intersect over the whole energy surface, leading to the so-called \emph{Arnol'd web}. Focusing again on an isolated resonance, since the motion is confined to the energy surface and has the direction of $\bm{m}$ ($J_1$), there are $N-2$ additional directions where motion could proceed when considering the effects of the perturbing terms in the Fourier expansion of $\varepsilon V$ (besides the resonant one).
For instance, when $N=3$ the remaining direction could be taken \emph{along} the direction of the intersection of the energy and resonance surface. This additional direction for the motion corresponds to the third component of the local action $\bm{J}$, $J_3$. For $\varepsilon$ small enough and initial conditions such that $\bm{I}\approx\bm{I^r}$, retaining all (or at least the two largest) perturbing terms in the Fourier expansion (besides the resonant one), a slightly perturbed pendulum model is expected, with a thin chaotic layer instead of a smooth separatrix. And moreover, motion in $J_3$--\emph{along the resonance} could also take place.
It has been conjectured that any orbit lying in this thin chaotic layer might visit
the whole Arnol'd web \citep{arnold_1964}. Arnol'd showed the existence of motion along the chaotic layer of a given resonance in a rigorous way, for a rather simple near-integrable 3D Hamiltonian.
He proved that for a small enough perturbation it is possible to find a trajectory in the vicinity of the separatrix of a given resonance that connects two points separated by a finite distance, i.e. independent of the size of the perturbation but on a very long timescale. Arnol'd's proof rests on the existence of a chain of tori along the center of this resonance that provide a path for the orbit. If these tori are very close to each other, this orbit could transit over the chain. Since every torus in the chain is labeled by an action value, a large but finite variation of this action could take place. This mechanism, which permits motion along the resonance chaotic layer, is known as the \emph{Arnol'd Mechanism}, while the term \emph{Arnol'd diffusion} generally refers to a possibly global phase-space instability \citep{giorgilli_1990, lochak_1999, cincotta_2002}, that is any (chaotic) orbit might visit the full Arnol'd web in a finite time.
However the problem of how to extend Arnol'd mechanism to a generic Hamiltonian remains unsolved. One of the main difficulties is related to the construction of such a chain of tori.
Regardless of this severe limitation to understand Arnol'd diffusion as a global instability, it was largely assumed that Arnol'd diffusion does occur, and it is responsible for the chaotic mixing in relatively large regions of phase space. Nevertheless, in spite of the mathematical difficulties in dealing with this conjecture as a global instability, a local formulation shows that
exponentially large times are necessary in order to observe any appreciable variation of the unperturbed integrals. This suggests that Arnol'd diffusion should be irrelevant in actual systems.
On the other hand, in systems exhibiting a divided phase space where the chaotic component is relevant (and not only confined to the chaotic layers), the timescale for any diffusion (not Arnol'd diffusion) would be much shorter but still very long \citep[see for instance][]{chirikov_1997, giordano_cincotta_2004}, scaling as a power law in the perturbation parameter. In the limit of completely random motion, this timescale -- the inverse of the diffusion coefficient -- should go as $\sim\varepsilon^{-2}$. When resonance overlap takes place, any description such as the Arnol'd mechanism is no longer possible, since the connected resonance domains become almost completely chaotic and the required chain of tori does not exist. Therefore we must resort to numerical experiments to quantify any diffusion.
\subsection{Diffusion}
\label{chaos2}
In this section we discuss the so-called chaotic mixing. In terms of the planetary orbits, roughly speaking, chaotic mixing means that trajectories starting in a very small neighborhood of a given point in phase space will lose their memory of the initial conditions and, for large enough times, will appear uncorrelated. This expected ``random'' behavior can be described as a diffusion process in action space. In the limit of Brownian-type motion, the variance of any action grows linearly with time, and thus a local diffusion coefficient can be defined as the constant rate at which the variance changes with time.
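This definition can be tested on synthetic data in the Brownian limit; the sketch below (Python, with purely illustrative parameters) builds an ensemble of uncorrelated random walks in a single action and recovers the diffusion coefficient as the fitted rate of growth of the ensemble variance.

```python
import numpy as np

rng = np.random.default_rng(1)
n_walkers, n_steps, step = 4000, 1000, 1e-3

# ensemble of uncorrelated random walks in one action variable
I = np.cumsum(step * rng.standard_normal((n_steps, n_walkers)), axis=0)

var = I.var(axis=1)                 # ensemble variance at each time
t = np.arange(1, n_steps + 1)
D_est = np.polyfit(t, var, 1)[0]    # fitted slope: rate of variance growth
```

For a true random walk the variance grows as $\sigma^2(t)=s^2 t$, so the fitted slope should recover $s^2=10^{-6}$ per step; in a realistic HS the growth is generally slower and masked by oscillations, which is precisely the point of the procedure described next.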
However, in any realistic HS the dynamical behavior is rather far from a completely random motion. Thus, in order to characterize and quantify diffusion we should proceed with numerical experiments. Assume we are dealing with a 3D HS, which can be described in the following action-angle variables: $(\bm{I},\bm{\vartheta})$. Perform a dynamical map with any chaos indicator over a large set of initial conditions, for instance taking a grid on the $(I_1, I_2)$ plane, and keeping fixed the values of $\vartheta_i=\vartheta_i^0, i=1,2,3,$ and $I_3=I_3^0$. Any chaos indicator will provide information about the local exponential divergence around any given point of the full phase space, in this case represented by the plane where we let the initial values of the actions vary, $(I_1, I_2)$.
With this dynamical information at hand, let us consider an ensemble of $n_p$ initial conditions in a small neighborhood of size $\sigma$ around a given point $(I_1^*, I_2^*)$ on the plane $(I_1^0,I_2^0)$, with the very same values for the remaining variables, $\vartheta_i=\vartheta_i^0, I_3=I_3^0$, and where the indicator reveals an unstable, chaotic behavior. We integrate the equations of motion for all the $n_p$ points in the ensemble. The space and time distribution of all the points in $\sigma$ gives us information about the relevance of diffusion for that point. Moreover, we can compute the time evolution of the variance of the distributions of the two action components.
As already shown in \citet{cincotta_2014}, the above-mentioned variance computation should be done after performing a sequence of canonical transformations to a ``good'' set of variables. Indeed, in that work it was shown that using the original set of actions, particularly when the perturbation is small, stable oscillations could hide the slow secular growth of the variance with time, and thus the local diffusion coefficient would be largely underestimated. However, this normal form computation to get the appropriate set of variables is not easy to perform in general, and since we will not deal with very small perturbations, we adopt an alternative way \citep{guzzo_etal_2006,lega_etal_2003} to reduce somewhat the effect of oscillations in the drift. The above-mentioned procedure to measure the diffusion in the action plane amounts to considering a section of phase space such that all initial conditions starting in $\sigma$ should satisfy at a given time $t$: $$|\vartheta_1(t)-\vartheta_1^0|+|\vartheta_2(t)-\vartheta_2^0|+|\vartheta_3(t)-\vartheta_3^0|< \delta_1,\quad
|I_3(t)-I_3^0|<\delta_2,$$
with $\delta_1,\delta_2\ll 1.$ This procedure, though computationally expensive, effectively
reduces the presence of fast periodic oscillations in the time evolution of the action variances.
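The section condition above is straightforward to implement; a minimal sketch (Python; the tolerance values are placeholders) with proper angle wrapping:

```python
import numpy as np

def wrap(dtheta):
    """Reduce angular differences to the interval (-pi, pi]."""
    return (np.asarray(dtheta) + np.pi) % (2.0 * np.pi) - np.pi

def on_section(theta, theta0, I3, I30, delta1=1e-1, delta2=1e-3):
    """True when the orbit is back near its initial angles and near I3^0."""
    close_angles = np.sum(np.abs(wrap(np.asarray(theta) - np.asarray(theta0)))) < delta1
    return close_angles and abs(I3 - I30) < delta2
```

Only the phase-space points satisfying both inequalities are retained when accumulating the action distributions.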
\section{Hamiltonian Model for a Three-Body Resonance} \label{model}
Let us consider a system of three planets (masses $m_1$, $m_2$ and $m_3$) orbiting a star $m_0$ under their mutual gravitational forces. The indices are chosen such that the initial semi-major axes satisfy $a_1 < a_2 < a_3$.
A canonical set of variables introduced by Poincar\'e allows us to write the Hamiltonian for the four-body problem. Following \citet{laskar_robutel_1995}, let $\mathbf{r}_{i}$ be the astrocentric positions of the planets, and $\mathbf{p}_{i}$ be the barycentric momentum vectors. The pairs $(\mathbf{r}_{i},\mathbf{p}_{i})$ form a canonical set of variables with the Hamiltonian given by:
\begin{equation}
H = H_{0} + \st{H}{dir} + \st{H}{kin}.
\label{hamil}
\end{equation}
Here $H_{0}$ is the keplerian part while the perturbations are given by the two remaining terms. $H_{\textrm{dir}}$ is the direct part, and $H_{\textrm{kin}}$ is the kinetic part of the Hamiltonian, each expressed in terms of the canonical $(\mathbf{p}_{i},\mathbf{r}_{i})$ variables as
\begin{equation}
\begin{split}
H_{0} =& \sum_{i=1}^{3}\left(\frac{p_{i}^{2}}{2\beta_{i}} - \frac{{\cal G}m_{0}m_{i}}{r_{i}}\right)\\
\st{H}{dir} =& -{\cal G} \sum_{1\le i<j\le 3}\frac{m_{i}m_{j}}{\Delta_{ij}}\\
\st{H}{kin} =& \sum_{1\le i<j\le 3}\frac{\mathbf{p}_{i}\cdot\mathbf{p}_{j}}{m_{0}},
\label{hamiltonian}
\end{split}
\end{equation}
where $\beta_{i} = m_{0}m_{i}/(m_{0} + m_{i})$, $\Delta_{ij} = |\mathbf{r}_{i} - \mathbf{r}_{j}|$, and ${\cal G}$ denotes the gravitational constant. The first term of Eq. \eqref{hamil} defines the Keplerian motion of each planet around the star, while \st{H}{dir} and \st{H}{kin} represent the mutual interactions among the planets. The barycentric momenta $\mathbf{p}_{i}$ in the four-body problem are defined as
\begin{equation}
\mathbf{p}_{i} = \frac{m_{i}}{m_{T}}\left[ \dot{\mathbf{r}}_{i} \sum_{j \ne i} m_{j} - \sum_{j \ne i}m_{j} \dot{\mathbf{r}}_{j}\right],
\end{equation}
where $\dot{\mathbf{r}}_{i}$ are the time derivatives of the astrocentric positions and $m_{T}=\sum_{i=0}^{3}m_i$. Since we are assuming co-planar orbits, our system contains a total of six degrees of freedom.
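Equivalently, $\mathbf{p}_{i}=m_{i}\left(\dot{\mathbf{r}}_{i}-\sum_{j}m_{j}\dot{\mathbf{r}}_{j}/m_{T}\right)$, i.e. the astrocentric velocity corrected by the reflex motion of the star. The sketch below (Python; arbitrary demonstration masses and planar velocities, not a fit to any system) computes the momenta in this form and checks that, together with the star's recoil, the total barycentric momentum vanishes.

```python
import numpy as np

def barycentric_momenta(masses, rdot):
    """Planetary barycentric momenta from astrocentric velocities.
    masses = (m0, m1, m2, m3); rdot has shape (n_planets, dim)."""
    mp = np.asarray(masses[1:], dtype=float)        # planetary masses
    mT = masses[0] + mp.sum()
    drift = (mp[:, None] * rdot).sum(axis=0) / mT   # star's reflex velocity
    return mp[:, None] * (rdot - drift)

# demo with arbitrary masses (solar units) and planar velocities
masses = (1.0, 1e-3, 2e-3, 5e-5)
rng = np.random.default_rng(0)
rdot = rng.standard_normal((3, 2))
p = barycentric_momenta(masses, rdot)

# the star's barycentric momentum must balance the planets' total
mT = sum(masses)
p_star = -masses[0] * (np.asarray(masses[1:])[:, None] * rdot).sum(axis=0) / mT
```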
Performing a canonical transformation to the modified Delaunay variables, which for the planar case are given by
\begin{equation}
\begin{split}
& L_{j} = \beta_{j}\sqrt{\mu_{j}a_{j}} \\
& S_{j} = L_{j}(1-\sqrt{1 - e_{j}^{2}})
\end{split}
\label{2}
\end{equation}
with $\mu_{j}\!=\!{\cal G}(m_{0}+m_{j})$, the Keplerian part of the Hamiltonian is simply given by the expression:
\begin{equation}
H_{0} = -\sum_{i=1}^{3}\frac{\mu_{i}^{2}\beta_{i}^{3}}{2L_{i}^{2}}.
\label{keplerian}
\end{equation}
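As a quick numerical cross-check of Eqs. (\ref{2}) and (\ref{keplerian}), note that since $L_j^2=\beta_j^2\mu_j a_j$, the Keplerian energy per planet reduces to the elementary form $-\mu_j\beta_j/(2a_j)$. A minimal sketch (Python; units with ${\cal G}=4\pi^2$, i.e. AU, yr and solar masses, chosen here purely for illustration):

```python
import numpy as np

G = 4.0 * np.pi**2          # gravitational constant in AU^3 / (Msun yr^2)

def delaunay(a, e, m0, m):
    """Modified Delaunay actions L_j, S_j for the planar problem."""
    mu = G * (m0 + m)
    beta = m0 * m / (m0 + m)
    L = beta * np.sqrt(mu * a)
    S = L * (1.0 - np.sqrt(1.0 - e**2))
    return L, S

def H0_keplerian(a, m0, m):
    """Keplerian energy  -mu^2 beta^3 / (2 L^2)  of a single planet."""
    mu = G * (m0 + m)
    beta = m0 * m / (m0 + m)
    L, _ = delaunay(a, 0.0, m0, m)
    return -mu**2 * beta**3 / (2.0 * L**2)
```

The energy is independent of the eccentricity, as it must be for the Keplerian part, while $S_j$ vanishes for circular orbits.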
In the vicinity of a Laplace-type resonance, we introduce new angular variables in terms of the primary resonant angles for each of the single resonances:
\begin{equation}
\begin{split}
\sigma_{1} &= 2\lambda_{2} - \lambda_{1} - \varpi_{1}\\
\sigma_{2} &= 2\lambda_{3} - \lambda_{2} - \varpi_{2}\\
\Delta\varpi_{1} &= \varpi_{2} - \varpi_{1}\\
\Delta\varpi_{2} &= \varpi_{3} - \varpi_{2}.\\
\end{split}
\label{4}
\end{equation}
The resonant angle of the Laplace resonance may be written in terms of the mean longitudes as:
\begin{equation}
\phi_{lap} = \lambda_{1} - 3\lambda_{2} + 2\lambda_{3} .
\end{equation}
After averaging over the short-period terms, the resulting resonant Hamiltonian reduces to a system with four degrees of freedom.
\section{Dynamical Maps}
\subsection{Numerical Setup}
\label{sec4.1}
For all our numerical runs we used an N-body code based on a Bulirsch-Stoer integrator with a variable step-size to control the relative error ($E_{r}$) at each time-step, set to $E_{r} = 10^{-12}$.
We constructed a series of dynamical maps using a rectangular grid of initial conditions in the representative plane $(a_{3},e_{3})$. All other variables, as well as the planetary masses, were taken from Table \ref{table2}, which correspond to values of the angles that lead to minimum excursions in the eccentricities (see \citet{marti_giuppone_beauge_2013} for details).
\begin{table}
\centering
\begin{tabular}{ l c c c }
\\[1ex]
\hline\hline \\[-1.3ex]
\multicolumn{4}{c}{Orbital Parameters for the GJ-876 system} \\ [1ex]
\hline\\
{\bf Parameter} & {\bf Planet c} & {\bf Planet b} & {\bf Planet e} \\
\hline \\
$P$ (days) & $30.0881$ & $61.1166$ & $124.26$ \\
$m \, (\textrm{M}_{jup})$ & $0.7142$ & $2.2756$ & $0.0459$ \\
$a$ (AU) & $0.129590$ & $0.208317$ & $0.3343$ \\
$e$ & $0.25591$ & $0.0324$ & $0.055$ \\
$\varpi \, (^{\circ})$ & $0.0$ & $0.0$ & $180.0$ \\
$M \, (^{\circ})$ & $240.0$ & $120.0$ & $60.0$ \\
\hline
\end{tabular}
\\[1ex]
\caption{
Masses and orbital elements for the three planets of GJ-876 involved in the Laplace resonance. The values of the angular variables ($\varpi$ and $M$) were chosen to minimize the variations of the orbital elements over time, and lead to small-amplitude librations of the resonant angles. The $(a_3,e_3)$ values correspond to those obtained by the four-planet co-planar fit in \citet{rivera_etal_2010}.}
\label{table2}
\end{table}
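The tabulated elements can be checked for consistency with the resonant chain. Assuming $\lambda=\varpi+M$ for the initial angles, the short sketch below (Python) verifies that the period ratios are close to 2/1, that the Laplace combination of mean motions nearly cancels, and that the initial Laplace angle sits at the libration centre $\phi_{lap}=0$:

```python
import numpy as np

# orbital periods (days) and angles (deg) from the table
P = {'c': 30.0881, 'b': 61.1166, 'e': 124.26}
varpi = {'c': 0.0, 'b': 0.0, 'e': 180.0}
M = {'c': 240.0, 'b': 120.0, 'e': 60.0}

n = {k: 360.0 / p for k, p in P.items()}          # mean motions, deg/day
ratio_bc = P['b'] / P['c']
ratio_eb = P['e'] / P['b']
laplace_freq = n['c'] - 3.0 * n['b'] + 2.0 * n['e']

lam = {k: varpi[k] + M[k] for k in P}             # mean longitudes
phi_lap = (lam['c'] - 3.0 * lam['b'] + 2.0 * lam['e']) % 360.0
```

Both ratios are within a few percent of 2, and the frequency combination is below 1\% of $n_c$, consistent with deep resonant motion.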
The top frame of Figure \ref{fig1} reproduces the structure of the phase-space in the $(a_3,e_3)$ representative plane in the vicinity of the 2/1 mean-motion resonance (MMR) between $m_3$ and $m_2$. Black symbols correspond to the orbital fits of \citet{rivera_etal_2010} and \citet{baluev_2011}, each numerically integrated until it intersected the representative plane. The dynamical map was constructed with a $82 \times 82$ grid, and each initial condition was integrated for $5 \times 10^4$ years. The plot shows the value of $\Delta e_3$ obtained during this time-span, with a color code in the range $0.0<\Delta e_{3}<0.6$. The region associated with the 2/1 commensurability is clearly seen around $a_3 \simeq 0.335$ AU, while other resonances are also detected at larger semi-major axes. This plot is analogous to Figure 7 of \citet{marti_giuppone_beauge_2013}.
Initial conditions identified with red correspond to unstable orbits that lead to a disruption of the system within the integration time-span. Stable orbits in the vicinity of the 2/1 MMR define a horse-shoe type region with eccentricity reaching up to $e_3 \simeq 0.1$. Close to the stability boundary, the values of $\Delta e_3$ are relatively large (of the order of $0.2$). We also identified, deep inside the resonance domain, a small region characterized by very low eccentricity variations.
\begin{figure}
\centering
\mbox{\includegraphics[width=8.0cm]{./representative_amplitudes+fits.eps}}
\caption{Top frame shows a $\Delta e_3$ dynamical map in the vicinity of the 2/1 MMR between $m_3$ and $m_2$ (corresponding to $a_3 \simeq 0.335$ AU). The middle plot shows the amplitude of libration of the primary resonant angle $\sigma_2$ of the two-body resonance, while the bottom graph shows the amplitude of libration of the Laplace resonance.}
\label{fig1}
\end{figure}
The two lower graphs show the semi-amplitude of libration of $\sigma_2$ (middle plot) and of the Laplace angle $\phi_{lap}$ (lower plot). Both show very similar behavior, indicating that practically all initial conditions within the 2/1 MMR also correspond to motion within the Laplace multi-planet resonance.
Moreover, the region within the Laplace resonance with $\Delta e_3 \sim 0$ corresponds to small-amplitude librations of its critical argument, as expected.
\subsection{Structure of the Laplace Resonance}
\label{sec4.2}
In order to realistically assess the chaotic diffusion of this system, we must first define the basic configurations with which to compare the time-evolved parameters.
The best fit for the 3-body GJ-876 system is, according to a variety of works \citep{rivera_etal_2010, baluev_2011}, a chaotic condition; however, it has also been established that the configuration is stable and locked in a resonant state for extremely long timescales. In \citet{marti_giuppone_beauge_2013} we presented a thorough exploration of the parameter space, yielding several dynamical constraints.
For instance, we concluded that both dynamical tests and stability considerations point towards a co-planar configuration. We also showed that finite masses are necessary in order to guarantee stability, and estimated upper bounds for the mass ratio. Here we expand on those results and discuss in more detail the evolution of both regular and chaotic orbits, with a higher resolution.
\begin{figure}
\centering
\includegraphics[width=8.0cm]{./deltae3-2xmegno-multiplot.eps}
\caption{Dynamical maps in the representative plane $(a_3,e_3)$ in the vicinity of the Laplace resonance. The color code in the top frame corresponds to $\Delta e_3$ while the two remaining graphs plot values of the MEGNO indicator $\langle Y \rangle$.}
\label{fig2}
\end{figure}
Figure \ref{fig2} presents new dynamical maps for the central region of the Laplace resonance, corresponding to low-amplitude librations of $\phi_{lap}$. Since we are interested in a detailed analysis of the resonance structure, we increased the resolution to a $300 \times 250$ grid of initial conditions in the $(a_3,e_3)$ plane. The total integration time was also increased to $10^{5}$ years. The values of $\Delta e_{3}$ for each initial condition are shown in the top panel, with a color code in the range of $0.0<\Delta e_{3}<0.06$.
It is important to recall that $\Delta e_{3}$ is not a chaos indicator (e.g. \citealt{ramos_etal_2015}), although it constitutes an important tool with which to map changes in the structure of the phase space, such as those stemming from separatrix crossings. The MEGNO indicator \citep{cincotta_2000,cincotta_giordano_simo_2003}, on the other hand, is a robust and efficient chaos indicator.
The middle panel of Figure \ref{fig2} shows the same map, although this time the colors correspond to the MEGNO values $\langle Y \rangle$, where $2$, the lowest value, indicates regular motion. We note a very sharp transition between moderate values close to $2$ deep within the libration domain, and highly chaotic motion with $\langle Y \rangle \ge 1000$. The low-MEGNO region is located in the core of the resonant domain and corresponds to small-amplitude librations of the Laplace angle, as discussed in Figure \ref{fig1}.
Although the two indicators do not show exactly the same results, they share some qualitative features. In both cases, the phase space appears separated into two distinct regions: a moderately regular ($\langle Y \rangle < 3$) domain surrounded by a significantly more chaotic region identified by $\langle Y \rangle > 10$. Hereafter we will refer to these as the {\it inner} and {\it outer} resonant domains, respectively.
The lower frame presents, once again, a MEGNO color map, this time limited to values found in the inner core of the resonance. We can now see a number of dynamical structures deep within this commensurability. Although similar structures may also be seen in the $\Delta e_3$ map, they are not so clearly defined. A second interesting result of the MEGNO map is that all initial conditions appear chaotic (with a minimum value of $\langle Y \rangle \simeq 2.89$), even for very low amplitudes of libration. Moreover, this figure clearly shows the signatures of high-order resonances within this domain, appearing either as narrow channels or as smooth curves (see below).
This general chaoticity is not unexpected. Indeed, \citet{nesvorny_morbidelli_1999} considered the full three-body resonance as a configuration in the Solar System (asteroid, Jupiter and Saturn), in which the time derivative of a generic resonant angle $\sigma$ satisfies:
\begin{equation}
\dot{\sigma} = j_1\dot{\lambda}_1 + j_2\dot{\lambda}_2 + j_3\dot{\lambda}_3 + l_1\dot{\varpi}_1 + l_2\dot{\varpi}_2 + l_3\dot{\varpi}_3 \approx 0,
\label{tbres}
\end{equation}
where $\lambda_{i}$ and $\varpi_{i}$ denote the mean and perihelion longitudes respectively. The indexes $(j_1,j_2,j_3) \in \mathbb{Z}^3\setminus\{0\}$ and $(l_1,l_2,l_3) \in \mathbb{Z}^3$ are conditioned by D'Alembert's rule:
\begin{equation}
\sum_{i = 1}^{3}(j_i + l_i) = 0.
\label{dalembert}
\end{equation}
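The admissible multiplets are easily enumerated; a minimal sketch (Python) applying Eq. (\ref{dalembert}) to a given MMR vector:

```python
from itertools import product

def multiplet(j, lmax=1):
    """All harmonic vectors l with |l_i| <= lmax satisfying
    D'Alembert's rule  sum(j_i + l_i) = 0  for the MMR vector j."""
    return [l for l in product(range(-lmax, lmax + 1), repeat=len(j))
            if sum(j) + sum(l) == 0]

m522 = multiplet((5, -2, -2))
```

For the asteroidal $(5,-2,-2)$ resonance discussed below, restricting to $|l_i|\le 1$ yields six harmonics, including the single-perihelion terms $(-1,0,0)$, $(0,-1,0)$ and $(0,0,-1)$.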
For a specific three-body mean-motion resonance (i.e. $\dot{\lambda}_i=n_i$ yields a fixed value of $a_i$), Eq. \eqref{tbres} defines several multiplets associated with different vectors of integers $\bm{l}$, each located at a slightly different resonant value of the corresponding semi-major axis. These multiplets (or sub-resonances) will inevitably overlap, generating an extended chaotic region in the three-body resonance. The full Hamiltonian of the $(j_1,j_2,j_3)$ MMR among the bodies $(P_1,P_2,P_3)$ can, up to second order in the eccentricity of the small body $P_3$, be reduced to one with four degrees of freedom. Indeed, after defining:
\begin{equation}
\bm{I}=\left(N_3, S_1,S_{2},S_{3}\right),\qquad\bm{\theta}=\left(\phi,\varpi_1,\varpi_{2},\varpi_{3}\right),
\label{variables}
\end{equation}
where $\phi=j_1{\lambda}_1 + j_2{\lambda}_2 + j_3{\lambda}_3$ and $N_3=L_3/j_3$, then
following the approach of Section~\ref{chaos0}, the local Hamiltonian reads
\begin{equation}
\begin{split}
H(\bm{I},\bm{\theta})=&-\frac{1}{2j_3^{2}N_3^{2}}-\beta_{0}\left(1+\frac{S_3}{j_3N_3}\right)^{2} +\\ &\left(j_{1}n_{1}+j_{2}n_{2}\right)N_3 +
\nu_{1}S_{1}+\nu_{2}S_{2}+V(\bm{I},\bm{\theta}),
\end{split}
\label{MMRH}
\end{equation}
where $\beta_0\sim e_{3}^2$ and $\nu_{1},\nu_{2}$ are the perihelion frequencies of the massive planets $P_1$ and $P_2$, respectively. The perturbation takes the form:
\begin{equation}
V\left(\bm{I},\bm{\theta}\right)=\sum_{\bm{l}}\beta_{\bm{l}}(\bm{I})
\cos(\phi+l_1\varpi_1+l_2\varpi_2+l_3\varpi_3),
\label{pertMMRH}
\end{equation}
and the small coefficients $\beta_{\bm{l}}(\bm{I})$ can be given in terms of a power series of the small body's eccentricity (see \citet{nesvorny_morbidelli_1999}).
Considering three different multiplets of the asteroidal three-body resonance $(5, -2, -2)$, \citet{cachucho_etal_2010} applied Chirikov's diffusion theory to investigate, among other effects, variations of the eccentricities of the (490) Veritas family. They clearly show that it is necessary to consider at least the three strongest terms in (\ref{pertMMRH}) in order to explain the observed distribution of eccentricities of this asteroidal family. This multiplet of three resonances for this particular MMR in the Solar System, given by $\bm{l}=(-1,0,0), (0,-1,0), (0,0,-1)$, is represented in Fig.~\ref{multi}.
\begin{figure}
\centering
\includegraphics[scale=0.3]{./fig2.eps}
\caption{Three-resonance model for the $(5, -2, -2)$ three-body MMR. The strength of
each resonance is given by the corresponding width. The largest one corresponds to the resonance
$(5\lambda_J-2\lambda_S-2\lambda-\varpi)$ while the smallest one to $(5\lambda_J-2\lambda_S-2\lambda-\varpi_S)$.}
\label{multi}
\end{figure}
This simple model shows that the three resonances overlap, and thus the full domain of the MMR is expected to be chaotic, so that diffusion might occur. Moreover, the figure
shows that diffusion \emph{along} the resonance corresponds to variations of the
eccentricity, while diffusion \emph{across} the resonance measures variations of the semi-major axis.
In the case of Gliese-876, $m_3\ll m_1 < m_2$, and since we are dealing with the full four-dimensional resonant Hamiltonian in a small domain around the Laplace resonance, a dense
set of resonances of the form
$$\dot{\phi}_{lap}+l_1\dot{\varpi}_1+l_2\dot{\varpi_2}+l_3\dot{\varpi_3}\approx 0$$
appears, as well as many other nearby MMRs. Thus regular motion is not expected in this region, but rather a very complex domain where many resonances overlap. Therefore the only way to investigate diffusion is through numerical experiments.
In order to understand the structure of the Laplace resonance and the role of the different resonances of the multiplet in the diffusion process, let
us write the Hamiltonian in Chirikov's style, taking again the same variables as defined
in (\ref{variables}) with $\phi=\phi_{lap}$. Due to D'Alembert's rule for the Laplace
resonance, the harmonic vector $\bm{m}\equiv(1,0,0,0)$ should be resonant, so we take the
angle $\bm{m}\cdot\bm{\theta}=(\lambda_1-3\lambda_2+2\lambda_3)$ as the resonant one. Separating
the resonant term from the perturbation, the Hamiltonian becomes
\begin{equation}
\begin{split}
H_r(\bm{I},\bm{\theta})=&-\frac{1}{2j_3^{2}N_3^{2}}-\beta_{0}\left(1+\frac{S_3}{j_3N_3}\right)^{2} + \left(j_{1}n_{1}+j_{2}n_{2}\right)N_3 +\\
&\nu_{1}S_{1}+\nu_{2}S_{2}+\beta_{{\bm{m}}}(\bm{I})\cos(\bm{m}\cdot\bm{\theta})+V,
\end{split}
\label{H_r}
\end{equation}
where $V$ includes all terms of the form $\cos(\phi_{lap}+l_1\varpi_1+l_2\varpi_2+l_3\varpi_3)$ with $l_i\ne 0$. Following the formulation of Section~\ref{chaos1}, a canonical transformation or local change of basis $(\bm{I},\bm{\theta})\to(\bm{J},\bm{\psi})$ such that
$$\psi_k=\sum_{i=1}^4 \mu_{ki}\theta_i,\qquad I_s=I^r_s+\sum_{k=1}^4J_k\mu_{ks},$$
where $\mu_{ki}$ are the coefficients of the transformation, with $\mu_{1k}=m_k$,
$\mu_{2k}=\omega^r_k/||\bm{\omega^r}||,\dots$ (so that $\psi_1=\bm{m}\cdot\bm{\theta}$),
allows one to reduce the resonant Hamiltonian to
\begin{equation}
\begin{split}
H(\bm{J},\bm{\psi})=&{J_1^2\over 2M}
+|\bm{\omega}^r|J_2 + \sum_{s=1}^4 \sum_{k+ s> 2}^4 \frac{J_kJ_s}{2M_{ks}}+\\
&\beta_{{\bm{m}}}(\bm{I^r})\cos\psi_1+
\sum_{\bm{l}} \beta_{\bm{l}}(\bm{I^r})
\cos(\bm{l}\cdot\bm{\theta}(\bm{\psi})),
\end{split}
\label{Hfull}
\end{equation}
where $M$ is the non-linear mass defined in Section~\ref{chaos1} while the
$M_{ks}$ are similar constants to $M$ but involving different coefficients of the basis
transformation $(\mu_{ik})$ and
$\bm{I^r}$ is the resonant action that satisfies the resonance condition
$$n_1(\bm{I^r})-3n_2(\bm{I^r})+2n_3(\bm{I^r})=0.$$
Recalling that the dot product is invariant, the replacement $\bm{\theta}\to\bm{\psi}$ is easily done since $\bm{l}\cdot\bm{\theta}=\bm{r}\cdot\bm{\psi}$, where now the components of $\bm{r}$ are real numbers.
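For orientation, the resonant value of $a_3$ implied by this condition can be estimated from Kepler's third law; the sketch below (Python; planetary masses neglected in the mean motions, with $P_{1,2}$ and $a_2$ taken from Table \ref{table2}) recovers a semi-major axis close to the $a_3\simeq 0.335$ AU quoted earlier.

```python
# mean motions of the two inner planets from their periods (deg/day)
P1, P2 = 30.0881, 61.1166
n1, n2 = 360.0 / P1, 360.0 / P2
a2 = 0.208317                        # AU

# resonance condition n1 - 3 n2 + 2 n3 = 0  ->  n3 = (3 n2 - n1) / 2
n3 = (3.0 * n2 - n1) / 2.0

# Kepler's third law (same central mass): a proportional to n^(-2/3)
a3_res = a2 * (n2 / n3) ** (2.0 / 3.0)
```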
Keeping only the actual resonance ($\phi_{lap}$) and neglecting all the perturbation terms $\beta_{\bm{l}}(\bm{I^r})$ for all $\bm{l}$, the components $J_2,J_3, J_4$ become local integrals of motion whose value is equal to zero if $\bm{I^r}$ is a point of the orbit. Then, the Hamiltonian reduces to a pendulum-like model
\begin{equation}
\tilde{H}_r(J_1,\psi_1)=\frac{J_1^2}{2M}+\beta_{\bm{m}}(\bm{I^r})\cos\psi_1.
\label{pen2}
\end{equation}
Thus the motion \emph{across} the resonance is given by $J_1$, the pendulum action.
It librates or circulates depending on the value of $\tilde{H}_r$, and for $\tilde{H}_r=\beta_{\bm{m}}(\bm{I^r})$ the system lies at the separatrix. When the perturbation is switched on $(\beta_{\bm{l}}(\bm{I^r})\ne 0)$, its main effect on the pendulum model is to produce a distortion of the separatrix
and the motion in the neighborhood of this asymptotic trajectory becomes chaotic leading to
the so-called chaotic layer. However, a non-vanishing perturbation including at least two perturbing terms also leads to variations
of the unperturbed local integrals $J_2,J_3,J_4$, as a simple inspection of (\ref{Hfull}) shows.
The variation of $J_2$ has a direction normal to the energy surface and thus it can be ignored.
Changes in $J_3$ and $J_4$ lie in the diffusion space and therefore \emph{along} the resonance.
In other words, due to the particular geometry of the resonance (see Fig.~\ref{multi}), $J_1$ measures diffusion in the semi-major axis of $P_3$ while, $J_3$ and $J_4$ lying in the diffusion space, take into account diffusion in the eccentricity of the small body.
From the above discussion it becomes clear that if $\bm{m}\cdot\bm{\theta}=(\lambda_1-3\lambda_2+2\lambda_3)$ is a resonant angle, then $(\lambda_1-3\lambda_2+2\lambda_3+l_1\varpi_1+l_2\varpi_2+l_3\varpi_3)$ is also resonant for any non-trivial integers $l_i$ that satisfy D'Alembert's rule. As we have already shown, all these resonances overlap, since all of them have almost the same $\bm{I^r}$ (or $a^r$). Hence we expect a fully chaotic domain within the Laplace resonance and therefore diffusion in both directions, along and across the resonance. Moreover, since many other MMRs lie very close to this Laplace resonance, a large chaotic sea should surround it. As can be seen in Fig.~\ref{fig2}, the correspondence between this simple model and the full numerical experiments is, at least qualitatively, evident. However, from the above discussion, nothing can be said about the diffusion rate or about whether the diffusion has a normal character.
\begin{figure*}
\centering
\mbox{\includegraphics*[width=16.0cm]{./diffusion_all-multi_lowres_lowres.eps}}
\caption{Diffusion of 9 ensembles of 256 initial conditions defined in different regions of the representative plane. Total integration time was $2 \times 10^5$ yrs. Black rectangles show the location of the initial ensembles, while the color dots indicate their diffusion during this time-span.}
\label{fig3}
\end{figure*}
\section{Diffusion Inside the Laplace Resonance}
Having analyzed the general structure and chaoticity of the Laplace resonance, our next step is to estimate the diffusion times in the different regions within this commensurability.
We performed a series of integrations of ensembles of initial conditions at specific locations in the $(a_{3},e_{3})$ plane. Each ensemble consisted of a total of 256 initial conditions, all centered around a given point in the plane and defining very narrow regions of at most $10^{-3}$ in $\Delta e_3$ and $2\times 10^{-4}$ in $\Delta a_3$. Each initial condition was again integrated for a total time of $2\times 10^{5}$ years, twice as long as the time-span used for the original map.
During the evolution, we kept a record of every time the particles crossed the representative plane. A crossing was recorded when the following conditions were satisfied:
\begin{itemize}
\item $\Sigma_{i=1}^{3} (|M_{i}-M^{0}_{i}| + |\varpi_{i}-\varpi^{0}_{i}|) < \epsilon_{ang}$ ,
\item $\Sigma_{i=1}^{2} |e_{i} - e^{0}_{i}| < \epsilon_{e}$ ,
\item $\Sigma_{i=1}^{2} |a_{i} - a^{0}_{i}| < \epsilon_{a}$,
\end{itemize}
where $\epsilon_{ang}$, $\epsilon_{e}$ and $\epsilon_{a}$ are predefined values. For this set of simulations we adopted $\epsilon_{ang} = 6^{\circ}$, $\epsilon_{a} = 0.005\,\textrm{AU}$ and $\epsilon_{e} = 0.005$.
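These crossing conditions translate directly into code; a minimal sketch (Python; angles in radians, element arrays ordered as planets 1--3) of the test applied at every output step:

```python
import numpy as np

EPS_ANG = np.radians(6.0)     # tolerance on the summed angular distances
EPS_A, EPS_E = 0.005, 0.005   # tolerances in AU and in eccentricity

def angdist(x, x0):
    """Absolute angular separation, wrapped to [0, pi]."""
    return np.abs((np.asarray(x) - np.asarray(x0) + np.pi) % (2.0 * np.pi) - np.pi)

def crosses_plane(M, M0, varpi, varpi0, a, a0, e, e0):
    """Check the three proximity conditions defining a crossing of the
    representative plane: angles of all three planets, (a, e) of the
    two inner ones."""
    return (np.sum(angdist(M, M0) + angdist(varpi, varpi0)) < EPS_ANG
            and np.sum(np.abs(np.asarray(a[:2]) - np.asarray(a0[:2]))) < EPS_A
            and np.sum(np.abs(np.asarray(e[:2]) - np.asarray(e0[:2]))) < EPS_E)
```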
We integrated a set of nine ensembles (hereafter referred to as 1S, 2S, ... , 9S). The first was located in the outer resonant region, while the other ensembles were placed inside the inner resonant region.
Figure \ref{fig3} shows the evolution of each ensemble in the representative plane, superimposed with the resonant structure as determined by the $\Delta e_3$ indicator. The initial conditions are indicated with black rectangles while their subsequent diffusive evolution is depicted in colors.
As seen from the left-hand plot, the 1S ensemble suffers large-scale diffusion, rapidly covering the whole outer resonant region. The motion is highly chaotic and the times between crossings are unpredictable. More interestingly, all intersections with the representative plane occur in the red region, which appears detached from the inner resonant zone indicated in green and blue.
The remaining frames in Figure \ref{fig3} correspond to initial conditions in the inner resonant zone. In all cases the diffusion is very localized, at least compared with the evolution of 1S. Moreover, the ensembles never leave the inner domain during their time evolution, indicating no noticeable mixing between the two parts. This suggests that the two regions may be dynamically unconnected (at least up to the considered length of the simulations) and that the limit between them could represent a kind of dynamical boundary inside the resonance. Consequently, initial conditions within the inner region seem to be characterized by very small diffusion rates, while the opposite occurs for initial conditions in the outer domain.
\subsection{Diffusion Coefficients}
In this Section, we proceed to quantify the different chaotic regimes within the Laplace resonance.
To this end we take advantage of the ensembles 1S to 9S described in the previous section, which are sufficiently representative to compute the variances in both $a_{3}$ and $e_{3}$. The ensembles labeled $i = 2, \ldots, 9$ have a considerable number of intersections with the $(a_3, e_3)$ plane, ensuring a good approximation to the actual value of the variance of $(a_3,e_3)$. In the case of 1S, we already noticed the difficulty of obtaining a significant number of crossings of the representative plane.
The numerical computation of the variance proceeds as follows: {\bf i)} We subdivided the total integration time of the ensembles (i.e. $T_{tot} = 2\times 10^5$ years) into $N_t$ time intervals of fixed length $T_{imp}$, so that $T_{imp} = T_{tot}/N_t$. {\bf ii)} For each time interval $[(i-1) T_{imp}, i\, T_{imp}]$, $i = 1, \ldots, N_t$, we counted the number of plane crossings $N_{i}$ occurring before the end of the interval (i.e. with $T_{cr} < i\, T_{imp}$). {\bf iii)} A representative value of the variance of both $a_{3}$ and $e_{3}$ is calculated using all the plane-crossing conditions in each time interval according to:
\begin{equation}
\sigma_x^2 = \frac{1}{N_i}\sum_{k=1}^{N_i} \left( x(T_{cr,k}) - x_{0} \right)^2,
\end{equation}
where $x$ stands for either of the fundamental parameters $a_{3}$ or $e_{3}$, $T_{cr,k}$ are the crossing times in the interval, and $x_{0}$ is the value of the corresponding parameter at the center of the ensemble.
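The binned variance estimate described above can be sketched as follows (an illustration with hypothetical input arrays of crossing times and crossing values, not the authors' code):

```python
import numpy as np

# For each cumulative time bin [0, i*T_imp], average the squared deviations
# of all crossings recorded up to that time. `t_cr` and `x_cr` are assumed
# arrays of crossing times and of a3 (or e3) at those crossings.
def binned_variance(t_cr, x_cr, x0, t_tot=2.0e5, n_t=25):
    t_imp = t_tot / n_t
    sig2 = np.full(n_t, np.nan)          # NaN where no crossing occurred yet
    for i in range(1, n_t + 1):
        mask = t_cr < i * t_imp          # crossings with T_cr < i*T_imp
        n_i = mask.sum()
        if n_i > 0:
            sig2[i - 1] = np.mean((x_cr[mask] - x0) ** 2)
    return sig2
```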
Diffusion processes are commonly characterized by a power-law relationship of the form $\sigma^2(t)=c~t^\alpha$, with $c>0$. If $\alpha=1$ we have normal diffusion, while for $\alpha<1$ the phenomenon is called sub-diffusion and for $\alpha>1$ super-diffusion. In the normal diffusion case, which corresponds to purely random motion, it is possible to define a numerical diffusion coefficient, $D$, as the constant rate at which the variance grows with time. The computation of the diffusion coefficient in the case of sub-diffusion or super-diffusion for a generic Hamiltonian system is still an open and difficult problem. Therefore, in this work we focus on which type of diffusion dominates the different regions of the resonance discussed above.
Thus we associate to $\sigma_x^2$ a power law
\begin{equation}
\label{intro_dif_coef_variance}
\sigma_x^2(t) = c_x t^{\alpha_x},
\end{equation}
where $c_x$ and $\alpha_x$ are the fitted parameters. If the exponent $\alpha_x\approx 1$, the parameter $c_x$ is an estimate of the standard diffusion coefficient, $D_x$. On the other hand, if $\alpha_x$ is far from $1$, no diffusion coefficient can be estimated and only a qualitative description of the diffusion process in phase space can be provided.
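In practice, fitting Eq.~(\ref{intro_dif_coef_variance}) amounts to a linear least-squares fit in log-log space; a minimal sketch with synthetic data:

```python
import numpy as np

# Fit sigma^2(t) = c * t^alpha by linear least squares in log-log space:
# the slope is alpha, the intercept is log(c). Alpha close to 1 signals
# normal diffusion, alpha < 1 sub-diffusion. (A sketch, not the authors'
# fitting code.)
def fit_power_law(t, sig2):
    logt, logs = np.log(t), np.log(sig2)
    alpha, logc = np.polyfit(logt, logs, 1)   # degree-1 fit: slope, intercept
    return alpha, np.exp(logc)

# Synthetic check: data generated with alpha = 0.5, c = 3 is recovered.
t = np.logspace(2, 5, 50)
alpha, c = fit_power_law(t, 3.0 * t ** 0.5)
```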
\begin{figure}
\centering
\mbox{\includegraphics[width=8.0cm]{./varianzas_e_S1-S9-25points.eps}}
\caption{Variance of the eccentricity as a function of time, obtained for each of the ensembles 1S to 9S. The $\sigma_e$ values are shown on a logarithmic scale in order to show how each curve departs from the normal-diffusion curve, depicted as a dashed black line.}
\label{fig4}
\end{figure}
In Figure \ref{fig4} we show the time evolution of $\sigma_e$ for each of the ensembles 1S - 9S. For comparison, we also include the corresponding evolution for the completely random case ($\sigma^2\propto t$). The figure shows that all nine ensembles grow at a smaller rate than expected for normal diffusion. The ensemble 1S, taken in the outer part of the resonance, presents some similarities with the normal case. Among the remaining ensembles, 4S shows the highest rate of evolution at large times, but its computed variance is one order of magnitude smaller than that of 1S. This result clearly shows that the inner region of the resonance, while chaotic, presents a dynamical behavior that looks almost stable, and therefore the diffusion is not well approximated by a Brownian-type motion. In Table~\ref{table3} we show the values of the exponents for the nine ensembles. Clearly, only the ensembles 1S and 4S present an exponent close to 1. The associated diffusion coefficient, obtained by a linear fit for $t\gtrsim 10^4$ years, is $D\sim 10^{-9}$ for both ensembles. The rest of the ensembles are highly sub-diffusive, and thus their dynamical behavior is rather stable, at least for $t=2\times 10^5$ years. The particular case of ensemble 4S may be explained by its initial position $(a_0, e_0)\approx (0.335, 0.01)$, which lies in the outer region of the resonance, very close to the boundary defined by the MEGNO computation (see for instance Fig.~\ref{fig5}). Initially the evolution of this ensemble is sub-diffusive but, at moderate times, the diffusion becomes more normal, and for longer times it could possibly reach higher values of the eccentricity.
In this direction, \citet{batygin_holman_2015} developed a two-dimensional model and studied the diffusion in this same system. Beyond the differences between the analytical and numerical approaches, they assume normal diffusion in order to derive a diffusion coefficient. Our approach shows that this assumption is not well suited for diffusion in a multi-resonant system such as GJ-876, at least in the inner region of the Laplace resonance.
\begin{table}
\centering
\begin{tabular}{ c c }
\\[1ex]
\hline\hline \\[-1.3ex]
{\bf Ensemble} & $\alpha$\\
\hline \\
1S & $0.942715$ \\
2S & $0.585784$ \\
3S & $0.494802$ \\
4S & $0.923109$ \\
5S & $0.648737$ \\
6S & $0.448689$ \\
7S & $0.686534$ \\
8S & $0.592316$ \\
9S & $0.462431$ \\
\hline
\end{tabular}
\\[1ex]
\caption{
Exponents $\alpha$ obtained from a least-squares fit to the variance data of each of the nine ensembles.}
\label{table3}
\end{table}
\section{Orbital Stability in the Inner and Outer Resonant Regions}
Finally, we analyze the orbital stability and dynamics of a set of initial conditions in both regions of the Laplace resonance. Figure \ref{fig5} shows the Lyapunov characteristic exponent (LCE) calculated for 10 initial conditions in the representative plane. Their locations, superimposed on the MEGNO map, are shown in the upper frame, while the time evolution of their LCE is shown in the bottom plot. Along with these, we have plotted the LCE evolution for the initial conditions given by the co-planar orbital fits already mentioned in section \ref{sec4.1}.
\begin{figure}
\centering
\mbox{\includegraphics*[width=9.0cm]{./Megno-structure-IC+fits.eps}}
\caption{The bottom frame shows the Lyapunov characteristic exponent (LCE) for 10 initial conditions chosen in different regions of the representative plane (identified in the top graph).}
\label{fig5}
\end{figure}
All the initial conditions set in the outer resonant region (IC 7 through IC 10) are characterized by very large values of LCE, of the order of $10^{-1}$ yrs$^{-1}$, corresponding to extremely chaotic motion. However, as shown in the left panels of Figure \ref{fig6} for IC 10, there is no indication of orbital instability, at least within several $10^7$ years. The system is inside the Laplace resonance, although the resonant angle displays large-amplitude librations. The resonant angles of the individual two-body resonances are also librating, and the behavior of $\Delta \varpi_1$ indicates that $m_1$ and $m_2$ are trapped in an Apsidal Corotation Resonance (ACR) \citep{beauge_etal_2003}. The difference in longitudes of pericenter of the outer pair ($\Delta \varpi_2$), however, circulates, indicating that this sub-system is not in an ACR.
\begin{figure}
\centering
\mbox{\includegraphics*[width=9.0cm]{./angulos-resonantes_CI4-CI10.eps}}
\caption{Time evolution of the resonant angles corresponding to initial conditions IC 10 and IC 4 described in Figure \ref{fig5}. They show the evolution of two characteristic initial conditions placed in the inner (IC 4) and outer (IC 10) resonant regions.}
\label{fig6}
\end{figure}
These values of LCE are very similar to those obtained by \citet{batygin_holman_2015}, who estimate a Lyapunov time for Rivera's orbital fit using the aforementioned two-dimensional model. In fact, all orbital fits show a similar behavior (see Figure \ref{fig5}), with values of LCE between those corresponding to the IC7-IC10 and IC2-IC5-IC6 groups of initial configurations.
Continuing with Figure \ref{fig5}, initial conditions placed in the red streaks within the inner resonant region (IC 2, 5 and 6) have moderate values of the LCE. While these are significantly smaller than before, they still correspond to significant chaotic motion. Finally, the initial conditions placed inside the relatively regular inner resonant region (IC 1, 3 and 4) all show almost identical, very small values of the Lyapunov exponent. At the end of the simulation, at $T=1.2 \times 10^7$ years, the value of the LCE has yet to reach a plateau, indicating that this region is characterized by very regular motion. Indeed, the theoretically expected final value of the LCE for regular motion is $\ln T/T\sim 10^{-6}$.
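This quoted asymptotic value can be checked directly (a one-line illustration of our own):

```python
import math

# Sanity check of the asymptotic LCE value for regular motion, ln(T)/T,
# evaluated at the final integration time T = 1.2e7 yr.
T = 1.2e7
lce_regular = math.log(T) / T
print(f"{lce_regular:.2e}")  # prints 1.36e-06, consistent with the quoted ~1e-6
```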
The right-hand frames of Figure \ref{fig6} show the time evolution of the resonant angles for IC 4. All of them, together with the differences in longitudes of pericenter, exhibit small-amplitude librations, indicating that this configuration is not only trapped inside the Laplace resonance but also exhibits a double ACR. The same is found for the other initial conditions in this region. This seems to indicate that the difference in dynamics between the inner and outer resonant domains is defined by the behavior of the auxiliary resonant angles, particularly that of the outer pair. Thus, it appears that the almost regular region deep within the Laplace resonance corresponds to double-ACR orbits, while the highly chaotic outer region is associated with an ACR for the inner pair and a $\sigma_2$-libration of the outer pair of planets.
\section{Conclusions}
The choice of the GJ-876 system, although arbitrary, is due to two main factors. On one hand, we wanted to analyze the diffusive process and chaotic mixing in a system which could have avoided other chaotic processes during its early stages after gas depletion. In this sense, GJ-876 is a well characterized system that displays a resonant chain of planetary bodies. On the other hand, a natural motivation was the extensive number of previous works in which this specific planetary system has been used as a prime example.
We started our analysis by improving our representation of the region covered by the Laplace resonance in the $(a_{3},e_{3})$ plane. We integrated one order of magnitude more initial conditions than we had previously done, and also extended the total integration time for each to $2\times 10^{5}$ years. This allowed us to explore very precisely the main dynamical structures that this system presents.
As already pointed out in \citet{marti_giuppone_beauge_2013}, we recognized two main regions in the surroundings of the resonance. The one we called the inner resonant region is characterized by lower values of $\Delta e_{3}$, a MEGNO indicator value of $\langle Y \rangle \sim 2$, and extremely small values of the LCE which result in seemingly large Lyapunov times. The outer resonant region is, however, dominated by extremely chaotic dynamics, presenting high values of $\Delta e_{3}$ and $\langle Y \rangle$, and LCEs higher than in the inner region. Moreover, we also concluded that the inner zone corresponds very well to the region of lower libration amplitude of the resonant angle $\phi_{lap}$. This feature, although apparently trivial, is extremely important because it shows that the multi-resonant configuration of the four-body system ($m_{0} + m_{i}, i = 1, 3$) is responsible for its long-term stability. The coincidence of the low-amplitude libration regions of $\sigma_{2}$ and $\phi_{lap}$ in phase space allows us to state that the system is unable to show a libration of the Laplace angle without being trapped in the two single two-body resonances.
Although both the MEGNO and $\Delta e_{3}$ indicators point towards chaoticity within the inner resonant region, this characteristic should be considered with care. Indeed, we have already stated that, aside from the overall chaoticity of the system, we can still define regions with completely different dynamical behaviors. The higher precision used in our grid simulations of initial conditions allowed us to produce a much more detailed map of the inherent chaotic structure inside the Laplace resonance. As shown in the bottom frame of Figure \ref{fig2}, several thin strips of higher values of $\langle Y \rangle$ cross each other along the whole inner domain. This behavior is completely expected (see section \ref{sec4.2}) due to the overlapping of resonances associated with slight variations of the longitudes of perihelion of the planets, which are located in the same region as the Laplace resonance.
In order to get a quantitative idea of how the different aspects of chaotic behavior affect the dynamics of the system and its resonant structure, we performed numerical calculations of the diffusive processes which occur inside the multi-resonance domain. Although diffusion is always present, we showed that the local variation of the fundamental parameters $(a_{3},e_{3})$, associated with the actions in phase space (see section \ref{model}), remains completely confined to the inner region of the resonance as long as their initial values reside in that domain. In the few cases where the initial conditions were located at the borders of the inner region or on the strips of moderate chaos, the diffusion rate seems to be higher. We also computed the diffusion for an ensemble of initial conditions located outside the inner resonant region, yielding a time evolution of the variance very close to normal diffusion ($\alpha = 0.942715$ in the model $\sigma^2(t)=c~t^\alpha$), while for all the other ensembles the fitted exponent was significantly smaller. This result clearly shows that the assumption of normal diffusion ($\sigma^2(t)\propto t$) for these kinds of systems is not well sustained.
The LCE calculated for 10 different initial conditions, chosen to represent some crucial aspects of the resonance, is clearly in accordance with the overall analysis developed here. There is a direct link between the lower values of the LCE and initial conditions in the inner zone. Accordingly, systems with initial conditions placed outside the inner part not only reached higher values of the LCE, but also reached these values at earlier times than systems with initial conditions in the inner region. Moreover, those conditions located specifically on the moderate MEGNO strips show an intermediate value of the LCE, and some seem not to have reached their asymptotic LCE value by the final time of the simulation.
The LCE obtained by \citet{batygin_holman_2015} corresponds to the outer resonant region of the Laplace resonance, as they make use of the fit from \citet{rivera_etal_2010} (see also Table \ref{table2}). However, we found that the inner region, characterized by a double ACR and small libration amplitudes of the resonant angles, contains initial conditions which are less chaotic, associated with Lyapunov times larger than $10^5$ years. In fact, we ran a simulation of Rivera's orbital fit, which led to a Lyapunov time of $\sim 100$ years. Our integrations for initial conditions in the inner resonant region which do not lie on any of the moderate MEGNO strips are not only stable for more than $10^{7}$ years; they also show a much more limited evolution of the libration amplitudes of the resonant angles (see right-hand frame of Figure \ref{fig6}), as well as a much more regular variation. This strongly suggests that, although chaotic, the system can and in fact does display long-term stability, and that chaotic mixing should not have occurred in systems which display resonant dynamics similar to that of GJ-876.
Although this research was developed for a specific planetary system, it seems reasonable that any system in a similar multi-resonant configuration could share the main features described throughout this paper. The implementation, although numerically expensive, should not pose major problems, so an extension to any such system would only require a sufficiently precise orbital fit. As the number of known multi-resonant systems is constantly increasing, this type of dynamical study is of fundamental importance, primarily for stability considerations, and secondly because of the constraints that multi-resonant planetary systems can impose on planetary formation theories.
\section*{Acknowledgments}
This work used computational resources from CCAD – Universidad Nacional de C\'ordoba (http://ccad.unc.edu.ar/), in particular the Mendieta Cluster, which is part of SNCAD – MinCyT, Rep\'ublica Argentina. Other Numerical simulations were made on the local computing resources from the Instituto de Astronom\'ia Te\'orica y Experimental (IATE), at the University of C\'ordoba (C\'ordoba, Argentina) and also on the IFLySIB computational resources at the Instituto de F\'isica de L\'iquidos y Sistemas Biol\'ogicos - CONICET - UNLP. We also want to thank the Instituto de Astrofísica de La Plata (IALP) - CONICET - UNLP, La Plata, for supporting this research.
The authors also wish to express their gratitude to an anonymous referee for important suggestions and comments.
\bibliographystyle{mnras}
The theoretical investigation of the glass transition and its relation to jamming
in hard sphere systems has made considerable progress in the last 30
years~\cite{SW84,SSW85,Sp98,CFP98,PZ10}. This has been possible mainly
because of the powerful analogy between jammed states and inherent
structures~\cite{SW82,LS90,Sp98,KK07} and of the development of methods based
on spin glass theory~\cite{Mo95,MP99} to describe the glass transition
of particle systems. This progress led to the proposal that amorphous
jammed states of hard spheres can be thought of as the states obtained
in the infinite pressure limit
of metastable glasses, and therefore described using tools of (metastable-)equilibrium
statistical mechanics.
\begin{figure}
\includegraphics[width=8cm]{dia_totale.eps}
\caption{
Schematic mean-field phase diagram of hard spheres in three dimensions,
see the text and~\cite{PZ10} for a detailed description.
}
\label{dia_totale}
\end{figure}
The phase diagram of hard spheres
that results from these mean-field studies is summarized in Fig.~\ref{dia_totale},
where we plot the pressure as a function of the packing fraction $\f$
which is the fraction of space covered by the spheres.
The full black line represents the
equilibrium phase diagram with the liquid-to-crystal transition.
If this transition can be avoided (by compressing fast enough or by introducing
some degree of polydispersity), one enters into a metastable liquid
phase. The nature of this metastable liquid changes at $\f=\f_{\rm d}$.
It consists of a single ergodic state for $\f<\f_{\rm d}$. When $\f>\f_{\rm
d}$, the available phase space splits into many glassy states. If the
system is stuck in one of these states and compressed, it follows one of
the glass branches of the phase diagram, until its pressure eventually diverges at
some packing fraction $\f_j$ which depends on the state.
At density $\f_{\rm K}$ a thermodynamic glass transition happens
(in the sense of mean field spin glasses~\cite{CC05}) towards an {\it ideal glass}.
The pressure of the latter diverges at $\f_{\rm GCP}$. In the inset, the complexity,
i.e.\ the logarithm of the number of glassy states, is plotted as a function of
the jamming density $\f_j$: this approach predicts that there exist jammed states in
a finite interval of density $\f_j \in [\f_{\rm th}, \f_{\rm K}]$.
The boxes show a schematic picture of the ($3N$-dimensional,
where $N$ is the number of particles)
phase space of the system: black configurations are allowed by the
hard-core constraint, white ones are forbidden. In the supercooled liquid phase the allowed configurations form a connected
domain; however, on approaching $\f_{\rm d}$ the connections between different metastable regions
become smaller and smaller. Above $\f_{\rm K}$, they disappear in the thermodynamic limit and glassy
states are well defined.
The above mean-field picture has been obtained by a succession of
works which start from the studies of some categories of spin-glasses
with so-called `one step replica symmetry breaking', and have
gradually matured into analytic approximation tools for the theory of
hard spheres (see \cite{PZ10} and references therein).
A very interesting model has been introduced recently by Mari,
Krzakala and Kurchan~\cite{MKK08}.
It displays exactly the phase diagram presented in Fig.~\ref{dia_totale}:
it undergoes an equilibrium
glass transition and it has an interval of densities where it shows
all the phenomenology which is now associated to jamming, like
marginal mechanical stability and the associated presence of anomalous soft
modes in the vibrational spectrum~\cite{OSLN03,Wyart,WNW05}.
The model has been studied
numerically in~\cite{MKK08} in order to show the existence of separate
glass and jamming transitions and to clarify to some extent the relation between
the two.
This model is interesting in that it is in principle solvable: it can
be investigated by mean of modern methods that have been developed in the context
of mean field spin glasses, the replica method~\cite{MPV87} and the
cavity method~\cite{MM09}. This investigation is the purpose of the present paper, where
we derive the cavity equations that describe the model and we present some
approximated analytical
solutions to them, along with a detailed numerical resolution.
Since it will turn out that the exact solution requires quite heavy numerical
calculations (heavier than a direct Monte Carlo study of the model, at least
for a moderate number of particles,
such as the one performed in~\cite{MKK08}), one might wonder why this
solution is interesting at all. There are at least two reasons why this study
is interesting, in our opinion.
The first is that Monte Carlo methods are not able to access the deep
glassy phase or the densest part of the jammed phase: they are confined to explore the region close to $\f_d$
(at equilibrium) and $\f_{\rm th}$ (at jamming). Therefore if one wants
to study, for instance, how the properties of the packings change when going
from $\f_{\rm th}$ to $\f_{\rm GCP}$, the exact solution is needed.
Moreover, we will show that the cavity method allows one to derive simple analytical
approximations to the true solution. Similar approximations have been used to study finite-dimensional hard spheres~\cite{PZ10};
their investigation in the controlled setting of the present
``solvable'' model allows us to assess their reliability.
Finally, there are some generic structures in the correlations of jammed packings
that one would like to explain analytically. Our work is a first step
in this direction.
This paper is meant to be read by specialists in the field, so we make no
attempt to explain in detail the basics of the method. Recent complete reviews of
the physical problem~\cite{PZ10,He10,LNSW10,TS10} as well as of the method we
used~\cite{Pa07b,MM09}
exist, and the reader is assumed to be familiar with these concepts.
\section{Definitions}
\begin{figure}
\includegraphics[width=5cm,angle=90]{factor_graph.eps}
\caption{
An illustration of the model for $p=6$, $z=3$ and $N=8$. Each white
square is a box, each black dot is a variable (sphere).
Each box contains all the spheres connected
to it by a link. The spheres inside one box must not overlap
(note that for $z=1$ one obtains $N/p$ systems of $p$ hard spheres).
}
\label{factor_graph}
\end{figure}
The model that we study in this paper is a simple generalization of the one introduced
in~\cite{MKK08}, defined as follows.
We consider a ``factor graph'', namely a bipartite graph
made by two types of nodes: {\it variables} and {\it boxes}.
Each variable is connected to $z$ boxes and each box is
connected to $p$ variables.
In a system with $N$ variables the number of boxes is $Nz/p$ and the total
number of links (i.e.\ variable-box connections) is $Nz$. We will consider
an ensemble of `random regular' factor graphs in which each graph satisfying this requirement
has the same probability. A crucial property of this ensemble, which allows
for the solution of the model, is that in the thermodynamic limit $N\rightarrow\io$
almost all graphs are locally tree-like, in a sense that can be defined
precisely~\cite{MM09}.
Each variable is a vector $x_i \in [0,1]^d$ with periodic boundary conditions,
where $d$ is the {\it dimension} and $i = 1, \cdots, N$.
In the following we denote by
$|x_i - x_j| = \sqrt{\sum_{\mu=1}^d( |x_i^\mu - x_j^\mu|_{\text{mod }1})^2}$
the distance between $x_i$ and its closest periodic image of $x_j$.
If we call $\chi(x_i,x_j)$ the characteristic function of the hard sphere constraint
(with periodic boundary conditions),
i.e.\ $\chi(x_i,x_j)=1$ if $|x_i - x_j| \geq D$ and $0$ otherwise,
then each box $a = 1, \cdots, Nz/p$ imposes the condition
\begin{equation}
\chi(a) \equiv \chi(x^a_1,\cdots,x^a_p) \equiv \prod_{i<j}^{1,p} \chi(x^a_i,x^a_j) \neq 0 \ ,
\end{equation}
where $x^a_i$ are the variables connected to box $a$.
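The two ingredients just defined, the minimum-image distance on the unit torus and the box constraint $\chi(a)$, can be sketched as follows (an illustrative implementation, not part of the original work):

```python
import numpy as np

# Minimum-image distance on the unit torus [0,1]^d, and the box constraint
# chi(a) requiring all p spheres in a box to be pairwise at distance >= D.
def torus_dist(x, y):
    d = np.abs(x - y)
    d = np.minimum(d, 1.0 - d)          # closest periodic image, axis by axis
    return np.sqrt(np.sum(d * d))

def chi_box(xs, D):
    """True if no pair among the spheres xs (shape (p, d)) overlaps."""
    p = len(xs)
    return all(torus_dist(xs[i], xs[j]) >= D
               for i in range(p) for j in range(i + 1, p))
```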
The partition function of the model is
\begin{equation}
Z = \int dx_1 \cdots dx_N \prod_{a=1}^{Nz/p} \chi(a) \ .
\end{equation}
A pictorial description of the model is the following (see Fig.~\ref{factor_graph}).
Each box can be thought
of as a cubic region $[0,1]^d$ with periodic boundary conditions.
Each variable node $i=1,\dots,N$ represents a ``sphere'' of diameter
$D$ and this sphere appears in position $x_i$ in
all the $z$ boxes to which the node is connected. On the other
hand, each box contains exactly $p$ spheres. The constraint is that, for each
box, the $p$ spheres present in the box do not overlap.
The model therefore differs from a standard hard sphere model, since each
sphere interacts only with a finite subset of neighbors, and the topology of
the interaction network is fixed by the random graph construction described
above. This structure is such that the model becomes a mean field model
and is therefore exactly solvable, at least in principle, as we will discuss in the
following. It is worth noting, however, that there are two ``formal'' limits where
one gets back the standard hard sphere model:
in the case $z=1$ the model reduces to $N/p$ independent
systems of $p$ hard spheres each, while for $p=2$ and $z=N-1$ one
gets back a single system of $N$ hard spheres. Note also that in \cite{MKK08}
only the version with $p=2$ has been studied.
Our investigations showed, however, that the model defined above undergoes
a ``crystallization'' phenomenon at high density: the spheres tend to localize
around a discrete set of positions inside the unit box. This has been avoided
in~\cite{MKK08} by introducing a small degree of polydispersity of the size of
spheres. Here, in the analytical treatment of the model, we do not need to
use this trick since we can impose directly that the solutions are translationally
invariant, thereby discarding all crystalline phases of the model. In this way
one effectively restricts to the amorphous phases, but one should keep in mind
that these are metastable with respect to the crystal in the true model.
Another possibility to remove the non-translationally invariant phase is to
introduce local ``random shifts'': on each link we introduce a quenched variable
$s_{ai} \in [0,1]^d$,
such that the corresponding particle appears in the corresponding box translated
by $s_{ai}$. On a tree with open boundary conditions,
this will not change the model since one can always perform a change of variable
to remove the shifts. In presence of loops however, the random shifts will frustrate
the periodic order. But since the cavity solution is based
on local recursions, the solutions describing the model with random shifts will be
the same as the translationally invariant solutions of the model without random
shifts.
A similar situation occurs when studying an antiferromagnetic model on
a random graph: local recursion relations allow both an
antiferromagnetic and an amorphous ordering. The former is irrelevant
on a random graph because long loops of odd length frustrate the antiferromagnetic
order. The antiferromagnetic system thus behaves like the spin glass in
which the signs of the couplings are quenched random variables.
See Ref.~\cite{col2} for a more detailed discussion in the context of a very similar
model.
We define $V_d(R)$ as the volume of a $d$-dimensional hypersphere of radius $R$; then
$V_s = V_d(D/2) = 2^{-d} V_d(D)$ is the volume of one hard sphere (since the spheres
have diameter $D$),
and $\f = p V_s$ is the packing fraction, that represents the fraction of the unit
box that is covered by the $p$ interacting spheres. It is trivial to check that there
are no configurations with $\f > 1$.
The parameter that controls
the packing fraction is the diameter $D$ since the box size is fixed; for this reason
in the following we will use directly the sphere diameter $D$ as control parameter and label the different
transitions as $D_{\rm K}$, $D_{\rm GCP}$, $D_{\rm d}$, etc.
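Since $V_d(R) = \pi^{d/2} R^d / \Gamma(d/2+1)$, the packing fraction is straightforward to evaluate numerically; a minimal sketch:

```python
import math

# Packing fraction phi = p * V_s with V_s = V_d(D/2), using the standard
# formula for the volume of a d-ball.
def sphere_volume(R, d):
    return math.pi ** (d / 2) * R ** d / math.gamma(d / 2 + 1)

def packing_fraction(D, p, d):
    return p * sphere_volume(D / 2.0, d)

# e.g. in d = 3, p = 6 spheres of diameter D = 0.4 cover
# phi = 6 * (4/3) * pi * 0.2**3 ~ 0.201 of the unit box.
```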
For a system of $p$ hard spheres in $d$ dimensions, we define the following quantities:
\begin{equation}\begin{split}
&Z^0_p = \int dx_{1}\cdots dx_{p} \prod_{i<j}^{1,p} \chi(x_{i},x_{j}) \ , \\
&g^0_p(x-y) = \frac1{p(p-1)} \left\langle \sum_{i\neq j}^{1,p} \d(x-x_i)\d(y-x_j) \right\rangle
= \frac{1}{Z^0_p} \int dx_3 \cdots dx_p \chi(x,y,x_3,\cdots,x_p)
\ ,
\end{split}\end{equation}
such that $Z^0_p$ is the partition function of $p$ hard spheres (apart from a $p!$),
and $g^0_p$ is related to the usual pair correlation function~\cite{Hansen} by
\begin{equation}
g(r) = \frac{p-1}{p} g^0_p(r) \ .
\end{equation}
For the following discussion, it will be useful to define
\begin{equation}\label{voidspace}
v_n(x_1,\cdots,x_n) = \int dx \prod_{i=1}^n \chi(x,x_i)
\end{equation}
which is the so called {\it void space} or {\it cavity volume}, namely
the volume available to insert an additional sphere in a box
given the positions
of $n$ other spheres, $\{x_1, \cdots, x_n \}$.
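The void space of Eq.~(\ref{voidspace}) is easily estimated by Monte Carlo sampling of trial centers; an illustrative sketch (not used in the calculations of this paper):

```python
import numpy as np

# Monte Carlo estimate of the void space v_n: the fraction of the unit
# torus where an extra sphere of diameter D can be placed without
# overlapping the n fixed spheres xs (shape (n, d)).
def void_space_mc(xs, D, n_samples=200000, seed=1):
    rng = np.random.default_rng(seed)
    x = rng.random((n_samples, xs.shape[1]))          # trial centers
    d = np.abs(x[:, None, :] - xs[None, :, :])
    d = np.minimum(d, 1.0 - d)                        # minimum image per axis
    dist = np.sqrt((d * d).sum(axis=2))               # shape (n_samples, n)
    return np.mean(np.all(dist >= D, axis=1))
```

For a single fixed sphere in $d=2$, the estimate reproduces $v_1 = 1 - \pi D^2$.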
\section{Cavity equations}
\label{sec:cavityeq}
The cavity method has now become a standard method to solve statistical models
defined on random graphs.
We will not explain here the method and refer the reader to~\cite{MM09,cavity}.
Here we only
write the equations for our specific case.
\subsection{Bethe free energy}
We define by $\partial i$ the set of boxes connected to variable $i$, and by
$\partial a$ the set of variables connected to box $a$.
On each link we define two fields: $\f_{a\rightarrow i}(x_i)$ is the
probability density of the variable
$x_i$ when connected only to the box $a$; $\psi_{i\rightarrow a}(x_i)$ is the
probability density
of the same variable when connected to all the boxes in its neighborhood but $a$.
Both are normalized to 1 and
they satisfy the equations:
\begin{equation}\label{recurrence}
\begin{split}
&\psi_{i\rightarrow a}(x_i) = \frac{1}{Z_{i\rightarrow a}} \prod_{b \in \partial i \setminus a} \f_{b\rightarrow i}(x_i) \ , \\
&\f_{a\rightarrow i}(x_i) = \frac{1}{Z_{a\rightarrow i}} \int
\left(\prod_{j\in \partial a \setminus i} dx_j \psi_{j\rightarrow a}(x_j) \right) \chi(a) \ ,
\end{split}
\end{equation}
which can be derived from the stationarity of the Bethe entropy:
\begin{equation}\label{Sbethegraph}
S = - \sum_{\text{links }a-i} \log \int dx_i \psi_{i\rightarrow a}(x_i) \f_{a\rightarrow i}(x_i)
+ \sum_a \log \int
\left(\prod_{j\in \partial a} dx_j \psi_{j\rightarrow a}(x_j) \right) \chi(a)
+\sum_i \log \int dx_i \prod_{a \in \partial i } \f_{a\rightarrow i}(x_i) \ .
\end{equation}
These equations have the general form of the cavity (or Bethe) equations that can be derived
for any model with local interactions~\cite{MM09}. With respect to
previous studies of frustrated systems with the cavity method, the main difference here (and the main
source of difficulty) is the fact that the variables $x$ are {\it continuous}.
Although the Bethe free energy is not variational in general, it has the property that the
cavity equations can be obtained imposing its stationarity with respect to the cavity fields.
In some special cases one can argue that it provides indeed an upper or lower bound to the
true free energy, but a proof of this is still lacking.
\subsection{Replica symmetric cavity equations}
The replica symmetric (RS) equations for such a regular graph are trivially obtained by dropping
the spatial dependence of the fields. In this case we use the notation
$Z_\f = Z_{a\rightarrow i}$ and $Z_\psi = Z_{i\rightarrow a}$, and we get
\begin{equation}\label{RSitera}
\begin{split}
&\psi(x) = \frac{1}{Z^{RS}_{\psi}} \f(x)^{z-1} \ , \\
&\f(x) = \frac{1}{Z^{RS}_{\f}} \int
\left(\prod_{j=1}^{p-1} dx_j \psi(x_j) \right) \chi(x,x_1,\cdots,x_{p-1}) \ ,
\end{split}
\end{equation}
and the RS entropy per particle is
\begin{equation}
S_{RS} = - z \log \int dx \psi(x) \f(x)
+ \frac{z}p \log \int
\left(\prod_{j=1}^p dx_j \psi(x_j) \right) \chi(x_1,\cdots,x_p)
+ \log \int dx \f(x)^z \ .
\end{equation}
These equations admit the trivial translationally invariant solution
$\psi(x)=\f(x)=1$ with $Z^{RS}_\psi = 1$ and
\begin{equation}
Z^{RS}_\f = \int dx \int \left( \prod_{j=1}^{p-1} dx_j \psi(x_j)
\right) \chi(x,x_1,\cdots,x_{p-1}) \equiv Z^0_p \ ,
\end{equation}
that is the partition function of $p$ Hard Spheres in the unit box.
Therefore the entropy of the RS phase is
\begin{equation}\label{SRS}
S_{RS} = \frac{z}{p} \log Z^0_p \ .
\end{equation}
\subsection{1-Step replica symmetry breaking cavity equations}
In the standard interpretation~\cite{MM09}, the glass phase is signaled
by the appearance of multiple solutions $\psi_{i\rightarrow a}^{(\a)}$,
$\f_{a\rightarrow i}^{(\a)}$, of Eq.~(\ref{recurrence}).
Each of these solutions represents a glass state with entropy $s_\a$ given by the
Bethe entropy (\ref{Sbethegraph}) computed on the corresponding set of fields.
Although one does not have direct access to individual glassy solutions (since the
direct numerical solution of the Bethe equations by iteration on a single graph is extremely unstable
in this region), a statistical treatment of the properties of the solutions in this regime
exists and goes under the name of 1-step replica symmetry breaking (1RSB)
description~\cite{cavity}.
It is based on a replicated
entropy $S(m)$, given by the logarithm of the sum over all solutions $\a$
of the corresponding partition
function $Z_\a = e^{N s_\a}$ raised to the power $m$~\cite{Mo95}.
The latter is computed by looking at the evolution of the solutions of the Bethe equations
under an iteration that adds one more variable to the graph~\cite{cavity}, or more simply by
introducing an auxiliary model and assuming that a RS description holds for that model~\cite{MM09}.
We do not discuss here these derivations and only report the resulting equations for our model,
which are the following:
\begin{equation}\label{S1RSB}
\begin{split}
& S(m) = \frac{1}N \log \sum_\a Z_\a^m = m s(m) + \Si(m) = -z S_{link}(m) + \frac{z}p S_{box}(m) + S_{site}(m) \\
& S_{link}(m) = \log \int d{\cal P}[\psi] d{\cal P}[\f] \left[ \int dx \psi(x) \f(x) \right]^m
\equiv \log \left\langle Z_{link}^m \right\rangle \ , \\
& S_{box}(m) = \log \int d{\cal P}[\psi_1]\cdots d{\cal P}[\psi_p]
\left[ \int \left( \prod_{j=1}^p \psi_{j}(x_j) dx_j \right) \chi(x_1,\cdots,x_p) \right]^m
\equiv \log \left\langle Z_{box}^m \right\rangle \ , \\
& S_{site}(m) = \log \int d{\cal P}[\f_1]\cdots d{\cal P}[\f_z]
\left[ \int dx \prod_{i=1}^z \f_i(x) \right]^m \equiv \log \left\langle Z_{site}^m \right\rangle\ .
\end{split}\end{equation}
The stationarity of this function with respect to ${\cal P}[\psi]$ and ${\cal P}[\f]$ gives the 1RSB equations:
\begin{equation}\label{rec1RSB}
\begin{split}
& {\cal P}[\psi] = \frac1{\ZZ_\psi} \int \prod_{i=1}^{z-1} d{\cal P}[\f_i]
\d\left[ \psi(x) - \frac{1}{Z_\psi} \prod_{i} \f_{i}(x) \right] (Z_\psi)^m \ , \\
& {\cal P}[\f] = \frac1{\ZZ_\f} \int \prod_{i=1}^{p-1} d{\cal P}[\psi_i]
\d\left[ \f(x) - \frac{1}{Z_\f} \int
\prod_j dx_j \psi_j(x_j) \chi(x,x_1,\cdots,x_{p-1}) \right] (Z_\f)^m \ ,
\end{split}\end{equation}
where the normalization constants are
\begin{equation}\label{reweighting}
\begin{split}
& Z_{\psi}[\f_1,\cdots,\f_{z-1}] = \int dx \prod_{i} \f_{i}(x) \ , \\
& Z_\f[\psi_1,\cdots,\psi_{p-1}] = \int dx \prod_j dx_j \psi_j(x_j) \chi(x,x_1,\cdots,x_{p-1}) \ , \\
& \ZZ_\psi = \left\langle (Z_\psi)^m \right\rangle \ , \\
& \ZZ_\f = \left\langle (Z_\f)^m \right\rangle \ .
\end{split}\end{equation}
The internal entropy can then be written, using the standard method of~\cite{Mo95}, as
\begin{equation}\label{sSig1RSB}
\begin{split}
& s(m) = \frac{\partial S(m)}{\partial m} =
-z \frac{\left\langle Z_{link}^m \log Z_{link}\right\rangle }{\left\langle Z_{link}^m \right\rangle}
+ \frac{z}p \frac{\left\langle Z_{box}^m \log Z_{box}\right\rangle }{\left\langle Z_{box}^m \right\rangle}
+ \frac{\left\langle Z_{site}^m \log Z_{site}\right\rangle }{\left\langle Z_{site}^m \right\rangle} \\
\end{split}\end{equation}
and the complexity is $\Si(m) = S(m) - m s(m)$.
The parameter $m$ is the 1RSB parameter, whose equilibrium value must be
fixed by imposing that the replicated entropy is stationary~\cite{MPV87}.
\section{The stability of the RS solution}
\label{sec:RSstability}
To study the stability of the RS phase we perturb around it:
\begin{equation}
\psi_{i\rightarrow a}(x) = 1 + A e^{-ikx + i\th_{i\rightarrow a}} \ ,
\end{equation}
and look at the linear stability of $A$, assuming that the phase $\th$ is
random, i.e. that when substituting in the right hand side of (\ref{RSitera})
each $\psi$ gets a random independent phase. This is done in order to enforce
translational invariance; otherwise we would study the instability towards
modulated phases, which is interesting in itself but which we do not consider here, for
the reasons discussed in the introduction.
Note that we have $k = 2 \pi (n_1,\cdots,n_d)$, where $n_i$ are integer numbers.
Then at first order we have
\begin{equation}\label{STlin}
A e^{-i k x + i \th} = A \frac{1}{Z^0_p} \sum_{a=1}^{z-1} \sum_{j=1}^{p-1}
\int dx_2 \cdots dx_{p} \chi(x,x_2,\cdots,x_{p}) e^{-i k x_2 + i \th_{j \rightarrow a}} \ .
\end{equation}
Now we can bring the factor $e^{-ikx}$ to the other side and integrate over
$x$; then, taking the squared modulus and using the fact that the $\th_{j \rightarrow a}$ are random and
uncorrelated, we obtain the final result
\begin{equation}
A^2 = A^2 (z-1)(p-1) \left| \frac{1}{Z^0_p}\int dx_1 \cdots dx_{p}
\chi(x_1,\cdots,x_{p}) e^{i k (x_1-x_2)} \right|^2 \ .
\end{equation}
Defining
\begin{equation}\begin{split}
&g^0_p(k) = \int dx dy e^{ik(x-y)} g^0_p(x-y) =
\frac{1}{Z^0_p}\int dx_1 \cdots dx_{p}
\chi(x_1,\cdots,x_{p}) e^{i k (x_1-x_2)} \ ,
\end{split}\end{equation}
the stability condition is
\begin{equation}\label{RSstabcondition}
\sqrt{(p-1)(z-1)} |g^0_p(k)| \leq 1 \ , \hskip2cm \forall k =2 \pi (n_1,\cdots,n_d) \neq 0 \ .
\end{equation}
Hence from the knowledge of $Z^0_p$ and $g^0_p(k)$
we can compute the RS entropy and the stability of the RS solution.
\subsection{Results for $p=2$, any dimension}
For $p = 2$ and $D < 1/2$ we have simply $g^0_2(x-y) = \chi(x-y)/(1-V_d(D))$, and for $k\neq 0$
\begin{equation}
g^0_2(k) = \int_{[0,1]^d} dx \, \frac{e^{i k x} \chi(x)}{1-V_d(D)} =
-\int_{[-1/2,1/2]^d} dx \, \frac{e^{i k x} \th(|x|<D)}{1-V_d(D)} =
- \left(\frac{2\pi D}k\right)^{d/2} \frac{J_{d/2}(k D)}{1-V_d(D)}
\ .
\end{equation}
One can show that for the values of $D$ we are interested in, the maximum
of $g^0_2(k)$ is attained at $k = 2 \pi$, i.e. at the smallest $k$.
Then the condition on $D$ is
\begin{equation}
\frac{D^{d/2} J_{d/2}(2\pi D)}{1-V_d(D)} \leq \frac1{\sqrt{z-1}} \ .
\end{equation}
In the limit $z\rightarrow\io$, where $D$ is small,
we can use $J_{n}(x) \sim (x/2)^{n} / \G(n+1)$ and neglect the denominator,
obtaining
\begin{equation}
\frac{D^{d/2} J_{d/2}(2\pi D)}{1-V_d(D)} \sim \frac{\p^{d/2} D^d}{\G(d/2+1)}
= V_d(D) \leq \frac1{\sqrt{z-1}} \ .
\end{equation}
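In $d=1$ the half-integer Bessel function reduces to elementary functions, $J_{1/2}(x) = \sqrt{2/(\pi x)}\, \sin x$, so that $g^0_2(2\pi n) = -2\sin(2\pi n D)/[2\pi n(1-2D)]$. The sketch below (our own illustrative check; function names are hypothetical) verifies this closed form against direct quadrature and locates the instability diameter by bisection:

```python
import math

def g02_k_d1_closed(D, n=1):
    """Closed form of g^0_2(k) at k = 2*pi*n in d = 1 (valid for D < 1/2):
    J_{1/2} reduces to elementary functions."""
    k = 2.0 * math.pi * n
    return -2.0 * math.sin(k * D) / (k * (1.0 - 2.0 * D))

def g02_k_d1_quad(D, n=1, steps=200_000):
    """Direct midpoint-rule evaluation of int e^{ikx} chi(x) dx / (1-2D);
    the sine part vanishes by symmetry of chi."""
    k = 2.0 * math.pi * n
    h = 1.0 / steps
    s = 0.0
    for i in range(steps):
        x = -0.5 + (i + 0.5) * h
        if abs(x) > D:  # chi(x) = 1 outside the exclusion interval
            s += math.cos(k * x)
    return s * h / (1.0 - 2.0 * D)

def d_rs_p2_d1(z, tol=1e-10):
    """Bisection for the diameter where sqrt(z-1) |g^0_2(2 pi)| = 1;
    |g^0_2(2 pi)| grows monotonically with D in the range considered."""
    f = lambda D: math.sqrt(z - 1.0) * abs(g02_k_d1_closed(D)) - 1.0
    lo, hi = 1e-6, 0.45
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```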
\subsection{Results for $d=1$, any $p$}
In $d=1$ we get, from the exact solution
\begin{equation}\begin{split}
& Z^0_p = [1-p D]^{p-1} \ , \\
& g^0_p(k) = \frac{1}{p-1} \sum_{n=0}^{p-2} e^{-i (n+1) k D}
\ _1 F_1[1+n;p;-i(1-p D) k] \ .
\end{split}\end{equation}
where $_1 F_1[a;b;z]$ is the confluent hypergeometric function of the first
kind. Also in this case, the lowest $k$ is the first to become unstable.
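The exact expression for $Z^0_p$ can be checked by direct sampling: it is the probability that $p$ uniform points on the unit ring are pairwise farther than $D$ (with periodic distance, this is equivalent to all consecutive spacings exceeding $D$). A minimal Monte Carlo sketch, with function names of our choosing:

```python
import random

def ring_dist(x, y):
    """Periodic distance on the unit ring [0, 1)."""
    d = abs(x - y) % 1.0
    return min(d, 1.0 - d)

def z0p_mc(p, D, n_samples=200_000, seed=1):
    """MC estimate of Z^0_p in d = 1: fraction of uniform p-point
    configurations with all pairwise distances larger than D."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(n_samples):
        xs = [rng.random() for _ in range(p)]
        if all(ring_dist(xs[i], xs[j]) > D
               for i in range(p) for j in range(i + 1, p)):
            ok += 1
    return ok / n_samples
```

For $p=3$, $D=0.1$ this reproduces $(1-pD)^{p-1} = 0.49$ within statistical errors.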
\subsection{Results for $d=2$ and $p=3$}
As a last interesting case, we consider $d=2$ and $p=3$.
In the following for simplicity
we consider $D<1/4$ to avoid problems coming from periodic boundary
conditions.
We start with the computation of the partition function $Z^0_3$
of three spheres in a box, which can be done using the standard virial expansion.
For convenience we fix the first sphere,
as well as the origin of the coordinate frame, in the center of the box.
The center of the second sphere can be anywhere in the box outside a disk of
radius $D$ centered in the origin.
Given the position of the second sphere,
the third sphere can be anywhere outside the union of two disks centered
around the first two spheres.
If the second sphere is at distance $r = |x_2-x_1|$ from the origin $x_1=0$, the free volume accessible
to the third sphere is
\begin{equation}
v_2(x_1,x_2) = 1 - 2 \pi D^2 + \theta(2D-r) D^2 \left( 2 \arccos \frac{r}{2D} - \frac{r}{2D} \sqrt{4 - \frac{r^2}{D^2}} \right)
\end{equation}
This has to be integrated over the position of the second sphere. There are three possible cases:
\begin{enumerate}
\item $r \in [D,2D]$; in this case the first and second exclusion spheres have an overlap, and the second sphere
can rotate at any angle without hitting the boundary of the box. Therefore one has
\begin{equation}
Z^0_3(1) = 2\pi \int_D^{2D} dr \, r \left[1 - 2 \pi D^2 +D^2 \left( 2 \arccos \frac{r}{2D} - \frac{r}{2D} \sqrt{4 - \frac{r^2}{D^2}} \right) \right]
\end{equation}
\item $r \in [2 D, 1/2]$ (recall that the box has side 1 so $r$ is at most $1/2$); in this case the first and second exclusion spheres have no overlap, and the second sphere can rotate
at any angle, therefore
\begin{equation}
Z^0_3(2) = 2\pi \int_{2D}^{1/2} dr \, r \left(1 - 2 \pi D^2 \right)
\end{equation}
\item $r \in [1/2,\sqrt{2}/2]$; also in this case there is no overlap contribution, but the second sphere can only be at some angles because of the square shape
of the box.
The total angle that can be spanned is $8 (\pi/4 - \arccos(1/(2r)))$, therefore
\begin{equation}
Z^0_3(3) = 8 \int_{1/2}^{\sqrt{2}/2} dr \, r \left(1 - 2 \pi D^2 \right) \left(\frac{\pi}4 - \arccos \left( \frac1{2r} \right) \right)
\end{equation}
\end{enumerate}
All the integrals can be evaluated and summing the three contributions one gets the final result
\begin{equation}\label{Z03}
Z^0_3 = 1 - 3 \pi D^2 + \frac14 \pi D^4 \big( 3 \sqrt{3} + 8 \pi \big) \ ,
\hskip2cm
D < 1/4 \ .
\end{equation}
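Eq.~(\ref{Z03}) can be checked against a direct Monte Carlo estimate of the probability that three uniform points in the periodic unit square (minimum-image distance) are pairwise farther than $D$. The snippet below is an illustrative check with function names of our choosing:

```python
import math
import random

def torus_dist2(a, b):
    """Squared minimum-image distance on the unit torus."""
    dx = abs(a[0] - b[0]); dx = min(dx, 1.0 - dx)
    dy = abs(a[1] - b[1]); dy = min(dy, 1.0 - dy)
    return dx * dx + dy * dy

def z03_mc(D, n_samples=200_000, seed=2):
    """MC estimate of Z^0_3 in d = 2."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(n_samples):
        pts = [(rng.random(), rng.random()) for _ in range(3)]
        if all(torus_dist2(pts[i], pts[j]) > D * D
               for i in range(3) for j in range(i + 1, 3)):
            ok += 1
    return ok / n_samples

def z03_exact(D):
    """Closed form of Eq. (Z03), valid for D < 1/4."""
    return (1.0 - 3.0 * math.pi * D**2
            + 0.25 * math.pi * D**4 * (3.0 * math.sqrt(3.0) + 8.0 * math.pi))
```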
We also need the value of the pair correlation at contact, $g^0_3(D)$. Following the same reasoning this is given by
\begin{equation}
g^0_3(D) = \frac{v_2(r=D)}{Z^0_3} = \frac{1 - 2 \pi D^2 + D^2 \left( \frac{2 \pi}{3} - \frac{\sqrt{3}}{2} \right)}{1 - 3 \pi D^2 + \frac14 \pi D^4 \big( 3 \sqrt{3} + 8 \pi \big)} \ ,
\hskip2cm
D < 1/4 \ .
\end{equation}
Finally, $g^0_3(x-y) = v_2(x,y)/Z^0_3$, from which one can compute $g^0_3(k)$ numerically and determine
the stability of the RS solution.
\section{The Gaussian approximation}
We now introduce an approximation to describe the 1RSB phase of the model. We assume that
the fields $\psi_j(x)$ and $\f_i(x)$ are
localized around a position which is randomly distributed
in the box (this maintains the global translational invariance).
This Ansatz, of course, is not a solution of the 1RSB equations.
However, we expect it to provide a reasonable estimate of $S(m)$,
one that should become more and
more accurate for large connectivity and close to the random close-packing
point. Moreover, we will see in the following that, even if
the variational nature of the replicated entropy cannot be proven,
these approximations give upper bounds for $D_{\rm K}$.
For this reason we will refer from
now on to these approximations as ``variational'' approximations.
Note that if a variational approximation predicts that the Kauzmann
diameter is smaller than the diameter where the RS solution becomes unstable, $D_{\rm K} < D_{\rm RS}$,
then we know for sure that there is a discontinuous transition
occurring at a value of $D$ smaller than $D_{\rm RS}$.
We assume a Gaussian shape for the fields, which leads
to the following assumption for their distribution:
\begin{equation}\begin{split}
&{\cal P}[\psi] = \int dX \, \d\left[ \psi(x) - \frac{e^{-\frac{(x-X)^2}{2 A}}}{(2 \p
A)^{d/2}} \right] \ , \\
&{\cal P}[\f] = \int dX \, \d\left[ \f(x) - \frac{e^{-\frac{(x-X)^2}{2 \d A}}}{(2 \p
\d A)^{d/2}} \right] \ .
\end{split}\end{equation}
We substitute this Ansatz in the Bethe free energy (\ref{S1RSB}) and determine
the variational parameters $A$ and $\d$ by its extremization.
In the following we will use the definition
$\g_A(x) = \frac{e^{-\frac{x^2}{2 A}}}{(2 \p A)^{d/2}}$.
Substituting the expressions above in (\ref{S1RSB}), we obtain the following
results:
\begin{equation}\begin{split}
&S_{link} = \log \left[ m^{-d/2} [2 \pi (1+\d) A]^{d (1-m)/2} \right] \ , \\
&S_{site} = \log \left[ m^{(1-z)d/2} z^{(1-m)d/2} (2\pi \d A)^{-(1-m)(1-z)d/2}
\right] \ . \\
\end{split}
\end{equation}
Note that $S_{box}$ does not depend on $\d$. Therefore we first write the
contribution of $S_{link}$ and $S_{site}$ and optimize with respect to $\d$:
\begin{equation}
S_{site}-z S_{link} = -\frac{d}2 (1-m) \log (2 \pi A)
+\frac{d}2 \log m + \frac{d}2(1-m) \log \left[ \frac{z \d^{z-1}}{(1+\d)^z}
\right] \ .
\end{equation}
The optimization is straightforward and gives $\d = z-1$ as expected from
the first Eq.~(\ref{recurrence}). The optimized result is
\begin{equation}
S_{site}-z S_{link} = -\frac{d}2 (1-m) \log (2 \pi A)
+\frac{d}2 \log m + \frac{d}2(1-m) (z-1) \log \left[ 1-\frac{1}z
\right] \ .
\end{equation}
The last term to be computed is $S_{box}$, which has the form:
\begin{equation}
S_{box} = \log \int dX_1 \cdots dX_p \left[ \int dx_1\cdots dx_p
\g_A(x_1-X_1) \cdots \g_A(x_p-X_p) \chi(x_1,\cdots,x_p) \right]^m
\end{equation}
Unfortunately this cannot be computed exactly and we have to resort to
further approximations.
\subsection{Small cage expansion, first order}
The small cage expansion proceeds as follows~\cite{PZ10}. First we assume that $m$ is
an integer and write $S_{box}$ as:
\begin{equation}
S_{box} = \log \int d\bar x_1 \cdots d\bar x_p \r(\bar x_1) \cdots \r(\bar
x_p) \prod_{i<j}^{1,p} \bar \chi(\bar x_i,\bar x_j) \ ,
\end{equation}
where $\bar x = (x_1,\cdots,x_m)$ is the coordinate of a ``molecule'' made of
$m$ particles,
$\bar \chi(\bar x,\bar y) = \prod_{a=1}^m \chi(x_a,y_a)$, and
$\r(\bar x) = \int dX \prod_{a=1}^m \g_A(x_a - X)$.
Observing that $\int dx_2 \cdots dx_m \r(\bar x) = 1$,
we write
\begin{equation}\begin{split}
S_{box} &= \log \int d\bar x_1 \cdots d\bar x_p \r(\bar x_1) \cdots \r(\bar
x_p) \prod_{i<j}^{1,p} [\bar \chi(\bar x_i,\bar x_j) - \chi(x_{1i},x_{1j})
+\chi(x_{1i},x_{1j})] \\ &\sim
\log \left[ \int dx_{11}\cdots dx_{1p} \prod_{i<j}^{1,p} \chi(x_{1i},x_{1j})
+ \sum_{i<j}^{1,p}
\int dx_{11}\cdots dx_{1p} \left(\prod_{i'<j'}^{1,p} \chi(x_{1i'},x_{1j'})\right)
Q(x_{1i}-x_{1j})
\right]
\ ,
\end{split}\end{equation}
where we omitted the second-order terms in the expansion in powers of $\bar\chi -
\chi_1$, and we defined
\begin{equation}
Q(x-y) = \int dx_1\cdots dx_m dy_1\cdots dy_m \r(\bar x) \r(\bar y) \left[
\prod_{a=2}^m \chi(x_a,y_a) - 1 \right] \ .
\end{equation}
In \cite{PZ10} it is shown that the second order gives a contribution $O(A)$
and that at lowest order (see Appendix C3 of \cite{PZ10}) $Q(r) =2 \sqrt{A}
Q_0(m) \d(r-D)$, where $Q_0(m)$ is a function of $m$ defined in
\cite{PZ10} as:
\begin{equation}
Q_0(m) = \int_{-\infty}^{\infty} dt \left[\Theta(t)^m-\Theta(t)\right] \ , \hskip1cm
\Theta(t)=\frac{1}{2}[1+\text{erf}(t)]=
\frac{1}{\sqrt{\pi}}\int_{-\infty}^t dx \, e^{-x^2} \ .
\end{equation}
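The function $Q_0(m)$ is easily evaluated by numerical quadrature; the sketch below (our own code, assuming the definition above) checks that $Q_0(1)=0$ and that the slope at $m=1$ is consistent with the value $Q_0 = 0.638$ quoted in~\cite{PZ10}:

```python
import math

def theta(t):
    """Theta(t) = [1 + erf(t)] / 2."""
    return 0.5 * (1.0 + math.erf(t))

def q0(m, lo=-8.0, hi=8.0, n=4000):
    """Composite-Simpson estimate of Q_0(m), for m > 0. The integration
    range is adequate unless m is very small, since the integrand spreads
    over |t| ~ 1/sqrt(m) as m -> 0."""
    h = (hi - lo) / n
    s = (theta(lo) ** m - theta(lo)) + (theta(hi) ** m - theta(hi))
    for i in range(1, n):
        t = lo + i * h
        w = 4.0 if i % 2 else 2.0
        s += w * (theta(t) ** m - theta(t))
    return s * h / 3.0
```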
We then get
\begin{equation}\begin{split}
S_{box} &\sim \log Z^0_p + \frac{p(p-1)}2 \int dx dy Q(x-y) g^0_p(x-y) \\ &=
\log Z^0_p + \frac{p(p-1)}2 \frac{2d \sqrt{A}}{D} Q_0(m) g^0_p(D) V_d(D)
\end{split}\end{equation}
and collecting all the terms we get
\begin{equation}\begin{split}
S(m) = &\frac{d}2 (m-1) \log (2 \pi A)
+\frac{d}2 \log m + \frac{d}2(1-m) (z-1) \log \left[ 1-\frac{1}z
\right] \\
&+ \frac{z}p \log Z^0_p + \frac{z (p-1)}2 \frac{2d \sqrt{A}}{D} Q_0(m) g^0_p(D)
V_d(D) \ .
\end{split}\end{equation}
Optimization with respect to $A$ gives
\begin{equation}\label{Astar_Gauss}
\sqrt{A^*} = D \frac{1-m}{Q_0(m)} \frac{1}{z (p-1) V_d(D) g^0_p(D)} \ ,
\end{equation}
and
\begin{equation}\label{SmGauss}
\begin{split}
S(m) = &\frac{d}2 (m-1) \log (2 \pi A^*)
+\frac{d}2 \log m + d(1-m)+ \frac{d}2(1-m) (z-1) \log \left[ 1-\frac{1}z
\right]
+ \frac{z}p \log Z^0_p \ .
\end{split}\end{equation}
In particular, using the results $Q_0(m\rightarrow 0) \sim \sqrt{\pi/(4m)}$ and
$Q_0(m \sim 1)=Q_0 \times (1-m)$ with $Q_0 = 0.638$~\cite{PZ10}, one can show
that this expression trivially reduces to the RS entropy (\ref{SRS}) for $m=1$,
and that
\begin{equation}\nonumber
\begin{split}
&\Si_j = \lim_{m\rightarrow 0} S(m) =
-d \log \left[ \frac{2\sqrt{2} D}{z (p-1) V_d(D) g^0_p(D)} \right] + d +
\frac{d}2 (z-1) \log \left[ 1-\frac{1}z \right]+ \frac{z}p \log Z^0_p \ , \\
&\Si_{eq} = - \lim_{m\rightarrow 1} m^2 \partial_m [S(m)/m] =
- \frac{d}2 \log\frac{2\pi}{e} - d \log \left[ \frac{D}{z (p-1) V_d(D) g^0_p(D)
Q_0}\right]
+ \frac{d}2 (z-1) \log \left[ 1-\frac{1}z \right]
+ \frac{z}p \log Z^0_p
\end{split}\end{equation}
\begin{figure}[t]
\includegraphics[width=.9\textwidth]{gauss_d.eps}
\caption{Special values of the sphere diameter as functions of
$z$ at $p=2$ for different values of $d$ in the Gaussian approximation: $D_{\rm RS}$, beyond which
the RS solution becomes unstable, $D_{\rm GCP}$, where the pressure
diverges, and $D_{\rm K}$,
where the Kauzmann transition takes place. When $D_{\rm K}<D_{\rm RS}$ the transition is necessarily first order.}
\label{gauss2}
\end{figure}
\begin{figure}[t]
\includegraphics[width=.9\textwidth]{gauss_p.eps}
\caption{$D_{\rm RS}$, $D_{\rm GCP}$ and $D_{\rm K}$ as functions of
$z$ for different values of $p$ at $d=1$ in the Gaussian approximation.}
\label{gauss}
\end{figure}
\subsection{Results for $p=2$, any dimension}
For $p=2$ we have trivially $Z^0_2 = 1 - V_d(D)$ and $g^0_2(x,y)=\chi(x,y)/Z^0_2$,
therefore $g^0_2(D)=1/Z^0_2$. We get
\begin{equation}\begin{split}
S(m) &= \frac{d}2 (m-1) \log \left[ \frac{2 \pi D^2 (1-V_d(D))^2}{z^2 V_d(D)^2} \frac{(1-m)^2}{Q_0(m)^2}\right]
+\frac{d}2 \log m \\ &+ d(1-m)+ \frac{d}2(1-m) (z-1) \log \left[ 1-\frac{1}z
\right] + \frac{z}2 \log [1-V_d(D)] \ ,
\end{split}\end{equation}
and
\begin{equation}\nonumber
\begin{split}
&\Si_j = \lim_{m\rightarrow 0} S(m) =
-\frac{d}2 \log \left[ \frac{8 D^2 (1-V_d(D))^2}{z^2 V_d(D)^2} \right] + d +
\frac{d}2 (z-1) \log \left[ 1-\frac{1}z \right]+ \frac{z}2 \log [1-V_d(D)] \ , \\
&\Si_{eq} = - \lim_{m\rightarrow 1} m^2 \partial_m [S(m)/m] =
- \frac{d}2 \log \left[ \frac{2 \pi D^2 (1-V_d(D))^2}{z^2 V_d(D)^2
Q_0^2}\right]
+\frac{d}2 + \frac{d}2 (z-1) \log \left[ 1-\frac{1}z \right]
+ \frac{z}2 \log [1-V_d(D)]
\end{split}\end{equation}
and $D_{\rm K}$ is defined by $\Si_{eq}=0$ while $D_{\rm GCP}$ is defined by $\Si_j=0$.
The results are reported in Fig.~\ref{gauss2}.
\subsection{Results for $d=1$, any $p$}
Also in $d=1$ the integrations can be performed for all $p$. We get
\begin{equation}\begin{split}
& Z^0_p = [1-p D]^{p-1} \ , \\
& g^0_p(D) = \frac{1}{1-p D} \ .
\end{split}\end{equation}
Then
\begin{equation}\begin{split}
S(m) = &\frac{1}2 (m-1) \log \left[\frac{\pi (1-p D)^2}{2 z^2 (p-1)^2 } \frac{(1-m)^2}{Q_0(m)^2} \right]
+\frac{1}2 \log m \\ & + (1-m)+ \frac{1}2(1-m) (z-1) \log \left[ 1-\frac{1}z
\right]
+ \frac{z(p-1)}p \log (1-p D) \ ,
\end{split}\end{equation}
and
\begin{equation}\nonumber
\begin{split}
&\Si_j = -
\frac{1}2 \log \left[\frac{2 (1-p D)^2}{z^2 (p-1)^2 } \right] + 1 + \frac{1}2 (z-1) \log \left[ 1-\frac{1}z \right]
+ \frac{z(p-1)}p \log (1-p D) \ , \\
&\Si_{eq} =
- \frac{1}2 \log \left[
\frac{\pi (1-p D)^2}{2 z^2 (p-1)^2
Q_0^2}\right]
+\frac{1}2 + \frac{1}2 (z-1) \log \left[ 1-\frac{1}z \right]
+ \frac{z(p-1)}p \log (1-p D) \ .
\end{split}\end{equation}
The results are reported in Fig.~\ref{gauss}.
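The transition diameters in the Gaussian approximation follow from the expressions above by simple root finding. The sketch below is illustrative code (the constant $Q_0 = 0.638$ is taken from~\cite{PZ10}; function names are ours) that locates $D_{\rm K}$ and $D_{\rm GCP}$ in $d=1$ by bisection on $\Si_{eq}$ and $\Si_j$:

```python
import math

Q0 = 0.638  # -Q_0'(1), taken from [PZ10]

def sigma_eq_d1(D, z, p):
    """Equilibrium complexity Sigma_eq in the Gaussian approximation,
    d = 1 (valid for D < 1/p)."""
    return (-0.5 * math.log(math.pi * (1 - p * D) ** 2
                            / (2.0 * z ** 2 * (p - 1) ** 2 * Q0 ** 2))
            + 0.5
            + 0.5 * (z - 1) * math.log(1.0 - 1.0 / z)
            + z * (p - 1) / p * math.log(1 - p * D))

def sigma_j_d1(D, z, p):
    """Complexity of jammed states Sigma_j (m -> 0 limit), d = 1."""
    return (-0.5 * math.log(2.0 * (1 - p * D) ** 2 / (z ** 2 * (p - 1) ** 2))
            + 1.0
            + 0.5 * (z - 1) * math.log(1.0 - 1.0 / z)
            + z * (p - 1) / p * math.log(1 - p * D))

def root_decreasing(f, lo, hi, tol=1e-12):
    """Bisection for the zero of a function that decreases in D."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For $p=2$, $z=10$ this gives $D_{\rm K} \approx 0.169$ and $D_{\rm GCP} \approx 0.231$, with $D_{\rm K} < D_{\rm GCP}$ as expected.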
\section{The delta approximation}
In this section we introduce
another variational approximation scheme, that we shall call the ``delta approximation''.
The motivation is that within the Gaussian Ansatz, $A \rightarrow 0$ at jamming: therefore,
both $\psi(x)$ and $\f(x)$ become delta functions in this limit.
We would therefore like to compute the free energy directly for delta function
fields; we expect this to give a simpler expression for the free energy, which should be
accurate close to jamming.
The problem is that the Gaussian expressions are divergent for $A\rightarrow 0$ unless $m$
also goes to zero proportionally to $A$.
This is due to the fact that {\it both} fields $\psi(x)$ and $\f(x)$
become delta functions for $A \rightarrow 0$. We therefore construct here a different approximation
by eliminating the field $\f(x)$ and making a delta function Ansatz only for the field $\psi(x)$:
in this way the field $\f(x)$ is computed exactly and in particular it is not a delta function.
One can show in general that, by using equations (\ref{rec1RSB}),
the field $\f(x)$ can be eliminated, and
the replicated entropy can be equivalently written as
\begin{equation}
S(m) = S_{site'} - \frac{z(p-1)}{p} S_{box}
\end{equation}
where $S_{box}$ is defined as in Eq.~(\ref{S1RSB}) and
\begin{equation}\begin{split}
S_{site'} &=
\log \int d{\cal P}[\psi^1_1]\cdots d{\cal P}[\psi^z_{p-1}]
\left[ \int dx \prod_{k=1}^z \int dx^k_1 \cdots dx^k_{p-1} \psi_1^k(x^k_1) \cdots \psi_{p-1}^k(x_{p-1}^k)
\chi(x,x^k_1,\cdots,x^k_{p-1}) \right]^m \\
&\equiv \log \left\langle Z_{site'}^m \right\rangle \ .
\end{split}\end{equation}
The ``delta approximation''
is then based on the following Ansatz for ${\cal P}[\psi]$:
\begin{equation}\label{PPdelta}
{\cal P}[\psi] = \int d X \, \delta \left[ \psi(x) - \delta (x - X) \right] \ ,
\end{equation}
namely on each site $i$ the probability distribution of the variable
$x_i$ is a delta function centered on an i.i.d. uniformly distributed random point.
Under approximation (\ref{PPdelta}),
the replicated entropy becomes
\begin{equation}\label{Smdelta}
\begin{split}
S(m) &= \log \int dX_1^1 \cdots dX_{p-1}^z \left( \int dx
\prod_{k=1}^z \chi(x,X^k_1,\cdots,X^k_{p-1}) \right)^m -
\frac{z(p-1)}{p} \log \int dX_1 \cdots dX_p \chi(X_1,\cdots,X_p) \\
&=
\log \int \left(
\prod_{k=1}^z dX_1^k \cdots dX_{p-1}^k \chi(X^k_1,\cdots,X^k_{p-1}) \right)
v_{z(p-1)}(X_1^1 \cdots X_{p-1}^z)^m
-
\frac{z(p-1)}{p} \log Z^0_p
\ ,
\end{split}\end{equation}
recalling the definition of $v_n$ in Eq.~(\ref{voidspace}).
Introducing the normalized measure of $n$ spheres
in a unit box,
\begin{equation}
d\mu(x_1 \cdots x_{n}) = \frac{dx_1 \cdots dx_{n} \chi(x_1 \cdots x_n)}{Z^0_{n}} \ ,
\end{equation}
we can rewrite $S(m)$ given in Eq.~(\ref{Smdelta})
in the equivalent form
\begin{equation}\label{Smdelta_sampling}
\begin{split}
S(m) = \log \int \left( \prod_{k=1}^z d\mu(X_1^k \cdots X_{p-1}^k) \right)
\left[ v_{z(p-1)}(X^1_1,\cdots,X^z_{p-1}) \right]^m
+ z \log Z^0_{p-1}
- \frac{z(p-1)}{p} \log Z^0_p \ .
\end{split}\end{equation}
In the following we study this expression for several specific values of $p$ and $d$.
In this section we will derive the expressions for the complexity, and in
section~\ref{sec:results} we will present the results together with a comparison with numerical resolution
of the cavity equations.
Note that for $m=1$ one can easily show that $S(m)$ given above is equal
to the RS entropy (\ref{SRS}), which is an important requirement for the consistency
of this approximation.
\subsection{One dimension}
\subsubsection{Results for $p=2$}
We first consider the simplest case, namely one spatial dimension and only
two-particles-in-a-box interactions ($p=2$). Since $Z^0_1 = 1$ and $Z^0_2 = (1-2D)$, we get
\begin{equation}
S(m) =\log \int \prod_{i=1}^z dX_i \left[ v_z(X_1 \cdots X_z)
\right]^m - \frac{z}2 \log(1-2 D) \ .
\end{equation}
We therefore have to compute the probability distribution $P_z(v)$ of
the void space left in $[0,1]$
for the insertion of a new particle, after having put $z$ particles in
random positions $\{ X_i \}$. Then we have
\begin{equation}\label{SmPvapp}
S(m) =\log \int_0^{1-2D} dv \, P_z(v) \, v^m - \frac{z}2 \log(1-2 D) \ .
\end{equation}
Note that $v$ ranges from $0$ (no void space) to $1-2D$ (in the limiting
case where all points $X_i$ coincide), and we expect that
$P_z(v) = p_0 \d(v) + P_z^{reg}(v)$ since a finite fraction of configurations have
zero void space at large enough $D$. Since the delta function does not
contribute to $S(m)$, we will omit it from now on.
In order to estimate $P_z(v)$ we can make the assumption that
whenever $v>0$, there is only one hole large enough to contribute to $v$
(i.e. a hole whose length is bigger than $2D$).
The function $P_z(v)$ can then be easily evaluated in the following way.
The hole that contributes to $v$ must have length $2 D + v$, and must
be delimited by two particles that we can choose in $z (z-1)$ different ways,
since particles are distinguishable. We can put the first particle in
$x_1=0$ and the second in $x_2 = 2 D + v$
(integration over $x_1$ can be omitted since it gives a factor of 1, the length
of the box).
The remaining $z-2$ particles must be in the space between $x_2$ and $1$,
therefore giving a contribution $(1 - 2 D - v)^{z-2}$.
Therefore, within the one-hole approximation, we get
$P_z(v) = z (z-1) (1-2 D -v)^{z-2}$.
We notice that the total probability of $v>0$
must be smaller than one since some configurations might have $v=0$.
This gives the condition
\begin{equation}\label{one-hole-app-p2}
\int_0^{1-2D} dv P_z(v) = z (1-2 D)^{z-1} \leq 1
\hskip1cm \Rightarrow \hskip1cm D \geq (1 - z^{-1/(z-1)})/2 \ ,
\end{equation}
which gives an estimate
of the limits of validity of the one-hole approximation.
Plugging the result for $P_z(v)$ in Eq.~(\ref{SmPvapp}), we get
an approximate formula for the replicated free energy which
depends on $z$ and $D$,
\begin{equation}
S(m) = \log\left(\frac{\Gamma(z+1)\Gamma(m+1)}{\Gamma(z+m)}\right) +\left( m - 1 + \frac{z}{2} \right)\log (1 - 2D) \ .
\end{equation}
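Both $P_z(v)$ and the formula above can be tested numerically: sampling $z$ uniform centers on the ring and computing the void space exactly, the first moment must equal $\left\langle v \right\rangle = (1-2D)^z$, and the one-hole formula reproduces this exactly at $m=1$. An illustrative sketch (function names are ours):

```python
import math
import random

def void_fraction(centers, D):
    """Exact uncovered length on the unit ring: each gap between consecutive
    (sorted) centers is reduced by the two covering arcs of length D."""
    cs = sorted(centers)
    n = len(cs)
    v = 0.0
    for i in range(n):
        gap = (cs[(i + 1) % n] - cs[i]) % 1.0
        v += max(0.0, gap - 2.0 * D)
    return v

def one_hole_moment(z, D, m):
    """One-hole prediction for E[v^m] at p = 2:
    Gamma(z+1) Gamma(m+1) / Gamma(z+m) * (1-2D)^(z-1+m)."""
    return (math.gamma(z + 1) * math.gamma(m + 1) / math.gamma(z + m)
            * (1.0 - 2.0 * D) ** (z - 1 + m))

def mc_moment(z, D, m, n_samples=20_000, seed=4):
    """Direct sampling of E[v^m] with z uniform centers on the ring."""
    rng = random.Random(seed)
    tot = 0.0
    for _ in range(n_samples):
        v = void_fraction([rng.random() for _ in range(z)], D)
        tot += v ** m
    return tot / n_samples
```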
Recall that $\Si_{eq} = -[m^2 \partial_m (S(m)/m) ]\vert_{m=1}$ and that
$D_{\rm K}$ is the point where the latter quantity vanishes.
We get
\begin{equation}\label{eq_delta_d1_p2}
\Si_{eq} =
\sum_{q=2}^{z}\frac{1}{q}
+\frac{z-2}{2}\log\left({1-2 D}\right)\ ,
\hskip1cm
D_{\rm K}=\frac{1}{2}\left[ 1-e^{-\frac{2}{z-2}\sum_{q=2}^{z} \frac1q }\right] \ .
\end{equation}
On the other hand, $\Si_j = S(m=0)$ and it vanishes at the
close packing diameter $D_{\rm GCP}$. We get
\begin{equation}\label{j_delta_d1_p2}
\Si_j =\log(z) + \frac{z-2}{2}\log(1-2 D) \ ,
\hskip1cm
D_{\rm GCP}=\frac{1}{2}\left[1-z^{-2/(z-2)}\right] \ .
\end{equation}
The complexity curve can be obtained explicitly, using $\Sigma = - m^2 \partial_m (S(m)/m)$ and $s=\partial_m S(m)$,
which gives the parametric representation:
\begin{equation}\begin{split}
& s=\log(1-2 D)-\sum_{q=1}^{z-1}\frac{1}{m+q}\ , \\
& \Sigma=\frac{z-2}{2}\log(1-2 D)+
\log\left( \frac{\Gamma(z+1)\Gamma(m+1)}{\Gamma(z+m)} \right)
+m \sum_{q=1}^{z-1}\frac{1}{m+q}\ .
\end{split}\end{equation}
One can check easily that
both critical diameters $D_{\rm K}$ and $D_{\rm GCP}$ are well
within the region of validity of the one-hole
approximation given by Eq.~(\ref{one-hole-app-p2}), and they
scale as
$D_{\rm K},D_{\rm GCP} \sim \log z / z$ in the large connectivity
limit.
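The closed forms for $D_{\rm K}$ and $D_{\rm GCP}$, together with their large-$z$ scaling, can be checked with a few lines of code (an illustrative sketch; function names are ours):

```python
import math

def dk_delta_p2(z):
    """Kauzmann diameter from Eq. (eq_delta_d1_p2) (requires z >= 3)."""
    h = sum(1.0 / q for q in range(2, z + 1))
    return 0.5 * (1.0 - math.exp(-2.0 * h / (z - 2)))

def dgcp_delta_p2(z):
    """Glass close packing diameter from Eq. (j_delta_d1_p2) (z >= 3)."""
    return 0.5 * (1.0 - z ** (-2.0 / (z - 2)))
```

Since $H_z < \log z + 1$, one always finds $D_{\rm K} < D_{\rm GCP}$, and the ratio $z D_{\rm K}/\log z$ slowly approaches $1$ at large $z$.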
The values of $D_{\rm K}$ and $D_{\rm GCP}$ can be compared to the stability of the RS
solution (which scales as $D_s \sim 1/\sqrt{z}$).
\subsubsection{Results for $p=3$}
We now consider the three-particles-in-a-box case $p=3$, still for $d=1$.
Since $Z^0_2 = 1-2D$ and $Z^0_3 = (1-3 D)^2$, we get from
Eq.~(\ref{Smdelta_sampling}):
\begin{equation}\label{SmPvapp_p3}
S(m) =\log \int_0^{1-3D} dv \, P_{2,z}(v) \, v^m + z \log(1-2 D) - \frac{4z}{3} \log(1-3D) \ ,
\end{equation}
where now $P_{2,z}(v)$ is the probability distribution of the void space in $[0,1]$
for the insertion of a new particle, after having thrown at random $z$
pairs of particles, each pair being at distance bigger than $D$.
The latter ranges from $0$ (no void space) to $1-3D$ (in the case where
each pair is exactly at distance $D$ and superimposed on all the others).
Within the same one-hole approximation,
we can approximate $P_{2,z}(v)$ as follows.
The hole must have length $L=2 D +v$.
We have to distinguish
between two different situations: {\it i)} the hole is delimited by the two particles of the same
couple; {\it ii)} the hole is delimited by particles belonging to two different couples.
In the case {\it i)} we have $z$ ways of choosing the couple. We then fix one of
the two particles of the couple in $0$ and the other one in $L$ (which
gives an extra factor $2$). Finally the other $z-1$ couples of particles
must be in the interval $[L,1]$ with the conditions that they are pairwise
compatible, which gives a factor
$f(L,D) = \int_L^1 dx \int_L^1 dy \, \chi(x,y) = (1 - L - D)^2$ for each pair.
With this definition the contribution due to the same couple finally reads:
$2 z f(L,D)^{z-1}$.
In the case {\it ii)}, instead, we can fix one particle of one couple in $0$
(we have $2 z$ ways to choose it) and one particle of another couple in $L$
(we have $2 (z-1)$ ways of choosing it). The free particle of the first couple
must be in $[L,1-D]$, due to
the condition that it is compatible with its partner
which has been fixed in $0$. This gives a contribution $(1 - L - D)$.
An analogous contribution comes from the free particle of the second
couple, which must be in the interval $[L+D,1]$.
The other $z-2$ couples must be in the interval $[L,1]$
and must satisfy the compatibility condition, and therefore give a contribution
$f(L,D)^{z-2}$. The sum of the two contributions is
$(4 z^2 - 2 z) (1 - L - D)^{2(z-1)}$, and it has to be normalized by the
total integral $(1-2 D)^z$; going back to $v = L -2D$ we get
\begin{equation}
P_{2,z}(v) = \frac{2 z (2 z - 1) (1 - v -3 D)^{2(z-1)}}{(1-2D)^z} \ .
\end{equation}
As in the previous case we get the condition
\begin{equation}\label{one-hole-app-p3}
\int_0^{1-3D} dv \, P_{2,z}(v) = \frac{2z (1-3 D)^{2z-1}}{(1-2D)^z} \leq 1 \ ,
\end{equation}
which gives a lower limit of validity in $D$ of the one-hole approximation.
Plugging this result into Eq.~(\ref{SmPvapp_p3}),
we get for the replicated entropy
\begin{equation}\label{eq_delta_d1_p3}
S(m) = \log \left[
\frac{ \Gamma(m+1) \Gamma(1 + 2 z) }
{ \Gamma(m + 2 z) }
\right] +
\left( m-1 + \frac{2z}{3} \right) \log(1-3 D)
\end{equation}
from which we get
\begin{equation}\label{j_delta_d1_p3}
\Si_{eq} = \sum_{q=2}^{2 z} \frac{1}{q}+ \frac{2 z-3}3 \log(1 - 3 D)
\ , \hskip1cm D_{\rm K} = \frac13 \left[ 1 -e^{-\frac{3}{2z-3}\sum_{q=2}^{2 z} \frac{1}{q}
} \right] \ ,
\end{equation}
and
\begin{equation}
\Si_j = \log(2 z) + \frac{2z-3}3 \log(1 - 3 D)
\hskip1cm
D_{\rm GCP}=\frac{1}{3}\left[ 1-(2z)^{-3/(2z-3)}\right]\ .
\end{equation}
We checked that both $D_{\rm GCP}$ and $D_{\rm K}$ are well within the region
of validity of the one-hole approximation; actually, the value of the left hand side
of Eq.~(\ref{one-hole-app-p3}) never exceeds 0.1.
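This check is straightforward to reproduce; the following sketch (our own, with hypothetical function names) evaluates the left-hand side of Eq.~(\ref{one-hole-app-p3}) at the transition diameters of Eq.~(\ref{j_delta_d1_p3}):

```python
import math

def harmonic_tail(n):
    """sum_{q=2}^{n} 1/q, the harmonic sum entering Sigma_eq."""
    return sum(1.0 / q for q in range(2, n + 1))

def DK_p3(z):
    """Kauzmann diameter for p = 3, d = 1."""
    return (1.0 - math.exp(-3.0 * harmonic_tail(2 * z) / (2 * z - 3))) / 3.0

def DGCP_p3(z):
    """Glass close packing diameter for p = 3, d = 1."""
    return (1.0 - (2.0 * z) ** (-3.0 / (2 * z - 3))) / 3.0

def one_hole_lhs_p3(z, D):
    """LHS of the one-hole validity condition: 2z (1-3D)^(2z-1) / (1-2D)^z."""
    return 2 * z * (1.0 - 3.0 * D) ** (2 * z - 1) / (1.0 - 2.0 * D) ** z
```

For $3 \le z \le 50$ the left-hand side stays below $0.1$ at both $D_{\rm K}$ and $D_{\rm GCP}$, consistent with the bound quoted above.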
Again, $D_{\rm GCP}$ and $D_{\rm K}$
are found to scale as $2 \log z /z$ for large $z$.
\subsubsection{Conjecture for arbitrary $p$ ($2,3,\cdots,\io$)}
A comparison of Eqs.~(\ref{eq_delta_d1_p2}) and (\ref{eq_delta_d1_p3})
and of Eqs.~(\ref{j_delta_d1_p2}) and (\ref{j_delta_d1_p3})
allows one to guess the form for general $p$:
\begin{equation}\begin{split}
& \Si_{eq} = \sum_{q=2}^{(p-1) z} \frac{1}{q}+ \frac{(p-1) z-p}p \log(1 - p D)
\ , \hskip1cm D_{\rm K} = \frac1p \left[ 1 -e^{-\frac{p}{(p-1)z-p}\sum_{q=2}^{(p-1) z} \frac{1}{q}
} \right] \ ,\\
&
\Si_j = \log((p-1) z) + \frac{(p-1)z-p}p \log(1 - p D) \ ,
\hskip1cm
D_{\rm GCP}=\frac{1}{p}\left[1-((p-1)z)^{-p/((p-1)z-p)}\right]\ .
\end{split}\end{equation}
however, we did not attempt to provide a proof of this conjecture.
\subsection{Two dimensions}
\label{sec:deltad2}
In the $d=2$ case we cannot compute $S(m)$ analytically and
we must resort to a numerical evaluation.
The numerical algorithm consists in writing a routine that is able to compute
the void space $v_n$, defined in Eq.~(\ref{voidspace}),
left by $n$ disks centered in a set of positions $\{X\}$.
We used an adaptation of the algorithm described in \cite{RT95} that
works as follows:
\begin{itemize}
\item We start from a grid of squares of side $\D \ll D$ (typically $\D = 1/100$).
These squares are considered as particular cases of convex polygons.
\item We add disks $X_1 \cdots X_n$ sequentially.
\item Each time a disk is added, we check if a given polygon is entirely
contained in the disk. In this case it is removed from the grid.
\item Next we consider the polygons that intersect the boundary of the new disk.
We approximate the boundary of the void space left in the old polygon by a
new polygon, by approximating the boundary of the disk by a straight line (which
is reasonable if $\D \ll D$, with error $O((\D/D)^2)$). The new polygon replaces the old one in the grid.
\item This construction is iterated until all disks have been placed. The area
of the polygons that survived is computed easily using Eq.~(1) of Ref.~\cite{RT95},
and it gives the void space $v_n$.
\end{itemize}
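The key geometric step, clipping a convex polygon against the linearized boundary of a disk, can be sketched as follows (a minimal illustration of the idea, not the implementation of Ref.~\cite{RT95}; the half-plane coefficients $a,b,c$, which encode the straight-line approximation of the disk boundary, are assumed to be supplied by the caller):

```python
def clip_convex(poly, a, b, c):
    """Clip a convex polygon (list of (x, y) vertices, in order) against the
    half-plane a*x + b*y + c >= 0, i.e. keep the part of the polygon lying
    on the void side of the linearized disk boundary."""
    out = []
    n = len(poly)
    for i in range(n):
        p, q = poly[i], poly[(i + 1) % n]
        fp = a * p[0] + b * p[1] + c
        fq = a * q[0] + b * q[1] + c
        if fp >= 0:
            out.append(p)                 # vertex survives the clipping
        if fp * fq < 0:                   # edge crosses the boundary
            t = fp / (fp - fq)
            out.append((p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1])))
    return out

def area(poly):
    """Area of a simple polygon via the shoelace formula."""
    s = 0.0
    n = len(poly)
    for i in range(n):
        x0, y0 = poly[i]
        x1, y1 = poly[(i + 1) % n]
        s += x0 * y1 - x1 * y0
    return abs(s) / 2.0
```

Iterating the clipping over all disks and summing the surviving areas then yields the void space $v_n$.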
The void space has to be averaged over the
distribution $\prod_{i=1}^z d\mu(X^i_1 \cdots dX^i_{p-1})$,
hence we must sample a configuration of $p-1$ spheres in a box
(and do this $z$ times independently).
This can be easily done for $p=2$ (one sphere, flat distribution) and
$p=3$ (put one sphere in the centre of the box, draw a second sphere outside
it, then translate randomly both spheres).
A correct sampling gives access to the void space distribution $P(v)$,
that has the form $P(v) = p_0 \d(v) + P^{reg}(v)$, as in one dimension.
In the following we omit the delta term and only consider $P^{reg}(v)$,
which therefore is not normalized to one (its integral gives the probability
that $v>0$).
From this we can compute Eq.~(\ref{Smdelta_sampling}) as we did in one dimension:
\begin{equation}
S(m) = \log \int dv \, P(v) \, v^m
+ z \log Z^0_{p-1}
- \frac{z(p-1)}{p} \log Z^0_p \ .
\end{equation}
Similarly we get, using the relation
$\int dv \, P(v) \, v = \left\langle v \right\rangle = (Z^0_p/Z^0_{p-1})^z$ (which can be easily checked
and also serves as a check of the correct sampling of $P(v)$),
\begin{equation}\begin{split}
\Si_{eq} &= \frac{z}{p} \log Z^0_p - \left( \frac{Z^0_{p-1}}{Z^0_p} \right)^z \int dv \, P(v) \, v \log v \ , \\
\Si_j &= \log \int dv \, P(v) + z \log Z^0_{p-1}
- \frac{z(p-1)}{p} \log Z^0_p \ .
\end{split}\end{equation}
Therefore both $\Si_{eq}$ and $\Si_j$ can be computed directly from $P(v)$; from them we can determine
the transition points $D_{\rm K}$ and $D_{\rm GCP}$.
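As a sanity check of the relation $\left\langle v \right\rangle = (Z^0_p/Z^0_{p-1})^z$, in $d=1$ with periodic boundary conditions (so that $Z^0_1 = 1$ and $Z^0_2 = 1-2D$, consistently with the values used above) the void space left on the unit circle by $z$ points can be computed exactly from the sorted gaps and its Monte Carlo average compared to $(1-2D)^z$. A sketch for $p=2$ (our own function names):

```python
import random

def void_d1(points, D):
    """Exact void space on the unit circle: total length of the region at
    circular distance > D from every point (assumes len(points) >= 2).
    Each gap between consecutive points contributes max(0, gap - 2D)."""
    pts = sorted(points)
    n = len(pts)
    v = 0.0
    for i in range(n):
        gap = (pts[(i + 1) % n] - pts[i]) % 1.0
        v += max(0.0, gap - 2.0 * D)
    return v

def mean_void_p2(z, D, trials=100000, seed=0):
    """Monte Carlo estimate of <v> after throwing z independent points;
    should approach (1 - 2D)^z = (Z2/Z1)^z."""
    rng = random.Random(seed)
    tot = 0.0
    for _ in range(trials):
        tot += void_d1([rng.random() for _ in range(z)], D)
    return tot / trials
```

The same comparison, with the polygon-based routine replacing `void_d1`, provides a check of the $d=2$ sampling.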
\section{Numerical solution of the equations}
In the previous sections we described two analytical approximate
methods yielding the phase diagram of the model.
Beyond these analytical approaches, one can also develop some
algorithms to solve the functional self-consistent 1RSB equations
numerically.
In this section we explain how it is possible to implement a numerical
procedure to solve Eqs.~(\ref{rec1RSB}) in the 1RSB phase for
each value of the connectivities, $z$ and $p$, of the diameter $D$, of
the 1RSB parameter $m$ and, in principle, of the spatial dimension
$d$ (in practice, numerical solutions can only be achieved in one and two
dimensions).
In order to do that we need representations of the cavity fields
$\f(x)$ and $\psi(x)$, and of the distributions
$\cal{P} [\f]$ and $\cal{P} [\psi]$, which can be treated by a computer.
As far as the cavity fields are concerned,
the simplest possibility is to discretize the volume $[0,1]^d$ where the
functions $\f(x)$ and $\psi(x)$ are defined using a regular hyper-cubic
grid with $q$ bins per side of size $1/q$. For instance, in one dimension
we discretize the interval $[0,1]$ in $q$ slices of length $1/q$, and
in two dimensions we discretize the square box on a square lattice of
$q\times q$ points.
The coordinate in the
box can assume a discrete set of values, $\vec{i}/q$, with $\vec{i}$ being
a $d$-dimensional vector whose components are integers between $0$ and
$q-1$, identifying the coordinate of the
position of the center of the sphere in the box.
If the position of the center of the sphere occupies a given site of the grid
$\vec{i}$, then all other sites of the lattice that are at Euclidean distance
from $\vec{i}$ smaller than the diameter of the sphere $D$
cannot be occupied by the center of another sphere (we call $n_D$ the number of sites thus excluded).
The volume of the sphere in the discretized version of the model can
be estimated as $V_s = n_D/(2q)^d$, and the packing fraction as
$\f = p V_s = p n_D/(2q)^d$.
Since in the continuum limit $V_s = V_d(1) (D/2)^d$,
we can then define an effective diameter as
$D_{\rm eff} = \frac1q \left[\frac{n_D}{V_d(1)} \right]^{1/d}$.
Note that in general $D_{\rm eff} \neq D$, and we take $D_{\rm eff}$ as representative
of the sphere diameter in the continuum limit. In particular, by symmetry,
in $d=1$ the number of excluded
sites always has the form $n_D = 1 + 2 a$ for integer $a$,
and one has
\begin{equation}\label{effDd1}
D_{\rm eff} = \frac{1+2 a}{2q} \ .
\end{equation}
In $d=2$ the parameter $n_D$ depends in an irregular manner on the choice of $D$
(since the square lattice we use breaks the spherical symmetry) and one has in general
\begin{equation}\label{effDd2}
D_{\rm eff} = \frac1q \sqrt{ \frac{n_D}{\pi} } \ .
\end{equation}
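A minimal sketch of this bookkeeping (our own; we count the center site itself among the $n_D$ excluded sites, which reproduces $n_D = 1+2a$ in $d=1$):

```python
import itertools
import math

def n_excluded(q, D, d):
    """Number of grid sites at Euclidean distance < D from a given site,
    on a hyper-cubic grid of spacing 1/q (center site included)."""
    r = int(math.ceil(D * q))
    count = 0
    for idx in itertools.product(range(-r, r + 1), repeat=d):
        if sum(i * i for i in idx) < (D * q) ** 2:
            count += 1
    return count

def D_eff(q, D, d):
    """Effective diameter D_eff = (1/q) (n_D / V_d(1))^(1/d),
    with V_1(1) = 2 and V_2(1) = pi."""
    Vd = {1: 2.0, 2: math.pi}[d]
    return (n_excluded(q, D, d) / Vd) ** (1.0 / d) / q
```

For instance, $q=10$ and $D=0.25$ give $n_D=5$ and $D_{\rm eff}=0.25$ in $d=1$, but $n_D=21$ and $D_{\rm eff}\approx 0.2585$ in $d=2$, illustrating the irregular dependence on $D$ mentioned above.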
In the discretized version, the fields $\f(x)$ and $\psi(x)$ are
vectors of $q^d$ components
(such that the sum of all components is equal to one), and
the cavity equations, Eqs.~(\ref{recurrence}), become
a set of coupled algebraic equations for the $q^d$ components
of the cavity fields, which can be easily solved numerically
(of course, the numerical complexity of this step grows linearly with
the number of components of the cavity fields, $q^d$).
Note that the discretized version of the model is a generalization of
a very important optimization problem known as the ``random graph coloring''
problem, where the number of colors corresponds
to the number of components of the cavity fields $q^d$. In particular,
for $n_D = 0$ and $p=2$ we recover
the standard $q$-coloring problem, which has been studied in depth
in the past few years, and whose properties and phase diagram are
known in great detail~\cite{col2}.
The continuum limit of the model is, of course, recovered
for $q \rightarrow \infty$. As a consequence, in order to make sure that
the numerical results are reliable and that they are not
affected by the discretization,
we solve numerically the 1RSB equations using several values
of $q$, and analyze the scaling properties of the numerical solutions
with the number of bins.
Moreover, one should note that for $d>1$, partitioning the box using a
hyper-cubic grid breaks the spherical symmetry down to some discrete
symmetry. This makes the scaling
towards the continuum limit in two dimensions more problematic than in one
dimension (also
because, since the complexity of the numerical algorithm
grows as $q^d$, we are limited to smaller values of $q$ for $d=2$).
Other numerical representations of the cavity fields are also
possible. For instance, as $\f(x)$ and $\psi(x)$ are periodic functions
in the interval $[0,1]^d$, we could have performed a Fourier transformation
of the recurrence equations keeping all the components up to
a certain momentum, yielding a finite set of coupled algebraic
equations for the Fourier coefficients of the cavity fields (similarly
to what we did in Sec.~\ref{sec:RSstability} to study the RS stability).
However, it turns out that this strategy is not efficient in the most
interesting region of the phase diagram, namely at high packing fraction
where a 1RSB glass transition is found. Indeed here the cavity fields
become extremely peaked (this is also the reason why the Gaussian and the
delta approximation work very well), and the momentum cut-off needed
to get accurate results becomes too big to be handled.
Another possibility we could have employed is to represent the
fields as a population of delta functions, \eg $\f (x) = \sum_\alpha
c_\alpha \delta (x - x_\alpha)$. This strategy, which has the advantage
that one does not need to discretize the space, has, on the other hand, the
disadvantage that at each step of the iterative procedure, in order to generate
a new field, one has to sample uniformly one point in
the free space available for the insertion of a new particle, given the
position
of $z(p-1)$ neighboring particles in the box.
This is trivial in $d=1$, but in that case the discretized procedure
already works well enough.
In $d=2$, this could be done using the
algorithm described in Sec.~\ref{sec:deltad2}. However
this algorithm is too slow to be used efficiently for this purpose. Therefore
in the following we will not explore this representation further.
\subsection{The population dynamics algorithm}
Now that we have a discretized representation of the cavity
fields, we need a computational strategy to
solve the 1RSB
functional self-consistent equations, Eqs.~(\ref{rec1RSB}), for any value
of the connectivities, $z$ and $p$, of the diameter of the spheres, $D$, and
of the 1RSB parameter $m$.
This step is quite standard in the context of the cavity
method, and goes
under the name of ``population dynamics algorithm''~\cite{cavity}.
The idea is to represent the probability distributions $\cal{P} [\f]$
and $\cal{P} [\psi]$ as populations of $\cal{M}$ representative
cavity fields with some weights:
\begin{equation}
{\cal P} [\f] = \sum_{\alpha = 1}^{\cal M} z_\f^\alpha \, \delta [ \f (x)
- \f_\alpha (x) ] , \qquad \textrm {and} \qquad
{\cal P} [\psi] = \sum_{\alpha = 1}^{\cal M} z_\psi^\alpha \, \delta [ \psi (x)
- \psi_\alpha (x) ]
\end{equation}
As previously discussed, we need to consider only translationally invariant
solutions of Eqs.~(\ref{rec1RSB}) in order to describe the glassy phase.
A solution ${\cal P}[\psi(x)]$ is translationally invariant if the property
${\cal P}[\psi(x+s)] = {\cal P}[\psi(x)]$ holds for any $s \in [0,1]^d$, where
$\psi(x+s)$ is an arbitrary translation (taking into account periodic
boundary conditions) of $\psi(x)$.
Since we represent the probability distribution ${\cal P}[\psi]$
by a set of representative samples $\psi_\a(x)$, it is very
easy to implement translational invariance. In principle, we would like
to impose that if $\psi_\a(x)$ is one of the samples, then any translation of it
is also contained in the set of samples with the same weight. But this is equivalent to doing the
following: each time we use a given sample $\psi(x)$ as a representative
of ${\cal P}[\psi]$, we apply to it a ``random shift'', namely we extract a vector
$s$ uniformly in $[0,1]^d$ and we translate $\psi(x)$ by $s$. In this way
we impose translational invariance by hand.
The population dynamics algorithm works in the following way:
\begin{itemize}
\item[1)] Pick at random $p-1$ fields $\psi_i$ from the population
$\cal{P} [\psi]$, according to their weights $z^\alpha_\psi$.
Apply a random
shift with flat probability in $[0,1]^d$ to each of the cavity fields.
\item[2)] Using Eq.~(\ref{recurrence}), compute the new cavity field
$\f$, along with its weight $z_\f$, which is given by the normalization
in Eq.~(\ref{reweighting}) to the power $m$, according to Eq.~(\ref{rec1RSB}).
Note that at high density, in the
1RSB phase, the cavity fields become extremely peaked. This implies that
there exist some configurations of the $p-1$ fields $\psi_i$ for which
the new field $\f$ is zero everywhere in $[0,1]^d$.
In this case the corresponding weight is zero and we have to reject it
and restart the procedure.
These events, which can cause a
major slowing down of the algorithm, are called ``rejection events''.
\item[3)] Repeat 1) and 2) ${\cal M}$ times, until a whole new population
${\cal P}_{new} [\f]$ is generated, and replace the old population
with the new one (in the context of population dynamics algorithms,
this kind of update is called a ``parallel update'').
\item[4)] Apply steps 1), 2), and 3) using the population
${\cal P} [\f]$ to generate a new ${\cal P}_{new} [\psi]$.
\item[5)] Repeat steps 1), 2), 3), and 4) until convergence, namely
until the populations ${\cal P} [\psi]$ and ${\cal P} [\f]$ are
stationary.
\end{itemize}
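Purely for illustration, the loop above can be sketched as follows for $p=2$ in $d=1$ (a toy instantiation, not the equations of the text: the update, one compatibility convolution per box followed by a product over the $z-1$ incoming boxes, only reproduces the bookkeeping of random shifts, $Z^m$ reweighting, and rejection events):

```python
import random

def normalize(f):
    """Normalize a discretized field; return (None, 0) if it vanishes."""
    Z = sum(f)
    if Z <= 0.0:
        return None, 0.0
    return [x / Z for x in f], Z

def box_step(psi, q, nD):
    """phi(y) propto sum_x psi(x) chi(x, y): compatibility with one neighbor
    for p = 2 boxes, with circular grid distance required to exceed nD."""
    phi = [0.0] * q
    for x, w in enumerate(psi):
        if w > 0.0:
            for y in range(q):
                d = abs(x - y)
                if min(d, q - d) > nD:
                    phi[y] += w
    return normalize(phi)

def population_dynamics(q, nD, z, m, M=50, sweeps=5, seed=1):
    """Toy parallel-update population dynamics with random shifts."""
    rng = random.Random(seed)
    pop = [[1.0 / q] * q for _ in range(M)]      # start from flat fields
    wts = [1.0] * M
    for _ in range(sweeps):
        new_pop, new_wts = [], []
        while len(new_pop) < M:
            psi = [1.0] * q
            dead = False
            for _ in range(z - 1):
                f = rng.choices(pop, weights=wts)[0]
                s = rng.randrange(q)
                f = f[s:] + f[:s]                # random shift (translational invariance)
                phi, _ = box_step(f, q, nD)
                if phi is None:
                    dead = True
                    break
                psi = [a * b for a, b in zip(psi, phi)]
            if dead:
                continue                          # rejection event
            psi, Z = normalize(psi)
            if psi is None:
                continue                          # rejection event
            new_pop.append(psi)
            new_wts.append(Z ** m)                # 1RSB reweighting Z^m
        pop, wts = new_pop, new_wts
    return pop
```

Here the populations of $\f$ and $\psi$ are merged into a single one for brevity; the actual algorithm alternates the two updates as in steps 3) and 4).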
Once this process has converged, we can compute the average values of
the link, the site, and the box contributions to the 1RSB entropy,
Eq.~(\ref{S1RSB}), from which one can obtain the complexity $\Sigma(m)$.
This allows one to determine the equilibrium value of $m^{\star}$
inside the 1RSB glassy phase
as the point where $S(m)$ has a minimum~\cite{Mo95}.
In practice, instead of computing the replicated entropy using
Eq.~(\ref{S1RSB}), we can use another and equivalent formula (derived below)
which is more advantageous from a numerical point of view.
Indeed, using Eqs.~(\ref{recurrence}) we can easily obtain
the following relations (we omit the arguments of the functions $Z$):
\begin{equation}
Z_{link} = \frac{ Z_{box}}{Z_\f} = \frac{ Z_{site}}{Z_\psi} \ .
\end{equation}
Using these and Eqs.~(\ref{rec1RSB}),
one can rewrite the total and internal entropy as
\begin{eqnarray}\label{S1RSBnv}
\nonumber
S(m) &=&
\left(1-z+\frac{z}p\right) S_{link} + \frac{z}p S_{\f} + S_{\psi} \\
s(m) &=&
\left(1-z+\frac{z}p\right) s_{link} + \frac{z}p s_{\f} + s_{\psi}
\end{eqnarray}
The computation of
$S_{\f} = \log \langle Z_\f^m \rangle$ and $S_{\psi} = \log \langle Z_\psi^m
\rangle$ is numerically less involved than
$S_{site}$ and $S_{box}$ appearing in Eq.~(\ref{S1RSB}). Moreover,
these contributions can be evaluated on-line during steps 1)-5) of the
population dynamics algorithm described above (we just have to compute the
average value of $Z_\f^m$ and $Z_\psi^m$ over all the ${\cal M}$ attempts of
generating a new cavity field), without requiring the implementation of any
further step.
Of course, representing the distributions $\cal{P} [\psi]$ and $\cal{P} [\f]$
as populations of ${\cal M}$ elements is an approximation which becomes
exact only in the ${\cal M} \rightarrow \infty$ limit. On the other hand, the numerical
complexity of the population dynamics algorithm grows linearily with
${\cal M}$.
In practice one has to find a good compromise between a value of ${\cal M}$
small enough such that the execution time of the code stays reasonable,
but big enough to avoid systematic corrections due to the finite size
of the populations.
In the present case, we find that ${\cal M} = 2^{16}$ is close to the
optimal value.
Although we have produced a working version of the algorithm described above
at any finite value of the 1RSB parameter $m$, it turned out that the
execution time needed to obtain accurate results is prohibitively long.
However, there are two special limits, namely $m \rightarrow 1$ and $m \rightarrow 0$,
which describe respectively the physics at the Kauzmann point and
in the close packing regime, where some simplifications arise which allow one to
perform the numerical study of the model in a more efficient way.
These two limits are discussed below.
\subsection{Reconstruction: the limit $m=1$}
In this section we consider the numerical solution of the 1RSB equations
for $m=1$. Recall that $S(m=1)$ gives back the equilibrium RS entropy of the system
between the dynamical transition (where a non-RS solution of the 1RSB
equations
appears for the first time due to the emergence of glassy metastable states)
and the Kauzmann point.
In this limit, using the approach introduced
in~\cite{MM06} which goes under the name of reconstruction method,
also applied in a similar context to the coloring optimization problem
in~\cite{col2}, the self-consistent 1RSB equations can
be simplified.
Similarly to~\cite{MM06,col2},
one can indeed introduce two new families of distributions over the
cavity fields for each value of the variable $x$, defined as
\begin{equation}
{\cal R}_x [\psi] \equiv \psi(x) {\cal P} [\psi] \qquad \textrm{and} \qquad
{\cal R}_x [\f] \equiv \f(x) {\cal P} [\f] \ .
\end{equation}
Using the previous definitions, the 1RSB cavity equations,
Eqs.~(\ref{rec1RSB}) can be rewritten in terms of these new distributions.
Furthermore, imposing the translational invariance which implies
that ${\cal R}_x [\psi (y)] = {\cal R}_0 [\psi (y - x)]$
for all $x$, we obtain the
self-consistent recursion relations
for the new distributions, which read:
\begin{eqnarray}
{\cal R}_0 [ \psi ] &=& \int \prod_{i=1}^{z-1} d {\cal R}_0 [ \f_i ] \,
\d\left[ \psi(x) - \frac{1}{Z_\psi} \prod_{i} \f_{i}(x) \right] \\
\nonumber
{\cal R}_0 [ \f ] &=& \int d\mu(x_1 \cdots x_{p-1} | 0)
\prod_{i=1}^{p-1}
d{\cal R}_{0} [ \psi_i ] \,
\d\left[ \f(y) - \frac{1}{Z_\f} \int
\prod_j dy_j \psi_j(y_j-x_j) \chi(y,y_1,\cdots,y_{p-1}) \right]
\end{eqnarray}
where
\begin{equation}
d\mu(x_1 \cdots x_{p-1} | 0) =
\frac{\chi (0, x_1,
\cdots , x_{p-1}) dx_1 \cdots dx_{p-1}}{Z_p^0}
\end{equation}
From a numerical point of view, these latter equations are much easier
to solve than Eqs.~(\ref{rec1RSB}) for two reasons. First, no
reweighting factor is present, which prevent the population to concentrate
on few cavity fields with large weight. Second,
rejection events cannot occur in this case. Indeed, for example, the
procedure to generate a new field $\f$ amounts to:
\begin{itemize}
\item[1)] Pick at random $p-1$ fields $\psi_i$ from the population
${\cal R}_0 [\psi]$. Note that all the fields have the same weight in this
representation.
\item[2)] Pick $p-1$ variables $x_1, \cdots, x_{p-1}$ in the interval
$[0,1]^d$ satisfying the hard-sphere constraint $\chi (0, x_1,
\cdots , x_{p-1})$ with a flat measure.
\item[3)] Shift each of the $p-1$ chosen cavity fields $\psi_i$ by $x_i$.
\item[4)] Using Eq.~(\ref{recurrence}), compute the new cavity fields
$\f$ (again, note that there is no reweighting in this case), and
insert the new field randomly into the population ${\cal R}_0 [\f]$
(this kind of update is called ``serial update'' and ensures a better
convergence than the parallel one).
\end{itemize}
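Step 2) amounts to plain rejection sampling from the measure $d\mu(x_1 \cdots x_{p-1}|0)$; a sketch (our own, assuming periodic distances in $[0,1]^d$):

```python
import random

def circ_dist(a, b):
    """Periodic Euclidean distance between two points of [0,1]^d."""
    return sum(min(abs(x - y), 1.0 - abs(x - y)) ** 2
               for x, y in zip(a, b)) ** 0.5

def sample_neighbors(p, D, d=1, rng=random):
    """Draw x_1 ... x_{p-1} uniformly in [0,1]^d, conditioned on the
    hard-sphere constraint chi(0, x_1, ..., x_{p-1}) = 1 by rejection."""
    origin = (0.0,) * d
    while True:
        pts = [tuple(rng.random() for _ in range(d)) for _ in range(p - 1)]
        allp = [origin] + pts
        if all(circ_dist(allp[i], allp[j]) > D
               for i in range(len(allp)) for j in range(i + 1, len(allp))):
            return pts
```

The acceptance probability is $Z^0_p$, so this is efficient away from the densest regime.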
Once the populations ${\cal R}_0 [\f]$ and ${\cal R}_0 [\psi]$ have
attained stationarity, we can compute the complexity of the system.
Since the replicated entropy $S(m=1)$ equals the RS one,
the complexity at $m=1$ is given by $\Sigma_{eq} = S_{RS} - s(m=1)$.
The internal entropy can be evaluated using Eqs.~(\ref{sSig1RSB})
and (\ref{rec1RSB}), where
\begin{eqnarray}
\nonumber
\langle Z_{link} \log Z_{link} \rangle & = &
\int d {\cal R}_0 [\psi] d {\cal R}_0 [\f] \, \log \int dy \, \psi(y) \f (y) \\
\langle Z_{\psi} \log Z_{\psi} \rangle & = &
\int \prod_{i=1}^{z-1} d {\cal R}_0 [\f_i] \, \log \int dy \prod_i \f_i(y) \\
\nonumber
\langle Z_{\f} \log Z_{\f} \rangle & = &
\int d\mu(x_1 \cdots x_{p-1} | 0) Z_p^0
\prod_{i=1}^{p-1} d{\cal R}_{0} [ \psi_i ] \,
\log \int dy \prod_i d y_i \psi_i (y_i - x_i) \, \chi(y,y_1,\ldots, y_{p-1}).
\end{eqnarray}
From the complexity we can determine the Kauzmann point,
which corresponds to the
value $D_K$ where $\Sigma_{eq}$ vanishes.
In principle this method would also allow one to determine the location of the
dynamical transition, which is the first point where a non-RS solution
of the 1RSB equations appears at $m=1$.
The results at $m=1$ obtained with the reconstruction method will be discussed
in Sec.~\ref{sec:results}, and compared with the analytical approximations.
\subsection{Hard fields: the limit $m=0$}
This limit too yields a simplification of the numerical algorithm.
The $m \rightarrow 0$ limit corresponds in this context to the
``close packing limit'', since an inspection of the expression of the internal
entropy $s(m)$ shows that it goes to $-\io$ as $\log(m)$, and the
pressure diverges as well~\cite{PZ10}.
Therefore the limit $m\rightarrow 0$ gives access to the jammed
glassy states at infinite pressure~\cite{PZ10}.
In the limit $m \rightarrow 0$, each of $Z_{link}^m$,
$Z_{box}^m$, and $Z_{site}^m$ is either zero (for ``incompatible''
configurations of the cavity fields)
or one (for ``compatible'' configurations),
regardless of the actual values of the fields.
As a consequence, in order to compute the complexity
(which equals the replicated entropy $S(m \rightarrow 0)$, since the internal
entropy term, $m s(m)$, disappears) we are only
interested in the propagation of this information.
To this aim, we introduce the ``hard'' components of the cavity fields
$\psi_{hard}$ and $\f_{hard}$:
\begin{equation}
\psi_{hard} (x) =
\left \{
\begin{array}{ll}
1 & \textrm{if $\psi(x)>0$}\\
0 & \textrm{otherwise}
\end{array}
\right.
\qquad \textrm{and} \qquad
\f_{hard} (x) =
\left \{
\begin{array}{ll}
1 & \textrm{if $\f(x)>0$}\\
0 & \textrm{otherwise}
\end{array}
\right.
\end{equation}
These functions are defined as being equal to one for all values of
$x$ such that the cavity fields are non vanishing regardless of their
value (i.e., corresponding to a non-vanishing probability of finding a sphere
with center in $x$), and zero otherwise.
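In the discretized representation the hard components are bit-vectors, and their propagation through a box reduces to a logical OR over compatible sites; a sketch for $p=2$, $d=1$ on a circular grid of $q$ sites (our own notation, with $n_D$ the exclusion distance in grid units):

```python
def hard_box_step(psi_hard, nD):
    """phi_hard(y) = 1 iff there exists x with psi_hard(x) = 1 and
    circular grid distance(x, y) > nD: only the support propagates,
    not the actual values of the field."""
    q = len(psi_hard)
    return [1 if any(psi_hard[x] == 1 and min(abs(x - y), q - abs(x - y)) > nD
                     for x in range(q)) else 0
            for y in range(q)]
```

A rejection event at $m=0$ corresponds to the output bit-vector being identically zero.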
Since the reweighting factors in Eq.~(\ref{rec1RSB}) do not depend on
the actual value of the fields in the $m \rightarrow 0$ limit,
the propagation of the hard components
decouples completely from the propagation of the cavity fields and can thus
be treated independently. As a consequence, the population dynamics algorithm
described above can be used on the populations encoding the
probability distributions of the hard fields.
Once a stationary state has been reached, we can compute the complexity
at $m=0$, $\Sigma_j$, from Eq.~(\ref{S1RSB}), computing the logarithm of the
average value of the fraction of attempts yielding a non vanishing
value of $Z_{link}$, $Z_{box}$, and $Z_{site}$.
Using Eq.~(\ref{S1RSBnv}), instead of computing
$\langle Z_{box}^m \rangle$ and $\langle Z_{site}^m \rangle$,
one can more easily compute $\langle Z_{\psi}^m \rangle$ and
$\langle Z_{\f}^m \rangle$, which are given respectively by the average value
of the fraction of non-rejection attempts to generate the new $\psi_{hard}$ and
$\f_{hard}$ fields over the total number of attempts.
Then we can determine the location of $D_{GCP}$ defined as $\Sigma_j
(D_{GCP}) = 0$.
The results at $m=0$ obtained with this method will be reported
in Sec.~\ref{sec:results}, and compared with the analytical approximations.
An important {\it caveat} is that in principle some fields could be proportional
to $\exp(-1/m)$ in the limit $m\rightarrow 0$. If this happens, then the procedure above
fails since these fields give a finite contribution to the normalizations which
is neither 0 nor 1. Although we could not perform a careful systematic investigation
of this effect, it seems that it might happen only for values of $z$ and $p$
where the transition at $m=1$ is continuous. This point surely deserves further
investigation.
Note that
in order to compute the correlation function in the close packing limit
(see Sec.~\ref{corr-func}) we also need to know the
actual values of the
cavity fields. Since the propagation of the hard components decouples
completely from that of the fields themselves, one can use the population
dynamics algorithm to find the solution of the 1RSB equations for
the distributions of hard fields and of the cavity fields independently
(knowing that the cavity fields can only be non zero where the
hard components are equal to one),
and use Eq.~(\ref{eq:gr}) to compute the pair correlation function.
\begin{figure}[t]
\includegraphics[width=.9\textwidth]{complexity_d1.eps}
\caption{
The complexity in some representative cases of discontinuous
transition at $d=1$, computed with the numerical solution of the
population dynamics algorithm with varying resolution of the
discretization process, is compared to the Gaussian and the Delta approximations.
[Upper panels] $\Si_{eq}$ (left) and
$\Si_j$ (right) for
$p=2$ and $z=110$. In both cases
we fixed the parameter $a = 4,7,10$ in Eq.~(\ref{effDd1}) and changed $q$ to vary
the effective diameter $D_{\rm eff} = (1+2 a)/(2 q)$, which is reported in the horizontal axis.
[Lower panels] $\Si_{eq}$ (left) and
$\Si_j$ (right) for
$p=4$ and $z=3$. In the first case, we varied $q$ at fixed $a$, while in the second
we did the inverse.
}
\label{somesigmad1}
\end{figure}
\begin{figure}[t]
\includegraphics[width=.6\textwidth]{complexity_d2.eps}
\caption{
The complexity at $d=2$, $p=2$ and $z=20$, computed with the numerical solution of the
population dynamics algorithm with varying resolution of the
discretization process, is compared to the Gaussian and the Delta approximations. Here we can only use moderate values of $q$,
and because of the geometry of the discretization the effective diameter of the sphere,
given by Eq.~(\ref{effDd2}) and reported on the horizontal axis,
cannot be varied smoothly. For instance, at $q = 11$ we could not find
a point at positive complexity.
}
\label{somesigmad2}
\end{figure}
\section{Comparison between numerical results and the approximations}
\label{sec:results}
In this section we report the results obtained from the direct numerical calculation with
discretized space and we compare them with the delta and Gaussian approximations.
\subsection{Complexity}
In Fig.~\ref{somesigmad1} we report the complexities $\Si_{eq}$
(the complexity at $m=1$, equal to $1/N$ times the logarithm of the
typical
number of glass states when configurations are sampled uniformly) and $\Si_j$ (the complexity at $m=0$, equal to $1/N$ times the logarithm of the
total number of jammed states)
for several representative cases at $d=1$ where the transition is
discontinuous.
Generically we observe that the delta approximation performs better at $m=0$, while the
Gaussian approximation is more reliable at $m=1$. Both approximations give an upper bound
to the true complexity and therefore
give values for $D_{\rm K}$ and $D_{\rm GCP}$ that are above the true ones.
Moreover, both approximations miss the dynamical transition since by construction the fields
are assumed to be localized.
Some results for $d=2$ are reported in Fig.~\ref{somesigmad2}.
Here the scaling for $q\rightarrow\io$ becomes very difficult because the numerical solution
is computationally demanding and we cannot go beyond $q=20$ for moderate connectivities.
We could perform a systematic investigation only for $p=2$ and $z=20$, which is unfortunately
a case where the transition is continuous and the solution might be unstable towards further
RSB in the glass phase. In this case, at $m=1$ we correctly find a continuous transition at a value
of $D$ which is compatible with the result found from the stability analysis of
section~\ref{sec:RSstability}.
At $m=0$, we find good agreement with the result of the Gaussian and delta approximation.
Note however that also at $m=0$ the results could be unstable towards further RSB.
\subsection{Phase diagram}
In Fig.~\ref{fig:phasediagrams} we compare the transition lines obtained by the Gaussian
and delta approximations with the numerical results,
where available. We computed $D_{\rm K}$ and $D_{\rm GCP}$ by performing an extrapolation to $q\rightarrow\io$
(which is simple since the corrections are found to be proportional to $1/q$) in some
representative cases where the transition is continuous or discontinuous; the results
are reported in Fig.~\ref{fig:phasediagrams}.
We observe that indeed the Gaussian and delta approximation give consistent
results, which are also consistent with the exact numerical solution and provide upper bounds
to the latter.
Whenever the RS instability precedes the Kauzmann point, $D_{\rm RS} < D_{\rm K}$, the transition is continuous.
This happens generically for small $z$. On increasing $z$, the lines $D_{\rm RS}$ and $D_{\rm K}$
cross and the transition becomes discontinuous. The value $z^*$ where this crossover happens
depends weakly on the space dimension, but it depends strongly on $p$. Indeed we have
$z^* \sim 100$ for $p=2$, while $z^* \sim 20$ for $p=3$ and (as we can infer from Fig.~\ref{gauss})
the transition is always discontinuous for $p>3$.
\begin{figure}[t]
\includegraphics[width=.9\textwidth]{phase_diagram.eps}
\caption{
Phase diagrams for $p=2,3$ and $d=1,2$. We compare the results of the Gaussian
and delta approximations with the numerical results obtained directly from a
discretization of the cavity equations. In the lower right panel, the horizontal
line indicates the value $D=1/4$ above which the calculation of $Z^0_3$ is not valid,
see Eq.~(\ref{Z03}).
}
\label{fig:phasediagrams}
\end{figure}
\section{Correlation function} \label{corr-func}
\subsection{Definition}
As explained in section~\ref{sec:cavityeq}, in the glass phase the cavity equations have
multiple solutions, each describing a different glass state.
Within each state $\a$ we can define a correlation function $g_\a(x,y)$ as follows.
For each box we have:
\begin{equation}\begin{split}
g^{(\a)}_a(x,y) & = \frac{1}{p(p-1)} \left\langle \sum_{i\neq j}^{1,p} \d(x-x_i) \d(y-x_j) \right\rangle_{a,\a} \\
& = \frac{1}{p (p-1)}
\frac{
\int dx^a_1 \cdots dx^a_p \, \psi^{(\a)}_{a,1}(x^a_1) \cdots \psi^{(\a)}_{a,p}(x^a_p)
\, \chi(x_1^a,\cdots,x_p^a) \sum_{i\neq j}^{1,p} \d(x-x^a_i) \d(y-x^a_j) }{
\int dx^a_1 \cdots dx^a_p \, \psi^{(\a)}_{a,1}(x^a_1) \cdots \psi^{(\a)}_{a,p}(x^a_p)
\, \chi(x_1^a,\cdots,x_p^a) }
\ ,
\end{split}\end{equation}
since the fields $\psi^{(\a)}_{a,i}(x^a_i)$ describe the distribution of the variables adjacent
to box $a$ in the absence of the box itself. We now average this quantity over the boxes and over the
states $\a$ with the weight $Z_\a^{m}$. We get
\begin{equation} \label{eq:gr} \begin{split}
& g(x,y) = \frac{p}{N z} \sum_{a=1}^{Nz/p} \frac{1}{\sum_\a Z_\a^m} \sum_\a g^{(\a)}_a(x,y) Z_\a^m \\
& = e^{-S_{box}} \int d{\cal P}[\psi_1]\cdots d{\cal P}[\psi_p] \, Z_{box}[\psi_1\cdots \psi_p]^{m-1} \,
\psi_1(x)\psi_2(y) \int \left( \prod_{j=3}^p \psi_{j}(x_j) dx_j \right) \chi(x,y,x_3,\cdots,x_p)
\end{split}\end{equation}
Note that in the RS case the above expression reduces to $g^0_p(x,y)$.
We expect that at $m=0$ (close packing),
$g(x,y)$ develops a peak at $|x-y|=D$ describing contacts~\cite{SLN06,DTS05}.
The number of contacts is
\begin{equation}
\z = (p-1) \int_{peak} g(0,y) dy \ .
\end{equation}
The delta peak is also accompanied, in three-dimensional sphere packings, by
a square root divergence, $g(r) \sim (r-D)^{-0.5}$~\cite{SLN06,DTS05}, which
we want to investigate here.
Note that in the delta approximation we just get
\begin{equation}
g(x,y) = \frac1{Z^0_p}\int dX_3 \cdots dX_p \chi(x,y,X_3,\cdots,X_p) = g^0_p(x,y)
\end{equation}
therefore all the structure of the correlation in the packings is lost
in this approximation.
One can show, following \cite{PZ10}, that in the Gaussian approximation, as $A\sim m$ for
$m\rightarrow 0$, one gets a delta peak at $r=D$ in the jamming limit, with all particles being non-rattlers
and $\z = 2d$. Therefore this approximation is able to capture some of the peculiar structure
of the correlation.
On the other hand, the square root singularity is missed by the Gaussian
approximation~\cite{PZ10}.
Unfortunately, it is very difficult to study the contact peak in the numerical solution
of the cavity equation, because the discretization makes it hard to define a proper
notion of contacts and separate the delta peak contribution from the background.
Therefore in the following we focus on the square root singularity, which is also a non-trivial
and somewhat unexpected feature of pair correlations at jamming~\cite{SLN06,DTS05}.
\begin{figure}[t]
\includegraphics[width=.9\textwidth]{gofr.eps}
\caption{
Pair correlation function $g(r)$ at $d=1$, $m=0$ (jamming) and $D \sim D_{\rm GCP}$ (in practice, the closest value to
$D_{\rm GCP}$ compatible with the discretization). (Left) $p=2$, $z=6$;
note that in this case the system undergoes a continuous transition and these results might
be unstable towards further RSB. (Right) $p=4$, $z=3$: here the transition is discontinuous.
Note that for $p=4$ we observe an additional
singularity at $r=2D$~\cite{SLN06}.
}
\label{somegr}
\end{figure}
Numerical results are presented in Fig.~\ref{somegr} for the $g(r)$ in one dimension, and two
representative values of $z$ and $p$ where the transition is continuous or discontinuous.
In both cases,
the divergence is compatible with a square root singularity $(r-D)^{-0.5}$ in a range of $r-D$, but at smaller
$r-D$ the $g(r)$ seems to diverge as $(r-D)^{-\g}$ with an exponent $\g > 0.5$.
However, in this region the square root divergence is probably mixed
with the contact delta peak, because of the discretization.
A detailed analysis of this mixing was not possible
because the values of $q$ we could reach were still too small.
Since this investigation is computationally very demanding,
we could not perform a systematic study of the value of the exponent as a function of $p$ and $z$,
nor investigate the more interesting case $d=2$, which is very hard because our discretization does not
preserve the spherical symmetry around the central particle.
We leave a more systematic numerical analysis for future work.
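As an illustration of the fitting procedure used above, the sketch below extracts an exponent $\g$ from a log-log fit of $g(r)$ near contact. The data are synthetic (a pure power law with small multiplicative noise, standing in for the discretized cavity solution), so all numerical values here are illustrative assumptions.

```python
import numpy as np

# Synthetic stand-in for the discretized g(r) near contact (NOT actual cavity data):
# assume g(r) ~ (r-D)^(-gamma) with gamma = 0.5, plus small multiplicative noise.
rng = np.random.default_rng(0)
D = 1.0
r = D + np.logspace(-4, -1, 40)
g = (r - D) ** (-0.5) * (1.0 + 0.01 * rng.standard_normal(r.size))

# Log-log linear fit: the slope of log g versus log (r-D) gives -gamma.
gamma_fit = -np.polyfit(np.log(r - D), np.log(g), 1)[0]
print(gamma_fit)  # close to 0.5
```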
\subsection{Argument for the square-root singularity}
We now present an analytical argument to relate the shape of the cavity fields to the square root
singularity. We focus on $m=0$, and we study the small $r-D$ behavior of $g(r)$ as follows.
We define the quantity
\begin{equation}
\Psi(z) =
\int \left( \prod_{j=1}^p \psi_{j}(x_j) dx_j \right) \frac{\chi(x_1,\cdots,x_p)}{\chi(x_1,x_2)} \delta(x_1-x_2-z) \ .
\end{equation}
Note that $z = x-y \in [-1,1]^d$ but using periodicity one can restrict to
$z\in [-1/2,1/2]^d$ with periodic boundary conditions.
The probability distribution of $\psi$ induces a distribution ${\cal P}[\Psi]$ on $\Psi$.
Then we have
\begin{equation}\label{gdiz}
g(z) = \int dx dy g(x,y) \d(x-y-z) =e^{-S_{box}} \int d{\cal P}[\Psi] \frac{\Psi(z) \chi(z)}{\int dz \Psi(z) \chi(z)}
\theta\left[\int dz \Psi(z) \chi(z)\right]
\ ,
\end{equation}
where the term $e^{-S_{box}}$ ensures the normalization $\int dz g(z) = 1$.
In the following we restrict for simplicity to $d=1$.
Note that by translational invariance the field $\Psi$ is centered around a random
uniformly distributed position $z_0$, while its shape is encoded by a non-trivial
distribution.
Now assume that with a certain finite probability with respect to the shape distribution,
one has that
\begin{itemize}
\item $\Psi(z)$ vanishes at some finite distance from the center given by
$z_\pm = z_0 \pm \d z_0$. The quantities $z_\pm$ are then also random and uniformly
distributed in $[-1/2,1/2]$;
\item the shape of $\Psi(z)$ around the point
where it vanishes is of the form
\begin{equation}\label{shapeexp}
\Psi(z) \sim e^{-\frac{A}{|z-z_\pm|^\a}} \ ;
\end{equation}
\item and $|z_+-z_-| < 2D$, and $z_+ > D$ (the additional symmetric contribution coming from $z_-$ gives
a factor 2 and will be neglected, as are all proportionality constants).
\end{itemize}
Then the function $\chi(z) \Psi(z)$
vanishes everywhere except in $[D,z_+]$ where it is given by $\exp\big[-A/(z_+-z)^\a\big]$.
The average over ${\cal P}[\Psi]$, as far as this contribution is concerned,
translates into an average over $z_+$, and
Eq.~(\ref{gdiz}) becomes:
\begin{equation}
g(z) \sim \int d z_+ \frac{e^{-A/(z_+-z)^\a} \th(D\leq z \leq z_+)}
{\int_D^{z_+} dz e^{-A/(z_+-z)^\a}}
\theta\left[z_+ \geq D\right] = \int_z^C dz_+ \frac{e^{-A/(z_+-z)^\a}}
{\int_D^{z_+} dz e^{-A/(z_+-z)^\a}} \ ,
\end{equation}
where $C$ is a suitable cutoff, reflecting the fact that when $z_+$ is much larger than
$D$ the approximation Eq.~(\ref{shapeexp}) breaks down. We will show that this cutoff does
not matter, as the main contribution for $z \rightarrow D$ comes from $z_+$ close to $D$.
To simplify notations, we introduce $\l = (z-D)/D$ and $\e = (z_+-D)/D$. Also we define
$a = A/D^\a$ and $c= (C-D)/D$. With these notations we get
\begin{equation}
g(\l) \propto \int_\l^c d\e \frac{e^{-a/(\e-\l)^\a}}
{\int_0^{\e} d\l \, e^{-a/(\e-\l)^\a}} \ .
\end{equation}
The integral in the denominator is dominated by the small $\l$ behavior, that gives
\begin{equation}
\int_0^{\e} d\l \, e^{-a/(\e-\l)^\a} \sim \int_0^{\e} d\l \, e^{-\frac{a}{\e^\a} \left(1+\a\frac{\l}{\e}\right)}
=e^{-\frac{a}{\e^\a}} \frac{\e^{\a+1}}{a \a} \ ,
\end{equation}
and
\begin{equation}
g(\l) \propto \int_\l^c d\e \, \e^{-(\a+1)} e^{a \left(\frac{1}{\e^\a}-\frac{1}{(\e-\l)^\a}\right)} \ .
\end{equation}
We now want to evaluate the integral by a saddle-point method for $\l \rightarrow 0$. We assume (and will check self-consistently)
that the saddle point value $\e^* \gg \l$. Then we can expand for $\l/\e \ll 1$ and
\begin{equation}
g(\l) \propto \int_\l^c d\e \, e^{-(\a +1) \log \e - a \a \l \e^{-(\a+1)}} \ .
\end{equation}
The maximum of the above expression is found at $\e^* = (a \a \l)^{1/(\a+1)} \gg \l$ for small $\l$ as
initially assumed. Substituting this in the expression above one obtains $g(\l) \propto 1/\l$.
To get the correct result we need to compute also the quadratic corrections around the saddle
point. Including these, we finally obtain
\begin{equation}
g(\l) \propto \l^{-\frac{\a}{1+\a}} \propto (r-D)^{-\frac{\a}{1+\a}} \ ,
\end{equation}
i.e., a power-law divergence for $z\rightarrow D$ with an exponent in $[0,1]$, which is consistent with
the observed exponents in Fig.~\ref{somegr}.
Note that a square root singularity is obtained for $\a=1$, namely a simple exponential
singularity of the cavity fields.
We checked on our numerical results that indeed
the form of the fields is compatible with the Ansatz (\ref{shapeexp}).
Note that this same argument can be carried out at finite $m$, but in this case we get that
$g(\l)$ is independent of $\l$ for small $\l$. A more complete analysis should show that at
finite $m$, $g(\l)$ is a power law for $\l \gg O(m^\n)$ with some exponent $\n$, and it crosses
over to a finite value for $\l \ll O(m^\n)$.
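The final power law can also be checked numerically, starting from the intermediate representation $g(\l) \propto \int_\l^c d\e \, \e^{-(\a+1)} e^{a\left(\e^{-\a}-(\e-\l)^{-\a}\right)}$ derived above. The sketch below (with illustrative values $a=1$, $\a=1$, $c=1/2$) fits the effective exponent at small $\l$ and recovers the square-root behavior.

```python
import numpy as np
from scipy.integrate import quad

a, alpha, c = 1.0, 1.0, 0.5   # alpha = 1 should give the square-root singularity

def g(lam):
    # g(lam) ∝ ∫_lam^c d(eps) eps^-(alpha+1) exp[a (eps^-alpha - (eps-lam)^-alpha)]
    f = lambda eps: eps ** (-(alpha + 1)) * np.exp(a * (eps ** -alpha - (eps - lam) ** -alpha))
    return quad(f, lam, c, limit=400)[0]

lams = np.logspace(-6, -4, 5)
slope = np.polyfit(np.log(lams), np.log([g(l) for l in lams]), 1)[0]
print(slope)  # expected close to -alpha/(1+alpha) = -0.5
```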
\section{Discussion on finite dimensional hard spheres}
One way to recover the normal
hard sphere model from our model is to set $p=2$ and $z=N-1$. However,
this limit cannot be investigated within the cavity formalism
which is based on taking first the limit $N\rightarrow\io$ at finite $z$.
Here the limits $N\rightarrow\io$ and $z\rightarrow\io$ do not commute, and if
we first send $N\rightarrow\io$ and then $z\rightarrow\io$ we do not recover the
hard sphere model (a similar behavior is found for the
Bethe lattice spin glass~\cite{cavity}).
Therefore we want here to find a suitable limit that we can take
{\it after} $N\rightarrow\io$ to recover the hard sphere model.
As we discussed in the introduction,
one possibility is to formally set $z=1$
and identify $p$ with the number of particles, therefore taking
$p \gg 1$. Of course, for $z\leq 2$ and finite $p$ the model
does not have any phase transition (it becomes a one-dimensional
model for $z=2$). Therefore, we have to send $p\rightarrow\io$ {\it before}
$z$ becomes smaller than $2$.
As a first check, we note that in this limit the RS entropy
\begin{equation}
S^{RS} = \frac{z}p \log Z^0_p \rightarrow S_{liq}(\f) \ ,
\end{equation}
where $S_{liq}(\f)$ is the entropy of $d$-dimensional hard spheres
in the thermodynamic limit at fixed packing fraction $\f$.
Actually, there is a problem
with the latter identification, since $Z^0_p$ does not contain the factor $1/p!$
that should account for the indistinguishability of the particles. This is
indeed to be expected, since we took a formal limit $z\rightarrow 1$, but at any
finite $z>1$ the particles are connected to several boxes which makes them
distinguishable. We therefore recover the finite dimensional result for a system
of distinguishable particles.
Next, we can look at the stability of the RS solution according to Eq.~(\ref{RSstabcondition}).
To compare with standard hard spheres it is crucial to observe that here the box side is one
while $D$ becomes very small for $p\rightarrow\io$, in such a way that the packing fraction
$\f = p V_d(D/2) = p V_d(1/2) D^d$ is finite. For $p\rightarrow\io$ first and $z\rightarrow 1$ after,
we have $g^0_p(x) \rightarrow g_{liq}(x)$,
however $x$ is expressed in units of the box length. If we introduce as usual the distance
$r$ measured in units of the sphere diameter, $r = x/D$, we have (for $k \neq 0$)
\begin{equation}
g^0_p(k) = \int dx e^{ikx} g^0_p(x) = D^d \int dr e^{i k D r} g_{liq}(r) = D^d S(k D) \ ,
\end{equation}
where $S(k D)$ is the structure factor, and the stability condition becomes
\begin{equation}
\sqrt{(p-1)(z-1)} D^d | S(k D) | =\sqrt{(p-1)(z-1)} \frac{\f}{p V_d(1/2)} | S(k D) | \leq 1
\end{equation}
which is always verified for $p\rightarrow\io$ since $\f$ and $S(k D)$ are both of order 1.
This is indeed consistent with our investigations of the model at finite $p$, which showed that
the transition is always discontinuous for $p > 3$. We conclude that one cannot observe
a continuous transition in the normal hard sphere model.
This conclusion is consistent with the ones of Biroli and Bouchaud~\cite{BB09} who showed
that indeed replicated liquid theory in finite dimensions does not allow for a
continuous RSB transition.
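The vanishing of the left-hand side of the stability condition for $p\rightarrow\io$ can be illustrated with a short numerical sketch; the packing fraction, structure-factor peak and value of $z$ below are illustrative assumptions, not computed quantities.

```python
import numpy as np

# Illustrative (hypothetical) values: d = 3 packing fraction, structure-factor peak, connectivity.
phi, S_peak, z = 0.5, 3.0, 1.5
V_half = (4.0 / 3.0) * np.pi * 0.5 ** 3   # V_d(1/2) for d = 3

def lhs(p):
    # left-hand side of the stability condition: sqrt((p-1)(z-1)) * phi/(p V_d(1/2)) * |S|
    return np.sqrt((p - 1) * (z - 1)) * phi / (p * V_half) * S_peak

for p in (10, 10**2, 10**4, 10**6):
    print(p, lhs(p))   # decays like p**(-1/2), so the bound holds for large p
```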
We also note that starting from Eqs.~(\ref{SmGauss}), (\ref{Astar_Gauss})
and taking first $p\rightarrow\io$ (with $\f = p V_d(D)/2^d$ and
$(p-1) g_p^0(r)= p g_{liq}(r)$) and then $z\rightarrow 1$ we recover Eq.~(74) of \cite{PZ10},
which is the starting point of the Gaussian
small cage replica treatment in finite dimensions,
provided we identify again $\lim_{p \rightarrow \io} \frac1p \log Z^0_p = S_{liq}(\f)$,
neglecting the problem with the missing $p!$.
Apart from this {\it caveat}, this is a nice alternative
derivation of the approximation of \cite{PZ10}, which
is not based on the replica method.
Finally, one could try to take the same formal limit in
Eq.~(\ref{Smdelta_sampling}) to obtain an alternative approximate expression for $S(m)$
in finite dimensions. Using the relation $Z^0_{p}/Z^0_{p-1} = \left\langle v \right\rangle$, where $v$
is the void space of $p-1$ particles, we obtain for $z\rightarrow 1$ (after $p\rightarrow \io$):
\begin{equation}
S(m) = \log \frac{ \left\langle v^m \right\rangle}{\left\langle v \right\rangle} + \frac1p \log Z^0_p \ .
\end{equation}
Note however that the void space $v \propto p$, therefore we must rearrange terms as
\begin{equation}
S(m) = \log \frac{ \left\langle (v/p)^m \right\rangle}{\left\langle (v/p) \right\rangle} + m \log p + \frac1p \log ( Z^0_p / p^p ) \ .
\end{equation}
The term $m \log p$ can be dropped since it gives an additive constant to the internal entropy,
and the resulting expression has a well defined $p\rightarrow \io$ limit, assuming here that
$\lim_{p \rightarrow \io} \frac1p \log (Z^0_p/p!) = S_{liq}(\f)$ (which is however inconsistent with
the previous discussion, for reasons that we do not understand at present).
This expression can in principle be directly computed, even if it is very hard to sample the
distribution $P(v)$ of void space because at high density $v=0$
for most configurations~\cite{STDTS98}.
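As a feasibility check in the simplest setting, the void-space distribution can be sampled directly for $d=1$ hard rods on a unit ring. The sketch below (with illustrative values of $p$, $D$ and $m$) estimates the first term of $S(m)$ by rejection sampling; it only works at low density, since at high density almost all configurations have $v=0$, as noted above.

```python
import numpy as np

rng = np.random.default_rng(1)
p, D, m = 5, 0.05, 0.5   # illustrative: p-1 = 4 hard rods of diameter D on a unit ring

def sample_voids(n_samples=5000):
    voids = []
    while len(voids) < n_samples:
        x = np.sort(rng.random(p - 1))
        gaps = np.diff(np.append(x, x[0] + 1.0))  # periodic gaps between rod centers (sum to 1)
        if gaps.min() < D:                        # overlapping rods: reject
            continue
        # free length available to the center of an inserted p-th rod
        voids.append(np.maximum(gaps - 2 * D, 0.0).sum())
    return np.array(voids)

v = sample_voids()
S_int = np.log(np.mean(v ** m) / np.mean(v))      # first term of S(m) above
print(np.mean(v), S_int)
```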
\section{Conclusions}
In this paper,
we have studied a mean field hard sphere model introduced in~\cite{MKK08}.
The model is similar to a standard hard sphere model; however, each sphere
interacts only with a finite and preassigned number of neighbors. The network
of interactions is given by a random graph, such that the model belongs
to the mean field class and is therefore, in principle, exactly solvable via
the cavity method. We therefore derived the cavity equations for the model
and we presented both analytical approximations to their solution and an ``exact''
numerical solution based on a discretization of the space.
We have shown that the analytical approximations give quite reliable results for the phase
diagram and the complexity. In particular, for large enough $z$ and/or $p$, the transition
belongs to the Random First Order class. Therefore,
as suggested in~\cite{MKK08}, the model displays an ideal glass (Kauzmann) transition
to a glass phase. Following the glass phase upon increasing pressure, one gets to a point
where the pressure diverges, similarly to standard hard spheres close to the so-called
J-point. Given that the model has an exponential number of metastable states, one obtains
a set of J-points spanning a finite range in density.
Overall, the phenomenology of the model in this regime is very close to the one
expected for finite dimensional hard spheres based on mean field approximations,
see~\cite{PZ10} and Fig.~\ref{dia_totale}.
We found, in particular, that
the Gaussian approximation is very good for the Kauzmann transition
but tends to overestimate the close packing. This is consistent with what happens
for three-dimensional hard spheres where the Gaussian approximation gives
$\f_{\rm K}\sim 0.62$, which is consistent with numerical estimates,
and $\f_{\rm GCP} \sim 0.68$, while numerical simulations suggest a somewhat smaller
value~\cite{PZ10}.
On the contrary, the delta approximation is very good for close packing
but tends to overestimate the Kauzmann point. We proposed a formula for the complexity
that is based on the delta approximation and can be computed numerically for three-dimensional
hard spheres. It would be very interesting to do this computation and compare the result
with the Gaussian approximation in that case.
We also found a somewhat unexpected result,
namely that the transition is continuous at small $z$ and $p$. In particular,
for the values of $p=2$ and $z=100$ that have been used in~\cite{MKK08}, the transition
should be very weakly first order.
The physics in the presence of a second order transition could be very different.
For instance, in the case of the Sherrington-Kirkpatrick model, the intensive
ground state energy can be found easily: this would correspond to a unique
J-point density. However, the details of this depend on the model, and in particular
on the shape of the complexity function, so we cannot give any conclusive statement.
It would be interesting to investigate this point further by repeating the numerical simulations
of~\cite{MKK08} both in a region where the transition should be strongly second order
(e.g. at $p=2$ and small $z$) and in a region where it should be strongly ``random first order'' (e.g.
for $p=4$ and small $z$).
Finally, we partially investigated the structure of the configurations at jamming.
We computed the correlation function of the model and showed that it
displays a power-law singularity close to contact, at least for $d=1$.
We also gave an analytical argument to explain the mathematical origin of the singularity.
Extending this study
to higher dimensions could give insight into the physics that is responsible for this divergence
and hopefully connect it to isostaticity and the presence of soft modes in the spectrum,
as suggested in~\cite{Wyart,WNW05}. Additional numerical simulations could be extremely
useful also in this respect.
{\bf Acknowledgements:} We warmly thank J.~Kurchan,
F.~Krzakala, R.~Mari, G.~Semerjian,
and L.~Zdeborova for many useful and stimulating discussions.
\section{Introduction}
Quantum mechanics has taught us that wave nature and particle nature are two
complementary aspects of the same entity \cite{bohr}. Whether we talk of massive particles
or quanta of light, both can behave like particles and waves in different
situations. Young's double-slit experiment carried out with individual
particles showed that a particle passes through two slits and interferes
with itself \cite{jonsson}. Later it was demonstrated that much larger
particles such as $C_{60}$ molecules can also show interference
\cite{buckyball}. It has been convincingly argued that instead of calling
them waves or particles, such entities should be
called \emph{quantons} \cite[p.~235]{bunge}\cite{levy}. Going beyond this, quantum mechanics
also tells us that a group of entities, e.g., many photons
together, can behave as a single quanton. Consequences of this for interference
experiments with many particles have only been recognized
relatively recently \cite{multiphoton}.
First, we briefly explain the idea which motivated Jacobson and collaborators
\cite{multiphoton} to propose that many photons can behave as a single
quanton in an interference experiment. Consider a beam of diatomic iodine molecules $I_2$
each with mass $2m$, traveling with a velocity $v$, passing through a
double-slit. The resulting interference would be in
accordance with a de Broglie wavelength $\lambda_{2m} = h/2mv$. But suppose that
the molecule dissociates on the way, and only separate iodine atoms, each of
mass $m$, pass through the double-slit. Then the resulting interference would
be in accordance with a de Broglie wavelength $\lambda_m = h/mv$, which
shows that $\lambda_{2m} = \lambda_m/2$.
More generally, $N$ particles with a de Broglie wavelength $\lambda$, can
behave as single quanton of wavelength $\lambda/N$. The same should hold for
photons too. An experiment was subsequently carried out which measured the
de Broglie wavelength of a two-photon wavepacket \cite{multiphotonexpt}.
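The $\lambda/N$ scaling can be made explicit with elementary arithmetic; the atomic mass and beam speed below are rough illustrative numbers.

```python
h = 6.62607015e-34   # Planck constant (J s)

def de_broglie(mass_kg, speed):
    return h / (mass_kg * speed)

m_I = 2.11e-25       # approximate mass of one iodine atom (kg)
v = 300.0            # illustrative beam speed (m/s)

lam_atom = de_broglie(m_I, v)       # free iodine atom, mass m
lam_mol = de_broglie(2 * m_I, v)    # bound I2 molecule: one quanton of mass 2m
print(lam_mol / lam_atom)           # ≈ 0.5, i.e. lambda_{2m} = lambda_m / 2
```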
\begin{figure}[b!]
\rule{245 pt}{0.5 pt}\\[3pt]
\raisebox{-0.2\height}{\includegraphics[width=5mm]{CC}}\raisebox{-0.2\height}{\includegraphics[width=5mm]{BY}}
\footnotesize{This is an open access article distributed under the terms of the Creative Commons Attribution License \href{http://creativecommons.org/licenses/by/3.0/}{CC-BY-3.0}, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.}
\end{figure}
\pagebreak
In the following we carry out a wave-packet analysis of two entangled photons,
typically generated in a type-I spontaneous parametric down conversion (SPDC)
process, and analyze the situation in which they can behave like a single
quanton.
\begin{figure}
\centerline{\resizebox{8.0cm}{!}{\includegraphics{Fig_1.pdf}}}
\caption{Schematic diagram for Young's double-slit experiment with
entangled photons. Detector D1 is capable of detecting pairs of photons.
It should be able to discriminate between one-photon and two-photon events.
}
\label{2slit}
\end{figure}
\section{Theoretical analysis}
\subsection{Entangled photons}
A well known state to describe momentum-entangled particles was discussed by
Einstein, Podolsky and Rosen (EPR) \cite{epr}
\begin{equation}
\Psi_{\textrm{EPR}}(x_1,x_2) = A\!\int_{-\infty}^\infty
e^{-{\imath px_2\over\hbar}} e^{\frac{\imath px_1}{\hbar}} dp. \label{epr}
\end{equation}
This so-called EPR state does capture the properties of entangled particles
well, but has some disadvantages, such as not being normalizable and not
describing a varying degree of entanglement.
The best state to describe momentum-entangled
particles is the \emph{generalized EPR state} \cite{tqajp,Qureshi2012}
\begin{equation}
\Psi(x_1,x_2) = A\!\int_{-\infty}^\infty
e^{-{p^2\sigma^2\over \hbar^2}}e^{-{\imath px_2\over\hbar}} e^{\frac{\imath px_1}{\hbar}}
e^{-{(x_1+x_2)^2\over 4\Omega^2}} dp, \label{state}
\end{equation}
where $A$ is a normalization constant, and $\sigma,\Omega$ are certain
parameters. In the limit $\sigma\to 0,~~\Omega\to\infty$ the state (\ref{state})
reduces to the EPR state (\ref{epr}).
After performing the integration over $p$, (\ref{state}) reduces to
\begin{equation}
\Psi(x_1,x_2) = {1\over \sqrt{\pi\sigma\Omega}}
e^{\frac{-(x_1-x_2)^2}{4\sigma^2}} e^{\frac{-(x_1+x_2)^2}{4\Omega^2}} .
\label{gepr}
\end{equation}
It is straightforward to show that $\Omega$ and $\hbar/\sigma$ quantify the position
and momentum spreads of the particles, because the
uncertainties in the position and the wave-vector of the two photons,
along the $x$-axis, are given by
\begin{equation}
\Delta x_1 = \Delta x_2 = \sqrt{\Omega^2+\sigma^2},
\Delta k_{1x} = \Delta k_{2x} =
\tfrac{1}{4}\sqrt{{1\over \sigma^2} + {1\over \Omega^2}}~. \label{unc}
\end{equation}
Notice that for
$\sigma = \Omega$, the state is no longer entangled, and factorizes into
a product of two Gaussians centered at $x_1=0$ and $x_2=0$, respectively.
The state (\ref{gepr}) also describes well the two-photon mode function
at the output of the type-I crystal in SPDC generation \cite{photons1,photons2}.
The experiment is schematically described in Figure~\ref{2slit}. Entangled
particles (generally photons) emerge from a source, and pass through
a double-slit to reach a screen or a detector D1 which is movable along the
$x$-axis.
We assume that at time $t=0$, the two particles are in the state (\ref{gepr}),
and travel along the $y$-axis, towards a double-slit, with average momenta
$p_0$. Each particle can then be described as a quanton with a
wavelength $\lambda=h/p_0$. For photons, the wavelength is fixed as
$\lambda=2\pi/k_0$.
\subsection{Time evolution}
Time evolution of a one-dimensional wave-packet, along the $x$-axis, is given by
\begin{equation}
\psi(x,t) = {1\over\sqrt{2\pi}}\int_{-\infty}^{\infty} \tilde{\psi}(k_x)
\exp\left[\imath (k_xx-\omega(k_x)t)\right]dk_x.
\end{equation}
For massive particles, one would have assumed $\omega(k_x)=\hbar k_x^2/2m$.
For photons one can work within the Fresnel approximation ($k_y\approx k_0$,
$k_x \ll k_y$) to write $\omega(k_x)$ as \cite{dillon}
\begin{equation}
\omega(k_x) = c\sqrt{k_x^2+k_y^2} \approx ck_0 + {ck_x^2\over 2k_0}.
\end{equation}
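The accuracy of this paraxial expansion is easily checked numerically; the wavelength and the ratio $k_x/k_0$ below are illustrative choices.

```python
import numpy as np

c = 299792458.0        # speed of light (m/s)
lam = 800e-9           # illustrative wavelength (m)
k0 = 2 * np.pi / lam
kx = 0.01 * k0         # paraxial regime: kx << k0

omega_exact = c * np.sqrt(kx**2 + k0**2)
omega_paraxial = c * k0 + c * kx**2 / (2 * k0)

rel_err = abs(omega_exact - omega_paraxial) / omega_exact
print(rel_err)         # O((kx/k0)**4), here about 1e-9
```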
So the spread of a photon wave-packet in the $x$-direction, which is moving
essentially along $y$-direction, is given by
\begin{equation}
\psi(x,t) = {e^{-\imath k_0t}\over\sqrt{2\pi}}\int_{-\infty}^{\infty} \tilde{\psi}(k_x)
e^{\imath k_xx}e^{-\imath ctk_x^2/2k_0} dk_x~.
\end{equation}
Using the above, the time
propagation kernel for the two photons can be written as
\begin{eqnarray}
K_1(x_1,x_1',t) &=& \sqrt{1\over \imath \lambda ct}
\exp\left[{-\pi(x_1-x_1')^2\over \imath \lambda ct}\right],\nonumber\\
K_2(x_2,x_2',t) &=& \sqrt{1\over \imath \lambda ct}
\exp\left[{-\pi(x_2-x_2')^2\over \imath \lambda ct}\right],
\end{eqnarray}
and the two-particle state after a time $t$ is given by
\begin{eqnarray}
\Psi(x_1,x_2,t) &=& \int_{-\infty}^{\infty} \int_{-\infty}^{\infty}K_1(x_1,x_1',t)\times\nonumber\\
&& \qquad K_2(x_2,x_2',t)
\Psi(x_1',x_2')~ dx_1' dx_2'.\nonumber\\
\end{eqnarray}
At this stage it is convenient to introduce new coordinates for the
entangled particles: $r=(x_1+x_2)/2,~~q=(x_1-x_2)/2$. The state of the
entangled particles, at time $t=0$, can then be written as
\begin{equation}
\Psi(r,q) = {1\over \sqrt{\pi\sigma\Omega}}
e^{-q^2/\sigma^2} e^{-r^2/\Omega^2} .
\label{gepr1}
\end{equation}
The time-propagator, in the new coordinates, can be written as
\begin{eqnarray}
K_r(r,r',t) &=& \sqrt{1\over \imath \lambda ct}
\exp\left[{-2\pi(r-r')^2\over \imath \lambda ct}\right]\nonumber\\
K_q(q,q',t) &=& \sqrt{1\over \imath \lambda ct}
\exp\left[{-2\pi(q-q')^2\over \imath \lambda ct}\right].
\label{propagator}
\end{eqnarray}
The state after a general time $t$ can then be evaluated as
\begin{eqnarray}
\Psi(r,q,t) &=& \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} K_r(r,r',t)\times\nonumber\\
&& \qquad K_q(q,q',t)
\Psi(r',q')~dr' dq'.
\end{eqnarray}
Let us assume that during a time $t_0$, the photons travel a distance $L$,
from the source to
the double-slit, and the state at the double-slit takes the form:
\begin{equation}
\Psi(r,q,t_0) = C \exp\left({-q^2\over\sigma^2+\imath \alpha}\right)
\exp\left({-r^2\over\Omega^2+\imath \alpha}\right) ,
\label{t_slit}
\end{equation}
where $C=\frac{1}{\sqrt{\pi\sqrt{\sigma+\imath {\alpha/\sigma}}
\sqrt{\Omega+\imath {\alpha/\Omega}}}}$, and $\alpha = \lambda L/2\pi$.
\subsection{Effect of the double-slit}
After a time $t_0$, the two photons reach the double-slit and pass through it
to emerge on the other side. A rigorous, but immensely difficult approach
would be to consider the double-slit as a potential, and let the two photons
evolve under the action of that potential. We take a simpler and less
rigorous approach, by assuming that the effect of the double-slit is to
truncate the wave-function abruptly such that only the part of the wave
function in the region $-{d\over 2}-{\epsilon\over 2} \le x_1,x_2 \le
-{d\over 2}+{\epsilon\over 2}$ and
${d\over 2}-{\epsilon\over 2} \le x_1,x_2 \le
{d\over 2}+{\epsilon\over 2}$ survives. This region corresponds to the
region of the two slits, if the slits of width $\epsilon$ are located
at $x=-{d\over 2}$ and $x={d\over 2}$.
In our new coordinates, this region corresponds approximately to
(a) $\pm{d\over 2}-{\epsilon\over 2} \le r \le
\pm{d\over 2}+{\epsilon\over 2}$ together with
$-{\epsilon\over 2} \le q \le {\epsilon\over 2}$
and
(b) $\pm{d\over 2}-{\epsilon\over 2} \le q \le
\pm{d\over 2}+{\epsilon\over 2}$ together with
$-{\epsilon\over 2} \le r \le {\epsilon\over 2}$.
This is not completely accurate as far as $\epsilon$ is concerned, but
since the interference will be seen in the limit of very small $\epsilon$,
this approximation suffices for our purpose.
Case (a) corresponds to both photons passing through the same slit,
whereas case (b) corresponds to both photons passing through
different slits. Notice that if the two photons have a high spatial
correlation, case (b) is expected to have very low probability.
The two photons travel a distance $D = ct$ to reach the screen/detector.
The state at the screen is given by the following time-evolution
\begin{eqnarray}
\Psi(r,q,t) &=& \int_{-{d\over 2}-{\epsilon\over 2}}^{-{d\over 2}+{\epsilon\over 2}}dr'K_r(t)
\int_{-{\epsilon\over 2}}^{{\epsilon\over 2}}dq'K_q(t) \Psi(r',q',t_0)\nonumber\\
&+& \int_{{d\over 2}-{\epsilon\over 2}}^{{d\over 2}+{\epsilon\over 2}}dr'K_r(t)
\int_{-{\epsilon\over 2}}^{{\epsilon\over 2}}dq'K_q(t) \Psi(r',q',t_0)\nonumber\\
&+& \int_{-{d\over 2}-{\epsilon\over 2}}^{-{d\over 2}+{\epsilon\over 2}}dq'K_q(t)
\int_{-{\epsilon\over 2}}^{{\epsilon\over 2}}dr'K_r(t) \Psi(r',q',t_0)\nonumber\\
&+& \int_{{d\over 2}-{\epsilon\over 2}}^{{d\over 2}+{\epsilon\over 2}}dq'K_q(t)
\int_{-{\epsilon\over 2}}^{{\epsilon\over 2}}dr'K_r(t) \Psi(r',q',t_0) ,\nonumber\\
\label{psiformal}
\end{eqnarray}
where the propagator and the initial state are given by (\ref{propagator}) and
(\ref{t_slit}), respectively. For brevity, the $q,q',r,r'$ dependence of
the propagators has been suppressed.
A typical integral in (\ref{psiformal}) looks like the following:
\begin{eqnarray}
I &=& \int_{{d\over 2}-{\epsilon\over 2}}^{{d\over 2}+{\epsilon\over 2}}
\exp\left[{-{2\pi(r-r')^2\over \imath \lambda L}}\right]
\exp\left[{-{r'^2\over\Omega^2+\imath \alpha}}\right] dr'.\nonumber\\
\label{term}
\end{eqnarray}
Since the profile of the incoming beam is wide, $\Omega^2 \gg \lambda L/2\pi$.
The slit width $\epsilon$ is assumed to be very small. Since in the integral
above, $r'$ varies only between ${d\over 2}-{\epsilon\over 2}$ to
${d\over 2}+{\epsilon\over 2}$, the term
$\exp\left({-{r'^2\over\Omega^2+\imath \alpha}}\right)$
can be assumed to be constant in this region, and equal to
$\exp\left({-{d^2/4\over\Omega^2+\imath \alpha}}\right)$. Keeping in mind the
smallness of $\epsilon$, we can make an additional approximation,
$(r-r')^2 \approx (r-{d\over 2})^2-2(r-{d\over 2})(r'-{d\over 2})$,
ignoring terms of order $\epsilon^2$. With these assumptions, the
integral in (\ref{term}) can be approximated by
\begin{eqnarray}
I &\approx& e^{2\pi \imath (r-d/2)^2\over\lambda D}
e^{-{d^2/4\over\Omega^2+\imath \alpha}}
\int_{{d\over 2}-{\epsilon\over 2}}^{{d\over 2}+{\epsilon\over 2}}
e^{-{4\pi\imath (r-d/2)(r'-d/2)\over \lambda D}}dr' \nonumber\\
&=& e^{2\pi \imath (r-d/2)^2\over\lambda D}
e^{-{d^2/4\over\Omega^2+\imath \alpha}}
\tfrac{\sin\left(2\pi(r-d/2)\epsilon/\lambda D\right)}{ 2\pi(r-d/2)/\lambda D}.
\end{eqnarray}
If similar algebra is carried out over all the integrals in (\ref{psiformal}),
one obtains the following form of the final state of the biphoton
\begin{eqnarray}
\Psi(r,q,t) &=& C_t\left(e^{{2\pi\imath \over\lambda D}(r-{d\over 2})^2}
e^{{2\pi\imath \over\lambda D}q^2}f(r-\tfrac{d}{2})f(q)
e^{-{d^2\Omega^2\over4\Omega^4+4\alpha^2}}\right.\nonumber\\
&&\left.+ e^{{2\pi\imath \over\lambda D}(r+\tfrac{d}{2})^2}
e^{{2\pi\imath \over\lambda D}q^2}f(r+\tfrac{d}{2})f(q)
e^{-{d^2\Omega^2\over4\Omega^4+4\alpha^2}}\right.\nonumber\\
&&\left.+ e^{{2\pi\imath \over\lambda D}(q+\tfrac{d}{2})^2}
e^{{2\pi\imath \over\lambda D}r^2}f(q+\tfrac{d}{2})f(r)
e^{-{d^2\sigma^2\over4\sigma^4+4\alpha^2}}\right.\nonumber\\
&&\left.+ e^{{2\pi\imath \over\lambda D}(q-\tfrac{d}{2})^2}
e^{{2\pi\imath \over\lambda D}r^2}f(q-\tfrac{d}{2})f(r)
e^{-{d^2\sigma^2\over4\sigma^4+4\alpha^2}}\right) ,\nonumber\\
\label{finalstate}
\end{eqnarray}
\begin{figure*}
\centerline{\resizebox{155mm}{!}{\includegraphics{Fig_2.pdf}}}
\caption{Schematic diagram for the proposed nonlocal biphoton experiment.
Photons 1 and 2 effectively move in opposite directions along the $y$-axis.
Detectors D1 and D2 move along the $x$-axis in synchrony such that their
$x$-positions are always the same. They also count the photons in coincidence.
}
\label{nonlocal}
\end{figure*}
\noindent where $C_t={\imath \over\lambda D}(\pi)^{-1/2}(\sigma+{\imath \alpha\over\sigma})^{-1/4}
(\Omega+{\imath \alpha\over\Omega})^{-1/4}$,
and $f(x)\equiv \frac{\sin\left({2\pi x\epsilon/\lambda D}\right)}
{2\pi x/\lambda D}$ governs the spatial spread of the interference pattern.
When the spatial spread of the biphoton at the
double-slit is much larger than the slit separation, the term
$e^{-{d^2\Omega^2\over4\Omega^4+4\alpha^2}}$ is of the order of unity.
If the spatial correlation between the two photons is high at the
double-slit, $\sigma$ is very small and consequently, the term
$e^{-{d^2\sigma^2\over4\sigma^4+4\alpha^2}}$ becomes much smaller
than unity. For sufficiently small $\epsilon$, in a large region around $r=0$ on the
screen, we can make
the approximation $f(r-\tfrac{d}{2})\approx f(r+\tfrac{d}{2}) \approx f(r)$.
One may note that because of the truncation approximation, the state
(\ref{finalstate}) is no longer normalized. However, since we are only
interested in the interference pattern, we will continue to work with the
unnormalized state.
\section{Results}
\subsection{Biphoton with wavelength $\lambda/2$}
If the entanglement between the two photons is strong, so that $\sigma$ is
very small, the last two terms in (\ref{finalstate}) can be dropped. One
would like to see the
distribution of the two photons striking at the same position on the screen.
This can be achieved by putting $x=(x_1+x_2)/2=r$ and $q = (x_1-x_2)/2=0$.
The probability density $P(x)$ of the two photons striking together at a position
$x$ on the screen is then given by $|\Psi(x,0,t)|^2$ where $\Psi$ is given
by (\ref{finalstate}). Within the approximations described above, the
probability density of the biphoton to strike a position $x$ on the screen
is given by
\begin{eqnarray}
P(x) &=& |C_t|^2\epsilon^2f^2(x)\left[1 + \cos\left({4\pi xd\over\lambda D}\right)\right].
\label{double}
\end{eqnarray}
The above expression represents an interference pattern with a fringe width
given by $w = {(\lambda/2)D\over d}$, which means that the biphoton behaves
like one quanton of wavelength $\lambda/2$.
This feature has already been experimentally demonstrated in an experiment
carried out with entangled photons generated via SPDC \cite{multiphotonexpt}.
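The halved fringe width can be verified numerically: with the slowly varying envelope $f^2(x)$ dropped, the maxima of the coincidence pattern of Eq.~(\ref{double}) should be spaced by $(\lambda/2)D/d$. The parameter values below are illustrative choices of ours.

```python
import numpy as np

# Illustrative parameters (our choice, not from the paper).
lam, D, d = 800e-9, 1.0, 100e-6
w = (lam/2)*D/d                        # predicted biphoton fringe width

x = np.linspace(-5*w, 5*w, 200001)
P = 1 + np.cos(4*np.pi*x*d/(lam*D))    # pattern of Eq. (double), up to a constant

# Maxima occur where the cosine equals 1; measure their spacing directly.
is_peak = (P[1:-1] > P[:-2]) & (P[1:-1] > P[2:])
spacing = np.diff(x[1:-1][is_peak]).mean()
assert np.isclose(spacing, w, rtol=1e-6)   # fringe width = (lam/2) D / d
```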
\subsection{Nonlocal biphoton with wavelength $\lambda/2$}
We now argue that in order for the entangled photons to act as a single
quanton of wavelength $\lambda/2$, it is not necessary that they be
physically close together. That may sound like an outlandish claim, but we
shall see in the following how it may be possible.
We propose a modified experiment in which entangled
photons are separated by a polarizing beam-splitter, and each passes through
a different double-slit kept at equal distance from the beam splitter.
Effectively, the photons may now be assumed to be traveling in opposite
directions along the $y$-axis, as shown in Figure~\ref{nonlocal}.
The two entangled photons, emerging from the source, are described by the
state (\ref{gepr}). They travel in opposite directions for a time $t_0$,
after which they reach their respective double-slits. The double-slits are kept
on opposite sides of the source, at a distance $L = ct_0$ from
the source. When the two photons reach the double-slits, their $x$-dependence
is described by (\ref{t_slit}). Of course, the $y$-dependence of the two
particles will be very different: one photon will be a wave-packet centered
at $y=-L$, and the other centered at $y=L$, assuming that the source sits at
$y=0$. However, as far as the
entanglement, and the \mbox{$x$-dependence} of the state is concerned, their
$y$-dependence is unimportant. We assume that the effect of the two double-slits
is to truncate the state of the two photons to the region within the slits,
i.e.,
${-{d\over 2}}-{\epsilon\over 2} \le x_1,x_2 \le
{-{d\over 2}}+{\epsilon\over 2}$ or
${d\over 2}-{\epsilon\over 2} \le x_1,x_2 \le
{d\over 2}+{\epsilon\over 2}$.
Needless to say, for this argument to work, the $x$-positions of the two
double-slits should be exactly the same. This would make sure that the two
photons, although traveling in different directions along the $y$-axis, encounter
a slit at the same $x$-position, although their $y$-positions are separated.
It should be recalled that the two photons have a directional uncertainty
along the $x$-axis.
After emerging from the double-slits, the
two photons travel, for a time $t$, a distance $D = ct$, to reach their
respective detectors D1 and D2. The final state of the two photons at the
two detectors is given by (\ref{finalstate}). One would notice that the
same analysis, that was used for both photons traveling in the same direction
and passing through the same double-slit, works for the photons traveling in
opposite directions, and passing through different double-slits.
The probability density of a coincident click of D1 at $x_1 = x$ and
D2 at $x_2 = x$, is given by
$P(x) = |C_t|^2\epsilon^2f^2(x)\left[1 + \cos\left({4\pi xd\over\lambda D}\right)\right]$, which is the same as (\ref{double}).
But this is an interference pattern corresponding to a wavelength $\lambda/2$.
Thus we reach an amazing conclusion, that the two photons, although widely
separated in space, behave like a single quanton of wavelength $\lambda/2$
which interferes with itself (see Figure \ref{interf}).
Interestingly, an experiment with entangled photons was carried out in the
context of quantum lithography, which showed the effect predicted here,
namely, the interference pattern appearing corresponding to a wavelength
$\lambda/2$, where $\lambda$ is the wavelength of the photons \cite{shihlitho}.
However, the authors of the experiment have not analyzed it in the light of
multiphoton de Broglie waves \cite{multiphoton,multiphotonexpt}.
Another experiment with electrons emitted from photodouble ionization
of $H_2$ molecules has been performed very recently, which seems to show
an effect closely related to the one predicted here \cite{dielectron}.
The two electrons do not pass through any double-slit, but are produced at
two indistinguishable centers A or B separated by the internuclear distance
of the two atoms in the hydrogen molecule.
The authors concluded that the two electrons behave like a \emph{dielectron}
which has a wave-vector of magnitude $k_1+k_2$, $k_1, k_2$ being the
magnitudes of the wave-vectors of the two electrons. It is easy to see that
had the two wave-vectors been of the same magnitude, the \emph{dielectron}
would have a de Broglie wavelength half the wavelength of a single electron.
The authors of that paper, too, have not connected their results to
the earlier work on multiphoton interference \cite{multiphoton,multiphotonexpt}.
\begin{figure}
\centerline{\resizebox{8.5cm}{!}{\includegraphics{Fig_3.pdf}}}
\caption{Double-slit interference pattern of the biphoton given by
(\ref{double}), where $\lambda$ is the wavelength of the photons (solid line).
Fringe width is $w = {(\lambda/2) D\over d}$.
Double-slit interference pattern of the photons given by (\ref{single})
(dotted line). Fringe width is $w = {\lambda D\over d}$. A typical profile
of $f(x)$ has been used for the plots.
}
\label{interf}
\end{figure}
\subsection{Single photon interference}
We now investigate the possibility of a photon of the entangled pair behaving
like a standalone quanton. This can be achieved by fixing detector D2 at
$x_2=0$ and counting photons by D1 at various $x_1$, in coincidence with D2.
Putting $x_2=0$ corresponds to $r = x_1/2$ and $q = x_1/2$. Doing that
simplifies (\ref{finalstate}) to
\begin{eqnarray}
\Psi(x_1,x_2=0,t) &\approx& C_t\left(e^{{\pi\imath \over 2\lambda D}(x_1-d)^2}
e^{{\pi\imath \over 2\lambda D}x_1^2}f^2(\tfrac{x_1}{2})
\right.\nonumber\\
&&\left.+ e^{{\pi\imath \over 2\lambda D}(x_1+d)^2}
e^{{\pi\imath \over 2\lambda D}x_1^2}f^2(\tfrac{x_1}{2})
\right),
\end{eqnarray}
where the combined state $\Psi(x_1,x_2,t)$ is labeled by the original
coordinates $x_1,x_2$, and not by $r, q$. The probability density to find
a photon at $x_1$, $P(x_1)$ is given by $P(x_1)=|\Psi(x_1,x_2=0,t)|^2$,
and has the following form:
\begin{eqnarray}
P(x_1) &=& |C_t|^2\epsilon^2f^2(\tfrac{x_1}{2})\left[1 + \cos\left({2\pi x_1d\over\lambda D}\right)\right].
\label{single}
\end{eqnarray}
The above represents a Young's double-slit interference pattern with a
fringe width $w = {\lambda D\over d}$. In this arrangement the photons
detected by D1 behave as independent quantons with wavelength $\lambda$
(see Figure \ref{interf}).
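The contrast between the two detection schemes, shown in Figure~\ref{interf}, can be checked numerically: the conditional pattern of Eq.~(\ref{single}) should have exactly twice the fringe width of the coincidence pattern of Eq.~(\ref{double}). Parameter values are illustrative choices of ours, and the envelopes are dropped.

```python
import numpy as np

# Illustrative parameters (our choice, not from the paper).
lam, D, d = 800e-9, 1.0, 100e-6
w1 = lam*D/d                                   # single-photon fringe width

def fringe_width(pattern, x):
    """Mean spacing between adjacent interior maxima of a sampled pattern."""
    m = (pattern[1:-1] > pattern[:-2]) & (pattern[1:-1] > pattern[2:])
    return np.diff(x[1:-1][m]).mean()

x = np.linspace(-3*w1, 3*w1, 600001)
P_pair   = 1 + np.cos(4*np.pi*x*d/(lam*D))     # coincidence pattern, Eq. (double)
P_single = 1 + np.cos(2*np.pi*x*d/(lam*D))     # conditional pattern, Eq. (single)

assert np.isclose(fringe_width(P_single, x), w1, rtol=1e-6)
ratio = fringe_width(P_single, x) / fringe_width(P_pair, x)
assert np.isclose(ratio, 2.0, rtol=1e-6)       # biphoton fringes are half as wide
```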
\section{Conclusions}
We have done a wave-packet analysis of two entangled photons
passing through a double-slit. We have shown that the two photons can
behave like a single quanton of half the wavelength of the photons
when detected in coincidence at the same position. This is in agreement with
an earlier analysis and experiment by Fonseca, Monken and P\'adua
\cite{multiphotonexpt}. Going further, we have shown that the two photons can
continue to behave like a single quanton even when they are widely separated in space, a
highly nonlocal feature. This work extends the theoretical ideas of
multiphoton wave packets \cite{multiphoton,multiphotonexpt} to a nonlocal
scenario. Our result implies that even when two entangled photons are
separated in space, they may act like a single quanton which interferes
with itself. Entangled particles show very strange and counter-intuitive
properties. It has previously been shown that entangled photons can
exhibit a \emph{nonlocal} wave-particle duality \cite{gduality}.
\section*{Acknowledgment}
Ananya Paul is thankful to the Centre for Theoretical Physics, Jamia Millia
Islamia, New Delhi, for providing its facilities during the course of this
work.
Let $R$ be a commutative ring. For any small category $\cC$, a \emph{$\cC$-module} over $R$ is, by definition, a (covariant) functor from $\cC$ to the category of $R$-modules. A \emph{morphism} of $\cC$-modules over $R$ is, by definition, a natural transformation of functors. The category of $\cC$-modules over $R$ is an abelian category. One can define, in a natural way, the notion of a \emph{finitely generated} $\cC$-module over $R$.
Let $\FI$ be the category of finite sets and injective maps, and denote by $\Z_+$ the set of non-negative integers. For any $\FI$-module $V$ over $R$ and $n\in\Z_+$, we write $V_n$ for $V([n])$, where $[n]:=\{1,\ldots,n\}$. Since the automorphism group of $[n]$ in the category $\FI$ is the symmetric group $S_n$, an $\FI$-module $V$ over $R$ gives rise to a sequence $\{V_n\}$ where $V_n$ is a representation of $S_n$. It was shown by Church, Ellenberg, Farb, and Nagpal (see \cite{CEF} and \cite{CEFN}) that many interesting sequences of representations of symmetric groups arise in this way from finitely generated $\FI$-modules. A principal result they proved is that any finitely generated $\FI$-module over a commutative noetherian ring is noetherian. They deduced as a consequence that if $V$ is a finitely generated $\FI$-module over a commutative noetherian ring, then the sequence $\{V_n\}$ admits an inductive description, in the sense that for every sufficiently large integer $N$, one has
\begin{equation} \label{colimit}
V_n \cong \colim_{\substack{S\subset [n] \\ |S|\leqslant N}} V(S) \quad \mbox{ for each } n\in \Z_+,
\end{equation}
where the colimit is taken over the poset of all subsets $S$ of $[n]$ such that $|S|\leqslant N$.
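The inductive description \eqref{colimit} can be tested in a toy computation (our illustration, not from the paper): take the free $\FI$-module $V(S)=\mathbb{Q}^S$, where an injection acts by relabelling basis vectors. This module is generated and presented in degree $1$, so any $N\geqslant 1$ should suffice. The colimit of vector spaces over the poset of subsets is computed as a direct sum modulo the relations $\iota_*(v)-v$.

```python
import itertools
import numpy as np

n, N = 4, 2   # compare V([n]) with the colimit over subsets of size <= N

# Objects of the diagram: subsets S of [n] with |S| <= N; dim V(S) = |S|.
subsets = [S for k in range(N + 1)
           for S in itertools.combinations(range(1, n + 1), k)]
offset, total = {}, 0
for S in subsets:
    offset[S] = total
    total += len(S)

def coord(S, i):              # coordinate of basis vector e_i in the S-block
    return offset[S] + S.index(i)

# One relation iota_*(v) - v for each strict inclusion S ⊂ S' in the diagram.
rels = []
for S in subsets:
    for S2 in subsets:
        if S != S2 and set(S) <= set(S2):
            for i in S:
                row = np.zeros(total)
                row[coord(S2, i)] += 1
                row[coord(S, i)] -= 1
                rels.append(row)
rels = np.array(rels)
colim_dim = total - np.linalg.matrix_rank(rels)
assert colim_dim == n         # the colimit has the same dimension as V([n])

# The canonical map to V([n]) = Q^n kills the relations and is onto,
# hence induces an isomorphism from the colimit.
M = np.zeros((total, n))
for S in subsets:
    for i in S:
        M[coord(S, i), i - 1] = 1
assert np.allclose(rels @ M, 0)
assert np.linalg.matrix_rank(M) == n
```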
To provide the context for our present article, let us briefly describe the way in which \eqref{colimit} is proved in \cite{CEFN}. Suppose that $V$ is a finitely generated $\FI$-module over a noetherian ring. For any finite set $T$, there is a canonical homomorphism of $R$-modules
\begin{equation} \label{canonical map}
\colim_{\substack{S\subset T \\ |S|\leqslant N}} V(S) \longrightarrow V(T).
\end{equation}
Since $V$ is finitely generated, there exists an integer $N'$ such that $V$ is generated by $\sqcup_{i\leqslant N'} V_i$. It is easy to see that the homomorphism \eqref{canonical map} is surjective if $N\geqslant N'$. To prove the injectivity of \eqref{canonical map} when $N$ is sufficiently large, Church, Ellenberg, Farb, and Nagpal constructed a (Koszul) complex $\widetilde{S}_{-*}V$ of $\FI$-modules which has the property that:
\begin{gather*}
(H_1(\widetilde{S}_{-*} V))(T) = \mathrm{Ker}\Big( \colim_{\substack{S\subset T \\ |T|-2\leqslant |S|\leqslant|T|-1 }} V(S) \to V(T) \Big).
\end{gather*}
They verified that one has:
\begin{equation} \label{central stabilization on lhs}
\colim_{\substack{S\subset T \\ |T|-2\leqslant |S|\leqslant|T|-1 }} V(S) = \colim_{\substack{S\subset T \\ |S|<|T| }} V(S).
\end{equation}
From the noetherian property of $V$, they proved that there exists an integer $N''$ such that $(H_1(\widetilde{S}_{-*} V))(T)=0$ if $|T|\geqslant N''$. As a consequence,
\begin{equation} \label{third isomorphism}
\colim_{\substack{S\subset T \\ |S|< |T| }} V(S) = V(T) \quad \mbox{ if } |T| > \max\{N',N''\}.
\end{equation}
Set $N=\max\{N',N''\}$. The homomorphism in \eqref{canonical map} is an isomorphism if $|T|\leqslant N$, for $T$ is terminal in the poset $\{S \mid S\subset T\}$. It now follows by an induction on $|T|$ that \eqref{canonical map} is an isomorphism too if $|T|>N$, for:
\begin{equation*}
\colim_{\substack{S\subset T \\ |S|\leqslant N}} V(S)= \colim_{\substack{U\subset T \\ |U|<|T| } } \, \colim_{\substack{S\subset U \\ |S|\leqslant N} } V(S) = \colim_{\substack{U\subset T \\ |U|<|T|} } V(U) = V(T),
\end{equation*}
where the first isomorphism is routine, the second isomorphism is by the induction hypothesis, and the third isomorphism is by \eqref{third isomorphism}.
It was subsequently proved by Church and Ellenberg that for any $\FI$-module $V$ over an arbitrary commutative ring, if $V$ is presented in finite degrees (in a suitable sense), then there are integers $N'$ and $N''$ such that $V$ is generated by $\sqcup_{i\leqslant N'} V_i$, and $(H_1(\widetilde{S}_{-*} V))(T)=0$ if $|T|\geqslant N''$. Consequently, using the same arguments as above, they showed that the isomorphism \eqref{colimit} holds for all $N$ sufficiently large. This extends the result of their joint work with Farb and Nagpal to $\FI$-modules which are not necessarily noetherian (since every finitely generated $\FI$-module over a commutative noetherian ring is presented in finite degrees).
The left-hand side of \eqref{central stabilization on lhs} is isomorphic to the central stabilization construction of Putman \cite[\S1]{Putman}. In their joint work, Putman and Sam generalized the isomorphism in \eqref{colimit} to modules over complemented categories with a generator, of which $\FI$ is an example. A complemented category with a generator $X$ is the data of a symmetric monoidal category and an object $X$ satisfying a list of axioms. Their proof is similar to the one for $\FI$ described above.
A goal of our present paper is to give a very simple and transparent proof of a generalization of the above results. In particular, our proof does not require consideration of the complex $\widetilde{S}_{-*}V$ or similar complexes. Moreover, the setting for our generalization is much simpler than the one of complemented categories with a generator used by Putman and Sam \cite{PS}, and includes many more examples; our generalization is, in fact, motivated by the fact that several combinatorial categories studied by Sam and Snowden in \cite{SS-Grobner} do not fall within the framework of \cite{PS}. Another goal of our paper is to explain the role of the quadratic property of $\FI$ in the isomorphism (\ref{central stabilization on lhs}).
\subsection*{Outline of the paper}
This paper is organized as follows.
In Section \ref{generalities}, we define our generalization of the notion of central stability and introduce the notion of $d$-step central stability. We show that in the special case of complemented categories with a generator studied by Putman and Sam, our notion of central stability is indeed equivalent to their notion of central stability. We also show that for the category $\FI$, it is equivalent to the inductive description \eqref{colimit}.
In Section \ref{central stability}, we give a reminder of a key lemma from Morita theory. We then prove our first main result, that a module is presented in finite degrees if and only if it is centrally stable. We deduce that if every finitely generated module is noetherian, then every finitely generated module is centrally stable. We prove that the converse of the preceding statement holds under a local finiteness assumption on the category when the base ring is a commutative noetherian ring. To illustrate the wide applicability of our results, we recall examples of combinatorial categories introduced by Sam and Snowden in \cite{SS-Grobner} which fall within our framework but are not complemented categories with a generator.
In Section \ref{d-step section}, we prove our second main result, that if the ideal of relations of a category is generated in degrees at most $d$, then every module presented in finite degrees is $d$-step centrally stable; for example, if the category is quadratic, then every module presented in finite degrees is $2$-step centrally stable.
To apply our second main result, one needs to have a practical way to check that the ideal of relations of a category is generated in degrees at most $d$. In Section \ref{last section}, we give sufficient conditions which allow one to do this.
\section{Generalities} \label{generalities}
\subsection{Notations and definitions} \label{definitions subsection}
We prefer to formulate our main results in the more familiar language of modules over algebras. Throughout this paper, we denote by $R$ a commutative ring and $\Z_+$ the set of non-negative integers.
Let $\cA$ be an $R$-linear category, i.e. a category enriched over the category of $R$-modules. We assume that $\Ob(\cA)=\Z_+$ and $\Hom_{\cA}(m,n)=0$ if $m>n$. We set
\begin{equation}\label{category algebra}
A := \bigoplus_{m,n \in \Ob(\cA)} \Hom_{\cA}(m,n).
\end{equation}
There is a natural structure of a (non-unital) $R$-algebra on $A$. (In the terminology of \cite{BP}, $A$ is a \emph{$\Z$-algebra}.) We call $A$ the \emph{category algebra} of $\cA$. For each $n\in \Ob(\cA)$, we denote by $e_n$ the identity endomorphism of $n$. For any $m, n\in \Ob(\cA)$ with $m\leqslant n$, we set
\begin{equation*}
e_{m,n} := e_m+e_{m+1}+ \cdots+ e_n \in A.
\end{equation*}
One has: $Ae_{m,n} = Ae_m \oplus Ae_{m+1} \oplus \cdots \oplus Ae_n$.
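For a concrete picture (our illustration, not from the paper), take $\cA$ to be the $R$-linear poset category on the objects $\{0,1,2\}$, with $\Hom_{\cA}(m,n)$ one-dimensional for $m\leqslant n$. Identifying the morphism $m\to n$ with the matrix unit $E_{nm}$, the category algebra $A$ becomes the algebra of lower-triangular $3\times 3$ matrices, with $e_n = E_{nn}$; the idempotents $e_{m,n}$ and the decomposition of $Ae_{m,n}$ can then be checked directly.

```python
import numpy as np

# Toy model: A = lower-triangular 3x3 matrices over Q, e_n = E_{nn}.
rng = np.random.default_rng(0)
a = np.tril(rng.standard_normal((3, 3)))    # a random element of A

E = np.zeros((3, 3, 3))
for k in range(3):
    E[k, k, k] = 1.0                        # E[k] is the idempotent e_k
e01 = E[0] + E[1]                           # e_{0,1} = e_0 + e_1

assert np.allclose(e01 @ e01, e01)                  # e_{0,1} is idempotent
assert np.allclose(a @ e01, a @ E[0] + a @ E[1])    # Ae_{0,1} = Ae_0 (+) Ae_1
assert np.allclose((a @ e01)[:, 2], 0)              # only columns 0, 1 survive
```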
A \emph{graded $A$-module} is an $A$-module $V$ such that $V = \bigoplus_{n\in \Ob(\cA)} e_n V$. Observe that if $V$ is a graded $A$-module, then any $A$-submodule of $V$ is also a graded $A$-module.
A graded $A$-module $V$ is \emph{finitely generated} if it is finitely generated as an $A$-module. Equivalently, $V$ is finitely generated if for some $N\in \Ob(\cA)$, there is an exact sequence $\bigoplus_{i\in I} Ae_{0,N} \to V \to 0$ where the indexing set $I$ is finite. A graded $A$-module $V$ is \emph{noetherian} if every $A$-submodule of $V$ is finitely generated.
A graded $A$-module $V$ is \emph{finitely presented} if for some $N\in \Ob(\cA)$, there is an exact sequence of the form
\begin{equation*}
\bigoplus_{j\in J} Ae_{0,N} \longrightarrow \bigoplus_{i\in I} Ae_{0,N} \longrightarrow V \longrightarrow 0
\end{equation*}
where both the indexing sets $I$ and $J$ are finite.
A graded $A$-module $V$ is \emph{presented in finite degrees} if for some $N\in \Ob(\cA)$, there is an exact sequence
\begin{equation} \label{presentation in finite degree exact sequence}
\bigoplus_{j\in J} Ae_{0,N} \longrightarrow \bigoplus_{i\in I} Ae_{0,N} \longrightarrow V \longrightarrow 0
\end{equation}
where the indexing sets $I$ and $J$ may be finite or infinite. The smallest $N$ for which such an exact sequence exists is called the \textit{presentation degree} of $V$, and we denote it by $\prd(V)$.
\begin{remark} \label{remark on presentation exact sequence}
It is easy to see that if $V$ is a graded $A$-module presented in finite degrees, then for every $N\geqslant \prd(V)$, there exists an exact sequence of the form \eqref{presentation in finite degree exact sequence}, where $I$ and $J$ may depend on $N$.
\end{remark}
The main definitions of this paper are as follows.
\begin{definition} \label{definition of central stability}
A graded $A$-module $V$ is called \emph{centrally stable} if for all $N$ sufficiently large, one has
\begin{equation} \label{central stability isomorphism}
Ae \otimes_{e A e} eV \cong V \quad \mbox{ where } e=e_{0,N}.
\end{equation}
\end{definition}
\begin{definition} \label{definition of d-step centrally stable}
Let $d$ be an integer $\geqslant 1$. A graded $A$-module $V$ is called \emph{$d$-step centrally stable} if for all $N$ sufficiently large, one has
\begin{equation*}
Ae \otimes_{e A e} eV \cong \bigoplus_{n\geqslant N-(d-1)} e_n V \quad \mbox{ where } e=e_{N-(d-1),N}.
\end{equation*}
\end{definition}
\subsection{Inductive descriptions}
We now explain the relation of our definition of central stability given above to the notion of central stability given by Putman and Sam in \cite{PS} for complemented categories with a generator, and the relation to the inductive description \eqref{colimit} in the special case of $\FI$-modules as given by Church, Farb, Ellenberg, and Nagpal \cite{CE, CEFN}.
Let $\cC$ be a small category such that $\Ob(\cC)=\Z_+$, and $\Hom_{\cC}(m,n)=\emptyset$ if $m>n$. We denote by $\cA _{\cC}$ the $R$-linear category with $\Ob(\cA_{\cC})=\Z_+$ and $\Hom_{\cA_{\cC}}(m,n)$ the free $R$-module with basis $\Hom_{\cC}(m,n)$ for each $m,n\in \Z_+$. Let $A_{\cC}$ be the category algebra of $\cA_{\cC}$; see \eqref{category algebra}.
Recall that a $\cC$-module over $R$ is, by definition, a (covariant) functor from $\cC$ to the category of $R$-modules. For any $\cC$-module $V$ over $R$ and $n\in\Z_+$, we write $V_n$ for $V(n)$. If $V$ is a $\cC$-module over $R$, then $\bigoplus_{n\in\Ob(\cC)} V_n$ is a graded $A_{\cC}$-module. This defines an equivalence from the category of $\cC$-modules over $R$ to the category of graded $A_{\cC}$-modules. We say that a $\cC$-module $V$ over $R$ has a certain property (such as centrally stable) if the graded $A_{\cC}$-module $\bigoplus_{n\in\Ob(\cC)} V_n$ has the property.
For any $M, N\in \Ob(\cC)$ with $M\leqslant N$, we write $\cC_{M,N}$ for the full subcategory of $\cC$ on the set of objects $\{M, M+1, \ldots, N\}$. Then the category of $\cC_{M,N}$-modules over $R$ is equivalent to the category of $e A_{\cC} e$-modules where $e=e_{M, N}$. Let $\iota_{M,N}:\cC_{M,N} \to \cC$ be the inclusion functor. We define
the \emph{restriction} functor $\mathrm{Res}_{M,N}$ along $\iota_{M,N}$ by
\begin{align*}
\mathrm{Res}_{M,N}: \mbox{(category of $\cC$-modules over $R$)} &\longrightarrow \mbox{(category of $\cC_{M,N}$-modules over $R$)}, \\
V &\longmapsto V\circ \iota_{M,N}.
\end{align*}
The \emph{left Kan extension functor} $\mathrm{Lan}_{M,N}$ along $\iota_{M,N}$ is a left adjoint functor to $\mathrm{Res}_{M,N}$; for every $\cC_{M,N}$-module $V$ over $R$ and $n\in \Ob(\cC)$, one has:
\begin{equation} \label{Lan formula}
(\mathrm{Lan}_{M,N}V)_n = \colim_{\substack{\alpha: s\to n \\ M\leqslant s\leqslant N }} V_s,
\end{equation}
where $V_s=V(s)$ and the colimit is taken over the (comma) category whose objects are the morphisms $\alpha : s\to n$ in $\cC$ such that $M\leqslant s\leqslant N$; see \cite[Theorem 2.3.3]{KS} or \cite[Section X.3, (10)]{Mac}.
The following proposition gives a reformulation for the notions of central stability and $d$-step central stability for $\cC$-modules over $R$.
\begin{proposition} \label{colimit formulation of central stability}
Let $V$ be a $\cC$-module over $R$. For every $M,N \in \Ob(\cC)$ with $M\leqslant N$, one has:
\begin{equation} \label{LanRes}
\bigoplus_{n\in\Ob(\cC)} \left( \mathrm{Lan}_{M,N} (\mathrm{Res}_{M,N} V) \right)_n \cong A e \otimes_{eAe} e\Big( \bigoplus_{n\in\Ob(\cC)} V_n \Big) \quad \mbox{ where } A=A_{\cC} \mbox{ and } e = e_{M,N}.
\end{equation}
In particular, $V$ is centrally stable if and only if for all sufficiently large $N$, one has
\begin{equation*}
V_n \cong \colim_{\substack{\alpha: s\to n \\ s\leqslant N }} V_s \quad \mbox{ for each }n\in \Z_+.
\end{equation*}
Moreover, $V$ is $d$-step centrally stable if and only if for all sufficiently large $N$, one has
\begin{equation*}
V_n \cong \colim_{\substack{\alpha: s\to n \\ N-(d-1)\leqslant s\leqslant N }} V_s \quad \mbox{ for each }n\geqslant N-(d-1).
\end{equation*}
\end{proposition}
\begin{proof}
Let $e=e_{M,N}$. For any $\cC$-module $V$ over $R$, one has
\begin{equation*}\bigoplus_{M\leqslant n\leqslant N} \left( \mathrm{Res}_{M,N} V\right)_n = e\Big( \bigoplus_{n\in\Ob(\cC)} V_n \Big).
\end{equation*}
Since the functor:
\begin{align*}
\mbox{(category of $eAe$-modules)} &\longrightarrow \mbox{(category of graded $A$-modules)}, \\ W &\longmapsto Ae\otimes_{eAe} W,
\end{align*}
is left adjoint to the functor:
\begin{align*}
\mbox{(category of graded $A$-modules)} &\longrightarrow \mbox{(category of $eAe$-modules)}, \\ V &\longmapsto eV,
\end{align*}
it follows that for any $\cC_{M,N}$-module $W$ over $R$, one has
\begin{equation*}
\bigoplus_{M\leqslant n\leqslant N} \left( \mathrm{Lan}_{M,N} W \right)_n \cong A e \otimes_{eAe} \Big( \bigoplus_{M\leqslant n\leqslant N} W_n \Big).
\end{equation*}
We have proven the isomorphism \eqref{LanRes}. The remaining statements now follow from the formula \eqref{Lan formula}.
\end{proof}
A complemented category with a generator, in the sense of Putman and Sam \cite{PS}, is the data of a symmetric monoidal category $\mathtt{A}$ and an object $X$ of $\mathtt{A}$ satisfying a list of axioms. We do not recall those axioms here since we will not need them; however, a consequence of the axioms is that the full subcategory $\cC$ of $\mathtt{A}$ on the set of objects $X^n$ for $n\in\Z_+$ is a skeleton of $\mathtt{A}$, and $\Hom_{\cC}(X^m, X^n)=\emptyset$ if $m>n$. We may identify the set of objects of $\cC$ with $\Z_+$ in the obvious way. Following Putman and Sam \cite[Theorem E]{PS}, an $\mathtt{A}$-module $V$ over $R$ is \emph{centrally stable} if for all sufficiently large $N$, the functor $V$ is the left Kan extension of the restriction of $V$ to the full subcategory of $\mathtt{A}$ spanned by the objects isomorphic to $X^n$ for some $n\leqslant N$.
\begin{corollary}
Let $\mathtt{A}$ be a complemented category with a generator $X$. Suppose that $\cC$ is the skeleton of $\mathtt{A}$ spanned by the objects $X^n$ for all $n\in \Z_+$. Let $V$ be an $\mathtt{A}$-module over $R$, and regard $V$ also as a $\cC$-module by restriction along the inclusion functor $\cC \to \mathtt{A}$. Then $V$ is centrally stable in the sense of Putman and Sam \cite[Theorem E]{PS} if and only if $V$ is centrally stable as a $\cC$-module over $R$.
\end{corollary}
\begin{proof}
This is immediate from \eqref{LanRes}.
\end{proof}
The full subcategory of $\FI$ spanned by the objects $[n]$ for all $n\in \Z_+$ is a skeleton of $\FI$. In the following corollary, we identify the set of objects of this skeleton with $\Z_+$ in the obvious way.
\begin{corollary} \label{inductive central stability of FI-modules}
Suppose that $\cC$ is the skeleton of $\FI$ spanned by the objects $[n]$ for all $n\in \Z_+$. Let $V$ be an $\FI$-module over $R$, and regard $V$ also as a $\cC$-module by restriction along the inclusion functor $\cC\to \FI$. Then $V$ is centrally stable as a $\cC$-module over $R$ if and only if for all sufficiently large $N$, the isomorphism \eqref{colimit} holds.
\end{corollary}
\begin{proof}
By Proposition \ref{colimit formulation of central stability}, it suffices to show that
\begin{equation*}
\colim_{\substack{\alpha: s\to n \\ s\leqslant N }} V_s \cong \colim_{\substack{S\subset [n] \\ |S|\leqslant N}} V(S).
\end{equation*}
But this is immediate from the observation that the natural functor:
\begin{align*}
\{ \alpha \mid \alpha \in \Hom_{\FI}([s],[n]) \mbox{ and } s\leqslant N \} &\longrightarrow \{ S \mid S\subset [n] \mbox{ and } |S|\leqslant N \},\\
\alpha &\longmapsto \mathrm{Im}(\alpha),
\end{align*}
is final; see \cite[Theorem IX.3.1]{Mac}. (Some authors refer to final functors as cofinal functors. See, for example, \cite[Definition 2.5.1]{KS}.)
\end{proof}
Let us also mention that for a principal ideal domain $R$, Dwyer defined the notion of a \emph{central coefficient system} $\rho$ in \cite{Dw} to mean a sequence $\rho_n$ of $\mathrm{GL}_n(R)$-modules and maps $F_n: \rho_n \to \rho_{n+1}$ such that $F_n$ is a $\mathrm{GL}_n(R)$-map (when $\rho_{n+1}$ is considered as a $\mathrm{GL}_n(R)$-module by restriction) and the image of $F_{n+1}F_n$ is invariant under the action of the permutation matrix $s_{n+2}\in \mathrm{GL}_{n+2}(R)$ which interchanges the last two standard basis vectors of $R^{n+2}$. Suppose that $\cC$ is the skeleton of $\FI$ spanned by the objects $[n]$ for all $n\in \Z_+$. A central coefficient system $\rho$ defines a $\cC$-module $V$ over $R$ with $V_n = \rho_n$ and such that the standard inclusion $[n]\hookrightarrow [n+1]$ induces the map $F_n$. If $V$ is finitely generated as a $\cC$-module over $R$, then by \cite[Theorem C]{CEFN} it is a centrally stable $\cC$-module over $R$.
\section{Central stability} \label{central stability}
\subsection{Key lemma} The following lemma is a standard result in Morita theory; see, for instance, \cite[Theorem 6.4.1]{Cohn}. We recall its proof here since it plays a key role in the proof of our main results.
\begin{lemma} \label{main lemma}
Let $A$ be any (possibly non-unital) ring, and $e$ an idempotent element of $A$. If $V$ is an $A$-module such that, for some indexing sets $I$ and $J$, there is an exact sequence
\begin{equation} \label{finite presentation}
\bigoplus_{j\in J} Ae \longrightarrow \bigoplus_{i\in I} Ae \longrightarrow V \longrightarrow 0,
\end{equation}
then $Ae \otimes_{eAe} eV \cong V$.
\end{lemma}
\begin{proof}
Applying the right-exact functor $Ae\otimes_{eAe} e(-)$ to (\ref{finite presentation}), we obtain the first row in the following commuting diagram:
\begin{equation} \label{diagram in main lemma}
\xymatrix{
\bigoplus_{j\in J} Ae \otimes_{eAe} eAe \ar[r]\ar[d] & \bigoplus_{i\in I} Ae \otimes_{eAe} eAe \ar[r]\ar[d] & Ae\otimes_{eAe} eV \ar[r]\ar[d] & 0 \\
\bigoplus_{j\in J} Ae \ar[r] & \bigoplus_{i\in I} Ae \ar[r] & V \ar[r] & 0
}
\end{equation}
Both the rows in (\ref{diagram in main lemma}) are exact. Since the two leftmost vertical maps are isomorphisms, the rightmost vertical map is also an isomorphism.
\end{proof}
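The smallest nontrivial instance of the lemma (our toy example, not from the paper) is $A = M_2(\mathbb{Q})$ with $e = E_{11}$, so that $eAe\cong\mathbb{Q}$. Take $V = Ae$, i.e. the column space $\mathbb{Q}^2$, which certainly admits a presentation by copies of $Ae$. Then $Ae\otimes_{eAe} eV$ has dimension $2\cdot 1 = 2$, and the multiplication map $(ae)\otimes(ev)\mapsto aev$ should be an isomorphism onto $V$:

```python
import numpy as np

# A = M_2(Q), e = E_11, V = Q^2 (column vectors), eV spanned by e1.
E11 = np.array([[1.0, 0.0], [0.0, 0.0]])
E21 = np.array([[0.0, 0.0], [1.0, 0.0]])
e1 = np.array([1.0, 0.0])

# {E11, E21} is a basis of Ae over eAe = Q; apply the multiplication map
# (ae) ⊗ (ev) -> aev to the basis tensors E11 ⊗ e1 and E21 ⊗ e1.
images = np.stack([E11 @ e1, E21 @ e1])

# The images span Q^2, so the map is onto; since the dimensions agree
# (2·1 = 2), it is an isomorphism, as the lemma predicts.
assert np.linalg.matrix_rank(images) == 2
```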
\subsection{Central stability and presentation in finite degrees}
Let $\cA$ be an $R$-linear category such that $\Ob(\cA)=\Z_+$ and $\Hom_{\cA}(m,n)=0$ if $m>n$. Denote by $A$ the category algebra of $\cA$; see \eqref{category algebra}.
\begin{theorem} \label{general centrally stable}
Let $V$ be a graded $A$-module. Then $V$ is presented in finite degrees if and only if it is centrally stable. Moreover, if $V$ is presented in finite degrees, then the isomorphism \eqref{central stability isomorphism} holds if and only if $N\geqslant \prd(V)$.
\end{theorem}
\begin{proof}
Suppose that $V$ is presented in finite degrees, and $N\geqslant \prd(V)$. Then there is an exact sequence \eqref{presentation in finite degree exact sequence} with $e=e_{0,N}$; see Remark \ref{remark on presentation exact sequence}. Hence, by Lemma \ref{main lemma}, one has $Ae\otimes_{eAe} eV \cong V$.
Conversely, suppose that $N$ is an integer such that $Ae \otimes_{eAe} eV \cong V$, where $e=e_{0,N}$. There is an exact sequence $\bigoplus_{j\in J} eAe \to \bigoplus_{i\in I} eAe \to eV \to 0$ for some indexing sets $I$ and $J$. Applying the functor $Ae\otimes_{eAe} (-)$, we obtain an exact sequence of the form \eqref{presentation in finite degree exact sequence} with $e=e_{0,N}$. Therefore, $V$ is presented in finite degrees and $N \geqslant \prd(V)$.
\end{proof}
\begin{corollary} \label{noetherian case}
If every finitely generated graded $A$-module is noetherian, then every finitely generated graded $A$-module is centrally stable.
\end{corollary}
\begin{proof}
If $V$ is a finitely generated graded $A$-module, then $V$ is finitely presented, hence centrally stable by Theorem \ref{general centrally stable}.
\end{proof}
The above proofs are very simple in comparison to the proofs for the special cases given in \cite[Theorem B]{CE}, \cite[Theorem C]{CEFN}, and \cite[Theorem E]{PS}. Let us emphasize, however, that the crux is to recognize that the notion of central stability in \cite{CE, CEFN, PS} can be reformulated in terms of Morita theory; \emph{this was not a priori obvious}.
\subsection{Central stability and noetherian property}
We prove a partial converse to Corollary \ref{noetherian case}.
Let $\cA$ be an $R$-linear category such that $\Ob(\cA)=\Z_+$ and $\Hom_{\cA}(m,n)=0$ if $m>n$. Denote by $A$ the category algebra of $\cA$; see \eqref{category algebra}. We say that $A$ is \emph{locally finite} if $\Hom_{\cA}(m,n)$ is a finitely generated $R$-module for all $m, n \in \Z_+$.
\begin{corollary}
Suppose that $R$ is a commutative noetherian ring and $A$ is locally finite. If every finitely generated graded $A$-module is centrally stable, then every finitely generated graded $A$-module is noetherian.
\end{corollary}
\begin{proof}
Let $V$ be a finitely generated graded $A$-module and $U$ a (graded) $A$-submodule of $V$. We want to prove that $U$ is finitely generated as a graded $A$-module. Since $V/U$ is a finitely generated graded $A$-module, it is centrally stable, hence presented in finite degrees by Theorem \ref{general centrally stable}. Thus, for some $N\in \Ob(\cA)$, there is a commuting diagram
\begin{equation*}
\xymatrix{
& \bigoplus_{j\in J} Ae_{0,N} \ar[r] \ar[d]^-{g} & \bigoplus_{i\in I} Ae_{0,N} \ar[r] \ar[d]^-{f} & V/U \ar[r] \ar@{=}[d] & 0 \\
0 \ar[r] & U \ar[r] & V \ar[r] & V/U \ar[r] & 0.
}
\end{equation*}
It suffices to prove that both $\mathrm{Im}(g)$ and $\mathrm{Coker} (g)$ are finitely generated graded $A$-modules.
By the Snake Lemma, there is an isomorphism $\mathrm{Coker} (g) \cong \mathrm{Coker} (f)$, so $\mathrm{Coker} (g)$ is a finitely generated graded $A$-module.
Since $A$ is locally finite and $V$ is finitely generated as a graded $A$-module, it follows that $e_{0,N}V$ is a finitely generated $R$-module, hence $e_{0,N}V$ is a noetherian $R$-module. But $\mathrm{Im}(g)$ is generated as a graded $A$-module by $e_{0,N}\mathrm{Im}(g)$, and $e_{0,N}\mathrm{Im}(g)$ is an $R$-submodule of the noetherian $R$-module $e_{0,N}V$, hence $\mathrm{Im}(g)$ is finitely generated as a graded $A$-module.
\end{proof}
\subsection{Examples} \label{subsection examples}
Let us mention some examples of combinatorial categories introduced by Sam and Snowden \cite{SS-Grobner} which fall within our framework. The central stability of modules presented in finite degrees for these categories follows from Theorem \ref{general centrally stable} (and was not previously known).
In the following examples, the set of objects of the category $\cC$ is $\Z_+$. For any $n\in \Z_+$, we set $[n]:=\{1,\ldots, n\}$.
(i) $\cC=\FI_a$ where $a$ is an integer $\geqslant 1$. For any $m, n \in \Z_+$, a morphism $m\to n$ in $\FI_a$ is a pair $(f,c)$ where $f:[m]\to [n]$ is an injective map and $c:[n]\setminus \mathrm{Im}(f) \to [a]$ is any map. The composition of $(f_1,c_1) : m\to n$ and $(f_2,c_2): n\to \ell$ is defined to be $(f_2\circ f_1, c):m\to \ell$ where $c(i)=c_1(j)$ if $i=f_2(j)$ for some $j\in [n]\setminus \mathrm{Im}(f_1)$, and $c(i)=c_2(i)$ if $i\in [\ell]\setminus \mathrm{Im}(f_2)$.
(ii) $\cC=\OI_a$ where $a$ is an integer $\geqslant 1$. For any $m, n \in \Z_+$, a morphism $m\to n$ in $\OI_a$ is a pair $(f,c)$ where $f:[m]\to [n]$ is a strictly increasing map and $c:[n]\setminus \mathrm{Im}(f) \to [a]$ is any map. The composition of morphisms is defined in the same way as above for $\FI_a$.
(iii) $\cC=\FS^{\op}$, the opposite of the category $\FS$. For any $m,n \in \Z_+$, a morphism $n\to m$ in $\FS$ is a surjective map $[n]\to [m]$. The composition of morphisms in $\FS$ is defined to be the composition of maps.
It was proved by Sam and Snowden \cite{SS-Grobner} that for the categories $\cC$ in (i)-(iii) above, every finitely generated graded $\cC$-module over a commutative noetherian ring is noetherian (and hence they are centrally stable by Corollary \ref{noetherian case}).
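As an illustrative aside (not part of the original text), the morphism sets in (i) and (ii) admit simple closed forms, $|\Hom_{\FI_a}(m,n)| = \frac{n!}{(n-m)!}\,a^{n-m}$ and $|\Hom_{\OI_a}(m,n)| = \binom{n}{m}a^{n-m}$, which can be confirmed by brute-force enumeration. The following Python sketch (the helper names are ours) does this for small parameters.

```python
from itertools import combinations, permutations, product
from math import comb, factorial

def count_FIa(m, n, a):
    """Count morphisms m -> n in FI_a: an injection f: [m] -> [n] together
    with an arbitrary map c from the complement of Im(f) to [a]."""
    total = 0
    for f in permutations(range(n), m):          # injective maps [m] -> [n]
        rest = [i for i in range(n) if i not in f]
        total += len(list(product(range(a), repeat=len(rest))))  # colourings
    return total

def count_OIa(m, n, a):
    """Count morphisms m -> n in OI_a: a strictly increasing f: [m] -> [n]
    together with an arbitrary map c from the complement of Im(f) to [a]."""
    total = 0
    for f in combinations(range(n), m):          # strictly increasing maps
        rest = [i for i in range(n) if i not in f]
        total += len(list(product(range(a), repeat=len(rest))))
    return total

# closed forms: |Hom_{FI_a}(m,n)| = n!/(n-m)! * a^(n-m),
#               |Hom_{OI_a}(m,n)| = C(n,m) * a^(n-m)
assert count_FIa(2, 4, 3) == factorial(4) // factorial(2) * 3 ** 2   # 108
assert count_OIa(2, 4, 3) == comb(4, 2) * 3 ** 2                     # 54
```

The enumeration also makes the composition rule in (i) concrete: the colouring of the complement of the composite records the colour assigned at whichever stage a point left the image.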
\section{{\it d}-step central stability} \label{d-step section}
\subsection{Ideal of relations}
Suppose $V$ is a finitely generated $\FI$-module over a commutative noetherian ring. The isomorphism \eqref{colimit} says that, provided $N$ is sufficiently large, $V$ can be recovered from its restriction to the full subcategory of $\FI$ on the set of objects $\{S \in \Ob(\FI) \mid |S| \leqslant N\}$; using the isomorphism \eqref{central stabilization on lhs}, we see that in fact we can recover $V(S)$ for $|S|>N$ using only the restriction of $V$ to the full subcategory of $\FI$ on the set of objects $\{S \in \Ob(\FI) \mid N-1\leqslant |S| \leqslant N\}$. The purpose of the present section and the next is to show that this is a consequence of the quadratic property of $\FI$. More generally, we shall prove that if the ideal of relations of a category algebra $A$ is generated in degrees $\leqslant d$ (in the sense of Definition \ref{relations degree definition} below), then any graded $A$-module which is presented in finite degrees is $d$-step centrally stable.
Let $\cA$ be an $R$-linear category such that $\Ob(\cA)=\Z_+$ and $\Hom_{\cA}(m,n)=0$ if $m>n$. Denote by $A$ the category algebra of $\cA$; see \eqref{category algebra}. For any $m\in\Z_+$, let
\begin{equation*}
\wA(m,m) := \End_{\cA} (m);
\end{equation*}
for any $n\geqslant m+1$, let
\begin{equation*}
\wA(m,n) := \Hom_{\cA}(n-1,n) \otimes_{\End_{\cA}(n-1)} \cdots \otimes_{\End_{\cA}(m+1)} \Hom_{\cA}(m,m+1).
\end{equation*}
We define a $R$-linear category $\wA$ with $\Ob(\wA)=\Z_+$ by $\Hom_{\wA}(m,n)=\wA(m,n)$ if $m\leqslant n$, and $\Hom_{\wA}(m,n)=0$ if $m>n$. There is a natural $R$-linear functor from $\wA$ to $\cA$ which is the identity map on the set of objects. For any $n\geqslant m$, let $\wI(m,n)$ be the kernel of the map $\wA(m,n) \to \Hom_{\cA}(m,n)$. One has $\wI(m,n)=0$ whenever $n$ is $m$ or $m+1$.
\begin{definition} \label{relations degree definition}
Let $d$ be an integer $\geqslant 1$. We say that \emph{the ideal of relations of $A$ is generated in degrees $\leqslant d$} if, whenever $n\geqslant m+2$, the map $\wA(m,n) \to \Hom_{\cA}(m,n)$ is surjective, and whenever $n\geqslant m+d$, one has:
\begin{equation}\label{ideal}
\wI(m,n) = \sum_{r=m}^{n-d} \wA(r+d, n)\otimes_{\End_{\cA}(r+d)} \wI(r,r+d)\otimes_{\End_{\cA}(r)} \wA(m,r).
\end{equation}
\end{definition}
\begin{remark}
If $d=1$, so that the ideal of relations of $A$ is generated in degrees $\leqslant 1$, then $\wI(m,n)=0$ for every $m$ and $n$, and so $\wA$ and $\cA$ are the same. In most interesting examples, one has $d\geqslant 2$.
\end{remark}
\begin{remark}
If $d=2$, so that the ideal of relations of $A$ is generated in degrees $\leqslant 2$, then $A$ is a quadratic algebra whose degree $k$ component is $\displaystyle{\bigoplus_{m\in \Z_+}} \Hom_{\cA}(m,m+k)$ for each $k\in \Z_+$. Many combinatorial categories such as $\FI_a$, $\OI_a$ and $\FS^{\op}$ are quadratic; see Section \ref{last section} below.
\end{remark}
\begin{example} \label{plactic monoid}
Fix a finite totally ordered set $\Omega$. Recall (see \cite{LLT}) that the plactic monoid $M$ on $\Omega$ is the monoid generated by $\Omega$ with defining relations
\begin{gather*}
xzy = zxy \quad \mbox{ if } x\leqslant y <z,\\
yxz = yzx \quad \mbox{ if } x< y\leqslant z.
\end{gather*}
An element $w\in M$ is said to be of length $\ell(w)=n$ if $w$ is a product of $n$ elements of $\Omega$; it is clear that $\ell(w)$ is well-defined. Now define $\cC$ to be the category with $\Ob(\cC)=\Z_+$ and
\begin{equation*}
\Hom_{\cC}(m,n) = \{ w\in M \mid \ell(w) = n-m \}.
\end{equation*}
The composition of morphisms in $\cC$ is given by the product in $M$. Then the category algebra $A_{\cC}$ is not quadratic but has ideal of relations generated in degrees $\leqslant 3$.
\end{example}
It is plain that the preceding example can be generalized to any monoid with a presentation whose defining relations do not change the length of words.
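As a concrete sanity check (an illustrative sketch, not part of the original argument), one can verify on short words that the defining (Knuth) relations of the plactic monoid preserve length, and count the resulting congruence classes. For words of length $3$ over a $3$-letter alphabet there are $19$ classes, which agrees with the number of semistandard Young tableaux with $3$ boxes and entries in $\{1,2,3\}$, as predicted by the Robinson--Schensted--Knuth correspondence.

```python
from itertools import product

def knuth_neighbours(w):
    """Words obtained from w by one application of a defining relation:
    x z y = z x y  if x <= y < z;   y x z = y z x  if x < y <= z."""
    out = set()
    for i in range(len(w) - 2):
        a, b, c = w[i], w[i + 1], w[i + 2]
        # relation 1, both directions: it swaps the first two letters
        if a <= c < b or b <= c < a:
            out.add(w[:i] + (b, a, c) + w[i + 3:])
        # relation 2, both directions: it swaps the last two letters
        if b < a <= c or c < a <= b:
            out.add(w[:i] + (a, c, b) + w[i + 3:])
    return out

def plactic_classes(length, alphabet):
    """Congruence classes of words, computed by closure under the relations."""
    seen, classes = set(), []
    for w in product(alphabet, repeat=length):
        if w in seen:
            continue
        cls, frontier = {w}, [w]
        while frontier:
            frontier = [u for v in frontier
                        for u in knuth_neighbours(v) if u not in cls]
            cls.update(frontier)
        seen |= cls
        classes.append(cls)
    return classes

classes = plactic_classes(3, (1, 2, 3))
assert all(len(u) == 3 for cls in classes for u in cls)  # length is preserved
assert len(classes) == 19   # number of SSYT with 3 boxes and entries <= 3
```

Since every relation replaces a three-letter factor by another three-letter factor, the well-definedness of $\ell(w)$ claimed above is visible directly in the neighbour function.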
\subsection{{\it d}-step central stability}
The following theorem is the second main result of this paper.
\begin{theorem} \label{d-step centrally stable}
Let $d$ be an integer $\geqslant 1$. Suppose the ideal of relations of $A$ is generated in degrees $\leqslant d$. If a graded $A$-module $V$ is presented in finite degrees, then $V$ is $d$-step centrally stable.
\end{theorem}
To prove Theorem \ref{d-step centrally stable}, we need the following lemma.
\begin{lemma} \label{reducing idempotent}
Suppose the ideal of relations of $A$ is generated in degrees $\leqslant d$. If $V$ is a graded $A$-module, and $n>N \geqslant m+d$, then
\begin{equation*}
e_n A f \otimes_{fAf} fV \cong e_n A e \otimes_{eAe} eV \quad \mbox{ where } e=e_{m,N},\quad f=e_{m+1,N}.
\end{equation*}
\end{lemma}
\begin{proof}
Observe that
\begin{equation*}
e_n A e = e_n A e_m \oplus e_n A f, \qquad eV = e_m V \oplus fV.
\end{equation*}
We have a natural map
\begin{equation*}
\Phi: e_n A f \otimes_{fAf} fV \to e_n A e \otimes_{eAe} eV.
\end{equation*}
We shall construct a map $\Psi$ inverse to $\Phi$. First, define the maps
\begin{equation*}
\xymatrix{
\wI(m, n) \otimes_{R} e_m V \ar[d]^{\mu} & \\
\Hom_{\cA}(m+1,n)\otimes_{\End_{\cA}(m+1)} \Hom_{\cA}(m,m+1) \otimes_{R} e_m V \ar[r]^{\hspace{1.2in}\widetilde{\theta}} \ar[d]^{\nu} & e_n A f \otimes_{fAf} fV \\
\Hom_{\cA}(m,n) \otimes_{R} e_m V &
}
\end{equation*}
where $\mu$ and $\nu$ are the obvious maps defined using composition of morphisms, and $\widetilde{\theta}$ is defined by $\widetilde{\theta}(\alpha\otimes \beta\otimes x) = \alpha \otimes \beta x$. The map $\nu$ is surjective and its kernel is the image of $\mu$. Using (\ref{ideal}) and $\Hom_{\cA}(m+1,m+d) \subset fAf$, we see that $\widetilde{\theta}\mu=0$. Therefore, $\widetilde{\theta}$ factors uniquely through $\nu$ to give a map
\begin{equation*}
\theta : \Hom_{\cA}(m,n) \otimes_{R} e_m V \to e_n A f \otimes_{fAf} fV.
\end{equation*}
Now define
\begin{equation*}
\widetilde{\Psi} : e_n A e \otimes_{R} eV \to A f \otimes_{fAf} fV
\end{equation*}
by
\begin{equation*}
\widetilde{\Psi} (\alpha \otimes x) = \left\{ \begin{array}{ll}
\theta(\alpha\otimes x) & \mbox{ if } \alpha\in e_nAe_m \mbox{ and } x\in e_m V,\\
\alpha\otimes x & \mbox{ if } \alpha\in e_n A f \mbox{ and } x\in fV,\\
0 & \mbox{ otherwise }.
\end{array} \right.
\end{equation*}
It is plain that $\widetilde{\Psi}$ descends to a map
\begin{equation*}
\Psi : e_n A e \otimes_{eAe} eV \to A f \otimes_{fAf} fV,
\end{equation*}
and $\Psi$ is an inverse to $\Phi$.
\end{proof}
We can now prove Theorem \ref{d-step centrally stable}.
\begin{proof}[Proof of Theorem \ref{d-step centrally stable}]
By Theorem \ref{general centrally stable}, for all $N$ sufficiently large, one has
\begin{equation*}
A e_{0,N} \otimes_{e_{0,N}Ae_{0,N}} e_{0,N}V \cong V;
\end{equation*}
in particular, for each $n\in \Z_+$, one has $e_n A e_{0,N} \otimes_{e_{0,N}Ae_{0,N}} e_{0,N}V \cong e_n V$.
If $n>N\geqslant d-1$, then by Lemma \ref{reducing idempotent},
\begin{align*}
e_n A e_{N-(d-1),N} \otimes_{e_{N-(d-1),N}Ae_{N-(d-1),N}} e_{N-(d-1),N}V
&\cong e_n A e_{N-d,N} \otimes_{e_{N-d,N}Ae_{N-d,N}} e_{N-d,N}V\\
&\hspace{6pt}\vdots\\
&\cong e_n A e_{0,N} \otimes_{e_{0,N}Ae_{0,N}} e_{0,N}V
\end{align*}
If $N-(d-1) \leqslant n \leqslant N$, the map
\begin{equation*}
e_n A e_{N-(d-1),N} \otimes_{e_{N-(d-1),N}Ae_{N-(d-1),N}} e_{N-(d-1),N}V \to e_n V, \quad \alpha\otimes x \mapsto \alpha x
\end{equation*}
has an inverse defined by $x \mapsto e_n \otimes x$.
\end{proof}
\section{Sufficiency conditions} \label{last section}
\subsection{Sufficiency conditions}
To apply Theorem \ref{d-step centrally stable}, we need to be able to check if the ideal of relations of $A$ is generated in degrees $\leqslant d$. In this section, we provide sufficiency conditions which allow one to check this.
Let $\cC$ be a small category such that $\Ob(\cC)=\Z_+$, and $\Hom_{\cC}(m,n)=\emptyset$ if $m>n$. Recall that $\cA_{\cC}$ is the $R$-linear category with $\Ob(\cA_{\cC})=\Z_+$ and $\Hom_{\cA_{\cC}}(m,n)$ the free $R$-module with basis $\Hom_{\cC}(m,n)$ for each $m,n\in \Z_+$. Let $A_{\cC}$ be the category algebra of $\cA_{\cC}$; see \eqref{category algebra}.
\begin{definition}
We say that the \emph{ideal of relations of $\cC$ is generated in degrees $\leqslant d$} if the ideal of relations of $A_{\cC}$ is generated in degrees $\leqslant d$.
\end{definition}
\begin{proposition} \label{combinatorial condition}
Suppose $d\geqslant 2$. The ideal of relations of $\cC$ is generated in degrees $\leqslant d$ if the following two conditions are satisfied:
(i) The composition map $\Hom_{\cC}(l, n) \times \Hom_{\cC}(m,l)\to \Hom_{\cC}(m,n)$ is surjective whenever $m<l<n$.
(ii) For every $\alpha_1, \alpha_2 \in \Hom_{\cC}(m+1,n)$ and $\beta_1, \beta_2\in \Hom_{\cC}(m,m+1)$ satisfying
\begin{equation*}
\alpha_1\beta_1 = \alpha_2\beta_2 \quad \mbox{ and } \quad n>m+d,
\end{equation*}
there exists $\gamma\in \Hom_{\cC}(m+d,n)$ and $\delta_1, \delta_2 \in \Hom_{\cC}(m+1,m+d)$ such that the following diagram commutes:
\begin{equation} \label{combinatorial condition diagram}
\xymatrix{
& m+1 \ar[dr]_{\delta_1} \ar[drrrr]^{\alpha_1} & & & & \\
m \ar[ur]^{\beta_1} \ar[dr]_{\beta_2} & & m+d \ar[rrr]^{\hspace{-1cm}\gamma} & & & n \\
& m+1 \ar[ur]^{\delta_2} \ar[urrrr]_{\alpha_2} & & & & \\
}
\end{equation}
\end{proposition}
\begin{proof}
Condition (i) implies that $\wA_{\cC} (m,n)\to \Hom_{\cA_{\cC}} (m,n)$ is surjective if $n\geqslant m+2$. We shall prove, by induction on $n-m$, that (\ref{ideal}) holds. The case $n-m=d$ is trivial. Now suppose $n-m>d$.
Let
\begin{equation*}
\wA_{\cC}(m,n) \stackrel{\pi_1}{\longrightarrow} \Hom_{\cA_{\cC}}(m+1,n)\otimes_{\End_{\cA_{\cC}}(m+1)} \Hom_{\cA_{\cC}} (m,m+1)
\stackrel{\pi_2}{\longrightarrow} \Hom_{\cA_{\cC}}(m,n)
\end{equation*}
be the obvious maps defined by composition of morphisms. It is easy to see (and can be proved in the same way as \cite[Lemma 6.1]{Li}) that $\wI(m,n)$ is spanned over $R$ by elements of the form
\begin{equation} \label{element in kernel}
\xi_{n-1}\otimes \cdots\otimes \xi_m - \xi'_{n-1}\otimes \cdots\otimes \xi'_m
\end{equation}
such that $\xi_i, \xi'_i \in \Hom_{\cC}(i,i+1)$ for $i=m,\ldots, n-1$ and $\xi_{n-1}\cdots\xi_m = \xi'_{n-1}\cdots\xi'_m$. By condition (ii), there exists $\gamma\in \Hom_{\cC}(m+d,n)$ and $\delta_1, \delta_2 \in \Hom_{\cC}(m+1,m+d)$ such that
\begin{equation*}
\xi_{n-1}\cdots\xi_{m+1}=\gamma \delta_1, \qquad \xi'_{n-1}\cdots\xi'_{m+1}=\gamma\delta_2, \qquad \delta_1\xi_m=\delta_2\xi'_m.
\end{equation*}
We can choose
\begin{equation*}
\wgamma = \wgamma_{n-1}\otimes \cdots \otimes \wgamma_{m+d}\in \wA_{\cC}(m+d,n),
\end{equation*}
where $\wgamma_i \in \Hom_{\cC}(i,i+1)$ for $i=m+d,\ldots, n-1$, such that $\wgamma_{n-1} \cdots\wgamma_{m+d} = \gamma$. We can also choose
\begin{equation*}
\wdelta = \wdelta_{m+d-1}\otimes \cdots \otimes \wdelta_{m+1} \in \wA_{\cC}(m+1,m+d),
\end{equation*}
where $ \wdelta_i \in \Hom_{\cC}(i,i+1)$ for $i=m+1,\ldots, m+d-1$, such that $\wdelta_{m+d-1} \cdots\wdelta_{m+1} = \delta_1$. Similarly, choose
\begin{equation*}
\wdelta' = \wdelta'_{m+d-1}\otimes \cdots \otimes \wdelta'_{m+1} \in \wA_{\cC}(m+1,m+d),
\end{equation*}
where $ \wdelta'_i \in \Hom_{\cC}(i,i+1)$ for $i=m+1,\ldots, m+d-1$, such that $\wdelta'_{m+d-1} \cdots\wdelta'_{m+1} = \delta_2$.
The element in (\ref{element in kernel}) can be written as:
\begin{equation*}
( \xi_{n-1}\otimes \cdots \otimes \xi_{m+1} - \wgamma\otimes \wdelta )\otimes \xi_m
- ( \xi'_{n-1}\otimes \cdots \otimes \xi'_{m+1} - \wgamma\otimes \wdelta' )\otimes \xi'_m
+ \wgamma \otimes (\wdelta \otimes \xi_m - \wdelta' \otimes \xi'_m).
\end{equation*}
Since
\begin{gather*}
\xi_{n-1}\otimes \cdots \otimes \xi_{m+1} - \wgamma\otimes \wdelta \in \wI(m+1, n),\\
\xi'_{n-1}\otimes \cdots \otimes \xi'_{m+1} - \wgamma\otimes \wdelta' \in \wI(m+1,n),\\
\wdelta \otimes \xi_m - \wdelta' \otimes \xi'_m\in \wI(m,m+d),
\end{gather*}
the result follows by induction.
\end{proof}
\begin{corollary} \label{sufficiency conditions}
Suppose that $\cC$ satisfies the two conditions in Proposition \ref{combinatorial condition} for some $d\geqslant 2$. If a $\cC$-module $V$ is presented in finite degrees, then $V$ is $d$-step centrally stable.
\end{corollary}
\begin{proof}
By Proposition \ref{combinatorial condition}, the ideal of relations of $\cC$ is generated in degrees $\leqslant d$. Hence, we may apply Theorem \ref{d-step centrally stable}.
\end{proof}
\begin{remark}
Suppose that $\cC$ satisfies condition (i) in Proposition \ref{combinatorial condition}, and its ideal of relations is generated in degrees $\leqslant d$. In this case, condition (ii) might not hold, so it is not a necessary condition. For example, suppose that $d=2$, and one has
\begin{gather*}
\Hom_{\cC}(0,1)=\{ \beta_1, \beta_2, \beta_3 \},\quad
\Hom_{\cC}(1,2)=\{ \beta'_1, \beta'_2, \beta'_3, \beta'_4 \},\quad
\Hom_{\cC}(2,3)=\{ \beta''_1, \beta''_2\},
\end{gather*}
with the defining relations
\begin{gather*}
\beta'_1 \beta_1 = \beta'_3 \beta_3,\quad
\beta'_2 \beta_2 = \beta'_4 \beta_3,\quad
\beta''_1 \beta'_3 = \beta''_2 \beta'_4,
\end{gather*}
as depicted in the following diagram:
\begin{equation*}
\xymatrix{
&& 1 \ar[rr]^{\beta'_1} && 2 \ar[drr]^{\beta''_1} && \\
0 \ar[urr]^{\beta_1} \ar[drr]_{\beta_2} \ar[rr]^{\hspace{5mm}\beta_3} && 1 \ar[urr]^{\beta'_3} \ar[drr]_{\beta'_4} && && 3 \\
&& 1 \ar[rr]_{\beta'_2} && 2 \ar[urr]_{\beta''_2} && \\
}
\end{equation*}
Let $\alpha_1 = \beta''_1\beta'_1$ and $\alpha_2 = \beta''_2\beta'_2$. Then one has
\begin{equation*}
\alpha_1 \beta_1 = \beta''_1\beta'_1 \beta_1 = \beta''_1 \beta'_3\beta_3 = \beta''_2\beta'_4\beta_3 = \beta''_2\beta'_2\beta_2 = \alpha_2 \beta_2.
\end{equation*}
On the other hand, it is easy to see that there do not exist $\gamma$, $\delta_1$ and $\delta_2$ such that the diagram in \eqref{combinatorial condition diagram} (with $d=2$, $m=0$, $n=3$) commutes.
\end{remark}
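The failure of condition (ii) in this example can also be confirmed mechanically. The following Python sketch (with hypothetical string encodings such as `b1'` for $\beta'_1$) checks that $\alpha_1\beta_1=\alpha_2\beta_2$ holds modulo the defining relations, while an exhaustive search finds no $\gamma$, $\delta_1$, $\delta_2$ making the diagram in \eqref{combinatorial condition diagram} commute for $d=2$, $m=0$, $n=3$.

```python
H12 = ("b1'", "b2'", "b3'", "b4'")                 # Hom(1,2)
H23 = ("b1''", "b2''")                             # Hom(2,3)

R02 = {("b1'", "b1"): ("b3'", "b3"),               # beta'_1 beta_1 = beta'_3 beta_3
       ("b2'", "b2"): ("b4'", "b3")}               # beta'_2 beta_2 = beta'_4 beta_3
R13 = {("b1''", "b3'"): ("b2''", "b4'")}           # beta''_1 beta'_3 = beta''_2 beta'_4

def eq2(w1, w2, table):
    """Equality of length-2 words: each relation identifies exactly two words."""
    return table.get(w1, w1) == table.get(w2, w2)

def congruent03(w1, w2):
    """Equality of length-3 words (morphisms 0 -> 3) modulo all three relations,
    computed by closure under rewriting at the relevant positions."""
    pair_rels = ([(u, v, 1) for u, v in R02.items()] +
                 [(u, v, 0) for u, v in R13.items()])
    seen, frontier = {w1}, [w1]
    while frontier:
        new = []
        for w in frontier:
            for u, v, p in pair_rels:
                for s, t in ((u, v), (v, u)):      # apply in both directions
                    if w[p:p + 2] == s:
                        r = w[:p] + t + w[p + 2:]
                        if r not in seen:
                            seen.add(r)
                            new.append(r)
        frontier = new
    return w2 in seen

a1, a2 = ("b1''", "b1'"), ("b2''", "b2'")          # alpha_1, alpha_2
assert congruent03(a1 + ("b1",), a2 + ("b2",))     # alpha_1 beta_1 = alpha_2 beta_2

# exhaustive search for gamma in Hom(2,3), delta_1, delta_2 in Hom(1,2) with
# gamma delta_i = alpha_i and delta_1 beta_1 = delta_2 beta_2
found = any(eq2((g, d1), a1, R13) and eq2((g, d2), a2, R13) and
            eq2((d1, "b1"), (d2, "b2"), R02)
            for g in H23 for d1 in H12 for d2 in H12)
assert not found
```

The search space here is small enough (two choices of $\gamma$ and four of each $\delta_i$) that the brute-force check is immediate.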
\subsection{Examples}
Using Proposition \ref{combinatorial condition}, one can easily check, for example, that (a skeleton of) $\FI$ is quadratic. Let $\cC$ be the full subcategory of $\FI$ on the set of objects $\{[n] \mid n\in\Z_+\}$. It is easy to see that $\cC$ satisfies condition (i) in Proposition \ref{combinatorial condition}. We need to verify condition (ii). Thus, suppose that we have injective maps:
\begin{equation*}
\alpha_1, \alpha_2 : [m+1] \longrightarrow [n] \quad \mbox{ and } \quad \beta_1, \beta_2 : [m] \longrightarrow [m+1],
\end{equation*}
such that $\alpha_1 \circ \beta_1 = \alpha_2 \circ \beta_2$ and $n\geqslant m+2$. Since
\begin{equation*}
|\im(\alpha_1)\cup\im(\alpha_2)| = |\im(\alpha_1)| + |\im(\alpha_2)| - |\im(\alpha_1)\cap \im(\alpha_2)| \leqslant (m+1) + (m+1) - m = m+2,
\end{equation*}
we can choose $S\subset [n]$ such that $\im(\alpha_1)\cup\im(\alpha_2)\subset S$ and $|S|=m+2$. Let $\gamma: [m+2]\to [n]$ be any injective map whose image is $S$. Then there exists a unique map $\delta_1$ (respectively $\delta_2$) such that $\gamma\circ \delta_1 = \alpha_1$ (respectively $\gamma\circ \delta_2 = \alpha_2$). Clearly, $\delta_1$ and $\delta_2$ are injective. We have:
\[ \gamma\circ \delta_1\circ \beta_1 = \alpha_1\circ \beta_1 = \alpha_2\circ \beta_2 = \gamma\circ \delta_2 \circ \beta_2. \]
Since $\gamma$ is injective, it follows that $\delta_1\circ \beta_1 = \delta_2 \circ \beta_2$. Therefore, it follows from Proposition \ref{combinatorial condition} that $\cC$ (or more precisely, the algebra $A_{\cC}$) is quadratic.
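The construction in the argument above can also be tested by brute force for small parameters. The following Python sketch (an illustration; the helper names are ours) enumerates all injective maps with $\alpha_1\circ\beta_1=\alpha_2\circ\beta_2$ and verifies that the maps $\gamma$, $\delta_1$, $\delta_2$ built exactly as in the proof make the required diagram commute.

```python
from itertools import permutations

def compose(a, b):                 # (a o b)(x) = a[b[x]]; maps as tuples of images
    return tuple(a[x] for x in b)

def check_FI_condition_ii(m, n):
    """Brute-force check of condition (ii) for FI, with gamma, delta_1, delta_2
    constructed as in the argument above."""
    inj_small = list(permutations(range(m + 1), m))   # injections [m] -> [m+1]
    inj_big = list(permutations(range(n), m + 1))     # injections [m+1] -> [n]
    for a1 in inj_big:
        for a2 in inj_big:
            for b1 in inj_small:
                for b2 in inj_small:
                    if compose(a1, b1) != compose(a2, b2):
                        continue
                    # |S| <= m+2, since the two images share at least m points
                    S = sorted(set(a1) | set(a2))
                    S += [v for v in range(n) if v not in S][: m + 2 - len(S)]
                    gamma = tuple(sorted(S))          # injection [m+2] -> [n]
                    pos = {v: i for i, v in enumerate(gamma)}
                    d1 = tuple(pos[v] for v in a1)    # the unique delta_1
                    d2 = tuple(pos[v] for v in a2)    # the unique delta_2
                    assert compose(gamma, d1) == a1
                    assert compose(gamma, d2) == a2
                    assert compose(d1, b1) == compose(d2, b2)
    return True

assert check_FI_condition_ii(1, 4)
assert check_FI_condition_ii(2, 5)
```

The cancellation step of the proof (dividing out the injective $\gamma$) appears in the code as the final assertion, which holds automatically once $\gamma\circ\delta_i=\alpha_i$.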
Similarly, one can apply Proposition \ref{combinatorial condition} to show that the categories $\FI_a$, $\OI_a$, $\FS^{\op}$, and $\VI(\mathbb{F})$ are quadratic, where for any field $\mathbb{F}$, the set of objects of $\VI(\mathbb{F})$ is $\Z_+$ and the morphisms $m\to n$ are the injective linear maps $\mathbb{F}^m \to \mathbb{F}^n$.
\begin{remark}
A twisted commutative algebra $E$ is an associative unital graded algebra $E=\bigoplus_{n\geqslant 0} E_n$ where each $E_n$ is equipped with the structure of an $S_n$-module such that the multiplication map $E_n\otimes E_m \to E_{n+m}$ is $S_n\times S_m$-equivariant, and $yx=\tau(xy)$ for every $x\in E_n$, $y\in E_m$ (where $\tau\in S_{n+m}$ switches the first $n$ and last $m$ elements of $\{1,\ldots,n+m\}$); see \cite[\S 8.1.2]{SS-tca}. Any twisted commutative algebra $E$ gives rise to an $R$-linear category $\cA$ with $\Ob(\cA)=\Z_+$ and
\begin{equation*}
\Hom_{\cA} (m,n) = RS_n \otimes_{RS_{n-m}} E_{n-m}.
\end{equation*}
In particular, when $E$ is the twisted commutative algebra with $E_n$ the trivial $S_n$-module for every $n$ (and with the obvious multiplication map), the $R$-linear category $\cA$ we obtained is precisely the category $\cA_{\cC}$ associated to a skeleton $\cC$ of $\FI$. One might ask if the category algebra of every category $\cA$ obtained from a twisted commutative algebra is quadratic. This is not true. For example, let $d$ be any integer $>2$, and let $E$ be the twisted commutative algebra with $E_n$ the trivial $S_n$-module if $n<d$, otherwise let $E_n=0$; the multiplication map of $E$ is defined in the obvious way. Then $E$ gives rise to an $R$-linear category $\cA$ with the property that the ideal of relations of its category algebra is generated in degrees $\leqslant d$, but the category algebra is not quadratic.
\end{remark}
\section{Introduction}
The importance of multiobjective optimization problems in various applications in engineering, business and management can hardly be overstated. For a wide range of applications in engineering design see, for example, \cite{deb2001multi}.
\noindent From a theoretical point of view, the idea of multiobjective optimization becomes challenging since we are speaking about minimizing/maximizing a vector-valued function. In order to define the notion of a solution, we need to rely on a partial order, which is often induced by a closed convex pointed cone on the image space of the objective function. This leads to two fundamental notions of solutions, namely the Pareto solutions and weak Pareto solutions. The points corresponding to these solutions in the image space of the objective function are often referred to as efficient solutions and weak efficient solutions. The collection of all efficient solutions is often referred to as the Pareto efficient frontier. We emphasize that the notions of Pareto and weak Pareto solutions are global notions. Mathematically speaking, it is not at all difficult to devise a local counterpart, and it is the global aspect that is sought by the decision makers. Further, the idea of Pareto solutions is often considered more relevant than that of weak solutions from the point of view of applications. We refer to the monographs of Ehrgott \cite{EHR}, Jahn \cite{JAHN}, Luc \cite{luc1987scalarization}, Chankong et al. \cite{chankong2008multiobjective} and the references therein for the development of multiobjective optimization over the past several decades.\\
There are several approaches to solve multiobjective problems, for example, scalarization methods, descent methods, metaheuristics and many more. But when it comes to actual computation using the mentioned methods, the algorithms always produce approximate solutions. Thus, it is essential to define notions of approximate solutions and characterize their properties. There are various notions of approximate solutions in the literature (see \cite{loridan1984varepsilon}, \cite{valyi1985approximate}, \cite{dutta2001approximate}, \cite{gutierrez2006approximate}, \cite{gutierrez2010optimality}) which deal with characterizing the introduced notions in greater detail. In this article, our main aim is to revisit the fundamental notion of approximate Pareto solutions and a proper Pareto solution (which we shall describe below). Further, we analyze these solutions through KKT type conditions. This approach to studying KKT type conditions for approximate solutions can lead to the development of stopping criteria for algorithms. It can also be used to check the quality of the approximate solution produced by any algorithm used to solve the multiobjective problem. \\
A decision maker who takes decisions based on multiobjective optimization models need not necessarily be interested in all the Pareto solutions of the problem at hand.
In many cases, the decision maker focuses on the part of the Pareto frontier in the image space which corresponds to a subset of the set of Pareto solutions. These subsets, when chosen in a particular way, give rise to various classes of proper Pareto solutions (see \cite{EHR}). Very recently, the authors discussed an improved version of Geoffrion proper solutions in \cite{shukla2019practical}. This solution notion is based on the assumption that the decision maker, in practice, usually looks for those proper solutions whose trade-offs are bounded by a value preset by her/him. A detailed analysis of such solutions and their approximate versions has been carried out, and they have been shown to be more stable than the standard Geoffrion solutions (for more details see \cite{shukla2019practical}). In the present article, our major goal is to analyse saddle point and KKT type conditions for these solutions.\\
The whole paper revolves around answering three questions, the first two of which stem from an attempt to generalize two results on approximate solutions for scalar optimization problems that appeared in \cite{dutta2013approximate}. The first result concerns a scalar optimization problem with locally Lipschitz data (see Theorem 3.2 in \cite{dutta2013approximate}), which says that if a sequence of points, each satisfying an approximate version of the KKT conditions, converges to a point under a suitable constraint qualification, then the limit of the sequence is a KKT point. Thus, we have the following first question:
\begin{itemize}
\item {\textbf{Q1:} Can a similar kind of result be deduced for multiobjective optimization problem?}
\end{itemize}
Our second question stems from Theorem 3.7 in \cite{dutta2013approximate}, in which the converse of Theorem 3.2 is addressed. That result gives an affirmative answer to the converse: for any local minimizer of an optimization problem satisfying a suitable constraint qualification, there exists a sequence of points converging to that local minimum, and a subsequence of it satisfies approximate KKT type conditions once its members are sufficiently close to the solution.
\begin{itemize}
\item {\textbf{Q2:} Can we generalize the reverse result in the multiobjective settings? Further, do the locally Lipschitz data suffice, or we need more assumptions? Can the convexity assumption give us better results? }
\end{itemize}
Our third question is associated with the KKT-type conditions for the approximate Geoffrion proper solutions with a preset bound.
\begin{itemize}
\item {\textbf{Q3:} Can we develop an approximate KKT type condition which completely characterizes Geoffrion proper solutions with a preset bound, at least in the convex case? Do saddle point conditions completely characterize such a class of solutions? }
\end{itemize}
The paper is organised as follows. In Section 2, we present the problem, basic definitions and the technical tools from convex and non-smooth analysis required in the article. In Section 3, we answer the first two questions raised in this section, and in Section 4 we address the last question by developing saddle point conditions and approximate KKT type conditions for the improved Geoffrion proper solutions. We end our discussion with concluding remarks in Section 5. We close this section by noting that most symbols used in the article are fairly standard in the literature.
\section{Preliminaries and basic tools}
Let $A\subseteq \mathbb{R}^n$ be a given set; the closure and interior of
the set $A$ are denoted by cl$A$ and int$A$, respectively. For vectors $x,y \in \mathbb{R}^n$ the inner product is given by $\langle x, y\rangle$. A set $A\subset \mathbb{R}^n$ is a cone if, for each $a\in A$ and positive scalar $\lambda$, $\lambda a \in A$. A cone $A$ is pointed if $A\cap (-A)=\{0\}$. The normal cone of a convex set $A$ at a point $x_0\in A$, denoted by $N_A(x_0)$, is $N_A(x_0)=\{v\in \mathbb{R}^n: \langle v, x-x_0\rangle \leq 0, \text{ for all } x\in A \}$.
We consider the following form of multiobjective optimization problem (MOP) in this article:
\begin{eqnarray*}
&&\min f(x):=(f_1(x),\ldots,f_m(x)),\\
&&{\rm subject~to}~~ g_j(x)\leq 0,~ j=1,2,..,l.
\end{eqnarray*}
where each $f_i:\mathbb{R}^n\to \mathbb{R}$ and $g_j:\mathbb{R}^n\to \mathbb{R}$. Let us denote the constraint set by $X:=\{x\in \mathbb{R}^n: g_j(x)\leq 0,~j=1,2,..,l\}\subseteq \mathbb{R}^n$, $I:=\{1,2,..,m\}$, $L:=\{1,2,..,l\}$. As mentioned earlier, there are several notions of approximate solutions, but in this article we consider the notion of approximate solution introduced by Loridan \cite{loridan1984varepsilon}. We consider $\epsilon\in\mathbb{R}^m_+,$ \textit{i.e.}, $\epsilon=(\epsilon_1,\epsilon_2,\ldots,\epsilon_m)$, $\epsilon_i\geq 0$ for each $i\in I$, to formalize our notions. Our focus in this paper is on $\epsilon$-solutions of MOP. The partial order on the image space $f(X)\subseteq \mathbb{R}^m$ is induced by the natural cone $\mathbb{R}^m_+$ in the following definition.
\begin{defn}\label{d3}
Given $\epsilon\in \mathbb{R}^m_+$, if there is no $x\in X$ such that $f(x)+\epsilon-f(x^*)\in -\mathbb{R}^m_+\setminus\{0\},$ then the point $x^*\in X$ is said to be an $\epsilon$-Pareto optimal solution of MOP. Further if there is no $x\in X$ such that
$f(x)+\epsilon-f(x^*)\in -{\rm int}(\mathbb{R}^m_+),$
then the point $x^*$ is said to be a weak $\epsilon$-Pareto optimal solution of MOP.
\end{defn}
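On a finite set of feasible points, Definition \ref{d3} can be checked directly: $x^*$ fails to be $\epsilon$-Pareto optimal precisely when some feasible $x$ satisfies $f_i(x)+\epsilon_i\leq f_i(x^*)$ for all $i$, with strict inequality for at least one $i$. The following Python sketch (an illustration on a toy biobjective problem; all names are ours) implements this test.

```python
import numpy as np

def is_eps_pareto(fx_star, F, eps):
    """fx_star: objective vector of the candidate point; F: objective vectors
    of all feasible points (one per row); eps: the tolerance vector."""
    d = fx_star - (F + eps)
    # a row of d lying in R^m_+ \ {0} certifies that fx_star is not eps-Pareto
    dominating = np.all(d >= 0, axis=1) & np.any(d > 0, axis=1)
    return not bool(np.any(dominating))

# toy biobjective problem f(x) = (x, (x-1)^2) discretized on [0, 2]
x = np.linspace(0.0, 2.0, 201)
F = np.stack([x, (x - 1.0) ** 2], axis=1)
eps = np.array([0.1, 0.1])
assert is_eps_pareto(np.array([0.5, 0.25]), F, eps)      # f(0.5) is eps-Pareto
assert not is_eps_pareto(np.array([1.5, 0.25]), F, eps)  # f(1.5) is not
```

In the second assertion the point $x=1$, with $f(1)=(1,0)$, improves both objectives of $f(1.5)=(1.5,0.25)$ by more than $\epsilon$, which is exactly the violation described in the definition.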
\noindent An $\epsilon$-Pareto (weak) optimal solution with $\epsilon=0$ is commonly known as a Pareto (weak) optimal solution. Though not always seen in the literature, the following notion of a local solution is also relevant.
\begin{defn}\label{d2}
A point $x^*$ is said to be a local Pareto optimal solution of MOP if there exists $\delta >0$ such that there is no $x\in X\cap B_{\delta}(x^*) $ satisfying
$f(x)-f(x^*)\in -\mathbb{R}^m_+\setminus\{0\},$
where $B_{\delta}(x^*)\subset \mathbb{R}^n$ is the open ball of radius $\delta$ centred at $x^*$.
\end{defn}
\noindent The weak counterpart of a local solution can be defined in a similar fashion as in Definition \ref{d2}.
We want to mention that in several situations we consider the particular form of the vector $\epsilon \in \mathbb{R}^m_+$ given by $\epsilon=\varepsilon e$, where $e=(1,1,..,1)^T$ and $\varepsilon\in \mathbb{R}_+$. In those cases, the solutions are referred to as $\varepsilon e$-Pareto and $\varepsilon e$-weak Pareto solutions, respectively. The set of all $\epsilon$-Pareto points is denoted by $\mathcal{S}_{\epsilon}(f, X)$ and the set of all $\epsilon$-weak Pareto points by $\mathcal{S}_{w,\epsilon}(f, X)$.
\begin{defn}\label{d7}
Given $\epsilon\in \mathbb{R}^m_+$, a point $x_0\in X$ is called an $\epsilon$-Geoffrion proper solution of MOP if $x_0\in \mathcal{S}_{\epsilon}(f,X)$ and if there exists a number $M>0$ such that for all $i\in I$ and $x\in X$ satisfying $f_i(x)< f_i(x_0)-\epsilon_i$, there exists an index $j\in I$ such that $f_j(x_0)-\epsilon_j< f_j(x)$ and
\begin{eqnarray*}
\frac{f_i(x_0)- f_i(x)-\epsilon_i}{f_j(x)- f_j(x_0)+\epsilon_j}\leq M.
\end{eqnarray*}\end{defn}
\noindent The upper bound on the trade-off in the above definition is not known beforehand, and the definition only assures the existence of such a bound. Further, it is clear from the definition that the trade-off bound varies as we choose different proper points. The improved definition introduced in \cite{shukla2019practical} eliminates the dependence of the bound on the solution points. Let us state the improved notion of Geoffrion proper solutions studied in \cite{shukla2019practical}.
\begin{defn}\label{d13}
Given $\epsilon\in \mathbb{R}^m_+$ and a scalar $\hat{M}>0$, a point $x_0\in X$ is called an $(\hat{M},\epsilon)$-Geoffrion proper solution of MOP if $x_0\in \mathcal{S}_{\epsilon}(f,X)$ and for all $i\in I$ and $x\in X$ satisfying $f_i(x)< f_i(x_0)-\epsilon_i$, there exists an index $j\in I$ such that $f_j(x_0)-\epsilon_j< f_j(x)$ and
\begin{eqnarray*}
\frac{f_i(x_0)- f_i(x)-\epsilon_i}{f_j(x)- f_j(x_0)+\epsilon_j}\leq \hat{M}.
\end{eqnarray*}\end{defn}
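For a finite feasible set, Definition \ref{d13} can be checked directly by enumerating the candidate improvements and trade-off partners. The Python sketch below is our own illustrative code (the data at the end is hypothetical); it tests an objective vector against a finite list of competing objective vectors:

```python
import numpy as np

def is_M_eps_geoffrion(fx0, F, eps, M):
    """Check (M, eps)-Geoffrion properness of the objective vector fx0
    against a finite list F of competing objective vectors."""
    m = len(fx0)
    # eps-Pareto prerequisite: no fx eps-dominates fx0
    for fx in F:
        d = fx0 - eps - fx
        if np.all(d >= 0) and np.any(d > 0):
            return False
    # bounded trade-off condition
    for fx in F:
        for i in range(m):
            if fx[i] < fx0[i] - eps[i]:            # criterion i improves
                if not any(fx[j] > fx0[j] - eps[j] and
                           (fx0[i] - fx[i] - eps[i])
                           <= M * (fx[j] - fx0[j] + eps[j])
                           for j in range(m) if j != i):
                    return False
    return True

fx0 = np.array([0.0, 1.0])
F = [np.array([1.0, 0.0])]
```

Note how relaxing $\epsilon$ lowers the $\hat{M}$ needed to certify a point, since the numerator shrinks and the denominator grows in the trade-off quotient.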
Given $\hat{M}>0$, we shall denote the set of all $(\hat{M},\epsilon)$-Geoffrion proper solutions by $\mathcal{G}_{\hat{M},\epsilon}(f,X)$. For $\epsilon=0$, the set of exact $\hat{M}$-Geoffrion proper solutions is denoted by $\mathcal{G}_{\hat{M}}(f,X)$. Now we shall present the Ekeland variational principle for vector-valued functions, which was introduced in \cite{tammer1992generalization} for the case where the ordering cone is $\mathbb{R}_+^m$. We first define the notions of lower semicontinuity and boundedness of vector-valued functions which will be needed in the principle.
\begin{defn}\label{d1}
Let $f:U\rightarrow \mathbb{R}^m$, where $U$ is a non-empty subset of $\mathbb{R}^n$. The function $f$ is {$\mathbb{R}^m_+$-bounded below} if there exists $y\in \mathbb{R}^m$ such that $f(x)-y\in \mathbb{R}_+^m$ for all $x\in U$. For $c\in int(\mathbb{R}_+^m)$, the function $f$ is {$(c,\mathbb{R}_+^m)$-lower semicontinuous} if for all $t\in \mathbb{R}$, the set $\{x\in U:tc- f(x)\in \mathbb{R}_+^m\}$ is closed.
\end{defn}
\begin{thm}\label{d17}
Let $f:U\rightarrow \mathbb{R}^m$, where $U\subseteq \mathbb{R}^n$, be a $(c_0,\mathbb{R}^m_+)$-lower semicontinuous function for some $c_0\in int(\mathbb{R}_+^m)$ which is also $\mathbb{R}^m_+$-bounded below. Further, suppose we are given
$\rho >0$ and a point $x_0\in U$ such that,
\begin{equation}\label{eqm}
f(x)+\rho c_0-f(x_0)\not \in -\mathbb{R}^m_+\setminus \{0\}, \text{ for all } x \in U.
\end{equation}
Then, there exists $\bar{x_0}=\bar{x_0}(\rho)\in U$ such that $\|\bar{x_0}-x_0\|\leq \sqrt{\rho}$ and for all $x\in U\setminus \{\bar{x_0}\}$
\begin{enumerate}
\item $f(x)+ \rho c_0-f(\bar{x_0})\not \in -int (\mathbb{R}^m_+) $,
\item $f(x)+\sqrt{\rho}\|\bar{x_0}-x\| c_0-f(\bar{x_0})\not \in -int (\mathbb{R}^m_+)$.
\end{enumerate}
\end{thm}
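Theorem \ref{d17} can be sanity-checked numerically in the simplest case $m=1$, $c_0=1$. The brute-force Python sketch below (our own illustration; the function and constants are hypothetical) takes $x_0$ with $f(x_0)\le \inf f+\rho$, which implies hypothesis (\ref{eqm}), and searches a grid for a point $\bar{x}_0$ within $\sqrt{\rho}$ of $x_0$ satisfying both conclusions:

```python
import numpy as np

def f(x):
    return x**2

U = np.linspace(-1.0, 1.0, 2001)   # grid standing in for the set U
x0, rho = 0.25, 0.0625             # f(x0) = inf f + rho on the grid

def evp_witness(f, U, x0, rho):
    """Return a grid point xb with |xb - x0| <= sqrt(rho) satisfying the
    two conclusions of the vector EVP, checked on the grid."""
    for xb in U[np.abs(U - x0) <= np.sqrt(rho)]:
        c1 = np.all(f(U) + rho - f(xb) >= -1e-12)
        others = U[U != xb]
        c2 = np.all(f(others) + np.sqrt(rho)*np.abs(xb - others)
                    - f(xb) >= -1e-12)
        if c1 and c2:
            return xb
    return None

xbar = evp_witness(f, U, x0, rho)
```

For this instance a short computation shows that any $\bar{x}_0\in[0,\,0.125]$ works, so the search succeeds well inside the ball of radius $\sqrt{\rho}=0.25$ around $x_0$.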
In this article, we rely on two major tools from non-smooth analysis, namely the subdifferential of a convex function and the Clarke subdifferential of a locally Lipschitz function. Though these notions are very well known in the optimization community, we provide the definitions for completeness. We shall, however, restrict ourselves to the class of finite-valued functions on $\mathbb{R}^n$. \\
Let $f:\mathbb{R}^n\rightarrow \mathbb{R}$ be a convex function, then the subdifferential of $f$ at the point $x$ is a set of vectors in $\mathbb{R}^n$, given as
$$\partial f(x)=\{v\in \mathbb{R}^n: f(y)-f(x)\geq \langle v,y-x\rangle,~\text{for all}~y\in \mathbb{R}^n\}.$$
The subdifferential set $\partial f(x)$ is non-empty, convex and compact for every $x\in \mathbb{R}^n$. The subdifferential is also deeply linked with the notion of the directional derivative of a convex function. The directional derivative of a convex function $f$ at a given $x$ in the direction $h$ is given as
$$f'(x,h)=\lim_{\lambda\downarrow 0} \frac{f(x+\lambda h)-f(x)}{\lambda}.$$
This directional derivative exists for each $x$ and in each direction $h$, and the subdifferential of $f$ can be written as $\partial f(x)=\{v\in \mathbb{R}^n:~f'(x,h)\geq \langle v,h \rangle,~\text{for all}~h\in \mathbb{R}^n\}.$ Thus each of these objects can be recovered from the other. This generalized notion of derivative has properties like the usual derivative of calculus. We begin with the most fundamental one, the sum rule. Let $f:\mathbb{R}^n\rightarrow \mathbb{R}$ and $g:\mathbb{R}^n\rightarrow \mathbb{R}$ be convex functions. Then
\begin{equation}\label{eq21}
\partial (f+g)(x)=\partial f(x)+\partial g(x).
\end{equation}
\noindent For more details on subdifferentials of convex functions, see \cite{bazaraa2013nonlinear}. It is important to note that a point $x_0$ is a global minimum of $f$ on $\mathbb{R}^n$ if and only if $0\in \partial f(x_0)$. Since the subdifferential is a generalized version of the derivative, it has some limitations. The $\varepsilon$-subdifferential is a relaxed version of the subdifferential which is a very useful tool in convex analysis and optimization. We begin by defining the $\varepsilon$-subdifferential of a convex function.
\begin{defn}\label{d8}
Let $f:\mathbb{R}^n\rightarrow \mathbb{R}$ be convex function and $\varepsilon \geq 0$. The $\varepsilon$-subdifferential of $f$ at the point $x$ is given as
$$\partial_{\varepsilon} f(x)=\{v\in \mathbb{R}^n: f(y)-f(x)\geq \langle v,y-x\rangle-\varepsilon,~\text{for all}~y\in \mathbb{R}^n\}.$$
\end{defn}
The elements of $\partial_{\varepsilon} f(x)$ are called $\varepsilon$-gradients of $f$ at $x$, and $\partial_{\varepsilon} f(x)\not =\emptyset$ for all $x\in \mathbb{R}^n$. A point $x_0$ is called an $\varepsilon$-minimizer of $f$ on $\mathbb{R}^n$ if $f(y)-f(x_0)\geq -\varepsilon$, for all $y\in \mathbb{R}^n$. Thus $x_0$ is an $\varepsilon$-minimizer of $f$ on $\mathbb{R}^n$ if and only if $0\in \partial_{\varepsilon}f(x_0)$. For a complete description of the properties of the $\varepsilon$-subdifferential, see \cite{dhara2011optimality}.\\
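For $f(x)=x^2$ the defining inequality can be solved in closed form: minimizing $y^2-x^2-v(y-x)+\varepsilon$ over $y$ (the minimum is attained at $y=v/2$ and equals $\varepsilon-(v/2-x)^2$) gives $\partial_{\varepsilon} f(x)=[2x-2\sqrt{\varepsilon},\,2x+2\sqrt{\varepsilon}]$. The short Python sketch below (our own illustrative check) verifies this interval against the definition on a grid:

```python
import numpy as np

def in_eps_subdiff(v, x, eps, grid=np.linspace(-10, 10, 4001)):
    """Grid check of f(y) - f(x) >= v*(y - x) - eps for f(y) = y**2."""
    return bool(np.all(grid**2 - x**2 - v*(grid - x) + eps >= -1e-9))

x, eps = 1.0, 0.25
lo, hi = 2*x - 2*np.sqrt(eps), 2*x + 2*np.sqrt(eps)   # predicted [1.0, 3.0]
```

For $\varepsilon=0$ the interval collapses to the ordinary gradient $\{2x\}$, as expected.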
The subdifferential defined above applies only to convex functions, so an obvious question arises: what about a subdifferential for non-convex functions? We now discuss the subdifferential of a non-convex function which is locally Lipschitz in nature. The relation between the subdifferential and the directional derivative described above becomes the key to developing the notion of a subdifferential for locally Lipschitz functions.\\
A function $f:\mathbb{R}^n\rightarrow \mathbb{R}$ is Lipschitz around $x\in \mathbb{R}^n$ if there exist a neighborhood $U_x$ of $x$ and $L_x\geq 0$ such that $|f(y)-f(z)|\leq L_x\|y-z\|,$ for all $y,z\in U_x$. The constant $L_x$ is the Lipschitz constant of the function $f$ at the point $x$. A function $f$ is said to be locally Lipschitz if $f$ is Lipschitz around $x$ for every $x\in \mathbb{R}^n$. In this article, we focus on MOP with locally Lipschitz objective and constraint functions. We now define the Clarke directional derivative of a locally Lipschitz function $f$ at $x$ in the direction $h\in \mathbb{R}^n$ as
$$f^{\circ}(x,h)=\limsup_{y\to x,t\downarrow 0} \frac{f(y+t h)-f(y)}{t}.$$
The Clarke subdifferential of $f$ at $x\in \mathbb{R}^n$ is given as,
$$\partial^{\circ} f(x)=\{\xi\in \mathbb{R}^n:~f^{\circ}(x,h)\geq \langle \xi, h \rangle,~\text{for all}~h\in \mathbb{R}^n\}.$$
For each $x\in \mathbb{R}^n,$ the set $\partial^{\circ} f(x)$ is non-empty, convex and compact. It is important to note that when the function $f$ is convex, then $\partial^{\circ} f(x)=\partial f(x)$ for all $x\in \mathbb{R}^n$. Like the subdifferential of a convex function, the Clarke subdifferential has many useful properties. If $x_0\in \mathbb{R}^n$ is a local minimum of $f$ over $\mathbb{R}^n$, then $0\in \partial^{\circ}f(x_0)$ (for a proof see \cite{rockafellar2015convex}). It also satisfies a sum rule, but with only a one-sided containment, \textit{i.e.,} for two locally Lipschitz functions $f$ and $g$, we have
$\partial^{\circ} (f+g)(x)\subset \partial^{\circ} f(x)+\partial^{\circ} g(x).$
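The containment can be strict. A classical example is $f(x)=|x|$, $g(x)=-|x|$ at $x=0$: here $\partial^{\circ} f(0)=\partial^{\circ} g(0)=[-1,1]$, while $f+g\equiv 0$ gives $\partial^{\circ}(f+g)(0)=\{0\}\subsetneq [-2,2]$. The crude Python sketch below (our own, purely illustrative) approximates the Clarke directional derivative by sampling the $\limsup$:

```python
import numpy as np

def clarke_dd(f, x, h):
    """Crude numerical Clarke directional derivative f°(x; h): sample
    base points y near x and small steps t > 0, and take the largest
    difference quotient as a proxy for the limsup."""
    best = -np.inf
    for y in x + np.linspace(-1e-3, 1e-3, 201):
        for t in (1e-4, 1e-5, 1e-6):
            best = max(best, (f(y + t*h) - f(y)) / t)
    return best

def f_abs(x):
    return abs(x)

def g_negabs(x):
    return -abs(x)

# f°(0; ±1) = 1 and g°(0; 1) = 1 (the limsup picks base points y < 0),
# while (f+g) ≡ 0 has directional derivative 0 in every direction.
```

The sampling over $y\neq x$ is what makes $g^{\circ}(0;1)=1$ rather than $-1$: the Clarke derivative sees the behavior of $g$ near $0$, not only at $0$.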
\section{Approximate KKT conditions}
In this section, we begin by defining a notion of modified $\varepsilon$-KKT points which suits the purpose of convex vector optimization problems very well. This notion is motivated by similar notions defined in \cite{dutta2013approximate} for scalar optimization problems and in \cite{durea2011stability} for convex vector optimization.
\begin{defn}\label{d21}
A feasible point $x_0\in X$ is said to be a modified $\varepsilon$-KKT point of MOP if for a given $\varepsilon \in \mathbb{R}_+$, there exists $x_{\varepsilon}$ such that $\|x_0-x_{\varepsilon}\|\leq \sqrt{\varepsilon}$ and there exists $u_i\in \partial^{\circ} f_i(x_{\varepsilon})$ for all $i\in I$, $v_r\in \partial^{\circ} g_r(x_{\varepsilon})$ for all $r\in L$, vectors $\lambda \in \mathbb{R}_+^m$ with $\|\lambda\|=1$ and $\mu \in \mathbb{R}_+^l$ such that
\begin{eqnarray*}
\left\|\sum\limits_{i=1}^m \lambda_i u_i+\sum\limits_{r=1}^l \mu_r v_r \right\|\leq \sqrt{\varepsilon}, \text{ and, }\sum\limits_{r=1}^l\mu_r g_r(x_0)\geq -\varepsilon. \label{eq320}
\end{eqnarray*}
\end{defn}
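On smooth data the Clarke subdifferentials collapse to singleton gradients and Definition \ref{d21} becomes a residual test. The Python sketch below uses a toy instance of our own ($f_1(x)=x^2$, $f_2(x)=(x-1)^2$, $g_1(x)=x-2\le 0$, taking $x_{\varepsilon}=x_0$) to certify a point near the Pareto set as a modified $\varepsilon$-KKT point:

```python
import numpy as np

def residuals(x0, lam, mu):
    """Stationarity and complementarity residuals of the modified
    eps-KKT test for f1 = x**2, f2 = (x-1)**2, g1 = x - 2 <= 0."""
    grads = np.array([2*x0, 2*(x0 - 1)])   # gradients of f1, f2
    stat = abs(lam @ grads + mu * 1.0)     # gradient of g1 is 1
    comp = mu * (x0 - 2)                   # mu_1 * g_1(x0)
    return stat, comp

x0 = 0.503                                 # near the Pareto point 0.5
lam = np.array([1 - x0, x0])
lam = lam / np.linalg.norm(lam)            # ||lam|| = 1
mu = 0.0                                   # g1 is inactive here
stat, comp = residuals(x0, lam, mu)
eps = 0.01
ok = (stat <= np.sqrt(eps)) and (comp >= -eps)   # certificate holds
```

For this pair of objectives the balancing multipliers $(1-x_0,x_0)$ make the stationarity residual vanish identically on $[0,1]$; a generic choice of $\lambda$ would leave a residual that the $\sqrt{\varepsilon}$ tolerance must absorb.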
\noindent Now we are in a position to answer the first two questions asked in the introduction. To begin with, we state two constraint qualifications, the Slater constraint qualification (SCQ for short) and the basic constraint qualification (BCQ for short), which are used in the main results of this article (see \cite{rockafellar2009variational}).
\begin{defn}\label{d22}
The MOP with convex constraint functions $g_r$, $r\in L$, satisfies the Slater constraint qualification if there exists $\hat{x}\in X$ such that $g_r(\hat{x})<0$, for all $r\in L$.
\end{defn}
\begin{defn}
The MOP with locally Lipschitz constraint functions $g_r$ for all $r\in L$ satisfies Basic Constraint Qualification (BCQ) at a point $\bar x$ if there exists no $p\in \mathbb{R}^l_+\setminus \{0\}$ such that $0\in \sum\limits_{r\in L} p_r\partial^{\circ} g_r(\bar x)$.
\end{defn}
The next theorem answers the first question (\textbf{Q1}) raised in this article: if a sequence $\{x^k\}$ of modified $\varepsilon_k$-KKT points of MOP converges to a point $x_0$ at which the basic constraint qualification holds, then $x_0$ is a KKT point of MOP. Observe that we do not need convexity of the objective functions to prove the following result; we only require local Lipschitz continuity of the objectives. It is important to note that a similar result has been discussed under convexity assumptions on the objective functions in \cite{durea2011stability}.
\begin{thm}\label{t21}
Consider the problem MOP with locally Lipschitz data and let $\{\varepsilon_k\}$ be a decreasing sequence of positive real numbers such that $\varepsilon_k \rightarrow 0$ as $k\rightarrow \infty$. Let $\{x^k\}$ be a sequence of feasible points of MOP with $x^k \rightarrow x_0$ as $k\rightarrow \infty$. Assume that for each $k$, $x^k$ is a modified $\varepsilon_k$-KKT point of MOP. Further, assume that the BCQ holds at $x_0$. Then $x_0$ is a KKT point of MOP.
\end{thm}
\textit{Proof:}
Note that the $x^k$'s are feasible points, \textit{i.e.,} $g_r(x^k)\leq 0$ for all $r\in L$, and $x^k\rightarrow x_0$. Hence, using the continuity of the $g_r$'s, we conclude that $g_r(x_0)\leq 0$ for all $r\in L$, so $x_0$ is a feasible point of MOP. Now, as $x^k$ is a modified $\varepsilon_k$-KKT point, for each $k$, Definition~\ref{d21} gives the existence of a point $\hat{x}^k$ such that $\|x^k-\hat{x}^k\|\leq \sqrt{\varepsilon_k}$, the existence of $u_i^k\in \partial^{\circ} f_i(\hat{x}^k)$ and $v_r^k\in \partial^{\circ} g_r(\hat{x}^k)$ for all $i \in I$ and $r\in L$, and the vectors $\lambda^k \in \mathbb{R}_+^m$ and $\mu^k \in \mathbb{R}_+^l$ with $\|\lambda^k\|=1$ such that
\begin{eqnarray}
&& \left\|\sum\limits_{i\in I} \lambda^k_i u^k_i+\sum\limits_{r\in L} \mu^k_rv_r^k\right\|\leq \sqrt{\varepsilon_k},\label{eq01} \text{ and } \\
&& \sum\limits_{r\in L} \mu_r^k g_r(x^k)\geq -\varepsilon_k. \label{eq02}
\end{eqnarray}
We first claim that $\{\mu^k\}$ is bounded. To prove our claim, assume on the contrary that $\{\mu^k\}$ is unbounded; passing to a subsequence if necessary, $\|\mu^k\|\rightarrow \infty$ as $k\rightarrow \infty$. Further, Inequality~(\ref{eq01}) can be re-written as
\begin{equation}\label{eq03}
\left\|\sum\limits_{i\in I} \frac{\lambda^k_i}{\|\mu^k\|} u^k_i+\sum\limits_{r \in L} \frac{\mu^k_r}{\|\mu^k\|} v_r^k \right\|\leq \frac{1}{\|\mu^k\|}\sqrt{\varepsilon_k}.
\end{equation}
Then, in Equation~\eqref{eq03}, we observe the following:
\begin{enumerate}
\item As $\varepsilon_k$ converges to $0$, the same holds for $\frac{1}{\|\mu^k\|}\sqrt{\varepsilon_k}$.
\item Let $p_r^k=\frac{\mu_r^k}{\|\mu^k\|} \in \mathbb{R}_+$, for all $r\in L$. As $\|p^k\| =1$, $\{p^k\}$ is a bounded sequence. So, by the Bolzano-Weierstrass theorem, there exists a subsequence of $\{p^k\}$ which converges to $\hat{p}\in \mathbb{R}^l_+$ with $\|\hat{p}\| = 1$. In fact, without loss of generality, we can assume that $p^k_r$ converges to $\hat{p}_r$. Hence, for all $r\in L$
\begin{equation}\label{eq04}
\frac{\mu^k_r}{\|\mu^k\|}=p^k_r\rightarrow \hat{p}_r,~as ~k\rightarrow \infty.
\end{equation}
\item As the $f_i$'s are locally Lipschitz functions, their Clarke subdifferentials are locally bounded, \textit{i.e.,} for $x_0\in X$, there exists $\delta>0$ such that for all $z \in B_{\delta}(x_0)$, $\partial^\circ f_i(z)\subset K_i$, where, for each $i\in I$, $K_i$ is a bounded subset of $\mathbb{R}^n$. Since $\|\hat{x}^k-x^k\|\leq \sqrt{\varepsilon_k}$ and $x^k\rightarrow x_0$, we have $\hat{x}^k\rightarrow x_0$, so there exists $k_0\in \mathbb{N}$ such that, for all $k\ge k_0 $, $\hat{x}^k\in B_{\delta}(x_0)$. Therefore, by choosing $K=\bigcup\limits_{i\in I}{\tilde{K}_i}$ where $\tilde{K}_i= K_i\cup\partial^{\circ} f_i(\hat{x}^1)\cup \ldots \cup \partial^{\circ} f_i(\hat{x}^{k_0})$, we get $\partial^{\circ} f_i(\hat{x}^k)\subset K$, for all $i\in I$ and all $k$. Hence, the sequence $\{u_i^k\}$, where $u_i^k\in \partial^{\circ} f_i(\hat{x}^k)$, is bounded for all $i \in I$. Using the facts that $\|\lambda^k\|=1$ and $\|\mu^k\|\rightarrow \infty$, we deduce that for all $i\in I$,
\begin{eqnarray}\label{eq06}
\frac{\lambda_i^k}{\|\mu^k\|}u_i^k\rightarrow 0, ~as ~k\rightarrow \infty.
\end{eqnarray}
\item An argument similar to the previous part implies that, for each fixed $r \in L$, the sequence $\{ v_r^k\}$, where $v_r^k\in \partial^{\circ} g_r(\hat{x}^k)$, is bounded. Hence, the sequence $\{v_r^k\}$ has a limit point, say $\hat{v}_r$, for all $r\in L$. Without loss of generality, we can assume that for all $r\in L$,
\begin{equation}\label{eq07}
v_r^k\rightarrow \hat{v}_r, \text{ as } k\rightarrow \infty.
\end{equation}
Since each $\partial^{\circ} g_r$ has a closed graph and $\hat{x}^k\rightarrow x_0$, one has $\hat{v}_r\in \partial^{\circ} g_r(x_0)$ for all $r\in L$.
\end{enumerate}
Now, taking the limit as $k\rightarrow \infty $ in Inequality~(\ref{eq03}), in view of the above observations~(\ref{eq04}), (\ref{eq06}) and (\ref{eq07}), we get
$$\left\|\sum_{r\in L} \hat{p}_r\hat{v}_r\right\| \leq 0.$$
Hence, we have $\sum\limits_{r\in L} \hat{p}_r \hat{v}_r=0$, where $\hat{p}\in \mathbb{R}^l_+$ with $\|\hat{p}\|=1$ and $\hat{v}_r\in \partial^{\circ} g_r(x_0)$ for all $r\in L$. This contradicts the assumption that BCQ holds at $x_0$. Therefore, we have shown the correctness of our claim, \textit{i.e.}, the sequence $\{\mu^k\}$ is bounded.
As $\{\mu^k\}$ is a bounded sequence, an argument similar to the one above implies that there exists $\hat{\mu}\in \mathbb{R}^l_+$ such that $\mu^k \rightarrow \hat{\mu}$ as $k\rightarrow \infty$ (along a subsequence, without loss of generality). Similarly, the sequences $\{\lambda^k\}$ and $\{u^k\}$ have limit points, say $\hat{\lambda}$ and $\hat{u}$, respectively, with $\|\hat{\lambda}\|=1$, $\lambda^k\rightarrow \hat{\lambda}$ and $u^k \rightarrow \hat{u}$; by the closed-graph property of the Clarke subdifferential, $\hat{u}_i\in \partial^{\circ} f_i(x_0)$ for all $i\in I$.
Now taking $k\rightarrow \infty $ in Inequality~(\ref{eq01}), we get
$\|\sum\limits_{i \in I} \hat{\lambda_i} \hat{u}_i+\sum\limits_{r\in L} \hat{\mu_r} \hat{v}_r\|\leq 0.$
Thus
\begin{eqnarray}\label{eq09}
\sum\limits_{i \in I} \hat{\lambda_i} \hat{u}_i+\sum\limits_{r \in L} \hat{\mu_r} \hat{v}_r= 0, && \hspace*{-.2in}
\text{ where } \hat{\lambda}\in \mathbb{R}^m_+ \text{ with } \|\hat{\lambda}\|=1, \nonumber \\
&& \hat{\mu}\in \mathbb{R}^l_+, \hat{u}_i\in \partial^{\circ} f_i(x_0) \text{ and } \hat{v}_r\in \partial^{\circ} g_r(x_0).
\end{eqnarray}
Since, $x_0$ is a feasible point of MOP and $\hat{\mu}_r\geq 0$ for all $r\in L$, we have
$\sum\limits_{r \in L} \hat{\mu}_r g_r(x_0) \leq 0.$
Taking $k\rightarrow \infty$ in Inequality~(\ref{eq02}), we get $\sum\limits_{r\in L} \hat{\mu}_r g_r(x_0) \geq 0$ and thus, we conclude that
\begin{equation}\label{eq012}
\sum\limits_{r\in L} \hat{\mu}_rg_r(x_0) =0.
\end{equation}
Equations~(\ref{eq09}) and (\ref{eq012}) together imply that $x_0$ is a KKT point of MOP.
\hfill$\Box$\\
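Theorem \ref{t21} can be illustrated on a smooth convex toy instance of our own (the data below is hypothetical): $f_1(x)=(x+1)^2$, $f_2(x)=(x+2)^2$, $g_1(x)=-x\le 0$. The feasible points $x^k=1/k$ converge to $x_0=0$, each is a modified $\varepsilon_k$-KKT point with $\varepsilon_k=6/k$ (taking $x_{\varepsilon}=x^k$), BCQ holds at $0$ since $\nabla g_1(0)=-1\neq 0$, and the limiting multipliers certify $x_0$ as an exact KKT point:

```python
import numpy as np

lam = np.array([0.6, 0.8])                 # fixed multipliers, ||lam|| = 1

def certificate(xk):
    """Stationarity/complementarity residuals at x^k with x_eps = x^k."""
    grads = np.array([2*(xk + 1), 2*(xk + 2)])   # gradients of f1, f2
    mu = lam @ grads                       # chosen so stationarity is exact
    stat = abs(lam @ grads + mu*(-1.0))    # gradient of g1 is -1
    comp = mu * (-xk)                      # mu_1 * g_1(x^k)
    return stat, comp, mu

ks = [10, 100, 1000]
eps_ks = [6.0/k for k in ks]
certs = [certificate(1.0/k) for k in ks]
# limiting multiplier: mu -> 4.4, and at x0 = 0 the exact KKT system holds
kkt_residual = abs(0.6*2 + 0.8*4 - 4.4)
```

The complementarity violation $\mu^k\,g_1(x^k)=-\mu^k/k$ is of order $1/k$, which is exactly why $\varepsilon_k$ of order $1/k$ suffices here.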
\noindent The next theorem deals with the second question asked in this article. Basically, \textbf{Q2} for a multiobjective problem can be framed as follows: for every local Pareto point of MOP, does there exist a sequence converging to the point which has a subsequence satisfying some type of approximate KKT conditions? We answer this question in Theorem \ref{t22} for MOP with locally Lipschitz objective functions under the Slater constraint qualification. This result shows that we always have a sequence converging to a local Pareto point of MOP along which approximate KKT type conditions hold, which shows that the idea of constructing approximate KKT type conditions is essential in multiobjective theory. Note that \textbf{Q2} has not been addressed in \cite{durea2011stability}. Before we state the theorem, we present the following lemma which will be needed in the proof of the result. This lemma is a special case of Theorem 2.44 in \cite{mordukhovich2013easy}.
\begin{lem}\label{lemma}
Let $A$ and $B$ be two non-empty subsets of $\mathbb{R}^m$. Let $A\cap B\ne \emptyset$ and $\bar{x}\in A\cap B$. Assume that the following qualification condition holds:
$$N_A(\bar{x})\cap (-N_B(\bar{x}))=\{0\}.$$
Then, $N_{A\cap B}(\bar{x})=N_A(\bar{x})+N_B(\bar{x})$.
\end{lem}
\begin{thm}\label{t22}
Consider the problem MOP with locally Lipschitz objectives $f_i$, $i\in I$, and convex constraint functions $g_r$, $r \in L$, satisfying the Slater constraint qualification. Further, assume that $x_0$ is a local Pareto minimum and consider a decreasing sequence $\{\varepsilon_k\}$ of positive real numbers converging to $0$. Then, there exists a sequence $\{x^k\}$ of feasible points converging to $x_0$ which has a subsequence $\{y^k\}$ such that for each $y^k$, there exists $\hat{y}^k$ satisfying
\begin{enumerate}
\item $\|y^k-\hat{y}^k\|\leq \sqrt{\varepsilon_k}$,
\item there exists $u_i^k\in \partial^{\circ} f_i(\hat{y}^k)$ and $v_r^k\in \partial^{\circ} g_r(\hat{y}^k)$, for all $i \in I$ and $r\in L$, such that
\begin{eqnarray}
\left\|\sum\limits_{i\in I} \lambda^k_i u^k_i+\sum\limits_{r\in L} \mu^k_r v_r^k \right\|\leq \sqrt{\varepsilon_k}, \label{eq311}\\
\sum\limits_{r\in L}\mu_r^k g_r(\hat{y}^k)= 0,\label{eq312}
\end{eqnarray}
where $\lambda^k \in \mathbb{R}^m_+$ with $\|\lambda^k\|=1$ and $\mu^k\in \mathbb{R}^l_+$.
\end{enumerate}
\end{thm}
\textit{Proof:} By assumption, $x_0$ is a local Pareto minimizer of MOP, \textit{i.e.}, there exists $\delta>0$ such that
\begin{equation*}
f(x)-f(x_0)\not \in - \mathbb{R}^m_+\setminus \{0\}, \text{ for all } x\in V,
\end{equation*}
equivalently,
\begin{equation}\label{eq013}
f(x)-f(x_0)\in \tilde{W}, \text{ for all } x\in V,
\end{equation}
where $\tilde{W}:=\mathbb{R}^m\setminus (- \mathbb{R}^m_+\setminus \{0\})$ and $V=X\cap \overline {B_{\delta}(x_0)}$. The convexity of the constraint functions $g_r$ implies that the feasible set $X$ is closed and convex, and hence $V$ is a closed, convex and bounded set. As $x_0\in V$, there exists a sequence $\{x^k\}$ in $X$ with $x^k$ converging to $x_0$ and $x^k \in V$
for all $k$ sufficiently large. We break the rest of the proof into two steps. In the first step, we prove that there exists a subsequence $\{y^k\}$ of $\{x^k\}$ such that $y^k \in V$ and $y^k$ is an $\varepsilon_k e$-Pareto minimum of MOP with feasible set $V$, where $e=(1,\dots,1)^T$ and $\varepsilon_k>0$.
As the $f_i$'s, $i\in I$, are continuous (being locally Lipschitz), $f_i(x^k) \rightarrow f_i(x_0)$ as $k\rightarrow \infty$, for all $i\in I$. So, for a given $\varepsilon_1 >0$ and each $i \in I$, there exists a natural number $N^i_1$ such that
$$| f_i(x^k)-f_i(x_0) |< \varepsilon_1, \text{ for all } k \ge N^i_1.$$
Now choose $N_1=\max\{N^1_1,N^2_1,\ldots,N^m_1\}$. Thus, for all $i\in I$
\begin{equation}\label{eqn:epsi1}
| f_i(x^k)-f_i(x_0)|< \varepsilon_1, \text{ for all } k \ge N_1.
\end{equation}
Choose $y^1=x^{N_1}$; then $| f_i(y^1)-f_i(x_0) |< \varepsilon_1$ for all $i\in I$, which implies
\begin{equation}\label{eq015}
f(x_0)+e\varepsilon_1-f(y^1)\in {int}(\mathbb{R}^m_+).
\end{equation}
Note that $\tilde{W} +int(\mathbb{R}^m_+)\subseteq \tilde{W}$; hence, (\ref{eq013}) and~(\ref{eq015}) together give
\begin{equation}
f(x)+e\varepsilon_1-f(y^1) \not \in - \mathbb{R}^m_+\setminus \{0\}, \text{ for all } x\in V.
\end{equation}
Take $\varepsilon_2 < \varepsilon_1 $; a similar argument applied to the tail sequence $\{x^{N_1+1}, x^{N_1 + 2}, \ldots\}$ gives an element $y^2=x^{N_2}$, with $N_2 > N_1$, such that
$ f(x)+\varepsilon_2 e-f(y^2)\not \in - \mathbb{R}^m_+\setminus \{0\}\text{ for all }x\in V.$ Proceeding in this way gives a subsequence $\{y^k\}$ of $\{x^k\}$ such that $y^k \in V$ and
\begin{equation}\label{eq016}
f(x)+\varepsilon_ke-f(y^k)\not \in - \mathbb{R}^m_+\setminus \{0\}, \text{ for all }x\in V.
\end{equation}
Hence, $y^k \in V$ is an $\varepsilon_k e$-Pareto minimum of MOP with feasible set $V$. This completes the first step. We now come to the second step to complete the proof.
Since each $f_i$ is locally Lipschitz and $V$ is compact, $f$ is ($e,\mathbb{R}^m_+$)-lower semicontinuous and $\mathbb{R}_+^m$-bounded below on $V$. Thus, the vector Ekeland variational principle (Theorem \ref{d17}), applied with $\rho=\varepsilon_k$, $c_0=e$ and $x_0=y^k$ (hypothesis (\ref{eqm}) being exactly (\ref{eq016})), gives the existence of $\hat{y}^k \in V$, for each $y^k \in V$, such that $\|\hat{y}^k-y^k\|\leq \sqrt{\varepsilon_k},$ and for all $x\in V\setminus \{\hat{y}^k\},$
\begin{enumerate}
\item $f(x)+\varepsilon_k e-f(\hat{y}^k)\not \in -int (\mathbb{R}_+^m)$, and
\item $f(x)+\sqrt{\varepsilon_k}\|\hat{y}^k- x\| e-f(\hat{y}^k)\not \in -int (\mathbb{R}^m_+)$.
\end{enumerate}
Thus, from the above, we conclude that $\hat{y}^k$ is a weak Pareto minimizer of the problem
$$\min\limits_{x\in V} \phi(x), \text{ where } \phi(x) = f(x)+\sqrt{\varepsilon_k}\|x-\hat{y}^k\| e$$
(we write $\phi$ to avoid confusion with the constraint functions $g_r$).
Now, using the necessary optimality condition for the above multiobjective problem, there exists $\lambda^k\in \mathbb{R}^m_+$ with $\|\lambda^k\|=1$ such that
$$0\in \sum\limits_{i\in I}\lambda_i^k \partial^{\circ}\phi_i(\hat{y}^k)+N_V(\hat{y}^k),$$
where $N_V(\hat{y}^k)$ is the normal cone to the set $V$ at $\hat{y}^k$. For a proof of the above result see, for example, page 137 of Chapter 5 in \cite{dutta2012strong}.
Now, applying the sum rule for the Clarke subdifferential (see \cite{clarke1990optimization}) and using the fact that the subdifferential of the norm function $x\mapsto\|x-\hat{y}^k\|$ at $x=\hat{y}^k$ is the closed unit ball, we get
\begin{equation}\label{eq018}
0\in \sum\limits_{i\in I}\lambda_i^k \partial^{\circ} f_i(\hat{y}^k)+\sqrt{\varepsilon_k}B_{1}(0)+N_V(\hat{y}^k).
\end{equation}
Since $x^k\rightarrow x_0$ and $\{y^k\}$ is a subsequence of $\{x^k\}$, we have $y^k\in X\cap B_{\delta}(x_0)$ for sufficiently large $k$. As $\hat{y}^k\in B_{\sqrt{\varepsilon_k}}(y^k)$ and $\varepsilon_k \rightarrow 0$, for sufficiently large $k$, $B_{\sqrt{\varepsilon_k}}(y^k) \subset B_{\delta}(x_0)$. Hence, $\hat{y}^k\in B_{\delta}(x_0)$ for $k$ sufficiently large.
Clearly, $X\cap \overline{B_{\delta}(x_0)}\not = \emptyset$. We will now see that the qualification condition of Lemma \ref{lemma} holds in this case. Since $\hat{y}^k\in B_{\delta}(x_0)$, we see that $\hat{y}^k\in{int}\, \overline{B_{\delta}(x_0)}$, and thus $N_{\overline{B_{\delta}(x_0)}}(\hat{y}^k)=\{0\}$. Hence, $N_X(\hat{y}^k)\cap(-N_{\overline{B_{\delta}(x_0)}}(\hat{y}^k))=\{0\}$. Therefore, using Lemma \ref{lemma}, we conclude that
$$ N_V(\hat{y}^k)=N_{X\cap \overline{B_{\delta}(x_0)}}(\hat{y}^k)
=N_X(\hat{y}^k)+N_{\overline{B_{\delta}(x_0)}}(\hat{y}^k).$$
Thus $N_V(\hat{y}^k)=N_X(\hat{y}^k).$ Hence, we can rewrite (\ref{eq018}) as
\begin{equation}\label{eq019}
0\in \sum\limits_{i\in I}\lambda_i^k \partial^{\circ} f_i(\hat{y}^k)+\sqrt{\varepsilon_k}B_{1}(0)+N_X(\hat{y}^k).
\end{equation}
Further as the Slater constraint qualification holds, using Corollary~$23.7.1$ of \cite{rockafellar2015convex},
$$
N_X(\hat{y}^k)=\{\sum\limits_{r\in L}\mu_r^kv_r^k: v_r^k\in \partial g_r(\hat{y}^k), ~\mu^k_r\geq 0, ~\mu_r^kg_r(\hat{y}^k)=0,~r\in L\}.
$$
Now, using the above form of $N_X(\hat{y}^k)$ and (\ref{eq019}), it is evident that there exist $u_i^k\in \partial^{\circ} f_i(\hat{y}^k)$ for all $i\in I$, $v_r^k\in \partial g_r(\hat{y}^k)$ for all $r\in L$, and vectors $\lambda^k \in \mathbb{R}^m_+$ with $\|\lambda^k\|=1$ and $\mu^k\in \mathbb{R}^l_+$ such that \eqref{eq311} and (\ref{eq312}) hold. This completes the second step, and hence the proof of the theorem is complete.\hfill$\Box$
\begin{remark}\label{rm2}
In the above theorem, the objective functions are taken to be locally Lipschitz only. If the objective functions $f_i$ are convex as well, then we have a more concrete result. To prove it, we need the following Lemma \ref{lem01} and a result from \cite{durea2011stability}, which will play a key role in proving Theorem \ref{c3t6}.
\end{remark}
\begin{lem}\label{lem01}
Consider the problem MOP with each objective function $f_i$ and each constraint function $g_r$ convex. Then every local Pareto minimum is a global Pareto minimum.
\end{lem}
\begin{thm}[Theorem 3.6 of \cite{durea2011stability}]\label{c3co6}
Let $x_0$ be an $\varepsilon e$-weak Pareto minimum of the problem MOP with each $f_i$ and $g_r$ convex, and assume that the Slater constraint qualification holds. Then $x_0$ is a modified $\sigma$-KKT point, where $\sigma\in (0,\|e\|\varepsilon]$.
\end{thm}
\begin{thm}\label{c3t6}
Consider the problem MOP with each $f_i$ and $g_r$ being convex functions, for all $i\in I$ and $r\in L$. Let $x_0$ be a Pareto minimum and let the Slater constraint qualification hold. Then, for every decreasing sequence $\{\varepsilon_k\}$ of positive real numbers converging to $0$, there exists a feasible sequence $\{x^k\}$ converging to $ x_0$ and a subsequence $\{y^k\}$ of $\{x^k\}$ such that each $y^k$ is a modified $\sigma_k$-KKT point with $\sigma_k\in (0, \; \|e\|\varepsilon_k ]$.
\end{thm}
\textit{Proof:}
Since the problem data is convex, every local Pareto point is global (Lemma~\ref{lem01}). Now proceed as in the proof of Theorem~\ref{t22} to get a subsequence $\{y^k\}$ of $\{x^k\}$ such that $y^k$ is an $\varepsilon_k e$-Pareto minimum of MOP with feasible set $V$, where $V=X\cap \overline{B_{\delta}(x_0)}$ with $\delta >0$, \textit{i.e.,} $y^k$ is a local $\varepsilon_k e$-Pareto minimum of MOP. So, by using the convexity assumption and Lemma~\ref{lem01}, we conclude that $y^k$ is an $\varepsilon_k e$-Pareto minimum of MOP. Now, using Theorem~\ref{c3co6}, we conclude that $y^k$ is a modified $\sigma_k$-KKT point with $\sigma_k\in (0, \; \| e\|\varepsilon_k]$.\hfill$\Box$
\section{Approximate $\hat{M}$-Geoffrion solutions, Saddle points, and KKT conditions}
In this section, we analyze saddle point conditions and KKT type conditions for $(\hat{M},\epsilon)$-Geoffrion solutions, which give a complete characterization of the considered proper points. We also discuss a scalarization rule for $(\hat{M},\epsilon)$-Geoffrion solutions, which is the connecting bridge for deducing the saddle point and KKT type conditions. Before discussing these results, we observe that there is a characterization of $(\hat{M},\epsilon)$-Geoffrion proper points by a system of inequalities, which appeared in \cite{shukla2019practical}. For a given $\epsilon\in \mathbb{R}^m_+$ and $\hat{M}>0$, consider $x_0\in X$, $i\in I$, and define the following system of inequalities ($\mathcal{Q}_i(x_0)$) as
\begin{eqnarray*}
\left\{ \begin{array}{ll}
-f_i(x_0)+ f_i(x)+\epsilon_i<0,\\
-f_i(x_0)+ f_i(x)+\epsilon_i<\hat{M} (f_j(x_0)- f_j(x)-\epsilon_j),\text{ for all } j\in I\setminus\{i\}\\
x\in {X}.\end{array} \right.
\end{eqnarray*}
\begin{prop}\label{pro1}
For given $\epsilon\in \mathbb{R}^m_+$ and $\hat{M}>0$, consider the problem MOP. Then a point $x_0\in \mathcal{G}_{\hat{M},\epsilon}(f,X)$ if and only if for each $i\in I$, the system $\mathcal{Q}_i(x_0)$ is inconsistent.
\end{prop}
\noindent The above proposition follows from the definition of $(\hat{M},\epsilon)$-Geoffrion proper solutions; for a complete proof, see \cite{shukla2019practical}. Before discussing the saddle point conditions for $(\hat{M},\epsilon)$-Geoffrion proper solutions, let us discuss the correspondence between $(\hat{M},\epsilon)$-Geoffrion proper solutions and solutions of the weighted sum scalar problem. As mentioned earlier, this correspondence plays a pivotal role in proving the main results of this section. To this end, for $s^\ast\in\mathbb{R}^m_+$, let the weighted sum scalar problem $P(s^*)$ be defined as $\min\limits_{x\in X} ~\langle s^*,f(x)\rangle.$
\begin{thm}\label{t2}
For a given $\epsilon \in \mathbb{R}^m_+$ and $\hat{M}>0$, let $x_0$ be a $\langle s^\ast,\epsilon\rangle$-minimum of $P(s^\ast)$, where $s^\ast \in \text{int}( \mathbb{R}^m_+)$. If $\hat{M}\geq (m-1)\max\limits_{i,j}\{\frac{s^\ast_i}{s^\ast_j}\}$, then $x_0$ is an $(\hat{M},\epsilon)$-Geoffrion proper solution of MOP, \textit{i.e.,} $x_0\in \mathcal{G}_{\hat{M},\epsilon}(f,X)$.
\end{thm}
\textit{Proof:}
Let us assume on the contrary that $x_0\notin \mathcal{G}_{\hat{M},\epsilon}(f,X)$. Therefore, from Proposition~\ref{pro1}, we obtain an ${i}\in I$ such that $\mathcal{Q}_i(x_0)$ is consistent. Without loss of generality, we assume that $i=1$. Thus, the system $\mathcal{Q}_1(x_0)$, written as
\begin{eqnarray*}\label{eqn}
\left\{ \begin{array}{ll}
-f_1(x_0)+ f_1(x)+\epsilon_1<0,\\
-f_1(x_0)+ f_1(x)+\epsilon_1<\hat{M} (f_j(x_0)- f_j(x)-\epsilon_j), \quad j\in I\setminus\{1\}\\
x\in {X}.\end{array} \right.
\end{eqnarray*}
has a solution. As $\hat{M}\ge (m-1)\max\limits_{i,j}\frac{s^\ast_i}{s^\ast_j}$, so that in particular $s^\ast_1\hat{M}\geq (m-1)s^\ast_j$ for all $j$, the consistency of the system $\mathcal{Q}_1(x_0)$ implies that
\begin{equation*}\label{eqnn}
s^\ast_1(-f_1(x_0)+ f_1(x)+\epsilon_1)<s^\ast_j (m-1) (f_j(x_0)- f_j(x)-\epsilon_j),\text{ for all } j\in I\setminus\{1\}.
\end{equation*}
Summing the above inequality over all $j\in I\setminus\{1\}$ and dividing by $m-1$, we obtain that
\begin{equation*}
s^\ast_1(-f_1(x_0)+ f_1(x)+\epsilon_1)<\sum_{j=2}^ms^\ast_j (f_j(x_0)- f_j(x)-\epsilon_j),
\end{equation*}
which further implies
\begin{equation}\label{eqnnn}
\langle s^*,f(x_0)\rangle - \langle s^*, f(x)\rangle -\langle s^*,\epsilon \rangle>0.
\end{equation}
Inequality (\ref{eqnnn}) contradicts the $\langle s^\ast,\epsilon\rangle$-minimality of $x_0$ for $P(s^\ast)$. Therefore, the theorem follows.\hfill$\Box$\\
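The constant in Theorem \ref{t2} can be observed numerically on a smooth biobjective instance of our own: $f(x)=(x^2,(x-1)^2)$, $s^\ast=(1,2)$, $\epsilon=0$. The weighted sum problem $\min_x\, x^2+2(x-1)^2$ has the exact minimizer $x_0=2/3$, the theorem's constant is $\hat{M}=(m-1)\max_{i,j}s^\ast_i/s^\ast_j=2$, and the largest trade-off ratio against $x_0$ over a fine grid stays below $2$ while nearly attaining it:

```python
import numpy as np

def f(x):
    return np.array([x**2, (x - 1)**2])

s = np.array([1.0, 2.0])
x0 = 2.0/3.0                                 # argmin of x**2 + 2*(x-1)**2
Mhat = (2 - 1) * max(s[0]/s[1], s[1]/s[0])   # = 2

def worst_tradeoff(x0, grid):
    """Largest improvement/deterioration ratio against x0 on the grid."""
    F0, worst = f(x0), 0.0
    for x in grid:
        Fx = f(x)
        for i in (0, 1):
            j = 1 - i
            if Fx[i] < F0[i] and Fx[j] > F0[j]:
                worst = max(worst, (F0[i] - Fx[i]) / (Fx[j] - F0[j]))
    return worst

worst = worst_tradeoff(x0, np.linspace(-2.0, 3.0, 5001))
```

The near-attainment of $\hat{M}$ suggests that, for this family of instances, the bound of Theorem~\ref{t2} cannot be improved in general.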
\noindent All the solutions from $\mathcal{G}_{\hat{M},\epsilon}(f,X)$ satisfy an upper trade-off bound of $\hat{M}$ (in the sense of Geoffrion proper efficiency). Smaller bounds are more relevant to the decision maker, as they provide tighter trade-offs among the criteria values. Therefore, it is of interest to find the minimum ${M}$ such that $\mathcal{G}_{{M},\epsilon}(f,X)$ is non-empty. Under the conditions of Theorem~\ref{t2}, the minimum value of $\hat{M}$ equals $m-1$, and this occurs when all components of $s^\ast$ are identical. The next example shows that if the conditions of Theorem~\ref{t2} are not satisfied, then even smaller values of $\hat{M}$ are possible. This is the case for non-convex or discrete multicriteria optimization problems. In the following example, we consider $\epsilon=0$ and find $\hat{M}$-Geoffrion proper points.
\begin{example}\rm
Let $X:=\{(0,0,1)^\top,\,(0,1,0)^\top,\,(1,0,0)^\top,(1/\sqrt{3},1/\sqrt{3},1/\sqrt{3})^\top\}$, $m=3$, and $f$ be the identity mapping. The sets $\mathcal{G}_{2}(f,X)$ and $\mathcal{G}_{1}(f,X)$ can be easily computed as follows:
\begin{eqnarray*}
\mathcal{G}_{2}(f,X)&=&\{(0,0,1)^\top,\,(0,1,0)^\top,\,(1,0,0)^\top,(1/\sqrt{3},1/\sqrt{3},1/\sqrt{3})^\top\},\\
\mathcal{G}_{1}(f,X)&=&\{(0,0,1)^\top,\,(0,1,0)^\top,\,(1,0,0)^\top\}.
\end{eqnarray*}
Moreover, $\mathcal{G}_{M}(f,X)=\emptyset$ for $M<1$. Therefore, the minimum value of ${M}$ is 1.
\end{example}
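The sets in the example above can be verified by brute force. The Python sketch below (our own illustrative code) checks exact $M$-Geoffrion properness, with $\epsilon=0$ and $f$ the identity, for each point of $X$:

```python
import numpy as np

X = [np.array(p) for p in
     [(0., 0., 1.), (0., 1., 0.), (1., 0., 0.),
      (1/np.sqrt(3), 1/np.sqrt(3), 1/np.sqrt(3))]]

def is_M_geoffrion(x0, X, M):
    """Exact M-Geoffrion properness of x0 within the finite set X
    (f = identity, eps = 0), checked by enumerating trade-offs."""
    m = len(x0)
    for x in X:
        d = x0 - x
        if np.all(d >= 0) and np.any(d > 0):   # Pareto prerequisite
            return False
        for i in range(m):
            if x[i] < x0[i]:                   # criterion i improves
                if not any(x[j] > x0[j] and
                           x0[i] - x[i] <= M*(x[j] - x0[j]) + 1e-12
                           for j in range(m) if j != i):
                    return False
    return True

G2 = [tuple(x) for x in X if is_M_geoffrion(x, X, 2.0)]
G1 = [tuple(x) for x in X if is_M_geoffrion(x, X, 1.0)]
G09 = [tuple(x) for x in X if is_M_geoffrion(x, X, 0.9)]
```

The worst trade-off of the interior point $(1/\sqrt{3},1/\sqrt{3},1/\sqrt{3})$ against a unit vector is $(1/\sqrt{3})/(1-1/\sqrt{3})=(\sqrt{3}+1)/2\approx 1.366$, which is why it enters $\mathcal{G}_{2}(f,X)$ but not $\mathcal{G}_{1}(f,X)$.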
\noindent The converse of Theorem \ref{t2} also holds under convexity assumptions on the objective functions and the feasible set. Note that if $g_r$ is convex for each $r\in L$, then the feasible set $X$ is a convex set. We have the following result.
\begin{thm}\label{t3}
Let us consider the problem MOP where for each $i\in I$ and $r\in L$, $f_i$ and $g_r$ are convex functions. If $x_0\in \mathcal{G}_{\hat{M},\epsilon}(f,X)$, then there exists an $s^*\in {\rm int}(\mathbb{R}^m_+)$ such that $x_0$ is a $\langle s^*,\epsilon\rangle$-minimum of $P(s^*)$.
\end{thm}
\textit{Proof:}
Let $x_0\in \mathcal{G}_{\hat{M},\epsilon}(f,X)$. Then, using Proposition \ref{pro1}, we obtain that the system $\mathcal{Q}_i(x_0)$ is inconsistent for each $i\in I$. Applying Gordan's theorem of the alternative (see \cite{rockafellar2015convex}), we conclude, after some rearrangements, that for each $i\in I$ there exist scalars
$\lambda_j^i\geq 0$ with $\sum\limits_{j \in I} \lambda_j^i=1$ such that, for all $x\in X$,
\begin{eqnarray*}
f_i(x)+\hat{M}\sum_{j\in I,j\not = i} \lambda_j^if_j(x)\geq f_i(x_0)+\hat{M}\sum_{j\in I,j\not = i}\lambda_j^i f_j(x_0)-\left[\epsilon_i+\hat{M}\sum_{j\in I,j\not = i}\lambda_j^i\epsilon_j \right].
\end{eqnarray*}
Therefore, by summing over all $i$, we get
\begin{eqnarray*}
\sum_{i\in I} f_i(x)+\hat{M}\sum_{i\in I}\sum_{j\in I,j\not = i}\lambda_j^i f_j(x) &\geq& \sum_{i\in I} f_i(x_0)+\hat{M}\sum_{i\in I} \sum_{j\in I,j\not = i}\lambda_j^if_j(x_0)\\
&& \hspace{.35in} -\sum_{i\in I}\left[\epsilon_i+\hat{M}\sum_{j\in I,j\not = i}\lambda_j^i\epsilon_j \right].
\end{eqnarray*}
Hence, for all $x\in X$,
{\small \begin{eqnarray*}
\sum_{j\in I}\left[1+\hat{M}\sum_{i\in I,i\not = j}\lambda_j^i\right]f_j(x) \geq \sum_{j\in I}\left[1+\hat{M}\sum_{i\in I,i\not = j} \lambda_j^i\right]f_j(x_0)-\sum_{j\in I}\left[1+\hat{M}\sum_{i\in I,i\not = j}\lambda_j^i\right]\epsilon_j.
\end{eqnarray*}}
Setting $s_j=1+\hat{M}\sum\limits_{i\in I,i\not = j} \lambda_j^i$ gives $s\in {\rm int}(\mathbb{R}^m_+)$, and $x_0$ is an $\langle s,\epsilon\rangle$-minimum of $P(s)$.\hfill$\Box$
\begin{remark}\rm
Theorem \ref{t3} can also be proved by noting that each $(\hat{M},\epsilon)$-Geoffrion proper point is an $\epsilon$-Geoffrion proper point with constant $\hat{M}>0$; hence, using Theorem 3.15 from \cite{EHR}, we can deduce the above result. Now, if we denote the set of $\langle s^*,\epsilon\rangle$-minima of $P(s^*)$ by $Sol_\epsilon(P(s^*))$, then Theorems \ref{t2} and \ref{t3} imply that, under convexity assumptions on the data and for a given $\hat{M}$, there exists $s^*\in \text{int}(\mathbb{R}^m_+)$ such that
$$ Sol_{\epsilon}(P(s^*))\subseteq \mathcal{G}_{\hat{M},\epsilon}(f,X) \subseteq \bigcup\limits_{s\in \text{int} (\mathbb{R}^m_+)}Sol_{\epsilon}(P(s)).$$
\end{remark}
\noindent Now we come to the main attraction of this section: the saddle point conditions for $(\hat{M},\epsilon)$-Geoffrion proper solutions. For this study, we consider the problem MOP where each $f_i$, $i\in I$, and each $g_r$, $r\in L$, is a convex function. Whenever the data of the problem are convex, we shall denote the problem MOP by CMOP. Given $\hat{M}>0$ and any index $i\in I$, we define the $(\hat{M},i)$-Lagrangian associated with CMOP as follows:
\begin{equation}\label{eq410}
L^{\hat{M}}_i(x,\tau^i, \mu^i)=f_i(x)+\sum_{j\in I,j\not = i} \tau^i_j \hat{M} f_j(x)+ \sum_{r\in L}\mu_r^ig_r(x),
\end{equation}
where $\mu^i=(\mu^i_1,\mu_2^i,...,\mu^i_l) \in \mathbb{R}^l_+$ and $\tau^i=(\tau^i_1, \tau^i_2,...,\tau^i_m)\in S^m$ with $S^m=\{x\in \mathbb{R}^m: 0\leq x_i\leq 1,i\in I, \sum_{i=1}^mx_i=1\},$ the unit simplex in $\mathbb{R}^m$.
The motivation behind considering the above Lagrangian comes from the $i$th-objective Lagrangian problem defined in Chapter 4 of \cite{chankong2008multiobjective}, where this Lagrangian form is used as a scalarization scheme for multiobjective problems. In the same spirit as \cite{chankong2008multiobjective}, we obtain a scalar Lagrangian function, which is comparatively easier to work with than a vector-valued Lagrangian. Our aim here is to show the key role played by the $(\hat{M},i)$-Lagrangian in analyzing and characterizing the $(\hat{M},\epsilon)$-Geoffrion proper solutions.
\begin{thm}\label{t041}
For a given $\epsilon \in \mathbb{R}^m_+$ and $\hat{M}>0$, let us consider the problem CMOP, which satisfies the Slater constraint qualification. If $x_0\in \mathcal{G}_{\hat{M},\epsilon}(f,X)$, then for each $i\in I$ there exist $\bar{\tau}^i\in S^m$ and $\bar{\mu}^i\in \mathbb{R}^l_+$ such that for all $x\in \mathbb{R}^n$ and $\mu \in\mathbb{R}_+^l$,
\begin{description}
\item $(i)$ $L^{\hat{M}}_i(x_0,\bar \tau^i, \mu)-\bar \epsilon_i \leq L^{\hat{M}}_i(x_0,\bar \tau^i, \bar \mu^i)\leq L^{\hat{M}}_i(x,\bar \tau^i, \bar \mu^i)+\bar \epsilon_i$
\item $(ii)$ $\sum\limits_{r\in L} \bar \mu_r^i g_r(x_0) \geq -\bar \epsilon_i$,
\end{description}
where $\bar \epsilon_i=\epsilon_i+\sum\limits_{j\in I, j\not = i}\bar{\tau}^i_j \hat{M} \epsilon_j.$ Conversely, if $x_0\in \mathbb{R}^n$ is such that for each $i\in I$ there exists $(\bar \tau^i, \bar \mu^i)\in S^m\times \mathbb{R}^l_+$ for which $(i)$ and $(ii)$ hold, then $x_0\in \mathcal{G}_{\tilde{M},2\epsilon}(f,X)$, where $\tilde{M}\geq(1+\hat{M})(m-1)$.
\end{thm}
\textit{Proof:}
It is evident from Proposition \ref{pro1} that if $x_0\in \mathcal{G}_{\hat{M},\epsilon}(f,X)$, then for each $i\in I$, the system $\mathcal{Q}_i(x_0)$, rewritten as
\begin{align*}
& -f_i(x_0)+ f_i(x)+\epsilon_i<0,\\
& -f_i(x_0)+ f_i(x)+\epsilon_i<\hat{M} (f_j(x_0)- f_j(x)-\epsilon_j),\text{ for all } j\in I\setminus\{i\},\\
& g_r(x)\leq 0,~ r\in L,
\end{align*}
has no solution $x\in \mathbb{R}^n$. It is easy to observe that the system $\mathcal{Q}_i(x_0)$ still has no solution if we replace $g_r\leq 0$ by $g_r<0$ for all $r\in L$. Now, by applying Gordan's theorem of the alternative (see \cite{rockafellar2015convex}), there exist $\tau^i=(\tau^i_1,\ldots,\tau^i_m)\in \mathbb{R}^m_+$ and $\mu^i=(\mu^i_1,\ldots,\mu^i_l)\in \mathbb{R}^l_+$ with $(\tau^i,\mu^i)\not = 0$ such that for all $x\in \mathbb{R}^n,$
\begin{multline*}
\tau^i_i(f_i(x)-f_i(x_0)+\epsilon_i)+
\sum\limits_{j\in I,j\not = i}\tau^i_j(f_i(x)+\hat M f_j(x) - f_i(x_0) \\
- \hat M f_j(x_0) +\epsilon_i
+\hat{M}\epsilon_j)
+\sum\limits_{r\in L} \mu^i_r g_r(x) \geq 0.
\end{multline*}
Hence, for all $x\in \mathbb{R}^n,$
\begin{eqnarray}\label{eq430}
\bigl(\sum\limits_{j\in I} \tau^i_j \bigr)(f_i(x)-f_i(x_0)+\epsilon_i)
+\sum\limits_{j\in I,j\not = i} \left[\tau^i_j \hat M f_j(x)- \tau^i_j \hat M f_j(x_0)
+\tau^i_j \hat M \epsilon_j \right]\nonumber \\
+ \sum\limits_{r\in L} \mu^i_r g_r(x) \geq 0.
\end{eqnarray}
Now, we first claim that $\tau^i=(\tau^i_1,\ldots,\tau^i_m)\not =0$. For if $\tau^i=0$, then $\mu^i \ne 0$ and Inequality~\eqref{eq430} reduces to $\sum\limits_{r\in L} \mu^i_r g_r(x) \geq 0$, for all $x\in \mathbb{R}^n$. But the Slater constraint qualification implies that there exists a point, say $\hat{x}\in \mathbb{R}^n$, such that
$g_r(\hat{x})<0$ for all $r\in L$. As $\mu^i \ne 0$ and $\mu^i \in \mathbb{R}^l_+$, we obtain
$\sum\limits_{r\in L} \mu^i_r g_r(\hat{x}) <0$, a contradiction. Hence, $\tau^i \not = 0$ and thus $\sum\limits_{j\in I} \tau^i_j >0$. Dividing Inequality~(\ref{eq430}) by $\sum\limits_{j \in I} \tau^i_j $, we get
\begin{equation}\label{eq27}
f_i(x)-f_i(x_0)+\epsilon_i
+\sum\limits_{ j\in I,j\not = i}[\bar{\tau}^i_j \hat M f_j(x)-\bar{\tau}^i_j\hat M f_j(x_0)
+\bar{\tau}^i_j \hat M \epsilon_j]
+ \sum\limits_{r \in L} \bar{\mu}^i_r g_r(x) \geq 0,
\end{equation}
for all $x\in \mathbb{R}^n$, where $\bar{\tau}^i_j= \frac{\tau^i_j}{\sum\limits_{k\in I} \tau^i_k }$ and $\bar{\mu}^i_r= \frac{\mu^i_r}{\sum\limits_{k\in I} \tau^i_k }$. In particular, for $x=x_0$, Inequality~(\ref{eq27}) gives
$\epsilon_i
+\sum\limits_{j \in I, j\not = i}\bar{\tau}^i_j \hat M \epsilon_j
+ \sum\limits_{r\in L} \bar{\mu}^i_r g_r(x_0) \geq 0.$
By setting $\bar{\epsilon_i}=\epsilon_i
+\sum\limits_{j \in I, j\not = i}\bar{\tau}^i_j \hat M \epsilon_j
$, we get Part~($ii$) as
$\sum\limits_{r \in L} \bar{\mu}^i_r g_r(x_0) \geq -\bar{\epsilon_i}$. Further,
Inequality~(\ref{eq27}) reduces to, for all $x\in \mathbb{R}^n$,
\begin{eqnarray}\label{c2eq28}
f_i(x)
+\sum\limits_{j \in I, j\not = i}\bar{\tau}^i_j \hat M f_j(x)
+ \sum\limits_{r\in L} \bar{\mu}^i_r g_r(x)+\bar{\epsilon_i} \geq f_i(x_0)+\sum\limits_{j \in I, j\not = i}\bar{\tau}^i_j \hat M f_j(x_0).
\end{eqnarray}
As $x_0$ is feasible to CMOP, $\sum\limits_{r\in L} \bar{\mu}^i_r g_r(x_0) \leq 0$. Thus, Inequality~(\ref{c2eq28}) becomes
\begin{eqnarray*}
f_i(x)
+\sum\limits_{j \in I, j\not = i}\bar{\tau}^i_j \hat M f_j(x)
+ \sum\limits_{r\in L} \bar{\mu}^i_r g_r(x)+\bar{\epsilon_i} \geq f_i(x_0)+\sum\limits_{j \in I, j\not = i}\bar{\tau}^i_j \hat M f_j(x_0)+\sum\limits_{r\in L} \bar{\mu}^i_r g_r(x_0),
\end{eqnarray*}
which implies that for each $i\in I$ and for all $x\in \mathbb{R}^n$,
\begin{eqnarray}\label{c2eq29}
L_i^{\hat{M}}(x,\bar{\tau}^i,\bar{\mu}^i)+\bar{\epsilon_i} \geq L_i^{\hat{M}}(x_0,\bar{\tau}^i,\bar{\mu}^i).
\end{eqnarray}
Further, from Equation~\eqref{eq410}, we observe that for all $i\in I$ and any $\mu \in \mathbb{R}^l_+$,
\begin{eqnarray*}
L_i^{\hat{M}}(x_0,\bar{\tau}^i,\mu )\leq f_i(x_0)+\sum\limits_{j \in I, j\not = i}\bar{\tau}^i_j\hat{M}f_j(x_0),
\end{eqnarray*}
since $x_0$ is feasible and $\mu\geq 0$. Using Part~($ii$), this can be written as $L_i^{\hat{M}}(x_0,\bar{\tau}^i,\mu)\leq f_i(x_0)+\sum\limits_{j \in I, j\not = i}\bar{\tau}^i_j\hat{M}f_j(x_0)+ \sum\limits_{r\in L} \bar{\mu}^i_r g_r(x_0)+\bar \epsilon_i .$
Thus, for all $x\in \mathbb{R}^n$ and $\mu \in \mathbb{R}^l_+$,
\begin{eqnarray}\label{c2eq30}
L_i^{\hat{M}}(x_0,\bar{\tau}^i,\mu ) \leq L_i^{\hat{M}}(x_0,\bar{\tau}^i,\bar{\mu}^i)+\bar{\epsilon_i}.
\end{eqnarray}
The Inequalities~(\ref{c2eq29}) and (\ref{c2eq30}) together prove Part~($i$). Now, for the sufficiency part, let us assume that for a given $x_0\in \mathbb{R}^n$ and each $i \in I$ there exist $\bar{\tau}^i\in S^m$ and $\bar{\mu}^i\in \mathbb{R}^l_+$ such that Conditions ($i$) and ($ii$) hold. Our first step is to show that $x_0$ is feasible for CMOP. From $(i)$, for all $\mu \in \mathbb{R}^l_+$,
\begin{eqnarray*}
L_i^{\hat{M}}(x_0,\bar{\tau}^i,\mu )-\bar{\epsilon_i} \leq L_i^{\hat{M}}(x_0,\bar{\tau}^i,\bar{\mu}^i).
\end{eqnarray*}
Thus, $f_i(x_0)
+\sum\limits_{j \in I, j\not = i}\bar{\tau}^i_j \hat M f_j(x_0)
+ \sum\limits_{r\in L} {\mu}_r g_r(x_0)-\bar{\epsilon_i }\leq f_i(x_0)+\sum\limits_{j
\in I, j\not = i}\bar{\tau}^i_j \hat M f_j(x_0).$
This shows that for all $ \mu\in \mathbb{R}^l_+$,
\begin{equation}\label{eq049}
\sum\limits_{r\in L} \mu_r g_r(x_0) \leq \bar{\epsilon_i}.
\end{equation}
On the contrary, suppose $x_0$ is not feasible. Then, there exists $r_0\in L$ such that $g_{r_0}(x_0)>0$. Then, choose $\mu=(0,\ldots,0,\mu_{r_0},0,\ldots,0)$, with $\mu_{r_0}>0$ and sufficiently large such that $\mu_{r_0}g_{r_0}(x_0)> \bar{\epsilon_i}.$ Note that this contradicts
Inequality~\eqref{eq049}. Hence, we conclude that $x_0$ is a feasible solution of CMOP.
Now, from the right-hand side of ($i$), we also have, for all $x\in \mathbb{R}^n$,
\begin{eqnarray*}
L_i^{\hat{M}}(x,\bar{\tau}^i,\bar{\mu}^i)+\bar{\epsilon_i} \geq L_i^{\hat{M}}(x_0,\bar{\tau}^i,\bar{\mu}^i),
\end{eqnarray*}
which implies
\begin{multline*}
f_i(x)
+\sum\limits_{j \in I, j\not = i}\bar{\tau}^i_j \hat M f_j(x)
+ \sum\limits_{r\in L} \bar{\mu}^i_r g_r(x)+\epsilon_i
+\sum\limits_{j \in I, j\not = i}\bar{\tau}^i_j \hat M \epsilon_j \geq f_i(x_0) \\
+\sum\limits_{j \in I, j\not = i}\bar{\tau}^i_j \hat M f_j(x_0)
+\sum\limits_{r\in L} \bar{\mu}^i_r g_r(x_0).
\end{multline*}
Now, for any feasible $x$, $\sum\limits_{r\in L} \bar{\mu}^i_r g_r(x)\leq 0$. Thus, from the above inequality we have,
\begin{eqnarray}\label{eq55}
f_i(x)
+\sum\limits_{j \in I, j\not = i}\bar{\tau}^i_j \hat M f_j(x)
+\epsilon_i
+\sum\limits_{j \in I, j\not = i}\bar{\tau}^i_j \hat M \epsilon_j \geq f_i(x_0) & \nonumber \\
& \hspace*{-1.5in} +\sum\limits_{j \in I, j\not = i}\bar{\tau}^i_j \hat M f_j(x_0)
+\sum\limits_{r\in L} \bar{\mu}^i_r g_r(x_0).
\end{eqnarray}
Using Condition ($ii$), we have
\begin{multline*}
f_i(x)
+\sum\limits_{j \in I, j\not = i}\bar{\tau}^i_j \hat M f_j(x)
+\epsilon_i
+\sum\limits_{j \in I, j\not = i}\bar{\tau}^i_j \hat M \epsilon_j \geq f_i(x_0) \\ +\sum\limits_{j \in I, j\not = i}\bar{\tau}^i_j \hat M f_j(x_0)-(\epsilon_i
+\sum\limits_{j \in I, j\not = i}\bar{\tau}^i_j \hat M \epsilon_j).
\end{multline*}
Since this holds for each $i\in I$, summing over all $i$ gives
\begin{eqnarray*}
\sum\limits_{i\in I} \Bigl(1
+\hat M \sum\limits_{j \in I, j\not = i}\bar{\tau}^i_j\Bigr) f_i(x)
+\sum\limits_{i\in I} \Bigl(1+ \hat M\sum\limits_{j \in I, j\not = i}\bar{\tau}^i_j\Bigr) (2\epsilon_i) \geq \sum\limits_{i\in I} \Bigl(1+\hat M\sum\limits_{j \in I, j\not = i}\bar{\tau}^i_j \Bigr) f_i(x_0).
\end{eqnarray*}
Hence, $x_0$ is an $\langle s, 2\epsilon\rangle$-minimum of $P(s)$, where $s=(s_1, \ldots, s_m)$ with $s_i=1
+\hat M \sum\limits_{k \in I,k\not = i} \bar{\tau}^i_k$, for $i \in I$. Now, since $\bar{\tau}^i\in S^m$ for all $i,$ we have, for all $i,j\in I$,
$$\frac{s_i}{s_j}=\frac{1
+\hat M \sum\limits_{k\in I,k\not = i}\bar{\tau}^i_k}{1
+\hat M \sum\limits_{k\in I,k\not = j}\bar{\tau}^j_k}=\frac{1
+\hat M (1-\bar{\tau}^i_i)}{1
+\hat M (1-\bar{\tau}^j_j)}\leq 1+\hat{M}.$$
Since the above inequality holds for every $i$ and $j$, we have $\max\limits_{i,j}\frac{s_i}{s_j}\leq 1+\hat{M}$. Now, taking $\tilde{M}\geq(1+\hat{M})(m-1)$ and using Theorem \ref{t2}, we conclude that
$x_0\in \mathcal{G}_{\tilde{M},2\epsilon}(f,X).$ This completes the proof.\hfill $\Box$
\begin{remark}\rm
The saddle point type conditions are useful as a sufficient condition if the number of objectives is small. In fact, for sufficiency, we can have a much simpler condition, which we now state.
Let $x_0\in \mathbb{R}^n$ be a point that satisfies: \\ for each $i\in I$, there exists $\bar{\tau}^i\in S^m$ and $\bar{\mu}^i\in \mathbb{R}^l_+$ such that for all $\mu \in \mathbb{R}^l_+ $ and $x\in \mathbb{R}^n$,
\begin{description}
\item $(a)$ $L_i^{\hat{M}}(x_0,\bar{\tau}^i,\mu)-\epsilon_i\leq L_i^{\hat{M}}(x_0,\bar{\tau}^i,\bar{\mu}^i)\leq L_i^{\hat{M}}(x,\bar{\tau}^i,\bar{\mu}^i)+\epsilon_i,$
\item $(b)$ $\sum\limits_{r\in L} \bar{\mu}^i_r g_r(x_0) \geq -{\epsilon_i}$.
\end{description}
Then, $x_0\in \mathcal{G}_{\tilde{M},2\epsilon}(f,X).$
In order to prove the above statement, note that $\bar{\epsilon_i}=\epsilon_i
+\sum\limits_{j\in I,j\not = i}\bar{\tau}^i_j \hat M \epsilon_j
$, so $\bar{\epsilon_i}\geq \epsilon_i$. Hence, Conditions~($a$) and~($b$) above imply that Conditions ($i$) and ($ii$) of Theorem \ref{t041} are satisfied. Therefore, we can simply apply the converse part of Theorem~\ref{t041} to get $x_0\in \mathcal{G}_{\tilde{M},2\epsilon}(f,X),$ where $\tilde{M}\geq (1+\hat{M})(m-1)$. Note that Conditions~($a$) and ($b$) above are much simpler to check than Conditions~($i$) and ($ii$), as $\bar{\epsilon_i}$ involves the multipliers $\bar{\tau}^i_j$. Hence, for the sufficiency part of Theorem~\ref{t041}, which requires the verification of Conditions~$(i)$ and ($ii$), we will be using Conditions~($a$) and ($b$).
Of course, from the necessary part of Theorem \ref{t041}, we can also derive a multiplier rule involving $\epsilon$-subdifferentials; however, this rule will be quite different. Observe that if $x_0\in \mathcal{G}_{\hat{M},\epsilon}(f,X)$, then Condition~($i$) of Theorem~\ref{t041} implies that for any $i\in I$ there exist $ \bar{\tau}^i\in S^m$ and $\bar{\mu}^i\in \mathbb{R}^l_+$ such that for all $x\in \mathbb{R}^n$,
$$ L_i^{\hat{M}}(x_0,\bar{\tau}^i,\bar{\mu}^i)\leq L_i^{\hat{M}}(x,\bar{\tau}^i,\bar{\mu}^i)+\bar{\epsilon_i},$$
which implies that $x_0\in \bar{\epsilon_i}\text{-}\underset{x\in \mathbb{R}^n}{\arg\min}\,L^{\hat{M}}_i(\,\cdot\, ,\bar{\tau}^i,\bar{\mu}^i),$ where $\bar \epsilon_i$-$\arg\min$ denotes the set of $\bar \epsilon_i$-minima of the function $L_i^{\hat{M}}(\,\cdot\,,\bar{\tau}^i,\bar{\mu}^i)$. Thus, for each $i \in I$, $0\in \partial_{\bar{\epsilon_i}}L_i^{\hat{M}}(x_0,\bar{\tau}^i,\bar{\mu}^i).$
In fact, a more compact necessary condition of the KKT type is given as follows:
\begin{eqnarray}\label{eq33}
0\in \sum_{i\in I}\partial_{\bar{\epsilon_i}}L_i^{\hat{M}}(x_0,\bar{\tau}^i,\bar{\mu}^i)\quad
\text{with} \quad
\sum\limits_{r\in L} \bar{\mu}^i_r g_r(x_0) \geq -\bar{\epsilon_i}.
\end{eqnarray}
\end{remark}
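For intuition on the $\epsilon$-subdifferential condition above, recall that for a convex function $h$, one has $0\in\partial_\epsilon h(x_0)$ exactly when $x_0$ is an $\epsilon$-minimizer of $h$, i.e., $h(x)\geq h(x_0)-\epsilon$ for all $x$. The following quick numerical sketch is our own illustration of this equivalence on a one-dimensional example (the function and grid are ours, chosen only for demonstration):

```python
# 0 in the eps-subdifferential of h at x0  <=>  h(x) >= h(x0) - eps for all x
# <=>  x0 is an eps-minimizer of h.  We check the inequality over a grid.
def zero_in_eps_subdiff(h, x0, eps, grid):
    return all(h(x) >= h(x0) - eps for x in grid)

h = lambda x: (x - 1.0) ** 2            # convex, minimized at x = 1 with value 0
grid = [k / 100 for k in range(-300, 300)]

print(zero_in_eps_subdiff(h, 1.1, 0.02, grid))  # h(1.1)=0.01 <= inf h + 0.02 -> True
print(zero_in_eps_subdiff(h, 1.5, 0.02, grid))  # h(1.5)=0.25 >  inf h + 0.02 -> False
```

This is only a finite-grid check, of course; the analytical statement holds over all of $\mathbb{R}^n$.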
\begin{thm}\label{t42}
For a given $\epsilon \in \mathbb{R}^m_+$ and $\hat{M}>0$, let us consider the problem CMOP. If $x_0\in \mathcal{G}_{\hat{M},{\epsilon}}(f,X)$, then there exist vectors $\bar{\tau}^i\in S^m$ and $\bar{\mu}^i\in \mathbb{R}^l_+$, $i\in I$ such that
\begin{description}
\item $(A)$ $0\in \sum\limits_{i\in I}\partial_{\bar{\epsilon_i}}L_i^{\hat{M}}(x_0,\bar{\tau}^i,\bar{\mu}^i),$
\item $(B)$ $\sum\limits_{r=1}^l \bar{\mu}^i_r g_r(x_0) \geq -\bar{\epsilon_i}$,
\end{description}
where $\bar{\epsilon_i}=\epsilon_i
+\sum\limits_{j\in I,j\not = i}\bar{\tau}^i_j \hat M \epsilon_j
$, $i\in I$. Conversely, if $x_0\in X$ is a point for which there exist vectors $(\bar{\tau}^i,\bar{\mu}^i)\in S^m\times \mathbb{R}^l_+$, $i\in I$, such that ($A$) and ($B$) hold, then $x_0\in \mathcal{G}_{\tilde{M},{2\epsilon}}(f,X)$, where $\tilde{M}=(1+\hat{M})(m-1)$.
\end{thm}
\textit{Proof:}
The necessary part has already been established in the remark above. For the sufficiency part, let Conditions $(A)$ and $(B)$ hold for $x_0\in X$. This means that there exist $ \bar{v}^i\in\partial_{\bar{\epsilon_i}}L_i^{\hat{M}}(x_0,\bar{\tau}^i,\bar{\mu}^i) $, $i\in I$, such that
\begin{equation}\label{eq35}
0=\bar{v}^1+\bar{v}^2+\ldots+\bar{v}^m.
\end{equation}
Thus, from the definition of the $\epsilon$-subdifferential, for each $i\in I$,
$$L^{\hat{M}}_i(x,\bar{\tau}^i,\bar{\mu}^i)-L^{\hat{M}}_i(x_0,\bar{\tau}^i,\bar{\mu}^i)\geq \langle\bar{v}^i,x-x_0\rangle-\bar{\epsilon}_i.$$
Hence,
$$\sum\limits_{i\in I} L^{\hat{M}}_i(x,\bar{\tau}^i,\bar{\mu}^i)-\sum\limits_{i \in I} L^{\hat{M}}_i(x_0,\bar{\tau}^i,\bar{\mu}^i)\geq \Bigl\langle \sum\limits_{i \in I}\bar{v}^i,x-x_0\Bigr\rangle-\sum\limits_{i\in I}\bar{\epsilon}_i.$$
Now using Equation~(\ref{eq35}), we get
\begin{multline*}
\sum\limits_{i \in I} (f_i(x)
+\sum\limits_{j \in I, j\not = i}\bar{\tau}^i_j \hat M f_j(x)
+ \sum\limits_{r \in L} \bar{\mu}^i_r g_r(x))
- \sum\limits_{i \in I} (f_i(x_0) \\
+\sum\limits_{j \in I, j\not = i}\bar{\tau}^i_j \hat M f_j(x_0)
+\sum\limits_{r\in L} \bar{\mu}^i_r g_r(x_0))\geq -\sum\limits_{i \in I} \bar{\epsilon}_i.
\end{multline*}
So, if $x$ is a feasible point then using Condition~($B$), the above inequality reduces to
\begin{eqnarray*}
\sum\limits_{i \in I} (f_i(x)
+\sum\limits_{j \in I, j\not = i}\bar{\tau}^i_j \hat M f_j(x))\geq
\sum\limits_{i \in I} (f_i(x_0)
+\sum\limits_{j\in I, j\not = i}\bar{\tau}^i_j \hat M f_j(x_0))-\sum\limits_{i \in I} 2\bar{\epsilon}_i,
\end{eqnarray*}
which can be rewritten as
\begin{eqnarray*}
\sum\limits_{i \in I} \Bigl(1
+\hat M\sum\limits_{j \in I, j\not = i}\bar{\tau}^i_j \Bigr) f_i(x)\geq
\sum\limits_{i \in I} \Bigl(1
+\hat M\sum\limits_{j \in I, j\not = i}\bar{\tau}^i_j \Bigr)f_i(x_0)-\sum\limits_{i \in I} \Bigl(1
+\hat M\sum\limits_{j \in I, j\not = i}\bar{\tau}^i_j \Bigr) 2\epsilon_i.
\end{eqnarray*}
Hence, $x_0$ is an $\langle s, 2\epsilon\rangle$-minimum of $P(s)$, where $s_i=1
+\hat M \sum\limits_{k\in I,k\not = i} \bar{\tau}^i_k$. Now, using the same argument as in Theorem~\ref{t2}, we conclude that $x_0\in \mathcal{G}_{\tilde{M},{2\epsilon}}(f,X)$, where $\tilde{M}=(1+\hat{M})(m-1)$. This completes the proof.\hfill$\Box$
\section{Concluding remarks}
Analyzing the behaviour of an optimization problem from the viewpoint of KKT conditions is deep-rooted in the psyche of researchers in optimization theory. Though KKT conditions may not have been used very heavily in multiobjective optimization, they can nevertheless act very well as a tool to develop stopping criteria. In this article, we characterize approximate versions of Pareto and proper Pareto solutions using KKT-type conditions. In fact, in the convex case, we achieve a complete characterization; for example, Theorem \ref{t22} demonstrates that a sequence of points converging to a weak Pareto minimizer has a subsequence in which each point satisfies an approximate version of the KKT conditions. This result thus demonstrates why approximate KKT-type conditions can be used as stopping criteria.\\
The analysis of the approximate versions of the $\hat{M}$-Geoffrion proper solutions in terms of approximate KKT conditions is a starting point for building stopping criteria to identify such points. Our future research will involve more computational studies using these optimality conditions as stopping criteria.
There is a strong connection between AGN exhibiting interstellar
scintillation (ISS) and blazars that were detected at gamma ray
energies with EGRET.
Out of the 19 EGRET blazars observed in the MASIV 5~GHz VLA
Survey \cite{Lovell03,Jauncey07}, 17 showed significant intraday
variability (IDV)
in at least one epoch with rms fractional variations 1--6\%; in
comparison, 56\% of the entire compact, flat-spectrum MASIV sample
showed such IDV. The MASIV survey showed a strong Galactic latitude
dependence of IDV, indicating a predominantly interstellar origin
for IDV at 5~GHz.
The radio source PMN~J1326$-$5256 is a little-studied object with very
few references in the literature, and no optical identification prior
to the present work. It was observed in the ATCA\footnote{The
Australia Telescope Compact Array is part of the Australia Telescope
which is funded by the Commonwealth of Australia for operation as a
National Facility managed by CSIRO.} calibrator survey and discovered
to be intraday variable (R.J.~Sault 2001, private communication). The
source has no large-scale structure, being completely unresolved with
the ATCA and also the AT-LBA with a maximum resolution of 16~mas at
2.3~GHz. The most accurate J2000 coordinates available for
PMN~J1326$-$5256 are $13^{\rm h}26^{\rm m}49.23^{\rm s}\pm 0.02^{\rm
s}, -52^{\circ}56'23.7''\pm 0.1''$, determined from ATCA data at
4.8~GHz. This position is coincident with sources in the USNO-A2 and
2MASS catalogues. We obtained an optical spectrum during AAT service
observations on 5 June 2002. The spectrum is featureless across the
observed range of $\sim 5000-9000$\AA, with S/N $\sim 15$ in the
unaveraged continuum. During the AAT observations it was noted that
the object appeared much brighter than on archival UK Schmidt plates,
indicating strong optical variability. Near-infrared colours from
2MASS (J=14.67, H=13.70, K=12.78) match typical BL Lacertae objects.
PMN~J1326$-$5256 is a candidate BL Lac object,
although further optical spectroscopy, particularly in the blue end
of the spectrum, would be useful to confirm this identification.
3EG~J1316$-$5244 is an unidentified EGRET source at an angular
separation of $1.5^{\circ}$ from PMN~J1326$-$5256. This offset is
larger than the quoted error radius of $0.5^{\circ}$; however, we note
that 3EG~J1316$-$5244 is flagged as having an irregular or non-closed
95\% position likelihood contour, indicating a possibly large
uncertainty in the EGRET source location \cite{Hartman99}. We suggest
a tentative association between 3EG~J1316$-$5244 and PMN~J1326$-$5256
based on multiwavelength properties.
\section{Scintillation monitoring}
An annual cycle in the timescale of interstellar scintillation (ISS) is
expected from the changing velocity of the Earth relative to the
scattering plasma. In principle, observations of ISS at different
times of the year can be used to determine scattering screen
parameters and microarcsecond-scale source structure.
PMN~J1326$-$5256 was included in an ATCA IDV monitoring programme, in
which it was observed at 4.8 and 8.6~GHz in 14 sessions of $\sim 2$
days or more, over a period of 2.5 years starting in early 2001,
shortly after the discovery of its IDV. This source showed the largest
amplitude IDV of the 21 IDV sources included in the ATCA monitoring
programme, with a maximum modulation index (fractional rms variation)
of 16\% at both 4.8 and 8.6~GHz, although significant IDV was not
observed in every epoch, with the 2-day modulation index being
$\lesssim 1$\% at times. The MASIV Survey showed that episodic IDV is
a common phenomenon amongst flat-spectrum radio sources \cite{Jauncey07}.
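The modulation index quoted above is simply the fractional rms variation of the flux density time series. As a hedged illustration (our own sketch, not the actual monitoring analysis pipeline, which must also account for measurement noise), it can be computed as:

```python
def modulation_index(flux):
    """Modulation index m = rms(S) / mean(S), the fractional rms variation
    of a flux density time series S (population rms, no noise correction)."""
    n = len(flux)
    mean = sum(flux) / n
    rms = (sum((s - mean) ** 2 for s in flux) / n) ** 0.5
    return rms / mean

# A synthetic light curve oscillating +/-16% about a 1 Jy mean flux density
lightcurve = [0.84, 1.16] * 50
print(modulation_index(lightcurve))  # approximately 0.16, i.e. a 16% modulation index
```

In practice one would subtract the contribution of measurement errors in quadrature before quoting an intrinsic modulation index.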
PMN~J1326$-$5256 is also a target of the COSMIC project
\cite{McCulloch05}, using the University of Tasmania's 30-m radio
telescope at Ceduna. This programme aims at dedicated monitoring of
several IDV sources at 6.7~GHz. COSMIC observations of
PMN~J1326$-$5256 started in early 2003. Figure~\ref{fig1} shows ATCA
and Ceduna monitoring data up to early 2007. Figure~\ref{fig2} shows
all four Stokes parameters plotted for the first three epochs of ATCA
data. Stokes V (circular polarization) shows a sign flip between the
first two epochs which helps to constrain the origin of the circularly
polarized radiation. The variations at 4.8 and 8.6~GHz are strongly
correlated, indicative of scintillation in the weak scattering regime,
but at Galactic coordinates
$l=308.3^{\circ}, b=9.6^{\circ}$, PMN J1326$-$5256 would be
expected to undergo strong scattering at these frequencies.
An additional nearby scattering ``screen'' may be
responsible for the rapid IDV observed, since scattering material
close to the observer has a lower transition frequency and causes more
rapid variations than the same material at a larger distance.
\begin{figure}
\begin{center}
\includegraphics[width=0.5\textwidth]{alldata.eps}
\caption{Six years of ATCA and Ceduna total flux density monitoring.}
\label{fig1}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.55\textwidth]{atca3.eps}
\caption{First three epochs of ATCA data, starting on 4 February, 17
March and 6 April, 2001, showing all four Stokes parameters plotted
with 5-minute or scan averaging.}
\label{fig2}
\end{center}
\end{figure}
There is some evidence of a slow-down in the ISS timescale around
November from the first year of ATCA data and later Ceduna data, but
no clear repeating annual cycle is seen. The IDV displays large
changes in modulation index (fractional rms variation) at different
epochs. Currently it is not clear whether changes in the IDV behaviour
of PMN~J1326$-$5256 are mostly due to intrinsic source changes or
changes in the interstellar medium along the line-of-sight. Because of
the low Galactic latitude of PMN~J1326$-$5256, the distribution of
scattering plasma along the line-of-sight to this source is likely to
be complex. A superposition of multiple scattering ``screens'' could
lead to multiple scintillation timescales, which are evident in the
Ceduna monitoring data. If the edge of one such scattering region
drifts in or out of the line-of-sight to PMN J1326$-$5256, this could
result in changes in the ISS behaviour. From Figure~\ref{fig1},
however, it is clear that source-intrinsic changes occur on timescales
of months to years. From the ATCA monitoring, the observed decrease in
modulation index with time shows a moderate correlation ($\rho \sim
0.6$) with steepening spectral index, suggesting a possible expansion,
quenching of the ISS and subsequent fading of a compact component. We
have investigated the radio spectral variability of PMN~J1326$-$5256
using data between 1.4 and 96~GHz from the ATCA calibrator database,
observed irregularly between 2000 and 2007. Taking data from epochs
close in time to estimate instantaneous spectra, there is evidence
that during the period where the source displayed rapid IDV, the
spectrum peaks above 22~GHz, indicating the presence of a strongly
self-absorbed synchrotron component, while at later times during the
``quiescent'' phase observed at Ceduna, the turnover frequency dropped
down below 10~GHz. Recently the spectrum has again started to become
more inverted. Continued monitoring and more detailed analysis and
modelling of the Ceduna light curves are therefore of interest to
determine the origin of the observed changes in the scintillation of
PMN~J1326$-$5256.
\section{Summary}
The radio spectrum, extreme ISS and longer-term intrinsic radio
variability of PMN J1326--5256 imply the presence of a high
brightness temperature, highly beamed jet component. A first optical
spectrum of the source, together with the photometric properties
indicate that it is a typical low-frequency peaked (classical
``radio-selected'') BL Lac object. We suggest a tentative association
with the unidentified EGRET source 3EG~J1316$-$5244. GLAST will have
the resolution and sensitivity to be able to confirm PMN~J1326$-$5256
as a gamma ray source. Multiwavelength monitoring of sources such as
PMN~J1326$-$5256 in the GLAST era has the potential to make important
contributions to our understanding of the physics of AGN jets and high
energy emission mechanisms.
Filter bases generalise the familiar concept of a Schauder basis in a~Banach space in a~very natural way: let $\mathcal{F}$ be a filter of subsets of the natural numbers (a family of subsets that is closed with respect to finite intersections and is upwards-closed with respect to set inclusion; in order to avoid trivialities, we assume that every filter contains the filter of cofinite sets)
and let $X$ be a~Banach space (we consider Banach spaces over the fixed field $\mathbb K$ of real or complex numbers). A~sequence $(e_n)_{n=1}^\infty$ in $X$ is an $\mathcal{F}$\emph{-basis} for $X$ whenever for each $x\in X$ there exists a unique sequence of scalars $(e^*_n(x))_{n=1}^\infty$ such that $x = \sum_{n,\mathcal{F}} e^*_n(x) e_n$, where the series is interpreted as the $\mathcal{F}$-limit of the sequence of the corresponding partial sums. (A sequence $(x_n)_{n=1}^\infty$ in a normed space $X$ $\mathcal{F}$-\emph{converges} to $x\in X$ whenever for all $\varepsilon > 0$ there is $A\in \mathcal{F}$ such that for all $n\in A$ one has $\|x - x_n\|<\varepsilon$.)\smallskip
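To make the notion of $\mathcal{F}$-convergence concrete in the motivating case of the filter of statistical convergence (sets of asymptotic density $1$): a sequence may $\mathcal{F}$-converge without converging classically. The following sketch is our own numerical illustration (density is only estimated at finite cut-offs): the scalar sequence $x_n=1$ for $n$ a perfect square and $x_n=1/n$ otherwise has no classical limit, yet its statistical limit is $0$, since for each $\varepsilon>0$ the "bad" set $\{n:|x_n|\geq\varepsilon\}$ has density $0$.

```python
from math import isqrt

def x(n):
    # 1 on perfect squares, 1/n elsewhere: no classical limit, statistical limit 0
    return 1.0 if isqrt(n) ** 2 == n else 1.0 / n

def bad_density(eps, N):
    """Fraction of n <= N with |x_n - 0| >= eps (density of the bad set up to N)."""
    return sum(1 for n in range(1, N + 1) if abs(x(n)) >= eps) / N

for N in (10**3, 10**4, 10**5):
    print(N, bad_density(0.01, N))
# The bad set is {perfect squares} together with the finite set {n <= 100};
# its density is 0, so the printed fractions shrink roughly like 1/sqrt(N).
```

Since the squares have density $0$, the complement of each bad set belongs to the statistical filter, which is exactly the $\mathcal{F}$-convergence condition above.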
Unlike for more familiar Schauder bases (\emph{i.e.}, filter bases with respect to the filter of cofinite sets), the question of continuity of the coordinate (linear) functionals $x\mapsto e^*_n(x)$ ($x\in X$) is non-obvious. This very problem was posed by Kadets and revived in 2011 with a particular emphasis on the filter of statistical convergence, that is, the filter of subsets of $\mathbb{N}$ having density $1$ (see, \emph{e.g.}, \cite{CGK,GanichevKadets}). Clearly, this problem is equivalent to the problem of continuity of the initial $\mathcal{F}$-basis projections $(P_n)_{n=1}^\infty$ given by $P_nx = \sum_{i=1}^n e_i^*(x)e_i$ for $n\in \mathbb{N}$ and $x\in X$. \smallskip
The present paper subsumes (at least in the analytic case) the results obtained in the recent study by the two last-named authors \cite{KS}, where it was proved that, assuming the existence of a supercompact cardinal, the coordinate functionals associated with a filter basis are continuous as long as the filter is projective. The aim of the paper is twofold. First, we remove the extra assumption of the existence of supercompact cardinals used for proving automatic continuity of the evaluation functionals of a filter basis at the expense of restricting to analytic ideals, which appear to be quite sufficient from the point of view of applications. In particular, it is sufficient to answer Kadets' question concerning the filter of statistical convergence in ZFC. Second, we contribute to (and hopefully trigger) further development of the theory of filter bases in Banach spaces, that may serve as a~proxy in the situation where genuine Schauder bases are unavailable. For instance, we prove the principle of small perturbations for such bases (Theorem~\ref{Th:C}) that is a basic result in the context of Schauder bases.\smallskip
We wish to emphasise that filter bases, despite striking similarities to Schauder bases, may behave quite differently.
\begin{itemize}
\item Indeed, there is a filter basis (with respect to the filter of statistical convergence) in a separable Hilbert space whose initial projections are bounded, yet not uniformly (\cite[Example 1]{CGK}).
\item Kochanek proved in \cite{Kochanek} that for filters generated by fewer sets than the pseudo-intersection number (in particular, for countably generated filters) one indeed retains the automatic continuity of initial basis projections. Moreover, for `filter-many' of them there is a uniform bound on their norms. However, the filter of statistical convergence is generated by precisely continuum many sets, so Kochanek's result does not apply to it.
\item Avil\'es, Kadets, P\'erez H\'ernandez, and Solecki \cite{AKHS} distilled the key property from Ko\-cha\-nek's proof that makes it work and termed filters having it \emph{Baire filters}. However, analytic Baire filters are necessarily countably generated (\cite[Proposition 4.1]{AKHS}).
\end{itemize}
\smallskip
A word of explanation behind topological properties of filters is required. Since there is a natural correspondence between subsets of the set of natural numbers and 0-1 sequences, one may treat filters on this set as subsets of the Cantor set $\{0,1\}^\mathbb{N}$ equipped with the product topology and consider properties such as being Borel, analytic (continuous image
of a Polish space), etc., which we shall freely do. For more information, we refer to \cite{Farah}, where the dual notion of an ideal is considered. All unexplained terminology from Descriptive Set Theory (such as the definitions of the $\mathbf{\Sigma}^i_n$, $\mathbf{\Pi}^i_n$, and $\mathbf{\Delta}^i_n$-classes, $i=0,1$, $n\in\mathbb N$, and of Baire-measurability) is in line with \cite{Kechris}.\smallskip
Our first main result, Theorem~\ref{T: cont ZFC}, subsumes the main result of \cite{KS}.
\begin{mainth}\label{T: cont ZFC}
Let $\mathcal{F}$ be an analytic filter on $\mathbb{N}$. Then for every $\mathcal{F}$-basis the corresponding coordinate functionals are continuous.
\end{mainth}
As the filter of statistical convergence is Borel (\cite[Example 1.2.3(d)]{Farah}), hence analytic (by \cite[Theorem 13.7]{Kechris}), we obtain a~solution in ZFC to Kadets' problem.
\begin{corollary*}
Let $\mathcal{F}_{{\rm st}}$ be the filter of statistical convergence. Then for every $\mathcal{F}_{{\rm st}}$-basis the corresponding coordinate functionals are continuous.
\end{corollary*}
Theorem~\ref{T: cont ZFC} has a natural generalisation in the case where all $\mathbf{\Delta}^1_n$-subsets of Polish spaces are Baire-measurable. Since this assumption cannot be proved in ZFC alone, we decided to present the result as a separate theorem in order to highlight that Theorem~\ref{T: cont ZFC} is a theorem of ZFC, even though it follows from Theorem A$^\prime$.
\begin{mainthprime*}\label{T: cont meas}
Let $n\geqslant 1$. Assume that all $\mathbf{\Delta}^1_n$-subsets of Polish spaces are Baire-measurable. Suppose that $\mathcal{F}$ is a $\mathbf\Sigma^1_n$-filter on $\mathbb{N}$. Then for every $\mathcal{F}$-basis the corresponding coordinate functionals are continuous.
\end{mainthprime*}
A word of explanation on the assumption of Baire-measurability of $\mathbf{\Delta}^1_n$-sets is required. A classical result due to Banach and Mazur ensures that for $n \geqslant 1$ Baire-measurability of all $\mathbf{\mathbf{\Delta}}_{n+1}^1$-subsets (and even all $\mathbf{\Sigma}_{n+1}^1$-subsets) of Polish spaces follows from determinacy of $\mathbf{\Sigma}_{n}^1$-games on integers (this is an easy consequence of \cite[Theorem 21.5]{Kechris}). The latter determinacy assumption has been shown by Martin \cite{MartinMeasurable} to follow for $n = 1$ from the existence of a measurable cardinal and by Martin and Steel \cite{MartinSteel} to follow for $n \geqslant 2$ from the existence of $n$ Woodin cardinals and a measurable cardinal above them. Knowing that supercompact cardinals are measurable (see \cite[Lemma 20.16]{Jech}) and limits of Woodin cardinals (see \cite[Exercise 34.1]{Jech}), it follows that the existence of a supercompact cardinal implies the determinacy of all projective games on integers and hence the Baire-measurability of projective subsets of Polish spaces. In particular, from Theorem A$^{\prime}$ we can recover the main result of \cite{KS}.
\begin{corollary*}[\cite{KS}]
Assume the existence of a supercompact cardinal. Let $\mathcal{F}$ be a projective filter on $\mathbb{N}$. Then for every $\mathcal{F}$-basis the corresponding coordinate functionals are continuous.
\end{corollary*}
Nonetheless, large cardinals are actually \emph{not} required to get Baire-measurability of projective sets. It was proved by Martin and Solovay \cite{MartinSolovay} that under Martin's Axiom and the negation of the Continuum Hypothesis all $\mathbf{\Sigma}_2^1$-subsets of Polish spaces are Baire-measurable. Moreover, it was proved by Shelah \cite{ShelahBP} that $\mathsf{ZFC}$ + `all projective subsets of Polish spaces are Baire-measurable' is equiconsistent with $\mathsf{ZFC}$ alone, thus completely removing the need for large cardinals. Results such as those above are often proved in the literature for specific Polish spaces (\emph{e.g.}, the Cantor space), but here we freely use the fact that Baire-measurability results, if established for certain $\mathbf \Sigma^1_n$-classes ($n\in \mathbb N$) of subsets of the Cantor space, can be elementarily transferred to arbitrary Polish spaces using, \emph{e.g.}, \cite[Theorem 3.15]{CKW}, which asserts that any two uncountable Polish spaces without isolated points are Borel-isomorphic via a map that preserves meagre sets. \smallskip
We remark in passing that, assuming the consistency of one of the above-mentioned large-cardinal assumptions, the conclusion of Theorem A$^{\prime}$ is consistent with results obtained by forcings that do not destroy Woodin cardinals and measurable cardinals. It follows from \cite[Corollary]{HW} and \cite[Theorem 3]{LevySolovay} that forcings of reasonably small (accessible) cardinality satisfy this condition.\smallskip
Our second main result asserts that a filter basis with respect to a filter that is not analytic is also a filter basis with respect to a filter that actually is analytic, provided the coordinate functionals are continuous.
\begin{mainth}\label{T: reduction to analytic}
Let $\mathcal{F}$ be a filter on $\mathbb{N}$ (not necessarily projective). Let $(e_n)_{n=1}^\infty$ be an $\mathcal{F}$-basis with continuous coordinate functionals. Then there exists an analytic filter $\mathcal{F}'\subset\mathcal{F}$ such that $(e_n)_{n=1}^\infty$ is also an $\mathcal{F}'$-basis.
\end{mainth}
Observe that, in the above result, the coordinate functionals of the basis $(e_n)_{n=1}^\infty$ are the same when considered with respect to the filter $\mathcal{F}$ or the filter $\mathcal{F}'$.
\section{Proofs of Theorems~\ref{T: cont ZFC} and A$^\prime$}
\begin{proof}[Proof of Theorem~\ref{T: cont ZFC}]
Let $X$ be a Banach space with an $\mathcal{F}$-basis $(e_n)_{n=1}^\infty$. We denote by $(e_n^\star)_{n=1}^\infty$ the corresponding coordinate functionals. We start by proving that coordinate functionals $e_n^\star$ ($n\in \mathbb{N}$) are $\mathbf{\Sigma}_1^1$-measurable. Indeed, note that for a fixed open set $U\subset \mathbb{R}$ we get
\[
\begin{aligned}
e_n^\star(x) \in U &\iff \exists_{(\alpha_i)\in \mathbb{K}^\mathbb{N}} \left( \sum_{i, \mathcal{F}} \alpha_i e_i = x \wedge \alpha_n \in U\right)\\
& \iff \exists_{(\alpha_i)\in \mathbb{K}^\mathbb{N}} \forall_{l \in \mathbb{N}} \exists_{A \in \mathcal{F}} \forall_{m \in A} \left(\Big\|\sum_{i=1}^m \alpha_i e_i - x\Big\| \leq \frac{1}{l} \wedge \alpha_n \in U \right).
\end{aligned}
\]
By \cite[Proposition 3.3]{Kechris}, the space $X \times \mathbb{K}^\mathbb{N} \times \{0, 1\}^\mathbb{N}$ is Polish. For $m \in \mathbb{N}$ let us define $\Phi_m \colon X \times \mathbb{K}^\mathbb{N} \times \{0, 1\}^\mathbb{N} \to \mathbb{R} $ by
\[\Phi_m(x,(\alpha_i),A)= \Big\|\sum_{i=1}^m \alpha_i e_i - x\Big\|\quad \big(x\in X, (\alpha_i)\in \mathbb K^{\mathbb N}, A\in \{0, 1\}^\mathbb{N}\big),\] which is a continuous map. We then conclude that for every $l \in \mathbb{N}$ the set $S_l$ given by
\[
\begin{aligned}
S_l =& \left\{ (x, (\alpha_i), A) \in X \times \mathbb{K}^\mathbb{N} \times \{0, 1\}^\mathbb{N}\colon \forall_{m \in A} \Big(\Phi_m\left(x,(\alpha_i),A\right) \leq \frac{1}{l} \wedge \alpha_n \in U \Big)\right\}
\\
=& \left\{ (x, (\alpha_i), A)\colon \forall_{m \in \mathbb{N}} \Big[m \notin A \vee \big( \Phi_m\left(x,(\alpha_i),A\right) \leq \frac{1}{l} \wedge \alpha_n \in U\big)\Big] \right\}
\\
=& \bigcap_{m \in \mathbb{N}} \left\{ (x, (\alpha_i), A)\colon m \notin A \vee \Big( \Phi_m(x,(\alpha_i),A) \leq \frac{1}{l} \wedge \alpha_n \in U \Big)\right\}
\\
= &\bigcap_{m \in \mathbb{N}} \Bigg[\left\{ (x, (\alpha_i), A)\colon m \notin A \right\} \cup \Big( \big\{ (x, (\alpha_i), A)\colon \Phi_m\left(x,(\alpha_i),A\right) \leq \frac{1}{l}\big\} \\
& \cap \left\{ (x, (\alpha_i), A)\colon \alpha_n \in U \right\} \Big) \Bigg]
\end{aligned}
\]
is a $G_\delta$ subset of $X \times \mathbb{K}^\mathbb{N} \times \{0, 1\}^\mathbb{N}$ (the condition `$m \notin A$' encodes a set that is both closed and open in $\{0, 1\}^\mathbb{N}$).\smallskip
Recall that countable intersections of analytic subsets of Polish spaces are analytic, and that images and preimages of analytic sets under continuous maps between Polish spaces are analytic (see \cite[Proposition 14.4]{Kechris}). Also recall that Borel subsets of Polish spaces are analytic (see \cite[Theorem 13.7]{Kechris}). Since the filter $\mathcal{F}$ is analytic, we deduce that its preimage $X \times \mathbb{K}^\mathbb{N} \times \mathcal{F}$ under the projection $X \times \mathbb{K}^\mathbb{N} \times \{0, 1\}^\mathbb{N} \to \{0, 1\}^\mathbb{N}$ is also analytic, thus the set $$S:=\bigcap_{l \in \mathbb{N}}\operatorname{proj}_{X \times \mathbb{K}^\mathbb{N}}\big[(X \times \mathbb{K}^\mathbb{N} \times \mathcal{F}) \cap S_l\big]$$ is analytic. Observe that, for $x \in X$ and $(\alpha_i) \in \mathbb{K}^\mathbb{N}$, we have
\[
\begin{aligned}
(x, (\alpha_i)) \in S &\iff \forall_{l \in \mathbb{N}}\exists_{A \in \mathcal{F}} (x, (\alpha_i), A) \in S_l \\
& \iff \forall_{l \in \mathbb{N}} \exists_{A \in \mathcal{F}} \forall_{m \in A} \left(\Big\|\sum_{i=1}^m \alpha_i e_i - x\Big\| \leq \frac{1}{l} \wedge \alpha_n \in U \right).
\end{aligned}
\]
Thus we have $(e_n^\star)^{-1}(U) = \operatorname{proj}_X[S]$, and we conclude that $(e_n^\star)^{-1}(U)$ is analytic.\smallskip
On the other hand, $e_n^\star$ is also $\mathbf\Pi_1^1$-measurable. Indeed, writing $U^c = \bigcap_{k \in \mathbb{N}} V_k$, where the $V_k$'s are open subsets of $\mathbb{K}$, we obtain
\[
(e_n^\star)^{-1}(U)^c = \bigcap_{k \in \mathbb{N}}(e_n^\star)^{-1}(V_k),
\]
an analytic set. \smallskip
Once we know that for any open $U \subset \mathbb{K}$ the set $(e_n^\star)^{-1}(U)$ is both analytic and coanalytic, by Souslin's Theorem (\cite[Theorem 14.11]{Kechris}), it is Borel, so $e_n^\star$ is Borel-measurable. Since each Borel set is Baire-measurable, we conclude that $e_n^\star$ is a Baire-measurable homomorphism between Polish groups $X$ and $\mathbb{K}$. Consequently, by the Banach--Pettis Theorem (\cite[Theorem 9.10]{Kechris}) $e_n^\star$ is continuous.
\end{proof}
It was not essential for the proof of Theorem~\ref{T: cont ZFC} to establish $\mathbf{\Pi}_1^1$-measurability of the coordinate functionals, as all analytic sets are already Baire-measurable (see \cite[Theorem 21.6]{Kechris}). Nonetheless, we decided to follow this way of reasoning in order to emphasise the analogy with Theorem A$^\prime$, whose proof is modelled on that of Theorem \ref{T: cont ZFC} (and from which the latter follows). For the reader's convenience, we present a~sketch of that proof, detailing only the points that differ.
\begin{proof}[Proof of Theorem A$^\prime$]
As in the proof of Theorem~\ref{T: cont ZFC}, we fix an open $U \subset \mathbb{K}$ and obtain
\[
\begin{aligned}
e_n^\star(x) \in U &\iff \exists_{(\alpha_i)\in \mathbb{K}^\mathbb{N}} \forall_{l \in \mathbb{N}} \exists_{A \in \mathcal{F}} \forall_{m \in A} \left(\Big\|\sum_{i=1}^m \alpha_i e_i - x\Big\| \leq \frac{1}{l} \wedge \alpha_n \in U \right).
\end{aligned}
\]
The family of $\mathbf{\Sigma}_n^1$-sets contains Borel sets, is closed under countable intersections, and under images and preimages with respect to continuous maps between Polish spaces (see \cite[Theorem 37.1]{Kechris}). So we can deduce similarly as in the proof of Theorem \ref{T: cont ZFC} that the set $(e_n^\star)^{-1}(U)$ is $\mathbf{\Sigma}_n^1$.\smallskip
Writing $U^c = \bigcap_{k \in \mathbb{N}} V_k$, where the $V_k$'s are open sets, we have
\[
(e_n^\star)^{-1}(U)^c = \bigcap_{k \in \mathbb{N}}(e_n^\star)^{-1}(V_k),
\]
a $\mathbf{\Sigma}_n^1$-set. We deduce that $(e_n^\star)^{-1}(U)$ is $\mathbf{\Pi}_n^1$, so, taking into account the first part of the proof, we see that it is $\mathbf{\Delta}_n^1$. \smallskip
By the hypothesis, all $\mathbf{\Delta}_n^1$-sets are Baire-measurable, so $e_n^\star$ ($n\in\mathbb N$) is a Baire-measurable group homomorphism. Consequently, by the Banach--Pettis Theorem $e_n^\star$ ($n\in\mathbb N$) is continuous.
\end{proof}
\section{Proof of Theorem~\ref{T: reduction to analytic}}
We are ready to prove Theorem~\ref{T: reduction to analytic}.
\begin{proof}[Proof of Theorem~\ref{T: reduction to analytic}]
Set
\[
\mathcal{A}:= \Bigg\{A \subset \mathbb{N}\colon \exists_{x \in X} \exists_{\varepsilon>0}\; A \supset \Big\{ n \in \mathbb{N}\colon \big\|\sum_{i=1}^n e_i^\star(x) e_i - x \big\| \leq \varepsilon \Big\}\Bigg\}.
\]
It follows from the fact that $(e_n)_{n=1}^\infty$ is an $\mathcal{F}$-basis that $\mathcal{A} \subset \mathcal{F}$. Note that $(e_n)_{n=1}^\infty$ remains an $\mathcal{F}^\prime$-basis for any filter $\mathcal{F}'$ on $\mathbb{N}$ such that $\mathcal{A} \subset \mathcal{F}' \subset \mathcal{F}$. Now, for every $n \in \mathbb{N}$, consider the set
\[
\mathcal{B}_n= \Big\{ (x,\varepsilon,A)\in X \times (0,\infty)\times \{0, 1\}^\mathbb{N}\colon n \in A \vee \big\|\sum_{i=1}^n e_i^\star(x) e_i - x \big\| > \varepsilon \Big\}.
\]
Using the continuity of $e_i^\star$ ($i\in \mathbb N$) and the fact that $\{0, 1\}^\mathbb{N}$ comes equipped with the topology of pointwise convergence, we observe that $\mathcal{B}_n$ is open. Observing that
\[
\mathcal{A} = \operatorname{proj}_{\{0, 1\}^\mathbb{N}}\Big[\bigcap_{n \in \mathbb{N}} \mathcal{B}_n\Big],
\]
we deduce that $\mathcal{A}$ is analytic.\smallskip
As $\mathcal{A} \subset \mathcal{F}$, finite intersections of elements of $\mathcal{A}$ are non-empty. Consequently, $\mathcal{A}$ generates a filter $\mathcal{F}' \subset \mathcal{F}$. We have:
\[
\begin{aligned}
\mathcal{F}^\prime= &\{A \subset \mathbb{N}\colon \exists_{n \in \mathbb{N}} \exists_{A_1, A_2, \ldots, A_n \in \mathcal{A}} A \supset A_1 \cap A_2 \cap \ldots \cap A_n \} \\
= & \bigcup_{n \in \mathbb{N}} \operatorname{proj}_{(\{0, 1\}^\mathbb{N})_1} [\{ (A, A_1, \ldots, A_n)\in (\{0, 1\}^\mathbb{N})^{n+1}\colon A \supset A_1 \cap \ldots \cap A_n \wedge A_1, \ldots, A_n \in \mathcal{A}\}],
\end{aligned}
\]
where $\operatorname{proj}_{(\{0, 1\}^\mathbb{N})_1}$ is the projection onto the first coordinate. Consequently, $\mathcal{F}'$ is an analytic filter and $(e_n)_{n=1}^\infty$ is an $\mathcal{F}'$-basis since $\mathcal{A}\subset \mathcal{F}' \subset \mathcal{F}$.
\end{proof}
\section{Closing remarks}
Having established continuity of basis projections (at least for analytic filters as proved in Theorem~\ref{T: cont ZFC}, which in the light of Theorem~\ref{T: reduction to analytic} appears to be sufficient for applications), it is possible to develop a filter basis theory analogous to the theory of Schauder bases. For instance, we may recover the Bessaga--Pe{\l}czy\'nski small-perturbations principle.
\begin{mainth}\label{Th:C}
Let $X$ be a Banach space with an $\mathcal{F}$-basis $(e_n)_{n=1}^\infty$, with continuous coordinate functionals (for example, $\mathcal{F}$ is analytic). If $(f_n)_{n=1}^\infty$ is a sequence in $X$ such that
\[
\sum_{n=1}^\infty {\|f_n-e_n\|\cdot \|e_n^*\|} \leqslant \delta < 1,
\]
for some $\delta\in (0,1)$, then $(f_n)_{n=1}^\infty$ is congruent to $(e_n)_{n=1}^\infty$, that is, $(f_n)_{n=1}^\infty$ is an $\mathcal{F}$-basis and the assignment $e_n \mapsto f_n$ $(n\in \mathbb N)$ extends to a surjective isomorphism of $X$ onto itself.
\end{mainth}
\begin{proof}
The proof is analogous to the familiar case of Schauder bases. We observe that the operator $T\colon X\to X$ given by
$T(\sum_{i,\mathcal{F}}a_i e_i) = \sum_{i,\mathcal{F}}a_i(f_i - e_i)$ has norm strictly less than one. Indeed, for $x = \sum_{i,\mathcal{F}} a_i e_i$ we have
\[
\begin{aligned}
\|Tx\|=& \Big\|\sum_{i,\mathcal{F}} a_i (f_i-e_i)\Big\|
\leq \sum_{i}\| a_i (f_i-e_i)\|
= \sum_{i}\| e_i^\star (x) (f_i-e_i)\|\\
\leq & \sum_{i}\| e_i^\star \| \| x \| \| f_i-e_i \| = \Big( \sum_{i}\| e_i^\star \| \| f_i-e_i \| \Big) \| x \| \leqslant \delta \|x\|.
\end{aligned}
\]
Consequently, $I_X+T$ is invertible (with inverse given by the Neumann series)
and maps $e_i$ to $f_i$ ($i\in \mathbb N$), which proves that $(f_n)_{n=1}^\infty$ is an $\mathcal F$-basis too.
\end{proof}
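As a finite-dimensional sanity check of the argument above (illustrative only, with hypothetical random data), one can perturb the standard basis of $\mathbb{R}^d$, for which $\|e_i^\star\|=1$, by columns of norm $\delta/d$, so that $\sum_i\|f_i-e_i\|\cdot\|e_i^\star\|\le\delta<1$, and verify numerically that $\|T\|<1$ and that $I+T$ is invertible and maps $e_i$ to $f_i$:

```python
# Finite-dimensional sanity check of the small-perturbation argument.
# Perturb the standard basis e_i of R^d by vectors of norm delta/d
# (so that sum_i ||f_i - e_i|| * ||e_i*|| <= delta < 1) and verify that
# I + T, with T e_i = f_i - e_i, is invertible and carries e_i to f_i.
import numpy as np

rng = np.random.default_rng(0)
d, delta = 8, 0.5

E = np.eye(d)                                  # columns are e_1, ..., e_d
D = rng.standard_normal((d, d))
D *= (delta / d) / np.linalg.norm(D, axis=0)   # each column scaled to norm delta/d
F = E + D                                      # columns are f_i

# ||e_i*|| = 1 for the standard Euclidean basis, so the perturbation sum is:
pert = sum(np.linalg.norm(F[:, i] - E[:, i]) for i in range(d))
assert pert <= delta + 1e-12

T = F - E                                      # T e_i = f_i - e_i
assert np.linalg.norm(T, 2) < 1                # operator norm below 1
assert np.allclose((E + T) @ E, F)             # (I + T) e_i = f_i
```

The bound $\|T\|_2\le\sqrt{\sum_i\|f_i-e_i\|^2}\le\delta$ used here is the matrix counterpart of the estimate in the proof.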
Interestingly, general $\mathcal{F}$-bases differ from Schauder bases in that, for example, we do not have the basis constant at our disposal. Indeed, it follows from Grunblum's criterion (\cite[Theorem V.1]{Diestel}) that if the basis projections are uniformly bounded, then the $\mathcal{F}$-basis is already a~Schauder basis.\smallskip
We conclude our note with two open problems.
\begin{question}
Does there exist a Banach space $X$, a filter $\mathcal{F}$ on $\mathbb{N}$, and an $\mathcal{F}$-basis $(e_n)_{n=1}^\infty$ for $X$ such that not all coordinate functionals $(e_n^\star)_{n=1}^\infty$ are continuous?
\end{question}
Theorem \ref{T: reduction to analytic} demonstrates that filter bases with respect to analytic filters provide a very natural framework for the theory. In \cite[Example 1]{CGK} the authors provided an example of an $\mathcal{F}_{{\rm st}}$-basis in $\ell_2$ that is not a Schauder basis. However, $\mathcal{F}_{{\rm st}}$ has a relatively low complexity being $F_{\sigma\delta}$ (\cite[Example 1.2.3(d)]{Farah}). It is thus natural to ask whether being analytic in Theorem \ref{T: reduction to analytic} can be strengthened to being Borel.
\begin{question}
Let $\mathcal{F}$ be a filter and let $(e_n)_{n=1}^\infty$ be an $\mathcal{F}$-basis for a Banach space $X$ with continuous coordinate functionals. Does there exist a Borel filter $\mathcal{F}'$ such that $(e_n)_{n=1}^\infty$ is also an $\mathcal{F}'$-basis? Should this be the case, what is the smallest complexity of such an $\mathcal{F}'$?
\end{question}
\subsection*{Acknowledgements}
The authors are grateful to Ji\v{r}\'{\i} Spurn\'y and Miroslav Zelen\'y for organising the 49\textsuperscript{th} Winter School in Abstract Analysis in Sn\v{e}\v{z}n\'e, Czech Republic in January 2022, during which the research discussed in the present note was initiated. The second-named author thus cordially acknowledges support received from SONATA 15 No. 2019/35/D/\-ST1/\-01734 that allowed him to travel to this conference.
\bibliographystyle{plain}
\section{Introduction}
\begin{figure}
\centering
\includegraphics[width=0.98\linewidth]{./pic/trade-off.pdf}
\caption{Performance-efficiency trade-off on WIDER Face validation for different face detectors. The proposed ASFD outperforms a range of state-of-the-art methods.}
\label{fig:trade_off}
\end{figure}
Face detection serves as a fundamental step towards various face-related applications, such as face alignment~\cite{tai2019towards}, face recognition \cite{huang2020curricularface} and face analysis \cite{pan2018mean}. It aims to locate the face regions (if any) in a given image, and has been a long-standing research topic, ranging from \cite{viola2004robust} to deep learning based methods \cite{zhang2017s3fd,chi2019srn}.
Beyond the scope of face, generic object detection has been significantly pushed forward by the development of deep convolutional neural networks \cite{simonyan2014vgg,he2016resnet,ren2015faster,liu2016ssd}. Among the representative frameworks, the single-stage anchor-based detector with pyramid features has been thoroughly studied recently \cite{liu2016ssd,lin2017focal} and is dominant in face detection \cite{zhang2017s3fd,chi2019selective,tang2018pyramidbox,li2019dsfd,zhang2020refineface}. In this framework, regular and dense anchors with different scales and aspect ratios are tiled over all locations of the feature map, and pyramid features are extracted by the backbone and enhanced by the neck, which is subsequently plugged with both classification and regression branches.
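To make the dense-anchor design concrete, the following minimal NumPy sketch tiles one anchor per feature-map location over a pyramid; the strides and the scale rule (scale $=4\times$stride, square anchors) are hypothetical placeholders, not the settings of any particular detector:

```python
# Minimal sketch of dense anchor tiling over pyramid feature maps.
# Strides and scales below are hypothetical placeholders.
import numpy as np

def tile_anchors(feat_h, feat_w, stride, scale, ratio=1.0):
    """One (cx, cy, w, h) anchor per feature-map location."""
    ys, xs = np.meshgrid(np.arange(feat_h), np.arange(feat_w), indexing="ij")
    cx = (xs + 0.5) * stride          # anchor centres in input-image pixels
    cy = (ys + 0.5) * stride
    w = np.full_like(cx, scale * np.sqrt(ratio), dtype=float)
    h = np.full_like(cy, scale / np.sqrt(ratio), dtype=float)
    return np.stack([cx, cy, w, h], axis=-1).reshape(-1, 4)

# A 640x640 input with pyramid levels of stride 8, 16 and 32:
anchors = [tile_anchors(640 // s, 640 // s, s, 4 * s) for s in (8, 16, 32)]
print([a.shape[0] for a in anchors])  # → [6400, 1600, 400]
```

The counts illustrate why shallow levels dominate the anchor budget: halving the stride quadruples the number of tiled anchors.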
\begin{figure*}[!t]
\centering
\subfigure[]{\includegraphics[width=0.32\linewidth]{./pic/fae_comparison.pdf}}
\subfigure[]{\includegraphics[width=0.32\linewidth]{./pic/scale_cdf.pdf}}
\subfigure[]{\includegraphics[width=0.32\linewidth]{./pic/density_cdf.pdf}}
\caption{(a) Comparison of our AutoFAE against other FAE modules on WIDER Face and COCO validation. The performance gaps with the baseline are indicated by blue and orange bars respectively, and RetinaNet is adopted as the baseline.
(b) Cumulative distribution function (CDF) of the relative scale of bounding boxes. $51\%$ of objects in COCO have a relative scale below $0.11$. For the same scale, the proportion in WIDER Face is $95\%$, while for a similar proportion, $55\%$ of faces in WIDER Face are less than $0.02$.
(c) CDF of the number of boxes in each image. The distribution of images containing more than $10$ boxes for WIDER Face is long-tailed, \textit{e.g.} $99\%$ of images in COCO have less than $30$ objects, while there are many images in WIDER Face that contain more than $150$ faces.
}
\label{fig:gap_coco_widerface}
\end{figure*}
Towards the design of Feature Aggregation and Enhancement (FAE) modules for these methods, Feature Pyramid Network (FPN) and its variants aggregate hierarchical features via preset pathways, \textit{e.g.} top-down and bottom-up paths, to effectively fuse multi-scale features \cite{tang2018pyramidbox,tan2020efficientdet,liu2018pafpn,li2019dsfd,zhang2020acfd}. For another instance, the ASPP \cite{chen2017aspp,qiao2020detectors}, RFB \cite{liu2018rfb} and RFE \cite{deng2019retinaface} modules are proposed to enhance the feature representation by adjusting the effective receptive fields. Recently, Neural Architecture Search (NAS) has also been investigated for object detection and has achieved remarkable performance gains, such as NAS-FPN \cite{ghiasi2019nasfpn}, AutoFPN \cite{xu2019autofpn} and NAS-FCOS \cite{wang2019nasfcos}. However, such gains fail to generalize when applied to face detection.
Fig.~\ref{fig:gap_coco_widerface} (a) shows a quantitative investigation of the cutting-edge FAE modules discussed above, in which significant performance drops are observed when they are applied to the face domain. Even the automatic learning based method, NAS-FCOS \cite{wang2019nasfcos}, performs $1.6$ AP lower than the baseline. This phenomenon highlights the domain gap between generic object detection and face detection. To explain it, we use the cumulative distribution function to model the corresponding datasets, \textit{i.e.} WIDER Face \cite{yang2016wider} and COCO \cite{lin2014coco}, in terms of the relative size of boxes and the number of boxes in each image, as presented in Fig.~\ref{fig:gap_coco_widerface} (b) and (c) respectively. As shown, the relative scale of faces is much smaller than that of objects in generic object detection, and there are more faces per image in WIDER Face than objects in COCO.
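Statistics of this kind can be reproduced from box annotations with a few lines. The sketch below uses made-up boxes and takes the relative scale of a box as $\sqrt{\text{box area}/\text{image area}}$ (one common convention; the paper does not state its exact definition), then evaluates the empirical CDF at a threshold:

```python
# Sketch of the relative-scale CDF computation; boxes are made-up examples.
# Relative scale of a box is taken here as sqrt(box_area / image_area).
import math

def relative_scales(boxes, img_w, img_h):
    return [math.sqrt((w * h) / (img_w * img_h)) for (w, h) in boxes]

def empirical_cdf(values, threshold):
    """Fraction of values at or below the threshold."""
    return sum(v <= threshold for v in values) / len(values)

boxes = [(10, 12), (24, 30), (60, 80), (200, 260)]   # (w, h) in pixels
scales = relative_scales(boxes, img_w=1024, img_h=768)
print(round(empirical_cdf(scales, 0.11), 2))  # → 0.75
```

Evaluating this CDF over all annotations of WIDER Face and COCO at the same thresholds yields curves of the shape discussed for Fig.~2(b).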
These characteristics also determine the design principles of modern face detectors. For instance, shallower feature maps are adopted to detect small faces, and more predicted results are retained before and after non-maximum suppression for a high recall rate. Since FAE modules designed for generic object detectors are weak when dealing with small-scale and crowded objects, false positives inevitably arise when they are applied to the face domain, resulting in performance degradation.
In this paper, a novel NAS based face detector framework termed Automatic and Scalable Face Detector (ASFD) is introduced, which is designed upon the basis of quantitative observations as above.
The proposed ASFD is equipped with an effective FAE module, namely AutoFAE, which is discovered in a face-suitable search space, and then automatically scaled up/down to meet different requirements.
In particular, we first analyze why the domain gap between generic object detection and face detection causes the impact shown in Fig.~\ref{fig:gap_coco_widerface} (a). The performance degradation in the face domain is caused by the large semantic differences and unreasonable receptive fields of the aggregated features.
Then, we propose a face-suitable search space that aggregates a feature with similar-scale ones and enriches the feature representation with different operations for different pyramid levels. The AutoFAE module is searched by a gradient-based NAS method \cite{liu2018darts,xu2019pcdarts} and achieves consistent gains on both face detection and generic object detection, as presented in Fig.~\ref{fig:gap_coco_widerface} (a).
Finally, we build a supernet consisting of the found AutoFAE and a series of backbones, \textit{e.g.} ResNet \cite{he2016resnet}, and automatically obtain the proposed ASFD family to meet different complexity constraints via a one-shot NAS \cite{guo2019spos,chu2019fairnas}.
It is worth noting that the ASFD family achieves the state-of-the-art performance-efficiency trade-off, as presented in Fig.~\ref{fig:trade_off} \cite{yoo2019extd,chi2019srn,tang2018pyramidbox,li2019dsfd,zhang2020refineface}. Especially, the lightweight ASFD-D$0$ can run more than $320$ FPS with VGA-resolution images on a V$100$ GPU, and the strong ASFD-D$6$ obtains the highest AP scores on popular benchmarks, \textit{i.e.} WIDER Face and FDDB. To sum up, this work makes following contributions:
\begin{itemize}
\item We observe an interesting phenomenon that some previous FAE modules perform well in generic object detection but fail in face detection, and conduct extensive experiments to illustrate why this phenomenon occurs.
\item Based on the observations, we design a face-suitable search space for feature aggregation and enhancement modules, and discover an effective and generalized AutoFAE module via a joint searching method.
\item Extensive experiments conducted on the popular benchmarks demonstrate the better performance-efficiency trade-off of the proposed ASFD.
\end{itemize}
\section{Related Work}
\subsection{Feature Aggregation and Enhancement.}
In recent years, generic object detection and face detection have been dominated by deep learning based methods. SSD \cite{liu2016ssd} is the first to predict objects using multi-scale pyramid features, and FPN \cite{lin2017fpn} proposes to enrich the representation of multi-scale features via a top-down pathway. Recently, many works have been devoted to aggregating and enhancing multi-scale features effectively. \cite{liu2018pafpn} and \cite{tan2020efficientdet} enhance the entire feature hierarchy through bottom-up path augmentation. \cite{qiao2020detectors} proposes a novel recursive FPN that incorporates extra feedback connections from the FPN into the bottom-up backbone layers.
Nowadays, NAS-based methods have demonstrated much success in exploring a better architecture for feature fusion and refinement \cite{xu2019autofpn,wang2019nasfcos,ghiasi2019nasfpn}.
Besides, feature enhancement modules have also been widely studied. Inception \cite{szegedy2017inception,szegedy2016rethinking} aims to capture receptive fields of different sizes via a multi-branch structure. \cite{zhang2020refineface} introduces rectangle receptive fields through a novel enhancement module. \cite{qiao2020detectors,li2019dsfd,liu2018rfb} adopt dilated convolutions with different rates to enhance feature discriminability and robustness. However, as illustrated in Fig.~\ref{fig:gap_coco_widerface}, some of them turn out to be ineffective in face detection.
\begin{table}[!t]
\centering
\begin{tabular}{c|cccccc}
\toprule[1pt]
Pyramid Level & P2 & P3 & P4 & P5 & P6 & P7 \\
\midrule[0.5pt]
P2 & $82.9$ & \textcolor{red}{$84.3$} & $85.0$ & $84.7$ & $84.5$ & $82.7$ \\
P3 & \textcolor{blue}{$83.0$} & $82.9$ & \textcolor{red}{$85.1$} & $84.8$ & $84.5$ & $83.2$ \\
P4 & $83.0$ & \textcolor{blue}{$83.2$} & $82.9$ & \textcolor{red}{$84.5$} & $83.8$ & $83.3$\\
P5 & $82.7$ & $83.0$ & \textcolor{blue}{$83.0$} & $82.9$ & \textcolor{red}{$83.7$} & $83.2$ \\
P6 & $82.8$ & $83.0$ & $82.9$ & \textcolor{blue}{$82.9$} & $82.9$ & \textcolor{red}{$83.1$} \\
P7 & $82.7$ & $82.8$ & $83.3$ & $83.0$ & \textcolor{blue}{$83.0$} & $82.9$ \\
\bottomrule[1pt]
\end{tabular}
\caption{ Performance of FPN on Hard subset of WIDER Face validation while a pyramid level (indicated by the row) is aggregated by a specific level (indicated by the column).}
\label{tab:pyramid_fa}
\end{table}
\subsection{Neural Architecture Search.}
NAS was first employed with reinforcement learning to search for hyper-parameters of the architecture or of the training process \cite{zoph2016neural,zoph2018learning,tan2019efficientnet}. Recent research focuses on the automatic search of network architectures. Based on the idea of weight sharing, some works build the final structure by stacking a searched cell several times \cite{liu2018darts,xu2019pcdarts}, while other methods \cite{guo2019spos,chu2019fairnas,cai2019once,liu2020bfbox,chen2019detnas} decouple the training and searching processes and directly train a supernet by randomly sampling a single-path network at each step.
As for their applications on object detection to fuse the multi-scale features, NAS-FPN \cite{ghiasi2019nasfpn} searches the irregular connections among pyramid layers with an RNN controller for aggregating the multi-scale features. AutoFPN \cite{xu2019autofpn} and NAS-FCOS \cite{wang2019nasfcos} discover the aggregation modules within a fully-connected search space densely connecting any two layers, in which features from some layers that damage the aggregated feature may be introduced causing accuracy degradation.
BFBox \cite{liu2020bfbox} is the first attempt to apply NAS to face detection and proposes a face-suitable search space. Although novel backbone and neck networks are discovered within this search space, its performance is still worse than that of state-of-the-art face detectors.
In this work, the sparse cross-scale connections of the FA module are searched within a face-suitable search space rather than in a brute-force fully-connected manner, and various FE modules with different operations and topologies are discovered for different pyramid levels.
\section{Problem Analysis}\label{sec:problem}
Fig.~\ref{fig:gap_coco_widerface} (a) is sufficient to illustrate the inconsistency between generic object detection and face detection. To further analyze why this phenomenon occurs, the effects of feature aggregation and enhancement modules are discussed separately in this section.
\subsection{Feature Aggregation (FA).}
Firstly, extensive experiments are conducted to explore the relationship between performance and the cross-connections of FA modules. These experiments add a FA module (for simplicity, FPN \cite{lin2017fpn}) between any two pyramid features in turn, aggregating one feature with another after resizing to the same shape. Table~\ref{tab:pyramid_fa} shows the results, with the diagonal indicating RetinaNet without FPN, and the upper and lower triangles representing top-down and bottom-up paths respectively; in particular, red and blue fonts indicate the top-down and bottom-up paths used in FPN and PAFPN. It is clear that aggregating multi-scale features through top-down paths is superior to bottom-up ones, especially when the two layers are close to each other. As the distance increases, some connections even cause performance degradation, \textit{e.g.} AP$_{.50}$ drops by $0.2$ when P$2$ is aggregated by P$7$.
Therefore, small faces, which only occur in the shallow features, cannot be enhanced by semantic-rich features with a large scale difference.
It reveals that NAS-FPN, AutoFPN and NAS-FCOS, which fuse features through fully-connected or irregular connections, are sub-optimal in the face domain.
\begin{table}[!t]
\centering
\begin{tabular}{l|cccccc}
\toprule[1pt]
Module & P2 & P3 & P4 & P5 & P6 & P7 \\
\midrule[0.5pt]
ASPP & $86.5$ & $86.8$ & $86.9$ & $87.2$ & $\mathbf{87.5}$ & $87.1$ \\
CPM & $86.8$ & $86.9$ & $87.0$ & $87.1$ & $\mathbf{87.4}$ & $87.2$ \\
RFB & $86.8$ & $86.9$ & $87.0$ & $87.3$ & $\mathbf{87.4}$ & $87.2$ \\
RFE & $87.4$ & $\mathbf{87.6}$ & $87.5$ & $87.4$ & $87.2$ & $87.1$ \\
\bottomrule[1pt]
\end{tabular}
\caption{Performance of different feature enhancement modules when operated on different pyramid levels.}
\label{tab:pyramid_fe}
\end{table}
\subsection{Feature Enhancement (FE).}
Similar experiments are conducted to demonstrate the effects of different feature enhancement modules. As shown in Table~\ref{tab:pyramid_fe}, there are significant performance differences when a FE module is applied to different pyramid layers.
In general, ASPP \cite{chen2017aspp,qiao2020detectors}, CPM \cite{li2019dsfd,tang2018pyramidbox} and RFB \cite{liu2018rfb} employ dilated convolutions with different rates to enlarge the receptive fields and behave consistently with one another across pyramid layers. However, they damage the shallow features, especially the first two layers, and cause severe performance degradation.
RFE \cite{zhang2020refineface} aims to enrich the features by introducing the rectangle receptive fields and performs well on all pyramid layers especially the shallow layers.
A meaningful conclusion can be drawn: the shallower layers seem to prefer more diverse receptive fields, while the deeper layers favor larger ones.
This is mainly because detecting occluded or extreme-pose faces, which appear in the shallow layers, requires more robust features, while large faces require features with large receptive fields to be located accurately.
In summary, reasons for the aforementioned problem are: (1) \textit{The unreasonable connection in FA modules would cause performance degradation}, (2) \textit{Features from different layers should be enhanced by different operations}.
\section{Methodology}
The framework of our ASFD is based on the simple and effective RetinaNet \cite{lin2017focal}, which contains three main components: the backbone for extracting pyramid features, the neck for fusing and enhancing the features, and the head for regression and classification. Our goal is to discover a \textit{better neck architecture} for RetinaNet and scale the ASFD to satisfy different complexity requirements automatically.
\subsection{Search Space of AutoFA and AutoFE}
\subsubsection{AutoFA}
In order to address the limitations of previous NAS-based FPNs \cite{wang2019nasfcos, xu2019autofpn,ghiasi2019nasfpn} when applied to the face domain, the above analysis motivates us to design a module that aggregates a feature with similar-scale features instead of directly using those with large differences in scale.
To this end, we propose a fundamental building cell of AutoFA for aggregating the pyramid features sequentially, shown in Fig.~\ref{fig:fpn_uint}. Initially, the cell contains a pyramid feature pool with a specific one activated, and a candidate feature pool with the aggregated features of previous steps. During the searching phase, the pyramid features are selected sequentially, and candidate features are chosen with the corresponding probabilities $\boldsymbol{\alpha}$. Firstly, these candidate features are resized to the same shape, weighted by $\boldsymbol{\alpha}$, and aggregated together; then, the result is fused with the pyramid feature and a convolution layer is applied to obtain the corresponding aggregated feature. At last, this feature is appended to the candidate feature pool for later feature fusion.
Assume that a pyramid feature and the corresponding aggregated feature are $\mathbf{F}_i$ and $\mathbf{C}_i$, the basic cell can be formulated as,
\begin{equation}
\mathbf{C}_i = f_{post}\left(\beta_0 \mathbf{F}_i + \beta_1 f_{pre}\left(\sum\nolimits_{j<i}\alpha_j f_{re}(\mathbf{C}_j)\right)\right),
\end{equation}
in which $\sum_{j<i}\alpha_j\!=\!1$, $f_{post}(\cdot)$ and $f_{pre}(\cdot)$ are two convolution operations for feature aggregation, and $f_{re}(\cdot)$ resizes the feature to the same size, \textit{i.e.} bilinear interpolation for upsampling and maxpooling with stride $2$ for downsampling. Once the searching process is done, the final discrete structure can be obtained according to the probability score $\boldsymbol{\alpha}$ and the importance score $\boldsymbol{\beta}$. A given pyramid feature is aggregated with the candidate features whose probability $\alpha_j\geq0.5$, and $\boldsymbol{\beta}$ is retained as the initial value for weighting the pyramid feature and the candidate feature. In this way, aggregated features of the discrete cell can be denoted as,
\begin{equation}
\begin{split}
\mathbf{T}_i &= f_{pre}\left(\sum\nolimits_{j<i}[\alpha_j\geq0.5]\cdot f_{re}(\mathbf{C}_j)\right), \\
\mathbf{C}_i &= f_{post}\left(\beta_0\cdot\mathbf{F}_i + \beta_1\cdot\mathbf{T}_i\right),
\end{split}
\end{equation}
where $[\!~\cdot~\!]$ equals $1$ if the inner expression is true.
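As an illustration, the discrete cell above can be sketched in a few lines of plain Python. The 1-D arrays, the identity defaults for $f_{pre}$/$f_{post}$, and the assumption that candidates are pre-resized are simplifications for exposition, not the actual convolutional implementation.

```python
import numpy as np

def aggregate_cell(F_i, candidates, alpha, beta, f_pre=None, f_post=None):
    """Discrete AutoFA cell: keep candidates with alpha_j >= 0.5, fuse them,
    then combine with the pyramid feature weighted by beta_0 and beta_1.
    f_pre / f_post stand in for the two convolutions (identity by default);
    candidates are assumed to be already resized to a common shape."""
    f_pre = f_pre or (lambda x: x)
    f_post = f_post or (lambda x: x)
    kept = [C for C, a in zip(candidates, alpha) if a >= 0.5]
    T_i = f_pre(np.sum(kept, axis=0)) if kept else np.zeros_like(F_i)
    return f_post(beta[0] * F_i + beta[1] * T_i)
```

With $\alpha=(0.7, 0.2)$ only the first candidate survives the $0.5$ threshold, so the cell reduces to a weighted sum of the pyramid feature and that single candidate.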
\begin{figure}[!t]
\centering
\includegraphics[width=0.95\linewidth]{./pic/fpn_unit.pdf}
\caption{Illustration of the basic cell of AutoFA. $\sum$ means the sum weighted by a factor. $\boldsymbol{\alpha}$ indicates the probability to choose a candidate feature, and $\boldsymbol{\beta}$ is the score for weighting the importance of different features.}
\label{fig:fpn_uint}
\end{figure}
Similar to PAFPN \cite{liu2018pafpn} and BiFPN \cite{tan2020efficientdet}, our AutoFA aggregates the pyramid features along a top-down path and a bottom-up path, each of them is comprised of several basic building cells. For the top-down path, pyramid features are selected in the order of decreasing resolution for aggregation, \textit{i.e.} from P$7$ with stride $128$ to P$2$ with stride $4$, same as Fig.~\ref{fig:fpn_uint}. As the counterpart, the aggregation along bottom-up path is in the reversed order.
\subsubsection{AutoFE.}
\begin{figure}[!t]
\centering
\includegraphics[width=0.99\linewidth]{./pic/fe_unit.pdf}
\caption{Illustration of the basic structure of AutoFE with $4$ nodes. The bold colored arrows have two states: not activated and activated, in which not activated arrows mean disconnecting, $\boldsymbol{\kappa}$ indicates the probability. Thin colored arrows indicate different operations with probability $\boldsymbol{\gamma}$.}
\label{fig:fe_uint}
\end{figure}
The incompatibility of those FE modules for some pyramid levels has been revealed and an important conclusion has been drawn in the aforementioned analysis.
To discover the suitable enhancement module for each pyramid layer, we propose a basic cell for our AutoFE that includes several intermediate features transformed by the candidate operations, which include \{$1\times 1$ conv, $1\times 3$ conv, $3\times 1$ conv, $3\times 3$ conv, $1\times 5$ conv, $5\times 1$ conv, $5\times 5$ conv \}. As presented in Fig.~\ref{fig:fe_uint}, the basic cell is conducted as a directed acyclic graph with several nodes, where node $0$ is input and others are intermediate features.
Each node $i$ is connected to a previous node $j\!<\!i$ with two states indicated by $\kappa_{ji}$, \textit{i.e.} the edge is activated if and only if $\kappa_{ji}$ is the maximum among $\kappa_{\ast i}$, and not activated otherwise.
When an edge is activated, the previous feature is transformed by the candidate operations; otherwise, it is skipped. Assume the feature of the $i$th node is $\mathbf{F}_i$; the cell can be formulated as follows,
\begin{equation}\label{eq:illu_fe_unit}
\mathbf{F}_i = \sum\nolimits_{j<i} [j\!=\!\mathop{\arg\max}_j \kappa_{ji}]\cdot f_{op}(\mathbf{F}_j, \gamma_{ji}),
\end{equation}
where $f_{op}(\cdot)$ is the sum weighted by $\gamma_{ji}$ when processed by the activated operations. Different from \cite{liu2018darts,xu2019pcdarts}, the output of the cell is the sum of the features of all leaf nodes, \textit{i.e.} the intermediate features that are not input to any other node, given by
\begin{equation}
\mathbf{F}_{out} = \sum\nolimits_i \left[\sum\nolimits_{k>i} [i\!=\!\mathop{\arg\max}_i \kappa_{ik}] = 0\right] \cdot \mathbf{F}_i.
\end{equation}
In particular, the commonly used convolutions with different kernel shapes and dilation rates are adopted for $f_{op}(\cdot)$.
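The leaf-node output rule above can be made concrete with a small sketch. Plain arrays replace feature maps, and representing $\boldsymbol{\kappa}$ as a dense matrix indexed as `kappa[j][i]` is an illustrative assumption:

```python
import numpy as np

def cell_output(features, kappa):
    """Sum the leaf nodes of an AutoFE cell.

    features : list of node features F_0..F_{n-1} (F_0 is the input node).
    kappa    : kappa[j][i] scores the edge j -> i (j < i); node i's single
               active parent is argmax_j kappa[j][i].
    A node is a leaf iff no later node selects it as a parent."""
    n = len(features)
    parents = {int(np.argmax([kappa[j][i] for j in range(i)]))
               for i in range(1, n)}
    leaves = [features[i] for i in range(n) if i not in parents]
    return np.sum(leaves, axis=0)
```

For a 4-node cell where nodes 1-3 all hang off nodes 0 and 1, only nodes 2 and 3 are leaves, so the output is their sum.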
However, during the search, Eq.~\ref{eq:illu_fe_unit} cannot be optimized because it is equivalent to discrete sampling, which is not differentiable. To allow back-propagation, we use the Gumbel-Max method \cite{dong2019gdas} to re-formulate Eq.~\ref{eq:illu_fe_unit} in an efficient way that samples a discrete probability as follows,
\begin{equation}\label{eq:gumbel}
\begin{split}
\mathbf{F}_i &= \sum\nolimits_{j<i} h_{ji}\cdot f_{op}(\mathbf{F}_j, \gamma_{ji}), \\
\text{s.t.} \quad h_{ji} &= \text{onehot}(\mathop{\arg\max}_j(\kappa_{ji} + o_{ji})),
\end{split}
\end{equation}
where $o_{ji}$ is an \textit{i.i.d.} sample drawn from Gumbel$(0,1)$ \cite{dong2019gdas}. Then, the softmax function is used to relax the argmax so as to make Eq.~\ref{eq:gumbel} differentiable, in which $\tilde{h}_{ji}$ approximates $h_{ji}$, denoted by,
\begin{equation}\label{eq:softmax}
\tilde{h}_{ji} = \frac{\exp{((\kappa_{ji} + o_{ji})/\tau)}}{\sum\nolimits_{j'<i}\exp{((\kappa_{j'i} + o_{j'i})/\tau)}},
\end{equation}
where $\tau$ is the softmax temperature. In this way, argmax is used in the forward pass to achieve discrete sampling of connections between two nodes, while the softmax in Eq.~\ref{eq:softmax} is adopted in the backward pass to allow gradient back-propagation.
Finally, the discrete architecture of AutoFE is obtained by retaining the connections and operations among intermediate features according to the maximum of $\boldsymbol{\kappa}$ and $\boldsymbol{\gamma}$.
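The forward/backward split of this straight-through Gumbel sampling can be sketched in NumPy; the function name and the fixed seed are ours, and in a real framework the one-hot $h$ would be wired so that gradients flow through $\tilde{h}$:

```python
import numpy as np

def gumbel_st_sample(kappa, tau=1.0, rng=None):
    """Sample one incoming edge: a one-hot h from argmax(kappa + Gumbel noise)
    is used in the forward pass, while the softmax relaxation h_tilde is what
    the backward pass would differentiate. Both are returned for inspection."""
    if rng is None:
        rng = np.random.default_rng(0)
    g = -np.log(-np.log(rng.uniform(size=len(kappa))))  # Gumbel(0,1) noise
    logits = (np.asarray(kappa) + g) / tau
    h_tilde = np.exp(logits - logits.max())
    h_tilde /= h_tilde.sum()                            # softmax (backward)
    h = np.zeros_like(h_tilde)
    h[np.argmax(logits)] = 1.0                          # one-hot (forward)
    return h, h_tilde
```

Because softmax is monotone in the logits, the hard sample and its relaxation always agree on which edge is selected.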
\subsection{Search Strategy of AutoFA and AutoFE}
We have transformed the discrete network structure into several architecture parameters through the design of face-suitable search space for AutoFA and AutoFE. In detail, $\boldsymbol{\alpha}$ is adopted to make the decision on choosing candidate features for a pyramid feature, $\boldsymbol{\beta}$ is used to balance the importance of pyramid and candidate features. For AutoFE, $\boldsymbol{\kappa}$ is employed for selecting the connection of intermediate nodes, and $\boldsymbol{\gamma}$ indicates the probability of different operations. Similar to \cite{liu2018darts,chen2019pdarts,xu2019pcdarts}, we utilize the bi-level optimization method to alternately optimize the network parameters, \textit{e.g.} parameters of convolution layers, and architecture parameters in an end-to-end manner.
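The alternating bi-level scheme can be illustrated with a deliberately tiny toy problem. The quadratic losses and the decoupled gradients below are illustrative assumptions only; in the actual search the upper-level (validation) gradient also depends on the network weights.

```python
def bilevel_optimize(steps=200, lr_w=0.1, lr_a=0.05):
    """Alternate gradient steps on a toy bi-level problem: the weight w
    minimizes a 'training' loss (w - (1 + a))^2 while the architecture
    parameter a minimizes a 'validation' loss (a - 0.5)^2."""
    w, a = 0.0, 0.0
    for _ in range(steps):
        w -= lr_w * 2 * (w - (1.0 + a))   # lower level: network parameters
        a -= lr_a * 2 * (a - 0.5)         # upper level: architecture parameters
    return w, a
```

The two updates settle at $a\!\approx\!0.5$ and $w\!\approx\!1.5$, mimicking how the architecture distribution and the shared weights are refined in turn until both converge.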
\begin{figure}[!t]
\centering
\includegraphics[width=0.9\linewidth]{./pic/supernet.pdf}
\caption{The architecture of the supernet to automatically obtain a detector for different AI systems.}
\label{fig:arch_supernet}
\end{figure}
\subsection{Auto Model Scaling}
We automatically obtain the ASFD family with different complexities on the basis of a supernet, as shown in Fig.~\ref{fig:arch_supernet}, which is comprised of several backbones in parallel, the stacked AutoFAE modules, and the stacked convolutions of the prediction head. Our method aims to search for a better composition to meet different complexity requirements, \textit{i.e.} which backbone to pick, how many AutoFAE modules and head convolutions to stack, whether to skip AutoFE for each AutoFAE, and what number of feature channels to use.
\subsubsection{Training.}
Based on the idea of weight sharing, the supernet is trained by alternately training a single-path network through uniformly sampling \cite{guo2019spos,chu2019fairnas}.
For instance, as presented in Fig.~\ref{fig:arch_supernet}, the single path is composed of backbone-$2$, and the AutoFAE modules and prediction convolutions, each stacked two layers.
Furthermore, a scalable method is proposed for training the supernet compatible with different feature channels. The supernet is optimized with the maximal feature channels during a long warm-up period until it tends to converge. Then, the candidate feature channels are gradually added to be sampled in the descending order, in which the corresponding tensors are sliced out along each dimension to fit the calculations.
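The channel-slicing step is simple to state precisely: the supernet stores only the maximal weight tensor and a smaller sampled width reuses its leading channels. A minimal sketch, with NumPy arrays in place of convolution weights and a function name of our choosing:

```python
import numpy as np

def slice_weights(W, c_out, c_in):
    """Carve a sub-network's weights out of the maximal shared tensor.
    W has shape (C_out_max, C_in_max, k, k); a sampled width simply keeps
    the leading c_out output channels and c_in input channels."""
    return W[:c_out, :c_in, :, :]
```

Because every width shares the same leading slice, gradients from any sampled sub-network update a consistent portion of the maximal tensor.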
\subsubsection{Searching.}
The searching phase is based on the genetic algorithm \cite{guo2019spos,chen2019detnas,chu2019fairnas} and directly takes inference latency into the fitness. At first, the population is randomly initialized with genes encoded by the $5$ degrees of freedom of the supernet; genes violating the constraints are removed. After the initialization, candidates are evaluated on a mini validation set to obtain their fitness. At each iteration, only the top-$k$ candidates with the best fitnesses are retained to generate the next generation by mutation and crossover. By repeating this procedure several times, we can discover the single-path network with the best fitness.
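The search loop just described can be sketched as follows. Here `fitness`, `latency`, and the gene encoding are placeholders standing in for the real evaluation on the mini validation set and the measured inference time:

```python
import random

def genetic_search(fitness, latency, max_lat, space,
                   pop=20, iters=10, topk=5, seed=0):
    """Latency-constrained genetic search over a dict-encoded gene space.
    Genes violating the latency constraint are discarded at birth."""
    rng = random.Random(seed)
    keys = list(space)
    def sample():
        return {k: rng.choice(space[k]) for k in keys}
    def mutate(g):
        g = dict(g)
        k = rng.choice(keys)
        g[k] = rng.choice(space[k])
        return g
    def crossover(a, b):
        return {k: (a if rng.random() < 0.5 else b)[k] for k in keys}
    population = []
    while len(population) < pop:
        g = sample()
        if latency(g) <= max_lat:          # enforce the constraint
            population.append(g)
    for _ in range(iters):
        population.sort(key=fitness, reverse=True)
        parents = population[:topk]        # keep the top-k fittest
        children = []
        while len(children) < pop - topk:
            g = mutate(crossover(rng.choice(parents), rng.choice(parents)))
            if latency(g) <= max_lat:
                children.append(g)
        population = parents + children
    return max(population, key=fitness)
```

With a toy space of backbone depth and channel width, the loop reliably returns a feasible gene near the fitness optimum under the latency budget.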
\section{Experiments}
\subsection{Experimental Setup}
\subsubsection{Baseline.} If not specified, RetinaNet \cite{lin2017focal} with FPN is utilized as the baseline of the face detector. Compared to the original generic object detection application, it has the following differences: (1) 6 levels of pyramid features are used for predicting with anchor scales $\{4,8,16,32,64,128\}$ and aspect ratio $1\!\!:\!\!1.5$. (2) The IoU threshold for anchor matching is changed to $0.4$ and the ignore-zone is not implemented. (3) Top-2000 predictions with confidence higher than $0.05$ are processed by non-maximum suppression with a threshold $0.4$ to produce at most $750$ final detections.
The results are reported using AP$_{.50}$ measured with a constant IoU threshold $0.5$, as well as AP averaged under IoU thresholds from $0.5$ to $0.95$ with step $0.05$ to demonstrate the performance at high IoU.
\subsubsection{Train Details.} We use the ImageNet-pretrained models to initialize the backbone parameters, and `kaiming' method for others. SGD algorithm is employed to optimize the network parameters with momentum $0.9$, weight decay $5\!\times\!10^{-4}$ and initial learning rate $0.01$ per $32$ images. For ablative studies, the learning rate is multiplied by factor $0.1$ at $30$, $40$ epochs and ended at $50$ epochs.
For the main results, it is divided by $10$ at $60$, $100$ epochs and ended at $120$ epochs.
\subsubsection{Search Details.} The training set of WIDER Face is divided into two mini training subsets and a validation subset with a ratio of $9\!:\!9\!:\!2$; the first two are used for updating the network and architecture parameters respectively, and the last for evaluating the searched modules.
Adam algorithm with learning rate $0.01$ is adopted for optimizing the architecture parameters, which are frozen for the first $50$ epochs and updated during epochs $50\!\sim\!100$; other settings are the same as the training details. To determine the final AutoFAE module, we run the searching algorithm $3$ times with different random seeds and pick the best one based on its performance on the mini validation set. All training and searching experiments are conducted on 8 V100 GPUs. The AutoFA and AutoFE can be searched within 3 to 4 hours. The supernet takes about 12 hours for training, and the ASFD families can be sampled within 1.5 to 4 hours.
\subsection{Ablation Study}
\begin{table}[!t]
\centering
\begin{tabular}{l|ccc|ccc}
\toprule[1pt]
\multirow{2}{*}{Module} & \multicolumn{3}{c|}{AP$_{.50}$} & \multicolumn{3}{c}{AP} \\
\cline{2-7}
& Easy & Medium & Hard & Easy & Medium & Hard \\
\midrule[0.5pt]
Baseline & $95.1$ & $94.0$ & $87.2$ & $61.9$ & $59.2$ & $46.5$ \\
NAS-FPN & $95.1$ & $93.9$ & $86.2$ & $61.8$ & $59.0$ & $45.7$ \\
NAS-FCOS & $94.7$ & $93.1$ & $85.6$ & $61.5$ & $58.1$ & $45.1$ \\
AutoFPN & $94.6$ & $93.4$ & $86.0$ & $61.4$ & $58.5$ & $45.6$ \\
PAFPN & $95.3$ & $94.1$ & $87.3$ & $62.1$ & $59.4$ & $46.8$ \\
BiFPN & $95.5$ & $94.4$ & $87.4$ & $62.3$ & $59.6$ & $46.9$ \\
ABiFPN & $95.3$ & $94.5$ & $87.5$ & $62.2$ & $59.7$ & $47.0$ \\
FEM-FPN & $95.2$ & $94.0$ & $86.7$ & $62.1$ & $59.4$ & $46.5$ \\
DARTS & $95.1$ & $93.5$ & $86.5$ & $61.8$ & $58.6$ & $45.6$ \\
PC-DARTS & $95.0$ & $93.7$ & $86.6$ & $61.8$ & $58.9$ & $46.0$ \\
\bottomrule[0.5pt]
AutoFA & $95.4$ & $94.4$ & $\mathbf{87.8}$ & $\mathbf{62.8}$ & $\mathbf{60.2}$ & $\mathbf{47.4}$\\
\bottomrule[1pt]
\end{tabular}
\caption{Comparison with state-of-the-art feature aggregation modules on WIDER Face validation.}
\label{tab:autofa}
\end{table}
\subsubsection{Effect of Search Space for AutoFA and AutoFE}
To demonstrate the effectiveness of our proposed face-suitable search space for feature aggregation and enhancement modules, the AutoFA and AutoFE modules are discovered and compared to the state-of-the-art modules respectively.
At the first stage, the AutoFA module is searched through a RetinaNet that replaces the FPN with several basic aggregation modules.
As shown in Table~\ref{tab:autofa}, experiments are conducted by comparing to the commonly used FA modules, in which DARTS \cite{liu2018darts} and PC-DARTS \cite{xu2019pcdarts} give the results based on a fully-connected search space \cite{wang2019nasfcos}. Our AutoFA manages to address the limitations of previous NAS-based methods and outperforms them by a large margin of more than $1.0$ points in AP on all three subsets.
Besides, it is also significantly better than the hand-crafted ones composed of top-down and bottom-up paths, \textit{i.e.} PAFPN \cite{liu2018pafpn}, BiFPN \cite{tan2020efficientdet}, and ABiFPN \cite{zhang2020acfd}, demonstrating the superiority of connections between the multi-scale features of AutoFA.
Such a large improvement mainly comes from predictions at high IoU, indicating that the features of different scales are fully aggregated, which helps produce more distinguishable classification and more accurate localization.
\begin{table}[!t]
\centering
\begin{tabular}{l|ccc|ccc}
\toprule[1pt]
\multirow{2}{*}{Module} & \multicolumn{3}{c|}{AP$_{.50}$} & \multicolumn{3}{c}{AP} \\
\cline{2-7}
& Easy & Medium & Hard & Easy & Medium & Hard \\
\midrule[0.5pt]
Baseline & $94.7$ & $92.8$ & $82.9$ & $61.7$ & $58.4$ & $45.0$ \\
ASPP & $94.8$ & $93.0$ & $83.4$ & $62.1$ & $58.8$ & $45.3$ \\
RFB & $94.5$ & $92.7$ & $83.0$ & $61.5$ & $58.5$ & $45.2$ \\
CPM & $94.6$ & $92.8$ & $83.0$ & $61.5$ & $58.4$ & $45.2$ \\
FEM-CPM & $94.5$ & $92.9$ & $83.3$ & $61.9$ & $58.7$ & $45.5$ \\
RFE & $94.5$ & $92.8$ & $83.2$ & $61.8$ & $58.7$ & $45.4$ \\
DARTS & $94.7$ & $92.9$ & $83.0$ & $61.6$ & $58.6$ & $45.1$ \\
PC-DARTS & $94.6$ & $93.0$ & $83.0$ & $61.8$ & $58.6$ & $45.2$ \\
\bottomrule[0.5pt]
AutoFE & $\mathbf{95.2}$ & $\mathbf{93.2}$ & $\mathbf{83.5}$ & $\mathbf{62.1}$ & $\mathbf{59.0}$ & $\mathbf{45.8}$ \\
\bottomrule[1pt]
\end{tabular}
\caption{Comparison with state-of-the-art feature enhancement modules on WIDER Face validation.}
\label{tab:autofe}
\end{table}
Then, RetinaNet without FPN is adopted as the baseline to better highlight the effectiveness of FE modules.
Different FE modules are placed between the backbone and the detection head to refine the multi-scale features, as shown in Table~\ref{tab:autofe}. In particular, DARTS and PC-DARTS discover FE modules by following their original settings in image classification. However, they only improve the baseline by small margins.
With the specified face-suitable search space, the found AutoFE improves the baseline by $0.5/0.4/0.6$ points of AP$_{.50}$ and $0.4/0.6/0.8$ points of AP, far exceeding the other state-of-the-art modules and demonstrating the superiority of our face-suitable search space.
\subsubsection{Effect of Joint Searching AutoFAE}
The AutoFAE module is composed of the AutoFA and AutoFE modules, which can be obtained either by cascading the separately discovered AutoFA and AutoFE modules or by jointly searching in an end-to-end manner.
As presented in Table~\ref{tab:joint_search}, only a minor improvement is achieved by directly cascading the discovered AutoFA and AutoFE, while the AutoFAE found by joint searching further improves AP$_{.50}$ and AP by clear margins, demonstrating the state-of-the-art performance of the proposed AutoFAE.
\begin{table}[!t]
\centering
\resizebox{0.99\linewidth}{!}{
\begin{tabular}{l|ccc|ccc}
\toprule[1pt]
\multirow{2}{*}{Method} & \multicolumn{3}{c|}{AP$_{.50}$} & \multicolumn{3}{c}{AP} \\
\cline{2-7}
& Easy & Medium & Hard & Easy & Medium & Hard \\
\midrule[0.5pt]
Baseline & $95.1$ & $94.0$ & $87.2$ & $61.9$ & $59.2$ & $46.5$ \\
AutoFA+AutoFE & $95.4$ & $94.5$ & $87.9$ & $62.8$ & $60.2$ & $47.5$ \\
Joint Search & $\mathbf{95.7}$ & $\mathbf{95.0}$ & $\mathbf{88.6}$ & $\mathbf{62.9}$ & $\mathbf{60.5}$ & $\mathbf{47.8}$ \\
\bottomrule[1pt]
\end{tabular}}
\caption{The effect of searching method for the AutoFAE.}
\label{tab:joint_search}
\end{table}
\subsubsection{Effect of Different Positions of AutoFE}
Recall that our AutoFAE is built upon the top-down and bottom-up paths. Therefore, there are three ways to build the final AutoFAE module: the AutoFE module can be plugged before or after the AutoFA, or between the top-down and bottom-up paths. Three modules are thus obtained using the joint searching method. As shown in Table~\ref{tab:different_position}, the best performance is achieved when AutoFE is in the middle position. This is mainly because similar representations are generated after the top-down aggregation; placing AutoFE before the bottom-up path can further enhance these features to carry different context information.
\begin{table}[!t]
\centering
\begin{tabular}{l|ccc|ccc}
\toprule[1pt]
\multirow{2}{*}{Position} & \multicolumn{3}{c|}{AP$_{.50}$} & \multicolumn{3}{c}{AP} \\
\cline{2-7}
& Easy & Medium & Hard & Easy & Medium & Hard \\
\midrule[0.5pt]
Baseline & $95.1$ & $94.0$ & $87.2$ & $61.9$ & $59.2$ & $46.5$ \\
Before & $95.2$ & $94.3$ & $87.5$ & $62.4$ & $60.1$ & $46.9$ \\
Middle & $\mathbf{95.7}$ & $\mathbf{95.0}$ & $\mathbf{88.6}$ & $\mathbf{62.9}$ & $\mathbf{60.5}$ & $\mathbf{47.8}$ \\
After & $95.3$ & $94.4$ & $88.0$ & $62.4$ & $60.2$ & $47.3$ \\
\bottomrule[1pt]
\end{tabular}
\caption{The effect of the position of AutoFE and AutoFA.}
\label{tab:different_position}
\end{table}
\begin{figure}[!t]
\centering
\includegraphics[width=0.7\linewidth]{./pic/autofae.pdf}
\vspace{-2.5mm}
\caption{The architecture of the discovered AutoFAE, in which FA-Cell is the basic cell indicated by Fig.~\ref{fig:fpn_uint}, $m\!\times\! n$ denotes the convolution kernel size, and R$x$ is the dilated rate.}
\vspace{-3.5mm}
\label{fig:autofae}
\end{figure}
\begin{table}[!t]
\centering
\resizebox{1\linewidth}{!}{
\begin{tabular}{l|l|ccc|ccc|c}
\toprule[1pt]
\multirow{2}{*}{Model} & \multirow{2}{*}{Single Path} & \multicolumn{3}{c|}{AP$_{.50}$} & \multicolumn{3}{c|}{AP} & \multirow{2}{*}{Lat.} \\
& & Easy & Medium & Hard & Easy & Medium & Hard & \\
\midrule[0.5pt]
D$0$ & R$18$-FA-H$\times1$-$64$ & $95.7$ & $94.8$ & $88.0$ & $63.7$ & $61.1$ & $48.3$ & $3.1$ \\
D$1$ & R$18$-FA-H$\times3$-$128$ & $96.1$ & $95.2$ & $88.8$ & $64.1$ & $61.5$ & $48.9$ & $5.7$ \\
D$2$ & R$34$-FA-H$\times3$-$192$ & $96.4$ & $95.6$ & $89.5$ & $64.6$ & $62.3$ & $49.6$ & $10.5$ \\
D$3$ & R$50$-FAE-H$\times3$-$192$ & $96.6$ & $95.9$ & $90.5$ & $65.1$ & $62.8$ & $50.4$ & $16.6$ \\
D$4$ & R$50$-FAE-H$\times4$-$256$ & $97.0$ & $96.3$ & $91.2$ & $65.8$ & $63.3$ & $50.9$ & $26.2$ \\
D$5$ & R$101$-FAE-H$\times4$-$256$ & $97.0$ & $96.5$ & $91.9$ & $65.9$ & $63.3$ & $51.7$ & $29.6$ \\
D$6$ & R$101$-FAE-FA-FA-H$\times4$-$256$ & $97.2$ & $96.5$ & $92.5$ & $66.2$ & $63.5$ & $52.3$ & $36.1$ \\
\bottomrule[1pt]
\end{tabular}}
\caption{The family of ASFD, where latency (ms) is measured with VGA-resolution images and on Nvidia V100 GPU.}
\label{tab:trade_off}
\end{table}
\begin{figure}[!t]
\centering
\subfigure[WIDER Face: Hard Val]{
\includegraphics[width=0.47\linewidth,height=0.35\linewidth]{./pic/widerface_val_hard.pdf}
}
\subfigure[WIDER Face: Hard Test]{
\includegraphics[width=0.47\linewidth,height=0.35\linewidth]{./pic/widerface_test_hard.pdf}
}
\\
\subfigure[FDDB: Discontinuous]{
\includegraphics[width=0.47\linewidth,height=0.35\linewidth]{./pic/fddb_disc.JPG}
}
\subfigure[FDDB: Continuous]{
\includegraphics[width=0.47\linewidth,height=0.35\linewidth]{./pic/fddb_cont.JPG}
}
\vspace{-2.5mm}
\caption{Evaluation on the popular benchmarks of ASFD.}
\vspace{-3.5mm}
\label{fig:benchmarks}
\end{figure}
\begin{figure*}[!t]
\centering
\includegraphics[width=0.3\linewidth]{./pic/demo_1.jpg}
\includegraphics[width=0.3\linewidth]{./pic/demo_2.jpg}
\includegraphics[width=0.3\linewidth]{./pic/demo_3.jpg}
\vfill
\includegraphics[width=0.3\linewidth]{./pic/demo_4.jpg}
\includegraphics[width=0.3\linewidth]{./pic/demo_5.jpg}
\includegraphics[width=0.3\linewidth]{./pic/demo_6.jpg}
\vfill
\includegraphics[width=0.3\linewidth]{./pic/demo_7.jpg}
\includegraphics[width=0.3\linewidth]{./pic/demo_8.jpg}
\includegraphics[width=0.3\linewidth]{./pic/demo_9.jpg}
\vfill
\vspace{-2.5mm}
\caption{Illustration of ASFD to various large variations. Red bounding boxes indicate the detection confidence is above $0.8$.}
\vspace{-3mm}
\label{fig:visual_demo}
\end{figure*}
\subsection{Analysis on AutoFAE}
We visualize the architecture of AutoFAE in Fig.~\ref{fig:autofae}, which matches the conclusions drawn in the problem analysis section well.
In general, the AutoFA module aggregates pyramid features along with the sparse cross-scale and similar-scale connections instead of a fully connected manner like \cite{xu2019autofpn,wang2019nasfcos}, which avoids the performance degradation caused by large scale differences.
Most of these cross-scale connections appear on the top-down path of AutoFA, in which the shallow features that lack semantic information are aggregated with not only the adjacent layer but also the others with rich context.
Besides, AutoFE modules with different operations and topological structures are found for different pyramid layers. In particular, dilated convolutions only appear in the later levels for enlarging the receptive fields, while the others are mostly rectangular convolutions for more diverse features. Thus, large faces are located accurately, and small faces with occlusion or extreme poses are well distinguished.
\subsection{Model Scaling}
Next, the supernet is trained on the basis of the final AutoFAE and backbone networks of ResNet series \cite{he2016resnet}. Then, the genetic algorithm is adopted to search the single-path networks with $50$ populations and $50$ iterations. We discover $7$ single-path networks under the different GPU inference latencies, \textit{e.g.} $5$ms, $10$ms and so on. These networks are trained for $150$ epochs with the commonly used pyramid anchors \cite{tang2018pyramidbox}, and multi-scale test is employed with factors $0.5,1.0,1.5,2.0$. The detailed results are presented in Table~\ref{tab:trade_off}, in which the network architecture is indicated by single path. For instance, ``R$101$-FAE-FA-FA-H$\times4$-$256$'' means ResNet101 is adopted as the backbone, AutoFAE modules are stacked $3$ times and AutoFE module is skipped within the last two modules, convolution layers are repeated $4$ times in prediction head, and the feature channel is $256$. Obviously, our ASFD family makes a better trade-off between performance and efficiency by scaling the components and channels, especially the ASFD-D$0$ costs about $3.1$ ms, \textit{i.e.} more than 320 FPS.
\subsection{Evaluation on Benchmarks}
We evaluate our ASFD-D$6$ on the popular benchmarks, \textit{i.e.} WIDER Face \cite{yang2016wider} and FDDB \cite{jain2010fddb}; it is trained only on the training set of WIDER Face and tested on these benchmarks without any fine-tuning.
Our ASFD-D$6$ obtains the highest AP$_{.50}$ scores with $97.2/96.5/92.5$ on WIDER Face validation, $96.7/96.2/92.1$ on WIDER Face test, and $99.11$ and $86.25$ on FDDB discontinuous and continuous curves, outperforming the prior competitors by a considerable margin and setting a new state-of-the-art face detector, shown as Fig.~\ref{fig:benchmarks} (Easy and Medium results of WIDER Face are ignored due to the space limitation). More examples of our ASFD on handling face with various variations are shown in Fig.~\ref{fig:visual_demo} to demonstrate its effectiveness.
\subsection{Generalization on Generic Object Detection}
To demonstrate the generalization ability of our AutoFAE module, we evaluate the final AutoFAE module with three typical detectors, RetinaNet \cite{lin2017focal}, FCOS \cite{tian2019fcos} and Faster RCNN \cite{ren2015faster} on COCO. In particular, the original FPN module is replaced with our AutoFAE by connecting the corresponding pyramid layers, as presented in Table~\ref{tab:object_detection}, our AutoFAE module can consistently adapt to the general object domain and different detectors, with AP improvements from $0.5$ to $1.0$ points.
\begin{table}[!t]
\centering
\resizebox{0.99\linewidth}{!}{
\begin{tabular}{l|ccc|ccc}
\toprule[1pt]
\multirow{2}{*}{Model} & \multicolumn{3}{c|}{FPN} & \multicolumn{3}{c}{AutoFAE} \\
\cline{2-7}
& AP & AP$_{.50}$ & AP$_{.75}$ & AP & AP$_{.50}$ & AP$_{.75}$ \\
\midrule[0.5pt]
RetinaNet & $36.5$ & $55.1$ & $39.0$ & $\mathbf{37.5}$ & $\mathbf{56.6}$ & $\mathbf{39.9}$ \\
FCOS & $38.6$ & $57.2$ & $41.7$ & $\mathbf{39.2}$ & $\mathbf{57.6}$ & $\mathbf{42.2}$ \\
Faster RCNN & $37.4$ & $58.1$ & $40.4$ & $\mathbf{37.9}$ & $\mathbf{58.3}$ & $\mathbf{41.2}$ \\
\bottomrule[1pt]
\end{tabular}}
\caption{The generalization of AutoFAE on generic object detection dataset \textit{i.e.} COCO.}
\label{tab:object_detection}
\end{table}
\section{Conclusion}
Neural architecture search has demonstrated its success in generic object detection for feature aggregation and enhancement. However, existing methods cannot adapt to the domain gap between face and generic object detection and suffer severe performance drops when applied to the face domain. In this paper, we analyze why this phenomenon occurs and propose a face-suitable search space for feature aggregation and enhancement modules. A better FAE module, termed AutoFAE, is discovered using bi-level optimization; it outperforms the current state-of-the-art FAE modules in face detection and generalizes to generic object detection. Finally, we automatically obtain a family of detectors with different complexities based on a supernet, achieving a better performance-efficiency trade-off.
\bibliographystyle{ACM-Reference-Format}
\balance
While spectroscopic redshifts have now been measured for over one million
galaxies, in recent years
digital sky surveys have obtained multi-band imaging
for of order a hundred million galaxies. Deep, wide-area surveys planned for
the next decade will increase the number of galaxies with
multi-band photometry to a few billion. Due to technological and financial
constraints, obtaining spectroscopic redshifts for more than a
small fraction of these galaxies will remain impractical for the foreseeable
future. As a result, over the last decade substantial effort has gone into
developing photometric redshift (photo-z) techniques, which use
multi-band photometry to estimate approximate galaxy redshifts. For many
applications in extragalactic astronomy and cosmology, the resulting
photometric redshift precision is sufficient for the science goals at
hand, provided one can accurately characterize the uncertainties in the
photo-z estimates.
Two broad categories of photo-z estimators are in wide use:
template-fitting and training set methods. In template-fitting, one
assigns a redshift to a galaxy by finding
the redshifted spectral energy distribution (SED), selected
from a library of templates,
that best reproduces the observed fluxes in the broadband filters.
By contrast, in the training set approach, one
uses a training set of galaxies with
spectroscopic redshifts and photometry to derive an empirical relation
between photometric observables (e.g., magnitudes, colors, and morphological
indicators) and redshift.
Examples of empirical methods include Polynomial Fitting \citep{con95b},
the Nearest Neighbor method \citep{csa03},
the Nearest Neighbor Polynomial (NNP) technique \citep{cun07},
Artificial Neural Networks (ANN) \citep{col04,van04,dab07}, and
Support Vector Machines \citep{wad04}. When a large spectroscopic
training set that is representative of the photometric data set to be
analyzed is
available, training set techniques typically outperform template-fitting
methods, in the sense that the photo-z estimates have smaller scatter
and bias with respect to the true redshifts \citep{cun07}. On the
other hand, template-fitting can be applied to a photometric sample
for which relatively few spectroscopic analogs exist.
For a comprehensive review and comparison of photo-z methods,
see \cite{cun07}.
In this paper, we present a publicly available galaxy photometric redshift
catalog for the Sixth Data Release (DR6) of the Sloan Digital Sky
Survey (SDSS) imaging catalog \citep{bla03b,eis01,gun98,ive04,str02,yor00}.
We use the ANN photo-z method, which we have shown to
be a superior training set method \citep{cun07}, and briefly compare the
results using different photometric observables.
We also compare the ANN results with those from NNP, an empirical
method which achieves similar performance to the ANN method \citep{cun07}.
Since the SDSS photometric catalog covers a large area of sky, a number
of deep spectroscopic galaxy samples with SDSS photometry are available
to use as training sets, as shown in Fig.~\ref{dist.sdss}.
In combination, these spectroscopic samples cover the full apparent
magnitude range of the SDSS photometric sample.
The paper is organized as follows.
In \S \ref{sel} we briefly describe the SDSS DR6 photometric catalog
and the selection criteria used
to obtain the galaxy photometric sample from the catalog.
In \S \ref{tra} we describe the spectroscopic catalogs used
to construct the photo-z training and validation sets.
In \S \ref{met} we outline the photo-z methods as well as the
photo-z error estimator technique applied to the galaxy sample.
Statistical results for photometric redshift performance, errors,
and redshift distributions
are presented in \S \ref{res}. In \S \ref{rec}
we make recommendations for possible
additional cuts on the photo-z catalog based on our
own flags and those in the SDSS database.
In \S \ref{cat} we briefly describe how to access the
photo-z catalog from the public SDSS data server, and in \S \ref{con} we
present our conclusions. For completeness, Appendix \ref{query}
provides the database query used to select the photometric sample,
Appendix \ref{stargal} discusses issues of star-galaxy separation,
and Appendix \ref{photdr5} briefly describes an earlier version
of the photo-z algorithm used for SDSS DR5 \citep{ade07}.
\section{SDSS Photometric Catalog and Galaxy Selection}
\label{sel}
The SDSS comprises a large-area
imaging survey of the north Galactic cap, a multi-epoch imaging survey of
an equatorial stripe in the south Galactic cap, and a spectroscopic survey of
roughly $10^6$ galaxies and $10^5$ quasars
\citep{yor00}.
The survey uses a dedicated, wide-field, 2.5m telescope \citep{gun06} at
Apache Point Observatory, New Mexico.
Imaging is carried out in drift-scan mode using a 142 mega-pixel camera
\citep{gun06} that gathers data in five broad bands, $u g r i z$, spanning
the range from 3,000 to 10,000 \AA \, \citep{fuk96}, with an effective exposure
time of 54.1 seconds per band.
The images are processed using specialized
software \citep{lup01,sto02} and are
astrometrically \citep{pie03} and photometrically \citep{hog01,tuc06}
calibrated using observations of a set of primary standard stars
\citep{smi02} observed on a neighboring 20-inch telescope.
The imaging in the sixth SDSS Data Release (DR6) covers an essentially
contiguous region of the north Galactic cap, with only a few small patches
remaining to be observed. In any region where imaging runs overlap, one run is
declared primary\footnote{For the precise definition of primary objects see
{\tt http://cas.sdss.org/dr6/en/help/docs/glossary.asp\#P}}
and is used for spectroscopic target selection;
other runs are declared secondary.
The area covered by the DR6 primary imaging survey, including the
southern stripes, is $8417 \textrm{ deg}^2$, but
DR6 includes both the primary and secondary observations of
each area and source \citep{dr6}.
\begin{figure}
\begin{minipage}[t]{85mm}
\begin{center}
\resizebox{85mm}{!}{\includegraphics[angle=0]{f1.c.eps}}
\end{center}
\end{minipage}
\caption{Normalized $r$ magnitude distributions for various catalogs.
{\it Top three rows:}
the distributions of the spectroscopic catalogs used for photo-z
training and validation are
shown for 2SLAQ, CFRS, CNOC2, TKRS,
DEEP and DEEP2, and the SDSS spectroscopic sample.
$N_{tot}$ denotes the total number of galaxy measurements used
from each catalog; for galaxies in regions with repeat SDSS imaging,
each independent photometric measurement is counted separately.
{\it Bottom row:} ({\it left})---the distribution of the combined
spectroscopic sample; ({\it right})---the
distribution for the SDSS photometric galaxy sample, where
objects were classified as galaxies according to the
photometric TYPE flag (see text).
}\label{dist.sdss}
\end{figure}
The SDSS database provides a variety of measured magnitudes for each
detected object. Throughout this paper, we use dereddened model magnitudes to
perform the photometric redshift computations. To determine the model
magnitude, the SDSS photometric pipeline fits two
models to the image of each galaxy in each passband: a de Vaucouleurs (early-type) and
an exponential (late-type) light profile.
The models are convolved with the estimated point
spread function (PSF), with arbitrary axis ratio and position angle.
The best-fit model in the $r$ band (which is used to fix the model scale
radius) is then applied to the other passbands and convolved with the
passband-dependent PSFs to yield the model magnitudes.
Model magnitudes provide an unbiased color estimate in the absence of color
gradients \citep{sto02}, and the dereddening procedure removes the
effect of Galactic extinction \citep{sch98}.
\begin{deluxetable}{c c | c c}
\tablewidth{0pt}
\tablecaption{Photometric Sample Properties}
\startdata
\hline
\hline
\multicolumn{2}{c}{\hspace{0.1 in} AB magnitude limits \hspace{0.2 in} }
&\multicolumn{2}{c}{\hspace{0.2 in} RMS photometric \hspace{0.4 in}} \\
\multicolumn{2}{c}{}
& \multicolumn{2}{c}{\hspace{0.2 in} calibration errors } \\
\hline
\hspace{0.1 in} $u$ & 22.0 & \hspace{0.4 in} $r$ & 2\% \\
\hspace{0.1 in} $g$ & 22.2 & \hspace{0.4 in} $u-g$ & 3\% \\
\hspace{0.1 in} $r$ & 22.2 & \hspace{0.4 in} $g-r$ & 2\% \\
\hspace{0.1 in} $i$ & 21.3 & \hspace{0.4 in} $r-i$ & 2\% \\
\hspace{0.1 in} $z$ & 20.5 & \hspace{0.4 in} $i-z$ & 3\% \\
\enddata
\tablecomments{Magnitude limits are for 95\% completeness for point
sources in typical seeing; 50\% completeness numbers are generally
0.4 mag fainter \citep{ade07}. The median seeing for the SDSS imaging
survey is $1.4''$.
} \label{propphot}
\end{deluxetable}
To construct the photometric sample of galaxies for which we wish to
estimate photo-z's, we obtained
a catalog drawn from the SDSS CasJobs website
{\tt http://casjobs.sdss.org/casjobs/}.
We checked some of the SDSS photometric flags to ensure that we have obtained
a reasonably clean galaxy sample. In particular,
we selected all primary objects from DR6 that have the TYPE flag
equal to $3$ (the type for galaxy) and that do not
have any of the flags BRIGHT, SATURATED, or SATUR\_CENTER set.
For the definitions of these flags we refer the reader to the
PHOTO flags entry at the SDSS
website\footnote{{\tt http://cas.sdss.org/dr6/en/help/browser/browser.asp}}
or to Appendix \ref{query}.
We also took into account the nominal SDSS flux limit
(see Table~\ref{propphot}) by only selecting galaxies with dereddened model
magnitude $r<22.0$.
The full database query we used is given in Appendix \ref{query}.
The photometric galaxy catalog we have selected suffers from impurity and
incompleteness at some level, since
the photometric pipeline cannot
separate stars from galaxies with 100\% success
at faint magnitudes. We
describe some of our tests of star/galaxy separation in
Appendix \ref{stargal}, where we show that the SDSS TYPE flag
provides star/galaxy separation performance similar to other
methods.
\begin{figure}
\begin{minipage}[t]{85mm}
\begin{center}
\resizebox{85mm}{!}{\includegraphics[angle=0]{f2.c.eps}}
\end{center}
\end{minipage}
\caption{Distribution of $g-r$ and $r-i$ colors for different SDSS samples. {\it Top row:} the color distributions for galaxies in the SDSS spectroscopic
sample.
{\it Middle row:} the color distributions for galaxies in the other (non-SDSS)
spectroscopic training samples.
{\it Bottom row:} the color distributions for galaxies in the photometric
sample.
As above, galaxy/star classification used the photometric TYPE flag.
}\label{dist.color.sdss}
\end{figure}
The final photometric sample comprises $77,418,767$ galaxies.
The $r$ magnitude distribution of this sample is shown in
the bottom right panel of Fig.~\ref{dist.sdss}; the $g-r$ and
$r-i$ color distributions
are shown in the bottom panels of Fig.~\ref{dist.color.sdss}.
\section{Spectroscopic Training and Validation sets} \label{tra}
Since our methods to estimate photo-z's and photo-z errors are
training-set based, we would ideally like the spectroscopic
training set to be
fully representative of the photometric sample to be analyzed, i.e., to have
similar statistical properties and magnitude/redshift distributions.
Training-set methods can be thought of as inherently Bayesian, in the sense
that the training-set distributions form effective priors for the analysis of the
photometric sample; to the extent that the training-set distributions
reflect those of the photometric sample, we may expect the photo-z estimates
to be unbiased (or at least they will not be biased by the prior).
Given the practical difficulties of carrying out spectroscopy at
faint magnitudes and low surface brightness, such an ideal generally cannot be achieved.
Realistically, all we can hope for is a training set that
(a) is large enough that statistical fluctuations are small and (b)
spans the same magnitude, color, and redshift ranges as the photometric sample.
Fortunately, our tests indicate that the estimated photo-z's
depend only weakly on the shape of the
redshift and magnitude distributions of the training set for the SDSS.
\begin{figure*}
\begin{center}
\resizebox{150mm}{!}{\includegraphics[angle=0]{f3.eps}}
\caption{A simple FFMP network with 3 layers and configuration $2:1:1$.
The inputs are the
two magnitudes, $m_1$ and $m_2$.
$I_x$ denotes the total input to node $x$, and $O_x$ is the corresponding output of that node.
The weights $w$ associated with each connection are found by training the network
using training and validation sets (see text).}
\label{NNsimple}
\end{center}
\end{figure*}
We have constructed a spectroscopic sample consisting of $639,911$
galaxies that have SDSS photometry measurements
(counting repeats; see below) and that have
spectroscopic redshifts measured by the SDSS or by
other surveys, as described below.
We imposed a magnitude limit of $r<23.0$ on the spectroscopic
sample and applied
additional cuts on the quality of the spectroscopic
redshifts reported by the different surveys.
Since we impose a limit of $r<22.0$ for the SDSS photometric sample,
the fainter limit chosen
for the spectroscopic training sample accommodates the full photometric
range of interest without creating boundary effects for photo-z's of
galaxies with magnitudes near the photometric sample limit of $r = 22$.
Each survey providing spectroscopic redshifts defines a redshift
quality indicator; we refer the reader to the respective publications listed
below for their precise definitions.
For each survey, we chose a redshift quality cut roughly corresponding
to 90\% redshift confidence or greater.
The SDSS spectroscopic sample
provides $531,672$ redshifts, principally from the MAIN and
Luminous Red Galaxy (LRG) samples, with confidence level
$z_{\rm conf} > 0.9$. The remaining redshifts are:
$21,123$ from the Canadian Network for Observational Cosmology
Field Galaxy Survey \citep[CNOC2;][]{yee00},
$1,830$ from the Canada-France Redshift Survey \citep[CFRS;][]{lil95}
with Class $> 1$,
$31,716$ from the Deep Extragalactic Evolutionary Probe \citep[DEEP;][]{deep2}
with $q_z$ = A or B and from DEEP2
\citep{wei05}\footnote{{\tt http://deep.berkeley.edu/DR2/ }}
with $z_{\rm quality} \geq 3$,
$728$ from the Team Keck Redshift Survey \citep[TKRS;][]{wir04}
with $z_{\rm quality} > -1$, and
$52,842$ LRGs from the
2dF-SDSS LRG and QSO Survey
\citep[2SLAQ;][]{can06}\footnote{{\tt http://lrg.physics.uq.edu.au/New\_dataset2/ }}
with $z_{\rm op} \geq 3$.
We positionally matched the galaxies with spectroscopic redshifts against photometric
data in the SDSS {\tt BestRuns} CAS database, which allowed us
to match with photometric measurements in different SDSS imaging runs.
The above numbers for galaxies with redshifts count independent photometric
measurements of the same objects due to multiple SDSS imaging of the same
region; in particular SDSS Stripe 82 has been imaged a number of times.
The numbers of {\em unique} galaxies used from these surveys are
$1,435$ from CNOC2,
$272$ from CFRS,
$6,049$ from DEEP and DEEP2,
$389$ from TKRS, and
$11,426$ from 2SLAQ.
The SDSS spectroscopic samples were drawn from the SDSS primary galaxy sample and therefore are all unique.
The spectroscopic sample obtained by combining all these catalogs,
including the repeats, was divided into two catalogs of the
same size ($\sim 320,000$ objects each).
One of these catalogs was taken to be
the {\it training set} used by the photo-z and error estimators, and the other
was used as a {\it validation set} to carry out tests of photo-z
quality (see \S \ref{subsec:meth_photoz}). Our tests indicate that this
procedure of treating different
images of the same training/validation set galaxies as independent objects leads
to good results, provided all the photometric measurements for a given object
are confined to either the training set or the validation set and not mixed. By
contrast,
excluding such multiple images from the spectroscopic sample would result
in much smaller training and validation sets; these would be very sparse at
faint magnitudes, leading to much diminished photo-z quality there. On the other
hand, splitting
the repeat images of a given object between the training and validation sets
may result in ``over-fitting'' of the derived photo-z's
(see \S \ref{subsec:meth_photoz}).
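The constraint that all repeat images of a given galaxy fall entirely within either the training set or the validation set can be enforced by splitting at the level of unique objects rather than individual measurements. The sketch below is our own illustration of such a group-respecting split (the dictionary-based object identifiers are assumed, not taken from the SDSS schema):

```python
import random

def split_by_object(measurements, frac=0.5, seed=0):
    """Split repeat photometric measurements into training and
    validation sets such that all measurements of a single object
    end up on the same side of the split."""
    # Group measurement indices by unique object identifier.
    groups = {}
    for i, m in enumerate(measurements):
        groups.setdefault(m["object_id"], []).append(i)
    # Shuffle the unique objects, then assign whole groups.
    ids = sorted(groups)
    random.Random(seed).shuffle(ids)
    n_train = int(frac * len(ids))
    train_ids = set(ids[:n_train])
    train, valid = [], []
    for oid, idxs in groups.items():
        (train if oid in train_ids else valid).extend(idxs)
    return train, valid
```

Splitting individual measurements at random instead would place repeat images of the same galaxy on both sides of the split, producing the over-fitting risk described above.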
The $r$-magnitude and color ($g-r$ and $r-i$)
distributions for the spectroscopic samples and for
the photometric sample are shown in Figs. \ref{dist.sdss} and
\ref{dist.color.sdss}. While the magnitude and color distributions of
the combined spectroscopic sample are not
identical to those of the photometric sample, the
spectroscopic sample does span the
range of apparent magnitude and color of the photometric sample.
To test the impact of having a training set that is not fully representative
of the photometric sample, we
divided the spectroscopic sample into smaller, alternate training and
validation sets. For instance,
to test the effect of the training-set magnitude distribution on the
photo-z estimates, we created a training set with a flat $r$
magnitude distribution and another with an $r$ magnitude distribution similar to that
of the
photometric sample. Our tests indicated that the photo-z quality
is not strongly affected by the magnitude
distribution of the training set.
The changes in the photo-z performance metrics
(the rms scatter and the 68\% CL region, defined below in
\S \ref{res}) were smaller than $10\%$ when the training-set magnitude
distribution was varied between these different choices.
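Constructing a training set with a flat $r$-magnitude distribution amounts to drawing the same number of galaxies from each magnitude bin. A minimal sketch of such a resampling (the binning scheme and bin count are our own choices for illustration):

```python
import numpy as np

def flat_magnitude_sample(r_mags, n_bins=20, seed=0):
    """Resample indices so that the r-magnitude histogram is flat:
    draw the same number of galaxies from every occupied magnitude
    bin (at most the size of the smallest occupied bin)."""
    rng = np.random.default_rng(seed)
    edges = np.linspace(r_mags.min(), r_mags.max() + 1e-9, n_bins + 1)
    bins = np.digitize(r_mags, edges) - 1
    # Number of galaxies to draw per bin: the smallest occupied bin.
    per_bin = min(np.count_nonzero(bins == b)
                  for b in range(n_bins) if np.any(bins == b))
    idx = []
    for b in range(n_bins):
        members = np.flatnonzero(bins == b)
        if len(members):
            idx.extend(rng.choice(members, per_bin, replace=False))
    return np.array(idx)
```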
Since using the entire spectroscopic
sample for the training and validation sets produced marginally better results
than all other cases tested, we have adopted this as our final choice. In addition,
we tested the effect of the size of the training set on
our photo-z calculations. We found that the photo-z performance metrics
defined in \S \ref{res-photoz}
are degraded by no more than 10\% when the training set is artificially
reduced to 10\% of its original size. Even when the training set is
reduced to $\sim 1\%$ of its original size, the photo-z performance metrics are
degraded by less than $25\%$. This gives us confidence that
the spectroscopic training set size used here is sufficient for extracting
nearly optimal photo-z estimates.
\section{Methods}\label{met}
\subsection{ANN and NNP Photometric redshifts}
\label{subsec:meth_photoz}
The ANN method that we use to estimate galaxy photo-z's is
a general classification and interpolation tool used
successfully in an array of fields such as handwriting recognition,
automatic aircraft
piloting\footnote{{\tt http://www.nasa.gov/centers/dryden/news/NewsReleases/2003/03-49.html}},
detecting credit card
fraud\footnote{{\tt http://www.visa.ca/en/about/visabenefits/innovation.cfm}},
and extracting astronomically interesting sources in a telescope image
\citep{bertin96}.
We use a particular type of ANN called a Feed Forward Multilayer
Perceptron (FFMP) to map the relationship between photometric observables
and redshifts.
An FFMP network consists of several input nodes, one or more hidden layers,
and several output nodes, all interconnected by weighted connections
(see Fig.~\ref{NNsimple}).
We follow the notation of \cite{col04} and denote a network with
$N_i$ input nodes, $N_{h_j}$ nodes in hidden layer $j$, and $N_o$
output nodes as $N_i:N_{h_1}:N_{h_2}:...:N_{h_m}:N_o$.
For each input object, the input photometric
data (e.g., magnitudes, colors, concentrations, etc.)
are fed into the input
nodes of the FFMP, which fire signals according to the values of the
input data.
Each node in a hidden layer receives a total input which is a weighted
sum of the outputs from the nodes in the previous layer,
i.e., node $i$ in a hidden layer receives an input $I_i$ given by
\begin{equation}
I_i = \sum_j w_{ij} O_j,
\end{equation}
\noindent where $O_j$ is the output of the $j^{\rm th}$ node of the previous
layer and $w_{ij}$ is the weight of the connection between node $i$ in
the hidden layer and node $j$ in the previous layer.
Given the input $I_i$, the output $O_i$ of node $i$ is a function $f$ of the
input,
\begin{equation}
O_i=f(I_i), \label{act}
\end{equation}
\noindent where $f$ is the activation function.
Repeating this process, signals propagate up to the output nodes.
The activation function is typically a sigmoid function:
\begin{equation}
f(I_i) = \frac{1}{1 + e^{-I_i}}. \label{sigm}
\end{equation}
\noindent However, there are various alternatives, such as step
functions and hyperbolic tangents.
\cite{van04} show that the choice of activation functions makes
no significant difference in the result.
We use $X$:20:20:20:1 networks to estimate photo-z's, where $X$ is the
number of input photometric parameters per galaxy.
The corresponding number of degrees of freedom (the number of weights) is
roughly 1,000, depending on the actual value of $X$.
We use hyperbolic tangent functions as the activation function of the
hidden layers and a linear activation function for the output layer.
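A forward pass through such a network follows directly from Eqs. (1)--(3). The sketch below uses a small $2{:}2{:}1$ configuration with random weights for brevity, rather than the $X{:}20{:}20{:}20{:}1$ networks used here, with tanh hidden activations and a linear output layer as described above:

```python
import numpy as np

def ffmp_forward(x, weights):
    """Propagate inputs through a feed-forward multilayer perceptron:
    each node receives the weighted sum of the previous layer's
    outputs (Eq. 1) and emits f(I) (Eq. 2), with tanh hidden layers
    and a linear output layer."""
    out = np.asarray(x, dtype=float)
    for k, W in enumerate(weights):
        I = W @ out                      # weighted sum of previous outputs
        last = (k == len(weights) - 1)
        out = I if last else np.tanh(I)  # linear activation on output node
    return out

# A 2:2:1 network: two magnitudes in, one photo-z estimate out.
rng = np.random.default_rng(0)
weights = [rng.normal(size=(2, 2)), rng.normal(size=(1, 2))]
zphot = ffmp_forward([20.5, 19.8], weights)
```

With trained weights in place of the random ones, the single output node returns the photo-z estimate for the input photometric observables.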
Despite the occasional aura of mystery surrounding neural networks,
an FFMP is nothing more than a complex
mathematical function; in fact, one can always write down the analytic
expression corresponding to a neural network function.
Once the network configuration is specified, it can be trained to
output an estimate of redshift given the input photometric observables.
The training process involves
finding the set of weights $w_{ij}$ that
minimize a score function $E$, chosen here to be
\begin{equation}
E = \frac{1}{2}\sum_i(z_{\rm spec}^{i} - z_o^{i})^2 ~,
\label{eq:score}
\end{equation}
\noindent where $z_{\rm spec}$ is the measured spectroscopic redshift, $z_o$ is the
output redshift of the output node, and the sum is over all galaxies
in the training set. Note that the choice of score function is not unique,
and different choices will in general lead to different photo-z estimates.
The minimization of this score function can be done efficiently
because its derivatives with respect to the weights
are available analytically.
We use a Variable Metric method as described in \cite{pre92} for the minimization.
In machine learning, over-fitting refers
to the tendency of an algorithm with many adjustable parameters
to fit to the noise in the training set data.
In order to avoid over-fitting, we use the technique of
early stopping.
The spectroscopic sample is divided into two
independent subsets, the
{\it training} and {\it validation} sets,
and the formal minimizations are done using the training set.
After each minimization step, the network is evaluated on the
validation set, and
the set of weights that performs best on the validation set
is chosen as the final set. Another issue in machine learning is that
minimization procedures that start at different initial choices of weights
generally end at different local minima of the score
function.
To reduce the chance of ending in a less-than-optimal local minimum,
we minimize five networks starting at different positions in the space of weights.
Among these, we choose the network that gives the lowest photo-z scatter
(cf. Eq. \ref{eq:score})
in the validation set.
For more details of our implementation of the ANN and its performance on
mock catalogs and real data, see \cite{cun07}.
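The early-stopping and multiple-restart procedure described above can be summarized schematically as follows; the `train_step` and `score` callables stand in for the variable-metric minimizer and the validation-set evaluation of Eq. (4), neither of which is reproduced here:

```python
def train_with_early_stopping(init_weights, train_step, score,
                              n_restarts=5, n_steps=100):
    """Minimize the score function on the training set, but keep the
    weights that perform best on the validation set (early stopping);
    repeat from several random starting points and return the
    overall best network."""
    best_w, best_score = None, float("inf")
    for restart in range(n_restarts):
        w = init_weights(restart)         # new random starting point
        for _ in range(n_steps):
            w = train_step(w)             # one minimization step
            s = score(w)                  # evaluate on the validation set
            if s < best_score:            # remember the weights that do
                best_w, best_score = w, s # best on the validation set
    return best_w, best_score
```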
The ANN photo-z algorithm is very flexible in the sense that it is easy
to change the input parameters, the training set, and the network configurations.
We tried a variety of combinations of possible input photometric
observables to see their effects on photo-z quality.
We calculated photo-z's using galaxy magnitudes, colors, and the
concentration indices for some or all of the passbands.
The concentration index $c_i$ in passband $i$ is defined as the ratio of {\tt PetroR50}
and {\tt PetroR90}, which are the radii that encircle 50\% and 90\% of the
Petrosian flux, respectively. Early-type (E and S0) galaxies, with centrally
peaked surface brightness profiles, tend to have low values of the
concentration index, while late-type spirals, with quasi-exponential light
profiles, typically have higher values of $c$.
Previous studies \citep{morg58,shi01,yam05,par05} have shown
that the concentration parameter correlates well
with galaxy morphological type, and we used it to help break the
degeneracy between redshift and galaxy type.
We present the photo-z results for different combinations of input
parameters in \S\ref{res}.
For comparison, we also computed photo-z's for the
validation set using another empirical method, the Nearest Neighbor
Polynomial (NNP) technique \citep{cun07}.
In NNP, to derive a photo-z for a galaxy in the photometric sample,
we look for its training-set nearest neighbors in the space of
photometric observables (magnitudes, colors, etc.).
Suppose we have $N_D$ photometric data entries for each galaxy.
The data vector for the galaxy of interest in the photometric sample is
denoted by $\ D^{\mu}=(D^1,D^2,...,D^{N_D})$,
while the data vector for the $i^{\rm th}$ galaxy in the training set is
$\ D^{\mu}_i=(D^1_i,D^2_i,...,D^{N_D}_i)$.
The distance $d_i$ between the photometric object and the $i^{\rm th}$
training set galaxy is defined using a flat metric in data space,
\begin{equation}
d_i^2 = \sum_{\mu=1}^{N_D} (D^{\mu} - D_{i}^{\mu})^2~. \label{nndef}
\end{equation}
\noindent The nearest neighbors are the training-set objects
for which $d_i$ is minimum. Once the nearest neighbors for a given
galaxy are identified,
they are used to fit the coefficients of a local, low-order polynomial relation
between photometric observables and redshift.
The galaxy photo-z is then obtained by applying
the derived relation to the photometric object.
For the NNP method employed in this work, we take the
photometric data $D^{\mu}$ in Eq.~(\ref{nndef})
to be the four ``adjacent'' galaxy colors $u-g, \ g-r, \ r-i, \ i-z$; we found that
this choice produces results marginally better than using the galaxy
magnitudes.
We use the nearest $1000$ neighbors to fit a quadratic polynomial
relation between redshift and the photometric data, here chosen
to be the five magnitudes in each passband ($ugriz$) and their
corresponding concentration indices.
We note that \cite{wan07} used a similar technique to estimate
photo-z's for a small sample of SDSS {\it spectroscopic} galaxies.
They applied the Kernel Regression method of order 0, weighting
the training-set neighbors and computing photo-z's by using the
weighted average of the neighbors' redshifts.
Our NNP method is closer to a Kernel Regression of order 2, since
we perform quadratic fits; however, we do not apply variable weights to the neighbors
but treat them equally in the fit.
Whereas the ANN method provides
a single, nonlinear, global fit using the whole
training set and applies the derived photo-z relation to all photometric objects,
the NNP method yields a separate, linear (in parameters), local fit for
each photometric object using its neighbors. If
the galaxy magnitude-concentration-redshift hypersurface is a differentiable manifold,
i.e., if it can be locally approximated by a hyperplane even though it
is globally curved, then these two photo-z methods should be roughly
equivalent. Indeed, as we show in \S \ref{res}, their performance is very similar.
\subsection{Photometric redshift errors}\label{meter}
We estimated photo-z errors for objects in the photometric catalog using
the Nearest Neighbor Error (NNE) estimator \citep{oya07}.
The NNE method is training-set based, with
a neighbor selection similar to the NNP photo-z estimator; it
associates photo-z errors to photometric objects by considering the
errors for objects with similar multi-band magnitudes in the
validation set.
We use the validation set, because the photo-z's of the training set could be
over-fit, which would result in NNE underestimating the photo-z errors.
The procedure to calculate the redshift error for a galaxy in the photometric
sample is as follows.
We find the validation-set nearest neighbors to the galaxy of
interest. In contrast to NNP,
where the distance in Eq.~(\ref{nndef}) was defined in color space,
the NNE distance is defined in magnitude space, since photo-z errors
correlate strongly with magnitude.
Since the selected nearest neighbors are in the spectroscopic sample,
we know their photo-z errors, $\delta z = z_{\rm phot}-z_{\rm spec}$, where
$z_{\rm phot}$ is computed using the ANN or the NNP method.
We calculated the $68\%$ width of the $\delta z$ distribution
for the neighbors and assigned that number as the photo-z error
estimate for the photometric galaxy. Here we selected
the nearest $200$ neighbors of each object to estimate its photo-z error.
In studies of photo-z error estimators applied
to mock and real galaxy catalogs, we found that NNE
accurately predicts the photo-z error when the training set is
representative of the photometric sample \citep{oya07}.
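The NNE procedure reduces to ordering the validation set by distance in magnitude space and measuring the spread of the neighbors' $\delta z$ values. In the sketch below, the central-68\% half-width is our reading of "the 68\% width"; the precise convention may differ:

```python
import numpy as np

def nne_error(query_mags, valid_mags, valid_dz, n_neighbors=200):
    """Nearest Neighbor Error: estimate the photo-z error of a
    photometric object from the photo-z errors
    (dz = zphot - zspec) of its nearest neighbors in magnitude
    space, drawn from the validation set."""
    d2 = np.sum((valid_mags - query_mags) ** 2, axis=1)
    nn = np.argsort(d2)[:n_neighbors]
    dz = valid_dz[nn]
    # Half-width of the central 68% of the neighbors' dz distribution
    # (one reading of "the 68% width" used in the text).
    lo, hi = np.percentile(dz, [16.0, 84.0])
    return 0.5 * (hi - lo)
```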
\subsection{Estimating the Redshift Distribution}\label{estdist}
As we shall see in \S \ref{res-photoz}, estimates for
galaxy photo-z's suffer from statistical biases that in general
cannot be completely removed on an object-by-object basis. However, we
can seek an unbiased estimate of the true redshift {\it distribution}
for the photometric sample that is independent of individual
galaxy photo-z estimates. For some statistical applications,
the redshift distribution of the photometric sample, as opposed
to individual galaxy photo-z's, is all that is required.
One way to estimate this distribution is to
assign a weight to every galaxy in the spectroscopic sample
such that the {\it weighted} spectroscopic sample has the same
distributions of magnitudes and colors as the photometric sample.
The $z_{\rm spec}$ distribution of this weighted spectroscopic
sample provides an estimate of the true, underlying
redshift distribution of the photometric sample.
The weight $W^{\alpha}$ of the $\alpha^{\rm th}$ spectroscopic
galaxy is calculated by comparing
the local density around the galaxy in the spectroscopic sample with
the density of the corresponding region in the photometric sample.
The local density is evaluated by counting the number of
nearest neighbors using the distance measured in the space of photometric
observables, as in Eq.~(\ref{nndef}). We fix the number of spectroscopic
neighbors, $N_{\rm S}$, which determines the distance $d_{\rm max}$
to the $N_{\rm S}^{\rm th}$-nearest spectroscopic neighbor.
We then find the number of neighbors $N_{\rm P}$ in the photometric
sample within the same distance $d_{\rm max}$ of the spectroscopic
galaxy. Up to an arbitrary normalization factor, the weight is defined as
\begin{eqnarray}
W^{\alpha} \sim \frac{N_{\rm P} }{ N_{\rm S} } ~.
\label{eqn:weight}
\end{eqnarray}
\noindent For our estimates, we chose $N_{\rm S}=20$, which provides a good
match of the weighted spectroscopic distributions of magnitudes
and colors to those of the photometric sample. We note that if
additional cuts in magnitude or color are applied to the photometric
sample, then this procedure must be repeated for the newly selected photometric
sample.
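The weight of Eq. (7) can be computed with two neighbor searches per spectroscopic galaxy; a brute-force sketch of this procedure (a real application would replace the pairwise distance loops with a tree-based search):

```python
import numpy as np

def spectro_weights(spec_obs, phot_obs, n_s=20):
    """For each spectroscopic galaxy, find the distance d_max to its
    n_s-th nearest spectroscopic neighbor in observable space, count
    the photometric neighbors N_P within d_max, and set the weight
    W ~ N_P / n_s (Eq. 7), normalized to sum to unity."""
    weights = np.empty(len(spec_obs))
    for a, x in enumerate(spec_obs):
        d_spec = np.sqrt(np.sum((spec_obs - x) ** 2, axis=1))
        d_max = np.sort(d_spec)[n_s]      # n_s-th neighbor (self excluded)
        d_phot = np.sqrt(np.sum((phot_obs - x) ** 2, axis=1))
        n_p = np.count_nonzero(d_phot <= d_max)
        weights[a] = n_p / n_s
    return weights / weights.sum()        # arbitrary normalization
```

The histogram of $z_{\rm spec}$ weighted by these values then estimates the redshift distribution of the photometric sample.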
More details and tests of this method and comparisons with
other methods for estimating the
underlying redshift distribution (e.g., deconvolving the error distribution
from the $z_{\rm phot}$ \ distribution) will be presented
separately \citep{lim07}.
\begin{figure*}
\begin{center}
\begin{minipage}[t]{46mm}
\begin{center}
\resizebox{46mm}{!}{\includegraphics[angle=0]{f4a.eps}}
\end{center}
\end{minipage}
\begin{minipage}[t]{46mm}
\begin{center}
\resizebox{46mm}{!}{\includegraphics[angle=0]{f4b.eps}}
\end{center}
\end{minipage}
\begin{minipage}[t]{46mm}
\begin{center}
\resizebox{46mm}{!}{\includegraphics[angle=0]{f4c.eps}}
\end{center}
\end{minipage}
\begin{minipage}[t]{46mm}
\begin{center}
\resizebox{46mm}{!}{\includegraphics[angle=0]{f4d.eps}}
\end{center}
\end{minipage}
\begin{minipage}[t]{46mm}
\begin{center}
\resizebox{46mm}{!}{\includegraphics[angle=0]{f4e.eps}}
\end{center}
\end{minipage}
\begin{minipage}[t]{46mm}
\begin{center}
\resizebox{46mm}{!}{\includegraphics[angle=0]{f4f.eps}}
\end{center}
\end{minipage}
\begin{minipage}[t]{46mm}
\begin{center}
\resizebox{46mm}{!}{\includegraphics[angle=0]{f4g.eps}}
\end{center}
\end{minipage}
\begin{minipage}[t]{46mm}
\begin{center}
\resizebox{46mm}{!}{\includegraphics[angle=0]{f4h.eps}}
\end{center}
\end{minipage}
\begin{minipage}[t]{46mm}
\begin{center}
\resizebox{46mm}{!}{\includegraphics[angle=0]{f4i.eps}}
\end{center}
\end{minipage}
\end{center}
\caption{ $z_{\rm phot}$ versus $z_{\rm spec}$ for the validation set for
different ranges of $r$ magnitude and for different photo-z techniques.
{\it Left column:} objects with $r<20$; {\it middle column:} objects with $r>20$;
{\it right column:} all objects.
{\it Top row:} ANN case D1, where the input photometric data comprise
the 5 magnitudes ($ugriz$) and the 5 concentration parameters, and the training
is split into 5 bins of $r$ magnitude.
{\it Middle row:} ANN case CC2, where the input data are
the 4 colors $u-g$, $g-r$, $r-i$, $i-z$, and the 3 concentration parameters $c_g$, $c_r$, $c_i$.
{\it Bottom row:} results for the NNP method, where the input data are
the 5 magnitudes and 5 concentration parameters.
In all cases, the photo-z methods
used a training set with $\sim 320,000$ objects, and the derived solutions were
applied to an independent validation set with $\sim 309,000$ objects and
$r < 22$, reflecting the magnitude limit of the photometric sample.
The solid line in each panel indicates $z_{\rm phot}=z_{\rm spec}$; the
dashed and dotted lines show the 68\% and 95\% confidence regions as a function
of $z_{\rm spec}$.
The points display results for a random $10\%$ subset of the validation set in
each magnitude range.
}
\label{zpzs_valid_all}
\end{figure*}
\section{Results} \label{res}
\subsection{Photometric redshifts}
\label{res-photoz}
The photo-z precision (variance) and accuracy (bias) are
limited by a number of factors. There are
intrinsic degeneracies in
magnitude-redshift space: low-luminosity, intrinsically red galaxies at low
redshift can have apparent magnitudes similar to those of high-luminosity,
intrinsically blue galaxies at high redshift.
This natural degeneracy is amplified by
photometric errors, since magnitude uncertainties
propagate to photo-z errors.
In addition to these observational limitations, which are
determined by the photometric precision and the number of passbands of a survey,
the photo-z estimator itself may have inherent limitations. For example,
for training set methods, the size and representativeness of the training
set are important factors, as are the number of parameters or weights in
the fitting functions.
To test the quality of the photo-z estimates,
we use four photo-z performance metrics.
The first two metrics are the photo-z bias, $z_{\rm bias}$, and the photo-z {\it rms}
scatter, $\sigma$, both averaged over all $N$ objects in the validation
set, defined by
\begin{eqnarray}
z_{\rm bias}&=&\frac{1}{N}\sum_{i=1}^{N}\left( z_{\rm phot}^{i}-z_{\rm spec}^{i}\right) ~, \\
\sigma^2&=&\frac{1}{N}\sum_{i=1}^{N}\left(z_{\rm phot}^{i}-z_{\rm spec}^{i}\right)^2 ~.
\end{eqnarray}
\noindent The third performance metric, denoted by $\sigma_{68}$, is
the range containing $68\%$ of the validation set objects in the distribution of
$\delta z = z_{\rm phot}-z_{\rm spec}$. This metric is useful because
the probability distribution function
$P(\delta z)$ is in general non-Gaussian and asymmetric (for a Gaussian
distribution, $\sigma$ and $\sigma_{68}$ coincide). Explicitly, $\sigma_{68}$ is
defined by the value of $|z_{\rm phot} - z_{\rm spec}|$ such that 68\% of the objects have $|z_{\rm phot} - z_{\rm spec}| < \sigma_{68}$.
We also use the $95\%$ region $\sigma_{95}$, defined similarly.
In addition to these global metrics, we also define local versions of them
in bins of redshift or magnitude.
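Concretely, these four global metrics reduce to a few array operations on the validation set. The sketch below is illustrative (NumPy; the function and variable names are ours, not from the catalog code) and follows the definitions above:

```python
import numpy as np

def photoz_metrics(z_phot, z_spec):
    """Global photo-z performance metrics as defined in the text."""
    dz = np.asarray(z_phot) - np.asarray(z_spec)
    z_bias = dz.mean()                           # mean of z_phot - z_spec
    sigma = np.sqrt((dz ** 2).mean())            # rms scatter about zero
    sigma_68 = np.percentile(np.abs(dz), 68.0)   # 68% of |dz| lie below this
    sigma_95 = np.percentile(np.abs(dz), 95.0)   # 95% region, defined similarly
    return z_bias, sigma, sigma_68, sigma_95
```

The local versions in bins of redshift or magnitude are obtained by applying the same function to the subset of objects in each bin.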
\begin{deluxetable}{llcc}
\tablewidth{0pt}
\tablecaption{Summary of ANN cases}
\startdata
\hline
\hline
\multicolumn{1}{c}{Case} & \multicolumn{1}{c}{Inputs/Description} & \multicolumn{1}{c}{$\sigma$} & \multicolumn{1}{c}{$\sigma_{68}$}\\
\hline
O1& $ugriz$ &0.0525 & 0.0229\\
C1& $ugriz$ + $c_uc_gc_rc_ic_z$ &0.0519 & 0.0224\\
D1& $ugriz$ + $c_uc_gc_rc_ic_z$. Split training&0.0519 & 0.0209\\
CC1&$u-g$, $g-r$, $r-i$, $i-z$ &0.0668 & 0.0272\\
CC2&$u-g$, $g-r$, $r-i$, $i-z$ + $c_gc_rc_i$ &0.0593 & 0.0245\\
\enddata
\label{table:method}
\tablecomments{Photo-z performance metrics $\sigma$ and $\sigma_{68}$
for the validation set using different input parameters
(magnitudes, colors, and concentration indices) and training procedures.}
\end{deluxetable}
To search for an optimal photo-z estimator, we computed
photo-z's using the ANN method with
different combinations of input photometric observables. Five of
these combinations are listed in Table \ref{table:method}.
In the first case, dubbed O1, the training and photo-z estimation
are carried out using only the five magnitudes $ugriz$. In case C1,
we use the five magnitudes and the five concentration indices
$c_uc_gc_rc_ic_z$ as the input parameters. In case CC1, we
use only the four colors
$u-g$, $g-r$, $r-i$, and $i-z$. In case CC2, we combine the
four colors with
the concentration indices $c_gc_rc_i$ in the $gri$ filters.
Finally, in case D1, we use the $ugriz$ magnitudes
and the $c_uc_gc_rc_ic_z$ concentration indices, but we split the
training set and the photometric sample into 5 bins of $r$ magnitude and
perform separate ANN fits in each bin.
In all five cases, we use an ANN with three hidden layers and tune
the number of hidden nodes to keep the total
number of degrees of freedom of the network roughly the same for all cases.
Table~\ref{table:method} provides a summary of the performance results of the
different ANN cases.
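The five input configurations can be encoded compactly as a lookup table; the sketch below is purely illustrative (the key and field names are ours) and only records which observables feed each case:

```python
# Illustrative encoding of the five ANN cases in Table 2.
MAGS = ["u", "g", "r", "i", "z"]
CONCS = [f"c_{m}" for m in MAGS]        # concentration indices per band
COLORS = ["u-g", "g-r", "r-i", "i-z"]

CASES = {
    "O1":  {"inputs": MAGS},                               # magnitudes only
    "C1":  {"inputs": MAGS + CONCS},                       # + concentrations
    "D1":  {"inputs": MAGS + CONCS, "split_r_bins": 5},    # separate fit per r bin
    "CC1": {"inputs": COLORS},                             # colors only
    "CC2": {"inputs": COLORS + ["c_g", "c_r", "c_i"]},     # colors + 3 concentrations
}
```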
We find that using concentration indices in addition to magnitudes
(C1 vs. O1) helps break some degeneracies and reduces the
photo-z scatter by a few percent.
Using only colors (CC1) degrades the photo-z performance by as much as 20\%,
mostly because the degeneracy between intrinsically red, nearby galaxies
and intrinsically blue, distant galaxies (with red observed colors)
cannot be broken.
Adding concentration indices to color-only training (CC2)
helps break such a degeneracy, because the concentration index correlates
with galaxy type and hence intrinsic color. Of the five,
case CC2 also yields the most realistic photometric redshift
distribution for the photometric sample (see \S \ref{subsec:red_dist}).
Finally, splitting the training set and photometric sample into
magnitude bins (D1) produces
results with the best performance metrics ($\sigma$ and $\sigma_{68}$) of
all the ANN cases we have tested.
We choose D1 and CC2 as the best ANN cases and describe their
results in more detail below; their outputs for the photometric sample
are included in the public DR6 database.
In Fig.~\ref{zpzs_valid_all}, we plot photometric redshift, $z_{\rm phot}$,
for all objects in the validation set vs. true
spectroscopic redshift, $z_{\rm spec}$, for the different photo-z methods
and cases and in different ranges of $r$ magnitude.
The top row shows results for ANN case D1, the middle row shows
the performance of ANN case CC2, and the bottom row shows results for
the NNP method using magnitudes and concentration indices as the input
parameters. In each panel,
the values of the corresponding global
photo-z performance metrics $\sigma$ and $\sigma_{68}$ are shown.
The redshift bias $z_{\rm bias}$ is typically much smaller than $\sigma$ or
$\sigma_{68}$, since the photo-z methods are designed to minimize it (see
Fig. \ref{plot:statvsm}). In each panel of Fig. \ref{zpzs_valid_all},
the solid line traces
$z_{\rm phot}=z_{\rm spec}$, i.e., the line
for a perfect photo-z estimator.
The dashed and dotted lines show the corresponding $68\%$ and $95\%$ regions,
defined as above but in $z_{\rm spec}$ bins. Although
each photo-z method probes the
hypersurface defined by the photometric observables and redshift in a different
way,
they produce very similar results, suggesting that our results are
limited not by the photo-z technique employed but by the
intrinsic degeneracies in magnitude-concentration-redshift space and
by the photometric errors.
\begin{figure}
\resizebox{85mm}{!}{\includegraphics[angle=0]{f5.eps}}
\caption{The performance metrics
$z_{\rm bias}$, $\sigma$, and $\sigma_{68}$ for the ANN D1 and CC2
validation sets are shown
as a function of $r$ magnitude.
CC2 performs relatively poorly for bright objects ($r < 16$), where the color-redshift
relation is contaminated by faint objects with similar colors. In D1,
this problem is alleviated by the effective magnitude prior imposed by
the training set. At faint magnitudes, the performance degrades as the photometric
errors increase.
}
\label{plot:statvsm}
\end{figure}
\begin{figure*}
\begin{center}
\begin{minipage}[t]{81mm}
\begin{center}
\resizebox{81mm}{!}{\includegraphics[angle=0]{f6a.c.eps}}
\end{center}
\end{minipage}
\begin{minipage}[t]{81mm}
\begin{center}
\resizebox{81mm}{!}{\includegraphics[angle=0]{f6b.c.eps}}
\end{center}
\end{minipage}
\end{center}
\caption{Performance metrics
$z_{\rm bias}$, $\sigma$, and $\sigma_{68}$ for the ANN D1 and CC2 validation sets
are shown as a function of $z_{\rm spec}$ for $r<20$ and $r>20$.
The increased scatter for objects with $z > 0.6$ is due to
the 4000 \AA \ break shifting out of the $r$ passband at
around $z = 0.7$; beyond that redshift, the estimator effectively relies
on only two passbands ($i$ and $z$) to determine the photo-z's. Note that
faint objects ($r > 20$) have worse scatter at low redshifts for
both cases. This is likely due to the fact that the faint, low-redshift
objects in the validation set are predominantly blue
dwarf or irregular galaxies that do not have
strong 4000 \AA \ breaks; in this case, the photo-z estimator must rely on less
pronounced spectral features, resulting in larger photo-z scatter.
}
\label{plot:statvsz}
\end{figure*}
\begin{figure*}
\begin{center}
\begin{minipage}[t]{81mm}
\begin{center}
\resizebox{81mm}{!}{\includegraphics[angle=0]{f7a.c.eps}}
\end{center}
\end{minipage}
\begin{minipage}[t]{81mm}
\begin{center}
\resizebox{81mm}{!}{\includegraphics[angle=0]{f7b.c.eps}}
\end{center}
\end{minipage}
\end{center}
\caption{
$g-r$ color vs. spectroscopic redshift for galaxies in the
validation set: {\it left panel:} galaxies with $r<20$; {\it right panel:}
galaxies with $r>20$. The solid curves show expected color-redshift relations of
galaxies with different SED types, calculated using the \cite{col80}
spectral templates. The different
colors (shades of grey)
indicate galaxies from the different spectroscopic surveys contributing
to the validation set. The 2SLAQ objects, denoted by red triangles, were
selected to be mostly early-type galaxies. They are
responsible for the minimum in $\sigma$ vs. $z_{spec}$
for the $r>20$ subsample in Fig. \ref{plot:statvsz}.
}
\label{plot:grvsz}
\end{figure*}
In Figs. \ref{plot:statvsm} and \ref{plot:statvsz}, we show the performance
metrics
$z_{\rm bias}$, $\sigma$, and $\sigma_{68}$ as a function of $r$ magnitude
and $z_{\rm spec}$ for the validation set for the two preferred ANN cases.
We see that the photo-z precision degrades considerably
for objects with $r > 20$.
This increased scatter is expected, since the relative photometric errors
increase as the nominal detection limit of the SDSS photometry is approached
(see Table \ref{propphot}). While the bias for CC2 increases at $r<17$,
we note that the fraction of objects in the photometric sample which are
that bright is very small.
As a function of redshift, $\sigma$ and $\sigma_{68}$ increase dramatically
beyond $z \sim 0.6$
for the validation set.
For the $r < 20$ part of the sample, the number of spectroscopic objects with
$z > 0.6$ is simply too small
to characterize the redshift-magnitude surface, as shown in
the left panel of Fig. \ref{plot:grvsz}. For the
faint objects ($r > 20$), the scatter is low for $z$ between 0.4 and
0.6 and increases outside of that range.
It is important to note that the photo-z performance metrics were
calculated independently of spectral type.
Since the neural network and the training set were not optimized
for any specific galaxy population (e.g., galaxies in clusters), it is possible
that certain galaxy types have photo-z's with worse (or better!)
biases and dispersion.
In Figure~\ref{plot:grvsz}, we plot $g-r$ color versus spectroscopic
redshift for the validation set for both bright ($r<20$) and faint ($r>20$) galaxies.
The 2SLAQ and DEEP2 galaxies are highlighted by different
colors (shades of grey),
and the expected color-redshift relations for the four spectral templates from
\cite{col80}
(from early to late types) are indicated by the solid lines.
We see that for the faint sample, in the range $0.4 < z < 0.6$, the galaxies
come mostly from the 2SLAQ survey, which used
specific color cuts to select early-type galaxies at
$z\sim0.5$. Because early-type galaxies have a well-defined
4000 \AA \ break feature, their photo-z's are well determined and
their photo-z scatter is low.
Outside of the range $0.4 < z < 0.6$, the validation set at faint magnitudes
is dominated by bluer galaxies
that do not have strong, broad spectral features, resulting in the
larger photo-z scatter seen in Fig. \ref{plot:statvsz}.
Fig.~\ref{plot:statvsz} shows that the common assumption that the
photo-z scatter
scales as $(1+z)$ is not consistent with our estimates for the SDSS sample.
The functional form of the scatter versus redshift depends
strongly on the underlying galaxy type distribution.
\subsection{Redshift Distributions}
\label{subsec:red_dist}
So far, we have considered the scatter and bias of photo-z estimates.
As discussed in \S \ref{estdist}, it is also of interest to consider
the predicted photo-z distribution as a whole. Different photo-z estimators
may achieve similar values for the metrics $z_{\rm bias}$, $\sigma$, and $\sigma_{68}$,
but predict different forms for the photo-z distribution of the photometric
sample. As we shall see, this is the case with the two ANN cases D1 and CC2.
We therefore define two additional performance metrics to quantify the
quality of the predicted photo-z distribution.
The first metric, $\sigma_{\rm dist}$, measures the {\it rms} difference between
the binned $z_{\rm phot}$ and $z_{\rm spec}$ distributions of the validation set,
\begin{eqnarray}
\sigma^2_{\rm dist}&=&\frac{1}{N_{\rm bin}}\sum_{i=1}^{N_{\rm bin}}\left(P_{\rm phot}^{i}-P_{\rm spec}^{i}\right)^2,
\end{eqnarray}
\noindent where $P_{\rm phot}^{i}$ is the height of the
$i^{\rm th}$ redshift bin of the $z_{\rm phot}$ distribution,
$P_{\rm spec}^{i}$ is the height of the same redshift
bin of the $z_{\rm spec}$ distribution, and $N_{\rm bin}$ is the total number
of redshift bins used.
Here we use $N_{\rm bin}=120$ equally spaced redshift bins running
from $z=0$ to $z=1.2$.
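This metric can be computed directly from the two redshift arrays; the sketch below assumes the bin heights $P^i$ are normalized densities (the text does not spell out the normalization convention, so that choice is ours):

```python
import numpy as np

def sigma_dist(z_phot, z_spec, n_bin=120, z_max=1.2):
    """rms difference between the binned z_phot and z_spec distributions."""
    edges = np.linspace(0.0, z_max, n_bin + 1)
    p_phot, _ = np.histogram(z_phot, bins=edges, density=True)
    p_spec, _ = np.histogram(z_spec, bins=edges, density=True)
    return np.sqrt(np.mean((p_phot - p_spec) ** 2))
```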
The second redshift distribution
metric we employ is the KS statistic $D$, the
maximum value of the absolute difference between the two ($z_{\rm phot}$ and
$z_{\rm spec}$) cumulative
redshift distribution
functions. An advantage of the KS statistic is that it does not require
binning the data in redshift. However, our
use of the KS statistic to quantify the difference between the $z_{\rm phot}$
and $z_{\rm spec}$ distributions of the validation set likely does
not adhere to formal statistical practice,
since it turns out that the probability for the KS statistic for both cases we consider
is very close to zero \citep{pre92}.
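The statistic $D$ is the standard two-sample KS statistic and needs no binning; a minimal NumPy sketch (equivalent to the statistic returned by standard two-sample KS routines) is:

```python
import numpy as np

def ks_statistic(z_phot, z_spec):
    """Max absolute difference between the two empirical CDFs (no binning)."""
    zp = np.sort(np.asarray(z_phot))
    zs = np.sort(np.asarray(z_spec))
    # The maximum CDF difference occurs at a data point of one of the samples.
    grid = np.concatenate([zp, zs])
    cdf_p = np.searchsorted(zp, grid, side="right") / zp.size
    cdf_s = np.searchsorted(zs, grid, side="right") / zs.size
    return np.max(np.abs(cdf_p - cdf_s))
```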
Table \ref{table_sigdist_ks} shows the values of
$\sigma_{\rm dist}$ and of the KS statistic $D$ for the validation set for the
D1 and CC2 ANN photo-z's, for different ranges of $r$ magnitude.
Although the CC2 photo-z distribution is
a worse overall match to the $z_{\rm spec}$ distribution for the
validation set, it works better than D1 for $r>18$.
Since the photometric sample
is dominated by objects at $r>20$ (see Fig. \ref{dist.sdss}),
these results suggest that CC2 should do a better job in
estimating the redshift distribution of the photometric sample,
even though D1 performs better by the standards of $z_{\rm bias}$ and
$\sigma$.
\begin{deluxetable}{cc|cc|ccc}
\tablewidth{0pt}
\tablecaption{$\sigma_{\rm dist}$ and KS statistic for Redshift distribution}
\startdata
\hline
\hline
\multicolumn{1}{c}{} & \multicolumn{1}{c|}{} & \multicolumn{2}{c|}{$\sigma_{\rm dist}$} & \multicolumn{2}{c}{KS statistic}\\
\hline
\multicolumn{1}{c}{} & \multicolumn{1}{c|}{$r$-mag bin} & \multicolumn{1}{c}{CC2} & \multicolumn{1}{c|}{D1} & \multicolumn{1}{c}{CC2} & \multicolumn{1}{c}{D1}\\
\hline
&$r < 18$ & 0.0392 & 0.0330 & 0.0632 & 0.0391& \\
&$18<r<19$& 0.0390 & 0.0430 & 0.0520 & 0.0533& \\
&$19<r<20$& 0.0391 & 0.0399 & 0.0366 & 0.0413&\\
&$20<r<21$& 0.0403 & 0.0471 & 0.0363 & 0.0665&\\
&$21<r<22$& 0.0652 & 0.0702 & 0.1051 & 0.1306&\\
\hline
&All & 0.0383 & 0.0338 & 0.0485 & 0.0307&
\enddata
\label{table_sigdist_ks}
\tablecomments{$\sigma_{\rm dist}$ and KS statistic results for CC2 and D1 ANN photo-z's for the validation set.}
\end{deluxetable}
The redshift distributions for the validation set are shown in
Fig.~\ref{dndz.valid} for the same bins of $r$ magnitude as in
Table \ref{table_sigdist_ks}.
The D1 and CC2 $z_{\rm phot}$ \ distributions are shown
in color,
and the solid curves correspond to the $z_{\rm spec}$ \ distributions.
The similarities between the $z_{\rm phot}$ \ and $z_{\rm spec}$ \ distributions
are consistent with the results of
Table \ref{table_sigdist_ks}.
In \S \ref{estdist}, we noted that the $z_{\rm spec}$ \ distribution of the
spectroscopic sample, weighted to reproduce the color and magnitude
distributions of the photometric sample, provides an estimate of the
unknown redshift distribution of the photometric sample. The $z_{\rm phot}$ \
distribution for the photometric sample, computed using ANN D1 or CC2, provides
another estimate of the true redshift distribution for the photometric
sample, but one that we know suffers from bias (e.g., Fig. \ref{plot:statvsm}).
While we have not shown that the weighted $z_{\rm spec}$ \ estimate of the
redshift distribution is unbiased, it has the advantage that it makes
direct use of the statistical properties of the photometric sample, and
we believe it is our best estimate of the photometric sample redshift distribution.
Our final test of photo-z performance therefore compares the $z_{\rm phot}$
\ distribution for the photometric sample for the two ANN cases
with the weighted $z_{\rm spec}$ \ distribution of the spectroscopic sample.
Agreement between the weighted $z_{\rm spec}$ \ distribution and either one of the
$z_{\rm phot}$ \ distributions does not guarantee that they are correct, but
it at least provides a useful consistency check.
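One simple way to realize such a weighting, shown here only as a stand-in for the scheme of \S \ref{estdist} (which this paper defines elsewhere), is to weight each spectroscopic object by the ratio of photometric to spectroscopic number counts in its cell of observable (e.g., color-magnitude) space:

```python
import numpy as np

def histogram_weights(spec_obs, phot_obs, bins=10):
    """Weight spectroscopic objects by the photometric/spectroscopic
    density ratio in cells of observable space (histogram-ratio sketch)."""
    spec_obs = np.asarray(spec_obs)
    phot_obs = np.asarray(phot_obs)
    edges = [np.linspace(lo, hi, bins + 1)
             for lo, hi in zip(phot_obs.min(0), phot_obs.max(0))]
    h_spec, _ = np.histogramdd(spec_obs, bins=edges)
    h_phot, _ = np.histogramdd(phot_obs, bins=edges)
    # Locate each spectroscopic object's cell and take the count ratio there.
    idx = tuple(np.clip(np.searchsorted(e, spec_obs[:, i], side="right") - 1,
                        0, bins - 1) for i, e in enumerate(edges))
    n_spec, n_phot = h_spec[idx], h_phot[idx]
    w = np.where(n_spec > 0, n_phot / np.maximum(n_spec, 1), 0.0)
    return w / w.sum() * len(w)  # normalize to mean weight of unity
```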
In Fig.~\ref{dndz.photo} we show the estimated redshift distributions of a
random subsample containing $\sim 1\%$ of the objects in the DR6
photometric sample for both the CC2 and D1 ANN cases.
The
colored regions
correspond to the $z_{\rm phot}$ \ distributions, and the solid lines indicate
the weighted $z_{\rm spec}$ \ distribution of the spectroscopic sample.
The $z_{\rm phot}$ \ distributions for CC2 are closer matches to
the weighted $z_{\rm spec}$ \ distributions for $r>18$, and they do
not show the peculiar features that the D1 photo-z distributions
display, particularly at faint magnitudes. By the criterion of
producing a more realistic redshift distribution for the photometric
sample, the CC2 ANN estimator is preferred.
\subsection{Photo-z Errors}
In order to test the quality of our photo-z error estimates
calculated with the NNE method, we introduce the concept of
empirical error. For a set of objects (within the validation set) with similar
NNE error,
$\sigma_{z}^{\rm NNE}$, the empirical error is defined as the $68\%$
width of the $|z_{\rm phot}-z_{\rm spec}|$ distribution for the set.
If the NNE estimator works properly,
objects with similar NNE error should have similar underlying
error distributions, i.e.,
the NNE error should correlate
well with the empirical error.
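The binning procedure just described can be sketched as follows (names illustrative; as in the figures below, each bin contains 100 objects of similar NNE error):

```python
import numpy as np

def empirical_vs_nne(z_phot, z_spec, sigma_nne, bin_size=100):
    """For each bin of `bin_size` objects with similar NNE error, return the
    mean NNE error and the empirical error: the 68% width of |dz| in the bin."""
    order = np.argsort(sigma_nne)
    abs_dz = np.abs(np.asarray(z_phot) - np.asarray(z_spec))[order]
    nne = np.asarray(sigma_nne)[order]
    nne_mean, emp = [], []
    for start in range(0, len(nne) - bin_size + 1, bin_size):
        sl = slice(start, start + bin_size)
        nne_mean.append(nne[sl].mean())
        emp.append(np.percentile(abs_dz[sl], 68.0))
    return np.array(nne_mean), np.array(emp)
```

A well-behaved error estimator yields points close to the diagonal when the two returned arrays are plotted against each other.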
Fig.~\ref{erer} shows the performance of the photo-z error estimator
by plotting the computed NNE error $\sigma_{z}^{\rm NNE}$ as a function
of the corresponding empirical error for the validation set.
Results are shown for the D1 and CC2 ANN photo-z's.
The empirical error was calculated for bins containing $100$ objects
with similar $\sigma_z^{\rm NNE}$.
As expected, faint objects ($r > 20$) have larger errors than bright
objects ($r < 20$).
The NNE estimated error correlates well with the
empirical error even for the faint objects, indicating that the
error estimator works properly for all magnitudes.
The bulk of the bright objects have $\sigma_z^{\rm NNE}$ in the range
$0.01-0.04$, consistent with the overall {\it rms} photo-z scatter of
$\sigma \sim 0.03$ indicated in Fig \ref{zpzs_valid_all}.
Likewise, faint objects have $\sigma_z^{\rm NNE}$ in the range $0.02-0.3$,
while $\sigma \sim 0.13$ for those objects.
The NNE error is therefore a robust indicator of an object's
photo-z quality. In particular, we have carried out tests in which we
cut objects with large NNE error from the sample and found that the
remaining sample has smaller photo-z scatter and fewer catastrophic
outliers. For applications in which
photo-z precision is more important than
completeness of the photometric sample, this can be a
useful procedure.
\begin{figure*}
\begin{center}
\begin{minipage}[t]{81mm}
\begin{center}
\resizebox{81mm}{!}{\includegraphics[angle=0]{f8a.c.eps}}
\end{center}
\end{minipage}
\begin{minipage}[t]{81mm}
\begin{center}
\resizebox{81mm}{!}{\includegraphics[angle=0]{f8b.c.eps}}
\end{center}
\end{minipage}
\end{center}
\caption{Redshift distributions for the galaxies in the
validation set for different $r$ magnitude bins. {\it Left panels:} ANN D1;
{\it right panels:} ANN CC2.
The
colored regions indicate the ANN
photo-z distributions, while the lines are
the spectroscopic redshift distributions. By eye,
both ANN cases recover the true redshift distributions of the
validation set well, except
in the faintest magnitude bin, where the photometric errors become large.
}\label{dndz.valid}
\end{figure*}
\begin{figure*}
\begin{center}
\begin{minipage}[t]{81mm}
\begin{center}
\resizebox{81mm}{!}{\includegraphics[angle=0]{f9a.c.eps}}
\end{center}
\end{minipage}
\begin{minipage}[t]{81mm}
\begin{center}
\resizebox{81mm}{!}{\includegraphics[angle=0]{f9b.c.eps}}
\end{center}
\end{minipage}
\end{center}
\caption{Estimated redshift distributions for a random subsample of
1\% of the galaxies in the
DR6 photometric sample in different $r$-magnitude bins. {\it Left panels:}
ANN D1; {\it right panels:} ANN CC2. Colors show the $z_{\rm phot}$ \ distributions.
The lines show the estimated redshift distributions from the spectroscopic
sample weighted to match the magnitude and color distributions of the
photometric sample.
Even though the two ANN cases correctly recover the
validation set redshift distribution (Fig. \ref{dndz.valid}),
their photo-z
distributions for the photometric sample disagree. The photo-z distribution
for D1 shows a peak at
$z\sim0.4$ that results mainly from the $20 < r < 21$ bin.
The CC2 distribution does not show such strong features, and in general it matches
the weighted $z_{\rm spec}$ \ distribution better.
}\label{dndz.photo}
\end{figure*}
\begin{figure*}
\begin{center}
\begin{minipage}[t]{81mm}
\begin{center}
\resizebox{81mm}{!}{\includegraphics[angle=0]{f10a.c.eps}}
\end{center}
\end{minipage}
\begin{minipage}[t]{81mm}
\begin{center}
\resizebox{81mm}{!}{\includegraphics[angle=0]{f10b.c.eps}}
\end{center}
\end{minipage}
\end{center}
\caption{The estimated error from the NNE method, $\sigma_z^{\rm NNE}$, is
shown against the empirical error for objects in the validation set.
{\it Left panel:} D1 ANN; {\it right panel:} CC2 ANN.
Each point corresponds to a bin
of $100$ objects with similar $\sigma_z^{\rm NNE}$.
The black squares show results for bright objects ($r < 20$),
the red triangles for faint objects ($r > 20$). As expected, faint
objects have larger errors, but
the NNE error correlates well with the empirical error over the full magnitude range.
}\label{erer}
\end{figure*}
In Fig.~\ref{gausser}, we plot the normalized error distribution,
i.e., the distribution
of $(z_{\rm phot}-z_{\rm spec})/\sigma_{z}^{\rm NNE}$, for objects
in the spectroscopic sample, using the D1 ANN estimator.
The solid black lines are the data, and the dotted red lines
show Gaussian distributions with zero mean and unit variance.
The upper panels show results for the galaxies in the SDSS Main
and LRG spectroscopic samples. The lower panels show results for
all validation-set galaxies, divided into bright
($r < 20$) and faint ($r > 20$) samples.
These plots indicate that, averaged over the bulk of the spectroscopic
sample, the photo-z estimates are nearly unbiased, the NNE error
provides a good estimate of the true error, and the NNE error can be
approximately interpreted as a Gaussian error in this average sense.
Note that this does {\it not} imply that the photo-z error distributions in
bins of magnitude or redshift are unbiased Gaussians: Figs. \ref{plot:statvsm}
and \ref{plot:statvsz} show that they are not.
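A quick numerical check of this average calibration is to verify that the pull distribution has near-zero mean and roughly unit spread (a sketch under those Gaussian assumptions; names are ours):

```python
import numpy as np

def normalized_errors(z_phot, z_spec, sigma_nne):
    """Mean and spread of (z_phot - z_spec) / sigma_NNE; both should be
    near 0 and 1 if the NNE errors are well calibrated on average."""
    pull = (np.asarray(z_phot) - np.asarray(z_spec)) / np.asarray(sigma_nne)
    return pull.mean(), pull.std()
```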
\section{Query Flags and Caveats} \label{rec}
When querying the SDSS data server to produce the photometric sample for
which we estimated photo-z's, we set the most relevant flags needed to
produce a clean galaxy sample.
However, some applications may require more stringent selection of objects.
We advise users of the catalog to read the documentation about producing a clean
galaxy sample on the SDSS
website\footnote{ {\tt http://cas.sdss.org/dr6/en/help/docs/algorithm.asp} }.
In particular, users should consider requiring the BINNED1 (object detected at $> 5\sigma$) flag and removing
objects with the NODEBLEND (object is a blend but deblending was not possible) flag. The various PHOTO flags
are described in more detail on the above
website as well as in Appendix \ref{query}.
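For users filtering the catalog programmatically, the recommended cut can be expressed as a boolean mask. The sketch below assumes the BINNED1 and NODEBLEND flags have already been decoded from the PHOTO flag bitmasks into boolean arrays (the bit values themselves are catalog-specific and not reproduced here):

```python
import numpy as np

def clean_galaxy_mask(binned1, nodeblend):
    """Keep objects detected at > 5 sigma (BINNED1 set) whose
    deblending succeeded (NODEBLEND not set)."""
    return np.asarray(binned1) & ~np.asarray(nodeblend)
```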
Finally, we note that the training of the photo-z estimators included only
galaxies, not stars. As a result, photo-z estimates for
stars that contaminate the photometric sample will be wrong, and cutting
objects with low $z_{\rm phot}$ will not remove them. Our tests on
star/galaxy separation in the photometric sample are briefly
described in Appendix \ref{stargal}.
\begin{figure}
\begin{center}
\begin{minipage}[t]{81mm}
\begin{center}
\resizebox{81mm}{!}{\includegraphics[angle=0]{f11.c.eps}}
\end{center}
\end{minipage}
\end{center}
\caption{
Distributions of
$(z_{\rm phot}-z_{\rm spec})/\sigma_{z}^{\rm NNE}$
for objects in the spectroscopic sample, with photo-z's calculated
using ANN D1; the
results for ANN CC2 are very similar.
The solid black lines are the data, and the dotted red lines are
Gaussians with zero mean and unit variance. {\it Top left:} SDSS Main
spectroscopic sample; {\it top right:} SDSS LRG sample; {\it bottom
left:} validation-set galaxies with $r<20$; {\it bottom right:} validation-set
galaxies with $r>20$. In all cases the photo-z errors
are reasonably well modeled by Gaussian distributions.
}\label{gausser}
\end{figure}
\section{Accessing the Catalog} \label{cat}
The photo-z catalog can be accessed from the
{\tt photoz2} table in the DR6 context on the
SDSS CasJobs site, at {\tt http://casjobs.sdss.org/casjobs/}.
A query similar to the one in the Appendix provides all objects
for which we computed photo-z's.
Alternatively, one can simply perform a query that searches for
objects with a {\tt photoz2} entry.
In addition to the {\tt photoz2} table in the SDSS CAS, an independent
{\tt photoz} table is also available, for which the photo-z's
have been computed using a template-based technique; see
\cite{csa07, ade07}.
\section{Conclusions}\label{con}
We have presented a public catalog of photometric redshifts for the SDSS DR6
photometric sample using
two different photo-z estimates, CC2 and D1, based on the ANN method.
As a consistency check, we have also calculated photo-z's using the NNP method,
a nearest neighbor approach, which gives very good agreement with
the ANN results.
The CC2 and D1 photo-z results are comparable. For the validation set, the
D1 photo-z estimates have lower photo-z scatter for bright galaxies ($r<20$),
and scatter similar to but slightly smaller than that of
CC2 for objects with $r>20$. Our tests indicate
that the SDSS photo-z estimates are most reliable for galaxies
with $r<20$
and that the scatter increases significantly at fainter magnitudes.
For faint galaxies ($r>20$), we recommend using the CC2 photo-z estimate,
since the CC2 $z_{\rm phot}$ \ distribution most closely resembles the $z_{\rm spec}$ \
distribution for the validation set and the weighted $z_{\rm spec}$ \ estimate
for the redshift distribution of the photometric sample.
For users who wish to use, for simplicity, a single photo-z estimator
over the full
magnitude range, we recommend using CC2.
Finally, we have demonstrated that the NNE error estimator, included in the
public catalog,
provides a reliable measure of the photo-z errors and that the overall scaled
photo-z errors are nearly Gaussian.
Funding for the DEEP2 survey has been provided by NSF grant AST-0071048 and AST-0071198. The data presented herein were obtained at the W.M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W.M. Keck Foundation. The DEEP2 team and Keck Observatory acknowledge the very significant cultural role and reverence that the summit of Mauna Kea has always had within the indigenous Hawaiian community and appreciate the opportunity to conduct observations from this mountain.
Funding for the SDSS and SDSS-II has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, the U.S. Department of Energy, the National Aeronautics and Space Administration, the Japanese Monbukagakusho, the Max Planck Society, and the Higher Education Funding Council for England. The SDSS Web Site is {\tt http://www.sdss.org/}.
The SDSS is managed by the Astrophysical Research Consortium for the Participating Institutions. The Participating Institutions are the American Museum of Natural History, Astrophysical Institute Potsdam, University of Basel, University of Cambridge, Case Western Reserve University, University of Chicago, Drexel University, Fermilab, the Institute for Advanced Study, the Japan Participation Group, Johns Hopkins University, the Joint Institute for Nuclear Astrophysics, the Kavli Institute for Particle Astrophysics and Cosmology, the Korean Scientist Group, the Chinese Academy of Sciences (LAMOST), Los Alamos National Laboratory, the Max-Planck-Institute for Astronomy (MPIA), the Max-Planck-Institute for Astrophysics (MPA), New Mexico State University, Ohio State University, University of Pittsburgh, University of Portsmouth, Princeton University, the United States Naval Observatory, and the University of Washington.
Many aspects of superstring theory can be captured by studying its low energy supergravity effective action. The stringy effects, however, appear in higher-derivative and genus corrections to the supergravity. These corrections may be extracted from the most fundamental observables in the superstring theory which are the S-matrix elements \cite{Gross:1986iv,Gross:1986mw}. These objects have various hidden structures, such as the Kawai-Lewellen-Tye (KLT) relations \cite{Kawai:1985xq} which connect sphere-level S-matrix elements of closed strings to disk-level S-matrix elements of open string states. There are similar relations between disk-level S-matrix elements of open and closed strings and disk-level S-matrix elements of only open string states \cite{Garousi:1996ad,Hashimoto:1996bf,Stieberger:2009hq}.
On the other hand, the S-matrix elements should expose the dualities of superstring theory \cite{Sen:1998kr,Vafa:1997pm} through the corresponding Ward identities \cite{Garousi:2011we}. These identities can be used as a generating function for the S-matrix elements, {\it i.e.,}\ they establish connections between different elements of the scattering amplitude of $n$ supergravitons. Once one element is calculated explicitly, all other elements of the S-matrix may be found from the Ward identities \cite{Garousi:2012gh}-\cite{Velni:2013jha}.
Alternatively, the effective actions may be calculated directly by implementing the string dualities. The consistency of the effective action of type IIB superstring theory with S-duality has been used in \cite{Green:1997tv}-\cite{Basu:2007ck} to find genus and instanton corrections to the four Riemann curvature corrections to the type IIB supergravity. The consistency of non-abelian D-brane action with T-duality has been used in \cite{Myers:1999ps} to find various commutators of the transverse scalar fields in the D-brane effective action. They have been verified by the corresponding S-matrix elements in \cite{Garousi:2000ea}. Using the consistency of the effective actions with the string dualities, some of the eight-derivative corrections to the supergravity and four-derivative corrections to the D-brane/O-plane world-volume effective action have been found in \cite{Garousi:2009dj}-\cite{Robbins:2014ara}. The complete list of the couplings at these orders which are fully consistent with the string dualities, however, is still lacking.
The effective actions of type II superstring theories at the leading order of $\alpha'$ are given by the type II supergravities which are invariant under the string dualities. The first higher-derivative correction to these actions is at eight-derivative order. The Riemann curvature corrections to the supergravities, $t_8t_8R^4$, have been found in \cite{Gross:1986iv} from the $\alpha'$-expansion of the sphere-level S-matrix element of four graviton vertex operators. These couplings have been extended in \cite{Gross:1986mw} to include all other couplings of four NSNS states by extending the Riemann curvature to the generalized Riemann curvature, {\it i.e.,}\
\begin{eqnarray}
\bar{R}_{ab}{}^{cd}&= &R_{ab}{}^{cd}-\frac{\kappa}{\sqrt{2}}\eta_{[a}{}^{[c}\phi_{;b]}{}^{d]}+ 2e^{-\phi_0/2}H_{ab}{}^{[c;d]}\labell{trans}
\end{eqnarray}
where $\bar{R}$ is the generalized Riemann curvature.
The resulting couplings are fully consistent with the corresponding S-matrix elements. The S-matrix elements of four massless states in the type II superstring theories have only contact terms at order $\alpha'^3$. The S-dual and T-dual Ward identities of the S-matrix elements then dictate that the NSNS couplings must be combined with the appropriate RR couplings to be consistent with these Ward identities. This guiding principle has been used in \cite{Garousi:2013lja} to find various on-shell couplings between two RR and two gravity/B-field states and between four RR states at order $\alpha'^3$.
The dilaton term in \reef{trans} is canceled when transforming it to the string frame \cite{Garousi:2012jp,Liu:2013dna}. As a result, there is no on-shell dilaton coupling between four NSNS fields in the string frame.
It has been speculated in \cite{Liu:2013dna} that there may be no dilaton couplings among any higher NSNS fields at the eight-derivative level. As we will see in this paper, however, the consistency of the couplings $t_8t_8\bar{R}^4$ with the S-dual and T-dual Ward identities produces various non-zero couplings between the dilaton and the RR fields in the string frame.
In this paper, we are going to examine these couplings, as well as the couplings found in \cite{Garousi:2013lja}, against the explicit calculation of the sphere-level S-matrix element of two RR and two NSNS vertex operators in the RNS formalism.
The outline of the paper is as follows: We begin in section 2 with the detailed calculation of the sphere-level S-matrix element of two RR and two NSNS vertex operators in the RNS formalism. In section 3, we compare the contact terms of these amplitudes at order $\alpha'^3$ for the gravity/B-field with the corresponding couplings that have been found in \cite{Garousi:2013lja}. In section 4, we study the dilaton couplings. Using the T-dual and S-dual Ward identities on the S-matrix element of the RR five-form field strength, we find various couplings at order $\alpha'^3$ in both the string and Einstein frames. We show that the dilaton couplings in the Einstein frame are fully consistent with the corresponding S-matrix element at order $\alpha'^3$. In section 5, we briefly discuss our results.
\section{Scattering amplitude}
The scattering amplitude of four RR states or
two RR and two NSNS states in the pure spinor formalism has been calculated in \cite{Policastro:2006vt}. In this section, we are going to calculate the scattering amplitude of two RR and two NSNS states in the RNS formalism \cite{Friedan:1985ge,Kostelecky:1986xg}.
In this formalism, the tree-level scattering amplitude of two RR and two NSNS states is given by the correlation function of their corresponding vertex operators on the sphere world-sheet. Since the background superghost charge of the sphere is $Q_{\phi}=2$, one has to choose the vertex operators in the appropriate pictures to produce the compensating charge $Q_{\phi}=-2$. One may choose the RR vertex operators in the $(-1/2,-1/2)$ picture, one of the NSNS vertex operators in the $(-1,-1)$ picture and the other one in the $(0,0)$ picture. The final result should be independent of the choice of the ghost picture.
Using the above picture for the vertex operators, the scattering amplitude is given by the following correlation function \cite{Friedan:1985ge,Kostelecky:1986xg}:
\begin{eqnarray}
{\cal A}\sim\int \prod_{i=1}^{4} d^2 z_{i} \; \lan \prod_{j=1}^{2} V_{RR}^{(-1/2,-1/2)}(z_{j},\bar{z}_{j})V_{NSNS}^{(-1,-1)}(z_{3},\bar{z}_{3})V_{NSNS}^{(0,0)}(z_{4},\bar{z}_{4})\ran \labell{amp1}
\end{eqnarray}
where the vertex operators are\footnote{Our conventions on the string theory side set $\alpha'=2$.}
\begin{eqnarray}
V_{RR}^{(-1/2,-1/2)}(z_{j},\bar{z}_{j})&=&(P_{-}\Gamma_{j(n)})^{AB}:e^{-\phi(z_{j})/2}S_{A}(z_{j})e^{ik_{j}\cdot X(z_{j})}:e^{-\tphi(\bar{z}_{j})/2}\tS_{B}(\bar{z}_{j})e^{ik_{j}\cdot \tX({\bar{z}_{j}})}:\nn\\
V_{NSNS}^{(-1,-1)}(z_{3},\bar{z}_{3})&=&\veps_{3\mu\nu}:e^{-\phi(z_{3})}\psi^{\mu}(z_{3})e^{ik_{3}\cdot X(z_{3})}:e^{-\tphi(\bar{z}_{3})}\tpsi^{\nu}(\bar{z}_{3})e^{ik_{3}\cdot \tX({\bar{z}_{3}})}:\nn\\
V_{NSNS}^{(0,0)}(z_{4},\bar{z}_{4})&=&\veps_{4 \alpha \beta}:(\prt X^{\alpha}(z_{4})+ik_{4}\!\cdot\!\psi\psi^{\alpha}(z_{4})) e^{ik_{4}\cdot X(z_{4})}:\nn\\&&\quad\quad\times
(\prt \tX^{\beta}(\bar{z}_{4})+ik_{4}\!\cdot\!\tpsi\tpsi^{\beta}(\bar{z}_{4})) e^{ik_{4}\cdot \tX(\bar{z}_{4})}:\labell{vert}
\end{eqnarray}
where the indices $A,B,\cdots$ are the Dirac spinor indices and $P_-=\frac{1}{2}(1-\gamma_{11})$ is the chiral projection operator, which allows the gamma matrix calculations to be performed with the full $32\times 32$ Dirac matrices of ten dimensions. The RR polarization tensors $\veps_1^{(n-1)},\, \veps_2^{(n-1)}$ appear in $\Gamma_{1(n)}, \, \Gamma_{2(n)}$ which are defined as
\begin{equation}
\Gamma_{i(n)}=\frac{a_n}{n!}(F_{i})_{\mu_1\cdots\mu_n}\,\gamma^{\mu_1\cdots\mu_n}
\labell{self}
\end{equation}
where the $n$-form $(F_i)_{\mu_1\cdots\mu_n}= \frac{1}{2}(dC_i)_{\mu_1\cdots\mu_n} $ is the linearized RR field strength, and the factor $a_n=-1$ in the type IIA theory and $a_n=i$ in the type IIB theory \cite{Garousi:1996ad}. The polarization tensors of the NSNS fields are given by $\veps_3, \veps_4$. The polarization tensor is symmetric and traceless for the graviton, antisymmetric for the B-field, and for the dilaton it is
\begin{eqnarray} \veps_{i}^{\mu\nu}=\frac{\phi_i}{\sqrt{8}}\left(\eta^{\mu\nu}
-k_{i}^{\mu}\ell_{i}^{\nu}-\ell_{i}^{\mu}k_{i}^{\nu}\right)\labell{dilpol}
\end{eqnarray}
where $ \ell_i $ is an auxiliary vector satisfying $ k_i\!\cdot\!\ell_i=1 $ and $ \phi_i $ is the dilaton polarization, which is set to one. The on-shell relations for the vertex operators are $k_i^2=0$, $k_i\!\cdot\!\veps_i=0$, and $\veps_i\!\cdot\! k_i=0$. The normalization of the amplitude \reef{amp1} will be fixed after fixing the conformal symmetry of the integrand.
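As an illustrative numerical check (our own sketch, not part of the original computation), one can verify that the polarization \reef{dilpol} is transverse, $k_{i\mu}\veps_i^{\mu\nu}=0$, and has trace $\eta_{\mu\nu}\veps_i^{\mu\nu}=\sqrt{8}\,\phi_i$; the metric signature and the particular null $k$ and auxiliary $\ell$ below are arbitrary choices:

```python
import numpy as np

# Assumed mostly-plus 10D Minkowski metric; k and l are arbitrary choices
# satisfying k^2 = 0 and k.l = 1 (our own illustration).
eta = np.diag([-1.0] + [1.0] * 9)
k = np.array([2.0, 2.0] + [0.0] * 8)
l = np.array([0.0, 0.5] + [0.0] * 8)

dot = lambda a, b: a @ eta @ b
assert abs(dot(k, k)) < 1e-12 and abs(dot(k, l) - 1.0) < 1e-12

# dilaton polarization (upper indices) of eq. (dilpol) with phi_i = 1;
# the components of eta^{mu nu} coincide with those of eta_{mu nu} here
eps = (eta - np.outer(k, l) - np.outer(l, k)) / np.sqrt(8)

k_low = eta @ k                      # k with its index lowered
print(np.max(np.abs(k_low @ eps)))   # transversality: k_mu eps^{mu nu} = 0
print(np.sum(eta * eps))             # trace eta_{mu nu} eps^{mu nu} = sqrt(8)
```

The trace works out to $(10-1-1)/\sqrt{8}=\sqrt{8}$, reflecting the two subtractions by $k\cdot\ell=1$.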
Substituting the vertex operators \reef{vert} into \reef{amp1}, and using the fact that there is no correlation between the holomorphic and anti-holomorphic fields on world-sheets without boundary, one can separate the amplitude into holomorphic and anti-holomorphic parts as
\begin{eqnarray}
{\cal A} \sim (P_{-} \Gamma_{1(n)})^{AB}(P_{-} \Gamma_{2(m)})^{CD}\veps_{3 \mu \nu}\veps_{4 \alpha \beta}\int \prod_{i=1}^{4} d^2 z_{i} I_{AC}^{\mu \alpha}\otimes\tilde{I}_{BD}^{\nu \beta}\labell{amp2}
\end{eqnarray}
where the holomorphic part is
\begin{eqnarray}
I_{AC}^{\mu \alpha}&=&\lan:e^{-\phi(z_{1})/2}:e^{-\phi(z_{2})/2}:e^{-\phi(z_{3})}:\ran\big[\lan:S_{A}(z_{1}):S_{C}(z_{2}):\psi^{\mu}(z_{3}):\ran\nn\\&&\times\lan:e^{ik_{1}\cdot X(z_{1})}:e^{ik_{2}\cdot X(z_{2})}:e^{ik_{3}\cdot X(z_{3})}:\prt X^{\alpha}(z_{4})e^{ik_{4}\cdot X(z_{4})}:\ran\nn\\&&+\lan:S_{A}(z_{1}):S_{C}(z_{2}):\psi^{\mu}(z_{3}):ik_{4}\!\cdot\!\psi\psi^{\alpha}(z_{4}):\ran\nn\\&&\times\lan:e^{ik_{1}\cdot X(z_{1})}:e^{ik_{2}\cdot X(z_{2})}:e^{ik_{3}\cdot X(z_{3})}:e^{ik_{4}\cdot X(z_{4})}:\ran\big]\labell{right}
\end{eqnarray}
and the anti-holomorphic part $\tilde{I}_{BD}^{\nu \beta}$ is given by a similar expression.
In calculating the correlators \reef{amp2}, one needs the world-sheet propagators for the holomorphic and anti-holomorphic fields \cite{Friedan:1985ge,Kostelecky:1986xg}. Using the standard sphere propagators, one can easily calculate the correlators of the bosonic fields as
\begin{eqnarray}
P\equiv\lan:e^{ik_{1}\cdot X(z_{1})}:e^{ik_{2}\cdot X(z_{2})}:e^{ik_{3}\cdot X(z_{3})}:e^{ik_{4}\cdot X(z_{4})}:\ran&=&\prod_{i<j}^{4} z_{ij}^{k_{i}\cdot k_{j}}\labell{boscor}\\
\lan:e^{ik_{1}\cdot X(z_{1})}:e^{ik_{2}\cdot X(z_{2})}:e^{ik_{3}\cdot X(z_{3})}:\prt X^{\alpha}(z_{4})e^{ik_{4}\cdot X(z_{4})}:\ran&=&\sum_{i=1}^{3} ik_{i}^{\alpha}z_{i4}^{-1}P\nonumber\\
\lan:e^{-\phi(z_{1})/2}:e^{-\phi(z_{2})/2}:e^{-\phi(z_{3})}:\ran&=& z_{12}^{-1/4}z_{13}^{-1/2}z_{23}^{-1/2}\nn
\end{eqnarray}
where $z_{ij}=z_i-z_j$. Using the conservation of momentum and the on-shell condition $k_4\!\cdot\!\veps_4=0$, one can write $\sum_{i=1}^{3} ik_{i}^{\alpha}z_{i4}^{-1}=\sum_{i=1}^{2} ik_{i}^{\alpha}z_{i4}^{-1}z_{34}^{-1}z_{3i}$. This relation will be useful later on to check that the integrand is invariant under $SL(2,{R})\times SL(2,{R})$ transformations.
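The rewriting $\sum_{i=1}^{3} ik_{i}^{\alpha}z_{i4}^{-1}=\sum_{i=1}^{2} ik_{i}^{\alpha}z_{i4}^{-1}z_{34}^{-1}z_{3i}$ can be checked symbolically; a minimal sketch (our own illustration), which eliminates $k_3$ by momentum conservation and drops the $k_4^{\alpha}$ piece since it is contracted with $\veps_4$:

```python
import sympy as sp

z1, z2, z3, z4 = sp.symbols('z1 z2 z3 z4')

# coefficients of k1, k2, k4 in sum_{i=1..3} k_i / z_{i4}, after using
# momentum conservation to substitute k3 = -(k1 + k2 + k4)
lhs = {
    'k1': 1/(z1 - z4) - 1/(z3 - z4),
    'k2': 1/(z2 - z4) - 1/(z3 - z4),
    'k4': -1/(z3 - z4),   # drops out when contracted with eps_4 (k4.eps4 = 0)
}

# coefficients of k1, k2 in sum_{i=1..2} k_i z_{3i} / (z_{i4} z_{34})
rhs = {
    'k1': (z3 - z1) / ((z1 - z4) * (z3 - z4)),
    'k2': (z3 - z2) / ((z2 - z4) * (z3 - z4)),
}

for key in ('k1', 'k2'):
    print(key, sp.simplify(lhs[key] - rhs[key]))   # both differences vanish
```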
To calculate the correlators involving the fermion and the spin operators, one may use
the Wick-like rule for the correlation function involving an arbitrary number of fermion fields and two spin operators \cite{Liu:2001qa,Garousi:2008ge}\footnote{ See \cite{Hartl:2009yf, Hartl:2010ks,Hatefi:2014lva}, for the correlation function of fermion fields with four spin operators.
}.
Using this rule, one finds the following results for the fermion correlators which appear in \reef{right}:
\begin{eqnarray}
&&\lan:S_{A}(z_{1}):S_{C}(z_{2}):\psi^{\mu}(z_{3}):\ran = \frac{1}{\sqrt{2}}(\gamma^{\mu}C^{-1})_{AC}z_{12}^{-3/4}z_{31}^{-1/2}z_{32}^{-1/2}\labell{ferghocor}\\
&&\lan:S_{A}(z_{1}):S_{C}(z_{2}):\psi^{\mu}(z_{3}):ik_{4}\!\cdot\!\psi\psi^{\alpha}(z_{4}):\ran = \frac{i}{2\sqrt{2}}k_{4\lambda}z_{12}^{1/4}z_{31}^{-1/2}z_{32}^{-1/2}z_{41}^{-1}z_{42}^{-1}\bigg[(\gamma^{\alpha \lambda \mu}C^{-1})_{AC}
\nn\\&&\qquad\qquad\qquad\qquad\qquad\qquad+z_{12}^{-1}z_{43}^{-1}(z_{41}z_{32}+z_{31}z_{42} )[\eta^{\mu \lambda}(\gamma^{\alpha}C^{-1})_{AC}-\eta^{\mu \alpha}(\gamma^{\lambda}C^{-1})_{AC}]\bigg]\nn
\end{eqnarray}
The fractional powers of $z_{ij}$ are converted to integer powers when the ghost correlator in \reef{boscor} multiplies the above correlators.
Replacing the correlators \reef{ferghocor} and \reef{boscor} into the scattering amplitude \reef{amp2}, and using the on-shell conditions along with the conservation of momentum, one can easily check that the integrand of the scattering amplitude is invariant under $SL(2,{R})\times SL(2,{R})$ transformations which is the conformal symmetry of the $z$-plane. Fixing this symmetry by setting $z_{1}=0,z_{2}\equiv z,z_{3}=1$ and $z_{4}=\infty$, one finds the following result:
\begin{eqnarray}
{\cal A}=-i\frac{\kappa^2 e^{-2\phi_0}}{8}\frac{\Gamma(-s/8)\Gamma(-t/8)\Gamma(-u/8)}{\Gamma(1+s/8)\Gamma(1+t/8)\Gamma(1+u/8)}\cK\labell{amp3}
\end{eqnarray}
where the Gamma functions are the standard ones that appear in the four-closed-string amplitude \cite{Gross:1986iv}, and the closed string kinematic factor is
\begin{eqnarray}
\cK&=&(P_{-} \Gamma_{1(n)})^{AB}(P_{-} \Gamma_{2(m)})^{CD}\veps_{3 \mu \nu}\veps_{4 \alpha \beta} K_{AC}^{\mu \alpha}\otimes\tilde K_{BD}^{\nu \beta}\labell{kin0}
\end{eqnarray}
In the kinematic factor, there is an implicit factor of the delta function $\delta^{10}(k_1+k_2+k_3+k_4)$ imposing conservation of momentum. The Mandelstam variables $s=-8k_1\!\cdot\! k_2$, $u=-8k_1\!\cdot\! k_3$ and $t=-8k_2\!\cdot\! k_3$ satisfy $s+t+u=0$, and the kinematic factor in the holomorphic part is
\begin{eqnarray}
K_{AC}^{\mu \alpha} &\!\!\!\!\!=\!\!\!\!\!& \frac{1}{8}\bigg[t\bigg((k_{1}^{\alpha}+k_{2}^{\alpha})(\gamma^{\mu}C^{-1})_{AC}+k_{4}^{\mu}(\gamma^{\alpha}C^{-1})_{AC}-k_{4\lambda}\eta^{\mu \alpha}(\gamma^{\lambda}C^{-1})_{AC}\bigg)\labell{kin1}\\
&&+s\bigg(k_{1}^{\alpha}(\gamma^{\mu}C^{-1})_{AC}-\frac{1}{2}[k_{4\lambda}(\gamma^{\alpha \lambda \mu}C^{-1})_{AC}-k_{4}^{\mu}(\gamma^{\alpha}C^{-1})_{AC} +k_{4\lambda}\eta^{\mu \alpha}(\gamma^{\lambda}C^{-1})_{AC}]\bigg)\bigg]\nn
\end{eqnarray}
The kinematic factor in the anti-holomorphic part is given by a similar expression. We have normalized the amplitude \reef{amp3} to be consistent with the field theory couplings found in \cite{Garousi:2013lja}. The background flat metric in the Mandelstam variables and in the kinematic factor is in the string frame, which is why we have normalized the amplitude by the dilaton factor $e^{-2\phi_0}$. On the other hand, the graviton and the dilaton have the standard kinetic term, or standard propagator, only in the Einstein frame. The massless poles of the amplitude \reef{amp3} then indicate that its external gravitons are in the Einstein frame.
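The constraint $s+t+u=0$ is a consequence of momentum conservation and masslessness, since $0=k_4^2=(k_1+k_2+k_3)^2=2(k_1\!\cdot\! k_2+k_1\!\cdot\! k_3+k_2\!\cdot\! k_3)$. A quick numerical illustration (our own parametrization of massless four-point kinematics):

```python
import numpy as np

# a 4D slice of the 10D momenta suffices for the kinematics; E and th are
# arbitrary (our own center-of-mass parametrization, all momenta incoming)
eta = np.diag([-1.0, 1.0, 1.0, 1.0])
dot = lambda a, b: a @ eta @ b

E, th = 3.0, 0.7
k1 = np.array([E, 0.0, 0.0,  E])
k2 = np.array([E, 0.0, 0.0, -E])
k3 = -np.array([E,  E*np.sin(th), 0.0,  E*np.cos(th)])
k4 = -np.array([E, -E*np.sin(th), 0.0, -E*np.cos(th)])

assert np.allclose(k1 + k2 + k3 + k4, 0.0)                     # conservation
assert all(abs(dot(k, k)) < 1e-12 for k in (k1, k2, k3, k4))   # null momenta

s, u, t = -8*dot(k1, k2), -8*dot(k1, k3), -8*dot(k2, k3)
print(s + t + u)   # vanishes up to rounding
```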
As a double check of the amplitude \reef{amp3}, one should be able to relate this amplitude to the product of open string amplitudes of two spinors and two gauge bosons using the KLT prescription \cite{Kawai:1985xq}. According to the KLT prescription, the sphere-level amplitude of four closed string states is given by
\begin{eqnarray}
{\cal A}&=&\frac{i}{2^9\pi} \sin(\pi k_2\!\cdot\! k_3)A_{\rm open}(s/8,t/8)\otimes\tA_{\rm open}(t/8,u/8)\labell{KLT}
\end{eqnarray}
where $A_{\rm open}(s/8,t/8)$ is the disk-level scattering amplitude of four open string states in the $s-t$ channel which has been calculated in \cite{Schwarz:1982jn},
\begin{eqnarray}
A_{\rm open}(s/8,t/8)&=&-i\kappa e^{-\phi_0}\frac{\Gamma(-s/8)\Gamma(-t/8)}{\Gamma(1+u/8)}K \labell{open}
\end{eqnarray}
where the Mandelstam variables are the same as in the closed string amplitude. The open string kinematic factor $K$ depends on the momenta and the polarizations of the external states \cite{Schwarz:1982jn}. We have normalized the amplitudes \reef{open} and \reef{KLT} to be consistent with the normalization of the amplitude \reef{amp3}.
To find the sphere-level scattering amplitude of two RR and two NSNS states, one has to consider the open string amplitude of two spinors and two gauge bosons. The kinematic factor for this case is \cite{Schwarz:1982jn}
\begin{eqnarray}
K(u_1,u_2,\zeta_3,\zeta_4)&=&
-\frac{i}{\sqrt{2}}\bigg[\frac{1}{2} s \bar u_2\gamma \!\cdot\! \zeta_3\gamma \!\cdot\!(k_1+k_4)\gamma\!\cdot\!\zeta_4 u_1\labell{openkin}\\
&&\qquad\quad-t\bigg(\bar u_2\gamma\!\cdot\!\zeta_4 u_1 k_4\!\cdot\!\zeta_3
-\bar u_2\gamma\!\cdot\!\zeta_3 u_1 k_3\!\cdot\!\zeta_4-\bar u_2\gamma\!\cdot\! k_4 u_1 \zeta_3\!\cdot\!\zeta_4\bigg)\bigg]\nn
\end{eqnarray}
where $u_1,\, u_2$ are the spinor polarizations and $\zeta_3,\,\zeta_4$ are the gauge boson polarizations. They satisfy the following on-shell relations
\begin{eqnarray}
k_{i}^{2}=0,\qquad k_{i}\!\cdot\!\zeta_{i}=0,\qquad (\gamma\!\cdot\! k_{i}C^{-1})_{AB}u_i^{B}=0\labell{on-shell}
\end{eqnarray}
Using these relations, and the identity
\begin{eqnarray}
\bar u_2^{C}(\eta^{\lambda \mu}\gamma^{\alpha}C^{-1}-\eta^{\alpha \mu}\gamma^{\lambda}C^{-1}+\eta^{\alpha \lambda}\gamma^{\mu}C^{-1}+\gamma^{\mu}\gamma^{\lambda}\gamma^{\alpha}C^{-1})_{CA}u_1^{A}&=&(\gamma^{\alpha \lambda \mu}C^{-1})_{AC}u_1^{A}u_2^{C} \labell{trans3}\nn
\end{eqnarray}
one can write the open string kinematic factor \reef{openkin} in terms of the holomorphic kinematic factor \reef{kin1} as
\begin{eqnarray}
K(u_1,u_2,\zeta_3,\zeta_4)&=& -4i\sqrt{2}u_1^{A}u_2^{C}\zeta_{3\mu}\zeta_{4\alpha} K_{AC}^{\mu \alpha}\labell{trans2}
\end{eqnarray}
Similarly for the antiholomorphic part, {\it i.e.,}\
\begin{eqnarray}
\tilde K(\tilde u_1,\tilde u_2,\tilde \zeta_3,\tilde \zeta_4)&=&-4i\sqrt{2}\tilde u_1^{B}\tilde u_2^{D}\tilde\zeta_{3\nu}\tilde\zeta_{4\beta} \tilde K_{BD}^{\nu \beta} \nn
\end{eqnarray}
Using the above relations and $\Gamma(x)\Gamma(1-x)=\pi/\sin(\pi x)$, and substituting the following relations in \reef{KLT}
\begin{eqnarray}
\zeta_{i}^{\mu}\otimes\tilde \zeta_{i}^{\nu}&\rightarrow&\veps_{i}^{\mu\nu},\quad\quad i=3,4\nn\\
u_1^{A}\otimes \tilde u_1^{B}&\rightarrow& (P_{-} \Gamma_{1(n)})^{AB}\nn\\
u_2^{C}\otimes \tilde u_2^{D}&\rightarrow& (P_{-} \Gamma_{2(m)})^{CD}\labell{trans1}
\end{eqnarray}
one recovers the amplitude \reef{amp3}, as expected. While the open string kinematic factor \reef{openkin} is the final result for the S-matrix element of two gauge bosons and two open string spinors, the closed string kinematic factor \reef{kin0} is not yet in its final form. The external closed string states are bosons; hence, the Dirac matrices in the kinematic factor must appear inside a trace, which should then be evaluated explicitly to find the final kinematic factor of the closed string amplitude.
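The Gamma-function part of this KLT consistency check can also be verified numerically: up to the overall normalization, the factor $\sin(\pi k_2\!\cdot\! k_3)$ converts the product of the two open-string Gamma factors into $\pi$ times the closed-string ratio in \reef{amp3}, by the reflection formula. A sketch (our own illustration):

```python
from mpmath import mp, gamma, sin, pi, mpf

mp.dps = 30
s, t = mpf('1.3'), mpf('-0.4')   # a generic point away from the Gamma poles
u = -s - t

closed = (gamma(-s/8)*gamma(-t/8)*gamma(-u/8)
          / (gamma(1+s/8)*gamma(1+t/8)*gamma(1+u/8)))

open_st = gamma(-s/8)*gamma(-t/8)/gamma(1+u/8)   # Gamma part of A_open(s/8,t/8)
open_tu = gamma(-t/8)*gamma(-u/8)/gamma(1+s/8)   # Gamma part of A_open(t/8,u/8)

klt = sin(-pi*t/8) * open_st * open_tu           # sin(pi k2.k3), k2.k3 = -t/8
print(klt / closed)                              # equals pi, by reflection
```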
The kinematic factor \reef{kin1} has one term which contains three antisymmetrized gamma matrices, while all other terms contain only one gamma matrix. As a result, the closed string kinematic factor \reef{kin0} has four different terms, each containing one of the following factors:
\begin{eqnarray}
T_1^{\sigma \tau}&=&(P_{-} \Gamma_{1(n)})^{AB}(P_{-} \Gamma_{2(m)})^{CD}(\gamma^{\sigma}C^{-1})_{AC}(\gamma^{\tau}C^{-1})_{BD}\nn\\
T_2^{\sigma \beta \rho \nu}&=&(P_{-} \Gamma_{1(n)})^{AB}(P_{-} \Gamma_{2(m)})^{CD}(\gamma^{\sigma}C^{-1})_{AC}(\gamma^{\beta \rho \nu}C^{-1})_{BD}\nn\\
T_3^{\tau \alpha \lambda \mu }&=&(P_{-} \Gamma_{1(n)})^{AB}(P_{-} \Gamma_{2(m)})^{CD}(\gamma^{\alpha \lambda \mu}C^{-1})_{AC}(\gamma^{\tau}C^{-1})_{BD}\nn\\
T_4^{\alpha \lambda\mu\beta \rho \nu}&=&(P_{-}\Gamma_{1(n)})^{AB}(P_{-} \Gamma_{2(m)})^{CD}(\gamma^{\alpha \lambda \mu}C^{-1})_{AC}(\gamma^{\beta \rho \nu}C^{-1})_{BD} \labell{tr0}
\end{eqnarray}
which can be written in terms of the RR field strengths and the trace of the gamma matrices. Using the above factors, one may then separate the closed string kinematic factor to the following parts:
\begin{eqnarray}
{\cal K}={\cal K}_1+{\cal K}_2+{\cal K}_3+{\cal K}_4\labell{kin2}
\end{eqnarray}
where
\begin{eqnarray}
{\cal K}_1 &=& \frac{1}{256}\bigg[(t-u)^2 {k}_{4\alpha } {k}_{4\beta} {\veps}_{3\lambda \mu }{\veps}_4^{\lambda \mu} +(t-u)^2 {k}_4^{\lambda }\bigg({k}_4^{\mu } {\veps}_{3\mu \lambda } {\veps}_{4\alpha \beta} \nn\\
&&-{k}_{4\beta }({\veps}_{3\lambda \mu } {\veps}_{4\alpha \mu}+{\veps}_{3\mu \lambda } {\veps}_{4\mu \alpha})\bigg) +2 t{k}_2^{\lambda } \bigg((t-u) {k}_4^{\mu } ({\veps}_{3\mu \beta } {\veps}_{4 \alpha \lambda }+{\veps}_{3\beta \mu }{\veps}_{4\lambda \alpha}) \nn\\
&&-(t-u) {k}_{4\beta }{\veps}_{3\alpha \mu } {\veps}_{4\lambda}{}^{ \mu}+(2 t{k}_2^{\mu } {\veps}_{3\alpha\beta }-(t-u) {k}_{4\alpha } {\veps}_3{}^{\mu}{}_{\beta }){\veps}_{4\mu \lambda }\bigg)\nn\\
&&-2u {k}_1^{\lambda } \bigg((t-u) {k}_4^{\mu } ({\veps}_{3\mu \beta } {\veps}_{4\alpha\lambda }+{\veps}_{3\beta \mu }{\veps}_{4\lambda \alpha})-(t-u) {k}_{4\beta }{\veps}_{3\alpha \mu }{\veps}_{4\lambda}{}^{ \mu }\nn\\
&&-(2u {k}_1^{\mu } {\veps}_{3\alpha\beta } +(t-u) {k}_{4\alpha } {\veps}_3{}^{\mu}{}_{\beta }) {\veps}_{4\mu\lambda }
+2 t {k}_2^{\mu }{\veps}_{3\alpha \beta } ({\veps}_{4\lambda \mu}+{\veps}_{4\mu \lambda})\bigg)\bigg] T_1^{\alpha \beta}\nn\\
{\cal K}_2 &=& \frac{s}{256}\bigg[ (u-t) {k}_{4\alpha }{k}_{4\lambda } {\veps}_{3\beta \nu } {\veps }_4{}^{\beta}{}_{ \mu } +{k}_{4\lambda } \bigg((t-u){k}_4^{\beta } {\veps }_{3\beta \nu } {\veps }_{4\alpha \mu } \nn\\
&&+2(t{k}_2^{\beta }-u {k}_1^{\beta }) {\veps}_{3\alpha \nu } {\veps} _{4\beta \mu }\bigg) \bigg]T_2^{\alpha \lambda \mu\nu}\nn\\
{\cal K}_3 &=& \frac{s}{256}\bigg[ (u-t) {k}_{4\alpha }{k}_{4\lambda } {\veps}_{3\nu\beta } {\veps }_{4\mu}{}^{\beta } +{k}_{4\lambda } \bigg((t-u){k}_4^{\beta } {\veps }_{3\nu \beta } {\veps }_{4\mu \alpha }\nn\\
&&+2(t{k}_2^{\beta }-u {k}_1^{\beta }) {\veps}_{3\nu \alpha } {\veps}_{4\mu \beta }\bigg) \bigg]T_3^{\alpha \lambda \mu\nu}\nn\\
{\cal K}_4 &=& \frac{s^2}{256}\bigg[ {k}_{4\alpha } {k}_{4\beta }{\veps }_{3\rho \mu } {\veps }_{4\nu \lambda }\bigg] T_4^{\alpha \nu \rho \beta \lambda \mu }\labell{kin3}
\end{eqnarray}
Note that the tensor $T_2$ $(T_3)$ is totally antisymmetric with respect to its last three indices; hence, the indices of the NSNS momenta and polarization tensors in $\cK_2$ $(\cK_3)$ which contract with this tensor must be antisymmetrized. Similarly, the tensor $T_4$ is totally antisymmetric with respect to its first three and its last three indices, so the momenta and the polarization tensors in $\cK_4$ should be antisymmetrized accordingly.
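A useful cross-check on the bookkeeping of one versus three antisymmetrized gamma matrices is the identity $\gamma^{\mu}\gamma^{\lambda}\gamma^{\alpha}=\gamma^{\mu\lambda\alpha}+\eta^{\mu\lambda}\gamma^{\alpha}-\eta^{\mu\alpha}\gamma^{\lambda}+\eta^{\lambda\alpha}\gamma^{\mu}$, which follows from the Clifford algebra alone. It can be verified numerically (our own illustration) in a four-dimensional Dirac representation standing in for the full $32\times32$ ten-dimensional matrices:

```python
import itertools
import numpy as np

# 4D Dirac representation with eta = diag(1,-1,-1,-1): an illustrative
# low-dimensional stand-in, since the identity follows from the Clifford
# algebra alone and is independent of the dimension.
eta = np.diag([1.0, -1.0, -1.0, -1.0])
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
Z, I2 = np.zeros((2, 2), dtype=complex), np.eye(2, dtype=complex)
g = [np.block([[I2, Z], [Z, -I2]])] + \
    [np.block([[Z, s], [-s, Z]]) for s in (s1, s2, s3)]

def gamma3(m, l, a):
    """Totally antisymmetrized product gamma^{m l a}."""
    out = np.zeros((4, 4), dtype=complex)
    for p in itertools.permutations(range(3)):
        sgn = (-1)**sum(p[i] > p[j] for i in range(3) for j in range(i+1, 3))
        i, j, k = ((m, l, a)[q] for q in p)
        out += sgn * g[i] @ g[j] @ g[k]
    return out / 6

for m, l, a in itertools.product(range(4), repeat=3):
    rhs = gamma3(m, l, a) + eta[m, l]*g[a] - eta[m, a]*g[l] + eta[l, a]*g[m]
    assert np.allclose(g[m] @ g[l] @ g[a], rhs)
print("decomposition verified for all 64 index triples")
```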
One may try to write the polarization tensors and the momenta of the NSNS states in the form $\veps^{[\mu}{}_{[\alpha}k^{\nu]}k_{\beta]}$, which is the generalized Riemann curvature in momentum space. Such a manipulation has been done in \cite{Policastro:2006vt} for finding the couplings of two RR and two NSNS states in the pure spinor formalism. In this paper, however, we are interested in a form of the couplings that is manifestly invariant under linear T-duality and S-duality. This form of the couplings may not be in terms of the generalized Riemann curvature.
To proceed further and write the kinematic factors \reef{kin3} in terms of the momenta and the polarization tensors of the external states, one has to find the explicit form of the tensors $T_1,\cdots, T_4$ in terms of the metric $\eta_{\mu\nu}$ and the RR field strengths $F_1,\, F_2$.
Using the properties of the charge conjugation matrix and the Dirac matrices (see {\it e.g.,}\ appendix B. in \cite{Garousi:1996ad}), one can write the tensors $T_1,\cdots, T_4$ as
\begin{eqnarray}
T_1^{\sigma \tau}&=& -\frac{(-1)^{\frac{1}{2}m(m+1)}a_{n}a_{m}}{2\,n!m!}F_{1\mu_{1}\cdots\mu_{n}}F_{2\nu_{1}\cdots\nu_{m}}\Tr(\gamma^{\sigma} \gamma^{\mu_{1}\cdots\mu_{n}}\gamma^{\tau} \gamma^{\nu_{1}\cdots\nu_{m}})\nn\\
T_2^{\sigma \beta \rho \nu} &=&-\frac{(-1)^{\frac{1}{2}m(m+1)}a_{n}a_{m}}{2\,n!m!}F_{1\mu_{1}\cdots\mu_{n}}F_{2\nu_{1}\cdots\nu_{m}}\Tr(\gamma^{\sigma} \gamma^{\mu_{1}\cdots\mu_{n}}\gamma^{\beta \rho \nu} \gamma^{\nu_{1}\cdots\nu_{m}})\nn\\
T_3^{\tau \alpha \lambda \mu } &=&-\frac{(-1)^{\frac{1}{2}n(n+1)}a_{n}a_{m}}{2\,n!m!}F_{1\mu_{1}\cdots\mu_{n}}F_{2\nu_{1}\cdots\nu_{m}}\Tr(\gamma^{\tau} \gamma^{\mu_{1}\cdots\mu_{n}}\gamma^{\alpha \lambda \mu} \gamma^{\nu_{1}\cdots\nu_{m}})\nn\\
T_4^{\alpha \lambda\mu\beta \rho \nu} &=&\frac{(-1)^{\frac{1}{2}m(m+1)}a_{n}a_{m}}{2\,n!m!}F_{1\mu_{1}\cdots\mu_{n}}F_{2\nu_{1}\cdots\nu_{m}}\Tr(\gamma^{\alpha \lambda \mu} \gamma^{\mu_{1}\cdots\mu_{n}}\gamma^{\beta \rho \nu} \gamma^{\nu_{1}\cdots\nu_{m}})\labell{tr}
\end{eqnarray}
In the chiral projection operator $P_-=\frac{1}{2}(1-\gamma_{11})$, the identity operator corresponds to the RR field strength $F_n$ and $\gamma_{11}$ corresponds to $F_{10-n}$, which is the magnetic dual of $F_n$ at the linear order. One may ignore $\gamma_{11}$ and assume that $1\leq n \leq 9$. The corresponding couplings then produce corrections to the democratic form of the supergravity \cite{Fukuma:1999jt}.
The above traces indicate that these tensors vanish when the difference between $n$ and $m$ is an odd number. This is what one expects, because there are no couplings between the RR fields of the type IIA theory, in which the RR field strengths have even rank, and those of the type IIB theory, in which the RR field strengths have odd rank.
When the difference between $n$ and $m$ is an even number, the traces are not zero. One can easily verify that the traces vanish for $n=m+8$. For the $n=m+6$ case, the traces in $T_1,\,T_2,\,T_3$ vanish and $T_4$ becomes totally antisymmetric; however, the corresponding kinematic factor ${\cal K}_4$ contains the symmetric combination $k_{4\alpha}k_{4\beta}$, so the kinematic factor ${\cal K}$ is zero in this case too. Therefore, there are three cases to consider, {\it i.e.,}\ $n=m$, $n=m+2$ and $n=m+4$.
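The traces in \reef{tr} are evaluated with standard gamma matrix identities. As an elementary illustration of the kind of identity involved (our own sketch), the four-gamma trace that enters $T_1$ for $n=m=1$ can be checked numerically in a four-dimensional Dirac representation; in ten dimensions the same index structure holds with $\Tr \mathbf{1}=32$ instead of $4$:

```python
import itertools
import numpy as np

# 4D Dirac representation as a low-dimensional illustration of the
# trace prescription; the index structure is dimension-independent.
eta = np.diag([1.0, -1.0, -1.0, -1.0])
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
Z, I2 = np.zeros((2, 2), dtype=complex), np.eye(2, dtype=complex)
g = [np.block([[I2, Z], [Z, -I2]])] + \
    [np.block([[Z, s], [-s, Z]]) for s in (s1, s2, s3)]

# Tr(g^a g^b g^c g^d) = Tr(1) (eta^{ab} eta^{cd} - eta^{ac} eta^{bd}
#                             + eta^{ad} eta^{bc})
for a, b, c, d in itertools.product(range(4), repeat=4):
    lhs = np.trace(g[a] @ g[b] @ g[c] @ g[d])
    rhs = 4*(eta[a, b]*eta[c, d] - eta[a, c]*eta[b, d] + eta[a, d]*eta[b, c])
    assert abs(lhs - rhs) < 1e-12
print("four-gamma trace identity verified")
```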
For the case that $n=m+4$, one can easily find $T_1=0$ and $T_2=T_3$. A prescription for calculating the traces is given in the appendix A. Using it, one finds the tensors $ T_2,\, T_4$ to be
\begin{eqnarray}
T_2^{\sigma \beta \rho \nu} &=& -16\frac{(-1)^{n(n+1)}a_{n}^2 }{(n-4)!} F_{12}^{ \nu \rho \beta \sigma} \labell{tn=m+4}\\
T_4^{\alpha \lambda\mu\beta \rho \nu} &=& -16\frac{(-1)^{n(n+1)}a_{n}^2 }{(n-4)!}\bigg[3(\eta^{\alpha \beta}F_{12}^{ \nu \mu \rho \lambda}+\eta^{\lambda \beta}F_{12}^{ \nu \rho \mu \alpha}-\eta^{\mu \beta}F_{12}^{ \nu \rho \lambda \alpha})\nn\\
&&-(n-4)(F_{12}^{ \nu \mu \rho \lambda \beta \alpha}+F_{12}^{ \nu \rho \beta \mu \alpha \lambda}-F_{12}^{ \nu \rho \beta \lambda \alpha \mu}-F_{12}^{ \rho \beta \mu \lambda \alpha \nu}+F_{12}^{\nu \beta \mu \lambda \alpha \rho}-F_{12}^{ \nu \rho \mu \lambda \alpha \beta})\bigg]\nn
\end{eqnarray}
Here we have used the fact that $a_{n-4}=a_{n}$, and have used the following notation for $F_{12}$s:
\begin{eqnarray}
F_{12}^{\nu \rho \beta \sigma} &\equiv& F_{1\mu_{1}\cdots\mu_{n-4}}{}^{\nu \rho \beta \sigma} F_2^{\mu_{1}\cdots\mu_{n-4}}\nn\\
F_{12}^{\nu \rho \mu \beta \lambda \alpha} &\equiv& F_{1\mu_{1}\cdots\mu_{n-5}}{}^{\nu \rho \mu \beta \lambda} F_2^{\mu_{1}\cdots\mu_{n-5}\alpha}\labell{n=m+4}
\end{eqnarray}
Note that $F_1$ is an $n$-form and $F_2$ is an $(n-4)$-form. Replacing the tensors $T_1,\cdots, T_4$ into the kinematic factor \reef{kin2}, one finds the amplitude \reef{amp3} for one RR $n$-form, one RR $(n-4)$-form and two NSNS states. We have checked that the amplitude satisfies the Ward identity corresponding to the NSNS gauge transformations. The kinematic factor can be further simplified after specifying the NSNS states. As we will see in the next section, the amplitude is non-zero only when the two NSNS polarization tensors are antisymmetric. For the cases $n=m$ and $n=m+2$, the traces are calculated in appendix A.
To find the couplings which are produced by the amplitude \reef{amp3}, one has to expand the gamma functions in \reef{amp3} at low energy, {\it i.e.,}\
\begin{eqnarray}
\frac{\Gamma(-s/8)\Gamma(-t/8)\Gamma(-u/8)}{\Gamma(1+s/8)\Gamma(1+t/8)\Gamma(1+u/8)}&=&-\frac{2^9}{stu}-2\zeta(3)-\frac{s^2+su+u^2}{32}\zeta(5)+\cdots\labell{expa}
\end{eqnarray}
where the dots refer to higher order contact terms. The first term corresponds to the massless poles in the Feynman amplitude of two RR and two NSNS fields, which are reproduced by the supergravity couplings. We have done this calculation in appendix B. All other terms correspond to the on-shell higher-derivative couplings of two RR and two NSNS fields in momentum space, {\it i.e.,}\
\begin{eqnarray}
\cA_c&=&-i\frac{\kappa^2e^{-2\phi_0}}{8}\left( -2\zeta(3)-\frac{s^2+su+u^2}{32}\zeta(5)+\cdots\right)\cK\labell{contact}
\end{eqnarray}
Since the above amplitude contains only contact terms, one should be able to rewrite it in terms of the RR and the NSNS field strengths.
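As a numerical spot check of the expansion \reef{expa} (our own sketch), one can compare the exact Gamma-function ratio with the first three terms at a small generic kinematic point with $u=-s-t$:

```python
from mpmath import mp, gamma, zeta, mpf

mp.dps = 40
s, t = mpf('1e-3'), mpf('2e-3')   # a small generic kinematic point
u = -s - t                        # s + t + u = 0

exact = (gamma(-s/8)*gamma(-t/8)*gamma(-u/8)
         / (gamma(1+s/8)*gamma(1+t/8)*gamma(1+u/8)))

approx = -2**9/(s*t*u) - 2*zeta(3) - (s**2 + s*u + u**2)/32*zeta(5)

# the remainder is of higher order in the momenta
print(exact - approx)
```

Note that with $s+t+u=0$ one has $s^2+su+u^2=\frac{1}{2}(s^2+t^2+u^2)$, so the $\zeta(5)$ term is symmetric in the three Mandelstam variables.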
Moreover, the contact terms \reef{contact} should satisfy the T-dual and S-dual Ward identities as well \cite{Garousi:2011we}-\cite{Velni:2012sv}. The couplings of two RR field strengths and two Riemann curvatures/B-field strengths at eight-derivative level have been found in \cite{Garousi:2013lja} by imposing the above Ward identities on the four generalized Riemann curvature couplings \cite{Gross:1986mw}. In the next section we will compare those couplings with the corresponding contact terms in \reef{contact}.
\section{Gravity and B-field couplings}
In this section we are going to simplify the kinematic factor in \reef{contact} for the cases where the NSNS states are either gravitons or B-fields, and compare the results with the couplings that have been found in \cite{Garousi:2013lja}. These couplings have the structures $F^{(n)}F^{(n)}RR$, $F^{(n)}F^{(n)}HH$, $F^{(n)}F^{(n-2)}RH$ and $F^{(n)}F^{(n-4)}HH$, where $R$ stands for the Riemann curvature and $H$ stands for the derivative of the B-field strength. The coupling with structure $F^{(n)}F^{(n-4)}RR$ has been found to be zero. Using the explicit form of the $T_i$ tensors in \reef{tn=m+4}, we have found that $\cK=0$ when the two NSNS polarization tensors are symmetric and traceless. Therefore, there is no on-shell higher-derivative coupling between one RR field strength $F^{(n)}$, one $F^{(n-4)}$ and two gravitons, as expected.
\subsection{$F^{(n)}F^{(n)}RR$}
To find the contact terms with structure $F^{(n)}F^{(n)}RR$, one should first simplify the kinematic factors in \reef{kin3} when the two NSNS polarization tensors are symmetric and traceless. One should then use the explicit form of the tensors $T_1,\cdots, T_4$ calculated in \reef{n=m}. Using the totally antisymmetric property of the RR field strengths and taking the on-shell relations into account, one can write the kinematic factor \reef{kin2} as
\begin{eqnarray}
\cK &=& \frac{n}{2^6} \bigg[(n-1) s \bigg((n-2) s F_{12}^{\alpha \beta \lambda \mu \nu \rho} {h}_{3\lambda \rho } {h}_{4\beta \nu } {k}_{4\alpha } {k}_{4\mu }+F_{12}^{\alpha \beta \mu \nu} [-{h}_{3\beta \nu } (t {k}_2^{\rho }-u {k}_1^{\rho }) \nn\\
&& \times({h}_{4\mu \rho } {k}_{4\alpha }-{h}_{4\alpha \rho } {k}_{4\mu })-u {h}_{3\nu}{}^{ \rho } {k}_{4\alpha } ({h}_{4\beta \rho } {k}_{4\mu }-{h}_{4\beta \mu } {k}_{4\rho })+t {h}_{3\beta}{}^{ \rho } {k}_{4\mu } \nn\\
&& \times({h}_{4\alpha \nu } {k}_{4\rho }-{h}_{4\nu \rho } {k}_{4\alpha })]\bigg)+F_{12}^{\alpha \mu} \bigg({h}_{3\alpha \mu } {h}_4^{\nu \rho } [u^2 {k}_{1\nu } {k}_{1\rho }+t {k}_{2\nu } (t {k}_{2\rho }-2 u {k}_{1\rho })]\nn\\
&&+u {h}_{3\mu}{}^{\nu } (t {k}_2^{\rho }-u {k}_1^{\rho }) ({h}_{4\nu \rho } {k}_{4\alpha }-{h}_{4\alpha \rho } {k}_{4\nu }) +t [-{h}_{3\alpha}{}^{\nu } (t {k}_2^{\rho }-u {k}_1^{\rho })({h}_{4\nu \rho } {k}_{4\mu }-{h}_{4\mu \rho } {k}_{4\nu })\nn\\
&& -u {h}_3^{\nu \rho } ({h}_{4\nu \rho } {k}_{4\alpha } {k}_{4\mu }-{k}_{4\nu } ({h}_{4\mu \rho } {k}_{4\alpha }+{h}_{4\alpha \rho } {k}_{4\mu }-{h}_{4\alpha \mu } {k}_{4\rho }))]\bigg)\bigg]\labell{FnR}
\end{eqnarray}
where $ h_3 $, $ h_4 $ are the graviton polarizations and $1\leq n\leq 9$. In order to compare the above kinematic factor with the eight-derivative couplings found in \cite{Garousi:2013lja}, we have to write the couplings in both cases in terms of independent variables. To this end, we first write the RR field strengths in the above kinematic factor in terms of the RR potential. Then, using the conservation of momentum and the on-shell relations, one may write it in terms of the momenta $k_1,k_2,k_3$ and the independent Mandelstam variables $s,u$. Moreover, one should write $k_3\!\cdot\! h_4=-k_1\!\cdot\! h_4-k_2\!\cdot\! h_4$ and $ h_4\!\cdot\! k_3=- h_4\!\cdot\! k_1-h_4\!\cdot\! k_2$ to rewrite the kinematic factor in terms of the independent variables. The Ward identity corresponding to the gauge transformations and the symmetry of the amplitude under the interchanges $ 1 \leftrightarrow 2 $ and $ 3 \leftrightarrow 4 $ can easily be verified in this form. Transforming the $F^{(n)}F^{(n)}RR$ couplings found in \cite{Garousi:2013lja} to momentum space and following the same steps to write them in terms of the independent variables, we have found exact agreement between \reef{FnR} and the couplings $F^{(n)}F^{(n)}RR$ for $ n=1,2,3,4,5 $.
One can easily extend the couplings with structure $F^{(5)}F^{(5)}RR$ to $F^{(n)}F^{(n)}RR$ with $6\leq n\leq 9$. Starting from the $F^{(5)}F^{(5)}RR$ couplings, one can perform the dimensional reduction on a circle, $y\sim y+2\pi$, and find the 9-dimensional couplings with structure $F^{(5)}F^{(5)}RR$ which have no Killing index. Under the linear T-duality, these couplings then transform to couplings with structure $F_y^{(6)}F_y^{(6)}RR$. Following \cite{Garousi:2013lja}, one can easily complete the $y$-index and find the 10-dimensional couplings with structure $F^{(6)}F^{(6)}RR$. They produce the correct couplings since it is impossible to have $F^{(6)}F^{(6)}RR$ couplings in which the RR field strengths have no contraction. Repeating the above steps, one finds $F^{(n)}F^{(n)}RR$ with $6\leq n\leq 9$.
\subsection{$F^{(n)}F^{(n)}HH$}
To find the contact terms with structure $F^{(n)}F^{(n)}HH$, one should first simplify the kinematic factors in \reef{kin3} when the two NSNS polarization tensors are antisymmetric. Then, one should use the explicit form of the tensors $T_1,\cdots, T_4$ in \reef{n=m}. Using the totally antisymmetric property of the RR field strengths and taking the on-shell relations into account, one can write the kinematic factor \reef{kin2} in this case as
\begin{eqnarray}
\cK &=& \frac{1}{2^8}b_{3\alpha \beta } \bigg[2 n (t^2+u^2) b_4^{\alpha \beta } F_{12}^{\lambda \rho} k_{4\lambda } k_{4\rho }+4 b_4{}^{\beta}{}_{\lambda } \bigg((u-t) F_{12} (t k_2^{\lambda }-u k_1^{\lambda }) k_4^{\alpha }\nn\\
&&+n [t F_{12}^{\mu \alpha} (t k_2^{\lambda }-u k_1^{\lambda })+u^2 F_{12}^{\mu \lambda} k_4^{\alpha }] k_{4\mu }+n [t (t F_{12}^{\lambda \rho} k_4^{\alpha }-(n-1) s F_{12}^{\lambda \mu \alpha \rho} k_{4\mu })\nn\\
&&-u (F_{12}^{\alpha \rho} (t k_2^{\lambda }-u k_1^{\lambda })+(n-1) s F_{12}^{\alpha \mu \lambda \rho} k_{4\mu })] k_{4\rho }\bigg)+n b_{4\lambda \mu } \bigg(2 (t k_2^{\lambda }-u k_1^{\lambda })\nn\\
&& \times [-2 t F_{12}^{\mu \beta} k_4^{\alpha }+(n-1) s F_{12}^{\mu \nu \alpha \beta} k_{4\nu }]+2 u k_4^{\alpha } [2 F_{12}^{\beta \mu} (t k_2^{\lambda }-u k_1^{\lambda })+(n-1) s\nn\\
&& \times F_{12}^{\beta \nu \lambda \mu} k_{4\nu }]+(n-1) s [ (2 t F_{12}^{\lambda \mu \beta \rho} k_4^{\alpha }+(n-2) s F_{12}^{\lambda \mu \nu \alpha \beta \rho} k_{4\nu })-2 F_{12}^{\alpha \beta \mu \rho}\nn\\
&& \times (t k_2^{\lambda}-u k_1^{\lambda })+(n-2) s F_{12}^{\alpha \beta \nu \lambda \mu \rho} k_{4\nu }] k_{4\rho }\bigg)\bigg]\labell{FnH}
\end{eqnarray}
where $ b_3 $, $ b_4 $ are the B-field polarization tensors.
Writing the above kinematic factor and the couplings $F^{(n)}F^{(n)}HH$ found in \cite{Garousi:2013lja} in terms of the independent variables, we have found that they are exactly identical for $ n=1,2,3,4,5 $. Using the consistency of the couplings with the linear T-duality, one can easily extend the $F^{(5)}F^{(5)}HH$ couplings to $F^{(n)}F^{(n)}HH$ with $6\leq n\leq 9$.
\subsection{$F^{(n)}F^{(n-2)}HR$}
To find the contact terms with structure $F^{(n)}F^{(n-2)}HR$, one should first simplify the kinematic factors in \reef{kin3} when one of the NSNS polarization tensors is symmetric and traceless, and the other one is antisymmetric. Then, one should use the explicit form of the tensors $T_1,\cdots, T_4$ in \reef{tn=m+2}. In this case, one finds
\begin{eqnarray}
\cK &=& -\frac{1}{2^7}{b}_{3\alpha \beta } \bigg[F_{12}^{\alpha \beta} {h}_4^{\nu \rho } [u^2 {k}_{1\nu } {k}_{1\rho }+t {k}_{2\nu } (t {k}_{2\rho }-2 u {k}_{1\rho })]+\bigg(2 u^2 F_{12}^{\nu \rho } {h}_4{}^{\beta}{}_{\rho } {k}_4^{\alpha }\nn\\
&&+(n-2) s [(n-3) s F_{12}^{\alpha \beta \nu \rho \lambda \mu} {h}_{4\mu \rho } {k}_{4\lambda }+2 u F_{12}^{\beta \nu \rho \lambda} ({h}_{4\lambda \rho } {k}_4^{\alpha }-{h}_4{}^{\alpha}{}_{ \rho } {k}_{4\lambda })]\bigg) {k}_{4\nu }\nn\\
&&+2 u F_{12}^{\beta \nu } (t {k}_2^{\rho }-u {k}_1^{\rho }) ({h}_{4\nu \rho } {k}_4^{\alpha }-{h}_4{}^{\alpha}{}_{\rho } {k}_{4\nu })-(n-2) s F_{12}^{\alpha \beta \nu \lambda} (t {k}_2^{\rho }-u {k}_1^{\rho }) \nn\\
&& \times({h}_{4\nu \rho } {k}_{4\lambda }-{h}_{4\lambda \rho } {k}_{4\nu })\bigg]\labell{Fn-2HR}
\end{eqnarray}
Writing the above kinematic factor and the couplings $F^{(n)}F^{(n-2)}HR$ found in \cite{Garousi:2013lja} in terms of the independent variables, we have found that they are exactly identical for $ n=3,4,5 $.
Here also one can use the dimensional reduction on the $F^{(5)}F^{(3)}HR$ couplings and consider the 9-dimensional couplings $F^{(5)}F^{(3)}HR$ which have no $y$-index. Under the linear T-duality one finds $F_y^{(6)}F_y^{(4)}HR$. Since it is impossible to have couplings $F^{(6)}F^{(4)}HR$ in which the RR field strengths have no contraction with each other, one can find all $F^{(6)}F^{(4)}HR$ couplings by completing the $y$-index in the above $F_y^{(6)}F_y^{(4)}HR$ couplings. So the couplings corresponding to the kinematic factor \reef{Fn-2HR} for $6\leq n\leq 9$ can easily be read from the $F^{(5)}F^{(3)}HR$ couplings.
\subsection{$F^{(n)}F^{(n-4)}HH$}
To find the contact terms with structure $F^{(n)}F^{(n-4)}HH$, one should first simplify the kinematic factors in \reef{kin3} when the NSNS polarization tensors are antisymmetric. Then, one should use the explicit form of the $T_i$ tensors in \reef{tn=m+4}. In this case, one finds
\begin{eqnarray}
\cK &=& \frac{s}{2^8} {b}_{3\alpha \beta } {b}_{4\lambda \mu }{k}_{4\rho } \bigg[2 F_{12}^{\alpha \beta \mu \rho } (t {k}_2^{\lambda }-u {k}_1^{\lambda }) +2 u F_{12}^{\beta \lambda \mu \rho } {k}_4^{\alpha }+(n-4) s F_{12}^{\alpha \beta \lambda \mu \rho \nu } {k}_{4\nu }\bigg] \labell{Fn-4H}
\end{eqnarray}
Writing it in terms of the independent variables, one finds that it is invariant under the interchange of $ 3 \leftrightarrow 4 $, and it satisfies the Ward identity corresponding to the B-field gauge transformation. The minimum value of $n$ is 5, so we consider $n=5$ and compare it with the couplings with structure $F^{(1)}F^{(5)}HH$.
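Explicitly, the Ward identity check amounts to replacing the B-field polarization by a pure gauge configuration; with an arbitrary (hypothetical) gauge parameter $\zeta_{\mu}$, the linearized gauge transformation is, up to normalization,
\begin{eqnarray}
b_{4\lambda\mu}\ \longrightarrow\ k_{4\lambda}\zeta_{\mu}-k_{4\mu}\zeta_{\lambda}\ ,\nonumber
\end{eqnarray}
and under this replacement the kinematic factor \reef{Fn-4H}, written in terms of the independent variables, vanishes for arbitrary $\zeta_{\mu}$.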
The $F^{(1)}F^{(5)}HH$ couplings have been found in \cite{Garousi:2013lja} by using the dimensional reduction on the couplings with structure $F^{(2)}F^{(4)}HR$, which have been verified by explicit calculation in the previous section. Under the T-duality, the couplings with structure $F^{(2)}_yF^{(4)}HR_y$, where the index $y$ is the Killing index, transform to the couplings with structure $F^{(1)}F^{(5)}_yHH_y$. One can complete the $y$-index to find the couplings with structure $F^{(1)}F^{(5)}HH$ in the string frame\footnote{Note that in writing the field theory couplings we have used only lowercase indices, and the repeated indices are contracted with the metric $g_{\mu\nu}$.}, {\it i.e.,}\
\begin{eqnarray}
S &\supset& \frac{\gamma}{\kappa^2} \int d^{10}x\sqrt{-G}\big[ 8 F_{h,k} F_{{mnpqr},s} H_{{hpq},m} H_{{krs},n}\labell{F1F5}\\&&\qquad\qquad\qquad\quad+4 F_{h,k} F_{{kmnpq},r} H_{{mns},h} H_{{pqr},s}-2 F_{h,k} F_{{kmnpq},h} H_{{mns},r} H_{{pqr},s}
\big]\nn
\end{eqnarray}
where $\gamma=\alpha'^3\zeta(3)/2^5$. There is one extra term in the couplings found in \cite{Garousi:2013lja} which is zero on-shell. Moreover, there is an extra factor of $-1/2$ in the above action which results from completing the Killing $y$-index and from the fact that there are two B-field strengths in the couplings with structure $F^{(1)}F^{(5)}_yHH_y$.
Transforming the above action to the momentum space, and writing the couplings in terms of the independent variables, we have found exact agreement with the kinematic factor in \reef{Fn-4H}.
One can use the dimensional reduction on the above $F^{(1)}F^{(5)}HH$ couplings and consider the 9-dimensional couplings $F^{(1)}F^{(5)}HH$ which have no $y$-index. Under the linear T-duality one finds $F_y^{(2)}F_y^{(6)}HH$. Completing the $y$-index, one finds all couplings in which the RR field strengths have contraction with each other. In this case, however, it is also possible to have $F^{(2)}F^{(6)}HH$ couplings in which the RR field strengths have no contraction with each other. We add all such couplings with unknown coefficients and constrain them to be consistent with the kinematic factor \reef{Fn-4H}. We find the following result for the $F^{(2)}F^{(6)}HH$ couplings:
\begin{eqnarray}
S &\supset& \frac{\gamma}{\kappa^2} \int d^{10}x\sqrt{-G}\big[8 F_{ht,k} F_{mnpqrt,s} H_{hpq,m} H_{krs,n}+8 F_{hm,n} F_{nkpqrs,t} H_{hkp,q} H_{mrs,t}\nn\\&&\qquad\qquad\qquad+4 F_{ht,k} F_{kmnpqt,r} H_{mns,h} H_{pqr,s}-2 F_{ht,k} F_{kmnpqt,h} H_{mns,r} H_{pqr,s}\big]\labell{new1}
\end{eqnarray}
The first term and the couplings in the second line are the couplings that can be read from the T-duality of the couplings \reef{F1F5}. The second term is the coupling in which the RR field strengths have no contraction with each other, hence, it could not be read from the T-duality of \reef{F1F5}.
The couplings with structure $F^{(n-4)}F^{(n)}HH$ for $n>6$ can easily be read from the T-duality of the couplings \reef{new1} because it is impossible to have such couplings in which the RR field strengths have no contraction with each other. The result is
\begin{eqnarray}
S &\!\!\!\supset\!\!\!& \frac{1}{(n-5)!}\frac{\gamma}{\kappa^2} \int d^{10}x\sqrt{-G}\big[8 F_{ha_1\cdots a_{n-5},k} F_{mnpqra_1\cdots a_{n-5},s} H_{hpq,m} H_{krs,n}\nn\\&&-2 F_{ha_1\cdots a_{n-5},k} F_{kmnpqa_1\cdots a_{n-5},h} H_{mns,r} H_{pqr,s}+4 F_{ha_1\cdots a_{n-5},k} F_{kmnpqa_1\cdots a_{n-5},r} H_{mns,h} H_{pqr,s}\nn\\&&\qquad\qquad\qquad\qquad\qquad+8(n-5) F_{hma_1\cdots a_{n-6},n} F_{nkpqrsa_1\cdots a_{n-6},t} H_{hkp,q} H_{mrs,t}\big]\labell{new2}
\end{eqnarray}
The number of extra indices $a_1\cdots$ carried by the RR field strengths is such that the total number of indices of $F^{(n)}$ is $n$. For example, $F_{nkpqrsa_1\cdots a_{n-6},t}$ is $F_{nkpqrs,t}$ for $n=6$ and is zero for $n<6$; in particular, for $n=5$ the last term in \reef{new2} drops out and the action reduces to the couplings \reef{F1F5}. We have checked that the above couplings are consistent with the kinematic factor \reef{Fn-4H} for $n\geq 5$.
\section{Dilaton couplings}
In this section we simplify the kinematic factor in \reef{contact} for the cases in which one or both of the NSNS states are dilatons. One has to use \reef{dilpol} for the dilaton polarization. There are three cases to consider. The first case is when one of the polarizations is the dilaton and the other one is antisymmetric. The kinematic factor in this case is non-zero for $n=m+2$, so the non-zero couplings have structure $F^{(n)}F^{(n-2)}H\phi$ where $\phi$ stands for the second derivatives of the dilaton. The second case is when one of the polarization tensors is the dilaton and the other one is symmetric and traceless. The kinematic factor in this case is non-zero for $n=m$, so the non-zero couplings have structure $F^{(n)}F^{(n)}R\phi$. The third case is when both of the NSNS polarizations are dilatons; the kinematic factor is again non-zero only for $n=m$, and the couplings have structure $F^{(n)}F^{(n)}\phi\phi$. In all cases, we have found that the auxiliary vector $\ell$ of the dilaton polarization \reef{dilpol} cancels in the kinematic factors, as expected. Let us begin with the first case.
\subsection{$F^{(n)}F^{(n-2)}H\phi$}
Replacing the tensors $T_i$ for $ n=m+2 $ case which are calculated in \reef{tn=m+2}, into \reef{kin3}, one finds the following result for the kinematic factor \reef{kin2} when one of the polarization tensors is antisymmetric and the other is the dilaton polarization \reef{dilpol}:
\begin{eqnarray}
\cK &=& -\frac{(n-7) s-2 t}{2^8 \sqrt{2} }\phi_3 {b}_{4\alpha \beta } {k}_{4\nu } \bigg[2 F_{12}^{\beta \nu } (t {k}_2^{\alpha }-u {k}_1^{\alpha })+(n-2) s F_{12}^{\alpha \beta \nu \lambda} {k}_{4\lambda }\bigg] \labell{FnHD}
\end{eqnarray}
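The auxiliary vector $\ell$ of the dilaton polarization has canceled in \reef{FnHD}, as anticipated. For orientation, the dilaton polarization \reef{dilpol} takes, in common conventions, the form
\begin{eqnarray}
h_{\mu\nu}\ \sim\ \frac{\phi}{2\sqrt{2}}\left(\eta_{\mu\nu}-k_{\mu}\ell_{\nu}-k_{\nu}\ell_{\mu}\right)\ ,\qquad k\!\cdot\!\ell=1\ ,\nonumber
\end{eqnarray}
with the precise normalization fixed by \reef{dilpol}; the cancellation of the $\ell$-dependent terms is a nontrivial check that the result depends only on $\phi$ and the momenta.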
As has been discussed already, the external states in the contact terms \reef{contact} are in the Einstein frame whereas the background metric $\eta_{\mu\nu}$ is in the string frame. Hence, to find the appropriate couplings in a specific frame, one has to either transform the external graviton states to the string frame or transform the background metric $\eta_{\mu\nu}$ to the Einstein frame, {\it i.e.,}\ $e^{\phi_0/2}\eta_{\mu\nu}$. We choose the latter transformation to rewrite the contact terms \reef{contact} in the Einstein frame.
Since the couplings are in the Einstein frame, a natural question is whether the string frame couplings $ F^{(n)} F^{(n-2)} H R $ produce all the couplings $ F^{(n)} F^{(n-2)} H \phi $ in the Einstein frame. In fact the transformation of the Riemann curvature from the string frame to the Einstein frame is \cite{Garousi:2012jp}
\begin{eqnarray}
{R}_{ab}{}^{cd} & \Longrightarrow & R_{ab}{}^{cd}-\frac{\kappa}{\sqrt{2}}\eta_{[a}{}^{[c}\phi_{;b]}{}^{d]}+\cdots\labell{transf}
\end{eqnarray}
where $\phi$ is the perturbation of the dilaton, {\it i.e.,}\ $\Phi=\phi_0+\sqrt{2}\kappa \phi$. In the above equation the dots refer to terms with two dilatons, in which we are not interested here. We have transformed the string frame couplings $ F^{(n)} F^{(n-2)} H R $ to the Einstein frame and found that the resulting $ F^{(n)} F^{(n-2)} H \phi $ couplings are not consistent with the kinematic factor in \reef{FnHD}. This indicates that there must be some new couplings with structure $ F^{(n)} F^{(n-2)} H \phi $ already in the string frame. The combination of these couplings and the couplings with structure $ F^{(n)} F^{(n-2)} H R $ must then be consistent with \reef{FnHD} when transformed to the Einstein frame. This constraint can be used to find the dilaton couplings. We will find the couplings in both the string and Einstein frames for $n\leq 5$. In section 5, we extend the string frame couplings to $1\leq n \leq 9$.
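The transformation \reef{transf} can be traced to the Weyl rescaling relating the two frames; assuming the standard ten-dimensional relation between the string frame and Einstein frame metrics,
\begin{eqnarray}
g^{s}_{\mu\nu}=e^{(\Phi-\phi_0)/2}\,g^{E}_{\mu\nu}=e^{\kappa\phi/\sqrt{2}}\,g^{E}_{\mu\nu}\ ,\nonumber
\end{eqnarray}
expanding the string frame Riemann curvature to linear order in $\phi$ produces, in addition to the Einstein frame curvature, the second-derivative dilaton term displayed in \reef{transf}.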
The new couplings in the string frame must be consistent with the linear T-duality. So we will first find the new couplings for the case of $n=5$ by using the consistency with \reef{FnHD}, and then find the couplings for the other values $n\leq 4$ by using the consistency with the linear T-duality. To find the string frame couplings with structure $ F^{(5)} F^{(3)} H \phi $, we consider all possible on-shell contractions of terms with structure $ F^{(5)} F^{(3)} H \phi $ with unknown coefficients. This can be performed using the field theory Mathematica package ``xTras'' \cite{CS}. Transforming the combination of these couplings and the couplings with structure $ F^{(5)} F^{(3)} H R $ found in \cite{Garousi:2013lja} to the Einstein frame, and constraining them to be consistent with the kinematic factor \reef{FnHD}, one finds some relations between the unknown coefficients. Replacing these relations into the general couplings with structure $ F^{(5)} F^{(3)} H \phi $, one finds the following couplings in the string frame:
\begin{eqnarray}
S &\supset& - \frac{4 }{3} \frac{\gamma}{\kappa^2}\int d^{10}x \sqrt{-G} \big[ 6 H_{h}{}_{rs}{}_{,m} F_{knqrs}{}_{,p}
F_{mnp}{}_{,q} + 3 H_{h}{}_{rs}{}_{,m} F_{knprs}{}_{,q}
F_{mnp}{}_{,q} \labell{mF5F3} \\
&&\qquad\qquad\qquad\qquad- H_{h}{}_{rs}{}_{,k} F_{mnprs}{}_{,q}
F_{mnp}{}_{,q} - 6 F_{knprs}{}_{,q} F_{mnp}{}_{,q} H_{hm}{}_{r}{}_{,s} \nn \\
&&\qquad\qquad\qquad\qquad+ 6
F_{mnp}{}_{,q} F_{knpqr}{}_{,s} H_{hm}{}_{r}{}_{,s} + 3
F_{h}{}_{mn}{}_{,p} F_{mnpqr}{}_{,s} H_{k}{}_{qr}{}_{,s}
\big]\Phi_{,hk}\nn
\end{eqnarray}
There are also some other terms which contain the remaining unknown coefficients. However, these terms vanish when we write the field strengths in terms of the corresponding potentials and use the on-shell relations to write the result in terms of the independent variables. That means these terms are canceled using the Bianchi identities for the RR and B-field strengths and the on-shell relations, so they can safely be set to zero. We refer the reader to section 4.3 in which a similar calculation is done in more detail. Since we have considered all contractions without completely fixing the Bianchi identities, the above couplings are unique only up to the use of the Bianchi identities. That is, one may find another action which is related to the above action by the Bianchi identities and the on-shell relations.
Having found the string frame couplings $ F^{(n)} F^{(n-2)} H \phi $ for $n=5$ in \reef{mF5F3}, we now apply the T-duality transformations to them to find the corresponding couplings for other $n$. We use the dimensional reduction on the couplings \reef{mF5F3} and find the couplings with structure $F^{(5)}_yF^{(3)}_y H\phi$. Under the linear T-duality transformations, the RR field strength $F^{(n)}_y$ transforms to $F^{(n-1)}$ with no Killing index, the B-field with no $y$-index is invariant, and the perturbation of the dilaton transforms as (see {\it e.g.,}\ \cite{Garousi:2013lja})
\begin{eqnarray}
\phi\rightarrow \phi-\frac{1}{\sqrt{2}}h_{yy}\labell{delT}
\end{eqnarray}
where $h_{\mu\nu}$ is the metric perturbation, {\it i.e.,}\ $g_{\mu\nu}=\eta_{\mu\nu}+2\kappa h_{\mu\nu}$. The couplings in $F^{(5)}_yF^{(3)}_y H\phi$ corresponding to the second term above should be canceled by couplings with structure $F^{(4)}F^{(2)} HR_{yy}$. The terms corresponding to the first term in \reef{delT} have structure $F^{(4)}F^{(2)}H\phi$. Applying this to all terms in \reef{mF5F3} leads to the following couplings in the string frame:
\begin{eqnarray}
S &\supset& - 4 \frac{\gamma}{\kappa^2}\int d^{10}x \sqrt{-G} \big[ 2 H_{h}{}_{rs}{}_{,m} F_{kqrs}{}_{,p}
F_{mp}{}_{,q} + 2 H_{h}{}_{rs}{}_{,m} F_{kprs}{}_{,q}
F_{mp}{}_{,q} \labell{mF4F2}\\
&&\qquad\qquad\qquad\qquad- H_{h}{}_{rs}{}_{,k} F_{mprs}{}_{,q}
F_{mp}{}_{,q} - 4 F_{kprs}{}_{,q} F_{mp}{}_{,q} H_{hm}{}_{r}{}_{,s} \nn \\
&&\qquad\qquad\qquad\qquad+ 4
F_{mp}{}_{,q} F_{kpqr}{}_{,s} H_{hm}{}_{r}{}_{,s} - 2
F_{h}{}_{m}{}_{,p} F_{mpqr}{}_{,s} H_{k}{}_{qr}{}_{,s}
\big]\Phi_{,hk}\nn
\end{eqnarray}
The transformation of the combination of the above couplings and the couplings with structure $F^{(4)}F^{(2)} HR$ found in \cite{Garousi:2013lja} to the Einstein frame is the following:
\begin{eqnarray}
S &\supset& \frac{\gamma}{\kappa^2}\int d^{10}x \sqrt{-G} e^{-\phi_0}\big[-2 F_{hm,n} F_{mnpq,r} H_{kpq,r} \Phi_{,hk}-2 F_{hm,n} F_{nkpq,r} H_{mpq,r} \Phi_{,hk}\nn\\&&+6 F_{hm,n} F_{npqr,k} H_{mpq,r} \Phi_{,hk}-8 F_{hm,n} F_{mkpq,r} H_{npq,r} \Phi_{,hk}+12 F_{hm,n} F_{mkpr,q} H_{npq,r} \Phi_{,hk}\nn\\&&+\frac{2}{3} F_{hm,n} F_{mkpq,r} H_{kpq,r} \Phi_{,hn}+12 F_{hm,n} F_{mnqr,p} H_{hkq,r} \Phi_{,kp}-6 F_{hm,n} F_{mnqr,p} H_{hqr,k} \Phi_{,kp}\nn\\&&+6 F_{hm,n} F_{mkqr,p} H_{hqr,n} \Phi_{,kp}\big]\labell{EF4F2HD}
\end{eqnarray}
We have checked that it is consistent with \reef{FnHD} for $n=4$. In writing the above result we have used the Bianchi identities and the on-shell relations to simplify the couplings. However, one may still use these identities to rewrite the above couplings in a simpler form. It would be interesting to find the minimum number of terms in which the Bianchi identities have been used completely.
Having found the string frame couplings $ F^{(n)} F^{(n-2)} H \phi $ for $n=4$ in \reef{mF4F2}, we now apply the T-duality transformations on them to find the corresponding couplings for $n=3$. To find the couplings with structure $F^{(3)}F^{(1)}H\phi$, one has to use the dimensional reduction on the couplings \reef{mF4F2} and find the couplings with structure $F^{(4)}_yF^{(2)}_y H\phi$. Then under the linear T-duality transformations, they transform to the couplings with structure $F^{(3)}F^{(1)} H\phi$. In the string frame they are given by
\begin{eqnarray}
S &\supset& - 8 \frac{\gamma}{\kappa^2}\int d^{10}x \sqrt{-G} \Phi_{,hk}\big[ H_{h}{}_{rs}{}_{,m} F_{krs}{}_{,q}
F_{m}{}_{,q} - H_{h}{}_{rs}{}_{,k} F_{mrs}{}_{,q}
F_{m}{}_{,q}\labell{mF3F1} \\
&&\qquad\qquad\qquad\qquad\qquad - 2 F_{krs}{}_{,q} F_{m}{}_{,q} H_{hm}{}_{r}{}_{,s} + 2
F_{m}{}_{,q} F_{kqr}{}_{,s} H_{hm}{}_{r}{}_{,s} +
F_{h}{}_{}{}_{,p} F_{pqr}{}_{,s} H_{k}{}_{qr}{}_{,s}
\big]\nn
\end{eqnarray}
We have checked that the transformation of the combination of the above couplings and the couplings with structure $F^{(3)}F^{(1)} HR$, to the Einstein frame produces exactly the couplings which are consistent with \reef{FnHD} for $n=3$.
\subsubsection{Consistency with the S-duality}
The dilaton couplings in \reef{mF4F2} are in the type IIA theory whereas the couplings in \reef{mF5F3} and \reef{mF3F1} are in the type IIB theory. The effective action in the type IIB theory should be invariant under the S-duality, as a result, the couplings in \reef{mF5F3} and \reef{mF3F1} should be consistent with the S-duality. The standard S-duality transformations are in the Einstein frame, so we have to transform the couplings with structure $F^{(n)}F^{(n-2)} HR$ and $F^{(n)}F^{(n-2)} H\phi$ to the Einstein frame and then study their compatibility with the S-duality for $n=5,3$.
The extension of the couplings in \reef{F1F5} to the S-duality invariant form has been found in \cite{Garousi:2013lja}. Apart from the overall dilaton factor in the Einstein frame which is extended to the $SL(2,{Z})$ invariant Eisenstein series $E_{3/2}$, each coupling should be extended to the $SL(2,{R})$ invariant form, {\it e.g.,}\
the first term in \reef{F1F5} is extended to
\begin{eqnarray}
\cH^T_{h q r,m}\cM_{,nk}\cN^{-1}\cM_0\cH_{m p s,k}&=&2F_{n,k}(H_{h q r,m} H_{m p s,k}-e^{2\phi_0}F_{h q r,m} F_{m p s,k})\labell{ex}\\
&&+\sqrt{2}\kappa\phi_{,nk}(H_{h q r,m} F_{m p s,k}+F_{h q r,m} H_{m p s,k})+\cdots\nonumber
\end{eqnarray}
where dots refer to the terms with non-zero axion background in which we are not interested. We refer the interested reader to \cite{Garousi:2013lja} for the definitions of $\cH,\,\cM$ and $\cN$. The terms in \reef{F1F5} correspond to the first term in the above $SL(2,{R})$ invariant set.
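Although the precise definitions of $\cH$, $\cM$ and $\cN$ are those of \cite{Garousi:2013lja}, it may help to recall the commonly used form of these objects, which we quote here up to possible normalization differences:
\begin{eqnarray}
\cH=\left(\begin{array}{c}H\\ F^{(3)}\end{array}\right)\ ,\qquad \cN=\left(\begin{array}{cc}0&1\\ -1&0\end{array}\right)\ ,\qquad \cM=e^{\Phi}\left(\begin{array}{cc}|\tau|^{2}&C\\ C&1\end{array}\right)\ ,\nonumber
\end{eqnarray}
where $\tau=C+ie^{-\Phi}$, and $\cM_0$ is $\cM$ evaluated at the constant background. Under $\Lambda\in SL(2,{R})$ one has $\cM\rightarrow\Lambda\cM\Lambda^{T}$ and $\cH\rightarrow(\Lambda^{-1})^{T}\cH$, which, together with $\Lambda\cN\Lambda^{T}=\cN$, makes the invariance of the structure in \reef{ex} manifest.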
The terms in the S-duality invariant action which correspond to the second line of \reef{ex} have structure $F^{(5)}F^{(3)}H\phi$. We have checked explicitly that these terms are reproduced exactly by transforming the couplings \reef{mF5F3} and the couplings with structure $F^{(5)}F^{(3)} HR$ (see eq.(35) in \cite{Garousi:2013lja}) to the Einstein frame. In other words, these couplings are fully consistent with the kinematic factor \reef{FnHD} for $n=5$. The couplings corresponding to the last term in \reef{ex} are the S-duality prediction for the four RR couplings with structure $F^{(5)}F^{(1)}F^{(3)}F^{(3)}$.
The transformation of the couplings \reef{mF3F1} and the couplings with structure $F^{(3)}F^{(1)} HR$ (see eq.(37) in \cite{Garousi:2013lja}) to the Einstein frame, produces the following couplings with structure $F^{(3)}F^{(1)} H\phi$ in the Einstein frame:
\begin{eqnarray}
S &\supset& \frac{\gamma}{\kappa^2}\int d^{10}x \sqrt{-G} e^{-\phi_0/2}\big[-\frac{4}{3} F_{h,m} F_{nkp,q} H_{nkp,q} \Phi_{,hm}\labell{F3F1}\\&&+8 F_{h,m} F_{kpq,n} H_{mkp,q} \Phi_{,hn}+4 F_{h,m} F_{nkp,q} H_{mkp,q} \Phi_{,hn}+4 F_{h,m} F_{mkp,q} H_{nkp,q} \Phi_{,hn}\nn\\&&-8 F_{h,m} F_{npq,k} H_{hpq,m} \Phi_{,nk}-16 F_{h,m} F_{hpq,n} H_{mkp,q} \Phi_{,nk}+8 F_{h,m} F_{hpq,n} H_{mpq,k} \Phi_{,nk}\big]\nn
\end{eqnarray}
which are consistent with the kinematic factor \reef{FnHD} for $n=3$. Using the on-shell relations, one finds that the above amplitude is invariant under the transformation
\begin{eqnarray}
F^{(3)}\longrightarrow H&;& H\longrightarrow -F^{(3)}
\end{eqnarray}
It is also invariant under the following transformation:
\begin{eqnarray}
F^{(1)}\longrightarrow d\Phi&;&d\Phi\longrightarrow -F^{(1)}
\end{eqnarray}
Using these properties, one can extend the amplitude \reef{F3F1} to the S-duality invariant form.
The dilaton factor in \reef{F3F1} can be rewritten as $e^{-3\phi_0/2}\times e^{\phi_0}$. The first factor is extended to the $SL(2,{Z})$ invariant function $E_{3/2}$ after including the one-loop result and the nonperturbative effects \cite{Green:1997tv}. The second factor combines with the dilaton and the RR scalar to produce the following $SL(2,{R})$ invariant term:
\begin{eqnarray}
e^{\phi_0}(F_{h,k}\Phi_{,mn}-F_{m,n}\Phi_{,hk})
\end{eqnarray}
Using the standard $SL(2,{R})$ transformation of the dilaton and the RR scalar, {\it i.e.,}\ $\tau\longrightarrow \frac{p\tau+q}{r\tau+s}$ where $\tau=C+ie^{-\Phi}$, one finds the above term is invariant under the $SL(2,{R})$ transformation\footnote{It has been observed in \cite{Kamani:2013tv} that $e^{\Phi}F^{(1)}\wedge d\Phi$ is invariant under the ${Z}_2$ subgroup of the $SL(2,{R})$ group.}. The RR two-form and the B-field should appear in the following $SL(2,{R})$ invariant term:
\begin{eqnarray}
\cH^T_{mnq}{}_{,p}\cN \cH_{mnp}{}_{,q}&=&H_{mnq}{}_{,p} F_{mnp}{}_{,q}- F_{mnq}{}_{,p} H_{mnp}{}_{,q}
\end{eqnarray}
Therefore, the $SL(2,{Z})$ invariant extension of the action \reef{F3F1} has no coupling other than $F^{(3)}F^{(1)} H\phi$.
\subsection{$F^{(n)}F^{(n)}R\phi$}
Replacing the tensors $T_i$ for $ n=m $ case which are calculated in \reef{tn=m}, into \reef{kin3}, one finds the following result for the kinematic factor \reef{kin2} when one of the polarization tensors is symmetric and traceless and the other one is the dilaton polarization \reef{dilpol}:
\begin{eqnarray}
\cK &=& \frac{\phi_3}{2^7 \sqrt{2} }(n-5) \bigg[F_{12} {h}_4^{\mu \nu } \bigg(u^2 {k}_{1\mu } {k}_{1\nu }+t {k}_{2\mu } (t {k}_{2\nu }-2 u {k}_{1\nu })\bigg)\nn\\
&& +n s \bigg((n-1) s F_{12}^{\alpha \beta \mu \nu} {h}_{4\beta \nu } {k}_{4\alpha } {k}_{4\mu }-F_{12}^{\alpha \mu} (t {k}_2^{\nu }-u {k}_1^{\nu }) ({h}_{4\mu \nu } {k}_{4\alpha }-{h}_{4\alpha \nu } {k}_{4\mu })\bigg)\bigg]\labell{FnRD}
\end{eqnarray}
The kinematic factor is zero for $n=5$, so there is no higher-derivative coupling between two $F^{(5)}$, one graviton and one dilaton in the Einstein frame. This result is consistent with the S-duality because $F^{(5)}$ and the graviton in the Einstein frame are invariant under the S-duality whereas the dilaton is not.
Now we are going to find the couplings with structure $ F^{(n)} F^{(n)} R \phi $ for $ n=1,2,3,4,5 $ in the string frame. To this end, we have to first transform the string frame couplings with structure $ F^{(n)} F^{(n)} R R $ which have been found in \cite{Garousi:2013lja}, to the Einstein frame. If they do not produce the kinematic factor \reef{FnRD}, then one has to consider new couplings with structure $ F^{(n)} F^{(n)} R \phi $. So let us begin with the case of $n=5$. Transforming the couplings with structure $ F^{(5)} F^{(5)} R R $ (see eq.(27) in \cite{Garousi:2013lja}) to the Einstein frame, we have found that they produce the couplings with structure $ F^{(5)} F^{(5)} R \phi $ which are not zero, {\it i.e.,}\ they are not consistent with \reef{FnRD}. As a result, one has to consider new couplings in the string frame with structure $ F^{(5)} F^{(5)} R \phi $ to cancel them. We consider all such on-shell couplings with unknown coefficients, and constrain them to cancel the above $ F^{(5)} F^{(5)} R \phi $ couplings. This constraint produces some relations between the coefficients. Replacing them into the general couplings with structure $ F^{(5)} F^{(5)} R \phi $, one finds the following couplings:
\begin{eqnarray}
S &\supset& - \frac{4 }{3} \frac{\gamma}{\kappa^2}\int d^{10}x \sqrt{-G} \big[ - R_{h}{}_{mnp} \
F_{k}{}_{qrst}{}_{,n} F_{mpqrs}{}_{,t} - R_{h}{}_{mnp} F_{mqrst}{}_{,p} F_{kn}{}_{qrs}{}_{,t} \labell{mF5RD}\\
&& + 3 R_{h}{}_{mnp} F_{mpqrt}{}_{,s} \
F_{kn}{}_{qrs}{}_{,t} - 3 R_{h}{}_{mnp} \
F_{mpqrs}{}_{,t} F_{kn}{}_{qrs}{}_{,t} + R_{h}{}_{m}{}_{k}{}_{n} F_{npqrt}{}_{,s} \
F_{m}{}_{pqrs}{}_{,t} \big]\Phi_{,hk}\nn
\end{eqnarray}
There are also some other terms which contain the remaining unknown coefficients. However, these terms vanish when we write the field strengths in terms of the corresponding potentials and use the on-shell relations to write the result in terms of the independent variables. As a result, these terms can safely be set to zero. We refer the reader to section 4.3 in which a similar calculation is done in more detail.
Using the above couplings in the string frame, one can perform the dimensional reduction on a circle and find the couplings with structure $F^{(5)}_yF^{(5)}_y R\phi$. Under the linear T-duality, they produce the following couplings with structure $F^{(4)}F^{(4)} R\phi$ in the type IIA theory:
\begin{eqnarray}
S &\supset& - 4 \frac{\gamma}{\kappa^2}\int d^{10}x \sqrt{-G} \big[ R_{h}{}_{mnp} \
F_{k}{}_{rst}{}_{,n} F_{mprs}{}_{,t} + R_{h}{}_{mnp} F_{mrst}{}_{,p} F_{kn}{}_{rs}{}_{,t} \labell{mF4RD}\\
&& + 2 R_{h}{}_{mnp} F_{mprt}{}_{,s} \
F_{kn}{}_{rs}{}_{,t} - 3 R_{h}{}_{mnp} \
F_{mprs}{}_{,t} F_{kn}{}_{rs}{}_{,t} + R_{h}{}_{m}{}_{k}{}_{n} F_{nprt}{}_{,s} \
F_{m}{}_{prs}{}_{,t} \big]\Phi_{,hk}\nn
\end{eqnarray}
The transformation of the above couplings and the couplings with structure $F^{(4)}F^{(4)} RR$ (see eq.(26) in \cite{Garousi:2013lja}), to the Einstein frame produces the following couplings:
\begin{eqnarray}
S &\!\!\!\supset\!\!\!& \frac{\gamma}{\kappa^2}\int d^{10}x \sqrt{-G} e^{-3\phi_0/2}\big[3 F_{mkqr,s} F_{npqr,s} R_{hnkp} \Phi_{,hm}- F_{mqrs,k} F_{npqr,s} R_{hnkp} \Phi_{,hm}\labell{EF4F4RD}\\&&-2 F_{mkqr,s} F_{npqs,r} R_{hnkp} \Phi_{,hm}- F_{mkqr,s} F_{nqrs,p} R_{hnkp} \Phi_{,hm}- F_{kpqs,r} F_{npqr,s} R_{hnmk} \Phi_{,hm}\big]\nn
\end{eqnarray}
which are exactly consistent with \reef{FnRD} for $n=4$.
The couplings with structure $F^{(3)}F^{(3)} R\phi$ in the string frame can be found from the couplings \reef{mF4RD} by applying the dimensional reduction and keeping the terms with structure $F^{(4)}_yF^{(4)}_yR\phi$. Under T-duality they produce the following couplings in the type IIB theory:
\begin{eqnarray}
S &\supset& - 8 \frac{\gamma}{\kappa^2}\int d^{10}x \sqrt{-G} \big[ - R_{h}{}_{mnp} \
F_{k}{}_{st}{}_{,n} F_{mps}{}_{,t} - R_{h}{}_{mnp} F_{mst}{}_{,p} F_{kn}{}_{s}{}_{,t} \labell{mF3RD}\\
&& + R_{h}{}_{mnp} F_{mpt}{}_{,s} \
F_{kn}{}_{s}{}_{,t} - 3 R_{h}{}_{mnp} \
F_{mps}{}_{,t} F_{kn}{}_{s}{}_{,t} + R_{h}{}_{m}{}_{k}{}_{n} F_{npt}{}_{,s} \
F_{m}{}_{ps}{}_{,t} \big]\Phi_{,hk}\nn
\end{eqnarray}
We have checked that the transformation of the above couplings and the couplings with structure $F^{(3)}F^{(3)} RR$ (see eq.(20) in \cite{Garousi:2013lja}) to the Einstein frame is exactly consistent with \reef{FnRD} for $n=3$. The S-duality transformations of these couplings are discussed in the next section.
Having found the couplings with structure $F^{(3)}F^{(3)}R\phi$ in \reef{mF3RD}, we now construct the couplings with structure $F^{(2)}F^{(2)}R\phi$ in the type IIA theory. Upon dimensional reduction of \reef{mF3RD}, the terms with structure $F^{(3)}_yF^{(3)}_yR\phi$ produce, under the T-duality, the following couplings:
\begin{eqnarray}
S &\supset& - 8 \frac{\gamma}{\kappa^2}\int d^{10}x \sqrt{-G} \big[ R_{h}{}_{mnp} \
F_{k}{}_{t}{}_{,n} F_{mp}{}_{,t} + R_{h}{}_{mnp} F_{mt}{}_{,p} F_{kn}{}_{}{}_{,t} \labell{mF2RD}\\
&&\qquad\qquad\qquad\qquad- 3 R_{h}{}_{mnp} \
F_{mp}{}_{,t} F_{kn}{}_{}{}_{,t} + R_{h}{}_{m}{}_{k}{}_{n} F_{nt}{}_{,s} \
F_{m}{}_{s}{}_{,t} \big]\Phi_{,hk}\nn
\end{eqnarray}
Note that in the first term in the second line of \reef{mF3RD} there is no contraction between the two RR field strengths. Hence, this term does not produce a coupling with structure $F^{(3)}_yF^{(3)}_yR\phi$, which is why it does not appear in \reef{mF2RD}. The transformation of the above couplings and the couplings with structure $F^{(2)}F^{(2)} RR$ (see eq.(21) in \cite{Garousi:2013lja}) to the Einstein frame is given by the following couplings:
\begin{eqnarray}
S &\supset& 12\frac{\gamma}{\kappa^2}\int d^{10}x \sqrt{-G} e^{-\phi_0/2}\big[ F_{hm,n} F_{kp,q} R_{mkpq} \Phi_{,hn}- F_{hm,n} F_{kp,q} R_{mnpq} \Phi_{,hk}\labell{EF2F2RD}\\&&\qquad\qquad\qquad\qquad\qquad- F_{hm,n} F_{kp,n} R_{mkpq} \Phi_{,hq}- F_{hm,n} F_{nk,p} R_{mkpq} \Phi_{,hq}\big]\nn
\end{eqnarray}
which are exactly consistent with \reef{FnRD} for $n=2$.
There is no contraction between the RR field strengths in \reef{mF2RD}, hence, the dimensional reduction on a circle does not produce couplings with structure $F^{(2)}_yF^{(2)}_yR\phi$. As a result, the linear T-duality indicates that there is no new coupling with structure $F^{(1)}F^{(1)}R\phi$ in the string frame. We have checked that the transformation of the couplings with structure $F^{(1)}F^{(1)} RR$ (see eq.(22) in \cite{Garousi:2013lja}) to the Einstein frame is consistent with \reef{FnRD} for $n=1$. In fact both are zero in this case.
\subsubsection{Consistency with the S-duality}
The Einstein frame couplings $F^{(n)}F^{(n)}R\phi$ for $n=1,3,5$ are in the type IIB theory, so they should be consistent with the S-duality. We have seen that for $n=1,5$ the couplings are zero, which is consistent with the S-duality because it is impossible to construct an $SL(2,{R})$ invariant term from one dilaton, or from one dilaton and two RR scalars. Note that the dilaton and the axion in $E_{3/2}$ are constant, so we cannot consider the derivative of $E_{3/2}$, which would produce $\partial\phi\, e^{-3\phi/2}$ at weak coupling. In fact the contact terms in \reef{contact} represent the couplings of four quantum states in the presence of a constant dilaton background. It is nontrivial to extend the amplitude \reef{amp3} to a non-constant dilaton background, {\it i.e.,}\ to take into account the derivatives of the dilaton background; that amplitude would produce higher-point functions.
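Recall the weak coupling expansion of the Eisenstein series which underlies this discussion; with $\Im\tau=e^{-\phi_0}$ one has \cite{Green:1997tv}
\begin{eqnarray}
E_{3/2}=2\zeta(3)\,e^{-3\phi_0/2}+\frac{2\pi^{2}}{3}\,e^{\phi_0/2}+\cdots\ ,\nonumber
\end{eqnarray}
where the dots denote the D-instanton contributions. The first term reproduces the tree-level factor $\gamma\propto\zeta(3)$, while the second term is the one-loop contribution.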
The couplings with structure $F^{(3)}F^{(3)}R\phi$, however, are not zero. That means it is possible to construct $SL(2,{R})$ invariant couplings which contain one dilaton and two RR 2-forms. In fact, the S-duality invariant couplings which include such couplings have been constructed in \cite{Garousi:2013lja}, {\it i.e.,}\
\begin{eqnarray}
S &\supset&\frac{\gamma}{\kappa^2 } \int d^{10}x E_{3/2}\sqrt{-G}\big[ 4 \cH^T_{h q r,n} \cM_{,rm}\cH_{k n p,h} R_{m p k q}-4 \cH^T_{n p r,h}\cM_{,mk} \cH_{h n q,r} R_{k q m p}\nonumber\\&&-4 \cH^T_{n p q,m} \cM_{,qh}\cH_{m p r,k} R_{k r h n}+4 \cH^T_{n p q,h} \cM_{,qm}\cH_{k n r,h} R_{m r k p}-2 \cH^T_{n p q,h} \cM_{,mh}\cH_{n p r,k} R_{m r k q}\nonumber\\&&+2 \cH^T_{m n q,h} \cM_{,rk} \cH_{n p q,k} R_{p r h m}-2 \cH^T_{m n p,k} \cM_{,rm} \cH_{n p q,h} R_{q r h k}\big]\labell{F1F3}
\end{eqnarray}
Each term is invariant under the $SL(2,{R})$ transformations. For zero axion background, each term has the following couplings:
\begin{eqnarray}
\cH^T_{h q r,n}\cM_{,rm}\cH_{k n p,h}R_{m p k q}&=&2e^{\phi_0}F_{r,m}(F_{h q r,n}H_{k n p,h}+H_{h q r,n}F_{k n p,h})R_{m p k q}\labell{HMH}\\
&&+\sqrt{2}\kappa\phi_{,rm}(e^{\phi_0}F_{h q r,n}F_{k n p,h}-e^{-\phi_0}H_{h q r,n}H_{k n p,h})R_{m p k q}\nonumber
\end{eqnarray}
The couplings corresponding to the terms in the first line of \reef{HMH} have been found in \cite{Garousi:2013lja}. The couplings corresponding to the last term above have been found in \cite{Garousi:2013tca}. The couplings corresponding to the first term in the second line of \reef{HMH} are the couplings $F^{(3)}F^{(3)}R\phi$ in the Einstein frame. We have checked explicitly that they are consistent with \reef{FnRD} for $n=3$.
\subsection{$F^{(n)}F^{(n)}\phi\phi$}
Replacing the tensors $T_i$ for the $n=m$ case, which are calculated in \reef{tn=m}, into \reef{kin3}, one finds the following result for the kinematic factor \reef{kin2} when both the NSNS polarization tensors are the dilaton polarization \reef{dilpol}:
\begin{eqnarray}
\cK &=& \frac{\phi_3\phi_4}{2^{11}} \bigg[(n-5) s t u F_{12}+4 n F_{12}^{\alpha \mu} \bigg((n-5) s u k_{1\mu } k_{4\alpha }\nn\\
&&\qquad\quad+(n-5) s t k_{2\alpha }k_{4\mu }+[(n-5)^2 s^2-8 t u] k_{4\alpha }k_{4\mu } \bigg) \bigg]\labell{FnD}
\end{eqnarray}
We are going to find the couplings with structure $ F^{(n)} F^{(n)} \phi \phi $ for $ n=1,2,3,4,5 $ which are consistent with the above kinematic factor.
For $n=5$, only the last term survives. The coupling in the Einstein frame is
\begin{eqnarray}
S &\supset& \frac{1}{6}\frac{\gamma}{\kappa^2} \int d^{10}x \sqrt{-G} e^{-3\phi_0/2} \big[
F_{h}{}_{pqrs}{}_{,m} F_{kpqrs}{}_{,n} \Phi_{,hk}\Phi_{,mn} \big]\labell{F5D1}
\end{eqnarray}
which can easily be extended to the S-duality invariant form. However, the couplings in the string frame, which can be studied under the linear T-duality, are not so easy to read from the kinematic factor \reef{FnD}. So we consider all possible on-shell couplings with structure $ F^{(5)} F^{(5)} \phi\phi $ in the string frame with unknown coefficients, {\it i.e.,}\
\begin{eqnarray}
S &\supset& \frac{1}{2}\frac{\gamma}{\kappa^2}\int d^{10}x \sqrt{-G} \big[C_{1} F_{nkpqr,s} F_{nkpqr,s} \Phi_{,hm} \Phi_{,hm}\nn\\
&&+C_{2} F_{nkpqr,s} F_{nkpqs,r} \Phi_{,hm} \Phi_{,hm}+C_{3} F_{mkpqr,s} F_{nkpqr,s} \Phi_{,hm} \Phi_{,hn}\nn\\
&&+C_{4} F_{mkpqr,s} F_{nkpqs,r} \Phi_{,hm} \Phi_{,hn}+C_{5} F_{kpqrs,n} F_{mkpqr,s} \Phi_{,hm} \Phi_{,hn}\nn\\
&&+C_{6} F_{kpqrs,m} F_{kpqrs,n} \Phi_{,hm} \Phi_{,hn}+C_{7} F_{hnpqr,s} F_{mkpqr,s} \Phi_{,hm} \Phi_{,nk}\nn\\
&&+C_{8} F_{hnpqr,s} F_{mkpqs,r} \Phi_{,hm} \Phi_{,nk}+C_{9} F_{hnpqr,s} F_{mpqrs,k} \Phi_{,hm} \Phi_{,nk}\nn\\
&&+C_{10} F_{hpqrs,n} F_{mpqrs,k} \Phi_{,hm} \Phi_{,nk}+C_{11} F_{hpqrs,m} F_{npqrs,k} \Phi_{,hm} \Phi_{,nk}\nn\\
&&+C_{12} F_{hpqrs,n} F_{kpqrs,m} \Phi_{,hm} \Phi_{,nk}\big]\labell{allcontractions}
\end{eqnarray}
and find the coefficients by imposing the constraint that the couplings in the Einstein frame are given by the above equation.
To find the coefficients, one has to consider the string frame couplings with structure $ F^{(5)} F^{(5)} R R $ which have been found in \cite{Garousi:2013lja}, and the string frame couplings with structure $ F^{(5)} F^{(5)} R\phi $ which have been found in \reef{mF5RD}. Both of them produce couplings with structure $ F^{(5)} F^{(5)} \phi\phi $ when transforming them to the Einstein frame \reef{transf}. Transforming all couplings to the Einstein frame and constraining them to be identical with the coupling \reef{F5D1}, one finds the following couplings in the string frame:
\begin{eqnarray}
S &\supset& \frac{\gamma}{\kappa^2}\int d^{10}x \sqrt{-G} \big[- \frac{1}{30} F_{mnpqr}{}_{,s} (F_{mnpqr}{}_{,s} \Phi_{,hk} - 10 F_{k}{}_{npqr}{}_{,s} \Phi_{,h}{}_{m})\Phi_{,hk}\big]\labell{mF5D}
\end{eqnarray}
plus the following terms, which contain some of the unknown coefficients:
\begin{eqnarray}
S &\supset& \frac{1}{2}\frac{\gamma}{\kappa^2}\int d^{10}x \sqrt{-G} \big[ C_{5} F_{kpqrs,n} F_{mkpqr,s} \Phi_{,hm} \Phi_{,hn}\nn\\
&&-\frac{1}{5} C_{2} F_{nkpqr,s} F_{nkpqr,s} \Phi_{,hm} \Phi_{,hm}+\frac{1}{10} C_{11} F_{nkpqr,s} F_{nkpqr,s} \Phi_{,hm}\Phi_{,hm}\nn\\
&&+C_{2} F_{nkpqr,s} F_{nkpqs,r} \Phi_{,hm} \Phi_{,hm}+C_{6} F_{kpqrs,m} F_{kpqrs,n} \Phi_{,hm} \Phi_{,hn}\nn\\
&&-C_{5} F_{mkpqr,s} F_{nkpqr,s} \Phi_{,hm} \Phi_{,hn}-5 C_{6} F_{mkpqr,s} F_{nkpqr,s} \Phi_{,hm} \Phi_{,hn}\nn\\
&&-2 C_{11} F_{mkpqr,s} F_{nkpqr,s} \Phi_{,hm} \Phi_{,hn}+4 C_{5} F_{mkpqr,s} F_{nkpqs,r} \Phi_{,hm} \Phi_{,hn}\nn\\
&&+20 C_{6} F_{mkpqr,s} F_{nkpqs,r} \Phi_{,hm} \Phi_{,hn}+C_{7} F_{mkpqr,s} F_{nkpqs,r} \Phi_{,hm} \Phi_{,hn}\nn\\
&&-\frac{1}{2} C_{9} F_{mkpqr,s} F_{nkpqs,r} \Phi_{,hm} \Phi_{,hn}+6 C_{11} F_{mkpqr,s} F_{nkpqs,r} \Phi_{,hm} \Phi_{,hn}\nn\\
&&-2 C_{12} F_{mkpqr,s} F_{nkpqs,r} \Phi_{,hm} \Phi_{,hn}+C_{12} F_{hpqrs,n} F_{kpqrs,m} \Phi_{,hm} \Phi_{,nk}\nn\\
&&+C_{7} F_{hnpqr,s} F_{mkpqr,s} \Phi_{,hm} \Phi_{,nk}+C_{8} F_{hnpqr,s} F_{mkpqs,r} \Phi_{,hm} \Phi_{,nk}\nn\\
&&+C_{9} F_{hnpqr,s} F_{mpqrs,k} \Phi_{,hm} \Phi_{,nk}-C_{11} F_{hpqrs,n} F_{mpqrs,k} \Phi_{,hm} \Phi_{,nk}\nn\\
&&-C_{12} F_{hpqrs,n} F_{mpqrs,k} \Phi_{,hm} \Phi_{,nk}+C_{11} F_{hpqrs,m} F_{npqrs,k} \Phi_{,hm} \Phi_{,nk}\big]\labell{all}
\end{eqnarray}
However, using the Bianchi identity and the on-shell relations, one finds that they are zero. To see this explicitly, consider for example the terms with coefficient $C_2$, {\it i.e.,}\
\begin{eqnarray}
-\frac{1}{5} C_{2}
F_{nkpqr,s}F_{nkpqr,s} \Phi_{,hm}\Phi_{,hm}+C_{2}F_{nkpqr,s}F_{nkpqs,r}\Phi_{,hm}\Phi_{,hm}
\end{eqnarray}
To apply the Bianchi identity we write the RR field strength in terms of the RR potential. To impose the on-shell relations, we first transform the couplings to the momentum space and then impose the on-shell relations. One finds
\begin{eqnarray}
-48 C_{2} (k_1.k_2)^2 {C}_{1hmnp} {C}_{2npqr} k_{1q} k_{1r} k_{2h} k_{2m}
\end{eqnarray}
which is easily seen to be zero using the total antisymmetry of the RR potential. A similar calculation shows that all the other terms in \reef{all} vanish.
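This cancellation can be illustrated numerically (our own sketch, not part of the original derivation): one builds random totally antisymmetric rank-4 tensors standing in for the RR potentials and contracts them as in the expression above. Since the antisymmetric index pair $(h,m)$ meets the symmetric product $k_{2h}k_{2m}$ (and likewise $(q,r)$ meets $k_{1q}k_{1r}$), the contraction vanishes up to rounding error.

```python
import itertools
import numpy as np

def parity(perm):
    """Sign of a permutation, computed from its inversion count."""
    inv = sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
              if perm[i] > perm[j])
    return -1 if inv % 2 else 1

def antisymmetrize(t):
    """Project a rank-4 tensor onto its totally antisymmetric part."""
    out = np.zeros_like(t)
    for perm in itertools.permutations(range(4)):
        out += parity(perm) * np.transpose(t, perm)
    return out / 24.0

rng = np.random.default_rng(0)
d = 6  # a small dimension is enough to exhibit the cancellation
C1 = antisymmetrize(rng.normal(size=(d,) * 4))  # stands in for the RR potential C_1
C2 = antisymmetrize(rng.normal(size=(d,) * 4))  # stands in for the RR potential C_2
k1, k2 = rng.normal(size=d), rng.normal(size=d)

# C1_{hmnp} C2_{npqr} k1_q k1_r k2_h k2_m: the pair (h, m) of C1 is
# antisymmetric while k2_h k2_m is symmetric (likewise (q, r) with k1),
# so the contraction vanishes identically.
val = np.einsum('hmnp,npqr,q,r,h,m->', C1, C2, k1, k1, k2, k2)
```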
We now apply the T-duality transformations on the couplings \reef{mF5D} to find the string frame couplings with structure $F^{(4)}F^{(4)}\phi\phi$ in the type IIA theory. To this end, we use the dimensional reduction on the couplings \reef{mF5D} and find the couplings with structure $ F^{(5)}_yF^{(5)}_y \phi\phi$. Under the linear T-duality transformations, they transform into the couplings with structure $ F^{(4)}F^{(4)}\phi\phi$ and some other terms involving $R_{yy}$ in which we are not interested. The couplings with structure $ F^{(4)}F^{(4)}\phi\phi$ are
\begin{eqnarray}
S &\supset& \frac{\gamma}{\kappa^2}\int d^{10}x \sqrt{-G} \big[- \frac{1}{6} F_{mnpq}{}_{,r} (F_{mnpq}{}_{,r} \Phi_{,hk} - \
8 F_{k}{}_{npq}{}_{,r} \Phi_{,h}{}_{m})\Phi_{,hk}\big]\labell{mF4D}
\end{eqnarray}
The transformation of the above couplings, the couplings in \reef{mF4RD} and the couplings with structure $F^{(4)}F^{(4)}RR$ (see eq.(26) in \cite{Garousi:2013lja}), to the Einstein frame produces the following couplings:
\begin{eqnarray}
S &\supset& \frac{1}{2}\frac{\gamma}{\kappa^2}\int d^{10}x \sqrt{-G} e^{-\phi_0} \big[-\frac{1}{48} F_{nkpq,r} F_{nkpq,r} \Phi_{,hm} \Phi_{,hm}\labell{EF4F4DD}\\&&\qquad\qquad\qquad\qquad\qquad+\frac{1}{6} F_{mkpq,r} F_{nkpq,r} \Phi_{,hm} \Phi_{,hn}+\frac{4}{3} F_{hpqr,n} F_{mpqr,k} \Phi_{,hm} \Phi_{,nk}\big]\nn
\end{eqnarray}
which are fully consistent with the kinematic factor \reef{FnD} for $n=4$. In writing the above result, we have used the Bianchi identity and the on-shell relations to simplify the result.
To find the string frame couplings with structure $F^{(3)}F^{(3)}\phi\phi$ in the type IIB theory, one has to use the dimensional reduction on the couplings \reef{mF4D} and find the couplings with structure $F^{(4)}_yF^{(4)}_y \phi\phi$. Then under the linear T-duality transformations, they transform to the following couplings with structure $F^{(3)}F^{(3)} \phi\phi$:
\begin{eqnarray}
S &\supset& \frac{\gamma}{\kappa^2}\int d^{10}x \sqrt{-G} \big[- \frac{2}{3}F_{mnp}{}_{,q} (F_{mnp}{}_{,q} \Phi_{,hk} - 6 \
F_{k}{}_{np}{}_{,q} \Phi_{,h}{}_{m})\Phi_{,hk}\big]\labell{mF3D}
\end{eqnarray}
We have checked that the transformation of the above couplings, the couplings in \reef{mF3RD} and the couplings with structure $F^{(3)}F^{(3)}RR$ (see eq.(20) in \cite{Garousi:2013lja}), to the Einstein frame produces the couplings with structure $ F^{(3)}F^{(3)}\phi\phi$ which are consistent with the kinematic factor \reef{FnD} for $n=3$. We will study the S-duality of these couplings in the next section.
Applying the T-duality transformations on the couplings \reef{mF3D}, one finds the following couplings in the type IIA theory in the string frame:
\begin{eqnarray}
S &\supset& \frac{\gamma}{\kappa^2}\int d^{10}x \sqrt{-G} \big[- 2F_{mn}{}_{,q} (F_{mn}{}_{,q} \Phi_{,hk} - 4 \
F_{k}{}_{n}{}_{,q} \Phi_{,h}{}_{m})\Phi_{,hk}\big]\labell{mF2D}
\end{eqnarray}
The transformation of the above couplings, the couplings in \reef{mF2RD} and the couplings with structure $F^{(2)}F^{(2)}RR$ (see eq.(21) in \cite{Garousi:2013lja}), to the Einstein frame produces the following couplings:
\begin{eqnarray}
S &\supset& \frac{1}{2}\frac{\gamma}{\kappa^2}\int d^{10}x \sqrt{-G} \big[9 F_{hk,p} F_{hm,n} \Phi_{,kp} \Phi_{,mn}+10 F_{hm,n} F_{nk,p} \Phi_{,hk} \Phi_{,mp}\labell{EF2F2DD}\\&&\qquad\qquad\qquad\quad-9 F_{hm,n} F_{hn,k} \Phi_{,kp} \Phi_{,mp}-F_{hk,p} F_{hm,n} \Phi_{,mp} \Phi_{,nk}
\big]\nn
\end{eqnarray}
which are consistent with the kinematic factor \reef{FnD} for $n=2$.
Finally, applying the T-duality transformations on the couplings \reef{mF2D}, one finds the following couplings in the type IIB theory in the string frame:
\begin{eqnarray}
S &\supset& \frac{\gamma}{\kappa^2}\int d^{10}x \sqrt{-G} \big[- 4F_{m}{}_{,q} (F_{m}{}_{,q} \Phi_{,hk} - 2 \
F_{k}{}_{}{}_{,q} \Phi_{,h}{}_{m})\Phi_{,hk}\big]\labell{mF1D}
\end{eqnarray}
We have checked that the transformation of the above couplings and the couplings with structure $F^{(1)}F^{(1)}RR$ (see eq.(22) in \cite{Garousi:2013lja}), to the Einstein frame produces couplings with structure $ F^{(1)}F^{(1)}\phi\phi$ which are consistent with the kinematic factor \reef{FnD} for $n=1$.
\subsubsection{Consistency with the S-duality}
The Einstein frame couplings $F^{(n)}F^{(n)}\phi\phi$ for $n=1,3,5$ are in the type IIB theory, so they should be consistent with the S-duality. For $n=5$, the coupling is given in \reef{F5D1}. The S-duality invariant extension of this coupling is
\begin{eqnarray}
S &\supset& -\frac{1}{12}\frac{\gamma}{\kappa^2}\int d^{10}x \sqrt{-G} E_{3/2} \bigg(
F_{h}{}_{pqrs}{}_{,m} F_{kpqrs}{}_{,n} \Tr[\cM_{,hk}\cM^{-1}_{,mn}] \bigg)\labell{F5D2}
\end{eqnarray}
where the $SL(2,{R})$ invariant combination of the dilaton and the RR scalar is
\begin{eqnarray}
-\frac{1}{4}\Tr[\cM_{,hk}\cM^{-1}_{,mn} ]&=&2e^{2\phi_0}F_{h,k}F_{m,n}+\kappa^2\phi_{,hk}\phi_{,mn}\labell{FF}
\end{eqnarray}
The second term corresponds to the coupling \reef{F5D1}. The first term is the S-duality prediction for the couplings of two RR 4-forms and two RR scalars.
To study the S-duality of couplings for $n=3$ case, consider the following S-duality invariant action that has been found in \cite{Garousi:2013lja}:
\begin{eqnarray}
S &\supset&\frac{\gamma}{\kappa^2} \int d^{10}x\sqrt{-G}E_{3/2}\bigg[\frac{1}{6} \Tr[\cM_{,nh}\cM^{-1}_{,nk} ] \cH^T_{m p r,h}\cM_0 \cH_{m p r,k}\labell{F1HS}\\&&-\frac{1}{2} \Tr[\cM_{,nh}\cM^{-1}_{,km} ] \cH^T_{m p r,k} \cM_0\cH_{n p r,h}-\frac{1}{2} \Tr[\cM_{,hm}\cM^{-1}_{,kn} ] \cH^T_{m p r,k} \cM_0\cH_{n p r,h}\bigg]\nonumber
\end{eqnarray}
where in the presence of zero axion background, the $SL(2,{R})$ invariant $\cH^T\cM_0\cH$ has the following terms:
\begin{eqnarray}
\cH^T\cM_0\cH&=&e^{-\phi_0}HH+e^{\phi_0}F^{(3)}F^{(3)}
\end{eqnarray}
Using \reef{FF} and the above expression, one finds that the S-duality invariant action \reef{F1HS} has four different types of terms: terms with structure $HH\phi\phi$, which have been verified by the corresponding S-matrix element in \cite{Garousi:2013tca}; terms with structure $HHF^{(1)}F^{(1)}$, which have been verified by the corresponding S-matrix element in section 3.2; terms with structure $F^{(3)}F^{(3)}\phi\phi$; and terms with structure $F^{(3)}F^{(3)}F^{(1)}F^{(1)}$. We have checked explicitly that the couplings in \reef{F1HS} with structure $F^{(3)}F^{(3)}\phi\phi$ are consistent with the kinematic factor \reef{FnD} for $n=3$. The couplings in \reef{F1HS} with structure $F^{(3)}F^{(3)}F^{(1)}F^{(1)}$ are the prediction of the S-duality for the couplings of two RR 2-forms and two RR scalars.
To study the S-duality of couplings for $n=1$ case, consider the following S-duality invariant action that has been found in \cite{Garousi:2013tca}:
\begin{eqnarray}
S \!\supset\!\frac{\gamma}{\kappa^2 } \int d^{10}x \sqrt{-G} E_{3/2}\bigg[ a\bigg(\frac{1}{4} \Tr[\cM_{,nm}\cM^{-1}_{,nm} ]\bigg)^2 +\frac{b}{16} \Tr[\cM_{,nm}\cM^{-1}_{,hk} ]\Tr[\cM_{,hk}\cM^{-1}_{,nm} ]\bigg]\labell{F1HS1}
\end{eqnarray}
where the constants $a,b$ satisfy the relation $a+b=1$. Using the expression \reef{FF}, one finds the above action has three different couplings. The couplings with structure $\phi\phi\phi\phi$ which have been verified by the S-matrix element of four dilatons in \cite{Garousi:2013tca} and the couplings with structures $F^{(1)}F^{(1)}\phi\phi$ and $F^{(1)}F^{(1)}F^{(1)}F^{(1)}$. We have found that the couplings with structure $F^{(1)}F^{(1)}\phi\phi$ are reproduced by the kinematic factor \reef{FnD} for $n=1$. This fixes the constants to be $a=-1$ and $b=2$. The couplings in \reef{F1HS1} with structure $F^{(1)}F^{(1)}F^{(1)}F^{(1)}$ are the prediction of the S-duality for the couplings of four RR scalars. These couplings have been confirmed in \cite{Garousi:2013tca} to be consistent with the linear T-duality of the couplings with structure $F^{(3)}F^{(3)}F^{(3)}F^{(3)}$.
\section{Discussion}
In this paper, we have examined in detail the calculation of the S-matrix element of two RR and two NSNS states in the RNS formalism to find the corresponding couplings at order $\alpha'^3$. For the gravity and B-field couplings, we have found perfect agreement with the eight-derivative couplings that have been found in \cite{Garousi:2013lja}. For the dilaton couplings in the Einstein frame, we have found that the couplings are fully consistent with the S-dual multiplets that have been found in \cite{Garousi:2013lja}. We have also found the couplings with structure $F^{(3)}F^{(1)}H\phi$, which are singlets under the $SL(2,{R})$ transformations.
Unlike the four NSNS couplings which have no dilaton in the string frame, we have found that there are non-zero couplings between the dilaton and the RR fields in the string frame. The couplings with structure $F^{(n)}F^{(n)}\phi\phi$ are the following:
\begin{eqnarray}
S &\!\!\!\supset\!\!\!& \frac{\gamma}{\kappa^2}\int d^{10}x \sqrt{-G} \bigg[\frac{8}{(n-1)!} F_{ka_1\cdots a_{n-1}}{}_{,s} F_{ma_1\cdots a_{n-1}}{}_{,s}\Phi_{,h}{}_{m}-\frac{4}{n!} F_{a_1\cdots a_n}{}_{,s} F_{a_1\cdots a_n}{}_{,s} \Phi_{,hk} \bigg]\Phi_{,hk} \nn
\end{eqnarray}
The couplings with structure $F^{(n)}F^{(n)}R\phi$ are the following:
\begin{eqnarray}
S &\supset& - \frac{8 }{(n-2)!} \frac{\gamma}{\kappa^2}\int d^{10}x \sqrt{-G} \bigg[ R_{h}{}_{mnp} \
F_{kt}{}_{a_1\cdots a_{n-2}}{}_{,n} F_{mpa_1\cdots a_{n-2}}{}_{,t} \labell{mFnRD}\\
&& + R_{h}{}_{mnp} F_{mta_1\cdots a_{n-2}}{}_{,p} F_{kn}{}_{a_1\cdots a_{n-2}}{}_{,t}+ (n-2) R_{h}{}_{mnp} F_{mpt a_1\cdots a_{n-3}}{}_{,s} \
F_{kns}{}_{a_1\cdots a_{n-3} }{}_{,t} \nn\\
&&- 3 R_{h}{}_{mnp} \
F_{mpa_1\cdots a_{n-2} }{}_{,t} F_{kn}{}_{a_1\cdots a_{n-2}}{}_{,t} + R_{h}{}_{m}{}_{k}{}_{n} F_{nta_1\cdots a_{n-2} }{}_{,s} \
F_{ms}{}_{a_1\cdots a_{n-2}}{}_{,t} \bigg]\Phi_{,hk}\nn
\end{eqnarray}
And the couplings with structure $F^{(n)}F^{(n-2)}H\phi$ are the following:
\begin{eqnarray}
S &\supset& - \frac{8}{(n-3)!} \frac{\gamma}{\kappa^2}\int d^{10}x \sqrt{-G} \bigg[ (n-3) H_{h}{}_{rs}{}_{,m} F_{kqrs a_1\cdots a_{n-4}}{}_{,p}
F_{mp a_1\cdots a_{n-4}}{}_{,q} \nn \\
&&+ H_{h}{}_{rs}{}_{,m} F_{krs a_1\cdots a_{n-3}}{}_{,q} \
F_{m a_1\cdots a_{n-3}}{}_{,q}- \frac{1}{(n-2)} H_{h}{}_{rs}{}_{,k} F_{rsa_1\cdots a_{n-2}}{}_{,q}
F_{a_1\cdots a_{n-2}}{}_{,q} \nn\\
&& - 2H_{hm}{}_{r}{}_{,s}F_{krs a_1\cdots a_{n-3}}{}_{,q} F_{ma_1\cdots a_{n-3} }{}_{,q} + 2H_{hm}{}_{r}{}_{,s} F_{kqr a_1\cdots a_{n-3}}{}_{,s}
F_{m a_1\cdots a_{n-3}}{}_{,q}\nn \\
&&\qquad\qquad\qquad\qquad\qquad\qquad+
H_{k}{}_{qr}{}_{,s}F_{pqr a_1\cdots a_{n-3}}{}_{,s} F_{h}{}_{a_1\cdots a_{n-3} }{}_{,p}
\bigg]\Phi_{,hk}\labell{mFnF3}
\end{eqnarray}
The number of indices $a_1\cdots a_{n-m}$ in the RR field strengths is such that the total number of the indices of $F^{(n)}$ must be $n$. For example, $F_{kqrs a_1\cdots a_{n-4}}{}_{,p}$ is $F_{kqrs}{}_{,p}$ for $n=4$ and is zero for $n<4$. We have shown that the transformation of the above couplings and the couplings with structure $F^{(n)}F^{(n-2)}HR$ and $F^{(n)}F^{(n)}RR$ to the Einstein frame are fully consistent with the S-duality and with the corresponding S-matrix elements. The above couplings have been found in this paper for $1\leq n \leq 5$. However, using the fact that for $6\leq n \leq 9$, it is impossible to have couplings in which the RR field strengths have no contraction with each other, one finds that the consistency of the couplings with the linear T-duality requires the above couplings to be extended to $1\leq n \leq 9$.
Using the pure spinor formalism, the S-matrix element of two NSNS and two RR states has been also calculated in \cite{Policastro:2006vt}. The kinematic factor in this amplitude is given by
\begin{eqnarray}
\sum_{M,N}u^{ijmnpqm'n'p'q'a_1\cdots a_{M}b_1\cdots b_N}\bar{R}_{mnm'n'}\bar{R}_{pqp'q'}F_{a_1\cdots a_M,i}F_{b_1\cdots b_N,j}
\end{eqnarray}
where $\bar{R}$ is the generalized Riemann curvature \reef{trans}, and the tensor $u$ is given in terms of the trace of the gamma matrices as
\begin{eqnarray}
u^{ijmnpqm'n'p'q'a_1\cdots a_{M}b_1\cdots b_N}&=&-32\frac{c_{M}c_{N}}{M!N!}\bigg[2\veps_Ng^{mp}g^{m'q'}g^{ip}g^{j(n'}g^{p')k}\Tr(\gamma^n\gamma^{a_1\cdots a_M}\gamma_k\gamma^{b_1\cdots b_{N}})\nn\\
&&\qquad\qquad -(\veps_M+ \veps_N)g^{m'q'}g^{iq}g^{j(n'}g^{p')k}\Tr(\gamma^{mnp}\gamma^{a_1\cdots a_M}\gamma_k\gamma^{b_1\cdots b_{N}})\nn\\
&&\qquad\qquad +\frac{1}{2}\veps_Ng^{i[q|}g^{jp'}\Tr(\gamma^{|mnp]}\gamma^{a_1\cdots a_M}\gamma^{m'n'p'}\gamma^{b_1\cdots b_{N}})\bigg]
\end{eqnarray}
where $c_p^2=(-1)^{p+1}/16\sqrt{2}$ and $\veps_N=(-1)^{\frac{1}{2}N(N-1)}$. Writing the above kinematic factor in terms of the independent variables, we have checked that it is exactly identical to the kinematic factor \reef{kin2} for the cases $N=M$ and $N=M-4$. However, for the case $N=M-2$ the above result is different from \reef{kin2}. In fact, the factor $(\veps_M+ \veps_N)$ in the second line above is zero for $N=M-2$, whereas the corresponding kinematic term in \reef{kin2}, which is $\cK_2+\cK_3$, is non-zero. We think there must be a typo in the above amplitude, {\it i.e.,}\ the factor $(\veps_M+ \veps_N)$ should be $(i^{N-M}\veps_M+\veps_N)$. With this modification, we find agreement with the kinematic factor \reef{kin2} even for $N=M-2$.
The S-duality invariant couplings in sections 4.1.1 and 4.3.1 predict various couplings for four RR fields. These couplings may be confirmed by a detailed study of the S-matrix element of four RR vertex operators. This S-matrix element has been calculated in \cite{Policastro:2006vt} in the pure spinor formalism. This amplitude can also be calculated in the RNS formalism using the KLT prescription \reef{KLT} and the S-matrix element of four massless open string spinors, which has been calculated in \cite{Schwarz:1982jn}. In both formalisms the amplitude involves various traces of the gamma matrices which have to be performed explicitly; one can then compare the eight-derivative couplings with the four RR couplings predicted by the S-duality. We leave the details of this calculation to future work.
The consistency of the NSNS couplings with the on-shell linear T-duality and S-duality has been used in \cite{Garousi:2013lja} and in the present paper to find various four-field couplings involving the RR fields. Since the four-point function \reef{contact} has only contact terms, the Ward identities corresponding to the T-duality and the S-duality of the scattering amplitude appear as the on-shell linear dualities in the four-field couplings. On the other hand, one may require the higher derivative couplings to be consistent with the nonlinear T-duality and S-duality without using the on-shell relations. This may be used to find the eight-derivative couplings involving more than four fields. A step in this direction has been taken in \cite{Garousi:2013qka} to find the gravity and dilaton couplings which are consistent with the off-shell S-duality. It would be interesting to extend the four-field on-shell couplings found in \cite{Garousi:2013lja} and in the present paper to couplings which are invariant under the off-shell T-duality and S-duality.
{\bf Acknowledgments}: This work is supported by Ferdowsi University of Mashhad under grant 3/27102-1392/02/25.
\section{Introduction}
\label{intro}
\cite{eberhart1995new} introduced the particle swarm optimization algorithm (PSO) based on social interactions (behaviors of birds). Since then, PSO has gained great popularity in many domains and given birth to many variants of the original algorithm (see \citealt{zhang2015comprehensive} for a survey of variants and applications). PSO is a stochastic metaheuristic that solves an optimization problem without any evaluation of the gradient. The algorithm explores the search space in an intelligent way thanks to a population of particles interacting with each other and updating at each step their positions and velocities. The dynamics of the particles rely on two attractors: their personal best position (the historical best position of the particle, denoted $p_{n}^{s}$ below) and the neighborhood best position (corresponding to the social component of the particles, denoted $g_{n}^{s}$). In the dynamic equation, the attractors are combined with a stochastic process in order to explore the search space. Algorithm \ref{algo:1} describes the classical version of PSO with $S$ particles and $N$ iterations.
\begin{algorithm}[H]
\begin{algorithmic}
\State Initialize the swarm of $S$ particles with random positions $x_{0}^{s}$ and velocities $v_{0}^{s}$ over the search space.
\For{$n=1$ to $N$}
\State Evaluate the optimization fitness function for each particle.
\State Update $p^{s}_n$ (personal best position) and $g^{s}_n$ (neighborhood best position).
\State Change velocity ($v_{n}^{s}$) and position ($x_{n}^{s}$) according to the dynamic equation.
\EndFor
\end{algorithmic}
\caption{Classical PSO}
\label{algo:1}
\end{algorithm}
The convergence and stability analysis of PSO are important matters.
In the literature, there are two kinds of convergence:
\begin{itemize}
\item the convergence of the particles towards a local or global optimum. This convergence is not obtained with the classical version of PSO. \cite{van2010convergence} and \cite{schmitt2015particle} proposed a modified version of PSO to obtain the convergence.
\item the convergence of each particle to a point (e.g. \citealt{poli2009mean}).
\end{itemize}
If we focus on the convergence of each particle to a point, a prerequisite is the stability of the trajectory of the particles. In a deterministic setting, \cite{clerc2002particle} dealt with the stability of the particles under some conditions on the parametrization of PSO. Later, \cite{kadirkamanathan2006stability} used the Lyapunov stability theorem to study the stability. Regarding the convergence of PSO, \cite{van2006study} looked at the trajectories of the particles and proved that each particle converges to a stable point (deterministic analysis). Under stagnation hypotheses (no improvement of the personal and neighborhood best positions), \cite{poli2009mean} gave an exact formula for the second moment. More recently, \cite{bonyadi2016stability} and \cite{cleghorn2018particle} provided results for the order-1 and order-2 stabilities under, respectively, stagnant and non-stagnant distribution assumptions (both weaker than the stagnation hypotheses).
Let us introduce some notations. We consider here a cost function $f:\mathbb{R}^{d}\rightarrow\mathbb{R}^{+}$
that should be minimized on a compact set $\mathrm{\Omega}$. Consequently the particles
evolve in $\mathrm{\Omega}\subset\mathbb{R}^{d}$.
Let $x_{n}^{s}\in\mathbb{R}^{d}$, $1\leq s\leq S$, denote the position of
particle number $s$ in the swarm at step $n$. Let $\left( r_{j,n}\right)
_{j=1,2,\;n\geq1}$ be sequences of independent random vectors in $\mathbb{R}^{d}$ whose margins obey a uniform
distribution over $\left[ 0,1\right] $ and denote by $\omega,$ $c_{1}$ and
$c_{2}$ three positive constants which will be discussed later. Then the PSO
algorithm considered in the sequel is defined by the two following equations (or dynamic equation, \citealt{poli2009mean}):
\begin{equation}
\left\{
\begin{aligned} v_{n+1}^{s}&=\omega \cdot v_{n}^{s}+ c_{1}r_{1,n}\odot\left(p_{n}^{s}-x_{n}^{s}\right)+c_{2}r_{2,n}\odot\left(g_{n}^{s}-x_{n}^{s}\right),\\ x_{n+1}^{s}&=x_{n}^{s}+v_{n+1}^{s}. \\ \end{aligned}\right.
\label{pso-def}
\end{equation}
where $\odot$ stands for the Hadamard product:
\[
u\odot v=\left( u_{1}v_{1},...,u_{d}v_{d}\right),
\]
and $p_{n}^{s}$ (resp. $g_{n}^{s}$) is the best personal position (resp. the best neighborhood position of particle $s$):
\begin{align*}
p_{n}^{s} &=\argmin_{t \in \lbrace x_{0}^{s},\ldots, x_{n}^{s}\rbrace}f\left( t\right), \\
g_{n}^{s}&=\argmin_{t \in \lbrace p_{n}^{s^{\prime}}: s^{\prime}\in\mathcal{V}(s)\rbrace }f\left( t\right).
\end{align*}
with $\mathcal{V}(s)$ the neighborhood of particle $s$. This neighborhood depends on the swarm's topology: if the topology is called global (all the particles communicate with each other), then $g_{n}^{s}=g_{n}=\argmin_{t \in \lbrace p_{n}^{1},\ldots, p_{n}^{S} \rbrace }f\left( t\right)$ (see \citealt{lane2008particle}).
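As an illustration, the dynamic equation above with the global topology can be sketched in a few lines of Python (a minimal implementation of our own; the parameter values, the test function, and the absence of boundary handling on $\mathrm{\Omega}$ are illustrative choices, not taken from the paper):

```python
import numpy as np

def pso(f, lower, upper, n_particles=30, n_iter=200,
        omega=0.729, c=1.494, seed=0):
    """Classical PSO with global topology and c1 = c2 = c."""
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    d = lower.size
    x = rng.uniform(lower, upper, size=(n_particles, d))  # positions x_n^s
    v = np.zeros((n_particles, d))                        # velocities v_n^s
    p = x.copy()                                          # personal bests p_n^s
    fp = np.array([f(xi) for xi in x])
    g = p[fp.argmin()].copy()                             # global best g_n
    for _ in range(n_iter):
        r1 = rng.uniform(size=(n_particles, d))           # r_{1,n}
        r2 = rng.uniform(size=(n_particles, d))           # r_{2,n}
        v = omega * v + c * r1 * (p - x) + c * r2 * (g - x)
        x = x + v
        fx = np.array([f(xi) for xi in x])
        improved = fx < fp
        p[improved], fp[improved] = x[improved], fx[improved]
        g = p[fp.argmin()].copy()
    return g, fp.min()

# illustrative run on the sphere function f(t) = ||t||^2, minimized at 0
g_hat, f_hat = pso(lambda t: float(np.sum(t * t)), [-5.0, -5.0], [5.0, 5.0])
```

With the common parametrization $\omega\approx0.729$, $c\approx1.494$, the swarm quickly concentrates around the minimizer of the sphere function.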
Our main objective is to provide (asymptotic) confidence sets for the global or local optimum of the cost function $f$. If $g=\argmin_{t \in \mathrm{\Omega}}f\left(t\right)$ for some domain $\mathrm{\Omega}$, a confidence region for $g$ is a random set $\Lambda$ such that:\[
\mathbb{P}\left(g \in \Lambda \right) \geq 1-\alpha
\]
for some small $\alpha \in \left(0,1\right)$. The set $\Lambda$ depends on the swarm and is consequently random due to the random evolution of the particles. The probability symbol above, $\mathbb{P}$, depends on the distribution of the particles in the swarm. Let us illustrate the use of a confidence interval for a real-valued PSO. Typically, the kind of result we expect is: $g \in \left[m,M\right]$ with a probability larger than, say, 99\%. This does not aim at yielding a precise estimate for $g$ but defines a ``control area'' for $g$, as well as a measure of the imprecision and variability of PSO.
Convergence of the swarm will not be an issue here. In fact, we assume that the personal and global bests converge: see assumptions $\mathbf{A_2}$ and $\mathbf{B_3}$ below. We are interested in the ``step after'': localizing the limit of the particles with high probability, whatever their initialization and trajectories.
Formally, confidence set estimation forces us to inspect order two terms (i.e. the rate of convergence), typically convergence of the empirical variance. The word \textit{asymptotic} just means that the sample size increases to infinity.
\bigskip
The outline of the paper is the following. In the next section the three main results are introduced. They are all related to weak convergence of random variables and vectors (see \citealt{billing} for a classical monograph) and are obtained under three different sets of assumptions.
The first two consider the trajectory of single particles. The sample consists of the path of a fixed particle. We show that two different regimes should be investigated depending on the limiting behavior of $p_n$ and $g_n$. Briefly speaking: if the limits of $p_n$ and $g_n$ are distinct, the particles oscillate between them (which is a well-known characteristic of PSO); if the limits of $p_n$ and $g_n$ coincide, then the particles converge at a fast, exponential rate.
In the oscillating case a classical Central Limit Theorem is obtained relying essentially on martingale difference techniques. In the non-oscillating situation, the particle converges quickly and we have to use random matrices products to obtain a non-standard CLT. As by-products of these two subsections we will retrieve confidence sets of the form $\Lambda\left(x_1^s \ldots x_n^s\right)$, depending on the $n$ positions of each particle $x^s$.
The third result states another classical CLT. The sample consists here of the whole swarm. This time the confidence set is of the form $\Lambda\left(x_n^1 \ldots x_n^S\right)$, depending on the $S$ particles of the swarm when the iteration step is fixed at $n$.
A numerical study and simulations are performed in a Python environment. A discussion follows. The derivations of the main theorems are collected in the last section.
\section{Main results}
The usual euclidean norm and associated inner product for vectors in
$\mathbb{R}^{d}$ are denoted respectively $\left\Vert \cdot\right\Vert $ and
$\left\langle \cdot,\cdot\right\rangle $.
If $X$ is a random vector with null expectation, then $\mathbb{E}\left(
X\otimes X\right)=\mathbb{E}\left(XX^{t}\right)$ is the covariance matrix of $X$. The covariance matrix is crucial since it determines the limiting Gaussian distribution in the Central Limit Theorem. We will need two kinds of stochastic convergence in the sequel. Convergence in probability of $X_{n}$ to $X$ is denoted $X_{n}\rightarrow_{
\mathbb{P}}X$. The arrow $\hookrightarrow$ stands for convergence in
distribution (weak convergence).
Except in section \ref{sfs} we consider a single particle in order to alleviate notations. We drop the particle index so that $x_{n}^{s}=x_{n}$, $p_{n}^{s}=p_{n}$ and $g_{n}^{s}=g_{n}$.
In all the sequel we assume once and for all that :
\[
c_{1}=c_{2}=c
\]
Some authors made this choice when defining their working version of PSO; see for instance \cite{Clerc2006} (equation [3.2] p.39) and \cite{poli2009mean}.
Without this assumption, computations turn out to be intricate. Besides, if $c_{1} \neq c_{2}$, some constants become quite complicated and hard to interpret (see for instance condition $\mathbf{A_2}$ and the two constants $\mathfrak{C}$ and $\mathfrak{L}$ involved in the asymptotic variance within Theorem \ref{TH1} below).
Finally, we take for granted that the particles are warmed up, i.e., they have reached an area of the domain where they fluctuate without exiting.
\subsection{First case: oscillatory ($p\neq g$)}
The following assumptions are required and discussed after the statement of Theorem \ref{TH1}.
\noindent $\mathbf{A}_{1}:$ For all $n$, $x_{n}
\in \mathrm{\Omega}$ where $\mathrm{\Omega}$ is a compact subset of $\mathbb{R}^{d}$.
\noindent $\mathbf{A}_{2}:$ The following holds :
\[
\frac{1}{\sqrt{N}}\sum_{n=1}^{N}\left(p_n-\mathbb{E}p_n\right)\rightarrow_{\mathbb{P}}0, \lim
_{n\rightarrow+\infty}\mathbb{E}\left\Vert p_n-\mathbb{E}p_n\right\Vert =0,
\]
\[
\frac{1}{\sqrt{N}}\sum_{n=1}^{N}\left(g_n-\mathbb{E}g_n\right)\rightarrow_{\mathbb{P}}0, \lim
_{n\rightarrow+\infty}\mathbb{E}\left\Vert g_n-\mathbb{E}g_n\right\Vert =0.
\]
\noindent $\mathbf{A}_{3}:$ The following inequality connects $c$ and $\omega$:
\[
0<c<12\frac{1-\omega^2}{7-5\omega}.
\]
Before stating the Theorem we need a last notation. Let $\delta=\left( \delta_{1},...,\delta_{d}\right) \in\mathbb{R}^{d}$. The notation $\mathrm{diag}\left( \delta\right) $ stands
for the diagonal $d\times d$ matrix with entries $\delta_{1},...,\delta_{d}$
and $\delta^{\odot2}$ is the vector in $\mathbb{R}^{d}$ defined by
$\delta^{\odot2}=\left( \delta_{1}^{2},...,\delta_{d}^{2}\right)$.
\begin{theorem}
\label{TH1}In addition to assumptions $\mathbf{A}_{1-3}$ suppose that, when $n$ goes to infinity $\mathbb{E}p_{n}\rightarrow p$ and $\mathbb{E}g_{n}\rightarrow g$. Set:
\[
\mathfrak{L}=2c\left( \frac{1-\omega}{1+\omega}\right) \left(
1+\omega-\frac{c}{2}\right) -\frac{c^{2}}{6},\quad\mathfrak{C}=\frac
{c}{12\mathfrak{L}}\frac{1-\omega}{1+\omega}\left( 1+\omega-\frac{c}
{2}\right).
\]
Denote finally $\Gamma=\mathfrak{C}\cdot\mathrm{diag}\left( \left( p-g\right)^{\odot2}\right)$; then:
\[
\sqrt{N}\left( \frac{1}{N}\sum_{n=1}^{N}x_{n}-\frac{1}{N}\sum_{n=1}^{N}
\frac{\mathbb{E}p_{n}+\mathbb{E}g_{n}}{2}\right) \hookrightarrow
\mathcal{N}\left( 0,\Gamma\right),
\]
where $\mathcal{N}\left( 0,\Gamma\right) $ denotes a Gaussian centered
vector of $\mathbb{R}^{d}$ with covariance matrix $\Gamma$.
\end{theorem}
\textbf{Discussion of the Assumptions:}
\noindent We avoid here the assumption of stagnation: the personal and local bests are not supposed to be constant, but oscillate around their expectations. Condition $\mathbf{A}_{2}$ is specific to what we mean by a convergent PSO: it requires that the oscillations of $p_{n}$ and $g_{n}$ around their expectations be negligible, at a rate ensuring that neither $g_{n}$ nor $p_{n}$ is involved in the weak convergence of the particles $x_{n}$. This is a relaxed model of the stagnation phenomenon, which consists in sequences of iterations during which $g_{n}$ (resp. $p_{n}$) remains constant, hence $g_{n}=\mathbb{E}g_{n}$ for $n$ in some interval $\left[ \underline{N},\overline{N}\right]$. Notice however that convergence of the expectations towards $p$ and $g$ is not required at this step.
Note that assumption $\mathbf{A}_{3}$ is exactly the condition found by Poli in \cite{poli2009mean} (see the last paragraph of section III) for defining order 2 stability. This condition may be extended to the case when $c_1 \neq c_2$, see \cite{cleghorn2018particle} and references therein. At last $\mathbf{A}_{3}$ holds for the classical calibration
appearing in \cite{clerc2002particle} (constriction constraints) with $c=1.496172$ and $\omega=0.72984$.
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{assumptionA3.eps}
\caption{Display of constraint $\mathbf{A}_{3}$ in the plane $\left(\omega,c\right)$.}
\label{fig:assumptionA3}
\end{figure}
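This can be verified directly: for the constriction calibration the upper bound of $\mathbf{A}_{3}$ evaluates to about $1.674$, safely above $c$. A short Python check:

```python
# Numerical check of assumption A3 for the constriction calibration
c = 1.496172
w = 0.72984   # omega

bound = 12 * (1 - w**2) / (7 - 5 * w)
print(f"c = {c}, upper bound = {bound:.4f}")   # bound is about 1.674 > c
```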
The next Corollary simplifies Theorem \ref{TH1} whenever the convergence of $\mathbb{E}p_{n}$ and $\mathbb{E}g_{n}$ to $p$ and $g$ holds with an appropriate rate. The centering term in the CLT is then $\frac{p+g}{2}$. Denote $\bar{x}_N= \frac{1}{N}\sum_{n=1}^{N}x_{n}$.
\begin{corollary}
If assumptions $\mathbf{A_{1-3}}$ hold and if:
\begin{equation} \label{asympt-conv-pg}
\frac{1}{\sqrt{N}}\sum_{n=1}^{N}\left( \left\vert p-\mathbb{E} p_{n}
\right\vert +\left\vert g-\mathbb{E}g_{n}\right\vert \right) \rightarrow0,
\end{equation}
then Theorem \ref{TH1} comes down to :
\[
\sqrt{N}\left( \bar{x}_N-\frac{p+g}{2}\right)
\hookrightarrow\mathcal{N}\left( 0,\Gamma\right)
\]
when $N$ tends to infinity.
\label{corollary:3}
\end{corollary}
\begin{remark}
Condition (\ref{asympt-conv-pg}) always holds under stagnation.
\end{remark}
The Corollary above shows that the mean computed along the path of the particle converges to $\theta=\left( p+g\right)/2$, losing only the information on the individual locations of $p$ and $g$. Note that the initial PSO version (\ref{pso-def}) (with $c_1 \neq c_2$) would lead to $\theta=\left( c_1p+c_2g\right)/\left(c_1+c_2\right)$.
The next Corollary finally provides two kinds of by-products: asymptotic confidence intervals in $\mathbb{R}$ for the coordinates of $\theta$, and confidence regions for $\theta$ in $\mathbb{R}^d$. Let $\alpha \in \left[0,1\right]$, let $q_{\beta}$ denote the quantile of order $\beta$ of the standard Gaussian distribution and $\chi_{1-\alpha}^{\left(d\right)}$ the $1-\alpha$ quantile of the Chi-square distribution with $d$ degrees of freedom. Set also $\theta=\left(\theta_1,\ldots,\theta_d\right)$ and $\bar{x}_N=\left(\bar{x}_{N,1},\ldots,\bar{x}_{N,d}\right)$.
\begin{corollary}
An asymptotic confidence interval at level $1-\alpha$ for $\theta_\ell$ is directly derived from Corollary \ref{corollary:3} :
\[
\mathcal{I}_{1-\alpha}\left(\theta_\ell\right)=\left[\bar{x}_{N,\ell}-s_\ell\left(N,\alpha\right);\bar{x}_{N,\ell}+s_\ell\left(N,\alpha\right)\right]
\]
with $s_\ell\left(N,\alpha\right)=\left|p_\ell-g_\ell\right|\sqrt{\frac{\mathfrak{C}}{N}}q_{1-\frac{\alpha}{2}}$ . \\
An asymptotic confidence region at level $1-\alpha$ for the vector $\theta=\left( p+g\right)/2$ is :
\begin{align*}
\Lambda_{1-\alpha}\left(\theta\right)=
\left\{t \in \mathbb{R}^d : N\left\|\Gamma^{-1/2}\left(t-\bar{x}_N\right)\right\|^2
\leq \chi_{1-\alpha}^{\left(d\right)} \right\} \\
=\left\{t=\left(t_1,...,t_d\right): \frac{N}{\mathfrak{C}}\sum_{\ell=1}^{d}\left(\frac{t_\ell-\bar{x}_{N,\ell}}{p_\ell-g_\ell}\right)^2
\leq \chi_{1-\alpha}^{\left(d\right)} \right\}.
\end{align*}
\end{corollary}
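As an illustration, the Python sketch below simulates a single particle under the usual second-order form of the update (\ref{pso-def}) with $c_{1}=c_{2}=c$, $r_{1,n},r_{2,n}\sim\mathcal{U}\left[0,1\right]$ and fixed bests $p\neq g$ (this explicit recursion is our reading of (\ref{pso-def}); the calibration is the classical one), then computes $\bar{x}_{N}$ and the half-width $s_\ell\left(N,\alpha\right)$ of the Corollary above:

```python
import numpy as np

rng = np.random.default_rng(0)
w, c = 0.72984, 1.496172        # classical calibration: A3 holds
p, g = 1.0, -1.0                # fixed personal and neighborhood bests
N = 20000

# Constants L and C of Theorem 1
L = 2 * c * (1 - w) / (1 + w) * (1 + w - c / 2) - c**2 / 6
C = c / (12 * L) * (1 - w) / (1 + w) * (1 + w - c / 2)

x = np.empty(N + 1)
x[0], x[1] = 0.3, -0.2          # arbitrary warm initial positions
for n in range(1, N):
    r1, r2 = rng.uniform(0, 1, 2)
    x[n + 1] = ((1 + w) * x[n] - w * x[n - 1]
                + c * r1 * (p - x[n]) + c * r2 * (g - x[n]))

xbar = x[1:].mean()
half = abs(p - g) * np.sqrt(C / N) * 1.96   # 95% half-width
print(f"95% CI for (p+g)/2: [{xbar - half:.4f}, {xbar + half:.4f}]")
```

With this calibration $\mathfrak{L}\approx0.086$ and $\mathfrak{C}\approx0.22$, so the interval shrinks around $\left(p+g\right)/2=0$ at rate $1/\sqrt{N}$.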
We note however that the vector $\theta$, unlike $g$, may not be of crucial interest for the initial optimization problem. This point will be discussed in the last section.
\begin{remark}
Assumption $\mathbf{A}_{2}$ deserves a last but technical remark. First, notice that $\frac{1}{\sqrt{N}}\sum_{n=1}^{N}t_{n}\rightarrow_{ \mathbb{P}}0$ does not imply $\lim_{n\rightarrow+\infty}\mathbb{E}\left\Vert t_{n}\right\Vert =0$, nor the converse. Take for instance $t_{n}=\left(-1\right)^{n}t_{0}$ with $\mathbb{E}t_{0}\neq0$; then $\frac{1}{\sqrt{N}}\sum_{n=1}^{N}t_{n}\rightarrow_{\mathbb{P}}0$ whereas $\mathbb{E}\left\Vert t_{n}\right\Vert =\mathbb{E}\left\Vert t_{0}\right\Vert$. Conversely, take $t_{n}=t_{0}/\log\left( n\right)$; then $\lim_{n\rightarrow+\infty}\mathbb{E}\left\Vert t_{n}\right\Vert =0$ but $\frac{1}{\sqrt{N}}\sum_{n=1}^{N}t_{n}=\frac{t_{0}}{\sqrt{N}}\sum_{n=1}^{N}\log^{-1}\left( n\right)$ cannot converge in probability to $0$.
\end{remark}
\subsection{Second case: non-oscillatory and stagnant ($p=g$)}
In this section we study again a single particle and suppose once and for all that $x_{n}\in\mathbb{R}$. We assume throughout this subsection that the particle is under stagnation, that is, $p_n=p$ for $n$ sufficiently large (see assumption $\mathbf{B_3}$ below). This assumption is strong, but a more general framework leads to theoretical developments beyond the scope of this paper.
Starting from Equation (\ref{pso-def}), the PSO equation becomes this time:
\[
x_{n+1}=\left( 1+\omega\right) x_{n}-\omega x_{n-1}+c\left( r_{1,n}
+r_{2,n}\right) \left( p-x_{n}\right).
\]
Change the centering and consider $x_{n}-p=y_{n}$. The previous equation becomes :
\begin{equation} \label{chain:y}
y_{n+1}=\left( 1+\omega-c+c\varepsilon_{n}\right) y_{n}-\omega y_{n-1},
\end{equation}
where $\varepsilon_{n}$ is the sum of two independent random variables
with $\mathcal{U}\left[ -1/2;1/2\right] $ distribution.
Assuming that $y_{n}\neq0$ for all $n$, we then have:
\begin{equation} \label{chain:X}
\frac{y_{n+1}}{y_{n}} =\left( 1+\omega-c+c\varepsilon_{n}\right)
-\omega\frac{y_{n-1}}{y_{n}}
\end{equation}
It is plain that $y_{n+1}/y_{n}$ defines a Markov chain (more precisely : a non-linear auto-regressive process) which will play a crucial role in the proofs and in the forthcoming results.
Notice that $\sum_{n=1}^{N}\log\left\vert \frac{y_{n}}{y_{n-1}}\right\vert =\log\left\vert y_{N}\right\vert -\log\left\vert y_{0}\right\vert $.
We are ready to introduce a new set of assumptions. Denote $\pi$ the stationary distribution of $\left(y_{n}/y_{n-1}\right)_{n \in \mathbb{N}}$ and take $\left(Z_n\right)_{n \in \mathbb{N}}$ a copy of $\left(y_{n}/y_{n-1}\right)_{n \in \mathbb{N}}$ with $Z_0$ a realization of $\pi$. Then define :
\begin{align*}
\mu_{x} & =\mathbb{E}_{\pi}\log\left\vert Z_0\right\vert ,\\
\sigma_{x}^{2} & =\mathrm{Var}_{\pi}\left( \log\left\vert Z_{0}\right\vert \right)
+2\sum_{k=1}^{+\infty}\mathrm{Cov}_{\pi}\left( \log\left\vert Z_{0}\right\vert
,\log\left\vert Z_{k}\right\vert \right).
\end{align*}
\vspace{.5cm}
\noindent $\mathbf{B_1}: \frac{x_n-p}{x_{n-1}-p}$ is a Harris recurrent Markov chain.
\noindent $\mathbf{B_2}: 1+\omega -c< \omega/c < (1+c)/4$.
\noindent $\mathbf{B_3}:$ For sufficiently large $n$, $g_{n}=p_{n}=p=g$ is constant.
The definition of Harris recurrence needed above is in \cite{meyn2012markov}, beginning of Chapter 9.
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{assumptionB2.eps}
\caption{Display of constraint $\mathbf{B}_{2}$ in the plane $\left(c,\omega\right)$.}
\label{fig:assumptionB2}
\end{figure}
\begin{theorem}
Let $\omega\in(0,1)$ and $c>1$. When $\mathbf{B_{1-3}}$ hold:
\[
\frac{1}{\sqrt{n}}\left( \log\left\vert x_{n}-g_{n}\right\vert -n\mu_{x}\right)
\hookrightarrow\mathcal{N}\left( 0,\sigma_{x}^{2}\right).
\]
\label{theo:equal}
\end{theorem}
\begin{remark}
The theorem above is not a Central Limit Theorem for $x_n$. It is derived from a CLT, and it shows that the distribution of $\left\vert x_{n}-g_{n}\right\vert$ is asymptotically log-normal with approximate parameters $n\mu_{x}$ and $n\sigma_{x}^{2}$.
\end{remark}
\begin{remark}
The mean and variance $\mu_{x}$ and $\sigma_{x}^{2}$ are usually unknown but
may be approximated numerically. We refer to the simulation section for more details.
\end{remark}
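A minimal Python sketch of such an approximation, based on the ratio chain (\ref{chain:X}): averaging $\log\left\vert y_{n}/y_{n-1}\right\vert$ along the chain estimates $\mu_{x}$ while avoiding the numerical underflow of $y_{n}$ itself (the calibration is the classical one; the starting ratio is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(1)
w, c = 0.72984, 1.496172       # classical calibration
N = 100_000

z = 1.5                        # arbitrary starting ratio y_1 / y_0
total = 0.0
for _ in range(N):
    eps = rng.uniform(-0.5, 0.5, 2).sum()   # sum of two U[-1/2, 1/2]
    z = (1 + w - c + c * eps) - w / z       # the ratio recursion
    total += np.log(abs(z))

mu_hat = total / N             # estimate of mu_x (expected negative)
print(f"estimated mu_x = {mu_hat:.4f}")
```

A negative estimate is consistent with the almost sure exponential convergence $\left\vert x_{n}-p\right\vert \approx e^{n\mu_{x}}$.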
\begin{corollary}
If $p_{n}=p$, an asymptotic (non convex) confidence region for $g$ at level $1-\alpha$, denoted $\Lambda_{1-\alpha}$ below, may be derived from the preceding Theorem:
\begin{align*}
\Lambda_{1-\alpha}\left(g\right) & =\Lambda_{1-\alpha}^{+}\cup\Lambda_{1-\alpha}^{-}\\
\Lambda_{1-\alpha}^{+} & =\left[ x_{n}+\exp\left( n\mu_{x}-\sqrt{n}\sigma_{x}q_{1-\alpha/2}\right) ,x_{n}+\exp\left( n\mu_{x}+\sqrt{n}\sigma_{x}q_{1-\alpha/2}\right) \right] \\
\Lambda_{1-\alpha}^{-} & =\left[ x_{n}-\exp\left( n\mu_{x}+\sqrt{n}\sigma_{x}q_{1-\alpha/2}\right) ,x_{n}-\exp\left( n\mu_{x}-\sqrt{n}\sigma_{x}q_{1-\alpha/2}\right) \right]
\end{align*}
\end{corollary}
\begin{remark}
In matrix form, equation (\ref{chain:y}) is purely linear but driven by a random matrix:
\begin{align}
\left(
\begin{array}
[c]{c}
y_{n+1}\\
y_{n}
\end{array}
\right) & =\mathbf{y}_{n+1} =\left[
\begin{array}
[c]{cc}
1+\omega-c\left( 1+\varepsilon_{n}\right) & -\omega\\
1 & 0
\end{array}
\right] \left(
\begin{array}
[c]{c}
y_{n}\\
y_{n-1}
\end{array}
\right) \label{non-osc-pso}\\
\mathbf{y}_{n+1} & \mathbf{=}\mathbf{S}_{n+1}\mathbf{y}_{n},\quad
\mathbf{y}_{n}=\mathbf{S}_{n}\mathbf{S}_{n-1}...\mathbf{S}_{2}\mathbf{y}
_{1}=\mathbf{T}_{n}\mathbf{y}_{1},\nonumber
\end{align}
with $\mathbf{T}_{n}=\mathbf{S}_{n}\mathbf{S}_{n-1}...\mathbf{S}_{2}$. It is plain here that a classical Central Limit Theorem cannot hold for the sequence $\left( y_{n}\right) _{n\in\mathbb{N}}$, so we turn to the asymptotic theory of products of random matrices. We refer to the historical references \cite{furstenberg1960} and \cite{berger1984central}, who proved Central Limit Theorems for the regularity index of products of i.i.d. random matrices; \cite{hennion1997limit} later generalized their results. But the assumption of (almost surely) positive entries is common to all these papers. Other authors obtain similar results under different sets of assumptions (see \citealp{le1982theoremes}, \citealp{benoist2016}, and references therein), typically revolving around a characterization of the semi-group spanned by the distribution of $\mathbf{S}$. These assumptions are difficult to check here, so we resorted to a direct approach based on fundamental Markov chain tools.
\end{remark}
\subsection{The swarm at a fixed step} \label{sfs}
In this section we change our viewpoint. Instead of considering a single
particle and sampling along its trajectory we will take advantage of the
whole swarm but at a fixed and common iteration step. Our aim here is to
localize the minimum of the cost function based on $\left(x_n^1,\ldots,x_n^S\right)$. This time the particle index $s$ varies up to $S$, the swarm size, whereas the index $n$ is fixed. In this subsection we assume that $S\uparrow +\infty$ and the asymptotics are with respect to $S$. We do not drop $n$, in order to see how the iteration step influences the results. We still address only the case $x_{n}^s\in \mathbb{R}$ even if our results may be straightforwardly generalized to $x_{n}^s\in \mathbb{R}^{d}$. We provide below a Central Limit Theorem suited to the case when the number of particles in the swarm becomes large.
In order to clarify the method, we assume that for all particles $x_{n}^{i}$ in the swarm $p_{n}^{i}=g_{n}=p$. In other words, no local minimum stands in the domain $\mathrm{\Omega}$, which implies additional smoothness or convexity assumptions on the cost function $f$. This may be achieved by a preliminary screening of the search space. Indeed a first run (or several runs) of a preliminary PSO on the whole domain identifies an area where a single optimum lies. Then a new PSO is launched with initial values close to this optimum and with parameters ensuring that most of the particles will stay in the identified area.
So we are given $\left(x_{n}^{1},...,x_{n}^{S}\right)$ where $S$ is the sample size. Basically, the framework is the same as in the non-oscillatory case studied above for a single particle. From (\ref{non-osc-pso}) we get, with $\mathbf{y}_{n}^{i}=x_{n}^{i}-p$:
\begin{eqnarray*}
\mathbf{y}_{n}^{i} &=&\mathbf{T}_{n}^{i}\mathbf{y}_{1}^{i}, \\
\mathbf{T}_{n}^{i} &=&\Pi _{j=2}^{n}\mathbf{S}_{j},\quad \mathbf{S}_{j}=\left[
\begin{array}{cc}
1+\omega -c\left( 1+\varepsilon _{j}^{i}\right) & -\omega \\
1 & 0%
\end{array}
\right].
\end{eqnarray*}
Assume that the domain $\mathrm{\Omega}$ contains $0$ and that the couples $\left(x_{0}^{i},x_{1}^{i}\right) _{i\leq S}$ are independent, identically distributed and centered. Then, from the decomposition above, for all $n$ and $i$, $\mathbb{E}\mathbf{y}_{n}^{i}=0$ and the $\left( \mathbf{y}_{n}^{i}\right) _{1\leq i\leq S}$ are i.i.d. too.
The assumptions we need to derive Theorem \ref{prop3} below are :
$\mathbf{C}_{1}:$ The operational domain $\mathrm{\Omega} $ contains $0$ (and is
ideally a symmetric set).
$\mathbf{C}_{2}:$ The couples $\left( x_{0}^{i},x_{1}^{i}\right) _{i\leq S}$
are i.i.d. and centered.
$\mathbf{C}_{3}:$ For all $i$ in $\left\{ 1,...,S\right\} $ $p_{n}^{i}=g_{n}=p$.
When $S$ is large the following Theorem may be of interest and is a simple consequence of the i.i.d. CLT.
\begin{theorem}
Under assumptions $\mathbf{C}_{1-3}$ a CLT holds when $S$ the number of
particles in the swarm tends to $+\infty$ :
\begin{equation*}
\frac{1}{\sqrt{S}}\sum_{i=1}^{S}\left( x_{n}^{i}-g_{n}\right) \underset{%
S\rightarrow +\infty }{\hookrightarrow }\mathcal{N}\left( 0,\sigma_{n}^{2}\right),
\end{equation*}
where $\sigma _{n}^{2}=\mathbb{E}\left( x_{n}^{1}-g_{n}\right) ^{2}$ is estimated consistently by :
\begin{equation*}
\widehat{\sigma }_{n}^{2}=\frac{1}{S}\sum_{i=1}^{S}\left(x_{n}^{i}-g_{n}\right) ^{2}.
\end{equation*}
\label{prop3}
\end{theorem}
\begin{remark}
The convergence of $\widehat{\sigma }_{n}^{2}$ to $\sigma _{n}^{2}$ is a
straightforward consequence of the weak and strong laws of large numbers.
\end{remark}
Denote $\overline{x}^{S}=\left( 1/S\right) \sum_{i=1}^{S}x_{n}^{i}$.
The Theorem above paves the way towards an asymptotic confidence interval.
\begin{corollary}
An asymptotic confidence interval at level $1-\alpha$ for $g$ is :
\begin{equation*}
\Lambda_n\left(g\right)=\left[ \overline{x}^{S}-\frac{\widehat{\sigma }_{n}}{\sqrt{S}}q_{1-\alpha
/2},\overline{x}^{S}+\frac{\widehat{\sigma }_{n}}{\sqrt{S}}q_{1-\alpha /2}%
\right].
\end{equation*}
\end{corollary}
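A minimal Python sketch of this swarm-level interval, assuming stagnation at $p=g=0$ and i.i.d. centered initializations as in $\mathbf{C}_{1-3}$ (the scalar recursion used below is the stagnant form underlying (\ref{non-osc-pso})):

```python
import numpy as np

rng = np.random.default_rng(2)
w, c = 0.72984, 1.496172
S, n = 2000, 30                    # swarm size, fixed iteration step
g = 0.0                            # stagnation: p = g = 0 (C1, C3)

x_prev = rng.normal(0.0, 1.0, S)   # i.i.d. centered initializations (C2)
x_cur = rng.normal(0.0, 1.0, S)
for _ in range(n - 1):
    r1, r2 = rng.uniform(0, 1, (2, S))
    x_prev, x_cur = x_cur, ((1 + w) * x_cur - w * x_prev
                            + c * (r1 + r2) * (g - x_cur))

xbar = x_cur.mean()
sigma_hat = np.sqrt(np.mean((x_cur - g) ** 2))
half = 1.96 * sigma_hat / np.sqrt(S)
print(f"95% CI for g: [{xbar - half:.5f}, {xbar + half:.5f}]")
```

The whole swarm is simulated in vectorized form, so the sample here is across particles at the fixed step $n$, not along a trajectory.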
\section{Simulation and numerical results}
Himmelblau's function is chosen as the example for our experiments. It is a two-dimensional function with four local optima in $\left[-10,10\right]^{2}$, defined by $f(x,y)=\left(x^2+y-11\right)^2+\left(x+y^2-7\right)^2$. Figure \ref{fig:himmelblau_contour} displays the contour lines of this function.
\begin{figure}[H]
\centering
\includegraphics[width=0.6\textwidth]{Fig1.eps}
\caption{Contour of the Himmelblau's function in the space $\left[-6,6\right]^{2}$. }
\label{fig:himmelblau_contour}
\end{figure}
With Himmelblau's function we can observe the two different behaviors of the particles, oscillatory and non-oscillatory. The function is positive and has four global minima, at $(3,2)$, $(-2.81,3.13)$, $(-3.77,-3.28)$ and $(3.58,-1.84)$, where $f(x,y)=0$. We use a ring topology for the algorithm (for a quick review of the different PSO topologies see \citealt{lane2008particle}) in order to have both oscillating and non-oscillating particles. The latter converge quickly; the former keep moving between two groups of particles converging to two distinct local optima.
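The function and the four quoted minima are easy to check numerically (the last three minima are quoted to two decimals only, so $f$ is merely close to $0$ there):

```python
def himmelblau(x, y):
    """Himmelblau's function: positive, with four global minima."""
    return (x**2 + y - 11) ** 2 + (x + y**2 - 7) ** 2

assert himmelblau(3, 2) == 0                     # exact minimum
for pt in [(-2.81, 3.13), (-3.77, -3.28), (3.58, -1.84)]:
    assert himmelblau(*pt) < 0.05                # rounded coordinates
print("all four minima checked")
```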
\subsection{Oscillatory case}
We select particles oscillating between $(3.58,-1.84)$ and $(3,2)$; either of these points may be their personal best position or their neighborhood best position. In this case the convergence of $p_n$ and $g_n$ to $(3.58,-1.84)$ or $(3,2)$ satisfies the conditions of Corollary \ref{corollary:3}. We have to verify the Gaussian asymptotic behavior of $H^{s}_{1}(N)=\sqrt{N}\left( \frac{1}{N}\sum_{n=1}^{N}x^{s}_{n}-\frac{p+g}{2}\right)$ for each oscillating particle $s$.
We launch PSO with a population of 200 particles over 2000 iterations, with $\omega=0.72984$ and $c=1.496172$. A ring topology was used to ensure the presence of oscillating particles. A particle is said to be oscillating if Assumptions $\mathbf{A}_{1-3}$ hold between the 500th and the 2000th iteration.
A visual tool to verify the normality of $H^{s}_{1}(N)$ for a particle is the normal probability plot. Figures \ref{fig:droite_henry_x} and \ref{fig:droite_henry_y} display the normal probability plot of $H^{s}_{1}(N)$ for the $x$ axis and the $y$ axis respectively. For each axis the normality is confirmed: $H^{s}_{1}(N)$ fits the theoretical quantiles well.
\begin{figure}[H]
\centering
\begin{minipage}[t]{.45\linewidth}
\includegraphics[width=\textwidth]{Fig2.eps}
\caption{Normal probability plot of $H^{s}_{1}(N)$ on the first coordinate.}
\label{fig:droite_henry_x}
\end{minipage}\hfill
\begin{minipage}[t]{.45\linewidth}
\includegraphics[width=\textwidth]{Fig3.eps}
\caption{Normal probability plot of $H^{s}_{1}(N)$ on the second coordinate.}
\label{fig:droite_henry_y}
\end{minipage}
\end{figure}
To check the formula of the covariance matrix $\Gamma$, the confidence ellipsoid is also a good indicator (see Figure \ref{fig:ellipsoid}). For a single particle, $H^{s}_{1}(N)$ is not necessarily always inside the confidence ellipsoid and does not exactly respect the nominal confidence level. Figure \ref{fig:trajectory_particle} shows the trajectory of $x^{s}_n$ and of $H^{s}_{1}(N)$ on the $y$ axis; $H^{s}_{1}(N)$ remains bounded.
\begin{figure}[H]
\centering
\includegraphics[width=0.6\textwidth]{Fig4.eps}
\caption{Trajectory of $H^{s}_{1}(N)$ for an oscillating particle in $\left[-10,10\right]^{2}$. The confidence ellipsoid at a level of 85\% is displayed in red. Around 99\% of the trajectory of $H^{s}_{1}(N)$ is inside the ellipse.}
\label{fig:ellipsoid}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{Fig5.eps}
\caption{Top track: trajectory of an oscillating particle on the $y$ axis; the particle oscillates between 2 and -1.84. Bottom track: corresponding trajectory of $H^{s}_{1}(N)$ on the $y$ axis; the red dots correspond to the 95\% confidence interval. The trajectory of $H^{s}_{1}(N)$ is well bounded.}
\label{fig:trajectory_particle}
\end{figure}
With 200 Monte-Carlo simulations of PSO (200 particles, 2000 iterations), we select all the particles oscillating between $(3.58,-1.84)$ and $(3,2)$ and compute $H^{s}_{1}(2000)$ for each of them. Figure \ref{fig:density_mc_oscillating} displays the density of $H^{s}_{1}(2000)$ using 1150 oscillating particles. Almost 95\% of the particles are inside the confidence ellipsoid of level 95\% (represented in red).
\begin{figure}[H]
\centering
\includegraphics[width=0.75\textwidth]{Fig6.eps}
\caption{Density of $H_{1}(2000)$ with 1150 particles issued from Monte-Carlo simulations. The red ellipse is the 95\% confidence ellipsoid. }
\label{fig:density_mc_oscillating}
\end{figure}
\FloatBarrier
\subsection{Non-oscillatory case}
We study now the behavior of non-oscillating particles on Himmelblau's function. We launch PSO with a population of 1000 particles over 2000 iterations, with $\omega=0.72984$ and $c=1.496172$. A ring topology was used to ensure that enough particles converge to each local optimum.
We select particles converging to $(3,2)$, meaning that $p_n=g_n=p$ for sufficiently large $n$. For the weak convergence of the particle, we consider:
\begin{equation*}
H^{s}_{2}(N)=\frac{1}{\sqrt{N}}\left( \log\left\vert x^{s}_{N}-g_{N}\right\vert -N \mu_{x}\right).
\end{equation*}
First, the linear dependence of $\log\left\vert x^{s}_{N}-g_{N}\right\vert$ on $N$ is easy to check with a single display of the trajectory. Figure \ref{fig:himmelblau_converging_linear} illustrates this phenomenon for a single particle. We observe numerical issues once the machine precision is reached, but a numerical approximation of $\mu_{x}$ can still be obtained through a linear regression.
\begin{figure}[H]
\centering
\includegraphics[width=0.6\textwidth]{Fig9.eps}
\caption{$\vert x_n-g_n\vert$ over 500 iterations on a logarithmic scale. After 300 iterations the machine precision is reached. We take advantage of the linear behavior of $\log \left( \vert x_n-g_n \vert \right)$ over the first 200 iterations to perform a linear regression estimating $\mu_x$.}
\label{fig:himmelblau_converging_linear}
\end{figure}
Using many converging particles, a Monte-Carlo approximation of $\mu_x$ is computed. For the approximation of $\sigma_x$, a possibility is to truncate the series defining $\sigma_x^2$:
\begin{equation*}
\bar{\sigma}_{x}^{2} =\mathrm{Var}_{\pi}\left( \log\left\vert Z_{0}\right\vert \right)
+2\sum_{k=1}^{T}\mathrm{Cov}_{\pi}\left( \log\left\vert Z_{0}\right\vert
,\log\left\vert Z_{k}\right\vert \right),
\end{equation*}
where $T=20$. With nearly 240 particles converging to $(3,2)$, we found for the first coordinate:
\begin{align*}
\bar{\mu}_{x} & = -0.032 ,\\
\bar{\sigma}_{x} & =0.156.
\end{align*}
We verify the asymptotic normality of $H_{2}(N)$ with a normal probability plot using the approximation of $\mu_x$. Figure \ref{fig:henry_x_conv} displays the normal probability plot of $H_{2}(N)$ on the first coordinate; the theoretical quantiles are well fitted by $H_{2}(N)$.
\begin{figure}[H]
\centering
\includegraphics[width=0.5\textwidth]{Fig10.eps}
\caption{Normal probability plot of $H_{2}(N)$ on the first coordinate.}
\label{fig:henry_x_conv}
\end{figure}
Figure \ref{fig:multiple_H2} illustrates different trajectories of $H_{2}(N)$ on the first coordinate which are bounded by the 95\% confidence interval deduced from $\bar{\sigma}_{x}$.
\begin{figure}[H]
\centering
\includegraphics[width=0.6\textwidth]{Fig11.eps}
\caption{Trajectories of $H_{2}(N)$ on the first coordinate for five particles. The red dots represent the 95\% confidence interval deduced from $\bar{\sigma}_{x}$. Trajectories stop around 400 iterations (after a warm-up phase) due to numerical precision.}
\label{fig:multiple_H2}
\end{figure}
\subsection{Swarm at a fixed step}
To check the Theorem \ref{prop3}, we study:
\begin{equation*}
H^{n}_{3}(S)=\frac{1}{\sqrt{S} \widehat{\sigma }_{n}}\sum_{i=1}^{S}\left( x_n^i-g_n\right).
\end{equation*}
In practice we encountered some difficulties in verifying Theorem \ref{prop3}, due to the convergence rates of the particles. Indeed, when $p^s_n=g^s_n=p^s$, particle $s$ converges exponentially fast to $g^s_n$, but the spread of convergence rates across particles is large. As a consequence, at a fixed step of PSO some particles behave as outliers because of a lower rate of convergence. Because of these belated particles, the asymptotic Gaussian behavior of $H^{n}_{3}(S)$ is not observed. A solution is to filter out the belated particles. Figures \ref{fig:H3_all} and \ref{fig:H3_sub} illustrate this phenomenon for Himmelblau's function in 2D, with nearly 1500 particles converging to $(3,2)$ over 500 iterations. In Figure \ref{fig:H3_all} we compute $H^{n}_{3}(S)$ without any filtering and the Gaussian behavior is not verified: jumps appear in the probability plot. These ``jumps'' are due to belated particles, whose rate of convergence is lower than that of the rest of the swarm. When we remove these particles with a classical outlier detection algorithm (Figure \ref{fig:H3_sub}), Theorem \ref{prop3} seems to be verified.
\begin{figure}[H]
\centering
\begin{minipage}[t]{.45\linewidth}
\includegraphics[width=\textwidth]{Fig12.eps}
\caption{Normal probability plot of $H^{n=200}_{3}(S)$ at the 200th iteration on the first coordinate, using all the particles. We observe a discontinuity of the probability plot due to belated particles.}
\label{fig:H3_all}
\end{minipage}\hfill
\begin{minipage}[t]{.45\linewidth}
\includegraphics[width=\textwidth]{Fig13.eps}
\caption{Normal probability plot of $H^{n=200}_{3}(S)$ at the 200th iteration on the first coordinate, computed without the outlying (belated) particles. }
\label{fig:H3_sub}
\end{minipage}
\end{figure}
\section{Discussion}
Our main theoretical contribution revolves around the three CLTs and the confidence regions derived from sampling either a particle's path or the whole swarm. Practically, the confidence set $\Lambda_{1-\alpha}$ localizes, say, $g$ with high probability, meaning probability larger than $1-\alpha$. The simulations, carried out in Python, support the theory.
This work was initiated in order to solve a practical issue in connection with the oil industry and petrophysics. Yet in the previous section we confined ourselves to simulated data for several reasons; our method should now be applied to real data for a clear validation.
A second limitation of our work is the asymptotic set-up: the results are stated for samples large enough, which may not have an obvious meaning for readers unfamiliar with statistics or probability. Practitioners usually claim that the Central Limit Theorem machinery works well for a sample size larger than thirty or forty whenever the data are stable, say stationary (paths with no jumps, etc.). As a consequence, the first iterations of the algorithm should be discarded to ensure a warm-up period. When studying PSO, the behavior of $p_n$ and $g_n$ turns out to be crucial too in order to ensure this stability, hence the validity of the theoretical results. However, the control of $p_n$ and $g_n$ is a difficult matter, which explains our current assumptions of stagnation or ``almost stagnation''. Finally, in order to settle the question of the asymptotic versus non-asymptotic approach, work is in progress: we expect finite-sample concentration inequalities, the non-asymptotic counterparts of CLTs.
In the oscillating framework (see Corollary \ref{corollary:3}) our results involve the parameter $\theta$, the midpoint of the segment $\left[p,g\right]$: the target $g$ itself is missed. We claim that our method may be adapted to tackle this issue. Briefly speaking, running two independent PSOs on the same domain with two distinct pairs $\left(c_1,c_2\right)$ and $\left(c'_1,c'_2\right)$ will provide, under additional assumptions on the local and global minima of the cost function, two CLTs involving:
\[
\theta=\frac{c_1g+c_2p}{c_1+c_2} \quad \mathrm{and} \quad \theta'=\frac{c'_1g+c'_2p}{c'_1+c'_2}.
\]
A simple matrix inversion then recovers the CLT for e.g. $g$.
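A small numerical illustration of this inversion, with arbitrary (hypothetical) coefficient pairs:

```python
import numpy as np

# Hypothetical coefficient pairs for the two independent PSO runs
c1, c2 = 1.2, 1.8          # first run
d1, d2 = 2.0, 1.0          # second run, playing the role of (c'_1, c'_2)
g_true, p_true = 3.0, 2.0  # unknowns to recover

theta = (c1 * g_true + c2 * p_true) / (c1 + c2)
theta_prime = (d1 * g_true + d2 * p_true) / (d1 + d2)

# The 2x2 system is invertible as soon as c1/(c1+c2) != d1/(d1+d2)
A = np.array([[c1, c2], [d1, d2]], dtype=float)
A /= A.sum(axis=1, keepdims=True)
g_hat, p_hat = np.linalg.solve(A, [theta, theta_prime])
print(g_hat, p_hat)        # recovers g = 3.0 and p = 2.0
```

The invertibility condition simply requires the two runs to mix $p$ and $g$ with different relative weights.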
\FloatBarrier
\section{Derivations of the results}
We start with some notation. First recall the sup-norm for square
matrices of size $d$, defined by:
\[
\left\Vert M\right\Vert _{\infty}=\sup_{x\neq0}\frac{\left\Vert Mx\right\Vert
}{\left\Vert x\right\Vert }.
\]
The tensor product notation is convenient when dealing with special kinds of matrices, for instance covariance matrices. Let $u$ and $v$ be two vectors of $\mathbb{R}^{d}$; then $u\otimes v=uv^{t}$ (where $v^{t}$ is the transpose of vector $v$) stands for the rank-one matrix defined for every vector $x$ by $\left( u\otimes v\right) \left( x\right) =\left\langle v,x\right\rangle u$. Besides, $\left\Vert u\otimes v\right\Vert _{\infty}=\left\Vert u\right\Vert \left\Vert v\right\Vert $. The Hadamard product between vectors was mentioned earlier; its matrix version is defined in a similar way. Let $M$ and $S$ be two matrices of the same size; then $M\odot S$ is the matrix whose $\left( i,j\right) $ cell is $\left( M\odot S\right)_{i,j}=m_{i,j}s_{i,j}$. We recall without proof the following computation rule mixing Hadamard and tensor products. Let $\eta,\varepsilon,u$ and $v$ be four vectors in $\mathbb{R}^{d}$. Then:
\begin{equation}
\left( \eta\odot u\right) \otimes\left( \varepsilon\odot v\right) =\left(
\eta\otimes\varepsilon\right) \odot\left( u\otimes v\right), \label{Hadam}
\end{equation}
and the reader must be aware that the Hadamard product on the left-hand side
operates between vectors whereas on the right-hand side it operates on matrices.
We will need $\mathcal{F}_{n}^{s}$ the filtration generated by the path of particle number $s$ up to step $n$ : $\left\{x_{0}^{s},...,x_{n}^{s}\right\}$ and
$\mathcal{F}_{n}^{\mathcal{S}}$ the filtration generated by the swarm up to step $n$ :
$\left\{ x_{0}^{s},...,x_{n}^{s}:s=1,...,S\right\} $ .
We will also denote later $g_{n}=\mathbb{E}\left(g_{n}\right)+\xi_{n}$ and $p_{n}=\mathbb{E}\left(p_{n}\right) +\nu_{n}$ the expectation-variance decomposition of
$g_{n}$ and $p_{n}$ where $\xi_{n}$ and $\nu_{n}$ are centered random vectors and support all the variability of $g_{n}$ and $p_{n}$ respectively.
\subsection{First case: oscillatory}
The proof of Theorem \ref{TH1} is developed in the next two subsections. The first subsection below provides some short preliminary results and the proof of the Theorem. The second subsection is devoted to the -somewhat long- derivation of the single Proposition \ref{covconv}. This Proposition \ref{covconv} is needed within the proof of Theorem \ref{TH1}.
\subsubsection{Preliminary results and proof of Theorem \ref{TH1}}
Let us start with some Lemmas that will be invoked later.
\begin{lemma}
\label{noise}Let $\varepsilon^{\left(i\right)}_{n}$ and $\eta^{\left(j\right)}_{n}$ be any coordinates of the random vectors $\varepsilon_{n}$ and $\eta_{n}$. Clearly, $\varepsilon^{\left(i\right)}_{n}$ and $\eta^{\left(j\right)}_{n}$ are not independent but they are uncorrelated, and they follow the same type of distribution. Besides:
\begin{align*}
\mathbb{E}{\varepsilon^{\left(i\right)}}^{2} & =\mathbb{E}{\eta^{\left(j\right)}}^{2}=1/6,\quad\mathbb{E}
\varepsilon^{\left(i\right)}\eta^{\left(j\right)}=0,\\
\mathbb{E}{\varepsilon^{\left(i\right)}}^{3}\eta^{\left(j\right)} & =\mathbb{E}\varepsilon^{\left(i\right)}{\eta^{\left(j\right)}}^{3}=0.
\end{align*}
\end{lemma}
The proof is elementary, hence omitted.
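A Monte Carlo sanity check of these moments, using the definitions $\varepsilon_{n}=r_{1,n}+r_{2,n}-e$ and $\eta_{n}=r_{1,n}-r_{2,n}$ with $r_{1,n},r_{2,n}$ uniform on $[0,1]$ (a Python sketch, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(1)
r1, r2 = rng.random((2, 10**6))   # coordinates of r_{1,n} and r_{2,n}, U(0,1)
eps = r1 + r2 - 1.0               # a coordinate of epsilon_n
eta = r1 - r2                     # a coordinate of eta_n

print(eps.var(), eta.var())                           # both approximately 1/6
print(np.mean(eps * eta))                             # approximately 0
print(np.mean(eps**3 * eta), np.mean(eps * eta**3))   # approximately 0
```

Note that `eps` and `eta` are uncorrelated but not independent, as stated in the Lemma.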
\begin{lemma}
\label{L1}When $\mathbf{A}_{1-3}$ hold and when $N$ goes to
infinity:
\[
\frac{1}{N}\sum_{n=1}^{N}\left\Vert \frac{\mathbb{E}p_{n}+\mathbb{E}g_{n}}{2}
-\mathbb{E}x_{n}\right\Vert ^{2}\rightarrow0.
\]
\end{lemma}
\textbf{Proof:} Denote $q_{n}=\mathbb{E}x_{n}-\left( \mathbb{E} p_{n}+
\mathbb{E}g_{n}\right) /2$. From (\ref{eq:expectation}), established below in the proof of Theorem \ref{TH1}, we get in matrix form:
\[
\left(
\begin{array}
[c]{c}
q_{n+1}\\
q_{n}
\end{array}
\right) =\left[
\begin{array}
[c]{cc}
1+\omega-c & -\omega\\
1 & 0
\end{array}
\right] \left(
\begin{array}
[c]{c}
q_{n}\\
q_{n-1}
\end{array}
\right) +\left(
\begin{array}
[c]{c}
K_{n}\\
0
\end{array}
\right)
\]
where:
\[
K_{n+1}=\left( 1+\omega\right) \mathbb{E}\left( p_{n}\mathbb{+}g_{n}\right)
-\omega\mathbb{E}\left( p_{n-1}\mathbb{+}g_{n-1}\right) -\mathbb{E} \left(
p_{n+1}\mathbb{+}g_{n+1}\right).
\]
The above equation may be rewritten $\mathbf{q}_{n+1}=\mathbf{Mq}_{n}+
\mathbf{K}_{n+1}$ with obvious but shorter notations. Notice that in $K_{n}$
each $\mathbb{E}\left( p_{k}\mathbb{+}g_{k}\right)$, $k\in\{n-1,n,n+1\}$ may
be replaced by $\mathbb{E}\left( p_{k}\mathbb{+}g_{k}\right) -\left(
p+g\right) $. Consequently, we know that $K_{n}$ tends to $0$. Also notice
that the spectral radius of $\mathbf{M}$ is strictly smaller than $1$: its eigenvalues solve $\lambda^{2}-\left( 1+\omega-c\right) \lambda+\omega=0$, whose roots lie inside the unit disk when $\omega<1$ and $0<c<2\left( 1+\omega\right) $. Hence there exist $C_{0}>0$ and $0<\rho<1$ such that $\left\Vert \mathbf{M}^{k}\right\Vert _{\infty}\leq C_{0}\rho^{k}$ for all $k$.
Then we derive:
\[
\mathbf{q}_{n}\mathbf{=}\sum_{p=2}^{n}\mathbf{M}^{n-p}\mathbf{K}_{p}.
\]
It suffices to notice now that if $\mathbf{q}_{n}$ decays to $0$, so does
$\left\Vert \mathbf{q}_{n}\right\Vert $, and Lemma \ref{L1} will then follow from
a simple application of Ces\`{a}ro's Lemma. Let us prove now that
$\mathbf{q}_{n}\rightarrow0$.
Take $\varepsilon>0$, denote $K_{\infty}=\sup_{p}\left\Vert \mathbf{K}
_{p}\right\Vert $ and $C_{\rho}=C_{0}\left[ 1-\rho\right] ^{-1}$. First pick
$N^{\ast}$ such that
$\sup_{p\geq N^{\ast}+1}\left\Vert \mathbf{K}_{p}\right\Vert <\varepsilon
/\left( 2C_{\rho}\right) $. Then:
\begin{align*}
\left\Vert \mathbf{q}_{n}\right\Vert & \leq\sum_{p=2}^{N^{\ast}}\left\Vert
\mathbf{M}^{n-p}\right\Vert _{\infty}\left\Vert \mathbf{K}_{p}\right\Vert
+\sum_{p=N^{\ast}+1}^{n}\left\Vert \mathbf{M}^{n-p}\right\Vert _{\infty}
\left\Vert \mathbf{K}_{p}\right\Vert \\
& \leq C_{0}\rho^{n-N^{\ast}}\sum_{p=2}^{N^{\ast}}\rho^{N^{\ast}-p}\left\Vert
\mathbf{K}_{p}\right\Vert +C_{0}\sup_{p\geq N^{\ast}+1}\left\Vert \mathbf{K}
_{p}\right\Vert \sum_{p=N^{\ast}+1}^{n}\rho^{n-p}\\
& \leq C_{\rho}\left[ K_{\infty}\rho^{n-N^{\ast}}+\sup_{p\geq N^{\ast}
+1}\left\Vert \mathbf{K}_{p}\right\Vert \right].
\end{align*}
Then let $N^{\dag}$ be such that $\rho^{N^{\dag}}<\varepsilon/\left(
2C_{\rho}K_{\infty}\right) $. It can be seen from the equations above that for
$n>N^{\dag}+N^{\ast}$, $\left\Vert \mathbf{q}_{n}\right\Vert <\varepsilon$.
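Though not needed for the proof, the decay of $\mathbf{q}_{n}$ can be illustrated numerically: iterating $\mathbf{q}_{n+1}=\mathbf{Mq}_{n}+\mathbf{K}_{n+1}$ with any sequence $K_{n}\rightarrow0$ drives $\mathbf{q}_{n}$ to zero as soon as the powers of $\mathbf{M}$ decay geometrically (a Python sketch with hypothetical values of $\omega$ and $c$):

```python
import numpy as np

omega, c = 0.3, 0.9   # hypothetical values inside the stability region
M = np.array([[1 + omega - c, -omega],
              [1.0, 0.0]])
# the spectral radius of M is below 1, so its powers decay geometrically
assert np.abs(np.linalg.eigvals(M)).max() < 1

q = np.array([1.0, 1.0])              # arbitrary initial condition
for n in range(1, 501):
    K = np.array([1.0 / n, 0.0])      # any deterministic sequence K_n -> 0
    q = M @ q + K

print(np.linalg.norm(q))              # close to 0
```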
The following Proposition is crucial:
\begin{proposition}
\label{P1}Let $S_{N}=\sum_{n=1}^{N}z_{n}$ then:
\[
S_{N}=\sum_{n=1}^{N}\left[ \frac{\eta_{n}}{2}\odot\left( p_{n}-g_{n}\right)
-\varepsilon_{n}\odot z_{n}\right] +\sum_{n=1}^{N}\tilde{r}_n+ \frac{1}{c}\left(
z_{1}-\omega z_{0}+\omega z_{N}-z_{N+1}\right).
\]
Under assumptions $\mathbf{A}_{2}$
\[
\frac{1}{\sqrt{N}}\sum_{n=1}^{N}\tilde{r}_n\rightarrow_{\mathbb{P}}0.
\]
Besides when $\mathbf{A}_{1}$ holds (the $x_{n}^{\prime}$s are almost surely
bounded) then $S_{N}/\sqrt{N}$ converges weakly if and only if:
\[
\frac{1}{\sqrt{N}}\sum_{n=1}^{N}\left[ \frac{\eta_{n}}{2}\odot\left(
p_{n}-g_{n}\right) -\varepsilon_{n}\odot z_{n}\right]
\]
converges weakly too.
\end{proposition}
\textbf{Proof of Proposition \ref{P1}:} The first equation is obtained by
summing from $n=1$ to $N$ both sides of Equation (\ref{base}). Then we are
going to prove that $\frac{1}{\sqrt{N}}\sum_{n=1}^{N}\tilde{r}_n\rightarrow_{
\mathbb{P}}0$ by Markov's inequality, which will be enough to get the rest of the
Proposition. We start from $\tilde{r}_n=\varepsilon_{n}\odot\left( \frac{\mathbb{E}
p_{n}+\mathbb{E}g_{n}}{2}-\mathbb{E}x_{n}\right) +\frac{e+\varepsilon_{n}}{2
}\odot\left( p_{n}-\mathbb{E}p_{n}+g_{n}-\mathbb{E}g_{n}\right) $. By the first
part of assumption $\mathbf{A}_{2}$:
\[
\frac{1}{\sqrt{N}}\sum_{n=1}^{N}\left( p_{n}-\mathbb{E}p_{n}\right)
\rightarrow_{\mathbb{P}}0\quad\frac{1}{\sqrt{N}}\sum_{n=1}^{N}\left( g_{n}-
\mathbb{E}g_{n}\right) \rightarrow_{\mathbb{P}}0.
\]
We turn to:
\[
\mathbb{E}\left\Vert \sum_{n=1}^{N}\varepsilon_{n}\odot\left( \frac{\mathbb{E}
p_{n}+\mathbb{E}g_{n}}{2}-\mathbb{E}x_{n}\right) \right\Vert ^{2}=\mathbb{E}
\varepsilon_{1}^{2}\sum_{n=1}^{N}\left\Vert \frac{\mathbb{E}p_{n}+\mathbb{E}
g_{n}}{2}-\mathbb{E}x_{n}\right\Vert ^{2},
\]
because $\varepsilon_{n}$ is an i.i.d. centered sequence. The Proposition now
follows from Lemma \ref{L1}. This completes the proof of the Proposition.
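The variance identity used just above can be checked by simulation: for an i.i.d. centered sequence $\varepsilon_{n}$ with coordinate variance $1/6$ and deterministic vectors $\beta_{n}$, one has $\mathbb{E}\Vert\sum_{n}\varepsilon_{n}\odot\beta_{n}\Vert^{2}=\frac{1}{6}\sum_{n}\Vert\beta_{n}\Vert^{2}$ (a Python sketch; the $\beta_{n}$ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
N, d, reps = 50, 3, 20000
beta = rng.standard_normal((N, d))          # fixed deterministic vectors beta_n

# coordinates of eps_n = r_{1,n} + r_{2,n} - e, centered, variance 1/6
eps = rng.random((reps, N, d)) + rng.random((reps, N, d)) - 1.0
S = (eps * beta).sum(axis=1)                # sum_n eps_n ⊙ beta_n, per replicate

lhs = (S**2).sum(axis=1).mean()             # Monte Carlo E || sum ||^2
rhs = (1 / 6) * (beta**2).sum()             # (E eps^2) * sum_n ||beta_n||^2
print(lhs, rhs)                             # close to each other
```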
Proposition \ref{P1} shows that the limiting distribution of $S_{N}$ is
completely determined by the limiting distribution of $\sum_{n=1}^{N}\left[
\frac{\eta_{n}}{2}\odot\left( p_{n}-g_{n}\right) -\varepsilon_{n}\odot
z_{n}\right] $. If a Central Limit Theorem holds for the previous
series, then the limiting distribution will depend on the covariance matrix of
$\frac{\eta_{n}}{2}\odot\left( p_{n}-g_{n}\right) -\varepsilon_{n}\odot
z_{n}$. The latter will be decomposed into several terms.
\textbf{Proof of Theorem \ref{TH1}:}
As a preliminary step, let us rewrite (\ref{pso-def}) on a single line:
\[
x_{n+1}=\left( 1+\omega\right) x_{n}-\omega x_{n-1}+c\left( r_{1,n}
+r_{2,n}\right) \odot\left( \frac{p_{n}+g_{n}}{2}-x_{n}\right) +c\left(
r_{1,n}-r_{2,n}\right) \odot\frac{p_{n}-g_{n}}{2}.
\]
Taking expectation:
\begin{equation}
\mathbb{E}x_{n+1}=\left( 1+\omega\right) \mathbb{E}x_{n}-\omega\mathbb{E}
x_{n-1}+c\left( \frac{\mathbb{E}p_{n}+\mathbb{E}g_{n}}{2}-\mathbb{E}
x_{n}\right).
\label{eq:expectation}
\end{equation}
By subtracting, denoting the centered process $z_{n}=x_{n}-\mathbb{E}x_{n}$ and $e$ the vector
of $\mathbb{R}^{d}$ defined by $e=\left( 1,1,...,1\right)$ the PSO
equation becomes :
\begin{equation}
z_{n+1}=\left( 1+\omega-c\right) z_{n}-\omega z_{n-1}-c\varepsilon_{n}\odot
z_{n}+c \tilde{r}_{n}+\frac{c}{2}\eta_{n}\odot\left( p_{n}-g_{n}\right), \label{base}
\end{equation}
with
\begin{align*}
\varepsilon_{n} & =r_{1,n}+r_{2,n}-e\quad\eta_{n}=r_{1,n}-r_{2,n},\\
\tilde{r}_n & =\varepsilon_{n}\odot\left( \frac{\mathbb{E}p_{n}+\mathbb{E}g_{n}
}{2 }-\mathbb{E}x_{n}\right) +\frac{e+\varepsilon_{n}}{2}\odot\left( p_{n}-
\mathbb{E}p_{n}+g_{n}-\mathbb{E}g_{n}\right).
\end{align*}
Equation (\ref{base}) will be a starting point especially for studying single
particle trajectories.
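The one-line rewriting above is algebraically identical to the standard inertia-weight PSO update $x_{n+1}=x_{n}+\omega\left(x_{n}-x_{n-1}\right)+cr_{1,n}\odot\left(p_{n}-x_{n}\right)+cr_{2,n}\odot\left(g_{n}-x_{n}\right)$, assuming this is the form taken by (\ref{pso-def}); a quick numerical check of the identity (Python sketch):

```python
import numpy as np

rng = np.random.default_rng(2)
omega, c = 0.7, 1.4                         # hypothetical parameter values
x, x_prev, p, g = rng.standard_normal((4, 6))
r1, r2 = rng.random((2, 6))

standard = (x + omega * (x - x_prev)
            + c * r1 * (p - x) + c * r2 * (g - x))
one_line = ((1 + omega) * x - omega * x_prev
            + c * (r1 + r2) * ((p + g) / 2 - x)
            + c * (r1 - r2) * (p - g) / 2)

assert np.allclose(standard, one_line)
```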
We turn to the Theorem. From:
\[
\frac{1}{N}\sum_{n=1}^{N}x_{n}-\frac{1}{N}\sum_{n=1}^{N}\frac{\mathbb{E}
p_{n}+\mathbb{E}g_{n}}{2}=\frac{1}{N}\sum_{n=1}^{N}z_{n}+\frac{1}{N}\sum
_{n=1}^{N}\left( \mathbb{E}x_{n}-\frac{\mathbb{E}p_{n}+\mathbb{E}g_{n}}
{2}\right),
\]
we see from the proof of Lemma \ref{L1} that $\left( 1/\sqrt{N}\right)
\sum_{n=1}^{N}\left[ \mathbb{E}x_{n}-\left( \mathbb{E}p_{n}+\mathbb{E}
g_{n}\right) /2\right] \rightarrow0$ as $N$ tends to infinity. Following
Proposition \ref{P1}, the Theorem \ref{TH1} comes down to proving:
\[
\frac{1}{\sqrt{N}}\sum_{n=1}^{N}\left[ \frac{\eta_{n}}{2}\odot\left(
p_{n}-g_{n}\right) -\varepsilon_{n}\odot z_{n}\right] \hookrightarrow
\mathcal{N}\left( 0,\Gamma\right).
\]
We can first remark that $u_{n}=\frac{\eta_{n}}{2}\odot\left( p_{n}
-g_{n}\right) -\varepsilon_{n}\odot z_{n}$ is a vector-valued martingale
difference sequence with respect to the filtration $\mathcal{F}_{n}
^{s}=\mathcal{F}_{n}$. We confine ourselves to proving a L\'{e}vy-Lindeberg
version of the CLT for the series of $u_{n}$ in two steps (Theorem 2.1.9 p. 46 and its Corollary 2.1.10 in \citealp{Duflo_1997}): first ensuring convergence of the conditional covariance structure of $\left( 1/\sqrt{N}\right) \sum_{n=1}^{N}u_{n}$, then checking that the Lyapunov condition holds (hence Lindeberg's uniform integrability, which ensures uniform tightness of the sequence).
\paragraph*{}\textbf{First step: }The conditional covariance sequence of $u_{n}$ is the
sequence of matrices defined by $\frac{1}{N}\sum_{n=1}^{N}\mathbb{E}\left(
u_{n}\otimes u_{n}|\mathcal{F}_{n}\right) $. We show in this first step that
this sequence converges in probability to $\Gamma$. We start with elementary
calculations:
\begin{align*}
\frac{1}{N}\sum_{n=1}^{N}\mathbb{E}\left( u_{n}\otimes u_{n}|\mathcal{F}
_{n}\right) & =\frac{1}{N}\sum_{n=1}^{N}\mathbb{E}\left( \frac{\eta_{n}
}{2}\odot\left( p_{n}-g_{n}\right) \otimes\frac{\eta_{n}}{2}\odot\left(
p_{n}-g_{n}\right) |\mathcal{F}_{n}\right) \\
& +\frac{1}{N}\sum_{n=1}^{N}\mathbb{E}\left( \left( \varepsilon_{n}\odot z_{n}\right) \otimes\left(
\varepsilon_{n}\odot z_{n}\right) |\mathcal{F}_{n}\right) \\
& -\frac{1}{2N}\sum_{n=1}^{N}\mathbb{E}\left( \left[ \eta_{n}\odot\left(
p_{n}-g_{n}\right) \right] \otimes\left[ \varepsilon_{n}\odot z_{n}\right] |\mathcal{F}_{n}\right) \\
& -\frac{1}{2N}\sum_{n=1}^{N}\mathbb{E}\left( \left[ \varepsilon_{n}\odot z_{n}\right] \otimes\left[ \eta_{n}
\odot\left( p_{n}-g_{n}\right) \right] |\mathcal{F}_{n}\right) \\
& =\frac{\mathbb{E}\varepsilon_{1}^{2}}{N}\mathbf{I}\odot\left\{ \sum
_{n=1}^{N}\frac{1}{4}\left[ \left( p_{n}-g_{n}\right) \otimes\left(
p_{n}-g_{n}\right) \right] +z_{n}\otimes z_{n}\right\} \\
& =\frac{1}{6}\mathbf{I}\odot\left[ \frac{1}{4}\Gamma_{N,p,g}+\Gamma
_{N,z}\right],
\end{align*}
where we used Lemma \ref{noise} and denoted $\Gamma_{N,p,g}=\frac{1}{N}
\sum_{n=1}^{N}\left[ \left( p_{n}-g_{n}\right) \otimes\left( p_{n}
-g_{n}\right) \right]$ and :
\[
\Gamma_{N,z}=\frac{1}{N}\sum_{n=1}^{N}z_{n}\otimes z_{n}.
\]
By Proposition \ref{covconv}, stated and proved in the next subsection, $\Gamma_{N,z}-\mathfrak{H}\,\mathbf{I}\odot\Gamma_{N,p,g}$ tends in probability to zero. Hence whenever the limit in the
r.h.s below exists:
\[
\lim_{N\rightarrow+\infty}\frac{1}{N}\sum_{n=1}^{N}\mathbb{E}\left(
u_{n}\otimes u_{n}|\mathcal{F}_{n}\right) =\left( \frac{1}{24}
+\frac{\mathfrak{H}}{6}\right) \lim_{N\rightarrow+\infty}\mathbf{I}
\odot\Gamma_{N,p,g}.
\]
Under assumption $\mathbf{A}_{1-2}$, $\lim_{N\rightarrow+\infty}\Gamma
_{N,p,g}-\overline{\Gamma}_{N,p,g}=0$ in matrix norm, where \[\overline{\Gamma}_{N,p,g}=\frac{1}{N}\sum_{n=1}^{N}\mathbb{E}\left( p_{n}-g_{n}\right)\otimes\mathbb{E}\left( p_{n}-g_{n}\right).\] To prove this it
suffices to write:
\[
\Gamma_{N,p,g}-\overline{\Gamma}_{N,p,g}=\frac{1}{N}\sum_{n=1}^{N}\left(
\nu_{n}-\xi_{n}\right) \otimes\left( p_{n}-g_{n}\right) +\frac{1}{N}
\sum_{n=1}^{N}\mathbb{E}\left( p_{n}-g_{n}\right) \otimes\left( \nu_{n}
-\xi_{n}\right),
\]
and apply the bounds derived from the proof of Proposition \ref{covconv}, for
instance:
\begin{align*}
\frac{1}{N}\left\Vert \sum_{n=1}^{N}\left( \nu_{n}-\xi_{n}\right)
\otimes\left( p_{n}-g_{n}\right) \right\Vert _{\infty} & \leq\frac{1}
{N}\sum_{n=1}^{N}\left\Vert \left( \nu_{n}-\xi_{n}\right) \otimes\left(
p_{n}-g_{n}\right) \right\Vert _{\infty}\\
& \leq \frac{1}{N}\sum_{n=1}^{N}\left\Vert
\nu_{n}-\xi_{n}\right\Vert \left\Vert p_{n}-g_{n}\right\Vert \\
& \leq\sup_{n}\left\Vert p_{n}-g_{n}\right\Vert \frac{1}{N}\sum_{n=1}
^{N}\left( \left\Vert \nu_{n}\right\Vert +\left\Vert \xi_{n}\right\Vert
\right) \rightarrow_{\mathbb{P}}0.
\end{align*}
Finally, the convergence of $\overline{\Gamma}_{N,p,g}$ to $\Gamma$ is a
consequence of the adaptation of Ces\`{a}ro's Lemma to tensor products.
\paragraph*{}\textbf{Second step:} A Lyapunov condition holds almost trivially here.
Namely we are going to prove that:
\[
\frac{1}{N^{2}}\sum_{n=1}^{N}\mathbb{E}\left\Vert u_{n}\right\Vert
^{4}\rightarrow_{N\rightarrow+\infty}0.
\]
Simple calculations provide:
\begin{align*}
\left\Vert u_{n}\right\Vert ^{2} & \leq2\left\Vert \frac{\eta_{n}}{2}
\odot\left( p_{n}-g_{n}\right) \right\Vert ^{2}+2\left\Vert \varepsilon
_{n}\odot z_{n}\right\Vert ^{2}\\
& \leq\frac{1}{2}\sup_{n,i}\lbrace\left( p_{n,i}-g_{n,i}\right) ^{2}\rbrace\left\Vert
\eta_{n}\right\Vert ^{2}+2\sup_{n,i}\lbrace z_{n,i}^{2}\rbrace\left\Vert \varepsilon
_{n}\right\Vert ^{2}\\
& \leq8\left\vert \mathrm{\Omega}\right\vert _{\infty}^{2}\left[ \left\Vert \eta
_{n}\right\Vert ^{2}+\left\Vert \varepsilon_{n}\right\Vert ^{2}\right],
\end{align*}
since $\left\vert p_{n,i}-g_{n,i}\right\vert \leq2\left\vert \mathrm{\Omega}\right\vert _{\infty}$ and $\left\vert z_{n,i}\right\vert \leq2\left\vert \mathrm{\Omega}\right\vert _{\infty}$, where $\left\vert \mathrm{\Omega}\right\vert _{\infty}=\sup_{s\in\mathrm{\Omega}}\left\vert
s\right\vert $. Hence
\[
\mathbb{E}\left\Vert u_{n}\right\Vert ^{4}\leq C\left\vert \mathrm{\Omega}\right\vert
_{\infty}^{4},
\]
for some constant $C$. Then $\left( 1/N^{2}\right) \sum_{n\leq N}
\mathbb{E}\left\Vert u_{n}\right\Vert ^{4}$ tends to zero when $N$ tends to
infinity. This completes the proof of Theorem \ref{TH1}.
\subsubsection{Proof of Proposition \ref{covconv}}\label{sssec:num1}
We are ready to state and prove this important intermediate result.
\begin{proposition}
\label{covconv}Under assumption $\mathbf{A}_{3}$ denote:
\[
\mathfrak{H}=\frac{c^{2}}{24}\left( 2c\left( \frac{1-\omega}{1+\omega
}\right) \left( 1+\omega-\frac{c}{2}\right) -\frac{c^{2}}{6}\right) ^{-1},
\]
then when $N$ tends to $+\infty$
\[
\Gamma_{N,z}-\frac{\mathfrak{H}}{N}\sum_{n=1}^{N}\mathrm{diag}\left(
\delta_{n}^{\odot2}\right) \rightarrow_{\mathbb{P}}0,
\]
with $\delta_{n}=p_{n}-g_{n}$.\bigskip
\end{proposition}
The proof of the Proposition will immediately follow from this Lemma.
\begin{lemma}
\label{LLN}Let $E_{n}$ be a sequence of i.i.d. centered random matrices with
finite moments of order 4, and let $u_{n}$ and $v_{n}$ be two sequences of random
vectors, almost surely bounded and such that $\left( u_{n},v_{n}\right) $ is,
for all $n$, independent from $E_{n}$. Then for the $\left\Vert \cdot
\right\Vert _{\infty}$ norm:
\[
\frac{1}{N}\sum_{n=1}^{N}E_{n}\odot\left( u_{n}\otimes v_{n}\right)
\rightarrow_{\mathbb{P}}0.
\]
\end{lemma}
The proof of the above Lemma is a straightforward application of the
Cauchy-Schwarz inequality.
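A quick simulation illustrating the Lemma (Python sketch; the particular distributions are arbitrary as long as $E_{n}$ is centered and $\left(u_{n},v_{n}\right)$ is bounded and independent of $E_{n}$):

```python
import numpy as np

rng = np.random.default_rng(5)
d, N = 3, 200000
E = rng.uniform(-1, 1, (N, d, d))        # i.i.d. centered random matrices
u = np.cos(rng.standard_normal((N, d)))  # bounded random vectors,
v = np.sin(rng.standard_normal((N, d)))  # independent of E

# (1/N) * sum_n E_n ⊙ (u_n ⊗ v_n), entrywise
avg = (E * (u[:, :, None] * v[:, None, :])).mean(axis=0)
print(np.abs(avg).max())                 # close to 0
```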
\paragraph*{}\textbf{Proof of the Proposition \ref{covconv}}:
We take advantage of Equation (\ref{base}) and note that
\begin{align*}
\left( \varepsilon_{n}\odot z_{n}\right) \otimes\left( \varepsilon_{n}\odot
z_{n}\right) & =\left( \varepsilon_{n}\otimes\varepsilon_{n}\right)
\odot\left( z_{n}\otimes z_{n}\right) \\
& =\left[ \left( \varepsilon_{n}\otimes\varepsilon_{n}\right)
-\sigma_{\varepsilon}^{2}\mathbf{I}\right] \odot\left( z_{n}\otimes
z_{n}\right) +\sigma_{\varepsilon}^{2}\mathbf{I}\odot\left( z_{n}\otimes
z_{n}\right),
\end{align*}
where $\sigma_{\varepsilon}^{2}=1/6$ by Lemma \ref{noise} and $\mathbf{I}$ stands for
the identity matrix. After some tedious calculations we obtain:
\begin{align}
z_{n+1}\otimes z_{n+1} & =\left( 1+\omega-c\right) ^{2}\left(
z_{n}\otimes z_{n}\right) +\omega^{2}z_{n-1}\otimes z_{n-1} \label{cov1}\\
& +c^{2}\sigma_{\varepsilon}^{2} \left[ \mathbf{I}\odot\left( z_{n}\otimes
z_{n}\right) \right]+\frac{c^{2}}{4}\left[ \eta_{n}\odot\left( p_{n}-g_{n}\right) \right]
\otimes\left[ \eta_{n}\odot\left( p_{n}-g_{n}\right) \right] \nonumber\\
& -\omega\left( 1+\omega-c\right) \left[ z_{n}\otimes z_{n-1}+z_{n-1}\otimes
z_{n}\right] +\mathcal{C}_{n}.\nonumber
\end{align}
For the sake of completeness, we list now all the eleven terms contained in
$\mathcal{C}_{n}$. In order to simplify notations let $\left[ \left[
u:v\right] \right] =u\otimes v+v\otimes u$. Notice for further use:
\[
\left\Vert \left[ \left[ u:v\right] \right] \right\Vert _{\infty}
\leq2\left\Vert u\right\Vert \left\Vert v\right\Vert.
\]
Then:
\begin{align}
\mathcal{C}_{n} & =c\left( 1+\omega-c\right) \left\{ \left[ \left[
z_{n}:\tilde{r}_n\right] \right] -\left[ \left[ z_{n}:\varepsilon_{n}\odot
z_{n}\right] \right] +\frac{1}{2}\left[ \left[ z_{n}:\eta_{n}\odot\left(
p_{n}-g_{n}\right) \right] \right] \right\} \label{Cnprime}\\
& +\omega c\left\{ \left[ \left[ z_{n-1}:\varepsilon_{n}\odot
z_{n}\right] \right] -\left[ \left[ z_{n-1}:\tilde{r}_n\right] \right]
-\frac{1}{2}\left[ \left[ z_{n-1}:\eta_{n}\odot\left( p_{n}-g_{n}\right)
\right] \right] \right\} \nonumber\\
& -c^{2}\left\{ \left[ \left[ \varepsilon_{n}\odot z_{n}:\tilde{r}_n\right]
\right] +\frac{1}{2}\left[ \left[ \varepsilon_{n}\odot z_{n}:\eta_{n}
\odot\left( p_{n}-g_{n}\right) \right] \right] -\frac{1}{2}\left[ \left[
\tilde{r}_n:\eta_{n}\odot\left( p_{n}-g_{n}\right) \right] \right] \right\}
\nonumber\\
& +c^{2}\tilde{r}_n\otimes \tilde{r}_n+c^{2}\left[ \left( \varepsilon_{n}
\otimes\varepsilon_{n}\right) -\sigma_{\varepsilon}^{2}\mathbf{I}\right] \odot\left(
z_{n}\otimes z_{n}\right). \nonumber
\end{align}
\smallskip
Summing (\ref{cov1}) over all indices $n$ in $\left\{ 1,...,N\right\} $ we
get:
\begin{align*}
\left( 1-\left( 1+\omega-c\right) ^{2}-\omega^{2}\right) \sum_{n=1}
^{N}z_{n}\otimes z_{n} & =c^{2}\sigma_{\varepsilon}^{2}\sum_{n=1}^{N}\left[
\mathbf{I}\odot\left( z_{n}\otimes z_{n}\right) \right] \\
& +\frac{c^{2}}{4}\sum_{n=1}^{N}\left[ \eta_{n}\odot\left( p_{n}
-g_{n}\right) \right] \otimes\left[ \eta_{n}\odot\left( p_{n}
-g_{n}\right) \right] \\
& -\omega\left( 1+\omega-c\right) \sum_{n=1}^{N}\left[ z_{n}\otimes
z_{n-1}+z_{n-1}\otimes z_{n}\right] \\
& +\sum_{n=1}^{N}\mathcal{C}_{n}+z_{1}\otimes z_{1}-z_{N+1}\otimes
z_{N+1}\\
&+\omega^{2}\left( z_{0}\otimes z_{0}-z_{N}\otimes z_{N}\right).
\end{align*}
Now we need to go slightly further in the computations and to find a recurrence
equation for $z_{n}\otimes z_{n-1}$. Let us start again from (\ref{base}):
\begin{align*}
z_{n+1}\otimes z_{n} & =\left( 1+\omega-c\right) z_{n}\otimes z_{n}-\omega
z_{n-1}\otimes z_{n}-c\left( \varepsilon_{n}\odot z_{n}\right) \otimes
z_{n}+c \tilde{r}_{n}\otimes z_{n}\\
&+\frac{c}{2}\left[ \eta_{n}\odot\left( p_{n}
-g_{n}\right) \right] \otimes z_{n},\\
z_{n}\otimes z_{n+1} & =\left( 1+\omega-c\right) z_{n}\otimes z_{n}-\omega
z_{n}\otimes z_{n-1}-cz_{n}\otimes\left( \varepsilon_{n}\odot z_{n}\right)
+cz_{n}\otimes \tilde{r}_n\\
&+\frac{c}{2}z_{n}\otimes\left[ \eta_{n}\odot\left(
p_{n}-g_{n}\right) \right].
\end{align*}
Summing over all indices $n$ in $\left\{ 1,...,N\right\} $ above, we come to:
\begin{align*}
& \left( 1+\omega\right) \sum_{n=1}^{N}\left[ z_{n}\otimes z_{n-1}
+z_{n-1}\otimes z_{n}\right] \\
& =z_{1}\otimes z_{0}+z_{0}\otimes z_{1}-z_{N}\otimes z_{N+1}-z_{N+1}\otimes
z_{N}+2\left( 1+\omega-c\right) \sum_{n=1}^{N}\left[ z_{n}\otimes
z_{n}\right] \\
& +c\sum_{n=1}^{N}\left\{ \left[ \left[ \tilde{r}_n:z_{n}\right] \right]
-\left[ [\left( \varepsilon_{n}\odot z_{n}\right):z_{n}\right] ]+\frac
{1}{2}\left[ \left[ \eta_{n}\odot\left( p_{n}-g_{n}\right):z_{n}\right]
\right] \right\}.
\end{align*}
Plugging the last equation into (\ref{cov1}) we finally get:
\begin{align*}
& \left( 1-\left( 1+\omega-c\right) ^{2}-\omega^{2}\right) \sum_{n=1}
^{N}z_{n}\otimes z_{n}\\
& =c^{2}\sigma_{\varepsilon}^{2}\sum_{n=1}^{N}\left[ \mathbf{I}\odot\left(
z_{n}\otimes z_{n}\right) \right] +\frac{c^{2}}{4}\sum_{n=1}^{N}\left[
\eta_{n}\odot\left( p_{n}-g_{n}\right) \right] \otimes\left[ \eta_{n}
\odot\left( p_{n}-g_{n}\right) \right] \\
& -2\omega\frac{\left( 1+\omega-c\right) ^{2}}{1+\omega}\sum_{n=1}
^{N}\left[ z_{n}\otimes z_{n}\right] +\omega c\frac{\left( 1+\omega
-c\right) }{1+\omega}\sum_{n=1}^{N}\left[ [\left( \varepsilon_{n}\odot
z_{n}\right):z_{n}\right] ]\\
& -\omega c\frac{\left( 1+\omega-c\right) }{1+\omega}\sum_{n=1}^{N}\left[
\left[ \tilde{r}_n:z_{n}\right] \right] -\omega c\frac{\left( 1+\omega-c\right)
}{\left( 1+\omega\right) }\sum_{n=1}^{N}\frac{1}{2}\left[ \left[ \eta
_{n}\odot\left( p_{n}-g_{n}\right):z_{n}\right] \right] +\sum_{n=1}
^{N}\mathcal{C}_{n}.
\end{align*}
Denoting $\kappa=\left( 1-\left( 1+\omega-c\right) ^{2}-\omega^{2}
+2\omega\frac{\left( 1+\omega-c\right) ^{2}}{1+\omega}\right) =2c\left(
\frac{1-\omega}{1+\omega}\right) \left( 1+\omega-\frac{c}{2}\right)$:
\[
\kappa\sum_{n=1}^{N}z_{n}\otimes z_{n}=\frac{c^{2}}{6}\sum_{n=1}^{N}\left[
\mathbf{I}\odot\left( z_{n}\otimes z_{n}\right) \right] +\frac{c^{2}}{4}\sum
_{n=1}^{N}\left[ \eta_{n}\odot\left( p_{n}-g_{n}\right) \right]
\otimes\left[ \eta_{n}\odot\left( p_{n}-g_{n}\right) \right]
+\mathfrak{R}_{N},
\]
with
\begin{align}
\mathfrak{R}_{N} & =-\omega\frac{\left( 1+\omega-c\right) }{1+\omega
}\left( \left[ \left[ z_{1}:z_{0}\right] \right] -\left[ \left[
z_{N}:z_{N+1}\right] \right] \right) +z_{1}\otimes z_{1}-z_{N+1}\otimes
z_{N+1}\label{R_n}\\
&+\omega^{2}\left( z_{0}\otimes z_{0}-z_{N}\otimes z_{N}\right)+\sum
_{n=1}^{N}\mathcal{C}_{n}\nonumber \\
& +\omega c\frac{\left( 1+\omega-c\right) }{1+\omega}\sum_{n=1}^{N}\left\{
\left[ [\left( \varepsilon_{n}\odot z_{n}\right):z_{n}\right] ]-\left[
\left[ \tilde{r}_n:z_{n}\right] \right] -\frac{1}{2}\left[ \left[ \eta_{n}
\odot\left( p_{n}-g_{n}\right):z_{n}\right] \right] \right\} .\nonumber
\end{align}
It is proved in Lemma \ref{L2} below that $\mathfrak{R}_{N}/N\rightarrow
_{\mathbb{P}}0$. First, let us unravel the matrix:
\[
\mathbf{T}_{N}=\frac{\kappa}{N}\sum_{n=1}^{N}z_{n}\otimes z_{n}-\frac{c^{2}
}{6}\frac{1}{N}\sum_{n=1}^{N}\left[ \mathbf{I}\odot\left( z_{n}\otimes z_{n}\right)
\right].
\]
Denote $\left[ \mathbf{T}_{N}\right] _{ij}$ the $\left( i,j\right) $ cell
of matrix $\mathbf{T}_{N}$. It is simple to see that:
\begin{equation}
\left[ \mathbf{T}_{N}\right] _{ii}=\left( \kappa-\frac{c^{2}}{6}\right)
\left[ \frac{1}{N}\sum_{n=1}^{N}\left( z_{n}\otimes z_{n}\right) \right]
_{ii},\quad\left[ \mathbf{T}_{N}\right] _{ij}=\kappa\left[ \frac{1}{N}
\sum_{n=1}^{N}z_{n}\otimes z_{n}\right] _{i,j}\label{T_N}
\end{equation}
for $i\neq j$. Now denote:
\begin{align*}
\Lambda_{N} & =\frac{1}{N}\sum_{n=1}^{N}\left[ \eta_{n}\odot\left(
p_{n}-g_{n}\right) \right] \otimes\left[ \eta_{n}\odot\left( p_{n}
-g_{n}\right) \right] \\
& =\frac{1}{N}\sum_{n=1}^{N}\left( \eta_{n}\otimes\eta_{n}\right)
\odot\left[ \left( p_{n}-g_{n}\right) \otimes\left( p_{n}-g_{n}\right)
\right].
\end{align*}
Taking conditional expectation w.r.t. $\mathcal{F}_{n}$ we get (all non-diagonal
terms vanish):
\begin{align*}
\mathbb{E}\left( \Lambda_{N}|\mathcal{F}_{n}\right) & =\frac{1}{N}
\sum_{n=1}^{N}\left[ \mathbb{E}\left( \eta_{n}\otimes\eta_{n}\right)
\right] \odot\left[ \left( p_{n}-g_{n}\right) \otimes\left( p_{n}
-g_{n}\right) \right] \\
& =\frac{\mathbb{E}\eta_{1}^{2}}{N}\sum_{n=1}^{N}\mathbf{I}\odot\left[
\left( p_{n}-g_{n}\right) \otimes\left( p_{n}-g_{n}\right) \right],
\end{align*}
with $\mathbf{I}$ the identity matrix. Noting that the difference:
\[
\Lambda_{N}-\mathbb{E}\left( \Lambda_{N}|\mathcal{F}_{n}\right) =\frac{1}
{N}\sum_{n=1}^{N}\left[ \left( \eta_{n}\otimes\eta_{n}\right) -\left(
\mathbb{E}\eta_{1}^{2}\right) \mathbf{I}\right] \odot\left[ \left(
p_{n}-g_{n}\right) \otimes\left( p_{n}-g_{n}\right) \right]
\]
vanishes when $N$ tends to infinity by Lemma \ref{LLN}, we get, with Landau
notation in probability:
\[
\mathbf{T}_{N}=\frac{c^{2}}{24N}\sum_{n=1}^{N}\mathbf{I}\odot\left[ \left(
p_{n}-g_{n}\right) \otimes\left( p_{n}-g_{n}\right) \right] +o_{\mathbb{P}}\left(
1\right).
\]
From (\ref{T_N}) we obtain simultaneously that $\left[ \mathbf{T}_{N}\right]
_{ij}\rightarrow_{\mathbb{P}}0$ for $i\neq j$ and
\[
\left[ \Gamma_{N,z}\right] _{ii}-\frac{c^{2}}{24\left( \kappa-\frac
{c^{2}}{6}\right) }\frac{1}{N}\sum_{n=1}^{N}\left[ \left( p_{n}-g_{n}\right)
\otimes\left( p_{n}-g_{n}\right) \right] _{ii}\rightarrow
_{\mathbb{P}}0,
\]
which is precisely the statement of Proposition \ref{covconv}.
\begin{lemma}
\label{L2}$\mathfrak{R}_{N}/N\rightarrow_{\mathbb{P}}0$ in matrix norm.
\end{lemma}
\textbf{Proof:} Let us start from (\ref{R_n}). First by assumption
$\mathbf{A}_{1}$:
\[
\frac{1}{N}\left( \left[ \left[ z_{1}:z_{0}\right] \right] -\left[
\left[ z_{N}:z_{N+1}\right] \right] \right) +z_{1}\otimes z_{1}
-z_{N+1}\otimes z_{N+1}+\omega^{2}\left( z_{0}\otimes z_{0}-z_{N}\otimes
z_{N}\right) \rightarrow_{\mathbb{P}}0.
\]
It must be also noticed that:
\[
\omega c\frac{\left( 1+\omega-c\right) }{1+\omega}\sum_{n=1}^{N}\left\{
\left[ [\left( \varepsilon_{n}\odot z_{n}\right):z_{n}\right] ]-\left[
\left[ \tilde{r}_n:z_{n}\right] \right] -\frac{1}{2}\left[ \left[ \eta_{n}
\odot\left( p_{n}-g_{n}\right):z_{n}\right] \right] \right\}
\]
involves three terms between double brackets that already appear in
$\mathcal{C}_{n}$, up to constants. Proving that each term of $\left(
1/N\right) \sum_{n=1}^{N}\mathcal{C}_{n}$ tends in probability to $0$ will be
sufficient to complete the proof of the Lemma.
Our aim is to perform successive applications of Lemma \ref{LLN}. For the sake
of completeness we recall now the eleven terms contained in $\mathcal{C}_{n}$
and mentioned earlier.
\begin{align*}
\mathcal{C}_{n} & =c\left( 1+\omega-c\right) \left\{ \left[ \left[
z_{n}:\tilde{r}_n\right] \right] -\left[ \left[ z_{n}:\varepsilon_{n}\odot
z_{n}\right] \right] +\frac{1}{2}\left[ \left[ z_{n}:\eta_{n}\odot\left(
p_{n}-g_{n}\right) \right] \right] \right\} \\
& +\omega c\left\{ \left[ \left[ z_{n-1}:\varepsilon_{n}\odot
z_{n}\right] \right] -\left[ \left[ z_{n-1}:\tilde{r}_n\right] \right]
-\frac{1}{2}\left[ \left[ z_{n-1}:\eta_{n}\odot\left( p_{n}-g_{n}\right)
\right] \right] \right\} \\
& -c^{2}\left\{ \left[ \left[ \varepsilon_{n}\odot z_{n}:\tilde{r}_n\right]
\right] +\frac{1}{2}\left[ \left[ \varepsilon_{n}\odot z_{n}:\eta_{n}
\odot\left( p_{n}-g_{n}\right) \right] \right] -\frac{1}{2}\left[ \left[
\tilde{r}_n:\eta_{n}\odot\left( p_{n}-g_{n}\right) \right] \right] \right\} \\
& +c^{2}\tilde{r}_n\otimes \tilde{r}_n+c^{2}\left[ \left( \varepsilon_{n}
\otimes\varepsilon_{n}\right) -\sigma_{\varepsilon}^{2}\mathbf{I}\right] \odot\left(
z_{n}\otimes z_{n}\right).
\end{align*}
In the list above terms numbered 2, 3, 4, 6, 11 vanish by applying
successively (\ref{Hadam}) and Lemma \ref{LLN}. Take for instance term 2, get
rid of the constants and focus on the first tensor product in the double
bracket, namely:
\[
z_{n}\otimes\left( \varepsilon_{n}\odot z_{n}\right) =\left( e\odot
z_{n}\right) \otimes\left( \varepsilon_{n}\odot z_{n}\right) =\left(
e\otimes\varepsilon_{n}\right) \odot\left( z_{n}\otimes z_{n}\right).
\]
Recall that $e$ is the vector with all components valued at $1.$ Then we can
apply Lemma \ref{LLN} which shows that $\left( 1/N\right) \sum_{n=1}
^{N}\left( e\otimes\varepsilon_{n}\right) \odot\left( z_{n}\otimes
z_{n}\right) $ vanishes in probability. It is not hard to see that terms 3,
4, 6 and 11 may be treated the same way.
Term number 8 depends on both $\varepsilon_{n}$ and $\eta_{n}$. An
application of (\ref{Hadam}) leads to considering, up to a constant and a
commutation:
\[
\left( \eta_{n}\otimes\varepsilon_{n}\right) \odot\left( z_{n}
\otimes\left( p_{n}-g_{n}\right) \right).
\]
The reader can check that the matrix $E_{n}=\eta_{n}\otimes
\varepsilon_{n}$ is stochastically independent from $z_{n}
\otimes\left( p_{n}-g_{n}\right) $, and that the matrices $E_{n}$
are independent and centered. Lemma \ref{LLN} applies here.
The terms numbered 1, 5, 7, 9, 10 depend on $\tilde{r}_n.$ Now consider
\[\tilde{r}_n=\varepsilon_{n}\odot\left( \frac{\mathbb{E}p_{n}+\mathbb{E}g_{n}}
{2}-\mathbb{E}x_{n}\right) +\frac{e+\varepsilon_{n}}{2}\odot\left( \xi
_{n}+\nu_{n}\right) =r_{n1}+r_{n2},\] and the five terms containing them in
$\mathcal{C}_{n}$. We split $\tilde{r}_n$ and deal separately with $r_{n1}$ and
$r_{n2}$ to derive the needed bounds in probability.
First, let us focus on $r_{n1}=\varepsilon_{n}\odot\left( \frac{\mathbb{E}
p_{n}+\mathbb{E}g_{n}}{2}-\mathbb{E}x_{n}\right) $ only. Once again Lemma
\ref{LLN} may be invoked for the specific halves of terms 1, 5 and 9 by using
the same methods as above. The half part of terms 7 and 10 of (\ref{Cnprime})
containing $\varepsilon_{n}$ may be bounded the following way (denote
$\beta_{n}=\frac{\mathbb{E}p_{n}+\mathbb{E}g_{n}}{2}-\mathbb{E}x_{n} $):
\begin{align*}
\left\Vert \left[ \left[ \varepsilon_{n}\odot z_{n}:\varepsilon_{n}
\odot\beta_{n}\right] \right] \right\Vert _{\infty} & \leq2\left\Vert
z_{n}\right\Vert \left\Vert \beta_{n}\right\Vert, \\
\left\Vert \left( \varepsilon_{n}\odot\beta_{n}\right) \otimes\left(
\varepsilon_{n}\odot\beta_{n}\right) \right\Vert _{\infty} & \leq2\left\Vert
\beta_{n}\right\Vert ^{2}.
\end{align*}
Lemma \ref{L1} ensures that both right-hand sides above tend to zero (since
$\sup_{n}\left\Vert z_{n}\right\Vert $ is almost surely bounded) and Ces\`{a}ro's
Lemma terminates this part.
We should now inspect the terms numbered 1, 5, 7, 9, 10 in (\ref{Cnprime})
with respect to $r_{n2}=\frac{e+\varepsilon_{n}}{2}\odot\left( \xi_{n}
+\nu_{n}\right) $. Terms 7 and 9 may be controlled by Lemma \ref{L1} and
Lemma \ref{LLN} respectively. Let us deal with term 1:
\begin{align*}
\left\Vert \sum_{n=1}^{N}\left[ \left[ z_{n}:r_{n2}\right] \right]
\right\Vert _{\infty} & \leq2\sum_{n=1}^{N}\left\Vert z_{n}\right\Vert
\left\Vert r_{n2}\right\Vert \\
& \leq2\left( \sup_{n}\left\Vert z_{n}\right\Vert \right) \sum_{n=1}
^{N}\left\Vert r_{n2}\right\Vert \\
& \leq2\sup_{n}\left\Vert z_{n}\right\Vert \sum_{n=1}^{N}\left[ \left\Vert
\nu_{n}\right\Vert +\left\Vert \xi_{n}\right\Vert \right].
\end{align*}
Take for instance $\sum_{n=1}^{N}\left\Vert \nu_{n}\right\Vert $. Applying
Markov's inequality:
\[
\mathbb{P}\left( \frac{1}{N}\sum_{n=1}^{N}\left\Vert \nu_{n}\right\Vert
>\varepsilon\right) \leq\frac{1}{\varepsilon N}\sum_{n=1}^{N}\mathbb{E}\left\Vert \nu
_{n}\right\Vert,
\]
then Assumption $\mathbf{A}_{2}$ together with Ces\`{a}ro's Lemma again ensures
that $\frac{1}{N}\sum_{n=1}^{N}\left\Vert \nu_{n}\right\Vert \rightarrow
_{\mathbb{P}}0$, as well as $\frac{1}{N}\sum_{n=1}^{N}\left\Vert \xi
_{n}\right\Vert $, and hence $\frac{1}{N}\left\Vert \sum_{n=1}^{N}\left[ \left[
z_{n}:r_{n2}\right] \right] \right\Vert _{\infty}\rightarrow_{\mathbb{P}}0$. Term 5 vanishes in
probability with the same technique at hand. Now term 10 gives:
\[
\left\Vert \frac{1}{N}\sum_{n=1}^{N}r_{n2}\otimes r_{n2}\right\Vert _{\infty
}\leq\frac{1}{N}\sum_{n=1}^{N}\left\Vert r_{n2}\otimes r_{n2}\right\Vert
_{\infty}\leq\frac{1}{N}\sum_{n=1}^{N}\left\Vert r_{n2}\right\Vert ^{2}
\]
which also tends to zero in probability under $\mathbf{A}_{1-2}$. We also
mention that the cross-product $\left[ \left[ r_{n1}:r_{n2}\right]
\right] $ in term 10 vanishes due to Lemma \ref{L1}. Our task is almost done.
Let us deal now with the remaining terms of $\mathfrak{R}_{N}$. Both:
\begin{align*}
& \frac{1}{N}\sum_{n=1}^{N}\left[ \left( \varepsilon_{n}\odot z_{n}\right)
\otimes z_{n}+z_{n}\otimes\left( \varepsilon_{n}\odot z_{n}\right) \right]
,\\
& \frac{1}{N}\sum_{n=1}^{N}\left[ \eta_{n}\odot\left( p_{n}-g_{n}\right)
\right] \otimes z_{n}+z_{n}\otimes\left[ \eta_{n}\odot\left( p_{n}
-g_{n}\right) \right]
\end{align*}
tend to $0$ in probability by Lemma \ref{LLN} and (\ref{Hadam}). Finally, the
remaining $\left[ \left[ \tilde{r}_n:z_{n}\right] \right] $ is basically the
same as term 1 in (\ref{Cnprime}) and may be addressed the same way. This
terminates the proof of the Lemma.
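To close this subsection, Proposition \ref{covconv} can be checked by simulation in the scalar case $d=1$ with constant $p_{n}\equiv p$ and $g_{n}\equiv g$: after a burn-in, the time average of $\left(x_{n}-(p+g)/2\right)^{2}$ should approach $\mathfrak{H}\left(p-g\right)^{2}$. A Python sketch, with hypothetical parameter values and the equivalent standard inertia form of the update:

```python
import numpy as np

rng = np.random.default_rng(6)
omega, c = 0.5, 1.0                    # hypothetical stable parameters
p, g = 1.0, 0.0                        # constant local and global bests

kappa = 2 * c * (1 - omega) / (1 + omega) * (1 + omega - c / 2)
H = (c**2 / 24) / (kappa - c**2 / 6)   # the constant of Proposition covconv

burn, N = 10**4, 2 * 10**5
r = rng.random((burn + N, 2))
x_prev, x = 0.0, 0.0
acc = 0.0
for n in range(burn + N):
    r1, r2 = r[n]
    x_new = (x + omega * (x - x_prev)
             + c * r1 * (p - x) + c * r2 * (g - x))
    x_prev, x = x, x_new
    if n >= burn:
        acc += (x - (p + g) / 2) ** 2

print(acc / N, H * (p - g) ** 2)       # close to each other
```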
\subsection{Second case: non-oscillatory and stagnant}
We start from (\ref{chain:X}) and set $X_{n}=y_{n}/y_{n-1}$. Then we get:
\[
X_{n+1}=1+\omega-c-\frac{\omega}{X_{n}}+c\varepsilon_{n},
\]
where $\varepsilon_{n}=r_{1,n}+r_{2,n}-1$ has a ``witch hat'' distribution (the triangular distribution obtained as the convolution of two uniform distributions) with support $\left[ -1,+1\right] $.
We focus now on the above homogeneous Markov chain $X_{n}$ and we aim at proving that a CLT holds for $h\left(
X_{n}\right) =\log\left\vert X_{n}\right\vert $, namely that for some $\mu$
and $\sigma^{2}$:
\[
\sqrt{N}\left[ \frac{1}{N}\sum_{n=1}^{N}\log\left\vert X_{n}\right\vert
-\mu\right] \hookrightarrow\mathcal{N}\left( 0,\sigma^{2}\right),
\]
which will yield:
\[
\sqrt{N}\left[ \frac{1}{N}\log\left\vert y_{N}\right\vert -\mu\right]
\hookrightarrow\mathcal{N}\left( 0,\sigma^{2}\right).
\]
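The passage from the first central limit theorem to the second is immediate: since $X_{n}=y_{n}/y_{n-1}$, a telescoping product gives
\[
\log\left\vert y_{N}\right\vert =\log\left\vert y_{0}\right\vert
+\sum_{n=1}^{N}\log\left\vert X_{n}\right\vert ,
\]
so that $\sqrt{N}\left[ \frac{1}{N}\log\left\vert y_{N}\right\vert -\mu\right] $ differs from $\sqrt{N}\left[ \frac{1}{N}\sum_{n=1}^{N}\log\left\vert X_{n}\right\vert -\mu\right] $ only by the deterministic term $\log\left\vert y_{0}\right\vert /\sqrt{N}$, which vanishes as $N\rightarrow+\infty$.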
We aim at applying Theorem 1 p.302 in \cite{jones2004markov}. We need to check two points: existence of a small set $\mathcal{C}$
and of a function $g$ with a drift condition (see \citealp{meyn2012markov}).
Denote by $P\left( t,x\right) $ the transition density of $X_n$. It is plain
that $P\left( t,x\right) $ coincides with the density of the uniform
distribution on the set $\mathcal{E}_{x}=\left[ 1+\omega -2c-\frac{\omega }{x
},1+\omega -\frac{\omega }{x}\right] $. Theorem \ref{theo:equal} is a
consequence of the two Lemmas below coupled with the above-mentioned Theorem
1 p.~302 in \cite{jones2004markov}.
\begin{lemma}
\label{smallset}Take $M_{\tau}=\omega/\left( c-\tau\right) $ with any $
0<\tau<c$; then the set $\mathcal{C}=\left( -\infty,-M_{\tau}\right] \cup
\left[ M_{\tau},+\infty\right) $ is a small set for the transition kernel of
$X_{n}$.
\end{lemma}
\textbf{Proof:}
We have to show that for all $x\in \mathcal{C}$ and every Borel set $A$ in $
\mathbb{R}$:
\begin{equation*}
P\left( A,x\right) \geq \varepsilon Q\left( A\right) ,
\end{equation*}
where $\varepsilon >0$ and $Q$ is a probability distribution. The main
difficulty here comes from the compact support of $P\left( t,x\right) $. Take
$x$ such that $\left\vert x\right\vert \geq M$; then:
\begin{equation*}
1+\omega -c-\frac{\omega }{M}+c\varepsilon _{n}\leq 1+\omega -c-\frac{\omega
}{x}+c\varepsilon _{n}\leq 1+\omega -c+\frac{\omega }{M}+c\varepsilon _{n},
\end{equation*}
where $\varepsilon _{n}$ has compact support $\left[ -1,+1\right] $. It is
simple to see that with $M=M_{\tau }=\omega /\left( c-\tau \right) $ the
above bound becomes:
\begin{equation*}
1+\omega -2c+\tau +c\varepsilon _{n}\leq 1+\omega -c-\frac{\omega }{x}
+c\varepsilon _{n}\leq 1+\omega -\tau +c\varepsilon _{n}.
\end{equation*}
The intersection of the supports of $1+\omega -2c+\tau +c\varepsilon _{n}$
and $1+\omega -\tau +c\varepsilon _{n}$ is the set $\left[ 1+\omega
-c-\tau ,1+\omega -c+\tau \right] $, whatever the value of $x$ in $\mathcal{C}
$. The probability measure $Q$ mentioned above may be chosen with support $
\left[ 1+\omega -c-\tau ,1+\omega -c+\tau \right] $.\bigskip
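For definiteness, one may take for $Q$ the uniform distribution on $\left[ 1+\omega -c-\tau ,1+\omega -c+\tau \right] $. Indeed, for every $x\in \mathcal{C}$ the density $P\left( \cdot ,x\right) $ equals $1/\left( 2c\right) $ on $\mathcal{E}_{x}$, and $\mathcal{E}_{x}$ contains $\left[ 1+\omega -c-\tau ,1+\omega -c+\tau \right] $ by the computation above, so that for every Borel set $A$:
\begin{equation*}
P\left( A,x\right) \geq \frac{1}{2c}\,\mathrm{Leb}\left( A\cap \left[
1+\omega -c-\tau ,1+\omega -c+\tau \right] \right) =\frac{\tau }{c}\,Q\left(
A\right) ,
\end{equation*}
and the minorization holds with $\varepsilon =\tau /c$.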
Now we turn to the drift condition. Our task consists in constructing a
function \\ $g:\mathbb{R}\rightarrow\left[ 1,+\infty\right[ $ such that for all
$x$:
\begin{equation}
\int_{\mathbb{R}}g\left( t\right) P\left( t,x\right) dt\leq\rho _{1}g\left(
x\right) +\rho_{2}1_{x\in\mathcal{C}} , \label{drift_condition}
\end{equation}
where $0<\rho_{1}<1$ and $\rho_{2}\geq0.$ Besides, in order to get a CLT on $
\log\left\vert X_{n}\right\vert $ we must further ensure that for all $x$:
\begin{equation}
\left[ \log\left\vert x\right\vert \right] ^{2}\leq g\left( x\right).
\label{maj_log}
\end{equation}
Note however that, if (\ref{drift_condition}) holds for $g$ but (\ref
{maj_log}) fails, then both conditions hold for the rescaled function $
g^{\ast}=\eta g$, where the constant $\eta>1$ is chosen large enough that (\ref{maj_log})
holds for $g^{\ast}$, and $\rho_{2}$ is replaced by $\rho_{2}^{\prime}=\eta\rho_{2}$.
\begin{lemma}
\label{drift}Take for $g$ the even function defined by $g\left( x\right)
=C_{1}/\sqrt{\left\vert x\right\vert }$ for $\left\vert x\right\vert \leq
M_{\tau}$ and $g\left( x\right) =C_{2}\left( \log\left\vert x\right\vert
\right) ^{2}$ for $\left\vert x\right\vert >M_{\tau}$. Assume that:
\begin{equation*}
\mathbf{B}_{2}:1+\omega -c<\omega/c<\left( 1+c\right) /4.
\end{equation*}
Then it is always possible to choose three constants $\tau,$ $C_{1}$ and $
C_{2}$ such that (\ref{drift_condition}) holds for a specific choice of $
\rho_{1}$ and $\rho_{2}$.
\end{lemma}
\textbf{Proof:}
The proof of the Lemma just consists in an explicit construction of the
above-mentioned $\tau$, $C_{1}$, and $C_{2}$. This construction is detailed
for the sake of completeness.
\textbf{At this point and in order to simplify the computations below we
will assume that the distribution of }$\varepsilon_{n}$\textbf{\ is uniform
on }$\left[ -1,+1\right] $ instead of the convolution of two $\mathcal{U} _{
\left[ -1/2,1/2\right] }$ distributions.
Set $\lambda =1+\omega -c$, assume that $\lambda >0$ (the case $\lambda <0$
can be handled along the same lines) and notice that:
\begin{equation*}
\int_{\mathbb{R}}g\left( t\right) P\left( t,x\right) dt=\frac{1}{2c}\int_{
\lambda -\left( \omega /x\right) -c}^{\lambda -\left(
\omega /x\right) +c}g\left( s\right) ds=\frac{1}{2c}\int_{\left( \omega
/x\right) -\lambda -c}^{\left( \omega /x\right) -\lambda +c}g\left( s\right)
ds,
\end{equation*}
the last equality stemming from the parity of $g$.
The proof has two parts ($x>0$ and $x\leq 0$ respectively). Both are given in full
for completeness and because the problem is not symmetric. Each part is
split into three steps: the first two steps deal with $x\notin \mathcal{C}$,
the third with $x\in \mathcal{C}=\left( -\infty ,-M_{\tau }\right] \cup
\left[ M_{\tau },+\infty \right) $.
\paragraph*{}\textbf{Part 1: }$x>0$
\textit{First step:} We split $\left[ 0,M_{\tau }\right] $ into two subsets, $
\left[ 0,M_{\tau }\right] =\left[ 0,A_{\tau }\right] \cup \left[ A_{\tau
},M_{\tau }\right] $, where \[A_{\tau }=\omega /\left( M_{\tau }+1+\omega \right)
\] is chosen such that $0\leq x\leq A_{\tau }$ implies the following
inequality on the lower bound of the integral: $\left( \omega /x\right)
-\lambda -c>M_{\tau }$. Clearly $A_{\tau }\leq M_{\tau }$ because $\lambda
>0>-\tau -M_{\tau }$. Then:
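For completeness, the value of $A_{\tau }$ comes from solving the endpoint condition explicitly: since $\lambda +c=1+\omega $, for $x>0$ we have
\begin{equation*}
\frac{\omega }{x}-\lambda -c>M_{\tau }\iff \frac{\omega }{x}>M_{\tau
}+1+\omega \iff x<\frac{\omega }{M_{\tau }+1+\omega }=A_{\tau }.
\end{equation*}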
\begin{equation*}
\frac{1}{2c}\int_{\left( \omega /x\right) -\lambda -c}^{\left( \omega
/x\right) -\lambda +c}g\left( s\right) ds=\frac{C_{2}}{2c}\int_{\left(
\omega /x\right) -\lambda -c}^{\left( \omega /x\right) -\lambda +c}\log
^{2}\left\vert s\right\vert ds\leq C_{2}\log ^{2}\left\vert \left( \omega
/x\right) -\lambda +c\right\vert .
\end{equation*}
Let:
\begin{equation*}
\sup_{0\leq x\leq A_{\tau }}\sqrt{\left\vert
x\right\vert }\left( \log \left\vert \left( \omega /x\right) -\lambda
+c\right\vert \right) ^{2}=K_{1}\left( \omega ,c,\tau \right) <+\infty .
\end{equation*}
The strictly positive $K_{1}\left( \omega ,c,\tau \right) $ exists because $
\sqrt{x}\log ^{2}\left\vert \left( \omega /x\right) -\lambda +c\right\vert $
is bounded on $\left[ 0,A_{\tau }\right] $. The first condition reads:
\begin{equation*}
\frac{1}{2c}\int_{\left( \omega /x\right) -\lambda -c}^{\left( \omega
/x\right) -\lambda +c}g\left( s\right) ds\leq \rho _{1}C_{1}/\sqrt{
\left\vert x\right\vert }, \qquad 0\leq x\leq A_{\tau }
\end{equation*}
whenever
\begin{equation}
C_{2}K_{1}\left( \omega ,c,\tau \right) \leq \rho _{1}C_{1} \label{cond1}
\end{equation}
and $\rho _{1}$ will be fixed after the second step.\bigskip
\textit{Second step:} Now we turn to $A_{\tau }\leq x\leq M_{\tau }$. We
still have $g\left( x\right) =C_{1}/\sqrt{\left\vert x\right\vert }$ but we
need to focus on the bounds of the integral.
This time the lower bound of the integral $\left( \omega /x\right) -\lambda
-c\in \left[ -\lambda -\tau ,M_{\tau }\right] $ and the upper bound $\left(
\omega /x\right) -\lambda +c\in \left[ 2c-\lambda -\tau ,2c+M_{\tau }\right]
$. We require that $\left( \omega /x\right) -\lambda -c\geq
-M_{\tau }$; for this it suffices to take $\lambda +\tau \leq M_{\tau }$, and this
comes down to the following set of constraints on $\tau $: $\left\{ \tau
\geq c-\omega \right\} \cup \left\{ \tau \leq c-1\right\}.$ We keep the
second and assume once and for all that:
\begin{equation}
\tau \leq c-1. \label{constr-tau}
\end{equation}
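The constraint set on $\tau $ above may be recovered as follows: setting $u=c-\tau >0$, the requirement $\lambda +\tau \leq M_{\tau }$ reads $\left( 1+\omega -u\right) u\leq \omega $, and
\begin{equation*}
\left( 1+\omega -u\right) u\leq \omega \iff u\left( 1-u\right) \leq \omega
\left( 1-u\right) \iff \left( 1-u\right) \left( u-\omega \right) \leq 0,
\end{equation*}
so that $u=c-\tau $ must lie outside the open interval with endpoints $\omega $ and $1$.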
Then for $x\in \left[ A_{\tau },M_{\tau }\right] $,
\begin{align}
\frac{1}{2c}\int_{\left( \omega /x\right) -\lambda -c}^{\left( \omega
/x\right) -\lambda +c}g\left( s\right) ds& =\frac{1}{2c}\int_{\left( \omega
/x\right) -\lambda -c}^{M_{\tau }}g\left( s\right) ds+\frac{1}{2c}
\int_{M_{\tau }}^{\left( \omega /x\right) -\lambda +c}g\left( s\right) ds
\label{calc} \\
& \equiv \mathcal{I}_{1}+\mathcal{I}_{2}. \notag
\end{align}
We want to make sure that the upper bound $\left( \omega /x\right) -\lambda
+c$ is larger than $M_{\tau }$. This will hold if $\left( \omega /M_{\tau
}\right) -\lambda +c\geq M_{\tau }$ hence if $2c-\tau -\lambda \geq M_{\tau
}.$ We imposed previously that $\lambda +\tau \leq M_{\tau }$. So:
\begin{equation*}
\tau <c-\frac{\omega }{c}\Rightarrow M_{\tau }<c\Rightarrow 2c-\tau -\lambda
\geq M_{\tau }
\end{equation*}
but the constraint $\tau <c-\omega /c$ is weaker than (\ref{constr-tau})
consequently (\ref{calc}) holds.
Focus on the first term $\mathcal{I}_{1}$ in (\ref{calc}) and consider:
\begin{equation*}
\mathcal{I}_{1}=\frac{1}{2c}\int_{\left( \omega /x\right) -\lambda
-c}^{M_{\tau }}g\left( s\right) ds=\frac{1}{2c}\int_{\left( \omega /x\right)
-\lambda -c}^{M_{\tau }}\frac{C_{1}}{\sqrt{\left\vert s\right\vert }}ds.
\end{equation*}
Consider the (only) two situations for the sign of $\left( \omega /x\right)
-\lambda -c=\left( \omega /x\right) -\left( 1+\omega \right) $.\\ If $x<\omega
/\left( 1+\omega \right)$ then $\left( \omega /x\right) -\left( 1+\omega
\right) >0$ and $\mathcal{I}_{1}\leq \frac{C_{1}}{c}\sqrt{M_{\tau }}$. Notice
by the way, and for further purpose, that:
\begin{equation*}
\sup_{x\in \left[ A_{\tau },\omega /\left( 1+\omega \right) \right] }\sqrt{
\left\vert x\right\vert }\mathcal{I}_{1}\leq \frac{C_{1}}{c}\sqrt{\frac{
\omega }{1+\omega }M_{\tau }}.
\end{equation*}
If $x\geq \omega /\left( 1+\omega \right) $ then $\left( \omega /x\right)
-\left( 1+\omega \right) \leq 0$ and:
\begin{align*}
\mathcal{I}_{1}& =\frac{1}{2c}\int_{\left( \omega /x\right) -\lambda
-c}^{M_{\tau }}g\left( s\right) ds=\frac{1}{2c}\int_{\left( \omega /x\right)
-\left( 1+\omega \right) }^{0}\frac{C_{1}}{\sqrt{\left\vert s\right\vert }}
ds+\frac{1}{2c}\int_{0}^{M_{\tau }}\frac{C_{1}}{\sqrt{\left\vert
s\right\vert }}ds \\
& =\frac{C_{1}}{c}\left[ \sqrt{\left\vert \left( \omega /x\right) -\left(
1+\omega \right) \right\vert }+\sqrt{M_{\tau }}\right].
\end{align*}
Again:
\begin{equation*}
\sqrt{\left\vert x\right\vert }\mathcal{I}_{1}\leq \frac{C_{1}}{c}\left[
\sqrt{\left\vert x\left( 1+\omega \right) -\omega \right\vert }+\sqrt{
xM_{\tau }}\right].
\end{equation*}
From the bounds above we see that:
\begin{equation*}
\sup_{x\in \left[ A_{\tau },M_{\tau }\right] }\sqrt{\left\vert x\right\vert }
\mathcal{I}_{1}\leq \frac{C_{1}}{c}\left[ \sqrt{\left\vert M_{\tau }\left(
1+\omega \right) -\omega \right\vert }+M_{\tau }\right].
\end{equation*}
The reader will soon understand why we need to make sure that the right-hand
side of the equation above is strictly less than $C_{1}$. It is not hard to see that
the function $\tau \longmapsto \sqrt{\left\vert M_{\tau }\left( 1+\omega
\right) -\omega \right\vert }+M_{\tau }$ is increasing and continuous on $
\left[ 0,c-1\right] $. If we prove that for some $\delta \in \left] 0,1
\right[ $:
\begin{equation*}
\frac{1}{c}\left[ \sqrt{\left\vert M_{0}\left( 1+\omega \right) -\omega
\right\vert }+M_{0}\right] =1-3\delta <1,
\end{equation*}
then the existence of some $\tau ^{+}>0$ such that:
\begin{equation}
\frac{1}{c}\left[ \sqrt{\left\vert M_{\tau ^{+}}\left( 1+\omega \right)
-\omega \right\vert }+M_{\tau ^{+}}\right] =1-2\delta <1 \label{delta}
\end{equation}
will be granted.\ But $\frac{1}{c}\left[ \sqrt{\left\vert M_{0}\left(
1+\omega \right) -\omega \right\vert }+M_{0}\right] =\frac{1}{c}\left[ \sqrt{
\frac{\omega }{c}\lambda }+\frac{\omega }{c}\right].$ \\ If we assume that $
\lambda <\omega /c<\left( 1+c\right) /4$ (assumption $\mathbf{B}_{2}$) then
since $c>1$:
\begin{equation*}
\frac{1}{c}\left[ \sqrt{\frac{\omega }{c}\lambda }+\frac{\omega }{c}\right] <
\frac{1}{2}\left( 1+\frac{1}{c}\right) <1.
\end{equation*}
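Let us spell out the intermediate step: under $\mathbf{B}_{2}$ we have $\lambda <\omega /c$, hence $\sqrt{\left( \omega /c\right) \lambda }<\omega /c$ and
\begin{equation*}
\frac{1}{c}\left[ \sqrt{\frac{\omega }{c}\lambda }+\frac{\omega }{c}\right]
<\frac{2\omega }{c^{2}}<\frac{1}{2}\left( 1+\frac{1}{c}\right) ,
\end{equation*}
the last inequality being exactly a rewriting of $\omega /c<\left( 1+c\right) /4$.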
We turn to $\mathcal{I}_{2}$ in (\ref{calc}):
\begin{equation*}
\mathcal{I}_{2}=\frac{1}{2c}\int_{M_{\tau }}^{\left( \omega /x\right)
-\lambda +c}g\left( s\right) ds=\frac{C_{2}}{2c}\int_{M_{\tau }}^{\left(
\omega /x\right) -\lambda +c}\left( \log s\right) ^{2} ds\leq \frac{C_{2}}{
\sqrt{\left\vert x\right\vert }}K_{2}\left( \omega ,c,\tau ^{+}\right),
\end{equation*}
where:
\begin{equation*}
K_{2}\left( \omega ,c,\tau ^{+}\right) =\sup_{x\in \left[ A_{\tau },M_{\tau }
\right] }\frac{\sqrt{\left\vert x\right\vert }}{2c}\int_{M_{\tau }}^{\left(
\omega /x\right) -\lambda +c}\left( \log s\right) ^{2}ds.
\end{equation*}
Set finally $\rho _{1}^{+}=1-\delta <1$.
From (\ref{calc}) we get:
\begin{align*}
\frac{1}{2c}\int_{\left( \omega /x\right) -\lambda -c}^{\left( \omega
/x\right) -\lambda +c}g\left( s\right) ds& \leq \frac{C_{1}}{\sqrt{
\left\vert x\right\vert }}.\left( 1-2\delta \right) +\frac{C_{2}}{\sqrt{
\left\vert x\right\vert }}K_{2}\left( \omega ,c,\tau ^{+}\right) \\
& \leq \rho _{1}^{+}\frac{C_{1}}{\sqrt{\left\vert x\right\vert }},
\end{align*}
whenever the new condition below holds:
\begin{equation}
C_{2}K_{2}\left( \omega ,c,\tau \right) \leq C_{1}\delta. \label{cond2}
\end{equation}
Finally, comparing (\ref{cond1}) and (\ref{cond2}), we see that the two
conditions are compatible. Suitable choices of the couple $\left(
C_{1}^{+},C_{2}^{+}\right) $ are given by the summary bound:
\begin{equation}
C_{2}^{+}\leq C_{1}^{+}\min \left( \frac{\delta }{K_{2}},\frac{1-\delta }{
K_{1}}\right). \label{condPart1}
\end{equation}
It is now straightforward to see that the quadruple $\left( C_{1}^{+},C_{2}^{+},\tau
^{+},\rho _{1}^{+}\right) $ yields the drift condition (\ref{drift_condition}) for $x\notin \mathcal{C}$.\bigskip
\textit{Third step:} The remaining step is to check the inequality for some $
\rho _{2}$:
\begin{equation*}
\int_{\mathbb{R}}g\left( t\right) P\left( t,x\right) dt\leq \rho
_{1}^{+}g\left( x\right) +\rho _{2},
\end{equation*}
for any $x$ in $\mathcal{C}$, that is, any $\left\vert x\right\vert \geq M_{\tau }$
(in fact $x\geq M_{\tau }$ here, since $x>0$). We see that:
\begin{equation*}
0\leq \frac{\omega }{x}\leq \frac{\omega }{M_{\tau }},
\end{equation*}
and:
\begin{equation*}
\frac{1}{2c}\int_{\left( \omega /x\right) -\lambda -c}^{\left( \omega
/x\right) -\lambda +c}g\left( s\right) ds\leq \frac{1}{2c}\int_{-\lambda
-c}^{2c-\tau -\lambda }g\left( s\right) ds\leq \frac{1}{2c}\int_{-\left(
1+\omega \right) }^{3c-\left( 1+\omega \right) }g\left( s\right) ds.
\end{equation*}
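Both inequalities above follow by enlarging the domain of integration (recall that $g\geq 0$): for $x\geq M_{\tau }$ we have $0\leq \omega /x\leq c-\tau $, hence
\begin{equation*}
\left[ \frac{\omega }{x}-\lambda -c,\frac{\omega }{x}-\lambda +c\right]
\subset \left[ -\lambda -c,2c-\tau -\lambda \right] \subset \left[ -\left(
1+\omega \right) ,3c-\left( 1+\omega \right) \right] ,
\end{equation*}
the last inclusion using $-\lambda -c=-\left( 1+\omega \right) $ and $2c-\tau -\lambda =3c-\tau -\left( 1+\omega \right) \leq 3c-\left( 1+\omega \right) $.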
The values of the constants $C_{1}$ and $C_{2}$ were fixed above. Then
denote:
\begin{equation*}
\rho _{2}^{+}=\frac{1}{2c}\int_{-\left( 1+\omega \right) }^{3c-\left(
1+\omega \right) }g\left( s\right) ds>0,
\end{equation*}
then clearly for any $x$ in $\mathcal{C}$:
\begin{equation*}
\int_{\mathbb{R}}g\left( t\right) P\left( t,x\right) dt\leq \rho _{2}^{+},
\end{equation*}
so that (\ref{drift_condition}) holds.\bigskip
\textbf{Part 2 (}$x\leq 0$\textbf{)}
We go on with $x<0$ and $\lambda >0$; set $y=-x>0$. Then:
\begin{equation*}
\int_{\mathbb{R}}g\left( t\right) P\left( t,x\right) dt=\frac{1}{2c}\int_{
\lambda -\left( \omega /x\right) -c}^{\lambda -\left(
\omega /x\right) +c}g\left( s\right) ds=\frac{1}{2c}\int_{\left( \omega
/y\right) +\lambda -c}^{\left( \omega /y\right) +\lambda +c}g\left( s\right)
ds.
\end{equation*}
Since $g$ is even and in view of the proposed $\mathcal{C}$, we just have to
establish the following drift condition for $x>0$:
\begin{equation*}
\frac{1}{2c}\int_{\left( \omega /x\right) +\lambda -c}^{\left( \omega
/x\right) +\lambda +c}g\left( s\right) ds\leq \rho _{1}g\left( x\right)
+\rho _{2}1_{x\in \mathcal{C}}.
\end{equation*}
\bigskip \textit{First step:} Take $x\notin \mathcal{C}$. We split $\left[
0,M_{\tau }\right] $ into two subsets, $\left[ 0,M_{\tau }\right] =\left[
0,B_{\tau }\right] \cup \left[ B_{\tau },M_{\tau }\right] $, where $B_{\tau
}=\omega /\left( M_{\tau }-\lambda +c\right) $ is chosen such that $0\leq
x\leq B_{\tau }$ implies the following inequality on the lower bound of the
integral: $\left( \omega /x\right) +\lambda -c>M_{\tau }$. Clearly $B_{\tau
}\leq M_{\tau }$ for all $\tau $. Then:
\begin{equation*}
\frac{1}{2c}\int_{\left( \omega /x\right) +\lambda -c}^{\left( \omega
/x\right) +\lambda +c}g\left( s\right) ds=\frac{C_{2}}{2c}\int_{\left(
\omega /x\right) +\lambda -c}^{\left( \omega /x\right) +\lambda +c}\log
^{2}\left\vert s\right\vert ds\leq C_{2}\log ^{2}\left\vert \left( \omega
/x\right) +\lambda +c\right\vert .
\end{equation*}
Let:
\begin{equation*}
\sup_{0\leq x\leq B_{\tau }}\sqrt{\left\vert x\right\vert }\log
^{2}\left\vert \left( \omega /x\right) +\lambda +c\right\vert =K_{1}\left(
\omega ,c,\tau \right) <+\infty .
\end{equation*}
The strictly positive $K_{1}\left( \omega ,c,\tau \right) $ exists because $
\sqrt{\left\vert x\right\vert }\log ^{2}\left\vert \left( \omega /x\right)
+\lambda +c\right\vert $ is bounded on $\left[ 0,B_{\tau }\right] $. The
initial condition reads:
\begin{equation*}
\frac{1}{2c}\int_{\left( \omega /x\right) +\lambda -c}^{\left( \omega
/x\right) +\lambda +c}g\left( s\right) ds\leq \rho _{1}C_{1}/\sqrt{
\left\vert x\right\vert },\qquad 0\leq x\leq B_{\tau }
\end{equation*}
whenever $C_{2}K_{1}\left( \omega ,c,\tau \right) \leq \rho _{1}C_{1}$ and $
\rho _{1}$ will be fixed later.\bigskip
\textit{Second step:} Now we turn to $B_{\tau }\leq x\leq M_{\tau }$. We
still have $g\left( x\right) =C_{1}/\sqrt{\left\vert x\right\vert }$ but we
need to focus on the bounds of the integral.
This time the lower bound of the integral $\left( \omega /x\right) +\lambda
-c\in \left[ \lambda -\tau ,M_{\tau }\right] $ and the upper bound $\left(
\omega /x\right) +\lambda +c\in \left[ \lambda +2c-\tau ,M_{\tau }+2c\right]
$. If we assume that $\tau \leq \lambda $ then $\left( \omega /x\right)
+\lambda -c\geq 0\geq -M_{\tau }.$ Besides, in order that $\lambda +2c-\tau
\geq M_{\tau }$ it suffices that $2c\geq M_{\tau }$, i.e. $\tau \leq
c-\omega /\left( 2c\right).$ As a consequence the assumption:
\begin{equation}
\tau \leq \min \left( \lambda ,c-\frac{\omega }{2c}\right) \label{condthau2}
\end{equation}
allows to write for $x\in \left[ B_{\tau },M_{\tau }\right] $,
\begin{align*}
\frac{1}{2c}\int_{\left( \omega /x\right) +\lambda -c}^{\left( \omega
/x\right) +\lambda +c}g\left( s\right) ds& =\frac{1}{2c}\int_{\left( \omega
/x\right) +\lambda -c}^{M_{\tau }}g\left( s\right) ds+\frac{1}{2c}
\int_{M_{\tau }}^{\left( \omega /x\right) +\lambda +c}g\left( s\right) ds \\
& \equiv \mathcal{I}_{1}+\mathcal{I}_{2},
\end{align*}
with non-null $\mathcal{I}_{1}$ and $\mathcal{I}_{2}$. Focus on:
\begin{eqnarray*}
\mathcal{I}_{1} &=&\frac{C_{1}}{2c}\int_{\left( \omega /x\right) +\lambda
-c}^{M_{\tau }}\frac{1}{\sqrt{s}}ds=\frac{C_{1}}{c}\left[ \sqrt{M_{\tau }}-
\sqrt{\left( \omega /x\right) +\lambda -c}\right] \\
\sup_{x\in \left[ B_{\tau },M_{\tau }\right] }\sqrt{x}\mathcal{I}_{1} &\leq &
\frac{C_{1}}{c}M_{\tau }.
\end{eqnarray*}
At last we see that for $M_{\tau }\leq 1$, i.e. $\tau \leq c-\omega $, $
\sup_{x\in \left[ B_{\tau },M_{\tau }\right] }\sqrt{x}\mathcal{I}_{1}\leq
C_{1}/c$. This condition combined with (\ref{condthau2}) lets us set in the
sequel:
\begin{equation*}
\tau \leq \tau ^{-}=\min \left( \lambda ,c-\omega \right).
\end{equation*}
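Note that this choice is compatible with (\ref{condthau2}): since $2c>1$ we have $\omega /\left( 2c\right) <\omega $, so
\begin{equation*}
c-\omega \leq c-\frac{\omega }{2c},
\end{equation*}
and therefore $\tau \leq \min \left( \lambda ,c-\omega \right) $ implies $\tau \leq \min \left( \lambda ,c-\omega /\left( 2c\right) \right) $.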
We turn to $\mathcal{I}_{2}$:
\begin{equation*}
\frac{1}{2c}\int_{M_{\tau }}^{\left( \omega /x\right) +\lambda +c}g\left(
s\right) ds=\frac{C_{2}}{2c}\int_{M_{\tau }}^{\left( \omega /x\right)
+\lambda +c}\log ^{2}\left\vert s\right\vert ds\leq \frac{C_{2}}{\sqrt{x}}
K_{2}\left( \omega ,c,\tau ^{-}\right),
\end{equation*}
with:
\begin{equation*}
K_{2}^{-}=K_{2}\left( \omega ,c,\tau ^{-}\right) =\sup_{x\in \left[ B_{\tau
},M_{\tau }\right] }\frac{\sqrt{\left\vert x\right\vert }}{2c}\int_{M_{\tau
}}^{\left( \omega /x\right) +\lambda +c}\left( \log s\right) ^{2}ds.
\end{equation*}
Set finally $\rho _{1}^{-}=\left( 1+1/c\right) /2<1$. From all that was done
above we get:
\begin{equation*}
\int_{\mathbb{R}}g\left( t\right) P\left( t,x\right) dt\leq \frac{C_{1}}{
\sqrt{\left\vert x\right\vert }}\frac{1}{c}+\frac{C_{2}}{\sqrt{\left\vert
x\right\vert }}K_{2}^{-}\leq \rho _{1}^{-}\frac{C_{1}}{\sqrt{\left\vert
x\right\vert }},
\end{equation*}
whenever:
\begin{equation*}
C_{2}K_{2}\left( \omega ,c,\tau ^{-}\right) \leq \frac{1-1/c}{2}C_{1}.
\end{equation*}
This will be combined with the constraint of the first step $
C_{2}K_{1}^{-}\leq \rho _{1}C_{1}$ (we denoted $K_{1}\left( \omega ,c,\tau
^{-}\right) =K_{1}^{-}$). The new condition:
\begin{equation}
C_{2}^{-}\leq C_{1}^{-}\min \left( \frac{\rho _{1}^{-}}{K_{1}^{-}},\frac{
1-1/c}{2K_{2}^{-}}\right) \label{condPart2}
\end{equation}
ensures that \[\frac{1}{2c}\int_{\left( \omega /x\right) +\lambda -c}^{\left(
\omega /x\right) +\lambda +c}g\left( s\right) ds\leq \rho _{1}^{-}g\left(
x\right) \text{ for } x\in \left[ 0,M_{\tau }\right]. \]\bigskip
\textit{Third step:} The remaining step is to check the inequality:
\begin{equation*}
\int_{\mathbb{R}}g\left( t\right) P\left( t,x\right) dt\leq \rho
_{1}^{-}g\left( x\right) +\rho _{2},
\end{equation*}
for any $x$ in $\mathcal{C}$, that is, here any $x\geq M_{\tau }$. Adapting the
method given above is straightforward and leads to the desired result with a
given $\rho _{2}^{-}$.\bigskip
We are ready to conclude. Take
\begin{equation*}
C_{2}^{\ast }= C_{1}^{\ast }\min \left( \frac{\delta }{K_{2}^{+}},\frac{
1-\delta }{K_{1}^{+}},\frac{\rho _{1}^{-}}{K_{1}^{-}},\frac{1-1/c}{2K_{2}^{-}
}\right) .
\end{equation*}
Conditions (\ref{condPart1}) and (\ref{condPart2}) hold for the couple $
\left( C_{1}^{\ast },C_{2}^{\ast }\right).$ For such a couple we have:
\begin{eqnarray*}
\int_{\mathbb{R}}g\left( t\right) P\left( t,x\right) dt &\leq &\rho
_{1}^{+}g\left( x\right) +\rho _{2}^{+},\quad x>0, \\
\int_{\mathbb{R}}g\left( t\right) P\left( t,x\right) dt &\leq &\rho
_{1}^{-}g\left( x\right) +\rho _{2}^{-},\quad x\leq 0,
\end{eqnarray*}
and for all $x$:
\begin{equation*}
\int_{\mathbb{R}}g\left( t\right) P\left( t,x\right) dt\leq \max \left( \rho
_{1}^{+},\rho _{1}^{-}\right) g\left( x\right) +\max \left( \rho
_{2}^{+},\rho _{2}^{-}\right).
\end{equation*}
This finishes the proof of the Lemma.
\bibliographystyle{apa}
As we know, affine hyperspheres are very special objects in the equiaffine differential geometry of hypersurfaces. In particular, if an affine hypersurface has parallel Fubini-Pick form, then it must be an affine hypersphere (\cite{bok-nom-sim90}). Judging only from the definition, affine hyperspheres seem very simple, but in fact they form a very large class of hypersurfaces. Consequently, finding all affine hyperspheres explicitly is a great challenge and still remains a very hard job. Despite this, the study of affine hyperspheres has seen great achievements by many authors. For example: the proof of Calabi's conjecture (see, for example, \cite{amli90}, \cite{amli92}), the classification of hyperspheres of constant sectional affine curvature (\cite{vra-li-sim91}, \cite{wang93} and \cite{kri-vra99}), the generalizations of Calabi's composition of hyperbolic affine hyperspheres (with multiple factors, \cite{lix93}; for more general cases, \cite{dil-vra94}), the characterization of the Calabi composition of hyperbolic affine hyperspheres (\cite{hu-li-vra08}; also \cite{lix13} and \cite{lix14} in a different manner), and the classification of locally strongly convex hypersurfaces with parallel Fubini-Pick form (\cite{dil-vra-yap94} and \cite{hu-li-sim-vra09} for some special cases; \cite{hu-li-vra11} for the general case). As for the general nondegenerate case, there have also been some interesting partial classification results; see, for example, the series of papers by Z.J. Hu et al: \cite{hu-li11}, \cite{hu-li-li-vra11a} and \cite{hu-li-li-vra11b}. In this direction, a very recent development is the preprint \cite{Hil12}, in which the author aims at a complete classification of nondegenerate centroaffine hypersurfaces with parallel Fubini-Pick form.
In this paper, on the basis of a recent characterization of the Calabi composition of hyperbolic affine hyperspheres (\cite{lix13}, \cite{lix14}), we make use of the idea of H. Naitoh in \cite{nai81} for the classification of totally real parallel submanifolds in the complex projective space to provide a direct proof of the complete classification of symmetric affine hyperspheres. Then, via an earlier result of the author, we easily give an alternative and simpler proof of the classification theorem (Theorem \ref{cla thm}) for affine hypersurfaces with parallel Fubini-Pick form, which was established by Z.J. Hu et al in a totally different way (see \cite{hu-li-vra11} for the details).
Our main theorem is stated as follows:
{\thm[The main theorem]\label{main} Let $x:M^n\to \bbr^{n+1}$ ($n\geq 2$) be a locally strongly convex affine {\bf hypersphere}. If $x$ is locally affine symmetric, then either of the following two cases holds:
$(1)$ With the affine metric $g$, the Riemannian manifold $(M^n,g)$ is irreducible and $x$ is locally affine equivalent to
$(a)$ one of the three kinds of quadric affine spheres: Ellipsoid, elliptic paraboloid and hyperboloid; or
$(b)$ the standard embedding of the Riemannian symmetric space ${\rm SL}(m,\bbr)/{\rm SO}(m)$ into $\bbr^{n+1}$ with $n=\fr12m(m+1)-1$, $m\geq 3$;
or
$(c)$ the standard embedding of the Riemannian symmetric space ${\rm SL}(m,\bbc)/{\rm SU}(m)$ into $\bbr^{n+1}$ with $n=m^2-1$, $m\geq 3$; or
$(d)$ the standard embedding of the Riemannian symmetric space ${\rm SU}^*(2m)/{\rm Sp}(m)$ into $\bbr^{n+1}$ with $n=2m^2-m-1$, $m\geq 3$; or
$(e)$ the standard embedding of the Riemannian symmetric space ${\rm E}_{6(-26)}/{\rm F}_4$ into $\bbr^{27}$.
$(2)$ $(M^n,g)$ is reducible and $x$ is locally affine equivalent to the Calabi product of $r$ points and $s$ of the above irreducible hyperbolic affine spheres of lower dimensions, where $r$, $s$ are nonnegative integers and $r+s\geq 2$.}
Examples (b), (c), (d) and (e) are explicitly presented in Section 3, while the examples in (a) can be found in most textbooks; see for example \cite{li-sim-zhao93}.
{\sc Acknowledgement} The first author is grateful to Professor A-M Li for his encouragement and important suggestions during the preparation of this article. He also thanks Professor Z.J. Hu for providing him valuable related references some of which are listed in the end of this paper.
\section{Preliminaries}
\subsection{The equiaffine geometry of hypersurfaces}
In this subsection, we brief some basic facts in the equiaffine geometry of hypersurfaces. For details the readers are referred to some text books, say, \cite{li-sim-zhao93} and \cite{nom-sas94}.
Let $x:M^n\to\bbr^{n+1}$ be a nondegenerate hypersurface. Then there are several basic equiaffine invariants of $x$, among which are: the affine metric (Berwald-Blaschke metric) $g$, the affine normal $\xi:=\fr1n\Delta_gx$, the Fubini-Pick $3$-form (the so-called cubic form) $A\in\bigodot^3T^*M^n$ and the affine second fundamental $2$-form $B\in\bigodot^2T^*M^n$. By using index lifting with respect to the metric $g$, we can identify $A$ and $B$ with the linear maps $A:TM^n\to \ed(TM^n)$ or $A:TM^n\bigodot TM^n\to TM^n$ and $B:TM^n\to TM^n$, respectively, by
\be\label{ab}
g(A(X)Y,Z)=A(X,Y,Z) \mb{\ or\ }g(A(X,Y),Z)=A(X,Y,Z),\quad
g(B(X),Y)=B(X,Y),
\ee
for all $X,Y,Z\in TM^n$. Sometimes we call the corresponding $B\in \ed(TM^n)$ the affine shape operator of $x$. In this sense, the affine Gauss equation can be written as follows:
\be\label{gaus}
R(X,Y)Z=\fr12(g(Y,Z)B(X)+B(Y,Z)X-g(X,Z)B(Y)-B(X,Z)Y)-[A(X),A(Y)](Z),
\ee
where, for any linear transformations $T,S\in \ed(TM^n)$,
\be\label{comm}
[T,S]=T\circ S-S\circ T.
\ee
Each of the eigenvalues $B_1,\cdots,B_n$ of the affine shape operator $B:TM^n\to TM^n$ is called the affine principal curvature of $x$. Define
\be\label{afme}
L_1:=\fr1n\tr B=\fr1n\sum_iB_i.
\ee
Then $L_1$ is referred to as the affine mean curvature of $x$. A hypersurface $x$ is called an (elliptic, parabolic, or hyperbolic) affine hypersphere, if all of its affine principal curvatures are equal to one (positive, 0, or negative) constant. In this case we have
\be\label{afsp}
B(X)=L_1X,\quad\mb{for all\ }X\in TM^n.
\ee
It follows that the affine Gauss equation \eqref{gaus} of an affine hypersphere assumes the following form:
\be\label{gaus_af sph}
R(X,Y)Z=L_1(g(Y,Z)X-g(X,Z)Y)-[A(X),A(Y)](Z).
\ee
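For completeness we record the elementary substitution behind \eqref{gaus_af sph}: plugging \eqref{afsp} into \eqref{gaus}, with $B(X,Y)=g(B(X),Y)=L_1g(X,Y)$, the $B$-terms collapse as
$$
\fr12\left(g(Y,Z)L_1X+L_1g(Y,Z)X-g(X,Z)L_1Y-L_1g(X,Z)Y\right)=L_1\left(g(Y,Z)X-g(X,Z)Y\right).
$$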
Furthermore, all the affine lines of an elliptic affine hypersphere or a hyperbolic affine hypersphere $x:M^n\to\bbr^{n+1}$ pass through a fixed point $o$, which is referred to as the affine center of $x$. Both the elliptic and the hyperbolic affine hyperspheres are called proper affine hyperspheres, while the parabolic affine hyperspheres are called improper affine hyperspheres.
For each vector field $\eta$ transversal to the tangent space of $x$, we have the following direct decomposition of vector spaces
$$
x^*T\bbr^{n+1}=x_*(TM)+\bbr\cdot\eta.
$$
This decomposition and the canonical differentiation $\bar D^0$ on $\bbr^{n+1}$ define a nondegenerate bilinear form $h\in\bigodot^2T^*M^n$ and a connection $D^\eta$ on $TM^n$ as follows:
\be\label{dfn h}
\bar D^0_XY=x_*(D^\eta_XY)+h(X,Y)\eta,\quad\forall X,Y\in TM^n.
\ee
\eqref{dfn h} can be referred to as the affine Gauss formula of the hypersurface $x$.
In what follows we make the following convention for the range of indices:
$$1\leq i,j,k,l\leq n.$$
Let $\{e_i,e_{n+1}\}$ be a local unimodular frame field along $x$, and $\{\omega^i,\omega^{n+1}\}$ its dual coframe. Then $\eta:=e_{n+1}$ is transversal to the tangent space $x_*(TM)$. Write $h=\sum h_{ij}\omega^i\omega^j$ with $h_{ij}=h(e_i,e_j)$ and $H=|\det(h_{ij})|$. Then the locally defined nondegenerate metric $g:=H^{-\fr1{n+2}}h$ is independent of the choice of the unimodular frame field $\{e_i,e_{n+1}\}$ and thus is in fact a globally well-defined metric on $M^n$, which is called the affine (or Berwald-Blaschke) metric. By taking $x$ as an $\bbr^{n+1}$-valued smooth function on $M^n$, we call the vector function $\xi:=\fr1n\Delta_gx$ the affine normal vector.
If, in particular, $\eta$ is chosen to be parallel to the affine normal $\xi$, then the induced connection $\nabla:=D^\eta$ is independent of the choice of $\eta$ and is referred to as the affine connection of $x$.
If $\hat\nabla$ is the Levi-Civita connection of the affine metric $g$, then
the Fubini-Pick form (as a symmetric $(1,2)$ tensor) is defined by
\be\label{f-p}
A(X,Y)=\nabla_XY-\hat\nabla_XY,\quad \forall\, X,Y\in TM,
\ee
which is identified via the affine metric $g$ with a symmetric cubic form $A(X,Y,Z)=g(A(X,Y),Z)$. This cubic form $A$ is also referred to as the Fubini-Pick form.
From now on we assume that the transversal vector $e_{n+1}$ above is parallel to the normal vector $\xi$. Then it holds that $\xi=H^{\fr1{n+2}}e_{n+1}$ and we have connection forms $\omega^A_B$, $1\leq A,B\leq n+1$, defined by
$$
d\omega^A=\omega^B\wedge\omega^A_B,\quad d\omega^A_B=\sum_{C=1}^{n+1}\omega^C_B\wedge\omega^A_C,\quad \omega^{n+1}\equiv 0.
$$
Furthermore, the local expressions of $g$, $A$ and $B$:
\be\label{gab}
A=\sum A_{ijk}\omega^i\omega^j\omega^k,\quad B=\sum B_{ij}\omega^i\omega^j,
\ee
are subject to the following basic formulas:
\begin{align}
&\sum_{i,j} g^{ij}A_{ijk}=0\text{\ (the apolarity)},\label{basic1}\\
&A_{ijk,l}-A_{ijl,k}=\fr12(g_{ik}B_{jl}+g_{jl}B_{ik} -g_{il}B_{jk}-g_{jk}B_{il}),\label{basic3}\\
&\sum_{l}A^l_{ij,l}=\fr n2(L_1g_{ij}-B_{ij}),\label{basic3-1}
\end{align}
where $A_{ijk,l}$ are the covariant derivatives of $A_{ijk}$ with respect to the Levi-Civita connection of $g$.
Define
\be\label{hijk0}
\sum_kh_{ijk}\omega^k=dh_{ij}+h_{ij}\omega^{n+1}_{n+1}-\sum h_{kj}\omega^k_i-\sum h_{ik}\omega^k_j.
\ee
Then the Fubini-Pick form $A$ can be determined by the following formula:
\be\label{hijktoaijk}
A_{ijk}=-\fr12H^{-\fr1{n+2}}h_{ijk}.
\ee
Define the normalized scalar curvature $\chi$ and the Pick invariant $J$ by
$$
\chi=\fr1{n(n-1)}\sum g^{il}g^{jk}R_{ijkl},\quad J=\fr1{n(n-1)}\sum A_{ijk}A_{pqr}g^{ip}g^{jq}g^{kr}.$$
Then the affine Gauss equation can be written in terms of the metric and the Fubini-Pick form as follows
\begin{align}
R_{ijkl}=&(A_{ijk,l}-A_{ijl,k})+(\chi-J)(g_{il}g_{jk}-g_{ik}g_{jl})\nnm\\ &\ +\fr2n\sum(g_{ik}A_{jlm,m}-g_{il}A_{jkm,m}) +\sum_m(A^m_{ik}A_{jlm}-A^m_{il}A_{jkm}).
\label{basic2}\end{align}
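In the special case of a vanishing Fubini-Pick form ($A\equiv 0$, hence also $J=0$ and $A_{ijk,l}=0$), the affine Gauss equation \eqref{basic2} reduces to
$$
R_{ijkl}=\chi(g_{il}g_{jk}-g_{ik}g_{jl}),
$$
so the affine metric has constant sectional curvature; indeed, contracting with $g^{il}g^{jk}$ returns $\chi$. This is precisely the situation of the quadric hypersurfaces in Example \ref{expl1} below.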
We shall use the following affine existence and uniqueness theorems later:
{\thm\label{affine existence} $($\cite{li-sim-zhao93}$)$
$($The existence$)$ Let $(M^n,g)$ be a simply connected Riemannian manifold
of dimension $n$, and $A$ be a symmetric $3$-form on $M^n$ satisfying the
affine Gauss equation \eqref{basic2} $($or equivalently \eqref{gaus}$)$ and the apolarity condition \eqref{basic1}. Then there exists a locally strongly convex immersion $x:M^n\to \bbr^{n+1}$ such that $g$ and $A$ are the affine metric and the Fubini-Pick form for $x$, respectively.}
{\thm\label{affine uniqueness} $($\cite{li-sim-zhao93}$)$ $($The uniqueness$)$ Let $x:M^n\to \bbr^{n+1}$,
$\bar x:\bar M^n\to \bbr^{n+1}$ be two locally strongly convex hypersurfaces of dimension $n$ with respectively the affine metrics $g$, $\bar g$ and the Fubini-Pick forms $A$, $\bar A$, and $\vfi:(M^n,g)\to (\bar M^n,\bar g)$ be an isometry between Riemannian manifolds. Then $\vfi^*\bar A=A$ if and only if there exists a unimodular affine transformation $\Phi:\bbr^{n+1}\to \bbr^{n+1}$ such that $\bar x\circ\vfi=\Phi\circ x$, or equivalently, $\bar x=\Phi\circ x\circ\vfi^{-1}$.}
\rmk\rm For the sufficient part of Theorem \ref{affine uniqueness}, see also \cite{lix14}.
Given a constant $L_1\in\bbr$ and a Riemannian manifold $(M^n,g)$, denote by ${\mathcal S}_{(M^n,g)}(L_1)$ the set of
all $TM^n$-valued symmetric bilinear forms $A\in \Gamma(\bigodot^2(T^*M^n)\bigotimes (TM^n))$, satisfying the following conditions:
(1) Under the metric $g$, the corresponding $3$-form $A\in \Gamma(\bigodot^2(T^*M^n)\bigotimes (T^*M^n))$ is totally symmetric, that is, $A\in \Gamma(\bigodot^3(T^*M^n))$;
(2) The affine Gauss equation holds, that is, for any $X,Y,Z\in {\mathfrak X}(M^n)$
\be\label{pre gaus_af sph1}
R(X,Y)Z=L_1(g(Y,Z)X-g(X,Z)Y)-[A(X),A(Y)](Z).
\ee
(3) The apolarity condition $\tr_g(A)=0$.
From Theorem \ref{affine existence} and Theorem \ref{affine uniqueness}, we have
\begin{cor}\label{cor2.1}
For each $A\in {\mathcal S}_{(M^n,g)}(L_1)$, there exists a unique (up to unimodular affine transformations) affine hypersphere $x:M^n\to\bbr^{n+1}$ with affine metric $g$, Fubini-Pick form $A$ and affine mean curvature $L_1$.
\end{cor}
Motivated by Theorem \ref{affine uniqueness}, we introduce the following concept of affine equivalence relation between nondegenerate hypersurfaces:
{\dfn Let $x:M^n\to \bbr^{n+1}$ be a nondegenerate hypersurface with the affine metric $g$. A hypersurface $\bar x:M^n\to \bbr^{n+1}$ is called affine equivalent to $x$ if there exists a unimodular transformation $\Phi:\bbr^{n+1}\to \bbr^{n+1}$ and an isometry $\vfi$ of $(M^n,g)$ such that $\bar x=\Phi\circ x\circ\vfi^{-1}$}.
To end this section, we would like to recall the following concept:
{\dfn\label{dfn afsym} {\rm(\cite{lix13})} A nondegenerate hypersurface $x:M^n\to \bbr^{n+1}$ is called affine symmetric (resp. locally affine symmetric) if
$(1)$ the pseudo-Riemannian manifold $(M^n,g)$ is symmetric (resp. locally symmetric) and therefore $(M^n,g)$ can be written (resp. locally written) as $G/K$ for some connected Lie group $G$ of isometries with $K$ one of its closed subgroups;
$(2)$ the Fubini-Pick form $A$ is invariant under the action of $G$.}
\subsection{The multiple Calabi product of hyperbolic affine hyperspheres}
For later use we make a brief review of the Calabi composition of multiple factors of hyperbolic affine hyperspheres.
Detailed proofs of the facts listed in this subsection have been given in the articles \cite{lix11} and \cite{lix14}.
\newcommand{\stx}[2]{\strl{(#1)}{#2}}
\newcommand{\spec}[1]{\prod_{#1=1}^K\fr{c_{#1}^{n_{#1}+1}H_{(#1)}^{\fr1{n_{#1}+2}}} {(n_{#1}+1)(-\!\!\stx{#1}{L}_1)}}
\newcommand{\la}{\stx{a}{L}\!\!_1{}}\newcommand{\lb}{\stx{b}{L}\!\!_1{}}
\newcommand{\lc}{\stx{c}{L}\!\!_1{}}\newcommand{\lalp}{\stx{\alpha}{L}\!\!_1{}}
\newcommand{\ha}{\!\stx{a}{h}{}\!\!}
\newcommand{\Ha}{H_{(a)}}\newcommand{\Hb}{H_{(b)}}\newcommand{\Hc}{H_{(c)}}
\newcommand{\ga}{\!\!\stx{a}{g}{}\!\!\!}\newcommand{\Ga}{\stx{a}{G}\!\!{}}
\newcommand{\gb}{\!\!\stx{b}{g}{}\!\!\!}\newcommand{\Gb}{\stx{b}{G}\!\!{}}
\newcommand{\galp}{\!\!\stx{\alpha}{g}{}\!\!\!}
\newcommand{\xai}{x_{a,i_a}} \newcommand{\xaij}{x_{a,i_aj_a}}
\newcommand{\HH}{f_K\prod_a\fr{c_a^{(n_a+1)(f_K-1)}\Ha^{\fr{f_K+1}{n_a+2}}}
{(n_a+1)^{f_K-n_a}(-\!\!\la)^{f_K-n_a-1}}}
\newcommand{\h}{f^{-\fr1{n+2}}_K \prod_a\fr {(n_a+1)^{\fr{f_K-n_a}{f_K+1}}(-\!\!\la)^{\fr{f_K-n_a-1}{f_K+1}}}
{\left(c_a^{n_a+1}\right)^{\fr{f_K-1}{f_K+1}}\Ha^{\fr1{n_a+2}}}}
\newcommand{\oma}{\stx{a}{\omega}{}\!\!}
\newcommand{\omb}{\stx{b}{\omega}{}\!\!}
\newcommand{\tdca}{(n_a+1)\big(-\!\!\la\big)\prod_b\fr{c_b^{n_b+1}} {(n_b+1)\big(-\!\!\lb\big)}}
\newcommand{\cha}{\prod_a\fr{c_a^{n_a+1}\Ha^{\fr1{n_a+2}}} {(n_a+1)\big(-\!\!\la\big)}}
\newcommand{\chb}{\prod_b\fr{c_b^{n_b+1}\Hb^{\fr1{n_b+2}}} {(n_b+1)\big(-\!\!\lb\big)}}
\newcommand{\chc}{\prod_c\fr{c_c^{n_c+1}\Hc^{\fr1{n_c+2}}} {(n_c+1)\big(-\!\!\lc\big)}}
\newcommand{\ca}{\prod_a\fr{c_a^{n_a+1}}{(n_a+1)\big(-\!\!\la)}}
\newcommand{\cb}{\prod_b\fr{c_b^{n_b+1}}{(n_b+1)\big(-\!\!\lb)}}
\newcommand{\cc}{\prod_c\fr{c_c^{n_c+1}}{(n_c+1)\big(-\!\!\lc)}}
\newcommand{\Aa}{\stx{a}{A}{}\!\!}
\newcommand{\Aalp}{\stx{\alpha}{A}{}\!\!}
\newcommand{\olomea}{\stx{a}{\ol\omega}{}\!\!}
\newcommand{\olomeb}{\stx{b}{\ol\omega}{}\!\!}
\newcommand{\olgma}{\stx{a}{\ol\Gamma}{}\!\!\!}
\newcommand{\olgmb}{\stx{b}{\ol\Gamma}{}\!\!\!}
Let $r,s$ be two nonnegative integers with $K:=r+s\geq 2$ and $x_\alpha:M^{n_\alpha}_\alpha\to\bbr^{n_\alpha+1}$, $1\leq \alpha\leq s$, be hyperbolic affine hyperspheres of dimension $n_\alpha>0$ with affine mean curvatures $\stx{\alpha}{L}\!\!_1$ and with the origin as their common affine center. For convenience we make the following convention:
$$1\leq a,b,c\cdots\leq K,\quad 1\leq\lambda,\mu,\nu\leq K-1,\quad
1\leq\alpha,\beta,\gamma\leq s,\quad \td\alpha=\alpha+r,\ \td\beta=\beta+r,\ \td\gamma=\gamma+r.
$$
Furthermore, for each $\alpha=1,\cdots,s$, set $\td i_{\alpha}=i_\alpha+K-1+\sum_{\beta<\alpha}n_\beta$ with $1\leq i_\alpha\leq n_\alpha$.
Define
$$
f_a:=\begin{cases} a,&1\leq a\leq r;\\ \sum_{\beta\leq \alpha}n_\beta+\td{\alpha},&r+1\leq a=\td\alpha\leq r+s,
\end{cases}
$$
and
$$e_a:=\exp\left(-\fr{t_{a-1}}{n_{a}+1}+\fr{t_{a}}{f_{a}}+\fr{t_{a+1}}{f_{a+1}} +\cdots+\fr{t_{K-1}}{f_{K-1}}\right),\quad 1\leq a\leq K=r+s,$$
where we adopt the conventions that $t_0=0$ and that the sum $\fr{t_{a}}{f_{a}}+\cdots+\fr{t_{K-1}}{f_{K-1}}$ is empty for $a=K$.
In particular,
$$
e_1=\exp\left(\fr{t_1}{f_1}+\fr{t_2}{f_2} +\cdots+\fr{t_{K-1}}{f_{K-1}}\right),\quad
e_K=\exp\left(-\fr{t_{K-1}}{n_K+1}\right).
$$
Put $n=\sum_\alpha n_\alpha+K-1$ and $M^n=\bbr^{K-1}\times M^{n_1}_1\times\cdots\times M^{n_s}_s$. For any $K$ positive numbers $c_1,\cdots,c_K$, define a smooth map $x:M^n\to\bbr^{n+1}$ by
\begin{align}
x(t^1,&\cdots,t^{K-1},p_1,\cdots,p_s):=(c_1e_1,\cdots, c_re_r,c_{r+1}e_{r+1}x_1(p_1),\cdots,c_Ke_Kx_s(p_s)),\nnm\\&\hs{1cm}\forall (t^1,\cdots,t^{K-1},p_1,\cdots,p_s)\in M^n.\label{mulpro2}
\end{align}
{\prop\label{general sense} (\cite{lix11}) The map $x:M^n\to\bbr^{n+1}$ defined above is a new hyperbolic affine hypersphere with the affine mean curvature
\be\label{newl1c}
L_1=-\fr1{(n+1)C},\quad C:=\left(\fr1{n+1}\prod_{a=1}^r c_a^2\cdot\prod_{\alpha=1}^s\fr{c_{r+\alpha}^{2(n_\alpha+1)}} {(n_\alpha+1)^{n_\alpha+1}(-\!\!\stx{\alpha}{L}_1)^{n_\alpha+2}}\right)^{\fr1{n+2}}.
\ee
Moreover, for given positive numbers $c_1,\cdots,c_K$, there exist some $c>0$ and $c'>0$ such that
the following three hyperbolic affine hyperspheres
\bea &x:=(c_1e_1,\cdots, c_re_r,c_{r+1}e_{r+1}x_1,\cdots,c_Ke_Kx_s),\nnm\\
&\bar x:=c(e_1,\cdots, e_r,e_{r+1}x_1,\cdots,e_Kx_s),\nnm\\
&\td x:=(e_1,\cdots, e_r,e_{r+1}x_1,\cdots,c'e_Kx_s)\nnm
\eea
are equiaffine equivalent to each other.}
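To see formula \eqref{mulpro2} at work in the simplest case, take $r=K=2$ and $s=0$, so that both factors are points, $n_1=n_2=0$ and $n=K-1=1$. Then $f_1=1$, $e_1=e^{t_1}$, $e_2=e^{-t_1}$, and \eqref{mulpro2} becomes
$$
x(t_1)=(c_1e^{t_1},\,c_2e^{-t_1}),
$$
which parametrizes one branch of the hyperbola $x^1x^2=c_1c_2$, $x^1>0$, a hyperbolic affine sphere in $\bbr^2$ centered at the origin.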
{\dfn\label{df2} {\rm(\cite{lix11})} \rm The hyperbolic affine hypersphere $x$ is called the Calabi composition of $r$ points and $s$ hyperbolic affine hyperspheres.}
Then we have
{\cor\label{cor} {\rm(\cite{lix11})} The Calabi composition $x:M^n\to \bbr^{n+1}$ of $r$ points and $s$ hyperbolic affine hyperspheres $x_\alpha:M^{n_\alpha}\to\bbr^{n_\alpha+1}$, $1\leq \alpha\leq s$, is affine symmetric if and only if
each positive dimensional factor $x_\alpha$ is symmetric.}
Note that for a given locally strongly convex hypersurface $x:M^n\to\bbr^{n+1}$ with the affine metric $g$, $(M^n,g)$ is a Riemannian manifold. Then we have the following characterization of Calabi composition of symmetric factors which is important in the proof of Theorem \ref{main}:
{\thm\label{chara} {\rm(\cite{lix14}; cf. \cite{lix13})} A locally strongly convex and affine symmetric {\bf hypersphere} $x:M^n\to\bbr^{n+1}$ is locally affine equivalent to the Calabi composition of some hyperbolic affine hyperspheres possibly including point factors if and only if $M^n$ is reducible as a Riemannian manifold with respect to the affine metric.}
\section{Some typical examples}
To make the main theorem more understandable, we provide in this section a systematic and unified treatment of some typical examples of affine symmetric hyperspheres in $\bbr^{n+1}$ giving, for the first time, the necessary computation details. These examples have partly appeared in \cite{sas80}, \cite{li-sim-zhao93}, \cite{nom-sas94}, \cite{dil-vra94}, \cite{bir-djo12}, \cite{lix13} and particularly in the important classification theorem by Z.J. Hu, H.Z. Li and L. Vrancken (\cite{hu-li-vra11}, see also Theorem \ref{cla thm} in the next section).
\expl\label{expl1} \rm (\cite{li-sim-zhao93}, \cite{nom-sas94}) Quadric Hypersurfaces
There are three kinds of quadric hypersurfaces in $\bbr^{n+1}$ and they are given by the following quadric equations
\begin{align}
&\text{(1)\ Ellipsoid:}\quad(x^1)^2+\cdots+(x^n)^2+(x^{n+1})^2=c^2,\quad c>0;\\
&\text{(2)\ Paraboloid:}\quad(x^1)^2+\cdots+(x^n)^2=2x^{n+1};\\
&\text{(3)\ Hyperboloid:}\quad(x^1)^2+\cdots+(x^n)^2-(x^{n+1})^2=-c^2,\quad x^{n+1}>0,
\quad c>0.\hs{1.1in} \end{align}
It is well known that the above three hypersurfaces are (resp.\ elliptic, parabolic and hyperbolic) affine hyperspheres (with resp.\ positive, zero and negative affine principal curvatures) and have vanishing Fubini-Pick forms. It then follows that, with respect to the affine metrics, they have constant (resp.\ positive, zero and negative) affine sectional curvatures. In particular, they are affine symmetric hyperspheres. Also we have
{\prop\label{expl1-prop} {\rm(\cite{li-sim-zhao93})} A locally strongly convex hypersurface $x:M\to\bbr^{n+1}$ has vanishing Fubini-Pick form if and only if it is one of the above quadric hypersurfaces.}
\expl\label{expl2} \rm (\cite{li-sim-zhao93}) The standard flat hypersurfaces with nonzero Fubini-Pick form
Given a positive number $C$, let $x:\bbr^{n}\to \bbr^{n+1}$ be the well known flat hyperbolic affine hypersphere of dimension $n$ which is defined by
$$
x^1\cdots x^{n} x^{n+1}=C,\quad x^1>0,\cdots,x^{n+1}>0.
$$
Then it is not hard to see that $x$ is the Calabi composition of $n+1$ points and thus its affine metric is flat. In fact, we can write for example
$$
x=(e_1,\cdots,e_{n},Ce_{n+1}).
$$
It follows from Corollary \ref{cor} that $x$ is affine symmetric. In particular, $x$ has a positive constant Pick invariant.
Note that by a theorem of L. Vrancken, A-M. Li and U. Simon in \cite{vra-li-sim91} (also see \cite{amli89}), Example \ref{expl2} is, up to equiaffine equivalence, the only one with flat affine metric and positive Pick invariant.
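For instance, the flat hypersphere of Example \ref{expl2} fits into Proposition \ref{general sense} with $r=K=n+1$, $s=0$, $c_1=\cdots=c_n=1$ and $c_{n+1}=C$; formula \eqref{newl1c} then yields its affine mean curvature
$$
L_1=-\fr1{n+1}\left(\fr{n+1}{C^2}\right)^{\fr1{n+2}} =-\left(\fr1{(n+1)^{n+1}C^2}\right)^{\fr1{n+2}}<0,
$$
consistent with $x$ being a hyperbolic affine sphere.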
\expl\label{expl3} \rm(\cite{nom-sas94}, \cite{dil-vra94}, \cite{hu-li-sim-vra09}) The standard embedding
$$x:M\equiv{\rm SL}(m,\bbr)/{\rm SO}(m)\to\bbr^{n+1},\quad n=\fr12m(m+1)-1,\quad m\geq 3.$$
Let $\mathfrak{s}\mathfrak{l}(m,\bbr)$, $\mathfrak{s}\mathfrak{o}(m)$ be the Lie algebras of ${\rm SL}(m,\bbr)$, ${\rm SO}(m)$ respectively, and $\bbr^{n+1}\equiv {\mathfrak s}(m)$ the vector space of real symmetric matrices of order $m$. Then the canonical decomposition of $\mathfrak{s}\mathfrak{l}(m,\bbr)$ with respect to $\mathfrak{s}\mathfrak{o}(m)$ is ${\mathfrak s}{\mathfrak l}(m,\bbr)={\mathfrak s}{\mathfrak o}(m)+{\mathfrak s}_0(m)$ where
$${\mathfrak s}_0(m):=\{X\in {\mathfrak s}(m);\ \tr X=0\}$$
and is naturally identified with the tangent space $T_oM$ at the origin $o={\rm SO}(m)\in M$, the coset of the identity matrix.
There is a representation $\phi$ of ${\rm SL}(m,\bbr)$ on $\bbr^{n+1}$ defined by
$$
\phi(a)X:=aXa^t,\quad \text{for\ }a\in {\rm SL}(m,\bbr),\ X\in \bbr^{n+1}.
$$
Then we have
{\lem\label{expl3-lem}{\rm(\cite{nom-sas94})} $\phi({\rm SL}(m,\bbr))\subset {\rm SL}(n+1,\bbr)$. So $\phi({\rm SL}(m,\bbr))$ can be taken to be a subgroup of the unimodular group ${\rm UA}(n+1)$ on $\bbr^{n+1}$.}
For a given constant $L_1<0$, put
$$C=\fr{\sqrt{m}}{4}\left(\fr4{m(-L_1)}\right)^{\fr{n+2}2}$$
and define a map
$x:{\rm SL}(m,\bbr)/{\rm SO}(m)\to \bbr^{n+1}$ as follows:
$$
x(\,{}_a{\rm SO}(m))=Caa^t,\quad \text{for\ }a\in {\rm SL}(m,\bbr).
$$
Then it is clear that $x$ is equivariant with respect to the representation $\phi:{\rm SL}(m,\bbr)\to {\rm UA}(n+1)$ (see Lemma \ref{expl3-lem}) and $x(M)$ coincides with the subset of all positive-definite matrices in ${\mathfrak s}(m)$ with constant determinant $C^m$, and $x(o)=CI_m$ where $I_m$ is the identity matrix of order $m$.
Furthermore, $x$ is an equiaffine symmetric hypersphere of affine mean curvature $L_1$. In fact, this last conclusion follows by the following computation:
Now for each $X\in{\mathfrak s}_0(m)\equiv T_oM$, $a(t):={}_{\exp tX}{\rm SO}(m)$ is a geodesic curve on $M$. Then it holds that
$$
x_*(X)=\left.\dd{}{t}\right|_{t=0}x(a(t))=C\left.\dd{}{t}\right|_{t=0}((\exp tX)(\exp tX)^t)=2CX.
$$
This shows that $x$ is an immersion at $o$ and thus is an immersion globally since $x$ is equivariant. Clearly, $x$ is injective and is thus an imbedding of $M$ into $\bbr^{n+1}$.
Moreover, the standard inner product $(\cdot,\cdot)$ on $\bbr^{n+1}\equiv {\mathfrak s}(m)$ is defined by $(X,Y)=\tr(XY)$, $X,Y\in {\mathfrak s}(m)$.
Since
$$(x_*(X),x(o))=(2CX,CI_m)=2C^2\tr(XI_m)=2C^2\tr X=0,\quad X\in T_oM\equiv{\mathfrak s}_0(m),$$
$x(o)$ is a transversal vector of $x$ at $o$ and thus is transversal everywhere by the equivariance.
On the other hand, if we denote by $Y^*$ the Killing vector field on $M$ induced by $Y\in {\mathfrak s}_0(m)$, then the value of $Y^*$ at $a(t)$ is
$$
Y^*|_{a(t)}=\left.\dd{}{s}\right|_{s=0}(\,{}_{\exp sY a(t)}{\rm SO}(m))=\left.\dd{}{s}\right|_{s=0}({}\,_{\exp s Y \exp t X}{\rm SO}(m)).
$$
Therefore
$$
x_*(Y^*|_{a(t)})=C\left.\dd{}{s}\right|_{s=0}((\exp sY \exp tX)(\exp sY \exp tX)^t).
$$
It follows that
\begin{align}
X(x_*(Y^*))=&C\left.\ppp{}{t}{s}\right|_{t=s=0}(\exp sY \exp tX \exp tX^t \exp sY^t)\nnm\\
=&2C(YX+XY)=2C\left(YX+XY-\fr2m\tr(XY)I_m\right)+\fr4mC(X,Y)I_m\label{expl3-gausf}
\end{align}
implying that $x$ is locally strongly convex since $(X,Y)$ is positive definite.
Moreover the affine metric (Blaschke metric) of $x$ at the origin $o$ is by definition
$$
g_o(X,Y)=\left(\fr{4C}{\sqrt{m}}\right)^{\fr2{n+2}}(X,Y)=-\fr4{mL_1}(X,Y),\quad X,Y\in {\mathfrak s}_0(m).
$$
Clearly $g_o$ is positive definite and invariant by ${\rm SO}(m)$, and it induces an invariant Riemannian metric $g$. On the other hand, the involution map $\sigma:{\mathfrak s}{\mathfrak l}(m,\bbr)\to {\mathfrak s}{\mathfrak l}(m,\bbr)$ defined by $\sigma(X)=-X^t$ is isometric with respect to $g_o$, thus the invariant metric $g$ is symmetric. Since $x$ is equivariant, $g$ is nothing but the affine metric of $x$.
Let $A_o$ be the $(1,2)$ tensor on $\mathfrak{s}_0(m)$ defined by
$$
A_o(X,Y)=XY+YX-\fr2m\tr(XY)I_m,\quad \forall X,Y\in\mathfrak{s}_0(m),
$$
which gives a linear map for any $X\in \mathfrak{s}_0(m)$: $A_o(X):\mathfrak{s}_0(m)\to \mathfrak{s}_0(m)$ by $A_o(X)Y=A_o(X,Y)$, $Y\in \mathfrak{s}_0(m)$.
To find the affine normal vector at $o$, we first prove the following lemma:
{\lem\label{expl3-lem2} Define $A_o(X,Y,Z)=g_o(A_o(X,Y),Z)$, for $X,Y,Z\in \mathfrak{s}_0(m)$. Then
$(1)$ the $(0,3)$-tensor $A_o(X,Y,Z)$ is totally symmetric;
$(2)$ for each $X\in \mathfrak{s}_0(m)$, the linear map $A_o(X)$ is traceless.}
\proof
Conclusion (1) is direct. To prove (2), we denote by $e^j_i$ the $m\times m$ matrix with the $(i,j)$-th element being $1$ and all other elements zero, that is, its $(k,l)$-th element $(e^j_i)^k_l=\delta^k_i\delta^j_l$, $1\leq k,l\leq m$. Then $\{e^j_i,\ 1\leq i,j\leq m\}$ is the standard basis for the real linear space $M(m,\bbr)$ of $m\times m$ real matrices. Define
$$
f_\alpha=e^\alpha_\alpha-e^m_m\text{\ for\ }1\leq \alpha\leq m-1;\quad f^j_i=\fr12(e^j_i+e^i_j)\text{\ for\ }1\leq i<j\leq m.
$$
Then $\{f_\alpha,f^j_i\}$ is a basis for $\mathfrak{s}_0(m)$. For $X=(X^i_j)\in \mathfrak{s}_0(m)$, we find by direct computation
\begin{align}
A_o(X)f_\alpha=&f_\alpha X+Xf_\alpha-\fr2m\tr(f_\alpha X)I_m=\fr2m((m-1)X^\alpha_\alpha+X^m_m)f_\alpha+\cdots,\label{expl3-21}\\
A_o(X)f^j_i=&f^j_iX+Xf^j_i-\fr2m\tr(f^j_iX)I_m=(X^j_j+X^i_i)f^j_i+\cdots\label{expl3-22}
\end{align}
where we have omitted those terms not containing $f_\alpha$ in \eqref{expl3-21}, and those not containing $f^j_i$ in \eqref{expl3-22}, respectively. It then follows that
\begin{align}
\tr A_o(X)=&\fr2m\sum_\alpha((m-1)X^\alpha_\alpha+X^m_m) +\sum_{i<j}(X^j_j+X^i_i)\nnm\\ =&\fr{2(m-1)}m\sum_iX^i_i+\fr12\sum_{i,j}(X^i_i+X^j_j)-\sum_iX^i_i=0
\end{align}
since $\tr X=\sum_iX^i_i=0$.
\endproof
Since $Y^*$ is chosen to be the Killing vector field on $M$ corresponding to $Y$, we have $\hat\nabla_XY^*=0$ where $\hat\nabla$ is the Levi-Civita connection of the affine metric $g$. Therefore by taking the trace of \eqref{expl3-gausf} with respect to $g_o$ and using Lemma \ref{expl3-lem2}, we find that, at $o$, the affine normal vector
$$
\xi_o=\fr1n\Delta_g x=\left(\fr{4C}{\sqrt{m}}\right)^{-\fr2{n+2}}\fr4m\cdot x(o)=-L_1x(o).
$$
$\xi_o$ is clearly invariant by $\phi({\rm SO}(m))$ and for any $X\in {\mathfrak s}{\mathfrak l}(m,\bbr)$, $\phi_*(X)\xi_o\in x_*(T_o(M))$. Then the equivariant transversal vector field $\xi$ induced by $\xi_o$ coincides with the affine normal vector (see Lemma 4.4 in \cite{nom-sas94}). Since $x$ is also equivariant, $\xi=-L_1x$ holds identically. Therefore, $x$ is a hyperbolic affine sphere with affine mean curvature $L_1$.
Now the equivariance of $x$ implies that its Fubini-Pick form $A$ is ${\rm SL}(m,\bbr)$-invariant which indicates that $x$ is affine symmetric, and the invariant Fubini-Pick form $A$ is uniquely determined by the cubic form $A_o$ given in Lemma \ref{expl3-lem2} (see also Definition \eqref{f-p}):
\be\label{expl3-eq5}
A_o(X,Y,Z)=g_o\left(\left(XY+YX-\fr2m\tr(XY)I_m\right),Z\right),\quad X,Y,Z\in \mathfrak{s}_0(m).
\ee
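For a concrete instance of \eqref{expl3-eq5}, take $m=3$ (so $n=5$) and $X=Y=Z={\rm diag}(2,-1,-1)\in \mathfrak{s}_0(3)$. Then $X^2={\rm diag}(4,1,1)$ and $\tr(X^2)=6$, so
$$
A_o(X,X)=2X^2-\fr23\tr(X^2)I_3={\rm diag}(4,-2,-2),\qquad A_o(X,X,X)=-\fr4{3L_1}\tr\big(A_o(X,X)X\big)=-\fr{16}{L_1}>0.
$$
In particular $A\not\equiv 0$, so by Proposition \ref{expl1-prop} this hypersphere is not a quadric.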
\expl\label{expl4} \rm(\cite{hu-li-vra11}, cf. \cite{bir-djo12} for $m=3$) The standard embedding
$$x:M\equiv{\rm SL}(m,\bbc)/{\rm SU}(m)\to\bbr^{n+1},\quad n=m^2-1,\quad m\geq 3.$$
Let $\mathfrak{s}\mathfrak{l}(m,\bbc)$, $\mathfrak{s}\mathfrak{u}(m)$ be the Lie algebras of ${\rm SL}(m,\bbc)$, ${\rm SU}(m)$ respectively, and $\bbr^{n+1}\equiv {\mathfrak h}(m)$ the vector space of complex Hermitian matrices of order $m$. Then the canonical decomposition of $\mathfrak{s}\mathfrak{l}(m,\bbc)$ with respective to $\mathfrak{s}\mathfrak{u}(m)$ is ${\mathfrak s}{\mathfrak l}(m,\bbc)={\mathfrak s}{\mathfrak u}(m)+{\mathfrak h}_0(m)$ where
$${\mathfrak h}_0(m):=\{X\in {\mathfrak h}(m);\ \tr X=0\},$$
which can be identified with the tangent space $T_oM$ at the origin $o={\rm SU}(m)\in M$.
There is a representation $\phi$ of ${\rm SL}(m,\bbc)$ on $\bbr^{n+1}$ by
$$
\phi(a)X:=aX\bar a^t,\quad \text{for\ }a\in {\rm SL}(m,\bbc),\ X\in \bbr^{n+1}.
$$
{\lem\label{expl4-lem}\rm $\phi({\rm SL}(m,\bbc))\subset {\rm SL}(n+1,\bbr)$ and thus $\phi({\rm SL}(m,\bbc))$ can be viewed as a subgroup of the unimodular group ${\rm UA}(n+1)$ on $\bbr^{n+1}$.}
\proof Let $e^j_i$ and $f^j_i$ be as in Example \ref{expl3}. Then $\{e^j_i,\ 1\leq i,j\leq m\}$ can also be taken as the standard basis for the complex linear space $M(m,\bbc)$ of $m\times m$ complex matrices, with its complex dual basis denoted by $\{\omega^i_j,\ 1\leq i,j\leq m\}$. Define
$$
\td f^j_i=\fr12\sqrt{-1}(e^j_i-e^i_j)\text{\ for\ }1\leq i<j\leq m.
$$
Then $\{e^i_i,f^j_i,\td f^j_i\}$ is a basis for the real linear space $\mathfrak{h}(m)$ with the dual basis $\{\theta^i_i,\theta^i_j,\td \theta^i_j\}$ where
$$
\theta^i_i=\omega^i_i\text{\ for\ }1\leq i\leq m;\quad \theta^i_j=(\omega^i_j+\omega^j_i),\ \td \theta^i_j=\sqrt{-1}(\omega^j_i-\omega^i_j)\text{\ for\ }1\leq i<j\leq m.
$$
It follows that for $i<j$,
\begin{align}
\theta^i_j(e^l_k)=&\omega^i_j(e^l_k)+\omega^j_i(e^l_k) =\delta^i_k\delta^l_j+\delta^j_k\delta^l_i,\label{expl4-6}\\
\td\theta^i_j(e^l_k)=&\sqrt{-1}(\omega^j_i(e^l_k)-\omega^i_j(e^l_k)) =\sqrt{-1}(\delta^j_k\delta^l_i-\delta^i_k\delta^l_j).\label{expl4-7}
\end{align}
For each $X\in\mathfrak{s}\mathfrak{l}(m,\bbc)$, write $X=(X^k_l)_{m\times m}=\sum_{k,l}X^k_le^l_k$. Then $\tr X=\sum_iX^i_i=0$ and, for each pair of $i,j$, we have
\be\label{expl401}
Xe^j_i=\sum_{k,l,p}(X^k_p\delta^p_i\delta^j_l)e^l_k=\sum_{k}X^k_ie^j_k,\quad e^j_i\bar X^t=\sum_{k,l,p}(\delta^k_i\delta^j_p\bar X^l_p)e^l_k=\sum_{k}\bar X^k_je^k_i.
\ee
Since, by definition, $\phi_*(X)(A)=XA+A\bar X^t$ ($X\in {\mathfrak s}{\mathfrak l}(m,\bbc)$, $A\in\bbr^{n+1}$), it follows by \eqref{expl4-6}--\eqref{expl401} and $\tr X=0$ that
\begin{align}
\sum_i\theta^i_i(\phi_*(X)(f^i_i))=&\sum_i\omega^i_i(Xe^i_i+e^i_i\bar X^t) =\sum_{i,k}(X^k_i\omega^i_i(e^i_k)+\bar X^k_i\omega^i_i(e^k_i))\nnm\\
=&\sum_i(X^i_i+\bar X^i_i)=0,\label{expl4-8}\\
\sum_{i<j}\theta^i_j(\phi_*(X)(f^j_i)) =&\fr12\sum_{i<j}\theta^i_j(X(e^j_i+e^i_j)+(e^j_i+e^i_j)\bar X^t)\nnm\\
=&\fr12\sum_{i<j,k}(X^k_i\theta^i_j(e^j_k)+X^k_j\theta^i_j(e^i_k)+\bar X^k_j\theta^i_j(e^k_i)+\bar X^k_i\theta^i_j(e^k_j))\nnm\\
=&\fr12\sum_{i<j}(X^i_i+X^j_j+\bar X^j_j+\bar X^i_i)=\fr12\sum_{i\neq j}(X^i_i+\bar X^i_i)\nnm\\
=&\fr{m-1}2\sum_i(X^i_i+\bar X^i_i)=0,\label{expl4-9}\\
\sum_{i<j}\td\theta^i_j(\phi_*(X)(\td f^j_i))=&\fr12\sqrt{-1}\sum_{i<j}\td\theta^i_j(X(e^j_i-e^i_j)+(e^j_i-e^i_j)\bar X^t) \nnm\\ =&\fr12\sqrt{-1}\sum_{i<j,k}(X^k_i\td\theta^i_j(e^j_k)-X^k_j\td\theta^i_j(e^i_k) +\bar X^k_j\td\theta^i_j(e^k_i)-\bar X^k_i\td\theta^i_j(e^k_j))\nnm\\
=&\fr12\sum_{i<j}(X^i_i+X^j_j+\bar X^j_j+\bar X^i_i) =\fr12\sum_{i\neq j}(X^i_i+\bar X^i_i)\nnm\\
=&\fr{m-1}2\sum_i(X^i_i+\bar X^i_i)=0.\label{expl4-10}
\end{align}
Taking the sum of \eqref{expl4-8}--\eqref{expl4-10}, we find
$$
\tr(\phi_*(X))=\sum_i\theta^i_i(\phi_*(X)(f^i_i)) +\sum_{i<j}\theta^i_j(\phi_*(X)(f^j_i)) +\sum_{i<j}\td\theta^i_j(\phi_*(X)(\td f^j_i))=0,
$$
completing the proof of Lemma \ref{expl4-lem}.\endproof
For a given constant $L_1<0$, put
$$C=\fr{\sqrt{m}}{4}\left(\fr4{m(-L_1)}\right)^{\fr{n+2}2}$$
and define a map
$x:{\rm SL}(m,\bbc)/{\rm SU}(m)\to \bbr^{n+1}$ as follows:
$$
x(\,{}_a{\rm SU}(m))=Ca\bar a^t,\quad \text{for\ }a\in {\rm SL}(m,\bbc).
$$
Then, by Lemma \ref{expl4-lem}, $x$ is equivariant with respect to the representation $\phi:{\rm SL}(m,\bbc)\to {\rm UA}(n+1)$.
Now for each $X\in \mathfrak{h}_0(m)$, define $a(t)={}_{\exp tX}{\rm SU}(m)$. Then
$$
x_*(X)=\left.\dd{}{t}\right|_{t=0}x(a(t))=C\left.\dd{}{t}\right|_{t=0}((\exp tX)(\ol{\exp tX})^t)=2CX.
$$
Thus $x$ is an immersion at $o$ and hence everywhere. Moreover, $x$ is also an imbedding of $M$ into $\bbr^{n+1}$.
For $X,Y\in {\mathfrak h}(m)$, define $(X,Y)=\tr(XY)$. Then $(\cdot,\cdot)$ is the standard inner product on $\bbr^{n+1}\equiv {\mathfrak h}(m)$. In particular, it is positive definite. As in Example \ref{expl3}, $x$ is equivariant and the position vector $x$ is transversal everywhere on $M$.
For any $Y\in \mathfrak{h}_0(m)$, the corresponding Killing vector field $Y^*$ on $M$ satisfies
$$
Y^*|_{a(t)}=\left.\dd{}{s}\right|_{s=0}(\,{}_{\exp sY a(t)}{\rm SU}(m))=\left.\dd{}{s}\right|_{s=0}({}\,_{\exp sY \exp tX}{\rm SU}(m)).
$$
It then follows that
$$
x_*(Y^*|_{a(t)})=C\left.\dd{}{s}\right|_{s=0}((\exp sY \exp tX)(\ol{\exp sY \exp tX})^t).
$$
Therefore
\begin{align}
X(x_*(Y^*))=&C\left.\ppp{}{t}{s}\right|_{t=s=0}(\exp sY \exp tX \exp t\ol X^t \exp s\ol Y^t)\nnm\\
=&2C(XY+YX)=2C\left(YX+XY-\fr2m\tr(XY)I_m\right)+\fr4mC(X,Y)I_m\label{expl4-gausf}
\end{align}
implying that $x$ is locally strongly convex as $(X,Y)$ is positive definite.
Thus, at the origin $o$, the invariant affine metric $g_o$ is defined by:
$$
g_o(X,Y)=\left(\fr{4C}{\sqrt{m}}\right)^{\fr2{n+2}}(X,Y)=-\fr4{mL_1}(X,Y),\quad X,Y\in \mathfrak{h}_0(m).
$$
Since $g_o$ is positive definite and invariant by ${\rm SU}(m)$, the invariant Riemannian metric $g$ determined by $g_o$ is exactly the affine metric of $x$. Similar to Example \ref{expl3}, we can prove that, for any $X\in \mathfrak{h}_0(m)$, the real linear map
$$
Y\in \mathfrak{h}_0(m)\mapsto XY+YX-\fr2m\tr(XY)I_m
$$
is also traceless. So, by making use of \eqref{expl4-gausf}, we find that $\xi=-L_1x$ holds identically, implying that $x$ is a hyperbolic affine sphere with affine mean curvature $L_1$.
Note that the involution map $\sigma:{\mathfrak s}{\mathfrak l}(m,\bbc)\to {\mathfrak s}{\mathfrak l}(m,\bbc)$ given by $\sigma(X)=-\bar X^t$ is isometric with respect to $g_o$, thus the invariant affine metric $g$ is symmetric. Furthermore, the Killing vector field $Y^*$ on $M$ given by $Y\in \mathfrak{h}_0(m)$ satisfies $\hat\nabla_XY^*=0$ for all $X\in \mathfrak{h}_0(m)$, where $\hat\nabla$ is the Levi-Civita connection. It follows from \eqref{expl4-gausf} and Definition \eqref{f-p} that the Fubini-Pick form $A$ of $x$ is invariant and is determined by its value $A_o$ at the origin $o$:
\be\label{expl4-eq5}
A_o(X,Y,Z)=g_o\left(\left(XY+YX-\fr2m\tr(XY)I_m\right),Z\right),\quad X,Y,Z\in \mathfrak{h}_0(m).
\ee
This shows that $x$ is an affine symmetric hypersphere.
\expl\label{expl5}\rm(\cite{hu-li-vra11}, cf. \cite{bir-djo12} for $m=3$) The standard embedding
$$x:M\equiv{\rm SU}^*(2m)/{\rm Sp}(m)\to\bbr^{n+1},\quad n=2m^2-m-1,\quad m\geq 3,$$
where ${\rm SU}^*(2m)={\rm SL}(2m,\bbc)\cap {\rm U}^*(2m)$ with ${\rm U}^*(2m)$ the usual ${\rm U}$-star group of order $2m$.
Define $J=\lmx 0&-I_m\\I_m&0\rmx$. Then the $U$-star group, or in other words, the general quaternion linear group has an expression in terms of complex matrices as
\begin{align}
U^*(2m)=&\{T\in GL(2m,\bbc);\ TJ=J\bar T\}\nnm\\
=&\{T=\lmx A&B\\-\bar B&\bar A\rmx\in {\rm GL}(2m,\bbc);\ A,B\in {\rm M}(m,\bbc)\}.
\end{align}
Consequently, the Lie algebra of $U^*(2m)$ is written as
\begin{align}
\mathfrak{u}^*(2m)=&\left\{X\in {\rm M}(2m,\bbc);\ XJ=J\bar X\right\}\nnm\\
=&\{X=\lmx A&B\\-\bar B&\bar A\rmx\in {\rm M}(2m,\bbc);\ A,B\in {\rm M}(m,\bbc)\}.
\end{align}
It follows that the special $U$-star group or the special quaternion linear group ${\rm SU}^*(2m)$ is given by
$$
{\rm SU}^*(2m)={\rm SL}(2m,\bbc)\cap {\rm U}^*(2m)=\left\{T\in U^*(2m);\ \det T=1\right\}
$$
of which the Lie algebra is
$$
\mathfrak{s}\mathfrak{u}^*(2m)=\left\{X\in \mathfrak{u}^*(2m);\ \tr X=0\right\}.
$$
Moreover, the quaternion unitary group or the symplectic group is defined by
$${\rm Sp}(m)={\rm U}(2m)\cap {\rm SU}^*(2m)=\{T\in {\rm SU}^*(2m);\ T\bar T^t=I_{2m}\}$$ with the Lie algebra
$$\mathfrak{s}\mathfrak{p}(m)=\left\{X\in\mathfrak{s}\mathfrak{u}^*(2m);\ X+\bar X^t=0\right\}.
$$
Let $\bbr^{n+1}\equiv {\mathfrak q}{\mathfrak h}(m)$ be the real vector space of quaternion Hermitian matrices of order $m$. Then we have
$$
{\mathfrak q}{\mathfrak h}(m)=\mathfrak{h}(m)\oplus \mathfrak{s}\mathfrak{o}(m,\bbc)
=\{\lmx A&B\\-\bar B&\bar A\rmx\in {\rm M}(2m,\bbc);\ A\in \mathfrak{h}(m),B\in\mathfrak{s}\mathfrak{o}(m,\bbc)\}.
$$
There is a representation $\phi$ of ${\rm SU}^*(2m)$ on $\bbr^{n+1}$ defined by
$$
\phi(a)X:=aX\bar a^t,\quad \text{for\ }a\in {\rm SU}^*(2m),\ X\in \bbr^{n+1}.
$$
Choosing a suitable basis for the real vector space $\bbr^{n+1}$ together with its dual basis, by a computation similar to that in Example \ref{expl4} we obtain
{\lem\label{expl5-lem}\rm $\phi({\rm SU}^*(2m))\subset {\rm SL}(n+1,\bbr)$, that is, $\phi({\rm SU}^*(2m))$ can be viewed as a subgroup of the unimodular group ${\rm UA}(n+1)$ on $\bbr^{n+1}$.}
Define ${\mathfrak q}{\mathfrak h}_0(m)=\{X\in{\mathfrak q}{\mathfrak h}(m);\ \tr X=0\}$.
Then the canonical decomposition of $\mathfrak{s}\mathfrak{u}^*(2m)$ with respect to $\mathfrak{s}\mathfrak{p}(m)$ is as follows:
$$
\mathfrak{s}\mathfrak{u}^*(2m)=\mathfrak{s}\mathfrak{p}(m)
+{\mathfrak q}{\mathfrak h}_0(m)
$$
where the subspace ${\mathfrak q}{\mathfrak h}_0(m)$ can be identified with the tangent space $T_oM$ at the origin $o={\rm Sp}(m)\in M$.
For a given constant $L_1<0$, put
$$C=\fr{\sqrt{2m}}{4}\left(\fr2{m(-L_1)}\right)^{\fr{n+2}2}$$
and define a map
$x:{\rm SU}^*(2m)/{\rm Sp}(m)\to \bbr^{n+1}$ as follows:
$$
x(\,{}_a{\rm Sp}(m))=Ca\bar a^t,\quad \text{for\ }a\in {\rm SU}^*(2m).
$$
Then, by Lemma \ref{expl5-lem}, $x$ is equivariant with respect to the representation $\phi:{\rm SU}^*(2m)\to {\rm UA}(n+1)$.
Now for each $X\in {\mathfrak q}{\mathfrak h}_0(m)$, define $a(t)={}_{\exp t X}{\rm Sp}(m)$. Then
$$
x_*(X)=\left.\dd{}{t}\right|_{t=0}x(a(t))=C\left.\dd{}{t}\right|_{t=0}((\exp tX)(\ol{\exp tX})^t)=2CX.
$$
Thus $x$ is an immersion at $o$ and hence everywhere. Moreover, $x$ is also an imbedding of $M$ into $\bbr^{n+1}$.
For $X,Y\in {\mathfrak q}{\mathfrak h}(m)$, define $(X,Y)=\tr(XY)$. Then $(\cdot,\cdot)$ is the standard inner product on $\bbr^{n+1}\equiv {\mathfrak q}{\mathfrak h}(m)$. In particular, it is positive definite. As in Example \ref{expl3}, $x$ is equivariant and the position vector $x$ is transversal everywhere on $M$.
For any $Y\in {\mathfrak q}{\mathfrak h}_0(m)$, the corresponding Killing vector field $Y^*$ on $M$ satisfies
$$
Y^*|_{a(t)}=\left.\dd{}{s}\right|_{s=0}(\,{}_{\exp sY a(t)}{\rm Sp}(m))=\left.\dd{}{s}\right|_{s=0}({}\,_{\exp sY \exp tX}{\rm Sp}(m)).
$$
It then follows that
$$
x_*(Y^*|_{a(t)})=C\left.\dd{}{s}\right|_{s=0}((\exp sY \exp tX)(\ol{\exp sY \exp tX})^t).
$$
Therefore
\begin{align}
X(x_*(Y^*))=&C\left.\ppp{}{t}{s}\right|_{t=s=0}(\exp sY \exp tX \exp t\ol X^t \exp s\ol Y^t)\nnm\\
=&2C(XY+YX)=2C\left(YX+XY-\fr1m\tr(XY)I_{2m}\right)+\fr2mC(X,Y)I_{2m}\label{expl5-gausf}
\end{align}
implying that $x$ is locally strongly convex since $(X,Y)$ is positive definite.
Thus, at the origin $o$, the invariant affine metric $g_o$ is defined by:
$$
g_o(X,Y)=\left(\fr{4C}{\sqrt{2m}}\right)^{\fr2{n+2}}(X,Y)=-\fr2{mL_1}(X,Y),\quad X,Y\in {\mathfrak q}{\mathfrak h}_0(m).
$$
Since $g_o$ is positive definite and invariant by ${\rm Sp}(m)$, the invariant Riemannian metric $g$ induced by $g_o$ is exactly the affine metric of $x$. Once again we can prove that the real linear map
$$
Y\in {\mathfrak q}{\mathfrak h}_0(m)\mapsto XY+YX-\fr1m\tr(XY)I_{2m}
$$
has a vanishing trace for each $X\in {\mathfrak q}{\mathfrak h}_0(m)$. With this fact we use \eqref{expl5-gausf} to find that $\xi=-L_1x$ holds identically, implying that $x$ is a hyperbolic affine sphere with affine mean curvature $L_1$.
Moreover, the involution map $\sigma:{\mathfrak s}{\mathfrak u}^*(2m)\to {\mathfrak s}{\mathfrak u}^*(2m)$ given by $\sigma(X)=-\bar X^t$ is isometric with respect to $g_o$, thus the invariant affine metric $g$ is symmetric, and the Fubini-Pick form $A_o$ of $x$ at the origin $o$ is (Definition \eqref{f-p})
\be\label{expl5-eq5}
A_o(X,Y,Z)=g_o\left(\left(XY+YX-\fr1m\tr(XY)I_{2m}\right),Z\right),\quad X,Y,Z\in {\mathfrak q}{\mathfrak h}_0(m)
\ee
which is invariant under the adjoint action of ${\rm Sp}(m)$, and thus the ${\rm SU}^*(2m)$-invariant $3$-form $A$ induced by $A_o$ is exactly the Fubini-Pick form of the hypersurface $x:M\to \bbr^{n+1}$. This shows that $x$ is an affine symmetric hypersphere.
\expl\label{expl6}\rm (\cite{bir-djo12}, \cite{lix13}) The standard embedding
$$x:M\equiv{\rm E}_{6(-26)}/{\rm F}_4\to\bbr^{27},$$
where ${\rm E}_{6(-26)}$ is the noncompact real group of type $\mathfrak{e}_6$ with the compact real form ${\rm F}_4$ of type ${\mathfrak f}_4$ as its maximal compact subgroup.
Let $\mathbb{O}$ be the space of octonions and $\mathfrak{J}$ be the set of $3\times 3$ Hermitian matrices with entries in $\mathbb{O}$, that is
$$
\mathfrak{J}=\{X=\lmx \xi_1&x_3&\bar x_2\\ \bar x_3&\xi_2&x_1\\
x_2&\bar x_1&\xi_3\rmx\in {\rm M}(3,\mathbb{O});\ \bar X^t=X\},
$$
where ${\rm M}(3,\mathbb{O})$ is the real vector space of all octonionic square matrices of order $3$. Clearly $\mathfrak{J}$ is a $27$-dimensional real vector space and thus can be identified with $\bbr^{27}$. On $\mathfrak{J}$, the symmetric Jordan multiplication $\circ$ and the standard inner product $(\cdot,\cdot)$ are defined as follows:
$$
X\circ Y=\fr12(XY+YX),\quad (X, Y)=\tr(X\circ Y).
$$
Furthermore, the cross product $\times$ and the determinant function $\det$ are given by
\bea
&X\times Y=\fr12(2X\circ Y-\tr(X)Y-\tr(Y)X+(\tr(X)\tr(Y)-\tr(X\circ Y))I_3)\\
&\det(X)=\fr13(X\times X, X).
\eea
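Since $\mathbb{O}$ is nonassociative, these formulas deserve a sanity check. In the real-entry special case ($\mathbb{O}$ replaced by $\bbr$, so that $\mathfrak{J}$ becomes the real symmetric $3\times 3$ matrices), $X\times X$ reproduces the classical adjugate and $\det$ the usual determinant. The following symbolic check is a sketch of that special case (assuming \texttt{sympy} is available; it is not part of the original text):

```python
import sympy as sp

# Real-entry special case of the Jordan algebra J: 3x3 real symmetric
# matrices (octonion entries reduced to reals for a hand check).
a, b, c, d, e, f = sp.symbols('a b c d e f', real=True)
X = sp.Matrix([[a, d, e],
               [d, b, f],
               [e, f, c]])

I3 = sp.eye(3)
trX = X.trace()
trX2 = (X * X).trace()

# Freudenthal cross product with Y = X:
#   X x X = X∘X - tr(X) X + (1/2)(tr(X)^2 - tr(X∘X)) I3
XxX = X * X - trX * X + sp.Rational(1, 2) * (trX**2 - trX2) * I3

# det(X) = (1/3) (X x X, X), where (A, B) = tr(A∘B) = tr(AB) here
det_via_cross = sp.Rational(1, 3) * (XxX * X).trace()

assert sp.simplify(det_via_cross - X.det()) == 0
```

The identity $X\times X=X^2-\tr(X)X+\fr12(\tr(X)^2-\tr(X^2))I_3$ is exactly the Cayley-Hamilton expression for the adjugate of a $3\times 3$ matrix, which is why the trace pairing recovers $3\det(X)$.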
The noncompact group ${\rm E}_{6(-26)}$ is defined as the set of all determinant-preserving real linear automorphisms of $\mathfrak{J}$, that is
\be\label{E6(-26)} {\rm E}_{6(-26)}=\{A\in {\rm GL}_\bbr(\mathfrak{J});\ \det(AX)=\det(X),\,\forall X\in\mathfrak{J}\}.
\ee
The maximal compact subgroup of ${\rm E}_{6(-26)}$ is given by
\begin{align}
{\rm F}_4=&\{A\in {\rm E}_{6(-26)};\ A(X\circ Y)=(AX)\circ(AY),\, \forall X,Y\in\mathfrak{J}\}\label{F4-1}\\
\equiv&\{A\in {\rm E}_{6(-26)};\ A(I_3)=I_3\}.\label{F4-2}
\end{align}
For each matrix $T\in\mathfrak{J}$, there is an associated real linear endomorphism $\td T$ of $\mathfrak{J}$, defined by
$$
\td T(X):=T\circ X,\quad \forall X\in \mathfrak{J}.
$$
Define
$$
\mathfrak{m}=\{\td T;\ T\in\mathfrak{J}_0\}, \text{\ where\ }
\mathfrak{J}_0=\{T\in\mathfrak{J};\ \tr T=0\},
$$
and denote by ${\mathfrak f}_4$ the Lie algebra of ${\rm F}_4$. Then by \cite{yok09}, the Lie algebra $\mathfrak{e}_{6(-26)}$ has a canonical direct decomposition as
\be\label{dec1}\mathfrak{e}_{6(-26)}=\mathfrak{f}_4+\mathfrak{m}\ee
satisfying
$[\mathfrak{f}_4,\mathfrak{m}]\subset \mathfrak{m}$, $[\mathfrak{m},\mathfrak{m}]\subset \mathfrak{f}_4$. Note that we have a natural identification $\mathfrak{m}\equiv T_oM$ where $o:={}_{I_{27}}{\rm F}_4$ with $I_{27}$ the identity element in ${\rm E}_{6(-26)}$.
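As a quick dimension check (using the standard values $\dim{\mathfrak f}_4=52$ and $\dim\mathfrak{e}_{6(-26)}=78$, which are not quoted in the text):

```latex
\dim\mathfrak{m}=\dim\mathfrak{J}_0=27-1=26=78-52
=\dim\mathfrak{e}_{6(-26)}-\dim\mathfrak{f}_4,
```

so the decomposition \eqref{dec1} is consistent and $M={\rm E}_{6(-26)}/{\rm F}_4$ is $26$-dimensional, as required for a hypersurface in $\bbr^{27}$.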
Similarly to the above, one can perform a computation which shows that the trace of an arbitrary element of $\mathfrak{e}_{6(-26)}$ must vanish (for the details, see \cite{lix13}). Thus we have
{\prop\label{prop 5.1}\rm(\cite{lix13})
${\rm E}_{6(-26)}$ is a subgroup of the special linear group ${\rm SL}(27,\bbr)$.}
For any given constant $L_1<0$, set
$$C=\sqrt{3}(-3L_1)^{-14}>0$$
and then define a smooth map $f:{\rm E}_{6(-26)}\to \mathfrak{J}$ by
$f(L)=C\cdot L(I_3)$ for all $L\in {\rm E}_{6(-26)}$. Clearly, for any $L,L'\in {\rm E}_{6(-26)}$, $f(L)=f(L')$ if and only if $(L^{-1}\circ L')(I_3)=I_3$. By the definition \eqref{F4-2} of ${\rm F}_4$, $f$ naturally induces a smooth map $x:{\rm E}_{6(-26)}/{\rm F}_4\to\bbr^{27}\equiv\mathfrak{J}$:
\be\label{e6/f4}
x({}_L{\rm F}_4)=C\cdot L(I_3),\quad \forall L\in {\rm E}_{6(-26)}.
\ee
By Proposition \ref{prop 5.1}, we can choose a volume element on $\bbr^{27}$,
say, the canonical volume element with respect to the inner product $(\cdot,\cdot)$ on $\mathfrak{J}$, so that ${\rm E}_{6(-26)}$ can be identified with a subgroup of the
group ${\rm UA}(27)$ of unimodular affine transformations on $\bbr^{27}$.
Therefore, the induced map $x$ is equivariant as an affine hypersurface
in $\bbr^{27}$. Consequently all the equiaffine invariants of $x$, such as the affine metric, the Fubini-Pick form and the fundamental form, are ${\rm E}_{6(-26)}$-invariant.
Now for each $\td X\in\mathfrak{m}\equiv T_oM$, $X\in\mathfrak{J}_0$, $a(t):={}_{\exp t\td X}{\rm F}_4$ is a geodesic curve on $M$. It holds clearly that
$$
x_*(\td X)=\left.\dd{}{t}\right|_{t=0}x(a(t))=C\left.\dd{}{t}\right|_{t=0}(\exp t\td X(I_3))=C\td X(I_3)=C(X\circ I_3)=C\cdot X.
$$
This shows that $x$ is an immersion at $o$ and thus an immersion globally since $x$ is equivariant. Clearly, $x$ is injective and is thus an embedding of $M$ into $\bbr^{27}$.
Moreover, since for each $X\in \mathfrak{J}_0$,
$$(X,I_3)=\tr(X\circ I_3)=\tr X=0,$$ $x(o)$ is a transversal vector of $x$ at $o$ and thus is transversal everywhere. Furthermore,
for an arbitrary $Y\in\mathfrak{J}_0$, denote by $Y^*$ the Killing vector field on $M$ induced by $\td Y$; then the value of $Y^*$ at $a(t)$ is
$$
Y^*|_{a(t)}=\left.\dd{}{s}\right|_{s=0}(\,{}_{\exp s\td Y a(t)}{\rm F}_4)=\left.\dd{}{s}\right|_{s=0}({}\,_{\exp s\td Y \exp t\td X}{\rm F}_4).
$$
Therefore
$$
x_*(Y^*|_{a(t)})=C\left.\dd{}{s}\right|_{s=0}(\exp s\td Y \exp t\td X(I_3)).
$$
It follows that
\begin{align}
\td X(x_*(Y^*))=&C\left.\ppp{}{t}{s}\right|_{t=s=0}(\exp s\td Y\exp t\td X(I_3))\nnm\\
=&C(Y\circ(X\circ I_3))=C(Y\circ X)\nnm\\
=&C\left(X\circ Y-\fr13\tr(X\circ Y)I_3\right)+\fr13C(X,Y)I_3\label{gaussf}
\end{align}
implying that $x$ is locally strongly convex since $(X,Y)=\tr (X\circ Y)$ is positive definite.
Note that the inner product $(\cdot,\cdot)$ on $\mathfrak{J}_0$ is $\mathfrak{f}_4$-invariant and that the correspondence $\ \widetilde{\ }:\mathfrak{J}_0\to \mathfrak{m}$ is $\mathfrak{f}_4$-equivariant. It follows that the affine metric $g$ of $x$ is the invariant metric on ${\rm E}_{6(-26)}/{\rm F}_4$ induced by
$$
g_o(\td X,\td Y):=\left(\fr1{\sqrt{3}}C\right)^{\fr1{14}}(X,Y)=-\fr1{3L_1}(X,Y),\quad \forall\, X,Y\in\mathfrak{J}_0.
$$
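Here the exponent and the constant fit together: since $L_1<0$ and $C=\sqrt{3}(-3L_1)^{-14}$,

```latex
\left(\fr1{\sqrt{3}}C\right)^{\fr1{14}}
=\left((-3L_1)^{-14}\right)^{\fr1{14}}
=(-3L_1)^{-1}=-\fr1{3L_1}>0.
```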
Clearly, $g$ is symmetric since $g_o$ is invariant by the involution $\sigma(\td X)=-\widetilde{\ol X^t}$, $X\in {\mathfrak J}_0$.
A direct computation shows once more that for each $\td X$ with $X\in \mathfrak{J}_0$, the real linear map
$$
\td Y\mapsto \left(X\circ Y-\fr13\tr(X\circ Y)I_3\right)^{\widetilde{}},\quad\forall\ Y\in {\mathfrak J}_0
$$
is traceless. Taking the trace of \eqref{gaussf} with respect to the metric $g$ and using $\hat\nabla_{\td X}\td Y^*=0$, $X,Y\in {\mathfrak J}_0$, with $\hat\nabla$ the Levi-Civita connection of $g$ and $\td Y^*$ the Killing vector field induced by $\td Y$, we find that the affine normal satisfies $\xi=-L_1\cdot x$ at $o$ and thus everywhere. It follows that $x$ is a hyperbolic affine hypersphere with the affine mean curvature being the given number $L_1$.
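The tracelessness claim can be checked symbolically in the real-entry special case, with symmetric $3\times 3$ matrices standing in for $\mathfrak{J}$. The following is a sketch of that check (assuming \texttt{sympy} is available; not part of the original argument):

```python
import sympy as sp

# Real-entry sketch of the tracelessness claim: on trace-free symmetric
# 3x3 matrices (the real special case of J_0), the linear map
#   F_X : Y |-> X∘Y - (1/3) tr(X∘Y) I_3
# has vanishing trace for every trace-free symmetric X.
a, b, d, e, f = sp.symbols('a b d e f', real=True)
X = sp.Matrix([[a, d, e],
               [d, b, f],
               [e, f, -a - b]])  # tr X = 0
I3 = sp.eye(3)

def P(Z):
    """Projection of a 3x3 matrix onto the trace-free symmetric ones."""
    S = (Z + Z.T) / 2
    return S - S.trace() / 3 * I3

def F(Y):
    J = (X * Y + Y * X) / 2              # Jordan product X∘Y
    return J - (X * Y).trace() / 3 * I3  # tr(X∘Y) = tr(XY) here

# The trace of P∘F over all 3x3 matrices equals the trace of F on the
# 5-dimensional space of trace-free symmetric matrices, since the image
# of P∘F lies in that subspace.
trace_op = sp.S(0)
for r in range(3):
    for c in range(3):
        E = sp.zeros(3, 3)
        E[r, c] = 1
        trace_op += P(F(E))[r, c]

assert sp.simplify(trace_op) == 0
```

The same bookkeeping, with quaternionic or octonionic entries expanded over a real basis, verifies the claim in the cases actually used above.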
On the other hand, the invariant Fubini-Pick form $A$ of $x$ is induced by the following $\mathfrak{f}_4$-invariant form $A_o$ (see \eqref{f-p} and \eqref{gaussf})
$$
A_o(\td X,\td Y,\td Z)=g_o\left(\left(X\circ Y-\fr13\tr(X\circ Y)I_3\right)^{\widetilde{}},\td Z\right),\,
\forall X,Y,Z\in \mathfrak{J}_0,
$$
where once again we have used the fact that $\hat\nabla_{\td X}Y^*=0$. In particular, $x$ is an affine symmetric hypersphere in $\bbr^{27}$.
\section{Proof of the main theorem with an application}
In this section we are going to prove the main theorem of this paper. After this we shall prove a proposition which makes it clear that our classification is essentially equivalent to a previous important one given by Z.J. Hu, H.Z. Li and L. Vrancken in \cite{hu-li-vra11}. Thus in a sense we in fact provide a direct and shorter proof of the complete classification of the locally strongly convex hypersurfaces with parallel Fubini-Pick form. The main idea here has been used by H. Naitoh in \cite{nai81} to classify the irreducible totally real parallel submanifolds in the projective space.
Let $x:M^n\to\bbr^{n+1}$ be a locally strongly convex hypersphere with affine metric $g$ and Fubini-Pick form $A$, and suppose that $x$ is locally affine symmetric. Then by Definition \ref{dfn afsym}, $(M^n,g)$ is locally isometric to a simply connected symmetric space $G/K$ which is necessarily complete. Without loss of generality, we can put $M^n=G/K$. Furthermore, the Fubini-Pick form $A$ of $x$ must be an element of the set $\mathcal{S}_{(M^n,g)}(L_1)$ defined in Section 2.
Denote by $\mathfrak{g}$, $\mathfrak{k}$, respectively, the Lie algebras of $G$ and $K$, and $\mathfrak{g}=\mathfrak{k}+\mathfrak{m}$ the canonical decomposition of the symmetric Lie algebra pair $(\mathfrak{g},\mathfrak{k})$. Denote by $A_o$ the value of an element $A$ of $\mathcal{S}_{(M^n,g)}(L_1)$ at the origin point $o=\,{}_eK$, where $e$ is the unit element of the Lie group $G$. Define
\be\label{S^0(c)}
{\mathcal S}^0_{(M^n,g)}(L_1)=\{\sigma=A_o;\ A\in \mathcal S_{(M^n,g)}(L_1),\ \mathfrak{k}\cdot \sigma=0\}.
\ee
Then the Fubini-Pick form $A_o$ at the origin $o$ of an affine symmetric hypersphere $x:M^n\to\bbr^{n+1}$ with affine mean curvature $L_1$ is contained in ${\mathcal S}^0_{(M^n,g)}(L_1)$.
Since a locally strongly convex affine hypersurface with vanishing Fubini-Pick form $A$ must be equiaffine equivalent to one of the quadric hypersurfaces given in Example \ref{expl1} (see Proposition \ref{expl1-prop}), we can assume that $A\neq 0$. Thus, by the completeness and the theorems in \cite{li-sim-zhao93}, $x$ is a hyperbolic affine hypersphere and $(M^n,g)$ is a symmetric space of noncompact type.
If $(M^n,g)$ is reducible as a Riemannian manifold, then by Theorem \ref{chara} $x$ is a Calabi composition of $r$ points and $s$ irreducible hyperbolic affine hyperspheres. In what follows we consider the case that $(M^n,g)$ is irreducible.
The following Lemma is crucial in our proof of Theorem \ref{main}:
{\lem\label{sect4-lem} Let $M^n$ be a simply connected irreducible symmetric space of noncompact type and set $d_M=\dim\{\sigma\in S^3(\mathfrak{m});\ \mathfrak{k}\cdot\sigma=0\}$. Then $d_M=1$ if $M^n$ is one of the following spaces and $d_M=0$ otherwise:}
$$
{\rm SL}(m,\bbr)/{\rm SO}(m),\ m\geq 3;\quad {\rm SL}(m,\bbc)/{\rm SU}(m),\ m\geq 3;\quad {\rm SU}^*(2m)/{\rm Sp}(m),\ m\geq 3;\quad {\rm E}_{6(-26)}/{\rm F}_4.
$$
\proof The argument in proving Lemma \ref{sect4-lem} is the same as the one used by H. Naitoh in \cite{nai81} (cf. the proof of Lemma 4.2 in \cite{nai81}). Let $\mathfrak{a}$ be a maximal abelian subspace in $\mathfrak{m}$ and $W$ the Weyl group of $M^n$ relative to $\mathfrak{a}$. Denote by $S^3(\mathfrak{m})$ and $S^3(\mathfrak{a})$ the vector spaces of all symmetric trilinear forms on $\mathfrak{m}$ and $\mathfrak{a}$, respectively. Then it is known that the vector subspace $\{\sigma\in S^3(\mathfrak{m});\ \mathfrak{k}\cdot\sigma=0\}$ is isomorphic, by restriction to $\mathfrak{a}$, to the vector subspace $\{\td\sigma\in S^3(\mathfrak{a});\ w\cdot\td\sigma=\td\sigma,\ \forall\,w\in W\}$. Since the Weyl group acts on $\mathfrak{a}$ irreducibly, all the $W$-invariant polynomials of degree $3$ are irreducible. Hence a basis of this vector subspace is given by all the fundamental $W$-invariant polynomials of degree $3$. The Weyl group $W$ for $M^n$ is of type $A_l$, $B_l$, $C_l$, $D_l$, $E_l$, ${\rm F}_4$, $G_2$ or $B_lC_l$ by Araki's table (\cite{ara62}). Then by N. Bourbaki (\cite{bou68}), only the Weyl groups of type $A_l$ ($l\geq 2$) have one fundamental $W$-invariant polynomial of degree $3$, while the other Weyl groups have none. Thus the lemma follows easily. \endproof
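For concreteness (an illustration of the lemma, not part of the original argument): for $W$ of type $A_l$, realized as the symmetric group $S_{l+1}$ permuting the coordinates of $\mathfrak a\cong\{x\in\bbr^{l+1};\ \sum_i x_i=0\}$, the cubic power sum

```latex
\td\sigma(x)=\sum_{i=1}^{l+1}x_i^3
```

spans the $W$-invariant cubics: every symmetric cubic polynomial is a combination of $p_3$, $p_1p_2$ and $p_1^3$, and $p_1=\sum_i x_i$ vanishes on $\mathfrak a$. For ${\rm SL}(m,\bbr)/{\rm SO}(m)$ this invariant is the restriction of $X\mapsto\tr(X^3)$ to trace-free diagonal matrices. For the remaining types, the fundamental degrees listed by Bourbaki (e.g. $2,4,\dots,2l$ for $B_l$ and $C_l$; $2,6,8,12$ for ${\rm F}_4$; $2,6$ for $G_2$) never include $3$.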
Since, by our previous assumption, the Fubini-Pick form $A$ of $x$ is non-vanishing, we have $$0<\dim \mathcal{S}^0_{(M^n,g)}(L_1)\leq d_M.$$ It follows that in our case $d_M=1$. Thus by Lemma \ref{sect4-lem}, $(M^n,g)$ cannot be of constant sectional curvature.
{\prop\label{sect4-prop} Let $M^n$ be one of the symmetric spaces listed in Lemma \ref{sect4-lem} with symmetric metric $g$. If $\mathcal{S}^0_{(M^n,g)}(L_1)\neq\emptyset$, then the symmetric Riemannian metric $g$ is uniquely determined by the constant $L_1$ and $\mathcal{S}^0_{(M^n,g)}(L_1)$ contains only two elements $A_o,\phi\cdot A_o:=(\phi^{-1})^*A_o$ where $\phi$ is the symmetry of $(M^n,g)$ at the origin $o$.}
\proof Suppose $\td g$ is another symmetric Riemannian metric on $M^n$. Let $A\in\mathcal{S}^0_{(M^n,g)}(L_1)$ and $\td A\in\mathcal{S}^0_{(M^n,\td g)}(L_1)$. Then by \eqref{pre gaus_af sph1} we have
\begin{align}
R(X,Y)Z=&L_1(g(Y,Z)X-g(X,Z)Y)-[A(X),A(Y)](Z)\label{1}\\
\td R(X,Y)Z=&L_1(\td g(Y,Z)X-\td g(X,Z)Y)-[\td A(X),\td A(Y)](Z)\label{2}.
\end{align}
Since $M^n$ is irreducible, $\td g=\lambda^2 g$ for some positive constant $\lambda$, implying that $\td R(X,Y)Z=R(X,Y)Z$ for all $X,Y,Z\in TM^n$. Moreover, $A\neq 0$ and $\td A\neq 0$ because neither $g$ nor $\td g$ is of constant sectional curvature. On the other hand, since $\mathcal{S}^0_{(M^n,g)}(L_1)$ and $\mathcal{S}^0_{(M^n,\td g)}(L_1)$ are both subsets of $\{\sigma\in S^3(\mathfrak{m});\ \mathfrak{k}\cdot\sigma=0\}$ and $d_M=1$, there is a nonzero number $\mu$ such that $\td A=\mu\cdot A$. Therefore \eqref{2} can be written as
\be\label{2'}
R(X,Y)Z=L_1\lambda^2(g(Y,Z)X-g(X,Z)Y)-\mu^2[A(X),A(Y)](Z).
\ee
Comparing \eqref{1} and \eqref{2'} we find
$$
(\mu^2-1)R(X,Y)Z=L_1(\mu^2-\lambda^2)(g(Y,Z)X-g(X,Z)Y).
$$
Note again that the metric $g$ is not of constant sectional curvature; hence $\mu^2=1$ and $\mu^2=\lambda^2$ since $L_1\neq 0$. It follows that $\td g=g$ (implying $\mathcal{S}^0_{(M^n,\td g)}(L_1)=\mathcal{S}^0_{(M^n, g)}(L_1)$) and $\td A=\pm A$. Finally, it is easy to see that $\phi\cdot A=-A$.
\endproof
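The last step deserves a word: the symmetry $\phi$ of $(M^n,g)$ at $o$ satisfies $\phi_{*o}=-{\rm id}$ on $T_oM$ and $\phi(o)=o$, hence

```latex
(\phi\cdot A)_o(X,Y,Z)=A_o(-X,-Y,-Z)=-A_o(X,Y,Z),\quad X,Y,Z\in T_oM,
```

so that $\phi\cdot A=-A$ by the invariance of both sides.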
{\cor\label{sect4-cor} Let $M^n$ be one of the symmetric spaces listed in Lemma \ref{sect4-lem}. Then the symmetric affine hypersphere $x:M^n\to\bbr^{n+1}$ is unique up to affine equivalences.}
\proof Suppose $x,\td x:M^n\to\bbr^{n+1}$ are two affine symmetric hyperspheres with the same affine mean curvature $L_1$ and with Fubini-Pick forms $A,\td A$, respectively. Then by Proposition \ref{sect4-prop}, the affine metrics of $x$ and $\td x$ coincide; denote this common metric by $g$. Therefore both $A_o$ and $\td A_o$, the values of $A$ and $\td A$ at $o$ respectively, are elements of $\mathcal{S}^0_{(M^n,g)}(L_1)$. If $\td A_o=A_o$ then $\td A=A$ by invariance and the conclusion follows directly from Theorem \ref{affine uniqueness}; so we may assume $\td A_o=\phi\cdot A_o$, or equivalently, $\td A_o=(\phi^{-1})^*A_o$. Consider the composition $\bar x:=x\circ\phi^{-1}$. Then the sufficient part of Theorem \ref{affine uniqueness} tells us that the Fubini-Pick form $\bar A$ of $\bar x$ satisfies $\bar A=(\phi^{-1})^*A$. In particular, at $o$, we have $\bar A_o=(\phi^{-1})^*A_o=\td A_o$. Since $\bar A$ and $\td A$ are invariant, $\bar A=\td A$ globally on $M^n$. Thus an application of the necessary part of Theorem \ref{affine uniqueness} shows that $\bar x$ and $\td x$ are equiaffine equivalent, implying that $\td x$ and $x$ are affine equivalent. \endproof
Summing up the foregoing discussion, we complete the proof of the main theorem (Theorem \ref{main}):
Let $x:M^n\to \bbr^{n+1}$ be a locally strongly convex and affine symmetric hypersphere with affine metric $g$ and affine mean curvature $L_1$.
(1) If the Fubini-Pick form $A$ of $x$ vanishes identically, then by Proposition \ref{expl1-prop}, $x$ must be one of the quadric hypersurfaces in Example \ref{expl1};
(2) If the Riemannian manifold $(M^n,g)$ is irreducible and $A\neq 0$, then by Lemma \ref{sect4-lem} and Corollary \ref{sect4-cor}, $x$ is affine equivalent to one of Examples \ref{expl3}--\ref{expl6} in Section 3;
(3) If $(M^n,g)$ is reducible, then by Theorem \ref{chara}, $x$ is affine equivalent to the Calabi composition of some $r$ points and $s$ hyperbolic affine hyperspheres listed in Examples \ref{expl1} and \ref{expl3}--\ref{expl6}, where $r+s\geq 2$.
Thus Theorem \ref{main} is proved.
Finally, to conclude this article, we remark on an alternative and simpler proof of a classification theorem originally proved by Hu et al.\ in \cite{hu-li-vra11}. For this, we need the following results:
{\prop\label{sym paral} {\rm(\cite{lix13})}
A nondegenerate hypersurface $x:M^n\to\bbr^{n+1}$ is of parallel Fubini-Pick form $A$ if and only if $x$ is locally affine symmetric.}
\proof
First we suppose that the Fubini-Pick form $A$ of $x$ is parallel. Then by \cite{bok-nom-sim90}, $x$ must be an affine hypersphere. It then follows from \eqref{gaus_af sph} that the affine metric $g$ must be locally symmetric. Thus locally we can write $M^n=G/K$ and the canonical decomposition of the corresponding orthogonal symmetric pair $(\mathfrak{g},\mathfrak{k})$ is written as
${\mathfrak g}=\mathfrak{k}+\mathfrak{m}$ where the vector space $\mathfrak m$ is identified with $T_oM$. Here $o\in M^n$ is the base point given by $o=eK$ with $e$ the identity of $G$.
Note that, for all $X,Y_i\in {\mathfrak m}=T_oM$, $i=1,2,3$, the vector field $Y_i(t):=L_{\exp (tX)*}(Y_i)$ is the parallel translation of $Y_i$ along the geodesic $\gamma(t):={}_{\exp (tX)}K$ (see, for example, \cite{hel01}).
Consequently we have
\begin{align}
&\dd{}{t}((L_{\exp (tX)}^*A)(Y_1,Y_2,Y_3))\nnm\\
=&\dd{}{t}(A_{\exp (tX)K}(L_{\exp (tX)*}(Y_1),
L_{\exp (tX)*}(Y_2),L_{\exp (tX)*}(Y_3)))\nnm\\
=&(\hat\nabla_{\gamma'(t)}A)(Y_1(t),Y_2(t),Y_3(t))=0,
\end{align}
where $\hat{\nabla}$ is the Levi-Civita connection of the metric $g$.
It follows that
\be\label{2.19-0}
A_{\exp (tX)K}(L_{\exp (tX)*}(Y_1),
L_{\exp (tX)*}(Y_2),L_{\exp (tX)*}(Y_3))
\ee
is constant with respect to the parameter $t$ and thus $A$ is $G$-invariant.
Conversely, we suppose that $M^n=G/K$ locally for some symmetric pair $(G,K)$ and that $A$ is $G$-invariant. Then for any $X,Y_i\in {\mathfrak m}=T_oM$, $i=1,2,3$, the function
\eqref{2.19-0}
is again a constant along the geodesic $\gamma(t)$. Therefore,
$$
(\hat\nabla_XA)(Y_1,Y_2,Y_3)=\left.\dd{}{t}\right|_{t=0} A_{\gamma(t)}(Y_1(t),Y_2(t),Y_3(t))=0,
$$
where we have once again used the fact that each $Y_i(t)$ is parallel along the geodesic $\gamma(t)$.
\endproof
{\prop\label{bok} {\rm(\cite{bok-nom-sim90})} A nondegenerate affine hypersurface with parallel Fubini-Pick form is necessarily an affine hypersphere.}
Now the following classification theorem comes readily from Theorem \ref{main}, Proposition \ref{sym paral} and Proposition \ref{bok}:
{\thm\label{cla thm} {\rm (cf. \cite{hu-li-vra11})}
Let $x:M^n\to \bbr^{n+1}$ ($n\geq 2$) be a locally strongly convex affine hypersurface with parallel Fubini-Pick form $A$. Then either of the following two cases holds:
$(1)$ With the affine metric $g$, the Riemannian manifold $(M^n,g)$ is irreducible and $x$ is locally equiaffine equivalent to
$(a)$ one of the three kinds of quadric affine spheres: Ellipsoid, elliptic paraboloid and hyperboloid; or
$(b)$ the standard embedding of the Riemannian symmetric space ${\rm SL}(m,\bbr)/{\rm SO}(m)$ into $\bbr^{n+1}$ with $n=\fr12m(m+1)-1$, $m\geq 3$;
or
$(c)$ the standard embedding of the Riemannian symmetric space ${\rm SL}(m,\bbc)/{\rm SU}(m)$ into $\bbr^{n+1}$ with $n=m^2-1$, $m\geq 3$; or
$(d)$ the standard embedding of the Riemannian symmetric space ${\rm SU}^*(2m)/{\rm Sp}(m)$ into $\bbr^{n+1}$ with $n=2m^2-m-1$, $m\geq 3$; or
$(e)$ the standard embedding of the Riemannian symmetric space ${\rm E}_{6(-26)}/{\rm F}_4$ into $\bbr^{27}$.
$(2)$ $(M^n,g)$ is reducible and $x$ is locally affine equivalent to the Calabi product of $r$ points and $s$ of the above irreducible hyperbolic affine spheres of lower dimensions, where $r$, $s$ are nonnegative integers and $r+s\geq 2$.}
Several decades of research into coherent atom-light interactions have precipitated a multifarious menagerie of optical phenomena for storing and manipulating light fields inside atomic ensembles \cite{fleischhauer_electromagnetically_2005,Hammerer:2010gsa}. In 2002 Andr\'e and Lukin proposed that dynamically modulating the refractive index along the optical axis of an ensemble could be used, not only to slow or store light, but also to reversibly trap a light field within the atoms \cite{andre_manipulating_2002}. In contrast to prior methods of creating `stored light' \cite{Phillips:2001td}, the optical component of this `stationary light' (SL) field remains considerable even as the light's group velocity vanishes. Rather than mapping the optical field to an entirely atomic state, the non-zero light field is prevented from propagating by a dynamic optical bandgap, analogous to the static bandgap caused by the structure of photonic crystals or ordered atoms \cite{schilke_photonic_2011}. This bandgap is controlled by counterpropagating optical fields and can be tuned to manipulate the localization of light fields and atomic excitations inside the ensemble.
The original SL proposal \cite{andre_manipulating_2002} was followed in quick succession by a first experimental demonstration in hot atomic vapor \cite{bajcsy_stationary_2003} and alternative SL schemes, some with an alternative physical picture of the underlying mechanism \cite{moiseev_quantum_2006}. Although early demonstrations were described in terms of standing wave modulated gratings, this picture does not apply to hot atoms due to thermal atomic motion. A subsequent multi-wave mixing formulation \cite{moiseev_quantum_2006,zimmer_coherent_2006} more comprehensively explains the complex behaviours resulting from combinations of counterpropagating optical fields.
The same all-optical tunable bandgap that lies behind SL effects has been considered as a flexible alternative to fixed photonic crystals \cite{artoni_optically_2006} with applications in quantum light storage and fast optical switching. Furthermore, because of the nonlinear behaviour of atomic ensembles, reversibly trapping a SL field in such a dynamic bandgap holds promise for enhancing nonlinear photon-photon interactions. The use of SL for this purpose is strongly motivated by the development of photonic quantum information processing \cite{chuang_simple_1995}. The size of a nonlinear phase shift scales with the product of the interaction strength and time. Consequently, nonlinear interactions usually involve high intensity fields. Photonic qubit gates, however, require nonlinear interactions for fields down to the single-photon level. SL therefore provides a path to this end by localizing optical fields in the atomic medium and providing a longer time for a nonlinear phase shift to be accumulated.
Although slow-light schemes have been proposed for such quantum information applications, they feature an inherent trade-off between interaction time and interaction strength, because the photonic component of the polariton is inversely proportional to the time the probe spends inside the ensemble \cite{harris_nonlinear_1999}. SL schemes allow greater flexibility in the configuration of the optical field potentially enabling larger conditional phase shifts at the single photon level \cite{andre_nonlinear_2005} and photon-photon entanglement \cite{friedler_deterministic_2005}.
These SL phase-gate schemes are in some ways analogous to the well known cavity QED techniques for atom-mediated photon-photon gates \cite{obrien_2009_nat_phot}, with the photonic band gap trapping the light field in place of an optical resonator. What distinguishes SL photon-photon gate proposals is the wide degree of tunability: the spatial distribution of stationary atomic coherences and optical fields can be separately configured to implement a wide range of potential interactions.
The purpose of this review is to consolidate the new body of work on SL and to provide a unified model of SL phenomena under various conditions given the experimental evidence now available. We will also consider the prospects and limitations of SL as a tool for quantum information applications in light of these results.
\subsection{Structure}
We have divided the following review into three sections. We begin in \Cref{sec:gen} with the literature concerning schemes for generating SL fields. Our intention in reviewing this work is to convey a sense of the history of the field, with an emphasis on experimental results and how they shaped our evolving picture of SL. We divide these results by generation scheme. The bulk of results to date concern EIT-based SL, which we cover in \Cref{sec:EITSL}. The more recent Raman-based schemes are covered in \Cref{sec:ORSL}. In the interests of brevity, we will at first introduce only the bare minimum theory required to provide perspective for these results.
In \Cref{sec:theo} we provide a mathematical basis for the physics in this review and introduce a comprehensive theoretical framework for SL in atomic ensembles. Our goal is to give the reader sufficient tools to explain and model the effects discussed in \Cref{sec:gen} as well as proposals which have yet to be implemented. We derive equations of motion for the optical fields and atomic coherences in a secular level scheme, i.e. one in which only the interactions of copropagating fields are considered. We use these to describe EIT and Raman SL. We return to a non-secular scheme to discuss how to calculate the effect of higher-order coherences on the SL. Finally, we discuss phase-matching requirements and transverse propagation.
\Cref{sec:apps} concerns the proposed uses of SL, in particular for mediating gates in photonic quantum information systems. We review early proposals for enhancing nonlinear interactions, discuss the no-go theorems these proposals inspired, and finally review more recent proposals that should overcome the obstacles raised by the no-go theorems.
\par
\medskip
\section{Generation of stationary light}\label{sec:gen}
The SL schemes proposed and implemented to date can be divided into two broad categories delineated by the configuration of the optical control generating the stationary field. The earliest SL schemes were based on electromagnetically induced transparency (EIT) with near-resonant control fields and subsequent research has largely focused on this approach. More recently, Raman SL schemes with far-detuned control fields have been introduced.
What follows in this section is a short summary of SL work to date in each of these approaches.
Although SL can, in principle, be generated in any sufficiently large ensemble of emitters, current demonstrations have been restricted to atomic systems by the difficulty of constructing optically deep and sufficiently coherent ensembles of `artificial atoms' such as quantum dots and diamond colour centres. In particular, most demonstrations have been done in vapours of rubidium or cesium atoms, which are dense ensembles with hydrogen-like spectra. SL has yet to be demonstrated in other optically deep atomic systems, such as rare-Earth ion crystals. Throughout this review we'll refer to the ensemble mediating SL as `atoms', but it's worth bearing in mind that SL could be generated in any optically dense ensemble of coherent emitters.
We'll see in the sections below that SL generation depends sensitively on the mobility of atoms within the ensemble, with qualitatively different behaviour in hot ($\approx 400$~K), cold ($\approx$~mK), and ultracold ($\approx$~$\mu$K) or stationary atoms. Hot, mobile atoms are typically the more complicated platform for quantum optics. Their velocity with respect to optical beams Doppler broadens transitions, and local coherences and excitations are carried with the atoms as they move ballistically or diffusively through the ensemble. In this case, however, motion has the effect of simplifying SL generation. Ultracold and stationary atoms can maintain a richer variety of coherences (higher-order coherences, see \Cref{sec:hocs}) and have the more complicated dynamics.
\begin{figure*}[th]
\centerline{\includegraphics[width=175mm]{EIT_combined.pdf}}
\caption{ \small (a) $\Lambda$ atomic level configuration for electromagnetically induced transparency. A weak probe, $\hat{\mathcal{E}}$, copropagates with a bright control field $\Omega$ which opens a narrow transparency window for the probe about the two-photon resonance $\delta = 0 $. The excited state decays at a rate $\Gamma$. (b) Probe transmission and (c) phase shift, $\phi$ through an ensemble of $\Lambda$-type atoms as a function of two-photon detuning $\delta$. The width of the EIT window, $\Delta \omega_{\mathrm{EIT}}$, increases with control field power (dashed: lower power, solid: higher power).}
\label{fig:eit}
\end{figure*}
\subsection{EIT-based stationary light}\label{sec:EITSL}
Electromagnetically induced transparency is a property of three-level atoms interacting with two (usually copropagating) optical fields. In the most common configuration, known as the $\Lambda$ configuration and shown in \Cref{fig:eit}(a), a weak probe field couples the atomic ground state $\ket{1}$ to an excited state $\ket{3}$ and a bright control beam couples the excited state to a meta-stable state $\ket{2}$. The bright control field, with Rabi frequency $\Omega$, opens a narrow transparency window for the probe, $\hat{\mathcal{E}}$, at a resonant frequency that would otherwise be absorbed. When the two fields are equally detuned from the excited state, they are said to be in two-photon resonance and drive the ground-metastable state coherence. When the two-photon resonance condition is exactly met, the probe and control field excitations interfere such that the excited state is not driven at all, and the atomic medium is rendered transparent for the probe field. This is known as electromagnetically induced transparency (EIT) and can be used to open a narrow window of almost perfect transparency in an otherwise opaque atomic ensemble. The bandwidth of the transparency window, shown in \Cref{fig:eit}(b), is \cite{fleischhauer_quantum_2002}
\begin{align}
\Delta \omega_{\mathrm{EIT}} = \frac{\Omega^2}{\Gamma \sqrt{\alpha}} \,,
\end{align}
where $\Gamma$ is the total decay rate of the excited state $\ket{3}$ and $\alpha$ is the opacity of the ensemble in the absence of EIT.
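To get a rough sense of scale for this formula, the following snippet evaluates it for rubidium-like parameters (the numbers are illustrative choices of ours, not taken from any particular experiment in the text):

```python
import math

# Illustrative numbers only: Rb D-line-like values.
Gamma = 2 * math.pi * 6.07e6   # excited-state decay rate (rad/s)
alpha = 100.0                  # resonant opacity without EIT
Omega = 2 * math.pi * 3.0e6    # control-field Rabi frequency (rad/s)

def eit_bandwidth(Omega, Gamma, alpha):
    """Transparency-window width: Delta_omega = Omega^2 / (Gamma sqrt(alpha))."""
    return Omega**2 / (Gamma * math.sqrt(alpha))

dw = eit_bandwidth(Omega, Gamma, alpha)
print(f"EIT window: 2pi x {dw / (2 * math.pi) / 1e3:.0f} kHz")  # -> 148 kHz

# The window narrows quadratically as the control power is reduced:
assert math.isclose(eit_bandwidth(Omega / 2, Gamma, alpha), dw / 4)
# and is far narrower than the natural linewidth for these parameters:
assert dw < Gamma
```

The quadratic scaling with $\Omega$ is what makes the transparency window, and hence the group velocity, tunable by the control power.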
In an ensemble of emitters, EIT gives rise to a controllable slow-light effect for the probe field. The group delay of the probe is proportional to the phase/frequency gradient $\partial \phi / \partial \delta$, shown in \Cref{fig:eit}(c), which is inversely proportional to the linewidth of the transparency window. The width of the transparency is, in turn, proportional to the power of the control field, providing an adjustable group velocity. EIT has been used to slow classical light pulses down to $17$~m/s \cite{hau_light_1999} and single photons to $10^3$~km/s \cite{eisaman_electromagnetically_2005}.
The slow light in EIT exists as a polariton superposition of an optical field and atomic coherence. The coherence is frequently generated between hyperfine split ground states in which case we may call the coherence envelope a `spinwave'. The greater the atomic proportion of the polariton, the slower it propagates through the ensemble. By adiabatically reducing the power in the control field, a resonant probe field can be decelerated. As the light slows, the optical component of the polariton is reduced while the spinwave component grows. When the control field power reaches zero, the probe field becomes a state of `stored light' that has no optical component. The stored light spinwave is motionless, which is why it is also sometimes referred to as `stopped light'.
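This polariton picture can be made quantitative with the standard dark-state polariton of Ref.~\cite{fleischhauer_quantum_2002}, quoted here for orientation ($g$ is the single-atom coupling strength and $N$ the atom number):

```latex
\hat{\Psi}(z,t)=\cos\theta(t)\,\hat{\mathcal{E}}(z,t)
-\sin\theta(t)\,\sqrt{N}\,\hat{\sigma}_{12}(z,t),\qquad
\tan\theta(t)=\frac{g\sqrt{N}}{\Omega(t)},
```

which propagates with group velocity $v_g=c\cos^2\theta$: as $\Omega\to 0$ the mixing angle $\theta\to\pi/2$, the polariton becomes purely atomic and $v_g\to 0$.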
EIT has been proposed as a means to create a memory device for quantum light \cite{fleischhauer_quantum_2002,fleischhauer_electromagnetically_2005} with applications in quantum communication \cite{lvovsky2009optical}. Experimental demonstrations have shown recall of non-classical states of light \cite{akamatsu_squeezed_2004, appel2008quantum, chaneliere2005storage} and efficiencies of up to 92\% \cite{PhysRevLett.120.183602}. EIT has also been proposed to enhance non-linearities for all-optical quantum gates \cite{schmidt1996giant, fleischhauer_electromagnetically_2005, chang2014quantum}. Further uses for EIT include slow-light enhanced sensing \cite{purves2006sagnac} and laser cooling below the Doppler limit \cite{roos2000experimental}.
\begin{figure*}[t]
\centerline{\includegraphics[width=150mm]{slschemes.pdf}}
\caption{ \small EIT SL level schemes. (a) Original proposal by Andr\'e and Lukin for EIT SL in stationary atoms \cite{andre_manipulating_2002}. (b) Multi-colour scheme proposed by Andr\'e and Lukin as a Doppler-free alternative to (a) for use in hot-atom ensembles \cite{andre_manipulating_2002}. (c) Scheme used by Bajcsy et al. in the first demonstration of SL \cite{bajcsy_stationary_2003}. When performed in a medium of stationary atoms, this scheme drives the creation of higher-order coherences (HOCs). (d) Two-colour SL scheme with a single excited state. Higher-order coherences may arise depending on the temperature and choice of detunings $\Delta_\pm$. (e) Secular SL scheme with separate excited states addressed by the counterpropagating fields. This prevents the creation of HOCs, and is functionally equivalent to (c) in a hot-atom ensemble, where atomic motion washes out HOCs. This is the secular approximation for hot atoms.}
\label{fig:schemes}
\end{figure*}
\subsubsection{Proposal and early demonstrations}
The first SL proposal by Andr\'e and Lukin \cite{andre_manipulating_2002} involved applying a spatially modulated light-shift to an EIT window via an additional optical standing wave addressing the metastable state and a fourth level as illustrated in \Cref{fig:schemes}(a). This was identified as creating a dispersive Bragg grating by periodically modulating the EIT transparency frequency across the ensemble, trapping the light. An example of a SL-induced bandgap is shown in \Cref{fig:bandgap}.
It was already recognized in this initial proposal that atomic motion would affect the SL dynamics via Doppler shifting. In the conclusion to Ref.~\cite{andre_manipulating_2002}, Andr\'e and Lukin proposed a Doppler-free alternative for generating SL in hot atoms. This scheme, shown in \Cref{fig:schemes}(b), couples the forward and backward propagating probes to separate coherences between distinct metastable states. In contrast to the scheme of \Cref{fig:schemes}(a), this alternative proposal is `multi-colour' in that the counterpropagating probe and control fields are at different frequencies.
The first demonstration of SL by Bajcsy et al. followed soon after this proposal, and took a simpler but related approach \cite{bajcsy_stationary_2003}. Rather than modulate the EIT frequency with an additional off-resonant standing wave, as in \Cref{fig:schemes}(a), a standing wave was generated in the control field intensity itself, see \Cref{fig:schemes}(c). This was thought to produce a standing wave in the probe absorption due to the spatial modulation of the EIT effect (an electromagnetically induced grating, or EIG \cite{brown_all-optical_2005}) and result in a stationary probe field with intensity fringes at the control field wavelength. SL fields generated by this approach in a hot rubidium vapour cell were witnessed by controllably releasing the field from the ensemble.
\begin{figure}[b!]
\centerline{\includegraphics[width=85mm]{bandgap.pdf}}
\caption{\small The transmission (dashed red line) and reflection (solid blue line) of a probe field incident on an atomic ensemble with periodically modulated dispersion (using the model in \cite{lahad_induced_2017}). A bandgap emerges on resonance, preventing the propagation of light within the ensemble while light incident on the ensemble is reflected. Off resonance, transmission peaks appear as interference between light reflected throughout the ensemble becomes constructive at the far end.}
\label{fig:bandgap}
\end{figure}
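The emergence of such a bandgap can be illustrated with a generic transfer-matrix sketch. The index contrast, layer count and quarter-wave layout below are illustrative assumptions (arbitrary units), not the dispersive EIT model of Ref.~\cite{lahad_induced_2017}; nevertheless, the same qualitative behaviour appears: near-unity reflection inside the stop band and transmission peaks away from it.

```python
import numpy as np

def layer(n, d, wavelength):
    # Free propagation through a homogeneous slab of index n, thickness d
    phase = 2 * np.pi * n * d / wavelength
    return np.array([[np.exp(-1j * phase), 0], [0, np.exp(1j * phase)]])

def interface(n1, n2):
    # Field matching at an n1 -> n2 interface (transfer-matrix convention)
    r = (n1 - n2) / (n1 + n2)
    t = 2 * n1 / (n1 + n2)
    return np.array([[1, r], [r, 1]]) / t

def reflectance(wavelength, n1=1.00, n2=1.02, periods=200):
    # Quarter-wave stack: each layer has optical thickness 1/4 at wavelength 1
    M = np.eye(2, dtype=complex)
    for _ in range(periods):
        M = M @ interface(n1, n2) @ layer(n2, 0.25 / n2, wavelength)
        M = M @ interface(n2, n1) @ layer(n1, 0.25 / n1, wavelength)
    return abs(M[1, 0] / M[0, 0]) ** 2

print(reflectance(1.0))   # on the Bragg resonance: reflectance close to 1
print(reflectance(1.3))   # far off resonance: reflectance is small
```

Even the weak index contrast assumed here ($\Delta n = 0.02$) reflects essentially all resonant light once enough periods accumulate, mirroring how a shallow EIT modulation can still trap light in an optically deep ensemble.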
The SL bandgap was further investigated by Brown et al. by measuring the reflection of a probe field from a hot $^{87}$Rb vapour cell illuminated by counterpropagating control fields \cite{brown_all-optical_2005} using the scheme shown in \Cref{fig:schemes}(c). In this experiment the reflected power was limited to less than $7$\% owing to the small fraction of atoms slow enough to contribute to a grating. Performance may also have been limited by phase mismatch between the forward and backward control fields.
A similar experiment \cite{wang_optical_2013} was carried out more recently in room-temperature Cs and showed that the SL bandgap can be made direction sensitive by detuning the counterpropagating control fields as in \Cref{fig:schemes}(d). In this experiment the detuning ($\Delta_{+}-\Delta_{-}$) was 20~MHz. The result was described as an ``all optical diode'' and was explained in terms of a ``travelling photonic crystal''. The standing wave between the two counterpropagating control fields $\Omega_\pm$ travels with velocity proportional to their frequency difference $\Delta_+ - \Delta_-$. In the frame of the travelling grating, forward and backward propagating probe fields are blue and red Doppler shifted respectively. By choosing detunings such that only backward propagating fields fall into the bandgap, the medium becomes direction sensitive. In this experiment the reflectance approached unity, considerably higher than the earlier work of Ref.~\cite{brown_all-optical_2005} even in a hot-atom medium. Ref.~\cite{ullah_observation_2014} added a fourth field in order to observe the interplay of the EIT SL bandgap with non-linear parametric field generation by four-wave mixing.
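The grating velocity is straightforward to estimate: the interference term between the control fields varies as $\cos[(k_+ + k_-)z - (\omega_+ - \omega_-)t]$, so the pattern moves at $v \approx \lambda\,\Delta f/2$ for a mutual detuning $\Delta f$. A quick numerical sketch (the Cs D2 wavelength is our assumption for this experiment):

```python
wavelength = 852e-9   # Cs D2 line; assumed value for the experiment above
delta_f = 20e6        # 20 MHz mutual detuning between the control fields

# Beat term cos((k+ + k-)z - 2*pi*delta_f*t) moves at v = 2*pi*delta_f/(k+ + k-),
# which for nearly equal wavevectors reduces to v = wavelength * delta_f / 2.
v_grating = delta_f * wavelength / 2
print(v_grating)      # ~8.5 m/s
```

This is slow compared to thermal atomic speeds, but the corresponding Doppler shift of the probe in the grating frame is enough to move one propagation direction out of the narrow bandgap.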
Although the first two demonstrations, Refs.~\cite{bajcsy_stationary_2003, brown_all-optical_2005}, would be the only two experimental investigations of SL for four more years, a great deal of theoretical work was carried out in this time. In particular, it was realized that the intuitive model of SL arising from a standing wave of the control field does not work when one considers the motion of atoms in hot vapours. The motion of the atoms across a period of the standing wave modulating the EIT window is much faster than the EIT response itself, whose timescale is set by the inverse bandwidth $\tau_\mathrm{EIT} = 1/\Delta\omega_\mathrm{EIT}$. In this case probe light travelling through the medium does not actually experience a spatially modulated absorption or dispersion profile.
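A rough numerical estimate makes this separation of timescales explicit (room-temperature $^{87}$Rb and a representative $\sim$1~MHz EIT bandwidth are assumed here):

```python
import numpy as np

kB = 1.380649e-23            # Boltzmann constant (J/K)
m = 87 * 1.66053907e-27      # mass of 87Rb (kg)

v_rms = np.sqrt(kB * 300 / m)        # 1-D rms thermal speed, ~170 m/s
t_transit = (780e-9 / 2) / v_rms     # time to cross one standing-wave period

tau_EIT = 1 / (2 * np.pi * 1e6)      # EIT response time for an assumed 1 MHz window

print(t_transit)   # ~2 ns
print(tau_EIT)     # ~160 ns: atoms average over the grating before EIT responds
```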
Hot atoms travel through alternate regions of high and low control field intensity during the EIT process, and so the standing wave grating is insufficient to explain how a stationary probe field is generated in the above experiments. Alternative schemes were also proposed for generating SL (in hot and ultracold atomic media) in which the standing wave grating picture would prove inadequate. A complete treatment of these many SL schemes requires considering multi-wave mixing, the coupling of coherences to counterpropagating fields, which is outlined in the following section.
\subsubsection{Multi-wave mixing theory}
Multi-wave mixing (MWM) is a process in which the ground state atomic coherence is coupled to both forward and backward travelling probe fields by their respective control fields. Under control field parameters that give SL conditions, the resulting polariton has zero group velocity. The interpretation of SL in this case is that the polariton is prevented from spreading by the interference between light travelling along multiple different paths. That is, the light is reflected at a continuum of different points in space, generating interference. This is equivalent to the mechanism by which a Bragg (absorption) grating produces a bandgap. Although the two pictures have similar consequences in the simplest configuration, the multi-wave mixing description is necessary to account for the interactions that generate higher-order coherences (HOCs).
The multi-wave mixing treatment of SL began with Moiseev and Ham, who considered several situations in which SL is generated without a standing wave control field. They showed that a SL field could be generated from an EIT slow light pulse by adiabatically switching on the counterpropagating control field \cite{moiseev_generation_2005}, and that forward and backward control fields of two \cite{moiseev_quantum_2006} or more \cite{moiseev_quantum_2007} different frequencies could generate multi-colour SL fields. The bichromatic control field scheme shown in \Cref{fig:schemes}(d) generates SL fields even when the mutual frequency difference is so large that no control field standing wave exists (in contrast to the small frequency difference used for the optical diode \cite{wang_optical_2013}). Moiseev and Ham explored the use of this scheme for wavelength conversion by adiabatically switching off the forward control field \cite{moiseev_quantum_2006}.
Zimmer et al. gave a detailed explanation of SL in a hot-atom medium based on multi-wave mixing \cite{zimmer_coherent_2006}. The authors drew upon pulse matching as described for EIT \cite{Harris1993} to derive the characteristic spreading time of the quasi-stationary probe pulse expanding to match the stationary control profile in a medium with finite optical depth.
\subsubsection{High-order coherences}\label{sec:hocs}
Coupling between forward and backward propagating fields in the multi-wave mixing theory generates coherences with higher momentum than the probe and control photons, as it involves the absorption and re-emission of photons travelling in opposite directions. These high momentum spinwaves are known as higher-order coherences (HOCs) and have a wavelength of half the optical wavelength or smaller. We use the term `coherence' as HOCs include both spinwaves (ground state coherences) and excited state coherences. The coherence order refers to the additional momentum, for example a $+2$ order coherence is generated by absorption of a forward probe photon with re-emission into the backward control field. Higher orders are generated by the subsequent absorption and re-emission of additional counterpropagating fields.
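Schematically, such treatments expand the coherence in spatial Fourier components of the optical wavevector $k$ (following, e.g., Refs.~\cite{hansen_stationary_2007, wu_stationary_2010}),
\begin{align}
\hat{\mathcal{S}}(z,t) = \sum_{n} \hat{\mathcal{S}}_{2n}(z,t)\, e^{2inkz}\,,
\end{align}
where the slowly varying $n=0$ term is the usual EIT spinwave and the $n \ne 0$ terms are the HOCs, with spatial period $\lambda/2|n|$.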
The movement of hot atoms (under the experimental conditions of Ref.~\cite{bajcsy_stationary_2003} and other demonstrations in hot atoms) washes out the sub-wavelength spinwave more quickly than the light can couple to it. Under such conditions HOCs are not important, and simply decohere before they can be coupled. In contrast, this washing out occurs much more slowly in cold or stationary atoms and these HOCs can indeed affect the propagation of light.
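The two regimes can be separated by a simple motional-dephasing estimate, $\tau_{\mathrm{HOC}} \sim 1/(2 k v_{\mathrm{rms}})$ for a coherence of wavevector $2k$ (the temperatures below are illustrative, for $^{87}$Rb on the 780~nm line):

```python
import numpy as np

kB = 1.380649e-23
m = 87 * 1.66053907e-27          # mass of 87Rb (kg)
k = 2 * np.pi / 780e-9           # optical wavevector

def hoc_lifetime(T):
    # Motional dephasing time of a wavevector-2k coherence: ~1/(2 k v_rms)
    return 1 / (2 * k * np.sqrt(kB * T / m))

print(hoc_lifetime(300))         # hot vapour: sub-nanosecond
print(hoc_lifetime(100e-6))      # ultracold ensemble: ~microsecond
```

In a hot vapour the HOCs dephase in well under a nanosecond, far faster than typical EIT dynamics, whereas in an ultracold ensemble they survive on microsecond timescales and can participate in the light-matter dynamics.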
The case of EIT with frequency degenerate counterpropagating control fields (\Cref{fig:schemes}(c)) in stationary atoms was considered by Hansen et al.~\cite{hansen_trapping_2007}, but only in the `secular' approximation in which coherences couple exclusively to copropagating fields and cross-coupling between counterpropagating fields is forbidden. Because such cross-coupling is the origin of HOCs, HOC formation is prevented in this system. This secular approximation is made exact in the dual-V level scheme shown in \Cref{fig:schemes}(e). The authors conclude that SL is possible in stationary atoms with such a scheme. In a hot-atom medium that cannot sustain HOCs, the classic single-colour EIT SL configuration of Ref.~\cite{bajcsy_stationary_2003} and \Cref{fig:schemes}(c) is equivalent to the Doppler-free, secular scheme of \Cref{fig:schemes}(e).
The standing wave grating description of the SL in Ref.~\cite{bajcsy_stationary_2003} is mathematically equivalent to a description of multi-wave mixing in the secular regime. This is not surprising, as either mechanism creates a coherent interchange of forward travelling light with backward travelling light, resulting in the same bandgap behaviour. Multi-wave mixing can give rise to a bandgap with the same profile as the dispersive grating bandgap in \Cref{fig:bandgap}.
The generation of HOCs was first mentioned by Moiseev and Ham \cite{moiseev_generation_2005}, and thoroughly examined by Hansen and Mølmer \cite{hansen_trapping_2007,hansen_stationary_2007}. In particular, the latter authors examine how HOCs can negatively affect the generation of SL. The additional interference between light generated from these various HOCs can disturb the multi-wave mixing SL effect \cite{hansen_stationary_2007, wu_stationary_2010}. Under these conditions the polariton can split and travel through the ensemble, and no SL field exists.
\subsubsection{Demonstrations with cold atoms}
A convincing demonstration of the role of HOCs in the multi-wave mixing process was presented by Lin et al.~\cite{lin_stationary_2009}. This work compared the schemes of \Cref{fig:schemes} (c) and (e) and the key results are shown in \Cref{fig:lin2009}. This work demonstrated that no SL field is formed in an ultracold-atom medium when the counterpropagating EIT control fields are frequency degenerate, \Cref{fig:lin2009}(e). This is consistent with the destructive role of HOCs introduced in the multi-wave mixing theory~\cite{hansen_stationary_2007}.
\begin{figure}[t!]
\centerline{\includegraphics[width=80mm]{Lin2009.pdf}}
\caption{\small Propagation of a pulse (pink diamonds, scaled by 0.2) through an ultracold atomic medium with combinations of slow light, stored light and SL. Experiments (points) are compared to theoretical predictions (lines). Control field intensity is shown in black (forward) and green (backwards). Recalled probe intensity is shown in blue (forwards) and red (backwards).
(a) Forward probe delayed by slow light (circles, continuous control not shown) and retrieved after having been converted to stored light (squares, switched control shown).
(b) Backward probe retrieved with backward control.
(c) Forward probe delayed by continuous backward control (not shown).
(d--f) show a sequence of stored light (2--3~$\mu$s) followed by SL (3--5~$\mu$s) and forward recall (5~$\mu$s).
(d) A model for a hot atomic medium with single-colour control fields.
(e) Model and experiment for ultracold atomic vapour with single-colour control fields. Substantial probe light leaks both forward and backward during the SL stage. The absence of retrieved probe compared to the hot-atom case indicates that SL was unsuccessful.
(f) Mutually-detuned (two-colour) control fields and ultracold atoms. The smaller leakage of the probe and presence of recalled forward probe indicates the successful creation of SL. (Figure modified from \cite{lin_stationary_2009}).}
\label{fig:lin2009}
\end{figure}
When using a scheme where the forward and backward control fields had very different frequencies, Ref.~\cite{lin_stationary_2009} successfully demonstrated the formation of SL, \Cref{fig:lin2009}(f). This was the first experiment to generate SL fields in an ensemble of laser-cooled atoms. The forward propagating probe and control pair had a wavelength of $780$~nm and the backward pair $795$~nm. Due to the wavelength difference, the beat between the two control fields sweeps the intensity pattern through the medium at a rate several orders of magnitude faster than the EIT bandwidth. The energy difference also prevented direct coupling between the forward probe and backward control and vice versa. The SL field is therefore generated without any standing wave in the control field. The large difference in wavelengths ensured that the HOCs were suppressed, but also prevented effective phase-matching of the counterpropagating fields, causing rapid decay of the spinwave during the SL period.
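An order-of-magnitude sketch supports this separation of timescales (the $\sim$1~MHz EIT bandwidth is an assumed representative value):

```python
c = 299792458.0                       # speed of light (m/s)

beat = c * (1 / 780e-9 - 1 / 795e-9)  # beat frequency of the two control colours
eit_bandwidth = 1e6                   # assumed ~1 MHz EIT window

print(beat)                   # ~7e12 Hz
print(beat / eit_bandwidth)   # ratio ~10^6: far outside the EIT response
```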
In their analysis, Lin et al. matched the shape of the leaked and released SL fields, with qualitative agreement, to a multi-wave mixing model that included excited state coherences of order $\pm1$ driving spinwave coherences of order $\pm 2$. Subsequent work by Wu et al. \cite{wu_stationary_2010} expanded on this multi-wave mixing model by including both higher order coherences and residual Doppler broadening, which remains non-negligible at temperatures of several hundred $\mu$K and was included in an earlier theoretical treatment by the same authors \cite{wu_decay_2010}. The higher order, Doppler broadened MWM model matches additional features of the data taken by Lin et al. in Ref.~\cite{lin_stationary_2009}.
In a pair of follow-up experiments \cite{peters_observation_2010, peters_formation_2012} Peters et al. further explored SL formation in ultracold ensembles with the same apparatus as Ref.~\cite{lin_stationary_2009}. First they prepared SL using a much smaller forward and backward detuning difference ($\Delta_+ - \Delta_-$) on the order of $10$~MHz \cite{peters_observation_2010}. In this configuration the off-resonant transitions are weakly driven, but may nevertheless introduce a non-uniform, time-varying phase variation to the SL field (a prediction made by Moiseev and Ham in their MWM theory \cite{moiseev_quantum_2006}). Peters et al. inferred the SL phase shift by matching a numerical MWM model to the phase of the released SL fields. This phase variation poses a problem for quantum information schemes that propose to use cross-phase modulation from the SL field to phase shift a target pulse (which we discuss in \Cref{sec:apps}). This phase distortion is, however, not fatal for SL cross-phase modulation with cold atoms because the distortion vanishes for large ODs.
In Ref.~\cite{peters_formation_2012} Peters et al. considered the role of EIT window bandwidth in the formation of SL pulses in ultracold media, showing that HOCs detrimental to SL are formed when the EIT bandwidth is larger than the ensemble Doppler broadening. This condition expresses the ability for mobile atoms to sustain wavelength-scale coherences.
Although HOCs can prove destructive for SL fields, Park et al. showed that the frustrated SL effect could be used to construct a coherent and dynamic beam splitter with a cold vapour of magneto-optically trapped rubidium atoms \cite{park_coherent_2016}. MWM by counterpropagating control fields coherently couples a stored spinwave into two optical modes with power splitting ratio, phase and frequency determined dynamically by the controls. Such an effect would not be possible in a hot medium that cannot sustain HOCs.
\subsubsection{Stationary light in hollow core fibre}
The optically deep cold-atom ensembles discussed so far were realized by using a magneto-optical trap (MOT) to cool and confine a cloud of atomic vapour within a vacuum chamber. In Ref.~\cite{blatt_stationary_2016} Blatt et al. demonstrated SL formation with EIT in an atomic ensemble inside a hollow-core fibre using the scheme of \Cref{fig:schemes}(d) with detunings large enough to suppress HOCs. The fibre was loaded with cold rubidium atoms from a MOT, at a temperature of $1$~mK. SL was witnessed by the suppression of an EIT pulse escaping from the ensemble. Once again, SL could be maintained only when the relative detuning of the two counterpropagating control fields was larger than the EIT transmission bandwidth.
In such hollow-core fibre experiments, the fields propagate along a single optical axis. This limits the capacity to phase-match the counterpropagating fields, which can be done in free-space systems by introducing small angles between the fields (see \Cref{sec:phasematch}). In Ref.~\cite{blatt_stationary_2016} this was compensated with a combination of two-photon detuning $\delta$ and an imbalance between the two control field Rabi frequencies $\Omega_{c\pm}$.
Loading atoms into hollow-core fibres increases the atom-photon interaction cross section. The optical field is confined largely within the fibre core so that the field per photon remains large as the guided mode propagates across the ensemble. This is particularly advantageous for SL cross-phase modulation schemes, which improve not only with high optical depths, but also with the single photon-single atom interaction cross section \cite{hafezi_quantum_2012}.
Fibres also bring additional challenges due to the confined geometry. Firstly, there is the issue of atomic collisions with the walls of the hollow fibre. One approach to mitigate this issue is to coat the inside of the fibre with an anti-relaxation coating that reduces collisional dephasing. Using room temperature atoms this approach has yielded a collisional dephasing rate of $\sim$1~MHz \cite{light_OL_2007}. A dipole trap can also be used to trap the atoms in the centre of the fibre core, away from the walls. The SL demonstration in Ref.~\cite{blatt_stationary_2016} used a red-detuned dipole trap with a depth of $5$~mK, resulting in a collisional dephasing rate of only 50~kHz. A second issue is control field inhomogeneity. In free space experiments, the control fields can be expanded to improve the uniformity of the intensity over the interaction volume. In a fibre this is not possible and radial control field intensity variations impose additional inhomogeneous broadening on the atomic ensemble \cite{blatt_stationary_2016}.
\subsubsection{Imaging the atomic coherence}
\begin{figure*}[th]
\centerline{\includegraphics[width=175mm]{Campbell_EIT.pdf}}
\caption{\small (Adapted from \cite{campbell_direct_2017}). Side-imaging the atomic coherence during EIT SL. (a) Schematic of the apparatus used in Ref.~\cite{campbell_direct_2017} to side-image the atomic coherence of a propagating polariton. An additional imaging beam transversely illuminates the ensemble. (b) Stroboscopic images of shadows cast by an EIT polariton spinwave $\hat{\mathcal{S}}$ propagating through the ensemble. (c--h) Observed evolution of the coherence along the propagation direction compared to MWM simulations. The corresponding pulse scheme is shown at top. The EIT polariton first propagates into the ensemble and is then either stored (`stored light', no control fields) or frozen by EIT SL with either single- or two-colour control fields. (c,d) Balanced EIT SL. (e,f) Stored light followed by EIT SL. (g,h) Unbalanced EIT SL.}
\label{fig:EITSL_imaging}
\end{figure*}
All the experiments discussed so far inferred the existence of EIT SL by observing the temporal shape of the output optical fields, such as the results in \Cref{fig:lin2009} from Ref.~\cite{lin_stationary_2009}. Optical probe pulses were observed to be reflected by the SL bandgap, recalled from the ensemble after being trapped by the SL bandgap, or leaked forwards and backwards from the ensemble during imperfect SL operation. Significant conclusions about the SL process have been reached by comparing the behaviour of these output fields with models. Throughout these experiments, however, the internal dynamics of the atomic ensemble and its associated light fields remained unobserved.
To expose the internal dynamics of an atomic ensemble, side imaging can be employed. In this measurement a broad imaging beam illuminates the entire ensemble from a direction perpendicular to the optical propagation axis and is absorbed selectively by atoms in one of the ground states forming the atomic coherence, $\hat{\mathcal{S}}$. An example of this configuration is shown in \Cref{fig:EITSL_imaging}(a). The ensemble's shadow is imaged onto a camera to reveal the shape of the spinwave. Repeating the side image after various delays stroboscopically reveals the propagation of a polariton through the ensemble via its atomic component. The stroboscopic evolution of an EIT polariton recorded in Ref.~\cite{campbell_direct_2017} is shown in \Cref{fig:EITSL_imaging}(b). The measurements are destructive, so each image represents a new run of the experiment with a fresh ensemble of atoms.
Side imaging had previously been used to observe the propagation of atomic coherences in Bose-Einstein condensates \cite{ginsberg_coherent_2007,zhang_creation_2009} and hot atomic vapours \cite{wilson_slow_2017}. The use of side imaging to expose the dynamics of SL was first demonstrated by Everett et al. \cite{everett_dynamical_2016}. This experiment used Raman SL and showed very different behaviour to the EIT SL considered here, as will be discussed in detail in \Cref{sec:ORSL}.
Side imaging was first applied to EIT SL in Ref.~\cite{campbell_direct_2017}. In this experiment the atomic coherence in an ensemble of cold, magneto-optically trapped atoms was imaged during both slow-light and SL scenarios. This was the first time that a direct comparison could be made between the actual and simulated evolution of an EIT SL polariton within the ensemble. Since there is little prospect of directly measuring the SL field (if it were observed then it could not also be stationary!), the ability to image the spinwave $\hat{\mathcal{S}}$ and compare this with models is the next best thing.
The experiments in Ref.~\cite{campbell_direct_2017} showed the EIT SL polariton diffusing due to the limited optical depth of the ensemble (\Cref{fig:EITSL_imaging}(c,d)), showed the motion of the polariton under unbalanced control fields (\Cref{fig:EITSL_imaging}(g,h)), and compared single-colour (\Cref{fig:schemes}(c)) and two-colour (\Cref{fig:schemes}(d)) SL with a relative detuning of $4$~MHz between the forward and backward propagating components (\Cref{fig:EITSL_imaging}(c--f)). In this experiment there was no measurable difference between the single- and two-colour EIT SL. It was supposed that the pump process that prepared the ensemble also induced sufficient longitudinal atomic motion to wash out the control field standing wave.
\subsubsection{Stationary light with quantum fields}
SL-based gates for photonic quantum information require the trapping of quantum light fields; in many proposals this means generating a SL field with a single photon. To date, however, almost all SL experiments have used classical fields. The first, and so far only, demonstration of a quantum SL field was performed by Park et al. \cite{park_experimental_2018} using single-photon states. They generated a single distributed atomic excitation in a cold $^{87}$Rb MOT by detecting a Stokes photon Raman scattered spontaneously from a detuned `write' pulse. Measuring the Stokes photon heralds the existence of a single-excitation spinwave throughout the ensemble.
Counterpropagating control fields, mutually detuned to prevent the creation of HOCs (as in \Cref{fig:schemes}(d)), transfer the single-excitation spinwave to an anti-Stokes single-photon SL field trapped in the ensemble. In contrast to the coherent SL fields in the previous experiments, which were generated from polariton pulses injected into the ensemble, the single-excitation spinwave envelope is uniform because every atom in the ensemble is equally likely to have scattered the Stokes photon. SL cannot be maintained at the edges of the ensemble, so a portion of the single-photon field escapes during the SL stage.
The anti-Stokes photon is eventually released by a single-directional control field after being held briefly as a SL pulse. The post-SL anti-Stokes photon is non-classically correlated with the herald Stokes photon, as well as being anti-bunched (under some assumptions) \cite{park_experimental_2018}. Such a demonstration of quantum SL fields is a necessary precursor to photonic computation with SL-mediated interactions.
\subsection{Raman stationary light}\label{sec:ORSL}
The EIT-based form of SL from Andr\'e and Lukin's first proposal relies on a local multi-wave interference mechanism between near-resonant fields. The propagation of light is prevented due to interference between counterpropagating light fields generated by multi-wave mixing. In contrast, Raman SL uses an interaction in which the driving fields are far detuned from atomic resonance. This form of SL was proposed and demonstrated by Everett et al. in Ref.~\cite{everett_dynamical_2016}. The absorption length of such far-detuned fields is much larger, and the interference between light reflected after travelling short distances therefore cannot be relied upon to trap the light. Instead, the effect that traps Raman SL is interference of light that is generated at different positions in the memory. For this reason, Raman SL can potentially be sustained over larger distances.
\begin{figure*}[thp]
\centerline{\includegraphics[width=160mm]{Raman_sl_combined_2.pdf}}
\caption{\small (a) Level scheme for Raman SL. Forward (+) and backward (-) probe fields $\hat{\mathcal{E}}_\pm$ are coupled to the atomic spinwave $\hat{S}$ by corresponding control fields $\Omega_\pm$ with large equal and opposite single-photon detunings $\Delta_+ = \Delta$, $\Delta_- = -\Delta$. (b--e) Schematic of Raman SL formation. The initial spinwave consists of two identical separated Gaussian envelopes with equal or opposite phase. Simultaneous pumping by the two-colour control field produces SL. (b, d) The antisymmetric spinwave, $\phi = \pi \implies \int \mathrm{d}z \, \hat{\mathcal{S}} = 0$, is stationary and decays only globally---giving rise to a stable SL field. (c, e) The symmetric spinwave, $\phi = 0 \implies \int \mathrm{d}z \, \hat{\mathcal{S}} \ne 0$, evolves until it reaches a stationary configuration, leaking light fields forward and backward in the process. Figure modified from \cite{everett_dynamical_2016}.}
\label{fig:raman_combined}
\end{figure*}
To generate Raman SL a suitable spinwave is written into the ensemble via some initial probe pulse sequence. The spinwave is subsequently illuminated with counterpropagating control fields that have large, equal and opposite detunings, as shown in \Cref{fig:raman_combined}(a). The forward and backward control fields of Raman SL, like those of two-colour EIT SL in \Cref{fig:schemes}(d), are mutually detuned. In this case, however, the detuning is so large that the interaction bandwidth is much narrower than the mutual detuning $\Delta_+ - \Delta_-$. This prevents the formation of HOCs regardless of the atomic temperature.
Due to the pair of counterpropagating control fields, the spinwave is converted into forward- and backward-propagating probe fields. Stable SL will be generated when the counterpropagating components interfere destructively such that the optical field vanishes at the edges of the ensemble and no field escapes. The spinwaves generated by each probe field travelling in opposite directions within the ensemble also interfere destructively, cancelling out any evolution. The spinwave and probe fields are therefore stationary, and any bright optical standing wave in the probe fields is trapped. The condition for a stable Raman SL configuration is elegant in its simplicity: all that is required is that the integral of the spinwave over the ensemble is zero:
\begin{align}
\int \mathrm{d}z \, \hat{\mathcal{S}} = 0 \,.
\end{align}
Any spinwave that integrates to zero along the length of the memory will evolve only by a global decay rate; we refer to such a spinwave as `stationary'. This Raman SL condition is satisfied by any spinwave with an average amplitude of zero---including spinwaves that are spatially separate from the SL field they generate.
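As a minimal numerical check of this condition (a toy discretization with arbitrary widths and separation, matching the double-Gaussian spinwaves of \Cref{fig:raman_combined}), the antisymmetric pair integrates to zero while the symmetric pair does not:

```python
import numpy as np

z = np.linspace(-10, 10, 4001)
dz = z[1] - z[0]

def double_gaussian(phi, z0=3.0, w=1.0):
    # Two identical Gaussian envelopes at -z0 and +z0 with relative phase phi
    return np.exp(-((z + z0) / w) ** 2) + np.exp(1j * phi) * np.exp(-((z - z0) / w) ** 2)

S_anti = double_gaussian(np.pi)  # opposite phase: stationary under Raman SL
S_symm = double_gaussian(0.0)    # equal phase: evolves and leaks light

print(abs(np.sum(S_anti) * dz))  # ~0: satisfies the stationarity condition
print(abs(np.sum(S_symm) * dz))  # finite (~2*sqrt(pi)*w): does not
```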
A schematic of a simple Raman SL configuration satisfying this condition is shown in \Cref{fig:raman_combined}(b). The spinwave consists of two equal, but separated, Gaussian coherence envelopes with opposite phase. The forward and backward probe fields driven from the spinwave by the counterpropagating control fields interfere destructively at the ensemble edges, but between the two components of the spinwave there exists a stationary optical field, consisting of equal forward and backward fields with opposite phase circulating between the coherences. The spinwave is stationary (up to a global decay rate) and evolves unchanged, as shown in \Cref{fig:raman_combined}(d).
\begin{figure}[b!]
\centerline{\includegraphics[width=80mm]{RamanSL_stack_2.pdf}}
\caption{\small (Adapted from \cite{everett_dynamical_2016}). A comparison of experimental results and modelling for Raman SL. (a) Stationary and (b) initially non-stationary spinwaves are illuminated by counterpropagating control fields at 60~$\mu$s. (i) Experimental imaging and (ii) modelling compare the evolution of the spinwaves, with (iii) a snapshot of both at 64~$\mu$s. (iv) The modelled forward propagating probe field. Intense light emerges from the non-stationary spinwave during its rapid evolution to a stationary spinwave. (v) A snapshot of both modelled probe fields at 64~$\mu$s; the trapped light causes no further evolution of the spinwave.}
\label{fig:raman_images}
\end{figure}
If the initial spinwave instead has $\int \mathrm{d}z \, \hat{\mathcal{S}} \ne 0$, then $\hat{\mathcal{E}}_\pm\left(z=0,L \right) \ne 0$. In this case the probe fields evolve and leak from the ensemble until the spinwave reaches an equilibrium state with zero mean. At equilibrium, $\hat{\mathcal{E}}_\pm$ circulate within the ensemble and the resulting spinwave is distributed such that $\hat{\mathcal{E}}_\pm$ interfere destructively to arrest any further evolution. For example, by flipping the phase of one component of the spinwave in \Cref{fig:raman_combined}(b) such that the relative phase between the two Gaussians is $\phi=0$, we have a spinwave that is unstable under Raman SL. This spinwave, shown in \Cref{fig:raman_combined}(c), evolves over time into the stable spinwave shown in \Cref{fig:raman_combined}(e). A substantial proportion of the coherence may escape in this process.
Raman SL was first characterized in an ensemble of laser-cooled $^{87}$Rb atoms \cite{everett_dynamical_2016}. In this case the initial spinwave was written into the ensemble via a gradient-echo scheme \cite{alexander_photon_2006,hetet_photon_2008, longdell_analytic_2008}. By side-imaging the atomic coherence during the Raman SL process, the authors were able to compare the evolution of the antisymmetric and symmetric spinwaves shown in \Cref{fig:raman_combined}(b,d) and (c,e). The corresponding results are shown in \Cref{fig:raman_images}(a) and (b). The magnitude of the spinwave is shown by the optical depth distribution along the optical propagation axis ($z$). This distribution changes in time as the spinwave first propagates into the ensemble then evolves under SL. Because the antisymmetric, \Cref{fig:raman_images}(a), and symmetric, \Cref{fig:raman_images}(b), spinwaves differ only by a phase, the two OD distributions are initially the same. At $60$~$\mu$s, the Raman SL control fields are activated and the distributions begin to evolve. The symmetric spinwave develops a distinctive central coherence peak that is absent from the evolution of the antisymmetric spinwave. This difference is most evident in \Cref{fig:raman_images}(iii), which compares the coherence distributions of the two cases after they have both reached a SL equilibrium. The shape of the antisymmetric spinwave is essentially stable under Raman SL, as expected.
The spinwave evolution was further confirmed by mapping the coherence back to a travelling probe field after the SL operation. The field recalled from the symmetric spinwave was both weaker and distorted, consistent with the Raman SL model. No appreciable field escaped the ensemble during the SL process with an anti-symmetric spinwave. However, the evolution of the symmetric spinwave under counterpropagating control was accompanied by the detection of brief, intense probe fields escaping in both directions from the unstable Raman SL configuration. These fields persisted until the spinwave had reached the stable configuration in (b-iii).
\Cref{fig:raman_images}(iv) and (v) show the corresponding SL fields $\hat{\mathcal{E}}_\pm$ from simulations. The symmetric case features a considerable SL field at the centre of the ensemble, between the two atomic coherence peaks. This is a distinctive feature of Raman SL. In contrast to the EIT polariton, the spinwave and the stationary probe field need not coincide, and the SL field can be bright in regions where the spinwave is essentially zero. Raman SL grants dynamic control over not only the optical bandgap, but also the SL distribution within the ensemble. This additional capability of Raman SL increases the range of SL-mediated interactions that are possible. We return to this subject in \Cref{sec:apps}.
\section{Theory of stationary light}\label{sec:theo}
Having seen the range of SL experiments to date, we now delve into a theoretical model of the atom-light interaction that gives rise to SL. Along the way, different approximations allow the model to branch into treatments of the EIT and Raman conditions, as well as of the higher order coherences.
The conditions under which a weak probe field is converted into SL may arise when three- or four-level atomic systems are driven by bright, counter-propagating control fields. We begin our analysis by deriving the behaviour for a four-level double-$\Lambda$ system driven by two counter-propagating control fields, as shown in \Cref{fig:schemes}(e).
We assume that the probe fields are weak and propagate in opposite directions. They may be described by the operators \cite{PhysRevA.76.033805}:
\begin{align}
\hat{\textbf{E}}_{p+}(z)=&\boldsymbol{\epsilon_{p+}}\sqrt{\frac{\hbar\omega_{p+}}{4\pi c \epsilon_0A}}\int_{\omega_{p+}}\mkern-18mu \mathrm{d}\omega\left(\hat{a}_\omega e^{i\omega z/c}+\hat{a}^\dag_\omega e^{-i\omega z/c}\right)\nonumber\\
\hat{\textbf{E}}_{p-}(z)=&\boldsymbol{\epsilon_{p-}}\sqrt{\frac{\hbar\omega_{p-}}{4\pi c \epsilon_0A}}\int_{\omega_{p-}}\mkern-18mu \mathrm{d}\omega\left(\hat{a}_\omega e^{-i\omega z/c}+\hat{a}^\dag_\omega e^{i\omega z/c}\right)\nonumber
\end{align}
We treat the bright control fields as classical, with electric fields:
\begin{align}
\textbf{E}_{c+}(z)=&\boldsymbol{\epsilon_{c+}}\mathcal{E}_{c+}(t-z/c)\textrm{cos}[\omega_{c+}(t-z/c)],\nonumber\\
\textbf{E}_{c-}(z)=&\boldsymbol{\epsilon_{c-}}\mathcal{E}_{c-}(t+z/c)\textrm{cos}[\omega_{c-}(t+z/c)].
\end{align}
The subscripts refer to the field and direction of travel: $p$ is for probe; $c$ is for control; $+$ is for forwards and $-$ is for backwards. The unit polarization vectors are given by $\boldsymbol{\epsilon}$ while the slowly varying field envelopes are given by $\mathcal{E}(t-z/c)$. The cross-sectional area of the beam is $A$.
We assume that the weak fields each occupy a small bandwidth around a carrier frequency given by $\omega_{p+}=\omega_{13}+\Delta_+$, $\omega_{p-}=\omega_{14}+\Delta_-$, $\omega_{c+}=\omega_{23}+\Delta_+$, $\omega_{c-}=\omega_{24}+\Delta_-$, where each probe frequency $\omega_{p\pm}$ is offset from the corresponding atomic transition frequency, $\omega_{13}$ or $\omega_{14}$, by an independent detuning $\Delta_\pm$ from the excited states.
The light interacts with an ensemble of $N$ atoms over a length $L$. We can then write the interaction part of the Hamiltonian as
\begin{align}
&\hat{H}_\mathrm{INT}=-\hbar\sum^{N}_{n=1}\Bigg[\Omega_{c+}(t-z_n/c)e^{-i\omega_{c+}(t-z_n/c)}\hat{\sigma}^n_{32} \nonumber \\
&\phantom{\hat{H}_\mathrm{INT}=} +\Omega_{c-}(t+z_n/c)e^{-i\omega_{c-}(t+z_n/c)}\hat{\sigma}^n_{42} \nonumber\\
&+g\left(\frac{L}{2\pi c}\right)^{1/2}\Bigg(\int_{\omega_{p+}} \mathrm{d}\omega\hat{a}_\omega e^{i\omega z/c}\hat{\sigma}^n_{31} \\
& \phantom{+g\left(\frac{L}{2\pi c}\right)^{1/2}\Bigg(}+ \int_{\omega_{p-}}\mathrm{d}\omega\hat{a}_\omega e^{-i\omega z/c}\hat{\sigma}^n_{41}\Bigg)+ \textrm{H.c.}\Bigg], \nonumber
\end{align}
where H.c. is the Hermitian conjugate. The Rabi frequency associated with the forward control field is $\Omega_{c+} = \bra{3}(\mathbf{\hat{d}}_{23}\cdot\boldsymbol{\epsilon_{c+}})\ket{2}\mathcal{E}_{c+}/(2\hbar)$ where $\mathbf{\hat{d}}_{23}$ is the transition dipole operator for the $\ket{2}\rightarrow\ket{3}$ transition, with a similar expression for the backward control field. The coupling rate between the forward probe field and the $\ket{1}\rightarrow\ket{3}$ transition is
\begin{align}
g = \bra{3}(\mathbf{\hat{d}}_{13}\cdot\boldsymbol{\epsilon_{p+}})\ket{1}\sqrt{\frac{\omega_{p+}}{2\hbar c \epsilon_0 A L}} \,,
\end{align}
and, for simplicity, we assume that the coupling rate for the backward propagating probe is identical.
\vspace{2.5cm}
The equations of motion are more useful if we define operators that vary slowly in space and time compared to the optical frequencies. This coarse-graining requires a high atomic density, so that each thin slice of the ensemble contains many atoms. We take a slice $\mathrm{d}z$ of the ensemble containing a large number of atoms $N_z \gg 1$ and define the following operators:
\begingroup
\allowdisplaybreaks
\begin{align}
\hat{\sigma}_{\mu\mu}(z,t)&= \frac{1}{N_z}\sum^{N_z}_n\hat{\sigma}^n_{\mu\mu}(t), \nonumber\\
\hat{\sigma}_{32}(z,t)&=\frac{1}{N_z}\sum^{N_z}_n\hat{\sigma}_{32}^n(t)e^{-i\omega_{c+}(t - z_n/c)}, \nonumber\\
\hat{\sigma}_{42}(z,t)&=\frac{1}{N_z}\sum^{N_z}_n\hat{\sigma}_{42}^n(t)e^{-i\omega_{c-}(t + z_n/c)}, \nonumber\\
\hat{\sigma}_{31}(z,t)&=\frac{1}{N_z}\sum^{N_z}_n\hat{\sigma}_{31}^n(t)e^{-i\omega_{p+}(t- z_n/c)}, \label{eq:slowoperators} \\
\hat{\sigma}_{41}(z,t)&=\frac{1}{N_z}\sum^{N_z}_n\hat{\sigma}_{41}^n(t)e^{-i\omega_{p-}(t+ z_n/c)}, \nonumber\\
\hat{\sigma}_{21\pm}(z,t)&=\frac{1}{N_z}\sum^{N_z}_n\hat{\sigma}_{21}^n(t)e^{-i(\omega_{p\pm}-\omega_{c\pm})(t\mp z_n/c)}, \nonumber\\
\hat{\mathcal{E}}_\pm(z,t)&=\sqrt{\frac{L}{2\pi c}}e^{i\omega_{p\pm}(t\mp z/c)}\int_{\omega_{p\pm}} \mathrm{d}\omega\hat{a}_\omega(t) e^{\pm i\omega z/c}. \nonumber
\end{align}
\endgroup
The multiplication by terms of the form $\exp(-i\omega(t-z/c))$ assigns a separate rotating frame to each operator. The $\pm$ subscripts for $\sigma_{21\pm}$ and $\hat{\mathcal{E}}_\pm$ indicate that the operator is slowly varying with respect to fields travelling in the $\pm z$ direction.
The collective operators have commutators
\begin{align}
\left[\hat{\sigma}_{\mu\nu}(t),\hat{\sigma}_{\alpha\beta}(t)\right]&=\delta_{\nu\alpha}\hat{\sigma}_{\mu\beta}(t) - \delta_{\mu\beta}\hat{\sigma}_{\alpha\nu}(t),\nonumber\\
\left[\hat{\mathcal{E}}_\pm(t),\hat{\mathcal{E}}_\pm^\dag(t)\right]&=1. \nonumber
\end{align}
Substituting the slowly varying operators into $\hat{H}_\mathrm{INT}$ and including the energy terms for the separate light and atomic systems gives the complete Hamiltonian:
\vspace{2cm}
\begin{widetext}
\begin{align}
\hat{H}= &\int \mathrm{d}\omega\,\hbar\omega\hat{a}_\omega^\dag\hat{a}_\omega - \frac{\hbar\omega_{p+}}{L}\int_0^L \mathrm{d}z\,\hat{\mathcal{E}}_+^\dag\hat{\mathcal{E}}_+-\frac{\hbar\omega_{p-}}{L}\int_0^L \mathrm{d}z \, \hat{\mathcal{E}}_-^\dag\hat{\mathcal{E}}_- \label{eq:3levelcphamiltonian}\\&+
\int^L_0\mathrm{d}z\, \hbar \mathcal{N}(z) \times \Bigg[\Delta_+\hat{\sigma}_{33}+\Delta_-\hat{\sigma}_{44}-\Bigg(\Omega_{c+}(t-z/c)\hat\sigma_{32} + \Omega_{c-}(t+z/c)\hat{\sigma}_{42}
+g\Big(\hat{\mathcal{E}}_+\hat{\sigma}_{31}+\hat{\mathcal{E}}_-\hat{\sigma}_{41} \Big)+ \textrm{H.c.}\Bigg)\Bigg]\nonumber
\end{align}
\end{widetext}
\vspace{0cm}
\noindent where $\mathcal{N}(z)$ is the linear atomic density. The $(z,t)$ dependence of the operators is generally omitted for readability.
To obtain compact equations of motion that yield insight into the dynamics, we make three assumptions. The first is that the probe fields are weak enough that almost all the atomic population resides in $\ket{1}$. Known as the pure-state approximation, this allows us to keep track of only the coherences, as $\hat{\sigma}_{11}\approx1$ and $\hat{\sigma}_{22}\approx\hat{\sigma}_{33}\approx\hat{\sigma}_{44}\approx0$. Additionally, we assume that the length of the ensemble $L$ is short enough that the free-space propagation time of light across the ensemble is much shorter than any timescale of interest, $L/c \ll T$. Finally, we assume that the two $\Lambda$ transitions, one formed by the forward propagating fields and the other by the backward propagating fields, are phase-matched $k_{p+}-k_{c+}= k_{p-}-k_{c-}$ and have equal two-photon detunings $\omega_{p+}-\omega_{c+}= \omega_{p-}-\omega_{c-}$. This allows us to write a single slowly-varying $\hat{\sigma}_{21}$ coherence operator rather than separating it into components that each couple to the forward or backward fields. With these assumptions, we find familiar Maxwell-Bloch equations \cite{PhysRevA.76.033805}, with an additional probe field and corresponding excited state coherence all interacting with a single spinwave:
\begin{align}
\partial_t\hat{\sigma}_{13}&= -(\Gamma+ i\Delta_+)\hat{\sigma}_{13}+i g \hat{\mathcal{E}}_+ + i\Omega_{c+}\hat{\sigma}_{12} \nonumber\\
\partial_t\hat{\sigma}_{14}&= -(\Gamma+ i\Delta_-)\hat{\sigma}_{14}+i g \hat{\mathcal{E}}_- + i\Omega_{c-}\hat{\sigma}_{12} \nonumber\\
\partial_t\hat{\sigma}_{12}&= -\gamma\hat{\sigma}_{12} +i\Omega_{c+}^*\hat{\sigma}_{13}+i\Omega_{c-}^*\hat{\sigma}_{14} \nonumber\\
\partial_z\hat{\mathcal{E}}_+ &= \frac{ig\mathcal{N}(z)L}{c}\hat{\sigma}_{13} \nonumber\\
\partial_z\hat{\mathcal{E}}_- &= \frac{ig\mathcal{N}(z)L}{c}\hat{\sigma}_{14}. \label{eq:3levelcp}
\end{align}
The equations can be simplified further by writing them in terms of typical experimental parameters. The optical depth $d=g^2NL/(\Gamma c)$ is a standard parameter and is straightforward to measure experimentally. We substitute it into the equations of motion to remove the atom number $N$ and interaction strength $g$. To avoid having a spatially dependent optical depth, we scale the spatial coordinate to be normalised according to the atomic density $\xi(z) = \int_0^z \mathrm{d}z'\,\mathcal{N}(z')/N$. To replace terms containing the atom number, the coherences are also renormalised: $\hat{S}=\sqrt{N}\hat{\sigma}_{12}$, $\hat{P}_+=\sqrt{N}\hat{\sigma}_{13}$, and $\hat{P}_-=\sqrt{N}\hat{\sigma}_{14}$. The probe fields are also renormalised, $\hat{\mathcal{E}}_\pm \rightarrow \sqrt{c/(L\Gamma)}\hat{\mathcal{E}}_\pm $, giving
\begin{align}
\partial_t\hat{P}_\pm&= -(\Gamma+ i\Delta_\pm)\hat{P}_\pm+i \sqrt{d}\Gamma\hat{\mathcal{E}}_\pm+ i\Omega_{c\pm}\hat{S} \label{eq:3levelcp:1}\\
\partial_t \hat{S} &= -\gamma \hat{S} + i \Omega_{c+}^* \hat{P}_+ + i \Omega_{c-}^* \hat{P}_-\label{eq:3levelcp:2}\\
\pm\partial_\xi\hat{\mathcal{E}}_\pm &= i\sqrt{d}\hat{P}_\pm \label{eq:3levelcp:3}
\end{align}
\subsection{EIT stationary light}
The original proposal for SL used EIT with an additional counterpropagating control field. This form of SL arises from the equations of motion when we set both control fields on resonance: $\Delta_\pm=0$. We can make an adiabatic approximation by assuming that the excited state coherences vary slowly relative to the excited state decay rate, $\partial_t \hat{P}_\pm \ll \Gamma \hat{P}_\pm$.
We then consider the equations in the spatial Fourier domain \cite{zimmer_dark-state_2008,campbell_direct_2017} by applying the transform $X(\xi,t)=\int{\mathrm{d}\kappa \, e^{-i\kappa\xi}\Tilde{X}(\kappa,t)}$ and expand to first order in $\kappa/d$. Combining \Cref{eq:3levelcp:1} and \Cref{eq:3levelcp:3}, we find
\begin{align}
\Tilde{\hat{\mathcal{E}}}_\pm \simeq -\frac{\Omega_{c\pm}}{\sqrt{d}\Gamma}(1\pm i\kappa/d)\Tilde{\hat{S}} .
\end{align}
The quantity $\kappa/d$ describes the spatial variation of the coherence $\hat{S}$ relative to the optical depth. After substituting this, along with \Cref{eq:3levelcp:1}, into \Cref{eq:3levelcp:2}, we transform back into the spatial domain $\xi$ to obtain the approximate equation of motion \cite{campbell_direct_2017}
\begin{align}
\left(\partial_t + \Gamma \tan^2\theta\left(\cos 2\phi\partial_\xi-\frac{1}{d}\partial^2_\xi\right)\right)\hat{S}=0
\end{align}
with the mixing angles
$\tan^2\theta\equiv(|\Omega_{c+}|^2+|\Omega_{c-}|^2)/(d\Gamma^2)$ and $\tan^2\phi\equiv|\Omega_{c-}|^2/|\Omega_{c+}|^2$.
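As a quick sanity check on the drift term $\Gamma\tan^2\theta\cos 2\phi$, the sketch below (illustrative numbers only) evaluates it for unbalanced and balanced control fields; the drift vanishes when $|\Omega_{c+}|=|\Omega_{c-}|$, which is the stationary-light condition.

```python
# Drift velocity of the dark-state polariton (in units of the scaled
# coordinate xi per unit time) for example control amplitudes.
Gamma, d = 1.0, 100.0

def drift(om_plus, om_minus):
    tan2theta = (abs(om_plus) ** 2 + abs(om_minus) ** 2) / (d * Gamma ** 2)
    # cos(2*phi), rewritten from tan^2(phi) = |Om-|^2 / |Om+|^2:
    cos2phi = ((abs(om_plus) ** 2 - abs(om_minus) ** 2)
               / (abs(om_plus) ** 2 + abs(om_minus) ** 2))
    return Gamma * tan2theta * cos2phi

forward_only = drift(2.0, 0.0)   # slow light, drifts forward
balanced = drift(2.0, 2.0)       # stationary light, drift vanishes
```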
The diffusion term $\partial^2_\xi$ arises from the finite absorption length of light in the EIT medium; the two probe fields become unequal where the coherence varies quickly in space, and the interference between them is no longer complete. The pulse matching is therefore imperfect, allowing decay of the polariton. For sufficiently large optical depth, the diffusion can be neglected, allowing us to define a dark state polariton for the system:
\begin{align}
\hat{\Psi}_D = \sin \theta(\hat{\mathcal{E}}_+\cos \phi + \hat{\mathcal{E}}_-\sin \phi) - \hat{S} \cos \theta
\end{align}
This dark state polariton can also be found, as in Zimmer et al. \cite{zimmer_dark-state_2008}, by applying a Morris-Shore transformation to the system. By analogy with the Schr\"odinger equation for a massive particle, the diffusion term identifies a complex effective mass for the dark-state polariton:
\begin{align}
m^* = 2\hbar\left(\frac{d\Gamma}{\Omega}\right)^2\frac{1}{\Delta-i\Gamma}
\end{align}
This mass is relevant in applications of SL for generating non-classical statistics of the polariton.
It is also possible to find a bandgap in the dispersion relation by analysing the equations of motion in the temporal frequency domain \cite{moiseev_stationary_2014}. Frequency domain analysis of SL is particularly important in proposals for quantum gates based on changes in dispersion for light transmitted or reflected from the ensemble \cite{lahad_induced_2017,iakoupov_controlled-phase_2018}.
\subsection{Raman stationary light}
As described in \Cref{sec:ORSL}, Raman SL relies on destructive interference between light emitted from different regions of the ensemble. To obtain a simple equation for this behaviour, we follow Ref.~\cite{everett_dynamical_2016} and take the two counterpropagating probe fields far from resonance. We make the secular approximation from the start and justify this later based on the large difference in detuning required between the probe fields. In the limit of large detuning, where $\partial_t\hat{P}_\pm \ll \Delta_\pm\hat{P}_\pm$,
\begin{align}
\hat{P}_\pm\approx i\left(\sqrt{d}\Gamma\hat{\mathcal{E}}_\pm + \Omega_{c\pm}\hat{S}\right)/\left(\Gamma+i\Delta_\pm\right).\label{eq:ramanadiabaticapprox}
\end{align}
We can ignore the incoherent absorption of the probe fields due to the excited state, but should still consider the dispersion. Substituting \Cref{eq:ramanadiabaticapprox} into \Cref{eq:3levelcp:3}, the probes experience phase rotations $\partial_\xi\hat{\mathcal{E}}_\pm\rightarrow i\Gamma d\hat{\mathcal{E}}_\pm/\Delta_\pm$. Loss from the spinwave due to incoherent absorption of the control field is collected along with spinwave dephasing in $\gamma$, resulting in:
\begin{align}
\partial_t\hat{S}=& -\left(\gamma-i\left(\frac{|\Omega_{c+}|^2}{\Delta_+}+\frac{|\Omega_{c-}|^2}{\Delta_-}\right)\right)\hat{S}\nonumber \\&+i\sqrt{d}\Gamma\left(\frac{\Omega_{c+}^*}{\Delta_+}\hat{\mathcal{E}}_++\frac{\Omega_{c-}^*}{\Delta_-}\hat{\mathcal{E}}_-\right)\label{eq:2levwdisp1}\\
\partial_\xi\hat{\mathcal{E}}_+ =& i\sqrt{d}\frac{\Omega_{c+}}{\Delta_+}\hat{S} + i\frac{\Gamma d}{\Delta_+}\hat{\mathcal{E}}_+ \label{eq:2levwdisp2}\\
\partial_\xi\hat{\mathcal{E}}_- =&- i\sqrt{d}\frac{\Omega_{c-}}{\Delta_-}\hat{S} - i\frac{\Gamma d}{\Delta_-}\hat{\mathcal{E}}_-.\label{eq:2levwdisp3}
\end{align}
\subsubsection*{The Raman stationary light equation \label{sec:sleqs}}
As with EIT SL, we set the control-field amplitudes equal, $\Omega_{c+}=\Omega_{c-}=\Omega$. To achieve the interference effect that produces Raman SL, it is necessary to ensure that light generated at one location interferes with a uniform phase throughout the ensemble. Where the dispersion terms above are non-negligible, this can be satisfied by setting equal and opposite detunings, $\Delta_+=-\Delta_-=\Delta$. The dispersion term $i\Gamma d/\Delta$ is then equal for both probe fields, and can be removed by transforming to the rotating spatial frame:
\begin{align}
\hat{\mathcal{E}}_\pm\rightarrow\hat{\mathcal{E}}_\pm e^{i\frac{\Gamma d}{\Delta} \xi},
\end{align}
giving
\begin{align}\label{eq:cptwoleveleqns3}
\partial_t\hat{S}(t,\xi)&=i\sqrt{d}\,\frac{\Gamma \Omega}{\Delta}\left(\hat{\mathcal{E}}_++\hat{\mathcal{E}}_-\right)-\gamma\hat{S}\\\label{eq:cptwoleveleqns4}
\partial_\xi\hat{\mathcal{E}}_+(t,\xi)&= i\sqrt{d}\frac{\Omega}{\Delta}\hat{S}\\
\partial_\xi\hat{\mathcal{E}}_-(t,\xi)&= - i\sqrt{d}\frac{\Omega}{\Delta}\hat{S}\label{eq:cptwoleveleqns5}.
\end{align}
Solutions for the probe fields are found by integrating the spinwave:
\begin{align}
\hat{\mathcal{E}}_+(t,\xi)&=i\sqrt{d}\frac{\Omega}{\Delta}\int_0^\xi\hat{S}(t,\xi') \, \mathrm{d} \xi' \label{eq:forwardprobeintegral}\\
\hat{\mathcal{E}}_-(t,\xi)&=-i\sqrt{d}\frac{\Omega}{\Delta}\int_1^\xi\hat{S}(t,\xi')\,\mathrm{d}\xi'.\label{eq:backwardprobeintegral}
\end{align}
The SL equation is then obtained by inserting these solutions into \Cref{eq:cptwoleveleqns3}:
\begin{align}
\left( \partial_t + \gamma\right) \hat{S}(t,\xi)
= -d\,\Gamma\frac{\Omega^2}{\Delta^2}\int_0^1{\hat{S}(t,\xi') \, \mathrm{d}\xi'} \,.\label{eq:sleq}
\end{align}
Apart from uniform decay, a spinwave that integrates to zero along the length of the ensemble will not evolve.
This equation can also be used to describe the evolution of spinwaves that do not satisfy $\int\hat{S} \, \mathrm{d}\xi =0$. Such a spinwave can be separated into a spatially constant component and a component that integrates to zero. In other words, it can be written as the sum of a stationary and a non-stationary part, $\hat{S}(\xi,t)=\hat{S}_\xi(\xi) + \hat{S}_t(t)$, where $\int_0^1\hat{S}_\xi(\xi') \, \mathrm{d}\xi'=0$. Applying the SL equation then gives
\begin{align}
(\partial_t+\gamma)\hat{S}(t,\xi)&=-d\frac{\Gamma\Omega^2}{\Delta^2}\int_0^1\hat{S}(t,\xi')\, \mathrm{d}\xi'\\
& =-d\frac{\Gamma\Omega^2}{\Delta^2}\hat{S}_t(t)
\end{align}
By linearity, the stationary component will remain constant and the non-stationary component will decay exponentially:
\begin{align}
\hat{S}(\xi,t)=[\hat{S}_{\xi}(\xi) + \hat{S}_{t_0}e^{-t(\frac{d\Gamma\Omega^2}{\Delta^2})}]e^{-\gamma t}
\end{align}
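A direct numerical integration of the SL equation reproduces this closed-form behaviour. The sketch below (illustrative parameters) evolves a spinwave composed of a zero-integral part and a constant offset, and compares the result with the analytic solution above.

```python
import numpy as np

# Euler integration of the Raman SL equation, checked against the
# closed-form solution.  Parameters are illustrative.
Nx, Nt, dt = 401, 2000, 1e-3
xi = np.linspace(0.0, 1.0, Nx)
dxi = xi[1] - xi[0]
gamma = 0.05
rate = 1.0                       # d * Gamma * Omega^2 / Delta^2

def integral(f):                 # trapezoidal rule on the uniform grid
    return (0.5 * f[0] + f[1:-1].sum() + 0.5 * f[-1]) * dxi

S_xi = np.sin(2 * np.pi * xi)    # stationary part: integrates to zero
S_t0 = 0.5                       # spatially constant, non-stationary part
S = S_xi + S_t0

for _ in range(Nt):
    S = S + dt * (-gamma * S - rate * integral(S))

T = Nt * dt
S_exact = (S_xi + S_t0 * np.exp(-rate * T)) * np.exp(-gamma * T)
err = np.max(np.abs(S - S_exact))
```

The residual `err` is limited only by the Euler step size: the zero-integral part decays at $\gamma$ alone, while the constant offset decays at the additional rate $d\Gamma\Omega^2/\Delta^2$.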
\subsection{Higher order coherences}
We have so far considered the secular case, in which the two probe fields address entirely separate excited states, or have implicitly made a secular approximation. In \Cref{sec:hocs} we discussed higher order coherences (HOCs), and now we provide some mathematical tools for modelling this phenomenon.
In the non-secular case, the same excited state is addressed by the counterpropagating fields, allowing additional couplings between the fields. A high momentum state in the atomic coherence, or HOC, is generated when a probe or control photon is absorbed and re-emitted into the control field travelling in the opposite direction. The short wavelength causes HOCs to decay more quickly when the atoms are in motion, and the momentum mismatch means they do not couple equally to both probe fields, affecting the SL behaviour.
Due to their shorter wavelengths, the HOCs are not described by the same slowly varying operators. Instead, additional operators can be used to describe a spinwave with HOC components. For example, Wu et al. \cite{wu_stationary_2010} write the spinwave as a sum of operators,
\begin{align}
\hat{\sigma}_{12}=\sum_{n=-\infty}^\infty \hat{\sigma}_{12}^{(2n)}e^{in[(\omega_{c+}+\omega_{c-})z/c+(\omega_{c+}-\omega_{c-})t]}
\end{align}
where each additional term is generated by absorbing a control photon travelling in one direction and emitting it in the other.
The excited-state coherences that couple the spinwave terms to one another are then
\begin{widetext}
\begin{align}
\hat{\sigma}_{13}&=e^{-i\Delta_+t+i\omega_{c+}z/c}\sum_{n=0}^\infty \hat{\sigma}_{13}^{(2n+1)}e^{in[(\omega_{c+}+\omega_{c-})z/c+(\omega_{c+}-\omega_{c-})t]}\\&+e^{-i\Delta_-t-i\omega_{c-}z/c}\sum_{n=0}^{-\infty} \hat{\sigma}_{13}^{(2n-1)}e^{in[(\omega_{c+}+\omega_{c-})z/c+(\omega_{c+}-\omega_{c-})t]}.
\end{align}
In the case of a standing wave control field $\omega_{c+}=\omega_{c-}$ the coupling between the spinwaves depends simply on the coupling of each control field with the relevant spinwave, giving equations of motion
\begin{align}
\partial_t\hat{\sigma}_{13}^{(\pm1)} &= -(\Gamma-i\Delta)\hat{\sigma}_{13}^{(\pm1)}+i\sqrt{d}\Gamma\hat{\mathcal{E}}_\pm+i\Omega_{c\pm}\hat{\sigma}_{12}^{(0)}+i\Omega_{c\mp}\hat{\sigma}_{12}^{(\pm2)}\\
\partial_t\hat{\sigma}_{13}^{(\pm(2n-1))} &= -(\Gamma_n-i\Delta)\hat{\sigma}_{13}^{(\pm(2n-1))}+i\Omega_{c\pm}\hat{\sigma}_{12}^{(\pm(2n-2))}+i\Omega_{c\mp}\hat{\sigma}_{12}^{(\pm2n)}\\
\partial_t\hat{\sigma}_{12}^{(0)} &=-\gamma\hat{\sigma}_{12}^{(0)} + i\Omega^*_{c+}\hat{\sigma}_{13}^{(+1)} + i\Omega^*_{c-}\hat{\sigma}_{13}^{(-1)}\\
\partial_t\hat{\sigma}_{12}^{(\pm2n)} &= -\gamma_{n}\hat{\sigma}_{12}^{(\pm2n)} + i\Omega^*_{c+}\hat{\sigma}_{13}^{(2n+1)} + i\Omega^*_{c-}\hat{\sigma}_{13}^{(2n-1)}
\end{align}
\end{widetext}
This involves coupling along an infinite ladder of coherences, but the equations can be solved by truncating at a suitable order. Except for completely stationary atoms, the motional decay $\gamma_n$ of the higher order spinwaves is fast enough that only a few terms need be considered.
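The truncated ladder is straightforward to solve numerically. The sketch below finds the steady state for a weak, fixed probe drive on the first-order coherences, using an illustrative model in which the motional dephasing grows quadratically with spinwave order; the decay model and all parameter values are assumptions for illustration, not taken from the cited works. The spinwave amplitude falls rapidly with order, justifying truncation.

```python
import numpy as np

# Steady state of the HOC ladder, truncated at |order| <= K, for a
# standing-wave control (Om_c+ = Om_c-).  Illustrative parameters;
# the quadratic motional dephasing is an assumed model.
K = 15
Gamma, Delta, Om, gamma = 1.0, 0.0, 0.5, 1e-3
gamma_motion = 1.0
idx = np.arange(-K, K + 1)
N = len(idx)
A = np.zeros((N, N), complex)
b = np.zeros(N, complex)
for j, k in enumerate(idx):
    if k % 2:    # odd order: excited-state coherence sigma_13^(k)
        A[j, j] = -(Gamma - 1j * Delta)
    else:        # even order: spinwave sigma_12^(k)
        A[j, j] = -(gamma + gamma_motion * (k // 2) ** 2)
    if j > 0:
        A[j, j - 1] = 1j * Om      # nearest-neighbour ladder coupling
    if j < N - 1:
        A[j, j + 1] = 1j * Om
b[idx == 1] = 1j * 0.01            # weak probe drive on sigma_13^(+1)
b[idx == -1] = 1j * 0.01
x = np.linalg.solve(A, -b)         # 0 = A x + b
spinwave_amp = {k: abs(x[j]) for j, k in enumerate(idx) if k % 2 == 0}
```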
We have assumed no spatial ordering of the atoms thus far. In the case of standing-wave control fields interacting with spatially ordered ultracold or stationary emitters, the dispersion relation becomes more complicated. The analytic results in Ref.~\cite{iakoupov_dispersion_2016} are beyond the scope of this review, but that work is also of interest for SL with disordered atoms, as it contains an independent derivation of the dispersion relations for both cases.
\subsection{Phase matching and transverse propagation}\label{sec:phasematch}
Phase matching is an important concept for any light-matter interaction where additional optical frequencies are generated or directions of propagation change. Ideal phase matching means that all the light generated in a spatially extended interaction interferes constructively with the propagating field. Phase matching is critical for generating both EIT SL and Raman SL fields, but was taken for granted in our earlier derivation of the SL equations.
Mathematically, phase matching for stationary light means orchestrating the properties of the optical fields so that the spinwave operators in \Cref{eq:slowoperators} are equal, i.e. $\hat{\sigma}_{21+}=\hat{\sigma}_{21-}$. This condition is satisfied naturally with probe and control pairs of equal frequency. Since this is not possible in many of the SL schemes we have discussed, it is necessary to match the phases by introducing an additional degree of freedom, namely by allowing for control fields that are not exactly parallel to their corresponding probe fields. We can write the spinwave operators with the longitudinal ($z$) component of the field momenta to give a phase factor
\begin{align}
e^{-i[(\omega_{p\pm}-\omega_{c\pm})t+(k_{pz\pm}-k_{cz\pm})z]}.
\end{align}
Choosing a level scheme in which the control fields have larger momentum than the probe fields allows $k_{pz+}-k_{cz+}=k_{pz-}-k_{cz-}=0$ to be satisfied by introducing a small angle between each probe and its control field. This is illustrated, for example, in \Cref{fig:EITSL_imaging}(a), which shows the phase matching scheme used in Ref.~\cite{campbell_direct_2017}.
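The required tilt is elementary to compute: with the probe along $z$ and the control tilted by $\theta_c$, the longitudinal mismatch vanishes when $k_c\cos\theta_c = k_p$, which requires $k_c > k_p$. The sketch below uses hypothetical placeholder wavelengths.

```python
import numpy as np

# Control-field tilt that zeroes the longitudinal momentum mismatch
# k_pz - k_cz.  The wavelengths are hypothetical placeholders chosen
# only so that the control wavenumber exceeds the probe's.
lam_p, lam_c = 795e-9, 780e-9            # probe, control wavelengths (m)
k_p, k_c = 2 * np.pi / lam_p, 2 * np.pi / lam_c
theta_c = np.arccos(k_p / k_c)           # control tilt, probe along z
mismatch = k_p - k_c * np.cos(theta_c)   # residual longitudinal mismatch
angle_deg = np.degrees(theta_c)
```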
Under EIT conditions, applying control fields that do not satisfy phase matching can cause significant loss, as the interference causing the transparency breaks down. This would be detrimental to EIT SL. This loss mechanism does not exist in Raman SL due to the fields being far detuned from resonance. In this case, the interference condition that allows Raman SL can only hold for all points in space with proper phase matching of the two counterpropagating probe fields generated from the spinwave. Without this, the Raman SL condition will not be satisfied leading to spinwave decay as the probe field leaks from the atomic ensemble.
The theory collected here considers only plane wave propagation. This is acceptable for situations where the beam sizes are large enough, but in any other system the transverse mode could be relevant. Andr{\'e} et al. \cite{andre_nonlinear_2005} point out that for EIT SL in free-space ensembles, the intensity profiles of the control fields effectively produce a waveguide due to the spatially-varying refractive index. It was also seen in the results of the hollow-core experiments \cite{blatt_stationary_2016} that the transverse mode of the control fields gave rise to extra inhomogeneous broadening. The impact of the spatial mode on Raman SL has not yet been investigated in any detail.
\section{Applications of stationary light}\label{sec:apps}
The physics of SL is rich with complexity. The ability to engineer dispersion in these systems allows the simulation of numerous quantum phenomena such as Cooper pairing with photons, spin-charge separation and relativistic quantum field theories. The review by Noh and Angelakis~\cite{noh_quantum_2017} summarizes recent developments in this area. The primary technological application for SL is in the development of nonlinear phase gates in optical quantum information processing. This will be the focus of our attention in this section.
\subsection{Optical gates}
Deterministic quantum computing schemes require a nonlinear element in order to realise a universal set of operations. Ideally, one would like to build a \emph{cnot} gate where the presence of a single control photon will invert the phase of a target photon \cite{chuang_simple_1995,OBrien1567}. This corresponds to a cross phase modulation so strong as to give a $\pi$ phase shift at the single photon level. This feat has recently been achieved using a single atom in an optical resonator \cite{Hacker:wr} and an ensemble of Rydberg atoms \cite{Tiarks:2018tl}. Theoretical work has suggested a way around needing a $\pi$ phase shift \cite{Munro:2005hk}. In this scheme, phase shifts on the order of milliradians could be used to build deterministic optical gates using a combination of single photon and coherent states, although even milliradian single-photon phase shifts remain challenging.
SL is a means of enhancing the available phase shifts simply by increasing the available interaction time. This could enable the construction of nonlinear phase gates in systems that would otherwise be too weakly interacting for use in any computation. SL has the further promise of modifying the propagation of light during the nonlinear interaction to avoid parasitic effects that limit the fidelity of the operation.
Andr\'e et al. \cite{andre_nonlinear_2005} proposed storing a weak pulse in an atomic ensemble and passing a second pulse across it in a quasi-SL configuration with unbalanced counterpropagating control fields. Friedler et al. \cite{friedler_deterministic_2005} proposed a similar scheme in which a slow light pulse travels through a SL pulse that modifies the two-photon detuning of the slow light pulse via the AC Stark shift, producing a phase shift. A typical level scheme for these proposals is shown in \Cref{fig:combinedgates}(a). Both schemes were calculated to produce conditional $\pi$ phase shifts between single photons under currently accessible experimental conditions.
\begin{figure*}[t!]
\includegraphics[width=175mm]{combinedgates.pdf}
\caption{ \small (a) Level scheme for cross nonlinearities with SL. One or both SL fields interact with a second level scheme, experiencing dispersion and/or modulating the energy level of the additional transition. (b) Arrangement of spinwave and fields allowing a controlled-phase gate to be implemented with Raman SL. (c) Lahad and Firstenberg \cite{lahad_induced_2017} compare the transmission through the SL medium within a ring cavity to transmission through a Fabry-Perot cavity.}
\label{fig:combinedgates}
\end{figure*}
An in-principle demonstration of such a scheme was performed by Chen et al. \cite{chen_demonstration_2012}. A SL pulse was used to erase a weak stored pulse, rather than to imprint a phase shift, by a resonant interaction with atoms in the stored pulse spinwave. This showed that usefully large phase shifts, of up to 10~mrad, could be achieved with this type of interaction.
After these first proposals, enhanced Kerr nonlinearities in EIT, including EIT SL, were the subject of several papers arguing that producing large phase shifts with such schemes was either impossible or extremely impractical \cite{shapiro_single-photon_2006,gea-banacloche_impossibility_2010,he_transverse_2011,he_continuous-mode_2012}. The diverse nature of the proposals makes the no-go theorems difficult to generalise, but a recent work by Viswanathan and Gea-Banacloche \cite{viswanathan_analytical_2018} summarises the obstacles as follows. There are two mechanisms which reduce the fidelity: phase noise arising from a finite response time of the medium; and entanglement arising from the phase-shifting interaction, in which photons are destroyed and recreated subject only to energy conservation. A theme of the no-go papers is that these mechanisms cannot be eliminated simultaneously while generating a large phase shift.
Recently, schemes for avoiding these pitfalls have begun to emerge, with proposals based on SL taking advantage of the radically different propagation of light compared with EIT schemes. Iakoupov et al. \cite{iakoupov_controlled-phase_2018} and Lahad and Firstenberg \cite{lahad_induced_2017} proposed sending light at frequencies outside the bandgap, within a transmission peak as shown in \Cref{fig:bandgap}. These transmission resonances exist wherever the polariton forms a standing wave inside the ensemble \cite{iakoupov_controlled-phase_2018}, and light at the far end of the ensemble then interferes constructively. The propagation of light resembles the reflection of light within an optical cavity, lending the term \textit{induced cavity}. The multiple reflections of the light travelling through the ensemble change the character of the phase-shifting interaction and allow a high-fidelity, large phase shift. Lahad and Firstenberg make an explicit comparison to a cavity, as shown in \Cref{fig:combinedgates}(c).
Murray and Pohl proposed a slightly different approach based on a Rydberg interaction. A probe photon is incident on an ensemble illuminated by counterpropagating control fields. An additional coupling between a forward propagating probe and a Rydberg level prevents the forward propagating probe light from coupling to backward travelling probe light and vice versa. The probe photon is transmitted under EIT conditions. A stored `gate' photon shifts the Rydberg level for nearby atoms, interrupting the additional coupling and restoring SL conditions. The probe field then experiences a bandgap and is reflected \cite{murray_coherent_2017}.
Everett et al.~\cite{everett_dynamical_2016} proposed that a gate based on Raman SL, shown in \Cref{fig:combinedgates}(a) and (b), would also overcome obstacles to high-fidelity gates. SL is generated by a spinwave that is spatially separated from the nonlinear interaction of that light with a target state. The circulation of light from the spinwave through the interaction region is equivalent to routing light through an interaction region many times, for example by using a cavity. The combination of spatial separation and repeated weak interaction is proposed to escape the entanglement and phase noise problems.
These recent proposals all include the use of reflection to change the character of the nonlinear interaction. More work needs to be done to understand how and to what extent the no-go theorems are addressed by these schemes.
\section{Conclusion and outlook}
The ability to make states of SL with a group velocity of zero is not only inherently fascinating, it is also a useful technique with numerous applications. Experiments have now been carried out using hot and cold atomic vapours, as well as in hollow-cored fibres. The early understanding of SL arising from standing waves of the control field was shown to be incomplete and demonstrations of SL without standing waves have solidified our understanding of the phenomenon via a multi-wave mixing model.
In this review, our goal was to put all the demonstrations to date in context with a unifying theoretical model that shows how SL based on EIT and Raman interactions can be understood, as well as the behaviour of these schemes in hot and cold atomic systems where higher-order coherences may play a role. In the future we look forward to further application and demonstrations of SL for quantum simulations and development of quantum information systems.
\subsection{Acknowledgments}
This work was supported by the Australian Research Council Centre of Excellence Grant No. CE170100012.
\bibliographystyle{ieeetr}
\section{Introduction} \label{sec:Intro}
\def\theequation{1.\arabic{equation}}
\setcounter{equation}{0}
Equivalence tests are nowadays frequently used in drug development to assess similarity of a test and a reference treatment
at a controlled type I error. They are very popular in regulatory settings because they reverse the burden of proof compared
to a standard test of significance. They thereby avoid the fallacy of interpreting a failure to reject a null hypothesis of no difference
as a decision in favor of that hypothesis.
Typically equivalence testing is based on a null hypothesis that a scalar parameter of interest,
such as the effect difference
of two treatments, is outside an equivalence region defined through an appropriate choice of an interval
depending on the metric of equivalence being used. Thus rejecting the null hypothesis means
to decide at a controlled type I error that the parameter of interest is in the postulated equivalence region.
We refer to the monograph of \cite{wellek2010testing}
for an overview of the currently available methodology on testing the equivalence of finite dimensional parameters.
On the other hand there are many applications, where the similarity
between two populations cannot be appropriately described by a parameter of finite dimension. One obvious situation occurs if treatments involving covariates have to be compared and one is interested in the similarity
of the relations between the measured endpoints and the covariates in the two groups. Statistically speaking,
this corresponds to the problem of establishing the similarity between two regression models, and in the last decade considerable efforts
have been made to develop methodology to solve this problem.
\cite{liubrehaywynn2009} proposed tests for the hypothesis of equivalence of two linear regression models,
while \cite{gsteiger2011} developed a bootstrap approach using a confidence band for the difference of two non-linear models.
These methods are based on
the intersection-union principle \citep[see, for example,][]{berger1982} which is used to construct an overall test for equivalence.
In a recent paper \cite{detmolvolbre2015} showed that equivalence tests based on the intersection-union principle
lead to rather conservative decision procedures with low power. As a very powerful alternative they proposed bootstrap
tests based on estimates of the maximal deviation between the two curves corresponding to the different treatments.
\cite{moedetkotvolcol2019} demonstrated the superiority of the maximum deviation
approach for the comparison of dissolution profiles of two different formulations
\citep[see][for some alternative equivalence tests based
on similarity factors]{paixao2017,yoshida2017}. In all these papers, data is finite dimensional and the curves to be compared
are defined by parametric regression models with finite dimensional parameters.
Moreover, in the information age, data is often recorded sequentially over time at high resolution and in such instances it is reasonable
to model data as functions because the densely sampled observations exhibit certain degrees of dependence and smoothness. As a consequence
corresponding parameters such as mean or variance are varying over time and have to be considered as functions
as well. The corresponding
field in statistics is called functional data analysis and the current state of the art in analyzing functional data is well documented in the monographs
by \cite{RamsaySilverman2005}, \cite{FerratyVieu2010}, \cite{HorvathKokoskza2012}, and \cite{hsingeubank2015}. Although numerous
statistical concepts such as the comparison of mean functions, covariance operators, principal components, and change point analysis have been
considered and developed for functional data, the problem of establishing the practical equivalence of two parameters (more precisely, parameter functions)
for functional data has not received much attention in the literature.
In a recent paper \cite{fogarty2014} developed methodology for establishing the equivalence
between the mean and variance functions from two populations. Their work is motivated by a comparison study of devices for assessing pulmonary function
and extends the popular Two One-Sided Testing (TOST) procedure for equivalence testing of scalars \citep[see][among others]{schuirmann1987,phillips1990} to the
functional regime.
By the duality between hypotheses testing and confidence intervals their approach is equivalent to the construction of a lower and an upper (pointwise) confidence band
for the difference of the two parameters.
The test then decides for equivalence if the functions $\kappa_{l}(\cdot )$ and $\kappa_{u}(\cdot )$ defining the lower and upper equivalence region for the difference of the two functional parameters
are outside of the upper and lower confidence band. Thus their method is similar in spirit to the work of \cite{liubrehaywynn2009} and \cite{gsteiger2011} for the comparison of parametric regression models and therefore
expected to be rather conservative.
A similar comment applies to equivalence tests that can be constructed in the same way using simultaneous confidence bands as
developed in \cite{dette2018} and \cite{lieblreim2019}.
The purpose of this paper is to develop more efficient procedures to establish equivalence of parameters
for the two sample problem in functional data analysis. Our approach is based on an estimate of the maximum deviation between
parameter functions (such as the difference of the mean functions or the ratio of the variance functions) and we propose to decide for similarity if the estimated distance is small. In Section \ref{sec2} we introduce the basic model and
review the method of \cite{fogarty2014}. Section~\ref{sec3} is devoted to the construction of a more powerful test for the equivalence of functional parameters,
where we concentrate on the mean functions for the sake of brevity. In particular a bootstrap test is developed and its consistency is proved. We also provide a generalization to dependent data and illustrate the superiority of the new test in a small example. In Section \ref{sec4} we
demonstrate the general applicability of our approach and develop methodology for a functional random effect model as considered by \cite{fogarty2014}.
We also demonstrate by means of a simulation study that the new tests introduced in this paper are more powerful than the currently available methodology.
Finally, all proofs are given in an appendix as they are technically demanding and involve functional data analysis for Banach space valued random variables.
\section{Formulation of the problem and state of the art} \label{sec2}
\def\theequation{2.\arabic{equation}}
\setcounter{equation}{0}
In this section we state the problem and briefly revisit the approach proposed in \cite{fogarty2014}. To
be precise, let $X_{11}(\cdot),\ldots,X_{1m}(\cdot)$ and $X_{21}(\cdot),\ldots,X_{2n}(\cdot)$ denote two independent samples of functional data, which are observed on the interval $[0,1]$. We denote the mean functions by $\mu_1(\cdot)$ and $\mu_2(\cdot)$ and the variance functions by $\sigma^2_1(\cdot)$ and $\sigma^2_2(\cdot)$, respectively (assuming their existence - see Section \ref{sec6} for the necessary assumptions).
We define $\theta (\cdot) = \mu_1(\cdot) - \mu_2(\cdot)$ and $\lambda (\cdot) = \frac {\sigma^2_1(\cdot)}{\sigma^2_2(\cdot)}$ as measures of similarity
between the mean and variance functions, respectively, and consider the hypotheses
\begin{align}
\label{H0mean}
\begin{split}
H^\theta_0 : & ~\exists \ t \in [0,1] \mbox { such that } \theta(t) \notin (\kappa_l(t), \kappa_u(t)) \\
H^\theta_1 : & ~\forall \ t \in [0,1]: \theta(t) \in (\kappa_l(t), \kappa_u(t))
\end{split}
\end{align}
and
\begin{align}
\label{H0var}
\begin{split}
H^\lambda_0 : & ~\exists \ t \in [0,1] \mbox { such that } \lambda(t) \notin (\zeta_l(t), \zeta_u(t)) \\
H^\lambda_1 : & ~ \forall \ t \in [0,1]: \lambda(t) \in (\zeta_l(t), \zeta_u(t)) \, .
\end{split}
\end{align}
Here $\kappa_l, \kappa_u, \zeta_l, \zeta_u$ are given functions on the interval $[0,1]$, which define the region of equivalence. These bands have to be developed in cooperation with the experts from the field of application. Usually the band defined by the functions $\kappa_l $ and $ \kappa_u$ contains the constant function
$0$ (as one wants to demonstrate the similarity of the functions $\mu_1$ and $\mu_2$) and the band defined by the functions
$\zeta_l$ and $\zeta_u$ contains the constant function $1$.
Note that the rejection of the null hypothesis in \eqref{H0mean} means to decide (at a controlled type I error) that the difference of the mean functions is contained
in the band defined by the functions $\kappa_l$ and $ \kappa_u$ and a similar comment applies to the rejection of
the null hypothesis in \eqref{H0var}.
In the following, we concentrate on the mean functions to describe the currently available methodology.
\cite{fogarty2014} combined the intersection-union principle with equivalence testing of scalar parameters to develop tests for the hypotheses \eqref{H0mean}. More precisely, they
proposed to test for equivalence in location at each $ t \in [0,1]$ and to reject the null hypothesis in \eqref{H0mean} if all individual tests yield a rejection. For the construction of
the individual tests they used a bootstrap version of the Two One-Sided Testing (TOST) principle as introduced by \cite{schuirmann1987}.
To be precise, if $\hat \theta_{m,n} (t) = \overline{X}_{1 \cdot}(t) - \overline{X}_{2 \cdot}(t)$ is the common estimate of the mean difference at time $t \in [0,1]$ and
\begin{align*}
\overline{C} _{1-\alpha,\theta} (t) &= \big[ 2 \hat\theta_{m,n}(t)
- q_{1-\alpha} (\hat\theta^*_{m,n}(t)), \infty \big) \\
\underline{C}_{1-\alpha,\theta} (t) &= \big (- \infty, 2 \hat \theta_{m,n}(t)
- q_{ \alpha}(\hat \theta^*_{m,n}(t)) \big ]
\end{align*}
are bias corrected percentile-based bootstrap one-sided confidence intervals, then the individual null hypothesis $H_{0,t}^\theta :\theta(t) \notin (\kappa_l (t), \kappa_u(t))$ is rejected in favor of
$H_{1,t}^\theta: \theta(t) \in (\kappa_l(t), \kappa_u(t))$ if $\kappa_l(t) \notin \overline{C}_{1-\alpha,\theta}(t)$
{\bf and} $\kappa_u(t) \notin \underline{C}_{1-\alpha,\theta}(t)$, or equivalently
\begin{equation}\label{hd3}
\kappa_l(t) < 2 \hat \theta_{m,n}(t) - q_{1-\alpha}(\hat \theta^*_{m,n}(t))
\leq 2 \hat \theta_{m,n}(t) - q_\alpha (\hat \theta^*_{m,n}(t))
< \kappa_u(t) \, .
\end{equation}
\smallskip
\begin{rem} \label{rem1}~
{\rm
\begin{itemize}
\item[(a)]
It is worthwhile to mention that the concept described here can be used with any type of one-sided confidence intervals.
For example, it follows from the proofs of the results in Section~\ref{sec3} (see Section \ref{sec6} for more details) that
$\sqrt{m+n} ( \hat \theta_{m,n}(t) - \theta(t) ) $ (for fixed $t$) is asymptotically normally distributed with variance $\sigma^{2} (t) = \frac{1}{\tau }\sigma^2_1(t) + \frac {1}{1-\tau }\sigma^2_2(t)$,
where $\tau = \lim_{m,n \to \infty} m/(m+n)$. Therefore one could use the asymptotic $(1-\alpha)$-confidence intervals
\begin{align*}
\overline{C}_{1-\alpha,\theta}^{a} (\theta(t))
&= \Big [\hat \theta_{m,n}(t) - u_{1-\alpha} \frac{\hat \sigma_{m,n} (t)}{{\sqrt{m+n}}} \, , \infty \Big) \\
\underline{C}_{1-\alpha,\theta}^{a} (\theta(t) )
&= \Big (- \infty, \hat \theta_{m,n}(t)-u_{\alpha} \frac{\hat \sigma_{m,n} (t)}{{\sqrt{m+n}}} \, \Big]
\end{align*}
to derive an analogue of the decision rule \eqref{hd3}, where $\hat \sigma_{m,n}^{2} (t)$ is an appropriate estimate of the asymptotic variance $\sigma^{2} (t) $ at the point $ t \in [0,1]$ and $u_\alpha$ denotes the $\alpha$-quantile of the standard normal distribution.
\item[(b)] Besides the frequentist test described in the previous paragraph \cite{fogarty2014} also proposed a test within the Bayesian
paradigm using Gaussian Processes for modelling the data. Because the focus of this paper is on nonparametric procedures we do not consider this test here.
\end{itemize}
}
\end{rem}
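The pointwise decision rule \eqref{hd3}, combined over a grid of time points via the intersection-union principle, can be sketched in a few lines of code. The following is a minimal numpy illustration (not the implementation of \cite{fogarty2014}), assuming the curves are observed on a common equidistant grid, so that the decision at each $t\in[0,1]$ is approximated by a decision at each grid point; the bootstrap sample size \texttt{R} and the quantile convention are our own choices.

```python
import numpy as np

def tost_intersection_union(X1, X2, kappa_l, kappa_u, alpha=0.05, R=500, rng=None):
    """Pointwise bootstrap TOST, combined via the intersection-union principle.

    X1: (m, T) array, curves of the first sample on a grid of T points.
    X2: (n, T) array, curves of the second sample on the same grid.
    kappa_l, kappa_u: (T,) arrays, the equivalence bounds on the grid.
    Returns True iff the null hypothesis of non-equivalence is rejected.
    """
    rng = np.random.default_rng(rng)
    m, n = X1.shape[0], X2.shape[0]
    theta_hat = X1.mean(axis=0) - X2.mean(axis=0)       # estimated mean difference
    # bootstrap replicates of the estimated mean difference
    boot = np.empty((R, X1.shape[1]))
    for r in range(R):
        boot[r] = (X1[rng.integers(0, m, m)].mean(axis=0)
                   - X2[rng.integers(0, n, n)].mean(axis=0))
    q_lo = np.quantile(boot, alpha, axis=0)
    q_hi = np.quantile(boot, 1 - alpha, axis=0)
    # bias corrected percentile bounds: 2*theta_hat - q_{1-alpha}, 2*theta_hat - q_alpha
    lower = 2 * theta_hat - q_hi
    upper = 2 * theta_hat - q_lo
    # reject iff both one-sided conditions hold at every grid point
    return bool(np.all((kappa_l < lower) & (upper < kappa_u)))
```

Note that the test rejects only if the inequality \eqref{hd3} holds simultaneously at every grid point, which is precisely the source of the conservativeness of intersection-union procedures discussed in the introduction.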
\section{Efficient equivalence-testing of functional parameters } \label{sec3}
\def\theequation{3.\arabic{equation}}
\setcounter{equation}{0}
In this section we develop an alternative test for the hypotheses \eqref{H0mean} in the two sample problem, which turns
out to be substantially more powerful than the frequentist method proposed by \cite{fogarty2014}.
Our approach is based
on the estimation of the maximum deviation of the unknown measure of similarity from the equivalence bounds defined
in \eqref{H0mean} and \eqref{H0var}.
To be precise we restrict ourselves again to the difference of the location parameters
$\theta = \mu_{1} - \mu_{2} $ and note that the hypotheses in \eqref{H0mean} can be rewritten as
\begin{align} \label{eq:equi-hypotheses}
\begin{split}
H_0^{\theta} : ~ T^{\theta}
& = \max\Big\{ \sup_{t\in [0,1]} \big(-\theta(t) + \kappa_l(t) \big), \,
\sup_{t\in [0,1]} \big(\theta(t) - \kappa_u(t) \big) \Big\} \geq 0 \\
H_1^{\theta} :~ T^{\theta }
&= \max\Big\{ \sup_{t\in [0,1]} \big(-\theta(t) + \kappa_l(t) \big), \,
\sup_{t\in [0,1]} \big(\theta(t) - \kappa_u(t) \big) \Big\}< 0 \, .
\end{split}
\end{align}
The representation of the hypotheses simplifies in the case of symmetric and constant boundaries, that is $\kappa_u(t) =- \kappa_l (t) = \kappa > 0 $ for all $t \in [0,1]$, where we obtain
for the hypotheses in \eqref{eq:equi-hypotheses}
$$
H_0^{\theta} : \sup_{t\in [0,1]} \big | \theta(t) \big | \geq \kappa \, ,~~ H_1^{\theta} : \sup_{t\in [0,1]} \big | \theta(t) \big | < \kappa \, .
$$
For the construction of an efficient test, we define the statistic
\begin{align} \label{eq:statistic}
\hat{T}_{m,n}^{\theta} = \max\Big\{
\sup_{t\in [0,1]} \big(-\hat{\theta}_{m,n}(t) + \kappa_l(t) \big), \,
\sup_{t\in [0,1]} \big(\hat{\theta}_{m,n}(t) - \kappa_u(t) \big) \Big\}
\end{align}
as an estimator of $T^{\theta}$, where
$$
\hat{\theta}_{m,n} = \overline{X}_{1\cdot} - \overline{X}_{2\cdot}
$$
denotes the difference of the sample means, which serves as an estimator
of the function $\theta= \mu_1-\mu_2$. The null hypothesis in \eqref{eq:equi-hypotheses} is then rejected for small values
of $ \hat{T}_{m,n}^{\theta}$, where the critical values will be determined by bootstrap (in the independent case by resampling with replacement, in the dependent case by multiplier block bootstrap).
To be precise, and to motivate our bootstrap, assume that $m,n \to \infty$ such that $m/(m+n) \to \tau \in (0,1)$. Then
it follows from Theorem \ref{thma1} in the online supplement that
\begin{align} \label{eq:limit-dist}
\sqrt{m+n} \big ( \hat T^\theta_{m,n} - T^\theta \big) {\stackrel{\mathcal{D}}{\longrightarrow}}
Z_{\mathcal{E}, \theta} = \max\Big\{
\sup_{t\in \mathcal{E}^l_ \theta} \big(-Z(t)\big), \,
\sup_{t\in \mathcal{E}^u_ \theta} Z(t) \Big\} \, ,
\end{align}
where $Z$ is a Gaussian process with covariance kernel
\begin{align} \label{eq:Z-kernel}
k(s,t) = \frac{1}{\tau} \mbox{Cov} (X_{11}(s), X_{11}(t)) + \frac{1}{1-\tau} \mbox{Cov} (X_{21}(s), X_{21}(t))
\end{align}
and the sets $ \mathcal{E}^l_{ \theta} , \mathcal{E}^u_ {\theta} \subset [0,1] $ contain the points, where
the functions $- \theta +\kappa_l $ and $\theta -\kappa_u $ attain the value $T^{\theta } $, i.e.
\begin{align} \label{eq:sets}
\mathcal{E}^l_\theta =
\big\{ t\in [0,1] \colon -\theta(t)+\kappa_l(t) = T^{\theta } \, \big\} \, , \quad
\mathcal{E}^u_\theta =
\big\{ t\in [0,1] \colon \theta(t)-\kappa _u(t) = T^{\theta } \, \big\} \, .
\end{align}
Throughout this paper, these sets are called {\it extremal sets} and we
note that the extremal sets can be empty (but not both at the same time).
As a consequence, the limit distribution on the right hand side of \eqref{eq:limit-dist} depends on the covariance kernel $k$ and
the extremal sets $ \mathcal{E}^l_\theta$ and $ \mathcal{E}^u_\theta$ defined by the
unknown difference $\theta$ between the mean functions $\mu_{1}$ and $\mu_{2}$.
For the calculation of quantiles of the distribution of $ Z_{\mathcal{E}, \theta}$ we propose to use the bootstrap and proceed in two steps:
\begin{itemize}
\item[(1)] We estimate the unknown sets of extremal points.
\item[(2)] We use the bootstrap to mimic the distribution of the process $Z$ in \eqref{eq:limit-dist}.
\end{itemize}
For the estimation of the extremal sets $ \mathcal{E}^l_\theta $ and $ \mathcal{E}^u_\theta $, we use the statistics
\begin{align} \label{eq:est-sets0}
\begin{split}
\hat{\mathcal{E}}_{\theta}^l &=
\Big\{ t\in[0,1] \colon -\hat{\theta}_{m,n}(t)+\kappa_l(t)
\geq \hat{T}_{m,n}^\theta - c \, \frac{\log(m+n)}{\sqrt{m+n}} \, \Big\} \, , \\
\hat{\mathcal{E}}_{\theta}^u &=
\Big\{ t\in[0,1] \colon \hat{\theta}_{m,n}(t)-\kappa_u(t)
\geq \hat{T}_{m,n}^\theta - c \, \frac{\log(m+n)}{\sqrt{m+n}} \, \Big\} \, ,
\end{split}
\end{align}
where the statistic $ \hat{T}_{m,n}^\theta$ is defined in \eqref{eq:statistic} and $c$ is a tuning parameter.
For the bootstrap part note that it follows from the arguments given
in the proof of Theorem~\ref{thma1} in the appendix that the statistic on the left hand side of \eqref{eq:limit-dist}
is asymptotically equivalent to the statistic
$$
\max\Big\{
\sup_{t\in \mathcal{E}^l_ \theta} \big(-\hat Z_{m,n} (t)\big), \,
\sup_{t\in \mathcal{E}^u_ \theta} \hat Z_{m,n} (t) \Big\}
$$
where the process $\hat Z_{m,n} $ is defined by
$$
\hat Z_{m,n}= \sqrt{m+n} \, \big\{ \hat \theta_{m,n} - \theta \big\}
= \sqrt{m+n} \, \big\{ \overline{X}_{1 \cdot} - \mu_{1}
- ( \overline{X}_{2 \cdot} -\mu_{2} ) \big\}
$$
(by the arguments given in the proof of Theorem~\ref{thma1} this process converges weakly to the process $Z$ on the right
hand side of \eqref{eq:limit-dist}).
To mimic the distribution of this process in the independent case we now use resampling with replacement.
More precisely, assume for $r=1, \ldots , R$ that
$X^{*(r)} _{11}, \ldots, X^{*(r)}_{1m}$ and
$X^{*(r)}_{21}, \ldots, X^{*(r)}_{2n}$ are drawn randomly with replacement
from $X_{11},\ldots, X_{1m}$ and $X_{21},\ldots,X_{2n}$, respectively, and
recall that $\overline{X}_{1 \cdot}$ and $\overline{X}_{2 \cdot}$ denote the sample
means of the two original samples. We define
\begin{align} \label{eq:bootstrap-process}
\hat Z_{m,n}^{*(r)} =
\sqrt{m+n} \, \Big \{ \frac {1}{m} \sum^m_{j=1}
(X^{*(r)}_{1j} - \overline{X}_{1 \cdot})
- \frac {1}{n} \sum^n_{j=1} (X^{*(r)}_{2j}- \overline{X}_{2 \cdot}) \Big \}
\end{align}
as the $r$-th bootstrap analogue of the statistic $ \hat Z_{m,n}$
and a bootstrap version of the random variable on the left hand side of \eqref{eq:limit-dist} by
\begin{equation} \label{hda}
\hat T^{\theta, * (r)}_{m,n} = \max \Big \{ \sup_{t \in \hat {\mathcal{E}}^l_\theta} \big ( - \hat Z^{*(r)}_{m,n}(t) \big), \, \sup_{t \in \hat{\mathcal{E}}^u_\theta} \hat Z^{*(r)}_{m,n} (t) \Big \} \, .
\end{equation}
Finally, the null hypothesis in \eqref{eq:equi-hypotheses} is rejected, whenever
\begin{align} \label{eq:equi-test}
\sqrt{m+n} \, \hat{T}_{m,n}^\theta < z_{m,n,\alpha}^{*(R)} \, ,
\end{align}
where $z_{m,n,\alpha}^{*(R)}$ is the empirical $\alpha$-quantile of the
bootstrap sample $ T^{\theta,*(1)}_{m,n}, \ldots, T^{\theta,*(R)}_{m,n}$.
The following result, which is proved in the appendix, shows that this procedure defines a consistent and
asymptotic level $\alpha$-test for the hypotheses \eqref{eq:equi-hypotheses}
(or equivalently for the hypotheses \eqref{H0mean}).
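Before stating the theoretical guarantees, we illustrate the complete procedure, that is, the computation of $\hat T^\theta_{m,n}$, the estimation of the extremal sets in \eqref{eq:est-sets0}, and the bootstrap decision rule \eqref{eq:equi-test}, in a short numpy sketch. As before, the curves are assumed to be observed on a common grid; the value of the tuning parameter $c$ and the grid discretization are simplifications rather than a definitive implementation.

```python
import numpy as np

def equivalence_test(X1, X2, kappa_l, kappa_u, alpha=0.05, R=300, c=0.005, rng=None):
    """Bootstrap test for equivalence of two mean functions on a common grid.

    Rejects the null hypothesis of non-equivalence iff
    sqrt(m+n) * T_hat is smaller than the bootstrap alpha-quantile.
    """
    rng = np.random.default_rng(rng)
    m, n = X1.shape[0], X2.shape[0]
    N = m + n
    theta_hat = X1.mean(axis=0) - X2.mean(axis=0)
    d_low = -theta_hat + kappa_l                  # -theta_hat + kappa_l
    d_up = theta_hat - kappa_u                    # theta_hat - kappa_u
    T_hat = max(d_low.max(), d_up.max())          # test statistic
    # estimated extremal sets, as boolean masks on the grid
    thresh = T_hat - c * np.log(N) / np.sqrt(N)
    E_low, E_up = d_low >= thresh, d_up >= thresh
    # bootstrap replicates of the maximum over the estimated extremal sets
    T_star = np.empty(R)
    for r in range(R):
        Z = np.sqrt(N) * ((X1[rng.integers(0, m, m)].mean(axis=0) - X1.mean(axis=0))
                          - (X2[rng.integers(0, n, n)].mean(axis=0) - X2.mean(axis=0)))
        vals = []
        if E_low.any():
            vals.append((-Z[E_low]).max())
        if E_up.any():
            vals.append(Z[E_up].max())
        T_star[r] = max(vals)                     # at least one set is non-empty
    return bool(np.sqrt(N) * T_hat < np.quantile(T_star, alpha))
```

By construction at least one of the two estimated extremal sets is non-empty, since the point where $\hat T^\theta_{m,n}$ is attained always satisfies the defining inequality.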
\begin{theorem} \label{thm1}
Let Assumption~\ref{as:ts} in Section~\ref{sec62} be satisfied.
\begin{itemize}
\item[(a)] Assume that the null hypothesis $H_{0}^{\theta}$ of no
equivalence in \eqref{H0mean} holds, that is
$ T^{\theta} \geq 0$.
If $T^\theta =0$, then
\begin{align*}
\lim_{m,n,R\to\infty} \mathbb{P}\big(
\sqrt{m+n} \, \hat{T}_{m,n}^\theta < z_{m,n,\alpha}^{*(R)} \big) = \alpha \, .
\end{align*}
If $T^\theta > 0$, then for any $R \in \mathbb{N}$
\begin{align*}
\lim_{m,n\to\infty} \mathbb{P}\big(
\sqrt{m+n} \, \hat{T}_{m,n}^\theta < z_{m,n,\alpha}^{*(R)} \big) =
0 \, .
\end{align*}
\item[(b)] If the alternative $H_{1}^{\theta}$ of equivalence in \eqref{H0mean} holds, that is $T^\theta < 0$, we have for any $R \in \mathbb{N}$
\begin{align*}
\liminf_{m,n\to\infty} \mathbb{P}\big ( \sqrt{m+n} \, \hat{T}_{m,n}^\theta < z_{m,n,\alpha}^{*(R)} \big)
=1 \, .
\end{align*}
\end{itemize}
\end{theorem}
\begin{rem} \label{rem3}
{\rm
The results remain correct in the case of dependent data, where $X_{1,1}, \ldots , X_{1,m}$ and $X_{2,1}, \ldots , X_{2,n}$ are two independent stationary time series.
In this case, we propose to use a block multiplier bootstrap to mimic the dependency in the data. To be precise,
define a bootstrap process by
\begin{align} \label{2bProcess}
\begin{split}
\hat Z_{m,n}^{**(r)}(t) =& \sqrt{m+n} \Big\{
\frac{1}{m} \sum_{k=1}^{m-l_1+1} \frac{1}{\sqrt{l_1}}\Big( \sum_{j=k}^{k+l_1-1} X_{1j}(t)
-\frac{l_1}{m}\sum_{j=1}^m X_{1j}(t) \Big) \xi_k^{(r)} \\
&- \frac{1}{n} \sum_{k=1}^{n-l_2+1} \frac{1}{\sqrt{l_2}}\Big( \sum_{j=k}^{k+l_2-1} X_{2j}(t)
-\frac{l_2}{n}\sum_{j=1}^n X_{2j}(t) \Big) \zeta_k^{(r)} \Big\} ~~~~(r=1, \ldots , R)
\end{split}
\end{align}
where $\xi_1^{(r)} , \ldots , \xi_m^{(r)}$, $ \zeta_1^{(r)} , \ldots , \zeta_n^{(r)} $ are independent standard normal distributed random variables
and $l_{1}, l_{2}$ are sequences converging to infinity
with increasing sample sizes $m,n \to \infty$.
The null hypothesis in \eqref{H0mean} is now rejected, whenever
\begin{align} \label{eq:equi-testdep}
\sqrt{m+n} \, \hat{T}_{m,n}^\theta < z_{m,n,\alpha}^{**(R)}~,
\end{align}
where $z_{m,n,\alpha}^{**(R)}$ is the empirical $\alpha$-quantile of the sample $ T^{\theta,**(1)}_{m,n}, \ldots, T^{\theta,**(R)}_{m,n}$
and the statistic $ \hat T^{\theta, ** (r)}_{m,n}$ is defined by
\begin{align*}
\hat T^{\theta, ** (r)}_{m,n} = \max \Big \{ \sup_{t \in \hat {\mathcal{E}}^l_\theta} \big ( - \hat Z^{**(r)}_{m,n}(t) \big), \, \sup_{t \in \hat{\mathcal{E}}^u_\theta} \hat Z^{**(r)}_{m,n} (t) \Big \} ~~~~(r=1, \ldots , R) \, .
\end{align*}
In this case - under the assumptions stated in Section \ref{sec623} - the result in Theorem \ref{thm1} remains valid.
Finally we note that this procedure with $l_{1} = l_{2}=1$ also provides a valid bootstrap test in the case of independent data.
}
\end{rem}
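The block multiplier process $\hat Z^{**(r)}_{m,n}$ in \eqref{2bProcess} can be computed efficiently with cumulative sums. The following numpy sketch produces one replicate; the block lengths $l_1, l_2$ are left to the user (in practice they have to be chosen by a block length selection rule, which is not discussed here).

```python
import numpy as np

def block_multiplier_process(X1, X2, l1, l2, rng=None):
    """One replicate of the block multiplier bootstrap process.

    X1: (m, T) and X2: (n, T) arrays of curves on a common grid;
    l1, l2 are the block lengths for the two samples.
    """
    rng = np.random.default_rng(rng)
    N = X1.shape[0] + X2.shape[0]

    def one_sample(X, l):
        k_max = X.shape[0] - l + 1
        xi = rng.standard_normal(k_max)                    # i.i.d. N(0,1) multipliers
        total = X.sum(axis=0)
        # rolling block sums  sum_{j=k}^{k+l-1} X_j(t)  via cumulative sums
        cs = np.vstack([np.zeros(X.shape[1]), np.cumsum(X, axis=0)])
        blocks = cs[l:] - cs[:-l]                          # (k_max, T)
        centred = blocks - (l / X.shape[0]) * total        # subtract (l/m) * sum_j X_j
        return (centred / np.sqrt(l)).T @ xi / X.shape[0]  # (1/m) sum_k (...) xi_k

    return np.sqrt(N) * (one_sample(X1, l1) - one_sample(X2, l2))
```

Repeated calls (with independent multipliers) yield the sample $\hat T^{\theta,**(1)}_{m,n},\ldots,\hat T^{\theta,**(R)}_{m,n}$ after taking the maxima over the estimated extremal sets as in the independent case.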
\begin{example} \label{examsim1}
{\rm
We have conducted a small
simulation study to compare the new bootstrap test \eqref{eq:equi-test} with the
frequentist test proposed by \cite{fogarty2014}. Further numerical results supporting our findings can be found
in Section \ref{sec53}, where we compare both methods in a functional random effect model.
We have generated functional data as described in Sections~6.3 and
6.4 of \cite{aueDubartNorinhoHormann2015}, who considered $D =21 $
$B$-spline basis functions $\nu_1,\dots , \nu_D$ and defined the
random functions $\eta_{11}, \dots, \eta_{1m}, \eta_{21}, \dots, \eta_{2n}$ by
\begin{equation}\label{eq:errors}
\eta_{1j} = \sum_{i=1}^D N_{1,i,j} \nu_i \, , \quad
\eta_{2k} = \sum_{i=1}^D N_{2,i,k} \nu_i \, ,
\qquad j=1,\dots, m, \, k=1,\dots,n \, ,
\end{equation}
where $N_{1,1,1}, \dots, N_{1,D,m}, N_{2,1,1}, \dots, N_{2,D,n} $ are
independent, normally distributed random variables with expectation zero and
variances $\sigma_i^2 = \mbox{Var}(N_{1,i,j}) = \mbox{Var}(N_{2,i,k}) = 1/i^2$
($i = 1,\dots, D$; $j=1,\ldots , m$; $k=1,\ldots , n$).
\begin{figure}[t]
{ \centering
\includegraphics[width = 6cm, height = 6cm]{true-mu-subinterval.png}
\includegraphics[width = 6cm, height = 6cm]{no-groupeffects-subinterval.png}
\caption{\it Left panel: Difference $\theta = \mu_1 - \mu_2$ of the mean
functions defined by \eqref{eq:subinterval} with fixed $b_1 = 0.46, b_2 = 0.54$
and different values for $a\in \{ 0.19, 0.204, 0.2 \}$.
Right panel: Empirical rejection probabilities of the frequentist test
proposed by \cite{fogarty2014} and the test \eqref{eq:equi-test} for the
hypotheses \eqref{eq:equi-hypotheses} with
$\kappa_{l} \equiv -0.2$, $\kappa_{u} \equiv 0.2$ and different values of $a$.
}
\label{fig:subinterval}}
\end{figure}
Then two independent samples of independent and identically distributed Gaussian random
functions are obtained by
\begin{align} \label{eq:fIID}
X_{1j} = \eta_{1,j} + \mu_1 \, , \quad X_{2k} = \eta_{2,k} + \mu_2 \, ,
\end{align}
with mean functions
\begin{align} \label{eq:subinterval}
\mu_1 \equiv 0 \, , \quad \mu_2(t) =
\begin{cases}
\frac{a}{b_1-0.02} (t-0.02) \, , & t\in [0, b_1) \\
a \, , & t\in[b_1, b_2] \\
\frac{-a}{0.98 - b_2} (t - b_2) + a \, , &t \in (b_2,1]
\end{cases}~,
\end{align}
where $a$, $b_1$ and $b_2 $ are parameters.
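The data generating mechanism \eqref{eq:errors}, \eqref{eq:fIID} and \eqref{eq:subinterval} can be reproduced with a few lines of code. The sketch below uses a clamped cubic $B$-spline basis via \texttt{scipy}; the grid size and the exact knot sequence are our own choices, as they are not specified in the reference.

```python
import numpy as np
from scipy.interpolate import BSpline

def make_basis(D=21, grid_size=101, degree=3):
    """Evaluate D cubic B-spline basis functions nu_1,...,nu_D on [0,1]."""
    knots = np.concatenate(([0.0] * degree,
                            np.linspace(0.0, 1.0, D - degree + 1),
                            [1.0] * degree))
    t = np.linspace(0.0, 1.0, grid_size)
    basis = np.column_stack(
        [BSpline(knots, np.eye(D)[i], degree)(t) for i in range(D)])
    return t, basis                                        # (grid_size, D)

def mu2(t, a, b1, b2):
    """Second mean function: ramp up on [0,b1), plateau a on [b1,b2], ramp down."""
    out = np.where(t < b1, a / (b1 - 0.02) * (t - 0.02), a)
    return np.where(t > b2, -a / (0.98 - b2) * (t - b2) + a, out)

def simulate(m, n, a, b1=0.46, b2=0.54, rng=None):
    """Two independent Gaussian samples with mu_1 = 0 and mu_2 as above."""
    rng = np.random.default_rng(rng)
    t, basis = make_basis()
    D = basis.shape[1]
    sd = 1.0 / np.arange(1, D + 1)                         # sd(N_{l,i,j}) = 1/i
    X1 = rng.normal(0, 1, (m, D)) * sd @ basis.T           # eta + mu_1
    X2 = rng.normal(0, 1, (n, D)) * sd @ basis.T + mu2(t, a, b1, b2)
    return t, X1, X2
```

For $a = 0.2$ the plateau of $\mu_2$ touches the equivalence bound $\kappa_u \equiv 0.2$, which corresponds to the boundary of the null hypothesis in the simulations below.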
The left part of Figure~\ref{fig:subinterval} illustrates the difference $\theta = \mu_{1} - \mu_2$ of the mean functions for fixed $b_1 = 0.46$, $b_2 = 0.54$ and different values of the parameter $a$.
The equivalence bands, used in the hypotheses \eqref{H0mean}, are defined by
$\kappa_l \equiv -0.2$, $\kappa_u \equiv 0.2$. Note that, for any $a > 0.02$,
the extremal sets in \eqref{eq:sets} are defined by $\mathcal{E}_\theta^u = \emptyset$,
$\mathcal{E}_\theta^l = [b_1, b_2]$ (here
$\mathcal{E}_\theta^l = [0.46,0.54]$) and that the cases $|a| \geq 0.2$ and $|a| < 0.2$ correspond to the null hypothesis of no
equivalence and the alternative of equivalent mean functions, respectively. In the right part of
Figure~\ref{fig:subinterval} we display the empirical rejection probabilities
of the frequentist test proposed by \cite{fogarty2014} and the test defined
in \eqref{eq:equi-test} for different values of $a \in \{
0.204,0.202,\dots,0.190 \} $ (by symmetry negative values of $a$ yield the same
results). Here the extremal sets are estimated by
\eqref{eq:est-sets0} with $c=0.005$.
The sample sizes are $m = n = 100$ and the rejection probabilities are
calculated by $1000$ simulation runs and $300$ bootstrap replications. We
observe that the rejection probabilities are strictly smaller than the level
$5\%$ for $a>0.2$ and increase towards $1$ as $a$ decreases below $0.2$.
Both tests slightly underestimate the nominal level at the boundary of the
null hypothesis ($a=0.2$). Moreover, the new test has substantially more power
in all considered scenarios under the alternative.
\begin{figure}[h]
{ \centering
\includegraphics[width = 6cm, height = 6 cm]{true-mu-varySets.png}
\includegraphics[width = 6cm, height = 6 cm]{no-groupeffects-varySets.png}
\caption{\it Left panel: Difference $\theta = \mu_1 - \mu_2$ of the mean
functions defined by \eqref{eq:subinterval} for fixed $a = 0.194$ and
different choices of $b_1 = 0.5-0.08j$, $b_2 = 0.5+0.08j$, where $j=0,\dots,4$.
Right panel: Empirical rejection probabilities of the frequentist test proposed by \cite{fogarty2014}
and the test \eqref{eq:equi-test} for the
hypotheses \eqref{eq:equi-hypotheses} with
$\kappa_{l} \equiv -0.2$, $\kappa_{u} \equiv 0.2$.
}
\label{fig:varySets}}
\end{figure}
The superiority of the new test is even more visible if the size of the set of
extremal points is larger. To illustrate this fact, we consider the mean
functions in \eqref{eq:subinterval} for fixed $a = 0.194$ and different values
of $b_1$ and $b_2$.
The rejection probabilities of the frequentist test proposed by
\cite{fogarty2014} and the test defined in \eqref{eq:equi-test}
are shown in Figure \ref{fig:varySets} where, for $j=0,\dots,4$, function
number $j$ corresponds to the choices $b_1 = 0.5-0.08j$, $b_2 = 0.5+0.08j$ in
the definition of the mean differences in \eqref{eq:subinterval}. The sample
sizes are again $m=n=100$. We observe that only in the case $j=0$, both tests
have comparable power. In all other cases, the new test \eqref{eq:equi-test}
outperforms the test proposed by \cite{fogarty2014} substantially.
}
\end{example}
\section{A functional random effect model for paired data } \label{sec4}
\def\theequation{4.\arabic{equation}}
\setcounter{equation}{0}
In this section we demonstrate that the method introduced in Section \ref{sec3} for the simple two sample problem of comparing
two mean functions is a universally applicable decision rule to decide for the equivalence between two functional parameters from two samples
of functional data. For this purpose only the bootstrap procedure has to be adjusted to the situation under consideration.
As a concrete example (in particular for the sake of comparison with the currently available methodology) we consider a functional analysis of variance model with
random effects as proposed by \cite{fogarty2014} for the analysis of
functional data describing the lung volume over time for different patients and different breaths produced by a spirometer (industry standard) and a new device (Structured Light Plethysmography - SLP). While the new SLP device offers many advantages, it has to be assured that it produces measurements (practically) equivalent to those produced by the industry standard, before it can be used for diagnostic purposes \citep[see][for more details]{fogarty2014}. There are $A$ patients and for the $i$-th patient, $n_i$ breaths are recorded simultaneously by both devices leading to paired functional data with cross-covariances between the pairs. The goal is the development of a statistically justified decision rule to decide for or against equivalence of the measurements.
To be precise, we consider pairs of random functions
defined by
\begin{align} \label{eq:fogarty-datamodel}
\left(
\begin{array}{ll}
X_{1,i,j} \\
X_{2,i,j}
\end{array}
\right)
= \left(
\begin{array}{ll}
\mu_1 + \varepsilon_{1,i} + \eta_{1,i,j} \\
\mu_2 + \varepsilon_{2,i} + \eta_{2,i,j}
\end{array}
\right) \, \quad
j = 1,\dots,n_i \, , \, i = 1,\dots,A \, .
\end{align}
Here $\mu_1,\mu_2$ denote the mean functions, the functions $\varepsilon_{1,i}, \varepsilon_{2,i}$ model a random group effect
(usually corresponding to different individuals drawn from a larger population) and the functions $\eta_{1,i,j},\eta_{2,i,j}$ are individual random effects.
The random group effects and the individual random effect functions are assumed to be centred and independent and identically distributed,
respectively. Furthermore the group effects are independent of the individual ones.
Note that the total number of pairs is given by $N = \sum_{i = 1}^A n_i$.
\subsection{Comparing mean functions} \label{sec51}
For the construction of a test for the hypotheses \eqref{eq:equi-hypotheses}, we consider the statistic
\begin{align} \label{eq:statistic-groups}
\hat{T}_{N}^{\theta}=
\sqrt{A} \, \max\big\{
\sup_{t\in [0,1]} \big(-\hat{\theta}_{N}(t) + \kappa_l(t) \big), \,
\sup_{t\in [0,1]} \big(\hat{\theta}_{N}(t) - \kappa_u(t) \big) \big\} \, ,
\end{align}
where $\hat{\theta}_{N} = \overline{X}_{1\cdot \cdot } - \overline{X}_{2\cdot \cdot } $ and
\begin{align*}
\overline{X}_{\ell\cdot \cdot } = \frac{1}{A} \sum_{i=1}^A \frac{1}{n_i} \sum_{j=1}^{n_i} X_{\ell,i,j} \, ,~~\ell =1,2
\end{align*}
denote the two sample means. The bootstrap analogue of
\eqref{eq:statistic-groups} is defined as follows. We use the sample means
\begin{align} \label{eq:group-mean}
\overline{X}_{\ell, i, \cdot }
= \frac{1}{n_i} \sum_{j=1}^{n_i} X_{\ell,i,j} \, ,~~
\ell =1,2 \, ,~ i = 1,\dots,A
\end{align}
in the different groups
to estimate the group effects by
\begin{align*}
\hat{\varepsilon}_{1,i} = \overline{X}_{1,i,\cdot} - \overline{X}_{1,\cdot\cdot} \, , \quad
\hat{\varepsilon}_{2,i} = \overline{X}_{2,i,\cdot} - \overline{X}_{2,\cdot\cdot} \, , ~~~(i = 1,\dots,A) .
\end{align*}
For the bootstrap we draw, for $r=1,\ldots , R$, samples
$(\hat{\varepsilon}_{1,1}^{\star (r)}, \hat{\varepsilon}_{2,1}^{\star (r)}), \ldots, (\hat{\varepsilon}_{1,A}^{\star (r)},
\hat{\varepsilon}_{2,A}^{\star (r)})$
randomly with replacement from the pairs
$(\hat{\varepsilon}_{1,1}, \hat{\varepsilon}_{2,1}), \ldots,
(\hat{\varepsilon}_{1,A},\hat{\varepsilon}_{2,A})$.
The bootstrap statistic is then defined by
\begin{align} \label{hd21}
\hat T_{N}^{\theta,\star (r) } = \max\Big\{
\sup_{t\in \hat{\mathcal{E}}^l_\theta} \big(-B_N^{\star (r) }(t)\big), \,
\sup_{t\in \hat{\mathcal{E}}^u_\theta} B_N^{\star (r) }(t) \Big\} \, ,
\end{align}
where
\begin{align} \label{eq:bootstrap-process2}
B_N^{\star (r) } = \frac{1}{\sqrt{A}} \sum_{i = 1}^A
\big( \hat{\varepsilon}_{1,i}^{\star (r)}
- \hat{\varepsilon}_{2,i}^{\star (r)} \big ) \, ,
\end{align}
and the sets $\hat{\mathcal{E}}^l_\theta, \hat{\mathcal{E}}^u_\theta$ are given by
\begin{align} \label{eq:est-sets}
\begin{split}
\hat{\mathcal{E}}_{\theta}^l &=
\Big\{ t\in[0,1] \colon -\hat{\theta}_N(t)+\kappa_l(t)
\geq \hat{T}_N^{\theta} - c \, \frac{\log(A)}{\sqrt{A}} \ \Big\} \\
\hat{\mathcal{E}}_{\theta}^u &=
\Big\{ t\in[0,1] \colon \hat{\theta}_N(t)-\kappa_u(t)
\geq \hat{T}_N^{\theta}- c \, \frac{\log(A)}{\sqrt{A}} \ \Big\} \, .
\end{split}
\end{align}
The consideration of the process
$
B_N^{\star (r) } $ in \eqref{eq:bootstrap-process2} is motivated by the expansion
$ \sqrt{A} \, (\hat \theta_N - \theta)
= \frac{1}{\sqrt{A}}\sum_{i=1}^A (\varepsilon_{1,i} - \varepsilon_{2,i})
+ o_\mathbb{P}(1) $, which is derived in equation
\eqref{eq:CLT2} in the online supplement.
The null hypothesis in \eqref{eq:equi-hypotheses} is finally rejected whenever
\begin{align} \label{t1}
\hat{T}_{N}^\theta < z_{N,\alpha}^{\star(R)}
\end{align}
where $z_{N,\alpha}^{\star(R)}$ is the empirical $\alpha$-quantile of the bootstrap sample
$\hat T_{N}^{\theta,\star (1) } , \ldots , \hat T_{N}^{\theta,\star (R) } $. The following result shows that this decision rule defines a consistent asymptotic level
$\alpha$ test for the hypotheses in \eqref{H0mean}.
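For readers who wish to experiment, the whole decision rule \eqref{t1} can be sketched in Python. This is a minimal illustration, not the authors' implementation: curves are discretized on a grid, the group sizes are taken equal ($n_i \equiv n$), and the function name and defaults are our own; the extremal sets are estimated as in \eqref{eq:est-sets}.

```python
import numpy as np

def mean_equivalence_test(X1, X2, kappa_l, kappa_u, alpha=0.05, R=300, c=0.005, rng=None):
    """Bootstrap equivalence test for mean functions (sketch, equal group sizes).

    X1, X2 : arrays of shape (A, n, T); kappa_l, kappa_u : bands of length T.
    Returns True if the null of no equivalence is rejected.
    """
    rng = np.random.default_rng(rng)
    A = X1.shape[0]
    theta = X1.mean(axis=(0, 1)) - X2.mean(axis=(0, 1))          # estimated difference
    T_hat = np.sqrt(A) * max((-theta + kappa_l).max(), (theta - kappa_u).max())
    # estimated group effects: group means minus the overall mean
    eps1 = X1.mean(axis=1) - X1.mean(axis=(0, 1))                # shape (A, T)
    eps2 = X2.mean(axis=1) - X2.mean(axis=(0, 1))
    # extremal sets where the supremum is (nearly) attained
    thr = T_hat - c * np.log(A) / np.sqrt(A)
    El = (-theta + kappa_l) >= thr
    Eu = (theta - kappa_u) >= thr
    boot = np.full(R, -np.inf)
    for r in range(R):
        idx = rng.integers(0, A, size=A)                         # resample effect pairs
        B = (eps1[idx] - eps2[idx]).sum(axis=0) / np.sqrt(A)
        vals = []
        if El.any():
            vals.append((-B[El]).max())
        if Eu.any():
            vals.append(B[Eu].max())
        if vals:
            boot[r] = max(vals)
    return T_hat < np.quantile(boot, alpha)
```

Note that the group-effect pairs are resampled jointly, so the cross-covariance between the two devices within a patient is preserved by the bootstrap.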
\begin{theorem} \label{thm2}
Let Assumption~\ref{(C)} in Section~\ref{sec51-assumptions} be satisfied
and assume that $A\to\infty$ and $\min_{1\leq i\leq A} n_i \to\infty$ as $N\to\infty$.
\begin{itemize}
\item[(a)] Assume that the null hypothesis $H_{0}^{\theta}$ of no
equivalence in \eqref{H0mean} holds, that is
$ T^{\theta} \geq 0$.
If $T^\theta =0$, then
\begin{align*}
\lim_{A, \min n_i, R \to\infty} \mathbb{P}\big(
\hat{T}_{N}^\theta < z_{N,\alpha}^{\star(R)} \big) =
\alpha \, .
\end{align*}
If $T^\theta > 0$, then for any $R \in \mathbb{N}$
\begin{align*}
\lim_{A, \min n_i \to\infty} \mathbb{P}\big(
\hat{T}_{N}^\theta < z_{N,\alpha}^{\star(R)} \big) =
0 \, .
\end{align*}
\item[(b)] If the alternative $H_{1}^{\theta}$ of equivalence in
\eqref{H0mean} holds, that is $T^\theta < 0$, we have for any $R \in \mathbb{N}$
\begin{align*}
\liminf_{A, \min n_i \to\infty} \mathbb{P}\big (\hat{T}_{N}^\theta < z_{N,\alpha}^{\star(R)} \big)
=1 \, .
\end{align*}
\end{itemize}
\end{theorem}
\subsection{Comparing variance functions} \label{sec52}
Recall the definition of model \eqref{eq:fogarty-datamodel} and define
(assuming its existence; see Section~\ref{sec61} for more details)
\begin{align*}
\sigma_1^2(\cdot) = \E \big[\eta_{1,1,1}(\cdot)^2\big] \, , \quad
\sigma_2^2(\cdot) = \E \big[\eta_{2,1,1}(\cdot)^2\big] \in C([0,1])
\end{align*}
as the variance functions of the individual errors
$\eta_{1,1,1}, \eta_{2,1,1}$. We are interested in testing the hypotheses
\eqref{H0var}, which can be rewritten as
\begin{align} \label{eq:equi-hypotheses-var}
\begin{split}
&H_0^\lambda: ~ T^\lambda
= \max\Big\{
\sup_{t\in [0,1]} \big(-\log \lambda(t) + \log \zeta_l(t) \big), \,
\sup_{t\in [0,1]} \big(\log\lambda(t) - \log\zeta_u(t) \big) \Big\} \geq 0 \\
&H_1^\lambda:~ T^\lambda < 0 \, ,
\end{split}
\end{align}
where $\lambda = \frac{\sigma_1^2}{\sigma_2^2}$ is the ratio of the two variance functions and $\zeta_l(t), \zeta_u(t)$ are the given equivalence bands.
Note that we work with the logarithm of $\lambda$ to obtain stabilized variances. We define
\begin{align} \label{eq:est-var}
\hat{\sigma}_\ell^2 = \frac{1}{N-A} \sum_{i = 1}^A
\sum_{j=1}^{n_i} \Big (X_{\ell,i,j}
- \frac{1}{n_i} \sum_{k=1}^{n_i} X_{\ell,i,k} \Big )^2 \, , \quad
\ell=1,2 ,
\end{align}
estimate the variance ratio by $\hat{\lambda} = \frac{\hat{\sigma}_1^2}{\hat{\sigma}_2^2} $ and consider the test
statistic
\begin{align} \label{eq:statistic-groups_var}
\hat{T}^{\lambda}_{N}= \sqrt{N} \, \max\Big\{
\sup_{t\in [0,1]} \big(- \log \hat{\lambda}(t) + \log \zeta_l(t) \big), \,
\sup_{t\in [0,1]} \big( \log \hat{\lambda}(t) - \log \zeta_u(t) \big)
\Big\} \, .
\end{align}
For the calculation of bootstrap quantiles we adapt resampling with replacement to the random effect model \eqref{eq:fogarty-datamodel}
and estimate the individual random effects by
\begin{align*}
\hat{\eta}_{1,i,j} = X_{1,i,j}-\overline{X}_{1,i ,\cdot} \, , \quad
\hat{\eta}_{2,i,j} = X_{2,i,j}-\overline{X}_{2,i ,\cdot}
\end{align*}
for $i = 1,\dots, A$ and $j = 1,\dots, n_i$, where the group means $\overline{X}_{\ell ,i ,\cdot}$ ($i = 1,\dots, A$) are defined by \eqref{eq:group-mean}.
We now draw with replacement $N=\sum_{i=1}^A {n_{i}}$ pairs
$(\hat{\eta}_{1,1,1}^{\star (r)}, \hat{\eta}_{2,1,1}^{\star (r)}),\dots, (\hat{\eta}_{1,A,n_A}^{\star (r)}, \hat{\eta}_{2,A,n_A}^{\star (r)})$
from $(\hat{\eta}_{1,1,1}, \hat{\eta}_{2,1,1}),\dots, (\hat{\eta}_{1,A,n_A}, \hat{\eta}_{2,A,n_A})$
and define for $r = 1,\dots,R$
\begin{align} \label{eq:bootstrap-statistic-var}
\hat T_{N}^{\lambda, \star (r)} = \sqrt{N} \max\Big\{
\sup_{t\in \hat{\mathcal{E}}^l_\lambda} \big(-C_{N}^{ \star (r)}(t)\big), \,
\sup_{t\in \hat{\mathcal{E}}^u_\lambda} C_{N}^{ \star (r)}(t) \Big\}~
\end{align}
as the bootstrap analogue of \eqref{eq:statistic-groups_var},
where
\begin{align} \label{hol11}
C_{N}^{ \star(r)} = \frac{C_{1,N}^{ \star (r)}}{\hat \sigma_1^2}
- \frac{C_{2,N}^{ \star (r)}}{\hat \sigma_2^2} \, , \quad
C_{\ell,N}^{ \star (r)} = \frac{1}{N-A} \sum_{i = 1}^A
\sum_{j=1}^{n_i} \big\{ (\hat{\eta}_{\ell,i,j}^{\star (r)})^2
- \hat{\sigma}^2_\ell \big \} \, ,
\quad \ell=1,2
\end{align}
and
\begin{align} \label{eq:var-est-sets}
\begin{split}
\hat{\mathcal{E}}_{\lambda}^l &=
\Big\{ t\in[0,1] \colon -\log \hat{\lambda}(t)+ \log \zeta_l(t)
\geq \hat{T}_{N}^{\lambda} - c \, \frac{\log(N)}{\sqrt{N}} \ \Big\} \\
\hat{\mathcal{E}}_{\lambda}^u &=
\Big\{ t\in[0,1] \colon \log \hat{\lambda}(t) - \log \zeta_u(t)
\geq \hat{T}_{N}^{\lambda} - c \, \frac{\log(N)}{\sqrt{N}} \ \Big\} \, .
\end{split}
\end{align}
The consideration of the process $C_{N}^{ \star(r)}$ in \eqref{hol11} is motivated by the expansion
\begin{align*}
\sqrt{N}(\log \hat\lambda - \log \lambda)
= \frac{\sqrt{N}}{N-A} \sum_{i = 1}^A
\sum_{j=1}^{n_i} \Big\{ \frac{(\eta_{1,i,j})^2 - \sigma^2_1}{\sigma^2_1}
- \frac{(\eta_{2,i,j})^2 - \sigma^2_2}{\sigma^2_2} \Big \} + o_\mathbb{P}(1) \, ,
\end{align*}
which is derived in equation \eqref{eq:delta-method} in the online supplement.
Finally, the null hypothesis in \eqref{H0var} of no equivalence is rejected, whenever
\begin{align} \label{eq:var-test}
\hat{T}_{N}^{\lambda} < u_{N,\alpha}^{\star(R)}
\end{align}
where $ u_{N,\alpha}^{\star(R)}$ is the empirical $\alpha$-quantile of the sample $\hat T_{N}^{\lambda, \star (1)} , \ldots , \hat T_{N}^{\lambda, \star (R)} $.
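The variance test \eqref{eq:var-test} can be sketched analogously. Again this is only an illustration under simplifying assumptions of our own (discrete grid, equal group sizes, hypothetical function name); the estimators follow \eqref{eq:est-var}, the bootstrap process follows \eqref{hol11}, and the extremal sets follow \eqref{eq:var-est-sets}.

```python
import numpy as np

def variance_equivalence_test(X1, X2, zeta_l, zeta_u, alpha=0.05, R=300, c=0.005, rng=None):
    """Bootstrap equivalence test for variance functions (sketch, equal group sizes).

    X1, X2 : arrays of shape (A, n, T); zeta_l, zeta_u : equivalence bounds
    (scalars or arrays of length T). Returns True if no equivalence is rejected.
    """
    rng = np.random.default_rng(rng)
    A, n, T = X1.shape
    N = A * n
    # within-group residuals and pooled variance estimators with denominator N - A
    eta1 = X1 - X1.mean(axis=1, keepdims=True)
    eta2 = X2 - X2.mean(axis=1, keepdims=True)
    s1 = (eta1 ** 2).sum(axis=(0, 1)) / (N - A)
    s2 = (eta2 ** 2).sum(axis=(0, 1)) / (N - A)
    log_lam = np.log(s1) - np.log(s2)
    T_hat = np.sqrt(N) * max((-log_lam + np.log(zeta_l)).max(),
                             (log_lam - np.log(zeta_u)).max())
    thr = T_hat - c * np.log(N) / np.sqrt(N)
    El = (-log_lam + np.log(zeta_l)) >= thr
    Eu = (log_lam - np.log(zeta_u)) >= thr
    e1, e2 = eta1.reshape(N, T), eta2.reshape(N, T)
    boot = np.full(R, -np.inf)
    for r in range(R):
        idx = rng.integers(0, N, size=N)               # resample residual pairs
        C1 = (e1[idx] ** 2 - s1).sum(axis=0) / (N - A)
        C2 = (e2[idx] ** 2 - s2).sum(axis=0) / (N - A)
        C = C1 / s1 - C2 / s2
        vals = []
        if El.any():
            vals.append(np.sqrt(N) * (-C[El]).max())
        if Eu.any():
            vals.append(np.sqrt(N) * (C[Eu]).max())
        if vals:
            boot[r] = max(vals)
    return T_hat < np.quantile(boot, alpha)
```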
\begin{theorem} \label{thm3}
Let Assumption~\ref{as:ts2} in Section~\ref{sec52-assumptions} be satisfied
and assume that $A\to\infty$ and $\min_{1\leq i\leq A} n_i \to\infty$ as $N\to\infty$.
\begin{itemize}
\item[(a)] Assume that the null hypothesis $H_{0}^{\lambda}$ of no equivalence
in \eqref{eq:equi-hypotheses-var} holds, that is $T^{\lambda} \geq 0$. If
$T^\lambda =0$, then
\begin{align*}
\lim_{A , \min n_i, R \to\infty} \mathbb{P}\big(
\hat{T}_{N}^\lambda < u_{N,\alpha}^{\star(R)} \big) = \alpha \, .
\end{align*}
If $T^\lambda > 0$, then for any $R \in \mathbb{N}$
\begin{align*}
\lim_{A, \min n_i \to\infty} \mathbb{P}\big(
\hat{T}_{N}^\lambda < u_{N,\alpha}^{\star(R)} \big) = 0 \, .
\end{align*}
\item[(b)] If the alternative $H_{1}^{\lambda}$ of equivalence in
\eqref{eq:equi-hypotheses-var} holds, that is $T^\lambda < 0$, we have for any
$R \in \mathbb{N}$
\begin{align*}
\liminf_{A, \min n_i \to\infty} \mathbb{P}\big (
\hat{T}_{N}^\lambda < u_{N,\alpha}^{\star(R)} \big) =1 \, .
\end{align*}
\end{itemize}
\end{theorem}
\subsection{Some numerical results } \label{sec53}
In this section we illustrate the finite sample properties of the new bootstrap
procedure in the functional analysis of variance model
\eqref{eq:fogarty-datamodel} and also provide a comparison with the method
proposed in \cite{fogarty2014}. For this purpose we consider some of the
scenarios described in Sections 10.1 and 10.2 of this reference.
As a general picture, we will demonstrate that the procedure proposed in this paper is more powerful than the frequentist test developed in \cite{fogarty2014}.
Note that \cite{fogarty2014} also develop a Bayesian test, but it is outperformed by their frequentist test. Therefore, the new bootstrap test is only compared with the frequentist test in the following sections.
For the sake of comparison, we perform the frequentist test of \cite{fogarty2014} with the same data as the new bootstrap procedure and do not use the exact results displayed in this reference. In each scenario under consideration, we perform $1000$ simulation runs and in each run, $R = 300$ bootstrap replicates are generated to calculate the empirical $5\%$ bootstrap quantile. The extremal sets are estimated as in \eqref{eq:est-sets} and \eqref{eq:var-est-sets} with $c=0.005$, respectively.
\begin{figure}[t]
{ \centering
\includegraphics[width = 6cm, height = 6 cm]{initial-expectation.png}~~~
\includegraphics[width = 6cm, height = 6 cm]{initial-variance.png}
\caption{\it Expectation function $\mu_1$ (left panel) and variance function $\sigma_1$ (right panel) used
in the simulation study in Section~\ref{sec431} and
Section~\ref{sec432}. \label{fig:initial-functions}}}
\end{figure}
\begin{figure}[t]
{ \centering
\includegraphics[width = 6cm, height = 6 cm]{size-true-mu.png}~~~
\includegraphics[width = 6cm, height = 6 cm]{groupeffects-size.png}
\caption{\it Approximation of the nominal level by the frequentist test proposed by
\cite{fogarty2014} (called Frequentist) and the test \eqref{t1} for the hypotheses
\eqref{H0mean} with $\kappa_{l} \equiv -0.2$ and
$\kappa_{u} \equiv 0.2$.
Left panel: true differences for scenarios $1$, $3$, $5$, $7$ and $9$.
Right panel: simulated nominal level.
\label{fig:fogarty-size}}
\end{figure}
\begin{figure}[t]
{ \centering
\includegraphics[width = 6cm, height = 6 cm]{power-true-mu.png}
\includegraphics[width = 6cm, height = 6 cm]{groupeffects-power.png}
\caption{\it Power comparison of the frequentist test proposed by
\cite{fogarty2014} (called Frequentist) and the test \eqref{t1} for the hypotheses
\eqref{H0mean} with $\kappa_{l} \equiv -0.2$ and
$\kappa_{u} \equiv 0.2$.
Left panel: true mean difference for each scenario (1-8). Right panel: simulated rejection probabilities.
}
\label{fig:fogarty-power}}
\end{figure}
\subsubsection{Comparison of mean functions} \label{sec431}
For the mean functions, we consider five different scenarios. The mean function
$\mu_1$ is the same in each scenario and can be obtained from the software code provided by \cite{fogarty2014}.
It is not given in closed form, but it is displayed in the left panel of
Figure~\ref{fig:initial-functions}.
The mean function $\mu_2$ is defined by
\begin{align*}
\mu_{2}(t) = \mu_{1}(t) + 0.2 \exp \big(-a_i \, |t-1/2|\big)
\end{align*}
(thus the difference has a parametric form),
where $a_1 = 0$ and $a_i = 10^{2(i-2)/7}$ for $i = 3,5,7,9$.
The differences $\mu_{1 }- \mu_{2} $ correspond to the functions $1$, $3$, $5$, $7$ and $9$ in the left part of Figure~\ref{fig:fogarty-size}, which also shows the equivalence bounds given by
$\kappa_{l} (t) \equiv -0.2$ and $\kappa_{u} (t) \equiv 0.2$.
Note that
\cite{fogarty2014} only investigate the equivalence between the curves on the set $\{ t_j = (j-0.5)/25 \colon j = 1,\dots,25 \}$ in their simulations and for the sake of comparison, we
consider the same set here. The variance function
$\sigma_1^2$ is also the same in each scenario and it is displayed in the right
panel of Figure~\ref{fig:initial-functions}. The variance function $\sigma_2^2$ is defined by
\begin{align}
\frac{\sigma^2_1}{\sigma^2_2}(t)
= \exp\big(\log(2) \exp\big(-a_i \, |t-1/2|\big)\big) \, \label{var1}
\end{align}
where $i =1,3,5,7,9$.
The right part of the Figure~\ref{fig:fogarty-size} shows the simulated nominal
level of the bootstrap test \eqref{t1} and the frequentist test proposed by
\cite{fogarty2014} for the five cases under consideration. We observe that the
frequentist test of \cite{fogarty2014} approximates the nominal level rather well for the function $9$, slightly exceeds the nominal level for function $7$ and is conservative in the cases $1$, $3$ and $5$. The test \eqref{t1} shows a similar
picture, where it provides a better approximation of the nominal level for the function $3$ and slightly exceeds the nominal level in the cases $5$, $7$ and $9$.
Next we study the power of the two tests for the hypotheses \eqref{H0mean}. The
mean function $\mu_1$ is given in the left panel of Figure~\ref{fig:initial-functions}
and $\mu_2$ is defined by
$$
\mu_{2}(t) = \mu_{1}(t) - b_i \cos(2\pi t) - c_i \, ,
$$
where $b_i = 0.05-0.1\cdot(i-1)/14$ and $c_i = 0.15-0.3\cdot(i-1)/14$ for $i=1,\dots, 8$. The variance function
$\sigma_1^2$ is given in the right panel of Figure \ref{fig:initial-functions} and $\sigma^2_2$ is defined by
\begin{align} \label{var2}
\frac{\sigma^2_1}{\sigma^2_2}(t)
= (0.1 \cos(2\pi t) + 1.8)^{d_i} \, ,
\end{align}
where $d_i = -1+2\cdot(i-1)/14$ for $i=1,\dots,8$.
The mean differences are depicted in the left part of Figure~\ref{fig:fogarty-power}. We observe that the frequentist test of \cite{fogarty2014} is outperformed by the new test \eqref{t1} proposed in this paper.
While the differences between the test \eqref{t1} and the frequentist test of \cite{fogarty2014} are small in scenarios $6-8$ (because the power of both tests is close to $1$),
we observe substantial advantages of the new test \eqref{t1} for the functions $2-5$.
\begin{figure}[t]
{ \centering
\includegraphics[width = 6cm, height = 6 cm]{size-true-ratio.png}~~
\includegraphics[width = 6cm, height = 6 cm]{groupeffects-size-var.png}
\caption{\it Approximation of the nominal level by the frequentist test proposed by
\cite{fogarty2014} (called Frequentist) and the test \eqref{eq:var-test}
for the hypotheses
\eqref{H0var} with $\zeta_l \equiv 0.5$ and $\zeta_u \equiv 2$. Left part: True ratio of the variance functions in the scenarios
$1,3,5,7$ and $9$ in \eqref{var1}.
Right part: Simulated rejection probabilities. }
\label{fig:fogarty-size-var}}
\end{figure}
\begin{figure}[t]
{ \centering
\includegraphics[width = 6cm, height = 6 cm]{power-true-ratio.png}~~
\includegraphics[width = 6cm, height = 6 cm]{groupeffects-power-var.png}
\caption{\it
Power comparison of the frequentist test proposed by
\cite{fogarty2014} (called Frequentist) and the test \eqref{eq:var-test} for the hypotheses
\eqref{H0var} with $\zeta_l \equiv 1/1.9$ and $\zeta_u \equiv 1.9$.
Left part: True ratio of the variance functions in the scenarios $1-5$ in
\eqref{var2}.
Right part: Empirical rejection probabilities.
}
\label{fig:fogarty-power-var}}
\end{figure}
\subsubsection{Variance functions} \label{sec432}
In this section, we consider the same scenarios as in the previous section and
investigate the finite sample properties of the tests for
the equivalence of the variance functions of the two samples. For the different
scenarios, the decision rule in \eqref{eq:var-test} is applied in order to
decide for the null or the alternative hypothesis which are defined by
\eqref{eq:equi-hypotheses-var} or equivalently by \eqref{H0var}. The results are then compared with those of the
frequentist test developed in \cite{fogarty2014}. In the left part of
Figure~\ref{fig:fogarty-size-var}, we display the true ratio of the variance
functions for each considered scenario in \eqref{var1} as well as the equivalence bands defined by $\zeta_l \equiv 0.5, \zeta_u \equiv 2$.
The right part of this figure displays the simulated nominal level of the
bootstrap test \eqref{eq:var-test} and the frequentist test proposed by
\cite{fogarty2014} on the boundary of the null hypothesis.
Similar to the results for testing the equivalence of the
means, the frequentist test approximates the nominal level slightly better than the
bootstrap test in the scenarios $5$, $7$ and $9$. In scenario $1$, both tests are conservative. The same is true for scenario $3$, but in this case, the empirical rejection probability of the new test is closer to the nominal level.
The true ratio of the variance functions for the considered scenarios under the
alternative hypothesis and the used equivalence bands $\zeta_l \equiv 1/1.9, \zeta_u \equiv 1.9$
are displayed in the left part of
Figure~\ref{fig:fogarty-power-var}. Only the functions $1$ - $ 5$ in
\eqref{var2} are considered since both tests always reject the null hypothesis
in the cases $6-8$.
The rejection rates of the two tests corresponding to the five considered
scenarios are displayed in the right panel of Figure~\ref{fig:fogarty-power-var}.
We observe a superior performance of the new bootstrap test
\eqref{eq:var-test} in all the considered scenarios, where in the scenarios
$1$, $4$ and $5$ the differences are very small.
\bigskip
\medskip
\noindent
{\bf Acknowledgements}
This research was partially supported by the Collaborative Research Center `Statistical modeling
of nonlinear dynamic processes' ({\it Sonderforschungsbereich 823, Teilprojekt A1, C1})
and the Research Training Group `High-dimensional phenomena in probability - fluctuations and
discontinuity' ({\it RTG 2131}).
The authors are grateful to Martina
Stein, who typed parts of this manuscript with considerable technical expertise and to Dr. Colin Fogarty for sending us the code of the procedures developed by \cite{fogarty2014}.
{\small
\section{Introduction}
Given $m\in \mathbb{N}_{\geq 2}$,
let $\{\phi_{i}\}_{i=1}^{m}$ be an iterated function system of contractive similitudes on $\mathbb{R}$ defined as
\begin{equation}\label{Hutchinson formula}
\phi_{i}(x)=\lambda_{i}x+b_{i},\;\; i=1,\ldots ,m,
\end{equation}
where $0<|\lambda_{i}|<1$ and $b_{i}\in \mathbb{R}.$ Hutchinson \cite{Hutchinson} proved that there exists a unique non-empty compact set, denoted by $K$, such that
$$K=\cup_{i=1}^{m}\phi_i(K).$$
We call $K$ the self-similar set or attractor of the IFS
$\{\phi_{i}\}_{i=1}^{m}$; see \cite{Hutchinson} for further details.
In particular, certain $q$-adic expansions can be generated by an IFS. Let $q\in \mathbb{N}_{\geq 3}$ and define a digit set $$ \mathcal{A}\subset \{0,1,2,\cdots,q-1\}.$$
Without loss of generality, we assume $\mathcal{A}$ contains at least two integers.
Then the following set
\begin{equation}\label{IFS}
K=\left\{x=\sum_{i=1}^{\infty}\dfrac{a_i}{q^i}:a_i\in \mathcal{A} \right\}
\end{equation}
is clearly a self-similar set.
Its IFS is $$\left\{\phi_i(x)=\dfrac{x+c_i}{q},c_i\in \mathcal{A}\right\} .$$
In other words, for any $x\in K$, we can find a sequence $(a_i)\in \mathcal{A}^{\mathbb{N}}$ such that
$$x=\sum_{i=1}^{\infty}\dfrac{a_i}{q^i}.$$ We call $(a_i)$ a $q$-adic expansion or coding of $x$. Evidently, if $\mathcal{A}$ does not contain consecutive digits, then any point in $K$ has a unique $q$-adic expansion.
Let $K\subset\mathbb{R}$ be a self-similar set. Usually, $K$ simultaneously contains rational and irrational numbers.
It is natural to ask whether $K$ only consists of irrational numbers. For simplicity, we call a set $E\subset \mathbb{R}$ an irrational set if all the numbers in $E$ are irrational. We say that $E$ is a transcendental set if all the numbers in $E$ are transcendental \cite{Allouche}. We first give a result on the existence of $t$ such that $K+t$ is an irrational set or a transcendental set. We, in fact, can prove more general results. Namely, the following set
$$\{t\in \mathbb{R}: K+t \mbox{ is an irrational set or a transcendental set} \}$$ is large from the measure theoretical or topological perspective.
\begin{prop}\label{key1}
Let $E\subset \mathbb{R}$ be a set with zero Lebesgue measure and let $D\subset\mathbb{R}$ be a countable set. Then for Lebesgue almost every $t$, $$(E+t)\cap D =\emptyset.$$
Moreover, if $E\subset \mathbb{R}$ is a set of first category, then, except for a set of first category of parameters $t$, we have that
$$(E+t)\cap D=\emptyset.$$
\end{prop}
Proposition \ref{key1} implies many results. We only list the following two consequences. The reader may find other similar statements. For instance, analogous results can be obtained from the topological perspective.
\begin{coro}\label{cor1}
Let $\{E_i\}_{i=1}^{\infty}\subset \mathbb{R}$ be a set sequence such that each $E_i, i\in \mathbb{N}^{+}$ is a Lebesgue null set. Then for Lebesgue almost every $t$, we have that
$$\cup_{i=1}^{\infty}(E_i+t) $$ is an irrational set or a transcendental set.
\end{coro}
\begin{coro}\label{cor2}
Let $E$ be a set with zero Lebesgue measure. Then for Lebesgue almost every $t$,
$$E+it=\{e+it:e\in E\},\dfrac{E}{it}=\left\{\dfrac{e}{it}:e\in E\setminus\{0\}\right\}, i\in \mathbb{Z}\setminus \{0\}$$ are simultaneously irrational sets or transcendental sets.
Moreover, for Lebesgue almost every $t$,
$$E+t, E-t, tE, \dfrac{E}{t}$$are simultaneously irrational sets or transcendental sets.
Here $$tE=\{te, e\in E\setminus\{0\}\}.$$
\end{coro}
Although Proposition \ref{key1} gives the existence of $t$, usually it is not easy to find a such $t$ explicitly. The main aim of this paper is to consider some self-similar set $K$, and construct explicitly $t$
such that $K+t$ is an irrational set.
Before we introduce the first concrete example, we define a Liouville number.
Let $q\in \mathbb{N}_{\geq 3}$. We define
$$s(q):=\sum_{n=1}^{\infty}\dfrac{1}{q^{n!}}.$$
In what follows, we shall use this number $s(q)$. Sometimes, we may simply denote it by $s.$
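The base-$q$ digits of $s(q)$ are immediate from the definition: digit $1$ exactly at the factorial positions, $0$ elsewhere. This can be checked with a tiny illustrative script (the function name is ours):

```python
from math import factorial

def liouville_digits(q, m):
    """First m base-q digits of s(q) = sum_{n>=1} q^{-n!}.

    The digit pattern (1 at factorial positions, 0 elsewhere) is the
    same for every base q >= 3; q is kept only for readability.
    """
    facts, n = set(), 1
    while factorial(n) <= m:
        facts.add(factorial(n))
        n += 1
    return [1 if i in facts else 0 for i in range(1, m + 1)]

print(liouville_digits(3, 10))   # -> [1, 1, 0, 0, 0, 1, 0, 0, 0, 0]
```

The rapidly growing gaps between successive $1$'s are exactly what drives the non-periodicity arguments below.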
\begin{thm}\label{Main}
Let $K$ be the self-similar set generated by
$$\left\{f_1(x)=\dfrac{x}{q},f_2(x)=\dfrac{x+q-1}{q}\right\}$$ with $ q\in \mathbb{N}_{\geq 3}.$
Then for any $\ell\in \{1,2,\cdots, q-2\}$ we have
$$K-\ell s=\{x-\ell s:x\in K\}\subset \mathbb{Q}^c.$$
In particular, when $q=3$, $K$ is the middle-third Cantor set (denoted by $C$). Therefore, we have
$$C- s(3) \subset \mathbb{Q}^c.$$
\end{thm}
\begin{remark}
\begin{itemize}
\item[(1)] Our result can be strengthened, i.e. for any $r\in \mathbb{Q}$, we have
$$K-\ell s-r\subset \mathbb{Q}^c.$$
It is easy to see that $K-(q-1)s$ may contain rational numbers. For instance, let $q=3$. Then $C-2s$ contains $0$, because $$x=\sum_{n=1}^{\infty}\dfrac{2}{3^{n!}}\in C$$ and $x-2s=0.$
We do not know whether each $K-\ell s$, $1\leq \ell\leq q-2$, is a transcendental set. However, for any $1\leq \ell\leq q-2$, the set $K-\ell s$ cannot consist only of Liouville numbers, as the set of all Liouville numbers has Hausdorff dimension zero whilst the translation of $K$ has Hausdorff dimension $$\dfrac{\log 2}{\log q}.$$
\item[(2)] The Liouville number $s$ is not the only choice that guarantees $$K-\ell s\subset \mathbb{Q}^c.$$ Such a number can be constructed more generally as follows.
Let $g:\mathbb{N}^{+}\to \mathbb{N}^{+}$ be an integer function such that $$\lim_{n\to \infty}(g(n+1)-g(n))=+\infty.$$
Then we define
$$s^{*}=\sum_{n=1}^{\infty}\dfrac{1}{q^{g(n)}}.$$
In particular, we may let $g(n)=n!, n^n, 2021n^3+2020n^2+2019n, \cdots. $
Under these constructions, we still have the desired result in the above theorem. In what follows, we always let $g(n)=n!$. Some other constructions are allowed.
\item[(3)] Similar results can be obtained for the $p$-adic numbers ($p$ a prime number), since in the $p$-adic setting we still have that
a number in $\mathbb{Q}_p$ is rational if and only if it has an eventually periodic $p$-adic expansion. The proof is almost the same as ours. We leave this to the reader.
\item[(4)] If we want to obtain the following result, i.e. for any $i\in \mathbb{Z}\setminus\{0\}$,
$$K+is^{\prime}\subset \mathbb{Q}^c, $$
then we may let $s^{\prime}$ be a normal number; see \cite{Jiang} for a more general result. The Liouville number $s$ is not sufficient for this purpose: a counterexample is constructed in Remark (1) above.
\end{itemize}
\end{remark}
With similar discussion, we can prove the following result.
\begin{thm}\label{Main1}
Let $K_1$ be the attractor of the following IFS
$$\left\{f_i(x)=\dfrac{x+a_i}{q}\right\},$$ where
$$ a_i\in \mathcal{A}\subset \{0,1,2,\cdots,q-1\}, q\in \mathbb{N}_{\geq 3}.$$
Suppose
\begin{itemize}
\item [(1)] $1\notin \mathcal{A}, q-1\in \mathcal{A}$;
\item [(2)] $\mathcal{A}$ contains no consecutive integers.
\end{itemize}
Then we have
$$K_1-s=\{x-s:x\in K_1\}\subset \mathbb{Q}^c.$$
\end{thm}
\begin{remark}\label{remark}
The proofs of Theorems \ref{Main} and \ref{Main1} are motivated by some classical techniques in the setting of $q$-expansions. In particular, we are motivated by the carries occurring in the Fibonacci base \cite{GS}, i.e. a word $100$ can be replaced by the word $011$ in base $\dfrac{\sqrt{5}+1}{2}$. The reader may find many useful ideas in the references \cite{RSK,KarmaMartijn,MK,EJK,GS,SN}.
\end{remark}
Theorem \ref{Main1} has the following corollary.
\begin{coro}
Let $J$ be the attractor of the IFS
$$f_i(x,y)=\left(\dfrac{x+c_i}{q}, \dfrac{y+d_i}{q}\right),$$ where
$(c_i,d_i)\in \mathcal{A}_1\times \mathcal{A}_2$, and $\mathcal{A}_i, i=1,2,$ satisfies the conditions in Theorem \ref{Main1}.
Then $$J+\left(-s,-s\right)=\left\{\left(x-s, y-s\right):(x,y)\in J\right\}\subset \mathbb{Q}^c \times \mathbb{Q}^c.$$
\end{coro}
We may construct a self-similar set $J_1\subset \mathbb{R}^2$ with zero Lebesgue measure such that for any $(x,y)\in \mathbb{R}^2$, we always have
$$J_1+(x,y)\nsubseteqq \mathbb{Q}^c \times \mathbb{Q}^c .$$ The key point is that a two-dimensional set with zero Lebesgue measure may still contain a line segment.
For different self-similar sets, we may give a uniform translation such that
the translation sets are irrational sets.
\begin{thm}\label{Main2}
Let $J_2$ and $J_3$ be the attractors of the IFS's
$$\left\{f_1(x)=\dfrac{x}{q^n}, f_2(x)=\dfrac{x+q^n-1}{q^n}\right\}$$
and
$$\left\{g_1(x)=\dfrac{x}{q^m}, g_2(x)=\dfrac{x+q^m-1}{q^m}\right\},$$ respectively, where $q\in \mathbb{N}_{\geq 3}, n,m\in \mathbb{N}_{\geq 2}.$ Then
$$J_2+s\subset \mathbb{Q}^c, J_3+s\subset \mathbb{Q}^c,$$
where
$$s=-\sum_{k=1}^{\infty}\dfrac{1}{q^{(mn)k!}}.$$
\end{thm}
\begin{remark}
We can give more similar results. For instance, in Theorem \ref{Main2}, we are allowed to consider a general digit set as defined in Theorem \ref{Main1}. Moreover, in Theorem \ref{Main1}, we may investigate consecutive translations as Theorem \ref{Main}. In other words, we are able to find some results for a general digit set, and to analyze the consecutive translations of some fractal sets. We leave these generalizations to the reader.
\end{remark}
This paper is organized as follows. In Section 2, we give the proofs of the main results. In Section 3, we pose some problems.
\section{Proofs of the main results}
\begin{proof}[\textbf{Proof of Proposition \ref{key1}}]
We only prove the first statement as the second proof is similar.
Suppose that for some $t$, $E+t$ contains some number in $D$, i.e.
$$D\cap (E+t)\neq \emptyset.$$ Therefore, there exists some $a\in D, e\in E $ such that
$$-t=e-a\in E-a.$$ Subsequently,
$$-t\in \cup_{a\in D}(E-a).$$ Since
$$m\big(\cup_{a\in D}(E-a)\big)\leq \sum_{a\in D}m(E-a)=0, $$
where $m(\cdot)$ denotes the Lebesgue measure, the set of such $t$ is a Lebesgue null set, which proves the claim.
\end{proof}
The proofs of Corollaries \ref{cor1} and \ref{cor2} are similar to Proposition \ref{key1}. We leave them to the reader.
\begin{proof}[\textbf{Proof of Theorem \ref{Main}}]
Note that for any $x\in K$ there exists a unique sequence $(x_i)\in\{0,q-1\}^{\mathbb{N}}$ such that
$$x=\sum_{i=1}^{\infty}\dfrac{x_i}{q^i}=:(x_1x_2\cdots )_q,$$ and
the infinite sequence $(x_i) $ is called a (unique) $q$-adic coding of $x$. Define
$$s=\sum_{n=1}^{\infty}\dfrac{1}{q^{n!}}=(s_1s_2\cdots)_q,$$
where
\begin{equation}\label{1}
s_i=\left\{
\begin{array}{ll}
1 & \text{if } i=k! \text{ for some }k\in \mathbb{Z}^{+} \\
0 & \text{otherwise} .
\end{array}
\right.
\end{equation}
Take $x\in K$ with its unique coding $(x_i)\in\{0,q-1\}^{\mathbb{N}}, $ and take $\ell \in \{1,\cdots, q-2\}$. It suffices to prove that
\begin{equation}\label{2}
x-\ell s=\sum_{i=1}^{\infty}\dfrac{x_i-\ell s_i}{q^i}\notin \mathbb{Q}.
\end{equation}
Write
$a_i=x_i-\ell s_i$ for $i\geq 1$. Then $$a_i\in \{0,q-1\}-\{0,\ell\}=\{-\ell,0,q-1-\ell,q-1\}.$$ Furthermore, by (\ref{1}) it follows that $a_i\in \{-\ell, q-1-\ell\}$ if $i=k!$ for some $k\in \mathbb{N},$ and otherwise $a_i\in \{0,q-1\}.$ It is well-known that a number $y\in \mathbb{R}$ is rational if and only if it has an eventually periodic $q$-adic coding in $\{0,1,\cdots, q-1\}^{\mathbb{N}}$. Observe that the sequence $(a_i)$ may have negative digit $-\ell$. Therefore, our strategy to prove (\ref{2}) is that we first construct a $q$-adic coding $(b_i)\in \{0,1,\cdots, q-1\}^{\mathbb{N}}$ of $x-\ell s$ based on the sequence $(a_i)$, and then show that the coding $(b_i)$ is not eventually periodic.
Without loss of generality we may assume that $$x=(x_1x_2\cdots )_q\in K$$ with $x_1=q-1$.
This is due to the self-similar structure of $K$: for any $y\in [0,1/q]\cap K$ there exists some $x\in [1-1/q,1]\cap K$ such that
$$x=y+1-1/q.$$
Therefore, $$x\notin \mathbb{Q}\Leftrightarrow y\notin \mathbb{Q}.$$
In the following we may assume that $(x_i)$ contains infinitely many zeros and infinitely many $(q-1)$'s. Indeed, if $(x_i)$ ends with $(q-1)^{\infty}$ or $0^{\infty}$ (in what follows, $a^k$ denotes the concatenation of $k$ copies of the digit $a$), then $x$ is rational, and hence $x-\ell s\notin \mathbb{Q}$ since $\ell s$ is transcendental; in this case the proof is complete.
Note that $a_1\in \{q-1-\ell, q-1\}.$ If $a_{2!}=-\ell$, then we replace $a_1a_2$ by $b_1a^{\prime}_{2}=(a_1-1)(q-\ell).$ Now we look at the next place where $a_{(k+1)!}=-\ell$ for the smallest such $k\geq 2$. If there exists a largest index $i\in \{k!+1, \cdots, (k+1)!-1\}$ such that $a_i=q-1$, then we replace the word
$$a_{k!}^{\prime}a_{k!+1}\cdots a_{(k+1)!}$$ by
$$b_{k!}\cdots b_{(k+1)!-1} a_{(k+1)!}^{\prime}=a_{k!}^{\prime} a_{k!+1}\cdots a_{i-1}(a_i-1)(q-1)^{(k+1)!-i-1}(q-\ell).$$
Otherwise, $a_i=0$ for all $k!<i<(k+1)!$. Note that by our previous replacement, we have $a_{k!}^{\prime}>0.$ Then we replace the word
$$a_{k!}^{\prime}a_{k!+1}\cdots a_{(k+1)!}$$ by
$$b_{k!}\cdots b_{(k+1)!-1} a_{(k+1)!}^{\prime}=(a_{k!}^{\prime}-1) (q-1)^{(k+1)!-k!-1}(q-\ell).$$
Proceeding this argument indefinitely we obtain a $q$-adic coding $(b_i)\in \{0,1,\cdots, q-1\}^{\mathbb{N}}$ of $x-\ell s.$
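The borrowing steps above can be verified with exact rational arithmetic. As an illustration of our own choosing, take $q=3$, $\ell=1$ and $x=2/3$ (ternary coding $200\cdots$), with $s$ truncated at the terms $n!\leq 6$ so that only finitely many borrows occur; the construction predicts the digits $012222$:

```python
from fractions import Fraction
from math import factorial

def qdigits(y, q, m):
    """First m base-q digits of y in [0, 1), computed exactly."""
    out = []
    for _ in range(m):
        y *= q
        d = int(y)        # equals the floor, since y >= 0
        out.append(d)
        y -= d
    return out

# truncated Liouville number: terms with n! <= 6, i.e. n = 1, 2, 3
s_trunc = sum(Fraction(1, 3 ** factorial(n)) for n in range(1, 4))
print(qdigits(Fraction(2, 3) - s_trunc, 3, 6))   # -> [0, 1, 2, 2, 2, 2]
```

Here $a=(1,-1,0,0,0,-1)$, the first borrow turns $1,-1$ into $0,2$, and the second turns $2,0,0,0,-1$ into $1,2,2,2,2$, in agreement with the computed digits.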
In the following we will prove by contradiction that $(b_i)$ is not eventually periodic. Suppose on the contrary that $$(b_i)=b_1\cdots b_N(b_{N+1}\cdots b_{N+p})^{\infty}$$ for some $N,p\in \mathbb{N}.$ Then
\begin{equation}\label{3}
b_{i+p}=b_i,\forall i\geq N.
\end{equation}
Take $k\geq \max\{p, N\}$. Observe that $b_{k!}\in\{q-\ell-2, q-\ell-1,q-\ell\}$. We split the proof into the following three cases.
Case \uppercase\expandafter{\romannumeral1}. $b_{k!}=q-\ell-2$. Then $b_{k!+1}\cdots b_{(k+1)!-1}=(q-1)^{(k+1)!-k!-1}$. This, together with (\ref{3}), implies that $(b_i)$ ends with $(q-1)^{\infty}$. However, by our construction the sequence $(b_i)$ cannot end with $(q-1)^{\infty}$, leading to a contradiction.
Case \uppercase\expandafter{\romannumeral2}. $b_{k!}=q-\ell-1$. Then $1\leq q-\ell-1\leq q-2.$ By (\ref{3}) it follows that there exists more than one index $i\in \{k!+1, \cdots, (k+1)!-1\}$ such that $b_i=q-\ell-1.$ This again leads to a contradiction with our construction of $(b_i).$
Case \uppercase\expandafter{\romannumeral3}. $b_{k!}=q-\ell$. Then there exists $i\in \{k!+1, \cdots, (k+1)!-1\}$ such that
$$b_i\cdots b_{(k+1)!-1}=(q-2)(q-1)^{(k+1)!-i-1}.$$ If $(k+1)!-i-1\geq p$, then we can conclude by (\ref{3}) that $(b_i)$ ends with $(q-1)^{\infty}$. Otherwise, we can conclude that there exists more than one index
$i\in \{(k+1)!+1, \cdots, (k+2)!-1\}$ such that $b_i=q-2.$ However, either case leads to a contradiction with our construction of $(b_i).$
Therefore, by Cases \uppercase\expandafter{\romannumeral1}-\uppercase\expandafter{\romannumeral3} it follows that $(b_i)$ is not eventually periodic, and thus $x-\ell s=(b_1b_2\cdots )_q\notin \mathbb{Q}. $ This completes the proof.
\end{proof}
\begin{proof}[\textbf{Proof of Theorem \ref{Main1}}]
The proof of Theorem \ref{Main1} is similar to that of Theorem \ref{Main}.
It suffices to prove that for any
$$x\in K\cap [1-1/q,1] $$ we have
$$r=x-s\notin \mathbb{Q}.$$
Let $$x=\sum_{i=1}^{\infty}\dfrac{x_i}{q^i}$$ with $(x_i)\in \mathcal{A}^{\mathbb{N}}, x_1=q-1$. Let $(s_i)\in \{0,1\}^{\mathbb{N}}$ with $s_i=1$ if and only if $i=k!$ for some $k\in \mathbb{N}$.
Then it is easy to check that
$$r=\sum_{i=1}^{\infty}\dfrac{x_i- s_i}{q^i}\in (0,1),$$ and that
$$x_1- s_1=q-1-1\geq 1, x_i-s_i\in\{-1,0,1,2,\cdots, q-1\}, i\geq 2.$$
We denote $b_i:=x_i-s_i. $ Therefore, we have
$$r=\sum_{i=1}^{\infty}\dfrac{b_i}{q^i}.$$
Then we may apply the idea from the proof of Theorem \ref{Main} and
rearrange the expansion $(b_i)$. Without loss of generality, we still use $(b_i)$ to denote the rearranged expansion. We can prove
the following two statements.
\begin{itemize}
\item[(1)]
If $i=k!$ for some large $k\geq 2$, then
$$b_i\in (\mathcal{A}-1)\cap \{0,1,\cdots, q-1\}\subset \mathcal{A}^c.$$
\item[(2)]
If there exists some large $k\geq 3$ such that $k!<i<(k+1)!$, then $$b_i\in \mathcal{A}.$$
\end{itemize}
Therefore, the expansion $(b_i)$ is not eventually periodic, which yields that $r\notin \mathbb{Q}.$
\end{proof}
\begin{proof}[\textbf{Proof of Theorem \ref{Main2}}]
The proof is similar to that of Theorem \ref{Main}. We only give an outline.
It suffices to consider the numbers in
$$ \left[\dfrac{q^n-1}{q^n},1\right]\cap J_2, \left[\dfrac{q^m-1}{q^m},1\right]\cap J_3.$$
More specifically, we let $r_1=x_1+s, r_2=x_2+s$, where
$$x_1\in \left[\dfrac{q^n-1}{q^n},1\right]\cap J_2, x_2\in \left[\dfrac{q^m-1}{q^m},1\right]\cap J_3.$$
Note that $r_1, r_2\in (0,1)$. Therefore, both of them have some $q$-expansions. We denote their expansions by
$$(u_i)\in \{-1,0,q^n-1, q^n-2\}^{\mathbb{N}}, (v_i)\in \{-1,0,q^m-1, q^m-2\}^{\mathbb{N}},$$
respectively. By means of the idea we used in Theorem \ref{Main}, we rearrange these two expansions. Again, we still use $(u_i)$ and $(v_i)$ to denote these expansions.
We can prove the following statements:
\begin{itemize}
\item[(1)] if $i$ is sufficiently large with $i=(mn)k!$ for some large $k\geq 1$, then $u_i=q^n-2$;
\item[(2)] if $i$ is sufficiently large and $(mn)k!<i<(mn)(k+1)!$ for some large $k$, then $u_i\in \{0,q^n-1\}$;
\item[(3)] if $i$ is sufficiently large with $i=(mn)k!$ for some large $k\geq 1$, then $v_i=q^m-2$;
\item[(4)] if $i$ is sufficiently large and $(mn)k!<i<(mn)(k+1)!$ for some large $k$, then $v_i\in \{0,q^m-1\}$.
\end{itemize}
Clearly, under these conditions, $(u_i)$ and $(v_i)$ are not eventually periodic. Therefore,
$r_1$ and $r_2$ are not rational.
\end{proof}
\section{Final remark}
Many further problems can be asked. We list the following questions.
\begin{itemize}
\item [(1)] Can we find a uniform $t$ such that $$C_{1/3}+t, C_{1/4}+t\subset \mathbb{Q}^c,$$ where $C_{1/3}$ and $C_{1/4}$ are the middle-third and middle-$1/2$ Cantor sets, respectively? By Corollary \ref{cor1}, in theory we can find many such $t$. We conjecture that
$$t=-\sum_{n=1}^{\infty}\dfrac{1}{3^{n!}}-\sum_{n=1}^{\infty}\dfrac{1}{4^{n!}}$$ may work.
\item [(2)] How can we find some explicit self-similar set which contains only transcendental numbers?
\item [(3)] We do not know whether $C_{1/3}+e$ or $C_{1/3}+\pi$ is a transcendental set.
\item [(4)] For any $1<q<2$, how can we find an explicit $t$ such that $U_q+t$ is a transcendental set, where $U_q$ denotes the univoque set \cite{RSK,KarmaMartijn,MK,EJK,GS}?
\end{itemize}
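For numerical reference, the conjectured $t$ in question (1) can be evaluated to double precision from only a few terms of each series, since the terms shrink super-exponentially (a quick sketch, not evidence for the conjecture):

```python
from fractions import Fraction
from math import factorial

# Partial sums of t = -sum 3^{-n!} - sum 4^{-n!}; terms beyond n = 5
# lie far below double precision, so five terms per series suffice here.
t = -sum(Fraction(1, 3 ** factorial(n)) for n in range(1, 6)) \
    - sum(Fraction(1, 4 ** factorial(n)) for n in range(1, 6))
print(float(t))
```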
\section*{Acknowledgements}
We are grateful to the anonymous referees for many suggestions and comments.
This work is supported by the National Natural Science Foundation of China (No. 11701302) and by the Zhejiang Provincial Natural Science Foundation of China (No. LY20A010009). The work is
also supported by the K.C. Wong Magna Fund of Ningbo University.
\section*{Priority 1: Make consent \emph{meaningful}~--~or abandon it}\addcontentsline{toc}{section}{Priority 1: Make consent \emph{meaningful}~--~or abandon it}
Although often claimed otherwise, the GDPR \emph{does not} require a broad implementation of `cookie banners'. EU and UK data protection principles have hardly changed since the GDPR came into force in May 2018 and were already part of the Data Protection Directive 1995. The requirements regarding `cookie banners' additionally result from Art 5(3) of the ePrivacy Directive as amended in 2009, not from the GDPR.
The recent flood of cookie banners can rather be explained by the fact that the potential sanctions for data protection violations have drastically increased with the GDPR, causing discontent within the online data industry. The conditions for consent have also been tightened and existing standards have been clarified in line with case law. User consent must now be `freely given, specific, informed and unambiguous' (Recital 32 GDPR). However, this is rarely the case in practice~\cite{nouwens_dark_2020,matte_cookie_2019,kollnig2021_consent,nguyen_share_first_consent_2021}. A significant proportion of current `cookie banners' are thus in violation of the GDPR.
The designation of those \emph{consent} banners as `cookie banners' can further be interpreted as misinformation. For example, Facebook implements a pop-up on its website titled `Allow the use of cookies from Facebook in this browser?' It is only in the accompanying Cookie Policy that Facebook clarifies that `cookies' do not only refer to cookies but also that other `technologies, including data that we store on your web browser or device, identifiers associated with your device and other software, are used for similar purposes.' The online advertising industry today uses a variety of technologies to track user activity across various apps and websites~--~such as fingerprinting (i.e.~using browser characteristics such as time zone, language and operating system) and email hashing (i.e.~sending email addresses from non-Facebook websites to Facebook even if the user does not use Facebook). This collection of data about websites and apps~--~tracking~--~is widespread, as found by numerous pieces of research~\cite{binns_third_2018,kollnig2022_iphone_android,razaghpanah_apps_2018}. Meanwhile, the term `cookie' sounds innocuous and is widely used by the industry.
Overall, a considerable part of the `cookie banners' on the internet aims to misinform and frustrate internet users vis-à-vis the GDPR rather than to implement the law's requirements~\cite{Veale_Nouwens_Santos_2022}.
There remains significant work for authorities and other organisations to tackle non-compliant implementations of consent and make it meaningful. Indeed, ample research suggests that this may not be possible at all~\cite{solove_privacy_2012,barocas2009notice,bietti_consent_2020}, in part because individuals will never be sufficiently `informed'~--~as is required by the GDPR~--~about the opaque data practices of large technology companies~\cite{norwegian_consumer_council_out_2020}.
\section*{Priority 2: Better, bolder communication}\addcontentsline{toc}{section}{Priority 2: Better, bolder communication}
Due to the continued uncertainty and misinformation regarding the GDPR, the current way of working of data protection and other public authorities has created a vacuum of knowledge and authority that has been successfully occupied by third parties with strong self-interests. This way of working in the EU is often characterised as bureaucratic and apolitical, resulting from a lack of a transnational public sphere in Europe. However, without a European public sphere and debate, political legitimacy in the Habermasian sense is difficult, if not impossible.
The end result is problematic for data protection because it fuels a negative and dismissive mood among citizens~--~including those individuals who are responsible for the practical implementation of the GDPR~--~towards the competence of public authorities in digital matters. Data protection and other public authorities should counter this perception boldly and decisively. This applies both to new digital initiatives and existing laws such as the GDPR.
\section*{Priority 3: Clear technical standards, visualisations and reference code}\addcontentsline{toc}{section}{Priority 3: Clear technical standards, visualisations and reference code}
As part of better communication, \emph{clear, reliable and actionable technical standards} should be considered. Unfortunately, developers often do not know how to comply~\cite{anirudhchi2021,mhaidli_we_2019,sirur_are_2018}, so there is a need to clarify what forms of data processing are permitted and how this should be implemented in software.
Currently, the expectation from the authorities is that software developers will resolve important issues related to the implementation of the GDPR themselves~--~by studying the relevant legislation and rulings. This assumption is unrealistic, at least for smaller software companies~\cite{sirur_are_2018}. In addition, the European Data Protection Board and the ICO regularly publish explanatory notes on important aspects of the GDPR. This usually involves the publication of long texts of legalese. The target audience of these publications is thus primarily legal, especially courts, but not the individuals tasked with the practical implementation of the law.
It is certainly important to explain the legal dimensions of the GDPR and to pursue this through legal methodology, particularly by publishing explanatory legal texts. At the same time, it seems that authorities too often hide their lack of authority and technical expertise behind overly formal communication and shy away from clear specifications. As a result, a significant part of the interpretation of the GDPR currently falls to the courts. Unfortunately, this approach undermines a swift and effective implementation of the GDPR and is unsuitable to keep pace with rapid technological change. Code can be changed and rolled out to users worldwide in a matter of minutes. For effective IT regulation, the (ambitious) goal must be to act similarly agile.
From a technical perspective, it is almost naïve to assume that legal text could be translated more or less directly into code. Instead, in IT, \emph{requirements specification} provides a decades-old approach to describing and building IT systems. A common standard was first published by the IEEE (Institute of Electrical and Electronic Engineers) in 1984; the latest version is ISO/IEC/IEEE 29148:2018 from 2018. Requirements can be both technical and non-technical as well as specific and less specific. There is no reason why similar requirements cannot be formulated for core elements of the GDPR and other IT law. This could be done in particular for the implementation of consent and should be accompanied by visualisations and reference code where possible. In the context of the amendment of the ePrivacy Directive in 2009, the EU even provided visualisations and reference code in the past, but did not maintain them over the years and discontinued them after the introduction of the GDPR.
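As a toy illustration of what such a requirements-plus-reference-code pairing could look like~--~purely hypothetical, not an official schema or endorsed interpretation~--~the four conditions of Recital 32 GDPR can be restated as a machine-checkable record:

```python
from dataclasses import dataclass

# Illustrative sketch only: a machine-checkable restatement of the four
# consent conditions of Recital 32 GDPR ('freely given, specific,
# informed and unambiguous'). All field names are hypothetical.
@dataclass
class ConsentRecord:
    purposes: list                      # specific: one decision per purpose
    information_shown: bool             # informed
    affirmative_act: bool               # unambiguous: no pre-ticked boxes
    refusable_without_detriment: bool   # freely given

    def is_valid(self) -> bool:
        return (len(self.purposes) > 0
                and self.information_shown
                and self.affirmative_act
                and self.refusable_without_detriment)

granted = ConsentRecord(["analytics"], True, True, True)
pre_ticked = ConsentRecord(["advertising"], True, False, True)
print(granted.is_valid(), pre_ticked.is_valid())
```

Such a record says nothing by itself about legal validity; its point is that each requirement becomes an explicit, testable field rather than a paragraph of legalese.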
\section*{Priority 4: Sufficient resources for authorities}\addcontentsline{toc}{section}{Priority 4: Sufficient resources for authorities}
There are many reasons for the lack of implementation of the GDPR in practice. One important reason is the continued lack of resources of data protection authorities~\cite{edpb_funding,lynskey_grappling_2019}. This refers to both financial resources and (technical) expertise. For example, there has been virtually no action by the responsible authorities against the documented data protection problems in mobile apps. A second reason is the one-stop-shop approach of the GDPR in the EU. This approach currently leads to a race to the bottom between member states in terms of negligent implementation of the GDPR. In particular, Ireland, where most of the major tech companies in Europe are based (including Microsoft, Alphabet/Google and Meta/Facebook), has been criticised in this regard~\cite{mcintyre_regulating_2021,iccl_ads}. A third reason is the still-evolving case law in the courts.
The problem of the lack of practical enforcement of the GDPR has been recognised by lawmakers and is being addressed in new EU digital legislation. Under the DMA and DSA, enforcement against big tech no longer rests primarily with Ireland~--~which was de facto a single point of failure under the GDPR~--~but with the EU Commission. In addition, technology companies will be required to subsidise enforcement financially.
A key challenge will remain recruiting the necessary technical talent for public institutions, most of whom currently work for the same technology companies and are needed to keep pace with private industry in terms of expertise and technical understanding. In the past, European legislators have not always maintained an air of technical competence. One example is the planned EU AI Act, which is supposed to regulate AI applications. However, the definition of AI applications in the Commission's first proposal was so broad that it covered almost any computer application. The planned AI rules are derived from EU product safety legislation. This creates the risk of missing the core of AI, which rather lies in the inputs and outputs of the model rather than the product/technology itself. Doubts about the EU legislator's deep technical understanding also arise when reading the GDPR. The law, like its 1995 predecessor, distinguishes between controllers and processors in the processing of personal data. Controllers are those that alone or jointly with others determine the purposes and means of the processing of personal data (Art 4 GDPR). Processors, on the other hand, usually only act at the instruction of the controller. However, today's IT systems are the product of the combined work of many different actors, both large and small. This often makes it almost impossible to distinguish between controllers and processors. This distinction is, however, important because controllers face many more obligations than processors. Moreover, whether and to what degree software development~--~rather than direct data processing~--~entails obligations under the GDPR is not clear~\cite{bygrave_data_2017,jasmontaite_data_2018}. 
As a result of these definitions, which only peripherally deal with the usual processes and distribution of tasks in software development, there are a number of concluded and pending cases regarding the definition of the role of data controller~\cite{ecj_waka,ecj_jw,ecj_fashion,belgium_dpa_tcf}. One solution to this phenomenon was proposed by the Belgian DPA~\cite{belgium_dpa_tcf} in its case against IAB Europe: the DPA decided to define almost all actors in the online advertising business as controllers, i.e.~thousands of different companies. IAB Europe has appealed and the case is currently pending before the ECJ. As an alternative approach, China's Personal Information Protection Law (PIPL) from 2021 only foresees processors of personal data, but no controllers.
Of course, the GDPR is not limited to IT but covers many other areas of our daily lives that involve personal data. Therefore, one could argue that criticism of the GDPR's lack of focus on software development misses the point of the law. However, it is also the case that without technological developments, there would have been little motivation for a revision of EU data protection law (see also Recital 6 GDPR).
\section*{Priority 5: Embrace regulatory technologies}\addcontentsline{toc}{section}{Priority 5: Embrace regulatory technologies}
There are two dominant approaches to enforcing data protection rules in digital systems. The first one is taken by data protection authorities who tend to focus their efforts on a few select cases and companies. The hope is that this will tame the most egregious data practices and that there will be spillover effects across the data practices by other organisations. The second approach is taken by gatekeepers, such as app stores, who conduct some enforcement of data protection rules at scale (e.g.~through their (automated) app review), but publish limited public information about this enforcement, including the \emph{number and nature} of decisions taken~\cite{hoboken2021}. Given the scale of the digital ecosystem and the extent of current violations of data protection rules (as observed in this and other work), both approaches are insufficient. Without the help of regulatory technologies in ensuring compliance in the digital ecosystem, it will be \emph{impossible} to scale operations across the vastness of these digital ecosystems, to fulfil the expectations of individuals in keeping them safe online, and to protect fundamental rights and freedoms.
In the app ecosystem, an important, persisting issue that emerged from my analysis across iOS and Android is the lack of transparency around apps' data practices. This conflicts with the strict transparency requirements for the processing of personal data laid out in the GDPR. Design decisions by Apple and Google currently impede research efforts, such as the application of copyright protection to \emph{every} iOS app~--~even free ones. This is why it is \emph{important to develop and maintain transparency tools}.
A starting point could be expanding my PlatformControl toolkit (\url{https://platformcontrol.org}) to give more up-to-date and detailed insights into apps' privacy and compliance properties.
As part of this, an important field for further study is the development of a cross-platform app instrumentation tool. \emph{Automatic app compliance analysis tools} are neither widely available nor used by regulators and the interested public (though it might be easy to conduct such automatic checks if regulators defined more explicit rules regarding privacy and app design), but they would help keep up with the vastness of the app ecosystem. Such analysis tools would require \emph{reliable and computable metrics for compliance}. While most of this work has focused on the situation in Europe, many promising new pieces of technology regulation have been emerging across the globe, and these need further investigation.
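One of the simplest conceivable computable metrics is whether an app contacts known tracker domains. The sketch below is a toy illustration of that idea only~--~the domain list and observed hosts are made-up examples, and real analysis pipelines (such as the PlatformControl toolkit) are far more involved:

```python
# Toy 'computable compliance metric': flag contacted hosts that fall
# under a small tracker-domain list. Domains and hosts are illustrative.
TRACKER_DOMAINS = {"graph.facebook.com", "app-measurement.com"}

def tracker_contacts(observed_hosts):
    """Return the observed hosts matching a known tracker domain."""
    return sorted(h for h in observed_hosts
                  if any(h == d or h.endswith("." + d)
                         for d in TRACKER_DOMAINS))

hosts = ["api.example.org", "graph.facebook.com", "cdn.example.org"]
print(tracker_contacts(hosts))
```

A regulator-endorsed version of such a check, with an agreed domain list and agreed thresholds, would turn a vague legal duty into something measurable at ecosystem scale.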
\section*{Priority 6: Evolve `legacy' legislation and provide support for research}\addcontentsline{toc}{section}{Priority 6: Evolve `legacy' legislation and provide support for research}
Much research effort has been devoted to analysing privacy in mobile apps. Such research remains challenging, as the creation of the necessary data requires high investments of time and scarce technical expertise~\cite{kollnig_ready_2023}. The fact that analysing privacy issues in apps and other software products is so difficult has an impact not only on my research but also on the work of other researchers and data protection authorities aiming to protect fundamental rights in digital systems~\cite{kollnig_ready_2023}. For example, most data protection authorities currently do not possess independent expertise to analyse compliance issues in mobile apps.
The EU Digital Services Act makes promising progress in supporting research in relation to online platforms and search engines. Its Article 40, for example, obliges `very large online platforms' and `very large online search engines' to allow researchers to analyse `systemic risks'. The concrete implications for research practice, however, remain to be seen. I have, in the context of app research, elaborated on these new legal requirements in a recent pre-print~\cite{kollnig_ready_2023}. It can, however, be expected that clarifications of the law by the highest courts will be necessary and that many years will pass before the law will lead to major changes to the status quo.
Despite all the debates about new IT laws, one must not lose sight of existing laws, such as copyright, patent, and IT security law. Even if such legislation may be less attractive for public and academic debate and thus receives less attention, there is also a great need for improvement here. This was also demonstrated by the research in my PhD thesis, which conducted the first large-scale study into iOS app privacy in about 10 years and avoided legal challenges around Apple's application of DRM to iOS apps~\cite{kollnig2022_iphone_android,kollnig_att_2022}.
The used methods are freely available at \url{https://platformcontrol.org}.
\paragraph{Acknowledgements}
This report is inspired by previous submissions to the UK competition authority~\cite{kollnig_cma_2022} and the Department for Digital, Culture, Media and Sport~\cite{researchers_from_the_human_centred_computing_research_group_department_of_computer_science_university_of_oxford_response_2021} from my research group. An earlier version of this report was published in the Ad Legendum journal of the University Münster (in German)~\cite{kollnig_lehren_2023}.
{\renewcommand*\MakeUppercase[1]{#1}%
\begingroup
\sloppy
\printbibliography
\endgroup
\end{document}
\section{Introduction}
Photonic crystal fibers (PCFs) have been the topic of extensive research in recent years because of their wide range of applications, design flexibility, and distinctive advantages in controlling light, as well as numerous exclusive features such as large effective mode area, high nonlinearity, adjustable dispersion, large birefringence, endlessly single-mode operation, and polarization maintenance at mid-IR wavelengths \cite{1}-\cite{4}. Many optical devices such as polarization splitters/filters, couplers, and wavelength couplers/splitters can be realized using such PCFs, which play an important role in integrated optical systems \cite{5}-\cite{7}. In polarization splitters, an optical signal is normally separated into two orthogonal polarization components \cite{choyon1}-\cite{choyon8}. Aside from these applications, PCFs can also be used in high-speed radio-over-free-space optical technology \cite{7a}. Therefore, multifunctional fibers have gained major attention from the scientific and engineering community for applications in integrated and compact optical systems. In the area of optical communication, many photonic and optical devices such as wavelength-division MUX-DeMUXes and polarization splitters have been thoroughly researched, resulting in newly designed photonic crystal fibers with unique tunable characteristics \cite{8}.
Besides, many research investigations have been devoted to multifunctional DC-PCFs \cite{8}-\cite{9} that maintain desirable properties for optical communications, for example, design flexibility, short coupling length compared to traditional fiber-optic couplers \cite{5}, large birefringence, low splice loss, endlessly single-mode operation \cite{10}-\cite{11}, effective polarization splitting, and also wavelength splitting \cite{12}. A multi-core PCF can additionally provide short coupling lengths in the millimeter range. A PCF with these properties is an attractive candidate for both MUXing (combining the incoming wavelengths) and DeMUXing (dividing the power into multiple wavelengths) \cite{4}.
Currently, mid-IR photonic devices have attracted significant interest due to their potential applications in free-space optical communication, spectroscopic sensing, imaging, and so on \cite{13}-\cite{14}. Mid-IR optical devices, in particular, are widely used for their various capabilities, including power coupling, wavelength splitting, and multiplexing \cite{15}. A variety of approaches have been utilized to develop these devices, including multi-mode interference (MMI) waveguide coupling, laser-inscribed glass waveguides, and mid-IR fibers \cite{16}-\cite{17}. In particular, MMI waveguide couplers are limited in their operating bandwidth as a consequence of their practically achievable Fermi level. To mitigate this issue, mid-IR photonic crystal fibers (PCFs) have been found to be the best choice for developing couplers and multiplexers due to their short coupling length and greater design flexibility \cite{17}-\cite{18}.
Due to their flexible structural design, some PCF-based polarization splitters have shown improved performance in the near-infrared region (0.7-2 $\mu m$). In 2015, Fan et al. \cite{19} improved the coupling properties of a soft-glass DC-PCF with a fiber length of 52.29 $mm$ by using a high-refractive-index \ce{As2S3} core. In 2016, Zhao et al. \cite{20} designed a polarization splitter with two fluorine-doped cores and achieved a fiber length of 52.8 $mm$. In 2019, Hagras et al. \cite{10} used chalcogenide glass (\ce{As2S3}) and nematic liquid crystal to achieve 1.55 $\mu m$ and 1.30 $\mu m$ polarization splitters with fiber lengths of 83 $\mu m$ and 166 $\mu m$, respectively. In a recent study by Rahman et al. \cite{8}, in 2019, an elliptical DC-PCF with silica material was used to achieve a polarization splitter at 1550 $nm$ with a fiber length of 39.8 $mm$, and it was used as a WDM MUX-DeMUX for separating the 1300 $nm$ and 1550 $nm$ wavelengths in the X and Y polarization modes at 9.9 $mm$ and 16.65 $mm$ fiber lengths, respectively. All of the above research (Refs. \cite{8}, \cite{10}, \cite{19}-\cite{20}) was conducted in the near-infrared region (0.7-2 $\mu m$), whereas this proposed work, for the first time, has been conducted in the mid-IR region (5-13 $\mu m$) using chalcogenide elliptical DC-PCFs with an improved shorter coupling length, shorter fiber lengths for the polarization splitter and WDM MUX-DeMUX, and low splice loss, for practical applications in integrated and compact optical and photonic communication systems as a short-length multifunctional device.
The remaining sections of the paper are arranged as follows. Section \ref{sys} explains the choice of \ce{As2S3} and \ce{As2Se3} as the two different background materials for our proposed DC-PCF design. Section \ref{design} elaborates the design parameters along with the simulation methodology using COMSOL Multiphysics and explains the linear optical properties of the DC-PCF. Section \ref{appl} shows the possible applications of our proposed DC-PCF as a short-length multifunctional device, and finally, the paper is summarized in Section \ref{conclusion}.
\section{\textbf{Material Selection for mid-IR}}\label{sys}
PCFs in the mid-IR range have been designed so far with transparent fluoride, telluride, and chalcogenide glasses (ChG). In particular, ChGs exhibit low losses and a high level of transparency up to a wavelength of 25 $\mu m$ while possessing a nonlinearity of 1000 times that of silica. As a result, they can be used as infrared optical fibers or waveguides \cite{23}-\cite{24}. Aside from the wide transmission windows in the mid-IR range, ChGs also have very large nonlinear and linear refractive indices \cite{25}.
In our study, we have used ChGs as background material for the PCF. The chalcogens in ChGs include one or more elements from the periodic table (e.g., sulphur, selenium, tellurium, except for oxygen) that are covalently bonded to other elements such as As, Ge, Sb, Ga, Si, or P. In this study, we have used \ce{As2Se3} and \ce{As2S3} as chalcogenide-based background materials. The reasons for using these chalcogenides as background materials are that they have a wider transmission window than silica, a relatively constant material loss, and they are more practical in use \cite{26}. Infrared transmission, optical fibers for optical sensors, and telecommunication applications benefit from the low intrinsic material loss of \ce{As2Se3} chalcogenide glass \cite{27}. With an attenuation coefficient of less than 1 $cm^{-1}$, \ce{As2Se3} glass has excellent optical transparency between 0.85 and 17.5 $\mu m$ \cite{28}. The properties of ChGs based on \ce{As2S3} have attracted intense research attention due to their high Kerr nonlinearity $n_2$ (100-500 times greater than silica glass), low linear loss, and small two-photon absorption (TPA) coefficient \cite{29}. In addition, here we have used the specifications of the commercially available Femtofiber Dichro mid-IR laser source from \textit{Toptica Photonics} to generate light with a tunable wavelength in the range of 5-13 $\mu m$ (23-60 THz) with a repetition rate of 80 MHz and an emission bandwidth greater than 400 $cm^{-1}$ \cite{30}.
\section{\textbf{Design and Analysis of DC-PCF}} \label{design}
In Fig. \ref{PCF}, our proposed DC-PCF structure is shown, which has elliptical holes surrounding the cores. The design has an outer air-hole diameter $d = 2.8$ $\mu m$; pitch $\Lambda = 3.5$ $\mu m$; and core separation $C = 2\Lambda = 7$ $\mu m$. As shown in the figure, the major and minor axes are 14 $\mu m$ and 2.8 $\mu m$ for the elliptical air hole parallel to the axis connecting the cores, and 5.4 $\mu m$ and 2.8 $\mu m$ for the elliptical air hole perpendicular to that axis, respectively. The ``stack-and-draw'' process is used for manufacturing such elliptical air-hole PCFs, as presented in Ref. \cite{31}. To simulate the design numerically, COMSOL Multiphysics has been used, where the finite element method (FEM) was applied with triangular edge elements and a cylindrical perfectly matched layer (PML) boundary condition. The structure of the PCF in Fig. \ref{PCF} has been developed on the basis of the structures in \cite{8} and \cite{32}. Light propagates through the DC-PCF in four different types of supermodes, which differ in their electric field distribution with respect to horizontal and vertical polarization. Fig. \ref{pol} illustrates the fundamental polarization modes for the \ce{As2S3} and \ce{As2Se3} DC-PCFs.
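Although the relation is not restated in this section, the coupling length of a dual-core fiber follows from the even/odd supermode effective indices via the standard two-core coupler formula $L_c=\lambda/\left(2\,|n_{\mathrm{even}}-n_{\mathrm{odd}}|\right)$, evaluated separately for each polarization. A minimal sketch (the index values below are placeholders for illustration, not our FEM results):

```python
def coupling_length_um(lam_um, n_even, n_odd):
    # Standard dual-core coupler relation: L_c = lambda / (2 * |dn|),
    # evaluated separately for the X- and Y-polarized supermode pairs.
    return lam_um / (2.0 * abs(n_even - n_odd))

# Placeholder supermode indices (NOT the FEM results of this work):
Lc = coupling_length_um(8.0, 2.3880, 2.3860)
print(Lc)
```

This makes explicit why a larger even/odd index splitting (stronger core coupling) yields a shorter device.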
\begin{figure}
\centering
\includegraphics[width=3.5in,height=3in,keepaspectratio]{PCF.png}
\caption{A 2D cross-section of the proposed chalcogenide DC-PCF incorporating elliptical air holes surrounding the cores with PML at the boundary, and $C=2\Lambda$. In the big elliptical air hole: major (horizontal) axis = 14 $\mu m$, minor (vertical) axis = 2.8 $\mu m$; in the small elliptical air hole: major (vertical) axis = 5.4 $\mu m$, minor (horizontal) axis = 2.8 $\mu m$. In addition to elliptical air holes, the structure has two more types of air holes, inside (center) and outside of the elliptical air holes. The diameter of the outer air hole is $d = 2.8$ $\mu m$, and of the center air hole 1.9 $\mu m$. Here, the separation between outer air holes (center-to-center distance, or pitch) is $\Lambda = 3.5$ $\mu m$, and the DC-PCF cores are separated by $C$.}\label{PCF}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=6in,height=5in,keepaspectratio]{PolMode.png}
\caption{\centering Electric field distribution of the fundamental supermodes for the proposed \ce{As2S3} and \ce{As2Se3} DC-PCFs of FIG. \ref{PCF} simulated at 8 $\mu m$. (a) X-even (\ce{As2S3}), (b) X-odd (\ce{As2S3}), (c) Y-even (\ce{As2S3}), (d) Y-odd (\ce{As2S3}), (e) X-even (\ce{As2Se3}), (f) X-odd (\ce{As2Se3}), (g) Y-even (\ce{As2Se3}), and (h) Y-odd (\ce{As2Se3}). }\label{pol}
\end{figure*}
\subsection{\textbf{Effective Refractive Index}}
To incorporate the effects of both waveguide and material dispersion, the refractive index of the chalcogenide materials (both \ce{As2S3} and \ce{As2Se3}) is calculated from the Sellmeier formula \cite{35}:
\begin{equation} \label{neff}
\epsilon_{r}(\lambda) = 1+ \sum_{n=1}^{3}\frac{A_{n}\lambda^{2}}{\lambda^{2} - B_{n}}
\end{equation}
Here, the wavelength $\lambda$ is expressed in $\mu m$, $A_{n}$ and $B_{n}$ are the Sellmeier coefficients, and their values for $n = 1, 2, 3$ are given in Table \ref{tab1}. The material refractive index follows as $n_{0} = \sqrt{\epsilon_{r}}$.
\begin{table*}
\centering
\caption{Constants $A_{n}$ and $B_{n}$ ($\mu m^{2}$) ($n = 1, 2, 3$) for \ce{As2S3} and \ce{As2Se3}, used in Thompson's Sellmeier equation \cite{26a}, \cite{31a}.}\label{tab1}
\begin{tabular}{c c c c c c c}
\hline
\textbf{Material} & \textbf{$A_{1}$} & \textbf{$A_{2}$} & \textbf{$A_{3}$} & \textbf{$B_{1}$($\mu$$m^{2}$)} & \textbf{$B_{2}$($\mu$$m^{2}$)} & \textbf{$B_{3}$($\mu$$m^{2}$)}\\ \hline
\ce{As2S3} & 1.8983 & 1.9222 & 0.8765 & 0.0225 & 0.0625 & 0.1225\\
\ce{As2Se3} & 4.994 & 0.120 & 1.710 & 0.0583 & 361 & 0.2332\\
\hline
\end{tabular}
\end{table*}
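As a minimal numerical sketch of Eq. (\ref{neff}) with the coefficients of Table \ref{tab1} (the function and dictionary names below are ours, not from the cited references):

```python
import math

# Sellmeier coefficients from Table 1 (A_n dimensionless, B_n in um^2)
SELLMEIER = {
    "As2S3":  ([1.8983, 1.9222, 0.8765], [0.0225, 0.0625, 0.1225]),
    "As2Se3": ([4.994,  0.120,  1.710 ], [0.0583, 361.0,  0.2332]),
}

def refractive_index(material, lam_um):
    """Material refractive index n0 = sqrt(eps_r) from the Sellmeier sum."""
    A, B = SELLMEIER[material]
    eps = 1.0 + sum(a * lam_um**2 / (lam_um**2 - b) for a, b in zip(A, B))
    return math.sqrt(eps)
```

At 8 $\mu m$ this gives $n_0 \approx 2.39$ for \ce{As2S3} and $n_0 \approx 2.77$ for \ce{As2Se3}, slowly decreasing with wavelength, consistent with the dispersion trend seen in Fig. \ref{nvbl}(a).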
\begin{figure*}
\centering
\includegraphics[width=7in,height=7in,keepaspectratio]{Fig1NVBLc.png}
\caption{\centering Linear optical properties of proposed DC-PCF for both chalcogenide (\ce{As2S3} and \ce{As2Se3}) materials: (a) Effective refractive index (b) Effective V-parameter (c) Birefringence properties (d) Coupling length (e) Confinement loss. }\label{nvbl}
\end{figure*}
In Fig. \ref{nvbl}(a), the effective refractive index ($n_{\text{eff}}$) of the even supermodes for both horizontal and vertical polarizations of both chalcogenide materials has been plotted as a function of the operating wavelength. As the wavelength increases, the modal field spreads further out of the core regions, so light confinement weakens and $n_{\text{eff}}$ decreases. Fig. \ref{nvbl}(a) additionally indicates that, as the wavelength increases, the difference between the effective indices of the X and Y supermodes grows, resulting in increased birefringence in the longer-wavelength region.
\subsection{\textbf{Effective V-Parameter}}
The normalized frequency parameter of a fiber, also called the V-parameter, is important for characterizing a PCF. The effective V-parameter, $V_{\text{eff}}$, can be computed from the following equation \cite{11}:
\begin{equation}
V_{\text{eff}} = 2\pi\frac{\Lambda}{\lambda}\sqrt{n_{0}^{2} - n_{\text{eff}}^{2}}
\end{equation}
Here, $n_{0}$ is the chalcogenide refractive index obtained from the Sellmeier equation (\ref{neff}), while $n_{\text{eff}}$ is the effective refractive index obtained from the modal analysis. Fig. \ref{nvbl}(b) shows how the effective V-parameter changes with wavelength ($\lambda$). The requirement for single-mode operation in a hexagonal- or triangular-lattice PCF is $V_{\text{eff}}\leq4.1$ \cite{11}. Thus, Fig. \ref{nvbl}(b) illustrates that the proposed DC-PCF is single-moded over the 5-13 $\mu m$ range for \ce{As2S3} and up to 10 $\mu m$ for \ce{As2Se3}, beyond which \ce{As2Se3} deviates from single-mode behavior according to Ref. \cite{11}. It should be noted that the $V_{\text{eff}}$ threshold for single-mode operation depends on the geometry of the PCF, the pitch, the diameter-to-pitch ratio, the wavelength, etc., and our proposed dual-core structure for the mid-IR region differs considerably from that of Ref. \cite{11}. Additionally, the confinement loss of both \ce{As2S3} and \ce{As2Se3} is effectively zero for wavelengths up to 10 $\mu m$, as depicted in Fig. \ref{nvbl}(e). The attenuation (confinement loss) due to leakage of the optical field into the cladding is calculated using the following formula \cite{36}:
\begin{equation}
\text{Confinement loss} = \frac{40\pi}{\ln{(10)}\lambda}\text{Im}(n_{\text{eff}}) \hspace{2mm}\text{(dB/m)}
\end{equation}
Here, Im($n_{\text{eff}}$) is the imaginary component of the effective refractive index. Hence, Fig. \ref{nvbl}(b) confirms that our proposed DC-PCFs are single-moded over the considered wavelength region up to 10 $\mu m$ and can be used in the wavelength windows of interest for optical communications.
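The two criteria above can be sketched as follows; the value $n_{\text{eff}} = 2.30$ is an illustrative placeholder (the actual effective indices come from the FEM modal analysis), and the function names are ours:

```python
import math

def v_parameter(pitch_um, lam_um, n0, n_eff):
    """Effective V-parameter; single-mode operation requires V_eff <= 4.1."""
    return 2 * math.pi * pitch_um / lam_um * math.sqrt(n0**2 - n_eff**2)

def confinement_loss_db_per_m(lam_um, im_n_eff):
    """Confinement loss (dB/m) from the imaginary part of n_eff;
    the wavelength is converted to meters so that the result is in dB/m."""
    lam_m = lam_um * 1e-6
    return 40 * math.pi / (math.log(10) * lam_m) * im_n_eff

# Illustrative values only: n0 from the Sellmeier fit, n_eff assumed.
V = v_parameter(pitch_um=3.5, lam_um=8.0, n0=2.388, n_eff=2.30)
print(f"V_eff = {V:.2f}, single-mode: {V <= 4.1}")
```

A purely real $n_{\text{eff}}$ (Im($n_{\text{eff}}$) $= 0$) gives exactly zero confinement loss, matching the flat region of Fig. \ref{nvbl}(e).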
\subsection{\textbf{Coupling Length}}
The fiber length over which the optical power is fully transferred from one core to the other is the coupling length ($L_c$), derived from the following equation \cite{32}:
\begin{equation}
L_{C} = \frac{\pi}{\beta_{even} - \beta_{odd}}
\end{equation}
where $\beta_{even}$ and $\beta_{odd}$ are the propagation constants of the symmetric (even) and anti-symmetric (odd) supermodes of a given polarization, respectively.
According to Fig. \ref{nvbl}(d), the coupling length decreases as the wavelength increases. At a wavelength of 9 $\mu m$, the coupling lengths of the horizontal ($X_{even}$) / vertical ($Y_{even}$) supermodes are 0.249 mm/0.360 mm for \ce{As2S3} and 0.992 mm/1.496 mm for \ce{As2Se3}. Figs. \ref{pol}(a) and \ref{pol}(e) illustrate why the X-supermode coupling length is short: the two cores are strongly coupled through horizontal channels in the core regions. Because the cores are aligned along the x-axis, the X-polarized mode couples more strongly and therefore has a shorter coupling length than the Y-polarized mode. The coupling characteristic of a DC-PCF has potential applications in wavelength-selective systems; i.e., the device can be used as a MUX-DeMUX or power coupler in a WDM system with a coupling length shorter than that of a regular fiber. Moreover, a short coupling length combined with high birefringence and low confinement loss is desirable for demonstrating polarization-insensitive properties and for wide-band optical communications \cite{2},\cite{4},\cite{32}.
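Since $\beta = 2\pi n_{\text{eff}}/\lambda$, the coupling length reduces to $L_C = \lambda/(2|n_{even} - n_{odd}|)$. A short sketch (the absolute index values are placeholders; only the splitting matters):

```python
import math

def coupling_length_um(lam_um, n_eff_even, n_eff_odd):
    """L_C = pi / (beta_even - beta_odd) with beta = 2*pi*n_eff/lambda,
    which reduces to L_C = lambda / (2 * |n_even - n_odd|)."""
    beta_even = 2 * math.pi * n_eff_even / lam_um
    beta_odd = 2 * math.pi * n_eff_odd / lam_um
    return math.pi / abs(beta_even - beta_odd)

# Illustrative: the reported As2S3 X-even coupling length of 0.249 mm
# (249 um) at 9 um corresponds to a supermode index splitting of
# lam / (2 * L_C) ~= 0.0181.
delta_n = 9.0 / (2 * 249.0)
lc = coupling_length_um(9.0, 2.30, 2.30 - delta_n)   # ~249 um
```

The inverse relation also makes explicit why a larger supermode splitting (stronger inter-core coupling, as for the X-polarization) yields a shorter coupling length.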
\subsection{\textbf{Coupling Length Ratio and Birefringence}}
The coupling length ratio (CLR) is a key parameter for assessing the suitability of a coupling structure as a polarization splitter. This ratio is expressed as follows \cite{1}:
\begin{equation}
CLR = \frac{L_{C,y}}{L_{C,x}}
\end{equation}
Here, $L_{C,x}$ and $L_{C,y}$ are the coupling lengths of the X-mode and Y-mode, respectively. The two orthogonal modes in a fiber can only be separated if the fiber length satisfies $L = mL_{C,x} = nL_{C,y}$, where $m$ and $n$ are positive integers of opposite parity (one even, one odd) \cite{33}. The CLR values for the X and Y supermodes of the proposed DC-PCFs are illustrated in Fig. \ref{clr}. At 8 $\mu m$ and 9 $\mu m$, the CLR for \ce{As2Se3} is found to be 1.454 (or 16/11) and 1.50 (or 15/10), respectively. Similarly, the CLR for \ce{As2S3} is found to be 1.441 (or 36/25) and 1.437 (or 23/16) at 9 $\mu m$ and 10 $\mu m$, respectively.
\begin{figure}
\centering
\includegraphics[width=3.4in,height=7.5in,keepaspectratio]{Fig2CLRB.png}
\caption{Coupling Length Ratio (CLR), $L_{C,\lambda}/L_{C,10}$ and birefringence of the proposed DC-PCFs for: (a) \ce{As2Se3} (b) \ce{As2S3} chalcogenide materials. }\label{clr}
\end{figure}
Using a PCF coupler, two wavelengths $\lambda_1$ and $\lambda_2$ can be effectively separated provided that the coupling lengths $L_{C,\lambda_1}$ and $L_{C,\lambda_2}$ at those wavelengths fulfil one of the following conditions \cite{32}:
\begin{equation}
\frac{L_{C,\lambda_1}}{L_{C,\lambda_2}} = \frac{\text{even integer}}{\text{odd integer}}
\end{equation}
or,
\begin{equation}
\frac{L_{C,\lambda_1}}{L_{C,\lambda_2}} = \frac{\text{odd integer}}{\text{even integer}}
\end{equation}
Fig. \ref{clr} shows the wavelength-dependent CLRs $L_{C,\lambda_1}/L_{C,\lambda_2}$ for both the X and Y supermodes of the DC-PCF structures (\ce{As2Se3} and \ce{As2S3}), with $\lambda_2$ fixed at 10 $\mu$m. To obtain a short-length, high-performance WDM MUX-DeMUX, the optimum value of the ratio $L_{C,\lambda_1}/L_{C,\lambda_2}$ should be 1/2 or 2 \cite{1}. From Fig. \ref{clr}, this ratio is reached at approximately 8.5 $\mu m$ for both DC-PCF structures.
A fiber exhibits birefringence, also called double refraction, when the propagation constants $\beta_x$ and $\beta_y$ of its two orthogonal polarization modes differ. Birefringence splits one light beam into two separate, orthogonally polarized beams refracted at different angles \cite{8}. Fiber birefringence is defined as the difference between the effective refractive indices of the two polarization states, which propagate with different phase velocities, as shown in the following equation \cite{34}. High birefringence is necessary for maintaining two linear orthogonal polarization states over long distances \cite{34}.
\begin{equation}
B_m = \frac{| \beta_x - \beta_y |}{2\pi/\lambda} = |n^{x}_{\text{eff}} - n^{y}_{\text{eff}}|
\end{equation}
Fig. \ref{clr} also depicts the birefringence acquired by our structures for both materials (\ce{As2Se3} and \ce{As2S3}), which is sufficiently high for mid-IR optical communications.
\section{\textbf{Applications}}\label{appl}
According to the results presented above, our proposed DC-PCFs are suitable for use as WDM MUX-DeMUX devices separating the wavelengths of 8 $\mu m$ and 9 $\mu m$ for \ce{As2Se3}, and 9 $\mu m$ and 10 $\mu m$ for \ce{As2S3}. While the CLRs are not ideal for polarization splitting, the developed structures for both chalcogenides fulfill the polarization separation criterion and can thus also be utilized as polarization splitters.
\subsection{\textbf{Polarization Splitter}}
When a fundamental mode with power $P_{in}$ is launched into core $A$, the normalized output powers $P_{out,A}$ and $P_{out,B}$ of cores $A$ and $B$ are given by \cite{35}:
\begin{equation}\label{eq8}
P_{out,A} = P^i_{in}\cos^2\frac{\pi z}{2L^i_C}
\end{equation}
\begin{equation}\label{eq9}
P_{out,B} = P^i_{in}\sin^2\frac{\pi z}{2L^i_C}
\end{equation}
Here, $z$ is the propagation length, $i=x,y$, and $L^i_C$ is the coupling length of the $i$-polarized mode.
\begin{figure}
\centering
\includegraphics[width=3.4in,height=7.5in,keepaspectratio]{Fig3PBS.png}
\caption{Normalized output power of DC-PCF structures as a function of propagation length for: (a) core A, \ce{As2S3} (X-mode) (b) core B, \ce{As2S3} (Y-mode) (c) core A, \ce{As2Se3} (X-mode) (d) core B, \ce{As2Se3} (Y-mode). }\label{PBS}
\end{figure}
In Fig. \ref{PBS}, the normalized power as a function of propagation length in cores $A$ and $B$ of both DC-PCF structures of Fig. \ref{PCF} is shown at a wavelength of 9 $\mu m$. From this figure, we estimate that the DC-PCF structures separate the two orthogonal modes at a distance of $(36 \times 0.249 + 25 \times 0.360)/2 = 8.9$ $mm$ for \ce{As2S3} and at a distance of $(15 \times 0.992 + 10 \times 1.496)/2 = 14.8$ $mm$ for \ce{As2Se3}.
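The splitter operation can be checked directly from Eqs. (\ref{eq8})-(\ref{eq9}) using the \ce{As2S3} coupling lengths quoted in the text (function names are ours): at the common length, an even multiple of $L_{C,x}$ leaves the X-mode in core $A$ while an odd multiple of $L_{C,y}$ sends the Y-mode to core $B$.

```python
import math

def output_powers(z_mm, lc_mm, p_in=1.0):
    """Normalized powers in cores A and B after propagating z,
    following the cos^2 / sin^2 power-exchange equations."""
    p_a = p_in * math.cos(math.pi * z_mm / (2 * lc_mm)) ** 2
    return p_a, p_in - p_a

# As2S3 at 9 um: L_Cx = 0.249 mm (m = 36, even), L_Cy = 0.360 mm (n = 25, odd).
L = (36 * 0.249 + 25 * 0.360) / 2      # splitter length, ~8.98 mm
px_a, px_b = output_powers(L, 0.249)   # X-mode remains mostly in core A
py_a, py_b = output_powers(L, 0.360)   # Y-mode has crossed mostly to core B
```

Because the CLR is only approximately rational, the two conditions cannot be met exactly at the same $z$; the averaged length keeps both residual crosstalk powers small.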
\begin{figure}
\centering
\includegraphics[width=3.4in,height=7.5in,keepaspectratio]{Fig4MuxDeS.png}
\caption{Normalized output power of \ce{As2S3} DC-PCF structure as a function of propagation length for: (a) X-mode, core $A$ ($\lambda = 9 \mu m$) (b) X-mode, core $B$ ($\lambda = 10 \mu m$) (c) Y-mode, core $A$ ($\lambda = 9 \mu m$) (d) Y-mode, core $B$ ($\lambda = 10 \mu m$).}\label{MUX-S}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=3.4in,height=7.5in,keepaspectratio]{Fig5MuxDeSe.png}
\caption{Normalized output power of \ce{As2Se3} DC-PCF structure as a function of propagation length for: (a) X-mode, core $A$ ($\lambda = 8 \mu m$) (b) X-mode, core $B$ ($\lambda = 10 \mu m$) (c) Y-mode, core $A$ ($\lambda = 8 \mu m$) (d) Y-mode, core $B$ ($\lambda = 10 \mu m$).}\label{MUX-Se}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=2.5in,height=2.5in,keepaspectratio]{spl.png}
\caption{Splice loss variation of DC-PCF structures with respect to wavelength for fundamental even supermodes.}\label{spl}
\end{figure}
\begin{table*}
\centering
\caption{Comparison between \ce{As2S3} and \ce{As2Se3} DC-PCFs as multifunctional devices used in optical communications}\label{tab2}
\begin{tabular}{|c|c|c|}
\hline
\textbf{Applications/Parameter} & \textbf{\ce{As2S3 DC-PCF}} & \textbf{\ce{As2Se3 DC-PCF}}\\ \hline
{\centering Polarization Splitter} & $\lambda$ = 9 $\mu m$ & $\lambda$ = 9 $\mu m$ \\
& Fiber length = 8.9 $mm$ & Fiber length = 14.8 $mm$ \\ \hline
{\centering WDM MUX-DeMUX} & separates $\lambda$ = 9 (X-Pol) \& 10 (Y-Pol) $\mu m$ & separates $\lambda$ = 8 (X-Pol) \& 9 (Y-Pol) $\mu m$ \\
& X-mode Fiber length = 3.99 $mm$ & X-mode Fiber length = 10.94 $mm$ \\
& Y-mode Fiber length = 8.28 $mm$ & Y-mode Fiber length = 23.96 $mm$ \\ \hline
{\centering Splice loss (X-even mode)} & 0.42 dB ($\lambda$ = 8 $\mu m$) & 0.92 dB ($\lambda$ = 8 $\mu m$) \\
& 0.26 dB ($\lambda$ = 9 $\mu m$) & 0.71 dB ($\lambda$ = 9 $\mu m$) \\
& 0.18 dB ($\lambda$ = 10 $\mu m$) & 0.55 dB ($\lambda$ = 10 $\mu m$) \\
\hline
\end{tabular}
\end{table*}
\subsection{WDM MUX-DeMUX}
The same equations ((\ref{eq8}) and (\ref{eq9})) can also be used to evaluate the normalized output power of a WDM MUX-DeMUX or wavelength splitter used in optical communications, and this normalized output power as a function of propagation length at the two wavelengths is depicted for both chalcogenide materials in Fig. \ref{MUX-S} and Fig. \ref{MUX-Se}. The chalcogenide \ce{As2S3} DC-PCF separates the 9 $\mu m$ and 10 $\mu m$ wavelengths of the X-mode at a distance of $(25 \times 0.16 + 16 \times 0.249)/2 = 3.99$ $mm$ and of the Y-mode at a distance of $(36 \times 0.23 + 23 \times 0.36)/2 = 8.28$ $mm$, see Fig. \ref{MUX-S}. On the other hand, from Fig. \ref{MUX-Se}, the \ce{As2Se3} DC-PCF separates the 8 $\mu m$ and 9 $\mu m$ wavelengths of the X-mode at a distance of $(11 \times 0.992 + 10 \times 1.1)/2 = 10.94$ $mm$ and of the Y-mode at a distance of $(16 \times 1.496 + 15 \times 1.6)/2 = 23.96$ $mm$. An important finding is that the DC-PCF is more effective as a WDM MUX-DeMUX in the X-mode than in the Y-mode owing to the shorter required fiber length.
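The common-length estimates above follow one simple rule, sketched here with the \ce{As2S3} values quoted in the text (the helper name and the parity assertion are ours):

```python
def common_length_mm(m, lc1_mm, n, lc2_mm):
    """Average fiber length at which m*L_C(lam1) and n*L_C(lam2) nearly
    coincide; m and n must be of opposite parity so that the two
    wavelengths exit from different cores."""
    assert (m + n) % 2 == 1, "m and n must be of opposite parity"
    return (m * lc1_mm + n * lc2_mm) / 2

# As2S3, X-mode: 25 coupling lengths at 10 um (L_C = 0.16 mm) against
# 16 coupling lengths at 9 um (L_C = 0.249 mm):
L_x = common_length_mm(25, 0.16, 16, 0.249)   # ~3.99 mm
```

The X-mode lengths come out shorter simply because the X coupling lengths themselves are shorter, which is what makes the X-polarization the more compact choice for the MUX-DeMUX.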
\subsection{Splice Loss}
When two optical fibers are spliced together, some of the incoming optical power is not transmitted through the splice and is radiated out of the fiber; this is known as splice loss and is calculated as \cite{8}:
\begin{equation}
P_{\text{splice loss}} = -20\log_{10}\frac{2\omega_{\text{SMF}}\, \omega_{\text{DC-PCF}} }{ \omega^{2}_{\text{SMF}} + \omega^{2}_{\text{DC-PCF}} }
\end{equation}
From $ A_{\text{eff}} = \pi \omega^{2} $, we can find $\omega$, the mode field size of the fiber; the mode field diameter of a standard single-mode fiber (SMF) is generally taken to be 10 $\mu m$. (Since the splice-loss formula depends only on the ratio of the two mode field sizes, using radii or diameters consistently gives the same result.)
In Fig. \ref{spl}, we have shown the splice loss for each of the proposed elliptical DC-PCF structures for both chalcogenide materials. This figure demonstrates that the \ce{As2S3} DC-PCF has lower splice loss than the \ce{As2Se3} DC-PCF. Moreover, the splice loss (for the X-even supermode) is around 0.42 dB, 0.26 dB, and 0.18 dB for \ce{As2S3} and 0.92 dB, 0.71 dB, and 0.55 dB for \ce{As2Se3} at the 8 $\mu m$, 9 $\mu m$, and 10 $\mu m$ wavelength windows, respectively. For practical optical communication applications, this is quite acceptable. Table \ref{tab2} summarizes the findings of this study.
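A minimal sketch of the splice-loss evaluation; the DC-PCF mode field size of 7.82 $\mu m$ below is a back-computed illustrative placeholder (chosen to reproduce the quoted 0.26 dB at 9 $\mu m$), not a simulated value:

```python
import math

def mode_field_radius_um(a_eff_um2):
    """w from A_eff = pi * w**2."""
    return math.sqrt(a_eff_um2 / math.pi)

def splice_loss_db(w_smf_um, w_pcf_um):
    """Splice loss between an SMF and the DC-PCF; identical mode field
    sizes give zero loss, and the result depends only on their ratio."""
    return -20 * math.log10(
        2 * w_smf_um * w_pcf_um / (w_smf_um**2 + w_pcf_um**2))

loss = splice_loss_db(10.0, 7.82)   # ~0.26 dB with the assumed sizes
```

The falling loss with wavelength in Fig. \ref{spl} then simply reflects the DC-PCF mode field size approaching the 10 $\mu m$ SMF value as the mode spreads.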
\section{\textbf{Conclusion}}\label{conclusion}
To summarize, we have designed a multifunctional hexagonal-lattice DC-PCF structure incorporating elliptical air holes for two chalcogenide materials (\ce{As2S3} and \ce{As2Se3}), showing highly birefringent and single-moded characteristics in the mid-IR (5-13 $\mu m$) wavelength windows, which is very important for optical communications. Results show that the DC-PCF structures separate the two orthogonal modes, working as polarization splitters at a 9 $\mu m$ wavelength, at lengths of 8.9 $mm$ (for \ce{As2S3}) and 14.8 $mm$ (for \ce{As2Se3}). On top of that, the \ce{As2S3} and \ce{As2Se3} DC-PCFs can also be utilized as WDM MUX-DeMUX devices for both X- and Y-polarization fundamental supermodes, separating the 9 $\mu m$/10 $\mu m$ and 8 $\mu m$/9 $\mu m$ wavelengths, respectively, where the X/Y-mode fiber lengths are found to be 3.99 $mm$/8.28 $mm$ for \ce{As2S3} and 10.94 $mm$/23.96 $mm$ for \ce{As2Se3}. Moreover, the splice loss (for the X-even supermode) is around 0.42 dB, 0.26 dB, and 0.18 dB for \ce{As2S3} and 0.92 dB, 0.71 dB, and 0.55 dB for \ce{As2Se3} at the 8 $\mu m$, 9 $\mu m$, and 10 $\mu m$ wavelength windows, respectively, which is quite low and satisfactory for practical optical communication applications. Hence, these chalcogenide DC-PCFs can be used in integrated and compact optical and photonic communication systems as short-length multifunctional devices. Owing to their unique optical characteristics, these DC-PCFs can also be utilized in mid-IR supercontinuum generation, surface-plasmon-resonance-based PCF sensing applications, telecommunications, and frequency metrology.
\section{Acknowledgments}
No major funding was received for this research. The author declares no conflict of interest.
\label{s0}
Throughout the whole history of humanity many brilliant thinkers
have studied problems related to the idea of infinity (see
\cite{Cantor,Cohen,Conway,Godel,Hardy,Hilbert,Leibniz,Newton,Robinson}
and references given therein). To emphasize the importance of the
subject it is sufficient to mention that the Continuum Hypothesis
related to infinity was included by David Hilbert as Problem
Number One in his famous list of 23 unsolved mathematical
problems (see \cite{Hilbert}) that strongly influenced the
development of Mathematics in the XX-th century.
There exist different ways to generalize traditional arithmetic
for finite numbers to the case of infinite and infinitesimal
numbers (see \cite{Benci,Cantor,Conway,Robinson} and references
given therein). However, the arithmetics developed for infinite
numbers are quite different from the finite arithmetic
we are used to dealing with. Moreover, they very often leave
undetermined many operations involving infinite numbers (for
example, $\infty-\infty$, $\frac{\infty}{\infty}$, sums of
infinitely many items, etc.) or represent infinite
numbers by infinite sequences of finite numbers. In spite of
these crucial difficulties, and due to the enormous importance of the
concept of infinity in science, people try to introduce infinity
into their work with computers. We can mention the IEEE Standard for
Binary Floating-Point Arithmetic containing representations for
$+\infty$ and $-\infty$ and the incorporation of these notions in
interval analysis implementations.
The point of view on infinity accepted nowadays takes its origins
from the famous ideas of Georg Cantor (see \cite{Cantor}) who has
shown that there exist infinite sets having different number of
elements. However, it is well known that Cantor's approach leads
to some situations that often are called by non mathematicians
`paradoxes'. The most famous and simple of them is, probably,
Hilbert's paradox of the Grand Hotel. In a normal hotel having a
finite number of rooms no more new guests can be accommodated if
it is full. Hilbert's Grand Hotel has an infinite number of rooms
(of course, the number of rooms is countable, because the rooms in
the Hotel are numbered). Due to Cantor, if a new guest arrives at
the Hotel where every room is occupied, it is, nevertheless,
possible to find a room for him. To do so, it is necessary to move
the guest occupying room 1 to room 2, the guest occupying room 2
to room 3, etc. In such a way room 1 will be ready for the
newcomer and, in spite of our assumption that there are no
available rooms in the Hotel, we have found one.
This result is very difficult to fully accept for anyone who
is not a mathematician since, in our everyday experience of the
world around us, the part is always less than the whole, and if a
hotel is full there are no free rooms in it. In order to
understand how it is possible to tackle the problem of infinity in
such a way that Hilbert's Grand Hotel would be in accordance with
the principle `the part is less than the whole' let us consider a
study published in \textit{Science} by Peter Gordon (see
\cite{Gordon}) where he describes a primitive tribe living in
Amazonia - Pirah\~{a} - that uses a very simple numeral
system\footnote{ We remind that \textit{numeral} is a symbol or
group of symbols that represents a \textit{number}. The difference
between numerals and numbers is the same as the difference between
words and the things they refer to. A \textit{number} is a concept
that a \textit{numeral} expresses. The same number can be
represented by different numerals. For example, the symbols `3',
`three', and `III' are different numerals, but they all represent
the same number.} for counting: one, two, many. For Pirah\~{a},
all quantities larger than two are just `many' and such operations
as 2+2 and 2+1 give the same result, i.e., `many'. Using their
weak numeral system Pirah\~{a} are not able to see, for instance,
numbers 3, 4, 5, and 6, to execute arithmetical operations with
them, and, in general, to say anything about these numbers because
in their language there are neither words nor concepts for that.
Moreover, the weakness of their numeral system leads to such
results as
\[
\mbox{`many'}+ 1= \mbox{`many'}, \hspace{1cm} \mbox{`many'} +
2 = \mbox{`many'},
\]
which are very familiar to us in the context of views on infinity
used in the traditional calculus
\[
\infty + 1= \infty, \hspace{1cm} \infty + 2 = \infty.
\]
This observation leads us to the following idea: \textit{Probably
our difficulty in working with infinity is not connected to the
nature of infinity but is a result of inadequate numeral systems
used to express numbers.}
In this paper, we describe a new methodology for treating infinite
and infinitesimal quantities (examples of its usage see in
\cite{Sergeyev,Sergeyev_patent,www,Philosophy,Poland,Mathesis,chaos})
having a strong numerical character. Its description is given in
Section~\ref{s1}. The new methodology allows us to introduce in
Section~\ref{s2} a new infinite unit of measure that is then used
as the radix of a new positional numeral system.
Section~\ref{s3} shows that this system allows one to express
finite, infinite, and infinitesimal numbers in a unique framework
and to execute arithmetical operations with all of them.
Section~\ref{s4} discusses some applications of the new
methodology. Section~\ref{s6} establishes relations to some of the
results of Georg Cantor. After all, Section~\ref{s7} concludes the
paper.
We close this Introduction by emphasizing that the goal of the
paper is not to construct a complete theory of infinity and to
discuss such concepts as, for example, `set of all sets'. In
contrast, the problem of infinity is considered from the point of
view of applied Mathematics and theory and practice of
computations -- fields being among the main scientific interests
(see, e.g., monographs \cite{Sergeyev,Strongin_Sergeyev}) of the
author. A new viewpoint on infinity is introduced in the paper in
order to give possibilities to solve new and old (but with higher
precision) applied problems. Educational issues (see
\cite{Mockus_1,Mockus_2,Mathesis}) have also been taken into
account. In this connection, it is worthy to notice that a new
kind of computers -- the Infinity Computer -- able to execute
computations with infinite and infinitesimal numbers introduced in
this paper has been recently proposed and its software simulator
has already been implemented (see
\cite{Sergeyev_patent,www,Poland}).
\section{A new computational methodology}
\label{s1}
The aim of this section is to introduce a new methodology that
would allow one to work with infinite and infinitesimal quantities
\textit{in the same way} as one works with finite numbers.
Evidently, it becomes necessary to define what
\textit{in the same way} means. Usually, in modern Mathematics, when it
is necessary to define a concept or an object, logicians try to
introduce a number of axioms describing the object. However, this
way is fraught with danger because of the following reasons. First
of all, when we describe a mathematical object or concept we are
limited by the expressive capacity of the language we use to make
this description. A more rich language allows us to say more about
the object and a weaker language -- less (recall the Pirah\~{a}, who
are not able to say a word about the number 4). Thus, the development of
the mathematical (and not only mathematical) languages leads to a
continuous necessity of a transcription and specification of
axiomatic systems. Second, there is no guarantee that the
chosen axiomatic system defines `sufficiently well' the required
concept and a continuous comparison with practice is required in
order to check the goodness of the accepted set of axioms.
However, there cannot be again any guarantee that the new version
will be the last and definitive one. Finally, the third limitation
latent in axiomatic systems has been discovered by
G\"odel in his two famous incompleteness theorems (see
\cite{Godel_1931}).
In this paper, we introduce a different, significantly more
applied and less ambitious view on axiomatic systems related only
to utilitarian necessities to make calculations. We start by
introducing three postulates that will fix our methodological
positions with respect to infinite and infinitesimal quantities
and Mathematics, in general. In contrast to the modern
mathematical fashion that tries to make all axiomatic systems more
and more precise (thereby decreasing the degrees of freedom of the
studied part of Mathematics), we just define a set of general rules
describing how practical computations should be executed, leaving
as much space as possible for further changes and developments of
the introduced mathematical language dictated by practice.
Speaking metaphorically, we prefer to make a hammer and to use it
instead of describing what is a hammer and how it works.
Usually, when mathematicians deal with infinite objects (sets or
processes) it is supposed (even by constructivists (see, for
example, \cite{Markov})) that human beings are able to execute
certain operations infinitely many times. For example, in a fixed
numeral system it is possible to write down a numeral with
\textit{any} number of digits. However, this supposition is an
abstraction (courageously declared by constructivists in
\cite{Markov}) because we live in a finite world and all human
beings and/or computers finish operations they have started. In
this paper, this abstraction is not used and the following
postulate is adopted.
\textbf{Postulate 1.} \textit{We postulate existence of infinite
and infinitesimal objects but accept that human beings and
machines are able to execute only a finite number of operations.}
Thus, we accept that we shall never be able to give a complete
description of infinite processes and sets due to our finite
capabilities. Particularly, this means that we accept that we are
able to write down only a finite number of symbols to express
numbers.
The second postulate that will be adopted is due to the following
consideration. In natural sciences, researchers use tools to
describe the object of their study and the used instruments
influence results of observations. When physicists see a black dot
in their microscope they cannot say: the object of observation
\textit{is} the black dot. They are obliged to say: the lens used
in the microscope allows us to see the black dot and it is not
possible to say anything more about the nature of the object of
observation until we change the instrument -- the lens
or the microscope itself -- for a more precise one.
Due to Postulate 1, the same happens in Mathematics when studying
natural phenomena, numbers, and objects that can be constructed by
using numbers. Numeral systems used to express numbers are among
the instruments of observations used by mathematicians. Usage of
powerful numeral systems gives the possibility to obtain more
precise results in mathematics in the same way as usage of a good
microscope gives the possibility to obtain more precise results in
Physics. However, the capabilities of the tools will always be
limited due to Postulate 1. Thus, following natural sciences, we
accept the second postulate.
\textbf{Postulate 2.} \textit{We shall not tell \textbf{what
are} the mathematical objects we deal with; we just shall
construct more powerful tools that will allow us to improve our
capacities to observe and to describe properties of mathematical
objects.}
Particularly, this means that from this applied point of view,
axiomatic systems do not define mathematical objects but just
determine formal rules for operating with certain numerals
reflecting some properties of the studied mathematical objects.
For example, axioms for real numbers are considered together with
a particular numeral system $\mathcal{S}$ used to write down
numerals and are viewed as practical rules (associative and
commutative properties of multiplication and addition,
distributive property of multiplication over addition, etc.)
describing operations with the numerals. The completeness property
is interpreted as a possibility to extend $\mathcal{S}$ with
additional symbols (e.g., $e$, $\pi$, $\sqrt{2}$, etc.) taking
care of the fact that the results of computations with these
symbols agree with the facts observed in practice. As a rule, the
assertions regarding numbers that cannot be expressed in a numeral
system are avoided (e.g., it is not supposed that real numbers
form a field).
After all, we want to treat infinite and infinitesimal numbers
in the same manner as we are used to dealing with finite ones, i.e.,
by applying the philosophical principle of the Ancient Greeks `The
part is less than the whole'. This principle, in our opinion, reflects
very well the organization of the world around us but is not
incorporated in many traditional theories of infinity, where it holds
only for finite numbers.
\textbf{Postulate 3.} \textit{We adopt the principle `The part is
less than the whole' to all numbers (finite, infinite, and
infinitesimal) and to all sets and processes (finite and
infinite).}
Due to this declared applied statement, such concepts as
bijection, numerable and continuum sets, cardinal and ordinal
numbers cannot be used in this paper because they belong to
theories working with different assumptions. However, the approach
proposed here does not contradict Cantor. In contrast, it develops
his deep ideas regarding the existence of different infinite numbers
in a more applied way.
It is important to notice that the adopted Postulates impose also
the style of exposition of results in the paper: we first
introduce new mathematical instruments, then show how to use
them in several areas of Mathematics, introducing each item as
soon as it becomes indispensable for the problem under
consideration.
Let us introduce now the main methodological idea of the paper by
studying a situation arising in practice and related to the
necessity to operate with extremely large quantities (see
\cite{Sergeyev} for a detailed discussion). Imagine that we are in
a granary and the owner asks us to count how much grain he has
inside it. Of course, nobody counts the grain seed by seed
because the number of seeds is enormous.
To overcome this difficulty, people take sacks, fill them with
seeds, and count the number of sacks. It is important that nobody
counts the number of seeds in a sack. If the granary is huge and
it becomes difficult to count the sacks, then trucks or even big
train waggons are used. Of course, we suppose that all sacks
contain the same number of seeds, all trucks -- the same number of
sacks, and all waggons -- the same number of trucks. At the end of
the counting we obtain a result in the following form: the granary
contains 14 waggons, 54 trucks, 18 sacks, and 47 seeds of grain.
Note that if we add, for example, one seed to the granary, we can
count it and see that the granary has more grain. If we take out
one waggon, we are again able to say how much grain has been
subtracted.
Thus, in our example it is necessary to count large quantities.
They are finite but it is impossible to count them directly by
using an elementary unit of measure, $u_0$, (seeds in our
example) because the quantities expressed in these units would be
too large. Therefore, people are forced to behave as if the
quantities were infinite.
To solve the problem of `infinite' quantities, new units of
measure, $u_1,u_2,$ and $u_3,$ are introduced (units $u_1$ --
sacks, $u_2$ -- trucks, and $u_3$ -- waggons). The new units have
the following important peculiarity: all the units $u_{i+1}$
contain a certain number $K_i$\label{p:1} of units $u_{i}$ but
this number, $K_i$, is unknown. Naturally, it is supposed that
$K_i$ is the same for all instances of the units $u_{i+1}$. Thus,
numbers that it was impossible to express using only the initial
unit of measure are perfectly expressible in the new units we
have introduced in spite of the fact that the numbers $K_i$ are
unknown.
This key idea of counting by introduction of new units of
measure will be used in the paper to deal with infinite quantities
together with the idea of separate count of units with different
exponents used in traditional positional numeral systems.
\section{The infinite unit of measure}
\label{s2}
The infinite unit of measure is expressed by the numeral
\ding{172} called \textit{grossone} and is introduced as the
number of elements of the set, $\mathbb{N}$, of natural numbers.
Recall that the usage of a numeral indicating the totality of the
elements we deal with is not new in Mathematics. It is sufficient
to mention the theory of probability (axioms of Kolmogorov) where
events can be defined in two ways. First, as union of elementary
events; second, as a sample space, $\Omega$, of all possible
elementary events (or its parts $\Omega/2, \Omega/3,$ etc.) from
which some elementary events have been excluded (or added in case
of parts of $\Omega$). Naturally, the latter way to define events
becomes particularly useful when the sample space consists of
infinitely many elementary events.
Grossone is introduced by describing its properties (similarly,
in order to pass from natural to integer numbers a new element --
zero -- is introduced by describing its properties) postulated by
the \textit{Infinite Unit Axiom} (IUA) consisting of three parts:
Infinity, Identity, and Divisibility. This axiom is added to
axioms for real numbers (recall that we consider axioms in the sense
of Postulate~2). Thus, it is postulated that associative and
commutative properties of multiplication and addition,
distributive property of multiplication over addition, existence
of inverse elements with respect to addition and multiplication
hold for grossone as for finite numbers\footnote{It is important
to emphasize that we speak about axioms of real numbers in the sense
of Postulate~2, i.e., axioms define formal rules of operations
with numerals in a given numeral system. Therefore, if we want to
have a numeral system including grossone, we should fix also a
numeral system to express finite numbers. In order to concentrate
our attention on properties of grossone, this point will be
investigated later. }. Let us introduce the axiom and then give
comments on it.
\textit{Infinity.}
Any finite natural number $n$ is less than grossone, i.e., $n
<~\mbox{\ding{172}}$.
\textit{Identity.}
The following
relations link \ding{172} to identity elements 0 and 1
\beq
0 \cdot \mbox{\ding{172}} =
\mbox{\ding{172}} \cdot 0 = 0, \hspace{3mm}
\mbox{\ding{172}}-\mbox{\ding{172}}= 0,\hspace{3mm}
\frac{\mbox{\ding{172}}}{\mbox{\ding{172}}}=1, \hspace{3mm}
\mbox{\ding{172}}^0=1, \hspace{3mm}
1^{\mbox{\tiny{\ding{172}}}}=1, \hspace{3mm}
0^{\mbox{\tiny{\ding{172}}}}=0.
\label{3.2.1}
\eeq
\textit{Divisibility.}
For any finite natural number $n$, the sets $\mathbb{N}_{k,n}, 1 \le
k \le n,$ being the $n$th parts of the set, $\mathbb{N}$, of
natural numbers, have the same number of elements indicated by the
numeral $\frac{\mbox{\ding{172}}}{n}$ where
\beq
\mathbb{N}_{k,n} = \{k,
k+n, k+2n, k+3n, \ldots \}, \hspace{5mm} 1 \le k \le n,
\hspace{5mm} \bigcup_{k=1}^{n}\mathbb{N}_{k,n}=\mathbb{N}.
\label{3.3}
\eeq
The first part of the introduced axiom -- Infinity -- is quite
clear. In fact, we want to describe an infinite number, thus, it
should be larger than any finite number. The second part of the
axiom -- Identity -- tells us that \ding{172} behaves with the
identity elements 0 and 1 in the same way as all other numbers do.
In reality, we could even omit this part of the axiom because, due
to Postulate~3, all numbers should be treated in the same way and,
therefore, once we have said that grossone is a number, we have
fixed the usual properties of numbers, i.e., the properties
described in Identity, the associative and commutative properties
of multiplication and addition, the distributive property of
multiplication over addition, and the existence of inverse
elements with respect to addition and multiplication. The third
part of the axiom -- Divisibility -- is the most interesting; it
is based on Postulate~3. Let us first illustrate it by an example.
\begin{example}
\label{e0} If we take $n = 1$, then $\mathbb{N}_{1,1} =
\mathbb{N}$ and Divisibility tells us that the set, $\mathbb{N}$, of
natural numbers has \ding{172} elements. If $n = 2$, we have two
sets $\mathbb{N}_{1,2}$ and $\mathbb{N}_{2,2}$
\beq
\begin{array}{ccccccccccc}
\mathbb{N}_{1,2} = &\hspace{-2mm} \{1, & & 3, & & 5, & & 7, & \ldots &\} \\
& & & & & & & & \\
\mathbb{N}_{2,2} = & \hspace{-5mm} \{ & 2, & & 4, & & 6, & & \ldots &\} \\
\end{array}
\label{3.3.1}
\eeq
and they have $\frac{\mbox{\ding{172}}}{2}$ elements each. If $n
= 3$, then we have three sets
\beq
\begin{array}{cccccccccc}
\mathbb{N}_{1,3} = &\hspace{-2mm}\{1, & & & 4, & & & 7, & \ldots &\} \\
& & & & & & & & \\
\mathbb{N}_{2,3} = & \hspace{-5mm} \{ & 2, & & & 5, & & & \ldots &\} \\
& & & & & & & & \\
\mathbb{N}_{3,3} = & \hspace{-5mm} \{ & & 3, & & & 6, & & \ldots &\} \\
\end{array}\vspace{ 2mm}
\label{3.3.2}
\eeq
and they have $\frac{\mbox{\ding{172}}}{3}$ elements each.
\hfill$\Box$ \end{example}
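Divisibility concerns infinite sets, but its combinatorial content can be illustrated on a finite fragment of $\mathbb{N}$. The following Python sketch (the language, the bound $M$, and the value of $n$ are our illustrative choices, not part of the axiom) checks that the sets $\mathbb{N}_{k,n}$ restricted to $\{1,\ldots,M\}$ are pairwise disjoint, cover the whole fragment, and have equal sizes:

```python
# Finite check of Divisibility: the residue classes N_{k,n} from (3.3),
# cut off at M, partition {1, ..., M} into n parts of equal size.
# M and n are arbitrary finite choices made for this sketch.
M, n = 1000, 4
parts = [set(range(k, M + 1, n)) for k in range(1, n + 1)]
assert set().union(*parts) == set(range(1, M + 1))   # the parts cover the whole
assert sum(len(p) for p in parts) == M               # pairwise disjoint
assert all(len(p) == M // n for p in parts)          # each is an n-th part
```

In the infinite case the same bookkeeping is done symbolically: each of the $n$ parts is assigned $\frac{\mbox{\ding{172}}}{n}$ elements.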
It is important to emphasize that to introduce
$\frac{\mbox{\ding{172}}}{n}$ we do not try to count elements $k,
k+n, k+2n, k+3n, \ldots$ one by one in (\ref{3.3}). In fact, we
cannot do this due to Postulate~1. By using Postulate~3, we
construct the sets $\mathbb{N}_{k,n}, 1 \le k \le n,$ by
separating the whole, i.e., the set $\mathbb{N}$, in $n$ parts
(this separation is highlighted visually in formulae (\ref{3.3.1})
and (\ref{3.3.2})). Again due to Postulate~3, we affirm that the
number of elements of the $n$th part of the set, i.e.,
$\frac{\mbox{\ding{172}}}{n}$, is $n$ times less than the number
of elements of the whole set, i.e., than \ding{172}. In terms of
our granary example \ding{172} can be interpreted as the number
of seeds in the sack. Then, if the sack contains \ding{172} seeds,
its $n$th part contains $n$ times less quantity, i.e.,
$\frac{\mbox{\ding{172}}}{n}$ seeds. Note that, since the numbers
$\frac{\mbox{\ding{172}}}{n}$ have been introduced as the numbers of
elements of the sets $\mathbb{N}_{k,n}$, they are integers.
The new unit of measure allows us to easily calculate the number
of elements of sets that are unions, intersections, differences, or
products of sets of the type $\mathbb{N}_{k,n}$. Due to our
accepted methodology, we do it in the same way as these
measurements are executed for finite sets. Let us consider two
simple examples (a general rule for determining the number of
elements of infinite sets having a more complex structure will be
given in Section~\ref{s4}) showing how grossone can be used for
this purpose.
\begin{example}
\label{e1} Let us determine the number of elements of the set
$A_{k,n} = \mathbb{N}_{k,n} \backslash \{a\},$ $a \in
\mathbb{N}_{k,n}, n \ge 1$. Due to the IUA, the set
$\mathbb{N}_{k,n}$ has $\frac{\mbox{\ding{172}}}{n}$ elements. The
set $A_{k,n}$ has been constructed by excluding one element from
$\mathbb{N}_{k,n}$. Thus, the set $A_{k,n}$ has
$\frac{\mbox{\ding{172}}}{n}-1$ elements. The granary
interpretation can also be given for the number
$\frac{\mbox{\ding{172}}}{n}-1$: the number of seeds in the $n$th
part of the sack minus one seed. For $n=1$ we have
$\mbox{\ding{172}}-1$ interpreted as the number of seeds in the
sack minus one seed. \hfill$\Box$
\end{example}
\begin{example}
\label{e2} Let us consider the following two sets
\[
B_1 = \{ 4, 9, 14, 19, 24, 29, 34, 39, 44, 49, 54, 59, 64, 69, 74,
79,\ldots\},
\]
\[
B_2 = \{ 3, 14, 25, 36, 47, 58, 69, 80, 91, 102, 113, 124, 135,
\ldots\}
\]
and determine the number of elements in the set $B = (B_1 \cap
B_2 ) \cup \{ 3,4,5, 69 \}$. It follows immediately from the IUA
that $B_1 = \mathbb{N}_{4,5}, B_2 = \mathbb{N}_{3,11}$. Their
intersection
\[
B_1 \cap B_2 = \mathbb{N}_{4,5} \cap \mathbb{N}_{3,11} = \{ 14,
69, 124, \ldots\} = \mathbb{N}_{14,55}
\]
and, therefore, due to the IUA, it has
$\frac{\mbox{\ding{172}}}{55}$ elements. Finally, since 69 belongs
to the set $\mathbb{N}_{14,55}$ and 3, 4, and 5 do not belong to
it, the set $B$ has $\frac{\mbox{\ding{172}}}{55}+3$ elements. The
granary interpretation: this is the number of seeds in the $55$th
part of the sack plus three seeds. \hfill$\Box$
\end{example}
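The identification $B_1 \cap B_2 = \mathbb{N}_{14,55}$ used above is an instance of the Chinese remainder theorem for the congruences $k \equiv 4 \pmod 5$ and $k \equiv 3 \pmod{11}$. A minimal Python check on a finite window (the bound is an arbitrary assumption of this sketch):

```python
# Verify on {1, ..., LIMIT} that N_{4,5} ∩ N_{3,11} = N_{14,55}.
LIMIT = 2000
B1 = {k for k in range(1, LIMIT) if k % 5 == 4}    # N_{4,5}  = {4, 9, 14, ...}
B2 = {k for k in range(1, LIMIT) if k % 11 == 3}   # N_{3,11} = {3, 14, 25, ...}
N14_55 = {k for k in range(1, LIMIT) if k % 55 == 14}
assert B1 & B2 == N14_55
assert sorted(B1 & B2)[:3] == [14, 69, 124]
```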
One of the important differences of the new approach with respect
to the non-standard analysis consists of the fact that
$\mbox{\ding{172}} \in \mathbb{N}$ because grossone has been
introduced as the quantity of natural numbers (similarly, the
number 5 being the number of elements of the set $\{1, 2, 3, 4, 5
\}$ is the largest element in this set). The new numeral
\ding{172} allows one to write down the set, $\mathbb{N}$, of
natural numbers in the form
\beq
\mathbb{N} = \{ 1,2,3, \hspace{5mm} \ldots \hspace{5mm}
\mbox{\ding{172}}-3, \hspace{2mm} \mbox{\ding{172}}-2,
\hspace{2mm}\mbox{\ding{172}}-1, \hspace{2mm} \mbox{\ding{172}} \}
\label{4.1}
\eeq
where the numerals
\beq
\ldots \hspace{2mm} \mbox{\ding{172}}-3,
\hspace{2mm}\mbox{\ding{172}}-2, \hspace{2mm}\mbox{\ding{172}}-1,
\hspace{2mm} \mbox{\ding{172}} \label{4.2}
\eeq
indicate \textit{infinite} natural numbers.
It is important to emphasize that in the new approach the set
(\ref{4.1}) is the same set of natural numbers
\beq
\mathbb{N} = \{ 1,2,3, \hspace{2mm} \ldots \hspace{2mm} \}
\label{4.1_calcolo}
\eeq
we are used to dealing with, and the infinite numbers (\ref{4.2})
also belong to $\mathbb{N}$. Both records, (\ref{4.1}) and
(\ref{4.1_calcolo}), are correct and do not contradict each other.
They just use two different numeral systems to express
$\mathbb{N}$. Traditional numeral systems do not allow us to see
infinite natural numbers that we can observe now thanks
to~\ding{172}. Similarly, Pirah\~{a} are not able to see finite
natural numbers greater than~2. In spite of this fact, these
numbers (e.g., 3 and 4) belong to $\mathbb{N}$ and are visible if
one uses a more powerful numeral system. Thus, we have the same
object of observation -- the set $\mathbb{N}$ -- that can be
observed by different instruments -- numeral systems -- with
different accuracies
(see Postulate~2).
Now the following obvious question arises: Which natural numbers
can we express by using the new numeral \ding{172}? Suppose that
we have a numeral system, $\mathcal{S}$, for expressing finite
natural numbers and it allows us to express $K_{\mathcal{S}}$
numbers (not necessarily consecutive) belonging to a set
$\mathcal{N}_{\mathcal{S}} \subset \mathbb{N}$. Note that due to
Postulate~1, $K_{\mathcal{S}}$ is finite. Then, addition of
\ding{172} to this numeral system will allow us to express also
infinite natural numbers $\frac{i\mbox{\small{\ding{172}}}}{n}\pm
k \le \mbox{\ding{172}}$ where $1 \le i \le n,\,\,\, k\in
\mathcal{N}_{\mathcal{S}},\,\,\, n \in \mathcal{N}_{\mathcal{S}}$
(note that since $\frac{\mbox{\small{\ding{172}}}}{n}$ are
integers, $\frac{i\mbox{\small{\ding{172}}}}{n}$ are integers
too). Thus, the more powerful system $\mathcal{S}$ is used to
express finite numbers, the more infinite numbers can be expressed
but their quantity is always finite, again due to Postulate~1.
The new numeral system using grossone allows us to express more
numbers than traditional numeral systems thanks to the introduced
new numerals but, as it happens for all numeral systems, its
abilities to express numbers are limited.
\begin{example} \label{e3} Let us consider the numeral system,
$\mathcal{P}$, of Pirah\~{a} able to express only numbers 1 and 2
(the only difference will be in the usage of numerals `1' and `2'
instead of original numerals $I$ and $II$ used by Pirah\~{a}). If
we add to $\mathcal{P}$ the new numeral \ding{172}, we obtain a
new numeral system (we call it $\widehat{\mathcal{P}}$) allowing
us to express only ten numbers represented by the following
numerals
\beq \underbrace{1,2}_{finite},
\hspace{5mm} \ldots \hspace{5mm}
\underbrace{\frac{\mbox{\small{\ding{172}}}}{2}-2,
\frac{\mbox{\small{\ding{172}}}}{2}-1,
\frac{\mbox{\small{\ding{172}}}}{2},
\frac{\mbox{\small{\ding{172}}}}{2}+1,
\frac{\mbox{\small{\ding{172}}}}{2}+2}_{infinite}, \hspace{5mm}
\ldots \hspace{5mm} \underbrace{\mbox{\ding{172}}-2,
\mbox{\ding{172}}-1, \mbox{\ding{172}}}_{infinite}.
\label{4.2.1}
\eeq
The first two numbers in (\ref{4.2.1}) are finite, the remaining
eight are infinite, and dots show natural numbers that are not
expressible in $\widehat{\mathcal{P}}$. As a consequence,
$\widehat{\mathcal{P}}$ does not allow us to execute such
operations as $2+2$ or to add $2$ to
$\frac{\mbox{\small{\ding{172}}}}{2}+2$ because their results
cannot be expressed in it. Of course, we do not say that results
of these operations are equal (as Pirah\~{a} do for operations
$2+2$ and $2+1$). We just say that the results are not
expressible in $\widehat{\mathcal{P}}$ and it is necessary to take
another, more powerful numeral system if we want to execute these
operations. \hfill$\Box$ \end{example}
Note that crucial limitations discussed in Example~\ref{e3} hold
for sets, too. As a consequence, the numeral system
$\mathcal{P}$ allows us to define only the sets $\mathbb{N}_{1,2}$
and $\mathbb{N}_{2,2}$ among all possible sets of the form
$\mathbb{N}_{k,n}$ from (\ref{3.3}) because we have only two
finite numerals, `1' and `2', in $\mathcal{P}$. This numeral
system is too weak to define other sets of this type because
numbers greater than 2 required for these definitions are not
expressible in $\mathcal{P}$. These limitations have a general
character and are related to all questions requiring a numerical
answer (i.e., an answer expressed only in numerals, without
variables). In order to obtain such an answer, it is necessary to
know at least one numeral system able to express numerals required
to write down this answer.
We are ready now to formulate the following important result being
a direct consequence of the accepted methodological postulates.
\begin{theorem}
\label{t1} The set $\mathbb{N}$ is not a monoid under addition.
\end{theorem}
\textit{Proof.}
Due to Postulate~3, the operation $\mbox{\ding{172}}+1$ gives us as
the result a number greater than \ding{172}. Thus, by definition
of grossone, $\mbox{\ding{172}}+1$ does not belong to $\mathbb{N}$
and, therefore, $\mathbb{N}$ is not closed under addition and is
not a monoid. \hfill$\Box$
This result also means that adding the IUA to the axioms of
natural numbers defines the set of \textit{extended natural
numbers} indicated as $\widehat{\mathbb{N}}$ and including
$\mathbb{N}$ as a proper subset
\beq
\widehat{\mathbb{N}} = \{
1,2, \ldots ,\mbox{\ding{172}}-1, \mbox{\ding{172}},
\mbox{\ding{172}}+1, \ldots , \mbox{\ding{172}}^2-1,
\mbox{\ding{172}}^2, \mbox{\ding{172}}^2+1, \ldots \}.
\label{4.2.2}
\eeq
The extended natural numbers greater than grossone are also
linked to sets of numbers and can be interpreted in terms of
grain.
\begin{example}
\label{e4} Let us determine the number of elements of the set
\[
C =
\{
(a_1, a_2, \ldots, a_m ) : a_i \in \mathbb{N}, 1 \le i \le m
\}.
\]
The elements of $C$ are $m$-tuples of natural numbers. It is
known from combinatorics that if we have $m$ positions
and each of them can be filled in by one of $l$ symbols, the
number of the obtained $m$-tuples is equal to $l^m$. In our case,
since $\mathbb{N}$ has grossone elements, $l =
\mbox{\ding{172}}$. Thus, the set $C$ has $\mbox{\ding{172}}^m$
elements. The granary interpretation: if we accept that the
numbers $K_i$ from page~\pageref{p:1} are such that
$K_i=\mbox{\ding{172}}, 1 \le i \le m-1,$ then
$\mbox{\ding{172}}^2$ can be viewed as the number of seeds in the
truck, $\mbox{\ding{172}}^3$ as the number of seeds in the train
waggon, etc. \hfill$\Box$
\end{example}
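The counting rule used in this example is the ordinary product rule of combinatorics; for finite values of $l$ and $m$ it can be checked directly (the concrete values below are arbitrary choices for this sketch):

```python
from itertools import product

# There are l**m tuples of length m over an alphabet of l symbols;
# in the example above l is grossone and the count becomes grossone**m.
l, m = 5, 3
tuples = list(product(range(1, l + 1), repeat=m))
assert len(tuples) == l ** m
```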
The set, $\widehat{\mathbb{Z}}$, of \textit{extended integer
numbers}\index{extended integer numbers} can be construct\-ed from
the set, $\mathbb{Z}$, of integer numbers by complete analogy
and inverse elements with respect to addition are introduced
naturally. For example, $7\mbox{\ding{172}}$ has its inverse with
respect to addition equal to~$-7\mbox{\ding{172}}$.
It is important to notice that, due to Postulates 1 and 2, the
new system of counting cannot give answers to \textit{all}
questions regarding infinite sets. What can we say, for instance,
about the number of elements of the sets $\widehat{\mathbb{N}}$
and $\widehat{\mathbb{Z}}$? The introduced numeral system based on
\ding{172} is too weak to give answers to these questions. It is
necessary to introduce a more powerful numeral system by
defining new numerals (for instance, \ding{173}, \ding{174}, etc).
We conclude this section by the following remark. The IUA
introduces a new number -- the quantity of elements in the set of
natural numbers -- expressed by the new numeral \ding{172}.
However, other numerals and sets can be used to express the idea of
the axiom. For example, the numeral \ding{182} can be introduced
as the number of elements of the set, $\mathbb{E}$, of even
numbers and can be taken as the base of a numeral system. In this
case, the IUA can be reformulated using the numeral \ding{182} and
numerals using it will be used to express infinite numbers. For
example, the number of elements of the set, $\mathbb{O}$, of odd
numbers will be expressed as $|\mathbb{O}|=|\mathbb{E}|=$
\ding{182} and $|\mathbb{N}|=2 \cdot$ \ding{182}. We emphasize
through this note that infinite numbers (similarly to the finite
ones) can be expressed by various numerals and in different
numeral systems.
\section{Arithmetical operations in the new numeral system}
\label{s3}
We have already started to write down simple infinite numbers and
to execute arithmetical operations with them without concentrating
our attention upon this question. Let us consider it
systematically.
\subsection{Positional numeral system with infinite radix}
Different numeral systems have been developed to describe finite
numbers. In positional numeral systems, fractional numbers are
expressed by the record
\beq
(a_{n}a_{n-1} \ldots a_1 a_0 .
a_{-1} a_{-2} \ldots a_{-(q-1)} a_{-q})_b
\label{3.10}
\eeq
where numerals $a_i, -q \le i \le n,$ are called \textit{digits},
belong to the alphabet $\{ 0, 1, \ldots , b-1 \}$, and the dot is
used to separate the fractional part from the integer one. Thus,
the numeral (\ref{3.10}) is equal to the sum
\beq
a_{n} b^{n} + a_{n-1} b^{n-1} + \ldots + a_1 b^1 +a_0 b^0+
a_{-1} b^{-1} + \ldots + a_{-(q-1)}b^{-(q-1)} + a_{-q}
b^{-q}.
\label{3.11}
\eeq
Record (\ref{3.10}) uses numerals consisting of one symbol
each, i.e., digits $a_{i} \in \{ 0, 1,$ $ \ldots , b-1 \}$, to
express how many finite units of the type $b^{i}$ belong to the
number (\ref{3.11}). Quantities of finite units $b^{i}$ are
counted separately for each exponent $i$ and all symbols in the
alphabet $\{ 0, 1, \ldots , b-1 \}$ express finite numbers.
To express infinite and infinitesimal numbers we shall use records
that are similar to (\ref{3.10}) and (\ref{3.11}) but have some
peculiarities. In order to construct a number $C$ in the new
numeral positional system with base \ding{172}, we subdivide $C$
into groups corresponding to powers of \ding{172}:
\beq
C = c_{p_{m}}
\mbox{\ding{172}}^{p_{m}} + \ldots + c_{p_{1}}
\mbox{\ding{172}}^{p_{1}} +c_{p_{0}} \mbox{\ding{172}}^{p_{0}} +
c_{p_{-1}} \mbox{\ding{172}}^{p_{-1}} + \ldots + c_{p_{-k}}
\mbox{\ding{172}}^{p_{-k}}.
\label{3.12}
\eeq
Then, the record
\beq
C = c_{p_{m}}
\mbox{\ding{172}}^{p_{m}} \ldots c_{p_{1}}
\mbox{\ding{172}}^{p_{1}} c_{p_{0}} \mbox{\ding{172}}^{p_{0}}
c_{p_{-1}} \mbox{\ding{172}}^{p_{-1}} \ldots c_{p_{-k}}
\mbox{\ding{172}}^{p_{-k}}
\label{3.13}
\eeq
represents the number $C$, where all numerals $c_{p_i}\neq0$; they
belong to a traditional numeral system and are called
\textit{grossdigits}. They express finite positive or negative
numbers and show how many corresponding units
$\mbox{\ding{172}}^{p_{i}}$ should be added or subtracted in order
to form the number $C$. Grossdigits can be expressed by several
symbols using positional systems, the form $\frac{Q}{q}$ where $Q$
and $q$ are integer numbers, or in any other finite numeral
system.
Numbers $p_i$ in (\ref{3.13}) called \textit{grosspowers} can be
finite, infinite, and infinitesimal (the introduction of
infinitesimal numbers will be given soon), they are sorted in
the decreasing order
\[
p_{m} > p_{m-1} > \ldots > p_{1} > p_0 > p_{-1} > \ldots >
p_{-(k-1)} > p_{-k}
\]
with $ p_0=0$.
In the traditional record
(\ref{3.10}), there exists a convention that a digit $a_i$ shows
how many powers $b^i$ are present in the number and the radix $b$
is not written explicitly. In the record (\ref{3.13}), we write
$\mbox{\ding{172}}^{p_{i}}$ explicitly because in the new numeral
positional system the number $i$ in general is not equal to the
grosspower $p_{i}$. This gives the possibility to write, for example,
such a number as
$7.6\mbox{\ding{172}}^{244.5}\,34\mbox{\ding{172}}^{32}$ having
grosspowers $p_2=244.5,p_{1}=32$ and grossdigits $c_{244.5}=7.6,
c_{32} = 34$ without indicating grossdigits equal to zero
corresponding to grosspowers less than 244.5 and greater than 32.
Note also that if a grossdigit $c_{p_i}=1$ then we often write
$\mbox{\ding{172}}^{p_i}$ instead of $1\mbox{\ding{172}}^{p_i}$.
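A record of the form (\ref{3.13}) is straightforward to manipulate on a computer. Below is a minimal Python sketch of one possible representation (our own illustrative choice, not an implementation prescribed by the paper): a number is a dictionary mapping each grosspower to its nonzero grossdigit, both stored as exact fractions, and the ASCII token `(1)` stands for the symbol \ding{172}:

```python
from fractions import Fraction

def gross(*pairs):
    """Build a number from (grossdigit, grosspower) pairs; zero
    grossdigits are simply not stored, as in record (3.13)."""
    num = {}
    for digit, power in pairs:
        d = Fraction(str(digit))
        if d != 0:
            num[Fraction(str(power))] = d
    return num

def show(num):
    """Render with grosspowers sorted in decreasing order; the
    token (1) stands for grossone."""
    items = (f"{float(num[p])}(1)^{float(p)}"
             for p in sorted(num, reverse=True))
    return " ".join(items) or "0"

# The number 7.6(1)^244.5 34(1)^32 discussed in the text:
x = gross((7.6, 244.5), (34, 32))
print(show(x))
```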
\textit{Finite numbers} in this new numeral system are represented
by numerals having only one grosspower $ p_0=0$. In fact, if we
have a number $C$ such that $m=k=$~0 in representation
(\ref{3.13}), then due to (\ref{3.2.1}), we have $C=c_0
\mbox{\ding{172}}^0=c_0$. Thus, the number $C$ in this case does
not contain grossone and is equal to the grossdigit $c_0$ being a
conventional finite number expressed in a traditional finite
numeral system.
\textit{Infinitesimal numbers} are represented by numerals $C$
having only negative finite or infinite grosspowers. The following
two numbers are examples of infinitesimals:
$3\mbox{\ding{172}}^{-3.2}$,
$37\mbox{\ding{172}}^{-2}11\mbox{\ding{172}}^{-15}$. The simplest
infinitesimal number is
$\mbox{\ding{172}}^{-1}=\frac{1}{\mbox{\ding{172}}}$ being the
inverse element with respect to multiplication for \ding{172}:
\beq
\frac{1}{\mbox{\ding{172}}}\cdot\mbox{\ding{172}}=\mbox{\ding{172}}\cdot\frac{1}{\mbox{\ding{172}}}=1.
\label{3.15.1}
\eeq
Note that no infinitesimal is equal to zero. In particular,
$\frac{1}{\mbox{\ding{172}}}>0$ because it is the result of division
of two positive numbers. It also has a clear granary
interpretation. Namely, if we have a sack containing \ding{172}
seeds, then one sack divided by the number of seeds in it is equal
to one seed. Vice versa, one seed, i.e.,
$\frac{1}{\mbox{\ding{172}}}$, multiplied by the number of seeds
in the sack, $\mbox{\ding{172}}$, gives one sack of seeds. Note
that the usage of infinitesimals as grosspowers can lead to more
complex constructions, particularly, again to infinitesimals, see
e.g., the number
$1\mbox{\ding{172}}^{\mbox{\tiny\ding{172}}^{-1}}(-1)\mbox{\ding{172}}^0$.
\textit{Infinite numbers} in this numeral system are expressed
by numerals having at least one finite or infinite gross\-power
greater than zero. Thus, they have infinite parts and can also
have a finite part and infinitesimal ones. If power
$\mbox{\ding{172}}^{0}$ is the lowest in a number then we often
write simply grossdigit $c_0$ without $\mbox{\ding{172}}^{0}$,
for instance, we write $23\mbox{\ding{172}}^{14}5$ instead of
$23\mbox{\ding{172}}^{14}5\mbox{\ding{172}}^{0}$.
\begin{example}
\label{e5} The left-hand expression below shows how to write down
numbers in the new numeral system and the right-hand shows how the
value of the number is calculated:
\[
15\mbox{\ding{172}}^{1.4\mbox{\tiny{\ding{172}}}}(-17.2045)\mbox{\ding{172}}^{3}7\mbox{\ding{172}}^{0}52.1\mbox{\ding{172}}^{-6}
=
15\mbox{\ding{172}}^{1.4\mbox{\tiny{\ding{172}}}}-17.2045\mbox{\ding{172}}^{3}+7\mbox{\ding{172}}^{0}+52.1\mbox{\ding{172}}^{-6}.
\]
The number above has one infinite part having the infinite
grosspower, one infinite part having the finite grosspower, a
finite part, and an infinitesimal part. \hfill$\Box$
\end{example}
Finally, numbers having finite and infinitesimal parts can also
be expressed in the new numeral system, for instance, the number
$-3.5\mbox{\ding{172}}^{0}(-37)\mbox{\ding{172}}^{-2}11\mbox{\ding{172}}^{-15\mbox{\tiny{\ding{172}}}+2.3}$
has a finite and two infinitesimal parts, the second of them has
the infinite negative gross\-power equal to
$-15\mbox{\ding{172}}+2.3$.
\subsection{Arithmetical operations }
We start the description of arithmetical operations for the new
positional numeral system by the operation of \textit{addition}
(\textit{subtraction} is a direct consequence of addition and is
thus omitted) of two given infinite numbers $A$ and $B$, where
\beq
A= \sum_{i=1}^{K} a_{k_{i}}\mbox{\ding{172}}^{k_{i}}, \hspace{1cm}
B= \sum_{j=1}^{M} b_{m_{j}}\mbox{\ding{172}}^{m_{j}}, \hspace{1cm}
C= \sum_{i=1}^{L} c_{l_{i}}\mbox{\ding{172}}^{l_{i}},
\label{3.20}
\eeq
and the result $C=A+B$ is constructed by including in it all
items $a_{k_{i}}\mbox{\ding{172}}^{k_{i}}$ from $A$ such that
$k_{i} \neq m_{j},1 \le j \le M,$ and all items
$b_{m_{j}}\mbox{\ding{172}}^{m_{j}}$ from $B$ such that $m_{j}
\neq k_{i},1 \le i \le K$. If in $A$ and $B$ there are items such
that $k_{i}=m_{j}$, for some $i$ and $j$, then this grosspower
$k_{i}$ is included in $C$ with the grossdigit
$b_{k_{i}}+a_{k_{i}}$, i.e., as
$(b_{k_{i}}+a_{k_{i}})\mbox{\ding{172}}^{k_{i}}$.
\begin{example}
\label{e6}
We consider two infinite numbers $A$ and $B$, where
$$
A=16.5\mbox{\ding{172}}^{44.2}(-12)\mbox{\ding{172}}^{12}
17\mbox{\ding{172}}^{0}, \hspace{1cm} B=6.23\mbox{\ding{172}}^{3}
10.1\mbox{\ding{172}}^{0}15\mbox{\ding{172}}^{-4.1}.
$$
Their sum $C$ is calculated as follows:
\[
C=A+B=16.5\mbox{\ding{172}}^{44.2}+(-12)\mbox{\ding{172}}^{12}+
17\mbox{\ding{172}}^{0}+ 6.23\mbox{\ding{172}}^{3}+
10.1\mbox{\ding{172}}^{0} +15\mbox{\ding{172}}^{-4.1}=
\]
\[
16.5\mbox{\ding{172}}^{44.2}-
12\mbox{\ding{172}}^{12}+6.23\mbox{\ding{172}}^{3}+
27.1\mbox{\ding{172}}^{0}+15\mbox{\ding{172}}^{-4.1}=
\]
\[
\begin{tabular}{cr}\hspace {28mm}$16.5\mbox{\ding{172}}^{44.2}
(-12)\mbox{\ding{172}}^{12}6.23\mbox{\ding{172}}^{3}
27.1\mbox{\ding{172}}^{0}15\mbox{\ding{172}}^{-4.1}.\hfill { }$ &
\hspace {2cm}
$\Box$
\end{tabular}
\]
\end{example}
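Representing a number of the form (\ref{3.20}) as a Python dictionary that maps each grosspower to its grossdigit (an illustrative encoding of ours, with exact fractions), the addition rule above reduces to merging two dictionaries; the sketch reproduces Example~\ref{e6}:

```python
from fractions import Fraction as F

def gross_add(A, B):
    # Items with distinct grosspowers are copied as they are; equal
    # grosspowers get the sum of their grossdigits; exact zeros vanish.
    C = dict(A)
    for p, d in B.items():
        C[p] = C.get(p, F(0)) + d
        if C[p] == 0:
            del C[p]
    return C

# The two numbers of the example, with (1) standing for grossone:
# A = 16.5(1)^44.2 (-12)(1)^12 17(1)^0, B = 6.23(1)^3 10.1(1)^0 15(1)^-4.1
A = {F("44.2"): F("16.5"), F(12): F(-12), F(0): F(17)}
B = {F(3): F("6.23"), F(0): F("10.1"), F("-4.1"): F(15)}
C = gross_add(A, B)
assert C[F(0)] == F("27.1")   # 17 + 10.1, as in the text
```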
The operation of \textit{multiplication} of two numbers
$A$ and $B$ in the form (\ref{3.20}) returns, as
the result, the infinite number $C$ constructed as follows:
\beq
C= \sum_{j=1}^{M} C_{j}, \hspace{5mm} C_{j} =
b_{m_{j}}\mbox{\ding{172}}^{m_{j}}\cdot A =\sum_{i=1}^{K}
a_{k_{i}}b_{m_{j}}\mbox{\ding{172}}^{k_{i}+m_{j}}, \hspace{5mm}1
\le j \le M. \label{3.23}
\eeq
\begin{example}
\label{e7}
We consider two infinite numbers
\[
A=1\mbox{\ding{172}}^{18}(-5)\mbox{\ding{172}}^{2.4}
(-3)\mbox{\ding{172}}^{1}, \hspace{1cm} B=-1\mbox{\ding{172}}^{1}
0.7\mbox{\ding{172}}^{-3}
\]
and calculate the
product $C=B \cdot A$. The first partial product $C_1$ is equal
to
\[
C_{1} = 0.7\mbox{\ding{172}}^{-3} \cdot A =
0.7\mbox{\ding{172}}^{-3}(\mbox{\ding{172}}^{18}-5\mbox{\ding{172}}^{2.4}-
3\mbox{\ding{172}}^{1})=
\]
\[
0.7\mbox{\ding{172}}^{15}-3.5\mbox{\ding{172}}^{-0.6}-2.1\mbox{\ding{172}}^{-2}
=
0.7\mbox{\ding{172}}^{15}(-3.5)\mbox{\ding{172}}^{-0.6}(-2.1)\mbox{\ding{172}}^{-2}.
\]
The second partial product, $C_2$, is computed analogously
\[
C_{2} = -\mbox{\ding{172}}^{1} \cdot A =
-\mbox{\ding{172}}^{1}(\mbox{\ding{172}}^{18}-5\mbox{\ding{172}}^{2.4}
-3\mbox{\ding{172}}^{1})=
-\mbox{\ding{172}}^{19}5\mbox{\ding{172}}^{3.4}3\mbox{\ding{172}}^{2}.
\]
Finally, the product $C$ is equal to
\[
\begin{tabular}{cr}\hspace {11mm}$C = C_{1} + C_{2} =
-1\mbox{\ding{172}}^{19}0.7\mbox{\ding{172}}^{15}
5\mbox{\ding{172}}^{3.4}3\mbox{\ding{172}}^{2}
(-3.5)\mbox{\ding{172}}^{-0.6}(-2.1)\mbox{\ding{172}}^{-2}.$ &
\end{tabular} \hspace{5mm}
\Box
\]
\end{example}
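Formula (\ref{3.23}) likewise admits a compact sketch. Assuming again an illustrative dictionary encoding (grossdigit indexed by grosspower, exact fractions, our own choice), multiplication pairs every item of $B$ with every item of $A$, multiplying grossdigits and adding grosspowers; the code reproduces Example~\ref{e7}:

```python
from fractions import Fraction as F

def gross_mul(A, B):
    # Formula (3.23): grossdigits multiply, grosspowers add,
    # and items whose grossdigits cancel to zero are dropped.
    C = {}
    for pb, db in B.items():
        for pa, da in A.items():
            p = pa + pb
            C[p] = C.get(p, F(0)) + da * db
            if C[p] == 0:
                del C[p]
    return C

# The two numbers of the example, with (1) standing for grossone:
# A = 1(1)^18 (-5)(1)^2.4 (-3)(1)^1, B = -1(1)^1 0.7(1)^-3
A = {F(18): F(1), F("2.4"): F(-5), F(1): F(-3)}
B = {F(1): F(-1), F(-3): F("0.7")}
C = gross_mul(A, B)
assert C[F(19)] == F(-1) and C[F("-0.6")] == F("-3.5")
```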
In the operation of \textit{division} of a number $C$ by a
number $B$ from (\ref{3.20}), we obtain a result $A$ and a
remainder $R$ (that can also be equal to zero), i.e., $C =A \cdot
B+R$. The number $A$ is constructed as follows. The first
grossdigit $a_{k_{K}}$ and the corresponding maximal exponent
${k_{K}}$ are established from the equalities
\beq
a_{k_{K}}=c_{l_{L}}/ b_{m_{M}}, \hspace{4mm} k_{K} = l_{L}-
m_{M}.
\label{3.25}
\eeq
Then the first partial remainder $R_1$ is calculated as
\beq
R_1= C - a_{k_{K}}\mbox{\ding{172}}^{k_{K}} \cdot B.
\label{3.25.0}
\eeq
If $R_1 \neq 0$ then the number $C$ is substituted by $R_1$ and
the process is repeated by complete analogy. The grossdigit
$a_{k_{K-i}}$, the corresponding grosspower $k_{K-i}$, and the
partial remainder $R_{i+1}$ are computed by formulae
(\ref{3.25.1}) and (\ref{3.25.2}) obtained from (\ref{3.25}) and
(\ref{3.25.0}) as follows: $l_{L}$ and $c_{l_{L}}$ are substituted
by the highest grosspower $n_i$ and the corresponding grossdigit
$r_{n_{i}}$ of the partial remainder $R_{i}$ that, in turn,
substitutes $C$:
\beq
a_{k_{K-i}}=r_{n_i}/ b_{m_{M}}, \hspace{4mm} k_{K-i} = n_i-
m_{M}.
\label{3.25.1}
\eeq
\beq
R_{i+1}= R_{i} - a_{k_{K-i}}\mbox{\ding{172}}^{k_{K-i}}\cdot B,
\hspace{4mm}
i \ge 1. \label{3.25.2}
\eeq
The process stops when a partial remainder equal to zero is found
(this means that the final remainder $R=0$) or when a required
accuracy of the result is reached.
\begin{example}
\label{e8} Let us divide the number
$C=-10\mbox{\ding{172}}^{3}16\mbox{\ding{172}}^{0}42\mbox{\ding{172}}^{-3}$
by the number $B=5\mbox{\ding{172}}^{3}7$. For these numbers we
have
\[
l_{L} = 3, \hspace{2mm} m_{M}= 3, \hspace{2mm} c_{l_{L}}= -10,
\hspace{2mm} b_{m_{M}}= 5.
\]
It follows immediately from (\ref{3.25}) that
$a_{k_K}\mbox{\ding{172}}^{k_K}=-2\mbox{\ding{172}}^{0}$. The
first partial remainder $R_1$ is calculated as
\[
R_1=
-10\mbox{\ding{172}}^{3}16\mbox{\ding{172}}^{0}42\mbox{\ding{172}}^{-3}
- (-2\mbox{\ding{172}}^{0}) \cdot 5\mbox{\ding{172}}^{3}7=
\]
\[
-10\mbox{\ding{172}}^{3}16\mbox{\ding{172}}^{0}42\mbox{\ding{172}}^{-3}
+10\mbox{\ding{172}}^{3}14\mbox{\ding{172}}^{0} =
30\mbox{\ding{172}}^{0}42\mbox{\ding{172}}^{-3}.
\]
By complete analogy we should construct
$a_{k_{K-1}}\mbox{\ding{172}}^{k_{K-1}}$ by rewriting
(\ref{3.25}) for $R_1$. By doing so we obtain equalities
\[
30=a_{k_{K-1}}\cdot 5, \hspace{4mm} 0 = k_{K-1} + 3
\]
and, as the result, $a_{k_{K-1}}\mbox{\ding{172}}^{k_{K-1}}=
6\mbox{\ding{172}}^{-3}$. The second partial reminder is
\[
R_2= R_1 - 6\mbox{\ding{172}}^{-3} \cdot 5\mbox{\ding{172}}^{3}7=
30\mbox{\ding{172}}^{0}42\mbox{\ding{172}}^{-3} -
30\mbox{\ding{172}}^{0}42\mbox{\ding{172}}^{-3} = 0.
\]
Thus, we can conclude that the remainder $R=R_2= 0$ and the final
result of division is
$A=-2\mbox{\ding{172}}^{0}6\mbox{\ding{172}}^{-3}$.
Let us now substitute the grossdigit 42 by 40 in $C$ and divide
this new number
$\widetilde{C}=-10\mbox{\ding{172}}^{3}16\mbox{\ding{172}}^{0}40\mbox{\ding{172}}^{-3}$
by the same number $B=5\mbox{\ding{172}}^{3}7$. This operation
gives us the same result
$\widetilde{A}_2=A=-2\mbox{\ding{172}}^{0}6\mbox{\ding{172}}^{-3}$
(where the subscript 2 indicates that two partial
remainders\index{partial remainder} have been obtained) but with the
remainder $\widetilde{R}=\widetilde{R}_2=
-2\mbox{\ding{172}}^{-3}$. Thus, we obtain $\widetilde{C} = B
\cdot \widetilde{A}_2+\widetilde{R}_2$. If we want to continue the
procedure of division, we obtain
$\widetilde{A}_3=-2\mbox{\ding{172}}^{0}6\mbox{\ding{172}}^{-3}(-0.4)\mbox{\ding{172}}^{-6}$
with the remainder $\widetilde{R}_3= 2.8\mbox{\ding{172}}^{-6}$.
Naturally, it follows $\widetilde{C} = B \cdot
\widetilde{A}_3+\widetilde{R}_3$. The process continues until a
partial remainder $\widetilde{R}_i = 0$ is found or until the
required accuracy of the result is reached. \hfill $\Box$
\end{example}
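The division scheme above can also be replayed computationally. The following Python sketch is our illustration only, not part of the formalism: a numeral is modelled as a dictionary mapping grosspowers to grossdigits (the symbol \ding{172} is implicit in the keys, written `g` in the comments), and all function names are our own invention.

```python
# Illustrative sketch only: a numeral is a dict {grosspower: grossdigit}.
def term_times(num, coeff, power):
    """Multiply a numeral by coeff * g**power."""
    return {p + power: coeff * d for p, d in num.items()}

def subtract(x, y):
    out = dict(x)
    for p, d in y.items():
        out[p] = out.get(p, 0) - d
        if out[p] == 0:
            del out[p]          # drop vanished grossdigits
    return out

def divide(c, b, max_steps=10):
    """Partial-remainder division as described in the text."""
    quotient, r = {}, dict(c)
    m = max(b)                   # highest grosspower of B
    for _ in range(max_steps):
        if not r:                # remainder R = 0: exact division
            break
        n = max(r)               # highest grosspower of the partial remainder
        a, k = r[n] / b[m], n - m
        quotient[k] = a
        r = subtract(r, term_times(b, a, k))
    return quotient, r

C = {3: -10, 0: 16, -3: 42}      # -10 g^3 + 16 g^0 + 42 g^-3
B = {3: 5, 0: 7}                 # 5 g^3 + 7 g^0
A, R = divide(C, B)              # quotient and final remainder
```

Running the same routine on $\widetilde{C}$ with a larger `max_steps` reproduces the sequence of partial remainders discussed above.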
\section{Examples of problems where computations with new numerals can be useful}
\label{s4}
\subsection{The work with infinite sequences}
\label{s4.2}
We start by recalling the traditional definitions of infinite
sequences and subsequences. An \textit{infinite sequence}
$\{a_n\}, a_n \in A, n \in \mathbb{N},$ is a function having as
the domain the set of natural numbers, $\mathbb{N}$, and as the
codomain a set $A$. A \textit{subsequence} is a sequence from
which some of its elements have been removed. The IUA allows us to
prove the following result.
\begin{theorem}
\label{t2} The number of elements of any infinite sequence is less
than or equal to~\ding{172}.
\end{theorem}
\textit{Proof.} The IUA states that the set $\mathbb{N}$ has
\ding{172} elements. Thus, due to the sequence definition given
above, any sequence having $\mathbb{N}$ as the domain has
\ding{172} elements.
The notion of subsequence is introduced as a sequence from which
some of its elements have been removed. Thus, this definition
gives infinite sequences having the number of members less than
grossone. \hfill $\Box$
One of the immediate consequences of the understanding of this
result is that any sequential process can have at maximum
\ding{172} elements. Due to Postulate 1, which numbers among the
\ding{172} members of the process we can observe depends on the
chosen numeral system.
\begin{example}
\label{e12} Let us consider the set, $\widehat{\mathbb{N}}$, of
extended natural numbers from (\ref{4.2.2}). Then, starting from
the number 1, the process of the sequential counting can arrive at
maximum to \ding{172}
\[
\underbrace{1,2,3,4,\hspace{1mm} \ldots \hspace{1mm}
\mbox{\ding{172}}-2,\hspace{1mm}
\mbox{\ding{172}}-1,
\mbox{\ding{172}}}_{\mbox{\ding{172}}}, \mbox{\ding{172}}+1,
\mbox{\ding{172}}+2, \mbox{\ding{172}}+3, \ldots
\]
Starting from 3 it arrives at maximum
to $\mbox{\ding{172}}+2$
\[
\begin{tabular}{cr}\hspace {20mm}$1,2,\underbrace{3,4,\hspace{1mm} \ldots \hspace{1mm}
\mbox{\ding{172}}-2,\hspace{1mm}
\mbox{\ding{172}}-1,
\mbox{\ding{172}}, \mbox{\ding{172}}+1,
\mbox{\ding{172}}+2}_{\mbox{\ding{172}}}, \mbox{\ding{172}}+3,
\ldots$ &
\hspace {13mm}
$\Box$
\end{tabular}
\]
\end{example}
It becomes appropriate now to define the \textit{complete
sequence} as an infinite sequence containing \ding{172} elements.
For example, the sequence of natural numbers is complete; the
sequences of even and odd natural numbers are not complete.
Thus, the IUA imposes a more precise description of infinite
sequences. To define a sequence $\{a_n\}$ it is not sufficient
just to give a formula for~$a_n$, we should determine (as it
happens for sequences having a finite number of elements) the
first and the last elements of the sequence. If the number of the
first element is equal to one, we can use the record $\{a_n: k \}$
where $a_n$ is, as usual, the general element of the sequence and
$k$ is the number (that can be finite or infinite) of members of
the sequence.
\begin{example}
\label{e13} Let us consider the following three sequences, $\{a_n\}$,
$\{b_n\}$, and $\{c_n\}$:
\[
\{a_n\} = \{ 5,\hspace{3mm} 10,\hspace{3mm} \ldots \hspace{3mm} 5(\mbox{\ding{172}}-1),\hspace{3mm}
5\mbox{\ding{172}} \},
\]
\beq \{b_n\} = \{ 5,\hspace{3mm}10,\hspace{3mm} \ldots
\hspace{3mm} 5 (\frac{2\mbox{\ding{172}}}{5}-1),\hspace{3mm}
5\cdot \frac{2\mbox{\ding{172}}}{5} \},
\label{3.7.1}
\eeq
\beq
\{c_n\} = \{ 5,\hspace{3mm} 10,\hspace{3mm} \ldots \hspace{3mm}
5 (\frac{4\mbox{\ding{172}}}{5}-1),\hspace{3mm} 5\cdot
\frac{4\mbox{\ding{172}}}{5} \}.
\label{3.7.2}
\eeq
They have the same general element $a_n=b_n=c_n=5n$ but they
are different because they have different numbers of members. The
first sequence has \ding{172} elements and is thus complete, the
other two sequences are not complete: $\{b_n\}$ has
$\frac{2\mbox{\ding{172}}}{5}$ elements and $\{c_n\}$ has
$\frac{4\mbox{\ding{172}}}{5}$ members. \hfill
$\Box$
\end{example}
In connection with this definition the following natural question
arises inevitably. Suppose that we have two sequences, for
example, $\{b_n:\frac{2\mbox{\ding{172}}}{5}\}$ and
$\{c_n:\frac{4\mbox{\ding{172}}}{5}\}$ from (\ref{3.7.1}) and
(\ref{3.7.2}). Can we create a new sequence, $\{d_n:k\}$, composed
from both of them, for instance, as it is shown below
\[
b_1,\hspace{1mm} b_2,\hspace{1mm} \ldots \hspace{1mm}
b_{\frac{2\mbox{\tiny{\ding{172}}}}{5}-2},\hspace{1mm}
b_{\frac{2\mbox{\tiny{\ding{172}}}}{5}-1},\hspace{1mm}b_{\frac{2\mbox{\tiny{\ding{172}}}}{5}},\hspace{1mm}
c_1,\hspace{1mm} c_2,\hspace{1mm} \ldots \hspace{1mm}
c_{\frac{4\mbox{\tiny{\ding{172}}}}{5}-2},\hspace{1mm}
c_{\frac{4\mbox{\tiny{\ding{172}}}}{5}-1},\hspace{1mm}
c_{\frac{4\mbox{\tiny{\ding{172}}}}{5}}
\]
and what will be the value of the number of its elements, $k$?
The answer is `no' because due to the definition of the infinite
sequence, a sequence can be at maximum complete, i.e., it cannot
have more than $\mbox{\ding{172}}$ elements. Starting from the
element $b_1$ we can arrive at maximum to the element
$c_{\frac{3\mbox{\tiny{\ding{172}}}}{5}}$ being the element
number \ding{172} in the sequence $\{d_n:k\}$ which we try to
construct. Therefore, $k=\mbox{\ding{172}}$ and
\[
\underbrace{b_1,\hspace{1mm} \ldots \hspace{1mm}
b_{\frac{2\mbox{\tiny{\ding{172}}}}{5}},\hspace{1mm}
c_1,\hspace{1mm} \ldots
c_{\frac{3\mbox{\tiny{\ding{172}}}}{5}}}_{\mbox{\ding{172}
elements}}, \hspace{1mm}
\underbrace{c_{\frac{3\mbox{\tiny{\ding{172}}}}{5}+1}, \ldots
\hspace{1mm}
c_{\frac{4\mbox{\tiny{\ding{172}}}}{5}}}_{\frac{\mbox{\tiny{\ding{172}}}}{5}
\mbox{ elements }}.
\]
The remaining members of the sequence
$\{c_n:\frac{4\mbox{\ding{172}}}{5}\}$ will form the second
sequence, $\{g_n: l \}$ having $l=
\frac{4\mbox{\ding{172}}}{5}-\frac{3\mbox{\ding{172}}}{5} =
\frac{\mbox{\ding{172}}}{5}$ elements. Thus, we have formed two
sequences, the first of them is complete and the second is not.
It is important to emphasize that the above consideration of
infinite sequences allows us to deal with recursively defined
sets. Since such a set is constructed sequentially by a process,
it can have at maximum \ding{172} elements.
To conclude this subsection, let us return to Hilbert's paradox
of the Grand Hotel presented in Section~\ref{s1}. In the paradox,
the number of the rooms in the Hotel is countable. In our
terminology this means that it has \ding{172} rooms. When a new
guest arrives, it is proposed to move the guest occupying room 1
to room 2, the guest occupying room 2 to room 3, etc. Under the
IUA this procedure does not help because the guest from room
\ding{172} should be moved to room \ding{172}+1 and the Hotel has
only \ding{172} rooms. Thus, when the Hotel is full, no more new
guests can be accommodated -- the result corresponding perfectly
to Postulate~3 and the situation taking place in normal hotels
with a finite number of rooms.
\subsection{Calculating divergent series}
\label{s4.3}
Let us show how the new approach can be applied in such an
important area as theory of divergent series. We consider two
infinite series $S_1=10+10+10+\ldots$ and $S_2=3+3+3+\ldots$ The
traditional analysis gives us a very poor answer that both of them
diverge to infinity. Such operations as, e.g., $\frac{S_2}{S_1}$
and $S_2 - S_1$
are not defined.
Now, when we are able to express not only different finite numbers
but also different infinite numbers, it is necessary to indicate
explicitly the number of items in the sums $S_1$ and $S_2$ and it
is not important if it is finite or infinite. To calculate the sum
it is necessary that the number of items and the result are
expressible in the numeral system used for calculations. It is
important to notice that even though a sequence cannot have more
than \ding{172} elements, the number of items in a series can be
greater than grossone because the process of summing up is not
necessarily executed by sequentially adding the items.
Let us suppose that the series $S_1$ has $k$ items and $S_2$ has
$n$ items. We can then define sums (that can have a finite or an
infinite number of items),
$$S_1(k)=\underbrace{10+10+10+\ldots+10}_k, \hspace{1cm} S_2(n)=\underbrace{3+3+3+\ldots+3}_n,$$
calculate them, and execute
arithmetical operations with the obtained results. The sums then
are obviously calculated as $S_1(k)=10k$ and $S_2(n)=3n$. If, for
instance, $k=n=5\mbox{\ding{172}}$ then we obtain
$S_1(5\mbox{\ding{172}})=50\mbox{\ding{172}}$,
$S_2(5\mbox{\ding{172}})=15\mbox{\ding{172}}$ and
\[
S_2(5\mbox{\ding{172}}) / S_1(5\mbox{\ding{172}}) = 0.3.
\]
Analogously, if $k=3\mbox{\ding{172}}$ and
$n=10\mbox{\ding{172}}$ we obtain
$S_1(3\mbox{\ding{172}})=30\mbox{\ding{172}}$,
$S_2(10\mbox{\ding{172}})=30\mbox{\ding{172}}$ and it follows
$S_2(10\mbox{\ding{172}}) -
S_1(3\mbox{\ding{172}})=0$.
If $k=3\mbox{\ding{172}}4$ (recall that we use here a shorter
way to write down this infinite number; the complete record is
$3\mbox{\ding{172}}^{1}4\mbox{\ding{172}}^{0}$) and
$n=10\mbox{\ding{172}}$ we obtain
$S_1(3\mbox{\ding{172}}4)=30\mbox{\ding{172}}40$,
$S_2(10\mbox{\ding{172}})=30\mbox{\ding{172}}$ and it follows
\[
S_1(3\mbox{\ding{172}}4) - S_2(10\mbox{\ding{172}}) =
30\mbox{\ding{172}}40 - 30\mbox{\ding{172}}= 40.
\]
Analogously, for $k=3\mbox{\ding{172}}2$ we obtain
\[
S_1(3\mbox{\ding{172}}2) / S_2(10\mbox{\ding{172}}) =
30\mbox{\ding{172}}20 / 30\mbox{\ding{172}}=
1\mbox{\ding{172}}^{0}0.66667\mbox{\ding{172}}^{-1} > 1.
\]
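These manipulations are plain linear arithmetic on the records of $S_1$ and $S_2$. As an informal cross-check (our sketch, with a dictionary standing for a grossone record and `g` denoting \ding{172} in the comments), the difference $S_1(3\mbox{\ding{172}}4) - S_2(10\mbox{\ding{172}}) = 40$ can be reproduced as follows.

```python
# Sketch (ours): numerals as {grosspower: grossdigit} dicts.
def scale(num, c):
    """Multiply every grossdigit by the finite constant c."""
    return {p: c * d for p, d in num.items()}

def subtract(x, y):
    out = dict(x)
    for p, d in y.items():
        out[p] = out.get(p, 0) - d
        if out[p] == 0:
            del out[p]
    return out

k = {1: 3, 0: 4}                 # k = 3 g + 4, written 3(g)4 in the text
n = {1: 10}                      # n = 10 g
S1 = scale(k, 10)                # S1(k) = 10k = 30 g + 40
S2 = scale(n, 3)                 # S2(n) = 3n  = 30 g
difference = subtract(S1, S2)    # only the finite part 40 survives
```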
We conclude this subsection by studying the series
$\sum_{i=1}^{\infty}\frac{1}{2^i}$. It is known that it converges
to one. However, we are able to give a more precise answer. Due to
Postulate~3, the formula
$$\sum_{i=1}^{k}\frac{1}{2^i}=1-\frac{1}{2^k}$$
can be used directly for infinite $k$, too. For example, if
$k=\mbox{\ding{172}}$ then
$$\sum_{i=1}^{\mbox{\small{\ding{172}}}}\frac{1}{2^i}=1-\frac{1}{2^{\mbox{\tiny{\ding{172}}}}}$$
where $\frac{1}{2^{\mbox{\tiny{\ding{172}}}}}$ is infinitesimal.
Thus, the traditional answer $\sum_{i=1}^{\infty}\frac{1}{2^i}=1$
is a finite approximation to our more precise result using
infinitesimals. More examples related to series can be found in
\cite{chaos}.
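For every finite $k$ the closed form used here can be checked exactly with rational arithmetic; under Postulate~3 the very same formula is what yields the infinitesimal deficit $\frac{1}{2^{\mbox{\tiny{\ding{172}}}}}$ when $k=\mbox{\ding{172}}$. A small Python check of ours:

```python
from fractions import Fraction

def geometric_sum(k):
    """Sum of 1/2^i for i = 1..k, computed term by term."""
    return sum(Fraction(1, 2**i) for i in range(1, k + 1))

# The closed form 1 - 1/2^k holds exactly for every finite k;
# Postulate 3 licenses applying it to infinite k as well.
deficits = [1 - geometric_sum(k) for k in range(1, 16)]
```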
\subsection{Calculating limits and expressing irrational numbers}
\label{s4.5}
Let us now discuss the problem of calculation of limits from the point of
view of our approach. In traditional analysis, if a limit $\lim_{x
\rightarrow a}f(x)$ exists, then it gives us very poor
information -- just one value -- about the behavior of $f(x)$ when $x$
tends to $a$. Now we can obtain significantly richer information
because we are able to calculate $f(x)$ directly at any finite,
infinite, or infinitesimal point that can be expressed by the new
positional system even if the limit does not exist.
Thus, limits equal to infinity can be substituted by precise
infinite numerals and limits equal to zero can be substituted by
precise infinitesimal numerals\footnote{Naturally, if we speak
about limits of sequences, $\lim_{n \rightarrow \infty}a(n)$, then
$ n \in \mathbb{N}$ and, as a consequence, it follows that $n$
should be less than or equal to grossone.}. This is very
important for practical computations because these substitutions
eliminate indeterminate forms.
\begin{example}
\label{e15} Let us consider the following two limits
\[
\lim_{x \rightarrow +\infty}(5x^3-x^2+10^{61})=
+\infty, \hspace{1cm} \lim_{x \rightarrow +\infty}(5x^3-x^2)=
+\infty.
\]
Both give us the same result, $+\infty$, and it is not possible
to execute the operation
\[
\lim_{x \rightarrow +\infty}(5x^3-x^2+10^{61}) - \lim_{x \rightarrow +\infty}(5x^3-x^2),
\]
which is an indeterminate form of the type $\infty-\infty$, in
spite of the fact that for any finite $x$ it follows
\beq
5x^3-x^2+10^{61} - (5x^3-x^2) = 10^{61}. \label{4.4.3}
\eeq
The new approach allows us to calculate exact values of both
expressions, $5x^3-x^2+10^{61}$ and $5x^3-x^2$, at any infinite
(and infinitesimal) $x$ expressible in the chosen numeral system.
For instance, the choice $x=3\mbox{\ding{172}}^{2}$ gives the
value
\[
5(3\mbox{\ding{172}}^{2})^{3}-(3\mbox{\ding{172}}^{2})^{2}+10^{61}=
135\mbox{\ding{172}}^{6}\mbox{\small-}9\mbox{\ding{172}}^{4}10^{61}
\]
for the first expression and
$135\mbox{\ding{172}}^{6}\mbox{\small-}9\mbox{\ding{172}}^{4}$ for
the second one. We can easily calculate the difference of these
two infinite numbers, thus obtaining the same result as we had for
finite values of $x$ in (\ref{4.4.3}):
\[
\begin{tabular}{cr}\hspace {32mm}$135\mbox{\ding{172}}^{6}\mbox{\small-}9\mbox{\ding{172}}^{4}10^{61} -
(135\mbox{\ding{172}}^{6}\mbox{\small-}9\mbox{\ding{172}}^{4}) =
10^{61}.$ & \hspace {21mm}
$\Box$
\end{tabular}
\]
\end{example}
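The evaluation at $x=3\mbox{\ding{172}}^{2}$ can be mirrored by adding a multiplication rule to the dictionary representation of numerals used informally above (our sketch only; `g` denotes \ding{172} in the comments, and the constant $10^{61}$ sits at grosspower zero):

```python
# Sketch (ours): numerals as {grosspower: grossdigit} dicts.
def multiply(x, y):
    out = {}
    for p1, d1 in x.items():
        for p2, d2 in y.items():
            out[p1 + p2] = out.get(p1 + p2, 0) + d1 * d2
    return {p: d for p, d in out.items() if d != 0}

def add(x, y):
    out = dict(x)
    for p, d in y.items():
        out[p] = out.get(p, 0) + d
    return {p: d for p, d in out.items() if d != 0}

x = {2: 3}                                   # x = 3 g^2
x2 = multiply(x, x)                          # x^2 = 9 g^4
x3 = multiply(x2, x)                         # x^3 = 27 g^6
first = add(add(multiply(x3, {0: 5}), multiply(x2, {0: -1})), {0: 10**61})
second = add(multiply(x3, {0: 5}), multiply(x2, {0: -1}))
# first  = 135 g^6 - 9 g^4 + 10^61,  second = 135 g^6 - 9 g^4
```

Subtracting the two records cancels the infinite parts and leaves exactly $10^{61}$, as in (\ref{4.4.3}).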
It is necessary to emphasize the fact that expressions can be
calculated even when their limits do not exist. Thus, we obtain a
very powerful tool for studying divergent processes.
\begin{example}
\label{e16}
The limit
$ \lim_{n \rightarrow +\infty}f(n),$ $f(n)=(-1)^n n^3$, does not
exist. However, we can easily calculate the expression $(-1)^n n^3$ at
different infinite points $n$. For instance, for
$n=\mbox{\ding{172}}$ it follows
$f(\mbox{\ding{172}})=\mbox{\ding{172}}^3$ because grossone is
even and for the odd $n=0.5\mbox{\ding{172}}-1$ it follows
\[
\hspace{18mm}
f(0.5\mbox{\ding{172}}-1)=-(0.5\mbox{\ding{172}}-1)^3=
-0.125\mbox{\ding{172}}^{3}0.75\mbox{\ding{172}}^{2}\mbox{\small-}1.5\mbox{\ding{172}}^{1}1.
\hspace{15mm} \Box
\]
\end{example}
Limits with the argument tending to zero can be considered
analogously. In this case, we can calculate the corresponding
expression at any infinitesimal point using the new positional
system and obtain significantly richer information.
\begin{example}
\label{e17} If $x$ is a fixed finite number then
\beq
\lim_{h \rightarrow 0}\frac{(x+h)^2-x^2}{h}= 2x.
\label{4.5}
\eeq
In the new positional system we obtain
\beq
\frac{(x+h)^2-x^2}{h}= 2x + h.
\label{4.6}
\eeq
If, for instance, $h=\mbox{\ding{172}}^{-1}$, the answer is
$2x\mbox{\ding{172}}^{0}\mbox{\ding{172}}^{-1}$, if
$h=4.2\mbox{\ding{172}}^{-2}$ we obtain the value
$2x\mbox{\ding{172}}^{0}4.2\mbox{\ding{172}}^{-2}$, etc. Thus, the
value of the limit (\ref{4.5}), for a finite $x$, is just the
finite approximation of the number (\ref{4.6}) having finite and
infinitesimal parts. \hfill $\Box$
\end{example}
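The infinitesimal evaluations of Example~\ref{e17} can be replayed in the same informal dictionary model (ours, with `g` denoting \ding{172}): for a finite $x$ and infinitesimal $h$ the quotient retains both the finite part $2x$ and the infinitesimal tail $h$, rather than only the limit value.

```python
# Sketch (ours): numerals as {grosspower: grossdigit} dicts.
def multiply(x, y):
    out = {}
    for p1, d1 in x.items():
        for p2, d2 in y.items():
            out[p1 + p2] = out.get(p1 + p2, 0) + d1 * d2
    return {p: d for p, d in out.items() if d != 0}

def subtract(x, y):
    out = dict(x)
    for p, d in y.items():
        out[p] = out.get(p, 0) - d
        if out[p] == 0:
            del out[p]
    return out

def divide_by_monomial(num, coeff, power):
    """Divide by coeff * g**power; enough for the single-term h used here."""
    return {p - power: d / coeff for p, d in num.items()}

x = {0: 7}                       # a fixed finite x = 7
xh = {0: 7, -1: 1}               # x + h with h = g^-1, an infinitesimal
quotient = divide_by_monomial(subtract(multiply(xh, xh), multiply(x, x)), 1, -1)
# quotient = 2x + h = 14 + g^-1, not merely the limit value 14
```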
Let us make a remark regarding irrational numbers. Among their
properties, they are characterized by the fact that we do not know
any numeral system that would allow us to express them by a
finite number of symbols used to express other numbers. Thus,
special numerals ($e, \pi, \sqrt{2}, \sqrt{3} $, etc.) are
introduced by describing their properties in a way (similarly, all
other numerals, e.g., symbols `0' or `1', are introduced also by
describing their properties). These special symbols are then used
in analytical transformations together with ordinary numerals.
For example, it is possible to work directly with the symbol $e$
in analytical transformations by applying suitable rules defining
this number together with numerals taking part in a chosen numeral
system $\mathcal{S}$. At the end of transformations, the obtained
result will be expressed in numerals from $\mathcal{S}$ and,
probably, in terms of $e$. If it is then required to execute some
\textit{numerical} computations, this means that it is necessary
to substitute $e$ by a numeral (or numerals) from $\mathcal{S}$
that will allow us to approximate $e$ in some way.
The same situation takes place when one uses the new numeral
system, i.e., while we work analytically we use just the symbol
$e$ in our expressions and then, if we wish to work numerically we
should pass to approximations. The new numeral system opens a new
perspective on the problem of the expression of irrational
numbers. Let us consider one of the possible ways to obtain an
approximation of $e$, i.e., by using the limit
\beq
e = \lim_{n \rightarrow +\infty}(1+\frac{1}{n})^n =
2.71828182845904\ldots
\label{3.200}
\eeq
In our numeral system the expression $(1+\frac{1}{n})^n$ can be
written directly for finite and/or infinite values of $n$. For
$n=\mbox{\ding{172}}$ we obtain the number $e_0$ designated so in
order to distinguish it from the record (\ref{3.200})
\beq
e_0 = (1+\frac{1}{\mbox{\ding{172}}})^{\mbox{\tiny{\ding{172}}}}=
(\mbox{\ding{172}}^{0}\mbox{\ding{172}}^{-1})^{\mbox{\tiny{\ding{172}}}}.
\label{3.201}
\eeq
It becomes clear from this record why the number $e$ cannot be
expressed in a positional numeral system with a finite base. Due
to the definition of a sequence under the IUA, such a system can
have at maximum \ding{172} numerals -- digits -- to express the
fractional part of a number (see Section~\ref{s4.4} for details)
and, as can be seen from (\ref{3.201}), this quantity is not
sufficient for $e$ because the term
$\frac{1}{\mbox{\ding{172}}^{\mbox{\tiny{\ding{172}}}}}$ is
present in it.
Naturally, it is also possible to construct more exotic $e$-type
numbers by substituting \ding{172} in (\ref{3.201}) by any
infinite number written in the new positional system with
infinite base. For example, if we substitute \ding{172} in
(\ref{3.201}) by $\mbox{\ding{172}}^{2}$ we obtain the number
\[
e_1 =
(1+\frac{1}{\mbox{\ding{172}}^{2}})^{\mbox{\tiny{\ding{172}}}^{2}}=
(\mbox{\ding{172}}^{0}\mbox{\ding{172}}^{-2})^{\mbox{\tiny{\ding{172}}}^{2}}.
\]
The numbers considered above take their origins in the limit
(\ref{3.200}). Similarly, other formulae leading to approximations
of $e$ expressed in traditional numeral systems give us other new
numbers that can be expressed in the new numeral system. The same
way of reasoning can be used with respect to other irrational
numbers, too.
\subsection{Measuring infinite sets with elements defined by formulae}
\label{s4.1}
We have already discussed in Section~\ref{s2} how we calculate
the number of elements for sets being results of the usual
operations (intersection, union, etc.) with finite sets and
infinite sets of the type $\mathbb{N}_{k,n}$. In order to be able
to work with infinite sets having a more general structure than
the sets $\mathbb{N}_{k,n}$, we need to develop more powerful
instruments. Suppose that we have an integer function $g(i)
> 0$ strictly increasing on indexes $i=1,2,3, \ldots$ and we wish to
know how many elements there are in the set
$$G = \{ g(1), g(2), g(3), \ldots \}.$$ In our terminology this
question makes no sense for the following reason.
In the finite case, to define a set it is not sufficient to say
that it is finite. It is necessary to indicate its number of
elements explicitly as, e.g., in this example
\[
G_1 = \{ g(i): 1 \le i \le 5 \},
\]
or implicitly, as it is made here:
\beq
G_2 = \{ g(i): i
\ge 1, \,\,\,\, 0 < g(i)\le b \}, \label{4.3}
\eeq
where $b$ is finite.
Now we have mathematical tools to indicate the number of elements
for infinite sets, too. Thus, analogously to the finite case and
due to Postulate~3, it is not sufficient to say that a set has
infinitely many elements. It is necessary to indicate its number
of elements explicitly or implicitly. For instance, the number of
elements of the set
\[
G_3 = \{ g(i): 1 \le i \le \mbox{\ding{172}}^{10} \}
\]
is indicated explicitly: the set $G_3$ has
$\mbox{\ding{172}}^{10}$ elements.
If a set is given in the form (\ref{4.3}) where $b$ is infinite,
then its number of elements, $J$, can be determined as
\beq
J = \max
\{i : g(i) \le b \} \label{4.3.0}
\eeq
if we are able to determine the inverse
function $g^{-1}(x)$ for $g(x)$. Then, $J= [ g^{-1}(b)]$, where
$[u]$ is the integer part of $u$. Note that if $b=\mbox{\ding{172}}$,
then the set $G_2 \subseteq \mathbb{N}$ since all its elements are
integer, positive, and $g(i) \le \mbox{\ding{172}}$ due to
(\ref{4.3.0}).
\begin{example}
\label{e9} Let us consider the following set, $A_1(k,n)$, having
$g(i) = k+n(i-1),$
\[
A_1(k,n) = \{ g(i) : i \ge 1, \,\,\,\,g(i) \le \mbox{\ding{172}}
\}, \hspace{1cm} 1 \le k \le n, \hspace{3mm} n \in \mathbb{N}.
\]
It follows from the IUA that $A_1(k,n) = \mathbb{N}_{k,n}$ from
(\ref{3.3}). By applying (\ref{4.3.0}) we find for $A_1(k,n)$
its number of elements
\[
\begin{tabular}{cr}\hspace {11mm}$J_1(k,n)= [ \frac{\mbox{\ding{172}}-k}{n}+1]= [
\frac{\mbox{\ding{172}}-k}{n}]+1= \frac{\mbox{\ding{172}}}{n}-1+1=
\frac{\mbox{\ding{172}}}{n}.$ & \hspace {23mm}
$\Box$
\end{tabular}
\]
\end{example}
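Formula (\ref{4.3.0}) with $J=[g^{-1}(b)]$ can be tried out numerically by replacing \ding{172} with a large finite stand-in $G$ divisible by $n$ (an illustration of ours, not a computation with actual infinite numbers):

```python
import math

def count_elements(g_inverse, b):
    """J = [g^{-1}(b)], the integer part of the inverse at the bound b."""
    return math.floor(g_inverse(b))

# g(i) = k + n*(i - 1)  =>  g^{-1}(x) = (x - k)/n + 1.
G, k, n = 10**6, 3, 5            # G: finite stand-in for grossone
J1 = count_elements(lambda x: (x - k) / n + 1, G)
# J1 equals G/n, matching the answer J1(k, n) = g/n of the example above
```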
\begin{example}
\label{e10} Analogously, the set
\[
A_2(k,n,j) = \{ k+ni^j : i \ge 0,\,\,\,\, 0 < k+ni^j \le
\mbox{\ding{172}} \}, \hspace{7mm} 0 \le k < n, \hspace{2mm} n
\in \mathbb{N}, \hspace{2mm} j \in \mathbb{N},
\]
has $J_2(k,n,j)= [ \sqrt[j]{\frac{\mbox{\ding{172}}-k}{n}}]$
elements. \hfill $\Box$
\end{example}
\subsection{Measuring infinite sets of numerals and their comparison}
\label{s4.4}
Let us calculate the number of elements in some well known
infinite sets of numerals using the designation $|A|$ to indicate
the number of elements of a set $A$.
\begin{theorem}
\label{t3}The number of elements of the set, $\mathbb{Z}$, of
integers is $|\mathbb{Z}|= 2\mbox{\ding{172}}1$.
\end{theorem}
\textit{Proof.} The set $\mathbb{Z}$ contains \ding{172} positive
numbers, \ding{172} negative numbers, and zero. Thus,
\beq
|\mathbb{Z}|= \mbox{\ding{172}} + \mbox{\ding{172}} + 1 =
2\mbox{\ding{172}}1. \hspace{5mm}
\Box
\label{3.102}
\eeq
Traditionally, rational numbers are defined as ratio of two
integer numbers. The new approach allows us to calculate the
number of numerals in a fixed numeral system. Let us consider a
numeral system $\mathbb{Q}_1$ containing numerals of the form
\beq
\frac{p}{q}, \hspace{5mm} p \in \mathbb{Z},\hspace{3mm} q \in \mathbb{Z},\,\,\, q \neq 0.
\label{3.102.0}
\eeq
\begin{theorem}
\label{t4.0}
The number of elements of the set, $\mathbb{Q}_1$,
of rational numerals of the type (\ref{3.102.0}) is
$|\mathbb{Q}_1|=4\mbox{\ding{172}}^{2}2\mbox{\ding{172}}^1$.
\end{theorem}
\textit{Proof.} It follows from Theorem~\ref{t3} that the
numerator of (\ref{3.102.0}) can be filled in by
$2\mbox{\ding{172}}1$ and the denominator by $2\mbox{\ding{172}}$
numbers. Thus, the number of all possible combinations is
\[
|\mathbb{Q}_1|= 2\mbox{\ding{172}}1 \cdot 2\mbox{\ding{172}} =
4\mbox{\ding{172}}^{2}2\mbox{\ding{172}}^1.
\]
\hfill $\Box$
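The count in Theorem~\ref{t4.0} is an ordinary polynomial product in \ding{172}, which can be checked mechanically with the dictionary representation used informally above (our sketch; `g` denotes \ding{172} in the comments):

```python
# Sketch (ours): numerals as {grosspower: grossdigit} dicts.
def multiply(x, y):
    out = {}
    for p1, d1 in x.items():
        for p2, d2 in y.items():
            out[p1 + p2] = out.get(p1 + p2, 0) + d1 * d2
    return {p: d for p, d in out.items() if d != 0}

numerator_choices = {1: 2, 0: 1}     # |Z| = 2 g + 1 possible numerators
denominator_choices = {1: 2}         # 2 g nonzero integer denominators
q1_size = multiply(numerator_choices, denominator_choices)
# q1_size = 4 g^2 + 2 g, the count stated in the theorem
```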
It is necessary to notice that in Theorem~\ref{t4.0} we have
calculated different numerals and not different numbers. For
example, in the numeral system $\mathbb{Q}_1$ the number 0 can be
expressed by $2\mbox{\ding{172}}$ different numerals
\[
\frac{0}{-\mbox{\ding{172}}}, \,\, \frac{0}{-\mbox{\ding{172}+1}},
\,\, \frac{0}{-\mbox{\ding{172}+2}}, \ldots \frac{0}{-2}, \,\,
\frac{0}{-1}, \,\, \frac{0}{1}, \,\, \frac{0}{2}, \ldots
\frac{0}{\mbox{\ding{172}-2}}, \,\, \frac{0}{\mbox{\ding{172}-1}},
\,\, \frac{0}{\mbox{\ding{172}}}
\]
and numerals such as $\frac{-1}{-2}$ and $\frac{1}{2}$ have been
calculated as two different numerals. The following theorem
determines the number of elements of the set $\mathbb{Q}_2$
containing numerals of the form
\beq
-\frac{p}{q}, \,\, \frac{p}{q}, \hspace{5mm} p \in \mathbb{N},\hspace{3mm} q \in
\mathbb{N},
\label{3.102.1}
\eeq
and zero is represented by one symbol 0.
\begin{theorem}
\label{t4}The number of elements of the set, $\mathbb{Q}_2$, of
rational numerals of the type (\ref{3.102.1}) is
$|\mathbb{Q}_2|=2\mbox{\ding{172}}^{2}1$.
\end{theorem}
\textit{Proof.} Let us consider positive rational numerals. The
form of the rational numeral $\frac{p}{q}$, the fact that $p, \, q
\in \mathbb{N},$ and the IUA impose that both $p$ and $q$ can
assume values from 1 to \ding{172}. Thus, the number of all
possible combinations is $\mbox{\ding{172}}^{2}$. We obtain the same
number of combinations for negative rational numerals, and one is
added because we count zero as well. \hfill
$\Box$
Let us now calculate the number of elements of the set,
$\mathbb{R}_{b}$, of real numbers expressed by numerals in the
positional system by the record\vspace*{-2mm}
\beq
(a_{n-1}a_{n-2} \ldots a_1 a_0 .
a_{-1} a_{-2} \ldots a_{-(q-1)} a_{-q})_b
\label{3.103}
\eeq
where the symbol $b$ indicates the radix\index{radix} of the
record and $n, \, q \in \mathbb{N}$.
\begin{theorem}
\label{t5} The number of elements of the set, $\mathbb{R}_{b}$,
of numerals (\ref{3.103}) is $|\mathbb{R}_{b}| =
b^{2\mbox{\ding{172}}}$.
\end{theorem}
\textit{Proof.} In formula (\ref{3.103}), defining the type of
numerals we deal with, there are two sequences of digits: the first
one, $a_{n-1}a_{n-2} \ldots a_1 a_0$, is used to express the
integer part of the number and the second, $a_{-1} a_{-2} \ldots
a_{-(q-1)} a_{-q}$, for its fractional part. Due to the definition of
sequence and the IUA, each of them can have at maximum \ding{172}
elements. Thus, there can be at maximum \ding{172} positions to the
left of the dot and, analogously,
\ding{172} positions to the right of the dot. Every position can
be filled in by one of the $b$ digits from the
alphabet\index{alphabet} $\{ 0, 1, \ldots , b-1 \}$. Thus, we have
$b^{\mbox{\ding{172}}}$ combinations to express the integer part
of the number and the same quantity to express its fractional
part. As a result, the positional numeral system using the
numerals of the form (\ref{3.103}) can express
$b^{2\mbox{\ding{172}}}$ numbers. \hfill
$\Box$
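The counting argument of the proof is the usual product rule for independent positions; with small finite stand-ins for the radix and for the \ding{172} places it can be verified directly (our illustration only):

```python
from itertools import product

b, places = 2, 3                 # radix and a finite stand-in for the g positions
# each of the 2*places positions carries one of the b digits independently
numerals = list(product(range(b), repeat=2 * places))
# len(numerals) plays the role of b**(2 g) in the theorem
```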
Note that the result of Theorem~\ref{t5} does not consider the
practical situation of writing down concrete numerals. Obviously,
the number of numerals of the type (\ref{3.103}) that can be
written in practice is finite and depends on the chosen numeral
system for writing digits.
It is worthwhile to notice also that the traditional point of view
on real numbers tells us that there exist real numbers that can be
represented in positional systems by two different infinite
sequences of digits. In contrast, under the IUA all the numerals
represent different numbers. In addition, minimal and maximal
numbers expressible in $\mathbb{R}_{b}$ can be explicitly
indicated.
\begin{example}
\label{e14} For instance, in the decimal positional
system\index{positional system} $\mathbb{R}_{10}$ the numerals
\[
1.\underbrace{999\ldots99}_{\mbox{\ding{172} digits}},
\,\,\,\,\,\,\,\, 2.\underbrace{000\ldots00}_{\mbox{\ding{172}
digits}}
\]
represent different numbers and their difference is equal to
\[
2.\underbrace{000\ldots00}_{\mbox{\ding{172}
digits}}-1.\underbrace{999\ldots9}_{\mbox{\ding{172} digits}}
=0.\underbrace{000\ldots01}_{\mbox{\ding{172} digits}}.
\]
Analogously the smallest and the largest numbers expressible in
$\mathbb{R}_{10}$ can be easily indicated. They are, respectively,
\[
\hspace{25mm}-\underbrace{999\ldots9}_{\mbox{\ding{172}
digits}}.\underbrace{999\ldots9}_{\mbox{\ding{172} digits}},
\hspace{10mm} \underbrace{999\ldots9}_{\mbox{\ding{172}
digits}}.\underbrace{999\ldots9}_{\mbox{\ding{172} digits}}.
\hspace{25mm}
\Box
\]
\end{example}
\begin{theorem}
\label{t6} The sets $\mathbb{Z}, \mathbb{Q}_1,\mathbb{Q}_2,$ and $
\mathbb{R}_{b}$ are not monoids under addition.
\end{theorem}
\textit{Proof.} The proof is obvious and is therefore omitted. \hfill
$\Box$
\section{Relations to results of Georg Cantor}
\label{s6}
It is obligatory to say on this occasion that the results
presented above should be considered as a more precise analysis of
the situation discovered by the genius of Cantor. He proved,
by using his famous diagonal argument, that the number of elements
of the set $\mathbb{N}$ is less than the number of real numbers
in the interval $[0,1)$ \textit{without calculating the latter}.
To do this he expressed real numbers in a positional numeral
system. We have shown that this number will be different depending
on the radix $b$ used in the positional system to express real
numbers. However, all of the obtained numbers,
$b^{2\mbox{\ding{172}}}$, are greater than the number of elements of
the set of natural numbers, \ding{172}, and, therefore, the
diagonal argument maintains its force.
We can now calculate the number of points of the interval $[0,1)$,
of a line, and of the $N$-dimensional space. To do this we need a
definition of the term \textit{point}\index{point} and
mathematical tools to indicate a point. Since this concept is one
of the most fundamental, it is very difficult to find an adequate
definition. If we accept (as is usually done in modern
Mathematics) that the \textit{point} $x$ in an $N$-dimensional
space is determined by $N$ numerals called \textit{coordinates of
the point}
\[
(x_1,x_2, \ldots x_{N-1},x_{N}) \in \mathbb{S}^N,
\]
where $\mathbb{S}^N$ is a set of numerals, then we can indicate
the point $x$ by its coordinates and we are able to execute the
required calculations. It is worthwhile to emphasize that we have
not postulated that $(x_1,x_2, \ldots $ $ x_{N-1},x_{N})$ belongs
to the $N$-dimensional set, $\mathbb{R}^N$, of real numbers as it
is usually done because we can express coordinates only by
numerals and, as we have shown above, different choices of numeral
systems lead to various sets of numerals.
We should decide now which numerals we shall use to express
coordinates of the points. Different variants can be chosen
depending on the precision level we want to obtain. For example,
if the numbers $0 \le x < 1$ are expressed in the form
$\frac{p-1}{\mbox{\ding{172}}}, p \in \mathbb{N}$, then the
smallest positive number we can distinguish is
$\frac{1}{\mbox{\ding{172}}}$. Therefore, the interval $[0,1)$
contains the following \ding{172} points
\[
0, \,\,\,\, \frac{1}{\mbox{\ding{172}}}, \,\,\,\,
\frac{2}{\mbox{\ding{172}}}, \,\,\,\, \ldots \,\,\,\,
\frac{\mbox{\ding{172}}-2}{\mbox{\ding{172}}}, \,\,\,\,
\frac{\mbox{\ding{172}}-1}{\mbox{\ding{172}}}.
\]
Then, due to the IUA and the definition of sequence, there are
\ding{172} intervals of the form $[a-1,a), a \in \mathbb{N},$ on
the ray $x \ge 0$. Hence, this ray contains
$\mbox{\ding{172}}^{2}$ points and the whole line consists of
$2\mbox{\ding{172}}^{2}$ points.
If we need a higher precision, within each interval
\[
[a-1+\frac{i-1}{\mbox{\ding{172}}},
a-1+\frac{i}{\mbox{\ding{172}}}), \hspace{3mm} a,i \in \mathbb{N},
\]
we can distinguish again \ding{172} points and the number of
points within each interval $[a-1,a), a \in \mathbb{N},$ will
become equal to $\mbox{\ding{172}}^{2}$. Consequently, the number
of the points on the line will be equal to
$2\mbox{\ding{172}}^{3}$.
This situation is a direct consequence of Postulate~2 and is
typical for natural sciences where it is well known that
instruments influence the results of observations. It is similar
to working with a microscope: we decide the level of
precision we need and obtain a result which is dependent on the
chosen level of accuracy. If we need a more precise or a rougher
answer, we change the lens of our microscope.
Continuing the analogy with the microscope, we can also decide to
replace our microscope with a new one. In our terms, this means
changing the numeral system to another one. For instance, instead
of the numerals considered above, we choose a positional numeral
system to calculate the number of points within the interval
$[0,1)$; then, as we have already seen before, we are able to
distinguish $b^{\mbox{\ding{172}}}$ points of the form
\[
(.a_{-1} a_{-2} \ldots a_{-(\mbox{\ding{172}}-1)}
a_{-\mbox{\ding{172}}})_b
\]
on it. Since the line contains $2\mbox{\ding{172}}$ unit
intervals, the whole number of points of this type on the line is
equal to $2\mbox{\ding{172}}b^{\mbox{\ding{172}}}$.
In this example of counting, we have changed the tool to calculate
the number of points within each interval, but used the old way to
calculate the number of intervals, i.e., by natural numbers. If
we are not interested in subdividing the line into intervals and
want to obtain the number of the points on the line directly by
using positional numerals of the type (\ref{3.103}) with possible
infinite $n$ and $q$, then we are able to distinguish at maximum
$b^{2\mbox{\ding{172}}}$ points on the line.
\begin{figure}[t]
\begin{center}
\epsfig{ figure = New_book3.eps, width = 2.6in, height = 2.3in, silent = yes }
\caption{Due to Cantor, the interval $(0,1)$ and the entire real number line have
the same number of points}
\label{figura3}
\end{center}
\end{figure}
\begin{figure}[ht]
\begin{center}
\epsfig{ figure = Big_paper1.eps, width = 2.6in, height = 2.3in, silent = yes }
\caption{Three independent mathematical objects: the set $X_{\mathcal{S}_1}$ represented by dots, the
set
$Y_{\mathcal{S}_2}$ represented by stars, and function
(\ref{4.4})}
\label{Big_paper1}
\end{center}
\end{figure}
Let us now return to the problem of comparison of infinite sets
and consider Cantor's famous result showing that the number of
points over the interval $(0,1)$ is equal to the number of points
over the whole real line, i.e.,
\beq
|\mathbb{R}| = |(0,1)|.
\label{1.11}
\eeq
The proof of this counterintuitive fact is
given by establishing a one-to-one correspondence between the
elements of the two sets. Such a mapping can be done by using for
example the function
\beq
y=\tan(0.5 \pi (2x-1)),\hspace{15mm}
x \in (0,1),
\label{4.4}
\eeq
illustrated in Fig.~\ref{figura3}. Cantor shows by using
Fig.~\ref{figura3} that to any point $x \in (0,1)$ a point $y \in
(-\infty,\infty)$ can be associated and vice versa. Thus, he
concludes that
the requested one-to-one correspondence between the
sets $\mathbb{R}$ and $ (0,1)$ has been established and,
therefore, this proves (\ref{1.11}).
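The bijection (\ref{4.4}) and its inverse are easy to check numerically; a short sketch:

```python
import math

def f(x):       # the bijection (0, 1) -> R from Eq. (4.4)
    return math.tan(0.5 * math.pi * (2 * x - 1))

def f_inv(y):   # its inverse, R -> (0, 1)
    return (math.atan(y) / (0.5 * math.pi) + 1) / 2

# f maps the midpoint 0.5 to 0 and round trips are exact up to rounding
for x in (0.1, 0.25, 0.5, 0.9):
    assert abs(f_inv(f(x)) - x) < 1e-12
print("round trips agree")
```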
Our point of view is different: the number of elements is an
intrinsic characteristic of each set (for both finite and infinite
cases) that does not depend on any object outside the set. Thus,
in Cantor's example from Fig.~\ref{figura3} we have (see
Fig.~\ref{Big_paper1}) three mathematical objects: (i) a set,
$X_{\mathcal{S}_1}$, of points over the interval $ (0,1)$ which we
are able to distinguish using a numeral system $\mathcal{S}_1$;
(ii) a set, $Y_{\mathcal{S}_2}$, of points over the vertical real
line which we are able to distinguish using a numeral system
$\mathcal{S}_2$; (iii) the function (\ref{4.4}) described using a
numeral system $\mathcal{S}_3$. All three mathematical
objects are independent of each other. The sets $X_{\mathcal{S}_1}$
and $Y_{\mathcal{S}_2}$ can have the same or different number of
elements.
Thus, we are not able to evaluate $f(x)$ at \textit{any} point
$x$. We are able to do this only at points from
$X_{\mathcal{S}_1}$. Of course, in order to be able to execute
these evaluations it is necessary to conciliate the numeral
systems $\mathcal{S}_1, \mathcal{S}_2,$ and $\mathcal{S}_3$. The
fact that we have made evaluations of $f(x)$ and have obtained
the corresponding values does not influence in any way the
numbers of elements of the sets $X_{\mathcal{S}_1}$ and
$Y_{\mathcal{S}_2}$. Moreover, it can happen that the number
$y=f(x)$ cannot be expressed in the numeral system $\mathcal{S}_2$
and it is necessary to approximate it by a number $\widetilde{y}
\in \mathcal{S}_2$. This situation, very well known to computer
scientists, is represented in Fig.~\ref{Big_paper1}.
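The situation where $y=f(x)$ must be approximated by $\widetilde{y} \in \mathcal{S}_2$ has a direct floating-point analogue. In the sketch below the two toy numeral systems are our own choice made for illustration: $\mathcal{S}_1$ holds the points $p/100$ of $(0,1)$ and $\mathcal{S}_2$ holds numbers with four decimal digits.

```python
from fractions import Fraction
import math

# Toy numeral systems chosen for illustration: S1 holds the points p/100
# of (0, 1); S2 holds numbers with four decimal digits.
x = Fraction(3, 100)                               # a point of X_{S1}
y = math.tan(0.5 * math.pi * (2 * float(x) - 1))   # f(x), generally not in S2
y_tilde = round(y, 4)                              # nearest numeral of S2
print(y, y_tilde)                                  # y_tilde only approximates y
```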
Let us recall one more famous example related to the one-to-one
correspondence, originating in the studies of Galileo
Galilei: even numbers can be put in a one-to-one correspondence
with all natural numbers in spite of the fact that they are a part
of them:
\beq
\begin{array}{lccccccc}
\mbox{even numbers:} & \hspace{5mm} 2, & 4, & 6, & 8, & 10, & 12, & \ldots \\
& \hspace{5mm} \updownarrow & \updownarrow & \updownarrow & \updownarrow & \updownarrow & \updownarrow & \\
\mbox{natural numbers:}& \hspace{5mm}1, & 2, & 3, & 4, & 5,
& 6, & \ldots \\
\end{array}
\label{4.4.1}
\eeq
Again, our view on this situation is different since we cannot
establish a one-to-one correspondence between the sets because
they are infinite and we, due to Postulate~1, are able to execute
only a finite number of operations. We cannot use the one-to-one
correspondence as an executable operation when it is necessary to
work with infinite sets.
However, we already know that the number of elements of the
set of natural numbers is equal to \ding{172} and \ding{172} is
even. Since the number of elements of the set of even numbers is
equal to $\frac{\mbox{\ding{172}}}{2}$, we can write down not
only the initial part (as is traditionally done) but also the
final part of (\ref{4.4.1})
\beq
\begin{array}{cccccccccc}
2, & 4, & 6, & 8, & 10, & 12, & \ldots &
\mbox{\ding{172}} -4, & \mbox{\ding{172}} -2, & \mbox{\ding{172}} \\
\updownarrow & \updownarrow & \updownarrow &
\updownarrow & \updownarrow & \updownarrow & &
\updownarrow & \updownarrow &
\updownarrow
\\
1, & 2, & 3, & 4, & 5, & 6, & \ldots & \frac{\mbox{\ding{172}}}{2} - 2, &
\frac{\mbox{\ding{172}}}{2} - 1, & \frac{\mbox{\ding{172}}}{2} \\
\end{array}
\label{4.4.2}
\eeq
thus completing (\ref{4.4.1}) in full accordance with
Postulate~3. Note that record (\ref{4.4.2}) does not affirm that
we have established a one-to-one correspondence between
\textit{all} even numbers and half of the natural ones. We cannot
do this due to Postulate~1. The symbols `$\ldots$' indicate an
infinite number of numbers and we can execute only a finite number
of operations. However, record (\ref{4.4.2}) affirms that for any
even number expressible in the chosen numeral system it is
possible to indicate the corresponding natural number in the lower
row of (\ref{4.4.2}).
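The correspondence, including its final part, can be reproduced with a finite even number $E$ standing in for \ding{172} (again only an analogy of ours; \ding{172} itself is not finite):

```python
# Finite analogy: the even number E stands in for the infinite unit
# (an illustration only; the infinite unit is not a finite number).
E = 20
evens = list(range(2, E + 1, 2))        # 2, 4, ..., E
naturals = list(range(1, E // 2 + 1))   # 1, 2, ..., E/2
pairs = list(zip(evens, naturals))
print(pairs[:3], pairs[-3:])            # initial and final part of the record
```

The last pair is $(E, E/2)$, mirroring the final column $(\mbox{\ding{172}}, \frac{\mbox{\ding{172}}}{2})$ of (\ref{4.4.2}).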
We conclude the paper by the following remark. With respect to
our methodology, the mathematical results obtained by Pirah\~{a},
Cantor, and those presented in this paper do not contradict
each other. \textit{They are all correct with respect to the
mathematical languages used to express them.} This relativity is
very important and it has been emphasized in Postulate~2. For
instance, the result of Pirah\~{a} 1+2=`many' is correct in their
language in the same way as the result 1+2=3 is correct in the
modern mathematical languages. Analogously, the result
(\ref{4.4.1}) is correct in Cantor's language and the more
powerful language developed in this paper allows us to obtain a
more precise result (\ref{4.4.2}) that is correct in the new
language.
The choice of the mathematical language depends on the practical
problems that are to be solved and on the accuracy required for
such a solution. Again, the result of Pirah\~{a} `many'+1=`many'
is correct. If one is satisfied with its accuracy, the answer
`many' can be used (and is used by Pirah\~{a}) in practice.
However, if one needs a more precise result, it is necessary to
introduce a more powerful mathematical language (a numeral system
in this case) allowing one to express the required answer in a
more accurate way.
\section{A brief conclusion}
\label{s7}
In this paper, a new computational methodology has been
introduced. It allows us to express, by a finite number of
symbols, not only finite numbers but also infinite and infinitesimal
ones. All of them can be viewed as particular instances of a
general framework used to express numbers.
It has been emphasized that the philosophical triad --
researcher, object of investigation, and tools used to observe the
object -- existing in such natural sciences as Physics and
Chemistry, exists in Mathematics, too. In natural sciences, the
instrument used to observe the object influences the results of
observations. The same happens in Mathematics where numeral
systems used to express numbers are among the instruments of
observations used by mathematicians. The usage of powerful numeral
systems gives the possibility to obtain more precise results in
Mathematics, in the same way as the usage of a good microscope
gives the possibility to obtain more precise results in Physics.
\bibliographystyle{amsplain}
In the standard three flavour framework, neutrino oscillation in which neutrinos change their flavour is described by six parameters: three mixing angles: $\theta_{12}$, $\theta_{13}$, $\theta_{23}$,
two mass squared differences: $\Delta_{21}$ ($m_2^2 - m_1^2$) and $\Delta_{31}$ ($m_3^2 - m_1^2$), and one phase $\delta_{13}$. Among them, one of the major unknowns is the sign of $\Delta_{31}$ or
the neutrino mass hierarchy. It can be either normal, i.e., $\Delta_{31} > 0$ (NH), or inverted, i.e., $\Delta_{31} < 0$ (IH). It is well known that if Nature chooses the favourable parameter space
where there is no degeneracy, then NO$\nu$A \cite{Adamson:2017gxd} can determine the neutrino mass hierarchy at more than $2 \sigma$ C.L.
Fortunately the current best fit parameter space i.e., NH with $\delta_{13}=-90^\circ$ \cite{Forero:2014bxa,Esteban:2016qun,Capozzi:2013csa}
is indeed the favourable parameter space for NO$\nu$A and thus it is expected that the first hint of neutrino mass hierarchy will come from the NO$\nu$A experiment. But the situation can be different
if there exists new physics. In the presence of new physics, there can be additional degeneracies which can spoil the hierarchy sensitivity of NO$\nu$A even for the favourable parameter space.
In this work we consider the existence of an extra light sterile neutrino at the eV scale \cite{Abazajian:2012ys} i.e. the 3+1 scenario.
In the present work, our aim is to identify the new degeneracies and study their
effect in the determination of hierarchy in NO$\nu$A.
\section{Oscillation parameters in 3+1 scheme}
In the presence of one extra light sterile neutrino, we parametrize the PMNS matrix as
\begin{table}[h]
\centering
\begin{tabular}{|c|c|c|}
\hline
$4\nu$ Parameters & True Value & Test Value Range\\
\hline
$\sin^2\theta_{12}$ & $0.304$ & $\mathrm{N/A}$\\
$\sin^22\theta_{13}$ & $0.085$ & $\mathrm{N/A}$\\
$\theta_{23}^{\mathrm{LO}}$ & $40^\circ$ & $(40^\circ,50^\circ)$\\
$\theta_{23}^{\mathrm{HO}}$ & $50^\circ$ & $(40^\circ,50^\circ)$\\
$\sin^2\theta_{14}$ & $0.025$ & $\mathrm{N/A}$\\
$\sin^2\theta_{24}$ & $0.025$ & $\mathrm{N/A}$\\
$\theta_{34}$ & $0^\circ$ & $\mathrm{N/A}$\\
$\delta_{13}$ & $-90^\circ$ & $(-180^\circ,180^\circ)$\\
$\delta_{14}$ & $-90^\circ,0^\circ,90^\circ$ & $(-180^\circ,180^\circ)$\\
$\delta_{34}$ & $0^\circ$ & $\mathrm{N/A}$\\
$\Delta_{21}$ & $7.5\times10^{-5}\mathrm{eV}^2$ & $\mathrm{N/A}$\\
$\Delta_{31}$ & $2.475\times10^{-3}\mathrm{eV}^2$ & $(2.2,2.6)\times10^{-3}\mathrm{eV}^2$\\
$\Delta_{41}$ & $1\mathrm{eV}^2$ & $\mathrm{N/A}$\\
\hline
\end{tabular}
\caption{\label{tab:i} Expanded $4\nu$ parameter true values and test marginalisation ranges, parameters with N/A are not marginalised over.
\label{SterParam}}
\end{table}
\begin{equation}
U_{\mathrm{PMNS}}^{4\nu}=
U(\theta_{34},\delta_{34})
U(\theta_{24},0)
U(\theta_{14},\delta_{14})
U_{\mathrm{PMNS}}^{3\nu}\,.
\end{equation}
where
\begin{equation}
U_{\mathrm{PMNS}}^{3\nu}
=
U(\theta_{23},0)
U(\theta_{13},\delta_{13})
U(\theta_{12},0)\,.
\end{equation}where $U(\theta_{ij},\delta_{ij})$ contains a corresponding $2\times2$ mixing matrix:
\begin{equation}
U^{2\times 2}(\theta_{ij},\delta_{ij})
=
\left(
\begin{array}{c c}
\mathrm{c}_{ij} & \mathrm{s}_{ij}e^{-i\delta_{ij}}\\
-\mathrm{s}_{ij}e^{i\delta_{ij}} & \mathrm{c}_{ij}
\end{array}
\right)
\end{equation}embedded in an $n\times n$ array in the $i,j$ sub-block.
Thus in this case the neutrino oscillation parameter space is increased by three more mixing angles: $\theta_{14}$, $\theta_{24}$ and $\theta_{34}$, two more Dirac type
CP phases i.e., $\delta_{14}$ and $\delta_{34}$ and one more mass squared difference: $\Delta_{41}$ ($m_4^2 - m_1^2$).
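As a consistency check, the $4\nu$ mixing matrix can be assembled numerically. The sketch below is our own, not the authors' code; it uses the convention in which the upper-right entry of each $2\times2$ block carries $e^{-i\delta_{ij}}$, which makes every factor, and hence $U_{\mathrm{PMNS}}^{4\nu}$, unitary. The angle values are taken from Table \ref{tab:i} (with $\theta_{23}=45^\circ$ chosen here for illustration):

```python
import numpy as np

def rot(n, i, j, theta, delta=0.0):
    """n x n complex rotation in the (i, j) plane (0-based indices)."""
    U = np.eye(n, dtype=complex)
    c, s = np.cos(theta), np.sin(theta)
    U[i, i], U[j, j] = c, c
    U[i, j] = s * np.exp(-1j * delta)
    U[j, i] = -s * np.exp(1j * delta)
    return U

# angles from sin^2(th12)=0.304, sin^2(2 th13)=0.085, sin^2(th14/24)=0.025
th12 = np.arcsin(np.sqrt(0.304))
th13 = 0.5 * np.arcsin(np.sqrt(0.085))
th23 = np.deg2rad(45.0)                     # illustrative choice
th14 = th24 = np.arcsin(np.sqrt(0.025))
d13, d14 = np.deg2rad(-90.0), np.deg2rad(-90.0)

U3 = rot(4, 1, 2, th23) @ rot(4, 0, 2, th13, d13) @ rot(4, 0, 1, th12)
U4 = rot(4, 2, 3, 0.0) @ rot(4, 1, 3, th24) @ rot(4, 0, 3, th14, d14) @ U3
assert np.allclose(U4 @ U4.conj().T, np.eye(4))  # unitary by construction
```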
In the 3+1 case, the appearance channel expression in vacuum is given by \cite{Klop:2014ima}
\begin{eqnarray} \nonumber
\label{eq:Pme_atm}
& P_{\mu e} &\!\! \simeq\, 4 s_{23}^2 s^2_{13} \sin^2{\Delta} + 8 s_{13} s_{12} c_{12} s_{23} c_{23} (\alpha \Delta)\sin \Delta \cos({\Delta \pm \delta_{13}}) +
4 s_{14} s_{24} s_{13} s_{23} \sin\Delta \sin (\Delta \pm \delta_{13} \mp \delta_{14})
\end{eqnarray}
where $\Delta \equiv \Delta_{31}L/4E$, $\alpha \equiv \Delta_{21}/ \Delta_{31}$ with $L$ being the baseline and $E$ is the energy.
For our present work we list our choice of parameters in Table \ref{tab:i} \cite{Kopp:2013vaa}.
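The width of the $\delta_{14}$ bands can be estimated directly from the vacuum expression above. The following sketch (our own; neutrino case with the upper signs, matter effects neglected, and NO$\nu$A-like values $L=810$ km and $E=2$ GeV assumed) scans $\delta_{14}$ at fixed $\delta_{13}=-90^\circ$:

```python
import numpy as np

L, E = 810.0, 2.0                       # km, GeV: assumed NOvA-like values
Dm31, Dm21 = 2.475e-3, 7.5e-5           # eV^2, from Table 1
s13 = np.sin(0.5 * np.arcsin(np.sqrt(0.085)))
s12, c12 = np.sqrt(0.304), np.sqrt(1 - 0.304)
s23 = c23 = np.sin(np.deg2rad(45.0))    # maximal mixing for illustration
s14 = s24 = np.sqrt(0.025)
d13 = np.deg2rad(-90.0)
D = 1.267 * Dm31 * L / E                # Delta = Dm31 L / 4E, in radians
a = Dm21 / Dm31

def P_mue(d14):                         # neutrino case: upper signs
    return (4 * s23**2 * s13**2 * np.sin(D)**2
            + 8 * s13 * s12 * c12 * s23 * c23 * (a * D)
              * np.sin(D) * np.cos(D + d13)
            + 4 * s14 * s24 * s13 * s23 * np.sin(D) * np.sin(D + d13 - d14))

band = [P_mue(np.deg2rad(d)) for d in range(-180, 181, 10)]
print(min(band), max(band))             # the delta_14 band at delta_13 = -90
```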
\begin{figure*}[h]\centering
\includegraphics[scale=0.75]{PvsEmhMaxMix.pdf}
\includegraphics[scale=0.75]{PvsEmhMaxMixAnti.pdf}\\
\includegraphics[scale=0.75]{PvsEOctOverlap.pdf}
\includegraphics[scale=0.75]{PvsEOctOverlapAnti.pdf}
\caption{$\nu_\mu \rightarrow\nu_e$ oscillation probability bands for $\delta_{13} = -90^\circ$. Left panels are for neutrinos and right panels are for antineutrinos.
The upper panel shows the hierarchy-$\delta_{14}$ degeneracy and the lower panels shows the octant-$\delta_{14}$ degeneracy.}
\label{fig:prob}
\end{figure*}
\begin{figure*}
\includegraphics[width=0.45\textwidth]{th23vsdel13_nova6_NH.pdf}
\includegraphics[width=0.45\textwidth]{th23-del13_test_NH.pdf}
\caption{Contour plots in the $\theta_{23}({\rm test})$ vs $\delta_{13}({\rm test})$ plane for two different true values of $\theta_{23}= 40^\circ$ (first and third column) and $50^\circ$ (second and fourth column) for NO$\nu$A $(6+\bar0)$ (first and second column) and ($3+\bar 3$) (third and fourth column). The first, second and third rows are for $\delta_{14}=-90^\circ$ , $0^\circ$ and $90^\circ$ respectively. The true value for the $\delta_{13}$ is taken to be $-90^\circ$. The true hierarchy is NH. We marginalize over the test values of $\delta_{14}$. Also shown is the contours for the $3\nu$ flavor scenario.}
\label{fig:hierarchy_degen}
\end{figure*}
\begin{figure}\centering
\includegraphics[width=0.45\textwidth]{th23-del13_true_NH.pdf}
\caption{Contour plots at $2\sigma$ C.L. in the $\theta_{23}({\rm true})$ vs $\delta_{13}({\rm true})$ plane for Octant Unknown (left panel) and Octant Known (right panel) scenarios for NO$\nu$A ($3+\bar 3$). The first, second and third rows are for $\delta_{14}=-90^\circ$, $0^\circ$ and $90^\circ$ respectively. The true and test hierarchies are chosen to be normal (NH) and inverted hierarchy (IH) respectively. Also shown contours for the $3\nu$ flavor scenario.}
\label{fig:hierarchy_degen_1}
\end{figure}
In the table, LO implies the lower octant of $\theta_{23}$ and HO implies the higher octant of $\theta_{23}$. We have generated all our results with the GLoBES software \cite{Huber:2004ka}.
\section{Degeneracy at the probability level}
In Fig. \ref{fig:prob}, we have plotted the appearance probability vs energy for $\delta_{13} = -90^\circ$; the bands are due to the variation of $\delta_{14}$. The upper panels show the
hierarchy-$\delta_{14}$ degeneracy and the lower panels depict octant-$\delta_{14}$ degeneracy.
From the upper panels we see that we have degeneracies in \{NH, $\delta_{14}=90^\circ$\} with \{IH, $\delta_{14}=-90^\circ$\} for neutrinos and
\{NH, $\delta_{14}=-90^\circ$\} with \{IH, $\delta_{14}=90^\circ$\} for antineutrinos. Thus we understand that this degeneracy can be removed with a balanced run of neutrinos and antineutrinos.
From the lower panels we see that there is a degeneracy of \{LO, $\delta_{14}=-90^\circ$\} with \{HO, $\delta_{14}=90^\circ$\} for both neutrinos and antineutrinos. Thus it is clear that this
degeneracy is unremovable. It was shown in Ref. \cite{Agarwalla:2016xlg} that due to this degeneracy, the octant determination of the long-baseline experiments is highly compromised.
\section{Degeneracies at the event level}
To show the degeneracies at the event level, in Fig. \ref{fig:hierarchy_degen} we have given the contour plots in the $\theta_{23}$ (test) - $\delta_{13}$ (test) plane. The true point
is represented by the red diamond. In these panels,
red and purple contours correspond to the right hierarchy and wrong hierarchy solutions respectively for the three generation case and the blue and green contours correspond to right hierarchy
and wrong hierarchy solutions respectively for the 3+1 case. By comparing the pure neutrino results of NO$\nu$A labeled as NO$\nu$A (6+0) and the mixed neutrino-antineutrino results labeled
as NO$\nu$A (3+3), we notice that for the three generation case, all the degenerate solutions are almost gone when antineutrino data is considered.
But for the 3+1 case, we notice that the wrong hierarchy solutions are almost gone but
the wrong octant solutions do not get removed.
Thus from the above discussion we understand that even for NH and $\delta_{13} = -90^\circ$, where there is almost no degeneracy in the three generation case, there exist degenerate solutions
when there is an extra light sterile neutrino.
\section{Results for hierarchy sensitivity}
To study the effect of these degeneracies on the hierarchy measurement, in Fig. \ref{fig:hierarchy_degen_1} we have plotted the hierarchy $\chi^2$ in the true $\theta_{23}$ - true $\delta_{13}$ plane.
From the figure we see that NO$\nu$A has good hierarchy sensitivity for $\delta_{13}=-90^\circ$ in the three generation case.
But for the 3+1 case, the sensitivity depends on the true value of $\delta_{14}$. For $\delta_{14}=-90^\circ$ we see that the hierarchy sensitivity is lost for $\theta_{23} < 43^\circ$ if the
octant is unknown. However if the octant is known then the sensitivity coincides with the three generation case. For $\delta_{14} = 0^\circ$, we note that the hierarchy sensitivity is lost
if $\theta_{23}$ is less than $46^\circ$ for both the cases. However the most remarkable result is obtained if $\delta_{14}$ is $90^\circ$. In this case we see that there is a complete loss of
hierarchy sensitivity at $2 \sigma$ for all true values of $\theta_{23}$.
\section{Summary}
In this work we have studied parameter degeneracies in neutrino oscillation in the presence of a light sterile neutrino at the eV scale for NO$\nu$A. We have identified new degeneracies
which are absent in the standard three generation case. Because of these, there are unresolved degenerate regions in the 3+1 case. We also showed that the hierarchy sensitivity depends on the
true value of $\delta_{14}$. If the observed hierarchy sensitivity of NO$\nu$A is less than expected, then this can be a hint of the existence of sterile neutrinos. For more detail we refer to
\cite{Ghosh:2017atj}, on which this article is based.
\section*{Acknowledgements}
The work of MG is partly supported by the ``Grant-in-Aid for Scientific Research of the Ministry of Education, Science and Culture, Japan", under Grant No. 25105009.
SG, ZM, PS and AGW acknowledge the support by the University of Adelaide and the Australian Research Council through the ARC Centre of Excellence for
Particle Physics at the Terascale (CoEPP) (grant no. CE110001004).
\section{Details of the modules}
\label{sec:appendixA}
In the following, we generally define the collision rules only for one side,
as the corresponding rules for the other side can be deduced by symmetry.
We give here the rules for all modules: first the modules for building and stopping
the fractal tree, then the modules for setting and using the tree for satisfiability
problems and finally the modules specific to each variant of SAT.
All the diagrams of the paper were generated by Durand-Lose's software, implemented in Java,
and correspond to a run of our Q-SAT solver for the formula:
\[
\phi = \exists x_1\forall x_2\forall x_3\ (x_1\AND \neg x_2)\OR x_3 \enspace.
\]
\subsection{The fractal cloud}
\paragraph{Constructing the fractal:}
$\texttt{[start]} = \RLO\sigstart,\RHI\sigstart$\\
The following rules on the left correspond to the bounce of $\RHI\sigstart$ and
$\LHI\sigstart$ on the wall $\ST\sigwall$. Rules on the right are the step of induction
for starting the next level: the initial signals $\RLO\sigstart$ and $\RHI\sigstart$ are
duplicated on the right and the left, and the stationary signal $\ST\sigstart$ is created
exactly at the middle of the previous stage. The result is given by \Fig{fig:app:tree}.
\begin{align*}
\sigwall,\LHI\sigstart & \becomes \sigwall,\RHI\sigstart &
\RLO\sigstart, \LHI\sigstart & \becomes \LHI\sigstart, \LLO\sigstart, \ST\sigstart, \RLO\sigstart, \RHI\sigstart\\
\RHI\sigstart, \sigwall & \becomes \LHI\sigstart, \sigwall &
\RHI\sigstart, \LLO\sigstart & \becomes \LHI\sigstart, \LLO\sigstart, \ST\sigstart, \RLO\sigstart, \RHI\sigstart\\
\end{align*}
\begin{figure}[hbt]
\centering
\includegraphics[width=0.4\textwidth]{Fig/tree.pdf}
\caption{The fractal tree.}
\label{fig:app:tree}
\end{figure}
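The stationary signals created by the induction step sit at the dyadic midpoints of the space between the two walls. A small sketch (normalising that space to $[0,1]$, a choice of ours) reproduces the node positions of the tree in \Fig{fig:app:tree}:

```python
# Positions of the stationary [start] signals, level by level, assuming the
# space between the two walls is normalised to [0, 1] (our normalisation).
def fractal_levels(depth):
    return [[(2 * i + 1) / 2 ** k for i in range(2 ** (k - 1))]
            for k in range(1, depth + 1)]

print(fractal_levels(3))
# -> [[0.5], [0.25, 0.75], [0.125, 0.375, 0.625, 0.875]]
```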
\paragraph{Stopping the fractal:}
$\block{until}[n+1] = \RLO\sigstop{\RLO\sigstopaux}^{n}$
The role of this module is to stop the fractal after $(n+1)$ levels --- $n$
levels for assigning the $n$ variables and $1$ level for the evaluation of the
ground formula. For this, we use a stack of $n$ signals $\RLO\sigstopaux$ and
one signal $\RLO\sigstop$. The $\RLO\sigstopaux$ signals are used both as a
counter, one signal being killed at each level, and as inhibitors to the effect
of $\RLO\sigstop$. After $n$ levels, only $\RLO\sigstop$ remains and can
stop the construction of the fractal at level $n+1$. Here are the corresponding rules:
\begin{align*}
\RHI\sigstopaux, \LLO\sigstart & \becomes \LLO\sigstartoff,\RHI\sigstopaux &
\RHI\sigstopaux, \ST\sigstart & \becomes \ST\sigstartoff &
\RHI\sigstopaux, \ST\sigstartoff & \becomes \LLO\sigstopaux, \ST\sigstartoff, \RLO\sigstopaux\\
\RHI\sigstop, \LLO\sigstart & \becomes \LLO\sigstartoff, \RHI\sigstop &
\RHI\sigstop, \LLO\sigstartoff & \becomes \LLO\sigstart, \RHI\sigstop &
\RHI\sigstop, \ST\sigstartoff & \becomes \LLO\sigstop, \ST\sigstart, \RLO\sigstop\\
\RHI\sigstop, \ST\sigstart & \becomes \ST\sigstart, \RHI\sigstop &
\RHI\sigstop, \RLO\sigstart & \becomes \RLO\sigstartoff &
\RHI\sigstart, \LLO\sigstartoff & \becomes
\end{align*}
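Abstracting away the signal dynamics, the counting behaviour of the module can be mimicked as follows (a sketch of the counter logic only, not of the collisions themselves):

```python
def stop_level(n):
    """Level at which the stop signal acts, given n auxiliary signals."""
    aux, level = n, 0
    while True:
        level += 1
        if aux > 0:
            aux -= 1      # one aux signal is killed at this level
        else:
            return level  # only the stop signal remains: the fractal halts

print(stop_level(3))      # -> 4, i.e. n aux signals stop level n+1
```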
\paragraph{The lens effect:}
The general idea is that any signal $\RLO\sigi$ is accelerated by
$\LHI\sigstart$ and decelerated and split by any stationary signal
$\ST\sigs$. There are some special cases to handle for the deceleration and the
split, some particular signals being stopped at this moment. But in all cases,
signals are always accelerated by $\LHI\sigstart$. Figure \ref{fig:app:split}
zooms on a split and illustrates the lens effect on the beam as well as the
assignment of the top-most $\RLO\sigx$.
\begin{figure}[hbt]
\centering
\includegraphics[width=1\textwidth]{Fig_New/split.pdf}
\caption{Split and lens effect at the first level.}
\label{fig:app:split}
\end{figure}
\emph{General case}: for any stationary signal $\ST\sigs$ (more exactly, $\ST\sigs$ is either $\ST\sigstart$,
$\ST\sigstartoff$, $\ST\sigx$, $\ST\sigxoff$, $\ST\sigexistsL$, $\ST\sigexistsR$,
$\ST\sigforallL$ or $\ST\sigforallR$) and for any signal $\RLO\sigi$ distinct
of $\RLO\sigstopaux$ and $\RLO\sigstop$, we have:
\begin{align*}
\RLO\sigi, \LHI\sigstart & \becomes \LHI\sigstart, \RHI\sigi &
\RHI\sigi, \ST\sigs & \becomes \LLO\sigi, \ST\sigs, \RLO\sigi
\end{align*}
\emph{Case of $\RLO\sigstopaux$ and $\RLO\sigstop$}: the rules for applying the
lens effect to $\RLO\sigstopaux$ and $\RLO\sigstop$ were given above in
the paragraph ``Stopping the fractal''. The stationary signal involved here is $\ST\sigstart$,
which kills the first $\RHI\sigstopaux$, becomes $\ST\sigstartoff$ and splits
and decelerates the next signals $\RHI\sigstopaux$ and $\RHI\sigstop$, and then
becomes $\ST\sigstart$ again.
\emph{Case of $\RLO\sigstartaux$}: the first $\RLO\sigstartaux$ is stopped by the stationary
signal $\ST\sigstart$, which becomes $\ST\sigx$ and decelerates and splits the next coming
$\RLO\sigstartaux$.
\begin{align*}
\RHI\sigstartaux, \ST\sigstart & \becomes \ST\sigx &
\RHI\sigstartaux, \ST\sigx & \becomes \LLO\sigstartaux, \ST\sigx, \RLO\sigstartaux
\end{align*}
\emph{Case of quantifiers signals}: the first quantifier signal, $\RHI\sigforall$ or $\RHI\sigexists$,
colliding with a stationary $\ST\sigx$ is stopped and turns $\ST\sigx$ into
$\ST\sigforallD$ or $\ST\sigexistsD$ ($D \in \{R, L\}$). This is achieved by the rules:
\begin{align*}
\ST\sigx, \LHI\sigexists & \becomes \ST\sigexistsR &
\RHI\sigexists, \ST\sigx & \becomes \ST\sigexistsL &
\ST\sigx, \LHI\sigforall & \becomes \ST\sigforallR &
\RHI\sigforall, \ST\sigx & \becomes \ST\sigforallL
\end{align*}
The next quantifier signals are decelerated and split by the
new stationary signal $\ST\sigforallD$ or $\ST\sigexistsD$.
In the following rules, we have $D \in \{R, L\}$:
\begin{align*}
\RHI\sigexists, \ST\sigexistsD & \becomes \LLO\sigexists, \ST\sigexistsD, \RLO\sigexists &
\RHI\sigexists, \ST\sigforallD & \becomes \LLO\sigexists, \ST\sigforallD, \RLO\sigexists\\
\RHI\sigforall, \ST\sigexistsD & \becomes \LLO\sigforall, \ST\sigexistsD, \RLO\sigforall &
\RHI\sigforall, \ST\sigforallD & \becomes \LLO\sigforall, \ST\sigforallD, \RLO\sigforall\\
\end{align*}
\subsection{The tree for satisfiabilty problems}
\paragraph{Activation of the decision tree:}
$\block{init}[n] = {\RLO\sigstartaux}^n$
\begin{align*}
\RHI\sigstartaux, \ST\sigstart & \becomes \ST\sigx &
\RHI\sigstartaux, \ST\sigx & \becomes \LLO\sigstartaux, \ST\sigx, \RLO\sigstartaux
\end{align*}
\paragraph{Representation of a variable:}
$\block{var}[x_i] = \RLO\sigx{\RLO\sigxdelay}^{i-1}$
As we explained in the paper, the variable $x_i$ is represented by a stack of
$i$ signals: one signal $\RLO\sigx$ and $i-1$ signals $\RLO\sigxdelay$. The role
of the signals $\RLO\sigxdelay$ is to protect $\RLO\sigx$ from being assigned
before the $i^{th}$ level. The first signal $\RLO\sigxdelay$ of the stack is
stopped at the next split, so that $i-1$ signals have disappeared just before
the $i^{th}$ stage. This is illustrated in \Fig{fig:app:split} and modelled
by the following rules:
\begin{align*}
\RHI\sigxdelay, \ST\sigx & \becomes \ST\sigxoff &
\RHI\sigxdelay, \ST\sigxoff & \becomes \LLO\sigxdelay, \ST\sigxoff, \RLO\sigxdelay \\
\RHI\sigx, \ST\sigx & \becomes \LLO\sigf, \ST\sigx, \RLO\sigt &
\RHI\sigx, \ST\sigxoff & \becomes \LLO\sigx, \ST\sigx, \RLO\sigx
\end{align*}
\paragraph{Compilation of the formula:}
We propose here a recursive algorithm which takes as input an
unquantified formula --- a SAT-formula --- and outputs the part of
the initial configuration corresponding to the formula.
In the following schemes of compilation, $\NCON{\phi}$ designates
the number of occurences of Boolean connectives in formula
$\phi$.
\begin{align*}
\CC{\phi} &= \CC{\phi}^0\\
\CC{\phi_1\AND\phi_2}^k &= \RLO\sigand\ {\RLO\siggamma}^k\ \CC{\phi_1}^0\ \CC{\phi_2}^{\NCON{\phi_1}}\\
\CC{\phi_1\OR\phi_2}^k &= \RLO\sigor\ {\RLO\siggamma}^k\ \CC{\phi_1}^0\ \CC{\phi_2}^{\NCON{\phi_1}}\\
\CC{\NOT\phi}^k &= \RLO\signot\ {\RLO\siggamma}^k\ \CC{\phi}\\
\CC{x_i}^k & = \RLO\sigx\ {\RLO\sigbeta}^{i-1}\ {\RLO\siggamma}^k
\end{align*}
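The compilation scheme above can be transcribed directly into code. In the sketch below (our own transcription; signal names such as 'gamma' and 'beta' are plain strings standing for the paper's signals), formulas are nested tuples:

```python
# Formulas as nested tuples: ('and', f1, f2), ('or', f1, f2),
# ('not', f), ('var', i); the output is the list of emitted signals.
def ncon(phi):
    """Number of occurrences of Boolean connectives in phi."""
    if phi[0] == 'var':
        return 0
    return 1 + sum(ncon(arg) for arg in phi[1:])

def compile_formula(phi, k=0):
    op = phi[0]
    if op == 'var':
        return ['x'] + ['beta'] * (phi[1] - 1) + ['gamma'] * k
    if op == 'not':
        return ['not'] + ['gamma'] * k + compile_formula(phi[1])
    left, right = phi[1], phi[2]   # binary connective: 'and' or 'or'
    return ([op] + ['gamma'] * k
            + compile_formula(left)
            + compile_formula(right, ncon(left)))

# (x1 AND NOT x2) OR x3, the running example
phi = ('or', ('and', ('var', 1), ('not', ('var', 2))), ('var', 3))
print(compile_formula(phi))
```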
\paragraph{Evaluation:}
The rules for the evaluation follow the classical Boolean operations. We explained earlier
that some inhibiting signals --- the $\RLO\siggamma$ --- are needed to allow
the result of the first evaluated argument of a binary connective to traverse
the beam of the other, as yet unevaluated, argument without reacting with the connectives contained
therein, so that it interacts only with its actual syntactic parent connective.
Figure \ref{fig:app:eval} displays the evaluation of the
formula $\phi=\exists x_1\forall x_2\forall x_3\ (x_1\AND \neg x_2)\OR x_3$ for the case
$x_1=x_2=x_3=\ST\sigt$.
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{Fig_New/fig_K_2011_ICALP_eg_evaluate_cropped.pdf}
\caption{Evaluation for $x_1=x_2=x_3=\ST\sigt$ in $\exists x_1\forall x_2\forall x_3\ (x_1\AND \neg x_2)\OR x_3$.}
\label{fig:app:eval}
\end{figure}
\begin{align*}
\RHI\sigt, \ST\sigstart & \becomes \LLO\sigT, \ST\sigstart &
\RHI\sigf, \ST\sigstart & \becomes \LLO\sigF, \ST\sigstart &
\RHI\siggamma, \ST\sigstart & \becomes \LLO\siggammaP, \ST\sigstart\\[4mm]
\RHI\sigand, \LLO\sigT & \becomes \RHI\sigandP &
\RHI\sigfP, \LLO\sigT & \becomes \RHI\sigf &
\RHI\sigandP, \LLO\sigT & \becomes \RHI\sigt\\
\RHI\sigand, \LLO\sigF & \becomes \RHI\sigfP &
\RHI\sigfP, \LLO\sigF & \becomes \RHI\sigf &
\RHI\sigandP, \LLO\sigF & \becomes \RHI\sigf\\[4mm]
\RHI\sigor, \LLO\sigT & \becomes \RHI\sigtP &
\RHI\sigtP, \LLO\sigT & \becomes \RHI\sigt &
\RHI\sigorP, \LLO\sigT & \becomes \RHI\sigt\\
\RHI\sigor, \LLO\sigF & \becomes \RHI\sigorP &
\RHI\sigtP, \LLO\sigF & \becomes \RHI\sigt &
\RHI\sigorP, \LLO\sigF & \becomes \RHI\sigf\\[4mm]
\RHI\signot, \LLO\sigT & \becomes \RHI\sigf\\
\RHI\signot, \LLO\sigF & \becomes \RHI\sigt\\[4mm]
\RHI\sigand, \LLO\siggammaP & \becomes \RHI\sigandZ &
\RHI\sigor, \LLO\siggammaP & \becomes \RHI\sigorZ &
\RHI\signot, \LLO\siggammaP & \becomes \RHI\signotZ\\[4mm]
\RHI\sigandZ, \LLO\sigT & \becomes \LLO\sigT, \RHI\sigand &
\RHI\sigorZ, \LLO\sigT & \becomes \LLO\sigT, \RHI\sigor &
\RHI\signotZ, \LLO\sigT & \becomes \LLO\sigT, \RHI\signot\\
\RHI\sigandZ, \LLO\sigF & \becomes \LLO\sigF, \RHI\sigand &
\RHI\sigorZ, \LLO\sigF & \becomes \LLO\sigF, \RHI\sigor &
\RHI\signotZ, \LLO\sigF & \becomes \LLO\sigF, \RHI\signot
\end{align*}
\paragraph{Storing the results:} $\block{store} = \RLO\sigs$
\begin{align*}
\RHI\sigs, \LLO\sigT & \becomes \RHI\sigT &
\RHI\sigT, \ST\sigstart & \becomes \ST\sigt\\
\RHI\sigs, \LLO\sigF & \becomes \RHI\sigF &
\RHI\sigF, \ST\sigstart & \becomes \ST\sigf
\end{align*}
\section{Modularity and satisfiability variants}
\label{sec:appendixB}
\subsection{Q-SAT}
To perform the aggregation process, we join the results coming
from the right and the left two by two. This is done with a stationary signal indicating
the type of operation to perform --- a conjunction for $\forall$ and
a disjunction for $\exists$ --- and the direction of the resulting signal ---
left or right. The whole process is displayed in \Fig{fig:app:collect}.
\begin{figure}[hbt]
\centering
\includegraphics[width=0.9\textwidth]{Fig_New/fig_K_2011_ICALP_eg_collecte.pdf}
\caption{Aggregation process.}
\label{fig:app:collect}
\end{figure}
\paragraph{Setting up the reduce stage:}
\[\block{reduce:qsat:init}[Q_1x_1 \cdots Q_nx_n] = \RLO{Q_n}\ldots\RLO{Q_1}\]
\begin{align*}
\ST\sigx, \LHI\sigexists & \becomes \ST\sigexistsR &
\RHI\sigexists, \ST\sigx & \becomes \ST\sigexistsL\\
\ST\sigx, \LHI\sigforall & \becomes \ST\sigforallR &
\RHI\sigforall, \ST\sigx & \becomes \ST\sigforallL
\end{align*}
\paragraph{Executing the reduce stage:}
$\block{reduce:qsat:exec} = \RLO\sigcollect$\\
Initiation:
\begin{align*}
\RHI\sigcollect, \ST\sigt & \becomes \LLO\sigt &
\RHI\sigcollect, \ST\sigf & \becomes \LLO\sigf\\
\end{align*}
Performing the disjunction:
\begin{align*}
\RLO\sigt, \ST\sigexistsL, \LLO\sigt & \becomes \LLO\sigt &
\RLO\sigt, \ST\sigexistsL, \LLO\sigf & \becomes \LLO\sigt &
\RLO\sigf, \ST\sigexistsL, \LLO\sigt & \becomes \LLO\sigt &
\RLO\sigf, \ST\sigexistsL, \LLO\sigf & \becomes \LLO\sigf \\
\RLO\sigt, \ST\sigexistsR, \LLO\sigt & \becomes \RLO\sigt &
\RLO\sigt, \ST\sigexistsR, \LLO\sigf & \becomes \RLO\sigt &
\RLO\sigf, \ST\sigexistsR, \LLO\sigt & \becomes \RLO\sigt &
\RLO\sigf, \ST\sigexistsR, \LLO\sigf & \becomes \RLO\sigf \\
\end{align*}
Performing the conjunction:
\begin{align*}
\RLO\sigt, \ST\sigforallL, \LLO\sigt & \becomes \LLO\sigt &
\RLO\sigt, \ST\sigforallL, \LLO\sigf & \becomes \LLO\sigf &
\RLO\sigf, \ST\sigforallL, \LLO\sigt & \becomes \LLO\sigf &
\RLO\sigf, \ST\sigforallL, \LLO\sigf & \becomes \LLO\sigf \\
\RLO\sigt, \ST\sigforallR, \RLO\sigt & \becomes \RLO\sigt &
\RLO\sigt, \ST\sigforallR, \RLO\sigf & \becomes \RLO\sigf &
\RLO\sigf, \ST\sigforallR, \RLO\sigt & \becomes \RLO\sigf &
\RLO\sigf, \ST\sigforallR, \RLO\sigf & \becomes \RLO\sigf \\
\end{align*}
Putting all the modules together, we obtain for the running example the
initial configuration shown by \Fig{fig:app:start}.
The global construction is displayed by \Fig{fig:app:whole}
\begin{figure}[hbt]
\centering
\includegraphics[scale=.8]{Fig_New/init.pdf}
\caption{Initial configuration.}
\label{fig:app:start}
\end{figure}
\begin{figure}[hbt]
\centering
\includegraphics[width=.8\textwidth]{Fig_New/whole_diagram_cropped.pdf}
\caption{The whole diagram.}
\label{fig:app:whole}
\end{figure}
\subsection{\#SAT}
\#SAT is the problem of counting the number of
solutions of an instance of SAT. We recall that this problem is complete
for the class \#P, \Latin{i.e.} the class of counting problems associated
with the decision problems in NP.
Since a SAT formula is a special instance of a Q-SAT formula
in which all quantifiers are existential, we solve \#SAT with our Q-SAT solver,
to which we add a special module counting the satisfying valuations of the formula
during the aggregation process. The counting is performed by a binary adder.
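For reference, the function being computed is simply the following (a brute-force Python sketch of the \#SAT semantics, with the formula given as a Boolean predicate rather than as the signal encoding used by the machine):

```python
from itertools import product

def count_models(formula, n):
    """Count the satisfying assignments of an n-variable predicate by
    brute force. This is only the reference semantics of #SAT; the
    signal machine performs the same count in parallel with a binary
    adder during the aggregation process."""
    return sum(1 for vs in product([False, True], repeat=n) if formula(*vs))

# (x1 and not x2) or x3, over 3 variables
f = lambda x1, x2, x3: (x1 and not x2) or x3
print(count_models(f, 3))  # 5 satisfying assignments
```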
\paragraph{Setting up the reduce stage:}
$\block{reduce:\#sat:init} = {\RLO\sigadd}^n$
\begin{align*}
\RLO\sigadd, \ST\sigx & \becomes \ST\sigaddLZ &
\ST\sigx, \LLO\sigadd & \becomes \ST\sigaddRZ
\end{align*}
\paragraph{Executing the reduce stage:}
$\block{reduce:\#sat:exec} = \RLO\sigaddhiZ\RLO\sigaddloZ\RLO\sigzeroZ$
\begin{align*}
\RLO\sigaddloZ, \ST\sigt & \becomes \LLO\sigaddlo &
\RLO\sigaddhiZ, \ST\sigt & \becomes \LLO\sigaddhi &
\RLO\sigzeroZ, \ST\sigt & \becomes \LLO\sigone\\
\RLO\sigaddloZ, \ST\sigf & \becomes \LLO\sigaddlo &
\RLO\sigaddhiZ, \ST\sigf & \becomes \LLO\sigaddhi &
\RLO\sigzeroZ, \ST\sigf & \becomes \LLO\sigzero
\end{align*}
Rules for the binary adder (the rules are given for the $R$ subscript; those
for the $L$ subscript are similar, but with the output going to the left):
\begin{align*}
\RLO\sigzero, \ST\sigaddRZ, \LLO\sigzero & \becomes \ST\sigaddRZ, \RLO\sigzero &
\RLO\sigzero, \ST\sigaddRO, \LLO\sigzero & \becomes \ST\sigaddRZ, \RLO\sigone\\
\RLO\sigone, \ST\sigaddRZ, \LLO\sigzero & \becomes \ST\sigaddRZ, \RLO\sigone &
\RLO\sigone, \ST\sigaddRO, \LLO\sigzero & \becomes \ST\sigaddRO, \RLO\sigzero\\
\RLO\sigzero, \ST\sigaddRZ, \LLO\sigone & \becomes \ST\sigaddRZ, \RLO\sigone &
\RLO\sigzero, \ST\sigaddRO, \LLO\sigone & \becomes \ST\sigaddRO, \RLO\sigzero\\
\RLO\sigone, \ST\sigaddRZ, \LLO\sigone & \becomes \ST\sigaddRO, \RLO\sigzero &
\RLO\sigone, \ST\sigaddRO, \LLO\sigone & \becomes \ST\sigaddRO, \RLO\sigone\\[3mm]
\RLO\sigzero, \ST\sigaddRZ, \LLO\sigaddlo & \becomes \ST\sigaddRZ, \RLO\sigzero &
\RLO\sigzero, \ST\sigaddRO, \LLO\sigaddlo & \becomes \ST\sigaddRZ, \RLO\sigone\\
\RLO\sigone, \ST\sigaddRZ, \LLO\sigaddlo & \becomes \ST\sigaddRZ, \RLO\sigone &
\RLO\sigone, \ST\sigaddRO, \LLO\sigaddlo & \becomes \ST\sigaddRO, \RLO\sigzero\\[3mm]
\RLO\sigzero, \ST\sigaddRZ & \becomes \ST\sigaddRZ, \RLO\sigzero &
\RLO\sigzero, \ST\sigaddRO & \becomes \ST\sigaddRZ, \RLO\sigone\\
\RLO\sigone, \ST\sigaddRZ & \becomes \ST\sigaddRZ,\RLO\sigone &
\RLO\sigone, \ST\sigaddRO & \becomes \ST\sigaddRO, \RLO\sigzero\\[3mm]
\RLO\sigaddlo, \ST\sigaddRZ, \LLO\sigaddlo & \becomes \ST\sigaddRZ, \RLO\sigaddlo &
\RLO\sigaddlo, \ST\sigaddRO, \LLO\sigaddlo & \becomes \ST\sigaddRZ, \RLO\sigone, \RHI\sigaddlo\\
\RLO\sigaddlo, \ST\sigaddRZ & \becomes \ST\sigaddRZ, \RLO\sigaddlo &
\RLO\sigaddlo, \ST\sigaddRO & \becomes \ST\sigaddRZ, \RLO\sigone, \RHI\sigaddlo\\[3mm]
\RHI\sigaddlo, \LLO\sigaddhi & \becomes \LLO\sigaddhi, \RLO\sigaddlo &
\RLO\sigaddhi, \ST\sigaddRZ & \becomes \RLO\sigaddhi
\end{align*}
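The rules above implement what is, in effect, a ripple-carry addition, with the stationary signals storing the running carry state. The same computation on little-endian bit lists can be sketched as follows (illustrative Python, independent of the meta-signal encoding):

```python
def ripple_add(a, b):
    """Add two little-endian bit lists the way a ripple-carry adder
    does: process one bit position at a time, keeping a running carry
    that propagates toward the high-order (left-going) end."""
    n = max(len(a), len(b)) + 1          # one extra position for a final carry
    a = a + [0] * (n - len(a))
    b = b + [0] * (n - len(b))
    out, carry = [], 0
    for x, y in zip(a, b):
        s = x + y + carry
        out.append(s & 1)                # sum bit at this position
        carry = s >> 1                   # carry to the next position
    return out

assert ripple_add([1, 1], [1, 0]) == [0, 0, 1]  # 3 + 1 = 4
```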
\newcommand{\rsig}[1]{{\ensuremath{\overrightarrow{#1}}}\xspace}
\newcommand{\lsig}[1]{{\ensuremath{\overleftarrow{#1}}}\xspace}
\newcommand{\asig}[2]{{\ensuremath{{+}^{#1}_{#2}}}\xspace}
\begin{figure}[htb]
\centering
\begin{tikzpicture}[scale=0.4]
\draw[step=1,very thin,color=lightgray] (0,0) grid (8,15);
\draw[color=olive] (4,0) -- (4,12);
\draw (0,0) node[left] {\rsig1} -- (4,4)
(0,4) node[left] {\rsig1} -- (4,8)
(0,6) node[left] {\rsig A} -- (4,10)
(0,8) node[left] {\rsig B} -- (4,12)
;
\draw (8,0) node[right] {\lsig 1} -- (4,4)
(8,4) node[right] {\lsig A} -- (4,8)
(8,8) node[right] {\lsig B} -- (4,12)
;
\draw[dashed,color=blue!80!black]
(4,4) -- (7,7) node[right] {\rsig 0}
(4,8) -- (7,11) node[right] {\rsig 0}
(4,10) -- (7,13) node[right] {\rsig 1}
(4.5,11.5) -- (7,14) node[right] {\rsig A}
(4,12) -- (7,15) node[right] {\rsig B}
;
\draw[color=red!80!black]
(4,10) -- (4.5,11.5) node[right] {\rsig C}
;
\draw[color=olive]
(4,0) node[below] {\asig0R}
(4,5) node[left] {\asig1R}
(4,9) node[right] {\asig1R}
(4,11) node[left] {\asig0R}
;
\end{tikzpicture}
\caption{Computing $3 + 1$}
\end{figure}
\subsection{ENUM-SAT}
ENUM-SAT is the problem of enumerating all the solutions of an instance of SAT:
we want to know \emph{all} the truth assignments of the variables for which the formula is satisfiable.
We can also consider a particular case of ENUM-SAT: the problem ONESOL-SAT, which consists
in returning \emph{only one} valuation satisfying the formula, when the formula is satisfiable.
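The expected input/output behaviour of ENUM-SAT can be sketched by brute force (a hypothetical Python encoding of the formula as a predicate, for illustration only):

```python
from itertools import product

def enum_sat(formula, n):
    """Yield every satisfying assignment of an n-variable predicate
    (ENUM-SAT); stopping after the first one solves ONESOL-SAT."""
    for vs in product([False, True], repeat=n):
        if formula(*vs):
            yield vs

f = lambda x1, x2: x1 or x2
print(list(enum_sat(f, 2)))  # [(False, True), (True, False), (True, True)]
```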
\paragraph{Reduce stage:}
$\block{reduce:enumsat}[x_1 \ldots x_n] = \RLO\sigv\block{var}[x_1]\ldots\block{var}[x_n]\RLO\sigv$
\begin{align*}
\RHI\sigv, \ST\sigt & \becomes \LLO\sigv, \ST\sigv &
\RLO\sigv, \LHI\sigt & \becomes \ST\sigt, \RLO\sigv &
\RHI\sigv, \ST\sigf & \becomes \LLO\sigvZ &
\RLO\sigvZ, \LHI\sigt & \becomes \RLO\sigvZ\\
\RLO\sigv, \LHI\sigv & \becomes \ST\sigv &
\RLO\sigv, \LHI\sigf & \becomes \ST\sigf, \RLO\sigv &
\RLO\sigvZ, \LHI\sigv & \becomes &
\RLO\sigvZ, \LHI\sigf & \becomes \RLO\sigvZ
\end{align*}
\subsection{MAX-SAT}
The problem MAX-SAT consists in, given $k$ SAT-formulae, finding the maximum
number of formulae satisfiable by a single valuation. This problem is NP-hard
and is complete for the class APX --- the class of problems approximable in
polynomial time within a constant factor. The problem MAX-SAT
can be extended to also return the valuation of the variables that satisfies
the greatest number of formulae among the $k$ given ones.
We do not give the corresponding rules for solving MAX-SAT; we just describe
the modules involved. Each of the $k$ formulae given in the input
is compiled by the method used previously, and the resulting
arrangements of signals are placed end-to-end. This
results in a beam of formulae composed of $k$ sub-beams, one for each formula.
The evaluation process is then the same as before.
To compare the numbers of satisfied formulae for each valuation, we use
the binary adder introduced for \#SAT, combined with a module comparing
two binary numbers. The reduce phase follows the same idea as for the other variants,
except that after comparing two-by-two the numbers of satisfied formulae,
the greater number is transmitted to the next level of aggregation for the next comparison.
At the top of the construction, we can then read off the binary representation
of the maximal number of satisfiable formulae.
If we also want the truth assignment that satisfies the greatest number of
formulae, we can easily devise a new module on the basis of the one used for
ENUM-SAT.
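The intended semantics of MAX-SAT as described above can be sketched by brute force (illustrative Python, not the signal-machine rules):

```python
from itertools import product

def max_sat(formulas, n):
    """Return the maximal number of formulae satisfied by a single
    valuation over n variables, together with one witnessing valuation
    (the extension mentioned in the text)."""
    best, witness = -1, None
    for vs in product([False, True], repeat=n):
        k = sum(1 for f in formulas if f(*vs))
        if k > best:
            best, witness = k, vs
    return best, witness

fs = [lambda x, y: x, lambda x, y: not x, lambda x, y: x and y]
print(max_sat(fs, 2))  # (2, (True, True))
```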
\section{Complexities}
\label{sec:complexities}
As mentioned in \Sec{sec:intro}, we implement algorithms for
satisfiability problems on signal machines in order to
investigate the computational power of our abstract geometrical
model of computation and to compare it to others. As we shall see, for such
comparisons to be meaningful, the way complexity is measured must be adapted to
the nature of the computing machine.
\begin{wrapfigure}{l}{.45\textwidth}\vspace*{-7mm}%
\centering
\includegraphics[width=.45\textwidth]{whole2.pdf}%
\caption{The whole diagram.}
\label{fig:whole_diagram}
\vspace*{-6.5mm}%
\end{wrapfigure}
Since signal machines can be regarded as the extension of cellular automata
from discrete to continous time and space, it might seem natural to measure the
time (resp.\ space) complexity of a computation using the height (resp.\ width)
of its space-time diagram. But, in our applications to SAT variants, these are
bounded and independent of the formula: the \emph{Map} phase is bounded by the
fractal, and, by symmetry, so is the \emph{Reduce} phase. Indeed, in general,
by an appropriate scaling of the initial configuration, a finite computation
could be made as small as desired. Thus, height and width are no longer
pertinent measures of complexity.
Instead, we should regard our construction as a massively parallel
computational device transforming inputs into outputs. The input is the
initial configuration at the bottom of the diagram, and the output is the truth
value signal coming out at the top, as seen in~\Fig{fig:whole_diagram} for
formula $\exists x_1\forall x_2\forall x_3\ (x_1\AND\neg x_2)\OR x_3$. The
transformation is performed in parallel by many threads: a thread here is an
ascending path through the diagram from an input to the output, and the
operations executed by the thread are the collisions occurring on this path.
Formally, we view a space-time diagram as a directed acyclic graph of
collisions (vertices) and signals (arcs) oriented according to causality. Time
complexity is then defined as the maximal size of a chain of collisions \Latin{i.e.} the
length of the longest path, and space complexity as the maximal size of an
anti-chain \Latin{i.e.} the size of a maximal set of pairwise unrelated signals.
This model-specific measure of time complexity is called \emph{collision
depth}.
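The chain part of this definition can be made concrete with a small sketch (hypothetical encoding: vertices are collision labels, arcs are causally ordered pairs; computing a maximum antichain is left aside here):

```python
from functools import lru_cache

def collision_depth(collisions, arcs):
    """Collision depth: length of the longest chain of collisions in
    the space-time diagram, viewed as a DAG whose vertices are
    collisions and whose arcs are signals oriented by causality."""
    succ = {c: [] for c in collisions}
    for u, v in arcs:
        succ[u].append(v)

    @lru_cache(maxsize=None)
    def depth(c):
        # longest chain starting at collision c (counting c itself)
        return 1 + max((depth(d) for d in succ[c]), default=0)

    return max(depth(c) for c in collisions)

# A small diamond-shaped diagram: a -> b, a -> c, b -> d, c -> d
print(collision_depth("abcd", [("a","b"), ("a","c"), ("b","d"), ("c","d")]))  # 3
```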
For the present construction, if $s$ is the size of the formula and $n$ the
number of variables, space complexity is exponential: during evaluation, $2^n$
independent computations are executed in parallel, each one involving
approximately $s$ signals, so that the total space complexity is in $O(s \cdot 2^n)$.
Regarding the time complexity: for each subformula, the compilation process
introduces a number of signals at most linear in $s$. Thus the number of
signals in the initial configuration is $O(s^2)$. The primary contribution to
the number of collisions along an ascending path comes, at each of the $n$
levels, from the reflected beam crossing the incoming beam. Thus a thread
involves $O(n \cdot s^2)$ collisions, making the collision depth cubic in the size of
the formula, instead of quadratic for our previous family of machines, which
gives an idea of the price of genericity.
\section{Conclusion}
\label{sec:conclusion}
We showed in this paper that abstract geometrical computation can solve Q-SAT
in bounded space and time by means of a single generic signal machine. This is
achieved through massive parallelism enabled by a fractal construction that we
call the \emph{fractal cloud}. We adapted the Map/Reduce paradigm to this
fractal cloud, and described a modular programming approach making it easy to
assemble generic machines for SAT variants such as \#SAT or MAX-SAT.
As we explained in \Sec{sec:complexities}, time and space are no longer
appropriate measures of complexity for geometrical computations. This leads us
to propose new definitions of complexity, specific to signal machines, and
taking into account the parallelism of the model: time and space complexities are
now defined respectively by the maximal sizes of a chain and an anti-chain,
when the diagram is regarded as a directed acyclic graph. Time complexity thus
defined is called \emph{collision depth}. According to these new definitions,
our construction has exponential space complexity and cubic time complexity.
Although the model is purely theoretical and has no ambition to be physically
realizable, it is a significant and distinguishing aspect of signal machines
that they solve satisfiability problems while adhering to major principles of
modern physics --- finite density and speed of information, causality --- that
are typically not considered by other unconventional models of computation.
They do not, however, respect the quantization hypothesis, nor the uncertainty
principle.
We are now furthering our research along two axes. First, the design and
applications of other fractal structures for modular programming with fractal
parallelism. Second, the investigation of computational complexity classes,
both classical and model-specific for abstract geometrical computation.
\section{Definitions}
\label{sec:def}
Signal machines are an extension of cellular automata from discrete time and
space to continuous time and space. Dimensionless signals/particles move along
the real line and rules describe what happens when they collide.
\paragraph{Signals.}
Each \emph{signal} is an instance of a \emph{meta-signal}. The associated
meta-signal defines its \emph{velocity} and what happens when signals meet.
\Figure{fig:middle} presents a very simple space-time diagram. Time is
increasing upwards and the meta-signals are indicated as labels on the signals.
\begin{figure}[hbt]
\centering
\begin{tabular}[t]{c|c}
\textbf{Meta-signal} & \textbf{Speed}\\\hline
$\ST\sigwall, \ST\sigstart$ & 0\\
$\RLO\sigstart$ & 1\\
$\RHI\sigstart$ & 3\\
$\LHI\sigstart$ & $-3$
\end{tabular}
\qquad
\begin{tikzpicture}[baseline=(current bounding box.north),x=2em,y=2em]
\draw[dotted] (0,0) -- (6,0);
\draw (0,0) -- (0,3) -- node[right] {$\ST\sigwall$} (0,6);
\draw (6,0) -- (6,3) -- node[left] {$\ST\sigwall$} (6,6);
\draw (0,0) -- node [above] {$\RHI\sigstart$} (6,2);
\draw (6,2) -- node [above] {$\LHI\sigstart$} (3,3);
\draw (0,0) -- node [above] {$\RLO\sigstart$} (3,3);
\draw (3,3) -- node[right] {$\ST\sigstart$} (3,6);
\end{tikzpicture}
\qquad
\begin{tabular}[t]{@{}c@{}}%
\begin{tabular}[t]{r@{ $\becomes$ }l}
\multicolumn{2}{c}{\textbf{Collision rules}}\\\hline\noalign{\vskip2mm}%
$\RHI\sigstart, \ST\sigwall$ & $\LHI\sigstart, \ST\sigwall$\\
$\RLO\sigstart, \LHI\sigstart$ & $\ST\sigstart$
\end{tabular}\\\noalign{\vskip5mm}%
\begin{tabular}{r@{\texttt{@}}l}
\multicolumn{2}{c}{\textbf{Initial configuration}}\\\hline\noalign{\vskip2mm}%
\qquad$\SET{\ST\sigwall,\RLO\sigstart,\RHI\sigstart}$ & 0\\
$\SET{\ST\sigwall}$ & 1
\end{tabular}
\end{tabular}
\caption{\label{fig:middle}Geometrical algorithm for computing the middle}
\end{figure}
Generally, we use over-line arrows to indicate the direction and speed of
propagation of a meta-signal. For example, $\RLO\sigstart$ and $\LHI\sigstart$
denote two different meta-signals; the first travels to the right at speed 1,
while the other travels to the left at speed $-3$. $\ST\sigwall$ and
$\ST\sigstart$ are both stationary meta-signals.
\paragraph{Collision rules.}
When a set of signals collide, they are replaced by a new set of signals
according to a matching collision rule. A rule has the form:
\[
\sigma_1,\ldots,\sigma_n \rightarrow \sigma'_1,\ldots,\sigma'_p
\]
where all $\sigma_i$ are meta-signals of distinct speeds as well as $\sigma'_j$
(two signals cannot collide if they have the same speed and outcoming signals
must have different speeds). A rule matches a set of colliding signals if its
left-hand side is equal to the set of their meta-signals. By default, if there
is no exactly matching rule for a collision, the behavior is defined to
regenerate exactly the same meta-signals. In such a case, the collision is
called \emph{blank}. Collision rules can be deduced from space-time diagrams
as on \Fig{fig:middle}. They are also listed on the right of this figure.
\paragraph{Signal machine.}
A signal machine is defined by a set of meta-signals, a set of collision rules,
and an initial configuration, i.e.\ a set of particles placed on the real
line. The evolution of a signal machine can be represented geometrically as a
\emph{space-time diagram}: space is always represented horizontally, and time
vertically, growing upwards. The geometrical algorithm displayed in
\Fig{fig:middle} computes the middle: the new $\ST\sigstart$ is located exactly
halfway between the initial two $\ST\sigwall$.
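The geometry behind this algorithm is elementary: with the fast signal three times faster than the slow one, the reflected fast signal meets the slow one exactly halfway between the two walls. A small numerical check (illustrative Python, with wall positions and speeds as parameters):

```python
def middle(x_left, x_right, v_slow=1.0, v_fast=3.0):
    """Meeting point of the slow signal started at x_left (speed
    v_slow) and the fast signal (speed v_fast) reflected by the wall
    at x_right. With v_fast = 3 * v_slow, this is the exact middle."""
    width = x_right - x_left
    t_bounce = width / v_fast                  # fast signal reaches the wall
    # Solve x_left + v_slow * t == x_right - v_fast * (t - t_bounce).
    t = (x_right + v_fast * t_bounce - x_left) / (v_slow + v_fast)
    return x_left + v_slow * t

print(middle(0.0, 1.0))  # 0.5
```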
\section{Computing in the fractal cloud}
\label{sec:fractal-cloud}
\paragraph{Constructing the fractal:}
the fractal structure that interests us is based on the simple idea of
computing the middle illustrated in Figure~\ref{fig:middle}. We just
indefinitely repeat this geometrical construction: once space has been halved,
we recursively halve the two halves, and so on.
\begin{figure}[hbt]
\vspace*{-7em}%
\centering
\subfigure[Constructing the fractal cloud\label{fig:fractal:basic}]{%
\includegraphics[width=0.4\linewidth]{tree.pdf}}
\quad
\subfigure[Distributing a computation\label{fig:fractal:distrib:bug}]{%
\includegraphics[width=0.4\linewidth]{Fig_New/fig_K_2011_ICALP_corridors.pdf}}
\caption{Computing in the fractal cloud}
\end{figure}
This is illustrated in Figure~\ref{fig:fractal:basic}, and can be generated by
the following rules:\footnote{For brevity, we will always omit the rules which can
be obtained from the others by symmetry. We refer to \Appendix{sec:appendixA} for more details.}
\begin{align*}
\ST\sigwall,\LHI\sigstart & \becomes \ST\sigwall,\RHI\sigstart &
\RLO\sigstart, \LHI\sigstart & \becomes
\LHI\sigstart, \LLO\sigstart, \ST\sigstart, \RLO\sigstart, \RHI\sigstart
\end{align*}
using $\AT{\ST\sigwall,\RLO\sigstart,\RHI\sigstart}0$ $\AT{\ST\sigwall}1$ as
the initial configuration. This produces a stack of levels: each level is half
the height of the previous one. As a consequence, the full fractal has width 1
and height 1.
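The height claim is just a geometric series: assuming the first level has height $\frac{1}{2}$ and each subsequent level half the previous one,

```latex
\[
\text{height} \;=\; \sum_{k=1}^{\infty} \frac{1}{2^{k}}
\;=\; \frac{1/2}{1-1/2} \;=\; 1 .
\]
```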
\paragraph{Distributing a computation:}
the point of the fractal is to recursively halve space. At each point where
space is halved, we position a stationary signal (a vertical line in the
space-time diagram). We can use this structure, so that, at each halving point
(stationary signal), we split the computation in two: send it to the left with
half the data, and also to the right with the other half of the data.
The intuition is that the computation is represented by a \emph{beam} of
signals, and that stationary signals split this beam in two, resulting in one
beam that goes through, and one beam that is reflected.
Unfortunately, a beam of constant width will not do: eventually it becomes too
large for the height of the level. This can be clearly seen in
Figure~\ref{fig:fractal:distrib:bug}.
\paragraph{The lens device:}
the lens device narrows the beam by a factor of 2 at each level, thus
automatically adjusting it to fit the fractal. It is implemented by the
following meta-rule: \textit{unless otherwise specified, any signal $\RLO\sigma$
is accelerated by $\LHI\sigstart$ and decelerated and split by any stationary
signal $\ST\sigs$.}
\begin{figure}[htb]
\centering
\subfigure[narrows by 2]{%
\begin{tikzpicture}[x=2em,y=2em]
\draw[dotted] (-9,0) -- (1,0);
\draw[->] (0,-1) -- (0,5);
\draw (-9,-1) -- node[above] {$\overrightarrow\sigma$} (-6,2);
\draw (-6,2) -- node[above] {$\Overrightarrow\sigma$} (0,4);
\draw[->] (0,4) -- node[above] {$\overleftarrow\sigma$} (-1,5);
\draw[->] (0,4) -- node[above] {$\overrightarrow\sigma$} (1,5);
\draw[->] (0,0) -- node[above] {$\LHI\sigstart$} (-6,2) -- (-9,3);
\draw[dotted] (-6,0) -- (-6,2);
\draw[dotted] (-6,2) -- (0,2);
\draw[<->,gray] (-8,-0.2) -- node[below] {$t$} (-6,-0.2);
\draw[<->,gray] (-6,-0.2) -- node[below] {$3t$} (0,-0.2);
\draw[<->,gray] (0.2,0) -- node[right] {$t$} (0.2,2);
\draw[<->,gray] (0.2,2) -- node[right] {$t$} (0.2,4);
\end{tikzpicture}}
\quad
\subfigure[effect on propagation]{%
\mbox{\includegraphics[width=0.4\linewidth]{Fig_New/fig_K_2011_ICALP_corridors_lenses.pdf}}}
\caption{\label{fig.lens}The lens device}
\end{figure}
\paragraph{Generic computing over the fractal cloud:}
with the lens device in effect, generic computations can take place over the
fractal by propagating a beam from an initial configuration. We write
$\block[\RLO{\sigma_n}\ldots\RLO{\sigma_1}]{spawn}$ for an initial
configuration with a sequence $\RLO{\sigma_n}\ldots\RLO{\sigma_1}$ of signals.
Geometrically, it can be seen easily that, in order for the beam to fit through
the first level, the sequence $\RLO{\sigma_n}\ldots\RLO{\sigma_1}$ must be
placed in the interval $(-\frac 1 4,0)$.
\paragraph{Stopping the fractal:}
For finite computations, we don't need the entire fractal. The
$\block{until}[n]$ module can be inserted in the initial configuration to cut
the fractal after $n$ levels have been generated.\hfill
$\block{until}[n]=\RLO\sigstop{\RLO\sigstopaux}^{n-1}$
\begin{align*}
\RHI\sigstopaux, \LLO\sigstart & \becomes \LLO\sigstartoff,\RHI\sigstopaux &
\RHI\sigstopaux, \ST\sigstart & \becomes \ST\sigstartoff &
\RHI\sigstopaux, \ST\sigstartoff & \becomes \LLO\sigstopaux, \ST\sigstartoff, \RLO\sigstopaux\\
\RHI\sigstop, \LLO\sigstart & \becomes \LLO\sigstartoff, \RHI\sigstop &
\RHI\sigstop, \LLO\sigstartoff & \becomes \LLO\sigstart, \RHI\sigstop &
\RHI\sigstop, \ST\sigstartoff & \becomes \LLO\sigstop, \ST\sigstart, \RLO\sigstop\\
\RHI\sigstop, \ST\sigstart & \becomes \ST\sigstart, \RHI\sigstop &
\RHI\sigstop, \RLO\sigstart & \becomes \RLO\sigstartoff &
\RHI\sigstart, \LLO\sigstartoff & \becomes
\end{align*}
The subbeam ${\RLO\sigstopaux}^{n-1}$ provides the inhibitors for $\RLO\sigstop$. One
inhibitor is consumed at each level, after which $\RHI\sigstop$ takes effect
and turns $\LLO\sigstart$ into $\LLO\sigstartoff$, then crosses $\ST\sigstart$
and turns $\RLO\sigstart$ on the other side into $\RLO\sigstartoff$. Finally
the annihilation rule $\RHI\sigstart, \LLO\sigstartoff \becomes \emptyset$
brings the fractal to a stop. Thus, a computation
$\block[\RLO{\sigma_n}\ldots\RLO{\sigma_1}{\block{until}[n]}]{spawn}$ uses only
$n$ levels. It can be seen geometrically that, for the collision of
$\RHI\sigstop$ with $\RLO\sigstart$ to occur before the latter meets with
$\LHI\sigstart$, $\RLO\sigstop$ must initially be placed in $(-\frac 1 6,0)$.
\section{Introduction}
\label{sec:intro}
Since their first formulations in the seventies, problems of Boolean
satisfiability have been studied extensively in the field of computational
complexity. Indeed, the most important complexity classes can be characterized
--- in terms of reducibility and completeness --- by such problems \Latin{e.g.} SAT for
NP \citep{cook71} and Q-SAT for PSPACE \citep{stockmeyer+meyer73}. As such, it
is a natural challenge to consider how to solve these problems when
investigating new computing machinery (quantum, NDA, membrane, hyperbolic
spaces\dots) \citep{paun01,margenstern+morita01,alhazov+jimenez07mcu}.
This is the line of investigation that we have been following with \emph{signal
machines} \citep{durand-lose05cie}, an abstract and geometrical model of
computation. We showed previously how such machines were able to solve SAT
\citep{duchier+durand-lose+senot10isaac} and Q-SAT
\citep{duchier+durand-lose+senot10scw} in bounded space and time. But in both
cases, the machines were instance-specific \Latin{i.e.} depended on the formula whose
satisfiability was to be determined. The primary contribution of the present
paper is to exhibit a particular \emph{generic} signal machine for the same task: it takes
the instance formula as an input encoded (in polynomial-time by a Turing
machine) in an initial configuration. We further improve our previous results
by describing a \emph{modular} approach that allows us to easily construct
generic machines for other variants of SAT, such as \#SAT or MAX-SAT.
The model of signal machines, called \emph{abstract geometrical computation},
involves two types of fundamental objects: dimensionless \emph{particles} and
\emph{collision rules}. We use here one-dimensional machines: the space is the
Euclidean real line, on which the particles move with a constant speed.
Collision rules describe what happens when several particles collide.
By representing continuous time on a vertical axis, we obtain a
two-dimensional \emph{space-time diagram}, in which the motion of the particles
is materialized by line segments called \emph{signals}.
Signal machines can simulate Turing machines, and are thus Turing-universal
\citep{durand-lose05cie}. They are also capable of analog computation by using
the continuity of space and time to simulate analog models such as BSS's one
\citep{durand-lose08cie,blum+shub+smale89} or computable analysis
\citep{durand-lose09uc}. Other geometrical models of computation exist:
colored universes \citep{jacopini+sontacchi90}, geometric machines
\citep{huckenbeck89tcs}, piece-wise constant derivative systems
\citep{bournez97icalp}, optical machines \citep{naughton+woods01mcu}\dots
All these models, including signal machines, belong to a larger class of models
of computation, called \emph{unconventional}, which are more powerful than
classical ones (Turing machines, RAM, $\lambda$-calculus \ldots). Among all
these abstract models, the model of signal machines distinguishes itself by
realistic assumptions respecting the major principles of physics --- finite
density of information, respect of causality and bounded speed of information
--- which are, in general, not respected all at the same time by other
models. Nevertheless, signal machines remain an abstract model, with no a
priori ambition to be physically realizable, and are studied for theoretical
issues of computer science.
As signal machines take their origins in the world of cellular automata (as
illustrated in \Fig{fig:ca}), they can also be viewed as a massively parallel
computational device. This is the approach proposed here: we put in place a
fractal compute grid, then use the Map/Reduce paradigm to distribute
the computations, then aggregate the results.
\vspace{-0.4cm}
\begin{figure}[hbt]
\centering
\small\SetUnitlength{1.7em}
\begin{tikzpicture}[x=\unitlength,y=\unitlength,thick]
\DiscreteSigUn{DarkGreen}{0,0}{3.75}
\DiscreteSigUn{DarkGreen}{3,0}{.75}
\DiscreteSigMoinsUn{Red}{2,0}{.75}
\DiscreteSigMoinsUn{Red}{5,0}{2.5}
\DiscreteSigZero{Brown}{1,1}{0}
\DiscreteSigZero{Brown}{2.5,2.5}{0}
\DiscreteSigZero{Blue}{1,1.25}{3.75}
\DiscreteSigZero{Blue}{2.5,2.75}{2.25}
\DiscreteSigZero{Blue}{4,0}{3.75}
\DiscreteSigZero{Orange}{4,1}{0}
\DiscreteSigZero{Blue}{4,1.25}{2.5}
\draw[step=.25,Grey,very thin] (0,0) grid (5.25,5.25);
\draw[->,Black] (0,0) -- (0,5.75);
\draw[<->,Black] (-1,0) -- (5.75,0);
\node[rotate=90,Black] at (-.5,2.5) {Time (\NaturalSet)};
\node[Black] at (2.5,-.5) {Space (\IntegerSet)};
\DiagETBasic{8,0}{%
\node[Black] at (2.5,-.5) {Space (\RealSet)};
}{ (\RealSet\!$^+$)}
\end{tikzpicture}
\caption{From cellular automata to signal machines.}
\label{fig:ca}
\end{figure}
\vspace{-0.2cm}
The Map/Reduce pattern, pioneered by Lisp, is now standard in functional
programming: a function is applied to many inputs (map), then the results are
aggregated (reduce). Google extended this pattern to allow its distributed
computation over a grid of possibly a thousand nodes \citep{dean04}. The idea
is to partition the input (petabytes of data) into chunks, and to process these
chunks in parallel on the available nodes.
When solving combinatorial problems, we are also faced with massive inputs;
namely, the exponential number of candidate solutions. Our approach is to
distribute the candidates, and thus the computation, over an unbounded fractal
grid. In this way, we adapt the map/reduce pattern for use over a grid with
fractal geometry.
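In conventional functional-programming terms, the pattern being transposed is simply a map followed by a reduce (a minimal Python sketch, with integers standing in for the $2^n$ candidate valuations):

```python
from functools import reduce

# The 2**n candidate valuations, here n = 3, encoded as integers.
candidates = range(2 ** 3)
# Map: evaluate the formula on every candidate (here: "bit 0 is set").
mapped = [v & 1 == 1 for v in candidates]
# Reduce: aggregate the partial results; 'or' corresponds to an
# existential query, 'and' to a universal one.
print(reduce(lambda a, b: a or b, mapped))  # True
```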
Our contribution in this paper is threefold: first, we show how Q-SAT can be
solved in bounded space and time using a \emph{generic machine}, where the
input (the formula) is simply compiled into an initial configuration. This
improves on our previous result where the machine itself depended on the
formula. Second, we propose the first architecture for fractally distributed
computing (the \emph{fractal cloud}) and give a way to automatically shrink the data into this structure
by means of a \emph{lens device}. Third, we show how generic machines for many
variants of SAT can be assembled by composing independent modules, which naturally
emerged from the generalization of our previous family of machines into a single machine solving Q-SAT.
Each module can be programmed and understood independently.
The paper is structured as follows.
Signal machines are introduced in \Section{sec:def}.
\Section{sec:fractal-cloud} presents the fractal tree structure used to achieve
massive parallelism and how general computations can be inserted in the tree.
\Section{sec:qsat} details this implementation for a Q-SAT solver and \Section{sec:sat-variants}
explains how some variants of satisfiability problems can be solved with the
same approach. Complexities are discussed in \Section{sec:complexities} and
conclusions and remarks are gathered in \Section{sec:conclusion}.
\section{A modular Q-SAT solver}
\label{sec:qsat}
Q-SAT is the satisfiability problem for quantified Boolean formulae (QBF). A
QBF is a closed formula of the form: \[\phi~=~Q_1x_1Q_2x_2\ldots
Q_nx_n~~~\psi(x_1,x_2,\ldots,x_n)\] where $Q_i\in\{\exists,\forall\}$ and
$\psi$ is a quantifier-free formula of propositional logic. A recursive
algorithm for solving Q-SAT is:
\begin{align*}
\qsat(\exists x\ \phi) & = \qsat(\phi[x\leftarrow\qfalse]) \vee
\qsat(\phi[x\leftarrow\qtrue])\\
\qsat(\forall x\ \phi) & = \qsat(\phi[x\leftarrow\qfalse]) \wedge
\qsat(\phi[x\leftarrow\qtrue])\\
\qsat(\beta) & = \qeval(\beta)
\end{align*}
where $\beta$ is a ground Boolean formula. This is exactly the structure of
our construction: each quantified variable splits the computation in 2,
$\qsat(\phi[x\leftarrow\qfalse])$ is sent to the left and
$\qsat(\phi[x\leftarrow\qtrue])$ to the right, and subsequently the recursively
computed results that come back are combined (with $\vee$ for $\exists$ and
$\wedge$ for $\forall$) to yield the result for the quantified formula. This
process can be viewed as an instance of \emph{Map/Reduce}, where the \emph{Map}
phase distributes the combinatorial exploration of all possible valuations
across space using a binary decision tree, and the \emph{Reduce} phase collects
the results and aggregates them using quantifier-appropriate Boolean
operations.
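The recursive algorithm can be transcribed directly (a Python sketch with illustrative encodings: quantifiers as strings, the quantifier-free matrix as a predicate):

```python
def qsat(quants, phi, env=()):
    """Direct transcription of the recursive Q-SAT algorithm: each
    quantifier splits the computation in two (the Map phase), and the
    two results are combined with `or` for exists and `and` for
    forall (the Reduce phase)."""
    if not quants:                        # ground formula: just evaluate
        return phi(*env)
    q, rest = quants[0], quants[1:]
    lo = qsat(rest, phi, env + (False,))  # x <- false, sent to the left
    hi = qsat(rest, phi, env + (True,))   # x <- true, sent to the right
    return (lo or hi) if q == "exists" else (lo and hi)

# Running example: exists x1 forall x2 forall x3 . (x1 and not x2) or x3
phi = lambda x1, x2, x3: (x1 and not x2) or x3
print(qsat(("exists", "forall", "forall"), phi))  # False
```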
Our Q-SAT solver is modularly composed as follows:
\[
\hss\block[{\block{reduce:qsat}[Q_1x_1\ldots Q_nx_n]
\block{map:sat}[\psi]\block{decide}[n]\block{until}[n+1]}]{spawn}\hss
\]
We describe the modules \texttt{decide}, \texttt{map:sat}, and
\texttt{reduce:qsat} below.
\subsection{Setting up the decision tree}
For a QBF with $n$ variables, we need 1 level per variable, and then at level
$n+1$ we have a ground propositional formula that needs to be evaluated. Thus,
the first module we insert is $\block{until}[n+1]$ to create $n+1$ levels. We
then insert $\block{decide}[n]$ because we want to use the first $n$ levels as
decision points for each variable.\hfill
$\block{decide}[n] = {\RLO\sigstartaux}^n$
\begin{align*}
\RHI\sigstartaux, \ST\sigstart & \becomes \ST\sigx &
\RHI\sigstartaux, \ST\sigx & \becomes \LLO\sigstartaux, \ST\sigx, \RLO\sigstartaux
\end{align*}
\subsection{Compiling the formula}
The intuition is that we want to compile the formula into a form of inverse
polish notation to obtain executable code using postfix operators. At level
$n+1$ all variables have been decided, and have become $\RLO\sigt$ or
$\RLO\sigf$. The ground formula, regarded as an expression tree, can be
executed bottom up to compute its truth value: the resulting signal for a
subexpression is sent to interact with its parent operator.
The formula is represented by a beam of signals: each subformula is represented
by a (contiguous) subbeam. A subformula that arrives at level $n+1$ starts
evaluating when it hits the stationary $\ST\sigstart$. When its truth value
has been computed, it is reflected so that it may eventually collide with the
incoming signal of its parent connective.
\paragraph{Compilation.}
For binary connectives: one argument arrives first, it is evaluated, and its
truth value is reflected toward the incoming connective; but, in order to reach
it, it must cross the incoming beam for the other argument and not interact
with the connectives contained therein. For this reason, with each
subexpression, we associate a beam ${\RLO\siggamma}^k$ of inhibitors that
prevents its resulting truth value from interacting with the first $k$ connectives
that it crosses. We write $\CC{\psi}$ for the compilation of $\psi$
into a contribution to the initial configuration, and $\NCON{\psi}$ for the
number of occurrences of connectives in $\psi$.
\begin{align*}
\CC{\psi} &= \CC{\psi}^0\\
\CC{\psi_1\AND\psi_2}^k &= \RLO\sigand\ {\RLO\siggamma}^k\ \CC{\psi_1}^0\ \CC{\psi_2}^{\NCON{\psi_1}}\\
\CC{\psi_1\OR\psi_2}^k &= \RLO\sigor\ {\RLO\siggamma}^k\ \CC{\psi_1}^0\ \CC{\psi_2}^{\NCON{\psi_1}}\\
\CC{\NOT\psi}^k &= \RLO\signot\ {\RLO\siggamma}^k\ \CC{\psi}\\
\CC{x_i}^k & = \block{var}[x_i]\ {\RLO\siggamma}^k
\end{align*}
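The compilation scheme transcribes almost literally into code. The sketch below (illustrative Python, with our own names `ncon` and `compile_formula`; variable signals are emitted without their $\sigxdelay$ delay stacks for brevity) produces the beam as a list of signal names:

```python
def ncon(psi):
    """NCON: number of occurrences of connectives in a formula tree."""
    op = psi[0]
    if op == 'var':
        return 0
    if op == 'not':
        return 1 + ncon(psi[1])
    return 1 + ncon(psi[1]) + ncon(psi[2])       # 'and' / 'or'

def compile_formula(psi, k=0):
    """C[[psi]]^k: the beam for psi, with k inhibitors for its result."""
    op = psi[0]
    if op == 'var':
        return ['x%d' % psi[1]] + ['gamma'] * k  # delay stack omitted here
    if op == 'not':
        return ['not'] + ['gamma'] * k + compile_formula(psi[1])
    left, right = psi[1], psi[2]
    return ([op] + ['gamma'] * k
            + compile_formula(left)
            + compile_formula(right, ncon(left)))

# (x1 & ~x2) | x3, as in the figure
phi = ('or', ('and', ('var', 1), ('not', ('var', 2))), ('var', 3))
beam = compile_formula(phi)
# -> ['or', 'and', 'x1', 'not', 'x2', 'x3', 'gamma', 'gamma']
```

Up to the omitted delay signals, this reproduces the formula part of the initial configuration shown in the figure.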
\paragraph{Variables.}
We want variable $x_i$ to be decided at level $i$. This can be achieved using
$i-1$ delay signals.\hfill
$\block{var}[x_i] = \RLO\sigx{\RLO\sigxdelay}^{i-1}$
\begin{align*}
\RHI\sigxdelay, \ST\sigx & \becomes \ST\sigxoff &
\RHI\sigxdelay, \ST\sigxoff & \becomes \LLO\sigxdelay, \ST\sigxoff, \RLO\sigxdelay \\
\RHI\sigx, \ST\sigx & \becomes \LLO\sigf, \ST\sigx, \RLO\sigt &
\RHI\sigx, \ST\sigxoff & \becomes \LLO\sigx, \ST\sigx, \RLO\sigx
\end{align*}
For variable $x_i$, the idea is to protect $\RHI\sigx$ from being split into
$\LLO\sigf$ and $\RLO\sigt$ until it reaches the $i^{th}$ level. This is
achieved with a stack of $i-1$ signals $\RHI\sigxdelay$: at each level, the
first $\RHI\sigxdelay$ turns the stationary signal $\sigx$ into $\sigxoff$ (the
non-assigning version of $\ST\sigx$), and dies. The following $\RHI\sigxdelay$
are simply split, and so is $\RHI\sigx$ but it additionally turns $\ST\sigxoff$
back into $\ST\sigx$. After the first $i-1$ levels, all the $\RLO\sigxdelay$
have been consumed so that $\RHI\sigx$ finally collides with $\sigx$ and splits
into $\LLO\sigf$ going left and $\RLO\sigt$ going right.
\paragraph{Evaluation.}
When hitting $\ST\sigstart$ at level $n+1$, $\RHI\sigt$ is reflected as
$\LLO\sigT$, and $\RHI\sigf$ as $\LLO\sigF$: these are their \emph{activated}
versions which can interact with incoming connectives to compute the truth
value of the formula according to the rules below (for $\AND$; other
connectives are similar, \emph{cf.}\ \Appendix{sec:appendixA}). See Fig.~\ref{fig:eval} for an example.
\begin{align*}
\RHI\sigt, \ST\sigstart & \becomes \LLO\sigT, \ST\sigstart &
\RHI\sigf, \ST\sigstart & \becomes \LLO\sigF, \ST\sigstart &
\RHI\siggamma, \ST\sigstart & \becomes \LLO\siggammaP, \ST\sigstart\\
\RHI\sigand, \LLO\sigT & \becomes \RHI\sigandP &
\RHI\sigfP, \LLO\sigT & \becomes \RHI\sigf &
\RHI\sigandP, \LLO\sigT & \becomes \RHI\sigt\\
\RHI\sigand, \LLO\sigF & \becomes \RHI\sigfP &
\RHI\sigfP, \LLO\sigF & \becomes \RHI\sigf &
\RHI\sigandP, \LLO\sigF & \becomes \RHI\sigf\\
\RHI\sigand, \LLO\siggammaP & \becomes \RHI\sigandZ &
\RHI\sigandZ, \LLO\sigT & \becomes \LLO\sigT, \RHI\sigand &
\RHI\sigandZ, \LLO\sigF & \becomes \LLO\sigF, \RHI\sigand
\end{align*}
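Sequentially, this bottom-up computation is just postfix evaluation with a stack: since the beam moves rightward, its rightmost signals reach level $n+1$ first, so scanning the compiled beam right-to-left reproduces the order in which truth values become available. A hypothetical Python rendition (the name `evaluate` is ours; inhibitors are simply skipped, since a sequential scan needs no crossing protection):

```python
def evaluate(beam, assignment):
    """Evaluate a compiled beam right-to-left with a stack, mimicking
    the order in which signals hit the stationary start signal."""
    stack = []
    for s in reversed(beam):
        if s == 'gamma':
            continue                          # inhibitors carry no truth value
        if s.startswith('x'):
            stack.append(assignment[s])       # decided variable: t or f
        elif s == 'not':
            stack.append(not stack.pop())
        else:                                 # 'and' / 'or'
            a, b = stack.pop(), stack.pop()
            stack.append((a and b) if s == 'and' else (a or b))
    return stack.pop()

beam = ['or', 'and', 'x1', 'not', 'x2', 'x3', 'gamma', 'gamma']
evaluate(beam, {'x1': True, 'x2': True, 'x3': True})   # -> True, as in the figure
```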
\begin{figure}[tb]\centering
\subfigure[\label{fig:eval}Evaluation
case~$x_1=x_2=x_3=\sigt$]{%
\includegraphics[width=0.4\linewidth]{fig_K_2011_ICALP_eg_evaluate_cropped.pdf}}
\hfill
\subfigure[\label{fig:aggreg}Aggregation]{%
\begin{minipage}[b]{0.58\linewidth}
\raggedleft
\mbox{$\block[\RLO\sigcollect\RLO\sigforall\RLO\sigforall\RLO\sigexists
\RLO\sigs\RLO\sigor\RLO\sigand\RLO\sigx\RLO\signot\RLO\sigx
\RLO\sigxdelay\RLO\sigx\RLO\sigxdelay\RLO\sigxdelay
\RLO\siggamma\RLO\siggamma
\RLO\sigstartaux\RLO\sigstartaux\RLO\sigstartaux
\RLO\sigstop\RLO\sigstopaux\RLO\sigstopaux\RLO\sigstopaux]{spawn}
$\hss}
\centerline{\textbf{Initial configuration}}
\vspace*{5mm}
\begin{align*}
\RHI\sigcollect, \ST\sigt & \becomes \LLO\sigt &
\RHI\sigcollect, \ST\sigf & \becomes \LLO\sigf\\[2mm]
\RLO\sigt, \ST\sigexistsL, \LLO\sigt & \becomes \LLO\sigt &
\RLO\sigt, \ST\sigforallL, \LLO\sigt & \becomes \LLO\sigt \\
\RLO\sigt, \ST\sigexistsL, \LLO\sigf & \becomes \LLO\sigt &
\RLO\sigt, \ST\sigforallL, \LLO\sigf & \becomes \LLO\sigf \\
\RLO\sigf, \ST\sigexistsL, \LLO\sigt & \becomes \LLO\sigt &
\RLO\sigf, \ST\sigforallL, \LLO\sigt & \becomes \LLO\sigf \\
\RLO\sigf, \ST\sigexistsL, \LLO\sigf & \becomes \LLO\sigf &
\RLO\sigf, \ST\sigforallL, \LLO\sigf & \becomes \LLO\sigf
\end{align*}
\includegraphics[width=\linewidth]{fig_K_2011_ICALP_eg_collecte.pdf}
\end{minipage}}
\caption{Example $\exists x_1\forall x_2\forall
x_3\ (x_1\AND \neg x_2)\OR x_3$}
\end{figure}
\paragraph{Storing the result.}
In order to make the result easily exploitable by the \emph{Reduce} phase, we
now store it as the stationary signal at level $n+1$; it replaces
$\ST\sigstart$.
\hspace*{\fill}$\block{store} = \RLO\sigs$
\begin{align*}
\RHI\sigs, \LLO\sigT & \becomes \RHI\sigT &
\RHI\sigs, \LLO\sigF & \becomes \RHI\sigF &
\RHI\sigT, \ST\sigstart & \becomes \ST\sigt &
\RHI\sigF, \ST\sigstart & \becomes \ST\sigf
\end{align*}
The complete \emph{Map} phase is implemented by:\hfill
$\block{map:sat}[\psi] = \block{store}\CC{\psi}$
\subsection{Aggregating the results}
As explained earlier, the results for an existentially (resp.\ universally)
quantified variable must be combined using $\OR$ (resp.\ $\AND$).
\paragraph{Setting up the quantifiers.}
We turn the decision points of the first $n$ levels into quantifier signals.
Moreover, at each level, we must also take note of the direction in which the
aggregated result must be sent. Thus $\ST\sigexistsL$ represents an
existential quantifier that must send its result to the left.\EOL
\hspace*{\fill}$\block{reduce:qsat:init}[Q_1x_1 \cdots Q_nx_n] = \RLO{Q_n}\ldots\RLO{Q_1}$
\begin{align*}
\ST\sigx, \LHI\sigexists & \becomes \ST\sigexistsR &
\RHI\sigexists, \ST\sigx & \becomes \ST\sigexistsL &
\ST\sigx, \LHI\sigforall & \becomes \ST\sigforallR &
\RHI\sigforall, \ST\sigx & \becomes \ST\sigforallL
\end{align*}
\paragraph{Aggregating the results.}
Actual aggregation is initiated by $\RLO\sigcollect$ and then proceeds
according to the rules given in Fig.~\ref{fig:aggreg}.
\hspace*{\fill}$\block{reduce:qsat:exec}=\RLO\sigcollect$
\paragraph{The complete \emph{Reduce} phase} is implemented by
\hspace*{\fill}$\block{reduce:qsat}[Q_1x_1 \cdots Q_nx_n] =
\block{reduce:qsat:exec}\block{reduce:qsat:init}[Q_1x_1 \cdots Q_nx_n]$
\section{Machines for SAT variants}
\label{sec:sat-variants}
Similar machines for variants of SAT can be obtained easily, typically by using
different modules for the \emph{Reduce} phase.
\paragraph{ENUM-SAT.}
returning all the satisfying assignments for a propositional formula $\psi$ can
be achieved easily by storing them as stationary beams.\EOL
\hspace*{\fill}$\block{reduce:allsat}[n] =
\RLO\sigv\block{var}[x_1]\ldots\block{var}[x_n]\RLO\sigv$
\begin{align*}
\RHI\sigv, \ST\sigt & \becomes \LLO\sigvO, \ST\sigv &
\RLO\sigvO, \LHI\sigt & \becomes \ST\sigt, \RLO\sigvO &
\RHI\sigv, \ST\sigf & \becomes \LLO\sigvZ &
\RLO\sigvZ, \LHI\sigt & \becomes \RLO\sigvZ\\
\RLO\sigvO, \LHI\sigv & \becomes \ST\sigv &
\RLO\sigvO, \LHI\sigf & \becomes \ST\sigf, \RLO\sigvO &
\RLO\sigvZ, \LHI\sigv & \becomes &
\RLO\sigvZ, \LHI\sigf & \becomes \RLO\sigvZ
\end{align*}
\paragraph{\#SAT.}
counting the number of satisfying assignments for $\psi$ can be achieved using
signals for a binary adder. For lack of space, we cannot exhibit the rules
here, but they can be found in \Appendix{sec:appendixB}.
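The ENUM-SAT and \#SAT constructions can be cross-checked against a brute-force enumeration. The following illustrative Python (the helper `enum_sat` is ours, not part of the signal-machine construction) returns the list of models; the \#SAT count is then just its length.

```python
from itertools import product

def enum_sat(n, phi):
    """All satisfying assignments of an n-variable formula phi(b1, ..., bn)."""
    return [bits for bits in product([False, True], repeat=n) if phi(*bits)]

# (x1 & ~x2) | x3 has 5 satisfying assignments out of 8
models = enum_sat(3, lambda x1, x2, x3: (x1 and not x2) or x3)
count = len(models)   # -> 5
```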
\paragraph{MAX-SAT.}
finding the maximum number of \emph{clauses} that can be satisfied by an
assignment. Here we must count the number of satisfied clauses rather than the
number of satisfying assignments, and then stack a module for computing the max
of two binary numbers.
\section{Introduction\label{sec:intro}}
The electromagnetic decays of quarkonia through a single virtual photon
have played an important r\^ole in the experimental and theoretical
development of quarkonium physics. On the experimental side, the decays
of charge-conjugation-odd quarkonium states to a lepton pair provide
unique signals for the detection of those states. On the theoretical
side, the decays of ${}^3S_1$ quarkonium states to a lepton pair
allow one to determine one of the fundamental parameters of
the heavy-quark--antiquark ($Q\bar Q$) bound state, namely, the square
of the wave function at the origin. (See, for example,
Ref.~\cite{Bodwin:2007fz}.) The square of the wave function at the
origin enters into many calculations of quarkonium decay and production
rates.
The expression for the ${}^3S_1$ quarkonium decay rate into a lepton pair
at leading order in the QCD coupling $\alpha_s$ and at leading
order in $v$, the $Q$ or $\bar Q$ velocity in the quarkonium rest frame,
has been known since the first discovery of quarkonium and is based on
the Van Royen--Weisskopf formula \cite{VanRoyen:1967nq} of quantum
electrodynamics. The order-$\alpha_s$ corrections to this formula at
leading order in $v$ were calculated
in Refs.~\cite{Barbieri:1975ki,Celmaster:1978yz}.
The order-$\alpha_s^0$ relativistic corrections
at relative orders $v^2$ and $v^4$ were calculated in
Refs.~\cite{Bodwin:1994jh,Bodwin:2002hg}, respectively.
Order-$\alpha_s^2$ corrections to the decay rate were calculated in
Refs.~\cite{Czarnecki:1997vz,Beneke:1997jm}. The correction to
the electromagnetic current of a quarkonium at relative order
$\alpha_s v^2$ was calculated in Ref.~\cite{Luke:1997ys}.
In this paper, we calculate relativistic corrections to the quarkonium
electromagnetic current at order $\alpha_s$. We carry out our
calculation in the context of nonrelativistic QCD (NRQCD)
\cite{Bodwin:1994jh}. We obtain closed-form expressions whose Taylor-series
expansions in $v$ give the short-distance coefficients for the
NRQCD $Q\bar Q$ operators, of all orders in $v$, that match to the
electromagnetic current. We do not consider $Q\bar Q$ operators that
contain gauge fields. Therefore, our operators are not gauge
invariant, and we evaluate their matrix elements in the Coulomb gauge.
In the Coulomb gauge, $Q\bar Q$ operators involving gauge fields first
contribute at relative order $v^4$. Our results confirm the calculation
at relative order $\alpha_s v^2$ in Ref.~\cite{Luke:1997ys}. Since the
corrections at relative order $\alpha_s v^2$ are not very significant
at the current level of precision of calculations of $^{3}S_1$ quarkonium
electromagnetic decay rates, we do not expect the order-$\alpha_s$
corrections at still higher orders in $v$ to be important numerically.
We present our calculation primarily as a demonstration of a new method
for computing the one-loop NRQCD contribution that enters into the
matching of NRQCD to full QCD. The direct computation of one-loop NRQCD
expressions to all orders in $v$ would be a formidable task, in that it
would require knowledge of the NRQCD interactions and
electromagnetic-current operators, their Born-level short-distance
coefficients, and their Feynman rules to all orders in $v$. Instead of
following the direct NRQCD approach, we note that NRQCD through infinite
order in $v$ is equivalent to QCD, but with the interactions rearranged
in an expansion in powers of $v$. Therefore, we can obtain the one-loop
NRQCD contribution by starting from full-QCD expressions and expanding
integrands in powers of momenta divided by the heavy-quark mass $m$ {\it
before} we carry out the dimensional regularization. In dimensional
regularization, this method is related to the method of regions
\cite{Beneke:1997zp}. We explain this relationship in
Sec.~\ref{sec:matching}. The method of regions has been used previously
at leading order in $v$ to compute NRQCD short-distance coefficients
from full-QCD expressions. (See for example,
Ref.~\cite{Beneke:1997jm}.)
Our results show that, under a mild constraint on the NRQCD operator
matrix elements, the NRQCD velocity expansion for the
quark-antiquark-operator contributions to the electromagnetic current
converges. The velocity expansion converges rapidly for
approximate $J/\psi$ operator matrix elements.
The remainder of this paper is organized as follows. In
Sec.~\ref{sec:matching} we discuss the one-loop matching of NRQCD to QCD
at all orders in $v$. We define the notation that we use to describe the
kinematics of the calculation in Sec.~\ref{sec:kinematics}.
Section~\ref{sec:FORMCoeff} contains detailed formulas for the NRQCD $Q\bar Q$
short-distance coefficients. In Sec.~\ref{sec:QCD} we compute the
one-loop QCD corrections to the electromagnetic current, while in
Sec.~\ref{sec:NRQCD} we use our new method to compute the one-loop NRQCD
corrections to the electromagnetic current. We give analytic and
numerical results for the short-distance coefficients in
Sec.~\ref{sec:RESCoeff}, present a formula that resums a class
of relativistic corrections to all orders in $v$, and discuss the
convergence of the velocity expansion. Our conclusions are
given in Sec.~\ref{sec:Conclusion}. The Appendices contain compilations of
integrals and identities that are useful in the calculation.
\section{Matching to all orders in $\bm{v}$\label{sec:matching}}
We define the hadronic part of the quarkonium electromagnetic decay
amplitude ${\cal A}_H^\mu$ as
\begin{equation}
(-iee_Q)i{\cal A}_H^\mu=\langle 0|J^\mu_{\rm EM}|H\rangle,
\end{equation}
where $H$ is the quarkonium, $e$ is the electromagnetic charge, $e_Q$ is
the heavy-quark charge, and $J^\mu_{\rm EM}$ is the heavy-quark
electromagnetic current:
\begin{equation}
J^\mu_{\rm EM}=(-iee_Q)\bar\psi\gamma^\mu\psi.
\end{equation}
Here, $\psi$ is the heavy-quark Dirac field, and $\gamma^\mu$ is a Dirac
matrix.
In the quarkonium rest frame, $i{\cal A}_H^0=0$ because of conservation of
the electromagnetic current. According to NRQCD
factorization~\cite{Bodwin:1994jh}, we can write the spatial components
$i{\cal A}_H^i$ as
\begin{equation}
\label{NRQCD-fact}%
i{\cal A}_H^i=\sqrt{2m_H}\sum_n c_n\langle 0|{\cal O}^i_n|H\rangle,
\end{equation}
where the $c_n$ are short-distance coefficients, the ${\cal O}^i_n$ are
NRQCD operators, and $m_H$ is the quarkonium mass. We regulate
the operator matrix elements in Eq.~(\ref{NRQCD-fact})
dimensionally in $d=4-2\epsilon$ dimensions. The
factor $\sqrt{2m_H}$ on the right side of Eq.~(\ref{NRQCD-fact})
appears because the NRQCD operator matrix elements have nonrelativistic
normalization, while we choose the amplitude on the left side of
Eq.~(\ref{NRQCD-fact}) to have relativistic normalization for the
quarkonium $H$.
The aim of this paper is to calculate the short-distance coefficients
$c_n$ that correspond to $Q\bar Q$ color-singlet operators
in order $\alpha_s^1$. We can determine these $c_n$ by making use of
a matching equation that is the statement of NRQCD factorization for
perturbative $Q\bar Q$ color-singlet states:
\begin{equation}
i{\cal A}_{Q\bar Q_1}^i=\sum_n c_n\langle 0|{\cal O}^i_n|Q\bar
Q_1\rangle,
\end{equation}
where the subscript $1$ indicates a color-singlet state.
Throughout this paper, we suppress the factor $\sqrt{N_c}$
that comes from the implicit color trace in $i{\cal A}_{Q\bar Q_1}^i$,
where $N_c = 3$ is the number of colors.
Through order $\alpha_s^1$, the matching equation is
\begin{equation}
\label{matching-01}%
i{\cal A}_{Q\bar Q_1}^{i(0)}+i{\cal A}_{Q\bar Q_1}^{i(1)}=
\sum_n (c_n^{(0)}+c_n^{(1)})\langle 0|{\cal O}^i_n|Q\bar Q_1\rangle^{(0)}
+\sum_n c_n^{(0)}\langle 0|{\cal O}^i_n|Q\bar Q_1\rangle^{(1)},
\end{equation}
where the superscripts $(0)$ and $(1)$ indicate the order in
$\alpha_s$. In the first sum in Eq.~(\ref{matching-01}), only
color-singlet $Q \bar Q$ operators contribute, while in the last sum,
additional operators can contribute if they mix into color-singlet
$Q\bar Q$ operators under one-loop corrections.
We define the quantity
\begin{equation}
\left[i{\cal A}_{Q\bar Q_1}^{i(0)}\right]_{\rm NRQCD} =
\sum_n c_n^{(0)}\langle 0|{\cal O}^i_n|Q\bar Q_1\rangle^{(0)},
\end{equation}
which is the expansion of $i{\cal A}_{Q\bar Q_1}^{i(0)}$ in powers of $q/m$,
where $q$ is half the relative momentum of the heavy quark and heavy antiquark.
At order $\alpha_s^0$, the matching equation (\ref{matching-01}) yields
\begin{equation}
\label{match-0}%
i{\cal A}_{Q\bar Q_1}^{i(0)}=
\sum_n c_n^{(0)}\langle 0|{\cal O}^i_n|Q\bar Q_1\rangle^{(0)},
\end{equation}
from which the $c_n^{(0)}$ can be determined. The $c_n^{(0)}$ have
been computed previously in Ref.~\cite{Bodwin:2007fz}.
At order $\alpha_s^1$, the matching equation (\ref{matching-01}) yields
\begin{equation}
\label{match-1}%
i{\cal A}_{Q\bar Q_1}^{i(1)}=
\sum_n c_n^{(1)}\langle 0|{\cal O}^i_n|Q\bar Q_1\rangle^{(0)}+
\left[i{\cal A}_{Q\bar Q_1}^{i(1)}\right]_{\rm NRQCD},
\end{equation}
from which the $c_n^{(1)}$ can be computed.
We compute the quantities $i{\cal A}_{Q\bar Q_1}^{i(1)}$ and
$\left[i{\cal A}_{Q\bar Q_1}^{i(1)}\right]_{\rm NRQCD}$
in Secs.~\ref{sec:QCD} and \ref{sec:NRQCD}, respectively.
The quantity
\begin{equation}
\left[i{\cal A}_{Q\bar Q_1}^{i(1)}\right]_{\rm NRQCD} =
\sum_n c_n^{(0)}\langle 0|{\cal O}^i_n|Q\bar Q_1\rangle^{(1)}
\end{equation}
would be formidable to calculate directly in NRQCD because it involves
operators and interactions of all orders in $v$.
Rather than carry out such a direct calculation, we take a new
approach. We note that, by construction, NRQCD reproduces all of the
interactions in full QCD, but with those interactions reorganized in an
expansion in powers of $v$. Therefore, we can obtain $\left[i{\cal
A}_{Q\bar Q_1}^{i(1)}\right]_{\rm NRQCD}$ from the expression for $i{\cal
A}_{Q\bar Q_1}^{i(1)}$ by expanding the integrand in powers of the
momentum divided by $m$.
Before making this expansion, we carry out the integration over the
temporal component of the loop momentum, using contour integration. This
procedure establishes the scale of the temporal component of the loop
momentum, which varies from contribution to contribution, and it avoids
the generation of ill-defined pinch singularities that can arise when
one expands the $Q$ and $\bar Q$ propagators prematurely in powers of
the momentum.
We expand the integrand in powers of both the external momenta divided
by $m$ and the loop momenta divided by $m$. We then regulate the
integrals dimensionally, setting scaleless, power-divergent integrals
equal to zero. Ultimately, we renormalize ultraviolet divergences
according to the $\overline{\rm MS}$ prescription.
The procedure of expanding both external and loop momenta in power
series before regulating dimensionally was first utilized in the
Appendix of Ref.~\cite{Bodwin:1994jh}. The rationale for it was
discussed in Refs.~\cite{Luke:1997ys,Bodwin:1998mn}. This procedure
amounts to the prescription that infrared-finite contributions that
arise from loop momenta in the vicinity of zero are kept in the
short-distance coefficients \cite{Bodwin:1998mn}.\footnote{In the case
of hard-cutoff regularization, such as lattice regularization, the
expansion in powers of loop momentum divided by $m$ would be uniformly
convergent and would yield the same result as the unexpanded
expression.}
If one uses dimensional regularization, then the quantity $\sum_n
c_n^{(1)}\langle 0|{\cal O}^i_n|Q\bar Q_1\rangle^{(0)}$ corresponds to
the contribution from the hard region in the method of regions
\cite{Beneke:1997zp}, while the quantity $\left[i{\cal A}_{Q\bar
Q_1}^{i(1)}\right]_{\rm NRQCD}$ corresponds to the sum of the
contributions from the potential, soft, and ultrasoft regions,
i.e., the contribution from the small-loop-momentum region. In the
method of regions, it is assumed that there are no contributions from
the region in which the temporal component of the gluon momentum is of
order $m$, but the spatial component of the gluon momentum is of order
$mv$. As we shall see explicitly in our calculation, this assumption is
justified because the contribution from this region of integration vanishes
in dimensional regularization. We note, however, that the contribution
from this region does not vanish in the case of a hard-cutoff regulator.
One potentially useful feature of the approach that we present here is
that it can be applied in the case of a hard cutoff, such as lattice
regularization, while the method of regions is applicable only in
dimensional regularization. In the method of regions, one can compute the
contribution from the hard region directly, rather than computing it, as
we do, by subtracting the small-loop-momentum contribution from the
full-QCD contribution. As we shall explain later, there may be
advantages to our indirect procedure in calculating the hard
contribution to all orders in $v$.
\section{Kinematics\label{sec:kinematics}}
Before proceeding to write explicit formulas for the short-distance
coefficients, let us define some notation for the kinematics of the
heavy-quark electromagnetic vertex. We take $p_1$ and $p_2$ to be the momenta
of the incoming heavy quark $Q$ and heavy antiquark $\bar{Q}$, respectively.
$p_1$ and $p_2$ can be expressed as linear combinations of their
average $p$ and half their difference $q$:
\begin{subequations}
\label{momenta-definition}%
\begin{eqnarray}
p_1&=&p+q,
\\
p_2&=&p-q.
\end{eqnarray}
\end{subequations}
In the $Q\bar{Q}$ rest frame, the momenta are given by
\begin{subequations}
\label{momenta-rest}%
\begin{eqnarray}
p_1&=&(E,\bm{q}),
\\
p_2&=&(E, -\bm{q}),
\\
p&=&(E,\bm{0}),
\\
q&=&(0,\bm{q}),
\end{eqnarray}
\end{subequations}
where $E=\sqrt{m^2+\bm{q}^2}$.
The quark $Q$ and antiquark $\bar{Q}$ are on their mass shells:
$p_1^2=p_2^2=m^2$. For later use,
it is convenient to define a parameter
\begin{equation}
\label{delta}%
\delta=\frac{|\bm{q}|}{E},
\end{equation}
which is related to the velocity
\begin{equation}
\label{v}%
v=\frac{|\bm{q}|}{m}.
\end{equation}
We can write $\delta$ in terms of $v$ as
\begin{equation}
\label{delta-v}%
\delta=\frac{v}{\sqrt{1+v^2}}.
\end{equation}
$E^2$ and $\bm{q}^2$ are expressed in terms of $m$ and $\delta$ as
\begin{subequations}
\label{eq-delta}%
\begin{eqnarray}
E^2&=&\frac{m^2}{1-\delta^2},
\\
\bm{q}^2&=&\frac{m^2\delta^2}{1-\delta^2}.
\end{eqnarray}
\end{subequations}
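For reference, Eq.~(\ref{eq-delta}) follows by inverting Eq.~(\ref{delta-v}):

```latex
v^{2}=\frac{\delta^{2}}{1-\delta^{2}}
\quad\Longrightarrow\quad
\bm{q}^{2}=m^{2}v^{2}=\frac{m^{2}\delta^{2}}{1-\delta^{2}},
\qquad
E^{2}=m^{2}+\bm{q}^{2}
=m^{2}\left(1+\frac{\delta^{2}}{1-\delta^{2}}\right)
=\frac{m^{2}}{1-\delta^{2}}.
```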
\section{Formulas for the short-distance coefficients\label{sec:FORMCoeff}}
Now let us make use of the matching conditions (\ref{match-0}) and
(\ref{match-1}) to compute the short-distance coefficients for the
specific color-singlet $Q\bar Q$ operators that we consider in this
paper. These operators are
\begin{subequations}
\label{OPerators}%
\begin{eqnarray}
\label{OPeratorsA}%
{\cal O}_{An}^i&=&\chi^\dagger (-\tfrac{i}{2}\tensor{\bm{\nabla}})^{2n}
\sigma^i\psi,
\\
{\cal O}_{Bn}^i&=&\chi^\dagger (-\tfrac{i}{2}\tensor{\bm{\nabla}})^{2n-2}
(-\tfrac{i}{2}\tensor{\nabla}^i)
(-\tfrac{i}{2}\tensor{\bm{\nabla}})\cdot\bm{\sigma}
\psi,
\end{eqnarray}
\end{subequations}
where $\psi$ is the Pauli spinor field that annihilates a heavy quark,
$\chi^\dagger$ is the Pauli spinor field that annihilates a heavy antiquark,
and $\sigma^i$ is a Pauli matrix.
Our operators contain ordinary derivatives, rather
than covariant derivatives. Therefore, our operators are not gauge invariant,
and we evaluate their matrix elements in the Coulomb gauge.
We do not consider $Q\bar Q$ operators involving the gauge fields,
which first contribute at relative order $v^4$.
We note that ${\cal O}_{Bn}^i$ can be decomposed into a linear combination
of the $S$-wave operator ${\cal O}_{An}^i$ and the $D$-wave operator
${\cal O}_{Dn}^i$:
\begin{equation}
\label{OPerator-B}%
{\cal O}_{Bn}^i=
\frac{1}{d-1}
{\cal O}_{An}^i+
{\cal O}_{Dn}^i,
\end{equation}
where ${\cal O}_{Dn}^i$ is defined by
\begin{equation}
\label{OPerator-D}%
{\cal O}_{Dn}^i=
\chi^\dagger (-\tfrac{i}{2}\tensor{\bm{\nabla}})^{2n-2}
\left[
(-\tfrac{i}{2}\tensor{\nabla}^i)
(-\tfrac{i}{2}\tensor{\bm{\nabla}})\cdot\bm{\sigma}
-\frac{1}{d-1}
(-\tfrac{i}{2}\tensor{\bm{\nabla}})^{2}
\sigma^i
\right]
\psi.
\end{equation}
In the basis of operators ${\cal O}^i_{An}$ and ${\cal O}^i_{Bn}$,
the matching conditions (\ref{match-0}) and (\ref{match-1}) become
\begin{subequations}
\label{coeffs-a-b}%
\begin{eqnarray}
\label{coeffs-a-b-0}%
i{\cal A}_{Q\bar Q_1}^{i(0)}
&=&
\sum_n a_n^{(0)}\langle 0|{\cal O}^i_{An}|Q\bar Q_1\rangle^{(0)}
+
\sum_n b_n^{(0)}\langle 0|{\cal O}^i_{Bn}|Q\bar Q_1\rangle^{(0)},
\\
\label{coeffs-a-b-1}%
i{\cal A}_{Q\bar Q_1}^{i(1)}
&=&
\sum_n a_n^{(1)}\langle 0|{\cal O}^i_{An}|Q\bar Q_1\rangle^{(0)}
+
\sum_n b_n^{(1)}\langle 0|{\cal O}^i_{Bn}|Q\bar Q_1\rangle^{(0)}
+
\left[i{\cal A}_{Q\bar Q_1}^{i(1)}\right]_{\rm NRQCD},
\end{eqnarray}
\end{subequations}
where $a_n$ and $b_n$ are the corresponding short-distance coefficients.
A similar equation holds in the basis
${\cal O}^i_{An}$ and ${\cal O}^i_{Dn}$, where the associated
short-distance coefficients are
\begin{subequations}
\label{coeffs-sd}%
\begin{eqnarray}
\label{coeffs-s}
s_n&=&a_n+\frac{1}{d-1}\,b_n,
\\
d_n&=&b_n,
\end{eqnarray}
\end{subequations}
respectively.\footnote{
If we replace the ordinary derivatives $\tensor{\bm{\nabla}}$ with
covariant derivatives $\tensor{\bm{D}}$ in an $S$-wave operator ${\cal
O}_{An}^i$, then we obtain one of the conventional gauge-invariant
$S$-wave NRQCD operators. Because the squared covariant derivatives
$(\tensor{\bm{D}})^2$ commute with themselves, the substitution of
covariant derivatives for ordinary derivatives leads to a unique $S$-wave
operator at each order $n$. Therefore, the $S$-wave short-distance
coefficients $s_n$ that we compute are also the short-distance
coefficients of the $S$-wave operator in which ordinary derivatives have
been replaced with covariant derivatives. In the case of the $D$-wave
operators ${\cal O}_{Dn}^i$, the replacement of ordinary derivatives
with covariant derivatives does not lead to a unique operator because
$(\tensor{D})^i$ and $(\tensor{D})^j$ do not commute. Therefore, each of the
$D$-wave short-distance coefficients $d_n$ that we compute is the sum of
the short-distance coefficients for the various operators at order $n$
that can be constructed from covariant derivatives.}
The $Q\bar Q$ matrix elements in Eq.~(\ref{coeffs-a-b}) are
\begin{subequations}
\label{pert-me}%
\begin{eqnarray}
\langle 0|{\cal O}_{An}^i|Q\bar Q_1\rangle^{(0)}
&=&\bm{q}^{2n}\eta^\dagger \sigma^i \xi ,\\
\langle 0|{\cal O}_{Bn}^i|Q\bar Q_1\rangle^{(0)}
&=&\bm{q}^{2n-2}q^i\eta^\dagger\bm{q}\cdot\bm{\sigma}\xi,
\end{eqnarray}
\end{subequations}
where $\xi$ and $\eta$ are two-component spinors.
In order to maintain consistency with our calculations in full
QCD, we have taken the $Q\bar Q$ states to have nonrelativistic
normalization and we have suppressed the factor $\sqrt{N_c}$
that comes from the color trace.
Because of current conservation, the most general form of
$i \mathcal{A}_{Q\bar{Q}_1}^i$ is
\begin{equation}
i \mathcal{A}_{Q\bar{Q}_1}^i =
\bar{v} (p_2) (G \gamma^i + H q^i) u(p_1),
\end{equation}
where
\begin{equation}
\label{A-Z-L}%
G =Z_Q (1+\Lambda).
\end{equation}
$Z_Q$ is the fermion wave-function renormalization, and
$\Lambda$ is the multiplicative correction to the fermion
electromagnetic vertex. Similarly,
\begin{equation}
i \left[\mathcal{A}_{Q\bar{Q}_1}^i\right]_{\textrm{NRQCD}} =
\bar{v} (p_2) (G_{\textrm{NRQCD}} \gamma^i
+ H_{\textrm{NRQCD}} q^i) u(p_1),
\end{equation}
where
\begin{equation}
\label{GNRQCD}
G_{\textrm{NRQCD}}=[Z_Q]_\textrm{NRQCD} (1+\Lambda_\textrm{NRQCD}).
\end{equation}
Using nonrelativistic normalization for the spinors $u$ and $v$, we obtain
\begin{subequations}
\label{spinor-reduction}%
\begin{eqnarray}
\bar{v}(p_2) \gamma^i u(p_1)
&=& \eta^\dagger \sigma^i \xi
- \frac{q^i \eta^\dagger \bm{q}\cdot\bm{\sigma} \xi}{E (E+m)},
\\
q^i \bar{v}(p_2) u(p_1) &=&
-\,\, \frac{q^i\eta^\dagger \bm{q}\cdot\bm{\sigma} \xi}{E}.
\end{eqnarray}
\end{subequations}
Then,
\begin{equation}
\label{amp-A-B}%
i{\cal A }^i_{Q\bar Q_1}= G\eta^\dagger\sigma^i\xi
-\left[\frac{ G}{E(E+m)}+\frac{ H }{E}\,\right]
q^i\eta^\dagger\bm{q}\cdot\bm{\sigma}\xi.
\end{equation}
Similarly,
\begin{equation}
\label{amp-A-B-NRQCD}%
i\left[{\cal A }^i_{Q\bar Q_{1}}\right]_{\textrm{NRQCD}}
= G_{\textrm{NRQCD}}\eta^\dagger\sigma^i\xi
-\left[\frac{ G_{\textrm{NRQCD}}}{E(E+m)}
+\frac{ H_{\textrm{NRQCD}}}{E}\,\right]
q^i\eta^\dagger\bm{q}\cdot\bm{\sigma}\xi.
\end{equation}
Using the matching condition (\ref{coeffs-a-b-0}) and
Eqs.~(\ref{pert-me}) and (\ref{amp-A-B}), we obtain
the short-distance coefficients at order $\alpha_s^0$:
\begin{subequations}
\label{an0-bn0}%
\begin{eqnarray}
a_n^{(0)}&=&
\left.
\frac{1}{n!}\left(\frac{\partial}{\partial \bm{q}^2}\right)^n
G^{(0)}
\right|_{\bm{q}^2=0}
=
\delta_{n0},\\
b_n^{(0)}&=&d^{(0)}_n=
-
\left.
\frac{1}{(n-1)!}\left(\frac{\partial}{\partial \bm{q}^2}\right)^{n-1}
\left[\frac{ G^{(0)}}{E(E+m)}
+\frac{ H^{(0)}}{E}\right]
\right|_{\bm{q}^2=0}
\nonumber\\
&=&
-
\left.
\frac{1}{(n-1)!}\left(\frac{\partial}{\partial
\bm{q}^2}\right)^{n-1}\left[\frac{1}{E(E+m)}\right]
\right|_{\bm{q}^2=0},
\\
s_n^{(0)}&=& a_n^{(0)}+\frac{1}{3}\,b_n^{(0)}.
\end{eqnarray}
\end{subequations}
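As an illustration of these formulas, the lowest nontrivial coefficients follow by evaluating the derivatives above at $\bm{q}^2=0$ (where $E=m$):

```latex
a_{0}^{(0)}=1,\qquad
b_{1}^{(0)}=-\left.\frac{1}{E(E+m)}\right|_{\bm{q}^{2}=0}=-\frac{1}{2m^{2}},
\qquad
s_{1}^{(0)}=a_{1}^{(0)}+\tfrac{1}{3}\,b_{1}^{(0)}=-\frac{1}{6m^{2}},
```

so that, in the $S$-wave channel, the tree-level amplitude first deviates from leading order at relative order $v^2$, with coefficient $-\bm{q}^{2}/(6m^{2})$.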
Using the matching condition (\ref{coeffs-a-b-1}) and
Eqs.~(\ref{pert-me}), (\ref{amp-A-B}), and (\ref{amp-A-B-NRQCD}),
we obtain the short-distance coefficients at
order $\alpha_s^1$:
\begin{subequations}
\begin{eqnarray}
a_n^{(1)}&=&
\left.
\frac{1}{n!}
\left(\frac{\partial}{\partial \bm{q}^2}\right)^n
\Delta G^{(1)}
\right|_{\bm{q}^2=0}
,\\
b_n^{(1)}&=& d_n^{(1)}=
-
\left.
\frac{1}{(n-1)!}\left(\frac{\partial}{\partial \bm{q}^2}\right)^{n-1}
\left[\frac{\Delta G^{(1)}}{E(E+m)}
+\frac{\Delta H^{(1)}}{E}
\right]
\right|_{\bm{q}^2=0}
,
\\
\label{dm1}
s_n^{(1)}&=& a_n^{(1)}+\frac{1}{d-1}\,b_n^{(1)},
\end{eqnarray}
\end{subequations}
where
\begin{subequations}
\label{DeltaAB}%
\begin{eqnarray}
\Delta G^{(1)}&=&
G^{(1)}- G_{\rm NRQCD}^{(1)},\\
\Delta H^{(1)}&=&
H^{(1)}- H_{\rm NRQCD}^{(1)}.
\end{eqnarray}
\end{subequations}
The infrared divergences in
$G^{(1)}_{\textrm{NRQCD}}$ and $H^{(1)}_{\textrm{NRQCD}}$
cancel in
$\Delta G^{(1)}$ and $\Delta H^{(1)}$
because NRQCD reproduces full QCD in the infrared region.
The one-loop NRQCD matrix elements in $G_{\rm NRQCD}^{(1)}$ contain
ultraviolet divergences, which we renormalize according to the
$\overline{\rm MS}$ prescription. The quantity $H_{\rm NRQCD}^{(1)}$
is free of ultraviolet divergences.
The quantities $\Lambda$ and $Z_Q$
also contain ultraviolet divergences. However, because of the usual
cancellation between the vertex and fermion-wave-function
renormalizations, $G^{(1)}$ is free of ultraviolet divergences.
$H^{(1)}$ is also free of ultraviolet divergences. Carrying out the
renormalization, we have
\begin{subequations}
\label{ab-MSbar}%
\begin{eqnarray}
\left[a_n^{(1)}\right]_{\overline{\rm MS}}&=&
\left.
\frac{1}{n!}
\left(\frac{\partial}{\partial \bm{q}^2}\right)^n
\Delta G^{(1)}_{\overline{\rm MS}}
\,\right|_{\bm{q}^2=0}
,
\\
\left[b_n^{(1)}\right]_{\overline{\rm MS}}&=&
\left[d_n^{(1)}\right]_{\overline{\rm MS}}=
-
\left.
\frac{1}{(n-1)!}\left(\frac{\partial}{\partial \bm{q}^2}\right)^{n-1}
\left[\frac{\Delta G^{(1)}_{\overline{\rm MS}}}{E(E+m)}
+\frac{\Delta H^{(1)}}{E}
\right]
\right|_{\bm{q}^2=0},
\\
\label{dm1MSbar}
\left[s_n^{(1)}\right]_{\overline{\rm MS}}&=&
\left[a_n^{(1)}\right]_{\overline{\rm MS}}+
\frac{1}{3}\left[b_n^{(1)}\right]_{\overline{\rm MS}}.
\end{eqnarray}
\end{subequations}
In deriving the expression for $\left[s_n^{(1)}\right]_{\overline{\rm MS}}$,
we have used the fact that,
in minimal subtraction, one removes the $1/\epsilon$ pole times the
order-$\alpha_s^0$ $d$-dimensional matrix element. Hence, a term proportional
to $(d-1)^{-1}\epsilon^{-1}$ is subtracted in Eq.~(\ref{dm1}) in carrying out
the renormalization.
\section{QCD Corrections\label{sec:QCD}}
In this section, we calculate the QCD corrections to the heavy-quark
electromagnetic current.
That is, we compute $i{\cal A}_{Q\bar Q_1}^{i(1)}$.
\subsection{Vertex Correction \label{subsec:QCDvertex}}
In the Feynman gauge, the vertex correction to the electromagnetic current
is given by
\begin{equation}
\label{v-amp}%
\Lambda^\mu
=
-ig_s^2C_F
\int_k
\frac{
\bar{v}(p_2)
\gamma_\alpha
(-/\!\!\!p_2+/\!\!\!k+m)
\gamma^\mu
(/\!\!\!p_1+/\!\!\!k+m)
\gamma^\alpha
u(p_1)
}
{D_0D_1D_2},
\end{equation}
where $g_s$ is the strong coupling, with $g_s^2=4\pi\alpha_s$,
$C_F=(N_c^2-1)/(2N_c)=4/3$, and
\begin{subequations}
\label{Di-definition}%
\begin{eqnarray}
\label{int-definition}%
\int_k
&\equiv&
\mu^{2\epsilon}
\int\frac{d^dk}{(2\pi)^d},
\\
D_0&=&k^2+i \varepsilon,
\\
D_1&=&k^2+2k\cdot p_1+i \varepsilon,
\\
D_2&=&k^2-2k\cdot p_2+i \varepsilon.
\end{eqnarray}
\end{subequations}
Here, $\mu$ is the renormalization scale, and
the loop momentum $k$ is chosen to be the gluon momentum.
By making use of Eq.~(\ref{momenta-definition}) and applying the equations
of motion,
\begin{subequations}
\label{diraceq}%
\begin{eqnarray}
\bar{v}(p_2) /\!\!\!p u(p_1) &=& 0, \\
\bar{v}(p_2) /\!\!\!q u(p_1) &=& m \bar{v}(p_2) u(p_1),
\end{eqnarray}
\end{subequations}
we find that Eq.~(\ref{v-amp}) can be written as
\begin{eqnarray}
\label{numerator}%
\Lambda^\mu
&=&
-ig_s^2C_F
\int_k
\frac{1} {D_0D_1D_2}
\,
\bar{v} (p_2) \bigg\{
\bigg[(d-2)k^2 - 4(2 p^2 - m^2) + 8 k \cdot q
\bigg]
\gamma^\mu
\nonumber\\
&&\hspace{27ex}
+ 4 m k^{\mu} - 8 q^{\mu} k\!\!\!/ + 2 (2-d) k^{\mu} k\!\!\!/
\,\bigg\} u(p_1).
\end{eqnarray}
Tensor reductions of the integrals in
Eq.~(\ref{numerator}) are given in Appendix~\ref{appendix:tensor}.
The result is
\begin{eqnarray}
\label{lambdainS}%
\Lambda^{\mu} &=& -i g_s^2 C_F \bar{v}(p_2)\Bigg\{
\bigg[
(d-2) J_1
-4 (2 p^2 - m^2) J_2 + 4 J_3 + 2 (2-d) J_4
\bigg]
\gamma^{\mu}
\nonumber \\
&&\hspace{15ex}
+ \frac{2 m p^{\mu}}{p^2} J_5 - \frac{2 m q^{\mu}}{q^2} J_3
+ 2 (2-d) m \left( \frac{q^{\mu}}{q^2} J_6
+ \frac{p^{\mu}}{p^2 q^2} J_7 \right) \Bigg\}u(p_1),
\end{eqnarray}
where the integrals $J_i$ are defined by
\begin{equation}
\label{Si}%
J_i = \int_k \frac{N_i}{D_0 D_1 D_2},
\end{equation}
and
\begin{subequations}
\label{Ni-definition}%
\begin{eqnarray}
N_1 &=& k^2, \\
N_2 &=& 1, \\
N_3 &=& 2 k \cdot q, \\
N_4 &=&
\frac{1}{d-2} \left[
k^2
- \frac{(k \cdot p)^2}{p^2}
- \frac{(k \cdot q)^2}{q^2}
\right],\\
N_5 &=& 2 k \cdot p,\\
N_6 &=& \frac{1}{d-2} \left[
-k^2
+ \frac{(k \cdot p)^2}{p^2}
+ (d-1) \frac{(k \cdot q)^2}{q^2}
\right]
, \\
N_7 &=&
k \cdot p\,
k \cdot q.
\end{eqnarray}
\end{subequations}
The integrals $J_1$--$J_7$ are evaluated in
Appendix~\ref{appendix:scalarintegrals}.
The results are tabulated in Eq.~(\ref{Ji-final}).
We note that $J_5$ and $J_7$ vanish, as is required by conservation of the
electromagnetic current in Eq.~(\ref{lambdainS}).
Writing the vertex correction as
$\Lambda^{\mu} = \bar{v}(p_2)(\Lambda\gamma^{\mu} + H q^{\mu})u(p_1)$,
we have
\begin{subequations}
\label{Lam-final}%
\begin{eqnarray}
\label{Lam-finala}%
\Lambda &=& - i g_s^2 C_F \bigg[(d-2) J_1 - 4 (2 p^2 -m^2) J_2
+ 4 J_3 + 2 (2-d)J_4 \bigg] \nonumber \\
&=& \frac{\alpha_s C_F}{4 \pi}
\bigg\{
\frac{1}{\epsilon_{\textrm{UV}}}
+\log \frac{4 \pi \mu^2 e^{-\gamma_{_{\!\textrm{E}}}}}{m^2}
+2(1+\delta^2) L(\delta) \left(
\frac{1}{\epsilon_{\textrm{IR}}}
+\log \frac{4 \pi \mu^2 e^{-\gamma_{_{\!\textrm{E}}}}}{m^2}
\right)
+ 6 \delta^2 L(\delta)
\nonumber\\
&&
\hspace{7ex}
- 4(1+\delta^2)K(\delta)
+(1+\delta^2)
\left[
\frac{\pi^2}{\delta}
- \frac{i \pi}{\delta}
\left(
\frac{1}{\epsilon_{\textrm{IR}}}
+
\log \frac{\pi \mu^2 e^{-\gamma_{_{\!\textrm{E}}}}}{\bm{q}^2}
+\frac{3 \delta^2}{1+\delta^2}
\right)
\right]
\bigg\},
\nonumber\\
\\
\label{Lam-finalb}%
H&=& -i g_s^2 C_F \left[ - \frac{2 m}{q^2} J_3
+ \frac{2 (2-d) m}{q^2} J_6 \right]=
\frac{\alpha_s C_F}{4 \pi} \frac{1-\delta^2}{m}
\left[ 2 L(\delta) - \frac{i \pi}{\delta} \right],
\end{eqnarray}
\end{subequations}
where the subscripts on $1/\epsilon$ denote the origins of the divergences
and $\gamma_{_{\!\textrm{E}}}$ is the Euler-Mascheroni constant.
The functions $L (\delta)$ and $K(\delta)$ are given by
\begin{subequations}
\label{L-K-delta}%
\begin{eqnarray}
\label{L-delta}%
L(\delta)&=& \frac{1}{2 \delta}
\log \left( \frac{1 + \delta}{1 - \delta} \right),
\\
\label{K-delta}%
K(\delta) &=&
\frac{1}{4 \delta}
\left[ \textrm{Sp} \left( \frac{2 \delta}{1+\delta} \right)
- \textrm{Sp} \left( - \frac{2 \delta}{1-\delta} \right)
\right],
\end{eqnarray}
\end{subequations}
where $\textrm{Sp}$ is the Spence function:
\begin{equation}
\label{Sp}%
\textrm{Sp}(x)=
\int_x^0\frac{\log (1-t)}{t}dt.
\end{equation}
In Eq.~(\ref{Lam-final}), we have neglected terms of order $\epsilon^1$
and higher. In the remainder of this paper, we drop such higher-order terms.
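As an illustrative cross-check of Eqs.~(\ref{L-K-delta}) and (\ref{Sp}) (ours, not part of the original derivation): the Spence function defined above is the dilogarithm $\textrm{Li}_2$, and for small $\delta$ one has $L(\delta)=1+\delta^2/3+O(\delta^4)$ and $K(\delta)=1+4\delta^2/9+O(\delta^4)$. The following sketch assumes the \texttt{mpmath} library; the function names are ours:

```python
# Illustrative check: Sp(x) = Li2(x), plus small-delta behavior of L and K.
# Assumes mpmath; the function names here are ours, not from the paper.
from mpmath import mp, mpf, log, polylog, quad

mp.dps = 30  # working precision

def Sp(x):
    # Sp(x) = int_x^0 log(1-t)/t dt = -int_0^x log(1-t)/t dt, Eq. (Sp)
    return -quad(lambda t: log(1 - t) / t, [0, x])

def L(d):
    # Eq. (L-delta)
    return log((1 + d) / (1 - d)) / (2 * d)

def K(d):
    # Eq. (K-delta), with Sp written as the dilogarithm polylog(2, .)
    return (polylog(2, 2 * d / (1 + d)) - polylog(2, -2 * d / (1 - d))) / (4 * d)

assert abs(Sp(mpf("0.3")) - polylog(2, mpf("0.3"))) < mpf("1e-20")
assert abs(L(mpf("0.5")) - log(3)) < mpf("1e-25")  # exact: L(1/2) = log 3
d = mpf("1e-3")
assert abs(L(d) - (1 + d**2 / 3)) < mpf("1e-11")
assert abs(K(d) - (1 + 4 * d**2 / 9)) < mpf("1e-11")
```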
\subsection{Wave-function Renormalization\label{subsec:QCDZQ}}
The heavy-quark wave-function renormalization $Z_Q$,
evaluated in dimensional
regularization, is given in Ref.~\cite{Braaten:1995ej}:
\begin{equation}
\label{zq}%
Z_Q=
1+\frac{\alpha_s C_F}{4\pi}
\left(
-\frac{1}{\epsilon_{\textrm{UV}}}
-\frac{2}{\epsilon_{\textrm{IR}}}
-3
\log
\frac{4 \pi \mu^2 e^{-\gamma_{_{\!\textrm{E}}}}}{m^2}
-4
\right).
\end{equation}
\subsection{Summary of QCD results\label{subsec:QCDsummary}}
By making use of Eqs.~(\ref{A-Z-L}), (\ref{Lam-final}), and (\ref{zq}),
we find that $G$ and $H$ are given by
\begin{subequations}
\label{QCDAB}%
\begin{eqnarray}
G
&=& 1 + \frac{\alpha_s C_F}{4 \pi}
\bigg\{
2 \big[ (1+\delta^2) L (\delta) -1\big]
\left(
\frac{1}{\epsilon_{\textrm{IR}}}
+
\log \frac{4 \pi \mu^2 e^{-\gamma_{_{\!\textrm{E}}}}}{m^2}
\right)
+ 6 \delta^2 L(\delta) -
4 (1+\delta^2) K(\delta)
\nonumber \\
&&\hspace{11ex}
-4
+
(1+\delta^2)
\left[ \frac{\pi^2}{\delta} - \frac{i \pi}{\delta}
\left(
\frac{1}{\epsilon_{\textrm{IR}}}
+
\log \frac{\pi \mu^2 e^{-\gamma_{_{\!\textrm{E}}}}}{\bm{q}^2}
+\frac{3 \delta^2}{1+\delta^2}
\right)
\right]
\bigg\},
\\
H &=& \frac{\alpha_s C_F}{4 \pi} \frac{1-\delta^2}{m}
\left[ 2 L(\delta) - \frac{i \pi}{\delta} \right].
\end{eqnarray}
\end{subequations}
Expanding Eq.~(\ref{amp-A-B}) through order $v^2$, using Eq.~(\ref{QCDAB}),
we obtain
\begin{eqnarray}
\label{amp-qq-v2}%
i \mathcal{A}^i_{Q\bar{Q}_{1}}
&=& \eta^\dagger \sigma^i \xi
\Bigg[ 1 + \frac{\alpha_s C_F}{4 \pi}
\bigg\{ \frac{8}{3} v^2
\left(
\frac{1}{\epsilon_{\textrm{IR}}}
+ \log \frac{4 \pi \mu^2 e^{-\gamma_{_{\!\textrm{E}}}}}{m^2}
\right)
-8 + \frac{2v^2}{9}
\nonumber \\
&& \hspace{17ex}
+ \left( 1 + \frac{3v^2}{2} \right)
\left[
\frac{\pi^2}{v}
- \frac{i \pi}{v} \left(
\frac{1}{\epsilon_{\textrm{IR}}}
+ \log \frac{\pi \mu^2 e^{-\gamma_{_{\!\textrm{E}}}}}
{\bm{q}^2}
\right)
\right]
-3 i \pi v
\bigg\}
\Bigg]
\nonumber \\
&&
- \frac{q^i \eta^\dagger \bm{q}\cdot\bm{\sigma} \xi}{2 m^2}
\Bigg\{ 1 + \frac{\alpha_s C_F}{4 \pi}
\bigg[ -4 + \frac{\pi^2}{v}
- \frac{i \pi}{v}\left(
\frac{1}{\epsilon_{\textrm{IR}}}
+ \log \frac{\pi \mu^2 e^{-\gamma_{_{\!\textrm{E}}}}}
{\bm{q}^2}
+2
\right)
\bigg] \Bigg\}
\nonumber\\
&&+O(v^{3}).
\end{eqnarray}
Equation~(\ref{amp-qq-v2}) agrees with Eq.~(4.16) of Ref.~\cite{Luke:1997ys}.
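As a small numerical check (ours, not from the paper) on the constant and $v^2$ terms of Eq.~(\ref{amp-qq-v2}): the $\mu$-independent real combination $6\delta^2 L(\delta)-4(1+\delta^2)K(\delta)-4$ appearing in $G$ behaves as $-8+2\delta^2/9+O(\delta^4)$, which is the origin of the $-8$ and $2v^2/9$ terms above (at this order $\delta\approx v$). A sketch assuming \texttt{mpmath}:

```python
# Check that 6 d^2 L - 4 (1+d^2) K - 4 -> -8 + 2 d^2/9 for small d,
# reproducing the -8 and 2 v^2/9 terms of Eq. (amp-qq-v2) (delta ~ v there).
from mpmath import mp, mpf, log, polylog

mp.dps = 30

def L(d):
    # Eq. (L-delta)
    return log((1 + d) / (1 - d)) / (2 * d)

def K(d):
    # Eq. (K-delta) in terms of dilogarithms
    return (polylog(2, 2 * d / (1 + d)) - polylog(2, -2 * d / (1 - d))) / (4 * d)

def finite_part(d):
    # mu-independent real finite terms of G in Eq. (QCDAB)
    return 6 * d**2 * L(d) - 4 * (1 + d**2) * K(d) - 4

d = mpf("1e-3")
assert abs(finite_part(d) - (-8 + 2 * d**2 / 9)) < mpf("1e-10")
```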
\section{NRQCD corrections\label{sec:NRQCD}}
In this section, we calculate the NRQCD corrections to the heavy-quark
electromagnetic current.
That is, we compute
$\left[i{\cal A}_{Q\bar Q_1}^{i(1)}\right]_{\rm NRQCD}$.
In order to demonstrate our method for calculating
these corrections from full-QCD expressions, we present the calculation
in some detail.
Divergent integrals are regulated using dimensional regularization,
with $d=4-2\epsilon$.
We define the following notation for the loop integrals in $d-1$ dimensions:
\begin{equation}
\label{int3-definition}%
\int_{\bm{k}}\equiv
\mu^{2 \epsilon} \int \frac{d^{d-1}k}{(2 \pi)^{d-1}}.
\end{equation}
We also define $\intDtxt$ and $\intNtxt$, which have the same
meanings as $\int_k$ and $\int_{\bm{k}}$, respectively,
except that for $\intDtxt$ it is
understood that one carries out the $k^0$ integration
first, and for both $\intDtxt$ and $\intNtxt$ it is understood that
one expands the integrand
in powers of the momenta divided by $m$.
\subsection{Vertex Correction\label{subsec:NRQCDvertex}}
Now, we calculate the NRQCD vertex correction
to the electromagnetic current. From Eq.~(\ref{numerator}), we see that
the vertex correction is given by
\begin{eqnarray}
\label{vertexNRQCD00}%
\Lambda^i_{\textrm{NRQCD}}
&=&
-ig_s^2C_F
\,\bm{\mathcal{N}}\hskip -2.945ex\int_{k}
\frac{1} {D_0D_1D_2}
\,
\bar{v} (p_2) \bigg\{
\bigg[(d-2)k^2 - 4(2 p^2 - m^2) + 8 k \cdot q
\bigg]
\gamma^i
\nonumber\\
&&\hspace{27ex}
+ 4 m k^{i} - 8 q^{i} k\!\!\!/ + 2 (2-d) k^{i} k\!\!\!/
\,\bigg\} u(p_1).
\end{eqnarray}
The vertex correction (\ref{vertexNRQCD00}) can be written as
\begin{eqnarray}
\label{LambdaNRQCD0}%
\Lambda^i_{\textrm{NRQCD}}
&=&-ig_s^2C_F \bar{v}(p_2)
\bigg\{
\Big[
(d-2)S_1-4(2p^2-m^2)S_2+8q_\mu S^\mu_3
\Big]
\gamma^i
\nonumber\\
&&\hspace{15ex}
+2 \Big[
2mS_3^i -4\gamma_\mu S_3^\mu q^i +(2-d)\gamma_\mu S_4^{\mu i}
\Big]
\bigg\}
u(p_1),
\end{eqnarray}
where
\begin{subequations}
\label{Si-definition}%
\begin{eqnarray}
S_1 &=& \,\bm{\mathcal{N}}\hskip -2.945ex\int_{k} \frac{1}{D_1 D_2},
\\
S_2 &=& \,\bm{\mathcal{N}}\hskip -2.945ex\int_{k} \frac{1}{D_0 D_1 D_2},
\\
S^{\mu}_3 &=& \,\bm{\mathcal{N}}\hskip -2.945ex\int_{k} \frac{k^\mu}{D_0 D_1 D_2},
\\
S^{\mu\nu}_4 &=& \,\bm{\mathcal{N}}\hskip -2.945ex\int_{k} \frac{k^\mu k^\nu}{D_0 D_1 D_2}.
\end{eqnarray}
\end{subequations}
The factors in the denominator of the integrands are
defined in Eq.~(\ref{Di-definition}).
In the $Q\bar{Q}$ rest frame, the factors
$D_i$ in Eq.~(\ref{Di-definition}) are
\begin{subequations}
\label{poles}%
\begin{eqnarray}
D_0&=&(k^0)^2-\bm{k}^2+i\varepsilon=
(k^0-|\bm{k}|+i \varepsilon)(k^0+|\bm{k}|-i \varepsilon),
\\
D_1&=&(k^0+E)^2-\Delta^2+i\varepsilon
=(k^0+\Delta+E-i\varepsilon)(k^0-\Delta+E+i\varepsilon),
\\
D_2&=&(k^0-E)^2-\Delta^2+i\varepsilon
=(k^0+\Delta-E-i\varepsilon)(k^0-\Delta-E+i\varepsilon),
\end{eqnarray}
\end{subequations}
where $\Delta$ is defined by
\begin{equation}
\Delta=\sqrt{m^2+(\bm{k}+\bm{q})^2}.
\end{equation}
The following are identities that we use frequently:
\begin{subequations}
\begin{eqnarray}
\label{d2e2}%
\Delta-E&=&\frac{\bm{k}^2+2\bm{k}\cdot\bm{q}}{\Delta+E},
\\
\label{d2ke2}%
\Delta^2-(E\pm|\bm{k}|)^2&=&
\mp 2|\bm{k}|(E\mp \bm{q}\cdot \hat{\bm{k}}),
\end{eqnarray}
\end{subequations}
where $\hat{\bm{a}}=\bm{a}/|\bm{a}|$ for any spatial vector $\bm{a}$.
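These identities, together with the factorizations in Eq.~(\ref{poles}), are easy to spot-check numerically. The sketch below (ours) assumes rest-frame kinematics with $p_1=(E,\bm{q})$, $p_2=(E,-\bm{q})$, and $E^2=m^2+\bm{q}^2$, which is consistent with Eq.~(\ref{poles}); the numerical values are arbitrary:

```python
# Numerical spot-check of Eqs. (d2e2), (d2ke2), and the factorization of D1
# in Eq. (poles), assuming E^2 = m^2 + q^2 and Delta^2 = m^2 + (k+q)^2.
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

m = 1.5
q = [0.3, -0.2, 0.1]
k = [0.7, 0.4, -0.5]
k0 = 0.37  # arbitrary temporal component of the loop momentum

E = math.sqrt(m**2 + dot(q, q))
kq = [ki + qi for ki, qi in zip(k, q)]
Delta = math.sqrt(m**2 + dot(kq, kq))
kabs = math.sqrt(dot(k, k))
khat = [ki / kabs for ki in k]

# Eq. (d2e2): Delta - E = (k^2 + 2 k.q)/(Delta + E)
assert abs((Delta - E) - (dot(k, k) + 2 * dot(k, q)) / (Delta + E)) < 1e-12

# Eq. (d2ke2): Delta^2 - (E +- |k|)^2 = -+ 2 |k| (E -+ q.khat)
assert abs(Delta**2 - (E + kabs) ** 2 + 2 * kabs * (E - dot(q, khat))) < 1e-12
assert abs(Delta**2 - (E - kabs) ** 2 - 2 * kabs * (E + dot(q, khat))) < 1e-12

# Eq. (poles): D1 = k^2 + 2 k.p1 = (k0 + E)^2 - Delta^2, with p1 = (E, q)
D1 = (k0**2 - dot(k, k)) + 2 * (k0 * E - dot(k, q))
assert abs(D1 - ((k0 + E) ** 2 - Delta**2)) < 1e-12
```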
We first evaluate the $k^0$ integral by contour integration, closing the
contour in the upper half-plane in every case. The contributions from
the poles in the gluon, quark, and antiquark propagators are defined as
$S_{ig}$, $S_{iQ}$, and $S_{i\bar{Q}}$, respectively. Certain integrals
that we use frequently are tabulated in Appendix~\ref{appendix:IntS}.
We note that the contributions $S_{i\bar{Q}}$ correspond to the
potential region in the method of regions, and the contributions
$S_{ig}$ correspond to the soft and ultrasoft regions in the method of
regions \cite{Beneke:1997zp}. The contributions $S_{iQ}$ correspond to a
region of integration in which the temporal component of the gluon
momentum is of order $m$, but the spatial component of the gluon
momentum is of order $mv$. As we have mentioned, in the method of
regions it is assumed that this region of integration does not
contribute \cite{Beneke:1997zp}. We shall see explicitly in the
calculations that follow that this assumption is justified because the
contributions from this region of integration consist of scaleless,
power-divergent integrals, which vanish in dimensional regularization.
In the case of a hard-cutoff regulator these contributions do not
vanish, and they must be included in the calculation of the NRQCD
corrections.
\subsubsection{${S_1}$}
The integral $S_1$ is the sum of two contributions:
$S_1=S_{1Q}+S_{1\bar{Q}}$.
By making use of Eq.~(\ref{poles}), we
evaluate the $k^0$ integral. The contribution from the quark pole is
\begin{equation}
\label{s1q}%
S_{1Q} =
- \frac{i}{8E}
\intN
\frac{1}{\Delta(\Delta+E)}.
\end{equation}
Expanding $1/\Delta$ and $1/(\Delta+E)$ in Eq.~(\ref{s1q})
in powers of $(\bm{k}+\bm{q})^2/m^2$,
we find that all of the terms in the
expansion are scaleless, power-divergent integrals.
Hence,
\begin{equation}
\label{s1qf}%
S_{1Q} =0.
\end{equation}
The contribution from the antiquark pole is
\begin{equation}
\label{s1qb}%
S_{1\bar{Q}} =
\frac{i}{8E}
\intN
\frac{1}{\Delta (\Delta-E-i \varepsilon)}.
\end{equation}
We use the identity (\ref{d2e2}) to reduce
the integrand in Eq.~(\ref{s1qb}) to the following form:
\begin{equation}
\label{s1qb-plus}
S_{1\bar{Q}} =
\frac{i}{8E}
\intN\left(1+\frac{E}{\Delta}\right)
\frac{1}{\bm{k}^2+2\bm{k}\cdot\bm{q}-i \varepsilon}.
\end{equation}
Expanding $1/\Delta$ in Eq.~(\ref{s1qb-plus})
in powers of $(\bm{k}+\bm{q})^2/m^2$,
we find that the expansion brings in additional factors of
$(\bm{k}+\bm{q})^2$. In each additional factor, only the term $\bm{q}^2$
survives, as the terms $\bm{k}^2+2\bm{k}\cdot\bm{q}$ lead to scaleless,
power-divergent integrals, which vanish.
As a result, we can replace $\Delta$ with $E$ in Eq.~(\ref{s1qb-plus}).
Hence, $S_{1\bar{Q}}$ is
proportional to an elementary integral $n_1$, which is defined in
Eq.~(\ref{n1}):
\begin{equation}
\label{s1qbf}%
S_{1\bar{Q}} =
\frac{i}{4E}\,n_1= - \frac{|\bm{q}|}{16 \pi E}.
\end{equation}
Using Eqs.~(\ref{delta}), (\ref{s1qf}), and (\ref{s1qbf}), we obtain
\begin{equation}
\label{s1f}%
S_1
= \frac{i}{(4\pi)^2}\,\,
i \pi \delta.
\end{equation}
\subsubsection{${S_2}$}
The integral $S_2$ is the sum of three contributions:
$S_2=S_{2g}+S_{2Q}+S_{2\bar{Q}}$.
By making use of Eq.~(\ref{poles}), we evaluate the $k^0$
integral. The gluon-pole contribution is
\begin{eqnarray}
\label{s2g}%
S_{2g}&=&
-\frac{i}{2}
\intN
\frac{1}{|\bm{k}|
[\Delta^2-(E+|\bm{k}|)^2-i\varepsilon]
[\Delta^2-(E-|\bm{k}|)^2-i\varepsilon]}
\nonumber\\
&=&
\frac{i}{8}
\int_{\bm{k}}
\frac{1}{|\bm{k}|^3
[E^2-(\bm{q}\cdot\hat{\bm{k}})^2]
},
\end{eqnarray}
where we have used the identity (\ref{d2ke2}).
Making use of Eq.~(\ref{angularav2}), we find that $S_{2g}$ is proportional
to $n_0$ in Eq.~(\ref{n0}):
\begin{equation}
\label{s2g-simple}%
S_{2g}=
\frac{i}{16 E|\bm{q}|} \,n_0
\log \left( \frac{E+|\bm{q}|}{E-|\bm{q}|} \right).
\end{equation}
Using Eqs.~(\ref{eq-delta}) and (\ref{n0}), we express $S_{2g}$
in terms of $m$ and $\delta$ as
\begin{equation}
\label{s2gf}%
S_{2g}
=
\frac{i}{32\pi^2 m^2}
\left(
\frac{1}{\epsilon_{\textrm{UV}}}
-\frac{1}{\epsilon_{\textrm{IR}}}\right)
\frac{1-\delta^2}{2\delta}
\log\left(\frac{1+\delta}{1-\delta}\right).
\end{equation}
The contribution from the quark pole is
\begin{eqnarray}
\label{s2q}%
S_{2Q}&=&
- \frac{i}{8E}
\intN
\frac{1}{\Delta(\Delta+E)
[(\Delta+E)^2-\bm{k}^2+i \varepsilon]}
\nonumber\\
&=&
- \frac{i}{8E}
\sum_{n=0}^\infty
\intN
\frac{\bm{k}^{2n}}{\Delta(\Delta+E)^{2n+3}}.
\end{eqnarray}
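The second line of Eq.~(\ref{s2q}) uses the geometric series $1/[(\Delta+E)^2-\bm{k}^2]=\sum_{n\ge 0}\bm{k}^{2n}/(\Delta+E)^{2n+2}$, which converges because $|\bm{k}|<\Delta+E$. A trivial numerical illustration (ours):

```python
# Geometric series used in the second line of Eq. (s2q):
# 1/(A^2 - k2) = sum_n k2^n / A^(2n+2), valid for k2 < A^2.
A = 3.7   # stands for Delta + E
k2 = 1.9  # stands for k^2; note k2 < A**2

exact = 1.0 / (A**2 - k2)
partial = sum(k2**n / A ** (2 * n + 2) for n in range(200))
assert abs(partial - exact) < 1e-12
```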
Now we expand $1/\Delta$ and $1/(\Delta+E)$ in Eq.~(\ref{s2q})
in powers of $(\bm{k}+\bm{q})^2/m^2$.
All of the terms in the expansion yield
scaleless, power-divergent integrals, which vanish.
Therefore, we have
\begin{equation}
\label{s2qf}%
S_{2Q}=0.
\end{equation}
The contribution from the antiquark pole is
\begin{equation}
\label{s2qb}%
S_{2\bar{Q}}=
-\frac{i}{8E}
\intN
\frac{1}{\Delta(\Delta-E-i \varepsilon)
[\bm{k}^2-(\Delta-E)^2-i\varepsilon ]}.
\end{equation}
If we use the relation (\ref{d2e2}), we obtain
\begin{equation}
S_{2\bar{Q}}=
-\frac{i}{8E}
\intN
\left(1+\frac{E}{\Delta}\right)
\frac{1}{\bm{k}^2(\bm{k}^2+2\bm{k}\cdot\bm{q}-i \varepsilon)
\left[1-\frac{1}{\bm{k}^2}
\left(\frac{\bm{k}^2+2\bm{k}\cdot\bm{q}}
{\Delta+E}
\right)^2
\right]}.
\end{equation}
The denominator factor in the brackets can be expanded to give
\begin{equation}
\label{s2qb2-old}%
S_{2\bar{Q}}
=-\frac{i}{8E}
\intN
\left(1+\frac{E}{\Delta}\right)
\left[
\frac{1}
{\bm{k}^{2}(\bm{k}^2+2\bm{k}\cdot\bm{q}-i \varepsilon)}
+
\sum_{n=1}^\infty
\frac{(\bm{k}^2+2\bm{k}\cdot\bm{q})^{2n-1}}
{\bm{k}^{2n+2}(\Delta + E)^{2n}}
\right].
\end{equation}
Now we expand $E/\Delta$ and $1/(\Delta + E)$ in powers of
$(\bm{k}+\bm{q})^2/m^2$. The expansion brings in additional
factors of $(\bm{k}+\bm{q})^2$ in each term in the integrand of
Eq.~(\ref{s2qb2-old}). In each additional factor $(\bm{k}+\bm{q})^2$, only
the term $\bm{q}^2$ survives, as the terms $\bm{k}^2+2\bm{k}\cdot\bm{q}$
lead to scaleless, power-divergent integrals.
Therefore, we can
replace $\Delta$ in Eq.~(\ref{s2qb2-old}) with $E$. Furthermore, in the
numerator of the second term in brackets in Eq.~(\ref{s2qb2-old}), only the
term $(2\bm{k}\cdot \bm{q})^{2n-1}$ survives, as the other terms lead to
scaleless, power-divergent integrals. Then, we have
\begin{equation}
\label{s2qb2}%
S_{2\bar{Q}}
=-\frac{i}{4E}
\intN
\left[
\frac{1}
{\bm{k}^{2}(\bm{k}^2+2\bm{k}\cdot\bm{q}-i \varepsilon)}
+
\frac{1}{2}
\sum_{n=1}^\infty
\frac{(\bm{k}\cdot\bm{q})^{2n-1}}
{\bm{k}^{2n+2} E^{2n}}
\right].
\end{equation}
The term proportional to $(\bm{k}\cdot\bm{q})^{2n-1}$ yields a
scaleless, logarithmically divergent integral.
However, this integral vanishes because
the integrand is an odd function of $\bm{k}$. Thus, only
the first term in the brackets in Eq.~(\ref{s2qb2}) survives, and
we find that
\begin{equation}
\label{s2qbf}%
S_{2\bar{Q}}=
-\frac{i}{4E}n_2
=
-\frac{1}{64\pi E|\bm{q}|}\left(
\frac{1}{\epsilon_{\textrm{IR}}}
+\log
\frac{\pi \mu^2 e^{- \gamma_{_{\!\textrm{E}}}} }
{\bm{q}^2}
+i\pi
\right),
\end{equation}
where $n_2$ is defined in Eq.~(\ref{n2}).
Making use of Eqs.~(\ref{eq-delta}), (\ref{s2gf}), (\ref{s2qf}),
and (\ref{s2qbf}), we obtain
\begin{equation}
\label{s2f}%
S_2
= \frac{i}{(4\pi)^2}\,\,
\frac{1-\delta^2}{4m^2}
\Bigg[
2 L(\delta)
\left(
\frac{1}{\epsilon_{\textrm{UV}}}
-\frac{1}{\epsilon_{\textrm{IR}}}
\right)
-\frac{\pi^2}{\delta}
+ \frac{i \pi}{\delta}
\left(
\frac{1}{\epsilon_{\textrm{IR}}}
+ \log \frac{\pi \mu^2 e^{-\gamma_{_{\!\textrm{E}}}} }
{\bm{q}^2 }
\right)
\Bigg],
\end{equation}
where $L(\delta)$ is defined in Eq.~(\ref{L-delta}).
\subsubsection{$S_3^\mu$}
The integral $S_3^\mu$ is the sum of three contributions:
$S_3^\mu=S_{3g}^\mu+S_{3Q}^\mu+S_{3\bar{Q}}^\mu$.
We first evaluate $S_3^0$.
The integral of $S^{0}_3$ over $k^0$ is identical to the integral of $S_2$
over $k^0$ except that, in $S_3^0$, the result contains an additional factor
of $k^0$ evaluated at the gluon, quark, or antiquark pole.
Thus, by making use of
Eqs.~(\ref{d2e2}), (\ref{s2g}), (\ref{s2q}), and (\ref{s2qb2-old}),
we obtain
\begin{subequations}
\label{s30}%
\begin{eqnarray}
\label{s3g0f}%
S_{3g}^0&=&
-\frac{i}{8}
\int_{\bm{k}}
\frac{1}{\bm{k}^2[ E^2-(\bm{q}\cdot\hat{\bm{k}})^2]},
\\
\label{s3q0}%
S^0_{3Q}&=&
\frac{i}{8E}
\sum_{n=0}^{\infty}
\intN
\frac{\bm{k}^{2n}}{\Delta(\Delta+E)^{2n+2}},
\\
\label{s3qb0}%
S^0_{3\bar{Q}}&=&
\frac{i}{8E}
\sum_{n=0}^\infty
\intN
\frac{(\bm{k}^2+2\bm{k}\cdot\bm{q})^{2n}}
{\bm{k}^{2n+2}\Delta(\Delta+E)^{2n}}.
\end{eqnarray}
\end{subequations}
$S^0_{3g}$ is a scaleless, power-divergent integral, which vanishes.
In $S^0_{3Q}$ and $S^0_{3\bar{Q}}$ we expand $1/\Delta$ and
$1/(\Delta+E)$ in powers of $(\bm{k}+\bm{q})^2/m^2$.
We find that every term in the expansions leads to a
scaleless, power-divergent integral, which vanishes.
Hence,
\begin{equation}
\label{s30f}%
S^0_{3}=0.
\end{equation}
Next we compute the spatial components $S^{i}_3$.
The integral of $S^{i}_3$ over $k^0$ is identical to the integral of $S_2$
over $k^0$ except that, in $S^{i}_3$, the result contains an additional factor
of $k^i$. Thus, by making use of
Eqs.~(\ref{s2g}) and (\ref{s2q}), we obtain
\begin{subequations}
\label{s3v}%
\begin{eqnarray}
\label{s3gv}%
S^{i}_{3g}&=&
\frac{i}{8}
\int_{\bm{k}}
\frac{{k}^i}
{|\bm{k}|^3
[ E^2-(\bm{q}\cdot \hat{\bm{k}})^2]
},
\\
\label{s3qv}%
S^{i}_{3Q}&=&
- \frac{i}{8E}
\sum_{n=0}^\infty
\intN
\frac{{k}^i\bm{k}^{2n}}{\Delta(\Delta+E)^{2n+3}}.
\end{eqnarray}
\end{subequations}
$S^i_{3g}$ is a scaleless, power-divergent integral, which vanishes.
Expanding $1/\Delta$ and $1/(\Delta+E)$ in $S^{i}_{3Q}$ in powers of
$(\bm{k}+\bm{q})^2/m^2$,
we also obtain only scaleless, power-divergent integrals, which vanish.
If we multiply the second term in brackets in Eq.~(\ref{s2qb2}) by
$k^i$, we obtain only scaleless, power-divergent integrals. Hence,
\begin{equation}
\label{s3qbv}%
S^{i}_{3 \bar{Q}}=
- \frac{i}{4 E} \int_{\bm{k}}
\frac{{k}^i}{\bm{k}^2 (\bm{k}^2 + 2 \bm{k}\cdot\bm{q} - i \varepsilon)}.
\end{equation}
After making a standard reduction of the tensor integral in
Eq.~(\ref{s3qbv}) to a scalar
integral, we obtain
\begin{equation}
\label{s3vqbf}%
S^{i}_{3 \bar{Q}}=
- \frac{i\,{q}^i}{8 E\bm{q}^2}
\int_{\bm{k}}
\frac{(\bm{k}^2 + 2 \bm{k}\cdot\bm{q})-\bm{k}^2}
{\bm{k}^2 (\bm{k}^2 + 2 \bm{k}\cdot\bm{q} - i \varepsilon)}
=
\frac{i\,{q}^i}{8 E\bm{q}^2} n_1
=
- \frac{1}{32 \pi} \frac{{q}^i}{E|\bm{q}|},
\end{equation}
where $n_1$ is defined in Eq.~(\ref{n1}) and we have discarded
scaleless, power-divergent integrals. Hence,
\begin{equation}
\label{s3vf}%
S^{i}_{3}=
- \frac{1}{32 \pi} \frac{{q}^i}{E|\bm{q}|}.
\end{equation}
Writing our results in Eqs.~(\ref{s30f}) and (\ref{s3vf}) in
covariant form, we obtain
\begin{equation}
\label{s3muf}%
S^\mu_3
= \frac{i}{(4\pi)^2}\,
\frac{1-\delta^2}{2m^2}\,
\frac{i \pi}{\delta}\, q^\mu,
\end{equation}
where we have made use of Eq.~(\ref{eq-delta}) to
express $E$ and $|\bm{q}|$ in terms of $\delta$.
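These conversions can be checked symbolically. We take Eq.~(\ref{eq-delta}) to supply $E=m/\sqrt{1-\delta^2}$ and $|\bm{q}|=m\delta/\sqrt{1-\delta^2}$ (so that $\delta=|\bm{q}|/E$), an assumption consistent with every conversion in this section; under it, the component forms in Eqs.~(\ref{s1qbf}) and (\ref{s3vf}) reproduce the covariant results of Eqs.~(\ref{s1f}) and (\ref{s3muf}). A \texttt{sympy} sketch (ours):

```python
# Symbolic check that the component form of S_3^i in Eq. (s3vf),
# -q^i/(32 pi E |q|), matches the spatial part of Eq. (s3muf),
# (i/(4 pi)^2)((1-delta^2)/(2 m^2))(i pi/delta) q^i, and similarly for S_1.
# Assumes Eq. (eq-delta): E = m/sqrt(1-delta^2), |q| = m delta/sqrt(1-delta^2).
import sympy as sp

m, delta = sp.symbols("m delta", positive=True)
E = m / sp.sqrt(1 - delta**2)
qabs = m * delta / sp.sqrt(1 - delta**2)

component = -1 / (32 * sp.pi * E * qabs)  # coefficient of q^i in Eq. (s3vf)
covariant = (sp.I / (4 * sp.pi) ** 2) * (1 - delta**2) / (2 * m**2) * (sp.I * sp.pi / delta)
assert sp.simplify(component - covariant) == 0

# The same relations give delta = |q|/E, as used in Eq. (s1f):
s1_component = -qabs / (16 * sp.pi * E)              # Eq. (s1qbf)
s1_covariant = (sp.I / (4 * sp.pi) ** 2) * sp.I * sp.pi * delta  # Eq. (s1f)
assert sp.simplify(s1_component - s1_covariant) == 0
```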
\subsubsection{$S_4^{\mu\nu}$}
The integral $S_4^{\mu\nu}$ is the sum of three contributions:
$S_4^{\mu \nu} =S_{4g}^{\mu \nu}+S_{4Q}^{\mu \nu}+S_{4\bar{Q}}^{\mu \nu}$.
We first evaluate $S_4^{00}$.
The integral of $S^{00}_4$ over $k^0$ is identical to the integral of $S_3^0$
over $k^0$ except that, in $S_4^{00}$,
the result contains an additional factor
of $k^0$ evaluated at the gluon, quark, or antiquark pole.
Thus, by making use of
Eqs.~(\ref{d2e2}) and (\ref{s30}), we obtain
\begin{subequations}
\label{s004}%
\begin{eqnarray}
S^{00}_{4g}&=&
\frac{i}{8}
\int_{\bm{k}}
\frac{1}
{|\bm{k}| [E^2-(\bm{q}\cdot \hat{\bm{k}})^2]},
\\
S^{00}_{4Q}&=&
-\frac{i}{8E}
\sum_{n=0}^{\infty}
\intN
\frac{\bm{k}^{2n}}{\Delta(\Delta+E)^{2n+1}},
\\
S^{00}_{4\bar{Q}}&=&
-\frac{i}{8E}
\sum_{n=0}^\infty
\intN\frac{(\bm{k}^2+2 \bm{k}\cdot\bm{q} )^{2n+1}}
{\bm{k}^{2n+2}\Delta(\Delta+E)^{2n+1}} .
\end{eqnarray}
\end{subequations}
Every integral in Eq.~(\ref{s004}) is a scaleless,
power-divergent integral.
Hence,
\begin{equation}
\label{s004f}%
S^{00}_{4}=0.
\end{equation}
Next we compute ${S}_4^{0i}$.
The integral of ${S}_4^{0i}$ over $k^0$ is identical to the integral of $S_3^0$
over $k^0$ except that, in ${S}_4^{0i}$,
the result contains an additional factor
of $k^i$. Thus, by making use of Eq.~(\ref{s30}), we obtain
\begin{subequations}
\label{s0i4}%
\begin{eqnarray}
{S}^{0i}_{4g}&=&
- \frac{i}{8}
\int_{\bm{k}}\frac{k^i}
{\bm{k}^{2}[
E^2- (\bm{q}\cdot \hat{\bm{k}})^2
]
},
\\
{S}^{0i}_{4Q}&=&
\frac{i}{8E}
\sum_{n=0}^\infty
\intN
\frac{k^i \bm{k}^{2n}}{\Delta(\Delta+E)^{2n+2}},
\\
{S}^{0i}_{4 \bar{Q}}&=&
\frac{i}{8E}
\sum_{n=0}^\infty
\intN
\frac{k^i(\bm{k}^2+2\bm{k}\cdot\bm{q})^{2n}}
{\bm{k}^{2n+2}\Delta(\Delta+E)^{2n}}.
\end{eqnarray}
\end{subequations}
$S^{0i}_{4g}$ is a scaleless, power-divergent integral, which vanishes.
$S^{0i}_{4Q}$ and $S^{0i}_{4\bar{Q}}$ also vanish, once we expand $1/\Delta$
and $1/(\Delta+E)$ in powers of $(\bm{k}+\bm{q})^2/m^2$. Thus,
\begin{equation}
\label{s0i4f}%
S^{0i}_{4}=0.
\end{equation}
Finally, we evaluate the integrals $S^{ij}_4$. The integral of ${S}_4^{ij}$
over $k^0$ is identical to the integral of $S_3^{i}$ over $k^0$ except that,
in ${S}_4^{ij}$, the result contains an additional factor of $k^j$.
Thus, by making use of Eqs.~(\ref{s3v}) and (\ref{s3qbv}), we obtain
\begin{subequations}
\label{sij4}%
\begin{eqnarray}
{S}^{ij}_{4g}&=&
\frac{i}{8}
\int_{\bm{k}}
\frac{k^i k^j}
{|\bm{k}|^3[
E^2- (\bm{q}\cdot \hat{\bm{k}})^2
]
},
\\
{S}^{ij}_{4Q}&=&
- \frac{i}{8E}
\sum_{n=0}^\infty
\intN
\frac{k^i k^j \bm{k}^{2n}}{\Delta(\Delta+E)^{2n+3}},
\\
\label{sijqb4-simple}%
{S}^{ij}_{4 \bar{Q}}&=&
-\frac{i}{4E}
\int_{\bm{k}}
\frac{k^i k^j}
{\bm{k}^2(\bm{k}^2+2\bm{k}\cdot\bm{q}-i \varepsilon)}.
\end{eqnarray}
\end{subequations}
$S^{ij}_{4g}$ is a scaleless, power-divergent integral, which vanishes.
${S}^{ij}_{4Q}$ also vanishes, once we expand $1/\Delta$ and $1/(\Delta+E)$
in powers of $(\bm{k}+\bm{q})^2/m^2$. The tensor integral
${S}^{ij}_{4 \bar{Q}}$ in Eq.~(\ref{sijqb4-simple}) must be a linear
combination of the two symmetric tensors $\delta^{ij}$ and
${q}^i{q}^j$. By contracting these tensors into Eq.~(\ref{sijqb4-simple}),
we determine the coefficients of the linear combination. The result is
\begin{eqnarray}
\label{sijqb4-n1}%
{S}^{ij}_{4\bar{Q}}
&=&
-\frac{i}{4E(d-2)}
\left[
\delta^{ij}\left( n_1 -\frac{1}{4 \bm{q}^2}n_3 \right)
-\frac{q^i q^j}{\bm{q}^2}\left(n_1-\frac{d-1}{4\bm{q}^2} n_3 \right)
\right]
\nonumber \\
&=&
\frac{|\bm{q}|}{32\pi(d-2)E}
\bigg[
\delta^{ij}
+(d-3)\frac{q^i q^j}{\bm{q}^2}
\bigg],
\end{eqnarray}
where $n_3$ is defined in Eq.~(\ref{n3}). Because $S_{4 g}^{ij}$ and
$S_{4 Q}^{ij}$ vanish, we find that $S_4^{ij} = S_{4 \bar{Q}}^{ij}$.
The integral in Eq.~(\ref{sijqb4-n1}) is finite and, therefore,
we may set $d=4$.
The covariant form of the integral $S^{\mu\nu}_4$ at $d=4$ is then
\begin{equation}
\label{s4f}%
S_4^{\mu \nu}
= \frac{i}{(4\pi)^2}\,\,
\frac{ i\pi\delta}{4}
\bigg[
g^{\mu \nu}
- \frac{1-\delta^2}{m^2}
\left(p^\mu p^\nu+\frac{q^\mu q^\nu}{\delta^2}
\right)
\bigg].
\end{equation}
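As a consistency check (ours), the spatial components of Eq.~(\ref{s4f}) reproduce Eq.~(\ref{sijqb4-n1}) at $d=4$. In the rest frame we take $p=(E,\bm{0})$ and $q=(0,\bm{q})$, with $E=m/\sqrt{1-\delta^2}$ and $|\bm{q}|=m\delta/\sqrt{1-\delta^2}$ assumed from Eq.~(\ref{eq-delta}), and use $g^{ij}=-\delta^{ij}$:

```python
# Symbolic check that the spatial part of Eq. (s4f) equals
# (|q|/(64 pi E)) [delta^{ij} + q^i q^j/q^2], i.e. Eq. (sijqb4-n1) at d = 4.
# Assumes rest-frame kinematics p = (E, 0), q = (0, q), g^{ij} = -delta^{ij}.
import sympy as sp

m, delta = sp.symbols("m delta", positive=True)
E = m / sp.sqrt(1 - delta**2)
qabs = m * delta / sp.sqrt(1 - delta**2)

# Spatial components of Eq. (s4f): overall prefactor times the bracket
pref = (sp.I / (4 * sp.pi) ** 2) * sp.I * sp.pi * delta / 4
coeff_delta_ij = pref * (-1)                                 # from g^{ij}
coeff_qq = pref * (-(1 - delta**2) / m**2) * (1 / delta**2)  # from q^mu q^nu/delta^2

# Eq. (sijqb4-n1) at d = 4
target_delta_ij = qabs / (64 * sp.pi * E)
target_qq = qabs / (64 * sp.pi * E) / qabs**2

assert sp.simplify(coeff_delta_ij - target_delta_ij) == 0
assert sp.simplify(coeff_qq - target_qq) == 0
```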
\subsubsection{Summary of the NRQCD vertex correction}
Substituting
$S_1$ -- $S_4^{\mu \nu}$ in Eqs.~(\ref{s1f}), (\ref{s2f}), (\ref{s3muf}), and
(\ref{s4f}) into Eq.~(\ref{LambdaNRQCD0}) and using the equations of motion
in Eq.~(\ref{diraceq}), we obtain
\begin{subequations}
\label{LambdaNRQCDf}%
\begin{eqnarray}
\Lambda_{\textrm{NRQCD}}
&=&
\frac{\alpha_s C_F}{4 \pi}
(1+\delta^2)
\bigg[
2 L(\delta)
\left(
\frac{1}{\epsilon_{\textrm{IR}}}
-\frac{1}{\epsilon_{\textrm{UV}}}
\right)
+\frac{\pi^2}{\delta}
- \frac{i \pi}{\delta}
\left(
\frac{1}{\epsilon_{\textrm{IR}}}
+ \log \frac{\pi \mu^2e^{-\gamma_{_{\!\textrm{E}}}} }
{\bm{q}^2 }
+
\frac{3 \delta^2 }{1+\delta^2} \right)
\bigg],
\nonumber\\
&&
\\
H_{\textrm{NRQCD}}
&=&
\frac{\alpha_s C_F}{4\pi} \,\frac{1-\delta^2}{m}
\left( - \frac{i\pi}{\delta}\right) .
\end{eqnarray}
\end{subequations}
\subsection{Wave-function Renormalization\label{subsec:NRQCDZQ}}
In the Feynman gauge, the self energy of the heavy quark, evaluated
at four-momentum $p_1$, is
\begin{equation}
[\Sigma(p_1)]_{\textrm{NRQCD}}=
-ig_s^2C_F\,\bm{\mathcal{N}}\hskip -2.945ex\int_{k}
\frac{\gamma_\mu (/\!\!\!p_1+/\!\!\!k+m)\gamma^\mu }
{(k^2+i\varepsilon)[(p_1+k)^2-m^2+i\varepsilon]},
\end{equation}
where $m$ is the mass of the heavy quark and $k$ is the loop momentum,
which has been chosen to be the momentum of the virtual gluon.
In $d$ dimensions, we find that the numerator factor reduces to
\begin{equation}
\label{sigmap1}%
[\Sigma(p_1)]_{\textrm{NRQCD}}=
-ig_s^2C_F\,\bm{\mathcal{N}}\hskip -2.945ex\int_{k}
\frac{(2-d)(/\!\!\!p_1+/\!\!\!k)+dm}
{(k^2+i\varepsilon)[(p_1+k)^2-m^2+i\varepsilon]}.
\end{equation}
The heavy-quark wave-function renormalization $Z_Q$ is defined by
\begin{eqnarray}
\label{zq-as}%
[Z_Q]_{\textrm{NRQCD}}
&=&\bigg[
1-
\left.
\frac{p_1^\mu}{m}
\frac{\partial[\Sigma(p_1)]_{\textrm{NRQCD}}}{\partial p_1^\mu}
\right|_{/\!\!\!p_1=m}
\bigg]^{-1}
\nonumber\\
&=&
1+
\left.
\frac{p_1^\mu}{m}
\frac{\partial[\Sigma(p_1)]_{\textrm{NRQCD}}}{\partial p_1^\mu}
\right|_{/\!\!\!p_1=m}+O(\alpha_s^2).
\end{eqnarray}
Differentiating Eq.~(\ref{sigmap1}), we find that
\begin{eqnarray}
\label{dsdp}%
\left.
\frac{p_1^\mu}{m}
\frac{\partial[\Sigma(p_1)]_{\textrm{NRQCD}}}{\partial p_1^\mu}
\right|_{/\!\!\!p_1=m}
&=&
-ig_s^2C_F\,\bm{\mathcal{N}}\hskip -2.945ex\int_{k}
\bigg\{
\frac{2-d}
{D_0D_1}
-
\frac{2[(2-d)(/\!\!\!k+m)+dm](p_1\cdot k+m^2)}
{mD_0D_1^2}
\bigg\}
\nonumber\\
&=&
-ig_s^2C_F\,\bm{\mathcal{N}}\hskip -2.945ex\int_{k}
\bigg[
\frac{2-d}
{D_0D_1}
-
\frac{(2-d)/\!\!\!k+2m}{m}
\left(
\frac{1}{D_0D_1}
-
\frac{1}{D_1^2}
+
\frac{2m^2}{D_0D_1^2}
\right)
\bigg],
\nonumber\\
\end{eqnarray}
where $D_0$ and $D_1$ are defined in Eq.~(\ref{Di-definition}).
The expression in Eq.~(\ref{dsdp}) can be written in terms of
the integrals $T_{02}$, $T_{11}$, $T_{12}$, $T_{02}^\mu$, $T_{11}^\mu$,
and $T_{12}^{\mu}$,
which are defined by
\begin{subequations}
\label{ti-definition}%
\begin{eqnarray}
T_{ab}
&=&
\,\bm{\mathcal{N}}\hskip -2.945ex\int_{k}\frac{1}{D_0^a D_1^b},
\\
T_{ab}^{\mu}
&=&
\,\bm{\mathcal{N}}\hskip -2.945ex\int_{k}\frac{k^\mu}{D_0^a D_1^b}.
\end{eqnarray}
\end{subequations}
These integrals are evaluated in Appendix~\ref{appendix:T-integrals},
and the results are summarized in Eqs.~(\ref{t-vanishing}) and (\ref{t12f}).
The only nonvanishing integral is $T_{12}$. Hence,
\begin{eqnarray}
\label{dsdpf}%
\left.
\frac{p_1^\mu}{m}
\frac{\partial[\Sigma(p_1)]_{\textrm{NRQCD}}}{\partial p_1^\mu}
\right|
_{/\!\!\!p_1=m}
=
4ig_s^2C_F\, m^2\, T_{12}.
\end{eqnarray}
Making use of Eqs.~(\ref{zq-as}),
(\ref{dsdpf}), and (\ref{t12f}),
we obtain the heavy-quark wave-function renormalization in NRQCD:
\begin{equation}
\label{ZQNRQCD}%
[Z_Q]_{\textrm{NRQCD}}
= 1+ \frac{\alpha_s C_F}{2 \pi} \left(
\frac{1}{\epsilon_{\textrm{UV}}}
- \frac{1}{\epsilon_{\textrm{IR}}}
\right)+O(\alpha_s^2).
\end{equation}
\subsection{Summary of NRQCD results\label{subsec:NRQCDsummary}}
Making use of Eqs.~(\ref{LambdaNRQCDf}) and (\ref{ZQNRQCD}),
we find that
\begin{subequations}
\label{GHNRQCDf}%
\begin{eqnarray}
G_{\textrm{NRQCD}}
&=&
1+ \frac{\alpha_s C_F}{4 \pi}
\left\{
2 [(1+\delta^2) L(\delta) - 1]
\Big(
\frac{1}{\epsilon_{\textrm{IR}}}
-\frac{1}{\epsilon_{\textrm{UV}}}
\Big)
\right.
\nonumber \\
&& \hspace{8ex}
\left.
+
(1+\delta^2)
\left[
\frac{\pi^2}{\delta}
- \frac{i \pi}{\delta}
\left(
\frac{1}{\epsilon_{\textrm{IR}}}
+ \log \frac{\pi \mu^2 e^{-\gamma_{_{\!\textrm{E}}}}}
{\bm{q}^2}
+ \frac{3 \delta^2}{1+\delta^2}
\right)
\right]
\right\},
\\
H_{\textrm{NRQCD}}
&=&
\frac{\alpha_s C_F}{4\pi} \,\frac{1-\delta^2}{m}
\left( - \frac{i\pi}{\delta}\right) .
\end{eqnarray}
\end{subequations}
Expanding Eq.~(\ref{amp-A-B-NRQCD}) through order $v^2$,
we obtain
\begin{eqnarray}
\label{Ai-NRQCD}%
i\left[{\cal A}_{Q\bar Q_1}^{i}\right]_{\rm NRQCD}
&=&
\eta^\dagger \sigma^i \xi
\Bigg[
\,
1+ \frac{\alpha_s C_F}{4 \pi}
\bigg\{
\frac{8 v^2}{3}
\left(
\frac{1}{\epsilon_{\textrm{IR}}}
-\frac{1}{\epsilon_{\textrm{UV}}}
\right)
\nonumber \\
&&\hspace{10ex}+
\left( 1 + \frac{3 v^2}{2} \right)
\bigg[ \frac{\pi^2}{v} - \frac{i \pi}{v}
\bigg(
\frac{1}{\epsilon_{\textrm{IR}}}
+\log
\frac{\pi \mu^2
e^{-\gamma_{_{\!\textrm{E}}}}
}{\bm{q}^2}
\bigg)
\bigg]
-3 i \pi v
\bigg\}
\,
\Bigg]
\nonumber \\
&-&
\frac{q^i\eta^\dagger \bm{q}\cdot\bm{\sigma} \xi}{2 m^2}
\left\{
1+ \frac{\alpha_s C_F}{4 \pi}
\left[
\frac{\pi^2}{v}
-
\frac{i \pi}{v}
\left(
\frac{1}{\epsilon_{\textrm{IR}}}
+\log
\frac{\pi \mu^2 e^{-\gamma_{_{\!\textrm{E}}}}}
{\bm{q}^2}
\right)
-
\frac{2 i \pi}{v}
\right]
\right\}+ O(v^3).
\nonumber\\
\end{eqnarray}
Comparing Eq.~(\ref{Ai-NRQCD}) with Eqs.~(4.28) and (4.29) of
Ref.~\cite{Luke:1997ys}, we find agreement. We have also checked
Eq.~(\ref{Ai-NRQCD}) by carrying out a conventional calculation in NRQCD.
\section{Results for the short-distance coefficients\label{sec:RESCoeff}}
Now we can collect the results of our calculations and obtain the
short-distance coefficients. By making use of Eqs.~(\ref{DeltaAB}),
(\ref{QCDAB}), and (\ref{GHNRQCDf}), we find that
\begin{subequations}
\label{DeltaGH}%
\begin{eqnarray}
\Delta G^{(1)}
&=& \frac{\alpha_s C_F}{4 \pi}
\bigg\{
2 \,\big[ (1+\delta^2) L(\delta) - 1\big]
\left(
\frac{1}{\epsilon_{\textrm{UV}}}
+ \log \frac{4\pi \mu^2e^{-\gamma_{_{\!\textrm{E}}}}}{m^2}
\right)
\nonumber \\
&& \hspace{8ex}
+\,6\delta^2 L(\delta) - 4 (1+\delta^2) K (\delta) - 4 \bigg\},
\\
\label{DeltaH}%
\Delta H^{(1)} &=&
\frac{\alpha_s C_F}{4 \pi} \frac{2 (1-\delta^2)}{m} L(\delta).
\end{eqnarray}
\end{subequations}
As expected, the infrared poles in $G^{(1)}$ and $G_{\rm NRQCD}^{(1)}$
have canceled in $\Delta G^{(1)}$. Note that
$\Delta G^{(1)}$ and $\Delta H^{(1)}$ are real and contain only even
powers of $v=|\bm{q}|/m$. Renormalizing the matrix elements in the
$\overline{\rm MS}$ scheme, we obtain
\begin{equation}
\label{DeltaGMS}%
\Delta G^{(1)}_{\overline{\rm MS}}
= \frac{\alpha_s C_F}{4 \pi}
\left\{
2\, \big[ (1+\delta^2) L(\delta) - 1\big]
\log \frac{\mu^2}{m^2}
+
6 \delta^2 L(\delta) - 4 (1+\delta^2) K (\delta) - 4 \right\},
\end{equation}
where now $\mu$ is the NRQCD factorization scale. Using Eq.~(\ref{an0-bn0}),
we obtain the short-distance coefficients $a_n^{(0)}$ and $b_n^{(0)}$:
\begin{subequations}
\begin{eqnarray}
\label{lo-results-a}%
a_n^{(0)}&=&\delta_{n0},
\\
\label{lo-results-b}%
b_1^{(0)}&=& - \frac{1}{2 m^2}
,
\\
\label{lo-results-c}%
b_2^{(0)}&=& \frac{3}{8 m^4}
,
\\
\label{lo-results-d}%
b_3^{(0)}&=& - \frac{5}{16 m^6} .
\end{eqnarray}
\end{subequations}
The results in Eqs.~(\ref{lo-results-a})--(\ref{lo-results-c}) agree
with those in Eq.~(5.5) of Ref.~\cite{Bodwin:2002hg} and those in
Eqs.~(3.13)--(3.20) of Ref.~\cite{Brambilla:2006ph}.
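As a cross-check (not stated in the text, but easily verified symbolically), the lowest-order coefficients $b_n^{(0)}$ above are the Taylor coefficients of $-1/[E(E+m)]$ with $E=\sqrt{m^2+\bm{q}^2}$, an identification inferred here from the resummed structure in Eq.~(\ref{resummed}). A minimal sketch of the check:

```python
# Cross-check (inferred, not quoted from the paper): with E = sqrt(m^2 + q^2),
# the tower of LO coefficients b_n^(0) should be the Taylor coefficients of
# -1/[E(E+m)] in powers of q^2, consistent with the resummed structure.
import sympy as sp

q2, m = sp.symbols('q2 m', positive=True)
E = sp.sqrt(m**2 + q2)
closed_form = -1/(E*(E + m))

# expand around q^2 = 0 and read off the coefficients of q2^(n-1)
series = sp.series(closed_form, q2, 0, 4).removeO().expand()
b = {1: -sp.Rational(1, 2)/m**2,   # b_1^(0)
     2:  sp.Rational(3, 8)/m**4,   # b_2^(0)
     3: -sp.Rational(5, 16)/m**6}  # b_3^(0)
for n in (1, 2, 3):
    assert sp.simplify(series.coeff(q2, n - 1) - b[n]) == 0
print("b_n^(0) match -1/[E(E+m)] through n = 3")
```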
Using Eqs.~(\ref{ab-MSbar}), (\ref{DeltaH}), and (\ref{DeltaGMS}),
we obtain the short-distance coefficients
$\left[a_n^{(1)}\right]_{\overline{\rm MS}}$ and
$\left[b_n^{(1)}\right]_{\overline{\rm MS}}$:
\begin{subequations}
\begin{eqnarray}
\left[a_0^{(1)}\right]_{\overline{\rm MS}}&=&
\frac{\alpha_s C_F}{4 \pi}\,\,
(- 8),
\\
\left[a_1^{(1)}\right]_{\overline{\rm MS}}&=&
\frac{\alpha_s C_F}{4 \pi}\,\,
\frac{1}{m^2}
\left( \frac{2}{9} + \frac{8}{3} \log \frac{\mu^2}{m^2} \right),
\\
\left[a_2^{(1)}\right]_{\overline{\rm MS}}&=&
\frac{\alpha_s C_F}{4 \pi}\,\,
\frac{1}{m^4}
\left( -\frac{92}{75} - \frac{8}{5} \log \frac{\mu^2}{m^2} \right),
\\
\left[a_3^{(1)}\right]_{\overline{\rm MS}}&=&
\frac{\alpha_s C_F}{4 \pi}\,\,
\frac{1}{m^6}
\left( \frac{13744}{11025} + \frac{128}{105} \log \frac{\mu^2}{m^2} \right),
\\
\left[b_1^{(1)}\right]_{\overline{\rm MS}}&=&
\frac{\alpha_s C_F}{4 \pi}\,\,
\frac{2}{m^2},
\\
\left[b_2^{(1)}\right]_{\overline{\rm MS}}&=&
-
\frac{\alpha_s C_F}{4 \pi}\,\,
\frac{1}{ m^4 }
\left( \frac{7}{9} + \frac{4}{3} \log \frac{\mu^2}{m^2} \right),
\\
\left[b_3^{(1)}\right]_{\overline{\rm MS}}&=&
\frac{\alpha_s C_F}{4 \pi}\,\,
\frac{1}{ m^6 }
\left( \frac{107}{150} + \frac{9}{5}
\log \frac{\mu^2}{m^2} \right).
\end{eqnarray}
\end{subequations}
The operators ${\cal O}_{A0}$, ${\cal O}_{A1}$, and ${\cal O}_{B1}$ in
Eq.~(\ref{OPerators}) correspond to the operators that were considered
in Ref.~\cite{Luke:1997ys}, provided that one neglects the gauge fields
in the latter operators.
Therefore, short-distance coefficients
$\left[a_0\right]_{\overline{\rm MS}}$,
$\left[a_1\right]_{\overline{\rm MS}}$, and
$\left[b_1\right]_{\overline{\rm MS}}$ are related to
the coefficients $c_i$ in Eq.~(4.29) of Ref.~\cite{Luke:1997ys} as follows:
\begin{subequations}
\begin{eqnarray}
\left[a_0\right]_{\overline{\rm MS}}&=& c_1,
\\
\left[a_1\right]_{\overline{\rm MS}}&=& -\frac{1}{m^2}\,c_3,
\\
\left[b_1\right]_{\overline{\rm MS}}&=& -\frac{1}{2 m^2}\,c_2.
\end{eqnarray}
\end{subequations}
Our results for these short-distance coefficients agree with those in
Eq.~(4.29) of Ref.~\cite{Luke:1997ys}.
\subsection{Resummation}
Let us define ratios of the $S$-wave $Q\bar Q$ operator matrix elements
to the $S$-wave $Q\bar Q$ operator matrix element of lowest order in $v$:
\begin{eqnarray}
\label{me-ratios}%
\langle \bm{q}^{2n}\rangle_{H({}^3S_1)}&=&
\frac{\langle 0|\mathcal{O}_{An}^i|H({}^3S_1)\rangle}
{\langle 0|\mathcal{O}^{i}_{A0}|H({}^3S_1)\rangle},
\end{eqnarray}
where $\mathcal{O}^{i}_{An}$ is defined in Eq.~(\ref{OPeratorsA}),
and we have used the property that the ratios are independent of the value
of the index $i$. In Ref.~\cite{Bodwin:2006dn}, it was shown that these
ratios of operator matrix elements are related according to a
generalized Gremm-Kapustin relation \cite{Gremm:1997dq}:
\begin{equation}
\label{gen-Gremm-Kapustin}%
\left[\langle \bm{q}^{2n}\rangle_{H({}^3S_1)}\right]_{\overline{\rm MS}}=
\left[\langle \bm{q}^2\rangle_{H({}^3S_1)}\right]^n_{\overline{\rm MS}}.
\end{equation}
This relation holds for the matrix elements in spin-independent-potential
models. Hence, for each value of $n$, it holds up to
corrections of relative order $v^2$.
We can use the relation (\ref{gen-Gremm-Kapustin}) to resum a class
of relativistic corrections to the quarkonium electromagnetic current. From
Eqs.~(\ref{coeffs-s}) and (\ref{ab-MSbar}),
we find that
\begin{eqnarray}
\label{resummed}%
&& \hspace{-10ex}
\sum_{n=0}^\infty
\left(s_n^{(0)} + \left[s_n^{(1)}\right]_{\overline{\rm MS}}
\right)
\langle 0|\mathcal{O}^{i}_{An}|H({}^3S_1)\rangle
\nonumber \\
&=&
\left.
\left\{
\left[ 1- \frac{\bm{q}^2}{E (E+m) (d-1)} \right]
\left(1+\Delta G^{(1)}_{\overline{\textrm{MS}}}\right)
- \frac{\bm{q}^2}{E (d-1)}
\Delta H^{(1)}
\right\}\right|_{\bm{q}^2=\langle \bm{q}^2 \rangle_{H({}^3S_1)}}
\nonumber\\[1.5ex]
&
\hspace{0ex}
\times\,
\langle 0|\mathcal{O}^{i}_{A0}|H({}^3S_1)\rangle.
\end{eqnarray}
Because the relation (\ref{gen-Gremm-Kapustin}) contains corrections of
relative order $v^2$ at each order $v^{2n}$, the resummation in
Eq.~(\ref{resummed}) does not improve the nominal accuracy beyond order
$v^4$. The resummation might, however, improve the numerical accuracy
beyond the accuracy that is obtained through order $v^4$ if the
coefficients in the velocity expansion grow rapidly with the order in
$v$. In any case, it is interesting to use the resummed result to
examine the rate of convergence of the velocity expansion.
\subsection{Numerical results and convergence of the velocity expansion}
Let us evaluate the sums of products of $S$-wave short-distance
coefficients and operator matrix elements, using the relation
(\ref{gen-Gremm-Kapustin}). For $\langle\bm{q}^{2}\rangle_{H({}^3S_1)}$,
we take the central value of the
$J/\psi$ matrix element from Ref.~\cite{Bodwin:2007fz}:
$\langle\bm{q}^{2}\rangle_{J/\psi}=0.441\,\textrm{GeV}^2$.
Taking $m_c=1.5\,\textrm{GeV}$ and setting $\mu=m_c$, we find that
\begin{subequations}
\label{num-coefficient}%
\begin{eqnarray}
\sum_{n=0}^0\left[s_n^{(1)}\right]_{\overline{\rm MS}}
\,
[\langle\bm{q}^{2}\rangle_{J/\psi}]^{n}_{\overline{\rm MS}}
&=&
- \frac{\alpha_s C_F}{4 \pi}\times
8
,
\\
\sum_{n=0}^1\left[s_n^{(1)}\right]_{\overline{\rm MS}}
\,
[\langle\bm{q}^{2}\rangle_{J/\psi}]^{n}_{\overline{\rm MS}}
&=&
- \frac{\alpha_s C_F}{4 \pi}\times
7.826
,
\\
\sum_{n=0}^2\left[s_n^{(1)}\right]_{\overline{\rm MS}}
\,
[\langle\bm{q}^{2}\rangle_{J/\psi}]^{n}_{\overline{\rm MS}}
&=&
- \frac{\alpha_s C_F}{4 \pi}\times
7.883
,
\\
\sum_{n=0}^3\left[s_n^{(1)}\right]_{\overline{\rm MS}}
\,
[\langle\bm{q}^{2}\rangle_{J/\psi}]^{n}_{\overline{\rm MS}}
&=&
- \frac{\alpha_s C_F}{4 \pi}\times
7.872
,
\\
\sum_{n=0}^\infty\left[s_n^{(1)}\right]_{\overline{\rm MS}}
\,
[\langle\bm{q}^{2}\rangle_{J/\psi}]^{n}_{\overline{\rm MS}}
&=&
- \frac{\alpha_s C_F}{4 \pi}\times
7.873 .
\end{eqnarray}
\end{subequations}
In the last line of Eq.~(\ref{num-coefficient}), we have used the resummed
result in Eq.~(\ref{resummed}). Taking $\alpha_s=\alpha_s(2m_c)=0.25$,
we see that the corrections of order $\alpha_s v^2$ and $\alpha_s v^4$ are
$0.5\%$ and $-0.2\%$, respectively. These are not very significant at the
current level of precision of the theory of $J/\psi$ decays to a lepton pair.
As can be seen from Eq.~(\ref{num-coefficient}), the velocity expansion
converges rapidly for approximate charmonium matrix elements.
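As a numerical cross-check (not part of the derivation above), the partial sums in Eq.~(\ref{num-coefficient}) can be reproduced from the coefficients $[a_n^{(1)}]_{\overline{\rm MS}}$ and $[b_n^{(1)}]_{\overline{\rm MS}}$ listed earlier. The $S$-wave combination $s_n^{(1)} = a_n^{(1)} + b_n^{(1)}/(d-1)$ with $d=4$, used below, is an assumption inferred from the structure of Eq.~(\ref{resummed}); with it, the quoted values follow:

```python
m, q2 = 1.5, 0.441          # m_c in GeV, <q^2>_{J/psi} in GeV^2
log_mu = 0.0                # log(mu^2/m^2) vanishes for mu = m_c

# [a_n^(1)] and [b_n^(1)] in units of (alpha_s C_F / 4 pi), at mu = m
a = [-8.0,
     (2/9 + (8/3)*log_mu) / m**2,
     (-92/75 - (8/5)*log_mu) / m**4,
     (13744/11025 + (128/105)*log_mu) / m**6]
b = [0.0,
     2.0 / m**2,
     -(7/9 + (4/3)*log_mu) / m**4,
     (107/150 + (9/5)*log_mu) / m**6]

# assumed S-wave combination at d = 4: s_n = a_n + b_n / (d - 1)
s = [ai + bi/3 for ai, bi in zip(a, b)]

partial = 0.0
for n, sn in enumerate(s):
    partial += sn * q2**n
    print(f"sum through n = {n}: {partial:.3f}")
# -> -8.000, -7.826, -7.883, -7.872, matching Eq. (num-coefficient)
```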
In fact, the expressions for $\Delta G^{(1)}_{\overline{\rm MS}}$ in
Eq.~(\ref{DeltaGMS}) and $\Delta H^{(1)}$ in Eq.~(\ref{DeltaH}), taken as
functions of $v=|\bm{q}|/m$, have finite radii of convergence. The logarithms
in $L(\delta)$ [Eq.~(\ref{L-delta})] and the Spence functions in $K(\delta)$
[Eq.~(\ref{K-delta})] have branch points at $\delta=\pm 1$, i.e.,
$v=\pm\infty$. The quantity $\delta=v/\sqrt{1+v^2}$ has branch points at
$v=\pm i$. Therefore, the closest singularities to the origin in
$\Delta G^{(1)}_{\overline{\rm MS}}$ or $\Delta H^{(1)}$ are at $v=\pm i$.
Consequently, the radii of convergence of $\Delta G^{(1)}_{\overline{\rm MS}}$
and $\Delta H^{(1)}$ as functions of $v$ are one. It follows that the
velocity expansion for the $Q\bar Q$ operators is absolutely convergent,
provided that the absolute values of the operator matrix elements are bounded
by a geometric sequence in which the ratio between elements of the sequence
is less than $m^2$.
\section{Conclusions\label{sec:Conclusion}}
We have presented a calculation in NRQCD of the order-$\alpha_s$ corrections
to the quarkonium electromagnetic current. Our calculation gives expressions
for the short-distance coefficients of all of the $Q\bar Q$ NRQCD operators
that contain any number of derivatives but no gauge fields.
Our operators are not gauge invariant, and we evaluate their matrix elements
in the Coulomb gauge. Our principal
results are given in Eqs.~(\ref{DeltaH}) and (\ref{DeltaGMS}). The NRQCD
short-distance coefficients can be obtained, according to
Eq.~(\ref{ab-MSbar}), from the Taylor-series expansions of
$\Delta G^{(1)}_{\overline{\rm MS}}$ in Eq.~(\ref{DeltaGMS}) and
$\Delta H^{(1)}$ in Eq.~(\ref{DeltaH}). Our results at relative order $v^2$
agree with those in Ref.~\cite{Luke:1997ys}.
Our calculation makes use of a new method for computing, to all orders
in $v$, the one-loop NRQCD corrections that enter into the matching of
NRQCD to full QCD. In this new method, we begin with QCD expressions for
the loop integrands. We obtain the NRQCD corrections from these QCD
expressions by carrying out the integration over the temporal
component of the loop momentum and then expanding the loop integrands
in powers of the loop and external momenta divided by the heavy-quark
mass $m$. We carry out this expansion {\it before} implementing the
dimensional regularization. The new approach allows one to avoid the
daunting task of obtaining NRQCD operators and interactions to all
orders in $v$, along with their Born-level short-distance coefficients,
and computing their contributions to the one-loop corrections. In terms
of the total labor involved, the computation of the NRQCD corrections
to all orders through the new approach is comparable to the
calculation of the NRQCD corrections at relative order $v^2$ through
conventional NRQCD methods. This new method should be applicable to
matching calculations for a variety of effective field theories,
including heavy-quark effective theory and soft-collinear effective
theory.
As we have mentioned, our approach is related to the method of regions
\cite{Beneke:1997zp}. The NRQCD corrections in our approach correspond
in the method of regions to the sum of the contributions from the
potential, soft, and ultrasoft regions, i.e., the contribution
from the small-loop-momentum region \cite{Beneke:1997zp}. In our
approach we have computed the quantities $\Delta G^{(1)}$ and $\Delta
H^{(1)}$ by subtracting the NRQCD corrections from the full-QCD
corrections. In the method of regions, $\Delta G^{(1)}$ and $\Delta
H^{(1)}$ could, in principle, be computed directly from the
contribution from the hard region. However, a straightforward
computation of the contribution from the hard region, carried out by
expanding the integrand in powers of the small momentum, would yield
Taylor-series expansions of $\Delta G^{(1)}$ and $\Delta H^{(1)}$ in
Eq.~(\ref{DeltaGH}) in powers of $\delta$. It would be nontrivial to
sum those expansions to obtain the compact expressions in
Eq.~(\ref{DeltaGH}). In contrast, in our approach, expansions of the
integrand occur only in the NRQCD expressions and lead to very simple
series that can be summed at the integrand level. Hence, our method may
be more efficient than the method of regions for computations of
short-distance coefficients to all orders in $v$. Our method is also
applicable in the case of a hard-cutoff regulator, such as lattice
regularization, while the method of regions applies only in the case of
dimensional regularization.
Because we have omitted operators that contain gauge fields, the operators
that we consider are not the complete set of NRQCD operators that
describe the quarkonium electromagnetic current. In the Coulomb gauge,
the gauge-field
operators first enter at relative order $v^4$, and so our results cannot
be considered to be complete beyond order $v^2$. However, the operators
that we consider account for all of the contributions that are contained
in the Coulomb-gauge wave function of the quarkonium $Q\bar Q$ Fock state.
The correction to the $S$-wave component of the electromagnetic current
that we find in relative order $\alpha_s v^4$ is
only about $-0.2$\%, which is not significant at the current level of
the precision of the theory of $J/\psi$ decays to a lepton pair.
We have examined the convergence of the NRQCD velocity expansion for
$S$-wave $Q\bar{Q}$ operators. In Eq.~(\ref{num-coefficient}), we give the
numerical values for the sums of the first few $S$-wave contributions to
the electromagnetic current and for the sum of all of the $S$-wave
contributions. In these computations, we have made
use of the value of the relative-order-$v^2$ $J/\psi$
matrix element that is given in Ref.~\cite{Bodwin:2007fz} and the
approximate relation
between operator matrix elements in Eq.~(\ref{gen-Gremm-Kapustin}),
which holds in spin-independent-potential models \cite{Bodwin:2006dn}.
It can be seen from Eq.~(\ref{num-coefficient}) that the velocity
expansion converges rapidly in this case. In fact, the expressions for
$\Delta G^{(1)}_{\overline{\rm MS}}$ in Eq.~(\ref{DeltaGMS}) and
$\Delta H^{(1)}$ in Eq.~(\ref{DeltaH}), taken as functions of
$v=|\bm{q}|/m$, have radii of convergence one. Therefore, the velocity
expansion for the $Q\bar Q$ operators is absolutely convergent, provided
that the absolute values of the operator matrix elements are bounded by
a geometric sequence in which the ratio between elements of the sequence
is less than $m^2$.
\begin{acknowledgments}
The work of G.T.B. was supported by the U.S. Department of Energy,
Division of High Energy Physics, under contract No.~DE-AC02-06CH11357.
The work of H.S.C. was supported by the BK21 program.
The work of C.Y. was supported by the Korea Research Foundation under
MOEHRD Basic Research Promotion grant No.~KRF-2006-311-C00020.
The work of J.L. was supported by the Korea Science and Engineering
Foundation (KOSEF) funded by the Korea government (MEST) under
grant No.~R01-2008-000-10378-0.
\end{acknowledgments}
\section{Introduction}
\label{s:intro}
Inference about a target population based on sample data relies on the assumption that the sample is representative. However, simple random samples are often not available in real data problems. Therefore, there is a need to generalize inference from the available non-random sample to the target population of interest. For example, randomized controlled trials (RCTs) are considered a gold standard for estimating treatment effects, but the measured effects can only be formally generalized to the participants within the trial. Recent evidence has indicated that subjects in an RCT can be quite different from patients in routine practice. Such concern among clinicians about the external validity of RCTs has led to the underuse of effective treatments \citep{rothwell2005external}. This highlights the importance of generalizing treatment effects from RCTs to a definable patient population.
Survey sampling is a field that specifically deals with inference on populations from non-random samples, which can be viewed as a special case of generalizing inference. Probability samples collected via probability surveys have historically proven effective. However, such data come at considerable cost in both time and budget. In the past several decades, large-scale probability surveys have suffered increasingly high non-response rates in addition to rising costs. Probability surveys with low response rates are often non-representative, which challenges the validity of survey inference. Meanwhile, recent developments in information technology have made it increasingly convenient and cost-effective to collect large samples with detailed information via online surveys and opt-in panels. Such samples are highly non-representative due to selection bias. Classical weighting methods in the survey literature, such as post-stratification \citep{valliant1993poststratification} and raking \citep{deming1940least}, can improve the representativeness of survey samples when a small number of discrete auxiliary variables about the population are available for survey adjustments. However, such weighting methods can yield highly variable estimates of population quantities in the presence of extreme weights. Alternatively, model-based methods can be used. \citet*{wang2015forecasting} demonstrate, through election forecasts with non-representative voter intention polls on the Xbox gaming platform, that multilevel regression and post-stratification (MRP)
can be used to generate accurate survey estimates from non-representative samples. Their estimates are in line with the forecasts from leading poll analysts. MRP is very appealing when statistical adjustments are made using a small number of discrete auxiliary variables.
In recent years, population data of high volume, variety, and velocity have become increasingly available, with examples including administrative data and electronic medical records. Such data contain detailed, high-dimensional individual-level information and can be used to generalize inference from non-random samples to their target populations. Although post-stratification, raking, and MRP can improve representativeness in the presence of a small number of discrete auxiliary variables, they are infeasible in high-dimensional settings.
With high-dimensional auxiliary variables, Bayesian machine learning techniques have been shown to be effective in improving statistical inference in missing data and causal inference problems \citep*{hill2011bayesian,tan2019robust,hahn2020bayesian}. Specifically, \cite{hill2011bayesian} shows that Bayesian additive regression trees (BART) produce more accurate estimates of average treatment effects than propensity score matching, propensity-weighted estimators, and regression adjustment when the response surface is nonlinear and not parallel between treatment and control groups. \citet*{tan2019robust} demonstrate, in the presence of missing data, that BART reduces the bias and root mean square error of doubly robust estimators when both the propensity and mean models are misspecified. Inspired by these works, we propose Bayesian machine learning model-based methods and extensions for estimating population means using non-random samples. The proposed methods can be applied not only in the context of survey inference but also in more general settings, such as RCTs and epidemiological observational studies. We evaluate the proposed methods using simulation studies and demonstrate their applications with a mental health survey of Ohio Army National Guard service members and a non-random sample from an observational study using electronic medical records of COVID-19 patients.
\section{Methods}
\subsection{Notation and background}
Let $U$ be the finite population of size $N$ and $s$ be a non-random sample of size $n$ from the population. In the sample $s$, information on the outcome of interest $Y$, discrete auxiliary variables $\boldsymbol{Z}$, and continuous auxiliary variables $\boldsymbol{X}$ is collected. In addition, data from the population $U$ (e.g., a census, administrative data, or electronic medical records) are also available, with the same set of auxiliary variables $\boldsymbol{Z}$ and $\boldsymbol{X}$ measured for all units in the population. Figure~\ref{notation} illustrates the scenario under consideration, with the population data on the left and the sample data on the right. Without loss of generality, we consider a continuous variable of interest $Y$, with the estimand of interest being the finite population mean $Q(Y) = \frac{1}{N}\sum_{i \in U} Y_i$.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{notation_general.eps}
\caption{Population $U$ and non-random sample $s$ with shared discrete auxiliary variables $\boldsymbol{Z}$ and continuous auxiliary variables $\boldsymbol{X}$ as well as outcome $Y$ measured only in $s$.}
\label{notation}
\end{figure}
When the dimensions of $\boldsymbol{Z}$ and $\boldsymbol{X}$ are small, post-stratification, raking, and MRP can be applied by first discretizing the continuous auxiliary variables $\boldsymbol{X}$ as $\boldsymbol{X}^*$ using quantiles. Using the joint distribution of discrete auxiliary variables $(\boldsymbol{Z}, \boldsymbol{X}^{*})$, \emph{post-stratification} partitions the population into $J$ disjoint post-strata with $U = \bigcup_{j = 1}^{J} U_j$ of size $N_j$ and the sample into subsamples with $s = \bigcup_{j = 1}^{J} s_j$ of size $n_j$ for the $j$th post-stratum, correspondingly. With respect to the post-strata, the finite population mean can be rewritten as $$Q(Y) = \frac{1}{N} \sum_{j = 1}^J \sum_{i \in U_j} Y_{i} = \frac{1}{N} \sum_{j = 1}^J N_j \theta_j,$$ where $\theta_j = \frac{1}{N_j} \sum_{i \in U_j} Y_i$ is subpopulation mean of post-stratum $U_j$. With the assumption that the
sample units in each post-stratum are representative of population units in that post-stratum, the post-strata means are estimated using corresponding subsample means $\hat{\theta}_j = \bar{y}_j = \frac{1}{n_j} \sum_{i \in s_j} y_i$. Naturally, the post-stratification (PS) estimator takes the form \begin{equation}
\widehat{Q}_{\text{PS} } = \frac{1}{N} \sum_{j = 1}^J N_j \bar{y}_j = \frac{1}{N} \sum_{i \in s} w_{i} y_i, \label{PS}
\end{equation} where $w_{i} = N_j / n_j$ for $i \in U_j$ is the post-stratification weight assigned to sample unit $i$ in post-stratum $j$, which is inversely proportional to the sampling fraction $n_j / N_j$. The post-stratification estimator can be numerically unstable when the partition results in small cells in the sample, in other words, small $n_j$ and large weights $w_i$.
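For concreteness, the PS estimator in Eq.~(\ref{PS}) can be sketched on simulated toy data (the stratum structure and inclusion probabilities below are invented for illustration, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy population: stratum label and outcome for N units (illustrative only)
N = 10_000
stratum = rng.integers(0, 4, size=N)                  # J = 4 post-strata
Y = 2.0 + stratum + rng.normal(0, 1, size=N)

# Non-random sample: inclusion probability depends on the stratum
p_incl = np.array([0.02, 0.05, 0.10, 0.20])[stratum]
in_sample = rng.random(N) < p_incl
y, sj = Y[in_sample], stratum[in_sample]

# PS estimator: weight each sampled unit by N_j / n_j and average
Nj = np.bincount(stratum, minlength=4)
nj = np.bincount(sj, minlength=4)
w = (Nj / nj)[sj]
Q_ps = np.sum(w * y) / N

print(f"naive sample mean: {y.mean():.3f}")   # biased toward high strata
print(f"PS estimate:       {Q_ps:.3f}")       # close to Y.mean()
```

Because the higher strata are both over-sampled and higher in outcome, the naive sample mean is biased upward, while the weighted estimate recovers the population mean up to sampling noise.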
Alternatively, \emph{raking} generates weights $w_i$ to match successively the marginal (rather than the joint) distributions of $(\boldsymbol{Z}, \boldsymbol{X}^{*})$ via iterative proportional fitting.
The raking weighted estimator takes the form \begin{equation}
\widehat{Q}_{ \text{R} } = \frac{1}{N} \sum_{i \in s} w_i {y_i}, \label{rake}
\end{equation} with $w_i$ denoting the raking weights. Raking weights can be highly variable, so the resulting weighted estimators can be inefficient. Also, raking may have convergence issues as the number of auxiliary variables increases.
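A minimal sketch of raking via iterative proportional fitting, on an invented $2\times 3$ cross-tabulation (the counts and margins below are illustrative, not from the paper):

```python
import numpy as np

# Toy example: rake a 2x3 sample cross-tab to known population margins
sample_tab = np.array([[20.,  5., 10.],
                       [15., 30., 20.]])          # sample counts n_{ab}
row_pop = np.array([300., 700.])                   # population margin of Z
col_pop = np.array([250., 400., 350.])             # population margin of X*

w = np.ones_like(sample_tab)                       # cell-level raking factors
for _ in range(100):                               # iterative proportional fitting
    tab = w * sample_tab
    w *= (row_pop / tab.sum(axis=1))[:, None]      # match the row margin
    tab = w * sample_tab
    w *= (col_pop / tab.sum(axis=0))[None, :]      # match the column margin

tab = w * sample_tab
print(np.round(tab.sum(axis=1), 3))   # ~ [300. 700.]
print(np.round(tab.sum(axis=0), 3))   # ~ [250. 400. 350.]
```

Each sampled unit in cell $(a,b)$ receives the weight $w_{ab}$, so that the weighted sample matches both marginal distributions without constraining the joint distribution.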
\citet*{gelman2007struggles} reviews a model-based perspective on the PS estimator. In the model-based approach, a regression model is specified for the conditional distribution of the outcome given the discrete auxiliary variables, $p(Y | \boldsymbol{Z}, \boldsymbol{X}^{*})$. Define stratum-specific means $\theta_j = \text{E}(Y_i | \boldsymbol{Z}_i, \boldsymbol{X}^{*}_i)$, $i \in U_j$. Estimating $\hat{\theta}_j = \widehat{\text{E}}(Y_i | \boldsymbol{Z}_i, \boldsymbol{X}^{*}_i)$ from the fitted model leads to the \emph{regression and post-stratification} (RP) estimator
\widehat{Q}_{\text{RP} } = \frac{1}{N} \sum_{j = 1}^J N_j \hat{\theta}_j. \label{RP}
\end{equation} As a special case, specifying a saturated regression model (including all possible interaction terms) allows $J$ post-stratum-specific means with least-squares estimators $\hat{\theta}_j = \bar{y}_j = \frac{1}{n_j} \sum_{i \in s_j} y_i$. As a result, $\widehat{Q}_{\textrm{RP} } = \widehat{Q}_{\textrm{PS} }$.
From the model-based perspective, the problem of unstable estimates due to small cells in post-stratification can be viewed as a model-fitting problem due to model complexity. This perspective motivates using alternative modeling techniques to improve estimation. Instead of using classical saturated regression models, \emph{multilevel regression and post-stratification} (MRP) utilizes hierarchical regression models to achieve stable estimates. Both main effects and interaction terms can be specified as multilevel random effects, so that information across post-strata is partially pooled in the model-fitting procedure \citep{gelman1997poststratification}. MRP yields more efficient population mean estimates than post-stratification and raking when data are sparse in some post-strata.
Still, it is challenging to perform MRP in high-dimensional settings, especially in the presence of a large number of noise variables not associated with $Y$, because a parametric form needs to be specified for the multilevel regression. Moreover, continuous auxiliary variables need to be discretized before modeling.
The model-based RP approach can also be viewed as a prediction approach, and the RP estimator in (\ref{RP}) can be rewritten as $$\widehat{Q}_{\textrm{RP} } = \frac{1}{N} \sum_{j = 1}^J N_j \hat{\theta}_j = \frac{1}{N} \sum_{j = 1}^J \sum_{i \in U_j} \hat{\theta}_j = \frac{1}{N} \sum_{j = 1}^J \sum_{i \in U_j} \widehat{\textrm{E}}(Y_i | \boldsymbol{Z}_i, \boldsymbol{X}^{*}_i) = \frac{1}{N} \sum_{i \in U} \widehat{\textrm{E}}(Y_i | \boldsymbol{Z}_i, \boldsymbol{X}^{*}_i), $$ where $\widehat{\textrm{E}}(Y_i | \boldsymbol{Z}_i, \boldsymbol{X}^{*}_i)$ is the predicted value of $Y_i$ based on the model $p(Y | \boldsymbol{Z}, \boldsymbol{X}^{*})$. This perspective motivates the use of modern statistical techniques to generalize inference via valid predictions of the outcomes in the population. Specifically, the classical regression models in \emph{regression and post-stratification} can be replaced by any regularized prediction method that achieves stable estimates while including high-dimensional covariates. Such models also allow modeling the continuous $\boldsymbol{X}$ directly.
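The prediction view can be sketched with a simple linear regression standing in for the prediction model (the data-generating process below is invented for illustration; in practice the regularized tree models of the next subsection play this role):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy population with one continuous auxiliary variable X (illustrative only)
N = 20_000
X = rng.normal(0, 1, size=N)
Y = 1.0 + 2.0 * X + rng.normal(0, 1, size=N)

# Non-random sample: units with large X are over-represented
p_incl = 1 / (1 + np.exp(-X))                 # selection depends on X
in_s = rng.random(N) < p_incl

# Fit E(Y | X) on the sample ...
A = np.column_stack([np.ones(in_s.sum()), X[in_s]])
beta, *_ = np.linalg.lstsq(A, Y[in_s], rcond=None)

# ... then average the predictions over the whole population
Q_hat = np.mean(beta[0] + beta[1] * X)

print(f"naive sample mean: {Y[in_s].mean():.3f}")  # biased upward
print(f"prediction-based:  {Q_hat:.3f}")           # close to Y.mean()
```

Averaging model predictions over the population corrects the selection bias whenever the outcome model captures the dependence of $Y$ on the auxiliary variables driving selection.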
\subsection{New Approach: Regularized Prediction}
Tree-based methods are appealing techniques for handling high-dimensional problems. Sum-of-trees ensembles achieve high prediction accuracy and better approximate the functional forms of continuous variables, with each individual tree regularized to obtain stable predictions and achieve a bias-variance trade-off. Taking a model-based predictive perspective, we extend the RP approach to high-dimensional settings by replacing parametric regression models with regularized additive regression trees. We adopt the Bayesian modeling framework, as it is natural for implementing predictive inference and straightforward for quantifying uncertainty.
\subsubsection{ BART and Soft BART Prediction}\label{sec:bart}
In the current setting, the conditional distribution of a continuous outcome given the high-dimensional auxiliary variables $p(Y | \boldsymbol{Z}, \boldsymbol{X})$ can be modeled using Bayesian additive regression trees (BART) \citep*{chipman2010bart} or soft Bayesian additive regression trees (SBART) \citep*{linero2018bayesian}.
For continuous outcomes, BART and SBART assume Gaussian noise and model the location parameter using a non-parametric sum-of-trees structure, allowing both discrete and continuous auxiliary variables \begin{align} \label{BART-P}
Y = G(\boldsymbol{Z}, \boldsymbol{X}) + \epsilon = \sum_{m = 1}^M g(\boldsymbol{Z}, \boldsymbol{X}; T_m, \boldsymbol{\mu}_m) + \epsilon, \quad \epsilon \overset{ \textrm{i.i.d.} }{\sim} N(0, \sigma^2),
\end{align} where $M$ is the fixed number of trees in the sum-of-trees structure, $T_m$ is the $m$-th binary tree with $\boldsymbol{\mu}_m$ the parameters associated with its terminal nodes, and $g(\cdot)$ is the function assigning values from $\boldsymbol{\mu}_m$ according to $(\boldsymbol{Z}, \boldsymbol{X})$. The sum-of-trees structure naturally handles high-dimensional auxiliary variables without specifying a parametric form, accommodating categorical variables, continuous variables, and possible interactions. In the Bayesian framework, uncertainty is naturally quantified through the posterior and posterior predictive distributions.
In BART, $g(\cdot)$ is a deterministic function, and the potential effect of continuous predictors, either linear or nonlinear, is approximated by step functions generated by cutting each continuous predictor at various splitting points in different trees. Regularization priors are specified on $p(T_m)$, $p(\boldsymbol{\mu}_m | T_m)$, and $p(\sigma^2)$ such that each single tree $T_m$ is a weak learner. This specification aims at preventing individual tree effects from being unduly influential and at achieving stable predictions, with automatic default specifications facilitating easy implementation. For $p(T_m)$, the prior is specified by three aspects: (i) the probability that a node is nonterminal, (ii) the distribution on the splitting variable assignments at each interior node, and (iii) the distribution on the splitting rule assignment in each interior node, conditional on the splitting variable. For $p(\boldsymbol{\mu}_m | T_m)$ and $p(\sigma^2)$, conjugate normal and inverse chi-square distributions are specified, respectively.
In SBART, $g(\cdot)$ associates the values of covariates with a probabilistic (instead of deterministic as in BART) path down the tree, with certain probability going left at each node. With such modification, a particular set of values of $(\boldsymbol{Z}, \boldsymbol{X})$ is associated with a certain terminal node with certain probability, obtained by averaging over all possible paths. Unlike hard decision trees in BART where each terminal node is constrained to influence the regression function locally, the soft decision trees in soft BART allow each terminal node to impose a global effect on the function. This global effect of local terminal nodes enables the soft decision trees to borrow information adaptively across different covariate regions. Sparsity-inducing priors are specified to achieve a balance between sparse and non-sparse settings.
In practice, cross-validation can be applied to determine the number of trees $M$ and the hyperparameters in the Bayesian priors. \citet{linero2018bayesian} develop a default prior specification with $M = 50$ that performs well across all 10 benchmark datasets considered in that paper. In this paper, we use $M = 50$ for BART and SBART in the simulation studies and applied examples. The BART and SBART prediction estimators of the finite population mean, $\widehat{Q}_{ \textrm{BART}}$ and $\widehat{Q}_{ \textrm{SBART}}$, are obtained with the following steps. \begin{description}
\item [Step 1] Model $p(Y | \boldsymbol{Z}, \boldsymbol{X})$ using BART or soft BART, $Y = G(\boldsymbol{Z}, \boldsymbol{X}) + \epsilon , \epsilon \sim N(0, \sigma^2)$ with corresponding Bayesian priors.
\item [Step 2] Obtain posterior distributions of $Q(Y) = \frac{1}{N} \sum_{i \in U} y_i$ using Markov chain Monte Carlo (MCMC) simulations. Specifically, in MCMC iteration $t$,
\begin{enumerate}
\item draw $G^{(t)}, \sigma^{(t)} | Y_{i \in s}, \boldsymbol{Z}_{i \in U}, \boldsymbol{X}_{i \in U}$
\item compute $\tilde{\theta}_i^{(t)} = G^{(t)} (\boldsymbol{Z}_i, \boldsymbol{X}_i)$ for $i \in U$
\item obtain $\widehat{Q}_{ \textrm{(S)BART}}^{(t)} = \frac{1}{N} \left[ \sum_{i \in U} \tilde{\theta}_i^{(t)} + \left (\sum_{i \in s} y_i - \sum_{i \in s} \tilde{\theta}_i^{(t)} \right ) \right]$, using the observed $y_i$ in the sample and the predicted values for the population units that are not in the sample.
\end{enumerate}
\item [Step 3] Obtain $\widehat{Q}_{ \textrm{(S)BART}}$: the point estimate is the posterior median of $\widehat{Q}_{ \textrm{(S)BART}}^{(t)}$, with equal-tailed credible intervals constructed from the corresponding posterior quantiles.
\end{description}
In some cases, inference on subpopulation means is also of interest; such estimates can be obtained via a modification of item 3 in Step 2, restricting the averages to the predictions and observed outcomes in the corresponding subpopulation $\Omega \subset U$ and subsample $s \cap \Omega$: $\widehat{Q}_{\Omega, \textrm{(S)BART}}^{(t)} = \frac{1}{N_{\Omega}} \left[ \sum_{i \in \Omega} \tilde{\theta}_i^{(t)} + \left (\sum_{i \in s \cap \Omega } y_i - \sum_{i \in s \cap \Omega } \tilde{\theta}_i^{(t)} \right ) \right]$ .
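The estimator in Steps 2--3 can be sketched in a few lines. This is a minimal illustration with simulated posterior draws standing in for actual BART/SBART output; the helper names \texttt{qhat\_draw} and \texttt{summarize} are our own, not part of any BART package.

```python
import random
import statistics

def qhat_draw(y_sample, theta_pop, sample_idx):
    """One MCMC draw of the (S)BART prediction estimator: sum the
    predictions over the whole population, add the observed-minus-
    predicted residual sum over the sampled units, and divide by N."""
    N = len(theta_pop)
    resid = sum(y_sample[j] - theta_pop[i] for j, i in enumerate(sample_idx))
    return (sum(theta_pop) + resid) / N

def summarize(draws, level=0.95):
    """Posterior median point estimate with an equal-tailed interval."""
    xs = sorted(draws)
    lo = xs[int((1 - level) / 2 * (len(xs) - 1))]
    hi = xs[int((1 + level) / 2 * (len(xs) - 1))]
    return statistics.median(xs), (lo, hi)

# Toy illustration: population of N = 100 with the first n = 20 units
# sampled; each "draw" perturbs the true means to mimic posterior draws.
random.seed(1)
N, n = 100, 20
truth = [random.gauss(0, 1) for _ in range(N)]
sample_idx = list(range(n))
y_sample = [truth[i] for i in sample_idx]
draws = [qhat_draw(y_sample,
                   [mu + random.gauss(0, 0.1) for mu in truth],
                   sample_idx)
         for _ in range(200)]
point, (lo, hi) = summarize(draws)
```

The subpopulation estimator is the same computation with the population and sample sums restricted to $\Omega$ and $s \cap \Omega$.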
\subsubsection{BART and Soft BART Propensity Prediction}
In the missing data literature, \citet*{little2004robust} proposed including logit-transformed response propensity score as covariates using splines in the imputation models. This response propensity prediction method yields robust estimates of sample means when the imputation model is misspecified. \citet*{tan2019robust} extended the method of \citet*{little2004robust} by using BART to fit both the imputation model and the response propensity model. They show that adding BART-estimated propensity score in the BART imputation model reduces bias and RMSE and improves confidence interval coverage rates in the mean estimation.
Inspired by this, we extend the BART and SBART prediction with a two-step approach. First, we estimate the sample inclusion propensity using a propensity model. If the sample data are linked to the population data, we code the sample inclusion indicators $I=1$ for the units in the sample and $I=0$ for the rest of the units in the population. The propensity score $\hat{\pi}$ can then be estimated via modeling $p(I | \boldsymbol{Z}, \boldsymbol{X})$ using probit Bayesian additive regression trees \citep{chipman2010bart}. If the sample data are unlinked to the population data, we round the continuous $\boldsymbol{X}$ to $\left[ \boldsymbol{X} \right]$ at a certain precision level and identify $K$ categories with unique values of $(\boldsymbol{Z},\left[ \boldsymbol{X} \right])$. Within each category $k = 1, \ldots, K$, the number of units in the population $N_k$ and that in the sample $n_k$ can be counted.
Once the counts $(N_k, n_k)$ are created for each category, the propensity score $\hat{\pi}$ for the units to be included in the sample, given $(\boldsymbol{Z}, \left[ \boldsymbol{X} \right])$, can be obtained via models for binomial outcomes.
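For the unlinked case, forming the categories and the counts $(N_k, n_k)$ is mechanical. A sketch follows; the function name \texttt{category\_counts} and the rounding precision are assumptions for illustration.

```python
from collections import Counter

def category_counts(pop_z, pop_x, samp_z, samp_x, ndigits=1):
    """Round continuous X to the chosen precision, form categories from
    unique (Z, [X]) values, and count the population size N_k and the
    sample size n_k within each category."""
    def key(z, x):
        return tuple(z), tuple(round(v, ndigits) for v in x)
    Nk = Counter(key(z, x) for z, x in zip(pop_z, pop_x))
    nk = Counter(key(z, x) for z, x in zip(samp_z, samp_x))
    return {k: (Nk[k], nk.get(k, 0)) for k in Nk}

# Toy data: four population units, two of which are sampled.
pop_z = [(1,), (1,), (0,), (0,)]
pop_x = [(0.12,), (0.14,), (0.52,), (0.98,)]
counts = category_counts(pop_z, pop_x,
                         [pop_z[0], pop_z[2]], [pop_x[0], pop_x[2]])
```

Given the counts, a binomial model within category $k$ gives $\hat{\pi}_k$, in the simplest case close to $n_k / N_k$, possibly smoothed across categories.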
Next, we model $p(Y | \boldsymbol{Z}, \boldsymbol{X}, \hat{\pi})$ by additionally including $\hat{\pi}$ as a covariate in the BART or SBART model, with the rest of the steps being the same as in Section~\ref{sec:bart}. The detailed steps of obtaining the BART propensity
(BART-P) prediction estimator $\widehat{Q}_{ \textrm{BART-P}}$ and the SBART propensity (SBART-P) prediction estimator $\widehat{Q}_{ \textrm{SBART-P}}$ are outlined as follows. \begin{description}
\item[Step 1] Model $p(I | \boldsymbol{Z}, \boldsymbol{X})$ with probit BART and estimate $\hat{\pi}$ using posterior mean
\item[Step 2] Obtain the (S)BART-P prediction estimator for finite population mean
\begin{itemize}
\item model $p(Y | \boldsymbol{Z}, \boldsymbol{X}, \hat{\pi})$ using (S)BART, $Y = G(\boldsymbol{Z}, \boldsymbol{X}, \hat{\pi}) + \epsilon , \epsilon \sim N(0, \sigma^2)$
\item estimate $\tilde{\theta}_i^{(t)} = G^{(t)} (\boldsymbol{Z}_i, \boldsymbol{X}_i, \hat{\pi}_i)$
\item $\widehat{Q}_{ \textrm{(S)BART-P}}^{(t)} = \frac{1}{N} \left[ \sum_{i \in U} \tilde{\theta}_i^{(t)} + \left (\sum_{i \in s} y_i - \sum_{i \in s} \tilde{\theta}_i^{(t)} \right ) \right]$
\item $\widehat{Q}_{ \textrm{(S)BART-P}}$: point estimates using the posterior median of $\widehat{Q}_{ \textrm{(S)BART-P}}^{(t)}$, with credible intervals constructed using quantiles that split the tails of the posterior distribution equally.
\end{itemize}
\end{description}
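The data flow of the two-step (S)BART-P approach (fit a propensity model, then append $\hat{\pi}$ as an extra covariate column for the outcome model) can be sketched as follows. Here \texttt{toy\_propensity} is a crude stand-in for probit BART, used only to keep the sketch self-contained.

```python
def add_propensity_covariate(X, I, fit_propensity):
    """Step 1: estimate inclusion propensity from (X, I).
    Step 2 input: each covariate row augmented with its pi-hat."""
    pi_hat = fit_propensity(X, I)
    return [row + [p] for row, p in zip(X, pi_hat)]

def toy_propensity(X, I):
    """Stand-in for probit BART: inclusion rate grouped by the first
    (binary) covariate."""
    groups = {}
    for row, ind in zip(X, I):
        groups.setdefault(row[0], []).append(ind)
    rate = {g: sum(v) / len(v) for g, v in groups.items()}
    return [rate[row[0]] for row in X]

X = [[0, 0.2], [0, 0.8], [1, 0.5], [1, 0.1]]  # population covariates
I = [1, 0, 1, 1]                              # sample inclusion indicators
X_aug = add_propensity_covariate(X, I, toy_propensity)
```

In the actual method, the (S)BART outcome model is then fit to the augmented covariate rows for the sampled units.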
BART-P and SBART-P prediction methods are expected to be doubly robust \citep*{long2012doubly}. More specifically, as long as either the outcome mean model or the propensity model is correctly specified, a consistent estimator of the population mean is obtained.
\section{Simulation Studies}
\subsection{Simulation Design}\label{sec:design}
Artificial populations with size $N = 3,000$ were simulated. For each unit $i$ in the population, $p$ binary auxiliary variables and $r$ continuous auxiliary variables were generated. The $p$ binary variables $\{Z_{il} \}_{l = 1, \ldots p}$ were obtained with $Z_{il} = I(W_{il} < U_l)$, where $\{ W_{il} \} \overset{\text{i.i.d}}{\sim} \textrm{N}(0, 1)$ and $U_l \overset{\text{i.i.d}}{\sim} \textrm{U}(-.4, .4)$, so that $\text{Pr}(Z_{il} = 1)$ falls in the range $(.34, .66)$, ${l = 1, \ldots, p}$. The $r$ continuous variables $\{X_{il} \}_{l = 1, \ldots r}$ were generated independently from $\textrm{U}(0, 1)$. Samples of size $n = 600$ were drawn from the populations with inclusion probability $\pi = \textrm{Pr}(I = 1|\boldsymbol{Z}, \boldsymbol{X})$ specified as a function of the auxiliary variables $\boldsymbol{X}$ and $\boldsymbol{Z}$. We considered the following four simulation scenarios:
\begin{description}
\item[S1] {\bf Low-dimensional auxiliary variables ($\boldsymbol{p = 3, r = 1}$) with higher inclusion propensity at the lower tail of $\boldsymbol{X_1}$.} The outcomes $\{Y_{i}\}_{i=1,\ldots,N}$ were generated using an additive model: $Y = 26.81 - Z_1 - 2 Z_2 - 3.5 Z_3 - 25 (X_1 - .75)^2 + \epsilon, \epsilon \sim \text{N}(0, 3^2)$, and the samples were selected with $\pi \propto \text{logit}^{-1}[-13.66 + .5 Z_1 + Z_2 + 1.75 Z_3 + 12.5 (X_1 - .75)^2 ]$. Consequently, units with values of $X_1$ falling between 0.5 and 1 were under-sampled.
\item[S2] {\bf High-dimensional auxiliary variables ($\boldsymbol{p = 30, r = 10}$) with higher inclusion propensity at the lower tail of $\boldsymbol{X_1}$.} Same $Y$ and $\pi$ models as in S1, but with added noise auxiliary variables $\{Z_l \}_{l = 4, \ldots, 30}$ and $\{ X_l \}_{l = 2, \ldots, 10}$ that are associated with neither $Y$ nor $\pi$.
\item[S3] {\bf High-dimensional auxiliary variables ($\boldsymbol{p = 30, r = 10}$) with lower inclusion propensity at the lower tail of $\boldsymbol{X_1}$.} Same as S2, but change the signs of the coefficients in the model for $\pi$ to introduce selection bias in the opposite direction: $\pi \propto \text{logit}^{-1}[4.01 - .5 Z_1 - Z_2 - 1.75 Z_3 - 12.5 (X_1 - .75)^2 ]$. Consequently, units with small values of $X_1$ were under-sampled, especially among those with $X_1 \le 0.25$.
\item[S4] {\bf High-dimensional auxiliary variables ($\boldsymbol{p = 30, r = 10}$) with interaction and different relevant continuous predictors for $\boldsymbol{Y}$ and $\boldsymbol{\pi}$.}
The outcomes $\{Y_i\}_{i=1,\ldots,N}$ were generated using $Y = 36.81 - Z_1 - 2 Z_2 - 3.5 Z_3 - 10 Z_1 Z_2 - 9 (X_1 - .75)^2 - 16 Z_3 (X_1 - .75)^2 + \epsilon, \epsilon \sim \text{N}(0, 3^2)$, with samples selected using $\pi \propto \text{logit}^{-1}[3.27 - .5 Z_1 - Z_2 - 1.75 Z_3 - 2 Z_1 Z_2 - 4 (X_3 - .75)^2 - 3 Z_3 (X_3 - .75)^2 - (X_5 - .75)^2]$. Units at the lower tails of $X_3$ and $X_5$ were under-sampled,
but $X_3$ and $X_5$ were not associated with $Y$.
\end{description}
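The data-generating process can be made concrete for scenario S1. The sketch below follows the stated models; normalizing $\pi$ to the target sample size and capping it at 1 are our own illustrative choices, since the paper specifies $\pi$ only up to proportionality, and the capping makes the realized sample somewhat smaller than $n$.

```python
import math
import random

def simulate_s1(N=3000, n=600, p=3, r=1, seed=0):
    """One artificial population and sample under scenario S1:
    Z_il = I(W_il < U_l) with W ~ N(0,1), U_l ~ U(-.4,.4); X ~ U(0,1);
    Y = 26.81 - Z1 - 2 Z2 - 3.5 Z3 - 25 (X1 - .75)^2 + N(0, 3^2);
    pi proportional to logit^{-1}(-13.66 + .5 Z1 + Z2 + 1.75 Z3
                                 + 12.5 (X1 - .75)^2)."""
    rng = random.Random(seed)
    U = [rng.uniform(-0.4, 0.4) for _ in range(p)]
    Z = [[int(rng.gauss(0, 1) < U[l]) for l in range(p)] for _ in range(N)]
    X = [[rng.uniform(0, 1) for _ in range(r)] for _ in range(N)]
    Y = [26.81 - z[0] - 2 * z[1] - 3.5 * z[2] - 25 * (x[0] - 0.75) ** 2
         + rng.gauss(0, 3) for z, x in zip(Z, X)]
    eta = [-13.66 + 0.5 * z[0] + z[1] + 1.75 * z[2]
           + 12.5 * (x[0] - 0.75) ** 2 for z, x in zip(Z, X)]
    w = [1 / (1 + math.exp(-e)) for e in eta]
    total = sum(w)
    pi = [min(1.0, n * v / total) for v in w]  # scaled toward size n, capped
    sample = [i for i in range(N) if rng.random() < pi[i]]
    return Z, X, Y, pi, sample
```

Under this sketch, the finite population mean of $Y$ lands near the $Q = 19.88$ reported for S1--S3, and the sampled units concentrate at the lower tail of $X_1$, as in Figure~\ref{fig:selection}(a).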
Figure~\ref{fig:selection}(a)-(b) show scatter plots of $Y$ against $X_1$, the continuous variable associated with both $Y$ and $\pi$, for the simulated population overlaid with a selected sample in scenarios S1-S3. Population units with lower values of $X_1$ were more likely to be selected into samples in scenarios S1/S2 but less likely to be selected in scenario S3.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{selection_all_new_L.eps}
\caption{Scatterplots of outcomes $Y$ versus continuous auxiliary variables of units in the population (in gray dots) and a selected sample (in red diamonds) for (a) Scenario S1/S2 (b) Scenario S3 (c) Scenario S4.}
\label{fig:selection}
\end{figure}
Scenario S4 was designed to assess whether tree-based methods handle interactions well and how they perform when the continuous variables associated with undersampling are not associated with the outcome. Figure~\ref{fig:selection}(c) visualizes the population with a selected sample in scenario S4, using scatter plots of $Y$ against $X_1$, the continuous variable related to $Y$ but not $\pi$, and of $Y$ against $X_3$, the continuous variable related to $\pi$ but not $Y$. The plot on the left shows a positive association between $Y$ and $X_1$, but units with different values of $X_1$ are equally likely to be included in the sample; the plot on the right shows no association between $Y$ and $X_3$, but units at the lower tail of $X_3$ are less likely to be included in the sample.
For each scenario, 500 replicates of simulation were conducted, with point and interval estimates of the finite population mean computed for each. The Bayesian tree-based methods used all available auxiliary variables, as in practice it is unknown which variables are involved in the true data generating process. For scenario S1 with low-dimensional auxiliary variables, the tree-based methods were also compared to the PS and raking estimators using all four available variables, with $X_1$ discretized using tertiles in PS and using quintiles in raking. Raw estimates were also calculated as unadjusted sample means.
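The discretization used by the weighting benchmarks (tertiles for PS, quintiles for raking) amounts to equal-probability binning of the continuous variable. A sketch, with \texttt{discretize} as a hypothetical helper name:

```python
import bisect
import random
import statistics

def discretize(values, n_bins):
    """Assign each value to one of n_bins equal-probability bins
    (n_bins=3 gives tertiles, n_bins=5 gives quintiles)."""
    cuts = statistics.quantiles(values, n=n_bins)  # n_bins - 1 cut points
    return [bisect.bisect_right(cuts, v) for v in values]

rng = random.Random(0)
x1 = [rng.uniform(0, 1) for _ in range(1000)]
quintile_bins = discretize(x1, 5)  # as used for raking
tertile_bins = discretize(x1, 3)   # as used for PS
```

Each bin then serves as a categorical level when crossing variables for PS cells or building raking margins.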
\subsection{Simulation Results}
The performance of the point estimates is evaluated with empirical bias and empirical root mean squared error (RMSE), summarized in Table~\ref{tab:point}. Scenarios S1-S3 share the same outcome model, the same outcome values $\{Y_i\}_{i=1,\ldots,N}$ and the same ground truth for the finite population mean, defined as $Q = \frac{1}{N} \sum_{i=1}^N Y_i$. The empirical coverage rates and average widths of $80\%$ and $95\%$ probability intervals are visualized in Figure~\ref{fig:CI}. The raw estimates ignoring selection bias are far from the truth, leading to confidence intervals with $0\%$ coverage rates, and are therefore not shown in Figure~\ref{fig:CI}.
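The evaluation metrics in Table~\ref{tab:point} are the standard empirical bias and RMSE over replicates, computed as:

```python
import math

def bias_rmse(estimates, truth):
    """Empirical bias and root mean squared error of point estimates
    across simulation replicates, relative to the finite population
    mean Q."""
    errors = [est - truth for est in estimates]
    bias = sum(errors) / len(errors)
    rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
    return bias, rmse
```

For instance, two replicates estimating 19.0 and 21.0 against $Q = 20$ give bias 0 and RMSE 1.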
\begin{table}[htbp]
\centering
\caption{Simulation results: empirical bias and RMSE of various methods in estimating population means, from 500 simulation replicates, for each simulation setting}
\begin{threeparttable}
\begin{tabular}{lcccccccc} \hline\hline
\multirow{3}{*}{Method} & \multicolumn{2}{c}{S1} & \multicolumn{2}{c}{S2} & \multicolumn{2}{c}{S3} & \multicolumn{2}{c}{S4} \\
\cmidrule(l){2-7} \cmidrule(l){8-9}
& \multicolumn{6}{c}{$Q = 19.88$} & \multicolumn{2}{c}{$Q = 27.74$} \\
\cmidrule(l){2-7} \cmidrule(l){8-9}
& Bias & RMSE & Bias & RMSE & Bias & RMSE & Bias & RMSE \\ \hline
raw & $-2.99$ & $2.99$ & $-2.99$& $2.99$ & $2.43$ & $2.43$ & $3.13$ & $3.14$ \\
PS* & $-0.37$ & $0.43$ & & & & & & \\
raking** & $-0.16$ & $0.22$ & & & & & & \\
BART & $-0.08$ & $0.17$ & $-0.17$ & $0.22$ & $0.37$ & $0.43$ & $0.07$ & $0.17$ \\
BART-P & $-0.06$ & $0.18$ & $-0.12$ & $0.20$ & $0.30$ & $0.38$ & $0.06$ & $0.17$ \\
SBART & $-0.08$ & $0.17$ & $-0.10$ & $0.19$ & $0.24$ & $0.32$ & $0.04$ & $0.16$ \\
SBART-P & $-0.07$ & $0.18$ & $-0.10$ & $0.19$ & $0.24$ & $0.32$ & $0.04$ & $0.16$ \\ \hline
\end{tabular}
\begin{tablenotes}
\footnotesize
\item *PS is based on $Z_1$, $Z_2$, $Z_3$ and $X_1$ discretized using tertiles
\item **Raking is based on $Z_1$, $Z_2$, $Z_3$ and $X_1$ discretized using quintiles.
\item The standard errors of empirical bias from 500 simulation replicates are $< 7.5 \times 10^{-3}$ for all methods
\end{tablenotes}
\end{threeparttable}
\label{tab:point}
\end{table}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{simulation_mean_intervals.eps}
\caption{Simulation results: empirical coverage rates of $80 \%$ and $95 \%$ probability intervals (with the horizontal dashed lines denoting the nominal levels) against average probability interval widths, from 500 simulation replicates, for each simulation setting.}
\label{fig:CI}
\end{figure}
In scenario S1, where the weighting methods are feasible, raking is less biased as well as more efficient than post-stratification (PS). This is because raking retains more information from the continuous variable $X_1$ by discretizing $X_1$ using quintiles rather than the tertiles used in PS, and raking implicitly assumes an additive propensity model while PS assumes an interaction model. Both PS and raking generate confidence intervals with coverage rates lower than the nominal levels, with raking yielding shorter intervals but higher coverage rates. BART and SBART both outperform the weighting methods by utilizing the continuous form of $X_1$, generating credible intervals with coverage rates close to the nominal levels. BART and SBART perform similarly, as all auxiliary variables are relevant in this low-dimensional setting. Including the propensity score in BART and SBART leads to a small bias reduction that is offset by an efficiency loss, indicated by slightly higher RMSE and slightly wider credible intervals.
Scenario S2 differs from scenario S1 by the addition of irrelevant auxiliary variables. PS and raking are not feasible due to the high dimensionality. Units with $X_1$ falling between 0.5 and 1.0 have lower selection probabilities than those with $X_1$ between 0 and 0.5, as shown in Figure~\ref{fig:selection}(a). SBART outperforms BART with lower bias, lower RMSE and better credible interval coverage. Including the propensity score in BART reduces bias and RMSE and restores credible interval coverage; however, such improvement is not obvious for SBART. BART-P, SBART and SBART-P all yield valid credible intervals, but BART does not, with SBART having the shortest intervals.
Scenario S3 differs from scenario S2 in the direction of selection bias. Moreover, because $\pi$ was negatively associated with $(X_1-.75)^2$, the units in the lower tail of $X_1$ (e.g. $X_1 < .25$ in Figure~\ref{fig:selection}(b)) have even smaller inclusion probabilities than the units in the upper tail of $X_1$ had in scenario S2. Consequently, data are sparse in the lower tail of $X_1$. In this setting, neither BART nor SBART performs well, with large bias and RMSE, although SBART yields smaller bias and RMSE than BART. The empirical coverage rates for both BART and SBART are lower than the nominal levels due to bias in the estimation. By including the propensity score, BART-P improves credible interval coverage as well as bias and RMSE relative to BART, but does not fix the undercoverage issue. Again, such improvement is not obvious for SBART.
In scenario S4, both BART and SBART perform well, with small bias and RMSE and close-to-nominal coverage rates. SBART yields slightly smaller bias, smaller RMSE, and a better coverage rate than BART. Including the propensity score slightly reduces bias and improves the coverage rate in BART. Although data are sparse in the lower tails of $X_3$ and $X_5$, these two $X$ variables are not associated with $Y$, and thus the biased selection did not degrade the performance of the tree-based methods as it did in scenario S3.
In all the scenarios considered here, SBART outperforms other competing methods and is, therefore, recommended. However, it should be used with caution, as it still does not perform well when selection bias results in sparse data at the tails of continuous auxiliary variables associated with the outcome.
\subsection{Comparison of BART and SBART Prediction}
We further investigated the performance of BART and SBART in scenario S3, where neither performs well and BART performs worse than SBART. We consider two random samples from the population. For sample I, data are sparse at the lower tail of $X_1$, while for sample II, no data are available at the lower tail, $X_1 < .2$. The top panels I(a) and II(a) of Figure~\ref{fig:scn_Ic_pred} show the population in gray dots, with samples I and II in red, respectively. For a closer examination of the data at the lower tail of $X_1$, we restrict to a subset with $Z_2 = Z_3 =0$ and focus on the lower tail with $X_1 < 0.3$. The middle panels I(b) and II(b) show the population and corresponding sample data in this restricted subgroup. Finally, the bottom panels I(c) and II(c) plot the population values of $Y$ in this subgroup (gray points) overlaid by the posterior means of the location parameters, $G(\boldsymbol{Z},\boldsymbol{X})$, of each population unit estimated using BART and SBART, shown as red pluses and blue crosses, respectively. Panel I(c) shows that both BART and SBART fit the data well within the region $X_1 > .2$ where sample data are available. However, SBART fits the data much better than BART when $X_1 < 0.2$, where only very sparse data are available. The estimated posterior means of the location parameters based on SBART are also less noisy, due to the sparsity-inducing priors that tend to exclude the noise auxiliary variables in model fitting. Panel II(c) shows that both models fail to produce valid predictions in the region $X_1 < .2$ where there is no sample data available. In the simulation study, about $5\%$ of the 500 simulated samples do not include units with $X_1 < .1$ and one third include fewer than 10 units with $X_1 < .2$.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{S3pred.eps}
\caption{Two selected samples I and II from the population in Scenario S3: (a) Scatter plots of $Y$ versus $X_1$ with the population in gray dots and a selected sample in red diamonds (b) Scatter plots of $Y$ versus $X_1$, restricted to $Z_2 = Z_3 = 0$ and $X_1 < .3$ (c) Scatter plots of $Y$ versus $X_1$ in the subpopulation, overlaid with posterior means of $G(\boldsymbol{Z},\boldsymbol{X})$ estimated from the BART and SBART models based on the whole sample.}
\label{fig:scn_Ic_pred}
\end{figure}
\section{Applied Examples}\label{sec:app}
We demonstrate the application of the proposed methods using real data from two different studies. The first application deals with a mental health survey assessing psychiatric disorders among Ohio Army National Guard (OHARNG) service members. The second application is in a clinical setting where it is of interest to generalize inference on COVID-19 patients when clinical outcomes are only available in a subset of patients.
\subsection{Ohio Army National Guard Survey of Mental Health}
The Ohio Army National Guard (OHARNG) Mental Health Initiative is a population-based observational survey study for estimating the prevalence and identifying correlates of mental illness and health service utilization among OHARNG service members. The study population of the baseline survey is defined as all $N = 12570$ soldiers who served in the OHARNG between June 2008 and February 2009. A survey sample with $n = 2562$ service members was selected. In this analysis, we are interested in estimating the mean trauma score among the OHARNG service members using the selected sample, with potential selection bias due to under-coverage of the sampling frame and non-response. Auxiliary information is available at the individual level for the entire study population, including age (17-24 yr, 25-34 yr, 35+ yr), sex (male, female), race (white, black, other), rank (enlisted, officer), marital status (married, non-married), and years of service (in years). We apply the proposed tree-based methods to correct the discrepancy between the sample and the population, utilizing the five categorical variables and one continuous auxiliary variable. For BART-P and SBART-P, the propensity models were built using probit BART.
Before modeling, a $\log(y + 1)$ transformation was applied to the trauma scores to reduce right skewness, so that the normality assumption in BART and SBART is reasonable. The distributions of the only continuous variable, years of service, in the sample and population were checked to avoid prediction failure due to sparse data at the tails (see Figure S1 in the Supplementary Materials). After fitting the models, we performed model checking using posterior predictive graphical checks \citep[chapter 6]{gelman2014bayesian} based on the following test quantities: (a) $T_1 (\boldsymbol{y}) = \bar{y}$, (b) $T_2 (\boldsymbol{y}) = \frac{1}{n - 1} \sum_{i = 1}^n (y_i - \bar{y})^2$, and
(c) $T_3(\boldsymbol{y}, G, \sigma) = \frac{1}{n} \sum_{i = 1}^n \left( \frac{y_i - \theta_i}{\sigma} \right)^2$, where $\theta_i = G (\boldsymbol{z}_i, \boldsymbol{x}_i)$. The test quantities capture different aspects of the data, with $T_1(\cdot)$ and $T_2(\cdot)$ measuring the location and variability of the survey outcome and $T_3(\cdot)$ measuring the discrepancy between the survey outcome and the fitted distribution. In each MCMC iteration $t$, the realized test quantities $T_i(\boldsymbol{y}, G^{(t)}, \sigma^{(t)})$ under the observed data and the predictive test quantities $T_i(\boldsymbol{\tilde{y}}^{(t)}, G^{(t)}, \sigma^{(t)})$ under the simulated data were computed and compared, with $\boldsymbol{\tilde{y}}^{(t)}$ drawn from the posterior predictive distribution. For each quantity $T_i (\cdot)$, a Bayesian posterior predictive $p$-value can also be computed, defined as the probability that the predictive test quantity is greater than the realized test quantity, evaluated over the posterior distribution. The Bayesian $p$-value measures the discrepancy between the observed data and the posterior predictive distribution in the aspect characterized by $T(\cdot)$. A Bayesian $p$-value close to 0.5 indicates good fit, while a Bayesian $p$-value near 0 or 1 indicates that the observed pattern would be unlikely if the model were true and therefore indicates lack of fit. Figure S2 in the Supplementary Materials shows the posterior predictive graphical checks and corresponding $p$-values for SBART. All four Bayesian methods, BART, BART-P, SBART and SBART-P, yielded fitted models with posterior predictive $p$-values close to 0.5, indicating adequate model fit.
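The test quantities and the posterior predictive $p$-value are straightforward to compute from the MCMC output; a sketch, with function names of our own choosing:

```python
def T1(y):
    """Sample mean: the location of the outcome."""
    return sum(y) / len(y)

def T2(y):
    """Sample variance: the variability of the outcome."""
    ybar = T1(y)
    return sum((v - ybar) ** 2 for v in y) / (len(y) - 1)

def T3(y, theta, sigma):
    """Average squared standardized residual against the fitted model."""
    return sum(((yi - ti) / sigma) ** 2 for yi, ti in zip(y, theta)) / len(y)

def ppp_value(realized, predictive):
    """Bayesian posterior predictive p-value: the fraction of MCMC draws
    in which the predictive test quantity exceeds the realized one."""
    return sum(p > r for r, p in zip(realized, predictive)) / len(realized)
```

Applying \texttt{ppp\_value} to the per-iteration realized and predictive quantities yields one $p$-value per test quantity, with values near 0.5 indicating adequate fit.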
We compare the results of the proposed Bayesian methods with the raw estimates in estimating the mean trauma score on the log scale, with point estimates and $95 \%$ probability intervals visualized in Figure~\ref{fig:app}(a). The Bayesian methods yield lower estimates for the mean trauma score compared to the raw estimates without adjustment. BART and SBART yield similar results, as this is a low-dimensional setting with one continuous auxiliary variable, where the benefit of soft decision trees is less pronounced. Including propensity scores does not lead to much change in the estimates. We recommend reporting the estimates using SBART in this analysis.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{app_intvl.eps}
\caption{(a) Point estimates and $95 \%$ probability intervals of mean $\log$(trauma score + 1) among soldiers who served in the OHARNG between June 2008 and February 2009 (b) Point estimates and $95 \%$ probability intervals of mean prolongation among all patients and patients with age $\geq 80$ years old, comparing raw sample means, SBART with baseline QTc and treatment (SBART-subset), and SBART with all auxiliary variables (SBART-all).}
\label{fig:app}
\end{figure}
\subsection{New York City COVID-19 Study}
COVID-19 is a global pandemic caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). The first positive case was confirmed in New York City on March 1, 2020, and by May 2020 the city had more cases than any country other than the United States. The urgent need for therapeutic agents resulted in repurposing and redeployment of experimental agents. Hydroxychloroquine, combined with azithromycin, was administered to patients with COVID-19 without robust evidence supporting its use, with the U.S. Food and Drug Administration (FDA) issuing an emergency use authorization (EUA) on March 28, 2020 to allow doctors to begin treating hospitalized patients with hydroxychloroquine outside clinical trials. The EUA was revoked on June 15, 2020, after a randomized clinical trial in hospitalized patients showed no benefits and there were reports of serious heart rhythm problems along with other safety issues. Both hydroxychloroquine and azithromycin are characterized as definite QTc prolongers that increase the risk of sudden cardiac death. Between March 1, 2020 and May 1, 2020, 470 patients were admitted to Columbia University Irving Medical Center and treated with hydroxychloroquine (H+) or hydroxychloroquine combined with azithromycin (A+H+) \citep{rubin2021}. All patients have baseline ECG QTc measurements, but only 244 of them have ECG QTc measurements on Day 2 of medication. We are interested in estimating the mean QTc prolongation, defined as the difference in QTc measurements between Day 2 and Day 0, of all 470 COVID-19 patients who received H+ or A+H+ treatment. However, the QTc prolongation was only measured among the 244 patients who had QTc measurements at Day 2. To improve the estimation, we also collected data on these 470 patients' demographic characteristics and relevant biomarkers from electronic medical records.
Exploratory analysis indicates a strong negative association between prolongation and baseline QTc measurement, and that patients with higher baseline QTc are less likely to have an ECG QTc measurement on Day 2, as demonstrated in Figure S3 in the Supplementary Materials. Other auxiliary variables include treatment (H+, A+H+); demographic characteristics, including gender, age (in years), race (white, black, other), and BMI (log scale); and 7 biomarkers. We used the recommended method, SBART, for two estimands of interest: (i) the mean QTc prolongation among all 470 patients, and (ii) the mean QTc prolongation among the 87 (out of 470) patients who were over 80 years old. We compared two SBART models, with the first model only including baseline QTc and treatment (SBART-subset) and the second model including all covariates (SBART-all).
As visualized in Figure~\ref{fig:app}(b), SBART yields lower estimates of mean QTc prolongation than the raw estimates ignoring selection bias, for both estimands of interest. For estimand (i), including baseline QTc and treatment in SBART leads to a clear drop in the mean prolongation estimate, from a raw estimate of $18.9~(95\% \text{~CI}: 14.8, 23.0)$ milliseconds to $16.7~(95\% \text{~CI}: 13.7, 19.7)$ milliseconds. Additionally adding the other auxiliary variables does not lead to further notable change in the estimates.
For estimand (ii), estimates can be readily obtained by restricting the calculation to the predicted and observed prolongation for patients over 80 years old, without additional model fitting. As for the estimate among all patients, SBART yields smaller estimates and shorter probability intervals than the raw estimator. Although the mean prolongation estimate is higher in this subgroup than among all patients using the raw estimator, the two estimates are similar using SBART.
\section{Discussion}
\label{sec:discuss}
We consider generalization of inference on a descriptive estimand from a non-random sample to a target population in data-rich settings where high-dimensional auxiliary information is available in both the sample and the population, with survey inference being a special case. Existing methods such as post-stratification, raking and MRP are challenging or infeasible to perform due to the high dimensionality, and the need to discretize continuous auxiliary variables before applying such methods leads to loss of information. To address these issues, we propose a regularized prediction approach that models the conditional distribution of the outcomes given the high-dimensional auxiliary variables using Bayesian machine learning techniques. In this paper, we specifically consider BART and soft BART, which handle both discrete and continuous auxiliary variables as well as potential interactions. Besides the auxiliary variables, we also consider modified methods that estimate the propensity score for a unit to be included in the sample and include the estimated propensity score as a covariate in the BART and soft BART models.
Artificial data simulation studies demonstrate that the Bayesian additive-trees-based methods outperform post-stratification (PS) and raking in low-dimensional settings where PS and raking are feasible, as the regularized additive trees better utilize information in the continuous auxiliary variables and avoid model overspecification. The Bayesian additive-trees-based methods also yield valid inference in high-dimensional settings where PS and raking are not feasible, as long as selection bias does not result in sparse data at the tails of relevant continuous auxiliary variables. In high-dimensional settings with sparse signals, SBART, with soft decision trees and sparsity-inducing priors, is less biased and more efficient than BART. In challenging settings where the additive-trees-based methods underperform, including the propensity score in BART can reduce bias and improve credible interval coverage, while such benefit is not obvious for SBART. Therefore, the soft BART prediction method is recommended for generalization of inference with high-dimensional auxiliary variables, as it better utilizes information in the continuous auxiliary variables and more effectively regularizes the effects of irrelevant noise auxiliary variables.
As is demonstrated in the OHARNG mental health study and the COVID-19 study, the proposed methods could be applied in both survey and more general settings, with estimands being overall population as well as subpopulation quantities.
The Bayesian additive-trees-based methods need to be used with caution. More specifically, both BART and SBART prediction fail when selection bias results in very sparse data at the tails of the continuous covariates. Such prediction failure cannot be fixed via robust methods involving propensity scores. Therefore, for important continuous variables associated with the outcomes, the range and distribution in the sample and population need to be checked before using the methods. In some cases, a transformation of such auxiliary variables could be applied to reduce sparsity at the tails.
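The recommended range and distribution check can be automated as a simple tail-overlap diagnostic. Here \texttt{tail\_overlap} is a hypothetical helper, with the 5\% quantile cutoff as an arbitrary default:

```python
def tail_overlap(x_sample, x_pop, q=0.05):
    """Count sampled values at or below the population's lower-q quantile
    and at or above its upper-q quantile; counts near zero flag the
    sparse-tail failure mode of (S)BART prediction."""
    xs = sorted(x_pop)
    lo = xs[int(q * (len(xs) - 1))]
    hi = xs[int((1 - q) * (len(xs) - 1))]
    return (sum(v <= lo for v in x_sample),
            sum(v >= hi for v in x_sample))
```

If either count is near zero for a continuous covariate associated with the outcome, predictions in that tail would be extrapolations and a transformation or a restricted estimand should be considered.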
Although BART and SBART are considered in this paper, the regularized prediction approach is general and any Bayesian machine learning techniques that achieve valid predictions could be applied.
\section*{Acknowledgements}
This work was partially supported by NIH grants R01AG067149 and R21ES029668 and ONR grant N00014-17-1-2141. The authors thank Dr.\ Sandro Galea for sharing the data on the OHARNG service members, Drs.\ Elaine Wan and Marc Waase for sharing the data on COVID-19 patients admitted at Columbia University Irving Medical Center. \vspace*{-8pt}
\bibliographystyle{agsm}
\section{Introduction}
An interesting feature of a non $s$-wave Fermi condensate is that it may have plural superfluid phases, originating from active orbital and/or spin degrees of freedom. Indeed, this possibility has experimentally been confirmed in various Fermi superfluid systems, such as heavy fermion superconductor UPt$_3$ \cite{Sigrist}, as well as superfluid liquid $^3$He \cite{Vollhardt,Mineev}. However, the pairing symmetry that has already been realized is still only the simplest $s$-wave one in cold Fermi gas physics \cite{Jin,Zwierlein,Kinast,Bartenstein}. Thus, going beyond this situation is a crucial challenge in this research field.
\par
In this regard, a $p$-wave superfluid Fermi gas is a strong candidate, and the possibility of this spin-triplet pairing state has extensively been discussed both experimentally \cite{Regal,Regal2,Ticknor,Zhang,Schunck,Gunter,Gaebler2,Inaba,Fuchs,Mukaiyama,Maier} and theoretically \cite{Gurarie,Ohashi,Ho,Botelho,Iskin,Iskin2,Cheng,Levinsen,Gurarie2,Grosfeld,Iskin3,Mizushima,Han,Mizushima2,Inotani}. Several experimental groups have discovered a $p$-wave Feshbach resonance in $^{40}$K \cite{Regal,Ticknor} and $^6$Li \cite{Zhang,Schunck} Fermi gases, so that we can now tune the strength of a $p$-wave interaction from the weak-coupling regime to the strong-coupling regime, by adjusting an external magnetic field. This experimental development has realized $p$-wave Feshbach molecules \cite{Regal2,Zhang,Gaebler2,Inaba,Fuchs,Mukaiyama}. Thus, although one still needs to overcome some difficulties, such as the three-body loss \cite{Levinsen,Castin,Gurarie3}, as well as the dipolar relaxation \cite{Zhang,Gaebler2} (that destroy $p$-wave molecules \cite{Chevy}), the $p$-wave superfluid state seems a very promising non $s$-wave pairing state in an ultracold Fermi gas.
\par
It has been predicted \cite{Gurarie,Gurarie2} that, when a $p$-wave interaction has a uniaxial anisotropy, a $p$-wave superfluid Fermi gas may have two superfluid phases with different $p$-wave pairing symmetries. Such an anisotropic $p$-wave pairing interaction is considered to be realized in a $^{40}$K Fermi gas \cite{Ticknor}, because a magnetic dipole-dipole interaction has been observed to split a $p$-wave Feshbach resonance into a $p_x$-wave channel and degenerate $p_y$-wave and $p_z$-wave channels when an external magnetic field is applied in the $x$ direction. Since the observed resonance field of the $p_x$-wave Feshbach resonance is higher than that of the other, degenerate channels \cite{Ticknor}, the $p$-wave pairing interaction associated with this Feshbach resonance has a uniaxial anisotropy; that is, the $p_x$-wave pairing interaction $U_x$ is stronger than the $p_y$-wave ($U_y$) and $p_z$-wave ($U_z$) interactions. As a result, the $p_x$-wave superfluid state has the highest superfluid phase transition temperature $T_{\rm c}^{p_x}$. In this case, Refs. \cite{Gurarie,Gurarie2} have pointed out that the system experiences a second phase transition, from the $p_x$-wave state to the $p_x+ip_y$-wave state, at $T_{\rm c}^{p_x+ip_y}$ ($<T_{\rm c}^{p_x}$), when the uniaxial anisotropy satisfies a certain condition. Thus, the realization of a $p$-wave superfluid Fermi gas would enable us to study the physics of multiple superfluid phases, from the weak-coupling regime to the strong-coupling limit, in a systematic manner.
\par
When the above-mentioned $p$-wave superfluid Fermi gas is realized, the existence of strong pairing fluctuations near $T_{\rm c}^{p_x+ip_y}$ is an interesting research topic. In the $p_x$-wave superfluid phase, since single-particle excitations are gapless in the nodal direction ($\perp p_x$) of the $p_x$-wave superfluid order parameter $\Delta_{p_x}({\bm p})\propto p_x$, $p_y$-wave and $p_z$-wave pairing fluctuations can continue to develop, becoming strongest at $T_{\rm c}^{p_x+ip_y}$. Thus, even far below $T_{\rm c}^{p_x}$, an anisotropic pseudogap phenomenon is expected in the nodal direction of the $p_x$-wave superfluid order parameter near $T_{\rm c}^{p_x+ip_y}$.
\par
In an isotropic $s$-wave superfluid Fermi gas, since the BCS gap opens in all momentum directions of single-particle excitations, pairing fluctuations are soon suppressed below the superfluid phase transition temperature $T_{\rm c}$. Indeed, it has been shown that a pseudogap in the density of states above $T_{\rm c}$ \cite{Stewart,Gaebler,Perali2,Perali,Tsuchiya1,Levin,Hui,Bulgac} soon changes to the $s$-wave BCS superfluid gap below $T_{\rm c}$ \cite{Watanabe}. Even in the $p_x$-wave case, the enhancement of pairing fluctuations around the node would not occur in the absence of the $p_y$-wave and $p_z$-wave interactions, because $p_x$-wave pairing fluctuations are soon suppressed by the $p_x$-wave superfluid order below $T_{\rm c}^{p_x}$. Thus, the above-mentioned pseudogap phenomenon in the $p_x$-wave state near $T_{\rm c}^{p_x+ip_y}$ is peculiar to an unconventional Fermi superfluid with a nodal superfluid order parameter, as well as with multiple superfluid phases.
\par
In this paper, we theoretically investigate strong-coupling properties of a one-component $p$-wave superfluid Fermi gas with a uniaxially anisotropic $p$-wave pairing interaction ($U_x>U_y=U_z$). Including $p$-wave pairing fluctuations within the framework of a strong-coupling $T$-matrix approximation, we determine $T_{\rm c}^{p_x}$ and $T_{\rm c}^{p_x+ip_y}$. In the $p_x$-wave superfluid phase, we calculate the angle-resolved single-particle density of states, to clarify that pairing fluctuations in the nodal direction ($\perp p_x$) of the $p_x$-wave superfluid order parameter continue to develop, leading to a pseudogapped single-particle excitation spectrum in the nodal direction.
\par
The paper is organized as follows. In Sec. II, we explain our strong-coupling formalism for a $p$-wave superfluid Fermi gas. In Sec. III, we show our numerical results on $T_{\rm c}^{p_x}$ and $T_{\rm c}^{p_x+ip_y}$. Here, we clarify the condition for the appearance of the $p_x$-wave and $p_x+ip_y$-wave superfluid phases. In Sec. IV, we examine the angle-resolved single-particle density of states (ARDOS). We show that a pseudogap anisotropically appears in ARDOS, not only in the normal state near $T_{\rm c}^{p_x}$, but also in the $p_x$-wave superfluid phase near $T_{\rm c}^{p_x+ip_y}$. To characterize this anisotropic many-body phenomenon, we introduce the characteristic temperature $T^*$ as the temperature below which a dip structure appears in ARDOS. Throughout this paper, we take $\hbar=k_{\rm B}=1$, and the volume of the system $V$ is taken to be unity, for simplicity.
\par
\section{Formulation}
\par
We consider a one-component Fermi gas with a uniaxially anisotropic $p$-wave interaction, described by the Hamiltonian,
\begin{equation}
H=\sum_{\bm p} \xi_{\bm p}c_{\bm p}^{\dagger}c_{\bm p}
-\frac{1}{2}\sum_{{\bm p},{\bm p}',{\bm q}}
\sum_{i=x,y,z} F_n({\bm p}) p_i U_i p'_i F_n({\bm p}')
c_{{\bm p}+{\bm q}/2}^\dagger c_{-{\bm p}+{\bm q}/2}^\dagger
c_{-{\bm p}'+{\bm q}/2}c_{{\bm p}'+{\bm q}/2}.
\label{eq.1}
\end{equation}
Here, $c^{\dagger}_{\bm p}$ is the creation operator of a Fermi atom with the kinetic energy $\xi_{\bm p}=\varepsilon_{\bm p}-\mu=p^2/2m-\mu$, measured from the Fermi chemical potential $\mu$ (where $m$ is the atomic mass). $-U_i$ ($<0$) is a pairing interaction in the $p_i$-wave Cooper channel ($i=x,y,z$), having the uniaxial anisotropy $U_x > U_y= U_z$ \cite{Ticknor}. In this paper, we do not deal with details of a $p$-wave Feshbach resonance, but simply treat $(U_x,U_y,U_z)$ as a tunable parameter set. To eliminate the ultraviolet divergence, the last term in Eq. (\ref{eq.1}) involves the cutoff function \cite{Ohashi,Ho,Botelho,Iskin2,Inotani},
\begin{equation}
F_n({\bm p})={1 \over 1+(p/p_{\rm c})^{2n}},
\label{eq.2}
\end{equation}
where $p_{\rm c}$ is a cutoff momentum. For simplicity, we use the same cutoff function $F_n({\bm p})$ for all the $p$-wave Cooper channels. Equation (\ref{eq.2}) gives a Lorentzian cutoff when $n=1$ \cite{Iskin2}, and gives a sharp cutoff when $n=\infty$ \cite{Inotani}. We will discuss the cutoff dependence of our results in Sec. III.
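As a quick numerical illustration of Eq. (\ref{eq.2}) (a sketch, not part of the paper; momenta in units of $p_{\rm F}$, and $p_{\rm c}=27p_{\rm F}$ anticipates the parameter choice made in Sec. III), the two limiting cases mentioned above can be checked directly:

```python
def F(p, pc, n):
    """Cutoff function of Eq. (2): F_n(p) = 1 / (1 + (p/pc)^(2n))."""
    return 1.0 / (1.0 + (p / pc) ** (2 * n))

pc = 27.0  # cutoff momentum in units of p_F (value used in Sec. III)

# n = 1 gives a Lorentzian-type cutoff with a slowly decaying tail ...
lorentzian_tail = F(2.0 * pc, pc, 1)    # = 1/(1 + 2^2) = 0.2
# ... while large n approaches a sharp step at p = pc
sharp_inside = F(0.5 * pc, pc, 50)      # ~1 well below the cutoff
sharp_outside = F(2.0 * pc, pc, 50)     # ~0 well above the cutoff
```

For every $n$, $F_n(p_{\rm c})=1/2$, so $p_{\rm c}$ always marks the midpoint of the crossover between the two regimes.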
\par
As usual \cite{Ohashi,Ho,Botelho,Iskin2,Inotani}, we conveniently measure the strength of a $p$-wave interaction in terms of the $p_i$-wave scattering volume $v_i$ ($i=x,y,z$), as well as the effective range $k_0$, that are given by, respectively,
\begin{equation}
{4\pi v_i \over m}=-
{U_i \over 3}
{1 \over \displaystyle 1-{U_i \over 3}\sum_{\bm p}{p^2 \over 2\varepsilon_{\bm p}}F_n^2({\bm p})},
\label{eq.3}
\end{equation}
\begin{equation}
k_0=-{4\pi \over m^2}
\sum_{\bm p}{p^2 \over 2\varepsilon_{\bm p}^2}F_n^2({\bm p}).
\label{eq.4}
\end{equation}
Since we take the same cutoff function $F_n({\bm p})$ for all the $p_i$-wave interaction channels, the effective range $k_0$ is channel-independent. We also introduce the anisotropy parameter,
\begin{equation}
\delta v^{-1}\equiv v^{-1}_x-v^{-1}_y=v^{-1}_x-v^{-1}_z~(>0).
\label{eq.5b}
\end{equation}
Then, the $p$-wave interaction can be specified by the parameter set $(v_x^{-1}, \delta v^{-1}, k_0, n)$. The weak-coupling and strong-coupling sides are characterized by $(p_{\rm F}^3v_x)^{-1}\lesssim 0$ and $(p_{\rm F}^3v_x)^{-1}\gtrsim 0$, respectively, where $p_{\rm F}$ is the Fermi momentum.
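To make the map between the bare coupling $U_i$ and the scattering volume $v_i$ concrete, Eq. (\ref{eq.3}) can be evaluated numerically. The following sketch (an illustration, not from the paper; units $\hbar=p_{\rm F}=1$ and $m=1/2$, so that $\varepsilon_{\rm F}=1$, with $n=3$ and $p_{\rm c}=27p_{\rm F}$ as in Sec. III) converts the momentum sum to $\sum_{\bm p}\to\int d^3p/(2\pi)^3$ and shows that $(p_{\rm F}^3 v)^{-1}$ sweeps from large negative values (weak coupling) through zero (resonance, at $U=3/c$ with $c=\sum_{\bm p}p^2F_n^2({\bm p})/2\varepsilon_{\bm p}$) to positive values as $U$ grows:

```python
import numpy as np
from scipy.integrate import quad

m, pF = 0.5, 1.0            # hbar = 1, eps_F = pF^2/(2m) = 1
pc, n = 27.0, 3             # cutoff parameters of Sec. III

def F2(p):
    """Square of the cutoff function F_3(p)."""
    return 1.0 / (1.0 + (p / pc) ** (2 * n)) ** 2

# c = sum_p p^2 F_n^2 / (2 eps_p) -> (m / 2 pi^2) int_0^inf p^2 F_n^2 dp
c, _ = quad(lambda p: (m / (2.0 * np.pi**2)) * p**2 * F2(p), 0.0, 20.0 * pc)

def inv_pF3_v(U):
    """(p_F^3 v)^{-1} from Eq. (3): 4 pi v / m = -(U/3) / (1 - (U/3) c)."""
    v = -(m / (4.0 * np.pi)) * (U / 3.0) / (1.0 - (U / 3.0) * c)
    return 1.0 / (pF**3 * v)

U_res = 3.0 / c             # pole of Eq. (3): v^{-1} crosses zero here
```

Weak attraction ($U\ll U_{\rm res}$) gives a small negative $v$, i.e. $(p_{\rm F}^3v)^{-1}\to-\infty$, while just above the resonance $(p_{\rm F}^3v)^{-1}$ turns positive, which is the strong-coupling side referred to in the text.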
\par
To deal with strong-coupling phenomena in the superfluid phase, it is convenient to rewrite the model Hamiltonian in Eq. (\ref{eq.1}) into the sum of the mean-field BCS part and the term describing fluctuation corrections \cite{Watanabe,OhashiTakada}. Under the Nambu representation \cite{Schrieffer}, we have,
\begin{eqnarray}
H=\frac{1}{2}\sum_{\bm p} \Psi_{\bm p}^{\dagger}
\left[ \xi_{\bm p} \tau_3 - \hat{\Delta}({\bm p}) \right] \Psi_{\bm p}
-\frac{1}{2}\sum_{{\bm q},i=x,y,z} U_i\rho_i^+({\bm q})\rho_i^-(-{\bm q}),
\label{eq.7}
\end{eqnarray}
where
\begin{eqnarray}
\Psi_{\bm p}=\left(
\begin{array}{c}
c_{\bm p} \\
c_{-{\bm p}} ^{\dagger}
\end{array}
\right)
\label{eq.6}
\end{eqnarray}
is the two-component Nambu field, and $\tau_i$ ($i=x,y,z$) are Pauli matrices acting on particle-hole space \cite{Schrieffer}. The first term in Eq. (\ref{eq.7}) is just the mean-field BCS Hamiltonian, where
\begin{eqnarray}
\hat{\Delta}({\bm p})=
\left(
\begin{array}{cc}
0 & \Delta({\bm p}) \\
\Delta^*({\bm p}) & 0
\end{array}
\right)
\label{eq.6b}
\end{eqnarray}
is a $2\times 2$ matrix $p$-wave superfluid order parameter. Here,
\begin{eqnarray}
\Delta({\bm p})={\bm b} \cdot {\bm p}F_n({\bm p}),
\label{eq.8}
\end{eqnarray}
where ${\bm b}=(b_x,b_y,b_z)$ has the form,
\begin{equation}
b_i=U_i\sum_{\bm p}p_iF_n({\bm p})
\langle c_{-{\bm p}}c_{{\bm p}} \rangle.
\label{eq.8b}
\end{equation}
The second term in Eq. (\ref{eq.7}) gives fluctuation corrections to the mean-field Hamiltonian, where the so-called generalized density operator \cite{Watanabe,OhashiTakada},
\begin{eqnarray}
\rho_i^\pm ({\bm q})= \sum_{\bm p} p_i F_n({\bm p}) \Psi_{{\bm p} +\frac{\bm q}{2}}^{\dagger} \tau_\pm \Psi_{{\bm p} -\frac{\bm q}{2}},
\label{eq.9}
\end{eqnarray}
physically describes $p_i$-wave superfluid fluctuations (where $\tau_\pm=\left( \tau_1\pm i\tau_2 \right)/2$).
\par
\begin{figure}
\centerline{\includegraphics[width=10cm]{fig1.eps}}
\caption{(a) Self-energy correction $\hat{\Sigma}({\bm p}, i\omega_n)$ in the $T$-matrix approximation (TMA). The wavy line is the particle-particle scattering matrix $\Gamma_{i,j}^{s,s'}({\bm q}, i\nu_n)$ given in (b). The solid line and the dashed line describe the mean field single-particle Green's function $\hat{G}_0({\bm p},i \omega_n)$, and the $p$-wave interaction, respectively. In (a), the factor $(p_i-q_i/2)F_n({\bm p}-{\bm q}/2)\tau_s$ is assigned to each vertex (solid circle). In (b), we assign $p_iF_n({\bm p})\tau_s$ to each vertex. In this figure, $-s$ means the opposite sign to $s=\pm$.}
\label{fig1}
\end{figure}
\par
Strong-coupling corrections to single-particle excitations can be conveniently described by the self-energy $\hat{\Sigma}({\bm p},i\omega_n)$ in the single-particle thermal Green's function,
\begin{eqnarray}
\hat{G}({\bm p}, i\omega_n)=\frac{1}{\hat{G}_0^{-1}\left({\bm p}, i\omega_n \right)-\hat{\Sigma}\left({\bm p}, i\omega_n \right)},
\label{eq.10}
\end{eqnarray}
where $\omega_n$ is the fermion Matsubara frequency. In Eq. (\ref{eq.10}),
\begin{eqnarray}
\hat{G}_0({\bm p},i\omega_n)=\frac{1}{i\omega_n-\xi_{\bm p} \tau_3
+\hat{\Delta}({\bm p})}
\label{eq.11}
\end{eqnarray}
is the single-particle Green's function in the mean-field level. Treating the last term in Eq. (\ref{eq.7}) within the $T$-matrix approximation (TMA) \cite{Perali,Tsuchiya1,Watanabe}, we have the diagrammatic expression for the self-energy $\hat{\Sigma}({\bm p},i\omega_n)$ shown in Fig. \ref{fig1}(a) \cite{note1}, which gives
\begin{eqnarray}
{\hat \Sigma}({\bm p},i\omega_n)
&=&
{2 \over \beta}
\sum_{i,j=x,y,z}
\sum_{{\bm q},i\nu_n}
F_n^2\left({\bm p}-\frac{{\bm q}}{2}\right)
\left[p_i-{q_i \over 2}\right]
\left[p_j-{q_j \over 2}\right]
\nonumber
\\
&\times&
\left(
\begin{array}{cc}
G_0^{22}({\bm p}-{\bm q},i\omega_n-i\nu_n)
\Gamma^{-+}_{i,j}({\bm q},i\nu_n) &
G_0^{21}({\bm p}-{\bm q},i\omega_n-i\nu_n)
\Gamma^{++}_{i,j}({\bm q},i\nu_n) \\
G_0^{12}({\bm p}-{\bm q},i\omega_n-i\nu_n)
\Gamma^{--}_{i,j}({\bm q},i\nu_n) &
G_0^{11}({\bm p}-{\bm q},i\omega_n-i\nu_n)
\Gamma^{+-}_{i,j}({\bm q},i\nu_n)
\end{array}
\right).
\nonumber
\\
\label{eq.12}
\end{eqnarray}
Here, $\nu_n$ is the boson Matsubara frequency. The TMA particle-particle scattering matrix $\Gamma_{i,j}^{s,s'}({\bm q},i\nu_n)$ in Eq. (\ref{eq.12}) obeys the equation,
\begin{eqnarray}
\Gamma_{i,j}^{s,s'}({\bm q},i\nu_n)=-U_i \delta_{i,j}\delta_{s,-s'}
-U_i \sum_{s''=\pm}\sum_{k=x,y,z}
\Pi_{i,k}^{s, s''}({\bm q},i\nu_n)
\Gamma_{k,j}^{-s'', s'}({\bm q},i\nu_n),
\label{eq.14}
\end{eqnarray}
where $-s$ means the opposite sign to $s=\pm$, and
\begin{eqnarray}
\Pi_{i,j}^{s,s'}({\bm q},i\nu_n)=
\frac{1}{\beta}\sum_{\bm p} p_ip_j F_n^2(p){\rm Tr} \left[
\tau_s \hat{G}_0\left({\bm p}+\frac{\bm q}{2},i\omega_n \right)
\tau_{s'} \hat{G}_0\left( {\bm p}-\frac{\bm q}{2},i\omega_n-i\nu_n \right)
\right]
\label{eq.15}
\end{eqnarray}
is the pair correlation function. In particular, $\Pi_{i,i}^{s,s'}({\bm q},i\nu_n)$ describes fluctuations in the $p_i$-wave Cooper channel, and $\Pi_{i,j}^{s,s'}({\bm q},i\nu_n)$ ($i\ne j$) describes coupling between $p_i$-wave and $p_j$-wave pairing fluctuations.
\par
In the present anisotropic case ($U_x>U_y=U_z$), the highest superfluid phase transition temperature is obtained in the $p_x$-wave Cooper channel. As in the $s$-wave case \cite{Watanabe,Thouless}, the TMA gap equation for the $p_x$-wave superfluid order parameter $\Delta_{p_x}({\bm p})=b_xp_xF_n({\bm p})$ is obtained from the Thouless criterion $\left[ \Gamma_{x,x}^{{\rm ph}}({\bm q}=0,i\nu_n=0)\right]^{-1}=0$ in the $p_x$-wave Cooper channel, where
\begin{eqnarray}
\Gamma_{i,j}^{{\rm ph}}({\bm q},i\nu_n)=\frac{1}{4}
\left[
\Gamma_{i,j}^{-+}({\bm q},i\nu_n)
+\Gamma_{i,j}^{+-}({\bm q},i\nu_n)
-\Gamma_{i,j}^{--}({\bm q},i\nu_n)
-\Gamma_{i,j}^{++}({\bm q},i\nu_n)
\right],
\label{eq.14_2}
\end{eqnarray}
describes the phase fluctuations of the $p_x$-wave superfluid order parameter $\Delta_{p_x}({\bm p})$. Physically, this guarantees the existence of a gapless Goldstone mode associated with the broken $U(1)$ gauge symmetry. Noting that $b_y=b_z=0$ in the $p_x$-wave superfluid phase, we obtain the gap equation in the $p_x$-wave superfluid state as
\begin{eqnarray}
1={12\pi v_x \over m}\sum_{\bm p}p_x^2F_n^2({\bm p})
\left[
{1 \over 2\sqrt{\xi_{\bm p}^2+|\Delta_{p_x}({\bm p})|^2}}
\tanh{\sqrt{\xi_{\bm p}^2+|\Delta_{p_x}({\bm p})|^2} \over 2T}
-
{1 \over 2\varepsilon_{\bm p}}
\right].
\label{eq.18q}
\end{eqnarray}
The equation for $T_{\rm c}^{p_x}$ is obtained from Eq. (\ref{eq.18q}) by setting $\Delta_{p_x}({\bm p})=0$ as
\begin{eqnarray}
1={12\pi v_x \over m}\sum_{\bm p}p_x^2F_n^2({\bm p})
\left[
{1 \over 2\xi_{\bm p}}\tanh{\xi_{\bm p} \over 2T_{\rm c}^{p_x}}
-
{1 \over 2\varepsilon_{\bm p}}
\right].
\label{eq.18}
\end{eqnarray}
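Because $\sum_{\bm p}p_x^2(\cdots)$ depends only on $|{\bm p}|$ after the angular average $\int d\Omega\,\cos^2\theta_{\bm p}=4\pi/3$, Eq. (\ref{eq.18}) reduces to a one-dimensional radial integral. The sketch below (an illustration only, with the chemical potential frozen at $\mu=\varepsilon_{\rm F}$, which is valid only in the weak-coupling regime; units $\varepsilon_{\rm F}=p_{\rm F}=1$, $m=1/2$) evaluates the bracketed sum and checks that it decreases monotonically in $T$, so that, for a given $v_x$, a bisection in temperature finds at most one solution of Eq. (\ref{eq.18}):

```python
import numpy as np
from scipy.integrate import quad

m, mu = 0.5, 1.0            # hbar = 1; mu frozen at eps_F (weak coupling only)
pc, n = 27.0, 3

def F2(p):
    return 1.0 / (1.0 + (p / pc) ** (2 * n)) ** 2

def S(T):
    """sum_p p_x^2 F_n^2 [tanh(xi/2T)/(2 xi) - 1/(2 eps)] as a radial integral."""
    def integrand(p):
        eps = p * p / (2.0 * m)
        xi = eps - mu
        # tanh(x)/x -> 1 as x -> 0: regularize the xi = 0 point by its limit 1/(4T)
        th = np.tanh(xi / (2.0 * T)) / (2.0 * xi) if abs(xi) > 1e-10 else 1.0 / (4.0 * T)
        return p**4 * F2(p) * (th - 1.0 / (2.0 * eps))
    # angular average of p_x^2 turns (2 pi)^{-3} int d^3p into (1/6 pi^2) int p^4 dp
    val, _ = quad(integrand, 1e-9, 10.0 * pc, points=[1.0], limit=400)
    return val / (6.0 * np.pi**2)
```

In this reduced form, Eq. (\ref{eq.18}) reads $m/(12\pi v_x)=S(T_{\rm c}^{p_x})$, and the monotonicity of $S(T)$ guarantees a unique $T_{\rm c}^{p_x}$ whenever a solution exists.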
\par
In the case of a uniaxially anisotropic $p$-wave interaction, Ref. \cite{Gurarie2} pointed out that, without loss of generality, one may restrict the structure of the $p$-wave superfluid order parameter to the form,
\begin{eqnarray}
{\bm b}=
\left(
\begin{array}{c}
b_x\\
b_y\\
b_z\\
\end{array}
\right)
=
\left(
\begin{array}{c}
B_x\\
iB_y\\
0\\
\end{array}
\right),
\label{eq.18b1}
\end{eqnarray}
where $B_x$ and $B_y$ are real quantities. Thus, in the $p_x$-wave superfluid phase below $T_{\rm c}^{p_x}$ (where $B_x=b_x\ne0$ and $B_y=0$), the other possible superfluid instability is only associated with the $p_x+ip_y$-wave one, having the superfluid order parameter,
\begin{eqnarray}
\Delta_{p_x+ip_y}({\bm p})=[B_xp_x+iB_yp_y]F_n({\bm p}).
\label{eq.18b}
\end{eqnarray}
Since $B_x$ is already present below $T_{\rm c}^{p_x}$, the superfluid phase transition temperature $T_{\rm c}^{p_x+ip_y}$ is determined from the Thouless criterion \cite{Thouless} in the $p_y$-wave Cooper channel $\left[ \Gamma_{y,y}^{{\rm ph}}({\bm q}=0,i\nu_n=0)\right]^{-1}=0$,
\begin{eqnarray}
1={12\pi v_y \over m}\sum_{\bm p}p_y^2F_n^2({\bm p})
\left[
{1 \over 2\sqrt{\xi_{\bm p}^2+|\Delta_{p_x}({\bm p})|^2}}
\tanh{\sqrt{\xi_{\bm p}^2+|\Delta_{p_x}({\bm p})|^2} \over 2T_{\rm c}^{p_x+ip_y}}
-
{1 \over 2\varepsilon_{\bm p}}
\right],
\label{eq.18c}
\end{eqnarray}
where the $p_x$-wave superfluid order parameter $\Delta_{p_x}({\bm p})=b_xp_xF_n({\bm p})$ obeys the gap equation (\ref{eq.18q}).
\par
We numerically solve Eqs. (\ref{eq.18q}), (\ref{eq.18}), and (\ref{eq.18c}), to self-consistently determine $T_{\rm c}^{p_x}$, $T_{\rm c}^{p_x+ip_y}$, and $\Delta_{p_x}({\bm p})$. In this procedure, we also solve the equation for the total number $N_{\rm F}$ of Fermi atoms,
\begin{equation}
N_{\rm F}={T \over 2}\sum_{{\bm p},i\omega_n}
{\rm Tr}
[\tau_3{\hat G}({\bm p},i\omega_n)],
\label{eq.18d}
\end{equation}
to include strong-coupling corrections to the Fermi chemical potential $\mu$.
\par
We examine the anisotropic pseudogap phenomenon by calculating the angle-resolved single-particle density of states (ARDOS),
\begin{equation}
\rho(\omega,\hat{\bm p})=
-{1 \over \pi}
\int_0^\infty {p^2 dp \over (2\pi)^3}
{\rm Im } \left[
G_{11}({\bm p},i\omega_n \to \omega + i\delta)
\right],
\label{eq.27}
\end{equation}
where $\hat{\bm p}={\bm p}/|{\bm p}|$, and $G_{11}({\bm p},i\omega_n \to \omega + i\delta)$ is the (1,1) component of the analytic continued TMA Green's function in Eq. (\ref{eq.10}). ARDOS in Eq. (\ref{eq.27}) is related to the ordinary density of states $\rho(\omega)$ as
\begin{equation}
\rho(\omega)=\int \sin\theta_{\bm p}d\theta_{\bm p}d\phi_{\bm p}
\rho(\omega,\hat{\bm p}).
\label{eq.27b}
\end{equation}
Here, we choose the $p_x$ axis as the polar axis ($p_x=p \cos \theta_{\bm p}$).
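To illustrate what Eq. (\ref{eq.27}) measures, the sketch below evaluates the ARDOS at the mean-field level, i.e. with $\hat{\Sigma}=0$, so that pairing fluctuations are ignored (an illustration only, with an assumed gap amplitude $b_x=0.5\,\varepsilon_{\rm F}/p_{\rm F}$, broadening $\delta=0.02$, and units $\varepsilon_{\rm F}=p_{\rm F}=1$, $m=1/2$). It exhibits the basic anisotropy of the $p_x$-wave state: $\rho(\omega=0,\theta_{\bm p})$ is strongly suppressed in the antinodal direction $\theta_{\bm p}=0$ but remains finite in the nodal direction $\theta_{\bm p}=\pi/2$:

```python
import numpy as np
from scipy.integrate import quad

m, mu = 0.5, 1.0                 # units: eps_F = p_F = 1
pc, n = 27.0, 3
b, delta = 0.5, 0.02             # assumed gap amplitude and broadening

def Fc(p):
    return 1.0 / (1.0 + (p / pc) ** (2 * n))

def ardos_mf(w, theta):
    """Mean-field version of Eq. (27): -(1/pi) int p^2 dp/(2 pi)^3 Im G0_11."""
    def integrand(p):
        eps = p * p / (2.0 * m)
        xi = eps - mu
        gap = b * p * np.cos(theta) * Fc(p)   # Delta_px(p) = b p_x F_n(p)
        z = w + 1j * delta
        # (1,1) component of Eq. (11) after analytic continuation
        g11 = (z + xi) / (z * z - xi * xi - gap * gap)
        return p * p / (2.0 * np.pi) ** 3 * (-g11.imag / np.pi)
    val, _ = quad(integrand, 0.0, 20.0, points=[1.0], limit=400)
    return val

rho_antinode = ardos_mf(0.0, 0.0)            # gapped direction (parallel to p_x)
rho_node = ardos_mf(0.0, np.pi / 2.0)        # nodal direction (perpendicular to p_x)
```

In the full TMA calculation, the fluctuation self-energy additionally produces a dip (pseudogap) in this otherwise gapless nodal direction, which is the effect analyzed in Sec. IV.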
\par
We note that, since the anisotropy of the $p_x$-wave superfluid order parameter $\Delta_{p_x}({\bm p})\propto p_x$ lowers the symmetry of the system, the computation of the number equation (\ref{eq.18d}) below $T_{\rm c}^{p_x}$ is much more time-consuming than in the symmetric $s$-wave case. To avoid this difficulty, in this paper, we approximate the TMA Green's function ${\hat G}({\bm p},i\omega_n)$ in the number equation (\ref{eq.18d}) by
\begin{equation}
{\hat G}({\bm p},i\omega_n)\simeq{\hat G}_0({\bm p},i\omega_n)+
{\hat G}_0({\bm p},i\omega_n){\hat \Sigma}({\bm p},i\omega_n){\hat G}_0
({\bm p},i\omega_n).
\label{eq.18e}
\end{equation}
Equation (\ref{eq.18e}) is just the same form as the Green's function in the strong-coupling theory developed by Nozi\`eres and Schmitt-Rink (NSR) \cite{NSR}. The NSR theory has extensively been used in the $s$-wave case, to successfully explain the BCS-BEC crossover behavior of the superfluid phase transition temperature \cite{NSR,Melo}, as well as the superfluid order parameter in the crossover region \cite{Randeria,OhashiGriffin}. The NSR theory has also been extended to the $p$-wave case with $U_x=U_y=U_z$ \cite{Ohashi}. Thus, we expect that the NSR Green's function in Eq. (\ref{eq.18e}) also works in determining $T_{\rm c}^{p_x}$, $T_{\rm c}^{p_x+ip_y}$, $\mu$ and $\Delta_{p_x}({\bm p})$.
\par
On the other hand, it is also known that the NSR theory unphysically gives negative density of states in the BCS-BEC crossover region \cite{Tsuchiya1,Kashimura2012}. Since this serious problem is absent in TMA, we use the TMA Green's function in Eq. (\ref{eq.10}), in considering single-particle properties of a $p_x$-wave Fermi superfluid.
\par
Here, we summarize our detailed parameter settings. For the effective range, we take $k_0=-30p_{\rm F}$, following the experimental result on a $^{40}$K Fermi gas \cite{Ticknor}. For the cutoff function $F_n({\bm p})$ in Eq. (\ref{eq.2}), we set $n=3$. The cutoff momentum $p_{\rm c}$ in $F_n({\bm p})$ is determined so as to reproduce $k_0=-30p_{\rm F}$, which gives $p_{\rm c}=27p_{\rm F}$. Since we only deal with the normal state and the $p_x$-wave superfluid state, ARDOS in Eq. (\ref{eq.27}) is actually independent of the angle $\phi_{\bm p}$ around the $p_x$ axis. Thus, the anisotropy can be simply specified by the polar angle $\theta_{\bm p}$ measured from the $p_x$ axis. Noting this, we write Eq. (\ref{eq.27}) as $\rho(\omega,\theta_{\bm p})$ in what follows.
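The quoted relation between $p_{\rm c}$ and $k_0$ can be checked directly: since $p^2/2\varepsilon_{\bm p}^2=2m^2/p^2$, Eq. (\ref{eq.4}) collapses to the $m$-independent form $k_0=-(4/\pi)\int_0^\infty F_n^2(p)\,dp$. A short numerical check (in units of $p_{\rm F}$):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

n = 3

def k0(pc):
    """Effective range of Eq. (4), reduced to k0 = -(4/pi) int_0^inf F_n(p)^2 dp."""
    F2 = lambda p: 1.0 / (1.0 + (p / pc) ** (2 * n)) ** 2
    val, _ = quad(F2, 0.0, 50.0 * pc)
    return -(4.0 / np.pi) * val

# invert the relation: which pc reproduces k0 = -30 p_F?
pc_star = brentq(lambda pc: k0(pc) + 30.0, 1.0, 100.0)
```

The root comes out at $p_{\rm c}\simeq 27p_{\rm F}$, consistent with the value quoted above.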
\par
\section{Phase diagram of an ultracold Fermi gas with $p$-wave interaction}
\par
Figure \ref{fig2} shows the phase diagram of a one-component ultracold Fermi gas in terms of the $p$-wave interaction strength, $(p_{\rm F}^3v_x)^{-1}$, and the temperature. When $\delta v^{-1}=0$ ($U_x=U_y=U_z$), Fig. \ref{fig2}(a) shows that the superfluid phase is dominated by the $p_x+ip_y$-wave pairing state. This superfluid region gradually shrinks as the uniaxial anisotropy $\delta v^{-1}$ increases, as shown in Figs. \ref{fig2}(b) and (c). Since the present anisotropy ($U_x>U_y=U_z$) favors the $p_x$-wave symmetry, the region of the $p_x+ip_y$-wave state eventually vanishes, as shown in Fig. \ref{fig2}(d). We briefly note that the overall structure of this phase diagram is consistent with previous work based on mean-field analyses \cite{Gurarie,Gurarie2}.
\par
\begin{figure}
\centerline{\includegraphics[width=10cm]{fig2.eps}}
\caption{(Color online) Phase diagram of a one-component Fermi gas with a uniaxially anisotropic $p$-wave pairing interaction. The solid line and the dashed line are $T_{\rm c}^{p_x}$ and $T_{\rm c}^{p_x+ip_y}$, respectively. $T_{\rm F}$ is the Fermi temperature.}
\label{fig2}
\end{figure}
\par
In addition to $T_{\rm c}^{p_x}$, the coupled equations (\ref{eq.18}) and (\ref{eq.18d}) also give the Fermi chemical potential $\mu(T=T_{\rm c}^{p_x})$ shown in Fig. \ref{fig3}, which exhibits the typical BCS-BEC crossover behavior \cite{Ohashi,Ho,Inotani,NSR,Tsuchiya1,Watanabe,Perali,Melo,Randeria}. That is, with increasing interaction strength, $\mu(T=T_{\rm c}^{p_x})$ gradually deviates from the Fermi energy $\varepsilon_{\rm F}$, becoming negative in the strong-coupling regime, where $(p_{\rm F}^3v_x)^{-1}\gtrsim 0$.
\par
At $T_{\rm c}^{p_x+ip_y}$, the coupled equations (\ref{eq.18q}), (\ref{eq.18c}) and (\ref{eq.18d}) also give $\mu(T=T_{\rm c}^{p_x+ip_y})$, as well as the $p_x$-wave superfluid order parameter $\Delta_{p_x}({\bm p},T=T_{\rm c}^{p_x+ip_y})$. Figure \ref{fig3} shows that $\mu(T=T_{\rm c}^{p_x+ip_y})\simeq \mu(T=T_{\rm c}^{p_x})$ in the whole interaction regime, indicating that the chemical potential is almost $T$-independent in the $p_x$-wave superfluid phase. The $p_x$-wave superfluid order parameter $\Delta_{p_x}({\bm p},T=T_{\rm c}^{p_x+ip_y})$ is, of course, already nonzero above $T_{\rm c}^{p_x+ip_y}$, as shown in Fig. \ref{fig4}. Although the pairing symmetry changes from $p_x$-wave to $p_x+ip_y$-wave at $T_{\rm c}^{p_x+ip_y}$, it is known \cite{Gurarie2} that this symmetry change occurs smoothly, in the sense that $B_y$ in Eq. (\ref{eq.18b}) continuously grows from zero below $T_{\rm c}^{p_x+ip_y}$. Thus, a second-order phase transition is expected at $T_{\rm c}^{p_x+ip_y}$ (unless the superfluid order parameter exhibits an unexpected discontinuity there).
\par
\par
\begin{figure}
\centerline{\includegraphics[width=10cm]{fig3.eps}}
\caption{(Color online) Calculated Fermi chemical potential $\mu$ at the two superfluid phase transition temperatures, $T_{\rm c}^{p_x}$ and $T_{\rm c}^{p_x+ip_y}$. We set $(p_{\rm F}^3\delta v)^{-1}=0.3$.}
\label{fig3}
\end{figure}
\par
\begin{figure}
\centerline{\includegraphics[width=10cm]{fig4.eps}}
\caption{(Color online) Calculated factor $b_x$ in the $p_x$-wave superfluid order parameter $\Delta_{p_x}=b_xp_xF_{n=3}({\bm p})$, when $(p_{\rm F}^3\delta v)^{-1}=0.3$. Each result ends at $T_{\rm c}^{p_x+ip_y}$.}
\label{fig4}
\end{figure}
\par
In Fig. \ref{fig4}, we see that $\Delta_{p_x}({\bm p})$ has a discontinuity at $T_{\rm c}^{p_x}$, which is, however, an artifact of the TMA used in this paper. The same problem is already known in the $s$-wave case \cite{Watanabe,OhashiGriffin,OhashiJPSJ}. In the latter case, it has been pointed out \cite{OhashiJPSJ} that one needs to correctly include an effective repulsive interaction between Cooper pairs beyond TMA, in order to recover the expected second-order phase transition. Although this improvement is also crucial in the $p$-wave case, we leave it for future work, and examine strong-coupling effects in the $p_x$-wave superfluid phase within TMA.
\par
\begin{figure}
\centerline{\includegraphics[width=15cm]{fig5.eps}}
\caption{(Color online) The two calculated superfluid phase transition temperatures, $T_{\rm c}^{p_x}$ and $T_{\rm c}^{p_x+ip_y}$, as functions of the anisotropy parameter $(p_{\rm F}^3\delta v)^{-1}$. In panel (c), the solid circles show the BEC phase transition temperature $T_{\rm BEC}$ obtained from Eqs. (\ref{eq.24a})-(\ref{eq.25}). The upper dashed line in this panel shows $T_{\rm BEC}(N_{\rm F}/2)$. The lower dashed line shows $T_{\rm BEC}(N_{\rm F}/6)$.}
\label{fig5}
\end{figure}
\par
In the weak-coupling BCS limit, the number equation (\ref{eq.18d}) simply gives $\mu=\varepsilon_{\rm F}$, so that the superfluid phase transition temperature $T_{\rm c}^{p_x}$ is determined from Eq. (\ref{eq.18}) with $\mu=\varepsilon_{\rm F}$. The resulting $T_{\rm c}^{p_x}$ does not depend on the anisotropy parameter $\delta v^{-1}$, for a fixed $p_x$-wave interaction strength $v_x$. On the other hand, $T_{\rm c}^{p_x}$ gradually comes to depend on $\delta v^{-1}$, as one approaches the strong-coupling regime, as shown in Fig. \ref{fig5}.
\par
To explain the anisotropy dependence of $T_{\rm c}^{p_x}$ in the strong-coupling regime shown in Fig. \ref{fig5}(c), we first note that the system in the strong-coupling limit \cite{note2} may be viewed as an ideal gas mixture of three kinds of tightly-bound molecules (with the molecular mass $M=2m$) that are formed by the three $p_i$-wave interactions ($i=x,y,z$). Indeed, in the strong-coupling limit, the number equation (\ref{eq.18d}) at $T_{\rm c}^{p_x}$ is reduced to
\begin{equation}
{N_{\rm F} \over 2}=\sum_{i=x,y,z}N_{\rm B}^i,
\label{eq.24a}
\end{equation}
where
\begin{equation}
N_{\rm B}^i=\sum_{\bm q}n_{\rm B}
\left( \frac{q^2}{2M} -\mu_{\rm B}^i \right)
\label{eq.24b}
\end{equation}
is the number of molecules in the $p_i$-wave Cooper channel, with $n_{\rm B}(\omega)$ being the Bose distribution function. The $T_{\rm c}^{p_x}$-equation (\ref{eq.18}) gives the Bose chemical potential $\mu_{\rm B}^i$ in Eq. (\ref{eq.24b}) as
\begin{eqnarray}
\left\{
\begin{array}{l}
\displaystyle
\mu_{\rm B}^x=0,\\
\displaystyle
\mu_{\rm B}^y=\mu_B^z=-\frac{2\delta v^{-1}}{m \left(|k_0|-3\sqrt{2m|\mu|} \right)}~~~(\le 0).
\end{array}
\right.
\label{eq.25}
\end{eqnarray}
In the absence of uniaxial anisotropy ($\delta v^{-1}=0$), all three components simultaneously satisfy the BEC condition, $\mu_{\rm B}^x=\mu_{\rm B}^y=\mu_{\rm B}^z=0$. Thus, the phase transition temperature ($\equiv T_{\rm BEC}(N_{\rm F}/6)$) is determined from the equation $N_{\rm F}/6=N_{\rm B}^x=N_{\rm B}^y=N_{\rm B}^z$, which gives
\begin{equation}
T_{\rm BEC}(N_{\rm F}/6)={2\pi \over M}
\left(
{N_{\rm F} \over 6\zeta(3/2)}
\right)^{2/3}
=0.066T_{\rm F},
\label{eq.25b}
\end{equation}
where $\zeta(3/2)=2.612$ is the zeta function.
\par
In contrast, when $\delta v^{-1}>0$, Eq. (\ref{eq.25}) shows that the $p_y$- and $p_z$-wave components no longer satisfy the BEC condition. In the extreme case $\delta v^{-1}\gg 1$, the Bose chemical potentials $\mu_{\rm B}^{y}$ and $\mu_{\rm B}^z$ in Eq. (\ref{eq.25}) lie far below zero, so that one can ignore the contributions of these components to the number equation (\ref{eq.24a}), which becomes $N_{\rm F}/2=N_{\rm B}^x$. This means that most atoms form bound molecules in the $p_x$-wave Cooper channel, which is quite different from the case of $\delta v^{-1}=0$, where only one third of the Fermi atoms contribute to $p_x$-wave molecules. Because of this, the BEC phase transition temperature ($\equiv T_{\rm BEC}(N_{\rm F}/2)$) in this extreme case is higher than that for $\delta v^{-1}=0$ in Eq. (\ref{eq.25b}):
\begin{equation}
T_{\rm BEC}(N_{\rm F}/2)={2\pi \over M}
\left(
{N_{\rm F} \over 2\zeta(3/2)}
\right)^{2/3}
=0.137T_{\rm F}.
\label{eq.25c}
\end{equation}
Figure \ref{fig5}(c) shows that the BEC phase transition temperature $T_{\rm BEC}(N_{\rm B}^x)$ calculated from Eqs. (\ref{eq.24a})-(\ref{eq.25}) monotonically increases from $T_{\rm BEC}(N_{\rm F}/6)$ to $T_{\rm BEC}(N_{\rm F}/2)$ as the anisotropy parameter $\delta v^{-1}$ increases. The good agreement between $T_{\rm c}^{p_x}$ and $T_{\rm BEC}$ shown in this figure indicates that the anisotropy dependence of $T_{\rm c}^{p_x}$ in this regime comes from the increase in the number of molecular bosons in the $p_x$-wave Cooper channel with increasing uniaxial anisotropy of the $p$-wave interaction.
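These two limiting temperatures follow from the condensation formula for an ideal Bose gas, $T_{\rm BEC}=(2\pi/M)\,(n_{\rm B}/\zeta(3/2))^{2/3}$, with boson density $n_{\rm B}=N_{\rm F}/6$ or $N_{\rm F}/2$ and $N_{\rm F}=p_{\rm F}^3/6\pi^2$ for a one-component Fermi gas. A quick numerical check (units $\hbar=k_{\rm B}=p_{\rm F}=1$, $m=1/2$, so that $T_{\rm F}=1$):

```python
import numpy as np
from scipy.special import zeta

m = 0.5
M = 2.0 * m                       # molecular mass
NF = 1.0 / (6.0 * np.pi**2)       # one-component Fermi gas density at p_F = 1

def t_bec(n_boson):
    """Ideal Bose gas condensation temperature (in units of T_F)."""
    return (2.0 * np.pi / M) * (n_boson / zeta(1.5)) ** (2.0 / 3.0)

t_iso = t_bec(NF / 6.0)           # isotropic case: three equal molecular species
t_aniso = t_bec(NF / 2.0)         # strong anisotropy: all pairs in the p_x channel
```

The ratio $T_{\rm BEC}(N_{\rm F}/2)/T_{\rm BEC}(N_{\rm F}/6)=3^{2/3}\simeq 2.08$ is purely a density effect, since only one third of the molecules occupy the $p_x$ channel in the isotropic case.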
\par
\begin{figure}
\centerline{\includegraphics[width=8cm]{fig6.eps}}
\caption{(Color online) Critical value $\delta v_{\rm c}^{-1}$ of the anisotropy parameter at which $T_{\rm c}^{p_x+ip_y}$ vanishes (solid line). The dashed line shows the result within the BCS-Leggett theory. In obtaining the solid line, we have taken a small but finite value of $T$ ($\lesssim 0.01T_{\rm F}$) for numerical reasons.}
\label{fig6}
\end{figure}
\par
In contrast to $T_{\rm c}^{p_x}$, we see in Fig. \ref{fig5} that the $p_x+ip_y$-wave superfluid phase transition temperature $T_{\rm c}^{p_x+ip_y}$ decreases with increasing $\delta v^{-1}$, eventually vanishing at a critical value $\delta v_{\rm c}^{-1}$. (Note that this vanishing of $T_{\rm c}^{p_x+ip_y}$ is already visible in Fig. \ref{fig2}.) Evaluating this critical value over the whole interaction range, we obtain Fig. \ref{fig6}. This figure shows that $\delta v_{\rm c}^{-1}$ is not very sensitive to the interaction strength, always lying in the narrow range $0.4\lesssim (p_{\rm F}^3\delta v_{\rm c})^{-1}\lesssim 0.6$.
\par
To understand this behavior of $\delta v_{\rm c}^{-1}$, it is convenient to employ the BCS-Leggett theory \cite{Leggett}, since thermal fluctuations are absent at $T=0$. This theory consists of the coupled equation (\ref{eq.18c}) at $T_{\rm c}^{p_x+ip_y}=0$ and the mean-field number equation at $T=0$,
\begin{equation}
N_{\rm F}=
{T \over 2}\sum_{{\bm p},i\omega_n}
{\rm Tr}
[\tau_3{\hat G}_0({\bm p},i\omega_n)]
=
{1 \over 2}
\sum_{\bm p}
\left[
1-
{\xi_{\bm p} \over \sqrt{\xi_{\bm p}^2+|\Delta_{p_x}({\bm p})|^2}}
\right].
\label{eq.25d}
\end{equation}
As shown in Fig. \ref{fig6}, the BCS-Leggett theory semi-quantitatively reproduces the TMA result for $\delta v_{\rm c}^{-1}$. In the weak-coupling BCS regime ($|\Delta_{p_x}({\bm p})|\ll\varepsilon_{\rm F}$), the number equation (\ref{eq.25d}) simply gives $\mu=\varepsilon_{\rm F}$. Substituting this into Eq. (\ref{eq.18c}) with $T_{\rm c}^{p_x+ip_y}=0$, one obtains the upper bound of $\delta v_{\rm c}^{-1}$ in the BCS-Leggett theory as
\begin{equation}
(p_{\rm F}^3\delta v_{\rm c})^{-1}={2 \over \pi} = 0.64.
\label{eq.20}
\end{equation}
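The weak-coupling statement $\mu=\varepsilon_{\rm F}$ entering Eq. (\ref{eq.20}) can be checked against Eq. (\ref{eq.25d}) directly: as $\Delta_{p_x}\to 0$ the right-hand side reduces to the free one-component result $N_{\rm F}=p_{\rm F}^3/6\pi^2$, while at fixed $\mu$ a finite gap increases the right-hand side, which is why $\mu$ must fall below $\varepsilon_{\rm F}$ as the coupling grows. A numerical sketch (an illustration, not from the paper; units $\varepsilon_{\rm F}=p_{\rm F}=1$, $m=1/2$, and the gap amplitude $b_x$ is an assumed illustration value):

```python
import numpy as np
from scipy.integrate import quad

m, mu = 0.5, 1.0
pc, n = 27.0, 3

def Fc(p):
    return 1.0 / (1.0 + (p / pc) ** (2 * n))

def n_mf(b):
    """Right-hand side of Eq. (25d): (1/2) sum_p [1 - xi / sqrt(xi^2 + |Delta|^2)]."""
    def radial(p):
        eps = p * p / (2.0 * m)
        xi = eps - mu
        # angular average over x = cos(theta_p) of the p_x-wave gap b p x F_n(p)
        ang, _ = quad(lambda x: 1.0 - xi / np.sqrt(xi**2 + (b * p * x * Fc(p))**2),
                      0.0, 1.0)
        return p * p * ang
    val, _ = quad(radial, 0.0, 20.0, points=[1.0], limit=300)
    # (1/2) (2 pi)^{-3} int d^3p with the azimuthal and x-parity factors -> 1/(4 pi^2)
    return val / (4.0 * np.pi**2)
```

In the $b\to 0$ limit this recovers $N_{\rm F}=1/6\pi^2$, consistent with $\mu=\varepsilon_{\rm F}$ in the BCS regime.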
In the strong coupling regime where the chemical potential is negative and $|\mu|\gg |\Delta_{p_x}({\bm p})|$, the BCS-Leggett theory gives the lower bound of $\delta v_{\rm c}^{-1}$ as
\begin{equation}
(p_{\rm F}^3\delta v_{\rm c})^{-1}=
{64 \over 5|k_0|}\sum_{\bm p}
{F_{n=3}^4({\bm p}) \over p^2}
+O
\left(
{\sqrt{2m|\mu|} \over |k_0|}
\right)
=
0.44+
O\left(
{\sqrt{2m|\mu|} \over |k_0|}
\right).
\label{eq.21}
\end{equation}
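The leading term of Eq. (\ref{eq.21}) can be evaluated numerically for our parameter set ($n=3$, $p_{\rm c}=27p_{\rm F}$, $|k_0|=30p_{\rm F}$): with $\sum_{\bm p}\to\int d^3p/(2\pi)^3$, it becomes $(64/5|k_0|)\,(1/2\pi^2)\int_0^\infty F_3^4(p)\,dp$. A short check (in units of $p_{\rm F}$; $|k_0|$ is recomputed from Eq. (\ref{eq.4}) rather than hard-coded):

```python
import numpy as np
from scipy.integrate import quad

pc, n = 27.0, 3

F2 = lambda p: 1.0 / (1.0 + (p / pc) ** (2 * n)) ** 2

# |k0| from Eq. (4) in its reduced form |k0| = (4/pi) int_0^inf F_n^2 dp
I2, _ = quad(F2, 0.0, 50.0 * pc)
k0_mag = (4.0 / np.pi) * I2

# leading term of Eq. (21): (64 / 5|k0|) (1/2 pi^2) int_0^inf F_n^4 dp
I4, _ = quad(lambda p: F2(p) ** 2, 0.0, 50.0 * pc)
bound = (64.0 / (5.0 * k0_mag)) * I4 / (2.0 * np.pi**2)
```

This reproduces the quoted strong-coupling lower bound $(p_{\rm F}^3\delta v_{\rm c})^{-1}\simeq 0.44$; note that $p_{\rm c}$ cancels between the two integrals, so the bound depends only on the shape exponent $n$.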
\par
Strictly speaking, although the TMA result for $\delta v_{\rm c}^{-1}$ coincides with Eq. (\ref{eq.20}) in the weak-coupling limit, it is still different from Eq. (\ref{eq.21}) even in the strong-coupling limit. This is because the finite value of the effective range ($k_0=-30p_{\rm F}$) causes an effective repulsive interaction between bound molecules in the latter limit, leading to the so-called quantum depletion \cite{Pethick}. Indeed, in the strong-coupling BEC regime, the last term in Eq. (\ref{eq.18e}), which describes fluctuation corrections to the mean-field Green's function ${\hat G}_0({\bm p},i\omega_n)$, modifies the mean-field number equation (\ref{eq.25d}) as
\begin{equation}
{N_{\rm F} \over 2}
=
{1 \over 4}
\sum_{\bm p}
\left[
1-
{\xi_{\bm p} \over \sqrt{\xi_{\bm p}^2+|\Delta_{p_x}({\bm p})|^2}}
\right]
+{8 \over 3\sqrt{\pi}}\sum_{i=x,y,z}
\left(
{N_{\rm F} \over 2}a_{{\rm B},i}^3
\right)^{1 \over 2},
\label{eq.22}
\end{equation}
where $a_{{\rm B},x}=(374/15)|k_0|^{-1}$, and $a_{{\rm B},y}=a_{{\rm B},z}=a_{{\rm B},x}/3$. The last term in Eq. (\ref{eq.22}) has the same form as the quantum depletion in a Bose superfluid with $N_{\rm F}/2$ bosons, when we interpret $a_{{\rm B},i}$ as an effective repulsive interaction between tightly bound $p_i$-wave molecules. When we include this quantum depletion, the lower bound in Eq. (\ref{eq.21}) is improved as
\begin{equation}
(p_{\rm F}^3\delta v_{\rm c})^{-1}
=
{157 \over 135\pi}+
O\left(
{\sqrt{2m|\mu|} \over |k_0|}
\right)
=
0.38+
O\left(
{\sqrt{2m|\mu|} \over |k_0|}
\right),
\label{eq.23}
\end{equation}
which agrees well with the TMA result (solid line in Fig. \ref{fig6}) in the strong-coupling regime.
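The size of the depletion correction can be estimated numerically. A minimal sketch (Python): the scattering lengths $a_{{\rm B},i}$ and $k_0=-30p_{\rm F}$ are taken from the text, while the molecule density $N_{\rm F}/2$ is assumed to equal half the density of a one-component Fermi gas, $p_{\rm F}^3/(12\pi^2)$, and the last term of Eq. (\ref{eq.22}) is read as the depleted fraction of the molecules.

```python
import math

p_F = 1.0                                # units where p_F = 1
k0 = -30.0 * p_F                         # effective-range parameter used in the text
n_B = p_F**3 / (12.0 * math.pi**2)       # molecule density N_F/2 (assumption)

# effective molecular scattering lengths quoted below Eq. (22)
a_B = {"x": (374.0 / 15.0) / abs(k0)}
a_B["y"] = a_B["z"] = a_B["x"] / 3.0

# quantum-depletion term of Eq. (22), read as a depleted fraction
depleted = (8.0 / (3.0 * math.sqrt(math.pi))) * sum(
    math.sqrt(n_B * a**3) for a in a_B.values()
)
print(f"depleted molecule fraction ~ {depleted:.2f}")
```

Under these assumptions the depletion is of order 15\%, consistent in magnitude with the reduction of the bound from Eq. (\ref{eq.21}) to Eq. (\ref{eq.23}).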
\par
\begin{figure}
\centerline{\includegraphics[width=8cm]{fig7.eps}}
\caption{(Color online) Superfluid phase transition temperatures $T_{\rm c}^{p_x}$ and $T_{\rm c}^{p_x+ip_y}$ for various values of $n$ in the cutoff function $F_n({\bm p})$. We take $(p_{\rm F}^3v_x)^{-1}=0$.}
\label{fig7}
\end{figure}
\par
Before ending this section, we comment on the cutoff function $F_{n=3}({\bm p})$ we are using. As shown in Fig. \ref{fig7}, while the phase transition temperature $T_{\rm c}^{p_x}$ is almost independent of $n$, $T_{\rm c}^{p_x+ip_y}$ depends on this parameter. Generalizing Eq. (\ref{eq.21}) to the case of arbitrary $n$, one finds that the lower bound (in the BCS-Leggett theory) explicitly depends on $n$ as
\begin{equation}
(p_{\rm F}^3\delta v_{\rm c})^{-1}=
{4 \over 15\pi}
\left(
3-{1 \over 2n}
\right)
\left(
2-{1 \over 2n}
\right)
+
O\left(
{\sqrt{2m|\mu|} \over |k_0|}
\right).
\label{eq.24}
\end{equation}
These $n$-dependences of $T_{\rm c}^{p_x+ip_y}$ and $\delta v_{\rm c}^{-1}$ arise because the factor $p_x$ in $\Delta_{p_x}({\bm p})=p_xb_xF_n({\bm p})$ enhances this superfluid order parameter in the high-momentum region, so that physical quantities in the $p_x$-wave superfluid phase depend on how this enhancement is suppressed by the cutoff function $F_n({\bm p})$. This implies that, in addition to the observable parameter set $(v_x^{-1}, \delta v^{-1}, k_0)$, one needs one more piece of experimental information about the high-momentum behavior of a real $p$-wave interaction, in order to unambiguously predict the phase boundary between the $p_x$-wave and $p_x+ip_y$-wave superfluid phases. We briefly note that Fig. \ref{fig7} shows that our choice ($n=3$) is close to the sharp-cutoff case (which corresponds to $n=\infty$).
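As a quick numerical check, the leading term of Eq. (\ref{eq.24}) can be evaluated for several $n$ (Python; the $n\to\infty$ value corresponds to a sharp cutoff, and the weak-coupling bound $2/\pi$ of Eq. (\ref{eq.20}) is included for comparison):

```python
import math

def lower_bound(n):
    """Leading term of Eq. (24): the BCS-Leggett lower bound of (p_F^3 dv_c)^{-1}."""
    return 4.0 / (15.0 * math.pi) * (3.0 - 0.5 / n) * (2.0 - 0.5 / n)

print(f"upper bound, Eq. (20): 2/pi = {2.0 / math.pi:.2f}")   # 0.64
for n in (1, 3, 10):
    print(f"n = {n:2d}: lower bound = {lower_bound(n):.2f}")
print(f"n -> infinity: {8.0 / (5.0 * math.pi):.2f}")          # 0.51
```

For $n=3$ this reproduces the value $0.44$ quoted in Eq. (\ref{eq.21}), and the bound grows monotonically with $n$ toward the sharp-cutoff limit $8/(5\pi)\simeq 0.51$.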
\par
\begin{figure}
\centerline{\includegraphics[width=15cm]{fig8.eps}}
\caption{(Color online) Calculated angle-resolved density of states (ARDOS) $\rho(\omega,\theta_{\bm p})$. (a1)(a2) $(p_{\rm F}^3v_x)^{-1}=-12$. (b1)(b2) $(p_{\rm F}^3v_x)^{-1}=-8$. (c1)(c2) $(p_{\rm F}^3v_x)^{-1}=-4$. (d1)(d2) $(p_{\rm F}^3v_x)^{-1}=0$. Upper and lower figures show the results at $T_{\rm c}^{p_x}$ and $T_{\rm c}^{p_x+ip_y}$, respectively. In each figure, we offset the results by 0.2.
}
\label{fig8}
\end{figure}
\par
\section{Angle-resolved density of states and strong-coupling effects near the phase boundaries}
\par
Figure \ref{fig8} shows the angle-resolved density of states (ARDOS) $\rho(\omega,\theta_{\bm p})$ at $T_{\rm c}^{p_x}$ (upper panels) and $T_{\rm c}^{p_x+ip_y}$ (lower panels). In Fig. \ref{fig8}(a1), a dip structure is seen around $\omega=0$. Since the superfluid order parameter vanishes at $T_{\rm c}^{p_x}$, this is just a pseudogap originating from $p$-wave pairing fluctuations. This many-body phenomenon is non-monotonic in the sense that, while the pseudogap is more pronounced in Fig. \ref{fig8}(b1), it gradually becomes obscure with further increasing the interaction strength, eventually vanishing, as shown in Figs. \ref{fig8}(c1) and (d1). Figures \ref{fig8}(a1)-(c1) also show that the pseudogap structure at $T_{\rm c}^{p_x}$ is anisotropic in momentum space, being most pronounced in the $p_x$ direction ($\cos\theta_{\bm p}=1$).
\par
To simply explain this anisotropic pseudogap phenomenon, it is convenient to employ the static approximation for pairing fluctuations \cite{Levin}. Noting that the particle-particle scattering matrix $\Gamma_{x,x}^{-+}({\bm q}=0,i\nu_n=0)$ in the $p_x$-wave channel diverges at $T_{\rm c}^{p_x}$ \cite{Thouless}, we may approximate the (1,1) component of the TMA self-energy in Eq. (\ref{eq.12}) in the normal state near $T_{\rm c}^{p_x}$ to
\begin{eqnarray}
\Sigma_{11}({\bm p},i\omega_n) \simeq
{2 \over \beta}
\sum_{{\bm q},i\nu_n}\Gamma_{x,x}^{-+}({\bm q},i\nu_n)
F_n^2({\bm p})p_x^2G^{22}_0({\bm p},i\omega_n)
\equiv
{\Delta^2_{\rm pg}({\bm p}) \over i\omega_n+\xi_{\bm p}},
\label{eq.28}
\end{eqnarray}
where
\begin{eqnarray}
\Delta^2_{\rm pg}({\bm p})=
\left[{2 \over \beta}\sum_{{\bm q},i\nu_n}
\Gamma_{x,x}^{-+}({\bm q},i\nu_n)
\right]
p_x^2F_n^2({\bm p})
\equiv {b_{\rm pg}^x}^2p_x^2F_n^2({\bm p})
\label{eq.29}
\end{eqnarray}
is the so-called pseudogap parameter \cite{Levin,Perali}. In obtaining Eq. (\ref{eq.28}), we have only retained effects of the strongest $p_x$-wave pairing fluctuations near $T_{\rm c}^{p_x}$, and have ignored fluctuation contributions from the $p_y$-wave and $p_z$-wave Cooper channels. Substituting Eq. (\ref{eq.28}) into the (1,1) component of the TMA Green's function in Eq. (\ref{eq.10}), one obtains
\begin{eqnarray}
G_{11}({\bm p},i\omega_n)
&=&
{1
\over
i\omega_n-\xi_{\bm p}
-
{\displaystyle \Delta^2_{\rm pg}({\bm p})
\over \displaystyle i\omega_n+\xi_{\bm p}}
}
\nonumber
\\
&=&
-
{i\omega_n+\xi_{\bm p}
\over
\omega_n^2+\xi_{\bm p}^2+\Delta^2_{\rm pg}({\bm p})}.
\label{eq.30}
\end{eqnarray}
The first line in Eq. (\ref{eq.30}) shows that the pseudogap parameter $\Delta_{\rm pg}({\bm p})$ works as a coupling between the particle branch $\omega=\xi_{\bm p}$ and the hole branch $\omega=-\xi_{\bm p}$. From the viewpoint of this particle-hole coupling, the pseudogap may be interpreted as a result of the level repulsion between the particle and hole branches around $\omega=0$ \cite{Inotani,Tsuchiya1,Watanabe}.
\par
The last expression in Eq. (\ref{eq.30}) has exactly the same form as the diagonal component of the Green's function in the ordinary mean-field BCS theory. This coincidence immediately gives the BCS-type single-particle excitation spectra $E_{\bm p}^{\pm}=\pm \sqrt{\xi_{\bm p}^2+|\Delta_{\rm pg}({\bm p})|^2}$, with the excitation gap
\begin{eqnarray}
\Delta E \left( \theta_p \right)
=
\left\{
\begin{array}{ll}
2|b_{\rm pg}^x\cos\theta_{\bm p}|
\sqrt{2m\mu-m^2|b_{\rm pg}^x\cos\theta_{\bm p}|^2}
&
~~~~~(\mu \ge m|b_{\rm pg}^x\cos\theta_{\bm p}|^2), \\
2|\mu|
& ~~~~~(\mu < m|b_{\rm pg}^x\cos\theta_{\bm p}|^2),
\end{array}
\right.
\label{eq.32a}
\end{eqnarray}
where the cutoff function $F_{n=3}({\bm p})$ has been approximated by unity (note that $p_{\rm c}\gg p_{\rm F}$). This gap is actually a pseudogap once one correctly includes the finite lifetime of preformed Cooper pairs, which is ignored in the static approximation \cite{Levin}.
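The angular dependence in Eq. (\ref{eq.32a}) can be verified by minimizing $E^+_{\bm p}=\sqrt{\xi_{\bm p}^2+\Delta_{\rm pg}^2({\bm p})}$ over $|{\bm p}|$ directly (a sketch; the values of $m$, $\mu$, and $b_{\rm pg}^x$ below are illustrative placeholders, not fitted to the figures):

```python
import math

M, B = 0.5, 0.6                       # mass and pseudogap amplitude b_pg^x (illustrative)

def gap_numeric(cos_t, mu, n=50000, p_max=6.0):
    """2 * min_p sqrt(xi_p^2 + (B p cos_t)^2), by brute-force scan over p."""
    best = float("inf")
    for i in range(n + 1):
        p = p_max * i / n
        xi = p * p / (2.0 * M) - mu
        best = min(best, math.hypot(xi, B * p * cos_t))
    return 2.0 * best

def gap_eq32a(cos_t, mu):
    """Closed form of Eq. (32a)."""
    bc = abs(B * cos_t)
    if mu >= M * bc * bc:
        return 2.0 * bc * math.sqrt(2.0 * M * mu - (M * bc) ** 2)
    return 2.0 * abs(mu)

for c in (1.0, 0.5, 0.0):
    print(c, round(gap_numeric(c, mu=1.0), 4), round(gap_eq32a(c, mu=1.0), 4))
```

For $\mu>0$ the gap is largest along $\cos\theta_{\bm p}=1$ and closes in the nodal direction, while for $\mu<0$ both branches give the isotropic value $2|\mu|$, as discussed below.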
\par
Equation (\ref{eq.32a}) indicates that, as expected, the anisotropic pseudogap phenomenon shown in Figs. \ref{fig8}(a1)-(c1) originates from the anisotropic $p_x$-wave pairing fluctuations, described by the pseudogap parameter $\Delta^2_{\rm pg}({\bm p})\propto p_x^2$. When $U_x=U_y=U_z$, fluctuations in all three $p_i$-wave Cooper channels ($i=x,y,z$) are equally enhanced near the superfluid instability, so that the pseudogap parameter in Eq. (\ref{eq.29}) is replaced by the isotropic one,
\begin{eqnarray}
\Delta^2_{\rm pg}({\bm p})=
\left[{2 \over \beta}\sum_{{\bm q},i\nu_n}
\Gamma_{x,x}^{-+}({\bm q},i\nu_n)
\right]
p^2F_n^2({\bm p})
\label{eq.31}
\end{eqnarray}
where we have used the symmetry property, $\Gamma_{x,x}^{-+}=\Gamma_{y,y}^{-+}=\Gamma_{z,z}^{-+}$. The resulting pseudogap is isotropic in momentum space \cite{Inotani}.
\par
Equation (\ref{eq.32a}) also shows that the (pseudo)gap size $\Delta E(\theta_{\bm p})$ becomes isotropic in the strong coupling regime where the Fermi chemical potential is negative \cite{Ohashi,Ho}. This is because most Fermi atoms form tightly bound molecules in the strong-coupling regime, so that the threshold energy of single-particle excitations is simply dominated by the dissociation of these molecules with the binding energy $E_{\rm bind}\simeq 2|\mu|$, as in the strong-coupling BEC regime of the $s$-wave case \cite{NSR,Melo,Randeria}.
\par
The pseudogap parameter $\Delta_{\rm pg}({\bm p})$ in Eq. (\ref{eq.29}) also explains the non-monotonic dependence of the pseudogap structure on the interaction strength shown in Figs. \ref{fig8}(a1)-(d1) \cite{Inotani}. Since pairing fluctuations are stronger for a stronger pairing interaction, the factor $b_{\rm pg}^x$ appearing in Eq. (\ref{eq.29}) becomes larger, which enhances the pseudogap parameter $\Delta_{\rm pg}({\bm p})$. At the same time, since strong pairing fluctuations are known to decrease the Fermi chemical potential $\mu$ \cite{Ohashi,Ho}, as shown in Fig. \ref{fig3}, the effective Fermi momentum ${\tilde p}_{\rm F}=\sqrt{2m\mu}$ becomes small. This decreases the pseudogap parameter at the effective Fermi momentum, because $\Delta^2_{\rm pg}({\tilde {\bm p}}_{\rm F})\sim {\tilde p}^2_{{\rm F},x}\sim 2m\mu$. As a result, while the pseudogap first becomes more pronounced with increasing interaction strength in the weak-coupling region because of the enhanced pairing fluctuations, it gradually shrinks once the decrease of the Fermi chemical potential dominates $\Delta_{\rm pg}({\bm p})$. In the case of Fig. \ref{fig8}(d1), one has $\mu(T_{\rm c}^{p_x})\simeq 0$. Thus, the low-momentum region $|{\bm p}|\sim 0$ dominantly contributes to the density of states around $\omega=0$, leading to the vanishing pseudogap in this figure \cite{notePG}.
\par
We briefly note that the vanishing pseudogap in the intermediate-coupling regime ($(p_{\rm F}^3v_x)^{-1}\sim 0$) is quite different from the $s$-wave case, where the pseudogap monotonically develops as one passes through the BCS-BEC crossover region. This is simply because the contact-type $s$-wave pairing interaction is independent of the momentum ${\bm p}$, so that the factor $p_x$ is absent in the $s$-wave pseudogap parameter.
\par
At $T_{\rm c}^{p_x+ip_y}$, since the $p_x$-wave superfluid order parameter $\Delta_{p_x}({\bm p})=b_xp_xF_{n=3}({\bm p})\propto p_x$ is already present, ARDOS $\rho(\omega,\cos\theta_{\bm p}=1, 0.5)$ in the low-energy region is dominated by the superfluid energy gap, as shown in Figs. \ref{fig8}(a2)-(c2). On the other hand, such a gap structure is not seen in $\rho(\omega,\cos\theta_{\bm p}=0)$ in the weak-coupling regime (Fig. \ref{fig8}(a2)), because the $p_x$-wave superfluid order parameter vanishes there. However, ARDOS in the nodal direction ($\cos\theta_{\bm p}=0$) gradually exhibits a dip structure around $\omega=0$ as the pairing interaction increases. (See Figs. \ref{fig8}(b2) and (c2).) Since the $p_x+ip_y$-wave superfluid order parameter still vanishes at $T_{\rm c}^{p_x+ip_y}$, this is a pseudogap induced by fluctuations in the $p_y$- and $p_z$-wave Cooper channels. Indeed, when we apply the static approximation to the region near $T_{\rm c}^{p_x+ip_y}$, the (1,1) component of the single-particle Green's function with ${\bm p}=(0,p_y,p_z)$ is reduced to Eq. (\ref{eq.30}), where the pseudogap parameter is replaced by
\begin{equation}
\Delta^2_{\rm pg}(0,p_y,p_z)=
\sum_{i=y,z}
\left[{2 \over \beta}\sum_{{\bm q},i\nu_n}
\Gamma_{i,i}^{-+}({\bm q},i\nu_n)
\right]
p_i^2F_n^2({\bm p})
\equiv
\sum_{i=y,z}{b_{\rm pg}^i}^2p_i^2F_{n=3}^2({\bm p}).
\label{eq.32}
\end{equation}
Although Eq. (\ref{eq.32}) is similar to Eq. (\ref{eq.29}), the former involves effects of the $p_x$-wave superfluid order parameter $\Delta_{p_x}({\bm p})$. Since gapless Fermi excitations only remain along the line node of the $p_x$-wave superfluid order parameter, pairing fluctuations described by $b_{\rm pg}^{i=y,z}(T=T_{\rm c}^{p_x+ip_y})$ in Eq. (\ref{eq.32}) are weaker than those described by $b_{\rm pg}^x(T=T_{\rm c}^{p_x})$ in Eq. (\ref{eq.29}). This explains why the pseudogap appearing in ARDOS $\rho(\omega,\cos\theta_{\bm p}=0)$ at $T_{\rm c}^{p_x+ip_y}$ (Figs. \ref{fig8}(b2) and (c2)) is less pronounced than the dip structure in $\rho(\omega,\cos\theta_{\bm p}=1)$ at $T_{\rm c}^{p_x}$ (Figs. \ref{fig8}(a1)-(c1)).
\par
The reason for the vanishing superfluid gap and pseudogap in Fig. \ref{fig8}(d2) is the same as in the case of Fig. \ref{fig8}(d1). That is, at this interaction strength, the chemical potential is very small ($\mu(T_{\rm c}^{p_x+ip_y})=0.03\varepsilon_{\rm F}$), so that the low-momentum region ($|{\bm p}|\sim 0$) dominantly contributes to ARDOS $\rho(\omega,\theta_{\bm p})$ around $\omega=0$. Thus, the $p_x$-wave superfluid order parameter $\Delta_{p_x}({\bm p})\propto p_x$, as well as the pseudogap parameter $\Delta_{\rm pg}^2(0,p_y,p_z)\propto p_y^2+p_z^2$ in Eq. (\ref{eq.32}), has almost no effect on ARDOS around $\omega=0$ in this case.
\par
\begin{figure}
\centerline{\includegraphics[width=15cm]{fig9.eps}}
\caption{(Color online) Calculated angle-resolved density of states (ARDOS) $\rho(\omega,\theta_{\bm p})$ at various temperatures, when $((p_{\rm F}^3v_x)^{-1}, (p_{\rm F}^3\delta v)^{-1})=(-8, 0.3)$. (a1)(a2) $\cos\theta_{\bm p}=1$. (b1)(b2) $\cos\theta_{\bm p}=0.5$. (c1)(c2) $\cos\theta_{\bm p}=0$. The upper and lower figures show the results in the normal state, and in the $p_x$-wave superfluid state, respectively. Evaluating the pseudogap temperature $T^*(\theta_{\bm p})$ as the temperature below which a dip structure appears in $\rho(\omega,\theta_{\bm p})$ around $\omega=0$, one obtains $T^*(\cos\theta_{\bm p}=1)=0.13T_{\rm F}$, $T^*(\cos\theta_{\bm p}=0.5)=0.11T_{\rm F}$, and $T^*(\cos\theta_{\bm p}=0)=0.1T_{\rm F}$. In each figure, we offset the results by 0.3.}
\label{fig9}
\end{figure}
\par
\begin{figure}
\centerline{\includegraphics[width=10cm]{fig10.eps}}
\caption{(Color online) Characteristic temperature $T^*(\theta_{\bm p})$ below which a dip structure appears in ARDOS $\rho(\omega,\theta_{\bm p})$. We take $(p_{\rm F}^3\delta v)^{-1}=0.3$. The dashed-dotted line shows the temperature at which the Fermi chemical potential $\mu$ vanishes. The chemical potential $\mu$ is negative in the right side of this line, so that this strong-coupling regime may be regarded as a gas of two-body bound molecules with the binding energy $E_{\rm bind}\sim 2|\mu|$ \cite{Inotani}, rather than a gas of Fermi atoms.}
\label{fig10}
\end{figure}
\par
Figures \ref{fig9}(a1) and (a2) show that the pseudogap in $\rho(\omega,\cos\theta_{\bm p}=1)$ in the normal state continuously changes to the superfluid gap, as one passes through the superfluid instability at $T_{\rm c}^{p_x}$. The same phenomenon is also seen for $\cos\theta_{\bm p}=0.5$, as shown in Figs. \ref{fig9}(b1) and (b2). On the other hand, since the $p_x$-wave superfluid order parameter vanishes when $\cos\theta_{\bm p}=0$, Figs. \ref{fig9}(c1) and (c2) show how the pseudogap in the nodal direction continues developing in the $p_x$-wave superfluid phase, becoming most pronounced at $T_{\rm c}^{p_x+ip_y}$.
\par
When we introduce the characteristic temperature $T^*(\theta_{\bm p})$ as the temperature below which a dip structure appears in $\rho(\omega,\cos\theta_{\bm p})$, we obtain Fig. \ref{fig10}. Because $\Delta_{p_x}({\bm p})=0$ in the nodal direction ($\cos\theta_{\bm p}=0$), we may regard $T^*(\cos\theta_{\bm p}=0)$ as the pseudogap temperature \cite{Inotani,Tsuchiya1,Watanabe} in this momentum direction, below which strong pairing fluctuations induce a pseudogap in ARDOS $\rho(\omega,\cos\theta_{\bm p}=0)$. In this case, one may call the region surrounded by $T^*(\cos\theta_{\bm p}=0)$ and $T_{\rm c}^{p_x+ip_y}$ the pseudogap regime.
\par
\begin{figure}
\centerline{\includegraphics[width=8cm]{fig11.eps}}
\caption{(Color online) (a) Angle-resolved density of states $\rho(\omega,\cos\theta_{\bm p}=1)$ in the $p_x$-wave superfluid phase. We take $((p_{\rm F}^3v_x)^{-1}, (p_{\rm F}^3\delta v)^{-1})=(0, 0.3)$. Each line is offset by 0.03. (b) Mean-field result at $T=T_{\rm c}^{p_x+ip_y}$, which is obtained by ignoring the self-energy correction ${\hat \Sigma}({\bm p},i\omega_n)$ in Eq. (\ref{eq.10}) in calculating ARDOS.
}
\label{fig11}
\end{figure}
\par
In the case of $\cos\theta_{\bm p}\ne 0$, $T^*(\theta_{\bm p})$ also has the meaning of the pseudogap temperature, when $T^*(\theta_{\bm p})>T_{\rm c}^{p_x}$. On the other hand, $T^*(\cos\theta_{\bm p}=1,~0.5)$ is lower than $T_{\rm c}^{p_x}$ around $(p_{\rm F}^3v_x)^{-1}=0$ in Fig. \ref{fig10}, which means that the superfluid gap does not appear in ARDOS when $T^*(\theta_{\bm p})\le T\le T_{\rm c}^{p_x}$. As mentioned previously, since $|\mu|\ll \varepsilon_{\rm F}$ in this intermediate-coupling regime, single-particle excitations around ${\bm p}=0$ dominantly contribute to ARDOS around $\omega=0$. Because of this, the small superfluid excitation gap associated with the small $p_x$-wave superfluid order parameter, $\Delta_{p_x}({\bm p})\sim b_x\sqrt{2m\mu}\sim 0$, around ${\bm p}=0$ is easily smeared out by the strong pairing fluctuations existing in this regime even below $T_{\rm c}^{p_x}$. Since this strong-coupling effect is gradually suppressed below $T_{\rm c}^{p_x}$, ARDOS starts to exhibit a superfluid gap structure below $T^*(\theta_{\bm p})$ (see Fig. \ref{fig11}(a)), approaching the BCS-type superfluid density of states shown in Fig. \ref{fig11}(b). Thus, $T^*(\theta_{\bm p})$ in this regime may be regarded as the characteristic temperature below which the $p_x$-wave superfluid order overwhelms pairing fluctuations.
\par
In Fig. \ref{fig10}, we also plot the temperature at which the Fermi chemical potential $\mu$ changes its sign \cite{noteM}. As mentioned previously, in the strong-coupling regime where $\mu<0$, since the system is dominated by tightly bound molecules, the $p$-wave character of Cooper pairs is less important. In the normal state near $T_{\rm c}^{p_x}$, this fact gives the isotropic pseudogap size $\Delta E(\theta_{\bm p})$ in Eq. (\ref{eq.32a}). In the $p_x$-wave superfluid phase, the Bogoliubov single-particle excitation spectrum in the strong-coupling regime,
\begin{equation}
E_{\bm p}=\sqrt{(\varepsilon_{\bm p}+|\mu|)^2+|\Delta_{p_x}({\bm p})|^2},
\label{eq.F}
\end{equation}
also has the isotropic energy gap $2|\mu|$, reflecting that the threshold energy of Fermi excitations is simply dominated by the binding energy ($E_{\rm bind}\sim 2|\mu|$) of a two-body bound molecule. Thus, the $p$-wave anisotropy is not important on the right side of this line in Fig. \ref{fig10}, as long as we consider low-energy single-particle excitations.
\par
\section{Summary}
\par
To summarize, we have discussed strong-coupling properties of a one-component superfluid Fermi gas with a uniaxially anisotropic $p$-wave pairing interaction ($U_x>U_y=U_z$). Including $p$-wave pairing fluctuations within a $T$-matrix approximation, we determined the two superfluid phase transition temperatures $T_{\rm c}^{p_x}$, which gives the phase boundary between the normal state and the $p_x$-wave superfluid state, and $T_{\rm c}^{p_x+ip_y}$ ($<T_{\rm c}^{p_x}$), which gives the phase boundary between the $p_x$-wave and $p_x+ip_y$-wave superfluid states.
\par
We examined single-particle excitations near $T_{\rm c}^{p_x}$, as well as near $T_{\rm c}^{p_x+ip_y}$. In the normal state near $T_{\rm c}^{p_x}$, we showed that strong pairing fluctuations in the $p_x$-wave Cooper channel induce an anisotropic pseudogap phenomenon, in which the pseudogap structure in the angle-resolved density of states (ARDOS) is most pronounced in the $p_x$ direction. We also showed that this pseudogap continuously changes to the $p_x$-wave superfluid gap below $T_{\rm c}^{p_x}$. On the other hand, the pseudogap was found to continue developing below $T_{\rm c}^{p_x}$ in the nodal direction ($\perp p_x$) of the $p_x$-wave superfluid order parameter, becoming most pronounced at $T_{\rm c}^{p_x+ip_y}$. Since pairing fluctuations are simply suppressed in an isotropic $s$-wave superfluid state, this phenomenon is characteristic of a $p$-wave Fermi superfluid with a nodal superfluid order parameter and with plural superfluid phases. To characterize the anisotropic pseudogap phenomenon, we determined the characteristic temperature $T^*(\theta_{\bm p})$, below which a dip structure appears in ARDOS.
\par
In this paper, we have considered the normal state, as well as the $p_x$-wave superfluid phase. To obtain a complete understanding of a $p$-wave superfluid Fermi gas with a uniaxially anisotropic $p$-wave interaction, we also need to examine the $p_x+ip_y$-wave superfluid phase below $T_{\rm c}^{p_x+ip_y}$. In addition, for simplicity, we employed the BCS Hamiltonian with a $p$-wave pairing interaction, which implicitly assumes a broad Feshbach resonance. In this regard, all current experiments use narrow $p$-wave Feshbach resonances, so it is an important problem to clarify how the resonance width affects strong-coupling properties of a $p$-wave superfluid Fermi gas. Since a $p$-wave superfluid state is known to be sensitive to spatial inhomogeneity, the inclusion of a realistic harmonic trap also remains a future problem.
\par
The realization of a $p$-wave superfluid Fermi gas is an exciting challenge that would qualitatively go beyond the current stage of cold Fermi gas physics, in which only the $s$-wave Fermi superfluid has been realized. Since anisotropic pairing is a crucial key of a $p$-wave Fermi superfluid, our results would contribute to understanding how this character affects many-body properties of an ultracold Fermi gas, especially in the superfluid phase.
\par
\acknowledgments
We would like to thank M. Sigrist, S. Tsuchiya, S. Watabe, R. Watanabe and T. Kashimura for useful discussions. This work was supported by KiPAS project in Keio University. YO was also supported by Grant-in-Aid for Scientific research from MEXT and JSPS in Japan (25105511, 25400418, 15H00840).
\par
\section{I Want to Believe...}
\begin{itemize}
\item
That elementary particle physics will prosper for a 2nd century with
laboratory experiments based on innovative particle sources.
\item
That a full range of new phenomena will be investigated:
\begin{itemize}
\item
mass $\Rightarrow$ a 2nd $3 \times 3$ (or larger?) mixing matrix.
\item
Precision studies of Higgs bosons.
\item
A rich supersymmetric sector.
\item
... And more ...
\end{itemize}
\item
That our investment in future accelerators will result in more
cost-effective technology, capable of extension to 10's of TeV
of constituent CoM energy.
\item
That a {\bf Muon Collider} \cite{status,collabpage} based on ionization cooling
is the best option to accomplish the above.
\end{itemize}
\section{Ionization Cooling}
\centerline{(An Idea So Simple It Might Just Work)}
\begin{itemize}
\item
Ionization: takes momentum away.
\item
RF acceleration: puts momentum back along $z$ axis.
\item
$\Rightarrow$ Transverse ``cooling''.
\centerline{\epsfxsize 3.5 truein \epsfbox{mcdonald1306fig1.eps}}
Origin: G.K. O'Neill (1956) \cite{Oneill}.
\item
This won't work for electrons or protons.
\item
So use muons: Balbekov \cite{Ado}, Budker \cite{Budker}, Skrinsky
\cite{Skrinsky71}, late 1960's.
\end{itemize}
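The cooling mechanism sketched above can be illustrated with a toy Monte Carlo (a deliberately crude model: the absorber scales the full momentum vector down by a fixed fraction, the RF cavity restores the nominal longitudinal momentum only, and multiple scattering and straggling are ignored; all numbers are illustrative):

```python
import math
import random

random.seed(1)
N, dloss, cells = 2000, 0.02, 50     # muons, fractional dp/p per absorber, cooling cells
# (px, py, pz) with pz normalized to 1 and small random transverse components
muons = [[random.gauss(0.0, 0.05), random.gauss(0.0, 0.05), 1.0] for _ in range(N)]

def rms_angle(beam):
    """rms transverse angle sqrt(<(px^2 + py^2) / pz^2>)."""
    return math.sqrt(sum((px * px + py * py) / (pz * pz) for px, py, pz in beam) / len(beam))

theta0 = rms_angle(muons)
for _ in range(cells):
    for m in muons:
        for i in range(3):           # ionization loss acts along the full momentum
            m[i] *= 1.0 - dloss
        m[2] = 1.0                   # RF cavity restores nominal longitudinal momentum
theta1 = rms_angle(muons)
print(f"rms angle: {theta0:.4f} -> {theta1:.4f}")
```

Each cell shrinks every transverse angle by the factor $1-\delta p/p$, so the beam divergence decays geometrically; in a real channel this cooling competes with multiple-scattering heating, which sets the equilibrium emittance.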
\section{The Details are Delicate}
Use a channel of LH$_2$ absorbers, rf cavities, and alternating solenoids
(to avoid buildup of angular momentum).
One cell of the cooling channel:
\centerline{\epsfxsize 3.5 truein \epsfbox{mcdonald1306fig2.eps}}
But, the energy spread rises due to ``straggling''.
$\Rightarrow$ Must exchange longitudinal and transverse emittance
frequently to avoid beam loss due to bunch spreading.
Can reduce energy spread by a wedge absorber at a momentum dispersion
point:
\centerline{\epsfxsize 4.0 truein \epsfbox{mcdonald1306fig3.eps}}
[6-D emittance constant (at best) in this process.]
\section{What is a Muon Collider?}
An accelerator complex in which
\begin{itemize}
\item
Muons (both $\mu^+$ and $\mu^-$) are collected from pion decay
following a $pN$ interaction.
\item
Muon phase volume is reduced by $10^6$ by ionization cooling.
\item
The cooled muons are accelerated and then stored in a ring.
\item
$\mu^+\mu^-$ collisions are observed over the useful muon life of
$\approx 1000$ turns at any energy.
\item
Intense neutrino beams and spallation neutron beams are \hfill\break
available as byproducts.
\end{itemize}
Muons decay: $\mu \to e \nu {\overline \nu} \qquad \Rightarrow$
\begin{itemize}
\item
Must cool muons quickly (stochastic cooling won't do).
\item
Detector backgrounds at LHC level.
\item
Potential personnel hazard from $\nu$ interactions.
\end{itemize}
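The ``$\approx 1000$ turns'' figure follows directly from relativistic time dilation of the muon lifetime. A rough check with the 3-TeV column of the parameter table below (the factor of two accounts for both beams decaying, so the luminosity falls as $e^{-2t/\gamma\tau}$):

```python
E_beam = 1.5e3         # GeV per beam (3 TeV CoM)
m_mu = 0.1056584       # muon mass, GeV
c_tau = 658.64         # muon c*tau, m
circumference = 6.0e3  # collider ring, m (table value)

gamma = E_beam / m_mu
turns_per_lifetime = gamma * c_tau / circumference
n_effective = turns_per_lifetime / 2.0   # luminosity-weighted turn count
print(f"{turns_per_lifetime:.0f} turns per muon lifetime, ~{n_effective:.0f} effective")
```

This gives roughly 1560 turns per lifetime and roughly 780 effective turns, within about 1\% of the 785 effective turns listed in the table.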
\newpage
\begin{table}
\caption
{Baseline parameters for high- and low-energy muon colliders.
Higgs/year assumes a cross section $\sigma=5\times 10^4$~fb; a Higgs width
$\Gamma=2.7$~MeV; 1~year = $10^7$~s.}
\begin{tabular}{llccccc}
CoM energy & TeV & 3 & 0.4 &
\multicolumn{3}{c}{0.1 } \\
$p$ energy & GeV & 16 & 16 & \multicolumn{3}{c}{16}\\
$p$'s/bunch & & $2.5\times 10^{13}$ & $2.5\times 10^{13}$ &
\multicolumn{3}{c}{$5\times 10^{13}$ } \\
Bunches/fill & & 4 & 4 & \multicolumn{3}{c}{2 } \\
Rep.~rate & Hz & 15 & 15 & \multicolumn{3}{c}{15 } \\
$p$ power & MW & 4 & 4 & \multicolumn{3}{c}{4} \\
$\mu$/bunch & & $2\times 10^{12}$ & $2\times 10^{12}$ &
\multicolumn{3}{c}{$4\times 10^{12}$ } \\
$\mu$ power & MW & 28 & 4 & \multicolumn{3}{c}{1 } \\
Wall power & MW & 204 & 120 & \multicolumn{3}{c}{81 } \\
Collider circum. & m & 6000 & 1000 & \multicolumn{3}{c}{350 } \\
Ave bending field & T & 5.2 & 4.7 &\multicolumn{3}{c}{3 } \\
Depth & m & 500 & 100 & \multicolumn{3}{c}{10 } \\
Rms ${\Delta P/P}$ & \% & 0.16 & 0.14 & 0.12 & 0.01 & 0.003 \\
6d $\epsilon_6$ & $(\pi \textrm{m})^3$&$1.7\times 10^{-10}$&$1.7\times
10^{-10}$&$1.7\times 10^{-10}$&$1.7\times 10^{-10}$&$1.7\times 10^{-10}$\\
Rms $\epsilon_n$ &$\pi$ mm-mrad & 50 & 50 & 85 & 195 & 290\\
$\beta^*$ & cm & 0.3 & 2.6 & 4.1 & 9.4 & 14.1\\
$\sigma_z$ & cm & 0.3 & 2.6 & 4.1 & 9.4 & 14.1 \\
$\sigma_r$ spot &$\mu$m & 3.2 & 26 & 86 & 196 & 294\\
$\sigma_{\theta}$ IP &mrad & 1.1 & 1.0 & 2.1 & 2.1 & 2.1\\
Tune shift & &0.044 &0.044 & 0.051 &0.022 & 0.015\\
$n_{\rm turns}$ (effective) & & 785 & 700 & 450 & 450 & 450 \\
Luminosity & cm$^{-2}$s$^{-1}$ & $7\times 10^{34}$ & $10^{33}$ &
$1.2\times 10^{32}$ & $2.2\times 10^{31}$ & $10^{31}$ \\
& & & & & & \\
Higgs/year & & & & $1.9\times 10^3$ & $4\times 10^3$ & $3.9\times 10^3$ \\
\end{tabular}
\end{table}
Comparison of footprints of various future colliders:
\centerline{\epsfxsize 6.5 truein \epsfbox{mcdonald1306fig4.eps}}
\newpage
A First Muon Collider to study light-Higgs production:
\centerline{\epsfxsize 6.0 truein \epsfbox{mcdonald1306fig5.eps}}
\section{The Case for a Muon Collider}
\begin{itemize}
\item
More affordable than an $e^+e^-$\ \ collider at the TeV (LHC) scale.
\item
More affordable than either a hadron or an $e^+e^-$\ \ collider for (effective)
energies beyond the LHC.
\item
Precision initial state superior even to $e^+e^-$\ .
\hfill\break \phantom{a}\hspace{0.5in}
Muon polarization $\approx 25\%,\ \Rightarrow$ can determine $E_{beam}$ to
$10^{-5}$ via $g-2$ spin precession \cite{Raja}.
\noindent
\parbox{2.5in}
{$t \overline t$ threshold:
{\epsfxsize 2.5 truein \epsfbox{mcdonald1306fig6.eps}}}
\hspace{0.25in}
\parbox{2.5in}
{Nearly degenerate $A^0$ and $H^0$:
{\epsfxsize 3 truein \epsfbox{mcdonald1306fig7.eps}} }
\item
Initial machine could produce light Higgs via $s$-channel \cite{mup}:
\hfill\break \phantom{a}\hspace{0.5in}
Higgs coupling to $\mu$ is $(m_\mu/m_e)^2 \approx 40,000 \times$ that to
$e$.
\hfill\break \phantom{a}\hspace{0.5in}
Beam energy resolution at a muon collider $< 10^{-5}$,
\hfill\break \phantom{a}\hspace{1.0in}
$\Rightarrow$ Measure Higgs width.
\hfill\break \phantom{a}\hspace{0.5in}
Add rings to 3 TeV later.
\item
Neutrino beams from $\mu$ decay: about $10^4$ times ``hotter'' than present beams.
\hfill\break \phantom{a}\hspace{0.5in}
Possible initial scenario in a low-energy muon storage ring \cite{ring}.
\hfill\break \phantom{a}\hspace{0.5in}
$$
\mbox{Study}\ CP\ \mbox{violation via}\ CP\ \mbox{conjugate initial states:}
\left\{ \begin{array}{c} \mu^+ \to e^+ \overline \nu_\mu \nu_e \\
\mu^- \to e^- \nu_\mu \overline \nu_e \end{array}
\right. .
$$
\end{itemize}
\section{Future Frontier Facilities}
\centerline{(A Personal Assessment)}
\begin{itemize}
\item
Hadron collider (LHC, SSC): $\approx$ \$100k/m [magnets].
\hfill\break \phantom{a}\hspace{0.5in}
$\approx$ 2 km per TeV of CM energy.
\hfill\break \phantom{a}\hspace{0.5in}
Ex: LHC has 14-TeV CM energy, 27 km ring, $\approx$ \$3B.
\item
Linear $e^+e^-$\ \ collider (SLAC, NLC(?)): $\approx$ \$200k/m [rf].
\hfill\break \phantom{a}\hspace{0.5in}
$\approx$ 20 km per TeV of CM energy;
\hfill\break \phantom{a}\hspace{0.5in}
But a lepton collider needs only $\approx$ 1/10 the CM energy
\hfill\break \phantom{a}\hspace{0.5in}
to have equivalent physics reach to a hadron collider.
\hfill\break \phantom{a}\hspace{0.5in}
Ex: NLC, 1.5-TeV CM energy, 30 km long, $\approx$ \$6B (?).
\item
Muon collider: $\approx$ \$1B for source/cooler + \$100k/m for rings
\hfill\break \phantom{a}\hspace{0.5in}
Well-defined leptonic initial state.
\hfill\break \phantom{a}\hspace{0.5in}
$m_\mu/m_e \approx 200 \Rightarrow$ Little beam radiation.
\hfill\break \phantom{a}\hspace{1.5in}
$\Rightarrow$ Can use storage rings.
\hfill\break \phantom{a}\hspace{1.5in}
$\Rightarrow$ Smaller footprint.
\hfill\break \phantom{a}\hspace{0.5in}
Technology: closer to hadron colliders.
\hfill\break \phantom{a}\hspace{0.5in}
$\approx$ 6 km of ring per TeV of CM energy.
\hfill\break \phantom{a}\hspace{0.5in}
Ex: 3-TeV muon collider, $\approx$ \$3B (?), would have physics reach well
beyond the LHC.
\end{itemize}
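The rules of thumb above can be collected into a tiny sanity check (all figures are the talk's rough late-1990s estimates quoted in the list, not real cost data):

```python
COST_PER_M = {"hadron": 100e3, "ee_linear": 200e3, "muon_ring": 100e3}  # $/m

lhc = 27e3 * COST_PER_M["hadron"]              # 27 km ring
nlc = 30e3 * COST_PER_M["ee_linear"]           # 30 km of linac
# $1B source/cooler plus ~6 km of ring per TeV of CM energy, at 3 TeV
muon = 1e9 + 3 * 6e3 * COST_PER_M["muon_ring"]

for name, cost in (("LHC", lhc), ("NLC", nlc), ("3-TeV muon collider", muon)):
    print(f"{name}: ~${cost / 1e9:.1f}B")
```

which reproduces the quoted $\approx$\$3B (LHC), \$6B (NLC), and \$3B (muon collider) scales.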
\section{Muon Collider R\&D Program}
\begin{itemize}
\item
Targetry and Capture at a Muon Collider Source \cite{targetprop,targetpage}.
Baseline scenario:
\centerline{\epsfxsize 5.0 truein \epsfbox{mcdonald1306fig8.eps}}
To achieve useful physics luminosity, a muon collider must produce about
$10^{14}\ \mu$/sec.
\begin{itemize}
\item
$\Rightarrow > 10^{15}$ proton/sec onto a high-$Z$ target
$\Leftrightarrow $ 4 MW beam power.
\item
Capture pions of $P_\perp \lsim 200$ MeV/$c$ in
a 20-T solenoid magnet.
\item
Transfer the pions into a 1.25-T-solenoid decay channel.
\item
Compress $\pi/\mu$ bunch energy with rf cavities and deliver to muon
cooling channel.
\end{itemize}
Proposed R\&D facility:
\centerline{\epsfxsize 5.0 truein \epsfbox{mcdonald1306fig9.eps}}
\item
Ionization Cooling for a High Luminosity Muon Collider \cite{mucool,coolpage}.
Test basic cooling components:
\begin{itemize}
\item
Alternating solenoid lattice, RF cavities, LH$_2$ absorber.
\item
Lithium lens (for final cooling).
\item
Dispersion + wedge absorbers to exchange longitudinal and
transverse phase space.
\end{itemize}
Track individual muons; simulate a bunch in software.
Possible site: Meson Lab at Fermilab:
\centerline{\epsfxsize 4.5 truein
\epsfbox[0 240 490 580]{mcdonald1306fig10.eps}}
\noindent
\parbox{3.75in}
{Cooling channel components:
\centerline{\epsfxsize 3.75 truein
\epsfbox[60 250 450 450]{mcdonald1306fig11.eps}} }
\parbox{2.75in}
{\centerline{\epsfxsize 2.75 truein
\epsfbox[90 145 430 410]{mcdonald1306fig12.eps}} }
\end{itemize}
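The beam-power entries in the parameter table follow from the bunch parameters. A consistency check with the 3-TeV column:

```python
E_CHARGE = 1.602e-19   # J per eV

def beam_power_mw(per_bunch, bunches, rep_hz, energy_gev):
    """Average beam power in MW from bunch intensity, rep rate, and beam energy."""
    return per_bunch * bunches * rep_hz * energy_gev * 1e9 * E_CHARGE / 1e6

p_mw = beam_power_mw(2.5e13, 4, 15, 16)       # protons, 16 GeV
mu_mw = beam_power_mw(2e12, 4, 15, 1.5e3)     # muons, 1.5 TeV per beam
print(f"proton beam ~ {p_mw:.1f} MW, muon beam ~ {mu_mw:.1f} MW")
```

This is consistent with the 4 MW proton and 28 MW muon entries; the implied rates, $1.5\times 10^{15}$ protons/s and $1.2\times 10^{14}$ muons/s, also match the source requirements quoted above.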
\section{Upcoming Workshops}
\centerline{(See http://www.cap.bnl.gov/mumu/table\_workshop.html)}
\begin{itemize}
\item
Muon Collider Collaboration Meeting, May 20-26, 1999, St.~Croix.
\item
Neutrino Factories Based on Muon Accumulators, July 5-9, 1999, Lyon/CERN.
\item
Muon Colliders at the Highest Energies, Sept.\ 27-Oct.\ 1, 1999, Montauk, NY.
\item
Physics Potential and Development of $\mu^+\mu^-$\ Colliders, Dec.\ 14-19, 1999,
San Francisco.
\end{itemize}
\vspace{-0.25in}
\section{Introduction}
A brain-computer interface (BCI) uses neurophysiological signals from the brain, e.g., electrocorticography (ECoG), electroencephalography (EEG), and functional magnetic resonance imaging (fMRI), to control external devices or computers \cite{1}. Among these signals, fMRI non-invasively measures task-induced blood-oxygen-level-dependent (BOLD) changes related to brain neuronal activity. Unlike EEG, fMRI has excellent spatial resolution and whole-brain coverage, so it can accurately locate activation areas in the brain.
This paper reviews the basic architecture of real-time fMRI-based BCI (rtfMRI-BCI), an emerging machine learning based data analysis approach (also known as multi-voxel pattern analysis), and the applications and recent advances of rtfMRI-BCI.
\section{The Architecture of rtfMRI-BCI}
Different from conventional fMRI, in which image analysis can only be performed after all scans are finished, rtfMRI-based BCI allows the simultaneous acquisition, analysis and visualization of whole brain images. A typical closed-loop rtfMRI-BCI system consists of four components: image acquisition, image preprocessing, image analysis, and feedback.
\begin{enumerate}
\item \emph{Image acquisition}: According to pre-defined scanning parameters, an MRI scanner uses an echo planar imaging sequence to excite and record brain MR echo signals. An image reconstruction workstation then assembles these signals into three-dimensional images.
\item \emph{Image preprocessing}: fMRI images need to be preprocessed to improve their quality before further analyses can be performed. This usually involves the following steps:
\begin{enumerate}
\item \emph{Slice timing correction}: An fMRI volume consists of multiple slices that are acquired sequentially, so the same brain region appears in different slices at slightly different times. Slice timing correction interpolates the slices so that they can be treated as if they had all been sampled at exactly the same time \cite{SLADKY_Slice}, as shown in Fig.~\ref{fig:slice}.
\item \emph{Realignment}: Any head motion of the subject can contaminate the neighboring voxels. A common practice for motion correction is to treat the brain as a rigid body, and then calculate its translation and rotation relative to a reference image \cite{Friston_head}.
\item \emph{Coregistration}: fMRI images typically have low spatial resolution and do not include enough anatomical details, so they are usually registered to a high resolution structural MRI image of the same subject before presentation \cite{Wells_registration}.
\item \emph{Normalization}: Group analysis requires that voxels from the same brain location be comparable across subjects. Normalization registers a subject's anatomical structure to a standardized stereotaxic space defined by a template, such as the Montreal Neurological Institute or Talairach brain \cite{Ashburner_normalization}.
\item \emph{Spatial smoothing}: This is usually performed by convolving the functional image with a Gaussian kernel. Smoothing can suppress random noise, and hence increase the signal-to-noise ratio. However, it also reduces the actual spatial resolution and blurs the details, so generally it is not used in machine learning based fMRI analysis.
\end{enumerate}
\begin{figure}\centering
\includegraphics[width=.55\linewidth,clip]{slice_timing.eps}
\caption{Illustration of slice timing correction. Adopted from \cite{SLADKY_Slice}.} \label{fig:slice}
\end{figure}
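As a toy illustration of the slice timing step, the sketch below shifts one voxel's time series to a common reference time using linear interpolation between volumes. This is a deliberately simplified sketch; real pipelines such as SPM or FSL use more sophisticated sinc or spline interpolation, and the time series here is made up.

```python
# Toy slice-timing correction: resample one slice's voxel time series
# (one value per TR) to the reference acquisition time.
def slice_timing_correct(series, slice_delay, tr):
    """Linearly interpolate `series` as if it had been acquired
    `slice_delay` seconds closer to the reference time."""
    frac = slice_delay / tr          # fractional shift in units of TRs
    corrected = []
    for t in range(len(series)):
        if t + 1 < len(series):
            # linear interpolation between this volume and the next
            corrected.append(series[t] + frac * (series[t + 1] - series[t]))
        else:
            corrected.append(series[t])  # last volume: no later sample
    return corrected

# A made-up linear BOLD drift sampled every TR = 2 s; a slice offset by
# 1 s interpolates exactly halfway between successive samples.
raw = [0.0, 2.0, 4.0, 6.0]
fixed = slice_timing_correct(raw, slice_delay=1.0, tr=2.0)
print(fixed)  # [1.0, 3.0, 5.0, 6.0]
```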
\item \emph{Image analysis}: This step locates the real-time activation areas within the brain and then performs univariate or multivariate analysis. Typical tasks include statistical analysis of a specific region of interest (ROI) to determine its activation level, and online classification of brain states to find the subject's intention.
Univariate analysis measures brain activities from thousands of locations repeatedly, and then analyzes each location individually to understand how a particular perceptual or cognitive state is encoded \cite{2}. If the response at a certain location in the brain is different between two states, then the voxel strength at that location can be used to decode the state. Therefore, univariate analysis uses statistical analysis to identify the voxels that are significantly correlated to a specific task, and hence the regions that are significantly activated in the brain, which are called ROIs or functional areas.
While the majority of work in rtfMRI-BCI is done through conventional univariate analysis, there is a growing interest in machine learning based multivariate analysis, particularly, in the emerging field of brain state classification, i.e., decoding the brain state to determine the intention of the subject. This typically includes feature extraction, feature selection/dimensionality reduction, and classification.
\begin{enumerate}
\item \emph{Feature extraction}: The resting-state fMRI is commonly used to diagnose mental diseases. In addition to calculating regional attributes such as the amplitude of low-frequency fluctuations \cite{3} and regional homogeneity \cite{4}, functional connections between different regions can also be calculated, and the connection matrix can be used to compute its network properties \cite{5}. For the task-based fMRI, in addition to calculating the functional connections between different regions, the voxel intensities at different times can also be used as features in pattern analysis, and the resulting method is called multi-voxel pattern analysis (MVPA).
\item \emph{Feature selection/dimensionality reduction}: Feature selection selects the most useful features from a feature set and discards the rest, so it also results in dimensionality reduction. It is an important data preprocessing process that can alleviate the curse of dimensionality and simplify the subsequent learning tasks. Dimensionality reduction maps the original high-dimensional feature space to a low-dimensional subspace using a mathematical transformation. The new features are linear or nonlinear combinations of the original features, and are usually more informative \cite{6}.
\item \emph{Classification}: Simple linear classifiers, such as the correlation-based classifier \cite{7,8}, neural networks without hidden layers \cite{9}, linear discriminant analysis \cite{10,11,12,13}, the linear support vector machine (SVM) \cite{14,15,16}, and the Gaussian naive Bayes classifier \cite{14}, are frequently used in MVPA. They compute a weighted sum of the voxel intensities and pass it to a decision function to classify the brain state. Nonlinear classifiers, such as the nonlinear SVM \cite{14,17} and multi-layer neural networks \cite{18}, have also been used. Although nonlinear classifiers can in principle capture more complex mappings between features and brain states, there is no guarantee that they significantly outperform linear classifiers in MVPA \cite{15}. One likely reason is that nonlinear classifiers generally need a large amount of training data to achieve their best performance, which is rarely available in neuroimaging. In addition, with a simple linear classifier one can visualize and interpret which voxels contribute most to the decision, which is much harder for a nonlinear classifier. As a result, the linear SVM is the most frequently used classifier in fMRI research.
\end{enumerate}
\item \emph{Feedback}: This step feeds the online analysis results back to the subject in real-time, so that the subject can voluntarily self-regulate his/her cognitive function or state. It also presents task-related stimuli to the subject.
\end{enumerate}
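The MVPA classification step described above can be illustrated with a minimal correlation-based classifier in the spirit of Haxby et al. \cite{7}: each class is represented by the mean of its training patterns, and a new pattern is assigned to the class whose mean it correlates with most strongly. The 4-voxel patterns below are made up for illustration; a practical pipeline would use a library classifier such as a linear SVM.

```python
# Correlation-based MVPA sketch: classify a voxel pattern by its Pearson
# correlation with each class's mean training pattern.
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def train(patterns_by_class):
    """Mean pattern per class: {label: [mean voxel intensities]}."""
    return {label: [sum(col) / len(col) for col in zip(*pats)]
            for label, pats in patterns_by_class.items()}

def classify(means, pattern):
    return max(means, key=lambda label: pearson(means[label], pattern))

# Made-up 4-voxel training patterns for two brain states.
training = {
    "left_hand":  [[1.0, 0.1, 0.9, 0.0], [0.8, 0.2, 1.1, 0.1]],
    "right_hand": [[0.1, 1.0, 0.0, 0.9], [0.2, 0.9, 0.1, 1.2]],
}
means = train(training)
print(classify(means, [0.9, 0.0, 1.0, 0.2]))  # left_hand
```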
\section{Applications of rtfMRI-BCI}
The applications of rtfMRI in BCI can be roughly partitioned into two categories: 1) neurofeedback, in which a subject can voluntarily self-regulate his/her brain activity in a specific region through the feedback of the activation level there; and, 2) brain state decoding, which analyzes the subject's fMRI data to determine his/her intention, which can be then used to control an external device or computer.
\subsection{Neurofeedback}
Because fMRI has high spatial resolution and can image the entire brain, rtfMRI-BCI can extract the activation levels of specific anatomical locations (ROIs) as feedback. Among the various feedback modalities (auditory, visual, verbal, olfactory, and tactile), visual feedback has been the most popular one. The form of visual feedback also changes with the purpose of the experiment. deCharms et al. \cite{19} introduced a flame-like feedback in a pain-related study, as shown in Fig.~\ref{fig:flame}, where the intensity of the flame increases with the intensity of the signal. Sitaram et al. \cite{21} described a thermometer feedback, where red and blue colors are used to indicate whether the signal is above or below a baseline, as shown in Fig.~\ref{fig:thermometer}. Weiskopf et al. \cite{22} used the differential feedback intensity curve as feedback, where an upward arrow indicates an activity enhancement, as shown in Fig.~\ref{fig:curve}.
The seminal rtfMRI-BCI work by deCharms et al. \cite{19} on chronic pain deserves special mention here. The purpose was to find out whether adjusting the activity of the rostral anterior cingulate cortex (rACC) can affect the perception of pain. Their study showed that pain induced by a noxious stimulus may be perceived differently if the subject intentionally increases or inhibits the BOLD level in the rACC. Through rtfMRI-based neurofeedback, subsequent experiments have shown that subjects can voluntarily adjust the level of activity in many other brain regions, including the anterior cingulate cortex \cite{20}, the insula \cite{21}, the motor area \cite{24}, the amygdala \cite{25}, the inferior frontal gyrus \cite{26}, and the parahippocampal place area \cite{27}. After enough training, a subject can even voluntarily adjust the corresponding brain region without neurofeedback, and this ability can last for some time after the training.
\begin{figure}\centering
\subfigure[]{\includegraphics[width=.3\linewidth,clip]{flame.eps}\label{fig:flame} }
\subfigure[]{\includegraphics[width=.315\linewidth,clip]{thermometer.eps}\label{fig:thermometer} }
\subfigure[]{\includegraphics[width=.29\linewidth,clip]{curve.eps}\label{fig:curve} }
\caption{Three different forms of visual feedback. (a) flame, adopted from \cite{19}; (b) thermometer, adopted from \cite{21}; (c) intensity curve, adopted from \cite{22}.}\label{fig:feedback}
\end{figure}
These research results suggest that rtfMRI-BCI provides a new approach in neuroscience for studying brain plasticity and functional reorganization through sustained training of specific brain regions \cite{28}. One potential application of neurofeedback is clinical rehabilitation, e.g., reducing the effects of abnormal brain activities, overcoming stroke-induced dyskinesia and Parkinson's disease, relieving chronic pain, and treating depression and other neurological problems such as psychosis, social phobia and addiction \cite{30,31,32,33,34,35}.
\subsection{Brain State Decoding}
Another main application of rtfMRI-BCI is similar to ``brain reading", which classifies a subject's brain state to determine his/her intention. Its implementation can be divided into two categories: 1) pattern matching based on task-specific ROIs, and 2) machine learning based brain state classification.
Pattern matching was used by Yoo et al. \cite{36} in 2004 to perform BCI-based spatial navigation, in which a subject's brain signals were classified into four states so that the subject could control a computer to navigate through a maze. Similar work was reported in \cite{37,38,39,40,41}. In all these studies the number of classifiable brain states did not exceed four.
In 2007, Sorger et al. \cite{42} used pattern matching to distinguish among 27 brain states, and implemented the world's first rtfMRI-BCI based spelling system. In this system, a subject can independently alter three aspects of the BOLD signal:
\begin{enumerate}
\item The location of the signal source, by performing three different mental tasks (motor imagery, mental calculation, and inner speech).
\item Delay of the mental task start time (0s, 10s, and 20s).
\item The duration of the mental task, which in turn determines the duration of the brain signal (10s, 20s, and 30s).
\end{enumerate}
The combination of these aspects resulted in 27 unique brain responses, which can be assigned to 27 characters, as shown in Fig.~\ref{fig:coding}.
\begin{figure}\centering
\includegraphics[height=4.05cm]{coding.eps}
\caption{Letter coding scheme. Adopted from \cite{42}.} \label{fig:coding}
\end{figure}
The spelling system required very little pre-training and can help patients with locked-in syndrome communicate in real time. Its main disadvantage is the very low information transfer rate (on average 50s per letter).
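The combinatorics of this coding scheme are easy to verify; the sketch below enumerates the $3 \times 3 \times 3 = 27$ distinct brain responses and assigns each a character. The particular character ordering is our own illustration, not necessarily the one used in \cite{42}.

```python
from itertools import product
from string import ascii_uppercase

tasks = ["motor imagery", "mental calculation", "inner speech"]
delays = [0, 10, 20]       # seconds before the mental task starts
durations = [10, 20, 30]   # seconds the task is sustained

# Every (task, delay, duration) triple is one distinguishable BOLD response.
responses = list(product(tasks, delays, durations))
letters = ascii_uppercase + "_"          # 26 letters plus a space symbol
codebook = dict(zip(responses, letters))

print(len(responses))                      # 27
print(codebook[("motor imagery", 0, 10)])  # A
```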
In summary, pattern matching based on task-specific ROIs needs very little pre-training and preparation to implement a BCI system, but generally has low transfer efficiency. Machine learning based brain state classification, also known as MVPA, is expected to improve it. Its main advantages include: 1) it does not require \emph{a priori} assumptions about the functional positioning and individual performance strategies, and 2) it can significantly improve the sensitivity of human neuroimaging analysis by considering the full spatial pattern of brain activities that are measured at many locations.
The application of MVPA to offline fMRI data analysis originated from Haxby et al.'s work \cite{7} in 2001. Since then, cognitive neuroscience research has witnessed a rapidly growing interest on brain state classification using fMRI and experimental designs.
In 2007, LaConte et al. \cite{43} performed online classification of left and right index finger movement using SVM, which verified the feasibility of using machine learning to implement a BCI system. They first trained an SVM classifier on offline fMRI data, then applied it to online fMRI images to predict the brain state, and updated the computer-presented stimulus accordingly. This study also showed that machine learning based stimulus feedback can respond to changes in the brain state much earlier than the time-to-peak limitation of the BOLD response, i.e., it has higher sensitivity. In 2009, Eklund et al. \cite{44} used a neural network to classify three activities (left hand movement, right hand movement, and resting) from rtfMRI, and then controlled the balance of a virtual reality inverted pendulum. In 2011, Hollmann et al. \cite{45} used a relevance vector machine to predict a person's decisions in a game. In 2013, Andersson et al. \cite{46} used SVM to classify visuospatial attention based on fMRI data collected by an ultrahigh field (7 Tesla) MRI scanner. Four subjects succeeded in navigating a robot with virtually no training. Compared with methods based on the local activation of ROIs, MVPA has a significantly higher information transfer rate.
\section{Future Developments and Ethical Considerations}
In BCIs, EEG has excellent temporal resolution but poor spatial resolution, whereas fMRI has high spatial resolution and low temporal resolution. Recent advances in sensing hardware have enabled the simultaneous acquisition of EEG and fMRI signals, but sophisticated signal processing and machine learning approaches are still needed to optimally integrate these two modalities to achieve both high temporal resolution and high spatial resolution \cite{47,48,49}. Brain stimulation techniques such as transcranial magnetic stimulation (TMS) can then be better used to treat brain disorders.
The rapid development of BCIs also raises ethical concerns. Both structural and functional brain signals are related to mental states and traits, which could potentially be used to reveal sensitive private information \cite{2}. So, ethics and regulations are also very important to the healthy development of BCIs.
\section{Conclusions}
This paper has introduced the architecture of rtfMRI based BCI, which includes image acquisition, image preprocessing, image analysis, and feedback. Among them, image preprocessing and analysis are the most important components. Although many algorithms exist for offline fMRI data processing and analysis, how to adapt and optimize them for online real-time tasks still calls for more research.
We also reviewed the applications of rtfMRI in BCI, which can be divided into two directions: neurofeedback and brain state decoding. Both can be of great significance to clinical rehabilitation and cognitive neuroscience research.
\section{Introduction}
Since the development of the VLSI industry and the consequent scaling down of devices following Moore's law, transistor sizes have significantly decreased. As a result, supply voltages have also been reduced. Therefore, voltage regulators are needed, which output a fixed voltage despite varying input voltages. The IC's other parts receive this fixed output voltage as a supply. There are two main types of regulators: linear and switching. LDOs are linear regulators that are commonly used in VLSI chips. The reduction in supply voltage due to scaling has made LDOs an important component of power management ICs because we require lower input-output voltage differences, i.e., lower dropout.
An LDO consists of four main blocks ~\cite{ref_article10}: an error amplifier, a pass element, a feedback network, and a load. The error amplifier is a differential amplifier based on an operational transconductance amplifier (OTA). An OTA has characteristics similar to those of an op-amp, but OTAs are designed to drive capacitive loads whereas op-amps are designed to drive resistive loads. MOSFETs or BJTs can be used as the pass element. MOSFETs are preferable because they are less sensitive to temperature changes ~\cite{ref_article11}. A simple resistive voltage divider can serve as the feedback network.
The design has been simulated using LTspice developed by Linear Technology and Analog Devices. It is widely used for analog circuit simulations.
\section{Comparative study between performances of PMOS and NMOS LDOs}
The two main architectural types of the LDO~\cite{ref_article1} are shown in
Fig.~\ref{fig1}; the difference between them is the type of pass transistor. The first uses a PMOS pass transistor, whereas the second uses an NMOS pass transistor.
\begin{figure}
\includegraphics[width=\textwidth]{fig1.png}
\caption{Block level design of PMOS and NMOS based LDO} \label{fig1}
\end{figure}
Three major components make up a typical LDO circuit: a high-gain error amplifier~\cite{ref_article12}, a pass transistor, and a feedback network. The error amplifier compares the output voltage, sensed through a resistive divider that acts as voltage-voltage feedback, with a reference voltage, and drives the pass transistor to control the load current.
\subsection{The reason behind selecting a PMOS-based design}
The dropout voltage of a PMOS pass transistor is lower than that of an NMOS one. The NMOS pass transistor is connected in a common-drain configuration, which yields a small output resistance at high load currents due to its increased transconductance; however, it requires an additional charge pump~\cite{ref_article3} to drive its gate over a wide range of load currents, which makes the IC more cumbersome to fabricate. The PMOS LDO also has a higher loop gain than the NMOS-based LDO. On the other hand, because the mobility of electrons is greater than that of holes, a PMOS design occupies a larger area and hence has larger capacitances, which makes the PMOS LDO~\cite{ref_article5} slower than its NMOS counterpart. Weighing these trade-offs, we have selected a PMOS pass transistor for our design.
\section{Error Amplifier Design}
The error amplifier~\cite{ref_article4} is a two-stage OTA. The first stage is a differential amplifier; the second is a gain-enhancing common-source stage. Current-mirror biasing for the first stage is formed by MOSFETs $M_1$ and $M_6$, while $M_2$ and $M_3$ form the inverting and non-inverting input terminals of the OTA, respectively. The design is done in 180\,nm technology. The current flowing through $M_6$ is mirrored into $M_1$ and is twice the current flowing through each of $M_4$ and $M_5$, since the same current flows through $M_4$ and $M_5$ due to their equal gate-source voltages. $C_c$ is the Miller compensation capacitance and $C_L$ is the load capacitance; the Miller capacitance is added to improve the stability of the error amplifier.
\begin{figure}
\includegraphics[width=\textwidth]{fig2.png}
\caption{Circuit diagram of error amplifier} \label{fig2}
\end{figure}
\section{Significance of Miller Compensation Capacitance}
The compensation capacitance stabilizes the system by increasing its phase margin. MOSFET $M_4$ is diode-connected, so it has a very low output impedance ($\sim 1/g_{m_4}$), and hence the overall port impedance at the drain of $M_4$ is low. However, the port impedances at the drains of $M_5$ and $M_8$ are very high (comparable to $r_o$). Each of these high-impedance nodes forms a low-frequency pole ($\sim 1/(r_o C_L)$). Each pole contributes a $-90^\circ$ phase shift, for a total phase shift of $-180^\circ$ at low frequencies. This results in a $0^\circ$ phase margin, and the OTA would oscillate in negative feedback. We therefore add a compensation capacitor~\cite{ref_article2} between the drains of $M_5$ and $M_8$. By Miller's theorem, the node at the drain of $M_5$ now sees a much larger capacitance, so this pole shifts towards a lower frequency while the other pole shifts towards a higher frequency. The system thus behaves essentially as a first-order system with a significant phase margin ($\sim 60^\circ$), and the OTA functions as an amplifier instead of an oscillator.
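To make the pole-splitting argument concrete, the sketch below evaluates the standard two-stage-OTA pole approximations, $f_{p1} \approx 1/(2\pi R_1 g_{m2} R_2 C_c)$ and $f_{p2} \approx g_{m2}/(2\pi C_L)$, for a set of purely illustrative device values (assumptions, not figures from the simulated design):

```python
import math

# Illustrative small-signal values (assumptions, not from the actual design).
gm2 = 1e-3      # second-stage transconductance, S
R1 = 500e3      # output resistance of the first stage, ohm
R2 = 100e3      # output resistance of the second stage, ohm
Cc = 3e-12      # Miller compensation capacitance, F
CL = 10e-12     # load capacitance, F

# Miller effect: Cc appears at the first-stage output multiplied by the
# second-stage gain, pushing the dominant pole down to a low frequency ...
f_p1 = 1 / (2 * math.pi * R1 * gm2 * R2 * Cc)
# ... while the output pole is pushed out to roughly gm2 / (2*pi*CL).
f_p2 = gm2 / (2 * math.pi * CL)

print(f"dominant pole ~ {f_p1:,.0f} Hz, output pole ~ {f_p2:,.0f} Hz")
```

With these numbers the two poles end up roughly four decades apart, which is what makes the compensated loop behave as a first-order system near the unity-gain frequency.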
\section{Proposed PMOS LDO}
The architecture of this LDO is similar to a basic LDO regulator. A voltage reference of 1.2V~\cite{ref_article6} is generated by the bandgap which is given to the negative terminal of the error amplifier. The output of the error amplifier block is fed to the gate terminal of the PMOS pass network and at the drain, a resistive divider network is connected. The feedback voltage from the resistive divider is fed back to the positive terminal of the error amplifier to ensure negative feedback.
\begin{figure}
\includegraphics[width=\textwidth]{fig3.png}
\caption{Circuit design of PMOS based LDO} \label{fig3}
\end{figure}
\subsection{Working Principle of Circuit}
From Fig.~\ref{fig2}, the error amplifier is in negative feedback. According to the Barkhausen criterion, the total loop phase shift should be $-180^\circ$ for negative feedback. The gate-drain stage contributes a $-180^\circ$ phase shift and the output is fed back to the positive terminal, so the total phase shift is indeed $-180^\circ$. Under negative feedback we can assume a virtual short at the op-amp inputs, so $V_+ = V_- = V_{REF}$, which leads to the following working formula for $V_{out}$.
\begin{equation}
V_{out} = V_{REF}\,\frac{R_1 + R_2}{R_2}
\end{equation}
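For example, with $V_{REF} = 1.2$\,V, hypothetical divider values such as $R_1 = 20\,\mathrm{k}\Omega$ and $R_2 = 30\,\mathrm{k}\Omega$ (any pair with $R_1/R_2 = 2/3$ works, these particular values are our own illustration) give the regulated 2.0\,V output:

```python
def ldo_vout(vref, r1, r2):
    """Regulated output of the feedback divider: Vout = Vref*(R1+R2)/R2."""
    return vref * (r1 + r2) / r2

# Hypothetical resistor values chosen so that (R1 + R2)/R2 = 5/3.
print(round(ldo_vout(vref=1.2, r1=20e3, r2=30e3), 3))  # 2.0
```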
Now, let's understand the LDO regulation principle intuitively. If the load current increases, the current through the pass element cannot increase immediately. It will undergo some transient. Initially, the required additional load current is drawn from the load capacitor. This will lead to a decrease in the output node voltage. This output voltage reduction will lead to a decrease in the feedback voltage. Therefore, the gate voltage of the pass element will also decrease, because the voltage has been fed back to the positive terminal. Thus, the source-gate voltage of the PMOS pass element increases, which finally increases the current through the pass network to the required level. The same is the scenario when the load current drops. In this way, the output voltage and the load current are maintained by the LDO.
\subsection{Simulation and Analysis}
The 2-stage OTA~\cite{ref_article7} forming the error amplifier was simulated in LTspice. The OTA was designed for a DC differential voltage gain greater than 5000 and a gain-bandwidth product (GBP) of 5\,MHz. The Bode plot of the OTA is shown in Fig.~\ref{fig4}.
\begin{figure}
\includegraphics[width=\textwidth]{fig4.png}
\caption{Bode plot of the OTA} \label{fig4}
\end{figure}
The DC gain is found to be 81\,dB ($\approx 11220$), which is greater than 5000 as required. The 3\,dB cutoff frequency is found to be 440\,Hz, so the GBP evaluates to approximately 5\,MHz, which matches our design constraints.
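These figures are easy to cross-check numerically by converting the dB gain back to a linear ratio and multiplying by the cutoff frequency:

```python
# Gain-bandwidth product from the measured Bode-plot figures.
dc_gain_db = 81.0
f_3db_hz = 440.0

dc_gain = 10 ** (dc_gain_db / 20)   # linear voltage gain, ~11220
gbp = dc_gain * f_3db_hz            # GBP = DC gain x 3 dB bandwidth

print(f"linear gain ~ {dc_gain:.0f}, GBP ~ {gbp / 1e6:.2f} MHz")
```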
\begin{figure}
\includegraphics[width=\textwidth]{fig5.png}
\caption{The LDO circuit was simulated for a load current~\cite{ref_article9} of 10mA and the input supply was varied from 5.0V to 1.0V. The output was maintained at 2.0V as given by equation (1) until $V_{in}$ drops below 2.6V. Therefore, the dropout voltage is (2.6 - 2.0) V = 0.6V} \label{fig5}
\end{figure}
\begin{figure}
\includegraphics[width=\textwidth]{fig6.png}
\caption{Behaviour of the LDO for varying load currents} \label{fig6}
\end{figure}
We observed the behaviour of the LDO for varying load currents in Fig.~\ref{fig6}. As the load current increases, the performance of the LDO degrades, since the dropout voltage increases~\cite{ref_article8,ref_article13}. The LDO supplies a maximum current of 23\,mA, beyond which the output voltage can no longer be regulated.
\subsection{Result Summary}
\begin{table}
\centering
\caption{Simulation and analysis results of the PMOS based LDO}\label{tab1}
\begin{tabular}{|l|ll|}
\hline
Parameters & \multicolumn{2}{l|}{Values} \\ \hline
Input Voltage & \multicolumn{2}{l|}{1.0V - 5.0V} \\ \hline
Output Voltage & \multicolumn{2}{l|}{2.0V} \\ \hline
Reference Voltage & \multicolumn{2}{l|}{1.2V} \\ \hline
\multirow{3}{*}{Dropout Voltage} & \multicolumn{1}{l|}{$I_{LOAD}$ = 5mA} & 0.3V \\ \cline{2-3}
& \multicolumn{1}{l|}{$I_{LOAD}$ = 7.5mA} & 0.45V \\ \cline{2-3}
& \multicolumn{1}{l|}{$I_{LOAD}$ = 10mA} & 0.6V \\ \hline
\multirow{3}{*}{Power Consumed} & \multicolumn{1}{l|}{$I_{LOAD}$ = 5mA} & 7.42mW \\ \cline{2-3}
& \multicolumn{1}{l|}{$I_{LOAD}$ = 7.5mA} & 10.25mW \\ \cline{2-3}
& \multicolumn{1}{l|}{$I_{LOAD}$ = 10mA} & 12.31mW \\ \hline
Miller Capacitance & \multicolumn{2}{l|}{3pF} \\ \hline
Maximum Tolerable Load Current & \multicolumn{2}{l|}{23mA} \\ \hline
\end{tabular}
\end{table}
\section{Conclusion}
This paper illustrates how the low-dropout (LDO) voltage regulator topology can be applied to voltage regulator design and why PMOS-based designs are preferred. The simulation shows that the proposed design can tolerate load currents of up to 23\,mA, an excellent result compared with other LDOs. The PMOS-based design was stabilized with an efficient compensation technique: a Miller capacitance of 3\,pF was used to achieve a $\sim 60^\circ$ phase margin.
\bibliographystyle{splncs04}
\section{Introduction} \label{sec:intro}
Markov decision processes (MDPs) provide a class of stochastic optimisation models
that have found wide applicability to problems in Operational Research.
The standard methods for computing an optimal policy
are based on value iteration, policy iteration and linear programming algorithms
\cite{Whi93}.
Each approach has its advantages and disadvantages.
In particular, each step in value iteration is relatively computationally
inexpensive but the value function may take some time to converge
and the algorithm provides no direct check that it has computed
the optimal value function and an optimal policy.
Conversely, each step in policy iteration may be computationally expensive
but the algorithm can be proved to converge in a finite number of steps,
confirms when it has converged and automatically identifies the optimal
value function and an optimal policy on exit.
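The contrast can be seen even on a toy example; the sketch below runs discounted value iteration on a made-up two-state, two-action MDP (minimising cost) and stops when successive value functions agree to a tolerance, then reads off a greedy policy. The numbers are illustrative only.

```python
# Value iteration on a tiny discounted-cost MDP (illustrative numbers only).
# cost[s][a] is the one-step cost; P[s][a][s2] the transition probability.
cost = [[1.0, 2.0], [0.5, 0.3]]
P = [[[0.9, 0.1], [0.2, 0.8]],
     [[0.5, 0.5], [0.1, 0.9]]]
beta = 0.9  # discount factor

V = [0.0, 0.0]
for _ in range(1000):
    newV = [min(cost[s][a] + beta * sum(P[s][a][s2] * V[s2] for s2 in range(2))
                for a in range(2))
            for s in range(2)]
    if max(abs(newV[s] - V[s]) for s in range(2)) < 1e-10:
        V = newV
        break  # converged, but only up to the chosen tolerance
    V = newV

# Greedy policy with respect to the converged value function.
policy = [min(range(2), key=lambda a: cost[s][a]
              + beta * sum(P[s][a][s2] * V[s2] for s2 in range(2)))
          for s in range(2)]
print(V, policy)
```

Note that the stopping rule above only certifies closeness of successive iterates, not optimality of the returned policy, which is exactly the weakness of value iteration discussed in the text.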
Here we focus on models with special structure, in that they
are {\em skip-free in the negative direction} \cite[p.10]{Kei65}
or {\em skip-free to the left} \cite{StWe89};
i.e.\ whatever the action taken,
the process cannot pass from one state to a `lower'
state without passing through all the intervening states.
Such skip-free models arise naturally in many areas where OR is applied.
The most obvious examples are the control of discrete time random walks
and continuous time birth and death processes \cite{Ser81}
such as queueing control problems with single unit arrivals and departures
(see, for example, \citeasnoun{StWe89} and references therein).
In these basic one-dimensional models, the state space $S$ is (a subset of) the integer
lattice and transitions are only possible to the next higher or lower integer state.
However there are several other standard OR models that fall within the wider
one-dimensional skip-free framework, including examples from the areas of
queueing control with batch arrivals \cite{StWe89},
inventory control \cite{Mil81} and reliability and maintenance \cite{Der70,Tho82}.
Previous treatments of controlled skip-free processes
have considered only the one-dimensional formulation.
For processes with the `skip-free to the left' property, work has focused
on qualitative properties, in particular the existence of monotone optimal
policies for models with appropriately structured cost functions \cite{StWe89,StWe99}.
Conversely, work on processes with the corresponding `skip-free to the right'
property has concentrated on analysis of an approximating bisection method
for countable state space models \cite{WiSt86,WiSt00}.
We note that skip-free type ideas have also been exploited in a different direction by
\cite{Whi05} and citing authors,
where the emphasis has been on reducing the computational complexity
associated with policy iteration for quasi birth-death processes.
An intuitive way of characterising the essential features of our finite
skip-free recurrent model is that
the model is skip-free if and only if the state space
can be identified with the graph of a finite tree, rooted at $0$,
with each state $i$ corresponding to a unique node in the tree,
and such that for every action $a \in A$,
the only possible transitions from state $i$ under action $a$ are either to
its `parent' state or to a state in the subtree rooted at $i$,
with appropriate modifications for state $0$ which has no parent
and for terminal nodes which have only a parent and no descendants.
In this setting, the one-dimensional skip-free model above, with state space
$S = \{0,1,\ldots,M\}$, corresponds to the simplest case where
the tree reduces to a single linearly ordered branch connecting the
root node $0$ through states $1, 2, \ldots, M-1$ to the terminal node $M$,
and transitions from state $i$ are possible only to states $ j \in \{i-1, i, \ldots,M \}$.
However, the analysis extends easily to cases with a
richer, possibly multidimensional, state space,
where the appropriate model is in terms of transitions on a finite tree.
Examples of genuinely skip-free models with multidimensional state spaces
arise in simple multi-class queueing systems with batch arrivals
\cite[and references therein]{YeSe94,He00},
but such treatments have focused mainly on describing the behaviour
of the process for a fixed set of parameters (actions) rather than
comparing actions in an optimality framework.
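In the one-dimensional case, the skip-free-to-the-left property is a simple structural condition on the transition matrices: under every action, $p_{ij}(a) = 0$ whenever $j < i - 1$. A minimal sketch of such a check, with made-up matrices on $S = \{0, 1, 2\}$:

```python
def is_skip_free_left(P):
    """True iff the transition matrix P (list of rows) never jumps more
    than one state in the negative direction: P[i][j] == 0 for j < i - 1."""
    return all(P[i][j] == 0.0
               for i in range(len(P))
               for j in range(len(P)) if j < i - 1)

# A birth-death-style walk on {0, 1, 2} is skip-free to the left ...
walk = [[0.5, 0.5, 0.0],
        [0.3, 0.2, 0.5],
        [0.0, 0.4, 0.6]]
# ... but a matrix allowing a jump from state 2 straight to state 0 is not.
jumpy = [[0.5, 0.5, 0.0],
         [0.3, 0.2, 0.5],
         [0.2, 0.2, 0.6]]
print(is_skip_free_left(walk), is_skip_free_left(jumpy))  # True False
```

For a skip-free MDP, this check must hold for the transition matrix of every action; the tree-structured generalisation replaces "$j < i - 1$" by "$j$ is neither the parent of $i$ nor in the subtree rooted at $i$".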
The rest of the paper is organized as follows.
We start by describing models for average cost finite state recurrent MDPs
that are skip-free in the negative direction,
illustrating our approach with a motivating example.
We then propose a skip-free algorithm
that combines the advantages of value iteration and policy iteration:
the computational effort required for each iteration step
is comparable with that for value iteration,
but the algorithm is guaranteed to converge after a finite number of iterations
and automatically identifies the optimal value function and an optimal policy on exit.
We go on to show that the algorithm can be also be used to solve
discounted cost models and continuous time models, and
that a suitably modified algorithm can be used to solve communicating models.
Finally, we build on the relationship between
the average cost problem and a corresponding
$x$-{\em revised first passage problem}
to provide a proof of the main theorem and
identify other possible variants of the algorithm.
\section{The skip-free MDP model} \label{sec:sf-model}
Consider a discrete time Markov decision process (MDP)
with finite state space $S$
over an infinite time horizon $t \in \{0,1,2,\ldots \}$.
Associated with each state $i \in S$ is a non-empty finite set of
possible actions;
since $S$ is finite, we assume without loss of generality
that the set of actions $A$ is the same for each $i$.
If action $a \in A$ is chosen when the process is in state $X_t = i$
at time $t$,
then the process incurs an immediate cost $c_i(a)$
and the next state is $X_{t+1} = j$ with probability $p_{ij}(a)$.
A policy $\pi$ is a sequence of (possibly history dependent and randomised)
rules for choosing the action at each given time point $t$.
A {\em deterministic} decision rule corresponds to a function $d\!:\!S\!\rightarrow\!A$
and specifies taking action $a = d(i)$ when the process is in state $i$.
A {\em stationary deterministic} policy is one which
always uses the same deterministic decision rule at each time point $t$.
Where the meaning is clear from the context, we use the same notation
$d$ for both the decision rule and the corresponding stationary deterministic policy.
The expected average cost incurred by a policy $\pi$ with initial state $i$ is given by
$g_{\pi}(i) = \limsup_{n \to \infty}
\frac{1}{n} \; E_{\pi} \left( \sum_{t=0}^{n-1} c_{X_t}(a_t) | X_0 = i \right ),$
where $X_t$ is the state at time $t$ and $a_t$ is the action chosen at time $t$ under $\pi$.
Similarly, for a given discount factor $0 < \beta < 1$,
the total expected discounted cost incurred by a policy $\pi$ with initial state $i$
is given by
$V^{\beta}_{\pi}(i) = E_{\pi} \left( \sum_{t=0}^{\infty}
\beta^t \, c_{X_t}(a_t) | X_0 = i \right ).$
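For a stationary deterministic policy $d$ whose chain has a unique stationary distribution $\pi$, the expected average cost reduces to the stationary expectation of the immediate costs, $g(d) = \sum_i \pi_i c_i(d(i))$. A minimal sketch (the function name and the use of power iteration are our own illustrative choices, not from the paper):

```python
# Sketch: average cost of a fixed stationary deterministic policy d,
# computed as g_d = sum_i pi_i * c_i(d(i)), where pi is the stationary
# distribution of the transition matrix P_d. Power iteration for brevity.

def average_cost(P, c, iters=1000):
    """P[i][j]: transition matrix under d; c[i]: immediate cost in state i."""
    n = len(P)
    pi = [1.0 / n] * n                  # start from the uniform distribution
    for _ in range(iters):              # pi <- pi P, repeatedly
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return sum(pi[i] * c[i] for i in range(n))
```

For an illustrative two-state chain with $p_{01} = 0.5$, $p_{10} = 0.7$ and costs $(0, 1)$, the stationary distribution is $(7/12, 5/12)$, so the average cost is $5/12$.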
We say an MDP model is {\em recurrent} if the transition matrix corresponding to
every stationary deterministic policy consists of a single recurrent class.
We say an MDP model is {\em communicating} if, for every pair of states $i$ and $j$ in $S$,
$j$ is reachable from $i$ under some (stationary deterministic) policy $d$;
i.e.\ there exists a policy $d$, with corresponding transition matrix $P_d$,
and an integer $\, n \ge 0$, such that $P_d(X_n = j | X_0 = i) > 0$.
When $S = \{ 0, 1, 2, \ldots, M \}$ is a subset of the integer lattice,
we say the MDP model is {\em skip-free in the negative direction}
\cite{Kei65,StWe89} if
$p_{ij}(a) = 0$ for all $j < i-1$ and $a \in A$,
i.e.\ the process cannot move from state $i$ to a state with index $j < i$
without passing through all the intermediate states.
We will often find it easier to work in terms of the upper tail probabilities
$\bar{p}_{i j}(a) \equiv P(X_{t+1} \ge j \, | \, X_t = i, A_t = a) = \sum_{s=j}^M p_{i s}(a)$.
To avoid degeneracy, we assume that $p_{00}(a) < 1$ for $a \in A$
and that for each $i \in \{1, \ldots,M \}$, $p_{i i-1}(a) > 0$ for at least one $a \in A$.
In this setting, a recurrent model requires that, for all $a \in A$,
$p_{ii-1}(a) > 0$ for $i = 1,\ldots,M$ and $p_{ii}(a) < 1$ for all $i \in S$.
In contrast, a communicating model allows
there to be $i$ and $a$ with $p_{ii-1}(a) = 0$ and/or $p_{ii}(a) = 1$.
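As a concrete illustration of these definitions, the sketch below (hypothetical helper names; the matrices in the usage example are made up) checks the skip-free property for a single transition matrix and computes the upper tail probabilities $\bar{p}_{ij}$ by a reverse cumulative sum:

```python
# Sketch: the skip-free property p_{ij} = 0 for j < i-1 (one action shown),
# and the upper tail probabilities \bar p_{ij} = sum_{s >= j} p_{is}.

def is_skip_free(p):
    """p[i][j]: one transition matrix; True if no transition skips below i-1."""
    return all(p[i][j] == 0
               for i in range(len(p)) for j in range(max(i - 1, 0)))

def upper_tails(row):
    """\bar p_{ij} = P(next state >= j), via a reverse cumulative sum."""
    bar = [0.0] * len(row)
    acc = 0.0
    for j in range(len(row) - 1, -1, -1):
        acc += row[j]
        bar[j] = acc
    return bar
```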
To apply this idea in a wider context, we note that the essence of a skip-free model is that:
~(i) there is a single distinguished state, say $0$;
~(ii) for any other state $i$ there is a unique shortest path from $i$ to $0$;
~(iii) from each state $i \ne 0$ the process can only make transitions to either
the adjacent state in the unique path from $i$ to $0$,
or to some state $j$ for which $i$ lies in the unique shortest path from $j$ to $0$.
In the finite one dimensional case, for each $k$
there is exactly one state for which the shortest path to state $0$ has length $k$.
Thus there is a $1$--$1$ mapping of the states to the integers
$\{0, 1, \ldots,M\}$ such that the distinguished state maps to $0$
and the state for which the shortest path has length $k$ maps to $k$.
In a more general setting, for each $k$
there may be more than one state for which the shortest path has length $k$.
In this case, rather than $S$ mapping to the integer lattice,
there is a fixed tree ${\cal T}$ (in the graph theoretic sense)
such that each state corresponds to a unique node of the tree,
with the distinguished state mapping to the root node.
It may help to visualise movement between states in terms
of the corresponding movement between nodes on the tree.
To formalise this general model, we start by considering a finite rooted tree ${\cal T}$
with $N + 1$ nodes labelled $0,1,2,\dots,N$, with root node $0$, and with a given edge set.
The tree structure implies that for each pair of nodes $i$ and $j$
there is a unique minimal path (set of edges) in the tree that connects $i$ and $j$.
Thus the nodes in the tree can be partitioned into level sets $L_0 = \{0\}, L_1, \ldots, L_M$
such that, for $m = 0, \ldots,M-1$, $i \in L_{m+1}$ if and only if
the minimal path from $i$ to $0$ passes through exactly $m$ intermediate nodes.
For adjacent nodes $i \in L_m$ and $j \in L_{m+1}$,
we say $i$ is the parent of $j$ and $j$ is a child of $i$
if the minimal path from $j$ to $0$ passes through $i$.
More generally, for $i \in L_m$ and $j \in L_r, \, r > m$,
we say $j$ is a descendant of $i$ if the minimal path from $j$ to $0$
passes through $i$. Each node $j \ne 0$ has a unique parent.
We write $\rho(j)$ for the parent of $j$,
we write ${\cal D}(j)$ for the set of descendants of $j$,
and we write ${\cal T}(j) \subset {\cal T}$ for (the nodes of the)
sub-tree rooted at $j$, so ${\cal T}(j) = \{j\} \cup {\cal D}(j)$.
A state with no descendants is said to be a terminal state,
so all states in the highest level $L_M$ are terminal states.
For simplicity of presentation we will assume that these are the only terminal states;
the analysis easily extends to cases where intermediate levels $L_m$ can also contain some terminal states.
For each $j \in {\cal D}(i)$, we write $\Delta(i,j)$ for the set of states
following $i$ in the unique minimal path in the tree connecting $i$ to $j$,
so if the path passes through $s-1$ intermediate states
and takes the form $i = r_0 \rightarrow r_1 \rightarrow \cdots \rightarrow r_s = j$,
then $\Delta(i,j) = \{r_1,\ldots,r_s\}$.
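The parent, level, descendant and path constructions above are straightforward to realise computationally when the tree is given as a parent map. A minimal sketch (names such as `build_tree` and `path_delta` are our own):

```python
# Sketch of rooted-tree helpers for the skip-free-on-a-tree setting:
# level sets, descendants D(i), and the path sets Delta(i, j).

from collections import defaultdict

def build_tree(parent):
    """parent[j] = rho(j) for j != 0; returns children lists and node levels."""
    children = defaultdict(list)
    for j, pa in parent.items():
        children[pa].append(j)
    level = {0: 0}                       # level = length of minimal path to 0
    def set_level(i):
        for j in children[i]:
            level[j] = level[i] + 1
            set_level(j)
    set_level(0)
    return children, level

def descendants(children, i):
    """D(i): all nodes whose minimal path to 0 passes through i."""
    out, stack = [], list(children[i])
    while stack:
        j = stack.pop()
        out.append(j)
        stack.extend(children[j])
    return out

def path_delta(parent, i, j):
    """Delta(i, j): states following i on the minimal path from i to a descendant j."""
    path = []
    while j != i:
        path.append(j)
        j = parent[j]
    return list(reversed(path))
```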
Now consider a finite MDP with state space $S$ and action space $A$.
Assume we can construct a rooted tree ${\cal T}$ such that
(i) the states in $S$ correspond to the nodes of ${\cal T}$,
and (ii) for every state $i \in S$ and action $a \in A$,
the only possible transitions from state $i$ under action $a$ are either to
its parent state $\rho(i)$
or to a state in the subtree ${\cal T}(i)$ rooted at $i$,
with appropriate modifications for state $0$ which has no parent
and for terminal nodes which have only a parent and no descendants.
We will say that such an MDP is {\em skip-free (in the negative direction) on the tree } ${\cal T}$.
As with the integer lattice model above,
it is often convenient to work in terms of the upper tail probabilities
$\bar{p}_{ij}(a) = P(X_{t+1}\in {\cal T}(j)|X_t=i, A_t=a),$
corresponding to the probability that the next transition from state $i$
under action $a$ is to a state in the subtree rooted at $j$.
To illustrate and motivate the general case, where a multidimensional model is required,
consider (\cite{He00,YeSe94}) a single-server multi-class queueing system
with $K > 1 $ customer classes and finite capacity $M$ (including the job,
if any, in service).
Assume the service discipline is pre-emptive but otherwise takes no account of class.
A job that arrives when the system is not full enters service immediately
and the job currently in service at that point returns to the head of the buffer.
When a job completes service, the server next serves the job at the head of the buffer.
Any job that arrives when the system is full is lost.
The model is most naturally formulated in continuous time,
with exponential inter-arrival and service time distributions,
though it can easily be translated to a discrete time setting
using the methods of section \ref{sec:cont}.
Assume class $k$ jobs arrive at rate $\lambda_k$
and complete service at class and action dependent rate $\mu_k(a)$,
where different actions $a \in A$ correspond to different service levels.
Since the model needs to keep track of the class of each job as it enters service,
we take the state to be the multidimensional vector
$\bm{i} = (i_1, \ldots, i_{M})$
where $i_1$ denotes the class of the job currently in service,
$i_m$ denotes the class of the job waiting for service
in the buffer in place $m, \; m = 2, \ldots, M$,
and $i_m = 0$ if the $m$th place is empty.
Assume costs are incurred at rate $c(\bm{i},a)$ reflecting both holding costs and action costs.
The possible transitions under the model are
the completion of the job currently in service,
corresponding to the transition
$\bm{i} = (i_1, \ldots, i_{M}) \rightarrow
(i_2, \ldots, i_{M},0)$,
or the arrival of a class $k$ job ($k = 1,\ldots,K$)
to a partially full system,
corresponding to the transition
$\bm{i} = (i_1, \ldots, i_{M}) \rightarrow
\bm{j} = (k,i_1, \ldots,i_{M-1})$.
For $M \ge 2$ this model cannot be represented as a skip-free MDP
with linear structure, i.e.\ with each state $\bm{i}$
having exactly one child $\bm{j}$ with $\bm{i} = \rho(\bm{j})$.
To see this,
let $\bm{a}$ denote the state $(a, i_2, \ldots, i_{M})$ with $i_M \ne 0$,
let $\bm{b}$ denote the state $(b, i_2, \ldots, i_{M})$,
differing from $\bm{a}$ in only the first component,
and let $\bm{c}$ denote the state $(i_2, \ldots, i_{M},0)$.
The only possible direct transitions to and from $\bm{a}$ are from $\bm{c}$ and to $\bm{c}$.
Similarly for $\bm{b}$.
If $\bm{c}$ is restricted to having just one child, then the only possibilities
are either
(i) $\bm{a}$ has no parent (so $\bm{a}$ is the root state),
$\bm{a} = \rho(\bm{c})$ and $\bm{c} = \rho(\bm{b})$, or
(ii) $\bm{b}$ has no parent (so $\bm{b}$ is the root state),
$\bm{b} = \rho(\bm{c})$ and $\bm{c} = \rho(\bm{a})$.
In case (i), $\bm{b}$ can have no children
so none of the other states can reach the root state
as they cannot reach $\bm{b}$ in a skip-free manner under any policy;
in case (ii) $\bm{a}$ can have no children and a similar argument applies.
\vspace*{-0.2in}
\begin{figure}[ht]
\resizebox{12cm}{8cm}{\includegraphics{fig1}}
\vspace*{-0.7in}
\caption{The tree ${\cal T}$ corresponding to
the state space for the pre-emptive multi-class queueing system
with $K=2$ job classes and capacity $M = 3$.} \label{fig:pe}
\end{figure}
However we can represent the model as a skip-free MDP on a tree ${\cal T}$ as follows.
We take $L_0 = \{ (0, \ldots,0) \}$ to contain the state corresponding to the empty queue
and take the level sets $L_m, \; m=1, \ldots,M$ to each contain the $K^m$ states
of the form $\bm{i} = (i_1, \ldots,i_m,0,\ldots,0)$.
Given a state $\bm{i} = (i_1, \ldots,i_M) \in L_m$
we assign it parent $\rho(\bm{i}) = (i_2, \ldots, i_M, 0)$
and assign it $K$ children of the form $\bm{j} = (k,i_1, \ldots,i_{M-1}), \; k = 1, \ldots,K$
(with appropriate modifications for $L_0$ and $L_M$).
The set of descendants ${\cal D}(\bm{i})$ is the set of all states
of the form $(k_1, \ldots,k_r, i_1, \ldots,i_m, 0, \ldots,0)$ for $r = 1, \ldots, M-m$
(where there are $M-m-r$ trailing $0$s).
The possible transitions under the model
correspond exactly to transitions
from $\bm{i}$ to its parent $\rho(\bm{i})$
or to one of its $K$ children, so
the MDP satisfies the conditions required for it to be skip-free
in the negative direction on the tree ${\cal T}$.
Figure \ref{fig:pe} illustrates the tree corresponding to the
state space for a system with $K=2$ job classes and capacity $M = 3$.
Extensions with direct transitions to more general descendants,
of form $(k, \ldots,k, i_1, \ldots,i_m, 0, \ldots,0)$
are possible if batch arrivals are allowed, subject to appropriate capacity constraints.
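The tree just described can be generated mechanically from $K$ and $M$. A brief sketch (function names are our own; for the $K=2$, $M=3$ case of Figure \ref{fig:pe} there are $1+2+4+8 = 15$ states):

```python
# Sketch: the state space of the pre-emptive multi-class queue as a tree,
# for K job classes and capacity M.

from itertools import product

def queue_states(K, M):
    """All states (i_1,...,i_M): a prefix of m class labels in {1,...,K}
    followed by M - m zeros, for m = 0,...,M."""
    states = []
    for m in range(M + 1):
        for prefix in product(range(1, K + 1), repeat=m):
            states.append(prefix + (0,) * (M - m))
    return states

def parent(state):
    """rho(i): the state after the job in service completes."""
    return state[1:] + (0,)

def children(state, K):
    """The K states reachable by a class-k arrival (empty if the system is full)."""
    if state[-1] != 0:
        return []
    return [(k,) + state[:-1] for k in range(1, K + 1)]
```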
\section{The skip-free algorithm} \label{sec:sf-alg}
For finite recurrent MDP models, the solution to the
expected average cost problem can be characterised by the corresponding
{\em average cost optimality equations} \cite[\S 8.4]{Put94}
\begin{align}
h_i &= \min\nolimits_{a \in A} \{ \; c_i(a) - g + \sum\nolimits_{j \in S} p_{ij}(a) h_j \; \}
& i \in S \label{eq:oe}
\end{align}
\noindent in that
(i) there exist real numbers $g^*$ and $h^*_i, i \in S$ satisfying the optimality
equations;
(ii) the optimal average cost is the same for each initial state and is given by $g^*$;
(iii) the optimality equations uniquely determine $g^*$ and determine the $h^*_i$
up to an arbitrary additive constant;
(iv)
the stationary deterministic policy $d^*$ is an average cost optimal policy,
where, for each $i \in S$, $d^*(i)$ is an action achieving $
\min_a \{ \; c_i(a)+\sum_{j \in S} p_{ij}(a)h^*_j \, \}.$
It follows from (iv) above that there is an optimal policy in the class of
stationary deterministic policies.
We therefore restrict attention from now on to stationary deterministic policies,
writing `policy' as a shorthand for `stationary deterministic policy'
and writing $g(d)$ for the average cost under a given stationary deterministic policy $d$.
For each $i,j \in S$, we can interpret $h^*_i - h^*_j$ as
the asymptotic relative difference in the total cost
that results from starting the process in state $i$ rather than state $j$,
under the stationary deterministic policy $d^*$.
Thus the quantities $h^*_i - h^*_j$ are uniquely defined,
but the quantities $h^*_i, i \in S$ are defined only up to an arbitrary additive constant.
We focus on the particular solution normalised by setting $h^*_0=0$
and refer to the corresponding $h^*_i$ as the normalised relative costs under an optimal
policy.
In general, the optimality equations~(\ref{eq:oe}) cannot be solved directly.
Instead an optimal policy in the class of stationary deterministic policies is
usually found by methods based on value iteration, policy iteration or linear programming,
or combinations of these approaches \cite{Put94}.
For skip-free models, however, we have the following simplification.
\begin{lemma} \label{lem:oe}
For finite recurrent skip-free average cost MDPs,
the optimality equations (\ref{eq:oe}) are equivalent to the equations
\begin{subequations} \label{eq:oe-sfall}
\begin{align}
y_i &= \min\nolimits_a \{ \; (c_i(a) - x)/p_{i \rho(i)}(a) \; \} \; & i \in L_M \label{eq:oe-sfa} \\
y_i &= \min\nolimits_a \{ \; (c_i(a) - x + \sum\nolimits_{k \in {\cal D}(i)} \bar{p}_{i k}(a) y_k )/p_{i \rho(i)}(a)
\; \} & i \in L_{M-1},\ldots,L_1 \label{eq:oe-sfb} \\
0 &= \min\nolimits_a \{ \; c_0(a) - x + \sum\nolimits_{k \in {\cal D}(0)} \bar{p}_{0 k}(a) y_k \; \} \label{eq:oe-sfc}
\end{align}
\end{subequations}
\noindent in that
(i) these equations also have unique solutions $x$ and $y_i, \, i \in {\cal D}(0)$;
(ii) the optimal average cost is $g^* = x$ and the normalised relative costs under an optimal
policy satisfy $h^*_i - h^*_{\rho(i)} = y_i, \; i \in {\cal D}(0)$;
(iii) an optimal stationary deterministic policy is given by $d^*$,
where, for $i \in {\cal D}(0)$, $d^*(i)$ is any action minimising the rhs of the corresponding equation for $y_i$,
and $d^*(0)$ is an action minimising the rhs in (\ref{eq:oe-sfc}).
\end{lemma}
\noindent {\bf Proof}
For skip-free models,
the only possible transitions from state $i \in {\cal D}(0)$
are to state $\rho(i)$, to state $i$ itself, or to a state $j \in {\cal D}(i)$.
Thus equations~(\ref{eq:oe}) take the form
\begin{align}
h_i &= \min\nolimits_{a \in A} \{ \; c_i(a) - g + \sum\nolimits_{j \in {\cal D}(i)} p_{i j}(a) h_j + p_{i i}(a) h_{i}
+ p_{i \rho(i)}(a) h_{\rho(i)} \; \} & i \in S \label{eq:new-oe}
\end{align}
with appropriate modification to give the normalised solution with $h_0 = 0$.
Values $h_i$ and $g$ satisfy (\ref{eq:new-oe})
if and only if, in each equation, $h_i$ is less than or equal to
the rhs for all $a$, with equality for at least one $a$.
With appropriate modifications for the root node $0$ and for terminal nodes,
simple rearrangement shows that
$h_i \le c_i(a) - g + \sum_{j \in {\cal D}(i)} p_{i j}(a) h_j + p_{i i}(a) h_{i}
+ p_{i \rho(i)}(a) h_{\rho(i)}$
if and only if
$p_{i \rho(i)}(a) (h_i - h_{\rho(i)}) \le c_i(a) - g + \sum_{j \in {\cal D}(i)} p_{i j}(a) (h_j - h_i),$
and that equality in one expression implies equality in the other.
Now write $x$ for $g$ and for each $i \in {\cal D}(0)$ write $y_i$ for $h_i - h_{\rho(i)}$.
For each $j \in {\cal D}(i)$, write $\Delta(i,j) = \{r_1,\ldots,r_s\}$
for the states following $i$ in the unique minimal path connecting $i$ to $j$.
For each $k = 1,\ldots,s$, $r_{k-1}$ is the parent of $r_k$ so that $r_{k-1} = \rho(r_k)$.
Hence
$h_j - h_i
= h_{r_s} - h_{r_0}
= \sum_{k=1}^s (h_{r_k} - h_{r_{k-1}})
= \sum_{k=1}^s (h_{r_k} - h_{\rho(r_k)})
= \sum_{k=1}^s y_{r_k}
= \sum_{r \in \Delta(i,j)} y_{r}$.
Now if $j $ is a descendant of $i$
and $r \ne j$ is in the path connecting $i$ and $j$,
then $r$ is a descendant of $i$ and $j$ is in the subtree rooted at $r$,
and vice versa.
Thus for fixed $i$ and $a$ we have that
$\sum_{j \in {\cal D}(i)} p_{i j}(a) (h_j - h_i)
= \sum_{j \in {\cal D}(i)} \sum_{r \in \Delta(i,j)} p_{i j}(a) y_{r}
= \sum_{r \in {\cal D}(i)} \sum_{j \in {\cal T}(r)} p_{i j}(a) y_r
= \sum_{r \in {\cal D}(i)} \bar{p}_{ir}(a) y_r$.
Taking account of the modifications for the root state $i=0$
and the terminal states $i \in L_M$,
and the fact that $i \in L_m \implies {\cal D}(i) \subset L_{m+1} \cup \cdots \cup L_M$,
it follows that there are $g$ and $h_i$ satisfying (\ref{eq:new-oe}) if and only if
there are values $x$ and $y_i$ satisfying (\ref{eq:oe-sfall}).
\hfill $\Box$
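The interchange of summation used at the end of the proof can be checked numerically on a small example. The sketch below (hypothetical helper; the tree, probabilities and relative costs are arbitrary illustrative data) compares the two sides of the identity $\sum_{j \in {\cal D}(i)} p_{ij} (h_j - h_i) = \sum_{r \in {\cal D}(i)} \bar{p}_{ir} y_r$:

```python
# Sketch: numerical check of the summation interchange from the proof,
# with y_r = h_r - h_{rho(r)} and pbar_r summing p_j over the subtree T(r).

def check_interchange(parent, p, h, i=0):
    children = {}
    for j, pa in parent.items():
        children.setdefault(pa, []).append(j)
    def subtree(r):                       # nodes of T(r) = {r} union D(r)
        out = [r]
        for ch in children.get(r, []):
            out.extend(subtree(ch))
        return out
    D = [j for j in subtree(i) if j != i]
    lhs = sum(p[j] * (h[j] - h[i]) for j in D)
    y = {r: h[r] - h[parent[r]] for r in D}
    rhs = sum(sum(p.get(j, 0.0) for j in subtree(r)) * y[r] for r in D)
    return lhs, rhs
```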
In the optimality equations~(\ref{eq:oe-sfall}),
the value of $y_i, \, i \in L_M$ depends only on $x$,
and in each subsequent equation the value of $y_i$ depends only
on $x$ and the values of $y_k$ for $k \in {\cal D}(i)$.
Thus, if the value of $x$ were known, it would be easy
to compute the $y_i$ in turn for $i \in L_M, \ldots,L_1$
and to determine the corresponding policy which takes
the optimal action in each state $i \in S$.
This observation motivates an iterative approach to finding an average cost optimal policy:
(i) choose an initial policy $d_0$ and compute its expected average cost $g_0 = g(d_0)$;
(ii) given a current policy $d_n$ with expected average cost $g_n$,
compute an updated policy $d_{n+1}$ by setting $x = g_n$
and solving (\ref{eq:oe-sfa}) and (\ref{eq:oe-sfb}),
and compute its expected average cost $g_{n+1}$;
(iii) iterate until convergence.
This approach forms the basis for the following {\em skip-free} algorithm.
Its properties are set out in the subsequent theorem.\\
\noindent {\bf Skip-free algorithm}
\noindent 1. \underline{Initialisation}:
\noindent Choose an arbitrary initial policy $d_0$.
Perform a single iteration of step 2 below, with $x = 0$
and with $a_i$ restricted to the single value $d_0(i)$, $i \in S$.
Set $g_0 = u_0$.
\medskip
\noindent 2. \underline{Iteration}:
\noindent Set $x = g_n$.
\medskip
\noindent $\bullet$ For $i \in L_M$ compute:\\
$\begin{array}{llcl}
& a_i & = & \mathrm{argmin}_a \{ \; (c_i(a) - x)/p_{i \rho(i)}(a) \; \} \\
& y_i & = & (c_i(a_i) - x)/p_{i \rho(i)}(a_i) \\
& t_i & = & 1/p_{i \rho(i)}(a_i) \\
\end{array}$
\medskip
\noindent $\bullet$ For $i \in L_r, \, r = M-1,\ldots,1$ compute:\\
$\begin{array}{llcl}
& a_i & = & \mathrm{argmin}_a \{ \; (c_i(a) - x + \sum_{k \in {\cal D}(i)} \bar{p}_{i k}(a) y_k )
/p_{i \rho(i)}(a) \; \} \\
& y_i & = & (c_i(a_i) - x + \sum_{k \in {\cal D}(i)} \bar{p}_{i k}(a_i) y_k )/p_{i \rho(i)}(a_i)\\
& t_i & = & (1 + \sum_{k \in {\cal D}(i)} \bar{p}_{i k}(a_i) t_k )/p_{i \rho(i)}(a_i)\\
\end{array}$
\medskip
\noindent $\bullet$ For $j = 0$ compute:\\
$\begin{array}{llcl}
& a_0 & = & \mathrm{argmin}_a \{ \; (c_0(a) - x +
\sum_{k \in {\cal D}(0)} \bar{p}_{0 k}(a) y_k) /
(1+ \sum_{k \in {\cal D}(0)} \bar{p}_{0 k}(a) t_k) \; \} \\
& u_0 & = & (c_0(a_0) - x + \sum_{k \in {\cal D}(0)} \bar{p}_{0 k}(a_0) y_k) / (1+ \sum_{k \in {\cal D}(0)} \bar{p}_{0 k}(a_0) t_k) \\
& t_0 & = & (1+ \sum_{k \in {\cal D}(0)} \bar{p}_{0 k}(a_0) t_k) / (1 - p_{0 0}(a_0)) \\
\end{array}$
\medskip
\noindent Set $d_{n+1}(i) = a_i$ for $i \in S$
and set $g_{n+1} = g_n + u_0 $.
\medskip
\noindent 3. \underline{Termination}:
\noindent If $u_0 < 0$ then return to step 2.
\medskip
\noindent If $u_0 = 0$ then stop.
Return $d_{n+1}$ as an optimal policy,
return $g_{n+1}$ as the optimal average cost,
and for each $i \in {\cal D}(0)$ return $h_i = \sum_{j \in \Delta(0,i)}y_j$
as the corresponding normalised relative cost.
\begin{theorem} \label{th:grr}
Consider the skip-free algorithm above applied to a finite recurrent
skip-free average cost MDP model. Then:\\
(i) At each iteration either
$g_{n+1} < g_n$, so $d_{n+1}$ is a strict improvement on $d_n$,
or $g_{n+1} = g_n$.
In the latter case $g_{n+1} = g^*$, $d_{n+1}$ is an optimal average cost policy,
and the corresponding normalised relative costs
are given by $h_0^* =0, \, h_j^* = \sum_{i \in \Delta(0,j)}y_i, \; j \in {\cal D}(0)$.\\
(ii) The algorithm converges after a finite number of iterations.
\end{theorem}
\noindent {\bf Remarks}
(1) The motivation for the particular choice of action in state $0$ is given in
the remarks following the proof of the theorem.
(2) The updates are particularly simple in the one dimensional case
where $S = \{ 0,1,\ldots,M\}$.
Here $\sum_{k \in {\cal D}(i)}$ simplifies to $\sum_{k =i+1}^M$
and $\rho(i)$ simplifies to $i-1$.
(3) The computational requirement for each iteration in step 2 of the algorithm
is clearly similar to that of the corresponding step in value iteration, in that it only
requires simple evaluations rather than the solution of a set of equations.
While the algorithm is also similar to policy evaluation, in that it returns the average cost
of policy $d_n$ at the end of the $n$th iteration,
it differs from standard policy iteration in that
the values of $y_i$ returned do not correspond to the relative costs
under $d_n$. Only at convergence do the relative costs and average cost correspond
to the same (optimal) policy.
(4) The basic principle underlying this iterative approach appears to be
similar to that used in \cite{Low74}, but
the results there were restricted to a very specific model
with simple birth and death structure.
Other treatments of skip-free models \cite{WiSt86,StWe89,StWe99,WiSt00}
have used iterative methods to search for a good approximation for the average cost $x$,
based on the value of current and previous approximations,
or used the form of the optimality equations to
derive qualitative properties of the solution,
in particular monotonicity of optimal policies,
but neither approach explicitly identified the
simple skip-free improvement algorithm described here.
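In the one dimensional case of remark (2), the complete procedure can be sketched as follows. This is only an illustrative rendering of the boxed algorithm (function names, the convergence tolerance and the model data in the test are our own choices); it assumes a recurrent model, so that $p_{i\,i-1}(a) > 0$ throughout:

```python
# Sketch of the skip-free algorithm for S = {0,...,M}: one backward sweep
# per iteration computes a_i, y_i, t_i for i = M,...,1, then u_0 at state 0.

def upper_tail(row, j):
    return sum(row[j:])                 # \bar p_{ij}(a) = sum_{s >= j} p_{is}(a)

def sweep(p, c, x, actions):
    """One pass of step 2 for trial average cost x; returns (a, y, u0)."""
    M = len(c) - 1
    a, y, t = [0] * (M + 1), [0.0] * (M + 1), [0.0] * (M + 1)
    for i in range(M, 0, -1):           # states M, M-1, ..., 1
        vals = []
        for act in actions(i):
            num = c[i][act] - x + sum(upper_tail(p[i][act], k) * y[k]
                                      for k in range(i + 1, M + 1))
            vals.append((num / p[i][act][i - 1], act))
        y[i], a[i] = min(vals)
        t[i] = (1 + sum(upper_tail(p[i][a[i]], k) * t[k]
                        for k in range(i + 1, M + 1))) / p[i][a[i]][i - 1]
    vals = []                           # distinguished state 0
    for act in actions(0):
        num = c[0][act] - x + sum(upper_tail(p[0][act], k) * y[k]
                                  for k in range(1, M + 1))
        den = 1 + sum(upper_tail(p[0][act], k) * t[k] for k in range(1, M + 1))
        vals.append((num / den, act))
    u0, a[0] = min(vals)
    return a, y, u0

def skip_free(p, c, d0, tol=1e-12):
    """Returns (optimal average cost g, optimal policy d, relative costs h)."""
    n_actions = len(c[0])
    _, _, u0 = sweep(p, c, 0.0, lambda i: [d0[i]])   # initialisation: g_0
    g = u0
    while True:
        a, y, u0 = sweep(p, c, g, lambda i: range(n_actions))
        g += u0
        if abs(u0) <= tol:              # u_0 = 0: optimal policy found
            h = [0.0] * len(c)
            for i in range(1, len(c)):
                h[i] = h[i - 1] + y[i]  # h_i = sum of y_j along the path from 0
            return g, a, h
```

In the two-state test below, serving at the cheap slow rate is optimal and the algorithm returns $g^* = 5/12$ with $h^*_1 = 5/6$.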
\section{Discounted, continuous and communicating models} \label{sec:var}
The skip-free algorithm can also be used to solve discounted cost
and continuous time problems, in each case by transforming the problem
into an equivalent average cost problem.
Moreover, a suitably modified algorithm can be used to solve communicating models.
For ease of presentation, we focus on the one dimensional case,
indicating how the argument can be extended to the general model as required.
\subsection{Discounted cost models} \label{sec:disc}
Consider a recurrent MDP model that is skip-free in the negative direction,
with state space $S = \{0, 1, \ldots,M\}$,
finite action space $A$,
transition probabilities $p_{ij}(a)$, immediate costs $c_i(a)$
and discount factor $\beta$.
Following \citeasnoun[p.31]{Der70}, we construct an average cost MDP
with modified state space $\{0,1, \ldots,M,M+1 \}$
and modified transition probabilities and immediate costs given by:
\begin{align}
&p'_{ij}(a) = \beta p_{ij}(a), &&c'_i(a) = c_i(a), & i,j = 0,1, \ldots,M,
\; \; a \in A \nonumber\\
&p'_{M+1 \, M}(a) = \beta, \quad &&c'_{M+1}(a) = 0, & a \in A \nonumber\\
&p'_{i \, M+1}(a) = 1 - \beta, && & i = 0,1, \ldots,M + 1,
\; \; a \in A \nonumber
\end{align}
\noindent In the spirit of similar models \cite{Low74,WiSt86}, we note that
this new average cost MDP inherits from the original model the property of being
skip-free in the negative direction.
Let $g'$ and $h'_i, \, i=0,\ldots,M+1$ be the optimal average cost and the corresponding
relative costs
for the new average cost problem, normalised by setting $h'_0=0$.
From above, $g'$ and $h'_i, \, i=1,\ldots,M+1$, are the unique solutions
to the optimality equations~(\ref{eq:oe}),
and any set of actions achieving the minimum on the rhs defines an optimal policy.
In terms of the original parameters, these equations take the form
\begin{align}
&h'_{M+1} &&= - g' + \beta h'_M + (1-\beta) h'_{M+1} \nonumber\\
&h'_i &&= \min_a \{ \, c_i(a) - g' + \beta \sum_{j=0}^{M} p_{ij}(a) h'_j +
(1 - \beta)h'_{M+1} \, \} & i = 0,\ldots,M \nonumber
\end{align}
Now set $v_j = h'_j - h'_{M+1} + g'/(1-\beta), \; j = 0,\ldots,M$.
Then rewriting the equations for $h'_0, \ldots, h'_M$ in terms of $v_0,\ldots,v_M$,
we see that the $v_i$ satisfy the equations
\begin{align}
v_i &= \min_a \{ \, c_i(a) + \beta \sum_{j=0}^{M} p_{ij}(a) v_j \, \} &i = 0,\ldots,M. \nonumber
\end{align}
Thus the $v_j$ satisfy the optimality equations for the discounted cost problem,
and so represent the unique optimal $\beta$ discounted cost function \cite[p.148]{Put94}.
Finally, let $x'$ and $y'_1, \ldots,y'_{M+1}$ be the solutions returned by the skip-free algorithm
applied to the new skip-free average cost problem.
Then $g' = x'$ and $h'_j = y'_j + \cdots + y'_1, \, j=1, \ldots,M + 1$.
Thus the optimal value function for the discounted problem is given
explicitly in terms of the output of the skip-free algorithm by
\begin{align}
v_j &= x'/(1-\beta) -(y'_{j+1} + \cdots + y'_{M+1}) & j = 0,\ldots,M \nonumber
\end{align}
and a policy which is optimal for the modified average cost problem
is also optimal for the original discounted cost problem.
The extension to the general skip-free MDP tree model is straightforward,
requiring just the addition of an extra state for each terminal state (node)
to preserve the skip-free property. This extra state now becomes
the terminal node in that branch.
Transitions from this extra state are to the corresponding previous terminal
node, with probability $\beta$, or back to itself, with probability $1-\beta$.
Transition probabilities from non-terminal states are modified as above,
by setting $p'_{ij}(a) = \beta p_{ij}(a)$ if $j$ is a non-terminal
node of the modified tree and by assigning the remaining transition probability
$1-\beta$ to the newly added terminal nodes of the modified sub-tree ${\cal T}(i)$ rooted at $i$.
The precise assignment may be chosen arbitrarily -- for example, each new
terminal node in the modified sub-tree may be chosen with equal probability
-- as long as the total probability sums to $1 - \beta$.
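The construction at the start of this subsection is mechanical; a sketch for the one dimensional case (hypothetical function name, list-of-lists representation assumed):

```python
# Sketch of the embedding of a discounted problem (states 0..M, discount
# beta) into an average cost problem on states 0..M+1, following the
# construction in the text: p' = beta p, plus an extra absorbing-ish state.

def discounted_to_average(p, c, beta):
    """p[i][a][j], c[i][a] for i,j in 0..M; returns (p2, c2) on 0..M+1."""
    M = len(c) - 1
    A = len(c[0])
    p2 = [[[0.0] * (M + 2) for _ in range(A)] for _ in range(M + 2)]
    c2 = [[0.0] * A for _ in range(M + 2)]
    for i in range(M + 1):
        for a in range(A):
            for j in range(M + 1):
                p2[i][a][j] = beta * p[i][a][j]     # p'_{ij} = beta p_{ij}
            p2[i][a][M + 1] = 1 - beta              # p'_{i,M+1} = 1 - beta
            c2[i][a] = c[i][a]
    for a in range(A):                              # the extra state M+1
        p2[M + 1][a][M] = beta
        p2[M + 1][a][M + 1] = 1 - beta
        c2[M + 1][a] = 0.0
    return p2, c2
```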
\subsection{Continuous time models} \label{sec:cont}
Consider a continuous time Markov decision process (CTMDP)
with finite state space $S$ and finite action space $A$.
Assume that when the current action is $a$ and the process is in
state $X_t = i$, the process incurs costs at rate $c_i(a)$
and makes transitions to state $j \in S$ at rate $q_{ij}(a)$
(where transitions back to the same state are allowed).
For infinite horizon problems, under either an average cost or
a discounted cost criterion,
we can restrict attention to stationary policies and to models in
which decisions are made only at transition epochs \cite[p.560]{Put94}.
For simplicity of presentation we again restrict attention to recurrent
models and defer treatment of unichain and communicating models to
Section \ref{sec:com}.
As for MDPs, we say a CTMDP is skip-free in the negative direction
if the process cannot move from each state $i$ to a state
$j < i$ without passing through all the intermediate states,
i.e.\ $q_{ij}(a) = 0$ for all $j < i-1$ and $a \in A$.
To apply the skip-free algorithm, we first convert the model to an equivalent uniformised model
\cite{Lip75} with rate $\Lambda = \max_{i \in S, \, a \in A} \sum_{j \in S} q_{ij}(a)$.
In this model, when the current action is $a$ and the process is in state $i$,
transitions back to state $i$ occur at rate
$\Lambda - \sum_{j \ne i} q_{ij}(a)$ while transitions
to state $j \ne i$ occur at rate $q_{ij}(a)$,
so that overall transitions occur at uniform rate $\Lambda$.
Next we construct a discrete time problem with the same state and action space,
where for $i,j \in S$ and $a \in A$ the transition probabilities and immediate costs are given by
$p'_{ij}(a) = q_{ij}(a)/\Lambda,\; i \ne j; p'_{ii}(a) = 1 - \sum_{j \ne i} q_{ij}(a)/\Lambda;
c'_i(a) = \Lambda c_i(a)$.
If the original CTMDP is recurrent and skip-free,
then the discretised model is recurrent and skip-free
and can be solved using the algorithm.
Finally, let $d'$ and $g'$ be the optimal policy and the optimal average cost
identified by the algorithm for the discrete time problem.
Then the optimal policy $d^*$ and the optimal average cost $g^*$
for the uniformised continuous time problem are the same
as $d'$ and $g'$,
and the normalised relative costs for the uniformised problem
are given in terms of those for the discrete problem
by $h_i^* = h'_i/\Lambda , \; i \in S$ \cite[\S 11.5]{Put94}.
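A sketch of this discretisation step (hypothetical names; the cost scaling $c'_i(a) = \Lambda c_i(a)$ follows the convention stated above):

```python
# Sketch: uniformisation of a CTMDP with rates q_{ij}(a) and cost rates
# c_i(a) into a discrete time model with uniform transition rate Lambda.

def uniformise(q, c):
    """q[i][a][j]: transition rates; returns (Lambda, p, c2) for the
    discretised model with p'_{ij} = q_{ij}/Lambda off the diagonal."""
    n, A = len(q), len(q[0])
    Lam = max(sum(q[i][a]) for i in range(n) for a in range(A))
    p = [[[q[i][a][j] / Lam if j != i
           else 1 - sum(q[i][a][k] for k in range(n) if k != i) / Lam
           for j in range(n)]
          for a in range(A)]
         for i in range(n)]
    c2 = [[c[i][a] * Lam for a in range(A)] for i in range(n)]
    return Lam, p, c2
```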
\subsection{Communicating models} \label{sec:com}
So far we have assumed the MDP model is recurrent.
There are natural applications for which this assumption excludes sensible policies,
such as policies that are recurrent only on a strict subset of $S$.
Simple examples include:
maintenance/replacement problems where a policy might specify replacing an item when
the state reached some lower level $K > 0$ with an item of level $L < M$;
inventory problems where a policy might reorder when the stock reached some
lower level $K > 0$ and/or reorder up to level $L < M$;
queueing control problems where a policy might turn the server off when the queue size
reached some lower level $K > 0$ and/or might refuse to admit new entrants when the queue
size reached level $L < M$.
In each case, determining optimal values for $K$ and $L$ might be part of the problem.
In this section we extend our result to the wider class of communicating MDP models,
to enable us to address examples like these.
Recall that an MDP model is {\em communicating} if, for every pair of states $i$ and $j$ in $S$,
$j$ is reachable from $i$ under some (stationary deterministic) policy $d$.
We say that $d$ is {\em unichain} if it decomposes $S$ into a single recurrent class
plus a (possibly empty) set of transient states;
if there is more than one recurrent class we say $d$ is {\em multichain}.
Let $d$ be a multichain policy with recurrent classes $E_1, E_2, \ldots$ and, for each $k$,
let $g_k$ denote the average cost under $d$ starting in a state in $E_k$;
let $E_m$ be a recurrent class with smallest average cost, say $g_m$.
Because the model is skip-free, $E_m$ must consist of a sequence of
consecutive states $K_m,\ldots,L_m$;
again, because the model is skip-free, the action in each state $j$ greater than $L_m$
can be changed if necessary so that $E_m$ is reachable from $j$;
finally, because the model is communicating, the action in each state $j$ less than $K_m$
can be changed if necessary so that $E_m$ is reachable from $j$.
Denote by $d'$ the new policy created by changing actions in this way,
if necessary, but leaving the actions in $E_m$ unchanged.
Then $d'$ is unichain by construction,
and the average cost starting in each state $j \in S$
is $g_m$, which is no greater than the average cost starting in $j$ under $d$.
Thus, for average cost skip-free communicating models,
nothing is lost by restricting attention to unichain policies.
In contrast to recurrent models, communicating models allow
there to be $i$ and $a$ with $p_{ii}(a) = 1$ and/or $p_{ii-1}(a) = 0$.
For each $r = 0,1,\ldots,M$,
let $U_r$ be the (possibly empty) set of unichain policies $d$ for which
$p_{rr-1}(d(r)) = 0$ but $p_{ii-1}(d(i)) > 0$ for $i = r+1, \ldots,M$
(where we take $p_{ii-1}(a) \equiv 0$ for all $a$ for $i=0$).
Every unichain policy must be in $U_r$ for some $r$.
Partition the possible actions for each state $i \in S$ into
$B_i = \{a \in A : p_{ii-1}(a) >0 \}$
and its complement $\bar{B}_i = \{a \in A : p_{ii-1}(a) = 0 \}$,
where $\bar{B}_i$ may be empty but
$B_i$ is non-empty by the assumptions of the skip free model in Section~\ref{sec:sf-model}.
Then for a unichain policy $d \in U_r$, we have that
$d(i) \in B_i, \; i = r+1, \ldots,M$;
that state $r$ is recurrent and $d(r) \in \bar{B}_r$ by definition;
and that states $i < r$ are transient.
Thus the minimum average cost over policies in $U_r$
is the same as the minimum average cost for a modified skip-free MDP model $\Pi_r$
with the same transition probabilities and immediate costs but
with reduced state space $S_r = \{r,\ldots,M\}$
and with state-dependent action spaces
$A_i = B_i$ for $i = r+1, \ldots,M$ and $A_r = \bar{B}_r$.
In this notation, the model of Section~\ref{sec:sf-model} corresponds to $\Pi_0$
and state $r$ plays the same role of recurrent distinguished
state in $\Pi_r$ that state $0$ plays in $\Pi_0$.
If we compare the result of applying the skip-free algorithm to $\Pi_r$
with the result of applying it to $\Pi_0$,
we see that, for the same current value of $x$,
the algorithm computes the same values of
$y_i$, $t_i$, and $a_i$ in states $i = M,M-1,\ldots,r+1$.
However, in state $r$, the skip-free algorithm applied to $\Pi_r$
computes quantities appropriate to the distinguished state,
say $a^r$ and $u^r$, where\\
$\begin{array}{lcl}
a^r & = & \mathrm{argmin}_{a \in \bar{B}_r}
\{ \; (c_r(a) - x + \sum_{k=r+1}^M \bar{p}_{r k}(a) y_k) /
(1+ \sum_{k=r+1}^M \bar{p}_{r k}(a) t_k) \; \} \\
u^r & = & (c_r(a^r) - x + \sum_{k=r+1}^M \bar{p}_{r k}(a^r) y_k) /
(1+ \sum_{k=r+1}^M \bar{p}_{r k}(a^r) t_k)
\end{array}$\\
\noindent
and computes an updated `minimising' policy $d^r_{n+1}$
with average cost $g^r_{n+1}$, where \\
$\begin{array}{llcl}
d^r_{n+1}(r) & = & a^r; \; \; d^r_{n+1}(i) = a_i, \; i=r+1,\ldots,M, \quad \mbox{and}\\
g^r_{n+1} & = & x + u^r.
\end{array}$\\
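The per-state computation above is mechanical once the $y_k$ and $t_k$ values for states $k = r+1,\ldots,M$ are available. A hedged Python sketch (the encoding of $c_r$, $\bar{p}_{rk}$ and the action set as dictionaries and lists is our own illustrative choice, not from the paper):

```python
def distinguished_state_update(r, x, y, t, c_r, pbar_r, actions_r):
    """Compute a^r and u^r for sub-problem Pi_r at the current value x.

    y, t       : lists of y_k, t_k for k = r+1, ..., M (already computed)
    c_r[a]     : immediate cost c_r(a)
    pbar_r[a]  : list of pbar_{rk}(a) for k = r+1, ..., M
    actions_r  : candidate actions in Bbar_r (those with p_{r,r-1}(a) = 0)
    """
    best_a, best_u = None, float("inf")
    for a in actions_r:
        num = c_r[a] - x + sum(p * yk for p, yk in zip(pbar_r[a], y))
        den = 1.0 + sum(p * tk for p, tk in zip(pbar_r[a], t))
        if num / den < best_u:
            best_a, best_u = a, num / den
    return best_a, best_u  # the new average cost is g^r_{n+1} = x + u^r
```

The modified algorithm simply calls this for each candidate $r$ within a single backward sweep and then minimises $x + u^r$ over $r$.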
This motivates the following modified skip-free algorithm.
First, it includes these extra computations for each state $r$,
so that, in a single iteration, it simultaneously computes
the optimal policy $d^r_{n+1}$ and its average cost $g^r_{n+1}$
for each sub-problem $\Pi_r$.
Secondly, at the end of the $(n-1)$th iteration
it sets $x = g_{n} = \min_r g^r_{n}$, and sets $d_{n}$ to be the corresponding policy,
where ties are broken by choosing the $d^r_{n}$ with the smallest index $r$.
Say the minimum average cost at this stage is achieved by a policy with index $r = K$.
Then, by the properties of the skip-free algorithm applied to $\Pi_K$,
at the end of the next iteration
either (i) $g^K_{n+1} < g^K_{n} = x$,
in which case $g_{n+1} = \min_r g^r_{n+1} < x = g_n$;
or (ii) $u^K_{n+1}=0$ and $g^K_{n+1} = g^K_{n} = x = \min_r g^r_{n+1}$,
so $g_{n+1} = g_n$ and $d_{n+1} = d^K_{n+1}$ is an optimal average cost policy
for starting states $i = K, \ldots,M$.
In this case, because the model is communicating,
it is possible \cite[p.351]{Put94} to modify the actions chosen by the policy
in the, now transient, states $0, \ldots, K-1$ so that the modified $d_{n+1}$ satisfies the
optimality equations for all states $0, \ldots, M$
and is an average cost optimal policy.
We summarise this discussion in the following theorem.
\begin{theorem}
Consider the skip-free algorithm modified as above applied to a finite communicating discrete time
average cost skip-free MDP model with state space $S = \{0,1,2,\ldots,M\}$. Then:\\
(i) At each iteration of the skip-free algorithm either
$g_{n+1} < g_n$ and $d_{n+1}$ is a strict improvement on $d_n$,
or $g_{n+1} = g_n$ and for some $K$
the policy satisfies the optimality equations for states $K, \ldots, M$.\\
(ii) The modified skip-free algorithm converges after a finite number of iterations.
\end{theorem}
Finally, note that it is easy to check if a skip-free model is communicating.
An assumption of the (non-degenerate) skip-free model was that
each state $i<M$ was reachable from $i+1$.
It follows that a skip-free MDP with state space $S = \{0,1,\ldots,M\}$
is communicating if and only if $M$ is reachable from $0$ under at least one stationary
deterministic policy $d$.
Let $N_0 = 0$, let $N_1$ be the index of the maximum state $j$ for which $p_{0j}(a) > 0$
for some $a \in A$, and for $m = 1, 2, \dots $ let $N_{m+1}$ be the index of the maximum
state $j$ for which $p_{ij}(a) > 0$ for some $0 \le i \le N_m$ and $a \in A$.
As the state space is finite, the sequence $\{N_m\}$ terminates, say with state $N$.
Since the model is skip-free, $N$ is the largest state that is reachable from all states below
it, and the model is communicating if and only if $N = M$.
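This reachability check can be coded directly. A hedged sketch (the nested encoding of $p_{ij}(a)$ as lists indexed by state and action is our own choice, not from the paper):

```python
def is_communicating(P):
    """Check whether a skip-free MDP on states 0..M is communicating.

    P[i][a] is the list of transition probabilities p_{ij}(a).
    Grows the sequence N_1, N_2, ... described in the text until it
    stabilises at some N, and reports whether N = M.
    """
    M = len(P) - 1
    reach = lambda lo, hi: max(
        j for i in range(lo, hi + 1) for a in P[i]
        for j in range(M + 1) if P[i][a][j] > 0
    )
    N = reach(0, 0)            # N_1: largest state reachable from state 0
    while True:
        N_next = reach(0, N)   # N_{m+1}: largest state reachable from 0..N_m
        if N_next == N:
            return N == M
        N = N_next
```

Because of the skip-free structure, only upward reachability needs to be tracked; downward reachability is guaranteed by the non-degeneracy assumption.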
The extension to general skip-free communicating models is straightforward.
Again, the idea is that for each state $i$ the skip-free algorithm
is modified so that in passing it solves the corresponding sub-problem
$\Pi_i$ with state space ${\cal T}(i)$ and with state $i$ as the distinguished state,
and then computes the optimal updated average cost and policy
by minimising over the costs and policies for each of the sub-problems.
\section{Proof of Theorem \ref{th:grr}} \label{sec:proofs}
We start our analysis of the average cost MDP model by defining a
related problem (or class of problems)
that we will call the $x$-{\em revised first return problem}.
The model for this problem has the same state space $S$,
the same action space $A$ and the same
transition probabilities $\{p_{ij}(a)\}$ as the average cost model.
However, for each fixed $x$, the immediate costs in the corresponding
$x$-revised problem are revised downward by $x$,
so $c_i(a)$ is revised to $c_i(a) - x$.
Whereas the original problem was to find a policy $d$
that minimised the expected average cost $g(d)$,
the objective for this new problem is to find a policy that minimises
the expected $x$-revised cost until first return to state $0$,
where, for a process starting with $X_0 = 0$, we define the first return epoch
to state $0$ to be the smallest value $\tau > 0$ such that $X_{\tau-1} \ne 0$
and $X_\tau = 0$. The MDP is assumed recurrent under any stationary
deterministic policy, so $\tau$ is well defined and almost surely finite.
For a fixed policy $d$, starting in state $0$, write
$\tau(d)$ for the expected first return epoch under $d$,
$C(d)$ for the expected first return cost under $d$, and
$H(d,x)$ for the expected $x$-revised first return cost under $d$.
The average costs and the $x$-revised costs under $d$
are related by the equations
\begin{equation} \label{eq:hct}
g(d) = C(d)/\tau(d), \quad H(d,x) = C(d) - x \tau(d), \quad g(d) = x + H(d,x)/\tau(d),
\end{equation}
where the first equation follows from viewing the average cost problem
from a renewal-reward perspective \cite[p.160]{Ros70}
and noting that state $0$ is recurrent under any stationary deterministic policy $d$,
and the second follows from noting that the expected $x$-revised cost under $d$ until first return to state $0$
is just the original expected cost $C(d)$ adjusted downwards by an amount $x$ for
an expected time period $\tau(d)$.
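To make these identities concrete, consider a hypothetical two-state chain under a fixed policy (the numbers below are our own toy example, not taken from the paper):

```python
# Two states {0, 1} under a fixed policy d:
#   p_00 = 0.5, p_01 = 0.5, p_10 = 0.6, p_11 = 0.4, costs c_0 = 1, c_1 = 3.
# First-step analysis from state 1:  h_1 = 1 + p_11 h_1,  C_1 = c_1 + p_11 C_1.
h1 = 1.0 / (1.0 - 0.4)           # expected time to hit 0 from 1 (= 5/3)
C1 = 3.0 / (1.0 - 0.4)           # expected cost until hitting 0 from 1 (= 5)
# the process stays at 0 for 1/(1 - p_00) steps in expectation before leaving:
tau = 1.0 / (1.0 - 0.5) + h1     # expected first return epoch tau(d) = 11/3
C = 1.0 / (1.0 - 0.5) * 1.0 + C1 # expected first return cost C(d) = 7
g = C / tau                      # average cost g(d) = C(d)/tau(d) = 21/11
H = C - g * tau                  # x-revised cost at x = g(d): vanishes
```

Here $g(d) = 21/11$ agrees with the stationary-distribution computation ($\pi = (6/11, 5/11)$), and $H(d,x) = 0$ at $x = g(d)$, as the third identity requires.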
\begin{lemma} \label{lem:dfr}
For fixed $x$, let $a_i, \, i \in {\cal D}(0)$ be actions minimising
the rhs in equations~(\ref{eq:oe-sfa}) and (\ref{eq:oe-sfb})
and let $y_i, \, i \in {\cal D}(0)$ be the corresponding $y$ values.
Set
\begin{equation} \label{eq:a-fr}
a_{0} = \mathrm{argmin}_a \{ \; (c_0(a) - x + \sum\nolimits_{k \in {\cal D}(0)}
\bar{p}_{0 k}(a) y_k) / (1 - p_{0 0}(a)) \; \},
\end{equation}
and let $d$ be the policy that takes action $a_i$ in state $i, \; i \in S$.
Then $d$ minimises the expected $x$-revised cost until first return to state $0$,
and the expected $x$-revised first return cost under $d$ is
\begin{equation} \label{eq:h-fr}
H(d, x) = (c_0(a_{0}) - x + \sum\nolimits_{k \in {\cal D}(0)} \bar{p}_{0 k}(a_{0}) y_k) / (1 - p_{0 0}(a_{0})).
\end{equation}
\end{lemma}
\noindent {\bf Proof}
Since the process is Markov and skip-free in the negative direction,
it follows that a policy minimises the expected $x$-revised cost
until first return to state $0$ if and only if it also minimises the expected $x$-revised
total cost until first passage to state $0$ for each starting state
$i \ne 0$, i.e.\ $i \in {\cal D}(0)$,
and hence minimises the expected cost until first passage from $i$ to $\rho(i)$
for each $i \in {\cal D}(0) $.
For the one-dimensional case where $S = \{0, 1, \ldots,M\}$,
this problem has been called the $x$-revised first passage problem \cite{StWe89}.
For fixed $x$ and $i \in \{1, \ldots,M \}$, let $a_i$ be actions minimising
the rhs in equations~(\ref{eq:oe-sfa}) and (\ref{eq:oe-sfb})
and let $y_i$ be the corresponding $y$ values.
Then they show that the policy $d$ that takes action $d(i) = a_i$ in state $i$
is optimal for the $x$-revised first passage problem
and the minimal expected cost until first passage from $i$ to $i-1$ is given by $y_i$.
With only minor notational changes, their results extend directly to the general
case where $S$ corresponds to the nodes of a tree,
$\{ 1, \ldots,M\}$ is replaced by ${\cal D}(0)$
and $i-1$ is replaced by $\rho(i)$.
It follows that the policy that uses actions $a_i$ in $i \in {\cal D}(0) $
has the property that for each state $i$ it also minimises
the expected total $x$-revised cost until first passage to state $0$ and that
the minimum expected $x$-revised total cost until first passage
to state $0$, starting in state $i \ne 0$, is given by the sum of the $y_i$
values along the path from $i$ to $0$,
i.e.\ $\sum\nolimits_{k \in \Delta(0,i)} y_k$.
Now consider a process that starts in state $0$.
Under a policy that specifies action $a$ in state $0$,
the expected time until the process first leaves state $0$ is $1/(1-p_{0 0}(a))$ and
during that time it incurs $x$-revised costs at rate $c_0(a) - x$ per unit time.
Conditional on leaving state $0$, the first transition is to state $j$
with probability $p_{0 j}(a)/(1-p_{0 0}(a))$.
From above, the minimum additional expected total cost until the process next
re-enters state $0$ is $\sum\nolimits_{k \in \Delta(0,j)} y_k$,
and this minimum expected cost is achieved by the policy that takes actions
$a_i$ in states $i \in {\cal D}(0) $.
Thus, if a policy $d$ takes action $a$ in state $0$,
the minimum expected $x$-revised cost from leaving state $0$
until first return to state $0$ is
$\sum\nolimits_{j \in {\cal D}(0)} p_{0 j}(a) \sum\nolimits_{k \in \Delta(0,j)} y_k /(1 - p_{0 0}(a))$
$= \sum\nolimits_{k \in {\cal D}(0)} \sum\nolimits_{j \in {\cal T}(k)} p_{0 j}(a) y_k /(1 - p_{0 0}(a))$
$= \sum\nolimits_{k \in {\cal D}(0)} \bar{p}_{0 k}(a) y_k /(1 - p_{0 0}(a))$.
It follows that the optimal action in state $0$ is one that minimises the quantity
$(c_0(a) - x + \sum\nolimits_{k \in {\cal D}(0)} \bar{p}_{0 k}(a) y_k) / (1 - p_{0 0}(a))$
and the expected $x$-revised first return cost $H(d,x)$ is as shown. \hfill $\Box$
\begin{lemma} \label{lem:imp}
Let $d$ be a fixed policy with expected average cost $g(d)$
and let $d^1$ be the optimal $x$-revised policy specified in Lemma \ref{lem:dfr}
for the case $x = g(d)$.
Then:\\
(i) the average cost under $d^1$ is no greater than the average cost under $d$,\\
(ii) if the average cost under $d^1$ is the same as the average cost under $d$
then $d^1$ is an optimal policy for the average cost problem.
\end{lemma}
\noindent {\bf Proof}
(i) For the fixed $x$, we know from Lemma \ref{lem:dfr} that
$d^1$ is an optimal policy for the $x$-revised first return problem.
Thus $H(d^1,x) \le H(d,x)$, and from (\ref{eq:hct}) this implies
$C(d^1) - x \tau(d^1) \le C(d) - x \tau(d)$.
Because $x$ corresponds to the average cost under $d$,
then, from (\ref{eq:hct}), $x = g(d) = C(d)/\tau(d)$ so $C(d) - x \tau(d) = 0$.
Thus, $H(d^1,x) = C(d^1) - x \tau(d^1) \le 0$
and $g(d^1) = C(d^1)/\tau(d^1) \le x = g(d)$.\\
(ii)
If $g(d^1) = g(d)$, then from above $H(d^1,x) = H(d,x) = 0$.
But, from Lemma~\ref{lem:dfr},
$H(d^1,x) = (c_0(a_0) - x + \sum_{k \in {\cal D}(0)}
\bar{p}_{0 k}(a_0) y_k) / (1 - p_{0 0}(a_0))$,
where $p_{0 0}(a_0) < 1$.
It follows that $H(d^1,x) = 0 \implies
(c_0(a_0) - x + \sum_{k \in {\cal D}(0)} \bar{p}_{0 k}(a_0) y_k) = 0$.
Thus, when $g(d^1) = g(d)$,
the values $x = g(d^1)$ and the corresponding values of $y_i, \, i \in {\cal D}(0)$ satisfy
the optimality equations~(\ref{eq:oe-sfa}-\ref{eq:oe-sfc})
and $d^1$ is a decision rule corresponding to the actions minimising the
rhs of each equation.
It follows that $d^1$ is an optimal average cost policy,
the optimal average cost is $g^* = g(d^1) = g(d)$
and the normalised relative costs under the optimal policy are
$h^*_j = \sum_{k \in \Delta(0,j)}y_k$. \hfill $\Box$
\begin{lemma} \label{lem:eval}
Let $a_i, \, i \in S$ be fixed actions
and let $d$ be the fixed policy for which $d(i) = a_i, \, i \in S$.
Perform a single iteration of step 2 of the skip-free algorithm
with starting value $x$ and with the action in each state $i$ restricted to the single value $a_i$.
If the algorithm output values are $u_0$, $y_i, \, i \in {\cal D}(0)$ and $t_i, \, i \in S$,
then $H(d,x)$ and $\tau(d)$ are given by equations (\ref{eq:h-fr})
and (\ref{eq:timec}).
Further, if the starting value is $x = 0$, then $g(d) = u_0$.
\end{lemma}
\noindent {\bf Proof}
The expression for $H(d,x)$ follows from Lemma \ref{lem:dfr}
by considering the possible actions in state $i$ to be
restricted to just the given $a_i$.
For the expected first return epoch under $d$, write $t_0 = \tau(d) > 0$
and write $t_i > 0$ for the expected first passage time from $i$ to $\rho(i)$.
Interpret $t_0$ as the expected $0$-revised first return cost under $d$
for a model with immediate costs $c_i(a) = 1$ for all states and actions (and with $x = 0$),
with a similar interpretation for the $t_i$.
Then, as with the $y_i$, the $t_i$ can be computed recursively
using the equations
$t_i = 1/p_{i \rho(i)}(a_i), \, i \in L_M;
t_i = (1 + \sum_{k \in{\cal D}(i)} \bar{p}_{i k}(a_i) t_k )/p_{i \rho(i)}(a_i),
\; i \in L_{M-1},\ldots,L_1,$ and
\begin{equation} \label{eq:timec}
\tau(d) = t_0 = (1 + \sum\nolimits_{k \in {\cal D}(0)} \bar{p}_{0 k}(a_0) t_k) / (1 - p_{0 0}(a_0)).
\end{equation}
Finally set $x = 0$. Then $g(d) = H(d,0)/\tau(d)$ from (\ref{eq:hct}),
so from (\ref{eq:h-fr}) and (\ref{eq:timec})
$ g(d) = (c_0(a_0) + \sum\nolimits_{k \in {\cal D}(0)} \bar{p}_{0 k}(a_0) y_k) /
(1 + \sum\nolimits_{k \in {\cal D}(0)} \bar{p}_{0 k}(a_0) t_k) = u_0.$ \hfill $\Box$
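For the one-dimensional case $S = \{0, 1, \ldots, M\}$, this evaluation is a single backward sweep. A hedged Python sketch (our own encoding; we take the $y$ recursion to mirror the $t$ recursion above, with cost $c_i(a_i) - x$ in place of $1$):

```python
def evaluate_policy(P, c, x=0.0):
    """Evaluate a fixed policy on a one-dimensional skip-free chain.

    P[i][j] = p_{ij} under the policy, c[i] = immediate cost, states 0..M.
    Computes y_i and t_i from i = M down to 1, then u_0; with x = 0
    the returned u_0 equals the average cost g(d).
    """
    M = len(c) - 1
    pbar = lambda i, k: sum(P[i][k:])        # pbar_{ik} = sum_{j >= k} p_{ij}
    y, t = [0.0] * (M + 1), [0.0] * (M + 1)
    for i in range(M, 0, -1):
        wy = sum(pbar(i, k) * y[k] for k in range(i + 1, M + 1))
        wt = sum(pbar(i, k) * t[k] for k in range(i + 1, M + 1))
        y[i] = (c[i] - x + wy) / P[i][i - 1]
        t[i] = (1.0 + wt) / P[i][i - 1]
    wy0 = sum(pbar(0, k) * y[k] for k in range(1, M + 1))
    wt0 = sum(pbar(0, k) * t[k] for k in range(1, M + 1))
    return (c[0] - x + wy0) / (1.0 + wt0), y, t
```

On a two-state chain with $p_{00} = p_{01} = 0.5$, $p_{10} = 0.6$, $p_{11} = 0.4$ and costs $(1, 3)$, this returns $u_0 = 21/11$, agreeing with the stationary-distribution computation of the average cost.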
Given a current policy $d$ with average cost $x = g(d)$,
both the original optimality equations (\ref{eq:oe-sfall})
and the $x$-revised approach suggest updating $d$ with a
policy that for $i \in {\cal D}(0)$ uses the actions $a_i$ identified by
equations (\ref{eq:oe-sfa}) and (\ref{eq:oe-sfb}).
However they differ in their suggested action $a_0$ in state $0$ --
the former suggests using the action minimising
the rhs in equation~(\ref{eq:oe-sfc}) while the latter suggests using
the action identified in (\ref{eq:a-fr}). However, the above lemma suggests
another possible choice would be
\begin{equation} \label{eq:a-mi}
a_{0} = \mathrm{argmin}_a \{ \; (c_0(a) - x + \sum\nolimits_{k \in {\cal D}(0)}
\bar{p}_{0 k}(a) y_k) / (1+ \sum\nolimits_{k \in {\cal D}(0)} \bar{p}_{0 k}(a) t_k) \; \}.
\end{equation}
This results in a policy that minimises the average cost over all policies that take
the given actions $a_i$ in states $i \in {\cal D}(0)$.
The next lemma shows all three variations either strictly improve on $d$
or identify an optimal policy.
\begin{lemma} \label{lem:123}
Let $d$ be a fixed policy and let $x = g(d)$.
For this $x$, let $a_i, \, i \in {\cal D}(0)$ be actions minimising
the rhs in equations~(\ref{eq:oe-sfa}) and (\ref{eq:oe-sfb})
and let $y_i, \, i \in {\cal D}(0)$ be the corresponding $y$ values.
Let $a_0^1$ be the action specified by equation (\ref{eq:a-fr}),
let $a_0^2$ be the action minimising the rhs of equation (\ref{eq:oe-sfc}),
and let $a_0^3$ be the action specified by equation (\ref{eq:a-mi}).
For $k = 1, 2, 3$, let $d^k$ be the policy that takes action $a_i$ in state $i \in {\cal D}(0)$
and takes action $a_0^k$ in state $0$.
Then either (i) all three policies $d^k$ satisfy $g(d^k) < g(d)$, or
(ii) all three policies satisfy $g(d^k) = g(d)$ and each of the three (and $d$ itself) provides an optimal average cost policy.
\end{lemma}
\noindent {\bf Proof}
For fixed $x$ and any policy $d$, $g(d) - x = H(d,x)/\tau(d)$ from (\ref{eq:hct})
and $\tau(d)$ is positive, so $g(d) - x$ has the same sign as $H(d,x)$.
Since all three policies take actions $a_i$ in states $i \in {\cal D}(0)$,
expression (\ref{eq:h-fr}) gives their respective expected $x$-revised first return
costs as
$H(d^k,x) = (c_0(a_0^k) - x + \sum\nolimits_{j \in {\cal D}(0)} \bar{p}_{0 j}(a_0^k) y_j) / (1 - p_{0 0}(a_0^k))$,
where each $p_{00}(a_0^k) < 1$ by the assumptions of the skip-free model.
Now $H(d^2,x) < 0 \implies
(c_0(a_0^2) - x + \sum\nolimits_{k \in {\cal D}(0)} \bar{p}_{0 k}(a_0^2) y_k) / (1 - p_{0 0}(a_0^2)) < 0 \implies
(c_0(a_0^1) - x + \sum\nolimits_{k \in {\cal D}(0)} \bar{p}_{0 k}(a_0^1) y_k) / (1 - p_{0 0}(a_0^1)) < 0$
(as $a_0^1$ minimises this quantity over choice of $a$)
$\implies H(d^1,x)< 0$.
Conversely $H(d^1,x) < 0 \implies
(c_0(a_0^1) - x + \sum\nolimits_{k \in {\cal D}(0)} \bar{p}_{0 k}(a_0^1) y_k) / (1 - p_{0 0}(a_0^1)) < 0 \implies
(c_0(a_0^1) - x + \sum\nolimits_{k \in {\cal D}(0)} \bar{p}_{0 k}(a_0^1) y_k) < 0 \implies
(c_0(a_0^2) - x + \sum\nolimits_{k \in {\cal D}(0)} \bar{p}_{0 k}(a_0^2) y_k) < 0$
(as $a_0^2$ minimises this quantity over choice of $a$)
$\implies H(d^2,x)< 0$.
A similar argument utilising the definition of $a_0^3$
and the positivity of the denominator $(1+ \sum\nolimits_{k \in {\cal D}(0)} \bar{p}_{0 k}(a) t_k)$
shows that $H(d^2,x) < 0 \Longleftrightarrow H(d^3,x) < 0$.
Exactly similar arguments then show
that $H(d^1,x) = 0 \Longleftrightarrow H(d^2,x) = 0 \Longleftrightarrow H(d^3,x) = 0$,
and that $H(d^1,x) > 0 \Longleftrightarrow H(d^2,x) > 0
\Longleftrightarrow H(d^3,x) > 0$.
The second part of the lemma then follows from Lemma~\ref{lem:imp}. \hfill $\Box$
\noindent {\bf Proof of Theorem \ref{th:grr}}
(i) It follows from Lemma \ref{lem:eval} that the initialisation step outputs $g_{0} = g(d_{0})$.
Now let $x = g_n$ and assume $g_n = g(d_n)$.
Then iteration $n+1$ outputs $g_{n+1} = g_n + u_0$,
where $u_0 =
(c_0(a_0) - x + \sum\nolimits_{k \in {\cal D}(0)} \bar{p}_{0 k}(a_0) y_k) / (1+ \sum\nolimits_{k \in {\cal D}(0)} \bar{p}_{0 k}(a_0) t_k) =
H(d_{n+1},x)/\tau(d_{n+1})$ from (\ref{eq:h-fr}) and (\ref{eq:timec}).
Thus $g_{n+1} = x + H(d_{n+1},x)/\tau(d_{n+1}) = g(d_{n+1})$ from equation (\ref{eq:hct}).
Since $g_0 = g(d_0)$, it follows by induction that $g_n = g(d_n)$ for $n = 0,1,2,\ldots$.
By construction at iteration $n+1$ the skip-free algorithm
specifies $d_{n+1}(i) = a_i, \; i \in S$,
where $a_i, \, i \in {\cal D}(0)$ are the actions minimising
the rhs in equations~(\ref{eq:oe-sfa}) and (\ref{eq:oe-sfb}) for this value of $x$
(and $y_i, \, i \in {\cal D}(0)$ and $t_i, \, i \in {\cal D}(0)$ are the corresponding $y$ and $t$ values),
and $a_0$ is the action minimising the rhs in equation (\ref{eq:a-mi}).
It follows from Lemma \ref{lem:123} that either
$g(d_{n+1}) < g(d_n)$, or $g(d_{n+1}) = g(d_n)$ and both $d_{n+1}$ and $d_n$ provide optimal average cost policies.
Finally the expression for $h_j^*$ follows from considering the case $i = 0$
in the representation $h_j - h_i = \sum_{k \in \Delta(i,j)}y_k$ in Lemma \ref{lem:oe}
with the normalisation $h_0 = 0$.
(ii) Since the set of possible stationary deterministic decision rules is finite,
and each iteration prior to convergence leads to a strict improvement
and hence a strictly different decision rule,
the process must converge after a finite number of steps. \hfill $\Box$
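Combining the minimisation of the $y$ equations with the state-$0$ update in (\ref{eq:a-mi}) gives the full iteration. A hedged one-dimensional sketch (our own encoding, assuming every action satisfies $p_{i,i-1}(a) > 0$ for $i \ge 1$, i.e.\ the recurrent model), not the authors' code:

```python
def skip_free_policy_iteration(P, c, tol=1e-12, max_iter=100):
    """Sketch of the skip-free algorithm on a one-dimensional chain.

    P[i][a] = list of p_{ij}(a); c[i][a] = immediate cost; states 0..M.
    Assumes every action has p_{i,i-1}(a) > 0 for i >= 1 (recurrent model).
    Returns (optimal average cost g, policy d).
    """
    M, x = len(c) - 1, 0.0
    d = [0] * (M + 1)
    for _ in range(max_iter):
        y, t = [0.0] * (M + 1), [0.0] * (M + 1)
        for i in range(M, 0, -1):               # minimise the y equations
            best = None
            for a, p in enumerate(P[i]):
                wy = sum(sum(p[k:]) * y[k] for k in range(i + 1, M + 1))
                yi = (c[i][a] - x + wy) / p[i - 1]
                if best is None or yi < best[0]:
                    wt = sum(sum(p[k:]) * t[k] for k in range(i + 1, M + 1))
                    best = (yi, (1.0 + wt) / p[i - 1], a)
            y[i], t[i], d[i] = best
        u0 = None                               # state-0 update of eq. (a-mi)
        for a, p in enumerate(P[0]):
            wy = sum(sum(p[k:]) * y[k] for k in range(1, M + 1))
            wt = sum(sum(p[k:]) * t[k] for k in range(1, M + 1))
            u = (c[0][a] - x + wy) / (1.0 + wt)
            if u0 is None or u < u0:
                u0, d[0] = u, a
        x += u0                                 # g_{n+1} = x + u_0
        if abs(u0) < tol:                       # u_0 = 0: optimality reached
            break
    return x, d
```

On a two-state example with a second, costlier but faster-returning action in state $1$, the iteration settles on the cheaper action with $g^* = 21/11$ after a few iterations, each prior step strictly decreasing the average cost as the theorem guarantees.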
\noindent{\bf Remark}
The update proposed in the skip-free algorithm uses $a_0$
satisfying (\ref{eq:a-mi}).
It has the property that, for each current policy $d$,
it generates an improved policy with average cost at least as small as
the other two variants considered in Lemma \ref{lem:123}.
This does not guarantee that improvements
using this update converge faster than improvements using either
of the other two variants. After one iteration, each policy
may generate a different starting point for the next iteration,
and our results do not allow us to compare the policies from these different
starting points -- indeed it might be that the larger the improvement from the
first iteration, the smaller the improvement resulting from the second iteration,
as the average cost is now closer to the optimal value.
Our experience has been that the number of iterations taken by all three methods was
often the same.
Where one was fastest, it was always the one using (\ref{eq:a-mi}),
but the relative ranking of the other two depended on the model parameters.
\bibliographystyle{agsm}
\section{Introduction}
Graph data structures are ubiquitous. There have been several advancements in graph processing techniques across several domains such as citation networks \cite{sen2008collective}, medicine \cite{anderson1992infectious}, and e-commerce \cite{choudhary2021self}. Recently, non-Euclidean geometries such as hyperbolic spaces have shown significant potential in representation learning and graph prediction \cite{chami2019hyperbolic}. Due to their exponential growth in volume with respect to radius \cite{krioukov2010hyperbolic}, hyperbolic spaces are able to embed hierarchical structures with low distortion, i.e., the increase in the number of nodes in a tree with increasing depth is analogous to moving outwards from the origin in a Poincaré ball (since both grow exponentially). This similarity enables the hyperbolic space to project the child nodes exponentially further away from the origin as compared to their parents
(as shown in Figure \ref{fig:1}.a)
which captures hierarchy in the embedding space. However, there are certain fundamental questions about hyperbolic models which are yet to be investigated. For example,
\begin{itemize}[leftmargin=*]
\item Hyperbolic models operate on completely new manifolds with their own gyrovector space \cite{2008Ungar} (to be differentiated from the Euclidean vector space). Because they operate on a different manifold, techniques such as dropout or L1/2 regularization are not guaranteed to produce similar behavior.
\item There are also some critical limitations in the extensibility of hyperbolic space. While the currently developed hyperbolic operations \cite{ganea2018hyperbolic} provide fair support for basic sequential neural networks, complex Euclidean architectures such as (multi-headed) attention networks \cite{vaswani2017attention} require additional operations such as mean and concatenation. Accurate translation of the operations that are trivial in the Euclidean space becomes highly non-trivial when it comes to supporting them on the hyperbolic manifold due to the special properties of hyperbolic models \cite{shimizu2021hyperbolic}.
\item Hyperbolic operators are computationally very expensive, i.e., the number of complex mathematical operations required for each primitive operator (see section 3), as well as the limited GPU support for functions such as $tanh^{-1}$, limits their scalability as compared to Euclidean graph neural networks.
\item Lastly, current hyperbolic frameworks perform well in the presence of inherent hierarchy in graphs (such as taxonomies or ontologies), but they lose their competitive edge on non-hierarchical graphs (e.g., hyperbolic networks perform worse than their Euclidean counterparts on citation network datasets, which are known to have low hyperbolicity \cite{chami2019hyperbolic}).
\end{itemize}
Traditionally, hyperbolic models use a mixture of hyperbolic geodesics, Möbius math, and tangent space approximations at varying points. Instead of switching between multiple frameworks with potentially incongruent operations, in this work, we propose to use the Poincaré disk model as our search space and apply all approximations on the disk as if the disk were a tangent space derived from the origin. This enables us to replace the non-scalable Möbius math with a Euclidean approximation, and then to simplify the whole hyperbolic model into a Euclidean model cascaded with a hyperbolic normalization function. In our approach we replace all Möbius math with Euclidean math, yet we still work in a Riemannian manifold (using a Riemannian optimizer to determine the direction of the gradient); thus, we call it the \textbf{Pseudo-Poincaré} framework (shown in Figure \ref{fig:1}.b).
Our approach enables the use of inexpensive operators and also leads to scalability on existing hardware infrastructures that are well optimized for machine-learning workloads.
\begin{figure}[tb]
\begin{subfigure}{.45\textwidth}
\centering
\includegraphics[width=.8\textwidth]{images/poincare_ball.png}
\caption{}
\end{subfigure}
\begin{subfigure}{.45\textwidth}
\centering
\includegraphics[width=.8\textwidth]{images/pseudo_poincare.png}
\caption{}
\end{subfigure}
\vspace{-0.5em}
\caption{\footnotesize (a) Mapping hierarchical data onto a Poincaré ball. Starting with the root as origin (red), the children are continuously embedded further away with exponentially increasing distances. (b) Aggregation of vectors in Euclidean, Poincaré Ball and pseudo-Poincaré space (hyperbolic vectors in Euclidean space). {\color{red}Red} points are the aggregation of {\color{blue}blue} points. In the pseudo-Poincaré paradigm, we reformulate hyperbolic operations to capture hierarchy in the Euclidean space.}
\label{fig:1}
\vspace{-0.6em}
\end{figure}
Although this approximation can cause a possible distortion in the latent space, all hyperbolic methods use a tangent space approximation for their point aggregation, and we argue that fixing the tangent space at the origin does not reduce performance in general cases (when spatial locality of the aggregated points is not guaranteed).
Given that our formulation lies in the Euclidean space, it is unbounded by the closure of the hyperbolic space and can explore greater degrees of freedom in gradient descent. This freedom also allows our model to explore the Euclidean solution space, in congruence with the hyperbolic solution space, to obtain the best results in both the presence and the absence of hierarchy in our datasets. For further evidence, we apply our formulation to construct a new pseudo-Poincaré graph convolution network ($NGCN$), an attention network ($NGAT$), and a multi-relational graph embedding model ($NMuR$). We demonstrate that the reformulated models outperform several state-of-the-art Euclidean and non-Euclidean graph networks on standard graph tasks.
To summarize, the main contributions of our paper are as follows:
\begin{enumerate}[noitemsep,leftmargin=*]
\item We propose a generalized formulation of hyperbolic operations in a pseudo-Poincaré paradigm, that can effectively and efficiently capture the hierarchical features in a Euclidean space, on par with the hyperbolic space.
\item Our paradigm allows for greater degrees of freedom in gradient descent, satisfies the universal approximation theorem, and is also able to explore Euclidean solutions.
\item We demonstrate that Euclidean networks with the proposed hyperbolic normalization ($NGCN$, $NGAT$, and $NMuR$ models) outperform the state-of-the-art Euclidean and non-Euclidean baselines across standard benchmarks on several graph problems including node classification, link prediction, and multi-relational embedding.
\end{enumerate}
The rest of the paper is structured as follows: Section \ref{sec:related} discusses the related work in the area. Section \ref{sec:hyperbolic} delves into the current hyperbolic formulation and Section \ref{sec:bridge} reformulates the hyperbolic layers in the pseudo-Poincaré paradigm. Section \ref{sec:experiments} demonstrates our experimental results and insights into the performance of our paradigm. Section \ref{sec:conclusion} concludes the paper.
\section{Related Work}
\label{sec:related}
\textbf{Euclidean Networks.} Research into graph representation learning can be broadly classified into two categories based on the nature of the underlying graph dataset: (i) Homogeneous graphs - where the edges connect the nodes but do not have any underlying information and (ii) Heterogeneous or multi-relational graphs - where the edges contain information regarding the type of connection or a complex relation definition. Early research in this area relied on capturing information from the adjacency matrix through factorization and random walks. In matrix factorization based approaches \cite{cao2015grarep}, the adjacency matrix $A$ is factorized into low-dimensional dense matrices $L$ such that $\|L^TL-A\|$ is minimized. In random walk based approaches \cite{grover2016node2vec,NIPS2012_7a614fd0,narayanan2017graph2vec,tang2015line,wang2016structural}, the nodes' information is aggregated through message passing over its neighbors through random or metapath \cite{fu2020magnn} traversal. More recently, the focus has shifted towards neighborhood aggregation through neural networks, specifically, Graph Neural Networks (GNNs) \cite{scarselli2008graph}. In this line of approach, several architectures such as GraphSage \cite{hamilton2017inductive}, GCN \cite{kipf2017semi}, and GAT \cite{velickovic2018graph} have utilized popular deep learning frameworks to aggregate information from the nodes' neighborhood. Alongside this, we also notice the concurrent development of architectures for multi-relational graphs. Generally focused on representation learning in knowledge graphs, approaches such as TransE \cite{bordes2013translating}, DistMult \cite{yang2015embedding}, RotatE \cite{sun2018rotate}, and MuRE \cite{balazevic2019multi} model the tail entities as head entities transformed by a relation embedding.
\textbf{Hyperbolic Networks.} While the given graph-specific approaches (in Euclidean space) show significant performance gains over their vanilla counterparts, they are limited to a certain depth of neighborhood and are also incapable of capturing the latent hierarchy in the graph datasets. They also suffer from high distortion \cite{chami2019hyperbolic}. The reason is that the representational space grows linearly while the number of nodes grows exponentially with increasing depth \cite{bottleneck-21} in large graph datasets. This deficiency has led to the growth of hyperbolic networks which are shown to be an effective alternative. First proposed as basic Hyperbolic Neural Networks \cite{ganea2018hyperbolic} for multinomial logistic regression, the use of hyperbolic space to capture hierarchy in graph datasets has led to the development of several architectures for representation learning in both homogeneous (HGCN \cite{chami2019hyperbolic}, HAT \cite{zhang2021hyperbolic}) and heterogeneous (MuRP \cite{balazevic2019multi}, HypE \cite{choudhary2021self}) graph datasets. These architectures, primarily, utilize the gyrovector space model to formulate the neural network operations in a Poincaré ball of curvature $c$ as Möbius addition $(\oplus_c)$, exponential map $(\exp_x^c)$, logarithmic map $(\log_x^c)$, Möbius {scalar multiplication} $(\otimes_c)$, and hyperbolic activation $(\sigma_c)$ \cite{ganea2018hyperbolic}.
While hyperbolic networks show promising results, switching back and forth between Möbius gyrovector operations and Euclidean tangent approximations (at various points) makes them computationally very expensive, limits their extensibility, and makes them less predictable when applying techniques such as L1/2 regularization and dropout. Furthermore, such back and forth between the manifolds (at variable points) comes with an implicit assumption that the embeddings have spatial locality in all dimensions for the models to be performant.
\section{Hyperbolic Networks}
\label{sec:hyperbolic}
\subsection{Hyperbolic Transformation}
\textbf{Hyperbolic space} is a Riemannian manifold with constant negative curvature. The coordinates in hyperbolic space can be represented in several isometric models, such as the Minkowski space $\mathbb{H}^{n}_1$, the Beltrami--Klein model $\mathbb{K}^n_c$, and the Poincaré ball $\mathbb{B}^n_c$. In this paper, we consider the Poincaré ball model because it is the most widely used model for hyperbolic networks.
Since most of the raw data (and true outputs) are assumed to live in Euclidean space, the first step in using hyperbolic models is to map between the Euclidean and hyperbolic spaces. For an embedding $x\in\mathbb{R}^n$, its corresponding hyperbolic embedding $p \in\mathbb{B}^n_c$ in the Poincaré ball of curvature $c$ is computed with the exponential map as $\label{eq1}
p = \exp_0^c(x) = \frac{\tanh(\sqrt{c}|x|)}{\sqrt{c}|x|} x$.
Conversely, the logarithmic map, which transforms a hyperbolic embedding $p \in\mathbb{B}^n_c$ back to a Euclidean vector $x\in\mathbb{R}^n$, is formulated as $\label{eq2-1}
x = \log_0^c(p) = \frac{\tanh^{-1}(\sqrt{c}|p|)}{\sqrt{c}|p|}p$.
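As an illustration, the two maps above can be sketched in a few lines of plain Python (the helper names exp0 and log0 are ours, not from any library; the curvature defaults to $c=1$):

```python
import math

def exp0(x, c=1.0):
    """Exponential map at the origin: Euclidean vector -> Poincare ball."""
    sq = math.sqrt(c)
    n = math.sqrt(sum(v * v for v in x)) or 1e-15  # guard against the zero vector
    scale = math.tanh(sq * n) / (sq * n)
    return [scale * v for v in x]

def log0(p, c=1.0):
    """Logarithmic map at the origin: Poincare ball -> Euclidean vector."""
    sq = math.sqrt(c)
    n = math.sqrt(sum(v * v for v in p)) or 1e-15
    scale = math.atanh(sq * n) / (sq * n)
    return [scale * v for v in p]
```

Since $\tanh$ and $\tanh^{-1}$ are inverses, the two maps invert each other, and any mapped point lies strictly inside the ball of radius $1/\sqrt{c}$.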
\subsection{Hyperbolic Layers}
\label{sec:hyperbolic-layers}
\textbf{Hyperbolic Feed-forward layer.} For the Euclidean function $f:\mathbb{R}^n\rightarrow \mathbb{R}^m$, its equivalent Möbius version $f^{\otimes_c}:\mathbb{B}^n_c\rightarrow \mathbb{B}^m_c$ for the Poincaré ball is defined as $f^{\otimes_c}(p) \coloneqq \exp_0^c\left(f\left(\log_0^c(p)\right)\right) \label{eq:h_ffn}$. Extending this, the hyperbolic equivalent $h^{\mathbb{B}}(p)$ of the Euclidean linear layer $h^{\mathbb{R}}(x) = Wx$ with weight matrix $W \in \mathbb{R}^{n\times m}$ is formulated as:
\begin{align} \label{eq3}
h^{\mathbb{B}}(p) &= W \otimes_c p = f^{\otimes_c}\left(Wp\right) = \frac{\tanh\left(\frac{|Wp|}{|p|}\tanh^{-1}\left(\sqrt{c}|p|\right)\right)}{\sqrt{c}|Wp|} Wp
\end{align}
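The closed form above can be sketched directly in plain Python (the helper names are ours; $c=1$ by default). Note that the identity matrix recovers $p$ exactly, since the tangent-space scaling cancels:

```python
import math

def _norm(v):
    return math.sqrt(sum(x * x for x in v))

def mobius_linear(W, p, c=1.0):
    """W (x)_c p computed from the closed form: Mobius matrix-vector product."""
    Wp = [sum(wij * pj for wij, pj in zip(row, p)) for row in W]
    sq = math.sqrt(c)
    np_ = _norm(p) or 1e-15
    nWp = _norm(Wp) or 1e-15
    scale = math.tanh((nWp / np_) * math.atanh(sq * np_)) / (sq * nWp)
    return [scale * v for v in Wp]
```

The output always stays inside the Poincaré ball, even when the Euclidean product $Wp$ would leave it.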
\textbf{Hyperbolic Convolution.} Consider a hyperbolic graph node representation $p_0 \in \mathbb{B}^n_c$ with neighbors $\{p_i\}_{i=1}^k$. As given in \cite{chami2019hyperbolic}, the hyperbolic convolution layer $GCN^{\otimes_c}:\mathbb{B}^n_c \rightarrow \mathbb{B}^m_c$ consists of a linear transform followed by neighborhood aggregation\footnote{The aggregation occurs in the tangent space as concatenation is not well-defined for hyperbolic space.} and is computed as:
\begin{align}
h^{\mathbb{B}}(p_i) &= W_i \otimes_c p_i \oplus_c b_i;\quad
h^{\mathbb{B}}_{agg}(p_0) = \exp^c_{p_0}\left(\sum_{i=0}^k w_{ij}\log^c_{p_0}\left(h^{\mathbb{B}}(p_i)\right)\right) \label{eq:agg_1}\\
GCN^{\otimes_c}(p_0) &= \exp^c_0\left(\sigma\left(\log^c_0\left(h^{\mathbb{B}}_{agg}(p_0)\right)\right)\right) \label{eq:hgcn}
\end{align}
where $w_{ij}= \operatorname{softmax}_{i=1}^k\left(\mathrm{MLP}\left(\log^c_0(p_0)||\log^c_0(p_i)\right)\right)$ are the aggregation weights for the neighbors.
\textbf{Multi-relational Hyperbolic Representation.} The multi-relational representations for a knowledge graph $KG$ are modeled using a scoring function. For a triplet $(h,r,t) \in KG$, where relation $x_r \in \mathbb{R}^{n}$ connects the head entity $x_h \in \mathbb{R}^{n}$ to the tail entity $x_t \in \mathbb{R}^{n}$ and $R \in \mathbb{R}^{n\times n}$ is the diagonal relation matrix, the hyperbolic equivalent $\phi^{\mathbb{B}}(p_h,p_r,p_t)$ of the Euclidean scoring function $\phi(x_h,x_r,x_t)=-\|Rx_h-(x_t+x_r)\|^2$ is computed using the hyperbolic distance $d^\mathbb{B}$ as:
\begin{align}
d^\mathbb{B}(p_1,p_2) &= 2\|\log^c_0(-p_1\oplus_cp_2)\| = \frac{2}{\sqrt{c}}\tanh^{-1}\left(\sqrt{c}\|-p_1\oplus_c p_2\|\right) \label{eq:mrh_dist}\\
\phi^{\mathbb{B}}(p_h, p_r, p_t) &= -d^\mathbb{B}\left(\exp^c_0\left(R\log_0^c(p_h)\right),p_t\oplus_c p_r\right)^2\label{eq:mrh_score}
\end{align}
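The hyperbolic distance can be sketched directly from the Möbius addition formula of \cite{ganea2018hyperbolic} (helper names are ours; $c=1$ by default):

```python
import math

def mobius_add(p, q, c=1.0):
    """Mobius addition (+)_c on the Poincare ball (gyrovector form)."""
    pq = sum(a * b for a, b in zip(p, q))   # inner product <p, q>
    p2 = sum(a * a for a in p)              # |p|^2
    q2 = sum(b * b for b in q)              # |q|^2
    num_p = 1 + 2 * c * pq + c * q2
    num_q = 1 - c * p2
    den = 1 + 2 * c * pq + c * c * p2 * q2
    return [(num_p * a + num_q * b) / den for a, b in zip(p, q)]

def hyp_dist(p, q, c=1.0):
    """d_B(p, q) = (2 / sqrt(c)) * atanh(sqrt(c) * |(-p) (+)_c q|)."""
    diff = mobius_add([-a for a in p], q, c)
    n = math.sqrt(sum(v * v for v in diff))
    return (2.0 / math.sqrt(c)) * math.atanh(math.sqrt(c) * n)
```

For $c=1$ this recovers the familiar special cases $d(p,p)=0$ and $d(0,q)=2\tanh^{-1}(\|q\|)$.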
\section{Pseudo-Poincaré Hyperbolic Networks}
\label{sec:bridge}
This section introduces our methodology for reformulating each of the aforementioned hyperbolic layers in Euclidean space. We base our framework on the argument that, unless spatial locality is guaranteed, approximating in the tangent space at a variable point does not necessarily increase the approximation accuracy. Therefore, fixing the tangent space at the origin both simplifies the model and improves its performance in the general case.
Moreover, only the tangent space at the origin has a one-to-one mapping with the Poincaré (hyperbolic) half-plane model. This enables us to combine the tangent-space approximation with the Poincaré half-plane model and apply all approximations in the half-plane, which makes the pseudo-Poincaré networks much easier to understand, develop, and optimize.
The notion of curvature becomes more complicated in high-dimensional spaces since the manifold can curve in many different directions. Therefore, in line with the literature \cite{lee2019introduction}, we focus our arguments on a sub-space of the manifold (specifically, a 2-D sub-space). Now, consider two points on a 2-D Riemannian manifold with constant curvature (i.e., a sphere or a hyperbolic plane in the ambient space). There are infinitely many planes that cross-section the manifold and pass through these two points. The distance between the two points is the length of the shortest curve (which lies in the optimal plane-section). We further restrict our argument to this plane-section (and curve) for the sake of simplicity; the arguments generalize to higher dimensions.
\begin{lemma}
\label{th:approx}
For a tangent line at any point on the curve, the midpoint-approximation error depends on the angle between the chord through $x$ and $y$ and the tangent line.
\end{lemma}
Thus, if the approximation happens on the tangent line at a point $p$ that lies between $x$ and $y$, the approximation error is smaller than the one obtained using tangent lines at either of the points being aggregated.
Now, on a half-circle, if we fix $p$ to be the origin ($p_0$), there is a $50\%$ chance that $p$ lies between the aggregated points (assuming the given points' coordinates are uniformly distributed).
\textbf{Pseudo-Poincaré Feed-forward Layer.} We show that cascading hyperbolic feed-forward layers (including linear layers) is equivalent to cascading their Euclidean counterparts and then applying a hyperbolic normalization to the result.
\begin{lemma}
\label{lm:simp_exp}
For a point in the tangent space at the origin, $x\in\mathcal{T}_{0_n}\mathbb{B}^n_c$, the exponential map to the hyperbolic space $\mathbb{B}^n_c$, $\exp_0^c:\mathcal{T}_{0_n}\mathbb{B}^n_c\rightarrow \mathbb{B}^n_c$, is equivalent to scaling the input by a non-linear scalar function $\omega:\mathbb{R}^n\rightarrow\mathbb{R}$, i.e.,
\begin{equation} \label{eq2}
p = \exp_0^c(x) = \omega(x)x;\quad \omega(x) = \frac{\tanh(\sqrt{c}|x|)}{\sqrt{c}|x|}
\end{equation}
where $\omega(x)$ is the hyperbolic normalization for the exponential map. This follows directly from Eq.~(\ref{eq1}).
\end{lemma}
\begin{lemma}
\label{lm:ff}
Approximating in the tangent space at the origin at every stage makes the cascade of $n$ hyperbolic feed-forward layers $F_n^{\otimes_c} = \{f^{\otimes_c}_i\}_{i=1}^n$ equivalent to cascading $n$ of their Euclidean counterparts $F_n = \{f_i\}_{i=1}^n$ encapsulated in $\log_0^c$ and $\exp_0^c$ functions, i.e.,
\begin{align}
\nonumber F_n(x) &= f_n\left(f_{n-1}\left( \cdots f_1\left(x\right)\right)\right)\\
\nonumber F_n^{\otimes_c}(p) &= f_n^{\otimes_c}\left(f^{\otimes_c}_{n-1}\left( \cdots f_1^{\otimes_c}(p)\right)\right) = \exp_0^c\left(F_n(\log^c_0(p))\right)
\end{align}
\end{lemma}
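This lemma can be checked numerically: since $\log_0^c$ inverts $\exp_0^c$, the intermediate map pairs cancel exactly. A minimal sketch in plain Python (helper names are ours):

```python
import math

def _norm(v):
    return math.sqrt(sum(x * x for x in v))

def exp0(x, c=1.0):
    n = _norm(x) or 1e-15
    s = math.tanh(math.sqrt(c) * n) / (math.sqrt(c) * n)
    return [s * v for v in x]

def log0(p, c=1.0):
    n = _norm(p) or 1e-15
    s = math.atanh(math.sqrt(c) * n) / (math.sqrt(c) * n)
    return [s * v for v in p]

def mobius_version(f, p, c=1.0):
    """f^{(x)_c} = exp_0^c o f o log_0^c: the Mobius version of a Euclidean map f."""
    return exp0(f(log0(p, c)), c)
```

Cascading two Möbius layers agrees (up to floating point) with encapsulating the Euclidean cascade in a single exp/log pair.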
\textbf{Proposition} Lemma \ref{lm:ff} implies that the hyperbolic transformations $\exp_0^c$ and $\log_0^c$ are only needed at the initial and final stages to capture hierarchy in Euclidean networks.
\begin{lemma}
\label{lm:input-features} For given input features $x=\log_0^c(p) \in \mathbb{R}^n$, a hyperbolic feed-forward layer can be rewritten as $f^{\otimes_c}(x) \coloneqq \omega(f(x))f(x)$.
\end{lemma}
From lemmas \ref{lm:ff} and \ref{lm:input-features}, we arrive at the following theorem for cascaded hyperbolic layers.
\begin{theorem}
\label{ff-generalized}
Fixing the tangent space to be always at the origin, cascaded hyperbolic feed-forward layers $F_n^{\otimes_c}$ for a Euclidean input $x\in \mathbb{R}^n$ can be reformulated as $F_n^{\otimes_c}(x) \coloneqq \Omega(F_n(x))F_n(x)$, where $\Omega(F_n(x))$ is the hyperbolic normalization for the cascaded hyperbolic feed-forward network.
\end{theorem}
\textbf{Hyperbolic Normalization.} Theorem \ref{ff-generalized} allows us to state that cascaded hyperbolic layers can be reformulated in Euclidean space using a hyperbolic normalization ($\Omega$) function. This approximation is more accurate at the first layers, where adjacent input embeddings do not yet show a strong spatial-locality property, and hence all operations can be approximated accurately in the tangent space at the origin. The approximation resembles the weight-normalization technique proposed in \cite{salimans2016weight}, which uses the equation $\label{w-norm} w = \frac{e^s}{\|v\|}v$, where $v$ is an $n$-dimensional vector used to learn the direction and $s$ is a trainable parameter. However, unlike weight normalization, hyperbolic normalization acts on the features (not the weights), and it uses a $\tanh$ scaling function instead of $e^s$.
Both are similar in that they use the vector magnitude as the denominator and hence remove the orthogonality of the dimensions.
The ``unrolled deep hyperbolic network" (shown in Figure \ref{fig:hyperbolic_normalization_framework}) presents the general idea of stacking multiple hyperbolic layers together. The input to each layer is passed through a logarithmic map to project it into Euclidean space, and its output is projected back into the hyperbolic space after the computation is applied. Equations (1)--(9) demonstrate how the various hyperbolic layers are implemented.
Algorithm \ref{alg:metapath} provides the generalized algorithm for our proposed hyperbolic model.
\begin{algorithm}[htbp]
\SetAlgoLined
\KwIn{Input Euclidean model $F_L$, sample node $x \in \mathbb{R}^n$}
\KwOut{Output of Normalized hyperbolic model $F_L^{\otimes_c}(x)$;}
$f_{0}^{\otimes_c}(x) = x;$\\
\For{layer $l:1\rightarrow L$}
{
$x \leftarrow f^{\otimes_c}_{l-1}(x);$\\
{\color{blue} \# Layer $l$ of the Euclidean model is $f_l(x)$}\\
$\omega(f_l(x)) = \frac{\tanh(\sqrt{c}|f_l(x)|)}{\sqrt{c}|f_l(x)|}$; using Eq. (\ref{eq2})\\
$f^{\otimes_c}_l(x) = \omega(f_l(x))f_l(x)$; using Lemma \ref{lm:input-features}\\
}
$F_L^{\otimes_c}(x) = f^{\otimes_c}_L(x);$\\
\Return{$F_L^{\otimes_c}(x)$}
\caption{Normalized hyperbolic model. (Figure \ref{fig:hyperbolic_normalization_framework})}
\label{alg:metapath}
\end{algorithm}
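The algorithm can be sketched in a few lines of plain Python (the layer functions are arbitrary Euclidean maps; helper names are ours):

```python
import math

def omega(y, c=1.0):
    """Hyperbolic normalization: tanh(sqrt(c)*|y|) / (sqrt(c)*|y|)."""
    n = math.sqrt(sum(v * v for v in y)) or 1e-15  # guard against the zero vector
    return math.tanh(math.sqrt(c) * n) / (math.sqrt(c) * n)

def normalized_model(layers, x, c=1.0):
    """Apply each Euclidean layer, then rescale its output by omega."""
    for f in layers:
        y = f(x)
        x = [omega(y, c) * v for v in y]
    return x
```

Because $\omega$ rescales the norm to $\tanh(\sqrt{c}|y|)/\sqrt{c} < 1/\sqrt{c}$, every intermediate output remains inside the Poincaré ball without any explicit manifold operations.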
\begin{figure*}[!h]
\footnotesize
\vspace{-1em}
\centering
\includegraphics[width=.9\textwidth]{images/pseudo_poincare_overview.png}
\vspace{-.4em}
\caption{\footnotesize The top row labeled ``Unrolled Deep Hyperbolic Network" presents the general idea of stacking a deep hyperbolic network with $L$ layers. The input to each layer is passed through a logarithmic map to project onto a Euclidean space, and following the Euclidean layer computation, the output is projected back onto the hyperbolic space. Section \ref{sec:hyperbolic-layers} demonstrates how such an operator or function chaining is performed, and it is easy to see how stacking such deep layers can become expensive. Our methodology, shown in the bottom row, completely avoids the repeated application of these subspace transformations.}
\vspace{-1em}
\label{fig:hyperbolic_normalization_framework}
\end{figure*}
\textbf{Pseudo-Poincaré Graph Convolution.} The individual components of the Hyperbolic Graph Convolution, as defined in Eq. (\ref{eq:hgcn}), can be reformulated as cascaded hyperbolic feed-forward layers. Thus, we approximate both the aggregation weights and the aggregation function in $\mathcal{T}_0\mathbb{B}^n_c$ as:
\begin{align}
h^{\mathbb{B}}_{agg}(p_0) &= \exp^c_0\left(\sum_{i=0}^k w_{ij}\log^c_0\left(h^{\mathbb{B}}(p_i)\right)\right) \label{eq:agg}
\end{align}
With this approximation, we apply Theorem \ref{ff-generalized} to reformulate the $GCN^{\otimes_c}:\mathbb{B}^n_c \rightarrow \mathbb{B}^m_c$ layers as the Normalized GCN $NGCN^{\otimes_c}:\mathbb{R}^n \rightarrow \mathbb{R}^m$ over Euclidean inputs $x_0 \in \mathbb{R}^n$ with neighbors $\{x_i\}_{i=1}^k$. This is computed as $NGCN^{\otimes_c} \coloneqq \Omega(GCN(x_0))GCN(x_0)$, where $GCN(x_0)$ is the Euclidean Graph Convolution layer \cite{kipf2017semi}.
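A toy sketch of this computation in plain Python (for simplicity we use uniform aggregation weights in place of the learned attention weights and omit the bias; all names are ours):

```python
import math

def hyp_norm(y, c=1.0):
    """Scale a Euclidean vector by the hyperbolic normalization factor."""
    n = math.sqrt(sum(v * v for v in y)) or 1e-15
    s = math.tanh(math.sqrt(c) * n) / (math.sqrt(c) * n)
    return [s * v for v in y]

def ngcn_layer(x0, neighbors, W, c=1.0):
    """NGCN sketch: Euclidean transform + aggregate, then hyperbolic-normalize."""
    nodes = [x0] + neighbors
    # linear transform of every node (W @ node), then a uniform average
    h = [[sum(w * v for w, v in zip(row, node)) for row in W] for node in nodes]
    agg = [sum(col) / len(h) for col in zip(*h)]
    return hyp_norm(agg, c)
```

All trainable parameters ($W$) stay in Euclidean space; only the final rescaling reflects the hyperbolic geometry.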
Note that the cascaded feed-forward functions of HGCN \cite{chami2019hyperbolic} operate on different tangent spaces, i.e., the aggregation weights are calculated in the tangent space at the origin $\mathcal{T}_0\mathbb{B}^n_c$, whereas the aggregation function, as given in Eq. (\ref{eq:agg_1}), is derived from the tangent space at the root node $\mathcal{T}_{p_0}\mathbb{B}^n_c$. The argument for using different tangent spaces for different functions rests on the fact that the Euclidean approximation is most accurate when the tangent space is close to the true resulting point. For aggregation, using the tangent space at $p_{0}$ requires an implicit assumption that all aggregated hyperbolic points preserve a form of spatial locality, which implies similarity rather than hierarchy. However, we can still argue that hierarchical structures have a loose form of spatial locality, and hence using $\log^{c}_0(p)$ adds some noise to the embedding compared to using $\log^{c}_{p_0}(p)$.
\textbf{Pseudo-Poincaré Graph Attention Layer.}\label{sec:model} Here, our goal is to project the Euclidean GAT layer into the hyperbolic space by merely applying the hyperbolic normalization (e.g., Eq. (\ref{eq2})), and then study its approximation in relation to existing hyperbolic graph attention layers as well as to a true hyperbolic graph attention mechanism.
Hyperbolic attention is formulated as a two-step process (as defined in \cite{gulcehre2018hyperbolic}): (1) matching, which computes the attention weights, and (2) aggregation, which takes a weighted average of the values (with the attention coefficients as weights).
This way, the attention layer can be seen as two cascaded layers, in which the first calculates the attention weights, while the second layer aggregates the weighted vectors.
The existing hyperbolic graph attention method \cite{zhang2021hyperbolic} $GAT^{\otimes_c}:\mathbb{B}^n_c \rightarrow \mathbb{B}^m_c$ views the hyperbolic aggregation as a hyperbolic feed-forward layer with a Euclidean weighted average as its function. Similarly, $GAT:\mathbb{R}^n\rightarrow\mathbb{R}^m$ \cite{velickovic2018graph} also uses a weighted average for the aggregation.
Since we are using a Euclidean weighted average over cascaded feed-forward layers, we can apply Theorem \ref{ff-generalized} to the hyperbolic GAT and approximate it in the hyperbolic space with a hyperbolic normalization function and a Euclidean GAT as $NGAT^{\otimes_c}(x) \coloneqq \Omega(GAT(x))GAT(x)$.
\textbf{Pseudo-Poincaré Multi-relational Representation.} For Euclidean inputs $x_1,x_2 \in \mathbb{R}^n$ and the L1-norm distance $d^\mathbb{R}(x_1,x_2)$, the multi-relational hyperbolic distance $d^\mathbb{B}$, given in Eq. (\ref{eq:mrh_dist}), can alternatively be reformulated as $d^\mathbb{B}(x_1,x_2) = \omega(x)d^\mathbb{R}(x_1,x_2) \label{eq:ball_score}$.
Similarly, for Euclidean triplet $(x_h,x_r,x_t) \in \mathbb{R}^n$, the scoring function $\phi^{\mathbb{B}}(p_h, p_r, p_t)$ can also be approximated to $NMuR^{\mathbb{B}}(x_h, x_r, x_t)$ as:
\begin{align}
\phi^{\mathbb{R}}(x_h, x_r, x_t) &= -d^{\mathbb{R}}\left(Rx_h,x_t+x_r\right)^2 \label{eq:real_score}\\
NMuR^{\mathbb{B}}(x_h, x_r, x_t) &= \Omega(\phi^{\mathbb{R}}(x_h, x_r, x_t))\phi^{\mathbb{R}}(x_h, x_r, x_t) \label{eq:new_score}
\end{align}
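A minimal sketch of this scoring function in plain Python, under the paper's diagonal-relation-matrix setting (all names are ours):

```python
import math

def l1(u, v):
    """L1 distance between two Euclidean vectors."""
    return sum(abs(a - b) for a, b in zip(u, v))

def nmur_score(xh, xr, xt, R_diag, c=1.0):
    """NMuR sketch: hyperbolic normalization applied to the Euclidean MuRE score."""
    Rh = [r * h for r, h in zip(R_diag, xh)]           # R x_h (diagonal R)
    score = -l1(Rh, [t + r for t, r in zip(xt, xr)]) ** 2
    m = math.sqrt(c) * abs(score) or 1e-15             # guard against zero score
    return (math.tanh(m) / m) * score                  # Omega(score) * score
```

A perfect match ($Rx_h = x_t + x_r$) scores $0$; any mismatch yields a negative score squashed into $(-1/\sqrt{c}, 0)$.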
\textbf{Optimization.} \label{sec:ortho}The scaling function in Eq. (\ref{eq2}) uses the vector norm in the denominator; hence, each dimension depends on the values of the other dimensions (i.e., the dimensions are non-orthogonal). Because of this, optimizers developed under orthogonality assumptions on the dimensions, such as ADAM, are not expected to produce optimal weights. Thus, we choose Riemannian optimizers such as RiemannianADAM \cite{becigneul2018riemannian} for our model training. For traditional hyperbolic models, regularization techniques such as L2 regularization do not work as expected since the weights lie in hyperbolic space. However, the trainable parameters in our formulation are in Euclidean space; hence, we can apply the regularization techniques used for Euclidean networks.
\section{Experimental Setup}\label{sec:experiments}
In this section, we aim to answer the following research questions (RQs) through our experiments:
\begin{itemize}[noitemsep,leftmargin=*]
\item \textbf{RQ1:} How do the pseudo-Poincaré variants, NGAT and NGCN, compare to their Euclidean and hyperbolic counterparts in terms of performance on node classification and link prediction?
\item \textbf{RQ2:} How does the pseudo-Poincaré variant, NMuR, compare to its Euclidean and hyperbolic counterpart in terms of performance on reasoning over knowledge graphs?
\item \textbf{RQ3:} How do pseudo-Poincaré variants react to the choice of optimizers, and what are the effects on their execution time? How does the hyperbolic normalization compare to its counterparts?
\item \textbf{RQ4:} What is the difference in the representations obtained through hyperbolic projection and hyperbolic normalization?
\end{itemize}
\subsection{Problem Setting}
For comparison of hyperbolic normalization with other methods, we conduct our experiments on the following tasks: (i) graph prediction (node classification and link prediction) and (ii) reasoning over knowledge graphs, which are described below:
\textbf{Graph Prediction.} Consider a graph $\mathcal{G}=(V, E)$, where $V = \{v_i\}_{i=1}^{|V|}$ with $v_i \in \mathbb{R}^n$ is the set of all nodes and $E \in \{0,1\}^{|V|\times|V|}$ is the boolean adjacency matrix, with $|V|$ the number of nodes in the graph, $e_{ij}=1$ if a link exists between nodes $i$ and $j$, and $e_{ij}=0$ otherwise. Each node $v_i \in V$ is also labeled with a class $y_i$.
In the task of \textbf{node classification}, the primary aim is to estimate a predictor model $P_\theta$ with parameters $\theta$ such that for an input node $v_i$, $P_\theta(v_i|E) = y_i$.
In the task of \textbf{link prediction}, the primary aim is to estimate a predictor model $P_\theta$ with parameters $\theta$ such that for an input node pair $v_i, v_j$ and an incomplete adjacency matrix $\hat{E}$, $P_\theta(v_i,v_j|\hat{E})= e_{ij}$, where $e_{ij} \in E$ is an element of the complete adjacency matrix.
\textbf{Reasoning over Knowledge Graphs.} {Let us say that a set of triplets constitutes a knowledge graph $\mathcal{KG} = \{(h_i,r_i,t_i)\}_{i=1}^{|\mathcal{KG}|}$, where $h_i \in \mathbb{R}^n$ and $t_i \in \mathbb{R}^n$ are the head and tail entity, respectively connected by a relation $r_i \in \mathbb{R}^n$. In the task of \textbf{reasoning} \cite{balazevic2019multi}, our goal is to learn representations for the entities and relations using a scoring function that minimizes the distance between heads and tails using the relation as a transformation. The scoring function for Euclidean $MuRE$, hyperbolic $MuRP$ and pseudo-Poincaré $NMuR$ are presented in Eq. (\ref{eq:real_score}), Eq. (\ref{eq:ball_score}) and Eq. (\ref{eq:new_score}), respectively.}
\subsection{Datasets and Baselines}
\textbf{Datasets.} For comparing our model with the baselines, we chose the following standard benchmark datasets: (i) \textbf{Cora} \cite{rossi2015the} contains 2708 publications with paper and author information connected by citation links and classified into 7 classes based on their research areas, (ii) \textbf{Pubmed} \cite{sen2008collective} contains medical publications pertaining to diabetes labeled into 3 classes. The dataset is collected from the Pubmed database, (iii) \textbf{Citeseer} \cite{sen2008collective} contains 3312 annotated scientific publications connected by citation links and classified into 6 classes of research areas, (iv) \textbf{WN18RR} \cite{dettmers2018convolutional} is a subset of the hierarchical WordNet relational graph that connects words with different types of semantic relations. WN18RR consists of 40,943 entities connected by 11 different semantic relations and (v) \textbf{FB15k-237} \cite{bordes2013translating, toutanova2015representing} contains triple relations of the knowledge graph and textual mentions from the Freebase entity pairs, with all simple invertible relations removed for a better comparison. The dataset contains 14,541 entities connected by 237 relations.
(vi) The Protein-Protein Interaction (\textbf{PPI}) network dataset \cite{NIPS2017_5dd9db5e} contains 12M+ triples with 15 relations (we randomly sampled the dataset while preserving the original ratio of the relations, and trained on the sampled subset).
We use Cora, Pubmed, and Citeseer for our experiments on homogeneous graph prediction, i.e., node classification and link prediction. For comparing the multi-relational networks on the task of reasoning, we utilize the FB15k-237 and WN18RR datasets. For a fair comparison of evaluation metrics, we use the same training, validation, and test splits as used in the previous methods, i.e., the splits for Cora, Pubmed, and Citeseer are as given in \cite{chami2019hyperbolic} and for FB15k-237 and WN18RR, the splits are in accordance with \cite{balazevic2019multi}.
\textbf{Baselines.} For comparing the performance of our Pseudo-Poincaré models, we utilize the Euclidean and hyperbolic variants of the architecture as our baselines; (i) \textbf{Graph Convolution (GCN)} \cite{kipf2017semi} aggregates the message of a node's k-hop neighborhood using a convolution filter to compute the neighbor's significance to the root, (ii) \textbf{Graph Attention (GAT)} \cite{velickovic2018graph} aggregates the messages from the neighborhood using learnable attention weights, (iii) \textbf{Hyperbolic Graph Convolution (HGCN)} \cite{chami2019hyperbolic} is a hyperbolic formulation of the GCN network that utilizes the hyperbolic space and consequently, Möbius operations to aggregate hierarchical features from the neighborhood, (iv) \textbf{Hyperbolic Graph Attention (HGAT)} \cite{yang2021hgat} is a hyperbolic formulation of the GAT networks to capture hierarchical features in the attention weights, (v) \textbf{Multi-relational Embeddings (MuRE)} \cite{balazevic2019multi} transforms the head entity by a relation matrix and then learns representation by minimizing L1-norm between head and tail entities translated by the relation embedding and (vi) \textbf{Multi-relational Poincaré (MuRP)} \cite{balazevic2019multi} is a hyperbolic equivalent of MuRE that transforms the head entity by a relation matrix in the Poincaré ball and then learns representation by minimizing the hyperbolic distance between the head and tail entities translated by the relation embedding.
\textbf{Implementation Details.} We run our experiments on an Nvidia T4 GPU with 16 GB of VRAM. Our models are implemented on the Pytorch framework \cite{paszke2019pytorch} and the baselines are adopted from \cite{chami2019hyperbolic} for GAT, GCN, and HGCN, and from \cite{balazevic2019multi} for MuRE and MuRP. Due to the unavailability of a public implementation, we use our own implementation of HGAT based on \cite{yang2021hgat}. The implementations have been thoroughly tuned with different hyper-parameter settings for the best performance\footnote{\footnotesize \href{https://github.com/oom-debugger/pseudo-poincare}{https://github.com/oom-debugger/pseudo-poincare}.}.
For the GAT baseline, we used 64 dimensions with 4 and 8 attention heads (we found that 4 heads perform better and hence only present those results in our tables), a dropout rate of 0.6, and L2 regularization of $\lambda = 0.0005$ (for Pubmed and Cora) and $\lambda = 0.001$ (for CiteSeer), in line with the hyper-parameters reported in \cite{velickovic2018graph}. We used the same number of dimensions and the same hyper-parameters for our NGAT model. For the GCN baseline, 64 dimensions were used in \cite{velickovic2018graph}; we used the same number of dimensions and the same hyper-parameter settings as for the GAT baseline.
It should be noted that we observed poor performance when we applied the regularization techniques (used by the GAT baseline) to the hyperbolic models. For HGCN, the original paper used 16 hidden dimensions; however, we tried 64, 32, and 16 hidden dimensions and found that 64 dimensions produce better results.
For the curvature choice, we used different curvatures for different tasks. For node classification, we used $c=0.3$ (for Cora) and $c=1$ (for Pubmed and Citeseer) as they produced the best results, while for link prediction, $c=1.5$ produced the best results. In the end, we empirically found that scaling the output of the hyperbolic normalization layer by a factor of 5 produces the best results when $c=0.3$--$0.5$ (and for $c=1.5$, we set the scaling factor to 3).
\begin{table*}[tbp]
\centering
\footnotesize
\vspace{-0.5em}
\caption{\footnotesize Performance comparison results between hyperbolic normalization (ours) and the baseline methods on the graph prediction tasks of node classification and link prediction. The columns present the evaluation metrics, which are Accuracy (for node classification) and Area under ROC (ROC) (for link prediction), along with their corresponding 95\% confidence intervals. {The cells with the best performance are highlighted in bold and the second-best performance is marked with a box.}}
\vspace{-0.5em}
\resizebox{\textwidth}{!}{
\begin{tabular}{ll|ccc|ccc}
\hline
\textbf{Repr.}&\textbf{Datasets}&\multicolumn{3}{c|}{\textbf{Node Classification (Accuracy in \%)}}&\multicolumn{3}{c}{\textbf{Link Prediction(ROC in \%)}}\\
\textbf{Space}& \textbf{Models}&\textbf{CORA}&\textbf{Pubmed}&\textbf{Citeseer}&\textbf{Cora}&\textbf{Pubmed}&\textbf{Citeseer}\\\hline
\textbf{Euclidean}&\textbf{GCN}&80.1$\pm$0.3&78.5$\pm$0.3&71.4$\pm$0.3&90.2$\pm$0.2&92.6$\pm$0.2&91.3$\pm$0.2\\
&\textbf{GAT}&\fbox{82.7$\pm$0.2}&79.0$\pm$0.3&71.6$\pm$0.3&89.6$\pm$0.2&92.4$\pm$0.2&\textbf{93.6$\pm$0.2}\\
\hline
\textbf{Hyperbolic}&\textbf{HGCN}&77.9$\pm$0.3&78.9$\pm$0.3&69.6$\pm$0.3&\textbf{91.4$\pm$0.2}&\textbf{95.0$\pm$0.1}&\fbox{92.8$\pm$0.3}\\
&\textbf{HGAT}&79.6$\pm$0.3&\textbf{79.2$\pm$0.3}&68.1$\pm$0.3&90.8$\pm$0.2&93.9$\pm$0.2&92.2$\pm$0.2\\
\hline
\textbf{Pseudo-}&\textbf{NGCN}&82.4$\pm$0.2&78.8$\pm$0.3&\fbox{71.9$\pm$0.3}&\fbox{91.3$\pm$0.2}&\fbox{94.7$\pm$0.1}&\fbox{92.8$\pm$0.2}\\
\textbf{Poincaré}&\textbf{NGAT}&\textbf{83.1$\pm$0.2}&\fbox{79.1$\pm$0.3}&\textbf{73.8$\pm$0.3}&90.5$\pm$0.2&93.9$\pm$0.2&\fbox{92.8$\pm$0.2}\\\hline
\end{tabular}}
\vspace{-0.5em}
\label{tab:overal-perf}
\vspace{-0.5em}
\end{table*}
\subsection{RQ1: Performance on Graph Prediction}
To evaluate the performance of hyperbolic normalization with respect to Euclidean and hyperbolic alternatives, we compare the $NGCN$ and $NGAT$ models against the baselines on the tasks of node classification and link prediction. We utilize the standard evaluation metrics of accuracy and ROC-AUC for the tasks of node classification and link prediction, respectively. The results for our experiments are provided in Table \ref{tab:overal-perf}.
From the results, in all cases (except link prediction on Citeseer), the pseudo-Poincaré variants outperform the Euclidean models. The pseudo-Poincaré variants perform better than their hyperbolic counterparts in node classification but marginally worse in link prediction. This is in line with our expectations, as the embeddings of adjacent points in link prediction on homogeneous graphs tend to have spatial locality. However, spatial locality is less likely in node classification (as far-away nodes can still belong to the same class).
It is also worth noting that the pseudo-Poincaré variants outperform the Euclidean models on all node-classification datasets regardless of the hyperbolicity of the dataset (e.g., Cora has low hyperbolicity \cite{chami2019hyperbolic}, which was previously suggested as a reason for hyperbolic networks underperforming their Euclidean counterparts). This observation suggests that pseudo-Poincaré is a more general-purpose framework, i.e., it is performant on a wider variety of tasks and datasets than pure hyperbolic models.
It should be noted that the pseudo-Poincaré variants are consistently at least the second-best performing methods, closely following the top performer, which makes them very appealing given their low complexity and speedup (see Table \ref{tab:speed-up} for execution times).
\begin{table*}[tbp]
\centering
\footnotesize
\caption{\footnotesize Performance comparison results between Pseudo-Poincaré (ours) and the baseline methods on the task of multi-relational graph reasoning. The columns present the evaluation metrics of Hits@K (H@K) (\%) and Mean Reciprocal Rank (MRR) (\%) along with their corresponding 95\% confidence intervals. The best results are highlighted in bold.}
\vspace{-0.5em}
\resizebox{\textwidth}{!}{
\begin{tabular}{p{.8em}p{2.8em}|ccc|ccc|ccc}
\hline
&\textbf{Datasets}&\multicolumn{3}{c|}{\textbf{WN18RR}}&\multicolumn{3}{c|}{\textbf{FB15K-237}}&\multicolumn{3}{c}{\textbf{PPI}}\\\hline
\textbf{Dim}&\textbf{Models}&\textbf{MRR}&\textbf{H@10}&\textbf{H@3}&\textbf{MRR}&\textbf{H@10}&\textbf{H@3}&\textbf{MRR}&\textbf{H@10}&\textbf{H@3}\\\hline
\textbf{40}&\textbf{MuRE}&40.9$\pm$0.3&49.7$\pm$0.3&44.5$\pm$0.3&29.0$\pm$0.3&46.2$\pm$0.3&31.9$\pm$0.3&2.81$\pm$0.3&9.24$\pm$0.3&4.11$\pm$0.3\\
&\textbf{MuRP}&42.5$\pm$0.3&52.2$\pm$0.3&45.9$\pm$0.3&29.8$\pm$0.3&47.4$\pm$0.3&32.8$\pm$0.3&5.55$\pm$0.3&10.21$\pm$0.3&4.43$\pm$0.3\\
&\textbf{NMuR}&\textbf{43.6$\pm$0.3}&\textbf{57.5$\pm$0.3}&\textbf{47.4$\pm$0.3}&\textbf{31.5$\pm$0.3}&\textbf{49.7$\pm$0.3}&\textbf{34.7$\pm$0.3}&\textbf{8.21$\pm$0.3}&\textbf{14.3$\pm$0.3}&\textbf{11.7$\pm$0.3}\\\hline
\textbf{200}&\textbf{MuRE}&44$\pm$0.3&51.3$\pm$0.3&45.5$\pm$0.3&31.4$\pm$0.3&49.5$\pm$0.3&34.5$\pm$0.3&10.68$\pm$0.3&14.55$\pm$0.3&11.90$\pm$0.3\\
&\textbf{MuRP}&44.6$\pm$0.3&52.4$\pm$0.3&46.2$\pm$0.3&31.5$\pm$0.3&49.8$\pm$0.3&34.8$\pm$0.3&8.23$\pm$0.3&15.09$\pm$0.3&9.98$\pm$0.3\\
&\textbf{NMuR}&\textbf{44.7$\pm$0.3}&\textbf{57.9$\pm$0.3}&\textbf{48.1$\pm$0.3}&\textbf{32.2$\pm$0.3}&\textbf{50.6$\pm$0.3}&\textbf{35.5$\pm$0.3}&\textbf{12.68$\pm$0.3}&\textbf{15.07$\pm$0.3}&\textbf{13.82$\pm$0.3}\\\hline
\end{tabular}}
\label{tab:multi-relation}
\vspace{-1em}
\end{table*}
\subsection{RQ2: Multi-relational Reasoning}
To compare the performance of hyperbolic normalization against the baselines for multi-relational graphs, we compare the $NMuR$ model against $MuRE$ and $MuRP$ on the task of reasoning. We consider the standard evaluation metrics of Hits@K and Mean Reciprocal Rank (MRR) to compare our methods on the reasoning task. Let the set of results for a head-relation query $(h,r)$ be denoted by $R_{(h,r)}$; the metrics are then computed as $MRR = \frac{1}{n}\sum_{i=1}^n \frac{1}{rank(i)}$ and $Hits@K = \frac{1}{K} \sum_{k=1}^K e_k$, where $e_k= 1$ if $e_k \in R_{(h,r)}$ and $0$ otherwise. Here, $rank(i)$ is the rank of the $i^{th}$ retrieved sample in the ground truth. To study the effect of dimensionality, we evaluate the models with two values of the embedding dimension: $n=40$ and $n=200$. The results from our comparison are given in Table \ref{tab:multi-relation}.
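These metrics can be computed from the 1-indexed ranks of the true entities; a short sketch of the standard formulations (function names are ours):

```python
def mrr(ranks):
    """Mean reciprocal rank over the 1-indexed ranks of the true entities."""
    return sum(1.0 / r for r in ranks) / len(ranks)

def hits_at_k(ranks, k):
    """Fraction of queries whose true entity appears among the top-k results."""
    return sum(1 for r in ranks if r <= k) / len(ranks)
```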
From the results, note that $NMuR$ outperforms the hyperbolic model $MuRP$, on average, by $\approx 5\%$ with 40 dimensions and $\approx 3\%$ with 200 dimensions across the various metrics and datasets. This shows that $NMuR$ is able to learn better representations with fewer dimensions. Also note that the difference in performance from increasing the number of dimensions is $<1\%$ in most cases, implying that NMuR is already able to capture the hierarchical space of the knowledge graphs at lower dimensions. This limits the memory needed for reasoning without a significant loss in performance.
\begin{table}[tbp]
\footnotesize
\vspace{-1em}
\caption{\footnotesize (a) Comparison of normalization methods (Layer-Norm, Hyperbolic Norm, and Constant magnitude). The columns present results on Cora (CR), Pubmed (PM), and Citeseer (CS) datasets using GAT and GCN models. (b) The training epoch times (in seconds - lower is better) for Euclidean, hyperbolic, and Pseudo-Poincaré models. Note that the provided results are without any hyper-parameter tuning, thus the comparison is only valid column-wise. Table \ref{tab:overal-perf} provides the results for cross model comparison.}
\begin{subtable}{.5\linewidth}
\vspace{-.8em}
\caption{Comparison of normalization methods.}
\vspace{-0.5em}
\centering
\begin{tabular}{ll|ccc}
\hline
\textbf{Base Model} & &\textbf{CR} & \textbf{PM} & \textbf{CS}\\\hline
\textbf{GCN} & \textbf{Constant} & 67.9 & 76.7 & 68.3\\
& \textbf{Layer-Norm} & 78.3 & 75.0 & 63.6\\
& \textbf{Hyp-Norm} & \textbf{80.9} & \textbf{78.0} & \textbf{72.1}\\\hline
\textbf{GAT} & \textbf{Constant} & 75.3 & 73.0 & 61.2\\
& \textbf{Layer-Norm} & 75.6 & 74.9 & 65.3\\
& \textbf{Hyp-Norm} & \textbf{76.7} & \textbf{76.3} & \textbf{68.6}\\\hline
\end{tabular}
\vspace{-1em}
\label{tab:layer-norm}
\end{subtable}
\begin{subtable}{.5\linewidth}
\centering
\vspace{-1em}
\caption{Comparison of execution times.}
\vspace{-0.5em}
\begin{tabular}{ll|ccc}
\hline
\textbf{Model} & &\textbf{CR} & \textbf{PM} & \textbf{CS}\\\hline
\textbf{Convolution} & \textbf{GCN} &0.07 & 0.13 & 0.04\\
& \textbf{HGCN} & 0.29 & 0.33 & 0.21\\
& \textbf{NGCN} & \textbf{0.08} & \textbf{0.14} &\textbf{0.06}\\\hline
\textbf{Attention} & \textbf{GAT} & 0.16 & 0.38 & 0.09\\
& \textbf{HGAT} & 1.10 & 1.34 & 0.81\\
& \textbf{NGAT} & \textbf{0.19} & \textbf{0.37} & \textbf{0.12}\\\hline
\end{tabular}
\vspace{-1em}
\label{tab:speed-up}
\end{subtable}
\label{tab:full_table}
\end{table}
\subsection{RQ3: Model Analysis}
\textbf{Hyperbolic Normalization vs. Layer Normalization.} In this experiment, we study how our hyperbolic normalization differs from existing normalization approaches. For this, we select Layer-norm \cite{ba2016layer}, as it is the closest in terms of input/output. In addition, to study the effect of hyperbolic scaling in our normalization layer, we added constant-norm, computed as $norm(x) = x / \|x\|$.
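For reference, the two baseline normalizations have simple closed forms; a minimal NumPy sketch (implementation details are ours, and the hyperbolic normalization layer itself, defined earlier in the paper, is not reproduced here):

```python
import numpy as np

def constant_norm(x, eps=1e-12):
    """Constant-magnitude baseline: norm(x) = x / ||x||, applied row-wise."""
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def layer_norm(x, eps=1e-12):
    """Layer-norm (Ba et al. 2016) without the learnable gain/bias:
    shift to zero mean and scale to unit variance along the feature axis."""
    mu = x.mean(axis=-1, keepdims=True)
    sigma = x.std(axis=-1, keepdims=True)
    return (x - mu) / (sigma + eps)

x = np.array([[3.0, 4.0]])
print(constant_norm(x))  # [[0.6 0.8]] -- unit Euclidean norm
```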
To compare the effect of the normalization layers, we did not use any extra hyper-parameter tuning or regularization. We set the embedding size to 64 for both GAT and GCN, and used 4 heads for the GAT models. Table \ref{tab:layer-norm} compares the normalization methods on the Cora, Pubmed, and Citeseer datasets, and confirms that hyperbolic normalization outperforms the alternatives. Since Layer-norm keeps the GAT model in the Euclidean space, we used the ADAM optimizer for it. However, since both hyperbolic normalization and constant normalization have $\|x\|$ as their denominator, we used RiemannianAdam for them (see Section \ref{sec:ortho}).
\textbf{Execution Time Comparison.} In this analysis, we demonstrate the impact of hyperbolic normalization in improving the scalability of hyperbolic networks.
We study the computational cost of training our model versus its hyperbolic and Euclidean counterparts. We ran all the experiments on a Quadro RTX 8000 with 48GB memory, and report the mean time per training epoch (in seconds) for the link prediction task in Table \ref{tab:speed-up}. Our model NGCN is 3 times faster than HGCN on average over the entire training, while NGAT is 5 times faster than its hyperbolic counterpart (HGAT). NGCN and NGAT are $25\%$ and $12\%$ slower than their Euclidean counterparts, respectively.
We observed the smallest speedup when training on Pubmed, due to its very large memory footprint. Even in this case, our NGCN model is still more than 2 times faster than HGCN and NGAT is more than 3.5 times faster than HGAT.
\textbf{Optimizer Choice.} In this study, we experimentally validate our observations regarding the choice of optimizers, as discussed in Section \ref{sec:ortho}. Table \ref{tab:optimizer} shows that NGAT works better when RiemannianAdam is used as an optimizer. This shows that, although the parameters are in the Euclidean space, their gradient updates occur in accordance with hyperbolic gradients. Therefore, NGAT leverages the architecture of GAT, but behaves more closely to HGAT when it comes to parameter optimization.
\begin{table}[tbp]
\centering
\footnotesize
\vspace{-0.2em}
\caption{\footnotesize \label{tab:optimizer} Performance of NGAT, GAT, and HGAT on node classification with RAdam and Adam optimizers.}
\begin{tabular}{ ll|ccc}
\hline
\textbf{Models}&\textbf{Optimizers}&\textbf{Cora}&\textbf{Pubmed}&\textbf{Citeseer}\\\hline
\textbf{GAT}&\textbf{Adam}&\textbf{82.7$\pm$0.2}&78.7$\pm$0.2&\textbf{71.6$\pm$0.3}\\
&\textbf{RAdam}&78.4$\pm$0.3&\textbf{79.0$\pm$0.3}&71.1$\pm$0.3\\\hline
\textbf{HGAT}&\textbf{Adam}&76.8$\pm$0.3&76.0$\pm$0.3&\textbf{68.1$\pm$0.3}\\
&\textbf{RAdam}&\textbf{79.6$\pm$0.3}&\textbf{78.9$\pm$0.3}&\textbf{68.1$\pm$0.3}\\\hline
\textbf{NGAT}&\textbf{Adam}&78.8$\pm$0.3&77.7$\pm$0.3&69.7$\pm$0.3\\
&\textbf{RAdam}&\textbf{83.9$\pm$0.2}&\textbf{79.1$\pm$0.3}&\textbf{73.8$\pm$0.3}\\\hline
\end{tabular}
\vspace{-0.5em}
\end{table}
\begin{figure*}[tbp]
\vspace{-.2em}
\centering
\includegraphics[width=0.76\textwidth]{images/emb_viz.png}
\includegraphics[width=0.12\textwidth]{images/emb_labels.png}
\vspace{-.2em}
\caption{Embedding visualizations of the Cora citation graph for Euclidean (GAT), hyperbolic (HGAT), and our proposed Pseudo-Poincaré (NGAT) methods. NGAT achieves better separation between the node classes than the corresponding Euclidean and hyperbolic counterparts.}
\vspace{-0.4em}
\label{fig:emb_viz}
\end{figure*}
\subsection{RQ4: Embedding Visualizations}
The goal of this section is to visually confirm that our approach of learning in Euclidean space indeed preserves the hierarchical characteristics of hyperbolic space. Towards this goal, we present the visualization of node embeddings from the Cora citation graph. We choose Cora since it is a citation-based graph and does not contain explicit hierarchical (is-a) relations. We contrast the embeddings learned by GAT (based on Euclidean space) and HGAT (learning in hyperbolic space) with those of the proposed NGAT approach. Figure \ref{fig:emb_viz} shows that NGAT ensures clearer separation between node classes in comparison to its Euclidean and hyperbolic counterparts.
\vspace{-0.1em}
\section{Conclusion}
\label{sec:conclusion}
In this paper, we showed that a Euclidean model can be converted to behave as a hyperbolic model by simply (1) connecting it to the hyperbolic normalization layer and (2) using a Riemannian optimizer.
From the training perspective, this approach allows the trainable parameters to interact in the Euclidean vector space, consequently enabling the use of Euclidean regularization techniques for the model training. Furthermore, the training of our proposed approach is significantly faster compared to the existing hyperbolic methods. This allows the hyperbolic networks to be more flexible in application to a wider variety of graph problems.
From task/application perspective, we removed the implicit assumption of having spatial locality of the adjacent nodes (in all dimensions), to be prerequisite of the hyperbolic model training. This allows the framework to be applied in a wider variety of tasks.
\vspace{-0.1em}
\section{Broader Impact}
\label{sec:broader_impact}
Pseudo-Poincaré framework is the first of its kind approach that reformulates the hyperbolic network into a Euclidean network with an added normalization function for effectively capturing hierarchy of datasets. The method is generalizable to all hyperbolic networks and provides empirical evidence that any Euclidean deep learning model can be converted to its hyperbolic equivalent by using the normalization function. This extends its application to several domains including natural language processing, computer vision, and knowledge graphs. However, our framework is also sensitive to the nature of the dataset and it is necessary to understand the hyperbolicity of the underlying data and hierarchical nature of the problem before applying our framework as the solution. Additionally, the integrity of the training dataset needs to be ensured as attacks would propagate over the entire network and cause errors in the downstream applications.
\bibliographystyle{plainnat}
\section{Introduction}
\label{intro}
Optical transients which are spatially coincident or associated with elongated and extended sources have a high probability of being supernovae (SNe). Thus when searching for SNe, concentrating on high mass or intrinsically high luminosity galaxies can be a fruitful endeavour to optimise the yield of recorded events as it will maximise the number of stars observed that can potentially explode as SNe. To date the majority of such searches at low redshift have adopted this approach to good effect, for example the Lick Observatory Supernova Search \citep{loss}. The unbiased nature of surveys like the Palomar Transient Factory \citep[PTF,][]{dwarfrates} and Pan-STARRS1 \citep[PS1,][]{PS1_system} mean that transients are being discovered without a bright galaxy bias and the neighbourhoods in which SNe are being found are not restricted to large, star forming galaxies. \cite{dwarfrates} found that the core-collapse SNe (CCSNe) population in dwarf galaxies is different to that found in giant galaxies, in the sense that there are many more broad-lined, Type Ic SNe in the former population. They link this to the metallicity of the underlying stellar population and its effect on stellar evolution.
Another class of SNe that have so far been discovered almost exclusively hosted by smaller, fainter galaxies is the relatively rare breed of superluminous SNe (SLSNe).
\cite{bluedeath} unravelled the mysteries of the luminous SN2005ap and SCP 06F6 by grouping them with a number of PTF discoveries to suggest these SLSNe as the death throes of at least some of the most massive of stars.
Detailed studies of some of these events, such as SN2010gx
\citep[also known as PTF10cwr,][]{10gx,bluedeath} and PTF12dam \citep{12dam},
have increased the knowledge base on these highly energetic events and the recent \protect \hbox {PS1}\ discoveries of PS1-10ky, PS1-10awh \citep{10kyawh}, PS1-10bzj \citep{lun} and PS1-11ap \citep{11ap} are supporting and expanding progenitor and physical explosion mechanism theories.
The highest redshift discoveries ($z>1.5$) from PS1 of \cite{berger} and from the Supernova Legacy Survey \citep[SNLS,][]{2013arXiv1310.0470H} illustrate that SLSNe can be spectroscopically followed at significantly higher redshifts than Type Ia SNe (SNe\,Ia).
\cite{bluedeath} used the identification of narrow Mg\,{\sc ii} $\lambda\lambda$2796,2803 absorption lines from foreground gas to place robust lower limits on the redshift of these SLSNe, which immediately provided an estimate of the enormous luminosities. In a few cases the redshifts of the Mg\,{\sc ii} absorption exactly matches the emission lines of the host galaxy, confirming the reasonable assumption that the Mg\,{\sc ii} absorption arises in the host galaxy itself.
The redshifts derived by \cite{bluedeath} and then by \cite{10kyawh} using this method find peak absolute SLSNe magnitudes of $M_{u}\simeq-22\pm0.5$\,mag and total radiated energies $\gtrsim10^{51}$\,erg, making them substantially more luminous than any other SN-type events.
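As a rough order-of-magnitude cross-check of these figures (our own illustrative arithmetic, not a calculation from the cited papers), a peak absolute magnitude of $M \simeq -22$, treated as approximately bolometric and sustained for a few tens of days, indeed implies a radiated energy of order $10^{51}$\,erg:

```python
# Illustrative energetics check; the IAU zero-point luminosity and the
# assumed 50-day effective duration are our own choices.
L0 = 3.0128e35        # erg/s, luminosity corresponding to M_bol = 0
M_peak = -22.0        # peak absolute magnitude, treated as roughly bolometric

L_peak = L0 * 10 ** (-0.4 * M_peak)   # ~2e44 erg/s
t_eff = 50 * 86400.0                  # ~50 days of near-peak emission, in s
E_rad = L_peak * t_eff                # ~8e50 erg, i.e. of order 1e51 erg

print(f"L_peak ~ {L_peak:.1e} erg/s, E_rad ~ {E_rad:.1e} erg")
```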
\begin{table*}
\caption{\protect \hbox {PS1}\ Medium Deep Field Centres. }
\label{table:fields}
\begin{tabular}{lrr}
\hline
\hline
{\bf Field} & {\bf RA (degrees, J2000)} & {\bf Dec (degrees, J2000)} \\
\hline
MD00 & 10.675 & $ 41.267$ \\
MD01 & 35.875 & $ -4.250$ \\
MD02 & 53.100 & $-27.800$ \\
MD03 & 130.592 & $ 44.317$ \\
MD04 & 150.000 & $ 2.200$ \\
MD05 & 161.917 & $ 58.083$ \\
MD06 & 185.000 & $ 47.117$ \\
MD07 & 213.704 & $ 53.083$ \\
MD08 & 242.787 & $ 54.950$ \\
MD09 & 334.188 & $ 0.283$ \\
MD10 & 352.312 & $ -0.433$ \\
MD11 & 270.000 & $ 66.561$ \\
\hline
\end{tabular}
\medskip
\end{table*}
\begin{table*}
\caption{\protect \hbox {PS1}\ Medium Deep Survey, typical cadence. FM$\pm$3 designates 3 nights on either side of Full Moon.}
\label{table:cadence}
\begin{tabular}{ccc}
\hline
\hline
{\bf Night} & {\bf Filter} & {\bf Exposure Time} \\
\hline
1 & \ensuremath{g_{\rm P1}} \& \ensuremath{r_{\rm P1}} & 8$\times$113s each\\
2 & \ensuremath{i_{\rm P1}} & 8$\times$240s \\
3 & \ensuremath{z_{\rm P1}} & 8$\times$240s \\
repeats... & . . . & . . . \\
FM$\pm3$ & \ensuremath{y_{\rm P1}} & 8$\times$240s \\
\hline
\end{tabular}
\medskip
\end{table*}
Despite this common lower limit, differences in the photometric and spectroscopic evolution of the observed SLSNe suggests a number of progenitor possibilities.
\cite{10gx} find iron and other features normally associated with Type Ic SNe in the spectra of SN2010gx\ at $30-50$ days after peak showing that the transient evolved to resemble an energetic Type Ic SN but on a much slower timescale. \cite{11xk} present data on another five such objects at redshifts 0.1 - 0.2 with detailed modelling suggesting that the explosions are simply `normal' Type Ic SNe with an additional power source providing a boost to the luminosity. The best fitting models presented by \cite{11xk} are those where a fast rotating neutron star (magnetar) provides the extra energy required, an idea proposed and developed by \cite{pulsara}, \cite{pulsarb} and \cite{magn1}. Other power sources, such as the radioactive decay of $^{56}$Ni \citep{arnett}, are also explored but could not be reconciled with the light curves of SLSNe.
These kinds of models were used by \cite{12dam} and \cite{11ap} to explain the slower photometric evolution of the SLSNe PTF12dam\ and PS1-11ap, where again the magnetar models gave the most satisfactory fits to the data. Prior to the discovery of these two SLSNe, however, the only well-studied object of this slowly evolving type was SN2007bi\ \citep{07bi,DY}, which was believed to have been the result of a pair-instability supernova \citep[PISN,][]{1stpisn, 2ndpisn, 3rdpisn, 4thpisn}.
\cite{snsh} suggest the interaction of the SN shock with a dense, H-poor circumstellar medium as a possible mechanism for producing the observed features of the light curves and spectra. This theory also accounts for the Ic-like features and the unusually high luminosities associated with the aforementioned objects.
Thus we will refer to these two subclasses of SLSNe, those sharing the properties of the \cite{bluedeath} sample and those that display the prolonged light curve evolution of SN2007bi, as SLSNe-Ic and slowly evolving SLSNe-Ic respectively.
A number of discoveries of a completely separate class of SLSNe that display strong H emission similar to the Type II classes of SNe have also been made. SN2006gy \citep{06gya,06gyb} and SN2003ma \citep{03ma} are examples of this class, where the observed features are likely produced by the interaction between an energetic SN explosion and a very dense circumstellar medium.
Any objects of this class will be referred to here as SLSNe-II \citep{gal-yam}.
In contrast to the SLSNe-Ic, SN2006gy and SN2003ma occurred in bright host galaxies, suggesting that
SLSNe-II do not follow the apparent trend visible for SLSNe Ic which have, almost exclusively, dwarf hosts.
An important feature of the known SLSNe-Ic sample is the noticeable preference for faint host galaxies. This trend has been apparent right from the early discoveries, with
a faint host galaxy no brighter than $M_{r}\sim-18$ found for SN2005ap \citep{05ap} and an upper limit of $-18.1$ mag set at the pre-explosion location of SCP06F6 \citep{scp06f6}. \cite{10gxgal} and \cite{bluedeath} found a faint dwarf host for SN2010gx in archive SDSS images (SDSS J112546.72-084942.0), with an absolute magnitude of $M_r\sim-18$, and a more recent study further refined this value to $M_g=-17.42\pm0.17$ \citep{janet}.
\cite{DY} also calculated an absolute magnitude for the host of SN2007bi using SDSS archive images (SDSS J131920.14+085543.7) and found an $M_{B}\sim-16.4$.
A similar trend has been noted with virtually all subsequent discoveries \citep{bluedeath,10kyawh,10gx,06oz,scp06f6, janet,lun,2014arXiv1405.1325N}, which
led \cite{2013arXiv1311.0026L} to study a large sample of host galaxies and suggest that they are similar to the hosts of Gamma-Ray Bursts (GRBs). An exception to this apparent trend is one of the highest-redshift events reported so far.
PS1-11bam \citep{berger}, which has $z=1.55$ derived from strong absorption of both Fe\,{\sc ii} and Mg\,{\sc ii} and the detection of [OII] 3727\AA\ from the host galaxy in emission, has the most luminous host discovered so far for a SLSN-Ic,
with a near UV absolute magnitude of M$_{\rm NUV} \simeq -20.3$.
However the host is still some 2 magnitudes fainter than the SN itself.
In summary, the currently known SLSNe-Ic sample typically appear to be $>2-4$ mag brighter than their hosts
\citep[see][for a compilaton of host magnitudes]{2013arXiv1311.0026L}
indicating that a simple way of isolating them in higher redshift
searches could be to target transients with either no host detected or with a significant difference between total host luminosity and peak SLSN-Ic magnitude. It is striking that
no SLSNe-Ic erupting in bright galaxies have been uncovered by any previous wide-field survey or galaxy targeted low redshift searches. Although one might argue there is always a preference for observers to take spectra of isolated transients to avoid galaxy contamination effects, the very large
sample of low redshift supernovae now classified do not contain any examples of SLSNe Ic
in galaxies close to $L^{\star}$ \citep[the characteristic luminosity in the Schechter luminosity function for galaxies, approximately corresponding to $M_{B}\simeq-21$][]{1976ApJ...203..297S}. This is
somewhat surprising since the bulk of stellar mass, and star formation, occurs in galaxies close
to $L^{\star}$.
The focus of this paper is to use this apparent preference for dwarf galaxy hosts as a method of finding SLSNe-Ic in the first year of the \protect \hbox {PS1}\ Medium Deep Survey (MDS), to quantify their numbers in a magnitude limited survey and to approximately estimate the volumetric rates between the redshift range of $z = 0.3 - 1.4$. \cite{quimbrate} previously estimated the SLSN-Ic rate to be $32^{+77}_{-26}$ events Gpc$^{-3}$yr$^{-1}h^{3}_{71}$ at a redshift of $z\sim0.2$ (although based on 1 event) and the SLSN-II rate to be $151^{+151}_{-82}$ events Gpc$^{-3}$yr$^{-1}h^{3}_{71}$ at $z \sim 0.15$.
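To put such Gpc$^{-3}$\,yr$^{-1}$ rates in context, the comoving volume of the targeted redshift shell can be sketched numerically (an illustrative flat $\Lambda$CDM calculation assuming $H_0 = 71$ km\,s$^{-1}$\,Mpc$^{-1}$ and $\Omega_m = 0.27$; a real rate measurement must also fold in the survey footprint, control time and detection efficiencies):

```python
import numpy as np

c_km_s = 2.998e5        # speed of light, km/s
H0, Om = 71.0, 0.27     # assumed flat LambdaCDM parameters

def comoving_distance(z, n=10000):
    """Line-of-sight comoving distance in Mpc, via midpoint-rule integration."""
    dz = z / n
    zz = (np.arange(n) + 0.5) * dz
    Ez = np.sqrt(Om * (1.0 + zz) ** 3 + (1.0 - Om))
    return (c_km_s / H0) * np.sum(dz / Ez)

def shell_volume_gpc3(z_lo, z_hi):
    """Full-sky comoving volume of the shell z_lo < z < z_hi, in Gpc^3."""
    d_lo, d_hi = comoving_distance(z_lo), comoving_distance(z_hi)
    return 4.0 / 3.0 * np.pi * (d_hi ** 3 - d_lo ** 3) / 1e9

print(f"{shell_volume_gpc3(0.3, 1.4):.0f} Gpc^3 over the full sky")
```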
At the beginning of the PS1 survey we used reference images which were made
of a small number ($\sim8$) of single input images from a
good night. As the survey has progressed we have been able to
stack the images to build much deeper template and stack images.
However, at the beginning of the PS1 survey and during the first year of operations, we used reference stacks which reached limiting apparent magnitudes of $\sim$23.5 in the \ensuremath{g_{\rm P1}}\ensuremath{r_{\rm P1}}\ensuremath{i_{\rm P1}}-bands.
Hence SNe brighter than magnitude 22 with no visible host brighter than 23.5 were immediate candidates for these exciting phenomena. This paper focuses on all hostless transients discovered during the first year of the \protect \hbox {PS1}\ survey operations.
\section{The \protect \hbox {PS1}\ Medium Deep Survey}
\label{sec:observations}
\begin{figure*}
\begin{center}$
\begin{array}{cc}
\includegraphics[scale=0.3,angle=270]{figs/r_Ia.eps} &
\includegraphics[scale=0.3,angle=270]{figs/z_Ia.eps} \\
\includegraphics[scale=0.3,angle=270]{figs/r_CC.eps} &
\includegraphics[scale=0.3,angle=270]{figs/r_unk.eps}
\end{array}$
\end{center}
\caption{Magnitude (\ensuremath{r_{\rm P1}} band) and redshift distributions of the Type Ia SNe orphans and magnitude distributions of the core-collapse orphans and the miscellaneous orphans. The classifications are split into two mutually exclusive classes, spectroscopic (red) classifications from a variety of telescopes and photometric (green) classifications using the \textsc{soft} and \textsc{psnid} algorithms from Rodney \& Tonry (2009) and Sako et al. (2008, 2011). Note that in the bottom left panel there are 16 Photometric CCSNe, whereas Table\,\ref{table:CC?} has 17. This is due to PS1-11ag not having \ensuremath{r_{\rm P1}}-band points at peak, hence we do not plot it.}
\label{fig:Ia}
\end{figure*}
The \protect \hbox {PS1}\ system is a high-\'{e}tendue wide-field imaging system, designed for dedicated survey observations. The system is installed on the peak of Haleakala on the island of Maui in the Hawaiian island chain.
The telescope has a 1.8-m diameter primary mirror and the gigapixel camera (GPC1) located at the $f/4.4$ cassegrain focus consists of sixty 4800$\times$4800 pixel detectors (pixel scale 0.26$''$) giving a field of view of 3.3$^{\circ}$ diameter.
Routine observations are conducted remotely, from the Waiakoa Laboratory in Pukalani. A more complete description of the \protect \hbox {PS1}\ system, both hardware and software, is provided by \cite{PS1_system}. The survey philosophy and execution strategy are described in Chambers et al. (in preparation).
The \protect \hbox {PS1}\ observations are obtained through a set of five broadband filters, which we have designated as \ensuremath{g_{\rm P1}}, \ensuremath{r_{\rm P1}}, \ensuremath{i_{\rm P1}}, \ensuremath{z_{\rm P1}}, and \ensuremath{y_{\rm P1}}. Although the filter system for \protect \hbox {PS1}\ has much in common with that used in previous surveys, such as SDSS \citep{SDSS}, there are important differences. The \ensuremath{g_{\rm P1}}\ filter extends 20~nm redward of $g_{SDSS}$, paying the price of 5577\AA\ sky emission for greater sensitivity and lower systematics for photometric redshifts, and the \ensuremath{z_{\rm P1}}\ filter is cut off at 930~nm, giving it a different response than the detector response which defined $z_{SDSS}$. SDSS has no corresponding \ensuremath{y_{\rm P1}}\ filter. Further information on the passband shapes is described in \cite{PS_lasercal}.
The \protect \hbox {PS1}\ photometric system and its response are covered in detail in \cite{tonry12}. Photometry is in the ``natural'' \protect \hbox {PS1}\ system, $m=-2.5\mathrm{log}(flux)+m'$, with a single zeropoint adjustment $m'$ made in each band to conform to the AB magnitude scale.
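Converting between instrumental flux and natural-system magnitude is thus a one-line operation; a minimal sketch (the zeropoint value used here is purely illustrative):

```python
import math

def to_mag(flux, zp):
    """PS1 natural-system magnitude: m = -2.5 log10(flux) + m'."""
    return -2.5 * math.log10(flux) + zp

def to_flux(mag, zp):
    """Inverse relation: flux corresponding to magnitude mag."""
    return 10 ** (-0.4 * (mag - zp))

zp = 25.0  # hypothetical zeropoint m'
# A factor-of-100 drop in flux corresponds to exactly 5 magnitudes:
print(to_mag(1.0, zp) - to_mag(100.0, zp))  # 5.0
```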
This paper uses images and photometry from the \protect \hbox {PS1}\ MDS, the observations of which are described in more detail in \cite{JTwds}, \cite{PS1a} and \cite{scol}.
\protect \hbox {PS1}\ has observed 12 MDS\ fields, but we will only describe data from ten of them (MD01 to MD10). MD00 is centered on M31 and we have not been searching systematically for SNe in this field behind Andromeda. MD11 was observed for a short period in 2010-11, but has since been dropped from the MDS\ schedule. The MD field centres and the exposure times in the 5 filters are listed in Tables \ref{table:fields} and \ref{table:cadence}. Observations of between 3-5 MD fields are taken each night and the filters are cycled through in the following pattern : \ensuremath{g_{\rm P1}}\ and \ensuremath{r_{\rm P1}}\ in the same night (dark time), followed by \ensuremath{i_{\rm P1}}\ and \ensuremath{z_{\rm P1}}\ on the subsequent second and third night respectively. Around full moon only \ensuremath{y_{\rm P1}}\ data are taken. Any one epoch consists of 8 dithered exposures of either $8\times113$s for \ensuremath{g_{\rm P1}}\ and \ensuremath{r_{\rm P1}}\ or $8\times240$s for the other three, giving nightly stacked images of 904s and 1920s duration.
Images obtained by the \protect \hbox {PS1}\ system are processed through the Image Processing Pipeline (IPP) \citep{PS1_IPP}, on a computer cluster at the Maui High Performance Computer Center (MHPCC). The pipeline runs the images through a succession of stages including device ``de-trending'', a flux-conserving warping to a sky-based image plane, masking and artifact location. De-trending involves bias and dark correction and flatfielding using white light flatfield images from a dome screen, in combination with an illumination correction obtained by rastering sources across the field of view. After determining
an initial astrometric solution the flat-fielded images were then warped onto the tangent plane of the sky using a flux conserving algorithm. The plate scale for the warped images was originally set at 0.200 arcsec/pixel,
but has since been changed to 0.25 arcsec/pixel in what is known internally as the V3 tesselation for the MD fields.
Bad pixel masks are applied to the individual images and carried through the stacking stage to give the ``nightly stacks'' of 904s and 1920s total duration.
\begin{table*}
\begin{center}
\caption{28 spectroscopically confirmed \protect \hbox {PS1}\ SNIa. For the objects marked with an `*', the spectral classifications for these as SNe Ia can be found in Rest et al. (2014). The MD06 object PS1-11acn is designated as PTF11dws in the cited source of classification.}
\label{table:Ia!}
\begin{tabular}{lcccccccc}
\hline
\hline
{\bf Field} & {\bf PS1 ID} & {\bf RA (deg, J2000)} & {\bf Dec (deg, J2000)} & {\bf \emph{z}$_{\bf SPEC}$} & {\bf Peak \emph{r}$_{\bf P1}$} & {\bf Telescope} & {\bf Date} & {\bf Ref.} \\
\hline
MD01 & PS1-10nu & 36.8001 & -4.5347 & 0.065 & 18.276 (0.004) & Magellan & 14/08/2010 & * \\
MD01 & PS1-11cn & 35.7730 & -3.6101 & 0.250 & 20.866 (0.028) & GN & 31/01/2011 & * \\
MD03 & PS1-1000026 & 129.0524 & 44.0071 & 0.140 & 20.370 (0.025) & WHT & 23/02/2010 & \cite{val1a} \\
MD03 & PS1-10bka & 131.6802 & 44.0035 & 0.247 & 22.179 (0.096) & GN & 31/01/2011 & * \\
MD03 & PS1-10bzt & 132.2495 & 44.8656 & 0.420 & 22.291 (0.224) & MMT & 28/12/2010 & * \\
MD04 & PS1-10iv & 150.5018 & 2.0604 & 0.369 & 21.478 (0.055) & GN & 14/05/2010 & * \\
MD04 & PS1-10l & 151.2314 & 2.5908 & 0.370 & 23.435 (0.201) & Magellan & 20/01/2010 & * \\
MD04 & PS1-11s & 150.7889 & 2.143 & 0.400 & 22.251 (0.091) & Magellan & 12/01/2011 & * \\
MD04 & PS1-11t & 150.5261 & 2.0877 & 0.450 & 21.863 (0.093) & Magellan & 12/01/2011 & * \\
MD04 & PS1-11p & 148.7919 & 1.7302 & 0.480 & 22.372 (0.132) & Magellan & 12/01/2011 & * \\
MD04 & PS1-11bh & 149.6284 & 2.8717 & 0.350 & 21.830 (0.072) & Magellan & 12/01/2011 & * \\
MD05 & PS1-10ix & 162.0978 & 57.1481 & 0.381 & 21.950 (0.283) & GN & 14/05/2010 & * \\
MD06 & PS1-11acn & 184.7782 & 47.3557 & 0.150 & 20.079 (0.010) & Keck & 13/06/2011 & \cite{galatel} \\
MD06 & PS1-10kj & 183.5271 & 46.9923 & 0.350 & 22.136 (0.106) & MMT & 18/06/2010 & * \\
MD06 & PS1-11jo & 184.0016 & 47.9204 & 0.330 & 21.632 (0.045) & MMT & 23/02/2011 & * \\
MD06 & PS1-11xw & 184.9750 & 48.1402 & 0.270 & 21.318 (0.042) & MMT & 11/06/2011 & * \\
MD06 & PS1-11yr & 186.5881 & 46.5959 & 0.530 & 22.573 (0.131) & GN & 11/06/2011 & * \\
MD07 & PS1-10ig & 211.8623 & 53.3429 & 0.260 & 20.891 (0.035) & MMT & 04/04/2010 & * \\
MD07 & PS1-10iy & 214.1196 & 54.0535 & 0.443 & 22.171 (0.124) & GN & 15/05/2010 & * \\
MD07 & PS1-10iw & 214.4605 & 52.8010 & 0.447 & 21.845 (0.092) & GN & 15/05/2010 & * \\
MD07 & PS1-10kf & 212.9905 & 52.0718 & 0.450 & 22.188 (0.133) & MMT & 18/06/2010 & * \\
MD07 & PS1-10kv & 212.6639 & 53.9895 & 0.530 & 22.242 (0.148) & GN & 07/07/2010 & * \\
MD07 & PS1-11zd & 214.6670 & 54.1830 & 0.100 & 19.214 (0.006) & WHT & 08/06/2011 & * \\
MD08 & PS1-10jz & 241.7032 & 54.9809 & 0.550 & 22.283 (0.145) & MMT & 18/06/2010 & * \\
MD08 & PS1-10jv & 244.4487 & 55.3022 & 0.360 & 21.586 (0.057) & MMT & 17/06/2010 & * \\
MD10 & PS1-10bjz & 353.2949 & -1.2353 & 0.310 & 21.503 (0.175) & MMT & 10/12/2010 & * \\
MD10 & PS1-10byj & 353.3821 & 0.1340 & 0.511 & 22.136 (0.118) & GN & 17/12/2010 & * \\
MD10 & PS1-10axm & 353.3285 & -0.9505 & 0.510 & 22.274 (0.224) & GN & 16/10/2010 & * \\
\hline
\end{tabular}
\medskip
\end{center}
\end{table*}
\begin{table*}
\begin{center}
\caption{48 plausible \protect \hbox {PS1}\ SNIa, classified using the \textsc{soft} and \textsc{psnid} photometric classification codes (Rodney \& Tonry 2009; Sako et al. 2008, 2011). The values in the \textsc{soft} and \textsc{psnid} columns represent the probability that the object is classified as a SNIa.}
\label{table:Ia?}
\begin{tabular}{lcccccccc}
\hline
\hline
{\bf Field} & {\bf PS1 ID} & {\bf RA (deg, J2000)} & {\bf Dec (deg, J2000)} & {\bf SOFT} & {\bf PSNID} & {\bf \emph{z}$_{\bf PHOT}$} & {\bf \emph{dz}} & {\bf Peak \emph{r}$_{\bf P1}$} \\
\hline
MD01 & PS1-10aat & 34.6005 & -4.2033 & - & 0.999 & - & - & 22.532 (0.172) \\
MD01 & PS1-10bcd & 36.4632 & -5.4049 & 0.996 & 0.974 & 0.480 & 0.028 & 22.289 (0.078) \\
MD01 & PS1-10blx & 35.2726 & -3.8241 & 0.970 & 0.987 & 0.280 & 0.028 & 22.678 (0.181) \\
MD01 & PS1-10zv & 36.5108 & -4.1125 & - & 1.000 & - & - & 21.421 (0.041) \\
MD01 & PS1-10zz & 36.8222 & -3.2903 & 0.981 & 0.946 & 0.340 & 0.045 & 22.594 (0.104) \\
MD02 & PS1-10afj & 52.6562 & -28.3717 & - & 0.987 & 0.260 & 0.028 & 22.948 (0.192) \\
MD02 & PS1-10bxr & 54.1715 & -28.3673 & 0.890 & 0.998 & 0.460 & 0.117 & 22.483 (0.227) \\
MD03 & PS1-10ayn & 131.2068 & 43.8823 & 0.999 & 0.997 & 0.520 & 0.057 & 22.738 (0.132) \\
MD03 & PS1-10bkm & 129.6447 & 44.8684 & - & 1.000 & - & - & 21.510 (0.051) \\
MD03 & PS1-10cbs & 131.8921 & 44.5374 & 0.997 & 1.000 & 0.480 & 0.028 & 22.292 (0.116) \\
MD03 & PS1-11bw & 129.8787 & 43.9868 & 0.998 & 0.996 & 0.560 & 0.117 & 22.648 (0.178) \\
MD03 & PS1-11ex & 130.1915 & 43.8002 & 0.941 & 0.977 & 0.580 & 0.117 & 23.223 (0.165) \\
MD03 & PS1-11gs & 131.3379 & 44.6013 & 1.000 & 0.923 & 0.760 & 0.028 & - \\
MD04 & PS1-11du & 149.6446 & 1.2582 & 0.876 & 0.819 & 0.620 & 0.117 & 23.599 (0.295) \\
MD04 & PS1-11r & 150.2982 & 1.5754 & 0.954 & 1.000 & 0.400 & 0.045 & 22.357 (0.105) \\
MD05 & PS1-10uu & 162.7694 & 58.4253 & 0.963 & 0.865 & 0.400 & 0.126 & 22.093 (0.122) \\
MD05 & PS1-10wb & 159.8594 & 57.2051 & 0.998 & 1.000 & 0.480 & 0.146 & 21.748 (0.296) \\
MD05 & PS1-10jx & 160.7420 & 56.9001 & - & 0.955 & - & - & 21.568 (0.055) \\
MD05 & PS1-11bp & 161.9585 & 57.2893 & 0.998 & 0.998 & 0.480 & 0.128 & 21.911 (0.112) \\
MD05 & PS1-11oh & 162.7667 & 59.1037 & - & 0.981 & - & - & 21.465 (0.067) \\
MD06 & PS1-11ql & 183.4322 & 46.2902 & - & 0.97 & - & - & 21.357 (0.054) \\
MD06 & PS1-11tc & 186.4050 & 46.7988 & - & 0.844 & - & - & 22.802 (0.316) \\
MD06 & PS1-10qu & 186.4676 & 47.8554 & 0.999 & 1.000 & 0.380 & 0.028 & 22.129 (0.103) \\
MD06 & PS1-10qv & 184.0243 & 47.3677 & 1.000 & 1.000 & 0.280 & 0.028 & 20.977 (0.049) \\
MD06 & PS1-10rj & 184.9626 & 47.7476 & 1.000 & 1.000 & 0.300 & 0.028 & 21.624 (0.065) \\
MD06 & PS1-10tb & 185.7344 & 46.9296 & 0.877 & 0.986 & 0.480 & 0.172 & 22.434 (0.134) \\
MD06 & PS1-10wn & 186.7645 & 46.8898 & 0.949 & 0.905 & 0.460 & 0.057 & 22.582 (0.219)\\
MD06 & PS1-10xc & 185.3965 & 46.0963 & 0.992 & 0.914 & 0.520 & 0.117 & 22.671 (0.120)\\
MD06 & PS1-10xe & 186.4073 & 46.7506 & 1.000 & 1.000 & 0.500 & 0.057 & 22.678 (0.127) \\
MD07 & PS1-11nb & 214.9577 & 53.3471 & - & 1.000 & - & - & 20.721 (0.030) \\
MD07 & PS1-10lb & 212.4660 & 53.7906 & 0.866 & 0.998 & 0.480 & 0.122 & 21.827 (0.081) \\
MD08 & PS1-10acd & 243.1638 & 55.0691 & 1.000 & 1.000 & 0.500 & 0.108 & 22.410 (0.181) \\
MD08 & PS1-10aex & 244.6181 & 55.2436 & 0.942 & 0.997 & 0.420 & 0.082 & 22.157 (0.131) \\
MD08 & PS1-10afb & 243.1847 & 56.0324 & 1.000 & 0.999 & 0.500 & 0.045 & 21.903 (0.077) \\
MD08 & PS1-10afq & 242.3236 & 56.4400 & 1.000 & 1.000 & 0.200 & 0.045 & 21.137 (0.054) \\
MD08 & PS1-10lh & 243.0192 & 56.1721 & 1.000 & 0.998 & 0.500 & 0.082 & 22.441 (0.168) \\
MD08 & PS1-10nf & 243.4779 & 54.1900 & 0.989 & 1.000 & 0.380 & 0.028 & 22.064 (0.149) \\
MD08 & PS1-10np & 240.4345 & 55.0052 & 1.000 & 0.954 & 0.540 & 0.100 & 22.521 (0.195) \\
MD08 & PS1-10zo & 240.3358 & 55.1399 & 0.988 & 0.997 & 0.520 & 0.146 & 21.927 (0.091) \\
MD09 & PS1-10aac & 334.6540 & 0.6184 & 1.000 & 1.000 & 0.600 & 0.028 & 21.970 (0.106) \\
MD09 & PS1-10afi & 333.5490 & 0.7323 & 0.964 & 0.957 & 0.500 & 0.072 & 22.868 (0.170) \\
MD09 & PS1-10axb & 332.6959 & 0.1951 & 0.947 & 0.966 & 0.520 & 0.141 & 22.663 (0.218) \\
MD09 & PS1-10ayl & 333.8780 & -0.8293 & 0.983 & 0.985 & 0.460 & 0.072 & 22.956 (0.210) \\
MD09 & PS1-10ls & 333.9379 & 0.4928 & 0.999 & 0.990 & 0.420 & 0.057 & 22.408 (0.236) \\
MD09 & PS1-10lw & 333.2522 & 0.8601 & 0.979 & 0.959 & 0.500 & 0.028 & 21.869 (0.058) \\
MD09 & PS1-10mi & 334.6715 & 0.4075 & 1.000 & 0.997 & 0.360 & 0.161 & 22.440 (0.211) \\
MD10 & PS1-10act & 353.0132 & 0.6024 & 1.000 & 1.000 & 0.240 & 0.028 & 20.857 (0.033) \\
MD10 & PS1-10lp & 351.3700 & -0.1235 & 1.000 & 1.000 & 0.360 & 0.045 & 21.646 (0.059) \\
\hline
\end{tabular}
\medskip
\end{center}
\end{table*}
\subsection{Image Subtraction Pipelines}
We have had two parallel difference image pipelines running since the start of full PS1 science operations in May 2010. The \textsc{photpipe} pipeline \citep{Rest} is hosted at Harvard/CfA and this is the primary source of the final photometry
presented in this paper. We briefly outline the process below, but the reader is referred to \cite{PS1a}
for a full description of this pipeline.
This pipeline produces difference images from the MD nightly stacks compared to a
deep, good image quality reference made from pre-season data. Forced-centroid, point-spread-function (PSF) fitting photometry is applied
to the difference images, with the PSF derived from reference stars in each nightly stack. The zeropoints were
measured for the AB system from comparison with field stars in the SDSS catalog. We propagate
the Poisson error on the pixel values through the resampling and difference imaging. Since this does not take the covariance between neighbouring pixels
into account, we also do forced photometry in apertures at random positions and calculate
the standard deviation of the ratio between the flux and the error. We then multiply all errors by
the standard deviation to correct for the covariance. Nightly difference images typically yield 3$\sigma$
limiting magnitudes of $\sim$23.5 mag in \ensuremath{g_{\rm P1}}, \ensuremath{r_{\rm P1}}, \ensuremath{i_{\rm P1}}, and \ensuremath{z_{\rm P1}}.
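The error-rescaling step described above can be illustrated with a short sketch. Everything below is invented for illustration (a toy difference image whose naive error map underestimates the true pixel noise by a constant factor); it shows only the technique of measuring the scatter of forced aperture fluxes, normalised by their propagated errors, at random source-free positions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy difference image: true pixel noise is 1.3, but the naive error map
# (standing in for the propagated Poisson errors) assumes 1.0. This mimics
# errors underestimated because pixel covariance is not accounted for.
diff_image = rng.normal(0.0, 1.3, size=(200, 200))
naive_err_map = np.ones_like(diff_image)

def aperture_flux(image, err_map, x, y, radius=3):
    """Forced photometry in a circular aperture: summed flux and the
    naively propagated (quadrature-summed) error."""
    yy, xx = np.ogrid[:image.shape[0], :image.shape[1]]
    mask = (xx - x) ** 2 + (yy - y) ** 2 <= radius ** 2
    flux = image[mask].sum()
    err = np.sqrt((err_map[mask] ** 2).sum())
    return flux, err

# Forced photometry in apertures at random (source-free) positions.
ratios = []
for _ in range(500):
    x, y = rng.integers(10, 190, size=2)
    flux, err = aperture_flux(diff_image, naive_err_map, x, y)
    ratios.append(flux / err)

# The standard deviation of flux/error would be 1 for correctly estimated
# errors; its measured value is the factor by which all errors are scaled.
scale = float(np.std(ratios))
corrected_err_map = naive_err_map * scale
```

In this toy the recovered scale factor is close to the factor (1.3) by which the naive errors were underestimated, which is the quantity the rescaling is designed to capture.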
In parallel, the \protect \hbox {PS1}\ system has developed the Transient Science Server
(TSS) which uses the difference imaging and photometric data from the IPP running in
Hawaii. This process was initially described in \cite{2012Natur.485..217G} and is summarised and expanded
upon here for completeness.
The TSS automatically takes the nightly stacks created by the IPP
in the MHPCC, creates difference images with manually created reference
images, carries out PSF fitting photometry on the difference images
and returns catalogues of variables and transient candidates. In the current version
forced photometry is not implemented. Mask
and variance arrays are carried forward at each stage of the IPP
processing. Photometric and astrometric measurements performed by the
IPP system are described in \cite{PS1_photometry} and
\cite{PS1_astrometry} respectively. Individual detections made on the
difference images are copied nightly from the MHPCC and ingested into a MySQL database (located at Queen's University)
after an initial culling of objects based on the detection of saturated, masked or
suspected defective pixels within the PSF area. Sources detected on
the nightly difference images are assimilated into potential real
astrophysical transients based on a set of quality tests. The TSS
requires more than 3 quality detections within the last 7 observations
of the field, including detections in more than one filter, and an RMS
scatter in the positions of $\leq0.5"$. Each of these quality
detections must be of $5\sigma$ significance (defined as an instrumental
magnitude error $<0.2^{m}$) {\em and} have a Gaussian morphology
($XY_{moments} < 1.2$). Transient candidates which pass this automated
filtering system are promoted for human screening, which currently
runs at around 10\% efficiency (i.e. 10\% of the transients promoted
automatically are judged to be real after human screening).
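The TSS promotion criteria above can be expressed as a simple filter. The sketch below is illustrative only: the detection-record fields are invented, and the thresholds mirror the cuts quoted in the text rather than the actual TSS implementation:

```python
import statistics
from dataclasses import dataclass

@dataclass
class Detection:
    mjd: float          # epoch of the detection
    filter_name: str    # filter in which the detection was made
    ra: float           # position, degrees
    dec: float          # position, degrees
    mag_err: float      # instrumental magnitude error
    xy_moments: float   # morphology statistic (Gaussian-like if < 1.2)
    flagged: bool       # saturated/masked/defective pixels in the PSF area

def is_quality(d):
    """A 'quality' detection: 5-sigma significance (instrumental magnitude
    error < 0.2 mag), Gaussian morphology, no bad pixels in the PSF area."""
    return (not d.flagged) and d.mag_err < 0.2 and d.xy_moments < 1.2

def promote_candidate(detections):
    """Promotion criteria applied to the last 7 observations of the field:
    more than 3 quality detections, in more than one filter, with an RMS
    positional scatter of <= 0.5 arcsec."""
    recent = sorted(detections, key=lambda d: d.mjd)[-7:]
    good = [d for d in recent if is_quality(d)]
    if len(good) <= 3:
        return False
    if len({d.filter_name for d in good}) < 2:
        return False
    # RMS positional scatter in arcsec (small-angle approximation,
    # neglecting the cos(dec) factor for this sketch).
    ra0 = statistics.mean(d.ra for d in good)
    dec0 = statistics.mean(d.dec for d in good)
    rms = statistics.mean((d.ra - ra0) ** 2 + (d.dec - dec0) ** 2
                          for d in good) ** 0.5 * 3600.0
    return rms <= 0.5
```

Candidates passing this automated filter would then go forward to the human-screening stage described above.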
The overlap with the \textsc{photpipe} system is good, with most high significance, real transients found by both pipelines. Each pipeline has been used to inform the other of the small numbers missed and the reasons why. Within the PS1 TSS, real transients are crossmatched with all available catalogues of astronomical sources in the MDS fields (e.g. SDSS, GSC, 2MASS, APM, Veron AGN, X-ray catalogues) in order to have a first-pass classification of supernovae, variable stars, active galactic nuclei (AGN) and nuclear transients. While the difference imaging runs within the PS1 IPP system in the MHPCC, the TSS database is hosted at Queen's University Belfast. The long term goal is to fully integrate the TSS into the PS1 system in Hawaii.
\begin{table*}
\begin{center}
\caption{12 confirmed \protect \hbox {PS1}\ CCSNe. Of note is the high percentage of SLSNe-Ic confirmed. The two objects marked with an `*' are explored further in this paper. The RA and Dec values are in degrees (J2000). The values in the \textsc{soft} and \textsc{psnid} columns represent the probability that the object is classified as the SN type given in brackets. The numbered references refer to the following papers: [1] McCrum et al. (2014), [2] This paper, [3] Lunnan et al. (2014), [4] Chomiuk et al. (2011), [5] Chornock et al. (2013), [6] Quimby et al. (2013), [7] Quimby et al. (2014). PS1-10afx
has been shown to be a lensed SN Ia, and is included here for completeness of objects, although
we do not use it in any of our rate estimates.}
\label{table:CC!}
\begin{tabular}{lccccccccccc}
\hline
\hline
{\bf Field} & {\bf PS1 ID} & {\bf RA} & {\bf Dec} & {\bf Type} & {\bf \emph{z}$_{\bf SPEC}$} & {\bf SOFT} & {\bf PSNID} & {\bf Peak \emph{r}$_{\bf P1}$} & {\bf Telescope} & {\bf Date} & {\bf Reference} \\
\hline
MD05 & PS1-11ap & 162.1155 & 57.1526 & SLSN Ic & 0.524 & - & 1 (II) & 20.217 (0.017) & NOT & 07/02/2011 & [1] \\
MD05 & PS1-11ad & 164.0876 & 57.6654 & IIn & 0.422 & 1 (II) & 0.992 (II) & 20.935 (0.053) & GN & 15/02/2011 & [2] \\
MD06 & PS1-10pm* & 183.1758 & 46.9915 & SLSN Ic & 1.206 & 1 (II) & 1 (II) & 22.090 (0.141) & GN & 03/06/2010 & [2] \\
MD06 & PS1-11afv & 183.9074 & 48.1801 & SLSN Ic & 1.407 & - & - & 22.211 (0.186) & GN & 09/07/2011 & [3] \\
MD07 & PS1-11yh & 212.7977 & 51.9868 & II & 0.146 & - & - & 21.189 (0.048) & MMT & 05/06/2011 & [2] \\
MD08 & PS1-11tt & 243.1907 & 54.0713 & SLSN Ic & 1.283 & - & - & 22.654 (0.167) & GN & 07/06/2011 & [3] \\
MD09 & PS1-10ky & 333.4076 & 1.2398 & SLSN Ic & 0.956 & - & - & 21.190 (0.077) & GN & 17/07/2010 & [4]\\
MD09 & PS1-10afx & 332.8507 & 0.1621 & Lensed SNIa & 1.388 & 0.999 (Ibc) & - & 23.730 (0.200) & GS & 06/09/2010 & [5,6,7] \\
MD09 & PS1-10ahq & 333.5172 & 1.1084 & Ic & 0.283 & 1 (Ibc) & 1 (Ibc) & 21.474 (0.046) & MMT & 18/10/2010 & [2] \\
MD09 & PS1-10awh & 333.6242 & -0.0676 & SLSN Ic & 0.908 & 1 (II) & - & 21.607 (0.075) & GN & 12/10/2010 & [4] \\
MD10 & PS1-10acl & 352.4529 & -0.2916 & IIn & 0.260 & - & - & 21.313 (0.054) & MMT & 08/10/2010 & [2] \\
MD10 & PS1-10ahf* & 353.1180 & -0.3621 & SLSN Ic & 1.1 & 1(II) & 1 (II) & 22.680 (0.158) & GS & 11/06/2010 & [2] \\
\hline
\end{tabular}
\medskip
\end{center}
\end{table*}
\begin{table*}
\begin{center}
\caption{17 plausible \protect \hbox {PS1}\ CCSNe. The values in the \textsc{soft} and \textsc{psnid} columns represent the probability that the object is classified as the Type II sub-class listed.}
\label{table:CC?}
\begin{tabular}{lccccccc}
\hline
\hline
{\bf Field} & {\bf PS1 ID} & {\bf RA (deg, J2000)} & {\bf Dec (deg, J2000)} & {\bf Type} & {\bf SOFT} & {\bf PSNID} & {\bf Peak \emph{r}$_{\bf P1}$}\\
\hline
MD01 & PS1-10acp & 35.4861 & -3.8369 & IIL & 1 & 1 & 22.699 (0.147) \\
MD01 & PS1-10add & 35.0764 & -4.1370 & IIP & 1 & 1 & 22.170 (0.080) \\
MD03 & PS1-10axq & 130.5385 & 44.5480 & IIL & 1 & 1 & 22.929 (0.154) \\
MD04 & PS1-10dq & 150.2418 & 2.2475 & IIP & 1 & 1 & 22.568 (0.128) \\
MD04 & PS1-11ag & 149.2396 & 3.2529 & IIL & 1 & 1 & - \\
MD04 & PS1-11er & 149.1978 & 2.4177 & IIL & 0.959 & 0.999 & 23.427 (0.340) \\
MD06 & PS1-10sq & 186.3077 & 46.6273 & IIL & 1 & 0.996 & 22.624 (0.133) \\
MD06 & PS1-10vu & 184.9374 & 46.1197 & IIL & 0.999 & 0.804 & 22.642 (0.107) \\
MD07 & PS1-10wk & 211.6645 & 52.4217 & IIP & 1 & 1 & 22.515 (0.140) \\
MD08 & PS1-10acq & 244.6603 & 55.1885 & IIP & 1 & 0.999 & 22.880 (0.331) \\
MD09 & PS1-10aal & 334.1762 & -0.5736 & IIL & 1 & 1 & 22.358 (0.139) \\
MD09 & PS1-10abf & 334.0138 & 1.0291 & IIL & 1 & 1 & 23.431 (0.250) \\
MD09 & PS1-10agf & 334.7398 & -0.1656 & IIP & 0.999 & 0.911 & 22.573 (0.328) \\
MD09 & PS1-10aht & 334.9031 & 1.2136 & IIP & 0.889 & 0.928 & 22.935 (0.201) \\
MD10 & PS1-10acn & 351.7261 & -0.2600 & IIL & 1 & 1 & 22.985 (0.271) \\
MD10 & PS1-10kz & 351.2487 & -0.3529 & II & - & 1 & 20.970 (0.032) \\
MD10 & PS1-10ayg & 350.9872 & -0.3808 & IIL & 1 & 0.837 & 22.649 (0.118) \\
\hline
\end{tabular}
\medskip
\end{center}
\end{table*}
\section{The Transient Sample}
\label{sec:results}
During the period from February 25$^{th}$ 2010 to July 9$^{th}$ 2011, 249 hostless transients or `orphans' were discovered in the \protect \hbox {PS1}\ Medium Deep fields. For the practical purposes of this paper (the reasons will become clear from our scientific motivations), an orphan is defined as an object that is $>3.4''$ away from the centre of a catalogued galaxy or point source brighter than approximately 23.5$^{m}$ (in any of the \ensuremath{g_{\rm P1}}\ensuremath{r_{\rm P1}}\ensuremath{i_{\rm P1}}\ filters in which the transient was detected). This magnitude limit was chosen for two reasons. First, at the beginning of the search period in 2010 the limit of a reference stack was not significantly deeper than the nightly stack, hence the transients which were observed as hostless by definition had no host brighter than $23.5^{m}$ in the specific band in which they were detected. Second, although the reference stacks now reach around 1 magnitude deeper, this limit is still useful as the transients we discuss in this paper are typically brighter than $22 - 22.5^{m}$, hence significantly brighter than their hosts. In many cases deeper images \citep{2013arXiv1311.0026L}, or deep \protect \hbox {PS1}\ stacks \citep{JTwds}, do indeed reveal a host or stellar counterpart.
Deep imaging in the \emph{z}-band with the Subaru telescope revealed hosts for 5 of our orphans with magnitudes listed in the Appendix in Table \ref{table:subaru}.
As can be seen in the table, two of these transients do have hosts brighter than $23.5^{m}$ in the $z$-band. Whilst our definition of `hostless' here is somewhat arbitrary, it is reasonably well defined and, as we shall see, serves the science motivations of this paper well.
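The hostless cut defined above amounts to a simple catalogue crossmatch. The sketch below is illustrative only: the dictionary fields are invented, and a production search would use a proper spherical-geometry matching routine rather than the small-angle approximation used here:

```python
import math

def angular_sep_arcsec(ra1, dec1, ra2, dec2):
    """Small-angle separation in arcsec between two positions given in
    degrees (adequate at the few-arcsec scales relevant here)."""
    dra = (ra1 - ra2) * math.cos(math.radians(0.5 * (dec1 + dec2)))
    ddec = dec1 - dec2
    return math.hypot(dra, ddec) * 3600.0

def is_orphan(transient, catalogue, min_sep=3.4, mag_limit=23.5):
    """True if no catalogued source brighter than mag_limit lies within
    min_sep arcsec of the transient position."""
    for src in catalogue:
        if src["mag"] >= mag_limit:
            continue  # too faint to count as a host
        sep = angular_sep_arcsec(transient["ra"], transient["dec"],
                                 src["ra"], src["dec"])
        if sep <= min_sep:
            return False
    return True
```

A transient passes the cut either because every nearby source is fainter than the limit or because there is simply no catalogued source within the separation radius.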
We set out with the goal of spectroscopically classifying as many as possible of the hostless sample which peaked brighter than approximately $22 - 22.5^{m}$ in any of the \ensuremath{g_{\rm P1}}\ensuremath{r_{\rm P1}}\ensuremath{i_{\rm P1}}\ filters. This magnitude limit was chosen as a practical limit of the largest aperture telescopes (8 metre) to which we had significant access. The primary source was the
Gemini observatory, for which we had UK, US and UH time access although we also used the UK resources of
the 4.2m William Herschel Telescope, and the CfA Harvard access to Magellan and the Multiple Mirror Telescope (MMT).
A combination of the finite time resources available for these spectroscopic programmes, ambient weather conditions, field visibility and scheduling meant that spectra could not be obtained for every transient brighter than our chosen limit. The general PS1 spectroscopic follow-up of transients is also described in \cite{PS1a} and we emphasise that during the period described here, there were several multi-purpose spectroscopic classification and follow-up programmes running at these facilities which combined extensive classification with follow-up of scientifically interesting targets. Due to the ease of observing hostless transients (because of a lack of host galaxy contamination), a significant effort was invested to classify as many as possible. The results of the spectroscopic classification programmes are listed in Tables\,\ref{table:Ia!} and \ref{table:CC!} \citep[virtually all of the Type Ia sample are discussed in the cosmological analysis of][]{PS1a}.
This sample is of course not spectroscopically complete, as illustrated in Fig.\,\ref{fig:Ia}. Our next step was to attempt photometric classification of those SNe for which we did not manage to obtain spectra, but for which we had well sampled and relatively complete light curves. This sample included all the candidates for which we were not able to take spectra and also those which had peak magnitudes too faint for inclusion in the spectroscopic typing programmes.
We did not use this photometric fitting method for selecting targets for spectroscopy, since the lightcurve fitting methods (PSNID and SOFT) require a well sampled and complete lightcurve which obviously is not available when a spectrum needs to be triggered around peak brightness.
As expected,
the distribution for the photometrically classified objects peaks at about a magnitude fainter than that of the spectroscopic sample.
The light curve fitting is described below in Sections\,\ref{Ia} and \ref{ccnse-slsne}.
At a canonical redshift of $z\sim0.2$, the $3.4$ arcsec separation corresponds to a minimum separation of $\sim11$\,kpc.
\cite{sullivan} used a host-association algorithm to ensure that the SNe\,Ia under study were being associated with appropriate host galaxies.
This was important for conclusions within the paper involving specific host galaxy properties, such as the star formation rate (SFR), however
for the purposes of this study the $3.4$ arcsec separation parameter we defined was sufficient.
Our motivation is simply that a transient has no apparent host; we are not concerned with finding the most likely offset galaxy. The two possibilities for these orphans are either that the host is fainter than $\ensuremath{r_{\rm P1}}\sim23.5$ or that the transient has been expelled by a nearby galaxy and has a long enough lifetime that it can travel $>10$\,kpc before some energetic event generates sufficient luminosity to be captured by \protect \hbox {PS1}. Due to the shorter life cycle of CCSNe, runaway transients are likely SNe\,Ia or at least some type of thermonuclear event involving a WD \citep[for example see the transients described in][which are significantly offset from their likely host galaxies]{faintsne}.
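The quoted angular-to-physical conversion can be reproduced with a standard flat $\Lambda$CDM angular diameter distance. The cosmological parameters below ($H_0 = 70$ km\,s$^{-1}$\,Mpc$^{-1}$, $\Omega_m = 0.3$) are assumptions for this sketch, since the adopted cosmology is not quoted at this point in the text:

```python
import math

# Assumed flat LambdaCDM parameters (illustrative values).
H0 = 70.0          # Hubble constant, km/s/Mpc
OMEGA_M = 0.3      # matter density parameter
C_KMS = 299792.458 # speed of light, km/s

def efunc(z):
    """Dimensionless Hubble parameter E(z) for a flat universe."""
    return math.sqrt(OMEGA_M * (1.0 + z) ** 3 + (1.0 - OMEGA_M))

def comoving_distance_mpc(z, n=2000):
    """Line-of-sight comoving distance (c/H0) * int dz'/E(z'), by
    trapezoidal integration, in Mpc."""
    dz = z / n
    s = 0.5 * (1.0 / efunc(0.0) + 1.0 / efunc(z))
    s += sum(1.0 / efunc(i * dz) for i in range(1, n))
    return (C_KMS / H0) * s * dz

def kpc_per_arcsec(z):
    """Proper transverse distance subtended by one arcsec, in kpc."""
    d_a = comoving_distance_mpc(z) / (1.0 + z)  # angular diameter distance
    return d_a * math.radians(1.0 / 3600.0) * 1.0e3

# The 3.4 arcsec hostless cut at z ~ 0.2 corresponds to roughly 11 kpc.
sep_kpc = 3.4 * kpc_per_arcsec(0.2)
```

With these assumed parameters the scale at $z=0.2$ is about 3.3 kpc per arcsec, consistent with the $\sim11$\,kpc minimum separation quoted above.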
The alternative, where the host galaxy is simply too faint to be imaged by \protect \hbox {PS1}, suggests that either the transient is intrinsically bright or the galaxy
intrinsically faint. For example, the host galaxy could be undetected simply because it is at a high redshift which places it beyond the PS1 detection limit. If the host were a typical $L^{\ast}$ galaxy (but undetected due to distance) this would imply the transient has an absolute magnitude $M_{\rm AB}\lesssim-22$. Alternatively the transient could have a typical
core-collapse magnitude of $M_{\rm AB}\sim-18$, in which case an undetected host galaxy is likely an intrinsically
faint dwarf, possibly of low metallicity; either way, this combination has previously been found to be associated with SLSNe-Ic \citep{bluedeath,10kyawh}. The hosts are not always dwarf galaxies, as indicated by the high-\emph{z} discovery of \cite{berger}, but with very few exceptions in the current SLSNe-Ic sample the transients are
at least 2\,mag brighter than their hosts. The small subset of events which do not meet this criterion are all at higher redshifts, possibly indicating trends of metallicity and luminosity in these younger galaxies. This is explored in greater detail in Section\,\ref{sec:discussion}.
As we show below, this search method allows a fairly straightforward way of identifying high-\emph{z}, superluminous transients.
Through the spectroscopic classification programmes discussed above, optical spectra were taken for 40 orphans in total. While we would have liked to be spectroscopically complete to a defined
magnitude limit (around $22^{m}$), spectroscopic facility access and weather constraints
did not allow it. As with all spectroscopic programmes, there is some human preference that
plays a role in target selection. In PS1, we have been particularly looking for transients that
might be at high redshift, which could mean red colours and/or slow rise times. Later in the paper we
discuss an estimate of the volumetric rates of SLSNe and the spectroscopic completeness
plays a role in this. The number of SLSNe in the transient set without spectra (Tables\,\ref{table:CC?} and \ref{table:CC??}) is then the important question which we discuss in Section\,\ref{sec:MC-rates}.
\subsection{Hostless Type Ia supernovae}
\label{Ia}
Of these 40 transients which were spectroscopically confirmed, 28 turned out to be Type Ia SNe at redshifts between approximately $0.2-0.7$ and these are listed in Table \ref{table:Ia!}. All but two of these are already presented in \cite{PS1a}, with one extra from \cite{val1a} and one more which is
the same object as discovered and reported by PTF \citep{galatel} \footnote{We thank Avishay Gal-Yam and Peter Nugent for access to the spectrum to confirm classification.}.
All transients for which no spectra were available (and which had relatively complete light curves) were passed through the \textsc{soft} and \textsc{psnid} photoclassification algorithms
\citep{SOFT,SAKO,SAKO2}. We identified 48 SNe as having light curves matching SNe\,Ia between redshifts 0.2 and 0.7. In order to make these confident Type Ia classifications we initially demanded that {\em both} the \textsc{soft} and \textsc{psnid} algorithms gave a SN\,Ia classification with a probability of $>$ 80$\%$. This resulted in 40 SNe\,Ia, however we found a further 8 that failed to get a secure classification in \textsc{soft} but \textsc{psnid} returned a high probability of being a SN\,Ia. A visual inspection of these light curves suggests to us that they are plausible SNe Ia and all 48 photometrically classified SNe Ia are listed in Table \ref{table:Ia?}.
The redshift values and their associated errors that are listed in this table are output from the \textsc{soft} photoclassification code. The method for generating these numbers is described in \cite{SOFT2}, including a figure comparing the photo-\emph{z} of a test sample of SNe\,Ia against \emph{z} values obtained from spectroscopic data. As can be seen in the paper, the RMS scatter about $z_{SOFT}=z_{SPEC}$ is very small ($\sim0.05$) indicating that the photo-\emph{z} values offer a fair approximation of the actual redshift values of the probable SNe in question.
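The two-stage selection described above can be summarised in a few lines. This is a sketch of the decision logic only: the probability inputs stand in for the SN\,Ia probabilities returned by \textsc{soft} and \textsc{psnid}, and in practice the `plausible' class was confirmed by visual inspection of the light curves:

```python
def classify_ia(p_soft, p_psnid, threshold=0.8):
    """Combine SN Ia probabilities from two light-curve fitters.
    Returns 'secure' when both fitters exceed the threshold, 'plausible'
    when only the second fitter is confident (such cases were checked by
    visual inspection of the light curve), and None otherwise.
    p_soft may be None when the first fitter fails to return a fit."""
    if p_soft is not None and p_soft > threshold and p_psnid > threshold:
        return "secure"
    if p_psnid > threshold:
        return "plausible"
    return None
```

Demanding agreement between two independent fitters keeps the photometric SN\,Ia sample relatively pure, at the cost of deferring single-fitter candidates to visual inspection.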
In summary, through spectroscopy and light curve fitting we find that 76 of the orphan transient sample
are confidently identified as SNe\,Ia.
\subsection{Core-collapse and superluminous supernovae}
\label{ccnse-slsne}
The other 12 transients for which we gathered spectra are listed in Table\,\ref{table:CC!} with their
redshifts and classifications. Of these 12 spectroscopically confirmed SNe, there are four
normal CCSNe, of types II, IIn and Ic.
\cite{10afx} presented the discovery of PS1-10afx at a redshift of $z=1.388$, suggesting it to be a SLSN which is different to the currently known population.
However, this has now been shown to be more likely a normal SN\,Ia lensed by a foreground galaxy
\citep{2013ApJ...768L..20Q, quimblens}. We include it in Table\,\ref{table:CC!} for completeness,
as we had originally identified it as a transient which was not obviously a normal Type Ia supernova, although we do not use it any further in SLSN rate calculations. This leaves seven which are confirmed as SLSNe-Ic, lying at redshifts beyond $z\sim0.5$. These events include PS1-10pm and PS1-10ahf, the nature of which is discussed further in this paper. Detailed analysis of PS1-10ky, PS1-10awh and PS1-11ap can be found in \cite{10kyawh} and \cite{11ap}. The
classifications of PS1-11tt and PS1-11afv are presented in \cite{2013arXiv1311.0026L} (and more details will be given in Lunnan et al. in prep.). This immediately suggests quite a high fraction of
SLSNe, if one could remove the SNe\,Ia from the sample efficiently and early enough.
There are another 45 transients for which we were unable to get spectroscopic confirmation, but have
well sampled and complete light curves which resemble CCSNe rather than SNe\,Ia.
We passed these light curves through the \textsc{psnid} and \textsc{soft} photoclassification algorithms,
finding that 17 were classified as Type II by both algorithms with greater than 80\% confidence. These
are listed in Table\,\ref{table:CC?}, and we propose that they are likely to be core-collapse, Type II SNe given the high confidence light curve fits by both light curve fitters. The other 28 transients had light curves which appeared SN-like but the fitting algorithms gave lower confidence levels for specific fits of SNe\,Ia, Type II or Type Ibc.
These 28 transients are listed in Table \ref{table:CC??}, along with the lower confidence results from
\textsc{psnid} and \textsc{soft}. Additionally,
a selection of these light curves are shown in Fig.\,\ref{fig:CCLC} and show typical trends such as a
single asymmetric peak or an extended, declining plateau. We propose that these are high confidence
SNe but the classification of the light curve as SNe Ia, Ibc or II is uncertain.
Fig.\,\ref{fig:Ia} shows the magnitude distribution of the confirmed and plausible CCSNe, which again illustrates the spectroscopic and photometric limits of our sample. A simple conclusion from this is that if one selects orphan candidates from a wide-field, magnitude limited survey (such as the PS1 MDS) and photometrically selects objects from the data stream with light curves which are unlike SNe\,Ia, then a large fraction of the brighter objects that remain are actually high redshift SLSNe-Ic. Of course there is still an outstanding question of how to identify the SNe\,Ia early enough in their light curves from photometry alone. This issue remains open, but we show here that if it can be done then a large fraction of the transients
brighter than about 22$^{m}$ are SLSNe-Ic candidates.
One major caveat to this is that there may be more SLSN-Ic candidates in the photometrically classified
samples (either the Type Ia or core-collapse samples, or both), since the light curve fitters do not contain SLSNe-Ic (or SLSNe-II) template light curves. As a check, we report the values output from \textsc{soft} and \textsc{psnid} for the light curves of the
spectroscopically confirmed, hostless CCSNe sample (see Table \ref{table:CC!}).
None of the CCSNe sample were photometrically misclassified as SNe Ia, which supports our proposal that
the sample in Table\,\ref{table:Ia?} is relatively pure.
However two of the SLSNe-Ic would be misclassified as Type II SNe. Hence it is possible that there are
further SLSNe-Ic masquerading as normal CCSNe in the objects in Tables \ref{table:CC?}
and \ref{table:CC??}. The lower left hand histogram in Fig.\,\ref{fig:Ia} again highlights the magnitude
limit of approximately $22-22.5$ in the spectroscopic observations. The spectroscopically confirmed
sample peaks at a significantly brighter \ensuremath{r_{\rm P1}}-band magnitude than the photometrically classified sample. It is therefore possible that some of the photometrically classified CCSNe presented in Table \ref{table:CC?} could be unclassified SLSNe-Ic. When we discuss the rates of SLSNe-Ic, we note that they may be lower limits.
In summary, our spectroscopic follow-up programmes took spectra of 12 transients which were not
Type Ia SNe. A large fraction of these (7) were confirmed to be SLSNe-Ic at redshifts greater than
$z\sim0.5$, and another is an unusually luminous transient at $z=1.388$. We photometrically
classified 17 transients as plausible CCSNe and a further 28 had lower confidence photometric classifications.
\subsection{Miscellaneous orphans}
The remaining 116 hostless transients discovered by the \protect \hbox {PS1}\ survey have insufficient data for any reliable classification to be made, due to the light curve data being incomplete or variable, or the object being too faint and only just reaching the limiting magnitude of the survey. As can be seen in the lower right hand plot in Fig.\,\ref{fig:Ia}, these objects represent the fainter end of the detected orphans and so only limited data are currently available concerning them. A typical trend seen in some of these orphans is that of a discrete peak, suggesting that some of the objects are at a high redshift and peak just above our magnitude limit, giving us an incomplete light curve. It is probable that these
are SNe or SNe-like transients. It is interesting to note that the
spectroscopically and photometrically classified samples do not contain any obvious AGN or
QSO type variable sources, suggesting that the likelihood of finding such black-hole driven
events is low if an underlying galaxy is not detected. We find many AGN and QSO variables in the
full PS1 MDS transient search, but imposing the requirement for a host brighter than $\sim 23.5$
does seem to reduce their detected frequency.
\section{SLSNe-Ic analysis}
\label{sec:analysis}
\begin{figure}
\begin{center}
\includegraphics[angle=270,scale=0.36]{figs/10pm_oblc_wlim.eps}
\caption{Observed \emph{gri}\ensuremath{z_{\rm P1}}\ light curves of PS1-10pm. The non-\protect \hbox {PS1}, WHT detections are the \emph{griz} points shown at MJD 55384 and any upper limits are marked with arrows. Measurements of the host galaxy from late-time, deep GN \emph{i}- and \emph{z}-band images are shown as horizontal lines. During the 2011 season, between MJD 55594.56 and 55736.33, there were 10 non-detections in \ensuremath{g_{\rm P1}}\ down to a 3$\sigma$ indicative magnitude limit of 23.89, 12 non-detections in \ensuremath{r_{\rm P1}}\ down to 23.90, 12 non-detections in \ensuremath{i_{\rm P1}}\ down to 24.10 and 10 non-detections in \ensuremath{z_{\rm P1}}\ down to a limit of 23.24. Over the entire course of the observations there are also 24 non-detections in \ensuremath{y_{\rm P1}}, however given that the limiting magnitude for this filter only reached a maximum of 21.86, detections of an object at the redshift of PS1-10pm were not feasible.}
\label{fig:10pmoblc}
\end{center}
\end{figure}
\begin{table*}
\tiny
\caption{Observed photometry for PS1-10pm. No \emph{K}-corrections have been applied and the phase values have not been corrected to the restframe. Note that the \protect \hbox {PS1}\ observations have had any flux from a previous reference image removed through image subtraction (although no host object can be seen at the location of PS1-10pm), whereas the late time WHT and GN observations have not.}
\label{table:10pm}
\begin{tabular}{c c c c c c c c c c c c}
\hline
\hline
{\bf Date} & {\bf MJD} & {\bf Phase (days)} & {\bf g$_{P1}$} & {\bf r$_{P1}$} & {\bf i$_{P1}$} & {\bf z$_{P1}$} & {\bf \emph{g}} & {\bf \emph{r}} & {\bf \emph{i}} & {\bf \emph{z}} & {\bf Telescope} \\
\hline
24/12/2009 & 55189.65 & -136.35 & - & - & $>$23.97 & - & - & - & - & - & PS1 \\
28/12/2009 & 55193.65 & -132.35 & - & - & - & $>$23.03 & - & - & - & - & PS1 \\
07/01/2010 & 55203.61 & -122.39 & $>$23.80 & $>$23.71 & - & - & - & - & - & - & PS1 \\
11/01/2010 & 55207.65 & -118.35 & - & - & $>$24.08 & - & - & - & - & - & PS1 \\
12/01/2010 & 55208.65 & -117.35 & - & - & - & $>$23.24 & - & - & - & - & PS1 \\
13/01/2010 & 55209.64 & -116.36 & $>$23.82 & - & - & - & - & - & - & - & PS1 \\
13/01/2010 & 55209.65 & -116.35 & - & $>$23.71 & - & - & - & - & - & - & PS1 \\
14/01/2010 & 55210.61 & -115.39 & - & - & $>$24.00 & - & - & - & - & - & PS1 \\
16/01/2010 & 55212.64 & -113.36 & $>$23.78 & $>$23.69 & - & - & - & - & - & - & PS1 \\
17/01/2010 & 55213.56 & -112.44 & - & - & $>$24.11 & - & - & - & - & - & PS1 \\
18/01/2010 & 55214.56 & -111.44 & - & - & - & $>$23.26 & - & - & - & - & PS1 \\
19/01/2010 & 55215.63 & -110.37 & - & $>$23.76 & - & - & - & - & - & - & PS1 \\
23/01/2010 & 55219.65 & -106.35 & - & - & $>$24.02 & - & - & - & - & - & PS1 \\
24/01/2010 & 55220.63 & -105.37 & - & - & - & $>$23.19 & - & - & - & - & PS1 \\
25/01/2010 & 55221.56 & -104.44 & $>$23.78 & $>$23.64 & - & - & - & - & - & - & PS1 \\
26/01/2010 & 55222.59 & -103.41 & - & - & $>$24.04 & - & - & - & - & - & PS1 \\
27/01/2010 & 55223.57 & -102.43 & - & - & - & $>$23.12 & - & - & - & - & PS1 \\
03/02/2010 & 55230.60 & -95.40 & $>$23.48 & $>$23.47 & - & - & - & - & - & - & PS1 \\
04/02/2010 & 55231.58 & -94.42 & - & - & $>$23.99 & - & - & - & - & - & PS1 \\
06/02/2010 & 55233.53 & -92.47 & $>$23.72 & $>$23.59 & - & - & - & - & - & - & PS1 \\
07/02/2010 & 55234.56 & -91.44 & - & - & $>$24.06 & - & - & - & - & - & PS1 \\
08/02/2010 & 55235.59 & -90.41 & - & - & - & $>$23.14 & - & - & - & - & PS1 \\
09/02/2010 & 55236.60 & -89.40 & $>$23.82 & - & - & - & - & - & - & - & PS1 \\
09/02/2010 & 55236.61 & -89.39 & - & $>$23.69 & - & - & - & - & - & - & PS1 \\
10/02/2010 & 55237.57 & -88.43 & - & - & $>$24.11 & - & - & - & - & - & PS1 \\
11/02/2010 & 55238.57 & -87.43 & - & - & - & $>$23.20 & - & - & - & - & PS1 \\
12/02/2010 & 55239.54 & -86.46 & $>$23.86 & $>$23.74 & - & - & - & - & - & - & PS1 \\
13/02/2010 & 55240.53 & -85.47 & - & - & $>$24.07 & - & - & - & - & - & PS1 \\
15/02/2010 & 55242.54 & -83.46 & $>$23.85 & $>$23.71 & - & - & - & - & - & - & PS1 \\
16/02/2010 & 55243.53 & -82.47 & - & - & $>$24.02 & - & - & - & - & - & PS1 \\
19/02/2010 & 55246.46 & -79.54 & - & - & $>$23.97 & - & - & - & - & - & PS1 \\
20/02/2010 & 55247.54 & -78.46 & - & - & - & $>$23.13 & - & - & - & - & PS1 \\
21/02/2010 & 55248.53 & -77.47 & $>$23.84 & $>$23.71 & - & - & - & - & - & - & PS1 \\
24/02/2010 & 55251.51 & -74.49 & 23.31 (0.29) & $>$23.47 & - & - & - & - & - & - & PS1 \\
25/02/2010 & 55252.55 & -73.45 & - & - & 23.59 (0.30) & - & - & - & - & - & PS1 \\
11/03/2010 & 55266.54 & -59.46 & 23.23 (0.25) & 23.68 (0.38) & - & - & - & - & - & - & PS1 \\
13/03/2010 & 55268.58 & -57.42 & - & - & - & $>$22.78 & - & - & - & - & PS1 \\
17/03/2010 & 55272.58 & -53.42 & - & 23.35 (0.26) & - & - & - & - & - & - & PS1\\
20/03/2010 & 55275.52 & -50.48 & 23.16 (0.20) & 23.53 (0.33) & - & - & - & - & - & - & PS1\\
25/03/2010 & 55280.35 & -45.65 & - & - & - & $>$23.20 & - & - & - & - & PS1\\
02/04/2010 & 55288.44 & -37.56 & - & - & 23.11 (0.20) & - & - & - & - & - & PS1\\
11/04/2010 & 55297.34 & -28.66 & - & - & 22.43 (0.08) & - & - & - & - & - & PS1\\
12/04/2010 & 55298.39 & -27.61 & - & - & - & 22.41 (0.16) & - & - & - & - & PS1\\
15/04/2010 & 55301.33 & -24.67 & 22.93 (0.17) & 22.87 (0.18) & - & - & - & - & - & - & PS1\\
17/04/2010 & 55303.49 & -22.51 & - & - & 22.32 (0.08) & - & - & - & - & - & PS1\\
18/04/2010 & 55304.32 & -21.68 & - & - & - & 22.34 (0.16) & - & - & - & - & PS1\\
19/04/2010 & 55305.34 & -20.66 & 23.07 (0.21) & 22.67 (0.15) & - & - & - & - & - & - & PS1\\
06/05/2010 & 55322.39 & -3.61 & - & - & - & 21.97 (0.11) & - & - & - & - & PS1\\
09/05/2010 & 55325.37 & -0.63 & - & - & - & 21.74 (0.09) & - & - & - & - & PS1\\
10/05/2010 & 55326.41 & 0.41 & 22.98 (0.18) & 22.12 (0.10) & - & - & - & - & - & - & PS1\\
11/05/2010 & 55327.43 & 1.43 & - & - & 21.98 (0.06) & - & - & - & - & - & PS1\\
13/05/2010 & 55329.37 & 3.37 & 23.13 (0.18) & 22.22 (0.09) & - & - & - & - & - & - & PS1\\
14/05/2010 & 55330.30 & 4.30 & - & - & 22.02 (0.06) & - & - & - & - & - & PS1\\
16/05/2010 & 55332.31 & 6.31 & 23.41 (0.22) & 22.33 (0.09) & - & - & - & - & - & - & PS1\\
18/05/2010 & 55334.33 & 8.33 & - & - & - & 21.79 (0.09) & - & - & - & - & PS1\\
23/05/2010 & 55339.28 & 13.28 & - & - & 21.63 (0.08) & - & - & - & - & - & PS1\\
24/05/2010 & 55340.28 & 14.28 & - & - & - & 21.98 (0.14) & - & - & - & - & PS1\\
01/06/2010 & 55348.34 & 22.34 & - & - & 22.06 (0.06) & - & - & - & - & - & PS1\\
03/06/2010 & 55350.30 & 24.30 & 23.66 (0.28) & 22.42 (0.10) & - & - & - & - & - & - & PS1\\
05/06/2010 & 55352.33 & 26.33 & - & - & - & 21.68 (0.08) & - & - & - & - & PS1\\
06/06/2010 & 55353.35 & 27.35 & 23.56 (0.29) & 22.80 (0.17) & - & - & - & - & - & - & PS1\\
07/06/2010 & 55354.26 & 28.26 & - & - & 22.31 (0.08) & - & - & - & - & - & PS1\\
08/06/2010 & 55355.26 & 29.26 & - & - & - & 21.70 (0.12) & - & - & - & - & PS1\\
12/06/2010 & 55359.31 & 33.31 & - & 22.72 (0.26) & - & - & - & - & - & - & PS1\\
13/06/2010 & 55360.26 & 34.26 & - & - & 22.21 (0.08) & - & - & - & - & - & PS1\\
14/06/2010 & 55361.26 & 35.26 & - & - & - & 21.94 (0.12) & - & - & - & - & PS1\\
15/06/2010 & 55362.28 & 36.28 & - & 22.66 (0.13) & - & - & - & - & - & - & PS1\\
16/06/2010 & 55363.27 & 37.27 & - & - & 22.22 (0.07) & - & - & - & - & - & PS1\\
17/06/2010 & 55364.26 & 38.26 & - & - & - & 22.08 (0.12) & - & - & - & - & PS1\\
18/06/2010 & 55365.28 & 39.28 & - & 22.72 (0.15) & - & - & - & - & - & - & PS1\\
19/06/2010 & 55366.27 & 40.27 & - & - & 22.36 (0.09) & - & - & - & - & - & PS1\\
20/06/2010 & 55367.26 & 41.26 & - & - & - & 21.89 (0.10) & - & - & - & - & PS1\\
06/07/2010 & 55383.97 & 57.97 & - & - & - & - & - & 23.52 (0.13) & 22.65 (0.23) & - & WHT\\
07/07/2010 & 55384.00 & 58.00 & - & - & - & - & 24.82 (0.33) & - & - & 22.16 (0.28) & WHT\\
\hline
30/01/2011 & 55591.62 & 265.62 & - & - & - & - & - & - & 24.72 (0.30) & 24.60 (0.30) & GN\\
\hline
\end{tabular}
\medskip
\end{table*}
\subsection{PS1-10pm}
\label{sec:pm}
PS1-10pm was first detected with \protect \hbox {PS1}\ in the \ensuremath{r_{\rm P1}}-band on MJD 55251 (24$^{th}$ February 2010) in MD06 at a location of RA\,=\,12$^h$12$^m$42$^s$.18, DEC\,=\,46$^\circ$59$'$29$''$.5 (J2000). Detections in \emph{gri}\ensuremath{z_{\rm P1}}\ continued until a final \ensuremath{z_{\rm P1}}-band point on MJD 55367 (20$^{th}$ June 2010) when the MD06 season ended and \protect \hbox {PS1}\ no longer observed the object. Further data were taken on the 7$^{th}$ July 2010 with the ACAM instrument at the WHT (\emph{griz}) and deep imaging in \emph{i} and \emph{z} was performed with the GMOS instrument on GN on the 30$^{th}$ January 2011. The details of the photometry performed can be found in Table \ref{table:10pm}. The transient was caught as it began to rise and the \protect \hbox {PS1}\ coverage captures a reasonably well-defined peak. Fig.\,\ref{fig:10pmoblc} shows the observed \emph{gri}\ensuremath{z_{\rm P1}}\ light curves for PS1-10pm.
Spectra of PS1-10pm were obtained with GMOS on GN on the 3$^{rd}$ June and the 2$^{nd}$ July 2010\footnote{Gemini Program ID: GN-2010A-Q-45}. The first spectrum was taken using the R400 grating with a GG455 filter and a single, 2400s exposure gave a signal-to-noise ratio (SNR) in the detected continuum of $\sim10$ per pixel (at approximately 6500\AA). With the R400 grating, a 1$''$ slit provides a resolution of 7.9\AA\ and the actual useful wavelength range of the obtained spectrum was from $\sim5000-9000$\AA.
The second spectrum consists of $4\times1800$s exposures taken using the R150 grating (G5306). The 1$''$ slit provided a resolution of 22.7\AA\ with this grating but the useful wavelength range of the obtained spectrum increased to $\sim4000-9500$\AA.
The SNR in the continuum was similar to the first spectrum (around 10 per pixel), albeit at lower spectral resolution.
The centroids and widths of the two strong absorption lines at around 6170\AA\ were measured by fitting simultaneous Gaussian profiles (see Fig.\,\ref{fig:10pmprofile}). This was done using our custom built \textsc{idl} spectral analysis package {\em procspec}, and checked with the \textsc{starlink} spectral analysis package \textsc{dipso}.
The FWHM was allowed to vary in tandem, and the centroids were measured at 6166.84\AA\ and 6182.15\AA.
We found a best fit with FWHM\,=\,5.4\AA, which is slightly lower than the expected instrument resolution of
6.4\AA\ at this wavelength, for a 1$''$ slit width. The image quality at the time of observations was
lower than
the slit width (around 0.7-0.8$''$) and the source did not completely fill the slit. The line widths are hence
effectively unresolved. If these were the
Mg\,{\sc ii} $\lambda\lambda$2795.528,2802.704\footnote{http://www.nist.gov/pml/data/asd.cfm}
doublet, then the centroids both imply redshifts of z = 1.206. Hence this is a robust identification of the absorption components, likely in the interstellar medium (ISM) of the host galaxy of the transient. The Mg\,{\sc ii} absorber could conceivably be foreground, which would then imply an even higher redshift for the transient.
We do detect a probable host galaxy, coincident with PS1-10pm in deep Gemini images after the transient has faded (see below). Although this could be a foreground or background source, the simplest explanation is that the Mg\,{\sc ii} absorption is associated with the host galaxy and the redshift of the transient PS1-10pm is the same
as the Mg\,{\sc ii} absorption. In all reported cases in the literature where SLSNe-Ic have detections of both Mg\,{\sc ii} absorption and host galaxy emission lines, the redshifts are the same.
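The doublet identification can be checked arithmetically; the short sketch below (using the NIST rest wavelengths quoted above) shows that both measured centroids imply the same redshift:

```python
# Redshift implied by each narrow Mg II centroid: z = lambda_obs / lambda_rest - 1.
# Rest wavelengths are the NIST values quoted in the text.
REST = (2795.528, 2802.704)      # Mg II k and h rest wavelengths (Angstrom)
OBSERVED = (6166.84, 6182.15)    # fitted centroids from the R400 spectrum (Angstrom)

def redshift(lam_obs, lam_rest):
    """Redshift of a single line from its observed and rest wavelengths."""
    return lam_obs / lam_rest - 1.0

for lam_obs, lam_rest in zip(OBSERVED, REST):
    print(f"{lam_rest:.3f} A -> z = {redshift(lam_obs, lam_rest):.3f}")
# Both components give z = 1.206
```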
Fig.\,\ref{fig:10pmspec} shows the two PS1-10pm spectra compared with spectra of SCP06F6, SN2010gx, PTF09cwl, PS1-10awh and PS1-10ky
\citep{scp06f6,10gx,bluedeath,10kyawh}. The Mg\,{\sc ii} $\lambda\lambda$2796,2803 absorption doublet can be seen in most of the spectra, corrected for each
respective redshift, which allowed \cite{bluedeath} and \cite{10kyawh} to determine the redshifts and luminosities of these transients. The
broad absorption in PS1-10pm is almost certainly due to the same Mg\,{\sc ii} resonance transition, but in the expanding photosphere of
the transient. The similarity of the depth and strength of the feature immediately suggests that PS1-10pm could be a SLSN, similar to the SLSNe-Ic class illustrated here.
\begin{figure*}
\begin{center}$
\begin{array}{cc}
\includegraphics[angle=270,scale=0.3]{figs/10pm_mgIIgauss.eps} &
\includegraphics[angle=270,scale=0.3]{figs/10pm_mgIIsn.eps}
\end{array}$
\end{center}
\caption{Detail of the GMOS, R400 PS1-10pm spectrum showing the observed wavelength of the two absorption features thought to be the Mg\,{\sc ii} $\lambda\lambda$2796,2803 doublet used to determine a redshift of 1.206. By taking the narrow doublet (seen here at the observed wavelength in the left hand figure and at the implied rest wavelength of $\sim$2800\AA\ in the right hand figure) to be Mg\,{\sc ii} in the host galaxy and thus using it as a rest frame for the wider, bluer profile from the supernova, simple Gaussian profiles could be fitted to the absorption profiles and an expansion velocity of $\sim17,000$\,kms$^{-1}$ determined for PS1-10pm.}
\label{fig:10pmprofile}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[angle=270,scale=0.6]{figs/10pm_speccomp_new_bw.eps}
\caption{Two GMOS spectra of PS1-10pm at $z=1.206$ compared with PS1-10awh at $z=0.908$, PS1-10ky at $z=0.956$, PTF09cwl at $z=0.349$, SCP06F6 at $z=1.189$ and SN2010gx at $z=0.23$ (see references in text). All of the spectra have been corrected to restframe and re-binned to 10\AA\ and some chip gaps have been smoothed over.}
\label{fig:10pmspec}
\end{center}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[angle=270,scale=0.5]{figs/10pm_ablccomp.eps}
\caption{A comparison of the PS1-10pm \ensuremath{r_{\rm P1}}-band absolute magnitude light curve with a \emph{u}-band SN2010gx light curve and \ensuremath{g_{\rm P1}}-band PS1-10awh and PS1-10ky light curves. Light curve shapes obtained from \emph{u}-band PTF12dam and \ensuremath{g_{\rm P1}}-band PS1-11ap data are also presented here, highlighting the difference in the evolution of the SLSNe-Ic dataset. The rest wavelengths of these bands for each object are given in Table \ref{table:filters} and the comparison here approximately represents the NUV, with a range of $\sim2500-3200$\AA.}
\label{fig:10pmlc}
\end{center}
\end{figure*}
\begin{table*}
\caption{Central rest wavelengths ({\AA}) of optical passbands for each of the SLSNe-Ic used in the photometric comparisons in this paper, where values in \emph{italic} represent filters used for comparison purposes. The redshift of each object is given in the top row and the central wavelength of each filter in the second column.}
\label{table:filters}
\begin{tabular}{lcccccccc}
\hline
\hline
{\bf Filter} && {\bf PS1-10pm} & {\bf PS1-10ahf} & {\bf PS1-10awh} & {\bf PS1-10ky} & {\bf SN 2010gx} & {\bf PS1-11ap} & {\bf PTF12dam} \\
\hline
&& 1.206 & 1.1 & 0.908 & 0.956 & 0.230 & 0.524 & 0.107\\% & 2.05 \\
\hline
u & 3540 & - & - & - & - & \emph{2878} & - & \emph{3196}\\% & -\\
g & 4860 & 2206 & 2256 & \emph{2550} & \emph{2488} & 3878 & \emph{3193} & \emph{4396} \\%& $\sim1600$\\
r & 6230 & \emph{2826} & \emph{2890} & 3267 & 3188 & 5065 & \emph{4091} & 5632\\% & $\sim2050$\\
i & 7525 & 3412 & 3489 & 3944 & 3848 & 6199 & 4939 & 6799\\% & \emph{$\sim$2530}\\
z & 8660 & 3924 & \emph{4012} & 4536 & 4426 & 7426 & 5680 & 7820\\% & -\\
y & 9720 & 4409 & 4508 & 5097 & 4974 & - & 6382 & 8786\\% & -\\
\hline
\end{tabular}
\medskip
\end{table*}
Fig.\,\ref{fig:10pmprofile} shows a 500\AA\ segment of the first, R400 GMOS spectrum, corrected for the redshift obtained above. If we assume that the broad absorption line is Mg\,{\sc ii} then the centroid is at a blueshifted velocity of $\sim17,000$\,kms$^{-1}$ in comparison to the Mg\,{\sc ii} ISM doublet. \cite{10kyawh} find similar velocities of $\sim19,000$\,kms$^{-1}$ and $\sim12,000$\,kms$^{-1}$ for PS1-10ky and PS1-10awh respectively. Line widths ranging from $9000-12,000$\,kms$^{-1}$ were found by \cite{10kyawh}, which are slightly lower than the $\sim13,000$\,kms$^{-1}$ found for PS1-10pm.
However, differences could arise from the simplistic approach of fitting a FWHM to such a broad feature while ignoring possible blends. Also, we have used only the Mg\,{\sc ii} line whereas \cite{10kyawh} used multiple features in the analysis of PS1-10ky and PS1-10awh.
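The quoted blueshift follows from the non-relativistic Doppler relation $v = c\,\Delta\lambda/\lambda$. In this sketch the broad-trough centroid is an illustrative value chosen to reproduce the $\sim17,000$\,kms$^{-1}$ estimate, not a measurement from the paper:

```python
C_KMS = 299792.458  # speed of light (km/s)

def blueshift_velocity(lam_broad, lam_rest):
    """Non-relativistic Doppler velocity of a blueshifted absorption centroid."""
    return C_KMS * (lam_rest - lam_broad) / lam_rest

# Illustrative rest-frame centroid for the broad Mg II trough (assumed, not
# quoted in the text), measured against the 2795.5 A resonance line.
v = blueshift_velocity(2637.0, 2795.528)
print(f"v = {v:.0f} km/s")  # ~17,000 km/s
```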
To compare the respective absolute AB magnitude of each supernova we used the following:
\begin{equation}
M = m - 5\log_{10}\left(\frac{d_{L}}{10\,\mathrm{pc}}\right) + 2.5\log_{10}(1+z)
\label{eq:abmag}
\end{equation}
\citep{Hogg}, where \emph{m} is the apparent AB magnitude. The measured magnitudes were corrected for cosmological expansion but not given a full $K$-correction. However, suitable filters were chosen to make the comparison valid (see Table \ref{table:filters} for central wavelengths in the rest frame), with the resulting wavelength range ($\sim2500-4000$\AA) falling approximately in the near-ultraviolet (NUV). We applied a correction for foreground reddening due to the Galactic line of sight only \citep{extinct}, as we have no information on the extinction in the host. The foreground extinction and the \cite{exlaw} extinction law imply $A_{r} \simeq 0.05$.
A standard cosmology with $H_0=72$\,kms$^{-1}$\,Mpc$^{-1}$, $\Omega_M=0.27$ and $\Omega_\Lambda=0.73$ is used throughout.
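Eq.\,\ref{eq:abmag} requires the luminosity distance for this cosmology; a minimal numerical sketch (not the code actually used in the paper) is:

```python
import math

# Flat LCDM cosmology used in the paper: H0 = 72 km/s/Mpc, Om = 0.27, OL = 0.73.
H0, OM, OL = 72.0, 0.27, 0.73
C_KMS = 299792.458  # speed of light (km/s)

def luminosity_distance_mpc(z, n=2000):
    """d_L = (1+z) * D_H * int_0^z dz'/E(z') for a flat universe (trapezoidal)."""
    e_inv = lambda zp: 1.0 / math.sqrt(OM * (1.0 + zp) ** 3 + OL)
    h = z / n
    integral = 0.5 * (e_inv(0.0) + e_inv(z)) + sum(e_inv(i * h) for i in range(1, n))
    return (1.0 + z) * (C_KMS / H0) * integral * h

def absolute_ab_mag(m, z):
    """M = m - 5 log10(d_L / 10 pc) + 2.5 log10(1+z)."""
    d_pc = luminosity_distance_mpc(z) * 1e6
    return m - 5.0 * math.log10(d_pc / 10.0) + 2.5 * math.log10(1.0 + z)

print(f"d_L(z=1.206) = {luminosity_distance_mpc(1.206):.0f} Mpc")  # ~8.3 Gpc
print(f"M for m=22.9 at z=1.206: {absolute_ab_mag(22.9, 1.206):.2f}")
```

No foreground-extinction or $K$-correction terms are included here; those are applied separately in the text.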
Fig.\,\ref{fig:10pmlc} shows an absolute magnitude light curve of the PS1-10pm \ensuremath{r_{\rm P1}}-band (assuming a redshift of 1.206), along with \emph{u}-band data for SN2010gx \citep{10gx} and \ensuremath{g_{\rm P1}}-band data for PS1-10awh and PS1-10ky \citep{10kyawh}. As can be seen from Table \ref{table:filters}, these are restframe light curves in the NUV ($\sim2500-2900$\AA) and they have been corrected for time dilation for the figure. PS1-10pm reached a peak absolute magnitude M$_{NUV}=-21.59\pm0.07$ after a rise time lasting $\sim35$d. The
similarities in the light curve shapes further support the classification of PS1-10pm as a SLSN-Ic, but at one of the highest known redshifts of $z = 1.206$. Two further light curves, obtained from \emph{u}-band PTF12dam data \citep{12dam} and \ensuremath{g_{\rm P1}}-band PS1-11ap data \citep{11ap}, are also presented in this figure, showing a clear distinction between these slowly evolving SLSNe-Ic and the normal SLSNe-Ic class. The light curves of the former are much broader and easily distinguishable.
\cite{10kyawh} and \cite{11xk} showed that the colour and luminosity evolution of these SLSNe-Ic are physically consistent
with hot blackbody temperatures ranging from $T_{\rm eff}\sim20000$\,K at 20 days before peak through
$T_{\rm eff}\sim15000$\,K at peak luminosity. Hence we can use the well sampled PS1 multi-colour light curve to trace the
blackbody temperature of PS1-10pm to check for consistency with the known population of these transients.
The PS1 bandpasses probe the restframe NUV wavelengths 2200-4000\AA\ for PS1-10pm and although this is
a rather narrow window for a spectral energy distribution (SED) it is well suited to the high photospheric temperatures.
We chose to determine a blackbody fit at five epochs which had approximately simultaneous and consistent $griz$ coverage
(see Table \ref{table:bbfits}), giving a range between -20d and +30d with respect to peak. The fits and temperatures
are shown in Fig.\,\ref{fig:10pmbb}, illustrating a physically consistent temperature evolution similar to that of the
lower redshift SLSNe-Ic shown in Fig.\,8 of \cite{10kyawh}. At peak luminosity, a temperature of $T_{\rm eff}\sim10000$\,K
provides a blackbody spectrum fit to the
flux of PS1-10pm. This gives an integrated luminosity (between 1000 and 10000\AA) of $\sim3\times10^{44}$erg\,s$^{-1}$, or
$7.3\times10^{10}$\,L$_{\odot}$. This is again very similar to the SLSNe-Ic in \cite{bluedeath}, \cite{10gx}
and \cite{10kyawh}. The radius of the emitting surface must then be of order 6$\times10^{15}$\,cm, which is
a factor of two larger than previously determined by \cite{10kyawh} for PS1-10awh and PS1-10ky, due to the lower peak
$T_{\rm eff}$ that we determine. However within the intrinsic uncertainties of the assumptions of blackbody radiation, the
narrow spectral energy range and flux measurements, we cannot say if this is real diversity or limitations of the fairly simple physics we employ.
In conclusion, the PS1 measured multi-colour light curve is physically consistent with PS1-10pm being a SLSN-Ic at
$z=1.206$.
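The temperature estimates and the quoted radius can be reproduced from standard blackbody relations. The sketch below is illustrative rather than the pipeline actually used: it fits a Planck curve (with a free scale factor) to four-band fluxes by a grid search over temperature, then inverts the Stefan-Boltzmann law, treating the band-limited luminosity as bolometric.

```python
import math

H = 6.62607015e-27      # Planck constant (erg s)
KB = 1.380649e-16       # Boltzmann constant (erg/K)
C_CM = 2.99792458e10    # speed of light (cm/s)
SIGMA_SB = 5.670374e-5  # Stefan-Boltzmann constant (erg cm^-2 s^-1 K^-4)

def planck_lambda(wave_ang, t):
    """B_lambda(T) in erg s^-1 cm^-2 A^-1 sr^-1."""
    lam = wave_ang * 1e-8  # cm
    x = H * C_CM / (lam * KB * t)
    return 2.0 * H * C_CM ** 2 / lam ** 5 / (math.exp(x) - 1.0) * 1e-8

def best_fit_temperature(waves, fluxes, t_grid):
    """For each trial T the optimal (free) scale factor is analytic, so the
    blackbody fit reduces to a 1-D grid search over temperature."""
    best_t, best_chi2 = None, float("inf")
    for t in t_grid:
        model = [planck_lambda(w, t) for w in waves]
        scale = sum(f * m for f, m in zip(fluxes, model)) / sum(m * m for m in model)
        chi2 = sum((f - scale * m) ** 2 for f, m in zip(fluxes, model))
        if chi2 < best_chi2:
            best_t, best_chi2 = t, chi2
    return best_t

def bb_radius_cm(luminosity, t_eff):
    """Photospheric radius from L = 4 pi R^2 sigma_SB T^4."""
    return math.sqrt(luminosity / (4.0 * math.pi * SIGMA_SB * t_eff ** 4))

# Rest-frame central wavelengths of griz for PS1-10pm; the fluxes here are
# synthetic 10000 K blackbody values standing in for the real photometry.
waves = [2206.0, 2826.0, 3412.0, 3924.0]
fluxes = [1e-23 * planck_lambda(w, 10000.0) for w in waves]
print(best_fit_temperature(waves, fluxes, range(4000, 30001, 250)))  # 10000
print(f"R = {bb_radius_cm(3e44, 1e4):.1e} cm")  # ~6.5e15 cm, 'of order 6e15'
```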
The spectrum in Fig.\,\ref{fig:10pmspec} illustrates the difficulties in classifying high$-z$ SN candidates
from optical spectra. At $z>1$, the observed optical range samples a rest frame UV window that is
a factor of two narrower in wavelength coverage than the observer frame spectrum. The lack
of large numbers of SNe with restframe UV spectra (particularly unusually luminous SNe, which will be
preferentially detected at high-$z$) often
makes classification and redshift determination difficult.
\subsection{Host galaxy of PS1-10pm}
\label{sec:host}
\begin{figure}
\begin{center}
\includegraphics[angle=270,scale=0.35]{figs/10pm_bbfits.eps}
\caption{Blackbody fitting of PS1-10pm across 5 epochs.}
\label{fig:10pmbb}
\end{center}
\end{figure}
\begin{table}
\begin{center}
\caption{PS1-10pm estimated temperatures from blackbody fitting.}
\label{table:bbfits}
\begin{tabular}{c c c}
\hline
\hline
{\bf MJD} & {\bf Phase (days, rest)} & {\bf T$_{BB}$ (K)} \\
\hline
$\sim$55283 & -20 & 20000 $\pm$ 5000 \\
$\sim$55305 & -8 & 12500 $\pm$ 2500 \\
$\sim$55325 & 0 & 10000 $\pm$ 1000 \\
$\sim$55367 & 20 & 7500 $\pm$ 1000 \\
$\sim$55384 & 30 & 6000 $\pm$ 1000 \\
\hline
\end{tabular}
\medskip
\end{center}
\end{table}
\begin{figure*}
\begin{center}
\includegraphics[scale=0.5]{figs/10pm_gal.eps}
\end{center}
\caption{PS1 and GN \ensuremath{i_{\rm P1}}- and \emph{i}-band images of the SLSN-Ic PS1-10pm at peak and after the transient has faded. The lower set of images show subsections of the host galaxy of PS1-10pm, 265 days after peak. The circle in the zoomed, lower right image is centred on the SLSN-Ic position with a radius corresponding to 3$\sigma$. The perpendicular lines in this image meet at the determined centroid of the galaxy which can be seen to be just inside the 3$\sigma$ boundary of the SLSN-Ic position.}
\label{fig:10pmim}
\end{figure*}
We obtained deep images in \emph{i} and \emph{z} at the position of PS1-10pm with $9\times150$s exposures at GN, $\sim265$d after the \ensuremath{i_{\rm P1}}-band peak. The data were reduced in the standard manner by subtracting a bias level measured from the overscan region of the Gemini CCD, dividing each image by an appropriate flatfield image and subtracting an appropriately scaled, sourceless fringe frame created using the \emph{gifringe} function in the \emph{gemini} \textsc{iraf}\footnote{\textsc{iraf} is distributed by the
National Optical Astronomy Observatories, which are operated by the
Association of Universities for Research in Astronomy, Inc., under
the cooperative agreement with the National Science Foundation.} package. Although no host was seen in the PS1 reference templates,
a faint object can be seen in the deeper Gemini images.
Aperture photometry was carried out using the aperture photometry procedure available in the Graphical Astronomy and Image Analysis tool software package\footnote{http://astro.dur.ac.uk/$\sim$pdraper/gaia/gaia.html} \citep[\textsc{gaia},][]{naylor97}, giving an
\emph{i}-band magnitude of $\emph{i}=24.99\pm0.42$ and an observed \emph{z}-band magnitude of $\emph{z}=24.86\pm0.31$. These correspond to absolute magnitude values of M$_{3400}\sim-18.7$ and M$_{3900}\sim-18.9$ respectively when corrected to $z=1.206$ and for foreground extinction.
The position of a SN with respect to its host galaxy can provide evidence against it being misclassified as an AGN, provided an offset from the galactic centre is found. Alignment of the GN $i$-band image with a PS1 \ensuremath{i_{\rm P1}}-band image of the SLSN-Ic at peak was carried out by first measuring the pixel coordinates of 10 bright stars in a 6.5$'\times2.8'$ field using the \textsc{iraf} \emph{phot} task utilising the centroid centring algorithm on both images. The list of matched coordinates was then used as an input to the \textsc{iraf} \emph{geomap} task to derive a geometric transformation between the two images, allowing for translation, rotation and independent scaling in the \emph{x} and \emph{y} axes. The root mean square (RMS) of the fit was 0.061$''$ (GMOS pixels are 0.1454$''$ and PS1 GPC pixels are 0.25$''$ after warping).
Aperture photometry was then carried out on the host galaxy (in the Gemini image) and PS1-10pm (in the original \protect \hbox {PS1}\ image). The coordinates of the SLSN-Ic and of the host galaxy were measured in both images with three different centring algorithms provided by the \emph{phot} task; centroid, Gaussian and optimal filtering. This provided a mean position and a standard deviation. The standard deviation of the three measurements was taken as the positional error measurement in $x$ and $y$ of the two objects.
The $x,y$ position of PS1-10pm was then transformed to the coordinate system of the GMOS frame using the transformation defined by the 10 stars in common. This revealed a difference of 2.24 GMOS pixels which, at the GMOS pixel scale of 0.1454$''$ (for a 2x2 binned CCD with a native pixel scale of 0.0727 arcsec/pixel\footnote{http://www.gemini.edu/sciops/instruments/gmos/imaging/detector-array/gmosn-array-eev}), corresponds to an offset of 0.33$''$ (see Fig.\,\ref{fig:10pmim}).
The total uncertainty in the alignment of the two objects is hence the quadrature sum of the uncertainties in the centroids of the host and PS1-10pm and the RMS of the alignment transformation (see \cite{pos} for example). This was found to be $\sigma=0.806$ pixels (or 0.12$''$), and hence the SLSN-Ic and the galaxy centroid differ by 2.8$\sigma$. While this is not quite a formal 3$\sigma$ difference, it indicates that the SN is not coincident with the centre of the galaxy, supporting the conclusion that PS1-10pm is a SLSN and not a UV transient event due to any type of AGN variability.
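The offset significance follows directly from the numbers quoted above; this sketch combines errors in quadrature in the usual way (only the offset and the total uncertainty are values from the text, the helper itself is generic):

```python
import math

GMOS_PIXEL_ARCSEC = 0.1454   # GMOS-N pixel scale for 2x2 binning (arcsec/pix)

def quadrature(*errors):
    """Total 1-sigma uncertainty from independent error terms."""
    return math.sqrt(sum(e * e for e in errors))

offset_pix = 2.24   # transformed SN position vs. host centroid (pix)
sigma_tot = 0.806   # quoted quadrature sum of centroid + alignment errors (pix)

print(f"offset = {offset_pix * GMOS_PIXEL_ARCSEC:.2f} arcsec")  # 0.33 arcsec
print(f"significance = {offset_pix / sigma_tot:.1f} sigma")     # 2.8 sigma
```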
\begin{figure}
\begin{center}
\includegraphics[angle=270,scale=0.35]{figs/10ahf_oblc.eps}
\caption{Observed \ensuremath{r_{\rm P1}}-, \ensuremath{i_{\rm P1}}- and \ensuremath{z_{\rm P1}}-band light curves of PS1-10ahf. Measurements of the host galaxy from a deep, WHT \emph{i}-band observation and deep, GS \emph{r}- and \emph{z}-band images are shown as horizontal lines and any upper limits are indicated with arrows. During this period, from MJD 55389.54 until 55530.33, 20 non-detections in \ensuremath{g_{\rm P1}}\ are also recorded down to a photometric limit of 24.59 and a \emph{g}-band limit of $\sim27$ is found for the host from a deep, GS \emph{g}-band observation.}
\label{fig:10ahfoblc}
\end{center}
\end{figure}
\cite{2013arXiv1311.0026L} have shown that the host of PS1-10pm resolves into
a significantly extended source with an irregular morphology in Hubble
Space Telescope images. They determine
AB magnitudes of $m_{\rm F606W} = 25.38\pm0.05$ and
$m_{\rm F110W} = 24.40\pm0.08$. Our $i$- and $z$-band magnitudes sit comfortably between
these values, which would allow four points on the SED to be used in future analysis.
\subsection{PS1-10ahf}
\label{sec:ahf}
PS1-10ahf was first detected on MJD 55414 (6$^{th}$ August 2010) in MD10 at RA\,=\,23$^h$32$^m$28$^s$.3, DEC\,=\,-00$^\circ$21$'$43$''$.6. The initial light curve of PS1-10ahf showed a faint, slowly rising source clearly detected in the \ensuremath{i_{\rm P1}}- and \ensuremath{z_{\rm P1}}-bands ($\ensuremath{i_{\rm P1}}\sim23.3$). \protect \hbox {PS1}\ followed the field until MJD 55535 (5$^{th}$ December 2010); thereafter the field was dropped from the \protect \hbox {PS1}\ observing cycle due to airmass constraints. The details of these observations can be found in Table \ref{table:10ahf}. No sign of a host was found in any of the major survey catalogues, nor was it visible in the \protect \hbox {PS1}\ reference stack images.
As a check, manual photometry was also carried out independently of both the automated pipelines described in Section 2.2. The optimal photometry facility within \textsc{gaia} was again used to perform photometry on the target images rather than the difference images. As there was clearly no host contribution in the PS1 reference stack image, there should not be any contribution to the luminosity other than that of the transient. The manual light curves for the \ensuremath{i_{\rm P1}}- and \ensuremath{z_{\rm P1}}-bands were calibrated using 8 SDSS stars in the field of the transient, and in all cases the photometry was consistent with the pipeline measurements. The epoch of the $i_{\rm P1}$-band maximum was found from a second order polynomial fit and determined to be MJD $55540\pm5$. Observed \ensuremath{r_{\rm P1}}-, \ensuremath{i_{\rm P1}}- and \ensuremath{z_{\rm P1}}-band light curves for PS1-10ahf can be found in Fig.\,\ref{fig:10ahfoblc}.
\begin{table*}
\tiny
\caption{Observed photometry for PS1-10ahf. No \emph{K}-corrections have been applied. Phase is in the observer frame, not the restframe, as more than one possible redshift value is presented in the text. Note that the \protect \hbox {PS1}\ observations have had any flux present in the reference image removed through image subtraction (although no host object can be seen at the location of PS1-10ahf), whereas the late time WHT and GS observations have not.}
\label{table:10ahf}
\begin{tabular}{c c c c c c c c c c}
\hline
\hline
{\bf Date} & {\bf MJD} & {\bf Phase (days)} & {\bf r$_{P1}$} & {\bf i$_{P1}$} & {\bf z$_{P1}$} & {\bf \emph{r}} & {\bf \emph{i}} & {\bf \emph{z}} & {\bf Telescope} \\
\hline
11/07/2010 & 55388.59 & -151.41 & - & - & $>$23.89 & - & - & - & PS1 \\
12/07/2010 & 55389.52 & -150.48 & $>$24.37 & - & - & - & - & - & PS1 \\
21/07/2010 & 55398.61 & -141.39 & $>$23.69 & - & - & - & - & - & PS1 \\
02/08/2010 & 55410.55 & -129.45 & $>$23.80 & - & - & - & - & - & PS1 \\
04/08/2010 & 55412.55 & -127.45 & - & - & 23.98 (0.31) & - & - & - & PS1 \\
05/08/2010 & 55413.54 & -126.46 & 23.77 (0.19) & - & - & - & - & - & PS1 \\
08/08/2010 & 55416.56 & -123.44 & 24.42 (0.28) & - & - & - & - & - & PS1 \\
09/08/2010 & 55417.56 & -122.44 & - & 23.16 (0.09) & - & - & - & - & PS1 \\
14/08/2010 & 55422.56 & -117.44 & $>$23.99 & - & - & - & - & - & PS1 \\
15/08/2010 & 55423.56 & -116.44 & - & 23.00 (0.06) & - & - & - & - & PS1 \\
16/08/2010 & 55424.57 & -115.43 & - & - & $>$23.68 & - & - & - & PS1 \\
17/08/2010 & 55425.56 & -114.44 & $>$24.27 & - & - & - & - & - & PS1 \\
19/08/2010 & 55427.57 & -112.43 & - & - & 23.85 (0.23) & - & - & - & PS1 \\
20/08/2010 & 55428.56 & -111.44 & 23.94 (0.24) & - & - & - & - & - & PS1 \\
30/08/2010 & 55438.58 & -101.42 & - & 23.08 (0.10) & - & - & - & - & PS1 \\
31/08/2010 & 55439.54 & -100.46 & - & - & 22.96 (0.12) & - & - & - & PS1 \\
01/09/2010 & 55440.53 & -99.47 & 23.65 (0.24) & - & - & - & - & - & PS1 \\
02/09/2010 & 55441.52 & -98.48 & - & 22.94 (0.07) & - & - & - & - & PS1 \\
03/09/2010 & 55442.57 & -97.43 & - & - & 23.41 (0.28) & - & - & - & PS1 \\
04/09/2010 & 55443.50 & -96.5 & 23.41 (0.14) & - & - & - & - & - & PS1 \\
05/09/2010 & 55444.48 & -95.52 & - & 22.88 (0.06) & - & - & - & - & PS1 \\
06/09/2010 & 55445.52 & -94.48 & - & - & 23.39 (0.24) & - & - & - & PS1 \\
07/09/2010 & 55446.55 & -93.45 & 23.65 (0.18) & - & - & - & - & - & PS1 \\
08/09/2010 & 55447.53 & -92.47 & - & 22.72 (0.06) & - & - & - & - & PS1 \\
09/09/2010 & 55448.47 & -91.53 & - & - & 23.29 (0.13) & - & - & - & PS1 \\
10/09/2010 & 55449.51 & -90.49 & - & - & - & - & - & - & PS1 \\
12/09/2010 & 55451.51 & -88.49 & - & - & $>$23.29 & - & - & - & PS1 \\
13/09/2010 & 55452.43 & -87.57 & 23.40 (0.13) & - & - & - & - & - & PS1 \\
14/09/2010 & 55453.46 & -86.54 & - & 22.75 (0.05) & - & - & - & - & PS1 \\
17/09/2010 & 55456.45 & -83.55 & - & 22.64 (0.04) & - & - & - & - & PS1 \\
18/09/2010 & 55457.35 & -82.65 & - & - & 22.86 (0.19) & - & - & - & PS1\\
19/09/2010 & 55458.33 & -81.67 & - & - & - & - & - & - & PS1\\
26/09/2010 & 55465.52 & -74.48 & - & 22.56 (0.12) & - & - & - & - & PS1\\
27/09/2010 & 55466.41 & -73.59 & - & - & 22.56 (0.09) & - & - & - & PS1\\
28/09/2010 & 55467.43 & -72.57 & 23.28 (0.22) & - & - & - & - & - & PS1\\
06/10/2010 & 55475.27 & -64.73 & - & - & 22.45 (0.10) & - & - & - & PS1\\
07/10/2010 & 55476.45 & -63.55 & 22.76 (0.10) & - & - & - & - & - & PS1\\
08/10/2010 & 55477.27 & -62.73 & - & 22.62 (0.04) & - & - & - & - & PS1\\
09/10/2010 & 55478.26 & -61.74 & - & - & 22.82 (0.10) & - & - & - & PS1\\
10/10/2010 & 55479.27 & -60.73 & 23.45 (0.21) & - & - & - & - & - & PS1\\
11/10/2010 & 55480.36 & -59.64 & - & 22.62 (0.06) & - & - & - & - & PS1\\
12/10/2010 & 55481.29 & -58.71 & - & - & 22.75 (0.13) & - & - & - & PS1\\
13/10/2010 & 55482.27 & -57.73 & 23.25 (0.19) & - & - & - & - & - & PS1\\
14/10/2010 & 55483.26 & -56.74 & - & 22.51 (0.04) & - & - & - & - & PS1\\
15/10/2010 & 55484.24 & -55.76 & - & - & 22.59 (0.09) & - & - & - & PS1\\
16/10/2010 & 55485.26 & -54.74 & 22.93 (0.16) & - & - & - & - & - & PS1\\
29/10/2010 & 55498.26 & -41.74 & - & 22.56 (0.05) & - & - & - & - & PS1\\
30/10/2010 & 55499.26 & -40.74 & - & - & 22.45 (0.10) & - & - & - & PS1\\
31/10/2010 & 55500.05 & -39.95 & - & - & - & - & 22.60 (0.14) & - & WHT\\
31/10/2010 & 55500.26 & -39.74 & 22.95 (0.07) & - & - & - & - & - & PS1\\
01/11/2010 & 55501.39 & -38.61 & - & 22.63 (0.08) & - & - & - & - & PS1\\
02/11/2010 & 55502.30 & -37.7 & - & - & 22.63 (0.09) & - & - & - & PS1\\
03/11/2010 & 55503.34 & -36.66 & 23.17 (0.11) & - & - & - & - & - & PS1\\
04/11/2010 & 55504.41 & -35.59 & - & 22.74 (0.09) & - & - & - & - & PS1\\
07/11/2010 & 55507.33 & -32.67 & - & 22.56 (0.06) & - & - & - & - & PS1\\
08/11/2010 & 55508.25 & -31.75 & - & - & 22.57 (0.12) & - & - & - & PS1\\
10/11/2010 & 55510.31 & -29.69 & - & 22.52 (0.05) & - & - & - & - & PS1\\
30/11/2010 & 55530.33 & -9.67 & - & - & - & - & - & - & PS1\\
04/12/2010 & 55534.30 & -5.7 & - & 22.52 (0.09) & - & - & - & - & PS1\\
05/12/2010 & 55535.24 & -4.76 & - & - & 22.28 (0.07) & - & - & - & PS1\\
06/12/2010 & 55536.27 & -3.73 & 22.76 (0.12) & - & - & - & - & - & PS1\\
\hline
24/07/2011 & 55766 & 226.0 & - & - & - & 24.52 (0.06) & - & - & GS\\
26/07/2011 & 55768 & 228.0 & - & - & - & - & - & 23.80 (0.10) & GS\\
08/08/2011 & 55781.14 & 241.14 & - & - & - & - & 24.17 (0.07) & - & WHT\\
\hline
\end{tabular}
\medskip
\end{table*}
A spectrum of PS1-10ahf was obtained with GMOS on GS\footnote{Gemini Program ID: GS-2010B-Q-43} on the 6$^{th}$ November 2010 using the R150 grating (G5306) with a 1$''$ slit, giving a useful wavelength range from $\sim4300-8000$\AA.
A set of $4\times2700$s exposures gave a combined SNR of $\sim19$ in the continuum, when rebinned to 10\AA\ per pixel.
The flux calibrated GMOS spectrum provides a synthetic $r_{\rm P1}$-band magnitude of
22.8 as calculated in SYNPHOT. This flux (on 2010 November 6) is in reasonable agreement (within $\pm0.2$\,mag)
with the PS1 photometry. The synthetic $g_{\rm P1}$-band magnitude of 24.7 from the same spectrum is
consistent with the non-detection in the nightly PS1 images.
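Synthetic photometry of the kind SYNPHOT performs reduces to a photon-weighted mean flux density through the bandpass. A minimal version (assuming a wavelength grid in Angstroms and an arbitrary transmission curve; the filter curves and flux calibration are the real work) might look like:

```python
import math

C_ANG = 2.99792458e18  # speed of light (Angstrom/s)

def synthetic_ab_mag(wave, flux_lam, trans):
    """Photon-weighted synthetic AB magnitude of a spectrum f_lambda
    (erg s^-1 cm^-2 A^-1) observed through a filter transmission curve."""
    num = den = 0.0
    for i in range(len(wave) - 1):
        dw = wave[i + 1] - wave[i]
        num += flux_lam[i] * trans[i] * wave[i] * dw   # photon-weighted flux
        den += trans[i] * dw / wave[i]                 # bandpass normalisation
    f_nu = num / (den * C_ANG)  # mean flux density (erg s^-1 cm^-2 Hz^-1)
    return -2.5 * math.log10(f_nu) - 48.60

# Sanity check: a source with constant f_nu at the AB zero point has m_AB = 0
# in any bandpass.
wave = [4000.0 + 10.0 * i for i in range(201)]
flat = [10 ** (-48.6 / 2.5) * C_ANG / w ** 2 for w in wave]
print(synthetic_ab_mag(wave, flat, [1.0] * len(wave)))  # ~0
```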
There are no strong and obvious narrow features either in absorption (e.g. Mg\,{\sc ii} or Ca\,{\sc ii} ISM lines) or in emission (e.g. nebular lines) to provide an unambiguous redshift. Hence we initially compared it to a range of SNe, including the SLSNe-Ic that we used for the PS1-10pm comparison and the confirmed $z\simeq1$ SLSNe-Ic already discovered by PS1 \citep{10kyawh}. There are a number of broad absorption or P-Cygni features, and a plausible redshift of $z=1.1$ would put the deepest absorption at a rest wavelength similar to the broad Mg\,{\sc ii} absorption seen in other SLSNe-Ic (Fig.\,\ref{fig:10ahfspectra}). However the spectrum lacks C\,{\sc ii} and Si\,{\sc ii} as detected in previous SLSNe-Ic \citep{bluedeath,10kyawh} and overall is not an entirely convincing match.
Attempts to match the PS1-10ahf spectrum with other features typical of Type Ic SNe, such as Ca\,{\sc ii} H\&K features and some Fe\,{\sc ii} blends, also proved unconvincing.
Fig.\,\ref{fig:10ahfablc} shows a comparison of an absolute magnitude \ensuremath{r_{\rm P1}}-band PS1-10ahf light curve with \emph{u}-band data for SN2010gx \citep{10gx}, \ensuremath{g_{\rm P1}}-band data for PS1-10awh and PS1-10ky \citep{10kyawh} and \ensuremath{r_{\rm P1}}-band PS1-10pm data (this paper), again created using Eq.\,\ref{eq:abmag}. The central wavelengths in the rest frame for these filters offer a reliable comparison as shown in Table \ref{table:filters}.
The measured magnitudes were corrected for cosmological expansion and for foreground reddening along the Galactic line of sight only \citep{extinct}, as again we have no host extinction information. The foreground extinction and the \cite{exlaw} extinction law imply $A_{i}\simeq0.07$. The long rise time of PS1-10ahf is still apparent after correcting for time dilation and clearly sets the transient apart from the normal SLSNe-Ic class.
\begin{figure*}
\begin{center}$
\begin{array}{cc}
\includegraphics[angle=270,scale=0.45]{figs/10ahf_speccomp_bw.eps} \\
\includegraphics[angle=270,scale=0.45]{figs/10ahf_spec_11ap.eps}
\end{array}$
\end{center}
\caption{GMOS spectrum of PS1-10ahf at $z=1.1$, taken with GS, compared with PS1-11ap at $z=0.524$, PS1-11aib at $z=0.997$, SCP06F6 at $z=1.189$ and PTF12dam at $z=0.107$.
The lower plot shows an overlay of the PS1-10ahf spectrum with the same PS1-11ap spectrum as in the top panel to emphasise the similarities between the objects.
All of the spectra have been re-binned to 10\AA\ and some chip gaps have been smoothed over. See references in text.}
\label{fig:10ahfspectra}
\end{figure*}
If this redshift of $z=1.1$ from the spectral comparisons is secure then the
transient is a closer match to the slowly evolving SLSNe-Ic PS1-11ap \citep{11ap} and PTF12dam \citep{12dam}, as seen in the lower plot in Fig.\,\ref{fig:10ahfablc}.
PS1-11ap has broad Mg\,{\sc ii} absorption with a line width of $\sim14,500$\,km\,s$^{-1}$. The line width of possible Mg\,{\sc ii} absorption was determined to be $\sim12,000$\,km\,s$^{-1}$ for PS1-10ahf, which compares well to PS1-11ap (see lower plot in Fig.\,\ref{fig:10ahfspectra}).
The overall spectral match to PS1-11ap was the closest we could find, after comparing with all known types of SNe for which NUV spectra exist.
At this redshift the light curve is very broad, even after applying time dilation. The transient has an intrinsically slow rest-frame rise of 60 days in the NUV bands (see Fig.\,\ref{fig:10ahfablc}).
The rising slope, within the photometric uncertainties, is similar to the 45-60 day rise time deduced from the modelling of PS1-11ap \citep{11ap}.
Unfortunately we do not sample the decay time after peak for PS1-10ahf, simply due to the length of the PS1 observing season.
\begin{figure*}
\begin{center}$
\begin{array}{cc}
\includegraphics[angle=270,scale=0.475]{figs/10ahf_ablccomp_new.eps} \\
\includegraphics[angle=270,scale=0.475]{figs/10ahf_ablccomp_pisn2.eps}
\end{array}$
\end{center}
\caption{A variation on Fig. \ref{fig:10pmlc}, again showing an absolute \emph{u}-band SN2010gx light curve, absolute \ensuremath{g_{\rm P1}}-band PS1-10awh and PS1-10ky light curves and an absolute \ensuremath{r_{\rm P1}}-band PS1-10pm light curve. An absolute \ensuremath{r_{\rm P1}}-band light curve of PS1-10ahf is shown here for comparison purposes, with $z=1.1$. The lower image offers a comparison of a \ensuremath{z_{\rm P1}}-band light curve of PS1-10ahf with \ensuremath{r_{\rm P1}}-band PS1-11ap and \emph{g}-band PTF12dam data. See references within.
}
\label{fig:10ahfablc}
\end{figure*}
The plots show a peak absolute magnitude M$_{NUV}=-21.39\pm0.07$ for PS1-10ahf; however, the polynomial fit used to determine the peak MJD suggests that the transient continued to brighten for a short time after the observing period ended.
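The peak-epoch estimate from a polynomial fit can be illustrated with the simplest case: a parabola through three epochs bracketing maximum light, whose vertex gives the estimated peak MJD. This is a schematic stand-in for whatever polynomial order was actually fitted to the light curve.

```python
def parabola_peak_epoch(t0, m0, t1, m1, t2, m2):
    # Fit a parabola exactly through three (MJD, magnitude) points and
    # return the epoch of its vertex, i.e. the estimated peak MJD.
    num = (t1 - t0) ** 2 * (m1 - m2) - (t1 - t2) ** 2 * (m1 - m0)
    den = (t1 - t0) * (m1 - m2) - (t1 - t2) * (m1 - m0)
    return t1 - 0.5 * num / den
```

If the vertex lands beyond the last observed epoch, the transient was still brightening when the observing season ended, which is precisely the situation described for PS1-10ahf.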
This illustrates a practical limitation in following the evolution of transients with broad light curves and slow evolution times combined with time dilation at $z > 1$. A typical PS1 observing season for a MD field is 150-180 days, meaning that a transient at $z\sim1$ with a symmetric light curve and rise-time of 40 days (restframe) needs to be discovered close to the start of the observing season for the MD field if one is to sample the full rise and decay time. We were fortunate that this occurred for PS1-11ap \citep[at $z=0.524$,][]{11ap}. In conclusion, we find the most likely match to the light curve and spectrum of PS1-10ahf is with the slowly evolving SLSNe-Ic PS1-11ap.
To further illustrate the connection we show a spectrum of PS1-11aib. The latter transient fell outside the survey window set for this paper (discovered on the 27$^{th}$ July, 2011 in MD09), but it too has a broad, red light curve and a spectrum with very similar absorption features and slope to both PS1-10ahf and PS1-11ap. PS1-11aib has a convincing detection of Mg\,{\sc ii} $\lambda\lambda$2796,2803 ISM doublet at a redshift of $z=0.997$ and will be discussed in a future PS1 paper (Lunnan et al., in preparation). Unfortunately due to the high redshift of PS1-10ahf, a direct comparison with other published objects of this class (SN2007bi\ and PTF12dam) is not possible as the rest wavelength ranges do not have a sufficient overlap.
A PTF12dam spectrum is included for completeness but, as can be seen in Fig.\,\ref{fig:10ahfspectra}, the main features of the PS1-10ahf spectrum fall just bluewards of the reach of PTF12dam.
Nevertheless, as is exemplified in the lower plot of Fig.\,\ref{fig:10ahfspectra}, the continuum shape of the PS1-10ahf spectrum and the slowly evolving SLSNe-Ic spectra are very similar.
\subsection{Was PS1-10ahf a variable BAL QSO?}
\label{ahfqso}
\begin{figure*}
\begin{center}$
\begin{array}{cc}
\includegraphics[angle=270,scale=0.3]{figs/10ahf_speccomp_qso_1.eps} &
\includegraphics[angle=270,scale=0.3]{figs/10ahf_speccomp_qso_2.eps} \\
\includegraphics[angle=270,scale=0.3]{figs/10ahf_speccomp_qso_3.eps} &
\includegraphics[angle=270,scale=0.3]{figs/10ahf_speccomp_qso_4.eps} \\
\end{array}$
\end{center}
\caption{A comparison of PS1-10ahf with the SDSS Quasars 1730+5850 ($z=2.035$), 1154+0300 ($z=1.458$), SDSS 0300+0048 ($z=0.892$) and 0819+4209 ($z=1.926$) where $z=1.07$ for PS1-10ahf. All of the spectra have been re-binned to 10\AA\ and some chip gaps have been smoothed over. See references in text.}
\label{fig:10ahfspectra2}
\end{figure*}
Initial difficulties in identifying any spectral features in the PS1-10ahf GMOS spectrum led us to consider whether the object was a broad absorption line quasar (BAL QSO). Fig.\,\ref{fig:10ahfspectra2} shows a PS1-10ahf spectrum at rest wavelength compared with the FeLoBALs SDSS 1730+5850, 1154+0300, 0300+0048 and 0819+4209 \citep{Hall}, with the redshift of PS1-10ahf set at $z=1.07$. These BAL QSOs can show absorption troughs $\sim2000 - 20,000$\,km\,s$^{-1}$ wide arising from gas with blueshifted velocities up to $66,000$\,km\,s$^{-1}$ \citep{qso}, giving broad spectral features that can be comparable to typical SNe features in low to moderate signal-to-noise spectra. A number of other features (possibly Fe\,{\sc ii} and Ca\,{\sc ii}, most apparent in SDSS 0300+0048) in the QSO spectra match shallower absorption features in PS1-10ahf. However there is no convincing match to any of these, particularly as the spectral break due to broad Mg\,{\sc ii} absorption is much deeper in the QSOs than in PS1-10ahf.
In Fig.\,\ref{fig:10ahfspectra2} we matched the flux level in the restframe continuum region 2200-2700\AA\ of each object to show the contrast in the spectral break expected if PS1-10ahf were a BAL QSO. In all cases the BAL QSO spectral breaks are significantly larger than that of PS1-10ahf and it seems unlikely that they are similar objects. While any underlying galaxy flux might be expected to dilute the transient flux of PS1-10ahf, and hence increase the spectral break, the host galaxy is at least 2 magnitudes fainter than the transient when the spectrum was taken (see Table\,\ref{table:10ahf}). Therefore flux dilution cannot account for the differences.
In addition, the unusual QSOs of the \cite{Hall} sample are significantly more luminous than PS1-10ahf (if the latter is at $z=1.07$), ranging between $M_{NUV}\sim-23$ and $-29$, compared to $M_{NUV}\simeq-21$ for PS1-10ahf.
We carried out a similar search for the host galaxy of PS1-10ahf as for PS1-10pm, to align it with the position of the transient. A deep \emph{i}-band image was obtained with a 600s exposure using the ACAM instrument on the WHT, 241 days after the determined peak epoch. The data were reduced by subtracting a separate bias image and dividing by an appropriate flatfield image taken on the same night. As can be seen in Fig.\,\ref{fig:10ahfim}, an object is clearly visible at the expected coordinates and an observed magnitude of $\emph{i}=24.17\pm0.07$ was determined using the photometry procedures available in the \textsc{gaia} software package \citep{naylor97}. Taking $z=1.1$ and $A_{i}=0.07$ gives an absolute magnitude of M$_{3400}\sim-19.5$. Note that a point source was again chosen as a reference PSF for the galaxy; however, due to the high redshift involved, the target appears approximately point-like. As before, both optimal and aperture photometry procedures were carried out to ensure that this approach was sensible.
\begin{figure*}
\begin{center}
\includegraphics[scale=0.6]{figs/10ahf_gal.eps}
\end{center}
\caption{PS1 and WHT \ensuremath{i_{\rm P1}}- and \emph{i}-band images of the possible slowly evolving SLSN-Ic PS1-10ahf at peak and after the explosion has faded. The circle in the zoomed, lower right image is centred on the galaxy position with a radius corresponding to 3$\sigma$. The perpendicular lines in this image meet at the determined centroid of the transient position which is well within the determined central region of the galaxy. Although this does not rule out the object as a slowly evolving SLSN-Ic, this result does not provide conclusive evidence that the transient is not an AGN-like event, unlike the poor spectral comparisons of PS1-10ahf with the Hall et al. (2002) QSOs.}
\label{fig:10ahfim}
\end{figure*}
Alignment of the WHT $i$-band image with a PS1 \ensuremath{i_{\rm P1}}-band image of the transient at peak was carried out using the method described in Section \ref{sec:host}. The coordinates of 10 bright stars in a 6.5$'$ x 6.5$'$ field were determined using the \textsc{iraf} \emph{phot} task, utilising the centroid centring algorithm on both images. The list of matched coordinates was then used as an input to the \textsc{iraf} \emph{geomap} task to derive a geometric transformation between the two images, allowing for translation, rotation and independent scaling in the \emph{x} and \emph{y} axes. The RMS of the fit was 0.064$''$. Aperture photometry was then carried out on the WHT image for a host galaxy measurement and on the original PS1 image for a measurement of PS1-10ahf. The coordinates of the transient and of the host galaxy were measured in both images with three different centring algorithms provided by the \emph{phot} task; centroid, Gaussian and optimal filtering. This provided a mean position and a standard deviation. The standard deviation of the three measurements was taken as the positional error measurement in $x$ and $y$ of the two objects. The $x,y$ position of PS1-10ahf was then transformed to the coordinate system of the ACAM frame using the transformation defined by the 10 stars in common. A separation of 0.148$''$ was determined (using a pixel scale for the ACAM instrument of 0.25 arcsec/pixel\footnote{http://www.ing.iac.es/Astronomy/observing/instruments.html}). As before, the total uncertainty in the alignment of the two objects is hence the quadrature sum of the uncertainties in the centroids of the host and PS1-10ahf and the RMS of the alignment transformation. This was found to be 0.214$''$ at the pixel scale of ACAM, hence there is no evidence for an offset between the transient and the centroid of its host galaxy.
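The error budget described above, the two centroid uncertainties combined in quadrature with the RMS of the geometric transformation, reduces to a few lines. The $\sigma$ values used in the usage check below are placeholders, not the measured centroid errors.

```python
import math

def total_alignment_error(sigma_host, sigma_transient, rms_fit):
    # quadrature sum of host centroid error, transient centroid error
    # and the RMS of the geometric (geomap-style) transformation
    return math.sqrt(sigma_host ** 2 + sigma_transient ** 2 + rms_fit ** 2)

def is_offset_significant(separation, total_error, n_sigma=3):
    # only claim a host offset if it exceeds n_sigma times the total error
    return separation > n_sigma * total_error
```

With the measured separation of 0.148$''$ and total error of 0.214$''$, the offset is below even 1$\sigma$, matching the conclusion in the text that there is no evidence for an offset.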
PS1-10ahf is coincident, within the errors, with the centroid of the host galaxy, which does not offer the same strong argument against QSO variability as the offset measured for PS1-10pm. However the differences in the spectral features and the relatively low luminosity of PS1-10ahf compared to the \cite{Hall} QSO sample do not favour a QSO origin. Overall the comparisons above indicate a best match to a slowly evolving SLSN-Ic of the same type as SN2007bi, PTF12dam\ and PS1-11ap\ \citep{07bi,DY,12dam,11ap}.
\begin{figure*}
\begin{center}
\vspace{1.25cm}
\includegraphics[angle=270,scale=0.5]{figs/snVShost3.eps}
\end{center}
\caption{Observed SLSNe-Ic apparent magnitudes (opaque shapes) compared with their host galaxy magnitudes (hollow shapes) in the same filter, to emphasise the large differences between the two and thus support the method of finding SLSNe-Ic given in the paper. The blue hexagons show V, \emph{g} and \ensuremath{g_{\rm P1}}-band data, the green squares show \emph{r} and \ensuremath{r_{\rm P1}}-band data and the yellow triangles show \emph{i} and \ensuremath{i_{\rm P1}}-band data. Any host galaxies with only limiting magnitudes are shown as arrows. Host galaxy data were taken from Lunnan et al. (2014) and Nicholl et al. (2014) and the SLSNe-Ic data from various sources (see text for more details). The three, grey dotted lines represent the trends given by constant absolute magnitudes of -23, -21 and -19 mag. The lower panel shows the magnitude difference between the SLSNe-Ic and their hosts, plotted against the same redshift range as the upper plot.}
\label{fig:snhost}
\end{figure*}
\section{DISCUSSION}
\label{sec:discussion}
\subsection{SLSNe-Ic in PS1}
By a combination of two semi-automated transient alert pipelines and vigorous human screening, the catalogue of all the hostless transients from approximately the first year of the \protect \hbox {PS1}\ survey presented here can be considered comprehensive.
Of the 12 spectroscopically confirmed, core-collapse \protect \hbox {PS1}\ SNe with faint host galaxies (see Table \ref{table:CC!}), 5 have been classified as SLSNe-Ic independently of this paper: PS1-10ky and PS1-10awh \citep{10kyawh}, PS1-11ap \citep{11ap}, and PS1-11tt and PS1-11afv, whose high redshifts, obtained from the identification of Mg\,{\sc ii} in their spectra, give them absolute magnitudes comparable to the superluminous dataset \citep{2013arXiv1311.0026L}.
Evidence from the spectrum and light curves of PS1-10pm provide convincing arguments that it is, in fact, another such event as shown in Section \ref{sec:pm} of this paper. PS1-10ahf also seems to share characteristics with these objects, although possible reasons for classifying it as an AGN are given in Section \ref{sec:ahf}.
PS1-10afx was originally thought to be a superluminous event \citep{10afx}
but more recently has been shown to be a lensed SN\,Ia \citep{2013ApJ...768L..20Q,quimblens}.
Despite this possible contamination, probing hostless transient objects with all SNe\,Ia removed appears to be an efficient method of finding SLSNe-Ic.
Fig.\,\ref{fig:snhost} uses literature data of published SLSNe-Ic to illustrate the trend of SLSNe-Ic having luminosities consistently brighter than their host galaxies and probes how this might evolve with redshift.
We have used host galaxy and SLSNe-Ic data from a range of sources for confirmed SLSNe-Ic.
The data are taken from this paper;
\cite{10gx,bluedeath,10kyawh,berger,10afx,11xk,2013arXiv1311.0026L,11ap,2014arXiv1405.1325N}.
The peak apparent magnitudes of these SLSNe-Ic are plotted along with the apparent magnitudes of their host galaxies, in the same observed filter.
We should highlight possible biases in interpreting this figure. At low redshift ($z < 0.5$) these SLSNe have been found in a range of surveys which searched for SNe in and outside bright galaxies, such
as the Palomar Transient Factory \citep{2009PASP..121.1395L} and La Silla QUEST \citep{2013PASP..125..683B}.
However, both the CRTS \citep{2009ApJ...696..870D} and the
Pan-STARRS1 ``Faint Galaxy Supernova Search" \citep{11xk}
used catalogue matching which selected transients with significant flux differences between
host object and transient. While there may be some bias in the CRTS and PS1 discovered objects
no survey (or published study) has found a SLSN in a high mass, high luminosity host (with the
threshold of roughly $M_g \sim -19$). Neither is there any explanation how this would arise
in a survey selection bias. At redshifts above $z > 0.5$ (mostly PS1 objects in the MDS fields) the trend remains, but of course many of these objects have been selected for classification in PS1 precisely because
they have no obvious host galaxy as we have described in this paper.
At redshifts below $z=0.5$ the mean difference between host and peak SLSN-Ic magnitude is at least $4.5\pm1.2$ mag. It appears to decrease to $2.2\pm0.8$ mag between redshifts of 1 and 2
(note that the magnitude differences quoted here include the limiting host galaxy magnitudes, meaning that they are lower limits on the magnitude difference).
Although there are caveats with the selection methods it is certainly true that no SLSN-Ic below
$z \sim 1.5$ has been found in a galaxy anywhere near the luminosity of a typical $L^{\star}$ galaxy.
If normal and slowly evolving SLSNe-Ic can occur in brighter galaxies at low redshift, the
combined survey power of all professional and amateur searches have not uncovered any.
Although this lack of evidence does not prove that these events cannot occur in such environments, the large number of SLSN-Ic discoveries exclusively in faint galaxies suggests that such localities are preferential.
Additionally the plot suggests that there may be a trend for the host galaxies of SLSNe-Ic to
be systematically brighter at higher redshift.
Although we have employed the technique of no visible
host to select high$-z$ candidates, this would not explain why the hosts look intrinsically
\emph{brighter} than at low redshift in this plot. We again emphasise that there may well be selection
effects and further discoveries from low$-z$ e.g. iPTF and LSQ + PESSTO \citep{2014arXiv1405.1325N} to high$-z$
(e.g. the Dark Energy Survey) are required to shed further light on the host galaxies and expand on
the detailed work of \cite{2013arXiv1311.0026L}.
\cite{2011ApJ...727...15N}, \cite{janet} and \cite{2013arXiv1311.0026L} attribute the link between
SLSNe-Ic and dwarf galaxies to the low metallicities measured (where possible) for the hosts. If the mass-metallicity relation for galaxies evolves over redshift, one might expect a larger fraction of the massive galaxies (within a factor of two of L$^{\ast}$) at redshifts beyond 1.5 to have significantly
lower metallicity than their low-redshift counterparts. \cite{2006ApJ...644..813E} show that this is visible at $<z>=2.26\pm0.17$ with star-forming galaxies having metallicities which are typically 0.3\,dex lower than low-\emph{z} galaxies of similar mass. If there is a metallicity threshold beyond which SLSNe-Ic are generally not formed due to progenitor evolution \citep{janet,2013arXiv1311.0026L}
then one might expect the mass-metallicity evolution of galaxies to cause SLSNe-Ic to
appear in more massive, more luminous galaxies at higher redshift. We may be seeing this effect in
the trend in Fig.\,\ref{fig:snhost}, although the scatter is quite large and there are numerous non-detections at the higher redshifts. In addition, \cite{cooke} present the discoveries of
two transients with lightcurves that match SLSN-Ic in galaxies with redshifts of
$z=2.05$ and $z=3.9$. These transients were selected by virtue of being in galaxies with
estimated photometric redshifts beyond $z>2$ and the method has an obvious selection bias, as their discovery required detection of a host galaxy. However the differences between
host galaxy and transient magnitudes are only $+0.3^{m}$ and $-0.18^{m}$.
This would suggest the trend in Fig.\,\ref{fig:snhost} could continue
up to higher redshift, but this requires further work in selecting SLSN at redshifts beyond
$z=2$.
Of interest is the lack of SLSNe-II discoveries within this dataset. This could suggest an intrinsically lower rate for this particular class of SLSNe, or it could be evidence of the observational bias inherent in this investigation. The small number of SLSNe-II discovered so far have been associated with brighter hosts, which could explain the lack of discoveries here.
It is noted here for completeness that some SLSNe-II have been found in dwarf galaxies \citep[for example SN2006tf,][]{brightII,2011ApJ...727...15N}.
\subsection{Monte-Carlo simulations and estimate of SLSNe-Ic rates}
\label{sec:MC-rates}
\begin{figure*}
\begin{center}
\includegraphics[angle=270,scale=0.5]{figs/SLSNe_rate.eps}
\end{center}
\caption{The results of the Monte-Carlo simulations. The two shaded boxes represent the number of probable SLSNe-Ic (green) and slowly evolving SLSNe-Ic (blue) discovered during the search for orphans presented within the paper. The points represent the ratios of SLSNe-Ic to CCSNe against the number of SLSNe-Ic discovered within the simulation during the same time period. Thus we can deduce the approximate rate of SLSNe-Ic, of both the normal and the more rare, slowly evolving types.}
\label{fig:snrate}
\end{figure*}
Previous works have carried out rough estimates of the rates of SLSN-Ic to provide
an initial guide to the relative frequency of these transients compared to the
normal supernova population.
\cite{bluedeath} estimated their relative rate to be around 1 in every 10,000 core-collapse supernovae.
From the detection of just one event in the Texas Supernova Search (SN2005ap),
\cite{quimbrate} estimated the SLSN-Ic rate to be 32$^{+77}_{-26}$ events Gpc$^{-1}$\,yr$^{-1}$\,h$^{3}_{71}$. An estimation of the rate with respect to the core-collapse SN rate (within the same
volume) is a useful parameter as it can constrain theories of the progenitor sources, as has
been done with GRB and broad-lined Ic SNe \citep[or hypernova, e.g.][]{2004ApJ...607L..17P}.
It appears that the rate of long-duration GRBs (LGRBs) is around 1 in every 1000 core-collapse SNe
\citep{2013MNRAS.428.1410G}
hence the initial rates in \cite{bluedeath} would suggest a very low volumetric rate, roughly 10
times rarer than LGRBs. \cite{quimbrate} also compared their volumetric rate to the core-collapse
supernova rate in \cite{2008A&A...479...49B}, to estimate a relative rate of SLSN type Ic to core-collapse SN of
1 in 1000-20,000. In many of these works, the rate of the rare transients (GRBs or SLSN)
with respect to the CCSN population is assumed to be relatively constant with redshift. This is
certainly an unknown factor and may evolve due to the metallicity of the bulk of the star formation
changing. Since we have low numbers of objects we will assume here that the ratio is constant.
\cite{young} used Monte-Carlo simulations to estimate the number of CCSNe expected to be discovered in all-sky transient surveys such as \protect \hbox {PS1}. Since the publication of that paper, the study of SLSNe has rapidly evolved \citep[for a recent review see][]{gal-yam}. We use the same Monte-Carlo code from \cite{young} (recoded in python) and have updated the input data to include light curves and spectra from SLSNe-Ic and their slowly evolving, SN2007bi-like\ counterparts. This allows a calculation of the rates of these SLSNe-Ic from the detections in the \protect \hbox {PS1}\ MDS\ orphan population. We can determine a robust lower limit and a reasonable estimate of the most likely range for the rates within a redshift of approximately $z<1.4$.
The simulations were tailored specifically for the \protect \hbox {PS1}\ MDS; taking into account observational cadence, limiting magnitudes and historical records of time lost due to bad weather, technical difficulties or scheduled maintenance.
A foreground extinction of $E(B-V)=0.023$ is assumed which is typical for the Galactic line of sight for the MDS fields. No internal galaxy extinction is applied, which is a reasonable assumption for the SLSNe-Ic found to date.
The simulations run in two stages. The first stage simulates 10,000 SN events with a population demographic mimicking the input SNe light curves, spectral databases, CCSN to star-formation ratio and star-formation history, all within a volume between $0.3 < z < 1.4$. The second stage determines the fraction of these events the \protect \hbox {PS1}\ survey would have discovered.
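A heavily simplified sketch of the two-stage logic follows: events are drawn over $0.3<z<1.4$ with a Gaussian peak-magnitude distribution, and the recovered fraction is counted against a survey magnitude cut. The uniform-in-$z$ draw, the assumed cosmology and the single peak-magnitude criterion are all simplifications; the real code additionally weights by star-formation history, comoving volume, cadence and weather losses.

```python
import math, random

random.seed(1)  # reproducible toy run
H0, OMEGA_M, C_KMS = 70.0, 0.3, 299792.458  # assumed flat LambdaCDM

def distmod(z, steps=500):
    # distance modulus from a trapezoidal comoving-distance integral
    einv = lambda x: 1.0 / math.sqrt(OMEGA_M * (1 + x) ** 3 + (1 - OMEGA_M))
    dz = z / steps
    dc = (C_KMS / H0) * sum(0.5 * dz * (einv(i * dz) + einv((i + 1) * dz))
                            for i in range(steps))
    return 5 * math.log10((1 + z) * dc * 1e6 / 10.0)

def detected_fraction(n_events=10000, m_limit=22.0):
    # stage 1: simulate events; stage 2: count those peaking above the cut
    n_found = 0
    for _ in range(n_events):
        z = random.uniform(0.3, 1.4)        # simplification: uniform in z
        m_abs = random.gauss(-21.5, 0.3)    # normal SLSN-Ic peak magnitudes
        if m_abs + distmod(z) < m_limit:    # brighter than the survey cut
            n_found += 1
    return n_found / n_events
```

Under these toy assumptions roughly half the simulated events are recovered, dominated by the low-redshift end of the volume; the real detection criteria (light-curve duration, per-band limits) would lower this fraction.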
To estimate the limiting magnitude of the MDS images and the efficiency of recovering transients we ran multiple fake star
tests. Five nightly stacks of skycell \#39 (which is approximately one of the 60 CCDs in the focal plane array, warped to the sky)
in MD08 were taken from September 2010, with FWHM of point sources between 1\farcs2-1\farcs6. Fake stars were added at magnitudes of 21, 22, 23, 24 and 25 (including shot noise) at approximately 80 separate positions on the skycell. Half of these were placed on empty sky regions, to simulate the orphan population, and the other half were placed inside resolved galaxies. The IPP image subtraction routine (ppSub) was run using a reference image made of a stack containing 86 individual images; this is a typical static reference sky product that was used during the period of searching for these transients, and had an image quality of 1\farcs1. As simulating sources of $>21$ mag produces only sparse detection efficiency curves, the points were fitted with an `S-function' \citep[see Eq. 1 in][]{2008ApJ...681..462D}, allowing precise determination of the detection efficiencies.
Sources were catalogued in the difference image, and clear visual detections (in a similar way to how the manual screening was done in the real transient search) were picked for photometric measurements with PSF fitting, as described in Section \ref{sec:observations}. The detection efficiency fell below 98\% at the following $r_{\rm P1}$-band magnitudes : 22.8 (in seeing of 1\farcs6), 22.9 (1\farcs4), 23.3 (1\farcs2), 23.6 (1\farcs0).
Hence we take the $r_{\rm P1}$-band limiting magnitude to be 23, as the median seeing of individual PS1 images is in the
range 1\farcs3 to 1\farcs1 (\ensuremath{g_{\rm P1}} through \ensuremath{z_{\rm P1}}).
We assume similar depths for the $g_{\rm P1}$- and $i_{\rm P1}$-bands, as the exposure times for these are set to retrieve the same limiting magnitude and previous estimates of depth have found the same \citep[e.g. see the extensive discussion in][]{JTwds}. For the $z_{\rm P1}$-band we conservatively take the depth to be 22.4.
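As a hedged sketch, the efficiency roll-off measured from the fake-star tests can be modelled with a logistic curve in magnitude; whether this matches the exact `S-function' of the cited equation is an assumption, and the $m_{50}$ and width values below are illustrative placeholders rather than fitted values.

```python
import math

def efficiency(m, m50=23.3, width=0.25):
    # fraction of fake stars recovered at magnitude m; m50 is the 50%
    # recovery point and width sets how fast the curve rolls off
    return 1.0 / (1.0 + math.exp((m - m50) / width))

def magnitude_at_efficiency(eff, m50=23.3, width=0.25):
    # invert the curve, e.g. to quote the magnitude at which the
    # recovery fraction first falls below 98%
    return m50 + width * math.log(1.0 / eff - 1.0)
```

Quoting the magnitude where the curve crosses 98\%, as done for the seeing-dependent limits above, is then a one-line inversion of the fitted function.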
For each of the 10,000 SNe simulated, the simulations use rest-frame SN spectra to calculate the \emph{K}-corrections attributed to the SN at its assigned redshift for each of the \protect \hbox {PS1}\ \emph{griz}$_{P1}$ filters. To this end, the simulations require a complete spectral series, covering both the full wavelength and temporal ranges required to generate all possible \emph{K}-corrections, for each of the two SLSNe-Ic classes.
As no single object dataset fulfilled these specifications, composite spectral series were made for the normal and slowly evolving SLSNe-Ic classes.
The slowly evolving SLSNe-Ic series uses data from PTF12dam\ and PS1-11ap\ \citep{12dam,11ap} and the SLSNe-Ic series uses SN2010gx, PTF09cnd, PS1-10ky, SN2011ke, SN2012il and PS1-10pm data \citep[][this paper]{10gx, bluedeath, 10kyawh, 11xk}.
Even with data from all the aforementioned SLSNe-Ic however, the full wavelength coverage required for the \cite{young} simulations to work was still not reached at all epochs, particularly at late times when the SLSNe-Ic had become much fainter than their peak magnitudes. To extend the spectra blue-wards, blackbody fits were employed to extrapolate the observed data. The 38 day, 65 day and 185 day slowly evolving SLSNe-Ic epochs were fitted with 8500, 7500 and 6500 K blackbody SEDs respectively. 16000, 7000 and 6500 K blackbody SEDs best fitted the -21 day, 50 day and 115 day SLSNe-Ic epochs. The complete spectral series used for both of these SLSNe-Ic classes can be seen in Fig.\,\ref{fig:mc_spec}.
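The per-filter \emph{K}-correction computation can be sketched as below, following the standard same-band definition of Hogg et al. (2002); the top-hat bandpass and the analytic input spectrum are toy stand-ins for the real PS1 filter functions and the composite spectral series, so this is a sketch of the technique rather than the paper's code.

```python
import math

def k_correction(l_nu, z, nu_lo, nu_hi, n=2000):
    # same-band K-correction: K = -2.5 log10[(1+z) * I_obs / I_rest], where
    # each integral is of L_nu(nu) S(nu) dnu/nu over a top-hat bandpass S
    def integral(f):
        dnu = (nu_hi - nu_lo) / n
        total = 0.0
        for i in range(n):
            nu = nu_lo + (i + 0.5) * dnu
            total += f(nu) / nu * dnu
        return total
    i_obs = integral(lambda nu: l_nu((1 + z) * nu))  # redshifted spectrum
    i_rest = integral(l_nu)                          # rest-frame spectrum
    return -2.5 * math.log10((1 + z) * i_obs / i_rest)
```

A useful sanity check for any implementation: a source flat in $f_\nu$ recovers the textbook result $K=-2.5\log_{10}(1+z)$ in the AB system.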
As each spectrum consists of data from multiple sources, the flux of each epoch had to be scaled to accurately represent the class in question. A PS1-11ap\ absolute magnitude \ensuremath{g_{\rm P1}}-band light curve was used as a template for the slowly evolving class and absolute \emph{r}-band SN2011ke\ data was used for the SLSNe-Ic class. We set the absolute peak magnitude distributions to be $M_{\rm AB} = -21.5 \pm 0.3$ for the normal SLSNe-Ic and $M_{\rm AB} = -21.25 \pm 0.5$ for the slowly evolving SLSNe-Ic, based on the observed spread of absolute magnitudes that can be seen in Fig.\,\ref{fig:10pmlc} and Fig.\,\ref{fig:10ahfablc} of this paper and in published literature such as \cite{11xk}, \cite{11ap},
and \cite{insmartt14}, the latter of which provides the largest compilation of absolute magnitudes of SLSNe to date.
To scale each spectrum, the \emph{calcphot} task of the \emph{synphot}\footnote{http://www.stsci.edu/institute/software\_hardware/stsdas/synphot} package within \textsc{iraf} was utilised to deduce synthetic absolute magnitude values using an appropriate filter. Each spectrum was simply multiplied by a constant until the synthetic magnitude matched that of the template light curve at the same phase.
The SDSS filters built into \emph{calcphot} were used for this rough comparison, but the closeness of their filter functions to those of the \protect \hbox {PS1}\ filters of the template light curves makes this more than adequate.
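The scaling step described above amounts to a single multiplicative constant per spectrum; a minimal sketch, where the 0.4 dex-per-magnitude relation is the only physics involved:

```python
def rescale_spectrum(flux, m_synth, m_target):
    # scale the flux array by the constant factor that shifts the
    # synthetic magnitude m_synth onto the template magnitude m_target
    factor = 10.0 ** (-0.4 * (m_target - m_synth))
    return [f * factor for f in flux]
```

Brightening a spectrum by half a magnitude, for instance, multiplies its flux by $10^{0.2}\approx1.585$.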
For a simulated SN event to be classified as `discovered', we required that the object peaked above an AB magnitude of 22 (in any band), and had a light curve which was detectable above the limiting magnitudes listed above for 100 days in the observer frame (in at least one band). In the PS1 survey we spectroscopically detected 7 SLSNe-Ic as listed in Table \ref{table:CC!}.
PS1-10awh was detected for 75 days above the set detection limits and did not strictly meet the criteria. Hence we
will consider that we have 6 SLSNe-Ic detected (and also check this with a separate Monte Carlo calculation with
the criterion for detection set at 75 days).
While this is almost certainly incomplete it serves as a baseline observational comparison for the simulated rates and allows lower limits to be placed on the volumetric rates and plausible ranges to be discussed.
We consider the slowly evolving, SLSNe-Ic and the SLSNe-Ic separately.
Fig.\,\ref{fig:snrate} uses this information to illustrate the range of possible SLSNe-Ic/CCSNe ratios by comparing the observed data with the results of the Monte-Carlo simulations carried out here.
For the simulation to mimic the 4 standard SLSNe-Ic which were spectroscopically found during the hostless transient search presented (Table \ref{table:CC!}), the ratio of SLSNe-Ic to CCSNe has to be set to $3^{+3}_{-2}\times10^{-5}$ in the Monte-Carlo simulation \citep[with error values corresponding to $1\sigma$ Gaussian limits taken from][rounded to one decimal place]{limits}.
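The quoted $1\sigma$ limits on these small event counts can be reproduced with the widely used small-number Poisson approximations of Gehrels (1986). The particular pair of closed-form expressions below is one common choice from that paper, so treat this as a sketch rather than the exact procedure used.

```python
import math

def poisson_upper(n, s=1.0):
    # approximate upper bound for n observed events,
    # lambda_u ~ n + S*sqrt(n + 3/4) + (S^2 + 3)/4  (Gehrels-style)
    return n + s * math.sqrt(n + 0.75) + (s * s + 3.0) / 4.0

def poisson_lower(n, s=1.0):
    # approximate lower bound,
    # lambda_l ~ n * (1 - 1/(9n) - S/(3*sqrt(n)))^3
    if n == 0:
        return 0.0
    return n * (1.0 - 1.0 / (9.0 * n) - s / (3.0 * math.sqrt(n))) ** 3
```

For $n=4$ observed events these approximations give roughly $4^{+3.2}_{-1.9}$, i.e. the rounded $^{+3}_{-2}$ errors quoted for the four normal SLSNe-Ic.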
The slowly evolving type, of which only 2 possible events were discovered during the first year of the \protect \hbox {PS1}\ MDS, are likely less common and their simulated rate was determined to be only $3^{+4}_{-2}\times10^{-6}$ of the CCSNe rate.
As the slowly evolving SLSNe-Ic remain brighter for a longer duration after their peak luminosity,
they should be easier to detect and hence the fact that we have spectroscopically confirmed fewer of these than the faster declining SLSNe-Ic suggests that they are indeed rarer. This is in agreement with previous suggestions of \cite{gal-yam} and \cite{12dam}. As a comparison, we ran the Monte Carlo calculation with the requirement of
75 days, and hence included PS1-10awh as a detected event. We found a relative rate of $\sim10^{-5}$, within the
error bar of our estimated result of $3^{+3}_{-2}\times10^{-5}$.
However we cannot be confident that we are spectroscopically complete and there could well be SLSNe-Ic in Tables \ref{table:CC?} and \ref{table:CC??} which have not been spectroscopically confirmed. There are approximately 10 SNe in these tables which peak above $m_{\rm AB}=22$ and do not have a possible Type Ia light curve classification. If we regard these as potential SLSNe-Ic which we have not managed to classify, then the ratio of
normal CCSNe to SLSNe-Ic in our spectroscopically confirmed sample would suggest that approximately 60\% of them could be SLSNe-Ic. Thus we consider a plausible upper limit to the number of SLSNe-Ic in our total detected PS1 MDS sample to be $10^{+4}_{-3}$ \citep[with errors again estimated from][]{limits}, which would imply an upper limit to the rate of $8^{+2}_{-1}\times10^{-5}$ SLSNe-Ic per CCSNe.
In summary, we have estimated a range for the rate of SLSNe-Ic compared to the
rate of CCSNe within a redshift of $0.3\leq z\leq1.4$ of between $3^{+3}_{-2}\times10^{-5}$ and $8^{+2}_{-1}\times10^{-5}$.
The rate of the slowly evolving, SN2007bi-like SLSNe-Ic appears to be a factor of $\sim10$ lower and is likely to be
around $3\times10^{-6}$, although this number is uncertain by about a factor of two given the small numbers
detected. To put this in context and compare with the only other quantitative rate calculation (albeit at lower
redshift), we find there is about one SLSN-Ic per 12000$-$30000 core-collapse supernovae, whereas \cite{quimbrate} finds
one per 1000-20000. This compares to the rate of LGRBs of around 1 in every 1000 core-collapse SNe
\citep{2013MNRAS.428.1410G}.
\section{CONCLUSIONS}
\label{sec:conclusion}
We have catalogued all the SN-like, hostless transients from the PS1 Medium Deep Survey and, by using multiple spectroscopic programmes and photometric classifiers, filtered out all the SNe\,Ia events. A promising fraction of the remaining objects appear to fall into the category of SLSNe-Ic.
\begin{itemize}
\item 249 hostless transients were discovered within the first 1.37 yr of the \protect \hbox {PS1}\ MDS, 133 of which have SN-like features.
\item 40 are spectroscopically confirmed SNe, $\sim17.5\%$ of which are possible SLSNe-Ic.
\item 12 are spectroscopically confirmed, non-type Ia SNe, $\sim60\%$ of which are possible SLSNe-Ic.
\end{itemize}
PS1-10pm and PS1-10ahf were discovered in this way.
Photometric and spectroscopic comparisons place PS1-10pm comfortably in the SLSNe-Ic class. The classification of PS1-10ahf is not as robust, but reasonably solid photometric and spectroscopic comparisons give it a probable association with the slowly evolving class of SLSNe-Ic such as SN2007bi, PS1-11ap and PTF12dam.
We highlight that this was a combination of spectroscopic classification (when the SNe were close to peak) and
photometric classification after the lightcurves had been gathered. A challenge remains to carry out
accurate photometric classification in real time.
Using the SLSNe-Ic statistics gathered during the search for orphans and comparing them with Monte-Carlo simulations of SLSNe-Ic, we determined the rate of SLSNe-Ic within a redshift of $0.3\leq z\leq1.4$ to be between $3^{+3}_{-2}\times10^{-5}$ and $8^{+2}_{-1}\times10^{-5}$ that of the CCSNe rate.
The ratio of slowly evolving SLSNe-Ic to CCSNe seems to be much lower, at around $3^{+4}_{-2}\times10^{-6}$.
By combining careful photometric analysis with thorough spectroscopic follow-up, and by exploiting the common characteristic of the $>2$~mag difference between the peak magnitudes of discovered SLSNe-Ic and their host galaxies, an ever increasing number of SLSNe-Ic should be found in the next few years from current and future wide-field surveys (Pan-STARRS2, The Dark Energy Survey, The Zwicky Transient Facility, La Silla-QUEST + the Public ESO Spectroscopic Survey of Transient Objects, Large Synoptic Survey Telescope).
\\ \\
{\it Facilities:} Pan-STARRS1, Gemini, William Herschel Telescope
{\small{ \textit{Acknowledgements}.
The Pan-STARRS1 Surveys (PS1) have been made possible through
contributions of the Institute for Astronomy, the University of
Hawaii, the Pan-STARRS Project Office, the Max-Planck Society and
its participating institutes, the Max Planck Institute for
Astronomy, Heidelberg and the Max Planck Institute for
Extraterrestrial Physics, Garching, The Johns Hopkins University,
Durham University, the University of Edinburgh, Queen's University
Belfast, the Harvard-Smithsonian Center for Astrophysics, the Las
Cumbres Observatory Global Telescope Network Incorporated, the
National Central University of Taiwan, the Space Telescope Science
Institute, the National Aeronautics and Space Administration under
Grant No. NNX08AR22G issued through the Planetary Science Division
of the NASA Science Mission Directorate, the National Science
Foundation under Grant No. AST-1238877, and the University of
Maryland. S.J.S. acknowledges funding from the European Research
Council under the European Union's Seventh Framework Programme
(FP7/2007-2013)/ERC Grant agreement no [291222] (PI: S.J. Smartt).
This work is based on observations made with the following telescopes: William Herschel
Telescope (operated by the Isaac Newton Group), in the Spanish Observatorio
del Roque de los Muchachos of the Instituto de Astrofísica de
Canarias, in the island of La Palma; the Gemini Observatory, which is operated by the
Association of Universities for Research in Astronomy, Inc., under a cooperative agreement
with the NSF on behalf of the Gemini partnership: the National Science Foundation
(United States), the National Research Council (Canada), CONICYT (Chile), the Australian
Research Council (Australia), Minist\'{e}rio da Ci\^{e}ncia, Tecnologia e Inova\c{c}\~{a}o
(Brazil) and Ministerio de Ciencia, Tecnolog\'{i}a e Innovaci\'{o}n Productiva (Argentina).
Some observations reported here were obtained at the MMT Observatory, a joint facility of the Smithsonian Institution and the University of Arizona. Support for SR was provided by NASA through Hubble Fellowship grant \#HST-HF-51312.01 awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS 5-26555. JT acknowledges support for this work provided by National Science Foundation grant AST-1009749. SM acknowledges financial support from the Academy of Finland (project: 8120503). We thank A. Gal-Yam and P. Nugent for providing the classification of PS1-11acn, which is the PTF object PTF11dws.}
\section{Introduction}
\label{sec:intro}
Narrow resonance searches have long been a staple of experimental efforts to identify or constrain new physics beyond the standard model (BSM). Typically, the invariant mass of the system is measured and limits are placed on the production cross section times branching ratio, $\sigma_\textrm{prod} \times \textrm{BR}$, of new resonances and interpreted within the context of specific benchmark (BM) models. In refs.~\cite{Chivukula:2016hvp,Chivukula:2017lyk}, the authors explored recasting these model-independent constraints in terms of a simplified parameter, $\zeta$, which depends only on the product of BRs and the ratio of the resonance width to its mass. This reparameterization of narrow resonance limits in terms of partonic quantities can often simplify the task of interpreting constraints in terms of many models of interest.
Combining narrow resonance searches from multiple channels can extend the exclusion limits of current and future searches for BSM physics with more than one dominant decay mode. This possibility has been explored by both ATLAS~\cite{Aaboud:2018bun} and CMS~\cite{Sirunyan:2019vgt} in diboson and dilepton channels at the LHC. The constraints from different channels were combined in the context of specific BM models where the relationships between BRs are known, a necessity in order to cast constraints in terms of $\sigma_\textrm{prod} \times \textrm{BR}$. This, however, introduces an additional level of model dependence to the results, compared to monochannel searches, which makes it difficult to apply such constraints to models whose BRs vary from the BM choices.
In this work, we extend the concept of simplified limits, demonstrating a methodology by which one may display constraints on the masses or widths of narrow resonances within the parameter space of up to three BRs \textit{without introducing new sources of model dependence}. This method can be applied to single channel searches where the production of a BSM resonance is predicted to receive contributions from multiple channels. It can also be readily applied to combined constraints on BSM scenarios with a single production mode and several decay modes.
In sec.~\ref{sec:simpLimits}, we review the foundations of simplified limits for a single experimental search. In sec.~\ref{sec:ternary}, we introduce ternary diagrams as a method of unfolding constraints coming from multiple production modes or final states. In sec.~\ref{sec:apps}, we apply these methods to experimental searches for spin-0, spin-1, and spin-2 resonances. Section~\ref{sec:simplex} discusses the use of N-simplex diagrams as a generalization of our method to cases where more than three branching ratios are sources of experimental constraints on the properties of new resonances. We conclude in sec.~\ref{sec:disc} and highlight the value of using the digital data record to enable exploration of an arbitrary-dimensional parameter space of branching ratios.
\section{Simplified Limits on Narrow Resonances}
\label{sec:simpLimits}
In this section, we review the formulation, presented in Refs.~\cite{Chivukula:2016hvp,Chivukula:2017lyk}, leading to the model-independent simplified parameter $\zeta$ allowing one to compare bounds from data with the easily calculated product of BRs corresponding to production and decay of a narrow resonance as well as its width-to-mass ratio. We begin with the resonance's partonic cross section.
The tree-level cross section for resonant production of a state $R$ from initial state partons $i,\, j$ and decaying to final state $x,\, y$ can be written in the Breit-Wigner form as
%
\begin{align}
\hat{\sigma}_{ij \rightarrow R \rightarrow xy} (\hat{s}) = 16 \pi \mathcal{N}_{ij} \left ( 1 + \delta_{ij} \right ) \frac{\Gamma_{xy} \, \Gamma_{ij}}{(\hat{s} - m_R^2)^2 + m_R^2 \Gamma_R^2} \, ,
\end{align}
%
with $\Gamma_{ab} \equiv \Gamma(R \rightarrow a b)$ the resonance's partial decay width, $\Gamma_R$ its total width, and $m_R$ its mass. Here, $\hat{s}$ is the partonic center of mass energy of the system, while $\mathcal{N}_{ij}$ is a ratio of spin and color factors,
%
\begin{align}
\mathcal{N}_{ij} \equiv \frac{N_{S_R}}{N_{S_i} N_{S_j}} \, \frac{C_R}{C_i C_j} \, ,
\end{align}
%
with $N_S$ and $C$, respectively, counting the number of spin and color states of the incoming partons and resonance. In the narrow width approximation (NWA), $\Gamma_R / m_R \ll 1$ and
%
\begin{align}
\frac{1}{(\hat{s} - m_R^2)^2 + m_R^2 \Gamma_R^2} \approx \frac{\pi}{m_R \Gamma_R} \delta (\hat{s} - m_R^2) \, .
\end{align}
%
Thus, the tree-level cross section in the NWA is given by
%
\begin{align}
\hat{\sigma}_{ij \rightarrow R \rightarrow xy} (\hat{s}) = 16 \pi^2 \mathcal{N}_{ij} \left ( 1 + \delta_{ij} \right ) \textrm{BR}_{xy} \, \textrm{BR}_{ij} \, \frac{\Gamma_R}{m_R} \, \delta(\hat{s} - m_R^2) \, ,
\end{align}
with $\textrm{BR}_{ab} \equiv \Gamma_{ab} / \Gamma_R$ the BR of the resonance.
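The accuracy of the NWA replacement above is easy to check numerically. The following sketch, with toy (hypothetical) mass and width values, integrates the Breit-Wigner kernel over a window of $\pm 50$ half-widths in $\hat{s}$ and compares the result to the $\delta$-function normalization $\pi / (m_R \Gamma_R)$:

```python
import math

def bw_integral(m, gamma, half_widths=50.0, n=200_000):
    """Midpoint-rule integral of 1/((s - m^2)^2 + m^2 gamma^2) over
    s in [m^2 - w, m^2 + w], with w = half_widths * m * gamma."""
    w = half_widths * m * gamma
    lo = m ** 2 - w
    ds = 2.0 * w / n
    total = 0.0
    for i in range(n):
        s = lo + (i + 0.5) * ds
        total += ds / ((s - m ** 2) ** 2 + (m * gamma) ** 2)
    return total

# Toy numbers: a 3 TeV resonance with Gamma/m = 1%, well inside the NWA.
m, gamma = 3000.0, 30.0
ratio = bw_integral(m, gamma) / (math.pi / (m * gamma))
```

Even this finite window reproduces about 99\% of the $\delta$-function normalization; the small deficit sits in the far Breit-Wigner tails.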
For hadron colliders, the partonic cross section is related to the experimentally observable cross section by convolving it with the hadrons' parton distribution functions (PDFs). For proton-proton colliders like the LHC, we have
%
\begin{align}
\sigma_{p p \rightarrow R \rightarrow xy}(s) = 16 \pi^2 \mathcal{N}_{ij} \left ( 1 + \delta_{ij} \right ) \textrm{BR}_{ij} \, \textrm{BR}_{xy} \, \frac{\Gamma_R}{m_R} \left [ \frac{1}{s} \frac{d L_{ij}}{d \tau} \right ]_{\tau = \frac{m_R^2}{s}} \, ,
\end{align}
%
with $s$ the proton-proton center of mass energy. Here, $dL_{ij} / d\tau$ is the parton luminosity function,
%
\begin{align}
\frac{d L_{ij}}{d \tau} \equiv \frac{1}{1 + \delta_{ij}} \int_\tau^1 \frac{dx}{x} \left [ f_i(x,\, \mu_F^2) f_j \left (\frac{\tau}{x},\, \mu_F^2 \right ) + f_j(x,\, \mu_F^2) f_i \left (\frac{\tau}{x},\, \mu_F^2 \right ) \right ] \, ,
\end{align}
%
with $f_i$ the PDF for parton $i$, $x$ the fraction of the proton's momentum carried by the parton, and $\mu_F$ the factorization scale. If multiple partons contribute to the same experimental signal (e.g. light quark production or decay), this can be extended to a sum over initial and/or final state partons,
%
\begin{align}
\sigma_{pp \rightarrow R \rightarrow XY}(s) = 16 \pi^2 \sum_{i^\prime j^\prime} \textrm{BR}_{i^\prime j^\prime} \sum_{xy \in XY} \textrm{BR}_{xy} \, \frac{\Gamma_R}{m_R} \left [ \sum_{ij} \omega_{ij} \mathcal{N}_{ij} \frac{1 + \delta_{ij}}{s} \frac{dL_{ij}}{d \tau} \right ]_{\tau = \frac{m_R^2}{s}} \, ,
\end{align}
%
with $X Y$ the observable final state. The weight function $\omega_{ij}$,
%
\begin{align}
\omega_{ij} \equiv \frac{\textrm{BR}_{ij}}{\sum_{i^\prime j^\prime} \textrm{BR}_{i^\prime j^\prime}} \, ,
\end{align}
%
lies between 0 and 1 such that $\sum \omega_{ij} = 1$ by construction; $\omega_{ij}$ represents the fraction of the resonance's total production rate due to each individual partonic channel.
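To illustrate how $dL_{ij}/d\tau$ is evaluated in practice, the sketch below performs the convolution numerically on a logarithmic grid in $x$. The PDF shapes are crude toy stand-ins, not the CT18 set used later in this paper, so the resulting numbers are illustrative only:

```python
import math

# Toy valence-like PDF shapes (illustrative only -- NOT the CT18 set used
# elsewhere in this paper; normalisations and exponents are made up).
def f_u(x):
    return 2.0 * x ** -0.5 * (1.0 - x) ** 3

def f_d(x):
    return 1.0 * x ** -0.5 * (1.0 - x) ** 4

def dL_dtau(fi, fj, tau, n=4000):
    """dL_ij/dtau = 1/(1 + delta_ij) * Int_tau^1 (dx/x) [f_i(x) f_j(tau/x)
    + f_j(x) f_i(tau/x)], evaluated with a midpoint rule on a log-x grid."""
    delta = 1.0 if fi is fj else 0.0
    lo = math.log(tau)
    h = -lo / n                            # step in log(x); integrate log(tau) -> 0
    total = 0.0
    for k in range(n):
        x = math.exp(lo + (k + 0.5) * h)   # dx/x = d(log x)
        total += fi(x) * fj(tau / x) + fj(x) * fi(tau / x)
    return h * total / (1.0 + delta)

tau = (3000.0 / 13000.0) ** 2              # e.g. a 3 TeV resonance at 13 TeV
lum_ud = dL_dtau(f_u, f_d, tau)            # distinct partons
lum_uu = dL_dtau(f_u, f_u, tau)            # identical partons (delta_ij = 1)
```

Note that the explicit symmetrization makes $dL_{ij}/d\tau$ manifestly independent of the ordering of $i$ and $j$, while the $1/(1+\delta_{ij})$ factor avoids double counting for identical partons.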
We are now ready to define the $\zeta$ parameter,\footnote{The arrangement of kroenecker deltas in this definition of the $\zeta$ parameter differs slightly from that of Refs.~\cite{Chivukula:2016hvp,Chivukula:2017lyk}. This definition of $\zeta$ is more convenient because it has two distinct upper limits instead of four, depending only on whether or not the production modes also contribute to the observed final state.}
%
\begin{align}\label{eq:zeta}
\zeta \equiv \sum_{ij} \textrm{BR}_{ij} \sum_{xy \in XY} \textrm{BR}_{xy} \, \frac{\Gamma_R}{m_R} = \frac{\sigma_{p p \rightarrow R \rightarrow XY}(s)}{16 \pi^2} \left [ \sum_{ij} \omega_{ij} \mathcal{N}_{ij} \frac{1 + \delta_{ij}}{s} \frac{dL_{ij}}{d \tau} \right ]^{-1}_{\tau = \frac{m_R^2}{s}} \, .
\end{align}
%
Note that $\zeta$ retains much of the model-independence of the experimental search, which depends predominantly on the spin and helicity of the resonance, while translating the constraint into purely partonic quantities. As defined in eq.~\eqref{eq:zeta}, $\zeta$ is bounded from above by $\Gamma_R / m_R$ ($\Gamma_R / 4 m_R$) when there is overlap (no overlap) between initial and final states. As we are working in the NWA where $\Gamma_R / m_R \lesssim 10 \%$, this corresponds to an upper bound on $\zeta$ of $1/10$ ($1/40$) when there is overlap (no overlap) between initial and final states.
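The two upper limits on $\zeta$ are simple enough to verify directly. The sketch below, with arbitrary illustrative BR configurations, scans the product of branching ratios at the NWA boundary $\Gamma_R / m_R = 0.1$:

```python
def zeta(br_prod, br_decay, width_over_mass):
    """zeta = (sum of production BRs) x (sum of observed-final-state BRs)
    x Gamma_R / m_R, following the definition in the text."""
    return sum(br_prod) * sum(br_decay) * width_over_mass

# Case 1: initial and final states overlap, so both BR sums can reach 1
# simultaneously and zeta is bounded by Gamma/m (= 1/10 at the NWA edge).
z_overlap = zeta([1.0], [1.0], 0.10)

# Case 2: distinct states share the BR budget, BR_prod + BR_decay <= 1;
# the product b(1 - b) peaks at 1/4, giving zeta <= Gamma/(4m) = 1/40.
z_distinct = max(zeta([b], [1.0 - b], 0.10) for b in (i / 100 for i in range(101)))
```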
\section{Ternary Diagrams}
\label{sec:ternary}
In this section, we introduce a prescription for extending the simplified limits framework to situations which necessitate considering constraints in a parameter space of more than one dimension. We begin by discussing the situations considered in this paper: the limitations of the original simplified limits formulation and combining constraints from multiple independent experimental final states.
The simplified parameter $\zeta$ defined in eq.~\eqref{eq:zeta} offers a relatively model-independent conversion from the limits on $\sigma_\textrm{prod} \times \textrm{BR}$ to partonic quantities which more directly describe the properties of the resonance: its mass, width, and BRs. There are, however, several situations that strain the applicability of these one-dimensional limits, which we explore here. The first is the situation where there is more than one production mode. In this case, deconvolving the hadronic PDFs introduces an ambiguity in constraints due to the weight factors $\omega_{ij}$, which are unknown \textit{a priori} without the introduction of additional model-dependent assumptions. An example of this would be Drell-Yan (DY) production of a heavy $Z^\prime$, which leads to a band in limits on $\zeta$. See the right panel of fig.~\ref{fig:qqWprimeZprimeZeta} in sec.~\ref{sec:spin1} for an example of this scenario.
The second scenario we consider involves combining searches from multiple experimentally distinguishable final states. Combining the results of multiple searches offers increased statistics and is particularly effective when the experiments in question are similarly sensitive to two or more distinguishable final states. While monochannel searches place constraints on $\sigma_\textrm{prod} \times \textrm{BR}$, doing so for combined searches requires making an additional model-dependent assumption about the relationship between the various channels' BRs. The question we therefore wish to address is how best to represent model-independent constraints within the multidimensional space of parameters necessitated by the above two situations.
In the framework of simplified limits, the relevant model-independent quantities are the resonance's BRs, its mass, and its total width. In combining channels, a model-independent scenario would be one where constraints on the properties of the resonance could be displayed within the space of BRs. Consider first the scenario of a resonance with one production mode and two decay modes. Here, we are naively forced to place limits within a three-dimensional space of BRs. However, we also have the sum rule
%
\begin{align}
\sum_{i = 1}^3 \textrm{BR}_i = 1 \, ,
\end{align}
%
reducing the space of independent BRs by one. Ternary diagrams,\footnote{Perhaps the most familiar example of a ternary diagram in particle physics is the Dalitz plot for three-body decays~\cite{Dalitz:1953cp}, where the sum of the two-body invariant masses of the final state is constrained by the kinematics of the system, e.g. for particle 0 decaying into particles 1, 2, and 3, the constraint is $m_{12}^2 + m_{13}^2 + m_{23}^2 = \sum_{i = 0}^3 m_i^2$.} representations of the space of three variables which sum to a constant, are ideally suited to displaying constraints within this parameter space.
Fig.~\ref{fig:ternaryEx1} shows an example of a ternary diagram, where at each point in the diagram the sum of BRs is one. The tick marks on the axes are skewed to indicate the lines of constant $\textrm{BR}_i$. Lines of constant $\textrm{BR}_2$ run parallel to the $\textrm{BR}_1$ axis, lines of constant $\textrm{BR}_3$ are parallel to the $\textrm{BR}_2$ axis, and lines of constant $\textrm{BR}_1$ are parallel to the $\textrm{BR}_3$ axis. The point $P$ is labeled as an example, corresponding to $\{ \textrm{BR}_1,\, \textrm{BR}_2,\, \textrm{BR}_3 \} = \{0.5,\, 0.4,\, 0.1\}$ with $\sum_{i = 1}^3 \textrm{BR}_i = 1$ by construction. Within this parameter space, one may then plot contours of upper limits on the remaining model parameters. In the simplified limits framework, one may therefore use ternary diagrams to display constraints on $m_R$ for fixed values of $\Gamma_R / m_R$ or on $\Gamma_R / m_R$ for fixed values of $m_R$. We present examples of ternary diagrams, displaying upper limits on $\Gamma_R / m_R$, in sec.~\ref{sec:apps}.
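For readers who wish to reproduce such diagrams, the barycentric-to-Cartesian conversion can be sketched as follows; the corner assignment below is one common convention, chosen purely for illustration:

```python
import math

def ternary_to_xy(b1, b2, b3):
    """Map barycentric coordinates (BR1, BR2, BR3), summing to 1, onto the
    plane.  Corner convention (one common choice): BR1 = 1 at the origin,
    BR2 = 1 at (1, 0), and BR3 = 1 at the apex (1/2, sqrt(3)/2)."""
    assert abs(b1 + b2 + b3 - 1.0) < 1e-9, "BRs must sum to 1"
    return b2 + 0.5 * b3, 0.5 * math.sqrt(3.0) * b3

# The example point P of the ternary diagram: {BR1, BR2, BR3} = {0.5, 0.4, 0.1}.
x, y = ternary_to_xy(0.5, 0.4, 0.1)
```

Contours of constant $m_R$ or $\Gamma_R / m_R$ can then be drawn by evaluating the constraint on a grid of valid BR triplets and mapping each triplet through this transformation.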
%
\begin{figure}
\centering
\includegraphics[width=0.65\textwidth]{ternaryEx1.pdf}
\caption{Example of a ternary diagram representing the space of BRs, with each point within the diagram satisfying the constraint $\sum_{i=1}^3 \textrm{BR}_i = 1$. The dashed lines represent hypothetical experimental constraints from combining limits, showing contours of constant $m_R$ or $\Gamma_R / m_R$. The point $P$ corresponds to $\{ \textrm{BR}_1,\, \textrm{BR}_2,\, \textrm{BR}_3 \} = \{0.5,\, 0.4,\, 0.1\}$.}
\label{fig:ternaryEx1}
\end{figure}
%
Above, we considered the scenario in which one production mode and two decay modes saturate all possible BRs. Interpreting the ternary diagrams for models of resonances with more than three BRs turns out to be straightforward. For a resonance with $n$ BRs, we instead have $\sum_{i = 1}^3 \textrm{BR}_i = 1 - \sum_{i = 4}^n \textrm{BR}_i$. We can thus define ``effective'' BRs,
%
\begin{align}
\widetilde{\textrm{BR}}_i \equiv \textrm{BR}_i \left ( 1 - \sum_{j = 4}^n \textrm{BR}_j \right )^{-1} \, ,
\end{align}
%
which satisfy the unitary sum rule, $\sum_{i = 1}^3 \widetilde{\textrm{BR}}_i = 1$, implicit in the construction of ternary diagrams. To interpret constraints displayed on the ternary diagrams in the framework of simplified limits, one must then also define an effective width,
%
\begin{align}
\widetilde{\Gamma}_R \equiv \Gamma_R \left ( 1 - \sum_{j = 4}^n \textrm{BR}_j \right )^2 \, .
\end{align}
%
A ternary diagram with sides spanning the range $[0,\,1]$ in this context generically displays the space of effective BRs, where $\widetilde{\textrm{BR}} \ge \textrm{BR}$ and $\widetilde{\Gamma}_R \le \Gamma_R$ with the equalities saturated only when the resonance has exactly three decay modes. In many cases it is expected that only a few channels will have similar experimental sensitivity over a wide range of the parameter space, and ternary diagrams then capture the most interesting region of the possible decay modes. However, a discussion of the generalization of this method to N-simplexes is presented in sec.~\ref{sec:simplex}.
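These rescalings are straightforward to apply in code. The sketch below uses the radion BM branching ratios quoted in sec.~\ref{sec:spin0} as input; the helper name is ours, not from the experimental analyses:

```python
def effective_quantities(shown, others):
    """Rescale the three displayed BRs by 1/(1 - sum of remaining BRs) so
    they obey the unitary sum rule of the ternary diagram; the width picks
    up the square of the same suppression factor."""
    scale = 1.0 - sum(others)
    return [b / scale for b in shown], scale ** 2

# Radion BM branching ratios at M_phi = 3 TeV, as quoted in the text:
shown = [0.47, 0.23, 0.058]      # WW, ZZ, gg -- the three plotted BRs
others = [0.23, 0.009]           # hh, t tbar -- absorbed into the tildes
eff_brs, width_factor = effective_quantities(shown, others)
```

The rescaled values land close to the $\widetilde{\textrm{BR}}$ values quoted in sec.~\ref{sec:spin0} (the small residual differences come from the rounded input BRs), and the effective width is suppressed by roughly a factor of $0.58$.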
With this framework in place, we have introduced the means of extending searches for narrow resonances into the multidimensional parameter space of BRs. Within the simplified limits framework, this allows us to unfold the ambiguity introduced by deconvolving the hadronic PDFs when there is more than one production mode. In the case of multiple experimentally distinguishable final states, it allows us to combine search results without introducing further model-dependent assumptions about the relationship between BRs. In what follows, we explore specific examples of each of these situations.
\section{Applications}
\label{sec:apps}
Searches for resonances in the diboson final state have been an important part of the search for BSM physics at the LHC, having been studied in detail by both ATLAS~\cite{Aad:2020tps,Aad:2020ddw,Aad:2019fbh} and CMS~\cite{Sirunyan:2019jbg}. Models usually considered for such searches include composite and little Higgs models~\cite{Marzocca:2012zn,Bellazzini:2014yua,Schmaltz:2005ky}, heavy vector triplet (HVT) models~\cite{deBlas:2012qp,Pappadopulo:2014qza}, and models of gravity in warped extra dimensions~\cite{Randall:1999ee,Agashe:2007zd}. The efficiency of a given search depends primarily on the spin and helicity of the resonance while remaining relatively model-independent otherwise. In this section, we explore the implications of searches for spin-0, 1, and 2 narrow resonances, using diboson final states as examples. We apply the simplified limits framework discussed above to existing searches by the ATLAS collaboration.
For the conversion of experimental constraints and the calculation of the simplified limits parameter $\zeta$ we use the CT18 NLO central PDF set~\cite{Hou:2019efy}. The RS radion BM is calculated using leading-order formulae given in ref.~\cite{Barger:2011qn}. Possible large higher-order corrections to the radion's $gg$ coupling, parameterized by a K-factor, and heavy quark couplings are not included. Such corrections can be comparable to those of heavy Higgs production via gluon fusion where $\textrm{K} > 1$, leading to conservative estimates of the experimental constraints presented here. The HVT BMs are calculated at leading-order using CalcHEP 3.8.7~\cite{Belyaev:2012qa} with the model file provided by ref.~\cite{Pappadopulo:2014qza}. The RS graviton BM is calculated at leading-order using formulae given in refs.~\cite{Agashe:2007zd,Bijnens:2001gh}.
Combining the statistics of multiple BSM physics searches requires delicate attention to the details of each experiment. In what follows, we do not attempt to reproduce a complete statistical analysis of the experiments considered. Instead, we make the conservative assumption that the combination of constraints from multiple channels is given by the strongest of the bounds,
%
\begin{align}
\sigma_\textrm{prod}^{95} = \textrm{min} \! \left [ \sigma_1^{95} , \, \sigma_2^{95} , \, \dots \right ] \, .
\end{align}
%
A detailed combination of the statistics of each search will in general produce stronger results, however such an analysis is beyond the scope of this paper. For more thorough discussions of the statistics involved in such searches, see e.g. refs.~\cite{Read:2002hq,Cowan:2010js}.
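The conservative combination rule above amounts to a pointwise minimum over channels on a shared mass grid; a minimal sketch, with made-up limit values, is:

```python
def combine_limits(*channel_limits):
    """Conservative combination: at each mass point, keep the strongest
    (smallest) 95% CL upper limit among the channels."""
    return [min(point) for point in zip(*channel_limits)]

# Hypothetical 95% CL upper limits (arbitrary units) on a shared mass grid
# in GeV -- the numbers below are made up purely for illustration.
masses = [1000, 2000, 3000, 4000]
ww_limit = [0.80, 0.30, 0.10, 0.05]
zz_limit = [0.50, 0.40, 0.08, 0.07]
combined = combine_limits(ww_limit, zz_limit)
```

In practice the channels' limits are reported on different mass grids, so an interpolation step would precede the minimum; we omit it here for clarity.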
\subsection{Spin-0 Resonance}
\label{sec:spin0}
We first consider a neutral spin-0 resonance ($\phi$) produced via gluon fusion. In ref.~\cite{Aad:2020ddw}, ATLAS reports constraints on the production of a Randall-Sundrum (RS) radion~\cite{Goldberger:1999uk} in the combined $WW+ZZ$ channels, with the ratio $\textrm{BR}(\phi \rightarrow WW) / \textrm{BR}(\phi \rightarrow ZZ)$ fixed by the model. The neutral scalar radion is a feature of extra-dimensional models, introduced to stabilize the size of the extra dimension. The radion coupling to SM fields is inversely proportional to the vacuum expectation value of the radion field, $\Lambda_\phi$, and proportional to the mass (mass squared) of the SM fermions (bosons) it couples to. As the light fermion couplings to the radion are suppressed by their masses, the radion is predominantly produced via gluon fusion. We use $\Lambda_\phi = 3$~TeV and $k L = 35$ as a BM, where $k L$ is the size of the extra dimension. The radion's BRs are roughly constant for masses above a few TeV, with sizable BRs of
%
\begin{align}
\begin{array}{l l l}
\textrm{BR}_{WW} = 47\% \, , \;\;\;\;\;\;\;\; &\textrm{BR}_{hh} = 23\% \, , \;\;\;\;\;\;\;\; & \textrm{BR}_{gg} = 5.8\% \, , \\
\textrm{BR}_{ZZ} = 23\% \, , & \textrm{BR}_{t \overline{t}} = 0.9\% \, , &
\end{array}
\end{align}
%
at $M_\phi = 3$~TeV.
%
\begin{figure}
\centering
\includegraphics[width=0.75\textwidth]{GGWWZZspin0zeta.pdf}
\caption{Constraints on narrow scalar resonance production from gluon fusion in the $WW$ and $ZZ$ channels~\cite{Aad:2020ddw}. Constraints from ATLAS are converted to upper limits on the $\zeta$ parameter, eq.~\eqref{eq:zeta}, represented by solid red and blue lines for the $WW$ and $ZZ$ channels, respectively. The gray shaded region corresponds to the upper limit on the product of branching ratios times $\Gamma_\phi / M_\phi = 10\%$, approximately where the NWA breaks down. The dot-dashed lines show the predictions for the radion in our BM model.}
\label{fig:GGWWZZspin0zeta}
\end{figure}
%
Fig.~\ref{fig:GGWWZZspin0zeta} shows the ATLAS constraints from the individual channels converted into the language of simplified limits. Generally, the constraints from each channel are quite competitive, except below $M_\phi \sim 1$~TeV and between approximately 2.0 and 2.6~TeV, where the $ZZ$ channel is significantly more constraining. Above $M_\phi \sim 4.5$~TeV, the search does not constrain any model which satisfies the NWA assumption. Also shown are the predictions for our BM radion, which set limits on the mass of the radion of $M_\phi \gtrsim 3.3~(2.9)$~TeV in the $WW$ ($ZZ$) channel. Other models would be represented by different curves in $\zeta$ vs. $M_\phi$, corresponding to different limits on the mass of the resonance.
%
\begin{figure}
\centering
\includegraphics[width=0.75\textwidth]{GGWWZZspin0m33.pdf}
\caption{Ternary diagram showing constraints on a narrow scalar resonance produced via gluon fusion and decaying to $WW$ and $ZZ$~\cite{Aad:2020ddw} for a resonance mass of 3.3~TeV. The colors depict constraints on $\widetilde{\Gamma}_\phi / M_\phi$ (labeled right of the legend), which have been translated to the corresponding constraint on $\Lambda_\phi$ for the BM value $kL = 35$ (labeled left of the legend). The black dot represents the location in the parameter space of BRs corresponding to the radion in our BM model, and the black line through the legend labels the value of the experimental constraint on the BM radion. The BM prediction $\widetilde{\Gamma}_\phi / M_\phi \sim 10^{-1.5}$ (corresponding to $\Lambda_\phi = 3$~TeV), being slightly above the experimental limit, demonstrates that the model is just slightly excluded.}
\label{fig:spin0tern}
\end{figure}
%
Applying the principles of sec.~\ref{sec:ternary}, fig.~\ref{fig:spin0tern} shows constraints on $\widetilde{\Gamma}_\phi / M_\phi$ for $M_\phi = 3.3$~TeV, which is near the experimental limit for our BM radion. The radion BM values are labeled by a point in the plane of $\widetilde{\textrm{BR}}$s and by a line on the legend of $\widetilde{\Gamma}_\phi / M_\phi$. In general, one can see that as the BRs for either the production mode or both decay modes decrease, the constraints unsurprisingly also weaken. Conversely, constraints are strongest when $\widetilde{\textrm{BR}}(\phi \rightarrow gg) \sim \widetilde{\textrm{BR}}(\phi \rightarrow VV)$ for a single channel, with the other channel's BR negligible. Contours of constant $\log_{10} \widetilde{\Gamma}_\phi / M_\phi$ are shown in increments of $10^{-1}$, and the BM value $\widetilde{\Gamma}_\phi / M_\phi \sim 10^{-1.5}$ roughly corresponds to the experimental limit, which is labeled by a solid black line through the legend. Sharp edges in the contours of constant $\widetilde{\Gamma}_\phi / M_\phi$ occur when the constraints from both channels are equal, and highlight the region of parameter space where one would expect to gain the most from a proper statistical combination of the independent searches. As the BM radion model also predicts a sizable $\textrm{BR}(\phi \rightarrow hh)$, we also see the application of effective BRs, where $\widetilde{\textrm{BR}}_{WW}\; (\textrm{BR}_{WW}) = 62\%\; (47\%)$, $\widetilde{\textrm{BR}}_{ZZ}\; (\textrm{BR}_{ZZ}) = 31\%\; (23\%)$, and $\widetilde{\textrm{BR}}_{gg}\; (\textrm{BR}_{gg}) = 7.7\%\; (5.8\%)$.
%
\begin{figure}
\includegraphics[width=0.475\textwidth]{GGWWZZspin0m10.pdf}
\hspace*{0.025\textwidth}
\includegraphics[width=0.475\textwidth]{GGWWZZspin0m20.pdf}
\vspace*{5mm} \\
\includegraphics[width=0.475\textwidth]{GGWWZZspin0m30.pdf}
\hspace*{0.025\textwidth}
\includegraphics[width=0.475\textwidth]{GGWWZZspin0m40.pdf}
\caption{Ternary diagrams showing constraints on scalar narrow resonance production from gluon fusion in the $WW$ and $ZZ$ channels~\cite{Aad:2020ddw} for a selection of resonance masses. The colors depict constraints on $\widetilde{\Gamma}_\phi / M_\phi$ (labeled right of the legends), which have been translated to the corresponding constraint on $\Lambda_\phi$ for the BM value $kL = 35$ (labeled left of the legends). The black dot represents the location in the parameter space corresponding to the radion in our BM model, and the black line through the legend labels the value of the experimental constraint on the BM radion.}
\label{fig:GGWWZZspin0ternary}
\end{figure}
%
To demonstrate how the diagrams change as a function of mass, fig.~\ref{fig:GGWWZZspin0ternary} shows constraints on $\widetilde{\Gamma}_\phi / M_\phi$ for several choices of resonance mass between $1.0~\textrm{TeV} \le M_\phi \le 4.0~\textrm{TeV}$. Here one can clearly see how the constraints weaken as the mass of the resonance increases, while within the space of $\widetilde{\textrm{BR}}$s the relative trends remain similar to those discussed above for fig.~\ref{fig:spin0tern}. Because the radion couplings are all proportional to $\Lambda_\phi^{-1}$, increasing $\Lambda_\phi$ decreases the radion's total width and vice versa, while keeping individual BRs constant at leading order. Therefore, each point on the ternary diagram also translates directly into a constraint on $\Lambda_\phi$ for a fixed value of $k L$. Constraints on $\Lambda_\phi$ for the BM value $k L = 35$ are also labeled to the left of the legend of each diagram, and the experimental constraint on the BM point is labeled by a solid black line through the legend. For the BM value $\Lambda_\phi = 3$~TeV, the radion is shown to be excluded in the diagrams corresponding to $M_\phi \le 3.0$~TeV, while it is unconstrained in the $M_\phi = 4.0$~TeV diagram.
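The translation between the width contours and $\Lambda_\phi$ follows directly from the scaling just described: with every radion coupling $\propto \Lambda_\phi^{-1}$, the width scales as $\Lambda_\phi^{-2}$ at fixed mass and BRs. A sketch with illustrative (made-up) limit numbers:

```python
import math

def lambda_lower_limit(lam_ref, wom_ref, wom_excluded):
    """Since every radion coupling scales as 1/Lambda_phi, the total width
    scales as 1/Lambda_phi^2 at fixed M_phi with BRs unchanged at leading
    order.  A reference point with width-to-mass ratio wom_ref at
    Lambda = lam_ref then excludes Lambda < lam_ref * sqrt(wom_ref / wom_excluded)."""
    return lam_ref * math.sqrt(wom_ref / wom_excluded)

# Illustrative numbers only: BM point Lambda_phi = 3 TeV predicting
# Gamma/M = 10^-1.5, against a hypothetical limit Gamma/M < 10^-1.6.
lam_min = lambda_lower_limit(3.0, 10.0 ** -1.5, 10.0 ** -1.6)  # in TeV
```

This is how each contour of constant $\widetilde{\Gamma}_\phi / M_\phi$ in the figures maps onto the $\Lambda_\phi$ labels to the left of the legends.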
\subsection{Spin-1 Resonance}
\label{sec:spin1}
For a spin-1 resonance, we consider the production and decay of either a charged ($W^{\prime\,\pm}$) or neutral ($Z^\prime$) narrow vector resonance. Here gluon fusion production is forbidden by charge conservation or Yang's theorem, so production occurs via either Drell-Yan (DY) or vector boson fusion (VBF) processes. For a charged vector resonance, DY production occurs via $q \overline{q}^\prime$ annihilation and in most cases is dominated by the valence quark combinations $u \overline{d}$ or $d \overline{u}$. For a neutral vector resonance, however, production occurs via $q \overline{q}$ and is dominated by a combination of valence quarks, $u \overline{u} + d \overline{d}$. As the ratio of couplings of the neutral resonance to up and down quarks is not known \textit{a priori}, deconvolving the proton PDFs introduces an ambiguity to the $\zeta$ parameter, as discussed in sec.~\ref{sec:ternary}.
The production and decay of $W^{\prime\,\pm}$ and $Z^\prime$ can be parameterized in terms of an HVT model, which is a phenomenological framework proposed by the authors of ref.~\cite{Pappadopulo:2014qza} to cover a variety of explicit BSM models. In the HVT model, an $SU(2)_L$ vector triplet $V^\prime$ is introduced, and its interactions with the SM fields are parameterized by a variety of couplings. The parameter $g_V$ characterizes the typical interaction strength of the new triplet, and the parameters $c_H$ and $c_F$ characterize deviations from this strength in coupling to the Higgs and fermion currents, respectively. A factor of $g^2 / g_V^2$ is inserted in the coupling of $V^\prime$ to the SM fermions to make contact with many specific extended gauge models in the literature, where $g$ denotes the $SU(2)_L$ gauge coupling. Therefore, the interaction of $V^\prime$ with the Higgs doublet current (and therefore with the longitudinal components of the SM $W$ and $Z$) is parameterized by $(c_H g_V)$. Likewise, the coupling of $V^\prime$ to the SM fermions is controlled by the combination $(g^2/g_V) c_F$.
BM values are typically chosen to represent the range of various specific BSM extensions. Model A, with $g_V = 1$, is representative of a weakly coupled scenario such as in theories with an extended gauge symmetry. Model B, with $g_V = 3$, is chosen to represent a strongly-coupled composite Higgs scenario.\footnote{Our BM values for $c_F$ and $c_H$ are calculated from the relations in appendix~A of ref.~\cite{Pappadopulo:2014qza} with $\widetilde{c}_{VW} = -1$, $\widetilde{c}_H = \widetilde{c}_F = 0$ for model A and $\widetilde{c}_{VW} = -\widetilde{c}_H = 1$, $\widetilde{c}_F = 0$ for model B.} In both BM scenarios we assume a universal coupling of $V^\prime$ to all SM fermions; all interactions not mentioned above, which only contribute to the decay of $V^\prime$ via the small mixing between $V^\prime$ and the SM weak gauge bosons, are turned off. The two BM models predict dramatically different decays for the heavy spin-one resonance. Model A predicts a BR of a few percent to bosons, with dijets representing the dominant decay mode, while model B predicts decays predominantly into dibosons.
%
\begin{figure}
\includegraphics[width=0.4875\textwidth]{qqWZWHspin1zeta.pdf}
\hspace*{0.025\textwidth}
\includegraphics[width=0.4875\textwidth]{qqWWZHspin1zeta.pdf}
\caption{Constraints on charged (left panel) and neutral (right panel) narrow vector resonances from DY production in the $WV$~\cite{Aad:2020ddw} and $VH$~\cite{Aad:2020tps} channels. Constraints from ATLAS are converted to upper limits on the $\zeta$ parameter, eq.~\eqref{eq:zeta}, represented by solid red and blue lines for the $WV$ and $VH$ channels, respectively. The shaded bands in the constraints of the right panel are due to the model-dependent relationship between the production mode couplings of the $Z^\prime$, with the lower constraint corresponding to purely $u \overline{u}$ production and the upper constraint corresponding to purely $d \overline{d}$ production. The gray shaded region corresponds to the upper limit on the product of branching ratios times $\Gamma_{V^\prime} / M_{V^\prime} = 10\%$, where the NWA is no longer valid. Also shown are the HVT $p p \rightarrow V^\prime \rightarrow WV$ theory predictions for BM models A (black dot-dashed line) and B (black dotted line).}
\label{fig:qqWprimeZprimeZeta}
\end{figure}
%
We recast constraints from ATLAS narrow resonance searches in the $WV$~\cite{Aad:2020ddw} and $VH$~\cite{Aad:2020tps} channels in terms of the simplified limits framework. The left plot of fig.~\ref{fig:qqWprimeZprimeZeta} shows constraints on the production of a narrow $W^\prime$ resonance in the $WZ$ and $WH$ channels, as well as the $W^\prime \rightarrow WZ$ predictions for the HVT BM models A and B. Although not shown, predictions for the $W^\prime \rightarrow WH$ channel are quite similar. We see that the $WZ$ channel dominates the constraints, with a narrow $W^\prime$ excluded for masses below 3.8 (4.0)~TeV for model A (model B). The right plot of fig.~\ref{fig:qqWprimeZprimeZeta} shows constraints on the production of a narrow $Z^\prime$ resonance in the $WW$ and $ZH$ channels, as well as the $Z^\prime \rightarrow WW$ predictions for the HVT BM models A and B. Here, the band in the constraints is due to the model-dependent relationship between the up and down quark couplings to the $Z^\prime$. The lower limit of $\zeta$ corresponds to $\omega_{u \overline{u}} = 1$ and $\omega_{d \overline{d}} = 0$, while the upper limit of $\zeta$ corresponds to $\omega_{d \overline{d}} = 1$ and $\omega_{u \overline{u}} = 0$, with other values of $\omega$ falling between these two extremes. The HVT $Z^\prime$ is excluded for masses up to between $3.1~(3.3)$~TeV and $3.7~(4.0)$~TeV for model A (model B), with the exact limit depending on the relative up- and down-quark couplings.
%
\begin{figure}
\includegraphics[width=0.475\textwidth]{qqWZWHspin1m10.pdf}
\hspace*{0.025\textwidth}
\includegraphics[width=0.475\textwidth]{qqWZWHspin1m20.pdf}
\vspace*{5mm} \\
\includegraphics[width=0.475\textwidth]{qqWZWHspin1m30.pdf}
\hspace*{0.025\textwidth}
\includegraphics[width=0.475\textwidth]{qqWZWHspin1m40.pdf}
\caption{Ternary diagrams showing constraints on a charged vector narrow resonance from DY production in the $WZ$~\cite{Aad:2020ddw} and $WH$ channels~\cite{Aad:2020tps}. The colors depict constraints on $\widetilde{\Gamma}_{W^\prime} / M_{W^\prime}$ for a selection of resonance masses. Points labeled ``HVT A'' and ``HVT B'' mark the locations in the parameter space corresponding to the BMs for model A and model B, respectively. The black lines through the legend (labeled ``EXP A'' and ``EXP B'') show the values of the experimental constraints on the BM models, while the tick marks to the left of the legend (labeled ``HVT A'' and ``HVT B'') show their predicted values.}
\label{fig:qqWprimeSpin1ternary}
\end{figure}
%
As the production of a charged vector resonance suffers no ambiguity in the initial state, we may apply the same principles used in sec.~\ref{sec:spin0} to combine limits from the $WZ$ and $WH$ channels. Fig.~\ref{fig:qqWprimeSpin1ternary} shows the combined constraints on $\widetilde{\Gamma}_{W^\prime} / M_{W^\prime}$ for several choices of resonance mass. At $M_{W^\prime} = 1.0$~TeV, there are no constraints from the $WH$ search, so the ternary diagram displays only constraints from the $WZ$ channel. On the other hand, the constraints at $M_{W^\prime} = 2.0$~TeV are similar for both channels, which is reflected in the diagram. The remaining diagrams are again dominated by the $WZ$ channel, in agreement with the left panel of fig.~\ref{fig:qqWprimeZprimeZeta}. The HVT BMs are also displayed on the ternary diagrams, both by points labeling their predicted branching ratios and by tick marks on the left of the legend labeling the predicted values of $\log_{10} ( \widetilde{\Gamma}_{W^\prime} / M_{W^\prime} )$. The solid lines through the legends label the experimental constraints on the BM points, so that one can more easily see that both BMs are excluded for $M_{W^\prime} \le 3.0$~TeV, while model A is unconstrained at $M_{W^\prime} = 4.0$~TeV and model B is very close to the experimental constraint.
%
\begin{figure}
\includegraphics[width=0.475\textwidth]{uuddWWspin1m10.pdf}
\hspace*{0.025\textwidth}
\includegraphics[width=0.475\textwidth]{uuddWWspin1m20.pdf}
\vspace*{5mm} \\
\includegraphics[width=0.475\textwidth]{uuddWWspin1m30.pdf}
\hspace*{0.025\textwidth}
\includegraphics[width=0.475\textwidth]{uuddWWspin1m40.pdf}
\caption{Ternary diagrams showing constraints on a neutral vector narrow resonance from DY production in the $WW$~\cite{Aad:2020ddw} and $ZH$ channels~\cite{Aad:2020tps}. The colors depict constraints on $\widetilde{\Gamma}_{Z^\prime} / M_{Z^\prime}$ for a selection of resonance masses. Points labeled ``HVT A'' and ``HVT B'' mark the locations in the parameter space corresponding to the BMs for model A and model B, respectively. The black lines through the legend (labeled ``EXP A'' and ``EXP B'') show the values of the experimental constraints on the BM HVT models, while the tick marks to the left of the legend (labeled ``HVT A'' and ``HVT B'') show their predicted values.}
\label{fig:uuddWWspin1ternary}
\end{figure}
%
Conversely, while searches for a neutral vector resonance via DY production may not be ideal candidates for model-independent combined searches, one may instead probe its coupling to light quarks via ternary diagrams. We take for example the search for a DY-produced $Z^\prime$ resonance in the $WW$ channel, which dominates the constraints when compared to the $ZH$ channel for a majority of the HVT parameter space. The main production modes are via either $u \overline{u}$ or $d \overline{d}$ annihilation, and fig.~\ref{fig:uuddWWspin1ternary} shows constraints on $\widetilde{\Gamma}_{Z^\prime} / M_{Z^\prime}$ over a range of masses. The HVT BMs are also displayed on the ternary diagrams, both by points labeling their predicted branching ratios and by tick marks on the left of the legend labeling the predicted values of $\log_{10} ( \widetilde{\Gamma}_{Z^\prime} / M_{Z^\prime} )$. The solid lines through the legends label the experimental constraints on the BM points, so that one can more easily see that both BMs are excluded for $M_{Z^\prime} \le 3.0$~TeV, while they are both unconstrained at $M_{Z^\prime} = 4.0$~TeV. In these examples, the tilt in the contours of constant $\log_{10} ( \widetilde{\Gamma}_{Z^\prime} / M_{Z^\prime} )$ follows from the roughly 2-to-1 luminosity ratio of $u$-to-$d$ quarks in the proton's PDF.\footnote{For a discussion of $Z^\prime$ properties characterized in part by their couplings to quarks, see e.g. Ref.~\cite{Carena:2004xs}.}
\subsection{Spin-2 Resonance}
\label{sec:spin2}
For a spin-2 resonance, we consider the production of a narrow resonance via gluon fusion in the $WW$ and $ZZ$ channels. Heavy spin-2 resonances are a generic feature of models of quantum gravity in extra dimensions, where Kaluza-Klein (KK) towers of heavy gravitons ($G_\textrm{KK}$) are predicted. Typical BM models for considering the lightest graviton mode are the original RS model, referred to as RS1~\cite{Randall:1999ee}, where all SM fields are localized on the IR brane, and the bulk RS model~\cite{Agashe:2007zd}, where the SM fields are allowed to propagate in the bulk. In RS1, the localization of the graviton in the warped bulk near the IR brane induces couplings to all SM fields which are only TeV suppressed, so the graviton has significant BRs to light fermions. On the other hand, in the bulk RS model light fields are localized toward the UV brane which greatly suppresses their couplings to gravitons. Instead, the graviton is mainly produced via gluon fusion or VBF and has significant BRs to $t \overline{t}$, $W W$, and $Z Z$. In what follows we will therefore consider the bulk RS model.
%
\begin{figure}
\centering
\includegraphics[width=0.75\textwidth]{GGWWZZspin2zeta.pdf}
\caption{Constraints on spin-2 narrow resonance production from gluon fusion in the $WW$ and $ZZ$ channels~\cite{Aad:2020ddw}. Constraints from ATLAS are converted to upper limits on the $\zeta$ parameter, eq.~\eqref{eq:zeta}, represented by solid red and blue lines for the $WW$ and $ZZ$ channels, respectively. The gray shaded region corresponds to the upper limit on the product of branching ratios times $\Gamma_\phi / M_\phi = 10\%$, approximately where the NWA breaks down. The dot-dashed lines show the predictions from the Bulk RS BM model.}
\label{fig:GGWWZZspin2zeta}
\end{figure}
%
The graviton's couplings to SM fields are determined by the parameter $k/\overline{M}_\textrm{Pl}$ where $k$ is the warped curvature scale and $\overline{M}_\textrm{Pl}$ is the reduced Planck mass. For our BM model of graviton production, we assume $k/\overline{M}_\textrm{Pl}=1.0$. Fig.~\ref{fig:GGWWZZspin2zeta} shows constraints from ATLAS searches in the $WW$ and $ZZ$ channels~\cite{Aad:2020ddw}, converted into the language of simplified limits. The constraints from each channel are competitive except for the region below $M_{G_\textrm{KK}} \lesssim 1.2$~TeV, where the $ZZ$ channel dominates. Also shown is the prediction from the bulk RS BM model, which sets a limit on the mass of the graviton of $M_{G_\textrm{KK}} \gtrsim 1.7~(1.5)$~TeV in the $WW$ ($ZZ$) channel.
%
\begin{figure}
\includegraphics[width=0.475\textwidth]{GGWWZZspin2m10.pdf}
\hspace*{0.025\textwidth}
\includegraphics[width=0.475\textwidth]{GGWWZZspin2m15.pdf}
\vspace*{5mm} \\
\includegraphics[width=0.475\textwidth]{GGWWZZspin2m20.pdf}
\hspace*{0.025\textwidth}
\includegraphics[width=0.475\textwidth]{GGWWZZspin2m25.pdf}
\vspace*{5mm}
\caption{Ternary diagrams showing constraints on spin-2 narrow resonance production from gluon fusion in the $WW$ and $ZZ$ channels~\cite{Aad:2020ddw}. The colors depict constraints on $\widetilde{\Gamma}_R / m_R$ for a selection of resonance masses. The black dot represents the location in the parameter space corresponding to the bulk RS BM model. The black line through the legend labeled ``EXP'' shows the value of the experimental constraints on the bulk RS BM model, while the tick mark to the left of the legend labeled ``RS'' shows its predicted value.}
\label{fig:GGWWZZspin2ternary}
\end{figure}
%
Fig.~\ref{fig:GGWWZZspin2ternary} shows ternary diagrams associated with these searches, displaying constraints on $\widetilde{\Gamma}_{G_\textrm{KK}} / M_{G_\textrm{KK}}$ for a selection of graviton masses. The predictions from the bulk RS BM are also shown, both by a point labeling their predicted branching ratios and by a tick mark on the left of the legend labeling the predicted value of $\log_{10} ( \widetilde{\Gamma}_{G_\textrm{KK}} / M_{G_\textrm{KK}} )$. The solid line through the legend labels the experimental constraint on the BM point, so that one can more easily see that the BM model is excluded for $M_{G_\textrm{KK}} \le 1.5$~TeV and unconstrained for $M_{G_\textrm{KK}} \ge 2.0$~TeV, in agreement with fig.~\ref{fig:GGWWZZspin2zeta}.
\section{Further Generalization: N-Simplexes}
\label{sec:simplex}
In this section, we indicate how to further extend the simplified limits framework to situations in which more than a few branching ratios are important sources of experimental constraints on new resonances.
In sec.~\ref{sec:ternary}, we introduced a means of extending searches for narrow resonances into the multidimensional parameter space of BRs, focusing on situations where only three BR's were of greatest importance: either two production modes and one primary decay mode, or one production mode and a pair of experimentally distinguishable final states. Section~\ref{sec:apps} explored a number of specific examples, showing how ternary diagrams can provide insight.
Let us now consider the situation where a new resonance has N+1 branching ratios, all able to provide significant experimental constraints. Then we can generalize the work of sec.~\ref{sec:ternary}, by noting that the branching ratios now obey the sum rule:
%
\begin{align}
\sum_{i = 1}^{N+1} \textrm{BR}_i = 1 \, ,
\end{align}
%
which tells us that the dimension of the space of independent BR's is N. Just as compositional data in three variables can be represented by a ternary plot, where the ratios summing to 1 are plotted within a two-dimensional equilateral triangle, so compositional data in N+1 variables can be represented in a simplicial sample space where the ratios summing to 1 are located within an N-simplex.\footnote{A simplex generalizes a triangle to arbitrary dimensions; it is the minimal polytope in the space of a given number of dimensions. A 0-simplex is a point; a 1-simplex is a line segment; a 2-simplex is a triangle; a 3-simplex is a tetrahedron; a 4-simplex is a 5-cell, and so forth.}
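To make the ternary (2-simplex) case concrete, the following sketch maps a triple of branching ratios summing to 1 onto Cartesian coordinates inside the unit equilateral triangle. The corner convention and function name are our own choice, purely for illustration, and are not tied to the plotting conventions used for the figures in this article.

```python
# Sketch: place a point with branching ratios (BR1, BR2, BR3), summing to 1,
# inside the unit equilateral triangle of a ternary diagram.
import math

def ternary_to_xy(br1, br2, br3):
    assert abs(br1 + br2 + br3 - 1.0) < 1e-9   # the unitary sum rule
    # Corners: BR1 -> (0, 0), BR2 -> (1, 0), BR3 -> (1/2, sqrt(3)/2).
    x = br2 + 0.5 * br3
    y = 0.5 * math.sqrt(3) * br3
    return x, y

print(ternary_to_xy(1.0, 0.0, 0.0))      # the BR1 corner: (0.0, 0.0)
print(ternary_to_xy(1/3, 1/3, 1/3))      # the centroid of the triangle
```

The same barycentric construction generalizes to an N-simplex: the point associated with N+1 ratios summing to 1 is the corresponding convex combination of the simplex vertices.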
Moreover, if the new resonance has M+1 branching ratios, of which only N+1 are able to provide significant experimental constraints, we can generalize the notion of ``effective'' branching ratios from sec.~\ref{sec:ternary} as well. Now, we have $\sum_{i = 1}^{N+1} \textrm{BR}_i = 1 - \sum_{i = N+2}^{M+1} \textrm{BR}_i$. We can thus define ``effective'' BRs,
%
\begin{align}
\widetilde{\textrm{BR}}_i \equiv \textrm{BR}_i \left ( 1 - \sum_{j = N+2}^{M+1} \textrm{BR}_j \right )^{-1} \, ,
\end{align}
%
which satisfy the unitary sum rule, $\sum_{i = 1}^{N+1} \widetilde{\textrm{BR}}_i = 1$, implicit in the construction of simplex diagrams. To interpret constraints displayed on the N-simplex diagrams in the framework of simplified limits, one must then also define an effective width,
%
\begin{align}
\widetilde{\Gamma}_R \equiv \Gamma_R \left ( 1 - \sum_{j = N+2}^{M+1} \textrm{BR}_j \right )^2 \, .
\end{align}
%
An N-simplex diagram with sides spanning the range $[0,\,1]$ in this context generically displays the space of effective BRs, where $\widetilde{\textrm{BR}} \ge \textrm{BR}$ and $\widetilde{\Gamma}_R \le \Gamma_R$ with the equalities saturated only when $N=M$.
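The two definitions above are simple to compute; the following sketch implements them directly. All branching-ratio and width values here are made up purely for illustration.

```python
# Sketch of the effective-BR and effective-width definitions above.

def effective_brs(brs_constrained, brs_unconstrained):
    """BR_i -> BR_i / (1 - sum of unconstrained BRs)."""
    leftover = 1.0 - sum(brs_unconstrained)
    return [br / leftover for br in brs_constrained]

def effective_width(gamma, brs_unconstrained):
    """Gamma_R -> Gamma_R * (1 - sum of unconstrained BRs)^2."""
    return gamma * (1.0 - sum(brs_unconstrained)) ** 2

# Example: three constrained channels, one unconstrained channel with BR = 0.4.
brs_eff = effective_brs([0.3, 0.2, 0.1], [0.4])
gamma_eff = effective_width(0.05, [0.4])
assert abs(sum(brs_eff) - 1.0) < 1e-12   # unitary sum rule for effective BRs
assert gamma_eff <= 0.05                 # effective width never exceeds Gamma_R
```

The two assertions check the properties noted in the text: the effective BRs satisfy the unitary sum rule, and $\widetilde{\Gamma}_R \le \Gamma_R$, with equality only when no BR is unconstrained.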
Again, within the simplified limits framework, this construction allows us to handle a wider variety of scenarios. On the one hand, we can unfold the ambiguity introduced by deconvolving the hadronic PDFs when there is more than one production mode. On the other hand, we can combine search results for multiple experimentally distinguishable final states without introducing further model-dependent assumptions about the relationship between BRs.
While one cannot easily plot a higher-dimensional N-simplex as a two-dimensional image, one can nonetheless still perform a statistical analysis to discern how the experimental constraints shape the allowed region of the N-simplex and check whether a given new resonance's location within the N-simplex is in the allowed region. One might even illustrate this in a journal article by displaying the ternary diagram sub-space of the full N-simplex that provides the strongest limit on the model in question.
\section{Discussion}
\label{sec:disc}
In this article we have introduced a more integrative method of presenting constraints on production and decay of narrow resonances when multiple branching ratios yield valuable experimental information about the properties of new resonances. The method utilizes the NWA to parameterize constraints in terms of products of BRs, the mass of the resonance, and the total width of the resonance. We have seen that representing the results of searches for narrow resonances in terms of the parameterization of the simplified limits framework---$\widetilde{\textrm{BR}}$s, $\widetilde{\Gamma}_R$, and $m_R$---provides a natural context for combining the statistics from multiple search channels for a common resonance.
We have largely focused on cases where only three channels (two production and one decay mode, or vice versa) are relevant, by employing ternary diagrams to display the combined constraints on the expanded space of BRs. We have illustrated applications to resonances of spin 0, spin 1, and spin 2, arising in a variety of beyond-the-standard-model scenarios. Our approach clearly offers a more model-independent method of interpreting constraints from multiple channels compared to the traditional product of production cross section times BR, which would require making assumptions about the relationship between decay BRs. It is also applicable to situations with multiple production modes by unfolding the uncertainty inherent in the one-dimensional simplified limits parameter $\zeta$. This method is complementary to traditional limits, with $\sigma \times \textrm{BR}$ offering the cleanest display of constraints at the expense of sometimes introducing specific model assumptions, while ternary diagrams can encompass a more model-independent parameter space, easily translatable to a variety of disparate models.
While the applications we considered in detail had three-channel parameter spaces, we have also discussed how the use of ternary diagrams can be readily generalized to the use of N-simplex diagrams for situations where additional branching ratios also offer valuable experimental constraints on the properties of new resonances. Making use of this more generalized method will, of course, rely on the availability of data about the multiple branching ratios. Indeed, having access to digital data sets for searches combining multiple experimental channels would be ideal for documenting and leveraging limits on a many-dimensional parameter space, capable of encompassing all of the initial and final state BRs relevant for the searches being considered. Recently, there has been a tremendous effort to make digitized data available to all researchers with the introduction of the HEPData repository~\cite{Maguire:2017ypu}.
As a closing thought, we would like to advocate for experimental collaborations to provide these larger digital data sets (parameterized in terms of the BRs, total width, and mass of the new resonance) to supplement the information presentable in the traditional article format. This will enable the data to be most fully leveraged to explore the widest possible range of models in detail, enabling constraints to be quickly understood for a plethora of interesting theories.
\section{Acknowledgments}
\label{sec:ack}
We thank J. Duarte for suggesting the application of this work to digital data repositories. We also thank D. Foren, K.A. Mohan, D. Sengupta, and X. Wang for useful discussions and comments. This material is based upon work supported by the National Science Foundation under Grant No.~PHY-1915147. P.I. was supported by the CUniverse research promotion project of Chulalongkorn University in Bangkok, Thailand, under Grant No.~CUAASC.
\section{Introduction}
\label{sec:intro}
The main aim in proof complexity is to understand the complexity of theorem proving. Arguably, what is even more important is to establish techniques for lower bounds, and the recent history of computational complexity speaks volumes about how difficult it is to develop general lower bound techniques. Understanding the size of proofs is important for at least two reasons. The first is its tight relation to the separation of complexity classes: NP vs.\ coNP for propositional proofs, and NP vs.\ PSPACE in the case of proof systems for quantified boolean formulas (QBF). New superpolynomial lower bounds for specific proof systems rule out particular classes of non-deterministic poly-time algorithms for problems in coNP or PSPACE, thereby providing an orthogonal approach to the predominantly machine-oriented view of computational complexity.
The second reason to study lower bounds for proofs is the analysis of
SAT and QBF solvers: powerful algorithms that efficiently solve the
classically hard problems of SAT and QBF for large classes of
practically relevant formulas. Modern SAT solvers routinely solve
industrial instances in even millions of variables for various
applications. Even though QBF solving is at a much earlier state, due
to its power to express problems more succinctly, QBF even applies to further fields such
as formal verification or
planning~\cite{Rin07,BM08a,EglyKLP14}. Each
successful run of a solver on an unsatisfiable instance can be
interpreted as a proof of unsatisfiability; and many modern SAT solvers
based on conflict-driven clause learning (CDCL) are known to
implicitly generate resolution proofs.
Thus, understanding the complexity of resolution proofs helps obtain
worst-case bounds for the performance of CDCL-based SAT solvers.
The picture is more complex for QBF solving, as there exist two main, yet conceptually very different paradigms: CDCL-based and expansion-based solving. A variety of QBF resolution systems have been designed to capture the power of QBF solvers based on these paradigms. The core system of these is Q-Resolution (\qrc), introduced by Kleine B\"{u}ning et al.\ \cite{DBLP:journals/iandc/BuningKF95}. This has been augmented to capture ideas from CDCL solving, leading to long-distance resolution (\lqrc) \cite{DBLP:journals/fmsd/BalabanovJ12}, universal resolution (\qurc) \cite{Gelder12}, and their combination \lquprc \cite{BWJ14}.
Powerful proof systems for expansion-based solving were developed in the form of \ecalculus \cite{JM15}, and the stronger \irc and \irmc \cite{BCJ14}. Recent findings show that CDCL and expansion are indeed orthogonal paradigms as the underlying proof systems from the two categories are incomparable with respect to simulations \cite{BCJ15}.
Understanding which general techniques can be used to show lower bounds for proof systems is of paramount importance in proof complexity. For propositional proof systems we have a number of very effective techniques, most notably the size-width technique of Ben-Sasson and Wigderson \cite{BW01}, deriving size from width bounds, game characterisations (e.g.\ \cite{Pud00,BK14}), the approach via proof-complexity generators (cf.\ \cite{Kra11}), and feasible interpolation. Feasible interpolation, first introduced by \Krajicek\ \cite{Kra97}, is a particularly successful paradigm that transfers circuit lower bounds to size of proof lower bounds. The technique has been shown to be effective for resolution \cite{Kra97}, cutting planes \cite{Pud97} and even strong Frege systems for modal and intuitionistic logics \cite{Hru09}. However, feasible interpolation fails for strong propositional systems as Frege systems under plausible cryptographic and number-theoretic assumptions \cite{KP98,BPR00,BDGMP04}.
The situation is drastically different for QBF proof systems, where we currently possess a very limited bag of techniques.
In particular, the classical size-width technique of Ben-Sasson and Wigderson \cite{BW01} by which most resolution lower bounds are obtained drastically fails in \qrc \cite{BCMS16}.
At present we only have the recent strategy extraction technique of Beyersdorff et al.\ \cite{BCJ15}, which works for \qrc as well as for stronger QBF Frege systems \cite{BBC16,BP16}, a game characterisation of the very weak tree-like \qrc \cite{BCS15}, and ad-hoc lower bound arguments for various systems \cite{BCJ15,DBLP:journals/iandc/BuningKF95}. In addition, Balabanov et al.\ \cite{BWJ14} develop methods to lift some previous lower bounds from \qrc to stronger systems.
We now proceed to explain the main contributions of the article.
\subsection*{1.\ A general lower bound technique.} We show that the feasible interpolation technique applies to all resolution-type QBF proof systems, whether expansion or CDCL based. This provides the first truly general lower bound technique for QBF proof systems, and---at the same time---hugely extends the scope of the feasible interpolation method.
(We note that in recent work \cite{BCMS-FST16}, this technique has
also been shown to apply to a QBF version of the cutting planes proof system.)
In a nutshell, feasible interpolation works for true implications
$A(\vec{p},\vec{q}) \to B(\vec{p},\vec{r})$ (or, equivalently, false conjunctions $A(\vec{p},\vec{q}) \wedge \neg B(\vec{p},\vec{r})$), which by Craig's
interpolation theorem \cite{Cra57} possess interpolants $C(\vec{p})$
in the common variables $\vec{p}$. Such interpolants, even though they
exist, may not be of polynomial size \cite{Mun84}. However, it may be
the case that we can always efficiently extract such interpolants from
a proof of the implication in a particular proof system $P$, and in
this case, the system $P$ is said to admit feasible interpolation. If
we know that a particular class of formulas does not admit small
interpolants (either unconditionally or under suitable assumptions),
then there cannot exist small proofs of the formulas in the system
$P$. Here we show that this feasible interpolation theorem holds for
arbitrarily quantified formulas $A(\vec{p},\vec{q})$ and
$B(\vec{p},\vec{r})$ above, when the common variables $\vec{p}$ are
existentially quantified before all other variables.
\subsection*{2.\ New lower bounds for QBF systems.} As our second
contribution we exhibit new hard formulas for QBF resolution
systems. Of course, exponential lower bounds for these
systems follow immediately from the known lower bounds for resolution
(in these systems, refuting a totally quantified false sentence that
uses only existential quantifiers degenerates to classical
resolution). However, we can better understand the power of such systems to
handle arbitrary QBFs if we have more examples of false QBFs that use
existential and universal quantifiers in non-trivial ways and that are
hard to refute in these systems.
It is fair to say that we are currently quite short of hard
examples: research so far has mainly concentrated on formulas of Kleine B\"{u}ning et al.\ \cite{DBLP:journals/iandc/BuningKF95} and their modifications \cite{BCJ15,BWJ14}, a principle by Janota and Marques-Silva \cite{JM15}, and a class of parity formulas recently introduced by Beyersdorff et al.\ \cite{BCJ15}. This again is in sharp contrast with classical proof complexity where a wealth of different combinatorial principles as well as random formulas are known to be hard for resolution.
Our new hard formulas are QBF contradictions formalising the easy and appealing fact that a graph cannot both have and not have a $k$-clique. The trick is that in our formulation, each interpolant for these formulas has to solve the $k$-clique problem. Using our interpolation theorem together with the exponential lower bound for the monotone circuit complexity of clique \cite{AB87}, we obtain exponential lower bounds for the clique-no-clique formulas in all CDCL and expansion-based QBF resolution systems.
We remark that conceptually our clique-no-clique formulas are different from and indeed simpler than the clique-colour formulas used for the interpolation technique in classical proof systems. This is due to the more succinct expressivity of QBF. Indeed it is not clear how the clique-no-clique principle could even be formulated succinctly in propositional logic.
\subsection*{3.\ Comparison to strategy extraction.} On a conceptual level,
we uncover a tight relationship between feasible interpolation and
strategy extraction. Strategy extraction is a very desirable property
of QBF proof systems and is known to hold for the main
resolution-based systems. From a refutation of a false QBF,
a winning strategy for the universal player can be efficiently extracted.
Like feasible interpolation, the lower bound technique based on
strategy extraction by Beyersdorff et al.\ \cite{BCJ15,BBC16} also transfers circuit lower bounds to proof size bounds. However, instead of monotone circuit bounds as in the case of feasible interpolation, the strategy extraction technique imports $\mathsf{AC}^0$ circuit lower bounds (or further circuit bounds for circuit classes directly corresponding to the lines in the proof system \cite{BBC16}). Here we show that each feasible interpolation problem can be transformed into a strategy extraction problem, where the interpolant corresponds to the winning strategy of the universal player on the first universal variable. This clarifies that indeed feasible interpolation can be viewed as a special case of strategy extraction.
\subsection*{Organisation of the paper}
The remaining part of this article is organised as follows. In Section~\ref{sec:prelim} we review the definitions and relations of relevant QBF proof systems. In Section~\ref{sec:interpolation} we start by recalling the overall idea for feasible interpolation and show interpolation theorems for the strongest CDCL-based system \lquprc as well as the strongest expansion-based proof system \irmc. This implies feasible interpolation for all QBF resolution-based systems. Further we show that all these systems even admit monotone feasible interpolation. In Section~\ref{sec:lower-bounds} we obtain the new lower bounds for the clique-no-clique formulas. Section~\ref{sec:strat-extraction} reformulates interpolation as a strategy extraction problem.
\section{Preliminaries}
\label{sec:prelim}
A literal is a boolean variable or its negation. We say a literal $x$ is complementary to the literal $\neg x$ and vice versa.
A {\em clause} is a disjunction of literals. For notational
convenience, we sometimes also refer
to a clause as a set of literals.
The empty clause is denoted by~$\Box$, and is semantically equivalent
to false. We denote true by 1 and false by 0.
A formula in {\em conjunctive normal form} (CNF) is a
conjunction of clauses.
For a literal $l=x$ or $l=\lnot x$, we write $\var(l)$ for~$x$ and extend this notation to $\var(C)$ for a clause $C$.
Let $\alpha$ be any partial assignment. For a clause $C$, we write
$C|_{\alpha}$ for the clause obtained after applying the partial
assignment $\alpha$ to $C$. For example, applying $\alpha:~x_1 \leftarrow 0$ to the clause $C \equiv (x_1 \vee x_2 \vee x_3)$
yields $C|_{\alpha} \equiv (x_2 \vee x_3)$, and applying $\alpha':~x_1 \leftarrow 1$ to the same clause gives $C|_{\alpha'} \equiv 1$.
In the former case, we say that $C$ evaluates to the clause $(x_2 \vee x_3)$ under the assignment $\alpha$, and in the latter case, it
evaluates to $1$ under the assignment $\alpha'$.
Similarly, for a formula $F$, we write
$F|_{\alpha}$ for the restriction of the formula to the partial
assignment.
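As a concrete illustration (not part of the formal development), the restriction operation can be sketched in a few lines of Python; the encoding of clauses as frozensets of signed integers, with $-v$ standing for $\neg x_v$, is ours:

```python
def restrict_clause(clause, alpha):
    """Apply a partial assignment alpha (dict: var -> 0/1) to a clause.

    A clause is a frozenset of non-zero integers; -v denotes the negation
    of variable v.  Returns True if some literal is satisfied, otherwise
    the reduced clause."""
    result = set()
    for lit in clause:
        var, positive = abs(lit), lit > 0
        if var not in alpha:
            result.add(lit)          # literal untouched by alpha
        elif alpha[var] == (1 if positive else 0):
            return True              # clause evaluates to 1
        # otherwise the literal is falsified and simply dropped
    return frozenset(result)

def restrict_cnf(cnf, alpha):
    """Restrict every clause; satisfied clauses disappear from the matrix."""
    restricted = [restrict_clause(c, alpha) for c in cnf]
    return [c for c in restricted if c is not True]
```

On the example above, restricting $(x_1 \vee x_2 \vee x_3)$ by $x_1 \leftarrow 0$ yields the clause $(x_2 \vee x_3)$, and restricting by $x_1 \leftarrow 1$ yields $1$.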
Quantified Boolean Formulas (QBFs) extend propositional logic with
the boolean quantifiers $\forall$ and $\exists$. They have the
standard semantics that $\forall x. F$ is
satisfied by the same truth assignments to its free variables as
$F|_{x = 0} \wedge F|_{x = 1}$, and $\exists x. F$ as $F|_{x = 0} \vee
F|_{x = 1}$.
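This semantics translates directly into a (worst-case exponential) recursive evaluator; the following Python sketch, with our own encoding of the prefix as a list of (quantifier, variable) pairs, is meant only to make the two recursive clauses concrete:

```python
def restrict(cnf, var, value):
    """Assign value (0 or 1) to var in a CNF given as a list of literal-sets."""
    sat_lit = var if value == 1 else -var
    out = []
    for clause in cnf:
        if sat_lit in clause:
            continue                       # clause satisfied; drop it
        out.append(clause - {var, -var})   # falsified occurrences removed
    return out

def eval_qbf(prefix, cnf):
    """Textbook recursion: forall x. F is F|x=0 and F|x=1, while
    exists x. F is F|x=0 or F|x=1.  For a closed QBF, once the prefix is
    exhausted every surviving clause is empty, i.e. falsified."""
    if not prefix:
        return not cnf
    (q, x), rest = prefix[0], prefix[1:]
    v0 = eval_qbf(rest, restrict(cnf, x, 0))
    v1 = eval_qbf(rest, restrict(cnf, x, 1))
    return (v0 and v1) if q == 'forall' else (v0 or v1)
```

For instance, $\forall x_1 \exists x_2.\,(x_1 \vee x_2) \wedge (\neg x_1 \vee \neg x_2)$ is true (choose $x_2 = \neg x_1$), while swapping the quantifiers makes it false.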
We assume that QBFs are fully quantified (no free variables),
in \emph{closed prenex form}, and with a CNF
matrix, i.e.\ we consider the form $Q_1 x_1 \dots
Q_n x_n. \phi$, where $Q_i \in \{\exists, \forall \}$, and
the formula $\phi$ is in CNF and is defined on the set of variables
$X = \{x_1, \ldots , x_n\}$. Further, we assume that complementary
literals do not appear in the same clause; that is, no clause in the
matrix is tautological.
The propositional part $\phi$ is called the {\em matrix} and
the rest the {\em prefix}.
We abbreviate the prefix by the notation $\mathcal{Q} x$.
The \emph{index} $\ind(x)$ of a variable is its position in the
prefix; thus $\ind(x_i)=i$. When $Q_i=\exists$ ($Q_i=\forall$,
respectively), we say that $x_i$ is an existential variable (a
universal variable, resp.). A literal $l$ is said to be
existential (universal) if $\var(l)$ is existential (universal,
resp.). For a literal $l$, we write $\ind(l)$ for $\ind(\var(l))$.
Often it is useful to think of a QBF $\mathcal{Q} x .\, \phi$
as a \emph{game} between the \emph{universal} and the
\emph{existential player}.
In the $i$-th step of the game, player $Q_i$ assigns a value to the
variable $x_i$.
The existential
player wins the game iff the matrix~$\phi$ evaluates to $1$ under
the assignment constructed in the game. The universal player wins
iff the matrix~$\phi$ evaluates to $0$.
Let $u$ be a universal variable with index~$i$. At the $i$-th step
of the game, when the universal player has to decide what value to
assign to $u$, all variables with index less than $i$ already have
values assigned to them. A \emph{strategy for $u$} is thus a function
from the set of assignments to the variables with index $<i$ to
$\{0,1\}$. A strategy for the universal player is a collection of strategies, one for each universal
variable. A strategy is a \emph{winning strategy} for the universal player if,
using it, the universal player can win any possible game,
irrespective of the strategy used by the existential
player.
A QBF is false iff there exists a \emph{winning strategy}
for the universal player~(\cite{Goultiaeva-ijcai11}, \cite[Sec.\,4.2.2]{AroraBarak09},
\cite[Chap.\,19]{Pap94}).
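The game-theoretic reading can also be made concrete: once a universal strategy is fixed, each game is determined by the existential player's choices, so a candidate winning strategy can be verified by enumerating all existential plays. The following Python sketch (our own small-scale checker, not a solver) does exactly that:

```python
from itertools import product

def universal_strategy_wins(prefix, cnf, strategy):
    """Check that `strategy` wins every game for the universal player.

    prefix: list of ('exists' | 'forall', var); cnf: list of frozensets of
    signed integers; strategy: dict mapping each universal var to a function
    from the assignment built so far (dict var -> 0/1) to 0/1.
    Against a fixed universal strategy every game is determined by the
    existential choices, so enumerating those bit-vectors covers all games.
    """
    exist_vars = [v for q, v in prefix if q == 'exists']
    for bits in product((0, 1), repeat=len(exist_vars)):
        choice = dict(zip(exist_vars, bits))
        assignment = {}
        for q, v in prefix:
            if q == 'exists':
                assignment[v] = choice[v]
            else:
                assignment[v] = strategy[v](dict(assignment))
        # the universal player wins iff some clause is falsified
        falsified = any(
            all(assignment[abs(l)] != (1 if l > 0 else 0) for l in c)
            for c in cnf)
        if not falsified:
            return False
    return True
```

For the false QBF $\exists x_1 \forall x_2.\,(x_1 \vee x_2) \wedge (\neg x_1 \vee \neg x_2)$, the strategy "play $x_2 := x_1$" wins for the universal player, whereas the constant strategy $x_2 := 0$ does not.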
\smallskip
\textbf{Resolution-based calculi for QBF.}
We now give a brief overview of the main existing resolution-based
calculi for QBF. For the technical proofs in this paper, full details
are needed only for two systems, \lquprc\ and \irmc, both
of which are included in the overview.
Recall that resolution for propositional proofs (where all variables
are existential) operates by inferring clauses, starting from the
clauses of the given formula (axioms), until the empty clause is
derived. From clauses $C\vee x$ and $D\vee \neg x$ that have been
already inferred, it can infer the clause $C \vee D$, by resolving on
the variable $x$. Here, $x$ is referred to as the pivot, and $C\vee D$
is the resolvent. The clauses $C\vee x$ and $D\vee \neg x$ are
referred to as parents of the clause $C\vee D$ in the proof. In a
representation of a proof as a graph, each clause in the proof is a
node, and edges are directed from parent to child.
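A single propositional resolution step is easy to state in code; the sketch below (our encoding, clauses again as frozensets of signed integers) also rejects tautological resolvents, which sound resolution proofs never need:

```python
def resolve(c1, c2, pivot):
    """Resolve c1 (containing pivot) with c2 (containing -pivot).

    Returns the resolvent, or None if it would be tautological, i.e. a
    complementary pair of literals survives in the result."""
    assert pivot in c1 and -pivot in c2, "pivot must occur with both polarities"
    resolvent = (c1 - {pivot}) | (c2 - {-pivot})
    if any(-l in resolvent for l in resolvent):
        return None
    return resolvent
```

Resolving $(x_1 \vee x_2)$ with $(\neg x_1 \vee x_3)$ on $x_1$ gives $(x_2 \vee x_3)$; resolving the unit clauses $(x_1)$ and $(\neg x_1)$ gives the empty clause $\Box$.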
This system can be augmented in various ways to
handle QBFs with universal variables.
We start by describing the proof systems modelling
\emph{CDCL-based QBF solving};
their rules are summarized in Figure~\ref{fig:allrules}. The most basic and important
system is \emph{Q-resolution (\qrc)} by Kleine B\"{u}ning et al.\
\cite{DBLP:journals/iandc/BuningKF95}. It is a resolution-like
calculus
that operates on QBFs in prenex form with CNF matrix. The lines in a \qrc proof are clauses. In addition to the axioms,
\qrc comprises the resolution rule S$\exists$R and universal reduction $\forall$-Red (cf.\ Figure~\ref{fig:allrules}).
Note that the conditions in S$\exists$R explicitly disallow inferring
a tautology; this syntactic restriction is essential for soundness.
\begin{figure}[h!]
\framebox{\parbox{\breite}
{
\begin{prooftree}
\AxiomC{}
\RightLabel{(Axiom)}
\UnaryInfC{$C$}
\end{prooftree}
\begin{minipage}{0.99\linewidth}
$C$ is a clause in the matrix.
\end{minipage}
\begin{prooftree}
\AxiomC{$D\vee u$}
\RightLabel{($\forall$-Red)}
\UnaryInfC{$D$}
\DisplayProof\hspace{0cm}
\AxiomC{$D\vee u^*$}
\RightLabel{($\forall$-Red$^*$)}
\UnaryInfC{$D$}
\end{prooftree}
\begin{minipage}{0.99\linewidth}
Literal $u$ (or $u^*$) is universal and
$\ind(u)\geq\ind(l)$ for all existential $l\in D$.
\end{minipage}
\begin{prooftree}
\AxiomC{$C_1\vee U_1\vee\{x\}$}
\AxiomC{$C_2 \vee U_2\vee\{\lnot{x}\}$}
\RightLabel{(Res)}
\BinaryInfC{$C_1\vee C_2\vee U$}
\end{prooftree}
\begin{minipage}{0.99\linewidth}
We consider four instantiations of the Res-rule:\\
\textbf{S$\exists$R:} $x$ is existential.\\ If $z\in C_1$, then $\lnot{z}\notin C_2$. $U_1=U_2=U=\emptyset$.\\
\textbf{S$\forall$R:} $x$ is universal. Other conditions same as S$\exists$R.\\
\textbf{L$\exists$R:} $x$ is existential. \\If $l_1\in C_1, l_2\in C_2$, $\var(l_1)=\var(l_2)=z$ then $l_1=l_2\neq z^*$.
$U_1, U_2$ contain only universal literals with $\var(U_1)=\var(U_2)$.
$\ind(x)<\ind(u)$ for each $u\in\var(U_1)$.\\
If $w_1\in U_1, w_2\in U_2$, $\var(w_1)=\var(w_2)=u$ then $w_1=\lnot w_2$, $w_1=u^*$ or $w_2=u^*$. $U=\{u^* \mid u\in \var(U_1)\}$.\\
\textbf{L$\forall$R:} $x$ is universal. Other conditions same as L$\exists$R.
\end{minipage}
\caption{The rules of CDCL-based proof systems}
\label{fig:allrules}
}}
\end{figure}
\emph{Long-distance resolution (\lqrc)} appears originally in the work of Zhang and Malik \cite{DBLP:conf/iccad/ZhangM02}
and was formalized into a calculus by Balabanov and Jiang \cite{DBLP:journals/fmsd/BalabanovJ12}.
It allows resolving clauses $C\vee x$ and $D\vee \neg x$ on an
existential variable $x$ even if $C$ and $D$ contain complementary
literals (where $C\vee D$ would be a tautology), provided the
complementary literals correspond to universal variables with index greater than the index of the pivot variable $x$.
It merges complementary literals of a universal variable~$u$
into the special literal~$u^*$ which then appears in the resolvent. We define $\ind(u^*) = \ind(u)$.
\lqrc uses the rules L$\exists$R, $\forall$-Red and $\forall$-Red$^*$ (cf.\ Figure~\ref{fig:allrules}).
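To make the merging mechanism tangible, here is a simplified Python sketch of a long-distance resolution step on an existential pivot. The encoding is ours: a literal is a pair (var, pol) with pol $\in \{1,-1,0\}$, where pol $= 0$ plays the role of the merged literal $u^*$; for brevity the sketch is more permissive than L$\exists$R (it does not insist that merged variables occur on both sides), so it illustrates only the index side condition:

```python
def ld_resolve(c1, c2, pivot, ind, is_universal):
    """Long-distance resolution on an existential pivot (simplified).

    Literals are pairs (var, pol), pol in {1, -1, 0}; pol == 0 encodes the
    merged literal var*.  `ind` maps variables to prefix positions and
    `is_universal` maps variables to booleans.  Clashing occurrences merge
    into var* only for universal variables right of the pivot; otherwise
    the step is blocked."""
    assert (pivot, 1) in c1 and (pivot, -1) in c2
    rest = {l for l in c1 | c2 if l[0] != pivot}
    resolvent = set()
    for v in {l[0] for l in rest}:
        pols = {p for (w, p) in rest if w == v}
        if len(pols) == 1:
            resolvent.add((v, pols.pop()))   # no clash: copy the literal
        elif is_universal[v] and ind[v] > ind[pivot]:
            resolvent.add((v, 0))            # clash merges into v*
        else:
            raise ValueError("blocked: clash on %r is not mergable" % (v,))
    return frozenset(resolvent)
```

Resolving $(x \vee u)$ with $(\neg x \vee \neg u)$ on $x$, where $u$ is universal and quantified after $x$, yields the clause $u^*$; with $u$ quantified before $x$ the step is blocked.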
\emph{QU-resolution (\qurc)} by Van Gelder \cite{Gelder12} removes the restriction from \qrc that the resolved variable must be an existential variable and allows resolution of universal variables. The rules of \qurc are S$\exists$R, S$\forall$R and $\forall$-Red (cf.\ Figure~\ref{fig:allrules}).
\emph{\lquprc} by Balabanov et al.\ \cite{BWJ14} extends \lqrc by allowing short and long distance resolution pivots to be universal. However, the pivot is never a merged literal $z^*$. \lquprc uses the rules L$\exists$R, L$\forall$R, $\forall$-Red and $\forall$-Red$^*$ (cf.\ Figure~\ref{fig:allrules}).
The second type of calculi models \emph{expansion-based QBF solving}. These calculi are
based on \emph{instantiation} of universal variables:
\ecalculus by Janota and Marques-Silva \cite{JM15}, \irc, and \irmc by Beyersdorff et al.\ \cite{BCJ14}. All these
calculi operate on clauses that comprise only existential variables from the original QBF, which
are additionally \emph{annotated} by a substitution to some universal variables, e.g.\ $\lnot
x^{0/u_1 1/u_2}$.
For any annotated literal $l^\sigma$, the substitution $\sigma$ must not make
assignments to variables at a higher quantification level than $l$, i.e.\ if
$u\in\domain(\sigma)$, then $u$ is universal and $\ind(u)<\ind(l)$.
To preserve this invariant, we use the \emph{auxiliary notation $l^{[\sigma]}$}, which for an existential literal $l$ and an assignment $\sigma$ to the universal variables
filters out all assignments that are not permitted,
i.e.\ $l^{[\sigma]}=l^{\comprehension{c/u\in\sigma}{\ind(u)<\ind(l), ~c \in \{0, 1 \} }}$.
As annotations can be partial assignments, we use auxiliary operations of \emph{completion} and \emph{instantiation}. For assignments
$\tau$ and $\mu$, we write $\complete{\tau}{\mu}$ for the assignment $\sigma$ defined as follows:
$\sigma(x)=\tau(x)$ if $x\in\domain(\tau)$,
otherwise $\sigma(x)=\mu(x)$ if $x\in\domain(\mu)\backslash\domain(\tau)$.
The operation $\complete{\tau}{\mu}$ is called \emph{completion} because $\mu$
provides values for variables not defined in $\tau$.
The operation is associative and therefore
we can omit parentheses. For an assignment $\tau$ and
an annotated clause $C$, the function $\instantiate(\tau,C)$ returns the annotated clause
$\comprehension{l^{[\complete{\sigma}{\tau}]}}{l^\sigma\in C}$. The system \irc is
defined in Figure~\ref{fig:IRC}.
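The completion and instantiation operators can be sketched directly from their definitions; in the Python illustration below (our encoding), annotations are dictionaries from universal variables to values, stored as frozensets of items inside clauses so that annotated literals are hashable:

```python
def complete(tau, mu):
    """The completion of tau by mu: tau takes precedence, mu fills the gaps."""
    out = dict(mu)
    out.update(tau)
    return out

def filter_ann(lit_var, sigma, ind):
    """l^[sigma]: keep only assignments to universals left of l in the prefix."""
    return {u: c for u, c in sigma.items() if ind[u] < ind[lit_var]}

def instantiate(tau, clause, ind):
    """inst(tau, C): complete every annotation with tau, then re-filter.

    An annotated literal is a pair (lit, annotation), the annotation being a
    frozenset of (universal-var, value) items."""
    out = set()
    for lit, ann in clause:
        sigma = complete(dict(ann), tau)
        out.add((lit, frozenset(filter_ann(abs(lit), sigma, ind).items())))
    return frozenset(out)
```

With prefix positions $\ind(u_1)=1$, $\ind(u_2)=2$, $\ind(x_3)=3$, instantiating the clause $x_3^{0/u_1}$ by $\tau = \{1/u_2\}$ yields $x_3^{0/u_1\,1/u_2}$: the existing annotation $0/u_1$ is kept, and $\tau$ only fills the gap at $u_2$.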
\begin{figure}[h!]
\framebox{\parbox{\breite}
{
\begin{prooftree}
\AxiomC{}
\RightLabel{(Axiom)}
\UnaryInfC{$\comprehension{l^{[\tau]}}{l\in C, \var(l)\text{ is existential}} $}
\end{prooftree}
$C$ is a clause from the matrix. \\$\tau=\comprehension{0/u}{u\text{ is universal in }C}$, where the notation $0/u$ for literals $u$ is shorthand for $0/x$ if $u=x$ and $1/x$ if $u=\neg x$.
\begin{prooftree}
\AxiomC{$x^\tau\lor C_1 $ }
\AxiomC{$\lnot x^\tau\lor C_2 $}
\RightLabel{(Res)}
\BinaryInfC{$C_1\union C_2$}
\end{prooftree}
\begin{prooftree}
\AxiomC{$C$}
\RightLabel{(Instantiation)}
\UnaryInfC{$\instantiate(\tau,C)$}
\end{prooftree}
$\tau$ is an assignment to universal variables with $\range(\tau) \subseteq \{0,1\}$.
\caption{The rules of \irc \cite{BCJ14}}\label{fig:IRC}
}}
\end{figure}
The calculus \irmc further extends \irc by enabling annotations containing $*$.
The rules of the calculus \irmc are presented in Figure~\ref{fig:IRMC}.
The symbol $*$ may be introduced by the merge rule, e.g.\ by
collapsing $x^{0/u}\lor x^{1/u}$ into $x^{*/u}$.
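The computation of the merged annotation $\xi$ from $\mu$ and $\sigma$ is a pointwise operation; a minimal Python sketch (our encoding, annotations as dictionaries, '*' as the merge marker):

```python
STAR = '*'

def merge_annotations(mu, sigma):
    """The annotation produced by the merging rule: equal domains required;
    agreeing values are kept, any disagreement (including '*' against a
    concrete value) collapses to '*'."""
    assert set(mu) == set(sigma), "merging needs equal domains"
    return {u: (mu[u] if mu[u] == sigma[u] else STAR) for u in mu}
```

For example, merging the annotations $\{0/u, 1/v\}$ and $\{1/u, 1/v\}$ gives $\{*/u, 1/v\}$, matching the collapse of $x^{0/u}\lor x^{1/u}$ into $x^{*/u}$ described above.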
\begin{figure}[h!]
\framebox{\parbox{\breite}
{
Axiom and instantiation rules as in \irc in Figure \ref{fig:IRC}.
\begin{prooftree}
\AxiomC{$x^{\tau\cup\xi}\lor C_1 $ }
\AxiomC{$\lnot x^{\tau\cup\sigma}\lor C_2 $}
\RightLabel{(Res)}
\BinaryInfC{$\instantiate(\sigma,C_1)\union\instantiate(\xi,C_2)$}
\end{prooftree}
\text{$\domain(\tau)$, $\domain(\xi)$ and $\domain(\sigma)$ are
mutually disjoint.} $\range(\tau)\subseteq\{0,1\}$
\begin{prooftree}
\AxiomC{$C\lor b^\mu\lor b^\sigma$}
\RightLabel{(Merging)}
\UnaryInfC{$C\lor b^\xi$}
\end{prooftree}
\text{$\domain(\mu)=\domain(\sigma)$.}
\text{$\xi=\comprehension{c/u}{c/u\in\mu,c/u\in\sigma}\union$}
\text{\hphantom{$\xi$}$\comprehension{*/u}{c/u\in\mu,d/u\in\sigma,c\neq d}$ }
\caption{The rules of \irmc \cite{BCJ14} }\label{fig:IRMC}
}}
\end{figure}
The simulation order of QBF resolution systems is shown in Figure~\ref{fig:sim-structure}. All proof systems have been exponentially separated (cf.\ \cite{BCJ15}).
\begin{figure}[h!]
\parbox{\breite}{
\centering
\begin{tikzpicture}[scale=1.1]
\node[calcn](n1) at (3,1) {{\sf Tree-}\qrc} ;
\node[calcn](n2) at (3,2) {\qrc} ;
\node[expcalcn](n3) at (0,2) {\ecalculus} ;
\node[calcn](n4) at (2,3) {\lqrc} ;
\node[calcn](n5) at (4,3) {\qurc} ;
\node[calcn](n6) at (3,4) {\lquprc} ;
\node[expcalcn](n8) at (0,3) {\irc} ;
\node[expcalcn](n9) at (0,4) {\irmc} ;
\draw(n4)--(n2);
\draw(n6)--(n4);
\draw(n2)--(n8);
\draw(n8)--(n3);
\draw (n3)--(n1)--(n2)--(n5);
\draw(n5)--(n6);
\draw(n8)--(n9)--(n4);
\end{tikzpicture}
}
\caption{The simulation order of QBF resolution systems. Systems on the left correspond to expansion-based solving, whereas the systems on the right are CDCL based.}\label{fig:sim-structure}
\end{figure}
We end this section with a definition that is used in later
sections. It generalises the notion of weakening.
For clauses containing only literals of the
form $x_i, \neg x_i$ (no $l^*$), clause $D$ weakens clause $C$ if
every literal in $C$ is also present in $D$; i.e.\ $C \subseteq D$.
With merged literals, the analogous notion of weakening is as defined below.
\begin{definition}\label{def:preceq}
For clauses $C,D$ we write $C\preceq D$ if for any literal $l\in C$ we have $l\in D$ or $l^*\in D$ and for any $l^*\in C$ we have $l^*\in D$.
For annotations $\tau$ and $\sigma$ we say that $\tau\preceq\sigma$ if $\domain(\tau)=\domain(\sigma)$ and for any $c/u\in \tau$ we have $c/u\in \sigma$ or $*/u\in \sigma$ and for any $*/u\in \tau$ we have $*/u\in \sigma$.
If $C,D$ are annotated clauses, we write $C\preceq D$ if there is an injective function $f:C \hookrightarrow D$ such that for all $l^\tau\in C$ we have $f(l^\tau) =l^\sigma$ with $\tau\preceq\sigma$.
\end{definition}
Note: the requirement above that $f$ is injective ensures that
$x^{0/u}\vee x^{1/u} \not\preceq x^{*/u}$.
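Since $\preceq$ is used repeatedly in the proofs below, it may help to see it operationally. The Python sketch below (our encoding: annotated literals as pairs of a signed integer and an annotation stored as a frozenset of items) checks $C \preceq D$ for annotated clauses by searching for the required injective map by backtracking:

```python
STAR = '*'

def ann_preceq(tau, sigma):
    """tau <= sigma for annotations: equal domains, and each value of tau
    either agrees with sigma or has been generalised to '*' there."""
    return set(tau) == set(sigma) and all(
        sigma[u] == tau[u] or sigma[u] == STAR for u in tau)

def clause_preceq(c, d):
    """C <= D for annotated clauses: an injective map sending each l^tau of C
    to some l^sigma of D with tau <= sigma, found by backtracking."""
    c, d = list(c), list(d)

    def extend(i, used):
        if i == len(c):
            return True
        lit, tau = c[i]
        for j, (lit2, sigma) in enumerate(d):
            if j not in used and lit == lit2 and ann_preceq(dict(tau), dict(sigma)):
                if extend(i + 1, used | {j}):
                    return True
        return False

    return extend(0, frozenset())
```

Clauses without merged literals are covered as the special case where every annotation value is concrete; then $C \preceq D$ degenerates to set inclusion.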
\section{Feasible Interpolation and Feasible Monotone Interpolation}
\label{sec:interpolation}
In this section we show that feasible interpolation and feasible
monotone interpolation hold for \lquprc and \irmc. We adapt the
technique first used by \Pudlak \cite{Pud97} to re-prove and
generalise the result of \Krajicek \cite{Kra97}.
\subsection{The setting}\label{subsec:interpolation-setting}
Consider a false QBF $\mathcal{F}$ of the form
$$\exists \vec{p} \mathcal{Q} \vec{q} \mathcal{Q} \vec{r} \big[
A(\vec{p}, \vec{q}) \wedge B(\vec{p}, \vec{r})\big],
$$
where, $\vec{p}$, $\vec{q}$, and $\vec{r}$ are mutually disjoint sets of
propositional variables, $A(\vec{p}, \vec{q})$ is a CNF formula on
variables $\vec{p}$ and $\vec{q}$, and $B(\vec{p}, \vec{r})$ is a CNF
formula on variables $\vec{p}$ and $\vec{r}$.
Thus $\vec{p}$ contains all the
common variables between them. The $\vec{q}$ and
$\vec{r}$ variables can be quantified arbitrarily, with any number of
alternations between quantifiers. The QBF is equivalent to the
following formula, which is not in prenex form:
$$\exists \vec{p} \big[ \mathcal{Q} \vec{q}. A(\vec{p}, \vec{q}) \wedge \mathcal{Q} \vec{r}. B(\vec{p}, \vec{r})\big].$$
Let $\vec{a}$ denote an assignment to the $\vec{p}$ variables. We
denote $A(\vec{p},\vec{q})|_{\vec{a}}$
by $A(\vec{a},\vec{q})$ and $B(\vec{p},\vec{r})|_{\vec{a}}$ by
$B(\vec{a},\vec{r})$.
\begin{definition}
Let $\mathcal{F}$ be a false QBF of the form $\exists \vec{p} \mathcal{Q} \vec{q} \mathcal{Q} \vec{r}. \left[
A(\vec{p}, \vec{q}) \wedge B(\vec{p}, \vec{r})\right]$.
An \emph{interpolation circuit} for $\mathcal{F}$ is a boolean circuit $G$
such that on every $0, 1$ assignment $\vec{a}$ for $\vec{p}$ we have
\begin{align*}
G(\vec{a}) = 0 &\implies \mathcal{Q} \vec{q}. A(\vec{a}, \vec{q}) \text{ is false, and }\\
G(\vec{a}) = 1 &\implies \mathcal{Q} \vec{r}. B(\vec{a}, \vec{r}) \text{ is false. }
\end{align*}
We say that a QBF proof system $S$ has \emph{feasible interpolation}
if there is an effective procedure that, given any
$S$-proof $\pi$ of a QBF $\mathcal{F}$ of the form above,
outputs an interpolation circuit for $\mathcal{F}$ of size
polynomial in the size of $\pi$.
We say that the procedure \emph{extracts} the circuit from the proof.
We say that $S$ has \emph{monotone feasible interpolation} if the
following holds: in the same setting as above, if $\vec{p}$ appears
only positively in $A(\vec{p}, \vec{q})$, then the
interpolation circuit for $\mathcal{F}$ extracted from $\pi$ is monotone.
\end{definition}
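In the special case where $\vec{q}$ and $\vec{r}$ are purely existential, "$\mathcal{Q}\vec{q}. A(\vec{a},\vec{q})$ is false" just means that $A(\vec{a},\vec{q})$ is unsatisfiable, and the defining property of an interpolation circuit can be checked by brute force. The following Python sketch (our own toy checker, exponential in the number of variables) is intended only to pin down the definition:

```python
from itertools import product

def satisfiable(cnf, variables):
    """Brute-force satisfiability of a CNF (frozensets of signed ints)."""
    for bits in product((0, 1), repeat=len(variables)):
        a = dict(zip(variables, bits))
        if all(any(a[abs(l)] == (1 if l > 0 else 0) for l in c) for c in cnf):
            return True
    return False

def plug(cnf, a):
    """Substitute the partial assignment a into the CNF."""
    out = []
    for c in cnf:
        if any(abs(l) in a and a[abs(l)] == (1 if l > 0 else 0) for l in c):
            continue                    # clause already satisfied
        out.append(frozenset(l for l in c if abs(l) not in a))
    return out

def is_interpolation_circuit(G, A, B, p_vars, q_vars, r_vars):
    """Defining property, on every 0/1 assignment a to the p-variables:
    G(a) = 0 forces A(a, q) unsatisfiable, and G(a) = 1 forces B(a, r)
    unsatisfiable (purely existential q, r, so 'false' means 'unsat')."""
    for bits in product((0, 1), repeat=len(p_vars)):
        a = dict(zip(p_vars, bits))
        if G(a) == 0:
            if satisfiable(plug(A, a), q_vars):
                return False
        elif satisfiable(plug(B, a), r_vars):
            return False
    return True
```

For instance, with $A = (\neg p \vee q)\wedge(\neg q)$ and $B = (p \vee r)\wedge(\neg r)$, the conjunction is unsatisfiable, $A$ is false exactly when $p=1$ and $B$ exactly when $p=0$, so $G(p) = \neg p$ is an interpolation circuit while $G(p) = p$ is not.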
As our main results, we show that both \lquprc and \irmc have monotone feasible interpolation.
Before proving the interpolation theorems, we first outline the general idea.
\subsubsection*{Proof idea} Fix a proof system $S\in \{$\lquprc, \irmc{}\,$\}$ and an $S$-proof $\pi$ of $\mathcal{F}$.
Consider the following definition of a $\vec{q}$-clause and an $\vec{r}$-clause.
\begin{definition}
We call a clause $C$ in $\pi$ a $\vec{q}$-clause
(resp.\ $\vec{r}$-clause), if $C$ contains only variables $\vec{p},
\vec{q}$ (resp. $\vec{p}, \vec{r}$). We also call $C$ a
$\vec{q}$-clause (resp.\ $\vec{r}$-clause), if $C$ contains only
$\vec{p}$ variables, but all its ancestor clauses in the proof $\pi$
(all clauses with a directed path to $C$ in $\pi$) are $\vec{q}$
(resp.\ $\vec{r}$)-clauses. In the case of \irmc the variables
appearing in the annotations are irrelevant and can be from either set.
\end{definition}
From $\pi$ we construct a circuit $C_\pi$ with the $\vec{p}$-variables
as inputs: for each node $u$ with clause $C_u$ in the proof $\pi$,
associate a gate $g_u$ (or a constant-size circuit) in the circuit
$C_\pi$. Next, we inductively construct, for any assignment $\vec{a}$ to the
$\vec{p}$ variables, another proof-like structure $\pi'(\vec{a})$. For
each node $u$ with clause $C_u$ in the proof $\pi$, associate a clause
$C'_{u, \vec{a}}$ in the structure $\pi'(\vec{a})$. Finally, we
obtain $\pi''(\vec{a})$ from the structure $\pi'(\vec{a})$ by
instantiating $\vec{p}$ variables to the assignment $\vec{a}$ (that
is, $C''_{u,\vec{a}} = C'_{u,\vec{a}}|_{\vec{a}}$ for each node $u$) and
doing some pruning, and show that $\pi''(\vec{a})$ is a valid proof in
$S$. We then find that if $C_\pi(\vec{a})=0$, then $\pi''(\vec{a})$
uses only $\vec{q}$-clauses and thus is a refutation of $\mathcal{Q}
\vec{q}. A(\vec{a}, \vec{q})$, and if $C_\pi(\vec{a})=1$, then
$\pi''(\vec{a})$ uses only $\vec{r}$-clauses and thus is a refutation
of $\mathcal{Q} \vec{r}. B(\vec{a}, \vec{r})$. Thus $C_\pi$ is the
desired interpolant circuit.
More precisely, we show by induction on the height of $u$ in $\pi$
(that is, the length of the longest path to $u$ from a source node in
$\pi$) that:
\begin{enumerate}
\item $C'_{u,\vec{a}} \preceq C_u$.
\item $g_{u}(\vec{a}) = 0 \implies C''_{u,\vec{a}}$ is a $\vec{q}$-clause and can be obtained from the clauses of
$A(\vec{a},\vec{q})$ alone using the rules of $S$.
\item $g_{u}(\vec{a}) = 1 \implies C''_{u,\vec{a}}$ is an $\vec{r}$-clause and can be obtained from the clauses of
$B(\vec{a},\vec{r})$ alone using the rules of $S$.
\end{enumerate}
From the above, we have the following conclusion. Let $r$ be the root of $\pi$. Then on any assignment $\vec{a}$ to the $\vec{p}$ variables we have:
\begin{enumerate}
\item $C'_{r,\vec{a}} \preceq C_r = \Box$, so $C'_{r,\vec{a}} = \Box$. Therefore, $C''_{r,\vec{a}} = C'_{r,\vec{a}}|_{\vec{a}} = \Box$.
\item $g_{r}(\vec{a}) = 0 \implies \Box$ is a $\vec{q}$-clause and can be obtained from the clauses of $A(\vec{a},\vec{q})$ alone using the rules of system $S$. Hence by soundness of $S$,
$\mathcal{Q} \vec{q}. A(\vec{a}, \vec{q})$ is false.
\item $g_{r}(\vec{a}) = 1 \implies \Box$ is an $\vec{r}$-clause and can be obtained from the clauses of $B(\vec{a},\vec{r})$ alone using the rules of system $S$. Hence by soundness of $S$,
$\mathcal{Q} \vec{r}. B(\vec{a}, \vec{r})$ is false.
\end{enumerate}
\noindent
Thus $g_r$, the output gate of the circuit, computes an interpolant.
When $\mathcal{F}$ has only existential quantification, $\pi$ is a
classical resolution proof, and this is exactly the interpolant computed
by \Pudlak's method \cite{Pud97}. The challenge here is to construct $\pi'$
and $\pi''$ appropriately when the stronger proof systems are used for
general QBF, while maintaining the inductive invariants.
\subsection{Interpolants from \lquprc proofs}
\label{subsec:lpq-interpolant}
We now implement the idea described above for \lquprc.
\begin{theorem} \label{thm:lqup}
\lquprc has feasible interpolation.
\end{theorem}
\begin{proof}
As mentioned in the proof idea, for an \lquprc proof $\pi$ of $\mathcal{F}$ we first describe the circuit $C_{\pi}$ with input $\vec{p}$.
\medskip\noindent
{\bf Construction of the circuit $C_{\pi}$:} The DAG underlying the
circuit is exactly the same as the DAG underlying the proof $\pi$.
For each node $u$ with clause $C_u$ in $\pi$ we associate a gate $g_u$ as follows:
\begin{description}
\item[$u$ is a leaf node: ] If $C_u \in A(\vec{p}, \vec{q})$ then $g_u$ is a constant $0$ gate.
If $C_u \in B(\vec{p}, \vec{r})$ then $g_u$ is a constant $1$ gate.
\end{description}
\noindent
{\bf $u$ is an internal node:} We distinguish four cases.
\begin{enumerate}
\item $u$ was derived by
a universal reduction step.
In this case put a no-operation gate (identity gate) for $g_u$.
\item $u$ corresponds to a resolution step with an existential
variable $x \in \vec{p}$ as pivot.
Nodes $v$ and $w$ are its two parents, i.e.
$$\frac{\overbrace{C_1 \vee x}^\text{node $v$} \hspace{7mm} \overbrace{C_2\vee \neg x}^\text{node $w$}}{\underbrace{ C }_\text{node $u$} }$$
In this case, put a selector gate
$\sel(x, g_v, g_w)$ for $g_u$. Here,
$\sel(x, a, b) = a$, when $x = 0$ and
$\sel(x, a, b) = b$, when $x =
1$. That is,
$\sel(x, a, b) = (\neg x \wedge a) \vee
(x \wedge b)$.
Note that all the variables in
$\vec{p}$ are existential variables
without annotations (equivalently, with empty annotations).
\item $u$ corresponds to a resolution step with an existential or universal variable $x \in \vec{q}$ as pivot. Put an OR gate for $g_u$.
\item $u$ corresponds to a resolution step with an existential or universal variable $x \in \vec{r}$ as pivot. Put an AND gate for $g_u$.
\end{enumerate}
This completes the description of the circuit $C_{\pi}$.
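The four gate gadgets above can be sketched as a small bottom-up evaluator; the representation of the circuit as a topologically ordered node list is our own (a proof DAG always admits such an ordering) and is meant only to fix intuition:

```python
def sel(x, a, b):
    """Selector gate: (not x and a) or (x and b), on 0/1 values."""
    return ((1 - x) & a) | (x & b)

def eval_interpolant_circuit(nodes, assignment):
    """Evaluate the extracted circuit C_pi bottom-up.

    `nodes` is a topologically ordered list of (name, kind, payload):
      kind 'const'    -> payload is 0 (A-leaf) or 1 (B-leaf)
      kind 'id'       -> payload is the parent name (universal reduction)
      kind 'sel'      -> payload is (p_var, left, right)  (p-resolution)
      kind 'or'/'and' -> payload is (left, right)  (q-/r-resolution)
    Returns the value of the last node (the root)."""
    val = {}
    for name, kind, payload in nodes:
        if kind == 'const':
            val[name] = payload
        elif kind == 'id':
            val[name] = val[payload]
        elif kind == 'sel':
            x, left, right = payload
            val[name] = sel(assignment[x], val[left], val[right])
        elif kind == 'or':
            left, right = payload
            val[name] = val[left] | val[right]
        else:  # 'and'
            left, right = payload
            val[name] = val[left] & val[right]
    return val[nodes[-1][0]]
```

A two-leaf circuit whose root resolves on a $\vec{p}$-variable $p$ simply routes the $A$-leaf value for $p=0$ and the $B$-leaf value for $p=1$, exactly as the selector gate prescribes.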
\medskip\noindent
{\bf Construction of $\pi'$ and $\pi''$:}
Following our proof idea, we now describe, for each node $u$ in $\pi$
with clause $C_u$, the associated clause $C'_{u,\vec{a}}$ in
$\pi'(\vec{a})$. Once $\pi'(\vec{a})$ is defined,
the structure $\pi''(\vec{a})$ is
obtained by instantiating $\vec{p}$ variables by
the assignment $\vec{a}$ in each clause of $\pi'(\vec{a})$, cutting
away any edge out of a node
where the clause evaluates to $1$, and deleting nodes which now have
no path to the root node.
That is, for each node $u$, if $C'_{u,\vec{a}}|_{\vec{a}} = 1$, then
the node $u$ is removed, and otherwise the node $u$ survives and
the associated clause $C''_{u,\vec{a}}$ is equal
to $C'_{u,\vec{a}}|_{\vec{a}}$.
We show (by induction on the height of $u$ in $\pi$) that:
\begin{enumerate}
\item $C'_{u,\vec{a}} \preceq C_u$.
\item $g_{u}(\vec{a}) = 0 \implies C''_{u,\vec{a}}$ is a $\vec{q}$-clause and can be obtained from the clauses of
$A(\vec{a},\vec{q})$ alone using the rules of system \lquprc.
\item $g_{u}(\vec{a}) = 1 \implies C''_{u,\vec{a}}$ is an $\vec{r}$-clause and can be obtained from the clauses of
$B(\vec{a},\vec{r})$ alone using the rules of system \lquprc.
\end{enumerate}
As described in the proof outline, this suffices to conclude that
$C_\pi$ computes an interpolant.
We now present the construction details.
\medskip\noindent
{\bf At leaf level:} Let node $u$ be a leaf in $\pi$. Then
$C'_{u,\vec{a}} = C_u$; that is, we copy the clause as it is. Trivially,
we have $C'_{u,\vec{a}} \preceq C_u$. By construction of $C_\pi$, the
conditions concerning $g_u(\vec{a})$ and $C''_{u,\vec{a}}$ are satisfied.
\medskip\noindent
At an internal node we distinguish four cases based on the rule that was applied.
\medskip\noindent
{\bf At an internal node with universal reduction:}
Let node $u$ be an internal node in $\pi$ corresponding to a universal reduction step on some universal literal
$x$ or $x^*$. Let node $v$ be its only parent. Here we consider only the case where the universal literal is $x$. The case of $x^*$ is identical. We have
$$\frac{C_v =\overbrace{D_v \vee x }^\text{node $v$}}{C_u =
\underbrace{D_v}_\text{node $u$}}, \qquad x \text{ is a universal
literal, and } \ind(l) < \ind(x) \text{ for all existential literals } l \in D_v.$$
In this case, define $C'_{u,\vec{a}} = C'_{v,\vec{a}} \setminus \{ x, \neg x, x^* \}$.
By induction, $C'_{v,\vec{a}} \preceq C_v = D_v \vee x $. Therefore, $C'_{u,\vec{a}} = C'_{v,\vec{a}} \setminus \{ x , \neg x, x^*\} \preceq D_v = C_u$.
If $g_u(\vec{a}) = 0 $, then we know that $g_v(\vec{a}) = 0 $ as
$g_u(\vec{a}) = g_v(\vec{a})$. By the induction hypothesis, we know
that $C''_{v,\vec{a}} = C'_{v,\vec{a}}|_{\vec{a}}$ is a $\vec{q}$-clause and can be derived using
$A(\vec{a}, \vec{q})$ alone via \lquprc. Recall that $C'_{u,\vec{a}} = C'_{v,\vec{a}} \setminus \{x, \neg x, x^*\}$
in this case. Since $\vec{a}$ is an assignment to the $\vec{p}$
variables and $x \notin \vec{p}$, $C'_{u,\vec{a}}|_{\vec{a}} =
C''_{u,\vec{a}}$ is also a $\vec{q}$-clause and can be derived using
$A(\vec{a}, \vec{q})$ alone via \lquprc. (Either
$C''_{u,\vec{a}}$ already equals $C''_{v,\vec{a}}$, or $x$ needs to be dropped. In the
latter case,
the condition on $\ind(x)$
is satisfied at $C''_{u,\vec{a}}$ because it is satisfied
at $C_v$ in $\pi$ and $C'_{v,\vec{a}} \preceq C_v$.
So we can drop $x$ from $C''_{v,\vec{a}}$ to get $C''_{u,\vec{a}}$.)
The situation is dual for the case when $g_u(\vec{a})=1$; we get
$\vec{r}$-clauses.
\medskip\noindent
{\bf At an internal node with $\vec{p}$-resolution:}
Let node $u$ in the proof $\pi$ correspond to a
resolution step with pivot $x \in \vec{p}$. Note that
$x$ is existential, as $\vec{p}$ variables occur only existentially in $\mathcal{F}$. We have
$$
\frac{C_v = \overbrace{C_1 \vee U_1 \vee x}^\text{node $v$} \hspace{7mm} \overbrace{C_2 \vee U_2 \vee \neg x}^\text{node $w$} = C_w}{C_u = \underbrace{C_1 \vee C_2 \vee U}_\text{node $u$}}.
$$
In the assignment $\vec{a}$, if $x = 0$, then define
$C'_{u,\vec{a}} = C'_{v,\vec{a}} \setminus \{x\} $ and if $x = 1$ then define
$C'_{u,\vec{a}} = C'_{w,\vec{a}} \setminus \{ \neg x\}$. By induction, we have
$C'_{v,\vec{a}} \preceq C_v$ and $C'_{w,\vec{a}} \preceq
C_w$.
So, if $x = 0$, we have $C'_{u,\vec{a}} = C'_{v,\vec{a}} \setminus \{x\} \preceq C_1 \vee U_1 \preceq C_u$.
If $x = 1$, we have $C'_{u,\vec{a}} \preceq C'_{w,\vec{a}} \setminus \{ \neg x\} \preceq C_2 \vee U_2 \preceq C_u$.
In this case $g_u$ is a selector gate. If $x = 0$ in the assignment
$\vec{a}$, then $g_u(\vec{a}) = g_v(\vec{a})$ and $C''_{u,\vec{a}} =
C''_{v,\vec{a}}$. Since the conditions concerning $g_v(\vec{a})$ and
$C''_{v,\vec{a}}$ are satisfied by induction, the conditions concerning $g_u(\vec{a})$ and
$C''_{u,\vec{a}}$ are satisfied as well. Similarly, if $x=1$, then
$g_u(\vec{a}) = g_w(\vec{a})$ and $C''_{u,\vec{a}} =
C''_{w,\vec{a}}$, and the statements that are inductively true at $w$
hold at $u$ as well.
\medskip\noindent
{\bf At an internal node with $\vec{q}$-resolution:}
Let node $u$ in the proof $\pi$ correspond to a
resolution step with pivot $x \in \vec{q}$. Note that $x$ may be existential or universal. We have
$$
\frac{C_v = \overbrace{C_1 \vee U_1 \vee x}^\text{node $v$} \hspace{7mm} \overbrace{C_2 \vee U_2 \vee \neg x}^\text{node $w$} = C_w}{C_u = \underbrace{C_1 \vee C_2 \vee U}_\text{node $u$}}, \quad x \in \vec{q}.
$$
If $g_v(\vec{a}) = 1$ then define $C'_{u,\vec{a}} = C'_{v,\vec{a}}$.
By induction, we know that $C''_{u,\vec{a}} = C''_{v,\vec{a}}$ is
an $\vec{r}$-clause. Since $x$ is a $\vec{q}$-variable and is not
instantiated by $\vec{a}$, it must be
the case that $x\not\in C'_{v,\vec{a}}$.
Thus $C'_{u,\vec{a}} = C'_{v,\vec{a}} \preceq C_v \setminus \{x\}
\preceq C_u$.
Else if $g_w(\vec{a}) =1$, define $C'_{u,\vec{a}} = C'_{w,\vec{a}}$.
By a similar analysis as above, $C'_{u,\vec{a}} = C'_{w,\vec{a}}
\preceq C_w \setminus \{\neg x\}
\preceq C_u$.
If $g_v(\vec{a}) = g_w(\vec{a}) = 0$, and if $x \notin
C'_{v,\vec{a}}$, define $C'_{u,\vec{a}} = C'_{v,\vec{a}}$. Otherwise, if
$\neg x \notin C'_{w,\vec{a}} $, define $C'_{u,\vec{a}} =
C'_{w,\vec{a}}$. It follows from induction that $C'_{u,\vec{a}}
\preceq C_u$.
Else, define $C'_{u,\vec{a}}$ to be the resolvent of
$C'_{v,\vec{a}}$ and $C'_{w,\vec{a}}$ on $x$.
By induction, we know that $C'_{v,\vec{a}} \setminus \{x\}\preceq C_1
\vee U_1$ and $C'_{w,\vec{a}}\setminus \{\neg x\} \preceq C_2 \vee
U_2$. Hence
$C'_{u,\vec{a}} \preceq C_1 \vee C_2 \vee U =C_u$.
We need to verify the conditions on $g_u(\vec{a})$ and $C''_{u,\vec{a}}$.
The case when $g_u(\vec{a}) = 1$ is immediate,
since $C''_{u,\vec{a}}$ copies a clause known by induction to be an
$\vec{r}$-clause. So now
consider the case when
$g_u(\vec{a}) = 0$.
By induction, we know that both $C''_{v,\vec{a}} =
C'_{v,\vec{a}}|_{\vec{a}}$ and $C''_{w,\vec{a}} =
C'_{w,\vec{a}}|_{\vec{a}}$ are $\vec{q}$-clauses and can be derived
using $A(\vec{a}, \vec{q})$ alone via \lquprc.
We have three cases. If $C'_{u,\vec{a}} = C'_{v,\vec{a}}$ or
$C'_{u,\vec{a}} = C'_{w,\vec{a}}$, then by induction we are done.
Otherwise, $C'_{u,\vec{a}}$ is obtained from $C'_{v,\vec{a}}$ and
$C'_{w,\vec{a}}$ via a resolution step on pivot $x$. Since
$\vec{a}$ is an assignment to the $\vec{p}$ variables and $x \notin
\vec{p}$, $C''_{u,\vec{a}}$ can be derived from $C''_{v,\vec{a}}$ and
$C''_{w,\vec{a}}$ via the same resolution step.
\noindent
{\bf Note:} A simple observation is that $C'_{u,\vec{a}}$ is always a
subset of $C_u$, with only one exception: a merged
literal $u^*$ in $C_u$ may be replaced by the literal $u$ in
$C'_{u,\vec{a}}$. This is exactly the situation captured by the relation
$\preceq$. Also, the resolution step in $\pi''(\vec{a})$ is applicable in
\lquprc because (1)~every mergable universal variable in
$C''_{v,\vec{a}}$ and $C''_{w,\vec{a}} $ was also mergable earlier in
$C_v$ and $C_w$ in $\pi$. (2)~Every common non-mergable existential
variable in $C''_{v,\vec{a}}$ and $C''_{w,\vec{a}}$ was also a
non-mergable existential variable in $C_v$ and $C_w$. (3)~Every
non-mergable universal variable in $C''_{v,\vec{a}}$ and
$C''_{w,\vec{a}}$ was also a non-mergable universal pair in $C_v$ and
$C_w$. (4)~The operations do not disturb the indices of variables,
therefore if variable $x$ satisfies the index condition in $\pi$ it
satisfies it in $\pi''(\vec{a})$ as well.
\medskip\noindent
{\bf At an internal node with $\vec{r}$-resolution:}
Let node $u$ in $\pi$ correspond to a resolution step with pivot $x \in \vec{r}$. This is dual to the case above.
\end{proof}
\subsection{Interpolants from \irmc proofs}
\label{subsec:irm-interpolant}
We now establish the interpolation theorem for the expansion-based
calculi, following the same overall idea described in
Section~\ref{subsec:interpolation-setting}.
\begin{theorem} \label{thm:irm}
\irmc has feasible interpolation.
\end{theorem}
\begin{proof}
This proof closely follows that of Theorem~\ref{thm:lqup}, but
with several changes in the proof details. We describe the changes
here.
{\bf Construction of the circuit $C_{\pi}$:}
The circuit construction is very similar to that for
\lquprc. Leaves and resolution nodes are treated as before.
Instantiation and merging nodes are treated as the universal reduction
nodes were; that is, the corresponding gates are no-operation
(identity) gates.
{\bf Construction of $\pi'$ and $\pi''$:}
As before we construct a proof-like structure $\pi'(\vec{a})$,
which depends on the assignment $\vec{a}$ to the $\vec{p}$ variables,
the proof $\pi$ of $\mathcal{F}$, and the circuit $C_{\pi}$. For each
node $u$ in $\pi$, with clause $C_u$, we associate a clause
$C'_{u,\vec{a}}$ in $\pi'(\vec{a})$, and let $C''_{u,\vec{a}}$ be the
instantiation of $C'_{u,\vec{a}}$ by the assignment $\vec{a}$. We
show (by induction on the height of $u$ in $\pi$) that:
\begin{enumerate}
\item $C'_{u,\vec{a}} \preceq C_u$.
\item $g_{u}(\vec{a}) = 0 \implies C''_{u,\vec{a}}$ is a $\vec{q}$-clause and can be obtained from the clauses of
$A(\vec{a},\vec{q})$ alone using the rules of system \irmc.
\item $g_{u}(\vec{a}) = 1 \implies C''_{u,\vec{a}}$ is an $\vec{r}$-clause and can be obtained from the clauses of
$B(\vec{a},\vec{r})$ alone using the rules of system \irmc.
\end{enumerate}
Once again, as described in the proof outline, this suffices to conclude that the circuit $C_\pi$ computes an interpolant.
Recall that for annotated clauses,
the meaning of $\preceq$ is slightly different and is given in Definition~\ref{def:preceq}.
\medskip\noindent
{\bf At a leaf level: } Let node $u$ be a leaf in $\pi$. Then
$C'_{u,\vec{a}} = C_u$; that is, copy the clause as it is. Trivially,
$C'_{u,\vec{a}} \preceq C_u$.
By construction of $C_\pi$, the
conditions concerning $g_u(\vec{a})$ and $C''_{u,\vec{a}}$ are satisfied.
\medskip\noindent
{\bf At an internal node with instantiation:}
Let node $u$ be an internal node in $\pi$ corresponding to an instantiation step by $\tau$, and let node $v$ be its only parent.
We know $C_u=\instantiate(\tau, C_v)$.
Suppose $l^{\sigma'} \in \instantiate(\tau, C'_{v,\vec{a}})$. Then for some $\xi'$, $l^{\xi'} \in
C'_{v,\vec{a}}$, and $l^{\sigma'} = l^{[\complete{\xi'}{\tau}]}$; hence
$\sigma'$ is a subset of $\xi'$ completed with $\tau$. By
induction we know that $C'_{v,\vec{a}}\preceq C_v$. We have an injective function $f:C'_{v,\vec{a}}\hookrightarrow C_v$ that demonstrates this. Let $f(l^{\xi'})=l^\xi$. Hence $l^\xi\in
C_v$ for some $\xi$ with $\xi'\preceq\xi$.
So $l^{\sigma} = l^{[\complete{\xi}{\tau}]}\in C_u$.
Since the annotations introduced by instantiation match,
$\sigma'\preceq \sigma $. We use this to define a function
$g:\instantiate(\tau, C'_{v,\vec{a}})\rightarrow C_u$ where
$g(l^{\sigma'})=l^{\sigma}$. Now we find any $l^{\tau_1},l^{\tau_2}$
where $g(l^{\tau_1})=g(l^{\tau_2})=l^\tau$ and perform a merging step
on $l^{\tau_1}$ and $l^{\tau_2}$; note that the resulting literal
$l^{\tau'}$ will still satisfy $\tau'\preceq \tau$. Eventually we get a clause which we define as $\minst(\tau, C'_{v,\vec{a}},C_u)=C'_{u,\vec{a}}$ where this function is injective. We will use this notation to refer to this process of instantiation and then deliberate merging to get $\preceq C_u$.
Therefore $C'_{u,\vec{a}}\preceq C_u$.
If the node $u$ is not pruned out in $\pi''(\vec{a})$, then
$C''_{u,\vec{a}}$ contains no satisfied $\vec{p}$ literals; hence
neither does $C'_{v,\vec{a}}$. Therefore $C''_{u,\vec{a}}$ is derived from $C''_{v,\vec{a}}$; this is a valid step in the proof
system.
Because we only use instantiation and merging or a dummy step, $C''_{u,\vec{a}}$
is a $\vec{q}$-clause if and only if $C''_{v,\vec{a}}$ is a
$\vec{q}$-clause. Therefore the no-operation (identity) gate $g_u$ gives a valid
result by induction.
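The instantiation operation can be made concrete with a small sketch. The following reflects our reading of the calculus and is not code from the paper: an annotation is a dictionary from universal variables to $0$, $1$, or $*$, and $\complete{\xi}{\tau}$ is assumed to extend $\xi$ by the entries of $\tau$ on variables outside $\domain(\xi)$; the representation of a clause as (literal, annotation) pairs is our own.

```python
def complete(xi, tau):
    """Completion of annotation xi by tau: entries of xi take precedence,
    and tau fills in universal variables not yet annotated (our assumed
    reading of the completion operation)."""
    out = dict(tau)
    out.update(xi)          # xi's own entries win over tau's
    return out

def instantiate(tau, clause):
    """inst(tau, C): apply the completion to every annotated literal.
    A clause is a list of (literal, annotation) pairs."""
    return [(lit, complete(ann, tau)) for lit, ann in clause]
```

For example, instantiating a literal annotated $\{0/u\}$ by $\tau=\{1/u, 1/w\}$ keeps $0/u$ and adds $1/w$.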
\medskip\noindent
{\bf At an internal node with merging:}
Let node $u$ be an internal node in $\pi$ corresponding to a merging step.
Let node $v$ be its only parent.
We have $$\frac{C_v=D_v\vee b^\mu\vee b^\sigma}{C_u=D_v\vee b^\xi}$$
where $\domain(\mu)=\domain(\sigma)$ and
$\xi$ is obtained by merging the annotations $\mu,\sigma$. That is,
$\xi = \AMerge(\mu,\sigma) = \merge{\mu}{\sigma}$.
Note that $\mu, \sigma \preceq \AMerge(\mu,\sigma)$.
Note that from the induction hypothesis, $C'_{v,\vec{a}}\preceq C_v$, so there is an injective function $f:C'_{v,\vec{a}}\hookrightarrow C_v$. Suppose $C'_{v,\vec{a}}$ contains two distinct literals $b^{\mu'}$
and $b^{\sigma'}$ where $f(b^{\mu'})=b^{\mu}$ and $f(b^{\sigma'})=b^{\sigma}$.
So $C'_{v,\vec{a}} = D'_v\vee b^{\mu'}\vee b^{\sigma'}$.
Then let
$C'_{u,\vec{a}}=D'_v\vee b^{\xi'}$, where
$\xi' = \AMerge(\mu',\sigma')$.
Otherwise let $C'_{u,\vec{a}}=C'_{v,\vec{a}}$.
We first observe that whenever we perform an actual merging step, if
$c/u\in \xi'$ then one of the following holds:
\begin{enumerate}
\item $c/u\in \sigma'$. Then $c/u \in \sigma$ or $*/u\in \sigma$, and
so $c/u \in \xi$ or $*/u\in \xi$.
\item $c/u\in \mu'$. Then $c/u \in \mu$ or $*/u\in \mu$, and so $c/u
\in \xi$ or $*/u\in \xi$.
\item $e/u\in\mu'$, $d/u\in\sigma'$, $e\neq d$, in which case $*/u\in \xi$.
\end{enumerate}
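The annotation merge and the $\preceq$ relation used in this case analysis can be sketched as follows. This is an illustrative sketch under our assumptions: annotations are dictionaries with values in $\{0,1,*\}$, $\AMerge$ keeps an entry where the two annotations agree and weakens it to $*$ where they conflict, and $\preceq$ holds when every entry $c/u$ of the smaller annotation occurs in the larger one as $c/u$ or as $*/u$.

```python
def amerge(mu, sigma):
    """AMerge(mu, sigma): agreeing entries are kept, conflicts become '*'.
    Requires domain(mu) = domain(sigma), as in the merging rule."""
    assert mu.keys() == sigma.keys()
    return {u: (mu[u] if mu[u] == sigma[u] else '*') for u in mu}

def preceq(small, big):
    """Assumed reading of the preceq relation on annotations: each entry
    c/u of `small` appears in `big` either unchanged or weakened to */u."""
    return all(u in big and big[u] in (small[u], '*') for u in small)
```

A quick check confirms the property noted above, $\mu, \sigma \preceq \AMerge(\mu,\sigma)$, on a small example.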
Since all other annotated literals are unaffected,
$C'_{u,\vec{a}}\preceq C_u$.
We never merge $\vec{p}$ literals as they have no annotations, so if
$C''_{u,\vec{a}}$ is not pruned away, then
$C''_{u,\vec{a}}$ is derived from $C''_{v,\vec{a}}$ via merging.
In case we do not merge, there might be some $b^{\sigma'}\in
C'_{v,\vec{a}}$ with $\sigma'\preceq\sigma$, which is not removed by
merging.
However $\sigma'\preceq\sigma\preceq \xi$, so
$C'_{u,\vec{a}}=C'_{v,\vec{a}}\preceq C_u$. As
$C''_{u,\vec{a}}=C''_{v,\vec{a}}$, this is a valid inference step (in
fact, a dummy step).
Because we only use merging or a dummy step, $C''_{u,\vec{a}}$ is a
$\vec{q}$-clause if and only if $C''_{v,\vec{a}}$ is a
$\vec{q}$-clause, therefore the no-operation (identity) gate $g_u$ gives a valid result by induction.
\medskip\noindent
{\bf At an internal node with $\vec{p}$-resolution:}
We do not have any annotations on $\vec{p}$-literals. So in this case
we construct $C'_u$ and $C''_u$ exactly as we would for an \lquprc proof.
\medskip\noindent
{\bf At an internal node with $\vec{q}$-resolution:}
When we have a resolution step between nodes $v$ and $w$ on a
$\vec{q}$ pivot to get node $u$, we
have $$\frac{C_v=x^{\tau\cup\xi}\lor D_v \hspace{5mm} C_w=\lnot
x^{\tau\cup\sigma}\lor
D_w}{C_u=\instantiate(\sigma,D_v)\cup\instantiate(\xi,D_w)}$$
where $\domain(\tau)$, $\domain(\xi)$ and $\domain(\sigma)$ are
mutually disjoint, and $\range(\tau) \subseteq \{0,1\}$.
In order to do dummy instantiations we will need to define a $\{0,1\}$ version of $\xi$ and $\sigma$. So we define $\xi'=\{c/u \mid c/u \in \xi , c\in\{0,1\} \}\cup\{0/u \mid */u \in \xi \}$, $\sigma'=\{c/u \mid c/u \in \sigma , c\in\{0,1\} \}\cup\{0/u \mid */u \in \sigma \}$. This gives us the desirable property that $\xi'\preceq\xi$, $\sigma'\preceq\sigma$.
Now, resuming the construction of $C'$, we use information from the
circuit. If $g_v(\vec{a})=1$, then we define
$C'_{u,\vec{a}}=\minst(\sigma',C'_{v,\vec{a}}, C_u )$. Otherwise, if
$g_w(\vec{a})=1$, then we define
$C'_{u,\vec{a}}=\minst(\xi',C'_{w,\vec{a}}, C_u )$. In these cases,
we know by the inductive claim that $C'_{u,\vec{a}}$ does not contain
any $\vec{q}$ literals. Therefore $C'_{u,\vec{a}}$ is the correct
instantiation (as $\xi'\preceq\xi$, $\sigma'\preceq\sigma$) of some
subset of $D_v$ or $D_w$. Hence $C'_{u,\vec{a}}\preceq C_u$. Furthermore since $g_u$ is an OR gate evaluating to 1 and since $C''_{u, \vec{a}}$, an $\vec{r}$-clause, can be obtained by an instantiation step, our inductive claim is true.
Now suppose $g_v(\vec{a})=0$ and $g_w(\vec{a})=0$. If there is no
$x^{\mu} \in C'_{v,\vec{a}}$ such that $ \mu\preceq \tau\cup\xi$, then
define $C'_{u,\vec{a}}=\minst(\sigma',C'_{v,\vec{a}}, C_u )$. Else, if
there is no $\neg x^{\mu} \in C'_{w,\vec{a}}$ such that $ \mu\preceq
\tau\cup\sigma$, then define
$C'_{u,\vec{a}}=\minst(\xi',C'_{w,\vec{a}}, C_u )$. In these cases we
know that $C'_{u, \vec{a}}$ is the correct instantiation (as
$\xi'\preceq\xi$, $\sigma'\preceq\sigma$) of some subset of $D_v$ or
$D_w$; hence $C'_{u,\vec{a}}\preceq C_u$. Furthermore, since $g_u$ is
an OR gate evaluating to 0, and since $C''_{u,\vec{a}}$, a $\vec{q}$-clause, can be obtained by an instantiation step, our inductive claim is true.
The final case is when $g_v(\vec{a})=g_w(\vec{a})=0$ and $x^{\tau \cup
\xi_1} \in C'_{v,\vec{a}}$ for some $ \xi_1\preceq \xi $ and $\neg
x^{\tau \cup\sigma_1} \in C'_{w,\vec{a}}$ for some $ \sigma_1\preceq
\sigma $. Here, because $\domain(\tau)$, $\domain(\xi)$ and
$\domain(\sigma)$ are mutually disjoint, $\domain(\tau)$,
$\domain(\xi_1)$ and $\domain(\sigma_1)$ are also mutually
disjoint. Thus we can do the resolution step
$$\frac{C'_{v,\vec{a}}=x^{\tau\cup\xi_1}\lor D'_v \hspace{5mm} C'_{w,\vec{a}}=\lnot x^{\tau\cup\sigma_1}\lor D'_w}{\instantiate(\sigma_1 ,D'_v)\cup\instantiate(\xi_1,D'_w)}.$$
Since $\minst(\sigma_1 ,D'_v, C_u)\preceq \instantiate(\sigma,D_v)$
and $\minst(\xi_1 ,D'_w, C_u)\preceq \instantiate(\xi,D_w)$, we can
follow up $\instantiate(\sigma_1 ,D'_v)\cup\instantiate(\xi_1,D'_w)$
with sufficient merging steps to get a clause $C' \preceq C_u$; we
define this clause to be the clause $C'_{u,\vec{a}}$.
By the inductive claim, both
$C''_{v,\vec{a}}$ and $C''_{w,\vec{a}}$ are $\vec{q}$-clauses; hence
$C''_{u,\vec{a}}$ is also a $\vec{q}$-clause and is obtained via a
valid resolution step.
\medskip\noindent
{\bf At an internal node with $\vec{r}$-resolution:}
When we have a resolution step between nodes $v$ and $w$ on an $\vec{r}$-literal, this is the dual of the previous case.
\end{proof}
\iffalse
Let $\vec{a}$ be any assignment to the $\vec{p}$ variables. Following our proof idea, we now show the following:
\begin{lemma} \label{lemma1}
$g_{u(\vec{a})} = 0 \implies C''_{u(\vec{a})}$ is a $\vec{q}$-clause and can be obtained from the clauses of
$A(\vec{a},\vec{q})$ alone using the rules of system \lquprc.
\end{lemma}
\begin{proof}
We prove this by the induction on the height of $u$ in $\pi$. Suppose the leaves are at height $0$.\\
\noindent
{\bf Base Case: } Node $u$ is a leaf node. If $g_{u(\vec{a})} = 0 $ then by construction of the circuit we know that $C'_{u(\vec{a})} \in A(\vec{p}, \vec{q})$. Hence $C'_{u(\vec{a})}|_{\vec{a}} = C''_{u(\vec{a})} \in A(\vec{a}, \vec{q})$.\\
\noindent
{\bf Induction Step: }
\begin{description}
\item[1. ] Let node $u$ in $\pi$ corresponds to a universal reduction step on some universal variable
$x$ or $x^*$. And node $v$ be its only parent. Here we see only for the case when universal variable is $x$. The case for $x^*$ is identical. We have,
$$\frac{C_v =\overbrace{D_v \vee x}^\text{node v}}{C_u = \underbrace{D_v}_\text{node u}}, x \text{ a universal variable, } \forall \ell \in D_v, \lev(l) < \lev(x).$$
If $g_{u(\vec{a})} = 0 $, then we know that $g_{v(\vec{a})} = 0 $ as
$g_{u(\vec{a})} = g_{v(\vec{a})}$. By induction hypothesis, we know
that $C''_{v(\vec{a})} = C'_{v(\vec{a})}|_{\vec{a}}$ is a $\vec{q}$-clause and can be derived using
$A(\vec{a}, \vec{q})$ alone via \lquprc. Recall that $C'_{u(\vec{a})} = C'_{v(\vec{a})} \setminus \{x, \neg x, x^*\}$
in this case. Since $\vec{a}$ is an assignment to the $\vec{p}$ variables and $x \notin \vec{p}$, we have
$C'_{u(\vec{a})}|_{\vec{a}} = C''_{u(\vec{a})}$ is a $\vec{q}$-clause and can be derived using
$A(\vec{a}, \vec{q})$ alone via \lquprc.
\item[2. ] Let node $u$ in the proof $\pi$ corresponds to a resolution step with the pivot $x \in \vec{p}$. Note that $x$ is existential, as $\vec{p}$ variables occur existential in $\mathcal{F}$. We have,
$$\frac{C_v = \overbrace{C_1 \vee U_1 \vee x}^\text{node v} \hspace{7mm} \overbrace{C_2 \vee U_2 \vee \neg x}^\text{node w} = C_w}{C_u = \underbrace{C_1 \vee C_2 \vee U^*}_\text{node u}} $$
In this case $g_u$ is a selector gate. If $x = 0$ in the assignment $\vec{a}$, then $g_{u(\vec{a})} = g_{v(\vec{a})}$. Hence if $g_{u(\vec{a})} = 0$ then $g_{v(\vec{a})} = 0$. By induction, we know that $C''_{v(\vec{a})} = C'_{v(\vec{a})}|_{v(\vec{a})}$ is a $\vec{q}$-clause and can be derived using $A(\vec{a}, \vec{q})$ alone via \lquprc.
Recall that in this case $C'_{u(\vec{a})} = C'_{v(\vec{a})}$. Therefore $C'_{u(\vec{a})}|_{\vec{a}} = C''_{u(\vec{a})}$ is a $\vec{q}$-clause and can be derived using $A(\vec{a}, \vec{q})$ alone via \lquprc.
\item[3. ] Let node $u$ in the proof $\pi$ corresponds to an (LD)-resolution step with the pivot $x \in \vec{q}$. $x$ may be existential or universal. We have in $\pi$,
$$\frac{C_v = \overbrace{C_1 \vee U_1 \vee x}^\text{node v} \hspace{7mm} \overbrace{C_2 \vee U_2 \vee \neg x}^\text{node w} = C_w}{C_u = \underbrace{C_1 \vee C_2 \vee U^*}_\text{node u}}, x \in \vec{q} $$
By construction of $C_{\pi}$, we know that $g_u$ is an $OR$ gate. If $g_u(\vec{a}) = 0$ then both $g_v(\vec{a})$ and $g_w(\vec{a})$ must evaluate to $0$. By induction, we know that both $C''_{v(\vec{a})} = C'_{v(\vec{a})}|_{\vec{a}}$, $C''_{w(\vec{a})} = C'_{w(\vec{a})}|_{\vec{a}}$ are $\vec{q}$-clauses and can be derived using $A(\vec{a}, \vec{q})$ alone via \lquprc.
We have three cases. If $C'_{u(\vec{a})} = C'_{v(\vec{a})}$ or $C'_{u(\vec{a})} = C'_{w(\vec{a})}$, by induction
$C''_{u(\vec{a})}$ is a $\vec{q}$-clauses and can be derived using $A(\vec{a}, \vec{q})$ alone via \lquprc.
Otherwise, $C'_{u(\vec{a})}$ is obtained from $C'_{v(\vec{a})}$ and $C'_{w(\vec{a})}$ via an (LD)-resolution step on pivot $x$.
Since $\vec{a}$ is an assignment to the $\vec{p}$ variables and $x \notin \vec{p}$, $C''_{u(\vec{a})}$ can be derived from $C''_{v(\vec{a})}$ and $C''_{w(\vec{a})}$ via the same (LD)-resolution step.
\item[4. ] Let node $u$ in the proof $\pi$ corresponds to a resolution step with the pivot $x \in \vec{r}$. This is just a dual of the above step.
\end{description}
\end{proof}
The proof of the claim $(iii)$ of the proof idea is similar to the proof of Lemma \ref{lemma1}.
\fi
\subsection{Monotone Interpolation}
To transfer known circuit lower bounds into proof size bounds, we need a monotone version of the previous interpolation theorems, which we prove next.
\begin{theorem} \label{theorem2}
\lquprc and \irmc have monotone feasible interpolation.
\end{theorem}
\begin{proof}
In previous subsections, we have shown that the circuit
$C_{\pi}(\vec{p})$ is a correct interpolant for the QBF
$\mathcal{F}$. That is, if $C_{\pi}(\vec{p}) = 0$ then $\mathcal{Q}
\vec{q}. A(\vec{a}, \vec{q})$ is false, and if $C_{\pi}(\vec{p}) = 1$
then $\mathcal{Q} \vec{r}. B(\vec{a}, \vec{r})$ is false.
However, if $\vec{p}$ occurs only positively in $A(\vec{p}, \vec{q})$ then we construct a monotone circuit $C^\mon_{\pi}(\vec{p})$ such that, on every $0, 1$ assignment $\vec{a}$ to $\vec{p}$ we have
\begin{align*}
C^\mon_{\pi}(\vec{a}) = 0 &\implies \mathcal{Q} \vec{q}. A(\vec{a}, \vec{q}) \text{ is false, and } \\
C^\mon_{\pi}(\vec{a}) = 1 &\implies \mathcal{Q} \vec{r}. B(\vec{a}, \vec{r}) \text{ is false. }
\end{align*}
We obtain $C^\mon_{\pi}(\vec{p})$ from $C_{\pi}(\vec{p})$ by replacing all selector gates $g_u = \sel(x, g_v, g_w)$ by the following monotone ternary connective: $g_u = (x \vee g_v) \wedge g_w$ where nodes $v$ and $w$ are the parents of $u$ in $\pi$.
\begin{comment}
The node where the output of the selector gate and
the monotone ternary differs, we also change the associated clause $C'_{u(\vec{a})}$ in the proof $\pi'(\vec{a})$.
\end{comment}
We also change the proof-like structure $\pi'(\vec{a})$; the
construction is the same as before except that at $\vec{p}$-resolution nodes,
the rule for fixing $C'_{u,\vec{a}}$ is also changed to reflect the
monotone function used instead.
More precisely, the functions $\sel(x, g_v, g_w)$ and
$g_u = (x \vee g_v) \wedge g_w$ differ only when $x = 0$,
$g_v(\vec{a}) = 1$, and
$g_w(\vec{a}) = 0$. We set $C'_{u,\vec{a}}$ to
$C'_{w,\vec{a}}\setminus \{\neg x\}$ if $x=1$ or if $x=0$, $g_v(\vec{a})=1$ and
$g_w(\vec{a})=0$, and to $C'_{v,\vec{a}}\setminus \{x\}$ otherwise.
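The claim that the two connectives disagree on a single input pattern can be verified exhaustively; the following check is illustrative only.

```python
# Exhaustive sanity check that the selector gate and its monotone
# replacement (x OR g_v) AND g_w disagree on exactly one input pattern,
# namely x = 0, g_v = 1, g_w = 0.
from itertools import product

def sel(x, gv, gw):
    """Original selector gate: g_v if x = 0, else g_w."""
    return gv if x == 0 else gw

def mono(x, gv, gw):
    """Monotone ternary connective (x OR g_v) AND g_w."""
    return int((x or gv) and gw)

diffs = [(x, gv, gw) for x, gv, gw in product((0, 1), repeat=3)
         if sel(x, gv, gw) != mono(x, gv, gw)]
# diffs == [(0, 1, 0)]
```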
It suffices to verify the inductive statements in the case when
$x=0$, $g_v(\vec{a}) = 1$, and $g_w(\vec{a}) = 0$. We have to show
that $C'_{u,\vec{a}} \preceq C_u$; this holds by induction.
We also have to show that
$C''_{u,\vec{a}}$ is a $\vec{q}$-clause, and can be derived using
$A(\vec{a}, \vec{q})$ clauses alone via the appropriate proof system.
By induction, since $g_w(\vec{a}) = 0$, we conclude that
$C''_{w,\vec{a}}$ cannot contain $\neg x$: it can be derived
from the clauses of $A(\vec{p},\vec{q})$ alone; by the positivity
constraint these clauses do not contain $\neg x$, and the derivation
cannot introduce new literals.
Hence $C''_{w,\vec{a}}=
(C'_{w,\vec{a}} \setminus \{\neg x\})|_{\vec{a}}$, which is
$C''_{u,\vec{a}}$.
\end{proof}
\section{New Exponential Lower Bounds for \irmc and \lquprc}
\label{sec:lower-bounds}
We now apply our interpolation theorems to obtain new exponential lower bounds for a new class of QBFs. The lower bound will be directly transferred from the following monotone circuit lower bound for the problem $\clique(n,k)$, asking whether a given graph with $n$ nodes has a clique of size $k$.
\begin{theorem}[Alon \& Boppana \cite{AB87}] \label{thm:raz}
All monotone circuits that compute $\clique(n,n/2)$ are of exponential size.
\end{theorem}
We now build the QBF. Fix an integer $n$ (indicating the number of
vertices of the graph) and let $\vec{p}$ be the set of variables
$\{p_{uv} \mid 1\leq u<v\leq n\}$. An assignment to $\vec{p}$ picks
a set of edges, and thus an $n$-vertex graph.
Let $\vec{q}$ be the set of variables $\{q_{iu} \mid i \in [\frac{n}{2}], u \in [n] \}$.
We use the following clauses.
\[
\begin{array}{c@{\ }c@{\ }ll@{\ }c@{\ }ll}
C_i&=&q_{i1}\vee \dots \vee q_{in} & \quad \text{for } i \in [\frac{n}{2}]\\
D_{i,j,u}&=&\neg q_{iu}\vee \neg q_{ju} & \quad \text{for } i, j \in [\frac{n}{2}], i< j \text{ and } u\in [n]\\
E_{i,u,v}&=&\neg q_{iu}\vee \neg q_{iv} & \quad \text{for } i\in [\frac{n}{2}] \text{ and } u, v\in [n], u<v\\
F_{i,j,u,v}&=&\neg q_{iu}\vee \neg q_{jv}\vee p_{uv} & \quad \text{for } i, j \in [\frac{n}{2}], i\neq j \text{ and } u, v\in [n], u < v.
\end{array}
\]
(For notational convenience, we interpret the assignment $p_{uv}=0$ to
mean that the edge $uv$ is present in the graph.)
We can now express $\clique(n,n/2)$ as a polynomial-size QBF $\exists \vec{q}. A_n(\vec{p},\vec{q})$, where
$$
A_n(\vec{p},\vec{q})=\bigwedge_{i \in [\frac{n}{2}]}C_i\wedge \bigwedge_{i<j,u\in [n]}D_{i,j,u}\wedge \bigwedge_{i\in [\frac{n}{2}],u<v}E_{i,u,v}\wedge \bigwedge_{i\neq j,u< v}F_{i,j,u,v}.
$$
Here the edge variables $\vec{p}$ appear positively in $A_n(\vec{p},\vec{q})$.
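To see concretely that $A_n(\vec{p},\vec{q})$ has polynomial size and that the $\vec{p}$-variables occur only positively, one can enumerate the four clause families directly. The generator below is purely illustrative; the string encoding of literals (\texttt{q\_i\_u}, \texttt{p\_u\_v}, with a leading minus sign for negation) is our own.

```python
def clique_clauses(n):
    """Enumerate the clauses C, D, E, F of A_n(p, q) for a graph on n
    vertices and cliques of size k = n // 2."""
    k = n // 2
    cls = []
    for i in range(1, k + 1):                     # C_i: index i picks a vertex
        cls.append([f'q_{i}_{u}' for u in range(1, n + 1)])
    for i in range(1, k + 1):                     # D: distinct indices pick
        for j in range(i + 1, k + 1):             #    distinct vertices
            for u in range(1, n + 1):
                cls.append([f'-q_{i}_{u}', f'-q_{j}_{u}'])
    for i in range(1, k + 1):                     # E: each index picks at
        for u in range(1, n + 1):                 #    most one vertex
            for v in range(u + 1, n + 1):
                cls.append([f'-q_{i}_{u}', f'-q_{i}_{v}'])
    for i in range(1, k + 1):                     # F: picked pairs constrain
        for j in range(1, k + 1):                 #    the edge variable p_uv
            if i == j:
                continue
            for u in range(1, n + 1):
                for v in range(u + 1, n + 1):
                    cls.append([f'-q_{i}_{u}', f'-q_{j}_{v}', f'p_{u}_{v}'])
    return cls
```

For $n=4$ this yields $2 + 4 + 12 + 12 = 30$ clauses, and no clause contains a negated edge variable.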
Likewise no-$\clique(n,n/2)$ can be written as a polynomial-size QBF $\forall \vec{r_1}\exists \vec{r_2}. B_n(\vec{p},\vec{r_1}, \vec{r_2})$. To construct this we use a polynomial-size circuit that checks whether the nodes specified by $\vec{r_1}$ fail to form a clique in the graph given by $\vec{p}$. We then use existential variables $\vec{r_2}$ for the gates of the circuit and can then form a CNF $B_n(\vec{p},\vec{r_1}, \vec{r_2})$ that represents the circuit computation.
Now we can form a sequence of false QBFs, stating that the graph encoded in $\vec{p}$ both has a clique of size $n/2$ (as witnessed by $\vec{q}$) and likewise does not have such a clique as expressed in the $B$ part:
$$
\Phi_n=\exists \vec{p}\exists \vec{q}\forall \vec{r_1}\exists \vec{r_2}. A_n(\vec{p},\vec{q})\wedge B_n(\vec{p},\vec{r_1}, \vec{r_2}).
$$
This formula has the unique interpolant $\clique(n,n/2)(\vec{p})$. But
since all monotone circuits for this function are of exponential size by
Theorem~\ref{thm:raz}, and since, by Theorem~\ref{theorem2}, monotone circuits of size polynomial
in the size of \irmc and \lquprc proofs can be extracted, all such proofs must be of exponential size, yielding:
\begin{theorem} \label{thm:lower-bound-clique}
The QBFs $\Phi_n(\vec{p},\vec{q},\vec{r})$ require exponential-size proofs in \irmc and \lquprc.
\end{theorem}
Note: A slightly different, and arguably more transparent, way of
encoding no-$\clique(n,n/2)$ is described in \cite{BCMS-FST16}.
\section{Feasible Interpolation vs.\ Strategy Extraction}
\label{sec:strat-extraction}
Recall the two player game semantics of a QBF explained in Section~\ref{sec:prelim}. Every false QBF has a
winning strategy for the universal player, where the strategy value for each
variable depends only on the values of the variables played before. We
now explain
the relation between strategy extraction --- one of the main paradigms
for QBF systems --- and feasible interpolation.
In Section~\ref{sec:interpolation} we studied QBFs of the form
$\mathcal{F}= \exists \vec{p} \mathcal{Q} \vec{q} \mathcal{Q} \vec{r}. \left[A(\vec{p}, \vec{q}) \wedge B(\vec{p}, \vec{r})\right].$ If we add a common universal variable $b$ we can change it to an equivalent QBF
$$
\mathcal{F}^b= \exists \vec{p}\, \forall b\, \mathcal{Q} \vec{q}\, \mathcal{Q} \vec{r}. \left[ (A(\vec{p}, \vec{q})\vee b) \wedge (B(\vec{p}, \vec{r})\vee \neg b) \right].
$$
This can be expressed with a CNF matrix by inserting the literal $b$
into each clause of $A(\vec{p}, \vec{q})$ and the literal $\neg b$
into each clause of $B(\vec{p}, \vec{r})$. Let $\mathcal{F}^b$ also
denote this equivalent QBF.
If $\mathcal{F}$ is false, then also $\mathcal{F}^b$ is false and thus the universal player has a winning strategy, including a strategy for $b=\sigma(\vec{p})$ for the common universal variable $b$.
\begin{remark}
Every winning strategy $\sigma(\vec{p})$ for $b$ is an interpolant for $\mathcal{F}$, i.e., for every $0, 1$ assignment $\vec{a}$ of $\vec{p}$ we have
\begin{align*}
\sigma(\vec{a}) = 0 &\implies \mathcal{Q} \vec{q}. A(\vec{a}, \vec{q}) \text{ is false, and }\\
\sigma(\vec{a}) = 1 &\implies \mathcal{Q} \vec{r}. B(\vec{a}, \vec{r}) \text{ is false. }
\end{align*}
\end{remark}
\begin{proof}
Suppose not. Then there are two possibilities.
\begin{itemize}
\item There is some $\vec{a}$ where $\sigma(\vec{a})=0$ and $\mathcal{Q} \vec{q}. A(\vec{a}, \vec{q})$ is true. Then setting $b=0$ would satisfy $\mathcal{Q} \vec{r}. B(\vec{a}, \vec{r})\vee \neg b$. But $\mathcal{Q} \vec{q}.A(\vec{a}, \vec{q}) \vee b$ is also satisfied. Hence this cannot be part of a winning strategy for the universal player.
\item There is some $\vec{a}$ where $\sigma(\vec{a})=1$ and
$\mathcal{Q} \vec{r}. B(\vec{a}, \vec{r})$ is true. This is the dual
of the above.
\qedhere
\end{itemize}
\end{proof}
This observation means that every interpolation problem can be reformulated as a strategy extraction problem. We will now show that from proofs of these reformulated interpolation problems we can extract a (monotone) Boolean circuit for the winning strategy on the new universal variable $b$.
Strategy extraction was recently shown to be a powerful lower bound technique for QBF resolution systems.
In strategy extraction, from a refutation of a false QBF, winning strategies for the universal player for all universal variables can be efficiently extracted. Devising QBFs that require computationally hard strategies then leads to lower bounds for QBF proof systems.
This technique applies both to \qrc \cite{BCJ15}, where \AC{0} lower bounds for e.g.\ parity are used, as well as to much stronger QBF Frege systems, where the full spectrum of current (and conjectured) circuit lower bounds is employed \cite{BBC16}. In fact, Beyersdorff and Pich \cite{BP16} show that lower bounds for QBF Frege systems can only come either (a) from circuit lower bounds via the strategy extraction technique or (b) from lower bounds for classical propositional Frege. This picture is reconfirmed here as well: QBF resolution lower bounds via feasible interpolation fall under paradigm (a) as they are in fact lower bounds via strategy extraction.
To show this we now prove how to extract strategies for interpolation problems, first for \lquprc and then for \irmc.
\begin{theorem} \label{thm:b-lqu} \hfill
\begin{enumerate}
\item From each \lquprc refutation $\pi$ of $\mathcal{F}^b$ we can extract in polynomial time a Boolean circuit for $\sigma(\vec{p})$, i.e., the part of the winning strategy for variable $b$.
\item If, in the same setting as above for $\mathcal{F}^b$, the variables $\vec{p}$ appear only positively in $A(\vec{p}, \vec{q})$, then we can extract a monotone Boolean circuit for $\sigma(\vec{p})$ from a \lquprc refutation $\pi$ of $\mathcal{F}^b$ in polynomial time (in the size of $\pi$).
\end{enumerate}
\end{theorem}
\begin{proof}
As we can compute the (monotone) interpolant when $b$ is absent, we use the same proof with a few modifications for the new formula.
We first change the definition of $\vec{q}$ and $\vec{r}$-clauses to allow for $b$ and $\neg b$ literals.
\begin{definition}
We call any clause in the proof a $\vec{q}$-clause (resp.\ $\vec{r}$-clause) if it contains only variables $\vec{p}, \vec{q}$ or literal $b$ (resp. $\vec{p}, \vec{r}$ or literal $\neg b$). We retain the inheritance property for clauses only containing $\vec{p}$ variables.
\end{definition}
{\bf Construction of the circuit $C_{\pi}$:}
When constructing the circuit, we now also need to consider a resolution
step on the common universal variable $b$:
$$
\frac{C_v = \overbrace{C_1 \vee U_1 \vee b}^\text{node $v$} \hspace{7mm} \overbrace{C_2 \vee U_2 \vee \neg b}^\text{node $w$} = C_w}{C_u = \underbrace{C_1 \vee C_2 \vee U}_\text{node $u$}}.
$$
Here we can arbitrarily pick one of $v$ or $w$; say we pick $v$ and
let $g_u$ be wired to $g_v$ with the no-operation (identity) gate,
disregarding the input from $g_w$.
{\bf Construction of $\pi'$ and $\pi''$:}
We slightly modify the invariants to include the new definitions. Additionally we make a small change to the first invariant.
\begin{enumerate}
\item $C'_{u,\vec{a}}\setminus\{b, \neg b\} \preceq C_u$.
\item $g_{u}(\vec{a}) = 0 \implies C''_{u,\vec{a}}$ is a $\vec{q}$-clause and can be obtained from the clauses of
$A(\vec{a},\vec{q})$ alone using the rules of \lquprc.
\item $g_{u}(\vec{a}) = 1 \implies C''_{u,\vec{a}}$ is an $\vec{r}$-clause and can be obtained from the clauses of
$B(\vec{a},\vec{r})$ alone using the rules of \lquprc.
\end{enumerate}
Notice also that $b^* \notin C''_{u,\vec{a}}$: the literal $b^*$ could
only arise from a long-distance resolution step on a $\vec{p}$
variable, but these variables are instantiated and so, assuming the
induction hypothesis, never occur as pivots in the proof $\pi''$.
We observe that the base cases work for the construction of $\pi'$ and $\pi''$. The only new part of the inductive step is when we have
$$\frac{C_v = \overbrace{C_1 \vee U_1 \vee b}^\text{node $v$} \hspace{7mm} \overbrace{C_2 \vee U_2 \vee \neg b}^\text{node $w$} = C_w}{C_u = \underbrace{C_1 \vee C_2 \vee U}_\text{node $u$}}.
$$
To find $C'_{u,\vec{a}}$ we look at our choice of wiring in the circuit construction. If $g_u$ is wired to $g_v$ ($g_u=g_v$) then we take $C'_{u,\vec{a}}$ to equal $C'_{v,\vec{a}}$. Since $C'_{v,\vec{a}}\setminus\{b, \neg b\}\preceq C_v\setminus\{b, \neg b\}\preceq C_u$ we get $C'_{u,\vec{a}}\setminus\{b, \neg b\}\preceq C_u$.
Since our choice of the clause is determined by our choice of wiring, the invariants are retained.
Notice that we never resolve a $\vec{q}$-clause with an $\vec{r}$-clause in $\pi''$, so $b, \neg b$ will always be retained in their respective type of clauses.
From the above, we have the following conclusion. Let $r$ be the root of $\pi$. Then on any assignment $\vec{a}$ to the $\vec{p}$ variables we have:
\begin{description}
\item[(1)] $C'_{r,\vec{a}} \backslash \{b, \neg b\} \preceq
C_r = \Box$. Therefore, $C''_{r,\vec{a}}\setminus \{b, \neg
b\} = \Box$. But $C''_{r,\vec{a}}$ can contain at most one
of these literals, which can be universally reduced to
complete a refutation.
\item[(2)] $g_{r}(\vec{a}) = 0 \implies C''_{r,\vec{a}}$ is a $\vec{q}$-clause and can be obtained from the clauses of $A(\vec{a},\vec{q})$ alone using the rules of system \lquprc. Hence by soundness of \lquprc,
$\mathcal{Q} \vec{q}. A(\vec{a}, \vec{q})$ is false.
\item[(3)] $g_{r}(\vec{a}) = 1 \implies C''_{r,\vec{a}}$ is an $\vec{r}$-clause and can be obtained from the clauses of $B(\vec{a},\vec{r})$ alone using the rules of system \lquprc. Hence by soundness of \lquprc,
$\mathcal{Q} \vec{r}. B(\vec{a}, \vec{r})$ is false.
\end{description}
\noindent
Thus $g_r$, the output gate of the circuit, computes $\sigma(\vec{p})$.
\end{proof}
An analogous result to Theorem~\ref{thm:b-lqu} also holds for \irmc.
\begin{theorem} \label{thm:b-irmc} \hfill
\begin{enumerate}
\item From each \irmc refutation $\pi$ of $\mathcal{F}^b$ we can extract in polynomial time a Boolean circuit for $\sigma(\vec{p})$, i.e., the part of the winning strategy for variable $b$.
\item If, in the same setting as above for $\mathcal{F}^b$, the variables $\vec{p}$ appear only positively in $A(\vec{p}, \vec{q})$, then we can extract a monotone Boolean circuit for $\sigma(\vec{p})$ from an \irmc refutation $\pi$ of $\mathcal{F}^b$ in polynomial time (in the size of $\pi$).
\end{enumerate}
\end{theorem}
\begin{proof}
We can use exactly the same constructions as in Theorem~\ref{thm:irm}. The $b$ literals do not affect the argument.
\end{proof}
As a corollary, the versions $\Phi^b_n(\vec{p},\vec{q},\vec{r})$ of the formulas from Section~\ref{sec:lower-bounds} also require exponential-size proofs in \irmc and \lquprc.
\section*{Acknowledgements}
We thank Pavel Pudl\'ak and Mikol\'a\v{s} Janota for helpful
discussions on the relation between feasible interpolation and
strategy extraction during the Dagstuhl Seminar `Optimal
Algorithms and Proofs' (14421).
\newcommand{\etalchar}[1]{$^{#1}$}
\section{Introduction}
The lifetime of the localized surface plasmon (SP) in metal nanostructures is one of the fundamental problems in plasmonics and has been continuously addressed for more than 40 years \cite{kawabata-jpsj66,kreibig-book}. The importance of this issue stems from one of the major objectives of plasmonics -- generation of extremely strong local fields at the nanoscale. The range of physical phenomena and applications related to this goal cuts across physics, chemistry, biology, and device applications. A small sample of examples includes SP-enhanced spectroscopies of molecules near metal nanostructures, such as surface-enhanced Raman scattering (SERS) and fluorescence resonance energy transfer (FRET) and their numerous biomedical applications \cite{duyne-nm06}, focusing of extremely large electric fields in metal nanoparticle (NP) clusters \cite{stockman-prl03}, and quantum coherent SP generation in nanostructures (SPASER) \cite{stockman-prl03-spaser}. High Ohmic losses in bulk metal due to strong electron-phonon interactions impose a limitation on the quantum yield of metal-based plasmonic devices, which can, to some extent, be remedied by reducing the metal component size. However, at length scales below $\sim 10$ nm, new limitations on SP lifetime and, consequently, on quantum yield arise due to quantum-size effects. Among those, the most important is the Landau damping (LD) of the SP -- decay of the SP into an electron-hole pair of the Fermi sea. This process is made possible by relaxation of the momentum conservation due to electron surface scattering. The electron-hole momentum uncertainty, $\delta p \sim \hbar /a$, where $a$ is the characteristic size of the nanostructure, translates into the corresponding energy relaxation scale, $\delta E \sim p\,\delta p/m$, $m$ being the electron mass in metal, and this leads to the following estimate of the SP lifetime,
\begin{align}
\label{ld_old}
\Gamma \sim A\, \dfrac{v_{F}}{a},
\end{align}
where $v_{F}$ is the electron Fermi velocity (hereafter we set $\hbar=1$). The first explicit quantum-mechanical calculation of $\Gamma$ for a spherical NP by Kawabata and Kubo \cite{kawabata-jpsj66} confirmed the estimate (\ref{ld_old}) with $a$ replaced by the particle radius $R$ and, for a NP in vacuum, yielded the value $A=3/4$. While subsequent studies indicated that the coefficient $A$ varies in the range 0.5--2.0, depending on the dielectric constant and chemical composition of the surrounding medium, the size dependence (\ref{ld_old}) was thoroughly confirmed experimentally for \textit{spherical} particles \cite{kreibig-book}. The corresponding quality factor,
\begin{align}
Q=\dfrac{\omega_{sp}}{\Gamma},
\end{align}
where $\omega_{sp}$ is the SP frequency, decreases linearly with decreasing $R$. This causes a rapid reduction of, e.g., the SERS enhancement factor, $M\propto Q^{4}$, while other quantum-size effects, such as electron density spillover or surface screening, play a subdominant role \cite{pustovit-prb06}.
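As a rough numerical illustration of these scales (our own back-of-the-envelope estimate, using assumed textbook-scale values $v_F\approx 1.4\times 10^{6}$ m/s for gold, $A=3/4$, and $\omega_{sp}\approx 2.4$ eV, none of which are results of this paper):

```python
# Back-of-the-envelope estimate of the Landau damping rate
# Gamma = A * v_F / R and the quality factor Q = omega_sp / Gamma
# for a small gold nanosphere.  All input values are assumed, not
# taken from this paper.

HBAR_EVS = 6.582e-16          # hbar in eV*s

def landau_damping_eV(radius_m, A=0.75, v_fermi=1.4e6):
    """Landau damping rate (in eV) for a sphere of the given radius (m)."""
    return HBAR_EVS * A * v_fermi / radius_m

def quality_factor(radius_m, omega_sp_eV=2.4, **kw):
    """Quality factor Q = omega_sp / Gamma for the same sphere."""
    return omega_sp_eV / landau_damping_eV(radius_m, **kw)
```

For $R = 5$ nm this gives $\Gamma \sim 0.14$ eV and $Q \sim 17$, showing that Landau damping alone already limits the quality factor of small particles.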
The above qualitative arguments have subsequently been used for estimates of the SP lifetime in nanostructures of various shapes, notably in metal nanoshells \cite{halas-prl97}. The \textit{single-electron} picture of SP damping, based solely on surface scattering, inevitably leads to an SP damping rate of the form (\ref{ld_old}), with $a$ replaced by the \textit{smallest} spatial scale of a confined nanostructure. For example, in a nanoshell with shell thickness $d$, the characteristic electron scattering length is $\sim d$, and Eq.~(\ref{ld_old}) predicts a very short SP lifetime for thin nanoshells. First attempts to measure the SP lifetime in nanoshells seemed to be consistent with Eq.~(\ref{ld_old}), with estimated values $A=2.0$ \cite{halas-prb02} and $A=0.5$ \cite{klar-nl04}. However, more recent systematic studies of optical spectra of single gold nanoshells indicated no size dependence at all ($A=0$) for the extracted SP lifetime \cite{halas-nl04}.
There is also a physical argument that renders Eq.~(\ref{ld_old}) invalid for nanostructures of general shape. Indeed, LD is a \textit{many-body} phenomenon, rather than a single-particle one, and it involves the SP's own electric field, by which an electron-hole pair is excited. Therefore, the SP lifetime must be highly sensitive to the field \textit{magnitude} inside the NP. Note that for a solid sphere, the SP electric field magnitude is a frequency-dependent constant that is independent of the NP radius. This is the reason why Eq.~(\ref{ld_old}) holds remarkably well for spherical NP in a very wide range of radii, from several tens of nm down to nanometer-sized small clusters \cite{kreibig-book}. In contrast, for other NP shapes, the magnitude of the SP electric field inside the NP can change with NP size. As we show below, the latter effect can, in fact, be much stronger than the surface scattering contribution, resulting in an SP damping rate drastically different from Eq.~(\ref{ld_old}).
In this paper, we calculate the LD of bright SP modes in a metal nanoshell with a dielectric core. We demonstrate that in the limit of a \textit{thin} nanoshell, the LD rate \textit{vanishes} linearly with the nanoshell thickness $d$. Furthermore, for small overall nanoshell sizes, the SP lifetime exhibits \textit{quantum beats} as a function of shell thickness, caused by the interference between electron scattering amplitudes near the inner and outer nanoshell boundaries. We also show that despite the redshift of the SP frequency, the nanoshell quality factor \textit{increases} with decreasing $d$. The latter finding indicates that, with an appropriate choice of geometry, the quantum-size limitations imposed on plasmonics can be circumvented, and the quantum yield of even small plasmonic devices can be made limited only by Ohmic losses.
In Section \ref{sec:general} we obtain a formal expression for the LD rate in terms of plasmon eigenmodes in a metal nanoshell with a dielectric core. In Section \ref{sec:u} we evaluate the nanoshell internal energy, and in Section \ref{sec:q} we evaluate the dissipated power by calculating the imaginary part of the electron polarization operator in the semiclassical approximation. The analysis and numerical results are presented in Section \ref{sec:disc}, and Section \ref{sec:conc} concludes the paper.
\section{Plasmon eigenmodes in a nanoshell}
\label{sec:general}
We consider a metal nanoshell with inner and outer radii $ R_{1} $ and $ R_{2} $, respectively, in a medium with dielectric constant $\varepsilon_{d}$. In the long-wavelength approximation, the local potential $ \Phi(\textbf{r}) $ is determined from the self-consistent equation \cite{mahan-book}
\begin{equation}
\label{rpa}
\Phi(\textbf{r}) = \varphi(\textbf{r}) + \int d{\bf r}_{1} d{\bf r}_{2}U(\textbf{r} -\textbf{r}_{1}) P(\textbf{r}_{1}, \textbf{r}_{2}) \Phi(\textbf{r}_{2}),
\end{equation}
where $ \varphi \propto r^{L} Y_{L M}(\hat{\textbf{r}})$ is an external potential ($Y_{L M}(\hat{\textbf{r}})$ are spherical harmonics), $ P = P^{\prime } + i P^{\prime \prime}$ is the polarization operator, and $ U(r) $ is the Coulomb potential. We are interested in plasmonic eigenmodes that are described by Eq.~(\ref{rpa}) without the external potential, which we write as
\begin{equation}
\label{poisson}
(1+ 4 \pi \hat{\chi}) \Phi = 0,
\end{equation}
where $\hat{\chi} $ is a susceptibility operator defined as
\begin{equation}
\label{succept}
\hat{\chi} \Phi({\bf r}) = e^2 \int d {\bf r}_1 d {\bf r}_2 G({\bf r} - {\bf r}_1)
P({\bf r}_1, {\bf r}_2) \Phi({\bf r}_2),
\end{equation}
and $G(r) = - U(r)/4 \pi e^2= -1/4\pi r$. For optical frequencies, the real part of the polarization operator can be approximated by its local limit and decomposed into the core, metal and outer dielectric contributions, $P'=P'_c+P'_m+P'_d$, with
\begin{align}
P'_{i}({\bf r}, {\bf r}') = e^{-2}\nabla \left[ \chi_{i}(r) \nabla \delta ({\bf r} - {\bf r}')\right],
\end{align}
where $ \chi_{c}(r) = \chi_c \theta(R_1 - r)$, $ \chi_{m}(r) = \chi_m \theta(r - R_1) \theta(R_2 - r)$, $ \chi_{d}(r) = \chi_d \theta(r - R_{2})$ are local susceptibilities, $ \chi_{i} = (\varepsilon_{i} - 1)/4 \pi $.
In the absence of damping, plasmon energies are obtained by setting $P^{\prime\prime}=0$ and solving Eq.~(\ref{poisson}) in the basis of eigenfunctions of the composite operator $\hat{\chi}'=\hat{G}\hat{P}'$: $\hat{\chi}'F_{LM}=\lambda_LF_{LM}$. The eigenvalues are found by setting
\begin{eqnarray}
\label{lambda}
1 + 4 \pi \lambda_L = \varepsilon_{m } + B \pm \sqrt{B^{2} + C} = 0,
~
\end{eqnarray}
where
\begin{align}
B = \frac{L \varepsilon_{c m} - (L+1) \varepsilon_{m d}}{2 (2 L +1)},
\qquad
C = \frac{L(L+1)}{(2L+1)^2}(1-\kappa^{2 L +1})\varepsilon_{md}\varepsilon_{c m},
\end{align}
$\kappa=R_{1}/R_{2}$ is the shell aspect ratio, and we denote $\varepsilon_{\alpha \beta} = \varepsilon_{\alpha} -\varepsilon_{\beta}$. The eigenfunctions $F_{LM}=F_{L}(r)Y_{LM}(\hat{\textbf{r}})$ are obtained from continuity of $F_{L}(r)$ across all interfaces
\begin{eqnarray}
\label{eigen}
F_{L}^{(c)}=\left( \kappa^L + \eta_{L}\right) \left( \frac{r}{R_1}\right) ^L,
~~
F_{L}^{(m)}=\left( \frac{r}{R_2}\right) ^L +\eta_{L}\left(\frac{R_1}{r}\right)^{L+1},
\nonumber\\
F_{L}^{(d)}=\left( 1+\kappa^{L+1} \eta_{L}\right) \left(\frac{R_2}{r}\right)^{L+1},
~~
\eta_L
=
- \frac{L\varepsilon_{cm}\kappa^{L}}{L\varepsilon_{cm}+(2L+1)\varepsilon_{m}}.
\end{eqnarray}
Note that the $F_{LM}$ satisfy the orthogonality condition $\langle \mu|\hat{P}'|\nu\rangle=\delta_{\mu\nu}P'_\mu$; here $\mu$ is a composite index $(LMj)$, where $j=\pm$ in Eq.~(\ref{lambda}) is the plasmon type (bright or dark, respectively).
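As a consistency check of Eq.~(\ref{lambda}), the sketch below (function names and the sign convention are ours) evaluates $1+4\pi\lambda_{L}$ from $B$ and $C$, verifies that for $\varepsilon_{c}=\varepsilon_{d}=1$ it reduces to the vacuum form $\lambda_{L}=A_{L}(\varepsilon_{m}-1)/4\pi$ quoted in Section \ref{sec:u}, and checks that the bright dipole branch at $\kappa=0$ recovers the Fr\"ohlich condition $\varepsilon_{m}+2=0$.

```python
import math

# Evaluate 1 + 4*pi*lambda_L of Eq. (lambda) and compare with the vacuum form.
# 'sign' = +1 selects the bright branch, -1 the dark branch (our convention,
# valid for eps_m < 1, i.e. below the bulk plasmon frequency).

def one_plus_4pi_lambda(eps_m, eps_c, eps_d, kappa, L, sign):
    """1 + 4*pi*lambda_L = eps_m + B +/- sqrt(B^2 + C)."""
    e_cm, e_md = eps_c - eps_m, eps_m - eps_d
    B = (L * e_cm - (L + 1) * e_md) / (2 * (2 * L + 1))
    C = L * (L + 1) / (2 * L + 1) ** 2 * (1 - kappa ** (2 * L + 1)) * e_md * e_cm
    return eps_m + B + sign * math.sqrt(B * B + C)

def A_vacuum(kappa, L, sign):
    """A_L = (1/2)[1 -/+ sqrt(1 + 4L(L+1)kappa^(2L+1))/(2L+1)] for eps_c = eps_d = 1."""
    return 0.5 * (1 - sign * math.sqrt(1 + 4 * L * (L + 1) * kappa ** (2 * L + 1)) / (2 * L + 1))
```

The equality of the two forms for $\varepsilon_{c}=\varepsilon_{d}=1$ follows from $\sqrt{B^{2}+C}=(1-\varepsilon_{m})\sqrt{1+4L(L+1)\kappa^{2L+1}}/2(2L+1)$, which holds for $\varepsilon_{m}<1$.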
Plasmon decay into electron-hole pairs is described by incorporating the imaginary part of polarization operator into Eq.~(\ref{poisson}), $\hat{\chi}=\hat{G}\hat{P}=\hat{G}\hat{P}^{\prime}+i\hat{G}\hat{P}^{\prime\prime}$, as follows.
First, Eq.~(\ref{poisson}) is multiplied from the left by $\langle \mu |\hat{P}'$ and, using that $\hat{P}'\hat{\chi}=\hat{\chi}'\hat{P}$, we obtain
\begin{align}
\left(1+4\pi\lambda_{\mu}\right)\langle \mu |\hat{P}^{\prime}|\Phi\rangle +4\pi i\lambda_\mu \langle \mu |\hat{P}^{\prime\prime}|\Phi\rangle= 0.
\end{align}
Then, expanding the potential over eigenstates of $\hat{\chi}'$, $\Phi=\sum C^{j}_{L} F_{LM}^{j}$, we get
\begin{eqnarray}
\label{C}
(1+4\pi\lambda_{L}^{j})P_L^{\prime j}C_L^j+4\pi i \lambda_{L}^{j} \sum\limits_{L'j'}P_{LL'}^{\prime\prime jj'}C_{L'}^{j'}= 0,
\end{eqnarray}
where
\begin{align}
P^{\prime\prime jj'}_{LL'} = \frac{1}{2L+1} \sum\limits_{M M'} \langle L M j | P^{\prime\prime } | L' M' j' \rangle
\end{align}
is the reduced, due to spherical symmetry, matrix element of $\hat{P}^{\prime\prime}$. Admixture of plasmon modes by $ P^{\prime\prime} $ being weak, the non-diagonal terms in Eq.~(\ref{C}) can be neglected. The new resonance condition takes the form $ 1+4\pi(\lambda_{L}^{j}+\delta\lambda_{L}^{j}) =0$, where $ \delta\lambda_L^{j}=i\lambda_L^{j}P^{\prime\prime jj}_{LL}/P^{\prime j}_{L} $ is an \textit{imaginary} correction to the eigenvalue. Its appearance implies a shift of eigenmode frequency into the complex plane, $\lambda_L(\omega_L + i \Gamma_{L}/2) = \lambda_{L}(\omega_L) + i (\Gamma_{L}/2) \left[\partial\lambda_L(\omega_L)/\partial \omega_{L}\right]$. Equating the last term to $\delta \lambda_L$, we obtain the following expression for the decay rate
\begin{equation}
\label{LD}
\Gamma_{L} = \frac{2\lambda_L P^{\prime\prime}_{L L} (\omega_L)}{\frac{\partial \lambda_L}{\partial\omega_L} P_{L}^{\prime}(\omega_L)} = \frac{{\cal Q}_{L}}{{\cal U}_{L}},
\end{equation}
where we suppressed bright/dark indices. Here $ {\cal Q}_{L} $ is the dissipated power of plasmon eigenmode,
\begin{align}
{\cal Q}_{L} = -\omega_{L} P_{LL}^{\prime\prime} = {\rm Re}\int d\textbf{r}\, \textbf{J}_{LM}\cdot \textbf{E}_{LM},
\end{align}
$ \textbf{E}_{LM} =-\nabla F_{LM}$ and $ \textbf{J}_{LM}$ being the plasmon electric field and the current, respectively, and
\begin{equation}
\label{energy}
{\cal U}_{L} = - \frac{\omega_{L}}{2 \lambda_{L}}\frac{\partial \lambda_L}{\partial \omega_{L}} P_{L}^{\prime}
=\frac{1}{8\pi}\frac{\omega_{L}}{\lambda_{L}}\frac{\partial \lambda_L}{\partial \omega_{L}}
\int d{\bf r}\left[ \varepsilon(r) - 1\right] {\bf E}_{LM}^{2},
\end{equation}
is the plasmon eigenmode energy stored in a metal-dielectric composite.
\section{Calculation of eigenmode energy}
\label{sec:u}
Consider first the plasmon eigenmode energy, ${\cal U}_{L}$. The electric field integral in Eq.~(\ref{energy}) reduces to a boundary contribution,
\begin{align}
\int d{\bf r}\left[ \varepsilon(r) - 1\right] {\bf E}_{LM}^{2}=-4\pi \left [ R_{1}^{2} F_{L}(R_{1}) \sigma_{L}^{(1)} + R_{2}^{2} F_{L}(R_{2})\sigma_{L}^{(2)}\right ],
\end{align}
where $\sigma_{L}^{(i)}$ are surface charges at \emph{i}th interface. Those are given by $\sigma_{L}^{(1)}=\epsilon_{cm}{\cal E}_{L}(R_{1})/4\pi \epsilon_{c}$ and $\sigma_{L}^{(2)}=\epsilon_{md}{\cal E}_{L}(R_{2})/4\pi \epsilon_{d}$, where ${\cal E}_{L}(R_{i})=-\partial F_{L}(R_{i})/\partial R_{i}$ are the interfacial electric fields (on metal side). The prefactor in Eq.~(\ref{energy}) can be split as
\begin{align}
\frac{\omega_{L}}{\lambda_{L}}\frac{\partial \lambda_L}{\partial \omega_{L}}=-A_{L}\omega_{L}(\partial \epsilon'_{m}/\partial \omega_{L}),
\end{align}
where
\begin{align}
A_{L}=-\lambda_{L}^{-1}\left (\partial \lambda_L/\partial \epsilon_{m}\right )=4\pi \left (\partial \lambda_L/\partial \epsilon_{m}\right ).
\end{align}
For the metal shell, we use the Drude dielectric function $\varepsilon_{m}=\varepsilon_{db}-\omega_{p}^{2}/\omega(\omega+i\gamma)$, where $\varepsilon_{db}$ is the interband dielectric function due to \textit{d}-band to \textit{sp}-band transitions, $\omega_{p}$ is the bulk plasmon frequency, and $\gamma$ is the bulk damping rate. Then we have $\omega_{L} (\partial \varepsilon^{\prime}_{m}/\partial \omega_{L})=2\left( \omega_{L}/\omega_{p}\right)^{2}\left[\left( \omega_{L}/\omega_{p}\right)^{2}+\left( \gamma/\omega_{p}\right)^{2}\right] ^{-2}\approx 2(\omega_{p}/\omega_{L})^{2}$ for optical frequencies ($\omega_L\gg\gamma$). Putting it all together, we finally obtain
\begin{eqnarray}
\label{energy-final}
{\cal U}_{L}
=
\frac{A_{L}}{4\pi e^{2}}\left( \frac{\omega_{p}}{\omega_{L}}\right) ^{2}\left[\frac{\epsilon_{md}\epsilon_{m}}{L+1} \, {\cal E}_{L}^{2}(R_{2})R_{2}^{3} + \frac{\epsilon_{mc}\epsilon_{m}}{L}\, {\cal E}_{L}^{2}(R_{1})R_{1}^{3}\right ],
\end{eqnarray}
where we used ${\cal E}_{L}(R_{1})=-L\left (\varepsilon_{c}/\varepsilon_{m}R_{1}\right)F_{L}(R_{1})$ and ${\cal E}_{L}(R_{2})=(L+1)\left (\varepsilon_{d}/\varepsilon_{m}R_{2}\right )F_{L}(R_{2})$.
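The first of these interfacial-field relations follows directly from the continuity conditions at the inner interface and holds identically for the eigenfunctions (\ref{eigen}); the sketch below (our own, with a central finite difference for the derivative) verifies this numerically.

```python
# Check E_L(R_1) = -L*(eps_c/(eps_m*R_1))*F_L(R_1), with E_L = -dF_L^(m)/dr
# taken on the metal side, for the eigenfunctions of Eq. (eigen).

def inner_field_check(L, eps_c, eps_m, R1, R2, h=1e-6):
    kappa = R1 / R2
    eta = -L * (eps_c - eps_m) * kappa ** L / (L * (eps_c - eps_m) + (2 * L + 1) * eps_m)
    F_m = lambda r: (r / R2) ** L + eta * (R1 / r) ** (L + 1)  # shell-region eigenfunction
    E_numeric = -(F_m(R1 + h) - F_m(R1 - h)) / (2 * h)         # E = -dF/dr at r = R1
    E_formula = -L * eps_c / (eps_m * R1) * F_m(R1)
    return E_numeric, E_formula

print(inner_field_check(1, 4.0, -7.0, 0.6, 1.0))  # both values approximately 1.2
```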
The dependence of the dipole SP energy on the aspect ratio is plotted in Fig.~\ref{fig:energy} for a gold nanoshell in vacuum, and for a gold nanoshell with an Au$_{2}$S core in water. Importantly, with increasing core radius, the stored energy remains comparable to (vacuum) or exceeds (dielectric core) that in a solid particle. This can be understood by noting that a nanoshell is analogous to a (high-frequency) capacitor formed by two concentric spherical surfaces. For the bright dipole mode, the opposite charges are located in \textit{different} hemispheres, so the energy is proportional to the core-shell particle volume; a dielectric core then leads to an increase of the stored energy. In contrast, for dark modes, the opposite charges are placed at the inner and outer boundaries, so the energy vanishes as the volume between the capacitor plates decreases.
Simple expressions can be obtained for a metal shell in vacuum, $\varepsilon_{c}=\varepsilon_{d}=1 $. The corresponding eigenvalues are given by
\begin{align}
\lambda_{L} = A_{L}(\varepsilon_{m}-1)/4\pi, \qquad A_{L} = \frac{1}{2} \left[ 1 \mp \sqrt{1+4L(L+1)\kappa^{2L+1}}/(2L+1)\right],
\end{align}
yielding the well-known resonance frequencies $\omega_{L}/\omega_{p}=\sqrt{A_{L}\left[ 1+A_{L}(\varepsilon_{db}-1)\right]^{-1} }$. In this case, the integration in Eq.~(\ref{energy}) is effectively restricted to the metal region and we recover the known expression for the time-averaged internal energy in dispersive media
\begin{eqnarray}
\label{energy-metal}
{\cal U}_{L} = \frac{\omega_{L} }{8\pi} \frac{\partial \varepsilon_{m}^{\prime}}{\partial \omega_{L}}\int\limits_{m} d\textbf{r}\textbf{E}_{LM}^{2} =\frac{1}{8\pi} \int d\textbf{r} \left[ \frac{\partial \omega_{L}\varepsilon^{\prime}(\omega_{L},r)}{\partial \omega_{L}}\right] \textbf{E}_{LM}^{2}
\end{eqnarray}
where we used $\nabla\cdot (\varepsilon \textbf{E}_{LM})=0$. Then Eq.~(\ref{energy-final}) can be evaluated using eigenfunctions (\ref{eigen}) with $\eta_{L}=L\kappa^{L}\left[ L+1-(2L+1)A_{L}\right] ^{-1}$. The result reads
\begin{eqnarray}
\label{WS-final}
{\cal U}_{L} = \left(1-\kappa^{2L+1} \right)\frac{LR_2}{4\pi e^{2}}\left( \frac{\omega_{p}}{\omega_{L}}\right) ^{2}\left[ 1+\frac{L(L+1)\kappa^{2L+1}}{\left[ L+1-(2L+1)A_{L}\right] ^{2} }\right].
\end{eqnarray}
For thin shells, $d/R_2=(1-\kappa)\ll 1$, we have $(\omega_{L}/\omega_{p})^{2}\approx A_{L}\approx (1-\kappa)L(L+1)/(2L+1)$ (for the bright plasmon mode), so that ${\cal U}_{L}\approx R_{2}(2L+1)^{3}/4\pi e^{2}(L+1)^{2}$, while for a solid particle we have
${\cal U}_{L}^{p} = R_{2}(L\varepsilon_{db}+L+1)/4\pi e^{2}$, yielding ${\cal U}_{1}/{\cal U}_{1}^{p}\approx 0.6$ [note that ${\cal U}_{L}$ vanishes in the d.c. limit $\omega/\gamma\lesssim 1$, corresponding to $d/R_{2}\lesssim (\gamma/\omega_{p})^{2}$].
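These vacuum-shell results are easy to check numerically; the sketch below (our own, in units $e=1$, $R_{2}=1$) implements the bright-mode $A_{L}$, the resonance frequency, and Eq.~(\ref{WS-final}), and confirms the thin-shell asymptote as well as the ratio ${\cal U}_{1}/{\cal U}_{1}^{p}\approx 0.6$ for an assumed interband constant $\varepsilon_{db}\approx 9.5$ (a representative value for gold).

```python
import math

# Vacuum-shell energetics in units e = 1 (bright mode throughout).

def A_bright(kappa, L):
    """Bright-mode A_L for eps_c = eps_d = 1."""
    return 0.5 * (1 - math.sqrt(1 + 4 * L * (L + 1) * kappa ** (2 * L + 1)) / (2 * L + 1))

def omega_ratio(kappa, L, eps_db=1.0):
    """omega_L/omega_p = sqrt(A_L/(1 + A_L*(eps_db - 1)))."""
    A = A_bright(kappa, L)
    return math.sqrt(A / (1 + A * (eps_db - 1)))

def U_shell(kappa, L, R2=1.0, eps_db=1.0):
    """Stored energy, Eq. (WS-final)."""
    A = A_bright(kappa, L)
    wp_over_wL2 = (1 + A * (eps_db - 1)) / A                 # (omega_p/omega_L)^2
    bracket = 1 + L * (L + 1) * kappa ** (2 * L + 1) / (L + 1 - (2 * L + 1) * A) ** 2
    return (1 - kappa ** (2 * L + 1)) * L * R2 / (4 * math.pi) * wp_over_wL2 * bracket

def U_thin(L, R2=1.0):
    """Thin-shell asymptote R2*(2L+1)^3 / (4*pi*(L+1)^2)."""
    return R2 * (2 * L + 1) ** 3 / (4 * math.pi * (L + 1) ** 2)

def U_solid(L, R2=1.0, eps_db=1.0):
    """Solid-particle energy R2*(L*eps_db + L + 1)/(4*pi)."""
    return R2 * (L * eps_db + L + 1) / (4 * math.pi)
```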
\section{Calculation of dissipated power}
\label{sec:q}
We now turn to the calculation of the dissipated power, ${\cal Q}_{L}=-\omega_{L}P_{LL}^{\prime\prime}$. Within the RPA for the imaginary part of the electron polarization operator, it is given by
\begin{align}
\label{Q0}
{\cal Q}_{L}=\frac{2\pi\omega_{L}}{2L+1}\sum\limits_{M\alpha\alpha'}|\langle \alpha | F_{L M}| \alpha' \rangle |^2 \delta(\epsilon_{\alpha} - \epsilon_{\alpha'}+ \omega_{L} ),
\end{align}
where the sums run over occupied ($\alpha$) and unoccupied ($\alpha'$) electronic states with wave-functions $\psi_{\alpha}({\bf r})=\psi_{nl}(r)Y_{lm}(\hat{\bf r})$ and energies $\epsilon_{nl}$ ($n$ is radial quantum number), and the factor 2 accounts for the spin degeneracy. The angular part factorizes out and Eq.~(\ref{Q0}) can be written as
\begin{eqnarray}
\label{Q}
{\cal Q}_{L}=2\pi\omega_{L}\sum\limits_{nn'll'}a_{ll'}^{L}|M_{nl,n'l'}^{L}|^2
\delta(\epsilon_{nl} - \epsilon_{n'l'}+\omega_{L}),
\end{eqnarray}
where $M_{nl,n'l'}^{L}=\langle nl | F_{L}| n'l' \rangle$ and $ a_{ll'}^{L}=\frac{1}{2L+1}\sum\limits_{Mmm'}\left|\int d {\bf \Omega} Y_{L M}Y^{*}_{l m} Y_{l' m'} \right|^2$ are the radial and angular contributions, respectively. The latter is non-zero only for $l=l'\pm L$, and for typical $l,l'\gg L$, can be approximated as $a_{ll'}^{L}\approx \delta_{ll'}l/2\pi$.
Let us first qualitatively discuss the LD of the $L=1$ SP in a solid particle of radius $R$. In a finite system, the dipole matrix element, $\langle nl|r|n'l'\rangle=\frac{1}{m\omega_{1}}\langle nl |p|n'l'\rangle$, is non-zero because the momentum is not a good quantum number. For low-energy transitions across the Fermi level ($ \omega_{L} \ll E_{F} $) we have $\langle nl|p|n'l'\rangle\sim p_{F}/(n-n')\sim \epsilon_{0}p_{F}/\omega_{1} $, where $\epsilon_{0}$ is the separation between levels at the Fermi level with the \textit{same} (or fixed difference of) $l$ ($\epsilon_{0}= v_F/R$ for a solid particle). We then obtain $\langle nl|r|n'l'\rangle^{2}\sim(\epsilon_{0}p_{F})^{2}/m^{2}\omega_{1}^{4}$. The sums over $n,n'$ in Eq.~(\ref{Q}) are transformed into an energy integral over a strip of width $\omega_{1}$ around the Fermi level, resulting in a factor $\omega_{1}\rho_{l}^{2}$, where $\rho_{l}\sim \epsilon_{0}^{-1}$ is the partial density of states (DOS) at the Fermi level. Finally, the sum over angular momenta is dominated by the maximal $l$ in a quantum well of width $R$, $l_{max}\sim p_{F}R$, contributing another factor of $(p_{F}R)^2$. Putting it all together, we obtain ${\cal Q}_{1}\sim R^{2}\left( E_{F}/\omega_{1}\right) ^{2}$. Note that the level spacing $\epsilon_{0}$ \textit{cancels out}, and the dissipated power is determined by the nanoparticle \textit{surface area}. At the same time, the energy of the dipole plasmon (with $F_{1}=r$) is proportional to the particle \textit{volume}, ${\cal U}_{1}\sim \left (R^{3}/e^{2}\right )\left (\omega_{p}/\omega_{1}\right )^{2}$, yielding $\Gamma\sim \left ({E_{F}}/{\omega_{p}}\right )^{2}\left( {e^2}/{R}\right) \sim {v_{F}}/{R}$. We emphasize that the $1/R$ dependence of $\Gamma$ originates from the distribution of surface charges rather than from quantum-size corrections.
The dissipated power is thus proportional to the surface area available for electron scattering, but it is insensitive to the structure of the electron energy spectrum, as indicated by the cancellation of the electronic DOS in the transition matrix elements. A similar cancellation takes place for the shell geometry, so that ${ \cal Q}_{L} $ is also proportional to the surface area. On the other hand, the bright plasmon mode energy $ { \cal U}_{L} $ is proportional to the core-shell particle volume, so one would expect a size dependence of $\Gamma$ similar to that for a solid particle of the same overall size. However, as we show below, LD in thin nanoshells is even weaker due to the enhancement of screening, caused by the SP frequency redshift, that pushes the bright SP electric field out of the metal shell region.
To evaluate ${\cal Q}_{L}$, we represent the nanoshell confining potential as a 3D quantum well with hard boundaries at $R_{1}$ and $R_{2}$, $V(r)=V_{0}\left[\theta(R_{1}-r)+\theta(r - R_{2})\right]$. The matrix element of the plasmon potential $F_{LM}$ is dominated by the surface contribution, $\langle\alpha|F_{LM}|\alpha'\rangle=\frac{1}{\omega_{L}^{2}}\langle\alpha|\left[H,\left[H,F_{LM}\right]\right]|\alpha'\rangle\approx\frac{1}{m\omega_{L}^{2}} \langle\alpha|\nabla F_{LM}\cdot\nabla V|\alpha'\rangle $, and, after separating out the angular part, takes the form $M_{nl,n'l'}^{L}=\frac{V_{0}}{m\omega_{L}^{2}}\left [\psi_{nl}(R_{1})\psi_{n'l'}(R_{1}){\cal E}_{L}(R_{1})-\psi_{nl}(R_{2})\psi_{n'l'}(R_{2}){\cal E}_{L}(R_{2})\right ]$. For a deep enough well, matching of the wave-functions across the well boundaries gives $\sqrt{2mV_{0}}\psi_{nl}(R_{i})\approx -\psi'_{nl}(R_{i})$ (here the prime stands for the derivative), and the matrix element takes the form
\begin{equation}
\label{matrix}
M_{nl,n'l'}^{L}=\frac{1}{2m^{2}\omega_{L}^{2}}\left [\psi'_{nl}(R_{1})\psi'_{n'l'}(R_{1}){\cal E}_{L}(R_{1})-\psi'_{nl}(R_{2})\psi'_{n'l'}(R_{2}){\cal E}_{L}(R_{2})\right ].
\end{equation}
The two terms describe excitation, by the SP electric field, of a Fermi sea electron-hole pair with the momentum transfer to inner and outer boundaries. Correspondingly, ${\cal Q}_{L}$ can be decomposed as ${\cal Q}_{L}={\cal Q}_{L}^{11}+{\cal Q}_{L}^{22}-2{\cal Q}_{L}^{12}$, where
\begin{equation}
\label{Qij}
{\cal Q}_{L}^{ij}=
\frac{1}{4m^{4}\omega_{L}^{3}}{\cal E}_{L}(R_{i}){\cal E}_{L}(R_{j})\sum\limits_{lnn'} \psi'_{nl}(R_{i})\psi'_{n'l}(R_{i})\psi'_{nl}(R_{j})\psi'_{n'l}(R_{j})\delta(\epsilon_{nl} - \epsilon_{n'l}+\omega_{L}),
\end{equation}
and we used that $a_{ll'}^{L}\approx \delta_{ll'}l/2\pi$. For typical electron energies $\epsilon_{nl}\sim E_{F}$, we can use the semiclassical approximation for the electron wave-functions, $\psi_{nl}(r)=\sqrt{4m/p_{l}\tau_{l}}\sin\int\limits_{r}^{R_{2}}p_{l}dr$, where $p_{l}(\epsilon,r)=\sqrt{2m\epsilon-(l+1/2)^{2}/r^{2}}$, and $\tau_{l} (\epsilon)$ is the period of classical motion between the two turning points. Consider first the outer surface contribution, ${\cal Q}_{L}^{22}$. In this case $\psi'_{nl}(R_{2})=-\sqrt{4mp_{l}(R_{2})/\tau_{l}}$. When the plasmon energy $ \omega_{L} $ is larger than the fixed-$ l $ level spacing in the nanoshell, $ \epsilon_{0}=v_{F}/d $, the sums in Eq.~(\ref{Qij}) can be replaced by integrals, $\sum\limits_{n}\rightarrow \int d\epsilon \rho_{l}({\epsilon)}$ (with $\epsilon < E_{F}, \epsilon' > E_{F}$), where the partial DOS $\rho_{l}(\epsilon)={\partial n}/{ \partial\epsilon_{nl}}$ is related to the classical period as $\rho_{l}=\tau_{l}/2\pi$. The result reads
\begin{eqnarray}
\label{Q22}
{\cal Q}_{L}^{22}=
\frac{{\cal E}_{L}^{2}(R_{2})}{\pi^2m^{2}\omega_{L}^{3}}
\sum\limits_{l} l \int\limits_{E_{F}-\omega_{L}}^{E_{F}} d\epsilon p_{l}(\epsilon,R_{2})p_{l}(\epsilon+\omega_{L},R_{2}).
\end{eqnarray}
Note that the partial DOS cancels out and ${\cal Q}_{L}^{22}$ depends only on the electron and hole velocities at the outer surface $R_{2}$. In the energy integral, the integration variable is first shifted as $\epsilon\rightarrow E_{F}+\epsilon-\omega_{L}/2$, where $\epsilon$ now changes in the interval $(-\omega_{L}/2, \omega_{L}/2)$, and then rescaled to $x=\epsilon/\omega_{L}$. The sum over $l$ is replaced by an integral restricted by the maximal value $l\sim p_{F}R_{2}$ that is determined by the condition $p_{l}(\epsilon,R_{2})\geq 0$. After rescaling to $s=l^{2}/(p_{F}R_{2})^{2}$, it contributes a factor proportional to the outer surface area. The result reads
\begin{eqnarray}
\label{Q22-final}
{\cal Q}_{L}^{22}=
\frac{E_{F}^{2}R_{2}^{2}}{\pi^2\omega_{L}^{2}} {\cal E}_{L}^{2}(R_{2})g(\omega_{L}/E_{F}),
\end{eqnarray}
where $g(\xi)=2\int\limits_{-1/2}^{1/2}dx\int ds f(\xi,x,s)$, with $f(\xi,x,s)=\sqrt{\left (1+\xi x -s\right )^{2}-\xi^{2}/4}$, is a dimensionless function of order unity, normalized to $g(0)=1$.
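The normalization $g(0)=1$ and the $O(1)$ magnitude of $g$ can be verified by direct quadrature; in the sketch below the $s$-integration runs over the region where $f$ is real, $0\leq s\leq 1+\xi x-\xi/2$, which is our reading of the implicit limits.

```python
import math

# Midpoint-rule quadrature of g(xi) = 2*Int dx Int ds sqrt((1+xi*x-s)^2 - xi^2/4),
# with x in (-1/2, 1/2) and s in (0, 1 + xi*x - xi/2), where the integrand is real.

def g(xi, nx=200, ns=400):
    total = 0.0
    for i in range(nx):
        x = -0.5 + (i + 0.5) / nx
        s_max = 1 + xi * x - xi / 2
        ds = s_max / ns
        for j in range(ns):
            s = (j + 0.5) * ds
            total += math.sqrt((1 + xi * x - s) ** 2 - xi ** 2 / 4) * ds
    return 2 * total / nx

print(g(0.0), g(0.2))   # g(0) = 1 by construction; g stays of order unity
```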
Turning to the inner surface term, ${\cal Q}_{L}^{11}$, the main contribution to the r.h.s. of Eq.~(\ref{Qij}) comes from the terms where the classical turning point lies outside of the potential well, i.e., $p_{l}(\epsilon,R_{1})\geq 0$ (otherwise $\psi_{nl}(R_{1})$ are exponentially small). In this case, we have $\psi'_{nl}(R_{1})=-(-1)^{n}\sqrt{4mp_{l}(R_{1})/\tau_{l}}$, where the sign factor $(-1)^{n} $ accounts for the \emph{parity} of the electron wave-function with $n-1$ nodes between $R_{1}$ and $R_{2}$. The rest of the calculation is carried out in a similar way, and the result,
\begin{eqnarray}
\label{Q11-final}
{\cal Q}_{L}^{11}=
\frac{E_{F}^{2}R_{1}^{2}}{\pi^2\omega_{L}^{2}} {\cal E}_{L}^{2}(R_{1}) g(\omega_{L}/E_{F}),
\end{eqnarray}
is proportional to the inner surface area.
Consider now the interference term ${\cal Q}_{L}^{12}$. Using the above boundary values of $\psi'_{nl}(R_{i})$, we write
\begin{eqnarray}
\label{Q12}
{\cal Q}_{L}^{12}=
\frac{4{\cal E}_{L}(R_{1}){\cal E}_{L}(R_{2})}{m^{2}\omega_{L}^{3}}
\sum\limits_{lnn'}\frac{l(-1)^{n-n'}}{\tau_{l}(\epsilon_{nl})\tau_{l}(\epsilon_{n'l})} {\cal F}_{l}(\epsilon_{nl},\epsilon_{n'l})\delta(\epsilon_{nl} - \epsilon_{n'l}+\omega_{L}),
\end{eqnarray}
where ${\cal F}_{l}(\epsilon_{nl},\epsilon_{n'l})=\sqrt{p_{l}(\epsilon_{nl},R_{1})p_{l}(\epsilon_{n'l},R_{1})p_{l}(\epsilon_{nl},R_{2})p_{l}(\epsilon_{n'l},R_{2})}$. The alternating sign reflects the \emph{parity} of the excited electron-hole state. As $\omega_{L}$ changes (e.g., with changing aspect ratio), the relative parity of the electron and hole states separated by the energy $\omega_{L}$ changes too, leading to a different sequence of alternating signs in the sum in Eq.~(\ref{Q12}). This leads to an oscillatory behavior of ${\cal Q}_{L}^{12}$. Since the number of states contributing to the sum in Eq.~(\ref{Q12}) is large, the oscillations are relatively weak and can be accounted for, in the continuum limit, by the replacement $(-1)^{n-n'}=\cos\pi(n-n')=\cos\Bigl[\pi\int\limits_{\epsilon}^{\epsilon'}d\epsilon \rho_{l}(\epsilon)\Bigr]$. Then $ {\cal Q}_{L}^{12} $ takes the form
\begin{eqnarray}
\label{Q12-int}
{\cal Q}_{L}^{12}=
\frac{{\cal E}_{L}(R_{1}){\cal E}_{L}(R_{2})}{\pi^2m^{2}\omega_{L}^{3}}
\sum\limits_{l}l\int\limits_{E_{F}-\omega_{L}}^{E_{F}} d\epsilon {\cal F}_{l}\left (\epsilon,\epsilon+\omega_{L}\right )
\cos\left [\pi \int\limits_{\epsilon}^{\epsilon+\omega_{L}}d\epsilon' \rho_{l}(\epsilon')\right ] ,
\end{eqnarray}
where $l$ is restricted by the condition $p_{l}(\epsilon,R_{1})\geq 0$. After shifting integration variables as $\epsilon\rightarrow E_{F}+\epsilon-\omega_{L}/2$ and $\epsilon'\rightarrow E_{F}+\epsilon+\epsilon'$ and rescaling,
Eq.~(\ref{Q12}) takes the form
\begin{eqnarray}
\label{Q12-final}
{\cal Q}_{L}^{12}=
\frac{R_{1}^{2}E_{F}^{2}}{\pi^2\omega_{L}^{2}} {\cal E}_{L}(R_{1}){\cal E}_{L}(R_{2})G(\omega_{L}/E_{F}),
\end{eqnarray}
where
\begin{eqnarray}
\label{G}
G(\xi)=2\int\limits_{-1/2}^{1/2}dx\int ds \sqrt{f(\xi,x,s)f(\xi,x,\kappa^{2} s)}\cos\left [\pi \omega_{L} \int\limits_{-1/2}^{1/2}dx' \rho_{l}\bigl[E_{F}[1+\xi(x+x')]\bigr]\right] ,
\end{eqnarray}
and the partial DOS is given by $\rho_{l}(\epsilon)=\frac{m}{\pi}\int\limits_{R_{1}}^{R_{2}}\frac{dr}{p_{l}(\epsilon,r)}=({1}/{2\pi\epsilon})\left [R_{2}p_{l}(\epsilon,R_{2})-R_{1}p_{l}(\epsilon,R_{1})\right ]$ with $l=\sqrt{s}(p_{F}R_{1})$. For $\omega_{L}/E_{F}\ll 1$, the $x'$-integral is easily evaluated, yielding
\begin{eqnarray}
\label{G-close}
G(\xi)=2\int\limits_{-1/2}^{1/2}dx\int\limits_{0}^{1+\xi x} ds \sqrt{f(\xi,x,s)f(\xi,x,\kappa^{2} s)}\cos\left [w(\xi,x,s)\omega_{L}/\epsilon_{0}\right] ,
\end{eqnarray}
where $w(\xi,x,s)=\sqrt{f(\xi,x,\kappa^{2}s)}-\kappa \sqrt{f(\xi,x,s)}$ with $f(\xi,x,s)\approx 1+\xi x -s$ (here $\epsilon_{0}= v_{F}/R_{2}$). Rescaling $s$ by $1+\xi x$, Eq.~(\ref{G-close}) factorizes as $G(\xi)=\int\limits_{-1/2}^{1/2}dx \left (1+\xi x\right )^{3/2} S(\xi,x)$, where
\begin{eqnarray}
\label{G-simple}
S(\xi,x)=2\int\limits_{0}^{1}ds \sqrt{(1-s)(1-\kappa^{2} s)}\cos\left [a(\xi,x)\left ( \sqrt{1-\kappa^{2} s} -\kappa\sqrt{1-s}\right )\right] ,
\end{eqnarray}
with shorthand notation $a(\xi,x)=(\omega_{L}/\epsilon_{0})\sqrt{1+\xi x}$. With substitution $s=1-\frac{1-\kappa^2}{\kappa^2}\sinh^{2}\alpha$, $S$ is brought to the form
\begin{eqnarray}
\label{G-trig}
S(\xi,x)=\frac{4(1-\kappa^2)^2}{\kappa^3}\int\limits_{0}^{\alpha_{0}}d\alpha \left (\sinh\alpha\cosh\alpha\right )^{2} \cos\left [a(\xi,x)\sqrt{1-\kappa^{2}} e^{-\alpha}\right] ,
\end{eqnarray}
where $\sinh\alpha_{0}=\kappa/\sqrt{1-\kappa^{2}}$. For $a(\xi,x)\gg 1$, the integral is dominated by the upper limit and, for thin shells, $1-\kappa \ll 1$ (corresponding to $\alpha_{0}>1$), can be evaluated as
\begin{eqnarray}
\label{G-final}
S\approx -4 \frac{\sin(a\sqrt{1-\kappa^{2}} e^{-\alpha_{0}})}{a\sqrt{1-\kappa^{2}} e^{-\alpha_{0}}}
= -4 \frac{\sin\left [a(1-\kappa)\right ]} {a(1-\kappa)}.
\end{eqnarray}
With the above $S$, and after the change of variable $t=\sqrt{1+\xi x}$, the expression for $G(\xi)$ takes the form
\begin{eqnarray}
\label{G-factor}
G(\xi)=-\frac{8}{\xi D}\int\limits_{t_{-}}^{t_{+}}dt t^{3}\sin (tD),
\end{eqnarray}
where $t_{\pm}=\sqrt{1\pm\xi/2}$, and we introduced $D=(1-\kappa)\omega_{L}/\epsilon_{0}=\omega_{L}d/v_{F}$. Note that even though for $\xi\ll 1$ the integration interval is small, the integrand is still an oscillating function, since $D\gg 1$, so the product $D\xi$ can be arbitrary. In this case, a straightforward evaluation yields
\begin{eqnarray}
\label{G-anal}
G(\xi)=-4\frac{\sin D}{D}\frac{\sin(\xi D/4)}{\xi D/4}.
\end{eqnarray}
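The closed form (\ref{G-anal}) can be checked against direct quadrature of Eq.~(\ref{G-factor}) for small $\xi$ and $D\gg 1$ (our own sketch):

```python
import math

# Compare Eq. (G-factor), evaluated by midpoint quadrature, with the closed
# form Eq. (G-anal): G = -4*(sin D/D)*(sin(xi*D/4)/(xi*D/4)).

def G_integral(xi, D, n=4000):
    t_lo, t_hi = math.sqrt(1 - xi / 2), math.sqrt(1 + xi / 2)
    dt = (t_hi - t_lo) / n
    acc = 0.0
    for k in range(n):
        t = t_lo + (k + 0.5) * dt
        acc += t ** 3 * math.sin(t * D) * dt
    return -8.0 / (xi * D) * acc

def G_closed(xi, D):
    return -4.0 * (math.sin(D) / D) * (math.sin(xi * D / 4) / (xi * D / 4))

print(G_integral(0.02, 30.0), G_closed(0.02, 30.0))
```

The two forms agree up to corrections of relative order $\xi$, which come from the variation of the $t^{3}$ factor across the integration interval.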
Putting it all together, we finally obtain
\begin{eqnarray}
\label{Q-final}
{\cal Q}_{L}=\frac{R_{2}^{2}E_{F}^{2}}{\pi^2\omega_{L}^{2}}
\left [{\cal E}_{L}^{2}(R_{2})+\kappa^{2}{\cal E}_{L}^{2}(R_{1})-2\kappa^{2} {\cal E}_{L}(R_{1}){\cal E}_{L}(R_{2}) G\right ].
\end{eqnarray}
The last term in the above expression oscillates as a function of $\kappa$. The oscillations are caused by the change of the electron-hole state parity as its energy changes with $d$, and reveal themselves through the interference between electron scattering from the outer and inner surfaces. The oscillation period depends weakly on the shell thickness through $\omega_{L}(\kappa)$, and the oscillation amplitude slowly dies out with increasing $d$. Note that although the interference correction, Eq.~(\ref{Q12}), was obtained for thin shells, the full dissipated power ${\cal Q}_{L}$, Eq.~(\ref{Q-final}), remains a good approximation in the entire range of $\kappa$, since for small $\kappa$ the interference term is also small.
\section{Landau damping in nanoshells}
\label{sec:disc}
We now turn to the SP LD rate, $\Gamma_{L}={\cal Q}_{L}/{\cal U}_{L}$, with ${\cal Q}_{L}$ and ${\cal U}_{L}$ given by Eqs.~(\ref{Q-final}) and (\ref{energy-final}), respectively. To simplify the analysis, consider the case of a metal shell in vacuum ($\varepsilon_{c}=\varepsilon_{d}=1$). Then we obtain
\begin{eqnarray}
\label{G-vacuum}
\Gamma_{L}=
\left (\frac{4E_{F}^{2}}{\pi\omega_{p}^{2}}\right ) \left[\frac{-e^2}{\varepsilon_{m}(\omega_{L})R_{2}}\right ]\frac{1+\kappa^{2}q_{L}^{2}-2\kappa^{2}q_{L}G}{1/(L+1)+\kappa^{3}q_{L}^{2}/L},
\end{eqnarray}
where $q_{L}={\cal E}_{L}(R_{1})/{\cal E}_{L}(R_{2})$ is the ratio of electric fields at the boundaries,
\begin{eqnarray}
\label{qL}
&&
q_{L}=\frac{L\kappa^{L}-(L+1)\eta_{L}}{\kappa\left[ L-(L+1)\eta_{L}\kappa^{L+1}\right] }
=\frac{\kappa^{L-1}A_{L}}{A_{L}-(1-\kappa^{2L+1})(L+1)/(2L+1)}.
\end{eqnarray}
The expression (\ref{G-vacuum}) is our central result. The SP LD rate is proportional to the energy of the Fermi sea electron-hole pair created by the SP (second factor), multiplied by the phase space of the transition (first factor). The last factor describes the relative contribution of the nanoshell inner surface and includes the interference correction. Importantly, the electron-hole Coulomb potential is screened by the metal dielectric function at the \emph{SP frequency}.
For a solid nanoparticle ($\kappa=0$), the plasmon eigenfrequency is determined from $L\epsilon_{m}(\omega_{L})+ L+1=0$, and we recover the result of Kawabata and Kubo,
\begin{eqnarray}
\label{G-particle}
\Gamma_{L}^{p}=
\left (\frac{4E_{F}^{2}}{\pi\omega_{p}^{2}}\right ) \frac{e^2L}{R_{2}}=\frac{3L}{4}\frac{v_{F}}{R_{2}}.
\end{eqnarray}
Note that although the r.h.s. resembles the level spacing (at constant $l$) in spatially-confined system, this resemblance is \emph{coincidental} because Eq.~(\ref{G-particle}) contains no information about electron energy spectrum.
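The equality of the two forms in Eq.~(\ref{G-particle}) relies only on the free-electron relations $\omega_{p}^{2}=4\pi ne^{2}/m$ and $n=p_{F}^{3}/3\pi^{2}$ (with $\hbar=1$); the sketch below (our own) verifies it for arbitrary parameter values, including the cancellation of the electron charge $e$.

```python
import math

# Check (4*E_F^2/(pi*omega_p^2))*(e^2*L/R) == (3L/4)*v_F/R for a free-electron
# metal: omega_p^2 = 4*pi*n*e^2/m, n = p_F^3/(3*pi^2), E_F = p_F^2/(2m), hbar = 1.

def gamma_lhs(L, R, p_F, m, e):
    E_F = p_F ** 2 / (2 * m)
    omega_p2 = 4 * math.pi * (p_F ** 3 / (3 * math.pi ** 2)) * e ** 2 / m
    return (4 * E_F ** 2 / (math.pi * omega_p2)) * e ** 2 * L / R

def gamma_rhs(L, R, p_F, m):
    return 0.75 * L * (p_F / m) / R   # (3L/4)*v_F/R with v_F = p_F/m
```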
For a thin nanoshell, the situation is drastically different. The redshift of the plasmon energy $\omega_{L}$ implies a large (negative) value of $\varepsilon_{m}(\omega_{L})$, which leads to \emph{overscreening} of the electron-hole Coulomb energy. As a result, $\Gamma$ \emph{vanishes} in the thin shell limit. The precise behavior of $\Gamma_{L}$ can be deduced from the $(1-\kappa)\ll 1$ asymptotics, $q_{L}\approx -L/(L+1)$ and $\epsilon_{m}\approx -A_{L}^{-1}\approx - (1-\kappa)^{-1} (2L+1)/L(L+1)$, yielding
\begin{eqnarray}
\label{G-thin}
\Gamma_{L}\approx \frac{v_{F}d}{R_{2}^{2}}\frac{3L(L+1)}{4(2L+1)^{2}}\bigl[ 1+2L(L+1)(1+G) \bigr ],
\end{eqnarray}
i.e., $\Gamma_{L}$ decreases \textit{linearly} with shell thickness.
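These asymptotics are easy to verify from Eqs.~(\ref{G-vacuum}) and (\ref{qL}); the sketch below (our own, with the interference factor $G$ treated as an external parameter) checks that $q_{L}\to -L/(L+1)$ and that the damping rate, in units of $4E_{F}^{2}e^{2}/\pi\omega_{p}^{2}R_{2}$, is linear in the shell thickness $d$.

```python
import math

# Thin-shell limits of Eq. (G-vacuum) for a Drude shell in vacuum (eps_db = 1).

def A_bright(kappa, L):
    return 0.5 * (1 - math.sqrt(1 + 4 * L * (L + 1) * kappa ** (2 * L + 1)) / (2 * L + 1))

def q_ratio(kappa, L):
    """Field ratio q_L = E_L(R_1)/E_L(R_2), Eq. (qL)."""
    A = A_bright(kappa, L)
    return kappa ** (L - 1) * A / (A - (1 - kappa ** (2 * L + 1)) * (L + 1) / (2 * L + 1))

def gamma_dimless(kappa, L, G=0.0):
    """Gamma_L in units 4*E_F^2*e^2/(pi*omega_p^2*R_2), Eq. (G-vacuum)."""
    A = A_bright(kappa, L)
    eps_m = 1 - 1 / A                     # Drude eps_m at the resonance frequency
    q = q_ratio(kappa, L)
    num = 1 + kappa ** 2 * q ** 2 - 2 * kappa ** 2 * q * G
    den = 1 / (L + 1) + kappa ** 3 * q ** 2 / L
    return (-1 / eps_m) * num / den
```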
In Figs.~\ref{fig:ld_30nm} and \ref{fig:ld_10nm} we show the calculated LD rate for the dipole SP, as well as the corresponding quality factors, in the entire range of aspect ratios for Au nanoshells in vacuum with outer radii 30 nm and 10 nm, respectively. The decrease of $\Gamma$ is accompanied by the increase of $Q$ despite the SP frequency redshift for thin nanoshells. The asymptotic behavior of the quality factor, $Q\propto d^{-1/2}$, follows from the weaker dependence of the nanoshell SP frequency, $\omega_{sp}\propto \sqrt{d}$. For small $d$, both $\Gamma$ and $Q$ exhibit oscillations as a function of $d$, with period
$d_{L}=v_{F}T_{L}$, where $T_{L}$ is the SP period. The latter itself depends weakly on $d$, resulting in a slow increase of the oscillation period with $d$, which is, however, hardly detectable due to the reduction of the oscillation amplitude. The oscillations are quite pronounced for smaller overall nanoshell sizes and could be observable in the typical experimental range of aspect ratios (0.6--0.8). Note that these oscillations should be distinguished from those observed in solid NP \cite{molina-prb02}; in the latter case, the oscillations originate from the size quantization of electron energy levels in a confined nanostructure, while here they are due to the quantum interference of electron wave-functions at the different nanoshell boundaries.
\section{Conclusion}
\label{sec:conc}
In summary, we performed the first quantum-mechanical calculation of surface plasmon Landau damping in a metal nanoshell, and found that it vanishes in the thin shell limit. The physical origin of the long bright SP lifetime is the reduction of the SP electric field magnitude inside a thin metal shell. Such a situation is obviously not limited to the nanoshells with spherical symmetry considered here, but can take place in other structures as well. Therefore, the quantum-size limitations on the plasmon lifetime in small metal nanostructures are, in fact, not fundamental and can be avoided by an appropriate choice of geometry.
This work was supported by the NSF under Grant Nos. DMR-0906945 and HRD-0833178.
\section{Introduction}
The notion of relatively hyperbolic groups was introduced in \cite{Gro87} and has been studied by many authors (see for example \cite{Bow12}, \cite{D-S05}, \cite{Far98} and \cite{Osi06a}).
In this paper we consider relatively hyperbolic groups in accordance with a definition due to D. Osin \cite[Definition 2.35]{Osi06a}.
Note that for certain cases (e.g. for finitely generated groups), this definition has several equivalent formulations (see for example \cite[Sections 3 and 5]{Hru10}).
In \cite{Osi06b}, D. Osin introduced the notion of hyperbolically embedded subgroups of a relatively hyperbolic group.
\begin{Def}\label{def-hypemb} (\cite[Definition 1.4]{Osi06b})
Let $G$ be a group which is hyperbolic relative to a family $\bbk$ of subgroups. A subgroup $H$ of $G$ is said to be hyperbolically embedded into $G$ relative to $\bbk$ if $G$ is hyperbolic relative to $\bbk\cup\{H\}$.
\end{Def}
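\begin{Rem}\label{rem-hypemb-ex}
As a concrete instance of Definition \ref{def-hypemb}, let $F$ be a free group with basis $\{a,b\}$. The cyclic subgroup $\langle a\rangle$ is quasiconvex and malnormal in $F$, and hence $F$ is hyperbolic relative to $\{\langle a\rangle\}$ by \cite[Theorem 7.11]{Bow12}. In other words, $\langle a\rangle$ is hyperbolically embedded into $F$ relative to the empty family $\emptyset$.
\end{Rem}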
If $G$ is infinite and hyperbolic relative to a family $\bbk$ of proper subgroups, then there exists a virtually infinite cyclic subgroup of $G$ which is hyperbolically embedded into $G$ relative to $\bbk$ (see \cite[Corollaries 1.7 and 4.5]{Osi06b}).
Also if $G$ is torsion-free, hyperbolic and not cyclic, then it contains a free subgroup of rank two which is quasiconvex and malnormal in $G$, that is, hyperbolically embedded into $G$ relative to the empty family $\emptyset$ (see \cite[Theorem C]{Kap99} and \cite[Theorem 7.11]{Bow12}).
In this paper we show the following (see also Theorem \ref{hypemb'}).
\begin{Thm}\label{hypemb}
Suppose that a group $G$ is not virtually cyclic and is hyperbolic relative to a family $\bbk$ of proper subgroups.
Then there exists a finitely generated and virtually non-abelian free subgroup of $G$ which is hyperbolically embedded into $G$ relative to $\bbk$.
Moreover if $G$ is torsion-free, then it contains a free subgroup of rank two which is hyperbolically embedded into $G$ relative to $\bbk$.
\end{Thm}
We refer to \cite[Theorems 1.4 and 1.5]{M-O-Y5} for applications of this theorem to the study of convergence actions of groups.
We also refer to \cite[Theorem 6.3]{M-O-Y3} for another application.
\begin{Rem}\label{rem-DGO}
After the first version of this paper appeared, the notion of a hyperbolically embedded subgroup was further generalized in \cite{D-G-O11}.
It turns out that we can prove a stronger version of Theorem \ref{hypemb} by using the argument in the proof of \cite[Theorem 6.14 (c)]{D-G-O11} (see \cite[Appendix B]{M-O-Y5} for details).
In what follows, hyperbolically embedded subgroups which we consider are those in the sense of \cite{Osi06b}.
\end{Rem}
In Section \ref{sect-char}, we recall the fact that hyperbolically embedded subgroups of a relatively hyperbolic group are characterized as strongly relatively undistorted and almost malnormal subgroups.
Strongly relatively undistorted free subgroups of rank two of a relatively hyperbolic group are found in Section \ref{sect-sruf}.
In Section \ref{sect-amvf}, we construct almost malnormal subgroups of a virtually free group with additional properties.
Theorem \ref{hypemb} is proved in Section \ref{sect-proof}.
\section{Characterization of hyperbolically embedded subgroups}
\label{sect-char}
The strategy of our proof of Theorem \ref{hypemb} is based on Osin's characterization of hyperbolically embedded subgroups of relatively hyperbolic groups stated below.
To state the characterization, we begin by introducing several definitions.
Let $G$ be a group.
For a family $\bbk$ of subgroups of $G$, we put $\calk=\bigcup_{K\in\bbk} K\setminus\{1\}$.
A subset $X$ of $G$ is called a relative generating set of $G$ with respect to $\bbk$ if $G$ is generated by $X\cup\calk$.
The group $G$ is said to be finitely generated relative to $\bbk$ if there exists a finite relative generating set of $G$ with respect to $\bbk$.
When $Z$ is a (possibly infinite) generating set of $G$, we denote by $\Gamma(G,Z)$ the Cayley graph of $G$ with respect to $Z$ and by $d_Z$ the word metric with respect to $Z$.
\begin{Def}\label{def-sru}
Let $G$ be a group which is finitely generated relative to a family $\bbk$ of subgroups.
A subgroup $H$ of $G$ is said to be strongly undistorted relative to $\bbk$ in $G$ if $H$ is generated by some finite subset $Y$ and for some finite relative generating set $X$ of $G$ with respect to $\bbk$, the natural map $(H,d_Y)\to (G,d_{X\cup\calk})$ is a quasi-isometric embedding.
\end{Def}
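\begin{Rem}\label{rem-sru-ex}
As a simple illustration of Definition \ref{def-sru}, let $G$ be a free group with basis $X=\{a,b\}$, let $\bbk=\emptyset$ and let $H=\langle a\rangle$ with $Y=\{a\}$. Since $d_{Y}(1,a^{n})=|n|=d_{X}(1,a^{n})$ for every integer $n$, the natural map $(H,d_{Y})\to (G,d_{X})$ is an isometric embedding, and hence $H$ is strongly undistorted relative to $\emptyset$ in $G$.
\end{Rem}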
\begin{Def}\label{def-am}
Let $G$ be a group and $H$ a subgroup of $G$.
The subgroup $H$ is said to be malnormal (resp. almost malnormal) in $G$ if for every element $g$ of $G\setminus H$, the intersection $H \cap gHg^{-1}$ is trivial (resp. finite).
\end{Def}
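\begin{Rem}\label{rem-am-ex}
For example, in a free group $F$ with basis $\{a,b\}$, the subgroup $\langle a\rangle$ is malnormal: if $ga^{m}g^{-1}=a^{n}$ for some nonzero integers $m$ and $n$, then $m=n$ and $g$ centralizes $a^{m}$, whence $g\in\langle a\rangle$ because centralizers in free groups are cyclic. On the other hand, the subgroup $\langle a^{2}\rangle$ is not almost malnormal in $F$: although $a\notin\langle a^{2}\rangle$, we have $a\langle a^{2}\rangle a^{-1}=\langle a^{2}\rangle$, which is infinite.
\end{Rem}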
\begin{Thm}\label{hypemb-sruam} (\cite[Theorem 1.5]{Osi06b})
Let $G$ be a group which is hyperbolic relative to a family $\bbk$ of subgroups. Then a subgroup $H$ of $G$ is hyperbolically embedded into $G$ relative to $\bbk$ if and only if $H$ is strongly undistorted relative to $\bbk$ and almost malnormal in $G$.
\end{Thm}
For finitely generated relatively hyperbolic groups, we have the following characterization of strongly relatively undistorted subgroups.
\begin{Def}\label{def-sq} (\cite[Definitions 4.9 and 4.11]{Osi06a})
Let $G$ be a group with a finite family $\bbk$ of subgroups.
Suppose that $G$ is generated by a finite set $X$.
A subgroup $H$ of $G$ is said to be quasiconvex relative to $\bbk$ in $G$ if there exists a constant $\sigma\ge 0$ satisfying the following:
if $h_1$ and $h_2$ are elements of $H$ and $p$ is a geodesic from $h_1$ to $h_2$ in $\Gamma(G,X\cup\calk)$, then for every vertex $v$ on $p$, there exists an element $h$ of $H$ such that $d_X(v,h)\le\sigma$.
A subgroup $H$ of $G$ is said to be strongly quasiconvex relative to $\bbk$ in $G$ if $H$ is quasiconvex relative to $\bbk$ in $G$ and for every element $K$ of $\bbk$ and every element $g$ of $G$, the intersection $H \cap gKg^{-1}$ is finite.
\end{Def}
\begin{Thm}\label{sru-sq} (\cite[Theorem 4.13]{Osi06a})
Let $G$ be a group which is hyperbolic relative to a finite family $\bbk$ of subgroups. Suppose that $G$ is finitely generated.
Then a subgroup $H$ of $G$ is strongly undistorted relative to $\bbk$ if and only if $H$ is strongly quasiconvex relative to $\bbk$.
\end{Thm}
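\begin{Rem}\label{rem-sq-ex}
For a simple example illustrating Definition \ref{def-sq}, let $G$ be a free group with basis $X=\{a,b\}$ and put $\bbk=\{\langle b\rangle\}$, so that $G=\langle a\rangle\ast\langle b\rangle$ is hyperbolic relative to $\bbk$. The subgroup $H=\langle a\rangle$ is strongly quasiconvex relative to $\bbk$ in $G$. Indeed, for every element $g$ of $G$, each nontrivial element of $g\langle b\rangle g^{-1}$ is conjugate to a nontrivial power of $b$ and hence is not a power of $a$, so the intersection $H\cap g\langle b\rangle g^{-1}$ is trivial. Moreover, one can check that every geodesic in $\Gamma(G,X\cup\calk)$ between two powers of $a$ passes only through powers of $a$, so that $H$ is quasiconvex relative to $\bbk$ in $G$ with $\sigma=0$.
\end{Rem}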
\section{Strongly relatively undistorted free subgroups}
\label{sect-sruf}
When a group $G$ is hyperbolic relative to a family $\bbk$ of subgroups, a subgroup of $G$ is said to be parabolic with respect to $\bbk$ if it is conjugate to a subgroup of some element of $\bbk$.
The main purpose of this section is to show the following.
\begin{Prop}\label{sq'}
Let $G$ be a group which is hyperbolic relative to a family $\bbk$ of proper subgroups and $\Gamma$ a subgroup of $G$ which is neither virtually cyclic nor parabolic with respect to $\bbk$.
If $\Gamma$ contains an element of infinite order, then it contains a free subgroup $F$ of rank two which is strongly undistorted relative to $\bbk$ in $G$.
\end{Prop}
Proposition \ref{sq'} yields the following corollary.
\begin{Cor}\label{sq}
Let $G$ be a group which is not virtually cyclic and is hyperbolic relative to a family $\bbk$ of proper subgroups.
Then $G$ contains a free subgroup $F$ of rank two which is strongly undistorted relative to $\bbk$ in $G$.
\end{Cor}
\begin{proof}[Proof of Corollary \ref{sq} using Proposition \ref{sq'}]
It follows from \cite[Corollary 4.5]{Osi06b} that the group $G$ contains an element of infinite order.
Hence the assertion follows from Proposition \ref{sq'}.
\end{proof}
For the proof of Proposition \ref{sq'}, we prepare several lemmas.
When a group $G$ is hyperbolic relative to a family $\bbk$ of proper subgroups, an element $g$ of $G$ is said to be parabolic with respect to $\bbk$ if it is conjugate to an element of a subgroup of $G$ which belongs to $\bbk$.
Otherwise $g$ is said to be hyperbolic with respect to $\bbk$.
\begin{Lem}\label{hyp-element}
Let $G$ be a group which is hyperbolic relative to a family $\bbk$ of proper subgroups and $\Gamma$ a subgroup of $G$ which is neither virtually cyclic nor parabolic with respect to $\bbk$.
Suppose that either $G$ is countable and $\bbk$ is finite or $\Gamma$ contains an element of infinite order.
Then there exists an element $h$ of $\Gamma$ which is of infinite order and hyperbolic with respect to $\bbk$.
\end{Lem}
\begin{proof}
First suppose that $G$ is countable and $\bbk$ is finite.
Then we can consider a geometrically finite convergence action of $G$ on a compact metrizable space such that the set of all maximal parabolic subgroups of the action is equal to the collection of all conjugates of elements of $\bbk$ which are infinite (see for example \cite[Definition 3.1]{Hru10}).
Since $\Gamma$ is neither virtually cyclic nor parabolic with respect to $\bbk$, the restriction of this action to $\Gamma$ is a non-elementary convergence action.
Hence $\Gamma$ contains an element $h$ which is loxodromic with respect to this action (see \cite[Theorem 2T]{Tuk94}).
Next suppose that $\Gamma$ contains an element $h$ of infinite order.
We only have to consider the case where $h$ belongs to the conjugate $gKg^{-1}$ for some element $K$ of $\bbk$ and some element $g$ of $G$.
Since $\Gamma$ is not parabolic with respect to $\bbk$, we can take an element $\gamma$ of $\Gamma\setminus gKg^{-1}$.
By \cite[Lemma 4.4]{Osi06b}, there exists an integer $n$ such that the element $\gamma h^n$ of $\Gamma$ is of infinite order and hyperbolic with respect to $\bbk$.
\end{proof}
\begin{Lem}\label{hyp-hypemb} (\cite[Theorem 4.3 and Corollary 1.7]{Osi06b})
Let $G$ be a group which is hyperbolic relative to a family $\bbk$ of subgroups and $h$ be an element of $G$ which is of infinite order and hyperbolic with respect to $\bbk$.
Then there exists a unique subgroup $E(h)$ of $G$ such that $E(h)$ is virtually cyclic, contains $h$ and maximal among such subgroups of $G$.
Moreover $E(h)$ is hyperbolically embedded into $G$ relative to $\bbk$.
\end{Lem}
The following lemma is shown by an argument similar to that in the proof of \cite[Corollary 1.12]{MP12}.
\begin{Lem}\label{mp}
Let $G$ be a group generated by a finite set $X$ and hyperbolic relative to a finite family $\bbk$ of subgroups.
Let a subgroup $H$ of $G$ be hyperbolically embedded into $G$ relative to $\bbk$.
We denote the union $\bbk\cup\{H\}$ by $\bbh$.
Suppose that a subgroup $Q$ of $G$ is strongly quasiconvex relative to $\bbh$ in $G$.
Then there exists a constant $C(Q,H)\ge 0$ with the following property:
for every subgroup $R$ of $H$ such that
\begin{enumerate}
\setlength{\itemsep}{0mm}
\item[(a)] $Q\cap H\subset R$;
\item[(b)] $d_X(1, r) \ge C(Q,H)$ for every element $r$ of $R\setminus Q$;
\item[(c)] the subgroup $R$ is quasiconvex relative to $\bbk$ in $G$,
\end{enumerate}
the natural homomorphism $Q*_{Q\cap R}R\to G$ is injective and its image $\langle Q\cup R \rangle$ is strongly quasiconvex relative to $\bbk$ in $G$.
\end{Lem}
\begin{proof}
By \cite[Theorem 5.12]{MP12}, there exists a constant $C(Q,H)\ge 0$ with the following property:
for every subgroup $R$ of $H$ satisfying above conditions (a) and (b), the natural homomorphism $Q*_{Q\cap R}R\to G$ is injective, its image $\langle Q\cup R \rangle$ is quasiconvex relative to $\bbh$ in $G$ and for every element $g$ of $G$ and every element $H'$ of $\bbh$, the intersection $\langle Q\cup R \rangle\cap gH'g^{-1}$ is either finite or conjugate to $R$ in $\langle Q\cup R \rangle$.
Now we suppose that $R$ satisfies above condition (c) and show that $\langle Q\cup R \rangle$ is strongly quasiconvex relative to $\bbk$ in $G$.
First we show that $\langle Q\cup R \rangle$ is quasiconvex relative to $\bbk$ in $G$.
By \cite[Theorem 1.1 (2)]{MP12}, it suffices to show that for every element $g$ of $G$, the intersection $\langle Q\cup R \rangle\cap gHg^{-1}$ is quasiconvex relative to $\bbk$ in $G$.
Every finite subgroup of $G$ is automatically quasiconvex relative to $\bbk$ in $G$.
Since $R$ is quasiconvex relative to $\bbk$ in $G$, it is well-known that every conjugate of $R$ is also quasiconvex relative to $\bbk$ in $G$.
Thus the subgroup $\langle Q\cup R \rangle$ is quasiconvex relative to $\bbk$ in $G$.
Next we show that for every element $g$ of $G$ and every element $K$ of $\bbk$, the intersection $\langle Q\cup R \rangle\cap gKg^{-1}$ is finite.
We have only to consider the case where there exists an element $s$ of $\langle Q\cup R \rangle$ such that $\langle Q\cup R \rangle\cap gKg^{-1}$ is equal to $sRs^{-1}$.
Then $\langle Q\cup R \rangle\cap gKg^{-1}$ is contained in $s(s^{-1}gK(s^{-1}g)^{-1}\cap H)s^{-1}$.
Since $H$ is hyperbolically embedded into $G$ relative to $\bbk$, the intersection $s^{-1}gK(s^{-1}g)^{-1}\cap H$ is finite.
Hence $\langle Q\cup R \rangle\cap gKg^{-1}$ is also finite.
Thus $\langle Q\cup R \rangle$ is strongly quasiconvex relative to $\bbk$ in $G$.
\end{proof}
By using the above lemmas, we prove the following, which implies Proposition \ref{sq'} for finitely generated groups.
\begin{Lem}\label{sq'fg}
Let $G$ be a group which is hyperbolic relative to a finite family $\bbk$ of proper subgroups and $\Gamma$ a subgroup of $G$ which is neither virtually cyclic nor parabolic with respect to $\bbk$.
Suppose that $G$ is finitely generated.
Then $\Gamma$ contains a free subgroup $F$ of rank two which is strongly quasiconvex relative to $\bbk$ in $G$.
\end{Lem}
\begin{proof}
By Lemma \ref{hyp-element}, there exists an element $h$ of $\Gamma$ which is of infinite order and hyperbolic with respect to $\bbk$.
We denote by $H$ a subgroup $E(h)$ of $G$ given by Lemma \ref{hyp-hypemb}.
We put $\bbh=\bbk\cup\{H\}$.
Then $\bbh$ consists of proper subgroups of $G$ and $G$ is hyperbolic relative to $\bbh$.
Since $H$ is virtually infinite cyclic, the subgroup $\Gamma$ is not parabolic with respect to $\bbh$.
Hence it follows from Lemma \ref{hyp-element} that there exists an element $q$ of $\Gamma$ which is of infinite order and hyperbolic with respect to $\bbh$.
We denote by $Q$ the infinite cyclic subgroup of $\Gamma$ generated by $q$.
By Lemma \ref{hyp-hypemb} and Theorem \ref{sru-sq}, the subgroup $Q$ is strongly quasiconvex relative to $\bbh$ in $G$.
Since $Q$ is torsion-free, this implies that the intersection $Q\cap H$ is trivial.
Let $X$ be a finite generating set of $G$ and $C(Q,H)\ge 0$ a constant given by Lemma \ref{mp}.
Since $h$ is of infinite order, there exists a positive integer $k$ such that $d_X(1,h^{kn}) \ge C(Q,H)$ for every integer $n\in\Z\setminus\{0\}$.
We denote by $R$ the infinite cyclic subgroup of $\Gamma$ generated by $h^k$.
Since $R$ is a finite index subgroup of $H$, it is strongly quasiconvex relative to $\bbk$ in $G$ by Theorem \ref{sru-sq}.
Hence $R$ satisfies conditions (a), (b) and (c) in Lemma \ref{mp}.
By Lemma \ref{mp}, the subgroup $\langle Q\cup R \rangle$ of $\Gamma$ is a free group of rank two which is strongly quasiconvex relative to $\bbk$ in $G$.
\end{proof}
\begin{figure}
\begin{center}
\includegraphics[width=4.8cm,height=2.8cm]{figure1.eps}
\end{center}
\caption{The graph of groups $\mathcal{G}$}
\label{graphofgroups}
\end{figure}
For the proof of the general case,
we need the following lemma which is obtained from a specialization of \cite[Theorem 2.44]{Osi06a} together with \cite[Proposition 2.49]{Osi06a}.
\begin{Lem}\label{fgru}
Let $G$ be a group which is hyperbolic relative to a family $\bbk$ of proper subgroups and $X$ a finite relative generating set of $G$ with respect to $\bbk$.
Then there exists a finite subfamily $\bbk_0=\{K_1, \ldots, K_m\}$ of $\bbk$ such that $G$ splits as the free product
\begin{eqnarray*}
G=G_0\ast(\ast_{K\in\bbk\setminus\bbk_0}K),
\end{eqnarray*}
where $G_0$ is the subgroup of $G$ which is generated by $K_1, \ldots, K_m$ and $X$.
Moreover there exist a finitely generated subgroup $Q$ of $G_0$ and a family $\bbl=\{L_1, \ldots, L_m\}$ of subgroups of $Q$ satisfying the following:
\begin{enumerate}
\setlength{\itemsep}{0mm}
\item[(i)] the finite relative generating set $X$ is contained in $Q$ and for every $i \in\{1, \ldots, m\}$, the subgroup $L_i$ is contained in $K_i$;
\item[(ii)] the group $G_0$ is isomorphic to the fundamental group of the graph of groups $\mathcal{G}$ drawn in Figure \ref{graphofgroups};
\item[(iii)] the subgroup $Q$ is hyperbolic relative to $\bbl$, the set $X$ is a relative generating set of $Q$ with respect to $\bbl$ and the natural map $(Q,d_{X\cup\call})\to (G,d_{X\cup\calk})$ is an isometric embedding, where we put $\call=\bigcup_{L\in\bbl} L\setminus\{1\}$.
\end{enumerate}
\end{Lem}
\begin{Lem}\label{fgru'}
In the setting of Lemma \ref{fgru}, we have the following:
\begin{enumerate}
\setlength{\itemsep}{0mm}
\item[(1)] if no element of $\bbk$ contains $X$, then $\bbl$ consists of proper subgroups of $Q$;
\item[(2)] if a subgroup of $Q$ is strongly quasiconvex relative to $\bbl$ in $Q$, then it is strongly undistorted relative to $\bbk$ in $G$;
\item[(3)] if a subgroup of $Q$ is hyperbolically embedded into $Q$ relative to $\bbl$, then it is hyperbolically embedded into $G$ relative to $\bbk$.
\end{enumerate}
\end{Lem}
\begin{proof}
(1) This follows from condition (i) in Lemma \ref{fgru}.
(2) This follows from Theorem \ref{sru-sq} and condition (iii) in Lemma \ref{fgru}.
(3) Suppose that a subgroup $V$ of $Q$ is hyperbolically embedded into $Q$ relative to $\bbl$.
Then $V$ is strongly quasiconvex relative to $\bbl$ in $Q$ by Theorems \ref{hypemb-sruam} and \ref{sru-sq}.
By assertion (2), the subgroup $V$ is strongly undistorted relative to $\bbk$ in $G$.
We claim that $V$ is almost malnormal in $G$.
Indeed, since $V$ is hyperbolically embedded into $Q$ relative to $\bbl$, it is almost malnormal in $Q$.
Hence it suffices to show that if $g$ is an element of $G\setminus Q$, then the intersection $V \cap gVg^{-1}$ is finite.
First suppose that $g$ belongs to $G \setminus G_0$.
Since $G_0$ is a free factor of $G$, the intersection $G_0 \cap gG_0g^{-1}$ is trivial.
Hence the intersection $V \cap gVg^{-1}$ is also trivial.
Next suppose that $g$ belongs to $G_0\setminus Q$.
We denote by $T$ the Bass-Serre covering tree of the graph of groups $\mathcal{G}$.
Then the group $Q$ is the stabilizer group of a vertex $v$ of $T$ and we have $gv \neq v$.
Since the intersection $Q \cap gQg^{-1}$ fixes both $v$ and $gv$, it fixes an edge of $T$.
Hence $Q \cap gQg^{-1}$ is parabolic with respect to $\bbl$.
Since every element of $\bbl$ is contained in an element of $\bbk$, the intersection $Q \cap gQg^{-1}$ is parabolic with respect to $\bbk$.
Since the subgroup $V$ is strongly quasiconvex relative to $\bbk$ in $G$, the intersection $V \cap (Q \cap gQg^{-1})$ is finite.
Hence the intersection $V \cap gVg^{-1}$ is also finite.
\end{proof}
\begin{proof}[Proof of Proposition \ref{sq'}]
Since $\Gamma$ contains an element of infinite order, it follows from Lemma \ref{hyp-element} that there exists an element $h$ of $\Gamma$ which is of infinite order and hyperbolic with respect to $\bbk$.
Let $E(h)$ be a subgroup of $G$ given by Lemma \ref{hyp-hypemb}.
Since $\Gamma$ is not virtually cyclic, we can take an element $\gamma$ of $\Gamma\setminus E(h)$.
By Lemma \ref{hyp-hypemb}, every subgroup of $G$ that contains $\{h,\gamma\}$ is not virtually cyclic.
We take a finite relative generating set $X$ of $G$ with respect to $\bbk$ which contains $\{h,\gamma\}$.
Note that since $h$ is hyperbolic with respect to $\bbk$, no element of $\bbk$ contains $X$.
Let $Q$ and $\bbl$ be given by Lemma \ref{fgru}.
By Lemma \ref{fgru'} (1), the family $\bbl$ consists of proper subgroups of $Q$. Since $\Gamma\cap Q$ contains $\{h,\gamma\}$ and each element of $\bbl$ is contained in some element of $\bbk$, the subgroup $\Gamma\cap Q$ is neither virtually cyclic nor parabolic with respect to $\bbl$.
Since $Q$ is finitely generated, it follows from Lemma \ref{sq'fg} that $\Gamma\cap Q$ contains a free subgroup $F$ of rank two which is strongly quasiconvex relative to $\bbl$ in $Q$.
By Lemma \ref{fgru'} (2), the subgroup $F$ is strongly undistorted relative to $\bbk$ in $G$.
\end{proof}
\begin{Rem}\label{alt}
We give an alternative proof of Corollary \ref{sq}.
Let $F'$ be a free group of rank two with a free basis $Y'$ and $X$ a finite relative generating set of $G$ with respect to $\bbk$.
By \cite[Theorem 1.1]{A-M-O07}, there exists a quotient group $G'$ of $G$ and an embedding $\iota \colon F' \to G'$ such that $G'$ is hyperbolic relative to $\{\psi(K)\}_{K\in\bbk} \cup \{\iota(F')\}$, where $\psi \colon G \to G'$ denotes the natural projection.
Since $\iota(F')$ is a hyperbolic group, it follows from \cite[Theorem 2.40]{Osi06a} that $G'$ is hyperbolic relative to $\{\psi(K)\}_{K\in\bbk}$.
Hence $\iota(F')$ is hyperbolically embedded into $G'$ relative to $\{\psi(K)\}_{K\in\bbk}$.
By Theorem \ref{hypemb-sruam}, the natural map $(F',d_{Y'}) \to (G',d_{\psi(X)\cup\calk'})$ is a quasi-isometric embedding, where $\calk'$ denotes $\bigcup_{K\in\bbk} \psi(K) \setminus\{1\}$.
We take a subset $Y$ of $G$ such that $Y$ consists of two elements and $\psi(Y)$ is equal to $\iota(Y')$.
We denote by $F$ the subgroup of $G$ which is generated by $Y$.
Then $F$ is a free group of rank two.
We can confirm that the natural map $(F,d_Y) \to (G,d_{X\cup\calk})$ is a quasi-isometric embedding.
This finishes the proof of Corollary \ref{sq}.
\end{Rem}
\section{Almost malnormal subgroups of virtually free groups}
\label{sect-amvf}
In this section, we show the following (compare with \cite[Theorem 5.16]{Kap99} for the case of non-abelian free groups of finite rank),
which is necessary for the proof of Theorem \ref{hypemb}.
\begin{Thm}\label{kap1}
Let $M$ be a finitely generated and virtually non-abelian free group and let $\{M_l ~|~ l \in \{1, \ldots, n\}\}$ be a finite family of finitely generated subgroups of $M$ of infinite index.
Then there exists a proper subgroup $V$ of $M$ satisfying the following:
\begin{enumerate}
\setlength{\itemsep}{0mm}
\item[(i)] the subgroup $V$ is finitely generated, virtually non-abelian free, and almost malnormal in $M$;
\item[(ii)] for every $l \in \{1, \ldots, n\}$ and every element $m$ of $M$, the intersection $mVm^{-1} \cap M_l$ is finite.
\end{enumerate}
\end{Thm}
For the proof of Theorem \ref{kap1}, we prepare two lemmas.
\begin{Lem}\label{aminfind}
If a proper subgroup $H$ of an infinite group $G$ is almost malnormal in $G$, then $H$ is an infinite index subgroup of $G$.
\end{Lem}
\begin{proof}
Assume that $H$ is a finite index subgroup of $G$.
Then for every element $g$ of $G$, the intersection $H \cap gHg^{-1}$ is a finite index subgroup of $G$ and hence it is infinite.
This contradicts the assumption that $H$ is almost malnormal in $G$.
\end{proof}
\begin{Lem}\label{kap2}
Let $F$ be a non-abelian free group of finite rank and let $\{H_l ~|~ l \in \{1, \ldots, n\}\}$ be a finite family of finitely generated subgroups of $F$ of infinite index.
Let $U$ be a finite subgroup of $\Out(F)$.
We denote by $\pi \colon \Aut(F) \to \Out(F)$ the quotient map and put $A=\pi^{-1}(U)$. Then there exists a proper subgroup $H$ of $F$ satisfying the following:
\begin{enumerate}
\setlength{\itemsep}{0mm}
\item[(i)] the subgroup $H$ is a free subgroup of rank two and is malnormal in $F$;
\item[(ii)] for every $l \in \{1, \ldots, n\}$ and every element $a$ of $A$, the intersection $H_l \cap a(H)$ is trivial;
\item[(iii)] for every element $a$ of $A$, either $a(H)$ is equal to $H$ or the intersection $H \cap a(H)$ is trivial.
\end{enumerate}
\end{Lem}
\begin{proof}
We put $U=\{u_i ~|~ i \in \{1, \ldots, m\}\}$ and choose an element $a_i$ of $\pi^{-1}(u_i)$ for each $i \in \{1, \ldots, m\}$.
We denote by $\calm$ the collection of all proper subgroups $H'$ of $F$ satisfying the following:
\begin{itemize}
\setlength{\itemsep}{0mm}
\item the subgroup $H'$ is a free subgroup of rank two and is malnormal in $F$;
\item for every $i \in \{1, \ldots, m\}$, every $l \in \{1, \ldots, n\}$ and every element $f$ of $F$, the intersection $a_i^{-1}(H_l) \cap fH'f^{-1}$ is trivial.
\end{itemize}
By \cite[Theorem 5.16]{Kap99}, the collection $\calm$ is not empty.
We remark that every element of $\calm$ satisfies conditions (i) and (ii) in Lemma \ref{kap2}.
For each $i \in \{1, \ldots, m\}$ and each element $H'$ of $\calm$, we define the following:
\begin{eqnarray*}
\overline{\bbk}_i &=& \{K \subset H' ~|~ K \neq \{1\} \text{ and } K=H'\cap fa_i(H')f^{-1} \text{ for some } f \in F\}; \\
I_1(H') &=& \{i \in \{1, \ldots, m\} ~|~ \overline{\bbk}_i=\emptyset\}; \\
I_2(H') &=& \{i \in \{1, \ldots, m\} ~|~ \overline{\bbk}_i=\{H'\}\};\\
I_3(H') &=& \{i \in \{1, \ldots, m\} ~|~ \overline{\bbk}_i\neq\emptyset\text{ and }\overline{\bbk}_i\neq\{H'\}\}.
\end{eqnarray*}
Since every finitely generated subgroup of $F$ is quasiconvex in $F$ (see \cite[Section 2]{Sho91}), the subgroup $a_i(H')$, as well as $H'$, is quasiconvex and malnormal in $F$.
By \cite[Theorem 7.11]{Bow12}, the group $F$ is hyperbolic relative to $\{a_i(H')\}$.
Since $H'$ is quasiconvex in $F$, it follows from \cite[Theorem 1.1 (1)]{MP12} that $H'$ is quasiconvex relative to $\{a_i(H')\}$ in $F$.
By \cite[Theorem 9.1]{Hru10}, the collection $\overline{\bbk}_i$ has a finite set of representatives of $H'$-conjugacy classes $\bbk_i=\{K_{i,j} ~|~ j \in \{1, \ldots, n_i\}\}$ and $H'$ is hyperbolic relative to $\bbk_i$.
For every $j \in \{1, \ldots, n_i\}$ the subgroup $K_{i,j}$ is finitely generated and malnormal in $H'$ (see \cite[Propositions 2.29 and 2.36]{Osi06a}).
For the proof of the lemma, it suffices to show that there exists an element $H$ of $\calm$ such that $I_3(H)$ is empty.
Indeed if $H$ is such an element of $\calm$, then for every $a\in A$, either both $H\cap a(H)$ and $H\cap a^{-1}(H)$ are equal to $H$ or the intersection $H\cap a(H)$ is trivial.
If the former occurs, then $a(H)$ is equal to $H$.
Hence the subgroup $H$ has the desired properties.
Let $H'$ be an element of $\calm$.
We have only to consider the case where the set $I_3(H')$ is not empty.
Then for every $i\in I_3(H')$ and every $j \in \{1, \ldots, n_i\}$, the group $K_{i,j}$ is a proper malnormal subgroup of $H'$ and hence it is of infinite index in $H'$ by Lemma \ref{aminfind}.
By \cite[Theorem 5.16]{Kap99}, there exists a proper subgroup $H''$ of $H'$ satisfying the following:
\begin{itemize}
\setlength{\itemsep}{0mm}
\item $H''$ is a free subgroup of rank two and is malnormal in $H'$;
\item for every $i \in I_3(H')$, every $j \in \{1, \ldots, n_i\}$ and every element $h'$ of $H'$, the intersection $K_{i,j} \cap h'H''h'^{-1}$ is trivial.
\end{itemize}
Since $H'$ belongs to $\calm$, the subgroup $H''$ also belongs to $\calm$.
We claim that if $i\in \{1, \ldots, m\}$ belongs to $I_3(H')$,
then for every element $f$ of $F$, the intersection $H'' \cap a_i(fH''f^{-1})$ is trivial.
Indeed, the intersection $H' \cap a_i(fH'f^{-1})$ is either trivial or conjugate to $K_{i,j}$ in $H'$ for some $j \in \{1, \ldots, n_i\}$.
If the former occurs, the claim obviously holds.
If the latter occurs, the intersection $H'' \cap a_i(fH''f^{-1})$ is conjugate to a subgroup of $K_{i,j} \cap h'H''h'^{-1}$ for some $j \in \{1, \ldots, n_i\}$ and some element $h'$ of $H'$.
By the choice of $H''$, the intersection $H'' \cap a_i(fH''f^{-1})$ is trivial.
The above claim implies that $I_3(H')$ is contained in $I_1(H'')$.
Since $H''$ is a subgroup of $H'$, the set $I_1(H')$ is also contained in $I_1(H'')$.
Hence the union $I_1(H') \cup I_3(H')$ is a proper subset of the union $I_1(H'') \cup I_3(H'')$ if $I_3(H'')$ is not empty.
By repeating this procedure if necessary, we can find an element $H$ of $\calm$ such that $I_3(H)$ is empty.
\end{proof}
Now we are ready to prove Theorem \ref{kap1}.
Given a subgroup $H$ of a group $G$, we put
\begin{eqnarray*}
V_G(H) &=& \{ g \in G ~|~ H \cap gHg^{-1}\text{ is of finite index both in }H\text{ and }gHg^{-1}\}.
\end{eqnarray*}
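\begin{Rem}\label{rem-vgh}
The set $V_G(H)$ is a subgroup of $G$ containing $H$, often called the commensurator of $H$ in $G$. For example, if $F$ is a free group with basis $\{a,b\}$ and $H=\langle a\rangle$, then $V_F(H)=H$: if $g\in V_F(H)$, then $ga^{m}g^{-1}=a^{n}$ for some nonzero integers $m$ and $n$, which forces $m=n$ and $g\in\langle a\rangle$ because centralizers in free groups are cyclic.
\end{Rem}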
\begin{proof}[Proof of Theorem \ref{kap1}]
It follows from the assumption that $M$ has a finite index normal subgroup $F$ which is a non-abelian free group of finite rank.
The action of $M$ on $F$ by conjugations induces a homomorphism from $M$ to $\Out(F)$.
We denote the image of this homomorphism by $U$.
Since $F$ is a finite index subgroup of $M$, $U$ is a finite subgroup of $\Out(F)$.
For each $l \in \{1, \ldots, n\}$, we put $H_l=M_l \cap F$.
Since $M_l$ is finitely generated and $F$ is a finite index subgroup of $M$, the subgroup $H_l$ is also finitely generated.
Since the subgroup $M_l$ is of infinite index in $M$ and the subgroup $F$ is of finite index in $M$, the subgroup $H_l$ is of infinite index in $F$ and of finite index in $M_l$.
Therefore we can take a subgroup $H$ of $F$ given by Lemma \ref{kap2}.
We put $V=V_M(H)$.
By \cite[Theorem 1.6]{H-W09}, the group $H$ is a finite index subgroup of $V$.
Hence $V$ is a finitely generated and virtually non-abelian free group.
By the definition of $U$ and condition (iii) in Lemma \ref{kap2}, for every element $m$ of $M$, either $mHm^{-1}$ is equal to $H$ or the intersection $H \cap mHm^{-1}$ is trivial.
Hence for every element $m$ of $M \setminus V$, the intersection $H \cap mHm^{-1}$ is trivial.
Since $H$ is a finite index subgroup of $V$, this implies that $V$ is almost malnormal in $M$.
By condition (ii) in Lemma \ref{kap2}, for every $l \in \{1, \ldots, n\}$ and every element $m$ of $M$, the intersection $H_l \cap mHm^{-1}$ is trivial and hence the intersection $M_l \cap mVm^{-1}$ is finite.
\end{proof}
\section{Proof of Theorem \ref{hypemb}}
\label{sect-proof}
We prove Theorem \ref{hypemb}.
As in the case of Corollary \ref{sq}, it suffices to show the following in view of \cite[Corollary 4.5]{Osi06b}.
\begin{Thm}\label{hypemb'}
Let $G$ be a group which is hyperbolic relative to a family $\bbk$ of proper subgroups and $\Gamma$ a subgroup of $G$ which is neither virtually cyclic nor parabolic with respect to $\bbk$.
If $\Gamma$ contains an element of infinite order, then there exists a finitely generated and virtually non-abelian free subgroup $V$ of $G$ which is hyperbolically embedded into $G$ relative to $\bbk$ and contains $V\cap \Gamma$ as a finite index subgroup.
Moreover if $G$ is torsion-free, then $\Gamma$ contains a free subgroup of rank two which is hyperbolically embedded into $G$ relative to $\bbk$.
\end{Thm}
This generalizes a result due to I. Kapovich \cite[Theorem C]{Kap99} for torsion-free hyperbolic groups.
For the proof of Theorem \ref{hypemb'}, we show the following lemma.
\begin{Lem}\label{hypemb'fg}
Let $G$ be a group which is hyperbolic relative to a finite family $\bbk$ of proper subgroups and $\Gamma$ a subgroup of $G$ which is neither virtually cyclic nor parabolic with respect to $\bbk$.
Suppose that $G$ is finitely generated.
Then there exists a finitely generated and virtually non-abelian free subgroup $V$ of $G$ which is hyperbolically embedded into $G$ relative to $\bbk$ and contains $V\cap \Gamma$ as a finite index subgroup.
Moreover if $G$ is torsion-free, then $\Gamma$ contains a free subgroup of rank two which is hyperbolically embedded into $G$ relative to $\bbk$.
\end{Lem}
\begin{proof}
By Lemma \ref{sq'fg}, the group $\Gamma$ contains a free subgroup $F$ of rank two which is strongly quasiconvex relative to $\bbk$ in $G$.
We put $M=V_G(F)$.
Since $F$ is strongly quasiconvex relative to $\bbk$ in $G$, it follows from \cite[Theorem 1.6]{H-W09} that $F$ is a finite index subgroup of $M$.
Hence $M$ is strongly quasiconvex relative to $\bbk$ in $G$ and we have $V_G(M)=M$.
By \cite[Corollary 8.7]{H-W09}, there exist only finitely many double cosets $MgM$ in $G$ such that $gMg^{-1}$ is not equal to $M$ and the intersection $M \cap gMg^{-1}$ is infinite.
We denote the collection of such double cosets by $\{Mg_lM ~|~ l \in\{1, \ldots, n\}\}$.
For each $l \in \{1, \ldots, n\}$, we put $M_l=M \cap g_lMg_l^{-1}$.
We claim that for each $l \in \{1, \ldots, n\}$, $M_l$ is of infinite index in both $M$ and $g_lMg_l^{-1}$.
Indeed, assume that this does not hold.
Since $g_l$ does not belong to $M$ and $V_G(M)$ is equal to $M$, the subgroup $M_l$ is of infinite index in either $M$ or $g_lMg_l^{-1}$.
By replacing $g_l$ by its inverse if necessary, we may assume that the subgroup $M_l$ is of finite index in $M$ and of infinite index in $g_lMg_l^{-1}$.
It follows from \cite[Lemma 6.6]{Kap99} that for every positive integer $k$, the subgroup $M \cap g_l^kMg_l^{-k}$ is of finite index in $M$ and of infinite index in $g_l^kMg_l^{-k}$.
Then for every positive integer $p$, the subgroup $\bigcap_{k=1}^p M \cap g_l^kMg_l^{-k}$ is of finite index in $M$ and hence the intersection $\bigcap_{k=1}^p g_l^kMg_l^{-k}$ is infinite.
By \cite[Theorem 1.4]{H-W09}, there exists a positive integer $p$ such that $g_l^p$ belongs to $M$.
This contradicts the assumption that the subgroup $M\cap g_l^pMg_l^{-p}$ is of infinite index in $g_l^pMg_l^{-p}$.
We also claim that for every $l \in \{1, \ldots, n\}$, the subgroup $M_l$ is finitely generated.
Indeed, since $M$ is strongly quasiconvex relative to $\bbk$ in $G$, it follows from Theorem \ref{sru-sq} that the conjugate $g_lMg_l^{-1}$ is also strongly quasiconvex relative to $\bbk$ in $G$.
Hence $M_l$ is strongly quasiconvex relative to $\bbk$ in $G$ (see for example \cite[Theorem 9.8]{Hru10} and \cite[Theorem 4.18]{Osi06a}).
By Theorem \ref{sru-sq}, the subgroup $M_l$ is finitely generated.
Hence it follows from Theorem \ref{kap1} that there exists a finitely generated and virtually non-abelian free subgroup $V$ of $M$ such that $V$ is almost malnormal in $M$ and $mVm^{-1} \cap M_l$ is finite for every $l \in \{1, \ldots, n\}$ and every element $m$ of $M$.
Since $M$ contains a finitely generated free subgroup of finite index and $V$ is a finitely generated subgroup of $M$, the subgroup $V$ is undistorted in $M$, that is, it is strongly undistorted relative to the empty family $\emptyset$ in $M$.
Since $M$ is strongly undistorted relative to $\bbk$ in $G$ by Theorem \ref{sru-sq}, the subgroup $V$ is strongly undistorted relative to $\bbk$ in $G$.
We claim that $V$ is almost malnormal in $G$.
Indeed, assume that $V$ is not almost malnormal in $G$.
Then there exists an element $g$ of $G \setminus V$ such that the intersection $V \cap gVg^{-1}$ is infinite.
In particular the intersection $M \cap gMg^{-1}$ is also infinite.
Since $V$ is almost malnormal in $M$, the element $g$ belongs to $G \setminus M$.
Then for some $l \in \{1, \ldots, n\}$ and some elements $m_1$ and $m_2$ of $M$, the element $g$ is equal to $m_1g_lm_2$.
Therefore the intersection $V \cap (m_1g_lm_2)V(m_1g_lm_2)^{-1}$ is infinite and hence the intersection $m_1^{-1}Vm_1 \cap g_lMg_l^{-1}$ is also infinite.
This contradicts the condition that for every $l \in \{1, \ldots, n\}$ and every element $m$ of $M$, the intersection $mVm^{-1} \cap M_l$ is finite.
Thus the subgroup $V$ is strongly undistorted relative to $\bbk$ and almost malnormal in $G$.
By Theorem \ref{hypemb-sruam}, the subgroup $V$ is hyperbolically embedded into $G$ relative to $\bbk$.
For the case where $G$ is torsion-free, we can take a desired subgroup $V$ of $\Gamma$ by applying \cite[Theorem 5.16]{Kap99} to $F$ instead of applying Theorem \ref{kap1} to $M$ in the above argument.
\end{proof}
\begin{proof}[Proof of Theorem \ref{hypemb'}]
The proof is done in the same way as the proof of Proposition \ref{sq'} by using Lemma \ref{hypemb'fg} and Lemma \ref{fgru'} (3) instead of Lemma \ref{sq'fg} and Lemma \ref{fgru'} (2), respectively.
\end{proof}
\section*{Acknowledgements}
The authors would like to thank Professor Ilya Kapovich for his useful suggestion.
\section{Introduction}
\label{sec:Intro}
The era of gravitational-wave (GW) astronomy has begun with the detection of the first GW signal GW150914 from the merger of binary black holes (BBHs) by the Advanced LIGO interferometers \cite{LIGOScientific:2016aoc}. More than ninety GW events \cite{LIGOScientific:2021djp} have been discovered in the years following the first detection, including one with follow-up electromagnetic signals, GW170817 \cite{LIGOScientific:2017vwq}. It is expected that with this new observational window, we can explore the nature of gravity, and also extract useful astrophysical and cosmological information.
To learn about the universe and its cosmic expansion history we need to measure both distances and redshifts. One shortcoming of GW observations is that they do not provide direct information about redshift. If an EM counterpart can be identified, the redshift of the source can be extracted directly. However, for stellar-mass BBH mergers the prospects of finding EM counterparts are theoretically very poor. Various methods have been developed to circumvent this need for EM follow-ups: for example, utilizing the anisotropies of compact binaries originating from the large-scale structure \cite{Namikawa:2015prh}; the cross-correlation of gravitational-wave standard sirens and galaxies \cite{Oguri:2016dgk,Nair:2018ign,Mukherjee:2018ebj,Zhang:2018ekk}; the statistical information of galaxy redshifts \cite{Chen:2017rfc,Fishbach:2018gjp,Gray:2019ksv,Wang:2020dkc,Zhu:2021aat}; the mass distribution function of binary black hole sources \cite{Farr:2019twy,You:2020wju}; and other methods \cite{Seto:2001qf,Messenger:2011gi,DelPozzo:2015bna}, {\it etc.}
During recent years, several new GW detectors have been proposed around the world. The future spaceborne GW observatories and the third-generation ground-based detectors will achieve unprecedented sensitivity in a broad frequency range. The Einstein Telescope (ET) is expected to have a sensitivity 10 times better than the current second-generation instruments, and may have a detection rate of $10^5$ events per year \cite{Punturo:2010zz,TP-toolbox-web}. The spaceborne GW observatories, such as the Laser Interferometer Space Antenna (LISA) \cite{Audley:2017drz}, Taiji \cite{Hu:2017mde} and TianQin \cite{TianQin:2015yph}, will open up a $10^{-4}\sim 1$ Hz window to the gravitational universe. The Big Bang Observer (BBO), a proposed successor to LISA, mainly focuses on the observation of gravitational waves from physical processes shortly after the Big Bang. It will be able to detect almost {\it all} the GW signals from compact binaries in the universe, and may have a detection rate of $10^{7}$ events per year \cite{Cutler:2005qq,Cutler:2009qv}.
With more and more GW events to be detected by the future GW observatories, the large scale structure (LSS) can be traced in the luminosity-distance space (LDS), just as has already been done in the redshift surveys of galaxies \cite{Zhang:2018nea}. Hence, one way to solve the issues associated with dark sirens is to construct a 3D map of GW sources in LDS. Via the anisotropic clustering measurement, one can obtain the luminosity distance ($D_L$) as well as the angular diameter distance ($D_A$) in different redshift bins.
By utilizing this $D_L$--$D_A$ relation, one can extract the LSS formation and evolution law in LDS. It plays essentially the same role as the standard redshift-distance duality.
Several works have been done in this direction: \cite{Zhang:2018nea,Libanore:2020fim,Palmese:2020kxn,Zhang:2021tdt,Libanore:2021jqv,Namikawa:2020twf,Yu:2020agu}.
In this paper, we are going to constrain the cosmological expansion history as well as the LSS growth rate in LDS, {\it i.e.} the angular diameter distance $D_A$, the luminosity distance space Hubble parameter $H_L$, and the growth rate $f_L\sigma_8$.
We adopt a Fisher matrix analysis to give the measurement errors for each of these parameters. As for the experimental configurations, we explore the capability of ET and BBO.
In particular, we investigate some major systematics to the luminosity distance measurement, namely the weak lensing magnification along the line of sight \cite{PhysRevD.81.124046}, as well as the peculiar velocities in the low redshift regime \cite{Gordon:2007zw}.
We consider both of these effects and see how they will downgrade the cosmological parameter constraining ability of the GW LDS measurements.
The rest of the paper is organized as follows. In section \ref{sec:Methods}, we will first briefly overview the physical picture of luminosity distance space and the anisotropy power spectrum in LDS. And then, we will introduce some details of our analysis, including the source of luminosity distance measurement errors, the event rate, and the tomographic redshift distribution of the GW sources.
In Section \ref{sec:res}, we present our forecasted result by considering BBO and ET. Section \ref{sec:con} is devoted to summaries and discussions.
\section{Methods}
\label{sec:Methods}
GW signals do not directly provide redshift information. However, since the amplitude of the GW signal is inversely proportional to the luminosity distance to the source, and the masses and orbital inclination can be extracted from the frequency evolution and relativistic amplitude, GW signals provide a direct and absolute measurement of the luminosity distance.
A map of large scale structure in luminosity distance space can be constructed from GW signals once the number density of the GW sources reaches some threshold. In this section we first briefly introduce the physical picture of the statistically anisotropic power spectrum in the luminosity distance space, which was first proposed in \cite{Zhang:2018nea}.
The peculiar velocity of the source contaminates the background recession velocity and hence biases the observed luminosity distance.
Up to the first order in the peculiar velocity, the observed luminosity distance reads
\begin{eqnarray}
D_L^{\rm obs}\approx D_L(1+2\vec{v}\cdot \hat{n})\;,
\end{eqnarray}
where $D_L$ is the luminosity distance in the unperturbed background, $\vec{v}$ is the peculiar velocity of the source, and $\hat{n}$ is the line of sight unit vector.
This effect distorts the matter distribution pattern in LDS w.r.t. the one assuming the unperturbed background.
It resembles the redshift space distortion (RSD) in the galaxy spectrographic survey.
The resulting power spectrum reads
\begin{eqnarray}\label{PSinLDS}
P^{\rm LDS}(k_{\perp}, k_{||})=P_m(k)\bigg{(}1+\frac{f_L}{b_g}\mu^2\bigg{)}^2F(k_{||})\;,
\end{eqnarray}
where $k_{\perp}(k_{||})$ represents the wave vector perpendicular (parallel) to the line of sight, $\mu$ is the cosine of the angle between the line of sight direction and the peculiar velocity of the source with $\mu\equiv k_{||}/k$ and $k\equiv \sqrt{k^2_{\perp}+k^2_{||}}$, $b_g$ is the density bias of GW host galaxies, and $F(k_{||})$ describes the finger-of-god (FoG) effect. Eq.~\eqref{PSinLDS} has a similar structure to the RSD formula, except for the pre-factor in the second term, where
\begin{eqnarray}
f_L=\Bigg{(}\frac{2D_L/(1+z)}{{\rm d}D_L/{\rm d}z}\Bigg{)}\times f\;,
\end{eqnarray}
and $f$ is the standard dimensionless growth rate defined as
\begin{eqnarray}
f=\frac{a}{D_1}\frac{dD_1}{da}=\frac{1}{aHD_1}\frac{dD_1}{d\eta}\;,
\end{eqnarray}
where $a$ is the scale factor, $\eta$ is the conformal time, and $D_1$ is the linear growth factor.
$f_L$ is zero at $z=0$, and increases monotonically with redshift. As a result, the peculiar velocity induced distortion effect in LDS is negligible at low redshift, but becomes larger than its redshift-space counterpart at around $z=1.7$.
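To make the redshift dependence of $f_L$ concrete, the pre-factor $\big(2D_L/(1+z)\big)/({\rm d}D_L/{\rm d}z)$ can be evaluated numerically. The minimal sketch below assumes a flat $\Lambda$CDM background with illustrative parameters ($\Omega_m=0.3$), which is not necessarily the fiducial model used elsewhere in this paper:

```python
import numpy as np

# Illustrative flat-LCDM background (assumed parameters, not the paper's fiducial model)
H0, Om = 70.0, 0.3        # km/s/Mpc, matter fraction
c = 299792.458            # km/s

def Hz(z):
    return H0 * np.sqrt(Om * (1.0 + z)**3 + (1.0 - Om))

def lum_dist(z, n=4000):
    """D_L = (1+z) * int_0^z c/H(z') dz' (trapezoidal rule), in Mpc."""
    zs = np.linspace(0.0, z, n)
    f = c / Hz(zs)
    chi = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(zs)))
    return (1.0 + z) * chi

def fL_prefactor(z, dz=1e-4):
    """The factor (2 D_L/(1+z)) / (dD_L/dz) that converts f into f_L."""
    dDL = (lum_dist(z + dz) - lum_dist(z - dz)) / (2.0 * dz)
    return 2.0 * lum_dist(z) / (1.0 + z) / dDL

# The factor vanishes as z -> 0 and exceeds unity near z ~ 1.7,
# where the LDS distortion overtakes the redshift-space one.
for z in (0.1, 1.0, 1.7, 3.0):
    print(z, fL_prefactor(z))
```

For $\Omega_m=0.3$ the pre-factor crosses unity close to $z\simeq1.7$, consistent with the statement above.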
We now introduce the details of the method used to constrain the cosmological parameters.
We employ a Fisher matrix analysis for 3 parameters, namely $D_A$, $H_L$, and $f_L\sigma_{8}$.
We adopt the same method as well as the same parameters as those in \cite{Zhang:2018nea}.
The difference is that we also consider the luminosity distance errors induced by the lensing magnification and peculiar velocity.
This allows a direct comparison with the results obtained without these unavoidable uncertainties.
Under the Gaussian statistical assumptions, the Fisher matrix is given by
\begin{eqnarray}\label{Fisher}
F_{\alpha\beta}=\sum_{\textit{\textbf{k}}} \frac{\partial P^{\rm LDS}(k,\mu, z)}{\partial \lambda_{\alpha}}\frac{\partial P^{\rm LDS}(k,\mu, z)}{\partial \lambda_{\beta}}\frac{1}{P^{\rm LDS}(k,\mu,z)^2}V_{\rm eff}(k)\;.
\end{eqnarray}
Here the sum is over different $\textit{\textbf{k}}$ bins, $P^{\rm LDS}$ is the statistically anisotropic power spectrum given in Eq.~\eqref{PSinLDS}, and $\lambda_\alpha$ stands for the cosmological parameters. For the matter power spectrum in Eq.~\eqref{PSinLDS}, we simply use the empirical formula in \cite{White:1996pz}, {\it i.e.}
\begin{eqnarray}
P_m(k)=\frac{2\pi^2k}{H_0^4}\delta^2_H(k)T^2(k)\;,
\end{eqnarray}
where $\delta_H$ is the density fluctuation and $T(k)$ is the transfer function.
\begin{comment}
\begin{eqnarray}
\delta_H(k)=\delta_H(k=H_0)\times(\frac{k}{H_0})^{n-1}
\end{eqnarray}
with
\begin{eqnarray}
\begin{aligned}
\delta_H(k&=H_0)=1.94\times10^{-5}{\nonumber}\\
&\times\exp\{-0.95(n-1)-0.169(n-1)^2\}\\
T(k)&=\frac{\log(1+2.34q)}{2.34q}{\nonumber}\\
&\times\{1+3.89q+(1.61q)^2+(5.46q)^3+(6.71q)^4\}^{-\frac{1}{4}}
\end{aligned}
\end{eqnarray}
\end{comment}
$V_{\rm eff}$ in Eq.~\eqref{Fisher} is the effective survey volume for a certain $\textit{\textbf{k}}$ shell, which is given by \cite{Feldman:1993ky,Hamilton:1997kv,Tegmark:1997rp}
\begin{eqnarray}
V_{\rm eff}^{(i)}(k)=\int\bigg{(}\frac{\bar{n}_{i}(z)P(k,z)}{W_{||}^{-2}(k)W_{\perp}^{-2}(k)+\bar{n}_{i}(z)P(k,z)}\bigg{)}^2d^3\textit{\textbf{r}}\;,
\label{eq:Veff}
\end{eqnarray}
where $W_{||}$ and $W_{\perp}$ are window functions in the parallel and perpendicular directions. $\bar{n}_{i}(z)$ is the source number density in the $i$-th ``photo-$z$'' bin, which we will define later.
Since we are interested in large scales, we can take $W_{\perp}=1$ to a good approximation. However, since we are investigating the distortion effect induced by the peculiar velocity field (a sub-leading order effect), we have to keep the corresponding term in the window function along the parallel direction. Hence, we have $W_{||}=\exp(-k^2\chi^2\sigma^2_{\log D}/2)$, where $\chi$ is the conformal distance, and $\sigma_{\log D}$ is the distance measurement error on a logarithmic scale.
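The effect of the radial window on the mode counting can be illustrated numerically. The sketch below evaluates the per-mode weight $\bar n P/(W_{||}^{-2}+\bar n P)$ entering Eq.~(\ref{eq:Veff}) for $W_{\perp}=1$; all numbers ($\bar n$, $P$, $\chi$, $\sigma_{\log D}$) are illustrative assumptions, not the survey configuration of this paper:

```python
from math import exp

# Illustrative values (assumptions, not the survey configuration of this paper)
nbar = 1.0e-4        # mean source density [Mpc^-3]
P = 1.0e4            # power spectrum amplitude [Mpc^3], so that nbar*P = 1
chi = 3000.0         # conformal distance to the bin [Mpc]
sigma_logD = 0.01    # logarithmic distance scatter

def W_par(k_par):
    """Radial window W_|| = exp(-k_||^2 chi^2 sigma_logD^2 / 2)."""
    return exp(-0.5 * (k_par * chi * sigma_logD)**2)

def mode_weight(k_par):
    """Integrand factor nbar*P / (W_||^{-2} + nbar*P) of V_eff (with W_perp = 1)."""
    nP = nbar * P
    return nP / (W_par(k_par)**(-2) + nP)

# Radial modes are progressively suppressed as k_|| * chi * sigma_logD grows
for k in (1e-3, 1e-2, 5e-2, 1e-1):
    print(k, mode_weight(k))
```

The weight drops sharply once $k_{||}\chi\sigma_{\log D}\gtrsim 1$, which is why the distance scatter effectively erases line-of-sight modes.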
Besides the experimental error of the luminosity distance, which has already been considered in \cite{Zhang:2018nea}, in this paper we also include some of the unavoidable distance uncertainties which are generated during the propagation along the line of sight. Among these, two of the most significant effects are the weak lensing magnification at high redshift and the peculiar velocity at low redshift. We adopt the following fitting formulae \cite{PhysRevD.81.124046,Kocsis_2006,PhysRevLett.99.081301} for the distance error due to lensing magnification, $\sigma^{\rm lens}_{D_L}$, and that due to velocity dispersion, $\sigma^{\rm pv}_{D_L}$:
\begin{eqnarray}
\label{eq:lens1}
\sigma_{D_L}^{\rm lens}(z)=D_L(z)\times C_l \bigg{(}\frac{1-(1+z)^{-\beta_l}}{\beta_l}\bigg{)}^{\alpha_l}\;,\\
\label{eq:pv1}
\sigma_{D_L}^{\rm pv}(z)=D_L(z)\times \bigg{(}1-\frac{c(1+z)^2}{H(z)D_L(z)}\bigg{)}\frac{\sqrt{\langle v^2\rangle}}{c}\;,
\end{eqnarray}
where $C_l=0.066, \beta_l=0.25, \alpha_l=1.8$ and $\sqrt{\langle v^2\rangle}=500~{\rm km/s}$, which is the root-mean-square peculiar velocity of the host galaxy with respect to the Hubble flow. For the intrinsic measurement uncertainties, we consider four cases with $\sigma_{\log D}^{\rm GW}=0.001, 0.005, 0.01, 0.02$, which should cover the typical $D_L$ uncertainty in the whole redshift range. Hence, the total uncertainty is given by
\begin{eqnarray}
\sigma_{D_L}^{\rm tot}=\sqrt{(\sigma^{\rm GW}_{D_L})^2+(\epsilon\sigma^{\rm lens}_{D_L})^2+(\sigma^{\rm pv}_{D_L})^2}\;.
\end{eqnarray}
The coefficient $\epsilon$ in front of $\sigma^{\rm lens}_{D_L}$ denotes the delensing efficiency.
Because lensing is the dominant error on the luminosity distance estimation at high redshift, the original GW data have to be delensed in order to be usable for precision cosmology. However, given the experience from CMB studies \cite{SimonsObservatory:2018koc}, we know that this is technically very challenging.
For GW delensing, we can use the synthetic observation of a wide, shallower weak lensing shear measurement ({\it e.g.} 20 galaxy/arcmin$^2$) and a small, deeper weak lensing flexion measurement ({\it e.g.} 500 galaxy/arcmin$^2$) \cite{Shapiro:2009sr,Hilbert:2010am}. In this paper, we adopt an optimistic choice of delensing efficiency, namely $50\%$ delensing, which is a similar level to the stage-IV CMB experimental capability \cite{SimonsObservatory:2018koc}.
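The error budget of Eqs.~(\ref{eq:lens1})--(\ref{eq:pv1}) can be sketched numerically. The flat $\Lambda$CDM background used below to evaluate $D_L(z)$ and $H(z)$ is an illustrative assumption, and the intrinsic GW error is converted to an absolute scatter via the small-error approximation $\sigma^{\rm GW}_{D_L}\approx\sigma^{\rm GW}_{\log D}\,D_L$:

```python
import numpy as np

c = 299792.458                 # km/s
H0, Om = 70.0, 0.3             # illustrative flat-LCDM background (assumed)

def Hz(z):
    return H0 * np.sqrt(Om * (1.0 + z)**3 + (1.0 - Om))

def lum_dist(z, n=4000):
    """D_L = (1+z) * int_0^z c/H(z') dz' (trapezoidal rule), in Mpc."""
    zs = np.linspace(0.0, z, n)
    f = c / Hz(zs)
    return (1.0 + z) * float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(zs)))

def sigma_lens(z, Cl=0.066, beta=0.25, alpha=1.8):
    """Lensing-magnification error, Eq. (eq:lens1)."""
    return lum_dist(z) * Cl * ((1.0 - (1.0 + z)**(-beta)) / beta)**alpha

def sigma_pv(z, v_rms=500.0):
    """Peculiar-velocity error, Eq. (eq:pv1); the magnitude of the bracket is taken."""
    DL = lum_dist(z)
    return DL * abs(1.0 - c * (1.0 + z)**2 / (Hz(z) * DL)) * v_rms / c

def sigma_tot(z, sigma_logD_gw=0.005, eps=0.5):
    """Quadrature sum of the three terms, with delensing efficiency eps."""
    sig_gw = sigma_logD_gw * lum_dist(z)   # small-error approximation
    return np.sqrt(sig_gw**2 + (eps * sigma_lens(z))**2 + sigma_pv(z)**2)

# Peculiar velocity dominates at low z, lensing at high z
for z in (0.1, 1.0, 3.0):
    DL = lum_dist(z)
    print(z, sigma_pv(z) / DL, sigma_lens(z) / DL, sigma_tot(z) / DL)
```

With these parameters the fractional lensing error reaches roughly the ten-percent level at $z=3$, while the peculiar-velocity term is at the percent level only at low redshift, matching the qualitative statements in the text.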
Next, let us introduce the true average number density of GW sources, $\bar n_{\rm GW}$. It is different from $\bar{n}_{i}(z)$ in Eq. (\ref{eq:Veff}); the latter is the observed average number density in the $i$-th ``photo-$z$'' bin. Here, we borrow the phrase ``photo-$z$'' from galaxy surveys to describe the luminosity distance measurement uncertainty in GW data. The ``observed'' photo-$z$ can be computed from the observed luminosity distance by assuming some fiducial distance-redshift duality.
The true density and the ``photo-$z$'' density are related via
\begin{eqnarray}
\bar{n}_{i}(z)=\int_{z^i_{p, {\rm low}}}^{z^i_{p,{\rm up}}}~\bar n_{\rm GW}(z)p(z_p|z)~dz_p\;,
\end{eqnarray}
where $z_p$ is the measured ``photo-$z$'', $p(z_p|z)dz_p$ describes the true $z$ distribution in a certain ``photo-$z$'' interval.
We consider both BH-BH and NS-NS mergers that may be detected in a ten-year period by ET and BBO.
For BBO, $\bar n_{\rm GW}$ is dominated by NS-NS events, and we adopt a model from \cite{Cutler:2009qv} to describe the redshift evolution of the NS-NS merger rate
\begin{eqnarray}
\bar n_{\rm GW}(z)=n_0\times r(z)\;,
\end{eqnarray}
where $r(z)$ is the time evolution factor which is given by the following linear fit formula
\begin{eqnarray}
\begin{split}
r(z)= \left \{
\begin{array}{ll}
1+2z\;, & z\le 1\\
\frac{3}{4}(5-z)\;, & 1\le z\le 5\\
0\;, & z\ge 5
\end{array}
\right.
\end{split}
\end{eqnarray}
The local merger rate $n_0$ for BBO is assumed to be $1.54\times 10^{-6}~\rm{Mpc^{-3}yr^{-1}}$.
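For reference, the piecewise merger-rate model above translates directly into code; a minimal sketch using the local rate quoted in the text:

```python
def r_evol(z):
    """Piecewise-linear time-evolution factor r(z) of the NS-NS merger rate."""
    if z <= 1.0:
        return 1.0 + 2.0 * z
    if z <= 5.0:
        return 0.75 * (5.0 - z)
    return 0.0

n0 = 1.54e-6  # local merger rate adopted for BBO [Mpc^-3 yr^-1]

def n_gw(z):
    """Comoving merger-rate density n_GW(z) = n0 * r(z)."""
    return n0 * r_evol(z)

# The rate rises to its maximum at z = 1 and vanishes beyond z = 5
print(n_gw(0.0), n_gw(1.0), n_gw(5.0))
```

Note that the two linear branches match continuously at $z=1$, where $r(1)=3$.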
For ET, we adopt an interpolation formula given in \cite{Libanore:2020fim}
\begin{eqnarray}
\frac{d^2N_m}{dzd\Omega}=2\bigg{[}A\exp \bigg{(}-\frac{(z-\bar{z})^2}{2\sigma^2}\bigg{)}\bigg{]}
\bigg{[}\frac{1}{2}\bigg{(}1+{\rm erf}(\frac{\alpha(z-\bar{z})}{\sigma^2\sqrt{2}})\bigg{)}\bigg{]}\;,
\end{eqnarray}
where $A=10^{3.22}, \bar{z}=0.37, \sigma^2=1.42, \alpha=5.48$ for BH-BH merger events, and $A=10^{3.07}, \bar{z}=0.19, \sigma^2=0.15, \alpha=0.8$ for NS-NS merger events.
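The interpolation formula can be implemented as follows; a minimal sketch using the fit parameters quoted above (the argument of the error function is reproduced as written, with $\sigma^2$ in the denominator):

```python
from math import erf, exp, sqrt

def d2N_dzdOmega(z, A, zbar, sigma2, alpha):
    """Skewed-Gaussian fit to the ET merger-rate redshift distribution."""
    gauss = A * exp(-(z - zbar)**2 / (2.0 * sigma2))
    skew = 0.5 * (1.0 + erf(alpha * (z - zbar) / (sigma2 * sqrt(2.0))))
    return 2.0 * gauss * skew

# Fit parameters quoted in the text
BHBH = dict(A=10**3.22, zbar=0.37, sigma2=1.42, alpha=5.48)
NSNS = dict(A=10**3.07, zbar=0.19, sigma2=0.15, alpha=0.8)

print(d2N_dzdOmega(0.5, **BHBH), d2N_dzdOmega(0.5, **NSNS))
```

At $z=\bar z$ the skew factor equals $1/2$, so the distribution takes the value $A$ there; the NS-NS distribution is much narrower than the BH-BH one.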
We adopt the tomographic method for measuring the LDS power spectrum. We divide the survey depth range, $D_L\in(0,30)$ Gpc, into three bins. As a comparison, \cite{Zhang:2018nea} divided the same luminosity distance range into six bins. The difference arises because in our case we take into account the lensing magnification and peculiar velocity uncertainties in the luminosity distance estimation. From Eqs. (\ref{eq:lens1}) and (\ref{eq:pv1}), one can see that the corresponding errors range from a few percent to twenty percent, especially for the lensing magnification at high redshifts. These unavoidable errors degrade the $D_L$ measurement from a spectroscopic-like survey to a photometric-like survey. In detail, for BBO we choose the redshift bins as $z\in(0-1.54)$, $(1.54-2.40)$, $(2.40-3.70)$; for ET, we have $z\in(0-0.71)$, $(0.71-1.36)$, $(1.36-6)$. Each bin contains comparable numbers of GW events.
Hence, one can adopt the conventional procedure in the photometric galaxy survey in our GW analysis.
We make the simple assumption that the deduced redshift has a Gaussian scatter around the true redshift, just like the photometric redshift distributions in photometric galaxy surveys \cite{Ma:2005rc,Yao:2017dnt}
\begin{eqnarray}
p(z'|z)=\frac{1}{\sqrt{2\pi}\,\sigma_z(1+z)}\exp \bigg{[}-\frac{(z-z'-\Delta^i_z)^2}{2(\sigma_z(1+z))^2}\bigg{]}\;,
\end{eqnarray}
where $z$ is the true redshift, $z'$ is the photo-$z$ redshift, and $\sigma_z$ is the redshift uncertainty induced by the luminosity distance measurement error. In Fig.~\ref{BBObin} (for BBO) and Fig.~\ref{ETbin} (for ET), we show the true $z$ distributions for the three tomographic bins with the different luminosity distance uncertainties, and thus different photo-$z$ scatter $\sigma_z$.
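With this Gaussian scatter model, the integral defining $\bar{n}_i(z)$ reduces to error functions. The sketch below computes the probability that a source at true redshift $z$ falls into a given ``photo-$z$'' bin; the bin edges and scatter values are illustrative:

```python
from math import erf, sqrt

def bin_fraction(z, zp_low, zp_up, sigma_z, delta_z=0.0):
    """P(zp_low < z_p < zp_up | z) for the Gaussian photo-z model p(z_p|z).

    The mean of z_p is z - delta_z and the scatter is sigma_z*(1+z), so the
    integral over the bin reduces to a difference of error functions.
    """
    s = sigma_z * (1.0 + z)
    a = (zp_low - z + delta_z) / (sqrt(2.0) * s)
    b = (zp_up - z + delta_z) / (sqrt(2.0) * s)
    return 0.5 * (erf(b) - erf(a))

# nbar_i(z) then follows by multiplying the true density:
#   nbar_i(z) = n_gw(z) * bin_fraction(z, zp_low_i, zp_up_i, sigma_z)
# Example with illustrative BBO-like bin edges and sigma_z = 0.03:
edges = [0.0, 1.54, 2.40, 3.70]
z = 1.54  # a source sitting exactly on a bin edge leaks half into each neighbor
print([bin_fraction(z, lo, hi, 0.03) for lo, hi in zip(edges[:-1], edges[1:])])
```

A source sitting on a bin edge contributes half of its probability to each adjacent bin, which is precisely the bin-overlap effect visible in the middle and bottom panels of Figs.~\ref{BBObin} and \ref{ETbin}.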
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{./pictures/binplotBBO/001.png}
\caption{\small The true redshift distributions in different tomographic bins with BBO experimental configuration. Top panel: $D_L$ uncertainty from GW intrinsic measurement error only; Middle panel: $D_L$ uncertainty from GW intrinsic measurement error plus peculiar velocity error; Bottom panel: $D_L$ uncertainty from GW intrinsic measurement error plus lensing magnification error. All three panels assume the GW intrinsic measurement error level, $\sigma_{\log D_L}^{\rm GW}=0.001.$}\label{BBObin}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{./pictures/binplotET/001.png}
\caption{\small The true redshift distributions in different tomographic bins with ET experimental configuration. Top panel: $D_L$ uncertainty from GW intrinsic measurement error only; Middle panel: $D_L$ uncertainty from GW intrinsic measurement error plus peculiar velocity error; Bottom panel: $D_L$ uncertainty from GW intrinsic measurement error plus lensing magnification error. All three panels assume the GW intrinsic measurement error level, $\sigma_{\log D_L}^{\rm GW}=0.001.$}\label{ETbin}
\end{figure}
One can see that, if we only consider the intrinsic measurement error, as shown in the top panels of Fig.~\ref{BBObin} and Fig.~\ref{ETbin}, there is negligible overlap among different redshift bins. This is similar to the case of spectroscopic redshifts in galaxy surveys. Once we include the statistical errors from the peculiar velocity (middle panels) and lensing magnification (bottom panels), the true redshifts are scattered back and forth within and among the bins, and the true redshift distributions of different photo-$z$ bins overlap significantly.
We will see how this affects the cosmological parameter uncertainties in the next section.
\section{Results}
\label{sec:res}
In this section we present our Fisher matrix results for the measurement errors on $D_A$, $H_L$ and $f_L\sigma_8$. We assume a ten-year observation time for both BBO and ET, and show the results with different types of luminosity distance errors.
\begin{figure*}
\centering
\begin{minipage}{0.32\textwidth}
\includegraphics[width=1\textwidth]{./pictures/BBO/BBO1.jpg}
\end{minipage}
\begin{minipage}{0.32\textwidth}
\includegraphics[width=1\textwidth]{./pictures/BBO/BBO2.jpg}
\end{minipage}
\begin{minipage}{0.32\textwidth}
\includegraphics[width=1\textwidth]{./pictures/BBO/BBO3.jpg}
\end{minipage}
\caption{\small The forecasted errors on three cosmological parameters from three tomographic bins, assuming a ten-year observation period for BBO. The red, green, yellow and blue curves denote different luminosity distance measurement errors, namely $\sigma^{\rm GW}_{\log D_L}=0.001, 0.005, 0.01$ and $0.02$. Left panel: the $D_L$ measurement uncertainty is from the intrinsic GW observation error only; Middle panel: the $D_L$ measurement uncertainty is from the intrinsic GW observation and peculiar velocity dispersion errors; Right panel: the $D_L$ measurement uncertainty is from the intrinsic GW observation and gravitational lensing errors.}\label{BBOresult}
\end{figure*}
\begin{figure*}
\centering
\begin{minipage}{0.32\textwidth}
\includegraphics[width=1\textwidth]{./pictures/ET/ET1.jpg}
\end{minipage}
\begin{minipage}{0.32\textwidth}
\includegraphics[width=1\textwidth]{./pictures/ET/ET2.jpg}
\end{minipage}
\begin{minipage}{0.32\textwidth}
\includegraphics[width=1\textwidth]{./pictures/ET/ET3.jpg}
\end{minipage}
\caption{\small The forecasted errors on three cosmological parameters from three tomographic bins, assuming a ten-year observation period for ET. The red, green, yellow and blue curves denote different luminosity distance measurement errors, namely $\sigma^{\rm GW}_{\log D_L}=0.001, 0.005, 0.01$ and $0.02$. Left panel: the $D_L$ measurement uncertainty is from the intrinsic GW observation error only; Middle panel: the $D_L$ measurement uncertainty is from the intrinsic GW observation and peculiar velocity dispersion errors; Right panel: the $D_L$ measurement uncertainty is from the intrinsic GW observation and gravitational lensing errors.}\label{ETresult}
\end{figure*}
In Fig.~\ref{BBOresult}, we show our results for BBO. The three panels represent the cases with different luminosity distance errors, {\it i.e.}, only the intrinsic measurement error; the intrinsic error plus the peculiar velocity error; and the intrinsic error plus the lensing magnification error.
It is clearly seen from the figure that, the uncertainty for the cosmological parameters increases with increasing redshift.
This is due to the decrement of GW event detection rate at high redshifts.
In the case with only the $D_L$ intrinsic measurement uncertainty, {\it i.e.} the left panel of Fig.~\ref{BBOresult}, the cosmological parameters can be measured with high precision.
The fractional errors for all three parameters are below $10^{-3}$ for the case with $\sigma^{\rm GW}_{\log D_L}=0.001$. For the worst case, namely $\sigma^{\rm GW}_{\log D_L}=0.02$, the estimated fractional errors degrade to $10^{-1}$.
In the middle and right panels of Fig.~\ref{BBOresult}, we see that once the LSS-induced luminosity distance measurement uncertainties are included, the errors on the cosmological parameters increase significantly.
In the case with velocity dispersion errors, the fractional errors for the cosmological parameters stay in the range of a few $10^{-3}$ to $10^{-1}$.
This is still an acceptable result.
However, once the lensing uncertainty is included, the fractional errors increase dramatically at high redshift. They reach order $O(1)$ in the third luminosity bin, which is centered at $D_L=25$ Gpc. In the first and second luminosity bins, the results with lensing errors are similar to those with velocity dispersion errors.
In Fig.~\ref{ETresult}, we show our results for ET.
The results show a similar tendency to BBO.
However, since the detection rate for ET is much smaller than that for BBO, the fractional errors are about 1 to 2 orders of magnitude worse than those of BBO.
For the case with only the intrinsic $D_L$ measurement errors, {\it i.e.} the left panel of Fig.~\ref{ETresult}, the fractional errors in the first luminosity bin range from
a few $10^{-3}$ to $10^{-2}$ for the different intrinsic luminosity errors. The corresponding errors in the second luminosity bin reach the level of $10^{-2}$ to 1, the former for $\sigma^{\rm GW}_{\log D_L}=0.001$ and the latter for $\sigma^{\rm GW}_{\log D_L}=0.02$.
After adding the velocity dispersion to the luminosity distance uncertainty, the results in the second and third luminosity bins (middle panel of Fig.~\ref{ETresult}) are almost unchanged compared with those with only the intrinsic measurement error. The results in the first bin are about a factor of a few worse than those shown in the left panel. This is different from the BBO result. For BBO, the intrinsic luminosity distance measurement error is small, hence the results are sensitive to the addition of velocity dispersion. For ET, however, the addition of the velocity dispersion error does not change the results significantly.
For the case with the inclusion of lensing magnification errors, as shown in the right panel of Fig.~\ref{ETresult}, the data in the third luminosity bin are completely useless for cosmological purposes. For instance, the fractional error of $f_L\sigma_8$ increases to $5\times 10^1$.
On the other hand, since the lensing effect is still sub-dominant at low redshift, the cosmological results from the first luminosity bin are still comparable to those of the galaxy survey method.
\section{Conclusions}
\label{sec:con}
Unlike galaxy surveys, GW events are observed in the luminosity distance space (LDS) rather than in redshift space. With the accumulation of GW events, one can use these data as a tracer of the large scale structure and hence study its cosmological implications.
In this work, we investigate the possibility of using GW source clustering data in the luminosity distance space to constrain cosmological parameters. In particular, we study the contamination from weak lensing magnification and peculiar velocity dispersion uncertainties. For the number-count clustering analysis to be useful for cosmology, the GW events need to accumulate to the level of one million or above. To satisfy this condition, we consider the future spaceborne GW observatories, such as the Big Bang Observer (BBO), and the third-generation ground-based observatories, such as the Einstein Telescope (ET). For each of these two experiments, we assume a ten-year observation period. For BBO, one can accumulate $10^8$ BH-BH and NS-NS merger events within 10 years; for ET, this number is about $10^6$.
We adopted a Fisher matrix analysis to constrain several cosmological parameters, {\it i.e.} the angular diameter distance $D_A$, the luminosity distance space Hubble parameter $H_L$, and the linear growth rate $f_L\sigma_8$.
We considered several different sources of luminosity distance measurement uncertainty, {\it i.e.} velocity dispersion, gravitational lensing, as well as the intrinsic uncertainty from GW detection. We forecasted the fractional errors of the cosmological parameters with different $D_L$ error budgets, and showed how they affect the cosmological parameter constraining ability of this LDS method. In order for our data to meet the minimum requirement of the number count clustering analysis\footnote{To suppress the number count shot noise.}, we have to make sure that each of the luminosity distance bins contains enough GW sources. Hence, we divide the whole luminosity distance range from 0 to 30 Gpc into three wide bins, each containing comparable GW source numbers.
For BBO, our results showed that the constraining ability is sensitive to the $D_L$ uncertainty. The Fisher forecasted cosmological parameter errors increases with increasing of the intrinsic GW luminosity measurement errors. The fractional errors on the cosmological parameters in the first luminosity distance bin, which is centered at 5Gpc, can achieve the level of a few $10^{-4}$ level under the circumstance of intrinsic $D_L$ measurement error of $\sigma^{\rm GW}_{\log D_L}=0.001$. When we degrade the intrinsic measurement error to the level of $\sigma^{\rm GW}_{\log D_L}=0.02$, the cosmological parameter errors also degrade about 1 order of magnitude. The cosmological parameter fractional errors in the second and third bins monotonically increase by a factor of a few compared with the previous adjacent bin. At the $D_L=25$Gpc distance, the fractional errors are about $10^{-1}$ level for all three parameters which are concerned. Once we include the velocity dispersion errors, the cosmological parameter fractional errors in the first bin are enlarged by roughly one order magnitude. However, in the second and third bins, the velocity dispersion has very limited impact on the final results. On the contrary, the lensing magnification errors modify the results mainly in the high redshift leaving the low redshift almost unchanged. For instance, even optimally assuming a $50\%$ delensing efficiency, the fractional errors on the cosmological parameters in the third luminosity distance bin are still degraded by one order magnitude compared with the one in the same bin but with velocity dispersion errors. Hence, we can not use the third bin to do any precision cosmology studies if we could not remove this lensing error significantly. The results for ET are similar as those from BBO. Due to the GW source number is less than the former, the corresponding results also get a bit worse. 
Moreover, because of the larger measurement shot noise, the cosmological results from ET are also less sensitive to the velocity dispersion and lensing errors.
Ideally, the GW source number count can be used as a novel tracer of LSS in luminosity distance space. In practice, once we include the peculiar velocity dispersion and lensing magnification uncertainties in $D_L$, we can only use the GW data as ``photometric'' data instead of the originally proposed ``spectroscopic'' data. Due to the uncertainty along the line of sight, the robustness of this method is degraded significantly at high redshift.
\section*{Acknowledgements}
QY is supported by the National Natural Science Foundation of China Grants No. 12005146.
BH is supported by the National Natural Science Foundation of China Grants No. 11973016.
\bibliographystyle{mnras}
\section{Method\label{method:sec}}
\include{results}
\include{discussion}
\section*{Acknowledgments}
We warmly thank Peter Biermann, John Black, Ralf-J\"urgen Dettmar, Carmelo Evoli, Horst Fichtner, Philipp Graeser, Francis Halzen, Matthias Mandelartz, Florian Schuppan and Andy Strong for intensive and fruitful discussions.
We
acknowledge the support from the DFG for the project B4 {\it Transport of cosmic rays from supernova remnants through the Galactic Magnetic Field} within the research unit ``Instabilities, turbulence and transport in
cosmic magnetic fields'' (FOR1048), from the Research Department of Plasmas with Complex Interactions (Bochum) and from the MERCUR-funded Ruhr Astroparticle-Plasma Physics centrum, RAPP centrum (St-2014-0040).
\bibliographystyle{elsarticle-num}
\section{Discussion and Outlook \label{discussion:sec}}
\subsection{Discussion}
We consider this paper a first proof of concept that gamma-ray data can be used in the future to constrain the cosmic ray energy budget from supernova remnants. At this point, the data only allow for an upper limit calculation: the central conclusion is that the energy budget of the observed cosmic ray flux can be matched by the population of gamma-ray emitting SNRs. However, they {\bf fail to explain the detected flux if...}
\begin{itemize}
\item ... several of those sources without a gamma-ray detected cutoff actually cut off below the knee region;
\item ... many of the gamma-ray signatures are lepton-dominated, i.e.\ produced by electrons rather than cosmic rays;
\item ... diffusion is significantly stronger than in the Kolmogorov case ($\delta =0.33$).
\end{itemize}
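For reference, the Kolmogorov-case scaling $\delta=0.33$ referred to above corresponds to a diffusion coefficient rising slowly with rigidity; a minimal sketch with an illustrative normalization (the values of $D_0$ and $R_0$ below are hypothetical, not the fit parameters of this work):

```python
def D_xx(rigidity_GV, D0=5.0e28, R0=4.0, delta=0.33):
    """Scalar diffusion coefficient D(R) = D0 * (R/R0)**delta in cm^2/s.
    D0 and R0 are illustrative placeholder values only."""
    return D0 * (rigidity_GV / R0) ** delta

# A larger delta (stronger energy dependence) lets high-energy cosmic rays
# escape the Galaxy faster, reducing the flux retained at high energies.
```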
Future investigations will have to confirm these conclusions. In particular, data from CTA and HAWC will help to extend the measurements up to cosmic ray knee energies.
It is still interesting to see that, although gamma-ray measurements show very diverse spectra, the result matches the observed cosmic ray budget very well within the uncertainties.
One further conclusion is that a cosmic ray wind is not needed to explain the data: without including convection, the energy budget can be well reproduced. The introduction of convective effects would reduce the energy budget observed at Earth and thus possibly lead to an underestimation of the cosmic ray flux.
\subsection{Outlook}
One aspect that we want to improve in the future is a better quantification of the variance of the method. To quantify the agreement between the predicted CR observables statistically, a measure of the variance is needed. The variance of the CR observables could be calculated and presented here along with the mean value. However, this variance has a limited statistical interpretation in the current simulation approach: it merely measures the spread of the simulated CR observable which is induced by randomly selecting positions for the 21 SNRs in the Galaxy. In particular, this variance would decrease if the number of SNRs were increased. It is expected that the variance of those 21 known SNRs poses an upper limit on the true variance. It would be desirable, in particular as soon as gamma-ray data allow for a more precise calculation of the CR spectra at the source, to have precise knowledge of the variance.
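The placement-induced spread discussed above, and its decrease with the number of simulated sources, can be illustrated with a toy Monte-Carlo; all numbers here are hypothetical and chosen only to show the scaling:

```python
import numpy as np

# Toy Monte-Carlo: the relative spread of a summed flux from randomly placed
# sources shrinks as the number of sources grows, so the spread over the 21
# known SNRs bounds the variance of the full population from above.
# Illustrative setup: distances uniform in [0.5, 10] kpc, flux ~ 1/d^2.
rng = np.random.default_rng(42)

def relative_spread(n_src, n_trials=2000):
    d = rng.uniform(0.5, 10.0, size=(n_trials, n_src))  # distances [kpc]
    total = (1.0 / d**2).sum(axis=1)                    # summed flux per trial
    return total.std() / total.mean()
```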
The calculation of the true variance would require including temporal aspects of the SNRs -- such as their production rate and lifetime -- in our simulation approach. This could be done as a generalization of our procedure to include spatial aspects via the selection of random positions and the mapping of SNRs to the closest grid point on the spatial simulation grid \cite{nils_ecrs2014}. As similar issues are already addressed in the GALPROP manual~\cite{strong_galprop_2011,Galprop_Web_Standford} (see chapter 6), an implementation including the temporal aspects of SNRs should be feasible.
In the longer term we intend to perform Monte-Carlo (MC) simulations of the propagation of Galactic CRs. As a basis, we suggest using the publicly available CRPropa MC framework, developed to study the propagation of UHECRs in extragalactic environments~\cite{2013APh....42...41K}. Especially the redesigned object-oriented structure of the upcoming version 3.0 of CRPropa seems to allow for easy extensions towards the propagation of Galactic CRs~\cite{2013arXiv1307.2643A}. With today's computer technologies, a Monte-Carlo treatment of Galactic CRs down to $T\sim 10$--$100$~TeV seems possible with reasonable run times. For lower energies it may be sufficient to switch to a diffusive approximation to avoid the time-intensive numerical solution of the equation of motion in the Galactic magnetic field. This method has two central advantages with respect to the approach of solving the transport equation:
\begin{itemize}
\item As single particles are propagated, it is possible to follow each individual trajectory.
\item The method allows for the implementation of arbitrary Galactic field models. This means that it is not necessary to assume a diffusion scalar, but that the particles can travel through a realistic magnetic field. By comparing different models, the effect of the magnetic field on the propagation and detection at Earth is possible. It can even be possible to derive the diffusion tensor using this method.
\end{itemize}
\section{Introduction\label{intro:sec}}
The question of the origin of cosmic rays is one of the most crucial ones in physics and astrophysics. Since their first detection in 1912 \cite{hess1912}, the energy spectrum of cosmic rays has been studied in great detail, revealing a general power-law structure of the differential energy spectrum, $dN/dE\propto E^{\gamma}$, with distinct features such as the two prominent breaks in the power law at $10^{15}$~eV (the so-called cosmic ray knee) and $10^{18.5}$~eV (the so-called cosmic ray ankle); see e.g.\ the reviews in \cite{gaisser1991,stanev2003}. Due to the deflection of these charged particles in cosmic magnetic fields, the observed flux arrives at a very high level of isotropy and the sources are difficult to identify. During the past years, great progress has been made in tying the observed cosmic ray flux to specific source classes. In particular, gamma-rays and neutrinos, arising from cosmic ray interactions in the vicinity of the production region, can be used to identify the cosmic ray origin. These neutral particles arise in photohadronic interactions or in the interaction of cosmic rays with matter,
\begin{eqnarray}
p\,p&\rightarrow& \pi^{\pm,0}\\
p\,\gamma &\rightarrow& \Delta^{+}\rightarrow \left\{\begin{array}{l}p\,\pi^{0}\\n\,\pi^{+}\end{array}\right.\\
\pi^{+}/\pi^{-}&\rightarrow& (e^{+}+\nu_{e}+\overline{\nu}_{\mu})+\nu_{\mu}/(e^{-}+\overline{\nu}_{e}+\nu_{\mu})+\overline{\nu}_{\mu}\\
\pi^{0}&\rightarrow& \gamma\gamma\,.
\end{eqnarray}
Here, it is assumed that all pions and muons decay before further interaction/acceleration in the source.
The detection of astrophysical high-energy gamma-rays, neutrinos and charged particles has provided us with important pieces of information on Galactic cosmic ray sources in the past few years:
\begin{enumerate}
\item {\bf Gamma-rays} are in general considered an ambiguous signature, as both bremsstrahlung and Inverse Compton scattering can contribute to a potential signal from $\pi^{0}$-decays, see e.g.\ \cite{blumenthal_gould1970,schlickeiser2002}. There are, however, two quite unambiguous signatures which make it possible to identify the $\pi^{0}$-decay energy spectrum: (1) A high-energy cutoff above $100$~TeV photon energy corresponds to acceleration up to $1$~PeV primary energy, assuming that about $10\%$ of the primary's energy is transferred to the photon. With leptonic processes, it is extremely difficult to reach such high energies. Future observations with HAWC and CTA, improving the sensitivity of gamma-ray detection above $100$~TeV, will help to find those sources accelerating up to PeV energies. (2) The low-energy cutoff in the $\pi^{0}$-decay spectrum is basically independent of the power-law index of the primary spectrum and is located at around $200$~MeV photon energy. Recently, it was possible to identify such a pronounced low-energy cutoff at $100$~MeV for the two SNRs W44 \cite{abdo_w44} and IC443 \cite{adbo_ic443}, using the Fermi satellite. While these detections provide a first proof that SNRs actually do accelerate cosmic rays, these two sources have steep energy spectra, or even an energy cutoff in the TeV range. The unambiguous identification of PeVatrons in the Galaxy must therefore be done at the highest energies in the future. Today, about 30 gamma-ray spectra from SNRs have been measured, among them around 20 shell-type SNRs detected with imaging air Cherenkov telescopes at TeV energies \cite{tevcat}. Although the cutoff at the highest energies could in many cases not be identified yet, the spectral behavior up to 10 TeV gamma-ray energy is known. This corresponds to cosmic ray proton energies of around 100 TeV, reaching up to a factor of 10 below the knee.
Thus, current spectra can now be used to try to estimate a possible contribution of SNRs to the total Galactic cosmic ray spectrum and energy budget.
\item Astrophysical {\bf neutrinos} provide unambiguous proof of hadronic interactions, but their detection is challenging \cite{halzen_klein2010}. The first experimental proof of the existence of astrophysical high-energy neutrinos succeeded with the 1~km$^{3}$-sized IceCube detector \cite{icecube2013,icecube2014}. The detected neutrino energy range corresponds to cosmic ray energies between approximately $600$~TeV and $>40$~PeV, assuming that about $1/20$th of the primary's energy goes into the neutrino. In a dedicated analysis of three years of data, 37 neutrinos were detected at a background of $\sim 4-6$~atmospheric neutrinos per year. The approximate number of $6-10$ astrophysical neutrinos per year does not point towards one or a few sources, but represents a diffuse flux from a larger number of sources. From the spatial distribution of events, it is clear that a large fraction of the flux must come from sources off the Galactic plane. Different estimates show that the diffuse neutrino flux from Galactic sources can only contribute $2-4$ events or fewer \cite{kistler_beacom2006,ahlers_murase2014,neronov2014,winter_galactic2014,kachelriess2014,mandelartz2015}. Future measurements with IceCube and IceCube-Gen2 \cite{icecubegen2_whitepaper} will increase the number of astrophysical neutrinos with improved pointing information. This way, it is expected that neutrino astrophysics will contribute significantly to disentangling the contributions from Galactic and extragalactic sources in the energy region between the knee and the ankle.
\item The physics of {\bf charged cosmic rays} is proceeding rapidly, mainly through the measurement and modeling of the composition-dependent energy spectra of cosmic rays, see e.g.\ \cite{haungs2011,blasi2013,wolfendale2014} for reviews. Up to TeV energies, direct measurements with balloons and satellites are possible. CREAM data reveal a break in the nuclei spectra for helium and heavier nuclei, which occurs around 100 GeV to 1 TeV in energy per nucleus \cite{ahn2010,biermann_apj2010}. Newer AMS02 data provide an update of the proton \cite{ams02_protons_icrc2013} and helium \cite{ams02_helium_icrc2013} spectra at high precision, where the break seen in CREAM data could not be confirmed for helium yet. Indirect measurements of cosmic ray air showers above TeV energies now also provide us with information about the light and heavy components of the cosmic ray spectrum. Using KASCADE and KASCADE-Grande, first evidence of the concrete composition around the cosmic ray knee and above could be obtained \cite{kascade_grande2013,kascade_grande2014}. These air-shower data reveal a possible iron knee at close to $10^{17}$~eV, indicating that the composition above the knee actually becomes heavier, as would be expected for a spectral cutoff proportional to the charge of the nucleus. However, the exact position of this iron knee, the question of the spectral behavior and composition at higher energies, and other details still have to be resolved. Future IceTop data, including information on the composition, will help to answer this question \cite{aartsen_icetop2013}. As the region between the knee and the ankle is expected to be the transition region between Galactic and extragalactic cosmic rays, it is important to analyze the spectral behavior and composition above the ankle as well in order to get a handle on the extragalactic contribution.
Recent results by the Auger collaboration show that even here, the composition seems to become heavier towards the high-energy cutoff of the cosmic ray spectrum \cite{auger_composition_icrc2013}.
\end{enumerate}
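The rule-of-thumb energy fractions quoted above (photons carrying $\sim10\%$ of the primary proton energy, neutrinos $\sim1/20$) translate directly between messenger and primary energies; a minimal sketch:

```python
# Rule-of-thumb conversions between messenger and primary proton energy,
# using the approximate average fractions quoted in the text (not exact
# interaction kinematics). All energies in TeV.
def proton_energy_from_photon(E_gamma, f_gamma=0.10):
    return E_gamma / f_gamma

def proton_energy_from_neutrino(E_nu, f_nu=1.0 / 20.0):
    return E_nu / f_nu

# A 100 TeV photon points back to a ~1 PeV proton; 30 TeV to 2 PeV neutrinos
# correspond to ~600 TeV to ~40 PeV protons.
```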
Composition-dependent cosmic ray transport through the Galaxy has been studied intensively in the past years, leading to the development of different numerical tools \cite{strong_propagation_1998,dragon,kissmann2014}, for a detailed description, see Section \ref{method:sec}. The GALPROP code was developed in the 1990s and for the first time provided a tool with realistic molecular distributions in the Galaxy, necessary to describe among others the longitude- and latitude-dependent, diffuse gamma-ray emission in the GeV energy range \cite{strong_propagation_1998}.
The transport equation solved numerically in GALPROP has the options to include a source term from
discrete sources and/or a source distribution, diffusion, convection,
diffusive re-acceleration, energy loss, fragmentation and radioactive
decay. The GALPROP code includes propagation in 3D, as well as
3D-maps of the Galactic magnetic field and the gas in the Galaxy. It
also includes the interstellar radiation field, which contributes to
inverse Compton scattering.
GALPROP has among others
been used to estimate the production of cosmic ray secondaries like
gamma-rays from $\pi^0-$decays and positrons from charged pion
decays. In particular, it can be used to fit the observed gamma-ray
sky, see e.g.\ \cite{strong_new_1998,abdo_milagro2007,abdo_milagro2008}. Further, the code is
applied to determine the extragalactic gamma-ray background
\citep{abdo_extragalactic2010}. It has also been shown that, for a given
source distribution function using a diffusive transport equation,
the Milky Way behaves like a
calorimeter \citep{strong2010}. In the modeling, a generic cosmic ray spectrum with no discrete, known sources is typically assumed. Approaches using discrete, nearby supernova remnants with a generic cosmic ray spectrum and a diffusive transport equation were pursued in \cite{buesching2008} and \cite{ahlers2009} and could reproduce the observed increasing ratio of positrons to electrons plus positrons. Similar results are
found in calculations for SNRs from heavy stars
\citep{erlykin2013,biermann_prl2009}.
The PAMELA data show that
the contribution of positrons to the electron and positron spectrum
is rising towards higher energies \citep{pamela2009}, which cannot be explained by the
typical results obtained with the GALPROP code using a distribution of sources and without including discrete sources in the neighborhood. Results on the electron and positron spectra are now available from the AMS-02 experiment, improving the precision and confirming the unexpected trend of an increasing positron fraction, which does not yet show a clear sign of a turnover \cite{ams_positrons2013}.
Including positrons produced in cosmic ray
interactions in nearby SNRs can produce this signature. Further observational features, like the excess of microwave and gamma-ray emission in the Galactic halo, named the Fermi bubbles
(GeV energies) and the WMAP-haze ($22-90$~GHz), could be due
to an enhanced number of SNRs towards the Galactic center
\citep{biermann_apjl2010}. In \cite{finkbeiner_galactic_wind2010}, the signal
is interpreted as a possible signature from AGN activity or from a
bipolar Galactic wind.
A similar deviation from expected cosmic ray features is observed in
the arrival direction of cosmic rays. Several experiments observe a large-scale anisotropy in the cosmic ray
spectrum at around $10$~TeV, which is at a level of $\sim 6\cdot
10^{-4}$ \citep[e.g.][]{amenomori_ta_anisotropy2006,guillian_superk_anisotropy2007,abdo_milagro_anisotropy2009,abbasi2010}. The feature cannot
be explained by the Compton-Getting effect, i.e.\ the motion of the
solar system through the Galaxy, which implies that the feature must
arise from the interstellar medium. Either the magnetic field
structure or the source distribution can be the reason \cite{giacinti_sigl2012,sveshnikova2013,pohl2013}. Alternatively, a local stream from an ancient supernova remnant in our local environment can explain the anisotropy \cite{biermann_anisotropie}.
In recent years, in an effort to improve the numerical modeling of cosmic ray transport, two other tools have been developed, both providing a cross-check for the GALPROP results
and also providing other, improved features. The DRAGON code builds on previous GALPROP results and includes major improvements, such as a generalization of the diffusion process by introducing a radial dependence of the diffusion
coefficient \cite{dragon,gaggero2013}. The PICARD code was developed in order to be able to use a full diffusion tensor \cite{kissmann2014}. Here, a large effort was put into the implementation of modern numerical methods for solving the partial differential equation, thereby improving performance \cite{kissmann2014}. Both codes include a spiral structure of the Milky Way \cite{gaggero2014,werner2015}. This way, electron and positron spectra including the new features revealed by AMS02 and PAMELA can be described with more realistic primary spectra than before.
In previous investigations with the different propagation tools, the normalization of the hadronic cosmic ray spectrum in the numerical framework is typically done using the total observed cosmic ray energy at Earth. No information from individual sources is considered and the normalization to the spectrum itself has therefore not been a major focus of previous work. In this paper, we focus on investigating the spectral behavior and the total cosmic ray energy budget to determine if observations of these two quantities match the hypothesis that supernova remnants are the sources of cosmic rays.
While there is no unambiguous proof yet, SNRs are the most promising source class to explain the cosmic ray energy budget below the knee. In a simplified calculation, it is assumed that a typical supernova explosion provides a kinetic energy budget of $E_{\rm SN}\sim 10^{51}$~erg. If the cosmic ray spectrum below the knee represents a Galactic cosmic ray flux focused within the Galactic plane, the inferred cosmic ray luminosity in the Galaxy is approximately $L_{\rm CR}\sim 2\cdot 10^{41}$~erg/s, within an uncertainty of about an order of magnitude as derived in e.g.\ \cite{drury2014}. Supernova explosions occur at an approximate rate of $\dot{n}\sim (1/50-1/100)$~yr$^{-1}$\footnote{In \cite{diehl2006}, the core collapse supernova rate is derived to be $1.9\pm1.1$ per century. We assume approximately $1-2$ SN per century, which is compatible within the uncertainties.}. Assuming now that a constant fraction $\eta$ of the kinetic energy is converted into hadronic cosmic rays, $\eta$ needs to be on the order of $10\%$ for SNRs to explain the total cosmic ray energy budget,
\begin{equation}
L_{\rm CR}\approx 2\cdot 10^{41}\,{\rm erg/s}\cdot \left(\frac{\eta}{0.1}\right)\cdot \left(\frac{\dot{n}}{0.02\,{\rm yr}^{-1}}\right)\cdot \left(\frac{E_{\rm SN}}{10^{51}\,{\rm erg}}\right)\,.
\label{cr_lumi:equ}
\end{equation}
While the above calculation shows qualitatively that the total cosmic ray energy budget up to the knee can be produced by SNRs, a quantitative proof using realistic SNR energy spectra has not been possible so far. In particular, this back-of-the-envelope calculation assumes that all SNRs have (a) the same energy budget; (b) the same spectral index; (c) the same maximum energy. The theory of particle acceleration in SNRs, however, suggests that the SNR cosmic ray spectra actually change with time, concerning all three parameters. A mixture of these parameters will therefore determine the average propagated cosmic ray energy spectrum. It is important to show that a realistic distribution of parameters actually does lead to the correct spectral behavior and energy budget.
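The luminosity estimate above can be checked numerically with the fiducial values quoted in the text; the product comes out at a few $10^{40}$~erg/s, consistent with the quoted $\sim2\cdot 10^{41}$~erg/s within its order-of-magnitude uncertainty:

```python
# Numerical check of the scaling L_CR = eta * n_dot * E_SN, using the
# fiducial values quoted in the text.
YEAR_S = 3.156e7            # seconds per year

eta   = 0.1                 # fraction of SN kinetic energy going into CRs
n_dot = 0.02 / YEAR_S       # supernova rate [1/s] (~2 per century)
E_SN  = 1.0e51              # kinetic energy per supernova [erg]

L_CR = eta * n_dot * E_SN   # ~6e40 erg/s, within the order-of-magnitude
                            # uncertainty of the quoted ~2e41 erg/s
```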
In this paper, we use the proton spectra derived from gamma-ray data for 21 well-studied SNRs in \cite{mandelartz2015} in order to investigate if the class of gamma-ray emitting SNRs can account for the cosmic ray proton energy budget. It is the first time that pieces of information from such a large sample of individual remnants are available. The studied population includes about $10\%$ of all supernova remnants which should be active simultaneously in the Galaxy\footnote{Here, we assume a rate of supernova explosions of $\dot{n}\sim 0.01-0.02$~yr$^{-1}$ with effective acceleration in the Sedov-Taylor phase over approximately $\tau_{\rm SN}\sim 10,000$~years. A total number of approximately $N_{\rm SNR}=100-200$ SNRs should therefore be active cosmic ray accelerators at a given time.}. The observed cosmic ray budget comes from the diffused pool of cosmic rays in the Galaxy, produced within the lifetime of cosmic rays in the Galaxy, i.e.\ $\tau_{\rm esc} \sim 10^{7}$~years \cite{yanasak2001}. As an individual SNR is only active for $\tau_{\rm SN}\sim 10^{4}$~years, the observed cosmic ray flux will be the average flux of a mixture of hundreds of sets of SNRs like the one we see today, but with spectra and energy budgets distributed differently than they are today. In this approach, we assume that the $\gamma$-ray detected sample is representative for the entire population of SNRs at a fixed time. We therefore use the $N_{\gamma\rm{-SNR}}=21$ spectra and simulate the propagation of the local spectra through the Galaxy $m$ times (with $m$ a large number, as described later in the paper). Finally, we reweight the spectrum by dividing by the number of SNRs simulated, $m$, and multiplying by the actual number of SNRs expected to be active at a time, i.e.\ $N_{\rm SNR} = 100$. This procedure is described in more detail in Section \ref{method:sec}.
Following this procedure, the concrete SNRs are placed randomly in the Galaxy, with a weight corresponding to the expected supernova explosion density in the Galaxy. The resulting cosmic ray spectrum is given by the average flux of these populations. For our simulation, we use the GALPROP code \cite{Galprop_Web_Standford,strong_propagation_1998,strong_new_1998}. This way, we can use the spectra observed at this instant of time and derive the spectrum produced on average within the lifetime of cosmic rays.
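The reweighting step described above can be sketched as follows; the flux arrays are mock stand-ins for GALPROP output, and the array sizes are hypothetical:

```python
import numpy as np

# Sketch of the reweighting: the propagated fluxes of all m simulated SNRs
# (mock arrays here, standing in for GALPROP output) are summed, divided by
# the number of simulated SNRs m, and rescaled to the N_SNR = 100 remnants
# expected to be active at any one time.
rng = np.random.default_rng(0)
m, N_SNR, n_bins = 2100, 100, 50          # e.g. 100 draws of the 21 spectra
fluxes = rng.random((m, n_bins))          # propagated flux per simulated SNR
cr_flux = fluxes.sum(axis=0) / m * N_SNR  # average SNR flux times N_SNR
```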
As a result of these simulations, we draw conclusions on whether and under which constraints the observed population of gamma-ray emitting SNRs can represent the population responsible for the cosmic ray flux below the knee. With our simulations, we can test both the spectral behavior and the total energy budget of the sources.
This paper is organized as follows: In Section \ref{method:sec}, we describe the approach used to achieve our goal, including a short description of the tool GALPROP for Galactic cosmic ray propagation and a detailed description of the implementation of the SNR spectra with individual spectral normalizations. In the same section, the back-of-the-envelope calculation is tested in order to validate the method. The results of our simulations using the individual SNRs are presented in Section \ref{results:sec} and they are discussed in detail in Section \ref{discussion:sec}.
\section{Method \label{method:sec}}
\subsection{Description of the numerical approach }
In order to estimate the contribution of the gamma-ray emitting SNRs to the observed cosmic ray flux, we use the GALPROP tool \cite{Galprop_Web_Standford,strong_propagation_1998,strong_new_1998}, whose previous applications are summarized in Section \ref{intro:sec}. As our results mostly concern the energy behavior of a constant diffusion coefficient, and in order to keep the number of free parameters to a minimum, we use the GALPROP software in this paper. Additional features as provided by DRAGON and PICARD are not considered at this stage and might become interesting to take into account in future investigations. In the GALPROP program,
the transport equation is solved numerically using the Crank-Nicolson method, including the following terms:
\begin{eqnarray}
\frac{dn}{dt}(\vec{r},t,E)&=&Q(\vec{r},t,E)+\nabla\cdot
\left(D_{xx}\nabla n-\vec{U}\,n\right)+\frac{\partial}{\partial
p}\left[p^{2}\,D_{pp}\, \frac{\partial}{\partial
p}\frac{n}{p^2}\right]\nonumber\\
&-&\frac{\partial}{\partial
p}\left[\left(\frac{dp}{dt}-\frac{p}{3}\,\nabla\cdot\vec{U}\right) n\right]-\frac{n}{\tau_f}-\frac{n}{\tau_d}\,.
\end{eqnarray}
Here, $p$ is the momentum of the particle, $n$ is the particle density per momentum at a given point in space $\vec{r}$, and $Q$ is the cosmic ray source spectrum at the source. The latter can either be represented by discrete sources or by a continuous source distribution. Plain, scalar diffusion and diffusive re-acceleration are modeled with the constant coefficients $D_{xx}$ and $D_{pp}$, respectively. The coefficients are usually tuned to match the observed secondary-to-primary ratio of cosmic rays \citep{strong_propagation_1998}; the exact parameters used in this simulation are discussed below. The velocity $\vec{U}$ is the drift velocity of the particles in case of convection. Fragmentation and radioactive decay happen on time scales of $\tau_f$ and $\tau_d$, respectively.
Focused acceleration as discussed in \cite{schlickeiser_jenko2010} is not considered here. Particle species considered are leptons and hadrons, and
their secondaries through propagation. All parameters depend on the particle species under
consideration. The
output of the program includes hadronic and leptonic spectra, as well as
the gamma-ray emissivity in every grid point. The latter results from
synchrotron radiation, bremsstrahlung, inverse Compton scattering and
hadronic interactions. All details of the program can be found in \cite{Galprop_Web_Standford,strong_propagation_1998,strong_new_1998}. The specific settings and the concrete method applied in this paper are described below.
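To illustrate the Crank-Nicolson scheme mentioned above, the following is a minimal 1D sketch for pure diffusion with a static source term; the grid, $D$, and $Q$ are hypothetical, and GALPROP's actual solver handles the full 3D problem with all transport terms:

```python
import numpy as np

# Minimal Crank-Nicolson integration of dn/dt = D d^2n/dx^2 + Q in 1D with
# absorbing boundaries -- an illustration of the implicit scheme only, not
# GALPROP's implementation.
N, L, D, dt = 101, 10.0, 1.0, 0.01
dx = L / (N - 1)
r = D * dt / (2.0 * dx**2)

# (I - r*Lap) n_new = (I + r*Lap) n_old + dt*Q, Lap = discrete Laplacian
A = (1 + 2*r) * np.eye(N) - r * np.eye(N, k=1) - r * np.eye(N, k=-1)
B = (1 - 2*r) * np.eye(N) + r * np.eye(N, k=1) + r * np.eye(N, k=-1)
A[0, :], A[-1, :] = 0.0, 0.0
A[0, 0] = A[-1, -1] = 1.0               # n = 0 at the halo boundary
B[0, :], B[-1, :] = 0.0, 0.0

Q = np.zeros(N)
Q[N // 2] = 1.0                          # point source in the disk
n = np.zeros(N)
for _ in range(500):                     # evolve towards a steady profile
    n = np.linalg.solve(A, B @ n + dt * Q)
```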
\subsubsection{Cosmic ray spectra from gamma-ray measurements}
In this paper, gamma-ray observations are used to derive proton spectra from individual SNRs, observed in the Milky Way at this instant of time. We use the proton spectra for 21 SNRs as derived in \cite{mandelartz2015}, assuming that the sample is representative for a SNR population at a given time. The source spectra $j_{p}(T)$, with $T$ as the kinetic energy of the particle and with $[j_p]=1/$MeV, are parametrized as follows:
\begin{equation}
j_{\rm p}(T)=a_{p}\left(\frac{T^2+2Tmc^2}{T_{0}^2+2T_{0}mc^2}\right)^{-\alpha_p/2}\ \frac{T+mc^2}{\sqrt{T^2+2Tmc^2}}\ \tanh\left( \frac{T}{T_{\min}} \right)\ \exp\left(-\frac{T}{T_{\max}} \right).
\label{eq:SourceSpec_Mandelartz}
\end{equation}
Here, $a_p$ and $m$ represent the normalization constant and the proton mass, respectively. In \cite{mandelartz2015}, a low-energy cutoff is applied at $T_{\min}=10$~MeV via a hyperbolic tangent in order to have a smooth transition. At high energies, the cutoff at kinetic energy $T_{\max}$ is implemented via an exponential function. As we focus on the CR energy range from $\sim$GeV to $\sim$PeV in this paper, the low-energy cutoff is not applied in our simulations. The reference energy $T_0$ allows for a simpler parametrization of the function and is chosen to be $T_0=1$~TeV. Table \ref{params_source_spectra:tab} summarizes the basic input parameters for the individual source spectra as provided by \cite{mandelartz2015}.
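A sketch of this parametrization, assuming a power law in momentum with the tabulated spectral index $\alpha_p$ (an interpretation on our part; the cited work should be consulted for the exact form), evaluated with the IC443 parameters from Table \ref{params_source_spectra:tab} as an example:

```python
import numpy as np

def j_p(T, a_p, alpha_p, T_max, T_min=10.0, T0=1.0e6, mc2=938.272):
    """Proton source spectrum in 1/MeV; all energies in MeV.
    Assumed form: power law in momentum with index alpha_p (note that
    pc = sqrt(T^2 + 2 T m c^2)), times the dp/dT Jacobian (~1/beta),
    with smooth low- and high-energy cutoffs."""
    p2 = T**2 + 2.0 * T * mc2             # (pc)^2 at kinetic energy T
    p02 = T0**2 + 2.0 * T0 * mc2          # (pc)^2 at the reference T0 = 1 TeV
    return (a_p * (p2 / p02) ** (-alpha_p / 2.0)
            * (T + mc2) / np.sqrt(p2)     # dp/dT Jacobian
            * np.tanh(T / T_min)
            * np.exp(-T / T_max))

# IC443 from Table 1: a_p = 6046.8 x 1e39 / MeV, alpha_p = 2.7, T_max = 1 PeV
T = np.logspace(3, 9, 100)                # 1 GeV .. 1 PeV in MeV
flux = j_p(T, a_p=6046.8e39, alpha_p=2.7, T_max=1.0e9)
```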
\begin{table}
\centering{
\begin{tabular}{c|cccccccc}
\hline\hline
SNR&$d$&$t_{\rm age}$&$\alpha_p$&$a_p$&$T_{\rm max}$&$E_{\rm CR,tot}$&RA&Dec\\
&[kpc]&[kyr]&&[$10^{39}$/MeV]&[GeV]&$10^{47}$~erg&&\\\hline
3C391 & 7.2 & 4.0 & 2.6&44964.2&$10^{6}$ & 3081.2&18h 49m 25s & -00$^{\circ}$ 55' 00" \\
W41 & 4.2 & 100.0 & 2.4&52175.2&$10^{6}$ & 4438.1&18h 34m 45s & -08$^{\circ}$ 48' 00" \\
W33 & 4.0 & 1.2 & 2.1&29694.1&$10^{6}$ &966.0&18h 13m 37s & -17$^{\circ}$ 49' 00" \\
W30 & 4.0 & 25.0 & 2.9&19853.4&$1.4\cdot 10^{4}$ &681.9& 18h 05m 30s & -21$^{\circ}$ 26' 00" \\
W28 & 1.9 & 33.0 & 2.8&9952.4&$10^{6}$ & 1874.6&18h 00m 30s & -23$^{\circ}$ 26' 00" \\
W28C & 1.9 & n/a& 2.5&2331.8&$10^{6}$ & 29.3&17h 58m 56s & -24$^{\circ}$ 03' 49" \\
G349.7+0.2 & 18.3 & 10.0 & 2.4 &332128.6& $10^{6}$&3155.2 & 17h 17m 59s & -37$^{\circ}$ 26' 00" \\
CTB 37B & 13.2 & 1.8 & 2.1&29721.8&$10^{6}$ & 3745.9&17h 13m 55s & -38$^{\circ}$ 11' 00" \\
CTB 37A & 7.9 & 16.0 & 2.6&92854&$10^{6}$ &1241.3& 17h 14m 06s & -38$^{\circ}$ 32' 00" \\
SN 1006 & 2.2 & 1.0 & 2.3&2676.1&$10^{6}$&1227.6& 15h 02m 50s & -41$^{\circ}$ 56' 00" \\
Puppis A & 2.0 & 4.6 & 2.5&4719.8&$10^{6}$ &231.2& 08h 22m 10s & -43$^{\circ}$ 00' 00" \\
Vela Jr & 1.3 & 4.8 & 1.8&16348.6&$4.4\cdot 10^{4}$ &1389.6& 08h 52m 00s & -46$^{\circ}$ 20' 00" \\
MSH 11-62 & 6.2 & 1.3 & 1.7&2869.8&46.0 &4.2& 11h 11m 54s & -60$^{\circ}$ 38' 00" \\
W44 & 3.0 & 10.0 & 2.6&258.4&58.7 &1.1& 18h 56m 00s & 01$^{\circ}$ 22' 00" \\
G40.5-0.5 & 3.4 & 30.0 &2.0&22697.4&$10^{6}$ &71.2& 19h 07m 10s & 06$^{\circ}$ 31' 00" \\
W49B & 10.0 & 1.0 & 2.9&76237.4&$10^{6}$&1323.3& 19h 11m 08s & 09$^{\circ}$ 06' 00" \\
W51C & 6.0 & 26.0 & 2.4&118406.8&$10^{6}$ &7872.5& 19h 23m 50s & 14$^{\circ}$ 06' 00"\\
IC443 & 1.5 & 3.0 & 2.7&6046.8&$10^{6}$ &85.2& 06h 17m 00s & 22$^{\circ}$ 34' 00" \\
Cygnus Loop & 0.6 & 15.0 & 2.9&93.2&$10^{6}$ &251.9& 20h 51m 00s & 30$^{\circ}$ 40' 00" \\
Cas A & 3.5 & 0.3 & 2.3&19276.6&$3.7\cdot 10^{4}$&2317.8& 23h 23m 26s & 58$^{\circ}$ 48' 00" \\
Tycho & 3.5 & 0.4 & 2.3&2678.0&$10^{6}$&1813.6& 00h 25m 18s & 64$^{\circ}$ 09' 00" \\
\end{tabular}
\caption{Basic input parameters of the 21 remnants used here: $d$ is the distance to the SNR, $t_{\rm age}$ gives the SNR's age, $\alpha_p$ is the spectral index of the source spectrum, while $a_p$ gives the normalization constant at the reference kinetic energy $T_0=1$~TeV. Further, $T_{\rm max}$ represents the maximum energy of the spectrum, which is taken to be $1$~PeV in those cases where no clear cutoff could be identified in the data. $E_{\rm CR, tot}=\eta\cdot E_{\rm SNR}$ represents the total energy budget going into cosmic rays. RA/Dec provide right ascension and declination. All parameters are taken from the work of \cite{mandelartz2015}.\label{params_source_spectra:tab}}
}
\end{table}
Figure \ref{snrs:fig} shows the proton spectra {\it at the source}. The individual spectra differ significantly from each other: their total cosmic ray energy budget varies from $10^{47}$~erg to $>10^{50}$~erg. Some spectra show a clear, early cutoff; others are flat and allow for a spectrum that continues up to the cosmic ray knee, i.e.\ up to $10^{15}$~eV in energy. For those sources that do not reveal a cutoff in the gamma-ray data at this point, we assume that the spectra continue up to the knee. This is the most optimistic scenario: it can be expected that in reality a fraction of these sources actually has lower maximum energies, but at this point it cannot be determined which ones and how many. This is why we consider this paper a first, maximum scenario of how much these sources can contribute to the CR spectrum. In the future, when data from HAWC and CTA are available, this analysis can be redone with higher precision.
\begin{figure}
\begin{center}
\includegraphics[width=0.8\textwidth]{fig01.pdf}
\end{center}
\caption{\label{snrs:fig}SNR proton spectra {\it at the source}. These spectra are propagated with GALPROP to estimate the contribution of SNRs to the detected CR spectrum.}
\end{figure}
\subsubsection{Simulation of individual SNRs as sources of the cosmic ray flux at Earth}
The cosmic ray spectrum at Earth represents the average of cosmic ray spectra over the time period that cosmic rays diffuse through the Galaxy, i.e.\ $\tau_{\rm esc}\approx 10^{7}$~years. During this time, $\sim 100$ SNR populations contribute to the average cosmic ray flux. The positions of these SNRs follow the distribution of massive stars in the Galaxy, assuming that the SNRs are active for $10,000$~years or more. Here, we describe how we use the sample of today's population to calculate the diffuse cosmic ray flux averaged over these earlier SNR generations:
\begin{enumerate}
\item We use those $N_{\gamma-{\rm SNR}} =21$ spectra derived in \cite{mandelartz2015} as representative for one SNR population. The sample discussed above only makes up a fraction of the total population, as the sensitivity of gamma-ray telescopes is basically limited to some $8-10$~kpc distance from Earth and does not provide data for typical SNRs from across the Galaxy. If we assume that the supernova rate in the Galaxy is $\dot{n}\approx 0.01-0.02$~yr$^{-1}$, and that each SNR actively accelerates cosmic rays during the Sedov-Taylor phase, lasting $t_{\rm SNR}\approx 10,000$~yrs, there are about $N_{\rm SNR}\approx 100-200$~active cosmic ray emitters at a given time. This is consistent with the number of shell-type SNRs that are well-identified at radio energies and catalogued in the Green catalog \cite{green2014}, where the number is close to 300. However, it should be kept in mind that radio emission is expected to continue even after the Sedov-Taylor phase, up to $10^{5}$ years or longer. So, while a larger number of radio SNRs than documented in Green's catalog is expected to be present in the Galaxy, only the stronger radio emitters are expected to be cosmic ray emitters, and the detected number of strong radio emitters therefore provides a rough cross-check on the number of active cosmic ray emitters.
The sample of SNRs we use here therefore provides us with a fraction
\begin{equation}
1/\alpha=:\frac{N_{\gamma-{\rm SNR}}}{N_{\rm SNR}}\approx 1/8
\end{equation}
of all SNRs that contribute to the cosmic ray spectrum.
Once we obtain our final result for the cosmic ray proton spectrum at Earth from those 21 SNRs, we therefore have to weight the normalization with a factor of $\alpha$ in order to calculate the total energy output of all SNRs in the Galaxy.
\item In order to take into account the fact that the cosmic ray spectrum observed at Earth today represents the average of SNRs active for the past $10^{7}$~years, we place those spectra with individual normalization and spectral index at random positions in the Galaxy. It is not known where in the past $10^{7}$~years SNRs have been active, but the distribution of supernova explosions should follow the mass distribution in the Galaxy. We therefore use this distribution to weight the probability of an SNR being placed on the grid of the Galaxy in the GALPROP simulation. Specifically, we use the distribution function of \cite{1998ApJ...504..761C} implemented in GALPROP to perform the weighting. This description is not optimal and does not fully represent the true distribution of SNRs \cite{2009BASI...37...45G}; here, it is considered a first-order approximation, for which its precision suffices. Future work will include more precise distributions, as for instance presented in \cite{2009BASI...37...45G}. As this procedure provides a position $(x,y)$ in $\mathbb{R}^2$, but GALPROP as a numerical tool parametrizes the Galaxy on a grid with a certain spacing $(\Delta x,\,\Delta y)$, we move each source, drawn at a random distribution-weighted position, to the closest grid point. As the grid size is typically significantly larger than the size of a single SNR, i.e.\ $\Delta x,\,\Delta y \gg 10$~pc, this implies that we simulate SNRs as point-like sources.
\item
For statistical reasons we simulate a large number of supernova remnants, $m$, that blow up simultaneously and are active for 10,000 years. Each of the SNRs randomly receives one of the energy spectra drawn from the set of 21 SNRs. The cosmic ray flux generated by these $m$ SNRs is then weighted by the average number of SNRs considered to be active at a given time: as the total number of simulated SNRs we choose $m = 10,000$ or $m = 20,000$ (depending on the specific simulation) and we will show in the result section that this provides us with reasonable statistics. For the true, average number of SNRs active in the Galaxy at one time, we use $N_{\mathrm{SNR}} = 100$. As explained earlier, the detected number of SNRs in the radio is close to 300, but it is expected that only the brightest radio sources contribute significantly to the total flux of cosmic rays, as these represent the younger SNRs that are able to accelerate to high energies. There is obviously an uncertainty attached to this number, which we consider to be a factor of $\sim 2$. Hence, in order to have a correct scaling of the normalization, the properly normalized cosmic ray flux is obtained by reweighting the flux that is simulated with $m$ SNRs, $\Phi_{m}$, by the fraction $N_{\rm SNR}/m$, i.e.\ as
\begin{equation}
\Phi_{\mathrm{res}} = \frac{N_{\rm SNR}}{m}\times\Phi_{m} = \frac{100}{10,000}\times\Phi_{m}\,.
\label{phi:equ}
\end{equation}
\end{enumerate}
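The bookkeeping of the three steps above can be sketched as follows. All names are hypothetical stand-ins for the GALPROP internals, and the uniform position draw replaces the actual distribution-weighted draw; only the quoted numbers (supernova rate, Sedov-Taylor lifetime, grid spacing, $N_{\rm SNR}=100$) come from the text.

```python
import random

random.seed(42)

RATE_PER_YR = 0.01        # supernova rate (lower value from the text)
T_SEDOV_YR = 1.0e4        # active acceleration time per SNR in years
N_SNR_TRUE = 100          # average number of active SNRs at one time
M_SIMULATED = 10_000      # number of SNRs simulated for statistics
GRID_KPC = 1.0            # GALPROP grid spacing in kpc
HALF_SIZE = 20.0          # large-Galaxy half extent in kpc

# Expected number of simultaneously active SNRs (rate times lifetime):
n_active = RATE_PER_YR * T_SEDOV_YR     # ~100

def snap_to_grid(x, spacing=GRID_KPC):
    """Move a continuous coordinate to the closest grid point,
    as done when placing point-like sources on the GALPROP grid."""
    return round(x / spacing) * spacing

def place_snr():
    """Draw a random position in the Galactic plane. Uniform here for
    simplicity; the real simulation weights by the source distribution."""
    x = random.uniform(-HALF_SIZE, HALF_SIZE)
    y = random.uniform(-HALF_SIZE, HALF_SIZE)
    return snap_to_grid(x), snap_to_grid(y)

# Each simulated SNR receives one of the 21 measured spectra at random:
spectrum_ids = [random.randrange(21) for _ in range(M_SIMULATED)]
positions = [place_snr() for _ in range(M_SIMULATED)]

# Reweighting of the summed flux Phi_m to the true number of SNRs:
weight = N_SNR_TRUE / M_SIMULATED       # 100 / 10,000 = 0.01
print(n_active, weight)
```

In the full simulation the summed flux of the $m$ sources is multiplied by this weight, which is exactly the factor in the equation above.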
This procedure is straightforward; however, a few things should be kept in mind: The sample of 21 SNRs used to derive individual CR spectra at this point still only represents about 1/5th to 1/10th of the total SNR population. It is not entirely clear yet if this sample is fully representative of the population of cosmic ray emitting SNRs. Gamma-ray observations are done with good energy and spatial precision in the GeV (Fermi) and TeV (H.E.S.S./MAGIC/VERITAS) ranges. This corresponds to cosmic ray energies from around a few GeV up to approximately 100 TeV and thus does not include the knee region. There may exist flat cosmic ray sources that are not very prominent at sub-TeV gamma-ray energies, but which, due to their flat spectrum, contribute to the total cosmic ray spectrum. Thus, today's sample may not be statistically complete. Future measurements with HAWC \cite{hawc} and CTA \cite{cta} will help to provide a complete sample, as with these next-generation telescopes, the sensitivity will be enhanced to give a full view of the entire Galaxy in gamma-rays, and the energy range is expected to be increased up to values of $100-300$~TeV. The analysis of the spectrum done here should therefore be considered as a starting point: in a few years from now, this procedure can be repeated, with higher statistical significance and more knowledge about SNRs as potential sources of Galactic CRs.
\subsubsection{Normalization of individual SNR spectra in GALPROP}
First of all, it should be noted that the CR normalization in \textit{standard} GALPROP applications is usually fixed by globally scaling the calculated cosmic ray density such that the observed CR flux at Earth is met. In the approach presented in this paper, the idea is to fix the normalization by the rate of particle injection of the individual SNRs into the Galaxy.
Hence, from the technical viewpoint, a key ingredient is to rewrite the source spectrum $j_{\rm p}(T)$ in terms of the \textit{internal} GALPROP units. A pragmatic approach to determine the needed conversion factor is to compare the calculation of the luminosity $L$ in GALPROP with the corresponding integral expression on the basis of $j_{\rm p}(T)$:
\begin{equation}
L = \xi R^2_{\rm SNR}c \int_{10\,\rm{MeV}}^{10^{9}\,\rm{MeV}}\ dT\ \beta\ T\ j_{\rm p}(T)/V_{\rm SNR}\,.
\label{eq:LuminosityGeneral1}
\end{equation}
Here, $V_{\rm SNR}=4/3\,\pi\,R_{\rm SNR}^{3}$ is the volume of the SNR with radius $R_{\rm SNR}$. Note that $\xi =1/2$ corresponds to the case where the CR particles stream out of the SNR in radial direction. In contrast to that, $\xi = 1$ arises from an averaging assuming that the CRs move in random directions inside the SNRs. The aforementioned comparison leads to the following relation between the source spectrum $j_{\rm p}(T)$ and the initial source function $q_1(p(T))$ as implemented in GALPROP. Further details can be found in the GALPROP documentation~\cite{strong_galprop_2011,Galprop_Web_Standford} or by directly checking the source code file cr\_luminosity.cc.
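As an illustration of the luminosity integral above, the energy integral $\int dT\,\beta\,T\,j_{\rm p}(T)$ can be evaluated numerically. The sketch below uses a power law with exponential cutoff for $j_{\rm p}(T)$ with illustrative parameter values (not those of a specific SNR) and omits the constant prefactor $\xi R_{\rm SNR}^2 c / V_{\rm SNR}$:

```python
import math

M_P = 0.938               # proton rest energy in GeV
T0 = 1.0e3                # reference kinetic energy, 1 TeV in GeV

def beta(T):
    """Proton velocity in units of c for kinetic energy T (GeV)."""
    gamma = 1.0 + T / M_P
    return math.sqrt(1.0 - 1.0 / gamma**2)

def j_p(T, a_p=1.0, alpha_p=2.3, T_max=1.0e6):
    """Source spectrum: power law with exponential cutoff (illustrative
    parameters; normalization a_p in arbitrary units)."""
    return a_p * (T / T0) ** (-alpha_p) * math.exp(-T / T_max)

def integral_beta_T_j(T_lo, T_hi, n=2000):
    """Trapezoidal integral of beta(T) * T * j_p(T) on a log grid."""
    total = 0.0
    lg_lo, lg_hi = math.log(T_lo), math.log(T_hi)
    for i in range(n):
        a = math.exp(lg_lo + (lg_hi - lg_lo) * i / n)
        b = math.exp(lg_lo + (lg_hi - lg_lo) * (i + 1) / n)
        total += 0.5 * (beta(a) * a * j_p(a) + beta(b) * b * j_p(b)) * (b - a)
    return total

# For a spectrum steeper than E^-2 the energy budget is dominated by
# the lower integration threshold (10 MeV in the text):
low = integral_beta_T_j(0.01, 100.0)     # 10 MeV .. 100 GeV
high = integral_beta_T_j(100.0, 1.0e6)   # 100 GeV .. 1 PeV
print(low > high)
```

The comparison of the two partial integrals confirms the statement made further below that, for indices steeper than $2$, the total energy budget is dominated by the lower integration threshold.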
In this case, the GALPROP source function expressed in terms of $j_{\rm p}(T)$ becomes
\begin{equation}
q_1(p(T))= \alpha \frac{c^{2} R_{\rm SNR}^{2} \beta^{2}}{4\pi V_{\rm grid}}\ j_{\rm p}(T).
\label{eq:ConversionISF}
\end{equation}
In an alternative approach, the luminosity can be derived from the total energy $E_{\rm tot}$ of protons in the SNR
via $L=E_{\rm tot}/\tau$ with a time-scale $\tau$, representing the distribution of the total energy over the total lifetime of the remnant,
\begin{equation}
L = \frac{1}{\tau} \int_{10\,\rm{MeV}}^{10^{9}\,\rm{MeV}}\ dT\ T\ j_{\rm p}(T).
\label{eq:LuminosityGeneral2}
\end{equation}
Using this expression for the luminosity, one finds
\begin{equation}
q'_1(p(T))= \frac{\beta c R_{\rm SNR}^{3}}{3 V_{\rm grid} \tau}\ j_{\rm p}(T).
\label{eq:ConversionISF_alter}
\end{equation}
Assuming that $E_{\rm tot}$ is the energy of the SNR converted into protons and $\tau$ is the lifetime of the SNR, $L=E_{\rm tot}/\tau$ can be interpreted as the average luminosity in CRs.
It is a typical approach in gamma-ray astronomy to derive the total cosmic ray energy budget in order to estimate the possible SNR contribution to the total cosmic ray budget. A back-of-the-envelope calculation predicts that the observed cosmic ray luminosity of the Galaxy (about $3\cdot 10^{40}$~erg/s) can be reproduced if, on average, $10^{50}$~erg per SNR go into cosmic rays at a supernova rate of $(1/50-1/100)$~yr$^{-1}$. Here, it is assumed that cosmic rays are injected into the interaction region continuously at a constant rate during the lifetime $\tau$, with the total energy going into cosmic rays conserved over time. It is clear that this is a simplifying assumption, as it is known that at least the energy spectrum is changing with time, in particular concerning the reduction of the maximum energy, see e.g.\ \cite{cox1972}. For energy spectra steeper than $E^{-2}$, the total energy budget is dominated by the lower integration threshold, so effects from this temporal development should be relatively small. It is also specified in \cite{cox1972} that the total energy of the SNR is decreasing with time due to cooling effects. This would mean that, if we assume a constant fraction of the SNR energy going into cosmic rays at a given time, the actual average luminosity of cosmic rays would be underestimated, in particular for old remnants. It is not clear, however, if the fraction of energy going into cosmic rays is constant over time or if it actually decreases with the available energy budget. Thus, we judge that in first-order approximation, it seems reasonable to estimate the total energy of the remnant from the given value, keeping in mind the above discussed uncertainties. In this paper, we adopt this normalization scheme, i.e.\ the one following the total energy argument.
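The back-of-the-envelope estimate quoted above can be checked directly; the numbers are those from the text:

```python
# Average Galactic CR luminosity from an energy of 1e50 erg in cosmic
# rays per SNR at a supernova rate of (1/50 - 1/100) per year.

YEAR_S = 3.156e7                  # seconds per year

def cr_luminosity(e_cr_erg, rate_per_yr):
    """Average Galactic CR luminosity in erg/s."""
    return e_cr_erg * rate_per_yr / YEAR_S

L_high = cr_luminosity(1e50, 1.0 / 50.0)    # ~6e40 erg/s
L_low = cr_luminosity(1e50, 1.0 / 100.0)    # ~3e40 erg/s
print(f"{L_low:.1e} - {L_high:.1e} erg/s")
```

The lower end of this range reproduces the quoted observed CR luminosity of about $3\cdot 10^{40}$~erg/s.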
This scheme normalizes the individual SNRs with respect to each other as well as each SNR individually, so that the final result will be both a realistic {\it spectral energy behavior} as well as {\it normalization} of the spectrum.
\subsubsection{Including CR Nuclei}
Although the source spectra taken from~\cite{mandelartz2015} are only provided for CR protons, it is possible to include CR nuclei in GALPROP simulations (see chapter 5.5 in ~\cite{strong_galprop_2011,Galprop_Web_Standford}). To do so, the initial source function of nuclei $q_{A}(p_{A})$ with mass number $A$ and momentum $p_{A}$ is related to $q_{1}(p_{1})$ by the relative abundance $X$ according to
\begin{equation}
X=\frac{Aq_{A}(p_{A})}{q_{1}(p_{1})}.
\label{eq:RelAbundanceAndNuclei}
\end{equation}
In this context two remarks have to be made:
\begin{enumerate}
\item Due to the high-energy cutoff in equation (\ref{eq:SourceSpec_Mandelartz}), $X$ is energy-dependent in what follows.
\item When CR nuclei are injected, the total hadronic energy of the SNRs is artificially increased. The energy budget can approximately be corrected following \cite{mandelartz2015} by down-scaling the proton normalization $a_{p}$ in equation (\ref{eq:SourceSpec_Mandelartz}) appropriately.
\end{enumerate}
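Equation (\ref{eq:RelAbundanceAndNuclei}) can be rearranged for the nuclear source function, $q_A(p_A) = X\,q_1(p_1)/A$. A minimal sketch, with a hypothetical abundance value rather than the one actually used in GALPROP:

```python
# Nuclear source function from the proton one via the relative
# abundance X (Eq. RelAbundanceAndNuclei, rearranged).

def q_nucleus(q_proton, X, A):
    """Source function of a nucleus with mass number A, given the
    proton source function q_proton and relative abundance X."""
    return X * q_proton / A

# Example: helium (A = 4) with a hypothetical relative abundance X = 0.1
q_p = 1.0e-3          # proton source function at some momentum (arb. units)
q_he = q_nucleus(q_p, X=0.1, A=4)
print(q_he)
```

Because of the energy dependence of $X$ noted in the first remark, this scaling has to be evaluated per momentum bin rather than with a single global factor.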
Here, simulations are performed for all nuclei, but the resulting energy spectra are only discussed for protons. In the future, the heavy nuclei spectra will be discussed as well in order to investigate other questions connected to cosmic ray observations, sources and transport.
Photohadronic effects as well as photo-spallation are typically negligible at the given length scales and electromagnetic fields in the Galaxy.
\subsubsection{GALPROP settings}
GALPROP provides the user with a large number of parameters that can be changed to suit the user's needs. Most of these were left unchanged with respect to version 54.1.984. In this section, we summarize what has been changed and which parameters were varied. All changes to the galdef-file are summarized in Table \ref{default}. The main changes are discussed below:
\begin{enumerate}
\item {\bf The normalization scheme}\\
In order to investigate normalization and spectral behavior, the original GALPROP code had to be modified. In its current version, we are able to provide individual SNRs with their own parameters as measured or obtained by specific analyses, see \cite{mandelartz2015}. The sources are injected with their spatial parameters, such as their distance to Earth and their actual extension. Each SNR is provided with a set of spectrum normalization, spectral index and maximum energy, provided by the 21 individual sets given in \cite{mandelartz2015}. Here, for each SNR, the position is injected randomly, weighted by a source distribution function. As GALPROP propagates particles on a grid, each randomly drawn position is internally set to the closest grid point.
\item {\bf The Galaxy size}\\
The Galaxy is treated as a three-dimensional object and the grid points are arranged with a distance of $d_{\mathrm{grid}} = 1$~kpc from each other. Thus, the SNRs can effectively be seen as point-like sources. In our analysis, we tested two Galaxy sizes. In the small-Galaxy configuration, the horizontal plane ranges from $-10 \mathrm{\ kpc} \leq (x,y) \leq 10$~kpc, while the large Galaxy doubles the small Galaxy in each direction. We use the small Galaxy size only for first tests, as simulations are quicker in this configuration. We show only the first graph of our results with both Galaxy sizes, as they deliver comparable results in all simulations. The vertical component remains constant for both configurations and extends over a range of $-4 \mathrm{\ kpc} \leq z \leq 4$~kpc.
\item {\bf The diffusion coefficient}\\
Furthermore, the diffusion coefficient has been varied in our simulations in order to analyze the energy budget of the resulting cosmic ray spectrum. We compare the Kolmogorov-type diffusion with $D_{xx} \propto E^{\delta}$ and $\delta = 0.33$ to a steeper diffusion with $\delta = 0.5$.\\
\item {\bf The individual supernova remnants}\\
The analyses cover two settings of SNR types. The first can rather be seen as the standard SNR, as it has a fixed total energy $E_{\rm CR, tot}=10^{50}$~erg and also a fixed spectral index, where three cases are investigated: $\alpha_p=2.0,\,2.3,\,2.5$. This is done in order to test if the back-of-the-envelope calculation presented in the introduction, and often used as an argument that SNRs can be the sources of Galactic cosmic rays, actually holds, even in this more advanced calculation.
The second type takes into account the individual SNR parameters as derived in \cite{mandelartz2015}. It is the aim of this paper to investigate if these gamma-ray emitting SNRs can be the sources of Galactic cosmic rays. In order to quantify how reliable the results are with respect to the primary cosmic ray spectra as derived in \cite{mandelartz2015}, the primary cosmic ray spectral indices of all SNRs have been allowed to vary within a $1\sigma$ uncertainty. This way, for the final result, a 1$\sigma$ error band can be drawn and the uncertainty of the spectrum can be estimated.
\end{enumerate}
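The two diffusion settings of the last item differ only in the exponent $\delta$. A short sketch (with an arbitrary normalization $D_0$ and reference energy, both of which drop out of the ratio) illustrates how much more strongly the $\delta = 0.5$ case enhances escape at high energies:

```python
# Energy-dependent diffusion coefficient D_xx ~ E^delta, comparing the
# Kolmogorov case (delta = 0.33) with the steeper delta = 0.5 case.
# D0 and E_ref are illustrative; only the energy scaling matters here.

def diffusion_coefficient(E_GeV, delta, D0=1.0, E_ref=1.0):
    """Diffusion coefficient with power-law energy dependence."""
    return D0 * (E_GeV / E_ref) ** delta

# Increase of D between 10 GeV and 100 TeV for the two settings;
# a larger increase means faster escape, hence a steeper spectrum:
ratio_kol = diffusion_coefficient(1e5, 0.33) / diffusion_coefficient(10.0, 0.33)
ratio_steep = diffusion_coefficient(1e5, 0.50) / diffusion_coefficient(10.0, 0.50)
print(ratio_kol, ratio_steep)
```

Over four decades in energy, the steeper setting increases $D$ by a factor of $100$ versus roughly $21$ for Kolmogorov, which is why the $\delta = 0.5$ runs discussed later produce a spectrum that is too steep at high energies.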
\input{galdef_table}
\section{Results and Conclusions \label{results:sec}}
In this section, we show the results from the simulation as described above. We focus on the energy range from 10 GeV upward, as gamma-ray measurements have precise results between $\approx 1$~GeV and $10$~TeV, corresponding to a cosmic ray energy of about $10$~GeV to $100$~TeV. Those spectra that do not show a cutoff up to $100$~TeV cosmic ray energy are extrapolated with an assumed cutoff in the knee region.
\subsection{Validation of the method \label{verification:sec}}
In order to validate the approach chosen here, we use the full GALPROP simulation to test two things: first of all, we simulate individual spectra for $1,000,\,10,000,\,20,000$ and $30,000$ SNRs in order to cross-check the statistical convergence, i.e.\ how many SNRs have to be simulated in order to receive a statistically relevant result. Figure \ref{test:fig} shows the spectra for the different numbers (top) and the ratio with respect to the simulation of the highest number ($30,000$ SNRs, bottom). All spectra are normalized to the true number of SNRs in the Galaxy at one time, see Equ.\ (\ref{phi:equ}). Thus, in the ideal case, they should give the same result, and higher numbers should result in a more precise calculation. This becomes evident when looking at the flux ratio, where deviations with respect to the largest number of simulated SNRs ($30,000$) become smaller. In the case of $20,000$ SNRs, deviations are on the order of 1\% and we use this number for all following calculations.
\begin{figure}
\begin{center}
\includegraphics[width=\textwidth]{fig02.pdf}
\end{center}
\caption{\label{test:fig} {Simulation of individual SNRs, for $1,000,\,10,000,\,20,000$ and $30,000$ SNRs, always normalized to the true number, i.e.\ 100.}
}
\end{figure}
Secondly, we test the standard approach, i.e.\ all sources have the same spectral behavior, luminosity and maximum energy, as presented in Section \ref{intro:sec}. Here, each simulated SNR receives the same spectrum, with a normalization corresponding to a total cosmic ray luminosity of $2\cdot 10^{41}$~erg/s (compare Equ.\ \ref{cr_lumi:equ}). Concretely, the spectral behavior is assumed to be a power-law with a spectral index $\gamma$ at the source and a maximum energy $E_{\max} = 10^{15}$~eV:
\begin{equation}
j_{\rm standard} = A\cdot \left(\frac{E}{E_0}\right)^{-\gamma}\cdot \exp\left(-\frac{E}{E_{\max}}\right)\,.
\end{equation}
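A minimal numerical sketch of this injection spectrum, with illustrative values for the normalization $A$ and reference energy $E_0$ (neither is specified above):

```python
import math

def j_standard(E, A=1.0, E0=1.0e9, gamma=2.3, E_max=1.0e15):
    """Generic injection spectrum: power law with exponential cutoff.
    Energies in eV; A and E0 are illustrative placeholders."""
    return A * (E / E0) ** (-gamma) * math.exp(-E / E_max)

# At the knee, the exponential cutoff suppresses the pure power law
# by a factor of exp(-1):
no_cut = (1.0e15 / 1.0e9) ** (-2.3)
with_cut = j_standard(1.0e15)
print(with_cut / no_cut)
```

Below the cutoff the spectrum is a clean power law, so the three tested indices $\gamma = 2.0,\,2.3,\,2.5$ differ only through the exponent.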
While the standard approach in textbooks does not rely on the spectral index, as it only concerns the total energy budget, we test three different indices at the sources, i.e.\ $\gamma=2.0,\, 2.3,\, 2.5$. The diffusion coefficient is assumed to follow an $E^{1/3}$-behavior, representing a Kolmogorov spectrum. In principle, the energy behavior could be as strong as $E^{0.6}$. But as a change in the primary cosmic ray spectrum towards steeper spectra has the same effect as changing the diffusion coefficient to a stronger energy behavior, we refrain from changing the diffusion coefficient in this test and only change the primary spectrum.
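This degeneracy can be made explicit in the simple leaky-box picture, where the propagated flux scales as $E^{-(\gamma+\delta)}$; note that this is a simplification of the full GALPROP transport:

```python
# Leaky-box approximation: the observed spectral index is the sum of
# the source index gamma and the diffusion exponent delta, so steeper
# sources and stronger diffusion are degenerate.

def propagated_index(gamma, delta):
    """Spectral index of the flux at Earth in the leaky-box picture."""
    return gamma + delta

# An E^-2.3 source with Kolmogorov diffusion and a flatter E^-2.13
# source with delta = 0.5 give the same observed slope:
idx_a = propagated_index(2.3, 0.33)
idx_b = propagated_index(2.13, 0.5)
print(idx_a, idx_b)
```

Both combinations yield an observed index of about $2.63$, which is why only one of the two parameters is varied in this test.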
Figure \ref{standard_no_snrs:fig} shows how the spectra change for different numbers of simulated SNRs, i.e.\ $1,000,\,10,000,\,20,000$ and $30,000$, for the case of $E^{-2.3}$. The spectra converge for larger numbers of SNRs, and the differences between the simulation of $10,000$ and $20,000$ SNRs are below 1\%. We therefore use $10,000$ SNRs for this simulation in the following.
\begin{figure}
\begin{center}
\includegraphics[width=\textwidth]{fig03.pdf}
\end{center}
\caption{\label{standard_no_snrs:fig} The standard approach with a generic $E^{-2.3}$ spectrum and the same normalization for every SNR. Here, the total number of 10,000 simulated supernova events is sufficient as the statistical error reduces to below $1\%$ between a simulation of $10,000$ and $30,000$ SNRs.}
\end{figure}
Figure \ref{standard:fig} shows the spectra for the large (upper line in each set) and small (lower line) Galaxy in the standard approach, where all sources are treated the same. Firstly, it can be noted that the differences of the spectra for the small and the large Galaxy are rather small and can be considered as negligible given the larger uncertainties of the input parameters. We therefore only show results for the large Galaxy in the following sections. Secondly, it is shown here that the cosmic ray flux is generally underestimated by a factor of a few (approximately $2-5$). Only a very flat injection spectrum, i.e.\ $E^{-2}$, in combination with Kolmogorov-type diffusion results in an overestimation at the highest energies. This combination is, however, the flattest spectrum that is generally possible and it actually underestimates the energy budget at the lowest energies by a large amount. This is therefore not a realistic scenario. For steeper spectra (or a larger energy dependence in diffusion), a deviation from data by a factor of $2-4$ is certainly within the errors of this calculation: parameters like the supernova rate and the average energy budget of the SNRs are highly uncertain. Thus, we consider this result as compatible with the back-of-the-envelope calculation concerning the energy budget. The spectral behavior, on the other hand, does not follow a single power-law. For low energies, i.e.\ below a few TeV, it matches an $E^{-2.5}$ primary spectrum (solid line). At higher energies, the spectrum is better represented by an $E^{-2.3}$ spectrum (dashed line). For stronger diffusion, i.e.\ $E^{0.6}$, the primary spectra would have to be flatter than this. Such a behavior at Earth can either be reproduced by a broken power-law at the source, see e.g.\ \cite{biermann_prl2009,biermann_apj2010,biermann_apjl2010}, or a broken power-law in the diffusion.
While the spectral behavior is debatable here, it becomes clear that with our numerical approach, we can reproduce the standard argument of SNRs being able to reproduce the cosmic ray energy budget. In the following, we will use realistic individual SNR spectra to test if those spectra that are observed today can represent the class of source that dominate the cosmic ray spectrum below the knee.
\begin{figure}
\begin{center}
\includegraphics[width=\textwidth]{fig04.pdf}
\end{center}
\caption{\label{standard:fig} {This figure shows the comparison of the cosmic ray flux between different generic spectral indices, $E^{-2.0}$, $E^{-2.3}$ and $E^{-2.5}$, using the standard approach. For each spectral index, the lower line shows the small Galaxy, while the upper line represents the large Galaxy configuration.}
}
\end{figure}
\subsection{Results simulating individual SNRs}
In this section, we use those 21 SNRs with gamma-ray spectra that can be explained by hadronic interactions, as described in section \ref{method:sec}. Figure \ref{individual_1sigma_kolmogorov:fig} shows the results for a large galaxy with a Kolmogorov-type diffusion coefficient, i.e.\ $D\propto E^{0.33}$. Error bands show the 1$\sigma$ interval representing the uncertainty of the spectral index derived from the gamma-ray spectra. The cosmic ray data lie within these uncertainties and can be explained by these individual SNR spectra. Generally, the spectrum is somewhat steeper than what is observed on large scales. It is obvious, however, that uncertainties are still large and that future data with higher precision will have to confirm this result. Figure \ref{individual_1sigma:fig} shows the same simulation, but with a diffusion coefficient $D\propto E^{0.5}$. Here, low-energy data, i.e.\ below 10~TeV are well explained, but the high-energy tail cannot be reproduced. Figure \ref{individual_diffusions:fig} shows the result for the two different diffusion coefficients and the large galaxy setting to summarize the main results. These first results indicate that a Kolmogorov-like diffusion coefficient is well-suited to explain the cosmic ray flux, while a rather strong energy dependence of $E^{0.5}$ already leads to a spectrum that is too steep.
\begin{figure}
\begin{center}
\includegraphics[width=\textwidth]{fig05.pdf}
\end{center}
\caption{\label{individual_1sigma_kolmogorov:fig}The CR flux in the large galaxy for simulated 20,000 SNRs, with the individual injection parameters taken from \cite{mandelartz2015}, considering a 1$\sigma$ error in the spectral index. The diffusion coefficient has been set to a Kolmogorov-type diffusion, i.e. $D\propto E^{0.33}$. Experimental data taken from CREAM \cite{2011ApJ...728..122Y}, PAMELA \cite{adriani2011} and AMS-01 \cite{2010arXiv1008.5051T}.}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=\textwidth]{fig06.pdf}
\end{center}
\caption{\label{individual_1sigma:fig} The large galaxy with a 1$\sigma$ error in the individual spectral index. The diffusion coefficient has been chosen to be steeper than a Kolmogorov-type diffusion. Here, we use $D\propto E^{0.5}$. Experimental data taken from CREAM \cite{2011ApJ...728..122Y}, PAMELA \cite{2011Sci...332...69A} and AMS-01 \cite{2010arXiv1008.5051T}.
}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=\textwidth]{fig07.pdf}
\end{center}
\caption{\label{individual_diffusions:fig} The large galaxy examined with the Kolmogorov-type diffusion (solid, $\delta = 0.33$) and a steeper diffusion (dashed, $\delta=0.50$). Both simulations are shown along with the 1$\sigma$ error band of the spectral index. Experimental data taken from CREAM \cite{2011ApJ...728..122Y}, PAMELA \cite{2011Sci...332...69A} and AMS-01 \cite{2010arXiv1008.5051T}.}
\end{figure}
\subsection{Discussion of uncertainties}
The main uncertainties in this calculation are the following:
\begin{itemize}
\item We assume the number of sources actually contributing to the spectra at a given point of time to be $\sim 100$. This number could be larger, as close to $300$ radio emitting SNRs have been detected in the Milky Way so far. However, we use this low value as it is expected that only the brightest radio SNRs are able to accelerate particles to extreme energies.
\item We use all gamma-ray detected SNRs that have the potential to be of hadronic origin. \cite{mandelartz2015} found that 21 out of 24 SNR gamma-ray spectra can be fitted hadronically, but it is not certain that all of these are dominated by $\pi^{0}$ decays.
\item The gamma-ray spectra are given between some GeV and $10$~TeV in energy, corresponding to cosmic ray energies of $10$~GeV to $100$~TeV. As the cosmic ray knee lies at about $1$~PeV, those spectra that do not show a cutoff up to $100$~TeV have to be extrapolated. The low-energy part, i.e.\ below $10$~GeV, cannot be described properly and is not subject of this investigation.
\item The current simulation approach predicts the average CR observable, e.g.\ $\langle dF/dT\rangle$, but not the proper corresponding variance, e.g.\ ${\rm Var}(dF/dT)$. However, the variance or alternative statistical measures are needed to fully quantify whether the aforementioned discrepancy of $\langle dF/dT\rangle$ is statistically significant.
As more than those 21 known SNRs are expected to contribute to the CR flux, the variance calculated here would presumably be an upper limit for the true variance. In future investigations, we plan to remove this uncertainty by changing the method. This will be described in detail in Section \ref{discussion:sec}.
\end{itemize}
In the future, with HAWC and CTA data available, it will be possible to draw even stronger conclusions, in particular concerning the contribution up to the knee. At this point, our result rather has to be considered as an upper limit: all physics uncertainties have been treated in a way that the maximum possible result is obtained. In particular, all SNRs that can possibly be fit hadronically have been used and all spectra without a detected cutoff have been extrapolated with an assumed cutoff at the knee, i.e.\ at $1$~PeV. Thus, the conclusions of this paper refer to an upper limit rather than a precise flux estimate.
\subsection{Conclusions}
We simulate the propagation of cosmic rays from individual SNRs in the Galaxy. We assume that all SNRs have spectra that are represented by a sample of 21 SNRs with measured gamma-ray spectra that can be fit hadronically. The aim of this paper is to start using gamma-ray data to investigate if SNRs can be the sources of cosmic rays below the knee. The uncertainties described above do not allow for a detailed comparison of the measured spectrum with simulations, but they give first evidence of the possible contribution by gamma-ray emitting SNRs.
The cosmic ray spectrum is within errors well-described by a Kolmogorov-type diffusion. However, if only a small fraction of the gamma-ray detections actually has a hadronic origin, it will be difficult to explain the entire spectrum using SNRs. The same is true if several of the SNRs that do not yet show a cutoff in their spectrum actually have an early cutoff. In that case, SNRs would fail to explain the high-energy part of the spectrum.
Stronger diffusion, i.e.\ $\delta = 0.5$ fails to explain the high-energy component of the spectrum as the total spectrum becomes too steep. So, even in the most optimistic scenario, it becomes difficult to explain the detected spectrum by SNRs.
The fact that only weak diffusion can describe the detected cosmic ray spectrum even in the optimistic scenario has further consequences. It implies that there is not much room for convection: in our simulation, we neglect any convective effects. Including a convective outflow in the calculations would remove parts of the energy budget coming from the sources, as some of the energy is carried out of the Galaxy. That means the simulated spectrum at Earth is expected to be even lower when convection effects are included. These results therefore indicate that convection cannot play a major role in the transport of cosmic rays. Further investigations are necessary to confirm these first results.